A Distributed Approach to System-Level Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil
2012-01-01
Prognostics, which deals with predicting remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.
Parallel group independent component analysis for massive fMRI data sets.
Chen, Shaojie; Huang, Lei; Qiu, Huitong; Nebel, Mary Beth; Mostofsky, Stewart H; Pekar, James J; Lindquist, Martin A; Eloyan, Ani; Caffo, Brian S
2017-01-01
Independent component analysis (ICA) is widely used in the field of functional neuroimaging to decompose data into spatio-temporal patterns of co-activation. In particular, ICA has found wide usage in the analysis of resting state fMRI (rs-fMRI) data. Recently, a number of large-scale data sets have become publicly available that consist of rs-fMRI scans from thousands of subjects. As a result, efficient ICA algorithms that scale well to the increased number of subjects are required. To address this problem, we propose a two-stage likelihood-based algorithm for performing group ICA, which we denote Parallel Group Independent Component Analysis (PGICA). By utilizing the sequential nature of the algorithm and parallel computing techniques, we are able to efficiently analyze data sets from large numbers of subjects. We illustrate the efficacy of PGICA, which has been implemented in R and is freely available through the Comprehensive R Archive Network, through simulation studies and application to rs-fMRI data from two large multi-subject data sets, consisting of 301 and 779 subjects respectively.
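The PGICA package itself is implemented in R and uses a likelihood-based estimator; the generic two-stage idea (subject-level dimension reduction computed in parallel, followed by a group-level ICA on the stacked reductions) can nevertheless be sketched as below. This sketch substitutes scikit-learn's FastICA and joblib for the paper's algorithm, and all shapes, component counts, and function names are illustrative assumptions.

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.decomposition import PCA, FastICA

def reduce_subject(ts, k=30):
    """ts: (n_timepoints, n_voxels) rs-fMRI matrix for one subject.
    Returns a k x n_voxels reduction of the subject's data."""
    return PCA(n_components=k).fit_transform(ts.T).T

def group_ica(subject_ts, k=30, n_sources=20, n_jobs=4):
    # Stage 1: subject-level reductions, computed in parallel.
    reduced = Parallel(n_jobs=n_jobs)(
        delayed(reduce_subject)(ts, k) for ts in subject_ts)
    stacked = np.vstack(reduced)                 # (n_subjects*k, n_voxels)
    # Stage 2: group-level spatial ICA on the stacked reductions.
    ica = FastICA(n_components=n_sources, random_state=0)
    return ica.fit_transform(stacked.T).T        # (n_sources, n_voxels) maps
```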
Characterization of the powertrain components for a hybrid quadricycle
NASA Astrophysics Data System (ADS)
De Santis, M.; Agnelli, S.; Silvestri, L.; Di Ilio, G.; Giannini, O.
2016-06-01
This paper presents the experimental characterization of a prototype hybrid electric quadricycle, which is equipped with two independently actuated hub (in-wheel) motors and powered by a 51 V 132 Ah LiFeYPO4 battery pack. The vehicle employs two hub motors located on the rear axle in order to independently drive/brake the rear wheels; this architecture allows the implementation of a torque vectoring system to improve the vehicle dynamics. Due to its actuation flexibility, energy efficiency and performance potential, this architecture is one of the most promising powertrain designs for electric quadricycles. Experimental data obtained from measurements on the vehicle powertrain components, from the battery pack to the inverter and to the in-wheel motor, were employed to generate the hub motor torque response and power efficiency maps in both driving and regenerative braking modes. Furthermore, the vehicle is equipped with a gasoline internal combustion engine as a range extender, whose efficiency was also characterized.
Nonlinear Extraction of Independent Components of Natural Images Using Radial Gaussianization
Lyu, Siwei; Simoncelli, Eero P.
2011-01-01
We consider the problem of efficiently encoding a signal by transforming it to a new representation whose components are statistically independent. A widely studied linear solution, known as independent component analysis (ICA), exists for the case when the signal is generated as a linear transformation of independent nongaussian sources. Here, we examine a complementary case, in which the source is nongaussian and elliptically symmetric. In this case, no invertible linear transform suffices to decompose the signal into independent components, but we show that a simple nonlinear transformation, which we call radial gaussianization (RG), is able to remove all dependencies. We then examine this methodology in the context of natural image statistics. We first show that distributions of spatially proximal bandpass filter responses are better described as elliptical than as linearly transformed independent sources. Consistent with this, we demonstrate that the reduction in dependency achieved by applying RG to either nearby pairs or blocks of bandpass filter responses is significantly greater than that achieved by ICA. Finally, we show that the RG transformation may be closely approximated by divisive normalization, which has been used to model the nonlinear response properties of visual neurons. PMID:19191599
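As a rough illustration of the radial gaussianization idea described above, here is a minimal sketch under the assumption that the data are approximately elliptically symmetric: after whitening, the radii are nonlinearly remapped so that they follow the chi distribution of a standard Gaussian of the same dimension. The function name and the empirical-CDF mapping are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.stats import chi

def radial_gaussianize(X):
    """Map elliptically symmetric data X (n_samples, d) toward
    approximately independent Gaussian components."""
    d = X.shape[1]
    # 1) Whiten to remove the second-order (elliptical) covariance structure.
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    Z = Xc @ W
    # 2) Remap radii so they follow a chi distribution with d degrees of
    #    freedom, the radial law of a standard d-dimensional Gaussian.
    r = np.linalg.norm(Z, axis=1)
    ranks = np.argsort(np.argsort(r))
    u = (ranks + 0.5) / len(r)          # empirical CDF values of the radii
    r_new = chi.ppf(u, df=d)            # target Gaussian radii
    return Z * (r_new / r)[:, None]
```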
ADJUST: An automatic EEG artifact detector based on the joint use of spatial and temporal features.
Mognon, Andrea; Jovicich, Jorge; Bruzzone, Lorenzo; Buiatti, Marco
2011-02-01
A successful method for removing artifacts from electroencephalogram (EEG) recordings is Independent Component Analysis (ICA), but its implementation remains largely user-dependent. Here, we propose a completely automatic algorithm (ADJUST) that identifies artifacted independent components by combining stereotyped artifact-specific spatial and temporal features. Features were optimized to capture blinks, eye movements, and generic discontinuities on a feature selection dataset. Validation on a totally different EEG dataset shows that (1) ADJUST's classification of independent components largely matches a manual one by experts (agreement on 95.2% of the data variance), and (2) Removal of the artifacted components detected by ADJUST leads to neat reconstruction of visual and auditory event-related potentials from heavily artifacted data. These results demonstrate that ADJUST provides a fast, efficient, and automatic way to use ICA for artifact removal. Copyright © 2010 Society for Psychophysiological Research.
McCarty, James; Parrinello, Michele
2017-11-28
In this paper, we combine two powerful computational techniques, well-tempered metadynamics and time-lagged independent component analysis. The aim is to develop a new tool for studying rare events and exploring complex free energy landscapes. Metadynamics is a well-established and widely used enhanced sampling method whose efficiency depends on an appropriate choice of collective variables. Often the initial choice is not optimal leading to slow convergence. However by analyzing the dynamics generated in one such run with a time-lagged independent component analysis and the techniques recently developed in the area of conformational dynamics, we obtain much more efficient collective variables that are also better capable of illuminating the physics of the system. We demonstrate the power of this approach in two paradigmatic examples.
NASA Astrophysics Data System (ADS)
McCarty, James; Parrinello, Michele
2017-11-01
In this paper, we combine two powerful computational techniques, well-tempered metadynamics and time-lagged independent component analysis. The aim is to develop a new tool for studying rare events and exploring complex free energy landscapes. Metadynamics is a well-established and widely used enhanced sampling method whose efficiency depends on an appropriate choice of collective variables. Often the initial choice is not optimal leading to slow convergence. However by analyzing the dynamics generated in one such run with a time-lagged independent component analysis and the techniques recently developed in the area of conformational dynamics, we obtain much more efficient collective variables that are also better capable of illuminating the physics of the system. We demonstrate the power of this approach in two paradigmatic examples.
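For readers unfamiliar with the time-lagged independent component analysis step used in both records above, a minimal sketch follows: it builds the instantaneous and symmetrized time-lagged covariance matrices of the collective-variable time series and solves the corresponding generalized eigenvalue problem. The reweighting needed to account for the metadynamics bias is omitted, and the function name and lag value are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def tica(X, lag=100):
    """X: (n_frames, n_cvs) time series of collective variables.
    Returns eigenvalues and linear combinations ordered from slowest
    to fastest decorrelating mode."""
    X = X - X.mean(axis=0)
    X0, Xt = X[:-lag], X[lag:]
    n = len(X0)
    C0 = (X0.T @ X0 + Xt.T @ Xt) / (2 * n)   # instantaneous covariance
    Ct = (X0.T @ Xt + Xt.T @ X0) / (2 * n)   # symmetrized lagged covariance
    evals, evecs = eigh(Ct, C0)              # generalized eigenproblem (C0 assumed positive definite)
    order = np.argsort(evals)[::-1]          # slowest modes first
    return evals[order], evecs[:, order]
```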
Monakhova, Yulia B; Godelmann, Rolf; Kuballa, Thomas; Mushtakova, Svetlana P; Rutledge, Douglas N
2015-08-15
Discriminant analysis (DA) methods, such as linear discriminant analysis (LDA) or factorial discriminant analysis (FDA), are well-known chemometric approaches for solving classification problems in chemistry. In most applications, principal component analysis (PCA) is used as the first step to generate orthogonal eigenvectors, and the corresponding sample scores are utilized to generate discriminant features for the discrimination. Independent components analysis (ICA) based on the minimization of mutual information can be used as an alternative to PCA as a preprocessing tool for LDA and FDA classification. To illustrate the performance of this ICA/DA methodology, four representative nuclear magnetic resonance (NMR) data sets of wine samples were used. The classification was performed with regard to grape variety, year of vintage and geographical origin. The average increase for ICA/DA in comparison with PCA/DA in the percentage of correct classification varied between 6±1% and 8±2%. The maximum increase in classification efficiency of 11±2% was observed for discrimination of the year of vintage (ICA/FDA) and geographical origin (ICA/LDA). The procedure to determine the number of extracted features (PCs, ICs) for the optimum DA models is discussed. The use of independent components (ICs) instead of principal components (PCs) resulted in improved classification performance of DA methods. The ICA/LDA method is preferable to ICA/FDA for recognition tasks based on NMR spectroscopic measurements. Copyright © 2015 Elsevier B.V. All rights reserved.
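A minimal sketch of the ICA/DA versus PCA/DA comparison is shown below, using scikit-learn's FastICA (a negentropy-based ICA, standing in here for the mutual-information-minimizing ICA used in the paper) and LDA with cross-validation; the data shapes, component count, and preprocessing are assumed.

```python
from sklearn.decomposition import FastICA, PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# X: (n_samples, n_variables) bucketed NMR spectra; y: class labels
# (e.g. grape variety). Alignment and bucketing are assumed already done.
def ica_da_score(X, y, n_components=10):
    scores = FastICA(n_components=n_components, random_state=0).fit_transform(X)
    return cross_val_score(LinearDiscriminantAnalysis(), scores, y, cv=5).mean()

def pca_da_score(X, y, n_components=10):
    scores = PCA(n_components=n_components).fit_transform(X)
    return cross_val_score(LinearDiscriminantAnalysis(), scores, y, cv=5).mean()
```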
A fundamental study of drag and an assessment of conventional drag-due-to-lift reduction devices
NASA Astrophysics Data System (ADS)
Yates, J. E.; Donald, C. D.
1986-09-01
The integral conservation laws of fluid mechanics are used to assess the drag efficiency of lifting wings, both CTOL and various out-of-plane configurations. The drag-due-to-lift is separated into two major components: (1) the induced drag-due-to-lift that depends on aspect ratio but is relatively independent of Reynolds number; (2) the form drag-due-to-lift that is independent of aspect ratio but dependent on the details of the wing section design, planform and Reynolds number. For each lifting configuration there is an optimal load distribution that yields the minimum value of drag-due-to-lift. For well designed high aspect ratio CTOL wings the two drag components are independent. With modern design technology CTOL wings can be (and usually are) designed with a drag-due-to-lift efficiency close to unity. Wing tip-devices (winglets, feathers, sails, etc.) can improve drag-due-to-lift efficiency by 10 to 15% if they are designed as an integral part of the wing. As add-on devices they can be detrimental. It is estimated that 25% improvements of wing drag-due-to-lift efficiency can be obtained with joined tip configurations and vertically separated lifting elements without considering additional benefits that might be realized by improved structural efficiency. It is strongly recommended that an integrated aerodynamic/structural approach be taken in the design of (or research on) future out-of-plane configurations.
A fundamental study of drag and an assessment of conventional drag-due-to-lift reduction devices
NASA Technical Reports Server (NTRS)
Yates, J. E.; Donald, C. D.
1986-01-01
The integral conservation laws of fluid mechanics are used to assess the drag efficiency of lifting wings, both CTOL and various out-of-plane configurations. The drag-due-to-lift is separated into two major components: (1) the induced drag-due-to-lift that depends on aspect ratio but is relatively independent of Reynolds number; (2) the form drag-due-to-lift that is independent of aspect ratio but dependent on the details of the wing section design, planform and Reynolds number. For each lifting configuration there is an optimal load distribution that yields the minimum value of drag-due-to-lift. For well designed high aspect ratio CTOL wings the two drag components are independent. With modern design technology CTOL wings can be (and usually are) designed with a drag-due-to-lift efficiency close to unity. Wing tip-devices (winglets, feathers, sails, etc.) can improve drag-due-to-lift efficiency by 10 to 15% if they are designed as an integral part of the wing. As add-on devices they can be detrimental. It is estimated that 25% improvements of wing drag-due-to-lift efficiency can be obtained with joined tip configurations and vertically separated lifting elements without considering additional benefits that might be realized by improved structural efficiency. It is strongly recommended that an integrated aerodynamic/structural approach be taken in the design of (or research on) future out-of-plane configurations.
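For context, the drag-due-to-lift efficiency referred to in both records can be related to the standard drag polar (supplied here as background, not taken from the report):

$$C_D \;=\; C_{D,\text{parasite}} \;+\; \frac{C_L^{2}}{\pi\, e\, AR},$$

where $AR$ is the wing aspect ratio and $e$ is the span (Oswald) efficiency factor; a drag-due-to-lift efficiency "close to unity" corresponds to $e \approx 1$, while the form drag-due-to-lift discussed in the report is the additional, section-dependent contribution that is independent of $AR$.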
ERIC Educational Resources Information Center
McQueen, James M.; Tyler, Michael D.; Cutler, Anne
2012-01-01
Children hear new words from many different talkers; to learn words most efficiently, they should be able to represent them independently of talker-specific pronunciation detail. However, do children know what the component sounds of words should be, and can they use that knowledge to deal with different talkers' phonetic realizations? Experiment…
Zou, Ling; Chen, Shuyue; Sun, Yuqiang; Ma, Zhenghua
2010-08-01
In this paper we present a new method that combines Independent Component Analysis (ICA) and a wavelet de-noising algorithm to extract event-related potentials (ERPs). First, the extended Infomax ICA algorithm is used to analyze the EEG signals and obtain the independent components (ICs); then, the WaveShrink (WS) method is applied to the demixed ICs as an intermediate step; the EEG data are rebuilt using the inverse ICA based on the new ICs; and the ERPs are extracted from the de-noised EEG data after averaging over several trials. The experimental results showed that both the combined method and the ICA method could remove eye artifacts and muscle artifacts mixed in the ERPs, while the combined method could retain the brain neural activity mixed in the noisy ICs and could efficiently extract the weak ERPs from strong background artifacts.
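A minimal sketch of such a combined ICA/wavelet-shrinkage pipeline follows, using scikit-learn's FastICA in place of extended Infomax and PyWavelets for the shrinkage step; the wavelet, decomposition level, and universal-threshold rule are common defaults assumed here, not the authors' exact settings.

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def ica_wavelet_denoise(eeg, wavelet="db4", level=4):
    """eeg: (n_samples, n_channels). Decompose into ICs, wavelet-shrink
    each IC, then remix back into channel space."""
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    ics = ica.fit_transform(eeg)                       # (n_samples, n_components)
    denoised = np.empty_like(ics)
    n = len(ics)
    for k in range(ics.shape[1]):
        coeffs = pywt.wavedec(ics[:, k], wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise estimate
        thr = sigma * np.sqrt(2 * np.log(n))                     # universal threshold
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                for c in coeffs[1:]]
        denoised[:, k] = pywt.waverec(coeffs, wavelet)[:n]
    return ica.inverse_transform(denoised)             # back to channel space
```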
A natural basis for efficient brain-actuated control
NASA Technical Reports Server (NTRS)
Makeig, S.; Enghoff, S.; Jung, T. P.; Sejnowski, T. J.
2000-01-01
The prospect of noninvasive brain-actuated control of computerized screen displays or locomotive devices is of interest to many and of crucial importance to a few 'locked-in' subjects who experience near total motor paralysis while retaining sensory and mental faculties. Currently several groups are attempting to achieve brain-actuated control of screen displays using operant conditioning of particular features of the spontaneous scalp electroencephalogram (EEG) including central mu-rhythms (9-12 Hz). A new EEG decomposition technique, independent component analysis (ICA), appears to be a foundation for new research in the design of systems for detection and operant control of endogenous EEG rhythms to achieve flexible EEG-based communication. ICA separates multichannel EEG data into spatially static and temporally independent components including separate components accounting for posterior alpha rhythms and central mu activities. We demonstrate using data from a visual selective attention task that ICA-derived mu-components can show much stronger spectral reactivity to motor events than activity measures for single scalp channels. ICA decompositions of spontaneous EEG would thus appear to form a natural basis for operant conditioning to achieve efficient and multidimensional brain-actuated control in motor-limited and locked-in subjects.
A Linguistic Model in Component Oriented Programming
NASA Astrophysics Data System (ADS)
Crăciunean, Daniel Cristian; Crăciunean, Vasile
2016-12-01
It is a fact that component-oriented programming, when well organized, can bring a large increase in efficiency to the development of large software systems. This paper proposes a model for building software systems by assembling components that can operate independently of each other. The model is based on a computing environment that runs parallel and distributed applications. This paper introduces concepts such as the abstract aggregation scheme and the aggregation application. Basically, an aggregation application is an application that is obtained by combining corresponding components. In our model an aggregation application is a word in a language.
Electron beam control for barely separated beams
Douglas, David R.; Ament, Lucas J. P.
2017-04-18
A method for achieving independent control of multiple beams in close proximity to one another, such as in a multi-pass accelerator where coaxial beams are at different energies, but moving on a common axis, and need to be split into spatially separated beams for efficient recirculation transport. The method for independent control includes placing a magnet arrangement in the path of the barely separated beams with the magnet arrangement including at least two multipole magnets spaced closely together and having a multipole distribution including at least one odd multipole and one even multipole. The magnetic fields are then tuned to cancel out for a first of the barely separated beams to allow independent control of the second beam with common magnets. The magnetic fields may be tuned to cancel out either the dipole component or tuned to cancel out the quadrupole component in order to independently control the separate beams.
Measurement-device-independent quantum key distribution.
Lo, Hoi-Kwong; Curty, Marcos; Qi, Bing
2012-03-30
How to remove detector side channel attacks has been a notoriously hard problem in quantum cryptography. Here, we propose a simple solution to this problem: measurement-device-independent quantum key distribution (QKD). It not only removes all detector side channels, but also doubles the secure distance with conventional lasers. Our proposal can be implemented with standard optical components with low detection efficiency and highly lossy channels. In contrast to the previous solution of full device-independent QKD, the realization of our idea does not require detectors of near-unity detection efficiency in combination with a qubit amplifier (based on teleportation) or a quantum nondemolition measurement of the number of photons in a pulse. Furthermore, its key generation rate is many orders of magnitude higher than that based on full device-independent QKD. The results show that long-distance quantum cryptography over, say, 200 km will remain secure even with seriously flawed detectors.
Independent component analysis algorithm FPGA design to perform real-time blind source separation
NASA Astrophysics Data System (ADS)
Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke
2015-05-01
The conditions that arise in the cocktail party problem prevail across many fields, creating a need for Blind Source Separation (BSS). BSS has become prevalent in several fields, including array processing, communications, medical and speech signal processing, wireless communication, audio, acoustics and biomedical engineering. The concept of the cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms. ICA proves useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study on the ability and efficiency of Independent Component Analysis algorithms to perform blind source separation on mixed signals in software, and to implement the result in hardware with a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) ICA algorithms were examined and compared. The best algorithm, requiring the least complexity and fewest resources while effectively separating mixed sources, was the EASI algorithm. The EASI ICA was implemented on an FPGA to perform blind source separation and to analyze its performance in real time.
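As a floating-point reference for the EASI algorithm selected in this study (not the fixed-point FPGA implementation), the serial equivariant update of Cardoso and Laheld can be sketched as follows; the learning rate, nonlinearity, and iteration count are illustrative.

```python
import numpy as np

def easi(X, mu=1e-3, n_iter=5):
    """Estimate an unmixing matrix W for mixtures X of shape
    (n_channels, n_samples) with the EASI adaptive update."""
    n = X.shape[0]
    W = np.eye(n)
    for _ in range(n_iter):
        for t in range(X.shape[1]):
            y = W @ X[:, t]
            g = y ** 3                                   # cubic nonlinearity
            # relative-gradient (equivariant) update term
            G = np.outer(y, y) - np.eye(n) + np.outer(g, y) - np.outer(y, g)
            W -= mu * G @ W
    return W

# Usage: sources_estimate = easi(mixtures) @ mixtures
```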
Classification of breast tissue in mammograms using efficient coding.
Costa, Daniel D; Campos, Lúcio F; Barros, Allan K
2011-06-24
Female breast cancer is the major cause of death by cancer in western countries. Efforts in computer vision have been made in order to improve the diagnostic accuracy of radiologists. Some methods of lesion diagnosis in mammogram images were developed based on the technique of principal component analysis, which has been used for efficient coding of signals, and on 2D Gabor wavelets, which are used for computer vision applications and for modeling biological vision. In this work, we present a methodology that uses efficient coding along with linear discriminant analysis to distinguish between mass and non-mass in 5,090 regions of interest from mammograms. The results show that the best success rates reached with Gabor wavelets and principal component analysis were 85.28% and 87.28%, respectively. In comparison, the model of efficient coding presented here reached up to 90.07%. Altogether, the results demonstrate that independent component analysis successfully performed the efficient coding needed to discriminate mass from non-mass tissues. In addition, we observed that LDA with ICA bases showed high predictive performance for some datasets and thus provides significant support for a more detailed clinical investigation.
Gap plasmon-based metasurfaces for total control of reflected light
Pors, Anders; Albrektsen, Ole; Radko, Ilya P.; Bozhevolnyi, Sergey I.
2013-01-01
In the quest to miniaturise photonics, it is of paramount importance to control light at the nanoscale. We reveal the main physical mechanism responsible for operation of gap plasmon-based gradient metasurfaces, comprising a periodic arrangement of metal nanobricks, and suggest that two degrees of freedom in the nanobrick geometry allow one to independently control the reflection phases of orthogonal light polarisations. We demonstrate, both theoretically and experimentally, how orthogonal linear polarisations of light at wavelengths close to 800 nm can be manipulated independently, efficiently and in a broad wavelength range by realising polarisation beam splitters and polarisation-independent beam steering, showing at the same time the robustness of metasurface designs towards fabrication tolerances. The presented approach establishes a new class of compact optical components, viz., plasmonic metasurfaces with controlled gradient birefringence, with no dielectric counterparts. It can straightforwardly be adapted to realise new optical components with hitherto inaccessible functionalities. PMID:23831621
The design of photovoltaic plants - An optimization procedure
NASA Astrophysics Data System (ADS)
Bartoli, B.; Cuomo, V.; Fontana, F.; Serio, C.; Silvestrini, V.
An analytical model is developed to match the components and overall size of a solar power facility (comprising a photovoltaic array, maximum-power tracker, battery storage system, and inverter) to the load requirements and climatic conditions of a proposed site at the smallest possible cost. Input parameters are the efficiencies and unit costs of the components, the load fraction to be covered (for stand-alone systems), the statistically analyzed meteorological data, and the cost and efficiency data of the support system (for fuel-generator-assisted plants). Numerical results are presented in graphs and tables for sites in Italy, and it is found that the explicit form of the model equation is independent of locality, at least for this region.
Assisting Adult Higher Education via Personal Computer: Technology and Distance Education.
ERIC Educational Resources Information Center
Spradley, Evelyn
1993-01-01
Thomas Edison State College (New Jersey) has developed a computer-assisted distance learning system to make undergraduate study more accessible, efficient, and effective for nontraditional students. The three main components: an infrastructure to provide varied technical services; an independent study course system; and diagnostic, online pretests…
NASA Astrophysics Data System (ADS)
Meksiarun, Phiranuphon; Ishigaki, Mika; Huck-Pezzei, Verena A. C.; Huck, Christian W.; Wongravee, Kanet; Sato, Hidetoshi; Ozaki, Yukihiro
2017-03-01
This study aimed to extract the paraffin component from paraffin-embedded oral cancer tissue spectra using three multivariate analysis (MVA) methods: Independent Component Analysis (ICA), Partial Least Squares (PLS) and Independent Component-Partial Least Squares (IC-PLS). The estimated paraffin components were used for removing the contribution of paraffin from the tissue spectra. These three methods were compared in terms of the efficiency of paraffin removal and the ability to retain the tissue information. It was found that ICA, PLS and IC-PLS could remove the paraffin component from the spectra at almost the same level, whereas Principal Component Analysis (PCA) could not. In terms of retaining cancer tissue spectral integrity, the effects of PLS and IC-PLS on the non-paraffin region were significantly smaller than those of ICA, which deteriorated cancer tissue spectral areas. The paraffin-removed spectra were used for constructing Raman images of oral cancer tissue and compared with Hematoxylin and Eosin (H&E) stained tissues for verification. This study has demonstrated the capability of Raman spectroscopy together with multivariate analysis methods as a diagnostic tool for paraffin-embedded tissue sections.
Jiles, D.C.
1991-04-16
A multiparameter magnetic inspection system is disclosed for providing an efficient and economical way to derive a plurality of independent measurements regarding magnetic properties of the magnetic material under investigation. A plurality of transducers for a plurality of different types of measurements are operatively connected to the specimen. The transducers are in turn connected to analytical circuits for converting transducer signals to meaningful measurement signals of the magnetic properties of the specimen. The measurement signals are processed and can be simultaneously communicated to a control component. The measurement signals can also be selectively plotted against one another. The control component controls the functioning of the analytical circuits and operates components to impose magnetic fields of desired characteristics upon the specimen. The system therefore allows contemporaneous or simultaneous derivation of the plurality of different independent magnetic properties of the material, which can then be processed to derive characteristics of the material. 1 figure.
Jiles, David C.
1991-04-16
A multiparameter magnetic inspection system provides an efficient and economical way to derive a plurality of independent measurements regarding magnetic properties of the magnetic material under investigation. A plurality of transducers for a plurality of different types of measurements are operatively connected to the specimen. The transducers are in turn connected to analytical circuits for converting transducer signals to meaningful measurement signals of the magnetic properties of the specimen. The measurement signals are processed and can be simultaneously communicated to a control component. The measurement signals can also be selectively plotted against one another. The control component controls the functioning of the analytical circuits and operates components to impose magnetic fields of desired characteristics upon the specimen. The system therefore allows contemporaneous or simultaneous derivation of the plurality of different independent magnetic properties of the material, which can then be processed to derive characteristics of the material.
Wavelet decomposition based principal component analysis for face recognition using MATLAB
NASA Astrophysics Data System (ADS)
Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish
2016-03-01
For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses an approach based on wavelet decomposition and principal component analysis for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition stands for identifying a person from his facial features and has resemblance with factor analysis in some sense, i.e. extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly the poor discriminatory power and, in particular, the large computational load in finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and frequency domains. From the experimental results, it is envisaged that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
SaaS Platform for Time Series Data Handling
NASA Astrophysics Data System (ADS)
Oplachko, Ekaterina; Rykunov, Stanislav; Ustinin, Mikhail
2018-02-01
The paper is devoted to the description of MathBrain, a cloud-based resource, which works as a "Software as a Service" model. It is designed to maximize the efficiency of the current technology and to provide a tool for time series data handling. The resource provides access to the following analysis methods: direct and inverse Fourier transforms, Principal component analysis and Independent component analysis decompositions, quantitative analysis, magnetoencephalography inverse problem solution in a single dipole model based on multichannel spectral data.
Digital visual communications using a Perceptual Components Architecture
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
1991-01-01
The next era of space exploration will generate extraordinary volumes of image data, and management of this image data is beyond current technical capabilities. We propose a strategy for coding visual information that exploits the known properties of early human vision. This Perceptual Components Architecture codes images and image sequences in terms of discrete samples from limited bands of color, spatial frequency, orientation, and temporal frequency. This spatiotemporal pyramid offers efficiency (low bit rate), variable resolution, device independence, error-tolerance, and extensibility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir
An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed by combining David Eyre's time stepping scheme with a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time stepsize. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of the least-perimeter periodic space partitioning problem. •Development of a penalization strategy to avoid trivial solutions. •Presentation of a MATLAB implementation of the introduced algorithm.
Adaptive Implicit Non-Equilibrium Radiation Diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Philip, Bobby; Wang, Zhen; Berrill, Mark A
2013-01-01
We describe methods for accurate and efficient long term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long term time integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
Enhanced automatic artifact detection based on independent component analysis and Renyi's entropy.
Mammone, Nadia; Morabito, Francesco Carlo
2008-09-01
Artifacts are disturbances that may occur during signal acquisition and may affect subsequent processing. The aim of this paper is to propose a technique for automatically detecting artifacts in electroencephalographic (EEG) recordings. In particular, a technique based on Independent Component Analysis (ICA) to extract artifactual signals and on Renyi's entropy to automatically detect them is presented. This technique is compared to the widely known approach based on ICA and the joint use of kurtosis and Shannon's entropy. The novel processing technique is shown to detect on average 92.6% of the artifactual signals, compared with an average of 68.7% for the previous technique, on the available database studied. Moreover, Renyi's entropy is shown to be able to detect muscle and very low frequency activity as well as to discriminate them from other kinds of artifacts. In order to achieve an efficient rejection of the artifacts while minimizing the information loss, future efforts will be devoted to the improvement of blind artifact separation from EEG in order to ensure a very efficient isolation of the artifactual activity from any signals deriving from other brain tasks.
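A minimal sketch of the Renyi-entropy feature follows: the order-alpha entropy of each independent component's amplitude histogram is computed, and components whose entropy deviates strongly from the ensemble are flagged. The histogram estimator and the z-score threshold are illustrative choices, not the thresholding rule used in the paper.

```python
import numpy as np

def renyi_entropy(ic, alpha=2, bins=100):
    """Order-alpha Renyi entropy of one IC's amplitude distribution,
    estimated from a normalized histogram."""
    p, _ = np.histogram(ic, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def flag_artifacts(ics, z_thresh=2.0):
    """ics: array of shape (n_components, n_samples). Flag components whose
    entropy is an outlier relative to the ensemble (illustrative rule)."""
    h = np.array([renyi_entropy(c) for c in ics])
    z = (h - h.mean()) / h.std()
    return np.abs(z) > z_thresh
```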
Fetal source extraction from magnetocardiographic recordings by dependent component analysis
NASA Astrophysics Data System (ADS)
de Araujo, Draulio B.; Kardec Barros, Allan; Estombelo-Montesco, Carlos; Zhao, Hui; Roque da Silva Filho, A. C.; Baffa, Oswaldo; Wakai, Ronald; Ohnishi, Noboru
2005-10-01
Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.
Comparing architectural solutions of IPT application SDKs utilizing H.323 and SIP
NASA Astrophysics Data System (ADS)
Keskinarkaus, Anja; Korhonen, Jani; Ohtonen, Timo; Kilpelanaho, Vesa; Koskinen, Esa; Sauvola, Jaakko J.
2001-07-01
This paper presents two approaches to efficient service development for Internet Telephony. In the first approach we consider services ranging from core call signaling features and media control, as specified in ITU-T's H.323, to end-user services that support user interaction. The second approach supports IETF's SIP protocol. We compare these from differing architectural perspectives and from the economy of network and terminal development, and propose efficient architecture models for both protocols. In their design, the main criteria were component independence, lightweight operation and portability in heterogeneous end-to-end environments. In the proposed architecture, the vertical division of call signaling and streaming media control logic allows the components to be used either individually or combined, depending on the level of functionality required by an application.
NASA Technical Reports Server (NTRS)
Tuma, Margaret L.; Weisshaar, Andreas; Li, Jian; Beheim, Glenn
1995-01-01
To determine the feasibility of coupling the output of a single-mode optical fiber into a single-mode rib waveguide in a temperature-varying environment, a theoretical calculation of the coupling efficiency between the two was carried out. Due to the complex geometry of the rib guide, there is no analytical solution to the wave equation for the guided modes; thus, approximation and/or numerical techniques must be utilized to determine the field patterns of the guide. In this study, three solution methods were used for both the fiber and guide fields: the effective-index method (EIM), Marcatili's approximation, and a Fourier method. These methods were utilized independently to calculate the electric field profile of each component at two temperatures, 20 C and 300 C, representing a nominal and a high temperature. Using the electric field profile calculated from each method, the theoretical coupling efficiency between an elliptical-core optical fiber and a rib waveguide was calculated using the overlap integral, and the results were compared. It was determined that a high coupling efficiency can be achieved when the two components are aligned. The coupling efficiency was more sensitive to alignment offsets in the y direction than in the x direction, due to the elliptical modal field profile of both components. Changes in the coupling efficiency over temperature were found to be minimal.
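For reference, the overlap integral mentioned above takes the standard form (stated here as background, with $E_f$ and $E_g$ denoting the fiber and rib-waveguide mode fields):

$$\eta \;=\; \frac{\left|\iint E_f(x,y)\,E_g^{*}(x,y)\,dx\,dy\right|^{2}}{\iint \left|E_f\right|^{2}dx\,dy\;\iint \left|E_g\right|^{2}dx\,dy}.$$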
Fang, Wai-Chi; Huang, Kuan-Ju; Chou, Chia-Ching; Chang, Jui-Chung; Cauwenberghs, Gert; Jung, Tzyy-Ping
2014-01-01
This paper proposes an efficient very-large-scale integration (VLSI) design: a 16-channel on-line recursive independent component analysis (ORICA) processor ASIC for real-time EEG systems, implemented with TSMC 40 nm CMOS technology. ORICA is well suited to real-time EEG systems for separating artifacts because of its highly efficient, real-time processing. The proposed ORICA processor is composed of an ORICA processing unit and a singular value decomposition (SVD) processing unit. Compared with previous work [1], this ORICA processor has enhanced effectiveness and reduced hardware complexity by utilizing a deeper pipeline architecture, a shared arithmetic processing unit, and shared registers. Sixteen-channel random signals containing 8 super-Gaussian and 8 sub-Gaussian components are used to analyze the dependence of the source components, and the average correlation coefficient between the original source signals and the extracted ORICA signals is 0.95452. Finally, the proposed ORICA processor ASIC is implemented with TSMC 40 nm CMOS technology and consumes 15.72 mW at a 100 MHz operating frequency.
Hage, Steffen R; Jiang, Tinglei; Berquist, Sean W; Feng, Jiang; Metzner, Walter
2014-07-15
One of the most efficient mechanisms to optimize signal-to-noise ratios is the Lombard effect - an involuntary rise in call amplitude due to ambient noise. It is often accompanied by changes in the spectro-temporal composition of calls. We examined the effects of broadband-filtered noise on the spectro-temporal composition of horseshoe bat echolocation calls, which consist of a constant-frequency component and initial and terminal frequency-modulated components. We found that the frequency-modulated components became larger for almost all noise conditions, whereas the bandwidth of the constant-frequency component increased only when broadband-filtered noise was centered on or above the calls' dominant or fundamental frequency. This indicates that ambient noise independently modifies the associated acoustic parameters of the Lombard effect, such as spectro-temporal features, and could significantly affect the bat's ability to detect and locate targets. Our findings may be of significance in evaluating the impact of environmental noise on echolocation behavior in bats. © 2014. Published by The Company of Biologists Ltd.
Simplified Phased-Mission System Analysis for Systems with Independent Component Repairs
NASA Technical Reports Server (NTRS)
Somani, Arun K.
1996-01-01
Accurate reliability analysis of a system requires accounting for all major variations in the system's operation. Most reliability analyses assume that the system configuration, success criteria, and component behavior remain the same. However, multiple phases are natural. We present a new computationally efficient technique for the analysis of phased-mission systems in which the operational states of a system can be described by combinations of component states (such as fault trees or assertions). Moreover, individual components may be repaired, if failed, as part of system operation, but repairs are independent of the system state. For repairable systems, Markov analysis techniques are used, but they suffer from state space explosion, which limits the size of the systems that can be analyzed and is computationally expensive. We avoid the state space explosion. The phase algebra is used to account for the effects of variable configurations, repairs, and success criteria from phase to phase. Our technique yields exact (as opposed to approximate) results. We demonstrate our technique by means of several examples and present numerical results to show the effects of phases and repairs on the system reliability/availability.
González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio
2015-03-01
A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
NASA Technical Reports Server (NTRS)
Peters, C. (Principal Investigator)
1980-01-01
A general theorem is given which establishes the existence and uniqueness of a consistent solution of the likelihood equations given a sequence of independent random vectors whose distributions are not identical but have the same parameter set. In addition, it is shown that the consistent solution is an MLE and that it is asymptotically normal and efficient. Two applications are discussed: one in which independent observations of a normal random vector have missing components, and the other in which the parameters in a mixture from an exponential family are estimated using independent homogeneous sample blocks of different sizes.
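In commonly used notation (chosen here for illustration, not taken from the report), the likelihood equations and the asymptotic statement have the form

$$\sum_{i=1}^{n} \nabla_{\theta} \log f_i\!\left(X_i;\theta\right) = 0, \qquad \sqrt{n}\left(\hat{\theta}_n - \theta_0\right) \xrightarrow{\;d\;} \mathcal{N}\!\left(0,\; \bar{I}(\theta_0)^{-1}\right),$$

where the densities $f_i$ may differ across observations while sharing the common parameter $\theta$, and $\bar{I}(\theta_0)$ denotes the limiting average Fisher information.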
Factorial Experiments: Efficient Tools for Evaluation of Intervention Components
Collins, Linda M.; Dziak, John J.; Kugler, Kari C.; Trail, Jessica B.
2014-01-01
Background: An understanding of the individual and combined effects of a set of intervention components is important for moving the science of preventive medicine interventions forward. This understanding can often be achieved in an efficient and economical way via a factorial experiment, in which two or more independent variables are manipulated. The factorial experiment is a complement to the randomized controlled trial (RCT); the two designs address different research questions. Purpose: This article offers an introduction to factorial experiments aimed at investigators trained primarily in the RCT. Method: The factorial experiment is compared and contrasted with other experimental designs used commonly in intervention science to highlight where each is most efficient and appropriate. Results: Several points are made: factorial experiments make very efficient use of experimental subjects when the data are properly analyzed; a factorial experiment can have excellent statistical power even if it has relatively few subjects per experimental condition; and when conducting research to select components for inclusion in a multicomponent intervention, interactions should be studied rather than avoided. Conclusions: Investigators in preventive medicine and related areas should begin considering factorial experiments alongside other approaches. Experimental designs should be chosen from a resource management perspective, which states that the best experimental design is the one that provides the greatest scientific benefit without exceeding available resources. PMID:25092122
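To make the efficiency argument concrete, the toy sketch below enumerates a hypothetical 2x2x2 factorial design of three intervention components and estimates main effects and one interaction; every observation contributes to every effect estimate. The outcome values are simulated for illustration only.

```python
import itertools
import numpy as np

# Hypothetical 2x2x2 factorial: three intervention components, each off (-1) or on (+1).
levels = [-1, +1]
design = np.array(list(itertools.product(levels, repeat=3)))   # 8 x 3 coded matrix
rng = np.random.default_rng(0)
# Simulated mean outcome per cell: component A helps a lot, B a little, C not at all.
y = 10 + 1.5 * design[:, 0] + 0.5 * design[:, 1] + rng.normal(0, 0.2, 8)

# Each main effect is estimated from all 8 cells (the efficiency point in the abstract).
for j, name in enumerate(["component A", "component B", "component C"]):
    effect = y[design[:, j] == +1].mean() - y[design[:, j] == -1].mean()
    print(f"main effect of {name}: {effect:.2f}")

# An A x B interaction contrast, again using every observation.
ab = design[:, 0] * design[:, 1]
print("A x B interaction:", y[ab == +1].mean() - y[ab == -1].mean())
```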
Spadone, Sara; de Pasquale, Francesco; Mantini, Dante; Della Penna, Stefania
2012-09-01
Independent component analysis (ICA) is typically applied on functional magnetic resonance imaging, electroencephalographic and magnetoencephalographic (MEG) data due to its data-driven nature. In these applications, ICA needs to be extended from single to multi-session and multi-subject studies for interpreting and assigning a statistical significance at the group level. Here a novel strategy for analyzing MEG independent components (ICs) is presented, Multivariate Algorithm for Grouping MEG Independent Components K-means based (MAGMICK). The proposed approach is able to capture spatio-temporal dynamics of brain activity in MEG studies by running ICA at subject level and then clustering the ICs across sessions and subjects. Distinctive features of MAGMICK are: i) the implementation of an efficient set of "MEG fingerprints" designed to summarize properties of MEG ICs as they are built on spatial, temporal and spectral parameters; ii) the implementation of a modified version of the standard K-means procedure to improve its data-driven character. This algorithm groups the obtained ICs automatically estimating the number of clusters through an adaptive weighting of the parameters and a constraint on the ICs independence, i.e. components coming from the same session (at subject level) or subject (at group level) cannot be grouped together. The performances of MAGMICK are illustrated by analyzing two sets of MEG data obtained during a finger tapping task and median nerve stimulation. The results demonstrate that the method can extract consistent patterns of spatial topography and spectral properties across sessions and subjects that are in good agreement with the literature. In addition, these results are compared to those from a modified version of affinity propagation clustering method. The comparison, evaluated in terms of different clustering validity indices, shows that our methodology often outperforms the clustering algorithm. Eventually, these results are confirmed by a comparison with a MEG tailored version of the self-organizing group ICA, which is largely used for fMRI IC clustering. Copyright © 2012 Elsevier Inc. All rights reserved.
Pei Li; Jing He; A. Lynn Abbott; Daniel L. Schmoldt
1996-01-01
This paper analyses computed tomography (CT) images of hardwood logs, with the goal of locating internal defects. The ability to detect and identify defects automatically is a critical component of efficiency improvements for future sawmills and veneer mills. This paper describes an approach in which 1) histogram equalization is used during preprocessing to normalize...
Efficient and Robust Signal Approximations
2009-05-01
Remark: Permutation matrices are both orthogonal and doubly-stochastic [62]. We will now show how to further simplify the Robust Coding... Keywords: signal processing, image compression, independent component analysis, sparse...
Yang, Yi Isaac; Parrinello, Michele
2018-06-12
Collective variables are often used in enhanced sampling methods, and their choice is a crucial factor in determining sampling efficiency. However, at times, searching for good collective variables can be challenging. In a recent paper, we combined time-lagged independent component analysis with well-tempered metadynamics in order to obtain improved collective variables from metadynamics runs that use lower quality collective variables [McCarty, J.; Parrinello, M. J. Chem. Phys. 2017, 147, 204109]. In this work, we extend these ideas to variationally enhanced sampling. This leads to an efficient scheme that is able to make use of the many advantages of the variational scheme. We apply the method to alanine-3 in water. From an alanine-3 variationally enhanced sampling trajectory in which all six dihedral angles are biased, we extract much better collective variables, able to describe the system's complex free energy surface in exquisite detail in a low-dimensional representation. The success of this investigation is helped by a more accurate way of calculating the correlation functions needed in the time-lagged independent component analysis and by the introduction of a new basis set to describe the dihedral angle arrangement.
A new multicriteria risk mapping approach based on a multiattribute frontier concept.
Yemshanov, Denys; Koch, Frank H; Ben-Haim, Yakov; Downing, Marla; Sapio, Frank; Siltanen, Marty
2013-09-01
Invasive species risk maps provide broad guidance on where to allocate resources for pest monitoring and regulation, but they often present individual risk components (such as climatic suitability, host abundance, or introduction potential) as independent entities. These independent risk components are integrated using various multicriteria analysis techniques that typically require prior knowledge of the risk components' importance. Such information is often nonexistent for many invasive pests. This study proposes a new approach for building integrated risk maps using the principle of a multiattribute efficient frontier and analyzing the partial order of elements of a risk map as distributed in multidimensional criteria space. The integrated risks are estimated as subsequent multiattribute frontiers in dimensions of individual risk criteria. We demonstrate the approach with the example of Agrilus biguttatus Fabricius, a high-risk pest that may threaten North American oak forests in the near future. Drawing on U.S. and Canadian data, we compare the performance of the multiattribute ranking against a multicriteria linear weighted averaging technique in the presence of uncertainties, using the concept of robustness from info-gap decision theory. The results show major geographic hotspots where the consideration of tradeoffs between multiple risk components changes integrated risk rankings. Both methods delineate similar geographical regions of high and low risks. Overall, aggregation based on a delineation of multiattribute efficient frontiers can be a useful tool to prioritize risks for anticipated invasive pests, which usually have an extremely poor prior knowledge base. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
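The core of the multiattribute-frontier idea (peeling off successive non-dominated sets of map cells in the space of risk criteria) can be sketched as follows; the brute-force dominance test and the "larger is riskier" orientation are simplifying assumptions for illustration.

```python
import numpy as np

def frontier_ranks(R):
    """Rank map cells by successive multiattribute (Pareto) frontiers.
    R: (n_cells, n_criteria) array where larger values mean higher risk.
    Returns an integer rank per cell: 1 = outermost (highest-risk) frontier."""
    n = len(R)
    ranks = np.zeros(n, dtype=int)
    remaining = np.arange(n)
    level = 0
    while remaining.size:
        level += 1
        sub = R[remaining]
        nondominated = []
        for i in range(len(sub)):
            # Cell i is dominated if some other cell is at least as risky on
            # every criterion and strictly riskier on at least one.
            dominated = np.any(np.all(sub >= sub[i], axis=1) &
                               np.any(sub > sub[i], axis=1))
            if not dominated:
                nondominated.append(i)
        idx = remaining[nondominated]
        ranks[idx] = level
        remaining = np.setdiff1d(remaining, idx)
    return ranks
```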
The effectiveness and efficiency of disease management programs for patients with chronic diseases.
Hisashige, Akinori
2012-11-26
The disease management (DM) approach is increasingly advocated as a means of improving the effectiveness and efficiency of healthcare for chronic diseases. To evaluate the evidence on the effectiveness and efficiency of DM, an evidence synthesis was carried out. To locate eligible meta-analyses and systematic reviews, we searched Medline, EMBASE, the Cochrane Library, SCI-EXPANDED, SSCI, A&HCI, DARE, HTA and NHS EED from 1995 to 2010. Two reviewers independently extracted data and assessed study quality. Twenty-eight meta-analyses and systematic reviews were included for synthesizing evidence. The proportion of articles that observed improvement with a reasonable amount of evidence was highest for process measures (69%), followed by health services (63%), QOL (57%), health outcomes (51%), satisfaction (50%), costs (38%), and so on. As to mortality, statistically significant results were observed only for coronary heart disease. Important components of DM, such as a multidisciplinary approach, were identified. The synthesized evidence shows considerable support for the effectiveness and efficiency of DM programs with respect to process, health services, QOL and so on. The question is no longer whether DM programs work, but rather which type or component of DM programs works best and most efficiently in the context of each healthcare system or country.
Classification of fMRI resting-state maps using machine learning techniques: A comparative study
NASA Astrophysics Data System (ADS)
Gallos, Ioannis; Siettos, Constantinos
2017-11-01
We compare the efficiency of Principal Component Analysis (PCA) and nonlinear manifold learning algorithms (ISOMAP and Diffusion Maps) for classifying brain maps between groups of schizophrenia patients and healthy controls from fMRI scans acquired during a resting-state experiment. After a standard pre-processing pipeline, we applied spatial Independent Component Analysis (ICA) to reduce (a) noise and (b) the spatial-temporal dimensionality of the fMRI maps. On the cross-correlation matrix of the ICA components, we applied PCA, ISOMAP and Diffusion Maps to find an embedded low-dimensional space. Finally, support vector machine (SVM) and k-NN algorithms were used to evaluate the performance of the algorithms in classifying between the two groups.
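A condensed sketch of such a comparison pipeline is given below using scikit-learn's PCA and Isomap with SVM and k-NN classifiers (diffusion maps are omitted because they are not available in scikit-learn); the data shapes, embedding dimension, and the simple cross-validation scheme are illustrative assumptions.

```python
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# X: (n_subjects, n_features) vectorized cross-correlation matrices of the
# ICA components (e.g. the upper triangle); y: 0 = control, 1 = patient.
def compare_embeddings(X, y, dim=5):
    # For brevity, each embedding is fit on all subjects before cross-validation;
    # a rigorous comparison would fit it inside each CV fold.
    embeddings = {
        "PCA": PCA(n_components=dim).fit_transform(X),
        "Isomap": Isomap(n_components=dim).fit_transform(X),
    }
    for name, Z in embeddings.items():
        for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=5)):
            acc = cross_val_score(clf, Z, y, cv=5).mean()
            print(f"{name} + {type(clf).__name__}: {acc:.2f}")
```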
Garrett, W. Ray
1997-01-01
A method and apparatus for measuring partial pressures of gaseous components within a mixture. The apparatus comprises generally at least one tunable laser source, a beam splitter, mirrors, optical filter, an optical spectrometer, and a data recorder. Measured in the forward direction along the path of the laser, the intensity of the emission spectra of the gaseous component, at wavelengths characteristic of the gas component being measured, are suppressed. Measured in the backward direction, the peak intensities characteristic of a given gaseous component will be wavelength shifted. These effects on peak intensity wavelengths are linearly dependent on the partial pressure of the compound being measured, but independent of the partial pressures of other gases which are present within the sample. The method and apparatus allow for efficient measurement of gaseous components.
Garrett, W.R.
1997-11-11
A method and apparatus are disclosed for measuring partial pressures of gaseous components within a mixture. The apparatus comprises generally at least one tunable laser source, a beam splitter, mirrors, optical filter, an optical spectrometer, and a data recorder. Measured in the forward direction along the path of the laser, the intensity of the emission spectra of the gaseous component, at wavelengths characteristic of the gas component being measured, are suppressed. Measured in the backward direction, the peak intensities characteristic of a given gaseous component will be wavelength shifted. These effects on peak intensity wavelengths are linearly dependent on the partial pressure of the compound being measured, but independent of the partial pressures of other gases which are present within the sample. The method and apparatus allow for efficient measurement of gaseous components. 9 figs.
Semi-blind Bayesian inference of CMB map and power spectrum
NASA Astrophysics Data System (ADS)
Vansyngel, Flavien; Wandelt, Benjamin D.; Cardoso, Jean-François; Benabed, Karim
2016-04-01
We present a new blind formulation of the cosmic microwave background (CMB) inference problem. The approach relies on a phenomenological model of the multifrequency microwave sky without the need for physical models of the individual components. For all-sky and high resolution data, it unifies parts of the analysis that had previously been treated separately such as component separation and power spectrum inference. We describe an efficient sampling scheme that fully explores the component separation uncertainties on the inferred CMB products such as maps and/or power spectra. External information about individual components can be incorporated as a prior giving a flexible way to progressively and continuously introduce physical component separation from a maximally blind approach. We connect our Bayesian formalism to existing approaches such as Commander, spectral mismatch independent component analysis (SMICA), and internal linear combination (ILC), and discuss possible future extensions.
Improved Round Trip Efficiency for Regenerative Fuel Cell Systems
2012-05-11
advanced components that enable closed-loop, zero-emission, low-signature energy storage. The system utilizes proton exchange membrane (PEM) fuel cell ... regenerative fuel cell (RFC) systems based on proton exchange membrane (PEM) technology. An RFC consists of a fuel cell powerplant, an electrolysis ... based on an air-independent, hydrogen-oxygen PEM RFC is feasible within the near term if development efforts proceed forward.
How Muscle Structure and Composition Influence Meat and Flesh Quality
Listrat, Anne; Lebret, Bénédicte; Louveau, Isabelle; Astruc, Thierry; Bonnet, Muriel; Lefaucheur, Louis; Picard, Brigitte; Bugeon, Jérôme
2016-01-01
Skeletal muscle consists of several tissues, such as muscle fibers and connective and adipose tissues. This review aims to describe the features of these various muscle components and their relationships with the technological, nutritional, and sensory properties of meat/flesh from different livestock and fish species. Thus, the contractile and metabolic types, size and number of muscle fibers, the content, composition and distribution of the connective tissue, and the content and lipid composition of intramuscular fat play a role in the determination of meat/flesh appearance, color, tenderness, juiciness, flavor, and technological value. Interestingly, the biochemical and structural characteristics of muscle fibers, intramuscular connective tissue, and intramuscular fat appear to play independent roles, which suggests that the properties of these various muscle components can be independently modulated by genetics or environmental factors to achieve production efficiency and improve meat/flesh quality. PMID:27022618
Modularization of gradient-index optical design using wavefront matching enabled optimization.
Nagar, Jogender; Brocker, Donovan E; Campbell, Sawyer D; Easum, John A; Werner, Douglas H
2016-05-02
This paper proposes a new design paradigm which allows for a modular approach to replacing a homogeneous optical lens system with a higher-performance GRadient-INdex (GRIN) lens system using a WaveFront Matching (WFM) method. In multi-lens GRIN systems, a full-system-optimization approach can be challenging due to the large number of design variables. The proposed WFM design paradigm enables optimization of each component independently by explicitly matching the WaveFront Error (WFE) of the original homogeneous component at the exit pupil, resulting in an efficient design procedure for complex multi-lens systems.
Separation of GRACE geoid time-variations using Independent Component Analysis
NASA Astrophysics Data System (ADS)
Frappart, F.; Ramillien, G.; Maisongrande, P.; Bonnet, M.
2009-12-01
Independent Component Analysis (ICA) is a blind separation method based on the simple assumptions of the independence of the sources and the non-Gaussianity of the observations. An approach based on this numerical method is used here to extract hydrological signals over land and oceans from the polluting striping noise, due to orbit repetitiveness, present in the GRACE global mass anomalies. We took advantage of the availability of monthly Level-2 solutions from three official providers (i.e., CSR, JPL and GFZ) that can be considered as different observations of the same phenomenon. The efficiency of the methodology is first demonstrated on a synthetic case. Applied to one month of GRACE solutions, it clearly separates the total water storage change from the meridional-oriented spurious gravity signals on the continents, but not over the oceans. For continental water storage, this technique gives results equivalent to those of the destriping method, with less smoothing of the hydrological patterns. This methodology is then used to filter the complete series of the 2002-2009 GRACE solutions.
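The unmixing step can be imitated on synthetic data, treating the three providers' monthly solutions as three observations of the same signals; everything below (grid size, mixing matrix, noise level) is invented for illustration and is not the authors' processing chain.

```python
# Hedged sketch: FastICA separation of a non-Gaussian "hydrology" map from
# a striping pattern, given three synthetic observations of the same month.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_pix = 10000                                    # flattened global grid (hypothetical)
hydrology = rng.laplace(size=n_pix)              # non-Gaussian geophysical signal
stripes = np.sin(np.linspace(0, 400 * np.pi, n_pix))   # meridional-stripe stand-in
noise = 0.1 * rng.standard_normal((n_pix, 3))

A = np.array([[1.0, 0.6],                        # hypothetical per-centre mixing
              [1.0, 0.5],
              [1.0, 0.7]])
X = np.column_stack([hydrology, stripes]) @ A.T + noise  # (n_pix, 3 "providers")

ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)                         # estimated signal and stripe maps
print(S.shape, ica.mixing_.shape)                # (10000, 2), (3, 2)
```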
Efficient use of bit planes in the generation of motion stimuli
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Stone, Leland S.
1988-01-01
The production of animated motion sequences on computer-controlled display systems presents a technical problem because large images cannot be transferred from disk storage to image memory at conventional frame rates. A technique is described in which a single base image can be used to generate a broad class of motion stimuli without the need for such memory transfers. This technique was applied to the generation of drifting sine-wave gratings (and by extension, sine wave plaids). For each drifting grating, sine and cosine spatial phase components are first reduced to 1 bit/pixel using a digital halftoning technique. The resulting pairs of 1-bit images are then loaded into pairs of bit planes of the display memory. To animate the patterns, the display hardware's color lookup table is modified on a frame-by-frame basis; for each frame the lookup table is set to display a weighted sum of the spatial sine and cosine phase components. Because the contrasts and temporal frequencies of the various components are mutually independent in each frame, the sine and cosine components can be counterphase modulated in temporal quadrature, yielding a single drifting grating. Using additional bit planes, multiple drifting gratings can be combined to form sine-wave plaid patterns. A large number of resultant plaid motions can be produced from a single image file because the temporal frequencies of all the components can be varied independently. For a graphics device having 8 bits/pixel, up to four drifting gratings may be combined, each having independently variable contrast and speed.
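A minimal numerical sketch of the quadrature trick described above; the hardware lookup-table step is emulated here by an explicit weighted sum of the two 1-bit planes, and the grating parameters and crude random-threshold halftone are illustrative assumptions rather than the original implementation.

```python
# Hedged sketch: animate a drifting grating from one pair of 1-bit planes
# by reweighting them in temporal quadrature on every frame.
import numpy as np

rng = np.random.default_rng(0)
nx, n_frames, cycles = 256, 32, 4.0              # hypothetical parameters
x = np.linspace(0, 2 * np.pi * cycles, nx)

# crude 1-bit halftones of the sine- and cosine-phase components
sin_plane = (0.5 * (1 + np.sin(x))[None, :] > rng.random((nx, nx))).astype(float)
cos_plane = (0.5 * (1 + np.cos(x))[None, :] > rng.random((nx, nx))).astype(float)

frames = []
for t in range(n_frames):
    phase = 2 * np.pi * t / n_frames             # one temporal cycle per sequence
    # cos(p)*sin(x) + sin(p)*cos(x) = sin(x + p): the counterphase-modulated
    # planes sum to a drifting grating (plus a DC term the lookup table absorbs)
    frames.append(np.cos(phase) * sin_plane + np.sin(phase) * cos_plane)
```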
NASA Astrophysics Data System (ADS)
Tang, Hong; Lin, Jian-Zhong
2013-01-01
An improved anomalous diffraction approximation (ADA) method is first presented for calculating the extinction efficiency of spheroids. In this approach, the extinction efficiency of spheroid particles can be calculated with good accuracy and high efficiency over a wider size range by combining the Latimer method and ADA theory, and the method yields a more general expression for the extinction efficiency of spheroid particles with various complex refractive indices and aspect ratios. Meanwhile, the visible spectral extinction for varied spheroid particle size distributions and complex refractive indices is surveyed. Furthermore, a selection principle for the spectral extinction data is developed based on PCA (principal component analysis) of the first-derivative spectral extinction. By calculating the contribution rate of the first-derivative spectral extinction, the spectra with more significant features can be selected as input data, while those with fewer features are removed from the inversion data. In addition, we propose an improved Tikhonov iteration method to retrieve the spheroid particle size distributions in the independent mode. Simulation experiments indicate that the spheroid particle size distributions obtained with the proposed method coincide fairly well with the given distributions, and this inversion method provides a simple, reliable and efficient way to retrieve spheroid particle size distributions from spectral extinction data.
Mossavar-Rahmani, Yasmin; Weng, Jia; Wang, Rui; Shaw, Pamela A; Jung, Molly; Sotres-Alvarez, Daniela; Castañeda, Sheila F; Gallo, Linda C; Gellman, Marc D; Qi, Qibin; Ramos, Alberto R; Reid, Kathryn J; Van Horn, Linda; Patel, Sanjay R
2017-12-01
Using a cross-sectional probability sample with actigraphy data and two 24-h dietary recalls, we quantified the association between sleep duration, continuity, variability and timing with the Alternative Healthy Eating Index-2010 diet quality score and its components in 2140 Hispanic Community Health Study/Study of Latinos participants. The Alternative Healthy Eating Index-2010 diet quality score ranges from 0 to 110, with higher scores indicating greater adherence to the dietary guidelines and lower risk of major chronic disease. None of the sleep measures was associated with total caloric intake as assessed using dietary recalls. However, both an increase in sleep duration and sleep efficiency were associated with healthier diet quality. Each standard deviation increase in sleep duration (1.05 h) and sleep efficiency (4.99%) was associated with a 0.30 point increase and 0.28 point increase, respectively, in the total Alternative Healthy Eating Index-2010 score. The component of the Alternative Healthy Eating Index-2010 most strongly associated with longer sleep duration was increased nuts and legumes intake. The components of the Alternative Healthy Eating Index-2010 most strongly associated with higher sleep efficiency were increased whole fruit intake and decreased sodium intake. Both longer sleep duration and higher sleep efficiency were significantly associated with better diet quality among US Hispanic/Latino adults. The dietary components most strongly associated with sleep duration and sleep efficiency differed, suggesting potentially independent mechanisms by which each aspect of sleep impacts dietary choices. Longitudinal research is needed to understand the directionality of these identified relationships and the generalizability of these data across other ethnic groups. © 2017 European Sleep Research Society.
2D surface temperature measurement of plasma facing components with modulated active pyrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amiel, S.; Loarer, T.; Pocheau, C.
2014-10-01
In nuclear fusion devices, such as Tore Supra, the plasma facing components (PFCs) are made of carbon. Such components are exposed to very high heat fluxes, and surface temperature measurement is mandatory for the safety of the device and also for efficient plasma scenario development. Besides being essential for evaluating these heat fluxes and thus for a better knowledge of the physics of plasma-wall interaction, this measurement is also required to monitor the fatigue of PFCs. Infrared (IR) systems are used to measure surface temperature in real time. For carbon PFCs, the emissivity is high and known (ε ~ 0.8), therefore the contribution of the reflected flux from the environment collected by the IR cameras can be neglected. However, future tokamaks such as WEST and ITER will be equipped with metallic PFCs (W and Be/W, respectively) with low and variable emissivities (ε ~ 0.1–0.4). Consequently, the reflected flux will contribute significantly to the flux collected by the IR camera. The modulated active pyrometry proposed in this paper, using a bicolor camera, allows a 2D surface temperature measurement independently of the reflected fluxes and the emissivity. Experimental results with a tungsten sample are reported and compared with simultaneous measurements performed with classical pyrometry (monochromatic and bichromatic), with and without reflected flux, demonstrating the efficiency of this method for surface temperature measurement independently of the reflected flux and the emissivity.
Factorial experiments: efficient tools for evaluation of intervention components.
Collins, Linda M; Dziak, John J; Kugler, Kari C; Trail, Jessica B
2014-10-01
An understanding of the individual and combined effects of a set of intervention components is important for moving the science of preventive medicine interventions forward. This understanding can often be achieved in an efficient and economical way via a factorial experiment, in which two or more independent variables are manipulated. The factorial experiment is a complement to the RCT; the two designs address different research questions. This article offers an introduction to factorial experiments aimed at investigators trained primarily in the RCT. The factorial experiment is compared and contrasted with other experimental designs used commonly in intervention science to highlight where each is most efficient and appropriate. Several points are made: factorial experiments make very efficient use of experimental subjects when the data are properly analyzed; a factorial experiment can have excellent statistical power even if it has relatively few subjects per experimental condition; and when conducting research to select components for inclusion in a multicomponent intervention, interactions should be studied rather than avoided. Investigators in preventive medicine and related areas should begin considering factorial experiments alongside other approaches. Experimental designs should be chosen from a resource management perspective, which states that the best experimental design is the one that provides the greatest scientific benefit without exceeding available resources. Copyright © 2014 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
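To make the efficiency argument concrete, here is a small sketch of a 2x2x2 factorial experiment on three hypothetical intervention components, analyzed with ordinary least squares so that all main effects and interactions are estimated from the same subjects; the component names, effect sizes and sample sizes are invented.

```python
# Hedged sketch: simulate and analyze a 2x2x2 factorial experiment.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
levels, n_per_cell = [-1, 1], 25                 # off/on coding, 8 cells x 25 = 200 subjects
rows = []
for a in levels:
    for b in levels:
        for c in levels:
            # hypothetical truth: two main effects plus an a:b synergy
            mu = 10 + 1.0 * a + 0.8 * b + 0.0 * c + 0.6 * a * b
            y = mu + rng.normal(0, 2, n_per_cell)
            rows.append(pd.DataFrame({"coaching": a, "feedback": b,
                                      "reminders": c, "y": y}))

df = pd.concat(rows, ignore_index=True)
fit = smf.ols("y ~ coaching * feedback * reminders", data=df).fit()
print(fit.summary().tables[1])                   # estimates for all 7 factorial terms
```

Every participant contributes to the estimate of every main effect, which is why the design needs relatively few subjects per experimental condition.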
Robust Measurement via A Fused Latent and Graphical Item Response Theory Model.
Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Ying, Zhiliang
2018-03-12
Item response theory (IRT) plays an important role in psychological and educational measurement. Unlike the classical testing theory, IRT models aggregate the item level information, yielding more accurate measurements. Most IRT models assume local independence, an assumption not likely to be satisfied in practice, especially when the number of items is large. Results in the literature and simulation studies in this paper reveal that misspecifying the local independence assumption may result in inaccurate measurements and differential item functioning. To provide more robust measurements, we propose an integrated approach by adding a graphical component to a multidimensional IRT model that can offset the effect of unknown local dependence. The new model contains a confirmatory latent variable component, which measures the targeted latent traits, and a graphical component, which captures the local dependence. An efficient proximal algorithm is proposed for the parameter estimation and structure learning of the local dependence. This approach can substantially improve the measurement, given no prior information on the local dependence structure. The model can be applied to measure both a unidimensional latent trait and multidimensional latent traits.
Pizzari, Tommaso; Jensen, Per; Cornwallis, Charles K.
2004-01-01
The phenotype-linked fertility hypothesis predicts that male sexual ornaments signal fertilizing efficiency and that the coevolution of male ornaments and female preference for such ornaments is driven by female pursuit of fertility benefits. In addition, directional testicular asymmetry frequently observed in birds has been suggested to reflect fertilizing efficiency and to covary with ornament expression. However, the idea of a phenotypic relationship between male ornaments and fertilizing efficiency is often tested in populations where environmental effects mask the underlying genetic associations between ornaments and fertilizing efficiency implied by this idea. Here, we adopt a novel design, which increases genetic diversity through the crossing of two divergent populations while controlling for environmental effects, to test: (i) the phenotypic relationship between male ornaments and both gonadal (testicular mass) and gametic (sperm quality) components of fertilizing efficiency; and (ii) the extent to which these components are phenotypically integrated in the fowl, Gallus gallus. We show that consistent with theory, the testosterone-dependent expression of a male ornament, the comb, predicted testicular mass. However, despite their functional inter-dependence, testicular mass and sperm quality were not phenotypically integrated. Consistent with this result, males of one parental population invested more in testicular and comb mass, whereas males of the other parental population had higher sperm quality. We found no evidence that directional testicular asymmetry covaried with ornament expression. These results shed new light on the evolutionary relationship between male fertilizing efficiency and ornaments. Although testosterone-dependent ornaments may covary with testicular mass and thus reflect sperm production rate, the lack of phenotypic integration between gonadal and gametic traits reveals that the expression of an ornament is unlikely to reflect the overall fertilizing efficiency of a male. PMID:15002771
Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K
2010-12-01
Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids, one can efficiently reconstruct the current source density (CSD) using the inverse CSD (iCSD) method. It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of the reconstructed CSD. The components obtained through decomposition of the CSD are better defined and allow easier physiological interpretation than the results of a similar analysis of the corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources, but this does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components, we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the timing and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
The Effectiveness and Efficiency of Disease Management Programs for Patients with Chronic Diseases
Hisashige, Akinori
2013-01-01
Objective: The disease management (DM) approach is increasingly advocated as a means of improving the effectiveness and efficiency of healthcare for chronic diseases. To evaluate the evidence on the effectiveness and efficiency of DM, an evidence synthesis was carried out. Methods: To locate eligible meta-analyses and systematic reviews, we searched Medline, EMBASE, the Cochrane Library, SCI-EXPANDED, SSCI, A&HCI, DARE, HTA and NHS EED from 1995 to 2010. Two reviewers independently extracted data and assessed study quality. Results: Twenty-eight meta-analyses and systematic reviews were included for synthesizing evidence. The proportion of articles that observed improvement with a reasonable amount of evidence was highest for process measures (69%), followed by health services (63%), QOL (57%), health outcomes (51%), satisfaction (50%), costs (38%) and so on. As to mortality, statistically significant results were observed only in coronary heart disease. Important components of DM, such as a multidisciplinary approach, were identified. Conclusion: The synthesized evidence shows considerable support for the effectiveness and efficiency of DM programs in process, health services, QOL and so on. The question is no longer whether DM programs work, but rather which type or component of DM programs works best and most efficiently in the context of each healthcare system or country. PMID:23445693
Huang, Huiyuan; Wang, Junjing; Seger, Carol; Lu, Min; Deng, Feng; Wu, Xiaoyan; He, Yuan; Niu, Chen; Wang, Jun; Huang, Ruiwang
2018-01-01
Long-term intensive gymnastic training can induce brain structural and functional reorganization. Previous studies have identified structural and functional network differences between world class gymnasts (WCGs) and non-athletes at the whole-brain level. However, it is still unclear how interactions within and between functional networks are affected by long-term intensive gymnastic training. We examined both intra- and inter-network functional connectivity of gymnasts relative to non-athletes using resting-state fMRI (R-fMRI). R-fMRI data were acquired from 13 WCGs and 14 non-athlete controls. Group-independent component analysis (ICA) was adopted to decompose the R-fMRI data into spatial independent components and associated time courses. An automatic component identification method was used to identify components of interest associated with resting-state networks (RSNs). We identified nine RSNs, the basal ganglia network (BG), sensorimotor network (SMN), cerebellum (CB), anterior and posterior default mode networks (aDMN/pDMN), left and right fronto-parietal networks (lFPN/rFPN), primary visual network (PVN), and extrastriate visual network (EVN). Statistical analyses revealed that the intra-network functional connectivity was significantly decreased within the BG, aDMN, lFPN, and rFPN, but increased within the EVN in the WCGs compared to the controls. In addition, the WCGs showed uniformly decreased inter-network functional connectivity between SMN and BG, CB, and PVN, BG and PVN, and pDMN and rFPN compared to the controls. We interpret this generally weaker intra- and inter-network functional connectivity in WCGs during the resting state as a result of greater efficiency in the WCGs' brain associated with long-term motor skill training.
Go Pink! The Effect of Secondary Quanta on Detective Quantum Efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watson, Scott
2017-09-05
Photons are never directly observable. Consequently, we often use photoelectric detectors (e.g., CCDs) to record associated photoelectrons statistically. Nonetheless, it is an implicit goal of radiographic detector designers to achieve the maximum possible detector efficiency. In part, the desire for ever higher efficiency has been due to the fact that detectors are far less expensive than the associated accelerator facilities (e.g., DARHT and PHERMEX). In addition, higher efficiency detectors often have better spatial resolution. Consequently, the optimization of the detector, not the accelerator, is the system component with the highest leverage per dollar. In recent years, imaging scientists have adopted the so-called Detective Quantum Efficiency, or DQE, as a summary measure of detector performance. Unfortunately, owing to the complex nature of the trade-space associated with detector components, and the natural desire for simplicity and low(er) cost, there has been a recent trend in Los Alamos to focus only on the zero-frequency efficiency, or DQE(0), when designing such systems. This narrow focus leads to system designs that neglect or even ignore the importance of high-spatial-frequency image components. In this paper we demonstrate the significant negative impact of these design choices on the Noise Power Spectrum (NPS) and recommend a more holistic approach to detector design. Here we present a statistical argument which indicates that a very large number (>20) of secondary quanta (typically visible light and/or recorded photo-electrons) are needed to take maximum advantage of the primary quanta (typically x-rays or protons) which are available to form an image. Since secondary particles come in bursts, they are not independent. In short, we want to maximize the pink nature of detector noise at DARHT.
Lakkaraju, Asvin K. K.; Thankappan, Ratheeshkumar; Mary, Camille; Garrison, Jennifer L.; Taunton, Jack; Strub, Katharina
2012-01-01
Mammalian cells secrete a large number of small proteins, but their mode of translocation into the endoplasmic reticulum is not fully understood. Cotranslational translocation was expected to be inefficient due to the small time window for signal sequence recognition by the signal recognition particle (SRP). Impairing the SRP pathway and reducing cellular levels of the translocon component Sec62 by RNA interference, we found an alternate, Sec62-dependent translocation path in mammalian cells required for the efficient translocation of small proteins with N-terminal signal sequences. The Sec62-dependent translocation occurs posttranslationally via the Sec61 translocon and requires ATP. We classified preproteins into three groups: 1) those that comprise ≤100 amino acids are strongly dependent on Sec62 for efficient translocation; 2) those in the size range of 120–160 amino acids use the SRP pathway, albeit inefficiently, and therefore rely on Sec62 for efficient translocation; and 3) those larger than 160 amino acids depend on the SRP pathway to preserve a transient translocation competence independent of Sec62. Thus, unlike in yeast, the Sec62-dependent translocation pathway in mammalian cells serves mainly as a fail-safe mechanism to ensure efficient secretion of small proteins and provides cells with an opportunity to regulate secretion of small proteins independent of the SRP pathway. PMID:22648169
Low thermal expansion seal ring support
Dewis, David W.; Glezer, Boris
2000-01-01
Today, the trend is to increase the operating temperature of gas turbine engines. Cooling the components with compressor discharge air robs air which could otherwise be used for combustion and makes the gas turbine engine less efficient. The present low thermal expansion sealing ring support system reduces the quantity of cooling air required while maintaining the life and longevity of the components. Additionally, the low thermal expansion sealing ring reduces the clearance "C","C'" required at the interface between the sealing surface and the tips of the plurality of turbine blades. The sealing ring is supported by a plurality of support members in a manner in which the sealing ring and the plurality of support members expand and contract independently of each other and of other gas turbine engine components.
Frequency-domain-independent vector analysis for mode-division multiplexed transmission
NASA Astrophysics Data System (ADS)
Liu, Yunhe; Hu, Guijun; Li, Jiao
2018-04-01
In this paper, we propose a demultiplexing method based on the frequency-domain independent vector analysis (FD-IVA) algorithm for mode-division multiplexing (MDM) systems. FD-IVA extends frequency-domain independent component analysis (FD-ICA) from univariate to multivariate variables, and provides an efficient way to eliminate the permutation ambiguity. In order to verify the performance of the FD-IVA algorithm, a 6 × 6 MDM system is simulated. The simulation results show that the FD-IVA algorithm has essentially the same bit-error-rate (BER) performance as the FD-ICA algorithm and the frequency-domain least mean squares (FD-LMS) algorithm. Meanwhile, the convergence speed of the FD-IVA algorithm is the same as that of FD-ICA. However, compared with FD-ICA and FD-LMS, FD-IVA has a markedly lower computational complexity.
A new transform for the analysis of complex fractionated atrial electrograms
2011-01-01
Background: Representation of independent biophysical sources using Fourier analysis can be inefficient because the basis is sinusoidal and general. When complex fractionated atrial electrograms (CFAE) are acquired during atrial fibrillation (AF), the electrogram morphology depends on the mix of distinct nonsinusoidal generators. Identification of these generators using efficient methods of representation and comparison would be useful for targeting catheter ablation sites to prevent arrhythmia reinduction. Method: A data-driven basis and transform is described which utilizes the ensemble average of signal segments to identify and distinguish CFAE morphologic components and frequencies. Calculation of the dominant frequency (DF) of actual CFAE, and identification of simulated independent generator frequencies and morphologies embedded in CFAE, is done using a total of 216 recordings from 10 paroxysmal and 10 persistent AF patients. The transform is tested versus Fourier analysis to detect spectral components in the presence of phase noise and interference. Correspondence is shown between ensemble basis vectors of highest power and corresponding synthetic drivers embedded in CFAE. Results: The ensemble basis is orthogonal, and efficient for representation of CFAE components as compared with Fourier analysis (p ≤ 0.002). When three synthetic drivers with additive phase noise and interference were decomposed, the top three peaks in the ensemble power spectrum corresponded to the driver frequencies more closely as compared with top Fourier power spectrum peaks (p ≤ 0.005). The synthesized drivers with phase noise and interference were extractable from their corresponding ensemble basis with a mean error of less than 10%. Conclusions: The new transform is able to efficiently identify CFAE features using DF calculation and by discerning morphologic differences. Unlike the Fourier transform method, it does not distort CFAE signals prior to analysis, and is relatively robust to jitter in periodic events. Thus the ensemble method can provide a useful alternative for quantitative characterization of CFAE during clinical study. PMID:21569421
Ammari, Faten; Jouan-Rimbaud-Bouveresse, Delphine; Boughanmi, Néziha; Rutledge, Douglas N
2012-09-15
The aim of this study was to find objective analytical methods to study the degradation of edible oils during heating and thus to suggest solutions to improve their stability. The efficiency of Nigella seed extract as a natural antioxidant was compared with that of butylated hydroxytoluene (BHT) during accelerated oxidation of edible vegetable oils at 120 and 140 °C. The modifications during heating were monitored by 3D front-face fluorescence spectroscopy along with Independent Components Analysis (ICA), (1)H NMR spectroscopy and classical physico-chemical methods such as anisidine value and viscosity. The results of the study clearly indicate that the natural seed extract at a level of 800 ppm exhibited antioxidant effects similar to those of the synthetic antioxidant BHT at a level of 200 ppm and thus contributes to an increase in the oxidative stability of the oil. Copyright © 2012 Elsevier B.V. All rights reserved.
Lu, Chi-Jie; Chang, Chi-Chang
2014-01-01
Sales forecasting plays an important role in operating a business since it can be used to determine the required inventory level to meet consumer demand and avoid the problem of under/overstocking. Improving the accuracy of sales forecasting has become an important issue of operating a business. This study proposes a hybrid sales forecasting scheme by combining independent component analysis (ICA) with K-means clustering and support vector regression (SVR). The proposed scheme first uses the ICA to extract hidden information from the observed sales data. The extracted features are then applied to the K-means algorithm for clustering the sales data into several disjoint clusters. Finally, SVR forecasting models are applied to each group to generate the final forecasting results. Experimental results from information technology (IT) product agent sales data reveal that the proposed sales forecasting scheme outperforms the three comparison models and hence provides an efficient alternative for sales forecasting.
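A compact sketch of the ICA + K-means + SVR scheme on synthetic weekly sales data; the sliding-window length, number of independent components, cluster count and SVR settings are illustrative choices, not those of the study.

```python
# Hedged sketch: ICA feature extraction, K-means clustering of the features,
# and one SVR forecaster per cluster, on synthetic one-step-ahead sales data.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.arange(400)
sales = 50 + 10 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 3, t.size)  # synthetic series

w = 12                                              # sliding-window length (lagged inputs)
X = np.array([sales[i:i + w] for i in range(len(sales) - w)])
y = sales[w:]                                       # one-step-ahead target

feats = FastICA(n_components=4, random_state=0).fit_transform(X)   # hidden features
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)

models, preds = {}, np.empty_like(y)
for k in np.unique(labels):                          # one SVR per cluster
    idx = labels == k
    models[k] = SVR(C=10.0, epsilon=0.1).fit(X[idx], y[idx])
    preds[idx] = models[k].predict(X[idx])
print("in-sample RMSE:", np.sqrt(np.mean((preds - y) ** 2)))
```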
Coherent Optical Memory with High Storage Efficiency and Large Fractional Delay
NASA Astrophysics Data System (ADS)
Chen, Yi-Hsin; Lee, Meng-Jung; Wang, I.-Chung; Du, Shengwang; Chen, Yong-Fan; Chen, Ying-Cheng; Yu, Ite A.
2013-02-01
A high-storage efficiency and long-lived quantum memory for photons is an essential component in long-distance quantum communication and optical quantum computation. Here, we report a 78% storage efficiency of light pulses in a cold atomic medium based on the effect of electromagnetically induced transparency. At 50% storage efficiency, we obtain a fractional delay of 74, which is the best record to date. The classical fidelity of the recalled pulse is better than 90% and nearly independent of the storage time, as confirmed by the direct measurement of phase evolution of the output light pulse with a beat-note interferometer. Such excellent phase coherence between the stored and recalled light pulses suggests that the current result may be readily applied to single photon wave packets. Our work significantly advances the technology of electromagnetically induced transparency-based optical memory and may find practical applications in long-distance quantum communication and optical quantum computation.
Coherent optical memory with high storage efficiency and large fractional delay.
Chen, Yi-Hsin; Lee, Meng-Jung; Wang, I-Chung; Du, Shengwang; Chen, Yong-Fan; Chen, Ying-Cheng; Yu, Ite A
2013-02-22
A high-storage efficiency and long-lived quantum memory for photons is an essential component in long-distance quantum communication and optical quantum computation. Here, we report a 78% storage efficiency of light pulses in a cold atomic medium based on the effect of electromagnetically induced transparency. At 50% storage efficiency, we obtain a fractional delay of 74, which is the best record to date. The classical fidelity of the recalled pulse is better than 90% and nearly independent of the storage time, as confirmed by the direct measurement of phase evolution of the output light pulse with a beat-note interferometer. Such excellent phase coherence between the stored and recalled light pulses suggests that the current result may be readily applied to single photon wave packets. Our work significantly advances the technology of electromagnetically induced transparency-based optical memory and may find practical applications in long-distance quantum communication and optical quantum computation.
Lefever, Marlies; Decuman, Saskia; Perl, François; Braeckman, Lutgart; Van de Velde, Dominique
2018-01-01
Disability management (DM) is a systematic method to ensure job retention and job reintegration in competitive employment for individuals with a disability. There is evidence that 'returning to work' has a positive impact on the individual, the company and society. However, a clear overview of the efficacy and efficiency of DM programs is scarce. The objective was to systematically review the efficacy and efficiency of disability management programs. Cochrane, PubMed, Google Scholar, and Web of Science were searched from 1994 to 2015. Two reviewers independently evaluated the articles on title, abstract, and full text. The data extraction and results are documented according to the study designs. Twenty-eight articles were included in the review. These 28 articles consisted of 7 systematic reviews, 3 randomized controlled trials, 9 clinical trials, 4 mixed-method studies and 5 qualitative studies. DM programs have been shown to be effective and efficient. A consensus about the DM components has still not been reached. Nevertheless, some components are emphasized more than others: job accommodation, facilitation of transitional duty, communication between all stakeholders, health care provider advice, early intervention, and acceptance, goodwill and trust in the stakeholders, in the organization, and in the disability management process.
NASA Technical Reports Server (NTRS)
2002-01-01
The Optical Vector Analyzer (OVA) 1550 significantly reduces the time and cost of testing sophisticated optical components. The technology grew from the research Luna Technologies' Dr. Mark Froggatt conducted on optical fiber strain measurement while working at Langley Research Center. Dr. Froggatt originally developed the technology for non-destructive evaluation testing at Langley. The new technique can provide 10,000 independent strain measurements while adding less than 10 grams to the weight of the vehicle. The OVA is capable of complete linear characterization of single-mode optical components used in high-bit-rate applications. The device can test most components over their full range in less than 30 seconds, compared to the more than 20 minutes required by other testing methods. The dramatically shortened measurement time results in increased efficiency in final acceptance tests of optical devices, and the comprehensive data produced by the instrument adds considerable value for component consumers. The device eliminates manufacturing bottlenecks, while reducing labor costs and wasted materials during production.
Automatic classification of artifactual ICA-components for artifact removal in EEG signals.
Winkler, Irene; Haufe, Stefan; Tangermann, Michael
2011-08-02
Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand-labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data from the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used data with different channel setups and from new subjects. Based on six features only, the optimized linear classifier performed at the level of the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. We propose a universal and efficient classifier of ICA components for the subject-independent removal of artifacts from EEG data. Based on linear methods, it is applicable for different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye and muscle artifacts. Its performance and generalization ability are demonstrated on data of different EEG studies.
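The workflow can be sketched roughly as follows, with scikit-learn's FastICA standing in for TDSEP, three simplified per-component features in place of the six optimized ones, and random placeholder labels where the study used expert ratings.

```python
# Hedged sketch: ICA decomposition of multichannel EEG, simple per-component
# features, and a sparse linear classifier of artifact vs. brain components.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 5000))               # 32 channels x samples (synthetic)

ica = FastICA(n_components=20, random_state=0)
sources = ica.fit_transform(eeg.T).T                # (20 components, samples)
patterns = ica.mixing_                              # spatial patterns, (32 channels, 20)

def features(s, a):
    """Temporal, spectral and spatial summaries of one component."""
    psd = np.abs(np.fft.rfft(s)) ** 2
    lowfreq_ratio = psd[:len(psd) // 10].sum() / psd.sum()   # crude spectral feature
    return [kurtosis(s), lowfreq_ratio, np.max(np.abs(a)) / np.mean(np.abs(a))]

X = np.array([features(s, a) for s, a in zip(sources, patterns.T)])
y = rng.integers(0, 2, size=len(X))                 # placeholder for expert labels
clf = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
print("artifact probability per component:", clf.predict_proba(X)[:, 1].round(2))
```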
Schach Von Wittenau, Alexis E.
2003-01-01
A method is provided to represent the calculated phase space of photons emanating from medical accelerators used in photon teletherapy. The method reproduces the energy distributions and trajectories of the photons originating in the bremsstrahlung target and of photons scattered by components within the accelerator head. The method reproduces the energy and directional information from sources up to several centimeters in radial extent, so it is expected to generalize well to accelerators made by different manufacturers. The method is computationally both fast and efficient, with an overall sampling efficiency of 80% or higher for most field sizes. The computational cost is independent of the number of beams used in the treatment plan.
Efficiency analysis of color image filtering
NASA Astrophysics Data System (ADS)
Fevralev, Dmitriy V.; Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Abramov, Sergey K.; Egiazarian, Karen O.; Astola, Jaakko T.
2011-12-01
This article addresses the conditions under which filtering can visibly improve image quality. The key points are the following. First, we analyze filtering efficiency for 25 test images from the color image database TID2008. This database allows assessing filter efficiency for images corrupted by different noise types at several levels of noise variance. Second, the limit of filtering efficiency is determined for independent and identically distributed (i.i.d.) additive noise and compared to the output mean square error of state-of-the-art filters. Third, component-wise and vector denoising are studied, and the latter approach is demonstrated to be more efficient. Fourth, using modern visual quality metrics, we determine for which levels of i.i.d. and spatially correlated noise the noise in original images, or the residual noise and distortions due to filtering in output images, is practically invisible. We also demonstrate that it is possible to roughly estimate whether or not the visual quality can clearly be improved by filtering.
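A toy version of the MSE comparison: i.i.d. Gaussian noise at several variances is added to a synthetic colour image and input versus output MSE is reported for a plain component-wise Gaussian smoother, which stands in for the state-of-the-art filters analyzed in the article.

```python
# Hedged sketch: input vs. output MSE at several i.i.d. noise levels,
# using a simple component-wise Gaussian smoother as the denoiser.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, size=(128, 128, 3))     # placeholder for a TID2008 image
clean = gaussian_filter(clean, sigma=(3, 3, 0))     # make it piecewise-smooth

for noise_var in (25, 100, 400):                    # several noise variances, as in the study
    noisy = clean + rng.normal(0, np.sqrt(noise_var), clean.shape)
    denoised = np.stack([gaussian_filter(noisy[..., c], sigma=1.2) for c in range(3)], axis=-1)
    mse_in = np.mean((noisy - clean) ** 2)
    mse_out = np.mean((denoised - clean) ** 2)
    print(f"var={noise_var}: input MSE={mse_in:.1f}, output MSE={mse_out:.1f}")
```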
Gardner, Andy; Smiseth, Per T
2011-01-22
In mammals, altricial birds and some invertebrates, parents care for their offspring by providing them with food and protection until independence. Although parental food provisioning is often essential for offspring survival and growth, very little is known about the conditions favouring the evolutionary innovation of this key component of care. Here, we develop a mathematical model for the evolution of parental food provisioning. We find that this evolutionary innovation is favoured when the efficiency of parental food provisioning is high relative to the efficiency of offspring self-feeding and/or parental guarding. We also explore the coevolution between food provisioning and other components of parental care, as well as offspring behaviour. We find that the evolution of food provisioning prompts evolutionary changes in other components of care by allowing parents to choose safer nest sites, and that it promotes the evolution of sibling competition, which in turn further drives the evolution of parental food provisioning. This mutual reinforcement of parental care and sibling competition suggests that evolution of parental food provisioning should show a unidirectional trend from no parental food provisioning to full parental food provisioning.
MHOST: An efficient finite element program for inelastic analysis of solids and structures
NASA Technical Reports Server (NTRS)
Nakazawa, S.
1988-01-01
An efficient finite element program for 3-D inelastic analysis of gas turbine hot section components was constructed and validated. A novel mixed iterative solution strategy is derived from the augmented Hu-Washizu variational principle in order to nodally interpolate coordinates, displacements, deformation, strains, stresses and material properties. A series of increasingly sophisticated material models incorporated in MHOST includes elasticity, secant plasticity, infinitesimal and finite deformation plasticity, creep, and the unified viscoplastic constitutive model proposed by Walker. A library of high performance elements is built into this computer program utilizing the concepts of selective reduced integration and independent strain interpolations. A family of efficient solution algorithms is implemented in MHOST for linear and nonlinear equation solution, including the classical Newton-Raphson, modified, quasi and secant Newton methods with optional line search, and the conjugate gradient method.
Technical Note: Independent component analysis for quality assurance in functional MRI.
Astrakas, Loukas G; Kallistis, Nikolaos S; Kalef-Ezra, John A
2016-02-01
Independent component analysis (ICA) is an established method of analyzing human functional MRI (fMRI) data. Here, an ICA-based fMRI quality control (QC) tool to be used with a commercial phantom was developed and applied. In an attempt to assess the performance of the tool relative to preexisting alternative tools, it was used seven weeks before and eight weeks after repair of a faulty gradient amplifier of a non-state-of-the-art MRI unit. More specifically, its performance was compared with the AAPM 100 acceptance testing and quality assurance protocol and with two fMRI QC protocols proposed by Friedman et al. ["Report on a multicenter fMRI quality assurance protocol," J. Magn. Reson. Imaging 23, 827-839 (2006)] and Stocker et al. ["Automated quality assurance routines for fMRI data applied to a multicenter study," Hum. Brain Mapp. 25, 237-246 (2005)], respectively. The easily developed and applied ICA-based QC protocol provided fMRI QC indices and maps as sensitive to fMRI instabilities as the indices and maps of the other established protocols. The ICA fMRI QC indices were highly correlated with the indices of other fMRI QC protocols and in some cases theoretically related to them. Three or four independent components with slowly varying time series are detected under normal conditions. ICA applied to phantom measurements is an easy and efficient tool for fMRI QC. Additionally, it can protect against misinterpretation of artifact components as human brain activations. Evaluating fMRI QC indices in the central region of a phantom is not always the optimal choice.
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
Efficient Application of Continuous Fractional Component Monte Carlo in the Reaction Ensemble
2017-01-01
A new formulation of the Reaction Ensemble Monte Carlo technique (RxMC) combined with the Continuous Fractional Component Monte Carlo method is presented. This method is denoted by serial Rx/CFC. The key ingredient is that fractional molecules of either reactants or reaction products are present and that chemical reactions always involve fractional molecules. Serial Rx/CFC has the following advantages compared to other approaches: (1) One directly obtains chemical potentials of all reactants and reaction products. Obtained chemical potentials can be used directly as an independent check to ensure that chemical equilibrium is achieved. (2) Independent biasing is applied to the fractional molecules of reactants and reaction products. Therefore, the efficiency of the algorithm is significantly increased, compared to the other approaches. (3) Changes in the maximum scaling parameter of intermolecular interactions can be chosen differently for reactants and reaction products. (4) The number of fractional molecules is reduced. As a proof of principle, our method is tested for Lennard-Jones systems at various pressures and for various chemical reactions. Excellent agreement was found both for average densities and equilibrium mixture compositions computed using serial Rx/CFC, RxMC/CFCMC previously introduced by Rosch and Maginn (Journal of Chemical Theory and Computation, 2011, 7, 269–279), and the conventional RxMC approach. The serial Rx/CFC approach is also tested for the reaction of ammonia synthesis at various temperatures and pressures. Excellent agreement was found between results obtained from serial Rx/CFC, experimental results from literature, and thermodynamic modeling using the Peng–Robinson equation of state. The efficiency of reaction trial moves is improved by a factor of 2 to 3 (depending on the system) compared to the RxMC/CFCMC formulation by Rosch and Maginn. PMID:28737933
Lipid biomarker analysis for the quantitative analysis of airborne microorganisms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macnaughton, S.J.; Jenkins, T.L.; Cormier, M.R.
1997-08-01
There is an ever increasing concern regarding the presence of airborne microbial contaminants within indoor air environments. Exposure to such biocontaminants can give rise to a large number of different health effects, including infectious diseases, allergenic responses and respiratory problems. Biocontaminants typically found in indoor air environments include bacteria, fungi, algae, protozoa and dust mites. Mycotoxins, endotoxins, pollens and residues of organisms are also known to cause adverse health effects. A quantitative detection/identification technique independent of culturability, which assays both culturable and non-culturable biomass including endotoxin, is critical in defining risks from indoor air biocontamination. Traditionally, methods employed for the monitoring of microorganism numbers in indoor air environments involve classical culture-based techniques and/or direct microscopic counting. It has been repeatedly documented that viable microorganism counts account for only 0.1-10% of the total community detectable by direct counting. The classic viable microbiological approach does not provide accurate estimates of microbial fragments or other indoor air components that can act as antigens and induce or potentiate allergic responses. Although bioaerosol samplers are designed to damage the microbes as little as possible, microbial stress has been shown to result from air sampling, aerosolization and microbial collection. Higher collection efficiency results in greater cell damage, while less cell damage often results in lower collection efficiency. Filtration can collect particulates at almost 100% efficiency, but captured microorganisms may become dehydrated and damaged, resulting in non-culturability; however, the lipid biomarker assays described herein do not rely on cell culture. Lipids are components that are universally distributed throughout cells, providing a means of assessment independent of culturability.
Kairov, Ulykbek; Cantini, Laura; Greco, Alessandro; Molkenov, Askhat; Czerwinska, Urszula; Barillot, Emmanuel; Zinovyev, Andrei
2017-09-11
Independent Component Analysis (ICA) is a method that models gene expression data as an action of a set of statistically independent hidden factors. The output of ICA depends on a fundamental parameter: the number of components (factors) to compute. The optimal choice of this parameter, related to determining the effective data dimension, remains an open question in the application of blind source separation techniques to transcriptomic data. Here we address the question of optimizing the number of statistically independent components in the analysis of transcriptomic data for reproducibility of the components in multiple runs of ICA (within the same or within varying effective dimensions) and in multiple independent datasets. To this end, we introduce ranking of independent components based on their stability in multiple ICA computation runs and define a distinguished number of components (Most Stable Transcriptome Dimension, MSTD) corresponding to the point of the qualitative change of the stability profile. Based on a large body of data, we demonstrate that a sufficient number of dimensions is required for biological interpretability of the ICA decomposition and that the most stable components with ranks below MSTD have more chances to be reproduced in independent studies compared to the less stable ones. At the same time, we show that a transcriptomics dataset can be reduced to a relatively high number of dimensions without losing the interpretability of ICA, even though higher dimensions give rise to components driven by small gene sets. We suggest a protocol of ICA application to transcriptomics data with a possibility of prioritizing components with respect to their reproducibility that strengthens the biological interpretation. Computing too few components (much less than MSTD) is not optimal for interpretability of the results. The components ranked within MSTD range have more chances to be reproduced in independent studies.
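A minimal sketch of the stability-ranking idea, assuming a samples-by-genes expression matrix: FastICA is run with several random seeds and each component of a reference run is scored by its best absolute correlation with the components of the other runs. The matrix, the number of components and the number of runs are placeholders, and the published protocol (including the MSTD criterion itself) involves further steps.

```python
# Hedged sketch: rank ICA components by their reproducibility across runs.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1000))                # 200 samples x 1000 genes (synthetic)

def ica_sources(X, k, seed):
    return FastICA(n_components=k, random_state=seed, max_iter=1000).fit_transform(X)

k, n_runs = 10, 5
runs = [ica_sources(X, k, seed) for seed in range(n_runs)]
ref = runs[0]

stability = []
for j in range(k):
    # best absolute correlation of reference component j with any component of each other run
    best = [np.max(np.abs(np.corrcoef(ref[:, j], other.T)[0, 1:])) for other in runs[1:]]
    stability.append(np.mean(best))
order = np.argsort(stability)[::-1]                 # components ranked by reproducibility
print("stability-ranked components:", order, np.round(np.sort(stability)[::-1], 2))
```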
Independent component model for cognitive functions of multiple subjects using [15O]H2O PET images.
Park, Hae-Jeong; Kim, Jae-Jin; Youn, Tak; Lee, Dong Soo; Lee, Myung Chul; Kwon, Jun Soo
2003-04-01
An independent component model of multiple subjects' positron emission tomography (PET) images is proposed to explore the overall functional components involved in a task and to explain subject specific variations of metabolic activities under altered experimental conditions utilizing the Independent component analysis (ICA) concept. As PET images represent time-compressed activities of several cognitive components, we derived a mathematical model to decompose functional components from cross-sectional images based on two fundamental hypotheses: (1) all subjects share basic functional components that are common to subjects and spatially independent of each other in relation to the given experimental task, and (2) all subjects share common functional components throughout tasks which are also spatially independent. The variations of hemodynamic activities according to subjects or tasks can be explained by the variations in the usage weight of the functional components. We investigated the plausibility of the model using serial cognitive experiments of simple object perception, object recognition, two-back working memory, and divided attention of a syntactic process. We found that the independent component model satisfactorily explained the functional components involved in the task and discuss here the application of ICA in multiple subjects' PET images to explore the functional association of brain activations. Copyright 2003 Wiley-Liss, Inc.
Davis, Sarah J; Vale, Gillian L; Schapiro, Steven J; Lambeth, Susan P; Whiten, Andrew
2016-10-24
A vital prerequisite for cumulative culture, a phenomenon often asserted to be unique to humans, is the ability to modify behaviour and flexibly switch to more productive or efficient alternatives. Here, we first established an inefficient solution to a foraging task in five captive chimpanzee groups (N = 19). Three groups subsequently witnessed a conspecific using an alternative, more efficient, solution. When participants could successfully forage with their established behaviours, most individuals did not switch to this more efficient technique; however, when their foraging method became substantially less efficient, nine chimpanzees with socially-acquired information (four of whom witnessed additional human demonstrations) relinquished their old behaviour in favour of the more efficient one. Only a single chimpanzee in control groups, who had not witnessed a knowledgeable model, discovered this. Individuals who switched were later able to combine components of their two learned techniques to produce a more efficient solution than their extensively used, original foraging method. These results suggest that, although chimpanzees show a considerable degree of conservatism, they also have an ability to combine independent behaviours to produce efficient compound action sequences; one of the foundational abilities (or candidate mechanisms) for human cumulative culture.
CO Component Estimation Based on the Independent Component Analysis
NASA Astrophysics Data System (ADS)
Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo
2014-01-01
Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.
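A minimal sketch of kurtosis-based separation in the spirit of the abstract above, using scikit-learn's FastICA on toy mixtures standing in for the mock frequency maps; the source models and mixing matrix are purely illustrative, not the paper's simulation setup.

```python
# Hypothetical sketch: separate mixed "sky maps" with FastICA and rank the
# recovered components by kurtosis, the non-Gaussianity measure used above.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_pix = 5000
cmb = rng.normal(size=n_pix)                       # nearly Gaussian component
dust = rng.lognormal(sigma=1.0, size=n_pix)        # skewed foreground
co = np.where(rng.random(n_pix) < 0.02,
              rng.exponential(5.0, n_pix), 0.0)    # sparse, strongly non-Gaussian line emission

S_true = np.c_[cmb, dust, co]
A = np.array([[1.0, 0.8, 0.3],                     # toy frequency-dependent mixing (three bands)
              [1.0, 0.5, 0.1],
              [1.0, 1.2, 0.6]])
X = S_true @ A.T                                   # three mock frequency maps

S_est = FastICA(n_components=3, random_state=0).fit_transform(X)
print("component kurtosis:", kurtosis(S_est, axis=0))  # the CO-like component has the largest kurtosis
```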
Kelemen, Lóránd; Valkai, Sándor; Ormos, Pál
2006-04-20
A light-driven micrometer-sized mechanical motor is created by laser-light-induced two-photon photopolymerization. All necessary components of the engine are built upon a glass surface by an identical procedure and include the following: a rigid mechanical framework, a rotor freely rotating on an axis, and an integrated optical waveguide carrying the actuating light to the rotor. The resulting device is a practical stand-alone system. The light introduced into the integrated optical waveguide input of the motor provides the driving force: neither optical tweezers nor even a microscope is needed for its operation. The power and efficiency of the motor are evaluated. The independent unit is expected to become an important component of more complex integrated lab-on-a-chip devices.
Nguyen, Phuong H
2007-05-15
Principal component analysis is a powerful method for projecting the multidimensional conformational space of peptides or proteins onto lower dimensional subspaces in which the main conformations are present, making it easier to reveal the structures of molecules from, e.g., molecular dynamics simulation trajectories. However, the identification of all conformational states is still difficult if the subspaces consist of more than two dimensions. This is mainly because the principal components are not independent of each other, and states in the subspaces cannot be visualized. In this work, we propose a simple and fast scheme that allows one to obtain all conformational states in the subspaces. The basic idea is that instead of directly identifying the states in the subspace spanned by principal components, we first transform this subspace into another subspace formed by components that are independent of one another. These independent components are obtained from the principal components by employing the independent component analysis method. Because of the independence between components, all states in this new subspace are defined as all possible combinations of the states obtained from each single independent component. This makes the conformational analysis much simpler. We test the performance of the method by analyzing the conformations of the glycine tripeptide and the alanine hexapeptide. The analyses show that our method is simple and quickly reveals all conformational states in the subspaces. The folding pathways between the identified states of the alanine hexapeptide are analyzed and discussed in some detail. Copyright 2007 Wiley-Liss, Inc.
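A minimal sketch of the two-step transform described above (PCA subspace followed by ICA), assuming a frames-by-features array in place of real trajectory data; the median thresholding used to define 1-D states is a placeholder for a proper per-axis state assignment.

```python
# Hypothetical sketch: project frames onto leading principal components, then
# rotate that subspace with ICA so 1-D state assignments can be combined.
import numpy as np
from sklearn.decomposition import PCA, FastICA

def pca_then_ica(coords, n_dims=3):
    """coords: (n_frames, n_features), e.g. flattened Cartesian or dihedral data."""
    pcs = PCA(n_components=n_dims).fit_transform(coords)            # correlated subspace
    ics = FastICA(n_components=n_dims, random_state=0).fit_transform(pcs)
    return ics                                                      # approximately independent axes

# toy usage: states found on each IC axis can be combined as all observed
# combinations, instead of clustering in the 3-D subspace directly
frames = np.random.default_rng(1).normal(size=(2000, 30))
ics = pca_then_ica(frames)
labels_per_axis = [(ic > np.median(ic)).astype(int) for ic in ics.T]  # placeholder 1-D states
combined_states = list(zip(*labels_per_axis))
```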
Hand classification of fMRI ICA noise components.
Griffanti, Ludovica; Douaud, Gwenaëlle; Bijsterbosch, Janine; Evangelisti, Stefania; Alfaro-Almagro, Fidel; Glasser, Matthew F; Duff, Eugene P; Fitzgibbon, Sean; Westphal, Robert; Carone, Davide; Beckmann, Christian F; Smith, Stephen M
2017-07-01
We present a practical "how-to" guide to help determine whether single-subject fMRI independent components (ICs) characterise structured noise or not. Manual identification of signal and noise after ICA decomposition is required for efficient data denoising: to train supervised algorithms, to check the results of unsupervised ones or to manually clean the data. In this paper we describe the main spatial and temporal features of ICs and provide general guidelines on how to evaluate these. Examples of signal and noise components are provided from a wide range of datasets (3T data, including examples from the UK Biobank and the Human Connectome Project, and 7T data), together with practical guidelines for their identification. Finally, we discuss how the data quality, data type and preprocessing can influence the characteristics of the ICs and present examples of particularly challenging datasets. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Fong, Chii Shyang; Mazo, Gregory; Das, Tuhin; Goodman, Joshua; Kim, Minhee; O'Rourke, Brian P; Izquierdo, Denisse; Tsou, Meng-Fu Bryan
2016-07-02
Mitosis occurs efficiently, but when it is disturbed or delayed, p53-dependent cell death or senescence is often triggered after mitotic exit. To characterize this process, we conducted CRISPR-mediated loss-of-function screens using a cell-based assay in which mitosis is consistently disturbed by centrosome loss. We identified 53BP1 and USP28 as essential components acting upstream of p53, evoking p21-dependent cell cycle arrest in response not only to centrosome loss, but also to other distinct defects causing prolonged mitosis. Intriguingly, 53BP1 mediates p53 activation independently of its DNA repair activity, but requiring its interacting protein USP28 that can directly deubiquitinate p53 in vitro and ectopically stabilize p53 in vivo. Moreover, 53BP1 can transduce prolonged mitosis to cell cycle arrest independently of the spindle assembly checkpoint (SAC), suggesting that while SAC protects mitotic accuracy by slowing down mitosis, 53BP1 and USP28 function in parallel to select against disturbed or delayed mitosis, promoting mitotic efficiency.
High Storage Efficiency and Large Fractional Delay of EIT-Based Memory
NASA Astrophysics Data System (ADS)
Chen, Yi-Hsin; Lee, Meng-Jung; Wang, I.-Chung; Du, Shengwang; Chen, Yong-Fan; Chen, Ying-Cheng; Yu, Ite
2013-05-01
In long-distance quantum communication and optical quantum computation, an efficient and long-lived quantum memory is an important component. We first experimentally demonstrate that a time-space-reversing method plus the optimum pulse shape can improve the storage efficiency (SE) of light pulses to 78% in cold media based on the effect of electromagnetically induced transparency (EIT). We obtain a large fractional delay of 74 at 50% SE, which is the best record so far. The measured classical fidelity of the recalled pulse is higher than 90% and nearly independent of the storage time, implying that the optical memory maintains excellent phase coherence. These results suggest that the scheme may be readily applied to single-photon quantum states owing to the quantum nature of the EIT-based light-matter interaction. This study advances EIT-based quantum memory toward practical quantum information applications.
What Not To Do: Anti-patterns for Developing Scientific Workflow Software Components
NASA Astrophysics Data System (ADS)
Futrelle, J.; Maffei, A. R.; Sosik, H. M.; Gallager, S. M.; York, A.
2013-12-01
Scientific workflows promise to enable efficient scaling-up of researcher code to handle large datasets and workloads, as well as documentation of scientific processing via standardized provenance records, etc. Workflow systems and related frameworks for coordinating the execution of otherwise separate components are limited, however, in their ability to overcome software engineering design problems commonly encountered in pre-existing components, such as scripts developed externally by scientists in their laboratories. In practice, this often means that components must be rewritten or replaced in a time-consuming, expensive process. In the course of an extensive workflow development project involving large-scale oceanographic image processing, we have begun to identify and codify 'anti-patterns'--problematic design characteristics of software--that make components fit poorly into complex automated workflows. We have gone on to develop and document low-effort solutions and best practices that efficiently address the anti-patterns we have identified. The issues, solutions, and best practices can be used to evaluate and improve existing code, as well as to guide the development of new components. For example, we have identified a common anti-pattern we call 'batch-itis', in which a script fails and then cannot perform more work, even if that work is not precluded by the failure. The solution we have identified--removing unnecessary looping over independent units of work--is often easier to code than the anti-pattern, as it eliminates the need for complex control flow logic in the component. Other anti-patterns we have identified are similarly easy to identify and often easy to fix. We have drawn upon experience working with three science teams at Woods Hole Oceanographic Institution, each of which has designed novel imaging instruments and associated image analysis code. By developing use cases and prototypes within these teams, we have undertaken formal evaluations of software components developed by programmers with widely varying levels of expertise, and have been able to discover and characterize a number of anti-patterns. Our evaluation methodology and testbed have also enabled us to assess the efficacy of strategies to address these anti-patterns according to scientifically relevant metrics, such as the ability of algorithms to perform faster than the rate of data acquisition and the accuracy of workflow component output relative to ground truth. The set of anti-patterns and solutions we have identified augments the body of better-known software engineering anti-patterns by addressing additional concerns that arise when a software component has to function as part of a workflow assembled out of independently developed codebases. Our experience shows that identifying and resolving these anti-patterns reduces development time and improves performance without reducing component reusability.
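A hedged sketch of the 'batch-itis' fix mentioned in the abstract above: iterate over independent units of work and isolate failures instead of aborting the whole batch. The function and argument names are hypothetical, not an API from the project.

```python
# Hypothetical sketch of the 'batch-itis' fix: process each independent unit of
# work on its own, so one failure does not abort the remaining items.
# process_item and list_inputs are placeholder callables, not a real API.
import logging

def run_workflow_component(list_inputs, process_item):
    results, failures = [], []
    for item in list_inputs():
        try:
            results.append(process_item(item))     # each unit is independent
        except Exception as exc:                   # record the failure and keep going
            logging.warning("item %s failed: %s", item, exc)
            failures.append(item)
    return results, failures
```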
NASA Astrophysics Data System (ADS)
Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon
2018-05-01
The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the wide availability of spatial observations or with the sophistication of model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous for the complex task of comparing spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics, which further allow for a comparison of variables that are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
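For reference, a sketch of the three-component SPAEF calculation (correlation, coefficient-of-variation ratio, histogram overlap of z-scored fields), written from the published definition as we understand it; it should be checked against the original SPAEF formulation before use.

```python
# Hedged sketch of the SPAEF metric described above; verify against the
# published definition before applying it to real model output.
import numpy as np

def spaef(obs, sim, bins=100):
    obs, sim = obs.ravel(), sim.ravel()
    alpha = np.corrcoef(obs, sim)[0, 1]                          # pattern correlation
    beta = (sim.std() / sim.mean()) / (obs.std() / obs.mean())   # CV ratio (spread, bias-insensitive)
    z_obs = (obs - obs.mean()) / obs.std()                       # z-scoring allows differing units
    z_sim = (sim - sim.mean()) / sim.std()
    lo, hi = min(z_obs.min(), z_sim.min()), max(z_obs.max(), z_sim.max())
    h_obs, _ = np.histogram(z_obs, bins=bins, range=(lo, hi))
    h_sim, _ = np.histogram(z_sim, bins=bins, range=(lo, hi))
    gamma = np.minimum(h_obs, h_sim).sum() / h_obs.sum()         # histogram intersection
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
```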
Independent component analysis for automatic note extraction from musical trills
NASA Astrophysics Data System (ADS)
Brown, Judith C.; Smaragdis, Paris
2004-05-01
The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
Robust efficient video fingerprinting
NASA Astrophysics Data System (ADS)
Puri, Manika; Lubin, Jeffrey
2009-02-01
We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.
Automatic Classification of Artifactual ICA-Components for Artifact Removal in EEG Signals
2011-01-01
Background: Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. Methods: We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial, and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data from the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used data with different channel setups and from new subjects. Results: Based on six features only, the optimized linear classifier performed on a level with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. Conclusions: We propose a universal and efficient classifier of ICA components for the subject-independent removal of artifacts from EEG data. Based on linear methods, it is applicable for different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye and muscle artifacts. Its performance and generalization ability are demonstrated on data of different EEG studies. PMID:21810266
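A hedged stand-in for the sparse linear classifier described above, using an L1-regularized logistic regression rather than the authors' Linear Programming Machine; the six per-component features and labels here are synthetic placeholders.

```python
# Hedged stand-in: a sparse (L1) linear classifier over per-component features,
# illustrating the feature-selection idea; not the authors' exact LPM.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_ic_classifier(features, labels):
    """features: (n_components, n_features) spectral/spatial/temporal descriptors,
    labels: 1 = artifact, 0 = brain source."""
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    clf.fit(features, labels)
    return clf                                  # sparse weights indicate the selected features

# usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(640, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=640) > 0).astype(int)
clf = train_ic_classifier(X, y)
print(clf.coef_)                                # zeroed coefficients mark unused features
```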
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lonchampt, J.; Fessart, K.
2013-07-01
The purpose of this paper is to describe the method and tool dedicated to optimizing investment planning for industrial assets. These investments may either be preventive maintenance tasks, asset enhancements or logistic investments such as spare parts purchases. The three methodological points to investigate in such an issue are: 1. the measure of the profitability of a portfolio of investments; 2. the selection and planning of an optimal set of investments; 3. the measure of the risk of a portfolio of investments. The measure of the profitability of a set of investments in the IPOP tool is synthesised in the Net Present Value (NPV) indicator. The NPV is the sum of the differences of discounted cash flows (direct costs, forced outages...) between the situations with and without a given investment. These cash flows are calculated through a pseudo-Markov reliability model representing independently the components of the industrial asset and the spare parts inventories. The component model has been widely discussed over the years but the spare part model is a new one based on some approximations that will be discussed. This model, referred to as the NPV function, takes an investment portfolio as input and gives its NPV. The second issue is to optimize the NPV. If all investments were independent, this optimization would be an easy calculation; unfortunately, there are two sources of dependency. The first one is introduced by the spare part model: although components are indeed independent in their reliability models, the fact that several components use the same inventory induces a dependency. The second dependency comes from economic, technical or logistic constraints, such as a global maintenance budget limit or a safety requirement limiting the residual risk of failure of a component or group of components, making the aggregation of individual optima not necessarily feasible. The algorithm used to solve such a difficult optimization problem is a genetic algorithm. After a description of the features of the software, a test case is presented showing the influence of the optimization algorithm parameters on its efficiency in finding an optimal investment planning. (authors)
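A minimal sketch of the NPV indicator described above, computed as the sum of discounted differences in yearly cash flows between the situations with and without an investment; the numbers are illustrative and the sketch is not part of the IPOP tool.

```python
# Minimal sketch of the NPV indicator: discounted differences of yearly cash
# flows between the "with" and "without" investment situations.
def npv(cash_with, cash_without, rate):
    return sum((cw - cwo) / (1.0 + rate) ** t
               for t, (cw, cwo) in enumerate(zip(cash_with, cash_without), start=1))

# e.g. an investment that reduces forced-outage costs from year 3 onward (toy values)
without = [-100, -100, -400, -400, -400]     # yearly costs without the investment
with_inv = [-250, -100, -150, -150, -150]    # includes the up-front investment cost
print(round(npv(with_inv, without, rate=0.08), 1))
```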
NASA Astrophysics Data System (ADS)
Sahlan, Muhamad; Raihani Rahman, Mohammad
2017-07-01
One of the many applications of essential oils is as fragrance in perfumery. Menthol, benzyl acetate, and vanillin, each representing the olfactive characteristic of peppermint leaves, jasmine flowers, and vanilla beans, respectively, are commonly used in perfumery. These components are highly volatile, hence the fragrance components quickly evaporate, resulting in a short-lasting scent and low shelf life. In this research, these components were successfully encapsulated simultaneously inside Polyvinyl Alcohol (PVA) using a simple coacervation method to increase their shelf life. Optimization was carried out using a Central Composite Design with four independent variables, i.e. the compositions of menthol, benzyl acetate, vanillin, and tergitol 15-S-9 (as emulsifier). Encapsulation efficiency, loading capacity, and microcapsule size were measured. At the optimized composition of menthol (13.98 %w/w), benzyl acetate (14.75 %w/w), vanillin (17.84 %w/w), and tergitol 15-S-9 (13.4 %w/w), an encapsulation efficiency of 97.34% and a loading capacity of 46.46% were achieved. The mean microcapsule diameter is 20.24 μm, within a range of 2.011-36.24 μm. The final product was obtained in the form of cross-linked polyvinyl alcohol with a hydrogel consistency and an orange-to-yellow color.
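For reference, the conventional definitions of encapsulation efficiency (EE) and loading capacity (LC) are given below; these are the commonly used forms, and the study's exact formulas may differ.

```latex
% Conventional definitions (assumed; verify against the study's methods)
\mathrm{EE}\,(\%) = \frac{m_{\mathrm{encapsulated\ fragrance}}}{m_{\mathrm{fragrance\ added}}} \times 100,
\qquad
\mathrm{LC}\,(\%) = \frac{m_{\mathrm{encapsulated\ fragrance}}}{m_{\mathrm{microcapsules}}} \times 100
```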
NASA Astrophysics Data System (ADS)
Futko, S. I.; Koznacheev, I. A.; Ermolaeva, E. M.
2014-11-01
On the basis of thermodynamic calculations, the features of the combustion of a solid-fuel mixture based on the glycidyl azide polymer were investigated, the thermal cycle of the combustion chamber of a model engine system was analyzed, and the efficiency of this chamber was determined for a wide range of chamber pressures and different ratios between the components of the combustible mixture. It was established that, as the pressure in the combustion chamber of an engine system increases, two maxima arise successively in the dependence of the thermal efficiency of the chamber on the weight fractions of the components of the combustible mixture, and that the first maximum shifts toward smaller concentrations of the glycidyl azide polymer with increasing chamber pressure. The position of the second maximum is independent of this pressure, coincides with the minimum in the dependence of the rate of combustion of the mixture, and corresponds to the point of its structural phase transition, at which the mole fractions of the carbon and oxygen atoms in the mixture are equal. The results obtained were interpreted on the basis of the Le Chatelier principle.
Real-space decoupling transformation for quantum many-body systems.
Evenbly, G; Vidal, G
2014-06-06
We propose a real-space renormalization group method to explicitly decouple into independent components a many-body system that, as in the phenomenon of spin-charge separation, exhibits separation of degrees of freedom at low energies. Our approach produces a branching holographic description of such systems that opens the path to the efficient simulation of the most entangled phases of quantum matter, such as those whose ground state violates a boundary law for entanglement entropy. As in the coarse-graining transformation of Vidal [Phys. Rev. Lett. 99, 220405 (2007)].
Co-Optima Targets Maximum Transportation Sector Efficiency, Energy Independence and Industry Growth
NREL News
2017-02-06
Jäncke, Lutz; Alahmadi, Nsreen
2016-04-13
The measurement of brain activation during music listening is a topic that is attracting increased attention from many researchers. Because of their high spatial accuracy, functional MRI measurements are often used for measuring brain activation in the context of music listening. However, this technique faces the issues of contaminating scanner noise and an uncomfortable experimental environment. Electroencephalogram (EEG), however, is a neural registration technique that allows the measurement of neurophysiological activation in silent and more comfortable experimental environments. Thus, it is optimal for recording brain activations during pleasant music stimulation. Using a new mathematical approach to calculate intracortical independent components (sLORETA-IC) on the basis of scalp-recorded EEG, we identified specific intracortical independent components during listening of a musical piece and scales, which differ substantially from intracortical independent components calculated from the resting state EEG. Most intracortical independent components are located bilaterally in perisylvian brain areas known to be involved in auditory processing and specifically in music perception. Some intracortical independent components differ between the music and scale listening conditions. The most prominent difference is found in the anterior part of the perisylvian brain region, with stronger activations seen in the left-sided anterior perisylvian regions during music listening, most likely indicating semantic processing during music listening. A further finding is that the intracortical independent components obtained for the music and scale listening are most prominent in higher frequency bands (e.g. beta-2 and beta-3), whereas the resting state intracortical independent components are active in lower frequency bands (alpha-1 and theta). This new technique for calculating intracortical independent components is able to differentiate independent neural networks associated with music and scale listening. Thus, this tool offers new opportunities for studying neural activations during music listening using the silent and more convenient EEG technology.
Motion estimation in the frequency domain using fuzzy c-planes clustering.
Erdem, C E; Karabulut, G Z; Yanmaz, E; Anarim, E
2001-01-01
A recent work explicitly models the discontinuous motion estimation problem in the frequency domain, where the motion parameters are estimated using a harmonic retrieval approach. The vertical and horizontal components of the motion are independently estimated from the locations of the peaks of the respective periodogram analyses, and they are paired to obtain the motion vectors using a previously proposed procedure. In this paper, we present a more efficient method that replaces the motion component pairing task and hence eliminates the problems of that pairing method. The method described in this paper uses the fuzzy c-planes (FCP) clustering approach to fit planes to three-dimensional (3-D) frequency domain data obtained from the peaks of the periodograms. Experimental results are provided to demonstrate the effectiveness of the proposed method.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2001-01-01
Independent Component Analysis is a recently developed technique for component extraction. This new method requires the statistical independence of the extracted components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. This technique has been used recently for the analysis of geophysical time series with the goal of investigating the causes of variability in observed data (i.e. an exploratory approach). We demonstrate with a data simulation experiment that, if initialized with a Principal Component Analysis, the Independent Component Analysis performs a rotation of the classical PCA (or EOF) solution. Unlike other Rotation Techniques (RT), this rotation uses no localization criterion; only the global generalization of decorrelation by statistical independence is used. This rotation of the PCA solution appears able to overcome the tendency of PCA to mix several physical phenomena, even when the signal is just their linear sum.
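A minimal sketch of the PCA-initialized ICA rotation described above, on toy multichannel time series: PCA first decorrelates the data, then ICA rotates the retained components toward statistical independence. All signals and the mixing matrix are synthetic.

```python
# Hypothetical sketch: initialize with PCA, then let ICA rotate the retained
# components so that mixed "physical modes" separate; toy time series only.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(2)
t = np.linspace(0, 20, 2000)
sources = np.c_[np.sin(2 * np.pi * 0.5 * t),             # oscillatory mode
                np.sign(np.sin(2 * np.pi * 0.11 * t)),   # quasi-square mode
                rng.laplace(size=t.size)]                 # impulsive mode
X = sources @ rng.normal(size=(3, 12))                    # 12 "observed" series (linear sums)

pcs = PCA(n_components=3).fit_transform(X)                # decorrelated but still mixed
ics = FastICA(n_components=3, random_state=0).fit_transform(pcs)  # rotation toward independence
```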
Independent component analysis decomposition of hospital emergency department throughput measures
NASA Astrophysics Data System (ADS)
He, Qiang; Chu, Henry
2016-05-01
We present a method adapted from medical sensor data analysis, viz. independent component analysis of electroencephalography data, to health system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as median times patients spent before they were admitted as an inpatient, before they were sent home, before they were seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming a set of performance measures collected at a site to a collection of outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of the conventional principal component analysis to show that the independent components are more suitable for understanding the data sets through visualizations.
Transthyretin Concentrations in Acute Stroke Patients Predict Convalescent Rehabilitation.
Isono, Naofumi; Imamura, Yuki; Ohmura, Keiko; Ueda, Norihide; Kawabata, Shinji; Furuse, Motomasa; Kuroiwa, Toshihiko
2017-06-01
For stroke patients, intensive nutritional management is an important and effective component of inpatient rehabilitation. Accordingly, acute care hospitals must detect and prevent malnutrition at an early stage. Blood transthyretin levels are widely used as a nutritional monitoring index in critically ill patients. Here, we analyzed the relationship between transthyretin levels during the acute phase and the Functional Independence Measure in stroke patients undergoing convalescent rehabilitation. We investigated 117 patients who were admitted to our hospital with acute ischemic or hemorrhagic stroke from February 2013 to October 2015 and subsequently transferred to convalescent hospitals after receiving acute treatment. Transthyretin concentrations were evaluated at 3 time points: at admission, and 5 and 10 days after admission. After categorizing patients into 3 groups according to the minimum transthyretin level, we analyzed the association between transthyretin and the Functional Independence Measure. In our patients, transthyretin levels decreased during the first 5 days after admission and recovered slightly during the subsequent 5 days. Notably, Functional Independence Measure efficiency was significantly associated with the decrease in transthyretin levels during the 5 days after admission. Patients with lower transthyretin levels had poorer Functional Independence Measure outcomes and tended not to be discharged to their own homes. A minimal transthyretin concentration (<10 mg/dL) is predictive of a poor outcome in stroke patients undergoing convalescent rehabilitation. In particular, an early decrease in transthyretin levels suggests restricted rehabilitation efficiency. Accordingly, transthyretin levels should be monitored in acute stroke patients to indicate mid-term rehabilitation prospects. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ludwick, J D; Moore, E B
1984-01-01
Safety and cost information is developed for the conceptual decommissioning of five different types of reference independent spent fuel storage installations (ISFSIs), each of which is being given consideration for interim storage of spent nuclear fuel in the United States. These include one water basin-type ISFSI (wet) and four dry ISFSIs (drywell, silo, vault, and cask). The reference ISFSIs include all component parts necessary for the receipt, handling and storage of spent fuel in a safe and efficient manner. Three decommissioning alternatives are studied to obtain comparisons between costs (in 1981 dollars), occupational radiation doses, and potential radiation doses to the public. The alternatives considered are: DECON (immediate decontamination), SAFSTOR (safe storage followed by deferred decontamination), and ENTOMB (entombment followed by long-term surveillance).
Solid Modeling of Crew Exploration Vehicle Structure Concepts for Mass Optimization
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
2006-01-01
Parametric solid and surface models of the crew exploration vehicle (CEV) command module (CM) structure concepts are developed for rapid finite element analyses, structural sizing and estimation of optimal structural mass. The effects of the structural configuration and critical design parameters on the stress distribution are visualized and examined to arrive at an efficient design. The CM structural components consist of the outer heat shield, inner pressurized crew cabin, ring bulkhead and spars. For this study, only the internal cabin pressure load case is considered. Component stress, deflection, margins of safety and mass are used as design goodness criteria. The design scenario is explored by changing the component thickness parameters and materials until an acceptable design is achieved. Aluminum alloy, titanium alloy and advanced composite material properties are considered for the stress analysis, and the results are compared as part of the lessons learned and to build up a structural component sizing knowledge base for future CEV technology support. This independent structural analysis and the design-scenario-based optimization process may also facilitate better CM structural definition and rapid prototyping.
Dawidowicz, Andrzej L; Czapczyńska, Natalia B; Wianowska, Dorota
2012-05-30
The influence of different Purge Times on the effectiveness of Pressurized Liquid Extraction (PLE) of volatile oil components from the cypress plant matrix (Cupressus sempervirens) was investigated, applying solvents of diverse extraction efficiencies. The results show a decrease in the mass yields of essential oil components with increasing Purge Time. The loss of extracted components depends on the extractant type - the greatest mass yield loss occurred in the case of non-polar solvents, whereas the smallest was found in polar extracts. Comparisons of the PLE method with the Sea Sand Disruption Method (SSDM), the Matrix Solid-Phase Dispersion Method (MSPD) and Steam Distillation (SD) were performed to assess the method's accuracy. Independent of the solvent and Purge Time applied in the PLE process, the total mass yield was lower than that obtained for the simple, short and relatively cheap low-temperature matrix disruption procedures - MSPD and SSDM. Thus, in the case of volatile oil analysis, the application of these methods is advisable. Copyright © 2012 Elsevier B.V. All rights reserved.
Connected Component Model for Multi-Object Tracking.
He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan
2016-08-01
In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from the adjacent two frames. Since straightforwardly obtaining data associations from multi-frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is the equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
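A hedged sketch of the equivalence-partitioning step suggested by the abstract above: observations linked by any feasible association (e.g. gated by spatial-temporal proximity) are grouped with union-find, so the global association problem splits into independent subproblems. This is only the partitioning step, not the authors' full CCM tracker.

```python
# Hedged sketch: split a multi-frame data association problem into independent
# subproblems by computing connected components over candidate associations.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def connected_subproblems(n_observations, candidate_pairs):
    """candidate_pairs: iterable of (i, j) observation indices that could belong
    to the same trajectory (e.g. gated by spatial-temporal proximity)."""
    uf = UnionFind(n_observations)
    for i, j in candidate_pairs:
        uf.union(i, j)
    groups = {}
    for k in range(n_observations):
        groups.setdefault(uf.find(k), []).append(k)
    return list(groups.values())        # each group can be solved independently
```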
Activated release of membrane-anchored TGF-alpha in the absence of cytosol
1993-01-01
The ectodomain of proTGF-alpha, a membrane-anchored growth factor, is converted into soluble TGF-alpha by a regulated cellular proteolytic system that recognizes proTGF-alpha via the C-terminal valine of its cytoplasmic tail. In order to define the biochemical components involved in proTGF-alpha cleavage, we have used cells permeabilized with streptolysin O (SLO) that have been extensively washed to remove cytosol. PMA, acting through a Ca(2+)-independent protein kinase C, activates cleavage as efficiently in permeabilized cells as it does in intact cells. ProTGF-alpha cleavage is also stimulated by GTP gamma S through a mechanism whose pharmacological properties suggest the involvement of a heterotrimeric G protein acting upstream of the PMA- sensitive Ca(2+)-independent protein kinase C. Activated proTGF-alpha cleavage is dependent on ATP hydrolysis, appears not to require vesicular traffic, and acts specifically on proTGF-alpha that has reached the cell surface. These results indicate that proTGF-alpha is cleaved from the cell surface by a regulated system whose signaling, recognition, and proteolytic components are retained in cells devoid of cytosol. PMID:8314849
Hur, Junho K.; Luo, Yicheng; Moon, Sungjin; Ninova, Maria; Marinov, Georgi K.; Chung, Yun D.; Aravin, Alexei A.
2016-01-01
The conserved THO/TREX (transcription/export) complex is critical for pre-mRNA processing and mRNA nuclear export. In metazoa, TREX is loaded on nascent RNA transcribed by RNA polymerase II in a splicing-dependent fashion; however, how TREX functions is poorly understood. Here we show that Thoc5 and other TREX components are essential for the biogenesis of piRNA, a distinct class of small noncoding RNAs that control expression of transposable elements (TEs) in the Drosophila germline. Mutations in TREX lead to defects in piRNA biogenesis, resulting in derepression of multiple TE families, gametogenesis defects, and sterility. TREX components are enriched on piRNA precursors transcribed from dual-strand piRNA clusters and colocalize in distinct nuclear foci that overlap with sites of piRNA transcription. The localization of TREX in nuclear foci and its loading on piRNA precursor transcripts depend on Cutoff, a protein associated with chromatin of piRNA clusters. Finally, we show that TREX is required for accumulation of nascent piRNA precursors. Our study reveals a novel splicing-independent mechanism for TREX loading on nascent RNA and its importance in piRNA biogenesis. PMID:27036967
Real-time Adaptive EEG Source Separation using Online Recursive Independent Component Analysis
Hsu, Sheng-Hsiou; Mullen, Tim; Jung, Tzyy-Ping; Cauwenberghs, Gert
2016-01-01
Independent Component Analysis (ICA) has been widely applied to electroencephalographic (EEG) biosignal processing and brain-computer interfaces. The practical use of ICA, however, is limited by its computational complexity, data requirements for convergence, and assumption of data stationarity, especially for high-density data. Here we study and validate an optimized online recursive ICA algorithm (ORICA) with online recursive least squares (RLS) whitening for blind source separation of high-density EEG data, which offers instantaneous incremental convergence upon presentation of new data. Empirical results of this study demonstrate the algorithm's (a) suitability for accurate and efficient source identification in high-density (64-channel) realistically-simulated EEG data; (b) capability to detect and adapt to non-stationarity in 64-ch simulated EEG data; and (c) utility for rapidly extracting principal brain and artifact sources in real 61-channel EEG data recorded by a dry and wearable EEG system in a cognitive experiment. ORICA was implemented as functions in BCILAB and EEGLAB and was integrated in an open-source Real-time EEG Source-mapping Toolbox (REST), supporting applications in ICA-based online artifact rejection, feature extraction for real-time biosignal monitoring in clinical environments, and adaptable classifications in brain-computer interfaces. PMID:26685257
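As a rough illustration of sample-by-sample adaptation, the sketch below uses a simplified online natural-gradient ICA update with a tanh nonlinearity on pre-whitened samples; it is a stand-in for the idea of incremental convergence, not the ORICA/RLS update derived in the paper.

```python
# Simplified online ICA sketch (natural-gradient rule, tanh nonlinearity).
# NOT the exact ORICA recursive-least-squares update; illustrative only.
import numpy as np

def online_ica(stream, n_channels, lr=1e-3):
    W = np.eye(n_channels)                              # unmixing matrix
    I = np.eye(n_channels)
    for x in stream:                                    # x: one pre-whitened sample vector
        y = W @ x
        W += lr * (I - np.outer(np.tanh(y), y)) @ W     # incremental natural-gradient step
    return W
```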
Młynarski, Wiktor
2014-01-01
To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
NASA Astrophysics Data System (ADS)
Liang, Sheng-Fu; Chen, Yi-Chun; Wang, Yu-Lin; Chen, Pin-Tzu; Yang, Chia-Hsiang; Chiueh, Herming
2013-08-01
Objective. Around 1% of the world's population is affected by epilepsy, and nearly 25% of patients cannot be treated effectively by available therapies. The presence of closed-loop seizure-triggered stimulation provides a promising solution for these patients. Realization of fast, accurate, and energy-efficient seizure detection is the key to such implants. In this study, we propose a two-stage on-line seizure detection algorithm with low-energy consumption for temporal lobe epilepsy (TLE). Approach. Multi-channel signals are processed through independent component analysis and the most representative independent component (IC) is automatically selected to eliminate artifacts. Seizure-like intracranial electroencephalogram (iEEG) segments are fast detected in the first stage of the proposed method and these seizures are confirmed in the second stage. The conditional activation of the second-stage signal processing reduces the computational effort, and hence energy, since most of the non-seizure events are filtered out in the first stage. Main results. Long-term iEEG recordings of 11 patients who suffered from TLE were analyzed via leave-one-out cross validation. The proposed method has a detection accuracy of 95.24%, a false alarm rate of 0.09/h, and an average detection delay time of 9.2 s. For the six patients with mesial TLE, a detection accuracy of 100.0%, a false alarm rate of 0.06/h, and an average detection delay time of 4.8 s can be achieved. The hierarchical approach provides a 90% energy reduction, yielding effective and energy-efficient implementation for real-time epileptic seizure detection. Significance. An on-line seizure detection method that can be applied to monitor continuous iEEG signals of patients who suffered from TLE was developed. An IC selection strategy to automatically determine the most seizure-related IC for seizure detection was also proposed. The system has advantages of (1) high detection accuracy, (2) low false alarm, (3) short detection latency, and (4) energy-efficient design for hardware implementation.
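A hedged sketch of the two-stage gating idea above: a cheap first-stage score screens each window, and the costlier confirmation runs only on windows that pass. The scoring functions, threshold, and data are placeholders, not the authors' detectors.

```python
# Hedged sketch of hierarchical (two-stage) detection: the expensive second
# stage runs only on windows flagged by a cheap first-stage screen.
import numpy as np

def detect(windows, cheap_score, expensive_confirm, threshold=3.0):
    alarms = []
    for i, w in enumerate(windows):
        if cheap_score(w) < threshold:        # stage 1: low-cost screen (most windows stop here)
            continue
        if expensive_confirm(w):              # stage 2: detailed confirmation
            alarms.append(i)
    return alarms

# toy usage with placeholder scoring functions
rng = np.random.default_rng(0)
windows = [rng.normal(size=256) for _ in range(100)]
cheap_score = lambda w: np.abs(np.diff(w)).mean() / 0.1     # e.g. line-length feature
expensive_confirm = lambda w: w.std() > 2.0                  # placeholder confirmation test
print(detect(windows, cheap_score, expensive_confirm))
```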
NASA Technical Reports Server (NTRS)
Tuma, Margaret L.
1995-01-01
To determine the feasibility of coupling the output of an optical fiber to a rib waveguide in a temperature environment ranging from 20 C to 300 C, a theoretical calculation of the coupling efficiency between the two was carried out. This is a significant problem which needs to be addressed to determine whether an integrated optic device can function in a harsh temperature environment. Because the behavior of the integrated-optic device is polarization sensitive, a polarization-preserving optical fiber, via its elliptical core, was used to couple light with a known polarization into the device. To couple light energy efficiently from an optical fiber into a channel waveguide, the design of both components should provide for well-matched electric field profiles. The rib waveguide analyzed was the light input channel of an integrated-optic pressure sensor. Due to the complex geometry of the rib waveguide, there is no analytical solution to the wave equation for the guided modes. Approximation or numerical techniques must be utilized to determine the propagation constants and field patterns of the guide. In this study, three solution methods were used to determine the field profiles of both the fiber and guide: the effective-index method (EIM), Marcatili's approximation, and a Fourier method. These methods were utilized independently to calculate the electric field profile of a rib channel waveguide and elliptical fiber at two temperatures, 20 C and 300 C. These temperatures were chosen to represent a nominal and a high temperature that the device would experience. Using the electric field profile calculated from each method, the theoretical coupling efficiency between the single-mode optical fiber and rib waveguide was calculated using the overlap integral, and the results of the techniques were compared. Initially, perfect alignment was assumed and the coupling efficiency calculated. Then, the coupling efficiency calculation was repeated for a range of transverse offsets at both temperatures. Results of the calculation indicate that a high coupling efficiency can be achieved when the two components are properly aligned. The coupling efficiency was more sensitive to alignment offsets in the y direction than in the x direction, due to the elliptical modal profile of both components. Changes in the coupling efficiency over temperature were found to be minimal.
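For reference, the scalar overlap-integral estimate of fiber-to-waveguide coupling efficiency commonly used for such calculations is given below; the study's exact expression may include additional vectorial or Fresnel factors.

```latex
% Standard scalar overlap-integral estimate (assumed form; the study's exact
% expression may differ):
\eta \;=\; \frac{\left|\iint E_f(x,y)\, E_w^{*}(x,y)\, dx\, dy\right|^{2}}
                {\iint \left|E_f(x,y)\right|^{2} dx\, dy \;\iint \left|E_w(x,y)\right|^{2} dx\, dy}
```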
Probabilistic structural analysis methods for improving Space Shuttle engine reliability
NASA Technical Reports Server (NTRS)
Boyce, L.
1989-01-01
Probabilistic structural analysis methods are particularly useful in the design and analysis of critical structural components and systems that operate in very severe and uncertain environments. These methods have recently found application in space propulsion systems to improve the structural reliability of Space Shuttle Main Engine (SSME) components. A computer program, NESSUS, based on a deterministic finite-element program and a method of probabilistic analysis (fast probability integration) provides probabilistic structural analysis for selected SSME components. While computationally efficient, it considers both correlated and nonnormal random variables as well as an implicit functional relationship between independent and dependent variables. The program is used to determine the response of a nickel-based superalloy SSME turbopump blade. Results include blade tip displacement statistics due to the variability in blade thickness, modulus of elasticity, Poisson's ratio or density. Modulus of elasticity significantly contributed to blade tip variability while Poisson's ratio did not. Thus, a rational method for choosing parameters to be modeled as random is provided.
Lattice Independent Component Analysis for Mobile Robot Localization
NASA Astrophysics Data System (ADS)
Villaverde, Ivan; Fernandez-Gauna, Borja; Zulueta, Ekaitz
This paper introduces an approach to appearance-based mobile robot localization using Lattice Independent Component Analysis (LICA). The Endmember Induction Heuristic Algorithm (EIHA) is used to select a set of Strong Lattice Independent (SLI) vectors, which can be assumed to be Affine Independent, and therefore candidates to be the endmembers of the data. Selected endmembers are used to compute the linear unmixing of the robot's acquired images. The resulting mixing coefficients are used as feature vectors for view recognition through classification. We show in a sample path experiment that our approach can recognise the location of the robot, and we compare the results with Independent Component Analysis (ICA).
Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion
NASA Astrophysics Data System (ADS)
Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.
2014-04-01
The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
NASA Astrophysics Data System (ADS)
Zilber, Nicolas A.; Katayama, Yoshinori; Iramina, Keiji; Erich, Wintermantel
2010-05-01
A new approach is proposed to test the efficiency of methods, such as the Kalman filter and independent component analysis (ICA), when applied to remove the artifacts induced by transcranial magnetic stimulation (TMS) from electroencephalography (EEG). Using EEG recordings corrupted by TMS induction, the shape of the artifacts is approximately described with a model based on an equivalent circuit simulation. These modeled artifacts are subsequently added to other EEG signals—this time not influenced by TMS. The resulting signals prove of interest since we also know their form without the pseudo-TMS artifacts. Therefore, they enable us to use a fit test to compare the signals we obtain after removing the artifacts with the original signals. This efficiency test turned out to be very useful for comparing the methods with one another, as well as for determining the filtering parameters that give satisfactory results with the automatic ICA.
Multi-spectrometer calibration transfer based on independent component analysis.
Liu, Yan; Xu, Hao; Xia, Zhenzhen; Gong, Zhiyong
2018-02-26
Calibration transfer is indispensable for practical applications of near infrared (NIR) spectroscopy due to the need for precise and consistent measurements across different spectrometers. In this work, a method for multi-spectrometer calibration transfer is described based on independent component analysis (ICA). A spectral matrix is first obtained by aligning the spectra measured on different spectrometers. Then, by using independent component analysis, the aligned spectral matrix is decomposed into the mixing matrix and the independent components of the different spectrometers. The measurement differences between spectrometers can then be standardized by correcting the coefficients within the independent components. Two NIR datasets of corn and edible oil samples measured with three and four spectrometers, respectively, were used to test the reliability of this method. The results for both datasets reveal that spectral measurements across different spectrometers can be transferred simultaneously and that partial least squares (PLS) models built with the measurements on one spectrometer can correctly predict spectra transferred from another.
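A heavily simplified sketch of the transfer idea above: spectra from a master and a slave spectrometer are decomposed jointly with FastICA, and the slave's mixing coefficients are mapped to the master's with a linear regression. This is an illustrative approximation, not the paper's exact coefficient-correction procedure.

```python
# Hedged sketch of ICA-based calibration transfer between two spectrometers;
# an illustrative simplification of the approach described above.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LinearRegression

def fit_transfer(master_spectra, slave_spectra, n_components=5):
    """Both inputs: (n_samples, n_wavelengths) spectra of the same standard samples."""
    ica = FastICA(n_components=n_components, random_state=0)
    ica.fit(np.vstack([master_spectra, slave_spectra]))     # shared independent components
    A_master = ica.transform(master_spectra)                # mixing coefficients per instrument
    A_slave = ica.transform(slave_spectra)
    mapping = LinearRegression().fit(A_slave, A_master)     # coefficient correction
    return ica, mapping

def transfer(ica, mapping, new_slave_spectra):
    coeffs = mapping.predict(ica.transform(new_slave_spectra))
    return ica.inverse_transform(coeffs)                    # spectra "as if" measured on the master
```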
Wakefield, Douglas S; Ward, Marcia M; Loes, Jean L; O'Brien, John
2010-01-01
We report how seven independent critical access hospitals collaborated with a rural referral hospital to standardize workflow policies and procedures while jointly implementing the same health information technologies (HITs) to enhance medication care processes. The study hospitals implemented the same electronic health record, computerized provider order entry, pharmacy information systems, automated dispensing cabinets (ADC), and barcode medication administration systems. We conducted interviews and examined project documents to explore factors underlying the successful implementation of ADC and barcode medication administration across the network hospitals. These included a shared culture of collaboration; strategic sequencing of HIT component implementation; interface among HIT components; strategic placement of ADCs; disciplined use and sharing of workflow analyses linked with HIT applications; planning for workflow efficiencies; acquisition of adequate supply of HIT-related devices; and establishing metrics to monitor HIT use and outcomes.
Performance Analysis of Hybrid Electric Vehicle over Different Driving Cycles
NASA Astrophysics Data System (ADS)
Panday, Aishwarya; Bansal, Hari Om
2017-02-01
This article examines the nature and response of a hybrid vehicle over various standard driving cycles. Road profile parameters play an important role in determining fuel efficiency. Typical road profile parameters can be reduced to a useful smaller set using principal component analysis and independent component analysis. The resultant data set obtained after this reduction yields a more appropriate and informative parameter cluster. With the reduced parameter set, fuel economies over various driving cycles are ranked using the TOPSIS and VIKOR multi-criteria decision-making methods. The ranking trend is then compared with the fuel economies achieved after driving the vehicle over the respective roads. The control strategy responsible for the power split is optimized using a genetic algorithm. A 1RC battery model and a modified SOC estimation method are considered for the simulation, and improved results compared with the default are obtained.
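A minimal TOPSIS sketch for ranking alternatives (e.g. driving cycles) by several criteria; the toy decision matrix, weights, and benefit/cost flags are illustrative, not values from the study.

```python
# Minimal TOPSIS sketch: rank alternatives by closeness to the ideal solution.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: (alternatives x criteria); benefit[j] True if larger is better."""
    norm = matrix / np.linalg.norm(matrix, axis=0)            # vector normalization
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(closeness)[::-1]                        # best alternative first

# toy decision matrix: e.g. fuel economy (benefit) and cycle aggressiveness (cost)
cycles = np.array([[18.2, 0.65], [22.5, 0.40], [20.1, 0.55]])
ranking = topsis(cycles, weights=np.array([0.7, 0.3]), benefit=np.array([True, False]))
print(ranking)
```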
Li, Ping; Stumpf, Maria; Müller, Rolf; Eichinger, Ludwig; Glöckner, Gernot; Noegel, Angelika A
2017-08-22
SUN1, a component of the LINC (Linker of Nucleoskeleton and Cytoskeleton) complex, functions in mammalian mRNA export through the NXF1-dependent pathway. It associates with mRNP complexes by direct interaction with NXF1. It also binds to the NPC through association with the nuclear pore component Nup153, which is involved in mRNA export. The SUN1-NXF1 association is at least partly regulated by a protein kinase C (PKC) which phosphorylates serine 113 (S113) in the N-terminal domain leading to reduced interaction. The phosphorylation appears to be important for the SUN1 function in nuclear mRNA export since GFP-SUN1 carrying a S113A mutation was less efficient in restoring mRNA export after SUN1 knockdown as compared to the wild type protein. By contrast, GFP-SUN1-S113D resembling the phosphorylated state allowed very efficient export of poly(A)+RNA. Furthermore, probing a possible role of the LINC complex component Nesprin-2 in this process we observed impaired mRNA export in Nesprin-2 knockdown cells. This effect might be independent of SUN1 as expression of a GFP tagged SUN-domain deficient SUN1, which no longer can interact with Nesprin-2, did not affect mRNA export.
NASA Astrophysics Data System (ADS)
Watanabe, Yuuki; Kawase, Kodo; Ikari, Tomofumi; Ito, Hiromasa; Ishikawa, Youichi; Minamide, Hiroaki
2003-10-01
We separated the component spatial patterns of frequency-dependent absorption in chemicals and frequency-independent components such as plastic, paper, and measurement noise in terahertz (THz) spectroscopic images, using known spectral curves. Our measurement system, which uses a widely tunable coherent THz-wave parametric oscillator source, can image at a specific frequency in the range 1-2 THz. The component patterns of chemicals can easily be extracted by use of the frequency-independent components. This method could be successfully used for nondestructive inspection for the detection of illegal drugs and devices of bioterrorism concealed, e.g., inside mail and packages.
An 81.6 μW FastICA processor for epileptic seizure detection.
Yang, Chia-Hsiang; Shih, Yi-Hsin; Chiueh, Herming
2015-02-01
To improve the performance of epileptic seizure detection, independent component analysis (ICA) is applied to multi-channel signals to separate artifacts and signals of interest. FastICA is an efficient algorithm to compute ICA. To reduce the energy dissipation, eigenvalue decomposition (EVD) is utilized in the preprocessing stage to reduce the convergence time of the iterative calculation of ICA components. EVD is computed efficiently through an array structure of processing elements running in parallel. An area-efficient EVD architecture is realized by leveraging the approximate Jacobi algorithm, leading to a 77.2% area reduction. By choosing a proper memory element and reduced wordlength, the power and area of the storage memory are reduced by 95.6% and 51.7%, respectively. The chip area is minimized through fixed-point implementation and architectural transformations. Given a latency constraint of 0.1 s, an 86.5% area reduction is achieved compared to the direct-mapped architecture. Fabricated in 90 nm CMOS, the core area of the chip is 0.40 mm². The FastICA processor, part of an integrated epileptic control SoC, dissipates 81.6 μW at 0.32 V. The computation delay of a frame of 256 samples for 8 channels is 84.2 ms. Compared to prior work, 0.5% power dissipation, 26.7% silicon area, and a 3.4× computation speedup are achieved. The performance of the chip was verified with a human dataset.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caruso, Francesco; Bellacicca, Andrea; Milani, Paolo, E-mail: pmilani@mi.infn.it
We report the rapid prototyping of passive electrical components (resistors and capacitors) on plain paper by an additive and parallel technology consisting of supersonic cluster beam deposition (SCBD) coupled with shadow mask printing. Cluster-assembled films have a growth mechanism substantially different from that of atom-assembled ones providing the possibility of a fine tuning of their electrical conduction properties around the percolative conduction threshold. Exploiting the precise control on cluster beam intensity and shape typical of SCBD, we produced, in a one-step process, batches of resistors with resistance values spanning a range of two orders of magnitude. Parallel plate capacitors with paper as the dielectric medium were also produced with capacitance in the range of tens of picofarads. Compared to standard deposition technologies, SCBD allows for a very efficient use of raw materials and the rapid production of components with different shape and dimensions while controlling independently the electrical characteristics. Discrete electrical components produced by SCBD are very robust against deformation and bending, and they can be easily assembled to build circuits with desired characteristics. The availability of large batches of these components enables the rapid and cheap prototyping and integration of electrical components on paper as building blocks of more complex systems.
Zuendorf, Gerhard; Kerrouche, Nacer; Herholz, Karl; Baron, Jean-Claude
2003-01-01
Principal component analysis (PCA) is a well-known technique for reduction of dimensionality of functional imaging data. PCA can be looked at as the projection of the original images onto a new orthogonal coordinate system with lower dimensions. The new axes explain the variance in the images in decreasing order of importance, showing correlations between brain regions. We used an efficient, stable and analytical method to work out the PCA of Positron Emission Tomography (PET) images of 74 normal subjects using [(18)F]fluoro-2-deoxy-D-glucose (FDG) as a tracer. Principal components (PCs) and their relation to age effects were investigated. Correlations between the projections of the images on the new axes and the age of the subjects were carried out. The first two PCs could be identified as being the only PCs significantly correlated to age. The first principal component, which explained 10% of the data set variance, was reduced only in subjects of age 55 or older and was related to loss of signal in and adjacent to ventricles and basal cisterns, reflecting expected age-related brain atrophy with enlarging CSF spaces. The second principal component, which accounted for 8% of the total variance, had high loadings from prefrontal, posterior parietal and posterior cingulate cortices and showed the strongest correlation with age (r = -0.56), entirely consistent with previously documented age-related declines in brain glucose utilization. Thus, our method showed that the effect of aging on brain metabolism has at least two independent dimensions. This method should have widespread applications in multivariate analysis of brain functional images. Copyright 2002 Wiley-Liss, Inc.
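A minimal Python sketch of this analysis pattern, assuming synthetic data in place of the FDG-PET scans: flattened images are decomposed with PCA and each subject's component scores are correlated with age. All variable names and sizes are illustrative.

```python
# Illustrative only: synthetic "images" with one age-related spatial pattern.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_subjects, n_voxels = 74, 5000
ages = rng.uniform(20, 80, n_subjects)

pattern = rng.standard_normal(n_voxels)                       # age-related spatial pattern
images = np.outer((ages - ages.mean()) / ages.std(), pattern)
images += 5.0 * rng.standard_normal((n_subjects, n_voxels))   # subject-specific noise

pca = PCA(n_components=10)
scores = pca.fit_transform(images)        # projection of each subject onto the PCs

# Correlate each component score with age, as in the analysis described above.
for k in range(scores.shape[1]):
    r = np.corrcoef(scores[:, k], ages)[0, 1]
    print(f"PC{k + 1}: {pca.explained_variance_ratio_[k]:.1%} of variance, r(age) = {r:+.2f}")
```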
On the rate of black hole binary mergers in galactic nuclei due to dynamical hardening
NASA Astrophysics Data System (ADS)
Leigh, N. W. C.; Geller, A. M.; McKernan, B.; Ford, K. E. S.; Mac Low, M.-M.; Bellovary, J.; Haiman, Z.; Lyra, W.; Samsing, J.; O'Dowd, M.; Kocsis, B.; Endlich, S.
2018-03-01
We assess the contribution of dynamical hardening by direct three-body scattering interactions to the rate of stellar-mass black hole binary (BHB) mergers in galactic nuclei. We derive an analytic model for the single-binary encounter rate in a nucleus with spherical and disc components hosting a super-massive black hole (SMBH). We determine the total number of encounters NGW needed to harden a BHB to the point that inspiral due to gravitational wave emission occurs before the next three-body scattering event. This is done independently for both the spherical and disc components. Using a Monte Carlo approach, we refine our calculations for NGW to include gravitational wave emission between scattering events. For astrophysically plausible models, we find that typically NGW ≲ 10. We find two separate regimes for the efficient dynamical hardening of BHBs: (1) spherical star clusters with high central densities, low-velocity dispersions, and no significant Keplerian component and (2) migration traps in discs around SMBHs lacking any significant spherical stellar component in the vicinity of the migration trap, which is expected due to effective orbital inclination reduction of any spherical population by the disc. We also find a weak correlation between the ratio of the second-order velocity moment to velocity dispersion in galactic nuclei and the rate of BHB mergers, where this ratio is a proxy for the ratio between the rotation- and dispersion-supported components. Because discs enforce planar interactions that are efficient in hardening BHBs, particularly in migration traps, they have high merger rates that can contribute significantly to the rate of BHB mergers detected by the advanced Laser Interferometer Gravitational-Wave Observatory.
NASA Astrophysics Data System (ADS)
Xu, B.
2017-12-01
Interferometric Synthetic Aperture Radar (InSAR) has the advantage of high spatial resolution, which enables measurement of line-of-sight (LOS) surface displacements with nearly complete spatial continuity, and a satellite perspective that permits viewing large areas of the Earth's surface quickly and efficiently. However, using InSAR to observe long-wavelength and small-magnitude deformation signals is still significantly limited by various unmodeled error sources, i.e., atmospheric delays, orbit-induced errors, and Digital Elevation Model (DEM) errors. Independent component analysis (ICA) is a probabilistic method for separating linearly mixed signals generated by different underlying physical processes. The signal sources which form the interferograms are statistically independent both in space and in time; thus, they can be separated by the ICA approach. The seismic behavior in the Los Angeles Basin is active, and the basin has experienced numerous moderate to large earthquakes since the early Pliocene. Hence, understanding the seismotectonic deformation in the Los Angeles Basin is important for analyzing seismic behavior. Compared with tectonic deformation, nontectonic deformation due to groundwater and oil extraction may be mainly responsible for the surface deformation in the Los Angeles Basin. Using the small baseline subset (SBAS) InSAR method, we extracted the surface deformation time series in the Los Angeles Basin over a time span of 7 years (September 27, 2003 to September 25, 2010). We then successfully separated the atmospheric noise from the InSAR time series and detected different processes caused by different mechanisms.
Yasukawa, Kazutaka; Nakamura, Kentaro; Fujinaga, Koichiro; Ikehara, Minoru; Kato, Yasuhiro
2017-09-12
Multiple transient global warming events occurred during the early Palaeogene. Although these events, called hyperthermals, have been reported from around the globe, geologic records for the Indian Ocean are limited. In addition, the recovery processes from relatively modest hyperthermals are less constrained than those from the severest and well-studied hothouse called the Palaeocene-Eocene Thermal Maximum. In this study, we constructed a new and high-resolution geochemical dataset of deep-sea sediments clearly recording multiple Eocene hyperthermals in the Indian Ocean. We then statistically analysed the high-dimensional data matrix and extracted independent components corresponding to the biogeochemical responses to the hyperthermals. The productivity feedback commonly controls and efficiently sequesters the excess carbon in the recovery phases of the hyperthermals via an enhanced biological pump, regardless of the magnitude of the events. Meanwhile, this negative feedback is independent of nannoplankton assemblage changes generally recognised in relatively large environmental perturbations.
Li, Yongfeng; Zhang, Jieqiu; Ma, Hua; Wang, Jiafu; Pang, Yongqiang; Feng, Dayi; Xu, Zhuo; Qu, Shaobo
2016-01-01
We propose the design of wideband birefringent metamaterials based on spoof surface plasmon polaritons (SSPPs). Spatial k-dispersion design of SSPP modes in metamaterials is adopted to achieve high-efficiency transmission of electromagnetic waves through the metamaterial layer. By anisotropic design, the transmission phase accumulation in the metamaterials can be independently modulated for the x- and y-polarized components of incident waves. Since the dispersion curve of SSPPs is nonlinear, frequency-dependent phase differences can be obtained between the two orthogonal components of transmitted waves. As an example, we demonstrate a microwave birefringent metamaterial composed of fishbone structures. The full-polarization-state conversions on the zero-longitude line of the Poincaré sphere can be fulfilled twice in 6–20 GHz for both linearly polarized (LP) and circularly polarized (CP) wave incidence. Besides, at a given frequency, the full-polarization-state conversion can be achieved by changing the polarization angle of the incident LP waves. Both the simulation and experiment results verify the high-efficiency polarization conversion functions of the birefringent metamaterial, including circular-to-circular, circular-to-linear (linear-to-circular), and linear-to-linear polarization conversions. PMID:27698443
Flexibly imposing periodicity in kernel independent FMM: A multipole-to-local operator approach
NASA Astrophysics Data System (ADS)
Yan, Wen; Shelley, Michael
2018-02-01
An important but missing component in the application of the kernel independent fast multipole method (KIFMM) is the capability for flexibly and efficiently imposing singly, doubly, and triply periodic boundary conditions. In most popular packages such periodicities are imposed with the hierarchical repetition of periodic boxes, which may give an incorrect answer due to the conditional convergence of some kernel sums. Here we present an efficient method to properly impose periodic boundary conditions using a near-far splitting scheme. The near-field contribution is directly calculated with the KIFMM method, while the far-field contribution is calculated with a multipole-to-local (M2L) operator which is independent of the source and target point distribution. The M2L operator is constructed with the far-field portion of the kernel function to generate the far-field contribution with the downward equivalent source points in KIFMM. This method guarantees that the sum of the near-field and far-field contributions converges pointwise to results satisfying periodicity and compatibility conditions. The computational cost of the far-field calculation has the same O(N) complexity as the FMM and is designed to be small by reusing the data computed by KIFMM for the near-field. The far-field calculations require no additional control parameters and obey the same theoretical error bound as KIFMM. We present accuracy and timing test results for the Laplace kernel in singly periodic domains and the Stokes velocity kernel in doubly and triply periodic domains.
Gerych, P; Yatsyshyn, R
2015-01-01
We studied the oxygen-independent reactions and phagocytic activity of macrophage cells in patients with chronic obstructive pulmonary disease (COPD) stage II-III combined with coronary heart disease (CHD). An increase in the oxygen-independent reactions of monocytes and neutrophils and a decrease in the parameters characterizing the functional state of phagocytic cells indicate reduced functional capacity of the macrophage phagocytic system (MPS) in patients with acute exacerbation of COPD, whether occurring alone or in combination with stable coronary heart disease (angina I-II FC). The severity of the immunodeficiency state, in terms of the cellular component of nonspecific immunity, in patients with acute exacerbation of COPD stage II-III and concomitant CHD increases with the progression of heart failure. Adding the drug combination of roflumilast and quercetin to basic therapy for COPD exacerbation and standard treatment of coronary artery disease normalizes the phagocytic indices of the MPS, indicating improved immune status, and improves myocardial perfusion as assessed by daily ECG monitoring.
Assessing value-for-money in maternal and newborn health.
Banke-Thomas, Aduragbemi; Madaj, Barbara; Kumar, Shubha; Ameh, Charles; van den Broek, Nynke
2017-01-01
Responding to increasing demands to demonstrate value-for-money (VfM) for maternal and newborn health interventions, and in the absence of VfM analysis in peer-reviewed literature, this paper reviews VfM components and methods, critiques their applicability, strengths and weaknesses, and proposes how VfM assessments can be improved. VfM comprises four components: economy, efficiency, effectiveness and cost-effectiveness. Both 'economy' and 'efficiency' can be assessed with detailed cost analysis utilising costs obtained from programme accounting data or generic cost databases. Before-and-after studies, case-control studies or randomised controlled trials can be used to assess 'effectiveness'. To assess 'cost-effectiveness', cost-effectiveness analysis (CEA), cost-utility analysis (CUA), cost-benefit analysis (CBA) or social return on investment (SROI) analysis are applicable. Generally, costs can be obtained from programme accounting data or existing generic cost databases. As such 'economy' and 'efficiency' are relatively easy to assess. However, 'effectiveness' and 'cost-effectiveness', which require establishment of the counterfactual, are more difficult to ascertain. Either a combination of CEA or CUA with tools for assessing other VfM components, or the independent use of CBA or SROI, are alternative approaches proposed to strengthen VfM assessments. Cross-cutting themes such as equity, sustainability, scalability and cultural acceptability should also be assessed, as they provide critical contextual information for interpreting VfM assessments. To select an assessment approach, consideration should be given to the purpose, data availability, stakeholders requiring the findings and perspectives of programme beneficiaries. Implementers and researchers should work together to improve the quality of assessments. Standardisation around definitions, methodology and effectiveness measures to be assessed would help.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2000-01-01
The use of the Principal Component Analysis technique for the analysis of geophysical time series has been questioned, in particular for its tendency to extract components that mix several physical phenomena even when the signal is just their linear sum. We demonstrate with a data simulation experiment that Independent Component Analysis, a recently developed technique, is able to solve this problem. This new technique requires the statistical independence of components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. Furthermore, ICA does not require additional a priori information such as the localization constraint used in Rotational Techniques.
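The contrast between second- and higher-order statistics can be illustrated with a small synthetic experiment (not the authors' data): three independent sources are linearly mixed, and the PCA and FastICA estimates are compared against the true sources.

```python
# Synthetic demonstration: ICA recovers the independent sources of a linear mixture,
# while PCA returns decorrelated but still mixed components.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t),                 # smooth oscillation
                np.sign(np.sin(3 * t)),        # square wave
                rng.laplace(size=t.size)]      # sparse, noise-like series
sources /= sources.std(axis=0)

mixing = np.array([[1.0, 1.0, 1.0],
                   [0.5, 2.0, 1.0],
                   [1.5, 1.0, 2.0]])
observed = sources @ mixing.T                  # the "measured" time series

ica_est = FastICA(n_components=3, random_state=0, max_iter=1000).fit_transform(observed)
pca_est = PCA(n_components=3).fit_transform(observed)

# Absolute correlations with the true sources: each ICA estimate matches a single
# source, whereas each PCA component correlates with several sources at once.
for name, est in [("ICA", ica_est), ("PCA", pca_est)]:
    corr = np.abs(np.corrcoef(est.T, sources.T)[:3, 3:])
    print(name, np.round(corr, 2))
```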
Hyperspectral functional imaging of the human brain
NASA Astrophysics Data System (ADS)
Toronov, Vladislav; Schelkanova, Irina
2013-03-01
We performed independent component analysis of hyperspectral functional near-infrared data acquired on humans during exercise and rest. We found that hyperspectral functional data acquired on the human brain require only two physiologically meaningful components to cover more than 50% of the temporal variance across hundreds of wavelengths. The analysis of the spectra of the independent components showed that these components could be interpreted as the results of changes in cerebral blood volume and blood flow. We also found significant contributions of water and cytochrome c oxidase to the changes associated with the independent components. Another remarkable effect of ICA was its good performance in filtering the data noise.
Pagani, Marco; Giuliani, Alessandro; Öberg, Johanna; De Carli, Fabrizio; Morbelli, Silvia; Girtler, Nicola; Arnaldi, Dario; Accardo, Jennifer; Bauckneht, Matteo; Bongioanni, Francesca; Chincarini, Andrea; Sambuceti, Gianmario; Jonsson, Cathrine; Nobili, Flavio
2017-07-01
Brain connectivity has been assessed in several neurodegenerative disorders investigating the mutual correlations between predetermined regions or nodes. Selective breakdown of brain networks during progression from normal aging to Alzheimer disease dementia (AD) has also been observed. Methods: We implemented independent-component analysis of 18F-FDG PET data in 5 groups of subjects with cognitive states ranging from normal aging to AD, including mild cognitive impairment (MCI) not converting or converting to AD, to disclose the spatial distribution of the independent components in each cognitive state and their accuracy in discriminating the groups. Results: We could identify spatially distinct independent components in each group, with generation of local circuits increasing proportionally to the severity of the disease. AD-specific independent components first appeared in the late-MCI stage and could discriminate converting MCI and AD from nonconverting MCI with an accuracy of 83.5%. Progressive disintegration of the intrinsic networks from normal aging to MCI to AD was inversely proportional to the conversion time. Conclusion: Independent-component analysis of 18F-FDG PET data showed a gradual disruption of functional brain connectivity with progression of cognitive decline in AD. This information might be useful as a prognostic aid for individual patients and as a surrogate biomarker in intervention trials. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Ebstein, Frédéric; Keller, Martin; Paschen, Annette; Walden, Peter; Seeger, Michael; Bürger, Elke; Krüger, Elke; Schadendorf, Dirk; Kloetzel, Peter-M.; Seifert, Ulrike
2016-01-01
Efficient processing of target antigens by the ubiquitin-proteasome-system (UPS) is essential for treatment of cancers by T cell therapies. However, immune escape due to altered expression of IFN-γ-inducible components of the antigen presentation machinery and consequent inefficient processing of HLA-dependent tumor epitopes can be one important reason for failure of such therapies. Here, we show that short-term co-culture of Melan-A/MART-1 tumor antigen-expressing melanoma cells with Melan-A/MART-126-35-specific cytotoxic T lymphocytes (CTL) led to resistance against CTL-induced lysis because of impaired Melan-A/MART-126-35 epitope processing. Interestingly, deregulation of p97/VCP expression, which is an IFN-γ-independent component of the UPS and part of the ER-dependent protein degradation pathway (ERAD), was found to be essentially involved in the observed immune escape. In support, our data demonstrate that re-expression of p97/VCP in Melan-A/MART-126-35 CTL-resistant melanoma cells completely restored immune recognition by Melan-A/MART-126-35 CTL. In conclusion, our experiments show that impaired expression of IFN-γ-independent components of the UPS can exert rapid immune evasion of tumor cells and suggest that tumor antigens processed by distinct UPS degradation pathways should be simultaneously targeted in T cell therapies to restrict the likelihood of immune evasion due to impaired antigen processing. PMID:27143649
García-Nicolás, Obdulio; Auray, Gaël; Sautter, Carmen A.; Rappe, Julie C. F.; McCullough, Kenneth C.; Ruggli, Nicolas; Summerfield, Artur
2016-01-01
Porcine reproductive and respiratory syndrome virus (PRRSV) represents a macrophage (MØ)-tropic virus which is unable to induce interferon (IFN) type I in its target cells. Nevertheless, infected pigs show a short but prominent systemic IFN alpha (IFN-α) response. A possible explanation for this discrepancy is the ability of plasmacytoid dendritic cells (pDC) to produce IFN-α in response to free PRRSV virions, independent of infection. Here, we show that the highly pathogenic PRRSV genotype 1 strain Lena is unique in not inducing IFN-α production in pDC, contrasting with systemic IFN-α responses found in infected pigs. We also demonstrate efficient pDC stimulation by PRRSV Lena-infected MØ, resulting in a higher IFN-α production than direct stimulation of pDC by PRRSV virions. This response was strain-independent, required integrin-mediated intercellular contact, intact actin filaments in the MØ and was partially inhibited by an inhibitor of neutral sphingomyelinase. Although infected MØ-derived exosomes stimulated pDC, an efficient delivery of the stimulatory component was dependent on a tight contact between pDC and the infected cells. In conclusion, with this mechanism the immune system can efficiently sense PRRSV, resulting in production of considerable quantities of IFN-α. This is adding complexity to the immunopathogenesis of PRRSV infections, as IFN-α should alert the immune system and initiate the induction of adaptive immune responses, a process known to be inefficient during infection of pigs. PMID:27458429
Microgravity heat pump for space station thermal management.
Domitrovic, R E; Chen, F C; Mei, V C; Spezia, A L
2003-01-01
A highly efficient recuperative vapor compression heat pump was developed and tested for its ability to operate independent of orientation with respect to gravity while maximizing temperature lift. The objective of such a heat pump is to increase the temperature of, and thus reduce the size of, the radiative heat rejection panels on spacecraft such as the International Space Station. Heat pump operation under microgravity was approximated by gravitational-independent experiments. Test evaluations included functionality, efficiency, and temperature lift. Commercially available components were used to minimize the costs of new hardware development. Testing was completed on two heat pump design iterations, LBU-I and LBU-II, for a variety of operating conditions under the variation of several system parameters, including orientation, evaporator water inlet temperature (EWIT), condenser water inlet temperature (CWIT), and compressor speed. The LBU-I system employed an AC motor, a belt-driven scroll compressor, and tube-in-tube heat exchangers. The LBU-II system used a direct-drive AC motor compressor assembly and plate heat exchangers. The LBU-II system in general outperformed the LBU-I system on all accounts. Results are presented for all systems, with particular attention to those states that perform with a COP of 4.5 +/- 10% and can maintain a temperature lift of 55 degrees F (30.6 degrees C) +/- 10%. A calculation of potential radiator area reduction shows that points with maximum temperature lift give the greatest potential for reduction, and that area reduction is a function of heat pump efficiency and a stronger function of temperature lift.
Ranking and averaging independent component analysis by reproducibility (RAICAR).
Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping
2008-06-01
Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data. Copyright 2007 Wiley-Liss, Inc.
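A simplified sketch of the RAICAR idea on synthetic data follows; the greedy alignment to the first realization and the plain averaging are simplifications of the published procedure, and all sizes are illustrative.

```python
# Run ICA several times with different seeds, align components across realizations by
# absolute spatial correlation, and rank each aligned component by its reproducibility.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_voxels, n_time, n_comp, n_runs = 3000, 120, 4, 8

# Synthetic data: a few spatial maps mixed over time, plus noise.
true_maps = rng.laplace(size=(n_comp, n_voxels))
time_courses = rng.standard_normal((n_time, n_comp))
data = time_courses @ true_maps + 0.5 * rng.standard_normal((n_time, n_voxels))

# Repeated spatial ICA realizations (fit on voxels-by-time data so the recovered
# sources are spatial maps).
runs = [FastICA(n_components=n_comp, random_state=seed, max_iter=1000)
        .fit_transform(data.T).T for seed in range(n_runs)]      # each (n_comp, n_voxels)

ref = runs[0]
aligned = np.zeros((n_comp, n_runs, n_voxels))
for r, maps in enumerate(runs):
    corr = np.abs(np.corrcoef(ref, maps)[:n_comp, n_comp:])      # (ref comps, run comps)
    order = corr.argmax(axis=1)                                  # greedy match to the reference
    signs = np.sign([np.corrcoef(ref[i], maps[order[i]])[0, 1] for i in range(n_comp)])
    aligned[:, r, :] = maps[order] * signs[:, None]

# Reproducibility score: mean absolute between-realization correlation per component.
for i in range(n_comp):
    cc = np.abs(np.corrcoef(aligned[i])[np.triu_indices(n_runs, 1)])
    print(f"component {i}: reproducibility {cc.mean():.2f}")

final_maps = aligned.mean(axis=1)    # selective averaging, simplified here to a plain average
```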
Therapy-induced brain reorganization patterns in aphasia.
Abel, Stefanie; Weiller, Cornelius; Huber, Walter; Willmes, Klaus; Specht, Karsten
2015-04-01
Both hemispheres are engaged in recovery from word production deficits in aphasia. Lexical therapy has been shown to induce brain reorganization even in patients with chronic aphasia. However, the interplay of factors influencing reorganization patterns still remains unresolved. We were especially interested in the relation between lesion site, therapy-induced recovery, and beneficial reorganization patterns. Thus, we applied intensive lexical therapy, which was evaluated with functional magnetic resonance imaging, to 14 chronic patients with aphasic word retrieval deficits. In a group study, we aimed to illuminate brain reorganization of the naming network in comparison with healthy controls. Moreover, we intended to analyse the data with joint independent component analysis to relate lesion sites to therapy-induced brain reorganization, and to correlate resulting components with therapy gain. As a result, we found peri-lesional and contralateral activations basically overlapping with premorbid naming networks observed in healthy subjects. Reduced activation patterns for patients compared to controls before training comprised damaged left hemisphere language areas, right precentral and superior temporal gyrus, as well as left caudate and anterior cingulate cortex. There were decreasing activations of bilateral visuo-cognitive, articulatory, attention, and language areas due to therapy, with stronger decreases for patients in right middle temporal gyrus/superior temporal sulcus, bilateral precuneus as well as left anterior cingulate cortex and caudate. The joint independent component analysis revealed three components indexing lesion subtypes that were associated with patient-specific recovery patterns. Activation decreases (i) of an extended frontal lesion disconnecting language pathways occurred in left inferior frontal gyrus; (ii) of a small frontal lesion were found in bilateral inferior frontal gyrus; and (iii) of a large temporo-parietal lesion occurred in bilateral inferior frontal gyrus and contralateral superior temporal gyrus. All components revealed increases in prefrontal areas. One component was negatively correlated with therapy gain. Therapy was associated exclusively with activation decreases, which could mainly be attributed to higher processing efficiency within the naming network. In our joint independent component analysis, all three lesion patterns disclosed involved deactivation of left inferior frontal gyrus. Moreover, we found evidence for increased demands on control processes. As expected, we saw partly differential reorganization profiles depending on lesion patterns. There was no compensatory deactivation for the large left inferior frontal lesion, with its less advantageous outcome probably being related to its disconnection from crucial language processing pathways. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Storage strategies of eddy-current FE-BI model for GPU implementation
NASA Astrophysics Data System (ADS)
Bardel, Charles; Lei, Naiguang; Udpa, Lalita
2013-01-01
In the past few years graphical processing units (GPUs) have shown tremendous improvements in computational throughput over standard CPU architectures. However, this comes at the cost of restructuring algorithms to meet the strengths and drawbacks of the GPU architecture. A major drawback is the limited memory, and hence the storage of FE stiffness matrices on the GPU is important. In contrast to storage on the CPU, the GPU storage format has a significant influence on the overall performance. This paper presents an investigation of a storage strategy in the implementation of a two-dimensional finite element-boundary integral (FE-BI) model for eddy current NDE applications on GPU architecture. Specifically, the high dimensional matrices are manipulated by examining the matrix structure and optimally splitting it into structurally independent component matrices for efficient storage and retrieval of each component. Results obtained using the proposed approach are compared to those of a conventional CPU implementation to validate the method.
The air-conditioning capacity of the human nose.
Naftali, Sara; Rosenfeld, Moshe; Wolf, Michael; Elad, David
2005-04-01
The nose is the front line defender of the respiratory system. Unsteady simulations in three-dimensional models have been developed to study transport patterns in the human nose and its overall air-conditioning capacity. The results suggested that the healthy nose can efficiently provide about 90% of the heat and the water fluxes required to condition the ambient inspired air to near alveolar conditions in a variety of environmental conditions and independent of variations in internal structural components. The anatomical replica of the human nose showed the best performance and was able to provide 92% of the heating and 96% of the moisture needed to condition the inspired air to alveolar conditions. A detailed analysis explored the relative contribution of endonasal structural components to the air-conditioning process. During a moderate breathing effort, about 11% reduction in the efficacy of nasal air-conditioning capacity was observed.
RHCV Telescope System Operations Manual
2018-01-05
hardware and software components. Several of the components are closely coupled and rely on one another, while others are largely independent. This ... attendant training. The use cases are briefly described in separate sections, and step-by-step instructions are presented. Each section begins on a new ...
Design and Analysis of a Stiffened Composite Structure Repair Concept
NASA Technical Reports Server (NTRS)
Przekop, Adam
2011-01-01
A design and analysis of a repair concept applicable to a stiffened thin-skin composite panel based on the Pultruded Rod Stitched Efficient Unitized Structure is presented. Since the repair concept is a bolted repair using metal components, it can easily be applied in the operational environment. Initial analyses are aimed at validating the finite element modeling approach by comparing with available test data. Once confidence in the analysis approach is established several repair configurations are explored and the most efficient one presented. Repairs involving damage to the top of the stiffener alone are considered in addition to repairs involving a damaged stiffener, flange and underlying skin. High fidelity finite element modeling techniques such as mesh-independent definition of compliant fasteners, elastic-plastic metallic material properties and geometrically nonlinear analysis are utilized in the effort. The results of the analysis are presented and factors influencing the design are assessed and discussed.
Fedotchev, A I
2010-01-01
A promising approach to non-pharmacological correction of stress-induced functional disorders in humans, based on a double negative feedback loop from the patient's EEG, was substantiated and experimentally tested. The approach involves the simultaneous use of narrow-frequency EEG oscillators, characteristic of each patient and recorded in real time, in two independent negative-feedback contours: the traditional contour of adaptive biofeedback and an additional contour of resonance stimulation. In the latter, the negative-feedback signals from the individual narrow-frequency EEG oscillators are not consciously recognized by the subject but serve to automatically modulate the parameters of the sensory stimulation. It was shown that combining the active (conscious perception) and passive (automatic modulation) use of negative-feedback signals from the patient's narrow-frequency EEG components opens the possibility of considerably increasing the efficiency of EEG biofeedback procedures.
DOD can save millions by using energy efficient centralized aircraft support systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1982-05-07
The ways the Department of Defense can save millions of dollars annually by using new energy efficient centralized aircraft support systems at certain Air Force and Navy bases are discussed. The Air Force and Navy have developed and installed several different systems and have realized some degree of success. However, each service has developed its systems independently. Consequently, there is no commonality between the services' systems which could permit economical procurements for standard servicewide systems. Standardization would also prevent duplication of design efforts by the services and minimize proliferation of aircraft support equipment. It also would allow the services to further reduce costs by combining requirements to assure the most economical quantities for buying system components. GAO makes specific recommendations to the Secretaries of Defense and the Air Force to develop standard systems and to install them at all bases where feasible and practical.
Park, Arnold; Yun, Tatyana; Vigant, Frederic; Pernet, Olivier; Won, Sohui T; Dawes, Brian E; Bartkowski, Wojciech; Freiberg, Alexander N; Lee, Benhur
2016-05-01
The budding of Nipah virus, a deadly member of the Henipavirus genus within the Paramyxoviridae, has been thought to be independent of the host ESCRT pathway, which is critical for the budding of many enveloped viruses. This conclusion was based on the budding properties of the virus matrix protein in the absence of other virus components. Here, we find that the virus C protein, which was previously investigated for its role in antagonism of innate immunity, recruits the ESCRT pathway to promote efficient virus release. Inhibition of ESCRT or depletion of the ESCRT factor Tsg101 abrogates the C enhancement of matrix budding and impairs live Nipah virus release. Further, despite the low sequence homology of the C proteins of known henipaviruses, they all enhance the budding of their cognate matrix proteins, suggesting a conserved and previously unknown function for the henipavirus C proteins.
Towards automatic lithological classification from remote sensing data using support vector machines
NASA Astrophysics Data System (ADS)
Yu, Le; Porwal, Alok; Holden, Eun-Jung; Dentith, Michael
2010-05-01
Remote sensing data can be used effectively as a means to build geological knowledge for poorly mapped terrains. Spectral remote sensing data from space- and air-borne sensors have been widely used for geological mapping, especially in areas of high outcrop density in arid regions. However, spectral remote sensing information by itself cannot be used efficiently for a comprehensive lithological classification of an area because (1) the diagnostic spectral response of a rock within an image pixel is conditioned by several factors, including atmospheric effects, the spectral and spatial resolution of the image, sub-pixel-level heterogeneity in the chemical and mineralogical composition of the rock, and the presence of soil and vegetation cover; and (2) it provides only surface information and is therefore highly sensitive to noise due to weathering, soil cover, and vegetation. Consequently, for efficient lithological classification, spectral remote sensing data need to be supplemented with other remote sensing datasets that provide geomorphological and subsurface geological information, such as a digital elevation model (DEM) and aeromagnetic data. Each of these datasets contains significant information about geology that, in conjunction, can potentially be used for automated lithological classification using supervised machine learning algorithms. In this study, the support vector machine (SVM), a kernel-based supervised learning method, was applied to automated lithological classification of a study area in northwestern India using remote sensing data, namely ASTER, DEM, and aeromagnetic data. Several digital image processing techniques were used to produce derivative datasets containing enhanced information relevant to lithological discrimination. A series of SVMs (trained using k-fold cross-validation with grid search) were tested using various combinations of input datasets selected from among 50 datasets, including the original 14 ASTER bands and 36 derivative datasets (14 principal component bands, 14 independent component bands, 3 band ratios, 3 DEM derivatives: slope, curvature and roughness, and 2 aeromagnetic derivatives: mean and variance of susceptibility) extracted from the ASTER, DEM and aeromagnetic data, in order to determine the optimal inputs that provide the highest classification accuracy. A combination of ASTER-derived independent components, principal components and band ratios, DEM-derived slope, curvature and roughness, and aeromagnetic-derived mean and variance of magnetic susceptibility provided the highest classification accuracy of 93.4% on independent test samples. A comparison of the classification results of the SVM with those of maximum likelihood (84.9%) and minimum distance (38.4%) classifiers clearly shows that the SVM algorithm returns much higher classification accuracy. Therefore, the SVM method can be used to produce quick and reliable geological maps from scarce geological information, which is still the case for many under-developed frontier regions of the world.
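The classification workflow can be sketched as follows (placeholder data, not the study's rasters): per-pixel features from the co-registered datasets are stacked, and an RBF-kernel SVM is tuned with a k-fold cross-validated grid search before evaluation on held-out samples.

```python
# Illustrative placeholder data standing in for stacked per-pixel features
# (e.g. ASTER-derived components, band ratios, DEM slope/curvature, magnetics).
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pixels, n_features = 2000, 10
X = rng.standard_normal((n_pixels, n_features))
y = np.digitize(X[:, 0] + 0.5 * X[:, 1], bins=[-1.0, 0.0, 1.0])   # toy lithology labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM tuned by grid search with k-fold cross-validation.
search = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid={"svc__C": [1, 10, 100], "svc__gamma": [0.01, 0.1, 1.0]},
    cv=5,
)
search.fit(X_train, y_train)
print("best parameters:", search.best_params_)
print("accuracy on held-out samples:", round(search.score(X_test, y_test), 3))
```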
Olejniczak, Małgorzata; Bast, Radovan; Saue, Trond; Pecul, Magdalena
2012-01-07
We report the implementation of nuclear magnetic resonance (NMR) shielding tensors within the four-component relativistic Kohn-Sham density functional theory including non-collinear spin magnetization and employing London atomic orbitals to ensure gauge origin independent results, together with a new and efficient scheme for assuring correct balance between the large and small components of a molecular four-component spinor in the presence of an external magnetic field (simple magnetic balance). To test our formalism we have carried out calculations of NMR shielding tensors for the HX series (X = F, Cl, Br, I, At), the Xe atom, and the Xe dimer. The advantage of simple magnetic balance scheme combined with the use of London atomic orbitals is the fast convergence of results (when compared with restricted kinetic balance) and elimination of linear dependencies in the basis set (when compared to unrestricted kinetic balance). The effect of including spin magnetization in the description of NMR shielding tensor has been found important for hydrogen atoms in heavy HX molecules, causing an increase of isotropic values of 10%, but negligible for heavy atoms.
Using independent component analysis for electrical impedance tomography
NASA Astrophysics Data System (ADS)
Yan, Peimin; Mo, Yulong
2004-05-01
Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring the probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations in the electrical conductivity of the human body. Because the conductivity distributions inside the body vary, EIT yields multi-channel data. In order to obtain all the information contained at different tissue locations, it is necessary to image the individual conductivity distributions. In this paper we consider applying ICA to EIT on the signal subspace (the individual conductivity distributions). Using ICA, the signal subspace is decomposed into statistically independent components. The individual conductivity distribution can then be reconstructed using the sensitivity theorem. Computer simulations show that the full information contained in the multi-conductivity distribution can be obtained with this method.
NASA Technical Reports Server (NTRS)
Leach, K.; Thulin, R. D.; Howe, D. C.
1982-01-01
A four stage, low pressure turbine component has been designed to power the fan and low pressure compressor system in the Energy Efficient Engine. Designs for a turbine intermediate case and an exit guide vane assembly also have been established. The components incorporate numerous technology features to enhance efficiency, durability, and performance retention. These designs reflect a positive step towards improving engine fuel efficiency on a component level. The aerodynamic and thermal/mechanical designs of the intermediate case and low pressure turbine components are presented and described. An overview of the predicted performance of the various component designs is given.
Selection of independent components based on cortical mapping of electromagnetic activity
NASA Astrophysics Data System (ADS)
Chan, Hui-Ling; Chen, Yong-Sheng; Chen, Li-Fen
2012-10-01
Independent component analysis (ICA) has been widely used to attenuate interference caused by noise components from the electromagnetic recordings of brain activity. However, the scalp topographies and associated temporal waveforms provided by ICA may be insufficient to distinguish functional components from artifactual ones. In this work, we proposed two component selection methods, both of which first estimate the cortical distribution of the brain activity for each component, and then determine the functional components based on the parcellation of brain activity mapped onto the cortical surface. Among all independent components, the first method can identify the dominant components, which have strong activity in the selected dominant brain regions, whereas the second method can identify those inter-regional associating components, which have similar component spectra between a pair of regions. For a targeted region, its component spectrum enumerates the amplitudes of its parceled brain activity across all components. The selected functional components can be remixed to reconstruct the focused electromagnetic signals for further analysis, such as source estimation. Moreover, the inter-regional associating components can be used to estimate the functional brain network. The accuracy of the cortical activation estimation was evaluated on the data from simulation studies, whereas the usefulness and feasibility of the component selection methods were demonstrated on the magnetoencephalography data recorded from a gender discrimination study.
NASA Astrophysics Data System (ADS)
Jaber, Khalid Mohammad; Alia, Osama Moh'd.; Shuaib, Mohammed Mahmod
2018-03-01
Finding the optimal parameters that can reproduce experimental data (such as the velocity-density relation and the specific flow rate) is a very important component of the validation and calibration of microscopic crowd dynamic models. Heavy computational demand during parameter search is a known limitation that exists in a previously developed model known as the Harmony Search-Based Social Force Model (HS-SFM). In this paper, a parallel-based mechanism is proposed to reduce the computational time and memory resource utilisation required to find these parameters. More specifically, two MATLAB-based multicore techniques (parfor and create independent jobs) using shared memory are developed by taking advantage of the multithreading capabilities of parallel computing, resulting in a new framework called the Parallel Harmony Search-Based Social Force Model (P-HS-SFM). The experimental results show that the parfor-based P-HS-SFM achieved a better computational time of about 26 h, an efficiency improvement of about 54% and a speedup factor of 2.196 times in comparison with the HS-SFM sequential processor. The performance of the P-HS-SFM using the create independent jobs approach is also comparable to parfor, with a computational time of 26.8 h, an efficiency improvement of about 30% and a speedup of 2.137 times.
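A Python analogue of this kind of parallel parameter search is sketched below (the paper itself uses MATLAB's parfor and independent-jobs mechanisms); the fitness function and parameter names are stand-ins for the crowd-model evaluation.

```python
# Candidate parameter sets are scored independently, so they map naturally onto a
# worker pool; the objective below is a toy stand-in for a crowd-simulation run.
import multiprocessing as mp
import numpy as np

def fitness(params):
    """Placeholder: error between simulated and experimental flow characteristics."""
    a, b = params
    return (a - 1.34) ** 2 + (b - 0.26) ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    candidates = [tuple(rng.uniform(0.0, 3.0, 2)) for _ in range(1000)]

    with mp.Pool() as pool:                     # one worker per available core
        scores = pool.map(fitness, candidates)  # evaluations run in parallel

    best = candidates[int(np.argmin(scores))]
    print("best candidate:", np.round(best, 3), "error:", round(min(scores), 4))
```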
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couto, J.A.
1975-06-01
Liquid hydrocarbons contained in Argentina's Pico Truncade natural gas caused a number of serious pipeline transmission and gas processing problems. Gas del Estado has installed a series of efficient liquid removal devices at the producing fields. A flow chart of the gasoline stripping process is illustrated, as are 2 types of heat exchangers. This process of gasoline stripping (gas condensate recovery) integrates various operations which normally are performed independently: separation of the poor condensate in the gas, stabilization of the same, and incorporation of the light components (products of the stabilization) in the main gas flow.
The approach to engineering tasks composition on knowledge portals
NASA Astrophysics Data System (ADS)
Novogrudska, Rina; Globa, Larysa; Schill, Alexsander; Romaniuk, Ryszard; Wójcik, Waldemar; Karnakova, Gaini; Kalizhanova, Aliya
2017-08-01
The paper presents an approach to engineering task composition on engineering knowledge portals. The specific features of engineering tasks are highlighted, and their analysis forms the basis for partial engineering task integration. A formal algebraic system for engineering task composition is proposed, allowing context-independent formal structures to be defined for describing the elements of engineering tasks. A method of engineering task composition is developed that allows partial calculation tasks to be integrated into general calculation tasks on engineering portals, performed on user request. The real-world scenario «Calculation of the strength for the power components of magnetic systems» is presented, confirming the applicability and efficiency of the proposed approach.
The Economics of Independent Living: Efficiency, Equity and Ethics.
ERIC Educational Resources Information Center
O'Shea, E.; Kennelly, B.
1996-01-01
This article explores the meaning of efficiency and equity in the context of independent living programs for people with disabilities. Conflicts in costs and trade-offs in various scenarios of the efficiency/equity equation are examined in terms of theories of utilitarianism, contractarianism, justice and mutual advantage, and justice as…
NASA Technical Reports Server (NTRS)
Murphy, M. R.; Awe, C. A.
1986-01-01
Six professionally active, retired captains rated the coordination and decisionmaking performances of sixteen aircrews while viewing videotapes of a simulated commercial air transport operation. The scenario featured a required diversion and a probable minimum fuel situation. Seven-point Likert-type scales were used to rate variables on the basis of a model of crew coordination and decisionmaking. The variables were based on concepts of, for example, decision difficulty, efficiency, and outcome quality, and on leader-subordinate concepts such as person- and task-oriented leader behavior and competency motivation of subordinate crewmembers. Five front-end variables of the model were in turn dependent variables for a hierarchical regression procedure. Decision efficiency, command reversal, and decision quality explained 46% of the variance in safety performance. Decision efficiency and the captain's quality of within-crew communications explained 60% of the variance in decision quality, an alternative substantive dependent variable to safety performance. The variances of decision efficiency, crew coordination, and command reversal were in turn explained 78%, 80%, and 60% by small numbers of preceding independent variables. A principal component, varimax factor analysis supported the model structure suggested by the regression analyses.
NMRPro: an integrated web component for interactive processing and visualization of NMR spectra.
Mohamed, Ahmed; Nguyen, Canh Hao; Mamitsuka, Hiroshi
2016-07-01
The popularity of using NMR spectroscopy in metabolomics and natural products has driven the development of an array of NMR spectral analysis tools and databases. Particularly, web applications are well used recently because they are platform-independent and easy to extend through reusable web components. Currently available web applications provide the analysis of NMR spectra. However, they still lack the necessary processing and interactive visualization functionalities. To overcome these limitations, we present NMRPro, a web component that can be easily incorporated into current web applications, enabling easy-to-use online interactive processing and visualization. NMRPro integrates server-side processing with client-side interactive visualization through three parts: a python package to efficiently process large NMR datasets on the server-side, a Django App managing server-client interaction, and SpecdrawJS for client-side interactive visualization. Demo and installation instructions are available at http://mamitsukalab.org/tools/nmrpro/. Contact: mohamed@kuicr.kyoto-u.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Modeling of organic solar cell using response surface methodology
NASA Astrophysics Data System (ADS)
Suliman, Rajab; Mitul, Abu Farzan; Mohammad, Lal; Djira, Gemechis; Pan, Yunpeng; Qiao, Qiquan
Polymer solar cells have drawn much attention during the past few decades due to their low manufacturing cost and compatibility with flexible substrates. In solution-processed organic solar cells, the optimal thickness, annealing temperature, and morphology are key components to achieving high efficiency. In this work, response surface methodology (RSM) is used to find optimal fabrication conditions for polymer solar cells. In order to optimize cell efficiency, a central composite design (CCD) with three independent variables (polymer concentration, polymer-fullerene ratio, and active-layer spinning speed) was used. Optimal device performance was achieved using a 10.25 mg/ml polymer concentration, a 0.42 polymer-fullerene ratio, and an active-layer spinning speed of 1624 rpm. The predicted response (efficiency) at the optimum stationary point was found to be 5.23% for the poly(diketopyrrolopyrrole-terthiophene) (PDPP3T)/PC60BM solar cells. Moreover, 97% of the variation in device performance was explained by the best model. Finally, the experimental results are consistent with the CCD prediction, which shows that this is a promising and appropriate model for optimizing device performance and fabrication conditions.
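A minimal sketch of the response-surface step follows, on simulated data rather than the reported experiments: a full second-order model in the three coded factors is fitted by least squares and its stationary point is solved for analytically.

```python
# Simulated responses only; the true optimum is planted at (0.2, -0.4, 0.5) in coded units.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (20, 3))                 # coded levels: concentration, ratio, spin speed
true_opt = np.array([0.2, -0.4, 0.5])
y = 5.2 - ((X - true_opt) ** 2).sum(axis=1) + 0.05 * rng.standard_normal(20)   # efficiency (%)

# Full second-order model: intercept, linear, interaction, and quadratic terms.
x1, x2, x3 = X.T
D = np.column_stack([np.ones(len(X)), x1, x2, x3,
                     x1 * x2, x1 * x3, x2 * x3,
                     x1 ** 2, x2 ** 2, x3 ** 2])
beta, *_ = np.linalg.lstsq(D, y, rcond=None)

# Stationary point of y = b0 + b.x + x'Bx, obtained from grad = b + 2Bx = 0.
b0, b_lin = beta[0], beta[1:4]
B = np.array([[beta[7],     beta[4] / 2, beta[5] / 2],
              [beta[4] / 2, beta[8],     beta[6] / 2],
              [beta[5] / 2, beta[6] / 2, beta[9]]])
x_star = -0.5 * np.linalg.solve(B, b_lin)
y_star = b0 + b_lin @ x_star + x_star @ B @ x_star
print("stationary point (coded units):", np.round(x_star, 2))
print("predicted response at the optimum:", round(float(y_star), 2))
```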
Response-reinforcer dependency and resistance to change.
Cançado, Carlos R X; Abreu-Rodrigues, Josele; Aló, Raquel Moreira; Hauck, Flávia; Doughty, Adam H
2018-01-01
The effects of the response-reinforcer dependency on resistance to change were studied in three experiments with rats. In Experiment 1, lever pressing produced reinforcers at similar rates after variable interreinforcer intervals in each component of a two-component multiple schedule. Across conditions, in the fixed component, all reinforcers were response-dependent; in the alternative component, the percentage of response-dependent reinforcers was 100, 50 (i.e., 50% response-dependent and 50% response-independent) or 10% (i.e., 10% response-dependent and 90% response-independent). Resistance to extinction was greater in the alternative than in the fixed component when the dependency in the former was 10%, but was similar between components when this dependency was 100 or 50%. In Experiment 2, a three-component multiple schedule was used. The dependency was 100% in one component and 10% in the other two. The 10% components differed on how reinforcers were programmed. In one component, as in Experiment 1, a reinforcer had to be collected before the scheduling of other response-dependent or independent reinforcers. In the other component, response-dependent and -independent reinforcers were programmed by superimposing a variable-time schedule on an independent variable-interval schedule. Regardless of the procedure used to program the dependency, resistance to extinction was greater in the 10% components than in the 100% component. These results were replicated in Experiment 3 in which, instead of extinction, VT schedules replaced the baseline schedules in each multiple-schedule component during the test. We argue that the relative change in dependency from Baseline to Test, which is greater when baseline dependencies are high rather than low, could account for the differential resistance to change in the present experiments. The inconsistencies in results across the present and previous experiments suggest that the effects of dependency on resistance to change are not well understood. Additional systematic analyses are important to further understand the effects of the response-reinforcer relation on resistance to change and to the development of a more comprehensive theory of behavioral persistence. © 2017 Society for the Experimental Analysis of Behavior.
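As a toy illustration of the superimposed-schedule procedure described above (not the experimental software), the sketch below simulates a variable-interval (VI) schedule delivering response-dependent reinforcers alongside an independent variable-time (VT) schedule delivering response-independent ones, and reports the resulting dependency percentage; all rates are illustrative.

```python
# Toy session: exponential interevent times approximate the "variable" schedules.
import numpy as np

rng = np.random.default_rng(0)
session_s, response_rate = 3600.0, 0.5          # 1-h session, about one response every 2 s

def event_times(mean_interval, horizon):
    """Event times with exponentially distributed intervals."""
    times, t = [], 0.0
    while t < horizon:
        t += rng.exponential(mean_interval)
        times.append(t)
    return np.array(times[:-1])

responses = event_times(1.0 / response_rate, session_s)
vi_armings = event_times(60.0, session_s)       # VI 60 s: armed reinforcer, collected by the next response
vt_deliveries = event_times(6.7, session_s)     # superimposed VT: delivered regardless of responding

# A VI reinforcer counts as response-dependent once a response occurs after it is armed.
collected = np.searchsorted(responses, vi_armings) < len(responses)
n_dep, n_indep = int(collected.sum()), len(vt_deliveries)
print(f"response-dependent share: {n_dep / (n_dep + n_indep):.0%}")
```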
Automatic removal of eye-movement and blink artifacts from EEG signals.
Gao, Jun Feng; Yang, Yong; Lin, Pan; Wang, Pei; Zheng, Chong Xun
2010-03-01
Frequent occurrence of electrooculography (EOG) artifacts leads to serious problems in interpreting and analyzing the electroencephalogram (EEG). In this paper, a robust method is presented to automatically eliminate eye-movement and eye-blink artifacts from EEG signals. Independent Component Analysis (ICA) is used to decompose EEG signals into independent components. Moreover, the features of topographies and power spectral densities of those components are extracted to identify eye-movement artifact components, and a support vector machine (SVM) classifier is adopted because it has higher performance than several other classifiers. The classification results show that feature-extraction methods are unsuitable for identifying eye-blink artifact components, and then a novel peak detection algorithm of independent component (PDAIC) is proposed to identify eye-blink artifact components. Finally, the artifact removal method proposed here is evaluated by the comparisons of EEG data before and after artifact removal. The results indicate that the method proposed could remove EOG artifacts effectively from EEG signals with little distortion of the underlying brain signals.
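As a rough illustration of the cleaning pipeline summarized above (decompose with ICA, flag artifact components, reconstruct), the sketch below applies scikit-learn's FastICA to simulated multichannel data. The paper identifies artifact components with topography and power-spectral-density features, an SVM, and the PDAIC peak detector; the kurtosis threshold used here is only a crude stand-in for that classification step.

```python
# Minimal sketch, not the authors' pipeline: ICA decomposition of simulated
# EEG, a kurtosis-based stand-in for artifact-component detection, and
# reconstruction with the flagged components zeroed out.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_samples, n_channels = 5000, 8
eeg = rng.standard_normal((n_samples, n_channels))   # placeholder background EEG
blink = np.zeros(n_samples)
blink[::500] = 8.0                                   # crude blink-like spikes
eeg[:, :2] += blink[:, None]                         # contaminate two "frontal" channels

ica = FastICA(n_components=n_channels, random_state=0, max_iter=1000)
sources = ica.fit_transform(eeg)                     # (n_samples, n_components)

# Blink components are spiky and therefore highly kurtotic.
artifact = kurtosis(sources, axis=0) > 5.0

sources_clean = sources.copy()
sources_clean[:, artifact] = 0.0
eeg_clean = ica.inverse_transform(sources_clean)
```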
Kouzel, Nadzeya; Oldewurtel, Enno R; Maier, Berenike
2015-07-01
Extracellular DNA is an important structural component of many bacterial biofilms. It is unknown, however, to which extent external DNA is used to transfer genes by means of transformation. Here, we quantified the acquisition of multidrug resistance and visualized its spread under selective and nonselective conditions in biofilms formed by Neisseria gonorrhoeae. The density and architecture of the biofilms were controlled by microstructuring the substratum for bacterial adhesion. Horizontal transfer of antibiotic resistance genes between cocultured strains, each carrying a single resistance, occurred efficiently in early biofilms. The efficiency of gene transfer was higher in early biofilms than between planktonic cells. It was strongly reduced after 24 h and independent of biofilm density. Pilin antigenic variation caused a high fraction of nonpiliated bacteria but was not responsible for the reduced gene transfer at later stages. When selective pressure was applied to dense biofilms using antibiotics at their MIC, the double-resistant bacteria did not show a significant growth advantage. In loosely connected biofilms, the spreading of double-resistant clones was prominent. We conclude that multidrug resistance readily develops in early gonococcal biofilms through horizontal gene transfer. However, selection and spreading of the multiresistant clones are heavily suppressed in dense biofilms. Biofilms are considered ideal reaction chambers for horizontal gene transfer and development of multidrug resistances. The rate at which genes are exchanged within biofilms is unknown. Here, we quantified the acquisition of double-drug resistance by gene transfer between gonococci with single resistances. At early biofilm stages, the transfer efficiency was higher than for planktonic cells but then decreased with biofilm age. The surface topography affected the architecture of the biofilm. While the efficiency of gene transfer was independent of the architecture, spreading of double-resistant bacteria under selective conditions was strongly enhanced in loose biofilms. We propose that while biofilms help generating multiresistant strains, selection takes place mostly after dispersal from the biofilm. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin
2016-06-27
Power quality analysis issues, especially the measurement of harmonic and interharmonic in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has introduced extra demands to distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads whose information is crucial to subsequent analyzing and control. This paper gives a detailed description of the power quality analysis framework in networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with the adaptive linear neuron network. The experiments show that the proposed method is time-efficient and leads to a better accuracy of the simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system.
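The frequency-refinement step mentioned above, estimating a component's frequency from three DFT samples around a spectral peak, can be illustrated with a generic three-bin parabolic interpolator. The exact estimator, window, and parameters used in the paper may differ; the snippet below only demonstrates the general idea of resolving a tone that falls between DFT bins.

```python
# Generic three-bin parabolic interpolation around the DFT peak; an
# illustration of sub-bin frequency estimation, not the paper's estimator.
import numpy as np

def refine_peak_frequency(x, fs):
    X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    k = int(np.argmax(X[1:-1])) + 1                 # peak bin, excluding edges
    a, b, c = X[k - 1], X[k], X[k + 1]
    delta = 0.5 * (a - c) / (a - 2 * b + c)         # fractional bin offset
    return (k + delta) * fs / len(x)

fs = 5000.0
t = np.arange(4096) / fs
tone = np.sin(2 * np.pi * 151.3 * t)                # frequency off the DFT grid
print(refine_peak_frequency(tone, fs))              # close to 151.3 Hz
```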
Topology design and performance analysis of an integrated communication network
NASA Technical Reports Server (NTRS)
Li, V. O. K.; Lam, Y. F.; Hou, T. C.; Yuen, J. H.
1985-01-01
A research study on the topology design and performance analysis for the Space Station Information System (SSIS) network is conducted. It begins with a survey of existing research efforts in network topology design. Then a new approach for topology design is presented. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. The algorithm for generating subsets is described in detail, and various aspects of the overall design procedure are discussed. Two more efficient versions of this algorithm (applicable in specific situations) are also given. Next, two important aspects of network performance analysis are discussed: network reliability and message delays. A new model is introduced to study the reliability of a network with dependent failures. For message delays, a collection of formulas from existing research results is given to compute or estimate the delays of messages in a communication network without making the independence assumption. The design algorithm coded in PASCAL is included as an appendix.
EON: a component-based approach to automation of protocol-directed therapy.
Musen, M A; Tu, S W; Das, A K; Shahar, Y
1996-01-01
Provision of automated support for planning protocol-directed therapy requires a computer program to take as input clinical data stored in an electronic patient-record system and to generate as output recommendations for therapeutic interventions and laboratory testing that are defined by applicable protocols. This paper presents a synthesis of research carried out at Stanford University to model the therapy-planning task and to demonstrate a component-based architecture for building protocol-based decision-support systems. We have constructed general-purpose software components that (1) interpret abstract protocol specifications to construct appropriate patient-specific treatment plans; (2) infer from time-stamped patient data higher-level, interval-based, abstract concepts; (3) perform time-oriented queries on a time-oriented patient database; and (4) allow acquisition and maintenance of protocol knowledge in a manner that facilitates efficient processing both by humans and by computers. We have implemented these components in a computer system known as EON. Each of the components has been developed, evaluated, and reported independently. We have evaluated the integration of the components as a composite architecture by implementing T-HELPER, a computer-based patient-record system that uses EON to offer advice regarding the management of patients who are following clinical trial protocols for AIDS or HIV infection. A test of the reuse of the software components in a different clinical domain demonstrated rapid development of a prototype application to support protocol-based care of patients who have breast cancer. PMID:8930854
A first application of independent component analysis to extracting structure from stock returns.
Back, A D; Weigend, A S
1997-08-01
This paper explores the application of a signal processing technique known as independent component analysis (ICA) or blind source separation to multivariate financial time series such as a portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new space of statistically independent components (ICs). We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with those obtained using principal component analysis. The results indicate that the estimated ICs fall into two categories, (i) infrequent large shocks (responsible for the major changes in the stock prices), and (ii) frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall stock price can be reconstructed surprisingly well by using a small number of thresholded weighted ICs. In contrast, when using shocks derived from principal components instead of independent components, the reconstructed price is less similar to the original one. ICA is shown to be a potentially powerful method of analyzing and understanding driving mechanisms in financial time series. The application to portfolio optimization is described in Chin and Weigend (1998).
Two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images.
He, Lifeng; Chao, Yuyan; Suzuki, Kenji
2011-08-01
Whenever one wants to distinguish, recognize, and/or measure objects (connected components) in binary images, labeling is required. This paper presents two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images. One is voxel based and the other is run based. For the voxel-based one, we present an efficient method of deciding the order for checking voxels in the mask. For the run-based one, instead of assigning each foreground voxel, we assign each run a provisional label. Moreover, we use run data to label foreground voxels without scanning any background voxel in the second scan. Experimental results have demonstrated that our voxel-based algorithm is efficient for 3-D binary images with complicated connected components, that our run-based one is efficient for those with simple connected components, and that both are much more efficient than conventional 3-D labeling algorithms.
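For readers unfamiliar with label-equivalence labeling, the sketch below shows a plain two-pass, voxel-based variant for a 3-D binary volume with 6-connectivity, using union-find to resolve label equivalences. It deliberately omits the paper's contributions (the optimized voxel-checking order and the run-based scan), so it conveys only the underlying idea.

```python
# Simplified two-pass label-equivalence labeling with union-find;
# 6-connectivity, voxel-based, no run encoding or mask-order optimization.
import numpy as np

def label_3d(volume):
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]       # path compression
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    labels = np.zeros(volume.shape, dtype=int)
    next_label = 1
    Z, Y, X = volume.shape
    for z in range(Z):                          # first pass: provisional labels
        for y in range(Y):
            for x in range(X):
                if not volume[z, y, x]:
                    continue
                nbrs = [labels[z - 1, y, x] if z else 0,
                        labels[z, y - 1, x] if y else 0,
                        labels[z, y, x - 1] if x else 0]
                nbrs = [n for n in nbrs if n]
                if not nbrs:
                    parent[next_label] = next_label
                    labels[z, y, x] = next_label
                    next_label += 1
                else:
                    labels[z, y, x] = min(nbrs)
                    for n in nbrs:
                        union(min(nbrs), n)
    for z in range(Z):                          # second pass: resolve equivalences
        for y in range(Y):
            for x in range(X):
                if labels[z, y, x]:
                    labels[z, y, x] = find(labels[z, y, x])
    return labels
```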
Wienken, Magdalena; Dickmanns, Antje; Nemajerova, Alice; Kramer, Daniela; Najafova, Zeynab; Weiss, Miriam; Karpiuk, Oleksandra; Kassem, Moustapha; Zhang, Yanping; Lozano, Guillermina; Johnsen, Steven A; Moll, Ute M; Zhang, Xin; Dobbelstein, Matthias
2016-01-07
The MDM2 oncoprotein ubiquitinates and antagonizes p53 but may also carry out p53-independent functions. Here we report that MDM2 is required for the efficient generation of induced pluripotent stem cells (iPSCs) from murine embryonic fibroblasts, in the absence of p53. Similarly, MDM2 depletion in the context of p53 deficiency also promoted the differentiation of human mesenchymal stem cells and diminished clonogenic survival of cancer cells. Most of the MDM2-controlled genes also responded to the inactivation of the Polycomb Repressor Complex 2 (PRC2) and its catalytic component EZH2. MDM2 physically associated with EZH2 on chromatin, enhancing the trimethylation of histone 3 at lysine 27 and the ubiquitination of histone 2A at lysine 119 (H2AK119) at its target genes. Removing MDM2 simultaneously with the H2AK119 E3 ligase Ring1B/RNF2 further induced these genes and synthetically arrested cell proliferation. In conclusion, MDM2 supports the Polycomb-mediated repression of lineage-specific genes, independent of p53. Copyright © 2016 Elsevier Inc. All rights reserved.
Acceleration characteristics of human ocular accommodation.
Bharadwaj, Shrikant R; Schor, Clifton M
2005-01-01
Position and velocity of accommodation are known to increase with stimulus magnitude; however, little is known about acceleration properties. We investigated three acceleration properties in response to step changes in defocus: peak acceleration, time-to-peak acceleration, and total duration of acceleration. Peak velocity and total duration of acceleration increased with response magnitude. Peak acceleration and time-to-peak acceleration remained independent of response magnitude. Independent first-order and second-order dynamic components of accommodation demonstrate that neural control of accommodation has an initial open-loop component that is independent of response magnitude and a closed-loop component that increases with response magnitude.
Two-Stage Winch for Kites and Tethered Balloons or Blimps
NASA Technical Reports Server (NTRS)
Miles, Ted; Bland, Geoff
2011-01-01
A winch system provides a method for launch and recovery capabilities for kites and tethered blimps or balloons. Low power consumption is a key objective, as well as low weight for portability. This is accomplished by decoupling the tether-line storage and winding/unwinding functions, and providing tailored and efficient mechanisms for each. The components of this system include rotational power input devices such as electric motors or other apparatus, line winding/unwinding reel(s), line storage reel(s), and independent drive trains. Power is applied to the wind/unwind reels to transport the tether line. Power is also applied to a line storage reel, from either the wind/unwind power source, the wind/unwind reel itself, or separate power source. The speeds of the two reels are synchronized, but not dependent on each other. This is accomplished via clutch mechanisms, variable transmissions, or independent motor controls. The speed of the storage reel is modulated as the effective diameter of the reel changes with line accumulation.
Activation of DNA Damage Repair Pathways by Murine Polyomavirus
Heiser, Katie; Nicholas, Catherine; Garcea, Robert L.
2016-01-01
Nuclear replication of DNA viruses activates DNA damage repair (DDR) pathways, which are thought to detect and inhibit viral replication. However, many DNA viruses also depend on these pathways in order to optimally replicate their genomes. We investigated the relationship between murine polyomavirus (MuPyV) and components of DDR signaling pathways including CHK1, CHK2, H2AX, ATR, and DNAPK. We found that recruitment and retention of DDR proteins at viral replication centers was independent of H2AX, as well as the viral small and middle T-antigens. Additionally, infectious virus production required ATR kinase activity, but was independent of CHK1, CHK2, or DNAPK signaling. ATR inhibition did not reduce the total amount of viral DNA accumulated, but affected the amount of virus produced, indicating a defect in virus assembly. These results suggest that MuPyV may utilize a subset of DDR proteins or non-canonical DDR signaling pathways in order to efficiently replicate and assemble. PMID:27529739
Instrument-independent analysis of music by means of the continuous wavelet transform
NASA Astrophysics Data System (ADS)
Olmo, Gabriella; Dovis, Fabio; Benotto, Paolo; Calosso, Claudio; Passaro, Pierluigi
1999-10-01
This paper deals with the problem of automatic recognition of music. Segments of digitized music are processed by means of a Continuous Wavelet Transform, properly chosen so as to match the spectral characteristics of the signal. In order to achieve a good time-scale representation of the signal components, a novel wavelet suited to the features of musical signals has been designed. Particular care has been devoted to an efficient implementation, which operates in the frequency domain and includes proper segmentation and aliasing reduction techniques to make the analysis of long signals feasible. The method achieves very good performance in terms of both time and frequency selectivity, and can yield the estimate and the localization in time of both the fundamental frequency and the main harmonics of each tone. The analysis is used as a preprocessing step for a recognition algorithm, which we show to be almost independent of the instrument reproducing the sounds. Simulations are provided to demonstrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Wang, Fang; Liu, Chang; Liu, Xiaoning; Niu, Tiaoming; Wang, Jing; Mei, Zhonglei; Qin, Jiayong
2017-06-01
In this paper, a flat, incident-angle-independent absorbing material is proposed and numerically verified in the optical spectrum. A homogeneous and anisotropic dielectric slab acting as a non-reflecting layer is first reviewed, and a feasible realization strategy for the slab is then given using layered isotropic materials. When the loss components of the constitutive materials are not zero, the slab works as an angle-insensitive absorbing layer, and the absorption rate increases with the losses. As numerical verification, the field distributions of a metallic cylinder and a triangular metallic object, each covered by the designed absorbing layer, are demonstrated. The simulation results show that the designed absorbing layer can efficiently absorb incident waves independently of the incident angle at the operating frequency. This homogeneous slab can be used in one- and two-dimensional situations for the realization of an invisibility cloak, a carpet cloak, and even a skin cloak, if it is used to conformally cover target objects.
Feed efficiency - how should it be used for the cow herd?
USDA-ARS's Scientific Manuscript database
In cows, the most critical factor influencing the output component of efficiency is reproductive rate, and not necessarily weight gain. Thus benefits of selecting animals with desirable measures of feed efficiency on cow efficiency remain to be determined. The feed input component of cow efficiency...
OXlearn: a new MATLAB-based simulation tool for connectionist models.
Ruh, Nicolas; Westermann, Gert
2009-11-01
OXlearn is a free, platform-independent MATLAB toolbox in which standard connectionist neural network models can be set up, run, and analyzed by means of a user-friendly graphical interface. Due to its seamless integration with the MATLAB programming environment, the inner workings of the simulation tool can be easily inspected and/or extended using native MATLAB commands or components. This combination of usability, transparency, and extendability makes OXlearn an efficient tool for the implementation of basic research projects or the prototyping of more complex research endeavors, as well as for teaching. Both the MATLAB toolbox and a compiled version that does not require access to MATLAB can be downloaded from http://psych.brookes.ac.uk/oxlearn/.
Lin, Yuan-Pin; Duann, Jeng-Ren; Chen, Jyh-Horng; Jung, Tzyy-Ping
2010-04-21
This study explores the electroencephalographic (EEG) correlates of emotional experience during music listening. Independent component analysis and analysis of variance were used to separate statistically independent spectral changes of the EEG in response to music-induced emotional processes. An independent brain process with equivalent dipole located in the fronto-central region exhibited distinct δ-band and θ-band power changes associated with self-reported emotional states. Specifically, the emotional valence was associated with δ-power decreases and θ-power increases in the frontal-central area, whereas the emotional arousal was accompanied by increases in both δ and θ powers. The resultant emotion-related component activations that were less interfered by the activities from other brain processes complement previous EEG studies of emotion perception to music.
Biomechanics of forearm rotation: force and efficiency of pronator teres.
Ibáñez-Gimeno, Pere; Galtés, Ignasi; Jordana, Xavier; Malgosa, Assumpció; Manyosa, Joan
2014-01-01
Biomechanical models are useful to assess the effect of muscular forces on bone structure. Using skeletal remains, we analyze pronator teres rotational efficiency and its force components throughout the entire flexion-extension and pronation-supination ranges by means of a new biomechanical model and 3D imaging techniques, and we explore the relationship between these parameters and skeletal structure. The results show that maximal efficiency is the highest in full elbow flexion and is close to forearm neutral position for each elbow angle. The vertical component of pronator teres force is the highest among all components and is greater in pronation and elbow extension. The radial component becomes negative in pronation and reaches lower values as the elbow flexes. Both components could enhance radial curvature, especially in pronation. The model also enables to calculate efficiency and force components simulating changes in osteometric parameters. An increase of radial curvature improves efficiency and displaces the position where the radial component becomes negative towards the end of pronation. A more proximal location of pronator teres radial enthesis and a larger humeral medial epicondyle increase efficiency and displace the position where this component becomes negative towards forearm neutral position, which enhances radial curvature. Efficiency is also affected by medial epicondylar orientation and carrying angle. Moreover, reaching an object and bringing it close to the face in a close-to-neutral position improve efficiency and entail an equilibrium between the forces affecting the elbow joint stability. When the upper-limb skeleton is used in positions of low efficiency, implying unbalanced force components, it undergoes plastic changes, which improve these parameters. These findings are useful for studies on ergonomics and orthopaedics, and the model could also be applied to fossil primates in order to infer their locomotor form. Moreover, activity patterns in human ancient populations could be deduced from parameters reported here.
Independent active and thermodynamic processes govern the nucleolus assembly in vivo
Falahati, Hanieh; Wieschaus, Eric
2017-01-01
Membraneless organelles play a central role in the organization of protoplasm by concentrating macromolecules, which allows efficient cellular processes. Recent studies have shown that, in vitro, certain components in such organelles can assemble through phase separation. Inside the cell, however, such organelles are multicomponent, with numerous intermolecular interactions that can potentially affect the demixing properties of individual components. In addition, the organelles themselves are inherently active, and it is not clear how the active, energy-consuming processes that occur constantly within such organelles affect the phase separation behavior of the constituent macromolecules. Here, we examine the phase separation model for the formation of membraneless organelles in vivo by assessing the two features that collectively distinguish it from active assembly, namely temperature dependence and reversibility. We use a microfluidic device that allows accurate and rapid manipulation of temperature and examine the quantitative dynamics by which six different nucleolar proteins assemble into the nucleoli of Drosophila melanogaster embryos. Our results indicate that, although phase separation is the main mode of recruitment for four of the studied proteins, the assembly of the other two is irreversible and enhanced at higher temperatures, behaviors indicative of active recruitment to the nucleolus. These two subsets of components differ in their requirements for ribosomal DNA; the two actively assembling components fail to assemble in the absence of ribosomal DNA, whereas the thermodynamically driven components assemble but lose temporal and spatial precision. PMID:28115706
Extracting Independent Local Oscillatory Geophysical Signals by Geodetic Tropospheric Delay
NASA Technical Reports Server (NTRS)
Botai, O. J.; Combrinck, L.; Sivakumar, V.; Schuh, H.; Bohm, J.
2010-01-01
Zenith Tropospheric Delay (ZTD) due to water vapor derived from space geodetic techniques and numerical weather prediction simulated-reanalysis data exhibits non-linear and non-stationary properties akin to those in the crucial geophysical signals of interest to the research community. These time series, once decomposed into additive (and stochastic) components, have information about the long term global change (the trend) and other interpretable (quasi-) periodic components such as seasonal cycles and noise. Such stochastic component(s) could be a function that exhibits at most one extremum within a data span or a monotonic function within a certain temporal span. In this contribution, we examine the use of the combined Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA): the EEMD-ICA algorithm to extract the independent local oscillatory stochastic components in the tropospheric delay derived from the European Centre for Medium-Range Weather Forecasts (ECMWF) over six geodetic sites (HartRAO, Hobart26, Wettzell, Gilcreek, Westford, and Tsukub32). The proposed methodology allows independent geophysical processes to be extracted and assessed. Analysis of the quality index of the Independent Components (ICs) derived for each cluster of local oscillatory components (also called the Intrinsic Mode Functions (IMFs)) for all the geodetic stations considered in the study demonstrate that they are strongly site dependent. Such strong dependency seems to suggest that the localized geophysical signals embedded in the ZTD over the geodetic sites are not correlated. Further, from the viewpoint of non-linear dynamical systems, four geophysical signals the Quasi-Biennial Oscillation (QBO) index derived from the NCEP/NCAR reanalysis, the Southern Oscillation Index (SOI) anomaly from NCEP, the SIDC monthly Sun Spot Number (SSN), and the Length of Day (LoD) are linked to the extracted signal components from ZTD. Results from the synchronization analysis show that ZTD and the geophysical signals exhibit (albeit subtle) site dependent phase synchronization index.
Dharmaprani, Dhani; Nguyen, Hoang K; Lewis, Trent W; DeLosAngeles, Dylan; Willoughby, John O; Pope, Kenneth J
2016-08-01
Independent Component Analysis (ICA) is a powerful statistical tool capable of separating multivariate scalp electrical signals into their additive independent or source components, specifically EEG or electroencephalogram and artifacts. Although ICA is a widely accepted EEG signal processing technique, classification of the recovered independent components (ICs) is still flawed, as current practice still requires subjective human decisions. Here we build on the results from Fitzgibbon et al. [1] to compare three measures and three ICA algorithms. Using EEG data acquired during neuromuscular paralysis, we tested the ability of the measures (spectral slope, peripherality and spatial smoothness) and algorithms (FastICA, Infomax and JADE) to identify components containing EMG. Spatial smoothness showed differentiation between paralysis and pre-paralysis ICs comparable to spectral slope, whereas peripherality showed less differentiation. A combination of the measures showed better differentiation than any measure alone. Furthermore, FastICA provided the best discrimination between muscle-free and muscle-contaminated recordings in the shortest time, suggesting it may be the most suited to EEG applications of the considered algorithms. Spatial smoothness results suggest that a significant number of ICs are mixed, i.e. contain signals from more than one biological source, and so the development of an ICA algorithm that is optimised to produce ICs that are easily classifiable is warranted.
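One of the measures compared above, spectral slope, can be approximated as the slope of a line fitted to an independent component's log power spectrum over a band where EMG is prominent. The band limits, Welch settings, and the reading that flatter slopes indicate muscle contamination are assumptions for illustration rather than the authors' exact choices.

```python
# Illustrative spectral-slope measure for one independent component `ic`
# sampled at `fs` Hz; band limits and Welch settings are assumptions.
import numpy as np
from scipy.signal import welch

def spectral_slope(ic, fs, fmin=7.0, fmax=75.0):
    f, pxx = welch(ic, fs=fs, nperseg=int(2 * fs))
    band = (f >= fmin) & (f <= fmax)
    slope, _ = np.polyfit(f[band], 10 * np.log10(pxx[band]), 1)
    return slope        # flatter (less negative) slopes suggest EMG content

fs = 256.0
rng = np.random.default_rng(0)
print(spectral_slope(rng.standard_normal(int(30 * fs)), fs))   # near 0 for white noise
```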
Tangprasertchai, Narin S; Zhang, Xiaojun; Ding, Yuan; Tham, Kenneth; Rohs, Remo; Haworth, Ian S; Qin, Peter Z
2015-01-01
The technique of site-directed spin labeling (SDSL) provides unique information on biomolecules by monitoring the behavior of a stable radical tag (i.e., spin label) using electron paramagnetic resonance (EPR) spectroscopy. In this chapter, we describe an approach in which SDSL is integrated with computational modeling to map conformations of nucleic acids. This approach builds upon a SDSL tool kit previously developed and validated, which includes three components: (i) a nucleotide-independent nitroxide probe, designated as R5, which can be efficiently attached at defined sites within arbitrary nucleic acid sequences; (ii) inter-R5 distances in the nanometer range, measured via pulsed EPR; and (iii) an efficient program, called NASNOX, that computes inter-R5 distances on given nucleic acid structures. Following a general framework of data mining, our approach uses multiple sets of measured inter-R5 distances to retrieve "correct" all-atom models from a large ensemble of models. The pool of models can be generated independently without relying on the inter-R5 distances, thus allowing a large degree of flexibility in integrating the SDSL-measured distances with a modeling approach best suited for the specific system under investigation. As such, the integrative experimental/computational approach described here represents a hybrid method for determining all-atom models based on experimentally-derived distance measurements. © 2015 Elsevier Inc. All rights reserved.
Performance Evaluation of Reduced-Chord Rotor Blading as Applied to J73 Two-Stage Turbine
NASA Technical Reports Server (NTRS)
Schurn, Harold J.
1957-01-01
The multistage turbine from the J73 turbojet engine has previously been investigated with standard and with reduced-chord rotor blading in order to determine the individual performance characteristics of each configuration over a range of over-all pressure ratio and speed. Because both turbine configurations exhibited peak efficiencies of over 90 percent, and because both units had relatively wide efficient operating ranges, it was considered of interest to determine the performance of the first stage of the turbine as a separate component. Accordingly, the standard-bladed multistage turbine was modified by removing the second-stage rotor disk and stator and altering the flow passage so that the first stage of the unit could be operated independently. The modified single-stage turbine was then operated over a range of stage pressure ratio and speed. The single-stage turbine operated at a peak brake internal efficiency of over 90 percent at an over-all stage pressure ratio of 1.4 and at 90 percent of design equivalent speed. Furthermore, the unit operated at high efficiencies over a relatively wide operating range. When the single-stage results were compared with the multistage results at the design operating point, it was found that the first stage produced approximately half the total multistage-turbine work output.
Hybrid Power Management-Based Vehicle Architecture
NASA Technical Reports Server (NTRS)
Eichenberg, Dennis J.
2011-01-01
Hybrid Power Management (HPM) is the integration of diverse, state-of-the-art power devices in an optimal configuration for space and terrestrial applications (see figure). The appropriate application and control of the various power devices significantly improves overall system performance and efficiency. The basic vehicle architecture consists of a primary power source, and possibly other power sources, that provides all power to a common energy storage system that is used to power the drive motors and vehicle accessory systems. This architecture also provides power as an emergency power system. Each component is independent, permitting it to be optimized for its intended purpose. The key element of HPM is the energy storage system. All generated power is sent to the energy storage system, and all loads derive their power from that system. This can significantly reduce the power requirement of the primary power source, while increasing the vehicle reliability. Ultracapacitors are ideal for an HPM-based energy storage system due to their exceptionally long cycle life, high reliability, high efficiency, high power density, and excellent low-temperature performance. Multiple power sources and multiple loads are easily incorporated into an HPM-based vehicle. A gas turbine is a good primary power source because of its high efficiency, high power density, long life, high reliability, and ability to operate on a wide range of fuels. An HPM controller maintains optimal control over each vehicle component. This flexible operating system can be applied to all vehicles to considerably improve vehicle efficiency, reliability, safety, security, and performance. The HPM-based vehicle architecture has many advantages over conventional vehicle architectures. Ultracapacitors have a much longer cycle life than batteries, which greatly improves system reliability, reduces life-of-system costs, and reduces environmental impact as ultracapacitors will probably never need to be replaced and disposed of. The environmentally safe ultracapacitor components reduce disposal concerns, and their recyclable nature reduces the environmental impact. High ultracapacitor power density provides high power during surges, and the ability to absorb high power during recharging. Ultracapacitors are extremely efficient in capturing recharging energy, are rugged, reliable, maintenance-free, have excellent low-temperature characteristics, provide consistent performance over time, and promote safety as they can be left indefinitely in a safe, discharged state whereas batteries cannot.
SIGPI. Fault Tree Cut Set System Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patenaude, C.J.
1992-01-13
SIGPI computes the probabilistic performance of complex systems by combining cut set or other binary product data with probability information on each basic event. SIGPI is designed to work with either coherent systems, where the system fails when certain combinations of components fail, or noncoherent systems, where at least one cut set occurs only if at least one component of the system is operating properly. The program can handle conditionally independent components, dependent components, or a combination of component types and has been used to evaluate responses to environmental threats and seismic events. The three data types that can be input are cut set data in disjoint normal form, basic component probabilities for independent basic components, and mean and covariance data for statistically dependent basic components.
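To make the cut-set calculation concrete, the sketch below computes a coherent system's failure probability from minimal cut sets over independent basic events by inclusion-exclusion. It illustrates only the underlying probability arithmetic; SIGPI's actual algorithms, input formats, and treatment of dependent or noncoherent cases are not reproduced, and the cut sets and probabilities are invented.

```python
# Not SIGPI itself: inclusion-exclusion over minimal cut sets, assuming
# independent basic events. Cut sets and probabilities are invented.
from itertools import combinations
from functools import reduce

def system_failure_probability(cut_sets, p):
    """cut_sets: list of sets of component ids; p: dict id -> failure prob."""
    total = 0.0
    for r in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, r):
            events = set().union(*combo)
            prob = reduce(lambda acc, c: acc * p[c], events, 1.0)
            total += (-1) ** (r + 1) * prob
    return total

cuts = [{"A", "B"}, {"B", "C"}, {"D"}]
probs = {"A": 0.01, "B": 0.02, "C": 0.03, "D": 0.001}
print(system_failure_probability(cuts, probs))
```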
Multivariate Associations of Fluid Intelligence and NAA.
Nikolaidis, Aki; Baniqued, Pauline L; Kranz, Michael B; Scavuzzo, Claire J; Barbey, Aron K; Kramer, Arthur F; Larsen, Ryan J
2017-04-01
Understanding the neural and metabolic correlates of fluid intelligence not only aids scientists in characterizing cognitive processes involved in intelligence, but it also offers insight into intervention methods to improve fluid intelligence. Here we use magnetic resonance spectroscopic imaging (MRSI) to measure N-acetyl aspartate (NAA), a biochemical marker of neural energy production and efficiency. We use principal components analysis (PCA) to examine how the distribution of NAA in the frontal and parietal lobes relates to fluid intelligence. We find that a left lateralized frontal-parietal component predicts fluid intelligence, and it does so independently of brain size, another significant predictor of fluid intelligence. These results suggest that the left motor regions play a key role in the visualization and planning necessary for spatial cognition and reasoning, and we discuss these findings in the context of the Parieto-Frontal Integration Theory of intelligence. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
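The analysis style described above, reducing regional NAA measures with PCA and then testing whether a component score predicts fluid intelligence over and above brain size, can be sketched as follows. The subject count, region count, and all values are synthetic placeholders, not the study's MRSI data.

```python
# Synthetic illustration of PCA-plus-regression: does an NAA component
# predict a fluid-intelligence score beyond brain size? Data are invented.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
naa = rng.standard_normal((60, 12))          # 60 subjects x 12 regions (placeholder)
brain_size = rng.standard_normal(60)
gf = 0.4 * naa[:, :3].mean(axis=1) + 0.3 * brain_size + rng.standard_normal(60)

scores = PCA(n_components=3).fit_transform(naa)
X = np.column_stack([scores[:, 0], brain_size])
model = LinearRegression().fit(X, gf)
print(dict(zip(["NAA component", "brain size"], model.coef_.round(3))))
```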
Foong, Shaohui; Sun, Zhenglong
2016-08-12
In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
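A loose sketch of the pipeline described above, PCA to compress concurrent multi-sensor outputs followed by a neural-network mapping from the reduced features to position, is given below. The field model, sensor geometry, network size, and scaling are invented for the example and do not reflect the actual hardware or training setup.

```python
# Invented 9-sensor example: simulate field readings along a 1-D travel,
# compress with PCA, and regress position with a small neural network.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 0.1, size=(2000, 1))           # travel in metres
sensor_x = np.linspace(0.0, 0.1, 9)                   # 9 fixed sensor locations
field = 1.0 / ((pos - sensor_x) ** 2 + 1e-4)          # crude field falloff
field += 0.01 * rng.standard_normal(field.shape)      # measurement noise
y = (pos * 1000).ravel()                              # position in mm

model = make_pipeline(StandardScaler(), PCA(n_components=4),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=3000, random_state=0))
model.fit(field[:1500], y[:1500])
print("held-out R^2:", model.score(field[1500:], y[1500:]))
```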
Rapid Airplane Parametric Input Design(RAPID)
NASA Technical Reports Server (NTRS)
Smith, Robert E.; Bloor, Malcolm I. G.; Wilson, Michael J.; Thomas, Almuttil M.
2004-01-01
An efficient methodology is presented for defining a class of airplane configurations. Inclusive in this definition are surface grids, volume grids, and grid sensitivity. A small set of design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. The wing, tail, and canard components are manifested by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Grid sensitivity is obtained by applying the automatic differentiation precompiler ADIFOR to software for the grid generation. The computed surface grids, volume grids, and sensitivity derivatives are suitable for a wide range of Computational Fluid Dynamics simulation and configuration optimizations.
Specific Uptake of Lipid-Antibody-Functionalized LbL Microcarriers by Cells.
Göse, Martin; Scheffler, Kira; Reibetanz, Uta
2016-11-14
The modular construction of Layer-by-Layer biopolymer microcarriers facilitates a highly specific design of drug delivery systems. A supported lipid bilayer (SLB) contributes to biocompatibility and protection of sensitive active agents. The addition of a lipid anchor equipped with PEG (shielding from opsonins) and biotin (attachment of exchangeable outer functional molecules) enhances the microcarrier functionality even more. However, a homogeneously assembled supported lipid bilayer is a prerequisite for a specific binding of functional components. Our investigations show that a tightly packed SLB improves the efficiency of functional components attached to the microcarrier's surface, as illustrated with specific antibodies in cellular application. Only a low quantity of antibodies is needed to obtain improved cellular uptake rates independent from cell type as compared to an antibody-functionalized loosely packed lipid bilayer or directly assembled antibody onto the multilayer. A fast disassembly of the lipid bilayer within endolysosomes exposing the underlying drug delivering multilayer structure demonstrates the suitability of LbL-microcarriers as a multifunctional drug delivery system.
Shen, Hujun; Czaplewski, Cezary; Liwo, Adam; Scheraga, Harold A.
2009-01-01
The kinetic-trapping problem in simulating protein folding can be overcome by using a Replica Exchange Method (REM). However, in implementing REM in molecular dynamics simulations, synchronization between processors on parallel computers is required, and communication between processors limits its ability to sample conformational space in a complex system efficiently. To minimize communication between processors during the simulation, a Serial Replica Exchange Method (SREM) has been proposed recently by Hagan et al. (J. Phys. Chem. B 2007, 111, 1416–1423). Here, we report the implementation of this new SREM algorithm with our physics-based united-residue (UNRES) force field. The method has been tested on the protein 1E0L with a temperature-independent UNRES force field and on terminally blocked deca-alanine (Ala10) and 1GAB with the recently introduced temperature-dependent UNRES force field. With the temperature-independent force field, SREM reproduces the results of REM but is more efficient in terms of wall-clock time and scales better on distributed-memory machines. However, exact application of SREM to the temperature-dependent UNRES algorithm requires the determination of a four-dimensional distribution of UNRES energy components instead of a one-dimensional energy distribution for each temperature, which is prohibitively expensive. Hence, we assumed that the temperature dependence of the force field can be ignored for neighboring temperatures. This version of SREM worked for Ala10 which is a simple system but failed to reproduce the thermodynamic results as well as regular REM on the more complex 1GAB protein. Hence, SREM can be applied to the temperature-independent but not to the temperature-dependent UNRES force field. PMID:20011673
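The replica-exchange machinery underlying both REM and SREM rests on a Metropolis criterion for swapping configurations between temperatures. The toy function below shows that criterion for a temperature-independent potential (the case in which, as noted above, SREM reproduces REM); the energies, temperatures, and units are arbitrary examples.

```python
# Toy replica-exchange swap test: accept with probability
# min(1, exp[(1/kT_i - 1/kT_j) * (E_i - E_j)]). Values are arbitrary.
import math
import random

def attempt_swap(E_i, E_j, T_i, T_j, k_B=1.0):
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_i - E_j)
    return delta >= 0 or random.random() < math.exp(delta)

# A hotter replica holding a lower-energy conformation readily hands it
# down to the colder replica.
print(attempt_swap(E_i=-90.0, E_j=-120.0, T_i=300.0, T_j=330.0))
```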
Speeding up Coarse Point Cloud Registration by Threshold-Independent Baysac Match Selection
NASA Astrophysics Data System (ADS)
Kang, Z.; Lindenbergh, R.; Pu, S.
2016-06-01
This paper presents an algorithm for the automatic registration of terrestrial point clouds by match selection using an efficient conditional sampling method, threshold-independent BaySAC (BAYes SAmpling Consensus), and employs the error metric of average point-to-surface residual to reduce the random measurement error and thereby approach the real registration error. BaySAC and other basic sampling algorithms usually need an artificially determined threshold by which inlier points are identified, which leads to a threshold-dependent verification process. Therefore, we applied the LMedS method to construct the cost function used to determine the optimum model, reducing the influence of human factors and improving the robustness of the model estimate. Point-to-point and point-to-surface error metrics are the most commonly used. However, point-to-point error in general consists of at least two components: random measurement error and systematic error resulting from a remaining error in the found rigid body transformation. Thus we employ the average point-to-surface residual to evaluate the registration accuracy. The proposed approaches, together with a traditional RANSAC approach, are tested on four data sets acquired by three different scanners in terms of their computational efficiency and the quality of the final registration. The registration results show that the standard deviation of the average point-to-surface residuals is reduced from 1.4 cm (plain RANSAC) to 0.5 cm (threshold-independent BaySAC). The results also show that, compared to the performance of RANSAC, our BaySAC strategies lead to fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers.
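The evaluation metric discussed above, the average point-to-surface residual, can be sketched as each transformed source point's orthogonal distance to a local plane fitted through its nearest neighbors in the target cloud. The neighborhood size and the SVD-based plane fit below are simplifying assumptions, not the exact procedure used in the paper.

```python
# Hedged sketch of an average point-to-surface residual between a source
# cloud (after applying rotation R and translation t) and a target cloud.
import numpy as np
from scipy.spatial import cKDTree

def avg_point_to_surface(source, target, R, t, k=10):
    transformed = source @ R.T + t
    tree = cKDTree(target)
    _, idx = tree.query(transformed, k=k)
    residuals = []
    for p, nbr_idx in zip(transformed, idx):
        nbrs = target[nbr_idx]
        centroid = nbrs.mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs - centroid)   # last row = local plane normal
        residuals.append(abs(np.dot(p - centroid, vt[-1])))
    return float(np.mean(residuals))
```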
Certification and brand identity for energy efficiency in competitive energy services markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prindle, W.R.; Wiser, R.
Resource commitments for energy efficiency from electricity companies are disappearing rapidly as the regulated Integrated Resource Planning and Demand-Side Management paradigms that fostered them give way to competitive power markets in a restructuring electricity industry. While free-market advocates claim that energy efficiency needs will be taken care of by competitive energy service providers, there is no assurance that efficiency will compete effectively with the panoply of other energy-related (and non-energy-related) services that are beginning to appear in early market offerings. This paper reports the results of a feasibility study for a certification and brand identity program for energy efficiency geared to competitive power markets. Funded by the Energy Foundation, this study involved a survey and personal interviews with stakeholders, plus a workshop to further the discussion. Stakeholders include independent power marketers and energy service companies, utility affiliate power marketers and energy service companies, government agencies, trade associations, non-profit organizations, equipment manufacturers, and consultants. The paper summarizes the study's findings on such key issues as: whether a brand identity concept has a critical mass of interest and support; how qualification and certification could work in such a program; how a brand identity could be positioned in the market; how an efficiency brand identity could co-brand with renewable power branding programs and other green marketing efforts; and the resources and components needed to make such a program work on a national scale.
Percolator: Scalable Pattern Discovery in Dynamic Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhury, Sutanay; Purohit, Sumit; Lin, Peng
We demonstrate Percolator, a distributed system for graph pattern discovery in dynamic graphs. In contrast to conventional mining systems, Percolator advocates efficient pattern mining schemes that (1) support pattern detection with keywords; (2) integrate incremental and parallel pattern mining; and (3) support analytical queries such as trend analysis. The core idea of Percolator is to dynamically decide and verify a small fraction of patterns and their instances that must be inspected in response to buffered updates in dynamic graphs, with a total mining cost independent of graph size. We demonstrate a) the feasibility of incremental pattern mining by walking through each component of Percolator, b) the efficiency and scalability of Percolator over the sheer size of real-world dynamic graphs, and c) how the user-friendly GUI of Percolator interacts with users to support keyword-based queries that detect, browse and inspect trending patterns. We also demonstrate two use cases of Percolator, in social media trend analysis and academic collaboration analysis, respectively.
Automated Assessment of Child Vocalization Development Using LENA.
Richards, Jeffrey A; Xu, Dongxin; Gilkerson, Jill; Yapanel, Umit; Gray, Sharmistha; Paul, Terrance
2017-07-12
To produce a novel, efficient measure of children's expressive vocal development on the basis of automatic vocalization assessment (AVA), child vocalizations were automatically identified and extracted from audio recordings using Language Environment Analysis (LENA) System technology. Assessment was based on full-day audio recordings collected in a child's unrestricted, natural language environment. AVA estimates were derived using automatic speech recognition modeling techniques to categorize and quantify the sounds in child vocalizations (e.g., protophones and phonemes). These were expressed as phone and biphone frequencies, reduced to principal components, and inputted to age-based multiple linear regression models to predict independently collected criterion-expressive language scores. From these models, we generated vocal development AVA estimates as age-standardized scores and development age estimates. AVA estimates demonstrated strong statistical reliability and validity when compared with standard criterion expressive language assessments. Automated analysis of child vocalizations extracted from full-day recordings in natural settings offers a novel and efficient means to assess children's expressive vocal development. More research remains to identify specific mechanisms of operation.
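The modeling chain summarized above (phone/biphone frequency features, principal components, age-based regression against criterion scores, then standardized estimates) is sketched loosely below. The audio processing, LENA phone categories, and exact standardization procedure are not reproduced; every array here is a synthetic placeholder.

```python
# Loose stand-in for the AVA modeling chain, using synthetic placeholder data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
biphone_freqs = rng.dirichlet(np.ones(50), size=300)    # 300 children x 50 bins
age_months = rng.uniform(6, 48, size=300)
criterion = 0.8 * age_months + 5.0 * biphone_freqs[:, 0] + rng.normal(0, 3, 300)

pcs = PCA(n_components=10).fit_transform(biphone_freqs)
X = np.column_stack([pcs, age_months])
pred = LinearRegression().fit(X, criterion).predict(X)

# Express predictions as age-standardized scores (mean 100, SD 15).
age_trend = np.polyval(np.polyfit(age_months, pred, 1), age_months)
resid = pred - age_trend
ava_standard = 100 + 15 * (resid - resid.mean()) / resid.std()
print(ava_standard[:5].round(1))
```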
Li, Yang; Hong, Jiali; Wei, Renjian; Zhang, Yingying; Tong, Zaizai; Zhang, Xinghong; Du, Binyang; Xu, Junting; Fan, Zhiqiang
2015-02-01
It is a long-standing challenge to combine mixed monomers into multiblock copolymer (MBC) in a one-pot/one-step polymerization manner. We report the first example of MBC with biodegradable polycarbonate and polyester blocks that were synthesized from highly efficient one-pot/one-step polymerization of cyclohexene oxide (CHO), CO2 and ε-caprolactone (ε-CL) in the presence of zinc-cobalt double metal cyanide complex and stannous octoate. In this protocol, two cross-chain exchange reactions (CCER) occurred at dual catalysts respectively and connected two independent chain propagation procedures (i.e., polycarbonate formation and polyester formation) simultaneously in a block-by-block manner, affording MBC without tapering structure. The multiblock structure of MBC was determined by the rate ratio of CCER to the two chain propagations and could be simply tuned by various kinetic factors. This protocol is also of significance due to partial utilization of renewable CO2 and improved mechanical properties of the resultant MBC.
Formulation Effects and the Off-target Transport of Pyrethroid Insecticides from Urban Hard Surfaces
Jorgenson, Brant C.; Young, Thomas M.
2010-01-01
Controlled rainfall experiments utilizing drop forming rainfall simulators were conducted to study various factors contributing to off-target transport of off-the-shelf formulated pyrethroid insecticides from concrete surfaces. Factors evaluated included active ingredient, product formulation, time between application and rainfall (set time), and rainfall intensity. As much as 60% and as little as 0.8% of pyrethroid applied could be recovered in surface runoff depending primarily on product formulation, and to a lesser extent on product set time. Resulting wash-off profiles during one-hour storm simulations could be categorized based on formulation, with formulations utilizing emulsifying surfactants rather than organic solvents resulting in unique wash-off profiles with overall higher wash-off efficiency. These higher wash-off efficiency profiles were qualitatively replicated by applying formulation-free neat pyrethroid in the presence of independently applied linear alkyl benzene sulfonate (LAS) surfactant, suggesting that the surfactant component of some formulated products may be influential in pyrethroid wash-off from urban hard surfaces. PMID:20524665
Verma, Vikash; Mallik, Leena; Hariadi, Rizal F.; Sivaramakrishnan, Sivaraj; Skiniotis, Georgios; Joglekar, Ajit P.
2015-01-01
DNA origami provides a versatile platform for conducting ‘architecture-function’ analysis to determine how the nanoscale organization of multiple copies of a protein component within a multi-protein machine affects its overall function. Such analysis requires that the copy number of protein molecules bound to the origami scaffold exactly matches the desired number, and that it is uniform over an entire scaffold population. This requirement is challenging to satisfy for origami scaffolds with many protein hybridization sites, because it requires the successful completion of multiple, independent hybridization reactions. Here, we show that a cleavable dimerization domain on the hybridizing protein can be used to multiplex hybridization reactions on an origami scaffold. This strategy yields nearly 100% hybridization efficiency on a 6-site scaffold even when using low protein concentration and short incubation time. It can also be developed further to enable reliable patterning of a large number of molecules on DNA origami for architecture-function analysis. PMID:26348722
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uchiyama, A., E-mail: a-uchi@riken.jp; Ozeki, K.; Higurashi, Y.
A RIKEN 18 GHz electron cyclotron resonance ion source (18 GHz ECRIS) is used as an external ion source at the Radioactive Ion Beam Factory (RIBF) accelerator complex to produce an intense beam of medium-mass heavy ions (e.g., Ca and Ar). In most components that comprise the RIBF, the control systems (CSs) are integrated by the Experimental Physics and Industrial Control System (EPICS). On the other hand, a non-EPICS-based system has hardwired controllers, and it is used in the 18 GHz ECRIS CS as an independent system. In terms of efficient and effective operation, the 18 GHz ECRIS CS as well as the RIBF CS should be renewed using EPICS. Therefore, we constructed an 18 GHz ECRIS CS by using programmable logic controllers with embedded EPICS technology. In the renewed system, an operational log system was developed as a new feature to support 18 GHz ECRIS operation.
Efficient Computational Prototyping of Mixed Technology Microfluidic Components and Systems
2002-08-01
AFRL-IF-RS-TR-2002-190, Final Technical Report, August 2002. Authors: Narayan R. Aluru, Jacob White. Computer Aided Design (CAD) tools for microfluidic components and systems were developed in this effort. Innovative numerical methods and algorithms for mixed...
Hemispheric connectivity and the visual-spatial divergent-thinking component of creativity.
Moore, Dana W; Bhadelia, Rafeeque A; Billings, Rebecca L; Fulwiler, Carl; Heilman, Kenneth M; Rood, Kenneth M J; Gansler, David A
2009-08-01
Divergent thinking is an important measurable component of creativity. This study tested the postulate that divergent thinking depends on large distributed inter- and intra-hemispheric networks. Although preliminary evidence supports increased brain connectivity during divergent thinking, the neural correlates of this characteristic have not been entirely specified. It was predicted that visuospatial divergent thinking would correlate with right hemisphere white matter volume (WMV) and with the size of the corpus callosum (CC). Volumetric magnetic resonance imaging (MRI) analyses and the Torrance Tests of Creative Thinking (TTCT) were completed among 21 normal right-handed adult males. TTCT scores correlated negatively with the size of the CC and were not correlated with right or, incidentally, left WMV. Although these results were not predicted, perhaps, as suggested by Bogen and Bogen (1988), decreased callosal connectivity enhances hemispheric specialization, which benefits the incubation of ideas that are critical for the divergent-thinking component of creativity, and it is the momentary inhibition of this hemispheric independence that accounts for the illumination that is part of the innovative stage of creativity. Alternatively, decreased CC size may reflect more selective developmental pruning, thereby facilitating efficient functional connectivity.
NASA Astrophysics Data System (ADS)
Byers, C. P.; Fu, M. K.; Fan, Y.; Hultmark, M.
2018-02-01
A novel method of obtaining two orthogonal velocity components with high spatial and temporal resolution is investigated. Both components are obtained utilizing a single sensing nanoribbon by combining the two independent operating modes of classic hot wire anemometry and the newly discovered elastic filament velocimetry (EFV). In contrast to hot wire anemometry, EFV measures fluid velocity by correlating the fluid forcing with the internal strain of the wire. In order to utilize both modes of operation, a system that switches between the two operating modes is built and characterized, and the theoretically predicted sensing response time in water is compared to experimental results. The sensing system can switch between the two modes of operation at a frequency of 100 kHz with minimal attenuation, yielding an uncompensated repetition rate of up to 3 kHz, or up to 10 kHz with modest signal compensation. While further characterization of the sensor performance in air is needed, this methodology provides a technique for obtaining well-resolved yet cost-efficient directional measurements of flow velocities which, for example, can be used for distributed measurements of velocity or measurements of turbulent stresses with excellent spatial resolution.
Classification of independent components of EEG into multiple artifact classes.
Frølich, Laura; Andersen, Tobias S; Mørup, Morten
2015-01-01
In this study, we aim to automatically identify multiple artifact types in EEG. We used multinomial regression to classify independent components of EEG data, selecting from 65 spatial, spectral, and temporal features of independent components using forward selection. The classifier identified neural and five nonneural types of components. Between subjects within studies, high classification performances were obtained. Between studies, however, classification was more difficult. For neural versus nonneural classifications, performance was on par with previous results obtained by others. We found that automatic separation of multiple artifact classes is possible with a small feature set. Our method can reduce manual workload and allow for the selective removal of artifact classes. Identifying artifacts during EEG recording may be used to instruct subjects to refrain from activity causing them. Copyright © 2014 Society for Psychophysiological Research.
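As an illustration of the pipeline described above, the following sketch classifies independent components into multiple classes with multinomial logistic regression and forward feature selection; the feature matrix, labels, and feature count are placeholders, not the authors' 65-feature implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 65))            # 600 ICs x 65 spatial/spectral/temporal features
y = rng.integers(0, 6, size=600)          # 6 classes: neural + 5 non-neural artifact types

clf = LogisticRegression(max_iter=2000)   # multinomial for multiclass labels
selector = SequentialFeatureSelector(clf, n_features_to_select=10,
                                     direction='forward', cv=3)
model = make_pipeline(selector, clf).fit(X, y)
print("selected features:", np.flatnonzero(model[0].get_support()))
print("training accuracy:", model.score(X, y))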
Dasari, Deepika; Shou, Guofa; Ding, Lei
2017-01-01
Electroencephalograph (EEG) has been increasingly studied to identify distinct mental factors when persons perform cognitively demanding tasks. However, most of these studies examined EEG correlates at channel domain, which suffers the limitation that EEG signals are the mixture of multiple underlying neuronal sources due to the volume conduction effect. Moreover, few studies have been conducted in real-world tasks. To precisely probe EEG correlates with specific neural substrates to mental factors in real-world tasks, the present study examined EEG correlates to three mental factors, i.e., mental fatigue [also known as time-on-task (TOT) effect], workload and effort, in EEG component signals, which were obtained using an independent component analysis (ICA) on high-density EEG data. EEG data were recorded when subjects performed a realistically simulated air traffic control (ATC) task for 2 h. Five EEG independent component (IC) signals that were associated with specific neural substrates (i.e., the frontal, central medial, motor, parietal, occipital areas) were identified. Their spectral powers at their corresponding dominant bands, i.e., the theta power of the frontal IC and the alpha power of the other four ICs, were detected to be correlated to mental workload and effort levels, measured by behavioral metrics. Meanwhile, a linear regression analysis indicated that spectral powers at five ICs significantly increased with TOT. These findings indicated that different levels of mental factors can be sensitively reflected in EEG signals associated with various brain functions, including visual perception, cognitive processing, and motor outputs, in real-world tasks. These results can potentially aid in the development of efficient operational interfaces to ensure productivity and safety in ATC and beyond.
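A minimal sketch of the kind of analysis described above, with synthetic data and assumed sampling rate and band edges rather than the authors' high-density ICA pipeline: decompose multichannel EEG with ICA, compute the band power of a component with a Welch PSD, and regress it on time-on-task blocks.

import numpy as np
from scipy.signal import welch
from scipy.stats import linregress
from sklearn.decomposition import FastICA

fs = 256
rng = np.random.default_rng(1)
eeg = rng.normal(size=(64, fs * 120))            # 64 channels, 2 minutes of synthetic EEG

ics = FastICA(n_components=5, random_state=0).fit_transform(eeg.T).T   # 5 IC time courses

def band_power(x, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    return pxx[(f >= lo) & (f <= hi)].mean()

# Alpha (8-12 Hz) power of one component across consecutive blocks ("time on task").
blocks = np.array_split(ics[0], 10)
alpha = [band_power(b, 8, 12) for b in blocks]
print("slope of alpha power vs. block:", linregress(np.arange(10), alpha).slope)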
Variational Bayesian Learning for Wavelet Independent Component Analysis
NASA Astrophysics Data System (ADS)
Roussos, E.; Roberts, S.; Daubechies, I.
2005-11-01
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a `blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.
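A point-estimate sketch of the underlying idea only: separate mixtures in a sparse wavelet domain with plain ICA. The hierarchical variational Bayesian machinery of the paper is not reproduced, and the wavelet choice and toy sources are assumptions.

import numpy as np
import pywt
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 40, 4096)
S = np.vstack([np.sign(np.sin(3 * t)), rng.laplace(size=4096)])   # sparse-ish toy sources
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S                        # observed mixtures

# Move to the wavelet domain, where the sparsity assumption is natural.
coeffs = [pywt.wavedec(x, 'db4', level=5) for x in X]
arrs, slices = zip(*[pywt.coeffs_to_array(c) for c in coeffs])
W = np.vstack(arrs)

# Separate in the coefficient domain, then invert back to the signal domain.
S_coef = FastICA(n_components=2, random_state=0).fit_transform(W.T).T
recovered = [pywt.waverec(pywt.array_to_coeffs(s, slices[0], output_format='wavedec'), 'db4')
             for s in S_coef]
print(np.corrcoef(recovered[0][:4096], S[0]))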
Real-space analysis of radiation-induced specific changes with independent component analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borek, Dominika; Bromberg, Raquel; Hattne, Johan
A method of analysis is presented that allows for the separation of specific radiation-induced changes into distinct components in real space. The method relies on independent component analysis (ICA) and can be effectively applied to electron density maps and other types of maps, provided that they can be represented as sets of numbers on a grid. Here, for glucose isomerase crystals, ICA was used in a proof-of-concept analysis to separate temperature-dependent and temperature-independent components of specific radiation-induced changes for data sets acquired from multiple crystals across multiple temperatures. ICA identified two components, with the temperature-independent component being responsible for the majority of specific radiation-induced changes at temperatures below 130 K. The patterns of specific temperature-independent radiation-induced changes suggest a contribution from the tunnelling of electron holes as a possible explanation. In the second case, where a group of 22 data sets was collected on a single thaumatin crystal, ICA was used in another type of analysis to separate specific radiation-induced effects happening on different exposure-level scales. Here, ICA identified two components of specific radiation-induced changes that likely result from radiation-induced chemical reactions progressing with different rates at different locations in the structure. In addition, ICA unexpectedly identified the radiation-damage state corresponding to reduced disulfide bridges rather than the zero-dose extrapolated state as the highest contrast structure. The application of ICA to the analysis of specific radiation-induced changes in real space and the data pre-processing for ICA that relies on singular value decomposition, which was used previously in data space to validate a two-component physical model of X-ray radiation-induced changes, are discussed in detail. This work lays a foundation for a better understanding of protein-specific radiation chemistries and provides a framework for analysing effects of specific radiation damage in crystallographic and cryo-EM experiments.
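A schematic sketch of the real-space analysis: flatten the difference maps onto a grid, reduce with SVD, and run ICA on the reduced representation. The maps and the number of retained singular vectors are placeholders, not the crystallographic pre-processing used in the paper.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_maps, n_grid = 22, 40 * 40 * 40
D = rng.normal(size=(n_maps, n_grid))        # each row: one flattened difference map

# SVD pre-processing: keep the leading singular vectors before running ICA.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = 2
reduced = U[:, :k] * s[:k]                   # per-dataset trajectories in k dimensions

ica = FastICA(n_components=k, random_state=0)
loadings = ica.fit_transform(reduced)        # per-dataset weight of each component
component_maps = ica.mixing_.T @ Vt[:k]      # components mapped back to the real-space grid
print(loadings.shape, component_maps.shape)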
Energy-efficient neural information processing in individual neurons and neuronal networks.
Yu, Lianchun; Yu, Yuguo
2017-11-01
Brains are composed of networks of an enormous number of neurons interconnected with synapses. Neural information is carried by the electrical signals within neurons and the chemical signals among neurons. Generating these electrical and chemical signals is metabolically expensive. The fundamental issue raised here is whether brains have evolved efficient ways of developing an energy-efficient neural code from the molecular level to the circuit level. Here, we summarize the factors and biophysical mechanisms that could contribute to the energy-efficient neural code for processing input signals. These factors include ion channel kinetics, body temperature, axonal propagation of action potentials, low-probability release of synaptic neurotransmitters, optimal input and noise, the size of neurons and neuronal clusters, excitation/inhibition balance, coding strategy, cortical wiring, and the organization of functional connectivity. Both experimental and computational evidence suggests that neural systems may use these factors to maximize the efficiency of energy consumption in processing neural signals. Studies indicate that efficient energy utilization may be universal in neuronal systems as an evolutionary consequence of the pressure of limited energy. As a result, neuronal connections may be wired in a highly economical manner to lower energy costs and space. Individual neurons within a network may encode independent stimulus components to allow a minimal number of neurons to represent whole stimulus characteristics efficiently. This basic principle may fundamentally change our view of how billions of neurons organize themselves into complex circuits to operate and generate the most powerful intelligent cognition in nature. © 2017 Wiley Periodicals, Inc.
Tauer, L W; Mishra, A K
2006-12-01
A stochastic cost equation was estimated for US dairy farms using national data from the production year 2000 to determine how farmers might reduce their cost of production. The cost of producing a unit of milk was decomposed into separate frontier (efficient) and inefficiency components, with both components estimated as a function of management and causation variables. Variables were entered as impacting the frontier component as well as the efficiency component of the stochastic curve because, a priori, both components could be affected. One factor that affected the cost frontier was the number of hours per day the milking facility is used. Using the milking facility for more hours per day decreased frontier costs; however, inefficiency increased with increased hours of milking facility use. Thus, farmers can decrease costs with increased utilization of the milking facility, but only if they are efficient in this strategy. Milking parlors, compared with stanchions, did not decrease frontier costs, but did decrease costs through increased efficiency, as did the use of a nutritionist. Use of rotational grazing decreased frontier costs but also increased inefficiency. Older farmers were less efficient.
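For reference, a hedged sketch of the standard stochastic frontier form implied by the abstract (the exact specification used in the study is an assumption here):

\[
c_i = \mathbf{x}_i^{\prime}\boldsymbol{\beta} + v_i + u_i, \qquad v_i \sim N(0,\sigma_v^2), \quad u_i \ge 0,
\]

where c_i is the cost per unit of milk on farm i, x_i'beta is the frontier (efficient) cost, v_i is random noise, and u_i is the inefficiency term; in this framework both the frontier and the distribution of u_i may depend on management variables such as hours of milking-facility use.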
DNAJC17 is localized in nuclear speckles and interacts with splicing machinery components.
Pascarella, A; Ferrandino, G; Credendino, S C; Moccia, C; D'Angelo, F; Miranda, B; D'Ambrosio, C; Bielli, P; Spadaro, O; Ceccarelli, M; Scaloni, A; Sette, C; De Felice, M; De Vita, G; Amendola, E
2018-05-17
DNAJC17 is a heat shock protein (HSP40) family member, identified in mouse as a susceptibility gene for congenital hypothyroidism. DNAJC17 knockout mouse embryos die prior to implantation. In humans, germline homozygous mutations in DNAJC17 have been found in syndromic retinal dystrophy patients, while heterozygous mutations represent candidate pathogenic events for myeloproliferative disorders. Despite widespread expression and involvement in human diseases, DNAJC17 function is still poorly understood. Herein, we have investigated its function through high-throughput transcriptomic and proteomic approaches. The transcriptome of DNAJC17-depleted cells highlighted genes involved in general functional categories, mainly related to gene expression. Conversely, the DNAJC17 interactome can be classified into very specific functional networks, with the most enriched one including proteins involved in splicing. Furthermore, several splicing-related interactors were independently validated by co-immunoprecipitation and in vivo co-localization. Accordingly, co-localization of DNAJC17 with SC35, a marker of nuclear speckles, further supported its interaction with spliceosomal components. Lastly, DNAJC17 up-regulation enhanced the splicing efficiency of a minigene reporter in live cells, while its knockdown perturbed splicing efficiency at the whole-genome level, as demonstrated by specific analysis of RNAseq data. In conclusion, our study strongly suggests a role of DNAJC17 in splicing-related processes and provides support for its recognized essential function in early development.
Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.
2010-01-01
The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284
Age-Dependent and Age-Independent Measures of Locus of Control.
ERIC Educational Resources Information Center
Sherman, Lawrence W.; Hofmann, Richard
Using a longitudinal data set obtained from 169 pre-adolescent children between the ages of 8 and 13 years, this study statistically divided locus of control into two independent components. The first component was noted as "age-dependent" (AD) and was determined by predicted values generated by regressing children's ages onto their…
A new multicriteria risk mapping approach based on a multiattribute frontier concept
Denys Yemshanov; Frank H. Koch; Yakov Ben-Haim; Marla Downing; Frank Sapio; Marty Siltanen
2013-01-01
Invasive species risk maps provide broad guidance on where to allocate resources for pest monitoring and regulation, but they often present individual risk components (such as climatic suitability, host abundance, or introduction potential) as independent entities. These independent risk components are integrated using various multicriteria analysis techniques that...
DeGutis, Joseph; Chiu, Christopher; Thai, Michelle; Esterman, Michael; Milberg, William; McGlinchey, Regina
2018-01-01
While the associations between psychological distress (e.g., posttraumatic stress disorder [PTSD], depression) and sleep dysfunction have been demonstrated in trauma-exposed populations, studies have not fully explored the associations between sleep dysfunction and the wide range of common physical and physiological changes that can occur after trauma exposure (e.g., pain, cardiometabolic risk factors). We aimed to clarify the unique associations of psychological and physical trauma sequelae with different aspects of self-reported sleep dysfunction. A comprehensive psychological and physical examination was administered to 283 combat-deployed trauma-exposed Operation Enduring Freedom/Operation Iraqi Freedom/Operation New Dawn (OEF/OIF/OND) veterans. The Pittsburgh Sleep Quality Index (PSQI) and PSQI Addendum for PTSD (PSQI-A) were administered along with measures of PTSD, depression, anxiety, pain, traumatic brain injury, alcohol use, nicotine dependence, and cardiometabolic symptoms. We first performed a confirmatory factor analysis of the PSQI and then conducted regressions with the separate PSQI factors as well as the PSQI-A to identify unique associations between trauma-related measures and the separate aspects of sleep. We found that the PSQI global score was composed of three factors: Sleep Efficiency (sleep efficiency/sleep duration), Perceived Sleep Quality (sleep quality/sleep latency/sleep medication) and Daily Disturbances (sleep disturbances/daytime dysfunction). Linear regressions demonstrated that PTSD symptoms were uniquely associated with the PSQI global score and all three factors, as well as the PSQI-A. For the other psychological distress variables, anxiety was independently associated with PSQI global as well as Sleep Efficiency, Perceived Sleep Quality, and PSQI-A, whereas depression was uniquely associated with Daily Disturbances and PSQI-A. Notably, cardiometabolic symptoms explained independent variance in PSQI global and Sleep Efficiency. These findings help lay the groundwork for further investigations of the mechanisms of sleep dysfunction in trauma-exposed individuals and may help in the development of more effective, individualized treatments.
NASA Astrophysics Data System (ADS)
Pires, Carlos; Ribeiro, Andreia
2016-04-01
An efficient nonlinear method of statistical source separation of space-distributed non-Gaussian data is proposed. The method relies on the so-called Independent Subspace Analysis (ISA) and is tested on a long time series of the stream-function field of an atmospheric quasi-geostrophic 3-level model (QG3) simulating the wintertime monthly variability of the Northern Hemisphere. ISA generalizes Independent Component Analysis (ICA) by looking for multidimensional, minimally dependent, uncorrelated, and non-Gaussian distributed statistical sources among the rotated projections or subspaces of the multivariate probability distribution of the leading principal components of the working field, whereas ICA is restricted to scalar sources. The rationale of the technique relies upon projection pursuit, looking for data projections of enhanced interest. In order to accomplish the decomposition, we maximize measures of the sources' non-Gaussianity through contrast functions given by squares of nonlinear, cross-cumulant-based correlations involving the variables spanning the sources. Sources are therefore sought that match certain nonlinear data structures. The maximized contrast function is built in such a way that it provides the minimization of the mean square of the residuals of certain nonlinear regressions. The issuing residuals, followed by spherization, provide a new set of nonlinear variable changes that are at once uncorrelated, quasi-independent, and quasi-Gaussian, representing an advantage with respect to the independent components (scalar sources) obtained by ICA, where the non-Gaussianity is concentrated into the non-Gaussian scalar sources. The new scalar sources obtained by the above process encompass the attractor's curvature, thus providing improved nonlinear model indices of the low-frequency atmospheric variability, which is useful since large-scale circulation indices are nonlinearly correlated. The non-Gaussian tested sources (dyads and triads, of two and three dimensions respectively) lead to a dense data concentration along certain curves or surfaces, near which the clusters' centroids of the joint probability density function tend to be located. That favors a better splitting of the QG3 atmospheric model's weather regimes: the positive and negative phases of the Arctic Oscillation and the positive and negative phases of the North Atlantic Oscillation. The model's leading non-Gaussian dyad is associated with a positive correlation between: 1) the squared anomaly of the extratropical jet stream and 2) the meridional jet-stream meandering. Triadic sources coming from maximized third-order cross-cumulants between pairwise uncorrelated components reveal situations of triadic wave resonance and nonlinear triadic teleconnections, only possible thanks to joint non-Gaussianity. Such triadic synergies are quantified by an information-theoretic measure: the interaction information. The model's dominant triad occurs between anomalies of: 1) the North Pole pressure, 2) the jet-stream intensity at the eastern North American boundary, and 3) the jet-stream intensity at the eastern Asian boundary. Publication supported by project FCT UID/GEO/50019/2013 - Instituto Dom Luiz.
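A toy sketch of the triadic dependence the contrast functions are built to detect: three standardized variables that are pairwise uncorrelated yet have a clearly non-zero third-order cross-cumulant (for zero-mean variables this cumulant reduces to E[xyz]). The nonlinear regression and spherization steps of the method are not reproduced.

import numpy as np

rng = np.random.default_rng(3)
n = 100000
u = rng.normal(size=n)
x = rng.normal(size=n)
y = rng.normal(size=n)
z = np.sign(x * y) * np.abs(u)      # pairwise uncorrelated with x and y, but jointly dependent

def standardize(v):
    return (v - v.mean()) / v.std()

x, y, z = map(standardize, (x, y, z))
print(np.corrcoef([x, y, z]))       # near-identity: no pairwise correlation
print(np.mean(x * y * z))           # clearly non-zero: triadic (joint) dependence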
A novel approach to analyzing fMRI and SNP data via parallel independent component analysis
NASA Astrophysics Data System (ADS)
Liu, Jingyu; Pearlson, Godfrey; Calhoun, Vince; Windemuth, Andreas
2007-03-01
There is current interest in understanding genetic influences on brain function in both the healthy and the disordered brain. Parallel independent component analysis, a new method for analyzing multimodal data, is proposed in this paper and applied to functional magnetic resonance imaging (fMRI) and a single nucleotide polymorphism (SNP) array. The method aims to identify the independent components of each modality and the relationship between the two modalities. We analyzed 92 participants, including 29 schizophrenia (SZ) patients, 13 unaffected SZ relatives, and 50 healthy controls. We found a correlation of 0.79 between one fMRI component and one SNP component. The fMRI component consists of activations in the cingulate gyrus, multiple frontal gyri, and the superior temporal gyrus. The related SNP component receives significant contributions from 9 SNPs located in genes including those coding for apolipoprotein A-I and C-III, malate dehydrogenase 1, and the gamma-aminobutyric acid alpha-2 receptor. A significant difference in the presence of this SNP component was found between the SZ group (SZ patients and their relatives) and the control group. In summary, we constructed a framework to identify the interactions between brain functional and genetic information; our findings provide new insight into understanding genetic influences on brain function in a common mental disorder.
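A simplified sketch of the linking step with hypothetical data: run ICA on each modality separately and correlate the subjects' component loadings across modalities. The actual parallel ICA optimizes both decompositions jointly with a correlation term, which is not reproduced here.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
n_subj = 92
fmri = rng.normal(size=(n_subj, 5000))    # subjects x voxels (e.g., contrast maps)
snps = rng.normal(size=(n_subj, 400))     # subjects x coded SNP values

ica_f = FastICA(n_components=8, random_state=0).fit(fmri)
ica_s = FastICA(n_components=8, random_state=0).fit(snps)
load_f = ica_f.transform(fmri)            # subject loadings of fMRI components
load_s = ica_s.transform(snps)            # subject loadings of SNP components

# Cross-modality correlation matrix between component loadings.
R = np.corrcoef(load_f.T, load_s.T)[:8, 8:]
i, j = np.unravel_index(np.abs(R).argmax(), R.shape)
print(f"best linked pair: fMRI comp {i} vs SNP comp {j}, r = {R[i, j]:.2f}")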
CdTe Based Hard X-ray Imager Technology For Space Borne Missions
NASA Astrophysics Data System (ADS)
Limousin, Olivier; Delagnes, E.; Laurent, P.; Lugiez, F.; Gevin, O.; Meuris, A.
2009-01-01
CEA Saclay has recently developed an innovative technology for CdTe based Pixelated Hard X-Ray Imagers with high spectral performance and high timing resolution for efficient background rejection when the camera is coupled to an active veto shield. This development has been done in a R&D program supported by CNES (French National Space Agency) and has been optimized towards the Simbol-X mission requirements. In the latter telescope, the hard X-Ray imager is 64 cm² and is equipped with 625µm pitch pixels (16384 independent channels) operating at -40°C in the range of 4 to 80 keV. The camera we demonstrate in this paper consists of a mosaic of 64 independent cameras, divided in 8 independent sectors. Each elementary detection unit, called Caliste, is the hybridization of a 256-pixel Cadmium Telluride (CdTe) detector with full custom front-end electronics into a unique 1 cm² component, juxtaposable on its four sides. Recently, promising results have been obtained from the first micro-camera prototypes called Caliste 64 and will be presented to illustrate the capabilities of the device as well as the expected performance of an instrument based on it. The modular design of Caliste enables to consider extended developments toward IXO type mission, according to its specific scientific requirements.
Current-mode subthreshold MOS implementation of the Herault-Jutten autoadaptive network
NASA Astrophysics Data System (ADS)
Cohen, Marc H.; Andreou, Andreas G.
1992-05-01
Translinear circuits in subthreshold MOS technology and current-mode design techniques for the implementation of neuromorphic analog network processing are investigated. The architecture, also known as the Herault-Jutten network, performs an independent component analysis and is essentially a continuous-time recursive linear adaptive filter. Analog I/O interface, weight coefficients, and adaptation blocks are all integrated on the chip. A small network with six neurons and 30 synapses was fabricated in a 2-micron n-well double-polysilicon, double-metal CMOS process. Circuit designs at the transistor level yield area-efficient implementations for neurons, synapses, and the adaptation blocks. The design methodology and constraints as well as test results from the fabricated chips are discussed.
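A software sketch of the adaptation rule that the chip implements in analog hardware; the particular odd nonlinearities used below (cube and arctangent) are the commonly cited choice for the Herault-Jutten rule and are an assumption here.

import numpy as np

rng = np.random.default_rng(5)
n, T, mu = 2, 20000, 1e-4
S = np.vstack([np.sin(0.05 * np.arange(T)), np.sign(np.sin(0.013 * np.arange(T)))])
A = np.array([[1.0, 0.5], [0.6, 1.0]])
X = A @ S                                          # observed mixtures

C = np.zeros((n, n))
for t in range(T):
    y = np.linalg.solve(np.eye(n) + C, X[:, t])    # recursive network output y = (I + C)^-1 x
    dC = mu * np.outer(y ** 3, np.arctan(y))       # Herault-Jutten style adaptation
    np.fill_diagonal(dC, 0.0)                      # only off-diagonal weights adapt
    C += dC

print("learned cross-coupling matrix C:\n", C)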
Blind source separation for ambulatory sleep recording
Porée, Fabienne; Kachenoura, Amar; Gauvrit, Hervé; Morvan, Catherine; Carrault, Guy; Senhadji, Lotfi
2006-01-01
This paper deals with the conception of a new system for sleep staging in ambulatory conditions. Sleep recording is performed by means of five electrodes: two temporal, two frontal, and a reference. This configuration avoids the chin area, to enhance the quality of the muscular signal, and the hair region, for patient convenience. The EEG, EMG and EOG signals are separated using the Independent Component Analysis approach. The system is compared to a standard sleep analysis system using polysomnographic recordings of 14 patients. An overall concordance of 67.2% is achieved between the two systems. Based on the validation results and the computational efficiency, we recommend the clinical use of the proposed system in a commercial sleep analysis platform. PMID:16617618
Silicon synaptic transistor for hardware-based spiking neural network and neuromorphic system
NASA Astrophysics Data System (ADS)
Kim, Hyungjin; Hwang, Sungmin; Park, Jungjin; Park, Byung-Gook
2017-10-01
Brain-inspired neuromorphic systems have attracted much attention as new computing paradigms for power-efficient computation. Here, we report a silicon synaptic transistor with two electrically independent gates to realize a hardware-based neural network system without any switching components. The spike-timing dependent plasticity characteristics of the synaptic devices are measured and analyzed. With the help of the device model based on the measured data, the pattern recognition capability of the hardware-based spiking neural network systems is demonstrated using the modified national institute of standards and technology handwritten dataset. By comparing systems with and without inhibitory synapse part, it is confirmed that the inhibitory synapse part is an essential element in obtaining effective and high pattern classification capability.
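A sketch of the pair-based exponential STDP window commonly used to model such spike-timing dependent plasticity measurements; the amplitudes and time constants below are assumptions, not the device's measured values.

import numpy as np

def stdp_dw(dt_ms, a_plus=0.05, a_minus=0.055, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a pre/post spike pair separated by dt = t_post - t_pre (ms)."""
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),     # pre before post: potentiation
                    -a_minus * np.exp(dt / tau_minus))   # post before pre: depression

print(stdp_dw([-40, -10, 10, 40]))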
Is Migraine A Lateralisation Defect?
Kaaro, Jani; Partonen, Timo; Naik, Paulami; Hadjikhani, Nouchine
2008-01-01
Migraine often co-occurs with patent foramen ovale (PFO), and some have suggested surgical closure as an efficient treatment for migraine. However, prospective studies do not report a radical effect of PFO surgery on migraine. Here we examined the hypothesis that PFO and migraine may co-occur as two independent manifestations of a lateralization defect during embryonic development. We measured the absolute displacement of a midline structure, the pineal gland, on brain scans of 39 migraineurs and 26 controls. We found a significant asymmetry of the pineal gland in migraineurs compared with controls. Our data suggest that migraine's circadian component and its association with PFO may be linked to a lateralization defect during embryogenesis, which could result from abnormal serotonin regulation. PMID:18695522
NASA Astrophysics Data System (ADS)
Zheng, Mingyue; Zhang, Xiaohui; Lu, Peng; Cao, Qiguang; Yuan, Yuan; Yue, Mingxing; Fu, Yiwei; Wu, Libin
2018-02-01
The present study examines the optimization of ultrasonic pre-treatment conditions with a response surface experimental design in terms of sludge disintegration efficiency (solubilization of organic components). Ultrasonic pre-treatment of residual sludge for maximum solubilization enhanced the SCOD release. Optimization of the ultrasonic pre-treatment was conducted through a Box-Behnken design (three variables, a total of 17 experiments) to determine the effects of three independent variables (power, residence time and TS) on the COD solubilization of sludge. The optimal COD of 17349.4 mg/L was obtained when the power was 534.67 W, the residence time was 10.77, and the TS was 2%, while the SE under this condition was 28792 J/kg TS.
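A sketch of the response-surface step: fit a full second-order model to coded factors by least squares, as in a Box-Behnken analysis. The design points and responses below are synthetic placeholders, not the study's data.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)
X = rng.uniform(-1, 1, size=(17, 3))       # coded power, residence time, TS
y = 15000 + 800 * X[:, 0] + 500 * X[:, 1] - 900 * (X[:, 0] ** 2) + rng.normal(0, 50, 17)

def design_matrix(X):
    # Intercept, linear, quadratic, and two-factor interaction terms.
    cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
    cols += [X[:, i] ** 2 for i in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print("fitted response-surface coefficients:", np.round(beta, 1))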
A federated design for a neurobiological simulation engine: the CBI federated software architecture.
Cornelis, Hugo; Coop, Allan D; Bower, James M
2012-01-01
Simulator interoperability and extensibility has become a growing requirement in computational biology. To address this, we have developed a federated software architecture. It is federated by its union of independent disparate systems under a single cohesive view, provides interoperability through its capability to communicate, execute programs, or transfer data among different independent applications, and supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationships and defined application programming interfaces. They allow any appropriately configured component or software application to be incorporated into a simulator. The architecture defines independent functional modules that run stand-alone. They are arranged in logical layers that naturally correspond to the occurrence of high-level data (biological concepts) versus low-level data (numerical values) and distinguish data from control functions. The modular nature of the architecture and its independence from a given technology facilitates communication about similar concepts and functions for both users and developers. It provides several advantages for multiple independent contributions to software development. Importantly, these include: (1) Reduction in complexity of individual simulator components when compared to the complexity of a complete simulator, (2) Documentation of individual components in terms of their inputs and outputs, (3) Easy removal or replacement of unnecessary or obsoleted components, (4) Stand-alone testing of components, and (5) Clear delineation of the development scope of new components.
Energy efficient engine fan component detailed design report
NASA Technical Reports Server (NTRS)
Halle, J. E.; Michael, C. J.
1981-01-01
The fan component designed for the energy efficient engine is an advanced, high-performance, single-stage system based on technology advancements in aerodynamics and structural mechanics. Two fan components were designed, both meeting the integrated core/low spool engine efficiency goal of 84.5%. The primary configuration, envisioned for a future flight propulsion system, features a shroudless, hollow blade and offers a predicted efficiency of 87.3%. A more conventional blade was designed as a backup for the integrated core/low spool demonstrator engine. The alternate blade configuration has a predicted efficiency of 86.3% for the future flight propulsion system. Both fan configurations meet the goals established for efficiency, surge margin, structural integrity, and durability.
Self-regulation and recovery: approaching an understanding of the process of recovery from stress.
Beckmann, Jürgen; Kellmann, Michael
2004-12-01
Stress has been studied extensively in psychology. Only recently, however, has research started to address the question of how individuals manage to recover from stress. Recovery from stress is analyzed as a process of self-regulation. Several individual difference variables that affect the efficiency of self-regulation have been integrated into a structured model of the recovery process. Such variables are action versus state orientation (a tendency to ruminate, e.g., about a past experience) and volitional components, such as self-determination, self-motivation, emotion control, rumination, and self-discipline. Some of these components are assumed to promote recovery from stress, whereas others are assumed to further the perseverance of stress. The model was supported by the empirical findings of three independent studies (Study 1, N=58; Study 2, N=221; Study 3, N=105). Kuhl's Action Control Scale measured action versus state orientation. Volitional components were assessed with Kuhl and Fuhrmann's Volitional Components Questionnaire. The amounts of experienced stress and recovery from stress were assessed with Kellmann and Kallus's Recovery-Stress Questionnaire. As hypothesized in the model, the disposition towards action versus state orientation was a more distant determinant of the recovery from stress and perseverance of stress. The volitional components are more proximal determinants in the recovery process. Action orientation promotes recovery from stress via adequate volitional skills, e.g., self-determination, self-motivation, and emotion control, whereas state orientation furthers a perseverance of stress through rumination and self-discipline.
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
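A sketch of the plain KECA ranking that OKECA builds on: kernel eigenvectors are ordered by their entropy contribution lambda_i*(1'e_i)^2 rather than by eigenvalue. The OKECA rotation and its gradient-ascent optimization are not reproduced, and the data and kernel width are placeholders.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 5))
K = rbf_kernel(X, gamma=0.2)

lam, E = np.linalg.eigh(K)                        # eigenvalues in ascending order
lam, E = lam[::-1], E[:, ::-1]                    # reorder to descending
entropy_contrib = lam * (E.sum(axis=0) ** 2)      # lambda_i * (1' e_i)^2, Renyi entropy terms

order = np.argsort(entropy_contrib)[::-1]         # KECA ordering (entropy, not variance)
k = 2
proj = E[:, order[:k]] * np.sqrt(lam[order[:k]])  # projections onto the top-entropy axes
print(order[:k], proj.shape)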
NASA Astrophysics Data System (ADS)
Seshadreesan, Kaushik P.; Takeoka, Masahiro; Sasaki, Masahide
2016-04-01
Device-independent quantum key distribution (DIQKD) guarantees unconditional security of a secret key without making assumptions about the internal workings of the devices used for distribution. It does so using the loophole-free violation of a Bell's inequality. The primary challenge in realizing DIQKD in practice is the detection-loophole problem that is inherent to photonic tests of Bell's inequalities over lossy channels. We revisit the proposal of Curty and Moroder [Phys. Rev. A 84, 010304(R) (2011), 10.1103/PhysRevA.84.010304] to use a linear optics-based entanglement-swapping relay (ESR) to counter this problem. We consider realistic models for the entanglement sources and photodetectors: more precisely, (a) polarization-entangled states based on pulsed spontaneous parametric down-conversion sources with infinitely many higher-order multiphoton components and a multimode spectral structure, and (b) on-off photodetectors with nonunit efficiencies and nonzero dark-count probabilities. We show that the ESR-based scheme is robust against the above imperfections and enables positive key rates at distances much larger than what is possible otherwise.
Activation of DNA damage repair pathways by murine polyomavirus.
Heiser, Katie; Nicholas, Catherine; Garcea, Robert L
2016-10-01
Nuclear replication of DNA viruses activates DNA damage repair (DDR) pathways, which are thought to detect and inhibit viral replication. However, many DNA viruses also depend on these pathways in order to optimally replicate their genomes. We investigated the relationship between murine polyomavirus (MuPyV) and components of DDR signaling pathways including CHK1, CHK2, H2AX, ATR, and DNAPK. We found that recruitment and retention of DDR proteins at viral replication centers was independent of H2AX, as well as the viral small and middle T-antigens. Additionally, infectious virus production required ATR kinase activity, but was independent of CHK1, CHK2, or DNAPK signaling. ATR inhibition did not reduce the total amount of viral DNA accumulated, but affected the amount of virus produced, indicating a defect in virus assembly. These results suggest that MuPyV may utilize a subset of DDR proteins or non-canonical DDR signaling pathways in order to efficiently replicate and assemble. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Shimoda, Jiro; Ohira, Yutaka; Yamazaki, Ryo; Laming, J. Martin; Katsuda, Satoru
2018-01-01
Linearly polarized Balmer line emissions from supernova remnant shocks are studied taking into account the energy loss of the shock owing to the production of non-thermal particles. The polarization degree depends on the downstream temperature and the velocity difference between the upstream and downstream regions. The former is derived once the line width of the broad component of the Hα emission is observed. Then, the observation of the polarization degree tells us the latter. At the same time, the estimated value of the velocity difference independently predicts the adiabatic downstream temperature that is derived from the Rankine-Hugoniot relations for adiabatic shocks. If the actually observed downstream temperature is lower than the adiabatic temperature, there is missing thermal energy that has been consumed by particle acceleration. It is shown that a larger energy-loss rate leads to more highly polarized Hα emission. Furthermore, we find that the polarized intensity ratio of Hβ to Hα also depends on the energy-loss rate and that it is independent of uncertain quantities such as the electron temperature, the effect of Lyman line trapping, and our line of sight.
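For orientation, the adiabatic strong-shock (gamma = 5/3) limits that connect the two observables discussed above are the standard Rankine-Hugoniot results

\[
\Delta v = v_u - v_d = \tfrac{3}{4}\,v_{\rm sh}, \qquad k T_d = \tfrac{3}{16}\,\mu m_{\rm H}\, v_{\rm sh}^{2},
\]

so a downstream temperature measured below the value implied by the observed velocity difference indicates thermal energy lost to particle acceleration; the loss-modified relations derived in the paper are not reproduced here.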
Tensorial extensions of independent component analysis for multisubject FMRI analysis.
Beckmann, C F; Smith, S M
2005-03-01
We discuss model-free analysis of multisubject or multisession FMRI data by extending the single-session probabilistic independent component analysis model (PICA; Beckmann and Smith, 2004. IEEE Trans. on Medical Imaging, 23 (2) 137-152) to higher dimensions. This results in a three-way decomposition that represents the different signals and artefacts present in the data in terms of their temporal, spatial, and subject-dependent variations. The technique is derived from and compared with parallel factor analysis (PARAFAC; Harshman and Lundy, 1984. In Research methods for multimode data analysis, chapter 5, pages 122-215. Praeger, New York). Using simulated data as well as data from multisession and multisubject FMRI studies we demonstrate that the tensor PICA approach is able to efficiently and accurately extract signals of interest in the spatial, temporal, and subject/session domain. The final decompositions improve upon PARAFAC results in terms of greater accuracy, reduced interference between the different estimated sources (reduced cross-talk), robustness (against deviations of the data from modeling assumptions and against overfitting), and computational speed. On real FMRI 'activation' data, the tensor PICA approach is able to extract plausible activation maps, time courses, and session/subject modes as well as provide a rich description of additional processes of interest such as image artefacts or secondary activation patterns. The resulting data decomposition gives simple and useful representations of multisubject/multisession FMRI data that can aid the interpretation and optimization of group FMRI studies beyond what can be achieved using model-based analysis techniques.
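A plain PARAFAC baseline for such three-way (space x time x subject) data, written against tensorly's parafac interface as an assumption; the probabilistic tensor-PICA estimation itself is not reproduced, and the data are synthetic.

import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(8)
X = tl.tensor(rng.normal(size=(500, 120, 12)))   # voxels x timepoints x subjects

cp = parafac(X, rank=4, n_iter_max=200)
spatial, temporal, subject = cp.factors          # one factor matrix per mode
print(spatial.shape, temporal.shape, subject.shape)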
A general probabilistic model for group independent component analysis and its estimation methods
Guo, Ying
2012-01-01
Independent component analysis (ICA) has become an important tool for analyzing data from functional magnetic resonance imaging (fMRI) studies. ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix and the uncertainty in between-subjects variability in fMRI data. We present a general probabilistic ICA (PICA) model that can accommodate varying group structures of multi-subject spatio-temporal processes. An advantage of the proposed model is that it can flexibly model various types of group structures in different underlying neural source signals and under different experimental conditions in fMRI studies. A maximum likelihood method is used for estimating this general group ICA model. We propose two EM algorithms to obtain the ML estimates. The first is an exact EM algorithm, which provides an exact E-step and an explicit noniterative M-step. The second is a variational approximation EM algorithm, which is computationally more efficient than the exact EM. In simulation studies, we first compare the performance of the proposed general group PICA model and the existing probabilistic group ICA approach. We then compare the two proposed EM algorithms and show that the variational approximation EM achieves comparable accuracy to the exact EM with significantly less computation time. An fMRI data example is used to illustrate application of the proposed methods. PMID:21517789
The cost of space independence in P300-BCI spellers.
Chennu, Srivas; Alsufyani, Abdulmajeed; Filetti, Marco; Owen, Adrian M; Bowman, Howard
2013-07-29
Though non-invasive EEG-based Brain Computer Interfaces (BCI) have been researched extensively over the last two decades, most designs require control of spatial attention and/or gaze on the part of the user. In healthy adults, we compared the offline performance of a space-independent P300-based BCI for spelling words using Rapid Serial Visual Presentation (RSVP), to the well-known space-dependent Matrix P300 speller. EEG classifiability with the RSVP speller was as good as with the Matrix speller. While the Matrix speller's performance was significantly reliant on early, gaze-dependent Visual Evoked Potentials (VEPs), the RSVP speller depended only on the space-independent P300b. However, there was a cost to true spatial independence: the RSVP speller was less efficient in terms of spelling speed. The advantage of space independence in the RSVP speller was concomitant with a marked reduction in spelling efficiency. Nevertheless, with key improvements to the RSVP design, truly space-independent BCIs could approach efficiencies on par with the Matrix speller. With sufficiently high letter spelling rates fused with predictive language modelling, they would be viable for potential applications with patients unable to direct overt visual gaze or covert attentional focus.
NASA Astrophysics Data System (ADS)
Serio, C.; Masiello, G.; Camy-Peyret, C.; Jacquette, E.; Vandermarcq, O.; Bermudo, F.; Coppens, D.; Tobin, D.
2018-02-01
The problem of characterizing and estimating the instrumental or radiometric noise of satellite high spectral resolution infrared spectrometers directly from Earth observations is addressed in this paper. An approach has been developed which relies on Principal Component Analysis (PCA) with a suitable criterion to select the optimal number of PC scores. Different selection criteria have been set up and analysed, based on the estimation theory of Least Squares and/or the Maximum Likelihood principle. The approach is independent of any forward model and/or radiative transfer calculations. The PCA is used to define an orthogonal basis, which, in turn, is used to derive an optimal linear reconstruction of the observations. The residual vector, that is, the observation vector minus the calculated or reconstructed one, is then used to estimate the instrumental noise. It will be shown that the use of the spectral residuals to assess the radiometric instrumental noise leads to efficient estimators, which are largely independent of possible departures of the true noise from that assumed a priori to model the observational covariance matrix. Application to the Infrared Atmospheric Sounding Interferometer (IASI) has been considered. A series of case studies has been set up, which make use of IASI observations. As a major result, the analysis confirms the high stability and radiometric performance of IASI. The approach also proved to be efficient in characterizing noise features due to mechanical micro-vibrations of the beam splitter of the IASI instrument.
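A sketch of the residual-based idea with synthetic spectra: reconstruct each spectrum from the leading principal components and take the per-channel statistics of the residual as the noise estimate. Choosing the number of retained components optimally is the paper's contribution and is fixed arbitrarily here.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
true = np.outer(rng.normal(size=(5000,)), np.ones(800)) * np.sin(np.linspace(0, 6, 800))
obs = true + rng.normal(scale=0.05, size=true.shape)     # spectra with known noise sigma = 0.05

k = 10
pca = PCA(n_components=k).fit(obs)
recon = pca.inverse_transform(pca.transform(obs))        # optimal linear reconstruction
residual = obs - recon
noise_per_channel = residual.std(axis=0, ddof=1)
print(noise_per_channel.mean())                          # should be close to 0.05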
Assouline, Shmuel; Or, Dani
2013-01-01
Plant gas exchange is a key process shaping global hydrological and carbon cycles and is often characterized by plant water use efficiency (WUE - the ratio of CO2 gain to water vapor loss). The plant fossil record suggests that plant adaptation to changing atmospheric CO2 involved correlated evolution of stomata density (d) and size (s), and related maximal aperture, amax. We interpreted the fossil record of correlated evolution in s and d during the Phanerozoic to quantify impacts on gas conductance affecting plant transpiration, E, and CO2 uptake, A, independently, and consequently, on plant WUE. A shift in stomata configuration from large s-low d to small s-high d in response to decreasing atmospheric CO2 resulted in large changes in plant gas exchange characteristics. The relationships between gas conductance, gws, A and E and maximal relative transpiring leaf area, (amax⋅d), exhibited hysteretic-like behavior. The new WUE trend derived from independent estimates of A and E differs from established WUE-CO2 trends for atmospheric CO2 concentrations exceeding 1,200 ppm. In contrast with a nearly linear decrease in WUE with decreasing CO2 obtained by standard methods, the newly estimated WUE trend exhibits remarkably stable values for an extended geologic period during which atmospheric CO2 dropped from 3,500 to 1,200 ppm. Pending additional tests, the findings may affect projected impacts of increased atmospheric CO2 on components of the global hydrological cycle.
NASA Astrophysics Data System (ADS)
Wang, Rongrong; Chen, Yan; Feng, Daiwei; Huang, Xiaoyu; Wang, Junmin
This paper presents the development and experimental characterizations of a prototyping pure electric ground vehicle, which is equipped with four independently actuated in-wheel motors (FIAIWM) and is powered by a 72 V 200 Ah LiFeYPO4 battery pack. Such an electric ground vehicle (EGV) employs four in-wheel (or hub) motors to independently drive/brake the four wheels and is one of the promising vehicle architectures primarily due to its actuation flexibility, energy efficiency, and performance potentials. Experimental data obtained from the EGV chassis dynamometer tests were employed to generate the in-wheel motor torque response and power efficiency maps in both driving and regenerative braking modes. A torque distribution method is proposed to show the potentials of optimizing the FIAIWM EGV operational energy efficiency by utilizing the actuation flexibility and the characterized in-wheel motor efficiency and torque response.
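A toy sketch of an efficiency-map-based torque split: for a given total torque demand and wheel speed, brute-force the front/rear split that minimizes total electrical power. The efficiency map below is a made-up placeholder, not the characterized in-wheel motor data.

import numpy as np

def efficiency(torque_nm, speed_rad_s):
    """Placeholder efficiency map, peaked at mid torque and mid speed."""
    return 0.95 - 0.002 * (torque_nm - 60) ** 2 / 60 - 0.0005 * (speed_rad_s - 50) ** 2 / 50

def best_split(total_torque, speed, steps=101):
    splits = np.linspace(0.0, 1.0, steps)
    best = None
    for s in splits:
        t_front, t_rear = s * total_torque / 2, (1 - s) * total_torque / 2   # per-wheel torques
        p = sum(2 * t * speed / max(efficiency(t, speed), 1e-3)
                for t in (t_front, t_rear))                                   # total electrical power
        if best is None or p < best[0]:
            best = (p, s)
    return best   # (minimum power, front-axle share)

print(best_split(total_torque=200.0, speed=50.0))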
Quenching And Luminescence Efficiency Of Nd3+ In YAG
NASA Astrophysics Data System (ADS)
Lupei, Voicu; Lupei, Aurelia; Georgescu, Serban; Ionescu, Christian I.; Yen, William M.
1989-05-01
The effect of the concentration luminescence quenching of the 4F3/2 level of Nd3+ in YAG on the relative efficiency is presented. Based on an analysis of the decay curves in terms of energy transfer theory, an analytical expression for the relative luminescence efficiency is obtained. In the low concentration range (up to ~1.5 at% Nd3+), the efficiency decreases linearly as the Nd3+ concentration increases. It is also stressed that pair quenching contributes about 20% of the nonradiative energy transfer losses. The quantum efficiency of luminescence is an important parameter for the characterization of laser active media; its lowering is due to either multiphonon relaxation or energy transfer processes. The multiphonon non-radiative probability depends on the energy gap between levels, on the phonon energy, and on temperature; at low activator doping it is usually practically independent of concentration. On the other hand, energy transfer losses show a marked dependence on activator concentration, a fact that severely limits the range of useful concentrations of active centers in some laser crystals. In the YAG:Nd case, the minimum energy gap between the Stark components of the 4F3/2 level and the next lower level 4F15/2 is about 4700 cm-1. Since in YAG the phonons most effectively coupled to the rare earth ions have an energy of about 700 cm-1, the probability of multiphonon relaxation from the 4F3/2 level, even at room temperature, is very low, and therefore for low Nd3+ concentrations the quantum efficiency is expected to be close to 1.
Highly Efficient Room Temperature Spin Injection Using Spin Filtering in MgO
NASA Astrophysics Data System (ADS)
Jiang, Xin
2007-03-01
Efficient electrical spin injection into GaAs/AlGaAs quantum well structures was demonstrated using CoFe/MgO tunnel spin injectors at room temperature. The spin polarization of the injected electron current was inferred from the circular polarization of electroluminescence from the quantum well. Polarization values as high as 57% at 100 K and 47% at 290 K were obtained in a perpendicular magnetic field of 5 Tesla. The interface between the tunnel spin injector and GaAs remained stable even after thermal annealing at 400 °C. The temperature dependence of the electron-hole recombination time and the electron spin relaxation time in the quantum well was measured using time-resolved optical techniques. By taking these properties of the quantum well into account, the intrinsic spin injection efficiency can be deduced. We conclude that spin injection from a CoFe/MgO spin injector is nearly independent of temperature and, moreover, is highly efficient, with an efficiency of ~70% over the temperature range studied (10 K to room temperature). Tunnel spin injectors are thus highly promising components of future semiconductor spintronic devices. Collaborators: Roger Wang^1,3, Gian Salis^2, Robert Shelby^1, Roger Macfarlane^1, Seth Bank^3, Glenn Solomon^3, James Harris^3, Stuart S. P. Parkin^1. ^1 IBM Almaden Research Center, San Jose, CA 95120; ^2 IBM Zurich Research Laboratory, Säumerstrasse 4, 8803 Rüschlikon, Switzerland; ^3 Solid State and Photonics Laboratory, Stanford University, Stanford, CA 94305
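For context, a commonly used relation in the spin-LED literature (quoted here as an illustration, not from the paper itself) connects the measured electroluminescence polarization to the injected spin polarization through the recombination time τ_r and spin relaxation time τ_s:
\[
P_{\mathrm{EL}} = \frac{I_{\sigma^{+}} - I_{\sigma^{-}}}{I_{\sigma^{+}} + I_{\sigma^{-}}}, \qquad P_{\mathrm{inj}} \approx P_{\mathrm{EL}}\left(1 + \frac{\tau_r}{\tau_s}\right),
\]
so that measuring τ_r and τ_s with time-resolved optics allows the intrinsic injection efficiency to be deduced from the observed circular polarization.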
Lifetime Reliability Prediction of Ceramic Structures Under Transient Thermomechanical Loads
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Jadaan, Osama J.; Gyekenyesi, John P.
2005-01-01
An analytical methodology is developed to predict the probability of survival (reliability) of ceramic components subjected to harsh thermomechanical loads that can vary with time (transient reliability analysis). This capability enables more accurate prediction of ceramic component integrity against fracture in situations such as turbine startup and shutdown, operational vibrations, atmospheric reentry, or other rapid heating or cooling situations (thermal shock). The transient reliability analysis methodology developed herein incorporates the following features: fast-fracture transient analysis (reliability analysis without slow crack growth, SCG); transient analysis with SCG (reliability analysis with time-dependent damage due to SCG); a computationally efficient algorithm to compute the reliability for components subjected to repeated transient loading (block loading); cyclic fatigue modeling using a combined SCG and Walker fatigue law; proof testing for transient loads; and Weibull and fatigue parameters that are allowed to vary with temperature or time. Component-to-component variation in strength (stochastic strength response) is accounted for with the Weibull distribution, and either the principle of independent action or the Batdorf theory is used to predict the effect of multiaxial stresses on reliability. The reliability analysis can be performed either as a function of the component surface (for surface-distributed flaws) or component volume (for volume-distributed flaws). The transient reliability analysis capability has been added to the NASA CARES/ Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code. CARES/Life was also updated to interface with commercially available finite element analysis software, such as ANSYS, when used to model the effects of transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.
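As a hedged sketch of the underlying statistical model (the standard two-parameter Weibull volume-flaw form, not the full transient CARES/Life formulation), the fast-fracture survival probability of a component can be written as
\[
P_S = \exp\!\left[-\int_V \left(\frac{\sigma_{\mathrm{eq}}(\mathbf{x})}{\sigma_{0V}}\right)^{m_V} \mathrm{d}V\right],
\]
where σ_eq is an equivalent (multiaxial) stress, m_V the Weibull modulus, and σ_0V the scale parameter; transient analysis evaluates this over a time-dependent stress history with strengths degraded by slow crack growth.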
Frickenhaus, Stephan; Kannan, Srinivasaraghavan; Zacharias, Martin
2009-02-01
A direct conformational clustering and mapping approach for peptide conformations based on backbone dihedral angles has been developed and applied to compare conformational sampling of Met-enkephalin using two molecular dynamics (MD) methods. Efficient clustering in dihedrals has been achieved by evaluating all combinations resulting from independent clustering of each dihedral angle distribution, thus resolving all conformational substates. In contrast, Cartesian clustering was unable to accurately distinguish between all substates. Projection of clusters on dihedral principal component (PCA) subspaces did not result in efficient separation of highly populated clusters. However, representation in a nonlinear metric by Sammon mapping was able to separate well the 48 highest populated clusters in just two dimensions. In addition, this approach also allowed us to visualize the transition frequencies between clusters efficiently. Significantly, higher transition frequencies between more distinct conformational substates were found for a recently developed biasing-potential replica exchange MD simulation method allowing faster sampling of possible substates compared to conventional MD simulations. Although the number of theoretically possible clusters grows exponentially with peptide length, in practice, the number of clusters is only limited by the sampling size (typically much smaller), and therefore the method is well suited also for large systems. The approach could be useful to rapidly and accurately evaluate conformational sampling during MD simulations, to compare different sampling strategies and eventually to detect kinetic bottlenecks in folding pathways.
Independent EEG Sources Are Dipolar
Delorme, Arnaud; Palmer, Jason; Onton, Julie; Oostenveld, Robert; Makeig, Scott
2012-01-01
Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’ defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison). PMID:22355308
NASA Astrophysics Data System (ADS)
E, Jianwei; Bao, Yanling; Ye, Jimin
2017-10-01
As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and the fluctuation of its price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models often fail to predict it accurately. To address this, a hybrid method is proposed in this paper that combines variational mode decomposition (VMD), independent component analysis (ICA), and the autoregressive integrated moving average (ARIMA) model, called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future values. The major steps are as follows. First, applying VMD to the original signal (the crude oil price), the mode functions are decomposed adaptively. Second, independent components are separated by ICA, and how these independent components affect the crude oil price is analyzed. Finally, the crude oil price is forecast with the ARIMA model; the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
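A minimal Python sketch of such a decompose-analyze-forecast pipeline is given below; it is illustrative only, with a simple trend/residual split standing in for VMD (which would normally come from a dedicated package), scikit-learn's FastICA for the ICA step, and a per-mode statsmodels ARIMA fit. The synthetic price series and all parameter choices are assumptions.

import numpy as np
from sklearn.decomposition import FastICA
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(0, 1, 400)) + 60.0          # synthetic "crude oil price"

# Stage 1 (stand-in for VMD): split the series into a slow trend and a residual mode.
window = 20
trend = np.convolve(price, np.ones(window) / window, mode="same")
modes = np.column_stack([trend, price - trend])          # (n_samples, n_modes)

# Stage 2: ICA separates statistically independent drivers from the modes,
# i.e. the "influence factor" analysis step of the hybrid method.
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(modes)
print("independent component matrix shape:", sources.shape)

# Stage 3: fit an ARIMA model to each mode and sum the forecasts.
horizon = 10
forecast = np.zeros(horizon)
for k in range(modes.shape[1]):
    fit = ARIMA(modes[:, k], order=(1, 1, 1)).fit()
    forecast += fit.forecast(steps=horizon)
print("10-step-ahead price forecast:", np.round(forecast, 2))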
Podlesnik, Christopher A; Fleet, James D
2014-09-01
Behavioral momentum theory asserts Pavlovian stimulus-reinforcer relations govern the persistence of operant behavior. Specifically, resistance to conditions of disruption (e.g., extinction, satiation) reflects the relation between discriminative stimuli and the prevailing reinforcement conditions. The present study assessed whether Pavlovian stimulus-reinforcer relations govern resistance to disruption in pigeons by arranging both response-dependent and -independent food reinforcers in two components of a multiple schedule. In one component, discrete-stimulus changes preceded response-independent reinforcers, paralleling methods that reduce Pavlovian conditioned responding to contextual stimuli. Compared to the control component with no added stimuli preceding response-independent reinforcement, response rates increased as discrete-stimulus duration increased (0, 5, 10, and 15 s) across conditions. Although resistance to extinction decreased as stimulus duration increased in the component with the added discrete stimulus, further tests revealed no effect of discrete stimuli, including other disrupters (presession food, intercomponent food, modified extinction) and reinstatement designed to control for generalization decrement. These findings call into question a straightforward conception that the stimulus-reinforcer relations governing resistance to disruption reflect the same processes as Pavlovian conditioning, as asserted by behavioral momentum theory. © Society for the Experimental Analysis of Behavior.
Energy production advantage of independent subcell connection for multijunction photovoltaics
Warmann, Emily C.; Atwater, Harry A.
2016-07-07
Increasing the number of subcells in a multijunction or "spectrum splitting" photovoltaic improves efficiency under the standard AM1.5D design spectrum, but it can lower efficiency under spectra that differ from the standard if the subcells are connected electrically in series. Using atmospheric data and the SMARTS multiple scattering and absorption model, we simulated sunny day spectra over 1 year for five locations in the United States and determined the annual energy production of spectrum splitting ensembles with 2-20 subcells connected electrically in series or independently. While electrically independent subcells have a small efficiency advantage over series-connected ensembles under the AM1.5D design spectrum, they have a pronounced energy production advantage under realistic spectra over 1 year. Simulated energy production increased with subcell number for the electrically independent ensembles, but it peaked at 8-10 subcells for those connected in series. As a result, electrically independent ensembles with 20 subcells produce up to 27% more energy annually than the series-connected 20-subcell ensemble. This energy production advantage persists when clouds are accounted for.
Characterization of Strombolian events by using independent component analysis
NASA Astrophysics Data System (ADS)
Ciaramella, A.; de Lauro, E.; de Martino, S.; di Lieto, B.; Falanga, M.; Tagliaferri, R.
2004-10-01
We apply Independent Component Analysis (ICA) to seismic signals recorded at Stromboli volcano. First, we show how ICA works on synthetic signals generated by dynamical systems. We then show that Strombolian signals, both tremor and explosions, in the high frequency band (>0.5 Hz) are similar in the time domain. This lends some support to the organ-pipe model for the source of these events. Moreover, we are able to recognize in the tremor signals a low frequency component (<0.5 Hz), with a well-defined peak corresponding to 30 s.
NASA Astrophysics Data System (ADS)
Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei
2017-07-01
In this study, extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with a radial basis function (RBF) kernel was applied for classification. Based on the two major independent components, geometrical features were extracted using the EAPs method. Three morphological attributes were calculated for each independent component: area, standard deviation, and moment of inertia. The extracted geometrical features were classified with the RLS approach and with the commonly used LIB-SVM support vector machine library. Worldview-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification, about 2% higher than the EAPs with principal component analysis (PCA) method and 6% higher than APs on the original high-resolution multispectral data. Moreover, the results suggest that both the GURLS and LIB-SVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be longer than that of the LIB-SVM library. This study should be helpful for the classification of high-resolution multispectral satellite remote sensing images.
Analysis of independent components of cognitive event related potentials in a group of ADHD adults.
Markovska-Simoska, Silvana; Pop-Jordanova, Nada; Pop-Jordanov, Jordan
In the last decade, many studies have tried to define the neural correlates of attention deficit hyperactivity disorder (ADHD). The main aim of this study is to compare the ERP independent components across the four QEEG subtypes in a group of ADHD adults, as a basis for defining the corresponding endophenotypes in the ADHD population. Sixty-seven adults diagnosed with ADHD according to the DSM-IV criteria and 50 age-matched control subjects participated in the study. The brain activity of the subjects was recorded with a 19-channel quantitative electroencephalography (QEEG) system during two neuropsychological tasks (visual and emotional continuous performance tests). The ICA method was applied to separate the independent ERP components. The components were associated with distinct psychological operations, such as engagement (P3bP component), comparison (vcomTL and vcomTR), motor inhibition (P3supF), and monitoring (P4monCC) operations. The ERP results point to a disturbance of executive functioning in the investigated ADHD group, reflected in significantly lower amplitudes and longer latencies for the engagement (P3bP), motor inhibition (P3supF), and monitoring (P4monCC) components. QEEG subtype IV showed the most significant ERP differences compared to the other subtypes. These pronounced differences in the ERP independent components for QEEG subtype IV relative to the other three subtypes raise many questions and are a subject for future research. This study aims to advance and facilitate the use of neurophysiological procedures (QEEG and ERPs) in clinical practice as objective measures of ADHD for better assessment, subtyping, and treatment.
Energy efficient engine low-pressure compressor component test hardware detailed design report
NASA Technical Reports Server (NTRS)
Michael, C. J.; Halle, J. E.
1981-01-01
The aerodynamic and mechanical design of the low-pressure compressor component of the Energy Efficient Engine is described. The component was designed to meet the requirements of the Flight Propulsion System while maintaining a low-cost approach in providing a low-pressure compressor design for the Integrated Core/Low Spool test required in the Energy Efficient Engine Program. The resulting low-pressure compressor component design meets or exceeds all design goals with the exception of surge margin. In addition, the expense of hardware fabrication for the Integrated Core/Low Spool test has been minimized through the use of existing minor part hardware.
Energy efficient engine component development and integration program
NASA Technical Reports Server (NTRS)
1981-01-01
Accomplishments in the Energy Efficient Engine Component Development and Integration program during the period of April 1, 1981 through September 30, 1981 are discussed. The major topics considered are: (1) propulsion system analysis, design, and integration; (2) engine component analysis, design, and development; (3) core engine tests; and (4) integrated core/low spool testing.
Decomposed fuzzy systems and their application in direct adaptive fuzzy control.
Hsueh, Yao-Chu; Su, Shun-Feng; Chen, Ming-Chang
2014-10-01
In this paper, a novel fuzzy structure termed the decomposed fuzzy system (DFS) is proposed to act as the fuzzy approximator for adaptive fuzzy control systems. The proposed structure decomposes each fuzzy variable into layers of fuzzy systems, with each layer characterizing one traditional fuzzy set. Similar to forming fuzzy rules in traditional fuzzy systems, layers from different variables form the so-called component fuzzy systems. DFS provides more adjustable parameters to facilitate possible adaptation in fuzzy rules, but without introducing a learning burden, because the component fuzzy systems are independent and learning effects remain minimally distributed among them. It can be seen from our experiments that even when the rule number increases, the learning time in terms of cycles remains almost constant. It can also be found that the function approximation capability and learning efficiency of the DFS are much better than those of traditional fuzzy systems when employed in adaptive fuzzy control systems. In addition, to further reduce the computational burden, a simplified DFS is proposed in this paper to satisfy the real-time constraints required in many applications. From our simulation results, it can be seen that the simplified DFS performs fairly well with a more concise decomposition structure.
Zu, Qin; Zhao, Chun-Jiang; Deng, Wei; Wang, Xiu
2013-05-01
The automatic identification of weeds forms the basis for precision spraying of infested crops. The canopy spectral reflectance in the 350-2,500 nm band of two strains of cabbage and five kinds of weeds (barnyard grass, setaria, crabgrass, goosegrass, and pigweed) was acquired with an ASD spectrometer. According to the spectral curve characteristics, the data in different bands were compressed at different levels to improve computational efficiency. First, the spectra were denoised with different orders of the multiple scattering correction (MSC) method and the Savitzky-Golay (SG) convolution smoothing method set with different parameters; then the model was built by combining the principal component analysis (PCA) method to extract principal components; finally, all kinds of plants were classified using the soft independent modeling of class analogy (SIMCA) taxonomy, and the classification results were compared. The test results indicate that after pretreatment of the spectral data with the combination of MSC and SG set with a 3rd order, 5th-degree polynomial and 21 smoothing points, and with the top 10 principal components extracted by PCA as the classification model input variables, a 100% correct classification rate was achieved; the method is able to identify cabbage and several kinds of common weeds quickly and nondestructively.
Partitioning diversity into independent alpha and beta components.
Jost, Lou
2007-10-01
Existing general definitions of beta diversity often produce a beta with a hidden dependence on alpha. Such a beta cannot be used to compare regions that differ in alpha diversity. To avoid misinterpretation, existing definitions of alpha and beta must be replaced by a definition that partitions diversity into independent alpha and beta components. Such a unique definition is derived here. When these new alpha and beta components are transformed into their numbers equivalents (effective numbers of elements), Whittaker's multiplicative law (alpha x beta = gamma) is necessarily true for all indices. The new beta gives the effective number of distinct communities. The most popular similarity and overlap measures of ecology (Jaccard, Sorensen, Horn, and Morisita-Horn indices) are monotonic transformations of the new beta diversity. Shannon measures follow deductively from this formalism and do not need to be borrowed from information theory; they are shown to be the only standard diversity measures which can be decomposed into meaningful independent alpha and beta components when community weights are unequal.
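A small numerical sketch of this partitioning (Shannon case, equal community weights; the species abundances below are made up) is:

import numpy as np

def hill_shannon(p):
    # Numbers equivalent (Hill number of order 1): exponential of Shannon entropy.
    p = p[p > 0]
    return np.exp(-np.sum(p * np.log(p)))

communities = np.array([
    [0.6, 0.3, 0.1, 0.0],     # relative abundances, community 1
    [0.0, 0.1, 0.3, 0.6],     # relative abundances, community 2
])

# Alpha: mean within-community numbers equivalent (geometric mean for q = 1);
# gamma: numbers equivalent of the pooled assemblage; beta = gamma / alpha.
alpha = np.exp(np.mean([np.log(hill_shannon(c)) for c in communities]))
gamma = hill_shannon(communities.mean(axis=0))
beta = gamma / alpha          # effective number of distinct communities
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}, gamma = {gamma:.3f}")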
NASA Astrophysics Data System (ADS)
Guo, Jinyun; Mu, Dapeng; Liu, Xin; Yan, Haoming; Dai, Honglei
2014-08-01
The Level-2 monthly GRACE gravity field models issued by the Center for Space Research (CSR), GeoForschungsZentrum (GFZ), and Jet Propulsion Laboratory (JPL) are treated as observations used to extract the equivalent water height (EWH) with robust independent component analysis (RICA). Smoothing radii of 300, 400, and 500 km are tested in the Gaussian smoothing kernel function to reduce the observation noise. Three independent components are obtained by RICA in the spatial domain; the first component matches the geophysical signal, and the other two correspond to the north-south stripe errors and other noise. The first mode is used to estimate the EWHs of CSR, JPL, and GFZ, and is compared with the classical empirical decorrelation method (EDM). The EWH standard deviations for the 12 months of 2010 extracted by RICA and EDM show obvious fluctuations. The results indicate that sharp EWH changes in some areas have an important global effect, as in the Amazon, Mekong, and Zambezi basins.
Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li
2009-02-01
Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The well-accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data in which there are interdependent sources and confounding factors. This interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer-generated data and real resting-state fMRI data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating activation induced by each task as well as by both tasks.
Wang, Gang; Teng, Chaolin; Li, Kuo; Zhang, Zhonglin; Yan, Xiangguo
2016-09-01
The recorded electroencephalography (EEG) signals are usually contaminated by electrooculography (EOG) artifacts. In this paper, by using independent component analysis (ICA) and multivariate empirical mode decomposition (MEMD), an ICA-based MEMD method is proposed to remove EOG artifacts (EOAs) from multichannel EEG signals. First, the EEG signals were decomposed by the MEMD into multiple multivariate intrinsic mode functions (MIMFs). The EOG-related components were then extracted by reconstructing the MIMFs corresponding to EOAs. After performing ICA on the EOG-related signals, the EOG-linked independent components were identified and rejected. Finally, the clean EEG signals were reconstructed by applying the inverse transforms of ICA and MEMD. The results on simulated and real data suggest that the proposed method can successfully eliminate EOAs from EEG signals and preserve useful EEG information with little loss. Compared with other existing techniques, the proposed method achieved a clear improvement in terms of the increase in signal-to-noise ratio and the decrease in mean square error after removing EOAs.
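A simplified, ICA-only sketch of this idea (omitting the MEMD stage, using synthetic signals and an assumed correlation threshold for flagging ocular components) might look like:

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_samples, n_channels = 2000, 8

eeg = rng.normal(0, 1, (n_samples, n_channels))          # toy background EEG
blinks = (rng.random(n_samples) < 0.005).astype(float)   # sparse blink events
eog = np.convolve(blinks, np.hanning(50), mode="same")   # blink-shaped EOG artifact
contaminated = eeg + np.outer(eog, np.linspace(2.0, 0.2, n_channels))

ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(contaminated)                # (n_samples, n_components)

# Zero the components most correlated with the EOG reference, then back-project.
corr = np.array([abs(np.corrcoef(sources[:, k], eog)[0, 1]) for k in range(n_channels)])
sources[:, corr > 0.6] = 0.0
cleaned = ica.inverse_transform(sources)
print("flagged EOG-linked components:", np.where(corr > 0.6)[0])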
Advances in audio source separation and multisource audio content retrieval
NASA Astrophysics Data System (ADS)
Vincent, Emmanuel
2012-06-01
Audio source separation aims to extract the signals of individual sound sources from a given recording. In this paper, we review three recent advances which improve the robustness of source separation in real-world challenging scenarios and enable its use for multisource content retrieval tasks, such as automatic speech recognition (ASR) or acoustic event detection (AED) in noisy environments. We present a Flexible Audio Source Separation Toolkit (FASST) and discuss its advantages compared to earlier approaches such as independent component analysis (ICA) and sparse component analysis (SCA). We explain how cues as diverse as harmonicity, spectral envelope, temporal fine structure or spatial location can be jointly exploited by this toolkit. We subsequently present the uncertainty decoding (UD) framework for the integration of audio source separation and audio content retrieval. We show how the uncertainty about the separated source signals can be accurately estimated and propagated to the features. Finally, we explain how this uncertainty can be efficiently exploited by a classifier, both at the training and the decoding stage. We illustrate the resulting performance improvements in terms of speech separation quality and speaker recognition accuracy.
Ivanov, Konstantin I; Tselykh, Timofey V; Heino, Tapio I; Mäkinen, Kristiina
2005-07-27
RNA interference (RNAi) is mediated by a multicomponent RNA-induced silencing complex (RISC). Here we examine the phosphorylation state of three Drosophila RISC-associated proteins, VIG, R2D2 and a truncated form of Argonaute2 devoid of the nonconserved N-terminal glutamine-rich domain. We show that of the three studied proteins, only VIG is phosphorylated in cultured Drosophila cells. We also demonstrate that the phosphorylation state of VIG remains unchanged after cell transfection with exogenous dsRNA. A sequence similarity search revealed that VIG shares significant similarity with the human phosphoprotein Ki-1/57, a known in vivo substrate for protein kinase C (PKC). In vitro kinase assays followed by tryptic phosphopeptide mapping showed that PKC could efficiently phosphorylate VIG on multiple sites, suggesting PKC as a candidate kinase for VIG phosphorylation in vivo. Taken together, our results identify the RISC component VIG as a novel kinase substrate in cultured Drosophila cells and suggest a possible involvement of PKC in its phosphorylation.
A comparison of PCA/ICA for data preprocessing in remote sensing imagery classification
NASA Astrophysics Data System (ADS)
He, Hui; Yu, Xianchuan
2005-10-01
In this paper a performance comparison of a variety of data preprocessing algorithms for remote sensing image classification is presented. The selected algorithms are principal component analysis (PCA) and three different independent component analyses: Fast-ICA (Aapo Hyvarinen, 1999), Kernel-ICA (KCCA and KGV; Bach & Jordan, 2002), and EFFICA (Aiyou Chen & Peter Bickel, 2003). These algorithms were applied to a remote sensing image (1600×1197 pixels) obtained from Shunyi, Beijing. For classification, an MLC method is used on the raw and preprocessed data. The results show that classification with the preprocessed data gives more reliable results than classification with the raw data; among the preprocessing algorithms, the ICA algorithms improve on PCA, and EFFICA performs better than the others. The convergence of these ICA algorithms (for more than a million data points) is also studied; the results show that EFFICA converges much faster than the others. Furthermore, because EFFICA is a one-step maximum likelihood estimate (MLE) that reaches asymptotic Fisher efficiency, its computation is small and its memory demand is greatly reduced, which settles the "out of memory" problem that occurred with the other algorithms.
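A hedged sketch of this kind of comparison on synthetic data (QDA standing in for the Gaussian maximum-likelihood classifier, and only PCA and FastICA compared; the "band" and sample counts are assumptions, not the Shunyi imagery) is:

from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, FastICA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a multispectral scene: 12 "bands", 4 land-cover classes.
X, y = make_classification(n_samples=3000, n_features=12, n_informative=6,
                           n_redundant=0, n_classes=4, n_clusters_per_class=1,
                           random_state=0)

for name, transform in [("raw", None),
                        ("PCA", PCA(n_components=6)),
                        ("FastICA", FastICA(n_components=6, random_state=0))]:
    Z = X if transform is None else transform.fit_transform(X)
    score = cross_val_score(QuadraticDiscriminantAnalysis(), Z, y, cv=5).mean()
    print(f"{name:8s} mean cross-validated accuracy: {score:.3f}")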
Component Analyses Using Single-Subject Experimental Designs: A Review
ERIC Educational Resources Information Center
Ward-Horner, John; Sturmey, Peter
2010-01-01
A component analysis is a systematic assessment of 2 or more independent variables or components that comprise a treatment package. Component analyses are important for the analysis of behavior; however, previous research provides only cursory descriptions of the topic. Therefore, in this review the definition of "component analysis" is discussed,…
Assessing attentional systems in children with Attention Deficit Hyperactivity Disorder.
Casagrande, Maria; Martella, Diana; Ruggiero, Maria Cleonice; Maccari, Lisa; Paloscia, Claudio; Rosa, Caterina; Pasini, Augusto
2012-01-01
The aim of this study was to evaluate the efficiency and interactions of attentional systems in children with Attention Deficit Hyperactivity Disorder (ADHD) by considering the effects of reinforcement and auditory warning on each component of attention. Thirty-six drug-naïve children (18 children with ADHD/18 typically developing children) performed two revised versions of the Attentional Network Test, which assess the efficiency of alerting, orienting, and executive systems. In feedback trials, children received feedback about their accuracy, whereas in the no-feedback trials, feedback was not given. In both conditions, children with ADHD performed more slowly than did typically developing children. They also showed impairments in the ability to disengage attention and in executive functioning, which improved when alertness was increased by administering the auditory warning. The performance of the attentional networks appeared to be modulated by the absence or the presence of reinforcement. We suggest that the observed executive system deficit in children with ADHD could depend on their low level of arousal rather than being an independent disorder. © The Author 2011. Published by Oxford University Press. All rights reserved.
Esteban-Cornejo, Irene; Tejero-González, Carlos Ma; Martinez-Gomez, David; del-Campo, Juan; González-Galo, Ana; Padilla-Moledo, Carmen; Sallis, James F; Veiga, Oscar L
2014-08-01
To examine the independent and combined associations of the components of physical fitness with academic performance among youths. This cross-sectional study included a total of 2038 youths (989 girls) aged 6-18 years. Cardiorespiratory capacity was measured using the 20-m shuttle run test. Motor ability was assessed with the 4×10-m shuttle run test of speed of movement, agility, and coordination. A muscular strength z-score was computed based on handgrip strength and standing long jump distance. Academic performance was assessed through school records using 4 indicators: Mathematics, Language, an average of Mathematics and Language, and grade point average score. Cardiorespiratory capacity and motor ability were independently associated with all academic variables in youth, even after adjustment for fitness and fatness indicators (all P≤.001), whereas muscular strength was not associated with academic performance independent of the other 2 physical fitness components. In addition, the combined adverse effects of low cardiorespiratory capacity and motor ability on academic performance were observed across the risk groups (P for trend<.001). Cardiorespiratory capacity and motor ability, both independently and combined, may have a beneficial influence on academic performance in youth. Copyright © 2014 Elsevier Inc. All rights reserved.
Aragón, Pedro; Fitze, Patrick S.
2014-01-01
Geographical body size variation has long interested evolutionary biologists, and a range of mechanisms have been proposed to explain the observed patterns. It is considered to be more puzzling in ectotherms than in endotherms, and integrative approaches are necessary for testing non-exclusive alternative mechanisms. Using lacertid lizards as a model, we adopted an integrative approach, testing different hypotheses for both sexes while incorporating temporal, spatial, and phylogenetic autocorrelation at the individual level. We used data on the Spanish Sand Racer species group from a field survey to disentangle different sources of body size variation through environmental and individual genetic data, while accounting for temporal and spatial autocorrelation. A variation partitioning method was applied to separate independent and shared components of ecology and phylogeny, and estimated their significance. Then, we fed-back our models by controlling for relevant independent components. The pattern was consistent with the geographical Bergmann's cline and the experimental temperature-size rule: adults were larger at lower temperatures (and/or higher elevations). This result was confirmed with additional multi-year independent data-set derived from the literature. Variation partitioning showed no sex differences in phylogenetic inertia but showed sex differences in the independent component of ecology; primarily due to growth differences. Interestingly, only after controlling for independent components did primary productivity also emerge as an important predictor explaining size variation in both sexes. This study highlights the importance of integrating individual-based genetic information, relevant ecological parameters, and temporal and spatial autocorrelation in sex-specific models to detect potentially important hidden effects. Our individual-based approach devoted to extract and control for independent components was useful to reveal hidden effects linked with alternative non-exclusive hypothesis, such as those of primary productivity. Also, including measurement date allowed disentangling and controlling for short-term temporal autocorrelation reflecting sex-specific growth plasticity. PMID:25090025
Tremblay, Pier-Luc; Höglund, Daniel; Koza, Anna; Bonde, Ida; Zhang, Tian
2015-11-04
Acetogens are efficient microbial catalysts for bioprocesses converting C1 compounds into organic products. Here, an adaptive laboratory evolution approach was implemented to adapt Sporomusa ovata for faster autotrophic metabolism and CO2 conversion to organic chemicals. S. ovata was first adapted to grow quicker autotrophically with methanol, a toxic C1 compound, as the sole substrate. Better growth on different concentrations of methanol and with H2-CO2 indicated the adapted strain had a more efficient autotrophic metabolism and a higher tolerance to solvent. The growth rate on methanol was increased 5-fold. Furthermore, acetate production rate from CO2 with an electrode serving as the electron donor was increased 6.5-fold confirming that the acceleration of the autotrophic metabolism of the adapted strain is independent of the electron donor provided. Whole-genome sequencing, transcriptomic, and biochemical studies revealed that the molecular mechanisms responsible for the novel characteristics of the adapted strain were associated with the methanol oxidation pathway and the Wood-Ljungdahl pathway of acetogens along with biosynthetic pathways, cell wall components, and protein chaperones. The results demonstrate that an efficient strategy to increase rates of CO2 conversion in bioprocesses like microbial electrosynthesis is to evolve the microbial catalyst by adaptive laboratory evolution to optimize its autotrophic metabolism.
Chiang-Ni, Chuan; Tsou, Chih-Cheng; Lin, Yee-Shin; Chuang, Woei-Jer; Lin, Ming-T; Liu, Ching-Chuan; Wu, Jiunn-Jong
2008-12-31
CovR/S is an important two-component regulatory system, which regulates about 15% of gene expression in Streptococcus pyogenes. The covR/S locus was identified as an operon generating an RNA transcript of around 2.5 kb in size. In this study, we found that the covR/S operon produces three RNA transcripts (around 2.5, 1.0, and 0.8 kb in size). Using RNA transcriptional terminator sequence prediction and transcriptional terminator analysis, we identified two atypical rho-independent terminator sequences downstream of the covR gene and showed that these terminator sequences terminate RNA transcription efficiently. These results indicate that the covR/S operon generates the covR/S transcript as well as monocistronic covR transcripts.
NASA Astrophysics Data System (ADS)
Calvet, Nicolas; Martins, Mathieu; Grange, Benjamin; Perez, Victor G.; Belasri, Djawed; Ali, Muhammad T.; Armstrong, Peter R.
2016-05-01
Masdar Institute has established a new solar platform dedicated to research and development of concentrated solar power (CSP) and thermal energy storage systems. The facility includes, among others, state-of-the-art solar resource assessment apparatus, a 100 kW beam-down CSP plant that has been adapted for research activities, an independent 100 kW hot-oil loop, and new thermal energy storage systems. The objective of this platform is to develop cost-efficient CSP solutions, to promote and test these technologies in extreme desert conditions, and to develop local expertise. The purpose of this paper is not to present experimental results but rather to give a general overview of the different capabilities of the Masdar Institute Solar Platform.
An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform
NASA Astrophysics Data System (ADS)
Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra
2011-06-01
The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality, and CAD design. However, most studies on 3D watermarking and 3D compression have been conducted independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach that combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. The compression method used is decomposed into two stages: geometric encoding and topological encoding. The proposed approach inserts a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, geometric encoding is performed on these two components. The coarse mesh is marked using a robust mesh watermarking scheme; inserting the signature into the coarse mesh yields high robustness to several attacks. Finally, topological encoding is applied to the marked coarse mesh to obtain the compressed mesh. The combination of compression and watermarking makes it possible to detect the presence of the signature after compression of the marked mesh. In addition, it allows protected 3D meshes to be transferred at minimum size. Experiments and evaluations show that the proposed approach gives efficient results in terms of compression gain, invisibility, and robustness of the signature against many attacks.
Makeig, S; Westerfield, M; Jung, T P; Covington, J; Townsend, J; Sejnowski, T J; Courchesne, E
1999-04-01
Human event-related potentials (ERPs) were recorded from 10 subjects presented with visual target and nontarget stimuli at five screen locations and responding to targets presented at one of the locations. The late positive response complexes of 25-75 ERP average waveforms from the two task conditions were simultaneously analyzed with Independent Component Analysis, a new computational method for blindly separating linearly mixed signals. Three spatially fixed, temporally independent, behaviorally relevant, and physiologically plausible components were identified without reference to peaks in single-channel waveforms. A novel frontoparietal component (P3f) began at approximately 140 msec and peaked, in faster responders, at the onset of the motor command. The scalp distribution of P3f appeared consistent with brain regions activated during spatial orienting in functional imaging experiments. A longer-latency large component (P3b), positive over parietal cortex, was followed by a postmotor potential (Pmp) component that peaked 200 msec after the button press and reversed polarity near the central sulcus. A fourth component associated with a left frontocentral nontarget positivity (Pnt) was evoked primarily by target-like distractors presented in the attended location. When no distractors were presented, responses of five faster-responding subjects contained largest P3f and smallest Pmp components; when distractors were included, a Pmp component appeared only in responses of the five slower-responding subjects. Direct relationships between component amplitudes, latencies, and behavioral responses, plus similarities between component scalp distributions and regional activations reported in functional brain imaging experiments suggest that P3f, Pmp, and Pnt measure the time course and strength of functionally distinct brain processes.
NASA Technical Reports Server (NTRS)
Fichtl, G. H.; Holland, R. L.
1978-01-01
A stochastic model of spacecraft motion was developed based on the assumption that the net torque vector due to crew activity and rocket thruster firings is a statistically stationary Gaussian vector process. The process had zero ensemble mean value, and the components of the torque vector were mutually stochastically independent. The linearized rigid-body equations of motion were used to derive the autospectral density functions of the components of the spacecraft rotation vector. The cross-spectral density functions of the components of the rotation vector vanish for all frequencies so that the components of rotation were mutually stochastically independent. The autospectral and cross-spectral density functions of the induced gravity environment imparted to scientific apparatus rigidly attached to the spacecraft were calculated from the rotation rate spectral density functions via linearized inertial frame to body-fixed principal axis frame transformation formulae. The induced gravity process was a Gaussian one with zero mean value. Transformation formulae were used to rotate the principal axis body-fixed frame to which the rotation rate and induced gravity vector were referred to a body-fixed frame in which the components of the induced gravity vector were stochastically independent. Rice's theory of exceedances was used to calculate expected exceedance rates of the components of the rotation and induced gravity vector processes.
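For reference, the classical Rice expression for the expected rate of up-crossings of a level a by a zero-mean stationary Gaussian process x(t) (quoted here in its general textbook form, not in terms of the report's specific spacecraft quantities) is
\[
\nu^{+}(a) = \frac{1}{2\pi}\,\frac{\sigma_{\dot{x}}}{\sigma_{x}}\,\exp\!\left(-\frac{a^{2}}{2\sigma_{x}^{2}}\right),
\]
where σ_x and σ_ẋ are the standard deviations of the process and of its time derivative, both obtainable from moments of the autospectral density.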
Suzuki, Makoto; Yamada, Sumio; Omori, Mikayo; Hatakeyama, Mayumi; Sugimura, Yuko; Matsushita, Kazuhiko; Tagawa, Yoshikatsu
2008-09-01
A patient with poststroke hemiparesis learns to use the nonparetic arm to compensate for the weakness of the paretic arm to achieve independence in dressing. This is the learning process of new component actions in dressing. The purpose of this study was to develop the Upper-Body Dressing Scale (UBDS) for buttoned-shirt dressing, which evaluates the component actions of upper-body dressing, and to provide preliminary data on the internal consistency of the UBDS, as well as its reproducibility, validity, and sensitivity to clinical change. This was a correlational study of concurrent validity and reliability in which 63 consecutive stroke patients were enrolled and assessed repeatedly with the UBDS and the dressing item of the Functional Independence Measure (FIM). Fifty-one patients completed the 3-wk study. The Cronbach's coefficient alpha of the UBDS was 0.88. The principal component analysis extracted two components, which explained 62.3% of total variance. All items of the scale had high loading on the first component (0.65-0.83). Actions on the paralytic side were the positive loadings and actions on the healthy side were the negative loadings on the second component. The intraclass correlation coefficient was 0.87. The correlation between the UBDS score and the FIM dressing item score was -0.72. Logistic regression analysis showed that only the UBDS score on the first day of evaluation was a significant independent predictor of dressing ability (odds ratio, 0.82; 95% confidence interval, 0.71-0.95). The UBDS scores for passing the paralytic hand into the sleeve, pulling the sleeve up beyond the elbow joint, and pulling the sleeve up beyond the shoulder joint were worse than the scores for the other components of the task. These component actions had positive loadings on the second component, which was identified by the principal component analysis. The UBDS has good internal consistency, reproducibility, validity, and sensitivity to clinical changes in patients with poststroke hemiparesis. This detailed UBDS assessment enables us to document the most difficult stages in dressing and to assess motor and process skills for independence in dressing.
Jiang, Xi; Zhang, Xin; Zhu, Dajiang
2014-10-01
Alzheimer's disease (AD) is the most common type of dementia (accounting for 60% to 80% of cases) and is the fifth leading cause of death for people who are 65 or older. By 2050, one new case of AD in the United States is expected to develop every 33 sec. Unfortunately, there is no available effective treatment that can stop or slow the death of neurons that causes AD symptoms. On the other hand, it is widely believed that AD starts before development of the associated symptoms, so its prestages, including mild cognitive impairment (MCI) and even significant memory concern (SMC), have received increasing attention, not only because of their potential as precursors of AD, but also as possible predictors of conversion to other neurodegenerative diseases. Although these prestages have been defined clinically, accurate and efficient diagnosis is still challenging. Moreover, the brain functional abnormalities behind those alterations and conversions are still unclear. In this article, by developing novel sparse representations of whole-brain resting-state functional magnetic resonance imaging signals and by using the most updated Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we successfully identified multiple functional components simultaneously, which potentially represent the intrinsic functional networks involved in resting-state activity. Interestingly, these identified functional components contain all the resting-state networks obtained from traditional independent component analysis. Moreover, the features derived from those functional components yield high classification accuracy for both AD (94%) and MCI (92%) versus normal controls. Even for SMC we can still achieve 92% accuracy.
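A rough sketch of learning such a sparse representation with scikit-learn's mini-batch dictionary learning (synthetic data standing in for preprocessed ADNI time series; all sizes and the sparsity penalty are assumptions) is:

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
n_timepoints, n_voxels, n_components = 150, 500, 10

# Synthetic "whole-brain" signal matrix: smooth temporal patterns mixed into voxels.
patterns = np.cumsum(rng.normal(size=(n_timepoints, n_components)), axis=0)
mixing = rng.normal(size=(n_components, n_voxels))
X = patterns @ mixing + 0.5 * rng.normal(size=(n_timepoints, n_voxels))

# Learn temporal dictionary atoms (candidate functional components) and sparse
# voxel-wise loadings; classification features could be built from the loadings.
dico = MiniBatchDictionaryLearning(n_components=n_components, alpha=1.0, random_state=0)
codes = dico.fit_transform(X.T)                       # (n_voxels, n_components), sparse
print("dictionary shape:", dico.components_.shape)    # (n_components, n_timepoints)
print("fraction of nonzero loadings:", round(float(np.mean(codes != 0)), 3))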
NASA Astrophysics Data System (ADS)
Wünsch, Urban; Murphy, Kathleen; Stedmon, Colin
2017-04-01
Absorbance and fluorescence spectroscopy are efficient tools for tracing the supply, turnover, and fate of dissolved organic matter (DOM). The fluorescent fraction of DOM (FDOM) can be characterized by measuring excitation-emission matrices and decomposing the combined fluorescence signal into independent underlying fractions using Parallel Factor Analysis (PARAFAC). Comparisons between studies, facilitated by the OpenFluor database, reveal highly similar components across different aquatic systems and between studies. To obtain PARAFAC models of sufficient quality, scientists traditionally rely on analyzing dozens to hundreds of samples spanning environmental gradients. A cross-validation of this approach using different analytical tools has not yet been accomplished. In this study, we applied high-performance size-exclusion chromatography (HPSEC) with online absorbance and fluorescence detectors to characterize the size-dependent optical properties of dissolved organic matter in samples from contrasting aquatic environments. Each sample produced hundreds of absorbance spectra of colored DOM (CDOM) and hundreds of matrices of FDOM intensities. This approach facilitated the detailed study of CDOM spectral slopes and further allowed the reliable implementation of PARAFAC on individual samples. This revealed a high degree of overlap in the spectral properties of components identified from different sites. Moreover, many of the model components showed significant spectral congruence with spectra in the OpenFluor database. Our results provide evidence of the presence of ubiquitous FDOM components and additionally provide further evidence for the supramolecular assembly hypothesis. They demonstrate the potential of HPSEC to provide a wealth of new insights into the relationship between the optical and chemical properties of DOM.
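A minimal sketch of a PARAFAC decomposition of a sample × excitation × emission tensor using the tensorly library (synthetic nonnegative data; real EEM workflows additionally handle scatter, normalization, and split-half validation) might be:

import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

rng = np.random.default_rng(0)
n_samples, n_ex, n_em, rank = 20, 30, 40, 3

# Build a rank-3 nonnegative tensor plus noise to mimic three fluorescent components.
A = rng.random((n_samples, rank))       # component "concentrations" per sample
B = rng.random((n_ex, rank))            # excitation spectra
C = rng.random((n_em, rank))            # emission spectra
eem = np.einsum("ir,jr,kr->ijk", A, B, C) + 0.01 * rng.random((n_samples, n_ex, n_em))

weights, factors = non_negative_parafac(tl.tensor(eem), rank=rank, n_iter_max=200)
scores, excitation, emission = factors
print("recovered factor shapes:", scores.shape, excitation.shape, emission.shape)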
An efficient approach for treating composition-dependent diffusion within organic particles
O'Meara, Simon; Topping, David O.; Zaveri, Rahul A.; ...
2017-09-07
Mounting evidence demonstrates that under certain conditions the rate of component partitioning between the gas and particle phase in atmospheric organic aerosol is limited by particle-phase diffusion. To date, however, particle-phase diffusion has not been incorporated into regional atmospheric models. An analytical rather than numerical solution to diffusion through organic particulate matter is desirable because of its comparatively small computational expense in regional models. Current analytical models assume diffusion to be independent of composition and therefore use a constant diffusion coefficient. To realistically model diffusion, however, it should be composition-dependent (e.g. due to the partitioning of components that plasticise, vitrify or solidify). This study assesses the modelling capability of an analytical solution to diffusion corrected to account for composition dependence against a numerical solution. Results show reasonable agreement when the gas-phase saturation ratio of a partitioning component is constant and particle-phase diffusion limits partitioning rate (<10% discrepancy in estimated radius change). However, when the saturation ratio of the partitioning component varies, a generally applicable correction cannot be found, indicating that existing methodologies are incapable of deriving a general solution. Until such time as a general solution is found, caution should be given to sensitivity studies that assume constant diffusivity. Furthermore, the correction was implemented in the polydisperse, multi-process Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) and is used to illustrate how the evolution of number size distribution may be accelerated by condensation of a plasticising component onto viscous organic particles.
An Information Processing View of Field Dependence-Independence.
ERIC Educational Resources Information Center
Davis, J. Kent; Cochran, Kathryn F.
1989-01-01
Discusses field dependence-independence from an information processing perspective. Topics discussed include field dependence theory, stages of information processing, developmental issues and implications, and future directions. The information reviewed indicates that field-independent individuals are more efficient than field-dependent…
Elementary signaling modes predict the essentiality of signal transduction network components
2011-01-01
Background Understanding how signals propagate through signaling pathways and networks is a central goal in systems biology. Quantitative dynamic models help to achieve this understanding, but are difficult to construct and validate because of the scarcity of known mechanistic details and kinetic parameters. Structural and qualitative analysis is emerging as a feasible and useful alternative for interpreting signal transduction. Results In this work, we present an integrative computational method for evaluating the essentiality of components in signaling networks. This approach expands an existing signaling network to a richer representation that incorporates the positive or negative nature of interactions and the synergistic behaviors among multiple components. Our method simulates both knockout and constitutive activation of components as node disruptions, and takes into account the possible cascading effects of a node's disruption. We introduce the concept of elementary signaling mode (ESM), as the minimal set of nodes that can perform signal transduction independently. Our method ranks the importance of signaling components by the effects of their perturbation on the ESMs of the network. Validation on several signaling networks describing the immune response of mammals to bacteria, guard cell abscisic acid signaling in plants, and T cell receptor signaling shows that this method can effectively uncover the essentiality of components mediating a signal transduction process and results in strong agreement with the results of Boolean (logical) dynamic models and experimental observations. Conclusions This integrative method is an efficient procedure for exploratory analysis of large signaling and regulatory networks where dynamic modeling or experimental tests are impractical. Its results serve as testable predictions, provide insights into signal transduction and regulatory mechanisms and can guide targeted computational or experimental follow-up studies. The source codes for the algorithms developed in this study can be found at http://www.phys.psu.edu/~ralbert/ESM. PMID:21426566
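As a toy, hedged illustration of ranking signaling components by knockout (brute-force synchronous Boolean updating rather than the elementary-signaling-mode computation itself; the four-node network and its node names are invented):

rules = {
    "R": lambda s: s["L"],                    # receptor requires the ligand L
    "A": lambda s: s["R"] and not s["I"],     # adapter branch, inhibited by I
    "B": lambda s: s["R"],                    # redundant parallel branch
    "O": lambda s: s["A"] or s["B"],          # pathway output
}

def output_on(knockout=None, steps=10):
    state = {"L": True, "I": False, "R": False, "A": False, "B": False, "O": False}
    for _ in range(steps):                    # synchronous updates toward a fixed point
        new = dict(state)
        for node, rule in rules.items():
            new[node] = False if node == knockout else rule(state)
        if new == state:
            break
        state = new
    return state["O"]

print("baseline output:", output_on())
for node in rules:                            # essential nodes switch the output off
    print(f"knockout {node}: output = {output_on(knockout=node)}")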
Judicious use of custom development in an open source component architecture
NASA Astrophysics Data System (ADS)
Bristol, S.; Latysh, N.; Long, D.; Tekell, S.; Allen, J.
2014-12-01
Modern software engineering is not as much programming from scratch as innovative assembly of existing components. Seamlessly integrating disparate components into scalable, performant architecture requires sound engineering craftsmanship and can often result in increased cost efficiency and accelerated capabilities if software teams focus their creativity on the edges of the problem space. ScienceBase is part of the U.S. Geological Survey scientific cyberinfrastructure, providing data and information management, distribution services, and analysis capabilities in a way that strives to follow this pattern. ScienceBase leverages open source NoSQL and relational databases, search indexing technology, spatial service engines, numerous libraries, and one proprietary but necessary software component in its architecture. The primary engineering focus is cohesive component interaction, including construction of a seamless Application Programming Interface (API) across all elements. The API allows researchers and software developers alike to leverage the infrastructure in unique, creative ways. Scaling the ScienceBase architecture and core API with increasing data volume (more databases) and complexity (integrated science problems) is a primary challenge addressed by judicious use of custom development in the component architecture. Other data management and informatics activities in the earth sciences have independently resolved to a similar design of reusing and building upon established technology and are working through similar issues for managing and developing information (e.g., U.S. Geoscience Information Network; NASA's Earth Observing System Clearing House; GSToRE at the University of New Mexico). Recent discussions facilitated through the Earth Science Information Partners are exploring potential avenues to exploit the implicit relationships between similar projects for explicit gains in our ability to more rapidly advance global scientific cyberinfrastructure.
A Markov model for blind image separation by a mean-field EM algorithm.
Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele
2006-02-01
This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have been proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited) and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even space variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated, as well.
Makowski, Piotr L; Zaperty, Weronika; Kozacki, Tomasz
2018-01-01
A new framework for in-plane transformations of digital holograms (DHs) is proposed, which provides improved control over basic geometrical features of holographic images reconstructed optically in full color. The method is based on a Fourier hologram equivalent of the adaptive affine transformation technique [Opt. Express 18, 8806 (2010), doi:10.1364/OE.18.008806]. The solution includes four elementary geometrical transformations that can be performed independently on a full-color 3D image reconstructed from an RGB hologram: (i) transverse magnification; (ii) axial translation with minimized distortion; (iii) transverse translation; and (iv) viewing angle rotation. The independent character of transformations (i) and (ii) constitutes the main result of the work and plays a double role: (1) it simplifies synchronization of color components of the RGB image in the presence of mismatch between capture and display parameters; (2) provides improved control over position and size of the projected image, particularly the axial position, which opens new possibilities for efficient animation of holographic content. The approximate character of the operations (i) and (ii) is examined both analytically and experimentally using an RGB circular holographic display system. Additionally, a complex animation built from a single wide-aperture RGB Fourier hologram is presented to demonstrate full capabilities of the developed toolset.
NASA Astrophysics Data System (ADS)
Hachiya, Yuriko; Ogai, Harutoshi; Okazaki, Hiroko; Fujisaki, Takeshi; Uchida, Kazuhiko; Oda, Susumu; Wada, Futoshi; Mori, Koji
A method for analyzing fatigue parameters during VDT operation has rarely been researched. Until now, fatigue has been evaluated through changes in biological information; if signals related to fatigue can be detected, fatigue can be measured. The purpose of this study was to propose an experiment and analysis method for extracting fatigue-related parameters from biological information recorded during VDT operation using Independent Component Analysis (ICA). The experiment involved 11 subjects and consisted of lightly loaded and heavily loaded VDT operation. The measured items were amount of work, number of mistakes, subjective symptoms, surface skin temperature (forehead and apex nasi), heart rate, forearm skin blood flow, and respiratory rate. In the heavily loaded operation group, the number of mistakes and the subjective symptom score increased compared with the lightly loaded group, and two-factor ANOVA of the number of mistakes confirmed the effect of the heavy load. After moving averages of the waveforms were calculated, independent components were extracted using ICA. The results of the ICA suggest that the independent components increase with the accumulation of fatigue; thus, the independent components could be a possible parameter of fatigue. However, further experiments are needed to obtain conclusive findings for this research.
Independent Component Analysis of Textures
NASA Technical Reports Server (NTRS)
Manduchi, Roberto; Portilla, Javier
2000-01-01
A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
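As a hedged illustration of the comparison, the sketch below builds a crude oriented filter bank (an assumption standing in for the paper's steerable filters), applies PCA and FastICA to the filter responses of a synthetic texture, and compares the non-Gaussianity of the resulting marginals; all parameter choices are illustrative.

```python
# Hedged sketch: compare PCA and ICA bases for multi-orientation filter
# responses of a synthetic texture (not the paper's steerable-filter setup).
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
texture = gaussian_filter(rng.standard_normal((128, 128)), sigma=2.0)

# Simple oriented Gaussian-derivative filters as a stand-in for a steerable bank.
responses = []
for axis in (0, 1):
    for sigma in (1.0, 2.0, 4.0):
        responses.append(
            gaussian_filter(texture, sigma=sigma, order=(1 - axis, axis)).ravel())
X = np.stack(responses, axis=1)              # samples x filters

pca = PCA(n_components=X.shape[1]).fit(X)
ica = FastICA(n_components=X.shape[1], random_state=0, max_iter=1000).fit(X)

# Excess kurtosis of the transformed coordinates: ICA is expected to yield
# more non-Gaussian marginals, i.e. a better factorized description.
def excess_kurtosis(Z):
    Zc = Z - Z.mean(axis=0)
    return (Zc**4).mean(axis=0) / (Zc.var(axis=0) ** 2) - 3.0

print("PCA marginal kurtosis:", np.round(excess_kurtosis(pca.transform(X)), 2))
print("ICA marginal kurtosis:", np.round(excess_kurtosis(ica.transform(X)), 2))
```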
Necessary detection efficiencies for secure quantum key distribution and bound randomness
NASA Astrophysics Data System (ADS)
Acín, Antonio; Cavalcanti, Daniel; Passaro, Elsa; Pironio, Stefano; Skrzypczyk, Paul
2016-01-01
In recent years, several hacking attacks have broken the security of quantum cryptography implementations by exploiting the presence of losses and the ability of the eavesdropper to tune detection efficiencies. We present a simple attack of this form that applies to any protocol in which the key is constructed from the results of untrusted measurements performed on particles coming from an insecure source or channel. Because of its generality, the attack applies to a large class of protocols, from standard prepare-and-measure to device-independent schemes. Our attack gives bounds on the critical detection efficiencies necessary for secure quantum key distribution, which show that the implementation of most partly device-independent solutions is, from the point of view of detection efficiency, almost as demanding as fully device-independent ones. We also show how our attack implies the existence of a form of bound randomness, namely nonlocal correlations in which a nonsignalling eavesdropper can find out a posteriori the result of any implemented measurement.
The components of working memory updating: an experimental decomposition and individual differences.
Ecker, Ullrich K H; Lewandowsky, Stephan; Oberauer, Klaus; Chee, Abby E H
2010-01-01
Working memory updating (WMU) has been identified as a cognitive function of prime importance for everyday tasks and has also been found to be a significant predictor of higher mental abilities. Yet, little is known about the constituent processes of WMU. We suggest that operations required in a typical WMU task can be decomposed into 3 major component processes: retrieval, transformation, and substitution. We report a large-scale experiment that instantiated all possible combinations of those 3 component processes. Results show that the 3 components make independent contributions to updating performance. We additionally present structural equation models that link WMU task performance and working memory capacity (WMC) measures. These feature the methodological advancement of estimating interindividual covariation and experimental effects on mean updating measures simultaneously. The modeling results imply that WMC is a strong predictor of WMU skills in general, although some component processes-in particular, substitution skills-were independent of WMC. Hence, the reported predictive power of WMU measures may rely largely on common WM functions also measured in typical WMC tasks, although substitution skills may make an independent contribution to predicting higher mental abilities. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2016-01-01
A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A highly unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
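The quoted acceptance criterion, a maximum variance inflation factor below the literature threshold of five for both the applied load set and the measured bridge-output set, can be checked with ordinary least squares. The sketch below is a minimal illustration on synthetic calibration-like data; the data and matrix sizes are assumptions.

```python
# Hedged sketch: variance inflation factors (VIF) as a linear-independence
# check for balance loads and bridge outputs (illustrative data only).
import numpy as np

def vif(X):
    """VIF of each column of X, regressing it on the remaining columns."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([others, np.ones(len(y))])    # include intercept
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
loads = rng.standard_normal((200, 6))                     # applied load components
bridge = loads @ rng.standard_normal((6, 6)) \
         + 0.01 * rng.standard_normal((200, 6))           # simulated bridge outputs

for name, M in (("loads", loads), ("bridge outputs", bridge)):
    v = vif(M)
    verdict = "OK" if v.max() < 5 else "near-linear dependence"
    print(f"{name}: max VIF = {v.max():.2f} ({verdict})")
```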
Component Structure of Individual Differences in True and False Recognition of Faces
ERIC Educational Resources Information Center
Bartlett, James C.; Shastri, Kalyan K.; Abdi, Herve; Neville-Smith, Marsha
2009-01-01
Principal-component analyses of 4 face-recognition studies uncovered 2 independent components. The first component was strongly related to false-alarm errors with new faces as well as to facial "conjunctions" that recombine features of previously studied faces. The second component was strongly related to hits as well as to the conjunction/new…
Garcia-Ortega, Xavier; Reyes, Cecilia; Montesinos, José Luis; Valero, Francisco
2015-01-01
The most commonly used cell disruption procedures may present a lack of reproducibility, which introduces significant errors in the quantification of intracellular components. In this work, an approach consisting of the definition of an overall key performance indicator (KPI) was implemented for a lab-scale high-pressure homogenizer (HPH) in order to determine the disruption settings that allow the reliable quantification of a wide range of intracellular components. This innovative KPI was based on the combination of three independent reporting indicators: decrease of absorbance, release of total protein, and release of alkaline phosphatase activity. The yeast Pichia pastoris growing on methanol was selected as the model microorganism because it presents an important widening of the cell wall, requiring more severe methods and operating conditions than Escherichia coli and Saccharomyces cerevisiae. From the outcome of the reporting indicators, the cell disruption efficiency achieved using HPH was about fourfold higher than that of other standard lab-scale cell disruption methodologies, such as bead milling or cell permeabilization. This approach was also applied to a pilot-plant-scale HPH, validating the methodology in a scale-up of the disruption process. This innovative, non-complex approach developed to evaluate the efficacy of a disruption procedure or equipment can be easily applied to optimize the most common disruption processes, in order to achieve not only reliable quantification but also recovery of intracellular components from cell factories of interest. PMID:26284241
NASA Astrophysics Data System (ADS)
Polat, Esra; Gunay, Suleyman
2013-10-01
One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and increases the variance of these parameters. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; then the dependent variables are regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, comparing the two methods on an inflation model of Turkey. The considered methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-Validation (R-RMSECV), a robust R2 value, and the Robust Component Selection (RCS) statistic.
NASA Technical Reports Server (NTRS)
Kleb, William L.; Wood, William A.
2004-01-01
The computational simulation community is not routinely publishing independently verifiable tests to accompany new models or algorithms. A survey reveals that only 22% of new models published are accompanied by tests suitable for independently verifying the new model. As the community develops larger codes with increased functionality, and hence increased complexity in terms of the number of building block components and their interactions, it becomes prohibitively expensive for each development group to derive the appropriate tests for each component. Therefore, the computational simulation community is building its collective castle on a very shaky foundation of components with unpublished and unrepeatable verification tests. The computational simulation community needs to begin publishing component level verification tests before the tide of complexity undermines its foundation.
Wang, Nizhuan; Chang, Chunqi; Zeng, Weiming; Shi, Yuhu; Yan, Hongjie
2017-01-01
Independent component analysis (ICA) has been widely used in functional magnetic resonance imaging (fMRI) data analysis to evaluate functional connectivity of the brain; however, there are still some limitations on ICA simultaneously handling neuroimaging datasets with diverse acquisition parameters, e.g., different repetition time, different scanner, etc. Therefore, it is difficult for the traditional ICA framework to effectively handle ever-increasingly big neuroimaging datasets. In this research, a novel feature-map based ICA framework (FMICA) was proposed to address the aforementioned deficiencies, which aimed at exploring brain functional networks (BFNs) at different scales, e.g., the first level (individual subject level), second level (intragroup level of subjects within a certain dataset) and third level (intergroup level of subjects across different datasets), based only on the feature maps extracted from the fMRI datasets. The FMICA was presented as a hierarchical framework, which effectively made ICA and constrained ICA as a whole to identify the BFNs from the feature maps. The simulated and real experimental results demonstrated that FMICA had the excellent ability to identify the intergroup BFNs and to characterize subject-specific and group-specific difference of BFNs from the independent component feature maps, which sharply reduced the size of fMRI datasets. Compared with traditional ICAs, FMICA as a more generalized framework could efficiently and simultaneously identify the variant BFNs at the subject-specific, intragroup, intragroup-specific and intergroup levels, implying that FMICA was able to handle big neuroimaging datasets in neuroscience research.
Douglas, P K; Harris, Sam; Yuille, Alan; Cohen, Mark S
2011-05-15
Machine learning (ML) has become a popular tool for mining functional neuroimaging data, and there are now hopes of performing such analyses efficiently in real-time. Towards this goal, we compared accuracy of six different ML algorithms applied to neuroimaging data of persons engaged in a bivariate task, asserting their belief or disbelief of a variety of propositional statements. We performed unsupervised dimension reduction and automated feature extraction using independent component (IC) analysis and extracted IC time courses. Optimization of classification hyperparameters across each classifier occurred prior to assessment. Maximum accuracy was achieved at 92% for Random Forest, followed by 91% for AdaBoost, 89% for Naïve Bayes, 87% for a J48 decision tree, 86% for K*, and 84% for support vector machine. For real-time decoding applications, finding a parsimonious subset of diagnostic ICs might be useful. We used a forward search technique to sequentially add ranked ICs to the feature subspace. For the current data set, we determined that approximately six ICs represented a meaningful basis set for classification. We then projected these six IC spatial maps forward onto a later scanning session within subject. We then applied the optimized ML algorithms to these new data instances, and found that classification accuracy results were reproducible. Additionally, we compared our classification method to our previously published general linear model results on this same data set. The highest ranked IC spatial maps show similarity to brain regions associated with contrasts for belief > disbelief, and disbelief < belief. Copyright © 2010 Elsevier Inc. All rights reserved.
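A hedged sketch of the general recipe described here, ICA-based feature extraction followed by a classifier and a greedy forward search over ranked components, is given below; it uses synthetic data in place of the fMRI time courses and scikit-learn estimators rather than the authors' exact pipeline.

```python
# Hedged sketch of the general recipe: ICA feature extraction, a classifier,
# and a forward search over ranked components (synthetic data, not fMRI).
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 300
labels = rng.integers(0, 2, n_trials)                  # binary task stand-in
signal = np.outer(labels - 0.5, rng.standard_normal(n_voxels))
X = signal + rng.standard_normal((n_trials, n_voxels))

# Unsupervised dimension reduction: one IC feature vector per trial.
ica = FastICA(n_components=10, random_state=0, max_iter=1000)
ic_features = ica.fit_transform(X)                     # trials x components

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# Rank components by individual cross-validated accuracy, then add greedily.
ranks = sorted(range(ic_features.shape[1]),
               key=lambda j: -cross_val_score(clf, ic_features[:, [j]],
                                              labels, cv=5).mean())
selected, best = [], 0.0
for j in ranks:
    acc = cross_val_score(clf, ic_features[:, selected + [j]], labels, cv=5).mean()
    if acc > best:
        selected.append(j)
        best = acc
print(f"selected components: {selected}, CV accuracy ~ {best:.2f}")
```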
76 FR 30143 - Agency Information Collection Extension
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-24
... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy Agency Information..., Buy American Coordinator, Office of Energy Efficiency and Renewable Energy (EERE), Department of... Energy Efficiency and Renewable Energy (EERE), Department of Energy, 1000 Independence Avenue, SW...
Ulloa, Alvaro; Jingyu Liu; Vergara, Victor; Jiayu Chen; Calhoun, Vince; Pattichis, Marios
2014-01-01
In the biomedical field, current technology allows for the collection of multiple data modalities from the same subject. In consequence, there is an increasing interest for methods to analyze multi-modal data sets. Methods based on independent component analysis have proven to be effective in jointly analyzing multiple modalities, including brain imaging and genetic data. This paper describes a new algorithm, three-way parallel independent component analysis (3pICA), for jointly identifying genomic loci associated with brain function and structure. The proposed algorithm relies on the use of multi-objective optimization methods to identify correlations among the modalities and maximally independent sources within modality. We test the robustness of the proposed approach by varying the effect size, cross-modality correlation, noise level, and dimensionality of the data. Simulation results suggest that 3p-ICA is robust to data with SNR levels from 0 to 10 dB and effect-sizes from 0 to 3, while presenting its best performance with high cross-modality correlations, and more than one subject per 1,000 variables. In an experimental study with 112 human subjects, the method identified links between a genetic component (pointing to brain function and mental disorder associated genes, including PPP3CC, KCNQ5, and CYP7B1), a functional component related to signal decreases in the default mode network during the task, and a brain structure component indicating increases of gray matter in brain regions of the default mode region. Although such findings need further replication, the simulation and in-vivo results validate the three-way parallel ICA algorithm presented here as a useful tool in biomedical data decomposition applications.
Electromagnetic potential vectors and the Lagrangian of a charged particle
NASA Technical Reports Server (NTRS)
Shebalin, John V.
1992-01-01
Maxwell's equations can be shown to imply the existence of two independent three-dimensional potential vectors. A comparison between the potential vectors and the electric and magnetic field vectors, using a spatial Fourier transformation, reveals six independent potential components but only four independent electromagnetic field components for each mode. Although the electromagnetic fields determined by Maxwell's equations give a complete description of all possible classical electromagnetic phenomena, the potential vectors contain more information and allow for a description of such quantum mechanical phenomena as the Aharonov-Bohm effect. A new result is that a charged particle Lagrangian written in terms of potential vectors automatically contains a 'spontaneous symmetry breaking' potential.
Markmann, Sandra; Krambeck, Svenja; Hughes, Christopher J; Mirzaian, Mina; Aerts, Johannes M F G; Saftig, Paul; Schweizer, Michaela; Vissers, Johannes P C; Braulke, Thomas; Damme, Markus
2017-03-01
The efficient receptor-mediated targeting of soluble lysosomal proteins to lysosomes requires modification with mannose 6-phosphate (M6P) residues. Although the absence of M6P results in misrouting and hypersecretion of lysosomal enzymes in many cells, normal levels of lysosomal enzymes have been reported in the liver of patients lacking the M6P-generating phosphotransferase (PT). The identity of the lysosomal proteins that depend on M6P has not yet been comprehensively analyzed. In this study, we purified lysosomes from the liver of PT-defective mice and identified 67 known soluble lysosomal proteins whose quantitative changes were characterized using an ion-mobility-assisted, data-independent, label-free LC-MS approach. After validation of various differentially expressed lysosomal components by Western blotting and enzyme activity assays, the data revealed a small number of lysosomal proteins that depend on M6P, including neuraminidase 1, cathepsin F, Npc2, and cathepsin L, whereas the majority reach lysosomes by alternative pathways. These data were compared with findings on cultured hepatocytes and liver sinusoid endothelial cells isolated from the liver of wild-type and PT-defective mice. Our findings show that the relative expression, targeting efficiency, and lysosomal localization of the lysosomal proteins tested in cultured hepatic cells resemble their proportions in isolated liver lysosomes. Hypersecretion of newly synthesized nonphosphorylated lysosomal proteins suggests that secretion-recapture mechanisms contribute to maintaining major lysosomal functions in liver. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
NASA Astrophysics Data System (ADS)
Li, Xiang; Luo, Ming; Qiu, Ying; Alphones, Arokiaswami; Zhong, Wen-De; Yu, Changyuan; Yang, Qi
2018-02-01
In this paper, channel equalization techniques for coherent optical fiber transmission systems based on independent component analysis (ICA) are reviewed. The principle of ICA for blind source separation is introduced. The ICA based channel equalization after both single-mode fiber and few-mode fiber transmission for single-carrier and orthogonal frequency division multiplexing (OFDM) modulation formats are investigated, respectively. The performance comparisons with conventional channel equalization techniques are discussed.
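As a hedged illustration of the core step only, the sketch below uses FastICA to blindly recover two independently modulated data streams from an unknown linear mixture; the toy 4-PAM symbols and crosstalk matrix are assumptions and omit the optical-transmission specifics (few-mode fiber, OFDM) discussed in the review.

```python
# Hedged sketch: blind separation of mixed data streams with FastICA, the
# core operation behind ICA-based channel equalization (toy linear channel).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_symbols = 5000
# Two independently modulated 4-PAM streams (stand-ins for tributaries/modes).
sources = rng.choice([-3.0, -1.0, 1.0, 3.0], size=(n_symbols, 2))

mixing = np.array([[0.9, 0.4],          # unknown channel crosstalk (assumed)
                   [0.3, 1.1]])
received = sources @ mixing.T + 0.05 * rng.standard_normal((n_symbols, 2))

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(received)

# ICA leaves scale and order ambiguous; check correlation with the true streams.
corr = np.corrcoef(np.hstack([sources, recovered]).T)[:2, 2:]
print("|correlation| between true and recovered streams:\n",
      np.round(np.abs(corr), 3))
```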
Yan, Lijie; Jackson, Andrew O.; Liu, Zhiyong; Han, Chenggui; Yu, Jialin; Li, Dawei
2011-01-01
Barley stripe mosaic virus (BSMV) is a single-stranded RNA virus with three genome components designated alpha, beta, and gamma. BSMV vectors have previously been shown to be efficient virus induced gene silencing (VIGS) vehicles in barley and wheat and have provided important information about host genes functioning during pathogenesis as well as various aspects of genes functioning in development. To permit more effective use of BSMV VIGS for functional genomics experiments, we have developed an Agrobacterium delivery system for BSMV and have coupled this with a ligation independent cloning (LIC) strategy to mediate efficient cloning of host genes. Infiltrated Nicotiana benthamiana leaves provided excellent sources of virus for secondary BSMV infections and VIGS in cereals. The Agro/LIC BSMV VIGS vectors were able to function in high efficiency down regulation of phytoene desaturase (PDS), magnesium chelatase subunit H (ChlH), and plastid transketolase (TK) gene silencing in N. benthamiana and in the monocots, wheat, barley, and the model grass, Brachypodium distachyon. Suppression of an Arabidopsis orthologue cloned from wheat (TaPMR5) also interfered with wheat powdery mildew (Blumeria graminis f. sp. tritici) infections in a manner similar to that of the A. thaliana PMR5 loss-of-function allele. These results imply that the PMR5 gene has maintained similar functions across monocot and dicot families. Our BSMV VIGS system provides substantial advantages in expense, cloning efficiency, ease of manipulation and ability to apply VIGS for high throughput genomics studies. PMID:22031834
Constrained independent component analysis approach to nonobtrusive pulse rate measurements
NASA Astrophysics Data System (ADS)
Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.
2012-07-01
Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams using a finger probe oxymeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided performance of equal accuracy to the finger probe oximeter.
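Constrained ICA is not available in common Python libraries, so the hedged sketch below shows only the plain-ICA baseline the authors improve upon: separate synthetic RGB traces with FastICA and resolve the sorting ambiguity with a spectral-peak heuristic over a plausible heart-rate band. The frame rate, band limits, and traces are assumptions.

```python
# Hedged sketch: plain ICA on webcam-like RGB traces plus a spectral heuristic
# to pick the pulse component (the "sorting problem" constrained ICA avoids).
import numpy as np
from sklearn.decomposition import FastICA

fs = 30.0                                    # assumed webcam frame rate (Hz)
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)

pulse = 0.02 * np.sin(2 * np.pi * 1.2 * t)   # ~72 bpm photoplethysmographic signal
motion = 0.05 * np.sin(2 * np.pi * 0.3 * t)  # slow motion/illumination artifact
rgb = np.stack([pulse + 0.8 * motion,
                1.5 * pulse + motion,
                0.5 * pulse + 1.2 * motion], axis=1)
rgb += 0.01 * rng.standard_normal(rgb.shape)

components = FastICA(n_components=3, random_state=0).fit_transform(rgb)

freqs = np.fft.rfftfreq(len(t), 1 / fs)
band = (freqs > 0.75) & (freqs < 4.0)        # 45-240 bpm search band
power = np.abs(np.fft.rfft(components, axis=0)) ** 2

best = np.argmax(power[band].max(axis=0))    # component with strongest in-band peak
peak_hz = freqs[band][np.argmax(power[band, best])]
print(f"selected component {best}, estimated pulse ~ {60 * peak_hz:.0f} bpm")
```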
NASA Astrophysics Data System (ADS)
Pu, Huangsheng; Zhang, Guanglei; He, Wei; Liu, Fei; Guang, Huizhi; Zhang, Yue; Bai, Jing; Luo, Jianwen
2014-09-01
It is a challenging problem to resolve and identify drug (or non-specific fluorophore) distribution throughout the whole body of small animals in vivo. In this article, an algorithm of unmixing multispectral fluorescence tomography (MFT) images based on independent component analysis (ICA) is proposed to solve this problem. ICA is used to unmix the data matrix assembled by the reconstruction results from MFT. Then the independent components (ICs) that represent spatial structures and the corresponding spectrum courses (SCs) which are associated with spectral variations can be obtained. By combining the ICs with SCs, the recovered MFT images can be generated and fluorophore concentration can be calculated. Simulation studies, phantom experiments and animal experiments with different concentration contrasts and spectrum combinations are performed to test the performance of the proposed algorithm. Results demonstrate that the proposed algorithm can not only provide the spatial information of fluorophores, but also recover the actual reconstruction of MFT images.
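A hedged sketch of the unmixing step alone is given below: per-wavelength reconstructions are stacked into a data matrix, FastICA returns independent components interpreted as spatial structures, and the mixing-matrix columns play the role of spectrum courses. The tiny synthetic "reconstructions" are assumptions, not MFT output.

```python
# Hedged sketch of the ICA unmixing step: spatial ICs plus per-wavelength
# mixing weights ("spectrum courses") from stacked reconstructions.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_wavelengths, n_voxels = 8, 400

# Two synthetic fluorophores: distinct spatial patterns and emission spectra.
spatial = np.zeros((2, n_voxels))
spatial[0, 50:120] = 1.0
spatial[1, 250:320] = 1.0
spectra = np.array([[1.0, 0.9, 0.7, 0.5, 0.3, 0.2, 0.1, 0.05],
                    [0.05, 0.1, 0.3, 0.6, 0.9, 1.0, 0.8, 0.5]])

# Rows = reconstructed images at each wavelength (plus noise).
data = spectra.T @ spatial + 0.02 * rng.standard_normal((n_wavelengths, n_voxels))

ica = FastICA(n_components=2, random_state=0)
ics = ica.fit_transform(data.T).T            # independent components: spatial structures
spectrum_courses = ica.mixing_               # columns: per-wavelength weights of each IC

print("IC peak locations (voxel index):", np.argmax(np.abs(ics), axis=1))
print("recovered spectrum courses (one column per IC):\n",
      np.round(spectrum_courses, 2))
```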
10 CFR 431.176 - Voluntary Independent Certification Programs.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Water Heating Products § 431.176 Voluntary Independent Certification Programs. (a) The Department will approve a Voluntary Independent Certification Program (VICP) for a commercial HVAC and WH product if the... Section 431.176 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY EFFICIENCY PROGRAM FOR CERTAIN...
The Public Stake in Independent Higher Education.
ERIC Educational Resources Information Center
Olson, Lawrence
The importance of higher education in providing the skilled labor force needed to respond to changing technologies; the cost-efficiency of independent higher education; and implications for government, industry, and independent higher education are considered. The most readily changing technologies include computers and electronics, satellite…
NASA Astrophysics Data System (ADS)
Yang, Junbo; Yang, Jiankun; Li, Xiujian; Chang, Shengli; Su, Xianyu; Ping, Xu
2011-04-01
The Clos network is one of the earliest multistage interconnection networks. Recently, it has been widely studied in parallel optical information processing systems, and there have been many efforts to develop this network. In this paper, a smart and compact Clos network, including Clos(2,3,2) and Clos(2,4,2), is proposed using polarizing beam-splitters (PBS), phase spatial light modulators (PSLM), and mirrors. A PBS reflects the s-component (perpendicular to the plane of incidence) of the incident light beam, while the p-component (parallel to the plane of incidence) passes through it. According to the switching logic, and under the control of external electrical signals, the PSLM controls the routing paths of the signal beams, i.e., the polarization of each optical signal is rotated or not rotated by 90° by a programmable PSLM. This new type of configuration offers fewer optical components, a compact structure, efficient performance, and insensitivity to the polarization of the signal beam. In addition, the straight, exchange, and broadcast functions of the basic switch element are implemented bidirectionally in free space. Furthermore, a new optical experimental module of 2×3 and 2×4 optical switches is also presented by cascading polarization-independent bidirectional 2×2 optical switches. Simultaneously, the routing state-table of the 2×3 and 2×4 optical switches, which performs all permutation outputs and nonblocking switching for the input signal beams, is achieved. Since the proposed optical setup consists only of optical polarization elements, it is compact in structure and possesses low energy loss, a high signal-to-noise ratio, and a large number of available optical channels. Finally, the discussions and experimental results show that the Clos network proposed here should be helpful in the design of large-scale network matrices, and may be used in optical communication and optical information processing.
NASA Astrophysics Data System (ADS)
Witała, H.; Golak, J.; Skibiński, R.; Topolnicki, K.; Kamada, H.
We discuss the importance of the three-nucleon isospin T = 3/2 component in elastic neutron-deuteron scattering and in the deuteron breakup reaction. The contribution of this amplitude originates from charge-independence breaking of the nucleon-nucleon potential. We study the magnitude of that contribution to the elastic scattering and breakup observables, taking the AV18 nucleon-nucleon potential alone or combined with the Urbana IX three-nucleon force, as well as the locally regularized chiral N4LO nucleon-nucleon potential alone or supplemented by the chiral N2LO three-nucleon force. We find that the isospin T = 3/2 component is important for the breakup reaction, and the proper treatment of charge-independence breaking in this case requires the inclusion of the ^1S_0 state with isospin T = 3/2. For neutron-deuteron elastic scattering the T = 3/2 contributions are insignificant, and charge-independence breaking can be accounted for by neglecting the T = 3/2 component and using the effective t-matrix generated with the so-called "2/3 - 1/3" rule.
Discriminant analysis of resting-state functional connectivity patterns on the Grassmann manifold
NASA Astrophysics Data System (ADS)
Fan, Yong; Liu, Yong; Jiang, Tianzi; Liu, Zhening; Hao, Yihui; Liu, Haihong
2010-03-01
The functional networks, extracted from fMRI images using independent component analysis, have been demonstrated informative for distinguishing brain states of cognitive functions and neurological diseases. In this paper, we propose a novel algorithm for discriminant analysis of functional networks encoded by spatial independent components. The functional networks of each individual are used as bases for a linear subspace, referred to as a functional connectivity pattern, which facilitates a comprehensive characterization of temporal signals of fMRI data. The functional connectivity patterns of different individuals are analyzed on the Grassmann manifold by adopting a principal angle based subspace distance. In conjunction with a support vector machine classifier, a forward component selection technique is proposed to select independent components for constructing the most discriminative functional connectivity pattern. The discriminant analysis method has been applied to an fMRI based schizophrenia study with 31 schizophrenia patients and 31 healthy individuals. The experimental results demonstrate that the proposed method not only achieves a promising classification performance for distinguishing schizophrenia patients from healthy controls, but also identifies discriminative functional networks that are informative for schizophrenia diagnosis.
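A hedged sketch of the subspace-distance idea follows: each subject is represented by an orthonormal basis spanning its spatial ICs, pairwise distances are computed from principal angles, and a precomputed-kernel SVM classifies the toy groups. The synthetic bases and the exponential kernel (a heuristic, not guaranteed positive definite) are assumptions.

```python
# Hedged sketch: principal-angle distances between IC subspaces on the
# Grassmann manifold, fed to an SVM via a distance-based kernel (toy data).
import numpy as np
from scipy.linalg import subspace_angles, orth
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, dim, k = 40, 30, 4                 # subjects, signal dim, #components

def subject_subspace(label):
    base = rng.standard_normal((dim, k))
    if label == 1:                             # assumed group effect on one component
        base[:, 0] += 2.0 * np.linspace(0, 1, dim)
    return orth(base)                          # orthonormal basis of the IC span

labels = np.array([0, 1] * (n_subjects // 2))
bases = [subject_subspace(y) for y in labels]

# Pairwise Grassmann distance = norm of the principal-angle vector.
D = np.zeros((n_subjects, n_subjects))
for i in range(n_subjects):
    for j in range(i + 1, n_subjects):
        D[i, j] = D[j, i] = np.linalg.norm(subspace_angles(bases[i], bases[j]))

K = np.exp(-D**2)                              # heuristic kernel on distances

# Manual train/test split for the precomputed kernel.
idx = rng.permutation(n_subjects)
train, test = idx[:30], idx[30:]
clf = SVC(kernel="precomputed").fit(K[np.ix_(train, train)], labels[train])
pred = clf.predict(K[np.ix_(test, train)])
print("toy test accuracy:", (pred == labels[test]).mean())
```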
Tsai, Cheng-Tao; Su, Jye-Chau; Tseng, Sheng-Yu
2013-01-01
This paper presents comparison between phase-shift full-bridge converters with noncoupled and coupled current-doubler rectifier. In high current capability and high step-down voltage conversion, a phase-shift full-bridge converter with a conventional current-doubler rectifier has the common limitations of extremely low duty ratio and high component stresses. To overcome these limitations, a phase-shift full-bridge converter with a noncoupled current-doubler rectifier (NCDR) or a coupled current-doubler rectifier (CCDR) is, respectively, proposed and implemented. In this study, performance analysis and efficiency obtained from a 500 W phase-shift full-bridge converter with two improved current-doubler rectifiers are presented and compared. From their prototypes, experimental results have verified that the phase-shift full-bridge converter with NCDR has optimal duty ratio, lower component stresses, and output current ripple. In component count and efficiency comparison, CCDR has fewer components and higher efficiency at full load condition. For small size and high efficiency requirements, CCDR is relatively suitable for high step-down voltage and high efficiency applications. PMID:24381521
Energy efficient engine high-pressure turbine component rig performance test report
NASA Technical Reports Server (NTRS)
Leach, K. P.
1983-01-01
A rig test of the cooled high-pressure turbine component for the Energy Efficient Engine was successfully completed. The principal objective of this test was to substantiate the turbine design point performance as well as determine off-design performance with the interaction of the secondary flow system. The measured efficiency of the cooled turbine component was 88.5 percent, which surpassed the rig design goal of 86.5 percent. The secondary flow system in the turbine performed according to the design intent. Characterization studies showed that secondary flow system performance is insensitive to flow and pressure variations. Overall, this test has demonstrated that a highly-loaded, transonic, single-stage turbine can achieve a high level of operating efficiency.
NASA Astrophysics Data System (ADS)
Mahmoudishadi, S.; Malian, A.; Hosseinali, F.
2017-09-01
Image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. Decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in part of the Central Iranian Volcanic Belt, which hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two approaches were applied to discriminate alteration zones from igneous bedrock using the major absorption features of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Selected PC components (PC2, PC3, and PC6) were used in an RGB color composite image to identify pyrite and the argillic and propylitic zones and to distinguish them from the igneous bedrock. According to the eigenvalues, components 2, 3, and 6 account for 4.26%, 0.9%, and 0.09% of the total variance of the data for the Ardestan scene, respectively. For discriminating the alteration and mineralogy zones of the porphyry copper deposit from the bedrock, the corresponding ICA independent components (IC2, IC3, and IC6) separate these zones more accurately than the noisier PCA bands. The results of the ICA method also conform to the locations of the lithological units of the Ardestan region.
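As a hedged illustration of the workflow, the sketch below applies PCA and FastICA to a flattened synthetic multiband cube and stacks components 2, 3, and 6 into RGB composites, mirroring the component choice in the abstract; the cube and the contrast stretch are assumptions, not ASTER processing.

```python
# Hedged sketch: PCA vs ICA on a flattened multiband cube, with selected
# components stacked into an RGB composite (synthetic bands, not ASTER data).
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
rows, cols, bands = 64, 64, 9
cube = rng.standard_normal((rows, cols, bands))
cube[20:40, 20:40, 5:8] += 3.0               # synthetic "alteration zone" signature

X = cube.reshape(-1, bands)                  # pixels x bands

pcs = PCA(n_components=bands).fit_transform(X)
ics = FastICA(n_components=bands, random_state=0, max_iter=1000).fit_transform(X)

def stretch(img):
    """Simple 2-98 percentile contrast stretch to [0, 1]."""
    lo, hi = np.percentile(img, (2, 98))
    return np.clip((img - lo) / (hi - lo), 0, 1)

# Components 2, 3 and 6 (1-based, as in the abstract) -> indices 1, 2, 5.
rgb_pca = np.dstack([stretch(pcs[:, i].reshape(rows, cols)) for i in (1, 2, 5)])
rgb_ica = np.dstack([stretch(ics[:, i].reshape(rows, cols)) for i in (1, 2, 5)])
print("RGB composites:", rgb_pca.shape, rgb_ica.shape)
```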
ESEA Title I Migrant. Final Technical Report.
ERIC Educational Resources Information Center
Austin Independent School District, TX. Office of Research and Evaluation.
The 1981-82 Austin (Texas) Independent School District Title I Migrant Program consisted of seven components: three instructional components--prekindergarten, communication skills, and summer school; and four support components--health services, parental involvement, migrant student record transfer system (MSRTS), and evaluation. The major…
NASA Astrophysics Data System (ADS)
Candia, Sante; Lisio, Giovanni; Campolo, Giovanni; Pascucci, Dario
2010-08-01
The Avionics Software (ASW), in charge of controlling the Low Earth Orbit (LEO) Spacecraft PRIMA Platform (Piattaforma Ri-configurabile Italiana Multi-Applicativa), is evolving towards a highly modular and re-usable architecture based on an architectural framework allowing the effective integration of the software building blocks (SWBBs) providing the on-board control functions. During the recent years, the PRIMA ASW design and production processes have been improved to reach the following objectives: (a) at PUS Services level, separation of the mission-independent software mechanisms from the mission-dependent configuration information; (b) at Application level, identification of mission-independent recurrent functions for promoting abstraction and obtaining a more efficient and safe ASW production, with positive implications also on the software validation activities. This paper is dedicated to the characterisation activity which has been performed at Application level for a software component abstracting a set of functions for the generic On-Board Assembly (OBA), a set of hardware units used to deliver an on-board service. Moreover, the ASW production process is specified to show how it results after the introduction of the new design features.
Naresh, P; Hitesh, C; Patel, A; Kolge, T; Sharma, Archana; Mittal, K C
2013-08-01
A fourth-order (LCLC) resonant converter based capacitor charging power supply (CCPS) has been designed and developed for pulsed power applications. Resonant converters are preferred because they utilize soft-switching techniques such as zero current switching (ZCS) and zero voltage switching (ZVS). An attempt has been made to overcome the disadvantages of second- and third-order resonant converter topologies; hence a fourth-order resonant topology is used in this paper for the CCPS application. In this paper a novel fourth-order LCLC-based resonant converter is explored and a mathematical analysis is carried out to calculate the load-independent constant current. This topology provides load-independent constant current at a switching frequency (fs) equal to the resonant frequency (fr). By changing the switching conditions (on time and dead time), this topology supports both soft-switching techniques, ZCS and ZVS, for better switching action and improved converter efficiency. This technique has special features such as low peak current through the switches, DC blocking for the transformer, and utilization of the transformer leakage inductance as a resonant component. A prototype has been developed and tested successfully to charge a 100 μF capacitor to 200 V.
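As a rough numerical companion, and only under a simplified series-LC approximation (the full fourth-order LCLC tank has a more involved characteristic), the sketch below computes an illustrative resonant frequency and the constant-current charging time for the 100 μF / 200 V target mentioned above; all component values and the charging current are assumptions.

```python
# Hedged back-of-the-envelope numbers for a resonant CCPS (illustrative values;
# a simplified series-LC resonance, not the full fourth-order LCLC analysis).
import math

L_r = 50e-6        # assumed resonant inductance (H)
C_r = 100e-9       # assumed resonant capacitance (F)
f_r = 1.0 / (2.0 * math.pi * math.sqrt(L_r * C_r))   # series-resonant frequency

I_charge = 2.0     # assumed load-independent charging current (A)
C_load = 100e-6    # load capacitor from the abstract (F)
V_target = 200.0   # target voltage from the abstract (V)

t_charge = C_load * V_target / I_charge              # dV/dt = I/C for constant current
E_stored = 0.5 * C_load * V_target**2                # energy delivered per charge

print(f"approx. resonant frequency: {f_r / 1e3:.1f} kHz")
print(f"charge time to {V_target:.0f} V at {I_charge:.1f} A: {t_charge * 1e3:.1f} ms")
print(f"energy delivered per charge cycle: {E_stored:.2f} J")
```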
Cai, Shaohang; Ou, Zejin; Liu, Duan; Liu, Lili; Liu, Ying; Wu, Xiaolu; Yu, Tao; Peng, Jie
2018-05-01
We investigated whether metabolic syndrome exacerbated the risk of liver fibrosis among chronic hepatitis B patients and risk factors associated with liver steatosis and fibrosis in chronic hepatitis B patients with components of metabolic syndrome. This study included 1236 chronic hepatitis B patients with at least one component of metabolic syndrome. The controlled attenuation parameter and liver stiffness, patient information and relevant laboratory data were recorded. Controlled attenuation parameter was increased progressively with the number of metabolic syndrome components ( p < 0.001). Multivariate analysis indicated younger age, high gamma-glutamyltransferase level, high waist-hip ratio, and high body mass index were independent risk factors associated with nonalcoholic fatty liver disease among chronic hepatitis B patients with metabolic syndrome. In the fibrosis and non-fibrosis groups, most of blood lipid was relatively lower in fibrosis group. An increased proportion of chronic hepatitis B patients with liver fibrosis was found concomitant with an increasing number of components of metabolic syndrome. Male gender, older age, smoking, aspartate aminotransferase levels, high body mass index, and low platelet level were identified as independent risk factors associated with liver fibrosis. For chronic hepatitis B patients with coexisting components of metabolic syndrome, stratification by independent risk factors for nonalcoholic fatty liver disease and fibrosis can help with management of their disease.
Independent Components of Neural Activity Carry Information on Individual Populations
Głąbska, Helena; Potworowski, Jan; Łęski, Szymon; Wójcik, Daniel K.
2014-01-01
Local field potential (LFP), the low-frequency part of the potential recorded extracellularly in the brain, reflects neural activity at the population level. The interpretation of LFP is complicated because it can mix activity from remote cells, on the order of millimeters from the electrode. To understand better the relation between the recordings and the local activity of cells we used a large-scale network thalamocortical model to compute simultaneous LFP, transmembrane currents, and spiking activity. We used this model to study the information contained in independent components obtained from the reconstructed Current Source Density (CSD), which smooths transmembrane currents, decomposed further with Independent Component Analysis (ICA). We found that the three most robust components matched well the activity of two dominating cell populations: superior pyramidal cells in layer 2/3 (rhythmic spiking) and tufted pyramids from layer 5 (intrinsically bursting). The pyramidal population from layer 2/3 could not be well described as a product of spatial profile and temporal activation, but by a sum of two such products which we recovered in two of the ICA components in our analysis, which correspond to the two first principal components of PCA decomposition of layer 2/3 population activity. At low noise one more cell population could be discerned but it is unlikely that it could be recovered in experiment given typical noise ranges. PMID:25153730
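A hedged, much-simplified sketch of the analysis chain is given below: a crude second-spatial-derivative CSD estimate (standing in for the kernel CSD reconstruction used in the paper) followed by ICA, checked against two synthetic population time courses; the laminar profiles, time courses, and smoothing kernel are assumptions.

```python
# Hedged sketch: second-derivative CSD estimate followed by ICA, checking that
# recovered components track two synthetic "population" time courses.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_chan, n_t = 16, 2000
depth = np.arange(n_chan)

# Two populations: dipole-like depth profiles and independent time courses.
prof1 = np.exp(-(depth - 4) ** 2 / 4.0) - np.exp(-(depth - 7) ** 2 / 4.0)
prof2 = np.exp(-(depth - 10) ** 2 / 4.0) - np.exp(-(depth - 13) ** 2 / 4.0)
t = np.arange(n_t) / 1000.0
tc1 = np.sin(2 * np.pi * 8 * t)                        # rhythmic population
tc2 = (rng.random(n_t) < 0.01).astype(float)           # sparse bursting population
csd_true = np.outer(prof1, tc1) + np.outer(prof2, tc2)

# Crude LFP: smooth the CSD across depth, then estimate CSD back by -d2/dz2.
kernel = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
lfp = np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"),
                          0, csd_true)
lfp += 0.01 * rng.standard_normal(lfp.shape)
csd_est = -np.diff(lfp, n=2, axis=0)                   # second spatial derivative

ics = FastICA(n_components=2, random_state=0).fit_transform(csd_est.T)  # time x ICs
for k, tc in enumerate((tc1, tc2), start=1):
    best = max(abs(np.corrcoef(tc, ics[:, j])[0, 1]) for j in range(2))
    print(f"population {k}: best |corr| with an IC = {best:.2f}")
```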
Source separation on hyperspectral cube applied to dermatology
NASA Astrophysics Data System (ADS)
Mitra, J.; Jolivot, R.; Vabres, P.; Marzani, F. S.
2010-03-01
This paper proposes a method of quantification of the components underlying the human skin that are supposed to be responsible for the effective reflectance spectrum of the skin over the visible wavelength. The method is based on independent component analysis assuming that the epidermal melanin and the dermal haemoglobin absorbance spectra are independent of each other. The method extracts the source spectra that correspond to the ideal absorbance spectra of melanin and haemoglobin. The noisy melanin spectrum is fixed using a polynomial fit and the quantifications associated with it are reestimated. The results produce feasible quantifications of each source component in the examined skin patch.
Characteristics, Process Parameters, and Inner Components of Anaerobic Bioreactors
Abdelgadir, Awad; Chen, Xiaoguang; Liu, Jianshe; Xie, Xuehui; Zhang, Jian; Zhang, Kai; Wang, Heng; Liu, Na
2014-01-01
The anaerobic bioreactor applies the principles of biotechnology and microbiology, and nowadays it has been used widely in the wastewater treatment plants due to their high efficiency, low energy use, and green energy generation. Advantages and disadvantages of anaerobic process were shown, and three main characteristics of anaerobic bioreactor (AB), namely, inhomogeneous system, time instability, and space instability were also discussed in this work. For high efficiency of wastewater treatment, the process parameters of anaerobic digestion, such as temperature, pH, Hydraulic retention time (HRT), Organic Loading Rate (OLR), and sludge retention time (SRT) were introduced to take into account the optimum conditions for living, growth, and multiplication of bacteria. The inner components, which can improve SRT, and even enhance mass transfer, were also explained and have been divided into transverse inner components, longitudinal inner components, and biofilm-packing material. At last, the newly developed special inner components were discussed and found more efficient and productive. PMID:24672798
NASA Astrophysics Data System (ADS)
Hayami, Masao; Seino, Junji; Nakai, Hiromi
2018-03-01
This article proposes a gauge-origin independent formalism of the nuclear magnetic shielding constant in the two-component relativistic framework based on the unitary transformation. The proposed scheme introduces the gauge factor and the unitary transformation into the atomic orbitals. The two-component relativistic equation is formulated by block-diagonalizing the Dirac Hamiltonian together with gauge factors. This formulation is available for arbitrary relativistic unitary transformations. Then, the infinite-order Douglas-Kroll-Hess (IODKH) transformation is applied to the present formulation. Next, the analytical derivatives of the IODKH Hamiltonian for the evaluation of the nuclear magnetic shielding constant are derived. Results obtained from the numerical assessments demonstrate that the present formulation removes the gauge-origin dependence completely. Furthermore, the formulation with the IODKH transformation gives results that are close to those in four-component and other two-component relativistic schemes.
Influence of stretch-shortening cycle on mechanical behaviour of triceps surae during hopping.
Belli, A; Bosco, C
1992-04-01
Six subjects performed a first series of vertical plantar flexions and a second series of vertical rebounds, both involving the triceps surae muscle exclusively. Vertical displacements, vertical forces and ankle angles were recorded during the entire work period of 60 seconds per series. In addition, expired gases were collected during the test and recovery for determination of the energy expenditure. The triceps surae was mechanically modelled with a contractile component and an elastic component. The mechanical behaviour and work of the different muscle components were determined in both series. The net muscular efficiency calculated from the work performed by the centre of gravity was 17.5 +/- 3.0% (mean +/- SD) in plantar flexions and 29.9 +/- 4.8% in vertical rebounds. The net muscle efficiency calculated from the work performed by the contractile component was 17.4 +/- 2.9% in plantar flexions and 16.1 +/- 1.4% in vertical rebounds. These results suggest that the muscular efficiency differences do not reflect muscle contractile component efficiency but essentially the storage and recoil of elastic energy. This is supported by the relationship (P < 0.01) found in vertical rebounds between the extra work and the elastic component work. A detailed observation of the mechanical behaviour of the muscle mechanical components showed that the strategy to maximize the elastic work also depends on the force-velocity characteristics of the movement and that the eccentric-concentric work of the contractile component does not always correspond to the ankle extension and flexion, respectively.
Comparing Networks from a Data Analysis Perspective
NASA Astrophysics Data System (ADS)
Li, Wei; Yang, Jing-Yu
To probe network characteristics, two predominant ways of comparing networks are global property statistics and subgraph enumeration; however, the former provides limited information and the latter requires exhaustive computation. Here, we present an approach to compare networks from the perspective of data analysis. The approach first projects each node of the original network as a high-dimensional data point, so that the network is seen as a cloud of data points. The dispersion of the principal component analysis (PCA) projection of the generated data cloud can then be used to distinguish networks. We applied this node-projection method to the yeast protein-protein interaction networks and the Internet Autonomous System networks, two types of networks with several similar higher-order properties, and the method efficiently distinguished one from the other. Identical results on datasets from independent sources also indicate that the method is a robust and universal framework.
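To make the projection idea concrete, the following is a minimal sketch in Python that treats each node's adjacency row as its high-dimensional data point and uses the PCA explained-variance ratios as a dispersion signature; the adjacency-row representation and the toy random graphs are assumptions for illustration, not the authors' exact construction.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_dispersion(adjacency, n_components=3):
    """Treat each node (adjacency row) as a data point and return the
    explained-variance ratios of the leading principal components."""
    return PCA(n_components=n_components).fit(adjacency.astype(float)).explained_variance_ratio_

rng = np.random.default_rng(0)
# Two toy random networks with different densities (hypothetical examples).
a = np.triu((rng.random((200, 200)) < 0.05).astype(float), 1)
b = np.triu((rng.random((200, 200)) < 0.20).astype(float), 1)
a, b = a + a.T, b + b.T  # symmetrize

print("network A dispersion:", np.round(pca_dispersion(a), 3))
print("network B dispersion:", np.round(pca_dispersion(b), 3))
```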
EIA's Role in Energy Data Collection, With Some Notes on Water Data
NASA Astrophysics Data System (ADS)
Leckey, T. J.
2017-12-01
The U.S. Energy Information Administration (EIA) is the statistical and analytical agency within the U.S. Department of Energy. EIA collects, analyzes, and disseminates independent and impartial energy information to promote sound policymaking, efficient markets, and public understanding of energy and its interaction with the economy and the environment. EIA conducts a comprehensive data collection program that covers the full spectrum of energy sources, end uses, and energy flows. This presentation will describe EIA's authority to collect energy data, report on the range of energy areas currently collected by EIA, discuss some areas where energy information and water issues intersect, and describe the relatively few areas where EIA does collect a small amount of water data. The presentation will conclude with some thoughts about necessary components for effective collection of water data at the federal level.
Projection Mapping User Interface for Disabled People.
Gelšvartas, Julius; Simutis, Rimvydas; Maskeliūnas, Rytis
2018-01-01
Difficulty in communicating is one of the key challenges for people suffering from severe motor and speech disabilities. Often such a person can communicate and interact with the environment only using assistive technologies. This paper presents a multifunctional user interface designed to improve communication efficiency and personal independence. The main component of this interface is a projection mapping technique used to highlight objects in the environment. Projection mapping makes it possible to create a natural augmented reality information presentation method. The user interface combines a depth sensor and a projector to create a camera-projector system. We provide a detailed description of the camera-projector system calibration procedure. The described system performs tabletop object detection and automatic projection mapping. Multiple user input modalities have been integrated into the multifunctional user interface. Such a system can be adapted to the needs of people with various disabilities. PMID:29686827
NASA Astrophysics Data System (ADS)
Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza
2012-12-01
In this paper, speech-music separation using Blind Source Separation is discussed. The separation algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. This requires estimating the score function from samples of the observed signals (mixtures of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. In the presented algorithm, the score function is estimated with a Gaussian-mixture-based kernel density estimation method. Experimental results on speech-music separation, compared with a separation algorithm based on the Minimum Mean Square Error estimator, indicate that the presented algorithm achieves better performance and lower processing time.
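For illustration, here is a minimal natural-gradient ICA sketch of the kind described above, assuming a fixed tanh nonlinearity as a stand-in for the paper's Gaussian-mixture kernel density estimate of the score function; the toy Laplace sources and mixing matrix are hypothetical.

```python
import numpy as np

def natural_gradient_ica(x, lr=0.1, n_iter=300):
    """x: (n_sources, n_samples) observations. Returns the full unmixing matrix."""
    x = x - x.mean(axis=1, keepdims=True)
    # Prewhiten for numerical stability.
    d, e = np.linalg.eigh(np.cov(x))
    wh = e @ np.diag(1.0 / np.sqrt(d)) @ e.T
    z = wh @ x
    n = x.shape[0]
    w = np.eye(n)
    for _ in range(n_iter):
        y = w @ z
        phi = np.tanh(y)                                     # surrogate score function
        w += lr * (np.eye(n) - phi @ y.T / y.shape[1]) @ w   # natural-gradient step
    return w @ wh

rng = np.random.default_rng(1)
s = rng.laplace(size=(2, 5000))             # two independent super-Gaussian sources
a = np.array([[1.0, 0.6], [0.4, 1.0]])      # hypothetical mixing matrix
w = natural_gradient_ica(a @ s)
print(np.round(w @ a, 2))                   # close to a scaled permutation matrix
```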
Natural scene logo recognition by joint boosting feature selection in salient regions
NASA Astrophysics Data System (ADS)
Fan, Wei; Sun, Jun; Naoi, Satoshi; Minagawa, Akihiro; Hotta, Yoshinobu
2011-01-01
Logos are considered valuable intellectual properties and a key component of the goodwill of a business. In this paper, we propose a natural scene logo recognition method which is segmentation-free and capable of processing images extremely rapidly while achieving high recognition rates. The classifiers for each logo are trained jointly, rather than independently. In this way, common features can be shared across multiple classes for better generalization. To deal with the large range of aspect ratios of different logos, a set of salient regions of interest (ROIs) is extracted to describe each class. We ensure that the selected ROIs are both individually informative and pairwise weakly dependent using a Class Conditional Entropy Maximization criterion. Experimental results on a large logo database demonstrate the effectiveness and efficiency of our proposed method.
A semiparametric graphical modelling approach for large-scale equity selection.
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
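As a rough illustration of the rank-based route to such graphs, the sketch below estimates a latent correlation matrix from Kendall's tau (via the sine transform used in the nonparanormal literature) and then applies a graphical lasso, preferring stocks with few graph edges; the returns matrix, penalty value, and edge threshold are hypothetical stand-ins, not the authors' estimator.

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(0)
returns = rng.standard_normal((500, 8))                       # stand-in daily returns, 8 stocks
returns[:, 1] = 0.8 * returns[:, 0] + 0.2 * returns[:, 1]     # make two stocks dependent

p = returns.shape[1]
latent_corr = np.eye(p)
for i in range(p):
    for j in range(i + 1, p):
        tau, _ = kendalltau(returns[:, i], returns[:, j])
        latent_corr[i, j] = latent_corr[j, i] = np.sin(np.pi * tau / 2)  # sine transform

_, precision = graphical_lasso(latent_corr, alpha=0.2)
edges = (np.abs(precision) > 1e-4).sum(axis=1) - 1   # graph degree of each stock
print("edges per stock:", edges)                     # low-degree stocks are preferred
```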
Components of Standing Postural Control Evaluated in Pediatric Balance Measures: A Scoping Review.
Sibley, Kathryn M; Beauchamp, Marla K; Van Ooteghem, Karen; Paterson, Marie; Wittmeier, Kristy D
2017-10-01
To identify measures of standing balance validated in pediatric populations, and to determine the components of postural control captured in each tool. Electronic searches of MEDLINE, Embase, and CINAHL databases using key word combinations of postural balance/equilibrium, psychometrics/reproducibility of results/predictive value of tests, and child/pediatrics; gray literature; and hand searches. Inclusion criteria were measures with a stated objective to assess balance, with pediatric (≤18y) populations, with at least 1 psychometric evaluation, with at least 1 standing task, with a standardized protocol and evaluation criteria, and published in English. Two reviewers independently identified studies for inclusion. There were 21 measures included. Two reviewers extracted descriptive characteristics, and 2 investigators independently coded components of balance in each measure using a systems perspective for postural control, an established framework for balance in pediatric populations. Components of balance evaluated in measures were underlying motor systems (100% of measures), anticipatory postural control (72%), static stability (62%), sensory integration (52%), dynamic stability (48%), functional stability limits (24%), cognitive influences (24%), verticality (9%), and reactive postural control (0%). Assessing children's balance with valid and comprehensive measures is important for ensuring development of safe mobility and independence with functional tasks. Balance measures validated in pediatric populations to date do not comprehensively assess standing postural control and omit some key components for safe mobility and independence. Existing balance measures, that have been validated in adult populations and address some of the existing gaps in pediatric measures, warrant consideration for validation in children. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Feng, Guitao; Li, Junyu; Colberts, Fallon J M; Li, Mengmeng; Zhang, Jianqi; Yang, Fan; Jin, Yingzhi; Zhang, Fengling; Janssen, René A J; Li, Cheng; Li, Weiwei
2017-12-27
A series of "double-cable" conjugated polymers were developed for application in efficient single-component polymer solar cells, in which high quantum efficiencies could be achieved due to the optimized nanophase separation between donor and acceptor parts. The new double-cable polymers contain electron-donating poly(benzodithiophene) (BDT) as linear conjugated backbone for hole transport and pendant electron-deficient perylene bisimide (PBI) units for electron transport, connected via a dodecyl linker. Sulfur and fluorine substituents were introduced to tune the energy levels and crystallinity of the conjugated polymers. The double-cable polymers adopt a "face-on" orientation in which the conjugated BDT backbone and the pendant PBI units have a preferential π-π stacking direction perpendicular to the substrate, favorable for interchain charge transport normal to the plane. The linear conjugated backbone acts as a scaffold for the crystallization of the PBI groups, to provide a double-cable nanophase separation of donor and acceptor phases. The optimized nanophase separation enables efficient exciton dissociation as well as charge transport as evidenced from the high-up to 80%-internal quantum efficiency for photon-to-electron conversion. In single-component organic solar cells, the double-cable polymers provide power conversion efficiency up to 4.18%. This is one of the highest performances in single-component organic solar cells. The nanophase-separated design can likely be used to achieve high-performance single-component organic solar cells.
Lörincz, András; Póczos, Barnabás
2003-06-01
In optimization, the dimension of the problem may severely, sometimes exponentially, increase optimization time. Parametric function approximators (FAPPs) have been suggested to overcome this problem. Here, a novel FAPP, cost component analysis (CCA), is described. In CCA, the search space is resampled according to the Boltzmann distribution generated by the energy landscape. That is, CCA converts the optimization problem to density estimation. The structure of the induced density is searched by independent component analysis (ICA). The advantage of CCA is that each independent ICA component can be optimized separately. In turn, (i) CCA intends to partition the original problem into subproblems and (ii) separating (partitioning) the original optimization problem into subproblems may aid interpretation. Most importantly, (iii) CCA may give rise to high gains in optimization time. Numerical simulations illustrate the working of the algorithm.
Solar cell efficiency tables (version 48)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Martin A.; Emery, Keith; Hishikawa, Yoshihiro
Consolidated tables showing an extensive listing of the highest independently confirmed efficiencies for solar cells and modules are presented. Guidelines for inclusion of results into these tables are outlined, and new entries since January 2016 are reviewed.
NOTE: Entropy-based automated classification of independent components separated from fMCG
NASA Astrophysics Data System (ADS)
Comani, S.; Srinivasan, V.; Alleva, G.; Romani, G. L.
2007-03-01
Fetal magnetocardiography (fMCG) is a noninvasive technique suitable for the prenatal diagnosis of fetal heart function. Reliable fetal cardiac signals can be reconstructed from multi-channel fMCG recordings by means of independent component analysis (ICA). However, the identification of the separated components is usually accomplished by visual inspection. This paper discusses a novel automated system based on entropy estimators, namely approximate entropy (ApEn) and sample entropy (SampEn), for the classification of independent components (ICs). The system was validated on 40 fMCG datasets of normal fetuses with gestational ages ranging from 22 to 37 weeks. Both ApEn and SampEn were able to measure the stability and predictability of the physiological signals separated with ICA, and the entropy values of the three categories were significantly different at p < 0.01. The system's performance was compared with that of a method based on the analysis of the time and frequency content of the components. The outcomes of this study showed a superior performance of the entropy-based system, in particular for early gestation, with an overall IC detection rate of 98.75% and 97.92% for ApEn and SampEn respectively, as against a value of 94.50% obtained with the time-frequency-based system.
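As an illustration of the kind of entropy score involved, below is a minimal brute-force sample entropy (SampEn) sketch applied to a regular, cardiac-like waveform and to noise; the signals, m = 2, and the 0.2·std tolerance are common illustrative defaults, not the parameters used in the study.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Brute-force SampEn(m, r) with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n_templates = len(x) - m          # same template count for lengths m and m+1

    def matches(length):
        t = np.array([x[i:i + length] for i in range(n_templates)])
        count = 0
        for i in range(n_templates - 1):
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)  # Chebyshev distance
            count += int(np.sum(d <= r))
        return count

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
regular = np.sin(2 * np.pi * 2 * t)        # cardiac-like, low entropy
irregular = rng.standard_normal(t.size)    # noise-like, high entropy
print(round(sample_entropy(regular), 3), round(sample_entropy(irregular), 3))
```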
The Independent Associations of Physical Activity and Sleep with Cognitive Function in Older Adults.
Falck, Ryan S; Best, John R; Davis, Jennifer C; Liu-Ambrose, Teresa
2018-01-01
Current evidence suggests physical activity (PA) and sleep are important for cognitive health; however, few studies examining the role of PA and sleep for cognitive health have measured these behaviors objectively. We cross-sectionally examined whether 1) higher PA is associated with better cognitive performance independently of sleep quality; 2) higher sleep quality is associated with better cognitive performance independently of PA; and 3) whether higher PA is associated with better sleep quality. We measured PA, subjective sleep quality using the Pittsburgh Sleep Quality Index (PSQI), and objective sleep quality (i.e., fragmentation, efficiency, duration, and latency) using the MotionWatch8© in community-dwelling adults (N = 137; aged 55+). Cognitive function was indexed using the Alzheimer's Disease Assessment Scale-Plus. Correlation analyses were performed to determine relationships between PA, sleep quality, and cognitive function. We then used latent variable modelling to examine the relationships of PA with cognitive function independently of sleep quality, sleep quality with cognitive function independently of PA, and PA with sleep quality. We found greater PA was associated with better cognitive performance independently of 1) PSQI (β= -0.03; p < 0.01); 2) sleep fragmentation (β= -0.02; p < 0.01); 3) sleep duration (β= -0.02; p < 0.01); and 4) sleep latency (β= -0.02; p < 0.01). In addition, better sleep efficiency was associated with better cognitive performance independently of PA (β= -0.01; p = 0.04). We did not find any associations between PA and sleep quality. PA is associated with better cognitive performance independently of sleep quality, and sleep efficiency is associated with better cognitive performance independently of PA. However, PA is not associated with sleep quality and thus PA and sleep quality may be related to cognitive performance through independent mechanisms.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-03
.... EERE-2010-BT-STD-0011] RIN 1904-AC22 Energy Efficiency Program: Energy Conservation Standards Furnace Fans: Public Meeting and Availability of the Framework Document AGENCY: Office of Energy Efficiency and... Energy, Office of Energy Efficiency and Renewable Energy, Building Technologies, EE-2J, 1000 Independence...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, R.S.
1989-06-01
For a vehicle operating across arbitrarily-contoured terrain, finding the most fuel-efficient route between two points can be viewed as a high-level global path-planning problem with traversal costs and stability dependent on the direction of travel (anisotropic). The problem assumes a two-dimensional polygonal map of homogeneous cost regions for terrain representation constructed from elevation information. The anisotropic energy cost of vehicle motion has a non-braking component dependent on horizontal distance, a braking component dependent on vertical distance, and a constant path-independent component. The behavior of minimum-energy paths is then proved to be restricted to a small, but optimal set of traversal types. An optimal-path-planning algorithm, using a heuristic search technique, reduces the infinite number of paths between the start and goal points to a finite number by generating sequences of goal-feasible window lists from analyzing the polygonal map and applying pruning criteria. The pruning criteria consist of visibility analysis, heading analysis, and region-boundary constraints. Each goal-feasible window list specifies an associated convex optimization problem, and the best of all locally-optimal paths through the goal-feasible window lists is the globally-optimal path. These ideas have been implemented in a computer program, with results showing considerably better performance than the exponential average-case behavior predicted.
The origin and development of the immune system with a view to stem cell therapy.
Anastassova-Kristeva, Marlene
2003-04-01
Careful study of the phylogeny and ontogeny of the three components of the immune system reveals that the macrophage, lymphatic, and hematopoietic systems originate independently of each other. Chronologically, the most ancient is the macrophage system, which arises in the coelomic cavity as mesenchymal ameboid cells having the properties to recognize self from non-self and to ingest foreign particles. The lymphatic system later develops from the endoderm of pharyngeal pouches, where the thymic anlage differentiates. The lymphocytes that originate here seed all lymphatic organs and retain the ability to divide and thereby form multiple colonies (lymphatic nodules) in the respiratory and digestive tract; further diversification of lymphocytes follows after confrontation with antigens. The last component of the immune system to appear is the hematopoietic system, which originates from the splanchnic mesoderm of the yolk sac as hematogenic tissue, containing hemangioblasts. The hematogenic tissue remains attached to the outer wall of the vitelline vessels, which provides an efficient mechanism for introducing the hematogenic tissue into the embryo. In an appropriate microenvironment, the hemangioblasts give rise to sinusoidal endothelium and to hemocytoblasts - the bone marrow stem cells for erythrocytes, myeloid cells, and megakaryocytes. The facts and opinions presented in this article are not in agreement with the currently accepted dogma that a common "hematolymphatic stem cell" localized in the marrow generates all of the cellular components of blood and the immune system.
Semantic interoperability--HL7 Version 3 compared to advanced architecture standards.
Blobel, B G M E; Engel, K; Pharow, P
2006-01-01
To meet the challenge for high quality and efficient care, highly specialized and distributed healthcare establishments have to communicate and co-operate in a semantically interoperable way. Information and communication technology must be open, flexible, scalable, knowledge-based and service-oriented as well as secure and safe. For enabling semantic interoperability, a unified process for defining and implementing the architecture, i.e. structure and functions of the cooperating systems' components, as well as the approach for knowledge representation, i.e. the used information and its interpretation, algorithms, etc. have to be defined in a harmonized way. Deploying the Generic Component Model, systems and their components, underlying concepts and applied constraints must be formally modeled, strictly separating platform-independent from platform-specific models. As HL7 Version 3 claims to represent the most successful standard for semantic interoperability, HL7 has been analyzed regarding the requirements for model-driven, service-oriented design of semantic interoperable information systems, thereby moving from a communication to an architecture paradigm. The approach is compared with advanced architectural approaches for information systems such as OMG's CORBA 3 or EHR systems such as GEHR/openEHR and CEN EN 13606 Electronic Health Record Communication. HL7 Version 3 is maturing towards an architectural approach for semantic interoperability. Despite current differences, there is a close collaboration between the teams involved guaranteeing a convergence between competing approaches.
[Research on spectra recognition method for cabbages and weeds based on PCA and SIMCA].
Zu, Qin; Deng, Wei; Wang, Xiu; Zhao, Chun-Jiang
2013-10-01
In order to improve the accuracy and efficiency of weed identification, the difference in spectral reflectance was employed to distinguish between crops and weeds. Firstly, different combinations of the Savitzky-Golay (SG) convolution derivative and the multiplicative scattering correction (MSC) method were applied to preprocess the raw spectral data. Then the clustering analysis of various types of plants was completed by using the principal component analysis (PCA) method, and the feature wavelengths which were sensitive for classifying the various types of plants were extracted according to the corresponding loading plots of the optimal principal components in the PCA results. Finally, setting the feature wavelengths as the input variables, the soft independent modeling of class analogy (SIMCA) classification method was used to identify the various types of plants. The experimental results of classifying cabbages and weeds showed that, on the basis of the optimal pretreatment by a combined application of MSC and the SG convolution derivative with the SG parameters set to a 1st-order derivative, 3rd-degree polynomial and 51 smoothing points, 23 feature wavelengths were extracted in accordance with the top three principal components in the PCA results. When the SIMCA method was used for classification with the previously selected 23 feature wavelengths as the input variables, the classification rates of the modeling set and the prediction set were up to 98.6% and 100%, respectively.
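A minimal sketch of this preprocessing chain (MSC, then an SG derivative, then PCA) is given below; the simulated spectra and band shapes are hypothetical, while the SG settings mirror the ones quoted above.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

def msc(spectra):
    """Multiplicative scatter correction against the mean spectrum."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)   # regress each spectrum on the reference
        corrected[i] = (s - intercept) / slope
    return corrected

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 1000, 600)
base = np.exp(-((wavelengths - 680) / 60) ** 2)    # a chlorophyll-like absorption band
spectra = np.vstack([base * rng.uniform(0.8, 1.2) + rng.normal(0, 0.01, 600)
                     for _ in range(30)])

# 1st-order SG derivative, 3rd-degree polynomial, 51-point window, after MSC.
pre = savgol_filter(msc(spectra), window_length=51, polyorder=3, deriv=1, axis=1)
scores = PCA(n_components=3).fit_transform(pre)
print(scores.shape)  # (30, 3) -> samples plotted in PC space for clustering
```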
Adaptive Numerical Algorithms in Space Weather Modeling
NASA Technical Reports Server (NTRS)
Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.;
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics based space weather modeling and even forecasting.
Efficient Variable Selection Method for Exposure Variables on Binary Data
NASA Astrophysics Data System (ADS)
Ohno, Manabu; Tarumi, Tomoyuki
In this paper, we propose a new variable selection method for "robust" exposure variables. We define "robust" as the property that the same variable is selected from both the original data and perturbed data. There are few studies of effective selection methods for this problem. Selecting exposure variables is almost the same problem as extracting correlation rules, apart from the robustness requirement. [Brin 97] suggested that correlation rules can be extracted efficiently on binary data using the chi-squared statistic of a contingency table, which has a monotone property. However, the chi-squared value itself does not have the monotone property, so the method tends to judge a variable set as dependent as the dimension increases even when the set is completely independent, and it is therefore not usable for selecting robust exposure variables. To select robust independent variables, we assume an anti-monotone property for independent variables and use the Apriori algorithm, one of the algorithms for finding association rules in market basket data, which exploits the anti-monotone property of the support defined for association rules. The independence property does not completely satisfy the anti-monotone property on the AIC of the independence probability model, but the tendency to be anti-monotone is strong; therefore, variables selected under the anti-monotone assumption on the AIC are robust. Our method judges whether a certain variable is an exposure variable for the independent variables using this AIC comparison. Our numerical experiments show that our method can select robust exposure variables efficiently and precisely.
Marei, Hesham F; Donkers, Jeroen; Al-Eraky, Mohamed M; Van Merrienboer, Jeroen J G
2018-05-25
The use of virtual patients (VPs), due to their high complexity and/or inappropriate sequencing with other instructional methods, might cause a high cognitive load, which hampers learning. To investigate the efficiency of instructional methods that involved three different applications of VPs combined with lectures. From two consecutive batches, 171 out of 183 students participated in the lecture and VP sessions. One group received a lecture session followed by a collaborative VP learning activity (collaborative deductive). The other two groups received a lecture session and an independent VP learning activity, which either followed the lecture session (independent deductive) or preceded it (independent inductive). All groups were administered written knowledge acquisition and retention tests as well as transfer tests using two new VPs. All participants completed a cognitive load questionnaire, which measured intrinsic, extraneous and germane load. Mixed-effect analyses of cognitive load and efficiency were performed using the R statistical program. The highest intrinsic and extraneous load was found in the independent inductive group, while the lowest intrinsic and extraneous load was seen in the collaborative deductive group. Furthermore, comparisons showed a significantly higher efficiency, that is, higher performance in combination with lower cognitive load, for the collaborative deductive group than for the other two groups. Collaborative use of VPs after a lecture is the most efficient instructional method of those tested, as it leads to better learning and transfer combined with lower cognitive load, when compared with independent use of VPs, either before or after the lecture.
76 FR 25683 - State Energy Advisory Board (STEAB); Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-05
... DEPARTMENT OF ENERGY Energy Efficiency and Renewable Energy State Energy Advisory Board (STEAB); Meeting AGENCY: Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of Open... Energy Efficiency and Renewable Energy, 1000 Independence Avenue, SW., Washington DC 20585; or e-mail...
NASA Astrophysics Data System (ADS)
Stisen, S.; Demirel, C.; Koch, J.
2017-12-01
Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. The hydrological modelling community has a comprehensive and well-tested toolbox of metrics to assess temporal model performance. In contrast, experience in evaluating spatial performance does not match the wide availability of spatial observations or the sophistication of model codes simulating the spatial variability of complex hydrological processes. This study aims to make a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models by introducing a novel spatial performance metric that provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation, and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and the fractions skill score, are tested in a spatial-pattern-oriented calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and by discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three SPAEF components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics, which allow comparison of variables that are related but may differ in unit, in order to optimally exploit spatial observations made available by remote-sensing platforms. We see great potential for SPAEF across environmental disciplines dealing with spatially distributed modelling.
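A hedged sketch of a three-component metric of this kind is shown below, combining pattern correlation, a coefficient-of-variation ratio, and the histogram intersection of z-scored fields into a single efficiency score; the exact weighting and normalization should be checked against the published SPAEF definition, and the gamma-distributed test pattern is purely illustrative.

```python
import numpy as np

def spaef_like(obs, sim, bins=100):
    """Three-component spatial efficiency: correlation, CV ratio, histogram overlap."""
    obs, sim = obs.ravel(), sim.ravel()
    alpha = np.corrcoef(obs, sim)[0, 1]                          # pattern correlation
    beta = (sim.std() / sim.mean()) / (obs.std() / obs.mean())   # coefficient-of-variation ratio
    z_obs = (obs - obs.mean()) / obs.std()                       # z-scoring makes the overlap
    z_sim = (sim - sim.mean()) / sim.std()                       # term bias- and unit-insensitive
    h_obs, edges = np.histogram(z_obs, bins=bins)
    h_sim, _ = np.histogram(z_sim, bins=edges)
    gamma = np.minimum(h_obs, h_sim).sum() / h_obs.sum()         # histogram intersection
    return 1.0 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 1.5, size=(100, 100))        # e.g. a remote-sensing ET pattern
sim = obs * 0.9 + rng.normal(0, 0.3, obs.shape)   # a biased, noisy simulated pattern
print(round(spaef_like(obs, sim), 3))             # 1.0 is a perfect spatial match
```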
NASA Astrophysics Data System (ADS)
Kobayashi, Takao; Katsuyama, Etsuo; Sugiura, Hideki; Ono, Eiichi; Yamamoto, Masaki
2018-05-01
This paper proposes an efficient direct yaw moment control (DYC) capable of minimising tyre slip power loss on contact patches for a four-independent wheel drive vehicle. Simulations identified a significant power loss reduction with a direct yaw moment due to a change in steer characteristics during acceleration or deceleration while turning. Simultaneously, the vehicle motion can be stabilised. As a result, the proposed control method can ensure compatibility between vehicle dynamics performance and energy efficiency. This paper also describes the results of a full-vehicle simulation that was conducted to examine the effectiveness of the proposed DYC.
Polarization-Independent Silicon Metadevices for Efficient Optical Wavefront Control.
Chong, Katie E; Staude, Isabelle; James, Anthony; Dominguez, Jason; Liu, Sheng; Campione, Salvatore; Subramania, Ganapathi S; Luk, Ting S; Decker, Manuel; Neshev, Dragomir N; Brener, Igal; Kivshar, Yuri S
2015-08-12
We experimentally demonstrate a functional silicon metadevice at telecom wavelengths that can efficiently control the wavefront of optical beams by imprinting a spatially varying transmittance phase independent of the polarization of the incident beam. Near-unity transmittance efficiency and close to 0-2π phase coverage are enabled by utilizing the localized electric and magnetic Mie-type resonances of low-loss silicon nanoparticles tailored to behave as electromagnetically dual-symmetric scatterers. We apply this concept to realize a metadevice that converts a Gaussian beam into a vortex beam. The required spatial distribution of transmittance phases is achieved by a variation of the lattice spacing as a single geometric control parameter.
Energy efficient engine high-pressure turbine supersonic cascade technology report
NASA Technical Reports Server (NTRS)
Kopper, F. C.; Milano, R.; Davis, R. L.; Dring, R. P.; Stoeffler, R. C.
1981-01-01
The performance of two vane endwall geometries and three blade sections for the high-pressure turbine was evaluated in terms of the efficiency requirements of the Energy Efficient Engine high-pressure turbine component. The vane endwall designs featured a straight-wall and an S-wall configuration. The blade designs included a base blade, a straightback blade, and an overcambered blade. Test results indicated that the S-wall vane configuration and the base blade configuration offered the most promising performance characteristics for the Energy Efficient Engine high-pressure turbine component.
Tremblay, Pier-Luc; Höglund, Daniel; Koza, Anna; Bonde, Ida; Zhang, Tian
2015-01-01
Acetogens are efficient microbial catalysts for bioprocesses converting C1 compounds into organic products. Here, an adaptive laboratory evolution approach was implemented to adapt Sporomusa ovata for faster autotrophic metabolism and CO2 conversion to organic chemicals. S. ovata was first adapted to grow quicker autotrophically with methanol, a toxic C1 compound, as the sole substrate. Better growth on different concentrations of methanol and with H2-CO2 indicated the adapted strain had a more efficient autotrophic metabolism and a higher tolerance to solvent. The growth rate on methanol was increased 5-fold. Furthermore, acetate production rate from CO2 with an electrode serving as the electron donor was increased 6.5-fold confirming that the acceleration of the autotrophic metabolism of the adapted strain is independent of the electron donor provided. Whole-genome sequencing, transcriptomic, and biochemical studies revealed that the molecular mechanisms responsible for the novel characteristics of the adapted strain were associated with the methanol oxidation pathway and the Wood-Ljungdahl pathway of acetogens along with biosynthetic pathways, cell wall components, and protein chaperones. The results demonstrate that an efficient strategy to increase rates of CO2 conversion in bioprocesses like microbial electrosynthesis is to evolve the microbial catalyst by adaptive laboratory evolution to optimize its autotrophic metabolism. PMID:26530351
Oya, Eriko; Kato, Hiroaki; Chikashige, Yuji; Tsutsumi, Chihiro; Hiraoka, Yasushi; Murakami, Yota
2013-01-01
Heterochromatin at the pericentromeric repeats in fission yeast is assembled and spread by an RNAi-dependent mechanism, which is coupled with the transcription of non-coding RNA from the repeats by RNA polymerase II. In addition, Rrp6, a component of the nuclear exosome, also contributes to heterochromatin assembly and is coupled with non-coding RNA transcription. The multi-subunit complex Mediator, which directs initiation of RNA polymerase II-dependent transcription, has recently been suggested to function after initiation in processes such as elongation of transcription and splicing. However, the role of Mediator in the regulation of chromatin structure is not well understood. We investigated the role of Mediator in pericentromeric heterochromatin formation and found that deletion of specific subunits of the head domain of Mediator compromised heterochromatin structure. The Mediator head domain was required for Rrp6-dependent heterochromatin nucleation at the pericentromere and for RNAi-dependent spreading of heterochromatin into the neighboring region. In the latter process, Mediator appeared to contribute to efficient processing of siRNA from transcribed non-coding RNA, which was required for efficient spreading of heterochromatin. Furthermore, the head domain directed efficient transcription in heterochromatin. These results reveal a pivotal role for Mediator in multiple steps of transcription-coupled formation of pericentromeric heterochromatin. This observation further extends the role of Mediator to co-transcriptional chromatin regulation.
Radiative striped wind model for gamma-ray bursts
NASA Astrophysics Data System (ADS)
Bégué, D.; Pe'er, A.; Lyubarsky, Y.
2017-05-01
In this paper, we revisit the striped wind model in which the wind is accelerated by magnetic reconnection. In our treatment, radiation is included as an independent component, and two scenarios are considered. In the first one, radiation cannot stream efficiently through the reconnection layer, while the second scenario assumes that radiation is homogeneous in the striped wind. We show how these two assumptions affect the dynamics. In particular, we find that the asymptotic radial evolution of the Lorentz factor is not strongly modified whether radiation can stream through the reconnection layer or not. On the other hand, we show that the width, density and temperature of the reconnection layer are strongly dependent on these assumptions. We then apply the model to the gamma-ray burst context and find that photons cannot diffuse efficiently through the reconnection layer below the radius r_D^Δ ~ 10^10.5 cm, which is about an order of magnitude below the photospheric radius. Above r_D^Δ, the dynamics asymptotes to the solution of the scenario in which radiation can stream through the reconnection layer. As a result, the density of the current sheet increases sharply, providing efficient photon production by the Bremsstrahlung process that could have profound influence on the emerging spectrum. This effect might provide a solution to the soft photon problem in gamma-ray bursts.
Fetal ECG Extraction From Maternal Body Surface Measurement Using Independent Component Analysis
2001-10-25
A method applying independent component analysis (ICA) to detect the electrocardiogram of a prenatal cattle foetus is ... monitoring the health status of an unborn cattle foetus is indispensable in preventing natural abortion and premature birth [3]. One of the applicable ...
Clean image synthesis and target numerical marching for optical imaging with backscattering light
Pu, Yang; Wang, Wubao
2011-01-01
Scanning backscattering imaging and independent component analysis (ICA) are used to probe targets hidden in the subsurface of a turbid medium. A new correction procedure is proposed and used to synthesize a “clean” image of a homogeneous host medium numerically from a set of raster-scanned “dirty” backscattering images of the medium with embedded targets. The independent intensity distributions on the surface of the medium corresponding to individual targets are then unmixed using ICA of the difference between the set of dirty images and the clean image. The target positions are localized by a novel analytical method, which marches the target to the surface of the turbid medium until a match with the retrieved independent component is accomplished. The unknown surface property of the turbid medium is automatically accounted for by this method. Employing clean image synthesis and target numerical marching, three-dimensional (3D) localization of objects embedded inside a turbid medium using independent component analysis in a backscattering geometry is demonstrated for the first time, using as an example, imaging a small piece of cancerous prostate tissue embedded in a host consisting of normal prostate tissue. PMID:21483608
Combination probes for stagnation pressure and temperature measurements in gas turbine engines
NASA Astrophysics Data System (ADS)
Bonham, C.; Thorpe, S. J.; Erlund, M. N.; Stevenson, R. J.
2018-01-01
During gas turbine engine testing, steady-state gas-path stagnation pressures and temperatures are measured in order to calculate the efficiencies of the main components of turbomachinery. These measurements are acquired using fixed intrusive probes, which are installed at the inlet and outlet of each component at discrete point locations across the gas-path. The overall uncertainty in calculated component efficiency is sensitive to the accuracy of discrete point pressures and temperatures, as well as the spatial sampling across the gas-path. Both of these aspects of the measurement system must be considered if more accurate component efficiencies are to be determined. High accuracy has become increasingly important as engine manufacturers have begun to pursue small gains in component performance, which require efficiencies to be resolved to within less than ±1%. This article reports on three new probe designs that have been developed in response to this demand. The probes adopt a compact combination arrangement that facilitates up to twice the spatial coverage compared to individual stagnation pressure and temperature probes. The probes also utilise novel temperature sensors and high recovery factor shield designs that facilitate improvements in point measurement accuracy compared to standard Kiel probes used in engine testing. These changes allow efficiencies to be resolved within ±1% over a wider range of conditions than is currently achievable with Kiel probes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Eunkyu; Muirhead, Philip S.; Swift, Jonathan J.
Several low-mass eclipsing binary stars show larger than expected radii for their measured mass, metallicity, and age. One proposed mechanism for this radius inflation involves inhibited internal convection and starspots caused by strong magnetic fields. One particular eclipsing binary, T-Cyg1-12664, has proven confounding to this scenario. Çakırlı et al. measured a radius for the secondary component that is twice as large as model predictions for stars with the same mass and age, but a primary mass that is consistent with predictions. Iglesias-Marzoa et al. independently measured the radii and masses of the component stars and found that the radius of the secondary is not in fact inflated with respect to models, but that the primary is, which is consistent with the inhibited convection scenario. However, in their mass determinations, Iglesias-Marzoa et al. lacked independent radial velocity measurements for the secondary component due to the star’s faintness at optical wavelengths. The secondary component is especially interesting, as its purported mass is near the transition from partially convective to a fully convective interior. In this article, we independently determined the masses and radii of the component stars of T-Cyg1-12664 using archival Kepler data and radial velocity measurements of both component stars obtained with IGRINS on the Discovery Channel Telescope and NIRSPEC and HIRES on the Keck Telescopes. We show that neither of the component stars is inflated with respect to models. Our results are broadly consistent with modern stellar evolutionary models for main-sequence M dwarf stars and do not require inhibited convection by magnetic fields to account for the stellar radii.
Koua, Dominique; Kuhn-Nentwig, Lucia
2017-01-01
Spider venoms are rich cocktails of bioactive peptides, proteins, and enzymes that have been intensively investigated over the years. In order to provide a better comprehension of that richness, we propose a three-level family classification system for spider venom components. This classification is supported by an exhaustive set of 219 new profile hidden Markov models (HMMs) able to attribute a given peptide to its precise peptide type, family, and group. The proposed classification has the advantages of being totally independent from variable spider taxonomic names and can easily evolve. In addition to the new classifiers, we introduce and demonstrate the efficiency of hmmcompete, a new standalone tool that monitors HMM-based family classification and, after post-processing the results, reports the best classifier when multiple models produce significant scores for a given peptide query. The combined use of hmmcompete and the new spider venom component-specific classifiers demonstrated 96% sensitivity in properly classifying all known spider toxins from the UniProtKB database. These tools are timely regarding the important classification needs caused by the increasing number of peptides and proteins generated by transcriptomic projects. PMID:28786958
Developing a more useful surface quality metric for laser optics
NASA Astrophysics Data System (ADS)
Turchette, Quentin; Turner, Trey
2011-02-01
Light scatter due to surface defects on laser resonator optics produces losses which lower system efficiency and output power. The traditional methodology for surface quality inspection involves visual comparison of a component to scratch and dig (SAD) standards under controlled lighting and viewing conditions. Unfortunately, this process is subjective and operator dependent. Also, there is no clear correlation between inspection results and the actual performance impact of the optic in a laser resonator. As a result, laser manufacturers often overspecify surface quality in order to ensure that optics will not degrade laser performance due to scatter. This can drive up component costs and lengthen lead times. Alternatively, an objective test system for measuring optical scatter from defects can be constructed with a microscope, calibrated lighting, a CCD detector and image processing software. This approach is quantitative, highly repeatable and totally operator independent. Furthermore, it is flexible, allowing the user to set threshold levels as to what will or will not constitute a defect. This paper details how this automated, quantitative type of surface quality measurement can be constructed, and shows how its results correlate against conventional loss measurement techniques such as cavity ringdown times.
Extensions of algebraic image operators: An approach to model-based vision
NASA Technical Reports Server (NTRS)
Lerner, Bao-Ting; Morelli, Michael V.
1990-01-01
Researchers extend their previous research on a highly structured and compact algebraic representation of grey-level images which can be viewed as fuzzy sets. Addition and multiplication are defined for the set of all grey-level images, which can then be described as polynomials of two variables. Utilizing this new algebraic structure, researchers devised an innovative, efficient edge detection scheme. An accurate method for deriving gradient component information from this edge detector is presented. Based upon this new edge detection system researchers developed a robust method for linear feature extraction by combining the techniques of a Hough transform and a line follower. The major advantage of this feature extractor is its general, object-independent nature. Target attributes, such as line segment lengths, intersections, angles of intersection, and endpoints are derived by the feature extraction algorithm and employed during model matching. The algebraic operators are global operations which are easily reconfigured to operate on any size or shape region. This provides a natural platform from which to pursue dynamic scene analysis. A method for optimizing the linear feature extractor which capitalizes on the spatially reconfigurable nature of the edge detector/gradient component operator is discussed.
Awerkiew, Sabine; Schmidt, Annette; Hombach, Andreas A.; Pfister, Herbert; Abken, Hinrich
2012-01-01
Adoptive therapy of malignant diseases with tumor-specific cytotoxic T cells showed remarkable efficacy in recent trials. Repetitive T cell receptor (TCR) engagement of target antigen, however, inevitably ends up in hypo-responsive cells with terminally differentiated KLRG-1+ CD57+ CD7− phenotype limiting their therapeutic efficacy. We here revealed that hypo-responsiveness of CMV-specific late-stage CD8+ T cells is due to reduced TCR synapse formation compared to younger cells. Membrane anchoring of TCR components contributes to T cell hypo-responsiveness since dislocation of galectin-3 from the synapse by swainsonine restored both TCR synapse formation and T cell response. Transgenic expression of a CD3-zeta signaling chimeric antigen receptor (CAR) recovered hypo-responsive T cells to full effector functions indicating that the defect is restricted to TCR membrane components while synapse formation of the transgenic CAR was not blocked. CAR engineered late-stage T cells released cytokines and mediated redirected cytotoxicity as efficiently as younger effector T cells. Our data provide a rationale for TCR independent, CAR mediated activation in the adoptive cell therapy to avoid hypo-responsiveness of late-stage T cells upon repetitive antigen encounter. PMID:22292024
NASA Astrophysics Data System (ADS)
Kim, Vitaly P.; Hegai, Valery V.; Liu, Jann Yenq; Ryu, Kwangsun; Chung, Jong-Kyun
2017-12-01
The electric coupling between the lithosphere and the ionosphere is examined. The electric field is considered as a time-varying irregular vertical Coulomb field presumably produced on the Earth’s surface before an earthquake within its epicentral zone by some micro-processes in the lithosphere. It is shown that the Fourier component of this electric field with a frequency of 500 Hz and a horizontal scale size of 100 km produces in the nighttime ionosphere of high and middle latitudes a transverse electric field with a magnitude of 20 mV/m if the peak value of the amplitude of this Fourier component is just 30 V/m. The time-varying vertical Coulomb field with a frequency of 500 Hz penetrates from the ground into the ionosphere a factor of 7×10^5 more efficiently than a time-independent vertical electrostatic field of the same scale size. A transverse electric field with an amplitude of 20 mV/m will cause perturbations in the nighttime F region electron density through heating of the F region plasma, resulting in a reduction of the downward plasma flux from the protonosphere and the excitation of acoustic gravity waves.
NASA Astrophysics Data System (ADS)
Konesev, S. G.; Khazieva, R. T.; Kirllov, R. V.; Konev, A. A.
2017-01-01
Some electrical consumers (storage capacitor charging systems, powerful pulse generators, electrothermal systems, gas-discharge lamps, electric ovens, plasma torches) require constant power consumption while their resistance changes within a limited range. Current stabilization systems (CSS) with inductive-capacitive transducers (ICT) provide constant power when the load resistance changes over a wide range and increase the efficiency of power supplies for high-power loads. ICT elements are selected according to the maximum load, which leads to exceeding a predetermined value of capacity. The paper suggests supplying the load power through an ICT based on multifunctional integrated electromagnetic components (MIEC) to reduce the required capacity of the ICT elements and the CSS weight and dimensions. The authors developed and patented an ICT based on MIEC that reduces the CSS weight and dimensions by reducing the number of components, while allowing the device to transform electric energy and change its resonance frequency. An ICT mathematical model was produced that determines the width of the load stabilization range. A model for studying the electromagnetic processes was built with the MIEC integral parameters (full inductance of the electrical lead, total capacity, current of the electrical lead). It shows the independence of the load current from the load resistance for different ways of connecting the MIEC.
Advanced General Aviation Turbine Engine (GATE) concepts
NASA Technical Reports Server (NTRS)
Lays, E. J.; Murray, G. L.
1979-01-01
Concepts are discussed that project turbine engine cost savings through use of geometrically constrained components designed for low rotational speeds and low stress to permit manufacturing economies. Aerodynamic development of geometrically constrained components is recommended to maximize component efficiency. Conceptual engines, airplane applications, airplane performance, engine cost, and engine-related life cycle costs are presented. The powerplants proposed offer encouragement with respect to fuel efficiency and life cycle costs, and make possible remarkable airplane performance gains.
22.7% efficient PERL silicon solar cell module with a textured front surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, J.; Wang, A.; Campbell, P.
1997-12-31
This paper describes a solar cell module efficiency of 22.7% independently measured at Sandia National Laboratories. This is the highest ever confirmed efficiency for a photovoltaic module of this size achieved by cells made from any material. This 778-cm² module used 40 large-area double-layer antireflection coated PERL (passivated emitter, rear locally-diffused) silicon cells of average efficiency of 23.1%. A textured front module surface considerably improved the module efficiency. Also reported is an independently confirmed efficiency of 23.7% for a 21.6-cm² cell of the type used in the module. Using these PERL cells in the 1996 World Solar Challenge solar car race from Darwin to Adelaide across Australia, Honda's Dream and Aisin Seiki's Aisol III were placed first and third, respectively. Honda also set a new record by reaching Adelaide in four days with an average speed of 90 km/h over the 3010 km course.
Transgene manipulation in zebrafish by using recombinases.
Dong, Jie; Stuart, Gary W
2004-01-01
Although much remains to be done, our results to date suggest that efficient and precise genome engineering in zebrafish will be possible in the future by using Cre recombinase and SB transposase in combination with their respective target sites. In this study, we provide the first evidence that Cre recombinase can mediate effective site-specific deletion of transgenes in zebrafish. We found that the efficiency of target site utilization could approach 100%, independent of whether the target site was provided transiently by injection or stably within an integrated transgene. Microinjection of Cre mRNA appeared to be slightly more effective for this purpose than microinjection of Cre-expressing plasmid DNA. Our work has not yet progressed to the point where SB-mediated mobilization of our transgene constructs would be observed. However, a recent report has demonstrated that SB can enhance transgenesis rates sixfold over conventional methods by efficiently mediating multiple single-copy insertion of transgenes into the zebrafish genome (Davidson et al., 2003). Therefore, it seems likely that a combined system should eventually allow both SB-mediated transgene mobilization and Cre-mediated transgene modification. Our goal is to validate methods for the precise reengineering of the zebrafish genome by using a combination of Cre-loxP and SB transposon systems. These methods can be used to delete, replace, or mobilize large pieces of DNA or to modify the genome only when and where required by the investigator. For example, it should be possible to deliver particular RNAi genes to well-expressed chromosomal loci and then exchange them easily with alternative RNAi genes for the specific suppression of alternative targets. As a nonviral vector for gene therapy, the transposon component allows for the possibility of highly efficient integration, whereas the Cre-loxP component can target the integration and/or exchange of foreign DNA into specific sites within the genome. The specificity and efficiency of this system also make it ideal for applications in which precise genome modifications are required (e.g., stock improvement). Future work should establish whether alternative recombination systems (e.g., phiC31 integrase) can improve the utility of this system. After the fish system is fully established, it would be interesting to explore its application to genome engineering in other organisms.
Age and gender estimation using Region-SIFT and multi-layered SVM
NASA Astrophysics Data System (ADS)
Kim, Hyunduk; Lee, Sang-Heon; Sohn, Myoung-Kyu; Hwang, Byunghun
2018-04-01
In this paper, we propose an age and gender estimation framework using the region-SIFT feature and a multi-layered SVM classifier. The suggested framework entails three processes. The first step is landmark-based face alignment. The second step is feature extraction. In this step, we introduce a region-SIFT feature extraction method based on facial landmarks: we first define sub-regions of the face and then extract SIFT features from each sub-region. In order to reduce the dimensionality of the features, we employ Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, we classify age and gender using a multi-layered Support Vector Machine (SVM) for efficient classification. Rather than performing gender estimation and age estimation independently, the multi-layered SVM can improve the classification rate by constructing a classifier that estimates age according to gender. Moreover, we collect a dataset of face images, called DGIST_C, from the internet. A performance evaluation of the proposed method was performed with the FERET, CACD, and DGIST_C databases. The experimental results demonstrate that the proposed approach estimates age and gender very efficiently and accurately.
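A minimal sketch of the dimensionality-reduction and layered classification stage described above, assuming region-SIFT descriptors have already been extracted; the arrays `X`, `y_gender`, `y_age` and all sizes are hypothetical placeholders, not the paper's released code.

```python
# PCA + LDA reduction followed by a two-layer SVM: gender first, then age given gender.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1280))          # stacked region-SIFT descriptors (placeholder)
y_gender = rng.integers(0, 2, 200)        # 0 = female, 1 = male
y_age = rng.integers(0, 4, 200)           # 4 coarse age groups

reducer = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
Z = reducer.fit_transform(X, y_age)       # supervised dimensionality reduction

gender_svm = SVC(kernel="rbf").fit(Z, y_gender)                       # layer 1: gender
age_svms = {g: SVC(kernel="rbf").fit(Z[y_gender == g], y_age[y_gender == g])
            for g in (0, 1)}                                          # layer 2: age given gender

def predict(z):
    g = gender_svm.predict(z.reshape(1, -1))[0]
    return g, age_svms[g].predict(z.reshape(1, -1))[0]

print(predict(Z[0]))
```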
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Wei; Li, Hui; Zhang, Bing
We perform 3D relativistic ideal MHD simulations to study the collisions between high-σ (Poynting-flux-dominated) blobs which contain both poloidal and toroidal magnetic field components. This is meant to mimic the interactions inside a highly variable Poynting-flux-dominated jet. We discover significant electromagnetic field (EMF) energy dissipation at an Alfvenic rate with an efficiency of around 35%. Detailed analyses show that this dissipation is mostly facilitated by the collision-induced magnetic reconnection. Additional resolution and parameter studies show a robust result that the relative EMF energy dissipation efficiency is nearly independent of the numerical resolution or most physical parameters in the relevant parameter range. The reconnection outflows in our simulation can potentially form the multi-orientation relativistic mini-jets needed for several analytical models. We also find a linear relationship between the σ values before and after the major EMF energy dissipation process. In conclusion, our results give support to the proposed astrophysical models that invoke significant magnetic energy dissipation in Poynting-flux-dominated jets, such as the internal collision-induced magnetic reconnection and turbulence (ICMART) model for GRBs and the reconnection-triggered mini-jets model for AGNs.
Platelet activation suppresses HIV-1 infection of T cells
2013-01-01
Background Platelets, anucleate cell fragments abundant in human blood, can capture HIV-1 and platelet counts have been associated with viral load and disease progression. However, the impact of platelets on HIV-1 infection of T cells is unclear. Results We found that platelets suppress HIV-1 spread in co-cultured T cells in a concentration-dependent manner. Platelets containing granules inhibited HIV-1 spread in T cells more efficiently than degranulated platelets, indicating that the granule content might exert antiviral activity. Indeed, supernatants from activated and thus degranulated platelets suppressed HIV-1 infection. Infection was inhibited at the stage of host cell entry and inhibition was independent of the viral strain or coreceptor tropism. In contrast, blockade of HIV-2 and SIV entry was less efficient. The chemokine CXCL4, a major component of platelet granules, blocked HIV-1 entry and neutralization of CXCL4 in platelet supernatants largely abrogated their anti-HIV-1 activity. Conclusions Release of CXCL4 by activated platelets inhibits HIV-1 infection of adjacent T cells at the stage of virus entry. The inhibitory activity of platelet-derived CXCL4 suggests a role of platelets in the defense against infection by HIV-1 and potentially other pathogens. PMID:23634812
Platelet activation suppresses HIV-1 infection of T cells.
Solomon Tsegaye, Theodros; Gnirß, Kerstin; Rahe-Meyer, Niels; Kiene, Miriam; Krämer-Kühl, Annika; Behrens, Georg; Münch, Jan; Pöhlmann, Stefan
2013-05-01
Platelets, anucleate cell fragments abundant in human blood, can capture HIV-1 and platelet counts have been associated with viral load and disease progression. However, the impact of platelets on HIV-1 infection of T cells is unclear. We found that platelets suppress HIV-1 spread in co-cultured T cells in a concentration-dependent manner. Platelets containing granules inhibited HIV-1 spread in T cells more efficiently than degranulated platelets, indicating that the granule content might exert antiviral activity. Indeed, supernatants from activated and thus degranulated platelets suppressed HIV-1 infection. Infection was inhibited at the stage of host cell entry and inhibition was independent of the viral strain or coreceptor tropism. In contrast, blockade of HIV-2 and SIV entry was less efficient. The chemokine CXCL4, a major component of platelet granules, blocked HIV-1 entry and neutralization of CXCL4 in platelet supernatants largely abrogated their anti-HIV-1 activity. Release of CXCL4 by activated platelets inhibits HIV-1 infection of adjacent T cells at the stage of virus entry. The inhibitory activity of platelet-derived CXCL4 suggests a role of platelets in the defense against infection by HIV-1 and potentially other pathogens.
Deng, Wei; Li, Hui; Zhang, Bing; ...
2015-05-29
We perform 3D relativistic ideal MHD simulations to study the collisions between high-σ (Poynting-flux-dominated) blobs which contain both poloidal and toroidal magnetic field components. This is meant to mimic the interactions inside a highly variable Poynting-flux-dominated jet. We discover significant electromagnetic field (EMF) energy dissipation at an Alfvenic rate with an efficiency of around 35%. Detailed analyses show that this dissipation is mostly facilitated by the collision-induced magnetic reconnection. Additional resolution and parameter studies show a robust result that the relative EMF energy dissipation efficiency is nearly independent of the numerical resolution or most physical parameters in the relevant parameter range. The reconnection outflows in our simulation can potentially form the multi-orientation relativistic mini-jets needed for several analytical models. We also find a linear relationship between the σ values before and after the major EMF energy dissipation process. In conclusion, our results give support to the proposed astrophysical models that invoke significant magnetic energy dissipation in Poynting-flux-dominated jets, such as the internal collision-induced magnetic reconnection and turbulence (ICMART) model for GRBs and the reconnection-triggered mini-jets model for AGNs.
Enhancing the Effectiveness of Smoking Treatment Research: Conceptual Bases and Progress
Baker, Timothy B.; Collins, Linda M.; Mermelstein, Robin; Piper, Megan E.; Schlam, Tanya R.; Cook, Jessica W.; Bolt, Daniel M.; Smith, Stevens S.; Jorenby, Douglas E.; Fraser, David; Loh, Wei-Yin; Theobald, Wendy E.; Fiore, Michael C.
2015-01-01
Background and aims A chronic care strategy could potentially enhance the reach and effectiveness of smoking treatment by providing effective interventions for all smokers, including those who are initially unwilling to quit. This paper describes the conceptual bases of a National Cancer Institute-funded research program designed to develop an optimized, comprehensive, chronic care smoking treatment. Methods This research is grounded in three methodological approaches: 1) the Phase-Based Model, which guides the selection of intervention components to be experimentally evaluated for the different phases of smoking treatment (motivation, preparation, cessation, and maintenance); 2) the Multiphase Optimization Strategy (MOST), which guides the screening of intervention components via efficient experimental designs and, ultimately, the assembly of promising components into an optimized treatment package; and 3) pragmatic research methods, such as electronic health record recruitment, that facilitate the efficient translation of research findings into clinical practice. Using this foundation and working in primary care clinics, we conducted three factorial experiments (reported in three accompanying articles) to screen 15 motivation, preparation, cessation, and maintenance phase intervention components for possible inclusion in a chronic care smoking treatment program. Results This research identified intervention components with relatively strong evidence of effectiveness at particular phases of smoking treatment and it demonstrated the efficiency of the MOST approach in terms both of the number of intervention components tested and of the richness of the information yielded. Conclusions A new, synthesized research approach efficiently evaluates multiple intervention components to identify promising components for every phase of smoking treatment. Many intervention components interact with one another, supporting the use of factorial experiments in smoking treatment development. PMID:26581974
Cortical networks involved in visual awareness independent of visual attention.
Webb, Taylor W; Igelström, Kajsa M; Schurger, Aaron; Graziano, Michael S A
2016-11-29
It is now well established that visual attention, as measured with standard spatial attention tasks, and visual awareness, as measured by report, can be dissociated. It is possible to attend to a stimulus with no reported awareness of the stimulus. We used a behavioral paradigm in which people were aware of a stimulus in one condition and unaware of it in another condition, but the stimulus drew a similar amount of spatial attention in both conditions. The paradigm allowed us to test for brain regions active in association with awareness independent of level of attention. Participants performed the task in an MRI scanner. We looked for brain regions that were more active in the aware than the unaware trials. The largest cluster of activity was obtained in the temporoparietal junction (TPJ) bilaterally. Local independent component analysis (ICA) revealed that this activity contained three distinct, but overlapping, components: a bilateral, anterior component; a left dorsal component; and a right dorsal component. These components had brain-wide functional connectivity that partially overlapped the ventral attention network and the frontoparietal control network. In contrast, no significant activity in association with awareness was found in the banks of the intraparietal sulcus, a region connected to the dorsal attention network and traditionally associated with attention control. These results show the importance of separating awareness and attention when testing for cortical substrates. They are also consistent with a recent proposal that awareness is associated with ventral attention areas, especially in the TPJ.
An expeditious and efficient protocol for the synthesis of naphthopyrans has been developed that proceeds via one-pot three-component sequential reaction in water catalyzed by hydroxyapatite or sodium-modified-hydroxyapatite. The title compounds have been obtained in high yield a...
Family Differences in Aboveground Biomass Allocation in Loblolly Pine
Scott D. Roberts
2002-01-01
The proportion of tree growth allocated to stemwood is an important economic component of growth efficiency. Differences in growth efficiency between species, or between families within species, may therefore be related to how growth is proportionally allocated between the stem and other aboveground biomass components. This study examines genetically related...
Jackson, James G; St Clair, Patricia; Sliwkowski, Mark X; Brattain, Michael G
2004-04-01
Due to heterodimerization and a variety of stimulating ligands, the ErbB receptor system is both diverse and flexible, which proves particularly advantageous to the aberrant signaling of cancer cells. However, specific mechanisms of how a particular receptor contributes to generating the flexibility that leads to aberrant growth regulation have not been well described. We compared the utilization of ErbB2 in response to epidermal growth factor (EGF) and heregulin stimulation in colon carcinoma cells. Anti-ErbB2 monoclonal antibody 2C4 blocked heregulin-stimulated phosphorylation of ErbB2 and ErbB3; activation of mitogen-activated protein kinase (MAPK), phosphatidylinositol 3'-kinase (PI3K), and Akt; proliferation; and anchorage-independent growth. 2C4 blocked EGF-mediated phosphorylation of ErbB2 and inhibited PI3K/Akt and anchorage-independent growth but did not affect ErbB1 or MAPK. Immunoprecipitations showed that ErbB3 and Grb2-associated binder (Gab) 1 were phosphorylated and associated with PI3K activity after heregulin treatment and that Gab1 and Gab2, but not ErbB3, were phosphorylated and associated with PI3K activity after EGF treatment. These data show that monoclonal antibody 2C4 inhibited all aspects of heregulin signaling as well as anchorage-independent and monolayer growth. Furthermore, we identify ErbB2 as a critical component of EGF signaling to the Gab1/Gab2-PI3K-Akt pathway and anchorage-independent growth, but EGF stimulation of MAPK and monolayer growth can occur efficiently without the contribution of ErbB2.
A Local Learning Rule for Independent Component Analysis
Isomura, Takuya; Toyoizumi, Taro
2016-01-01
Humans can separately recognize independent sources when they sense their superposition. This decomposition is mathematically formulated as independent component analysis (ICA). While a few biologically plausible learning rules, so-called local learning rules, have been proposed to achieve ICA, their performance varies depending on the parameters characterizing the mixed signals. Here, we propose a new learning rule that is both easy to implement and reliable. Both mathematical and numerical analyses confirm that the proposed rule outperforms other local learning rules over a wide range of parameters. Notably, unlike other rules, the proposed rule can separate independent sources without any preprocessing, even if the number of sources is unknown. The successful performance of the proposed rule is then demonstrated using natural images and movies. We discuss the implications of this finding for our understanding of neuronal information processing and its promising applications to neuromorphic engineering. PMID:27323661
Skp1 Independent Function of Cdc53/Cul1 in F-box Protein Homeostasis.
Mathur, Radhika; Yen, James L; Kaiser, Peter
2015-12-01
Abundance of substrate receptor subunits of Cullin-RING ubiquitin ligases (CRLs) is tightly controlled to maintain the full repertoire of CRLs. Unbalanced levels can lead to sequestration of CRL core components by a few overabundant substrate receptors. Numerous diseases, including cancer, have been associated with misregulation of substrate receptor components, particularly for the largest class of CRLs, the SCF ligases. One relevant mechanism that controls abundance of their substrate receptors, the F-box proteins, is autocatalytic ubiquitylation by the intact SCF complex followed by proteasome-mediated degradation. Here we describe an additional pathway for regulation of F-box proteins, using yeast Met30 as an example. This ubiquitylation and degradation pathway acts on Met30 that is dissociated from Skp1. Unexpectedly, this pathway required the cullin component Cdc53/Cul1 but was independent of the other central SCF component Skp1. We demonstrated that this non-canonical degradation pathway is critical for chromosome stability and effective defense against heavy metal stress. More importantly, our results assign important biological functions to a sub-complex of cullin-RING ligases that comprises Cdc53/Rbx1/Cdc34, but is independent of Skp1.
Single-channel mixed signal blind source separation algorithm based on multiple ICA processing
NASA Astrophysics Data System (ADS)
Cheng, Xiefeng; Li, Ji
2017-01-01
Taking the separation of the fetal heart sound signal from the mixed signal obtained with an electronic stethoscope as the research background, this paper puts forward a single-channel mixed-signal blind source separation algorithm based on multiple ICA processing. First, empirical mode decomposition (EMD) decomposes the single-channel mixed signal into multiple orthogonal signal components, which are then processed by ICA. The resulting independent signal components are called independent sub-components of the mixed signal. By combining these independent sub-components with the single-channel mixed signal, the single channel is expanded to multiple channels, which turns the under-determined blind source separation problem into a well-posed one. ICA processing is then applied to obtain an estimate of the source signal. Finally, if the separation is not satisfactory, the previous separation result is combined with the single-channel mixed signal and the ICA processing is repeated until the desired estimate of the source signal is obtained. The simulation results show that the algorithm separates single-channel mixed physiological signals well.
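A sketch of the channel-expansion idea described above: EMD splits the single channel into intrinsic mode functions, which are stacked with the original signal so that ordinary ICA can be applied. This assumes the third-party PyEMD package (pip install EMD-signal) for the EMD step; the synthetic signal and component counts are illustrative only.

```python
import numpy as np
from PyEMD import EMD
from sklearn.decomposition import FastICA

fs = 1000
t = np.arange(0, 5, 1 / fs)
mixed = (np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 2.3 * t)
         + 0.1 * np.random.randn(t.size))        # stand-in single-channel recording

imfs = EMD().emd(mixed)                          # orthogonal components from EMD
expanded = np.vstack([mixed, imfs])              # single channel -> multichannel data

ica = FastICA(n_components=min(4, expanded.shape[0]), random_state=0)
sources = ica.fit_transform(expanded.T)          # columns = estimated source signals
print(sources.shape)
```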
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldmann, I. P., E-mail: ingo@star.ucl.ac.uk
2014-01-01
Independent component analysis (ICA) has recently been shown to be a promising new path in data analysis and de-trending of exoplanetary time series signals. Such approaches do not require or assume any prior or auxiliary knowledge about the data or instrument in order to de-convolve the astrophysical light curve signal from instrument or stellar systematic noise. These methods are often known as 'blind-source separation' (BSS) algorithms. Unfortunately, all BSS methods suffer from an amplitude and sign ambiguity of their de-convolved components, which severely limits these methods in low signal-to-noise (S/N) observations where their scalings cannot be determined otherwise. Here we present a novel approach to calibrate ICA using sparse wavelet calibrators. The Amplitude Calibrated Independent Component Analysis (ACICA) allows for the direct retrieval of the independent components' scalings and the robust de-trending of low S/N data. Such an approach gives us a unique and unprecedented insight into the underlying morphology of a data set, which makes this method a powerful tool for exoplanetary data de-trending and signal diagnostics.
Evidence for modality-independent order coding in working memory.
Depoorter, Ann; Vandierendonck, André
2009-03-01
The aim of the present study was to investigate the representation of serial order in working memory, more specifically whether serial order is coded by means of a modality-dependent or a modality-independent order code. This was investigated by means of a series of four experiments based on a dual-task methodology in which one short-term memory task was embedded between the presentation and recall of another short-term memory task. Two aspects were varied in these memory tasks--namely, the modality of the stimulus materials (verbal or visuo-spatial) and the presence of an order component in the task (an order or an item memory task). The results of this study showed impaired primary-task recognition performance when both the primary and the embedded task included an order component, irrespective of the modality of the stimulus materials. If one or both of the tasks did not contain an order component, less interference was found. The results of this study support the existence of a modality-independent order code.
Artifacts and noise removal in electrocardiograms using independent component analysis.
Chawla, M P S; Verma, H K; Kumar, Vinod
2008-09-26
Independent component analysis (ICA) is a novel technique capable of separating independent components from electrocardiogram (ECG) complex signals. The purpose of this analysis is to evaluate the effectiveness of ICA in removing artifacts and noise from ECG recordings. ICA is applied to remove artifacts and noise in ECG segments of either an individual ECG CSE database file or all files. The reconstructed ECGs are compared with the original ECG signal. For the four special cases discussed, the R-peak magnitudes of the CSE database ECG waveforms before and after applying ICA are also found. In the results, it is shown that in most of the cases the percentage error in reconstruction is very small. The results show that there is a significant improvement in signal quality, i.e. SNR. All the ECG recording cases dealt with showed an improved ECG appearance after the use of ICA. This establishes the efficacy of ICA in the elimination of noise and artifacts in electrocardiograms.
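A minimal sketch of ICA-based artifact removal on a multi-lead ECG segment: decompose, zero the component judged to be artifact, and reconstruct. The lead matrix `ecg` and the crude rule for picking the artifact component are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from sklearn.decomposition import FastICA

n_samples, n_leads = 2000, 8
ecg = np.random.randn(n_samples, n_leads)      # placeholder for CSE-style leads

ica = FastICA(n_components=n_leads, random_state=0)
S = ica.fit_transform(ecg)                      # independent components (samples x comps)

artifact_idx = np.argmax(S.var(axis=0))         # crude pick; in practice chosen by
S_clean = S.copy()                              # inspection or a noise/artifact metric
S_clean[:, artifact_idx] = 0.0

ecg_clean = ica.inverse_transform(S_clean)      # reconstructed, artifact-suppressed ECG
err = np.linalg.norm(ecg - ecg_clean) / np.linalg.norm(ecg)
print(f"relative reconstruction change: {err:.3f}")
```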
Deciphering the Functional Composition of Fusogenic Liposomes
Kolašinac, Rejhana; Kleusch, Christian; Braun, Tobias; Merkel, Rudolf; Csiszár, Agnes
2018-01-01
Cationic liposomes are frequently used as carrier particles for nucleic acid delivery. The most popular formulation is the equimolar mixture of two components, a cationic lipid and a neutral phosphoethanolamine. Its uptake pathway has been described as endocytosis. The presence of an aromatic molecule as a third component strongly influences the cellular uptake process and results in complete membrane fusion instead of endocytosis. Here, we systematically varied all three components of this lipid mixture and determined how efficiently the resulting particles fused with the plasma membrane of living mammalian cells. Our results show that an aromatic molecule and a cationic lipid component with conical molecular shape are essential for efficient fusion induction. While a neutral lipid is not mandatory, it can be used to control fusion efficiency and, in the most extreme case, to revert the uptake mechanism back to endocytosis. PMID:29364187
Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data
Xu, Lizhen; Paterson, Andrew D.; Turpin, Williams; Xu, Wei
2015-01-01
Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitude and direction of the covariate effect on structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimations. We also evaluate the abilities of model selection strategies using Akaike information criterion (AIC) or Vuong test to identify the correct model. The simulation studies show that hurdle and zero inflated models have well controlled type I errors, higher power, better goodness of fit measures, and are more accurate and efficient in the parameter estimation. Besides that, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero inflated models. However, the estimation and interpretation of the parameters for the zero components differs, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero inflated data and implement it in a gut microbiome study of > 400 independent subjects. PMID:26148172
Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data.
Xu, Lizhen; Paterson, Andrew D; Turpin, Williams; Xu, Wei
2015-01-01
Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitude and direction of the covariate effect on structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimations. We also evaluate the abilities of model selection strategies using Akaike information criterion (AIC) or Vuong test to identify the correct model. The simulation studies show that hurdle and zero inflated models have well controlled type I errors, higher power, better goodness of fit measures, and are more accurate and efficient in the parameter estimation. Besides that, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero inflated models. However, the estimation and interpretation of the parameters for the zero components differs, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero inflated data and implement it in a gut microbiome study of > 400 independent subjects.
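A hedged sketch of the kind of comparison described above: fit a standard Poisson model and a zero-inflated Poisson model to OTU-like counts and rank them by AIC using statsmodels. The simulated counts, single covariate, and injected structural zeros are illustrative only.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(1)
n = 400
x = rng.normal(size=n)
X = sm.add_constant(x)
lam = np.exp(0.5 + 0.8 * x)
counts = rng.poisson(lam)
counts[rng.random(n) < 0.3] = 0                 # inject structural zeros

poisson_fit = sm.Poisson(counts, X).fit(disp=0)
zip_fit = ZeroInflatedPoisson(counts, X, exog_infl=X).fit(disp=0)

for name, res in [("Poisson", poisson_fit), ("ZIP", zip_fit)]:
    print(f"{name}: AIC = {res.aic:.1f}")       # lower AIC = preferred model
```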
Mixed model approaches for diallel analysis based on a bio-model.
Zhu, J; Weir, B S
1996-12-01
A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML), and MINQUE(θ), which uses parameter values for the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
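A delete-one jackknife sketch for the sampling variance of an estimated variance component, with a plain sample variance standing in for the MINQUE(1)/AUP estimators named in the abstract; the toy data and estimator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(scale=2.0, size=60)              # toy phenotypic records

def estimate(values):
    return np.var(values, ddof=1)               # stand-in variance-component estimator

full = estimate(y)
loo = np.array([estimate(np.delete(y, i)) for i in range(y.size)])  # leave-one-out
n = y.size
jackknife_var = (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

print(f"estimate = {full:.3f}, jackknife SE = {np.sqrt(jackknife_var):.3f}")
```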
Advances in high temperature components for AMTEC (alkali metal thermal-to-electric converter)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, R.M.; Jeffries-Nakamura, B.; Underwood, M.L.
1991-12-31
Long lifetimes are required for AMTEC (or sodium heat engine) components for aerospace and terrestrial applications, and the high heat input temperature as well as the alkali metal liquid and vapor environment places unusual demands on the materials used to construct AMTEC devices. In addition, it is important to maximize device efficiency and power density while maintaining a long life capability. In addition to the electrode, which must provide efficient electrode kinetics, transport of the alkali metal, and low electrical resistance, other high temperature components of the cell face equally demanding requirements. The beta″-alumina solid electrolyte (BASE), the seal between the BASE ceramic and its metallic transition to the hot alkali metal (liquid or vapor) source, and metallic components of the device are exposed to hot liquid alkali metal. Modification of AMTEC components may also be useful in optimizing the device for particular operating conditions. In particular, a potassium AMTEC may be expected to operate more efficiently at lower temperatures.
Advances in high temperature components for AMTEC (alkali metal thermal-to-electric converter)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, R.M.; Jeffries-Nakamura, B.; Underwood, M.L.
1991-01-01
Long lifetimes are required for AMTEC (or sodium heat engine) components for aerospace and terrestrial applications, and the high heat input temperature as well as the alkali metal liquid and vapor environment places unusual demands on the materials used to construct AMTEC devices. In addition, it is important to maximize device efficiency and power density while maintaining a long life capability. In addition to the electrode, which must provide efficient electrode kinetics, transport of the alkali metal, and low electrical resistance, other high temperature components of the cell face equally demanding requirements. The beta″-alumina solid electrolyte (BASE), the seal between the BASE ceramic and its metallic transition to the hot alkali metal (liquid or vapor) source, and metallic components of the device are exposed to hot liquid alkali metal. Modification of AMTEC components may also be useful in optimizing the device for particular operating conditions. In particular, a potassium AMTEC may be expected to operate more efficiently at lower temperatures.
Spatio-Chromatic Adaptation via Higher-Order Canonical Correlation Analysis of Natural Images
Gutmann, Michael U.; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús
2014-01-01
Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis explains similarly chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation. PMID:24533049
Spatio-chromatic adaptation via higher-order canonical correlation analysis of natural images.
Gutmann, Michael U; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús
2014-01-01
Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis explains similarly chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation.
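A minimal sketch of classical (linear) canonical correlation analysis on two views of the same samples, as a baseline for the higher-order variant proposed above; the synthetic data and the shared latent signal are assumptions for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
n = 500
shared = rng.normal(size=(n, 2))                     # latent signal common to both views
X = shared @ rng.normal(size=(2, 10)) + 0.5 * rng.normal(size=(n, 10))
Y = shared @ rng.normal(size=(2, 8)) + 0.5 * rng.normal(size=(n, 8))

cca = CCA(n_components=2).fit(X, Y)
U, V = cca.transform(X, Y)                            # canonical variates of each view
corrs = [np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(2)]
print("canonical correlations:", np.round(corrs, 2))
```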
Smolinski, Tomasz G; Buchanan, Roger; Boratyn, Grzegorz M; Milanova, Mariofanna; Prinz, Astrid A
2006-01-01
Background Independent Component Analysis (ICA) proves to be useful in the analysis of neural activity, as it allows for identification of distinct sources of activity. Applied to measurements registered in a controlled setting and under exposure to an external stimulus, it can facilitate analysis of the impact of the stimulus on those sources. The link between the stimulus and a given source can be verified by a classifier that is able to "predict" the condition a given signal was registered under, solely based on the components. However, the ICA's assumption about statistical independence of sources is often unrealistic and turns out to be insufficient to build an accurate classifier. Therefore, we propose to utilize a novel method, based on hybridization of ICA, multi-objective evolutionary algorithms (MOEA), and rough sets (RS), that attempts to improve the effectiveness of signal decomposition techniques by providing them with "classification-awareness." Results The preliminary results described here are very promising and further investigation of other MOEAs and/or RS-based classification accuracy measures should be pursued. Even a quick visual analysis of those results can provide an interesting insight into the problem of neural activity analysis. Conclusion We present a methodology of classificatory decomposition of signals. One of the main advantages of our approach is the fact that rather than solely relying on often unrealistic assumptions about statistical independence of sources, components are generated in the light of a underlying classification problem itself. PMID:17118151
Franco, Alexandre R; Ling, Josef; Caprihan, Arvind; Calhoun, Vince D; Jung, Rex E; Heileman, Gregory L; Mayer, Andrew R
2008-12-01
The human brain functions as an efficient system where signals arising from gray matter are transported via white matter tracts to other regions of the brain to facilitate human behavior. However, with a few exceptions, functional and structural neuroimaging data are typically optimized to maximize the quantification of signals arising from a single source. For example, functional magnetic resonance imaging (FMRI) is typically used as an index of gray matter functioning whereas diffusion tensor imaging (DTI) is typically used to determine white matter properties. While it is likely that these signals arising from different tissue sources contain complementary information, the signal processing algorithms necessary for the fusion of neuroimaging data across imaging modalities are still in a nascent stage. In the current paper we present a data-driven method for combining measures of functional connectivity arising from gray matter sources (FMRI resting state data) with different measures of white matter connectivity (DTI). Specifically, a joint independent component analysis (J-ICA) was used to combine these measures of functional connectivity following intensive signal processing and feature extraction within each of the individual modalities. Our results indicate that one of the most predominantly used measures of functional connectivity (activity in the default mode network) is highly dependent on the integrity of white matter connections between the two hemispheres (corpus callosum) and within the cingulate bundles. Importantly, the discovery of this complex relationship of connectivity was entirely facilitated by the signal processing and fusion techniques presented herein and could not have been revealed through separate analyses of both data types as is typically performed in the majority of neuroimaging experiments. We conclude by discussing future applications of this technique to other areas of neuroimaging and examining potential limitations of the methods.
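A sketch of the joint-ICA idea described above: per-subject feature vectors from two modalities are concatenated and decomposed together, so each component has both an fMRI part and a DTI part. The feature matrices here are random placeholders, not real imaging data, and the feature sizes are arbitrary assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

n_subjects, n_fmri_feat, n_dti_feat = 60, 300, 200
fmri = np.random.randn(n_subjects, n_fmri_feat)     # e.g. default-mode map features
dti = np.random.randn(n_subjects, n_dti_feat)       # e.g. FA skeleton features

joint = np.hstack([fmri, dti])                      # subjects x (fMRI + DTI features)
ica = FastICA(n_components=5, random_state=0)
loadings = ica.fit_transform(joint)                 # subject loadings per joint component
components = ica.components_                        # each row spans both modalities

fmri_part = components[:, :n_fmri_feat]             # fMRI portion of each joint component
dti_part = components[:, n_fmri_feat:]              # DTI portion of the same component
print(loadings.shape, fmri_part.shape, dti_part.shape)
```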
Software component quality evaluation
NASA Technical Reports Server (NTRS)
Clough, A. J.
1991-01-01
The paper describes a software inspection process that can be used to evaluate the quality of software components. Quality criteria, process application, independent testing of the process and proposed associated tool support are covered. Early results indicate that this technique is well suited for assessing software component quality in a standardized fashion. With automated machine assistance to facilitate both the evaluation and selection of software components, such a technique should promote effective reuse of software components.
Dual-energy x-ray image decomposition by independent component analysis
NASA Astrophysics Data System (ADS)
Jiang, Yifeng; Jiang, Dazong; Zhang, Feng; Zhang, Dengfu; Lin, Gang
2001-09-01
The spatial distributions of bone and soft tissue in the human body are separated by independent component analysis (ICA) of dual-energy x-ray images. This method can be applied because the dual-energy imaging model conforms to the ICA model: (1) the absorption in the body is mainly caused by photoelectric absorption and Compton scattering; (2) these take place simultaneously but are mutually independent; and (3) for monochromatic x-ray sources the total attenuation is a linear combination of these two absorption mechanisms. Compared with the conventional method, the proposed one needs no a priori information about the exact x-ray energy used for imaging, while the results of the separation agree well with the conventional one.
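A sketch of the linear-mixture view described above: the low- and high-energy images are treated as two mixtures of bone and soft-tissue maps and unmixed with ICA. The synthetic "tissue" images and the mixing weights are illustrative stand-ins, not calibrated attenuation coefficients.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
h, w = 64, 64
bone = rng.random((h, w))
soft = rng.random((h, w))

low = 0.7 * bone + 0.3 * soft                      # low-energy exposure (mixture)
high = 0.4 * bone + 0.6 * soft                     # high-energy exposure (mixture)

X = np.vstack([low.ravel(), high.ravel()]).T       # pixels x 2 observations
ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)                           # columns ~ bone / soft-tissue maps
maps = [S[:, k].reshape(h, w) for k in range(2)]   # up to ICA's sign/scale ambiguity
print(maps[0].shape, maps[1].shape)
```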
Shrinkage simplex-centroid designs for a quadratic mixture model
NASA Astrophysics Data System (ADS)
Hasan, Taha; Ali, Sajid; Ahmed, Munir
2018-03-01
A simplex-centroid design for q mixture components comprises all possible subsets of the q components, each present in equal proportions. The design does not contain full mixture blends except the overall centroid. In real-life situations, all mixture blends contain at least a minimum proportion of each component. Here, we introduce simplex-centroid designs which contain complete blends, but with some loss in D-efficiency and stability in G-efficiency. We call such designs shrinkage simplex-centroid designs. Furthermore, we use the proposed designs to generate component-amount designs by their projection.
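A short sketch of enumerating the blends of a standard q-component simplex-centroid design, i.e. every non-empty subset of components with the selected components in equal proportions (the shrinkage modification itself is not shown).

```python
from itertools import combinations

def simplex_centroid(q):
    """Return the blends of a q-component simplex-centroid design."""
    points = []
    for k in range(1, q + 1):
        for subset in combinations(range(q), k):
            blend = [1.0 / k if i in subset else 0.0 for i in range(q)]
            points.append(blend)
    return points

for blend in simplex_centroid(3):
    print(blend)   # 7 blends: 3 pure, 3 binary, 1 overall centroid
```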
Experimental evaluation of cooling efficiency of the high performance cooling device
NASA Astrophysics Data System (ADS)
Nemec, Patrik; Malcho, Milan
2016-06-01
This work deals with the experimental evaluation of the cooling efficiency of a cooling device capable of transferring high heat fluxes from electric elements to the surroundings. The work contains a description of the cooling device, its working principle, and its construction. The experimental part describes the measuring method used to evaluate the device's cooling efficiency. The results are presented graphically as the dependence of the temperature of the contact surface between the cooling device evaporator and the electronic components on the heat load of the electronic components in the range from 250 to 740 W, and as the dependence of the temperature of the loop thermosiphon condenser surface on the heat load of the electronic components in the same range.
Development of a Platform for Simulating and Optimizing Thermoelectric Energy Systems
NASA Astrophysics Data System (ADS)
Kreuder, John J.
Thermoelectrics are solid state devices that can convert thermal energy directly into electrical energy. They have historically been used only in niche applications because of their relatively low efficiencies. With the advent of nanotechnology and improved manufacturing processes, thermoelectric materials have become less costly and more efficient. As next-generation thermoelectric materials become available, there is a need for industries to quickly and cost-effectively seek out feasible applications for thermoelectric heat recovery platforms. Determining the technical and economic feasibility of such systems requires a model that predicts performance at the system level. Current models focus on specific system applications or neglect the rest of the system altogether, considering only module design rather than an entire energy system. To assist in screening and optimizing entire energy systems using thermoelectrics, a novel software tool, the Thermoelectric Power System Simulator (TEPSS), is developed for system-level simulation and optimization of heat recovery systems. The platform is designed for use with a generic energy system so that most types of thermoelectric heat recovery applications can be modeled. TEPSS is based on object-oriented programming in MATLAB. A modular, shell-based architecture is developed to carry out concept generation, system simulation, and optimization. Systems are defined according to the components and interconnectivity specified by the user. An iterative solution process based on Newton's method is employed to determine the system's steady state so that an objective function representing the cost of the system can be evaluated at the operating point. An optimization algorithm from MATLAB's Optimization Toolbox uses sequential quadratic programming to minimize this objective function with respect to a set of user-specified design variables and constraints. During this iterative process, many independent system simulations are executed and the optimal operating condition of the system is determined. A comprehensive guide to using the software platform is included. TEPSS is intended to be expandable so that users can add new types of components and implement component models with an adequate degree of complexity for a required application. Special steps are taken to ensure that the system of nonlinear algebraic equations in the system engineering model is square and that all equations are independent. In addition, the third-party program FluidProp is leveraged to allow for simulations of systems with a range of fluids. Sequential unconstrained minimization techniques are used to prevent physical variables like pressure and temperature from trending to infinity during optimization. Two case studies are performed to verify and demonstrate the simulation and optimization routines employed by TEPSS. The first is of a simple combined cycle in which the size of the heat exchanger and the fuel rate are optimized. The second case study is the optimization of geometric parameters of a thermoelectric heat recovery platform in a regenerative Brayton cycle. A basic package of components and interconnections is verified and provided as well.
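A sketch of the two numerical ingredients described above, in Python rather than MATLAB: a square nonlinear system is solved for the steady state at each design point, and that solve is wrapped inside a sequential-quadratic-programming optimization. The two-equation "energy system" and the cost function are toy placeholders, not TEPSS models.

```python
import numpy as np
from scipy.optimize import fsolve, minimize

def residuals(state, design):
    t_hot, t_cold = state
    k = design[0]                               # hypothetical design variable
    return [k * (t_hot - t_cold) - 100.0,       # toy energy balance
            t_hot + t_cold - 600.0]             # toy boundary condition

def cost(design):
    # Newton-type steady-state solve for the current design point.
    t_hot, t_cold = fsolve(residuals, x0=[400.0, 300.0], args=(design,))
    return -(t_hot - t_cold) / t_hot + 0.01 * design[0]   # toy objective to minimize

res = minimize(cost, x0=[1.0], method="SLSQP", bounds=[(0.1, 10.0)])
print("optimal design:", res.x, "objective:", res.fun)
```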
Goyret, Joaquín; Kelber, Almut
2012-01-01
Most visual systems are more sensitive to luminance than to colour signals. Animals resolve finer spatial detail and temporal changes through achromatic signals than through chromatic ones. This probably explains why detection of small, distant, or moving objects is typically mediated through achromatic signals. Macroglossum stellatarum are fast-flying nectarivorous hawkmoths that inspect flowers with their long proboscis while hovering. They can visually control this behaviour using floral markings known as nectar guides. Here, we investigate whether this is mediated by chromatic or achromatic cues. We evaluated proboscis placement, foraging efficiency, and inspection learning of naïve moths foraging on flower models with coloured markings that offered either chromatic, achromatic or both contrasts. Hummingbird hawkmoths could use either achromatic or chromatic signals to inspect models while hovering. We identified three, apparently independent, components controlling proboscis placement: After initial contact, 1) moths directed their probing towards the yellow colour irrespective of luminance signals, suggesting a dominant role of chromatic signals; and 2) moths tended to probe mainly on the brighter areas of models that offered only achromatic signals. 3) During the establishment of the first contact, naïve moths showed a tendency to direct their proboscis towards the small floral marks independent of their colour or luminance. Moths learned to find nectar faster, but their foraging efficiency depended on the flower model they foraged on. Our results imply that M. stellatarum can perceive small patterns through colour vision. We discuss how the different informational contents of chromatic and luminance signals can be significant for the control of flower inspection, and visually guided behaviours in general.
Bloom, A. Anthony; Exbrayat, Jean-François; van der Velde, Ivar R.; Feng, Liang; Williams, Mathew
2016-01-01
The terrestrial carbon cycle is currently the least constrained component of the global carbon budget. Large uncertainties stem from a poor understanding of plant carbon allocation, stocks, residence times, and carbon use efficiency. Imposing observational constraints on the terrestrial carbon cycle and its processes is, therefore, necessary to better understand its current state and predict its future state. We combine a diagnostic ecosystem carbon model with satellite observations of leaf area and biomass (where and when available) and soil carbon data to retrieve the first global estimates, to our knowledge, of carbon cycle state and process variables at a 1° × 1° resolution; retrieved variables are independent from the plant functional type and steady-state paradigms. Our results reveal global emergent relationships in the spatial distribution of key carbon cycle states and processes. Live biomass and dead organic carbon residence times exhibit contrasting spatial features (r = 0.3). Allocation to structural carbon is highest in the wet tropics (85–88%) in contrast to higher latitudes (73–82%), where allocation shifts toward photosynthetic carbon. Carbon use efficiency is lowest (0.42–0.44) in the wet tropics. We find an emergent global correlation between retrievals of leaf mass per leaf area and leaf lifespan (r = 0.64–0.80) that matches independent trait studies. We show that conventional land cover types cannot adequately describe the spatial variability of key carbon states and processes (multiple correlation median = 0.41). This mismatch has strong implications for the prediction of terrestrial carbon dynamics, which are currently based on globally applied parameters linked to land cover or plant functional types. PMID:26787856
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen
2016-04-01
Geodetic/geophysical observations, such as time series of global terrestrial water storage change or of sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time series approaches. In the last decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and more recently independent component analysis (ICA) are common techniques to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables, even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, the complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation; the complex time series contain the observed values in their real part and the temporal rate of variability in their imaginary part. (ii) An ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex data set in (i). (iii) Dominant non-stationary patterns are recognized as independent complex patterns that can be used to represent the space and time amplitude and phase propagations. We present the results of CICA on simulated and real cases, e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm. Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86 (7), 477-497, doi: 10.1007/s00190-011-0532-5.
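A sketch of step (i) only, under the assumption that the analytic signal from a Hilbert transform supplies the complex data set whose real part is the observed series and whose imaginary part carries its rate of variability in quadrature. The complex ICA step itself (fourth-order cumulant diagonalization) is not shown, and the toy series are random placeholders for gridded geophysical data.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(5)
n_grid, n_epochs = 50, 240                        # e.g. grid points x monthly epochs
obs = np.cumsum(rng.normal(size=(n_grid, n_epochs)), axis=1)  # toy storage-like series

obs_centered = obs - obs.mean(axis=1, keepdims=True)   # remove the temporal mean
analytic = hilbert(obs_centered, axis=1)               # real: data, imag: quadrature part
complex_data = analytic                                 # input for a complex ICA algorithm
print(complex_data.dtype, complex_data.shape)
```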
Design study of a kinematic Stirling engine for dispersed solar electric power systems
NASA Technical Reports Server (NTRS)
1980-01-01
The concept evaluation shows that the four-cylinder, double-acting U-type Stirling engine with annular regenerators is the most suitable engine type for the 15 kW solar application with respect to design, performance, and cost. Results show that the near-term performance of a metallic Stirling engine is 42% efficiency. Further improved components raise the efficiency of the future metallic engine to 45%. Increasing the heater temperature through the introduction of ceramic components contributes the most toward achieving the high efficiency goals. Future ceramic Stirling engines for solar applications show an efficiency of around 50%.
The Influence of Alertness on Spatial and Nonspatial Components of Visual Attention
ERIC Educational Resources Information Center
Matthias, Ellen; Bublak, Peter; Muller, Hermann J.; Schneider, Werner X.; Krummenacher, Joseph; Finke, Kathrin
2010-01-01
Three experiments investigated whether spatial and nonspatial components of visual attention would be influenced by changes in (healthy, young) subjects' level of alertness and whether such effects on separable components would occur independently of each other. The experiments used a no-cue/alerting-cue design with varying cue-target stimulus…
ERIC Educational Resources Information Center
Tighe, Elizabeth L.; Schatschneider, Christopher
2016-01-01
The current study employed a meta-analytic approach to investigate the relative importance of component reading skills to reading comprehension in struggling adult readers. A total of 10 component skills were consistently identified across 16 independent studies and 2,707 participants. Random effects models generated 76 predictor-reading…
Automatic and Direct Identification of Blink Components from Scalp EEG
Kong, Wanzeng; Zhou, Zhanpeng; Hu, Sanqing; Zhang, Jianhai; Babiloni, Fabio; Dai, Guojun
2013-01-01
Eye blink is an important and inevitable artifact during scalp electroencephalogram (EEG) recording. A central problem in EEG signal processing is how to identify eye blink components automatically with independent component analysis (ICA). Taking into account the fact that the eye blink, as an external source, has a higher sum of correlations with frontal EEG channels than all other sources, due to both its location and its significant amplitude, we propose in this paper a method based on a correlation index and the feature of power distribution to automatically detect eye blink components. Furthermore, we prove mathematically that the correlation between independent components and scalp EEG channels can be obtained directly from the mixing matrix of ICA. This helps to simplify calculations and to understand the implications of the correlation. The proposed method does not require a template or thresholds to be selected in advance, and it works without simultaneously recording an electrooculography (EOG) reference. The experimental results demonstrate that the proposed method can automatically recognize eye blink components with high accuracy on entire datasets from 15 subjects.
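A sketch of the selection rule described above: run ICA on scalp EEG and pick the component whose mixing-matrix weights, used here as a proxy for channel correlation, are largest over the frontal channels. The channel layout, frontal indices, and data are synthetic assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

n_samples, n_channels = 5000, 16
frontal = [0, 1, 2, 3]                            # indices of frontal electrodes (assumed)
eeg = np.random.randn(n_samples, n_channels)      # placeholder recording

ica = FastICA(n_components=n_channels, random_state=0)
S = ica.fit_transform(eeg)                        # samples x components
A = ica.mixing_                                   # channels x components

frontal_weight = np.abs(A[frontal, :]).sum(axis=0)   # summed frontal contribution
blink_idx = int(np.argmax(frontal_weight))           # candidate eye-blink component
print("blink component index:", blink_idx)
```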
78 FR 33838 - DOE Participation in Development of the International Energy Conservation Code
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-05
... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy [Docket No. EERE-2012-BT-BC... Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice and request for comment... Efficiency and Renewable Energy, Building Technologies Office, Mailstop EE-2J, 1000 Independence Avenue SW...
Unified design of sinusoidal-groove fused-silica grating.
Feng, Jijun; Zhou, Changhe; Cao, Hongchao; Lu, Peng
2010-10-20
A general design rule for a deep-etched subwavelength sinusoidal-groove fused-silica grating as a highly efficient polarization-independent or polarization-selective device is studied based on the simplified modal method. The analysis shows that the device structure depends little on the incident wavelength itself, but mainly on the ratio of groove depth to incident wavelength and the ratio of wavelength to grating period. These two ratios can be used as design guidelines for wavelength-independent structures from the deep ultraviolet to the far infrared. Optimized grating profiles functioning as a polarizing beam splitter, a polarization-independent two-port beam splitter, or a polarization-independent grating with high efficiency in the -1st order are obtained at a wavelength of 1064 nm and verified using rigorous coupled-wave analysis. The performance of the sinusoidal grating is better than that of a conventional rectangular one, which could be useful for practical applications.
[Intranet applications in radiology].
Knopp, M V; von Hippel, G M; Koch, T; Knopp, M A
2000-01-01
The aim of the paper is to present the conceptual basis and capabilities of intranet applications in radiology. The intranet, the local counterpart of the internet, can be readily realized using existing computer components and a network. All current computer operating systems support intranet applications, which allow hardware- and software-independent communication of text, images, video, and sound through browser software, without dedicated programs on the individual personal computers. Radiological applications include text communication, e.g. department-specific bulletin boards and access to examination protocols; image communication for viewing, limited processing, and documentation of radiological images on decentralized PCs; and speech communication for dictation, distribution of dictations, and speech recognition. The intranet helps to optimize organizational efficiency and cost effectiveness in the daily work of radiology departments in both outpatient and hospital settings. The general interest in internet and intranet technology will guarantee its continuous development.
Prediction of BP reactivity to talking using hybrid soft computing approaches.
Kaur, Gurmanik; Arora, Ajat Shatru; Jain, Vijender Kumar
2014-01-01
High blood pressure (BP) is associated with an increased risk of cardiovascular diseases. Therefore, optimal precision in the measurement of BP is appropriate in clinical and research studies. In this work, anthropometric characteristics including age, height, weight, body mass index (BMI), and arm circumference (AC) were used as independent predictor variables for the prediction of BP reactivity to talking. Principal component analysis (PCA) was fused with an artificial neural network (ANN), an adaptive neuro-fuzzy inference system (ANFIS), and a least squares support vector machine (LS-SVM) model to remove the multicollinearity effect among the anthropometric predictor variables. Statistical tests in terms of the coefficient of determination (R²), root mean square error (RMSE), and mean absolute percentage error (MAPE) revealed that the PCA-based LS-SVM (PCA-LS-SVM) model produced a more efficient prediction of BP reactivity than the other models. This assessment presents the importance and advantages of PCA-fused prediction models for the prediction of biological variables.
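As a rough illustration of the PCA-fused modelling idea, the sketch below chains standardization, PCA, and a kernel regressor on synthetic anthropometric-style predictors. LS-SVM is not available in scikit-learn, so support vector regression is used as a stand-in; all variable names and data are invented.

```python
# Illustrative PCA-fused regression pipeline: correlated predictors are
# projected onto principal components before a kernel regression.
# SVR is a stand-in for LS-SVM; the data are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 150
height = rng.normal(170, 10, n)
weight = rng.normal(70, 12, n) + 0.5 * (height - 170)        # correlated with height
bmi = weight / (height / 100) ** 2                            # deterministic, hence collinear
age = rng.normal(40, 12, n)
arm_circ = 0.3 * weight + rng.normal(28, 2, n)
X = np.c_[age, height, weight, bmi, arm_circ]
bp_reactivity = 0.2 * age + 0.5 * bmi + rng.normal(0, 3, n)   # synthetic target

model = make_pipeline(StandardScaler(), PCA(n_components=3), SVR(kernel="rbf", C=10.0))
r2 = cross_val_score(model, X, bp_reactivity, cv=5, scoring="r2")
print("cross-validated R^2: %.2f +/- %.2f" % (r2.mean(), r2.std()))
```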
Horiuchi, Tsutomu; Hayashi, Katsuyoshi; Seyama, Michiko; Inoue, Suzuyo; Tamechika, Emi
2012-10-18
A passive pump consisting of integrated vertical capillaries has been developed as a useful microfluidic chip component with excellent flow volume and flow rate. The pump was built into a microfluidic chip by connecting the bottoms of all the capillaries to a thin-layer channel at the top surface of the chip, with the channel depth smaller than the capillary radius. As a result, the vertical capillaries drew fluid cooperatively rather than independently, exerting the maximum suction efficiency at every instant. This produced a flow rate with little variation, without any external power or operation. A microfluidic chip incorporating this passive pump achieved a quasi-steady flow rate rather than the rapidly decreasing flow rate that is the universal characteristic of an ordinary capillary.
Sluggish vagal brake reactivity to physical exercise challenge in children with selective mutism.
Heilman, Keri J; Connolly, Sucheta D; Padilla, Wendy O; Wrzosek, Marika I; Graczyk, Patricia A; Porges, Stephen W
2012-02-01
Cardiovascular response patterns to laboratory-based social and physical exercise challenges were evaluated in 69 children and adolescents, 20 with selective mutism (SM), to identify possible neurophysiological mechanisms that may mediate the behavioral features of SM. Results suggest that SM is associated with a dampened response of the vagal brake to physical exercise that is manifested as reduced reactivity in heart rate and respiration. Polyvagal theory proposes that the regulation of the vagal brake is a neurophysiological component of an integrated social engagement system that includes the neural regulation of the laryngeal and pharyngeal muscles. Within this theoretical framework, sluggish vagal brake reactivity may parallel an inability to recruit efficiently the structures involved in speech. Thus, the findings suggest that dampened autonomic reactivity during mobilization behaviors may be a biomarker of SM that can be assessed independent of the social stimuli that elicit mutism.
Dynamically variable negative stiffness structures.
Churchill, Christopher B; Shahan, David W; Smith, Sloan P; Keefe, Andrew C; McKnight, Geoffrey P
2016-02-01
Variable stiffness structures that enable a wide range of efficient load-bearing and dexterous activity are ubiquitous in mammalian musculoskeletal systems but are rare in engineered systems because of their complexity, power, and cost. We present a new negative stiffness-based load-bearing structure with dynamically tunable stiffness. Negative stiffness, traditionally used to achieve novel response from passive structures, is a powerful tool to achieve dynamic stiffness changes when configured with an active component. Using relatively simple hardware and low-power, low-frequency actuation, we show an assembly capable of fast (<10 ms) and useful (>100×) dynamic stiffness control. This approach mitigates limitations of conventional tunable stiffness structures that exhibit either small (<30%) stiffness change, high friction, poor load/torque transmission at low stiffness, or high power active control at the frequencies of interest. We experimentally demonstrate actively tunable vibration isolation and stiffness tuning independent of supported loads, enhancing applications such as humanoid robotic limbs and lightweight adaptive vibration isolators.
Open-circuit voltage improvements in low-resistivity solar cells
NASA Technical Reports Server (NTRS)
Godlewski, M. P.; Klucher, T. M.; Mazaris, G. A.; Weizer, V. G.
1979-01-01
Mechanisms limiting the open-circuit voltage in 0.1 ohm-cm solar cells were investigated. It was found that a rather complicated multistep diffusion process could produce cells with significantly improved voltages. The voltage capabilities of various laboratory cells were compared independent of their absorption and collection efficiencies. This was accomplished by comparing the cells on the basis of their saturation currents or, equivalently, comparing their voltage outputs at a constant current-density level. The results show that for both the Lewis diffused emitter cell and the Spire ion-implanted emitter cell the base component of the saturation current is voltage controlling. The evidence for the University of Florida cells, although not very conclusive, suggests emitter control of the voltage in this device. The data suggest further that the critical voltage-limiting parameter for the Lewis cell is the electron mobility in the cell base.
Wang, Yan; Zheng, Xiyin; Yu, Bingjie; Han, Shaojie; Guo, Jiangbo; Tang, Haiping; Yu, Alice Yunzi L; Deng, Haiteng; Hong, Yiguo; Liu, Yule
2015-01-01
Microtubules, the major components of cytoskeleton, are involved in various fundamental biological processes in plants. Recent studies in mammalian cells have revealed the importance of microtubule cytoskeleton in autophagy. However, little is known about the roles of microtubules in plant autophagy. Here, we found that ATG6 interacts with TUB8/β-tubulin 8 and colocalizes with microtubules in Nicotiana benthamiana. Disruption of microtubules by either silencing of tubulin genes or treatment with microtubule-depolymerizing agents in N. benthamiana reduces autophagosome formation during upregulation of nocturnal or oxidation-induced macroautophagy. Furthermore, a blockage of leaf starch degradation occurred in microtubule-disrupted cells and triggered a distinct ATG6-, ATG5- and ATG7-independent autophagic pathway termed starch excess-associated chloroplast autophagy (SEX chlorophagy) for clearance of dysfunctional chloroplasts. Our findings reveal that an intact microtubule network is important for efficient macroautophagy and leaf starch degradation. PMID:26566764
Roshan, Abdul-Rahman A; Gad, Haidy A; El-Ahmady, Sherweit H; Khanbash, Mohamed S; Abou-Shoer, Mohamed I; Al-Azizi, Mohamed M
2013-08-14
This work describes a simple model developed for the authentication of monofloral Yemeni Sidr honey using UV spectroscopy together with the chemometric techniques of hierarchical cluster analysis (HCA), principal component analysis (PCA), and soft independent modeling of class analogy (SIMCA). The model was constructed using 13 genuine Sidr honey samples and challenged with 25 honey samples of different botanical origins. HCA and PCA successfully presented a preliminary clustering pattern that segregated the genuine Sidr samples from the lower-priced local polyfloral and non-Sidr samples. The SIMCA model presented a clear demarcation of the samples and was used to identify genuine Sidr honey samples as well as to detect admixture with lower-priced polyfloral honey at levels above 10%. The constructed model presents a simple and efficient method of analysis and may serve as a basis for the authentication of other honey types worldwide.
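A minimal sketch of the exploratory part of such a chemometric workflow (PCA scores plus hierarchical clustering) is given below, assuming synthetic Gaussian-shaped UV spectra; SIMCA class modelling itself is not reproduced.

```python
# Toy chemometrics sketch: PCA for an exploratory view of spectra and
# hierarchical clustering for grouping. Spectra and class sizes are invented.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
wavelengths = np.linspace(220, 400, 90)

def spectrum(peak):
    # Gaussian-shaped absorbance band plus a little measurement noise.
    return np.exp(-((wavelengths - peak) / 25.0) ** 2) + 0.02 * rng.normal(size=wavelengths.size)

sidr = np.array([spectrum(285) for _ in range(13)])       # "genuine" class
other = np.array([spectrum(305) for _ in range(25)])      # other botanical origins
X = np.vstack([sidr, other])

scores = PCA(n_components=2).fit_transform(X - X.mean(axis=0))
clusters = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
print(clusters)   # genuine samples should fall largely into one cluster
```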
Access to patents as sources to musical acoustics inventions
NASA Astrophysics Data System (ADS)
Brock-Nannestad, George
2005-09-01
Patents are important sources for the development of any technology. The paper addresses modern methods of access to patent publications relating to musical acoustics, in particular the constructions of instruments and components for instruments, methods for tuning, methods for teaching, and measuring equipment. The patent publications available are, among others, from the U.S., England, France, Germany, Japan, Russia, and the date range is from ca. 1880 to the present day. The two main searchable websites use different classification systems in their approach, and by suitable combination of the information it is possible to target the search efficiently. The paper will demonstrate the recent transfer of inventions relating to physical instruments to electronic simulations, and the fact that most recent inventions were made by independent inventors. A specific example is given by discussing the proposals for improved pipe organ and violin constructions invented in Denmark in the 1930s by Jarnak based on patented improvements for telephone reproducers.
Parallel evolution of the make–accumulate–consume strategy in Saccharomyces and Dekkera yeasts
Rozpędowska, Elżbieta; Hellborg, Linda; Ishchuk, Olena P.; Orhan, Furkan; Galafassi, Silvia; Merico, Annamaria; Woolfit, Megan; Compagno, Concetta; Piškur, Jure
2011-01-01
Saccharomyces yeasts degrade sugars to two-carbon components, in particular ethanol, even in the presence of excess oxygen. This characteristic is called the Crabtree effect and is the background for the 'make–accumulate–consume' life strategy, which in natural habitats helps Saccharomyces yeasts to out-compete other microorganisms. A global promoter rewiring in the Saccharomyces cerevisiae lineage, which occurred around 100 mya, was one of the main molecular events providing the background for evolution of this strategy. Here we show that the Dekkera bruxellensis lineage, which separated from the Saccharomyces yeasts more than 200 mya, also efficiently makes, accumulates and consumes ethanol and acetic acid. Analysis of promoter sequences indicates that both lineages independently underwent a massive loss of a specific cis-regulatory element from dozens of genes associated with respiration, and we show that also in D. bruxellensis this promoter rewiring contributes to the observed Crabtree effect. PMID:21556056
The evolution of CELSS for lunar bases. [Controlled Ecological Life Support Systems
NASA Technical Reports Server (NTRS)
Macelroy, R. D.; Klein, H. P.; Averner, M. M.
1985-01-01
A bioregenerative life support system designed to address the fundamental requirements of a functioning independent lunar base is presented in full. Issues to be discussed are associated with CELSS weight, volume and cost of operation. The fundamental CELSS component is a small, highly automated module containing plants which photosynthesize and provide the crew with food, water and oxygen. Hydrogen, nitrogen and carbon dioxide will be initially brought in from earth, recycled and their waste products conserved. As the insufficiency of buffers necessitates stringent cybernetic control, a stable state will be maintained by computer control. Through genetic engineering and carbon dioxide, temperature, and nutrient manipulation, plant productivity can be increased, while the area necessary for growth and illumination energy decreased. In addition, photosynthetic efficiency can be enhanced through lamp design, fiber optics and the use of appropriate wavelengths. Crop maintenance will be performed by robotics, as a means of preventing plant ailments.
A semiparametric graphical modelling approach for large-scale equity selection
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption. PMID:28316507
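The rank-based, latent-Gaussian flavour of this kind of model can be sketched as follows: Kendall's tau is mapped to a latent correlation via the sine transform, and a sparse precision matrix is then estimated with the graphical lasso. The simulated returns, the regularization level, and the "least connected stocks" selection rule are illustrative assumptions, not the authors' algorithm.

```python
# Sketch of a rank-based latent-correlation estimate followed by a sparse
# graphical model fit. Returns are simulated; stock selection by low
# connectivity is a toy rule for illustration only.
import numpy as np
from scipy.stats import kendalltau
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(3)
n_days, n_stocks = 500, 10
common = rng.normal(size=(n_days, 1))
returns = 0.6 * common + rng.normal(size=(n_days, n_stocks))   # one shared factor

# Rank-based latent correlation estimate (robust to marginal transformations).
R = np.eye(n_stocks)
for i in range(n_stocks):
    for j in range(i + 1, n_stocks):
        tau, _ = kendalltau(returns[:, i], returns[:, j])
        R[i, j] = R[j, i] = np.sin(np.pi * tau / 2.0)

_, precision = graphical_lasso(R, alpha=0.1)
degree = (np.abs(precision) > 1e-4).sum(axis=1) - 1            # off-diagonal edges per stock
print("least connected stocks:", np.argsort(degree)[:4])
```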
Giuseppone, Nicolas; Schmitt, Jean-Louis; Schwartz, Evan; Lehn, Jean-Marie
2005-04-20
Sc(OTf)3 efficiently catalyzes the self-sufficient transimination reaction between various types of C=N bonds in organic solvents, with turnover frequencies up to 3600 h^-1 and rate accelerations up to 6 × 10^5. The mechanism of the crossover reaction in mixtures of amines and imines is studied, comparing parallel individual reactions with coupled equilibria. The intrinsic kinetic parameters for isolated reactions cannot simply be added up when several components are mixed, and the behavior of the system agrees with the presence of a unique mediator that constitutes the core of a network of competing reactions. In mixed systems, every single amine or imine competes for the same central hub, in accordance with their binding affinity for the catalyst metal ion center. More generally, the study extends the basic principles of constitutional dynamic chemistry to interconnected chemical transformations and provides a step toward dynamic systems of increasing complexity.
Li, Rui; Zhang, Qing; Li, Junbai; Shi, Hualin
2016-01-01
An experimental system was designed to measure in vivo termination efficiency (TE) of the Rho-independent terminator and position–function relations were quantified for the terminator tR2 in Escherichia coli. The terminator function was almost completely repressed when tR2 was located several base pairs downstream from the gene, and TE gradually increased to maximum values with the increasing distance between the gene and terminator. This TE–distance relation reflected a stochastic coupling of the ribosome and RNA polymerase (RNAP). Terminators located in the first 100 bp of the coding region can function efficiently. However, functional repression was observed when the terminator was located in the latter part of the coding region, and the degree of repression was determined by transcriptional and translational dynamics. These results may help to elucidate mechanisms of Rho-independent termination and reveal genomic locations of terminators and functions of the sequence that precedes terminators. These observations may have important applications in synthetic biology. PMID:26602687
Component research for future propulsion systems
NASA Technical Reports Server (NTRS)
Walker, C. L.; Weden, G. J.; Zuk, J.
1981-01-01
Factors affecting the helicopter market are reviewed. The trade-offs involving acquisition cost, mission reliability, and life cycle cost are reviewed, including civil and military aspects. The potential for advanced vehicle configurations with substantial improvements in energy efficiency, operating economics, and characteristics to satisfy the demands of the future market are identified. Advanced propulsion systems required to support these vehicle configurations are discussed, as well as the component technology for the engine systems. Considerations for selection of components in areas of economics and efficiency are presented.
Chess, David G.; Grainger, R. Wayne; Phillips, Tom; Zarzour, Zane D.; Sheppard, Bruce R.
1996-01-01
Objective To review the clinical performance of the anatomic medullary locking (AML) femoral stem in total hip arthroplasty. Design A clinical and radiographic review. Setting A tertiary lower limb joint replacement centre. Patients Two hundred and twenty-one patients with noninflammatory gonarthrosis. Interventions Two hundred and twenty-seven primary total hip arthroplasties with the noncemented AML component completed by two surgeons. Main Outcome Measures Independent review by two experienced reviewers of the postoperative Harris hip score, radiographs of component fixation, size and degree of diaphyseal fill. Results Harris hip score was 84 (range from 43 to 98); component fixation showed bone ingrowth in 41%, stable fixation with fibrous ingrowth in 56% and unstable fixation in 3%; severe thigh pain in 4% of cases correlated with unstable fixation, and there was mild thigh pain in 20% of cases. Conclusion The AML femoral stem performs well in replacement arthroplasty compared with other noncemented stems. PMID:8857987
Signal, noise, and variation in neural and sensory-motor latency
Lee, Joonyeol; Joshua, Mati; Medina, Javier F.; Lisberger, Stephen G.
2016-01-01
Analysis of the neural code for sensory-motor latency in smooth pursuit eye movements reveals general principles of neural variation and the specific origin of motor latency. The trial-by-trial variation in neural latency in MT comprises: a shared component expressed as neuron-neuron latency correlations; and an independent component that is local to each neuron. The independent component arises heavily from fluctuations in the underlying probability of spiking with an unexpectedly small contribution from the stochastic nature of spiking itself. The shared component causes the latency of single neuron responses in MT to be weakly predictive of the behavioral latency of pursuit. Neural latency deeper in the motor system is more strongly predictive of behavioral latency. A model reproduces both the variance of behavioral latency and the neuron-behavior latency correlations in MT if it includes realistic neural latency variation, neuron-neuron latency correlations in MT, and noisy gain control downstream from MT. PMID:26971946
Schmithorst, Vincent J; Brown, Rhonda Douglas
2004-07-01
The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.
Azevedo, C F; Nascimento, M; Silva, F F; Resende, M D V; Lopes, P S; Guimarães, S E F; Glória, L S
2015-10-09
A significant contribution of molecular genetics is the direct use of DNA information to identify genetically superior individuals. With this approach, genome-wide selection (GWS) can be used for this purpose. GWS consists of analyzing a large number of single nucleotide polymorphism markers widely distributed in the genome; however, because the number of markers is much larger than the number of genotyped individuals, and such markers are highly correlated, special statistical methods are widely required. Among these methods, independent component regression, principal component regression, partial least squares, and partial principal components stand out. Thus, the aim of this study was to propose an application of the methods of dimensionality reduction to GWS of carcass traits in an F2 (Piau x commercial line) pig population. The results show similarities between the principal and the independent component methods and provided the most accurate genomic breeding estimates for most carcass traits in pigs.
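For readers unfamiliar with these dimension-reduction regressions, the toy sketch below shows principal component regression on simulated SNP data where markers far outnumber individuals; the marker counts, QTL effects, and component count are arbitrary choices, not the study's settings.

```python
# Toy principal component regression for genome-wide selection: many correlated
# SNP markers (0/1/2 counts) are projected to a few components before ordinary
# regression. All data are simulated.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_animals, n_markers = 300, 3000
geno = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)
qtl = rng.choice(n_markers, size=30, replace=False)            # markers with real effects
effects = rng.normal(0, 0.5, size=30)
trait = geno[:, qtl] @ effects + rng.normal(0, 1.0, size=n_animals)

pcr = make_pipeline(StandardScaler(), PCA(n_components=50), LinearRegression())
acc = cross_val_score(pcr, geno, trait, cv=5, scoring="r2")
print("predictive R^2: %.2f" % acc.mean())
```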
Contact-free heart rate measurement using multiple video data
NASA Astrophysics Data System (ADS)
Hung, Pang-Chan; Lee, Kual-Zheng; Tsai, Luo-Wei
2013-10-01
In this paper, we propose a contact-free heart rate measurement method based on analyzing sequential images of multiple video data. In the proposed method, skin-like pixels are first detected in the multiple video data to extract color features. These color features are synchronized and analyzed by independent component analysis. A representative component is then selected among the independent component candidates to measure the heart rate, which achieves under 2% deviation on average compared with a pulse oximeter in a controlled environment. The advantages of the proposed method are: 1) it uses a low-cost and highly accessible camera device; 2) it eases users' discomfort through contact-free measurement; and 3) it achieves a low error rate and high stability by integrating multiple video data.
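A hypothetical end-to-end sketch of this kind of pipeline is shown below: averaged skin-pixel color traces are unmixed with ICA and the heart rate is read from the strongest spectral peak of one component. The simulated traces and the band-limited peak-picking rule are assumptions, not the authors' selection method.

```python
# Sketch: ICA on per-frame RGB averages, then heart rate from the dominant
# spectral peak in a plausible 0.7-4 Hz band. Frames are simulated.
import numpy as np
from sklearn.decomposition import FastICA

fps, seconds, hr_hz = 30.0, 20, 1.2          # 72 beats per minute
n = int(fps * seconds)
t = np.arange(n) / fps
rng = np.random.default_rng(5)

pulse = 0.01 * np.sin(2 * np.pi * hr_hz * t)                 # photoplethysmographic component
rgb = np.c_[0.7 * pulse, 1.0 * pulse, 0.3 * pulse] + 0.02 * rng.normal(size=(n, 3))
rgb += 0.05 * np.sin(2 * np.pi * 0.3 * t)[:, None]           # slow illumination drift

comps = FastICA(n_components=3, random_state=0).fit_transform(rgb - rgb.mean(axis=0))
freqs = np.fft.rfftfreq(n, d=1.0 / fps)
band = (freqs > 0.7) & (freqs < 4.0)                         # plausible heart-rate band

spectra = np.abs(np.fft.rfft(comps, axis=0))
best = np.argmax(spectra[band].max(axis=0))                  # component with strongest peak
hr_estimate = 60.0 * freqs[band][np.argmax(spectra[band, best])]
print("estimated heart rate: %.1f bpm" % hr_estimate)
```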
Roh, Taehwan; Song, Kiseok; Cho, Hyunwoo; Shin, Dongjoo; Yoo, Hoi-Jun
2014-12-01
A wearable neuro-feedback system is proposed with a low-power neuro-feedback SoC (NFS), which supports mental status monitoring with electroencephalography (EEG) and transcranial electrical stimulation (tES) for neuro-modulation. Self-configured independent component analysis (ICA) is implemented to accelerate source separation at low power. Moreover, an embedded support vector machine (SVM) enables online source classification, configuring the ICA accelerator adaptively depending on the types of the decomposed components. Owing to the hardwired accelerating functions, the NFS dissipates only 4.45 mW to yield 16 independent components. For non-invasive neuro-modulation, tES up to 2 mA is implemented on the SoC. The NFS is fabricated in 130-nm CMOS technology.
Methanol ice in the protostar GL 2136
NASA Technical Reports Server (NTRS)
Skinner, C. J.; Tielens, A. G. G. M.; Barlow, M. J.; Justtanont, K.
1992-01-01
We present ground-based spectra in the 10 and 20 micron atmospheric windows of the deeply embedded protostar GL 2136. These reveal narrow absorption features at 9.7 and 8.9 microns, which we ascribe to the CO-stretch and CH3 rock (respectively) of solid methanol in grain mantles. The peak position of the 9.7 micron band implies that methanol is an important ice mantle component. However, the CH3OH/H2O abundance ratio derived from the observed column densities is only 0.1. This discrepancy suggests that the solid methanol and water ice are located in independent grain components. These independent components may reflect chemical differentiation during grain mantle formation and/or partial outgassing close to the protostar.
Halftoning method for the generation of motion stimuli
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Stone, Leland S.
1989-01-01
This paper describes a novel computer-graphic technique for the generation of a broad class of motion stimuli for vision research, which uses color table animation in conjunction with a single base image. Using this technique, contrast and temporal frequency can be varied with a negligible amount of computation once a single base image is produced. Since only two bit planes are needed to display a single drifting grating, an eight-bit/pixel display can be used to generate four-component plaids in which each component of the plaid has independently programmable contrast and temporal frequency. Because the contrasts and temporal frequencies of the various components are mutually independent, a large number of two-dimensional stimulus motions can be produced from a single image file.
Maneshi, Mona; Vahdat, Shahabeddin; Gotman, Jean; Grova, Christophe
2016-01-01
Independent component analysis (ICA) has been widely used to study functional magnetic resonance imaging (fMRI) connectivity. However, the application of ICA in multi-group designs is not straightforward. We have recently developed a new method named “shared and specific independent component analysis” (SSICA) to perform between-group comparisons in the ICA framework. SSICA is sensitive to extract those components which represent a significant difference in functional connectivity between groups or conditions, i.e., components that could be considered “specific” for a group or condition. Here, we investigated the performance of SSICA on realistic simulations, and task fMRI data and compared the results with one of the state-of-the-art group ICA approaches to infer between-group differences. We examined SSICA robustness with respect to the number of allowable extracted specific components and between-group orthogonality assumptions. Furthermore, we proposed a modified formulation of the back-reconstruction method to generate group-level t-statistics maps based on SSICA results. We also evaluated the consistency and specificity of the extracted specific components by SSICA. The results on realistic simulated and real fMRI data showed that SSICA outperforms the regular group ICA approach in terms of reconstruction and classification performance. We demonstrated that SSICA is a powerful data-driven approach to detect patterns of differences in functional connectivity across groups/conditions, particularly in model-free designs such as resting-state fMRI. Our findings in task fMRI show that SSICA confirms results of the general linear model (GLM) analysis and when combined with clustering analysis, it complements GLM findings by providing additional information regarding the reliability and specificity of networks. PMID:27729843
Data analysis using a combination of independent component analysis and empirical mode decomposition
NASA Astrophysics Data System (ADS)
Lin, Shih-Lin; Tung, Pi-Cheng; Huang, Norden E.
2009-06-01
A combination of independent component analysis and empirical mode decomposition (ICA-EMD) is proposed in this paper to analyze low signal-to-noise ratio data. The advantages of the ICA-EMD combination are that ICA needs few sensory clues to separate the original source from unwanted noise, and EMD can effectively separate the data into its constituent parts. The case studies reported here involve original sources contaminated by white Gaussian noise. The simulation results show that the ICA-EMD combination is an effective data analysis tool.
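The combination can be sketched as follows, with EMD splitting a noisy record into intrinsic mode functions and ICA then searching the modes for a source-dominated component. The EMD step assumes the third-party PyEMD package (pip name EMD-signal) and its EMD().emd() call; the test signal and the component-rating step are illustrative only.

```python
# Conceptual ICA-EMD sketch for a noisy one-dimensional record. The EMD call
# assumes the external PyEMD package; everything else is illustrative.
import numpy as np
from sklearn.decomposition import FastICA
from PyEMD import EMD   # assumed external dependency (pip install EMD-signal)

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 2000)
source = np.sin(2 * np.pi * 12 * t) * np.exp(-3 * t)        # decaying tone
observed = source + 0.8 * rng.normal(size=t.size)           # low-SNR record

imfs = EMD().emd(observed)                                  # intrinsic mode functions, (n_imf, n_samples)
ica = FastICA(n_components=min(4, imfs.shape[0]), random_state=0)
comps = ica.fit_transform(imfs.T)                           # samples x components

# For this synthetic example we rate components by correlation with the known
# clean source; in practice a spectral or kurtosis criterion would be used.
corr = [abs(np.corrcoef(c, source)[0, 1]) for c in comps.T]
print("best component:", int(np.argmax(corr)), "corr=%.2f" % max(corr))
```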
Investigation for all polarization conversions of the guided-modes in a bending waveguide
NASA Astrophysics Data System (ADS)
Shi, Yunjie; Shang, Hongpeng; Sun, DeGui
2018-03-01
In this work, a new solution to the partial differential Maxwell equations is first derived to investigate all polarization conversions of the transverse and longitudinal components of guided modes in a bending waveguide. Then, for silica waveguides, the polarization conversion efficiencies are numerically calculated, and a significant finding is that the transverse-longitudinal polarization conversion efficiency is much higher than that of the transverse-transverse polarization conversion. Furthermore, the dependences of all the conversion efficiencies on the waveguide parameters are characterized. The agreement between the numerical calculations and the finite difference time-domain (FDTD) simulations shows that, for two 100 μm long bending waveguides of 0.75 and 1.50% index contrast, amplitude conversion efficiencies from ~10^-3 to ~10^-2 can be realized for the transverse-transverse polarization components and of ~10^-1 for the transverse-longitudinal polarization components.
Enhanced efficiency of the second harmonic inhomogeneous component in an opaque cavity.
Roppo, V; Raineri, F; Raj, R; Sagnes, I; Trull, J; Vilaseca, R; Scalora, M; Cojocaru, C
2011-05-15
In this Letter, we experimentally demonstrate the enhancement of the inhomogeneous second harmonic conversion in the opaque region of a GaAs cavity with efficiencies of the order of 0.1% at 612 nm, using 3 ps pump pulses having peak intensities of the order of 10 MW/cm(2). We show that the conversion efficiency of the inhomogeneous, phase-locked second harmonic component is a quadratic function of the cavity factor Q. © 2011 Optical Society of America
Postoperated hip fracture rehabilitation effectiveness and efficiency in a community hospital.
Tan, Adrian K H; Taiju, Rangpa; Menon, Edward B; Koh, Gerald C H
2014-04-01
This study aims to determine the inpatient rehabilitation effectiveness (REs) and rehabilitation efficiency (REy) of hip fracture in a Singapore community hospital (CH), their association with socio-demographic variables, medical comorbidities, and admission Shah-modified Barthel Index (BI) score, as well as the change in independent ambulation from discharge to 4 months later. This was a retrospective cohort study using data manually extracted from the medical records of all patients who had sustained a hip fracture within the preceding 90 days and were admitted to a CH for rehabilitation after the operation. Multiple linear regression was used to identify independent predictors of REs and REy. The mean REs was 40.4% (95% Confidence Interval (CI), 36.7 to 44.0). The independent predictors of poorer REs on multivariate analysis were older age, Malay (vs non-Malay) ethnicity, fewer rehabilitative therapy sessions, and dementia. The mean REy was 0.41 units per day (CI, 0.36 to 0.46). The independent predictors of poorer REy on multivariate analysis were higher admission BI and not having hypertension. The prevalence of independent ambulation improved from 78.9% at discharge to 88.3% 4 months later. CH inpatient rehabilitative therapy showed an REs of 40.4% and an REy of 0.41 units per day; the optimum number of rehabilitative therapy sessions was 28 to 41 in terms of rehabilitation effectiveness, whereas the maximum rehabilitation efficiency was seen in those completing 14 to 27 sessions. The study also showed improvement in BI at discharge and improvement in independent ambulation 4 months after discharge from the CH.
NASA Astrophysics Data System (ADS)
Akranata, Ahmad Ridho; Sulistijono, Awali, Jatmoko
2018-04-01
A sacrificial anode is a sacrificial component used to protect steel from corrosion. In water environments, such components are generally made of aluminium or zinc. A sacrificial anode makes the protected metal structure cathodic by supplying current. The advantages of aluminium are corrosion resistance, non-toxicity, and ease of forming; zinc is generally used as a coating on steel to prevent corrosion. This research analyzed the effect of zinc content on the cell potential and efficiency of aluminium sacrificial anodes produced by sand casting, in a 0.2 M sulphuric acid environment. The anodes were fabricated by alloying aluminium and zinc in compositions of pure Al, Al-3Zn, Al-6Zn, and Al-9Zn using an open-die sand casting process, and were installed on ASTM A36 steel. The results showed that increasing the zinc content increases the cell potential, protection efficiency, and anode efficiency for the steel plate. Cell potential and weight loss measurements showed that adding zinc shifts the cell potential to more positive values, protecting the ASTM A36 steel more effectively; the protection and anodic efficiencies of the Al-9Zn sacrificial anode were better than those of pure Al, with the highest protection efficiency obtained for the Al-9Zn alloy.
Yue, Hai-Yuan; Bieberich, Erhard; Xu, Jianhua
2017-08-01
At rat calyx of Held terminals, ATP was required not only for slow endocytosis, but also for the rapid phase of compensatory endocytosis. An ATP-independent form of endocytosis was recruited to accelerate membrane retrieval at increased activity and temperature. ATP-independent endocytosis primarily involved retrieval of pre-existing membrane, which depended on Ca2+ and the activity of neutral sphingomyelinase but not clathrin-coated pit maturation. ATP-independent endocytosis represents a non-canonical mechanism that can efficiently retrieve membrane under physiological conditions without competing for the limited ATP at elevated neuronal activity. Neurotransmission relies on membrane endocytosis to maintain vesicle supply and membrane stability. Endocytosis has been generally recognized as a major ATP-dependent function, which efficiently retrieves more membrane at elevated neuronal activity when ATP consumption within nerve terminals increases drastically. This paradox raises the interesting question of whether increased activity recruits ATP-independent mechanism(s) to accelerate endocytosis at the same time as preserving ATP availability for other tasks. To address this issue, we studied the ATP requirement in three typical forms of endocytosis at rat calyx of Held terminals by whole-cell membrane capacitance measurements. At room temperature, blocking ATP hydrolysis effectively abolished slow endocytosis and rapid endocytosis but only partially inhibited excess endocytosis following intense stimulation. The ATP-independent endocytosis occurred at calyces from postnatal days 8-15, suggesting its existence before and after hearing onset. This endocytosis was not affected by a reduction of exocytosis using the light chain of botulinum toxin C, nor by block of clathrin-coat maturation. It was abolished by EGTA, which preferentially blocked endocytosis of retrievable membrane pre-existing at the surface, and was impaired by oxidation of cholesterol and inhibition of neutral sphingomyelinase. ATP-independent endocytosis became more significant at 34-35°C, and recovered membrane by an amount that, on average, was close to exocytosis. The results of the present study suggest that activity and temperature recruit ATP-independent endocytosis of pre-existing membrane (in addition to ATP-dependent endocytosis) to efficiently retrieve membrane at nerve terminals. This less understood endocytosis represents a non-canonical mechanism regulated by lipids such as cholesterol and sphingomyelinase. © 2017 The Authors. The Journal of Physiology © 2017 The Physiological Society.
The number processing and calculation system: evidence from cognitive neuropsychology.
Salguero-Alcañiz, M P; Alameda-Bailén, J R
2015-04-01
Cognitive neuropsychology focuses on the concepts of dissociation and double dissociation. The performance of number processing and calculation tasks by patients with acquired brain injury can be used to characterise the way in which the healthy cognitive system manipulates number symbols and quantities. The objective of this study is to determine the components of the numerical processing and calculation system. Participants consisted of 6 patients with acquired brain injuries in different cerebral localisations. We used Batería de evaluación del procesamiento numérico y el cálculo, a battery assessing number processing and calculation. Data was analysed using the difference in proportions test. Quantitative numerical knowledge is independent from number transcoding, qualitative numerical knowledge, and calculation. Recodification is independent from qualitative numerical knowledge and calculation. Quantitative numerical knowledge and calculation are also independent functions. The number processing and calculation system comprises at least 4 components that operate independently: quantitative numerical knowledge, number transcoding, qualitative numerical knowledge, and calculation. Therefore, each one may be damaged selectively without affecting the functioning of another. According to the main models of number processing and calculation, each component has different characteristics and cerebral localisations. Copyright © 2013 Sociedad Española de Neurología. Published by Elsevier Espana. All rights reserved.
Solar cell efficiency tables (version 50)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, Martin A.; Hishikawa, Yoshihiro; Warta, Wilhelm
Consolidated tables showing an extensive listing of the highest independently confirmed efficiencies for solar cells and modules are presented. Guidelines for inclusion of results into these tables are outlined, and new entries since January 2017 are reviewed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Le; Timbie, Peter T.; Bunn, Emory F.
In this paper, we present a new Bayesian semi-blind approach for foreground removal in observations of the 21 cm signal measured by interferometers. The technique, which we call H i Expectation–Maximization Independent Component Analysis (HIEMICA), is an extension of the Independent Component Analysis technique developed for two-dimensional (2D) cosmic microwave background maps to three-dimensional (3D) 21 cm cosmological signals measured by interferometers. This technique provides a fully Bayesian inference of power spectra and maps and separates the foregrounds from the signal based on the diversity of their power spectra. Relying only on the statistical independence of the components, this approach can jointly estimate the 3D power spectrum of the 21 cm signal, as well as the 2D angular power spectrum and the frequency dependence of each foreground component, without any prior assumptions about the foregrounds. This approach has been tested extensively by applying it to mock data from interferometric 21 cm intensity mapping observations under idealized assumptions of instrumental effects. We also discuss the impact when the noise properties are not known completely. As a first step toward solving the 21 cm power spectrum analysis problem, we compare the semi-blind HIEMICA technique to the commonly used Principal Component Analysis. Under the same idealized circumstances, the proposed technique provides significantly improved recovery of the power spectrum. This technique can be applied in a straightforward manner to all 21 cm interferometric observations, including epoch of reionization measurements, and can be extended to single-dish observations as well.
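For contrast with the semi-blind method, the Principal Component Analysis baseline mentioned in the abstract can be sketched in a few lines: spectrally smooth foregrounds dominate the leading frequency-frequency eigenmodes, which are projected out of each line of sight. The toy sky model, band, and number of removed modes below are invented.

```python
# PCA-style foreground removal sketch (not HIEMICA): subtract the leading
# eigenmodes of the frequency-frequency covariance from each line of sight.
# The sky model and settings are illustrative only.
import numpy as np

rng = np.random.default_rng(7)
n_freq, n_pix = 64, 2000
nu = np.linspace(100.0, 200.0, n_freq)                       # MHz, assumed band

foreground = 1e3 * (nu / 150.0) ** -2.7                      # smooth power law in frequency
foreground = foreground[:, None] * (1 + 0.3 * rng.normal(size=(1, n_pix)))
signal = 1e-1 * rng.normal(size=(n_freq, n_pix))             # toy 21 cm fluctuations
data = foreground + signal

# Remove the strongest eigenmodes of the frequency-frequency covariance.
cov = np.cov(data)
eigval, eigvec = np.linalg.eigh(cov)
modes = eigvec[:, -3:]                                       # 3 strongest modes
cleaned = data - modes @ (modes.T @ data)

print("residual rms / signal rms: %.2f" % (cleaned.std() / signal.std()))
```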
Hiles, Sarah A; Révész, Dóra; Lamers, Femke; Giltay, Erik; Penninx, Brenda W J H
2016-08-01
Metabolic syndrome components (waist circumference, high-density lipoprotein cholesterol (HDL-C), triglycerides, systolic blood pressure and fasting glucose) are cross-sectionally associated with depression and anxiety with differing strength. Few studies examine the relationships over time or whether antidepressants have independent effects. Participants were from the Netherlands Study of Depression and Anxiety (NESDA; N = 2,776; 18-65 years; 66% female). At baseline, 2- and 6-year follow-up, participants completed diagnostic interviews, depression and anxiety symptom inventories, antidepressant use assessment, and measurements of the five metabolic syndrome components. Data were analyzed for the consistency of associations between psychopathology indicators and metabolic syndrome components across the three assessment waves, and whether psychopathology or antidepressant use at one assessment predicts metabolic dysregulation at the next and vice versa. Consistently across waves, psychopathology was associated with generally poorer values of metabolic syndrome components, particularly waist circumference and triglycerides. Stronger associations were observed for psychopathology symptom severity than diagnosis. Antidepressant use was independently associated with higher waist circumference, triglycerides and number of metabolic syndrome abnormalities, and lower HDL-C. Symptom severity and antidepressant use were associated with subsequently increased number of abnormalities, waist circumference, and glucose after 2 but not 4 years. Conversely, there was little evidence that metabolic syndrome components were associated with subsequent psychopathology outcomes. Symptom severity and antidepressant use were independently associated with metabolic dysregulation consistently over time and also had negative consequences for short-term metabolic health. This is of concern given the chronicity of depression and anxiety and prevalence of antidepressant treatment. © 2016 The Authors. Depression and Anxiety published by Wiley Periodicals, Inc.
Efficiency of parallel direct optimization
NASA Technical Reports Server (NTRS)
Janies, D. A.; Wheeler, W. C.
2001-01-01
Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.
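The scaling quantities discussed here reduce to simple arithmetic: speedup is serial time divided by parallel time, and parallel efficiency is speedup per processor. The snippet below illustrates the calculation with made-up timings, not the POY benchmark numbers.

```python
# Speedup and parallel efficiency from hypothetical run times (seconds).
timings = {1: 3600.0, 10: 410.0, 16: 265.0, 32: 150.0, 256: 45.0}   # invented values
t_serial = timings[1]
for p, t in sorted(timings.items()):
    speedup = t_serial / t
    efficiency = speedup / p
    print("p=%3d  speedup=%6.1f  efficiency=%.2f" % (p, speedup, efficiency))
```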
NASA Astrophysics Data System (ADS)
Golub, M. A.; Sisakyan, I. N.; Soĭfer, V. A.; Uvarov, G. V.
1989-04-01
Theoretical and experimental investigations are reported of new mode optical components (elements) which are analogs of sinusoidal phase diffraction gratings with a variable modulation depth. Expressions are derived for the nonlinear predistortion and depth of modulation, which are essential for effective operation of amplitude and phase mode optical components in devices used for analysis and formation of the transverse mode composition of coherent radiation. An estimate is obtained of the energy efficiency of phase and amplitude mode optical components, and a comparison is made with the results of an experimental investigation of a set of phase optical components matched to Gauss-Laguerre modes. It is shown that the improvement in the energy efficiency of phase mode components, compared with amplitude components, is the same as that achieved using a phase diffraction grating compared with an amplitude grating of the same modulation depth.
Comparing Server Energy Use and Efficiency Using Small Sample Sizes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coles, Henry C.; Qin, Yong; Price, Phillip N.
This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications; three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the other brands. The test results show that the power consumption variability caused by the key components as a group is similar to all other components as a group. However, some differences were observed. The Supermicro server used 27 percent more power at idle compared to the other brands. The Intel server had a power supply control feature called cold redundancy, and the data suggest that cold redundancy can provide energy savings at low power levels. Test and evaluation methods that might be used by others having limited resources for IT equipment evaluation are explained in the report.
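The report's efficiency metric, average compute rate divided by average power, is easy to reproduce; the snippet below uses invented benchmark scores and power readings purely as placeholders.

```python
# Efficiency = average compute rate / average power draw.
# The scores and wattages below are placeholders, not measurements from the report.
servers = {
    "server_A": {"ops_per_s": 1.20e6, "avg_watts": 310.0},
    "server_B": {"ops_per_s": 1.18e6, "avg_watts": 295.0},
    "server_C": {"ops_per_s": 1.22e6, "avg_watts": 330.0},
}
for name, m in servers.items():
    print("%s: %.0f ops per joule" % (name, m["ops_per_s"] / m["avg_watts"]))
```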
Yan, Erjia; Williams, Jake; Chen, Zheng
2017-01-01
Publication metadata help deliver rich analyses of scholarly communication. However, research concepts and ideas are more effectively expressed through unstructured fields such as full texts. Thus, the goals of this paper are to employ a full-text enabled method to extract terms relevant to disciplinary vocabularies, and through them, to understand the relationships between disciplines. This paper uses an efficient, domain-independent term extraction method to extract disciplinary vocabularies from a large multidisciplinary corpus of PLoS ONE publications. It finds a power-law pattern in the frequency distributions of terms present in each discipline, indicating a semantic richness potentially sufficient for further study and advanced analysis. The salient relationships amongst these vocabularies become apparent in application of a principal component analysis. For example, Mathematics and Computer and Information Sciences were found to have similar vocabulary use patterns along with Engineering and Physics; while Chemistry and the Social Sciences were found to exhibit contrasting vocabulary use patterns along with the Earth Sciences and Chemistry. These results have implications to studies of scholarly communication as scholars attempt to identify the epistemological cultures of disciplines, and as a full text-based methodology could lead to machine learning applications in the automated classification of scholarly work according to disciplinary vocabularies.
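Two of the analyses described, the power-law (Zipf-like) shape of per-discipline term frequencies and the PCA comparison of disciplinary vocabularies, can be sketched on a toy corpus as follows; the terms, counts, and discipline pairings are generated, not drawn from PLoS ONE.

```python
# Toy sketch: Zipf-like term frequencies per discipline, a log-log slope check,
# and PCA of normalized discipline-term profiles. All counts are simulated.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
n_disciplines, n_terms = 6, 2000

# Zipf-like counts, with pairs of disciplines sharing the same term ranking.
base_ranks = [rng.permutation(n_terms) for _ in range(3)]
counts = np.zeros((n_disciplines, n_terms))
for d in range(n_disciplines):
    ranks = base_ranks[d // 2]                                # disciplines 0-1, 2-3, 4-5 paired
    counts[d] = 1000.0 / (ranks + 1) ** 1.1
    counts[d] *= rng.uniform(0.8, 1.2, size=n_terms)

# Power-law check: slope of log(frequency) vs log(rank) for one discipline.
freq = np.sort(counts[0])[::-1]
slope = np.polyfit(np.log(np.arange(1, n_terms + 1)), np.log(freq), 1)[0]
print("fitted Zipf slope: %.2f" % slope)

# PCA of normalized discipline-term vectors; paired disciplines land close together.
profiles = counts / counts.sum(axis=1, keepdims=True)
coords = PCA(n_components=2).fit_transform(profiles - profiles.mean(axis=0))
print(np.round(coords, 3))
```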
Kaji, Tomohiro; Ishige, Akiko; Hikida, Masaki; Taka, Junko; Hijikata, Atsushi; Kubo, Masato; Nagashima, Takeshi; Takahashi, Yoshimasa; Kurosaki, Tomohiro; Okada, Mariko; Ohara, Osamu
2012-01-01
One component of memory in the antibody system is long-lived memory B cells selected for the expression of somatically mutated, high-affinity antibodies in the T cell–dependent germinal center (GC) reaction. A puzzling observation has been that the memory B cell compartment also contains cells expressing unmutated, low-affinity antibodies. Using conditional Bcl6 ablation, we demonstrate that these cells are generated through proliferative expansion early after immunization in a T cell–dependent but GC-independent manner. They soon become resting and long-lived and display a novel distinct gene expression signature which distinguishes memory B cells from other classes of B cells. GC-independent memory B cells are later joined by somatically mutated GC descendants at roughly equal proportions and these two types of memory cells efficiently generate adoptive secondary antibody responses. Deletion of T follicular helper (Tfh) cells significantly reduces the generation of mutated, but not unmutated, memory cells early on in the response. Thus, B cell memory is generated along two fundamentally distinct cellular differentiation pathways. One pathway is dedicated to the generation of high-affinity somatic antibody mutants, whereas the other preserves germ line antibody specificities and may prepare the organism for rapid responses to antigenic variants of the invading pathogen. PMID:23027924
Application of the Statistical ICA Technique in the DANCE Data Analysis
NASA Astrophysics Data System (ADS)
Baramsai, Bayarbadrakh; Jandel, M.; Bredeweg, T. A.; Rusev, G.; Walker, C. L.; Couture, A.; Mosby, S.; Ullmann, J. L.; Dance Collaboration
2015-10-01
The Detector for Advanced Neutron Capture Experiments (DANCE) at the Los Alamos Neutron Science Center is used to improve our understanding of the neutron capture reaction. DANCE is a highly efficient 4π γ-ray detector array consisting of 160 BaF2 crystals, which makes it an ideal tool for neutron capture experiments. The (n, γ) reaction Q-value equals the sum energy of all γ-rays emitted in the de-excitation cascades from the excited capture state to the ground state. The total γ-ray energy is used to identify reactions on different isotopes as well as the background. However, it is challenging to identify contributions to the Esum spectra from different isotopes with similar Q-values. Recently we have tested the applicability of modern statistical methods such as Independent Component Analysis (ICA) to identify and separate the (n, γ) reaction yields on the different isotopes present in the target material. ICA is a recently developed computational tool for separating multidimensional data into statistically independent additive subcomponents. In this conference talk, we present some results of the application of ICA algorithms and their modifications to the DANCE experimental data analysis. This research is supported by the U. S. Department of Energy, Office of Science, Nuclear Physics under the Early Career Award No. LANL20135009.
Energy efficient engine component development and integration program
NASA Technical Reports Server (NTRS)
1980-01-01
The design of an energy efficient commercial turbofan engine is examined with emphasis on lower fuel consumption and operating costs. Propulsion system performance, emission standards, and noise reduction are also investigated. A detailed design analysis of the engine/aircraft configuration, engine components, and core engine is presented along with an evaluation of the technology and testing involved.
ERIC Educational Resources Information Center
You, Di; Bebeau, Muriel J.
2013-01-01
Rest's hypothesis that the components of morality (i.e., sensitivity, reasoning, motivation, and implementation) are distinct from one another was tested using evidence from a dental ethics curriculum that uses well-validated measures of each component. Archival data from five cohorts ("n" = 385) included the following: (1) transcribed…
Using a Modular Construction Kit for the Realization of an Interactive Computer Graphics Course.
ERIC Educational Resources Information Center
Klein, Reinhard; Hanisch, Frank
Recently, platform independent software components, like JavaBeans, have appeared that allow writing reusable components and composing them in a visual builder tool into new applications. This paper describes the use of such models to transform an existing course into a modular construction kit consisting of components of teaching text and program…
A practically unconditionally gradient stable scheme for the N-component Cahn-Hilliard system
NASA Astrophysics Data System (ADS)
Lee, Hyun Geun; Choi, Jeong-Whan; Kim, Junseok
2012-02-01
We present a practically unconditionally gradient stable conservative nonlinear numerical scheme for the N-component Cahn-Hilliard system modeling the phase separation of an N-component mixture. The scheme is based on a nonlinear splitting method and is solved by an efficient and accurate nonlinear multigrid method. The scheme allows us to convert the N-component Cahn-Hilliard system into a system of N-1 binary Cahn-Hilliard equations and significantly reduces the required computer memory and CPU time. We observe that our numerical solutions are consistent with the linear stability analysis results. We also demonstrate the efficiency of the proposed scheme with various numerical experiments.
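The paper's own solver is a nonlinear multigrid method for the full N-component system; as a much simpler illustration of the kind of stable time stepping such phase-field models need, the sketch below advances the binary (two-component) 1D Cahn-Hilliard equation with a semi-implicit Fourier-spectral scheme. Grid size, interface parameter, and time step are arbitrary choices, not the paper's settings.

```python
# Semi-implicit Fourier-spectral step for the binary 1D Cahn-Hilliard equation
# u_t = (u^3 - u - eps^2 u_xx)_xx. Not the paper's multigrid scheme; a simple
# illustration with invented parameters.
import numpy as np

n, L = 256, 2.0 * np.pi
eps, dt, steps = 0.05, 1e-3, 2000
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
k2, k4 = k ** 2, k ** 4

rng = np.random.default_rng(9)
u = 0.05 * rng.normal(size=n)                  # small perturbation of a uniform mixture

for _ in range(steps):
    # Nonlinear term treated explicitly, stiff biharmonic term implicitly.
    nonlinear_hat = np.fft.fft(u ** 3 - u)
    u_hat = (np.fft.fft(u) - dt * k2 * nonlinear_hat) / (1.0 + dt * eps ** 2 * k4)
    u = np.real(np.fft.ifft(u_hat))

# The k = 0 mode is untouched by the update, so the mean (total mass) is conserved.
print("mean u: %+.2e, range: [%.2f, %.2f]" % (u.mean(), u.min(), u.max()))
```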
Qiu, Hongfang; Hu, Cuihua; Anderson, James; Björk, Glenn R.; Sarkar, Srimonti; Hopper, Anita K.; Hinnebusch, Alan G.
2000-01-01
Induction of GCN4 translation in amino acid-starved cells involves the inhibition of initiator tRNAMet binding to eukaryotic translation initiation factor 2 (eIF2) in response to eIF2 phosphorylation by protein kinase GCN2. It was shown previously that GCN4 translation could be induced independently of GCN2 by overexpressing a mutant tRNAAACVal (tRNAVal*) or the RNA component of RNase MRP encoded by NME1. Here we show that overexpression of the tRNA pseudouridine 55 synthase encoded by PUS4 also leads to translational derepression of GCN4 (Gcd− phenotype) independently of eIF2 phosphorylation. Surprisingly, the Gcd− phenotype of high-copy-number PUS4 (hcPUS4) did not require PUS4 enzymatic activity, and several lines of evidence indicate that PUS4 overexpression did not diminish functional initiator tRNAMet levels. The presence of hcPUS4 or hcNME1 led to the accumulation of certain tRNA precursors, and their Gcd− phenotypes were reversed by overexpressing the RNA component of RNase P (RPR1), responsible for 5′-end processing of all tRNAs. Consistently, overexpression of a mutant pre-tRNATyr that cannot be processed by RNase P had a Gcd− phenotype. Interestingly, the Gcd− phenotype of hcPUS4 also was reversed by overexpressing LOS1, required for efficient nuclear export of tRNA, and los1Δ cells have a Gcd− phenotype. Overproduced PUS4 appears to impede 5′-end processing or export of certain tRNAs in the nucleus in a manner remedied by increased expression of RNase P or LOS1, respectively. The mutant tRNAVal* showed nuclear accumulation in otherwise wild-type cells, suggesting a defect in export to the cytoplasm. We propose that yeast contains a nuclear surveillance system that perceives defects in processing or export of tRNA and evokes a reduction in translation initiation at the step of initiator tRNAMet binding to the ribosome. PMID:10713174
Nian, Rui; Liu, Fang; He, Bo
2013-01-01
Underwater vision is one of the dominant senses and has shown great prospects in ocean investigations. In this paper, a hierarchical Independent Component Analysis (ICA) framework has been established to explore and understand the functional roles of the higher order statistical structures towards the visual stimulus in the underwater artificial vision system. The model is inspired by characteristics such as the modality, the redundancy reduction, the sparseness and the independence in the early human vision system, which seems to respectively capture the Gabor-like basis functions, the shape contours or the complicated textures in the multiple layer implementations. The simulation results have shown good performance in the effectiveness and the consistence of the approach proposed for the underwater images collected by autonomous underwater vehicles (AUVs). PMID:23863855
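As a rough illustration of how ICA recovers Gabor-like basis functions from image patches (a toy sketch, not the hierarchical multi-layer framework of the paper; the random array stands in for a real AUV frame):

    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.feature_extraction.image import extract_patches_2d

    rng = np.random.default_rng(0)
    image = rng.random((128, 128))             # placeholder for a real underwater image

    patches = extract_patches_2d(image, (16, 16), max_patches=5000, random_state=0)
    X = patches.reshape(len(patches), -1).astype(float)
    X -= X.mean(axis=0)                        # remove the mean (redundancy reduction)

    ica = FastICA(n_components=64, random_state=0, max_iter=500)
    ica.fit(X)                                 # learns sparse, statistically independent components
    basis = ica.mixing_.T.reshape(-1, 16, 16)  # on natural images these are typically Gabor-like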
Bailey, Craig S; Denham, Susanne A; Curby, Timothy W; Bassett, Hideko H
2016-03-01
Preschool teachers, like parents, support children in ways that promote the regulation capacities that drive school adjustment, especially for children struggling to succeed in the classroom. The purpose of this study was to explore the emotionally and organizationally supportive classroom processes that contribute to the development of children's emotion regulation and executive control. Emotion regulation and executive control were assessed in 312 3-, 4- and 5-year-old children. The 44 teachers of these children completed questionnaires asking about 3 components of children's school adjustment: Positive/Engaged, Independent/Motivated, and Prosocial/Connected. Observations of classroom emotional and organizational supports were conducted. Results of multilevel models indicated emotion regulation was significantly associated with the Positive/Engaged school adjustment component, but only when teachers' emotional and organizational supports were taken into account. Children with lower levels of emotion regulation, who were also in less supportive classrooms, had the lowest scores on the Positive/Engaged component. Children's executive control was associated with the Independent/Motivated and Prosocial/Connected components independently of teacher effects. In general, moderate support was found for the notion that teachers' supports can be particularly helpful for children struggling to regulate their emotions to be better adjusted to school. Children's emotionally salient classroom behaviors, and teachers' emotion scaffolding, are discussed. (c) 2016 APA, all rights reserved.
Grouping individual independent BOLD effects: a new way to ICA group analysis
NASA Astrophysics Data System (ADS)
Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott
2009-04-01
A new group analysis method based on independent component analysis (ICA) for summarizing task-related BOLD responses is presented. In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or the spatial domain and applies a single ICA decomposition to the combined data to extract the task-related BOLD effects, the method presented here applies ICA to each subject's fMRI data individually to find that subject's independent BOLD effects. The task-related independent BOLD component is then selected from the single-subject decompositions and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one does not need to assume that the task-related BOLD time courses are identical across brain areas and subjects, as is implicit in a grand ICA decomposition of spatially concatenated fMRI data. Nor does one need to assume that, after spatial normalization, voxels at the same coordinates represent exactly the same functional or structural brain anatomy across different subjects. Both assumptions have become problematic given recent BOLD activation evidence. Further, because the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in the task-related BOLD effects, unlike the gICA approach, in which the task-related BOLD effects can only be accounted for by a single unified BOLD model across subjects. As a result, the proposed ICAga method better fits the task-related BOLD effects at the individual level and thus allows more appropriate multisubject BOLD effects to be grouped in the group analysis.
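A minimal sketch of the per-subject decomposition and component selection step described above (variable names are hypothetical; sklearn's FastICA stands in for whichever ICA algorithm is actually used, and the grouping is reduced to stacking the selected spatial maps):

    import numpy as np
    from sklearn.decomposition import FastICA

    def icaga_maps(subject_data, task_regressor, n_components=20):
        """subject_data: list of (time, voxels) arrays; task_regressor: (time,) reference."""
        maps = []
        for X in subject_data:
            ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
            S = ica.fit_transform(X)                        # temporal sources, shape (time, k)
            r = [abs(np.corrcoef(S[:, k], task_regressor)[0, 1]) for k in range(n_components)]
            best = int(np.argmax(r))                        # task-related component for this subject
            maps.append(ica.mixing_[:, best])               # its spatial map, shape (voxels,)
        return np.vstack(maps)                              # subjects x voxels, for group inference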
Induction and expression of GluA1 (GluR-A)-independent LTP in the hippocampus
Romberg, Carola; Raffel, Joel; Martin, Lucy; Sprengel, Rolf; Seeburg, Peter H; Rawlins, J Nicholas P; Bannerman, David M; Paulsen, Ole
2009-01-01
Long-term potentiation (LTP) at hippocampal CA3–CA1 synapses is thought to be mediated, at least in part, by an increase in the postsynaptic surface expression of α-amino-3-hydroxy-5-methyl-4-isoxazole proprionic acid (AMPA) receptors induced by N-methyl-d-aspartate (NMDA) receptor activation. While this process was originally attributed to the regulated synaptic insertion of GluA1 (GluR-A) subunit-containing AMPA receptors, recent evidence suggests that regulated synaptic trafficking of GluA2 subunits might also contribute to one or several phases of potentiation. However, it has so far been difficult to separate these two mechanisms experimentally. Here we used genetically modified mice lacking the GluA1 subunit (Gria1−/− mice) to investigate GluA1-independent mechanisms of LTP at CA3–CA1 synapses in transverse hippocampal slices. An extracellular, paired theta-burst stimulation paradigm induced a robust GluA1-independent form of LTP lacking the early, rapidly decaying component characteristic of LTP in wild-type mice. This GluA1-independent form of LTP was attenuated by inhibitors of neuronal nitric oxide synthase and protein kinase C (PKC), two enzymes known to regulate GluA2 surface expression. Furthermore, the induction of GluA1-independent potentiation required the activation of GluN2B (NR2B) subunit-containing NMDA receptors. Our findings support and extend the evidence that LTP at hippocampal CA3–CA1 synapses comprises a rapidly decaying, GluA1-dependent component and a more sustained, GluA1-independent component, induced and expressed via a separate mechanism involving GluN2B-containing NMDA receptors, neuronal nitric oxide synthase and PKC. PMID:19302150
Functional Connectivity Parcellation of the Human Thalamus by Independent Component Analysis.
Zhang, Sheng; Li, Chiang-Shan R
2017-11-01
As a key structure to relay and integrate information, the thalamus supports multiple cognitive and affective functions through the connectivity between its subnuclei and cortical and subcortical regions. Although extant studies have largely described thalamic regional functions in anatomical terms, evidence accumulates to suggest a more complex picture of subareal activities and connectivities of the thalamus. In this study, we aimed to parcellate the thalamus and examine whole-brain connectivity of its functional clusters. With resting state functional magnetic resonance imaging data from 96 adults, we used independent component analysis (ICA) to parcellate the thalamus into 10 components. On the basis of the independence assumption, ICA helps to identify how subclusters overlap spatially. Whole brain functional connectivity of each subdivision was computed for independent component's time course (ICtc), which is a unique time series to represent an IC. For comparison, we computed seed-region-based functional connectivity using the averaged time course across all voxels within a thalamic subdivision. The results showed that, at p < 10 -6 , corrected, 49% of voxels on average overlapped among subdivisions. Compared with seed-region analysis, ICtc analysis revealed patterns of connectivity that were more distinguished between thalamic clusters. ICtc analysis demonstrated thalamic connectivity to the primary motor cortex, which has eluded the analysis as well as previous studies based on averaged time series, and clarified thalamic connectivity to the hippocampus, caudate nucleus, and precuneus. The new findings elucidate functional organization of the thalamus and suggest that ICA clustering in combination with ICtc rather than seed-region analysis better distinguishes whole-brain connectivities among functional clusters of a brain region.
Krohn, M.D.; Milton, N.M.; Segal, D.; Enland, A.
1981-01-01
A principal component image enhancement has been effective in applying Landsat data to geologic mapping in a heavily forested area of E Virginia. The image enhancement procedure consists of a principal component transformation, a histogram normalization, and the inverse principal component transformation. The enhancement preserves the independence of the principal components, yet produces a more readily interpretable image than does a single principal component transformation. -from Authors
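The three-step procedure reads as follows in a compact sketch (a simple per-component standardization stands in here for the histogram normalization step; img is a hypothetical rows x cols x bands Landsat array):

    import numpy as np

    def pc_enhance(img):
        r, c, b = img.shape
        X = img.reshape(-1, b).astype(float)
        mean = X.mean(axis=0)
        cov = np.cov(X - mean, rowvar=False)
        evals, evecs = np.linalg.eigh(cov)
        E = evecs[:, np.argsort(evals)[::-1]]              # forward principal component transform
        pcs = (X - mean) @ E
        pcs = (pcs - pcs.mean(axis=0)) / pcs.std(axis=0)   # equalize the spread of each component
        return (pcs @ E.T + mean).reshape(r, c, b)         # inverse transform back to band space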
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richardson, Gregory D; Goodall, John R; Steed, Chad A
In developing visualizations for different data sets, the end solution often becomes dependent on the data being visualized, which forces engineers to re-develop many common components multiple times. The vis-react components library was designed to help create visualizations that are independent of the underlying data. The library utilizes the React.js pattern of instantiating components that may be re-used, and it exposes an example application that shows other developers how to use the components in the library.
Kaczmarski, Krzysztof; Poe, Donald P; Guiochon, Georges
2010-10-15
When chromatography is carried out with high-density carbon dioxide as the main component of the mobile phase (a method generally known as "supercritical fluid chromatography" or SFC), the required pressure gradient along the column is moderate. However, this mobile phase is highly compressible and, under certain experimental conditions, its density may decrease significantly along the column. Such an expansion absorbs heat, cooling the column, which absorbs heat from the outside. The resulting heat transfer causes the formation of axial and radial gradients of temperature that may become large under certain conditions. Due to these gradients, the mobile phase velocity and most physico-chemical parameters of the system (viscosity, diffusion coefficients, etc.) are no longer constant throughout the column, resulting in a loss of column efficiency, even at low flow rates. At high flow rates and in serious cases, systematic variations of the retention factors and the separation factors with increasing flow rates and important deformations of the elution profiles of all sample components may occur. The model previously used to account satisfactorily for the effects of the viscous friction heating of the mobile phase in HPLC is adapted here to account for the expansion cooling of the mobile phase in SFC and is applied to the modeling of the elution peak profiles of an unretained compound in SFC. The numerical solution of the combined heat and mass balance equations provides temperature and pressure profiles inside the column, and values of the retention time and efficiency for elution of this unretained compound that are in excellent agreement with independent experimental data. Copyright © 2010 Elsevier B.V. All rights reserved.
2011-01-01
Background: The process of HIV-1 genomic RNA (gRNA) encapsidation is governed by a number of viral encoded components, most notably the Gag protein and gRNA cis elements in the canonical packaging signal (ψ). Also implicated in encapsidation are cis determinants in the R, U5, and PBS (primer binding site) from the 5' untranslated region (UTR). Although conventionally associated with nuclear export of HIV-1 RNA, there is a burgeoning role for the Rev/RRE in the encapsidation process. Pleiotropic effects exhibited by these cis and trans viral components may confound the ability to examine their independent, and combined, impact on encapsidation of RNA into HIV-1 viral particles in their innate viral context. We systematically reconstructed the HIV-1 packaging system in the context of a heterologous murine leukemia virus (MLV) vector RNA to elucidate a mechanism in which the Rev/RRE system is central to achieving efficient and specific encapsidation into HIV-1 viral particles. Results: We show for the first time that the Rev/RRE system can augment RNA encapsidation independent of all cis elements from the 5' UTR (R, U5, PBS, and ψ). Incorporation of all the 5' UTR cis elements did not enhance RNA encapsidation in the absence of the Rev/RRE system. In fact, we demonstrate that the Rev/RRE system is required for specific and efficient encapsidation commonly associated with the canonical packaging signal. The mechanism of Rev/RRE-mediated encapsidation is not a general phenomenon, since the combination of the Rev/RRE system and 5' UTR cis elements did not enhance encapsidation into MLV-derived viral particles. Lastly, we show that heterologous MLV RNAs conform to transduction properties commonly associated with HIV-1 viral particles, including in vivo transduction of non-dividing cells (i.e. mouse neurons); however, the cDNA forms are episomes predominantly in the 1-LTR circle form. Conclusions: Premised on encapsidation of a heterologous RNA into HIV-1 viral particles, our findings define a functional HIV-1 packaging system as comprising the 5' UTR cis elements, Gag, and the Rev/RRE system, in which the Rev/RRE system is required to make the RNA amenable to the ensuing interaction between Gag and the canonical packaging signal for subsequent encapsidation. PMID:21702950
Ren, Anna N; Neher, Robert E; Bell, Tyler; Grimm, James
2018-06-01
Preoperative planning is important to achieve successful implantation in primary total knee arthroplasty (TKA). However, traditional TKA templating techniques are not accurate enough to predict the component size to a very close range. With the goal of developing a general predictive statistical model using patient demographic information, ordinal logistic regression was applied to build a proportional odds model to predict the tibia component size. The study retrospectively collected the data of 1992 primary Persona Knee System TKA procedures. Of them, 199 procedures were randomly selected as testing data and the rest of the data were randomly partitioned between model training data and model evaluation data with a ratio of 7:3. Different models were trained and evaluated on the training and validation data sets after data exploration. The final model had patient gender, age, weight, and height as independent variables and predicted the tibia size within 1 size difference 96% of the time on the validation data, 94% of the time on the testing data, and 92% on a prospective cadaver data set. The study results indicated the statistical model built by ordinal logistic regression can increase the accuracy of tibia sizing information for Persona Knee preoperative templating. This research shows statistical modeling may be used with radiographs to dramatically enhance the templating accuracy, efficiency, and quality. In general, this methodology can be applied to other TKA products when the data are applicable. Copyright © 2018 Elsevier Inc. All rights reserved.
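A sketch of a proportional-odds fit of the kind described, using statsmodels' OrderedModel (the CSV file and column names are hypothetical, and gender is assumed to be numerically encoded; this is not the study's actual code):

    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    df = pd.read_csv("tka_cases.csv")                        # hypothetical data set
    y = df["tibia_size"].astype(pd.CategoricalDtype(ordered=True))   # ordered implant sizes
    X = df[["gender", "age", "weight", "height"]]            # gender pre-coded as 0/1

    model = OrderedModel(y, X, distr="logit")                # proportional-odds (ordinal logistic) model
    res = model.fit(method="bfgs", disp=False)

    probs = res.predict(X)                                   # per-size probabilities for each case
    predicted_size_index = np.asarray(probs).argmax(axis=1)  # most likely size for templating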
Polarization-independent silicon metadevices for efficient optical wavefront control
Chong, Katie E.; Staude, Isabelle; James, Anthony Randolph; ...
2015-07-20
In this study, we experimentally demonstrate a functional silicon metadevice at telecom wavelengths that can efficiently control the wavefront of optical beams by imprinting a spatially varying transmittance phase independent of the polarization of the incident beam. Near-unity transmittance efficiency and close to 0–2π phase coverage are enabled by utilizing the localized electric and magnetic Mie-type resonances of low-loss silicon nanoparticles tailored to behave as electromagnetically dual-symmetric scatterers. We apply this concept to realize a metadevice that converts a Gaussian beam into a vortex beam. The required spatial distribution of transmittance phases is achieved by a variation of the lattice spacing as a single geometric control parameter.
Solar cell efficiency tables (version 49)
Green, Martin A.; Emery, Keith; Hishikawa, Yoshihiro; ...
2016-11-28
Consolidated tables showing an extensive listing of the highest independently confirmed efficiencies for solar cells and modules are presented. Here, guidelines for inclusion of results into these tables are outlined, and new entries since June 2016 are reviewed.
NASA Astrophysics Data System (ADS)
Liu, X.; Beroza, G. C.; Nakata, N.
2017-12-01
Cross-correlation of fully diffuse wavefields provides Green's function between receivers, although the ambient noise field in the real world contains both diffuse and non-diffuse fields. The non-diffuse field potentially degrades the correlation functions. We attempt to blindly separate the diffuse and the non-diffuse components from cross-correlations of ambient seismic noise and analyze the potential bias caused by the non-diffuse components. We compute the 9-component noise cross-correlations for 17 stations in southern California. For the Rayleigh wave components, we assume that the cross-correlation of multiply scattered waves (diffuse component) is independent from the cross-correlation of ocean microseismic quasi-point source responses (non-diffuse component), and the cross-correlation function of ambient seismic data is the sum of both components. Thus we can blindly separate the non-diffuse component due to physical point sources and the more diffuse component due to cross-correlation of multiply scattered noise based on their statistical independence. We also perform beamforming over different frequency bands for the cross-correlations before and after the separation, and we find that the decomposed Rayleigh wave represents more coherent features among all Rayleigh wave polarization cross-correlation components. We show that after separating the non-diffuse component, the Frequency-Time Analysis results are less ambiguous. In addition, we estimate the bias in phase velocity on the raw cross-correlation data due to the non-diffuse component. We also apply this technique to a few borehole stations in Groningen, the Netherlands, to demonstrate its applicability in different instrument/geology settings.
Efficiency analysis for 3D filtering of multichannel images
NASA Astrophysics Data System (ADS)
Kozhemiakin, Ruslan A.; Rubel, Oleksii; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2016-10-01
Modern remote sensing systems typically acquire multichannel images (dual- or multi-polarization, multi- and hyperspectral) in which noise, usually with different characteristics, is present in all components. If the noise is intense, it is desirable to remove (suppress) it before applying methods of image classification, interpretation, and information extraction. This can be done using one of two approaches: component-wise or vectorial (3D) filtering. The second approach has proven more efficient when there is substantial correlation between the image components, as often happens for multichannel remote sensing data of different origins. Within the class of 3D filtering techniques, there are many possibilities and variations. In this paper, we consider filtering based on the discrete cosine transform (DCT) and pay attention to two aspects of processing. First, we study in detail what changes in DCT coefficient statistics take place for 3D denoising compared to component-wise processing. Second, we analyze how the selection of component images combined into a 3D data array influences filtering efficiency, and whether the observed tendencies can be exploited in processing images with a rather large number of channels.
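A minimal sketch of a 3D DCT-based filter of this kind (hard thresholding on a single rows x cols x channels block; practical filters process small overlapping blocks and estimate the noise level per channel, neither of which is shown here):

    import numpy as np
    from scipy.fft import dctn, idctn

    def dct3_denoise(block, sigma, beta=2.7):
        """block: (rows, cols, channels) array; sigma: assumed-known noise standard deviation."""
        C = dctn(block.astype(float), norm="ortho")
        C[np.abs(C) < beta * sigma] = 0.0          # suppress coefficients dominated by noise
        return idctn(C, norm="ortho")              # inverse 3D DCT back to the image domain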
Separating spatial search and efficiency rates as components of predation risk
DeCesare, Nicholas J.
2012-01-01
Predation risk is an important driver of ecosystems, and local spatial variation in risk can have population-level consequences by affecting multiple components of the predation process. I use resource selection and proportional hazard time-to-event modelling to assess the spatial drivers of two key components of risk—the search rate (i.e. aggregative response) and predation efficiency rate (i.e. functional response)—imposed by wolves (Canis lupus) in a multi-prey system. In my study area, both components of risk increased according to topographic variation, but anthropogenic features affected only the search rate. Predicted models of the cumulative hazard, or risk of a kill, underlying wolf search paths validated well with broad-scale variation in kill rates, suggesting that spatial hazard models provide a means of scaling up from local heterogeneity in predation risk to population-level dynamics in predator–prey systems. Additionally, I estimated an integrated model of relative spatial predation risk as the product of the search and efficiency rates, combining the distinct contributions of spatial heterogeneity to each component of risk. PMID:22977145
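In symbols, the integrated model amounts to a product of the two spatial components (written here under a log-linear assumption for both terms, as an illustration rather than a quotation of the fitted models):

    w(\mathbf{z}) \propto \exp\!\bigl(\boldsymbol{\beta}^{\top}\mathbf{z}\bigr) \quad \text{(search rate, from resource selection)},
    h(t \mid \mathbf{z}) = h_0(t)\,\exp\!\bigl(\boldsymbol{\gamma}^{\top}\mathbf{z}\bigr) \quad \text{(kill hazard while searching, i.e. efficiency)},
    \text{relative risk}(\mathbf{z}) \;\propto\; \exp\!\bigl(\boldsymbol{\beta}^{\top}\mathbf{z}\bigr)\,\exp\!\bigl(\boldsymbol{\gamma}^{\top}\mathbf{z}\bigr),

so a covariate such as topography can raise risk through both terms, while a covariate that only shifts where wolves search (here, anthropogenic features) enters only through the first.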
Optimal Design of Magnetic Components in Plasma Cutting Power Supply
NASA Astrophysics Data System (ADS)
Jiang, J. F.; Zhu, B. R.; Zhao, W. N.; Yang, X. J.; Tang, H. J.
2017-10-01
A phase-shifted transformer and a DC reactor are usually needed in a chopper plasma cutting power supply. Because of the high power rating, the loss in these magnetic components may reach several kilowatts, which seriously affects the conversion efficiency. It is therefore necessary to design low-loss magnetic components by means of efficient magnetic materials and optimal design methods. The main tasks in this paper are to compare the core loss of different magnetic materials, to analyze the influence of transformer structure, winding arrangement and wire structure on the characteristics of the magnetic components, and then to select suitable magnetic materials, structures and wires in order to reduce the loss and volume of the magnetic components. On this basis, optimization design processes for the transformer and the DC reactor in the chopper plasma cutting power supply are proposed, yielding a set of candidate solutions. These solutions are analyzed and compared before the optimal solution is determined, so as to reduce the volume and power loss of the two magnetic components and improve the conversion efficiency of the plasma cutting power supply.
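A common starting point for comparing candidate core materials and windings (an assumption for illustration, not a formula quoted from the paper) is the Steinmetz core-loss model plus the winding (copper) loss:

    P_{\text{core}} \approx k\, f^{\alpha}\, \hat{B}^{\beta}\, V_{\text{core}}, \qquad
    P_{\text{cu}} \approx \sum_{w} I_{w,\text{rms}}^{2}\, R_{w,\text{ac}}, \qquad
    P_{\text{loss}} = P_{\text{core}} + P_{\text{cu}},

where k, \alpha, \beta are material constants, f is the switching frequency, \hat{B} the peak flux density, and R_{\text{ac}} the frequency-dependent winding resistance (skin and proximity effects), which is what the winding arrangement and wire structure mainly influence.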
LeVan, P; Urrestarazu, E; Gotman, J
2006-04-01
To devise an automated system to remove artifacts from ictal scalp EEG, using independent component analysis (ICA). A Bayesian classifier was used to determine the probability that 2s epochs of seizure segments decomposed by ICA represented EEG activity, as opposed to artifact. The classifier was trained using numerous statistical, spectral, and spatial features. The system's performance was then assessed using separate validation data. The classifier identified epochs representing EEG activity in the validation dataset with a sensitivity of 82.4% and a specificity of 83.3%. An ICA component was considered to represent EEG activity if the sum of the probabilities that its epochs represented EEG exceeded a threshold predetermined using the training data. Otherwise, the component represented artifact. Using this threshold on the validation set, the identification of EEG components was performed with a sensitivity of 87.6% and a specificity of 70.2%. Most misclassified components were a mixture of EEG and artifactual activity. The automated system successfully rejected a good proportion of artifactual components extracted by ICA, while preserving almost all EEG components. The misclassification rate was comparable to the variability observed in human classification. Current ICA methods of artifact removal require a tedious visual classification of the components. The proposed system automates this process and removes simultaneously multiple types of artifacts.
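A toy sketch of the classification step described above (the features here are illustrative stand-ins for the study's statistical, spectral, and spatial features, and clf is assumed to be a pre-trained probabilistic classifier such as sklearn's GaussianNB with class 1 meaning "EEG"):

    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.decomposition import FastICA

    def keep_eeg_components(segment, fs, clf, prob_threshold):
        """segment: (channels, samples) ictal EEG; returns indices of components kept as EEG."""
        ica = FastICA(random_state=0, max_iter=1000)
        sources = ica.fit_transform(segment.T).T            # (components, samples)
        win = int(2 * fs)                                    # 2 s epochs
        kept = []
        for k, s in enumerate(sources):
            epochs = [s[i:i + win] for i in range(0, len(s) - win + 1, win)]
            feats = np.array([[e.var(), kurtosis(e),
                               np.abs(np.fft.rfft(e))[1:20].mean()] for e in epochs])  # toy features
            p_eeg = clf.predict_proba(feats)[:, 1]           # P(epoch reflects EEG activity)
            if p_eeg.sum() > prob_threshold:                 # summed probability vs. threshold
                kept.append(k)                               # otherwise the component is artifact
        return kept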
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue
We present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from the time dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into a product eigenvalue problem that is self-adjoint with respect to a K-inner product. This product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. However, the other component of the eigenvector can be easily recovered in a postprocessing procedure. Therefore, the algorithms we present here are more efficient than existing algorithms that try to approximate both components of the eigenvectors simultaneously. The efficiency of the new algorithms is demonstrated by numerical examples.
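The structure being exploited can be sketched in the usual textbook form of the linear-response problem (assuming A + B and A - B are symmetric positive definite; the paper's precise K-inner product construction is not reproduced):

    \begin{pmatrix} A & B \\ -B & -A \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}
    = \lambda \begin{pmatrix} x \\ y \end{pmatrix}
    \;\Longrightarrow\;
    (A - B)(A + B)\,u = \lambda^{2} u, \qquad u = x + y,

and the product operator (A - B)(A + B) is self-adjoint with respect to the inner product weighted by K = A + B, so a Davidson or LOBPCG iteration can be run in that inner product; the other eigenvector component follows in post-processing from x - y = (A + B)u / \lambda.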
Independent Orbiter Assessment (IOA): Analysis of the mechanical actuation subsystem
NASA Technical Reports Server (NTRS)
Bacher, J. L.; Montgomery, A. D.; Bradway, M. W.; Slaughter, W. T.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Mechanical Actuation System (MAS) hardware. Specifically, the MAS hardware consists of the following components: Air Data Probe (ADP); Elevon Seal Panel (ESP); External Tank Umbilical (ETU); Ku-Band Deploy (KBD); Payload Bay Doors (PBD); Payload Bay Radiators (PBR); Personnel Hatches (PH); Vent Door Mechanism (VDM); and Startracker Door Mechanism (SDM). The IOA analysis process utilized available MAS hardware drawings and schematics for defining hardware assemblies, components, and hardware items. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode.
Corman, Gregory Scot; Dean, Anthony John; Tognarelli, Leonardo; Pecchioli, Mario
2005-06-28
A structure for attaching together or sealing a space between a first component and a second component that have different rates or amounts of dimensional change upon being exposed to temperatures other than ambient temperature. The structure comprises a first attachment structure associated with the first component that slidably engages a second attachment structure associated with the second component, thereby allowing for an independent floating movement of the second component relative to the first component. The structure can comprise split rings, laminar rings, or multiple split rings.
Motor efficiency: compare apples to apples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keinz, J.R.
1982-08-01
The efficiency differences between electric motors are now a significant cost consideration for many companies, but evaluating motor efficiency is not as straightforward as it should be. The buyer must look beyond the manufacturer's designated efficiency, which is too generalized, and the results of independent tests, which vary because of the difficulty in establishing standard conditions. Manufacturers may be following established testing procedures, but not labeling in accordance with the standards. Manufacturers should also supply efficiency versus load-curve data. (DCK)
InP/Ga0.47In0.53As monolithic, two-junction, three-terminal tandem solar cells
NASA Technical Reports Server (NTRS)
Wanlaas, M. W.; Gessert, T. A.; Horner, G. S.; Emery, K. A.; Coutts, T. J.
1991-01-01
The work presented has focussed on increasing the efficiency of InP-based solar cells through the development of a high-performance InP/Ga(0.47)In(0.53)As two-junction, three-terminal monolithic tandem cell. Such a tandem is particularly suited to space applications where a radiation-hard top cell (i.e., InP) is required. Furthermore, the InP/Ga(0.47)In(0.53)As materials system is lattice matched and offers a top cell/bottom cell bandgap differential (0.60 eV at 300 K) suitable for high tandem cell efficiencies under AMO illumination. A three-terminal configuration was chosen since it allows for independent power collection from each subcell in the monolithic stack, thus minimizing the adverse impact of radiation damage on the overall tandem efficiency. Realistic computer modeling calculations predict an efficiency boost of 7 to 11 percent from the Ga(0.47)In(0.53)As bottom cell under AMO illumination (25 C) for concentration ratios in the 1 to 1000 range. Thus, practical AMO efficiencies of 25 to 32 percent appear possible with the InP/Ga(0.47)In(0.53)As tandem cell. Prototype n/p/n InP/Ga(0.47)In(0.53)As monolithic tandem cells were fabricated and tested successfully. Using an aperture to define the illuminated areas, efficiency measurements performed on a non-optimized device under standard global illumination conditions (25 C) with no antireflection coating (ARC) give 12.2 percent for the InP top cell and 3.2 percent for the Ga(0.47)In(0.53)As bottom cell, yielding an overall tandem efficiency of 15.4 percent. With an ARC, the tandem efficiency could reach approximately 22 percent global and approximately 20 percent AMO. Additional details regarding the performance of individual InP and Ga(0.47)In(0.53)As component cells, fabrication and operation of complete tandem cells and methods for improving the tandem cell performance, are also discussed.
An Efficient Implementation For Real Time Applications Of The Wigner-Ville Distribution
NASA Astrophysics Data System (ADS)
Boashash, Boualem; Black, Peter; Whitehouse, Harper J.
1986-03-01
The Wigner-Ville Distribution (WVD) is a valuable tool for time-frequency signal analysis. In order to implement the WVD in real time, an efficient algorithm and architecture have been developed which may be implemented with commercial components. The algorithm successively computes the analytic signal corresponding to the input signal, forms a weighted kernel function, and analyses the kernel via a Discrete Fourier Transform (DFT). To evaluate the analytic signal required by the algorithm, it is shown that the time-domain definition implemented as a finite impulse response (FIR) filter is practical and more efficient than the frequency-domain definition of the analytic signal. The windowed resolution of the WVD in the frequency domain is shown to be similar to the resolution of a windowed Fourier Transform. A real-time signal processor has been designed for evaluation of the WVD analysis system. The system is easily paralleled and can be configured to meet a variety of frequency and time resolutions. The arithmetic unit is based on a pair of high-speed VLSI floating-point multiplier and adder chips. Dual operand buses and an independent result bus maximize data transfer rates. The system is horizontally microprogrammed and utilizes a full instruction pipeline. Each microinstruction specifies two operand addresses, a result location, the type of arithmetic, and the memory configuration. Input and output are via shared memory blocks, with front-end processors handling data transfers during the non-access periods of the analyzer.
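A compact discrete sketch of the computation chain described (analytic signal, lag kernel, DFT per time sample); note that the paper advocates an FIR time-domain analytic-signal filter, whereas scipy's hilbert uses the frequency-domain method for brevity:

    import numpy as np
    from scipy.signal import hilbert

    def wigner_ville(x):
        z = hilbert(np.asarray(x, dtype=float))        # analytic signal of the input
        N = len(z)
        W = np.zeros((N, N))
        for n in range(N):
            m = min(n, N - 1 - n)
            tau = np.arange(-m, m + 1)
            kernel = np.zeros(N, dtype=complex)
            kernel[tau % N] = z[n + tau] * np.conj(z[n - tau])   # weighted lag kernel
            W[:, n] = np.fft.fft(kernel).real          # frequency slice at time index n
        return W                                       # rows: frequency bins, columns: time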
On the feasibility of a fiber-based inertial fusion laser driver
NASA Astrophysics Data System (ADS)
Labaune, C.; Hulin, D.; Galvanauskas, A.; Mourou, G. A.
2008-08-01
One critical issue for the realization of Inertial Fusion Energy (IFE) power plants is the driver efficiency. High driver efficiency greatly relaxes the driver energy required to produce a fusion gain, resulting in more compact and less costly facilities. Among lasers, systems based on guided waves, such as diode-pumped Yb:glass fiber amplifiers with a demonstrated overall efficiency close to 70% (as opposed to a few percent for systems based on free propagation), offer some intriguing opportunities. Guided optics has the enormous advantage of benefiting directly from the telecommunications industry, where components are cheap, rugged, well tested, environmentally stable, have lifetimes measured in tens of years, and are compatible with mass manufacturing. In this paper, we study the possibility of designing a laser driver based solely on guided-wave optics. We call this concept FAN, for Fiber Amplification Network. It represents a profound departure from previously proposed laser drivers, all of which are based on free-propagation optics. The system would use a large number of identical fibers to combine the long (ns) and short (ps) pulses needed for the fast ignition scheme. Technical details are discussed concerning fiber type, pumping, phasing, pulse shaping and timing, as well as fiber distribution around the chamber. The proposed fiber driver provides maximum and independent control over the wavefront, pulse duration, pulse shape and timing, making it possible to reach the highest gain, and mass manufacturing should make for a cheaper facility with easy upkeep.
Xu, Yin; Xiao, Jinbiao
2016-01-01
On-chip polarization manipulation is pivotal for the silicon-on-insulator material platform to realize polarization-transparent circuits and polarization-division-multiplexing transmissions, where polarization splitters and rotators are fundamental components. In this work, we propose an ultracompact and highly efficient silicon-based polarization splitter-rotator (PSR) using a partially-etched subwavelength grating (SWG) coupler. The proposed PSR consists of a taper-integrated SWG coupler combined with a partially-etched waveguide between the input and output strip waveguides, so that the input transverse-electric (TE) mode couples and converts to the output transverse-magnetic (TM) mode at the cross port, while the input TM mode remains well confined in the strip waveguide during propagation and exits directly from the bar port with nearly negligible coupling. Moreover, to better separate the input polarizations, an additional tapered waveguide extended from the partially-etched waveguide is added. The resulting PSR is only 8.2 μm in length, the shortest reported so far. The polarization conversion loss and efficiency are 0.12 dB and 98.52%, respectively, with crosstalk and reflection loss of −31.41/−22.43 dB and −34.74/−33.13 dB for the input TE/TM mode at a wavelength of 1.55 μm. These attributes make the present device suitable for constructing compact, polarization-independent on-chip photonic integrated circuits. PMID:27306112
Laner-Plamberger, Sandra; Lener, Thomas; Schmid, Doris; Streif, Doris A; Salzer, Tina; Öller, Michaela; Hauser-Kronberger, Cornelia; Fischer, Thorsten; Jacobs, Volker R; Schallmoser, Katharina; Gimona, Mario; Rohde, Eva
2015-11-10
Pooled human platelet lysate (pHPL) is an efficient alternative to xenogenic supplements for ex vivo expansion of mesenchymal stem cells (MSCs) in clinical studies. Currently, porcine heparin is used in pHPL-supplemented medium to prevent clotting due to plasmatic coagulation factors. We therefore searched for an efficient and reproducible medium preparation method that avoids clot formation while omitting animal-derived heparin. We established a protocol to deplete fibrinogen by clotting of pHPL in medium, subsequent mechanical hydrogel disruption and removal of the fibrin pellet. After primary culture, bone marrow- and umbilical cord-derived MSCs were tested for surface markers by flow cytometry and for trilineage differentiation capacity. Proliferation and clonogenicity were analyzed for three passages. The proposed clotting procedure reduced fibrinogen more than 1000-fold, while a volume recovery of 99.5% was obtained. All MSC types were propagated in standard and fibrinogen-depleted medium. Flow cytometric phenotype profiles and adipogenic, osteogenic and chondrogenic differentiation potential in vitro were independent of MSC source or medium type. Enhanced proliferation of MSCs was observed in the absence of fibrinogen but in the presence of heparin, compared to standard medium. Interestingly, this proliferative response to heparin was not detected after an initial contact with fibrinogen during the isolation procedure. Here, we present an efficient, reproducible and economical method in compliance with good manufacturing practice for the preparation of MSC media that avoids xenogenic components and is suitable for clinical studies.
Multimodal neural correlates of cognitive control in the Human Connectome Project.
Lerman-Sinkoff, Dov B; Sui, Jing; Rachakonda, Srinivas; Kandala, Sridhar; Calhoun, Vince D; Barch, Deanna M
2017-12-01
Cognitive control is a construct that refers to the set of functions that enable decision-making and task performance through the representation of task states, goals, and rules. The neural correlates of cognitive control have been studied in humans using a wide variety of neuroimaging modalities, including structural MRI, resting-state fMRI, and task-based fMRI. The results from each of these modalities independently have implicated the involvement of a number of brain regions in cognitive control, including dorsal prefrontal cortex, and frontal parietal and cingulo-opercular brain networks. However, it is not clear how the results from a single modality relate to results in other modalities. Recent developments in multimodal image analysis methods provide an avenue for answering such questions and could yield more integrated models of the neural correlates of cognitive control. In this study, we used multiset canonical correlation analysis with joint independent component analysis (mCCA + jICA) to identify multimodal patterns of variation related to cognitive control. We used two independent cohorts of participants from the Human Connectome Project, each of which had data from four imaging modalities. We replicated the findings from the first cohort in the second cohort using both independent and predictive analyses. The independent analyses identified a component in each cohort that was highly similar to the other and significantly correlated with cognitive control performance. The replication by prediction analyses identified two independent components that were significantly correlated with cognitive control performance in the first cohort and significantly predictive of performance in the second cohort. These components identified positive relationships across the modalities in neural regions related to both dynamic and stable aspects of task control, including regions in both the frontal-parietal and cingulo-opercular networks, as well as regions hypothesized to be modulated by cognitive control signaling, such as visual cortex. Taken together, these results illustrate the potential utility of multi-modal analyses in identifying the neural correlates of cognitive control across different indicators of brain structure and function. Copyright © 2017 Elsevier Inc. All rights reserved.
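As a loose two-modality stand-in for the mCCA + jICA idea (not the implementation used in the study; sklearn's CCA and FastICA are assumed, and X1/X2 are hypothetical subjects x features matrices from two imaging modalities):

    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.decomposition import FastICA

    def mcca_jica(X1, X2, n_comp=10):
        cca = CCA(n_components=n_comp).fit(X1, X2)
        U, V = cca.transform(X1, X2)             # linked subject-level canonical variates
        joint = np.hstack([U, V])                # concatenate across modalities
        ica = FastICA(n_components=n_comp, random_state=0)
        loadings = ica.fit_transform(joint)      # joint independent components (subject loadings)
        return loadings                          # e.g. correlate with cognitive control performance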
A reference architecture for the component factory
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Caldiera, Gianluigi; Cantone, Giovanni
1992-01-01
Software reuse can be achieved through an organization that focuses on utilization of life cycle products from previous developments. The component factory is both an example of the more general concepts of the experience factory and the domain factory, and an organizational unit worth considering in its own right. The critical features of such an organization are flexibility and continuous improvement. In order to achieve these features, we can represent the architecture of the factory at different levels of abstraction and define a reference architecture from which specific architectures can be derived by instantiation. A reference architecture is an implementation- and organization-independent representation of the component factory and its environment. The paper outlines this reference architecture, discusses the instantiation process, and presents some examples of specific architectures by comparing them in the framework of the reference model.
Three state-of-the-art individual electric and hybrid vehicle test reports, volume 2
NASA Technical Reports Server (NTRS)
1978-01-01
Procedures used in determining the energy efficiency and economy of a gasoline-electric hybrid taxi, an electric passenger car, and an electric van are described. Tabular and graphic data show results of driving cycle and constant speed tests, energy distribution to various components, efficiency of the components, and, for the hybrid vehicle, the emissions.
Hu, Shan-Zhou; Chen, Fen-Fei; Zeng, Li-Bo; Wu, Qiong-Shui
2013-01-01
The imaging AOTF is an important optical filter component for the new spectral imaging instruments developed in recent years. The principle of the imaging AOTF component is described, and a set of testing methods for key performance characteristics is presented, including diffraction efficiency, wavelength shift with temperature, spatial homogeneity of the diffraction efficiency, and image shift.
Jarusiewicz, Jamie; Choe, Yvonne; Yoo, Kyung Soo; Park, Chan Pil
2009-01-01
A simple and efficient one-pot, three-component method has been developed for the synthesis of α-aminonitriles. This Strecker reaction is applicable to aldehydes and ketones with aliphatic or aromatic amines and trimethylsilyl cyanide in the presence of a palladium Lewis acid catalyst in dichloromethane solvent at room temperature. PMID:19265413
Inorganic Biominerals in Crustaceans are Structurally Independent of Organic Framework
NASA Astrophysics Data System (ADS)
Mergelsberg, S. T.; Michel, F. M.; Mukhopadhyay, B.; Dove, P. M.
2015-12-01
Biomineralization of calcium carbonate (CaCO3) as crystalline calcite or amorphous CaCO3 (ACC) occurs in the exoskeletons of all crustaceans. These cuticles are complex composites of inorganic mineral and organic macromolecules with highly divergent morphologies that are adapted to the extreme variations in environmental pressures within their diverse ecological niches. The remarkable variations and adaptations that form imply a highly efficient and regulated mechanism for biomineralization, most likely orchestrated by a myriad of biomacromolecules (Ziegler A 2012). The roles of these peptides and organic metabolites during CaCO3 biomineralization are not well understood. In part, this is due to a lack of knowledge of crustacean homeostasis. In a step toward understanding cuticle mineralization in crustaceans, this study asks: Which molecules affect biomineralization? Do the biomineral-active molecules vary greatly between species and body parts? Recent studies of polysaccharide controls on mineralization also raise the question of whether small heterogeneities in chitin, the most abundant biopolymer of the composite, could be primarily responsible for differences in CaCO3 crystallinity. This study used a novel spectroscopic approach to characterize the mineral and organic components of exoskeletons from three Malacostraca organisms: American Lobster (Homarus americanus), Dungeness Crab (Metacarcinus magister), and Red Rock Crab (Cancer productus). Using high-energy x-ray diffraction and Raman spectroscopy, the cuticles of three major body parts from these organisms were analyzed for the structure and bulk chemistry of their chitin and CaCO3 components. The findings indicate that Raman spectroscopy provides adequate resolution to show that the crystallinity of chitin and that of the CaCO3 mineral component are chemically independent of each other, although their crystallinities co-vary for Brachyura species (Dungeness and Red Rock Crabs). Insights from this study suggest chitin provides the structural framework of the cuticle, without direct chemical control on the biomineralization of CaCO3. Peptides and small metabolites are likely to be more directly involved in the mechanisms controlling polymerization, crystallization, and composition of both chitin and CaCO3.
Resurrecting ancestral genes in bacteria to interpret ancient biosignatures
NASA Astrophysics Data System (ADS)
Kacar, Betul; Guy, Lionel; Smith, Eric; Baross, John
2017-11-01
Two datasets, the geologic record and the genetic content of extant organisms, provide complementary insights into the history of how key molecular components have shaped or driven global environmental and macroevolutionary trends. Changes in global physico-chemical modes over time are thought to be a consistent feature of this relationship between Earth and life, as life is thought to have been optimizing protein functions for the entirety of its approximately 3.8 billion years of history on the Earth. Organismal survival depends on how well critical genetic and metabolic components can adapt to their environments, reflecting an ability to optimize efficiently to changing conditions. The geologic record provides an array of biologically independent indicators of macroscale atmospheric and oceanic composition, but provides little in the way of the exact behaviour of the molecular components that influenced the compositions of these reservoirs. By reconstructing sequences of proteins that might have been present in ancient organisms, we can downselect to a subset of possible sequences that may have been optimized to these ancient environmental conditions. How can one use modern life to reconstruct ancestral behaviours? Configurations of ancient sequences can be inferred from the diversity of extant sequences, and then resurrected in the laboratory to ascertain their biochemical attributes. One way to augment sequence-based, single-gene methods to obtain a richer and more reliable picture of the deep past, is to resurrect inferred ancestral protein sequences in living organisms, where their phenotypes can be exposed in a complex molecular-systems context, and then to link consequences of those phenotypes to biosignatures that were preserved in the independent historical repository of the geological record. As a first step beyond single-molecule reconstruction to the study of functional molecular systems, we present here the ancestral sequence reconstruction of the beta-carbonic anhydrase protein. We assess how carbonic anhydrase proteins meet our selection criteria for reconstructing ancient biosignatures in the laboratory, which we term palaeophenotype reconstruction. This article is part of the themed issue 'Reconceptualizing the origins of life'.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phadke, Amol; Shah, Nihar; Abhyankar, Nikit
Improving the efficiency of air conditioners (ACs) typically involves improving the efficiency of various components such as compressors, heat exchangers, expansion valves, refrigerant, and fans. We estimate the incremental cost of improving the efficiency of room ACs based on the cost of improving the efficiency of their key components. Further, we estimate the retail price increase required to cover the cost of efficiency improvement, compare it with electricity bill savings, and calculate the payback period for consumers to recover the additional price of a more efficient AC. The finding that significant efficiency improvement is cost effective from a consumer perspective is robust over a wide range of assumptions. If we assume a 50% higher incremental price compared to our baseline estimate, the payback period for the efficiency level of 3.5 ISEER is 1.1 years. Given the findings of this study, establishing more stringent minimum efficiency performance criteria (one-star level) should be evaluated rigorously, considering the significant benefits to consumers, energy security, and the environment.
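The payback arithmetic itself reduces to one line; the numbers below are arbitrary placeholders for illustration, not figures from the report:

    incremental_price = 4500.0        # extra retail price of the higher-efficiency unit
    annual_kwh_saved = 1200.0         # electricity saved per year vs. the baseline unit
    tariff = 7.0                      # electricity price per kWh
    payback_years = incremental_price / (annual_kwh_saved * tariff)
    print(f"simple payback: {payback_years:.1f} years")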
Welsh, Timothy N
2009-12-01
The "Simon effect" describes a pattern of reaction times (RTs) where responses to symbolic information are shorter when the information is presented on the same side of space as the desired response than when it is on the opposite side of space. For example, if right hand responses are required for green targets and left hand responses for red targets, RTs with the right hand are shorter when the green target appears on the right side than on the left side. It has been reported that Simon effects also appear when two individuals perform independent components of a Simon effect task. It has been suggested that such joint Simon effects occur because participants represent the action of their partner. It is unclear, however, if the joint Simon effect emerges because: (1) each partner represents the other's action; (2) each partner is using the other person or their response as an environmental reference; or (3) an intra-hemispheric processing advantage due to the lateralized cerebral organization of perceptual and motor systems. The present study distinguished between these possibilities by asking pairs of participants to perform in conditions in which they crossed their arms into the other person's space. Consistent with within-person Simon effects, joint Simon effects were observed in uncrossed- and crossed limb conditions. These results support a response co-representation explanation of joint Simon effects. It is suggested that the processes underlying the evoked representations have developed to allow two independent agents to form temporary synergies to facilitate efficient task completion. 2009 Elsevier B.V. All rights reserved.
Bhattarai, Hitesh; Gupta, Richa
2014-01-01
Nonhomologous end joining (NHEJ) is a recently described bacterial DNA double-strand break (DSB) repair pathway that has been best characterized for mycobacteria. NHEJ can religate transformed linear plasmids, repair ionizing radiation (IR)-induced DSBs in nonreplicating cells, and seal I-SceI-induced chromosomal DSBs. The core components of the mycobacterial NHEJ machinery are the DNA end binding protein Ku and the polyfunctional DNA ligase LigD. LigD has three autonomous enzymatic modules: ATP-dependent DNA ligase (LIG), DNA/RNA polymerase (POL), and 3′ phosphoesterase (PE). Although genetic ablation of ku or ligD abolishes NHEJ and sensitizes nonreplicating cells to ionizing radiation, selective ablation of the ligase activity of LigD in vivo only mildly impairs NHEJ of linearized plasmids, indicating that an additional DNA ligase can support NHEJ. Additionally, the in vivo role of the POL and PE domains in NHEJ is unclear. Here we define a LigD ligase-independent NHEJ pathway in Mycobacterium smegmatis that requires the ATP-dependent DNA ligase LigC1 and the POL domain of LigD. Mycobacterium tuberculosis LigC can also support this backup NHEJ pathway. We also demonstrate that, although dispensable for efficient plasmid NHEJ, the activities of the POL and PE domains are required for repair of IR-induced DSBs in nonreplicating cells. These findings define the genetic requirements for a LigD-independent NHEJ pathway in mycobacteria and demonstrate that all enzymatic functions of the LigD protein participate in NHEJ in vivo. PMID:24957619
The UMO (University of Maine, Orono) Teacher Training Program: A Case Study and a Model.
ERIC Educational Resources Information Center
Miller, James R.; McNally, Harry
This case study presents a model of the University of Maine, Orono, pre-service program for preparing secondary social studies teachers. Focus is on the Foundations Component and the Methods Component, either of which can function independently of the other. Only brief mention is made of either the Exploratory Field Experience Component or the…
Horowitz-Kraus, Tzipi; DiFrancesco, Mark; Kay, Benjamin; Wang, Yingying; Holland, Scott K.
2015-01-01
The Reading Acceleration Program, a computerized reading-training program, increases activation in neural circuits related to reading. We examined the effect of the training on the functional connectivity between independent components related to visual processing, executive functions, attention, memory, and language during rest after the training. Children 8–12 years old with reading difficulties and typical readers participated in the study. Behavioral testing and functional magnetic resonance imaging were performed before and after the training. Imaging data were analyzed using an independent component analysis approach. After training, both reading groups showed increased single-word contextual reading and reading comprehension scores. Greater positive correlations between the visual-processing component and the executive functions, attention, memory, or language components were found after training in children with reading difficulties. Training-related increases in connectivity between the visual and attention components and between the visual and executive function components were positively correlated with increased word reading and reading comprehension, respectively. Our findings suggest that the effect of the Reading Acceleration Program on basic cognitive domains can be detected even in the absence of an ongoing reading task. PMID:26199874
Mina, Faten; Attina, Virginie; Duroc, Yvan; Veuillet, Evelyne; Truy, Eric; Thai-Van, Hung
2017-01-01
Auditory steady state responses (ASSRs) in cochlear implant (CI) patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework’s simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA) algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications. PMID:28350887
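A highly simplified stand-in for the evaluation idea (no neural mass model or realistic head model here; a sinusoidal "ASSR", a pulsatile artifact, and noise are mixed onto 32 channels and then unmixed with sklearn's FastICA, which plays the role of one of the tested algorithms):

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    fs = 1000
    t = np.arange(0, 2, 1 / fs)
    assr = np.sin(2 * np.pi * 40 * t)                       # phase-locked response stand-in
    artifact = (np.arange(t.size) % 11 == 0).astype(float)  # pulsatile stimulation artifact stand-in
    S = np.column_stack([assr, artifact, rng.normal(size=t.size)])
    A = rng.normal(size=(32, 3))                            # random mixing onto 32 scalp channels
    X = S @ A.T

    ica = FastICA(n_components=3, random_state=0)
    est = ica.fit_transform(X)                              # recovered sources (order/scale arbitrary)
    k = np.argmax([abs(np.corrcoef(est[:, i], artifact)[0, 1]) for i in range(3)])
    mixing = ica.mixing_.copy()
    mixing[:, k] = 0                                        # zero out the artifact component
    X_denoised = est @ mixing.T + ica.mean_                 # back-project to the scalp channels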