Science.gov

Sample records for analytic receiver analysis

  1. Multimedia Analysis plus Visual Analytics = Multimedia Analytics

    SciTech Connect

    Chinchor, Nancy; Thomas, James J.; Wong, Pak C.; Christel, Michael; Ribarsky, Martin W.

    2010-10-01

    Multimedia analysis has focused on images, video, and to some extent audio and has made progress in single channels excluding text. Visual analytics has focused on the user interaction with data during the analytic process plus the fundamental mathematics and has continued to treat text as did its precursor, information visualization. The general problem we address in this tutorial is the combining of multimedia analysis and visual analytics to deal with multimedia information gathered from different sources, with different goals or objectives, and containing all media types and combinations in common usage.

  2. Exploratory Analysis in Learning Analytics

    ERIC Educational Resources Information Center

    Gibson, David; de Freitas, Sara

    2016-01-01

    This article summarizes the methods, observations, challenges and implications for exploratory analysis drawn from two learning analytics research projects. The cases include an analysis of a games-based virtual performance assessment and an analysis of data from 52,000 students over a 5-year period at a large Australian university. The complex…

  3. Heat-Energy Analysis for Solar Receivers

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.

    1982-01-01

    Heat-energy analysis program (HEAP) solves general heat-transfer problems, with some specific features that are "custom made" for analyzing solar receivers. Can be utilized not only to predict receiver performance under varying solar flux, ambient temperature and local heat-transfer rates but also to detect locations of hotspots and metallurgical difficulties and to predict performance sensitivity of neighboring component parameters.

  4. Costas loop analysis for coherent optical receivers

    NASA Astrophysics Data System (ADS)

    Hodgkinson, T. G.

    1986-03-01

    A homodyne Costas loop receiver is analyzed taking both shot and laser phase noise sources into account. The receiver performance is compared with that of a heterodyne receiver using an electrical Costas loop and that of a coherent receiver using a pilot carrier phase-locked loop. It is shown that, to avoid large performance penalties, beat-linewidth-to-bit-rate ratios smaller than 0.05 percent and 0.5 percent are needed for PSK homodyne and heterodyne systems, respectively.
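
    The linewidth requirements quoted above translate directly into an allowed beat linewidth for a given bit rate (allowed linewidth = ratio × bit rate). A quick arithmetic sketch, with an illustrative 1 Gbit/s bit rate that is not taken from the paper:

    ```python
    # Worked example of the linewidth-to-bit-rate ratios quoted in the abstract.
    # The 1 Gbit/s bit rate is illustrative only.
    bit_rate = 1e9  # bit/s
    for system, ratio in [("PSK homodyne", 0.05e-2), ("PSK heterodyne", 0.5e-2)]:
        max_linewidth_mhz = ratio * bit_rate / 1e6
        print(f"{system}: beat linewidth < {max_linewidth_mhz:.1f} MHz at {bit_rate / 1e9:.0f} Gbit/s")
    ```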

  5. Phenol Analysis -- Some Analytical Considerations

    NASA Technical Reports Server (NTRS)

    Starkey, R. J., Jr.

    1971-01-01

    Contamination of potable water supplies with halogenated phenols in concentrations of 2-10 parts per billion (ppb) produces objectionable tastes and odors capable of influencing consumer acceptability. Routine analysis by the distillation/4-aminoantipyrine method is limited by lack of sensitivity and subject to interference by aryl amines. This has been overcome by developing a continuous liquid-liquid extraction system to selectively isolate phenols and eliminate major interfering substances. Stable reagents have been formulated to reduce blank color and extend sensitivity. Equipment suitable for analysis of phenols at the 1 ppb level or less in 20 minutes is described.

  6. ANALYTICAL RESULTS FOR MOX COLEMANITE SAMPLES RECEIVED ON JULY 22, 2013

    SciTech Connect

    Reigel, M.; Best, D.

    2014-05-19

    The Mixed Oxide Fuel Fabrication Facility (MFFF) will use colemanite bearing concrete neutron absorber panels credited with attenuating neutron flux in the criticality design analyses and shielding operators from radiation. The Savannah River National Laboratory (SRNL) is tasked with measuring the boron oxide content of the colemanite raw aggregate material prior to it being mixed into the concrete. SRNL received ten samples of colemanite for analysis on July 22, 2013. The elemental boron content of each sample was measured according to ASTM C 1301. The boron oxide content was calculated using the oxide conversion factor for boron.
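
    The "oxide conversion factor" mentioned above follows from the molar masses of boron and B2O3. A minimal sketch of that arithmetic (the measured boron content below is a made-up illustration, not a value from the SRNL report):

    ```python
    # Converting measured elemental boron content to boron oxide (B2O3) content
    # using the molar-mass ratio. The wt% input is hypothetical.
    M_B = 10.811                     # g/mol, boron
    M_O = 15.999                     # g/mol, oxygen
    M_B2O3 = 2 * M_B + 3 * M_O       # ~69.62 g/mol

    conversion_factor = M_B2O3 / (2 * M_B)   # ~3.22 g B2O3 per g B

    boron_wt_pct = 15.0              # hypothetical measured elemental boron, wt%
    b2o3_wt_pct = boron_wt_pct * conversion_factor
    print(f"B2O3 content: {b2o3_wt_pct:.1f} wt% (conversion factor {conversion_factor:.4f})")
    ```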

  7. ANALYTICAL RESULTS FOR MOX COLEMANITE SAMPLES RECEIVED ON JULY 22, 2013

    SciTech Connect

    Reigel, M.; Best, D.

    2013-08-13

    The Mixed Oxide Fuel Fabrication Facility (MFFF) will use colemanite bearing concrete neutron absorber panels credited with attenuating neutron flux in the criticality design analyses and shielding operators from radiation. The Savannah River National Laboratory (SRNL) is tasked with measuring the boron oxide content of the colemanite raw aggregate material prior to it being mixed into the concrete. SRNL received ten samples of colemanite for analysis on July 22, 2013. The elemental boron content of each sample was measured according to ASTM C 1301. The boron oxide content was calculated using the oxide conversion factor for boron.

  8. Analytical Results For MOX Colemanite Concrete Samples Received On September 4, 2013

    SciTech Connect

    Reigel, Marissa M.

    2013-09-24

    The Mixed Oxide Fuel Fabrication Facility (MFFF) will use colemanite bearing concrete neutron absorber panels credited with attenuating neutron flux in the criticality design analyses and shielding operators from radiation. The Savannah River National Laboratory (SRNL) is tasked with measuring the total density, partial hydrogen density, and partial boron density of the colemanite concrete. SRNL received three samples of colemanite concrete for analysis on September 4, 2013. The average total density of each sample was measured by ASTM method C 642, the average partial hydrogen density was measured using ASTM method E 1131, and the average partial boron density of each sample was measured according to ASTM C 1301. The lower limits and measured values for the total density, hydrogen partial density, and boron partial density are presented. For all the samples tested, the total density and the boron partial density met or exceeded the specified limit. None of the samples met the lower limit for hydrogen partial density.

  9. ANALYTICAL RESULTS FOR MOX COLEMANITE CONCRETE SAMPLES RECEIVED ON NOVEMBER 21, 2013

    SciTech Connect

    Reigel, M.

    2014-05-19

    The Mixed Oxide Fuel Fabrication Facility (MFFF) will use colemanite bearing concrete neutron absorber panels credited with attenuating neutron flux in the criticality design analyses and shielding operators from radiation. The Savannah River National Laboratory (SRNL) is tasked with measuring the total density, partial hydrogen density, and partial boron density of the colemanite concrete. SRNL received two samples of colemanite concrete for analysis on November 21, 2013. The average total density of each sample was measured by ASTM method C 642, the average partial hydrogen density was measured using ASTM method E 1131, and the average partial boron density of each sample was measured according to ASTM C 1301. The lower limits and measured values for the total density, hydrogen partial density, and boron partial density are presented. For all the samples tested, the total density and the boron partial density met or exceeded the specified limit. None of the samples met the lower limit for hydrogen partial density.

  10. Analytical Results For MOX Colemanite Concrete Samples Received On November, 2013

    SciTech Connect

    Reigel, Marissa M.

    2013-12-18

    The Mixed Oxide Fuel Fabrication Facility (MFFF) will use colemanite bearing concrete neutron absorber panels credited with attenuating neutron flux in the criticality design analyses and shielding operators from radiation. The Savannah River National Laboratory (SRNL) is tasked with measuring the total density, partial hydrogen density, and partial boron density of the colemanite concrete. SRNL received two samples of colemanite concrete for analysis on November 21, 2013. The average total density of each sample was measured by ASTM method C 642, the average partial hydrogen density was measured using ASTM method E 1131, and the average partial boron density of each sample was measured according to ASTM C 1301. For all the samples tested, the total density and the boron partial density met or exceeded the specified limit. None of the samples met the lower limit for hydrogen partial density.

  11. ANALYTICAL RESULTS FOR MOX COLEMANITE CONCRETE SAMPLES RECEIVED ON SEPTEMBER 4, 2013

    SciTech Connect

    Reigel, M.

    2014-05-19

    The Mixed Oxide Fuel Fabrication Facility (MFFF) will use colemanite bearing concrete neutron absorber panels credited with attenuating neutron flux in the criticality design analyses and shielding operators from radiation. The Savannah River National Laboratory (SRNL) is tasked with measuring the total density, partial hydrogen density, and partial boron density of the colemanite concrete. SRNL received three samples of colemanite concrete for analysis on September 4, 2013. The average total density of each sample was measured by ASTM method C 642, the average partial hydrogen density was measured using ASTM method E 1131, and the average partial boron density of each sample was measured according to ASTM C 1301. The lower limits and measured values for the total density, hydrogen partial density, and boron partial density are presented. For all the samples tested, the total density and the boron partial density met or exceeded the specified limit. None of the samples met the lower limit for hydrogen partial density.

  12. The Relation of Perceived and Received Social Support to Mental Health among First Responders: A Meta-Analytic Review

    ERIC Educational Resources Information Center

    Prati, Gabriele; Pietrantoni, Luca

    2010-01-01

    There are plenty of theories that may support the protective role of social support in the aftermath of potentially traumatic events. This meta-analytic review examined the role of received and perceived social support in promoting mental health among first responders (e.g., firefighters, police officers, and paramedics or emergency medical…

  13. Analytical techniques and instrumentation: A compilation. [analytical instrumentation, materials performance, and systems analysis

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Technical information is presented covering the areas of: (1) analytical instrumentation useful in the analysis of physical phenomena; (2) analytical techniques used to determine the performance of materials; and (3) systems and component analyses for design and quality control.

  14. ANALYTICAL RESULTS FOR MOX COLEMANITE CONCRETE SAMPLES RECEIVED ON JANUARY 15, 2013

    SciTech Connect

    Reigel, M.; Best, D.

    2013-02-13

    The Mixed Oxide Fuel Fabrication Facility (MFFF) will use colemanite bearing concrete neutron absorber panels credited with attenuating neutron flux in the criticality design analyses and shielding operators from radiation. The Savannah River National Laboratory (SRNL) is tasked with measuring the total density, partial hydrogen density, and partial boron density of the colemanite concrete. SRNL received twelve samples of colemanite concrete for analysis on January 15, 2013. The average total density of each sample was measured by ASTM method C 642, the average partial hydrogen density was measured using ASTM method E 1131, and the average partial boron density of each sample was measured according to ASTM C 1301. The lower limits and measured values for the total density, hydrogen partial density, and boron partial density are presented. For all the samples tested, the total density and the hydrogen partial density met or exceeded the specified limit. All of the samples met or exceeded the boron partial density lower bound with the exception of samples G3-M11-2000-H, G3-M11-3000-M, and G5-M1-3000-H, which were below the limit of 1.65E-01 g/cm3.
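
    One plausible way to relate the three measurements above, assuming a partial density is the total concrete density multiplied by the measured mass fraction of the element, is sketched below. The procedure and all input values are assumptions for illustration, not the report's actual data, apart from the 1.65E-01 g/cm3 boron limit quoted in the abstract:

    ```python
    # Hedged sketch: partial densities as total density times elemental mass
    # fraction, checked against the boron lower limit quoted in the abstract.
    # All measurement values are fabricated.
    BORON_LIMIT = 0.165               # g/cm^3, from the abstract

    total_density = 1.90              # g/cm^3, hypothetical ASTM C 642 result
    boron_mass_fraction = 0.09        # hypothetical ASTM C 1301 result (g B per g concrete)
    hydrogen_mass_fraction = 0.006    # hypothetical ASTM E 1131 result

    partial_boron = total_density * boron_mass_fraction
    partial_hydrogen = total_density * hydrogen_mass_fraction

    status = "meets" if partial_boron >= BORON_LIMIT else "is below"
    print(f"boron partial density {partial_boron:.3f} g/cm^3 {status} the {BORON_LIMIT} g/cm^3 limit")
    print(f"hydrogen partial density {partial_hydrogen:.4f} g/cm^3")
    ```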

  15. ANALYTICAL RESULTS FOR MOX COLEMANITE CONCRETE SAMPLES RECEIVED ON JANUARY 15, 2013

    SciTech Connect

    Reigel, M.

    2014-05-19

    The Mixed Oxide Fuel Fabrication Facility (MFFF) will use colemanite bearing concrete neutron absorber panels credited with attenuating neutron flux in the criticality design analyses and shielding operators from radiation. The Savannah River National Laboratory (SRNL) is tasked with measuring the total density, partial hydrogen density, and partial boron density of the colemanite concrete. SRNL received twelve samples of colemanite concrete for analysis on January 15, 2013. The average total density of each sample was measured by ASTM method C 642, the average partial hydrogen density was measured using ASTM method E 1131, and the average partial boron density of each sample was measured according to ASTM C 1301. The lower limits and measured values for the total density, hydrogen partial density, and boron partial density are presented. For all the samples tested, the total density and the hydrogen partial density met or exceeded the specified limit. All of the samples met or exceeded the boron partial density lower bound with the exception of samples G3-M11-2000-H, G3-M11-3000-M, and G5-M1-3000-H, which were below the limit of 1.65E-01 g/cm3.

  16. Visual Analytics for Power Grid Contingency Analysis

    SciTech Connect

    Wong, Pak C.; Huang, Zhenyu; Chen, Yousu; Mackey, Patrick S.; Jin, Shuangshuang

    2014-01-20

    Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and the society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios to a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computation requirements of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.

  17. Analytical signal analysis of strange nonchaotic dynamics.

    PubMed

    Gupta, Kopal; Prasad, Awadhesh; Singh, Harinder P; Ramaswamy, Ramakrishna

    2008-04-01

    We apply an analytical signal analysis to strange nonchaotic dynamics. Through this technique it is possible to obtain the spectrum of instantaneous intrinsic mode frequencies that are present in a given signal. We find that the second-mode frequency and its variance are good order parameters for dynamical transitions from quasiperiodic tori to strange nonchaotic attractors (SNAs) and from SNAs to chaotic attractors. Phase fluctuation analysis shows that SNAs and chaotic attractors behave identically within short time windows as a consequence of local instabilities in the dynamics. In longer time windows, however, the globally stable character of SNAs becomes apparent. This methodology can be of great utility in the analysis of experimental time series, and representative applications are made to signals obtained from Rössler and Duffing oscillators. PMID:18517723
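
    The building block of such an analysis is the analytic signal and the instantaneous frequency derived from it. A minimal sketch using the Hilbert transform on a toy signal (the paper works with intrinsic mode frequencies of dynamical time series; this only illustrates the analytic-signal step):

    ```python
    # Instantaneous frequency of a toy signal via the analytic signal
    # (Hilbert transform). Signal parameters are illustrative.
    import numpy as np
    from scipy.signal import hilbert

    fs = 1000.0                                    # sampling rate, Hz
    t = np.arange(0, 2.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 13 * t)

    z = hilbert(x)                                 # analytic signal x + i*H[x]
    phase = np.unwrap(np.angle(z))
    inst_freq = np.diff(phase) / (2 * np.pi) * fs  # instantaneous frequency, Hz

    print(f"mean instantaneous frequency: {inst_freq.mean():.2f} Hz, "
          f"variance: {inst_freq.var():.2f} Hz^2")
    ```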

  18. Guided Text Analysis Using Adaptive Visual Analytics

    SciTech Connect

    Steed, Chad A; Symons, Christopher T; DeNap, Frank A; Potok, Thomas E

    2012-01-01

    This paper demonstrates the promise of augmenting interactive visualizations with semi-supervised machine learning techniques to improve the discovery of significant associations and insight in the search and analysis of textual information. More specifically, we have developed a system called Gryffin that hosts a unique collection of techniques that facilitate individualized investigative search pertaining to an ever-changing set of analytical questions over an indexed collection of open-source publications related to national infrastructure. The Gryffin client hosts dynamic displays of the search results via focus+context record listings, temporal timelines, term-frequency views, and multiple coordinated views. Furthermore, as the analyst interacts with the display, the interactions are recorded and used to label the search records. These labeled records are then used to drive semi-supervised machine learning algorithms that re-rank the unlabeled search records such that potentially relevant records are moved to the top of the record listing. Gryffin is described in the context of the daily tasks encountered at the Department of Homeland Security's Fusion Centers, with whom we are collaborating in its development. The resulting system is capable of addressing the analyst's information overload that can be directly attributed to the deluge of information that must be addressed in search and investigative analysis of textual information.

  19. Guided text analysis using adaptive visual analytics

    NASA Astrophysics Data System (ADS)

    Steed, Chad A.; Symons, Christopher T.; DeNap, Frank A.; Potok, Thomas E.

    2012-01-01

    This paper demonstrates the promise of augmenting interactive visualizations with semi-supervised machine learning techniques to improve the discovery of significant associations and insight in the search and analysis of textual information. More specifically, we have developed a system, called Gryffin, that hosts a unique collection of techniques that facilitate individualized investigative search pertaining to an ever-changing set of analytical questions over an indexed collection of open-source publications related to national infrastructure. The Gryffin client hosts dynamic displays of the search results via focus+context record listings, temporal timelines, term-frequency views, and multiple coordinated views. Furthermore, as the analyst interacts with the display, the interactions are recorded and used to label the search records. These labeled records are then used to drive semi-supervised machine learning algorithms that re-rank the unlabeled search records such that potentially relevant records are moved to the top of the record listing. Gryffin is described in the context of the daily tasks encountered at the Department of Homeland Security's Fusion Centers, with whom we are collaborating in its development. The resulting system is capable of addressing the analyst's information overload that can be directly attributed to the deluge of information that must be addressed in search and investigative analysis of textual information.

  20. Green analytical method development for statin analysis.

    PubMed

    Assassi, Amira Louiza; Roy, Claude-Eric; Perovitch, Philippe; Auzerie, Jack; Hamon, Tiphaine; Gaudin, Karen

    2015-02-01

    A green analytical chemistry method was developed for pravastatin, fluvastatin and atorvastatin analysis. An HPLC/DAD method using an ethanol-based mobile phase with octadecyl-grafted silica with various grafting and related column parameters such as particle sizes, core-shell and monolith was studied. Retention, efficiency and detector linearity were optimized. Even for columns with particle sizes under 2 μm, the benefit of keeping efficiency over a large range of flow rates was not obtained with the ethanol-based mobile phase compared to an acetonitrile-based one. Therefore the strategy of shortening analysis by increasing the flow rate induced a decrease in efficiency with the ethanol-based mobile phase. An ODS-AQ YMC column (50 mm × 4.6 mm, 3 μm) was selected, which showed the best compromise between analysis time, statin separation, and efficiency. HPLC conditions were 1 mL/min, ethanol/formic acid (pH 2.5, 25 mM) (50:50, v/v), thermostated at 40°C. To reduce solvent consumption for sample preparation, a concentration of 0.5 mg/mL of each statin was found to be the highest that respected detector linearity. These conditions were validated for each statin for content determination in highly concentrated hydro-alcoholic solutions. Solubility higher than 100 mg/mL was found for pravastatin and fluvastatin, whereas for the atorvastatin calcium salt the maximum concentration was 2 mg/mL for hydro-alcoholic binary mixtures between 35% and 55% ethanol in water. Using atorvastatin instead of its calcium salt, solubility was improved. Highly concentrated solutions of statins offer a potential fluid for per Buccal Per-Mucous(®) administration, with the advantages of rapid and easy passage of drugs. PMID:25582487

  1. Analytical Evaluation of Bit Error Rate Performance of a Free-Space Optical Communication System with Receive Diversity Impaired by Pointing Error

    NASA Astrophysics Data System (ADS)

    Nazrul Islam, A. K. M.; Majumder, S. P.

    2015-06-01

    Analysis is carried out to evaluate the conditional bit error rate conditioned on a given value of pointing error for a Free Space Optical (FSO) link with multiple receivers using Equal Gain Combining (EGC). The probability density function (pdf) of the output signal-to-noise ratio (SNR) is also derived in the presence of pointing error with EGC. The average BERs of SISO and SIMO FSO links are analytically evaluated by averaging the conditional BER over the pdf of the output SNR. The BER performance results are evaluated for several values of the pointing jitter parameters and the number of IM/DD receivers. The results show that the FSO system suffers a significant power penalty due to pointing error, which can be reduced by increasing the number of receivers at a given value of pointing error. The improvement in receiver sensitivity over SISO is about 4 dB and 9 dB when the number of photodetectors is 2 and 4, respectively, at a BER of 10^-10. It is also noticed that a system with receive diversity can tolerate a higher value of pointing error at a given BER and transmit power.
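
    The averaging step described above, conditional BER integrated over the pdf of the output SNR, can be sketched numerically. The Gaussian-Q conditional BER and the lognormal SNR pdf below are generic placeholders rather than the EGC/pointing-error expressions derived in the paper, and the parameter values are invented:

    ```python
    # Hedged numerical sketch: average BER = integral of conditional BER over an
    # assumed pdf of the output SNR. Placeholder models, illustrative parameters.
    import numpy as np
    from scipy.integrate import quad
    from scipy.special import erfc
    from scipy.stats import lognorm

    def q_func(x):
        return 0.5 * erfc(x / np.sqrt(2.0))

    def conditional_ber(snr_linear):
        return q_func(np.sqrt(snr_linear))

    mean_snr_db, jitter_spread = 10.0, 0.5        # hypothetical values
    snr_pdf = lognorm(s=jitter_spread, scale=10 ** (mean_snr_db / 10.0))

    avg_ber, _ = quad(lambda g: conditional_ber(g) * snr_pdf.pdf(g), 0.0, np.inf, limit=200)
    print(f"average BER ~ {avg_ber:.3e}")
    ```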

  2. Finite Element Analysis of the LOLA Receiver Telescope Lens

    NASA Technical Reports Server (NTRS)

    Matzinger, Elizabeth

    2007-01-01

    This paper presents the finite element stress and distortion analysis completed on the Receiver Telescope lens of the Lunar Orbiter Laser Altimeter (LOLA). LOLA is one of six instruments on the Lunar Reconnaissance Orbiter (LRO), scheduled to launch in 2008. LOLA's main objective is to produce a high-resolution global lunar topographic model to aid in safe landings and enhance surface mobility in future exploration missions. The Receiver Telescope captures the laser pulses transmitted through a diffractive optical element (DOE) and reflected off the lunar surface. The largest lens of the Receiver Telescope, Lens 1, is a 150 mm diameter aspheric lens originally designed to be made of BK7 glass. The finite element model of the Receiver Telescope Lens 1 is comprised of solid elements and constrained in a manner consistent with the behavior of the mounting configuration of the Receiver Telescope tube. Twenty-one temperature load cases were mapped to the nodes based on thermal analysis completed by LOLA's lead thermal analyst, and loads were applied to simulate the preload applied from the ring flexure. The thermal environment of the baseline design (uncoated BK7 lens with no baffle) produces large radial and axial gradients in the lens. These large gradients create internal stresses that may lead to part failure, as well as significant bending that degrades optical performance. The high stresses and large distortions shown in the analysis precipitated a design change from BK7 glass to sapphire.

  3. Audience Analysis: A Programmed Approach to Receiver Behavior.

    ERIC Educational Resources Information Center

    Gibson, James W.; Hanna, Michael S.

    This branching programmed instructional workbook focuses on how people behave when they hear messages. Included are materials on audience/receiver behavior and a self-instructional supplement to materials that are a part of other communication studies. Drawings and diagrams illustrate the discussions of self-analysis, selection of audiences and…

  4. Superconducting integrated terahertz receiver for spectral analysis of gas compounds

    NASA Astrophysics Data System (ADS)

    Kinev, N. V.; Filippenko, L. V.; Kalashnikov, K. V.; Kiselev, O. S.; Vaks, V. L.; Domracheva, E. G.; Koshelets, V. P.

    2016-08-01

    A new highly sensitive device for the analysis of gas compounds in the terahertz frequency range based on a superconducting integrated receiver is being developed. Such a receiver for spectral research of the Earth's atmosphere from a balloon-borne instrument was developed earlier at the Kotel'nikov Institute of Radio Engineering and Electronics and successfully operated during several flight missions. In this work, the laboratory setup for gas spectroscopy in the range of 450-700 GHz with a noise temperature below 150 K and a spectral resolution better than 0.5 MHz is presented. First results of measurements of NH3 and H2O absorption spectra are obtained.

  5. SAAF: SANS data Analysis using Analytical Functions

    SciTech Connect

    Zhao, Jinkui

    2011-01-01

    The recently completed Extended Q-Range Small Angle Scattering Diffractometer (EQ-SANS) has put the focus on its software needs with renewed urgency. In a series of efforts, we aim at providing a complete set of software solutions for the EQ-SANS instrument. These programs include initial data processing, data correction and reduction, analytical model fitting to the scattering data, Monte Carlo simulation for structure determination, and virtual instrument simulation for experiment planning. SAAF is one such program for analytical data modeling. It takes the reduced EQ-SANS data and allows users to fit the data to analytical models. These models are easy to write; they can be either user-written or taken from the pre-supplied model library.

  6. Finite element analysis of the LOLA receiver telescope lens

    NASA Astrophysics Data System (ADS)

    Matzinger, Elizabeth A.

    2007-09-01

    This paper presents the finite element stress and distortion analysis completed on the receiver telescope lens of the Lunar Orbiter Laser Altimeter (LOLA). LOLA is one of six instruments on the Lunar Reconnaissance Orbiter (LRO), scheduled to launch in 2008. LOLA's main objective is to produce a high-resolution global lunar topographic model to aid in safe landings and enhance surface mobility in future exploration missions. A receiver telescope captures the laser pulses transmitted through a diffractive optical element (DOE) and reflected off the lunar surface. The largest lens of the receiver telescope was modeled with solid elements and constrained in a manner consistent with the behavior of the mounting configuration. Twenty-one temperature load cases were mapped to the nodes based on thermal analysis completed by LOLA's lead thermal analyst, and loads were applied to simulate the preload applied from the ring flexure. The thermal environment of the baseline design produces large radial and axial gradients in the lens. These large gradients create internal stresses that may lead to part failure, as well as significant bending that degrades optical performance. The high stresses and large distortions shown in the analysis precipitated a design change from BK7 glass to sapphire.

  7. UV Lidar Receiver Analysis for Tropospheric Sensing of Ozone

    NASA Technical Reports Server (NTRS)

    Pliutau, Denis; DeYoung, Russell J.

    2013-01-01

    A simulation of a ground-based Ultra-Violet Differential Absorption Lidar (UV-DIAL) receiver system was performed under realistic daytime conditions to understand how range and lidar performance can be improved for a given UV pulse laser energy. Calculations were also performed for an aerosol channel transmitting at 3 W. The lidar receiver simulation studies were optimized for the purpose of tropospheric ozone measurements. The transmitted lidar UV wavelengths were from 285 to 295 nm and the aerosol channel was at 527 nm. The calculations are based on atmospheric transmission given by the HITRAN database and the Modern Era Retrospective Analysis for Research and Applications (MERRA) meteorological data. The aerosol attenuation is estimated using both the BACKSCAT 4.0 code as well as data collected during the CALIPSO mission. The lidar performance is estimated both for diffuse-irradiance-free cases corresponding to nighttime operation and for the daytime diffuse scattered radiation component based on previously reported experimental data. This analysis presents calculations of the UV-DIAL receiver ozone and aerosol measurement range as a function of sky irradiance, filter bandwidth, and laser transmitted UV and 527-nm energy.

  8. Single Cell Analysis of a Bacterial Sender-Receiver System

    PubMed Central

    Mückl, Andrea; Kapsner, Korbinian; Gerland, Ulrich; Simmel, Friedrich C.

    2016-01-01

    Monitoring gene expression dynamics on the single cell level provides important information on cellular heterogeneity and stochasticity, and potentially allows for more accurate quantitation of gene expression processes. We here study bacterial senders and receivers genetically engineered with components of the quorum sensing system derived from Aliivibrio fischeri on the single cell level using microfluidics-based bacterial chemostats and fluorescence video microscopy. We track large numbers of bacteria over extended periods of time, which allows us to determine bacterial lineages and filter out subpopulations within a heterogeneous population. We quantitatively determine the dynamic gene expression response of receiver bacteria to varying amounts of the quorum sensing inducer N-3-oxo-C6-homoserine lactone (AHL). From this we construct AHL response curves and characterize gene expression dynamics of whole bacterial populations by investigating the statistical distribution of gene expression activity over time. The bacteria are found to display heterogeneous induction behavior within the population. We therefore also characterize gene expression in a homogeneous bacterial subpopulation by focusing on single cell trajectories derived only from bacteria with similar induction behavior. The response at the single cell level is found to be more cooperative than that obtained for the heterogeneous total population. For the analysis of systems containing both AHL senders and receiver cells, we utilize the receiver cells as ‘bacterial sensors’ for AHL. Based on a simple gene expression model and the response curves obtained in receiver-only experiments, the effective AHL concentration established by the senders and their ‘sending power’ is determined. PMID:26808777

  9. Economic Risk Analysis: Using Analytical and Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    O'Donnell, Brendan R.; Hickner, Michael A.; Barna, Bruce A.

    2002-01-01

    Describes the development and instructional use of a Microsoft Excel spreadsheet template that facilitates analytical and Monte Carlo risk analysis of investment decisions. Discusses a variety of risk assessment methods followed by applications of the analytical and Monte Carlo methods. Uses a case study to illustrate use of the spreadsheet tool…

  10. Correlation analysis between ionospheric scintillation levels and receiver tracking performance

    NASA Astrophysics Data System (ADS)

    Sreeja, V.; Aquino, M.; Elmas, Z. G.; Forte, B.

    2012-06-01

    Rapid fluctuations in the amplitude and phase of a transionospheric radio signal caused by small-scale plasma density irregularities in the ionosphere are known as scintillation. Scintillation can seriously impair a GNSS (Global Navigation Satellite Systems) receiver's tracking performance, thus affecting the required levels of availability, accuracy and integrity, and consequently the reliability of modern-day GNSS-based applications. This paper presents an analysis of the correlation between scintillation levels and the tracking performance of a GNSS receiver for GPS L1C/A, L2C and GLONASS L1, L2 signals. The analyses make use of data recorded over Presidente Prudente (22.1°S, 51.4°W, dip latitude ~12.3°S) in Brazil, a location close to the Equatorial Ionisation Anomaly (EIA) crest in Latin America. The study presents for the first time this type of correlation analysis for GPS L2C and GLONASS L1, L2 signals. The scintillation levels are defined by the amplitude scintillation index, S4, and the receiver tracking performance is evaluated by the phase tracking jitter. Both S4 and the phase tracking jitter are estimated from the post-correlation In-Phase (I) and Quadrature-Phase (Q) components logged by the receiver at a high rate. Results reveal that the dependence of the phase tracking jitter on the scintillation levels can be represented by a quadratic fit for the signals. The results presented in this paper are of importance to GNSS users, especially in view of the forthcoming high phase of solar cycle 24 (predicted for 2013).
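
    The quadratic dependence reported above can be reproduced on synthetic data with an ordinary polynomial fit; the coefficients and noise level below are invented for illustration, not taken from the paper:

    ```python
    # Quadratic fit of (synthetic) phase tracking jitter versus the S4 index.
    import numpy as np

    rng = np.random.default_rng(0)
    s4 = rng.uniform(0.1, 1.0, 200)                                       # synthetic S4 values
    jitter = 2.0 + 5.0 * s4 + 12.0 * s4**2 + rng.normal(0, 1.0, s4.size)  # degrees, invented model

    coeffs = np.polyfit(s4, jitter, deg=2)                                # [a2, a1, a0]
    print("fitted coefficients (a2, a1, a0):", np.round(coeffs, 2))
    print(f"predicted jitter at S4 = 0.7: {np.polyval(coeffs, 0.7):.1f} deg")
    ```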

  11. Analytical analysis of particle-core dynamics

    SciTech Connect

    Batygin, Yuri K

    2010-01-01

    Particle-core interaction is a well-developed model of halo formation in high-intensity beams. In this paper, we present an analytical solution for averaged, single particle dynamics, around a uniformly charged beam. The problem is analyzed through a sequence of canonical transformations of the Hamiltonian, which describes nonlinear particle oscillations. A closed form expression for maximum particle deviation from the axis is obtained. The results of this study are in good agreement with numerical simulations and with previously obtained data.

  12. Summative Mass Analysis of Algal Biomass - Integration of Analytical Procedures: Laboratory Analytical Procedure (LAP)

    SciTech Connect

    Laurens, L. M. L.

    2013-12-01

    This procedure guides the integration of laboratory analytical procedures to measure algal biomass constituents in an unambiguous manner and ultimately achieve mass balance closure for algal biomass samples. Many of these methods build on years of research in algal biomass analysis.

  13. Receive Mode Analysis and Design of Microstrip Reflectarrays

    NASA Technical Reports Server (NTRS)

    Rengarajan, Sembiam

    2011-01-01

    Traditionally microstrip or printed reflectarrays are designed using the transmit mode technique. In this method, the size of each printed element is chosen so as to provide the required value of the reflection phase such that a collimated beam results along a given direction. The reflection phase of each printed element is approximated using an infinite array model. The infinite array model is an excellent engineering approximation for a large microstrip array since the size or orientation of elements exhibits a slow spatial variation. In this model, the reflection phase from a given printed element is approximated by that of an infinite array of elements of the same size and orientation when illuminated by a local plane wave. Thus the reflection phase is a function of the size (or orientation) of the element, the elevation and azimuth angles of incidence of a local plane wave, and polarization. Typically, one computes the reflection phase of the infinite array as a function of several parameters such as size/orientation, elevation and azimuth angles of incidence, and in some cases for vertical and horizontal polarization. The design requires the selection of the size/orientation of the printed element to realize the required phase by interpolating or curve fitting all the computed data. This is a substantially complicated problem, especially in applications requiring a computationally intensive commercial code to determine the reflection phase. In dual polarization applications requiring rectangular patches, one needs to determine the reflection phase as a function of five parameters (dimensions of the rectangular patch, elevation and azimuth angles of incidence, and polarization). This is an extremely complex problem. The new method employs the reciprocity principle and reaction concept, two well-known concepts in electromagnetics to derive the receive mode analysis and design techniques. In the "receive mode design" technique, the reflection phase is computed

  14. Analytical Hopf Bifurcation and Stability Analysis of T System

    NASA Astrophysics Data System (ADS)

    Robert, A. Van Gorder; Roy Choudhury, S.

    2011-04-01

    Complex dynamics are studied in the T system, a three-dimensional autonomous nonlinear system. In particular, we perform an extended Hopf bifurcation analysis of the system. The periodic orbit immediately following the Hopf bifurcation is constructed analytically for the T system using the method of multiple scales, and the stability of such orbits is analyzed. Such analytical results complement the numerical results present in the literature. The analytical results in the post-bifurcation regime are verified and extended via numerical simulations, as well as by the use of standard power spectra, autocorrelation functions, and fractal dimensions diagnostics. We find that the T system exhibits interesting behaviors in many parameter regimes.

  15. FASP, an analytic resource appraisal program for petroleum play analysis

    USGS Publications Warehouse

    Crovelli, R.A.; Balay, R.H.

    1986-01-01

    An analytic probabilistic methodology for resource appraisal of undiscovered oil and gas resources in play analysis is presented in a FORTRAN program termed FASP. This play-analysis methodology is a geostochastic system for petroleum resource appraisal in explored as well as frontier areas. An established geologic model considers both the uncertainty of the presence of the assessed hydrocarbon and its amount if present. The program FASP produces resource estimates of crude oil, nonassociated gas, dissolved gas, and gas for a geologic play in terms of probability distributions. The analytic method is based upon conditional probability theory and many laws of expectation and variance. © 1986.

  16. Units of analysis in task-analytic research.

    PubMed

    Haring, T G; Kennedy, C H

    1988-01-01

    We develop and discuss four criteria for evaluating the appropriateness of units of analysis for task-analytic research and suggest potential alternatives to the units of analysis currently used. Of the six solutions discussed, the most commonly used unit of analysis in current behavior analytic work, percentage correct, meets only one of the four criteria. Five alternative units of analysis are presented and evaluated: (a) percentage of opportunities to perform meeting criterion, (b) trials to criteria, (c) cumulative competent performances, (d) percentage correct with competent performance coded, and (e) percentage correct with competent performance coded and a grid showing performance on individual steps of the task analysis. Of the solutions evaluated, only one (percentage correct with competent performance coded and a task analysis grid) met all four criteria.

  17. Generalized Roe and Metz receiver operating characteristic model: analytic link between simulated decision scores and empirical AUC variances and covariances.

    PubMed

    Gallas, Brandon D; Hillis, Stephen L

    2014-10-01

    Modeling and simulation are often used to understand and investigate random quantities and estimators. In 1997, Roe and Metz introduced a simulation model to validate analysis methods for the popular endpoint in reader studies to evaluate medical imaging devices, the reader-averaged area under the receiver operating characteristic (ROC) curve. Here, we generalize the notation of the model to allow more flexibility in recognition that variances of ROC ratings depend on modality and truth state. We also derive and validate equations for computing population variances and covariances for reader-averaged empirical AUC estimates under the generalized model. The equations are one-dimensional integrals that can be calculated using standard numerical integration techniques. This work provides the theoretical foundation and validation for a Java application called iRoeMetz that can simulate multireader multicase ROC studies and numerically calculate the corresponding variances and covariances of the empirical AUC. The iRoeMetz application and source code can be found at the "iMRMC" project on the Google Code project hosting site. These results and the application can be used by investigators to investigate ROC endpoints, validate analysis methods, and plan future studies.
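
    The flavor of the Roe-Metz approach can be conveyed with a brute-force Monte Carlo stand-in for the analytic integrals: simulate decision scores with reader, case, and error components, compute reader-averaged empirical AUCs, and look at their variance across trials. The variance components and study sizes below are illustrative, and the model is a simplified sketch rather than the generalized model of the paper:

    ```python
    # Simplified Roe-Metz-style simulation: reader-averaged empirical AUC and
    # its across-trial variance. All parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    n_readers, n_neg, n_pos, n_trials = 5, 50, 50, 500
    mu_sep = 1.0                                      # separation between truth-state means
    var_reader, var_case, var_error = 0.1, 0.3, 0.6   # illustrative variance components

    def empirical_auc(neg, pos):
        diff = pos[:, None] - neg[None, :]
        return np.mean((diff > 0) + 0.5 * (diff == 0))

    trial_aucs = []
    for _ in range(n_trials):
        case_neg = rng.normal(0, np.sqrt(var_case), n_neg)
        case_pos = rng.normal(0, np.sqrt(var_case), n_pos)
        reader_eff = rng.normal(0, np.sqrt(var_reader), (n_readers, 2))  # per truth state
        aucs = []
        for r in range(n_readers):
            neg = reader_eff[r, 0] + case_neg + rng.normal(0, np.sqrt(var_error), n_neg)
            pos = mu_sep + reader_eff[r, 1] + case_pos + rng.normal(0, np.sqrt(var_error), n_pos)
            aucs.append(empirical_auc(neg, pos))
        trial_aucs.append(np.mean(aucs))

    print(f"reader-averaged empirical AUC: mean={np.mean(trial_aucs):.3f}, "
          f"variance={np.var(trial_aucs):.5f}")
    ```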

  18. Generalized Roe and Metz receiver operating characteristic model: analytic link between simulated decision scores and empirical AUC variances and covariances

    PubMed Central

    Gallas, Brandon D.; Hillis, Stephen L.

    2014-01-01

    Modeling and simulation are often used to understand and investigate random quantities and estimators. In 1997, Roe and Metz introduced a simulation model to validate analysis methods for the popular endpoint in reader studies to evaluate medical imaging devices, the reader-averaged area under the receiver operating characteristic (ROC) curve. Here, we generalize the notation of the model to allow more flexibility in recognition that variances of ROC ratings depend on modality and truth state. We also derive and validate equations for computing population variances and covariances for reader-averaged empirical AUC estimates under the generalized model. The equations are one-dimensional integrals that can be calculated using standard numerical integration techniques. This work provides the theoretical foundation and validation for a Java application called iRoeMetz that can simulate multireader multicase ROC studies and numerically calculate the corresponding variances and covariances of the empirical AUC. The iRoeMetz application and source code can be found at the “iMRMC” project on the Google Code project hosting site. These results and the application can be used by investigators to investigate ROC endpoints, validate analysis methods, and plan future studies. PMID:26158048

  19. Receiver operating characteristic analysis improves diagnosis by radionuclide ventriculography

    SciTech Connect

    Dickinson, C.Z.; Forman, M.B.; Vaugh, W.K.; Sandler, M.P.; Kronenberg, M.W.

    1985-05-01

    Receiver operating characteristic analysis (ROC) evaluates continuous variables to define diagnostic criteria for the optimal sensitivity (SENS) and specificity (SPEC) of a test. The authors studied exercise-induced chest pain (CP), ST-changes on electrocardiography (ECG) and rest-exercise gated radionuclide ventriculography (RVG) using ROC to clarify the optimal criteria for detecting myocardial ischemia due to coronary atherosclerosis (CAD). The data of 95 consecutive patients studied with coronary angiography, rest-exercise RVG and ECG were reviewed. 77 patients had "significant" CAD (≥50% lesions). Exercise-induced CP, ECG abnormalities (ST-T shifts) and RVG abnormalities (change in ejection fraction, 2-view regional wall motion change and relative end-systolic volume) were evaluated to define the optimal SENS/SPEC of each and for the combined data. ROC curves were constructed by multiple logistic regression (MLR). By MLR, RVG alone was superior to ECG and CP. The combination of all three produced the best ROC curve for the entire group and for clinical subsets based on the number of diseased vessels and the presence or absence of prior myocardial infarction. When CP, ECG and RVG were combined, the optimal SENS/SPEC for detection of single-vessel disease was 88/86. The SENS/SPEC for 3-vessel disease was 93/95. Thus, the application of RVG for the diagnosis of myocardial ischemia is improved with the inclusion of ECG and CP data by the use of a multiple logistic regression model. ROC analysis allows clinical application of multiple data for diagnosing CAD at desired SENS/SPEC rather than by arbitrary single-standard criteria.
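
    The combination step described above, several exercise variables feeding a multiple logistic regression whose score is then thresholded along an ROC curve, can be sketched on synthetic data. Everything below (feature construction, prevalences, effect sizes) is fabricated for illustration and is not the study's patient data:

    ```python
    # Combining chest pain, ECG ST-change, and an RVG-derived variable with
    # logistic regression, then reading SENS/SPEC off the ROC curve.
    # Synthetic data only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(0)
    n = 300
    cad = rng.binomial(1, 0.6, n)                              # 1 = significant CAD
    chest_pain = rng.binomial(1, np.where(cad == 1, 0.6, 0.2))
    ecg_st = rng.binomial(1, np.where(cad == 1, 0.7, 0.3))
    delta_ef = rng.normal(np.where(cad == 1, -2.0, 3.0), 4.0)  # change in ejection fraction

    X = np.column_stack([chest_pain, ecg_st, delta_ef])
    scores = LogisticRegression().fit(X, cad).predict_proba(X)[:, 1]

    fpr, tpr, thresholds = roc_curve(cad, scores)
    best = np.argmax(tpr - fpr)                                # Youden-style operating point
    print(f"AUC={roc_auc_score(cad, scores):.2f}, "
          f"SENS={tpr[best]:.2f}, SPEC={1 - fpr[best]:.2f}")
    ```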

  20. Reflections on the Nature of Analysis and Some Analytical Skills

    ERIC Educational Resources Information Center

    Strauss, D. F. M.

    2008-01-01

    Before a meaningful account can be given of scholarly communication an investigation of the nature of "analytical skills" is required--the aim of this article. It sets out to come to terms with the meaning of analysis by showing that it rests on two mutually cohering features, namely identifying and distinguishing. Since identification implies…

  1. Analytical study of seismic effects of a solar receiver mounted on concrete towers with different fundamental periods

    NASA Astrophysics Data System (ADS)

    Deng, Lin

    2016-05-01

    This paper examines the seismic effects experienced by a solar receiver mounted on concrete towers with different fundamental periods. Ten concrete towers are modeled with the empty solar receiver structure and loaded solar receiver structure to examine the tower seismic effects on the solar receiver. The fundamental periods of the towers range from 0.22 seconds to 4.58 seconds, with heights ranging from 40.5 meters to 200 meters. Thirty earthquake ground motion records are used to investigate the responses of each of the combined receiver-on-tower models as well as the receiver-on-ground models by the STAAD Pro software using time history analyses. The earthquake ground motion records are chosen based on the ratio of the peak ground acceleration to the peak ground velocity, ranging from 0.29 g/m/s to 4.88 g/m/s. For each of the combined models, the base shear at the interface between the receiver and the concrete tower is compared with the base shear of the receiver-on-ground model, and the ratio of the two base shears represents the structure amplification factor. It is found that the peak mean plus one standard deviation value of the structure amplification factor matches well with equation 13.3-1 in ASCE 7-10 for the empty solar receiver structure. However, when the solar receiver structure is loaded with dead loads, the peak value is greatly suppressed, and using equation 13.3-1 in ASCE 7-10 will be overly conservative.

  2. Waste Analysis Plan for the Waste Receiving and Processing (WRAP) Facility

    SciTech Connect

    TRINER, G.C.

    1999-11-01

    The purpose of this waste analysis plan (WAP) is to document the waste acceptance process, sampling methodologies, analytical techniques, and overall processes that are undertaken for dangerous, mixed, and radioactive waste accepted for confirmation, nondestructive examination (NDE) and nondestructive assay (NDA), repackaging, certification, and/or storage at the Waste Receiving and Processing Facility (WRAP). Mixed and/or radioactive waste is treated at WRAP. WRAP is located in the 200 West Area of the Hanford Facility, Richland, Washington. Because dangerous waste does not include source, special nuclear, and by-product material components of mixed waste, radionuclides are not within the scope of this documentation. The information on radionuclides is provided only for general knowledge.

  3. Complexity analysis of simulations with analytic bond-order potentials

    NASA Astrophysics Data System (ADS)

    Teijeiro, Carlos; Hammerschmidt, Thomas; Seiser, Bernhard; Drautz, Ralf; Sutmann, Godehard

    2016-02-01

    The modeling of materials at the atomistic level with interatomic potentials requires a reliable description of different bonding situations and relevant system properties. For this purpose, analytic bond-order potentials (BOPs) provide a systematic and robust approximation to density functional theory (DFT) and tight binding (TB) calculations at reasonable computational cost. This paper presents a formal analysis of the computational complexity of analytic BOP simulations, based on a detailed assessment of the most computationally intensive parts. Different implementation algorithms are presented alongside with optimizations for efficient numerical processing. The theoretical complexity study is complemented by systematic benchmarks of the scalability of the algorithms with increasing system size and accuracy level of the BOP approximation. Both approaches demonstrate that the computation of atomic forces in analytic BOPs can be performed with a similar scaling as the computation of atomic energies.

  4. Receiver Widelane Analysis and Its Effect on Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Elsobeiey, M.

    2014-11-01

    Typically, differential carrier-phase-based methods have been used in positioning applications that require high accuracy. The main advantage of differential methods is resolving the carrier-phase ambiguities and obtaining millimetre-level accuracy carrier-phase measurements. Recent studies showed that it is possible to fix the un-differenced carrier-phase ambiguities to integers, which is known as un-differenced carrier-phase ambiguity resolution. Unfortunately, the IGS neglects satellite hardware delays during the satellite clock correction estimation process. In the case of differential methods, however, this does not affect the user, as all errors common to the reference and rover receivers are cancelled out. Point positioning, on the other hand, is affected by neglecting satellite hardware delays, as those delays are lumped into the carrier-phase ambiguities, destroying their integer nature. To solve this problem, satellite clock corrections must be estimated on a per-observable basis. The user, on the other hand, can form the ionosphere-free linear combination and fix its two components, namely the widelane and narrowlane ambiguities. If both ambiguities are successfully fixed, measurements accurate to a few millimetres are obtained. In this paper, one month (December 2013) of GPS data is used to study the receiver widelane bias, its behaviour over time, and its receiver dependency. It is shown that the receiver widelane bias is receiver dependent and stable over time for high-grade geodetic receivers. These results are expected to have a great impact on precise point positioning (PPP) convergence time and PPP carrier-phase ambiguity resolution.
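
    A common observable for studying receiver widelane biases of the kind analyzed above is the Melbourne-Wubbena combination of dual-frequency phase and code measurements. The sketch below shows that combination for GPS L1/L2; the single-epoch observation values are fabricated for illustration, and the paper's actual estimation procedure may differ:

    ```python
    # Melbourne-Wubbena widelane combination for GPS L1/L2 (phase in cycles,
    # code in metres). Observation values are fabricated.
    C = 299_792_458.0                  # m/s
    F1, F2 = 1_575.42e6, 1_227.60e6    # GPS L1/L2 carrier frequencies, Hz
    LAM_WL = C / (F1 - F2)             # widelane wavelength, ~0.862 m

    def melbourne_wubbena(L1_cyc, L2_cyc, P1_m, P2_m):
        """Widelane ambiguity estimate in widelane cycles (biases included)."""
        phase_wl_m = LAM_WL * (L1_cyc - L2_cyc)          # widelane phase, metres
        code_nl_m = (F1 * P1_m + F2 * P2_m) / (F1 + F2)  # narrowlane code, metres
        return (phase_wl_m - code_nl_m) / LAM_WL

    # Fabricated single-epoch observation (not real data):
    mw = melbourne_wubbena(L1_cyc=1.1561e8, L2_cyc=9.0090e7, P1_m=2.2e7, P2_m=2.2e7)
    print(f"MW widelane estimate: {mw:.3f} cycles")
    ```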

  5. Fiber alignment analysis of a receiver with integrated MEMS VOA

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Hickey, Ryan; Irwin, Rob; Li, Ming; Wang, Zhengxuan

    2006-09-01

    The structure of the optical path of a novel VOA-integrated receiver is presented. The method used to enhance the attenuation performance of the receiver is described in detail. The standard coplanar package module exhibits a smooth attenuation curve and can achieve more than 20 dB of attenuation at a drive voltage of approximately 6.5 V. The S21 and S22 performance and specifications of the module are explained in the paper. All these features provide customers with considerable benefits, including high quality, low power consumption and cost, board real-estate flexibility and ease of use.

  6. An Analysis of Earth Science Data Analytics Use Cases

    NASA Technical Reports Server (NTRS)

    Shie, Chung-Lin; Kempler, Steve

    2014-01-01

    The increase in the number, volume, and sources of globally available Earth science data measurements and datasets has afforded Earth scientists and applications researchers unprecedented opportunities to study our Earth in ever more sophisticated ways. In fact, the NASA Earth Observing System Data Information System (EOSDIS) archives have doubled from 2007 to 2014, to 9.1 PB (Ramapriyan, 2009; and https://earthdata.nasa.gov/about/system-performance). In addition, other US agency and international programs, field experiments, ground stations, and citizen scientists provide a plethora of additional sources for studying Earth. Co-analyzing huge amounts of heterogeneous data to glean non-obvious information is a daunting task. Earth science data analytics (ESDA) is the process of examining large amounts of data of a variety of types to uncover hidden patterns, unknown correlations and other useful information. It can include Data Preparation, Data Reduction, and Data Analysis. Through work associated with the Earth Science Information Partners (ESIP) Federation, a collection of Earth science data analytics use cases have been collected and analyzed for the purpose of extracting the types of Earth science data analytics employed, and requirements for data analytics tools and techniques yet to be implemented, based on use case needs. The ESIP-generated use case template, the ESDA use cases, use case types, and a preliminary use case analysis (this is a work in progress) will be presented.

  7. Analytical parameters for amplitude-modulated multiplexed flow analysis.

    PubMed

    Kurokawa, Yohei; Takeuchi, Masaki; Tanaka, Hideji

    2010-01-01

    Analytical conditions of amplitude-modulated multiplexed flow analysis, the basic concept of which was recently proposed by our group, are investigated for a higher sample throughput rate. The performance of the improved system is evaluated by applying it to the determination of chloride ions. The flow rates of two sample solutions are independently varied in accordance with sinusoidal voltage signals, each having a different frequency. The solutions are merged with a reagent solution and/or a diluent, while the total flow rate is held constant. Downstream, the analytical signal V(d) is monitored with a spectrophotometer. The V(d) shows a complicated profile resulting from amplitude-modulated and multiplexed information on the two samples. The V(d) can, however, be deconvoluted into the contribution of each sample through the fast Fourier transform (FFT). The amplitudes of the separated wave components are closely related to the concentrations of the analytes in the samples. By moving the window for FFT analysis with time, a temporal profile of the amplitudes can be obtained in real time. Analytical conditions such as the modulation period and system configuration have been optimized using aqueous solutions of Malachite Green (MG). Adequate amplitudes are obtained at a period of as low as 5 s. At this period, the calibration curve for MG concentrations of 0-30 µmol dm^-3 has sufficient linearity (r^2 = 0.999) and the limit of detection (3.3σ) is 1.3 µmol dm^-3; the relative standard deviation of repeated measurements (C_MG = 15 µmol dm^-3, n = 10) is 2.4%. The developed system has been applied to the determination of chloride ions by a mercury(II) thiocyanate method. The system can adequately follow changes in analyte concentration. The recoveries of chloride ion spiked into real water samples (river and tap water) are satisfactory, around 100%. PMID:20631441
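
    The FFT deconvolution described above can be illustrated with a synthetic detector trace: two channels modulated at different frequencies are recovered by reading the single-sided FFT amplitude at each modulation frequency. All signal parameters below are invented:

    ```python
    # Recovering two modulation amplitudes from one synthetic detector trace
    # via the FFT. Parameters chosen so each modulation fits an integer number
    # of periods in the record.
    import numpy as np

    fs, duration = 10.0, 60.0                    # Hz, seconds
    t = np.arange(0, duration, 1.0 / fs)
    f1, f2 = 0.20, 0.35                          # modulation frequencies, Hz
    a1, a2 = 0.8, 0.3                            # amplitudes ~ analyte signals
    vd = (1.0 + a1 * np.sin(2 * np.pi * f1 * t) + a2 * np.sin(2 * np.pi * f2 * t)
          + 0.02 * np.random.default_rng(0).normal(size=t.size))

    amp = 2.0 * np.abs(np.fft.rfft(vd)) / t.size # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

    for f in (f1, f2):
        k = np.argmin(np.abs(freqs - f))
        print(f"recovered amplitude at {f:.2f} Hz: {amp[k]:.3f}")
    ```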

  8. Design and Analysis of a Hyperspectral Microwave Receiver Subsystem

    NASA Technical Reports Server (NTRS)

    Blackwell, W.; Galbraith, C.; Hancock, T.; Leslie, R.; Osaretin, I.; Shields, M.; Racette, P.; Hillard, L.

    2012-01-01

    Hyperspectral microwave (HM) sounding has been proposed to achieve unprecedented performance. HM operation is achieved using multiple banks of RF spectrometers with large aggregate bandwidth. A principal challenge is Size/Weight/Power scaling. Objectives of this work: 1) Demonstrate ultra-compact (100 cm3) 52-channel IF processor (enabler); 2) Demonstrate a hyperspectral microwave receiver subsystem; and 3) Deliver a flight-ready system to validate HM sounding.

  9. Analysis of Cultural Heritage by Accelerator Techniques and Analytical Imaging

    NASA Astrophysics Data System (ADS)

    Ide-Ektessabi, Ari; Toque, Jay Arre; Murayama, Yusuke

    2011-12-01

    In this paper we present the result of experimental investigation using two very important accelerator techniques: (1) synchrotron radiation XRF and XAFS; and (2) accelerator mass spectrometry and multispectral analytical imaging for the investigation of cultural heritage. We also want to introduce a complementary approach to the investigation of artworks which is noninvasive and nondestructive that can be applied in situ. Four major projects will be discussed to illustrate the potential applications of these accelerator and analytical imaging techniques: (1) investigation of Mongolian Textile (Genghis Khan and Kublai Khan Period) using XRF, AMS and electron microscopy; (2) XRF studies of pigments collected from Korean Buddhist paintings; (3) creating a database of elemental composition and spectral reflectance of more than 1000 Japanese pigments which have been used for traditional Japanese paintings; and (4) visible light-near infrared spectroscopy and multispectral imaging of degraded malachite and azurite. The XRF measurements of the Japanese and Korean pigments could be used to complement the results of pigment identification by analytical imaging through spectral reflectance reconstruction. On the other hand, analysis of the Mongolian textiles revealed that they were produced between 12th and 13th century. Elemental analysis of the samples showed that they contained traces of gold, copper, iron and titanium. Based on the age and trace elements in the samples, it was concluded that the textiles were produced during the height of power of the Mongol empire, which makes them a valuable cultural heritage. Finally, the analysis of the degraded and discolored malachite and azurite demonstrates how multispectral analytical imaging could be used to complement the results of high energy-based techniques.

  10. Big Data Visual Analytics for Exploratory Earth System Simulation Analysis

    SciTech Connect

    Steed, Chad A.; Ricciuto, Daniel M.; Shipman, Galen M.; Smith, Brian E.; Thornton, Peter E.; Wang, Dali; Shi, Xiaoying; Williams, Dean N.

    2013-12-01

    Rapid increases in high performance computing are feeding the development of larger and more complex data sets in climate research, which sets the stage for so-called big data analysis challenges. However, conventional climate analysis techniques are inadequate in dealing with the complexities of today's data. In this paper, we describe and demonstrate a visual analytics system, called the Exploratory Data analysis ENvironment (EDEN), with specific application to the analysis of complex earth system simulation data sets. EDEN represents the type of interactive visual analysis tools that are necessary to transform data into insight, thereby improving critical comprehension of earth system processes. In addition to providing an overview of EDEN, we describe real-world studies using both point ensembles and global Community Land Model Version 4 (CLM4) simulations.

  11. Performance analysis of GPS receivers in impulsive noise

    NASA Astrophysics Data System (ADS)

    Liu, Liyu; Amin, Moeness

    2005-06-01

    The use of GPS has broadened to include mounting on or inside manned or autonomous vehicles, which makes it subject to interference generated by motor emissions. Many sources of interference are typically modeled as impulsive noise whose characteristics may vary in terms of power, pulse width, and pulse occurrences. In this paper, we examine the effect of impulsive noise on GPS delay lock loops (DLL). We consider the DLL for the GPS Coarse Acquisition code (C/A), which is used in civilian applications, but also needed in military GPS receivers to perform signal acquisition and tracking. We focus on the statistics of the noise components of the early, late, and punctual correlators, which contribute to the discriminator error. The discriminator noise components are produced from the correlation between the impulsive noise and the early, late, and punctual reference C/A codes. Due to long time averaging, these components assume Gaussian distributions. The discriminator error variance is derived, incorporating the front-end precorrelation filter. It is shown that the synchronization error variance is significantly affected by the power of the received impulsive noise, the precorrelation filter, and the sampling rate.
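
    The early/late/punctual correlator structure discussed above can be sketched with a simple non-coherent early-minus-late power discriminator (a hedged illustration, not the paper's model: the PRN sequence, half-chip spacing, noise levels, and impulse statistics below are assumed).

        import numpy as np

        rng = np.random.default_rng(0)

        # A +/-1 PRN sequence stands in for the C/A code; early and late replicas are
        # half-chip offsets; impulsive noise is modeled as sparse high-amplitude samples.
        chips = rng.choice([-1.0, 1.0], size=1023)
        samples_per_chip = 4
        code = np.repeat(chips, samples_per_chip)

        def corr(received, replica):
            return np.dot(received, replica) / replica.size

        shift = samples_per_chip // 2                        # half-chip spacing
        early = np.roll(code, shift)
        late = np.roll(code, -shift)

        # Received signal: prompt-aligned code + Gaussian noise + impulsive bursts
        impulses = np.zeros_like(code)
        idx = rng.choice(code.size, size=20, replace=False)  # sparse impulse locations
        impulses[idx] = rng.normal(0, 20, size=idx.size)     # high-power impulses
        rx = code + rng.normal(0, 1, code.size) + impulses

        E, P, L = corr(rx, early), corr(rx, code), corr(rx, late)
        disc = (E**2 - L**2) / (E**2 + L**2)                 # normalized early-minus-late power
        print(f"E={E:.3f} P={P:.3f} L={L:.3f} discriminator={disc:+.4f}")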

  12. The performance analysis of linux networking - packet receiving

    SciTech Connect

    Wu, Wenji; Crawford, Matt; Bowden, Mark; /Fermilab

    2006-11-01

    The computing models for High-Energy Physics experiments are becoming ever more globally distributed and grid-based, both for technical reasons (e.g., to place computational and data resources near each other and the demand) and for strategic reasons (e.g., to leverage equipment investments). To support such computing models, the network and end systems, computing and storage, face unprecedented challenges. One of the biggest challenges is to transfer scientific data sets--now in the multi-petabyte (10^15 bytes) range and expected to grow to exabytes within a decade--reliably and efficiently among facilities and computation centers scattered around the world. Both the network and end systems should be able to provide the capabilities to support high bandwidth, sustained, end-to-end data transmission. Recent trends in technology are showing that although the raw transmission speeds used in networks are increasing rapidly, the rate of advancement of microprocessor technology has slowed down. Therefore, network protocol-processing overheads have risen sharply in comparison with the time spent in packet transmission, resulting in degraded throughput for networked applications. More and more, it is the network end system, instead of the network, that is responsible for degraded performance of network applications. In this paper, the Linux system's packet receive process is studied from NIC to application. We develop a mathematical model to characterize the Linux packet receiving process. Key factors that affect a Linux system's network performance are analyzed.

  13. Receiver Operating Characteristic Analysis for Detecting Explosives-related Threats

    SciTech Connect

    Oxley, Mark E; Venzin, Alexander M

    2012-11-14

    The Department of Homeland Security (DHS) and the Transportation Security Administration (TSA) are interested in developing a standardized testing procedure for determining the performance of candidate detection systems. This document outlines a potential method for judging detection system performance as well as determining if combining the information from a legacy system with a new system can significantly improve performance. In this document, performance corresponds to the Neyman-Pearson criterion applied to the Receiver Operating Characteristic (ROC) curves of the detection systems in question. A simulation was developed to investigate how the amount of data provided by the vendor in the form of the ROC curve affects the performance of the combined detection system. Furthermore, the simulation also takes into account the potential effects of correlation and how this information can also impact the performance of the combined system.
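
    The Neyman-Pearson view of ROC-based performance can be illustrated with a short simulation (hedged: the score distributions and the 5% false-alarm constraint below are assumptions, not values from the document). A threshold sweep traces the ROC curve, and the operating point is the one that maximizes detection probability subject to the false-alarm constraint.

        import numpy as np

        rng = np.random.default_rng(1)

        benign = rng.normal(0.0, 1.0, 5000)     # detector scores under H0 (no threat), assumed
        threat = rng.normal(1.5, 1.0, 5000)     # detector scores under H1 (threat present), assumed

        thresholds = np.linspace(-4, 6, 500)
        pfa = np.array([(benign > t).mean() for t in thresholds])   # false-alarm rate
        pd = np.array([(threat > t).mean() for t in thresholds])    # detection rate

        alpha = 0.05                            # allowed false-alarm rate (assumed)
        feasible = pfa <= alpha
        best = np.argmax(pd * feasible)         # highest Pd among feasible thresholds
        print(f"threshold={thresholds[best]:.2f}  Pd={pd[best]:.3f}  Pfa={pfa[best]:.3f}")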

  14. Central Andean crustal structure from receiver function analysis

    NASA Astrophysics Data System (ADS)

    Ryan, Jamie; Beck, Susan; Zandt, George; Wagner, Lara; Minaya, Estela; Tavera, Hernando

    2016-07-01

    The Central Andean Plateau (15°-27°S) is a high plateau in excess of 3 km elevation, associated with thickened crust along the western edge of the South America plate, in the convergent margin between the subducting Nazca plate and the Brazilian craton. We have calculated receiver functions using seismic data from a recent portable deployment of broadband seismometers in the Bolivian orocline (12°-21°S) region and combined them with waveforms from 38 other stations in the region to investigate crustal thickness and crust and mantle structures. Results from the receiver functions provide a more detailed map of crustal thickness than previously existed, and highlight mid-crustal features that match well with prior studies. The active volcanic arc and Altiplano have thick crust with Moho depths increasing from the central Altiplano (65 km) to the northern Altiplano (75 km). The Eastern Cordillera shows large along strike variations in crustal thickness. Along a densely sampled SW-NE profile through the Bolivian orocline there is a small region of thin crust beneath the high peaks of the Cordillera Real where the average elevations are near 4 km, and the Moho depth varies from 55 to 60 km, implying the crust is undercompensated by ~ 5 km. In comparison, a broader region of high elevations in the Eastern Cordillera to the southeast near ~ 20°S has a deeper Moho at ~ 65-70 km and appears close to isostatic equilibrium at the Moho. Assuming the modern-day pattern of high precipitation on the flanks of the Andean plateau has existed since the late Miocene, we suggest that climate induced exhumation can explain some of the variations in present day crustal structure across the Bolivian orocline. We also suggest that south of the orocline at ~ 20°S, the thicker and isostatically compensated crust is due to the absence of erosional exhumation and the occurrence of lithospheric delamination.

  15. Learning Geospatial Analysis Skills with Consumer-Grade GPS Receivers and Low Cost Spatial Analysis Software

    ERIC Educational Resources Information Center

    Linehan, Peter E.

    2006-01-01

    Spatial analysis technologies are increasingly important tools for all aspects of forest resource management. Field work previously accomplished with map, compass, and engineers' scale is now being augmented, or superseded, by the use of GPS and GIS. Professional-grade GPS receivers and commercial GIS software are preferred for their accuracy and…

  16. Composable Analytic Systems for next-generation intelligence analysis

    NASA Astrophysics Data System (ADS)

    DiBona, Phil; Llinas, James; Barry, Kevin

    2015-05-01

    Lockheed Martin Advanced Technology Laboratories (LM ATL) is collaborating with Professor James Llinas, Ph.D., of the Center for Multisource Information Fusion at the University at Buffalo (State of NY), researching concepts for a mixed-initiative associate system for intelligence analysts to facilitate reduced analysis and decision times while proactively discovering and presenting relevant information based on the analyst's needs, current tasks and cognitive state. Today's exploitation and analysis systems have largely been designed for a specific sensor, data type, and operational context, leading to difficulty in directly supporting the analyst's evolving tasking and work product development preferences across complex Operational Environments. Our interactions with analysts illuminate the need to impact the information fusion, exploitation, and analysis capabilities in a variety of ways, including understanding data options, algorithm composition, hypothesis validation, and work product development. Composable Analytic Systems, an analyst-driven system that increases flexibility and the capability to effectively utilize Multi-INT fusion and analytics tailored to the analyst's mission needs, holds promise to address current and future intelligence analysis needs as US forces engage threats in contested and denied environments.

  17. Hybridizing experimental, numerical, and analytical stress analysis techniques

    NASA Astrophysics Data System (ADS)

    Rowlands, Robert E.

    2001-06-01

    Good measurements enjoy the advantage of conveying what actually occurs. However, recognizing that vast amounts of displacement, strain and/or stress-related information can now be recorded at high resolution, effective and reliable means of processing the data become important. It can therefore be advantageous to combine measured results with analytical and computational methods. This presentation will describe such synergism and its applications to engineering problems. These include static and transient analyses, notched and perforated composites, and fracture of composites and fiber-filled cement. Experimental methods of moiré, thermoelasticity, and strain gages are emphasized. Numerical techniques utilized include pseudo finite-element and boundary-element concepts.

  18. Facilitating the Analysis of Immunological Data with Visual Analytic Techniques

    PubMed Central

    Shih, David C.; Ho, Kevin C.; Melnick, Kyle M.; Rensink, Ronald A.; Kollmann, Tobias R.; Fortuno III, Edgardo S.

    2011-01-01

    Visual analytics (VA) has emerged as a new way to analyze large datasets through interactive visual displays. We demonstrated the utility and flexibility of a VA approach in the analysis of biological datasets. Examples of these datasets in immunology include flow cytometry, Luminex data, and genotyping (e.g., single nucleotide polymorphism) data. Contrary to the traditional information visualization approach, VA restores analysis power to the hands of the analyst by allowing the analyst to engage in a real-time data exploration process. We selected the VA software called Tableau after evaluating several VA tools. Two types of analysis tasks, analysis within and between datasets, were demonstrated in the video presentation using an approach called paired analysis. Paired analysis, as defined in VA, is an analysis approach in which a VA tool expert works side-by-side with a domain expert during the analysis. The domain expert is the one who understands the significance of the data and asks the questions that the collected data might address. The tool expert then creates visualizations to help find patterns in the data that might answer these questions. The short lag time between hypothesis generation and the rapid visual display of the data is the main advantage of a VA approach. PMID:21248691

  19. Facilitating the analysis of immunological data with visual analytic techniques.

    PubMed

    Shih, David C; Ho, Kevin C; Melnick, Kyle M; Rensink, Ronald A; Kollmann, Tobias R; Fortuno, Edgardo S

    2011-01-02

    Visual analytics (VA) has emerged as a new way to analyze large datasets through interactive visual displays. We demonstrated the utility and flexibility of a VA approach in the analysis of biological datasets. Examples of these datasets in immunology include flow cytometry, Luminex data, and genotyping (e.g., single nucleotide polymorphism) data. Contrary to the traditional information visualization approach, VA restores analysis power to the hands of the analyst by allowing the analyst to engage in a real-time data exploration process. We selected the VA software called Tableau after evaluating several VA tools. Two types of analysis tasks, analysis within and between datasets, were demonstrated in the video presentation using an approach called paired analysis. Paired analysis, as defined in VA, is an analysis approach in which a VA tool expert works side-by-side with a domain expert during the analysis. The domain expert is the one who understands the significance of the data and asks the questions that the collected data might address. The tool expert then creates visualizations to help find patterns in the data that might answer these questions. The short lag time between hypothesis generation and the rapid visual display of the data is the main advantage of a VA approach.

  20. Crustal structure beneath China from receiver function analysis

    NASA Astrophysics Data System (ADS)

    Chen, Youlin; Niu, Fenglin; Liu, Ruifeng; Huang, Zhibin; Tkalčić, Hrvoje; Sun, Li; Chan, Winston

    2010-03-01

    We collected and processed a large amount of high-quality broadband teleseismic waveform data recorded by the 48 Chinese National Digital Seismic Network stations to estimate large-scale lateral variations of crustal thickness and Vp/Vs ratio (hence Poisson's ratio) beneath China. A statistical method was used to select mutually coherent receiver functions at each station, which yielded over 200 traces for most of the stations. With the conventional H-κ (the crustal thickness and Vp/Vs ratio) approach, there is a large trade-off between H and κ. Consequently, multiple maxima are frequently observed in the H-κ domain. We introduced a weight function that measures the coherence between the P-to-S conversion and the reverberation phases at each H-κ grid point to reduce the trade-off. A 4th-root stacking method was further applied to reduce uncorrelated noise relative to the linear stack. These modifications turned out to be very effective in reducing the H-κ trade-off and yielded reliable estimates of crustal thickness and Vp/Vs ratio. The crust beneath eastern China is as thin as 31-33 km and the underlying Moho is relatively flat and sharp. In the western part of China, the crust is considerably thicker and shows large variations. The Moho is observed at about 51 km depth along the Tian Shan fold system and about 84 km deep beneath the central part of the Tibetan Plateau. The transition occurs at the so-called N-S belt between about 100° and 110°E, which is characterized by unusually high seismicity and large gravity anomalies. The average Vp/Vs ratio over the mainland China crust is about 1.730 (σ = 0.249), significantly lower than the global continental-crust average of 1.78 (σ = 0.27). This lower Vp/Vs ratio may suggest a general absence of a mafic lowermost crustal layer beneath China.
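
    The H-κ grid search referred to above can be sketched in a few lines (a simplified, hypothetical implementation: it uses fixed Zhu-and-Kanamori-style weights on a single synthetic trace and omits the coherence weighting and 4th-root stacking that the authors describe).

        import numpy as np

        def hk_stack(rf, time, p, vp=6.2, H_grid=None, k_grid=None, w=(0.6, 0.3, 0.1)):
            # Grid search over crustal thickness H (km) and Vp/Vs ratio kappa: stack the
            # receiver-function amplitude at the predicted Ps, PpPs, and PpSs+PsPs times.
            H_grid = np.arange(30.0, 90.0, 0.5) if H_grid is None else H_grid
            k_grid = np.arange(1.60, 1.90, 0.005) if k_grid is None else k_grid
            s = np.zeros((H_grid.size, k_grid.size))
            qp = np.sqrt(1.0 / vp**2 - p**2)
            for i, H in enumerate(H_grid):
                for j, k in enumerate(k_grid):
                    qs = np.sqrt((k / vp)**2 - p**2)            # 1/Vs = kappa/Vp
                    t_ps, t_ppps, t_ppss = H * (qs - qp), H * (qs + qp), 2.0 * H * qs
                    a = np.interp([t_ps, t_ppps, t_ppss], time, rf)
                    s[i, j] = w[0] * a[0] + w[1] * a[1] - w[2] * a[2]
            i, j = np.unravel_index(np.argmax(s), s.shape)
            return H_grid[i], k_grid[j], s

        # Synthetic single-trace test: pulses placed at the arrival times for H = 65 km,
        # kappa = 1.73, ray parameter 0.06 s/km (all assumed).
        time = np.arange(0, 40, 0.05)
        p, vp, H_true, k_true = 0.06, 6.2, 65.0, 1.73
        qp, qs = np.sqrt(1 / vp**2 - p**2), np.sqrt((k_true / vp)**2 - p**2)
        rf = np.zeros_like(time)
        for t_arr, amp in [(H_true * (qs - qp), 1.0), (H_true * (qs + qp), 0.5), (2 * H_true * qs, -0.4)]:
            rf += amp * np.exp(-((time - t_arr) / 0.3)**2)
        H_est, k_est, _ = hk_stack(rf, time, p, vp=vp)
        print(f"estimated H = {H_est:.1f} km, Vp/Vs = {k_est:.3f}")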

  1. Design, demonstration and analysis of a modified wavelength-correlating receiver for incoherent OCDMA system.

    PubMed

    Zhou, Heng; Qiu, Kun; Wang, Leyang

    2011-03-28

    A novel wavelength-correlating receiver for incoherent Optical Code Division Multiple Access (OCDMA) systems is proposed and demonstrated in this paper. Enabled by the wavelength-conversion-based scheme, the proposed receiver can support various code types, including one-dimensional optical codes and time-spreading/wavelength-hopping two-dimensional codes. Also, a synchronous detection scheme with time-to-wavelength based code acquisition is proposed, by which the code acquisition time can be substantially reduced. Moreover, a novel data-validation methodology based on all-optical pulse-width monitoring is introduced for the wavelength-correlating receiver. An experimental demonstration of the newly proposed receiver is presented, and data reception with a low bit error rate is achieved without optical hard limiting or electronic power thresholding. For the first time, a detailed theoretical performance analysis specialized for the wavelength-correlating receiver is presented. Numerical results show that the overall performance of the proposed receiver exceeds that of conventional OCDMA receivers.

  2. Software for computer-aided receiver operating characteristic (ROC) analysis

    NASA Astrophysics Data System (ADS)

    Engel, John R.; Craine, Eric R.

    1994-04-01

    We are currently developing an easy-to-use, microcomputer-based software application to help researchers perform ROC studies. The software will have facilities for aiding the researcher in all phases of an ROC study, including experiment design, setting up and conducting test sessions, analyzing results and generating reports. The initial version of the software, named 'ROC Assistant', operates on Macintosh computers and enables the user to enter a case list, run test sessions and produce an ROC curve. We are in the process of developing enhanced versions which will incorporate functions for statistical analysis, experimental design and online help. In this paper we discuss the ROC methodology upon which the software is based as well as our software development effort to date.

  3. Gravity field error analysis: Applications of GPS receivers and gradiometers on low orbiting platforms

    NASA Technical Reports Server (NTRS)

    Schrama, E.

    1990-01-01

    The concept of a Global Positioning System (GPS) receiver as a tracking facility and a gradiometer as a separate instrument on a low orbiting platform offers a unique tool to map the Earth's gravitational field with unprecedented accuracies. The former technique allows determination of the spacecraft's ephemeris at any epoch to within 3 to 10 cm, while the latter permits the measurement of the tensor of second order derivatives of the gravity field to within 0.01 to 0.0001 Eotvos units, depending on the type of gradiometer. First, a variety of error sources in gradiometry is described, with emphasis placed on the rotational problem, which is treated with both a static and a dynamic approach. Next, an analytical technique is described and applied for an error analysis of gravity field parameters from gradiometer and GPS observation types. Results are discussed for various configurations proposed on Topex/Poseidon, Gravity Probe-B, and Aristoteles, indicating that GPS-only solutions may be computed up to degree and order 35, 55, and 85, respectively, whereas a combined GPS/gradiometer experiment on Aristoteles may result in an acceptable solution up to degree and order 240.

  4. Gravity field error analysis - Applications of Global Positioning System receivers and gradiometers on low orbiting platforms

    NASA Technical Reports Server (NTRS)

    Schrama, Ernst J. O.

    1991-01-01

    The concept of a Global Positioning System (GPS) receiver as a tracking facility and a gradiometer as a separate instrument on a low-orbiting platform offers a unique tool to map the earth's gravitational field with unprecedented accuracies. The former technique allows determination of the spacecraft's ephemeris at any epoch to within 3-10 cm, while the latter permits the measurement of the tensor of second order derivatives of the gravity field to within 0.01 to 0.0001 Eotvos units, depending on the type of gradiometer. First, a variety of error sources in gradiometry is described, with emphasis placed on the rotational problem, which is treated with both a static and a dynamic approach. Next, an analytical technique is described and applied for an error analysis of gravity field parameters from gradiometer and GPS observation types. Results are discussed for various configurations proposed on Topex/Poseidon, Gravity Probe-B, and Aristoteles, indicating that GPS-only solutions may be computed up to degree and order 35, 55, and 85, respectively, whereas a combined GPS/gradiometer experiment on Aristoteles may result in an acceptable solution up to degree and order 240.

  5. Gravity field error analysis - Applications of Global Positioning System receivers and gradiometers on low orbiting platforms

    NASA Astrophysics Data System (ADS)

    Schrama, Ernst J. O.

    1991-11-01

    The concept of a Global Positioning System (GPS) receiver as a tracking facility and a gradiometer as a separate instrument on a low-orbiting platform offers a unique tool to map the earth's gravitational field with unprecedented accuracies. The former technique allows determination of the spacecraft's ephemeris at any epoch to within 3-10 cm, while the latter permits the measurement of the tensor of second order derivatives of the gravity field to within 0.01 to 0.0001 Eotvos units, depending on the type of gradiometer. First, a variety of error sources in gradiometry is described, with emphasis placed on the rotational problem, which is treated with both a static and a dynamic approach. Next, an analytical technique is described and applied for an error analysis of gravity field parameters from gradiometer and GPS observation types. Results are discussed for various configurations proposed on Topex/Poseidon, Gravity Probe-B, and Aristoteles, indicating that GPS-only solutions may be computed up to degree and order 35, 55, and 85, respectively, whereas a combined GPS/gradiometer experiment on Aristoteles may result in an acceptable solution up to degree and order 240.

  6. Analysis and design methodology for the development of optimized, direct-detection CO₂ DIAL receivers

    SciTech Connect

    Cooke, B.J.; Laubscher, B.E.; Cafferty, M.

    1996-12-31

    The analysis methodology and corresponding analytical tools for the design of optimized, low-noise, hard target return CO₂ Differential Absorption Lidar (DIAL) receiver systems implementing both single element detectors and multi-pixel imaging arrays for passive/active, remote-sensing applications are presented. System parameters and components composing the receiver include: aperture, focal length, field of view, cold shield requirements, image plane dimensions, pixel dimensions, pixel pitch and fill factor, detection quantum efficiency, optical filter requirements, amplifier and temporal sampling parameters. The performance analysis is accomplished by calculating the system's CO₂ laser range response, total noise, optical geometric form factor and optical resolution. The noise components include speckle, photon noise due to signal, scene and atmospheric background, cold shield, and electronic noise. System resolution is simulated through cascaded optical transfer functions and includes effects due to atmosphere, optics, image sampling, and system motion. Experimental results of a developmental single-element detector receiver designed to detect 100 ns wide laser pulses (10 - 100 kHz pulse repetition rates) backscattered from hard-targets at nominal ranges of 10 km are presented. The receiver sensitivity is near-background noise limited, given an 8.5-11.5 µm radiant optical bandwidth, with the total noise floor spectrally white for maximum pulse averaging efficiency.
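
    The noise-budget part of such an analysis reduces, to first order, to adding independent noise terms in quadrature; the sketch below uses made-up numbers purely to illustrate the bookkeeping, not the receiver parameters reported here.

        import numpy as np

        # Hedged illustration: independent noise terms (in noise-equivalent electrons
        # per measurement, all values assumed) add in quadrature, and the single-pulse
        # SNR follows directly.
        noise = {
            "speckle":      450.0,
            "signal_shot":  300.0,
            "background":   600.0,
            "cold_shield":  120.0,
            "electronics":  200.0,
        }
        total_noise = np.sqrt(sum(v**2 for v in noise.values()))
        signal = 5.0e3          # signal electrons from a hard-target return (assumed)
        print(f"total noise = {total_noise:.0f} e-, SNR = {signal / total_noise:.1f}")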

  7. Physical and Chemical Analytical Analysis: A key component of Bioforensics

    SciTech Connect

    Velsko, S P

    2005-02-15

    The anthrax letters event of 2001 has raised our awareness of the potential importance of non-biological measurements on samples of biological agents used in a terrorism incident. Such measurements include a variety of mass spectral, spectroscopic, and other instrumental techniques that are part of the current armamentarium of the modern materials analysis or analytical chemistry laboratory. They can provide morphological, trace element, isotopic, and other molecular "fingerprints" of the agent that may be key pieces of evidence, supplementing that obtained from genetic analysis or other biological properties. The generation and interpretation of such data represents a new domain of forensic science, closely aligned with other areas of "microbial forensics". This paper describes some major elements of the R&D agenda that will define this sub-field in the immediate future and provide the foundations for a coherent national capability. Data from chemical and physical analysis of BW materials can be useful to an investigation of a bio-terror event in two ways. First, it can be used to compare evidence samples collected at different locations where such incidents have occurred (e.g. between the powders in the New York and Washington letters in the Amerithrax investigation) or between the attack samples and those seized during the investigation of sites where it is suspected the material was manufactured (if such samples exist). Matching of sample properties can help establish the relatedness of disparate incidents, and mismatches might exclude certain scenarios, or signify a more complex etiology of the events under investigation. Chemical and morphological analysis for sample matching has a long history in forensics, and is likely to be acceptable in principle in court, assuming that match criteria are well defined and derived from known limits of precision of the measurement techniques in question. Thus, apart from certain operational issues (such as how to

  8. Automating the analytical laboratory via the Chemical Analysis Automation paradigm

    SciTech Connect

    Hollen, R.; Rzeszutko, C.

    1997-10-01

    To address the need for standardization within the analytical chemistry laboratories of the nation, the Chemical Analysis Automation (CAA) program within the US Department of Energy, Office of Science and Technology's Robotic Technology Development Program is developing laboratory sample analysis systems that will automate the environmental chemical laboratories. The current laboratory automation paradigm consists of islands-of-automation that do not integrate into a system architecture. Thus, today the chemist must perform most aspects of environmental analysis manually using instrumentation that generally cannot communicate with other devices in the laboratory. CAA is working towards a standardized and modular approach to laboratory automation based upon the Standard Analysis Method (SAM) architecture. Each SAM system automates a complete chemical method. The building block of a SAM is known as the Standard Laboratory Module (SLM). The SLM, either hardware or software, automates a subprotocol of an analysis method and can operate as a standalone or as a unit within a SAM. The CAA concept allows the chemist to easily assemble an automated analysis system, from sample extraction through data interpretation, using standardized SLMs without the worry of hardware or software incompatibility or the necessity of generating complicated control programs. A Task Sequence Controller (TSC) software program schedules and monitors the individual tasks to be performed by each SLM configured within a SAM. The chemist interfaces with the operation of the TSC through the Human Computer Interface (HCI), a logical, icon-driven graphical user interface. The CAA paradigm has successfully been applied in automating EPA SW-846 Methods 3541/3620/8081 for the analysis of PCBs in a soil matrix utilizing commercially available equipment in tandem with SLMs constructed by CAA.

  9. A Double Scattering Analytical Model For Elastic Recoil Detection Analysis

    SciTech Connect

    Barradas, N. P.; Lorenz, K.; Alves, E.; Darakchieva, V.

    2011-06-01

    We present an analytical model for calculation of double scattering in elastic recoil detection measurements. Only events involving the beam particle and the recoil are considered, i.e. 1) an ion scatters off a target element and then produces a recoil, and 2) an ion produces a recoil which then scatters off a target element. Events involving intermediate recoils are not considered, i.e. when the primary ion produces a recoil which then produces a second recoil. If the recoil element is also present in the stopping foil, recoil events in the stopping foil are also calculated. We included the model in the standard code for IBA data analysis NDF, and applied it to the measurement of hydrogen in Si.

  10. A Visual Analytics Approach for Correlation, Classification, and Regression Analysis

    SciTech Connect

    Steed, Chad A; SwanII, J. Edward; Fitzpatrick, Patrick J.; Jankun-Kelly, T.J.

    2013-01-01

    New approaches that combine the strengths of humans and machines are necessary to equip analysts with the proper tools for exploring today's increasingly complex, multivariate data sets. In this paper, a visual data mining framework, called the Multidimensional Data eXplorer (MDX), is described that addresses the challenges of today's data by combining automated statistical analytics with a highly interactive parallel coordinates based canvas. In addition to several intuitive interaction capabilities, this framework offers a rich set of graphical statistical indicators, interactive regression analysis, visual correlation mining, automated axis arrangements and filtering, and data classification techniques. This chapter provides a detailed description of the system as well as a discussion of key design aspects and critical feedback from domain experts.

  11. Visual analytics for model selection in time series analysis.

    PubMed

    Bögl, Markus; Aigner, Wolfgang; Filzmoser, Peter; Lammarsch, Tim; Miksch, Silvia; Rind, Alexander

    2013-12-01

    Model selection in time series analysis is a challenging task for domain experts in many application areas such as epidemiology, economy, or environmental sciences. The methodology used for this task demands a close combination of human judgement and automated computation. However, statistical software tools do not adequately support this combination through interactive visual interfaces. We propose a Visual Analytics process to guide domain experts in this task. For this purpose, we developed the TiMoVA prototype that implements this process based on user stories and iterative expert feedback on user experience. The prototype was evaluated by usage scenarios with an example dataset from epidemiology and interviews with two external domain experts in statistics. The insights from the experts' feedback and the usage scenarios show that TiMoVA is able to support domain experts in model selection tasks through interactive visual interfaces with short feedback cycles.
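
    One automated ingredient that a visual model-selection workflow like TiMoVA might wrap is an information-criterion comparison across candidate orders; the sketch below (hypothetical, not TiMoVA code) fits AR(p) models by least squares and picks the order with the lowest AIC.

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic AR(2) series as test data (coefficients assumed)
        x = rng.normal(size=500)
        for t in range(2, 500):
            x[t] += 0.6 * x[t - 1] - 0.3 * x[t - 2]

        def ar_aic(series, p):
            # Least-squares AR(p) fit; AIC from the residual variance
            X = np.column_stack([series[p - k - 1:-k - 1] for k in range(p)])
            y = series[p:]
            coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ coeffs
            sigma2 = np.mean(resid**2)
            n = y.size
            return n * np.log(sigma2) + 2 * (p + 1)

        best = min(range(1, 8), key=lambda p: ar_aic(x, p))
        print("AIC-selected AR order:", best)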

  12. Analytical analysis of single- and three-phase induction motors

    SciTech Connect

    Davey, K.R.

    1998-09-01

    The analysis of single and multiphase induction motors continues to represent a challenge to researchers in computational electromagnetics due to the presence of rΩ × B electric fields. This contribution cannot be inserted into the Green's function for boundary element codes; finite difference and finite element approaches are forced to hard code these effects, compensating at high speeds with upwinding techniques. The direct computation of these effects using transfer relations in a linear environment offers an analytical backdrop both for benchmark testing numerical codes and for design assessment criteria. In addition to torque-speed predictions, the terminal relations and total power dissipation in the rotor are computed for an exposed-winding three-phase and single-phase machine.
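
    The torque-speed prediction mentioned above follows, in the simplest textbook case, from the per-phase equivalent circuit; the sketch below uses assumed machine parameters and the standard Thevenin reduction, not the transfer-relation formulation of the paper.

        import numpy as np

        # Assumed per-phase parameters for a three-phase machine (illustrative only)
        V1, f, poles = 230.0, 60.0, 4
        R1, X1, R2, X2, Xm = 0.5, 1.2, 0.4, 1.2, 30.0   # ohms

        w_sync = 4.0 * np.pi * f / poles                # synchronous speed, rad/s
        Z1, Zm = R1 + 1j * X1, 1j * Xm
        Vth = V1 * abs(Zm / (Z1 + Zm))                  # Thevenin voltage seen by the rotor branch
        Zth = (Z1 * Zm) / (Z1 + Zm)
        Rth, Xth = Zth.real, Zth.imag

        slip = np.linspace(1e-3, 1.0, 200)
        T = (3.0 * Vth**2 * (R2 / slip)
             / (w_sync * ((Rth + R2 / slip)**2 + (Xth + X2)**2)))

        print(f"peak torque ~ {T.max():.1f} N*m at slip {slip[np.argmax(T)]:.3f}")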

  13. A Visual Analytics Approach for Correlation, Classification, and Regression Analysis

    SciTech Connect

    Steed, Chad A; SwanII, J. Edward; Fitzpatrick, Patrick J.; Jankun-Kelly, T.J.

    2012-02-01

    New approaches that combine the strengths of humans and machines are necessary to equip analysts with the proper tools for exploring today's increasingly complex, multivariate data sets. In this paper, a novel visual data mining framework, called the Multidimensional Data eXplorer (MDX), is described that addresses the challenges of today's data by combining automated statistical analytics with a highly interactive parallel coordinates based canvas. In addition to several intuitive interaction capabilities, this framework offers a rich set of graphical statistical indicators, interactive regression analysis, visual correlation mining, automated axis arrangements and filtering, and data classification techniques. The current work provides a detailed description of the system as well as a discussion of key design aspects and critical feedback from domain experts.

  14. Focused analyte spray emission apparatus and process for mass spectrometric analysis

    DOEpatents

    Roach, Patrick J.; Laskin, Julia; Laskin, Alexander

    2012-01-17

    An apparatus and process are disclosed that deliver an analyte deposited on a substrate to a mass spectrometer that provides for trace analysis of complex organic analytes. Analytes are probed using a small droplet of solvent that is formed at the junction between two capillaries. A supply capillary maintains the droplet of solvent on the substrate; a collection capillary collects analyte desorbed from the surface and emits analyte ions as a focused spray to the inlet of a mass spectrometer for analysis. The invention enables efficient separation of desorption and ionization events, providing enhanced control over transport and ionization of the analyte.

  15. An Analysis of Feedback from a Behavior Analytic Perspective.

    PubMed

    Mangiapanello, Kathleen A; Hemmes, Nancy S

    2015-05-01

    The present paper presents a systematic analysis from a behavior analytic perspective of procedures termed feedback. Although feedback procedures are widely reported in the discipline of psychology, including in the field of behavior analysis, feedback is neither consistently defined nor analyzed. Feedback is frequently treated as a principle of behavior; however, its effects are rarely analyzed in terms of well-established principles of learning and behavior analysis. On the assumption that effectiveness of feedback procedures would be enhanced when their use is informed by these principles, we sought to provide a conceptually systematic account of feedback effects in terms of operant conditioning principles. In the first comprehensive review of this type, we compare feedback procedures with those of well-defined operant procedures. We also compare the functional relations that have been observed between parameters of consequence delivery and behavior under both feedback and operant procedures. The similarities observed in the preceding analyses suggest that processes revealed in operant conditioning procedures are sufficient to explain the phenomena observed in studies on feedback.

  16. An analytical paradigm for the analysis of national inmigration patterns.

    PubMed

    Weisberg, Y; Eaglstein, A S

    1988-09-01

    This study proposes and demonstrates an analytic paradigm based upon a substantive categorization of a set of inmigration correlates. It exemplifies the notion of categorizing, analyzing according to the categorization, and subsequently discussing the phenomenon in more depth. The paradigm has 2 steps: 1) the variables are categorized according to the cells resulting from the intersection of a preferably small number of nominal dimensions and 2) the data are analyzed, directly anchored in the prior categorization. The data used is Israel's 1983 census macro-data gathered from the Central Bureau of Statistics for the Israeli towns with populations of at least 5000. The authors defined 6 variables as push variables and 4 as pull variables. Results of the regression employing push variables show that 4 variables accounting for 72% of inmigration were found to significantly predict inmigration: 1) unemployment, 2) percentage of Asians-Africans, 3) town size, and 4) religiosity. Within the pull classification, the regression analysis reveals that 2 of the 4 variables explain 31% of the inmigration variance: 1) educational level (26%) and 2) income (5%). The 1st regression analysis on the 2nd dimension shows that the percentage of Asian-African origin and town population size account for 32% of the immigration variance. In the 2nd regression analysis, unemployment explains 48% of the inmigration variance and educational level explains 8%. In the 3rd regression, only home crowding explains a significant amount of the immigration variance (19%). Results of a multiple regression analysis show that unemployment level, percentage of Asian-Africans, population size, and level of religiosity account for 72% of the inmigration variance. Thus, the characteristics of a town inmigrating (push variables) are demographic, economic, and social. However, the attractive features of a town are only economic. Among all economic factors, unemployment is primary. In addition, not only are

  17. Dairy Analytics and Nutrient Analysis (DANA) Prototype System User Manual

    SciTech Connect

    Sam Alessi; Dennis Keiser

    2012-10-01

    This document is a user manual for the Dairy Analytics and Nutrient Analysis (DANA) model. DANA provides an analysis of dairy anaerobic digestion technology and allows users to calculate biogas production, co-product valuation, capital costs, expenses, revenue, and financial metrics for user-customizable scenarios, dairy types, and digester types. The model provides results for three anaerobic digester types (Covered Lagoon, Modified Plug Flow, and Complete Mix) and three main energy production technologies: electricity generation, renewable natural gas generation, and compressed natural gas generation. Additional options include different dairy types, bedding types, and backend treatment types, as well as numerous production and economic parameters. DANA's goal is to extend the National Market Value of Anaerobic Digester Products analysis (informa economics, 2012; Innovation Center, 2011) to include a greater and more flexible set of regional digester scenarios and to provide a modular framework for creation of a tool to support farmer and investor needs. Users can set up scenarios from combinations of existing parameters or add new parameters, run the model, and view a variety of reports, charts, and tables that are automatically produced and delivered over the web interface. DANA is based on the INL's analysis architecture entitled Generalized Environment for Modeling Systems (GEMS), which offers extensive collaboration, analysis, and integration opportunities and greatly speeds the construction of highly scalable, web-delivered, user-oriented decision tools. DANA's approach uses server-based data processing and web-based user interfaces, rather than a client-based spreadsheet approach. This offers a number of benefits over the client-based approach. Server processing and storage can scale up to handle a very large number of scenarios, so that analyses at the county, or even field, level across the whole U.S. can be performed. Server-based databases allow dairy and digester

  18. Transitional Probability Analysis of Two Child Behavior Analytic Therapy Cases

    ERIC Educational Resources Information Center

    Xavier, Rodrigo Nunes; Kanter, Jonathan William; Meyer, Sonia Beatriz

    2012-01-01

    This paper aimed to highlight the process of therapist direct contingent responding to shape client behavior in two Child Behavior Analytic Therapy (CBAT) cases using transitional probabilities. The Functional Analytic Psychotherapy Rating Scale (FAPRS) was used to code client behaviors and the Multidimensional System for Coding Behaviors in…

  19. Analysis of analytic nonresonant background removal algorithm for MCARS spectra

    NASA Astrophysics Data System (ADS)

    Roberson, Stephen D.; Bowman Pilkington, Sherrie; Pellegrino, Paul M.

    2016-05-01

    Multiplex Coherent Anti-Stokes Raman Spectroscopy (MCARS) has been shown to generate a complete Raman spectrum of a material on a millisecond time scale, which allows for rapid identification of a wide variety of molecular targets. Along with the desired resonant spectrum due to the vibrational Raman response of the analyte, MCARS is known to simultaneously generate a nonresonant spectrum that can obscure the desired Raman spectrum and hinder detection. Extracting the desired resonant Raman signal analytically from the overall MCARS signal has proven difficult without prior knowledge of the analyte. We have developed an algorithm that utilizes the maximum entropy method in conjunction with advanced Fourier filtering to analytically remove the nonresonant background from our MCARS spectra without prior knowledge of the vibrational spectrum of the analyte. In this report, we describe the theoretical background for this algorithm as well as our experimental work testing this algorithm under various nonresonant spectra conditions for a number of analytes. We systematically vary the amount of nonresonant background generated in the sample by changing the temporal overlap of the two beams necessary to generate the MCARS signal. Additionally, we place the analyte into increasing concentrations of water to generate increasing amounts of nonresonant background spectra to test the algorithm's effectiveness. Finally, we compare the analyte vibrational spectral output from the algorithm to the Raman spectrum of the same sample measured with the laboratory's spontaneous Raman system, in an effort to ascertain the accuracy of the output spectra.
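
    As a hedged stand-in for the full maximum-entropy plus Fourier-filter algorithm, the idea of separating a broad nonresonant pedestal from sharp resonant features can be illustrated by keeping only the lowest Fourier components as a baseline estimate (the function name, cutoff, and synthetic spectrum below are hypothetical).

        import numpy as np

        def remove_slow_background(spectrum, cutoff_fraction=0.02):
            # Estimate the slowly varying (nonresonant-like) baseline from the lowest
            # Fourier components of the spectrum, then subtract it.
            n = spectrum.size
            F = np.fft.rfft(spectrum)
            keep = max(1, int(cutoff_fraction * F.size))
            F_low = np.zeros_like(F)
            F_low[:keep] = F[:keep]
            baseline = np.fft.irfft(F_low, n)
            return spectrum - baseline, baseline

        # Synthetic test: sharp Raman-like peaks riding on a broad nonresonant pedestal
        x = np.linspace(0, 1, 2048)
        broad = 1.0 + 0.5 * np.exp(-((x - 0.5) / 0.4)**2)
        peaks = sum(0.3 * np.exp(-((x - c) / 0.004)**2) for c in (0.25, 0.55, 0.8))
        resonant, baseline = remove_slow_background(broad + peaks)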

  20. Analysis of heat pipe receivers for point-focus solar concentrators

    SciTech Connect

    Adkins, D.R.

    1988-01-01

    Heat-pipe solar receivers are used to transfer concentrated solar energy from the focal point of a parabolic dish concentrator to the working fluid of a heat engine (or in some instances a chemical reactor). Concentrated solar energy that is collected on the front (absorber) surface of a heat pipe receiver is removed by the evaporation of an intermediate working fluid on the back side of the absorber surface. The vaporized fluid flows to the heater tubes of an engine where it condenses and transfers energy to the heat engine's working fluid. The condensed vapor then returns to the absorber surface where it is redistributed across the surface by a wick. Heat pipes are an attractive option for coupling solar concentrators to heat engines because of their near isothermal operating characteristics and their ability to transfer large amounts of heat from relatively small surface areas. This paper investigates design factors that must be considered in constructing a solar heat pipe receiver. Particular emphasis is placed on designing a wick structure to transport the working fluid across the solar absorber surface, but general issues concerning fluid flow in heat pipe receivers are also presented. Analytical tools for the design of heat-pipe solar receivers are also provided. 18 refs., 11 figs., 3 tabs.
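
    The central wick-design constraint described above is the capillary limit: the wick can supply liquid across the absorber only while the maximum capillary pressure exceeds the sum of the liquid, vapor, and gravitational pressure drops. A back-of-the-envelope check might look like the following (all property values are assumed, not taken from the paper).

        # Hedged capillary-limit check with illustrative working-fluid and wick values
        sigma   = 0.12      # surface tension, N/m (assumed)
        r_eff   = 50e-6     # effective pore radius of the wick, m (assumed)
        dp_liq  = 1.2e3     # viscous pressure drop in the liquid, Pa (assumed)
        dp_vap  = 0.2e3     # vapor-flow pressure drop, Pa (assumed)
        dp_grav = 0.8e3     # hydrostatic head against gravity, Pa (assumed)

        dp_cap_max = 2.0 * sigma / r_eff
        margin = dp_cap_max - (dp_liq + dp_vap + dp_grav)
        print(f"max capillary pressure {dp_cap_max:.0f} Pa, margin {margin:+.0f} Pa")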

  1. Thermal stress analysis of eccentric tube receiver using concentrated solar radiation

    SciTech Connect

    Wang, Fuqiang; Shuai, Yong; Yuan, Yuan; Yang, Guo; Tan, Heping

    2010-10-15

    In a parabolic trough concentrator with a tube receiver system, the heat transfer fluid flowing through the tube receiver can induce high thermal stress and deflection. In this study, the eccentric tube receiver is introduced with the aim of reducing the thermal stresses of the tube receiver. Ray-thermal-structural sequentially coupled numerical analyses are adopted to obtain the concentrated heat flux distributions, temperature distributions, and thermal stress fields of both the eccentric and concentric tube receivers. During the sequentially coupled numerical analyses, the concentrated heat flux distribution on the bottom half periphery of the tube receiver is obtained by the Monte-Carlo ray tracing method, and the fitting function method is introduced to transform the calculated heat flux distribution from the Monte-Carlo ray tracing model to the CFD analysis model. The temperature distributions and thermal stress fields are obtained by the CFD and FEA analyses, respectively. The effects of eccentricity and oriented angle variation on the thermal stresses of the eccentric tube receiver are also investigated. It is recommended to adopt the eccentric tube receiver with the optimum eccentricity and a 90° oriented angle as the tube receiver for the parabolic trough concentrator system to reduce the thermal stresses. (author)

  2. Linking TEM analytical spectroscopies for an assumptionless compositional analysis.

    PubMed

    Kothleitner, Gerald; Grogger, Werner; Dienstleder, Martina; Hofer, Ferdinand

    2014-06-01

    The classical way of putting quantitative figures on maps to reveal elemental compositions in transmission electron microscopy is through analytical methods such as X-ray and energy-loss spectroscopy. Typically, the technique in use depends on whether lighter or heavier elements are present and, more practically, on which calibrations are available or which sample-related properties are known. A framework linking electron energy-loss spectroscopy (EELS) and energy-dispersive X-ray (EDX) signals such that absolute volumetric concentrations can be derived without a priori assumptions about the unknown sample is largely missing. In order to combine both techniques and harness their respective potentials for light and heavy element analysis, we have set up a powerful hardware configuration and implemented an experimental approach that reduces the need for estimates of the many parameters needed for quantitative work, such as densities, absolute thicknesses, theoretical ionization cross-sections, etc. Calibrations on specimens with known geometry allow the measurement of inelastic mean free paths. As a consequence, mass-thicknesses obtained from the EDX ζ-factor approach can be broken up, and quantities like concentrations and partial energy-differential ionization cross-sections become accessible. ζ-factors can then be converted into EELS cross-sections that are hard to determine otherwise, or vice versa, connecting EDXS and EELS in a quantitative manner quite effectively. PMID:24598412
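
    The ζ-factor bookkeeping that links the two signals can be sketched for a two-element case as follows (a hedged illustration with made-up intensities, dose, and ζ-factors; it is not the authors' calibration, and units are omitted for brevity).

        import numpy as np

        # With calibrated zeta-factors, the characteristic X-ray intensities give both
        # the composition and the mass-thickness of the probed volume.
        zeta = np.array([1.2e3, 2.0e3])   # zeta-factors for elements A, B (assumed)
        I    = np.array([4.5e4, 1.5e4])   # background-subtracted X-ray counts (assumed)
        dose = 5.0e8                      # total electron dose (assumed)

        rho_t = np.sum(zeta * I) / dose   # mass-thickness
        C = zeta * I / (rho_t * dose)     # mass fractions, sum to 1 by construction
        print(f"mass fractions: {C}, mass-thickness: {rho_t:.3e}")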

  3. CFD analysis of supercritical CO2 used as HTF in a solar tower receiver

    NASA Astrophysics Data System (ADS)

    Roldán, M. I.; Fernández-Reche, J.

    2016-05-01

    The relative cost of a solar receiver can be minimized by the selection of an appropriate heat transfer fluid capable of achieving high receiver efficiencies. In a conventional central receiver system, the concentrated solar energy is transferred from the receiver tube walls to the heat transfer fluid (HTF), which passes through a heat exchanger to generate steam for a Rankine cycle. Thus, a higher working fluid temperature is associated with greater efficiency in the receiver and power cycle. Emerging receiver designs that can enable higher efficiencies using advanced power cycles, such as supercritical CO2 (s-CO2) closed-loop Brayton cycles, include direct heating of s-CO2 in tubular receiver designs capable of withstanding high internal fluid pressures (around 20 MPa) and temperatures (900 K). Due to the high pressures required and the presence of moving components installed in pipelines (ball-joints and/or flexible connections), the use of s-CO2 presents many technical challenges related to the compatibility of seal materials and fluid leakage at the moving connections. These problems are solved in solar tower systems because the receiver is fixed. In this regard, a preliminary analysis of a tubular receiver with s-CO2 as HTF has been developed using the design of a molten-salt receiver which was previously tested at Plataforma Solar de Almería (PSA). Therefore, a simplified CFD model has been carried out in this study in order to analyze the feasibility of s-CO2 as HTF in solar towers. Simulation results showed that the heat gained by s-CO2 was around 75% greater than that captured by molten salts (fluid inlet temperature of 715 K), but at a pressure range of 7.5-9.7 MPa. Thus, the use of s-CO2 as HTF in solar tower receivers appears to be a promising alternative, taking into account both the operating conditions required and their maintenance cost.

  4. Performance analysis and receiver architectures of DCF77 radio-controlled clocks.

    PubMed

    Engeler, Daniel

    2012-05-01

    DCF77 is a longwave radio transmitter located in Germany. Atomic clocks generate a 77.5-kHz carrier which is amplitude- and phase-modulated to broadcast the official time. The signal is used by industrial and consumer radio-controlled clocks. DCF77 faces competition from the Global Positioning System (GPS) which provides higher accuracy time. Still, DCF77 and other longwave time services worldwide remain popular because they allow indoor reception at lower cost, lower power, and sufficient accuracy. Indoor longwave reception is challenged by signal attenuation and electromagnetic interference from an increasing number of devices, particularly switched-mode power supplies. This paper introduces new receiver architectures and compares them with existing detectors and time decoders. Simulations and analytical calculations characterize the performance in terms of bit error rate and decoding probability, depending on input noise and narrowband interference. The most promising detector with maximum-likelihood time decoder displays the time in less than 60 s after powerup and at a noise level of E_b/N_0 = 2.7 dB, an improvement of 20 dB over previous receivers. A field-programmable gate array-based demonstration receiver built for the purposes of this paper confirms the capabilities of these new algorithms. The findings of this paper enable future high-performance DCF77 receivers and further study of indoor longwave reception.

  5. Performance analysis and receiver architectures of DCF77 radio-controlled clocks.

    PubMed

    Engeler, Daniel

    2012-05-01

    DCF77 is a longwave radio transmitter located in Germany. Atomic clocks generate a 77.5-kHz carrier which is amplitude- and phase-modulated to broadcast the official time. The signal is used by industrial and consumer radio-controlled clocks. DCF77 faces competition from the Global Positioning System (GPS) which provides higher accuracy time. Still, DCF77 and other longwave time services worldwide remain popular because they allow indoor reception at lower cost, lower power, and sufficient accuracy. Indoor longwave reception is challenged by signal attenuation and electromagnetic interference from an increasing number of devices, particularly switched-mode power supplies. This paper introduces new receiver architectures and compares them with existing detectors and time decoders. Simulations and analytical calculations characterize the performance in terms of bit error rate and decoding probability, depending on input noise and narrowband interference. The most promising detector with maximum-likelihood time decoder displays the time in less than 60 s after powerup and at a noise level of E_b/N_0 = 2.7 dB, an improvement of 20 dB over previous receivers. A field-programmable gate array-based demonstration receiver built for the purposes of this paper confirms the capabilities of these new algorithms. The findings of this paper enable future high-performance DCF77 receivers and further study of indoor longwave reception. PMID:22622972

  6. Unifying Approach to Analytical Chemistry and Chemical Analysis: Problem-Oriented Role of Chemical Analysis.

    ERIC Educational Resources Information Center

    Pardue, Harry L.; Woo, Jannie

    1984-01-01

    Proposes an approach to teaching analytical chemistry and chemical analysis in which a problem to be resolved is the focus of a course. Indicates that this problem-oriented approach is intended to complement detailed discussions of fundamental and applied aspects of chemical determinations and not replace such discussions. (JN)

  7. Performance enhancement of rake-receiver using continuous and discrete wavelet transforms analysis through NLOS propagation

    NASA Astrophysics Data System (ADS)

    Fayadh, Rashid A.; Malek, F.; Fadhil, Hilal A.; Dawood, Sameer A.; Abdullah, Farah Salwani

    2015-05-01

    In this paper, three levels of analysis and synthesis filter banks were used to create coefficients for a continuous wavelet transform (CWT) and a discrete wavelet transform (DWT). The main property of these wavelet transform schemes is their ability to reconstruct the transmitted signal across a log-normal fading channel in additive white Gaussian noise (AWGN). A wireless rake-receiver structure was chosen as a major application to reduce the inter-symbol interference (ISI) and to minimize the noise. In this work, a new rake receiver scheme is proposed to receive indoor multipath components (MPCs) for ultra-wideband (UWB) wireless communication systems. The rake receivers consist of a continuous wavelet rake (CW-rake) and a discrete wavelet rake (DW-rake), and they use a huge bandwidth (7.5 GHz), as reported by the Federal Communications Commission (FCC). The indoor channel model chosen for analysis in this research was the non-line-of-sight (NLOS) channel model (CM4, from 4 to 10 meters), used to show the behavior of the bit error rate (BER) with respect to the signal-to-noise ratio (SNR). Two types of rake receiver were used in the simulation, i.e., partial-rake and selective-rake receivers, with the maximal ratio combining (MRC) technique to capture the energy of the signal from the output of the rake's fingers.
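
    The maximal ratio combining step used by both rake variants can be sketched independently of the wavelet front end (a hedged BPSK illustration with an assumed finger count, channel gains, and noise level).

        import numpy as np

        rng = np.random.default_rng(2)

        # Each finger sees the same symbol through a different complex path gain plus
        # noise; weighting by the conjugate channel gains maximizes the combined SNR.
        n_fingers = 4
        symbols = rng.choice([-1.0, 1.0], size=1000)                 # BPSK symbols
        h = (rng.normal(size=n_fingers) + 1j * rng.normal(size=n_fingers)) / np.sqrt(2)
        noise_std = 0.7

        rx = h[:, None] * symbols + noise_std * (
            rng.normal(size=(n_fingers, symbols.size))
            + 1j * rng.normal(size=(n_fingers, symbols.size))) / np.sqrt(2)

        combined = np.sum(np.conj(h)[:, None] * rx, axis=0)          # MRC combiner
        decisions = np.sign(combined.real)
        ber = np.mean(decisions != symbols)
        print(f"BER with {n_fingers}-finger MRC: {ber:.4f}")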

  8. Analytical studies: a framework for quality improvement design and analysis.

    PubMed

    Provost, Lloyd P

    2011-04-01

    Conducting studies for learning is fundamental to improvement. Deming emphasised that the reason for conducting a study is to provide a basis for action on the system of interest. He classified studies into two types depending on the intended target for action. An enumerative study is one in which action will be taken on the universe that was studied. An analytical study is one in which action will be taken on a cause system to improve the future performance of the system of interest. The aim of an enumerative study is estimation, while an analytical study focuses on prediction. Because of the temporal nature of improvement, the theory and methods for analytical studies are a critical component of the science of improvement.

  9. Performance Analysis of Receive Diversity in Wireless Sensor Networks over GBSBE Models

    PubMed Central

    Goel, Shivali; Abawajy, Jemal H.; Kim, Tai-hoon

    2010-01-01

    Wireless sensor networks have attracted a lot of attention recently. In this paper, we develop a channel model based on the elliptical model for multipath components involving randomly placed scatterers in the scattering region with sensors deployed on a field. We verify that in a sensor network, the use of receive diversity techniques improves the performance of the system. Extensive performance analysis of the system is carried out for both single and multiple antennas with the applied receive diversity techniques. Performance analyses based on variations in receiver height, maximum multipath delay, and transmit power have been performed considering different numbers of antenna elements present in the receiver array. Our results show that increasing the number of antenna elements in a wireless sensor network does indeed improve the BER that can be obtained. PMID:22163510

  10. Analysis of telescope array receivers for deep-space inter-planetary optical communication link between Earth and Mars

    NASA Astrophysics Data System (ADS)

    Hashmi, Ali Javed; Eftekhar, Ali Asghar; Adibi, Ali; Amoozegar, Farid

    2010-05-01

    Optical communication technology shows promising prospects to fulfill the large bandwidth communication requirements of future deep-space exploration missions that are launched by NASA and various other international space agencies. At Earth, a telescope with a large aperture diameter is required to capture very weak optical signals that are transmitted from distant planets and to support large bandwidth communication link. A single large telescope has the limitations of cost, single point failure in case of malfunction, difficulty in manufacturing high quality optics, maintenance, and trouble in providing communication operations when transmitting spacecraft is close to the Sun. An array of relatively smaller-sized telescopes electrically connected to form an aggregate aperture area equivalent to a single large telescope is a viable alternative to a monolithic gigantic aperture. In this paper, we present the design concept and analysis of telescope array receivers for an optical communication link between Earth and Mars. Pulse-position modulation (PPM) is used at the transmitter end and photon-counting detectors along with the direct-detection technique are employed at each telescope element in the array. We also present the optimization of various system parameters, such as detector size (i.e., receiver field of view), PPM slot width, and the PPM order M, to mitigate the atmospheric turbulence and background noise effects, and to maximize the communication system performance. The performance of different array architectures is evaluated through analytical techniques and Monte-Carlo simulations for a broad range of operational scenarios, such as, Earth-Mars conjunction, Earth-Mars opposition, and different background and turbulence conditions. It is shown that the performance of the telescope array-based receiver is equivalent to a single large telescope; and as compared to current RF technology, telescope array-based optical receivers can provide several orders
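
    The photon-counting PPM decision rule at the heart of such a link can be checked with a small Monte-Carlo sketch (hedged: the PPM order and the signal and background photon counts below are assumptions, not the paper's link-budget values).

        import numpy as np

        rng = np.random.default_rng(3)

        # M-ary PPM with a photon-counting receiver: decide in favor of the time slot
        # with the most counts; background photons cause slot errors.
        M = 16                    # PPM order (assumed)
        Ks = 5.0                  # mean signal photons in the pulsed slot (assumed)
        Kb = 0.2                  # mean background photons per slot (assumed)
        n_symbols = 200_000

        counts = rng.poisson(Kb, size=(n_symbols, M))
        counts[:, 0] += rng.poisson(Ks, size=n_symbols)   # pulse always sent in slot 0
        decided = np.argmax(counts, axis=1)               # note: ties resolve toward slot 0
        ser = np.mean(decided != 0)
        print(f"{M}-PPM symbol error rate: {ser:.4e}")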

  11. An experimental analysis of a doped lithium fluoride direct absorption solar receiver

    NASA Technical Reports Server (NTRS)

    Kesseli, James; Pollak, Tom; Lacy, Dovie

    1988-01-01

    An experimental analysis of two key elements of a direct absorption solar receiver for use with Brayton solar dynamic systems was conducted. Experimental data are presented on LiF crystals doped with dysprosium, samarium, and cobalt fluorides. In addition, a simulation of the cavity/window environment was performed and a posttest inspection was conducted to evaluate chemical reactivity, transmissivity, and condensation rate.

  12. Applying Bayesian Modeling and Receiver Operating Characteristic Methodologies for Test Utility Analysis

    ERIC Educational Resources Information Center

    Wang, Qiu; Diemer, Matthew A.; Maier, Kimberly S.

    2013-01-01

    This study integrated Bayesian hierarchical modeling and receiver operating characteristic analysis (BROCA) to evaluate how interest strength (IS) and interest differentiation (ID) predicted low–socioeconomic status (SES) youth's interest-major congruence (IMC). Using large-scale Kuder Career Search online-assessment data, this study fit three…

  13. Application of Multidimensional Spectrum Analysis for Analytical Chemistry

    SciTech Connect

    Hatsukawa, Yuichi; Hayakawa, Takehito; Toh, Yosuke; Shinohara, Nobuo; Oshima, Masumi

    1999-12-31

    The feasibility of applying multidimensional γ-ray spectroscopy to analytical chemistry was examined. Two reference igneous rock samples (JP-1, JB-1a) issued by the Geological Survey of Japan (GSJ) were irradiated with thermal neutrons at a research reactor, and γ rays from the radioisotopes produced by neutron capture reactions were measured using a γ-ray detector array. Twenty-seven elements were observed simultaneously with no chemical separation.

  14. Power and Efficiency Analysis of a Solar Central Receiver Combined Cycle Plant with a Small Particle Heat Exchanger Receiver

    NASA Astrophysics Data System (ADS)

    Virgen, Matthew Miguel

    Two significant goals in solar plant operation are lower cost and higher efficiency. To achieve those goals, a combined cycle gas turbine (CCGT) system, which uses the hot gas turbine exhaust to produce superheated steam for a bottoming Rankine cycle by way of a heat recovery steam generator (HRSG), is investigated in this work. Building on a previous gas turbine model created at the Combustion and Solar Energy Laboratory at SDSU, the HRSG and steam turbine models are added here; they must handle significant changes in the mass flow and temperature of the air exiting the gas turbine due to varying solar input. A wide range of cases was run to explore options for maximizing both power and efficiency from the proposed CSP CCGT plant. Variable guide vanes (VGVs) were found in the earlier model to be an effective tool for providing operational flexibility to address the variable nature of solar input. Combined cycle efficiencies in the range of 50% were found to result from this plant configuration. However, a combustor inlet temperature (CIT) limit leads to two distinct modes of operation, with a sharp drop in both plant efficiency and power occurring when the air flow through the receiver exceeds the CIT limit. This drawback can be partially addressed through strategic use of the VGVs. Since the system response is fully established for the relevant range of solar input and variable guide vane angles, the System Advisor Model (SAM) from NREL can be used to find the actual expected solar input over the course of the day and to plan accordingly. While the SAM software is not yet equipped to model a Brayton cycle cavity receiver, appropriate approximations were made in order to produce a suitable heliostat field for this system. Since the SPHER uses carbon nano-particles as the solar absorbers, questions of particle longevity and how the particles might affect flame behavior in the combustor were addressed using the chemical kinetics software Chemkin
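
    The roughly 50% combined-cycle efficiency quoted above follows from a simple series relation between the topping and bottoming cycles. The sketch below encodes that relation; the gas-turbine, steam-cycle and HRSG figures are illustrative placeholders, not values from this work.

```python
def combined_cycle_efficiency(eta_gt, eta_st, eta_hrsg):
    """Series estimate: the bottoming steam cycle recovers a fraction eta_hrsg
    of the gas-turbine exhaust heat (1 - eta_gt) and converts it with
    efficiency eta_st.  All inputs are illustrative placeholders."""
    return eta_gt + (1.0 - eta_gt) * eta_hrsg * eta_st

# Example: 35% gas turbine, 80% effective HRSG, 30% steam cycle  ->  ~0.51
print(combined_cycle_efficiency(eta_gt=0.35, eta_st=0.30, eta_hrsg=0.80))
```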

  15. Identification and analysis of factors affecting thermal shock resistance of ceramic materials in solar receivers

    NASA Astrophysics Data System (ADS)

    Hasselman, D. P. H.; Singh, J. P.; Satyamurthy, K.

    1980-07-01

    An analysis was conducted of the possible modes of thermal stress failure of brittle ceramics for potential use in point-focussing solar receivers. The pertinent materials properties which control thermal stress resistance were identified for conditions of steady-state and transient heat flow, convective and radiative heat transfer, thermal buckling and thermal fatigue as well as catastrophic crack propagation. Selection rules for materials with optimum thermal stress resistance for a particular thermal environment were identified. Recommendations for materials for particular components were made. The general requirements for a thermal shock testing program quantitatively meaningful for point-focussing solar receivers were outlined. Recommendations for follow-on theoretical analyses were made.

  16. An Analysis of the Effects of RFID Tags on Narrowband Navigation and Communication Receivers

    NASA Technical Reports Server (NTRS)

    LaBerge, E. F. Charles

    2007-01-01

    The simulated effects of Radio Frequency Identification (RFID) tag emissions on ILS Localizer and ILS Glide Slope functions match the analytical models developed in support of DO-294B, provided that the measured peak power levels are adjusted for 1) peak-to-average power ratio, 2) effective duty cycle, and 3) spectrum analyzer measurement bandwidth. When these adjustments are made, simulated and theoretical results are in extraordinarily good agreement. The relationships hold over a large range of potential interference-to-desired signal power ratios, provided that the adjusted interference power is significantly higher than the sum of the receiver noise floor and the noise-like contributions of all other interference sources. When the duty-factor-adjusted power spectral densities are applied in the evaluation process described in Section 6 of DO-294B, the performance parameters of most narrowband guidance and communications radios are unaffected by moderate levels of RFID interference. Specific conclusions and recommendations are provided.

  17. Application of differential analysis of VLF signals for seismic-ionospheric precursor detection from multiple receivers

    NASA Astrophysics Data System (ADS)

    Skeberis, Christos; Zaharis, Zaharias; Xenos, Thomas; Contadakis, Michael; Stratakis, Dimitrios; Tommaso, Maggipinto; Biagi, Pier Francesco

    2015-04-01

    This study investigates the application of differential analysis to VLF signals emitted from a single transmitter and received by multiple stations, in order to filter and detect disturbances that can be attributed to seismic-ionospheric precursor phenomena. Cross-correlation analysis applied to multiple VLF signals provides a way of discerning the nature of a given disturbance and accounts for widespread geomagnetic interference as opposed to local precursor phenomena. For the purposes of this paper, data acquired in Thessaloniki (40.59N, 22.78E) and in Heraklion (35.31N, 25.10E) from the VLF station in Tavolara, Italy (ICV station, Lat. 40.923, Lon. 9.731) over a period of four months (September 2014 - December 2014) are used. The receivers were developed by Elettronika Srl and are part of the International Network for Frontier Research on Earthquake Precursors (INFREP). A normalization process and an improved variant of the Hilbert-Huang transform are initially applied to the received VLF signals. The signals derived from the first two Intrinsic Mode Functions (IMF1 and IMF2) undergo a cross-correlation analysis so that time series from the two receivers can be compared. The efficacy of the processing method and the results produced by the proposed process are then discussed, along with an evaluation of its capability to discriminate and detect disturbances in the received signals. Based upon the results, the merits of such a processing method are discussed, with the aim of further improving the current method by using differential analysis to better classify different disturbances and, more importantly, to discriminate between points of interest in the provided spectra. This could provide an improved method of detecting disturbances attributed to seismic-ionospheric precursor phenomena and also contribute to a real-time method for correlating seismic activity with the observed disturbances.
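
    The cross-correlation step described above can be illustrated with a minimal sketch: a disturbance common to two receivers produces a strong correlation peak, while uncorrelated local noise does not. The Hilbert-Huang decomposition and the actual INFREP data handling are omitted; the signals below are synthetic.

```python
import numpy as np

def normalized_xcorr(x, y, max_lag):
    """Normalized cross-correlation of two equal-length series for lags
    -max_lag..+max_lag.  A disturbance common to both receivers produces a
    strong peak; a purely local effect does not."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = []
    for k in lags:
        if k >= 0:
            a, b = x[:n - k], y[k:]
        else:
            a, b = x[-k:], y[:n + k]
        corr.append(np.dot(a, b) / len(a))
    return lags, np.array(corr)

# Toy example: both receivers see the same slow disturbance plus independent noise
rng = np.random.default_rng(2)
t = np.arange(2000)
disturbance = np.sin(2.0 * np.pi * t / 400.0)
rx_a = disturbance + 0.5 * rng.standard_normal(t.size)
rx_b = disturbance + 0.5 * rng.standard_normal(t.size)
lags, corr = normalized_xcorr(rx_a, rx_b, max_lag=200)
print("peak correlation %.2f at lag %d" % (corr.max(), lags[np.argmax(corr)]))
```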

  18. Solar power generation by use of Stirling engine and heat loss analysis of its cavity receiver

    NASA Astrophysics Data System (ADS)

    Hussain, Tassawar

    position (AP=H/D) were used to characterize the different configurations of the cavity receiver, and it was found that the cavity receiver with AR=0.5 and AP=0.53 has the maximum capability to utilize the solar heat and attain the maximum temperature of the heat pipe receiver. Experimental heat loss analysis at low temperature for different configurations of the cavity receiver was performed, and air film temperature profiles along the wall height (H) of the cavity receiver were determined. Since sodium heat pipes operate at high temperature (973 K), there is a strong potential for radiation and convection heat losses in direct solar heating of the heater head. Therefore, mathematical modeling of the heat loss analysis and its numerical solution at high temperature were also included in the research objectives. A 2-D axisymmetric model with the weakly compressible Navier-Stokes equations and the general heat conduction and convection equations was solved using the finite element method. The computational fluid dynamics package COMSOL 3.5a was used as the numerical tool. The temperature and flow field patterns inside the cavity receiver were also visualized by means of surface contours. Heat loss analyses were performed for different configurations of the cavity receiver, and the numerical solutions showed that the aperture ratio (AR) plays a significant role in convection and radiation heat losses, whereas the effects of the aperture position (AP) are negligible.

  19. DEVELOPMENT OF ANALYTICAL STANDARDS FOR THE ANALYSIS OF ORGANIC COMPOUNDS PRESENT IN PM2.5

    EPA Science Inventory

    The paper discusses the development of analytical standards for the analysis of organic compounds present in particulate matter with aerodynamic diameters < 2.5 micrometers (PM2.5). The methods and analytical standards described in the paper are designed to be implemented in a b...

  20. Marketing Mix Formulation for Higher Education: An Integrated Analysis Employing Analytic Hierarchy Process, Cluster Analysis and Correspondence Analysis

    ERIC Educational Resources Information Center

    Ho, Hsuan-Fu; Hung, Chia-Chi

    2008-01-01

    Purpose: The purpose of this paper is to examine how a graduate institute at National Chiayi University (NCYU), by using a model that integrates analytic hierarchy process, cluster analysis and correspondence analysis, can develop effective marketing strategies. Design/methodology/approach: This is primarily a quantitative study aimed at…

  1. Cross-correlation analysis and interpretation of spaced-receiver measurements

    SciTech Connect

    Costa, E.; Fougere, P.F.; Basu, S.

    1988-04-01

    Two algorithms that provide a statistical treatment of the estimation of the parameters in the cross-correlation analysis of spaced-receiver data are reviewed. Their results are compared using signals transmitted by a quasi-stationary polar beacon and received at Goose Bay, Labrador, and signals transmitted by the orbiting Hilat satellite and received at Tromso, Norway. Good general agreement is displayed in this comparison. The former experiment indicates the possibility of extreme daily variations in the anisotropy of the ground diffraction pattern and in the true drift velocity of the in-situ irregularities. The latter experiment displays geometrical enhancements in the intensity scintillation index S4, in the rms phase fluctuation σφ, and in the axial ratio of the ellipse which characterizes the anisotropy of the ground-diffraction pattern, around the region of local L-shell alignment of the ray paths. Increases of these parameters are also observed northward of Tromso. These observations are thus consistent with a morphological model for the anisotropy of high-latitude nighttime F-region irregularities proposed in the literature. A possible dependence of the results of the spaced-receiver measurements on the receiver baselines is discussed. It is argued that this mechanism could be responsible for the relatively small values of the anisotropy of the diffraction pattern obtained from the Hilat measurements at Tromso.

  2. A review on the analysis of the crustal and upper mantle structure using receiver functions

    NASA Astrophysics Data System (ADS)

    Hu, Jiafu; Yang, Haiyan; Li, Guangquan; Peng, Hengchu

    2015-11-01

    Discontinuities in the Earth, such as the Moho, the lithosphere-asthenosphere boundary (LAB), and the 410 and 660 km discontinuities, are characterized by an abrupt jump in P- and S-wave velocities. The depths of these discontinuities are an important parameter for investigating tectonic evolution in the lithosphere. The receiver function technique using teleseismic events is well suited to studying the crust and upper mantle structure beneath stations, and has thus become one of the standard tools for such studies. The principle of receiver functions is to isolate the converted Ps or Sp phases generated at the discontinuities beneath a station, as if the direct P or S arrival were a delta function. In this paper, the methods of receiver function analysis are collected from the literature. We introduce the coordinate transform technique for the separation of Ps or Sp waves, the deconvolution algorithms used to extract P and S receiver functions, the waveform fitting method used to invert for S-wave velocity structure, the stacking techniques used to improve signals, and the migration from the time domain to the depth domain. With some illustrative examples, the care that should be taken in studying the crustal and upper mantle structure using receiver functions is summarized.
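
    As a minimal illustration of the deconvolution step mentioned above, the sketch below implements a basic frequency-domain water-level deconvolution with Gaussian low-pass filtering, which is one standard way to estimate a P receiver function from radial and vertical components. Parameter values and the toy spike test are assumptions for demonstration only.

```python
import numpy as np

def water_level_rf(radial, vertical, dt, water_level=0.01, gauss_a=2.5):
    """Frequency-domain water-level deconvolution:
        RF(f) = R(f) V*(f) / max(|V(f)|^2, c * max|V|^2),
    low-passed by the usual Gaussian filter exp(-w^2 / (4 a^2)).
    Simplified sketch of a standard P receiver-function estimator."""
    n = len(radial)
    freqs = np.fft.rfftfreq(n, dt)
    R = np.fft.rfft(radial)
    V = np.fft.rfft(vertical)
    power = (V * np.conj(V)).real
    denom = np.maximum(power, water_level * power.max())
    gauss = np.exp(-(2.0 * np.pi * freqs) ** 2 / (4.0 * gauss_a ** 2))
    return np.fft.irfft(R * np.conj(V) / denom * gauss, n)

# Toy test: vertical = impulse, radial = direct arrival plus a "Ps" arrival 5 s later
dt = 0.1
v = np.zeros(1024); v[100] = 1.0
r = np.zeros(1024); r[100] = 0.3; r[150] = 0.15
rf = water_level_rf(r, v, dt)
sec = 20 + np.argmax(rf[20:len(rf) // 2])          # skip the direct peak at t = 0
print("secondary arrival recovered at t = %.1f s, amplitude ratio %.2f"
      % (sec * dt, rf[sec] / rf[0]))               # expect ~5.0 s and ~0.5
```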

  3. Canonical Correlation Analysis as a General Analytical Model.

    ERIC Educational Resources Information Center

    Fan, Xitao

    This paper focuses on three aspects related to the conceptualization and application of canonical correlation analysis as a dominant statistical model: (1) partial canonical correlation analysis and its application in statistical testing; (2) the relation between canonical correlation analysis and discriminant analysis; and (3) the relation…

  4. Analysis and analytical characterization of bioheat transfer during radiofrequency ablation.

    PubMed

    Wang, Keyong; Tavakkoli, Fatemeh; Wang, Shujuan; Vafai, Kambiz

    2015-04-13

    Understanding thermal transport and temperature distribution within biological organs is important for therapeutic aspects of hyperthermia treatments such as radiofrequency ablation (RFA). Unlike surface heating, the RFA treatment volumetrically heats up the biological media using a heating probe which provides the input energy. In this situation, the shape of the affected region is annular and is described by an axisymmetric geometry. To better understand the temperature responses of living tissues subject to RFA, comprehensive characteristics of bioheat transport through an annular biological medium are presented under the local thermal non-equilibrium (LTNE) condition. Following the operational features of the RFA treatment and based on porous media theory, analytical solutions have been derived for the blood and tissue temperature distributions as well as an overall heat exchange correlation in cylindrical coordinates. Our analytical results have been validated against three limiting cases which exist in the literature. The effects of various physiological parameters, such as metabolic heat generation, volume fraction of the vascular space, ratio of the effective blood to tissue conductivities, different biological media and the rate of heat exchange between the lumen and the tissue, are investigated. The solutions developed in this study are valuable for thermal therapy planning of RFA. A criterion is also established to link the deep heating protocol to surface heating.

  5. Organic analysis and analytical methods development: FY 1995 progress report

    SciTech Connect

    Clauss, S.A.; Hoopes, V.; Rau, J.

    1995-09-01

    This report describes the status of organic analyses and of developing analytical methods to account for the organic components in Hanford waste tanks, with particular emphasis on tanks assigned to the Flammable Gas Watch List. The methods that have been developed are illustrated by their application to samples obtained from Tank 241-SY-103 (Tank 103-SY). The analytical data serve as an example of the status of methods development and application. Samples of the convective and nonconvective layers from Tank 103-SY were analyzed for total organic carbon (TOC). The TOC value obtained for the nonconvective layer using the hot persulfate method was 10,500 μg C/g. The TOC value obtained from samples of Tank 101-SY was 11,000 μg C/g. The average value for the TOC of the convective layer was 6400 μg C/g. Chelators and chelator fragments in Tank 103-SY samples were identified using derivatization gas chromatography/mass spectrometry (GC/MS). Organic components were quantified using GC/flame ionization detection. Major components in both the convective and nonconvective-layer samples include ethylenediaminetetraacetic acid (EDTA), nitrilotriacetic acid (NTA), succinic acid, nitrosoiminodiacetic acid (NIDA), citric acid, and ethylenediaminetriacetic acid (ED3A). Preliminary results also indicate the presence of C16 and C18 carboxylic acids in the nonconvective-layer sample. Oxalic acid was one of the major components in the nonconvective layer as determined by derivatization GC/flame ionization detection.

  6. Preliminary Analysis of the CASES GPS Receiver Performance during Simulated Seismic Displacements

    NASA Astrophysics Data System (ADS)

    De la Rosa-Perkins, A.; Reynolds, A.; Crowley, G.; Azeem, I.

    2014-12-01

    We explore the ability of a new GPS software receiver, called CASES (Connected Autonomous Space Environment Sensor), to measure seismic displacements in real time. Improvements in GPS technology over the last 20 years allow for precise measurement of ground motion during seismic events. For example, GPS data have been used to calculate displacement histories at an earthquake's epicenter and fault slip estimations with great accuracy. This is supported by the ability to measure displacements directly using GPS, bypassing the double integration that accelerometers require, and by higher clipping limits than seismometers. The CASES receiver, developed by ASTRA in collaboration with Cornell University and the University of Texas at Austin, represents a new geodetic-quality software-based GPS receiver that measures ionospheric space weather in addition to the usual navigation solution. To demonstrate, in a controlled environment, the ability of the CASES receiver to measure seismic displacements, we simulated ground motions similar to those generated during earthquakes, using a shake box instrumented with an accelerometer and a GPS antenna. The accelerometer measured the box's actual displacement. The box moved on a manually controlled axis that underwent varied one-dimensional motions (from mm to cm) at different frequencies and amplitudes. The CASES receiver was configured to optimize the accuracy of the position solution. We quantified the CASES GPS receiver performance by comparing the GPS solutions against the accelerometer data using various statistical analysis methods. The results of these tests will be presented. The CASES receiver is designed with multiple methods of accessing the data in real time, ranging from internet connection, Bluetooth and cell-phone modem to Iridium modem. Because the CASES receiver measures ionospheric space weather in addition to the usual navigation solution, CASES provides not only the seismic signal, but also the ionospheric space weather

  7. Risk analysis of 222Rn gas received from East Anatolian Fault Zone in Turkey

    NASA Astrophysics Data System (ADS)

    Yilmaz, Mucahit; Kulahci, Fatih

    2016-06-01

    In this study, risk analysis and probability distribution methodologies are applied to 222Rn gas data recorded at the Sürgü (Malatya) station located on the East Anatolian Fault Zone (EAFZ). The 222Rn data were recorded between 21.02.2007 and 06.06.2010, and a total of 1151 222Rn measurements are used in the study. Changes in the concentration of 222Rn are modeled statistically.

  8. Analytical band Monte Carlo analysis of electron transport in silicene

    NASA Astrophysics Data System (ADS)

    Yeoh, K. H.; Ong, D. S.; Ooi, C. H. Raymond; Yong, T. K.; Lim, S. K.

    2016-06-01

    An analytical band Monte Carlo (AMC) model with linear energy band dispersion has been developed to study electron transport in suspended silicene and in silicene on an aluminium oxide (Al2O3) substrate. We calibrated our model against full band Monte Carlo (FMC) results by matching the velocity-field curve. Using this model, we find that the combined effects of charge impurity scattering and surface optical phonon scattering can degrade the electron mobility down to about 400 cm2 V-1 s-1, beyond which it becomes less sensitive to changes in the substrate charge impurity density and surface optical phonons. We also find that the further reduction of mobility to ˜100 cm2 V-1 s-1 demonstrated experimentally by Tao et al (2015 Nat. Nanotechnol. 10 227) can only be explained by the renormalization of the Fermi velocity due to interaction with the Al2O3 substrate.

  9. Two-port network analysis and modeling of a balanced armature receiver.

    PubMed

    Kim, Noori; Allen, Jont B

    2013-07-01

    Models for acoustic transducers, such as loudspeakers, mastoid bone-drivers, hearing-aid receivers, etc., are critical elements in many acoustic applications. Acoustic transducers employ two-port models to convert between acoustic and electromagnetic signals. This study analyzes a widely used commercial hearing-aid receiver, the ED series manufactured by Knowles Electronics, Inc. Electromagnetic transducer modeling must consider two key elements: a semi-inductor and a gyrator. The semi-inductor accounts for electromagnetic eddy currents, the 'skin effect' of a conductor (Vanderkooy, 1989), while the gyrator (McMillan, 1946; Tellegen, 1948) accounts for the anti-reciprocity characteristic [Lenz's law (Hunt, 1954, p. 113)]. Aside from Hunt (1954), no publications we know of have included the gyrator element in their electromagnetic transducer models. The most prevalent method of transducer modeling invokes the mobility method, with an ideal transformer instead of a gyrator followed by the dual of the mechanical circuit (Beranek, 1954). The mobility approach greatly complicates the analysis. The present study proposes a novel, simplified and rigorous receiver model. Hunt's two-port parameters, the electrical impedance Ze(s), the acoustic impedance Za(s) and the electro-acoustic transduction coefficient Ta(s), are calculated using ABCD and impedance matrix methods (Van Valkenburg, 1964). The results from electrical input impedance measurements Zin(s), which vary with the given acoustical loads, are used in the calculation (Weece and Allen, 2010). The hearing-aid receiver transducer model is designed based on the energy transformation flow [electric → mechanic → acoustic]. The model has been verified with electrical input impedance, diaphragm velocity in vacuo, and output pressure measurements. This receiver model is suitable for designing most electromagnetic transducers and it can ultimately improve the design of hearing-aid devices by providing a simplified yet accurate, physically
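
    A minimal sketch of the Hunt two-port description is given below: assuming the anti-reciprocal (gyrator) sign convention, the electrical input impedance seen with an acoustic load Zload follows from eliminating the acoustic variables. The element values are made up for illustration and are not the Knowles ED-series parameters.

```python
def hunt_input_impedance(Ze, Za, Ta, Zload):
    """Electrical input impedance of an electro-acoustic two-port described by
    Hunt parameters, using the anti-reciprocal (gyrator) sign convention:
        V = Ze*I + Ta*U        (electrical side)
        P = -Ta*I + Za*U       (acoustic side)
    Terminating the acoustic port with P = -Zload*U and eliminating the volume
    velocity U gives  Zin = Ze + Ta**2 / (Za + Zload)."""
    return Ze + Ta ** 2 / (Za + Zload)

# Made-up element values at a single frequency (not Knowles ED-series data)
Ze, Za, Ta = 50.0 + 30.0j, 2.0e7 + 5.0e6j, 4.0e4
for Zload in (0.0, 1.0e7, 1.0e9):       # from a blocked port to a very stiff load
    print("Zload = %.0e  ->  Zin =" % Zload, hunt_input_impedance(Ze, Za, Ta, Zload))
```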

  10. Analytical modeling and vibration analysis of internally cracked rectangular plates

    NASA Astrophysics Data System (ADS)

    Joshi, P. V.; Jain, N. K.; Ramtekkar, G. D.

    2014-10-01

    This study proposes an analytical model for nonlinear vibrations of a cracked rectangular isotropic plate containing either a single internal crack or two perpendicular internal cracks located at the center of the plate. The cracks are in the form of continuous lines, each parallel to one of the edges of the plate. The equation of motion for the isotropic cracked plate, based on classical plate theory, is modified to accommodate the effect of internal cracks using the Line Spring Model. Berger's formulation for in-plane forces makes the model nonlinear. Galerkin's method, used with three different boundary conditions, transforms the equation into time-dependent modal functions. The natural frequencies of the cracked plate are calculated for various crack lengths in the case of a single crack and for various crack length ratios in the case of two cracks. The effect of the location of the part-through crack(s) along the thickness of the plate on the natural frequencies is studied considering appropriate crack compliance coefficients. It is thus deduced that the natural frequencies are maximally affected when the crack(s) are internal cracks symmetric about the mid-plane of the plate and are minimally affected when the crack(s) are surface cracks, for all three boundary conditions considered. It is also shown that a crack parallel to the longer side of the plate affects the vibration characteristics more than a crack parallel to the shorter side. Further, the application of the method of multiple scales gives the nonlinear amplitudes for different aspect ratios of the cracked plate. The analytical results obtained for surface crack(s) are also assessed against FEM results. The FEM formulation is carried out in ANSYS.

  11. SplitRFLab: A MATLAB GUI toolbox for receiver function analysis based on SplitLab

    NASA Astrophysics Data System (ADS)

    Xu, Mijian; Huang, Hui; Huang, Zhouchuan; Wang, Liangshu

    2016-02-01

    We add new modules for receiver function (RF) analysis to the SplitLab toolbox, including a manual RF analysis module, an automatic RF analysis module with related quality control, and an H-k stacking module. The updated toolbox (named SplitRFLab), and especially its automatic RF analysis module, can calculate RFs quickly and efficiently, which is very useful in RF analysis with huge amounts of seismic data. China is now conducting the ChinArray project, which plans to deploy thousands of portable stations across the Chinese mainland. Our SplitRFLab toolbox can obtain reliable RF results quickly at an early stage, providing essentially new constraints on crustal and mantle structure.
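
    The H-k stacking module mentioned above can be illustrated with a short sketch (in Python here, although the toolbox itself is MATLAB): receiver-function amplitudes are stacked at the predicted arrival times of the Ps conversion and its crustal multiples over a grid of crustal thickness H and Vp/Vs ratio k. The phase weights, grids and synthetic test are illustrative assumptions.

```python
import numpy as np

def hk_stack(rf, dt, p, vp=6.3, h_grid=None, k_grid=None, weights=(0.6, 0.3, 0.1)):
    """Grid search over crustal thickness H (km) and Vp/Vs ratio k by stacking
    receiver-function amplitude at the predicted times of Ps, PpPs and the
    (negative-polarity) PpSs+PsPs multiples.  rf is one receiver function
    sampled at dt seconds; p is the ray parameter in s/km."""
    h_grid = np.arange(20.0, 60.0, 0.5) if h_grid is None else h_grid
    k_grid = np.arange(1.60, 2.00, 0.01) if k_grid is None else k_grid
    t = np.arange(len(rf)) * dt
    stack = np.zeros((len(h_grid), len(k_grid)))
    for i, H in enumerate(h_grid):
        for j, k in enumerate(k_grid):
            vs = vp / k
            qs = np.sqrt(vs ** -2 - p ** 2)
            qp = np.sqrt(vp ** -2 - p ** 2)
            times = (H * (qs - qp), H * (qs + qp), 2.0 * H * qs)
            a = np.interp(times, t, rf)
            stack[i, j] = weights[0] * a[0] + weights[1] * a[1] - weights[2] * a[2]
    best = np.unravel_index(np.argmax(stack), stack.shape)
    return h_grid[best[0]], k_grid[best[1]], stack

# Synthetic check: spikes placed at the times predicted for H = 35 km, k = 1.75
dt, p, vp = 0.1, 0.06, 6.3
vs = vp / 1.75
qs, qp = np.sqrt(vs ** -2 - p ** 2), np.sqrt(vp ** -2 - p ** 2)
rf = np.zeros(600)
for t_phase, amp in ((35 * (qs - qp), 0.5), (35 * (qs + qp), 0.3), (70 * qs, -0.2)):
    i = int(round(t_phase / dt))
    rf[i - 1:i + 2] = amp
print(hk_stack(rf, dt, p, vp=vp)[:2])   # should recover roughly (35.0, 1.75)
```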

  12. Receiver design, performance analysis, and evaluation for space-borne laser altimeters and space-to-space laser ranging systems

    NASA Technical Reports Server (NTRS)

    Davidson, Frederic M.; Sun, Xiaoli; Field, Christopher T.

    1994-01-01

    Accomplishments in the following areas of research are presented: receiver performance study of spaceborne laser altimeters and cloud and aerosol lidars; receiver performance analysis for space-to-space laser ranging systems; and receiver performance study for the Mars Environmental Survey (MESUR).

  13. Receiver design, performance analysis, and evaluation for space-borne laser altimeters and space-to-space laser ranging systems

    NASA Technical Reports Server (NTRS)

    Davidson, Frederic M.; Field, Christopher T.; Sun, Xiaoli

    1996-01-01

    We report here the design and the performance measurements of the breadboard receiver of the Geoscience Laser Altimeter System (GLAS). The measured ranging accuracy was better than 2 cm and 10 cm for 5 ns and 30 ns wide received laser pulses under the expected received signal level, which agreed well with the theoretical analysis. The measured receiver sensitivity or the link margin was also consistent with the theory. The effects of the waveform digitizer sample rate and resolution were also measured.

  14. A Purposive Approach to Content Analysis: Designing Analytical Frameworks

    ERIC Educational Resources Information Center

    Gerbic, Philippa; Stacey, Elizabeth

    2005-01-01

    Content analysis of computer conferences provides a rich source of data for researching and understanding online learning. However, the complexities of using content analysis in a relatively new research field have resulted in researchers often avoiding this method and using more familiar methods such as survey and interview instead. This article…

  15. Analytical and Experimental Vibration Analysis of a Faulty Gear System

    NASA Technical Reports Server (NTRS)

    Choy, F. K.; Braun, M. J.; Polyshchuk, V.; Zakrajsek, J. J.; Townsend, D. P.; Handschuh, R. F.

    1994-01-01

    A comprehensive analytical procedure was developed for predicting faults in gear transmission systems under normal operating conditions. A gear tooth fault model is developed to simulate the effects of pitting and wear on the vibration signal under normal operating conditions. The model uses changes in the gear mesh stiffness to simulate the effects of gear tooth faults. The overall dynamics of the gear transmission system is evaluated by coupling the dynamics of each individual gear-rotor system through the gear mesh forces generated between the gear-rotor systems and the bearing forces generated between the rotors and the gearbox structure. The predicted results were compared with experimental results obtained from a spiral bevel gear fatigue test rig at NASA Lewis Research Center. The Wigner-Ville distribution (WVD) was used to give a comprehensive comparison of the predicted and experimental results. The WVD method applied to the experimental results was also compared to other fault detection techniques to verify the WVD's ability to detect pitting damage and to determine its relative performance. Overall, the results show good correlation between the experimental vibration data of the damaged test gear and the vibration predicted from the model with simulated gear tooth pitting damage. The results also verified that the WVD method can successfully detect and locate gear tooth wear and pitting damage.

  16. An analysis of fosaprepitant-induced venous toxicity in patients receiving highly emetogenic chemotherapy

    PubMed Central

    Leal, Alexis D.; Grendahl, Darryl C.; Seisler, Drew K.; Sorgatz, Kristine M.; Anderson, Kari J.; Hilger, Crystal R.; Loprinzi, Charles L.

    2015-01-01

    Purpose Fosaprepitant is an antiemetic used for chemotherapy-induced nausea and vomiting. We recently reported increased infusion site adverse events (ISAE) in a cohort of breast cancer patients receiving chemotherapy with doxorubicin and cyclophosphamide (AC). In the current study, we evaluated the venous toxicity of fosaprepitant use with non-anthracycline, platinum-based antineoplastic regimens. Methods A retrospective review was conducted of the first 81 patients initiated on fosaprepitant among patients receiving highly emetogenic chemotherapy on or after January 1, 2011 at Mayo Clinic Rochester. None of these regimens included an anthracycline. Data collected included baseline demographics, chemotherapy regimen, type of intravenous access, and type and severity of ISAE. Data from these patients were compared to previously collected data from patients who had received AC. Statistical analysis using χ2 tests and univariate logistic regression was used to evaluate the association between treatment regimen, fosaprepitant, and risk of ISAE. Results Among these 81 patients, the incidence of ISAE was 7.4 % in the non-anthracycline platinum group. The most commonly reported ISAE were swelling (3 %), extravasation (3 %), and phlebitis (3 %). When stratified by regimen, fosaprepitant was associated with a statistically significant increased risk of ISAE in the anthracycline group (OR 8.1; 95 % CI 2.0–31.9) compared to the platinum group. Conclusions Fosaprepitant antiemetic therapy causes significant ISAE that are appreciably more frequent than previously reported. Patients receiving platinum-based chemotherapy appear to have less significant ISAE than do patients who receive anthracycline-based regimens. PMID:24964876

  17. Waste Receiving and Processing (WRAP) Facility Final Safety Analysis Report (FSAR)

    SciTech Connect

    TOMASZEWSKI, T.A.

    2000-04-25

    The Waste Receiving and Processing Facility (WRAP), 2336W Building, on the Hanford Site is designed to receive, confirm, repackage, certify, treat, store, and ship contact-handled transuranic and low-level radioactive waste from past and present U.S. Department of Energy activities. The WRAP facility is comprised of three buildings: 2336W, the main processing facility (also referred to generically as WRAP); 2740W, an administrative support building; and 2620W, a maintenance support building. The support buildings are subject to the normal hazards associated with industrial buildings (no radiological materials are handled) and are not part of this analysis except as they are impacted by operations in the processing building, 2336W. WRAP is designed to provide safer, more efficient methods of handling the waste than currently exist on the Hanford Site and contributes to the achievement of as low as reasonably achievable goals for Hanford Site waste management.

  18. A Qualitative Analysis of the Perception of Stigma Among Latinos Receiving Antidepressants

    PubMed Central

    Interian, Alejandro; Martinez, Igda E.; Guarnaccia, Peter J.; Vega, William A.; Escobar, Javier I.

    2008-01-01

    Objective This study sought to describe the role of stigma in antidepressant adherence among Latinos. Methods The study utilized data generated from six focus groups of Latino outpatients receiving antidepressants (N=30). By using a grounded theory approach, qualitative analysis focused specifically on the role of stigma in antidepressant treatment, as well as salient Latino values. Results Perceptions of stigma were related to both the diagnosis of depression and use of antidepressant medication. Qualitative analyses showed that antidepressant use was seen as implying more severe illness, weakness or failure to cope with problems, and being under the effects of a drug. Reports of stigma were also related to social consequences. Also, the perceived negative attributes of antidepressant use were at odds with self-perceived cultural values. Conclusions Stigma was a prominent concern among Latinos receiving antidepressants, and stigma often affected adherence. Furthermore, culture is likely to play an important role in the communication of stigma and its associated complications. PMID:18048562

  19. Multicriteria decision analysis in ranking of analytical procedures for aldrin determination in water.

    PubMed

    Tobiszewski, Marek; Orłowski, Aleksander

    2015-03-27

    The study presents the possibility of applying multi-criteria decision analysis (MCDA) when choosing analytical procedures with low environmental impact. A type of MCDA, the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE), was chosen as a versatile tool that meets all the requirements of analytical chemists, the decision makers. Twenty-five analytical procedures for aldrin determination in water samples (as an example) were selected as input alternatives to the MCDA analysis. Nine different criteria describing the alternatives were chosen from different groups: metrological, economic and, most importantly, environmental impact. The weights for each criterion were obtained from questionnaires sent to experts, giving three different scenarios for the MCDA results. The results of the analysis show that PROMETHEE is a very promising tool for choosing an analytical procedure with respect to its greenness. The rankings for all three scenarios placed solid-phase microextraction and liquid-phase microextraction based procedures high, while liquid-liquid extraction, solid-phase extraction and stir bar sorptive extraction based procedures were placed low in the ranking. The results show that although some of the experts do not intentionally choose green analytical chemistry procedures, their MCDA choice is in accordance with green chemistry principles. The PROMETHEE ranking results were compared with more widely accepted green analytical chemistry tools, NEMI and Eco-Scale. As PROMETHEE involved more factors than NEMI, the assessment results were only weakly correlated. In contrast, the results of the Eco-Scale assessment were well correlated, as both methodologies involved similar assessment criteria.
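
    For readers unfamiliar with PROMETHEE, the sketch below shows the basic PROMETHEE II calculation: pairwise preferences (here with the simple 'usual' preference function), weighted and aggregated into leaving, entering and net flows that define the ranking. The criteria, weights and scores are placeholders, not the data of this study.

```python
import numpy as np

def promethee_ii(scores, weights, maximize):
    """PROMETHEE II net outranking flows with the 'usual' preference function
    (any positive difference counts as full preference).
    scores: (n_alternatives, n_criteria); maximize: bool flag per criterion."""
    X = np.where(maximize, scores, -scores)      # make every criterion 'larger is better'
    n = X.shape[0]
    pi = np.zeros((n, n))                        # aggregated preference of a over b
    for a in range(n):
        for b in range(n):
            if a != b:
                pi[a, b] = np.sum(weights * (X[a] > X[b]))
    phi_plus = pi.sum(axis=1) / (n - 1)          # leaving flow
    phi_minus = pi.sum(axis=0) / (n - 1)         # entering flow
    return phi_plus - phi_minus                  # net flow: higher means better ranked

# Toy example: four hypothetical procedures scored on LOD, cost and solvent use
scores = np.array([[0.10, 30.0, 5.0],
                   [0.50, 10.0, 0.5],
                   [0.20, 20.0, 1.0],
                   [0.05, 50.0, 20.0]])
weights = np.array([0.4, 0.3, 0.3])
maximize = np.array([False, False, False])       # lower is better for all three criteria
print(np.argsort(-promethee_ii(scores, weights, maximize)))   # indices, best first
```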

  20. Design and Analysis of the Aperture Shield Assembly for a Space Solar Receiver

    NASA Technical Reports Server (NTRS)

    Strumpf, Hal J.; Trinh, Tuan; Westelaken, William; Krystkowiak, Christopher; Avanessian, Vahe; Kerslake, Thomas W.

    1997-01-01

    A joint U.S./Russia program has been conducted to design, develop, fabricate, launch, and operate the world's first space solar dynamic power system on the Russian Space Station Mir. The goal of the program was to demonstrate and confirm that solar dynamic power systems are viable for future space applications such as the International Space Station (ISS). The major components of the system include a solar receiver, a closed Brayton cycle power conversion unit, a power conditioning and control unit, a solar concentrator, a radiator, a thermal control system, and a Space Shuttle carrier. Unfortunately, the mission was demanifested from the ISS Phase 1 Space Shuttle Program in 1996. However, NASA Lewis is proposing to use the fabricated flight hardware as part of an all-American flight demonstration on the ISS in 2002. The present paper concerns the design and analysis of the solar receiver aperture shield assembly. The aperture shield assembly comprises the front face of the cylindrical receiver and is located at the focal plane of the solar concentrator. The aperture shield assembly is a critical component that protects the solar receiver structure from highly concentrated solar fluxes during concentrator off-pointing events. A full-size aperture shield assembly was fabricated. This unit was essentially identical to the flight configuration, with the exception of materials substitution. In addition, a thermal shock test aperture shield assembly was fabricated. This test article utilized the flight materials and was used for high-flux testing in the solar simulator test rig at NASA Lewis. This testing is described in a companion paper.

  1. Meta-analysis as Statistical and Analytical Method of Journal’s Content Scientific Evaluation

    PubMed Central

    Masic, Izet; Begic, Edin

    2015-01-01

    Introduction: A meta-analysis is a statistical and analytical method which combines and synthesizes different independent studies and integrates their results into one common result. Goal: Analysis of the journals “Medical Archives”, “Materia Socio Medica” and “Acta Informatica Medica”, which are indexed in the most eminent databases of the biomedical milieu. Material and methods: The study has a retrospective and descriptive character, and covered the calendar year 2014. The study included six issues of all three journals (a total of 18 issues). Results: In this period a total of 291 articles were published (110 in “Medical Archives”, 97 in “Materia Socio Medica”, and 84 in “Acta Informatica Medica”). The largest number of articles were original articles; smaller numbers were published as professional papers, review articles and case reports. Clinical topics were most common in the first two journals, while in the journal “Acta Informatica Medica” they belonged to the field of medical informatics, as part of the pre-clinical medical disciplines. Articles usually required a period of fifty to fifty-nine days for review. Articles were received from four continents, mostly from Europe. The authors were most often from the territory of Bosnia and Herzegovina, then Iran, Kosovo and Macedonia. Conclusion: The number of articles published each year is increasing, with greater participation of authors from different continents and abroad. Clinical medical disciplines are the most common, with a broader spectrum of topics and a growing number of original articles. Greater support of the wider scientific community is needed for further development of all three of the aforementioned journals. PMID:25870484

  2. Analytic scaling analysis of high harmonic generation conversion efficiency.

    PubMed

    Falcão-Filho, E L; Gkortsas, M; Gordon, Ariel; Kärtner, Franz X

    2009-06-22

    Closed form expressions for the high harmonic generation (HHG) conversion efficiency are obtained for the plateau and cutoff regions. The presented formulas eliminate most of the computational complexity related to HHG simulations, and enable a detailed scaling analysis of HHG efficiency as a function of drive laser parameters and material properties. Moreover, in the total absence of any fitting procedure, the results show excellent agreement with experimental data reported in the literature. Thus, this paper opens new pathways for the global optimization problem of extreme ultraviolet (EUV) sources based on HHG.

  3. Global sensitivity analysis of analytical vibroacoustic transmission models

    NASA Astrophysics Data System (ADS)

    Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan

    2016-04-01

    Noise reduction issues arise in many engineering problems. One typical vibroacoustic problem is transmission loss (TL) optimisation and control. The TL depends mainly on the mechanical parameters of the considered media. At early stages of the design, such parameters are not well known, so decision making tools are needed to tackle this issue. In this paper, we consider the use of the Fourier Amplitude Sensitivity Test (FAST) for the analysis of the impact of mechanical parameters on features of interest. FAST is implemented for several structural configurations and is used to estimate the relative influence of the model parameters while assuming some uncertainty or variability in their values. The method offers a way to synthesize the results of a multiparametric analysis with large variability. Results are presented for the transmission loss of isotropic, orthotropic and sandwich plates excited by a diffuse field on one side. The qualitative trends are found to agree with physical expectations. Design rules can then be set up for vibroacoustic indicators. The case of a sandwich plate is taken as an example of the use of this method inside an optimisation process and for uncertainty quantification.

  4. An Improved Time-Frequency Analysis Method in Interference Detection for GNSS Receivers.

    PubMed

    Sun, Kewen; Jin, Tian; Yang, Dongkai

    2015-04-21

    In this paper, an improved joint time-frequency (TF) analysis method based on a reassigned smoothed pseudo Wigner-Ville distribution (RSPWVD) has been proposed in interference detection for Global Navigation Satellite System (GNSS) receivers. In the RSPWVD, the two-dimensional low-pass filtering smoothing function is introduced to eliminate the cross-terms present in the quadratic TF distribution, and at the same time, the reassignment method is adopted to improve the TF concentration properties of the auto-terms of the signal components. This proposed interference detection method is evaluated by experiments on GPS L1 signals in the disturbing scenarios compared to the state-of-the-art interference detection approaches. The analysis results show that the proposed interference detection technique effectively overcomes the cross-terms problem and also preserves good TF localization properties, which has been proven to be effective and valid to enhance the interference detection performance of the GNSS receivers, particularly in the jamming environments.
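
    As a rough illustration of the time-frequency building block involved, the sketch below computes a basic discrete pseudo Wigner-Ville distribution of a signal. It deliberately stops short of the smoothed and reassigned version (RSPWVD) proposed in the paper; the window length, test signal and sampling rate are arbitrary choices.

```python
import numpy as np
from scipy.signal import hilbert

def pseudo_wvd(x, window_len=127):
    """Discrete pseudo Wigner-Ville distribution of a real signal.  The analytic
    signal reduces aliasing and a Hamming lag window provides the 'pseudo'
    frequency smoothing.  This is only the basic building block: the time
    smoothing and reassignment steps of the RSPWVD are not included."""
    z = hilbert(x)
    n = len(z)
    half = window_len // 2
    win = np.hamming(window_len)
    tfr = np.zeros((window_len, n))
    for ti in range(n):
        taumax = min(ti, n - 1 - ti, half)
        tau = np.arange(-taumax, taumax + 1)
        kernel = win[half + tau] * z[ti + tau] * np.conj(z[ti - tau])
        buf = np.zeros(window_len, dtype=complex)
        buf[tau % window_len] = kernel           # place lags into the FFT buffer
        tfr[:, ti] = np.fft.fft(buf).real        # WVD is real for a symmetric window
    return tfr

# Toy example: a chirp plus a narrowband "jammer" appears as two distinct ridges
fs = 4096
t = np.arange(fs) / fs
x = np.cos(2 * np.pi * (200 * t + 400 * t ** 2)) + 0.5 * np.cos(2 * np.pi * 900 * t)
tfr = pseudo_wvd(x)
print(tfr.shape)    # (frequency bins, time samples)
```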

  7. Analytical approaches to expanding the use of capillary electrophoresis in routine food analysis.

    PubMed

    Castañeda, Gregorio; Rodríguez-Flores, Juana; Ríos, Angel

    2005-06-01

    Capillary Electrophoresis (CE) is becoming an ever more powerful analytical technique for the separation, identification, and quantification of a wide variety of compounds of interest in many application fields. Particularly in food analysis this technique can offer interesting advantages over chromatographic techniques because of its greater simplicity and efficiency. Nevertheless, CE needs to advance with regard to compatibility with sample matrices, sensitivity, and robustness of the methodologies in order to gain even wider acceptance in food analysis laboratories, especially for routine work. This article presents various approaches to expanding the analytical usefulness of CE in food analysis, discussing their advantages over conventional CE. These approaches focus on sample screening, automated sample preparation with on-line CE arrangements, and the automatic integration of calibration into routine analytical work with CE.

  8. A conflict of analysis: analytical chemistry and milk adulteration in Victorian Britain.

    PubMed

    Steere-Williams, Jacob

    2014-08-01

    This article centres on a particularly intense debate within British analytical chemistry in the late nineteenth century, between local public analysts and the government chemists of the Inland Revenue Service. The two groups differed in both practical methodologies and in the interpretation of analytical findings. The most striking debates in this period were related to milk analysis, highlighted especially in Victorian courtrooms. It was in protracted court cases, such as the well known Manchester Milk Case in 1883, that analytical chemistry was performed between local public analysts and the government chemists, who were often both used as expert witnesses. Victorian courtrooms were thus important sites in the context of the uneven professionalisation of chemistry. I use this tension to highlight what Christopher Hamlin has called the defining feature of Victorian public health, namely conflicts of professional jurisdiction, which adds nuance to histories of the struggle of professionalisation and public credibility in analytical chemistry.

  9. Patient perspectives on care received at community acupuncture clinics: a qualitative thematic analysis

    PubMed Central

    2013-01-01

    Background Community acupuncture is a recent innovation in acupuncture service delivery in the U.S. that aims to improve access to care through low-cost treatments in group-based settings. Patients at community acupuncture clinics represent a broader socioeconomic spectrum and receive more frequent treatments compared to acupuncture users nationwide. As a relatively new model of acupuncture in the U.S., little is known about the experiences of patients at community acupuncture clinics and whether quality of care is compromised through this high-volume model. The aim of this study was to assess patients’ perspectives on the care received through community acupuncture clinics. Methods The investigators conducted qualitative, thematic analysis of written comments from an observational, cross-sectional survey of clients of the Working Class Acupuncture clinics in Portland, Oregon. The survey included an open-ended question for respondents to share comments about their experiences with community acupuncture. Comments were received from 265 community acupuncture patients. Results Qualitative analysis of written comments identified two primary themes that elucidate patients’ perspectives on quality of care: 1) aspects of health care delivery unique to community acupuncture, and 2) patient engagement in health care. Patients identified unique aspects of community acupuncture, including structures that facilitate access, processes that make treatments more comfortable and effective and holistic outcomes including physical improvements, enhanced quality of life, and empowerment. The group setting, community-based locations, and low cost were highlighted as aspects of this model that allow patients to access acupuncture. Conclusions Patients’ perspectives on the values and experiences unique to community acupuncture offer insights on the quality of care received in these settings. The group setting, community-based locations, and low cost of this model potentially

  10. An introduction to clinical microeconomic analysis: purposes and analytic methods.

    PubMed

    Weintraub, W S; Mauldin, P D; Becker, E R

    1994-06-01

    The recent concern with health care economics has fostered the development of a new discipline that is generally called clinical microeconomics. This is a discipline in which microeconomic methods are used to study the economics of specific medical therapies. It is possible to perform stand-alone cost analyses, but more profound insight into the medical decision making process may be gained by combining cost studies with measures of outcome. This is most often accomplished with cost-effectiveness or cost-utility studies. In cost-effectiveness studies there is one measure of outcome, often death. In cost-utility studies there are multiple measures of outcome, which must be grouped together to give an overall picture of outcome or utility. There are theoretical limitations to the determination of utility that must be accepted to perform this type of analysis. A summary statistic of outcome is quality adjusted life years (QALYs), which is utility times socially discounted survival. Discounting is used because people value a year of future life less than a year of present life. Costs are made up of in-hospital direct, professional, follow-up direct, and follow-up indirect costs. Direct costs are for medical services. Indirect costs reflect opportunity costs such as lost time at work. Cost estimates are often based on marginal costs, or the cost of one additional procedure of the same type. Finally, an overall statistic may be generated as cost per unit increase in effectiveness, such as dollars per QALY. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:10151059
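
    The QALY and cost-effectiveness arithmetic described above can be made concrete with a short sketch: yearly utility weights are discounted and summed, and an incremental cost-effectiveness ratio is formed from the cost and QALY differences of two strategies. All numbers are invented for illustration.

```python
def discounted_qalys(utilities, discount_rate=0.03):
    """Sum of yearly utility weights discounted to present value:
    QALYs = sum_t u_t / (1 + r)^t, with year 0 undiscounted."""
    return sum(u / (1.0 + discount_rate) ** t for t, u in enumerate(utilities))

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Illustrative comparison of two strategies over a five-year horizon
qaly_a = discounted_qalys([0.90, 0.85, 0.85, 0.80, 0.80])
qaly_b = discounted_qalys([0.80, 0.75, 0.70, 0.70, 0.65])
print(round(icer(cost_new=42_000, qaly_new=qaly_a,
                 cost_old=30_000, qaly_old=qaly_b)), "per QALY gained")
```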

  11. Error analysis for semi-analytic displacement derivatives with respect to shape and sizing variables

    NASA Technical Reports Server (NTRS)

    Fenyes, Peter A.; Lust, Robert V.

    1989-01-01

    Sensitivity analysis is fundamental to the solution of structural optimization problems. Consequently, much research has focused on the efficient computation of static displacement derivatives. As originally developed, these methods relied on analytical representations for the derivatives of the structural stiffness matrix K with respect to the design variables b_i. To extend these methods for use with complex finite element formulations and to facilitate their implementation into structural optimization programs built on general finite element analysis codes, the semi-analytic method was developed. In this method the matrix ∂K/∂b_i is approximated by finite differences. Although it is well known that the accuracy of the semi-analytic method depends on the finite difference parameter, recent work has suggested that more fundamental inaccuracies exist in the method when used for shape optimization. Another study has argued qualitatively that these errors are related to nonuniform errors in the stiffness matrix derivatives. The accuracy of the semi-analytic method is investigated here. A general framework is developed for the error analysis, and it is then shown analytically that the errors in the method are entirely accounted for by errors in ∂K/∂b_i. Furthermore, it is demonstrated that acceptable accuracy in the derivatives can be obtained through careful selection of the finite difference parameter.
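
    A minimal sketch of the semi-analytic idea on a two-spring toy structure is given below: the stiffness-matrix derivative is approximated by a forward difference and combined with the analytical sensitivity relation du/db_i = -K^{-1} (∂K/∂b_i) u. The structure, load and step size are illustrative assumptions.

```python
import numpy as np

def stiffness(b):
    """Stiffness matrix of two springs in series (stiffnesses b[0], b[1]),
    fixed at one end, with loads applied at the two free nodes."""
    k1, k2 = b
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

def semi_analytic_du_db(b, f, i, h=1e-6):
    """Semi-analytic displacement derivative with respect to b[i]:
    dK/db_i is approximated by a forward difference, then
    du/db_i = -K^{-1} (dK/db_i) u."""
    K = stiffness(b)
    u = np.linalg.solve(K, f)
    b_pert = b.copy()
    b_pert[i] += h
    dK = (stiffness(b_pert) - K) / h             # finite-difference matrix derivative
    return -np.linalg.solve(K, dK @ u)

b = np.array([1000.0, 500.0])
f = np.array([0.0, 10.0])
print(semi_analytic_du_db(b, f, i=1))
# Exact answer for this toy case: du1/db2 = 0, du2/db2 = -f2/b2**2 = -4.0e-05
```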

  12. In vivo degradation of orthodontic miniscrew implants: surface analysis of as-received and retrieved specimens.

    PubMed

    Iijima, Masahiro; Muguruma, Takeshi; Kawaguchi, Masahiro; Yasuda, Yoshitaka; Mizoguchi, Itaru

    2015-02-01

    This study investigated in vivo degradation of Ti-6Al-4V alloy miniscrew implants. Miniscrew implants were placed in patients, and the surfaces were studied upon retrieval by scanning electron microscopy, microscale X-ray photoelectron spectroscopy, elastic recoil detection analysis and nanoindentation testing. Bone-like structures were formed on the retrieved specimens. The hardness and elastic modulus of the surfaces of the retrieved specimens were significantly lower than the as-received specimens, although no statistically significant differences were observed for the hardness and elastic modulus in the bulk region. Thick organic over-layer containing carbon, oxygen, and nitrogen, with the thickness greater than 50 nm, covered the retrieved specimens, and higher concentrations of hydrogen were detected in the retrieved specimens compared with the as-received specimens. Minimal degradation of the bulk mechanical properties of miniscrew implants was observed after clinical use, although precipitation of bone-like structures, formation of a carbonaceous contamination layer, and hydrogen absorption were observed on the surfaces of miniscrew implants. PMID:25631268

  13. Diagnostic accuracy and receiver-operating characteristics curve analysis in surgical research and decision making.

    PubMed

    Søreide, Kjetil; Kørner, Hartwig; Søreide, Jon Arne

    2011-01-01

    In surgical research, the ability to correctly classify one type of condition or specific outcome from another is of great importance for variables influencing clinical decision making. Receiver-operating characteristic (ROC) curve analysis is a useful tool for assessing the diagnostic accuracy of any variable with a continuous spectrum of results. In order to rule a disease state in or out with a given test, the test results are usually binary, with arbitrarily chosen cut-offs for defining disease versus health, or for grading disease severity. In the postgenomic era, the translation from bench to bedside of biomarkers in various tissues and body fluids requires appropriate tools for analysis. In contrast to predetermining a cut-off value to define disease, the advantages of applying ROC analysis include the ability to test diagnostic accuracy across the entire range of variable scores and test outcomes. In addition, ROC analysis can easily provide visual and statistical comparisons across tests or scores. ROC is also favored because it is thought to be independent of the prevalence of the condition under investigation. ROC analysis is used in various surgical settings and across disciplines, including cancer research, biomarker assessment, imaging evaluation, and the assessment of risk scores. With appropriate use, ROC curves may help identify the most appropriate cut-off value for clinical and surgical decision making and avoid confounding effects seen with subjective ratings. ROC curve results should always be put in perspective, because a good classifier does not guarantee the expected clinical outcome. In this review, we discuss the fundamental roles, suggested presentation, potential biases, and interpretation of ROC analysis in surgical research.
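
    The ROC construction described above can be sketched directly: sweep every observed marker value as a cut-off, accumulate true- and false-positive rates, integrate for the AUC, and pick a cut-off by Youden's J. The synthetic biomarker data below are assumptions for illustration, not clinical data.

```python
import numpy as np

def roc_curve_points(scores, labels):
    """Sweep every observed score as a cut-off and return (FPR, TPR) points,
    the trapezoidal AUC, and the cut-off maximizing Youden's J = TPR - FPR."""
    order = np.argsort(-scores)
    scores, labels = scores[order], labels[order]
    tpr = np.concatenate([[0.0], np.cumsum(labels) / labels.sum()])
    fpr = np.concatenate([[0.0], np.cumsum(1 - labels) / (1 - labels).sum()])
    auc = np.trapz(tpr, fpr)
    j = tpr[1:] - fpr[1:]
    return fpr, tpr, auc, scores[np.argmax(j)]

# Synthetic biomarker: diseased patients tend to have higher values
rng = np.random.default_rng(3)
labels = np.concatenate([np.ones(200, dtype=int), np.zeros(200, dtype=int)])
scores = np.concatenate([rng.normal(2.0, 1.0, 200), rng.normal(0.0, 1.0, 200)])
fpr, tpr, auc, cutoff = roc_curve_points(scores, labels)
print("AUC ~ %.2f, Youden-optimal cut-off ~ %.2f" % (auc, cutoff))
```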

  14. Comparative analysis of methods for real-time analytical control of chemotherapies preparations.

    PubMed

    Bazin, Christophe; Cassard, Bruno; Caudron, Eric; Prognon, Patrice; Havard, Laurent

    2015-10-15

    Control of chemotherapy preparations is now an obligation in France, though analytical control itself is not compulsory. Several methods are available and none of them can be presumed ideal. We wanted to compare them so as to determine which one could be the best choice. We compared non-analytical (visual and video-assisted, gravimetric) and analytical (HPLC/FIA, UV/FT-IR, UV/Raman, Raman) methods, drawing on our experience and a SWOT analysis. The results of the analysis show great differences between the techniques, but as expected none of them is without defects. However, they can probably be used in synergy. Overall, for the pharmacist willing to get involved, the implementation of controls for chemotherapy preparations must be anticipated well in advance, with the listing of every parameter, and remains, in our view, an analyst's job. PMID:26299761

  16. Synergistic effect of combining two nondestructive analytical methods for multielemental analysis.

    PubMed

    Toh, Yosuke; Ebihara, Mitsuru; Kimura, Atsushi; Nakamura, Shoji; Harada, Hideo; Hara, Kaoru Y; Koizumi, Mitsuo; Kitatani, Fumito; Furutaka, Kazuyoshi

    2014-12-16

    We developed a new analytical technique that combines prompt gamma-ray analysis (PGA) and time-of-flight elemental analysis (TOF) by using an intense pulsed neutron beam at the Japan Proton Accelerator Research Complex. It allows us to obtain the results from both methods at the same time. Moreover, it can be used to quantify elemental concentrations in the sample, to which neither of these methods can be applied independently, if a new analytical spectrum (TOF-PGA) is used. To assess the effectiveness of the developed method, a mixed sample of Ag, Au, Cd, Co, and Ta, and the Gibeon meteorite were analyzed. The analytical capabilities were compared based on the gamma-ray peak selectivity and signal-to-noise ratios. The TOF-PGA method showed clear merits, although its capability may differ depending on the target and coexisting elements. PMID:25371049

  17. [Local Regression Algorithm Based on Net Analyte Signal and Its Application in Near Infrared Spectral Analysis].

    PubMed

    Zhang, Hong-guang; Lu, Jian-gang

    2016-02-01

    To overcome the problems of significant differences among samples and nonlinearity between the property and spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, the net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and unknown samples; then the Euclidean distance between the net analyte signal of each unknown sample and the net analyte signals of the calibration samples was calculated and used as a similarity index. According to the defined similarity index, a local calibration set was individually selected for each unknown sample. Finally, a local PLS regression model was built on each local calibration set for each unknown sample. The proposed method was applied to a set of near infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and a conventional local regression algorithm based on spectral Euclidean distance.
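
    The selection-then-local-model workflow can be sketched as follows; this is a hedged illustration which assumes the net analyte signals of the calibration and unknown samples (nas_cal, nas_unknown) have already been computed by a separate NAS routine, and the neighbourhood size and number of latent variables are arbitrary placeholders rather than the paper's settings.

```python
# Sketch of the local PLS strategy: select calibration neighbours by Euclidean
# distance in (pre-computed) net analyte signal space, then fit a local model.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def predict_local_pls(X_cal, y_cal, nas_cal, x_unknown, nas_unknown,
                      k_neighbours=30, n_components=5):
    # Similarity index: Euclidean distance between net analyte signals
    d = np.linalg.norm(nas_cal - nas_unknown, axis=1)
    local = np.argsort(d)[:k_neighbours]               # local calibration set
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_cal[local], y_cal[local])                # local PLS model for this unknown
    return pls.predict(x_unknown.reshape(1, -1))[0, 0]
```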

  18. Receiving Basin for Offsite Fuels and the Resin Regeneration Facility Safety Analysis Report, Executive Summary

    SciTech Connect

    Shedrow, C.B.

    1999-11-29

    The Safety Analysis Report documents the safety authorization basis for the Receiving Basin for Offsite Fuels (RBOF) and the Resin Regeneration Facility (RRF) at the Savannah River Site (SRS). The present mission of the RBOF and RRF is to continue providing a facility for the safe receipt, storage, handling, and shipping of spent nuclear fuel assemblies from power and research reactors in the United States, fuel from SRS and other Department of Energy (DOE) reactors, and fuel from foreign research reactors, in support of the nonproliferation policy. The RBOF and RRF provide the capability to handle, separate, and transfer wastes generated from nuclear fuel element storage. The DOE and Westinghouse Savannah River Company, the prime operating contractor, are committed to managing these activities in such a manner that the health and safety of the offsite general public, the site worker, the facility worker, and the environment are protected.

  19. Performance and Integrity Analysis of the Vector Tracking Architecture of GNSS Receivers

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Susmita

    Frequent loss or attenuation of signals in urban areas and integrity (or reliability of system performance) are two principal challenges facing the Global Navigation Satellite Systems or GNSS today. They are of critical importance especially to safety or liability-critical applications where system malfunction can cause safety problems or has legal/economic consequences. To deal with the problem of integrity, algorithms called integrity monitors have been developed and fielded. These monitors are designed to raise an alarm when situations resulting in misleading information are identified. However, they do not enhance the ability of a GNSS receiver to track weak signals. Among several approaches proposed to deal with the problem of frequent signal outage, an advanced GNSS receiver architecture called vector tracking loops has attracted much attention in recent years. While there is an extensive body of knowledge that documents vector tracking's superiority to deal with weak signals, prior work on vector loop integrity monitoring is scant. Systematic designs of a vector loop-integrity monitoring scheme can find use in above-mentioned applications that are inherently vulnerable to frequent signal loss or attenuation. Developing such a system, however, warrants a thorough understanding of the workings of the vector architecture as the open literature provides very few preliminary studies in this regard. To this end, the first aspect of this research thoroughly explains the internal operations of the vector architecture. It recasts the existing complex vector architecture equations into parametric models that are mathematically tractable. An in-depth theoretical analysis of these models reveals that inter-satellite aiding is the key to vector tracking's superiority. The second aspect of this research performs integrity studies of the vector loops. Simulation results from the previous analysis show that inter-satellite aiding allows easy propagation of errors (and

  20. A Visual Analytics Approach to Structured Data Analysis to Enhance Nonproliferation and Arms Control Verification Activities

    SciTech Connect

    Gillen, David S.

    2014-08-07

    Analysis activities for Nonproliferation and Arms Control verification require the use of many types of data. Tabular structured data, such as Excel spreadsheets and relational databases, have traditionally been used for data mining activities, where specific queries are issued against data to look for matching results. The application of visual analytics tools to structured data enables further exploration of datasets to promote discovery of previously unknown results. This paper discusses the application of a specific visual analytics tool to datasets related to the field of Arms Control and Nonproliferation to promote the use of visual analytics more broadly in this domain. Visual analytics focuses on analytical reasoning facilitated by interactive visual interfaces (Wong and Thomas 2004). It promotes exploratory analysis of data, and complements data mining technologies where known patterns can be mined for. Also with a human in the loop, they can bring in domain knowledge and subject matter expertise. Visual analytics has not widely been applied to this domain. In this paper, we will focus on one type of data: structured data, and show the results of applying a specific visual analytics tool to answer questions in the Arms Control and Nonproliferation domain. We chose to use the T.Rex tool, a visual analytics tool developed at PNNL, which uses a variety of visual exploration patterns to discover relationships in structured datasets, including a facet view, graph view, matrix view, and timeline view. The facet view enables discovery of relationships between categorical information, such as countries and locations. The graph tool visualizes node-link relationship patterns, such as the flow of materials being shipped between parties. The matrix visualization shows highly correlated categories of information. The timeline view shows temporal patterns in data. In this paper, we will use T.Rex with two different datasets to demonstrate how interactive exploration of

  1. Social Cognitive Predictors of College Students' Academic Performance and Persistence: A Meta-Analytic Path Analysis

    ERIC Educational Resources Information Center

    Brown, Steven D.; Tramayne, Selena; Hoxha, Denada; Telander, Kyle; Fan, Xiaoyan; Lent, Robert W.

    2008-01-01

    This study tested Social Cognitive Career Theory's (SCCT) academic performance model using a two-stage approach that combined meta-analytic and structural equation modeling methodologies. Unbiased correlations obtained from a previously published meta-analysis [Robbins, S. B., Lauver, K., Le, H., Davis, D., & Langley, R. (2004). Do psychosocial…

  2. A rational design change methodology based on experimental and analytical modal analysis

    SciTech Connect

    Weinacht, D.J.; Bennett, J.G.

    1993-08-01

    A design methodology that integrates analytical modeling and experimental characterization is presented. This methodology represents a powerful tool for making rational design decisions and changes. An example of its implementation in the design, analysis, and testing of a precision machine tool support structure is given.

  3. THE IMPORTANCE OF PROPER INTENSITY CALIBRATION FOR RAMAN ANALYSIS OF LOW-LEVEL ANALYTES IN WATER

    EPA Science Inventory

    Modern dispersive Raman spectroscopy offers unique advantages for the analysis of low-concentration analytes in aqueous solution. However, we have found that proper intensity calibration is critical for obtaining these benefits. This is true not only for producing spectra with ...

  4. Determining an Effective Intervention within a Brief Experimental Analysis for Reading: A Meta-Analytic Review

    ERIC Educational Resources Information Center

    Burns, Matthew K.; Wagner, Dana

    2008-01-01

    The current study applied meta-analytic procedures to brief experimental analysis research of reading fluency interventions to better inform practice and suggest areas for future research. Thirteen studies were examined to determine what magnitude of effect was needed to identify an intervention as the most effective within a brief experimental…

  5. Social Cognitive Career Theory, Conscientiousness, and Work Performance: A Meta-Analytic Path Analysis

    ERIC Educational Resources Information Center

    Brown, Steven D.; Lent, Robert W.; Telander, Kyle; Tramayne, Selena

    2011-01-01

    We performed a meta-analytic path analysis of an abbreviated version of social cognitive career theory's (SCCT) model of work performance (Lent, Brown, & Hackett, 1994). The model we tested included the central cognitive predictors of performance (ability, self-efficacy, performance goals), with the exception of outcome expectations. Results…

  6. Receiver operating characteristic analysis for intelligent medical systems--a new approach for finding confidence intervals.

    PubMed

    Tilbury, J B; Van Eetvelt, P W; Garibaldi, J M; Curnow, J S; Ifeachor, E C

    2000-07-01

    Intelligent systems are increasingly being deployed in medicine and healthcare, but there is a need for a robust and objective methodology for evaluating such systems. Potentially, receiver operating characteristic (ROC) analysis could form a basis for the objective evaluation of intelligent medical systems. However, it has several weaknesses when applied to the types of data used to evaluate intelligent medical systems. First, small data sets are often used, which are unsatisfactory with existing methods. Second, many existing ROC methods use parametric assumptions which may not always be valid for the test cases selected. Third, system evaluations are often more concerned with particular, clinically meaningful, points on the curve, rather than on global indexes such as the more commonly used area under the curve. A novel, robust and accurate method is proposed, derived from first principles, which calculates the probability density function (pdf) for each point on a ROC curve for any given sample size. Confidence intervals are produced as contours on the pdf. The theoretical work has been validated by Monte Carlo simulations. It has also been applied to two real-world examples of ROC analysis, taken from the literature (classification of mammograms and differential diagnosis of pancreatic diseases), to investigate the confidence surfaces produced for real cases, and to illustrate how analysis of system performance can be enhanced. We illustrate the impact of sample size on system performance from analysis of ROC pdf's and 95% confidence boundaries. This work establishes an important new method for generating pdf's, and provides an accurate and robust method of producing confidence intervals for ROC curves for the small sample sizes typical of intelligent medical systems. It is conjectured that, potentially, the method could be extended to determine risks associated with the deployment of intelligent medical systems in clinical practice.
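
    To illustrate the general idea of attaching a probability density to a single ROC operating point at small sample sizes, the sketch below uses independent Beta densities for sensitivity and false-positive rate (a common small-sample choice); this is an illustrative stand-in, not the first-principles derivation developed in the paper, and all counts are invented.

```python
# Hedged illustration: probability density for one ROC operating point at small
# sample size, using Beta posteriors for sensitivity and false-positive rate.
import numpy as np
from scipy import stats

n_pos, n_neg = 20, 25            # small test set: diseased / healthy cases
tp, fp = 16, 4                   # observed counts at the chosen threshold

sens_pdf = stats.beta(tp + 1, n_pos - tp + 1)        # posterior for sensitivity (uniform prior)
fpr_pdf = stats.beta(fp + 1, n_neg - fp + 1)         # posterior for false-positive rate

grid = np.linspace(0.0, 1.0, 201)
# contours of this joint pdf over (TPR, FPR) trace out confidence regions for the point
joint = np.outer(sens_pdf.pdf(grid), fpr_pdf.pdf(grid))
print("95% interval for sensitivity:", sens_pdf.ppf([0.025, 0.975]).round(3))
print("95% interval for FPR:       ", fpr_pdf.ppf([0.025, 0.975]).round(3))
```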

  7. Evaluation of 3D multimodality image registration using receiver operating characteristic (ROC) analysis

    NASA Astrophysics Data System (ADS)

    Holton Tainter, Kerrie S.; Robb, Richard A.; Taneja, Udita; Gray, Joel E.

    1995-04-01

    Receiver operating characteristic analysis has evolved as a useful method for evaluating the discriminatory capability and efficacy of visualization. The ability of such analysis to account for the variance in decision criteria of multiple observers, multiple reading, and a wide range of difficulty in detection among case studies makes ROC especially useful for interpreting the results of a viewing experiment. We are currently using ROC analysis to evaluate the effectiveness of using fused multispectral, or complementary multimodality imaging data in the diagnostic process. The use of multispectral image recordings, gathered from multiple imaging modalities, to provide advanced image visualization and quantization capabilities in evaluating medical images is an important challenge facing medical imaging scientists. Such capabilities would potentially significantly enhance the ability of clinicians to extract scientific and diagnostic information from images. A first step in the effective use of multispectral information is the spatial registration of complementary image datasets so that a point-to-point correspondence exists between them. We are developing a paradigm of measuring the accuracy of existing image registration techniques which includes the ability to relate quantitative measurements, taken from the images themselves, to the decisions made by observers about the state of registration (SOR) of the 3D images. We have used ROC analysis to evaluate the ability of observers to discriminate between correctly registered and incorrectly registered multimodality fused images. We believe this experience is original and represents the first time that ROC analysis has been used to evaluate registered/fused images. We have simulated low-resolution and high-resolution images from real patient MR images of the brain, and fused them with the original MR to produce colorwash superposition images whose exact SOR is known. We have also attempted to extend this analysis to

  8. Lithospheric structure of the eastern flank of the Rio Grande Rift via receiver function velocity analysis

    NASA Astrophysics Data System (ADS)

    Agrawal, M.; Pulliam, J.; Sen, M. K.; Grand, S.

    2015-12-01

    To better delineate a seismic anomaly beneath the eastern flank of the Rio Grande Rift identified by seismic tomography, we depth-migrated Ps and Sp receiver functions using data from the SIEDCAR (Seismic Investigation of Edge Driven Convection Association with Rio Grande Rift) and USArray Transportable Array (TA) deployments. We performed Common Conversion Point (CCP) stacking to improve the S/N ratio of receiver functions. Using an incorrect velocity model for depth migration of a stacked CCP image may generate an inaccurate picture of the subsurface. To find sufficiently accurate P- and S-velocity models for migration, we optimize the average correlation value of common receiver gathers for target features - in this case the Moho and the LAB - while perturbing the shear wave velocities in a process driven by simulated annealing. The technique simultaneously finds depths to major discontinuities (in this case the Moho and LAB) and S and P velocity profiles beneath each seismic station in a manner that is similar to velocity analysis in reflection seismology. An application to data acquired in southeastern New Mexico and west Texas, at an average station spacing of 35 km, reveals an abrupt increase in lithospheric thickness from west to east, from the Rio Grande Rift to the Great Plains craton. Previous studies found an elongated high velocity anomaly that extends to depths approaching 500 km in southeastern New Mexico and west Texas that is distinct from the thick Great Plains lithosphere. Our stacked 3-D image confirms the anomaly's existence and shows that it is more laterally extensive than was previously indicated. Recent numerical modeling suggests that an abrupt change in lithospheric thickness, which creates a step change in densities, may produce a gravitational instability that leads to thicker mantle lithosphere dripping off into the lower density asthenosphere. As the mantle deforms it alternately thickens and thins the crust, producing topographic

  9. Nonparametric estimation receiver operating characteristic analysis for performance evaluation on combined detection and estimation tasks.

    PubMed

    Wunderlich, Adam; Goossens, Bart

    2014-10-01

    In an effort to generalize task-based assessment beyond traditional signal detection, there is a growing interest in performance evaluation for combined detection and estimation tasks, in which signal parameters, such as size, orientation, and contrast are unknown and must be estimated. One motivation for studying such tasks is their rich complexity, which offers potential advantages for imaging system optimization. To evaluate observer performance on combined detection and estimation tasks, Clarkson introduced the estimation receiver operating characteristic (EROC) curve and the area under the EROC curve as a summary figure of merit. This work provides practical tools for EROC analysis of experimental data. In particular, we propose nonparametric estimators for the EROC curve, the area under the EROC curve, and for the variance/covariance matrix of a vector of correlated EROC area estimates. In addition, we show that reliable confidence intervals can be obtained for EROC area, and we validate these intervals with Monte Carlo simulation. Application of our methodology is illustrated with an example comparing magnetic resonance imaging k-space sampling trajectories. MATLAB® software implementing the EROC analysis estimators described in this work is publicly available at http://code.google.com/p/iqmodelo/. PMID:26158044
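
    A hedged reading of the nonparametric EROC-area idea is sketched below: each signal-present case contributes the utility of its parameter estimate whenever its rating beats that of a signal-absent case, in Mann-Whitney fashion. The function name and synthetic inputs are illustrative assumptions; the authors' iqmodelo MATLAB code remains the definitive implementation of their estimators.

```python
# Sketch of a Mann-Whitney-style nonparametric estimate of EROC area.
import numpy as np

def eroc_area(ratings_present, utilities_present, ratings_absent):
    lam1 = np.asarray(ratings_present)[:, None]         # signal-present ratings
    lam0 = np.asarray(ratings_absent)[None, :]          # signal-absent ratings
    u = np.asarray(utilities_present)[:, None]          # utility of each parameter estimate, in [0, 1]
    wins = (lam1 > lam0) + 0.5 * (lam1 == lam0)         # Mann-Whitney kernel with tie handling
    return float(np.mean(u * wins))

rng = np.random.default_rng(1)
print(eroc_area(rng.normal(1, 1, 50), rng.uniform(0.5, 1.0, 50), rng.normal(0, 1, 60)))
```

    When every utility equals one, the expression reduces to the familiar nonparametric estimate of ordinary ROC area.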

  10. Economic Analysis of a Pine Plantation Receiving Repeated Applications of Biosolids

    PubMed Central

    Wang, Hailong; Kimberley, Mark O.; Wilks, Peter J.

    2013-01-01

    Treated biosolids have been applied to 750 ha of a Pinus radiata forest plantation on Rabbit Island near Nelson City in New Zealand since 1996. A long-term research trial was established in 1997 to investigate the effects of the biosolids applications on the receiving environment and tree growth. An analysis of the likely economic impact of biosolids application shows that biosolids application has been beneficial. Stem volume of the high treatment (biosolids applied at 600 kg N ha-1 every three years) was 36% greater than the control treatment (no biosolids applied), and stem volume of the standard treatment (300 kg N ha-1) was 27% greater than the control treatment at 18 years of age. Biosolids treatments have effectively transformed a low productivity forest site to a medium productivity site. Although this increased productivity has been accompanied by some negative influences on wood quality attributes with reduced wood stiffness, wood density, and larger branches, an economic analysis shows that the increased stem volume and greater average log diameter in the biosolids treatments outweigh these negative effects. The high and standard biosolids treatments are predicted to increase the net stumpage value of logs by 24% and 14% respectively at harvesting, providing a large positive impact on the forest owner’s economic return. PMID:23451262

  11. Mantle wedge anisotropy in Southern Tyrrhenian Subduction Zone (Italy), from receiver function analysis

    NASA Astrophysics Data System (ADS)

    Piana Agostinetti, Nicola; Park, Jeffrey; Lucente, Francesco Pio

    2008-12-01

    We constrain mantle wedge seismic structure in the Southern Tyrrhenian Subduction Zone (Italy) using teleseismic receiver functions (RF) recorded at station CUC of the Mednet seismographic network. Station CUC lies above the northern portion of the Calabrian slab segment, which is recognized from deep seismicity and tomographic imaging as a narrow, laterally high-arched slab fragment, extending from the surface below Calabria down to the transition zone. To better define the descending slab interface and possible shear-coupled flow in the mantle wedge above the slab, we computed receiver functions from the P-coda of 147 teleseismic events to analyze the back-azimuth dependence of Ps converted phases from interfaces beneath CUC. We stack the RF data-set with back azimuth to compute its harmonic expansion, which relates to the effects of interface dip and anisotropy at layer boundaries. The seismic structure constrained through the RF analysis is characterized in its upper part by a sub-horizontal Moho at about 25 km depth, overlying a thin isotropic layer at top of mantle. For the deeper part, back-azimuth variation suggests two alternative models, each with an anisotropic layer between two dipping interfaces near 70- and 90-km depth, with fast- and slow-symmetry axes, respectively, above the Apennines slab. Although independent evidence suggests a north-south strike for the slab beneath CUC, the trend of the inferred anisotropy is 45° clockwise from north, inconsistent with a simple downdip shear-coupled flow model in the supra-slab mantle wedge. However complexities of flow and induced rock fabric in the Tyrrhenian back arc may arise due to several concurring factors such as the arcuate shape of the Apennines slab, its retreating kinematics, or slab edge effects.
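
    The back-azimuth harmonic expansion step can be illustrated with a small least-squares fit: at a fixed delay time, RF amplitudes are regressed on constant, one-theta and two-theta terms, whose relative sizes help diagnose dipping interfaces versus anisotropy. The sketch below uses synthetic amplitudes and is not the authors' processing code.

```python
# Back-azimuth harmonic decomposition of receiver-function amplitudes (synthetic demo).
import numpy as np

rng = np.random.default_rng(0)
baz = np.deg2rad(rng.uniform(0, 360, 147))               # event back azimuths
amp = 0.10 + 0.05 * np.cos(baz - np.deg2rad(45)) + 0.01 * rng.standard_normal(baz.size)

G = np.column_stack([np.ones_like(baz),
                     np.cos(baz), np.sin(baz),           # 1-theta terms (dip / plunging axis)
                     np.cos(2 * baz), np.sin(2 * baz)])  # 2-theta terms (horizontal symmetry axis)
coeff, *_ = np.linalg.lstsq(G, amp, rcond=None)
a1, b1 = coeff[1], coeff[2]
print(f"1-theta amplitude = {np.hypot(a1, b1):.3f}, "
      f"inferred azimuth ~ {np.rad2deg(np.arctan2(b1, a1)) % 360:.0f} deg")
```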

  12. Myopathy in older people receiving statin therapy: a systematic review and meta-analysis

    PubMed Central

    Iwere, Roli B; Hewitt, Jonathan

    2015-01-01

    Objective The aim of the present study was to determine the risk of myopathy in older people receiving statin therapy. Methods Eligible studies were identified searching Ovid Medline, EMBASE, Scopus, CINAHL, Cochrane and PSYCHINFO databases (1987 to July 2014). The selection criteria comprised randomized controlled studies that compared the effects of statin monotherapy and placebo on muscle adverse events in the older adult (65+ years). Data were extracted and assessed for validity by the authors. Odds ratios and 95% confidence intervals (CIs) were calculated for binary outcomes. Evidence from included studies was pooled in a meta-analysis using Revman 5.3. Results The trials assessed in the systematic review showed little or no evidence of a difference in risks between treatment and placebo groups, with myalgia [odds ratio (OR) 1.03, 95% CI 0.90, 1.17; I2 = 0%; P = 0.66] and combined muscle adverse events (OR 1.03, 95% CI 0.91, 1.18; I2 = 0%; P = 0.61) (myopathy). No evidence was found for an increased risk of rhabdomyolysis (OR 2.93, 95% CI 0.30, 28.18; I2 = 0%; P = 0.35) in the seven trials that reported this. No trials reported mortality due to a muscle-related event. Discontinuations due to an adverse effect were reduced in the treatment group compared with placebo (OR 0.74, 95% CI 0.50, 1.09; I2 = 0%; P = 0.13). Conclusion The results obtained from the present review suggest that statins are relatively safe, even in older people. There was no evidence to suggest an increased risk of myopathy in older adults receiving statin therapy. There is a slight increase seen with rhabdomyolysis when compared with the general population, although the event is relatively rare. Statins should be prescribed to elderly people who need them, and not withheld, as their myopathy safety profile is tolerable. PMID:26032930
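
    The pooling step of such a meta-analysis can be sketched with a standard inverse-variance random-effects (DerSimonian-Laird) calculation; the 2x2 counts below are invented purely for illustration and do not reproduce the trials summarized above.

```python
# Hedged sketch of random-effects (DerSimonian-Laird) pooling of odds ratios.
import numpy as np

# each row: events_treat, n_treat, events_placebo, n_placebo (hypothetical counts)
studies = np.array([[30, 500, 28, 495],
                    [12, 250, 15, 248],
                    [45, 800, 40, 810]], dtype=float)

a, n1, c, n0 = studies.T
b, d = n1 - a, n0 - c
log_or = np.log((a * d) / (b * c))
var = 1/a + 1/b + 1/c + 1/d                       # variance of each log OR

w = 1 / var                                       # fixed-effect weights
q = np.sum(w * (log_or - np.sum(w * log_or) / np.sum(w))**2)
tau2 = max(0.0, (q - (len(w) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (var + tau2)                           # random-effects weights

pooled = np.sum(w_re * log_or) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96*se):.2f}-{np.exp(pooled + 1.96*se):.2f})")
```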

  13. How Can Visual Analytics Assist Investigative Analysis? Design Implications from an Evaluation.

    PubMed

    Youn-Ah Kang; Görg, Carsten; Stasko, John

    2011-05-01

    Despite the growing number of systems providing visual analytic support for investigative analysis, few empirical studies of the potential benefits of such systems have been conducted, particularly controlled, comparative evaluations. Determining how such systems foster insight and sensemaking is important for their continued growth and study, however. Furthermore, studies that identify how people use such systems and why they benefit (or not) can help inform the design of new systems in this area. We conducted an evaluation of the visual analytics system Jigsaw employed in a small investigative sensemaking exercise, and compared its use to three other more traditional methods of analysis. Sixteen participants performed a simulated intelligence analysis task under one of the four conditions. Experimental results suggest that Jigsaw assisted participants to analyze the data and identify an embedded threat. We describe different analysis strategies used by study participants and how computational support (or the lack thereof) influenced the strategies. We then illustrate several characteristics of the sensemaking process identified in the study and provide design implications for investigative analysis tools based thereon. We conclude with recommendations on metrics and techniques for evaluating visual analytics systems for investigative analysis.

  14. Receiver subsystem analysis report (RADL Item 4-1). The 10-MWe solar thermal central-receiver pilot plant: Solar-facilities design integration

    NASA Astrophysics Data System (ADS)

    1982-04-01

    The results of the thermal hydraulic, structural, and stress analyses which are required to demonstrate that the receiver design for the Barstow Solar Pilot Plant satisfies the general design and performance requirements during the plant's design life are presented. Recommendations are made for receiver operation. The analyses are limited to receiver subsystem major structural parts (primary tower, receiver unit core support structure), pressure parts (absorber panels, feedwater, condensate and steam piping/components, flash tank, and steam manifold) and shielding.

  15. Use of analytical electron microscopy for the individual particle analysis of the Arctic haze aerosol

    SciTech Connect

    Sheridan, P.J.

    1986-01-01

    To explore the usefulness of the analytical electron microscope for the analysis and source apportionment of individual aerosol particles, aerosol samples amenable to individual particle analysis were collected from a remote region. These samples were from the Arctic haze aerosol, and were collected on board a research aircraft during the Arctic Gas and Aerosol Sampling Program in spring 1983. Before elemental analysis by analytical electron microscopy (AEM) could be performed, an extensive relative sensitivity factor study was undertaken to calibrate the microscope/detector system for quantitative x-ray microanalysis. Subsequently determined elemental data, along with morphological information, were used to group the particles into classes with similar characteristics. Forty-seven classes of particles were found in the Arctic samples, the most populous classes containing H2SO4 droplets, carbonaceous particles, lithophilic particles, CaSO4 or NaCl. Several classes containing anthropogenic particles were also identified.

  16. Analytical tools for the analysis of β-carotene and its degradation products

    PubMed Central

    Stutz, H.; Bresgen, N.; Eckl, P. M.

    2015-01-01

    Abstract β-Carotene, the precursor of vitamin A, possesses pronounced radical scavenging properties. This has centered the attention on β-carotene dietary supplementation in healthcare as well as in the therapy of degenerative disorders and several cancer types. However, two intervention trials with β-carotene have revealed adverse effects on two proband groups, that is, cigarette smokers and asbestos-exposed workers. Beside other causative reasons, the detrimental effects observed have been related to the oxidation products of β-carotene. Their generation originates in the polyene structure of β-carotene that is beneficial for radical scavenging, but is also prone to oxidation. Depending on the dominant degradation mechanism, bond cleavage might occur either randomly or at defined positions of the conjugated electron system, resulting in a diversity of cleavage products (CPs). Due to their instability and hydrophobicity, the handling of standards and real samples containing β-carotene and related CPs requires preventive measures during specimen preparation, analyte extraction, and final analysis, to avoid artificial degradation and to preserve the initial analyte portfolio. This review critically discusses different preparation strategies of standards and treatment solutions, and also addresses their protection from oxidation. Additionally, in vitro oxidation strategies for the generation of oxidative model compounds are surveyed. Extraction methods are discussed for volatile and non-volatile CPs individually. Gas chromatography (GC), (ultra)high performance liquid chromatography (U)HPLC, and capillary electrochromatography (CEC) are reviewed as analytical tools for final analyte analysis. For identity confirmation of analytes, mass spectrometry (MS) is indispensable, and the appropriate ionization principles are comprehensively discussed. The final sections cover analysis of real samples and aspects of quality assurance, namely matrix effects and method

  17. Characterization of bacterial communities in sediments receiving various wastewater effluents with high-throughput sequencing analysis.

    PubMed

    Lu, Xiao-Ming; Lu, Peng-Zhen

    2014-04-01

    454 Pyrosequencing was applied to examine bacterial communities in sediment samples collected from a river receiving effluent discharge from rural domestic sewage (RDS) and various factories, including a tannery (TNS), clothing plant (CTS), and button factory (BTS), respectively. For each sample, 4,510 effective sequences were selected and used for the bacterial diversity and abundance analyses. In total, 1,288, 2,036, 1,800, and 2,150 operational taxonomic units were obtained at 3% distance cutoff in TNS, CTS, BTS, and RDS, respectively. Bacterial phylotype richness in RDS was higher than in the other samples, and TNS had the least richness. The most predominant class in the TNS, CTS, and BTS samples is Betaproteobacteria. Cyanobacteria (no_rank) is the most predominant one in the RDS sample. Circa 31% of the sequences in TNS were affiliated with the Rhodocyclales order. In the four samples, Aeromonas, Arcobacter, Clostridium, Legionella, Leptospira, Mycobacterium, Pseudomonas, and Treponema genera containing pathogenic bacteria were detected. Characterization of bacterial communities in sediments from various downstream branches indicated that distinct wastewater effluents have similar potential to reduce the natural variability in river ecosystems and contribute to river biotic homogenization. PMID:24477925

  18. Information needs of clinical teams: analysis of questions received by the Clinical Informatics Consult Service

    PubMed Central

    Jerome, Rebecca N.; Giuse, Nunzia B.; Wilder Gish, Kimbra; Sathe, Nila A.; Dietrich, Mary S.

    2001-01-01

    Objectives: To examine the types of questions received by Clinical Informatics Consult Service (CICS) librarians from clinicians on rounds and to analyze the number of clearly differentiated viewpoints provided in response. Design: Questions were retrieved from an internal database, the CICS Knowledge Base, and analyzed for redundancy by subject analysis. The unique questions were classified into ten categories by subject. Treatment-related questions were analyzed for the number of viewpoints represented in the librarian's response. Results: The CICS Knowledge Base contained 476 unique questions and 71 redundant questions. Among the unique queries, the top two categories accounted for 67%: treatment (36%) and disease description (31%). Within the treatment-related subset, 138 questions (59%) required representation of more than one viewpoint in the librarian's response. Discussion: Questions generated by clinicians frequently require comprehensive, critical appraisal of the medical literature, a need that can be filled by librarians trained in such techniques. This study demonstrates that many questions require representation of more than one viewpoint to answer completely. Moreover, the redundancy rate underscores the need for resources like the CICS Knowledge Base. By critically analyzing the medical literature, CICS librarians are providing a time-saving and valuable service for clinicians and charting new territory for librarians. PMID:11337949

  19. ROCView: prototype software for data collection in jackknife alternative free-response receiver operating characteristic analysis

    PubMed Central

    Thompson, J; Hogg, P; Thompson, S; Manning, D; Szczepura, K

    2012-01-01

    ROCView has been developed as an image display and response capture (IDRC) solution to image display and consistent recording of reader responses in relation to the free-response receiver operating characteristic paradigm. A web-based solution to IDRC for observer response studies allows observations to be completed from any location, assuming that display performance and viewing conditions are consistent with the study being completed. The simplistic functionality of the software allows observations to be completed without supervision. ROCView can display images from multiple modalities, in a randomised order if required. Following registration, observers are prompted to begin their image evaluation. All data are recorded via mouse clicks, one to localise (mark) and one to score confidence (rate) using either an ordinal or continuous rating scale. Up to nine “mark-rating” pairs can be made per image. Unmarked images are given a default score of zero. Upon completion of the study, both true-positive and false-positive reports can be downloaded and adapted for analysis. ROCView has the potential to be a useful tool in the assessment of modality performance difference for a range of imaging methods. PMID:22573294

  20. Numerical analysis of radiation propagation in innovative volumetric receivers based on selective laser melting techniques

    NASA Astrophysics Data System (ADS)

    Alberti, Fabrizio; Santiago, Sergio; Roccabruna, Mattia; Luque, Salvador; Gonzalez-Aguilar, Jose; Crema, Luigi; Romero, Manuel

    2016-05-01

    Volumetric absorbers constitute one of the key elements in order to achieve high thermal conversion efficiencies in concentrating solar power plants. Regardless of the working fluid or thermodynamic cycle employed, design trends towards higher absorber output temperatures are widespread, which lead to the general need of components of high solar absorptance, high conduction within the receiver material, high internal convection, low radiative and convective heat losses and high mechanical durability. In this context, the use of advanced manufacturing techniques, such as selective laser melting, has allowed for the fabrication of intricate geometries that are capable of fulfilling the previous requirements. This paper presents a parametric design and analysis of the optical performance of volumetric absorbers of variable porosity conducted by means of detailed numerical ray tracing simulations. Sections of variable macroscopic porosity along the absorber depth were constructed by the fractal growth of single-cell structures. Measures of performance analyzed include optical reflection losses from the absorber front and rear faces, penetration of radiation inside the absorber volume, and radiation absorption as a function of absorber depth. The effects of engineering design parameters such as absorber length and wall thickness, material reflectance and porosity distribution on the optical performance of absorbers are discussed, and general design guidelines are given.

  1. Transmitter and receiver antenna gain analysis for laser radar and communication systems

    NASA Technical Reports Server (NTRS)

    Klein, B. J.; Degnan, J. J.

    1973-01-01

    A comprehensive and fairly self-contained study of centrally obscured optical transmitting and receiving antennas is presented and is intended for use by the laser radar and communication systems designer. The material is presented in a format which allows the rapid and accurate evaluation of antenna gain. The Fresnel approximation to scalar wave theory is reviewed and the antenna analysis proceeds in terms of the power gain. Conventional range equations may then be used to calculate the power budget. The transmitter calculations, resulting in near and far field antenna gain patterns, assume the antenna is illuminated by a laser operating in the fundamental cavity mode. A simple equation is derived for matching the incident source distribution to a general antenna configuration for maximum on-axis gain. An interpretation of the resultant gain curves allows a number of auxiliary design curves to be drawn which display the losses in antenna gain due to pointing errors and the cone angle of the outgoing beam as a function of antenna size and central obscuration. The use of telescope defocusing as an approach to spreading the beam for target acquisition is compared to some alternate methods.
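
    The on-axis transmitter gain calculation can be sketched numerically: integrate the fundamental-mode (Gaussian) field over the obscured annulus, reference the result to the total beam power so that truncation loss is included, and compare against the uniform-aperture limit 4πA/λ². The parameter values below (wavelength, aperture radius, obscuration ratio, beam radius) are arbitrary examples, not values from the report.

```python
# Numerical sketch of on-axis far-field gain for a centrally obscured circular
# aperture illuminated by a Gaussian (fundamental-mode) beam.
import numpy as np
from scipy.integrate import quad

lam = 1.064e-6          # wavelength [m]
a = 0.10                # primary aperture radius [m]
gamma = 0.3             # obscuration ratio (secondary radius / primary radius)
w = 0.075               # 1/e field radius of the Gaussian beam at the aperture [m]

field = lambda r: np.exp(-(r / w) ** 2)                                # Gaussian field profile
coherent, _ = quad(lambda r: field(r) * 2 * np.pi * r, gamma * a, a)   # coherent aperture integral
total_power, _ = quad(lambda r: field(r) ** 2 * 2 * np.pi * r, 0.0, np.inf)

gain = (4 * np.pi / lam**2) * coherent**2 / total_power
print(f"on-axis gain = {10 * np.log10(gain):.1f} dB "
      f"(uniform-aperture limit {10 * np.log10(4 * np.pi * np.pi * a**2 / lam**2):.1f} dB)")
```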

  2. Crustal structure beneath SE Tibet from joint analysis of receiver functions and Rayleigh wave dispersion

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoxiao; Bao, Xuewei; Xu, Mingjie; Eaton, David W.; Song, Xiaodong; Wang, Liangshu; Ding, Zhifeng; Mi, Ning; Yu, Dayong; Li, Hua

    2014-03-01

    New constraints on the pattern of crustal flow in SE Tibet are obtained from joint analysis of receiver functions and Rayleigh wave dispersion with a newly deployed seismic array. The crust in the Sichuan-Yunnan Diamond Block has an average thickness of ~45 km and gradually thins toward the Indo-China Block to the west and the Yangtze Block to the east. High VP/VS ratios are detected to the west of the Xiaojiang fault, but not in the Yangtze Block to the east. The S wave velocity profile reveals that intra-crustal low-velocity zones (IC-LVZs) are strongly heterogeneous, with two LVZs in the middle and mid-lower crust, respectively, in marked contrast to previous observations of a single LVZ. Combined with other observations, the two IC-LVZs are interpreted as isolated channels of crustal flow at different depths beneath SE Tibet, resulting in the observed complex pattern of radial anisotropy and further elucidating patterns of flow and deformation.

  3. Receiver Function Analysis of Strong-motion Stations in Kaohsiung-Pingtung area, Taiwan

    NASA Astrophysics Data System (ADS)

    Lin, Che-Min; Wen, Kuo-Liang; Kuo, Chun-Hsiang; Huang, Jyun-Yan

    2016-04-01

    Kaohsiung City and Pingtung County are located in southern Taiwan and bounded on the west side by several active faults. The shallow velocity structure of the thick alluvial basin in this area should be delineated to understand the seismic site effect of strong ground motion. Receiver function (RF) analysis is a conventional technique for studying the structure of the crust and upper mantle beneath the seismometer. However, RF analysis of high-frequency acceleration seismograms has also recently proved feasible for estimating shallow structures. This study applied the RF technique to the strong-motion records of almost one hundred TSMIP stations in the Kaohsiung-Pingtung area to estimate the shallow shear-wave velocity structures. The averaged RFs of all stations exhibit obvious variations because of the different geology and site conditions. After forward modeling of the RFs based on Genetic Algorithm (GA) searching, the shallow shear-wave velocity structures beneath all the strong-motion stations in the Kaohsiung-Pingtung area were estimated to delineate the iso-depth contour maps of the main formation interfaces and a preliminary shallow 3D velocity model.

  4. Receiver Function Analysis using Ocean-bottom Seismometer Records around the Kii Peninsula, Southwestern Japan

    NASA Astrophysics Data System (ADS)

    Akuhara, T.; Mochizuki, K.

    2014-12-01

    Recent progress on receiver function (RF) analysis has provided us with new insight about the subsurface structure. The method is now gradually being more applied to records of ocean-bottom seismometers (OBSs). In the present study, we conducted RF analysis using OBS records at 32 observation sites around the Kii Peninsula, southwestern Japan, from 2003 to 2007 (Mochizuki et al., 2010, GRL). We addressed problems concerning water reverberations. We first checked the effects of water reverberations on the OBS vertical component records by calculating vertical P-wave RFs (Langston and Hammer, 2001, BSSA), where the OBS vertical component records were deconvolved by stacked traces of on-land records as source functions. The resultant RFs showed strong peaks corresponding to the water reverberations. Referring to these RFs, we constructed inverse filters to remove the effects of water reverberations from the vertical component records, which were assumed to be represented by two parameters, a two-way travel time within the water layer, and a reflection coefficient at the seafloor. We then calculated radial RFs using the filtered, reverberation-free, vertical component records of OBS data as source functions. The resultant RFs showed that some phases at later times became clearer than those obtained by an ordinary method. From the comparison with a previous tomography model (Akuhara et al., 2013, GRL), we identified phases originating from the oceanic Moho, which delineates the relationship between the depth of earthquakes and the oceanic Moho: seaward intraslab seismicity is high within the oceanic crust while the landward seismicity is high within the oceanic mantle. This character may be relevant to the dehydration process.
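
    The two-parameter dereverberation idea can be sketched as a frequency-domain inverse filter: a water-layer multiple train with two-way travel time T_w and seafloor reflection coefficient R is collapsed by multiplying the vertical-component spectrum by (1 + R·e^(-iωT_w)). This is a hedged toy demonstration on an idealized spike train; sign and polarity conventions depend on how R is defined, and the filter construction used in the study may differ in detail.

```python
# Toy demonstration of a two-parameter water-reverberation inverse filter.
import numpy as np

def remove_water_reverberation(z, dt, t_water, refl_coeff):
    n = len(z)
    freq = np.fft.rfftfreq(n, d=dt)
    inv_filter = 1.0 + refl_coeff * np.exp(-2j * np.pi * freq * t_water)
    return np.fft.irfft(np.fft.rfft(z) * inv_filter, n=n)

# synthetic test: a spike followed by its alternating-polarity water multiples
dt, t_water, R = 0.05, 2.6, 0.6
t = np.arange(0, 60, dt)
z = np.zeros_like(t)
for k in range(6):
    z[int(round(k * t_water / dt))] = (-R) ** k
z_clean = remove_water_reverberation(z, dt, t_water, R)
print(np.abs(z_clean[1:]).max())   # residual multiples are strongly suppressed
```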

  5. Cancer risk in older people receiving statin therapy: a meta-analysis of randomized controlled trials

    PubMed Central

    Liu, Hong-Wei; Bian, Su-Yan; Zhu, Qi-Wei; Zhao, Yue-Xiang

    2016-01-01

    Background Although statins are well tolerated by most aged people, their potential carcinogenicity is considered one of the biggest factors limiting the use of statins. The aim of the present study was to determine the risk of cancer in people aged over 60 years receiving statin therapy. Methods A comprehensive search for articles published up to December 2015 was performed; each randomized controlled trial (RCT) that compared the effects of statin monotherapy with placebo on the risk of cancer in people aged > 60 years was reviewed and data were abstracted. All the included studies were evaluated for publication bias and heterogeneity. Pooled odds ratio (OR) estimates and 95% confidence intervals (CIs) were calculated using the random effects model. Results A total of 12 RCTs, involving 62,927 patients (31,517 in the statin therapy group and 31,410 in the control group), with a follow-up duration of 1.9–5.4 years, contributed to the analysis. Statin therapy did not affect the overall incidence of cancer (OR = 1.03, 95% CI: 0.94–1.14, P = 0.52); subgroup analyses showed that neither the variety nor the chemical properties of the statins accounted for the incidence of cancer in older people. Conclusions Our meta-analysis findings do not support a potential cancer risk of statin treatment in people over 60 years old. Further targeted research with a longer follow-up duration is warranted to confirm this issue. PMID:27781060

  6. Polyomavirus JCV excretion and genotype analysis in HIV-infected patients receiving highly active antiretroviral therapy

    NASA Technical Reports Server (NTRS)

    Lednicky, John A.; Vilchez, Regis A.; Keitel, Wendy A.; Visnegarwala, Fehmida; White, Zoe S.; Kozinetz, Claudia A.; Lewis, Dorothy E.; Butel, Janet S.

    2003-01-01

    OBJECTIVE: To assess the frequency of shedding of polyomavirus JC virus (JCV) genotypes in urine of HIV-infected patients receiving highly active antiretroviral therapy (HAART). METHODS: Single samples of urine and blood were collected prospectively from 70 adult HIV-infected patients and 68 uninfected volunteers. Inclusion criteria for HIV-infected patients included an HIV RNA viral load < 1000 copies, CD4 cell count of 200-700 x 10^6 cells/l, and stable HAART regimen. PCR assays and sequence analysis were carried out using JCV-specific primers against different regions of the virus genome. RESULTS: JCV excretion in urine was more common in HIV-positive patients but not significantly different from that of the HIV-negative group [22/70 (31%) versus 13/68 (19%); P = 0.09]. HIV-positive patients lost the age-related pattern of JCV shedding (P = 0.13) displayed by uninfected subjects (P = 0.01). Among HIV-infected patients significant differences in JCV shedding were related to CD4 cell counts (P = 0.03). Sequence analysis of the JCV regulatory region from both HIV-infected patients and uninfected volunteers revealed all to be JCV archetypal strains. JCV genotypes 1 (36%) and 4 (36%) were the most common among HIV-infected patients, whereas type 2 (77%) was the most frequently detected among HIV-uninfected volunteers. CONCLUSION: These results suggest that JCV shedding is enhanced by modest depressions in immune function during HIV infection. JCV shedding occurred in younger HIV-positive persons than in the healthy controls. As the common types of JCV excreted varied among ethnic groups, JCV genotypes associated with progressive multifocal leukoencephalopathy may reflect demographics of those infected patient populations.

  7. Analysis of solar receiver flux distributions for US/Russian solar dynamic system demonstration on the MIR Space Station

    NASA Technical Reports Server (NTRS)

    Kerslake, Thomas W.; Fincannon, James

    1995-01-01

    The United States and Russia have agreed to jointly develop a solar dynamic (SD) system for flight demonstration on the Russian MIR space station starting in late 1997. Two important components of this SD system are the solar concentrator and heat receiver provided by Russia and the U.S., respectively. This paper describes optical analysis of the concentrator and solar flux predictions on target receiver surfaces. The optical analysis is performed using the code CIRCE2. These analyses account for finite sun size with limb darkening, concentrator surface slope and position errors, concentrator petal thermal deformation, gaps between petals, and the shading effect of the receiver support struts. The receiver spatial flux distributions are then combined with concentrator shadowing predictions. Geometric shadowing patterns are traced from the concentrator to the target receiver surfaces. These patterns vary with time depending on the chosen MIR flight attitude and orbital mechanics of the MIR spacecraft. The resulting predictions provide spatial and temporal receiver flux distributions for any specified mission profile. The impact these flux distributions have on receiver design and control of the Brayton engine are discussed.

  8. Experimental approach to validation of an analytical and numerical thermal analysis of a travelling wave tube

    NASA Astrophysics Data System (ADS)

    Wiejak, W.; Wymysłowski, A.

    2016-01-01

    A travelling wave tube (TWT) is an electronic vacuum microwave device which is used as a high-power microwave amplifier, mainly for telecommunication purposes, e.g. radar systems. TWTs are an alternative to semiconductor devices for high-power, high-frequency applications. The thermal behaviour of a TWT is one of the key aspects influencing its reliability and working parameters. The main goal of the research was to perform analytical, experimental and numerical analyses of the temperature distribution of a low-band TWT under typical working conditions. Because a full theoretical analysis is very complex, it was decided to compare the experimental results with numerical simulations as well as with simplified analytical formulas. As a first step of the presented research, analytical analysis and numerical modelling of the helix TWT were carried out. The objective of the thermal analysis was to assess the temperature distribution in different parts of the helix TWT assembly during extreme standard and working conditions. In the second stage of the research, the numerical results were validated by experimental measurements, which were carried out using specially designed TWT test samples and corresponding experimental measurement tools.

  9. Analytical Protocols for Analysis of Organic Molecules in Mars Analog Materials

    NASA Technical Reports Server (NTRS)

    Mahaffy, Paul R.; Brinkerhoff, W.; Buch, A.; Demick, J.; Glavin, D. P.

    2004-01-01

    A range of analytical techniques and protocols that might be applied to in situ investigations of martian fines, ices, and rock samples are evaluated by analysis of organic molecules in Mars analogues. These simulants from terrestrial (i.e. tephra from Hawaii) or extraterrestrial (meteoritic) samples are examined by pyrolysis gas chromatograph mass spectrometry (GCMS), organic extraction followed by chemical derivatization GCMS, and laser desorption mass spectrometry (LDMS). The combination of techniques imparts analysis breadth, since each technique provides a unique analysis capability for certain classes of organic molecules.

  10. Text Stream Trend Analysis using Multiscale Visual Analytics with Applications to Social Media Systems

    SciTech Connect

    Steed, Chad A; Beaver, Justin M; BogenII, Paul L.; Drouhard, Margaret MEG G; Pyle, Joshua M

    2015-01-01

    In this paper, we introduce a new visual analytics system, called Matisse, that allows exploration of global trends in textual information streams with specific application to social media platforms. Despite the potential for real-time situational awareness using these services, interactive analysis of such semi-structured textual information is a challenge due to the high-throughput and high-velocity properties. Matisse addresses these challenges through the following contributions: (1) robust stream data management, (2) automated sentiment/emotion analytics, (3) inferential temporal, geospatial, and term-frequency visualizations, and (4) a flexible drill-down interaction scheme that progresses from macroscale to microscale views. In addition to describing these contributions, our work-in-progress paper concludes with a practical case study focused on the analysis of Twitter 1% sample stream information captured during the week of the Boston Marathon bombings.

  11. Photoacoustic spectrum analysis for microstructure characterization in biological tissue: analytical model.

    PubMed

    Xu, Guan; Fowlkes, J Brian; Tao, Chao; Liu, Xiaojun; Wang, Xueding

    2015-05-01

    Photoacoustic (PA) spectrum analysis (PASA) has been found to have the ability to identify microstructures in phantoms and biological tissues. PASA adopts the procedures of ultrasound spectrum analysis, although the signal generation mechanisms of ultrasound backscatter and PA wave generation differ. The purpose of this study was to theoretically validate PASA. The analytical solution to the power spectrum of PA signals generated by identical microspheres following a discrete uniform random distribution in space was derived. The simulation and experimental validation of the analytical solution include: (i) the power spectrum profile of a single microsphere with a diameter of 300 μm, and (ii) the PASA parameters of the PA signals generated by randomly distributed microspheres 100, 200, 300, 400 and 500 μm in diameter, at concentrations of 30, 60, 120, 240 and 480 per 1.5^3 cm^3, in the observation range 0.5-13 MHz.
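
    The core PASA computation — a power spectrum in dB followed by a linear fit over the analysis band to extract slope, intercept, and midband-fit parameters — can be sketched as below. The synthetic waveform, sampling rate, and windowing choices are illustrative assumptions, not the paper's simulation or experimental settings.

```python
# Minimal PASA-style sketch: power spectrum of a PA A-line in dB plus a linear fit.
import numpy as np

fs = 50e6                                   # sampling rate [Hz]
t = np.arange(0, 20e-6, 1 / fs)
rng = np.random.default_rng(0)
signal = np.exp(-((t - 5e-6) / 0.3e-6) ** 2) + 0.02 * rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(signal * np.hanning(t.size))) ** 2
freq = np.fft.rfftfreq(t.size, 1 / fs)
band = (freq >= 0.5e6) & (freq <= 13e6)     # analysis range quoted in the abstract
power_db = 10 * np.log10(spec[band] / spec[band].max())

slope, intercept = np.polyfit(freq[band] / 1e6, power_db, 1)   # dB per MHz, dB
midband_fit = slope * ((0.5 + 13) / 2) + intercept
print(f"slope = {slope:.2f} dB/MHz, intercept = {intercept:.1f} dB, midband fit = {midband_fit:.1f} dB")
```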

  12. Biological monitoring systems for hazardous waste sites (production and analysis of analytical reference materials)

    SciTech Connect

    Bohman, V.R.; Blincoe, C.R.; Miller, G.C.; Scholl, R.L.; Sutton, W.W.

    1989-02-01

    EPA programs in pesticides, toxics, and hazardous-waste require analytical reference materials. This project emphasized the collection and analysis of urine, fat, and blood for ultimate use as reference samples and the practicality of using certain metabolites to indicate previous exposure to chlorinated hydrocarbons. The reference samples can, with verified compound concentrations, be used as qualifying samples when evaluating a technique to use for a particular analysis. However, the reference materials may be of greatest benefit when used by laboratories to determine analytical accuracy for samples of human urine, blood, etc. This is because the standards, like the unknown samples, will contain pollutant compounds and associated metabolites (all in vivo incorporated). Dairy animals were used during this study.

  13. Methods for integrating moderation and mediation: a general analytical framework using moderated path analysis.

    PubMed

    Edwards, Jeffrey R; Lambert, Lisa Schurer

    2007-03-01

    Studies that combine moderation and mediation are prevalent in basic and applied psychology research. Typically, these studies are framed in terms of moderated mediation or mediated moderation, both of which involve similar analytical approaches. Unfortunately, these approaches have important shortcomings that conceal the nature of the moderated and the mediated effects under investigation. This article presents a general analytical framework for combining moderation and mediation that integrates moderated regression analysis and path analysis. This framework clarifies how moderator variables influence the paths that constitute the direct, indirect, and total effects of mediated models. The authors empirically illustrate this framework and give step-by-step instructions for estimation and interpretation. They summarize the advantages of their framework over current approaches, explain how it subsumes moderated mediation and mediated moderation, and describe how it can accommodate additional moderator and mediator variables, curvilinear relationships, and structural equation models with latent variables.
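
    A minimal numerical sketch of the framework's central quantity — a conditional indirect effect that varies with the moderator — is given below for a first-stage moderated model, using simulated data and ordinary least squares; the coefficient names (a1, a3, b1) follow the usual path-analytic notation, and bootstrap confidence intervals, additional moderators, and latent variables are omitted.

```python
# First-stage moderated path sketch: indirect effect (a1 + a3*W) * b1 at chosen moderator values.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.standard_normal(n)                     # predictor
W = rng.standard_normal(n)                     # moderator
M = 0.4 * X + 0.2 * W + 0.3 * X * W + rng.standard_normal(n)   # mediator
Y = 0.5 * M + 0.2 * X + rng.standard_normal(n)                 # outcome

def ols(y, cols):
    A = np.column_stack([np.ones(n)] + cols)
    return np.linalg.lstsq(A, y, rcond=None)[0]

_, a1, a2, a3 = ols(M, [X, W, X * W])          # first-stage (moderated) path
_, b1, c_prime = ols(Y, [M, X])                # second-stage and direct paths

print(f"direct effect c' = {c_prime:.3f}")
for w in (-1.0, 1.0):                          # conditional indirect effects
    print(f"indirect effect at W={w:+.0f}: {(a1 + a3 * w) * b1:.3f}")
```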

  14. Receiver subsystem analysis report (RADL Item 4-1). 10-MWe Solar Thermal Central-Receiver Pilot Plant: solar-facilities design integration

    SciTech Connect

    Not Available

    1982-04-01

    The results are presented of the thermal-hydraulic, structural, and stress analyses required to demonstrate that the receiver design for the Barstow Solar Pilot Plant will satisfy the general design and performance requirements during the plant's design life. Recommendations resulting from those analyses and supporting test programs are presented regarding operation of the receiver. The analyses are limited to receiver subsystem major structural parts (primary tower, receiver unit core support structure), pressure parts (absorber panels; feedwater, condensate and steam piping/components; flash tank; and steam manifold) and shielding. (LEW)

  15. HEAP: Heat Energy Analysis Program, a computer model simulating solar receivers. [solving the heat transfer problem

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.

    1979-01-01

    HEAP is a computer program that can distinguish between different receiver designs and predict transient performance under variable solar flux, ambient temperature, and other operating conditions. Its basic structure fits a general heat transfer problem, with specific features that are custom-made for solar receivers. The code is written in the MBASIC computer language. The methodology followed in solving the heat transfer problem is explained. A program flow chart, an explanation of input and output tables, and an example of the simulation of a cavity-type solar receiver are included.
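
    The kind of transient heat-balance computation such a program performs can be sketched with a single-node model: absorbed solar flux in, convective and radiative losses out, and an explicit time step for the node temperature. The snippet below is only a schematic stand-in for HEAP (which is written in MBASIC and handles many interconnected nodes); every parameter value is an assumption chosen for illustration.

    ```python
    import numpy as np

    # Illustrative single-node receiver parameters (not from HEAP itself).
    m_cp   = 5.0e4      # thermal capacitance of the node, J/K
    area   = 1.0        # absorber area, m^2
    h_conv = 15.0       # convective loss coefficient, W/m^2/K
    eps    = 0.85       # emissivity of the absorber surface
    sigma  = 5.67e-8    # Stefan-Boltzmann constant, W/m^2/K^4
    T_amb  = 300.0      # ambient temperature, K

    def step_flux(t):
        """Absorbed solar flux, W/m^2 (simple cloud-passage profile)."""
        return 800.0 if (t % 3600.0) < 2700.0 else 200.0

    T = T_amb
    dt = 1.0                                  # s, explicit time step
    for k in range(int(4 * 3600 / dt)):       # simulate four hours
        t = k * dt
        q_in   = step_flux(t) * area
        q_conv = h_conv * area * (T - T_amb)
        q_rad  = eps * sigma * area * (T**4 - T_amb**4)
        T += dt * (q_in - q_conv - q_rad) / m_cp
        if k % 1800 == 0:
            print(f"t = {t/3600:4.1f} h  T = {T:6.1f} K")
    ```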

  16. Irreversible and reversible reactive chromatography: analytical solutions and moment analysis for rectangular pulse injections.

    PubMed

    Bibi, Sameena; Qamar, Shamsul; Seidel-Morgenstern, Andreas

    2015-03-13

    This work is concerned with the analysis of models for linear reactive chromatography describing irreversible A→B and reversible A↔B reactions. In contrast to previously published results, rectangular reactant pulses are injected into initially empty or pre-equilibrated columns assuming both Dirichlet and Danckwerts boundary conditions. The models consist of two partial differential equations, accounting for convection, longitudinal dispersion and first-order chemical reactions. Because of the effect of the mechanisms involved on solute transport, analytical and numerical solutions of the models could be helpful to understand, design and optimize chromatographic reactors. The Laplace transformation is applied to solve the model equations analytically for linear adsorption isotherms. Statistical temporal moments are derived from the solutions in the Laplace domain. Analytical results are compared with numerical predictions generated using a high-resolution finite volume scheme for two sets of boundary conditions. Several case studies are carried out to analyze reactive liquid chromatographic processes for a wide range of mass transfer and reaction kinetics. Good agreement in the results validates the correctness of the analytical solutions and the accuracy of the proposed numerical algorithm.
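
    Statistical temporal moments of the kind derived here in the Laplace domain can also be computed numerically from an elution profile, which is how analytical and simulated results are typically compared. The sketch below applies the standard moment definitions to a synthetic Gaussian profile; it is illustrative only and does not reproduce the paper's analytical expressions.

    ```python
    import numpy as np

    def _trapz(y, x):
        """Trapezoidal rule (kept local to avoid NumPy version differences)."""
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    def temporal_moments(t, c):
        """Zeroth moment, mean residence time and variance of an elution profile c(t)."""
        mu0 = _trapz(c, t)                                # area under the curve
        mu1 = _trapz(t * c, t) / mu0                      # first normalized moment (retention time)
        mu2c = _trapz((t - mu1) ** 2 * c, t) / mu0        # second central moment (band variance)
        return mu0, mu1, mu2c

    # Example: a Gaussian-shaped elution profile with retention time 10 and sigma 1.
    t = np.linspace(0.0, 30.0, 3001)
    c = np.exp(-0.5 * ((t - 10.0) / 1.0) ** 2)
    print(temporal_moments(t, c))   # ~ (2.5066, 10.0, 1.0)
    ```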

  17. Transient analysis of a molten salt central receiver (MSCR) in a solar power plant

    NASA Astrophysics Data System (ADS)

    Joshi, A.; Wang, C.; Akinjiola, O.; Lou, X.; Neuschaefer, C.; Quinn, J.

    2016-05-01

    Alstom is developing solar power tower plants utilizing molten salt as the working fluid. In a solar power tower, the molten salt central receiver (MSCR) atop the tower is constructed of banks of tubes arranged in panels, creating a heat transfer surface exposed to the solar irradiation from the heliostat field. The molten salt heat transfer fluid (HTF), in this case 60/40 %wt NaNO3-KNO3, flows in serpentine fashion through the surface, collecting sensible heat and raising the HTF temperature from 290°C to 565°C. The hot molten salt is stored and dispatched to produce superheated steam in a steam generator, which in turn produces electricity in the steam turbine generator. The MSCR-based power plant with a thermal energy storage system (TESS) is a fully dispatchable renewable power plant with a number of opportunities for operational and economic optimization. This paper presents operation and controls challenges for the MSCR and the overall power plant, and the use of dynamic-model computer simulation based transient analyses applied to a molten salt based solar thermal power plant. The study evaluates the current MSCR design, using a dynamic model, with emphasis on severe events affecting critical process response, such as MS temperature deviations, and recommends MSCR control design improvements based on the results. Cloud events are the scope of the transient analysis presented in this paper. The paper presents results from a comparative study examining the impacts of such events on key process variables related to controls and operation of the MSCR plant.

  18. Receiver operating characteristic analysis for the detection of simulated microcalcifications on mammograms using hardcopy images

    NASA Astrophysics Data System (ADS)

    Lai, Chao-Jen; Shaw, Chris C.; Whitman, Gary J.; Yang, Wei T.; Dempsey, Peter J.; Nguyen, Victoria; Ice, Mary F.

    2006-08-01

    The aim of this study was to compare mammography systems based on three different detectors—a conventional screen-film (SF) combination, an a-Si/CsI flat-panel (FP)-based detector, and a charge-coupled device (CCD)-based x-ray phosphor-based detector—for their performance in detecting simulated microcalcifications (MCs). 112-150 µm calcium carbonate grains were used to simulate MCs and were overlapped with a slab phantom of simulated 50% adipose/50% glandular breast tissue-equivalent material referred to as the uniform background. For the tissue structure background, 200-250 µm calcium carbonate grains were used and overlapped with an anthropomorphic breast phantom. All MC phantom images were acquired with and without magnification (1.8X). The hardcopy images were reviewed by five mammographers. A five-point confidence level rating was used to score each detection task. Receiver operating characteristic (ROC) analysis was performed, and the areas under the ROC curves (Azs) were used to compare the performances of the three mammography systems under various conditions. The results showed that, with a uniform background and contact images, the FP-based system performed significantly better than the SF and the CCD-based systems. For magnified images with a uniform background, the SF and the FP-based systems performed equally well and significantly better than the CCD-based system. With tissue structure background and contact images, the SF system performed significantly better than the FP and the CCD-based systems. With magnified images and a tissue structure background, the SF and the CCD-based systems performed equally well and significantly better than the FP-based system. In the detection of MCs in the fibroglandular and the heterogeneously dense regions, no significant differences were found except that the SF system performed significantly better than the CCD-based system in the fibroglandular regions for the contact images.
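
    For readers unfamiliar with the figure of merit, the area under the ROC curve (Az) can be estimated empirically from the five-point confidence ratings and the ground truth for each detection task. The sketch below uses scikit-learn's trapezoidal estimator (assuming scikit-learn is available) on made-up ratings; the published study used dedicated ROC-fitting software, so this is only a schematic illustration.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    # Hypothetical five-point confidence ratings for 20 regions (1 = definitely absent,
    # 5 = definitely present); truth = 1 where a simulated microcalcification was present.
    truth   = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
    ratings = np.array([5, 4, 5, 3, 4, 2, 5, 4, 3, 5, 1, 2, 1, 3, 2, 1, 2, 1, 3, 1])

    az = roc_auc_score(truth, ratings)            # empirical (trapezoidal) area under the ROC
    fpr, tpr, thresholds = roc_curve(truth, ratings)
    print(f"Az = {az:.3f}")
    print(np.column_stack([thresholds, fpr, tpr]))
    ```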

  19. Regional Cerebral Blood Flow In Dementia: Receiver-Operating-Characteristic Analysis

    NASA Astrophysics Data System (ADS)

    Zemcov, Alexander; Barclay, Laurie; Sansone, Joseph; Blass, John P.; Metz, Charles E.

    1985-06-01

    The coupling of mentation to regional cerebral blood flow (rCBF) has prompted the application of the Xe-133 inhalation method of measuring rCBF in the differential diagnosis of the two most common dementing diseases, Alzheimer's disease and multi-infarct dementia (MID). In this study receiver-operating-characteristic (ROC) curve analysis was used to assess the effectiveness of a 32-detector Xe-133 inhalation system in discriminating between patients with Alzheimer's disease and normal controls, MID patients and normal controls, and between patients with Alzheimer's disease and MID. The populations were clinically evaluated as 1) normal (age 63.1 ± 13.1, n=23), 2) Alzheimer's disease (age 72.7 ± 7.0, n=82), 3) MID (age 76.4 ± 7.6, n=27). The mean flow values for all detectors were lowest for the Alzheimer's disease group, larger for the MID group and largest for the normal controls. The dynamic relationship between the correct identifications (true positives) versus incorrect identifications (false positives) per detector for any 2 pairs of clinical groups varies as the cutoff value of flow is changed over the range of experimental blood flow values. Therefore a quantitative characterization of the "decision" or ROC curve (TP vs FP) for each detector and for each pair of clinical groups provides a measure of the overall diagnostic efficacy of the detector. Detectors directed approximately toward the speech, auditory and association cortices were most effective in discriminating between each of the dementia groups and the controls. Frontal detectors were diagnostically inefficient. The Xe-133 inhalation system provided virtually no diagnostic power in discriminating between the two forms of dementia, however. Therefore this imaging technology is most useful when assessing the general diagnostic state of dementia (Alzheimer's disease and MID) from normal cognitive function.

  20. Parametric Analysis of Cyclic Phase Change and Energy Storage in Solar Heat Receivers

    NASA Technical Reports Server (NTRS)

    Hall, Carsie A., III; Glakpe, Emmanuel K.; Cannon, Joseph N.; Kerslake, Thomas W.

    1997-01-01

    A parametric study on cyclic melting and freezing of an encapsulated phase change material (PCM), integrated into a solar heat receiver, has been performed. The cyclic nature of the present melt/freeze problem is relevant to latent heat thermal energy storage (LHTES) systems used to power solar Brayton engines in microgravity environments. Specifically, a physical and numerical model of the solar heat receiver component of NASA Lewis Research Center's Ground Test Demonstration (GTD) project was developed. Multi-conjugate effects such as the convective fluid flow of a low-Prandtl-number fluid, coupled with thermal conduction in the phase change material, containment tube and working fluid conduit, were accounted for in the model. A single-band thermal radiation model was also included to quantify reradiative energy exchange inside the receiver and losses through the aperture. The eutectic LiF-CaF2 was used as the phase change material (PCM) and a mixture of He/Xe was used as the working fluid coolant. A modified version of the computer code HOTTube was used to generate results in the two-phase regime. Results indicate that parametric changes in receiver gas inlet temperature and receiver heat input produce the greatest sensitivity in receiver gas exit temperatures.

  1. Cancer risk in patients receiving renal replacement therapy: A meta-analysis of cohort studies

    PubMed Central

    Shang, Weifeng; Huang, Liu; Li, Li; Li, Xiaojuan; Zeng, Rui; Ge, Shuwang; Xu, Gang

    2016-01-01

    It has been reported that patients receiving renal replacement therapy (RRT), including dialysis and kidney transplantation, tend to have an increased risk of cancer; however, studies on the degree of this risk have remained inconclusive. The present meta-analysis was therefore performed to quantify the cancer risk in patients with RRT. Cohort studies assessing overall cancer risk in RRT patients published before May 29, 2015 were included following systematic searches of PubMed, EMBASE and the reference lists of the studies retrieved. Random-effects meta-analyses were used to pool standardized incidence rates (SIRs) with 95% confidence intervals (CIs). Heterogeneity tests, sensitivity analyses and publication bias assessment were performed. A total of 18 studies including 22 cohort studies were eventually identified, which comprised a total of 1,528,719 patients. In comparison with the general population, the pooled SIR for patients with dialysis including non-melanoma skin cancer (NMSC), dialysis excluding NMSC, transplantation including NMSC, transplantation excluding NMSC and RRT were 1.40 (95% CI, 1.36–1.45), 1.35 (95% CI, 1.23–1.50), 3.26 (95% CI, 2.29–4.63), 2.08 (95% CI, 1.73–2.50) and 2.01 (95% CI, 1.70–2.38), respectively. The cancer risk was particularly high in subgroups of large sample size trials, female patients, younger patients (age at first dialysis, 0–34 years; age at transplantation, 0–20 years), the first year of RRT and non-Asian transplant patients. A significant association was also found between RRT and the majority of organ-specific cancers. However, neither dialysis nor transplantation was associated with breast, body of uterus, colorectal or prostate cancer. Significant heterogeneity was found regarding the association between RRT and overall cancer as well as the majority of site-specific cancer types. However, this heterogeneity had no substantial influence on the pooled SIR for overall cancer in RRT according to the
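
    Pooling standardized incidence ratios under a random-effects model is commonly done with the DerSimonian-Laird estimator on the log scale. The sketch below illustrates that calculation on invented study-level SIRs and confidence intervals; the numbers are not the meta-analysis data, and the authors' exact software and settings are not stated in the abstract.

    ```python
    import numpy as np

    # Hypothetical study-level SIRs with 95% CIs (illustrative only, not the paper's data).
    sir = np.array([1.3, 1.5, 1.2, 1.8, 1.4])
    lo  = np.array([1.1, 1.2, 1.0, 1.4, 1.1])
    hi  = np.array([1.5, 1.9, 1.5, 2.3, 1.8])

    y  = np.log(sir)                          # work on the log scale
    se = (np.log(hi) - np.log(lo)) / (2 * 1.96)
    w  = 1.0 / se**2                          # fixed-effect (inverse-variance) weights

    # DerSimonian-Laird estimate of the between-study variance tau^2.
    y_fixed = np.sum(w * y) / np.sum(w)
    Q   = np.sum(w * (y - y_fixed) ** 2)
    df  = len(y) - 1
    c   = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)

    w_re  = 1.0 / (se**2 + tau2)              # random-effects weights
    y_re  = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))

    pooled = np.exp(y_re)
    ci = np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)
    print(f"pooled SIR = {pooled:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), tau^2 = {tau2:.3f}")
    ```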

  2. Analytical Round Robin for Elastic-Plastic Analysis of Surface Cracked Plates: Phase I Results

    NASA Technical Reports Server (NTRS)

    Wells, D. N.; Allen, P. A.

    2012-01-01

    An analytical round robin for the elastic-plastic analysis of surface cracks in flat plates was conducted with 15 participants. Experimental results from a surface crack tension test in 2219-T8 aluminum plate provided the basis for the inter-laboratory study (ILS). The study proceeded in a blind fashion given that the analysis methodology was not specified to the participants, and key experimental results were withheld. This approach allowed the ILS to serve as a current measure of the state of the art for elastic-plastic fracture mechanics analysis. The analytical results and the associated methodologies were collected for comparison, and sources of variability were studied and isolated. The results of the study revealed that the J-integral analysis methodology using the domain integral method is robust, providing reliable J-integral values without being overly sensitive to modeling details. General modeling choices such as analysis code, model size (mesh density), crack tip meshing, or boundary conditions, were not found to be sources of significant variability. For analyses controlled only by far-field boundary conditions, the greatest source of variability in the J-integral assessment is introduced through the constitutive model. This variability can be substantially reduced by using crack mouth opening displacements to anchor the assessment. Conclusions provide recommendations for analysis standardization.

  3. Phase-recovery improvement using analytic wavelet transform analysis of a noisy interferogram cepstrum.

    PubMed

    Etchepareborda, Pablo; Vadnjal, Ana Laura; Federico, Alejandro; Kaufmann, Guillermo H

    2012-09-15

    We evaluate the extension of the exact nonlinear reconstruction technique developed for digital holography to the phase-recovery problems presented by other optical interferometric methods, which use carrier modulation. It is shown that the introduction of an analytic wavelet analysis in the ridge of the cepstrum transformation corresponding to the analyzed interferogram can be closely related to the well-known wavelet analysis of the interferometric intensity. Subsequently, the phase-recovery process is improved. The advantages and limitations of this framework are analyzed and discussed using numerical simulations in singular scalar light fields and in temporal speckle pattern interferometry. PMID:23041878

  4. Techno-economic analysis of receiver replacement scenarios in a parabolic trough field

    NASA Astrophysics Data System (ADS)

    Röger, Marc; Lüpfert, Eckhard; Caron, Simon; Dieckmann, Simon

    2016-05-01

    The heat loss of an evacuated parabolic trough receiver in solar thermal power plants typically ranges from below 150 up to 200 W/m at 350°C. Defects such as glass breakage by wind events with consequent coating degradation, anti-reflection coating degradation, or hydrogen accumulation in the annulus decrease the annual electricity production. This study examines the effect of different receiver performance loss scenarios on the energetic and economic output of a modern 150-MWel parabolic trough plant with 7.5 hours of molten-salt storage, located in Ma'an, Jordan, over its whole lifetime, by modeling it in an extended version of the software greenius. Compared to the reference scenario, a wind event in year 5 (10, 15) causing glass envelope breakage and consequential degradation of the selective coating on 5.6% of the receivers reduces the electricity output by 5.1% (3.8%, 2.5%) and the net present value by 36.5% (23.1%, 13.1%). The payback time of receiver replacement is only 0.7 years, and hence this measure is recommended. The hydrogen accumulation scenario (50% of the field affected) in event year 5 (10, 15) has the highest negative impact on performance and net present value of a project, reducing net electric output by 10.7% (8.1%, 5.4%) and the net present value by 77.0% (48.7%, 27.6%). Replacement of the receivers, or better yet an inexpensive repair solution, is an energetically and economically sensible measure. Investing in premium receivers with a Xe capsule during the construction phase is a viable option if the surplus cost of premium receivers is lower than 10 to 20 percent.
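
    The net-present-value and payback figures quoted above come from a discounted cash-flow comparison of the replacement cost against the recovered generation. A minimal sketch of that arithmetic is given below with placeholder numbers; the study's actual inputs come from the greenius model of the Ma'an plant and are not reproduced here.

    ```python
    import numpy as np

    def npv(rate, cash_flows):
        """Net present value of yearly cash flows, cash_flows[0] occurring at year 1."""
        years = np.arange(1, len(cash_flows) + 1)
        return np.sum(np.asarray(cash_flows) / (1.0 + rate) ** years)

    # Placeholder numbers (not from the paper): replacing damaged receivers costs
    # 2.0 M EUR and recovers 1.5% of an annual revenue of 60 M EUR for the remaining life.
    replacement_cost = 2.0e6
    annual_gain      = 0.015 * 60e6
    remaining_years  = 20
    discount_rate    = 0.07

    gains = [annual_gain] * remaining_years
    print(f"NPV of replacement  : {npv(discount_rate, gains) - replacement_cost:,.0f} EUR")
    print(f"Simple payback time : {replacement_cost / annual_gain:.1f} years")
    ```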

  5. Analytical quality of environmental analysis: Recent results and future trends of the IAEA-ILMR's Analytical Quality Control Program

    SciTech Connect

    Ballestra, S.; Vas, D.; Holm, E.; Lopez, J.J.; Parsi, P. )

    1988-01-01

    The Analytical Quality Control Services Program of the IAEA-ILMR covers a wide variety of intercalibration and reference materials. The purpose of the program is to ensure the comparability of the results obtained by the different participants and to enable laboratories engaged in low-level analyses of marine environmental materials to control their analytical performance. Within the past five years, the International Laboratory of Marine Radioactivity in Monaco has organized eight intercomparison exercises, on a world-wide basis, on natural materials of marine origin comprising sea water, sediment, seaweed and fish flesh. Results on artificial (fission and activation products, transuranium elements) and natural radionuclides were compiled and evaluated. Reference concentration values were established for a number of the intercalibration samples allowing them to become certified as reference materials available for general distribution. The results of the fish flesh sample and those of the deep-sea sediment are reviewed. The present status of three on-going intercomparison exercises on post-Chernobyl samples IAEA-306 (Baltic Sea sediment), IAEA-307 (Mediterranean sea-plant Posidonia oceanica) and IAEA-308 (Mediterranean mixed seaweed) is also described. 1 refs., 4 tabs.

  6. Social cognition and the cerebellum: A meta-analytic connectivity analysis.

    PubMed

    Van Overwalle, Frank; D'aes, Tine; Mariën, Peter

    2015-12-01

    This meta-analytic connectivity modeling (MACM) study explores the functional connectivity of the cerebellum with the cerebrum in social cognitive processes. In a recent meta-analysis, Van Overwalle, Baetens, Mariën, and Vandekerckhove (2014) documented that the cerebellum is implicated in social processes of "body" reading (mirroring; e.g., understanding other persons' intentions from observing their movements) and "mind" reading (mentalizing, e.g., inferring other persons' beliefs, intentions or personality traits, reconstructing persons' past, future, or hypothetical events). In a recent functional connectivity study, Buckner et al. (2011) offered a novel parcellation of cerebellar topography that substantially overlaps with the cerebellar meta-analytic findings of Van Overwalle et al. (2014). This overlap suggests that the involvement of the cerebellum in social reasoning depends on its functional connectivity with the cerebrum. To test this hypothesis, we explored the meta-analytic co-activations as indices of functional connectivity between the cerebellum and the cerebrum during social cognition. The MACM results confirm substantial and distinct connectivity with respect to the functions of (a) action understanding ("body" reading) and (b) mentalizing ("mind" reading). The consistent and strong connectivity findings of this analysis suggest that cerebellar activity during social judgments reflects distinct mirroring and mentalizing functionality, and that these cerebellar functions are connected with corresponding functional networks in the cerebrum. PMID:26419890

  7. Estimation and analysis of GPS receiver differential code biases using KGN in Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Choi, B. K.; Cho, J. H.; Lee, S. J.

    2011-05-01

    The estimation of total electron content (TEC) from the Global Positioning System (GPS) can be seriously affected by the differential code biases (DCB), also referred to as inter-frequency biases (IFB), of the satellites and receivers, so the accuracy of GPS-TEC values depends on the error in DCB estimation. In this paper, we propose a singular value decomposition (SVD) method to estimate the DCBs of GPS satellites and receivers using the Korean GPS network (KGN) in South Korea. The receiver DCBs of about 49 GPS reference stations in the KGN were determined for accurate estimation of the regional ionospheric TEC. The receiver DCBs obtained from the daily solution show large biases ranging from +5 to +27 ns for geomagnetically quiet days. The receiver DCB of the SUWN reference station was compared with the estimates of the IGS and JPL global ionosphere maps (GIM); the results show comparatively good agreement, within 0.2 ns. After correcting the receiver DCBs and with the satellite DCBs known, the behavior of the estimated TEC was compared with that of the GIMs for three consecutive days, showing good agreement between the KASI model and the GIMs.
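
    The estimator described above reduces to solving a large overdetermined linear system, relating slant-TEC observations to ionospheric model coefficients plus satellite and receiver DCBs, with the singular value decomposition. The snippet below shows a generic SVD (truncated pseudo-inverse) least-squares solution on a random stand-in matrix; the real design matrix, the zero-mean constraint usually imposed on satellite DCBs, and the paper's exact formulation are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in design matrix: 500 slant-TEC observations, 40 unknowns
    # (ionospheric coefficients plus satellite and receiver DCBs).
    A = rng.normal(size=(500, 40))
    x_true = rng.normal(size=40)
    b = A @ x_true + rng.normal(scale=0.1, size=500)   # noisy observations

    # SVD-based least-squares solution x = V S^+ U^T b (a truncated pseudo-inverse).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    tol = max(A.shape) * np.finfo(float).eps * s[0]
    s_inv = np.where(s > tol, 1.0 / s, 0.0)            # drop tiny singular values
    x_hat = Vt.T @ (s_inv * (U.T @ b))

    print("max coefficient error:", np.max(np.abs(x_hat - x_true)))
    ```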

  8. Analysis of two kinds of tree as physical barriers against erythemal UVB radiation received.

    PubMed

    Ysasi, Gonzalo G; Ribera, Luis J C

    2013-01-01

    Differences between global UVER (erythemal ultraviolet solar radiation) received under full sun and diffuse radiation received under the shadow of two types of tree are analyzed to check the importance of these components for human exposure to UV radiation. VioSpor Blue Line spore dosimeters were used to measure the erythemal dose of UV radiation (the dose able to produce erythema in human skin). The response profile of these devices is very similar to that of human skin; thus they are suitable for determining and predicting the interactions between erythemal UV and human skin. Measurements were obtained on relatively clear days from February to December 2009 between 9:30 and 15:30 h. Three dosimeters were placed on a horizontal surface: one in full sun and the other two under the shadow of each tree. Values of UVER in both cases, in full sun and under the shadow of the pine and the willow (Sauce), were obtained. In addition, the doses received in each case were compared with the exposure limits recommended by the International Commission on Non-Ionizing Radiation Protection (ICNIRP). Finally, the average daily irradiance received under the shadow of each tree was also compared with that received in full sun, using two PMA2100 radiometers placed on a horizontal surface. PMID:23190700

  9. Analytical testing

    NASA Technical Reports Server (NTRS)

    Flannelly, W. G.; Fabunmi, J. A.; Nagy, E. J.

    1981-01-01

    Analytical methods for combining flight acceleration and strain data with shake test mobility data to predict the effects of structural changes on flight vibrations and strains are presented. This integration of structural dynamic analysis with flight performance is referred to as analytical testing. The objective of this methodology is to analytically estimate the results of flight testing contemplated structural changes with minimum flying and change trials. The category of changes to the aircraft includes mass, stiffness, absorbers, isolators, and active suppressors. Examples of applying the analytical testing methodology using flight test and shake test data measured on an AH-1G helicopter are included. The techniques and procedures for vibration testing and modal analysis are also described.

  10. Analytical methods of the U.S. Geological Survey's New York District Water-Analysis Laboratory

    USGS Publications Warehouse

    Lawrence, Gregory B.; Lincoln, Tricia A.; Horan-Ross, Debra A.; Olson, Mark L.; Waldron, Laura A.

    1995-01-01

    The New York District of the U.S. Geological Survey (USGS) in Troy, N.Y., operates a water-analysis laboratory for USGS watershed-research projects in the Northeast that require analyses of precipitation and of dilute surface water and soil water for major ions; it also provides analyses of certain chemical constituents in soils and soil-gas samples. This report presents the methods for chemical analyses of water samples, soil-water samples, and soil-gas samples collected in watershed-research projects. The introduction describes the general materials and techniques for each method and explains the USGS quality-assurance program and data-management procedures; it also explains the use of cross references to the three most commonly used methods manuals for analysis of dilute waters. The body of the report describes the analytical procedures for (1) solution analysis, (2) soil analysis, and (3) soil-gas analysis. The methods are presented in alphabetical order by constituent. The method for each constituent is preceded by (1) reference codes for pertinent sections of the three manuals mentioned above, (2) a list of the method's applications, and (3) a summary of the procedure. The methods section for each constituent contains the following categories: instrumentation and equipment, sample preservation and storage, reagents and standards, analytical procedures, quality control, maintenance, interferences, safety considerations, and references. Sufficient information is presented for each method to allow the resulting data to be appropriately applied to environmental samples.

  11. A kinetic analysis using fractals of cellular analyte-receptor binding and dissociation.

    PubMed

    Sadana, A; Vo-Dinh, T

    2001-02-01

    A fractal analysis is presented for cellular analyte-receptor binding and dissociation kinetics using a biosensor. Data taken from the literature may be modelled, in the case of binding, using either a single-fractal or a dual-fractal analysis. The dual-fractal analysis represents a change in the binding mechanism as the reaction progresses on the surface. The predictive relationship developed for the equilibrium constant, K (affinity, which is equal to k(d)/k(1or2)), as a function of the analyte concentration is of particular value since it provides a means by which the affinity may be manipulated. This should be of assistance in cell-surface reactions, drug-candidate optimization and the design of immunodiagnostic devices. Relationships are also presented for the binding and dissociation rate coefficients as a function of their corresponding fractal dimension, D(f), or the degree of heterogeneity that exists on the surface, and the analyte concentration in solution. When analyte-receptor binding or dissociation is involved, an increase in the heterogeneity on the surface (an increase in D(f) or D(fd), as the case may be) leads to an increase in the binding and dissociation rate coefficients. It is suggested that an increase in the degree of heterogeneity on the surface leads to an increase in the turbulence on the surface owing to the irregularities on the surface. This turbulence promotes mixing, minimizes diffusional limitations and leads subsequently to an increase in the binding and dissociation rate coefficients. The binding and dissociation rate coefficients are rather sensitive to the degree of heterogeneity, D(f) and D(fd), respectively, that exists on the biosensor surface. The heterogeneity on the surface in general affects the binding and dissociation rate coefficients differently. In general, the analyte concentration in solution has a mild effect on the fractal dimension for binding or the fractal dimension for dissociation. This is indicated
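
    In single-fractal analyses of this kind, the bound analyte-receptor concentration during the binding phase is commonly modeled as growing like t^p, with the fractal dimension recovered as Df = 3 - 2p (an assumed form stated here for illustration, not taken from this abstract). The sketch below fits that power law to a synthetic biosensor binding curve; the data and coefficients are invented.

    ```python
    import numpy as np

    def fractal_dimension_from_binding(t, bound):
        """Fit bound ~ k * t**p on log-log axes and convert to a fractal dimension
        via Df = 3 - 2*p (single-fractal binding form assumed here)."""
        p, log_k = np.polyfit(np.log(t), np.log(bound), 1)
        return 3.0 - 2.0 * p, np.exp(log_k)

    # Synthetic biosensor binding curve with p = 0.35 (i.e. Df = 2.3) plus noise.
    rng = np.random.default_rng(4)
    t = np.linspace(1.0, 300.0, 200)            # seconds
    bound = 0.8 * t ** 0.35 * rng.lognormal(sigma=0.02, size=t.size)

    df_est, k_est = fractal_dimension_from_binding(t, bound)
    print(f"estimated Df = {df_est:.2f} (true 2.30), binding rate coefficient k = {k_est:.2f}")
    ```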

  12. Dynamic buckling analysis of delaminated composite plates using semi-analytical finite strip method

    NASA Astrophysics Data System (ADS)

    Ovesy, H. R.; Totounferoush, A.; Ghannadpour, S. A. M.

    2015-05-01

    Delamination phenomena can become of paramount importance where the design of composite plates is concerned. In the current study, the effect of a through-the-width delamination on the dynamic buckling behavior of a composite plate is studied by implementing a semi-analytical finite strip method. In this method, the energy and work integrations are computed analytically owing to the implementation of trigonometric functions. Moreover, the method can lead to converged results with a comparatively small number of degrees of freedom. These features have made the method quite efficient. To account for delamination effects, the displacement field is enriched by adding appropriate terms. Also, penetration of the delamination surfaces is prevented by incorporating an appropriate contact scheme into the time response analysis. Some selected results are validated against those available in the literature.

  13. Modeling of closed-loop recycling liquid-liquid chromatography: Analytical solutions and model analysis.

    PubMed

    Kostanyan, Artak E

    2015-08-01

    In closed-loop recycling (CLR) chromatography, the effluent from the outlet of a column is directly returned into the column through the sample feed line and continuously recycled until the required separation is reached. To select optimal operating conditions for the separation of a given feed mixture, an appropriate mathematical description of the process is required. This work is concerned with the analysis of models for the CLR separations. Due to the effect of counteracting mechanisms on separation of solutes, analytical solutions of the models could be helpful to understand and optimize chromatographic processes. The objective of this work was to develop analytical expressions to describe the CLR counter-current (liquid-liquid) chromatography (CCC). The equilibrium dispersion and cell models were used to describe the transport and separation of solutes inside a CLR CCC column. The Laplace transformation is applied to solve the model equations. Several possible CLR chromatography methods for the binary and complex mixture separations are simulated.

  14. Preliminary results from receiver function analysis in a seismological network across the Pamir

    NASA Astrophysics Data System (ADS)

    Schneider, Felix M.; Yuan, Xiaohui; Sippl, Christan; Schurr, Bernd; Mechie, James; Minaev, Vlad; Oimahmadov, Ilhomjon; Gadoev, Mustafo; Abdybachaev, Ulan A.

    2010-05-01

    The multi-disciplinary TIen Shan-PAmir GEodynamic (TIPAGE) program aims to investigate the dynamics of the orogeny of the Tien Shan and Pamir mountains, which are situated in south Kyrgyzstan and east Tajikistan in Central Asia. Deformation and uplift accompanied by crustal thickening is mainly induced by the collision between the Indian and Eurasian continental plates. As a local feature, this collision provides the world's largest active intra-continental subduction zone. Within the framework of the TIPAGE program we operate a temporary seismic array consisting of 32 broadband and 8 short-period seismic stations for a period of two years (from 2008 to 2010), covering an area of 300 x 300 km over the main part of the central Pamir plateau and the Alai range of the southern Tien Shan. In the first year, 24 broadband stations were set up in a 350-km-long north-south profile geometry from Osh in southern Kyrgyzstan to Zorkul in south-eastern Tajikistan with approximately 15 km station spacing. We perform a receiver function (RF) analysis of converted P and S waves from teleseismic earthquakes at epicentral distances of 35-95 degrees with a minimum magnitude of 5.5. To do so, we decompose their wavefields by rotating the coordinate systems of the recorded seismograms from an N,E,Z into an SH,SV,P system. RFs are isolated by deconvolution of the P component from the SH and SV components. They provide a robust tool to locate discontinuities in wave velocity such as the Moho and thus represent the method of choice to determine crustal thickness. First results show a crustal thickness of 70-80 km. Xenolith findings from depths of 100 km reported by Hacker et al. (2005) suggest even higher values. The N-S profile geometry will produce a high-resolution RF image to map the gross crustal and lithospheric structure. In addition, a 2D network with 16 additional stations will enable an investigation of lateral structure variation. We give an introduction to the project and
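
    Receiver functions are often isolated in the frequency domain rather than with the time-domain deconvolution used here; the water-level variant below is shown only as a compact, runnable stand-in for the deconvolution step described in the abstract. The Gaussian filter width, the water level, and the synthetic two-pulse test are all assumptions chosen for illustration.

    ```python
    import numpy as np

    def water_level_rf(p, sv, dt, water_level=0.01, gauss_a=2.0):
        """Receiver function by frequency-domain water-level deconvolution of P from SV.

        p, sv       : P- and SV-component seismograms (same length, same sampling)
        dt          : sample interval in seconds
        water_level : fraction of the peak P power used as the spectral floor
        gauss_a     : width parameter of the Gaussian low-pass filter
        """
        n = len(p)
        P = np.fft.rfft(p)
        SV = np.fft.rfft(sv)
        omega = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)

        denom = np.maximum(np.abs(P) ** 2, water_level * np.max(np.abs(P) ** 2))
        gauss = np.exp(-(omega ** 2) / (4.0 * gauss_a ** 2))
        return np.fft.irfft(SV * np.conj(P) / denom * gauss, n)

    # Synthetic test: the SV component is the P wavelet delayed by 4 s and scaled,
    # so the receiver function should show a pulse near a 4 s lag.
    dt, n = 0.1, 1024
    t = np.arange(n) * dt
    wavelet = np.exp(-((t - 10.0) / 0.5) ** 2)
    p = wavelet
    sv = 0.3 * np.roll(wavelet, int(4.0 / dt))
    rf = water_level_rf(p, sv, dt)
    print("RF peak at t =", t[np.argmax(rf)], "s")
    ```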

  15. Crustal and upper mantle structure beneath Mount Ruapehu, New Zealand derived by Receiver Function Analysis

    NASA Astrophysics Data System (ADS)

    Kinoshita, S.; Savage, M. K.; Aoki, Y.

    2013-12-01

    We reveal a detailed image of velocity discontinuities beneath Mount Ruapehu, New Zealand, by receiver function (RF) analysis. We calculated RFs from 235 teleseismic earthquakes from January 2006 to December 2010 recorded at 25 GeoNet stations around Mt. Ruapehu. The obtained RFs show positive phases at ~3-5 and ~6-8 seconds, which represent the conversions from P to S waves at the Moho of the Australian (AUS) and subducting Pacific (PAC) plates, respectively. Common Conversion Point (CCP) stacking of RF amplitudes converts these to positive velocity boundaries at 50-100 km and 20-40 km; these correspond to the Moho of the PAC slab subducting northwestward and of the AUS plate, respectively. A CCP image with a cutoff frequency of 1 Hz along the subducted slab indicates that the positive Moho phase of the slab disappears at a depth of ~70 km just below Ruapehu, while it continues down to 100 km to the south of Ruapehu. This is consistent with the result of Bannister et al. (2007) that a negative RF phase at the top of the PAC slab appears to terminate at ~60 km depth 100 km to the northwest of Ruapehu. Our study, combined with Bannister et al. (2007), thus supports the presence of a structural boundary between the subducted Hikurangi Plateau and oceanic crust of normal thickness at 60 km (Reyners et al. 2006). The north-south (N-S) CCP images to the west of Ruapehu show that the Moho of the AUS plate deepens southward, as previously reported in Salmon et al. (2011). An N-S CCP image across Ruapehu reveals a discontinuity in the velocity boundary just below Ruapehu. RFs with a cutoff frequency of 1 Hz show a remarkably broad positive phase at ~30 km depth to the north of Ruapehu and two positive phases at 25 km and 40 km depth to the south of Ruapehu. RFs with a cutoff frequency of 2 Hz show two clear positive phases at 20 km and 30 km depths to the north of Ruapehu and two positive phases at 25 km and 40 km depth to the south of Ruapehu. These positive phases

  16. Receiver operating characteristic analysis for the detection of simulated microcalcifications on mammograms using hardcopy images

    PubMed Central

    Lai, Chao-Jen; Shaw, Chris C; Whitman, Gary J; Yang, Wei T; Dempsey, Peter J; Nguyen, Victoria; Ice, Mary F

    2007-01-01

    The aim of this study was to compare mammography systems based on three different detectors—a conventional screen-film (SF) combination, an a-Si/CsI flat-panel (FP)-based detector, and a charge-coupled device (CCD)-based x-ray phosphor-based detector—for their performance in detecting simulated microcalcifications (MCs). 112–150 μm calcium carbonate grains were used to simulate MCs and were overlapped with a slab phantom of simulated 50% adipose/50% glandular breast tissue-equivalent material referred to as the uniform background. For the tissue structure background, 200–250 μm calcium carbonate grains were used and overlapped with an anthropomorphic breast phantom. All MC phantom images were acquired with and without magnification (1.8X). The hardcopy images were reviewed by five mammographers. A five-point confidence level rating was used to score each detection task. Receiver operating characteristic (ROC) analysis was performed, and the areas under the ROC curves (Azs) were used to compare the performances of the three mammography systems under various conditions. The results showed that, with a uniform background and contact images, the FP-based system performed significantly better than the SF and the CCD-based systems. For magnified images with a uniform background, the SF and the FP-based systems performed equally well and significantly better than the CCD-based system. With tissue structure background and contact images, the SF system performed significantly better than the FP and the CCD-based systems. With magnified images and a tissue structure background, the SF and the CCD-based systems performed equally well and significantly better than the FP-based system. In the detection of MCs in the fibroglandular and the heterogeneously dense regions, no significant differences were found except that the SF system performed significantly better than the CCD-based system in the fibroglandular regions for the contact images. (Some figures in this

  17. A brief history of free-response receiver operating characteristic paradigm data analysis.

    PubMed

    Chakraborty, Dev P

    2013-07-01

    In the receiver operating characteristic paradigm the observer assigns a single rating to each image and the location of the perceived abnormality, if any, is ignored. In the free-response receiver operating characteristic paradigm the observer is free to mark and rate as many suspicious regions as are considered clinically reportable. Credit for a correct localization is given only if a mark is sufficiently close to an actual lesion; otherwise, the observer's mark is scored as a location-level false positive. Until fairly recently there existed no accepted method for analyzing the resulting relatively unstructured data containing random numbers of mark-rating pairs per image. This report reviews the history of work in this field, which has now spanned more than five decades. It introduces terminology used to describe the paradigm, proposed measures of performance (figures of merit), ways of visualizing the data (operating characteristics), and software for analyzing free-response receiver operating characteristic studies.

  18. Space Trajectories Error Analysis (STEAP) Programs. Volume 1: Analytic manual, update

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Manual revisions are presented for the modified and expanded STEAP series. The STEAP 2 is composed of three independent but related programs: NOMAL for the generation of n-body nominal trajectories performing a number of deterministic guidance events; ERRAN for the linear error analysis and generalized covariance analysis along specific targeted trajectories; and SIMUL for testing the mathematical models used in the navigation and guidance process. The analytic manual provides general problem description, formulation, and solution and the detailed analysis of subroutines. The programmers' manual gives descriptions of the overall structure of the programs as well as the computational flow and analysis of the individual subroutines. The user's manual provides information on the input and output quantities of the programs. These are updates to N69-36472 and N69-36473.

  19. Screening for Depressive Disorders Using the Mood and Anxiety Symptoms Questionnaire Anhedonic Depression Scale: A Receiver-Operating Characteristic Analysis

    ERIC Educational Resources Information Center

    Bredemeier, Keith; Spielberg, Jeffery M.; Silton, Rebecca Levin; Berenbaum, Howard; Heller, Wendy; Miller, Gregory A.

    2010-01-01

    The present study examined the utility of the anhedonic depression scale from the Mood and Anxiety Symptoms Questionnaire (MASQ-AD scale) as a way to screen for depressive disorders. Using receiver-operating characteristic analysis, we examined the sensitivity and specificity of the full 22-item MASQ-AD scale, as well as the 8- and 14-item…

  20. Assessing the Classification Accuracy of Early Numeracy Curriculum-Based Measures Using Receiver Operating Characteristic Curve Analysis

    ERIC Educational Resources Information Center

    Laracy, Seth D.; Hojnoski, Robin L.; Dever, Bridget V.

    2016-01-01

    Receiver operating characteristic curve (ROC) analysis was used to investigate the ability of early numeracy curriculum-based measures (EN-CBM) administered in preschool to predict performance below the 25th and 40th percentiles on a quantity discrimination measure in kindergarten. Areas under the curve derived from a sample of 279 students ranged…

  1. Prediction Accuracy of the Washington and Illinois Risk Assessment Instruments: An Application of Receiver Operating Characteristic Curve Analysis.

    ERIC Educational Resources Information Center

    Camasso, Michael J.; Jagannathan, Radha

    1995-01-01

    Compares the predictive performances of the Illinois CANTS 17B and the Washington State Risk Matrix on a sample of New Jersey child protective services cases using logistic regression and receiver operating characteristic curve analysis. Both instruments predict case recidivism, closings, and substantiation with probabilities greater than chance.…

  2. Improvement of analytical capabilities of neutron activation analysis laboratory at the Colombian Geological Survey

    NASA Astrophysics Data System (ADS)

    Parrado, G.; Cañón, Y.; Peña, M.; Sierra, O.; Porras, A.; Alonso, D.; Herrera, D. C.; Orozco, J.

    2016-07-01

    The Neutron Activation Analysis (NAA) laboratory at the Colombian Geological Survey has developed a technique for multi-elemental analysis of soil and plant matrices, based on Instrumental Neutron Activation Analysis (INAA) using the comparator method. In order to evaluate the analytical capabilities of the technique, the laboratory has been participating in inter-comparison tests organized by Wepal (Wageningen Evaluating Programs for Analytical Laboratories). In this work, the experimental procedure and results for the multi-elemental analysis of four soil and four plant samples from participation in the first Wepal proficiency-test round of 2015 are presented. Only elements with radioactive isotopes having medium and long half-lives have been evaluated: 15 elements for soils (As, Ce, Co, Cr, Cs, Fe, K, La, Na, Rb, Sb, Sc, Th, U and Zn) and 7 elements for plants (Br, Co, Cr, Fe, K, Na and Zn). The performance assessment by Wepal, based on Z-score distributions, showed that most results had |Z-scores| ≤ 3.
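
    A proficiency-test Z-score of the type used in such assessments expresses the deviation of a laboratory result from the assigned value in units of the standard deviation for proficiency assessment. The sketch below applies the usual |Z| ≤ 2 / < 3 / ≥ 3 reading to invented numbers; Wepal's actual assigned values and scoring details may differ.

    ```python
    def z_score(lab_value, assigned_value, sigma_p):
        """Proficiency-test Z-score: deviation from the assigned value in units of
        the standard deviation for proficiency assessment (sigma_p)."""
        return (lab_value - assigned_value) / sigma_p

    # Illustrative numbers only (mg/kg), not Wepal's actual assigned values.
    results = {
        "Fe": (31500.0, 30000.0, 1800.0),
        "Zn": (95.0, 100.0, 8.0),
        "Co": (12.9, 10.0, 0.9),
    }
    for element, (lab, assigned, sigma) in results.items():
        z = z_score(lab, assigned, sigma)
        verdict = "satisfactory" if abs(z) <= 2 else "questionable" if abs(z) < 3 else "unsatisfactory"
        print(f"{element}: Z = {z:+.2f} ({verdict})")
    ```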

  3. Perturbation and Nonlinear Dynamic Analysis of Acoustic Phonatory Signal in Parkinsonian Patients Receiving Deep Brain Stimulation

    ERIC Educational Resources Information Center

    Lee, Victoria S.; Zhou, Xiao Ping; Rahn, Douglas A., III; Wang, Emily Q.; Jiang, Jack J.

    2008-01-01

    Nineteen PD patients who received deep brain stimulation (DBS), 10 non-surgical (control) PD patients, and 11 non-pathologic age- and gender-matched subjects performed sustained vowel phonations. The following acoustic measures were obtained on the sustained vowel phonations: correlation dimension (D[subscript 2]), percent jitter, percent shimmer,…

  4. Development of Analytical Algorithm for the Performance Analysis of Power Train System of an Electric Vehicle

    NASA Astrophysics Data System (ADS)

    Kim, Chul-Ho; Lee, Kee-Man; Lee, Sang-Heon

    Power train system design is one of the key R&D areas in the development of a new automobile, because an optimum engine size with an adaptable power transmission that can meet the design requirements of the new vehicle is obtained through the system design. For electric vehicle design in particular, a very reliable power train design algorithm is required for energy efficiency. In this study, an analytical simulation algorithm is developed to estimate the driving performance of a designed power train system of an electric vehicle. The principal theory behind the simulation algorithm is conservation of energy, combined with analytical and experimental data such as rolling resistance, aerodynamic drag, and the mechanical efficiency of the power transmission. From the analytical calculation results, the running resistance of the designed vehicle is obtained as the operating conditions of the vehicle, such as road grade and vehicle speed, change. The tractive performance of the model vehicle with the given power train system is also calculated at each gear ratio of the transmission. Through analysis of these two results, running resistance and tractive performance, the driving performance of the designed electric vehicle is estimated and used to evaluate the suitability of the designed power train system for the vehicle.
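
    The road-load and tractive-force balance described above can be written down directly. The sketch below evaluates rolling, grade and aerodynamic resistance against the force available through a single reduction gear; all vehicle parameters are placeholders, not values from the paper.

    ```python
    import numpy as np

    # Illustrative vehicle parameters (not from the paper).
    mass = 1500.0   # kg
    c_r  = 0.012    # rolling-resistance coefficient
    rho  = 1.2      # air density, kg/m^3
    c_d  = 0.30     # aerodynamic drag coefficient
    area = 2.2      # frontal area, m^2
    g    = 9.81     # m/s^2

    def running_resistance(speed_mps, grade_rad=0.0):
        """Total road load: rolling resistance + grade resistance + aerodynamic drag (N)."""
        rolling = mass * g * c_r * np.cos(grade_rad)
        grade   = mass * g * np.sin(grade_rad)
        aero    = 0.5 * rho * c_d * area * speed_mps ** 2
        return rolling + grade + aero

    def tractive_force(motor_torque, gear_ratio, wheel_radius, efficiency=0.95):
        """Force at the wheels delivered through the reduction gear (N)."""
        return motor_torque * gear_ratio * efficiency / wheel_radius

    for kph in (30, 60, 90, 120):
        v = kph / 3.6
        f_res = running_resistance(v, grade_rad=np.radians(3.0))
        f_trac = tractive_force(motor_torque=200.0, gear_ratio=8.0, wheel_radius=0.31)
        print(f"{kph:3d} km/h: road load {f_res:7.1f} N, available tractive force {f_trac:7.1f} N")
    ```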

  5. Stress analysis for multilayered coating systems using semi-analytical BEM with geometric non-linearities

    NASA Astrophysics Data System (ADS)

    Zhang, Yao-Ming; Gu, Yan; Chen, Jeng-Tzong

    2011-05-01

    For a long time, most current numerical methods, including the finite element method, have not been efficient for analyzing the stress fields of very thin structures, such as thin coatings and their interfacial/internal mechanics. In this paper, the boundary element method (BEM) for 2-D elastostatic problems is studied for the analysis of multi-coating systems. The nearly singular integrals, which are the primary obstacle associated with BEM formulations, are dealt with efficiently by using a semi-analytical algorithm. The proposed semi-analytical integral formulas, compared with current analytical methods in the BEM literature, are suitable for high-order geometry elements when nearly singular integrals need to be calculated. Owing to the employment of curved surface elements, only a small number of elements need to be used along the boundary, and high accuracy can be achieved without additional computational effort. For the test problems studied, very promising results are obtained when the thickness of the coated layers is on the order of 10⁻⁶-10⁻⁹, which is sufficient for modeling most coated systems at the micro- or nano-scale.

  6. Analytical Problems and Suggestions in the Analysis of Behavioral Economic Demand Curves.

    PubMed

    Yu, Jihnhee; Liu, Liu; Collins, R Lorraine; Vincent, Paula C; Epstein, Leonard H

    2014-01-01

    Behavioral economic demand curves (Hursh, Raslear, Shurtleff, Bauman, & Simmons, 1988) are an innovative approach to characterizing the relationship between consumption of a substance and its price. In this article, we investigate common analytical issues in the use of behavioral economic demand curves that can cause inconsistent interpretations of demand curves, and we then provide methodological suggestions to address those analytical issues. We first demonstrate that log transformation with different added values for handling zeros changes model parameter estimates dramatically. Second, demand curves are often analyzed using an overparameterized model that results in an inefficient use of the available data and a lack of assessment of the variability among individuals. To address these issues, we apply a nonlinear mixed effects model based on multivariate error structures that has not previously been used to analyze behavioral economic demand curves in the literature. We also propose analytical formulas for the relevant standard errors of derived values such as Pmax, Omax, and elasticity. The proposed model stabilizes the derived values regardless of the added increment used and provides substantially smaller standard errors. We illustrate the data analysis procedure using data from a relative reinforcement efficacy study of simulated marijuana purchasing. PMID:26741176

  7. Analytical analysis of borehole experiments for the estimation of subsurface thermal properties

    NASA Astrophysics Data System (ADS)

    Moscoso Lembcke, Luis G.; Roubinet, Delphine; Gidel, Floriane; Irving, James; Pehme, Peeter; Parker, Beth L.

    2016-05-01

    Estimating subsurface thermal properties is required in many research fields and applications. To this end, borehole experiments such as the thermal response test (TRT) and active-line-source (ALS) method are of significant interest because they allow us to determine thermal property estimates in situ. With these methods, the subsurface thermal conductivity and diffusivity are typically estimated using asymptotic analytical expressions, whose simplifying assumptions have an impact on the accuracy of the values obtained. In this paper, we develop new analytical tools for interpreting borehole thermal experiments, and we use these tools to assess the impact of such assumptions on thermal property estimates. Quite importantly, our results show that the simplifying assumptions of currently used analytical models can result in errors in the estimated thermal conductivity and diffusivity of up to 60% and 40%, respectively. We also show that these errors are more important for short-term analysis and can be reduced with an appropriate choice of experimental duration. Our results demonstrate the need for cautious interpretation of the data collected during TRT and ALS experiments as well as for improvement of the existing in-situ experimental methods.
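
    One of the asymptotic expressions examined in this kind of analysis is the infinite-line-source approximation, in which the borehole fluid temperature grows linearly with ln(t) and the effective thermal conductivity follows from the slope. The sketch below applies that slope method to synthetic TRT data; the choice of start time t_min reflects the paper's point that short-term data degrade the estimate, and all numbers are illustrative.

    ```python
    import numpy as np

    def conductivity_from_trt(time_s, temp_c, q_per_m, t_min=10 * 3600):
        """Infinite-line-source estimate: fit T against ln(t) for t > t_min (where the
        approximation holds) and convert the slope k into lambda = q / (4*pi*k)."""
        late = time_s > t_min
        k, _ = np.polyfit(np.log(time_s[late]), temp_c[late], 1)
        return q_per_m / (4.0 * np.pi * k)

    # Synthetic TRT data generated with lambda = 2.5 W/m/K, q = 50 W/m, plus noise.
    rng = np.random.default_rng(2)
    lam_true, q = 2.5, 50.0
    t = np.linspace(3600.0, 72 * 3600.0, 500)          # 1 h to 72 h
    T = 12.0 + q / (4.0 * np.pi * lam_true) * np.log(t) + rng.normal(scale=0.02, size=t.size)

    print(f"estimated lambda = {conductivity_from_trt(t, T, q):.2f} W/m/K (true 2.50)")
    ```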

  8. Transition zone structure beneath northern Italy investigated using receiver function analysis.

    NASA Astrophysics Data System (ADS)

    Levin, V.; Benoit, M.; Park, J.

    2006-12-01

    The convergence of the Eurasian and African plates has created complex tectonic features in the Apennine region in Northern Italy. While convergence within the orogen has been ongoing since ~30 Ma, syn-convergent extension has been active in this region since ~15 Ma. This extension is often attributed to slab rollback of the subducted Adriatic lithosphere. However, the present state of previously subducted Adriatic lithosphere is debated. End-member scenarios proposed are a "normal" continuous subduction with rollback, and a complete slab detachment and sinking. First-order constraints on the state of the subducted slab could be placed on the basis of its location and shape relative to the Apennines mountain chain. One way to obtain such constraints is through mapping of the transition zone interfaces. Slab material crossing the bounds of the transition zone should deflect them due to its lower temperature. We use migration and stacking of teleseismic receiver functions to construct images of the upper mantle structure beneath Northern Italy. Our data come from a recently completed 3-year long 45-station portable array deployment in the region (part of the RETREAT project, see website). We present preliminary receiver function stacks from over 4100 high quality receiver functions using time-domain deconvolution. For the stacking procedure, a specific time and amplitude was correlated with a specific position in the subsurface to bin the receiver functions, and then the amplitudes of the receiver functions were summed in 0.25 degree bins. We found that including traces from at least 3 stations together with a minimum of 30 receiver functions per bin produced stacks with discernable P-S converted arrivals. Converted-wave images of the upper mantle structure display considerable lateral variability with a fairly short (order of 1°) length scale. The 410 km discontinuity is visible in the eastern part of the study region, beneath the continental Adria plate, but is

  9. Analytical Services Management System

    SciTech Connect

    Church, Shane; Nigbor, Mike; Hillman, Daniel

    2005-03-30

    The Analytical Services Management System (ASMS) provides sample management services. Sample management includes sample planning for analytical requests, sample tracking for shipping and receiving by the laboratory, receipt of the analytical data deliverable, processing of the deliverable, and payment of the laboratory conducting the analyses. ASMS is a web-based application that provides the ability to manage these activities at multiple locations for different customers. ASMS provides for the assignment of single or multiple samples for standard chemical and radiochemical analyses. ASMS is a flexible system that allows users to request analyses by line item code. Line item codes are selected based on the Basic Ordering Agreement (BOA) format for contracting with participating laboratories. ASMS also allows contracting with non-BOA laboratories using a similar line-item-code contracting format for their services. ASMS allows sample and analysis tracking from sample planning and collection in the field through sample shipment, laboratory sample receipt, laboratory analysis and submittal of the requested analyses, electronic data transfer, and payment of the laboratories for the completed analyses. The software, when in operation, contains business-sensitive material that is used as a principal portion of the Kaiser Analytical Management Services business model. The software version provided is the most recent version; however, the copy of the application does not contain business-sensitive data from the associated Oracle tables, such as contract information or price per line item code.

  10. Analytical Services Management System

    2005-03-30

    The Analytical Services Management System (ASMS) provides sample management services. Sample management includes sample planning for analytical requests, sample tracking for shipping and receiving by the laboratory, receipt of the analytical data deliverable, processing of the deliverable, and payment of the laboratory conducting the analyses. ASMS is a web-based application that provides the ability to manage these activities at multiple locations for different customers. ASMS provides for the assignment of single or multiple samples for standard chemical and radiochemical analyses. ASMS is a flexible system that allows users to request analyses by line item code. Line item codes are selected based on the Basic Ordering Agreement (BOA) format for contracting with participating laboratories. ASMS also allows contracting with non-BOA laboratories using a similar line-item-code contracting format for their services. ASMS allows sample and analysis tracking from sample planning and collection in the field through sample shipment, laboratory sample receipt, laboratory analysis and submittal of the requested analyses, electronic data transfer, and payment of the laboratories for the completed analyses. The software, when in operation, contains business-sensitive material that is used as a principal portion of the Kaiser Analytical Management Services business model. The software version provided is the most recent version; however, the copy of the application does not contain business-sensitive data from the associated Oracle tables, such as contract information or price per line item code.

  11. New analysis technique for estimating zonal irregularity drifts and variability in the equatorial F region using spaced receiver scintillation data

    SciTech Connect

    Vacchione, J.D.; Franke, S.J.; Yeh, K.C.

    1987-10-01

    A new technique for analyzing spaced receiver measurements of equatorial scintillation is applied to VHF scintillation data. The technique is based on a model that includes both propagation effects and the statistical characteristics of scintillation-producing irregularities. Nonlinear least squares fitting is used to fit the model to measured auto- and cross-correlation functions of the signal amplitude fading on spaced receivers. The results are compared with mean and random drift estimates obtained using the classical type of correlation analysis. 17 references.

  12. Preliminary analysis of fluctuations in the received uplink-beacon-power data obtained from the GOLD experiments

    NASA Technical Reports Server (NTRS)

    Jeganathan, M.; Wilson, K. E.; Lesh, J. R.

    1996-01-01

    Uplink data from recent free-space optical communication experiments carried out between the Table Mountain Facility and the Japanese Engineering Test Satellite are used to study fluctuations caused by beam propagation through the atmosphere. The influence of atmospheric scintillation, beam wander and jitter, and multiple uplink beams on the statistics of power received by the satellite is analyzed and compared to experimental data. Preliminary analysis indicates the received signal obeys an approximate lognormal distribution, as predicted by the weak-turbulence model, but further characterization of other sources of fluctuations is necessary for accurate link predictions.
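
    Checking received-power samples against the lognormal model predicted by weak turbulence amounts to fitting a Gaussian to the log of the samples and comparing the scintillation index with the lognormal prediction. The sketch below does this on synthetic data only; it ignores the pointing-jitter and multi-beam effects that the abstract notes must also be characterized.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic received-power samples drawn from a lognormal distribution,
    # standing in for downlinked uplink-beacon power telemetry.
    log_sigma = 0.35
    power = rng.lognormal(mean=0.0, sigma=log_sigma, size=5000)

    # Weak-turbulence check: the log of the power should be close to Gaussian.
    log_p = np.log(power)
    mu_hat, sigma_hat = log_p.mean(), log_p.std(ddof=1)

    # Scintillation index: normalized variance of the received power.
    si = power.var(ddof=1) / power.mean() ** 2

    print(f"fitted log-mean {mu_hat:+.3f}, log-sigma {sigma_hat:.3f} (true {log_sigma})")
    print(f"scintillation index = {si:.3f}, lognormal prediction = {np.exp(sigma_hat**2) - 1:.3f}")
    ```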

  13. Preliminary Analysis of Fluctuations in the Received Uplink-Beacon-Power Data Obtained From the GOLD Experiments

    NASA Technical Reports Server (NTRS)

    Jeganathan, M.; Wilson, K. E.; Lesh, J. R.

    1996-01-01

    Uplink data from recent free-space optical communication experiments carried out between the Table Mountain Facility and the Japanese Engineering Test Satellite are used to study fluctuations caused by beam propagation through the atmosphere. The influence of atmospheric scintillation, beam wander and jitter, and multiple uplink beams on the statistics of power received by the satellite is analyzed and compared to experimental data. Preliminary analysis indicates the received signal obeys an approximate lognormal distribution, as predicted by the weak-turbulence model, but further characterization of other sources of fluctuations is necessary for accurate link predictions.

  14. Automation of statistical analysis in the WIPP hazardous waste facility permit for analytical results from characterization

    SciTech Connect

    Shokes, T.; Einerson, J.

    2007-07-01

    One goal of characterizing, processing, and shipping waste to the Waste Isolation Pilot Plant (WIPP) is to make all activities as efficient as possible. Data management and repetitive calculations are a critical part of the process that can be automated, thereby increasing the accuracy and rate at which work is completed and reducing costs. This paper presents the tools developed to automate statistical analysis and other calculations required by the WIPP Hazardous Waste Facility Permit (HWFP). Statistical analyses are performed on the analytical results for gas samples from the headspace of waste containers and for solid samples from the core of the waste container. The calculations include determining the number of samples, a test for the shape of the distribution of the analytical results, the mean, the standard deviation, the upper 90-percent confidence limit of the mean, and the minimum required Waste Acceptance Plan (WAP) sample size. The input data for these calculations are from the batch data reports for headspace gas analytical results and solids analysis, which must also be obtained and collated for proper use. The most challenging component of the statistical analysis, if performed manually, is the determination of the distribution shape; therefore, the distribution testing is typically performed using a certified software tool. All other calculations can be completed manually, with a spreadsheet, custom-developed software, and/or certified software tool. Out of the options available, manually performing the calculations or using a spreadsheet are the least desirable. These methods rely heavily on the availability of an expert, such as a statistician, to perform the calculation. These methods are also more open to human error such as transcription or 'cut and paste' errors. A SAS program is in the process of being developed to perform the calculations. Due to the potential size of the data input files and the need to archive the data in an accessible format, the SAS
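
    The repetitive per-batch calculations listed above (sample count, distribution-shape test, mean, standard deviation, upper 90-percent confidence limit of the mean) are straightforward to script; the sketch below uses hypothetical analyte results and a normal-theory UCL, which is one common convention rather than the permit's exact prescription.

```python
"""Sketch of the repetitive WAP-style statistics described above (hypothetical
data): sample mean, standard deviation, a Shapiro-Wilk shape test, and a
one-sided upper 90% confidence limit of the mean under a normality assumption."""
import numpy as np
from scipy import stats

conc = np.array([1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4])  # hypothetical analyte results

n = conc.size
mean = conc.mean()
sd = conc.std(ddof=1)
shapiro_stat, shapiro_p = stats.shapiro(conc)          # shape-of-distribution test
t90 = stats.t.ppf(0.90, df=n - 1)                      # one-sided 90% t quantile
ucl90 = mean + t90 * sd / np.sqrt(n)                   # UCL of the mean

print(f"n={n}, mean={mean:.3f}, sd={sd:.3f}, Shapiro p={shapiro_p:.3f}, UCL90={ucl90:.3f}")
```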

  15. Fluxball magnetic field analysis using a hybrid analytical/FEM/BEM with equivalent currents

    NASA Astrophysics Data System (ADS)

    Fernandes, João F. P.; Camilo, Fernando M.; Machado, V. Maló

    2016-03-01

    In this paper, a fluxball electric machine is analyzed concerning the magnetic flux, force and torque. A novel method is proposed based in a special hybrid FEM/BEM (Finite Element Method/Boundary Element Method) with equivalent currents by using an analytical treatment for the source field determination. The method can be applied to evaluate the magnetic field in axisymmetric problems, in the presence of several magnetic materials. Same results obtained by a commercial Finite Element Analysis tool are presented for validation purposes with the proposed method.

  16. An analytical approach to grid sensitivity analysis for NACA four-digit wing sections

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1992-01-01

    Sensitivity analysis in computational fluid dynamics with emphasis on grids and surface parameterization is described. An interactive algebraic grid-generation technique is employed to generate C-type grids around NACA four-digit wing sections. An analytical procedure is developed for calculating grid sensitivity with respect to design parameters of a wing section. A comparison of the sensitivity with that obtained using a finite difference approach is made. Grid sensitivity with respect to grid parameters, such as grid-stretching coefficients, are also investigated. Using the resultant grid sensitivity, aerodynamic sensitivity is obtained using the compressible two-dimensional thin-layer Navier-Stokes equations.
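
    A very small instance of the analytic-versus-finite-difference comparison described above can be shown for the surface parameterization alone: the standard NACA four-digit thickness distribution is linear in the thickness parameter, so its analytic sensitivity is simply y/t. This is only a surface-level illustration, not the paper's grid-level procedure.

```python
"""Analytic vs. central-difference sensitivity for the standard NACA four-digit
half-thickness distribution (surface parameterization only, not the full grid)."""
import numpy as np

def naca_thickness(x, t):
    # Standard NACA four-digit half-thickness distribution (chord-normalized)
    return 5.0 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                      + 0.2843 * x**3 - 0.1015 * x**4)

x = np.linspace(0.01, 1.0, 8)
t, dt = 0.12, 1e-6
analytic = naca_thickness(x, t) / t                       # exact sensitivity dy/dt
finite_diff = (naca_thickness(x, t + dt) - naca_thickness(x, t - dt)) / (2 * dt)
print("max |analytic - central difference|:", np.abs(analytic - finite_diff).max())
```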

  17. Algal Biomass Analysis by Laser-Based Analytical Techniques—A Review

    PubMed Central

    Pořízka, Pavel; Prochazková, Petra; Prochazka, David; Sládková, Lucia; Novotný, Jan; Petrilak, Michal; Brada, Michal; Samek, Ota; Pilát, Zdeněk; Zemánek, Pavel; Adam, Vojtěch; Kizek, René; Novotný, Karel; Kaiser, Jozef

    2014-01-01

    Algal biomass that is represented mainly by commercially grown algal strains has recently found many potential applications in various fields of interest. Its utilization has been found advantageous in the fields of bioremediation, biofuel production and the food industry. This paper reviews recent developments in the analysis of algal biomass with the main focus on the Laser-Induced Breakdown Spectroscopy, Raman spectroscopy, and partly Laser-Ablation Inductively Coupled Plasma techniques. The advantages of the selected laser-based analytical techniques are revealed and their fields of use are discussed in detail. PMID:25251409

  18. Solar power satellite rectenna design study: Directional receiving elements and parallel-series combining analysis

    NASA Technical Reports Server (NTRS)

    Gutmann, R. J.; Borrego, J. M.

    1978-01-01

    Rectenna conversion efficiencies (RF to dc) approximating 85 percent were demonstrated on a small scale, clearly indicating the feasibility and potential of efficient conversion of microwave power to dc. The overall cost estimates of the solar power satellite indicate that the baseline rectenna subsystem will be between 25 and 40 percent of the system cost. The directional receiving elements and element extensions were studied, along with power combining evaluation and evaluation extensions.

  19. GNSS software receiver sampling noise and clock jitter performance and impact analysis

    NASA Astrophysics Data System (ADS)

    Chen, Jian Yun; Feng, XuZhe; Li, XianBin; Wu, GuangYao

    2015-02-01

    The design of multi-frequency, multi-constellation GNSS software defined radio receivers is becoming more and more popular because of their simple architecture, flexible configuration and good coherence in multi-frequency signal processing. Such receivers play an important role in navigation signal processing and signal quality monitoring. In particular, driving the sampling clock of the analogue-to-digital converter (ADC) with an FPGA makes a more flexible radio transceiver design possible. According to the concept of software defined radio (SDR), the ideal is to digitize as close to the antenna as possible. However, because the carrier frequency of a GNSS signal is on the order of GHz, converting at this frequency is expensive and consumes more power. The band sampling method is a cheaper, more effective alternative: it allows an RF signal to be sampled at roughly twice the bandwidth of the signal. Unfortunately, as the other side of the coin, the SDR concept and the band sampling method degrade the performance of GNSS receivers. ADCs suffer larger sampling clock jitter when the clock is generated by the FPGA, and a low sampling frequency introduces more noise into the receiver, so the influence of sampling noise cannot be neglected. The paper analyzes the sampling noise, presents its influence on the carrier-to-noise ratio, and derives the ranging error by calculating the synchronization error of the delay locked loop. Simulations addressing each impact factor of the sampling-noise-induced ranging error are performed. Simulation and experimental results show that if the target ranging accuracy is at the centimeter level, the quantization length should be no less than 8 bits and the sampling clock jitter should not exceed 30 ps.
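
    The two noise contributions discussed above can be bounded with textbook ADC relations: ideal quantization SNR grows with bit depth, while clock jitter caps the SNR of a sinusoid at a given frequency. The sketch below is a back-of-the-envelope budget with an assumed band-sampled signal frequency, not the paper's full delay-locked-loop ranging-error derivation.

```python
"""Back-of-the-envelope ADC noise budget (assumed signal frequency; not the
paper's DLL ranging-error derivation): quantization SNR vs. bit depth and the
jitter-limited SNR for a given sampling-clock rms jitter."""
import math

def quantization_snr_db(bits):
    # Ideal quantization SNR for a full-scale sinusoid
    return 6.02 * bits + 1.76

def jitter_snr_db(f_signal_hz, jitter_rms_s):
    # SNR limit imposed by clock/aperture jitter on a sinusoid at f_signal
    return -20.0 * math.log10(2.0 * math.pi * f_signal_hz * jitter_rms_s)

f_if = 20e6  # assumed band-sampled signal frequency, Hz
for bits in (4, 8, 12):
    print(f"{bits}-bit quantization SNR: {quantization_snr_db(bits):.1f} dB")
for jitter in (10e-12, 30e-12, 100e-12):
    print(f"jitter {jitter*1e12:.0f} ps -> SNR limit {jitter_snr_db(f_if, jitter):.1f} dB")
```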

  20. Combining ray tracing and CFD in the thermal analysis of a parabolic dish tubular cavity receiver

    NASA Astrophysics Data System (ADS)

    Craig, Ken J.; Marsberg, Justin; Meyer, Josua P.

    2016-05-01

    This paper describes the numerical evaluation of a tubular receiver used in a dish Brayton cycle. In previous work considering the use of Computational Fluid Dynamics (CFD) to perform the calculation of the absorbed radiation from the parabolic dish into the cavity as well as the resulting conjugate heat transfer, it was shown that an axi-symmetric model of the dish and receiver absorbing surfaces was useful in reducing the computational cost required for a full 3-D discrete ordinates solution, but concerns remained about its accuracy. To increase the accuracy, the Monte Carlo ray tracer SolTrace is used to perform the calculation of the absorbed radiation profile to be used in the conjugate heat transfer CFD simulation. The paper describes an approach for incorporating a complex geometry like a tubular receiver generated using CFD software into SolTrace. The results illustrate the variation of CFD mesh density that translates into the number of elements in SolTrace as well as the number of rays used in the Monte Carlo approach and their effect on obtaining a resolution-independent solution. The conjugate heat transfer CFD simulation illustrates the effect of applying the SolTrace surface heat flux profile solution as a volumetric heat source to heat up the air inside the tube. Heat losses due to convection and thermal re-radiation are also determined as a function of different tube absorptivities.

  1. Optics characterization of a 900-GHz HEB receiver for the ASTE telescope: design, measurement and tolerance analysis

    NASA Astrophysics Data System (ADS)

    Gonzalez, A.; Soma, T.; Shiino, T.; Kaneko, K.; Uzawa, Y.; Yamamoto, S.

    2014-09-01

    The optics of a 900-GHz HEB receiver for the ASTE telescope have been analyzed by quasi-optical analysis and Physical Optics simulations in combination with beam pattern measurements. The disagreement between simulations and measurements has motivated an extensive campaign of Monte Carlo analyses to find out the cause of such a difference in results. Monte Carlo analyses have considered fabrication and assembly tolerances in all components in the RF chain, as well as some non-expected fabrication errors. This strategy has allowed determining the defective component. In short, the use of all available analyses techniques together with measurements has allowed singling out an underperforming element in an astronomical receiver. The change of this component will improve the optical efficiency and ease astronomical observations. These ideas can be of interest for any quasi-optical receiver at THz frequencies.
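
    The Monte Carlo tolerance idea described above can be sketched generically: perturb alignment errors within assumed tolerances and look at the spread of a figure of merit. The toy model below uses textbook Gaussian-beam coupling losses for lateral offset and tilt with assumed waist, wavelength and tolerance values; it is not the ASTE quasi-optics model.

```python
"""Generic Monte Carlo tolerance sketch (assumed Gaussian-beam coupling model
and tolerances, not the ASTE receiver optics code)."""
import numpy as np

rng = np.random.default_rng(42)
w0 = 3.0e-3  # beam waist at the focal plane, m (assumed)

def coupling_efficiency(dx, tilt, w=w0, wavelength=333e-6):
    # Fundamental-mode power coupling loss for lateral offset dx and tilt
    theta0 = wavelength / (np.pi * w)        # beam divergence half-angle
    return np.exp(-((dx / w) ** 2) - ((tilt / theta0) ** 2))

n_trials = 10000
dx = rng.normal(0.0, 0.2e-3, n_trials)              # 0.2 mm rms lateral error (assumed)
tilt = rng.normal(0.0, np.deg2rad(0.1), n_trials)   # 0.1 deg rms tilt error (assumed)
eff = coupling_efficiency(dx, tilt)
print(f"median efficiency {np.median(eff):.3f}, 5th percentile {np.percentile(eff, 5):.3f}")
```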

  2. Numerical analysis of 50 Gbaud homodyne coherent receivers relying on line-coding and injection locking in lasers

    NASA Astrophysics Data System (ADS)

    Xydas, Yannis; Ressopoulos, Constantinos; Bogris, Adonis

    2015-12-01

    We present a numerical analysis of 50 Gbaud coherent detection enabled by injection locked lasers and line coding. The coherent receiver was tested with respect to an ideal receiver for two higher order modulation formats (16-QAM, QPSK) and under diverse operating regimes relating to the slave laser linewidth properties, the injection level and the frequency detuning between the incoming signal and the slave laser. The impact of the slave laser properties and line coding techniques on the receiver performance is highlighted showing that the technique could be used as a practical solution in order to enable low-cost and short reach n × 100 Gb/s Ethernet communication systems with the potential of flexibility in terms of the data rate.

  3. Receiver function analysis and preliminary body wave tomography of the MACOMO network in Madagascar

    NASA Astrophysics Data System (ADS)

    Pratt, M. J.; Wysession, M. E.; Wiens, D. A.; Nyblade, A.; Aleqabi, G. I.; Shore, P.; Rambolamana, G.; Sy Tanjona Andriampenomanana ny Ony, F.; Rakotondraibe, T.

    2013-12-01

    We present results from a set of seismological studies of the continental island of Madagascar using new seismic data from the NSF-funded MACOMO (MAdagascar, COmores, and MOzambique) IRIS PASSCAL broadband seismometer array. MACOMO involved the deployment during 2011-2013 of 26 broadband seismometers in Madagascar and 6 seismometers in Mozambique, providing the first seismic imaging across the world's 4th-largest island. We present preliminary crustal structure variations from receiver function analyses and body wave tomography results. We calculate radial receiver functions for all Madagascar stations and use the weighted linear regression methodology of Herrmann and Ammon [2002] to invert for shear velocity. Upper mantle and crustal structures from the receiver function analyses are used to help determine starting models for the teleseismic travel-time tomography. The tectonic structure of Madagascar is generally divided into four crustal blocks. Initial seismic imaging shows that the Archean Antongil block that runs along the east of the island has the thickest crust (>40 km) and three Proterozoic terranes that make up the central highlands and are bounded by fault and shear zones are closer to the average crustal thickness (35 km). There has been late Cenozoic intraplate volcanism in northern and central Madagascar (as recently as 1 million years ago), and different hypotheses for its origin will be evaluated by the preliminary results from the three different seismic studies. Complete analyses will be done incorporating seismic data from simultaneous and complementary array of both land- and ocean-based seismometers from French and German deployments.

  4. Analysis, development and testing of a fixed tilt solar collector employing reversible Vee-Trough reflectors and vacuum tube receivers

    NASA Technical Reports Server (NTRS)

    Selcuk, M. K.

    1979-01-01

    The Vee-Trough/Vacuum Tube Collector (VTVTC) aimed to improve the efficiency and reduce the cost of collectors assembled from evacuated tube receivers. The VTVTC was analyzed rigorously and a mathematical model was developed to calculate the optical performance of the vee-trough concentrator and the thermal performance of the evacuated tube receiver. A test bed was constructed to verify the mathematical analyses and compare reflectors made out of glass, Alzak and aluminized GEB Teflon. Tests were run at temperatures ranging from 95 to 180 C during the months of April, May, June, July and August 1977. Vee-trough collector efficiencies of 35-40 per cent were observed at an operating temperature of about 175 C. Test results compared well with the calculated values. Test data covering a complete day are presented for selected dates throughout the test season. Predicted daily useful heat collection and efficiency values are presented for a year's duration at operation temperatures ranging from 65 to 230 C. Estimated collector costs and resulting thermal energy costs are presented. Analytical and experimental results are discussed along with an economic evaluation.

  5. Visual analytics for multimodal social network analysis: a design study with social scientists.

    PubMed

    Ghani, Sohaib; Kwon, Bum Chul; Lee, Seungyoon; Yi, Ji Soo; Elmqvist, Niklas

    2013-12-01

    Social network analysis (SNA) is becoming increasingly concerned not only with actors and their relations, but also with distinguishing between different types of such entities. For example, social scientists may want to investigate asymmetric relations in organizations with strict chains of command, or incorporate non-actors such as conferences and projects when analyzing coauthorship patterns. Multimodal social networks are those where actors and relations belong to different types, or modes, and multimodal social network analysis (mSNA) is accordingly SNA for such networks. In this paper, we present a design study that we conducted with several social scientist collaborators on how to support mSNA using visual analytics tools. Based on an open-ended, formative design process, we devised a visual representation called parallel node-link bands (PNLBs) that splits modes into separate bands and renders connections between adjacent ones, similar to the list view in Jigsaw. We then used the tool in a qualitative evaluation involving five social scientists whose feedback informed a second design phase that incorporated additional network metrics. Finally, we conducted a second qualitative evaluation with our social scientist collaborators that provided further insights on the utility of the PNLBs representation and the potential of visual analytics for mSNA. PMID:24051769

  6. Shallow Sedimentary Structure of the Brahmaputra Valley Constraint from Receiver Functions Analysis

    NASA Astrophysics Data System (ADS)

    Saikia, Sowrav; Chopra, Sumer; Baruah, Santanu; Singh, Upendra K.

    2016-08-01

    In this study, receiver functions from ten broadband seismograph stations on Cenozoic sediment formations of the Brahmaputra valley and its neighboring region in the northeastern part of India are determined. Receiver function traces from this region show the direct P-phase peak delayed by 1-2.5 s, together with associated minor peaks. Based on these observations, we image the sedimentary structure of the Brahmaputra valley plain, the adjacent Shillong plateau and the Himalayan foredeep region. An adapted hybrid global waveform inversion technique has been applied to extract sedimentary basin structure beneath each site. The sedimentary cover of the basin is about 0.5-6.5 km thick across the valley, 0.5-1.0 km on the Shillong plateau and 2.0-5.0 km in the nearby foredeep region. We have found that sedimentary thickness increases from SW to NE along the Brahmaputra valley and towards the Eastern Himalayan syntaxes. The estimated sediment thickness and S wave velocity structure agree well with the results of previous active source, gravity, and deep borehole studies carried out in this region. The thick crustal low velocity sediment cover in the Brahmaputra valley is expected to amplify ground motions during earthquakes and is therefore important for seismic hazard assessment of the region.
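
    The link between a Ps delay of 1-2.5 s and kilometers of sediment can be made concrete with the vertical-incidence approximation t_Ps = H (1/Vs - 1/Vp); the velocities below are assumed representative values for unconsolidated sediments, not the study's inverted model.

```python
"""Quick illustration (assumed velocities, vertical incidence): map a Ps delay
to sediment thickness via H = t_Ps / (1/Vs - 1/Vp)."""
vp, vs = 2.2, 0.9   # km/s, representative sediment values (assumed)

def thickness_from_delay(t_ps, vp_kms, vs_kms):
    return t_ps / (1.0 / vs_kms - 1.0 / vp_kms)

for t_ps in (1.0, 1.5, 2.0, 2.5):
    print(f"Ps delay {t_ps:.1f} s -> ~{thickness_from_delay(t_ps, vp, vs):.1f} km of sediment")
```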

  7. Analysis, design, and experimental results for lightweight space heat receiver canisters, phase 1

    NASA Technical Reports Server (NTRS)

    Schneider, Michael G.; Brege, Mark A.; Heidenreich, Gary R.

    1991-01-01

    Critical technology experiments have been performed on thermal energy storage modules in support of the Brayton Advanced Heat Receiver program. The modules are wedge-shaped canisters designed to minimize the mechanical stresses that occur during the phase change of the lithium fluoride phase change material. Nickel foam inserts were used in some of the canisters to provide thermal conductivity enhancement and to distribute the void volume. Two canisters, one with a nickel foam insert, and one without, were thermally cycled in various orientations in a fluidized bed furnace. The only measurable impact of the nickel foam was seen when the back and short sides of the canister were insulated to simulate operation in the advanced receiver design. In tests with insulation, the furnace to back side delta T was larger in the canister with the nickel foam insert, probably due to the radiant absorptivity of the nickel. However, the differences in the temperature profiles of the two canisters were small, and in many cases the profiles matched fairly well. Computed Tomography (CT) was successfully used to nondestructively demarcate void locations in the canisters. Finally, canister dimensional stability, which was measured throughout the thermal cycling test program with an inspection fixture was satisfactory with a maximum change of 0.635 mm (0.025 in.).

  8. Analytical Technology

    SciTech Connect

    Goheen, Steven C.

    2001-07-01

    Characterizing environmental samples has been exhaustively addressed in the literature for most analytes of environmental concern. One of the weak areas of environmental analytical chemistry is that of radionuclides and samples contaminated with radionuclides. The analysis of samples containing high levels of radionuclides can be far more complex than that of non-radioactive samples. This chapter addresses the analysis of samples with a wide range of radioactivity. The other areas of characterization examined in this chapter are the hazardous components of mixed waste, and special analytes often associated with radioactive materials. Characterizing mixed waste is often similar to characterizing waste components in non-radioactive materials. The largest differences are in associated safety precautions to minimize exposure to dangerous levels of radioactivity. One must attempt to keep radiological dose as low as reasonably achievable (ALARA). This chapter outlines recommended procedures to safely and accurately characterize regulated components of radioactive samples.

  9. Analytical mitigation of solute-capillary interactions in double detection Taylor Dispersion Analysis.

    PubMed

    Latunde-Dada, Seyi; Bott, Rachel; Hampton, Karl; Leszczyszyn, Oksana Iryna

    2015-08-21

    Taylor Dispersion Analysis (TDA) in the presence of interactions between solutes and capillary walls yields inaccurate results for the diffusion coefficients of the solutes because the resulting concentration profiles are broadened and asymmetric. Whilst there are practical ways of mitigating these interactions, it is not always possible to eradicate them completely. In this paper, an analytical method of mitigating the effects of the adsorptions is presented. By observing the dispersion of the solute molecules at two detection points and using the expected relations between measured parameters, such as the standard deviations and peak amplitudes, the dispersive components of the profiles were isolated with a constrained fitting algorithm. The method was successfully applied to lysozyme and cytochrome C which adsorb onto fused silica capillary walls. Furthermore, this illustrates an advantage of using the fitting method for Taylor Dispersion Analysis. PMID:26189206
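
    For context, the standard two-window TDA relation (with Gaussian taylorgrams and no wall adsorption) is sketched below using hypothetical fitted peak parameters; the paper's contribution is the constrained fitting that isolates the dispersive component when adsorption distorts these peaks, which is not reproduced here.

```python
"""Standard two-detection-window TDA relation (hypothetical fitted peak
parameters, not the paper's constrained-fitting algorithm):
D = r_c^2 (t2 - t1) / (24 (sigma2^2 - sigma1^2))."""
r_c = 37.5e-6          # capillary inner radius, m (assumed)

def diffusion_coefficient(t1, sigma1, t2, sigma2, radius=r_c):
    return radius ** 2 * (t2 - t1) / (24.0 * (sigma2 ** 2 - sigma1 ** 2))

# Hypothetical Gaussian peak parameters (seconds) at the two detection windows
D = diffusion_coefficient(t1=180.0, sigma1=9.0, t2=420.0, sigma2=14.5)
print(f"D = {D:.2e} m^2/s  (hydrodynamic radius follows from Stokes-Einstein)")
```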

  10. Analytical modeling and thermodynamic analysis of robust superhydrophobic surfaces with inverse-trapezoidal microstructures.

    PubMed

    Im, Maesoon; Im, Hwon; Lee, Joo-Hyung; Yoon, Jun-Bo; Choi, Yang-Kyu

    2010-11-16

    A polydimethylsiloxane (PDMS) elastomer surface with perfectly ordered microstructures having an inverse-trapezoidal cross-sectional profile (simply PDMS trapezoids) showed superhydrophobic and transparent characteristics under visible light as reported in our previous work. The addition of a fluoropolymer (Teflon) coating enhances both features and provides oleophobicity. This paper focuses on the analytical modeling of the fabricated PDMS trapezoids structure and thermodynamic analysis based on the Gibbs free energy analysis. Additionally, the wetting characteristics of the fabricated PDMS trapezoids surface before and after the application of the Teflon coating are analytically explained. The Gibbs free energy analysis reveals that, due to the Teflon coating, the Cassie-Baxter state becomes energetically more favorable than the Wenzel state and the contact angle difference between the Cassie-Baxter state and the Wenzel state decreases. These two findings support the robustness of the superhydrophobicity of the fabricated Teflon-coated PDMS trapezoids. This is then verified via the impinging test of a water droplet at a high speed. The dependencies of the design parameters in the PDMS trapezoids on the hydrophobicity are also comprehensively studied through a thermodynamic analysis. Geometrical dependency on the hydrophobicity shows that overhang microstructures do not have a significant influence on the hydrophobicity. In contrast, the intrinsic contact angle of the structural material is most important in determining the apparent contact angle. On the other hand, the experimental results showed that the side angles of the overhangs are critical not for the hydrophobic but for the oleophobic property with liquids of a low surface tension. Understanding of design parameters in the PDMS trapezoids surface gives more information for implementation of superhydrophobic surfaces. PMID:20879754
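
    The Cassie-Baxter/Wenzel comparison underlying the discussion above can be illustrated with the textbook apparent-contact-angle formulas; the roughness ratio, solid fraction and intrinsic angles below are assumed illustrative numbers, not the paper's measured PDMS-trapezoid parameters or its Gibbs-energy computation.

```python
"""Textbook Cassie-Baxter vs. Wenzel apparent contact angles (illustrative
parameter values, not the paper's Gibbs free energy analysis)."""
import math

def wenzel(theta_y_deg, roughness_r):
    return math.degrees(math.acos(roughness_r * math.cos(math.radians(theta_y_deg))))

def cassie_baxter(theta_y_deg, solid_fraction_f):
    return math.degrees(math.acos(solid_fraction_f * (math.cos(math.radians(theta_y_deg)) + 1.0) - 1.0))

r, f = 1.3, 0.25                      # assumed roughness ratio and solid fraction
for label, theta_y in (("lower intrinsic angle", 105.0), ("higher intrinsic angle", 120.0)):
    print(f"{label}: Wenzel {wenzel(theta_y, r):.1f} deg, "
          f"Cassie-Baxter {cassie_baxter(theta_y, f):.1f} deg")
```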

  11. Analytical modeling and thermodynamic analysis of robust superhydrophobic surfaces with inverse-trapezoidal microstructures.

    PubMed

    Im, Maesoon; Im, Hwon; Lee, Joo-Hyung; Yoon, Jun-Bo; Choi, Yang-Kyu

    2010-11-16

    A polydimethylsiloxane (PDMS) elastomer surface with perfectly ordered microstructures having an inverse-trapezoidal cross-sectional profile (simply PDMS trapezoids) showed superhydrophobic and transparent characteristics under visible light as reported in our previous work. The addition of a fluoropolymer (Teflon) coating enhances both features and provides oleophobicity. This paper focuses on the analytical modeling of the fabricated PDMS trapezoids structure and thermodynamic analysis based on the Gibbs free energy analysis. Additionally, the wetting characteristics of the fabricated PDMS trapezoids surface before and after the application of the Teflon coating are analytically explained. The Gibbs free energy analysis reveals that, due to the Teflon coating, the Cassie-Baxter state becomes energetically more favorable than the Wenzel state and the contact angle difference between the Cassie-Baxter state and the Wenzel state decreases. These two findings support the robustness of the superhydrophobicity of the fabricated Teflon-coated PDMS trapezoids. This is then verified via the impinging test of a water droplet at a high speed. The dependencies of the design parameters in the PDMS trapezoids on the hydrophobicity are also comprehensively studied through a thermodynamic analysis. Geometrical dependency on the hydrophobicity shows that overhang microstructures do not have a significant influence on the hydrophobicity. In contrast, the intrinsic contact angle of the structural material is most important in determining the apparent contact angle. On the other hand, the experimental results showed that the side angles of the overhangs are critical not for the hydrophobic but for the oleophobic property with liquids of a low surface tension. Understanding of design parameters in the PDMS trapezoids surface gives more information for implementation of superhydrophobic surfaces.

  12. Receiver Function Analysis of the Lithospheric Structure Beneath the Western Great Plains

    NASA Astrophysics Data System (ADS)

    Thurner, S.; Zhai, Y.; Levander, A.

    2010-12-01

    The lithosphere in the western Great Plains region of the Southwestern U.S. has been subject to tectonic deformation from the Proterozoic to the present day. Proterozoic island arc terranes accreted onto the North American continent between 1.8 and 1.1 Ga, forming the original continent, and there is evidence for Proterozoic continental extension which formed basement-penetrating faults between 1.5 and 0.6 Ga. This was followed by the uplift of the Ancestral Rockies and, most recently, the subduction of the Farallon plate beneath North America. Extension has occurred throughout the Basin and Range and formed the Rio Grande Rift (RGR). However, the relative impact that large scale tectonic forces, regional asthenospheric upwelling, and preexisting structural weaknesses have on the extension of the RGR is still undetermined. This study seeks to better understand the current tectonic system east of the Colorado Plateau beneath the RGR and western Great Plains. We use teleseismic receiver functions to investigate the nature of extension in the RGR as well as its connection to the small-scale convection thought to be occurring beneath the Colorado Plateau-RGR-Great Plains region. Our receiver function images were generated from 85 earthquake events recorded at 187 USArray Transportable Array seismic stations located throughout the western Great Plains (Latitude: 28-48, Longitude: -105-100). Previous studies have indicated crustal thickness between 39 km and 50 km beneath the Great Plains and as thin as 35 km beneath the RGR (Wilson et al., 2005). Tomography results have shown high velocity anomalies on both sides of the RGR, extending to 600 km depth beneath the western Great Plains, and a low velocity anomaly directly beneath the RGR (Gok et al., 2003; Wilson et al., 2005; Gao et al.; Song and Helmberger, 2007). The western Great Plains high velocity anomaly has been interpreted to be part of the downwelling portion of an edge-driven convection system induced by a lateral

  13. Spine Radiosurgery: A Dosimetric Analysis in 124 Patients Who Received 18 Gy

    SciTech Connect

    Schipani, Stefano; Wen, Winston; Jin, Jain-Yue; Kim, Jin Koo; Ryu, Samuel

    2012-12-01

    Purpose: To define the safely tolerated doses to organs at risk (OARs) adjacent to the target volume (TV) of spine radiosurgery (SRS) with 18 Gy in a single fraction. Methods and Materials: A total of 124 patient cases with 165 spine metastases were reviewed. An 18-Gy single-fraction regimen was prescribed to the 90% isodose line encompassing the TV. A constraint of 10 Gy to 10% of the spinal cord outlined 6 mm above and below the TV was used. Dosimetric data to OARs were analyzed. Results: A total of 124 patients (100%) were followed up, and median follow-up time was 7 months (1-50 months). Symptoms and local control were achieved in 114 patients (92%). Acute Radiation Therapy Oncology Group (RTOG) grade 1 oral mucositis occurred in 11 of 11 (100%) patients at risk for oropharyngeal toxicity after cervical spine treatment. There were no RTOG grade 2-4 acute or late complications. Median TV was 43.2 cc (5.3-175.4 cc) and 90% of the TV received a median dose of 19 Gy (17-19.8 Gy). Median (range) of spinal cord maximum dose (Dmax), dose to spinal cord 0.35 cc (Dsc0.35), and cord volume receiving 10 Gy (Vsc10) were 13.8 Gy (5.4-21 Gy), 8.9 Gy (2.6-11.4 Gy) and 0.33 cc (0-1.6 cc), respectively. Other OARs were evaluated when in proximity to the TV. Esophagus (n=58), trachea (n=28), oropharynx (n=11), and kidneys (n=34) received median (range) V10 and V15 of 3.1 cc (0-5.8 cc) and 1.2 cc (0-2.9 cc), 2.8 cc (0-4.9 cc), and 0.8 cc (0-2.1 cc), 3.4 cc (0-6.2 cc) and 1.6 cc (0-3.2 cc), 0.3 cc (0-0.8 cc) and 0.08 cc (0-0.1 cc), respectively. Conclusions: Cord Dmax of 14 Gy and D0.35 of 10 Gy are safe dose constraints for 18-Gy single-fraction SRS. Esophagus V10 of 3 cc and V15 of 1 cc, trachea V10 of 3 cc, and V15 of 1 cc, oropharynx V10 of 3.5 cc and V15 of 1.5 cc, kidney V10 of 0.3 cc, and V15 of 0.1 cc are planning guidelines when these OARs are in proximity to the TV.
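
    The dose-volume quantities quoted above (Dmax, D0.35cc, V10) are extracted from the planning dose grid for each organ at risk; a minimal sketch on a hypothetical dose array with uniform voxels is shown below. The voxel size and dose values are assumptions for illustration.

```python
"""Minimal DVH-metric sketch (hypothetical dose grid, uniform voxels): cord
Dmax, D0.35cc and V10 as used in the constraints quoted above."""
import numpy as np

rng = np.random.default_rng(7)
voxel_cc = 0.027                     # 3 mm cubic voxels -> 0.027 cc each (assumed)
cord_dose = rng.gamma(shape=6.0, scale=1.5, size=4000)   # stand-in dose values, Gy

d_max = cord_dose.max()
# Minimum dose to the hottest 0.35 cc: take the top ceil(0.35/voxel_cc) voxels
n_hot = int(np.ceil(0.35 / voxel_cc))
d_035cc = np.sort(cord_dose)[-n_hot]
v10_cc = np.count_nonzero(cord_dose >= 10.0) * voxel_cc   # volume receiving >= 10 Gy

print(f"Dmax={d_max:.1f} Gy, D0.35cc={d_035cc:.1f} Gy, V10={v10_cc:.2f} cc")
```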

  14. Preliminary performance analysis of a transverse flow spectrally selective two-slab packed bed volumetric receiver

    NASA Astrophysics Data System (ADS)

    Roos, Thomas H.; Harms, Thomas M.

    2016-05-01

    A new volumetric receiver concept has been investigated, based on an adaptation of the spectrally selective, two-slab packed bed volumetric receiver concept of Flamant et al. Both slabs comprise spheres of identical size - borosilicate for the transparent slab 1 and SiC for the opaque slab 2 - which are ordered in a hexagonally close-packed bed. The flow direction has been changed from parallel to the incident radiation and perpendicular to the window, to parallel to the window and perpendicular to the incident radiation (transverse flow). The gap between the window and slab 1 has been removed, so the bed is held in place by the sidewalls, the floor and the window, allowing arbitrary orientation and dispensing with the need for beam-down operation. The receiver has been subjected to constant solar radiative load of approximately 70 suns, and the effect of variations in flowrate, the degree of air preheating as well as the thickness of slab 2 on the outlet air temperature distributions have been measured. The effect of reducing the flowrate for both slab 2 thicknesses is to increase temperature everywhere relative to the maximum temperature, having the effect of "flattening" the pattern factor and tending towards more uniform temperature distribution. The effect of preheating for both slab 2 thicknesses is to move the location of maximum temperature deeper into the bed (away from the window). No significant effect is observed on pattern factor in the transparent region of the bed (slab 1), but temperatures in the opaque region increase relative to the maximum temperature. The results are consistent with the increasing contribution of radiative heat transfer relative to convective and conductive heat transfer as the bed temperature rises. In all cases, the air temperature closest to the window is lower than the maximum temperature, demonstrating the volumetric heating effect. Increasing the outlet air temperature (either due to preheating or due to decreasing

  15. Receiver operator characteristic (ROC) curve analysis of super high resolution video for histopathology

    NASA Astrophysics Data System (ADS)

    Bloom, Kenneth J.; Rozek, L. S.; Weinstein, Ronald S.

    1987-10-01

    The receiver operator characteristic (ROC) curve is used to assess the ability of a diagnostic test to distinguish between two discrete states, such as tumor present or tumor absent in a histopathologic section. We have used ROC methodology to assess the ability of pathologists to diagnose frozen section biopsies of breast tissue as benign or malignant, using both a conventional light microscope and a high resolution camera/monitor system. A total of 115 consecutive frozen section breast biopsies were reviewed using each of the above modalities. Results yielded identical ROC curves for the conventional light microscope and high resolution camera/monitor system. Furthermore, the percentage of cases in which pathologists rendered an "equivocal" diagnosis was the same with both modalities.
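
    A generic ROC curve of the kind described above can be computed from paired truth labels and confidence ratings; the data below are synthetic stand-ins, not the frozen-section study results.

```python
"""Generic ROC/AUC computation with scikit-learn (synthetic labels and
confidence scores, not the study data)."""
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)
truth = rng.integers(0, 2, size=115)                     # 0 = benign, 1 = malignant
# Hypothetical confidence ratings: higher for malignant cases, with overlap
scores = truth * 1.2 + rng.normal(0.0, 0.8, size=truth.size)

fpr, tpr, thresholds = roc_curve(truth, scores)
print(f"AUC = {roc_auc_score(truth, scores):.3f}")
print("first few operating points (FPR, TPR):", list(zip(fpr[:3].round(2), tpr[:3].round(2))))
```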

  16. Analysis of the receiver response for a noncoaxial lidar system with fiber-optic output.

    PubMed

    Chourdakis, Giorgos; Papayannis, Alexandros; Porteneuve, Jacques

    2002-05-20

    The return signal of a noncoaxial lidar system with fiber-optic output is examined. The dependence of the overlap regions and the overlap factor of the system on the fiber diameter is calculated for several inclination angles between the laser beam and the optical receiver axes. The effect of central obstruction is included and both cases of Gaussian and quasi-Gaussian laser beam profiles are treated. The irradiance spatial distribution on the focal plane of the system is calculated and experimentally determined. Finally, an alignment procedure of the lidar system is described based on the comparison between the range-corrected lidar signal and the range-corrected exponentially attenuated Rayleigh backscattered coefficient.

  17. Seismic evidence of the Hainan mantle plume by receiver function analysis in southern China

    NASA Astrophysics Data System (ADS)

    Wei, S. Shawn; Chen, Y. John

    2016-09-01

    The Lei-Qiong region is the largest igneous province in southern China and may be a surface expression of a mantle plume beneath the region (the Hainan mantle plume). To investigate the existence of the Hainan mantle plume, we used P-to-S receiver functions to image the major seismic discontinuities beneath this region with a regional dense broadband array. We found that the Moho discontinuity beneath the Leizhou Peninsula, mostly covered by Cenozoic basaltic outcrops, is 10-15 km deeper than in the adjacent region of the Eurasian continental margin, indicating local crustal thickening by upwelling mantle materials. Additionally, the imaged 410- and 660-km discontinuities suggest a thinner-than-normal mantle transition zone beneath the region, implying that hot materials penetrate through the transition zone from the lower mantle. Both lines of seismic evidence support the existence of the mantle plume, which might be 170-200°C hotter than the surrounding mantle.

  18. Characterization of impulse noise and analysis of its effect upon correlation receivers

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Moore, J. D.

    1971-01-01

    A noise model is formulated to describe the impulse noise in many digital systems. A simplified model, which assumes that each noise burst contains a randomly weighted version of the same basic waveform, is used to derive the performance equations for a correlation receiver. The expected number of bit errors per noise burst is expressed as a function of the average signal energy, signal-set correlation coefficient, bit time, noise-weighting-factor variance and probability density function, and a time range function which depends on the crosscorrelation of the signal-set basis functions and the noise waveform. A procedure is established for extending the results for the simplified noise model to the general model. Unlike the performance results for Gaussian noise, it is shown that for impulse noise the error performance is affected by the choice of signal-set basis functions and that Orthogonal signaling is not equivalent to On-Off signaling with the same average energy.
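
    A crude Monte Carlo counterpart to the simplified noise model above is sketched below: each bit of an antipodal correlation receiver is occasionally hit by a randomly weighted copy of a single burst waveform, and the resulting bit error rate depends on the projection of that burst onto the signal-set basis. The waveform, hit probability and weighting statistics are assumptions for illustration, not the paper's analytical expressions.

```python
"""Monte Carlo sketch (simplified assumptions, not the paper's analytical
performance equations): correlation-receiver bit errors under randomly
weighted burst noise."""
import numpy as np

rng = np.random.default_rng(11)
n_bits, n_samp = 20000, 32
t = np.arange(n_samp)
basis = np.ones(n_samp) / np.sqrt(n_samp)                 # signal-set basis function (assumed)
burst = np.exp(-t / 6.0) * np.cos(2 * np.pi * t / 8.0)    # basic burst waveform (assumed)

bits = rng.integers(0, 2, n_bits)
symbols = 2.0 * bits - 1.0                         # antipodal signaling
hit = rng.random(n_bits) < 0.05                    # 5% of bits hit by a burst (assumed)
weights = rng.normal(0.0, 3.0, n_bits) * hit       # random burst weighting factor

# Correlator output: signal term plus the burst's projection onto the basis
decision = symbols + weights * (burst @ basis)
errors = np.count_nonzero((decision > 0).astype(int) != bits)
print(f"bit error rate under impulse noise: {errors / n_bits:.4f}")
```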

  19. Topological data analysis: A promising big data exploration tool in biology, analytical chemistry and physical chemistry.

    PubMed

    Offroy, Marc; Duponchel, Ludovic

    2016-03-01

    An important feature of experimental science is that data of various kinds are being produced at an unprecedented rate. This is mainly due to the development of new instrumental concepts and experimental methodologies. It is also clear that the nature of the acquired data is significantly different. Indeed, in every area of science, data take the form of ever bigger tables, where all but a few of the columns (i.e. variables) turn out to be irrelevant to the questions of interest, and, further, we do not necessarily know which coordinates are the interesting ones. Big data in our labs of biology, analytical chemistry or physical chemistry is a future that might be closer than any of us suppose. It is in this sense that new tools have to be developed in order to explore and valorize such data sets. Topological data analysis (TDA) is one of these. It was developed recently by topologists who discovered that topological concepts could be useful for data analysis. The main objective of this paper is to answer the question of why topology is well suited for the analysis of big data sets in many areas and even more efficient than conventional data analysis methods. Raman analysis of single bacteria provides a good opportunity to demonstrate the potential of TDA for the exploration of various spectroscopic data sets under different experimental conditions (with high noise level, with/without spectral preprocessing, with wavelength shift, with different spectral resolution, with missing data). PMID:26873463
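
    One concrete entry point into TDA that needs no specialized library is 0-dimensional persistence: the death times of connected components in a Vietoris-Rips filtration coincide with the merge heights of single-linkage clustering. The sketch below applies this to synthetic point clouds standing in for spectral fingerprints; it is a minimal illustration, not the paper's full TDA pipeline.

```python
"""Minimal 0-dimensional persistence sketch (synthetic data, not a full TDA
pipeline): H0 bar lengths equal single-linkage merge heights."""
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(5)
# Two synthetic clusters standing in for groups of spectral fingerprints
cloud = np.vstack([rng.normal(0.0, 0.3, (30, 5)), rng.normal(3.0, 0.3, (30, 5))])

merge_heights = linkage(pdist(cloud), method="single")[:, 2]
# Components born at scale 0 die at these heights; one component persists forever.
print("longest finite H0 bar (suggests two well-separated groups):", merge_heights.max().round(2))
print("median merge height (within-cluster scale):", np.median(merge_heights).round(2))
```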

  20. Topological data analysis: A promising big data exploration tool in biology, analytical chemistry and physical chemistry.

    PubMed

    Offroy, Marc; Duponchel, Ludovic

    2016-03-01

    An important feature of experimental science is that data of various kinds are being produced at an unprecedented rate. This is mainly due to the development of new instrumental concepts and experimental methodologies. It is also clear that the nature of the acquired data is significantly different. Indeed, in every area of science, data take the form of ever bigger tables, where all but a few of the columns (i.e. variables) turn out to be irrelevant to the questions of interest, and, further, we do not necessarily know which coordinates are the interesting ones. Big data in our labs of biology, analytical chemistry or physical chemistry is a future that might be closer than any of us suppose. It is in this sense that new tools have to be developed in order to explore and valorize such data sets. Topological data analysis (TDA) is one of these. It was developed recently by topologists who discovered that topological concepts could be useful for data analysis. The main objective of this paper is to answer the question of why topology is well suited for the analysis of big data sets in many areas and even more efficient than conventional data analysis methods. Raman analysis of single bacteria provides a good opportunity to demonstrate the potential of TDA for the exploration of various spectroscopic data sets under different experimental conditions (with high noise level, with/without spectral preprocessing, with wavelength shift, with different spectral resolution, with missing data).

  1. Granular Flow Dynamics on Earth, Moon, and Mars from analytical, numerical and field analysis

    NASA Astrophysics Data System (ADS)

    Lucas, A.; Mangeney, A.; Mège, D.

    2010-12-01

    Prediction of landslide dynamics remains difficult in spite of a considerable body of work. A number of previous studies have been based on runout analysis in relation to mean dissipation calibration via the friction coefficient. However, the shape of the initial scar is generally unknown in real cases, which weakens predictions of landslide material spreading and alters the calibration parameters of numerical models. We study numerically the effects of scar geometry on flow and distribution of the deposits and show that the initial shape of the scar, independent of the friction coefficient, does not affect the runout distance. In contrast, 3D tests show that the shape of the final deposits is a function of the scar geometry, and hence information on initial scar geometry and initial volume involved in the mass spreading may be retrieved from analysis of final deposit morphology. From an analytical solution we show here why the classical mobility (defined as the ratio between total height and runout distance) decreases when the volume increases, as is generally observed in geological data. We thus introduce analytically a new mobility variable obtained from geomorphic measurements reflecting the intrinsic dissipation independent of the aspect ratio, of the volume of the granular mass involved, of the underlying topography, and of the initial scar geometry. Comparison between experimental results and terrestrial, Lunar and Martian cases highlights a larger new mobility measure for natural granular flows than for dry mass spreading simulated in the laboratory. In addition, landslides in a similar geological context give a single value showing the robustness of this new parameter. Finally, the new mobility provides a first-order estimate of the effective friction required in models to reproduce the extent of the deposits in a given geological context. This enables a feedback analysis method for retrieving the volume and shape of the initial landslide material and then

  2. Analytics for Education

    ERIC Educational Resources Information Center

    MacNeill, Sheila; Campbell, Lorna M.; Hawksey, Martin

    2014-01-01

    This article presents an overview of the development and use of analytics in the context of education. Using Buckingham Shum's three levels of analytics, the authors present a critical analysis of current developments in the domain of learning analytics, and contrast the potential value of analytics research and development with real world…

  3. Crustal thickness estimation in the Maule Region (Chile) from P-wave receiver function analysis

    NASA Astrophysics Data System (ADS)

    Dannowski, A.; Grevemeyer, I.; Thorwart, M. M.; Rabbel, W.; Flueh, E. R.

    2010-12-01

    A temporary passive seismic network of 31 broad-band stations was deployed in the region around Talca and Constitución between 35°S and 36°S latitude and between 71°W and 72.5°W longitude. The network was operated between March and October 2008. Thus, we recorded data prior to the magnitude Mw=8.8 earthquake of 27 February 2010, at the latitude of the major slip and surface uplift. The experiment was conducted to address fundamental questions on deformation processes, crustal and mantle structures, and fluid flow. We present first results of a teleseismic P receiver function study that covers the coastal region and reaches to the Andes. The aim is to determine the structure and thickness of the continental crust and constrain the state of hydration of the mantle wedge. The P-wave receiver function technique requires large teleseismic earthquakes from different distances and backazimuths. A few percent of the incident P-wave energy from a teleseismic event will be converted into S-wave (Ps) at significant and relatively sharp discontinuities beneath the station. A small converted S phase is produced that arrives at the station within the P wave coda directly after the direct P-wave. The converted Ps phase and its crustal multiples contain information about crustal properties, such as Moho depth and the crustal vp/vs ratio. We use teleseismic events with magnitudes mb > 5.5 at epicentral distances between 30° and 95° to examine P-to-S converted seismic phases. Our preliminary results provide new information about the thickness of the continental crust beneath the coastal region in Central Chile. At most of the stations we observed significant energy from P to S converted waves between 4 and 5 s after the direct P-wave within a positive phase interpreted as the Moho, occurring at 35 to 40 km. Thus, the great Maule earthquake of 27 February 2010 nucleated up-dip of the continental Moho and hence ruptured along a plate contact between subducted sediments and continental crust.

  4. Analysis of structural dynamic data from Skylab. Volume 2: Skylab analytical and test model data

    NASA Technical Reports Server (NTRS)

    Demchak, L.; Harcrow, H.

    1976-01-01

    The orbital configuration test modal data, analytical test correlation modal data, and analytical flight configuration modal data are presented. Tables showing the generalized mass contributions (GMCs) for each of the thirty test modes are given along with the two-dimensional mode shape plots and tables of GMCs for the test-correlated analytical modes. The two-dimensional mode shape plots for the analytical modes and uncoupled and coupled modes of the orbital flight configuration at three development phases of the model are included.

  5. Cytobank: providing an analytics platform for community cytometry data analysis and collaboration.

    PubMed

    Chen, Tiffany J; Kotecha, Nikesh

    2014-01-01

    Cytometry is used extensively in clinical and laboratory settings to diagnose and track cell subsets in blood and tissue. High-throughput, single-cell approaches leveraging cytometry are developed and applied in the computational and systems biology communities by researchers, who seek to improve the diagnosis of human diseases, map the structures of cell signaling networks, and identify new cell types. Data analysis and management present a bottleneck in the flow of knowledge from bench to clinic. Multi-parameter flow and mass cytometry enable identification of signaling profiles of patient cell samples. Currently, this process is manual, requiring hours of work to summarize multi-dimensional data and translate these data for input into other analysis programs. In addition, the increase in the number and size of collaborative cytometry studies as well as the computational complexity of analytical tools require the ability to assemble sufficient and appropriately configured computing capacity on demand. There is a critical need for platforms that can be used by both clinical and basic researchers who routinely rely on cytometry. Recent advances provide a unique opportunity to facilitate collaboration and analysis and management of cytometry data. Specifically, advances in cloud computing and virtualization are enabling efficient use of large computing resources for analysis and backup. An example is Cytobank, a platform that allows researchers to annotate, analyze, and share results along with the underlying single-cell data.

  6. Remote access methods for exploratory data analysis and statistical modelling: Privacy-Preserving Analytics.

    PubMed

    Sparks, Ross; Carter, Chris; Donnelly, John B; O'Keefe, Christine M; Duncan, Jodie; Keighley, Tim; McAullay, Damien

    2008-09-01

    This paper is concerned with the challenge of enabling the use of confidential or private data for research and policy analysis, while protecting confidentiality and privacy by reducing the risk of disclosure of sensitive information. Traditional solutions to the problem of reducing disclosure risk include releasing de-identified data and modifying data before release. In this paper we discuss the alternative approach of using a remote analysis server which does not enable any data release, but instead is designed to deliver useful results of user-specified statistical analyses with a low risk of disclosure. The techniques described in this paper enable a user to conduct a wide range of methods in exploratory data analysis, regression and survival analysis, while at the same time reducing the risk that the user can read or infer any individual record attribute value. We illustrate our methods with examples from biostatistics using publicly available data. We have implemented our techniques into a software demonstrator called Privacy-Preserving Analytics (PPA), via a web-based interface to the R software. We believe that PPA may provide an effective balance between the competing goals of providing useful information and reducing disclosure risk in some situations.

  7. A simulation analysis of phase processing circuitry in the Ohio University Omega receiver prototype

    NASA Technical Reports Server (NTRS)

    Palkovic, R. A.

    1975-01-01

    A FORTRAN IV simulation study of the all-digital phase-processing circuitry is described. A digital phase-lock loop (DPLL) forms the heart of the Omega navigation receiver prototype; through the DPLL, the phase of the 10.2 kHz Omega signal is estimated when the true signal phase is contaminated with noise. The DPLL uses a frequency synthesizer as the reference oscillator. The synthesizer is composed of synchronous rate multipliers (SRMs) driven by a temperature-compensated crystal oscillator, and the use of the SRMs in this application introduces phase jitter which degrades system performance. The purpose of simulating the frequency synthesizer was to analyze the circuits on a bit-by-bit level in order to evaluate the overall design, to see easily the effects of proposed design changes prior to actual breadboarding, to determine the optimum integration time for the DPLL in an environment typical of general aviation conditions, and to quantify the phase error introduced by the SRM synthesizer and examine its effect on the system.
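
    Setting aside the bit-level synthesizer details, the core phase-tracking idea can be sketched with a first-order digital PLL acting on noisy samples of a 10.2 kHz tone. The sample rate, loop gain and noise level below are assumptions for illustration; this is not the Ohio University prototype design.

```python
"""Minimal first-order digital PLL sketch (assumed sample rate, loop gain and
noise level; not the prototype's SRM-based design): track the phase of a noisy
10.2 kHz tone."""
import numpy as np

fs, f0 = 102000.0, 10200.0            # sample rate and Omega signal frequency, Hz (assumed)
n = 5000
rng = np.random.default_rng(2)
true_phase = 0.7                      # rad, to be recovered
t = np.arange(n) / fs
x = np.cos(2 * np.pi * f0 * t + true_phase) + 0.5 * rng.normal(size=n)

phase_est, gain = 0.0, 0.02           # loop state and first-order loop gain (assumed)
for k in range(n):
    # Phase detector: mix the input with the quadrature of the local reference
    err = x[k] * -np.sin(2 * np.pi * f0 * t[k] + phase_est)
    phase_est += gain * err           # first-order loop update

print(f"estimated phase {phase_est:.3f} rad (true {true_phase:.3f} rad)")
```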

  8. Crust and upper mantle structure of the Arabian shield from receiver function analysis

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Ramesh, D.; Saul, J.; Sarkar, D.; Kind, R.

    2001-12-01

    Modelling of the receiver functions from an eight-station PASSCAL network on the Saudi Arabian shield reveals a complex crust with the presence of intra-crustal discontinuities and 7-10 km thick low-velocity layers at varied depths. The Moho depth beneath the stations generally varies between 35-38 km, and is thus shallower than that reported for most of the Proterozoic shields. The crust appears to be more felsic than the global Precambrian shield average, as evidenced by relatively low Poisson's ratios (0.25-0.26). The presence of a sub-crustal discontinuity associated with a positive velocity contrast at about 70 km depth is inferred beneath stations TAIF, RANI, AFIF and UQSK. Significant mantle stratification is observed beneath the Arabian shield, by way of two additional upper mantle discontinuities at about 230 km (Lehmann discontinuity) and 340 km depth. The presence of the 340 km discontinuity together with diffuse and delayed conversions from the 410 and 660 km boundaries suggests a thermally complex upper mantle. The region between 340 and 410 km appears to be a zone of reduced velocities. In agreement with inferences from tomographic studies and anisotropy measurements, our study supports the presence of a warm and deforming upper mantle.

  9. A guide to genome-wide association analysis and post-analytic interrogation.

    PubMed

    Reed, Eric; Nunez, Sara; Kulp, David; Qian, Jing; Reilly, Muredach P; Foulkes, Andrea S

    2015-12-10

    This tutorial is a learning resource that outlines the basic process and provides specific software tools for implementing a complete genome-wide association analysis. Approaches to post-analytic visualization and interrogation of potentially novel findings are also presented. Applications are illustrated using the free and open-source R statistical computing and graphics software environment, Bioconductor software for bioinformatics and the UCSC Genome Browser. Complete genome-wide association data on 1401 individuals across 861,473 typed single nucleotide polymorphisms from the PennCATH study of coronary artery disease are used for illustration. All data and code, as well as additional instructional resources, are publicly available through the Open Resources in Statistical Genomics project: http://www.stat-gen.org.
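
    The per-SNP association tests at the heart of such an analysis reduce to very small calculations repeated across the genome; the sketch below shows one genotype-by-status chi-square test in Python with hypothetical counts (the tutorial itself works in R and Bioconductor).

```python
"""Tiny per-SNP association sketch (hypothetical genotype counts; the tutorial
uses R/Bioconductor): a 2x3 chi-square test and the -log10 p-value one would
plot in a Manhattan plot."""
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical genotype counts (AA, Aa, aa) for cases and controls at one SNP
table = np.array([[220, 180, 40],    # cases
                  [260, 130, 20]])   # controls

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.2e}, -log10(p)={-np.log10(p_value):.2f}")
```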

  10. Some analytical approximations to radiative transfer theory and their application for the analysis of reflectance data

    NASA Astrophysics Data System (ADS)

    Kokhanovsky, Alexander; Hopkinson, Ian

    2008-03-01

    We derive an analytical approximation in the framework of the radiative transfer theory for use in the analysis of diffuse reflectance measurements. This model uses two parameters to describe a material, the transport free path length, l, and the similarity parameter, s. Using a simple algebraic expression, s and l can be applied for the determination of the absorption coefficient Kabs, which can be easily compared to absorption coefficients measured using transmission spectroscopy. l and Kabs can be seen as equivalent to the S and K parameters, respectively, in the Kubelka-Munk formulation. The advantage of our approximation is a clear basis in the complete radiative transfer theory. We demonstrate the application of our model to a range of different paper types and to fabrics treated with known levels of a dye.
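
    For reference, the classical Kubelka-Munk benchmark mentioned above maps a diffuse reflectance value to the ratio K/S through the remission function; the sketch below evaluates it for a few reflectance values and is not the authors' s/l expression.

```python
"""Classical Kubelka-Munk remission function (benchmark only, not the paper's
transport-parameter expression): K/S = (1 - R)^2 / (2 R)."""
def kubelka_munk(r_inf):
    # r_inf: diffuse reflectance of an optically thick sample, 0 < r_inf <= 1
    return (1.0 - r_inf) ** 2 / (2.0 * r_inf)

for r in (0.9, 0.7, 0.5, 0.3):
    print(f"R = {r:.1f} -> K/S = {kubelka_munk(r):.3f}")
```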

  11. Regime Transition in Electromechanical Fluid Atomization and Implications to Analyte Ionization for Mass Spectrometric Analysis

    PubMed Central

    Forbes, Thomas P.; Degertekin, F. Levent; Fedorov, Andrei G.

    2015-01-01

    The physical processes governing the transition from purely mechanical ejection to electromechanical ejection to electrospraying are investigated through complementary scaling analysis and optical visualization. Experimental characterization and visualization are performed with the ultrasonically-driven array of micromachined ultrasonic electrospray (AMUSE) ion source to decouple the electrical and mechanical fields. A new dimensionless parameter, the Fenn number, is introduced to define a transition between the spray regimes, in terms of its dependence on the characteristic Strouhal number for the ejection process. A fundamental relationship between the Fenn and Strouhal numbers is theoretically derived and confirmed experimentally in spraying liquid electrolytes of different ionic strength subjected to a varying magnitude electric field. This relationship and the basic understanding of the charged droplet generation physics have direct implications on the optimal ionization efficiency and mass spectrometric response for different types of analytes. PMID:20729096

  12. Dynamic Graph Analytic Framework (DYGRAF): greater situation awareness through layered multi-modal network analysis

    NASA Astrophysics Data System (ADS)

    Margitus, Michael R.; Tagliaferri, William A., Jr.; Sudit, Moises; LaMonica, Peter M.

    2012-06-01

    Understanding the structure and dynamics of networks are of vital importance to winning the global war on terror. To fully comprehend the network environment, analysts must be able to investigate interconnected relationships of many diverse network types simultaneously as they evolve both spatially and temporally. To remove the burden from the analyst of making mental correlations of observations and conclusions from multiple domains, we introduce the Dynamic Graph Analytic Framework (DYGRAF). DYGRAF provides the infrastructure which facilitates a layered multi-modal network analysis (LMMNA) approach that enables analysts to assemble previously disconnected, yet related, networks in a common battle space picture. In doing so, DYGRAF provides the analyst with timely situation awareness, understanding and anticipation of threats, and support for effective decision-making in diverse environments.

  13. The analytic solution of the structural analysis problem and its use in structural synthesis

    NASA Astrophysics Data System (ADS)

    Fuchs, M. B.

    An overview is presented of the analytic expressions for the inverse of the stiffness matrix, the nodal displacements, and the internal forces in linear elastic redundant structures. The inverse of the stiffness matrix and the nodal displacements are obtained using Binet and Cauchy's theorem on the product of compound matrices. The formula for the internal forces is derived from the principles of structural mechanics. This approach is shown to apply to all framed structures via the unimodal stiffnesses of their elements. Approximate models are constructed which are exact at preselected points along a line in the analysis space. An argument is also made for the use of multilinear polynomials as an alternative to Taylor expansion-based approximations.

  14. Analysis of the extracts of Isatis tinctoria by new analytical approaches of HPLC, MS and NMR.

    PubMed

    Zhou, Jue; Qu, Fan

    2011-01-01

    The methods of extraction, separation and analysis of alkaloids and indole glucosinolates (GLs) of Isatis tinctoria were reviewed. Different analytical approaches such as High-pressure Liquid Chromatography (HPLC), Liquid Chromatography with Electrospray Ionization Mass Spectrometry (LC/ESI/MS), Electrospray Ionization Time-Of-Flight Mass Spectrometry (ESI-TOF-MS), and Nuclear Magnetic Resonance (NMR) were used to validate the identity of these constituents. These methods provide rapid separation, identification and quantitative measurement of alkaloids and GLs of Isatis tinctoria. By coupling different detectors to HPLC, such as PDA, ELSD, and ESI- and APCI-MS in positive and negative ion modes, complicated compounds could be detected with at least two independent detection modes. The molecular formula can be derived in a second step from ESI-TOF-MS data. For some constituents, however, UV and MS cannot provide sufficient structural identification; in such cases, after peak purification by semi-preparative HPLC, NMR can be used as a complementary method.

  15. NaK pool-boiler bench-scale receiver durability test: Test results and materials analysis

    SciTech Connect

    Andraka, C.E.; Goods, S.H.; Bradshaw, R.W.; Moreno, J.B.; Moss, T.A.; Jones, S.A.

    1994-06-01

    Pool-boiler reflux receivers have been considered as an alternative to heat pipes for the input of concentrated solar energy to Stirling-cycle engines in dish-Stirling electric generation systems. Pool boilers offer simplicity in design and fabrication. The operation of a full-scale pool-boiler receiver has been demonstrated for short periods of time. However, to generate cost-effective electricity, the receiver must operate without significant maintenance for the entire system life, as much as 20 to 30 years. Long-term liquid-metal boiling stability and materials compatibility with refluxing NaK-78 are not known and must be determined for the pool-boiler receiver. No boiling system has been demonstrated for a significant duration with the current porous boiling enhancement surface and materials. Therefore, it is necessary to simulate the full-scale pool boiler design as much as possible, including flux levels, materials, and operating cycles. On-sun testing is impractical because of the limited test time available. A test vessel was constructed with a porous boiling enhancement surface. The boiling surface consisted of a brazed stainless steel powder with about 50% porosity. The vessel was heated with a quartz lamp array providing about 90 W/cm2 peak incident thermal flux. The vessel was charged with NaK-78. This allows the elimination of costly electric preheating, both on this test and on full-scale receivers. The vessel was fabricated from Haynes 230 alloy. The vessel operated at 750°C around the clock, with a 1/2-hour shutdown cycle to ambient every 8 hours. The test completed 7500 hours of lamp-on operation time, and over 1000 startups from ambient. The test was terminated when a small leak in an Inconel 600 thermowell was detected. The test design and data are presented here. Metallurgical analysis of virgin and tested materials has begun, and initial results are also presented.

  16. Non invasive analysis of miniature paintings: proposal for an analytical protocol.

    PubMed

    Aceto, Maurizio; Agostino, Angelo; Fenoglio, Gaia; Gulmini, Monica; Bianco, Valentina; Pellizzi, Eleonora

    2012-06-01

    The characterisation of palettes used in manuscript illumination is a hard analytical task, due to value and fragility of the analysed items. Analysis on miniatures must be necessarily non-invasive and fast and requires the use of several techniques since no single technique is able to provide all information needed. In this work a four-step analytical protocol is proposed for non-invasive in situ characterisation of miniature paintings. The protocol allows the identification of coloured materials through the use in sequence of complementary techniques, so as to fully exploit the information given by each instrument. Preliminarily to the instrumental investigations on ancient books and miniatures is the compilation of spectroscopic databases obtained from "standard" samples prepared on parchment, according to recipes described in medieval artistic treatises. The protocol starts with an extensive investigation with UV-visible spectrophotometry in reflectance mode, collecting spectra from all the most significant painted areas in the manuscript; chemometric classification is then performed on the spectra to highlight areas possibly containing the same materials. The second step involves in-depth inspection of miniatures under optical microscopy that guides the interpretation of reflectance spectra. XRF spectrometry is then performed to characterise pigments and metal layers, to verify the presence of overlapping layers, to identify mordants in lakes and to recognise minor components that may yield information concerning provenance; in addition, chemometric classification can be performed on element concentrations to highlight similar areas. Finally, Raman spectroscopy is used to shed light on the uncertain cases, if still present. Such a procedure offers a wealth of information without causing stress to the manuscripts under analysis. PMID:22391225

  17. Assessment of Respirable Crystalline Silica Analysis Using Proficiency Analytical Testing Results from 2003–2013

    PubMed Central

    Harper, Martin; Sarkisian, Khatchatur; Andrew, Michael

    2015-01-01

    Analysis of Proficiency Analytical Testing (PAT) results between 2003 and 2013 suggests that the variation in respirable crystalline silica analysis is much smaller today than it was in the period 1990–1998, partly because of a change in sample production procedure and because the colorimetric method has been phased out, although quality improvements in the x-ray diffraction (XRD) or infrared (IR) methods may have also played a role. There is no practical difference between laboratories using XRD or IR methods or between laboratories which are accredited or those which are not. Reference laboratory means (assigned values) are not different from the means of all participants across the current range of mass loading, although there is a small difference in variance in the ratios of all participants to reference laboratory means based on method because the reference laboratories are much more likely to use XRD than are the others. Matrix interference does not lead to biases or substantially larger variances for either XRD or IR methods. Data from proficiency test sample analyses that include results from poorly performing laboratories should not be used to determine the validity of a method. PAT samples are not produced below 40 μg and variance may increase with lower masses, although this is not particularly predictable. PAT data from lower mass loadings will be required to evaluate analytical performance if exposure limits are lowered without change in sampling method. Task-specific exposure measurements for periods shorter than a full shift typically result in lower mass loadings and the quality of these analyses would also be better assured from being within the range of PAT mass loadings. High flow rate cyclones, whose performance has been validated, can be used to obtain higher mass loadings in environments of lower concentrations or where shorter sampling times are desired. PMID:25175284

  18. Assessment of respirable crystalline silica analysis using Proficiency Analytical Testing results from 2003-2013.

    PubMed

    Harper, Martin; Sarkisian, Khatchatur; Andrew, Michael

    2014-01-01

    Analysis of Proficiency Analytical Testing (PAT) results between 2003 and 2013 suggests that the variation in respirable crystalline silica analysis is much smaller today than it was in the period 1990-1998, partly because of a change in sample production procedure and because the colorimetric method has been phased out, although quality improvements in the x-ray diffraction (XRD) or infrared (IR) methods may have also played a role. There is no practical difference between laboratories using XRD or IR methods or between laboratories which are accredited or those which are not. Reference laboratory means (assigned values) are not different from the means of all participants across the current range of mass loading, although there is a small difference in variance in the ratios of all participants to reference laboratory means based on method because the reference laboratories are much more likely to use XRD than are the others. Matrix interference does not lead to biases or substantially larger variances for either XRD or IR methods. Data from proficiency test sample analyses that include results from poorly performing laboratories should not be used to determine the validity of a method. PAT samples are not produced below 40 μg and variance may increase with lower masses, although this is not particularly predictable. PAT data from lower mass loadings will be required to evaluate analytical performance if exposure limits are lowered without change in sampling method. Task-specific exposure measurements for periods shorter than a full shift typically result in lower mass loadings and the quality of these analyses would also be better assured from being within the range of PAT mass loadings. High flow rate cyclones, whose performance has been validated, can be used to obtain higher mass loadings in environments of lower concentrations or where shorter sampling times are desired.

  19. Non invasive analysis of miniature paintings: Proposal for an analytical protocol

    NASA Astrophysics Data System (ADS)

    Aceto, Maurizio; Agostino, Angelo; Fenoglio, Gaia; Gulmini, Monica; Bianco, Valentina; Pellizzi, Eleonora

    2012-06-01

    The characterisation of palettes used in manuscript illumination is a hard analytical task, due to value and fragility of the analysed items. Analysis on miniatures must be necessarily non-invasive and fast and requires the use of several techniques since no single technique is able to provide all information needed. In this work a four-step analytical protocol is proposed for non-invasive in situ characterisation of miniature paintings. The protocol allows the identification of coloured materials through the use in sequence of complementary techniques, so as to fully exploit the information given by each instrument. Preliminarily to the instrumental investigations on ancient books and miniatures is the compilation of spectroscopic databases obtained from "standard" samples prepared on parchment, according to recipes described in medieval artistic treatises. The protocol starts with an extensive investigation with UV-visible spectrophotometry in reflectance mode, collecting spectra from all the most significant painted areas in the manuscript; chemometric classification is then performed on the spectra to highlight areas possibly containing the same materials. The second step involves in-depth inspection of miniatures under optical microscopy that guides the interpretation of reflectance spectra. XRF spectrometry is then performed to characterise pigments and metal layers, to verify the presence of overlapping layers, to identify mordants in lakes and to recognise minor components that may yield information concerning provenance; in addition, chemometric classification can be performed on element concentrations to highlight similar areas. Finally, Raman spectroscopy is used to shed light on the uncertain cases, if still present. Such a procedure offers a wealth of information without causing stress to the manuscripts under analysis.

  20. Hazards Analysis Report Addendum Building 518/518A Industrial Gases & Chemtrack Receiving & Barcoding Facility

    SciTech Connect

    Hickman, R D

    2000-02-04

    This report documents the Hazards Analysis Report (HAR) Addendum for Buildings 518 and 518A. In summary, the description of the facility and the operations given in the 1995 PHA remains the same at present, in the year 2000. The hazards description also remains the same. The hazards analysis in this HAR Addendum is different in that it needs to be compared to operations routinely "performed" by the public. The HAR Addendum characterizes the level of intrinsic potential hazards associated with a facility and provides the basis for hazard classification. The hazard classification determines the level of safety documentation required and the DOE order governing the safety analysis. The hazard classification also determines the level of review and approval required for the safety analysis. This facility does not contain any safety class systems or systems important to safety as defined in Department of Energy standard DOE-STD-3009-94. The hazards of primary concern associated with B518 and B518A are chemical in nature. The hazard classification is determined by comparing facility inventories of chemicals with threshold values for the various hazard classification levels. In this way, the hazard level of the facility can be ascertained. The most significant hazards that could affect people in the local area of B518 and B518A, elsewhere on the LLNL site, and off site, are associated with hazardous and toxic materials. These hazards are the focus of this report and are the basis for the facility hazard classification.

  1. Latent Class Analysis of Conduct Problems of Elementary Students Receiving Special Education Services

    ERIC Educational Resources Information Center

    Toupin, Jean; Déry, Michèle; Verlaan, Pierrette; Lemelin, Jean-Pascal; Lecocq, Aurélie; Jagiellowicz, Jadwiga

    2016-01-01

    Students with conduct problems (CPs) may present heterogeneity in terms of behavioral manifestations and service needs. Previous studies using Latent Class Analysis (LCA) to capture this heterogeneity have been conducted mostly with community samples and have often applied a narrow definition of CP. Considering this context, this study…

  2. Laser-induced Breakdown spectroscopy quantitative analysis method via adaptive analytical line selection and relevance vector machine regression model

    NASA Astrophysics Data System (ADS)

    Yang, Jianhong; Yi, Cancan; Xu, Jinwu; Ma, Xianghong

    2015-05-01

    A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines which will be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given as probabilistic confidence intervals, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples have been carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness compared with the methods based on partial least squares regression, artificial neural network and standard support vector machine.
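
    Scikit-learn has no relevance vector machine, so the sketch below uses ARDRegression, a related sparse Bayesian linear model, as a stand-in for the RVM stage; the line intensities and concentrations are synthetic, not the 23 certified steel samples.

        # Sketch of the regression stage only: ARDRegression as a sparse-Bayesian
        # stand-in for a true RVM (an assumption; the paper uses an RVM).  X holds
        # intensities of the selected analytical lines, y the Cr concentrations.
        import numpy as np
        from sklearn.linear_model import ARDRegression

        rng = np.random.default_rng(0)
        n_samples, n_lines = 23, 6
        true_w = np.array([0.8, 0.0, 0.3, 0.0, 0.0, 0.5])          # only some lines matter
        X = rng.uniform(0.1, 1.0, size=(n_samples, n_lines))        # line intensities
        y = X @ true_w + rng.normal(scale=0.02, size=n_samples)     # Cr concentration (wt%)

        model = ARDRegression().fit(X, y)
        pred, std = model.predict(X, return_std=True)               # mean and uncertainty
        print("mean relative error: %.2f%%" % (100 * np.mean(np.abs(pred - y) / y)))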

  3. The Analytic Onion: Examining Training Issues from Different Levels of Analysis. Interim Technical Paper for Period July 1989-June 1991.

    ERIC Educational Resources Information Center

    Lamb, Theodore A.; Chin, Keric B. O.

    This paper proposes a conceptual framework based on different levels of analysis using the metaphor of the layers of an onion to help organize and structure thinking on research issues concerning training. It discusses the core of the "analytic onion," the biological level, and seven levels of analysis that surround that core: the individual, the…

  4. Seismic anisotropy indicators in Western Tibet: Shear wave splitting and receiver function analysis

    NASA Astrophysics Data System (ADS)

    Levin, Vadim; Roecker, Steven; Graham, Peter; Hosseini, Afsaneh

    2008-12-01

    Using recently collected data from western Tibet we find significant variation in the strength, vertical distribution and attributes of seismic wave speed anisotropy, constrained through a joint application of teleseismic shear wave splitting techniques and a study of P-S mode-converted waves (receiver functions). We find that the crust of Tibet is characterized by anisotropy on the order of 5%-15% concentrated in layers 10-20 km in thickness, and with relatively steep (30°-45° from the vertical) slow symmetry axes of anisotropy. These layers contribute no more than 0.3 s to the birefringence in teleseismic shear waves, significantly smaller than splitting in many of the observations, and much smaller than birefringence predicted by models developed through group inversions of shear-wave recordings. Consequently, we interpret models constrained with shear-wave observations in terms of structures in the upper mantle. Near the Altyn-Tagh fault our data favor a two-layer model, with the upper layer fast polarization approximately aligned with the strike of the fault. Near the Karakorum fault our data are well fit with a single layer of relatively modest (~ 0.5 s delay) anisotropy. Fast polarization in this layer is ~ 60°NE, similar to that of the lower layer in the model for the Altyn Tagh fault site. Assuming that layers of similar anisotropic properties at these two sites reflect a common cause, our finding favors a scenario where Indian lithosphere under-thrusts a significant fraction of the plateau. Data from a site at the southern edge of the Tarim basin appear to be inconsistent with a common model of seismic anisotropy distribution. We suspect that thick sediments underlying the site significantly distort observed waveforms. Our ability to resolve features of anisotropic structure in the crust and the upper mantle of western Tibet is limited by the small amount of data collected in a 6 month observing period. We stress the importance of future teleseismic

  5. Receiver function analysis of the crust and upper mantle in Fennoscandia - isostatic implications

    NASA Astrophysics Data System (ADS)

    Frassetto, Andrew; Thybo, Hans

    2013-11-01

    The mountains across southern Norway and other margins of the North Atlantic Ocean appear conspicuously high in the absence of recent convergent tectonics. We investigate this phenomenon with receiver functions calculated for seismometers deployed across southern Fennoscandia. These are used to constrain the structure and seismic properties of the lithosphere and primarily to measure the thickness and infer the bulk composition of the crust. Such parameters are key to understanding crustal isostasy and assessing its role, or lack thereof, in supporting the observed elevations. Our study focuses on the southern Scandes mountain range that has an average elevation >1.0 km above mean sea level. The crust-mantle boundary (Moho) is ubiquitously imaged, and we occasionally observe structures that may represent the base of the continental lithosphere or other thermal, chemical, or viscous boundaries in the upper mantle. The Moho resides at ˜25-30 km depth below mean sea level in southeastern coastal Norway and parts of Denmark, ˜35-45 km across the southern Scandes, and ˜50-60 km near the Norwegian-Swedish border. That section of thickest crust coincides with much of the Transscandinavian Igneous Belt and often exhibits a diffuse conversion at the Moho, which probably results from the presence of a high wave speed, mafic lower crust across inner Fennoscandia. A zone of thinned crust (<35 km) underlies the Oslo Graben. Crustal Vp/Vs ratio measurements show trends that generally correlate with Moho depth; relatively high Vp/Vs occurs near the coast and areas affected by post-Caledonide rifting and lower Vp/Vs appears in older, unrifted crust across the southern Scandes. Our results indicate that most of the observed surface elevation in the southern Scandes is supported by an Airy-like crustal root and potentially thin mantle lithosphere. To the east, where thicker crust and mantle lithosphere underlie low elevations, the presence of dense mafic lower crust fits a Pratt

  6. Proteomics analysis of liver tissues from C57BL/6J mice receiving low-dose 137Cs radiation.

    PubMed

    Yi, Lan; Li, Linwei; Yin, Jie; Hu, Nan; Li, Guangyue; Ding, Dexin

    2016-02-01

    Differentially expressed proteins in liver tissues of C57BL/6J mice receiving low-dose (137)Cs radiation were examined by proteomics analysis. Compared with the control group, 80 proteins were differentially expressed in the irradiated group. Among the 40 randomly selected proteins used for peptide mass fingerprinting analysis and bioinformatics, 24 were meaningful. These proteins were related to antioxidant defense, amino acid metabolism, detoxification, anti-tumor development, amino acid transport, anti-peroxidation, and composition of respiratory chain. Western blot analysis showed that catalase (CAT), glycine N-methyltransferase (GNMT), and glutathione S-transferase P1 (GSTP1) were up-regulated in the irradiated group; these results were in agreement with qPCR results. These results show that CAT, GNMT, and GSTP1 may be related to stress response induced by low-dose irradiation in mice liver. The underlying mechanism however requires further investigation. PMID:26429139

  7. Proteomics analysis of liver tissues from C57BL/6J mice receiving low-dose 137Cs radiation.

    PubMed

    Yi, Lan; Li, Linwei; Yin, Jie; Hu, Nan; Li, Guangyue; Ding, Dexin

    2016-02-01

    Differentially expressed proteins in liver tissues of C57BL/6J mice receiving low-dose (137)Cs radiation were examined by proteomics analysis. Compared with the control group, 80 proteins were differentially expressed in the irradiated group. Among the 40 randomly selected proteins used for peptide mass fingerprinting analysis and bioinformatics, 24 were meaningful. These proteins were related to antioxidant defense, amino acid metabolism, detoxification, anti-tumor development, amino acid transport, anti-peroxidation, and composition of respiratory chain. Western blot analysis showed that catalase (CAT), glycine N-methyltransferase (GNMT), and glutathione S-transferase P1 (GSTP1) were up-regulated in the irradiated group; these results were in agreement with qPCR results. These results show that CAT, GNMT, and GSTP1 may be related to stress response induced by low-dose irradiation in mice liver. The underlying mechanism however requires further investigation.

  8. Development of Soil Compaction Analysis Software (SCAN) Integrating a Low Cost GPS Receiver and Compactometer

    PubMed Central

    Hwang, Jinsang; Yun, Hongsik; Kim, Juhyong; Suh, Yongcheol; Hong, Sungnam; Lee, Dongha

    2012-01-01

    Software for soil compaction analysis (SCAN) has been developed for evaluating compaction states using data from a GPS receiver as well as a compactometer attached to the roller. SCAN is distinguished from previous software for intelligent compaction (IC) in that it can use the results from various types of GPS positioning methods, and it also has an optimal structure for remotely managing the large amounts of data gathered from numerous rollers. For this, several methods were developed: (1) improving the accuracy of a low-cost GPS receiver’s positioning results; (2) modeling the trajectory of a moving roller using a GPS receiver’s results and linking it with the data from the compactometer; and (3) extracting the information regarding the compaction states of the ground from the modeled trajectory, using spatial analysis methods. The SCAN was verified through various field compaction tests, and it has been confirmed that it can be a very effective tool in evaluating field compaction states. PMID:22736955
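
    The core of step (2), linking roller positions to compactometer readings, can be sketched by interpolating the GPS trajectory at the compactometer timestamps; the arrays below are synthetic placeholders, not SCAN code.

        # Minimal sketch (not the SCAN software): interpolate roller positions at
        # the compactometer timestamps to georeference each compaction reading.
        import numpy as np

        gps_t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])                # s
        gps_e = np.array([100.0, 102.0, 104.0, 106.0, 108.0])      # easting, m
        gps_n = np.array([200.0, 200.5, 201.0, 201.5, 202.0])      # northing, m

        cmv_t = np.array([0.5, 1.5, 2.5, 3.5])                     # compactometer timestamps, s
        cmv   = np.array([22.0, 25.5, 28.0, 30.5])                 # compaction meter values

        east  = np.interp(cmv_t, gps_t, gps_e)
        north = np.interp(cmv_t, gps_t, gps_n)

        for e, n, v in zip(east, north, cmv):
            print(f"({e:.1f} m, {n:.1f} m) -> CMV {v:.1f}")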

  9. A revised risk analysis of stress ulcers in burn patients receiving ulcer prophylaxis

    PubMed Central

    Choi, Young Hwan; Lee, Jong Ho; Shin, Jae Jun; Cho, Young Soon

    2015-01-01

    Objective Most of the literature about Curling’s ulcer was published from 1960 through 1980. Therefore, an updated study of Curling’s ulcer is needed. We analyzed the risk factors affecting ulcer incidence in burn patients. Methods We retrospectively analyzed the medical records of burn patients who were admitted to two burn centers. We collected information about the general characteristics of patients, burn area size, abbreviated burn severity index, whether surgery was performed, endoscopy results, and the total body surface area (TBSA). We performed a multivariate regression analysis predicting development of Curling’s ulcer. Results In total, 135 patients (mean age, 49.5±13.5 years) underwent endoscopy. Endoscopy revealed ulcer in 51 patients: 36 (70.6%) with gastric ulcers, 9 (17.6%) with duodenal ulcers, and 6 (11.8%) with both ulcer types. Burn area, burn depth, epigastric pain, melena, intensive care unit admission, burn area >20% of TBSA, and undergoing surgery for the burn were significantly different between the ulcer and non-ulcer groups. Multivariate analysis showed two independent factors significantly associated with ulcer: epigastric pain (odds ratio [OR]: 4.55, 95% confidence interval [CI]: 1.74 to 11.90) and major burn (TBSA > 20%) (OR: 4.31, 95% CI: 1.34 to 13.85). Conclusion For burn patients, presence of epigastric pain and major burn with TBSA > 20% showed significant association with ulcer development. PMID:27752605
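
    A multivariate logistic model of this kind yields odds ratios as exponentiated coefficients; the sketch below illustrates the mechanics on synthetic data (not the burn-center records), using statsmodels.

        # Illustrative logistic model producing odds ratios and 95% CIs; synthetic data.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 135
        df = pd.DataFrame({
            "epigastric_pain": rng.integers(0, 2, n),
            "major_burn":      rng.integers(0, 2, n),   # TBSA > 20%
        })
        logit = -2.0 + 1.5 * df.epigastric_pain + 1.4 * df.major_burn
        df["ulcer"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        X = sm.add_constant(df[["epigastric_pain", "major_burn"]])
        fit = sm.Logit(df["ulcer"], X).fit(disp=0)
        odds_ratios = np.exp(fit.params)                 # exponentiated coefficients
        conf_int = np.exp(fit.conf_int())                # 95% CI on the OR scale
        print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))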

  10. Analysis of the Uncertainty in the Computation of Receiver Functions and Improvement in the Estimation of Receiver, PP and SS functions

    NASA Astrophysics Data System (ADS)

    Huang, X.; Gurrola, H.

    2013-12-01

    There are many methods used to produce receiver functions (RFs) that are referred to as deconvolution, but some are based on cross-correlation. We produced a set of realistic synthetics to test four commonly used methods for producing RFs: frequency-domain water-level deconvolution (FWLD), time-domain iterative deconvolution (TID), least-squares matrix inversion (MID) and the log-spectral method of deconvolution (LSD). We also test a hybrid of the FWLD and TID methods that we refer to as frequency-domain iterative deconvolution (FID). Synthetics were produced by windowing P recordings from real seismograms, convolving them with synthetic seismograms, and then adding samples of pre-event noise. We made synthetics with noise levels of 2, 4, 8 and 16 percent [defined as stdev(noise)/max(signal)]. For the synthetics, all methods were accurate in estimating the arrival times of the P, Ps, PPs and PSs phases to within one sample (at 40 sps). But there was a strong bias toward low values in the estimation of the amplitudes of these phases (even when compared as ratios, i.e. Ps/P). The larger the noise level, the larger the bias. For 2% noise all methods performed well. As noise increased there was a wide range in bias. The LSD and FID methods produced the smallest standard deviation (stdev) of the group in bootstrap estimates from 92 different synthetics. If these RFs were from real data, where we did not know the right answers, LSD and FID would have been considered to work equally well. However, knowing the correct answer, the LSD method had tremendous bias toward underestimating the amplitude, especially for smaller phases. In tests using real data from Arti, Russia (ARU), we found that all these methods performed equally well in estimating the arrival times for larger phases (such as Pms), but for smaller phases (such as P410s) the FID and TID had half as much error in estimating the arrival times as compared to the other
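
    The FWLD idea can be sketched as spectral division with a floored denominator followed by a Gaussian low-pass; the traces below are synthetic placeholders and the parameter choices are illustrative, not the authors' processing code.

        # Sketch of frequency-domain water-level deconvolution: divide the radial
        # spectrum by the vertical spectrum, floor the denominator at a fraction of
        # its maximum, then low-pass with a Gaussian.  Synthetic pulses stand in for
        # real seismograms.
        import numpy as np

        def water_level_rf(radial, vertical, dt, water_level=0.01, gauss_a=2.5):
            n = len(radial)
            R, V = np.fft.rfft(radial), np.fft.rfft(vertical)
            denom = (V * np.conj(V)).real
            denom = np.maximum(denom, water_level * denom.max())      # water level
            freq = np.fft.rfftfreq(n, dt)
            gauss = np.exp(-(2 * np.pi * freq) ** 2 / (4 * gauss_a ** 2))
            return np.fft.irfft(R * np.conj(V) / denom * gauss, n)

        dt = 0.025                                      # 40 samples per second
        t = np.arange(0, 60, dt)
        vertical = np.exp(-((t - 5) / 0.5) ** 2)        # direct P pulse
        radial = 0.2 * np.exp(-((t - 9) / 0.5) ** 2)    # a Ps-like converted arrival
        rf = water_level_rf(radial, vertical, dt)
        print("peak RF delay = %.2f s" % t[np.argmax(rf)])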

  11. Simultaneous analysis of hydrodynamic and optical properties using analytical ultracentrifugation equipped with multiwavelength detection.

    PubMed

    Walter, Johannes; Sherwood, Peter J; Lin, Wei; Segets, Doris; Stafford, Walter F; Peukert, Wolfgang

    2015-03-17

    Analytical ultracentrifugation (AUC) has proven to be a powerful tool for the study of particle size distributions, particle shapes, and interactions with high accuracy and unrevealed resolution. In this work we show how the analysis of sedimentation velocity data from the AUC equipped with a multiwavelength detector (MWL) can be used to gain an even deeper understanding of colloidal and macromolecular mixtures. New data evaluation routines have been integrated in the software SEDANAL to allow for the handling of MWL data. This opens up a variety of new possibilities because spectroscopic information becomes available for individual components in mixtures at the same time using MWL-AUC. For systems of known optical properties information on the hydrodynamic properties of the individual components in a mixture becomes accessible. For the first time, the determination of individual extinction spectra of components in mixtures is demonstrated via MWL evaluation of sedimentation velocity data. In our paper we first provide the informational background for the data analysis and expose the accessible parameters of our methodology. We further demonstrate the data evaluation by means of simulated data. Finally, we give two examples which are highly relevant in the field of nanotechnology using colored silica and gold nanoparticles of different size and extinction properties.

  12. Analysis of Local Control in Patients Receiving IMRT for Resected Pancreatic Cancers

    SciTech Connect

    Yovino, Susannah; Maidment, Bert W.; Herman, Joseph M.; Pandya, Naimish; Goloubeva, Olga; Wolfgang, Chris; Schulick, Richard; Laheru, Daniel; Hanna, Nader; Alexander, Richard; Regine, William F.

    2012-07-01

    Purpose: Intensity-modulated radiotherapy (IMRT) is increasingly incorporated into therapy for pancreatic cancer. A concern regarding this technique is the potential for geographic miss and decreased local control. We analyzed patterns of first failure among patients treated with IMRT for resected pancreatic cancer. Methods and Materials: Seventy-one patients who underwent resection and adjuvant chemoradiation for pancreas cancer are included in this report. IMRT was used for all to a median dose of 50.4 Gy. Concurrent chemotherapy was 5-FU-based in 72% of patients and gemcitabine-based in 28%. Results: At median follow-up of 24 months, 49/71 patients (69%) had failed. The predominant failure pattern was distant metastases in 35/71 patients (49%). The most common site of metastases was the liver. Fourteen patients (19%) developed locoregional failure: in the tumor bed alone in 5 patients, in regional nodes in 4 patients, and concurrently with metastases in 5 patients. Median overall survival (OS) was 25 months. On univariate analysis, nodal status, margin status, postoperative CA 19-9 level, and weight loss during treatment were predictive for OS. On multivariate analysis, higher postoperative CA19-9 levels predicted for worse OS on a continuous basis (p < 0.01). A trend to worse OS was seen among patients with more weight loss during therapy (p = 0.06). Patients with positive nodes and positive margins also had significantly worse OS (HR for death 2.8, 95% CI 1.1-7.5; HR for death 2.6, 95% CI 1.1-6.2, respectively). Grade 3-4 nausea and vomiting was seen in 8% of patients. The late complication of small bowel obstruction occurred in 4 patients (6%). Conclusions: This is the first comprehensive report of patterns of failure among patients treated with adjuvant IMRT for pancreas cancer. IMRT was not associated with an increase in local recurrences in our cohort. These data support the use of IMRT in the recently activated EORTC/US Intergroup/RTOG 0848 adjuvant pancreas

  13. An analytical model for the performance analysis of concurrent transmission in IEEE 802.15.4.

    PubMed

    Gezer, Cengiz; Zanella, Alberto; Verdone, Roberto

    2014-03-20

    Interference is a serious cause of performance degradation for IEEE 802.15.4 devices. The effect of concurrent transmissions in IEEE 802.15.4 has been generally investigated by means of simulation or experimental activities. In this paper, a mathematical framework for the derivation of chip, symbol and packet error probability of a typical IEEE 802.15.4 receiver in the presence of interference is proposed. Both non-coherent and coherent demodulation schemes are considered by our model under the assumption of the absence of thermal noise. Simulation results are also added to assess the validity of the mathematical framework when the effect of thermal noise cannot be neglected. Numerical results show that the proposed analysis is in agreement with the measurement results in the literature under realistic working conditions.

  14. An Analytical Model for the Performance Analysis of Concurrent Transmission in IEEE 802.15.4

    PubMed Central

    Gezer, Cengiz; Zanella, Alberto; Verdone, Roberto

    2014-01-01

    Interference is a serious cause of performance degradation for IEEE 802.15.4 devices. The effect of concurrent transmissions in IEEE 802.15.4 has been generally investigated by means of simulation or experimental activities. In this paper, a mathematical framework for the derivation of chip, symbol and packet error probability of a typical IEEE 802.15.4 receiver in the presence of interference is proposed. Both non-coherent and coherent demodulation schemes are considered by our model under the assumption of the absence of thermal noise. Simulation results are also added to assess the validity of the mathematical framework when the effect of thermal noise cannot be neglected. Numerical results show that the proposed analysis is in agreement with the measurement results in the literature under realistic working conditions. PMID:24658624
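
    For orientation, a widely used AWGN approximation for the IEEE 802.15.4 2.4 GHz O-QPSK PHY relates SNR to bit and packet error rates; this is a textbook baseline (e.g., the link model used in ns-2), not the interference-specific model derived in the paper.

        # Textbook AWGN bit/packet error model for the 802.15.4 2.4 GHz O-QPSK PHY;
        # illustrative baseline only, not the paper's interference analysis.
        import math

        def ber_802154(snr_db):
            snr = 10 ** (snr_db / 10)
            s = sum((-1) ** k * math.comb(16, k) * math.exp(20 * snr * (1 / k - 1))
                    for k in range(2, 17))
            return (8 / 15) * (1 / 16) * s

        def per(snr_db, payload_bytes=20):
            return 1 - (1 - ber_802154(snr_db)) ** (8 * payload_bytes)

        for snr_db in (-2, 0, 2, 4):
            print(f"SNR {snr_db:+d} dB -> PER {per(snr_db):.3f}")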

  15. Morphological analysis of the primary center receiving spatial information transferred by the waggle dance of honeybees.

    PubMed

    Ai, Hiroyuki; Hagio, Hiromi

    2013-08-01

    The waggle dancers of honeybees roughly encode the distance and direction to the food source as the duration of the waggle phase and the body angle during the waggle phase. It is believed that hive-mates detect airborne vibrations produced during the waggle phase to acquire distance information and simultaneously detect the body axis during the waggle phase to acquire direction information. It has been further proposed that the orientation of the body axis on the vertical comb is detected by neck hairs (NHs) on the prosternal organ. The afferents of the NHs project into the prothoracic and mesothoracic ganglia and the dorsal subesophageal ganglion (dSEG). This study demonstrates somatotopic organization within the dSEG of the central projections of the mechanosensory neurons of the NHs. The terminals of the NH afferents in dSEG are in close apposition to those of Johnston's organ (JO) afferents. The sensory axons of both terminate in a region posterior to the crossing of the ventral intermediate tract (VIT) and the maxillary dorsal commissures I and III (MxDCI, III) in the subesophageal ganglion. These features of the terminal areas of the NH and JO afferents are common to the worker, drone, and queen castes of honeybees. Analysis of the spatial relationship between the NH neurons and the morphologically and physiologically characterized vibration-sensitive interneurons DL-Int-1 and DL-Int-2 demonstrated that several branches of DL-Int-1 are in close proximity to the central projection of the mechanosensory neurons of the NHs in the dSEG. PMID:23297020

  16. Morphological analysis of the primary center receiving spatial information transferred by the waggle dance of honeybees.

    PubMed

    Ai, Hiroyuki; Hagio, Hiromi

    2013-08-01

    The waggle dancers of honeybees roughly encode the distance and direction to the food source as the duration of the waggle phase and the body angle during the waggle phase. It is believed that hive-mates detect airborne vibrations produced during the waggle phase to acquire distance information and simultaneously detect the body axis during the waggle phase to acquire direction information. It has been further proposed that the orientation of the body axis on the vertical comb is detected by neck hairs (NHs) on the prosternal organ. The afferents of the NHs project into the prothoracic and mesothoracic ganglia and the dorsal subesophageal ganglion (dSEG). This study demonstrates somatotopic organization within the dSEG of the central projections of the mechanosensory neurons of the NHs. The terminals of the NH afferents in dSEG are in close apposition to those of Johnston's organ (JO) afferents. The sensory axons of both terminate in a region posterior to the crossing of the ventral intermediate tract (VIT) and the maxillary dorsal commissures I and III (MxDCI, III) in the subesophageal ganglion. These features of the terminal areas of the NH and JO afferents are common to the worker, drone, and queen castes of honeybees. Analysis of the spatial relationship between the NH neurons and the morphologically and physiologically characterized vibration-sensitive interneurons DL-Int-1 and DL-Int-2 demonstrated that several branches of DL-Int-1 are in close proximity to the central projection of the mechanosensory neurons of the NHs in the dSEG.

  17. CALUTRON RECEIVER

    DOEpatents

    Barnes, S.W.

    1959-08-25

    An improvement in a calutron receiver for collecting the isotopes is described. The electromagnetic separation of the isotopes produces a mass spectrum of closely adjacent beams of ions at the foci regions, and a dividing wall between the two pockets is arranged at an angle. Substantially all of the ions of the less abundant isotope enter one of the pockets and strike one side of the wall directly, while substantially none of the ions entering the other pocket strikes the wall directly.

  18. Trace analysis of energetic materials via direct analyte-probed nanoextraction coupled to direct analysis in real time mass spectrometry.

    PubMed

    Clemons, Kristina; Dake, Jeffrey; Sisco, Edward; Verbeck, Guido F

    2013-09-10

    Direct analysis in real time mass spectrometry (DART-MS) has proven to be a useful forensic tool for the trace analysis of energetic materials. While other techniques for detecting trace amounts of explosives involve extraction, derivatization, solvent exchange, or sample clean-up, DART-MS requires none of these. Typical DART-MS analyses directly from a solid sample or from a swab have been quite successful; however, these methods may not always be an optimal sampling technique in a forensic setting. For example, if the sample were only located in an area which included a latent fingerprint of interest, direct DART-MS analysis or the use of a swab would almost certainly destroy the print. To avoid ruining such potentially invaluable evidence, another method has been developed which will leave the fingerprint virtually untouched. Direct analyte-probed nanoextraction coupled to nanospray ionization-mass spectrometry (DAPNe-NSI-MS) has demonstrated excellent sensitivity and repeatability in forensic analyses of trace amounts of illicit drugs from various types of surfaces. This technique employs a nanomanipulator in conjunction with bright-field microscopy to extract single particles from a surface of interest and has provided a limit of detection of 300 attograms for caffeine. Combining DAPNe with DART-MS provides another level of flexibility in forensic analysis, and has proven to be a sufficient detection method for trinitrotoluene (TNT), RDX, and 1-methylaminoanthraquinone (MAAQ).

  19. Studies of the analyte carrier interface in flow injection analysis: Project report, June 1, 1987-February 1, 1988

    SciTech Connect

    Brown, S.D.

    1988-01-01

    The goal of this project is the study of rapid multicomponent analysis of transient species in flowing media. Application of methods developed for multicomponent analysis is aimed at the investigation of dispersion-controlled chemical reactions at an analyte bolus-carrier solution interface, and the study of the effects of competition between analyte species on the distribution of products in chemical reactions. To meet these goals, study of new methods for the analysis of three-dimensional data resulting from measurement of transient species by sensor arrays or by rapid-scan (or multiplex) detection has commenced. Research has concentrated on three areas during the past year. These areas are (1) flow reaction analysis, (2) modeling of complex reaction kinetics, and (3) new methods for multicomponent analysis of inadequately modeled systems. Progress in each area is evaluated. 3 figs.

  20. Moisture Analysis in Lotion by Karl Fischer Coulometry. An Experiment for Introductory Analytical Chemistry

    NASA Astrophysics Data System (ADS)

    Mabrouk, Patricia Ann; Castriotta, Kristine

    2001-10-01

    This paper describes an experiment that can be used in an introductory analytical chemistry laboratory course. It allows the student analyst to measure the moisture content of various hand and body lotions using the coulometric Karl Fischer (KF) technique, providing a modern alternative to the traditional electrochemical experiments usually explored in introductory analytical chemistry courses. The experiment introduces students to an important technique in industry and commerce, which is highly sensitive, accurate, and precise, and which can be used to study a wide range of samples. The measurement times are short, allowing students to experience the analytical problem-solving process from start to finish in a single 3-hour laboratory period. One KF coulometer can adequately service even a large analytical chemistry class (>80 students). In spring 2000, students identified the KF experiment as the most popular, most useful, and most educational experiment in our analytical chemistry laboratory curriculum.

  1. Analytical and experimental analysis of solute transport in heterogeneous porous media.

    PubMed

    Wu, Lei; Gao, Bin; Tian, Yuan; Muñoz-Carpena, Rafael

    2014-01-01

    Knowledge of solute transport in heterogeneous porous media is crucial to monitor contaminant fate and transport in soil and groundwater systems. In this study, we present new findings from experimental and mathematical analysis to improve current understanding of solute transport in structured heterogeneous porous media. Three saturated columns packed with different sand combinations were used to examine the breakthrough behavior of bromide, a conservative tracer. Experimental results showed that bromide had different breakthrough responses in the three types of sand combinations, indicating that heterogeneity in hydraulic conductivity has a significant effect on the solute transport in structured heterogeneous porous media. Simulations from analytical solutions of a two-domain solute transport model matched experimental breakthrough data well for all the experimental conditions tested. Experimental and model results show that under saturated flow conditions, advection dominates solute transport in both fast-flow and slow-flow domains. The sand with larger hydraulic conductivity provided a preferential flow path for solute transport (fast-flow domain) that dominates the mass transfer in the heterogeneous porous media. Importantly, the transport in the slow-flow domain and mass exchange between the domains also contribute to the flow and solute transport processes and thus must be considered when investigating contaminant transport in heterogeneous porous media. PMID:24279625
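
    The classical single-domain building block of such models is the one-dimensional advection-dispersion (Ogata-Banks) breakthrough solution; the two-domain model additionally couples fast-flow and slow-flow domains through a mass-exchange term. A sketch with illustrative parameters:

        # Classical 1-D advection-dispersion (Ogata-Banks) breakthrough curve, shown
        # as the single-domain building block; parameter values are illustrative.
        import numpy as np
        from scipy.special import erfc

        def ade_breakthrough(x, t, v, D):
            """Relative concentration C/C0 for a continuous injection at x = 0."""
            a = (x - v * t) / (2 * np.sqrt(D * t))
            b = (x + v * t) / (2 * np.sqrt(D * t))
            return 0.5 * (erfc(a) + np.exp(v * x / D) * erfc(b))

        x = 0.30                       # column length, m
        v, D = 1.0e-4, 2.0e-6          # pore-water velocity (m/s), dispersion coeff. (m^2/s)
        for t in (1000.0, 3000.0, 6000.0):
            print(f"t = {t:5.0f} s   C/C0 = {ade_breakthrough(x, t, v, D):.3f}")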

  2. Characterization of plutonium-bearing wastes by chemical analysis and analytical electron microscopy

    SciTech Connect

    Behrens, R.G.; Buck, E.C.; Dietz, N.L.; Bates, J.K.; Van Deventer, E.; Chaiko, D.J.

    1995-09-01

    This report summarizes the results of characterization studies of plutonium-bearing wastes produced at the US Department of Energy weapons production facilities. Several different solid wastes were characterized, including incinerator ash and ash heels from Rocky Flats Plant and Los Alamos National Laboratory; sand, slag, and crucible waste from Hanford; and LECO crucibles from the Savannah River Site. These materials were characterized by chemical analysis and analytical electron microscopy. The results showed the presence of discrete PuO2, PuO2-x, and Pu4O7 phases, about 1 μm or less in size, in all of the samples examined. In addition, a number of amorphous phases were present that contained plutonium. In all the ash and ash heel samples examined, plutonium phases were found that were completely surrounded by silicate matrices. Consequently, to achieve optimum plutonium recovery in any chemical extraction process, extraction would have to be coupled with ultrafine grinding to average particle sizes of less than 1 μm to liberate the plutonium from the surrounding inert matrix.

  3. A Big Data Analytics Pipeline for the Analysis of TESS Full Frame Images

    NASA Astrophysics Data System (ADS)

    Wampler-Doty, Matthew; Pierce Doty, John

    2015-12-01

    We present a novel method for producing a catalogue of extra-solar planets and transients using the full frame image data from TESS. Our method involves (1) creating a fast Monte Carlo simulation of the TESS science instruments, (2) using the simulation to create a labeled dataset consisting of exoplanets with various orbital durations as well as transients (such as tidal disruption events), (3) using supervised machine learning to find optimal matched filters, Support Vector Machines (SVMs) and statistical classifiers (i.e. naïve Bayes and Markov Random Fields) to detect astronomical objects of interest and (4) “Big Data” analysis to produce a catalogue based on the TESS data. We will apply the resulting methods to all stars in the full frame images. We hope that by providing libraries that conform to industry standards of Free Open Source Software we may invite researchers from the astronomical community as well as the wider data-analytics community to contribute to our effort.
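
    One stage of such a pipeline, matched filtering a light curve with a box-shaped transit template, can be sketched as follows; the light curve is synthetic and the threshold is illustrative, not the project's trained detectors.

        # Sketch of one pipeline stage: matched filtering of a synthetic light curve
        # with a normalized box transit template.
        import numpy as np

        rng = np.random.default_rng(42)
        n, depth, width, period = 2000, 0.005, 10, 400
        flux = 1.0 + rng.normal(scale=0.001, size=n)
        for start in range(100, n, period):             # inject periodic box transits
            flux[start:start + width] -= depth

        template = -np.ones(width) / width              # normalized (inverted) box filter
        detection = np.correlate(flux - flux.mean(), template, mode="same")
        hits = np.flatnonzero(detection > 0.5 * depth)  # simple fixed threshold
        print("transit-like samples near indices:", hits[:10], "...")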

  4. Stable isotope ratio analysis: A potential analytical tool for the authentication of South African lamb meat.

    PubMed

    Erasmus, Sara Wilhelmina; Muller, Magdalena; van der Rijst, Marieta; Hoffman, Louwrens Christiaan

    2016-02-01

    Stable isotope ratios ((13)C/(12)C and (15)N/(14)N) of South African Dorper lambs from farms with different vegetation types were measured by isotope ratio mass spectrometry (IRMS), to evaluate it as a tool for the authentication of origin and feeding regime. Homogenised and defatted meat of the Longissimus lumborum (LL) muscle of lambs from seven different farms was assessed. The δ(13)C values were affected by the origin of the meat, mainly reflecting the diet. The Rûens and Free State farms had the lowest (p ⩽ 0.05) δ(15)N values, followed by the Northern Cape farms, with Hantam Karoo/Calvinia having the highest δ(15)N values. Discriminant analysis showed δ(13)C and δ(15)N differences as promising results for the use of IRMS as a reliable analytical tool for lamb meat authentication. The results suggest that diet, linked to origin, is an important factor to consider regarding region of origin classification for South African lamb.

  5. Analytical model and stability analysis of the leading edge spar of a passively morphing ornithopter wing.

    PubMed

    Wissa, Aimy; Calogero, Joseph; Wereley, Norman; Hubbard, James E; Frecker, Mary

    2015-12-01

    This paper presents the stability analysis of the leading edge spar of a flapping wing unmanned air vehicle with a compliant spine inserted in it. The compliant spine is a mechanism that was designed to be flexible during the upstroke and stiff during the downstroke. Inserting a variable stiffness mechanism into the leading edge spar affects its structural stability. The model for the spar-spine system was formulated in terms of the well-known Mathieu's equation, in which the compliant spine was modeled as a torsional spring with a sinusoidal stiffness function. Experimental data were used to validate the model, and results show agreement within 11%. The structural stability of the leading edge spar-spine system was determined analytically and graphically using a phase plane plot and Strutt diagrams. Lastly, a torsional viscous damper was added to the leading edge spar-spine model to investigate the effect of damping on stability. Results show that for the un-damped case, the leading edge spar-spine response was stable and bounded; however, there were areas of instability that appear for a range of spine upstroke and downstroke stiffnesses. Results also show that there exists a damping ratio between 0.2 and 0.5 for which the leading edge spar-spine system was stable for all values of spine upstroke and downstroke stiffnesses. PMID:26502210
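
    The stability test behind a Strutt diagram can be sketched with a Floquet (monodromy-matrix) calculation for a damped Mathieu-type equation; the parameter values below are illustrative, not the spar-spine stiffness values from the paper.

        # Floquet stability check for x'' + 2*zeta*x' + (delta + eps*cos(t))*x = 0,
        # the classical route to a Strutt-type stability map.  Parameters illustrative.
        import numpy as np
        from scipy.integrate import solve_ivp

        def is_stable(delta, eps, zeta=0.0, T=2 * np.pi):
            def rhs(t, y):
                x, v = y
                return [v, -2 * zeta * v - (delta + eps * np.cos(t)) * x]
            # Monodromy matrix: propagate the two unit initial conditions over one period.
            cols = [solve_ivp(rhs, (0, T), y0, rtol=1e-9, atol=1e-12).y[:, -1]
                    for y0 in ([1.0, 0.0], [0.0, 1.0])]
            M = np.column_stack(cols)
            return bool(np.all(np.abs(np.linalg.eigvals(M)) <= 1.0 + 1e-6))

        print(is_stable(delta=0.60, eps=0.10))             # away from resonance: stable
        print(is_stable(delta=0.25, eps=0.20))             # principal parametric resonance: unstable
        print(is_stable(delta=0.25, eps=0.20, zeta=0.3))   # added damping restores stability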

  6. Improved analytic extreme-mass-ratio inspiral model for scoping out eLISA data analysis

    NASA Astrophysics Data System (ADS)

    Chua, Alvin J. K.; Gair, Jonathan R.

    2015-12-01

    The space-based gravitational-wave detector eLISA has been selected as the ESA L3 mission, and the mission design will be finalized by the end of this decade. To prepare for mission formulation over the next few years, several outstanding and urgent questions in data analysis will be addressed using mock data challenges, informed by instrument measurements from the LISA Pathfinder satellite launching at the end of 2015. These data challenges will require accurate and computationally affordable waveform models for anticipated sources such as the extreme-mass-ratio inspirals (EMRIs) of stellar-mass compact objects into massive black holes. Previous data challenges have made use of the well-known analytic EMRI waveforms of Barack and Cutler, which are extremely quick to generate but dephase relative to more accurate waveforms within hours, due to their mismatched radial, polar and azimuthal frequencies. In this paper, we describe an augmented Barack-Cutler model that uses a frequency map to the correct Kerr frequencies, along with updated evolution equations and a simple fit to a more accurate model. The augmented waveforms stay in phase for months and may be generated with virtually no additional computational cost.

  7. Analytical model and stability analysis of the leading edge spar of a passively morphing ornithopter wing.

    PubMed

    Wissa, Aimy; Calogero, Joseph; Wereley, Norman; Hubbard, James E; Frecker, Mary

    2015-10-26

    This paper presents the stability analysis of the leading edge spar of a flapping wing unmanned air vehicle with a compliant spine inserted in it. The compliant spine is a mechanism that was designed to be flexible during the upstroke and stiff during the downstroke. Inserting a variable stiffness mechanism into the leading edge spar affects its structural stability. The model for the spar-spine system was formulated in terms of the well-known Mathieu's equation, in which the compliant spine was modeled as a torsional spring with a sinusoidal stiffness function. Experimental data were used to validate the model, and results show agreement within 11%. The structural stability of the leading edge spar-spine system was determined analytically and graphically using a phase plane plot and Strutt diagrams. Lastly, a torsional viscous damper was added to the leading edge spar-spine model to investigate the effect of damping on stability. Results show that for the un-damped case, the leading edge spar-spine response was stable and bounded; however, there were areas of instability that appear for a range of spine upstroke and downstroke stiffnesses. Results also show that there exists a damping ratio between 0.2 and 0.5 for which the leading edge spar-spine system was stable for all values of spine upstroke and downstroke stiffnesses.

  8. Network Traffic Analysis With Query Driven Visualization - SC 2005 HPC Analytics Results

    SciTech Connect

    Stockinger, Kurt; Wu, Kesheng; Campbell, Scott; Lau, Stephen; Fisk, Mike; Gavrilov, Eugene; Kent, Alex; Davis, Christopher E.; Olinger, Rick; Young, Rob; Prewett, Jim; Weber, Paul; Caudell, Thomas P.; Bethel, E. Wes; Smith, Steve

    2005-09-01

    Our analytics challenge is to identify, characterize, and visualize anomalous subsets of large collections of network connection data. We use a combination of HPC resources, advanced algorithms, and visualization techniques. To effectively and efficiently identify the salient portions of the data, we rely on a multi-stage workflow that includes data acquisition, summarization (feature extraction), novelty detection, and classification. Once these subsets of interest have been identified and automatically characterized, we use a state-of-the-art high-dimensional query system to extract data subsets for interactive visualization. Our approach is equally useful for other large-data analysis problems where it is more practical to identify interesting subsets of the data for visualization than to render all data elements. By reducing the size of the rendering workload, we enable highly interactive and useful visualizations. As a result of this work we were able to analyze six months' worth of data interactively with response times two orders of magnitude shorter than with conventional methods.

  9. Shape Analysis of DNA-Au Hybrid Particles by Analytical Ultracentrifugation.

    PubMed

    Urban, Maximilan J; Holder, Isabelle T; Schmid, Marius; Fernandez Espin, Vanesa; Garcia de la Torre, Jose; Hartig, Jörg S; Cölfen, Helmut

    2016-08-23

    Current developments in nanotechnology have increased the demand for nanocrystal assemblies with well-defined shapes and tunable sizes. DNA is a particularly well-suited building block in nanoscale assemblies because of its scalable sizes, conformational variability, and convenient self-assembly capabilities via base pairing. In hybrid materials, gold nanoparticles (AuNPs) can be assembled into nanoparticle structures with programmable interparticle distances by applying appropriate DNA sequences. However, the development of stoichiometrically defined DNA/NP structures is still challenging since product mixtures are frequently obtained and their purification and characterization is the rate-limiting step in the development of DNA-NP hybrid assemblies. Improvements in nanostructure fractionation and characterization techniques offer great potential for nanotechnology applications in general. This study reports the application of analytical ultracentrifugation (AUC) for the characterization of anisotropic DNA-linked metal-crystal assemblies. On the basis of transmission electron microscopy data and the DNA primary sequence, hydrodynamic bead models are set up for the interpretation of the measured frictional ratios and sedimentation coefficients. We demonstrate that the presence of single DNA strands on particle surfaces as well as the shape factors of multiparticle structures in mixtures can be quantitatively described by AUC. This study will significantly broaden the possibilities to analyze mixtures of shape-anisotropic nanoparticle assemblies. By establishing insights into the analysis of nanostructure mixtures based on fundamental principles of sedimentation, a wide range of potential applications in basic research and industry become accessible. PMID:27459174
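
    The hydrodynamic quantities mentioned here are tied together by standard sedimentation relations; a minimal sketch, assuming the textbook Svedberg-type expressions and illustrative (protein-like) numbers rather than the DNA-Au data:

        # Standard hydrodynamic relations used when interpreting sedimentation data:
        # f = M(1 - vbar*rho)/(N_A * s) and f0 = 6*pi*eta*R0, with R0 the radius of
        # the equivalent compact sphere.  Values below are illustrative only.
        import math

        N_A = 6.02214076e23   # 1/mol

        def frictional_ratio(M, vbar, rho, eta, s):
            """M kg/mol, vbar m^3/kg, rho kg/m^3, eta Pa*s, s seconds."""
            f = M * (1 - vbar * rho) / (N_A * s)
            r0 = (3 * M * vbar / (4 * math.pi * N_A)) ** (1 / 3)
            return f / (6 * math.pi * eta * r0)

        # A ~50 kDa particle sedimenting at 3.5 S in water at 20 C (illustrative)
        print("f/f0 = %.2f" % frictional_ratio(M=50.0, vbar=0.73e-3, rho=998.0,
                                               eta=1.002e-3, s=3.5e-13))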

  10. Advanced qualification of pharmaceutical excipient suppliers by multiple analytics and multivariate analysis combined.

    PubMed

    Hertrampf, A; Müller, H; Menezes, J C; Herdling, T

    2015-11-10

    Pharmaceutical excipients have different functions within a drug formulation; consequently, they can influence the manufacturability and/or performance of medicinal products. Therefore, critical-to-quality attributes should be kept constant. Sometimes it may be necessary to qualify a second supplier, but its product will not be completely equal to the first supplier's product. To minimize the risk of not detecting small non-similarities between suppliers and to detect lot-to-lot variability for each supplier, multivariate data analysis (MVA) can be used as a more powerful alternative to classical quality control that uses one-parameter-at-a-time monitoring. Such an approach is capable of supporting the requirements of a new guideline by the European Parliament and Council (2015/C-95/02) demanding appropriate quality control strategies for excipients based on their criticality and supplier risks in ensuring quality, safety and function. This study compares calcium hydrogen phosphate from two suppliers. It can be assumed that the suppliers use different manufacturing processes. Therefore, possible chemical and physical differences were investigated by using Raman spectroscopy, laser diffraction and X-ray powder diffraction. Afterwards, MVA was used to extract relevant information from each analytical technique. The CaHPO4 from the two suppliers could be discriminated. The knowledge gained allowed an enhanced strategy for second-supplier qualification to be specified.
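
    As an illustration of the multivariate step, a minimal sketch using synthetic "spectra" and an off-the-shelf PCA is shown below; it is not the study's Raman/XRPD/laser-diffraction data or its modeling pipeline:

        # PCA on standardized spectra from two suppliers, followed by a quick check
        # of whether the first principal component separates the suppliers.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        spectra_a = rng.normal(1.00, 0.05, size=(20, 300))  # synthetic supplier A lots
        spectra_b = rng.normal(1.05, 0.05, size=(20, 300))  # synthetic supplier B lots
        X = np.vstack([spectra_a, spectra_b])
        labels = np.array(["A"] * 20 + ["B"] * 20)

        scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
        for supplier in ("A", "B"):
            pc1 = scores[labels == supplier, 0]
            print(supplier, "PC1 mean +/- sd:", round(pc1.mean(), 2), round(pc1.std(), 2))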

  11. Tools for the quantitative analysis of sedimentation boundaries detected by fluorescence optical analytical ultracentrifugation.

    PubMed

    Zhao, Huaying; Casillas, Ernesto; Shroff, Hari; Patterson, George H; Schuck, Peter

    2013-01-01

    Fluorescence optical detection in sedimentation velocity analytical ultracentrifugation allows the study of macromolecules at nanomolar concentrations and below. This has significant promise, for example, for the study of systems of high-affinity protein interactions. Here we describe adaptations of the direct boundary modeling analysis approach implemented in the software SEDFIT that were developed to accommodate unique characteristics of the confocal fluorescence detection system. These include spatial gradients of signal intensity due to scanner movements out of the plane of rotation, temporal intensity drifts due to instability of the laser and fluorophores, and masking of the finite excitation and detection cone by the sample holder. In an extensive series of experiments with enhanced green fluorescent protein ranging from low nanomolar to low micromolar concentrations, we show that the experimental data provide sufficient information to determine the parameters required for first-order approximation of the impact of these effects on the recorded data. Systematic deviations of fluorescence optical sedimentation velocity data analyzed using conventional sedimentation models developed for absorbance and interference optics are largely removed after these adaptations, resulting in excellent fits that highlight the high precision of fluorescence sedimentation velocity data, thus allowing a more detailed quantitative interpretation of the signal boundaries that is otherwise not possible for this system.

  12. Structural damage localization by outlier analysis of signal-processed mode shapes - Analytical and experimental validation

    NASA Astrophysics Data System (ADS)

    Ulriksen, M. D.; Damkilde, L.

    2016-02-01

    Contrary to global modal parameters such as eigenfrequencies, mode shapes inherently provide structural information on a local level. Therefore, this particular modal parameter and its derivatives are utilized extensively for damage identification. Typically, more or less advanced mathematical methods are employed to identify damage-induced discontinuities in the spatial mode shape signals, thereby potentially facilitating damage detection and/or localization. However, because they are based on distinguishing damage-induced discontinuities from other signal irregularities, an intrinsic deficiency of these methods is their high sensitivity to measurement noise. In the present paper, a damage localization method which, compared to the conventional mode shape-based methods, has greatly enhanced robustness towards measurement noise is proposed. The method is based on signal processing of a spatial mode shape by means of continuous wavelet transformation (CWT) and subsequent application of a generalized discrete Teager-Kaiser energy operator (GDTKEO) to identify damage-induced mode shape discontinuities. In order to evaluate whether the identified discontinuities are in fact damage-induced, outlier analysis is conducted by applying the Mahalanobis metric to major principal scores of the sensor-located bands of the signal-processed mode shape. The method is tested analytically and benchmarked with other mode shape-based damage localization approaches on the basis of a free-vibrating beam and validated experimentally in the context of a residential-sized wind turbine blade subjected to an impulse load.
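
    A self-contained sketch of that processing chain is given below; it uses a Ricker-wavelet CWT, the basic discrete Teager-Kaiser energy operator rather than the paper's generalized operator, simple band-mean features in place of principal scores, and a synthetic mode shape:

        import numpy as np

        def ricker(points, a):
            # Unnormalized Ricker (Mexican-hat) wavelet of characteristic width a.
            t = np.arange(points) - (points - 1) / 2.0
            return (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

        def tkeo(sig):
            # Discrete Teager-Kaiser energy operator: psi[n] = s[n]^2 - s[n-1]*s[n+1].
            return sig[1:-1] ** 2 - sig[:-2] * sig[2:]

        # Synthetic first bending mode with a small damage-induced kink plus noise.
        x = np.linspace(0.0, 1.0, 200)
        mode = np.sin(np.pi * x)
        mode[120:] += 5e-4 * (x[120:] - x[120])
        mode += np.random.default_rng(1).normal(0.0, 1e-4, x.size)

        widths = np.arange(1, 6)
        coeffs = np.array([np.convolve(mode, ricker(10 * w, w), mode="same") for w in widths])
        energy = np.array([tkeo(c) for c in coeffs])            # scales x (N - 2)

        bands = np.array_split(energy, 20, axis=1)              # 20 "sensor" bands
        features = np.array([b.mean(axis=1) for b in bands])    # bands x scales
        diff = features - features.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(features, rowvar=False))
        d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)      # squared Mahalanobis distances
        print("band outlier scores:", np.round(d2, 2))          # large score = candidate damage band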

  13. The Choice of Analytical Strategies in Inverse-Probability-of-Treatment–Weighted Analysis: A Simulation Study

    PubMed Central

    Yang, Shibing; Lu, Juan; Eaton, Charles B.; Harpe, Spencer; Lapane, Kate L.

    2015-01-01

    We sought to explore the impact of intention to treat and complex treatment use assumptions made during weight construction on the validity and precision of estimates derived from inverse-probability-of-treatment–weighted analysis. We simulated data assuming a nonexperimental design that attempted to quantify the effect of statin on lowering low-density lipoprotein cholesterol. We created 324 scenarios by varying parameter values (effect size, sample size, adherence level, probability of treatment initiation, associations between low-density lipoprotein cholesterol and treatment initiation and continuation). Four analytical approaches were used: 1) assuming intention to treat; 2) assuming complex mechanisms of treatment use; 3) assuming a simple mechanism of treatment use; and 4) assuming invariant confounders. With a continuous outcome, estimates assuming intention to treat were biased toward the null when there were nonnull treatment effect and nonadherence after treatment initiation. For each 1% decrease in the proportion of patients staying on treatment after initiation, the bias in estimated average treatment effect increased by 1%. Inverse-probability-of-treatment–weighted analyses that took into account the complex mechanisms of treatment use generated approximately unbiased estimates. Studies estimating the actual effect of a time-varying treatment need to consider the complex mechanisms of treatment use during weight construction. PMID:26316599
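
    For readers unfamiliar with the weighting itself, a minimal point-treatment sketch is shown below; it uses simulated data and a single time point, so it is far simpler than the time-varying weight construction studied in the paper:

        # Fit a propensity model, form stabilized inverse-probability weights, and
        # compare naive and weighted treatment-control differences in follow-up LDL-C.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n = 5000
        ldl_base = rng.normal(160, 25, n)                            # baseline LDL-C (confounder)
        p_treat = 1 / (1 + np.exp(-(ldl_base - 160) / 15))           # sicker patients treated more
        treated = rng.binomial(1, p_treat)
        ldl_follow = ldl_base - 30 * treated + rng.normal(0, 10, n)  # true treatment effect: -30

        ps_model = LogisticRegression(max_iter=1000).fit(ldl_base[:, None], treated)
        ps = ps_model.predict_proba(ldl_base[:, None])[:, 1]
        w = np.where(treated == 1, treated.mean() / ps, (1 - treated.mean()) / (1 - ps))

        naive = ldl_follow[treated == 1].mean() - ldl_follow[treated == 0].mean()
        wmean = lambda mask: np.average(ldl_follow[mask], weights=w[mask])
        iptw = wmean(treated == 1) - wmean(treated == 0)
        print(f"naive difference {naive:.1f}, IPTW difference {iptw:.1f} (truth -30)")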

  14. Analytical and experimental analysis of solute transport in heterogeneous porous media.

    PubMed

    Wu, Lei; Gao, Bin; Tian, Yuan; Muñoz-Carpena, Rafael

    2014-01-01

    Knowledge of solute transport in heterogeneous porous media is crucial to monitor contaminant fate and transport in soil and groundwater systems. In this study, we present new findings from experimental and mathematical analysis to improve current understanding of solute transport in structured heterogeneous porous media. Three saturated columns packed with different sand combinations were used to examine the breakthrough behavior of bromide, a conservative tracer. Experimental results showed that bromide had different breakthrough responses in the three types of sand combinations, indicating that heterogeneity in hydraulic conductivity has a significant effect on the solute transport in structured heterogeneous porous media. Simulations from analytical solutions of a two-domain solute transport model matched experimental breakthrough data well for all the experimental conditions tested. Experimental and model results show that under saturated flow conditions, advection dominates solute transport in both fast-flow and slow-flow domains. The sand with larger hydraulic conductivity provided a preferential flow path for solute transport (fast-flow domain) that dominates the mass transfer in the heterogeneous porous media. Importantly, the transport in the slow-flow domain and mass exchange between the domains also contribute to the flow and solute transport processes and thus must be considered when investigating contaminant transport in heterogeneous porous media.
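
    As a simplified illustration of the two-domain picture, the sketch below superimposes two classical one-dimensional advection-dispersion (Ogata-Banks) breakthrough curves weighted by the flow fraction in each domain; the inter-domain mass exchange included in the paper's model is neglected, and all parameters are hypothetical:

        import numpy as np
        from scipy.special import erfc

        def ogata_banks(x, t, v, D):
            # Relative concentration C/C0 for a step input at x = 0 in a semi-infinite column.
            t = np.asarray(t, dtype=float)
            a = (x - v * t) / (2.0 * np.sqrt(D * t))
            b = (x + v * t) / (2.0 * np.sqrt(D * t))
            return 0.5 * (erfc(a) + np.exp(v * x / D) * erfc(b))

        x = 0.30                                   # column length (m)
        t = np.linspace(60.0, 36000.0, 200)        # time (s)
        fast = ogata_banks(x, t, v=2e-4, D=2e-6)   # high-conductivity (fast-flow) domain
        slow = ogata_banks(x, t, v=2e-5, D=5e-7)   # low-conductivity (slow-flow) domain
        q_fast = 0.8                               # fraction of flow through the fast domain
        btc = q_fast * fast + (1 - q_fast) * slow  # flux-averaged breakthrough curve
        print(f"relative concentration at t = 1 h: {btc[np.argmin(np.abs(t - 3600))]:.3f}")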

  15. Advanced organic analysis and analytical methods development: FY 1995 progress report. Waste Tank Organic Safety Program

    SciTech Connect

    Wahl, K.L.; Campbell, J.A.; Clauss, S.A.

    1995-09-01

    This report describes the work performed during FY 1995 by Pacific Northwest Laboratory in developing and optimizing analysis techniques for identifying organics present in Hanford waste tanks. The main focus was to provide a means for rapidly obtaining the most useful information concerning the organics present in tank waste, with minimal sample handling and with minimal waste generation. One major focus has been to optimize analytical methods for organic speciation. Select methods, such as atmospheric pressure chemical ionization mass spectrometry and matrix-assisted laser desorption/ionization mass spectrometry, were developed to increase the speciation capabilities, while minimizing sample handling. A capillary electrophoresis method was developed to improve separation capabilities while minimizing additional waste generation. In addition, considerable emphasis has been placed on developing a rapid screening tool, based on Raman and infrared spectroscopy, for determining organic functional group content when complete organic speciation is not required. This capability would allow for a cost-effective means to screen the waste tanks to identify tanks that require more specialized and complete organic speciation to determine tank safety.

  16. moocRP: Enabling Open Learning Analytics with an Open Source Platform for Data Distribution, Analysis, and Visualization

    ERIC Educational Resources Information Center

    Pardos, Zachary A.; Whyte, Anthony; Kao, Kevin

    2016-01-01

    In this paper, we address issues of transparency, modularity, and privacy with the introduction of an open source, web-based data repository and analysis tool tailored to the Massive Open Online Course community. The tool integrates data request/authorization and distribution workflow features as well as provides a simple analytics module upload…

  17. An analytical solution of the Fokker-Planck equation in the phase-locked loop transient analysis

    NASA Technical Reports Server (NTRS)

    Zhang, Weijian

    1987-01-01

    A probabilistic approach is used to obtain an analytical solution to the Fokker-Planck equation used in the transient analysis of the phase error process of a first-order phase-locked loop. The solution procedure, which is based on the Girsanov transformation, is described.
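
    For orientation, the equation in question has the generic Fokker-Planck form for the phase-error density p(phi, t); the drift shown for a noise-driven first-order loop is the common textbook model and is stated here as an assumption rather than as the paper's exact formulation:

        \frac{\partial p}{\partial t}
          = -\frac{\partial}{\partial \phi}\bigl[A(\phi)\,p\bigr]
            + \tfrac{1}{2}\,\frac{\partial^2}{\partial \phi^2}\bigl[B\,p\bigr],
        \qquad
        A(\phi) = \Delta\omega - K\sin\phi ,

    with B a constant diffusion coefficient set by the input noise level and loop gain. The Girsanov transformation referred to above changes the probability measure of the associated diffusion d\phi = A(\phi)\,dt + \sqrt{B}\,dW so that the drift term is removed, allowing the transition density to be written as an expectation of an exponential functional over Brownian paths.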

  18. A Systematic Mapping on the Learning Analytics Field and Its Analysis in the Massive Open Online Courses Context

    ERIC Educational Resources Information Center

    Moissa, Barbara; Gasparini, Isabela; Kemczinski, Avanilde

    2015-01-01

    Learning Analytics (LA) is a field that aims to optimize learning through the study of dynamical processes occurring in the students' context. It covers the measurement, collection, analysis and reporting of data about students and their contexts. This study aims at surveying existing research on LA to identify approaches, topics, and needs for…

  19. Injury-Related Consequences of Alcohol Misuse among Injured Patients Who Received SBI for Alcohol: A Latent Class Analysis

    PubMed Central

    Cochran, Gerald; Field, Craig; Caetano, Raul

    2013-01-01

    Background Screening and brief alcohol intervention has demonstrated efficacy in improving drinking and other risk behaviors for some patient populations. However, it is not clear that brief interventions are helpful to all injured patients who drink at risk levels. This paper identifies latent classes of intervention recipients based on injury-related consequences and risks of alcohol misuse and then determines which profiles experienced the greatest improvements in drinking. Methods A secondary analysis was conducted using data from injured patients (N=737) who reported heavy drinking and received a brief alcohol intervention in a Level-1 trauma center. Latent class analysis was used to determine patient profiles, and seven indicators commonly associated with alcohol-related injury from the Short Inventory of Problems+6 were used to determine the latent class measurement model. Covariates were regressed onto the model to assess factors related to class membership, and drinking outcomes were analyzed to examine improvements in drinking. Results Five classes emerged from the data. The classes that reported the greatest improvements in drinking following discharge were those characterized by multiple alcohol-related risks and those characterized by a history of alcohol-related accidents and injuries. Attributing the current injury to drinking was a significant predictor of class membership among those classes that reported higher levels of improvement. Conclusions This study provides tentative evidence that subclasses exist among heavy drinking injured patients who received a brief intervention in a Level-1 trauma center, and some subclasses experience greater drinking improvements than others. Further research is required to substantiate the findings of this secondary analysis. PMID:24821352

  20. Total cyanide analysis of tank core samples: Analytical results and supporting investigations. Revision 1

    SciTech Connect

    Pool, K.H.

    1994-03-01

    The potential for a ferrocyanide explosion in Hanford site single-shell waste storage tanks (SSTs) poses a serious safety concern. This potential danger developed in the 1950s when 137Cs was scavenged during the reprocessing of uranium recovery process waste by co-precipitating it along with sodium in nickel ferrocyanide salt. Sodium or potassium ferrocyanide and nickel sulfate were added to the liquid waste stored in SSTs. The tank storage space resulting from the scavenging process was subsequently used to store other waste types. Ferrocyanide salts in combination with oxidizing agents, such as nitrate and nitrite, are known to explode when key parameters (temperature, water content, oxidant concentration, and fuel [cyanide]) are in place. Therefore, reliable total cyanide analysis data for actual SST materials are required to address the safety issue. Accepted cyanide analysis procedures do not yield reliable results for samples containing nickel ferrocyanide materials because the compounds are insoluble in acidic media. Analytical chemists at Pacific Northwest Laboratory (PNL) have developed a modified microdistillation procedure (see below) for analyzing total cyanide in waste tank matrices containing nickel ferrocyanide materials. Pacific Northwest Laboratory analyzed samples from Hanford Waste Tank 241-C-112 cores 34, 35, and 36 for total cyanide content using technical procedure PNL-ALO-285, "Total Cyanide by Remote Microdistillation and Argentometric Titration," Rev. 0. This report summarizes the results of these analyses along with supporting quality control data and, in addition, summarizes the results of the test to check the efficacy of sodium nickel ferrocyanide solubilization from an actual core sample by aqueous EDTA/en to verify that nickel ferrocyanide compounds were quantitatively solubilized before distillation.

  1. Evidence for the contemporary magmatic system beneath Long Valley Caldera from local earthquake tomography and receiver function analysis

    NASA Astrophysics Data System (ADS)

    Seccia, D.; Chiarabba, C.; de Gori, P.; Bianchi, I.; Hill, D. P.

    2011-12-01

    We present a new P wave and S wave velocity model for the upper crust beneath Long Valley Caldera obtained using local earthquake tomography and receiver function analysis. We computed the tomographic model using both a graded inversion scheme and a traditional approach. We complement the tomographic Vp model with a teleseismic receiver function model based on data from broadband seismic stations (MLAC and MKV) located on the SE and SW margins of the resurgent dome inside the caldera. The inversions resolve (1) a shallow, high-velocity P wave anomaly associated with the structural uplift of a resurgent dome; (2) an elongated, WNW striking low-velocity anomaly (8%-10 % reduction in Vp) at a depth of 6 km (4 km below mean sea level) beneath the southern section of the resurgent dome; and (3) a broad, low-velocity volume (˜5% reduction in Vp and as much as 40% reduction in Vs) in the depth interval 8-14 km (6-12 km below mean sea level) beneath the central section of the caldera. The two low-velocity volumes partially overlap the geodetically inferred inflation sources that drove uplift of the resurgent dome associated with caldera unrest between 1980 and 2000, and they likely reflect the ascent path for magma or magmatic fluids into the upper crust beneath the caldera.

  2. Evidence for the contemporary magmatic system beneath Long Valley Caldera from local earthquake tomography and receiver function analysis

    USGS Publications Warehouse

    Seccia, D.; Chiarabba, C.; De Gori, P.; Bianchi, I.; Hill, D.P.

    2011-01-01

    We present a new P wave and S wave velocity model for the upper crust beneath Long Valley Caldera obtained using local earthquake tomography and receiver function analysis. We computed the tomographic model using both a graded inversion scheme and a traditional approach. We complement the tomographic Vp model with a teleseismic receiver function model based on data from broadband seismic stations (MLAC and MKV) located on the SE and SW margins of the resurgent dome inside the caldera. The inversions resolve (1) a shallow, high-velocity P wave anomaly associated with the structural uplift of a resurgent dome; (2) an elongated, WNW striking low-velocity anomaly (8%–10% reduction in Vp) at a depth of 6 km (4 km below mean sea level) beneath the southern section of the resurgent dome; and (3) a broad, low-velocity volume (~5% reduction in Vp and as much as 40% reduction in Vs) in the depth interval 8–14 km (6–12 km below mean sea level) beneath the central section of the caldera. The two low-velocity volumes partially overlap the geodetically inferred inflation sources that drove uplift of the resurgent dome associated with caldera unrest between 1980 and 2000, and they likely reflect the ascent path for magma or magmatic fluids into the upper crust beneath the caldera.

  3. Analytical Analysis and Case Study of Transient Behavior of Inrush Current in Power Transformer for Designing of Efficient Circuit Breakers

    NASA Astrophysics Data System (ADS)

    Harmanpreet, Singh, Sukhwinder; Kumar, Ashok; Kaur, Parneet

    2010-11-01

    Stability and security are key concerns in electrical power systems, and transformer protection is a major issue for system operation. Many mal-trip cases in transformer protection are caused by inrush current problems. The phenomenon of transformer inrush current has been discussed in many papers since 1958. In this paper, an analytical analysis of the inrush current in a transformer switched onto dc and ac supplies is carried out. This analysis will help in the design of circuit breakers for better performance.
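
    The mechanism under analysis can be summarized with the standard textbook flux expression for a transformer energized at voltage phase angle \theta with residual flux \phi_r; this is a generic illustration, not the paper's specific derivation:

        v(t) = V_m \sin(\omega t + \theta), \qquad
        N\,\frac{d\phi}{dt} = v(t)
        \;\Rightarrow\;
        \phi(t) = \phi_r + \frac{V_m}{N\omega}\bigl[\cos\theta - \cos(\omega t + \theta)\bigr].

    For switching near \theta = 0 with residual flux of the same polarity as the flux build-up, the peak flux approaches \phi_r + 2\phi_m (with \phi_m = V_m / N\omega), driving the core deep into saturation; the large magnetizing current needed to support this flux is the inrush transient that circuit breakers and protection settings must ride through.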

  4. Cost Analysis of an Air Brayton Receiver for a Solar Thermal Electric Power System in Selected Annual Production Volumes

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Pioneer Engineering and Manufacturing Company estimated the cost of manufacturing an Air Brayton Receiver for a Solar Thermal Electric Power System as designed by the AiResearch Division of the Garrett Corporation. Production costs were estimated at annual volumes of 100; 1,000; 5,000; 10,000; 50,000; 100,000; and 1,000,000 units. These costs included direct labor, direct material, and manufacturing burden. A make-or-buy analysis was made for each part at each volume. At high volumes, special fabrication concepts were used to reduce operation cycle times. All costs were estimated at an assumed 100% plant capacity. Economic feasibility determined the level of production at which special concepts were introduced. Estimated costs were based on the economics of the last half of 1980. Tooling and capital equipment costs were estimated for each volume. Infrastructure and personnel requirements were also estimated.

  5. Nonlinear Analyte Concentration Gradients for One-Step Kinetic Analysis Employing Optical Microring Resonators

    PubMed Central

    Marty, Michael T.; Kuhnline Sloan, Courtney D.; Bailey, Ryan C.; Sligar, Stephen G.

    2012-01-01

    Conventional methods to probe the binding kinetics of macromolecules at biosensor surfaces employ a stepwise titration of analyte concentrations and measure the association and dissociation to the immobilized ligand at each concentration level. It has previously been shown that kinetic rates can be measured in a single step by monitoring binding as the analyte concentration increases over time in a linear gradient. We report here the application of nonlinear analyte concentration gradients for determining kinetic rates and equilibrium binding affinities in a single experiment. A versatile nonlinear gradient maker is presented, which is easily applied to microfluidic systems. Simulations validate that accurate kinetic rates can be extracted for a wide range of association and dissociation rates, gradient slopes and curvatures, and with models for mass transport. The nonlinear analyte gradient method is demonstrated with a silicon photonic microring resonator platform to measure prostate specific antigen-antibody binding kinetics. PMID:22686186
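
    A minimal sketch of the underlying model is shown below: 1:1 Langmuir binding driven by a time-varying analyte concentration C(t), here a quadratic ramp; the rate constants and gradient shape are hypothetical, and fitting ka and kd to a measured resonance shift would be done with a standard nonlinear least-squares routine:

        # Integrate dR/dt = ka*C(t)*(Rmax - R) - kd*R for a nonlinear concentration ramp.
        import numpy as np
        from scipy.integrate import solve_ivp

        ka, kd, Rmax = 1.0e5, 1.0e-3, 100.0      # 1/(M s), 1/s, response units (hypothetical)
        C_end, t_end = 50e-9, 600.0              # gradient reaches 50 nM after 600 s

        def conc(t):
            return C_end * (t / t_end) ** 2      # nonlinear (quadratic) concentration gradient

        def rhs(t, y):
            return [ka * conc(t) * (Rmax - y[0]) - kd * y[0]]

        sol = solve_ivp(rhs, (0.0, t_end), [0.0], t_eval=np.linspace(0.0, t_end, 300))
        print("response at end of gradient:", round(float(sol.y[0, -1]), 2))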

  6. Automated Ground-Water Sampling and Analysis of Hexavalent Chromium using a “Universal” Sampling/Analytical System

    PubMed Central

    Burge, Scott R.; Hoffman, Dave A.; Hartman, Mary J.; Venedam, Richard J.

    2005-01-01

    The capabilities of a “universal platform” for the deployment of analytical sensors in the field for long-term monitoring of environmental contaminants were expanded in this investigation. The platform was previously used to monitor trichloroethene in monitoring wells and at groundwater treatment systems (1,2). The platform was interfaced with chromium (VI) and conductivity analytical systems to monitor shallow wells installed adjacent to the Columbia River at the 100-D Area of the Hanford Site, Washington. A groundwater plume of hexavalent chromium is discharging into the Columbia River through the gravel beds used by spawning salmon. The sampling/analytical platform was deployed for the purpose of collecting data on subsurface hexavalent chromium concentrations at more frequent intervals than was possible with the previous sampling and analysis methods employed at the Site.

  7. Analytical and Experimental Modal Analysis for Operational Validation and Calibration of a Miniature Silicon Sensor

    NASA Astrophysics Data System (ADS)

    Zhang, P. Q.; Tang, X. L.; Shan, B. X.; Brandon, J. A.; Kwan, A. S. K.

    1998-07-01

    The development of micromechanical sensors poses new challenges in design, calibration and operation. The paper reports a study where analytical and experimental techniques are applied to a prototype sensor. Analytically, a substructuring method is used together with a proprietary finite element package. Experimentally, novel excitation and response transducers are used to provide input data for specialised time domain identification algorithms. The resulting comparisons provide confidence in the combination of techniques used in the study.

  8. Integrated Data Collection Analysis (IDCA) Program — KClO3 (as received)/Icing Sugar

    SciTech Connect

    Sandstrom, Mary M.; Brown, Geoffrey W.; Preston, Daniel N.; Pollard, Colin J.; Warner, Kirstin F.; Sorenson, Daniel N.; Remmers, Daniel L.; Shelley, Timothy J.; Reyes, Jose A.; Hsu, Peter C.; Whipple, Richard E.; Reynolds, John G.

    2011-05-23

    The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of a mixture of KClO3, as received from the manufacturer, mixed with icing sugar sized through a 100-mesh sieve—the KClO3/icing sugar (AR) mixture. This material was selected because of the challenge of performing SSST testing of a mixture of two solid materials. The mixture was found to: 1) be more sensitive to impact than RDX, similar to PETN; 2) be the same as or less sensitive to friction than PETN; and 3) be less sensitive to spark than RDX. The thermal analysis showed that the mixture has thermal stability similar to RDX and is perhaps more energetic upon decomposition, but variable results indicate sampling issues. Compared to its 100-mesh sieved counterpart, the KClO3/icing sugar (-100) mixture, the AR mixture was found to be about the same in sensitivity towards impact, friction, and ESD.

  9. Real-time analysis for intensive care: development and deployment of the artemis analytic system.

    PubMed

    Blount, Marion; Ebling, Maria R; Eklund, J Mikael; James, Andrew G; McGregor, Carolyn; Percival, Nathan; Smith, Kathleen P; Sow, Daby

    2010-01-01

    The lives of many thousands of children born premature or ill at term around the world have been saved by those who work within neonatal intensive care units (NICUs). Modern-day neonatologists, together with nursing staff and other specialists within this domain, enjoy modern technologies for activities such as financial transactions, online purchasing, music, and video on demand. Yet, when they move into their workspace, in many cases, they are supported by nearly the same technology they used 20 years ago. Medical devices provide visual displays of vital signs through physiological streams such as electrocardiogram (ECG), heart rate, blood oxygen saturation (SpO2), and respiratory rate. Electronic health record initiatives around the world provide an environment for the electronic management of medical records, but they fail to support the high-frequency interpretation of streaming physiological data. We have taken a collaborative research approach to address this need to provide a flexible platform for the real-time online analysis of patients' data streams to detect medically significant conditions that precede the onset of medical complications. The platform supports automated or clinician-driven knowledge discovery to discover new relationships between physiological data stream events and latent medical conditions as well as to refine existing analytics. Patients benefit from the system because earlier detection of signs of the medical conditions may lead to earlier intervention that may potentially lead to improved patient outcomes and reduced length of stays. The clinician benefits from a decision support tool that provides insight into multiple streams of data that are too voluminous to assess with traditional methods. The remainder of this article summarizes the strengths of our research collaboration and the resulting environment known as Artemis, which is currently being piloted within the NICU of The Hospital for Sick Children (SickKids) in Toronto.

  10. An Open-Source Analytical Platform for Analysis of C. elegans Swimming Induced Paralysis

    PubMed Central

    Hardaway, J. Andrew; Wang, Jing; Fleming, Paul A.; Fleming, Katherine A.; Whitaker, Sarah M.; Nackenoff, Alex; Snarrenberg, Chelsea L.; Hardie, Shannon L.; Zhang, Bing; Blakely, Randy D.

    2014-01-01

    Background The nematode Caenorhabditis elegans offers great power for the identification and characterization of genes that regulate behavior. In support of this effort, analytical methods are required that provide dimensional analyses of subcomponents of behavior. Previously, we demonstrated that loss of the presynaptic dopamine (DA) transporter, dat-1, evokes DA-dependent Swimming Induced Paralysis (Swip) (Mcdonald et al. 2007), a behavior compatible with forward genetic screens (Hardaway et al. 2012). New Method Here, we detail the development and implementation of SwimR, a set of tools that provide for an automated, kinetic analysis of C. elegans Swip. SwimR relies on open source programs that can be freely implemented and modified. Results We show that SwimR can display time-dependent alterations of swimming behavior induced by drug treatment, illustrating this capacity with the dat-1 blocker and tricyclic antidepressant imipramine (IMI). We demonstrate the capacity of SwimR to extract multiple kinetic parameters that are impractical to obtain in manual assays. Comparison with Existing Methods Standard measurement of C. elegans swimming utilizes manual assessments of the number of animals exhibiting swimming versus paralysis. Our approach deconstructs the time course and rates of movement in an automated fashion, offering a significant increase in the information that can be obtained from swimming behavior. Conclusions The SwimR platform is a powerful tool for the deconstruction of worm thrashing behavior in the context of both genetic and pharmacological manipulations that can be used to segregate pathways that underlie nematode swimming mechanics. PMID:24792527

  12. Analysis of hydroponic fertilizer matrixes for perchlorate: comparison of analytical techniques.

    PubMed

    Collette, Timothy W; Williams, Ted L; Urbansky, Edward T; Magnuson, Matthew L; Hebert, Gretchen N; Strauss, Steven H

    2003-01-01

    Seven retail hydroponic nitrate fertilizer products, two liquid and five solid, were comparatively analyzed for the perchlorate anion (ClO4-) by ion chromatography (IC) with suppressed conductivity detection, complexation electrospray ionization mass spectrometry (cESI-MS), normal Raman spectroscopy, and infrared spectroscopy using an attenuated total reflectance crystal (ATR-FTIR) coated with a thin film of an organometallic ion-exchange compound. Three of the five solid products were found by all techniques to contain perchlorate at the level of approximately 100-350 mg kg(-1). The remaining products did not contain perchlorate above the detection level of any of the techniques. Comparative analysis using several analytical techniques that depend on different properties of perchlorate allow for a high degree of certainty in both the qualitative and quantitative determinations. This proved particularly useful for these samples, due to the complexity of the matrix. Analyses of this type, including multiple spectroscopic confirmations, may also be useful for other complicated matrixes (e.g., biological samples) or in forensic/regulatory frameworks where data are likely to be challenged. While the source of perchlorate in these hydroponic products is not known, the perchlorate-to-nitrate concentration ratio (w/w) in the aqueous extracts is generally consistent with the historical weight percent of water soluble components in caliche, a nitrate-bearing ore found predominantly in Chile. This ore, which is the only well-established natural source of perchlorate, is mined and used, albeit minimally, as a nitrogen source in some fertilizer products.

  13. HPTLC Plate Blotting for Liquid Microjunction Surface Sampling Probe Mass Spectrometric Analysis of Analytes Separated on a Wettable Phase Plate

    SciTech Connect

    Walworth, Matthew J; Stankovich, Joseph J; Van Berkel, Gary J; Schulz, Michael; Minarik, Susanne

    2012-01-01

    A blotting method that transfers analytes separated on wettable HPTLC plates to a hydrophobic reversed-phase C8 HPTLC plate suitable for analysis with a liquid microjunction surface sampling probe electrospray ionization mass spectrometry system was described and demonstrated. The simple blotting procedure transfers the analyte from the wettable plate to the topmost surface of a rigidly backed, easy-to-mount hydrophobic substrate that already has been proven viable for analysis by this sampling probe/mass spectrometry system. The utility of the approach was demonstrated by the analysis of a four-component peptide mixture originally separated on a ProteoChrom HPTLC cellulose sheet and then blotted to the reversed-phase HPTLC plate.

  14. A lunar-based analytical laboratory and contamination problems in analysis of Moon and Mars samples

    NASA Astrophysics Data System (ADS)

    Gehrke, Charles W.

    1997-07-01

    A summary follows of our experiences and techniques used in the analysis of samples from Apollo Missions 11 to 17. The studies were conducted at the Ames Research Center, Moffett Field, CA, the University of Missouri, Columbia, MO, and the University of Maryland, College Park, MD, 1969-1974. Our search was directed to water-extractable compounds with emphasis on amino acids. Gas chromatography, ion-exchange chromatography, and gas chromatography combined with mass spectrometry were used for the analysis. It is our conclusion that amino acids are not present in the lunar regolith above the background levels of our investigation (ca. 1-3 ng/g). The scientific debate has become heated over whether primitive life existed on Mars 3.6 billion years ago, as reported by the NASA-Stanford team led by David McKay. Mars is destined to receive humans early in the 21st century, preceded by many international missions to Space Station Freedom and robotic missions to the Moon and Mars. First, we must 'learn to live in space'. The Moon presents a base that provides the opportunities and challenges to assemble international, interdisciplinary scientific teams and partners from many disciplines to make the next step before human exploration of Mars and the search for evidence in Martian soil and in samples returned to Earth laboratories. The experience gained in Moon sample analysis will be useful in Mars exploration and returned-sample study. Sensitivity at the nanogram/gram level and selectivity of analysis are highly essential. As these figures show, contamination of samples is a most serious problem. However, with the use of ultraclean techniques in a class 100 clean room, contamination can be avoided. A speck of dust, a tiny fragment of cigarette smoke, a particle of dandruff, a droplet of saliva, all can make your results questionable. In addition, the extraction of life molecules such as amino acids from the lunar samples was a difficult process and I am sure the same difficulties

  15. Evaluation of pollutant loads from stormwater BMPs to receiving water using load frequency curves with uncertainty analysis.

    PubMed

    Park, Daeryong; Roesner, Larry A

    2012-12-15

    This study examined pollutant loads released to receiving water from a typical urban watershed in the Los Angeles (LA) Basin of California by applying a best management practice (BMP) performance model that includes uncertainty. This BMP performance model uses the k-C model and incorporates uncertainty analysis and the first-order second-moment (FOSM) method to assess the effectiveness of BMPs for removing stormwater pollutants. Uncertainties were considered for the influent event mean concentration (EMC) and the areal removal rate constant of the k-C model. The storage, treatment, overflow and runoff model (STORM) was used to simulate the flow volume from the watershed, the bypass flow volume, and the flow volume that passes through the BMP. Detention basins and total suspended solids (TSS) were chosen as representatives of stormwater BMPs and pollutants, respectively. This paper applies load frequency curves (LFCs), which replace the exceedance percentage with an exceedance frequency, as an alternative to load duration curves (LDCs) for evaluating the effectiveness of BMPs. An evaluation method based on uncertainty analysis is suggested because it applies a water quality standard exceedance based on frequency and magnitude. As a result, the incorporation of uncertainty in the estimates of pollutant loads can assist stormwater managers in determining the degree of total maximum daily load (TMDL) compliance that could be expected from a given BMP in a watershed.
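
    As a minimal sketch of the two ingredients described above, the snippet below evaluates a first-order (k-C*-type) detention-basin removal model and propagates input uncertainty with FOSM; every number is a hypothetical placeholder rather than the LA Basin calibration:

        import numpy as np

        def effluent_tss(c_in, k, c_star=20.0, area=5000.0, q=0.01):
            # k-C*-type model: C_out = C* + (C_in - C*) * exp(-k * A / Q).
            # c_in, c_star in mg/L; k in m/yr; area in m^2; q in m^3/s.
            q_annual = q * 3600.0 * 24.0 * 365.0          # m^3/yr
            return c_star + (c_in - c_star) * np.exp(-k * area / q_annual)

        mu = np.array([120.0, 30.0])                      # mean influent EMC (mg/L), mean k (m/yr)
        sd = np.array([40.0, 10.0])                       # their standard deviations

        f0 = effluent_tss(*mu)
        grad = np.empty(2)
        for i in range(2):                                # numerical first-order sensitivities
            d = np.zeros(2)
            d[i] = 1e-4 * mu[i]
            grad[i] = (effluent_tss(*(mu + d)) - effluent_tss(*(mu - d))) / (2.0 * d[i])

        var = np.sum((grad * sd) ** 2)                    # FOSM: Var[f] ~ sum (df/dx_i)^2 * s_i^2
        print(f"effluent TSS ~ {f0:.1f} +/- {np.sqrt(var):.1f} mg/L (FOSM, independent inputs)")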

  16. Radiation receiver

    DOEpatents

    Hunt, Arlon J.

    1983-01-01

    The apparatus for collecting radiant energy and converting same to alternate energy form includes a housing having an interior space and a radiation transparent window allowing, for example, solar radiation to be received in the interior space of the housing. Means are provided for passing a stream of fluid past said window and for injecting radiation absorbent particles in said fluid stream. The particles absorb the radiation and because of their very large surface area, quickly release the heat to the surrounding fluid stream. The fluid stream particle mixture is heated until the particles vaporize. The fluid stream is then allowed to expand in, for example, a gas turbine to produce mechanical energy. In an aspect of the present invention properly sized particles need not be vaporized prior to the entrance of the fluid stream into the turbine, as the particles will not damage the turbine blades. In yet another aspect of the invention, conventional fuel injectors are provided to inject fuel into the fluid stream to maintain the proper temperature and pressure of the fluid stream should the source of radiant energy be interrupted. In yet another aspect of the invention, an apparatus is provided which includes means for providing a hot fluid stream having hot particles dispersed therein which can radiate energy, means for providing a cooler fluid stream having cooler particles dispersed therein, which particles can absorb radiant energy and means for passing the hot fluid stream adjacent the cooler fluid stream to warm the cooler fluid and cooler particles by the radiation from the hot fluid and hot particles.

  17. Radiation receiver

    DOEpatents

    Hunt, A.J.

    1983-09-13

    The apparatus for collecting radiant energy and converting same to alternate energy form includes a housing having an interior space and a radiation transparent window allowing, for example, solar radiation to be received in the interior space of the housing. Means are provided for passing a stream of fluid past said window and for injecting radiation absorbent particles in said fluid stream. The particles absorb the radiation and because of their very large surface area, quickly release the heat to the surrounding fluid stream. The fluid stream particle mixture is heated until the particles vaporize. The fluid stream is then allowed to expand in, for example, a gas turbine to produce mechanical energy. In an aspect of the present invention properly sized particles need not be vaporized prior to the entrance of the fluid stream into the turbine, as the particles will not damage the turbine blades. In yet another aspect of the invention, conventional fuel injectors are provided to inject fuel into the fluid stream to maintain the proper temperature and pressure of the fluid stream should the source of radiant energy be interrupted. In yet another aspect of the invention, an apparatus is provided which includes means for providing a hot fluid stream having hot particles dispersed therein which can radiate energy, means for providing a cooler fluid stream having cooler particles dispersed therein, which particles can absorb radiant energy and means for passing the hot fluid stream adjacent the cooler fluid stream to warm the cooler fluid and cooler particles by the radiation from the hot fluid and hot particles. 5 figs.

  18. A minireview of analytical methods for the geographical origin analysis of teas (Camellia sinensis).

    PubMed

    Ye, N S

    2012-01-01

    The chemical composition of tea leaves is influenced by the growing environment, and the contents of these components are related to the quality of the tea. Determining the concentrations of these chemical constituents can help predict the ranking of teas and indicate their geographical origins. This overview concerns an investigation of analytical methods that are being used for the determination of the geographical origin of tea. The analytical approaches are subdivided into three groups: spectroscopic techniques, chromatographic techniques, and other techniques. The advantages, drawbacks, and reported applications concerning geographical authenticity are discussed.

  19. Policy-Making Theory as an Analytical Framework in Policy Analysis: Implications for Research Design and Professional Advocacy.

    PubMed

    Sheldon, Michael R

    2016-01-01

    Policy studies are a recent addition to the American Physical Therapy Association's Research Agenda and are critical to our understanding of various federal, state, local, and organizational policies on the provision of physical therapist services across the continuum of care. Policy analyses that help to advance the profession's various policy agendas will require relevant theoretical frameworks to be credible. The purpose of this perspective article is to: (1) demonstrate the use of a policy-making theory as an analytical framework in a policy analysis and (2) discuss how sound policy analysis can assist physical therapists in becoming more effective change agents, policy advocates, and partners with other relevant stakeholder groups. An exploratory study of state agency policy responses to address work-related musculoskeletal disorders is provided as a contemporary example to illustrate key points and to demonstrate the importance of selecting a relevant analytical framework based on the context of the policy issue under investigation.

  1. Blade loss transient dynamics analysis, volume 2. Task 2: Theoretical and analytical development. Task 3: Experimental verification

    NASA Technical Reports Server (NTRS)

    Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.

    1981-01-01

    The component element method was used to develop a transient dynamic analysis computer program which is essentially based on modal synthesis combined with a central finite-difference numerical integration scheme. The methodology leads to a modular or building-block technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program's logical structure, and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.
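
    The central-difference core of such a scheme is easy to illustrate on a toy model; the 2-DOF matrices and the suddenly applied unbalance-like force below are hypothetical placeholders, not the TETRA engine model:

        # Explicit central-difference integration of M x'' + C x' + K x = f(t).
        import numpy as np

        M = np.diag([10.0, 5.0])                                 # kg
        K = np.array([[4.0e6, -1.0e6], [-1.0e6, 2.0e6]])         # N/m
        C = 0.002 * K                                            # light stiffness-proportional damping

        dt, n = 1.0e-5, 20000                                    # step well below 2/omega_max
        x_prev = np.zeros(2)
        x = np.zeros(2)
        Minv = np.linalg.inv(M)
        peak = 0.0
        for i in range(n):
            t = i * dt
            # Unbalance-like forcing switched on at t = 0.01 s (a stand-in for blade loss).
            f = np.array([50.0 * np.sin(2.0 * np.pi * 200.0 * t), 0.0]) if t > 0.01 else np.zeros(2)
            acc = Minv @ (f - C @ ((x - x_prev) / dt) - K @ x)
            x_next = 2.0 * x - x_prev + dt * dt * acc            # central-difference update
            x_prev, x = x, x_next
            peak = max(peak, abs(x[0]))
        print("peak displacement at station 1 (m):", peak)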

  2. Analysis of the Essential Nutrient Strontium in Marine Aquariums by Atomic Absorption Spectroscopy: An Undergraduate Analytical Chemistry Laboratory Exercise

    NASA Astrophysics Data System (ADS)

    Gilles de Pelichy, Laurent D.; Adam, Carl; Smith, Eugene T.

    1997-10-01

    An undergraduate atomic absorption spectroscopy (AAS) laboratory experiment is presented involving the analysis of the essential nutrient strontium in a real-life sample, sea water. The quantitative analysis of strontium in sea water is a problem well suited for an undergraduate analytical chemistry laboratory. Sea water contains numerous components which prevent the direct quantitative determination of strontium. Students learn first hand about the role of interferences in analytical measurements, and about the method of standard addition which is used to minimize these effects. This laboratory exercise also introduces undergraduate students to practical problems associated with AAS. We encourage students as a part of this experiment to collect and analyze marine water samples from local pet shops.
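
    The standard-addition arithmetic the students carry out reduces to a linear fit and an extrapolation to the x-intercept; the readings below are invented for illustration only:

        import numpy as np

        added = np.array([0.0, 2.0, 4.0, 6.0, 8.0])      # mg/L Sr added to equal aliquots
        absorbance = np.array([0.105, 0.187, 0.272, 0.351, 0.437])

        slope, intercept = np.polyfit(added, absorbance, 1)
        c_unknown = intercept / slope                    # mg/L Sr in the prepared aliquot
        print(f"Sr in prepared sample: {c_unknown:.2f} mg/L "
              "(multiply by the dilution factor to recover the sea water concentration)")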

  3. GeneAnalytics: An Integrative Gene Set Analysis Tool for Next Generation Sequencing, RNAseq and Microarray Data

    PubMed Central

    Ben-Ari Fuchs, Shani; Lieder, Iris; Mazor, Yaron; Buzhor, Ella; Kaplan, Sergey; Bogoch, Yoel; Plaschkes, Inbar; Shitrit, Alina; Rappaport, Noa; Kohn, Asher; Edgar, Ron; Shenhav, Liraz; Safran, Marilyn; Lancet, Doron; Guan-Golan, Yaron; Warshawsky, David; Shtrichman, Ronit

    2016-01-01

    Abstract Postgenomics data are produced in large volumes by life sciences and clinical applications of novel omics diagnostics and therapeutics for precision medicine. To move from “data-to-knowledge-to-innovation,” a crucial missing step in the current era is, however, our limited understanding of biological and clinical contexts associated with data. Prominent among the emerging remedies to this challenge are the gene set enrichment tools. This study reports on GeneAnalytics™ (geneanalytics.genecards.org), a comprehensive and easy-to-apply gene set analysis tool for rapid contextualization of expression patterns and functional signatures embedded in the postgenomics Big Data domains, such as Next Generation Sequencing (NGS), RNAseq, and microarray experiments. GeneAnalytics' differentiating features include in-depth evidence-based scoring algorithms, an intuitive user interface and proprietary unified data. GeneAnalytics employs the LifeMap Science's GeneCards suite, including the GeneCards®—the human gene database; the MalaCards—the human diseases database; and the PathCards—the biological pathways database. Expression-based analysis in GeneAnalytics relies on the LifeMap Discovery®—the embryonic development and stem cells database, which includes manually curated expression data for normal and diseased tissues, enabling advanced matching algorithm for gene–tissue association. This assists in evaluating differentiation protocols and discovering biomarkers for tissues and cells. Results are directly linked to gene, disease, or cell “cards” in the GeneCards suite. Future developments aim to enhance the GeneAnalytics algorithm as well as visualizations, employing varied graphical display items. Such attributes make GeneAnalytics a broadly applicable postgenomics data analyses and interpretation tool for translation of data to knowledge-based innovation in various Big Data fields such as precision medicine, ecogenomics, nutrigenomics

  4. GeneAnalytics: An Integrative Gene Set Analysis Tool for Next Generation Sequencing, RNAseq and Microarray Data.

    PubMed

    Ben-Ari Fuchs, Shani; Lieder, Iris; Stelzer, Gil; Mazor, Yaron; Buzhor, Ella; Kaplan, Sergey; Bogoch, Yoel; Plaschkes, Inbar; Shitrit, Alina; Rappaport, Noa; Kohn, Asher; Edgar, Ron; Shenhav, Liraz; Safran, Marilyn; Lancet, Doron; Guan-Golan, Yaron; Warshawsky, David; Shtrichman, Ronit

    2016-03-01

    Postgenomics data are produced in large volumes by life sciences and clinical applications of novel omics diagnostics and therapeutics for precision medicine. To move from "data-to-knowledge-to-innovation," a crucial missing step in the current era is, however, our limited understanding of biological and clinical contexts associated with data. Prominent among the emerging remedies to this challenge are the gene set enrichment tools. This study reports on GeneAnalytics™ ( geneanalytics.genecards.org ), a comprehensive and easy-to-apply gene set analysis tool for rapid contextualization of expression patterns and functional signatures embedded in the postgenomics Big Data domains, such as Next Generation Sequencing (NGS), RNAseq, and microarray experiments. GeneAnalytics' differentiating features include in-depth evidence-based scoring algorithms, an intuitive user interface and proprietary unified data. GeneAnalytics employs the LifeMap Science's GeneCards suite, including the GeneCards®--the human gene database; the MalaCards-the human diseases database; and the PathCards--the biological pathways database. Expression-based analysis in GeneAnalytics relies on the LifeMap Discovery®--the embryonic development and stem cells database, which includes manually curated expression data for normal and diseased tissues, enabling advanced matching algorithm for gene-tissue association. This assists in evaluating differentiation protocols and discovering biomarkers for tissues and cells. Results are directly linked to gene, disease, or cell "cards" in the GeneCards suite. Future developments aim to enhance the GeneAnalytics algorithm as well as visualizations, employing varied graphical display items. Such attributes make GeneAnalytics a broadly applicable postgenomics data analyses and interpretation tool for translation of data to knowledge-based innovation in various Big Data fields such as precision medicine, ecogenomics, nutrigenomics, pharmacogenomics, vaccinomics

  6. Big Data Geo-Analytical Tool Development for Spatial Analysis Uncertainty Visualization and Quantification Needs

    NASA Astrophysics Data System (ADS)

    Rose, K.; Bauer, J. R.; Baker, D. V.

    2015-12-01

    As big data computing capabilities are increasingly paired with spatial analytical tools and approaches, there is a need to ensure that uncertainty associated with the datasets used in these analyses is adequately incorporated and portrayed in results. Often the products of spatial analyses, big data and otherwise, are developed using discontinuous, sparse, and often point-driven data to represent continuous phenomena. Results from these analyses are generally presented without clear explanations of the uncertainty associated with the interpolated values. The Variable Grid Method (VGM) offers users a flexible approach designed for application to a variety of analyses where there is a need to study, evaluate, and analyze spatial trends and patterns while maintaining connection to, and communicating the uncertainty in, the underlying spatial datasets. The VGM outputs a simultaneous visualization representative of the spatial data analyses and a quantification of the underlying uncertainties, which can be calculated using data related to sample density, sample variance, interpolation error, or uncertainty calculated from multiple simulations. In this presentation we will show how we are utilizing Hadoop to store and perform spatial analysis through the development of custom Spark and MapReduce applications that incorporate ESRI Hadoop libraries. The team will present custom 'Big Data' geospatial applications that run on the Hadoop cluster and integrate ESRI ArcMap with the team's probabilistic VGM approach. The VGM-Hadoop tool has been specially built as a multi-step MapReduce application running on the Hadoop cluster for the purpose of data reduction. This reduction is accomplished by generating multi-resolution, non-overlapping, attributed topology that is then further processed using ESRI's geostatistical analyst to convey a probabilistic model of a chosen study region. Finally, we will share our approach for implementation of data reduction and topology generation

  7. Analytical solution of the time-dependent Bloch NMR flow equations: a translational mechanical analysis

    NASA Astrophysics Data System (ADS)

    Awojoyogbe, O. B.

    2004-08-01

    Various biological and physiological properties of living tissue can be studied by means of nuclear magnetic resonance techniques. Unfortunately, the basic physics of extracting the relevant information from the solution of the Bloch nuclear magnetic resonance (NMR) equations to accurately monitor the clinical state of biological systems is still not fully understood. Presently, there are no simple closed solutions known to the Bloch equations for a general RF excitation. Therefore the translational mechanical analysis of the Bloch NMR equations presented in this study, which can be taken as definitions of new functions to be studied in detail, may reveal very important information from which various NMR flow parameters can be derived. Fortunately, many of the most important but hidden applications of blood flow parameters can be revealed without too much difficulty if appropriate mathematical techniques are used to solve the equations. In this study we are concerned with a mathematical study of the laws of NMR physics from the point of view of translational mechanical theory. The important contribution of this study is that solutions to the Bloch NMR flow equations always exist and can be found as accurately as desired. We restrict our attention to cases where the radio frequency field can be treated by simple analytical methods. First we derive a time-dependent second-order non-homogeneous linear differential equation from the Bloch NMR equations in terms of the equilibrium magnetization M0, the RF B1(t) field, and the T1 and T2 relaxation times. Then we develop a general method of solving the differential equation for the cases when RF B1(t) = 0 and when RF B1(t) ≠ 0. This allows us to obtain the intrinsic or natural behavior of the NMR system as well as the response of the system under investigation to a specific external force applied to the system. Specifically, we consider the case where the RF B1 varies harmonically with time. Here the complete
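    As a hedged illustration of the kind of reduction described above (not the paper's derivation, which retains a general time-dependent B1(t) and flow terms), the on-resonance rotating-frame Bloch equations with a constant B1 along x' collapse to a second-order non-homogeneous linear ODE in the transverse magnetization:

```latex
% Rotating-frame Bloch equations (on resonance, constant B_1 along x'):
%   dM_x/dt = -M_x/T_2
%   dM_y/dt =  \gamma B_1 M_z - M_y/T_2
%   dM_z/dt = -\gamma B_1 M_y - (M_z - M_0)/T_1
% Differentiating the M_y equation and eliminating M_z gives
\frac{d^{2}M_y}{dt^{2}}
  + \left(\frac{1}{T_1} + \frac{1}{T_2}\right)\frac{dM_y}{dt}
  + \left(\frac{1}{T_1 T_2} + \gamma^{2}B_1^{2}\right) M_y
  = \frac{\gamma B_1 M_0}{T_1}.
```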

  8. Methods for Integrating Moderation and Mediation: A General Analytical Framework Using Moderated Path Analysis

    ERIC Educational Resources Information Center

    Edwards, Jeffrey R.; Lambert, Lisa Schurer

    2007-01-01

    Studies that combine moderation and mediation are prevalent in basic and applied psychology research. Typically, these studies are framed in terms of moderated mediation or mediated moderation, both of which involve similar analytical approaches. Unfortunately, these approaches have important shortcomings that conceal the nature of the moderated…

  9. Instrumental Analysis of Biodiesel Content in Commercial Diesel Blends: An Experiment for Undergraduate Analytical Chemistry

    ERIC Educational Resources Information Center

    Feng, Z. Vivian; Buchman, Joseph T.

    2012-01-01

    The potential of replacing petroleum fuels with renewable biofuels has drawn significant public interest. Many states have imposed biodiesel mandates or incentives to use commercial biodiesel blends. We present an inquiry-driven experiment where students are given the tasks to gather samples, develop analytical methods using various instrumental…

  10. Publication Bias in Studies of an Applied Behavior-Analytic Intervention: An Initial Analysis

    ERIC Educational Resources Information Center

    Sham, Elyssa; Smith, Tristram

    2014-01-01

    Publication bias arises when studies with favorable results are more likely to be reported than are studies with null findings. If this bias occurs in studies with single-subject experimental designs (SSEDs) on applied behavior-analytic (ABA) interventions, it could lead to exaggerated estimates of intervention effects. Therefore, we conducted an…

  11. Analytic models of ducted turbomachinery tone noise sources. Volume 1: Analysis

    NASA Technical Reports Server (NTRS)

    Clark, T. L.; Ganz, U. W.; Graf, G. A.; Westall, J. S.

    1974-01-01

    The analytic models developed for computing the periodic sound pressure of subsonic fans and compressors in an infinite, hardwall annular duct with uniform flow are described. The basic sound-generating mechanism is the scattering into sound waves of velocity disturbances appearing to the rotor or stator blades as a series of harmonic gusts. The models include component interactions and rotor alone.

  12. Classification of a target analyte in solid mixtures using principal component analysis, support vector machines, and Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    O'Connell, Marie-Louise; Howley, Tom; Ryder, Alan G.; Leger, Marc N.; Madden, Michael G.

    2005-06-01

    The quantitative analysis of illicit materials using Raman spectroscopy is of widespread interest for law enforcement and healthcare applications. One of the difficulties faced when analysing illicit mixtures is the fact that the narcotic can be mixed with many different cutting agents. This obviously complicates the development of quantitative analytical methods. In this work we demonstrate some preliminary efforts to try to account for the wide variety of potential cutting agents, by discriminating between the target substance and a wide range of excipients. Near-infrared Raman spectroscopy (785 nm excitation) was employed to analyse 217 samples, a number of them consisting of a target analyte (acetaminophen) mixed with excipients of different concentrations by weight. The excipients used were sugars (maltose, glucose, lactose, sorbitol), inorganic materials (talcum powder, sodium bicarbonate, magnesium sulphate), and food products (caffeine, flour). The spectral data collected were subjected to a number of pre-treatment statistical methods, including first derivative and normalisation transformations, to make the data more suitable for analysis. Various methods were then used to discriminate the target analyte; these included Principal Component Analysis (PCA), Principal Component Regression (PCR) and Support Vector Machines (SVM).
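    A minimal sketch of this kind of workflow (pre-treatment, dimensionality reduction, SVM classification), assuming the spectra are rows of a NumPy array X with binary labels y marking the presence of the target analyte; the window length, number of components and SVM parameters are illustrative, not those used in the study:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.svm import SVC

def pretreat(X):
    """First-derivative (Savitzky-Golay) plus vector normalisation of each spectrum."""
    d1 = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)
    return d1 / np.linalg.norm(d1, axis=1, keepdims=True)

# X: (n_samples, n_wavenumbers) Raman spectra; y: 1 if target analyte present, else 0
model = make_pipeline(
    FunctionTransformer(pretreat),
    StandardScaler(),
    PCA(n_components=10),        # illustrative number of components
    SVC(kernel="rbf", C=10.0),
)
# scores = cross_val_score(model, X, y, cv=5)   # e.g. 5-fold cross-validated accuracy
```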

  13. Receiver Gain Modulation Circuit

    NASA Technical Reports Server (NTRS)

    Jones, Hollis; Racette, Paul; Walker, David; Gu, Dazhen

    2011-01-01

    A receiver gain modulation circuit (RGMC) was developed that modulates the power gain of the output of a radiometer receiver with a test signal. As the radiometer receiver switches between calibration noise references, the test signal is mixed with the calibrated noise and thus produces an ensemble set of measurements from which ensemble statistical analysis can be used to extract statistical information about the test signal. The RGMC is an enabling technology of the ensemble detector. As a key component for achieving ensemble detection and analysis, the RGMC has broad aeronautical and space applications. The RGMC can be used to test and develop new calibration algorithms, for example, to detect gain anomalies, and/or correct for slow drifts that affect climate-quality measurements over an accelerated time scale. A generalized approach to analyzing radiometer system designs yields a mathematical treatment of noise reference measurements in calibration algorithms. By treating the measurements from the different noise references as ensemble samples of the receiver state, i.e., the receiver gain, a quantitative description of the non-stationary properties of the underlying receiver fluctuations can be derived. Excellent agreement has been obtained between model calculations and radiometric measurements. The mathematical formulation is equivalent to modulating the gain of a stable receiver with an externally generated signal and is the basis for ensemble detection and analysis (EDA). The concept of generating ensemble data sets using an ensemble detector is similar to the ensemble data sets generated as part of ensemble empirical mode decomposition (EEMD), with the exception of a key distinguishing factor. EEMD adds noise to the signal under study, whereas EDA mixes the signal with calibrated noise. It is mixing with calibrated noise that permits the measurement of temporal-functional variability of uncertainty in the underlying process. The RGMC permits the evaluation of EDA by

  14. Bayesian decision analysis as a tool for defining monitoring needs in the field of effects of CSOs on receiving waters.

    PubMed

    Korving, H; Clemens, F

    2002-01-01

    In recent years, decision analysis has become an important technique in many disciplines. It provides a methodology for rational decision-making allowing for uncertainties in the outcome of several possible actions to be undertaken. An example in urban drainage is the situation in which an engineer has to decide upon a major reconstruction of a system in order to prevent pollution of receiving waters due to CSOs. This paper describes the possibilities of Bayesian decision-making in urban drainage. In particular, the utility of monitoring prior to deciding on the reconstruction of a sewer system to reduce CSO emissions is studied. Our concern is with deciding whether a price should be paid for new information and which source of information is the best choice given the expected uncertainties in the outcome. The influence of specific uncertainties (sewer system data and model parameters) on the probability of CSO volumes is shown to be significant. Using Bayes' rule to combine prior impressions with new observations reduces the risks linked with the planning of sewer system reconstructions.
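    A toy value-of-information calculation in the spirit of this framework (the prior probabilities, losses and monitoring-test characteristics below are invented for illustration and are not taken from the paper):

```python
import numpy as np

# States of nature: CSO emissions meet the water-quality target (0) or exceed it (1)
prior = np.array([0.6, 0.4])            # illustrative prior: P(compliant), P(exceeding)

# Loss table: rows = actions (0: do nothing, 1: reconstruct), columns = states
loss = np.array([[0.0, 500.0],          # doing nothing is costly only if emissions exceed
                 [200.0, 200.0]])       # reconstruction costs 200 either way

# Monitoring campaign modelled as an imperfect test: rows = observed outcome
# (0: "looks compliant", 1: "looks exceeding"), columns = true states
likelihood = np.array([[0.8, 0.2],
                       [0.2, 0.8]])

def expected_loss_without_info(prior, loss):
    return min(loss @ prior)                         # best action under the prior alone

def expected_loss_with_info(prior, loss, likelihood):
    total = 0.0
    for obs in range(likelihood.shape[0]):
        p_obs = likelihood[obs] @ prior              # marginal probability of this outcome
        posterior = likelihood[obs] * prior / p_obs  # Bayes' rule
        total += p_obs * min(loss @ posterior)       # act optimally after seeing the data
    return total

evsi = expected_loss_without_info(prior, loss) - expected_loss_with_info(prior, loss, likelihood)
print(f"Expected value of the monitoring campaign: {evsi:.1f} (worth paying for only if it costs less)")
```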

  15. Comparison of the performance of two measures of central adiposity among apparently healthy Nigerians using the receiver operating characteristic analysis

    PubMed Central

    Okafor, Christian Ifedili; Fasanmade, Olufemi; Ofoegbu, Esther; Ohwovoriole, Augustine Efedaye

    2011-01-01

    Objective: To compare the performance of waist circumference (WC) and waist-to-hip ratio (WHR) in predicting the presence of cardiovascular risk factors (hypertension and generalized obesity) in an apparently healthy population. Materials and Methods: We recruited 898 apparently healthy subjects (318 males and 580 females) of the Igbo ethnic group resident in Enugu (urban), Southeast Nigeria. Data collection was done using the World Health Organization Stepwise approach to Surveillance of risk factors (STEPS) instrument. Subjects had their weight, height, waist and hip circumferences, systolic and diastolic blood pressures measured according to the guidelines in the step 2 of STEPS instrument. Generalized obesity and hypertension were defined using body mass index (BMI) and JNC 7 classifications, respectively. Quantitative and qualitative variables were analyzed using t-test and Chi-square analysis, respectively, while the performance of WC and WHR was compared using the Receiver Operating Characteristic (ROC) analysis. P value was set at <0.05. Results: The mean age of the subjects was 48.7 (12.9) years. Central obesity was found in 76.9% and 66.5% of subjects using WHR and WC, respectively. WC had a significantly higher area under the curve (AUC) than WHR in all the cardiovascular risk groups, namely, generalized obesity (AUC = 0.88 vs. 0.62), hypertension alone (AUC = 0.60 vs. 0.53), and both generalized obesity and hypertension (AUC = 0.86 vs. 0.57). Conclusion: WC performed better than WHR in predicting the presence of cardiovascular risk factors. Being a simple index, it can easily be measured in routine clinic settings without the need for calculations or use of cumbersome techniques. PMID:22029004
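    A hedged sketch of this kind of comparison in Python; the study reports the AUCs directly, whereas here the AUC difference is assessed with a simple bootstrap, and the array names (y, wc, whr) are assumptions:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def compare_auc(y, wc, whr, n_boot=2000):
    """Bootstrap comparison of the AUCs of two continuous predictors of a binary outcome."""
    auc_wc, auc_whr = roc_auc_score(y, wc), roc_auc_score(y, whr)
    diffs = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y[idx])) < 2:       # each resample must contain both classes
            continue
        diffs.append(roc_auc_score(y[idx], wc[idx]) - roc_auc_score(y[idx], whr[idx]))
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return auc_wc, auc_whr, (lo, hi)         # AUCs and a 95% CI on their difference

# y: 1 if the risk factor (e.g. hypertension) is present; wc, whr: the two indices (NumPy arrays)
# auc_wc, auc_whr, ci = compare_auc(y, wc, whr)
```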

  16. Responder Analysis of the Effects of Denosumab on Bone Mineral Density in Men Receiving Androgen Deprivation Therapy for Prostate Cancer

    PubMed Central

    Egerdie, Blair; Saad, Fred; Smith, Matthew R; Tammela, Teuvo LJ; Heracek, Jiri; Sieber, Paul; Ke, Chunlei; Leder, Benjamin; Dansey, Roger; Goessl, Carsten

    2013-01-01

    Background Men with prostate cancer are at risk of experiencing accelerated bone loss and fractures as a result of androgen deprivation therapy (ADT). Objective We evaluated the effects of denosumab, a fully human monoclonal antibody against RANKL, on preservation of BMD at 3 key skeletal sites (lumbar spine [LS], femoral neck [FN], and total hip [TH]) and the distal radius at 36 months, both by responder category and by individual responses in a waterfall plot analysis. Design, Setting, and Participants This phase 3, randomized, double-blind study of men with non-metastatic prostate cancer receiving ADT investigated the effects of denosumab on bone mineral density (BMD) and fractures. Patients were treated for 36 months. Intervention Subcutaneous denosumab 60 mg (n=734) or placebo (n=734) every 6 months for up to 36 months. Patients were instructed to take supplemental calcium and vitamin D. Measurements Primary outcome measure: The percentage change from baseline to month 36 in LS, FN, and TH BMD was measured by dual energy x-ray absorptiometry. BMD at the distal 1/3 radius at 36 months was measured in a sub-study of 309 patients. Results and Limitations At 36 months, significantly more patients in the denosumab arm had increases of >3% BMD from baseline at each site studied compared with placebo (LS, 78% vs 17%; TH, 48% vs 6%; FN, 48% vs 13%; distal 1/3 radius, 40% vs 7%). The percentage of denosumab patients with bone loss at all 3 key BMD sites at month 36 was 1%, as opposed to 42% in the placebo arm. At 36 months, 69% of denosumab-treated patients had BMD increases at all 3 sites (LS, TH, or FN) compared with 8% of placebo-treated patients. Lower baseline BMD was associated with higher-magnitude lumbar spine, femoral neck, and total hip BMD responses to denosumab. Conclusions In men with prostate cancer receiving ADT, significantly higher BMD response rates were observed with denosumab vs placebo. Trial Registration This study is registered with Clinical

  17. Crustal structure of the rifted volcanic margins and uplifted plateau of Western Yemen from receiver function analysis

    NASA Astrophysics Data System (ADS)

    Ahmed, Abdulhakim; Tiberi, Christel; Leroy, Sylvie; Stuart, Graham; Keir, Derek; Sholan, Jamal; Khanbari, Khaled; Al-Ganad, Ismeal; Basuyau, Clemence

    2013-04-01

    We analyse P-wave receiver functions across the western Gulf of Aden and southern Red Sea continental margins in Western Yemen to constrain crustal thickness, internal crustal structure, and bulk seismic velocity characteristics in order to address the role of magmatism, faulting and mechanical crustal thinning during continental breakup. We analyse teleseismic data from 21 stations forming the temporary Young Conjugate Margins Laboratory (YOCMAL) network together with GFZ and Yemeni permanent stations. Analysis of computed receiver functions shows that (1) the thickness of unextended crust on the Yemen plateau is ~35 km; (2) this thins to ~22 km in coastal areas and reaches less than 14 km on the Red Sea coast, where the presence of a high-velocity lower crust (HVLC) is evident. The average Vp/Vs ratio for the western Yemen Plateau is 1.79, increasing to ~1.92 near the Red Sea coast and decreasing to 1.68 for those stations located on or near the granitic rocks. Thinning of the crust, and by inference extension, occurs over a ~130 km wide transition zone from the Red Sea and Gulf of Aden coasts to the edges of the Yemen plateau. Thinning of continental crust is particularly localized in a <30-km-wide zone near the coastline, spatially co-incident with addition of magmatic underplate to the lower crust, above which at the surface we observe the presence of seaward dipping reflectors (SDRs) and thickened Oligo-Miocene syn-rift basaltic flows. Our results strongly suggest the presence of high-velocity mafic intrusions in the lower crust, which are likely either synrift magmatic intrusions into the continental lower crust or alternatively depleted upper mantle underplated to the base of the crust during the eruption of the SDRs. Our results also point toward a regional breakup history in which the onset of rifting was synchronous along the western Gulf of Aden and southern Red Sea volcanic margins, followed by a second phase of extension along the Red Sea margin.

  18. Crustal structure of the rifted volcanic margins and uplifted plateau of Western Yemen from receiver function analysis

    NASA Astrophysics Data System (ADS)

    Ahmed, Abdulhakim; Tiberi, Christel; Leroy, Sylvie; Stuart, Graham W.; Keir, Derek; Sholan, Jamal; Khanbari, Khaled; Al-Ganad, Ismael; Basuyau, Clémence

    2013-06-01

    We analyse P-wave receiver functions across the western Gulf of Aden and southern Red Sea continental margins in Western Yemen to constrain crustal thickness, internal crustal structure and the bulk seismic velocity characteristics in order to address the role of magmatism, faulting and mechanical crustal thinning during continental breakup. We analyse teleseismic data from 21 stations forming the temporary Young Conjugate Margins Laboratory (YOCMAL) network together with GFZ and Yemeni permanent stations. Analysis of computed receiver functions shows that (1) the thickness of unextended crust on the Yemen plateau is ˜35 km; (2) this thins to ˜22 km in coastal areas and reaches less than 14 km on the Red Sea coast, where presence of a high-velocity lower crust is evident. The average Vp/Vs ratio for the western Yemen Plateau is 1.79, increasing to ˜1.92 near the Red Sea coast and decreasing to 1.68 for those stations located on or near the granitic rocks. Thinning of the crust, and by inference extension, occurs over a ˜130-km-wide transition zone from the Red Sea and Gulf of Aden coasts to the edges of the Yemen plateau. Thinning of continental crust is particularly localized in a <30-km-wide zone near the coastline, spatially co-incident with addition of magmatic underplate to the lower crust, above which on the surface we observe the presence of seaward dipping reflectors (SDRs) and thickened Oligo-Miocene syn-rift basaltic flows. Our results strongly suggest the presence of high-velocity mafic intrusions in the lower crust, which are likely either synrift magmatic intrusion into continental lower crust or alternatively depleted upper mantle underplated to the base of the crust during the eruption of the SDRs. Our results also point towards a regional breakup history in which the onset of rifting was synchronous along the western Gulf of Aden and southern Red Sea volcanic margins followed by a second phase of extension along the Red Sea margin.
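    For context, a hedged statement of the standard relation such receiver-function studies rest on: the delay of the Moho-converted Ps phase behind direct P constrains crustal thickness only jointly with the velocities, which is why thickness and Vp/Vs are discussed together above (generic notation, not this paper's inversion):

```latex
% Crustal thickness H from the Moho Ps-P delay time t_{Ps} at ray parameter p:
H = \frac{t_{Ps}}{\sqrt{V_s^{-2} - p^{2}} \;-\; \sqrt{V_p^{-2} - p^{2}}}
% The trade-off between H and V_p/V_s is usually resolved by also stacking the
% crustal multiples (PpPs and PpSs+PsPs) of the Moho conversion.
```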

  19. Analytical determination of orbital elements using Fourier analysis. I. The radial velocity case

    NASA Astrophysics Data System (ADS)

    Delisle, J.-B.; Ségransan, D.; Buchschacher, N.; Alesina, F.

    2016-05-01

    We describe an analytical method for computing the orbital parameters of a planet from the periodogram of a radial velocity signal. The method is very efficient and provides a good approximation of the orbital parameters. The accuracy is mainly limited by the accuracy of the computation of the Fourier decomposition of the signal which is sensitive to sampling and noise. Our method is complementary with more accurate (and more expensive in computer time) numerical algorithms (e.g. Levenberg-Marquardt, Markov chain Monte Carlo, genetic algorithms). Indeed, the analytical approximation can be used as an initial condition to accelerate the convergence of these numerical methods. Our method can be applied iteratively to search for multiple planets in the same system.
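    A minimal sketch of the general workflow the abstract situates itself in (periodogram used to seed a numerical fit). This is not the paper's analytical Fourier inversion; the circular-orbit model, function name and array names are assumptions:

```python
import numpy as np
from astropy.timeseries import LombScargle
from scipy.optimize import least_squares

# t: observation times, rv: radial velocities, erv: their uncertainties (NumPy arrays assumed)
def initial_circular_fit(t, rv, erv):
    """Periodogram-seeded fit of a circular (e = 0) Keplerian, as a starting point
    for a full orbital solution refined by a numerical optimiser."""
    freq, power = LombScargle(t, rv, erv).autopower()
    f0 = freq[np.argmax(power)]                       # best frequency from the periodogram

    def residuals(params):
        k, phi, gamma, f = params
        return (rv - (k * np.sin(2 * np.pi * f * t + phi) + gamma)) / erv

    guess = [0.5 * (rv.max() - rv.min()), 0.0, rv.mean(), f0]
    sol = least_squares(residuals, guess)
    k, phi, gamma, f = sol.x
    return {"P": 1.0 / f, "K": abs(k), "gamma": gamma}
```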

  20. An analytical theory of the motion of PHOBOS and perturbation analysis

    NASA Astrophysics Data System (ADS)

    Emelyanov, N. V.; Nasonova, L. P.

    1989-08-01

    An analytical theory of the motion of Phobos is briefly described. The theory is realized by adapting the theory of artificial-satellite motion. A theoretical estimate of the accuracy of the coordinate computation method is about 0.5 m over an interval of 1 to 2 years. The perturbations of the elements of the intermediate orbit of Phobos are analyzed, and recommendations are made for accounting for the perturbing factors. The possibility of determining the parameters of the perturbing factors from Phobos observations is discussed.

  1. Screening foods for radionuclide contamination via analysis of composited analytical portions.

    PubMed

    Cunningham, William C

    2014-02-01

    A procedure is presented for screening foods for radionuclide contamination. It was developed by the U.S. Food and Drug Administration (U.S. FDA) to be an option for augmenting analytical capability following a major radiological contamination event involving beta- or alpha-emitting radionuclides. The expected application of this procedure would be during late-phase monitoring, after initial monitoring suggests an area is contamination-free or levels are negligible but additional confirming data are desired. When food is taken from multiple samples and a composite analytical portion is prepared and analyzed using a quantitative method, it is possible to show that radionuclide activity levels are below a regulatory limit for a number of samples simultaneously. Although radionuclide activity levels are not obtained for individual samples, the number of samples that can be processed can be increased dramatically. In application, a limited number of selected samples would be analyzed using the usual unmodified quantitative method while the screening method would be used to provide supporting information for bulk quantities of samples. The procedure involves combining equal-mass portions from a number (n) of food samples to make a composite analytical portion, which is then analyzed using a quantitative method. Instead of comparing results with the regulatory limit, they are compared with a screening level equal to 1/n of the regulatory limit. If the observed activity concentration for the composite analytical portion is below the screening level, then the radionuclide levels are below the regulatory limit for all samples represented by it. Screening throughput will therefore depend on n. For n = 2, 3, etc., sample throughput will double, triple, etc., respectively. The maximum number of samples that may be combined is subject to limitations such as those associated with sample nonhomogeneity, detection capability, and the need to be able to discern abnormal
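    A toy rendering of the composite-screening arithmetic described above (the limit, n, and activity value are invented for illustration and are not regulatory values):

```python
# Composite screening: combine n equal-mass portions, analyze once, and compare
# against 1/n of the regulatory limit.
regulatory_limit = 1200.0      # Bq/kg for some radionuclide (illustrative)
n = 4                          # equal-mass portions combined into one composite

screening_level = regulatory_limit / n          # 300 Bq/kg for n = 4

def screen(composite_activity_bq_per_kg: float) -> str:
    if composite_activity_bq_per_kg < screening_level:
        # even if all activity came from a single sample, that sample is below the limit
        return f"pass: all {n} samples are below {regulatory_limit} Bq/kg"
    # otherwise the composite is inconclusive and the samples need individual analysis
    return "inconclusive: analyze the samples individually"

print(screen(250.0))
```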

  2. Mathematical analysis of recent analytical approximations to the collapse of an empty spherical bubble.

    PubMed

    Amore, Paolo; Fernández, Francisco M

    2013-02-28

    We analyze the Rayleigh equation for the collapse of an empty bubble and provide an explanation for some recent analytical approximations to the model. We derive the form of the singularity at the second boundary point and discuss the convergence of the approximants. We also give a rigorous proof of the asymptotic behavior of the coefficients of the power series that are the basis for the approximate expressions.
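    For reference, a hedged statement of the model being analyzed, in its usual form for an empty cavity in an incompressible liquid (the paper's approximations concern the solution of this equation, not its statement):

```latex
% Rayleigh equation for the collapse of an empty spherical bubble of radius R(t)
% in an incompressible liquid of density \rho under ambient pressure p_\infty:
R\ddot{R} + \tfrac{3}{2}\dot{R}^{2} = -\frac{p_\infty}{\rho},
\qquad R(0) = R_0,\quad \dot{R}(0) = 0,
% whose first integral gives the classical collapse speed
\dot{R}^{2} = \frac{2p_\infty}{3\rho}\left(\frac{R_0^{3}}{R^{3}} - 1\right).
```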

  3. IoT Big-Data Centred Knowledge Granule Analytic and Cluster Framework for BI Applications: A Case Base Analysis.

    PubMed

    Chang, Hsien-Tsung; Mishra, Nilamadhab; Lin, Chung-Chih

    2015-01-01

    The current rapid growth of Internet of Things (IoT) in various commercial and non-commercial sectors has led to the deposition of large-scale IoT data, of which the time-critical analytic and clustering of knowledge granules represent highly thought-provoking application possibilities. The objective of the present work is to inspect the structural analysis and clustering of complex knowledge granules in an IoT big-data environment. In this work, we propose a knowledge granule analytic and clustering (KGAC) framework that explores and assembles knowledge granules from IoT big-data arrays for a business intelligence (BI) application. Our work implements neuro-fuzzy analytic architecture rather than a standard fuzzified approach to discover the complex knowledge granules. Furthermore, we implement an enhanced knowledge granule clustering (e-KGC) mechanism that is more elastic than previous techniques when assembling the tactical and explicit complex knowledge granules from IoT big-data arrays. The analysis and discussion presented here show that the proposed framework and mechanism can be implemented to extract knowledge granules from an IoT big-data array in such a way as to present knowledge of strategic value to executives and enable knowledge users to perform further BI actions.

  4. IoT Big-Data Centred Knowledge Granule Analytic and Cluster Framework for BI Applications: A Case Base Analysis

    PubMed Central

    Chang, Hsien-Tsung; Mishra, Nilamadhab; Lin, Chung-Chih

    2015-01-01

    The current rapid growth of Internet of Things (IoT) in various commercial and non-commercial sectors has led to the deposition of large-scale IoT data, of which the time-critical analytic and clustering of knowledge granules represent highly thought-provoking application possibilities. The objective of the present work is to inspect the structural analysis and clustering of complex knowledge granules in an IoT big-data environment. In this work, we propose a knowledge granule analytic and clustering (KGAC) framework that explores and assembles knowledge granules from IoT big-data arrays for a business intelligence (BI) application. Our work implements neuro-fuzzy analytic architecture rather than a standard fuzzified approach to discover the complex knowledge granules. Furthermore, we implement an enhanced knowledge granule clustering (e-KGC) mechanism that is more elastic than previous techniques when assembling the tactical and explicit complex knowledge granules from IoT big-data arrays. The analysis and discussion presented here show that the proposed framework and mechanism can be implemented to extract knowledge granules from an IoT big-data array in such a way as to present knowledge of strategic value to executives and enable knowledge users to perform further BI actions. PMID:26600156

  6. Analytical and numerical analysis of frictional damage in quasi brittle materials

    NASA Astrophysics Data System (ADS)

    Zhu, Q. Z.; Zhao, L. Y.; Shao, J. F.

    2016-07-01

    Frictional sliding and crack growth are two main dissipation processes in quasi brittle materials. The frictional sliding along closed cracks is the origin of macroscopic plastic deformation while the crack growth induces a material damage. The main difficulty of modeling is to consider the inherent coupling between these two processes. Various models and associated numerical algorithms have been proposed. But there are so far no analytical solutions even for simple loading paths for the validation of such algorithms. In this paper, we first present a micro-mechanical model taking into account the damage-friction coupling for a large class of quasi brittle materials. The model is formulated by combining a linear homogenization procedure with the Mori-Tanaka scheme and the irreversible thermodynamics framework. As an original contribution, a series of analytical solutions of stress-strain relations are developed for various loading paths. Based on the micro-mechanical model, two numerical integration algorithms are exploited. The first one involves a coupled friction/damage correction scheme, which is consistent with the coupling nature of the constitutive model. The second one contains a friction/damage decoupling scheme with two consecutive steps: the friction correction followed by the damage correction. With the analytical solutions as reference results, the two algorithms are assessed through a series of numerical tests. It is found that the decoupling correction scheme is efficient to guarantee a systematic numerical convergence.

  7. Analytical sensitivity analysis of transient groundwater flow in a bounded model domain using the adjoint method

    NASA Astrophysics Data System (ADS)

    Lu, Zhiming; Vesselinov, Velimir V.

    2015-07-01

    Sensitivity analyses are an important component of any modeling exercise. We have developed an analytical methodology based on the adjoint method to compute sensitivities of a state variable (hydraulic head) to model parameters (hydraulic conductivity and storage coefficient) for transient groundwater flow in a confined and randomly heterogeneous aquifer under ambient and pumping conditions. For a special case of two-dimensional rectangular domains, these sensitivities are represented in terms of the problem configuration (the domain size, boundary configuration, medium properties, pumping schedules and rates, and observation locations and times), and there is no need to actually solve the adjoint equations. As an example, we present analyses of the obtained solution for typical groundwater flow conditions. Analytical solutions allow us to calculate sensitivities efficiently, which can be useful for model-based analyses such as parameter estimation, data-worth evaluation, and optimal experimental design related to sampling frequency and locations of observation wells. The analytical approach is not limited to groundwater applications but can be extended to any other mathematical problem with similar governing equations and under similar conceptual conditions.
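    A hedged discrete analogue of the adjoint idea (not the paper's continuous transient formulation, which works with the flow equation and its boundary configuration directly):

```latex
% Discrete analogue: for a (time-stacked) linear system
%   A(\theta)\,h = b(\theta),  with a scalar observable J = c^{T} h  (e.g. head at a well),
% one adjoint solve  A(\theta)^{T}\psi = c  yields all parameter sensitivities at once:
\frac{dJ}{d\theta_i}
  = \psi^{T}\!\left(\frac{\partial b}{\partial \theta_i}
  - \frac{\partial A}{\partial \theta_i}\,h\right),
% i.e. one forward solve plus one adjoint solve, independent of the number of parameters.
```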

  8. Quantum analytical modeling and simulation of CNT on insulator (COI) and CNT on nothing (CON) FET: a comparative analysis

    NASA Astrophysics Data System (ADS)

    Mukherjee, Sudipta; Bandyopadhyay, Dipan; Dutta, Pranab Kishore; Sarkar, Subir Kumar

    2016-06-01

    A comprehensive performance analysis by quantum analytical modeling of CNT on insulator (COI) and CNT on nothing (CON) FETs having a channel length of 20 nm has been proposed and investigated on the basis of the 2D Poisson equation and the solution of the 1D Schrodinger equation, and validated using the ATLAS 2D simulator. As classical approximations fail to describe carrier quantization, charge inversion and the potential profile of a device in the sub-100 nm regime, here for the first time an analytical model in the quantum mechanical aspect for COI/CON devices has been derived. The effects of high-k dielectrics in place of conventional SiO2 on the device characteristics have been thoroughly discussed. Moreover, all noticeable benefits of our device over the so-called SOI/SON architecture have also been vividly justified.
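    As a hedged sketch of the generic coupled system such quantum-corrected channel models are typically built on (an illustrative form, not the paper's exact equations or boundary conditions):

```latex
% 2-D Poisson equation in the fully depleted channel (mobile charge neglected):
\frac{\partial^{2}\phi}{\partial x^{2}} + \frac{\partial^{2}\phi}{\partial y^{2}}
  = \frac{q\,N_A}{\varepsilon_{\mathrm{Si}}}
% 1-D Schrodinger equation across the film, giving sub-band energies E_j:
-\frac{\hbar^{2}}{2m^{*}}\frac{d^{2}\psi_j}{dy^{2}} - q\,\phi(y)\,\psi_j = E_j\,\psi_j
% The sub-band energies set the inversion charge and hence the quantum-mechanical
% shift of the threshold voltage relative to the classical solution.
```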

  9. Receiver Design, Performance Analysis, and Evaluation for Space-Borne Laser Altimeters and Space-to-Space Laser Ranging Systems

    NASA Technical Reports Server (NTRS)

    Davidson, Frederic M.; Sun, Xiaoli; Field, Christopher T.

    1996-01-01

    This progress report consists of two separate reports. The first one describes our work on the use of variable gain amplifiers to increase the receiver dynamic range of space borne laser altimeters such as NASA's Geoscience Laser Altimeter Systems (GLAS). The requirement of the receiver dynamic range was first calculated. A breadboard variable gain amplifier circuit was made and the performance was fully characterized. The circuit will also be tested in flight on board the Shuttle Laser Altimeter (SLA-02) next year. The second report describes our research on the master clock oscillator frequency calibration for space borne laser altimeter systems using global positioning system (GPS) receivers.

  10. Analysis of plant gums and saccharide materials in paint samples: comparison of GC-MS analytical procedures and databases

    PubMed Central

    2012-01-01

    Background Saccharide materials have been used for centuries as binding media, to paint, write and illuminate manuscripts and to apply metallic leaf decorations. Although the technical literature often reports on the use of plant gums as binders, several other saccharide materials can in fact be encountered in paint samples, not only as major binders, but also as additives. In the literature, there are a variety of analytical procedures that utilize GC-MS to characterize saccharide materials in paint samples; however, the chromatographic profiles are often extremely different, and it is impossible to compare them and reliably identify the paint binder. Results This paper presents a comparison between two different analytical procedures based on GC-MS for the analysis of saccharide materials in works of art. The research presented here evaluates the influence of the analytical procedure used, and how it impacts the sugar profiles obtained from the analysis of paint samples that contain saccharide materials. The procedures have been developed, optimised and systematically used to characterise plant gums at the Getty Conservation Institute in Los Angeles, USA (GCI) and the Department of Chemistry and Industrial Chemistry of the University of Pisa, Italy (DCCI). The main steps of the analytical procedures and their optimisation are discussed. Conclusions The results presented highlight that the two methods give comparable sugar profiles, whether the samples analysed are simple raw materials, pigmented and unpigmented paint replicas, or paint samples collected from centuries-old polychrome art objects. A common database of sugar profiles of reference materials commonly found in paint samples was thus compiled. The database presents data also from those materials that only contain a minor saccharide fraction. This database highlights how many sources of saccharides can be found in a paint sample, representing an important step forward in the problem of

  11. Bivalirudin versus unfractionated heparin: a meta-analysis of patients receiving percutaneous coronary intervention for acute coronary syndromes

    PubMed Central

    Farag, Mohamed; Gorog, Diana A; Prasad, Abhiram; Srinivasan, Manivannan

    2015-01-01

    Objective Acute coronary syndrome (ACS) encompasses ST segment elevation myocardial infarction (STEMI), with generally high thrombus burden and non-ST segment elevation ACS (NSTE-ACS), with lower thrombus burden. In the setting of percutaneous coronary intervention (PCI) for ACS, bivalirudin appears superior to unfractionated heparin (UFH), driven by reduced major bleeding. Recent trials suggest that the benefit of bivalirudin may be reduced with use of transradial access and evolution in antiplatelet therapy. Moreover, a differential role of bivalirudin in ACS cohorts is unknown. Methods A meta-analysis of randomised trials comparing bivalirudin and UFH in patients with ACS receiving PCI, with separate analyses in STEMI and NSTE-ACS groups. Overall estimates of treatment effect were calculated with random-effects model. Results In 5 trials of STEMI (10 358 patients), bivalirudin increased the risk of acute stent thrombosis (ST) (OR 3.62; CI 1.95 to 6.74; p<0.0001) compared with UFH. Bivalirudin reduced the risk of major bleeding only when compared with UFH plus planned glycoprotein IIb/IIIa inhibitors (GPI) (OR 0.49; CI 0.36 to 0.67; p<0.00001). In 14 NSTE-ACS trials (25 238 patients), there was no difference between bivalirudin and UFH in death, myocardial infarction or ST. However, bivalirudin reduced the risk of major bleeding compared with UFH plus planned GPI (OR 0.52; CI 0.43 to 0.62; p<0.00001), or UFH plus provisional GPI (OR 0.68; CI 0.46 to 1.01; p=0.05). The reduction in major bleeding with bivalirudin was not related to vascular access site. Conclusions Bivalirudin increases the risk of acute ST in STEMI, but may confer an advantage over UFH in NSTE-ACS while undergoing PCI, reducing major bleeding without an increase in ST. PMID:26448869

  12. Analysis of the Endogenous Deoxynucleoside Triphosphate Pool in HIV-Positive and -Negative Individuals Receiving Tenofovir-Emtricitabine.

    PubMed

    Chen, Xinhui; Castillo-Mancilla, Jose R; Seifert, Sharon M; McAllister, Kevin B; Zheng, Jia-Hua; Bushman, Lane R; MaWhinney, Samantha; Anderson, Peter L

    2016-09-01

    Tenofovir (TFV) disoproxil fumarate (TDF) and emtricitabine (FTC), two nucleos(t)ide analogs (NA), are coformulated as an anti-HIV combination tablet for treatment and preexposure prophylaxis (PrEP). TDF/FTC may have effects on the deoxynucleoside triphosphate (dNTP) pool due to their similar structures and similar metabolic pathways. We carried out a comprehensive clinical study to characterize the effects of TDF/FTC on the endogenous dNTP pool, from baseline to 30 days of TDF/FTC therapy, in both treatment-naive HIV-positive and HIV-negative individuals. dATP, dCTP, dGTP, and TTP were quantified in peripheral blood mononuclear cells (PBMC) with a validated liquid chromatography-tandem mass spectrometry (LC-MS/MS) methodology. Forty individuals (19 HIV-positive) were enrolled and underwent a baseline visit and then received TDF/FTC for at least 30 days. Longitudinal measurements were analyzed using mixed-model segmented linear regression analysis. The dNTPs were reduced by 14% to 37% relative to the baseline level within 3 days in both HIV-negative and HIV-positive individuals (P ≤ 0.003). These reductions persisted to various degrees at day 30. These findings indicate that dNTP pools are influenced by TDF/FTC therapy. This may alter cellular homeostasis and could increase the antiviral effect through a more favorable analog/dNTP ratio. Further work is needed to elucidate mechanisms, to evaluate the clinical significance of these findings, and to further probe differences between HIV-negative and HIV-positive individuals. (This study has been registered at ClinicalTrials.gov under identifier NCT01040091.). PMID:27353267

  13. 3D imaging of the Corinth rift from a new passive seismic tomography and receiver function analysis

    NASA Astrophysics Data System (ADS)

    Godano, Maxime; Gesret, Alexandrine; Noble, Mark; Lyon-Caen, Hélène; Gautier, Stéphanie; Deschamps, Anne

    2016-04-01

    model and earthquake location. In addition to the tomographic imaging, we perform a preliminary receiver function analysis of teleseismic data recorded by the broadband stations of the CRL network. The RF analysis should provide the interface depths beneath seismometers and increase the imaging resolution of the upper crustal structures provided by the 3D tomography. In this first attempt, we adjust the 1D velocity model that produces a synthetic RF as similar as possible to the observed RF for a subset of data. We compare the identified interfaces with structures imaged by the tomography.

  14. A semi-automated method for the detection of seismic anisotropy at depth via receiver function analysis

    NASA Astrophysics Data System (ADS)

    Licciardi, A.; Piana Agostinetti, N.

    2016-06-01

    Information about seismic anisotropy is embedded in the variation of the amplitude of the Ps pulses as a function of the azimuth, on both the Radial and the Transverse components of teleseismic receiver functions (RF). We develop a semi-automatic method to constrain the presence and the depth of anisotropic layers beneath a single seismic broad-band station. An algorithm is specifically designed to avoid trial and error methods and subjective crustal parametrizations in RF inversions, providing a suitable tool for large-size data set analysis. The algorithm couples together information extracted from a 1-D VS profile and from a harmonic decomposition analysis of the RF data set. This information is used to determine the number of anisotropic layers and their approximate position at depth, which, in turn, can be used to, for example, narrow the search boundaries for layer thickness and S-wave velocity in a subsequent parameter space search. Here, the output of the algorithm is used to invert an RF data set by means of the Neighbourhood Algorithm (NA). To test our methodology, we apply the algorithm to both synthetic and observed data. We make use of synthetic RF with correlated Gaussian noise to investigate the resolution power for multiple and thin (1-3 km) anisotropic layers in the crust. The algorithm successfully identifies the number and position of anisotropic layers at depth prior the NA inversion step. In the NA inversion, strength of anisotropy and orientation of the symmetry axis are correctly retrieved. Then, the method is applied to field measurement from station BUDO in the Tibetan Plateau. Two consecutive layers of anisotropy are automatically identified with our method in the first 25-30 km of the crust. The data are then inverted with the retrieved parametrization. The direction of the anisotropic axis in the uppermost layer correlates well with the orientation of the major planar structure in the area. The deeper anisotropic layer is associated with
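    A hedged sketch of the harmonic decomposition referred to above (generic notation; the paper's implementation and parametrization differ in detail):

```latex
% Harmonic decomposition of the receiver-function amplitudes as a function of
% back-azimuth \phi (illustrative form):
RF(t,\phi) \approx A(t) + B(t)\cos\phi + C(t)\sin\phi + D(t)\cos 2\phi + E(t)\sin 2\phi
% The \phi-independent term A(t) carries the isotropic layering, while energy on the
% k = 1, 2 harmonics (and on the transverse component) flags anisotropy or dipping
% interfaces; its arrival time indicates the depth of the anisotropic layer.
```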

  15. Predictors of Outcome in Traumatic Brain Injury: New Insight Using Receiver Operating Curve Indices and Bayesian Network Analysis

    PubMed Central

    Zador, Zsolt; Sperrin, Matthew; King, Andrew T.

    2016-01-01

    Background Traumatic brain injury remains a global health problem. Understanding the relative importance of outcome predictors helps optimize our treatment strategies by informing assessment protocols, clinical decisions and trial designs. In this study we establish importance ranking for outcome predictors based on receiver operating indices to identify key predictors of outcome and create simple predictive models. We then explore the associations between key outcome predictors using Bayesian networks to gain further insight into predictor importance. Methods We analyzed the corticosteroid randomization after significant head injury (CRASH) trial database of 10008 patients and included patients for whom demographics, injury characteristics, computed tomography (CT) findings and Glasgow Coma Scale (GCS) were recorded (a total of 13 predictors that would be available to clinicians within a few hours following the injury; 6945 patients). Predictions of clinical outcome (death or severe disability at 6 months) were performed using logistic regression models with 5-fold cross validation. Predictive performance was measured using the standardized partial area (pAUC) under the receiver operating curve (ROC), and we used the DeLong test for comparisons. Variable importance ranking was based on pAUC targeted at specificity (pAUCSP) and sensitivity (pAUCSE) intervals of 90–100%. Probabilistic associations were depicted using Bayesian networks. Results Complete AUC analysis showed very good predictive power (AUC = 0.8237, 95% CI: 0.8138–0.8336) for the complete model. Specificity-focused importance ranking highlighted age, pupillary and motor responses, obliteration of basal cisterns/3rd ventricle, and midline shift. Interestingly, when targeting model sensitivity, the highest-ranking variables were age, severe extracranial injury, verbal response, hematoma on CT and motor response. Simplified models, which included only these key predictors, had similar performance (pAUCSP = 0
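    A minimal sketch of the specificity-focused scoring described (cross-validated logistic regression scored by a standardized partial AUC); the predictor matrix, labels and the drop-one-predictor ranking shown in the comment are assumptions, not the paper's exact procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# X: predictor matrix (age, GCS components, CT findings, ...); y: 1 = death/severe disability
def specificity_focused_pauc(X, y):
    """5-fold cross-validated predictions scored with a standardized partial AUC
    restricted to the high-specificity region (false-positive rate <= 0.1)."""
    model = LogisticRegression(max_iter=1000)
    p = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, p, max_fpr=0.1)      # McClish-standardized pAUC

# Dropping one predictor at a time and re-scoring gives a simple importance ranking:
# importance = {j: specificity_focused_pauc(np.delete(X, j, axis=1), y) for j in range(X.shape[1])}
```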

  16. Eliciting and Receiving Online Support: Using Computer-Aided Content Analysis to Examine the Dynamics of Online Social Support

    PubMed Central

    Kraut, Robert E; Levine, John M

    2015-01-01

    Background Although many people with serious diseases participate in online support communities, little research has investigated how participants elicit and provide social support on these sites. Objective The first goal was to propose and test a model of the dynamic process through which participants in online support communities elicit and provide emotional and informational support. The second was to demonstrate the value of computer coding of conversational data using machine learning techniques (1) by replicating results derived from human-coded data about how people elicit support and (2) by answering questions that are intractable with small samples of human-coded data, namely how exposure to different types of social support predicts continued participation in online support communities. The third was to provide a detailed description of these machine learning techniques to enable other researchers to perform large-scale data analysis in these communities. Methods Communication among approximately 90,000 registered users of an online cancer support community was analyzed. The corpus comprised 1,562,459 messages organized into 68,158 discussion threads. Amazon Mechanical Turk workers coded (1) 1000 thread-starting messages on 5 attributes (positive and negative emotional self-disclosure, positive and negative informational self-disclosure, questions) and (2) 1000 replies on emotional and informational support. Their judgments were used to train machine learning models that automatically estimated the amount of these 7 attributes in the messages. Across attributes, the average Pearson correlation between human-based judgments and computer-based judgments was .65. Results Part 1 used human-coded data to investigate relationships between (1) 4 kinds of self-disclosure and question asking in thread-starting posts and (2) the amount of emotional and informational support in the first reply. Self-disclosure about negative emotions (beta=.24, P<.001), negative
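    The abstract does not specify the features or learners used; the sketch below only illustrates the general recipe of training a machine "coder" on the human-coded messages and checking agreement with a Pearson correlation (all names and parameters are assumptions):

```python
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

# texts: the human-coded messages; scores: mean rater judgment for one attribute
# (e.g. amount of emotional support in a reply)
def validate_coder(texts, scores):
    """Cross-validated machine 'coder' for one message attribute, reported as the
    Pearson correlation between human and machine judgments."""
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2), Ridge(alpha=1.0))
    predicted = cross_val_predict(model, texts, scores, cv=5)
    r, _ = pearsonr(scores, predicted)
    return model.fit(texts, scores), r   # refit on all coded data before scoring the full corpus
```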

  17. Design of radar receivers

    NASA Astrophysics Data System (ADS)

    Sokolov, M. A.

    This handbook treats the design and analysis of pulsed radar receivers, with emphasis on elements (especially IC elements) that implement optimal and suboptimal algorithms. The design methodology is developed from the viewpoint of statistical communications theory. Particular consideration is given to the synthesis of single-channel and multichannel detectors, the design of analog and digital signal-processing devices, and the analysis of IF amplifiers.

  18. Using Fuzzy Analytic Hierarchy Process multicriteria and Geographical information system for coastal vulnerability analysis in Morocco: The case of Mohammedia

    NASA Astrophysics Data System (ADS)

    Tahri, Meryem; Maanan, Mohamed; Hakdaoui, Mustapha

    2016-04-01

    This paper shows a method to assess the vulnerability of coasts to risks such as coastal erosion or submersion by applying the Fuzzy Analytic Hierarchy Process (FAHP) and spatial analysis techniques with a Geographic Information System (GIS). The coast of Mohammedia, located in Morocco, was chosen as the study site to implement and validate the proposed framework by applying a GIS-FAHP based methodology. The coastal risk vulnerability mapping follows multi-parametric causative factors such as sea level rise, significant wave height, tidal range, coastal erosion, elevation, geomorphology and distance to urban areas. The Fuzzy Analytic Hierarchy Process methodology enables the calculation of the corresponding criteria weights. The result shows that the coastline of Mohammedia is characterized by moderate, high and very high levels of vulnerability to coastal risk. The high vulnerability areas are situated in the east at the Monika and Sablette beaches. This technical approach is based on the efficiency of the Geographic Information System tool, combined with the Fuzzy Analytic Hierarchy Process, in helping decision makers find optimal strategies to minimize coastal risks.
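    A hedged sketch of the crisp AHP weighting step that FAHP extends (the 3x3 comparison matrix and the criteria named in the comments are invented for illustration; the paper fuzzifies the pairwise judgments before deriving weights):

```python
import numpy as np

# Illustrative pairwise-comparison matrix for three criteria
# (e.g. sea level rise, coastal erosion, elevation); a_ij = importance of i over j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

def ahp_weights(A):
    """Crisp AHP criteria weights (geometric-mean method) and the consistency ratio.
    The fuzzy variant (FAHP) replaces each a_ij by a fuzzy number before this step."""
    w = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    w /= w.sum()
    lam = (A @ w / w).mean()                     # principal-eigenvalue estimate
    n = A.shape[0]
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]   # Saaty random-index table
    return w, ci / ri                            # weights, consistency ratio (< 0.1 is acceptable)

weights, cr = ahp_weights(A)
```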

  19. Bragg-cell receiver study

    NASA Technical Reports Server (NTRS)

    Wilson, Lonnie A.

    1987-01-01

    Bragg-cell receivers are employed in specialized Electronic Warfare (EW) applications for the measurement of frequency. Bragg-cell receiver characteristics are fully characterized for simple RF emitter signals. This receiver is early in its development cycle when compared to the IFM receiver. Functional mathematical models for the Bragg-cell receiver are derived and presented in this report. Theoretical analysis and digital computer signal-processing results are presented for the Bragg-cell receiver. Probability density function analyses are performed for the output frequency. The probability density function distributions are observed to depart from the assumed distributions for wideband and complex RF signals. This analysis is significant for high-resolution, fine-grain EW Bragg-cell receiver systems.

  20. Analysis of low molecular weight metabolites in tea using mass spectrometry-based analytical methods.

    PubMed

    Fraser, Karl; Harrison, Scott J; Lane, Geoff A; Otter, Don E; Hemar, Yacine; Quek, Siew-Young; Rasmussen, Susanne

    2014-01-01

    Tea is the second most consumed beverage in the world after water, and there are numerous reported health benefits as a result of consuming tea, such as reducing the risk of cardiovascular disease and many types of cancer. Thus, there is much interest in the chemical composition of teas, for example: defining components responsible for contributing to reported health benefits; defining quality characteristics such as product flavor; and monitoring for pesticide residues to comply with food safety import/export requirements. Covered in this review are some of the latest developments in mass spectrometry-based analytical techniques for measuring and characterizing low molecular weight components of tea, in particular primary and secondary metabolites. The methodology, more specifically the chromatography and detection mechanisms used in both targeted and non-targeted studies, and their main advantages and disadvantages, are discussed. Finally, we comment on the latest techniques that are likely to have significant benefit to analysts in the future, not merely in the area of tea research, but in the analytical chemistry of low molecular weight compounds in general. PMID:24499071

  1. Analytical analysis of the Pennes bioheat transfer equation with sinusoidal heat flux condition on skin surface.

    PubMed

    Shih, Tzu-Ching; Yuan, Ping; Lin, Win-Li; Kou, Hong-Sen

    2007-11-01

    This study focuses on the temperature response of a semi-infinite biological tissue due to a sinusoidal heat flux at the skin. The Pennes bioheat transfer equation, $\rho_t c_t\,\frac{\partial T}{\partial t} + W_b c_b (T - T_a) = k\,\frac{\partial^2 T}{\partial x^2}$, with the oscillatory heat flux boundary condition $q(0,t) = q_0 e^{i\omega t}$, was investigated. By using the Laplace transform, the analytical solution of the Pennes bioheat transfer equation with a surface sinusoidal heating condition is found. This analytical expression is suitable for describing the transient temperature response of tissue over the whole time domain, from the start of the periodic oscillation to the final steady periodic oscillation. The results show that the temperature oscillation due to the sinusoidal heating on the skin surface is unstable in the initial period. Further, it is not feasible to predict the blood perfusion rate via the phase shift between the surface heat flux and the surface temperature. Moreover, a lower frequency of the sinusoidal heat flux on the skin surface induces a more sensitive phase-shift response to changes in the blood perfusion rate, but extends the beginning time of sampling because the unusable first cyclic oscillation must be avoided.

  2. Evaluation of FTIR-based analytical methods for the analysis of simulated wastes

    SciTech Connect

    Rebagay, T.V.; Cash, R.J.; Dodd, D.A.; Lockrem, L.L.; Meacham, J.E.; Winkelman, W.D.

    1994-09-30

    Three FTIR-based analytical methods that have potential to characterize simulated waste tank materials have been evaluated. These include: (1) fiber optics, (2) modular transfer optic using light guides equipped with non-contact sampling peripherals, and (3) photoacoustic spectroscopy. Pertinent instrumentation and experimental procedures for each method are described. The results show that the near-infrared (NIR) region of the infrared spectrum is the region of choice for the measurement of moisture in waste simulants. Differentiation of the NIR spectrum, as a preprocessing step, will improve the analytical result. Preliminary data indicate that prominent combination bands of water and the first overtone band of the ferrocyanide stretching vibration may be utilized to measure water and ferrocyanide species simultaneously. Both near-infrared and mid-infrared spectra must be collected, however, to measure ferrocyanide species unambiguously and accurately. For ease of sample handling and the potential for field or waste tank deployment, the FTIR-Fiber Optic method is preferred over the other two methods. Modular transfer optic using light guides and photoacoustic spectroscopy may be used as backup systems and for the validation of the fiber optic data.

  3. An improved analytic solution for analysis of particle trajectories in fibrous, two-dimensional filters

    SciTech Connect

    Marshall, H.; Sahraoui, M.; Kaviany, M.

    1993-09-01

    The Kuwabara solution for creeping fluid flow through a periodic arrangement of cylinders is widely used in analytic and numerical studies of fibrous filters. Numerical solutions have shown that the Kuwabara solution has systematic errors, and when used for the particle trajectories in filters it results in some error in the predicted filter efficiency. The numerical solutions, although accurate, preclude further analytic treatments and are not as compact and convenient to use as the Kuwabara solution. By re-examining the outer boundary conditions of the Kuwabara solution, we have derived a correction term to the Kuwabara solution to obtain an extended solution that is more accurate and improves prediction of the filter efficiency. By comparison with the numerical solutions, it is shown that the Kuwabara solution is the high-porosity asymptote and that the extended solution has an improved porosity dependence. We explain a rectification which can make particle collection less efficient for periodic, in-line arrangements of fibers with particle diffusion or body force. This rectification also results in the alignment of particles with inertia (i.e., high Stokes number particles).

  4. Performance outlook of the SCRAP receiver

    NASA Astrophysics Data System (ADS)

    Lubkoll, Matti; von Backström, Theodor W.; Harms, Thomas M.

    2016-05-01

    A combined cycle (CC) concentrating solar power (CSP) plant provides significant potential to achieve an efficiency increase and an electricity cost reduction compared to current single-cycle plants. A CC CSP system requires a receiver technology capable of effectively transferring heat from concentrated solar irradiation to a pressurized air stream of a gas turbine. The small number of pressurized air receivers demonstrated to date have practical limitations when operating at high temperatures and pressures. A robust, scalable and efficient system has yet to be developed and commercialized. A novel receiver system, the Spiky Central Receiver Air Pre-heater (SCRAP) concept, has been proposed to comply with these requirements. The SCRAP system is conceived as a solution for an efficient and robust pressurized air receiver that could be implemented in CC CSP concepts or standalone solar Brayton cycles without a bottoming Rankine cycle. The presented work expands on previous publications on the thermal modeling of the receiver system. Based on the analysis of a single heat transfer element (spike), predictions of its thermal performance can be made. To this end the existing thermal model was improved with heat transfer characteristics for the jet impingement region of the spike tip as well as heat transfer models simulating the interaction with the ambient environment. While the jet impingement cooling effect was simulated employing a commercial CFD code, the ambient heat transfer model was based on simplifying assumptions in order to employ empirical and analytical equations. The thermal efficiency of a spike under design conditions (flux 1.0 MW/m2, air outlet temperature just below 800 °C) was calculated at approximately 80%, where convective heat losses account for 16.2% of the absorbed radiation and radiative heat losses for a lower 2.9%. This effect is due to peak surface temperatures occurring at the root of the spikes. It can thus be concluded that the geometric

  5. Analysis of accuracy of approximate, simultaneous, nonlinear confidence intervals on hydraulic heads in analytical and numerical test cases

    USGS Publications Warehouse

    Hill, M.C.

    1989-01-01

    Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported. -from Author
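
    A minimal sketch of the first-order (linearized) interval that underlies such approximate confidence intervals is given below; the sensitivity vector, parameter covariance and critical value are invented for illustration and are not values from the study.

    ```python
    import numpy as np

    # First-order approximation: var(h) ~ x V x^T, where x holds the sensitivities of the
    # simulated head h to the parameters and V is the parameter covariance matrix.
    x = np.array([0.8, -0.3, 1.5])      # illustrative sensitivities dh/dp_i
    V = np.diag([0.04, 0.09, 0.01])     # illustrative parameter covariance
    h = 102.5                           # illustrative simulated head (m)
    crit = 2.0                          # illustrative critical value (t- or Scheffe-type)

    s_h = float(np.sqrt(x @ V @ x))
    print(f"approximate interval: {h - crit * s_h:.2f} .. {h + crit * s_h:.2f} m")
    ```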

  6. Analysis of Pre-Analytic Factors Affecting the Success of Clinical Next-Generation Sequencing of Solid Organ Malignancies

    PubMed Central

    Chen, Hui; Luthra, Rajyalakshmi; Goswami, Rashmi S.; Singh, Rajesh R.; Roy-Chowdhuri, Sinchita

    2015-01-01

    Application of next-generation sequencing (NGS) technology to routine clinical practice has enabled characterization of personalized cancer genomes to identify patients likely to have a response to targeted therapy. The proper selection of tumor sample for downstream NGS-based mutational analysis is critical to generate accurate results and to guide therapeutic intervention. However, multiple pre-analytic factors come into play in determining the success of NGS testing. In this review, we discuss pre-analytic requirements for AmpliSeq PCR-based sequencing using the Ion Torrent Personal Genome Machine (PGM) (Life Technologies), an NGS sequencing platform that is often used by clinical laboratories for sequencing solid tumors because of its low input DNA requirement from formalin-fixed and paraffin-embedded tissue. The success of NGS mutational analysis is affected not only by the input DNA quantity but also by several other factors, including the specimen type, the DNA quality, and the tumor cellularity. Here, we review tissue requirements for solid tumor NGS-based mutational analysis, including procedure types, tissue types, tumor volume and fraction, decalcification, and treatment effects. PMID:26343728

  7. Advances in analytical methodology for bioinorganic speciation analysis: metallomics, metalloproteomics and heteroatom-tagged proteomics and metabolomics.

    PubMed

    Szpunar, Joanna

    2005-04-01

    The recent developments in analytical techniques capable of providing information on the identity and quantity of heteroatom-containing biomolecules are critically discussed. Particular attention is paid to the emerging areas of bioinorganic analysis including: (i) a comprehensive analysis of the entirety of metal and metalloid species within a cell or tissue type (metallomics), (ii) the study of the part of the metallome involving the protein ligands (metalloproteomics), and (iii) the use of a heteroelement, naturally present in a protein or introduced in a tag added by means of derivatisation, for the spotting and quantification of proteins (heteroatom-tagged proteomics). Inductively coupled plasma mass spectrometry (ICP MS), used as detector in chromatography and electrophoresis, and supported by electrospray and MALDI MS, appears as the linchpin analytical technique for these emerging areas. This review focuses on the recent advances in ICP MS in biological speciation analysis including sensitive detection of non-metals, especially of sulfur and phosphorus, couplings to capillary and nanoflow HPLC and capillary electrophoresis, laser ablation ICP MS detection of proteins in gel electrophoresis, and isotope dilution quantification of biomolecules. The paper can be considered as a followup of a previous review by the author on a similar topic (J. Szpunar, Analyst, 2000, 125, 963).

  8. Metagenomic analysis of bacterial community composition and antibiotic resistance genes in a wastewater treatment plant and its receiving surface water.

    PubMed

    Tang, Junying; Bu, Yuanqing; Zhang, Xu-Xiang; Huang, Kailong; He, Xiwei; Ye, Lin; Shan, Zhengjun; Ren, Hongqiang

    2016-10-01

    The presence of pathogenic bacteria and the dissemination of antibiotic resistance genes (ARGs) may pose big risks to the rivers that receive the effluent from municipal wastewater treatment plants (WWTPs). In this study, we investigated the changes of bacterial community and ARGs along treatment processes of one WWTP, and examined the effects of the effluent discharge on the bacterial community and ARGs in the receiving river. Pyrosequencing was applied to reveal bacterial community composition including potential bacterial pathogen, and Illumina high-throughput sequencing was used for profiling ARGs. The results showed that the WWTP had good removal efficiency on potential pathogenic bacteria (especially Arcobacter butzleri) and ARGs. Moreover, the bacterial communities of downstream and upstream of the river showed no significant difference. However, the increase in the abundance of potential pathogens and ARGs at effluent outfall was observed, indicating that WWTP effluent might contribute to the dissemination of potential pathogenic bacteria and ARGs in the receiving river.

  9. Metagenomic analysis of bacterial community composition and antibiotic resistance genes in a wastewater treatment plant and its receiving surface water.

    PubMed

    Tang, Junying; Bu, Yuanqing; Zhang, Xu-Xiang; Huang, Kailong; He, Xiwei; Ye, Lin; Shan, Zhengjun; Ren, Hongqiang

    2016-10-01

    The presence of pathogenic bacteria and the dissemination of antibiotic resistance genes (ARGs) may pose big risks to the rivers that receive the effluent from municipal wastewater treatment plants (WWTPs). In this study, we investigated the changes of bacterial community and ARGs along treatment processes of one WWTP, and examined the effects of the effluent discharge on the bacterial community and ARGs in the receiving river. Pyrosequencing was applied to reveal bacterial community composition including potential bacterial pathogen, and Illumina high-throughput sequencing was used for profiling ARGs. The results showed that the WWTP had good removal efficiency on potential pathogenic bacteria (especially Arcobacter butzleri) and ARGs. Moreover, the bacterial communities of downstream and upstream of the river showed no significant difference. However, the increase in the abundance of potential pathogens and ARGs at effluent outfall was observed, indicating that WWTP effluent might contribute to the dissemination of potential pathogenic bacteria and ARGs in the receiving river. PMID:27340885

  10. Experimental analysis of the pressure drop and heat transfer through metal foams used as volumetric receivers under concentrated solar radiation

    SciTech Connect

    Albanakis, C.; Missirlis, D.; Yakinthos, K.; Goulas, A.; Michailidis, N.; Omar, H.; Tsipas, D.; Granier, B.

    2009-01-15

    The main objective of this work was to evaluate the behavior of porous materials when treated as volumetric receivers under concentrated solar radiation. For this reason, various porous metallic and ceramic materials were tested as potential receivers for concentrated solar radiation. The experimental investigation showed that their efficiency depended on both material parameters and flow conditions. In this work, a variety of foam materials such as Ni and Ni alloy, Inconel, copper, aluminum and SiC with different open-cell porosities were tested as potential media to be used as volumetric receivers and heat exchangers. However, since the results were similar, for space economy only the results for two of them, nickel and Inconel, are presented in detail and compared with each other. (author)
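
    The pressure drop through such foams is commonly correlated with a Darcy-Forchheimer law; the sketch below evaluates that standard form with assumed permeability and inertial coefficients (the paper reports measured data, not these numbers).

    ```python
    def pressure_gradient(u, mu=1.85e-5, rho=1.2, K=1e-7, C_F=120.0):
        """Darcy-Forchheimer pressure gradient dP/L (Pa/m) for superficial velocity u (m/s).

        mu, rho: air viscosity and density; K: permeability (m^2); C_F: inertial
        (Forchheimer) coefficient (1/m). All property values are illustrative.
        """
        return mu / K * u + rho * C_F * u**2

    for u in (0.5, 1.0, 2.0):
        print(f"u = {u:.1f} m/s  ->  dP/L ~ {pressure_gradient(u):.0f} Pa/m")
    ```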

  11. Asymptotic BER analysis of FSO with multiple receive apertures over ℳ-distributed turbulence channels with pointing errors.

    PubMed

    Yang, Liang; Hasna, Mazen Omar; Gao, Xiqi

    2014-07-28

    In this paper, we consider a free-space optical (FSO) communication system with multiple receive apertures over ℳ-distributed turbulence channels with pointing errors. In particular, we consider two different combining schemes at the receiver: optimal combining (OC) and selection combining (SC). With these setups, the statistical characteristics of the instantaneous electrical signal-to-noise ratio (SNR) are derived. Then, using the cumulative density function (CDF)-based method, we analyze the asymptotic bit-error rate (BER) performance. The derived results help quantify the diversity order of the considered systems.
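
    At high SNR the BER of such diversity receivers behaves as P_b ≈ (G_c·SNR)^(-G_d), so the diversity order can be read off as the slope of log BER versus log SNR; the sketch below fits that slope to invented BER points rather than to results from the paper.

    ```python
    import numpy as np

    # Illustrative high-SNR BER points (made up); in the asymptotic regime
    # log10(BER) is linear in SNR_dB with slope -G_d / 10.
    snr_db = np.array([30.0, 35.0, 40.0, 45.0])
    ber    = np.array([1e-3, 1e-4, 1e-5, 1e-6])

    slope, _ = np.polyfit(snr_db, np.log10(ber), 1)
    diversity_order = -10.0 * slope
    print(f"estimated diversity order G_d ~ {diversity_order:.2f}")
    ```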

  12. Nonlinear waveform analysis for water-layer response and its application to high-frequency receiver function analysis using OBS array

    NASA Astrophysics Data System (ADS)

    Akuhara, Takeshi; Mochizuki, Kimihiro; Kawakatsu, Hitoshi; Takeuchi, Nozomu

    2016-07-01

    Determination of the response of the seawater column to a teleseismic plane wave is important for suppressing adverse effects of water reverberations when calculating receiver functions (RFs) from ocean-bottom seismometer (OBS) records. We present a novel nonlinear-waveform analysis method using the simulated annealing algorithm to determine such a water-layer response recorded by an OBS array. We then demonstrate its usefulness for RF estimation through its application to synthetic and observed data. Synthetic experiments suggest that the water-layer response constrained in this way has the potential to improve RFs from OBS records drastically, even in the high-frequency range (up to 4 Hz). By applying it to data observed by the OBS array around the Kii Peninsula, southwestern Japan, we identified a low-velocity zone at the top of the subducting Philippine Sea plate. This zone may represent the incoming fluid-rich sediment layer that has been reported by active-source seismic surveys.
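
    For intuition about what a water-layer response looks like, the sketch below builds the textbook frequency-domain reverberation operator for a vertically travelling wave in the water column; the water depth, water velocity, seafloor reflection coefficient and sign convention are assumptions for illustration, not the response estimated in the paper.

    ```python
    import numpy as np

    def water_layer_response(freq_hz, depth_m=2000.0, v_water=1500.0, r_bottom=0.4):
        """Frequency-domain water-column reverberation operator (assumed convention).

        A vertically incident pulse reverberates with two-way travel time
        tau = 2*depth/v_water; summing the reverberation series gives
        W(f) = 1 / (1 + r_bottom * exp(-i*2*pi*f*tau)).
        """
        tau = 2.0 * depth_m / v_water
        return 1.0 / (1.0 + r_bottom * np.exp(-2j * np.pi * freq_hz * tau))

    freqs = np.linspace(0.01, 4.0, 400)      # band relevant for high-frequency RFs
    W = water_layer_response(freqs)
    peak = freqs[np.argmax(np.abs(W))]       # first water-column resonance of |W|
    print(f"first water-column resonance near {peak:.2f} Hz")
    ```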

  13. Intracavity optogalvanic spectroscopy. An analytical technique for 14C analysis with subattomole sensitivity.

    PubMed

    Murnick, Daniel E; Dogru, Ozgur; Ilkmen, Erhan

    2008-07-01

    We show a new ultrasensitive laser-based analytical technique, intracavity optogalvanic spectroscopy, allowing extremely high sensitivity for detection of ¹⁴C-labeled carbon dioxide. Capable of replacing large accelerator mass spectrometers, the technique quantifies attomoles of ¹⁴C in submicrogram samples. Based on the specificity of narrow laser resonances coupled with the sensitivity provided by standing waves in an optical cavity and detection via impedance variations, limits of detection near 10⁻¹⁵ ¹⁴C/¹²C ratios are obtained. Using a 15-W ¹⁴CO₂ laser, a linear calibration with samples from 10⁻¹⁵ to >1.5 × 10⁻¹² in ¹⁴C/¹²C ratios, as determined by accelerator mass spectrometry, is demonstrated. Possible applications include microdosing studies in drug development, individualized subtherapeutic tests of drug metabolism, carbon dating and real time monitoring of atmospheric radiocarbon. The method can also be applied to detection of other trace entities.

  14. Mission Analysis Program for Solar Electric Propulsion (MAPSEP). Volume 1: Analytical manual for earth orbital MAPSEP

    NASA Technical Reports Server (NTRS)

    1975-01-01

    An introduction to the MAPSEP organization and a detailed analytical description of all models and algorithms are given. These include trajectory and error covariance propagation methods, orbit determination processes, thrust modeling, and trajectory correction (guidance) schemes. Earth orbital MAPSEP contains the capability of analyzing almost any currently projected low thrust mission from low earth orbit to super synchronous altitudes. Furthermore, MAPSEP is sufficiently flexible to incorporate extended dynamic models, alternate mission strategies, and almost any other system requirement imposed by the user. As in the interplanetary version, earth orbital MAPSEP represents a trade-off between precision modeling and computational speed consistent with defining necessary system requirements. It can be used in feasibility studies as well as in flight operational support. Pertinent operational constraints are available both implicitly and explicitly. However, the reader should be warned that because of program complexity, MAPSEP is only as good as the user and will quickly succumb to faulty user inputs.

  15. Biomarker Analysis of Stored Blood Products: Emphasis on Pre-Analytical Issues

    PubMed Central

    Delobel, Julien; Rubin, Olivier; Prudent, Michel; Crettaz, David; Tissot, Jean-Daniel; Lion, Niels

    2010-01-01

    Millions of blood products are transfused every year; many lives are thus directly concerned by transfusion. The three main labile blood products used in transfusion are erythrocyte concentrates, platelet concentrates and fresh frozen plasma. Each of these products has to be stored according to its particular components. However, during storage, modifications or degradation of those components may occur, and are known as storage lesions. Thus, biomarker discovery of in vivo blood aging as well as in vitro labile blood products storage lesions is of high interest for the transfusion medicine community. Pre-analytical issues are of major importance in analyzing the various blood products during storage conditions as well as according to various protocols that are currently used in blood banks for their preparations. This paper will review key elements that have to be taken into account in the context of proteomic-based biomarker discovery applied to blood banking. PMID:21151459

  16. Biomarker analysis of stored blood products: emphasis on pre-analytical issues.

    PubMed

    Delobel, Julien; Rubin, Olivier; Prudent, Michel; Crettaz, David; Tissot, Jean-Daniel; Lion, Niels

    2010-01-01

    Millions of blood products are transfused every year; many lives are thus directly concerned by transfusion. The three main labile blood products used in transfusion are erythrocyte concentrates, platelet concentrates and fresh frozen plasma. Each of these products has to be stored according to its particular components. However, during storage, modifications or degradation of those components may occur, and are known as storage lesions. Thus, biomarker discovery of in vivo blood aging as well as in vitro labile blood products storage lesions is of high interest for the transfusion medicine community. Pre-analytical issues are of major importance in analyzing the various blood products during storage conditions as well as according to various protocols that are currently used in blood banks for their preparations. This paper will review key elements that have to be taken into account in the context of proteomic-based biomarker discovery applied to blood banking. PMID:21151459

  17. Priority survey between indicators and analytic hierarchy process analysis for green chemistry technology assessment

    PubMed Central

    Kim, Sungjune; Hong, Seokpyo; Ahn, Kilsoo; Gong, Sungyong

    2015-01-01

    Objectives: This study presents the indicators and proxy variables for the quantitative assessment of green chemistry technologies and evaluates the relative importance of each assessment element by consulting experts from the fields of ecology, chemistry, safety, and public health. Methods: The results collected were subjected to an analytic hierarchy process to obtain the weights of the indicators and the proxy variables. Results: These weights may prove useful in avoiding having to resort to qualitative means in the absence of weights between indicators when integrating the results of quantitative assessment by indicator. Conclusions: This study points to the limitations of current quantitative assessment techniques for green chemistry technologies and seeks to present the future direction for quantitative assessment of green chemistry technologies. PMID:26206364
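
    A minimal sketch of the analytic hierarchy process step described above: build a pairwise comparison matrix, take its principal eigenvector as the weights, and check consistency; the 3x3 comparison values are invented for illustration.

    ```python
    import numpy as np

    # Illustrative pairwise comparison matrix for three assessment indicators
    # (values are made up; AHP requires a[i, j] = 1 / a[j, i]).
    A = np.array([[1.0,  3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    idx = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, idx].real)
    w /= w.sum()                              # principal eigenvector, normalized -> weights

    n = A.shape[0]
    ci = (eigvals.real[idx] - n) / (n - 1)    # consistency index
    cr = ci / 0.58                            # Saaty random index for n = 3 is ~0.58
    print("weights:", np.round(w, 3), " consistency ratio:", round(cr, 3))
    ```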

  18. Numerical stability analysis of the pseudo-spectral analytical time-domain PIC algorithm

    SciTech Connect

    Godfrey, Brendan B.; Vay, Jean-Luc; Haber, Irving

    2014-02-01

    The pseudo-spectral analytical time-domain (PSATD) particle-in-cell (PIC) algorithm solves the vacuum Maxwell's equations exactly, has no Courant time-step limit (as conventionally defined), and offers substantial flexibility in plasma and particle beam simulations. It is, however, not free of the usual numerical instabilities, including the numerical Cherenkov instability, when applied to relativistic beam simulations. This paper derives and solves the numerical dispersion relation for the PSATD algorithm and compares the results with corresponding behavior of the more conventional pseudo-spectral time-domain (PSTD) and finite difference time-domain (FDTD) algorithms. In general, PSATD offers superior stability properties over a reasonable range of time steps. More importantly, one version of the PSATD algorithm, when combined with digital filtering, is almost completely free of the numerical Cherenkov instability for time steps (scaled to the speed of light) comparable to or smaller than the axial cell size.
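
    For readers unfamiliar with this type of dispersion analysis, the sketch below solves the standard one-dimensional FDTD numerical dispersion relation, sin²(ωΔt/2) = (cΔt/Δx)² sin²(kΔx/2), which is the FDTD baseline against which PSATD and PSTD are compared; it is a generic illustration, not the multidimensional PIC dispersion relation derived in the paper.

    ```python
    import numpy as np

    # 1D FDTD numerical dispersion: sin^2(w*dt/2) = (c*dt/dx)^2 * sin^2(k*dx/2)
    c, dx = 1.0, 1.0
    dt = 0.9 * dx / c                     # below the 1D Courant limit c*dt/dx <= 1
    k = np.linspace(1e-3, np.pi / dx, 200)

    arg = (c * dt / dx) * np.sin(k * dx / 2.0)
    omega = 2.0 / dt * np.arcsin(arg)     # numerical frequency
    v_phase = omega / k                   # numerical phase velocity (exact value is c)
    print(f"max phase-velocity error: {np.max(np.abs(v_phase - c)) / c:.2%}")
    ```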

  19. Towards more complete specifications for acceptable analytical performance - a plea for error grid analysis.

    PubMed

    Krouwer, Jan S; Cembrowski, George S

    2011-07-01

    We examine limitations of common analytical performance specifications for quantitative assays. Specifications can be either clinical or regulatory. Problems with current specifications include specifying limits for only 95% of the results, having only one set of limits that demarcate no harm from minor harm, using incomplete models for total error, not accounting for the potential of user error, and not supplying sufficient protocol requirements. Error grids are recommended to address these problems as error grids account for 100% of the data and stratify errors into different severity categories. Total error estimation from a method comparison can be used to estimate the inner region of an error grid, but the outer region needs to be addressed using risk management techniques. The risk management steps, foreign to many in laboratory medicine, are outlined.

  20. Dynamical analysis of the avian-human influenza epidemic model using the semi-analytical method

    NASA Astrophysics Data System (ADS)

    Jabbari, Azizeh; Kheiri, Hossein; Bekir, Ahmet

    2015-03-01

    In this work, we present the dynamic behavior of the avian-human influenza epidemic model by using an efficient computational algorithm, namely the multistage differential transform method (MsDTM). The MsDTM is used here as an algorithm for approximating the solutions of the avian-human influenza epidemic model in a sequence of time intervals. In order to show the efficiency of the method, the obtained numerical results are compared with the fourth-order Runge-Kutta method (RK4M) and differential transform method (DTM) solutions. It is shown that the MsDTM has the advantage of giving an analytical form of the solution within each time interval, which is not possible in purely numerical techniques like RK4M.
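
    To illustrate the multistage idea on something simpler than the avian-human influenza model, the sketch below applies the differential transform recurrence to the logistic equation y' = y(1 - y) on successive subintervals and compares the result with a Runge-Kutta reference from SciPy; the model, step size and truncation order are illustrative choices, not those of the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def msdtm_logistic(y0, t_end, h=0.2, order=8):
        """Multistage DTM for y' = y*(1 - y): a truncated Taylor series restarted on each subinterval."""
        ts, ys = [0.0], [y0]
        t, y = 0.0, y0
        while t < t_end - 1e-12:
            step = min(h, t_end - t)
            Y = np.zeros(order + 1)
            Y[0] = y
            for k in range(order):
                conv = sum(Y[l] * Y[k - l] for l in range(k + 1))   # transform of y^2
                Y[k + 1] = (Y[k] - conv) / (k + 1)                  # (k+1) Y[k+1] = Y[k] - (y^2)[k]
            y = sum(Y[k] * step**k for k in range(order + 1))       # evaluate the series at the subinterval end
            t = min(t + step, t_end)
            ts.append(t)
            ys.append(y)
        return np.array(ts), np.array(ys)

    t, y = msdtm_logistic(y0=0.1, t_end=5.0)
    ref = solve_ivp(lambda tt, yy: yy * (1 - yy), (0.0, 5.0), [0.1], t_eval=t, rtol=1e-10, atol=1e-12)
    print(f"max |MsDTM - RK45| = {np.max(np.abs(y - ref.y[0])):.2e}")
    ```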

  1. Validating Analytical Methods

    ERIC Educational Resources Information Center

    Ember, Lois R.

    1977-01-01

    The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)

  2. Teaching the Analytical Life

    ERIC Educational Resources Information Center

    Jackson, Brian

    2010-01-01

    Using a survey of 138 writing programs, I argue that we must be more explicit about what we think students should get out of analysis to make it more likely that students will transfer their analytical skills to different settings. To ensure our students take analytical skills with them at the end of the semester, we must simplify the task we…

  3. Hard Data Analytics Problems Make for Better Data Analysis Algorithms: Bioinformatics as an Example

    PubMed Central

    Widera, Paweł; Lazzarini, Nicola; Krasnogor, Natalio

    2014-01-01

    Data mining and knowledge discovery techniques have greatly progressed in the last decade. They are now able to handle larger and larger datasets, process heterogeneous information, integrate complex metadata, and extract and visualize new knowledge. Often these advances were driven by new challenges arising from real-world domains, with biology and biotechnology a prime source of diverse and hard (e.g., high volume, high throughput, high variety, and high noise) data analytics problems. The aim of this article is to show the broad spectrum of data mining tasks and challenges present in biological data, and how these challenges have driven us over the years to design new data mining and knowledge discovery procedures for biodata. This is illustrated with the help of two kinds of case studies. The first kind is focused on the field of protein structure prediction, where we have contributed in several areas: by designing, through regression, functions that can distinguish between good and bad models of a protein's predicted structure; by creating new measures to characterize aspects of a protein's structure associated with individual positions in a protein's sequence, measures containing information that might be useful for protein structure prediction; and by creating accurate estimators of these structural aspects. The second kind of case study is focused on omics data analytics, a class of biological data characterized for having extremely high dimensionalities. Our methods were able not only to generate very accurate classification models, but also to discover new biological knowledge that was later ratified by experimentalists. Finally, we describe several strategies to tightly integrate knowledge extraction and data mining in order to create a new class of biodata mining algorithms that can natively embrace the complexity of biological data, efficiently generate accurate information in the form of classification/regression models, and extract valuable

  4. Hard Data Analytics Problems Make for Better Data Analysis Algorithms: Bioinformatics as an Example.

    PubMed

    Bacardit, Jaume; Widera, Paweł; Lazzarini, Nicola; Krasnogor, Natalio

    2014-09-01

    Data mining and knowledge discovery techniques have greatly progressed in the last decade. They are now able to handle larger and larger datasets, process heterogeneous information, integrate complex metadata, and extract and visualize new knowledge. Often these advances were driven by new challenges arising from real-world domains, with biology and biotechnology a prime source of diverse and hard (e.g., high volume, high throughput, high variety, and high noise) data analytics problems. The aim of this article is to show the broad spectrum of data mining tasks and challenges present in biological data, and how these challenges have driven us over the years to design new data mining and knowledge discovery procedures for biodata. This is illustrated with the help of two kinds of case studies. The first kind is focused on the field of protein structure prediction, where we have contributed in several areas: by designing, through regression, functions that can distinguish between good and bad models of a protein's predicted structure; by creating new measures to characterize aspects of a protein's structure associated with individual positions in a protein's sequence, measures containing information that might be useful for protein structure prediction; and by creating accurate estimators of these structural aspects. The second kind of case study is focused on omics data analytics, a class of biological data characterized for having extremely high dimensionalities. Our methods were able not only to generate very accurate classification models, but also to discover new biological knowledge that was later ratified by experimentalists. Finally, we describe several strategies to tightly integrate knowledge extraction and data mining in order to create a new class of biodata mining algorithms that can natively embrace the complexity of biological data, efficiently generate accurate information in the form of classification/regression models, and extract valuable new

  5. Gravity field error analysis for pendulum formations by a semi-analytical approach

    NASA Astrophysics Data System (ADS)

    Li, Huishu; Reubelt, Tilo; Antoni, Markus; Sneeuw, Nico

    2016-10-01

    Many geoscience disciplines push for ever higher requirements on accuracy, homogeneity and time- and space-resolution of the Earth's gravity field. Apart from better instruments or new observables, alternative satellite formations could improve the signal and error structure compared to GRACE. One possibility to increase the sensitivity and isotropy by adding cross-track information is a pair of satellites flying in a pendulum formation. This formation contains two satellites which have different ascending nodes and arguments of latitude, but have the same orbital height and inclination. In this study, the semi-analytical approach for efficient pre-mission error assessment is presented, and the transfer coefficients of range, range-rate and range-acceleration gravitational perturbations are derived analytically for the pendulum formation considering a set of opening angles. The new challenge is the time variations of the opening angle and the range, leading to temporally variable transfer coefficients. This is solved by Fourier expansion of the sine/cosine of the opening angle and the central angle. The transfer coefficients are further applied to assess the error patterns which are caused by different orbital parameters. The simulation results indicate that a significant improvement in accuracy and isotropy is obtained for small and medium initial opening angles of single polar pendulums, compared to GRACE. The optimal initial opening angles are 45° and 15° for accuracy and isotropy, respectively. For a Bender configuration, which is constituted by a polar GRACE and an inclined pendulum in this paper, the behaviour of results is dependent on the inclination (prograde vs. retrograde) and on the relative baseline orientation (left or right leading). The simulation for a sun-synchronous orbit shows better results for the left leading case.

  6. Implementation of an analytical verification technique on three building energy-analysis codes: SUNCAT 2.4, DOE 2.1, and DEROB III

    SciTech Connect

    Wortman, D.; O'Doherty, B.; Judkoff, R.

    1981-01-01

    An analytical verification technique for building energy analysis codes has been developed. For this technique, building models are developed that can be both solved analytically and modeled using the analysis codes. The output of the codes is then compared with the analytical solutions. In this way, the accuracy of selected mechanisms in the codes can be verified. The procedure consists of several tests and was run on SUNCAT 2.4, DOE 2.1, and DEROB III. The results are presented and analyzed.
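
    The flavour of such an analytical verification test can be reproduced with a toy case: a single-zone lumped-capacitance model whose exact solution is an exponential approach to the outdoor temperature, compared against a simple numerical integration; the R, C and temperature values are invented, and this is not the test suite used for SUNCAT, DOE or DEROB.

    ```python
    import math

    # Toy single-zone model: C * dT/dt = (T_out - T) / R   (no internal gains)
    R, C = 0.02, 5.0e6          # illustrative thermal resistance (K/W) and capacitance (J/K)
    T0, T_out = 20.0, 0.0       # initial indoor and constant outdoor temperature (degC)
    tau = R * C                 # time constant (s)

    def T_exact(t):
        """Analytical solution: exponential decay toward the outdoor temperature."""
        return T_out + (T0 - T_out) * math.exp(-t / tau)

    # Explicit-Euler "simulation code" to be verified against the analytical solution
    dt, t, T = 60.0, 0.0, T0
    while t < 24 * 3600:
        T += dt * (T_out - T) / (R * C)
        t += dt

    print(f"numerical: {T:.3f} degC   analytical: {T_exact(t):.3f} degC")
    ```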

  7. An Investigation of the Relationship between the Fear of Receiving Negative Criticism and of Taking Academic Risk through Canonical Correlation Analysis

    ERIC Educational Resources Information Center

    Cetin, Bayram; Ilhan, Mustafa; Yilmaz, Ferat

    2014-01-01

    The aim of this study is to examine the relationship between the fear of receiving negative criticism and taking academic risk through canonical correlation analysis, in which a relational model was used. The participants of the study consisted of 215 university students enrolled in various programs at Dicle University's Ziya Gökalp Faculty of…

  8. An Analysis of the Relationship between the Percentage of Enrolled Students Receiving Special Education and High School Graduation Rates

    ERIC Educational Resources Information Center

    Barron, Janice Bonner

    2013-01-01

    The purpose of this study was to examine the relationship between the percentage of enrolled students receiving special education services and the high school graduation rates of the campus and of students in sub-group (White, African American, and Hispanic) populations. Educators are faced with concerns regarding special education and high school…

  9. A REVISED SOLAR TRANSFORMITY FOR TIDAL ENERGY RECEIVED BY THE EARTH AND DISSIPATED GLOBALLY: IMPLICATIONS FOR EMERGY ANALYSIS

    EPA Science Inventory

    Solar transformities for the tidal energy received by the earth and the tidal energy dissipated globally can be calculated because both solar energy and the gravitational attraction of the sun and moon drive independent processes that produce an annual flux of geopotential energy...

  10. An Analysis of Vocational Rehabilitation Services for Consumers with Hearing Impairments Who Received College or University Training

    ERIC Educational Resources Information Center

    Boutin, Daniel L.; Wilson, Keith

    2009-01-01

    The purpose of this study was to determine the predictive ability of vocational rehabilitation services for deaf and hard of hearing consumers who received college and university training. The RSA-911 database for fiscal year 2004 was analyzed to evaluate the effectiveness of 21 services in leading to competitive employment. A model predicting…

  11. Uncertainty determination for nondestructive chemical analytical methods using field data and application to XRF analysis for lead.

    PubMed

    Bartley, David L; Slaven, James E; Rose, Mike C; Andrew, Michael E; Harper, Martin

    2007-12-01

    Air sampling and analytical methods are developed to provide a basis for decision making. They are evaluated in the laboratory against prescribed fitness-for-use criteria even though laboratory validation does not take into account all possible sources of uncertainty in field application. Field evaluation would be preferable but is complicated by the lack of controlled conditions, which limits the ability to compare analytical methods and to recognize outliers and assess variance homogeneity across the range of interest. The specific situation of evaluating nondestructive field analytical methods against their reference laboratory equivalent is considered here, since the difficulty of providing replicates is obviated in this case. A portable X-ray fluorescence (XRF) analyzer was used to determine the lead content of air filter samples from several workplaces where lead is used or is a contaminant of the process material. The portable XRF method has the advantage of allowing for faster decisions compared with the alternative of submitting the air samples to an off-site laboratory for analysis. Since the XRF method is nondestructive, the same air samples were also subjected to the reference laboratory-based method of analysis. Two statistical approaches were developed specifically to deal with non-normal elements of the data in evaluating the results. The ISO GUM method identifies outliers and then calculates an accuracy range about the true concentration for the remainder of the data. This coverage is then adjusted to account for the rate of outlier occurrence. The bootstrap procedure uses a large number of computer-generated data points that are sampled, with replacement, from the original set including outliers to determine the coverage. No significant difference is seen between the two statistical approaches. Both approaches result in similar coverage and support the adoption of method acceptance criteria specific to field evaluation (a symmetric accuracy range
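
    A bare-bones version of the bootstrap step described above is sketched below: paired field (XRF) and reference laboratory results are resampled with replacement and the spread of the field-to-reference ratio is summarized; the paired values are synthetic stand-ins, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic paired results: field XRF vs. reference laboratory lead loading (ug/sample)
    lab = rng.uniform(20, 200, size=40)
    xrf = lab * rng.normal(1.02, 0.08, size=40)     # field method with slight bias and noise

    ratios = xrf / lab
    boot = np.array([rng.choice(ratios, size=ratios.size, replace=True).mean()
                     for _ in range(5000)])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"mean XRF/lab ratio {ratios.mean():.3f}, bootstrap 95% interval ({lo:.3f}, {hi:.3f})")
    ```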

  12. Optimization in the design of a 12 gigahertz low cost ground receiving system for broadcast satellites. Volume 1: System design, performance, and cost analysis

    NASA Technical Reports Server (NTRS)

    Ohkubo, K.; Han, C. C.; Albernaz, J.; Janky, J. M.; Lusignan, B. B.

    1972-01-01

    The technical and economical feasibility of using the 12 GHz band for broadcasting from satellites were examined. Among the assigned frequency bands for broadcast satellites, the 12 GHz band system offers the most channels. It also has the least interference on and from the terrestrial communication links. The system design and analysis are carried out on the basis of a decision analysis model. Technical difficulties in achieving low-cost 12 GHz ground receivers are solved by making use of a die cast aluminum packaging, a hybrid integrated circuit mixer, a cavity stabilized Gunn oscillator and other state-of-the-art microwave technologies for the receiver front-end. A working model was designed and tested, which used frequency modulation. A final design for the 2.6 GHz system ground receiver is also presented. The cost of the ground-terminal was analyzed and minimized for a given figure-of-merit (a ratio of receiving antenna gain to receiver system noise temperature). The results were used to analyze the performance and cost of the whole satellite system.
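
    The figure of merit mentioned above enters the downlink budget through the standard relation C/N0 = EIRP - L_path + G/T - 10·log10(k); the sketch below evaluates it for a 12 GHz geostationary path with assumed EIRP, range and G/T values (they are not the report's design numbers).

    ```python
    import math

    def free_space_loss_db(d_km, f_ghz):
        """Free-space path loss in dB: 20*log10(4*pi*d/lambda), with d in km and f in GHz."""
        return 20 * math.log10(d_km) + 20 * math.log10(f_ghz) + 92.45

    # Illustrative 12 GHz broadcast downlink (assumed values, not the report's design)
    eirp_dbw = 52.0            # satellite EIRP
    g_over_t = 10.0            # ground receiver figure of merit, dB/K
    d_km, f_ghz = 36000.0, 12.0
    boltzmann_dbw = -228.6     # 10*log10(k), dBW/(K*Hz)

    c_over_n0 = eirp_dbw - free_space_loss_db(d_km, f_ghz) + g_over_t - boltzmann_dbw
    print(f"path loss {free_space_loss_db(d_km, f_ghz):.1f} dB, C/N0 ~ {c_over_n0:.1f} dB-Hz")
    ```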

  13. Studies of the analyte-carrier interface in flow injection analysis

    SciTech Connect

    Brown, S.D.

    1992-01-01

    Chemical analysis in flowing solution is popular for automation of classical methods. However, most of the classical methods are not specific enough for direct multicomponent analysis of simple mixtures. This research project has two goals: the study of rapid multicomponent analysis of transient species in flowing media, and the investigation of chemical reactions at interfaces, including the effects of competition on the distribution of products from interfacial reactions. This report summarizes work done over the past 4.5 years; support has been terminated.

  14. Impact of pre-analytical factors on the proteomic analysis of formalin-fixed paraffin-embedded tissue.

    PubMed

    Thompson, Seonaid M; Craven, Rachel A; Nirmalan, Niroshini J; Harnden, Patricia; Selby, Peter J; Banks, Rosamonde E

    2013-04-01

    Formalin-fixed paraffin-embedded (FFPE) tissue samples represent a tremendous potential resource for biomarker discovery, with large numbers of samples in hospital pathology departments and links to clinical information. However, the cross-linking of proteins and nucleic acids by formalin fixation has hampered analysis and proteomic studies have been restricted to using frozen tissue, which is more limited in availability as it needs to be collected specifically for research. This means that rare disease subtypes cannot be studied easily. Recently, improved extraction techniques have enabled analysis of FFPE tissue by a number of proteomic techniques. As with all clinical samples, pre-analytical factors are likely to impact on the results obtained, although overlooked in many studies. The aim of this review is to discuss the various pre-analytical factors, which include warm and cold ischaemic time, size of sample, fixation duration and temperature, tissue processing conditions, length of storage of archival tissue and storage conditions, and to review the studies that have considered these factors in more detail. In those areas where investigations are few or non-existent, illustrative examples of the possible importance of specific factors have been drawn from studies using frozen tissue or from immunohistochemical studies of FFPE tissue.

  15. Content Analysis of Meta-Analytic Studies from I/O Psychology.

    ERIC Educational Resources Information Center

    Cornwell, John M.

    The use of meta-analysis in industrial and organizational psychology has become quite common. Unfortunately, the understanding and research necessary to ensure appropriate application of the technique have not been as widespread. As part of a larger study, a content analysis of meta-analyses from the industrial and organizational psychological…

  16. Analytical and Computational Aspects of Collaborative Optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Bilevel problem formulations have received considerable attention as an approach to multidisciplinary optimization in engineering. We examine the analytical and computational properties of one such approach, collaborative optimization. The resulting system-level optimization problems suffer from inherent computational difficulties due to the bilevel nature of the method. Most notably, it is impossible to characterize and hence identify solutions of the system-level problems because the standard first-order conditions for solutions of constrained optimization problems do not hold. The analytical features of the system-level problem make it difficult to apply conventional nonlinear programming algorithms. Simple examples illustrate the analysis and the algorithmic consequences for optimization methods. We conclude with additional observations on the practical implications of the analytical and computational properties of collaborative optimization.

  17. Expression, purification, crystallization and preliminary X-ray analysis of the receiver domain of Staphylococcus aureus LytR protein

    PubMed Central

    Shala, Agnesa; Patel, Kevin H.; Golemi-Kotra, Dasantila; Audette, Gerald F.

    2013-01-01

    The response-regulatory protein LytR belongs to a family of transcription factors involved in the regulation of important virulence factors in pathogenic bacteria. The protein consists of a receiver domain and an effector domain, which play an important role in controlled cell death and lysis. The LytR receiver domain (LytRN) has been overexpressed, purified and crystallized using the sitting-drop and hanging-drop vapour-diffusion methods. The crystals grew as needles, with unit-cell parameters a = b = 84.82, c = 157.3 Å, α = β = 90, γ = 120°. LytRN crystallized in space group P6122 and the crystals diffracted to a maximum resolution of 2.34 Å. Based on the Matthews coefficient (VM = 5.44 Å³ Da⁻¹), one molecule is estimated to be present in the asymmetric unit. PMID:24316844
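
    The Matthews coefficient quoted above can be reproduced from the reported cell parameters once a molecular mass is assumed; the sketch below does the arithmetic for a hexagonal cell with 12 asymmetric units (P6122), taking an assumed mass of roughly 15 kDa for the receiver domain (an assumption for illustration, not a value given in the abstract).

    ```python
    import math

    # Hexagonal cell from the abstract: a = b = 84.82 A, c = 157.3 A, space group P6(1)22
    a, c = 84.82, 157.3
    V_cell = math.sqrt(3) / 2 * a**2 * c          # hexagonal cell volume, A^3

    Z = 12            # asymmetric units (general positions) in P6(1)22
    M = 15000.0       # assumed molecular mass of LytRN in Da (~15 kDa, illustrative)
    n_mol = 1         # molecules per asymmetric unit

    V_M = V_cell / (Z * n_mol * M)
    print(f"V_cell ~ {V_cell:.3e} A^3,  V_M ~ {V_M:.2f} A^3/Da")   # ~5.4, matching the abstract
    ```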

  18. Material degradation due to moisture and temperature. Part 1: mathematical model, analysis, and analytical solutions

    NASA Astrophysics Data System (ADS)

    Xu, C.; Mudunuru, M. K.; Nakshatrala, K. B.

    2016-06-01

    The mechanical response, serviceability, and load-bearing capacity of materials and structural components can be adversely affected due to external stimuli, which include exposure to a corrosive chemical species, high temperatures, temperature fluctuations (i.e., freezing-thawing), cyclic mechanical loading, just to name a few. It is, therefore, of paramount importance in several branches of engineering—ranging from aerospace engineering, civil engineering to biomedical engineering—to have a fundamental understanding of degradation of materials, as the materials in these applications are often subjected to adverse environments. As a result of recent advancements in material science, new materials such as fiber-reinforced polymers and multi-functional materials that exhibit high ductility have been developed and widely used, for example, as infrastructural materials or in medical devices (e.g., stents). The traditional small-strain approaches of modeling these materials will not be adequate. In this paper, we study degradation of materials due to an exposure to chemical species and temperature under large strain and large deformations. In the first part of our research work, we present a consistent mathematical model with firm thermodynamic underpinning. We then obtain semi-analytical solutions of several canonical problems to illustrate the nature of the quasi-static and unsteady behaviors of degrading hyperelastic solids.

  19. Material degradation due to moisture and temperature. Part 1: mathematical model, analysis, and analytical solutions

    NASA Astrophysics Data System (ADS)

    Xu, C.; Mudunuru, M. K.; Nakshatrala, K. B.

    2016-11-01

    The mechanical response, serviceability, and load-bearing capacity of materials and structural components can be adversely affected due to external stimuli, which include exposure to a corrosive chemical species, high temperatures, temperature fluctuations (i.e., freezing-thawing), cyclic mechanical loading, just to name a few. It is, therefore, of paramount importance in several branches of engineering—ranging from aerospace engineering, civil engineering to biomedical engineering—to have a fundamental understanding of degradation of materials, as the materials in these applications are often subjected to adverse environments. As a result of recent advancements in material science, new materials such as fiber-reinforced polymers and multi-functional materials that exhibit high ductility have been developed and widely used, for example, as infrastructural materials or in medical devices (e.g., stents). The traditional small-strain approaches of modeling these materials will not be adequate. In this paper, we study degradation of materials due to an exposure to chemical species and temperature under large strain and large deformations. In the first part of our research work, we present a consistent mathematical model with firm thermodynamic underpinning. We then obtain semi-analytical solutions of several canonical problems to illustrate the nature of the quasi-static and unsteady behaviors of degrading hyperelastic solids.

  20. Analytical expressions for chatter analysis in milling operations with one dominant mode

    NASA Astrophysics Data System (ADS)

    Iglesias, A.; Munoa, J.; Ciurana, J.; Dombovari, Z.; Stepan, G.

    2016-08-01

    In milling, an accurate prediction of chatter is still one of the most complex problems in the field. The presence of these self-excited vibrations can spoil the surface of the part and can also cause a large reduction in tool life. The stability diagrams provide a practical selection of the optimum cutting conditions, determined either by time-domain or frequency-domain based methods. Applying these methods, parametric or parameter-traced representations of the linear stability limits can be achieved by solving the corresponding eigenvalue problems. In this work, new analytical formulae are proposed related to the parameter domains of both Hopf and period-doubling type stability boundaries emerging in the regenerative mechanical model of time-periodic milling processes. These formulae are useful to enrich and speed up the currently used numerical methods. Also, the destabilization mechanism of double-period chatter is explained, creating an analogy with the chatter related to the Hopf bifurcation, considering one dominant mode and using concepts established by the pioneers of chatter research.
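
    The structure of such closed-form stability limits can be seen in the classical single-degree-of-freedom regenerative model, whose Hopf-type boundary follows from the characteristic equation 1 + Kc·a·(1 - e^(-iωT))·G(iω) = 0; the sketch below traces the lobes for an invented mode and cutting coefficient, and is not the milling formulae derived in the paper (which include directional factors and the period-doubling boundaries).

    ```python
    import numpy as np

    # Single-mode regenerative chatter: 1 + Kc*a*(1 - exp(-i*w*T))*G(i*w) = 0
    wn, zeta, k = 2 * np.pi * 600.0, 0.03, 2.0e7   # mode: 600 Hz, 3 % damping, 20 N/um stiffness
    Kc = 7.0e8                                     # illustrative cutting-force coefficient (N/m^2)

    w = np.linspace(1.001 * wn, 2.0 * wn, 2000)    # chatter frequencies where Re[G] < 0
    r = w / wn
    G = 1.0 / (k * (1.0 - r**2 + 2j * zeta * r))   # receptance of the single dominant mode
    a_lim = -1.0 / (2.0 * Kc * G.real)             # Hopf-type critical depth of cut (m)
    theta = 2.0 * np.arctan2(-G.real, G.imag)      # regenerative phase, in (pi, 2*pi)

    for j in range(3):                             # first three stability lobes
        T = (theta + 2.0 * np.pi * j) / w          # regenerative delay (one revolution here), s
        n_rpm = 60.0 / T
        i = np.argmin(a_lim)
        print(f"lobe {j}: a_lim,min ~ {1e3 * a_lim[i]:.2f} mm near {n_rpm[i]:.0f} rpm")
    ```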

  1. General analytical approach for sound transmission loss analysis through a thick metamaterial plate

    SciTech Connect

    Oudich, Mourad; Zhou, Xiaoming; Badreddine Assouar, M.

    2014-11-21

    We report theoretically and numerically on the sound transmission loss performance through a thick plate-type acoustic metamaterial made of spring-mass resonators attached to the surface of a homogeneous elastic plate. Two general analytical approaches based on plane wave expansion were developed to calculate both the sound transmission loss through the metamaterial plate (thick and thin) and its band structure. The first one can be applied to thick plate systems to study the sound transmission for any normal or oblique incident sound pressure. The second approach gives the metamaterial dispersion behavior to describe the vibrational motions of the plate, which helps to understand the physics behind sound radiation through air by the structure. Computed results show that high sound transmission loss up to 72 dB at 2 kHz is reached with a thick metamaterial plate while only 23 dB can be obtained for a simple homogeneous plate with the same thickness. Such plate-type acoustic metamaterial can be a very effective solution for high performance sound insulation and structural vibration shielding in the very low-frequency range.

  2. Re-Paying Attention to Visitor Behavior: A Re-Analysis using Meta-Analytic Techniques.

    PubMed

    Castro, Yone; Botella, Juan; Asensio, Mikel

    2016-01-01

    The present study describes a meta-analytic review of museum visitors' behavior. Although there is a large number of visitor studies available, their cumulative importance has not been determined due to the lack of rigorous methods to determine common causes of visitors' behaviors. We analyzed Serrell's (1998) database of 110 studies, defining a number of variables that measure visitors' behaviors in exhibition spaces which exceeded the most typical and obvious ones. We defined four indexes of effect size and obtained their combined estimates: average time per feature [ATF● = 0.43 (0.49; 0.37)], percentage of diligent visitors [dv● = 30% (0.39; 0.23)], inverse of velocity [Iv● = 4.07 min/100m2 (4.55; 3.59)], and stops per feature [SF● = 0.35 (0.38; 0.33)], and we analyzed the role of relevant moderating variables. Key findings indicate, for example, that the visiting time for each display element relates to the size of the exhibition and its newness, and visitor walking speed is higher in large exhibit areas. The indexes obtained in this study can be understood as references to be used for comparison with new evaluations. They may help to predict people's behavior and appreciation of new exhibitions, identifying important problems in museum designs, and providing new research tools for this field. PMID:27319781

  3. Crustal structure beneath two seismic stations in the Sunda-Banda arc transition zone derived from receiver function analysis

    SciTech Connect

    Syuhada; Hananto, Nugroho D.; Handayani, Lina; Puspito, Nanang T; Yudistira, Tedi; Anggono, Titi

    2015-04-24

    We analyzed receiver functions to estimate the crustal thickness and velocity structure beneath two stations of the Geofon (GE) network in the Sunda-Banda arc transition zone. The stations are located in two different tectonic regimes: Sumbawa Island (station PLAI) and Timor Island (station SOEI), representing the oceanic and continental characters, respectively. We analyzed teleseismic events of 80 earthquakes to calculate the receiver functions using the time-domain iterative deconvolution technique. We employed a 2D grid search (H-κ) algorithm based on the Moho interaction phases to estimate crustal thickness and Vp/Vs ratio. We also derived the S-wave velocity variation with depth beneath both stations by inverting the receiver functions. We found that beneath station PLAI the crustal thickness is about 27.8 km, with a Vp/Vs ratio of 2.01. As station SOEI is covered by very thick low-velocity sediment, causing an unstable solution for the inversion, we modified the initial velocity model by adding the sediment thickness estimated using the high-frequency content of the receiver functions in the H-κ stacking process. We obtained a crustal thickness of about 37 km with a Vp/Vs ratio of 2.2 beneath station SOEI. We suggest that the high Vp/Vs at station PLAI may indicate the presence of fluid ascending from the subducted plate to the volcanic arc, whereas the high Vp/Vs at station SOEI could be due to the presence of sediment and a mafic-rich composition in the upper crust and possibly related to the serpentinization process in the lower crust. We also suggest that the differences in velocity models and crustal thicknesses between stations PLAI and SOEI are consistent with their contrasting tectonic environments.
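
    The H-κ grid search referred to above is commonly implemented as a weighted stack of receiver-function amplitudes at the arrival times predicted for the Ps, PpPs and PpSs+PsPs phases; the sketch below is a minimal version of that idea, with illustrative default Vp, phase weights and grids rather than the values used in the study.

    ```python
    import numpy as np

    def hk_stack(rfs, p, dt, H_grid, kappa_grid, vp=6.3, w=(0.5, 0.3, 0.2)):
        """Grid search over crustal thickness H (km) and Vp/Vs ratio kappa (H-kappa stacking).

        rfs: (n_events, n_samples) radial receiver functions, time starting at 0 s
        p:   ray parameters in s/km for each event; dt: sample interval in s
        """
        s = np.zeros((len(H_grid), len(kappa_grid)))
        t = np.arange(rfs.shape[1]) * dt
        for i, H in enumerate(H_grid):
            for j, kappa in enumerate(kappa_grid):
                qs = np.sqrt((kappa / vp) ** 2 - p**2)   # vertical S slowness (Vs = vp/kappa)
                qp = np.sqrt((1.0 / vp) ** 2 - p**2)     # vertical P slowness
                tPs, tPpPs, tPpSs = H * (qs - qp), H * (qs + qp), 2.0 * H * qs
                for n in range(rfs.shape[0]):
                    amp = lambda tt: np.interp(tt, t, rfs[n])
                    s[i, j] += w[0] * amp(tPs[n]) + w[1] * amp(tPpPs[n]) - w[2] * amp(tPpSs[n])
        iH, ik = np.unravel_index(np.argmax(s), s.shape)
        return H_grid[iH], kappa_grid[ik], s
    ```

    For example, hk_stack(rfs, p, 0.1, np.arange(20.0, 50.0, 0.5), np.arange(1.6, 2.3, 0.01)) would return the best-fitting (H, κ) pair over those grids.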

  4. Crustal structure beneath two seismic stations in the Sunda-Banda arc transition zone derived from receiver function analysis

    NASA Astrophysics Data System (ADS)

    Syuhada; Hananto, Nugroho D.; Puspito, Nanang T.; Anggono, Titi; Handayani, Lina; Yudistira, Tedi

    2015-04-01

    We analyzed receiver functions to estimate the crustal thickness and velocity structure beneath two stations of the Geofon (GE) network in the Sunda-Banda arc transition zone. The stations are located in two different tectonic regimes: Sumbawa Island (station PLAI) and Timor Island (station SOEI), representing the oceanic and continental characters, respectively. We analyzed teleseismic events of 80 earthquakes to calculate the receiver functions using the time-domain iterative deconvolution technique. We employed a 2D grid search (H-κ) algorithm based on the Moho interaction phases to estimate crustal thickness and Vp/Vs ratio. We also derived the S-wave velocity variation with depth beneath both stations by inverting the receiver functions. We found that beneath station PLAI the crustal thickness is about 27.8 km, with a Vp/Vs ratio of 2.01. As station SOEI is covered by very thick low-velocity sediment, causing an unstable solution for the inversion, we modified the initial velocity model by adding the sediment thickness estimated using the high-frequency content of the receiver functions in the H-κ stacking process. We obtained a crustal thickness of about 37 km with a Vp/Vs ratio of 2.2 beneath station SOEI. We suggest that the high Vp/Vs at station PLAI may indicate the presence of fluid ascending from the subducted plate to the volcanic arc, whereas the high Vp/Vs at station SOEI could be due to the presence of sediment and a mafic-rich composition in the upper crust and possibly related to the serpentinization process in the lower crust. We also suggest that the differences in velocity models and crustal thicknesses between stations PLAI and SOEI are consistent with their contrasting tectonic environments.

  5. Pyrosequencing Analysis Reveals Changes in Intestinal Microbiota of Healthy Adults Who Received a Daily Dose of Immunomodulatory Probiotic Strains

    PubMed Central

    Plaza-Díaz, Julio; Fernández-Caballero, Jose Ángel; Chueca, Natalia; García, Federico; Gómez-Llorente, Carolina; Sáez-Lara, María José; Fontana, Luis; Gil, Ángel

    2015-01-01

    The colon microbiota plays a crucial role in human gastrointestinal health. Current attempts to manipulate the colon microbiota composition are aimed at finding remedies for various diseases. We have recently described the immunomodulatory effects of three probiotic strains (Lactobacillus rhamnosus CNCM I-4036, Lactobacillus paracasei CNCM I-4034, and Bifidobacterium breve CNCM I-4035). The goal of the present study was to analyze the compositions of the fecal microbiota of healthy adults who received one of these strains using high-throughput 16S ribosomal RNA gene sequencing. Bacteroides was the most abundant genus in the groups that received L. rhamnosus CNCM I-4036 or L. paracasei CNCM I-4034. The Shannon indices were significantly increased in these two groups. Our results also revealed a significant increase in the Lactobacillus genus after the intervention with L. rhamnosus CNCM I-4036. The initially different colon microbiota became homogeneous in the subjects who received L. rhamnosus CNCM I-4036. While some orders that were initially present disappeared after the administration of L. rhamnosus CNCM I-4036, other orders, such as Sphingobacteriales, Nitrospirales, Desulfobacterales, Thiotrichales, and Synergistetes, were detected after the intervention. In summary, our results show that the intake of these three bacterial strains induced changes in the colon microbiota. PMID:26016655

  6. Error performance analysis of FSO links with equal gain diversity receivers over double generalized gamma fading channels

    NASA Astrophysics Data System (ADS)

    Aminikashani, Mohammadreza; Kavehrad, Mohsen; Gu, Wenjun

    2016-02-01

    Free space optical (FSO) communication has been receiving increasing attention in recent years with its ability to achieve ultra-high data rates over unlicensed optical spectrum. A major performance-limiting factor in FSO systems is atmospheric turbulence, which severely degrades the system performance. To address this issue, multiple transmit and/or receive apertures can be employed, and the performance can be improved via diversity gain. In this paper, we investigate the bit error rate (BER) performance of FSO systems with transmit diversity or receive diversity with equal gain combining (EGC) over atmospheric turbulence channels described by the Double Generalized Gamma (Double GG) distribution. The Double GG distribution, recently proposed, generalizes many existing turbulence models in a closed-form expression and covers all turbulence conditions. Since the distribution function of a sum of Double GG random variables (RVs) appears in the BER expression, we first derive a closed-form upper bound for the distribution of the sum of Double GG distributed RVs. A novel union upper bound for the average BER, as well as the corresponding asymptotic expression, is then derived and evaluated in terms of Meijer G-functions.

  7. Copper and tin isotopic analysis of ancient bronzes for archaeological investigation: development and validation of a suitable analytical methodology.

    PubMed

    Balliana, Eleonora; Aramendía, Maite; Resano, Martin; Barbante, Carlo; Vanhaecke, Frank

    2013-03-01

    Although in many cases Pb isotopic analysis can be relied on for provenance determination of ancient bronzes, sometimes the use of "non-traditional" isotopic systems, such as those of Cu and Sn, is required. The work reported on in this paper aimed at revising the methodology for Cu and Sn isotope ratio measurements in archaeological bronzes via optimization of the analytical procedures in terms of sample pre-treatment, measurement protocol, precision, and analytical uncertainty. For Cu isotopic analysis, both Zn and Ni were investigated for their merit as internal standard (IS) relied on for mass bias correction. The use of Ni as IS seems to be the most robust approach as Ni is less prone to contamination, has a lower abundance in bronzes and an ionization potential similar to that of Cu, and provides slightly better reproducibility values when applied to NIST SRM 976 Cu isotopic reference material. The possibility of carrying out direct isotopic analysis without prior Cu isolation (with AG-MP-1 anion exchange resin) was investigated by analysis of CRM IARM 91D bronze reference material, synthetic solutions, and archaeological bronzes. Both procedures (Cu isolation/no Cu isolation) provide similar δ⁶⁵Cu results with similar uncertainty budgets in all cases (±0.02-0.04 per mil in delta units, k = 2, n = 4). Direct isotopic analysis of Cu therefore seems feasible, without evidence of spectral interference or matrix-induced effect on the extent of mass bias. For Sn, a separation protocol relying on TRU-Spec anion exchange resin was optimized, providing a recovery close to 100 % without on-column fractionation. Cu was recovered quantitatively together with the bronze matrix with this isolation protocol. Isotopic analysis of this Cu fraction provides δ⁶⁵Cu results similar to those obtained upon isolation using AG-MP-1 resin. This means that Cu and Sn isotopic analysis of bronze alloys can therefore be carried out after a single chromatographic
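
    The δ⁶⁵Cu values referred to above follow the usual delta notation relative to a standard (NIST SRM 976 in this work); the sketch below shows the conversion from measured ratios, with invented numerical ratios for illustration.

    ```python
    def delta_permil(ratio_sample: float, ratio_standard: float) -> float:
        """Delta value in per mil: (R_sample / R_standard - 1) * 1000."""
        return (ratio_sample / ratio_standard - 1.0) * 1000.0

    # Illustrative 65Cu/63Cu ratios (made up), standard = NIST SRM 976
    r_std = 0.44563
    r_sample = 0.44571
    print(f"delta65Cu = {delta_permil(r_sample, r_std):.2f} per mil")   # ~ +0.18 per mil
    ```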

  8. Development of an analytical method for the simultaneous analysis of MCPD esters and glycidyl esters in oil-based foodstuffs.

    PubMed

    Ermacora, Alessia; Hrnčiřík, Karel

    2014-01-01

    Substantial progress has been recently made in the development and optimisation of analytical methods for the quantification of 2-MCPD, 3-MCPD and glycidyl esters in oils and fats, and there are a few methods currently available that allow a reliable quantification of these contaminants in bulk oils and fats. On the other hand, no standard method for the analysis of foodstuffs has yet been established. The aim of this study was the development and validation of a new method for the simultaneous quantification of 2-MCPD, 3-MCPD and glycidyl esters in oil-based food products. The developed protocol includes a first step of liquid-liquid extraction and purification of the lipophilic substances of the sample, followed by the application of a previously developed procedure based on acid transesterification, for the indirect quantification of these contaminants in oils and fats. The method validation was carried out on food products (fat-based spreads, creams, margarine, mayonnaise) manufactured in-house, in order to control the manufacturing process and account for any food matrix-analyte interactions (the sample spiking was carried out on the single components used for the formulations rather than the final products). The method showed good accuracy (the recoveries ranged from 97% to 106% for bound 3-MCPD and 2-MCPD and from 88% to 115% for bound glycidol) and sensitivity (the LOD was 0.04 and 0.05 mg kg⁻¹ for bound MCPD and glycidol, respectively). Repeatability and reproducibility were satisfactory (RSD below 2% and 5%, respectively) for all analytes.

  9. Extreme Scale Visual Analytics

    SciTech Connect

    Steed, Chad A; Potok, Thomas E; Pullum, Laura L; Ramanathan, Arvind; Shipman, Galen M; Thornton, Peter E

    2013-01-01

    Given the scale and complexity of today's data, visual analytics is rapidly becoming a necessity rather than an option for comprehensive exploratory analysis. In this paper, we provide an overview of three applications of visual analytics for addressing the challenges of analyzing climate, text streams, and biosurveillance data. These systems feature varying levels of interaction and high performance computing technology integration to permit exploratory analysis of large and complex data of global significance.

  10. Spatial Analysis in Determination Of Flood Prone Areas Using Geographic Information System and Analytical Hierarchy Process at Sungai Sembrong's Catchment

    NASA Astrophysics Data System (ADS)

    Bukari, S. M.; Ahmad, M. A.; Wai, T. L.; Kaamin, M.; Alimin, N.

    2016-07-01

    The floods that struck the state of Johor in 2006 and 2007 and the East Coast in 2014 have had a great impact on flood management in Malaysia. Accordingly, this study was conducted to determine potential flooding areas, especially in the Batu Pahat district, which has experienced severe flooding. This objective is achieved by applying Geographic Information Systems (GIS) to flood-risk locations in the watershed area of Sungai Sembrong. The spatial analysis functions of GIS are capable of producing new information from the data stored in the system, while the Analytical Hierarchy Process (AHP) was used as a decision-making method for structuring the existing data. With the AHP method, the preparation and ranking of the criteria and parameters required in GIS are more systematic and easier to analyze. Through this study, a flood prone area in the watershed of Sungai Sembrong was identified with the help of GIS and AHP. The analysis tested two different cell sizes, 30 and 5, at two different water levels, and the results were displayed in GIS. The combined use of AHP and GIS was therefore effective and able to determine the potential flood plain areas in the watershed area of Sungai Sembrong.
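
    The AHP step described above reduces to deriving criterion weights from a pairwise comparison matrix and checking their consistency before the weighted overlay in GIS. A minimal sketch is given below; the criteria and comparison values are illustrative assumptions, not those used in the study.

```python
import numpy as np

# AHP criterion weighting from a pairwise comparison matrix, with Saaty's
# consistency check. Criteria and comparison values are illustrative
# assumptions (not the study's): rainfall, slope, land use, soil type.
criteria = ["rainfall", "slope", "land_use", "soil"]
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

# Principal eigenvector of the pairwise matrix gives the criterion weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency index CI = (lambda_max - n)/(n - 1); ratio CR = CI / RI.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
CR = CI / 0.90          # Saaty's random index RI = 0.90 for n = 4

print(dict(zip(criteria, weights.round(3))), "CR =", round(CR, 3))
# CR below ~0.10 indicates acceptable consistency; the weights would then be
# applied in a weighted overlay of reclassified GIS raster layers to rank
# flood-prone cells.
```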

  11. An Integrated Geovisual Analytics Framework for Analysis of Energy Consumption Data and Renewable Energy Potentials

    SciTech Connect

    Omitaomu, Olufemi A; Kramer, Ian S; Kodysh, Jeffrey B; Bhaduri, Budhendra L; Steed, Chad A; Karthik, Rajasekar; Nugent, Philip J; Myers, Aaron T

    2012-01-01

    We present an integrated geovisual analytics framework for utility consumers to interactively analyze and benchmark their energy consumption. The framework uses energy and property data already available with the utility companies and county governments, respectively. The motivation for the developed framework is the need for citizens to go beyond the conventional utility bill in understanding the patterns in their energy consumption. There is also a need for citizens to go beyond one-time improvements that are often not monitored and measured over time. Some of the features of the framework include the ability for citizens to visualize their historical energy consumption data along with weather data for their location. The quantity of historical energy data available is significantly more than what is available from utility bills. An overlay of the weather data provides users with a visual correlation between weather patterns and their energy consumption patterns. Another feature of the framework is the ability for citizens to compare their consumption, on an aggregated basis, to that of their peers: other citizens living in houses of similar size and age and within the same or different geographical boundaries, such as a subdivision, zip code, or county. Users could also compare their consumption to others based on the size of their family and other attributes. This feature could help citizens determine whether they are among the "best in class". The framework can also be used by the utility companies to better understand their customers and to plan their services. To make the framework easily accessible, it has been developed to be compatible with mobile consumer electronics devices.

  12. Analytical strategies for the direct mass spectrometric analysis of steroid and corticosteroid phase II metabolites.

    PubMed

    Antignac, Jean-Philippe; Brosseaud, Aline; Gaudin-Hirret, Isabelle; André, François; Bizec, Bruno Le

    2005-03-01

    The use of steroid hormones as growth promoters remains illegal in Europe. A classical approach used to control their utilization consists of measuring the parent drug in target biological matrices. However, this strategy may fail when the parent drug is subject to extensive metabolism. For urine and tissue samples, chemical or enzymatic hydrolysis is usually applied in order to deconjugate glucuronide and sulfate phase II metabolites. But this treatment leads to the loss of information such as the nature and relative proportions of the different conjugated forms, which can be useful, for example, to discriminate endogenous production from exogenous administration of natural hormones, or for other specific clinical or biochemical applications. For these purposes, direct measurement of conjugated metabolites using liquid chromatography-tandem mass spectrometry may represent a solution of choice. In this context, the mass spectrometric behavior of 14 steroid and corticosteroid phase II metabolites after electrospray ionization was investigated. Their fragmentation pathways in tandem mass spectrometry revealed some specificities within the different groups of conjugates. A specific acquisition program (MRM mode) was developed for the unambiguous identification of the studied reference compounds. A more generic method (Parent Scan mode) was also developed for fishing approaches consisting of monitoring several fragment ions typical of each conjugate class. A reversed-phase HPLC procedure was also proposed for efficient retention and separation of the studied compounds. Finally, a protocol based on quaternary amine SPE was developed, permitting the separation of free, glucuronide, and sulfate fractions. Preliminary results on biological samples demonstrated the suitability of this analytical strategy for the direct measurement of dexamethasone glucuronide and sulfate residues in bovine urine.

  13. Key analytic considerations in design and analysis of randomized controlled trials in osteoarthritis

    PubMed Central

    Losina, Elena; Ranstam, Jonas; Collins, Jamie; Schnitzer, Thomas J; Katz, Jeffrey N.

    2016-01-01

    Objective: To highlight methodologic challenges pertinent to design, analysis, and reporting of results of randomized clinical trials in OA and offer practical suggestions to overcome these challenges. Design: The topics covered in this paper include subject selection, randomization, approaches to handling missing data, subgroup analysis, sample size, and issues related to changing design mid-way through the study. Special attention is given to standardizing the reporting of results and economic analyses. Results: Key findings include the importance of blinding and concealment, the distinction between superiority and non-inferiority trials, the need to minimize missing data, and appropriate analysis and interpretation of subgroup effects. Conclusion: Investigators may use the findings and recommendations advanced in this paper to guide design and conduct of randomized controlled trials of interventions for osteoarthritis. PMID:25952341
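
    As an illustration of the sample-size issue listed above, a standard two-arm calculation for a continuous endpoint is sketched below; the formula is the conventional normal-approximation one, and the WOMAC-like example values are hypothetical, not drawn from the paper.

```python
from math import ceil
from statistics import NormalDist

# Two-arm parallel RCT, continuous endpoint, equal allocation, normal
# approximation. Example values (WOMAC-like difference and SD) are
# hypothetical, not drawn from the paper.
def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided alpha
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# Detect a 10-point difference with SD = 20: about 63 patients per arm
# before any allowance for attrition or missing data.
print(n_per_group(delta=10, sigma=20))
```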

  14. 2.5D S-wave velocity model of the TESZ area in northern Poland from receiver function analysis

    NASA Astrophysics Data System (ADS)

    Wilde-Piorko, Monika; Polkowski, Marcin; Grad, Marek

    2016-04-01

    Receiver function (RF) analysis locally provides the signature of sharp seismic discontinuities and information about the shear wave (S-wave) velocity distribution beneath the seismic station. The data recorded by the "13 BB Star" broadband seismic stations (Grad et al., 2015) and by a few PASSEQ broadband seismic stations (Wilde-Piórko et al., 2008) are analysed to investigate the crustal and upper mantle structure in the Trans-European Suture Zone (TESZ) in northern Poland. The TESZ is one of the most prominent suture zones in Europe, separating the young Palaeozoic platform from the much older Precambrian East European craton. Compilation of over thirty deep seismic refraction and wide-angle reflection profiles, vertical seismic profiling in over one hundred thousand boreholes, and magnetic, gravity, magnetotelluric and thermal methods allowed for the creation of a high-resolution 3D P-wave velocity model down to 60 km depth in the area of Poland (Grad et al. 2016). On the other hand, the receiver function methods give an opportunity to create the S-wave velocity model. A modified ray-tracing method (Langston, 1977) is used to calculate the response of the structure with dipping interfaces to an incoming plane wave with fixed slowness and back-azimuth. The 3D P-wave velocity model is interpolated to a 2.5D P-wave velocity model beneath each seismic station, and synthetic back-azimuthal sections of receiver functions are calculated for different Vp/Vs ratios. Densities are calculated with the combined formulas of Berteussen (1977) and Gardner et al. (1974). Next, the synthetic back-azimuthal sections of RF are compared with the observed back-azimuthal sections of RF for the "13 BB Star" and PASSEQ seismic stations to find the best 2.5D S-wave models down to 60 km depth. The National Science Centre Poland provided financial support for this work through NCN grant DEC-2011/02/A/ST10/00284.
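
    The density step mentioned above (Berteussen 1977; Gardner et al. 1974) is commonly implemented as a velocity-dependent switch between the two empirical relations. The abstract does not specify how the formulas are combined, so the cross-over point in the sketch below is purely illustrative.

```python
def density_from_vp(vp_km_s):
    """Density (g/cm^3) from P-wave velocity (km/s).

    Gardner et al. (1974): rho = 1.74 * Vp**0.25 (sediments)
    Berteussen (1977):     rho = 0.77 + 0.32 * Vp (crystalline crust/mantle)
    The cross-over at 5.5 km/s is an illustrative choice, not the study's
    documented scheme.
    """
    if vp_km_s < 5.5:
        return 1.74 * vp_km_s ** 0.25
    return 0.77 + 0.32 * vp_km_s

for vp in (3.0, 6.2, 8.0):
    print(f"Vp = {vp:.1f} km/s -> rho = {density_from_vp(vp):.2f} g/cm^3")
```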

  15. Is food allergen analysis flawed? Health and supply chain risks and a proposed framework to address urgent analytical needs.

    PubMed

    Walker, M J; Burns, D T; Elliott, C T; Gowland, M H; Mills, E N Clare

    2016-01-01

    Food allergy is an increasing problem for those affected, their families or carers, the food industry and for regulators. The food supply chain is highly vulnerable to fraud involving food allergens, risking fatalities and severe reputational damage to the food industry. Many facets are being pursued to ameliorate the difficulties including better food labelling and the concept of thresholds of elicitation of allergy symptoms as risk management tools. These efforts depend to a high degree on the ability reliably to detect and quantify food allergens; yet all current analytical approaches exhibit severe deficiencies that jeopardise accurate results being produced particularly in terms of the risks of false positive and false negative reporting. If we fail to realise the promise of current risk assessment and risk management of food allergens through lack of the ability to measure food allergens reproducibly and with traceability to an international unit of measurement, the analytical community will have failed a significant societal challenge. Three distinct but interrelated areas of analytical work are urgently needed to address the substantial gaps identified: (a) a coordinated international programme for the production of properly characterised clinically relevant reference materials and calibrants for food allergen analysis; (b) an international programme to widen the scope of proteomics and genomics bioinformatics for the genera containing the major allergens to address problems in ELISA, MS and DNA methods; (c) the initiation of a coordinated international programme leading to reference methods for allergen proteins that provide results traceable to the SI. This article describes in more detail food allergy, the risks of inapplicable or flawed allergen analyses with examples and a proposed framework, including clinically relevant incurred allergen concentrations, to address the currently unmet and urgently required analytical requirements. Support for the

  16. Analysis of environmental contamination resulting from catastrophic incidents: part 2. Building laboratory capability by selecting and developing analytical methodologies.

    PubMed

    Magnuson, Matthew; Campisano, Romy; Griggs, John; Fitz-James, Schatzi; Hall, Kathy; Mapp, Latisha; Mullins, Marissa; Nichols, Tonya; Shah, Sanjiv; Silvestri, Erin; Smith, Terry; Willison, Stuart; Ernst, Hiba

    2014-11-01

    Catastrophic incidents can generate a large number of samples of analytically diverse types, including forensic, clinical, environmental, food, and others. Environmental samples include water, wastewater, soil, air, urban building and infrastructure materials, and surface residue. Such samples may arise not only from contamination from the incident but also from the multitude of activities surrounding the response to the incident, including decontamination. This document summarizes a range of activities to help build laboratory capability in preparation for sample analysis following a catastrophic incident, including selection and development of fit-for-purpose analytical methods for chemical, biological, and radiological contaminants. Fit-for-purpose methods are those which have been selected to meet project specific data quality objectives. For example, methods could be fit for screening contamination in the early phases of investigation of contamination incidents because they are rapid and easily implemented, but those same methods may not be fit for the purpose of remediating the environment to acceptable levels when a more sensitive method is required. While the exact data quality objectives defining fitness-for-purpose can vary with each incident, a governing principle of the method selection and development process for environmental remediation and recovery is based on achieving high throughput while maintaining high quality analytical results. This paper illustrates the result of applying this principle, in the form of a compendium of analytical methods for contaminants of interest. The compendium is based on experience with actual incidents, where appropriate and available. This paper also discusses efforts aimed at adaptation of existing methods to increase fitness-for-purpose and development of innovative methods when necessary. The contaminants of interest are primarily those potentially released through catastrophes resulting from malicious activity

  17. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis.

    PubMed

    Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-08-05

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.
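
    For reference, the I, Q and U (and V) Stokes parameters that the polarimeter delivers simultaneously can be defined from two orthogonal linear polarization components of the incoming field; the sketch below is the textbook definition, not the TGI processing pipeline.

```python
import numpy as np

# Textbook Stokes parameters from two orthogonal linear polarization
# components Ex, Ey (time-averaged); sign conventions for V vary. This is
# a generic definition, not the TGI receiver pipeline, which derives the
# parameters from correlated detector outputs.
def stokes(ex, ey):
    I = np.mean(np.abs(ex) ** 2 + np.abs(ey) ** 2)
    Q = np.mean(np.abs(ex) ** 2 - np.abs(ey) ** 2)
    U = np.mean(2.0 * np.real(ex * np.conj(ey)))
    V = np.mean(-2.0 * np.imag(ex * np.conj(ey)))
    return I, Q, U, V

# A fully linearly polarized test signal (all power in Ex): I == Q, U = V = 0.
rng = np.random.default_rng(0)
ex = rng.normal(size=10_000) + 1j * rng.normal(size=10_000)
ey = np.zeros_like(ex)
print(stokes(ex, ey))
```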

  18. Analytical solutions in R + qR^n cosmology from singularity analysis

    NASA Astrophysics Data System (ADS)

    Paliathanasis, Andronikos; Leach, P. G. L.

    2016-08-01

    The integrability of higher-order theories of gravity is of importance in determining the properties of these models and hence their viability as models of reality. An important tool in the establishment of integrability is singularity analysis. We apply this analysis to the fourth-order theory of gravity f(R) = R + qR^n to establish those values of the free parameters q and n for which integrability in this sense exists. As a preliminary we examine the well-known case of n = 4/3.

  19. Black Boxes in Analytical Chemistry: University Students' Misconceptions of Instrumental Analysis

    ERIC Educational Resources Information Center

    Carbo, Antonio Domenech; Adelantado, Jose Vicente Gimeno; Reig, Francisco Bosch

    2010-01-01

    Misconceptions of chemistry and chemical engineering university students concerning instrumental analysis have been established from coordinated tests, tutorial interviews and laboratory lessons. Misconceptions can be divided into: (1) formal, involving specific concepts and formulations within the general frame of chemistry; (2)…

  1. A Discourse Analytic Approach to Video Analysis of Teaching: Aligning Desired Identities with Practice

    ERIC Educational Resources Information Center

    Schieble, Melissa; Vetter, Amy; Meacham, Mark

    2015-01-01

    The authors present findings from a qualitative study of an experience that supports teacher candidates to use discourse analysis and positioning theory to analyze videos of their practice during student teaching. The research relies on the theoretical concept that learning to teach is an identity process. In particular, teachers construct and…

  2. Researcher Effects on Mortality Salience Research: A Meta-Analytic Moderator Analysis

    ERIC Educational Resources Information Center

    Yen, Chih-Long; Cheng, Chung-Ping

    2013-01-01

    A recent meta-analysis of 164 terror management theory (TMT) papers indicated that mortality salience (MS) yields substantial effects (r = 0.35) on worldview and self-esteem-related dependent variables (B. L. Burke, A. Martens, & E. H. Faucher, 2010). This study reanalyzed the data to explore the researcher effects of TMT. By cluster-analyzing…

  3. A Comparative Analysis of Two Order Analytic Techniques: Assessing Item Hierarchies in Real and Simulated Data.

    ERIC Educational Resources Information Center

    Chevalaz, Gerard M.; Tatsuoka, Kikumi K.

    Two order theoretic techniques were presented and compared. Ordering theory of Krus and Bart (1974) and an extended Takeya's item relational structure analysis (IRS) by Tatsuoka and Tatsuoka (1981) were used to extract the hierarchical item structure from three datasets. Directed graphs were constructed and both methods were assessed as to how…

  4. An analytical platform for mass spectrometry-based identification and chemical analysis of RNA in ribonucleoprotein complexes.

    PubMed

    Taoka, Masato; Yamauchi, Yoshio; Nobe, Yuko; Masaki, Shunpei; Nakayama, Hiroshi; Ishikawa, Hideaki; Takahashi, Nobuhiro; Isobe, Toshiaki

    2009-11-01

    We describe here a mass spectrometry (MS)-based analytical platform of RNA, which combines direct nano-flow reversed-phase liquid chromatography (RPLC) on a spray tip column and a high-resolution LTQ-Orbitrap mass spectrometer. Operating RPLC under a very low flow rate with volatile solvents and MS in the negative mode, we could estimate highly accurate mass values sufficient to predict the nucleotide composition of an approximately 21-nucleotide small interfering RNA, detect post-transcriptional modifications in yeast tRNA, and perform collision-induced dissociation/tandem MS-based structural analysis of nucleolytic fragments of RNA at a sub-femtomole level. Importantly, the method allowed the identification and chemical analysis of small RNAs in ribonucleoprotein (RNP) complex, such as the pre-spliceosomal RNP complex, which was pulled down from cultured cells with a tagged protein cofactor as bait. We have recently developed a unique genome-oriented database search engine, Ariadne, which allows tandem MS-based identification of RNAs in biological samples. Thus, the method presented here has broad potential for automated analysis of RNA; it complements conventional molecular biology-based techniques and is particularly suited for simultaneous analysis of the composition, structure, interaction, and dynamics of RNA and protein components in various cellular RNP complexes.

  5. An analytical platform for mass spectrometry-based identification and chemical analysis of RNA in ribonucleoprotein complexes

    PubMed Central

    Taoka, Masato; Yamauchi, Yoshio; Nobe, Yuko; Masaki, Shunpei; Nakayama, Hiroshi; Ishikawa, Hideaki; Takahashi, Nobuhiro; Isobe, Toshiaki

    2009-01-01

    We describe here a mass spectrometry (MS)-based analytical platform of RNA, which combines direct nano-flow reversed-phase liquid chromatography (RPLC) on a spray tip column and a high-resolution LTQ-Orbitrap mass spectrometer. Operating RPLC under a very low flow rate with volatile solvents and MS in the negative mode, we could estimate highly accurate mass values sufficient to predict the nucleotide composition of a ∼21-nucleotide small interfering RNA, detect post-transcriptional modifications in yeast tRNA, and perform collision-induced dissociation/tandem MS-based structural analysis of nucleolytic fragments of RNA at a sub-femtomole level. Importantly, the method allowed the identification and chemical analysis of small RNAs in ribonucleoprotein (RNP) complex, such as the pre-spliceosomal RNP complex, which was pulled down from cultured cells with a tagged protein cofactor as bait. We have recently developed a unique genome-oriented database search engine, Ariadne, which allows tandem MS-based identification of RNAs in biological samples. Thus, the method presented here has broad potential for automated analysis of RNA; it complements conventional molecular biology-based techniques and is particularly suited for simultaneous analysis of the composition, structure, interaction, and dynamics of RNA and protein components in various cellular RNP complexes. PMID:19740761

  6. An analytical framework for the design and comparative analysis of galloping energy harvesters under quasi-steady aerodynamics

    NASA Astrophysics Data System (ADS)

    Bibo, Amin; Daqaq, Mohammed F.

    2015-09-01

    This paper presents a generalized formulation, analysis, and optimization of energy harvesters subjected to galloping and base excitations. The harvester consists of a cantilever beam with a bluff body attached at the free end. A nondimensional lumped-parameter model which accounts for the combined loading and different electro-mechanical transduction mechanisms is presented. The aerodynamic loading is modeled using the quasi-steady assumption with a polynomial approximation. A nonlinear analysis is carried out and an approximate analytical solution is obtained. A dimensional analysis is performed to identify the important parameters that affect the system's response. The analysis of the response is divided into two parts. The first treats a harvester subjected to only galloping excitations. It is shown that, for a given shape of the bluff body and under quasi-steady flow conditions, the harvester's dimensionless response can be described by a single universal curve irrespective of the geometric, mechanical, and electrical design parameters of the harvester. In the second part, a harvester under concurrent galloping and base excitations is analyzed. It is shown that the total output power depends on three dimensionless loading parameters: wind speed, base excitation amplitude, and excitation frequency. The response curves of the harvester are generated in terms of the loading parameters. These curves can serve as a complete design guide for scaling and optimizing the performance of galloping-based harvesters.
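
    A minimal lumped model in the spirit of the formulation described above is sketched below, assuming a piezoelectric transduction mechanism and a cubic quasi-steady galloping force; all parameter values are arbitrary placeholders rather than the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic 1-DOF piezoelectric galloping harvester under quasi-steady
# aerodynamics (cubic polynomial). All parameter values are arbitrary
# placeholders, not the paper's.
m, c, k = 0.02, 0.01, 80.0           # mass (kg), damping (N s/m), stiffness (N/m)
theta, Cp, R = 1e-4, 1e-7, 1e6       # coupling (N/V), capacitance (F), load (ohm)
rho, U, D, L = 1.2, 5.0, 0.02, 0.1   # air density, wind speed, width, length
a1, a3 = 2.3, 18.0                   # quasi-steady polynomial coefficients

def rhs(t, y):
    x, xdot, v = y
    # Quasi-steady galloping force: cubic polynomial in tip velocity / wind speed.
    Fa = 0.5 * rho * U**2 * D * L * (a1 * (xdot / U) - a3 * (xdot / U) ** 3)
    xddot = (Fa - c * xdot - k * x - theta * v) / m
    vdot = (theta * xdot - v / R) / Cp
    return [xdot, xddot, vdot]

sol = solve_ivp(rhs, (0.0, 60.0), [1e-3, 0.0, 0.0], max_step=2e-3)
v = sol.y[2]
power = np.mean(v[v.size // 2:] ** 2 / R)   # average harvested power, second half
print(f"approximate mean harvested power: {power:.2e} W")
```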

  7. Non-linear waveform analysis for water-layer response and its application to high-frequency receiver function analysis using OBS array

    NASA Astrophysics Data System (ADS)

    Akuhara, Takeshi; Mochizuki, Kimihiro; Kawakatsu, Hitoshi; Takeuchi, Nozomu

    2016-09-01

    Determination of a response of the sea water column to teleseismic plane wave is important to suppress adverse effects of water reverberations in calculating receiver functions (RFs) using ocean-bottom seismometer (OBS) records. We present a novel non-linear waveform analysis method using the simulated annealing algorithm to determine such a water-layer response recorded by an OBS array. We then demonstrate its usefulness for the RF estimation through its application to synthetic and observed data. Synthetic experiments suggest that the water-layer response constrained in this way has a potential to improve RFs of OBS records drastically even in the high-frequency range (to 4 Hz). By applying it to data observed by the OBS array around the Kii Peninsula, southwestern Japan, we identified a low-velocity zone at the top of the subducting Philippine Sea plate. This zone may represent the incoming fluid-rich sediment layer that has been reported by active-source seismic survey.

  8. Research on matching area selection criteria for gravity gradient navigation based on principal component analysis and analytic hierarchy process

    NASA Astrophysics Data System (ADS)

    Xiong, Ling; Li, Kaihan; Tang, Jianqiao; Ma, Jie

    2015-12-01

    The selection of the matching area is the foundation of gravity gradient aided navigation. In this paper, a gravity gradient matching area selection criterion is proposed, based on principal component analysis (PCA) and the analytic hierarchy process (AHP). Firstly, the features of the gravity gradient are extracted and nine gravity gradient characteristic parameters are obtained. Secondly, combining PCA with AHP, a PA model is built and the nine characteristic parameters are fused based on it. Finally, the gravity gradient matching area selection criterion is given. By using this criterion, a gravity gradient area can be divided into matching and non-matching areas. The simulation results show that the gravity gradient positioning performance in the selected matching areas is superior to that in the non-matching areas, with a matching rate greater than 90% and a position error of less than one gravity gradient grid cell.
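
    A generic sketch of PCA-based fusion of candidate-area parameters into a single suitability score is given below. It is an illustration on assumed placeholder data, not the paper's PA model; in the paper's approach, AHP-derived weights would replace or modify the variance-based weights used here.

```python
import numpy as np

# Fuse several candidate-area parameters into one matching-suitability score
# with PCA. Random placeholder data stand in for the nine gravity gradient
# characteristic parameters; this is a generic illustration, not the paper's
# PA model.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 9))                 # 500 grid cells x 9 parameters

Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.85)) + 1  # ~85% variance
scores = Z @ eigvecs[:, :k]                   # principal component scores
weights = eigvals[:k] / eigvals[:k].sum()     # variance-based weights
suitability = scores @ weights

matching_area = suitability > np.quantile(suitability, 0.9)  # top 10% of cells
print(matching_area.sum(), "cells flagged as candidate matching areas")
```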

  9. The analysis of non-linear dynamic behavior (including snap-through) of postbuckled plates by simple analytical solution

    NASA Technical Reports Server (NTRS)

    Ng, C. F.

    1988-01-01

    Static postbuckling and nonlinear dynamic analysis of plates are usually accomplished by multimode analyses, although the methods are complicated and do not give a straightforward understanding of the nonlinear behavior. Assuming a single-mode transverse displacement, a simple formula is derived for the transverse load-displacement relationship of a plate under in-plane compression. The formula is used to derive a simple analytical expression for the static postbuckling displacement and the nonlinear dynamic responses of postbuckled plates under sinusoidal or random excitation. Regions with softening and hardening spring behavior are identified. Also, the highly nonlinear motion of snap-through and its effects on the overall dynamic response can be easily interpreted using the single-mode formula. Theoretical results are compared with experimental results obtained from a buckled aluminum panel under discrete-frequency and broadband point excitation. Some important effects of the snap-through motion on the dynamic response of the postbuckled plates are found.
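
    The single-mode reduction described above leads to a bistable, Duffing-type oscillator once the in-plane load exceeds the buckling load; the sketch below illustrates snap-through in such a model with arbitrary placeholder parameters, not the reference's values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-mode post-buckled plate as a bistable (Duffing-type) oscillator:
# above the buckling load the linear stiffness is negative and two static
# equilibria +/- w0 exist; snap-through is the jump between them under
# strong forcing. All values are illustrative, not the reference's.
omega0, zeta, beta = 2 * np.pi * 50, 0.02, 1e8   # rad/s, damping ratio, 1/(m^2 s^2)
P_over_Pcr = 1.5                                 # in-plane load ratio (> 1)
k_lin = omega0**2 * (1 - P_over_Pcr)             # negative linear stiffness term
w0 = np.sqrt(-k_lin / beta)                      # static post-buckling deflection (m)
F = 0.8 * abs(k_lin) * w0                        # forcing amplitude (per unit mass)

def rhs(t, y):
    w, wdot = y
    return [wdot,
            -2 * zeta * omega0 * wdot - k_lin * w - beta * w**3
            + F * np.sin(0.8 * omega0 * t)]

sol = solve_ivp(rhs, (0.0, 2.0), [w0, 0.0], max_step=1e-4)
w = sol.y[0]
# Sign changes of w indicate snap-through between the two buckled states.
print(f"deflection range: {w.min():+.4f} m to {w.max():+.4f} m,",
      f"zero crossings: {np.count_nonzero(np.diff(np.sign(w)))}")
```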

  10. A Multi-facetted Visual Analytics Tool for Exploratory Analysis of Human Brain and Function Datasets

    PubMed Central

    Angulo, Diego A.; Schneider, Cyril; Oliver, James H.; Charpak, Nathalie; Hernandez, Jose T.

    2016-01-01

    Brain research typically requires large amounts of data from different sources, and often of a different nature. The use of different software tools adapted to the nature of each data source can make research work cumbersome and time consuming. It follows that data is often not used to its fullest potential, thus limiting exploratory analysis. This paper presents an ancillary software tool called BRAVIZ that integrates interactive visualization with real-time statistical analyses, facilitating access to multi-facetted neuroscience data and automating many cumbersome and error-prone tasks required to explore such data. Rather than relying on abstract numerical indicators, BRAVIZ emphasizes brain images as the main object of the analysis process of individuals or groups. BRAVIZ facilitates exploration of trends or relationships to gain an integrated view of the phenomena studied, thus motivating discovery of new hypotheses. A case study is presented that incorporates brain structure and function outcomes together with different types of clinical data. PMID:27601990

  11. A Multi-facetted Visual Analytics Tool for Exploratory Analysis of Human Brain and Function Datasets.

    PubMed

    Angulo, Diego A; Schneider, Cyril; Oliver, James H; Charpak, Nathalie; Hernandez, Jose T

    2016-01-01

    Brain research typically requires large amounts of data from different sources, and often of a different nature. The use of different software tools adapted to the nature of each data source can make research work cumbersome and time consuming. It follows that data is often not used to its fullest potential, thus limiting exploratory analysis. This paper presents an ancillary software tool called BRAVIZ that integrates interactive visualization with real-time statistical analyses, facilitating access to multi-facetted neuroscience data and automating many cumbersome and error-prone tasks required to explore such data. Rather than relying on abstract numerical indicators, BRAVIZ emphasizes brain images as the main object of the analysis process of individuals or groups. BRAVIZ facilitates exploration of trends or relationships to gain an integrated view of the phenomena studied, thus motivating discovery of new hypotheses. A case study is presented that incorporates brain structure and function outcomes together with different types of clinical data.

  12. An investigation on the analytical potential of polymerized liposomes bound to lanthanide ions for protein analysis.

    PubMed

    Santos, Marina; Roy, Bidhan C; Goicoechea, Héctor; Campiglia, Andres D; Mallik, Sanku

    2004-09-01

    We present a promising approach to protein sensing based on Eu3+ ions incorporated into polymerized liposomes. The sensitization of Eu3+ is accomplished with 5-aminosalicylic acid, which provides energy transfer for a stable reference signal and a wide wavelength excitation range free from protein interference. The lipophilic character of polymerized liposomes provides the appropriate platform for protein interaction with the lanthanide ion. Quantitative analysis is based on the linear relationship between the luminescence signal of Eu3+ and protein concentration. Because no spectral shift of the lanthanide luminescence is observed upon protein interaction, qualitative analysis is based on the luminescence lifetime of polymerized liposomes. This parameter, which changes significantly upon protein-liposome interaction, follows a well-behaved single-exponential decay that might be useful for protein identification. PMID:15327334

  13. Analytical techniques for reduction of computational effort in reflector antenna analysis

    NASA Astrophysics Data System (ADS)

    Franceschetti, G.

    Techniques used for computing the radiation integral in reflector antenna analysis are briefly reviewed. The techniques discussed include numerical approaches, such as Monte Carlo multidimensional integration and the Ludwig method (1968), asymptotic solutions, expansion techniques, and the sampling approach. It is pointed out that none of the techniques discussed provides optimum results in the full angular range 0-180 deg, and consequently different techniques are generally used in different angular sectors.

  14. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    PubMed Central

    Casas, Francisco J.; Ortiz, David; Villa, Enrique; Cano, Juan L.; Cagigas, Jaime; Pérez, Ana R.; Aja, Beatriz; Terán, J. Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-01-01

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906

  15. Suicide Attempt Characteristics Among Veterans and Active-Duty Service Members Receiving Mental Health Services: A Pooled Data Analysis

    PubMed Central

    Villatte, Jennifer L.; O’Connor, Stephen S.; Leitner, Rebecca; Kerbrat, Amanda H.; Johnson, Lora L.; Gutierrez, Peter M.

    2015-01-01

    Past suicidal behaviors are among the strongest and most consistent predictors of eventual suicide and may be particularly salient in military suicide. The current study compared characteristics of suicide attempts in veterans (N = 746) and active-duty service members (N = 1,013) receiving treatment for acute suicide risk. Baseline data from six randomized controlled trials were pooled and analyzed using robust regression. Service members had greater odds of having attempted suicide relative to veterans, though there were no differences in number of attempts made. Service members also had higher rates of premilitary suicide attempts and nonsuicidal self-injury (NSSI). Veterans disproportionately attempted suicide by means of overdose. In veterans, combat deployment was associated with lower odds of lifetime suicide attempt, while history of NSSI was associated with greater attempt odds. Neither was significantly associated with lifetime suicide attempt in service members. Implications for suicide assessment and treatment are discussed. PMID:26740909

  16. Erosion of the continental lithosphere at the cusps of the Calabrian arc: Evidence from S receiver functions analysis

    NASA Astrophysics Data System (ADS)

    Miller, Meghan S.; Piana Agostinetti, Nicola

    2011-12-01

    Mediterranean tectonics has been characterized by an irregular, complex temporal evolution, with episodic rollback and retreat of the subducted plate followed by periods of slow trench migration. To provide insight into the geodynamics of the Calabrian arc, we image the characteristics and lithospheric structure of the convergent Apulian and Hyblean forelands at the cusps of the arc. Specifically, we investigate the crustal and lithospheric thicknesses using teleseismic S-to-p converted phases, applied to the Adria-Africa plate margin for the first time. We find that the Moho in the Apulian foreland is nearly flat at ~30 km depth, consistent with previous P receiver function results, and that the Hyblean crustal thickness is more complex, which can be understood in terms of the nature of the individual pieces of carbonate platform and pelagic sediments that make up the Hyblean platform. The lithospheric thicknesses range between 70-120 km beneath Apulia and 70-90 km beneath Sicily. The lithosphere of the forelands at each end of the Calabrian arc is continental in nature, buoyant compared with the subducting oceanic lithosphere, and has previously been interpreted as mostly undeformed carbonate platforms. Our receiver function images also show evidence of lithospheric erosion and thinning close to Mt. Etna and Mt. Vulture, two volcanoes which have been associated with asthenospheric upwelling and mantle flow around the sides of the slab. We suggest that as the continental lithosphere resists being subducted, it is being thermo-mechanically modified by toroidal flow around the edges of the subducting oceanic lithosphere of the Calabrian arc.

  17. Matched Pair Analysis of Race or Ethnicity in Outcomes of Head and Neck Cancer Patients Receiving Similar Multidisciplinary Care

    PubMed Central

    Chen, Leon M.; Li, Guojun; Reitzel, Lorraine R.; Pytynia, Kristen B.; Zafereo, Mark E.; Wei, Qingyi; Sturgis, Erich M.

    2009-01-01

    It is unknown whether population-level racial or ethnic disparities in mortality from squamous cell carcinoma of the head and neck (SCCHN) also occur in the setting of standardized multidisciplinary-team directed care. Therefore, we conducted a matched-pair study that controlled for several potentially confounding prognostic variables to assess whether a difference in survival exists for African-American or Hispanic-American compared with non-Hispanic white American SCCHN patients receiving similar care. Matched pairs were 81 African-American case and 81 non-Hispanic white control patients and 100 Hispanic-American cases and 100 matched non-Hispanic white controls selected from 1833 patients of a prospective epidemiologic study of incident SCCHN within a single, large multidisciplinary cancer center. Matching variables included age (± 10 years), sex, smoking status (never versus ever), site, tumor stage (T1–2 versus T3–4), nodal status (negative versus positive), and treatment. Cases and controls were not significantly different in proportions of comorbidity score, alcohol use, subsite distribution, overall stage, or tumor grade. Matched-pair and log-rank analyses showed no significant differences between cases and controls in recurrence-free, disease-specific, or overall survival. Site-specific analyses suggested that more-aggressive oropharyngeal cancers occurred more frequently in minority than non-Hispanic white patients. We conclude that minority and non-Hispanic white SCCHN patients receiving similar multidisciplinary-team directed care at a tertiary cancer center have similar survival results overall. These results encourage reducing health disparities in SCCHN through public-health efforts to improve access to multidisciplinary oncologic care (and to preventive measures) and through individual clinician efforts to make the best multidisciplinary cancer treatment choices available for their minority patients. The subgroup finding suggests a biologically

  18. Analytical Methods for Malachite Green: Completion Report: Malachite Green Analysis in Water.

    SciTech Connect

    Allen, John L.; Gofus, Jane E.; Meinertz, Jeffery R.

    1991-06-01

    Malachite green is a known teratogen and therefore its use is limited to nonfood fish under an Investigational New Animal Drug permit (INAD), number 2573. Although a charcoal adsorption column was developed to remove malachite green from hatchery water, INAD compliance requires that the malachite green residue concentrations in any effluent from hatcheries using the chemical be quantified. Therefore, we developed a method for the analysis of malachite green residues in water. Enrichment of the residues of malachite green in water on a diol column followed by High Performance Liquid Chromatographic (HPLC) analysis gives a minimum sensitivity of less than 10 ppb for the chemical. When combined with post-column oxidation using a lead oxide post-column reactor, the procedure can be used for the simultaneous analysis of malachite green in its leuco form, a decomposition product of the dye, as well as its chromatic form. Recovery of the leuco form is pH dependent and water samples should be adjusted to pH 6 to optimize recovery of this form. Water samples spiked with malachite green were concentrated on a diol column followed by elution with 0.05 M p-toluene sulfonic acid in methanol. The methanol eluates were analyzed by HPLC. Pond water samples spiked with malachite green and leuco malachite green yielded average recoveries of 95.4% for malachite green and 57.3% for leuco malachite green. Tap water samples spiked with the carbinol form of malachite green gave average recoveries of 98.6%. The method is very sensitive and is capable of detecting malachite green residues in water at less than 10 ppb. Fish culturists, who cannot find an effective replacement for malachite green, can utilize the method to ensure that their effluents comply with INAD regulations. 13 refs., 2 figs., 7 tabs.

  1. Nuclear analytical chemistry

    SciTech Connect

    Brune, D.; Forkman, B.; Persson, B.

    1984-01-01

    This book covers the general theories and techniques of nuclear chemical analysis, directed at applications in analytical chemistry, nuclear medicine, radiophysics, agriculture, environmental sciences, geological exploration, industrial process control, etc. The main principles of nuclear physics and nuclear detection on which the analysis is based are briefly outlined. An attempt is made to emphasise the fundamentals of activation analysis, detection and activation methods, as well as their applications. The book provides guidance in analytical chemistry, agriculture, environmental and biomedical sciences, etc. The contents include: the nuclear periodic system; nuclear decay; nuclear reactions; nuclear radiation sources; interaction of radiation with matter; principles of radiation detectors; nuclear electronics; statistical methods and spectral analysis; methods of radiation detection; neutron activation analysis; charged particle activation analysis; photon activation analysis; sample preparation and chemical separation; nuclear chemical analysis in biological and medical research; the use of nuclear chemical analysis in the field of criminology; nuclear chemical analysis in environmental sciences, geology and mineral exploration; and radiation protection.

  2. A comprehensive analytical model of rotorcraft aerodynamics and dynamics. Part 1: Analysis development

    NASA Technical Reports Server (NTRS)

    Johnson, W.

    1980-01-01

    Structural, inertia, and aerodynamic models were combined to form a comprehensive model of rotor aerodynamics and dynamics that is applicable to a wide range of problems and a wide class of vehicles. A digital computer program is used to calculate rotor performance, loads, and noise; helicopter vibration and gust response; flight dynamics and handling qualities; and system aeroelastic stability. The analysis is intended for use in the design, testing, and evaluation of rotors and rotorcraft, and to be a basis for further development of rotary wing theories.

  3. Analytical laboratory quality assurance guidance in support of EM environmental sampling and analysis activities

    SciTech Connect

    Not Available

    1994-05-01

    This document introduces QA guidance pertaining to design and implementation of laboratory procedures and processes for collecting DOE Environmental Restoration and Waste Management (EM) ESAA (environmental sampling and analysis activities) data. It addresses several goals: identifying key laboratory issues and program elements to EM HQ and field office managers; providing non-prescriptive guidance; and introducing environmental data collection program elements for EM-263 assessment documents and programs. The guidance describes the implementation of laboratory QA elements within a functional QA program (development of the QA program and data quality objectives are not covered here).

  4. SASfit: a tool for small-angle scattering data analysis using a library of analytical expressions

    PubMed Central

    Breßler, Ingo; Kohlbrecher, Joachim; Thünemann, Andreas F.

    2015-01-01

    SASfit is one of the mature programs for small-angle scattering data analysis and has been available for many years. This article describes the basic data processing and analysis workflow along with recent developments in the SASfit program package (version 0.94.6). They include (i) advanced algorithms for reduction of oversampled data sets, (ii) improved confidence assessment in the optimized model parameters and (iii) a flexible plug-in system for custom user-provided models. A scattering function of a mass fractal model of branched polymers in solution is provided as an example for implementing a plug-in. The new SASfit release is available for major platforms such as Windows, Linux and MacOS. To facilitate usage, it includes comprehensive indexed documentation as well as a web-based wiki for peer collaboration and online videos demonstrating basic usage. The use of SASfit is illustrated by interpretation of the small-angle X-ray scattering curves of monomodal gold nanoparticles (NIST reference material 8011) and bimodal silica nanoparticles (EU reference material ERM-FD-102). PMID:26500467

  5. Gray Matter Features of Reading Disability: A Combined Meta-Analytic and Direct Analysis Approach

    PubMed Central

    Berninger, Virginia W.; Gebregziabher, Mulugeta; Tsu, Loretta

    2016-01-01

    Meta-analysis of voxel-based morphometry dyslexia studies and direct analysis of 293 reading disability and control cases from six different research sites were performed to characterize defining gray matter features of reading disability. These analyses demonstrated consistently lower gray matter volume in left posterior superior temporal sulcus/middle temporal gyrus regions and left orbitofrontal gyrus/pars orbitalis regions. Gray matter volume within both of these regions significantly predicted individual variation in reading comprehension after correcting for multiple comparisons. These regional gray matter differences were observed across published studies and in the multisite dataset after controlling for potential age and gender effects, and despite increased anatomical variance in the reading disability group, but were not significant after controlling for total gray matter volume. Thus, the orbitofrontal and posterior superior temporal sulcus gray matter findings are relatively reliable effects that appear to be dependent on cases with low total gray matter volume. The results are considered in the context of genetics studies linking orbitofrontal and superior temporal sulcus regions to alleles that confer risk for reading disability. PMID:26835509

  6. Development of Analytical Protocols For Organics and Isotopes Analysis on the 2009 MARS Science Laboratory.

    NASA Technical Reports Server (NTRS)

    Mahaffy, P. R.

    2006-01-01

    The Mars Science Laboratory, under development for launch in 2009, is designed to explore and quantitatively assess a local region on Mars as a potential habitat for present or past life. Its ambitious goals are to (1) assess the past or present biological potential of the target environment, (2) characterize the geology and geochemistry at the MSL landing site, and (3) investigate planetary processes that influence habitability. The planned capabilities of the rover payload will enable a comprehensive search for organic molecules, a determination of definitive mineralogy of sampled rocks and fines, chemical and isotopic analysis of both atmospheric and solid samples, and precision isotope measurements of several volatile elements. A range of contact and remote surface and subsurface survey tools will establish context for these measurements and will facilitate sample identification and selection. The Sample Analysis at Mars (SAM) suite of MSL addresses several of the mission's core measurement goals. It includes a gas chromatograph, a mass spectrometer, and a tunable laser spectrometer. These instruments will be designed to analyze either atmospheric samples or gases extracted from solid-phase samples such as rocks and fines. We will describe the range of measurement protocols under development and study by the SAM engineering and science teams for use on the surface of Mars.

  7. Quality assurance of analytical methods developed for analysis of environmentally significant species

    SciTech Connect

    Smith, R.E.

    1992-05-01

    A quality assurance program for trace analyses of environmentally significant species has begun. In the first stage, methods to analyze environmental samples for a variety of components have been developed and documented. Techniques include visual inspection, gravimetric analysis, ion chromatography (IC), inductively coupled plasma (ICP) emission spectrometry, ICP-mass spectrometry (ICP/MS), atomic absorption (AA) (flame, furnace, and mercury cold vapor techniques), gas chromatography (GC), potentiometry, and visible spectrophotometry. Industrial sites are analyzed for contamination by methylene dianiline (MDA). Precious metal waste sludges are analyzed for cyanide, halogens, mercury, and precious metals. Paint samples are analyzed for volatile organic compounds by GC and gravimetric analysis. Polychlorinated biphenyls (PCBs) also are determined in oil samples. In the second stage of quality assurance, methods are validated by accuracy and precision studies and by determination of detection limits and ranges. Improved methods provide additional information about key substances targeted by EPA. Furthermore, quality assurance data on IC and GC analyses are presented. IC methods simultaneously determine five anions in one run and four cations in another. Results on EPA-sponsored round robin tests indicate that IC can accurately determine the concentrations of anions and cations. Spiked samples analyzed by both GC and IC methods gave recoveries very close to 100%.

  8. Double-sided Microfluidic Device for Speciation Analysis of Iron in Water Samples: Towards Greener Analytical Chemistry.

    PubMed

    Youngvises, Napaporn; Thanurak, Porapichcha; Chaida, Thanatcha; Jukmunee, Jaroon; Alsuhaimi, Awadh

    2015-01-01

    Microfluidics minimize the amounts of reagents and generate less waste. While microdevices are commonly single-sided, producing a substrate with microchannels on multiple surfaces would increase their usefulness. Herein, a polymethylmethacrylate substrate incorporating microchannel structures on two sides was sandwiched between two polydimethylsiloxane sheets to create a multi-analysis device, which was used for the spectrophotometric analysis of the ferrous ion (Fe(2+)) and the ferric ion (Fe(3+)), by utilizing colorimetric detection. To monitor the signals from both channel networks, dual optical sensors were integrated into the system. The linear ranges for Fe(2+) and Fe(3+) analyses were 0.1 - 20 mg L(-1) (R(2) = 0.9988) and 1.0 - 40 mg L(-1) (R(2) = 0.9974), respectively. The detection limits for Fe(2+) and Fe(3+) were 0.1 and 0.5 mg L(-1), respectively. The percent recoveries of Fe(2+) and Fe(3+) were 93.5 - 104.3 with an RSD < 8%. The microdevice demonstrated capabilities for simultaneous analysis, low waste generation (7.2 mL h(-1)), and high sample throughput (180 h(-1)), making it ideal for greener analytical chemistry applications. PMID:25958864
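
    The quantification step implied by the reported linear ranges is an ordinary linear calibration per channel; the sketch below uses hypothetical standards and a hypothetical sample absorbance, not the paper's data.

```python
import numpy as np

# Linear calibration of one colorimetric channel (e.g. the Fe(2+) channel)
# and back-calculation of an unknown. Standards and sample absorbance are
# hypothetical; the reported linear range for Fe(2+) was 0.1-20 mg/L.
conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])                 # mg/L
absorb = np.array([0.004, 0.021, 0.043, 0.085, 0.211, 0.424, 0.848])   # a.u.

slope, intercept = np.polyfit(conc, absorb, 1)
r2 = np.corrcoef(conc, absorb)[0, 1] ** 2

sample_abs = 0.300
sample_conc = (sample_abs - intercept) / slope
print(f"slope = {slope:.4f} AU per mg/L, R2 = {r2:.4f}, sample ~ {sample_conc:.2f} mg/L")
# The LOD is commonly estimated as 3 * sd(blank) / slope; the Fe(3+) channel
# would be calibrated the same way on the second detector.
```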

  9. Objective analysis of preparations in dental training: development of analytical software.

    PubMed

    Hey, Jeremias; Kupfer, Philipp; Urbannek, Michael; Beuer, Florian

    2013-01-01

    Objective analysis of the quality of a stump preparation by conventional methods is complex and expensive. In this regard digitalization has created new possibilities. The high resolution of modern scanners permits visual representation of the most minute preparation parameters. Virtual rulers permit measurement of distances in many construction programs; however, the use of such measuring instruments is too time-consuming for scientific studies. The computer program presented here permits measurement of the preparation angle, calculation of the width of the preparation margin, and determination of the horizontal path of the preparation shoulder. It runs on the Windows XP operating system independently of a CAD construction program. The program analyzes datasets in the STL file format. PMID:24555407

  10. Interventions to reduce sexual prejudice: a study-space analysis and meta-analytic review.

    PubMed

    Bartoş, Sebastian E; Berger, Israel; Hegarty, Peter

    2014-01-01

    Sexual prejudice is an important threat to the physical and mental well-being of lesbians, gay men, and bisexual people. Therefore, we reviewed the effectiveness of interventions designed to reduce such prejudice. A study-space analysis was performed on published and unpublished papers from all over the world to identify well-studied and underexplored issues. Most studies were conducted with North American undergraduates and were educational in nature. Dissertations were often innovative and well designed but were rarely published. We then performed meta-analyses on sets of comparable studies. Education, contact with gay people, and combining contact with education had a medium-size effect on several measures of sexual prejudice. The manipulation of social norms was effective in reducing antigay behavior. Other promising interventions, such as the use of entertainment media to promote tolerance, need further investigation. More research is also needed on populations other than American students, particularly groups who may have higher levels of sexual prejudice.

  11. An integrated analytical aeropropulsive/aeroelastic model for the dynamic analysis of hypersonic vehicles

    NASA Technical Reports Server (NTRS)

    Chavez, Frank R.; Schmidt, David K.

    1992-01-01

    An intentionally generic and basic approach to determining the dynamic characteristics of hypersonic vehicles is developed. The approach involves a 2D hypersonic aerodynamic analysis utilizing Newtonian theory, coupled with a 1D aero/thermoanalysis of the flow in a scramjet-type propulsion system. In addition, the airframe is considered to be elastic, and the structural dynamics are characterized in terms of a simple lumped-mass model of the in vacuo vibration modes. The vibration modes are coupled to the rigid-body modes through the aero/propulsive forces acting on the structure. The control effectors considered on a generic study configuration include aerodynamic pitch-control surfaces, as well as engine fuel flow and diffuser area ratio. The study configuration is shown to be highly statically unstable in pitch, and to exhibit strong airframe/engine/elastic coupling in the aeroelastic and attitude dynamics, as well as the engine responses.
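    For reference, the 2D Newtonian aerodynamic analysis mentioned above is conventionally based on the Newtonian (or modified Newtonian) surface pressure law; a minimal sketch in generic notation, not necessarily the exact form used in the report:

        C_p = 2 \sin^2\theta \quad \text{(classical Newtonian)}, \qquad C_p = C_{p,\max} \sin^2\theta \quad \text{(modified Newtonian)},

    where \theta is the local inclination of the surface to the freestream, C_{p,\max} is the stagnation-point pressure coefficient, and surfaces shadowed from the flow are assigned C_p = 0.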

  12. Electromagnetic interference of cardiac rhythmic monitoring devices to radio frequency identification: analytical analysis and mitigation methodology.

    PubMed

    Ogirala, Ajay; Stachel, Joshua R; Mickle, Marlin H

    2011-11-01

    The increasing density of wireless communication and the development of radio frequency identification (RFID) technology in particular have increased the susceptibility of patients equipped with cardiac rhythmic monitoring devices (CRMD) to environmental electromagnetic interference (EMI). Several organizations reported observing CRMD EMI from different sources. This paper focuses on mathematically analyzing the energy as perceived by the implanted device, i.e., voltage. Radio frequency (RF) energy transmitted by RFID interrogators is considered as an example. A simplified front-end equivalent circuit of the CRMD sensing circuitry is proposed for the analysis following extensive black-box testing of several commercial pacemakers and implantable defibrillators. After careful understanding of the mechanics of the CRMD signal processing in identifying the QRS complex of the heart-beat, a mitigation technique is proposed. The mitigation methodology introduced in this paper is logical in approach, simple to implement and is therefore applicable to all wireless communication protocols.

  13. REPK: an analytical web server to select restriction endonucleases for terminal restriction fragment length polymorphism analysis.

    PubMed

    Collins, Roy Eric; Rocap, Gabrielle

    2007-07-01

    Terminal restriction fragment length polymorphism (T-RFLP) analysis is a widespread technique for rapidly fingerprinting microbial communities. Users of T-RFLP frequently overlook the resolving power of well-chosen restriction endonucleases and often fail to report how they chose their enzymes. REPK (Restriction Endonuclease Picker) assists in the rational choice of restriction endonucleases for T-RFLP by finding sets of four restriction endonucleases that together uniquely differentiate user-designated sequence groups. With REPK, users can provide their own sequences (of any gene, not just 16S rRNA), specify the taxonomic rank of interest and choose from a number of filtering options to further narrow down the enzyme selection. Bug tracking is provided, and the source code is open and accessible under the GNU Public License v.2, at http://code.google.com/p/repk. The web server is available without access restrictions at http://rocaplab.ocean.washington.edu/tools/repk.
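    To make the selection idea concrete, the sketch below shows the kind of brute-force search such a tool performs: for every candidate set of four enzymes, check whether the combined terminal-fragment lengths distinguish all user-designated sequence groups. This is a hypothetical illustration, not the REPK source code; the enzyme sites and sequences are made up.

        # Hypothetical sketch of the enzyme-set search idea behind a T-RFLP
        # design tool (NOT the REPK source code): for each candidate set of
        # four enzymes, test whether the combined terminal restriction
        # fragment lengths separate the user-designated sequence groups.
        from itertools import combinations

        def trf_length(sequence, site):
            """Terminal fragment length: distance from the 5' end to the
            first occurrence of the recognition site (or the full length)."""
            pos = sequence.find(site)
            return len(sequence) if pos == -1 else pos + len(site)

        def separates(groups, enzymes):
            """True if no two sequences from different groups share the same
            tuple of terminal fragment lengths across the chosen enzymes."""
            signatures = {}
            for name, seqs in groups.items():
                for seq in seqs:
                    sig = tuple(trf_length(seq, site) for site in enzymes.values())
                    if signatures.setdefault(sig, name) != name:
                        return False  # identical signature in a different group
            return True

        def pick_enzyme_sets(groups, enzymes, set_size=4):
            """Yield every combination of `set_size` enzymes that differentiates
            all groups (brute force; real tools prune and filter further)."""
            for combo in combinations(enzymes, set_size):
                subset = {e: enzymes[e] for e in combo}
                if separates(groups, subset):
                    yield combo

        if __name__ == "__main__":
            # Toy data: recognition sites and grouped 16S-like fragments (made up).
            enzymes = {"HhaI": "GCGC", "MspI": "CCGG", "RsaI": "GTAC",
                       "AluI": "AGCT", "HaeIII": "GGCC"}
            groups = {"A": ["ATGCGCATGGCCGTACCGG", "ATGCGCTTGGCCGTACCGG"],
                      "B": ["ATCCGGATAGCTGTACGCGC"]}
            for combo in pick_enzyme_sets(groups, enzymes):
                print("candidate enzyme set:", combo)

    A production tool would additionally apply the kinds of filtering options the abstract refers to, such as fragment-length windows and restriction of the search to commercially available enzymes.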

  14. Analysis of size-fractionated coal combustion aerosols by PIXE and other analytical techniques

    NASA Astrophysics Data System (ADS)

    Maenhaut, W.; Røyset, O.; Vadset, M.; Kauppinen, E. I.; Lind, T. M.

    1993-04-01

    Particle-induced X-ray emission (PIXE) analysis, instrumental neutron activation analysis (INAA) and inductively coupled plasma mass spectrometry (ICP-MS) were used to study the chemical composition of size-fractionated in-stack fly-ash particles emitted during coal combustion. The samples were collected before the electrostatic precipitator at a gas temperature of 120°C during the combustion of Venezuelan coal in a 81 MW capacity circulating fluidized bed boiler. The sampling device consisted of a Berner low pressure impactor, which was operated with a cyclone precutter. The Nuclepore polycarbonate foils, which were used as collection surfaces in the low pressure impactor, were analyzed by the three techniques and the results of common elements were critically compared. The PIXE results were systematically lower than the INAA data and the percentage difference appeared to be stage-dependent, but virtually independent upon the element. The discrepancies are most likely due to bounce-off effects, particle reentrainment and other sampling artifacts, which may make that a fraction of the aerosol particles is deposited on the impaction foils outside the section analyzed by PIXE. However, by resorting to a "mixed internal standard" approach, accurate PIXE data are obtained. Also in the comparison between the ICP-MS and the INAA data significant discrepancies were observed. These are most likely due to incomplete dissolution of the particulate material and in particular of the alumino-silicate fly-ash matrix, during the acid digestion sample preparation step for ICP-MS. It is suggested that a comparison between ICP-MS data of acid digested samples and INAA can advantageously be used to provide speciation information on the various elements. Selected examples of size distributions are presented and briefly discussed.

  15. Analytical solutions to the mass-anisotropy degeneracy with higher order Jeans analysis: a general method

    NASA Astrophysics Data System (ADS)

    Richardson, Thomas; Fairbairn, Malcolm

    2013-07-01

    The Jeans analysis is often used to infer the total density of a system by relating the velocity moments of an observable tracer population to the underlying gravitational potential. This technique has recently been applied in the search for dark matter (DM) in objects such as dwarf spheroidal galaxies where the presence of DM is inferred via stellar velocities. A precise account of the density is needed to constrain the expected gamma-ray flux from DM self-annihilation and to distinguish between cold and warm DM models. Unfortunately, the traditional method of fitting the second-order Jeans equation to the tracer dispersion suffers from an unbreakable degeneracy of solutions due to the unknown velocity anisotropy of the projected system. To tackle this degeneracy, one can appeal to higher moments of the Jeans equation. By introducing an analogue to the Binney anisotropy parameter at fourth order, β', we create a framework that encompasses all solutions to the fourth-order Jeans equations rather than the restricted range imposed by the separable augmented density. The condition β' = f(β) ensures that the degeneracy is lifted and we interpret the separable augmented density system as the order-independent case β' = β. For a generic choice of β', we present the line-of-sight projection of the fourth moment and how it could be incorporated into a joint likelihood analysis of the dispersion and kurtosis. The framework is then extended to all orders such that constraints may be placed to ensure a physically positive distribution function. Having presented the mathematical framework, we then use it to make preliminary analyses of simulated dwarf spheroidal data leading to interesting results which strongly motivate further study.
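    For context, a minimal sketch of the standard second-order machinery this framework generalizes (textbook form, not the paper's fourth-order extension): the Binney anisotropy parameter and the spherical Jeans equation read

        \beta(r) = 1 - \frac{\sigma_\theta^2(r)}{\sigma_r^2(r)}, \qquad \frac{\mathrm{d}\left(\nu\,\sigma_r^2\right)}{\mathrm{d}r} + \frac{2\,\beta(r)}{r}\,\nu\,\sigma_r^2 = -\,\nu\,\frac{\mathrm{d}\Phi}{\mathrm{d}r},

    where \nu is the tracer density, \sigma_r and \sigma_\theta the radial and tangential velocity dispersions, and \Phi the gravitational potential; the mass-anisotropy degeneracy arises because line-of-sight observables constrain \beta(r) and \Phi only in combination, which is what the fourth-order analogue \beta' is introduced to break.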

  16. Preliminary results of receiver function analysis of seismic data recorded from a broadband deployment across the Gulf Coast Plain

    NASA Astrophysics Data System (ADS)

    Gurrola, H.; Pratt, K. W.; Pulliam, J.; Dunbar, J. A.

    2011-12-01

    -wave, (with local PPp reverberations averaged out). This is essentially beam forming for the optimal teleseismic ray path. The clean P-wave will then be deconvolved from the vertical components at each station to produce a vertical component receiver function that will enable us to model and stack local P-wave reverberations to produce a 2-D image of lithospheric structure. To produce traditional receiver functions from time periods where one component is lost from several stations, we will treat neighboring stations as arrays and recover an "array averaged three-component seismogram" for each location. These "beamed" seismograms will allow imaging of the crust, lithospheric mantle, and transition zone beneath the broadband array using traditional receiver function stacking or migration.

  17. Long term outcome and side effects in patients receiving low-dose I125 brachytherapy: a retrospective analysis

    PubMed Central

    Logghe, Pieter; Verlinde, Rolf; Bouttens, Frank; den Broecke, Caroline Van; Deman, Nathalie; Verboven, Koen; Maes, Dirk; Merckx, Luc

    2016-01-01

    ABSTRACT Objectives: To retrospectively evaluate the disease free survival (DFS), disease specific survival (DSS), overall survival (OS) and side effects in patients who received low-dose rate (LDR) brachytherapy with I125 stranded seeds. Materials and methods: Between July 2003 and August 2012, 274 patients with organ confined prostate cancer were treated with permanent I125 brachytherapy. The median follow-up, age and pretreatment prostate specific antigen (iPSA) was 84 months (12-120), 67 years (50-83) and 7.8 ng/mL (1.14-38), respectively. Median Gleason score was 6 (3-9). 219 patients (80%) had stage cT1c, 42 patients (15.3%) had stage cT2a, 3 (1.1%) had stage cT2b and 3 (1.1%) had stage cT2c. The median D90 was 154.3 Gy (102.7-190.2). Results: DSS was 98.5%. OS was 93.5%. 13 patients (4.7%) developed systemic disease, 7 patients (2.55%) had local progression. In 139 low risk patients, the 5 year biochemical freedom from failure rate (BFFF) was 85% and 9 patients (6.4%) developed clinical progression. In the intermediate risk group, the 5 year BFFF rate was 70% and 5 patients (7.1%) developed clinical progression. Median nPSA in patients with biochemical relapse was 1.58 ng/mL (0.21 – 10.46), median nPSA in patients in remission was 0.51 ng/mL (0.01 – 8.5). Patients attaining a low PSA nadir had a significantly higher BFFF (p<0.05). Median D90 in patients with biochemical relapse was 87.2 Gy (51 – 143.1). Patients receiving a high D90 had a significantly higher BFFF (p<0.05). Conclusion: In a well selected patient population, LDR brachytherapy offers excellent outcomes. Reaching a low PSA nadir and attaining high D90 values are significant predictors for a higher DFS. PMID:27532118

  18. [Analytical procedure of variable number of tandem repeats (VNTR) analysis and effective use of analysis results for tuberculosis control].

    PubMed

    Hachisu, Yushi; Hashimoto, Ruiko; Kishida, Kazunori; Yokoyama, Eiji

    2013-12-01

    Variable number of tandem repeats (VNTR) analysis is one of the methods for molecular epidemiological studies of Mycobacterium tuberculosis. VNTR analysis is based on PCR and provides rapid, highly reproducible results and higher strain discrimination power than the restriction fragment length polymorphism (RFLP) analysis widely used in molecular epidemiological studies of Mycobacterium tuberculosis. Genetic lineage compositions of Mycobacterium tuberculosis clinical isolates differ among the regions from where they are isolated, and allelic diversity at each locus also differs among the genetic lineages of Mycobacterium tuberculosis. Therefore, the combination of VNTR loci that can provide high discrimination capacity for analysis is not common in every region. The Japan Anti-Tuberculosis Association (JATA) 12 (15) reported a standard combination of VNTR loci for analysis in Japan, and the combination with hypervariable (HV) loci added to JATA12 (15), which has very high discrimination capacity, was also reported. From these reports, it is thought that data sharing between institutions and construction of a nationwide database will progress from now on. With the construction of databases of VNTR profiles, VNTR analysis has become an effective tool to trace the route of tuberculosis infection, and also helps in decision-making in the treatment course. However, in order to utilize the results of VNTR analysis effectively, it is important that each related organization cooperates closely, and analysis should be applied appropriately within a system in which accurate control and the protection of private information are ensured.
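    For reference, the strain discrimination power discussed above is commonly quantified with the Hunter-Gaston discriminatory index, a standard metric given here for context rather than taken from the article:

        D = 1 - \frac{1}{N(N-1)} \sum_{j=1}^{s} n_j\,(n_j - 1),

    where N is the number of isolates typed, s the number of distinct VNTR profiles, and n_j the number of isolates sharing the j-th profile; D is the probability that two isolates drawn at random have different profiles.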

  19. Cryoplasty Versus Conventional Angioplasty in Femoropopliteal Arterial Recanalization: 3-Year Analysis of Reintervention-Free Survival by Treatment Received

    SciTech Connect

    Diaz, Maria Lourdes; Urtasun, Fermin; Barberena, Javier; Aranzadi, Carlos; Guillen-Grima, Francisco; Bilbao, Jose Ignacio

    2011-10-15

    Purpose: To compare the long-term efficacy of cryoplasty therapy versus conventional angioplasty in the treatment of peripheral arterial atherosclerotic stenosis on the basis of our 3-year clinical experience. Materials and Methods: From January 2006 to December 2008, a total of 155 patients with 192 lesions of the femoropopliteal sector were randomized to receive either cryoplasty or conventional balloon angioplasty. The primary study end point was target lesion patency. Follow-up with clinical evaluation of patients' symptoms, ankle-brachial index, and Doppler ultrasound was scheduled at 1, 6, 9, 12, 24, and 36 months. Results: For the cryoplasty group (n = 86), immediate technical success was achieved in 74.4% of lesions. The rate of significant dissection was 13.5% and the rate of stent placement was 22%. In the long term, the target lesion patency rate at 6 months was 59.4%, with rates of 55.9, 52.6, and 49.1% at 1, 2, and 3 years, respectively. For the conventional angioplasty group (n = 69), the immediate technical success rate was 83.7%. The rate of significant dissection was 19%, and the rate of stent placement was 72.9%. Patency rates at 6 months and at 1, 2, and 3 years were 71.5, 61.2, 60, and 56%, respectively. Conclusion: Compared with conventional angioplasty, cryoplasty showed good immediate success rates with lower stent placement rates. During the 3-year follow-up, patency rates tended to equalize between the two modalities.

  20. Error and Performance Analysis of MEMS-based Inertial Sensors with a Low-cost GPS Receiver

    PubMed Central

    Park, Minha; Gao, Yang

    2008-01-01

    Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS), have been widely utilized and their applications are becoming popular, not only in military or commercial applications, but also for everyday life. Although GPS measurements are the essential information for currently developed land vehicle navigation systems (LVNS), GPS signals are often unavailable or unreliable due to signal blockages under certain environments such as urban canyons. This situation must be compensated in order to provide continuous navigation solutions. To overcome the problems of unavailability and unreliability using GPS and to be cost and size effective as well, Micro Electro Mechanical Systems (MEMS) based inertial sensor technology has been pushing for the development of low-cost integrated navigation systems for land vehicle navigation and guidance applications. This paper will analyze the characterization of MEMS based inertial sensors and the performance of an integrated system prototype of MEMS based inertial sensors, a low-cost GPS receiver and a digital compass. The influence of the stochastic variation of sensors will be assessed and modeled by two different methods, namely Gauss-Markov (GM) and AutoRegressive (AR) models, with GPS signal blockage of different lengths. Numerical results from kinematic testing have been used to assess the performance of different modeling schemes.
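    To make the modeling step concrete, the sketch below simulates the two stochastic error models named above, a first-order Gauss-Markov (GM) process and a first-order autoregressive (AR(1)) process, which coincide in discrete time when the AR coefficient equals exp(-dt/tau). It is a generic illustration with made-up parameter values, not the paper's code.

        # Minimal sketch of first-order Gauss-Markov (GM) and AR(1) models for
        # an inertial sensor bias; parameter values are illustrative only.
        import numpy as np

        def gauss_markov(n, dt, tau, sigma, rng):
            """Discrete first-order GM process: x_k = exp(-dt/tau) x_{k-1} + w_k."""
            phi = np.exp(-dt / tau)
            q = sigma**2 * (1.0 - phi**2)   # keeps the steady-state variance at sigma^2
            x = np.zeros(n)
            for k in range(1, n):
                x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))
            return x

        def ar1(n, a, sigma_w, rng):
            """AR(1) process: x_k = a x_{k-1} + w_k with white driving noise w_k."""
            x = np.zeros(n)
            for k in range(1, n):
                x[k] = a * x[k - 1] + rng.normal(0.0, sigma_w)
            return x

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            dt, tau, sigma = 0.01, 100.0, 0.05   # illustrative gyro-bias parameters
            gm = gauss_markov(20000, dt, tau, sigma, rng)
            a = np.exp(-dt / tau)
            ar = ar1(20000, a, sigma * np.sqrt(1.0 - a**2), rng)
            print("GM std: %.4f, AR(1) std: %.4f" % (gm.std(), ar.std()))

    In an integrated navigation filter, the fitted GM or AR coefficients would typically be used to augment the Kalman filter state with a bias model that carries the solution through the GPS signal blockages described above.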

  1. Venous hemodynamics in neurological disorders: an analytical review with hydrodynamic analysis

    PubMed Central

    2013-01-01

    Venous abnormalities contribute to the pathophysiology of several neurological conditions. This paper reviews the literature regarding venous abnormalities in multiple sclerosis (MS), leukoaraiosis, and normal-pressure hydrocephalus (NPH). The review is supplemented with hydrodynamic analysis to assess the effects on cerebrospinal fluid (CSF) dynamics and cerebral blood flow (CBF) of venous hypertension in general, and chronic cerebrospinal venous insufficiency (CCSVI) in particular. CCSVI-like venous anomalies seem unlikely to account for reduced CBF in patients with MS, thus other mechanisms must be at work, which increase the hydraulic resistance of the cerebral vascular bed in MS. Similarly, hydrodynamic changes appear to be responsible for reduced CBF in leukoaraiosis. The hydrodynamic properties of the periventricular veins make these vessels particularly vulnerable to ischemia and plaque formation. Venous hypertension in the dural sinuses can alter intracranial compliance. Consequently, venous hypertension may change the CSF dynamics, affecting the intracranial windkessel mechanism. MS and NPH appear to share some similar characteristics, with both conditions exhibiting increased CSF pulsatility in the aqueduct of Sylvius. CCSVI appears to be a real phenomenon associated with MS, which causes venous hypertension in the dural sinuses. However, the role of CCSVI in the pathophysiology of MS remains unclear. PMID:23724917

  2. Elemental analysis of marble used in Saudi Arabia by different nuclear analytical techniques.

    PubMed

    El-Taher, A; Ibraheem, Awad A; Abdelkawy, Salah

    2013-03-01

    Instrumental neutron activation analysis (INAA) and HPGe detector γ-spectroscopy were used to determine a total of 22 elements qualitatively and quantitatively for the first time from marble rock samples collected from local markets in Saudi Arabia. The elements determined are Mg, Ca, V, Na, Mn, As, La, Sm, U, Sc, Cr, Fe, Co, Zn, Sn, Ba, Ce, Eu, Yb, Lu, Hf, and Th. The samples were properly prepared together with their standard reference material and simultaneously irradiated by thermal neutrons at the TRIGA Mainz research reactor at a neutron flux of 7 × 10(11) n/cm(2) s. XRF was also used. The concentrations of natural radionuclides (226)Ra, (232)Th and (40)K were also determined by gamma ray spectroscopy to estimate the radiological parameters such as radium equivalent activity, and the external hazard index was calculated to estimate the exposure risk from usage of marble as raw materials in construction. For the sake of comparison the results of concentration levels and radium equivalent activities are compared with similar studies carried out in other countries. PMID:23262125
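    For reference, the radiological parameters named above are conventionally defined as follows (standard forms, quoted for context rather than from the article):

        Ra_{eq} = C_{Ra} + 1.43\,C_{Th} + 0.077\,C_{K} \le 370~\mathrm{Bq\,kg^{-1}}, \qquad H_{ex} = \frac{C_{Ra}}{370} + \frac{C_{Th}}{259} + \frac{C_{K}}{4810} \le 1,

    where C_{Ra}, C_{Th} and C_{K} are the activity concentrations of 226Ra, 232Th and 40K in Bq kg^-1; keeping both quantities below their limits is the usual criterion for safe use of a material in construction.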

  3. A novel four-dimensional analytical approach for analysis of complex samples.

    PubMed

    Stephan, Susanne; Jakob, Cornelia; Hippler, Jörg; Schmitz, Oliver J

    2016-05-01

    A two-dimensional LC (2D-LC) method, based on the work of Erni and Frei in 1978, was developed and coupled to an ion mobility-high-resolution mass spectrometer (IM-MS), which enabled the separation of complex samples in four dimensions (2D-LC, ion mobility spectrometry (IMS), and mass spectrometry (MS)). This approach works as a continuous multiheart-cutting LC system, using a long modulation time of 4 min, which allows the complete transfer of most of the first-dimension peaks to the second-dimension column without fractionation, in comparison to comprehensive two-dimensional liquid chromatography. Hence, each compound delivers only one peak in the second dimension, which simplifies the data handling even when ion mobility spectrometry as a third and mass spectrometry as a fourth dimension are introduced. The analysis of a plant extract from Ginkgo biloba shows the separation power of this four-dimensional separation method with a calculated total peak capacity of more than 8700. Furthermore, the advantage of ion mobility for characterizing unknown compounds by their collision cross section (CCS) and accurate mass in a non-target approach is shown for different matrices like plant extracts and coffee. Graphical abstract: Principle of the four-dimensional separation. PMID:27038056
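    For context, the total peak capacity of a multidimensional separation is commonly estimated as the product of the peak capacities of its (ideally orthogonal) dimensions. A minimal sketch with purely illustrative numbers, chosen only to show the arithmetic and not taken from the paper:

        n_{total} \approx n_{LC1} \times n_{LC2} \times n_{IMS}, \qquad \text{e.g.}\; 25 \times 50 \times 7 \approx 8750,

    which is the order of magnitude of the calculated total peak capacity reported above.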

  4. Analytical development of disturbed matrix eigenvalue problem applied to mixed convection stability analysis in Darcy media

    NASA Astrophysics Data System (ADS)

    Hamed, Haikel Ben; Bennacer, Rachid

    2008-08-01

    This work evaluates, algebraically and numerically, the influence of a disturbance on the spectral values of a diagonalizable matrix. Two approaches are possible: the first uses the theorem on disturbances of a matrix depending on a parameter, due to Lidskii and based primarily on the Jordan structure of the undisturbed matrix. The second approach consists in factorizing the matrix system and then numerically calculating the roots of the characteristic polynomial of the disturbed matrix. This problem can serve as a standard model in the equations of continuous media mechanics. In this work we chose the second approach and, to illustrate the application, consider the Rayleigh-Bénard problem in Darcy media disturbed by a filtering through-flow. The matrix form of the problem is obtained from a linear stability analysis by a finite element method. We show that it is possible to decompose the general phenomenon into elementary ones described respectively by a disturbed matrix and a disturbance. Good agreement between the two methods was observed. To cite this article: H.B. Hamed, R. Bennacer, C. R. Mecanique 336 (2008).

  5. Radionuclide migration through fractured rock for arbitrary-length decay chain: Analytical solution and global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Shahkarami, Pirouz; Liu, Longcheng; Moreno, Luis; Neretnieks, Ivars

    2015-01-01

    This study presents an analytical approach to simulate nuclide migration through a channel in a fracture accounting for an arbitrary-length decay chain. The nuclides are retarded as they diffuse in the porous rock matrix and stagnant zones in the fracture. The Laplace transform and similarity transform techniques are applied to solve the model. The analytical solution to the nuclide concentrations at the fracture outlet is governed by nine parameters representing different mechanisms acting on nuclide transport through a fracture, including diffusion into the rock matrices, diffusion into the stagnant water zone, chain decay and hydrodynamic dispersion. Furthermore, to assess how sensitive the results are to parameter uncertainties, the Sobol method is applied in variance-based global sensitivity analyses of the model output. The Sobol indices show how uncertainty in the model output is apportioned to the uncertainty in the model input. This method takes into account both direct effects and interaction effects between input parameters. The simulation results suggest that in the case of pulse injections, ignoring the effect of a stagnant water zone can lead to significant errors in the time of first arrival and the peak value of the nuclides. Likewise, neglecting the parent and modeling its daughter as a single stable species can result in a significant overestimation of the peak value of the daughter nuclide. It is also found that as the dispersion increases, the early arrival time and the peak time of the daughter decrease while the peak value increases. More importantly, the global sensitivity analysis reveals that for time periods greater than a few thousand years, the uncertainty of the model output is more sensitive to the values of the individual parameters than to the interaction between them. Moreover, if one tries to evaluate the true values of the input parameters at the same cost and effort, the determination of priorities should follow a certain

  6. Analytical Microscopy

    SciTech Connect

    Not Available

    2006-06-01

    In the Analytical Microscopy group, within the National Center for Photovoltaics' Measurements and Characterization Division, we combine two complementary areas of analytical microscopy--electron microscopy and proximal-probe techniques--and use a variety of state-of-the-art imaging and analytical tools. We also design and build custom instrumentation and develop novel techniques that provide unique capabilities for studying materials and devices. In our work, we collaborate with you to solve materials- and device-related R&D problems. This sheet summarizes the uses and features of four major tools: transmission electron microscopy, scanning electron microscopy, the dual-beam focused-ion-beam workstation, and scanning probe microscopy.

  7. Uncertainty Quantification for the Reliability of the Analytical Analysis for the Simplified Model of CO2 Geological Sequestration

    SciTech Connect

    Bao, Jie; Xu, Zhijie; Fang, Yilin

    2015-04-01

    A hydro-mechanical model with analytical solutions, including pressure evolution and geomechanical deformation for geological CO2 injection and sequestration, was introduced in our previous work. However, the reliability and accuracy of the hydro-mechanical model and the companion analytical solution are uncertain because of the assumptions and simplifications in the analytical model, even though it was validated by a few example cases. This study introduces a method to efficiently measure the accuracy of the analytical model and to specify the acceptable range of input parameters that can guarantee the accuracy and reliability of the analytical solution. The coupled hydro-geomechanical subsurface transport simulator STOMP was adopted as a reference to assess the reliability of the hydro-mechanical model and the analytical solution. A quasi-Monte Carlo sampling method was applied to efficiently sample the input parameter space.
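    As an illustration of the sampling step, the sketch below draws a scrambled Sobol' sequence over a made-up three-parameter input space and evaluates a placeholder model at each point. It is a generic quasi-Monte Carlo example; the parameter names, bounds, and model are illustrative assumptions, not those of the study.

        # Generic quasi-Monte Carlo (Sobol') sampling of an input parameter
        # space, e.g. for comparing an analytical model against a reference
        # simulator. Parameter names, bounds, and the model are placeholders.
        import numpy as np
        from scipy.stats import qmc

        l_bounds = [1e-15, 1.0, 5e9]    # e.g. permeability, injection rate, modulus
        u_bounds = [1e-13, 10.0, 5e10]

        sampler = qmc.Sobol(d=3, scramble=True, seed=42)
        unit_samples = sampler.random_base2(m=7)             # 2**7 = 128 points in [0, 1)^3
        samples = qmc.scale(unit_samples, l_bounds, u_bounds)

        def analytical_model(perm, rate, modulus):
            """Placeholder for the analytical pressure/deformation solution."""
            return rate / (perm * modulus)                   # dummy response, illustration only

        responses = np.array([analytical_model(*row) for row in samples])
        print(samples.shape, responses.mean())

    Comparing such responses point by point against the reference simulator over the sampled space is one straightforward way to map out the parameter region in which the analytical solution stays acceptably accurate.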

  8. Accuracy analysis on C/A code and P(Y) code pseudo-range of GPS dual frequency receiver and application in point positioning

    NASA Astrophysics Data System (ADS)

    Peng, Xiuying; Fan, Shijie; Guo, Jiming

    2008-10-01

    When Anti-Spoofing (A-S) is active, civilian users have difficulty using the P(Y) code for precise navigation and positioning. The Z-tracking technique is one of the effective methods to acquire the P(Y) code. In this paper, the accuracy of pseudoranges from the C/A code and P(Y) code for a dual-frequency GPS receiver is discussed. The principle of measuring the encrypted P(Y) code is described first, and then a large data set from IGS tracking stations is used for analysis and verification with the help of precise point positioning software developed by the authors. In particular, P(Y) code pseudoranges from civilian dual-frequency receivers allow the effect of ionospheric delay to be eliminated or reduced, improving the precision of positioning. Point positioning experiments demonstrating this are presented at the end.
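    For reference, the dual-frequency elimination of ionospheric delay mentioned above is conventionally achieved with the ionosphere-free pseudorange combination (standard GNSS textbook form, given for context):

        P_{IF} = \frac{f_1^2\,P_1 - f_2^2\,P_2}{f_1^2 - f_2^2},

    where P_1 and P_2 are the pseudoranges measured on the L1 and L2 carriers (for example C/A or P(Y) code observations) with frequencies f_1 and f_2; the first-order ionospheric delay, which scales with 1/f^2, cancels in this combination.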

  9. Disaster risk management in prospect mining area Blitar district, East Java, using microtremor analysis and ANP (analytical network processing) approach

    SciTech Connect

    Parwatiningtyas, Diyan; Ambarsari, Erlin Windia; Marlina, Dwi; Wiratomo, Yogi

    2014-03-24

    Indonesia has a wealth of natural assets that is large enough to be managed and utilized, both by local government and by local communities, especially in the mining sector. However, mining activities can change the state of the surface layer of the earth, which carries a high disaster risk. This could threaten safety and disrupt human life, and cause environmental damage, loss of property, and psychological impact, as addressed in Law No. 24 of 2007. We therefore strive to manage and minimize the risk of mine disasters in the region by calculating the Amplification Factor (AF) from microtremor analysis following Kanai and Nakamura, and by testing decision systems with ANP analysis. Based on the amplification factor and the Analytical Network Processing (ANP) results obtained, some points showed instability in the surface layer of the mining area, including sites TP-7, TP-8, TP-9, and TP-10 (Birowo2). In terms of structure, these locations are indicated as unstable because of a sloping surface layer, so the risk of landslides and earthquake damage is high. The other areas of the mine site can be considered stable.

  10. Disaster risk management in prospect mining area Blitar district, East Java, using microtremor analysis and ANP (analytical network processing) approach

    NASA Astrophysics Data System (ADS)

    Parwatiningtyas, Diyan; Ambarsari, Erlin Windia; Marlina, Dwi; Wiratomo, Yogi

    2014-03-01

    Indonesia has a wealth of natural assets that is large enough to be managed and utilized, both by local government and by local communities, especially in the mining sector. However, mining activities can change the state of the surface layer of the earth, which carries a high disaster risk. This could threaten safety and disrupt human life, and cause environmental damage, loss of property, and psychological impact, as addressed in Law No. 24 of 2007. We therefore strive to manage and minimize the risk of mine disasters in the region by calculating the Amplification Factor (AF) from microtremor analysis following Kanai and Nakamura, and by testing decision systems with ANP analysis. Based on the amplification factor and the Analytical Network Processing (ANP) results obtained, some points showed instability in the surface layer of the mining area, including sites TP-7, TP-8, TP-9, and TP-10 (Birowo2). In terms of structure, these locations are indicated as unstable because of a sloping surface layer, so the risk of landslides and earthquake damage is high. The other areas of the mine site can be considered stable.

  11. What workforce is needed to implement the health information technology agenda? Analysis from the HIMSS analytics database.

    PubMed

    Hersh, William; Wright, Adam

    2008-01-01

    One of the essential ingredients for health information technology implementation is a well-trained and competent workforce. However, this workforce has not been quantified or otherwise characterized well. We extracted data from the HIMSS Analytics Database and extrapolated our findings to the US as a whole. We found that there are approximately 108,390 IT professionals in healthcare in the US. In addition, the amount of IT staff hired varies by level of EMR adoption, with the rate of IT FTE per bed starting at 0.082 at the lowest level of the EMR Adoption Model (Stage 0) and increasing to 0.210 FTE per bed at higher levels (Stage 4). We can extrapolate nationally to conclude that to move the entire US to higher levels of adoption (Stage 4) will require an additional 40,784 IT professionals. There are limitations to this analysis, including that the data are limited to IT professionals who are mainly in hospitals and do not include those who, for example, work for vendors or in non-clinical settings. Furthermore, data on biomedical informatics professionals are still virtually non-existent. Our analysis adds to data that show there must be increasing attention paid to the workforce that will develop, implement, and evaluate HIT applications. Further research is essential to better characterize all types of workers needed for adoption of health information technology, including their job roles, required competencies, and optimal education.

  12. An Analysis of the Origins, Subjects, and Awards Received for Presentations at the Annual IASG Conferences (2009-2013).

    PubMed

    Soni, Subhash; Sahni, Peush; Nundy, Samiran

    2015-12-01

    Abstracts presented at scientific meetings are important for making ongoing research more widely known and may also reflect the quality of the parent institution's standard of patient care. We analyzed the abstracts presented at the annual meetings of IASG for the past 5 years for (i) medium of presentation (oral/poster/videos), (ii) subjects discussed, (iii) institution of origin, (iv) whether a prospective/retrospective study or a case report, (v) changing trends, and (vi) awards received. Of the 1340 abstracts analyzed, 18.5 % were oral presentations, 74.9 % posters, and 6.5 % videos, and the annual number of abstracts rose from 205 in 2009 to 388 in 2012. The main organs discussed were the liver (19.3 %), pancreas (18.8 %), and biliary system (14.4 %). Sixty percent was from the private sector, with Sir Ganga Ram Hospital, SGRH (95) and the Apollo Hospital, New Delhi (60), being the most prolific. From public institutes, the abstracts were predominantly from PGIMER, Chandigarh (39) and GB Pant Hospital (37). Thirty-three percent was from DNB and 25 % from M.Ch GI Surgery centres and 42 % was from others (most of these were as posters). There were 1016 (75.8 %) prospective/retrospective studies and 324 (24.2 %) case reports. Presentations, especially those related to liver transplantation, showed an increase. The main awards were won by SGRH (8); AIIMS, New Delhi (7); and JIPMER, Puducherry (6). Over the last 5 years, abstracts presented at the IASG have increased in number, especially those dealing with the hepato-pancreatico-biliary system and liver transplantation. Most presentations were from "academic" departments, and similar analyses may inform us where we are heading and stimulate healthy competition. PMID:26884655

  13. A scenario analysis for reducing organic priority pollutants in receiving water using integrated dynamic urban fate models.

    PubMed

    Gevaert, Veerle; Verdonck, Frederik; De Baets, Bernard

    2012-08-15

    The Water Framework Directive (WFD) has the objective of a catchment-oriented water quality protection for all European waters with the purpose of achieving a good ecological and chemical quality status by the year 2015. To that end, necessary measures should be identified and implemented, with the aim of progressively reducing pollution from priority substances. The objective of this paper is to demonstrate how a dynamic model of the integrated urban wastewater system (IUWS) can be used to test different emission reduction strategies for organic priority pollutants (PPs) in a semi-hypothetical case study on di(2-ethylhexyl)phthalate (DEHP). The IUWS is composed of coupled entities: sources, urban catchment surface (run-off/infiltration), sewer system, stormwater treatment unit, wastewater treatment plant (WWTP) including sludge handling, and receiving surface water (river). State-of-the-art dynamic fate models were selected from literature and extended with an organic pollutant fate sub-model. Dynamic DEHP release profiles were estimated using a dynamic model input generator and fed to the model to predict the fate and concentration of DEHP in each IUWS sub-system. The model was then used to test eight scenarios on environmental performance, namely (1) reduction of impervious urban area, (2) reduction of infiltration in the sewer system, (3) input reduction (excluding the main pollutant sources), (4) separating the combined sewer system, (5) treatment of stormwater by stormwater infiltration ponds (separate sewer systems), (6) placement of retention basins at main sewer junctions, (7) sand filtration of secondary effluent, and (8) pre-precipitation of phosphorous. The simulation results revealed that the most effective measure in terms of river water quality improvement for DEHP (annual average and spikiness reduction) and PP concentration in the disposed WWTP sludge, is reducing release of this substance into the environment, not surprisingly. In general, this

  14. International proficiency testing of analytical laboratories for foods and feeds from 1990 to 1996: the experiences of the United Kingdom Food Analysis Performance Assessment Scheme.

    PubMed

    Key, P E; Patey, A L; Rowling, S; Wilbourn, A; Worner, F M

    1997-01-01

    The Food Analysis Performance Assessment Scheme (FAPAS) organized by a Secretariat of the UK Ministry of Agriculture, Fisheries, and Food has checked the proficiency of analytical laboratories for foods and feeds from 1990 to 1996. FAPAS was started for UK laboratories but was expanded worldwide at the request of analysts in other countries who did not have a home-based scheme. Thirteen thousand homogeneity-checked test materials were issued, covering a very wide range of analytes, including pesticides, toxins, veterinary drug residues, trace and nutritional elements, food colors, preservatives, sweeteners, alcohol congeners, fatty acids, nitrate, and proximate analysis. Participants returned 85% of requested data, and 47,000 z-score proficiency assessments were made, of which 81% were satisfactory. Evidence is presented of improvements in overall analytical ability with increased participation in proficiency testing in the areas of proximate analysis; organochlorine pesticide analysis; and lead, mercury, and acesulfame-K analyses. Little improvement was shown in other analytical areas such as calcium analysis. Overall accuracies for analysis of specific pesticides and specific trace elements in the circulated test materials were compared. PMID:9241851
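    For reference, the z-score used in proficiency schemes of this kind is conventionally defined as (standard form, quoted for context):

        z = \frac{x - x_a}{\sigma_p},

    where x is a participant's reported result, x_a the assigned value for the test material, and \sigma_p the target standard deviation for proficiency assessment; results with |z| \le 2 are normally regarded as satisfactory, which is the sense of the 81% satisfactory assessments quoted above.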

  15. Measurement of volatile concentrations in volcanic glasses using thermogravimetric analysis: comparison with micro-analytical methods

    NASA Astrophysics Data System (ADS)

    Tuffen, H.; Owen, J.; Applegarth, L. J.

    2012-04-01

    Thermogravimetric analysis-mass spectrometry (TGA-MS) is potentially a powerful tool for measurement of multi-species volatile concentrations in volcanic rock samples and characterisation of degassing patterns that relate to volatile speciation. Simultaneous differential scanning calorimetry (DSC) provides information on thermal transformations such as crystallisation or melting. However, no study has addressed whether the TGA technique can be used to quantify water speciation or separate water from other volatile species such as halogens. We have carried out TGA-DSC-MS experiments on a suite of compositionally homogeneous, variably-degassed rhyolitic obsidian samples from Blahnukur, Torfajökull, Iceland [1]. Sample water contents, as measured by infra-red spectroscopy, range from 0.19-0.81 wt %; F and Cl concentrations, measured using electron microprobe, range from 0.26-0.35 and 0.18-0.22 wt % respectively. Other volatile species concentrations (e.g. CO2, S) were beneath detection limits. The TGA results show an excellent correlation between the total volatile content measured using TGA (TVCTGA) and the total volatile content (H2OT + F + Cl) measured by other techniques (TVCTGA = 0.992TVCFTIR,EPMA, with R2 = 0.94). This shows that both water and halogen species are degassed during TGA measurements, even though halogen species are not detected through MS analyses. Patterns of volatile release indicate a link between water speciation, as measured using FTIR, and the temperature of degassing, and allow identification of hydrated samples. There are strong correlations between TGA weight loss over the 250-550 °C interval and [H2Om] concentration, and between weight loss >550 °C and the -OH content. The total volatile loss above 550 °C far exceeds -OH concentrations alone (TGA>550 = 1.9126 [-OH] + 0.1693), but closely matches the sum of -OH, F and Cl in glasses, with TGA>550 = 1.02 [-OH+F+Cl]. This indicates that halogen release occurs at high temperatures and

  16. Detrended Fluctuation Analysis of Heart Rate Dynamics Is an Important Prognostic Factor in Patients with End-Stage Renal Disease Receiving Peritoneal Dialysis

    PubMed Central

    Lin, Lian-Yu; Chang, Chin-Hao; Chu, Fang-Ying; Lin, Yen-Hung; Wu, Cho-Kai; Lee, Jen-Kuang; Hwang, Juei-Jen; Lin, Jiunn-Lee; Chiang, Fu-Tien

    2016-01-01

    Background and Objectives Patients with severe kidney function impairment often have autonomic dysfunction, which could be evaluated noninvasively by heart rate variability (HRV) analysis. Nonlinear HRV parameters such as detrended fluctuation analysis (DFA) have been demonstrated to be important outcome predictors in patients with cardiovascular diseases. Whether cardiac autonomic dysfunction measured by DFA is also a useful prognostic factor in patients with end-stage renal disease (ESRD) receiving peritoneal dialysis (PD) remains unclear. The present study was designed to test this hypothesis. Materials and Methods Patients with ESRD receiving PD were included in the study. A twenty-four-hour Holter recording was obtained from each patient, together with other important traditional prognostic markers such as underlying diseases, left ventricular ejection fraction (LVEF) and serum biochemistry profiles. Short-term (DFAα1) and long-term (DFAα2) DFA as well as other linear HRV parameters were calculated. Results A total of 132 patients (62 men, 72 women) with a mean age of 53.7±12.5 years were recruited from July 2007 to March 2009. During a median follow-up period of around 34 months, eight cardiac and six non-cardiac deaths were observed. Competing risk analysis demonstrated that decreased DFAα1 was a strong prognostic predictor for increased cardiac and total mortality. ROC analysis showed that the AUC of DFAα1 (<0.95) to predict mortality was 0.761 (95% confidence interval (CI) = 0.617–0.905). DFAα1 ≥ 0.95 was associated with lower cardiac mortality (Hazard ratio (HR) 0.062, 95% CI = 0.007–0.571, P = 0.014) and total mortality (HR = 0.109, 95% CI = 0.033–0.362, P = 0.0003). Conclusion Cardiac autonomic dysfunction evaluated by DFAα1 is an independent predictor for cardiac and total mortality in patients with ESRD receiving PD. PMID:26828209
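    To make the DFA step concrete, the sketch below computes a short-term scaling exponent from an RR-interval series in the conventional way: integrate the mean-centred series, detrend it in boxes of 4 to 16 beats, and fit the slope of log F(n) against log n. It is a generic illustration on synthetic data, not the study's analysis software.

        # Minimal detrended fluctuation analysis (DFA) sketch for an RR-interval
        # series; the 4-16 beat scale range yields the short-term exponent alpha1.
        import numpy as np

        def dfa_alpha(rr, scales):
            """Return the DFA scaling exponent over the given box sizes."""
            y = np.cumsum(rr - np.mean(rr))            # integrated, mean-centred series
            fluct = []
            for n in scales:
                n_boxes = len(y) // n
                rms = []
                for i in range(n_boxes):
                    seg = y[i * n:(i + 1) * n]
                    t = np.arange(n)
                    coeff = np.polyfit(t, seg, 1)      # local linear trend
                    rms.append(np.sqrt(np.mean((seg - np.polyval(coeff, t)) ** 2)))
                fluct.append(np.mean(rms))
            slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
            return slope                               # slope of log F(n) vs log n

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            rr = 800 + 0.1 * np.cumsum(rng.normal(0, 5, 3000))   # toy RR series (ms)
            alpha1 = dfa_alpha(rr, scales=np.arange(4, 17))
            print("DFA alpha1 = %.2f" % alpha1)

    Values of alpha1 near 1 indicate the fractal-like correlation structure of a healthy heart rate, while the reduced values associated with higher mortality in the study reflect a breakdown of that structure.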

  17. Analytical Chemistry Laboratory

    NASA Technical Reports Server (NTRS)

    Anderson, Mark

    2013-01-01

    The Analytical Chemistry and Material Development Group maintains capabilities in chemical analysis, materials R&D, failure analysis, and contamination control. The uniquely qualified staff and facility support the needs of flight projects, science instrument development and various technical tasks, as well as Caltech.

  18. Asaia bogorensis peritonitis identified by 16S ribosomal RNA sequence analysis in a patient receiving peritoneal dialysis.

    PubMed

    Snyder, Richard W; Ruhe, Jorg; Kobrin, Sidney; Wasserstein, Alan; Doline, Christa; Nachamkin, Irving; Lipschutz, Joshua H

    2004-08-01

    Here the authors report a case of refractory peritonitis leading to multiple hospitalizations and the loss of peritoneal dialysis access in a patient on automated peritoneal dialysis, caused by Asaia bogorensis, a bacterium not previously described as a human pathogen. This organism was identified by sequence analysis of the 16S ribosomal RNA gene. Unusual microbial agents may cause peritonitis, and molecular microbiological techniques are important tools for identifying these agents.

  19. The Impact of Chemical Abrasion on Trace Element Analysis of Zircon by In Situ Micro-Analytical Techniques

    NASA Astrophysics Data System (ADS)

    Romanoski, A.; Coint, N.; Cottle, J. M.; Hetherington, C. J.; Barnes, C. G.

    2011-12-01

    Introduction of the chemical abrasion technique has significantly increased the precision and accuracy of ID-TIMS U-Pb dating of zircon. The chemical abrasion technique, coupled with thermal annealing, removes inclusions and metamict domains from zircon, reducing the impact of Pb-loss and leading to more concordant analyses. In this study, zircon from the Red Bluff Granitic Suite (TX) (ID-TIMS age 1120 ± 35 Ma) has been thermally annealed and chemically abraded prior to SHRIMP-RG and LA-MC-ICP-MS analysis. Chemically abraded zircon gives a date of 1109 ± 22 Ma with an average of 3% discordancy. This compares with dates of 1137 ± 48 Ma with an average of 39% discordancy for non-abraded zircon from the same sample. The dates overlap within uncertainty, but the age from chemically abraded zircon has a lower population uncertainty. Other petrographic and analytical observations of the chemically abraded zircon include brighter CL intensity, lower REE abundances, more consistent (smaller scatter) negative Eu/Eu* anomalies, less scatter in the chondrite-normalized LREE values, and a slightly less-steep chondrite normalized HREE slope. The data show that thermal annealing and chemical abrasion of zircon prior to analysis by in situ ion-beam or laser ablation techniques may result in better accuracy and greater concordance in U-Pb analysis of zircon. However, while improving the quality of some components of the trace element dataset (e.g. Eu anomalies) the process may prejudice the interpretation of zircon trace element data (e.g. HREECN slopes).

  20. Analysis of acoustic damping in duct terminated by porous absorption materials based on analytical models and finite element simulations

    NASA Astrophysics Data System (ADS)

    Guan Qiming

    Acoustic absorption materials are widely used to dampen and attenuate noise, which is present almost everywhere and adversely affects daily life. To evaluate the absorption performance of such materials, their acoustic properties must be determined experimentally. Two experimental methods, the Standing Wave Ratio Method and the Transfer-Function Method, collectively known as the Impedance Tube Method, are based on analytical models that have been used to evaluate and validate data obtained from acoustic impedance analyzers. This thesis first reviews the existing analytical models of these two experimental methods in the literature. A new analytical model is then developed based on the One-Microphone Method and the Three-Microphone Method, two novel experimental approaches. Comparisons are made among these analytical models, and their advantages and disadvantages are discussed.
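    For reference, the Transfer-Function (two-microphone) analytical model reviewed here is conventionally written as follows (standard ISO 10534-2 form, reproduced in generic notation for context):

        R = \frac{H_{12} - e^{-jks}}{e^{jks} - H_{12}}\, e^{2jk x_1}, \qquad \alpha = 1 - |R|^2,

    where H_{12} is the measured transfer function between the two microphones, k the wavenumber, s the microphone spacing, x_1 the distance from the sample surface to the farther microphone, R the complex reflection coefficient, and \alpha the normal-incidence sound absorption coefficient.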

  1. The influence of soil pH and humus content on received by Mehlich 3 method nutrients analysis results

    NASA Astrophysics Data System (ADS)

    Tonutare, Tonu; Krebstein, Kadri; Rodima, Ako; Kõlli, Raimo; Künnapas, Allan; Rebane, Jaanus; Penu, Priit; Vennik, Kersti; Soobik, Liina

    2015-04-01

    Soils provide vital ecosystem functions, playing an important role in our economy and in a healthy living environment. However, soils are increasingly degrading in Europe and at the global level. Knowledge about the content of major plant available nutrients, i.e. calcium, magnesium, potassium and phosphorus, plays an important role in sustainable soil management. Mobility of nutrients depends directly on the environmental conditions; two of the most important factors are pH and organic matter content. Therefore, it is essential to have correct information about the content and behaviour of the above-named elements in soil, from both the environmental and the agronomical viewpoint. During the last decades, several extracting solutions suitable for the evaluation of the nutrient status of soils have been developed for this purpose. One of them, Mehlich 3, is widely used in the USA, Canada and some European countries (e.g. Estonia, the Czech Republic) because of its ability to extract several major plant nutrients from the soil simultaneously. There are several different instrumental methods used for the analysis of nutrient elements in the soil extract. Potassium, magnesium and calcium are widely analysed by the AAS (atomic absorption spectroscopic) method or by the ICP (inductively coupled plasma) spectroscopic methods. Molecular spectroscopy and ICP spectroscopy were used for the phosphorus determination. In 2011, a new multielemental instrumental method, MP-AES (microwave plasma atomic emission spectroscopy), was added to these. Due to its lower detection limits and multielemental character compared with AAS, and its lower operating costs compared with ICP, MP-AES has good potential to achieve a leading position in soil nutrient analysis in the future. The objective of this study was to investigate: (i) the impact of soil pH and humus content and (ii) the applicability of the MP-AES instrumental method for the determination of soil nutrients extracted

  2. [Analysis of proposals received and funded in discipline of microbiology of the National Natural Science Foundation of China from 2011 to 2015].

    PubMed

    Zhang, Xin; Li, Weimin; He, Jianwei; Wen, Mingzhang; Du, Quansheng

    2016-02-01

    Based on a wrap-up of the research proposals received and awards made during 2011 through 2015 in the discipline of microbiology of the Department of Life Sciences, National Natural Science Foundation of China, this article presents a statistical analysis of award recipient institutions and main research trends, and attempts a prospective prioritization of the funding areas from the perspectives of encouraging interdisciplinary research, optimizing funding instruments and strengthening talent training, with a view to providing a reference for scientists and researchers in the field of microbiology. PMID:27373064

  3. [Analysis of proposals received and funded in discipline of microbiology of the National Natural Science Foundation of China from 2011 to 2015].

    PubMed

    Zhang, Xin; Li, Weimin; He, Jianwei; Wen, Mingzhang; Du, Quansheng

    2016-02-01

    Based on a wrap-up of the research proposals received and awards made during 2011 through 2015 in the discipline of microbiology of the Department of Life Sciences, National Natural Science Foundation of China, this article presents a statistical analysis of award recipient institutions and main research trends, and attempts a prospective prioritization of the funding areas from the perspectives of encouraging interdisciplinary research, optimizing funding instruments and strengthening talent training, with a view to providing a reference for scientists and researchers in the field of microbiology.

  4. Treatment Outcome and Recursive Partitioning Analysis-Based Prognostic Factors in Patients With Esophageal Squamous Cell Carcinoma Receiving Preoperative Chemoradiotherapy

    SciTech Connect

    Kim, Min Kyoung; Kim, Sung-Bae; Ahn, Jin Hee; Kim, Yong Hee; Kim, Jong Hoon; Jung, Hwoon Yong; Lee, Gin Hyug; Choi, Kee Don; Song, Ho-Young; Shin, Ji Hoon; Cho, Kyung-Ja; Ryu, Jin-Sook; Park, Seung-Il

    2008-07-01

    Purpose: To analyze the clinical outcomes and devise a prognostic model for patients with operable esophageal carcinoma who underwent preoperative chemoradiotherapy (CRT). Methods and Materials: A total of 269 patients were enrolled into three clinical trials assessing preoperative CRT at our institution. We assessed the significance of the pretreatment and treatment factors with regard to tumor recurrence and long-term survival and used recursive partitioning analysis to create a decision tree. Results: At a median follow-up of 31 months for the surviving patients, the median overall survival of all 180 patients in this study was 31.8 months, and the 5-year overall survival rate was 33.9%. The median event-free survival was 24.1 months, and the 5-year event-free survival rate was 29.3%. Of the 180 patients, 129 (71.7%) also underwent esophagectomy, and the perioperative mortality rate was 7.8%. A pathologic complete response was achieved by 58 patients (45%). The 5-year overall survival rate was 57.1% for patients who attained a pathologic complete response and 22.4% for those with gross residual disease (p = 0.0008). Recursive partitioning analysis showed that female patients who achieved a clinical response and underwent esophagectomy had the most favorable prognosis (p <0.0001). Among the patients who underwent esophagectomy, the group with good performance status, clinical Stage II, and a major pathologic response to CRT had the most favorable prognosis (p = 0.0002). Conclusion: Although preoperative CRT was generally effective and well-tolerated, an individualized approach is necessary to improve outcomes. Strategies to increase the response and reduce treatment failure should be investigated.

  5. Single-frequency, dual-GNSS versus dual-frequency, single-GNSS: a low-cost and high-grade receivers GPS-BDS RTK analysis

    NASA Astrophysics Data System (ADS)

    Odolinski, Robert; Teunissen, Peter J. G.

    2016-11-01

    The concept of single-frequency, dual-system (SF-DS) real-time kinematic (RTK) positioning has become feasible since, for instance, the Chinese BeiDou Navigation Satellite System (BDS) has become operational in the Asia-Pacific region. The goal of the present contribution is to investigate the single-epoch RTK performance of such a dual-system and compare it to a dual-frequency, single-system (DF-SS). As the SF-DS we investigate the L1 GPS + B1 BDS model, and for DF-SS we take L1, L2 GPS and B1, B2 BDS, respectively. Two different locations in the Asia-Pacific region are analysed with varying visibility of the BDS constellation, namely Perth in Australia and Dunedin in New Zealand. To emphasize the benefits of such a model we also look into using low-cost ublox single-frequency receivers and compare such SF-DS RTK performance to that of a DF-SS, based on much more expensive survey-grade receivers. In this contribution a formal and empirical analysis is given. It will be shown that with the SF-DS higher elevation cut-off angles than the conventional 10° or 15° can be used. The experiment with low-cost receivers for the SF-DS reveals (for the first time) that it has the potential to achieve comparable ambiguity resolution performance to that of a DF-SS (L1, L2 GPS), based on the survey-grade receivers.

  6. Mechanical performance of physical-contact, multi-fiber optical connectors: Finite element analysis and semi-analytical model

    NASA Astrophysics Data System (ADS)

    Marin, Esteban B.; Tran, Hieu V.; Kobyakov, Andrey

    2016-07-01

    Three-dimensional finite element analysis of a physical-contact, multi-fiber optical connector was used to characterize fiber-to-fiber contact and to support the development and validation of a semi-analytical model (SAM) for the contact force. The contact behavior is determined by the elastic deformation of the system components (ferrule, fibers, and bonding adhesive) and by classical Hertzian contact at the fiber tips, effects that ultimately define the axial compliance of the system. Two 3-D finite element models of a 12-fiber connector are constructed to study the contact of two connectors, and specific numerical simulations are carried out to generate input data for the SAM, confirm the main assumptions made in its development, and numerically validate its predictions for the contact force. These simulations mainly consider non-uniform fiber height profiles and different end-face fiber tip geometries characterized by their radius of curvature. The numerically validated SAM is then used to study performance aspects of multi-fiber connectors related to the required contact force, namely finding fiber height profiles that require the minimum contact force and evaluating the throughput of polishing processes assuming a target contact force. Predictions are supported by Monte Carlo simulations and associated with current profile-geometry metrics.
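
    As a point of reference for the Hertzian ingredient of the model, the sketch below evaluates the textbook Hertz force-approach relation for two spherical fiber tips; the material constants (roughly fused silica), the tip radii, and the function name hertz_force are illustrative assumptions, not parameters or code taken from the paper.

        import math

        # Classical Hertz contact of two spherical end faces made of the same material.
        # Material values and geometry below are illustrative, not the paper's.
        def hertz_force(delta, r1, r2, young=72e9, poisson=0.17):
            """Return the contact force in newtons for a mutual approach delta (m)."""
            r_eff = 1.0 / (1.0 / r1 + 1.0 / r2)            # effective radius of curvature
            e_star = young / (2.0 * (1.0 - poisson ** 2))  # contact modulus, identical materials
            return (4.0 / 3.0) * e_star * math.sqrt(r_eff) * delta ** 1.5

        # Example: tips polished to ~10 mm radius of curvature, 0.5 micron mutual approach
        print(hertz_force(delta=0.5e-6, r1=10e-3, r2=10e-3))   # on the order of 1 N

    In the connector this per-fiber relation couples with the ferrule and adhesive compliance and with the measured fiber height profile, which is what the SAM assembles into a total contact force.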

  7. [Capillary isotachophoresis--a new method in drug analysis. 3. Analytic capillary isotachophoresis for the determination of amantadine and rimantadine].

    PubMed

    Jannasch, R

    1986-07-01

    Investigations by means of analytical capillary isotachophoresis are possible not only for ampholytes such as peptides or amino acids, but for weak and strong electrolytes in general. Amantadine and rimantadine are determined using a leading electrolyte containing K+ ions at pH 5.9-6.0 (terminating ion: creatinine+), at test-solution concentrations of greater than or equal to 600 nmol/ml and greater than or equal to nmol/ml, respectively, with a precision (relative S.D.) generally better than +/- 3% and an analysis time of less than 10 min. The discontinuous electrolyte system tested enables both active substances to be determined simultaneously. The lack of UV absorption of 1 and 2 is not a disadvantage, since a conductivity detector is used. The method was employed to carry out a content uniformity test for 2 tablets and for exploratory experiments on the dissolution in water of 2 hydrochloride and 1 hydrochloride from tablets and capsules, respectively.
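
    As a reminder of the principle being exploited (standard isotachophoresis theory rather than anything specific to this paper), all zones in the steady state migrate at the same velocity, so the local electric field adjusts inversely to the effective mobility:

        \[
          v = \mu_L E_L = \mu_i E_i
          \quad\Longrightarrow\quad
          E_i = \frac{\mu_L}{\mu_i}\, E_L ,
        \]

    where \mu_L and E_L refer to the leading electrolyte and \mu_i, E_i to the i-th sample zone. Because each zone then has its own conductivity and its length (rather than a peak height) scales with the amount of analyte, a conductivity detector suffices even though the analytes do not absorb in the UV.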

  8. On the propagation of concentration polarization from microchannel-nanochannel interfaces. Part I: Analytical model and characteristic analysis.

    PubMed

    Mani, Ali; Zangle, Thomas A; Santiago, Juan G

    2009-04-01

    We develop two models to describe ion transport in variable-height micro- and nanochannels. For the first model, we obtain a one-dimensional (unsteady) partial differential equation governing flow and charge transport through a shallow and wide electrokinetic channel. In this model, the effects of the electric double layer (EDL) on axial transport are taken into account using exact solutions of the Poisson-Boltzmann equation. The second, simpler model, which is approachable analytically, assumes that the EDLs are confined to near-wall regions. Using a characteristics analysis, we show that the latter model captures concentration polarization (CP) effects and provides useful insight into its dynamics. Two distinct CP regimes are identified: CP with propagation, in which enrichment and depletion shocks propagate outward, and CP without propagation, where polarization effects stay local to micro-nanochannel interfaces. The existence of each regime is found to depend on a nanochannel Dukhin number and on the co-ion mobility nondimensionalized by the electroosmotic mobility. Interestingly, microchannel dimensions and axial diffusion are found to play an insignificant role in determining whether CP propagates. The steady-state condition of propagating CP is shown to be controlled by channel heights, surface chemistry, and co-ion mobility rather than by the reservoir condition. Both models are validated against experimental results in Part II of this two-paper series.
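
    For orientation, one common definition of the Dukhin number for a slit-like nanochannel of height h is shown below; the exact nondimensionalization used by the authors may differ.

        \[
          \mathrm{Du} = \frac{\sigma_s}{\sigma_b\, h},
        \]

    with \sigma_s the surface (EDL) conductivity and \sigma_b the bulk electrolyte conductivity. A large Du indicates that surface conduction in the nanochannel dominates bulk conduction, the regime in which ion-selective behavior, and hence strong concentration polarization, is expected.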

  9. Analytical sedimentology

    SciTech Connect

    Lewis, D.W. (Dept. of Geology); McConchie, D.M. (Centre for Coastal Management)

    1994-01-01

    Both a self-instruction manual and a "cookbook" guide to field and laboratory analytical procedures, this book provides an essential reference for non-specialists. With a minimum of mathematics and virtually no theory, it introduces practitioners to easy, inexpensive options for sample collection and preparation, data acquisition, analytic protocols, result interpretation and verification techniques. This step-by-step guide considers the advantages and limitations of different procedures, discusses safety and troubleshooting, and explains support skills like mapping, photography and report writing. It also offers managers, off-site engineers and others using sediments data a quick course in commissioning studies and making the most of the reports. This manual will answer the growing needs of practitioners in the field, either alone or accompanied by Practical Sedimentology, which surveys the science of sedimentology and provides a basic overview of the principles behind the applications.

  10. Nondestructive atomic compositional analysis of BeMgZnO quaternary alloys using ion beam analytical techniques

    NASA Astrophysics Data System (ADS)

    Zolnai, Z.; Toporkov, M.; Volk, J.; Demchenko, D. O.; Okur, S.; Szabó, Z.; Özgür, Ü.; Morkoç, H.; Avrutin, V.; Kótai, E.

    2015-02-01

    The atomic composition was measured with less than 1-2 atom% uncertainty in ternary BeZnO and quaternary BeMgZnO alloys using a combination of nondestructive Rutherford backscattering spectrometry with a 1 MeV He+ analyzing ion beam and non-Rutherford elastic backscattering experiments with 2.53 MeV protons. An enhancement factor of 60 in the cross-section of Be for protons has been achieved to monitor Be atomic concentrations. Quantitative analysis of BeZnO and BeMgZnO systems is usually challenging because few experimental tools can detect the light element Be with satisfactory accuracy. As shown here, the applied ion beam technique, supported by detailed simulation of the ion stopping, backscattering, and detection processes, allows quantitative depth profiling and compositional analysis of wurtzite BeZnO/ZnO/sapphire and BeMgZnO/ZnO/sapphire layer structures with low uncertainty for both Be and Mg. In addition, the excitonic bandgaps of the layers were deduced from optical transmittance measurements. To complement the measured compositions and bandgaps of BeO and MgO co-alloyed ZnO layers, hybrid density functional bandgap calculations were performed for varying Be and Mg contents. The theoretical and experimental bandgaps show a linear correlation over the entire range studied, from 3.26 eV to 4.62 eV. The analytical method employed should help facilitate bandgap engineering for potential applications such as solar-blind UV photodetectors and heterostructures for UV emitters and intersubband devices.
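
    The quoted factor-of-60 enhancement is measured relative to the pure Coulomb (Rutherford) cross-section, whose textbook form is reproduced here for orientation and is not taken from the paper:

        \[
          \frac{d\sigma}{d\Omega}
          = \left( \frac{Z_1 Z_2 e^2}{16\pi\varepsilon_0 E} \right)^{\!2}
            \frac{1}{\sin^4(\theta/2)} ,
        \]

    where Z_1 and Z_2 are the atomic numbers of projectile and target, E the center-of-mass kinetic energy, and \theta the scattering angle. At 2.53 MeV the proton-Be interaction is partly nuclear ("non-Rutherford"), which is what boosts the otherwise small backscattering yield from the light Be nuclei.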

  11. Palonosetron versus older 5-HT3 receptor antagonists for nausea prevention in patients receiving chemotherapy: a multistudy analysis

    PubMed Central

    Morrow, Gary R; Schwartzberg, Lee; Barbour, Sally Y; Ballinari, Gianluca; Thorn, Michael D; Cox, David

    2015-01-01

    Background: No clinical standard currently exists for the optimal management of nausea induced by emetogenic chemotherapy, particularly delayed nausea. Objective: To compare the efficacy and safety of palonosetron with older 5-HT3 receptor antagonists (RAs) in preventing chemotherapy-induced nausea. Methods: Data were pooled from 4 similarly designed multicenter, randomized, double-blind clinical trials that compared single intravenous doses of palonosetron 0.25 mg or 0.75 mg with ondansetron 32 mg, dolasetron 100 mg, or granisetron 40 μg/kg, administered 30 minutes before moderately emetogenic chemotherapy (MEC) or highly emetogenic chemotherapy (HEC). Pooled data within each chemotherapy category (MEC: n = 1,132; HEC: n = 1,781) were analyzed by a logistic regression model. Nausea endpoints were complete control rates (ie, no more than mild nausea, no vomiting, and no rescue medication), nausea-free rates, nausea severity, and requirement for rescue antiemetic/antinausea medication over 5 days following chemotherapy. Pooled safety data were summarized descriptively. Results: Numerically more palonosetron-treated patients were nausea-free on each day, and fewer had moderate-severe nausea. Similarly, use of rescue medication was less frequent among palonosetron-treated patients. Complete control rates for palonosetron vs older 5-HT3 RAs were 66% vs 63% in the acute phase, 52% vs 42% in the delayed phase (24-120 hours), and 46% vs 37% in the overall phase. The incidence of adverse events was similar for palonosetron and older 5-HT3 RAs. Limitations: This post hoc analysis summarized data for palonosetron and several other 5-HT3 RAs but was not powered for statistical comparisons between individual agents. Because nausea is inherently subjective, the reliability of assessments of some aspects (eg, severity) may be influenced by interindividual variability. Conclusion: Palonosetron may be more effective than older 5-HT3 RAs in preventing nausea, with comparable safety.

  12. An analysis of sickness absence in chronically ill patients receiving Complementary and Alternative Medicine: A longterm prospective intermittent study

    PubMed Central

    Moebus, Susanne; Lehmann, Nils; Bödeker, Wolfgang; Jöckel, Karl-Heinz

    2006-01-01

    Background: The popularity of complementary and alternative medicine (CAM) has led to a growing amount of research in this area. All the same, little is known about the effects of these treatments in the everyday practice of primary care delivered by general practitioners within the health insurance system. From 1994 to 2000, more than 20 German company health insurances initiated the first model project on CAM according to German social law. The aim of this contribution is to investigate the effectiveness of multi-modal CAM for chronic diseases within primary health care. Methods: A long-term prospective intermittent study was conducted, including 44 CAM practitioners and 1221 self-selected chronically ill patients (64% women), of whom 441 were employed. The main outcome measure is sick leave, controlled for secular trends and regression to the mean, together with self-perceived health status. Results: Sick leave per year among the 441 patients at work increased from 22 (SD ± 45.2) to 31 (± 61.0) days within the three years prior to intervention, and decreased to 24 (± 55.6) in the second year of treatment, remaining at this level in the following two years. Detailed statistical analysis shows that this development exceeds secular trends and the regression-toward-the-mean effect. The sick-leave reduction was corroborated by data on self-reported improvement of patients' health status. Conclusion: The results of this long-term observational study show a reduction of sick leave in chronically ill patients after a complex multimodal CAM intervention. However, as this is an uncontrolled observational study, the efficacy of any specific CAM treatment cannot be proven. The results might indicate a general effectiveness of CAM in primary care that warrants further investigation. Future studies should identify the most suitable patients for CAM practices and the most appropriate and safe treatments, and provide information on the magnitude of the effects to facilitate subsequent definitive randomised controlled trials.

  13. Competing on talent analytics.

    PubMed

    Davenport, Thomas H; Harris, Jeanne; Shapiro, Jeremy

    2010-10-01

    Do investments in your employees actually affect workforce performance? Who are your top performers? How can you empower and motivate other employees to excel? Leading-edge companies such as Google, Best Buy, Procter & Gamble, and Sysco use sophisticated data-collection technology and analysis to answer these questions, leveraging a range of analytics to improve the way they attract and retain talent, connect their employee data to business performance, differentiate themselves from competitors, and more. The authors present the six key ways in which companies track, analyze, and use data about their people, ranging from a simple baseline of metrics to monitor the organization's overall health to custom modeling for predicting future head count depending on various "what if" scenarios. They go on to show that companies competing on talent analytics manage data and technology at an enterprise level, support what analytical leaders do, choose realistic targets for analysis, and hire analysts with strong interpersonal skills as well as broad expertise.

  14. Analytical Protocol (GC/ECNIMS) for OSWER's Response to OIG Report (2005-P-00022) on Toxaphene Analysis

    EPA Science Inventory

    The research approached the large number and complexity of the analytes as four separate groups: technical toxaphene, toxaphene congeners (eight in number), chlordane, and organochlorine pesticides. This approach was advantageous because it eliminated potential interferences amon...

  15. Analysis of Environmental Contamination resulting from Catastrophic Incidents: Part two: Building Laboratory Capability by Selecting and Developing Analytical Methodologies

    EPA Science Inventory

    Catastrophic incidents can generate a large number of samples with analytically diverse types including forensic, clinical, environmental, food, and others. Environmental samples include water, wastewater, soil, air, urban building and infrastructure materials, and surface resid...

  16. Spaceborne receivers: Basic principles

    NASA Technical Reports Server (NTRS)

    Stacey, J. M.

    1984-01-01

    The underlying principles of operation of microwave receivers for space observations of planetary surfaces were examined. The design philosophy of the receiver as it is applied to operate functionally as an efficient receiving system, the principle of operation of the key components of the receiver, and the important differences among receiver types are explained. The operating performance and the sensitivity expectations for both the modulated and total power receiver configurations are outlined. The expressions are derived from first principles and are developed through the important intermediate stages to form practical and easily applied equations. The transfer of thermodynamic energy from point to point within the receiver is illustrated. The language of microwave receivers is applied statistics.
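
    For orientation, the sensitivity expressions in question take the standard textbook form quoted below for the total-power and the modulated (Dicke-switched) configurations, respectively; the notation is generic rather than copied from the report.

        \[
          \Delta T_{\min} = \frac{T_{sys}}{\sqrt{B\,\tau}}
          \qquad\text{and}\qquad
          \Delta T_{\min} = \frac{2\,T_{sys}}{\sqrt{B\,\tau}} ,
        \]

    where T_sys is the system noise temperature, B the predetection bandwidth, and \tau the integration time; the factor of two reflects the half of the time a Dicke receiver spends observing the reference load.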

  17. Solar heat receiver

    DOEpatents

    Hunt, Arlon J.; Hansen, Leif J.; Evans, David B.

    1985-01-01

    A receiver for converting solar energy to heat a gas to temperatures from 700°-900° C. The receiver is formed to minimize impingement of radiation on the walls and to provide maximum heating at and near the entry of the gas exit. Also, the receiver is formed to provide controlled movement of the gas to be heated to minimize wall temperatures. The receiver is designed for use with gas containing fine heat absorbing particles, such as carbon particles.

  18. Solar heat receiver

    DOEpatents

    Hunt, A.J.; Hansen, L.J.; Evans, D.B.

    1982-09-29

    A receiver is described for converting solar energy to heat a gas to temperatures from 700 to 900 °C. The receiver is formed to minimize impingement of radiation on the walls and to provide maximum heating at and near the entry of the gas exit. Also, the receiver is formed to provide controlled movement of the gas to be heated to minimize wall temperatures. The receiver is designed for use with gas containing fine heat absorbing particles, such as carbon particles.

  19. Analytical modeling for vibration analysis of thin rectangular orthotropic/functionally graded plates with an internal crack

    NASA Astrophysics Data System (ADS)

    Joshi, P. V.; Jain, N. K.; Ramtekkar, G. D.

    2015-05-01

    An analytical model is presented for vibration analysis of a thin orthotropic and general functionally graded rectangular plate containing an internal crack located at the center. The continuous line crack is parallel to one of the edges of the plate. The equation of motion of the orthotropic plate is derived using the equilibrium principle. The crack terms are formulated using the Line Spring Model. The effect of the location of the crack along the thickness of the plate on the natural frequencies is analyzed using appropriate crack compliance coefficients in the Line Spring Model. By using the Berger formulation for the in-plane forces, the derived equation of motion of the cracked plate is transformed into a cubic nonlinear system. Applying Galerkin's method, the equation is converted into the well-known Duffing equation. The peak amplitude is obtained by employing the method of multiple scales, and the effect of nonlinearity is also established by deriving the frequency response equation of the cracked plate with the same method. The influence of crack length, boundary conditions, and crack location along the thickness on the natural frequencies of square and rectangular plates is demonstrated. It is found that the vibration characteristics are affected by the length and location of the crack along the thickness of the plate. It is thus deduced that the natural frequencies are lowest when the crack is internal and its depth is symmetric about the mid-plane of the plate, for all three boundary conditions considered. Further, it is concluded that the presence of a crack across the fibers decreases the frequency more than a crack along the fibers. The effect of varying the elasticity ratio on the fundamental frequencies of the cracked plate is also established.
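
    For orientation, the reduction described above leads to an equation of the generic Duffing form written below; the coefficients and the harmonic forcing are schematic placeholders rather than the paper's specific expressions.

        \[
          \ddot{\psi} + 2\mu\,\dot{\psi} + \omega_n^2\,\psi + \beta\,\psi^3 = F\cos(\Omega t),
        \]

    and the method of multiple scales then yields the familiar frequency-response relation for the steady-state amplitude a at detuning \sigma = \Omega - \omega_n,

        \[
          \left[ \mu^2 + \left( \sigma - \frac{3\beta a^2}{8\omega_n} \right)^{\!2} \right] a^2
          = \frac{F^2}{4\omega_n^2},
        \]

    whose peak value a = F/(2\mu\omega_n) is the kind of peak amplitude referred to in the abstract.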