Sample records for array-valued statistical objects

  1. Manipulating Environmental Time Series in Python/Numpy: the Scikits.Timeseries Package and its Applications.

    NASA Astrophysics Data System (ADS)

    Gerard-Marchant, P. G.

    2008-12-01

    Numpy is a free, open source C/Python interface designed for the fast and convenient manipulation of multidimensional numerical arrays. The base object, ndarray, can also easily be extended to define new objects meeting specific needs. Thanks to its simplicity, efficiency and modularity, numpy and its companion library Scipy have become increasingly popular in the scientific community over the last few years, with applications ranging from astronomy and engineering to finance and statistics. Its capacity to handle missing values is particularly appealing when analyzing environmental time series, where irregular data sampling might be an issue. After reviewing the main characteristics of numpy objects and the mechanism of subclassing, we will present the scikits.timeseries package, developed to manipulate single- and multi-variable arrays indexed in time. We will illustrate some typical applications of this package by introducing climpy, a set of extensions designed to help analyze the impacts of climate variability on environmental data such as precipitation or streamflow.
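    The missing-value handling described above can be illustrated with plain numpy.ma, the masked-array machinery that scikits.timeseries builds on (a minimal sketch with invented data, not the package's own API):

```python
import numpy as np

# Invented sample: monthly precipitation with two missing observations (NaN).
rain = np.ma.masked_invalid([2.0, np.nan, 0.5, 1.5, np.nan, 3.0])

mean_rain = rain.mean()         # masked entries are excluded from the statistic
n_missing = int(rain.mask.sum())
print(mean_rain, n_missing)     # 1.75 2
```

    Because the mask travels with the array, downstream statistics never silently zero-fill the gaps.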

  2. Perception of the average size of multiple objects in chimpanzees (Pan troglodytes).

    PubMed

    Imura, Tomoko; Kawakami, Fumito; Shirai, Nobu; Tomonaga, Masaki

    2017-08-30

    Humans can extract statistical information, such as the average size of a group of objects or the general emotion of faces in a crowd without paying attention to any individual object or face. To determine whether summary perception is unique to humans, we investigated the evolutional origins of this ability by assessing whether chimpanzees, which are closely related to humans, can also determine the average size of multiple visual objects. Five chimpanzees and 18 humans were able to choose the array in which the average size was larger, when presented with a pair of arrays, each containing 12 circles of different or the same sizes. Furthermore, both species were more accurate in judging the average size of arrays consisting of 12 circles of different or the same sizes than they were in judging the average size of arrays consisting of a single circle. Our findings could not be explained by the use of a strategy in which the chimpanzee detected the largest or smallest circle among those in the array. Our study provides the first evidence that chimpanzees can perceive the average size of multiple visual objects. This indicates that the ability to compute the statistical properties of a complex visual scene is not unique to humans, but is shared between both species. © 2017 The Authors.

  3. Perception of the average size of multiple objects in chimpanzees (Pan troglodytes)

    PubMed Central

    Kawakami, Fumito; Shirai, Nobu; Tomonaga, Masaki

    2017-01-01

    Humans can extract statistical information, such as the average size of a group of objects or the general emotion of faces in a crowd without paying attention to any individual object or face. To determine whether summary perception is unique to humans, we investigated the evolutional origins of this ability by assessing whether chimpanzees, which are closely related to humans, can also determine the average size of multiple visual objects. Five chimpanzees and 18 humans were able to choose the array in which the average size was larger, when presented with a pair of arrays, each containing 12 circles of different or the same sizes. Furthermore, both species were more accurate in judging the average size of arrays consisting of 12 circles of different or the same sizes than they were in judging the average size of arrays consisting of a single circle. Our findings could not be explained by the use of a strategy in which the chimpanzee detected the largest or smallest circle among those in the array. Our study provides the first evidence that chimpanzees can perceive the average size of multiple visual objects. This indicates that the ability to compute the statistical properties of a complex visual scene is not unique to humans, but is shared between both species. PMID:28835550

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jassal, K; Sarkar, B; Mohanti, B

    Objective: The study presents the application of a simple concept of statistical process control (SPC) for pre-treatment quality assurance procedure analysis for planar dose measurements performed using a 2D array and an a-Si electronic portal imaging device (a-Si EPID). Method: A total of 195 patients of four different anatomical sites: brain (n1=45), head & neck (n2=45), thorax (n3=50) and pelvis (n4=55) were selected for the study. Pre-treatment quality assurance for the clinically acceptable IMRT/VMAT plans was measured with the 2D array and the a-Si EPID of the accelerator. After the γ-analysis, control charts and the quality index Cpm were evaluated for each cohort. Results: Mean and σ of γ (3%/3 mm) were EPID γ%≤1 = 99.9% ± 1.15% and array γ%<1 = 99.6% ± 1.06%. Among all plans, γmax was consistently lower for the 2D array as compared to the a-Si EPID. Fig. 1 presents the X-bar control charts for every cohort. Cpm values for the a-Si EPID were found to be higher than for the array; detailed results are presented in Table 1. Conclusion: The present study demonstrates the significance of control charts used for quality management purposes in newer radiotherapy clinics. It also provides a pictorial overview of clinic performance for advanced radiotherapy techniques. Higher Cpm values for the EPID indicate its higher efficiency than array-based measurements.

  5. Estimation of the geochemical threshold and its statistical significance

    USGS Publications Warehouse

    Miesch, A.T.

    1981-01-01

    A statistic is proposed for estimating the geochemical threshold and its statistical significance; alternatively, it may be used to identify a group of extreme values that can be tested for significance by other means. The statistic is the maximum gap between adjacent values in an ordered array after each gap has been adjusted for the expected frequency. The values in the ordered array are geochemical values transformed by either ln(x − α) or ln(β − x) and then standardized so that the mean is zero and the variance is unity. The expected frequency is taken from a fitted normal curve with unit area. The midpoint of an adjusted gap that exceeds the corresponding critical value may be taken as an estimate of the geochemical threshold, and the associated probability indicates the likelihood that the threshold separates two geochemical populations. The adjusted gap test may fail to identify threshold values if the variation tends to be continuous from background values to the higher values that reflect mineralized ground. However, the test will serve to identify other anomalies that may be too subtle to have been noted by other means. © 1981.
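    The adjusted-gap idea can be sketched roughly as follows (an illustrative reconstruction, not Miesch's published procedure; weighting each gap by the fitted unit-area normal density at its midpoint stands in for the "expected frequency" adjustment, and the data are invented):

```python
import numpy as np

def adjusted_max_gap(x):
    """Return the largest density-adjusted gap and its midpoint
    (a candidate geochemical threshold) for a 1-D array of values."""
    z = np.sort((x - x.mean()) / x.std())      # standardize: mean 0, variance 1
    gaps = np.diff(z)                          # gaps between adjacent values
    mids = (z[:-1] + z[1:]) / 2.0
    # unit-area normal density at each gap midpoint (the expected frequency)
    density = np.exp(-mids**2 / 2.0) / np.sqrt(2.0 * np.pi)
    adj = gaps * density                       # adjust each gap for frequency
    i = int(np.argmax(adj))
    return adj[i], mids[i]

# Invented data: a background cluster and a small anomalous group.
values = np.array([1.0, 1.1, 1.2, 1.3, 5.0, 5.1, 5.2])
stat, threshold = adjusted_max_gap(values)     # threshold falls between groups
```

    On these invented values the maximum adjusted gap separates the background cluster from the anomalous one, which is the behavior the test exploits.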

  6. Object detection with a multistatic array using singular value decomposition

    DOEpatents

    Hallquist, Aaron T.; Chambers, David H.

    2014-07-01

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across a surface and that travels down the surface. The detection system converts the return signals from a time domain to a frequency domain, resulting in frequency return signals. The detection system then performs a singular value decomposition for each frequency to identify singular values for each frequency. The detection system then detects the presence of a subsurface object based on a comparison of the identified singular values to expected singular values when no subsurface object is present.
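    The frequency-domain SVD step might look like the sketch below (array shapes, the rank-1 scene, and the detection rule are my assumptions for illustration, not the patent's implementation):

```python
import numpy as np

def leading_singular_values(returns):
    """returns: shape (N, N, T) -- one time series per transmitter-receiver
    pair. Returns the leading singular value at each frequency bin."""
    spectra = np.fft.rfft(returns, axis=2)     # time -> frequency domain
    return np.array([np.linalg.svd(spectra[:, :, k], compute_uv=False)[0]
                     for k in range(spectra.shape[2])])

# Invented scene: a rank-1 scatterer response at frequency bin 8 plus noise.
rng = np.random.default_rng(0)
N, T = 4, 64
t = np.arange(T)
a = np.ones(N) / 2.0                           # unit-norm steering vector
signal = np.einsum('i,j,k->ijk', a, a, np.sin(2 * np.pi * 8 * t / T))
returns = signal + 0.01 * rng.standard_normal((N, N, T))

s1 = leading_singular_values(returns)
detected_bin = int(np.argmax(s1))              # the object's frequency stands out
```

    A subsurface object would then be flagged where `s1` exceeds the singular values expected for the object-free medium.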

  7. Detection of Buried Objects by Means of a SAP Technique: Comparing MUSIC- and SVR-Based Approaches

    NASA Astrophysics Data System (ADS)

    Meschino, S.; Pajewski, L.; Pastorino, M.; Randazzo, A.; Schettini, G.

    2012-04-01

    This work is focused on the application of a Sub-Array Processing (SAP) technique to the detection of metallic cylindrical objects embedded in a dielectric half-space. The identification of buried cables, pipes, conduits, and other cylindrical utilities is an important problem that has been extensively studied in recent years. The most commonly used approaches are based on electromagnetic sensing: a set of antennas illuminates the ground, and the collected echo is analyzed in order to extract information about the scenario and to localize the sought objects [1]. In a SAP approach, algorithms for the estimation of Directions of Arrival (DOAs) are employed [2]: they assume that the sources (in this paper, currents induced on buried targets) are in the far-field region of the receiving array, so that the received wavefront can be considered planar and the main angular direction of the field can be estimated. However, in electromagnetic sensing of buried objects, the scatterers are usually quite near to the antennas. Nevertheless, by dividing the whole receiving array into a suitable number of sub-arrays, and by finding a dominant DOA for each one, it is possible to localize objects that are in the far-field of the sub-array while being in the near-field of the full array. The DOAs found by the sub-arrays can be triangulated, yielding a set of crossings with intersections condensed around object locations. In this work, the performance of two different DOA algorithms is compared. In particular, a MUltiple SIgnal Classification (MUSIC)-type method [3] and a Support Vector Regression (SVR)-based approach [4] are employed. The results of a Cylindrical-Wave Approach forward solver are used as input data for the detection procedure [5]. To process the crossing pattern, the region of interest is divided into small windows, and a Poisson model is adopted for the statistical distribution of intersections in the windows. 
Hypothesis testing procedures are used (imposing a suitable threshold derived from a desired false-alarm rate) to ascribe each window to the ground or to the sought objects. Numerical results are presented for a test scenario with a circular-section cylinder in a dielectric half-space. Different values of the ground permittivity, the target size, and its position with respect to the receiving array are considered. Preliminary results on the application of MUSIC and SVR to multiple-object localization are reported. [1] H. Jol, Ground Penetrating Radar: Theory and Applications, Elsevier, Amsterdam, NL, 2009. [2] F. B. Gross, Smart Antennas for Wireless Communications, McGraw-Hill, New York, 2005. [3] S. Meschino, L. Pajewski, G. Schettini, "Use of a Sub-Array Statistical Approach for the Detection of a Buried Object," Near Surface Geophysics, vol. 8(5), pp. 365-375, 2010. [4] M. Pastorino, A. Randazzo, "A Smart Antenna System for Direction of Arrival Estimation based on a Support Vector Regression," IEEE Trans. Antennas Propagat., vol. 53(7), pp. 2161-2168, 2005. [5] M. Di Vico, F. Frezza, L. Pajewski, G. Schettini, "Scattering by a Finite Set of Perfectly Conducting Cylinders Buried in a Dielectric Half-Space: a Spectral-Domain Solution," IEEE Trans. Antennas Propagat., vol. 53(2), pp. 719-727, 2005.

  8. Calibration of a fluxgate magnetometer array and its application in magnetic object localization

    NASA Astrophysics Data System (ADS)

    Pang, Hongfeng; Luo, Shitu; Zhang, Qi; Li, Ji; Chen, Dixiang; Pan, Mengchun; Luo, Feilu

    2013-07-01

    The magnetometer array is effective for magnetic object detection and localization. Calibration is important to improve the accuracy of the magnetometer array. A magnetic sensor array built with four three-axis DM-050 fluxgate magnetometers is designed, which is connected by a cross aluminum frame. In order to improve the accuracy of the magnetometer array, a calibration process is presented. The calibration process includes magnetometer calibration, coordinate transformation and misalignment calibration. The calibration system consists of a magnetic sensor array, a GSM-19T proton magnetometer, a two-dimensional nonmagnetic rotation platform, a 12 V-dc portable power device and two portable computers. After magnetometer calibration, the RMS error has been decreased from an original value of 125.559 nT to a final value of 1.711 nT (a factor of 74). After alignment, the RMS error of misalignment has been decreased from 1322.3 to 6.0 nT (a factor of 220). Then, the calibrated array deployed on the nonmagnetic rotation platform is used for ferromagnetic object localization. Experimental results show that the estimated errors of X, Y and Z axes are -0.049 m, 0.008 m and 0.025 m, respectively. Thus, the magnetometer array is effective for magnetic object detection and localization in three dimensions.

  9. Testing the system detection unit for measuring solid minerals bulk density

    NASA Astrophysics Data System (ADS)

    Voytyuk, I. N.; Kopteva, A. V.

    2017-10-01

    The paper provides a brief description of a system for measuring flux per volume of solid minerals, using mineral coal as an example. The paper discloses the operational principle of the detection unit and gives a full description of the testing methodology, as well as the practical implementation of the detection unit testing. It describes the acquisition of two data arrays via the channels of scattered and direct radiation for detection units of two generations, and the Matlab software used to determine the statistical characteristics of the studied objects. The mean value of pulses per cycle, and the pulse-counting inaccuracy relative to the mean value, were determined to assess the counting stability of the detection units.

  10. Distinct contributions of attention and working memory to visual statistical learning and ensemble processing.

    PubMed

    Hall, Michelle G; Mattingley, Jason B; Dux, Paul E

    2015-08-01

    The brain exploits redundancies in the environment to efficiently represent the complexity of the visual world. One example of this is ensemble processing, which provides a statistical summary of elements within a set (e.g., mean size). Another is statistical learning, which involves the encoding of stable spatial or temporal relationships between objects. It has been suggested that ensemble processing over arrays of oriented lines disrupts statistical learning of structure within the arrays (Zhao, Ngo, McKendrick, & Turk-Browne, 2011). Here we asked whether ensemble processing and statistical learning are mutually incompatible, or whether this disruption might occur because ensemble processing encourages participants to process the stimulus arrays in a way that impedes statistical learning. In Experiment 1, we replicated Zhao and colleagues' finding that ensemble processing disrupts statistical learning. In Experiments 2 and 3, we found that statistical learning was unimpaired by ensemble processing when task demands necessitated (a) focal attention to individual items within the stimulus arrays and (b) the retention of individual items in working memory. Together, these results are consistent with an account suggesting that ensemble processing and statistical learning can operate over the same stimuli given appropriate stimulus processing demands during exposure to regularities. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  11. Statistical Investigation of the Effect of Process Parameters on the Shear Strength of Metal Adhesive Joints

    NASA Astrophysics Data System (ADS)

    Rajkumar, Goribidanur Rangappa; Krishna, Munishamaih; Narasimhamurthy, Hebbale Narayanrao; Keshavamurthy, Yalanabhalli Channegowda

    2017-06-01

    The objective of the work was to optimize sheet-metal joining parameters, namely adhesive material, adhesive thickness, adhesive overlap length and surface roughness, for the shear strength of single-lap joints of aluminium sheet using robust design. An orthogonal array, main effect plots, the signal-to-noise ratio and analysis of variance were employed to investigate the shear strength of the joints. The statistical results show that vinyl ester is the best candidate among the three polymers (epoxy, polyester and vinyl ester) owing to its low viscosity. The experimental results show that an adhesive thickness of 0.6 mm, an overlap length of 50 mm and a surface roughness of 2.12 µm yield the maximum shear strength of the Al sheet joints. The ANOVA results show that overlap length is one of the most significant factors affecting joint strength, in addition to adhesive thickness, adhesive material, and surface roughness. A confirmation test was carried out because the optimal combination of parameters did not match any of the experiments in the orthogonal array.
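    For reference, the larger-the-better signal-to-noise ratio used in this kind of Taguchi-style robust-design analysis follows a standard textbook formula (the replicate data below are invented, not the paper's measurements):

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-the-better S/N ratio, in dB: -10 log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Invented replicates of a shear-strength measurement (MPa):
sn = sn_larger_is_better([10.0, 10.0, 10.0])
print(round(sn, 2))   # 20.0
```

    The factor levels maximizing this ratio are the ones an orthogonal-array experiment would select as robust optima.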

  12. Real-time human versus animal classification using pyro-electric sensor array and Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Hossen, Jakir; Jacobs, Eddie L.; Chari, Srikant

    2014-03-01

    In this paper, we propose a real-time human versus animal classification technique using a pyro-electric sensor array and a Hidden Markov Model (HMM). The technique starts with variational energy functional level-set segmentation to separate the object from the background. After segmentation, we convert the segmented object to a signal by considering column-wise pixel values and then finding the wavelet coefficients of the signal. HMMs are trained to statistically model the wavelet features of individuals through an expectation-maximization learning process. Human versus animal classifications are made by evaluating a set of new wavelet feature data against the trained HMMs using the maximum-likelihood criterion. Human and animal data acquired using a pyro-electric sensor in different terrains are used for performance evaluation of the algorithms. Failures of the computationally efficient SURF-feature-based approach that we developed in our previous research stem from the distorted images produced when the object moves very fast or when the temperature difference between target and background is insufficient to accurately profile the object. We show that wavelet-based HMMs work well for handling some of the distorted profiles in the data set. Further, the HMM achieves an improved classification rate over the SURF algorithm with almost the same computational time.

  13. Objective Assessment and Design Improvement of a Staring, Sparse Transducer Array by the Spatial Crosstalk Matrix for 3D Photoacoustic Tomography

    PubMed Central

    Kosik, Ivan; Raess, Avery

    2015-01-01

    Accurate reconstruction of 3D photoacoustic (PA) images requires detection of photoacoustic signals from many angles. Several groups have adopted staring ultrasound arrays, but assessment of array performance has been limited. We previously reported on a method to calibrate a 3D PA tomography (PAT) staring array system and analyze system performance using singular value decomposition (SVD). The developed SVD metric, however, was impractical for large system matrices, which are typical of 3D PAT problems. The present study consisted of two main objectives. The first objective aimed to introduce the crosstalk matrix concept to the field of PAT for system design. Figures-of-merit utilized in this study were root mean square error, peak signal-to-noise ratio, mean absolute error, and a three dimensional structural similarity index, which were derived between the normalized spatial crosstalk matrix and the identity matrix. The applicability of this approach for 3D PAT was validated by observing the response of the figures-of-merit in relation to well-understood PAT sampling characteristics (i.e. spatial and temporal sampling rate). The second objective aimed to utilize the figures-of-merit to characterize and improve the performance of a near-spherical staring array design. Transducer arrangement, array radius, and array angular coverage were the design parameters examined. We observed that the performance of a 129-element staring transducer array for 3D PAT could be improved by selection of optimal values of the design parameters. The results suggested that this formulation could be used to objectively characterize 3D PAT system performance and would enable the development of efficient strategies for system design optimization. PMID:25875177

  14. Surveying Low-Mass Star Formation with the Submillimeter Array

    NASA Astrophysics Data System (ADS)

    Dunham, Michael

    2018-01-01

    Large astronomical surveys yield important statistical information that can’t be derived from single-object and small-number surveys. In this talk I will review two recent surveys in low-mass star formation undertaken by the Submillimeter Array (SMA): a millimeter continuum survey of disks surrounding variably accreting young stars, and a complete continuum and molecular line survey of all protostars in the nearby Perseus Molecular Cloud. I will highlight several new insights into the processes by which low-mass stars gain their mass that have resulted from the statistical power of these surveys.

  15. ArrayVigil: a methodology for statistical comparison of gene signatures using segregated-one-tailed (SOT) Wilcoxon's signed-rank test.

    PubMed

    Khan, Haseeb Ahmad

    2005-01-28

    Due to their versatile diagnostic and prognostic fidelity, molecular signatures or fingerprints are anticipated to be among the most powerful tools for cancer management in the near future. Notwithstanding the experimental advancements in microarray technology, methods for analyzing either whole arrays or gene signatures have not been firmly established. Recently, an algorithm, ArraySolver, was reported by Khan for two-group comparison of microarray gene expression data using the two-tailed Wilcoxon signed-rank test. Most molecular signatures are composed of two sets of genes (hybrid signatures) wherein up-regulation of one set and down-regulation of the other set collectively define the purpose of the signature. Since the direction of a selected gene's expression (positive or negative) with respect to a particular disease condition is known, application of one-tailed statistics could be a more relevant choice. A novel method, ArrayVigil, is described for comparing hybrid signatures using a segregated-one-tailed (SOT) Wilcoxon signed-rank test, and the results are compared with integrated-two-tailed (ITT) procedures (SPSS and ArraySolver). ArrayVigil resulted in lower P values than those obtained from ITT statistics when comparing real data from four signatures.
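    A minimal sketch of the segregated-one-tailed idea (my reconstruction using scipy's signed-rank test, not the published ArrayVigil code; the 16-gene signature and expression values are invented):

```python
import numpy as np
from scipy.stats import wilcoxon

def sot_wilcoxon(case, control, up_mask):
    """Test the up- and down-regulated halves of a hybrid signature
    separately, each with the appropriately directed one-tailed test."""
    p_up = wilcoxon(case[up_mask], control[up_mask],
                    alternative='greater').pvalue
    p_down = wilcoxon(case[~up_mask], control[~up_mask],
                      alternative='less').pvalue
    return p_up, p_down

# Invented hybrid signature: first 8 genes up-regulated, last 8 down-regulated.
control = np.arange(16, dtype=float)
case = control.copy()
up = np.zeros(16, dtype=bool)
up[:8] = True
case[:8] += np.linspace(0.5, 4.0, 8)    # consistently higher in cases
case[8:] -= np.linspace(0.5, 4.0, 8)    # consistently lower in cases

p_up, p_down = sot_wilcoxon(case, control, up)
```

    Because each half is tested in its known direction, the one-tailed P values are smaller than a single two-tailed test over the pooled signature would give, which is the effect the abstract reports.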

  16. The statistics of Pearce element diagrams and the Chayes closure problem

    NASA Astrophysics Data System (ADS)

    Nicholls, J.

    1988-05-01

    Pearce element ratios are defined as having a constituent in their denominator that is conserved in a system undergoing change. The presence of a conserved element in the denominator simplifies the statistics of such ratios and renders them subject to statistical tests, especially tests of significance of the correlation coefficient between Pearce element ratios. Pearce element ratio diagrams provide unambiguous tests of petrologic hypotheses because they are based on the stoichiometry of rock-forming minerals. There are three ways to recognize a conserved element: 1. The petrologic behavior of the element can be used to select conserved ones. They are usually the incompatible elements. 2. The ratio of two conserved elements will be constant in a comagmatic suite. 3. An element ratio diagram that is not constructed with a conserved element in the denominator will have a trend with a near zero intercept. The last two criteria can be tested statistically. The significance of the slope, intercept and correlation coefficient can be tested by estimating the probability of obtaining the observed values from a random population of arrays. This population of arrays must satisfy two criteria: 1. The population must contain at least one array that has the means and variances of the array of analytical data for the rock suite. 2. Arrays with the means and variances of the data must not be so abundant in the population that nearly every array selected at random has the properties of the data. The population of random closed arrays can be obtained from a population of open arrays whose elements are randomly selected from probability distributions. The means and variances of these probability distributions are themselves selected from probability distributions which have means and variances equal to a hypothetical open array that would give the means and variances of the data on closure. This hypothetical open array is called the Chayes array. 
Alternatively, the population of random closed arrays can be drawn from the compositional space available to rock-forming processes. The minerals comprising the available space can be described with one additive component per mineral phase and a small number of exchange components. This space is called Thompson space. Statistics based on either space lead to the conclusion that Pearce element ratios are statistically valid and that Pearce element diagrams depict the processes that create chemical inhomogeneities in igneous rock suites.

  17. High Contrast Programmable Field Masks for JWST NIRSpec

    NASA Technical Reports Server (NTRS)

    Kutyrev, Alexander S.

    2008-01-01

    Microshutter arrays are one of the novel technologies developed for the James Webb Space Telescope (JWST). They will allow the Near Infrared Spectrometer (NIRSpec) to acquire spectra of hundreds of objects simultaneously, thereby increasing its efficiency tremendously. We have developed these programmable arrays based on Micro-Electro-Mechanical Structures (MEMS) technology. The arrays are 2D-addressable masks that can operate in the cryogenic environment of JWST. Since the primary JWST science requires acquisition of spectra of extremely faint objects, it is important to provide a very high contrast between open and closed shutters. This high contrast is necessary to eliminate any possible contamination and confusion in the acquired spectra by unwanted objects. We have developed and built a test system for microshutter array functional and optical characterization. This system is capable of measuring the contrast of a microshutter array in both visible and infrared light across the NIRSpec wavelength range while the arrays are in their working cryogenic environment. We have measured the contrast ratio of several microshutter arrays and demonstrated that they satisfy, and in many cases far exceed, the NIRSpec contrast requirement of 2000.

  18. Streamwise evolution of statistical events and the triple correlation in a model wind turbine array

    NASA Astrophysics Data System (ADS)

    Viestenz, Kyle; Cal, Raúl Bayoán

    2013-11-01

    Hot-wire anemometry data, obtained from a wind tunnel experiment containing a 3 × 3 wind turbine array, are used to conditionally average the Reynolds stresses. Nine profiles at the centerline behind the array are analyzed to characterize the turbulent velocity statistics of the wake flow. Quadrant analysis yields the statistical events occurring in the wake of the wind farm, where quadrants 2 and 4 produce ejections and sweeps, respectively. The balance between these quadrants is expressed via the ΔSo parameter, which attains a maximum value at the bottom tip and changes sign near the top tip of the rotor. These events are then associated with the triple-correlation term present in the turbulent kinetic energy equation of the fluctuations. The development of these various quantities is assessed in light of wake remediation and energy transport, and possesses significance for closure models. National Science Foundation: ECCS-1032647.
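    The quadrant decomposition referred to above can be sketched in a few lines (illustrative only, with invented fluctuation samples; the exact normalization of the ΔSo parameter in the paper is not reproduced here):

```python
import numpy as np

def quadrant_contributions(u, v):
    """Contributions of ejections (Q2) and sweeps (Q4) to the Reynolds
    shear stress <u'v'>."""
    up, vp = u - u.mean(), v - v.mean()            # velocity fluctuations
    uv = up * vp
    q2 = uv[(up < 0) & (vp > 0)].sum() / len(u)    # ejections: u' < 0, v' > 0
    q4 = uv[(up > 0) & (vp < 0)].sum() / len(u)    # sweeps:    u' > 0, v' < 0
    return q2, q4

# Invented samples, perfectly anti-correlated so every event is Q2 or Q4:
u = np.array([1.0, -1.0, 1.0, -1.0])
v = np.array([-1.0, 1.0, -1.0, 1.0])
q2, q4 = quadrant_contributions(u, v)    # both negative: downward momentum flux
```

    A difference such as q2 − q4 then quantifies the ejection/sweep imbalance that the abstract tracks across the rotor.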

  19. Cochlear Implant Electrode Array From Partial to Full Insertion in Non-Human Primate Model.

    PubMed

    Manrique-Huarte, Raquel; Calavia, Diego; Gallego, Maria Antonia; Manrique, Manuel

    2018-04-01

    To determine the feasibility of progressive insertion (two sequential surgeries: partial to full insertion) of an electrode array and to compare functional outcomes, 8 normal-hearing animals (Macaca fascicularis, MF) were included. A 14-contact electrode array, suitably sized for the MF cochlea, was partially inserted (PI) in 16 ears. After 3 months of follow-up, revision surgery advanced the electrode to a full insertion (FI) in 8 ears. Radiological examination and auditory testing were performed monthly for 6 months. To compare the values, a two-way repeated-measures ANOVA was used; a p-value below 0.05 was considered statistically significant (IBM SPSS Statistics V20). The surgical procedure was completed in all cases with no complications. The mean auditory threshold shift (ABR, click tones) after 6 months of follow-up was 19 dB for the PI group and 27 dB for the FI group. For frequencies of 4, 6, 8, 12, and 16 kHz in the FI group, tone-burst auditory thresholds increased after the revision surgery, showing no recovery thereafter. The mean threshold shift at 6 months of follow-up was 19.8 dB (range 2-36 dB) for the PI group and 33.14 dB (range 8-48 dB) for the FI group. Statistical analysis yielded no significant differences between groups. It is feasible to perform a partial insertion of an electrode array and advance it in a second surgical stage to a full insertion (up to 270º). Hearing preservation is feasible for both procedures. Note that minimal threshold deterioration was observed in the full-insertion group, especially at high frequencies, with no statistical differences.

  20. Compositionality and Statistics in Adjective Acquisition: 4-Year-Olds Interpret "Tall" and "Short" Based on the Size Distributions of Novel Noun Referents

    ERIC Educational Resources Information Center

    Barner, David; Snedeker, Jesse

    2008-01-01

    Four experiments investigated 4-year-olds' understanding of adjective-noun compositionality and their sensitivity to statistics when interpreting scalar adjectives. In Experiments 1 and 2, children selected "tall" and "short" items from 9 novel objects called "pimwits" (1-9 in. in height) or from this array plus 4 taller or shorter distractor…

  1. Total RNA Sequencing Analysis of DCIS Progressing to Invasive Breast Cancer

    DTIC Science & Technology

    2015-09-01

    EPICOPY to obtain reliable copy number variation (CNV) data from the methylome array data, thereby decreasing the DNA requirements in half...in the R statistical environment. Samples were assessed for good performance on the array using detection p-values, a metric implemented by...Illumina to identify probes detected with confidence. Samples with less than 90% of probes detected were removed from the analysis and probes undetected in any

  2. A Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) Determined from Phased Microphone Arrays

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Humphreys, William M.

    2006-01-01

    Current processing of acoustic array data is burdened with considerable uncertainty. This study reports an original methodology that serves to demystify array results, reduce misinterpretation, and accurately quantify position and strength of acoustic sources. Traditional array results represent noise sources that are convolved with array beamform response functions, which depend on array geometry, size (with respect to source position and distributions), and frequency. The Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) method removes beamforming characteristics from output presentations. A unique linear system of equations accounts for reciprocal influence at different locations over the array survey region. It makes no assumption beyond the traditional processing assumption of statistically independent noise sources. The full rank equations are solved with a new robust iterative method. DAMAS is quantitatively validated using archival data from a variety of prior high-lift airframe component noise studies, including flap edge/cove, trailing edge, leading edge, slat, and calibration sources. Presentations are explicit and straightforward, as the noise radiated from a region of interest is determined by simply summing the mean-squared values over that region. DAMAS can fully replace existing array processing and presentation methodology in most applications. It appears to dramatically increase the value of arrays to the field of experimental acoustics.
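    The iterative solution of a DAMAS-style linear system can be sketched as a Gauss-Seidel sweep with a non-negativity clip (a simplified toy on an invented 3-point grid, not the authors' production solver):

```python
import numpy as np

def damas_solve(A, b, n_iter=200):
    """Solve A x = b for non-negative source strengths x, where A holds the
    array point-spread responses and b the conventional beamform map."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        for n in range(len(b)):                    # Gauss-Seidel sweep
            r = b[n] - A[n] @ x + A[n, n] * x[n]   # residual excluding x[n]
            x[n] = max(r / A[n, n], 0.0)           # clip to physical (>= 0)
    return x

# Invented grid: one true unit source at the center, mild sidelobe leakage.
A = np.eye(3) + 0.2 * (np.ones((3, 3)) - np.eye(3))
x_true = np.array([0.0, 1.0, 0.0])
b = A @ x_true                                     # simulated beamform output
x_hat = damas_solve(A, b)                          # recovers x_true
```

    The clip enforces that deconvolved source strengths stay physical, which is what lets the iteration sharpen the smeared beamform map back to point sources.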

  3. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for compressing digital images using discrete cosine transforms, and for operating a discrete cosine transform-based digital image compression and decompression system.

  4. Classification of subsurface objects using singular values derived from signal frames

    DOEpatents

    Chambers, David H; Paglieroni, David W

    2014-05-06

    The classification system represents a detected object with a feature vector derived from the return signals acquired by an array of N transceivers operating in multistatic mode. The classification system generates the feature vector by transforming the real-valued return signals into complex-valued spectra, using, for example, a Fast Fourier Transform. The classification system then generates a feature vector of singular values for each user-designated spectral sub-band by applying a singular value decomposition (SVD) to the N.times.N square complex-valued matrix formed from sub-band samples associated with all possible transmitter-receiver pairs. The resulting feature vector of singular values may be transformed into a feature vector of singular value likelihoods and then subjected to a multi-category linear or neural network classifier for object classification.
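
The feature-vector construction can be sketched in a few lines of numpy. This is a hedged reading of the abstract (the exact way sub-band samples are combined into the N x N matrix is not specified here; the averaging below is an assumption, and the function name is hypothetical):

```python
import numpy as np

def subband_singular_values(returns, band):
    """Feature vector of singular values for one spectral sub-band.

    returns : (N, N, T) real array; returns[i, j] is the time-domain
              return signal for transmitter i / receiver j
    band    : slice selecting the frequency bins of the sub-band
    Each transmitter-receiver pair is FFT'd, the sub-band samples are
    collapsed into an N x N complex matrix, and that matrix's singular
    values (via SVD) form the feature vector.
    """
    spectra = np.fft.rfft(returns, axis=-1)    # complex-valued spectra per pair
    M = spectra[:, :, band].mean(axis=-1)      # N x N complex sub-band matrix (assumed combination)
    return np.linalg.svd(M, compute_uv=False)  # singular values, descending order
```

The resulting vectors would then be converted to likelihoods and fed to the linear or neural-network classifier, as the abstract describes.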

  5. Assessing differential gene expression with small sample sizes in oligonucleotide arrays using a mean-variance model.

    PubMed

    Hu, Jianhua; Wright, Fred A

    2007-03-01

    The identification of the genes that are differentially expressed in two-sample microarray experiments remains a difficult problem when the number of arrays is very small. We discuss the implications of using ordinary t-statistics and examine other commonly used variants. For oligonucleotide arrays with multiple probes per gene, we introduce a simple model relating the mean and variance of expression, possibly with gene-specific random effects. Parameter estimates from the model have natural shrinkage properties that guard against inappropriately small variance estimates, and the model is used to obtain a differential expression statistic. A limiting value to the positive false discovery rate (pFDR) for ordinary t-tests provides motivation for our use of the data structure to improve variance estimates. Our approach performs well compared to other proposed approaches in terms of the false discovery rate.
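
The shrinkage idea, borrowing strength across genes so that no gene-wise variance estimate becomes implausibly small, can be illustrated generically. The sketch below shrinks each pooled variance toward a single global prior; this is not Hu and Wright's mean-variance model (which relates the prior to expression level and allows gene-specific random effects), just the common skeleton such moderated statistics share:

```python
import numpy as np

def shrunken_t(x, y, prior_df=4.0):
    """Two-sample t-statistics with gene-wise variances shrunk toward a prior.

    x, y     : (genes, replicates) expression arrays for the two conditions
    prior_df : pseudo-degrees of freedom given to the prior variance
    Guards against inappropriately small variance estimates when the
    number of arrays is very small.
    """
    nx, ny = x.shape[1], y.shape[1]
    df = nx + ny - 2
    s2 = ((nx - 1) * x.var(axis=1, ddof=1)
          + (ny - 1) * y.var(axis=1, ddof=1)) / df   # pooled per-gene variance
    s2_prior = s2.mean()                             # global prior (a stand-in for the trend)
    s2_tilde = (prior_df * s2_prior + df * s2) / (prior_df + df)
    se = np.sqrt(s2_tilde * (1.0 / nx + 1.0 / ny))
    return (x.mean(axis=1) - y.mean(axis=1)) / se
```

Genes with tiny observed variances are pulled up toward the prior, so they cannot dominate the ranking purely through an unstable denominator.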

  6. Monitoring of chicken meat freshness by means of a colorimetric sensor array.

    PubMed

    Salinas, Yolanda; Ros-Lis, José V; Vivancos, José-L; Martínez-Máñez, Ramón; Marcos, M Dolores; Aucejo, Susana; Herranz, Nuria; Lorente, Inmaculada

    2012-08-21

    A new optoelectronic nose to monitor chicken meat ageing has been developed. It is based on 16 pigments prepared by the incorporation of different dyes (pH indicators, Lewis acids, hydrogen-bonding derivatives, selective probes and natural dyes) into inorganic materials (UVM-7, silica and alumina). The colour changes of the sensor array were characteristic of chicken ageing in a modified packaging atmosphere (30% CO(2)-70% N(2)). The chromogenic array data were processed with qualitative (PCA) and quantitative (PLS) tools. The PCA statistical analysis showed a high degree of dispersion, with nine dimensions required to explain 95% of variance. Despite this high dimensionality, a tridimensional representation of the three principal components was able to differentiate ageing with 2-day intervals. Moreover, the PLS statistical analysis allows the creation of a model to correlate the chromogenic data with chicken meat ageing. The model offers a PLS prediction model for ageing with values of 0.9937, 0.0389 and 0.994 for the slope, the intercept and the regression coefficient, respectively, and is in agreement with the perfect fit between the predicted and measured values observed. The results suggest the feasibility of this system to help develop optoelectronic noses that monitor food freshness.

  7. Power generation in random diode arrays

    NASA Astrophysics Data System (ADS)

    Shvydka, Diana; Karpov, V. G.

    2005-03-01

    We discuss nonlinear disordered systems, random diode arrays (RDAs), which can represent such objects as large-area photovoltaics and ion channels of biological membranes. Our numerical modeling has revealed several interesting properties of RDAs. In particular, the geometrical distribution of nonuniformities across a RDA has only a minor effect on its integral characteristics determined by RDA parameter statistics. In the meantime, the dispersion of integral characteristics vs system size exhibits a nontrivial scaling dependence. Our theoretical interpretation here remains limited and is based on the picture of eddy currents flowing through weak diodes in the RDA.

  8. Design and implementation of a biomedical image database (BDIM).

    PubMed

    Aubry, F; Badaoui, S; Kaplan, H; Di Paola, R

    1988-01-01

    We developed a biomedical image database (BDIM) which proposes a standardized representation of value arrays such as images and curves, and of their associated parameters, independently of their acquisition mode, to make their transmission and processing easier. It includes three kinds of user-oriented interactions. The network concept was kept as a constraint so that the BDIM could be incorporated into a distributed structure, and we maintained compatibility with the ACR/NEMA communication protocol. The management of arrays and their associated parameters involves two distinct bases of objects, linked together via a gateway. The first one manages arrays according to their storage mode: long-term storage on optionally on-line mass storage devices and, for consultations, partial copies of long-term stored arrays on hard disk. The second one manages the associated parameters and the gateway by means of the relational DBMS ORACLE. Parameters are grouped into relations, some of which are in agreement with the groups defined by the ACR/NEMA. The other relations describe objects resulting from the processing of initial objects. These new objects are not described by the ACR/NEMA, but they can be inserted as shadow groups of the ACR/NEMA description. The relations describing the storage, together with their pathnames, constitute the gateway. ORACLE distributed tools and the two-level storage technique will allow the integration of the BDIM into a distributed structure. The query and retrieval module for arrays (alone or in sequences) has access to the relations via a level that includes a dictionary managed by ORACLE. This dictionary translates ACR/NEMA objects into objects that can be handled by the DBMS. (ABSTRACT TRUNCATED AT 250 WORDS)

  9. P-Value Club: Teaching Significance Level on the Dance Floor

    ERIC Educational Resources Information Center

    Gray, Jennifer

    2010-01-01

    Courses: Beginning research methods and statistics courses, as well as advanced communication courses that require reading research articles and completing research projects involving statistics. Objective: Students will understand the difference between significant and nonsignificant statistical results based on p-value.

  10. Measuring the electromagnetic chirality of 2D arrays under normal illumination.

    PubMed

    Garcia-Santiago, X; Burger, S; Rockstuhl, C; Fernandez-Corbaton, I

    2017-10-15

    We present an electromagnetic chirality measure for 2D arrays of subwavelength periodicities under normal illumination. The calculation of the measure uses only the complex reflection and transmission coefficients from the array. The measure allows the ordering of arrays according to their electromagnetic chirality, which further allows a quantitative comparison of different design strategies. The measure is upper bounded, and the extreme properties of objects with high values of electromagnetic chirality make them useful in both near- and far-field applications. We analyze the consequences that different possible symmetries of the array have on its electromagnetic chirality. We use the measure to study four different arrays. The results indicate the suitability of helices for building arrays of high electromagnetic chirality, and the low effectiveness of a substrate for breaking the transverse mirror symmetry.

  11. Automatic Fault Recognition of Photovoltaic Modules Based on Statistical Analysis of Uav Thermography

    NASA Astrophysics Data System (ADS)

    Kim, D.; Youn, J.; Kim, C.

    2017-08-01

    As a malfunctioning PV (photovoltaic) cell has a higher temperature than adjacent normal cells, we can detect it easily with a thermal infrared sensor. However, inspecting large-scale PV power plants with a hand-held thermal infrared sensor is time-consuming. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from a UAV (unmanned aerial vehicle). The proposed algorithm uses statistical analysis of the thermal intensity (surface temperature) characteristics of each PV module, with the mean intensity and standard deviation of each panel as parameters for fault diagnosis. One characteristic of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object. Consequently, a global detection rule using the mean intensity of all panels is not applicable in the fault detection algorithm. Therefore, a local detection rule based on the mean intensity and standard deviation range was developed to detect defective PV modules from each individual array automatically. The performance of the proposed algorithm was tested on three sample images; this verified a detection accuracy for defective panels of 97% or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited for highly sensitive fault detection than a global detection rule.
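
A minimal sketch of such a local detection rule (hypothetical function name and threshold form; the paper's exact rule and threshold tuning are not given in the abstract) compares each module only against the statistics of its own array, so the distance-dependent temperature offset cancels out:

```python
import numpy as np

def defective_modules(panel_means, k=1.0):
    """Flag PV modules whose mean thermal intensity is anomalously high
    relative to the statistics of their OWN array (local rule).

    panel_means : (modules,) mean thermal intensity of each module in one array
    k           : sensitivity threshold in standard deviations (adjustable,
                  which is what makes the rule tunable per array)
    Returns the indices of modules flagged as defective.
    """
    panel_means = np.asarray(panel_means, dtype=float)
    mu = panel_means.mean()
    sigma = panel_means.std(ddof=1)
    return np.flatnonzero(panel_means > mu + k * sigma)
```

Because mu and sigma are recomputed per array, a uniformly cooler image of a distant array does not mask a genuinely hot module within it.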

  12. The Correlation Between Subjective and Objective Measures of Coded Speech Quality and Intelligibility Following Noise Corruption

    DTIC Science & Technology

    1981-12-01

    [Abstract unavailable; the indexed text is extraction residue from a Fortran program listing. Recoverable details: ASTORE is a 256-value REAL array holding voltages (between -5.00 V and +5.00 V) converted from the integer array ISTORE; each block is converted and written to the file named by JFILE.]

  13. ArrayBridge: Interweaving declarative array processing with high-performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Haoyuan; Floratos, Sofoklis; Blanas, Spyros

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats, that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability to the native SciDB storage engine.

  14. Results of module electrical measurement of the DOE 46-kilowatt procurement

    NASA Technical Reports Server (NTRS)

    Curtis, H. B.

    1978-01-01

    Current-voltage measurements have been made on terrestrial solar cell modules of the DOE/JPL Low Cost Silicon Solar Array procurement. Data on short circuit current, open circuit voltage, and maximum power for the four types of modules are presented in normalized form, showing distribution of the measured values. Standard deviations from the mean values are also given. Tests of the statistical significance of the data are discussed.

  15. Suppression of fixed pattern noise for infrared image system

    NASA Astrophysics Data System (ADS)

    Park, Changhan; Han, Jungsoo; Bae, Kyung-Hoon

    2008-04-01

    In this paper, we propose suppression of fixed pattern noise (FPN) and compensation of soft defects to improve object tracking in a cooled staring infrared focal plane array (IRFPA) imaging system. FPN appears in the observed image when non-uniformity compensation (NUC) is applied at a different temperature. Soft defects appear as glittering black and white points caused by the time-varying non-uniformity characteristics of the IR detector. Both are serious problems for object tracking as well as for image quality. The signal processing architecture of the cooled staring IRFPA imaging system uses three tables of reference gain and offset values, for low, normal, and high temperatures. The proposed method operates two offset tables for each table, covering six temperature ranges in total. The proposed soft-defect compensation consists of three stages: (1) separating the image into sub-images, (2) estimating the motion distribution of objects between sub-images, and (3) analyzing the statistical characteristics of each stationary fixed pixel. Experimental results show that the proposed method suppresses FPN caused by changes in the temperature distribution of the observed image in real time.

  16. Volumetric-modulated Arc Therapy Lung Stereotactic Body Radiation Therapy Dosimetric Quality Assurance: A Comparison between Radiochromic Film and Chamber Array.

    PubMed

    Colodro, Juan Fernando Mata; Berná, Alfredo Serna; Puchades, Vicente Puchades; Amores, David Ramos; Baños, Miguel Alcaraz

    2017-01-01

    The aim of this work is to verify the use of radiochromic film in the quality assurance (QA) of volumetric-modulated arc therapy (VMAT) lung stereotactic body radiation therapy (SBRT) plans and compare the results with those obtained using an ion chamber array. QA was performed for 14 plans using a two-dimensional-array seven29 and EBT3 film. Dose values per session ranged between 7.5 Gy and 18 Gy. The multichannel method was used to obtain a dose map for film. The results obtained were compared with treatment planning system calculated profiles through gamma analysis. Passing criteria were 3%/3 mm, 2%/2 mm and 3%/1.5 mm with maximum and local dose (LD) normalization. Mean gamma passing rate (GPR) (percentage of points presenting a gamma function value of <1) was obtained and compared. Calibration curves were obtained for each color channel within the dose range 0-16 Gy. Mean GPR values for film were >98.9% for all criteria when normalizing per maximum dose. When using LD, normalization was >92.7%. GPR values for the array were lower for all criteria; this difference being statistically significant when normalizing at LD, reaching 12% for the 3%/1.5 mm criterion. Both detectors provide satisfactory results for the QA of plans for VMAT lung SBRT. The film provided greater mean GPR values, afforded greater spatial resolution and was more efficient overall.

  17. Volumetric-modulated Arc Therapy Lung Stereotactic Body Radiation Therapy Dosimetric Quality Assurance: A Comparison between Radiochromic Film and Chamber Array

    PubMed Central

    Colodro, Juan Fernando Mata; Berná, Alfredo Serna; Puchades, Vicente Puchades; Amores, David Ramos; Baños, Miguel Alcaraz

    2017-01-01

    Introduction: The aim of this work is to verify the use of radiochromic film in the quality assurance (QA) of volumetric-modulated arc therapy (VMAT) lung stereotactic body radiation therapy (SBRT) plans and compare the results with those obtained using an ion chamber array. Materials and Methods: QA was performed for 14 plans using a two-dimensional-array seven29 and EBT3 film. Dose values per session ranged between 7.5 Gy and 18 Gy. The multichannel method was used to obtain a dose map for film. Results: The results obtained were compared with treatment planning system calculated profiles through gamma analysis. Passing criteria were 3%/3 mm, 2%/2 mm and 3%/1.5 mm with maximum and local dose (LD) normalization. Mean gamma passing rate (GPR) (percentage of points presenting a gamma function value of <1) was obtained and compared. Calibration curves were obtained for each color channel within the dose range 0–16 Gy. Mean GPR values for film were >98.9% for all criteria when normalizing per maximum dose. When using LD, normalization was >92.7%. GPR values for the array were lower for all criteria; this difference being statistically significant when normalizing at LD, reaching 12% for the 3%/1.5 mm criterion. Conclusion: Both detectors provide satisfactory results for the QA of plans for VMAT lung SBRT. The film provided greater mean GPR values, afforded greater spatial resolution and was more efficient overall. PMID:28974858
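
The gamma analysis used in both versions of this QA study can be sketched for the 1-D case. This simplified global-normalization implementation (hypothetical names; clinical tools use 2-D/3-D search with interpolation) computes, for each measured point, the minimum combined dose-difference/distance-to-agreement metric over the reference profile:

```python
import numpy as np

def gamma_passing_rate(measured, reference, x, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1-D global gamma analysis.

    measured, reference : dose profiles sampled at positions x (mm)
    dose_tol : dose criterion as a fraction of the reference maximum (e.g. 3%)
    dist_tol : distance-to-agreement criterion in mm (e.g. 3 mm)
    Returns the gamma passing rate: percentage of points with gamma < 1.
    """
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    x = np.asarray(x, dtype=float)
    d_norm = dose_tol * reference.max()   # global normalization
    gammas = []
    for xm, dm in zip(x, measured):
        # gamma at this point: minimum over all reference points of the
        # combined (normalized) dose and distance metric
        g2 = ((dm - reference) / d_norm) ** 2 + ((xm - x) / dist_tol) ** 2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.asarray(gammas) < 1.0)
```

Switching the normalization from the global maximum to the local reference dose (the study's "LD" criterion) makes the test stricter in low-dose regions, which is why the local-dose passing rates reported above are lower.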

  18. Differential temperature stress measurement employing array sensor with local offset

    NASA Technical Reports Server (NTRS)

    Lesniak, Jon R. (Inventor)

    1993-01-01

    The instrument has a focal plane array of infrared sensors of the integrating type such as a multiplexed device in which a charge is built up on a capacitor which is proportional to the total number of photons which that sensor is exposed to between read-out cycles. The infrared sensors of the array are manufactured as part of an overall array which is part of a micro-electronic device. The sensor achieves greater sensitivity by applying a local offset to the output of each sensor before it is converted into a digital word. The offset which is applied to each sensor will typically be the sensor's average value so that the digital signal which is periodically read from each sensor of the array corresponds to the portion of the signal which is varying in time. With proper synchronization between the cyclical loading of the test object and the frame rate of the infrared array the output of the A/D converted signal will correspond to the stress field induced temperature variations. A digital lock-in operation may be performed on the output of each sensor in the array. This results in a test instrument which can rapidly form a precise image of the thermoelastic stresses in an object.
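
The digital lock-in operation mentioned above can be sketched per sensor. As a hedged illustration (function name hypothetical; the patented instrument performs this after the local-offset subtraction and A/D conversion), the sample stream is correlated with in-phase and quadrature references at the known cyclic-loading frequency:

```python
import numpy as np

def lock_in(signal, f_ref, fs):
    """Digital lock-in: amplitude and phase of the component of `signal`
    at the known reference (cyclic-loading) frequency.

    signal : (T,) offset-corrected samples from one sensor of the array
    f_ref  : reference frequency in Hz
    fs     : sampling rate in Hz
    """
    t = np.arange(signal.size) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f_ref * t))   # in-phase component
    q = np.mean(signal * np.sin(2 * np.pi * f_ref * t))   # quadrature component
    amp = 2.0 * np.hypot(i, q)       # amplitude of the synchronous variation
    phase = np.arctan2(q, i)         # phase relative to the loading cycle
    return amp, phase
```

Applied pixel-by-pixel across the focal plane array, the amplitude map corresponds to the stress-induced temperature variation, i.e. the thermoelastic stress image.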

  19. Systems and methods that generate height map models for efficient three dimensional reconstruction from depth information

    DOEpatents

    Frahm, Jan-Michael; Pollefeys, Marc Andre Leon; Gallup, David Robert

    2015-12-08

    Methods of generating a three dimensional representation of an object in a reference plane from a depth map including distances from a reference point to pixels in an image of the object taken from a reference point. Weights are assigned to respective voxels in a three dimensional grid along rays extending from the reference point through the pixels in the image based on the distances in the depth map from the reference point to the respective pixels, and a height map including an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.

  20. The Kepler DB: a database management system for arrays, sparse arrays, and binary data

    NASA Astrophysics Data System (ADS)

    McCauliff, Sean; Cote, Miles T.; Girouard, Forrest R.; Middour, Christopher; Klaus, Todd C.; Wohler, Bill

    2010-07-01

    The Kepler Science Operations Center stores pixel values on approximately six million pixels collected every 30 minutes, as well as data products that are generated as a result of running the Kepler science processing pipeline. The Kepler Database management system (Kepler DB)was created to act as the repository of this information. After one year of flight usage, Kepler DB is managing 3 TiB of data and is expected to grow to over 10 TiB over the course of the mission. Kepler DB is a non-relational, transactional database where data are represented as one-dimensional arrays, sparse arrays or binary large objects. We will discuss Kepler DB's APIs, implementation, usage and deployment at the Kepler Science Operations Center.

  1. The Kepler DB, a Database Management System for Arrays, Sparse Arrays and Binary Data

    NASA Technical Reports Server (NTRS)

    McCauliff, Sean; Cote, Miles T.; Girouard, Forrest R.; Middour, Christopher; Klaus, Todd C.; Wohler, Bill

    2010-01-01

    The Kepler Science Operations Center stores pixel values on approximately six million pixels collected every 30 minutes, as well as data products that are generated as a result of running the Kepler science processing pipeline. The Kepler Database (Kepler DB) management system was created to act as the repository of this information. After one year of flight usage, Kepler DB is managing 3 TiB of data and is expected to grow to over 10 TiB over the course of the mission. Kepler DB is a non-relational, transactional database where data are represented as one-dimensional arrays, sparse arrays or binary large objects. We will discuss Kepler DB's APIs, implementation, usage and deployment at the Kepler Science Operations Center.

  2. Categorical data processing for real estate objects valuation using statistical analysis

    NASA Astrophysics Data System (ADS)

    Parygin, D. S.; Malikov, V. P.; Golubev, A. V.; Sadovnikova, N. P.; Petrova, T. M.; Finogeev, A. G.

    2018-05-01

    Theoretical and practical approaches to the use of statistical methods for studying various properties of infrastructure objects are analyzed in the paper. Methods of forecasting the value of objects are considered. A method for coding categorical variables describing properties of real estate objects is proposed. The analysis of the results of modeling the price of real estate objects using regression analysis and an algorithm based on a comparative approach is carried out.

  3. Optimizing fixed observational assets in a coastal observatory

    NASA Astrophysics Data System (ADS)

    Frolov, Sergey; Baptista, António; Wilkin, Michael

    2008-11-01

    Proliferation of coastal observatories necessitates an objective approach to the management of observational assets. In this article, we used our experience in the coastal observatory for the Columbia River estuary and plume to identify and address common problems in the management of fixed observational assets, such as salinity, temperature, and water level sensors attached to pilings and moorings. Specifically, we addressed the following problems: assessing the quality of an existing array, adding stations to an existing array, removing stations from an existing array, validating an array design, and targeting an array toward data assimilation or monitoring. Our analysis was based on a combination of methods from the oceanographic and statistical literature, mainly on the statistical machinery of the best linear unbiased estimator. The key information required for our analysis was the covariance structure of a field of interest, which was computed from the output of assimilated and non-assimilated models of the Columbia River estuary and plume. The network optimization experiments in the Columbia River estuary and plume proved to be successful, largely withstanding the scrutiny of sensitivity and validation studies, and hence providing valuable insight into the optimization and operation of the existing observational network. Our success in the Columbia River estuary and plume suggests that algorithms for optimal placement of sensors are reaching maturity and are likely to play a significant role in the design of emerging ocean observatories, such as the United States' Ocean Observatories Initiative (OOI) and Integrated Ocean Observing System (IOOS) observatories, and smaller regional observatories.
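
The best-linear-unbiased-estimator machinery reduces to a small linear-algebra kernel once the field covariance is in hand. As a hedged sketch (hypothetical names; the study's actual formulation handles sensor noise and multiple targets), the error variance of the BLUE at an unobserved site, given readings at a candidate sensor set, ranks array designs: smaller is better.

```python
import numpy as np

def blue_error_variance(C, sensors, target):
    """Error variance of the best linear unbiased estimate of the field
    at `target` from readings at `sensors`.

    C       : (n, n) covariance matrix of the field at n candidate sites
              (e.g. computed from model output, as in the study)
    sensors : list of site indices where sensors are placed
    target  : site index to be estimated
    """
    Css = C[np.ix_(sensors, sensors)]    # covariance among sensor sites
    Cst = C[np.ix_(sensors, [target])]   # sensor-to-target covariances
    w = np.linalg.solve(Css, Cst)        # BLUE (kriging-type) weights
    return float(C[target, target] - Cst.T @ w)
```

Comparing this quantity across candidate sensor subsets (or summing it over many targets) gives an objective score for adding, removing, or relocating stations.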

  4. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    USGS Publications Warehouse

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R2 of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R2 coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.

  5. Cardiac cine imaging at 3 Tesla: initial experience with a 32-element body-array coil.

    PubMed

    Fenchel, Michael; Deshpande, Vibhas S; Nael, Kambiz; Finn, J Paul; Miller, Stephan; Ruehm, Stefan; Laub, Gerhard

    2006-08-01

    We sought to assess the feasibility of cardiac cine imaging and evaluate image quality at 3 T using a body-array coil with 32 coil elements. Eight healthy volunteers (3 men; median age 29 years) were examined on a 3-T magnetic resonance scanner (Magnetom Trio, Siemens Medical Solutions) using a 32-element phased-array coil (prototype from In vivo Corp.). Gradient-recalled-echo (GRE) cine (GRAPPAx3), GRE cine with tagging lines, steady-state-free-precession (SSFP) cine (GRAPPAx3 and x4), and SSFP cine (TSENSEx4 and x6) images were acquired in short-axis and 4-chamber view. Reference images with identical scan parameters were acquired using the total-imaging-matrix (Tim) coil system with a total of 12 coil elements. Images were assessed by 2 observers in a consensus reading with regard to image quality, noise and presence of artifacts. Furthermore, signal-to-noise values were determined in phantom measurements. In phantom measurements signal-to-noise values were increased by 115-155% for the various cine sequences using the 32-element coil. Scoring of image quality yielded statistically significant increased image quality with the SSFP-GRAPPAx4, SSFP-TSENSEx4, and SSFP-TSENSEx6 sequences using the 32-element coil (P < 0.05). Similarly, scoring of image noise yielded a statistically significant lower noise rating with the SSFP-GRAPPAx4, GRE-GRAPPAx3, SSFP-TSENSEx4, and SSFP-TSENSEx6 sequences using the 32-element coil (P < 0.05). This study shows that cardiac cine imaging at 3 T using a 32-element body-array coil is feasible in healthy volunteers. Using a large number of coil elements with a favorable sensitivity profile supports faster image acquisition, with high diagnostic image quality even for high parallel imaging factors.

  6. The economic value of life: linking theory to practice.

    PubMed Central

    Landefeld, J S; Seskin, E P

    1982-01-01

    Human capital estimates of the economic value of life have been routinely used in the past to perform cost-benefit analyses of health programs. Recently, however, serious questions have been raised concerning the conceptual basis for valuing human life by applying these estimates. Most economists writing on these issues tend to agree that a more conceptually correct method to value risks to human life in cost-benefit analyses would be based on individuals' "willingness to pay" for small changes in their probability of survival. Attempts to implement the willingness-to-pay approach using survey responses or revealed-preference estimates have produced a confusing array of values fraught with statistical problems and measurement difficulties. As a result, economists have searched for a link between willingness to pay and standard human capital estimates and have found that for most individuals a lower bound for valuing risks to life can be based on their willingness to pay to avoid the expected economic losses associated with death. However, while these studies provide support for using individuals' private valuation of forgone income in valuing risks to life, it is also clear that standard human capital estimates cannot be used for this purpose without reformulation. After reviewing the major approaches to valuing risks to life, this paper concludes that estimates based on the human capital approach--reformulated using a willingness-to-pay criterion--produce the only clear, consistent, and objective values for use in cost-benefit analyses of policies affecting risks to life. The paper presents the first empirical estimates of such adjusted willingness-to-pay/human capital values. PMID:6803602

  7. Miniaturized Cassegrainian concentrator concept demonstration

    NASA Technical Reports Server (NTRS)

    Patterson, R. E.; Rauschenbach, H. S.

    1982-01-01

    High concentration ratio photovoltaic systems for space applications have generally been considered impractical because of perceived difficulties in controlling solar cell temperatures to reasonably low values. A miniaturized concentrator system is now under development which surmounts this objection by providing acceptable solar cell temperatures using purely passive cell cooling methods. An array of identical miniaturized, rigid Cassegrainian optical systems, having a low f-number and hence short dimensions along their optical axes, is rigidly mounted into a frame to form a relatively thin concentrator solar array panel. A number of such panels, approximately 1.5 centimeters thick, are wired as an array and are folded against one another for launch in a stowed configuration. Deployment on orbit is similar to the deployment of conventional planar honeycomb panel arrays or flexible blanket arrays. The miniaturized concept was conceived and studied in the 1978-80 time frame. Progress in the feasibility demonstration to date is reported.

  8. DAnTE: a statistical tool for quantitative analysis of –omics data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polpitiya, Ashoka D.; Qian, Weijun; Jaitly, Navdeep

    2008-05-03

    DAnTE (Data Analysis Tool Extension) is a statistical tool designed to address challenges unique to quantitative bottom-up, shotgun proteomics data. This tool has also been demonstrated for microarray data and can easily be extended to other high-throughput data types. DAnTE features selected normalization methods, missing value imputation algorithms, peptide to protein rollup methods, an extensive array of plotting functions, and a comprehensive ANOVA scheme that can handle unbalanced data and random effects. The Graphical User Interface (GUI) is designed to be very intuitive and user friendly.
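    One of the features listed, peptide-to-protein rollup in the presence of missing values, can be illustrated with a tiny numpy sketch. This is a generic median rollup on invented abundance values, not DAnTE's actual code:

```python
import numpy as np

# Hypothetical peptide x sample abundance matrix (log scale) for one protein,
# with missing values (NaN) as commonly seen in shotgun proteomics data.
peptides = np.array([
    [10.1, 10.4, np.nan, 10.0],
    [ 9.8, 10.1, 10.3,   9.9],
    [10.0, np.nan, 10.2,  9.8],
])

# Simple median rollup: one protein profile per sample, ignoring missing values.
protein = np.nanmedian(peptides, axis=0)
```

A median is used here because it tolerates both missing values and occasional outlier peptides; rollup tools typically also offer reference-based scaling before aggregation.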

  9. Evaluation of an experimental electrohydraulic discharge device for extracorporeal shock wave lithotripsy: Pressure field of sparker array.

    PubMed

    Li, Guangyan; Connors, Bret A; Schaefer, Ray B; Gallagher, John J; Evan, Andrew P

    2017-11-01

    In this paper, an extracorporeal shock wave source composed of small ellipsoidal sparker units is described. The sparker units were arranged in an array designed to produce a coherent shock wave of sufficient strength to fracture kidney stones. The objective of this paper was to measure the acoustical output of this array of 18 individual sparker units and compare this array to commercial lithotripters. Representative waveforms acquired with a fiber-optic probe hydrophone at the geometric focus of the sparker array indicated that the sparker array produces a shock wave (P+ ∼40-47 MPa, P− ∼2.5-5.0 MPa) similar to shock waves produced by a Dornier HM-3 or Dornier Compact S. The sparker array's pressure field map also appeared similar to the measurements from a HM-3 and Compact S. Compared to the HM-3, the electrohydraulic technology of the sparker array produced a more consistent shock wave pulse (shot-to-shot positive pressure value standard deviation of ±4.7 MPa vs. ±3.3 MPa).

  10. Eating in the absence of hunger in adolescents: intake after a large-array meal compared with that after a standardized meal.

    PubMed

    Shomaker, Lauren B; Tanofsky-Kraff, Marian; Zocca, Jaclyn M; Courville, Amber; Kozlosky, Merel; Columbo, Kelli M; Wolkoff, Laura E; Brady, Sheila M; Crocker, Melissa K; Ali, Asem H; Yanovski, Susan Z; Yanovski, Jack A

    2010-10-01

    Eating in the absence of hunger (EAH) is typically assessed by measuring youths' intake of palatable snack foods after a standard meal designed to reduce hunger. Because energy intake required to reach satiety varies among individuals, a standard meal may not ensure the absence of hunger among participants of all weight strata. The objective of this study was to compare adolescents' EAH observed after access to a very large food array with EAH observed after a standardized meal. Seventy-eight adolescents participated in a randomized crossover study during which EAH was measured as intake of palatable snacks after ad libitum access to a very large array of lunch-type foods (>10,000 kcal) and after a lunch meal standardized to provide 50% of the daily estimated energy requirements. The adolescents consumed more energy and reported less hunger after the large-array meal than after the standardized meal (P values < 0.001). They consumed ≈70 kcal less EAH after the large-array meal than after the standardized meal (295 ± 18 compared with 365 ± 20 kcal; P < 0.001), but EAH intakes after the large-array meal and after the standardized meal were positively correlated (P values < 0.001). The body mass index z score and overweight were positively associated with EAH in both paradigms after age, sex, race, pubertal stage, and meal intake were controlled for (P values ≤ 0.05). EAH is observable and positively related to body weight regardless of whether youth eat in the absence of hunger from a very large-array meal or from a standardized meal. This trial was registered at clinicaltrials.gov as NCT00631644.

  11. Academic freedom and academic-industry relationships in biotechnology.

    PubMed

    Streiffer, Robert

    2006-06-01

    Commercial academic-industry relationships (AIRs) are widespread in biotechnology and have resulted in a wide array of restrictions on academic research. Objections to such restrictions have centered on the charge that they violate academic freedom. I argue that these objections are almost invariably unsuccessful. On a consequentialist understanding of the value of academic freedom, they rely on unfounded empirical claims about the overall effects that AIRs have on academic research. And on a rights-based understanding of the value of academic freedom, they rely on excessively lavish assumptions about the kinds of activities that academic freedom protects.

  12. Detection of presence of chemical precursors

    NASA Technical Reports Server (NTRS)

    Li, Jing (Inventor); Meyyappan, Meyya (Inventor); Lu, Yijiang (Inventor)

    2009-01-01

    Methods and systems for determining if one or more target molecules are present in a gas, by exposing a functionalized carbon nanostructure (CNS) to the gas and measuring an electrical parameter value EPV(n) associated with each of N CNS sub-arrays. In a first embodiment, a most-probable concentration value C(opt) is estimated, and an error value, depending upon differences between the measured values EPV(n) and corresponding values EPV(n;C(opt)), is computed. If the error value is less than a first error threshold value, the system interprets this as indicating that the target molecule is present in a concentration C ≈ C(opt). A second embodiment uses extensive statistical and vector space analysis to estimate target molecule concentration.
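    The first-embodiment logic (estimate a most-probable concentration C(opt), then threshold the residual error) can be sketched with a linear response model. The model form, coefficients, and threshold below are all hypothetical stand-ins, not the patent's:

```python
import numpy as np

# Hypothetical linear response model for N = 4 CNS sub-arrays:
# EPV(n; C) = base[n] + sens[n] * C
base = np.array([1.00, 0.80, 1.20, 0.90])
sens = np.array([0.50, 0.35, 0.60, 0.45])

def estimate_concentration(measured, threshold=0.05):
    """Least-squares C(opt) and residual-error test (illustrative only)."""
    delta = measured - base
    c_opt = sens @ delta / (sens @ sens)        # most-probable concentration
    err = np.mean((delta - sens * c_opt) ** 2)  # error value
    return c_opt, err, bool(err < threshold)    # present iff error below threshold

# Readings consistent with the model at C = 2.0 -> target declared present.
c_opt, err, present = estimate_concentration(base + sens * 2.0)
```

Readings inconsistent with the response model leave a large residual even at the best-fit concentration, so the same threshold test rejects them.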

  13. Subjective Ratings of Beauty and Aesthetics: Correlations With Statistical Image Properties in Western Oil Paintings

    PubMed Central

    Lehmann, Thomas; Redies, Christoph

    2017-01-01

    For centuries, oil paintings have been a major segment of the visual arts. The JenAesthetics data set consists of a large number of high-quality images of oil paintings of Western provenance from different art periods. With this database, we studied the relationship between objective image measures and subjective evaluations of the images, especially evaluations on aesthetics (defined as artistic value) and beauty (defined as individual liking). The objective measures represented low-level statistical image properties that have been associated with aesthetic value in previous research. Subjective rating scores on aesthetics and beauty correlated not only with each other but also with different combinations of the objective measures. Furthermore, we found that paintings from different art periods vary with regard to the objective measures, that is, they exhibit specific patterns of statistical image properties. In addition, clusters of participants preferred different combinations of these properties. In conclusion, the results of the present study provide evidence that statistical image properties vary between art periods and subject matters and, in addition, they correlate with the subjective evaluation of paintings by the participants. PMID:28694958

  14. Parallel object-oriented data mining system

    DOEpatents

    Kamath, Chandrika; Cantu-Paz, Erick

    2004-01-06

    A data mining system uncovers patterns, associations, anomalies and other statistically significant structures in data. Data files are read and displayed. Objects in the data files are identified. Relevant features for the objects are extracted. Patterns among the objects are recognized based upon the features. Data from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) sky survey was used to search for bent doubles. The test was conducted on data from the Very Large Array in New Mexico, with the goal of locating a special type of quasar (radio-emitting stellar object) called a bent double. The FIRST survey has generated more than 32,000 images of the sky to date. Each image is 7.1 megabytes, yielding more than 100 gigabytes of image data in the entire data set.

  15. [Establishment and verification of detecting multiple biomarkers for ovarian cancer by suspension array technology].

    PubMed

    Zhao, B B; Yang, Z J; Wang, Q; Pan, Z M; Zhang, W; Li, D R; Li, L

    2016-10-25

    Objective: To establish and validate the combined detection of CCL18, CXCL1, C1D, TM4SF1, FXR1 and TIZ by suspension array technology. Methods: (1) CCL18 and CXCL1 monoclonal antibodies and C1D, TM4SF1, FXR1 and TIZ proteins were coupled to polyethylene microspheres. Biotinylated CCL18 and CXCL1 polyclonal antibodies and a sheep anti-human IgG polyclonal antibody were prepared simultaneously. The coating concentrations of the CCL18 and CXCL1 monoclonal antibodies and the C1D, TM4SF1, FXR1 and TIZ antigens, as well as the detection concentrations of the CCL18 and CXCL1 polyclonal antibodies and the sheep anti-human IgG polyclonal antibody, were optimized to establish a stable suspension array assay. (2) Sixty patients with pathologically confirmed ovarian cancer (ovarian cancer group) treated at the Affiliated Tumor Hospital of Guangxi Medical University, 30 patients with benign ovarian tumors (benign group) and 30 healthy women (control group) were enrolled between September 2003 and December 2003. Suspension array technology and ELISA were used to measure serum levels of the CCL18 and CXCL1 antigens and the C1D, TM4SF1, FXR1 and TIZ IgG autoantibodies in the 3 groups, and the diagnostic efficiency and reproducibility (inter-assay coefficient of variation) of the two methods were compared. Results: (1) A stable detection system for CCL18, CXCL1, C1D, TM4SF1, FXR1 and TIZ IgG autoantibodies was successfully established. 
The optimal coating concentrations of the CCL18 and CXCL1 monoclonal antibodies and the C1D, TM4SF1, FXR1 and TIZ antigens were 8, 8, 12, 8, 4 and 8 μg/ml, respectively; the optimal detection concentrations of the biotinylated CCL18 and CXCL1 polyclonal antibodies and the C1D, TM4SF1, FXR1 and TIZ sheep anti-human IgG polyclonal antibody were 4, 2, 2, 4, 4 and 2 μg/ml, respectively. (2) The serum levels of the CCL18 and CXCL1 antigens and the C1D, TM4SF1, FXR1 and TIZ IgG autoantibodies measured in the three groups by suspension array technology and by ELISA were similar (P > 0.05). (3) Comparison of diagnostic efficiency: the diagnostic accuracy of the two methods was 99.2% (119/120) and 94.2% (113/120), respectively; the difference was statistically significant (P = 0.031). The sensitivity for the diagnosis of ovarian cancer was 100.0% (60/60) versus 93.3% (56/60), the specificity 100.0% (59/59) versus 93.4% (57/61), the positive predictive value 100.0% (60/60) versus 93.3% (56/60), and the negative predictive value 98.3% (59/60) versus 95.0% (57/60); the differences were statistically significant (P < 0.05). (4) The results for the CCL18 and CXCL1 antigens and the C1D, TM4SF1, FXR1 and TIZ IgG autoantibodies showed that the diagnostic accuracy of suspension array technology was superior to that of ELISA (all P < 0.05). Conclusion: A stable suspension array detection system was established, and its diagnostic efficiency and accuracy were much better than those of ELISA.
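    The sensitivity, specificity and predictive values quoted above follow from standard confusion-matrix arithmetic. A small sketch; the counts below are illustrative ELISA-like numbers chosen to reproduce the reported 94.2% (113/120) accuracy, not data taken from the study:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-accuracy measures from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),              # diseased correctly detected
        "specificity": tn / (tn + fp),              # non-diseased correctly cleared
        "ppv": tp / (tp + fp),                      # positive predictive value
        "npv": tn / (tn + fn),                      # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts: 56/60 cancers detected, 57/60 non-cancers cleared.
metrics = diagnostic_metrics(tp=56, fp=3, tn=57, fn=4)
```
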

  16. Apparatus and method for imaging metallic objects using an array of giant magnetoresistive sensors

    DOEpatents

    Chaiken, Alison

    2000-01-01

    A portable, low-power, metallic object detector and method for providing an image of a detected metallic object. In one embodiment, the present portable, low-power metallic object detector includes an array of giant magnetoresistive (GMR) sensors. The array of GMR sensors is adapted for detecting the presence of and compiling image data of a metallic object. In this embodiment, the array of GMR sensors is arranged in a checkerboard configuration such that the axes of sensitivity of alternate GMR sensors are orthogonally oriented. An electronics portion is coupled to the array of GMR sensors. The electronics portion is adapted to receive and process the image data of the metallic object compiled by the array of GMR sensors. The embodiment also includes a display unit which is coupled to the electronics portion. The display unit is adapted to display a graphical representation of the metallic object detected by the array of GMR sensors. In so doing, a graphical representation of the detected metallic object is provided.

  17. Complexity quantification of dense array EEG using sample entropy analysis.

    PubMed

    Ramanand, Pravitha; Nampoori, V P N; Sreenivasan, R

    2004-09-01

    In this paper, a time series complexity analysis of dense array electroencephalogram signals is carried out using the recently introduced Sample Entropy (SampEn) measure. This statistic quantifies the regularity in signals recorded from systems that can vary from the purely deterministic to the purely stochastic realm. The present analysis is conducted with the objective of gaining insight into complexity variations related to changing brain dynamics, for EEG recorded under three conditions: a passive, eyes-closed state; a mental arithmetic task; and the same mental task carried out after physical exertion. It is observed that the statistic is a robust quantifier of complexity suited for short physiological signals such as the EEG and it points to the specific brain regions that exhibit lowered complexity during the mental task state as compared to a passive, relaxed state. In the case of mental tasks carried out before and after the performance of a physical exercise, the statistic can detect the variations brought in by the intermediate fatigue-inducing exercise period. This enhances its utility in detecting subtle changes in the brain state that can find wider scope for applications in EEG based brain studies.
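    Sample Entropy itself is compact enough to sketch. A minimal numpy implementation of the standard SampEn(m, r) definition with the usual m = 2, r = 0.2·SD defaults (an illustration, not the authors' code):

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) = -ln(A/B): B counts template pairs of length m within
    Chebyshev tolerance r, A counts pairs of length m + 1; self-matches are
    excluded. Lower values indicate a more regular (predictable) signal."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    n = len(x) - m  # use the same number of templates for both lengths

    def count_pairs(length):
        t = np.array([x[i:i + length] for i in range(n)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return np.sum(d < r) - n  # subtract the n self-matches

    b = count_pairs(m)
    a = count_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# A regular signal scores lower than white noise of the same length.
rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))
noisy = rng.standard_normal(500)
```

The comparison at the bottom mirrors the paper's use of SampEn: a drop in the statistic signals increased regularity, as reported for task states relative to the relaxed state.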

  18. The VIS-AD data model: Integrating metadata and polymorphic display with a scientific programming language

    NASA Technical Reports Server (NTRS)

    Hibbard, William L.; Dyer, Charles R.; Paul, Brian E.

    1994-01-01

    The VIS-AD data model integrates metadata about the precision of values, including missing data indicators and the way that arrays sample continuous functions, with the data objects of a scientific programming language. The data objects of this data model form a lattice, ordered by the precision with which they approximate mathematical objects. We define a similar lattice of displays and study visualization processes as functions from data lattices to display lattices. Such functions can be applied to visualize data objects of all data types and are thus polymorphic.

  19. Factors influencing infants’ ability to update object representations in memory

    PubMed Central

    Moher, Mariko; Feigenson, Lisa

    2013-01-01

    Remembering persisting objects over occlusion is critical to representing a stable environment. Infants remember hidden objects at multiple locations and can update their representation of a hidden array when an object is added or subtracted. However, the factors influencing these updating abilities have received little systematic exploration. Here we examined the flexibility of infants’ ability to update object representations. We tested 11-month-olds in a looking-time task in which objects were added to or subtracted from two hidden arrays. Across five experiments, infants successfully updated their representations of hidden arrays when the updating occurred successively at one array before beginning at the other. But when updating required alternating between two arrays, infants failed. However, simply connecting the two arrays with a thin strip of foam-core led infants to succeed. Our results suggest that infants’ construal of an event strongly affects their ability to update memory representations of hidden objects. When construing an event as containing multiple updates to the same array, infants succeed, but when construing the event as requiring the revisiting and updating of previously attended arrays, infants fail. PMID:24049245

  20. Search for Long Period Solar Normal Modes in Ambient Seismic Noise

    NASA Astrophysics Data System (ADS)

    Caton, R.; Pavlis, G. L.

    2016-12-01

    We search for evidence of solar free oscillations (normal modes) in long period seismic data through multitaper spectral analysis of array stacks. This analysis is similar to that of Thomson & Vernon (2015), who used data from the most quiet single stations of the global seismic network. Our approach is to use stacks of large arrays of noisier stations to reduce noise. Arrays have the added advantage of permitting the use of nonparametric statistics (jackknife errors) to provide objective error estimates. We used data from the Transportable Array, the broadband borehole array at Pinyon Flat, and the 3D broadband array in Homestake Mine in Lead, SD. The Homestake Mine array has 15 STS-2 sensors deployed in the mine that are extremely quiet at long periods due to stable temperatures and stable piers anchored to hard rock. The length of time series used ranged from 50 days to 85 days. We processed the data by low-pass filtering with a corner frequency of 10 mHz, followed by an autoregressive prewhitening filter and median stack. We elected to use the median instead of the mean in order to get a more robust stack. We then used G. Prieto's mtspec library to compute multitaper spectrum estimates on the data. We produce delete-one jackknife error estimates of the uncertainty at each frequency by computing median stacks of all data with one station removed. The results from the TA data show tentative evidence for several lines between 290 μHz and 400 μHz, including a recurring line near 379 μHz. This 379 μHz line is near the Earth mode 0T2 and the solar mode 5g5, suggesting that 5g5 could be coupling into the Earth mode. Current results suggest more statistically significant lines may be present in Pinyon Flat data, but additional processing of the data is underway to confirm this observation.
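    The median stack with delete-one jackknife errors described above can be sketched as follows, on synthetic station spectra rather than the TA/Homestake data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sta, n_freq = 15, 256  # e.g. 15 stations, 256 spectral bins (synthetic)

# Common spectral shape shared by all stations, buried in station noise.
truth = np.sin(np.linspace(0, 8 * np.pi, n_freq))
spectra = truth + 0.5 * rng.standard_normal((n_sta, n_freq))

# Median stack: more robust to outlier stations than the mean.
stack = np.median(spectra, axis=0)

# Delete-one jackknife: recompute the stack leaving out one station at a time.
jack = np.array([np.median(np.delete(spectra, i, axis=0), axis=0)
                 for i in range(n_sta)])
jack_mean = jack.mean(axis=0)
jack_var = (n_sta - 1) / n_sta * np.sum((jack - jack_mean) ** 2, axis=0)
jack_err = np.sqrt(jack_var)  # per-frequency uncertainty of the stack
```

The jackknife standard error gives an objective, data-driven uncertainty at every frequency, which is what makes a candidate spectral line testable against the stack's own noise level.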

  1. Receptor arrays optimized for natural odor statistics.

    PubMed

    Zwicker, David; Murugan, Arvind; Brenner, Michael P

    2016-05-17

    Natural odors typically consist of many molecules at different concentrations. It is unclear how the numerous odorant molecules and their possible mixtures are discriminated by relatively few olfactory receptors. Using an information theoretic model, we show that a receptor array is optimal for this task if it achieves two possibly conflicting goals: (i) Each receptor should respond to half of all odors and (ii) the response of different receptors should be uncorrelated when averaged over odors presented with natural statistics. We use these design principles to predict statistics of the affinities between receptors and odorant molecules for a broad class of odor statistics. We also show that optimal receptor arrays can be tuned to either resolve concentrations well or distinguish mixtures reliably. Finally, we use our results to predict properties of experimentally measured receptor arrays. Our work can thus be used to better understand natural olfaction, and it also suggests ways to improve artificial sensor arrays.
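    The two design principles stated in the abstract are easy to check numerically for a candidate response matrix. A hedged sketch in which random binary responses stand in for receptor affinities (not the authors' model code):

```python
import numpy as np

rng = np.random.default_rng(1)
n_receptors, n_odors = 16, 2000

# Hypothetical binary response matrix: does receptor i respond to odor j?
# Independent responses with probability 0.5 satisfy both design goals.
R = (rng.random((n_receptors, n_odors)) < 0.5).astype(float)

# Goal (i): each receptor responds to about half of all odors.
activation = R.mean(axis=1)

# Goal (ii): responses of different receptors are uncorrelated over odors.
corr = np.corrcoef(R)
off_diag = corr[~np.eye(n_receptors, dtype=bool)]
```

In the paper these properties emerge from optimizing mutual information under natural odor statistics; the sketch only verifies that a given array meets the two stated criteria.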

  2. On the structure and phase transitions of power-law Poissonian ensembles

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Oshanin, Gleb

    2012-10-01

    Power-law Poissonian ensembles are Poisson processes that are defined on the positive half-line, and that are governed by power-law intensities. Power-law Poissonian ensembles are stochastic objects of fundamental significance; they uniquely display an array of fractal features and they uniquely generate a span of important applications. In this paper we apply three different methods—oligarchic analysis, Lorenzian analysis and heterogeneity analysis—to explore power-law Poissonian ensembles. The amalgamation of these analyses, combined with the topology of power-law Poissonian ensembles, establishes a detailed and multi-faceted picture of the statistical structure and the statistical phase transitions of these elemental ensembles.

  3. Science with the VLA Sky Survey (VLASS)

    NASA Astrophysics Data System (ADS)

    Murphy, Eric J.; Baum, Stefi Alison; Brandt, W. Niel; Chandler, Claire J.; Clarke, Tracy E.; Condon, James J.; Cordes, James M.; Deustua, Susana E.; Dickinson, Mark; Gugliucci, Nicole E.; Hallinan, Gregg; Hodge, Jacqueline; Lang, Cornelia C.; Law, Casey J.; Lazio, Joseph; Mao, Sui Ann; Myers, Steven T.; Osten, Rachel A.; Richards, Gordon T.; Strauss, Michael A.; White, Richard L.; Zauderer, Bevin; Extragalactic Science Working Group, Galactic Science Working Group, Transient Science Working Group

    2015-01-01

    The Very Large Array Sky Survey (VLASS) was initiated to develop and carry out a new generation large radio sky survey using the recently upgraded Karl G. Jansky Very Large Array. The proposed VLASS is a modern, multi-tiered survey with the VLA designed to provide a broad, cohesive science program with forefront scientific impact, capable of generating unexpected scientific discoveries, encouraging involvement from all astronomical communities, and leaving a lasting legacy value for decades. VLASS will observe from 2-4 GHz and is structured to combine comprehensive all sky coverage with sequentially deeper coverage in carefully identified parts of the sky, including the Galactic plane, and will be capable of informing time domain studies. This approach enables both focused and wide ranging scientific discovery through the coupling of deeper narrower tiers with increasing sky coverage at shallower depths, addressing key science issues and providing a statistical interpretational framework. Such an approach provides both astronomers and the citizen scientist with information for every accessible point of the radio sky, while simultaneously addressing fundamental questions about the nature and evolution of astrophysical objects. VLASS will follow the evolution of galaxies and their central black hole engines, measure the strength and topology of cosmic magnetic fields, unveil hidden explosions throughout the Universe, and chart our galaxy for stellar remnants and ionized bubbles. Multi-wavelength communities studying rare objects, the Galaxy, radio transients, or galaxy evolution out to the peak of the cosmic star formation rate density will equally benefit from VLASS. Early drafts of the VLASS proposal are available at the VLASS website (https://science.nrao.edu/science/surveys/vlass/vlass), and the final proposal will be posted in early January 2015 for community comment before undergoing review in March 2015. 
Upon approval, VLASS would then be on schedule to start observing in 2016.

  4. Streamwise Evolution of Statistical Events in a Model Wind-Turbine Array

    NASA Astrophysics Data System (ADS)

    Viestenz, Kyle; Cal, Raúl Bayoán

    2016-02-01

    Hot-wire anemometry data, obtained from a wind-tunnel experiment containing a 3 × 3 model wind-turbine array, are used to conditionally average the Reynolds stresses. Nine profiles at the centreline behind the array are analyzed to characterize the turbulent velocity statistics of the wake flow. Quadrant analysis yields statistical events occurring in the wake of the wind farm where quadrants 2 and 4 produce ejections and sweeps, respectively. The scaled difference between these two events is expressed via the ΔR0 parameter and is based on the ΔS0 quantity as introduced by M. R. Raupach (J Fluid Mech 108:363-382, 1981). ΔR0 attains a maximum value at hub height and changes sign near the top of the rotor. The ratio of quadrant events of upward momentum flux to those of the downward flux, known as the exuberance, is examined and reveals the effect of root vortices persisting to eight rotor diameters downstream. These events are then associated with the triple correlation term present in the turbulent kinetic energy equation of the fluctuations where it is found that ejections play the dual role of entraining mean kinetic energy while convecting turbulent kinetic energy out of the turbine canopy. The development of these various quantities possesses significance in closure models, and is assessed in light of wake remediation, energy transport and power fluctuations, where it is found that the maximum fluctuation is about 30% of the mean power produced.
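    Quadrant analysis of the u′v′ signal as used above can be sketched on synthetic fluctuations; the scaled ejection-sweep difference below is a simplified stand-in for the paper's exact ΔR0 definition:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Synthetic velocity fluctuations (u', v') with negative covariance,
# mimicking the Reynolds shear stress of a sheared wake flow.
cov = [[1.0, -0.4],
       [-0.4, 0.5]]
u, v = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

uv = u * v
total = uv.mean()                     # mean u'v' (negative in shear flow)
q2 = uv[(u < 0) & (v > 0)].sum() / n  # ejections (quadrant 2) contribution
q4 = uv[(u > 0) & (v < 0)].sum() / n  # sweeps (quadrant 4) contribution

# Scaled ejection-sweep difference (simplified DeltaR0-style parameter).
delta_r0 = (q2 - q4) / abs(total)
```

For symmetric joint-Gaussian fluctuations ejections and sweeps contribute equally, so the parameter is near zero; in the measured wake its sign change across the rotor top is exactly what flags the asymmetry between the two event types.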

  5. On the Limits of Infants' Quantification of Small Object Arrays

    ERIC Educational Resources Information Center

    Feigenson, Lisa; Carey, Susan

    2005-01-01

    Recent work suggests that infants rely on mechanisms of object-based attention and short-term memory to represent small numbers of objects. Such work shows that infants discriminate arrays containing 1, 2, or 3 objects, but fail with arrays greater than 3 [Feigenson, L., & Carey, S. (2003). Tracking individuals via object-files: Evidence from…

  6. Environmental Interfaces in Teaching Economic Statistics

    ERIC Educational Resources Information Center

    Campos, Celso; Wodewotzki, Maria Lucia; Jacobini, Otavio; Ferrira, Denise

    2016-01-01

    The objective of this article is, based on the Critical Statistics Education assumptions, to value some environmental interfaces in teaching Statistics by modeling projects. Due to this, we present a practical case, one in which we address an environmental issue, placed in the context of the teaching of index numbers, within the Statistics…

  7. Weighting Statistical Inputs for Data Used to Support Effective Decision Making During Severe Emergency Weather and Environmental Events

    NASA Technical Reports Server (NTRS)

    Gardner, Adrian

    2010-01-01

    National Aeronautical and Space Administration (NASA) weather and atmospheric environmental organizations are insatiable consumers of geophysical, hydrometeorological and solar weather statistics. The expanding array of internetworked sensors producing targeted physical measurements has generated an almost factorial explosion of near real-time inputs to topical statistical datasets. Normalizing and value-based parsing of such statistical datasets in support of time-constrained weather and environmental alerts and warnings is essential, even with dedicated high-performance computational capabilities. What are the optimal indicators for advanced decision making? How do we recognize the line between sufficient statistical sampling and excessive, mission-destructive sampling? How do we assure that the normalization and parsing process, when interpolated through numerical models, yields accurate and actionable alerts and warnings? This presentation will address the integrated means and methods to achieve desired outputs for NASA and consumers of its data.

  8. Statistical considerations for agroforestry studies

    Treesearch

    James A. Baldwin

    1993-01-01

    Statistical topics related to agroforestry studies are discussed. These include study objectives, populations of interest, sampling schemes, sample sizes, estimation vs. hypothesis testing, and P-values. In addition, a relatively new and much improved histogram display is described.

  9. Impact of triphenyltin acetate in microcosms simulating floodplain lakes. II. Comparison of species sensitivity distributions between laboratory and semi-field.

    PubMed

    Roessink, I; Belgers, J D M; Crum, S J H; van den Brink, P J; Brock, T C M

    2006-07-01

    The study objectives were to shed light on the types of freshwater organism that are sensitive to triphenyltin acetate (TPT) and to compare the laboratory and microcosm sensitivities of the invertebrate community. The responses of a wide array of freshwater taxa (including invertebrates, phytoplankton and macrophytes) from acute laboratory Single Species Tests (SST) were compared with the concentration-response relationships of aquatic populations in two types of freshwater microcosms. Representatives of several taxonomic groups of invertebrates, and several phytoplankton and vascular plant species proved to be sensitive to TPT, illustrating its diverse modes of toxic action. Statistically calculated ecological risk thresholds (HC5 values) based on 96 h laboratory EC50 values for invertebrates were 1.3 microg/l, while these values on the basis of microcosm-Species Sensitivity Distributions (SSD) for invertebrates in sampling weeks 2-8 after TPT treatment ranged from 0.2 to 0.6 microg/l based on nominal peak concentrations. Responses observed in the microcosms did not differ between system types and sampling dates, indicating that ecological threshold levels are not affected by different community structures including taxa sensitive to TPT. The laboratory-derived invertebrate SSD curve was less sensitive than the curves from the microcosms. Possible explanations for the more sensitive field response are delayed effects and/or additional chronic exposure via the food chain in the microcosms.
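    The HC5 computation referred to above, the 5th percentile of a log-normal species sensitivity distribution, is short enough to sketch; the EC50 values below are invented, not the study's:

```python
import math
from statistics import NormalDist

# Hypothetical acute EC50 values (ug/l) for a set of invertebrate taxa.
ec50 = [1.8, 3.5, 7.2, 12.0, 25.0, 40.0, 95.0, 210.0]

# Fit a log-normal SSD: mean and sample SD of the log10-transformed values.
logs = [math.log10(x) for x in ec50]
mu = sum(logs) / len(logs)
sd = math.sqrt(sum((l - mu) ** 2 for l in logs) / (len(logs) - 1))

# HC5: concentration hazardous to 5% of species = 5th percentile of the SSD.
hc5 = 10 ** (mu + NormalDist().inv_cdf(0.05) * sd)
```

Because the SSD is fitted on a log scale, the HC5 sits well below the most sensitive tested taxa only when the spread of sensitivities is wide, which is the situation the abstract describes for TPT.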

  10. Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays

    PubMed Central

    Trucco, Andrea; Traverso, Federico; Crocco, Marco

    2015-01-01

    For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches. PMID:26066987
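    The claim that end-fire steering beats broadside directivity for spacing below one half wavelength can be checked numerically for an idealized delay-and-sum array (uniform weights, isotropic elements; a simplification of the constrained-optimization setting in the paper):

```python
import numpy as np

def directivity(n_elem, spacing_wl, steer_deg):
    """Directivity of a uniformly weighted, delay-and-sum steered linear
    array of isotropic elements; spacing in wavelengths, steering angle in
    degrees from the array axis (0 = end-fire, 90 = broadside)."""
    k = 2 * np.pi                          # wavenumber for wavelength = 1
    theta = np.linspace(0.0, np.pi, 20001)
    steer = np.deg2rad(steer_deg)
    n = np.arange(n_elem)[:, None]
    # Array factor with phase steering toward `steer`.
    af = np.abs(np.sum(np.exp(1j * k * n * spacing_wl
                              * (np.cos(theta) - np.cos(steer))), axis=0))
    # D = |AF(steer)|^2 / ((1/2) * integral of |AF(theta)|^2 sin(theta) dtheta)
    dtheta = theta[1] - theta[0]
    denom = 0.5 * np.sum(af ** 2 * np.sin(theta)) * dtheta
    return n_elem ** 2 / denom

d_end = directivity(8, 0.25, 0)     # end-fire, quarter-wavelength spacing
d_broad = directivity(8, 0.25, 90)  # broadside, same array
```

With this simple steering the end-fire weights are just phase shifts; the paper's point is that approaching the constrained optimum at end-fire normally needs fully complex weights, which oversteering avoids.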

  11. Statistical Post-Processing of Wind Speed Forecasts to Estimate Relative Economic Value

    NASA Astrophysics Data System (ADS)

    Courtney, Jennifer; Lynch, Peter; Sweeney, Conor

    2013-04-01

    The objective of this research is to get the best possible wind speed forecasts for the wind energy industry by using an optimal combination of well-established forecasting and post-processing methods. We start with the ECMWF 51-member ensemble prediction system (EPS), which is underdispersive and hence uncalibrated. We aim to produce wind speed forecasts that are more accurate and calibrated than the EPS. The 51 members of the EPS are clustered to 8 weighted representative members (RMs), chosen to minimize the within-cluster spread while maximizing the inter-cluster spread. The forecasts are then downscaled using two limited area models, WRF and COSMO, at two resolutions, 14 km and 3 km. This process creates four distinguishable ensembles which are used as input to statistical post-processing methods requiring multi-model forecasts. Two such methods are presented here. The first, Bayesian Model Averaging, has been proven to provide more calibrated and accurate wind speed forecasts than the ECMWF EPS using this multi-model input data. The second, heteroscedastic censored regression, is also showing promising results. We compare the two post-processing methods, applied to a year of hindcast wind speed data around Ireland, using an array of deterministic and probabilistic verification techniques, such as MAE, CRPS, probability integral transforms and verification rank histograms, to show which method provides the most accurate and calibrated forecasts. However, the value of a forecast to an end-user cannot be fully quantified by just the accuracy and calibration measurements mentioned, as the relationship between skill and value is complex. Capturing the full potential of the forecast benefits also requires detailed knowledge of the end-users' weather-sensitive decision-making processes and, most importantly, of the economic impact the forecast will have on their income.
Finally, we present the continuous relative economic value of both post-processing methods to identify which is more beneficial to the wind energy industry of Ireland.
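
    One of the probabilistic scores mentioned, the CRPS, has a simple closed sample form for an ensemble forecast. A minimal sketch (the wind-speed numbers are hypothetical, not from the study):

```python
def crps_ensemble(members, obs):
    """Sample CRPS of an ensemble forecast against one observation
    (lower is better): mean|x_i - y| - 0.5 * mean|x_i - x_j|."""
    m = len(members)
    t1 = sum(abs(x - obs) for x in members) / m
    t2 = sum(abs(x - y) for x in members for y in members) / (2 * m * m)
    return t1 - t2

# Hypothetical 10 m wind-speed ensemble (m/s) versus an observation
ens = [6.2, 7.1, 5.8, 6.9, 7.4, 6.5, 5.9, 7.0]
print(round(crps_ensemble(ens, 6.8), 3))
```

    For a single deterministic forecast the CRPS reduces to the absolute error, which is why it is often described as its probabilistic generalization.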

  12. Sexual orientation and spatial position effects on selective forms of object location memory.

    PubMed

    Rahman, Qazi; Newland, Cherie; Smyth, Beatrice Mary

    2011-04-01

    Prior research has demonstrated robust sex and sexual orientation-related differences in object location memory in humans. Here we show that this sexual variation may depend on the spatial position of target objects and the task-specific nature of the spatial array. We tested the recovery of object locations in three object arrays (object exchanges, object shifts, and novel objects) relative to veridical center (left compared to right side of the arrays) in a sample of 35 heterosexual men, 35 heterosexual women, and 35 homosexual men. Relative to heterosexual men, heterosexual women showed better location recovery in the right side of the array during object exchanges and homosexual men performed better in the right side during novel objects. However, the difference between heterosexual and homosexual men disappeared after controlling for IQ. Heterosexual women and homosexual men did not differ significantly from each other in location change detection with respect to task or side of array. These data suggest that visual space biases in processing categorical spatial positions may enhance aspects of object location memory in heterosexual women. Copyright © 2010 Elsevier Inc. All rights reserved.

  13. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
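
    A minimal sketch of the proposed procedure with the Euclidean distance as the test statistic (the toy histograms in the usage below are illustrative; the study used satellite cloud-object histograms):

```python
import math, random

def euclidean(h1, h2):
    """Euclidean distance between two normalized histograms."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def summary_histogram(hists):
    """Summary histogram: bin-wise sum over individual histograms,
    normalized to unit area."""
    sums = [sum(col) for col in zip(*hists)]
    total = sum(sums)
    return [s / total for s in sums]

def bootstrap_pvalue(hists_a, hists_b, n_boot=2000, seed=0):
    """Bootstrap significance level for the difference between two
    summary histograms: resample individual histograms from the pooled
    set (the null hypothesis of no difference) and count how often the
    resampled distance reaches the observed one."""
    rng = random.Random(seed)
    observed = euclidean(summary_histogram(hists_a),
                         summary_histogram(hists_b))
    pooled = hists_a + hists_b
    count = 0
    for _ in range(n_boot):
        sa = [rng.choice(pooled) for _ in range(len(hists_a))]
        sb = [rng.choice(pooled) for _ in range(len(hists_b))]
        if euclidean(summary_histogram(sa), summary_histogram(sb)) >= observed:
            count += 1
    return count / n_boot

# Two clearly different ensembles give a small significance level:
print(bootstrap_pvalue([[10, 5, 1]] * 6, [[1, 5, 10]] * 6) < 0.05)
```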

  14. Statistical Optimization of 1,3-Propanediol (1,3-PD) Production from Crude Glycerol by Considering Four Objectives: 1,3-PD Concentration, Yield, Selectivity, and Productivity.

    PubMed

    Supaporn, Pansuwan; Yeom, Sung Ho

    2018-04-30

    This study investigated the biological conversion of crude glycerol generated from a commercial biodiesel production plant as a by-product to 1,3-propanediol (1,3-PD). Statistical analysis was employed to derive a statistical model for the individual and interactive effects of glycerol, (NH 4 ) 2 SO 4 , trace elements, pH, and cultivation time on the four objectives: 1,3-PD concentration, yield, selectivity, and productivity. Optimum conditions for each objective with its maximum value were predicted by statistical optimization, and experiments under the optimum conditions verified the predictions. In addition, by systematic analysis of the values of four objectives, optimum conditions for 1,3-PD concentration (49.8 g/L initial glycerol, 4.0 g/L of (NH 4 ) 2 SO 4 , 2.0 mL/L of trace element, pH 7.5, and 11.2 h of cultivation time) were determined to be the global optimum culture conditions for 1,3-PD production. Under these conditions, we could achieve high 1,3-PD yield (47.4%), 1,3-PD selectivity (88.8%), and 1,3-PD productivity (2.1/g/L/h) as well as high 1,3-PD concentration (23.6 g/L).

  15. Stokes-correlometry of polarization-inhomogeneous objects

    NASA Astrophysics Data System (ADS)

    Ushenko, O. G.; Dubolazov, A.; Bodnar, G. B.; Bachynskiy, V. T.; Vanchulyak, O.

    2018-01-01

    The paper consists of two parts. The first part presents the short theoretical basics of the Stokes-correlometry method for describing the optical anisotropy of biological tissues. Experimentally measured coordinate distributions of the modulus (MSV) and phase (PhSV) of the complex Stokes vector of skeletal muscle tissue are provided, and the values and ranges of change of the statistical moments of the 1st-4th orders, which characterize the distributions of MSV and PhSV values, are defined. The second part presents a statistical analysis of the distributions of the MSV and PhSV. Objective criteria for the differentiation of samples with urinary incontinence are defined.
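
    The statistical moments of the 1st-4th orders used to characterize such distributions are, in their common form (the paper's exact estimator convention is not stated, so this is an assumption):

```python
import math

def statistical_moments(values):
    """First four statistical moments of a sample: mean, variance,
    skewness and excess kurtosis (population estimators)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    sd = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in values) / (n * sd ** 3)
    kurt = sum((v - mean) ** 4 for v in values) / (n * sd ** 4) - 3.0
    return mean, var, skew, kurt

# A symmetric sample has zero skewness:
print(statistical_moments([1.0, 2.0, 3.0, 4.0, 5.0]))
```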

  16. An Investigation of Ionic Wind Propulsion

    NASA Technical Reports Server (NTRS)

    Wilson, Jack; Perkins, Hugh D.; Thompson, William K.

    2009-01-01

    A corona discharge device generates an ionic wind and thrust when a high-voltage corona discharge is struck between sharply pointed electrodes and larger-radius ground electrodes. The objective of this study was to examine whether this thrust could be scaled to values of interest for aircraft propulsion. An initial experiment showed that the thrust observed did equal the thrust of the ionic wind. Different types of high voltage electrodes were tried, including wires, knife-edges, and arrays of pins. A pin array was found to be optimum. Parametric experiments, and theory, showed that the thrust per unit power could be raised from early values of 5 N/kW to values approaching 50 N/kW, but only by lowering the thrust produced and raising the voltage applied. In addition to using DC voltage, pulsed excitation, with and without a DC bias, was examined. The results were inconclusive as to whether this was advantageous. It was concluded that the use of a corona discharge for aircraft propulsion did not seem very practical.
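
    The thrust versus thrust-per-power trade-off reported above is consistent with the standard one-dimensional ion-drift model, T/P = 1/(mu*E); this model and the mobility value are assumptions of this sketch, not results stated by the authors, and the geometry numbers are purely illustrative:

```python
# One-dimensional corona (ionic wind) thruster model: T = I*d/mu and
# P = I*V, so T/P = d/(mu*V) = 1/(mu*E). Lower average field E raises
# N/kW, but a workable corona at low E needs bigger gaps and higher
# voltages, at much lower current and hence much lower total thrust.

MU = 2e-4  # m^2/(V*s), assumed approximate mobility of air ions

def thrust_per_power(gap_m, volts):
    """Thrust-to-power ratio in N/W for gap gap_m (m) and voltage volts (V)."""
    return gap_m / (MU * volts)

print(thrust_per_power(0.05, 50e3) * 1e3, "N/kW")   # 5.0 N/kW at E = 1 MV/m
print(thrust_per_power(0.5, 100e3) * 1e3, "N/kW")   # 25.0 N/kW at E = 0.2 MV/m
```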

  17. Lu-Hf systematics of meteorites

    NASA Astrophysics Data System (ADS)

    Bizzarro, M.; Baker, J. A.; Haack, H.

    2003-04-01

    We have measured Lu-Hf concentrations and Hf isotope ratios on a number of solar system objects with a new digestion and chemical separation technique (1). The analysed materials include a variety of carbonaceous and ordinary chondrites (CC and OC), basaltic eucrites and a diogenite, and work is ongoing on angrites, aubrites and mesosiderites. Nineteen analyses of OC and CC define, for the first time, a statistically significant Lu-Hf isochron with a slope of 0.09465 ± 145 and intercept of 0.279628 ± 47 (2). In contrast to the CC and type 3 OC (176Lu/177Hf = 0.032-0.034), the more highly metamorphosed OC have a large range of 176Lu/177Hf ratios (0.026-0.036). The large range of 176Lu/177Hf values may be related to heterogeneous variations in phosphate abundances in equilibrated OC, which is supported by the observation that most of the observed variation is defined by this type of material. The present-day bulk-earth 176Hf/177Hf ratio calculated from this study, and a 176Lu/177Hf ratio of 0.0332, is identical to the value of (3) and confirms that the chondritic Hf-Nd isotopic composition is displaced (3 ɛ units) to unradiogenic Hf compared to the terrestrial array. The slope and intercept derived from individual regressions of either the OC or the L type alone are identical within analytical uncertainty. Using a mean age of 4.56 Ga for the chondrite forming event, we derive a value of λ176Lu = (1.983 ± 0.033) × 10^-11 y^-1 from the regression of the chondrite meteorites, ca. 6% faster than a recent calibration based on terrestrial material, which has important implications for the differentiation of the early Earth (2, 4). The four basaltic eucrites analysed align on the same array as the chondrites and, as such, chondrites and basaltic eucrites also define a statistically significant isochron with a slope of 0.09462 ± 68 and intercept of 0.279627 ± 20, identical to the values derived from the chondrites alone.
    Moreover, a recent Lu-Hf study of basaltic eucrites also yielded a slope and intercept identical to those determined here (5). In contrast, three cumulate eucrites of (5) and our analysis of the Bilanga diogenite align on a statistically significant Lu-Hf isochron defining an age of 4.349 ± 0.073 Ga. This implies a genetic relationship between diogenites and cumulate eucrites, and further confirms that cumulate eucrites are at least 100 Myr younger than basaltic eucrites. (1) Bizzarro, M., Baker, J.A. & Ulfbeck, D. (in review) Geostandards Newsletter. (2) Bizzarro, M., Baker, J.A., Haack, H., Ulfbeck, D. & Rosing, M. (in press) Nature. (3) Blichert-Toft, J. & Albarede, F. (1997) EPSL 148, 243-258. (4) Scherer, E., Münker, C. & Mezger, K. (2001) Science 293, 683-686. (5) Blichert-Toft, J., Boyet, M., Télouk, P. & Albarède, F. (2002) EPSL 204, 167-181.
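
    The link between an isochron slope and an age follows from slope = exp(lambda*t) - 1, so the decay-constant calibration above can be checked directly against the abstract's numbers:

```python
import math

def isochron_age(slope, decay_const_per_yr):
    """Age t (years) from an isochron slope, slope = exp(lambda*t) - 1."""
    return math.log(1.0 + slope) / decay_const_per_yr

# Chondrite slope 0.09465 and lambda(176Lu) = 1.983e-11 /yr, as reported:
age = isochron_age(0.09465, 1.983e-11)
print(round(age / 1e9, 2), "Ga")   # ~4.56 Ga, the assumed chondrite age
```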

  18. Simulation of anisoplanatic imaging through optical turbulence using numerical wave propagation with new validation analysis

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; Power, Jonathan D.; LeMaster, Daniel A.; Droege, Douglas R.; Gladysz, Szymon; Bose-Pillai, Santasri

    2017-07-01

    We present a numerical wave propagation method for simulating imaging of an extended scene under anisoplanatic conditions. While isoplanatic simulation is relatively common, few tools are specifically designed for simulating the imaging of extended scenes under anisoplanatic conditions. We provide a complete description of the proposed simulation tool, including the wave propagation method used. Our approach computes an array of point spread functions (PSFs) for a two-dimensional grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. The degradation includes spatially varying warping and blurring. To produce the PSF array, we generate a series of extended phase screens. Simulated point sources are numerically propagated from an array of positions on the object plane, through the phase screens, and ultimately to the focal plane of the simulated camera. Note that the optical path for each PSF will be different, and thus, pass through a different portion of the extended phase screens. These different paths give rise to a spatially varying PSF to produce anisoplanatic effects. We use a method for defining the individual phase screen statistics that we have not seen used in previous anisoplanatic simulations. We also present a validation analysis. In particular, we compare simulated outputs with the theoretical anisoplanatic tilt correlation and a derived differential tilt variance statistic. This is in addition to comparing the long- and short-exposure PSFs and isoplanatic angle. We believe this analysis represents the most thorough validation of an anisoplanatic simulation to date. The current work is also unique in that we simulate and validate both constant and varying Cn2(z) profiles. Furthermore, we simulate sequences with both temporally independent and temporally correlated turbulence effects.
Temporal correlation is introduced by generating even larger extended phase screens and translating this block of screens in front of the propagation area. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. Thus, we think this tool can be used effectively to study optical anisoplanatic turbulence and to aid in the development of image restoration methods.
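
    Phase screens of the kind described are commonly generated with an FFT-based method. The sketch below follows one common normalization convention and omits the low-frequency (subharmonic) correction; it is a generic illustration, not the paper's method for defining the individual screen statistics:

```python
import numpy as np

def kolmogorov_phase_screen(n, dx, r0, seed=0):
    """FFT-based random phase screen with a Kolmogorov spectrum.
    n: grid size, dx: sample spacing (m), r0: Fried parameter (m).
    Returns an n x n phase screen in radians."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies (1/m)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.sqrt(fxx ** 2 + fyy ** 2)
    f[0, 0] = 1.0                           # placeholder to avoid 0**negative
    # Kolmogorov phase PSD: 0.023 * r0^(-5/3) * f^(-11/3)
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    psd[0, 0] = 0.0                         # drop the undefined piston term
    df = 1.0 / (n * dx)
    cn = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    # shape white complex Gaussian noise by the PSD and transform
    screen = np.fft.ifft2(cn * np.sqrt(psd) * df) * n * n
    return np.real(screen)
```

    Larger extended screens of this type can then be translated across the propagation area to introduce the temporal correlation described above.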

  19. Computer Modelling and Simulation of Solar PV Array Characteristics

    NASA Astrophysics Data System (ADS)

    Gautam, Nalin Kumar

    2003-02-01

    The main objective of my PhD research work was to study the behaviour of inter-connected solar photovoltaic (PV) arrays. The approach involved the construction of mathematical models to investigate different types of research problems related to the energy yield, fault tolerance, efficiency and optimal sizing of inter-connected solar PV array systems. My research work can be divided into four different types of research problems: 1. Modeling of inter-connected solar PV array systems to investigate their electrical behavior, 2. Modeling of different inter-connected solar PV array networks to predict their expected operational lifetimes, 3. Modeling solar radiation estimation and its variability, and 4. Modeling of a coupled system to estimate the size of PV array and battery-bank in the stand-alone inter-connected solar PV system where the solar PV system depends on a system providing solar radiant energy. The successful application of mathematics to the above-mentioned problems entailed three phases: 1. The formulation of the problem in a mathematical form using numerical, optimization, probabilistic and statistical methods / techniques, 2. The translation of mathematical models using C++ to simulate them on a computer, and 3. The interpretation of the results to see how closely they correlated with the real data. The array is the most cost-intensive component of the solar PV system. Since the electrical performances as well as life properties of an array are highly sensitive to field conditions, different characteristics of the arrays, such as energy yield, operational lifetime, collector orientation, and optimal sizing were investigated in order to improve their efficiency, fault-tolerance and reliability. Three solar cell interconnection configurations in the array - series-parallel, total-cross-tied, and bridge-linked, were considered.
The electrical characteristics of these configurations were investigated to find out one that is comparatively less susceptible to the mismatches due to manufacturer's tolerances in cell characteristics, shadowing, soiling and aging of solar cells. The current-voltage curves and the values of energy yield characterized by maximum-power points and fill factors for these arrays were also obtained. Two different mathematical models, one for smaller size arrays and the other for the larger size arrays, were developed. The first model takes account of the partial differential equations with boundary value conditions, whereas the second one involves the simple linear programming concept. Based on the initial information on the values of short-circuit current and open-circuit voltage of thirty-six single-crystalline silicon solar cells provided by a manufacturer, the values of these parameters for up to 14,400 solar cells were generated randomly. Thus, the investigations were done for three different cases of array sizes, i.e., (6 x 6), (36 x 8) and (720 x 20), for each configuration. The operational lifetimes of different interconnected solar PV arrays and the improvement in their life properties through different interconnection and modularized configurations were investigated using a reliability-index model. Under normal conditions, the efficiency of a solar cell degrades in an exponential manner, and its operational life above a lowest admissible efficiency may be considered as the upper bound of its lifetime. Under field conditions, the solar cell may fail any time due to environmental stresses, or it may function up to the end of its expected lifetime. In view of this, the lifetime of a solar cell in an array was represented by an exponentially distributed random variable. At any instant of time t, this random variable was considered to have two states: (i) the cell functioned till time t, or (ii) the cell failed within time t. 
    It was considered that the functioning of the solar cell included its operation at an efficiency decaying with time under normal conditions. It was assumed that the lifetime of a solar cell had the lack-of-memory (no-aging) property, which meant that no matter how long (say, t) the cell had been operational, the probability that it would last an additional time Δt was independent of t. The operational life of the solar cell above a lowest admissible efficiency was considered as the upper bound of its expected lifetime. The value of the upper bound on the expected life of a solar cell was evaluated using the information provided by the manufacturers of the single-crystalline silicon solar cells. Then on the basis of these lifetimes, the expected operational lifetimes of the array systems were obtained. Since the investigations of the effects of collector orientation on the performance of an array require the continuous values of global solar radiation on a surface, a method to estimate the global solar radiation on a surface (horizontal or tilted) was also proposed. The cloudiness index was defined as the fraction of extraterrestrial radiation that reached the earth's surface when the sky above the location of interest was obscured by the cloud cover. The cloud cover at the location of interest during any time interval of a day was assumed to follow a fuzzy random phenomenon. The cloudiness index, therefore, was considered as a fuzzy random variable that accounted for the cloud cover at the location of interest during any time interval of a day. This variable was assumed to depend on four other fuzzy random variables that, respectively, accounted for the cloud cover corresponding to the 1) type of cloud group, 2) climatic region, 3) season with most of the precipitation, and 4) type of precipitation at the location of interest during any time interval. All possible types of cloud covers were categorized into five types of cloud groups.
    Each cloud group was considered to be a fuzzy subset. In this model, the cloud cover at the location of interest during a time interval was considered to be the clouds that obscure the sky above the location. The cloud covers, with all possible types of clouds having transmissivities corresponding to values in the membership range of a fuzzy subset (i.e., a type of cloud group), were considered to be the membership elements of that fuzzy subset. The transmissivities of different types of cloud covers in a cloud group corresponded to the values in the membership range of that cloud group. Predicate logic (i.e., if---then---, else---, conditions) was used to set the relationship between all the fuzzy random variables. The values of the above-mentioned fuzzy random variables were evaluated to provide the value of cloudiness index for each time interval at the location of interest. For each case of the fuzzy random variable, a heuristic approach was used to subjectively identify the range [a, b], where a and b were real numbers within [0, 1] such that a
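
    The lack-of-memory property invoked for the cell lifetimes above can be stated concretely: for an exponentially distributed lifetime, the probability of surviving an additional Δt does not depend on how long the cell has already run. A minimal sketch (the mean life value is illustrative, not from the thesis):

```python
import math

def survival(t, mean_life):
    """P(T > t) for an exponentially distributed cell lifetime."""
    return math.exp(-t / mean_life)

def conditional_survival(t, dt, mean_life):
    """P(T > t + dt | T > t): the memoryless (no-aging) property means
    this equals P(T > dt) regardless of t."""
    return survival(t + dt, mean_life) / survival(t, mean_life)

mean_life = 20.0  # years, illustrative
# All three values are identical, whatever the elapsed time t:
print(conditional_survival(5.0, 1.0, mean_life))
print(conditional_survival(15.0, 1.0, mean_life))
print(survival(1.0, mean_life))
```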

  20. [Clinical validation of multiple biomarkers suspension array technology for ovarian cancer].

    PubMed

    Zhao, B B; Yang, Z J; Wang, Q; Pan, Z M; Zhang, W; Li, L

    2017-01-25

    Objective: To investigate the diagnostic value for ovarian cancer of the combined detection, by suspension array, of serum CCL18 and CXCL1 antigens and C1D, TM4SF1, FXR1 and TIZ IgG autoantibodies. Methods: Suspension array was used to detect the CCL18 and CXCL1 antigens and the C1D, TM4SF1, FXR1 and TIZ IgG autoantibodies in 120 healthy women, 204 patients with benign pelvic tumors, 119 patients with malignant pelvic tumors, and 40 cases with breast cancer, lung cancer or liver cancer, respectively. A diagnostic model combining the six biomarkers was constructed for the diagnosis of malignant ovarian tumors, and a diagnostic model combining the autoantibodies was constructed for the diagnosis of epithelial ovarian cancer. The value of detecting the six biomarkers for diagnosing malignant ovarian tumors and of detecting the autoantibodies for diagnosing epithelial ovarian cancer was analysed, as was the diagnostic value of the six biomarkers for stage Ⅰ and Ⅱ epithelial ovarian cancer. The diagnostic value of the six biomarkers across tissue types and pathological grades was compared with that of CA(125). Results: The model combining the six biomarkers for diagnosing malignant ovarian tumors was logit(P) = -11.151 + 0.008×C1D + 0.011×TM4SF1 + 0.011×TIZ - 0.008×FXR1 + 0.021×CCL18 + 0.200×CXCL1. The model combining the autoantibodies for diagnosing epithelial ovarian cancer was logit(P) = -5.137 + 0.013×C1D + 0.014×TM4SF1 + 0.060×TIZ - 0.060×FXR1. The sensitivity and specificity of the six biomarkers for diagnosing malignant ovarian tumors were 90.6% and 98.7%; the sensitivity and specificity of the autoantibodies for diagnosing epithelial ovarian cancer were 75.8% and 96.7%.
    Combined detection of the six biomarkers for diagnosing serous and mucinous ovarian cancer was statistically no better than CA(125) (P=0.196 and P=0.602, respectively); there was a significant difference in the diagnosis of ovarian cancer (P=0.023), and no significant difference in the diagnosis of different pathological grades (P=0.089 and P=0.169, respectively). Conclusions: Diagnostic models combining the six biomarkers (for malignant ovarian tumors) and the autoantibodies (for epithelial ovarian cancer) were constructed. Combined detection of the six biomarkers to diagnose serous and mucinous ovarian tumors is better than CA(125) alone.
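
    The reported model is an ordinary logistic regression, so turning its linear predictor into a probability is mechanical. The abstract does not give the units or scales of the marker measurements, so the input values in the usage below are placeholders, and the decision threshold would in practice come from ROC analysis:

```python
import math

# Coefficients of the abstract's combined six-biomarker model:
COEF = {"C1D": 0.008, "TM4SF1": 0.011, "TIZ": 0.011,
        "FXR1": -0.008, "CCL18": 0.021, "CXCL1": 0.200}
INTERCEPT = -11.151

def predicted_probability(markers):
    """Convert logit(P) = intercept + sum(coef * value) into P."""
    z = INTERCEPT + sum(COEF[k] * markers[k] for k in COEF)
    return 1.0 / (1.0 + math.exp(-z))

# Placeholder marker values, for illustration only:
low = {k: 0.0 for k in COEF}
high = dict(low, CXCL1=100.0)
print(predicted_probability(low), predicted_probability(high))
```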

  1. System for interferometric distortion measurements that define an optical path

    DOEpatents

    Bokor, Jeffrey; Naulleau, Patrick

    2003-05-06

    An improved phase-shifting point diffraction interferometer can measure both distortion and wavefront aberration. In the preferred embodiment, the interferometer employs an object-plane pinhole array comprising a plurality of object pinholes located between the test optic and the source of electromagnetic radiation, and an image-plane mask array that is positioned in the image plane of the test optic. The image-plane mask array comprises a plurality of test windows and corresponding reference pinholes, wherein the positions of the plurality of pinholes in the object-plane pinhole array register with those of the plurality of test windows in the image-plane mask array. Electromagnetic radiation is directed into a first pinhole of the object-plane pinhole array, thereby creating a first corresponding test beam image on the image-plane mask array. Where distortion is relatively small, it can be directly measured interferometrically by measuring the separation distance between, and the orientation of, the test beam and reference-beam pinhole, and repeating this process for at least one other pinhole of the plurality of pinholes of the object-plane pinhole array. Where the distortion is relatively large, it can be measured by using interferometry to direct the motion of a stage supporting the image-plane mask array, and then using the final stage motion as a measure of the distortion.

  2. Air-flow distortion and turbulence statistics near an animal facility

    NASA Astrophysics Data System (ADS)

    Prueger, J. H.; Eichinger, W. E.; Hipps, L. E.; Hatfield, J. L.; Cooper, D. I.

    The emission and dispersion of particulates and gases from concentrated animal feeding operations (CAFO) at local to regional scales is a current issue in science and society. The transport of particulates, odors and toxic chemical species from the source into the local and eventually regional atmosphere is largely determined by turbulence. Any models that attempt to simulate the dispersion of particles must either specify or assume various statistical properties of the turbulence field. Statistical properties of turbulence are well documented for idealized boundary layers above uniform surfaces. However, an animal production facility is a complex surface with structures that act as bluff bodies that distort the turbulence intensity near the buildings. As a result, the initial release and subsequent dispersion of effluents in the region near a facility will be affected by the complex nature of the surface. Previous Lidar studies of plume dispersion over the facility used in this study indicated that plumes move in complex yet organized patterns that could not be explained by the properties of turbulence generally assumed in models. The objective of this study was to characterize the near-surface turbulence statistics in the flow field around an array of animal confinement buildings. Eddy covariance towers were erected in the upwind, within-building-array and downwind regions of the flow field. Substantial changes in turbulence intensity statistics and turbulence-kinetic energy (TKE) were observed as the mean wind flow encountered the building structures. Spectral analysis demonstrated a unique distribution of the spectral energy in the vertical profile above the buildings.
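
    The turbulence quantities named above follow from standard definitions applied to eddy-covariance velocity samples. A minimal sketch (standard formulas, not the study's processing code):

```python
def turbulence_stats(u, v, w):
    """Turbulence-kinetic energy (m^2/s^2) and streamwise turbulence
    intensity from velocity component samples u, v, w (m/s):
    TKE = 0.5*(var(u) + var(v) + var(w)), TI = sigma_u / |mean(u)|."""
    def mean(x):
        return sum(x) / len(x)
    def var(x):
        m = mean(x)
        return sum((xi - m) ** 2 for xi in x) / len(x)
    tke = 0.5 * (var(u) + var(v) + var(w))
    ti = var(u) ** 0.5 / abs(mean(u))
    return tke, ti

# A perfectly steady wind has zero TKE:
print(turbulence_stats([2.0, 2.0, 2.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]))
```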

  3. The chemiluminescence based Ziplex automated workstation focus array reproduces ovarian cancer Affymetrix GeneChip expression profiles.

    PubMed

    Quinn, Michael C J; Wilson, Daniel J; Young, Fiona; Dempsey, Adam A; Arcand, Suzanna L; Birch, Ashley H; Wojnarowicz, Paulina M; Provencher, Diane; Mes-Masson, Anne-Marie; Englert, David; Tonin, Patricia N

    2009-07-06

    As gene expression signatures may serve as biomarkers, there is a need to develop technologies based on mRNA expression patterns that are adaptable for translational research. Xceed Molecular has recently developed a Ziplex technology, that can assay for gene expression of a discrete number of genes as a focused array. The present study has evaluated the reproducibility of the Ziplex system as applied to ovarian cancer research of genes shown to exhibit distinct expression profiles initially assessed by Affymetrix GeneChip analyses. The new chemiluminescence-based Ziplex gene expression array technology was evaluated for the expression of 93 genes selected based on their Affymetrix GeneChip profiles as applied to ovarian cancer research. Probe design was based on the Affymetrix target sequence that favors the 3' UTR of transcripts in order to maximize reproducibility across platforms. Gene expression analysis was performed using the Ziplex Automated Workstation. Statistical analyses were performed to evaluate reproducibility of both the magnitude of expression and differences between normal and tumor samples by correlation analyses, fold change differences and statistical significance testing. Expressions of 82 of 93 (88.2%) genes were highly correlated (p < 0.01) in a comparison of the two platforms. Overall, 75 of 93 (80.6%) genes exhibited consistent results in normal versus tumor tissue comparisons for both platforms (p < 0.001). The fold change differences were concordant for 87 of 93 (94%) genes, where there was agreement between the platforms regarding statistical significance for 71 (76%) of 87 genes. There was a strong agreement between the two platforms as shown by comparisons of log2 fold differences of gene expression between tumor versus normal samples (R = 0.93) and by Bland-Altman analysis, where greater than 90% of expression values fell within the 95% limits of agreement. 
Overall concordance of gene expression patterns based on correlations, statistical significance between tumor and normal ovary data, and fold changes was consistent between the Ziplex and Affymetrix platforms. The reproducibility and ease-of-use of the technology suggests that the Ziplex array is a suitable platform for translational research.
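
    The Bland-Altman agreement check mentioned above can be sketched generically (this is the standard method, not the authors' code; the paired values in practice would be log2 expression values from the two platforms):

```python
import math

def bland_altman_limits(x, y):
    """Bland-Altman 95% limits of agreement for paired measurements:
    bias +/- 1.96 * SD of the pairwise differences."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias - 1.96 * sd, bias + 1.96 * sd

def fraction_within(x, y):
    """Fraction of pairwise differences inside the limits of agreement."""
    lo, hi = bland_altman_limits(x, y)
    diffs = [a - b for a, b in zip(x, y)]
    return sum(lo <= d <= hi for d in diffs) / len(diffs)
```

    For two well-agreeing platforms, well over 90% of differences fall within the limits, as reported in the abstract.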

  4. Detection of coffee flavour ageing by solid-phase microextraction/surface acoustic wave sensor array technique (SPME/SAW).

    PubMed

    Barié, Nicole; Bücking, Mark; Stahl, Ullrich; Rapp, Michael

    2015-06-01

    The use of polymer coated surface acoustic wave (SAW) sensor arrays is a very promising technique for highly sensitive and selective detection of volatile organic compounds (VOCs). We present new developments to achieve a low cost sensor setup with a sampling method enabling the highly reproducible detection of volatiles even in the ppb range. Since the VOCs of coffee are well known from gas chromatography (GC) research studies, the new sensor array was tested on an easily assessable objective: coffee ageing during storage. As a reference method, these changes were traced with a standard GC/FID set-up, accompanied by sensory panellists. The evaluation of the GC data showed a non-linear characteristic for single compound concentrations as well as for total peak area values, preventing prediction of the coffee's age. In contrast, the new SAW sensor array demonstrates a linear dependency, i.e., it is capable of showing a dependency between volatile concentration and storage time. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Detection and Localization of Subsurface Two-Dimensional Metallic Objects

    NASA Astrophysics Data System (ADS)

    Meschino, S.; Pajewski, L.; Schettini, G.

    2009-04-01

    "Roma Tre" University, Applied Electronics Dept.v. Vasca Navale 84, 00146 Rome, Italy Non-invasive identification of buried objects in the near-field of a receiver array is a subject of great interest, due to its application to the remote sensing of the earth's subsurface, to the detection of landmines, pipes, conduits, to the archaeological site characterization, and more. In this work, we present a Sub-Array Processing (SAP) approach for the detection and localization of subsurface perfectly-conducting circular cylinders. We consider a plane wave illuminating the region of interest, which is assumed to be a homogeneous, unlossy medium of unknown permittivity containing one or more targets. In a first step, we partition the receiver array so that the field scattered from the targets result to be locally plane at each sub-array. Then, we apply a Direction of Arrival (DOA) technique to obtain a set of angles for each locally plane wave, and triangulate these directions obtaining a collection of crossing crowding in the expected object locations [1]. We compare several DOA algorithms such as the traditional Bartlett and Capon Beamforming, the Pisarenko Harmonic Decomposition (PHD), the Minimum-Norm method, the Multiple Signal Classification (MUSIC) and the Estimation of Signal Parameters via Rotational Techinque (ESPRIT) [2]. In a second stage, we develop a statistical Poisson based model to manage the crossing pattern in order to extract the probable target's centre position. In particular, if the crossings are Poisson distributed, it is possible to feature two different distribution parameters [3]. These two parameters perform two density rate for the crossings, so that we can previously divide the crossing pattern in a certain number of equal-size windows and we can collect the windows of the crossing pattern with low rate parameters (that probably are background windows) and remove them. 
In this way we can retain only the windows with a high rate parameter (which most probably contain the target) and extract the centre position of the object. We also consider other aspects connected to localization, for example how to obtain a reliable estimate of the soil permittivity and of the cylinder radii. Finally, when multiple objects are present, we refine our localization procedure by performing a clustering analysis of the crossing pattern. In particular, we apply the K-means algorithm to extract the coordinates of the object centroids and the cluster extensions. References [1] Şahin A., Miller L., "Object Detection Using High Resolution Near-Field Array Processing", IEEE Trans. on Geoscience and Remote Sensing, vol. 39, no. 1, Jan. 2001, pp. 136-141. [2] Gross F. B., "Smart Antennas for Wireless Communications", McGraw-Hill, 2005. [3] Hoaglin D. C., "A Poissonness Plot", The American Statistician, vol. 34, no. 3, Aug. 1980, pp. 146-149.
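The second stage described above (screening out low-rate background windows, then clustering the surviving crossings with K-means) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the synthetic crossing pattern, the grid-cell count threshold standing in for the Poisson low-rate/high-rate window split, and all parameter values are assumptions.

```python
import math
import random
from collections import Counter

def dense_windows(points, cell=0.25, min_count=5):
    """Keep only crossings lying in grid windows with a high count; a crude
    stand-in for the Poisson low-rate/high-rate window split."""
    key = lambda p: (math.floor(p[0] / cell), math.floor(p[1] / cell))
    counts = Counter(key(p) for p in points)
    return [p for p in points if counts[key(p)] >= min_count]

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means on 2-D points; returns the k cluster centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                                            + (p[1] - centroids[c][1]) ** 2)
            groups[j].append(p)
        centroids = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                     if g else centroids[j] for j, g in enumerate(groups)]
    return centroids

# Synthetic crossing pattern: two dense clouds of crossings (two hypothetical
# buried cylinders centred at (0, 1) and (1, 1.5)) plus sparse background.
rng = random.Random(1)
c1 = [(rng.gauss(0.0, 0.05), rng.gauss(1.0, 0.05)) for _ in range(80)]
c2 = [(rng.gauss(1.0, 0.05), rng.gauss(1.5, 0.05)) for _ in range(80)]
bg = [(rng.uniform(-1, 2), rng.uniform(0, 3)) for _ in range(20)]

kept = dense_windows(c1 + c2 + bg)          # discard low-rate windows
centroids = sorted(kmeans(kept, 2), key=lambda c: c[0])
```

With the background windows removed first, the recovered centroids land close to the assumed cylinder centres.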

  6. Summary Statistics for Homemade "Play Dough" -- Data Acquired at LLNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kallman, J S; Morales, K E; Whipple, R E

    Using x-ray computerized tomography (CT), we have characterized the x-ray linear attenuation coefficients (LAC) of a homemade Play Dough™-like material, designated as PDA. Table 1 gives the first-order statistics for each of four CT measurements, estimated with a Gaussian kernel density estimator (KDE) analysis. The mean values of the LAC range from a high of about 2700 LMHU_D at 100 kVp to a low of about 1200 LMHU_D at 300 kVp. The standard deviation of each measurement is around 10% to 15% of the mean. The entropy covers the range from 6.0 to 7.4. Ordinarily, we would model the LAC of the material and compare the modeled values to the measured values. In this case, however, we did not have the detailed chemical composition of the material and therefore did not model the LAC. Using a method recently proposed by Lawrence Livermore National Laboratory (LLNL), we estimate the value of the effective atomic number, Z_eff, to be near 10. LLNL prepared about 50 mL of the homemade 'Play Dough' in a polypropylene vial and firmly compressed it immediately prior to the x-ray measurements. We used the computer program IMGREC to reconstruct the CT images. The values of the key parameters used in the data capture and image reconstruction are given in this report. Additional details may be found in the experimental SOP and a separate document. To characterize the statistical distribution of LAC values in each CT image, we first isolated an 80% central-core segment of volume elements ('voxels') lying completely within the specimen, away from the walls of the polypropylene vial. All of the voxels within this central core, including those comprised of voids and inclusions, are included in the statistics. We then calculated the mean value, standard deviation and entropy for (a) the four image segments and for (b) their digital gradient images.
(A digital gradient image of a given image was obtained by taking the absolute value of the difference between the initial image and that same image offset by one voxel horizontally, parallel to the rows of the x-ray detector array.) The statistics of the initial image of LAC values are called 'first-order statistics'; those of the gradient image, 'second-order statistics.'
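The first- and second-order statistics described above can be illustrated on synthetic data. A minimal sketch with stated assumptions: a binned-histogram entropy replaces the report's Gaussian KDE, the LAC values are simulated Gaussians with an arbitrary mean and spread, and the image size is arbitrary.

```python
import math
import random

def stats(values, bins=32):
    """Mean, standard deviation, and Shannon entropy (bits) of a sample;
    the entropy is taken over a binned histogram rather than the report's
    Gaussian kernel density estimate."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in hist if c)
    return mean, std, entropy

def gradient_image(img):
    """Absolute difference between the image and itself offset by one voxel
    horizontally (parallel to the detector rows)."""
    return [[abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
            for row in img]

# Synthetic 64x64 "CT slice": LAC values around 1950 with a 250 spread
# (arbitrary numbers in the spirit of the reported LMHU_D range).
rng = random.Random(0)
img = [[rng.gauss(1950.0, 250.0) for _ in range(64)] for _ in range(64)]

flat = [v for row in img for v in row]
m1, s1, e1 = stats(flat)                                # first-order statistics
grad = [v for row in gradient_image(img) for v in row]
m2, s2, e2 = stats(grad)                                # second-order statistics
```

The same three summary numbers are computed for the raw image and for its digital gradient image, mirroring the report's first-order/second-order split.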

  7. Quantitative and simultaneous analysis of the polarity of polycrystalline ZnO seed layers and related nanowires grown by wet chemical deposition.

    PubMed

    Guillemin, Sophie; Parize, Romain; Carabetta, Joseph; Cantelli, Valentina; Albertini, David; Gautier, Brice; Brémond, Georges; Fong, Dillon D; Renevier, Hubert; Consonni, Vincent

    2017-03-03

    The polarity in ZnO nanowires is an important issue since it strongly affects surface configuration and reactivity, nucleation and growth, electro-optical properties, and nanoscale-engineering device performances. However, measuring statistically the polarity of ZnO nanowire arrays grown by chemical bath deposition and elucidating its correlation with the polarity of the underneath polycrystalline ZnO seed layer grown by the sol-gel process represents a major difficulty. To address that issue, we combine resonant x-ray diffraction (XRD) at Zn K-edge using synchrotron radiation with piezoelectric force microscopy and polarity-sensitive chemical etching to statistically investigate the polarity of more than 10⁷ nano-objects both on the macroscopic and local microscopic scales, respectively. By using high temperature annealing under an argon atmosphere, it is shown that the compact, highly c-axis oriented ZnO seed layer is more than 92% Zn-polar and that only a few small O-polar ZnO grains with an amount less than 8% are formed. Correlatively, the resulting ZnO nanowires are also found to be Zn-polar, indicating that their polarity is transferred from the c-axis oriented ZnO grains acting as nucleation sites in the seed layer. These findings pave the way for the development of new strategies to form unipolar ZnO nanowire arrays as a requirement for a number of nanoscale-engineering devices like piezoelectric nanogenerators. They also highlight the great advantage of resonant XRD as a macroscopic, non-destructive method to simultaneously and statistically measure the polarity of ZnO nanowire arrays and of the underneath ZnO seed layer.

  8. Quantitative and simultaneous analysis of the polarity of polycrystalline ZnO seed layers and related nanowires grown by wet chemical deposition

    NASA Astrophysics Data System (ADS)

    Guillemin, Sophie; Parize, Romain; Carabetta, Joseph; Cantelli, Valentina; Albertini, David; Gautier, Brice; Brémond, Georges; Fong, Dillon D.; Renevier, Hubert; Consonni, Vincent

    2017-03-01

    The polarity in ZnO nanowires is an important issue since it strongly affects surface configuration and reactivity, nucleation and growth, electro-optical properties, and nanoscale-engineering device performances. However, measuring statistically the polarity of ZnO nanowire arrays grown by chemical bath deposition and elucidating its correlation with the polarity of the underneath polycrystalline ZnO seed layer grown by the sol-gel process represents a major difficulty. To address that issue, we combine resonant x-ray diffraction (XRD) at Zn K-edge using synchrotron radiation with piezoelectric force microscopy and polarity-sensitive chemical etching to statistically investigate the polarity of more than 10⁷ nano-objects both on the macroscopic and local microscopic scales, respectively. By using high temperature annealing under an argon atmosphere, it is shown that the compact, highly c-axis oriented ZnO seed layer is more than 92% Zn-polar and that only a few small O-polar ZnO grains with an amount less than 8% are formed. Correlatively, the resulting ZnO nanowires are also found to be Zn-polar, indicating that their polarity is transferred from the c-axis oriented ZnO grains acting as nucleation sites in the seed layer. These findings pave the way for the development of new strategies to form unipolar ZnO nanowire arrays as a requirement for a number of nanoscale-engineering devices like piezoelectric nanogenerators. They also highlight the great advantage of resonant XRD as a macroscopic, non-destructive method to simultaneously and statistically measure the polarity of ZnO nanowire arrays and of the underneath ZnO seed layer.

  9. Quantitative and simultaneous analysis of the polarity of polycrystalline ZnO seed layers and related nanowires grown by wet chemical deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guillemin, Sophie; Parize, Romain; Carabetta, Joseph

    The polarity in ZnO nanowires is an important issue since it strongly affects surface configuration and reactivity, nucleation and growth, electro-optical properties, and nanoscale-engineering device performances. However, measuring statistically the polarity of ZnO nanowire arrays grown by chemical bath deposition and elucidating its correlation with the polarity of the underneath polycrystalline ZnO seed layer grown by the sol–gel process represents a major difficulty. To address that issue, we combine resonant x-ray diffraction (XRD) at Zn K-edge using synchrotron radiation with piezoelectric force microscopy and polarity-sensitive chemical etching to statistically investigate the polarity of more than 10⁷ nano-objects both on the macroscopic and local microscopic scales, respectively. By using high temperature annealing under an argon atmosphere, it is shown that the compact, highly c-axis oriented ZnO seed layer is more than 92% Zn-polar and that only a few small O-polar ZnO grains with an amount less than 8% are formed. Correlatively, the resulting ZnO nanowires are also found to be Zn-polar, indicating that their polarity is transferred from the c-axis oriented ZnO grains acting as nucleation sites in the seed layer. These findings pave the way for the development of new strategies to form unipolar ZnO nanowire arrays as a requirement for a number of nanoscale-engineering devices like piezoelectric nanogenerators. They also highlight the great advantage of resonant XRD as a macroscopic, non-destructive method to simultaneously and statistically measure the polarity of ZnO nanowire arrays and of the underneath ZnO seed layer.

  10. A fault diagnosis system for PV power station based on global partitioned gradually approximation method

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.

    2016-08-01

    As solar photovoltaic (PV) power is applied extensively, more attention is being paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradually approximation method is proposed as a fault diagnosis algorithm to determine and locate faults of PV panels. The PV array is divided into 16×16 blocks and numbered. On the basis of this modular processing of the PV array, the current values of each block are analyzed. The mean current value of each block is used for calculating the fault weight factor. A fault threshold is defined to determine the fault, and shading is considered in order to reduce the probability of misjudgments. A fault diagnosis system is designed and implemented with LabVIEW. It has several functions, including real-time data display, online checking, statistics, real-time prediction and fault diagnosis. The algorithm is verified with data from PV plants. The results show that the fault diagnosis results are accurate and that the system works well. The validity and feasibility of the system are verified by the results as well. The developed system will benefit the maintenance and management of large-scale PV arrays.
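The block-mean comparison described above can be sketched as follows. This is a hypothetical reconstruction: the abstract does not give the exact weight-factor formula, so the sketch flags a block when its mean current deviates from the array-wide mean by more than an assumed relative threshold; all names and values are illustrative.

```python
import random

def fault_blocks(block_currents, threshold=0.3):
    """Flag blocks whose mean current deviates from the array-wide mean by
    more than `threshold` (a hypothetical fault-weight criterion; the exact
    weight-factor formula is not given in the abstract)."""
    means = {k: sum(v) / len(v) for k, v in block_currents.items()}
    overall = sum(means.values()) / len(means)
    weights = {k: abs(m - overall) / overall for k, m in means.items()}
    return {k for k, w in weights.items() if w > threshold}

# Synthetic 16x16 array: ten current samples per block; block (3, 7) is made
# to produce ~40% less current, emulating a faulty panel.
rng = random.Random(0)
blocks = {(r, c): [rng.gauss(8.0, 0.1) for _ in range(10)]
          for r in range(16) for c in range(16)}
blocks[(3, 7)] = [rng.gauss(4.8, 0.1) for _ in range(10)]

faulty = fault_blocks(blocks)
```

A shading correction, as the paper mentions, would suppress flags for blocks whose low current is explained by a measured irradiance drop rather than a panel fault.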

  11. Optimization of the Hartmann-Shack microlens array

    NASA Astrophysics Data System (ADS)

    de Oliveira, Otávio Gomes; de Lima Monteiro, Davies William

    2011-04-01

    In this work we propose to optimize the microlens-array geometry for a Hartmann-Shack wavefront sensor. The optimization makes it possible to replace regular microlens arrays containing a large number of microlenses with arrays of fewer microlenses located at optimal sampling positions, with no increase in the reconstruction error. The goal is to propose a straightforward and widely accessible numerical method to calculate an optimized microlens array for known aberration statistics. The optimization comprises the minimization of the wavefront reconstruction error and/or of the number of necessary microlenses in the array. We numerically generate, sample and reconstruct the wavefront, and use a genetic algorithm to discover the optimal array geometry. Within an ophthalmological context, as a case study, we demonstrate that an array with only 10 suitably located microlenses can produce reconstruction errors as small as those of a 36-microlens regular array. The same optimization procedure can be employed for any application where the wavefront statistics are known.
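The generate-sample-reconstruct loop driven by a genetic algorithm can be sketched in one dimension. Everything here is an assumption made for illustration: random cubics stand in for real aberration statistics, piecewise-linear interpolation stands in for the paper's wavefront reconstruction, and the GA operators and parameters are generic, not the authors'.

```python
import random
from functools import lru_cache

rng = random.Random(0)
GRID = [i / 63 for i in range(64)]       # candidate microlens positions on [0, 1]
FINE = [i / 99 for i in range(100)]      # fine grid for the error evaluation

# Ensemble of random cubic "wavefronts" standing in for known aberration statistics.
COEFFS = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(10)]

def wf(c, x):
    return c[0] + c[1] * x + c[2] * x * x + c[3] * x ** 3

def interp(xs, ys, x):
    """Piecewise-linear wavefront reconstruction from the sampled positions."""
    if x <= xs[0]:
        return ys[0]
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0 + 1e-12)
    return ys[-1]

@lru_cache(maxsize=None)
def error(genome):
    """Summed squared reconstruction error over the wavefront ensemble."""
    xs = sorted(GRID[g] for g in genome)
    tot = 0.0
    for c in COEFFS:
        ys = [wf(c, x) for x in xs]
        tot += sum((wf(c, x) - interp(xs, ys, x)) ** 2 for x in FINE)
    return tot

def evolve(k=6, pop_size=20, gens=30):
    pop = [tuple(rng.sample(range(64), k)) for _ in range(pop_size)]
    hist = []
    for _ in range(gens):
        pop.sort(key=error)
        hist.append(error(pop[0]))
        pop = pop[:pop_size // 2]                         # elitist truncation
        while len(pop) < pop_size:
            a, b = rng.sample(pop[:5], 2)                 # parents from the elite
            child = list(dict.fromkeys(a[:k // 2] + b))[:k]   # crossover
            if rng.random() < 0.3:                        # mutation
                child[rng.randrange(k)] = rng.randrange(64)
            pop.append(tuple(child))
    pop.sort(key=error)
    hist.append(error(pop[0]))
    return pop[0], hist

best, hist = evolve()
```

With elitism the best reconstruction error can only improve from generation to generation, which is the behaviour the optimization relies on.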

  12. Ecological model of glittering texture

    NASA Astrophysics Data System (ADS)

    Vallet, Matthieu; Paille, Damien; Monot, Annie; Kemeny, Andras

    2003-06-01

    The perceptual effects of changes of texture luminance either between the eyes or over time have been studied in several experiments and have led to a better comprehension of phenomena such as the sieve effect, binocular and monocular lustre, and rivalry. In this paper, we propose an ecological model of glittering texture and analyze glitter perception in terms of variations of texture luminance and animation frequency, under dynamic illumination conditions. Our approach is based on randomly oriented mirrors whose contributions are computed according to the specular term of Phong's image rendering formula. The sparkling effect is thus correlated with the relative movements of the resulting textured object, the light array and the observer's point of view. The perceptual effect obtained with this model depends on several parameters: the mirrors' density, the Phong specular exponent and the statistical properties of the mirrors' normal vectors. The ability to set these properties independently offers a way to explore a characterization space of glitter. A rating procedure provided a first approximation of the numerical values that lead to the most convincing impression of typical sparkling surfaces such as metallic paint, granite or sea shore.
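The randomly-oriented-mirror model above can be sketched directly from the Phong specular term. A minimal sketch, with assumptions: the mirror-normal jitter, mirror count, light/view geometry, and shininess value are all illustrative choices, not the paper's parameters.

```python
import math
import random

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def reflect(d, n):
    """Reflect direction d about unit normal n: r = d - 2(d.n)n."""
    dn = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dn * b for a, b in zip(d, n))

def specular(normal, light_dir, view_dir, shininess):
    """Phong specular term for a single micro-mirror."""
    r = reflect(tuple(-c for c in light_dir), normal)
    return max(0.0, sum(a * b for a, b in zip(r, view_dir))) ** shininess

def random_mirror(rng, spread=0.3):
    """Mirror normal jittered around the macroscopic surface normal (0,0,1);
    `spread` sets the statistical properties of the normal vectors."""
    return normalize((rng.gauss(0.0, spread), rng.gauss(0.0, spread), 1.0))

rng = random.Random(0)
mirrors = [random_mirror(rng) for _ in range(500)]   # mirror density: 500
light = normalize((0.0, 0.0, 1.0))                   # direction toward the light
view = normalize((0.0, 0.0, 1.0))                    # direction toward the eye

# One rendered "frame": only mirrors whose reflection lines up with the view
# direction sparkle, so a high specular exponent gives sparse bright points.
frame = [specular(n, light, view, shininess=200) for n in mirrors]
```

Animating `light` or `view` from frame to frame makes different mirrors align and flash, which is the sparkling effect the model correlates with relative movement.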

  13. Optimization of Robotic Spray Painting process Parameters using Taguchi Method

    NASA Astrophysics Data System (ADS)

    Chidhambara, K. V.; Latha Shankar, B.; Vijaykumar

    2018-02-01

    Automated spray painting is gaining interest in industry and research due to the extensive application of spray painting in the automobile industry. Automating the spray painting process has the advantages of improved quality, productivity, reduced labor, a clean environment and, particularly, cost effectiveness. This study investigates the performance characteristics of an industrial robot, Fanuc 250ib, for an automated painting process using the statistical tool of Taguchi's Design of Experiments technique. The experiment is designed using Taguchi's L25 orthogonal array, considering three factors with five levels for each factor. The objective of this work is to explore the major control parameters and to optimize them for improved quality of the paint coating, measured in terms of Dry Film Thickness (DFT), which also results in reduced rejection. Further, Analysis of Variance (ANOVA) is performed to determine the influence of the individual factors on DFT. It is observed that shaping air and paint flow are the most influential parameters. A multiple regression model is formulated for estimating predicted values of DFT. A confirmation test is then conducted, and the comparison results show that the error is within an acceptable level.
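The Taguchi analysis above can be sketched with a toy design. Assumptions to note: the paper uses an L25 array (three factors, five levels), while this sketch uses four hypothetical runs with two factors at two levels; the "larger-the-better" signal-to-noise convention is also an assumption, since a film-thickness target may equally be treated as nominal-the-best. All run data are invented.

```python
import math

def sn_larger_the_better(ys):
    """Taguchi signal-to-noise ratio, larger-the-better convention."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

# Hypothetical mini-design: 4 runs, two factors at two levels, and two DFT
# replicates per run (the paper's L25 array has three factors at five levels).
runs = [
    {"shaping_air": 1, "paint_flow": 1, "dft": [38.0, 39.5]},
    {"shaping_air": 1, "paint_flow": 2, "dft": [45.2, 44.8]},
    {"shaping_air": 2, "paint_flow": 1, "dft": [40.1, 41.0]},
    {"shaping_air": 2, "paint_flow": 2, "dft": [48.9, 47.5]},
]

def level_means(factor):
    """Mean S/N ratio at each level of a factor."""
    out = {}
    for level in {r[factor] for r in runs}:
        sns = [sn_larger_the_better(r["dft"]) for r in runs if r[factor] == level]
        out[level] = sum(sns) / len(sns)
    return out

# A factor's influence is ranked by the range of its level-mean S/N ratios.
effect = {f: max(level_means(f).values()) - min(level_means(f).values())
          for f in ("shaping_air", "paint_flow")}
```

The factor with the larger S/N range is the more influential one; in a full study this ranking would be confirmed with ANOVA, as the abstract describes.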

  14. Monitoring concept for structural integration of PZT-fiber arrays in metal sheets: a numerical and experimental study

    NASA Astrophysics Data System (ADS)

    Drossel, Welf-Guntram; Schubert, Andreas; Putz, Matthias; Koriath, Hans-Joachim; Wittstock, Volker; Hensel, Sebastian; Pierer, Alexander; Müller, Benedikt; Schmidt, Marek

    2018-01-01

    The joining-by-forming technique allows the structural integration of piezoceramic fibers into locally microstructured metal sheets without any elastic interlayers. High-volume production of the joining partners results in statistical deviations from the nominal dimensions. A numerical simulation of the geometric process sensitivity shows that these deviations have a highly significant influence on the resulting fiber stresses after the joining-by-forming operation and demonstrates the necessity of a monitoring concept. On this basis, the electromechanical behavior of piezoceramic array transducers is investigated experimentally before, during and after the joining process. The piezoceramic array transducer consists of an arrangement of five electrically interconnected piezoceramic fibers. The findings show that the impedance spectrum depends on the fiber stresses and can be used for in-process monitoring during the joining process. Based on the impedance values, the preload state of the interconnected piezoceramic fibers can be specifically controlled and fiber overload avoided.

  15. A renormalization group model for the stick-slip behavior of faults

    NASA Technical Reports Server (NTRS)

    Smalley, R. F., Jr.; Turcotte, D. L.; Solla, S. A.

    1983-01-01

    A fault which is treated as an array of asperities with a prescribed statistical distribution of strengths is described. For a linear array the stress is transferred to a single adjacent asperity, and for a two-dimensional array to three adjacent asperities. It is shown that the solutions bifurcate at a critical applied stress. At stresses less than the critical stress virtually no asperities fail on a large scale and the fault is locked. At the critical stress the solution bifurcates and asperity failure cascades away from the nucleus of failure. It is found that the stick-slip behavior of most faults can be attributed to the distribution of asperities on the fault. The model outlines explanations for the observation of stick-slip behavior on faults rather than stable sliding, for why the observed level of seismicity on a locked fault is very small, and for why the stress on a fault is less than that predicted with a standard value of the coefficient of friction.
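The locked/cascading dichotomy can be illustrated with a direct simulation of the linear-array case. This is an illustrative stress-transfer sketch, not the paper's renormalization-group treatment: the uniform strength distribution, ring topology, and transfer rule are all assumptions.

```python
import random

def cascade_fraction(applied, n=200, seed=0):
    """Fraction of asperities that fail when each failing asperity transfers
    its load to the single adjacent asperity on a ring (linear-array case)."""
    rng = random.Random(seed)
    strength = [rng.uniform(1.0, 2.0) for _ in range(n)]   # prescribed distribution
    stress = [applied] * n
    failed = [False] * n
    changed = True
    while changed:                       # sweep until no new failures occur
        changed = False
        for i in range(n):
            if not failed[i] and stress[i] > strength[i]:
                failed[i] = True
                stress[(i + 1) % n] += stress[i]   # transfer load to the neighbour
                changed = True
    return sum(failed) / n

low = cascade_fraction(0.9)    # below every strength: the fault stays locked
high = cascade_fraction(1.6)   # above the critical level: failure cascades
```

Below the weakest asperity strength nothing fails at all, while above the critical level a single failure hands enough extra load to its neighbour to propagate around the whole array, mimicking the bifurcation the abstract describes.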

  16. Chimera Type Behavior in Nonlocal Coupling System with Two Different Inherent Frequencies

    NASA Astrophysics Data System (ADS)

    Lin, Larry; Li, Ping-Cheng; Tseng, Hseng-Che

    2014-03-01

    From the research of Kuramoto and Strogatz, arrays of identical oscillators can display a remarkable pattern, called the chimera state, in which phase-locked oscillators coexist with drifting ones in a nonlocally coupled oscillator system. In this study, we consider further two groups of oscillators with different inherent frequencies and arrange them in a ring. When the difference of the inherent frequencies is within some specific parameter range, the oscillators of the nonlocally coupled system show two distinct chimera states with different features. When the parameter value exceeds some threshold value, the two chimera states disappear. The statistical dynamic behavior of the system can be described by Kuramoto theory.
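The setup described above (a ring of nonlocally coupled phase oscillators with two inherent-frequency groups) can be sketched with plain Euler integration. All parameter values here are illustrative assumptions and are not tuned to the paper's chimera regime; the local order parameter is the standard diagnostic for telling locked regions from drifting ones.

```python
import cmath
import math
import random

def simulate(n=64, r_frac=0.35, coupling=1.0, alpha=1.45, dw=0.05,
             dt=0.025, steps=500, seed=0):
    """Euler integration of a ring of nonlocally coupled phase oscillators;
    the two halves of the ring get inherent frequencies +dw and -dw."""
    rng = random.Random(seed)
    r = int(n * r_frac)                      # nonlocal coupling radius
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = [dw if i < n // 2 else -dw for i in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            s = sum(math.sin(theta[(i + k) % n] - theta[i] - alpha)
                    for k in range(-r, r + 1) if k != 0)
            new.append(theta[i] + dt * (omega[i] + coupling * s / (2 * r)))
        theta = new
    return theta

def local_order(theta, i, r):
    """|mean of exp(i*theta)| over a neighbourhood: ~1 means phase-locked,
    noticeably below 1 means drifting (the chimera signature)."""
    n = len(theta)
    z = sum(cmath.exp(1j * theta[(i + k) % n]) for k in range(-r, r + 1))
    return abs(z) / (2 * r + 1)

theta = simulate()
profile = [local_order(theta, i, 22) for i in range(64)]
```

In a chimera state the `profile` curve sits near 1 over the locked region and dips below it over the drifting region; locating the parameter thresholds requires sweeping `dw`, as the paper does.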

  17. Exploring students’ perceived and actual ability in solving statistical problems based on Rasch measurement tools

    NASA Astrophysics Data System (ADS)

    Azila Che Musa, Nor; Mahmud, Zamalia; Baharun, Norhayati

    2017-09-01

    One of the important skills required from any student learning statistics is knowing how to solve statistical problems correctly using appropriate statistical methods. This will enable them to arrive at a conclusion and make a significant contribution and decision for society. In this study, a group of 22 students majoring in statistics at UiTM Shah Alam were given problems relating to topics on testing of hypotheses which required them to solve the problems using the confidence interval, traditional and p-value approaches. Hypothesis testing is one of the techniques used in solving real problems and it is listed as one of the concepts most difficult for students to grasp. The objectives of this study are to explore students' perceived and actual ability in solving statistical problems and to determine which items in statistical problem solving students find difficult to grasp. Students' perceived and actual ability were measured based on instruments developed from the respective topics. Rasch measurement tools such as the Wright map and item measures for fit statistics were used to accomplish the objectives. Data were collected and analysed using the Winsteps 3.90 software, which is developed based on the Rasch measurement model. The results showed that students perceived themselves as moderately competent in solving the statistical problems using the confidence interval and p-value approaches, even though their actual performance showed otherwise. Item measures for fit statistics also showed that the maximum estimated measures were found on two problems. These measures indicate that none of the students attempted these problems correctly, for reasons which include their lack of understanding of confidence intervals and probability values.

  18. Space and power efficient hybrid counters array

    DOEpatents

    Gara, Alan G [Mount Kisco, NY; Salapura, Valentina [Chappaqua, NY

    2009-05-12

    A hybrid counter array device for counting events. The hybrid counter array includes a first counter portion comprising N counter devices, each counter device receiving signals representing occurrences of events from an event source and providing a first count value corresponding to the lower-order bits of the hybrid counter array. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location storing a second count value representing the higher-order bits of the hybrid counter array. A control device monitors each of the N counter devices of the first counter portion and initiates updating of the corresponding second count value stored at the corresponding addressable memory location in the second counter portion. Thus, the combination of the first and second count values provides an instantaneous measure of the number of events received.
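The counting scheme described in this record can be sketched in software. A minimal sketch with assumptions: the 4-bit low-order width is an arbitrary choice, and the control device here updates the memory word synchronously on overflow, whereas the patented hardware monitors the counters and initiates updates itself.

```python
class HybridCounterArray:
    """Sketch of the claimed device: N narrow low-order counters backed by a
    memory array holding the high-order bits (the 4-bit width is an assumption)."""

    def __init__(self, n, low_bits=4):
        self.low_bits = low_bits
        self.low = [0] * n      # first counter portion: fast counter devices
        self.high = [0] * n     # second counter portion: memory array

    def count_event(self, i):
        """Receive one event signal for counter i."""
        self.low[i] += 1
        if self.low[i] == 1 << self.low_bits:
            # control device: low counter overflowed, update the memory word
            self.low[i] = 0
            self.high[i] += 1

    def value(self, i):
        """Instantaneous count: combination of high- and low-order bits."""
        return (self.high[i] << self.low_bits) | self.low[i]

counters = HybridCounterArray(n=8)
for _ in range(37):
    counters.count_event(3)
```

The appeal of the split is that only the narrow low-order counters must keep up with the event rate, while the wide high-order state lives in dense, low-power memory.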

  19. Space and power efficient hybrid counters array

    DOEpatents

    Gara, Alan G.; Salapura, Valentina

    2010-03-30

    A hybrid counter array device for counting events. The hybrid counter array includes a first counter portion comprising N counter devices, each counter device receiving signals representing occurrences of events from an event source and providing a first count value corresponding to the lower-order bits of the hybrid counter array. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location storing a second count value representing the higher-order bits of the hybrid counter array. A control device monitors each of the N counter devices of the first counter portion and initiates updating of the corresponding second count value stored at the corresponding addressable memory location in the second counter portion. Thus, the combination of the first and second count values provides an instantaneous measure of the number of events received.

  20. Discovery of new rheumatoid arthritis biomarkers using the surface-enhanced laser desorption/ionization time-of-flight mass spectrometry ProteinChip approach.

    PubMed

    de Seny, Dominique; Fillet, Marianne; Meuwis, Marie-Alice; Geurts, Pierre; Lutteri, Laurence; Ribbens, Clio; Bours, Vincent; Wehenkel, Louis; Piette, Jacques; Malaise, Michel; Merville, Marie-Paule

    2005-12-01

    To identify serum protein biomarkers specific for rheumatoid arthritis (RA), using surface-enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI-TOF-MS) technology. A total of 103 serum samples from patients and healthy controls were analyzed. Thirty-four of the patients had a diagnosis of RA, based on the American College of Rheumatology criteria. The inflammation control group comprised 20 patients with psoriatic arthritis (PsA), 9 with asthma, and 10 with Crohn's disease. The noninflammation control group comprised 14 patients with knee osteoarthritis and 16 healthy control subjects. Serum protein profiles were obtained by SELDI-TOF-MS and compared in order to identify new biomarkers specific for RA. Data were analyzed by a machine learning algorithm called decision tree boosting, according to different preprocessing steps. The most discriminative mass/charge (m/z) values serving as potential biomarkers for RA were identified on arrays for both patients with RA versus controls and patients with RA versus patients with PsA. From among several candidates, the following peaks were highlighted: m/z values of 2,924 (RA versus controls on H4 arrays), 10,832 and 11,632 (RA versus controls on CM10 arrays), 4,824 (RA versus PsA on H4 arrays), and 4,666 (RA versus PsA on CM10 arrays). Positive results of proteomic analysis were associated with positive results of the anti-cyclic citrullinated peptide test. Our observations suggested that the 10,832 peak could represent myeloid-related protein 8. SELDI-TOF-MS technology allows rapid analysis of many serum samples, and use of decision tree boosting analysis as the main statistical method allowed us to propose a pattern of protein peaks specific for RA.

  1. Statistical Mechanics of Node-perturbation Learning with Noisy Baseline

    NASA Astrophysics Data System (ADS)

    Hara, Kazuyuki; Katahira, Kentaro; Okada, Masato

    2017-02-01

    Node-perturbation learning is a type of statistical gradient descent algorithm that can be applied to problems where the objective function is not explicitly formulated, including reinforcement learning. It estimates the gradient of an objective function by using the change in the objective function in response to a perturbation. The value of the objective function for an unperturbed output is called a baseline. Cho et al. proposed node-perturbation learning with a noisy baseline. In this paper, we report on building the statistical mechanics of Cho's model and on deriving coupled differential equations of order parameters that depict the learning dynamics. We also show how to derive the generalization error by solving the differential equations of order parameters. On the basis of the results, we show that Cho's results also apply in general cases, and we characterize the general performance of Cho's model.
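The core gradient-estimation idea (perturb, observe the change in the objective relative to a baseline, correlate the change with the perturbation) can be sketched on a toy objective. Assumptions: the baseline here is noise-free, whereas the paper's focus is precisely the noisy-baseline case, and the quadratic objective and all parameter values are illustrative.

```python
import random

def node_perturbation_grad(f, w, sigma=1e-3, samples=4000, seed=0):
    """Estimate grad f(w) without explicit derivatives: perturb w, measure
    the change in the objective against the baseline f(w), and correlate
    that change with the perturbation direction."""
    rng = random.Random(seed)
    baseline = f(w)                       # objective for the unperturbed output
    g = [0.0] * len(w)
    for _ in range(samples):
        xi = [rng.gauss(0.0, 1.0) for _ in w]
        change = f([wi + sigma * x for wi, x in zip(w, xi)]) - baseline
        for i, x in enumerate(xi):
            g[i] += (change / sigma) * x  # correlate change with perturbation
    return [gi / samples for gi in g]

f = lambda w: sum(wi * wi for wi in w)    # toy objective; true gradient is 2w
w = [1.0, -2.0, 0.5]
g = node_perturbation_grad(f, w)
```

Averaged over many perturbations the estimate converges to the true gradient; replacing `baseline` with a noisy measurement is exactly the modification whose learning dynamics the paper analyses.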

  2. Sensory prediction on a whiskered robot: a tactile analogy to “optical flow”

    PubMed Central

    Schroeder, Christopher L.; Hartmann, Mitra J. Z.

    2012-01-01

    When an animal moves an array of sensors (e.g., the hand, the eye) through the environment, spatial and temporal gradients of sensory data are related by the velocity of the moving sensory array. In vision, the relationship between spatial and temporal brightness gradients is quantified in the “optical flow” equation. In the present work, we suggest an analog to optical flow for the rodent vibrissal (whisker) array, in which the perceptual intensity that “flows” over the array is bending moment. Changes in bending moment are directly related to radial object distance, defined as the distance between the base of a whisker and the point of contact with the object. Using both simulations and a 1×5 array (row) of artificial whiskers, we demonstrate that local object curvature can be estimated based on differences in radial distance across the array. We then develop two algorithms, both based on tactile flow, to predict the future contact points that will be obtained as the whisker array translates along the object. The translation of the robotic whisker array represents the rat's head velocity. The first algorithm uses a calculation of the local object slope, while the second uses a calculation of the local object curvature. Both algorithms successfully predict future contact points for simple surfaces. The algorithm based on curvature was found to more accurately predict future contact points as surfaces became more irregular. We quantify the inter-related effects of whisker spacing and the object's spatial frequencies, and examine the issues that arise in the presence of real-world noise, friction, and slip. PMID:23097641

  3. Sensory prediction on a whiskered robot: a tactile analogy to "optical flow".

    PubMed

    Schroeder, Christopher L; Hartmann, Mitra J Z

    2012-01-01

    When an animal moves an array of sensors (e.g., the hand, the eye) through the environment, spatial and temporal gradients of sensory data are related by the velocity of the moving sensory array. In vision, the relationship between spatial and temporal brightness gradients is quantified in the "optical flow" equation. In the present work, we suggest an analog to optical flow for the rodent vibrissal (whisker) array, in which the perceptual intensity that "flows" over the array is bending moment. Changes in bending moment are directly related to radial object distance, defined as the distance between the base of a whisker and the point of contact with the object. Using both simulations and a 1×5 array (row) of artificial whiskers, we demonstrate that local object curvature can be estimated based on differences in radial distance across the array. We then develop two algorithms, both based on tactile flow, to predict the future contact points that will be obtained as the whisker array translates along the object. The translation of the robotic whisker array represents the rat's head velocity. The first algorithm uses a calculation of the local object slope, while the second uses a calculation of the local object curvature. Both algorithms successfully predict future contact points for simple surfaces. The algorithm based on curvature was found to more accurately predict future contact points as surfaces became more irregular. We quantify the inter-related effects of whisker spacing and the object's spatial frequencies, and examine the issues that arise in the presence of real-world noise, friction, and slip.
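The curvature-based prediction described in this record can be sketched with finite differences along the whisker row. A minimal sketch under stated assumptions: the toy parabolic `surface`, the unit whisker spacing, and the function names are illustrative, not the authors' implementation.

```python
def predict_next(d, spacing=1.0):
    """Predict the next radial contact distance from the last three contacts:
    a backward-difference slope plus a curvature correction that carries the
    slope forward. Exact for parabolic object profiles."""
    slope = (d[-1] - d[-2]) / spacing                     # local slope estimate
    curv = (d[-1] - 2 * d[-2] + d[-3]) / spacing ** 2     # local curvature estimate
    return d[-1] + (slope + curv * spacing) * spacing

# Toy object profile sampled by five whiskers at unit spacing.
surface = lambda x: 0.1 * x * x + 0.5 * x + 2.0
d = [surface(x) for x in range(5)]
pred = predict_next(d[:4])    # predict the fifth contact from the first four
```

Dropping the `curv` term gives the slope-only variant of the prediction, which is exact only for planar surfaces; the curvature term is what lets the prediction track curved and irregular profiles, matching the abstract's comparison of the two algorithms.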

  4. Stable statistical representations facilitate visual search.

    PubMed

    Corbett, Jennifer E; Melcher, David

    2014-10-01

    Observers represent the average properties of object ensembles even when they cannot identify individual elements. To investigate the functional role of ensemble statistics, we examined how modulating statistical stability affects visual search. We varied the mean and/or individual sizes of an array of Gabor patches while observers searched for a tilted target. In "stable" blocks, the mean and/or local sizes of the Gabors were constant over successive displays, whereas in "unstable" baseline blocks they changed from trial to trial. Although there was no relationship between the context and the spatial location of the target, observers found targets faster (as indexed by faster correct responses and fewer saccades) as the global mean size became stable over several displays. Building statistical stability also facilitated scanning the scene, as measured by larger saccadic amplitudes, faster saccadic reaction times, and shorter fixation durations. These findings suggest a central role for peripheral visual information, creating context to free resources for detailed processing of salient targets and maintaining the illusion of visual stability.

  5. Sweetwater, Texas Large N Experiment

    NASA Astrophysics Data System (ADS)

    Sumy, D. F.; Woodward, R.; Barklage, M.; Hollis, D.; Spriggs, N.; Gridley, J. M.; Parker, T.

    2015-12-01

    From 7 March to 30 April 2014, NodalSeismic, Nanometrics, and IRIS PASSCAL conducted a collaborative, spatially dense seismic survey with several thousand nodal short-period geophones complemented by a backbone array of broadband sensors near Sweetwater, Texas. This pilot project demonstrated the efficacy of industry and academic partnerships, and leveraged a larger, commercial 3D survey to collect passive-source seismic recordings to image the subsurface. This innovative deployment of a large-N mixed-mode array allowed industry to explore array geometries and investigate the value of broadband recordings, while affording academics a dense wavefield-imaging capability and an operational model for high-volume instrument deployment. The broadband array consisted of 25 continuously recording stations from IRIS PASSCAL and Nanometrics, with an array design that maximized recording of horizontally traveling seismic energy for surface-wave analysis over the primary target area, with sufficient offset for imaging objectives at depth. In addition, 2639 FairfieldNodal Zland nodes from NodalSeismic were deployed in three sub-arrays: the outlier, backbone, and active-source arrays. The backbone array consisted of 292 nodes that covered the entire survey area, while the outlier array consisted of 25 continuously recording nodes distributed at a ~3 km distance from the survey perimeter. Both the backbone and outlier arrays provide valuable constraints for the passive-source portion of the analysis. This project serves as a learning platform to develop best practices in the support of large-N arrays with joint industry and academic expertise. Here we investigate lessons learned from a facility perspective, and present examples of data from the various sensors and array geometries. We explore first-order results from local and teleseismic earthquakes, and show visualizations of the data across the array. Data are archived at the IRIS DMC under station codes XB and 1B.

  6. Quantitation of heteroplasmy of mtDNA sequence variants identified in a population of AD patients and controls by array-based resequencing.

    PubMed

    Coon, Keith D; Valla, Jon; Szelinger, Szabolics; Schneider, Lonnie E; Niedzielko, Tracy L; Brown, Kevin M; Pearson, John V; Halperin, Rebecca; Dunckley, Travis; Papassotiropoulos, Andreas; Caselli, Richard J; Reiman, Eric M; Stephan, Dietrich A

    2006-08-01

The role of mitochondrial dysfunction in the pathogenesis of Alzheimer's disease (AD) has been well documented. Though evidence for the role of mitochondria in AD seems incontrovertible, the impact of mitochondrial DNA (mtDNA) mutations on AD etiology remains controversial. Although mutations in mitochondrially encoded genes have repeatedly been implicated in the pathogenesis of AD, many of these studies have been plagued by lack of replication as well as potential contamination by nuclear-encoded mitochondrial pseudogenes. To assess the role of mtDNA mutations in the pathogenesis of AD, while avoiding the pitfalls of nuclear-encoded mitochondrial pseudogenes encountered in previous investigations and showcasing the benefits of a novel resequencing technology, we sequenced the entire coding region (15,452 bp) of mtDNA from 19 extremely well-characterized AD patients and 18 age-matched, unaffected controls utilizing a new, reliable, high-throughput array-based resequencing technique, the Human MitoChip. High-throughput, array-based DNA resequencing of the entire mtDNA coding region from platelets of 37 subjects revealed the presence of 208 loci displaying a total of 917 sequence variants. There were no statistically significant differences in overall mutational burden between cases and controls; however, 265 independent sites of statistically significant change between cases and controls were identified. Changed sites were found in genes associated with complexes I (30.2%), III (3.0%), IV (33.2%), and V (9.1%) as well as tRNA (10.6%) and rRNA (14.0%). Despite their statistical significance, the subtle nature of the observed changes makes it difficult to determine whether they represent true functional variants involved in AD etiology or merely naturally occurring dissimilarity.
Regardless, this study demonstrates the tremendous value of this novel mtDNA resequencing platform, which avoids the pitfalls of erroneously amplifying nuclear-encoded mtDNA pseudogenes, and our proposed analysis paradigm, which utilizes the availability of raw signal intensity values for each of the four potential alleles to facilitate quantitative estimates of mtDNA heteroplasmy. This information provides a potential new target for burgeoning diagnostics and therapeutics that could truly assist those suffering from this devastating disorder.

  7. Linear encoding device

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    1993-01-01

A Linear Motion Encoding device for measuring the linear motion of a moving object is disclosed, in which a light source is mounted on the moving object and a position sensitive detector, such as an array photodetector, is mounted on a nearby stationary object. The light source emits a light beam directed towards the array photodetector such that a light spot is created on the array. An analog-to-digital converter, connected to the array photodetector, is used for reading the position of the spot on the array photodetector. A microprocessor and memory are connected to the analog-to-digital converter to hold and manipulate the data it provides on the position of the spot, and to compute the linear displacement of the moving object from that data.
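The displacement computation the patent describes can be sketched numerically: locate the spot on the array, then difference two readings. A minimal sketch, assuming the ADC delivers per-pixel intensities and the spot spans several pixels; the function and parameter names are illustrative, not from the patent:

```python
import numpy as np

def spot_position(intensities, pixel_pitch_mm):
    """Estimate the light-spot location on the photodetector array as the
    intensity-weighted centroid of the per-pixel ADC readings."""
    pixels = np.arange(len(intensities))
    centroid = np.sum(pixels * intensities) / np.sum(intensities)
    return centroid * pixel_pitch_mm  # position along the array, in mm

def linear_displacement(reading_a, reading_b, pixel_pitch_mm):
    """Displacement of the moving object between two ADC readings."""
    return spot_position(reading_b, pixel_pitch_mm) - spot_position(reading_a, pixel_pitch_mm)

# A spot centered on pixel 2 that moves to pixel 5, with 0.1 mm pixel pitch.
a = np.array([0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0])
```

Centroiding gives sub-pixel resolution, which is why a coarse array can still resolve fine motion.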

  8. Acoustic imaging system

    DOEpatents

    Smith, Richard W.

    1979-01-01

    An acoustic imaging system for displaying an object viewed by a moving array of transducers as the array is pivoted about a fixed point within a given plane. A plurality of transducers are fixedly positioned and equally spaced within a laterally extending array and operatively directed to transmit and receive acoustic signals along substantially parallel transmission paths. The transducers are sequentially activated along the array to transmit and receive acoustic signals according to a preestablished sequence. Means are provided for generating output voltages for each reception of an acoustic signal, corresponding to the coordinate position of the object viewed as the array is pivoted. Receptions from each of the transducers are presented on the same display at coordinates corresponding to the actual position of the object viewed to form a plane view of the object scanned.

  9. Level set method with automatic selective local statistics for brain tumor segmentation in MR images.

    PubMed

    Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak

    2013-01-01

The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumors in MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images are calculated, and the tumor objects are identified among them using these statistics. In level set methods, the calculation of the parameters is a challenging task; here, the different parameters are computed automatically for different types of images. The basic thresholding value is updated and adjusted automatically for each MR image and is used to calculate the different parameters of the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlighted the efficiency and robustness of the method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
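The role of a signed pressure function can be illustrated with one widely used form (a sketch of a Zhang-style region-based SPF; the abstract does not give the authors' exact definition, so this is an assumption for illustration):

```python
import numpy as np

def signed_pressure_function(image, inside_mask):
    """A common region-based SPF: ranges over [-1, 1] and changes sign at
    the average of the mean intensities inside and outside the contour,
    so it can drive the contour toward weak or blurred boundaries."""
    c1 = image[inside_mask].mean()   # mean intensity inside the contour
    c2 = image[~inside_mask].mean()  # mean intensity outside
    diff = image - (c1 + c2) / 2.0
    return diff / np.max(np.abs(diff))

# Toy image: a bright "tumor" block on a dark background.
img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0
mask = np.zeros((8, 8), dtype=bool); mask[2:6, 2:6] = True
spf = signed_pressure_function(img, mask)
```

Because the SPF is positive on one side of the intensity midpoint and negative on the other, the contour is pushed inward or outward even where the gradient is too weak for an edge-based stopping term.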

  10. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

    Binocular vision imaging system, which has a small field of view, cannot reconstruct the 3-D shape of the dynamic object. We found a linear array CCD binocular vision imaging system, which uses different calibration and reconstruct methods. On the basis of the binocular vision imaging system, the linear array CCD binocular vision imaging systems which has a wider field of view can reconstruct the 3-D morphology of objects in continuous motion, and the results are accurate. This research mainly introduces the composition and principle of linear array CCD binocular vision imaging system, including the calibration, capture, matching and reconstruction of the imaging system. The system consists of two linear array cameras which were placed in special arrangements and a horizontal moving platform that can pick up objects. The internal and external parameters of the camera are obtained by calibrating in advance. And then using the camera to capture images of moving objects, the results are then matched and 3-D reconstructed. The linear array CCD binocular vision imaging systems can accurately measure the 3-D appearance of moving objects, this essay is of great significance to measure the 3-D morphology of moving objects.

  11. Performance bounds for matched field processing in subsurface object detection applications

    NASA Astrophysics Data System (ADS)

    Sahin, Adnan; Miller, Eric L.

    1998-09-01

    In recent years there has been considerable interest in the use of ground penetrating radar (GPR) for the non-invasive detection and localization of buried objects. In a previous work, we have considered the use of high resolution array processing methods for solving these problems for measurement geometries in which an array of electromagnetic receivers observes the fields scattered by the subsurface targets in response to a plane wave illumination. Our approach uses the MUSIC algorithm in a matched field processing (MFP) scheme to determine both the range and the bearing of the objects. In this paper we derive the Cramer-Rao bounds (CRB) for this MUSIC-based approach analytically. Analysis of the theoretical CRB has shown that there exists an optimum inter-element spacing of array elements for which the CRB is minimum. Furthermore, the optimum inter-element spacing minimizing CRB is smaller than the conventional half wavelength criterion. The theoretical bounds are then verified for two estimators using Monte-Carlo simulations. The first estimator is the MUSIC-based MFP and the second one is the maximum likelihood based MFP. The two approaches differ in the cost functions they optimize. We observe that Monte-Carlo simulated error variances always lie above the values established by CRB. Finally, we evaluate the performance of our MUSIC-based algorithm in the presence of model mismatches. Since the detection algorithm strongly depends on the model used, we have tested the performance of the algorithm when the object radius used in the model is different from the true radius. This analysis reveals that the algorithm is still capable of localizing the objects with a bias depending on the degree of mismatch.
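The MUSIC stage of such a matched field processor can be sketched for the simplest case, a uniform linear array and a narrowband signal. This is a minimal textbook illustration, not the authors' GPR-specific implementation; the element spacing is exposed as a parameter because the abstract's key finding is that the CRB-optimal spacing falls below the conventional half-wavelength choice:

```python
import numpy as np

def music_spectrum(R, n_sources, d_over_lambda, angles_deg):
    """Narrowband MUSIC pseudospectrum for a uniform linear array.
    R: sample covariance of the array snapshots; d_over_lambda: element
    spacing in wavelengths. Peaks indicate source bearings."""
    n = R.shape[0]
    # Noise subspace: eigenvectors belonging to the smallest eigenvalues
    # (np.linalg.eigh returns eigenvalues in ascending order).
    _, v = np.linalg.eigh(R)
    En = v[:, : n - n_sources]
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d_over_lambda * np.arange(n) * np.sin(theta))
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)

# One source at 20 degrees, 8 elements at half-wavelength spacing.
n = 8
a0 = np.exp(2j * np.pi * 0.5 * np.arange(n) * np.sin(np.deg2rad(20.0)))
R = np.outer(a0, a0.conj()) + 0.01 * np.eye(n)  # source + noise floor
angles = np.arange(-90, 91)
spectrum = music_spectrum(R, 1, 0.5, angles)
```

In the paper's MFP setting the steering vector would be replaced by a forward model of the scattered field, so the scan runs over range and bearing rather than bearing alone.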

  12. Laser beam projection with adaptive array of fiber collimators. II. Analysis of atmospheric compensation efficiency.

    PubMed

    Lachinova, Svetlana L; Vorontsov, Mikhail A

    2008-08-01

    We analyze the potential efficiency of laser beam projection onto a remote object in atmosphere with incoherent and coherent phase-locked conformal-beam director systems composed of an adaptive array of fiber collimators. Adaptive optics compensation of turbulence-induced phase aberrations in these systems is performed at each fiber collimator. Our analysis is based on a derived expression for the atmospheric-averaged value of the mean square residual phase error as well as direct numerical simulations. Operation of both conformal-beam projection systems is compared for various adaptive system configurations characterized by the number of fiber collimators, the adaptive compensation resolution, and atmospheric turbulence conditions.

  13. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  14. Use of High-resolution WRF Simulations to Forecast Lightning Threat

    NASA Technical Reports Server (NTRS)

    McCaul, William E.; LaCasse, K.; Goodman, S. J.

    2006-01-01

Recent observational studies have confirmed the existence of a robust statistical relationship between lightning flash rates and the amount of large precipitating ice hydrometeors in storms. This relationship is exploited, in conjunction with the capabilities of recent forecast models such as WRF, to forecast the threat of lightning from convective storms using the output fields from the model forecasts. The simulated vertical flux of graupel at -15C is used in this study as a proxy for charge separation processes and their associated lightning risk. Six-hour simulations are conducted for a number of case studies for which three-dimensional lightning validation data from the North Alabama Lightning Mapping Array are available. Experiments indicate that initialization of the WRF model on a 2 km grid using Eta boundary conditions, Doppler radar radial velocity and reflectivity fields, and METAR and ACARS data yields the most realistic simulations. An array of subjective and objective statistical metrics is employed to document the utility of the WRF forecasts. The simulation results are also compared to other, more traditional means of forecasting convective storms, such as those based on inspection of the convective available potential energy field.
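The graupel-flux proxy described above reduces to a one-line field computation. A sketch with illustrative variable names (the actual WRF post-processing also involves interpolating model fields to the -15 C level):

```python
import numpy as np

def lightning_threat_proxy(w, q_graupel, rho_air):
    """Vertical graupel mass flux (kg m^-2 s^-1) on the -15 C surface,
    used as a proxy for charge-separation strength. w is vertical
    velocity (m/s), q_graupel the graupel mixing ratio (kg/kg),
    rho_air the air density (kg/m^3). Only updrafts contribute."""
    return rho_air * q_graupel * np.maximum(w, 0.0)
```

The flux is then calibrated against observed flash rates to turn the proxy into a quantitative threat field.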

  15. Landsat real-time processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, E.L.

A novel method for performing real-time acquisition and processing Landsat/EROS data covers all aspects including radiometric and geometric corrections of multispectral scanner or return-beam vidicon inputs, image enhancement, statistical analysis, feature extraction, and classification. Radiometric transformations include bias/gain adjustment, noise suppression, calibration, scan angle compensation, and illumination compensation, including topography and atmospheric effects. Correction or compensation for geometric distortion includes sensor-related distortions, such as centering, skew, size, scan nonlinearity, radial symmetry, and tangential symmetry. Also included are object image-related distortions such as aspect angle (altitude), scale distortion (altitude), terrain relief, and earth curvature. Ephemeral corrections are also applied to compensate for satellite forward movement, earth rotation, altitude variations, satellite vibration, and mirror scan velocity. Image enhancement includes high-pass, low-pass, and Laplacian mask filtering and data restoration for intermittent losses. Resource classification is provided by statistical analysis including histograms, correlational analysis, matrix manipulations, and determination of spectral responses. Feature extraction includes spatial frequency analysis, which is used in parallel discriminant functions in each array processor for rapid determination. The technique uses integrated parallel array processors that decimate the tasks concurrently under supervision of a control processor. The operator-machine interface is optimized for programming ease and graphics image windowing.
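The first links of the radiometric chain listed above (bias/gain calibration followed by illumination compensation) can be sketched as follows; the coefficients and names here are illustrative, not actual Landsat calibration values:

```python
import numpy as np

def radiometric_correction(dn, gain, bias, cos_sun_zenith):
    """Convert raw digital numbers to at-sensor radiance with a linear
    bias/gain calibration, then compensate for solar illumination
    geometry. Real Landsat processing adds scan-angle, topographic,
    and atmospheric terms on top of this."""
    radiance = gain * np.asarray(dn, dtype=float) + bias
    return radiance / cos_sun_zenith  # illumination compensation

out = radiometric_correction([100, 200], gain=0.5, bias=1.0, cos_sun_zenith=0.5)
```

Each band gets its own gain/bias pair, which is why the correction parallelizes cleanly across the array processors described in the record.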

  16. Detection of the Odor Signature of Ovarian Cancer using DNA-Decorated Carbon Nanotube Field Effect Transistor Arrays

    NASA Astrophysics Data System (ADS)

    Kehayias, Christopher; Kybert, Nicholas; Yodh, Jeremy; Johnson, A. T. Charlie

Carbon nanotubes are low-dimensional materials that exhibit remarkable chemical and bio-sensing properties and have excellent compatibility with electronic systems. Here, we present a study that uses an electronic olfaction system based on a large array of DNA-carbon nanotube field effect transistor vapor sensors to analyze the VOCs of blood plasma samples collected from patients with malignant ovarian cancer, patients with benign ovarian lesions, and age-matched healthy subjects. Initial investigations involved coating each CNT sensor with single-stranded DNA of a particular base sequence. Ten distinct DNA oligomers were used to functionalize the carbon nanotube field effect transistors, providing a 10-dimensional sensor array output response. Upon performing a statistical analysis of the 10-dimensional sensor array responses, we showed that blood samples from patients with malignant cancer can be reliably differentiated from those of healthy control subjects with a p-value of 3 × 10⁻⁵. The results provide preliminary evidence that the blood of ovarian cancer patients contains a discernible volatile chemical signature that can be detected using DNA-CNT nanoelectronic vapor sensors, a first step towards a minimally invasive electronic diagnostic technology for ovarian cancer.

  17. Retinal Anatomy and Electrode Array Position in Retinitis Pigmentosa Patients after Argus II Implantation: an International Study.

    PubMed

    Gregori, Ninel Z; Callaway, Natalia F; Hoeppner, Catherine; Yuan, Alex; Rachitskaya, Aleksandra; Feuer, William; Ameri, Hossein; Arevalo, J Fernando; Augustin, Albert J; Birch, David G; Dagnelie, Gislin; Grisanti, Salvatore; Davis, Janet L; Hahn, Paul; Handa, James T; Ho, Allen C; Huang, Suber S; Humayun, Mark S; Iezzi, Raymond; Jayasundera, K Thiran; Kokame, Gregg T; Lam, Byron L; Lim, Jennifer I; Mandava, Naresh; Montezuma, Sandra R; Olmos de Koo, Lisa; Szurman, Peter; Vajzovic, Lejla; Wiedemann, Peter; Weiland, James; Yan, Jiong; Zacks, David N

    2018-06-22

PURPOSE: To assess the retinal anatomy and array position in Argus II Retinal Prosthesis recipients. DESIGN: Prospective, non-comparative cohort study. METHODS: Setting: international multicenter study. Patients: Argus II recipients enrolled in the Post-Market Surveillance Studies. Spectral-domain optical coherence tomography images collected for the Surveillance Studies (NCT01860092 and NCT01490827) were reviewed. Baseline and postoperative macular thickness, electrode-retina distance (gap), optic disc-array overlap, and preretinal membrane presence were recorded at 1, 3, 6, and 12 months. Main outcome measures: axial retinal thickness and axial gap along the array's long axis (a line between the tack and handle), and maximal retinal thickness and maximal gap along a B-scan near the tack, midline, and handle. RESULTS: Thirty-three patients from 16 surgical sites in the United States and Germany were included. Mean axial retinal thickness increased from month 1 through month 12 at each location, but reached statistical significance only at the array midline (p=0.007). The rate of maximal thickness increase was highest near the array midline (slope=6.02, p=0.004), compared to the tack (slope=3.60, p<0.001) or the handle (slope=1.93, p=0.368). The mean axial and maximal gaps decreased over the study period, and the decrease in mean maximal gap size was significant at the midline (p=0.032). Optic disc-array overlap was seen in a minority of patients. Preretinal membranes were common before and after implantation. CONCLUSIONS: Progressive macular thickening under the array was common and corresponded to a decreased electrode-retina gap over time. By month 12, the array was completely apposed to the macula in approximately half of the eyes. Copyright © 2018. Published by Elsevier Inc.

  18. Creating Multi Objective Value Functions from Non-Independent Values

    DTIC Science & Technology

    2009-03-01

1998) or oil companies trying to capitalize on the increasing flood of available data and statistics (Coopersmith, Dean, McVean, & Storaune, 2001...Clemen, R. T., & Reilly, T. (2001). Making Hard Decisions. Pacific Grove: Duxbury. Coopersmith, E., Dean, G., McVean, J., & Storaune, E. (2001

  19. Improving numeracy through values affirmation enhances decision and STEM outcomes

    PubMed Central

    Peters, Ellen; Tompkins, Mary Kate; Schley, Dan; Meilleur, Louise; Sinayev, Aleksander; Tusler, Martin; Wagner, Laura; Crocker, Jennifer

    2017-01-01

    Greater numeracy has been correlated with better health and financial outcomes in past studies, but causal effects in adults are unknown. In a 9-week longitudinal study, undergraduate students, all taking a psychology statistics course, were randomly assigned to a control condition or a values-affirmation manipulation intended to improve numeracy. By the final week in the course, the numeracy intervention (statistics-course enrollment combined with values affirmation) enhanced objective numeracy, subjective numeracy, and two decision-related outcomes (financial literacy and health-related behaviors). It also showed positive indirect-only effects on financial outcomes and a series of STEM-related outcomes (course grades, intentions to take more math-intensive courses, later math-intensive courses taken based on academic transcripts). All decision and STEM-related outcome effects were mediated by the changes in objective and/or subjective numeracy and demonstrated similar and robust enhancements. Improvements to abstract numeric reasoning can improve everyday outcomes. PMID:28704410

  20. Imaging System With Confocally Self-Detecting Laser.

    DOEpatents

    Webb, Robert H.; Rogomentich, Fran J.

    1996-10-08

    The invention relates to a confocal laser imaging system and method. The system includes a laser source, a beam splitter, focusing elements, and a photosensitive detector. The laser source projects a laser beam along a first optical path at an object to be imaged, and modulates the intensity of the projected laser beam in response to light reflected from the object. A beam splitter directs a portion of the projected laser beam onto a photodetector. The photodetector monitors the intensity of laser output. The laser source can be an electrically scannable array, with a lens or objective assembly for focusing light generated by the array onto the object of interest. As the array is energized, its laser beams scan over the object, and light reflected at each point is returned by the lens to the element of the array from which it originated. A single photosensitive detector element can generate an intensity-representative signal for all lasers of the array. The intensity-representative signal from the photosensitive detector can be processed to provide an image of the object of interest.

  1. Accelerated iterative beam angle selection in IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bangert, Mark, E-mail: m.bangert@dkfz.de; Unkelbach, Jan

    2016-03-15

Purpose: Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n − 1) beams. The best beam orientation is identified in a time consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. Methods: We suggest to select candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Results: Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude.
With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. Conclusions: We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.
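The iterative selection loop evaluated in these records can be sketched abstractly. Here `surrogate_score` is a placeholder standing in for any of the ranking rules (converged FMO objective, objective after five gradient iterations, or projected gradient), not an actual FMO solver:

```python
def greedy_beam_selection(candidates, n_beams, surrogate_score):
    """Iterative beam angle selection: grow the ensemble one beam at a
    time, ranking each candidate by a cheap surrogate for the
    fluence-map-optimization objective. surrogate_score(ensemble) is
    assumed to return the value to be minimized."""
    ensemble = []
    remaining = list(candidates)
    for _ in range(n_beams):
        best = min(remaining, key=lambda b: surrogate_score(ensemble + [b]))
        ensemble.append(best)
        remaining.remove(best)
    return ensemble

# Toy surrogate for demonstration only: prefer ensembles whose angles
# sum close to 100 degrees (no physical meaning).
chosen = greedy_beam_selection([10, 50, 90, 40], 2,
                               lambda ens: abs(sum(ens) - 100))
```

The papers' point is that the loop structure is unchanged; only the cost of each `surrogate_score` call drops by one to two orders of magnitude.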

  2. Accelerated iterative beam angle selection in IMRT.

    PubMed

    Bangert, Mark; Unkelbach, Jan

    2016-03-01

    Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one-by-one from a discrete set of candidates to an existing ensemble of (n - 1) beams. The best beam orientation is identified in a time consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement to the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value. We suggest to select candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest as noncoplanar setups may require additional attention of the treatment personnel for every couch rotation. Iterative BAS relying on objective function surrogates yields similar results compared to naïve BAS with regard to the objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. 
With regard to the clinical delivery of noncoplanar IMRT treatments, we could show that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles. We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.

  3. Extracellular matrix proteins as temporary coating for thin-film neural implants

    NASA Astrophysics Data System (ADS)

    Ceyssens, Frederik; Deprez, Marjolijn; Turner, Neill; Kil, Dries; van Kuyck, Kris; Welkenhuysen, Marleen; Nuttin, Bart; Badylak, Stephen; Puers, Robert

    2017-02-01

    Objective. This study investigates the suitability of a thin sheet of extracellular matrix (ECM) proteins as a resorbable coating for temporarily reinforcing fragile or ultra-low stiffness thin-film neural implants to be placed on the brain, i.e. microelectrocorticographic (µECOG) implants. Approach. Thin-film polyimide-based electrode arrays were fabricated using lithographic methods. ECM was harvested from porcine tissue by a decellularization method and coated around the arrays. Mechanical tests and an in vivo experiment on rats were conducted, followed by a histological tissue study combined with a statistical equivalence test (confidence interval approach, 0.05 significance level) to compare the test group with an uncoated control group. Main results. After 3 months, no significant damage was found based on GFAP and NeuN staining of the relevant brain areas. Significance. The study shows that ECM sheets are a suitable temporary coating for thin µECOG neural implants.

  4. A critique of Rasch residual fit statistics.

    PubMed

    Karabatsos, G

    2000-01-01

In test analysis involving the Rasch model, a large degree of importance is placed on the "objective" measurement of individual abilities and item difficulties. The degree to which the objectivity properties are attained, of course, depends on the degree to which the data fit the Rasch model. It is therefore important to utilize fit statistics that accurately and reliably detect the person-item response inconsistencies that threaten the measurement objectivity of persons and items. Given this argument, it is somewhat surprising that far more emphasis is placed on the objective measurement of persons and items than on the measurement quality of Rasch fit statistics. This paper provides a critical analysis of the residual fit statistics of the Rasch model, arguably the most often used fit statistics, in an effort to illustrate that the task of Rasch fit analysis is not as simple and straightforward as it appears to be. The faulty statistical properties of the residual fit statistics allow neither a convenient nor a straightforward approach to Rasch fit analysis. For instance, given a residual fit statistic, the use of a single minimum critical value for misfit diagnosis across different testing situations, where the situations vary in sample and test properties, leads to both the overdetection and underdetection of misfit. To improve this situation, it is argued that psychometricians need to implement residual-free Rasch fit statistics that are based on the number of Guttman response errors, or use indices that are statistically optimal in detecting measurement disturbances.
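For concreteness, the residual fit statistics under critique can be written out for the dichotomous Rasch model. A sketch of the unweighted (outfit) mean-square, assuming person abilities `theta` and item difficulties `beta` are already estimated:

```python
import numpy as np

def rasch_outfit(responses, theta, beta):
    """Unweighted (outfit) mean-square residual statistic per item under
    the dichotomous Rasch model: the average squared standardized
    residual across persons. Values near 1 suggest fit; the paper's
    point is that a single critical cutoff does not generalize across
    sample and test properties."""
    # Model probability of a correct response for each person/item pair.
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
    z = (responses - p) / np.sqrt(p * (1 - p))  # standardized residuals
    return (z ** 2).mean(axis=0)
```

Because the null distribution of this statistic depends on the ability/difficulty configuration, the same numerical value can signal misfit in one test and noise in another, which is the core of the critique.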

  5. Comparison of Two Methods for Detecting Alternative Splice Variants Using GeneChip® Exon Arrays

    PubMed Central

    Fan, Wenhong; Stirewalt, Derek L.; Radich, Jerald P.; Zhao, Lueping

    2011-01-01

    The Affymetrix GeneChip Exon Array can be used to detect alternative splice variants. Microarray Detection of Alternative Splicing (MIDAS) and Partek® Genomics Suite (Partek® GS) are among the most popular analytical methods used to analyze exon array data. While both methods utilize statistical significance for testing, MIDAS and Partek® GS could produce somewhat different results due to different underlying assumptions. Comparing MIDAS and Partek® GS is quite difficult due to their substantially different mathematical formulations and assumptions regarding alternative splice variants. For meaningful comparison, we have used the previously published generalized probe model (GPM) which encompasses both MIDAS and Partek® GS under different assumptions. We analyzed a colon cancer exon array data set using MIDAS, Partek® GS and GPM. MIDAS and Partek® GS produced quite different sets of genes that are considered to have alternative splice variants. Further, we found that GPM produced results similar to MIDAS as well as to Partek® GS under their respective assumptions. Within the GPM, we show how discoveries relating to alternative variants can be quite different due to different assumptions. MIDAS focuses on relative changes in expression values across different exons within genes and tends to be robust but less efficient. Partek® GS, however, uses absolute expression values of individual exons within genes and tends to be more efficient but more sensitive to the presence of outliers. From our observations, we conclude that MIDAS and Partek® GS produce complementary results, and discoveries from both analyses should be considered. PMID:23675234

  6. Spectral performance of Square Kilometre Array Antennas - II. Calibration performance

    NASA Astrophysics Data System (ADS)

    Trott, Cathryn M.; de Lera Acedo, Eloy; Wayth, Randall B.; Fagnoni, Nicolas; Sutinjo, Adrian T.; Wakley, Brett; Punzalan, Chris Ivan B.

    2017-09-01

    We test the bandpass smoothness performance of two prototype Square Kilometre Array (SKA) SKA1-Low log-periodic dipole antennas, SKALA2 and SKALA3 ('SKA Log-periodic Antenna'), and the current dipole from the Murchison Widefield Array (MWA) precursor telescope. Throughout this paper, we refer to the output complex-valued voltage response of an antenna when connected to a low-noise amplifier, as the dipole bandpass. In Paper I, the bandpass spectral response of the log-periodic antenna being developed for the SKA1-Low was estimated using numerical electromagnetic simulations and analysed using low-order polynomial fittings, and it was compared with the HERA antenna against the delay spectrum metric. In this work, realistic simulations of the SKA1-Low instrument, including frequency-dependent primary beam shapes and array configuration, are used with a weighted least-squares polynomial estimator to assess the ability of a given prototype antenna to perform the SKA Epoch of Reionisation (EoR) statistical experiments. This work complements the ideal estimator tolerances computed for the proposed EoR science experiments in Trott & Wayth, with the realized performance of an optimal and standard estimation (calibration) procedure. With a sufficient sky calibration model at higher frequencies, all antennas have bandpasses that are sufficiently smooth to meet the tolerances described in Trott & Wayth to perform the EoR statistical experiments, and these are primarily limited by an adequate sky calibration model and the thermal noise level in the calibration data. At frequencies of the Cosmic Dawn, which is of principal interest to SKA as one of the first next-generation telescopes capable of accessing higher redshifts, the MWA dipole and SKALA3 antenna have adequate performance, while the SKALA2 design will impede the ability to explore this era.
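The smoothness analysis rests on fitting low-order polynomials to the bandpass and inspecting what is left over. A generic sketch of a weighted least-squares polynomial fit of that kind (not the authors' estimator, and applied here to a real-valued amplitude rather than the complex voltage bandpass):

```python
import numpy as np

def fit_bandpass(freqs, bandpass, weights, order):
    """Weighted least-squares polynomial fit of a real-valued bandpass
    amplitude. np.polyfit minimizes sum((w_i * (y_i - p(x_i)))**2), so
    inverse-variance weights enter through their square root. The
    residual beyond the smooth fit is what leaks foreground power into
    the EoR window."""
    coeffs = np.polyfit(freqs, bandpass, order, w=np.sqrt(weights))
    fit = np.polyval(coeffs, freqs)
    return fit, bandpass - fit

# A perfectly smooth (linear) toy bandpass is recovered exactly.
freqs = np.linspace(100.0, 200.0, 50)       # MHz
bp = 2.0 + 0.01 * (freqs - 150.0)
fit, resid = fit_bandpass(freqs, bp, np.ones_like(freqs), 2)
```

In the paper the weights come from the thermal noise level of the calibration data, which is why the achievable smoothness is noise-limited.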

  7. Simulated and Real Sheet-of-Light 3D Object Scanning Using a-Si:H Thin Film PSD Arrays.

    PubMed

    Contreras, Javier; Tornero, Josep; Ferreira, Isabel; Martins, Rodrigo; Gomes, Luis; Fortunato, Elvira

    2015-11-30

    A MATLAB/SIMULINK software simulation model (structure and component blocks) has been constructed in order to view and analyze the potential of the PSD (Position Sensitive Detector) array concept technology before it is further expanded or developed. This simulation allows changing most of its parameters, such as the number of elements in the PSD array, the direction of vision, the viewing/scanning angle, the object rotation, translation, sample/scan/simulation time, etc. In addition, results show for the first time the possibility of scanning an object in 3D when using an a-Si:H thin film 128 PSD array sensor and hardware/software system. Moreover, this sensor technology is able to perform these scans and render 3D objects at high speeds and high resolutions when using a sheet-of-light laser within a triangulation platform. As shown by the simulation, a substantial enhancement in 3D object profile image quality and realism can be achieved by increasing the number of elements of the PSD array sensor as well as by achieving an optimal position response from the sensor since clearly the definition of the 3D object profile depends on the correct and accurate position response of each detector as well as on the size of the PSD array.

  8. The statistics of laser returns from cube-corner arrays on satellite

    NASA Technical Reports Server (NTRS)

    Lehr, C. G.

    1973-01-01

    A method first presented by Goodman is used to derive an equation for the statistical effects associated with laser returns from satellites having retroreflecting arrays of cube corners. The effect of the distribution on the returns of a satellite-tracking system is illustrated by a computation based on randomly generated numbers.

  9. Virginia ridesharing statistics : methodologies for determining carpooler and vanpool average life bases and the average fuel economy of commuter vehicles.

    DOT National Transportation Integrated Search

    1985-01-01

    The objective of this research was to investigate methods of computing average life values for carpoolers and vanpools in Virginia. These statistics are to be used by the Rail and Public Transportation Division in evaluating the efficiency and cost-e...

  10. Constraints on Fault Damage Zone Properties and Normal Modes from a Dense Linear Array Deployment along the San Jacinto Fault Zone

    NASA Astrophysics Data System (ADS)

    Allam, A. A.; Lin, F. C.; Share, P. E.; Ben-Zion, Y.; Vernon, F.; Schuster, G. T.; Karplus, M. S.

    2016-12-01

We present earthquake data and statistical analyses from a month-long deployment of a linear array of 134 Fairfield three-component 5 Hz seismometers along the Clark strand of the San Jacinto fault zone in Southern California. With a total aperture of 2.4 km and mean station spacing of 20 m, the array locally spans the entire fault zone from the most intensely fractured core to relatively undamaged host rock on the outer edges. We recorded 36 days of continuous seismic data at a 1000 Hz sampling rate, capturing waveforms from 751 local events with Mw > 0.5 and 43 teleseismic events with M > 5.5, including two 600-km-deep M7.5 events along the Andean subduction zone. For any single local event on the San Jacinto fault, the central stations of the array recorded both higher-amplitude and longer-duration waveforms, which we interpret as the result of damage-related low-velocity structure acting as a broad waveguide. Using 271 San Jacinto events, we compute the distributions of three quantities for each station: maximum amplitude, mean amplitude, and total energy (the integral of the envelope). All three values become statistically lower with increasing distance from the fault, but in addition show a nonrandom zigzag pattern which we interpret as normal mode oscillations. This interpretation is supported by polarization analysis, which demonstrates that the high-amplitude late-arriving energy is strongly vertically polarized in the central part of the array, consistent with Love-type trapped waves. These results, comprising nearly 30,000 separate coseismic waveforms, support the consistent interpretation of a 450-m-wide asymmetric damage zone, with the lowest velocities offset to the northeast of the mapped surface trace by 100 m. This asymmetric damage zone has important implications for the earthquake dynamics of the San Jacinto fault and especially its ability to generate damaging multi-segment ruptures.
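    The three per-station quantities named in this abstract are simple to compute. The sketch below is a minimal NumPy reading of them, assuming the envelope is taken as the analytic-signal magnitude (a standard choice; the abstract does not specify):

    ```python
    import numpy as np

    def waveform_stats(trace, dt):
        """Maximum amplitude, mean amplitude, and total energy (integral
        of the envelope) for one station's waveform. The envelope is the
        magnitude of the analytic signal, built with a NumPy-only
        Hilbert transform."""
        x = np.asarray(trace, dtype=float)
        n = len(x)
        # Weighting the one-sided spectrum yields the analytic signal.
        X = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1.0
        if n % 2 == 0:
            h[n // 2] = 1.0
            h[1:n // 2] = 2.0
        else:
            h[1:(n + 1) // 2] = 2.0
        envelope = np.abs(np.fft.ifft(X * h))
        return np.abs(x).max(), np.abs(x).mean(), envelope.sum() * dt
    ```

    For a pure sinusoid the envelope is flat at the sinusoid's amplitude, so the computed energy is amplitude times duration.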

  11. Thermal cycle testing of Space Station Freedom solar array blanket coupons

    NASA Technical Reports Server (NTRS)

    Scheiman, David A.

    1991-01-01

    Lewis Research Center is presently conducting thermal cycle testing of solar array blanket coupons that represent the baseline design for Space Station Freedom. Four coupons were fabricated as part of the Photovoltaic Array Environment Protection (PAEP) Program, NAS 3-25079, at Lockheed Missile and Space Company. The objective of the testing is to demonstrate the durability or operational lifetime of the solar array welded interconnect design within a low earth orbit (LEO) thermal cycling environment. Secondary objectives include the observation and identification of potential failure modes and effects that may occur within the solar array blanket coupons as a result of thermal cycling. The objectives, test articles, test chamber, performance evaluation, test requirements, and test results are presented for the successful completion of 60,000 thermal cycles.

  12. Uncertainty-enabled design of electromagnetic reflectors with integrated shape control

    NASA Astrophysics Data System (ADS)

    Haque, Samiul; Kindrat, Laszlo P.; Zhang, Li; Mikheev, Vikenty; Kim, Daewa; Liu, Sijing; Chung, Jooyeon; Kuian, Mykhailo; Massad, Jordan E.; Smith, Ralph C.

    2018-03-01

    We implemented a computationally efficient model for a corner-supported, thin, rectangular, orthotropic polyvinylidene fluoride (PVDF) laminate membrane, actuated by a two-dimensional array of segmented electrodes. The laminate can be used as a shape-controlled electromagnetic reflector, and the model estimates the reflector's shape given an array of control voltages. In this paper, we describe a model to determine the shape of the laminate for a given distribution of control voltages. Then, we investigate the surface shape error and its sensitivity to the model parameters. Subsequently, we analyze the simulated deflection of the actuated bimorph using a Zernike polynomial decomposition. Finally, we provide a probabilistic description of reflector performance using statistical methods to quantify uncertainty. We make design recommendations for nominal parameter values and their tolerances based on optimization under uncertainty using multiple methods.

  13. Statistical Analysis of Microarray Data with Replicated Spots: A Case Study with Synechococcus WH8102

    PubMed Central

    Thomas, E. V.; Phillippy, K. H.; Brahamsha, B.; Haaland, D. M.; Timlin, J. A.; Elbourne, L. D. H.; Palenik, B.; Paulsen, I. T.

    2009-01-01

    Until recently microarray experiments often involved relatively few arrays with only a single representation of each gene on each array. A complete genome microarray with multiple spots per gene (spread out spatially across the array) was developed in order to compare the gene expression of a marine cyanobacterium and a knockout mutant strain in a defined artificial seawater medium. Statistical methods were developed for analysis in the special situation of this case study where there is gene replication within an array and where relatively few arrays are used, which can be the case with current array technology. Due in part to the replication within an array, it was possible to detect very small changes in the levels of expression between the wild type and mutant strains. One interesting biological outcome of this experiment is the indication of the extent to which the phosphorus regulatory system of this cyanobacterium affects the expression of multiple genes beyond those strictly involved in phosphorus acquisition. PMID:19404483

  14. Statistical Analysis of Microarray Data with Replicated Spots: A Case Study with Synechococcus WH8102

    DOE PAGES

    Thomas, E. V.; Phillippy, K. H.; Brahamsha, B.; ...

    2009-01-01

    Until recently microarray experiments often involved relatively few arrays with only a single representation of each gene on each array. A complete genome microarray with multiple spots per gene (spread out spatially across the array) was developed in order to compare the gene expression of a marine cyanobacterium and a knockout mutant strain in a defined artificial seawater medium. Statistical methods were developed for analysis in the special situation of this case study where there is gene replication within an array and where relatively few arrays are used, which can be the case with current array technology. Due in part to the replication within an array, it was possible to detect very small changes in the levels of expression between the wild type and mutant strains. One interesting biological outcome of this experiment is the indication of the extent to which the phosphorus regulatory system of this cyanobacterium affects the expression of multiple genes beyond those strictly involved in phosphorus acquisition.

  15. Impedance testing on cochlear implants after electroconvulsive therapy.

    PubMed

    McRackan, Theodore R; Rivas, Alejandro; Hedley-Williams, Andrea; Raj, Vidya; Dietrich, Mary S; Clark, Nathaniel K; Labadie, Robert F

    2014-12-01

    Cochlear implants (CI) are neural prostheses that restore hearing to individuals with profound sensorineural hearing loss. The surgically implanted component consists of an electrode array, which is threaded into the cochlea, and an electronic processor, which is buried under the skin behind the ear. The Food and Drug Administration and CI manufacturers contend that electroconvulsive therapy (ECT) is contraindicated in CI recipients owing to risk of damage to the implant and/or the patient. We hypothesized that ECT does no electrical damage to CIs. Ten functional CIs were implanted in 5 fresh cadaveric human heads. Each head then received a consecutive series of 12 unilateral ECT sessions applying maximum full pulse-width energy settings. Electroconvulsive therapy was delivered contralaterally to 5 CIs and ipsilaterally to 5 CIs. Electrical integrity testing (impedance testing) of the electrode array was performed before and after CI insertion, and after the first, third, fifth, seventh, ninth, and 12th ECT sessions. Electroconvulsive therapy was performed by a staff psychiatrist experienced with the technique. Explanted CIs were sent back to the manufacturer for further integrity testing. No electrical damage was identified during impedance testing. Overall, there were statistically significant decreases in impedances (consistent with no electrical damage) when comparing pre-ECT impedance values to those after 12 sessions. There was no statistically significant difference (P > 0.05) in impedance values comparing ipsilateral to contralateral ECT. Manufacturer testing revealed no other electrical damage to the CIs. Electroconvulsive therapy does not seem to cause any detectable electrical injury to CIs.

  16. Summary Statistics for Fun Dough Data Acquired at LLNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kallman, J S; Morales, K E; Whipple, R E

    Using x-ray computerized tomography (CT), we have characterized the x-ray linear attenuation coefficients (LAC) of a Play Dough™-like product, Fun Dough™, designated as PD. Table 1 gives the first-order statistics for each of four CT measurements, estimated with a Gaussian kernel density estimator (KDE) analysis. The mean values of the LAC range from a high of about 2100 LMHU_D at 100 kVp to a low of about 1100 LMHU_D at 300 kVp. The standard deviation of each measurement is around 1% of the mean. The entropy covers the range from 3.9 to 4.6. Ordinarily, we would model the LAC of the material and compare the modeled values to the measured values. In this case, however, we did not have the composition of the material and therefore did not model the LAC. Using a method recently proposed by Lawrence Livermore National Laboratory (LLNL), we estimate the value of the effective atomic number, Z_eff, to be near 8.5. LLNL prepared about 50 mL of the Fun Dough™ in a polypropylene vial and firmly compressed it immediately prior to the x-ray measurements. Still, layers can plainly be seen in the reconstructed images, indicating that the bulk density of the material in the container is affected by voids and bubbles. We used the computer program IMGREC to reconstruct the CT images. The values of the key parameters used in the data capture and image reconstruction are given in this report. Additional details may be found in the experimental SOP and a separate document. To characterize the statistical distribution of LAC values in each CT image, we first isolated an 80% central-core segment of volume elements ('voxels') lying completely within the specimen, away from the walls of the polypropylene vial. All of the voxels within this central core, including those comprised of voids and inclusions, are included in the statistics. 
We then calculated the mean value, standard deviation and entropy for (a) the four image segments and for (b) their digital gradient images. (A digital gradient image of a given image was obtained by taking the absolute value of the difference between the initial image and that same image offset by one voxel horizontally, parallel to the rows of the x-ray detector array.) The statistics of the initial image of LAC values are called 'first order statistics;' those of the gradient image, 'second order statistics.'
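The first- and second-order statistics described above can be sketched compactly in NumPy. The histogram bin count and the logarithm base for the entropy are assumptions here, since the report does not state them:

```python
import numpy as np

def first_and_second_order_stats(image, bins=256):
    """Mean, standard deviation, and entropy of an image (first-order
    statistics) and of its digital gradient image (second-order
    statistics). The gradient image is the absolute difference between
    the image and itself offset by one voxel horizontally."""
    img = np.asarray(image, dtype=float)

    def stats(a):
        hist, _ = np.histogram(a, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                       # drop empty bins
        entropy = -np.sum(p * np.log2(p))  # entropy in bits (assumed base 2)
        return a.mean(), a.std(), entropy

    grad = np.abs(img[:, 1:] - img[:, :-1])  # one-voxel horizontal offset
    return stats(img), stats(grad)
```

A perfectly uniform image has zero standard deviation and zero entropy in both the image and its gradient, which makes a convenient sanity check.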

  17. Sexual Orientation and Spatial Position Effects on Selective Forms of Object Location Memory

    ERIC Educational Resources Information Center

    Rahman, Qazi; Newland, Cherie; Smyth, Beatrice Mary

    2011-01-01

    Prior research has demonstrated robust sex and sexual orientation-related differences in object location memory in humans. Here we show that this sexual variation may depend on the spatial position of target objects and the task-specific nature of the spatial array. We tested the recovery of object locations in three object arrays (object…

  18. The spatial coherence structure of infrasonic waves: analysis of data from International Monitoring System arrays

    NASA Astrophysics Data System (ADS)

    Green, David N.

    2015-04-01

    The spatial coherence structure of 30 infrasound array detections, with source-to-receiver ranges of 25-6500 km, has been measured within the 0.25-1 Hz passband. The data were recorded at International Monitoring System (IMS) microbarograph arrays with apertures of between 1 and 4 km. Such array detections are of interest for Comprehensive Nuclear-Test-Ban Treaty monitoring. The majority of array detections (e.g. 80 per cent of recordings in the third-octave passband centred on 0.63 Hz) exhibit spatial coherence loss anisotropy that is consistent with previous lower frequency atmospheric acoustic studies; coherence loss is more rapid perpendicular to the acoustic propagation direction than parallel to it. The thirty array detections display significant interdetection variation in the magnitude of spatial coherence loss. The measurements can be explained by the simultaneous arrival of wave fronts at the recording array with angular beamwidths of between 0.4 and 7° and velocity bandwidths of between 2 and 40 m s⁻¹. There is a statistically significant positive correlation between source-to-receiver range and the magnitude of coherence loss. Acoustic multipathing generated by interactions with fine-scale wind and temperature gradients along stratospheric propagation paths is qualitatively consistent with the observations. In addition, the study indicates that to isolate coherence loss generated by propagation effects, analysis of signals exhibiting high signal-to-noise ratios (SNR) is required (SNR² > 11 in this study). The rapid temporal variations in infrasonic noise observed in recordings at IMS arrays indicate that correcting measured coherence values for the effect of noise, using pre-signal estimates of noise power, is ineffective.

  19. Dielectrophoresis-Assisted Integration of 1024 Carbon Nanotube Sensors into a CMOS Microsystem.

    PubMed

    Seichepine, Florent; Rothe, Jörg; Dudina, Alexandra; Hierlemann, Andreas; Frey, Urs

    2017-05-01

    Carbon-nanotube (CNT)-based sensors offer the potential to detect single-molecule events and picomolar analyte concentrations. An important step toward applications of such nanosensors is their integration in large arrays. The availability of large arrays would enable multiplexed and parallel sensing, and the simultaneously obtained sensor signals would facilitate statistical analysis. A reliable method to fabricate an array of 1024 CNT-based sensors on a fully processed complementary-metal-oxide-semiconductor microsystem is presented. A high-yield process for the deposition of CNTs from a suspension by means of liquid-coupled floating-electrode dielectrophoresis (DEP), which yielded 80% of the sensor devices featuring between one and five CNTs, is developed. The mechanism of floating-electrode DEP on full arrays and individual devices to understand its self-limiting behavior is studied. The resistance distributions across the array of CNT devices with respect to different DEP parameters are characterized. The CNT devices are then operated as liquid-gated CNT field-effect-transistors (LG-CNTFET) in liquid environment. Current dependency to the gate voltage of up to two orders of magnitude is recorded. Finally, the sensors are validated by studying the pH dependency of the LG-CNTFET conductance and it is demonstrated that 73% of the CNT sensors of a given microsystem show a resistance decrease upon increasing the pH value. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Performance of the image statistics decoder in conjunction with the Goldstone-VLA array

    NASA Technical Reports Server (NTRS)

    Wang, H. C.; Pitt, G. H., III

    1989-01-01

    During Voyager's Neptune encounter, the National Radio Astronomy Observatory's Very Large Array (VLA) will be arrayed with Goldstone antennas to receive the transmitted telemetry data from the spacecraft. The telemetry signal from the VLA will drop out periodically, resulting in a periodic drop in the received signal-to-noise ratio (SNR). The Image Statistics Decoder (ISD), which assumes a correlation between pixels, can improve the bit error rate (BER) for images during these dropout periods. Simulation results have shown that the ISD, in conjunction with the Goldstone-VLA array, can provide a 3-dB gain for uncompressed images at a BER of 5.0 × 10⁻³.

  1. Optimum sensor placement for microphone arrays

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.

    Microphone arrays can be used for high-quality sound pickup in reverberant and noisy environments. Sound capture using conventional single microphone methods suffers severe degradation under these conditions. The beamforming capabilities of microphone array systems allow highly directional sound capture, providing enhanced signal-to-noise ratio (SNR) when compared to single microphone performance. The overall performance of an array system is governed by its ability to locate and track sound sources and its ability to capture sound from desired spatial volumes. These abilities are strongly affected by the spatial placement of microphone sensors. A method is needed to optimize placement for a specified number of sensors in a given acoustical environment. The objective of the optimization is to obtain the greatest average system SNR for sound capture in the region of interest. A two-step sound source location method is presented. In the first step, time delay of arrival (TDOA) estimates for select microphone pairs are determined using a modified version of the Omologo-Svaizer cross-power spectrum phase expression. In the second step, the TDOA estimates are used in a least-mean-squares gradient descent search algorithm to obtain a location estimate. Statistics for TDOA estimate error as a function of microphone pair/sound source geometry and acoustic environment are gathered from a set of experiments. These statistics are used to model position estimation accuracy for a given array geometry. The effectiveness of sound source capture is also dependent on array geometry and the acoustical environment. Simple beamforming and time delay compensation (TDC) methods provide spatial selectivity but suffer performance degradation in reverberant environments. Matched filter array (MFA) processing can mitigate the effects of reverberation. 
The shape and gain advantage of the capture region for these techniques are described and shown to be highly influenced by the placement of array sensors. A procedure is developed to evaluate a given array configuration based on the above-mentioned metrics. Constrained placement optimizations are performed that maximize SNR for both TDC and MFA capture methods. Results are compared for various acoustic environments and various enclosure sizes. General guidelines are presented for placement strategy and bandwidth dependence, as they relate to reverberation levels, ambient noise, and enclosure geometry. An overall performance function is described based on these metrics. Performance of the microphone array system is also constrained by the design limitations of the supporting hardware. Two newly developed hardware architectures are presented that support the described algorithms. A low-cost 8-channel system with off-the-shelf componentry was designed and its performance evaluated. A massively parallel 512-channel custom-built system is in development; its capabilities and the rationale for its design are described.
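Step one of the localization method, TDOA estimation from the cross-power spectrum phase, can be sketched in its standard GCC-PHAT form. The thesis uses a modified Omologo-Svaizer expression whose details are not given in this abstract, so the following is only the textbook variant, with illustrative names:

```python
import numpy as np

def tdoa_csp(sig_a, sig_b, fs):
    """Estimate the delay of sig_b relative to sig_a (positive result
    means sig_b lags) from the phase of the cross-power spectrum,
    i.e. standard GCC-PHAT."""
    n = len(sig_a) + len(sig_b)                # zero-pad: avoid circular wrap
    A = np.fft.rfft(sig_a, n=n)
    B = np.fft.rfft(sig_b, n=n)
    cross = np.conj(A) * B
    cross /= np.maximum(np.abs(cross), 1e-12)  # whitening: keep phase only
    corr = np.fft.irfft(cross, n=n)
    corr = np.roll(corr, n // 2)               # move zero lag to the centre
    delay_samples = int(np.argmax(corr)) - n // 2
    return delay_samples / fs
```

Each microphone pair yields one such delay estimate; the second step of the method feeds these estimates into the gradient-descent search for the source position.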

  2. Multisensor Arrays for Greater Reliability and Accuracy

    NASA Technical Reports Server (NTRS)

    Immer, Christopher; Eckhoff, Anthony; Lane, John; Perotti, Jose; Randazzo, John; Blalock, Norman; Ree, Jeff

    2004-01-01

    Arrays of multiple, nominally identical sensors with sensor-output-processing electronic hardware and software are being developed in order to obtain accuracy, reliability, and lifetime greater than those of single sensors. The conceptual basis of this development lies in the statistical behavior of multiple sensors and a multisensor-array (MSA) algorithm that exploits that behavior. In addition, advances in microelectromechanical systems (MEMS) and integrated circuits are exploited. A typical sensor unit according to this concept includes multiple MEMS sensors and sensor-readout circuitry fabricated together on a single chip and packaged compactly with a microprocessor that performs several functions, including execution of the MSA algorithm. In the MSA algorithm, the readings from all the sensors in an array at a given instant of time are compared and the reliability of each sensor is quantified. This comparison of readings and quantification of reliabilities involves the calculation of the ratio between every sensor reading and every other sensor reading, plus calculation of the sum of all such ratios. Then one output reading for the given instant of time is computed as a weighted average of the readings of all the sensors. In this computation, the weight for each sensor is the aforementioned value used to quantify its reliability. In an optional variant of the MSA algorithm that can be implemented easily, a running sum of the reliability value for each sensor at previous time steps as well as at the present time step is used as the weight of the sensor in calculating the weighted average at the present time step. In this variant, the weight of a sensor that continually fails gradually decreases, so that eventually, its influence over the output reading becomes minimal: In effect, the sensor system "learns" which sensors to trust and which not to trust. 
The MSA algorithm incorporates a criterion for deciding whether there remain enough sensor readings that approximate each other sufficiently closely to constitute a majority for the purpose of quantifying reliability. This criterion is, simply, that if there do not exist at least three sensors having weights greater than a prescribed minimum acceptable value, then the array as a whole is deemed to have failed.
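A minimal sketch of the ratio-based weighting and majority criterion described above, assuming positive sensor readings; the function and parameter names are illustrative, not taken from the article:

```python
import numpy as np

def msa_fuse(readings, min_weight=0.5):
    """Fuse redundant sensor readings: score each sensor's reliability
    by its pairwise ratio agreement with the other sensors, then
    return the reliability-weighted average. Assumes positive
    readings, so each pairwise ratio lies in (0, 1]."""
    x = np.asarray(readings, dtype=float)
    n = len(x)
    # Ratio of every reading to every other reading (min/max keeps
    # each ratio in (0, 1]; identical readings give exactly 1.0).
    ratios = np.minimum.outer(x, x) / np.maximum.outer(x, x)
    np.fill_diagonal(ratios, 0.0)
    # Reliability weight: mean agreement with all other sensors.
    weights = ratios.sum(axis=1) / (n - 1)
    # Majority criterion from the text: at least three sensors must
    # exceed the prescribed minimum weight, else the array has failed.
    if np.count_nonzero(weights > min_weight) < 3:
        raise RuntimeError("sensor array failed: no reliable majority")
    return float(np.average(x, weights=weights))
```

In the running-sum variant mentioned above, the per-step weights would simply be accumulated over time, so a persistently failing sensor's influence on the weighted average decays.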

  3. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
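    A hedged sketch of the decomposition this patent abstract describes. The multi-grid Laplace solver is replaced by plain Jacobi iteration, the edge detector is a simple gradient threshold, and the actual entropy coding of the two parts is omitted; all names and defaults are illustrative:

    ```python
    import numpy as np

    def _laplace_fill(edges, edge_values, iters):
        """Fill non-edge pixels by Jacobi iteration toward a solution of
        Laplace's equation with the edge pixels held fixed (np.roll makes
        the boundary periodic, a simplification)."""
        filled = np.where(edges, edge_values, edge_values[edges].mean())
        for _ in range(iters):
            avg = (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                   np.roll(filled, 1, 1) + np.roll(filled, -1, 1)) / 4.0
            filled = np.where(edges, edge_values, avg)
        return filled

    def decompose(image, edge_threshold=30.0, iters=200):
        img = np.asarray(image, dtype=float)
        gy, gx = np.gradient(img)
        edges = np.hypot(gx, gy) > edge_threshold  # edge-pixel mask
        edge_values = np.where(edges, img, 0.0)    # the "edge file"
        diff = img - _laplace_fill(edges, edge_values, iters)
        return edges, edge_values, diff            # parts compressed separately

    def reconstruct(edges, edge_values, diff, iters=200):
        # Recreate the preliminary (filled) array from the edge file,
        # then add the difference array back.
        return _laplace_fill(edges, edge_values, iters) + diff
    ```

    Because the receiver re-runs the same deterministic fill, adding the difference array back recovers the original image exactly (before any lossy coding of the parts).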

  4. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.

  5. Expanding Coherent Array Processing to Larger Apertures Using Empirical Matched Field Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ringdal, F; Harris, D B; Kvaerna, T

    2009-07-23

    We have adapted matched field processing, a method developed in underwater acoustics to detect and locate targets, to classify transient seismic signals arising from mining explosions. Matched field processing, as we apply it, is an empirical technique, using observations of historic events to calibrate the amplitude and phase structure of wavefields incident upon an array aperture for particular repeating sources. The objective of this project is to determine how broadly applicable the method is and to understand the phenomena that control its performance. We obtained our original results in distinguishing events from ten mines in the Khibiny and Olenegorsk mining districts of the Kola Peninsula, for which we had exceptional ground truth information. In a cross-validation test, some 98.2% of 549 explosions were correctly classified by originating mine using just the Pn observations (2.5-12.5 Hz) on the ARCES array at ranges from 350-410 kilometers. These results were achieved despite the fact that the mines are as closely spaced as 3 kilometers. Such classification performance is significantly better than predicted by the Rayleigh limit. Scattering phenomena account for the increased resolution, as we make clear in an analysis of the information carrying capacity of Pn under two alternative propagation scenarios: free-space propagation and propagation with realistic (actually measured) spatial covariance structure. The increase in information capacity over a wide band is captured by the matched field calibrations and used to separate explosions from very closely-spaced sources. In part, the improvement occurs because the calibrations enable coherent processing at frequencies above those normally considered coherent. We are investigating whether similar results can be expected in different regions, with apertures of increasing scale and for diffuse seismicity. 
We verified similar performance with the closely-spaced Zapolyarni mines, though discovered that it may be necessary to divide event populations from a single mine into identifiable subpopulations. For this purpose, we perform cluster analysis using matched field statistics calculated on pairs of individual events as a distance metric. In our initial work, calibrations were derived from ensembles of events ranging in number to more than 100. We are now considering the performance of matched field calibrations derived with many fewer events (even, as mentioned, individual events). Since these are high-variance estimates, we are testing the use of cross-channel, multitaper, spectral estimation methods to reduce the variance of calibrations and detection statistics derived from single-event observations. To test the applicability of the technique in a different tectonic region, we have obtained four years of continuous data from 4 Kazakh arrays and are extracting large numbers of event segments. Our initial results using 132 mining explosions recorded by the Makanchi array are similar to those obtained in the European Arctic. Matched field processing clearly separates the explosions from three closely-spaced mines located approximately 400 kilometers from the array, again using waveforms in a band (6-10 Hz) normally considered incoherent for this array. Having reproduced ARCES-type performance with another small aperture array, we have two additional objectives for matched field processing. We will attempt to extend matched field processing to larger apertures: a 200 km aperture (the KNET) and, if data permit, to an aperture comprised of several Kazakh arrays. We also will investigate the potential of developing matched field processing to roughly locate and classify natural seismicity, which is more diffuse than the concentrated sources of mining explosions that we have investigated to date.
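The pairwise matched field statistic used as a clustering distance is not spelled out in this summary. A plausible minimal form, assumed here purely for illustration, is the frequency-averaged squared coherence between two events' normalized channel-vector spectra:

```python
import numpy as np

def matched_field_stat(event_a, event_b):
    """Similarity of two events in [0, 1]. Each event is an
    (n_freq, n_chan) array of complex Fourier coefficients across the
    array channels; at each frequency the channel vectors are
    normalized and their squared inner product taken, then averaged
    over frequency. 1 - statistic can serve as a distance metric."""
    a = event_a / np.linalg.norm(event_a, axis=1, keepdims=True)
    b = event_b / np.linalg.norm(event_b, axis=1, keepdims=True)
    coherence = np.abs(np.sum(a * np.conj(b), axis=1)) ** 2
    return float(coherence.mean())
```

Normalizing per frequency makes the statistic insensitive to overall amplitude and to a common phase shift, so it responds only to the relative amplitude and phase structure across the array, which is what the calibrations capture.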

  6. Improvement in airborne position measurements based on an ultrasonic linear-period-modulated wave by 1-bit signal processing

    NASA Astrophysics Data System (ADS)

    Thong-un, Natee; Hirata, Shinnosuke; Kurosawa, Minoru K.

    2015-07-01

    In this paper, we describe an extension of airborne ultrasonic systems for object localization in three-dimensional navigation spaces. By revising the microphone arrangement and algorithm, the system expands the object-position measurement range from +90° in a previous method up to +180° for both the elevation and azimuth angles. The proposed system consists of a sound source and four acoustical receivers. Moreover, the system is designed around low-cost devices, and low-cost computation relying on 1-bit signal processing supports real-time operation on a field-programmable gate array (FPGA). Object locations are identified in spherical coordinates, with a spherical (curved-surface) object serving as the target. The transmit pulse is a linear-period-modulated ultrasonic wave sweeping from 50 to 20 kHz. The system is evaluated statistically through repeated experimental measurements.
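
    The point of the 1-bit approach is that hard-limited (sign-quantized) signals still cross-correlate sharply against a chirp reference, so time of flight can be recovered with single-bit arithmetic that maps cheaply onto an FPGA. A minimal NumPy sketch, with an invented sample rate, pulse length, delay and noise level (not the paper's hardware parameters):

```python
import numpy as np

fs = 200_000                     # sample rate (Hz), illustrative
t = np.arange(0, 0.005, 1 / fs)  # 5 ms pulse, 1000 samples

# Linear-period-modulated wave: the instantaneous period grows linearly so the
# frequency sweeps from roughly 50 kHz down to 20 kHz
f0, f1 = 50_000.0, 20_000.0
period = 1 / f0 + (1 / f1 - 1 / f0) * t / t[-1]
phase = 2 * np.pi * np.cumsum(1.0 / period) / fs
tx = np.sign(np.sin(phase))      # 1-bit reference of the transmitted pulse

# Received signal: the echo delayed by 123 samples in noise, then hard-limited
delay = 123
rx = np.zeros(len(tx) + 400)
rx[delay:delay + len(tx)] += np.sin(phase)
rx += np.random.default_rng(1).normal(0, 0.5, rx.size)
rx1 = np.sign(rx)                # 1-bit received stream

# Cross-correlation of the two 1-bit streams; the peak lag is the time of flight
corr = np.correlate(rx1, tx, mode="valid")
est_delay = int(np.argmax(corr))
```

    On an FPGA, products of ±1 samples reduce to sign logic and counters, which is the source of the low-cost real-time computation.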

  7. Weight Vector Fluctuations in Adaptive Antenna Arrays Tuned Using the Least-Mean-Square Error Algorithm with Quadratic Constraint

    NASA Astrophysics Data System (ADS)

    Zimina, S. V.

    2015-06-01

    We present the results of statistical analysis of an adaptive antenna array tuned using the least-mean-square error algorithm with quadratic constraint on the useful-signal amplification with allowance for the weight-coefficient fluctuations. Using the perturbation theory, the expressions for the correlation function and power of the output signal of the adaptive antenna array, as well as the formula for the weight-vector covariance matrix are obtained in the first approximation. The fluctuations are shown to lead to the signal distortions at the antenna-array output. The weight-coefficient fluctuations result in the appearance of additional terms in the statistical characteristics of the antenna array. It is also shown that the weight-vector fluctuations are isotropic, i.e., identical in all directions of the weight-coefficient space.

  8. Taking a(c)count of eye movements: Multiple mechanisms underlie fixations during enumeration.

    PubMed

    Paul, Jacob M; Reeve, Robert A; Forte, Jason D

    2017-03-01

    We habitually move our eyes when we enumerate sets of objects. It remains unclear whether saccades are directed for numerosity processing as distinct from object-oriented visual processing (e.g., object saliency, scanning heuristics). Here we investigated the extent to which enumeration eye movements are contingent upon the location of objects in an array, and whether fixation patterns vary with enumeration demands. Twenty adults enumerated random dot arrays twice: first to report the set cardinality and second to judge the perceived number of subsets. We manipulated the spatial location of dots by presenting arrays at 0°, 90°, 180°, and 270° orientations. Participants required a similar time to enumerate the set or the perceived number of subsets in the same array. Fixation patterns were systematically shifted in the direction of array rotation, and distributed across similar locations when the same array was shown on multiple occasions. We modeled fixation patterns and dot saliency using a simple filtering model and show participants judged groups of dots in close proximity (2°-2.5° visual angle) as distinct subsets. Modeling results are consistent with the suggestion that enumeration involves visual grouping mechanisms based on object saliency, and specific enumeration demands affect spatial distribution of fixations. Our findings highlight the importance of set computation, rather than object processing per se, for models of numerosity processing.

  9. Sparse aperture 3D passive image sensing and recognition

    NASA Astrophysics Data System (ADS)

    Daneshpanah, Mehdi

    The way we perceive, capture, store, communicate and visualize the world has greatly changed in the past century. Novel three-dimensional (3D) imaging and display systems are being pursued both in academic and industrial settings. In many cases, these systems have revolutionized traditional approaches and/or enabled new technologies in other disciplines including medical imaging and diagnostics, industrial metrology, entertainment, robotics as well as defense and security. In this dissertation, we focus on novel aspects of sparse aperture multi-view imaging systems and their application in quantum-limited object recognition in two separate parts. In the first part, two concepts are proposed. First, a solution is presented that involves a generalized framework for 3D imaging using randomly distributed sparse apertures. Second, a method is suggested to extract the profile of objects in the scene through statistical properties of the reconstructed light field. In both cases, experimental results are presented that demonstrate the feasibility of the techniques. In the second part, the application of 3D imaging systems in sensing and recognition of objects is addressed. In particular, we focus on the scenario in which only tens of photons reach the sensor from the object of interest, as opposed to hundreds of billions of photons in normal imaging conditions. At this level, the quantum-limited behavior of light will dominate and traditional object recognition practices may fail. We suggest a likelihood-based object recognition framework that incorporates the physics of sensing at quantum-limited conditions. Sensor dark noise has been modeled and taken into account. This framework is applied to 3D sensing of thermal objects using visible spectrum detectors. Thermal objects as cold as 250K are shown to provide enough signature photons to be sensed and recognized within background and dark noise with mature, visible band, image forming optics and detector arrays.
The results suggest that one might not need to venture into exotic and expensive detector arrays and associated optics for sensing room-temperature thermal objects in complete darkness.
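
    The quantum-limited recognition step can be illustrated with a Poisson maximum-likelihood classifier. The per-pixel signatures, dark-count rate, and pixel count below are invented stand-ins for the calibrated mean photon maps of the thermal objects:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-pixel mean photon counts for two object classes (tens of
# photons in total, as in the quantum-limited regime), plus a dark-count rate
lam_a = np.array([0.5, 2.0, 4.0, 2.0, 0.5])
lam_b = np.array([2.0, 2.0, 2.0, 2.0, 2.0])
dark = 0.2

def loglik(counts, lam):
    # Poisson log-likelihood, dropping the log(k!) term, which is the same
    # for every candidate class and so cancels in the comparison
    rate = lam + dark
    return np.sum(counts * np.log(rate) - rate)

counts = rng.poisson(lam_a + dark)        # one observation from class A
decision = "A" if loglik(counts, lam_a) > loglik(counts, lam_b) else "B"
```

    Because the log k! term is identical across classes it drops out of the comparison, and the dark rate enters simply as an additive term in each pixel's Poisson rate, which is how sensor dark noise folds into the model.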

  10. An Objective Verification of the North American Mesoscale Model for Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2010-01-01

    The 45th Weather Squadron (45 WS) Launch Weather Officers use the 12-km resolution North American Mesoscale (NAM) model (MesoNAM) text and graphical product forecasts extensively to support launch weather operations. However, the actual performance of the model at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) has not been measured objectively. In order to have tangible evidence of model performance, the 45 WS tasked the Applied Meteorology Unit to conduct a detailed statistical analysis of model output compared to observed values. The model products are provided to the 45 WS by ACTA, Inc. and include hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The objective analysis compared the MesoNAM forecast winds, temperature and dew point, as well as the changes in these parameters over time, to the observed values from the sensors in the KSC/CCAFS wind tower network. Objective statistics will give the forecasters knowledge of the model's strengths and weaknesses, which will result in improved forecasts for operations.
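
    At its core, this kind of objective verification reduces to paired forecast-observation statistics. A minimal sketch with synthetic data (the bias and error magnitudes below are invented, not AMU results):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic paired series: tower observations and model forecasts with an
# invented +0.8 degC systematic warm bias plus random error (illustrative only)
obs = 25 + 3 * rng.standard_normal(500)
fcst = obs + 0.8 + 1.2 * rng.standard_normal(500)

bias = float(np.mean(fcst - obs))                   # systematic error
rmse = float(np.sqrt(np.mean((fcst - obs) ** 2)))   # total error magnitude
```

    Bias exposes systematic model error (e.g., a persistent warm bias at a given forecast hour), while RMSE folds in the random component; computing both per parameter and per initialization time is what gives forecasters a picture of the model's strengths and weaknesses.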

  11. Downregulation of toll-like receptor-mediated signalling pathways in oral lichen planus.

    PubMed

    Sinon, Suraya H; Rich, Alison M; Parachuru, Venkata P B; Firth, Fiona A; Milne, Trudy; Seymour, Gregory J

    2016-01-01

    The objective of this study was to investigate the expression of Toll-like receptors (TLR) and TLR-associated signalling pathway genes in oral lichen planus (OLP). Initially, immunohistochemistry was used to determine TLR expression in 12 formalin-fixed archival OLP tissues with 12 non-specifically inflamed oral tissues as controls. RNA was isolated from further fresh samples of OLP and non-specifically inflamed oral tissue controls (n = 6 for both groups) and used in qRT(2)-PCR focused arrays to determine the expression of TLRs and associated signalling pathway genes. Genes with a statistical significance of ±two-fold regulation (FR) and a P-value < 0.05 were considered as significantly regulated. Significantly more TLR4(+) cells were present in the inflammatory infiltrate in OLP compared with the control tissues (P < 0.05). There was no statistically significant difference in the numbers of TLR2(+) and TLR8(+) cells between the groups. TLR3 was significantly downregulated in OLP (P < 0.01). TLR8 was upregulated in OLP, but the difference between the groups was not statistically significant. The TLR-mediated signalling-associated protein genes MyD88 and TIRAP were significantly downregulated (P < 0.01 and P < 0.05), as were IRAK1 (P < 0.05), MAPK8 (P < 0.01), MAP3K1 (P < 0.05), MAP4K4 (P < 0.05), REL (P < 0.01) and RELA (P < 0.01). Stress proteins HMGB1 and the heat shock protein D1 were significantly downregulated in OLP (P < 0.01). These findings suggest a downregulation of TLR-mediated signalling pathways in OLP lesions. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. Quantitative analysis of fetal facial morphology using 3D ultrasound and statistical shape modeling: a feasibility study.

    PubMed

    Dall'Asta, Andrea; Schievano, Silvia; Bruse, Jan L; Paramasivam, Gowrishankar; Kaihura, Christine Tita; Dunaway, David; Lees, Christoph C

    2017-07-01

    The antenatal detection of facial dysmorphism using 3-dimensional ultrasound may raise the suspicion of an underlying genetic condition but infrequently leads to a definitive antenatal diagnosis. Despite advances in array and noninvasive prenatal testing, not all genetic conditions can be ascertained from such testing. The aim of this study was to investigate the feasibility of quantitative assessment of fetal face features using prenatal 3-dimensional ultrasound volumes and statistical shape modeling. STUDY DESIGN: Thirteen normal and 7 abnormal stored 3-dimensional ultrasound fetal face volumes were analyzed, at a median gestation of 29 +4  weeks (25 +0 to 36 +1 ). The 20 3-dimensional surface meshes generated were aligned and served as input for a statistical shape model, which computed the mean 3-dimensional face shape and 3-dimensional shape variations using principal component analysis. Ten shape modes explained more than 90% of the total shape variability in the population. While the first mode accounted for overall size differences, the second highlighted shape feature changes from an overall proportionate toward a more asymmetric face shape with a wide prominent forehead and an undersized, posteriorly positioned chin. Analysis of the Mahalanobis distance in principal component analysis shape space suggested differences between normal and abnormal fetuses (median and interquartile range distance values, 7.31 ± 5.54 for the normal group vs 13.27 ± 9.82 for the abnormal group) (P = .056). This feasibility study demonstrates that objective characterization and quantification of fetal facial morphology is possible from 3-dimensional ultrasound. This technique has the potential to assist in utero diagnosis, particularly of rare conditions in which facial dysmorphology is a feature. Copyright © 2017 Elsevier Inc. All rights reserved.
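
    The modeling pipeline (align meshes, run PCA on vertex coordinates, compute a Mahalanobis distance in the reduced shape space) can be sketched with synthetic data; the mesh size, number of subjects, and mode of variation below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in data: 20 "faces", each a flattened mesh of 50 3-D vertices,
# varying mainly along one invented shape mode plus small per-vertex noise
mean_face = rng.normal(size=150)
mode1 = rng.normal(size=150)
mode1 /= np.linalg.norm(mode1)
shapes = (mean_face
          + np.outer(rng.normal(0, 3.0, 20), mode1)
          + 0.1 * rng.normal(size=(20, 150)))

# Statistical shape model via PCA: centre, then take the singular vectors
X = shapes - shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)        # fraction of shape variability per mode

# Project each shape onto the leading modes and compute the Mahalanobis
# distance in the reduced shape space
k = 5
scores = X @ Vt[:k].T
sigma = s[:k] / np.sqrt(len(shapes) - 1)
mahal = np.sqrt(np.sum((scores / sigma) ** 2, axis=1))
```

    Each subject is summarized by a few mode scores, and the Mahalanobis distance weights each score by the variance of its mode, which is what lets a single scalar flag faces far from the normal population.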

  13. Development and Operation of the Microshutter Array System

    NASA Technical Reports Server (NTRS)

    Jhabvala, M. D.; Franz, D.; King, T.; Kletetschka, G.; Kutyrev, A. S.; Li, M. J.

    2008-01-01

    The microshutter array (MSA) is a key component in the James Webb Space Telescope Near Infrared Spectrometer (NIRSpec) instrument. The James Webb Space Telescope is the next generation of a space-borne astronomy platform that is scheduled to be launched in 2013. However, the need to operate the array effectively while meeting the severe operational requirements of a space flight mission has placed enormous constraints on the microshutter array subsystem. This paper will present an overview and description of the entire microshutter subsystem including the microshutter array, the hybridized array assembly, the integrated CMOS electronics, the mechanical mounting module, and the test methodology and performance of the fully assembled microshutter subsystem. The NIRSpec is a European Space Agency (ESA) instrument requiring four fully assembled microshutter arrays, or quads, which are independently addressed to allow for the imaging of selected celestial objects onto the two 4-megapixel IR detectors. Each microshutter array must have no more than approximately 8 shutters failed in the open mode (depending on how many are failed closed) out of the 62,415 (365x171) total number of shutters per array. The driving science requirement is to be able to select up to 100 objects at a time to be spectrally imaged at the focal plane. The spectrum is dispersed in the direction of the 171 shutters, so if there is an unwanted open shutter in that row the light from an object passing through that failed-open shutter will corrupt the spectrum from the intended object.

  14. Trans-dimensional inversion of microtremor array dispersion data with hierarchical autoregressive error models

    NASA Astrophysics Data System (ADS)

    Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.

    2012-02-01

    This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. 
This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensionalities. The order of the autoregressive process required to fit the data is determined here by posterior residual-sample examination and statistical tests. Inference for earth model parameters is carried out on the trans-dimensional posterior probability distribution by considering ensembles of parameter vectors. In particular, vs uncertainty estimates are obtained by marginalizing the trans-dimensional posterior distribution in terms of vs-profile marginal distributions. The methodology is applied to microtremor array dispersion data collected at two sites with significantly different geology in British Columbia, Canada. At both sites, results show excellent agreement with estimates from invasive measurements.
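
    The autoregressive error model can be made concrete with an AR(1) example. The order, coefficient, and grid search below are invented for illustration (in the paper the AR parameters are treated hierarchically within the sampler, not grid-searched): serially correlated residuals are whitened by the AR recursion before a Gaussian likelihood is evaluated, so no data-error covariance matrix is ever inverted.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic serially correlated data errors: AR(1) with coefficient a_true
a_true, s_true = 0.7, 0.05
n = 2000
e = np.zeros(n)
for i in range(1, n):
    e[i] = a_true * e[i - 1] + s_true * rng.standard_normal()

def ar1_loglik(residuals, a, s):
    # Whiten by the AR(1) recursion, then score the innovations under a
    # Gaussian model (constant terms dropped)
    innov = residuals[1:] - a * residuals[:-1]
    return -0.5 * np.sum(innov**2) / s**2 - len(innov) * np.log(s)

# Grid search over the AR coefficient, standing in for the sampler
grid = np.linspace(0, 0.95, 20)
best_a = grid[np.argmax([ar1_loglik(e, a, s_true) for a in grid])]
```

    Whitening by r[i] − a·r[i−1] turns the correlated residuals into independent innovations in O(n), which is the practical advantage over inverting or taking the determinant of a full covariance matrix.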

  15. Identifying On-Orbit Test Targets for Space Fence Operational Testing

    NASA Astrophysics Data System (ADS)

    Pechkis, D.; Pacheco, N.; Botting, T.

    2014-09-01

    Space Fence will be an integrated system of two ground-based, S-band (2 to 4 GHz) phased-array radars located in Kwajalein and perhaps Western Australia [1]. Space Fence will cooperate with other Space Surveillance Network sensors to provide space object tracking and radar characterization data to support U.S. Strategic Command space object catalog maintenance and other space situational awareness needs. We present a rigorous statistical test design intended to test Space Fence to the letter of the program requirements as well as to characterize the system performance across the entire operational envelope. The design uses altitude, size, and inclination as independent factors in statistical tests of dependent variables (e.g., observation accuracy) linked to requirements. The analysis derives the type and number of necessary test targets. Comparing the resulting sample sizes with the number of currently known targets, we identify those areas where modelling and simulation methods are needed. Assuming hypothetical Kwajalein radar coverage and a conservative number of radar passes per object per day, we conclude that tests involving real-world space objects should take no more than 25 days to evaluate all operational requirements; almost 60 percent of the requirements can be tested in a single day and nearly 90 percent can be tested in one week or less. Reference: [1] L. Haines and P. Phu, Space Fence PDR Concept Development Phase, 2011 AMOS Conference Technical Papers.

  16. TH-CD-207B-12: Quantification of Clinical Feedback On Image Quality Differences Between Two CT Scanner Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bache, S; Liu, X; Loyer, E

    Purpose: This work sought to quantify a radiology team’s assessment of image quality differences between two CT scanner models currently in clinical use, with emphasis on noise and low-contrast detectability (LCD). Methods: A water phantom and a Kagaku anthropomorphic body phantom were scanned on GE Discovery CT750 HD and LightSpeed VCT scanners (4 each) with identical scan parameters and reconstructed to 2.5mm/5.0mm thicknesses. Images of the water phantom were analyzed at the scanner console with a built-in LCD tool that uses statistical methods to compute requisite CT-number contrast for 95% confidence in detection of a user-defined object size. LCD value was computed for 5mm, 3mm, and 1mm objects. Analysis of standard deviation and LCD values were performed on Kagaku phantom images within liver, stomach, and spleen. LCD value was computed for 4mm, 3mm, and 1mm objects using a benchmarked MATLAB implementation of the GE scanner-console tool. Results: Water LCD values were larger (poorer performance) for all HD scanners compared to VCT scanners. Mean scanner model difference in requisite CT-number contrast for 5mm, 3mm, and 1mm objects for 5.0mm/2.5mm images was 3.0%/3.4% (p=0.02/p=0.10), 5.3%/5.7% (0.00002/0.02), and 8.5%/8.2% (0.0004/0.002), respectively. Mean standard deviations within Kagaku phantom ROIs were greater in HD compared to VCT images, with mean differences for the liver, stomach, and spleen for 5.0mm/2.5mm of 16%/12% (p=0.04/0.10), 8%/12% (0.15/0.11), and 16%/15% (0.05/0.11), respectively. Mean LCD value difference between HD and VCT scanners over all ROIs for 4mm, 3mm, and 1mm objects and 5.0mm/2.5mm was 34%/9%, 16%/8%, and 18%/10%, respectively. HD scanners outperformed VCT scanners only for the 4mm stomach object. Conclusion: Using both water and anthropomorphic phantoms, it was shown that HD scanners are outperformed by VCT scanners with respect to noise and LCD in a consistent and in most cases statistically significant manner.
The relationship between statistical and clinical significance demands further work.

  17. A user-friendly workflow for analysis of Illumina gene expression bead array data available at the arrayanalysis.org portal.

    PubMed

    Eijssen, Lars M T; Goelela, Varshna S; Kelder, Thomas; Adriaens, Michiel E; Evelo, Chris T; Radonjic, Marijana

    2015-06-30

    Illumina whole-genome expression bead arrays are a widely used platform for transcriptomics. Most of the tools available for the analysis of the resulting data are not easily applicable by less experienced users. ArrayAnalysis.org provides researchers with an easy-to-use and comprehensive interface to the functionality of R and Bioconductor packages for microarray data analysis. As a modular open source project, it allows developers to contribute modules that provide support for additional types of data or extend workflows. To enable data analysis of Illumina bead arrays for a broad user community, we have developed a module for ArrayAnalysis.org that provides a free and user-friendly web interface for quality control and pre-processing for these arrays. This module can be used together with existing modules for statistical and pathway analysis to provide a full workflow for Illumina gene expression data analysis. The module accepts data exported from Illumina's GenomeStudio, and provides the user with quality control plots and normalized data. The outputs are directly linked to the existing statistics module of ArrayAnalysis.org, but can also be downloaded for further downstream analysis in third-party tools. The Illumina bead arrays analysis module is available at http://www.arrayanalysis.org . A user guide, a tutorial demonstrating the analysis of an example dataset, and R scripts are available. The module can be used as a starting point for statistical evaluation and pathway analysis provided on the website or to generate processed input data for a broad range of applications in life sciences research.

  18. Three-Dimensional Medical Image Analysis Using Local Dynamic Algorithm Selection on a Multiple-Instruction, Multiple-Data Architecture

    DTIC Science & Technology

    1989-01-01

    is represented by a number, called a Hounsfield Unit (HU), which represents the attenuation within the volume relative to the attenuation of the same […] volume of water. Hounsfield Unit values range from -1000 to +3000, with a value of zero assigned to the attenuation of water. A HU value of -1000 […] represented by a 3D array. Each array element represents a single voxel, and the value of the array entry is the corresponding scaled Hounsfield Unit value

  19. Real-time range generation for ladar hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Olson, Eric M.; Coker, Charles F.

    1996-05-01

    Real-time closed-loop simulation of LADAR seekers in a hardware-in-the-loop facility can reduce program risk and cost. This paper discusses an implementation of real-time range imagery generated in a synthetic environment at the Kinetic Kill Vehicle Hardware-in-the-Loop facility at Eglin AFB, for the stimulation of LADAR seekers and algorithms. The computer hardware platform used was a Silicon Graphics Incorporated Onyx Reality Engine. This computer contains graphics hardware, and is optimized for generating visible or infrared imagery in real-time. A by-product of the rendering process, in the form of a depth buffer, is generated from all objects in view. The depth buffer is an array of integer values that contributes to the proper rendering of overlapping objects and can be converted to range values using a mathematical formula. This paper presents an optimized software approach to the generation of the scenes, calculation of the range values, and outputting the range data for a LADAR seeker.
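
    For a standard perspective projection, the depth-buffer-to-range conversion mentioned above has a simple closed form. A sketch assuming OpenGL-style normalized depth in [0, 1] and illustrative clip planes (the facility's exact mapping may differ):

```python
import numpy as np

near, far = 1.0, 1000.0       # clip planes (metres), illustrative values

def depth_to_range(z):
    # Invert the standard perspective depth-buffer mapping (OpenGL-style,
    # z normalized to [0, 1]) back to eye-space range
    z = np.asarray(z, dtype=float)
    return near * far / (far - z * (far - near))

def range_to_depth(r):
    # Forward mapping, used here only to round-trip a known range
    return (far / (far - near)) * (1.0 - near / r)

r = 250.0
z = range_to_depth(r)
recovered = depth_to_range(z)
```

    The mapping is nonlinear, with depth precision concentrated near the near plane, which is why the conversion must be applied per pixel before the range image is handed to the seeker.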

  20. Dual-mode photosensitive arrays based on the integration of liquid crystal microlenses and CMOS sensors for obtaining the intensity images and wavefronts of objects.

    PubMed

    Tong, Qing; Lei, Yu; Xin, Zhaowei; Zhang, Xinyu; Sang, Hongshi; Xie, Changsheng

    2016-02-08

    In this paper, we present a kind of dual-mode photosensitive array (DMPA) constructed by hybrid integration of an electrically driven liquid crystal microlens array (LCMLA) and a CMOS sensor array, which can be used to measure both the conventional intensity images and the corresponding wavefronts of objects. We utilize liquid crystal materials to shape the microlens array with an electrically tunable focal length. By switching the voltage signal on and off, the wavefronts and the intensity images can be acquired through the DMPA sequentially. We use white light to obtain the object's wavefronts to avoid losing important wavefront information. We separate the white-light wavefronts with a large number of spectral components and then experimentally compare them with single-spectrum wavefronts of typical red, green and blue lasers, respectively. Then we mix the red, green and blue wavefronts into a composite wavefront containing more optical information about the object.

  1. Efficiencies for the statistics of size discrimination.

    PubMed

    Solomon, Joshua A; Morgan, Michael; Chubb, Charles

    2011-10-19

    Different laboratories have achieved a consensus regarding how well human observers can estimate the average orientation in a set of N objects. Such estimates are not only limited by visual noise, which perturbs the visual signal of each object's orientation, they are also inefficient: Observers effectively use only √N objects in their estimates (e.g., S. C. Dakin, 2001; J. A. Solomon, 2010). More controversial is the efficiency with which observers can estimate the average size in an array of circles (e.g., D. Ariely, 2001, 2008; S. C. Chong, S. J. Joo, T.-A. Emmanouil, & A. Treisman, 2008; K. Myczek & D. J. Simons, 2008). Of course, there are some important differences between orientation and size; nonetheless, it seemed sensible to compare the two types of estimate against the same ideal observer. Indeed, quantitative evaluation of statistical efficiency requires this sort of comparison (R. A. Fisher, 1925). Our first step was to measure the noise that limits size estimates when only two circles are compared. Our results (Weber fractions between 0.07 and 0.14 were necessary for 84% correct 2AFC performance) are consistent with the visual system adding the same amount of Gaussian noise to all logarithmically transduced circle diameters. We exaggerated this visual noise by randomly varying the diameters in (uncrowded) arrays of 1, 2, 4, and 8 circles and measured its effect on discrimination between mean sizes. Efficiencies inferred from all four observers significantly exceed 25% and, in two cases, approach 100%. More consistent are our measurements of just-noticeable differences in size variance. These latter results suggest between 62 and 75% efficiency for variance discriminations. Although our observers were no more efficient comparing size variances than they were at comparing mean sizes, they were significantly more precise. 
In other words, our results contain evidence for a non-negligible source of late noise that limits mean discriminations but not variance discriminations.
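
    Statistical efficiency here follows Fisher's definition: the squared ratio of the ideal observer's discrimination threshold to the measured human threshold, equivalently the fraction of the N items an equivalent ideal observer would need. A worked sketch with illustrative numbers (not the paper's measured thresholds):

```python
import numpy as np

def efficiency(human_threshold, ideal_threshold):
    # Fisher-style statistical efficiency: squared threshold ratio
    return (ideal_threshold / human_threshold) ** 2

# Illustrative values: per-item noise sigma, N items in the array
sigma, N = 0.10, 8
ideal = sigma / np.sqrt(N)         # ideal observer averages all N items
human = sigma / N ** 0.25          # "effectively uses only sqrt(N) items"

eff = efficiency(human, ideal)     # 1/sqrt(8), about 0.35
effective_samples = eff * N        # sqrt(8), about 2.8 items
```

    An efficiency of 100% means the observer's threshold matches the ideal observer's, i.e., every circle in the array contributes fully to the mean-size estimate.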

  2. Tests for qualitative treatment-by-centre interaction using a 'pushback' procedure.

    PubMed

    Ciminera, J L; Heyse, J F; Nguyen, H H; Tukey, J W

    1993-06-15

    In multicentre clinical trials using a common protocol, the centres are usually regarded as being a fixed factor, thus allowing any treatment-by-centre interaction to be omitted from the error term for the effect of treatment. However, we feel it necessary to use the treatment-by-centre interaction as the error term if there is substantial evidence that the interaction with centres is qualitative instead of quantitative. To make allowance for the estimated uncertainties of the centre means, we propose choosing a reference value (for example, the median of the ordered array of centre means) and converting the individual centre results into standardized deviations from the reference value. The deviations are then reordered, and the results 'pushed back' by amounts appropriate for the corresponding order statistics in a sample from the relevant distribution. The pushed-back standardized deviations are then restored to the original scale. The appearance of opposite signs among the destandardized values for the various centres is then taken as 'substantial evidence' of qualitative interaction. Procedures are presented using, in any combination: (i) Gaussian, or Student's t-distribution; (ii) order-statistic medians or outward 90 per cent points of the corresponding order statistic distributions; (iii) pooling or grouping and pooling the internally estimated standard deviations of the centre means. The use of the least conservative combination--Student's t, outward 90 per cent points, grouping and pooling--is recommended.
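
    A rough sketch of the pushback idea follows, on invented centre means and standard errors, using Blom's approximation for Gaussian order-statistic medians via the Python standard library (the paper offers several variants, so this is one illustrative combination, not the recommended procedure):

```python
import numpy as np
from statistics import NormalDist

# Hypothetical centre treatment effects (treatment minus control) and their SEs
effects = np.array([2.1, 1.4, 0.9, 0.6, -0.3, 1.8])
se = np.array([0.8, 0.7, 0.9, 0.6, 0.7, 0.8])

ref = np.median(effects)                  # reference value
z = (effects - ref) / se                  # standardized deviations
order = np.argsort(z)
z_sorted = z[order]

# Gaussian order-statistic medians (Blom's approximation)
n = len(z)
osm = np.array([NormalDist().inv_cdf((i - 0.375) / (n + 0.25))
                for i in range(1, n + 1)])

# "Push back" each ordered deviation by its order-statistic median; a deviation
# that would cross zero is set to zero
pushed = np.where(np.sign(z_sorted - osm) == np.sign(z_sorted),
                  z_sorted - osm, 0.0)

# Restore the original scale and look for opposite signs
destd = np.empty(n)
destd[order] = pushed * se[order] + ref
qualitative = bool(destd.min() < 0 < destd.max())
```

    With these invented numbers the one nominally negative centre is pushed back across the reference, no opposite signs survive, and the interaction would be judged quantitative rather than qualitative.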

  3. Object Classification Based on Analysis of Spectral Characteristics of Seismic Signal Envelopes

    NASA Astrophysics Data System (ADS)

    Morozov, Yu. V.; Spektor, A. A.

    2017-11-01

    A method for classifying moving objects having a seismic effect on the ground surface is proposed which is based on statistical analysis of the envelopes of received signals. The values of the components of the amplitude spectrum of the envelopes obtained applying Hilbert and Fourier transforms are used as classification criteria. Examples illustrating the statistical properties of spectra and the operation of the seismic classifier are given for an ensemble of objects of four classes (person, group of people, large animal, vehicle). It is shown that the computational procedures for processing seismic signals are quite simple and can therefore be used in real-time systems with modest requirements for computational resources.
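
    The feature extraction described (envelope via the Hilbert transform, then its amplitude spectrum) can be sketched as follows; the 2 Hz "footstep" modulation rate, carrier frequency, and noise level are invented stand-ins for real seismic signatures:

```python
import numpy as np
from scipy.signal import hilbert

fs = 500                                   # geophone sample rate (Hz), illustrative
t = np.arange(0, 4, 1 / fs)                # 4 s record
rng = np.random.default_rng(7)

# Hypothetical seismic signature: a 50 Hz carrier amplitude-modulated at a
# 2 Hz "footstep" rate, plus weak sensor noise
sig = (1 + 0.8 * np.sin(2 * np.pi * 2.0 * t)) * np.sin(2 * np.pi * 50.0 * t)
sig += 0.05 * rng.standard_normal(t.size)

# Envelope via the analytic signal, then the envelope's amplitude spectrum
env = np.abs(hilbert(sig))
spec = np.abs(np.fft.rfft(env - env.mean())) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# The dominant low-frequency envelope component recovers the modulation rate,
# the kind of spectral feature the classifier uses
band = (freqs > 0.5) & (freqs < 10.0)
peak_hz = float(freqs[band][np.argmax(spec[band])])
```

    Different source classes modulate the ground-vibration envelope at characteristically different rates, so a handful of low-frequency envelope-spectrum components gives a compact feature vector cheap enough for real-time classification.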

  4. Using Visual Displays to Communicate Risk of Cancer to Women From Diverse Race/Ethnic Backgrounds

    PubMed Central

    Wong, Sabrina T.; Pérez-Stable, Eliseo J.; Kim, Sue E.; Gregorich, Steven E.; Sawaya, George F.; Walsh, Judith M. E.; Washington, A. Eugene; Kaplan, Celia P.

    2012-01-01

    Objective This study evaluated how well women from diverse race/ethnic groups were able to take a quantitative cancer risk statistic verbally provided to them and report it in a visual format. Methods Cross-sectional survey was administered in English, Spanish or Chinese, to women aged 50 to 80 (n=1,160), recruited from primary care practices. The survey contained breast, colorectal or cervical cancer questions regarding screening and prevention. Women were told cancer-specific lifetime risk then shown a visual display of risk and asked to indicate the specific lifetime risk. Correct indication of risk was the main outcome. Results Correct responses on icon arrays were 46% for breast, 55% for colon, and 44% for cervical; only 25% correctly responded to a magnifying glass graphic. Compared to Whites, African American and Latina women were significantly less likely to use the icon arrays correctly. Higher education and higher numeracy were associated with correct responses. Lower education was associated with lower numeracy. Conclusions Race/Ethnic differences were associated with women’s ability to take a quantitative cancer risk statistic verbally provided to them and report it in a visual format. Practice Implications Systematically considering the complexity of intersecting factors such as race/ethnicity, educational level, poverty, and numeracy in most health communications is needed. PMID:22244322

  5. Breast Retraction Assessment: an objective evaluation of cosmetic results of patients treated conservatively for breast cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pezner, R.D.; Patterson, M.P.; Hill, L.R.

    Breast Retraction Assessment (BRA) is an objective evaluation of the amount of cosmetic retraction of the treated breast in comparison to the untreated breast in patients who receive conservative treatment for breast cancer. A clear acrylic sheet supported vertically and marked as a grid at 1 cm intervals is employed to perform the measurements. Average BRA value in 29 control patients without breast cancer was 1.2 cm. Average BRA value in 27 patients treated conservatively for clinical Stage I or II unilateral breast cancer was 3.7 cm. BRA values in breast cancer patients ranged from 0.0 to 8.5 cm. Patients who received a local radiation boost to the primary tumor bed site had statistically significantly less retraction than those who did not receive a boost. Patients who had an extensive primary tumor resection had statistically significantly more retraction than those who underwent a more limited resection. In comparison to qualitative forms of cosmetic analysis, BRA is an objective test that can quantitatively evaluate factors which may be related to cosmetic retraction in patients treated conservatively for breast cancer.

  6. Microoptical artificial compound eyes: from design to experimental verification of two different concepts

    NASA Astrophysics Data System (ADS)

    Duparré, Jacques; Wippermann, Frank; Dannberg, Peter; Schreiber, Peter; Bräuer, Andreas; Völkel, Reinhard; Scharf, Toralf

    2005-09-01

    Two novel objective types based on artificial compound eyes are examined. Both imaging systems are well suited to fabrication using microoptics technology due to the small required lens sags. The apposition optics uses a microlens array (MLA) and a photodetector array of different pitch in its focal plane; image reconstruction is based on moiré magnification. Several generations of demonstrators of this objective type were manufactured by photolithographic processes, including a system with opaque walls between adjacent channels and an objective applied directly onto a CMOS detector array. The cluster eye approach, based on a mixture of superposition compound eyes and the vision system of jumping spiders, produces a regular image. Here, three microlens arrays of different pitch, including a field lens, form arrays of Keplerian microtelescopes with tilted optical axes. The microlens arrays of this demonstrator are also fabricated using microoptics technology, and aperture arrays are applied; the lens arrays are then stacked into the overall microoptical system on wafer scale. Both fabricated types of artificial compound eye imaging systems are experimentally characterized with respect to resolution, sensitivity, and crosstalk between adjacent channels. Captured images are presented.

  7. Real-Time Atmospheric Phase Fluctuation Correction Using a Phased Array of Widely Separated Antennas: X-Band Results and Ka-Band Progress

    NASA Astrophysics Data System (ADS)

    Geldzahler, B.; Birr, R.; Brown, R.; Grant, K.; Hoblitzell, R.; Miller, M.; Woods, G.; Argueta, A.; Ciminera, M.; Cornish, T.; D'Addario, L.; Davarian, F.; Kocz, J.; Lee, D.; Morabito, D.; Tsao, P.; Jakeman-Flores, H.; Ott, M.; Soloff, J.; Denn, G.; Church, K.; Deffenbaugh, P.

    2016-09-01

    NASA is pursuing a demonstration of coherent uplink arraying at 7.145-7.190 GHz (X-band) and 30-31 GHz (Ka-band) using three 12m diameter COTS antennas separated by 60m at the Kennedy Space Center in Florida. In addition, we have used up to three 34m antennas separated by 250m at the Goldstone Deep Space Communication Complex in California at X-band (7.1 GHz), incorporating real-time correction for tropospheric phase fluctuations. Such a demonstration can enable NASA to design and establish a high-power, high-resolution, 24/7-availability radar system for (a) tracking and characterizing observations of Near Earth Objects (NEOs), (b) tracking, characterizing, and determining the statistics of small-scale (≤10 cm) orbital debris, (c) incorporating the capability into its space communication and navigation tracking stations for emergency spacecraft commanding in the Ka-band era which NASA is entering, and (d) fielding capabilities of interest to other US government agencies. We present herein the results of our phased-array uplink combining demonstrations near 7.17 and 8.3 GHz using widely separated antennas at both locales, the results of a study to upgrade from a communication system to a radar system, and our vision for going forward in implementing a high-performance, low-lifecycle-cost multi-element radar array.

  8. A clinical study of electrophysiological correlates of behavioural comfort levels in cochlear implantees.

    PubMed

    Raghunandhan, S; Ravikumar, A; Kameswaran, Mohan; Mandke, Kalyani; Ranjith, R

    2014-05-01

    Indications for cochlear implantation have expanded today to include very young children and those with syndromes/multiple handicaps. Programming the implant based on behavioural responses may be tedious for audiologists in such cases, wherein creating an effective and appropriate MAP (Measurable Auditory Percept) becomes the key issue in the habilitation program. In 'Difficult to MAP' scenarios, objective measures become paramount for predicting the optimal current levels to be set in the MAP. We aimed to (a) study the trends in multi-modal electrophysiological tests and behavioural responses sequentially over the first year of implant use; (b) generate normative data from the above; (c) correlate the multi-modal electrophysiological threshold levels with behavioural comfort levels; and (d) create predictive formulae for deriving optimal comfort levels (if unknown), using linear and multiple regression analysis. This prospective study included 10 profoundly hearing-impaired children aged between 2 and 7 years with normal inner ear anatomy and no additional handicaps. They received the Advanced Bionics HiRes 90K implant with Harmony speech processor and used the HiRes-P with Fidelity 120 strategy. They underwent impedance telemetry, neural response imaging, electrically evoked stapedial response telemetry (ESRT), and electrically evoked auditory brainstem response (EABR) tests at 1, 4, 8, and 12 months of implant use, in conjunction with behavioural mapping. Trends in electrophysiological and behavioural responses were analyzed using the paired t-test. By Karl Pearson's correlation method, electrode-wise correlations were derived for neural response imaging (NRI) thresholds versus most comfortable levels (M-levels), and offset-based (apical, mid-array, and basal-array) correlations for EABR and ESRT thresholds versus M-levels were calculated over time. These were used to derive predictive formulae by linear and multiple regression analysis. 
Such statistically predicted M-levels were compared with the behaviourally recorded M-levels in the cohort, using Cronbach's alpha reliability test to confirm the efficacy of this method. NRI, ESRT, and EABR thresholds showed statistically significant positive correlations with behavioural M-levels, which improved with implant use over time. These correlations were used to derive predicted M-levels by regression analysis. On average, predicted M-levels were found to be statistically reliable and a fair match to the actual behavioural M-levels. When applied in clinical practice, the predicted values were found to be useful for programming members of the study group. However, individuals showed considerable deviations in behavioural M-levels, above and below the electrophysiologically predicted values, due to various factors. While the current method appears helpful as a reference for predicting initial maps in 'Difficult to MAP' subjects, behavioural measures remain mandatory to further optimize the maps for these individuals. The study explores the trends, correlations, and individual variabilities that occur between electrophysiological tests and behavioural responses, recorded over time in a cohort of cochlear implantees. The statistical method shown may be used as a guideline to predict optimal behavioural levels in difficult situations for future implantees, bearing in mind that optimal M-levels for individuals can vary from the predicted values. In 'Difficult to MAP' scenarios, a protocol of sequential behavioural programming in conjunction with electrophysiological correlates will provide the best outcomes.
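The linear-regression step this record describes, predicting comfort (M) levels from electrophysiological thresholds, can be illustrated with a minimal sketch. The function name and the synthetic data below are hypothetical; the study's actual coefficients and its multiple-regression models are not reproduced here:

```python
import numpy as np

def fit_mlevel_predictor(nri_thresholds, m_levels):
    """Least-squares line M = a*NRI + b, the shape of the paper's
    single-variable regression step (illustrative coefficients only)."""
    a, b = np.polyfit(nri_thresholds, m_levels, 1)   # slope, intercept
    return lambda nri: a * np.asarray(nri) + b
```

In practice one such predictor would be fitted per electrode region (apical, mid-array, basal), and its output would serve only as a starting map to be refined behaviourally, as the record stresses.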

  9. BEAT: Bioinformatics Exon Array Tool to store, analyze and visualize Affymetrix GeneChip Human Exon Array data from disease experiments

    PubMed Central

    2012-01-01

    Background It is known from recent studies that more than 90% of human multi-exon genes are subject to Alternative Splicing (AS), a key molecular mechanism by which multiple transcripts may be generated from a single gene. It is widely recognized that a breakdown in AS mechanisms plays an important role in cellular differentiation and pathologies. Polymerase chain reactions, microarrays, and sequencing technologies have been applied to the study of transcript diversity arising from alternative expression. The latest generation of Affymetrix GeneChip Human Exon 1.0 ST Arrays offers a more detailed view of the gene expression profile, providing information on AS patterns. The exon array technology, with more than five million data points, can detect approximately one million exons, and it allows analyses at both the gene and exon levels. In this paper we describe BEAT, an integrated user-friendly bioinformatics framework to store, analyze and visualize exon array datasets. It combines a data warehouse approach with rigorous statistical methods for assessing the AS of genes involved in diseases. Meta statistics are proposed as a novel approach to explore the analysis results. BEAT is available at http://beat.ba.itb.cnr.it. Results BEAT is a web tool which allows uploading and analyzing exon array datasets using standard statistical methods and an easy-to-use graphical web front-end. BEAT has been tested on a dataset with 173 samples and tuned using new datasets of exon array experiments from 28 colorectal cancer and 26 renal cell cancer samples produced at the Medical Genetics Unit of IRCCS Casa Sollievo della Sofferenza. To highlight all possible AS events, alternative names, accession IDs, Gene Ontology terms, and biochemical pathway annotations are integrated with exon- and gene-level expression plots. 
The user can customize the results choosing custom thresholds for the statistical parameters and exploiting the available clinical data of the samples for a multivariate AS analysis. Conclusions Despite exon array chips being widely used for transcriptomics studies, there is a lack of analysis tools offering advanced statistical features and requiring no programming knowledge. BEAT provides a user-friendly platform for a comprehensive study of AS events in human diseases, displaying the analysis results with easily interpretable and interactive tables and graphics. PMID:22536968

  10. An Expanded Very Large Array and CARMA Study of Dusty Disks and Torii with Large Grains in Dying Stars

    NASA Astrophysics Data System (ADS)

    Sahai, R.; Claussen, M. J.; Schnee, S.; Morris, M. R.; Sánchez Contreras, C.

    2011-09-01

    We report the results of a pilot multiwavelength survey in the radio continuum (X, Ka, and Q bands, i.e., from 3.6 cm to 7 mm) carried out with the Expanded Very Large Array (EVLA) in order to confirm the presence of very large dust grains in dusty disks and tori around the central stars in a small sample of post-asymptotic giant branch (pAGB) objects, as inferred from millimeter (mm) and submillimeter (submm) observations. Supporting mm-wave observations were also obtained with the Combined Array for Research in Millimeter-wave Astronomy toward three of our sources. Our EVLA survey has resulted in a robust detection of our most prominent submm emission source, the pre-planetary nebula (PPN) IRAS 22036+5306, in all three bands, and the disk-prominent pAGB object, RV Tau, in one band. The observed fluxes are consistent with optically thin free-free emission, and since they are insignificant compared to their submm/mm fluxes, we conclude that the latter must come from substantial masses of cool, large (mm-sized) grains. We find that the power-law emissivity in the cm-to-submm range for the large grains in IRAS 22036 follows ν^β, with β = 1–1.3. Furthermore, the value of β in the 3–0.85 mm range for the three disk-prominent pAGB sources (β ≤ 0.4) is significantly lower than that of IRAS 22036, suggesting that the grains in pAGB objects with circumbinary disks are likely larger than those in the dusty waists of pre-planetary nebulae.
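A power-law index β like the one quoted above can be estimated from flux measurements at two frequencies with a simple two-point formula. This is a generic sketch, not the authors' fitting procedure, and the function name is an assumption:

```python
import numpy as np

def power_law_index(nu1, s1, nu2, s2):
    """Two-point estimate of beta for S ∝ nu**beta,
    from fluxes (s1, s2) measured at frequencies (nu1, nu2)."""
    return np.log(s1 / s2) / np.log(nu1 / nu2)
```

With multiple bands, as in the survey described here, one would instead fit the index by least squares in log-log space, which averages down the measurement noise.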

  11. Liquid Crystal Based Optical Phased Array for Steering Lasers

    DTIC Science & Technology

    2009-10-01

    To write a lens phase profile into the liquid crystal cell, the first step is to characterize the LC cell's OPD curve with respect to the ramped voltage by a simple one-pixel measurement; from the corresponding voltage value on the OPD vs. voltage curve, the entry voltage profile of a positive or negative micro-lens can thereby be derived. Fig. 2.6 shows the optical path delay (OPD) profile of an ideal positive (blue curve) and negative (green curve) objective lens with 552 μm radius.

  12. Real-time, continuous-wave terahertz imaging using a microbolometer focal-plane array

    NASA Technical Reports Server (NTRS)

    Hu, Qing (Inventor); Min Lee, Alan W. (Inventor)

    2010-01-01

    The present invention generally provides a terahertz (THz) imaging system that includes a source for generating radiation (e.g., a quantum cascade laser) having one or more frequencies in a range of about 0.1 THz to about 10 THz, and a two-dimensional detector array comprising a plurality of radiation detecting elements that are capable of detecting radiation in that frequency range. An optical system directs radiation from the source to an object to be imaged. The detector array detects at least a portion of the radiation transmitted through the object (or reflected by the object) so as to form a THz image of that object.

  13. Durability of Hearing Preservation after Cochlear Implantation with Conventional-Length Electrodes and Scala Tympani Insertion

    PubMed Central

    Sweeney, Alex D.; Hunter, Jacob B.; Carlson, Matthew L.; Rivas, Alejandro; Bennett, Marc L.; Gifford, Rene H.; Noble, Jack H.; Haynes, David S.; Labadie, Robert F.; Wanna, George B.

    2016-01-01

    Objectives To analyze factors that influence hearing preservation over time in cochlear implant recipients with conventional-length electrode arrays located entirely within the scala tympani. Study Design Case series with planned chart review. Setting Single tertiary academic referral center. Subjects and Methods A retrospective review was performed to analyze a subgroup of cochlear implant recipients with residual acoustic hearing. Patients were included in the study only if their electrode arrays remained fully in the scala tympani after insertion and serviceable acoustic hearing (≤80 dB at 250 Hz) was preserved. Electrode array location was verified through a validated radiographic assessment tool. Patients with <6 months of audiologic follow-up were excluded. The main outcome measure was change in acoustic hearing thresholds from implant activation to the last available follow-up. Results A total of 16 cases met inclusion criteria (median age, 70.6 years; range, 29.4–82.2; 50% female). The average follow-up was 18.0 months (median, 16.1; range, 6.2–36.4). Patients with a lateral wall electrode array were more likely to have stable acoustic thresholds over time (P < .05). Positive correlations were seen between continued hearing loss following activation and larger initial postoperative acoustic threshold shifts, though statistical significance was not achieved. Age, sex, and noise exposure had no significant influence on continued hearing preservation over time. Conclusions To control for hearing loss associated with inter-scalar excursion during cochlear implantation, the present study evaluated patients only with conventional electrode arrays located entirely within the scala tympani. In this group, the style of electrode array may influence residual hearing preservation over time. PMID:26908553

  14. Kalman filter-based tracking of moving objects using linear ultrasonic sensor array for road vehicles

    NASA Astrophysics Data System (ADS)

    Li, Shengbo Eben; Li, Guofa; Yu, Jiaying; Liu, Chang; Cheng, Bo; Wang, Jianqiang; Li, Keqiang

    2018-01-01

    Detection and tracking of objects in the side near field has attracted much attention for the development of advanced driver assistance systems. This paper presents a cost-effective approach to track moving objects around vehicles using linearly arrayed ultrasonic sensors. To understand the detection characteristics of a single sensor, an empirical detection model was developed considering the shapes and surface materials of various detected objects. Eight sensors were arrayed linearly to expand the detection range for further application in traffic environment recognition. Two tracking algorithms for the sensor array, an extended Kalman filter (EKF) and an unscented Kalman filter (UKF), were designed for dynamic object tracking. The ultrasonic sensor array was designed to support two firing sequences: mutual firing or serial firing. The effectiveness of the designed algorithms was verified in two typical driving scenarios: passing intersections with traffic sign poles or street lights, and overtaking another vehicle. Experimental results showed that both the EKF and UKF yielded more precise tracked positions, with smaller RMSE (root mean square error), than a traditional triangulation-based positioning method. This effectiveness also encourages the application of cost-effective ultrasonic sensors for near-field environment perception in autonomous driving systems.
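A minimal linear Kalman filter over 1-D position measurements gives the flavor of the tracking step. The paper's EKF/UKF operate on a nonlinear sensor-array measurement model; this sketch, its constant-velocity model, and its noise settings are illustrative assumptions only:

```python
import numpy as np

def kalman_track(measurements, dt=0.1, q=1e-2, r=0.25):
    """Constant-velocity Kalman filter over noisy 1-D position measurements.
    State is [position, velocity]; only position is observed."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # observation model
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([measurements[0], 0.0])    # initial state
    P = np.eye(2)                           # initial state covariance
    estimates = []
    for z in measurements:
        x = F @ x                           # predict state
        P = F @ P @ F.T + Q                 # predict covariance
        y = z - H @ x                       # innovation
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y                       # update state
        P = (np.eye(2) - K @ H) @ P         # update covariance
        estimates.append(x[0])
    return np.array(estimates)
```

The EKF and UKF replace the linear H above with a nonlinear range-measurement function (linearized by Jacobians or sigma points, respectively), which is what the arrayed ultrasonic geometry requires.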

  15. Variability-aware compact modeling and statistical circuit validation on SRAM test array

    NASA Astrophysics Data System (ADS)

    Qiao, Ying; Spanos, Costas J.

    2016-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose a variability-aware compact model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from a bit-transistor-accessible SRAM test array fabricated using a collaborating foundry's 28 nm FDSOI technology. Our in-house customized Monte Carlo simulation bench can incorporate these statistical compact models, and the simulated SRAM writability performance closely matches the measured distributions. Our proposed statistical compact model parameter extraction methodology also has the potential to predict non-Gaussian behavior in statistical circuit performance through mixtures of Gaussian distributions.
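Drawing Monte Carlo parameter samples from a mixture of Gaussians, the form this record proposes for non-Gaussian compact-model parameters, can be sketched as follows. The function name and the parameter values are hypothetical, not taken from the paper:

```python
import numpy as np

def sample_gaussian_mixture(weights, means, sigmas, n, rng=None):
    """Draw n samples from a 1-D Gaussian mixture: first pick a component
    per sample according to `weights`, then draw from that component."""
    rng = np.random.default_rng(rng)
    comp = rng.choice(len(weights), size=n, p=weights)   # component index per sample
    return rng.normal(np.asarray(means)[comp], np.asarray(sigmas)[comp])
```

In a variability-aware simulation bench, each sampled vector of compact-model parameters would parameterize one transistor instance in a Monte Carlo circuit run.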

  16. Arrays of flow channels with heat transfer embedded in conducting walls

    DOE PAGES

    Bejan, A.; Almerbati, A.; Lorente, S.; ...

    2016-04-20

    Here we illustrate the free search for the optimal geometry of flow channel cross-sections that meet two objectives simultaneously: reduced resistances to heat transfer and fluid flow. The element cross section and the wall material are fixed, while the shape of the fluid flow opening, or the wetted perimeter, is free to vary. Two element cross sections are considered, square and equilateral triangular. We find that the two objectives are best met when the solid wall thickness is uniform, i.e., when the wetted perimeters are square and triangular, respectively. In addition, we consider arrays of square elements and triangular elements, on the basis of equal mass flow rate per unit of array cross sectional area. The conclusion is that the array of triangular elements meets the two objectives better than the array of square elements.

  17. Characterization and Optimization Design of the Polymer-Based Capacitive Micro-Arrayed Ultrasonic Transducer

    NASA Astrophysics Data System (ADS)

    Chiou, De-Yi; Chen, Mu-Yueh; Chang, Ming-Wei; Deng, Hsu-Cheng

    2007-11-01

    This study constructs an electromechanical finite element model of the polymer-based capacitive micro-arrayed ultrasonic transducer (P-CMUT). Electrostatic-structural coupled-field simulations are performed to investigate operational characteristics such as collapse voltage and resonant frequency. The numerical results are found to be in good agreement with experimental observations. The influence of each design parameter on the collapse voltage and resonant frequency is also examined. To resolve conflicting requirements across the different physical domains, an integrated design method is developed to optimize the geometric parameters of the P-CMUT. The optimization search routine, conducted using a genetic algorithm (GA), is coupled with the commercial FEM software ANSYS to obtain the best design variables under multiple objective functions. The results show that the optimal parameter values satisfy the conflicting objectives, namely minimizing the collapse voltage while maintaining a customized frequency. Overall, the present results indicate that the combined FEM/GA optimization scheme provides an efficient and versatile approach to the optimization design of the P-CMUT.

  18. A light field microscope imaging spectrometer based on the microlens array

    NASA Astrophysics Data System (ADS)

    Yao, Yu-jia; Xu, Feng; Xia, Yin-xiang

    2017-10-01

    A new light field microscope imaging spectrometry system, composed of a microscope objective, a microlens array, and a spectrometry system, is designed in this paper. 5-D information (4-D light field and 1-D spectrum) of the sample can be captured by the snapshot system in a single exposure, avoiding the motion blur and aberration caused by the scanning process of traditional imaging spectrometry. The microscope objective serves as the front group, while the microlens array serves as the rear group. The optical design of the system was simulated in Zemax; the parameter matching condition between the microscope objective and the microlens array is discussed in detail during the simulation process. The result simulated in the image plane is analyzed and discussed.

  19. Radiofrequency energy deposition and radiofrequency power requirements in parallel transmission with increasing distance from the coil to the sample.

    PubMed

    Deniz, Cem M; Vaidya, Manushka V; Sodickson, Daniel K; Lattanzi, Riccardo

    2016-01-01

    We investigated global specific absorption rate (SAR) and radiofrequency (RF) power requirements in parallel transmission as the distance between the transmit coils and the sample was increased. We calculated ultimate intrinsic SAR (UISAR), which depends on object geometry and electrical properties but not on coil design, and we used it as the reference to compare the performance of various transmit arrays. We investigated the case of fixing coil size and increasing the number of coils while moving the array away from the sample, as well as the case of fixing coil number and scaling coil dimensions. We also investigated RF power requirements as a function of lift-off, and tracked local SAR distributions associated with global SAR optima. In all cases, the target excitation profile was achieved and global SAR (as well as associated maximum local SAR) decreased with lift-off, approaching UISAR, which was constant for all lift-offs. We observed a lift-off value that optimizes the balance between global SAR and power losses in coil conductors. We showed that, using parallel transmission, global SAR can decrease at ultra high fields for finite arrays with a sufficient number of transmit elements. For parallel transmission, the distance between coils and object can be optimized to reduce SAR and minimize RF power requirements associated with homogeneous excitation. © 2015 Wiley Periodicals, Inc.

  20. Statistics of Visual Responses to Image Object Stimuli from Primate AIT Neurons to DNN Neurons.

    PubMed

    Dong, Qiulei; Wang, Hong; Hu, Zhanyi

    2018-02-01

    Under the goal-driven paradigm, Yamins et al. (2014; Yamins & DiCarlo, 2016) have shown that by optimizing only the final eight-way categorization performance of a four-layer hierarchical network, not only can its top output layer quantitatively predict IT neuron responses but its penultimate layer can also automatically predict V4 neuron responses. Currently, deep neural networks (DNNs) in the field of computer vision have reached image object categorization performance comparable to that of human beings on ImageNet, a data set that contains 1.3 million training images of 1000 categories. We explore whether the DNN neurons (units in DNNs) possess image object representational statistics similar to monkey IT neurons, particularly when the network becomes deeper and the number of image categories becomes larger, using VGG19, a typical and widely used deep network of 19 layers in the computer vision field. Following Lehky, Kiani, Esteky, and Tanaka (2011, 2014), where the response statistics of 674 IT neurons to 806 image stimuli are analyzed using three measures (kurtosis, Pareto tail index, and intrinsic dimensionality), we investigate the three issues in this letter using the same three measures: (1) the similarities and differences of the neural response statistics between VGG19 and primate IT cortex, (2) the variation trends of the response statistics of VGG19 neurons at different layers from low to high, and (3) the variation trends of the response statistics of VGG19 neurons when the numbers of stimuli and neurons increase. 
We find that the response statistics on both single-neuron selectivity and population sparseness of VGG19 neurons are fundamentally different from those of IT neurons in most cases; by increasing the number of neurons in different layers and the number of stimuli, the response statistics of neurons at different layers from low to high do not substantially change; and the estimated intrinsic dimensionality values at the low convolutional layers of VGG19 are considerably larger than the value of approximately 100 reported for IT neurons in Lehky et al. (2014), whereas those at the high fully connected layers are close to or lower than 100. To the best of our knowledge, this work is the first attempt to analyze the response statistics of DNN neurons with respect to primate IT neurons in image object representation.
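Excess kurtosis, one of the three measures named in this record, can be computed over a neurons-by-stimuli response matrix as a simplified sketch. This is one common reading of the selectivity/sparseness measures, not the authors' code, and the function name is an assumption:

```python
import numpy as np

def response_kurtosis(responses, axis):
    """Excess kurtosis of a (neurons x stimuli) response matrix.
    axis=1: per-neuron kurtosis across stimuli (single-neuron selectivity);
    axis=0: per-stimulus kurtosis across neurons (population sparseness)."""
    r = np.asarray(responses, dtype=float)
    m = r.mean(axis=axis, keepdims=True)
    s = r.std(axis=axis, keepdims=True)
    z = (r - m) / s                          # standardize along the chosen axis
    return (z ** 4).mean(axis=axis) - 3.0    # excess kurtosis (0 for a Gaussian)
```

High positive values indicate heavy-tailed, sparse response distributions; a Gaussian response profile yields values near zero.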

  1. Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes.

    PubMed

    Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M

    2018-04-12

    Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used only to guide content-dependent filter selection, where the set of spectral reflectances to be recovered is known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array, yielding an efficient demosaicing order and accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectra statistically exhibit power-law behavior. Using this property, we propose power-law-based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods.
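The power-law behavior of reflectance power spectra that this pipeline exploits can be estimated with a log-log least-squares fit. This is a generic sketch of fitting P(f) ≈ c·f^(−α), not the paper's error descriptor, and the function name is an assumption:

```python
import numpy as np

def fit_power_law(frequencies, power):
    """Fit P(f) ≈ c * f**(-alpha) by linear least squares in log-log space.
    Returns (alpha, c)."""
    slope, intercept = np.polyfit(np.log(frequencies), np.log(power), 1)
    return -slope, np.exp(intercept)
```

Fitting in log-log space turns the power law into a straight line, so the exponent falls out of an ordinary least-squares slope.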

  2. Optimized Multi-Spectral Filter Array Based Imaging of Natural Scenes

    PubMed Central

    Li, Yuqi; Majumder, Aditi; Zhang, Hao; Gopi, M.

    2018-01-01

    Multi-spectral imaging using a camera with more than three channels is an efficient method to acquire and reconstruct spectral data and is used extensively in tasks like object recognition, relighted rendering, and color constancy. Recently developed methods are used only to guide content-dependent filter selection, where the set of spectral reflectances to be recovered is known a priori. We present the first content-independent spectral imaging pipeline that allows optimal selection of multiple channels. We also present algorithms for optimal placement of the channels in the color filter array, yielding an efficient demosaicing order and accurate spectral recovery of natural reflectance functions. These reflectance functions have the property that their power spectra statistically exhibit power-law behavior. Using this property, we propose power-law-based error descriptors that are minimized to optimize the imaging pipeline. We extensively verify our models and optimizations using large sets of commercially available wide-band filters to demonstrate the greater accuracy and efficiency of our multi-spectral imaging pipeline over existing methods. PMID:29649114

  3. EzArray: A web-based highly automated Affymetrix expression array data management and analysis system

    PubMed Central

    Zhu, Yuerong; Zhu, Yuelin; Xu, Wei

    2008-01-01

    Background Though microarray experiments are very popular in life science research, managing and analyzing microarray data are still challenging tasks for many biologists. Most microarray programs require users to have sophisticated knowledge of mathematics and statistics, as well as computer skills. With accumulating microarray data deposited in public databases, easy-to-use programs to re-analyze previously published microarray data are in high demand. Results EzArray is a web-based Affymetrix expression array data management and analysis system for researchers who need to organize microarray data efficiently and get data analyzed instantly. EzArray organizes microarray data into projects that can be analyzed online with predefined or custom procedures. EzArray performs data preprocessing and detection of differentially expressed genes with statistical methods. All analysis procedures are optimized and highly automated so that even novice users with limited prior knowledge of microarray data analysis can complete an initial analysis quickly. Since all input files, analysis parameters, and executed scripts can be downloaded, EzArray provides maximum reproducibility for each analysis. In addition, EzArray integrates with Gene Expression Omnibus (GEO) and allows instantaneous re-analysis of published array data. Conclusion EzArray is a novel Affymetrix expression array data analysis and sharing system. EzArray provides easy-to-use tools for re-analyzing published microarray data and will help both novice and experienced users perform initial analysis of their microarray data from the location of data storage. We believe EzArray will be a useful system for facilities with microarray services and laboratories with multiple members involved in microarray data analysis. EzArray is freely available from . PMID:18218103

  4. Use of the multifocal electroretinogram (mfERG) for assessing the response of 670 nm light emitting diodes (LED) photoillumination in an animal model with laser retinal injuries

    NASA Astrophysics Data System (ADS)

    DiCarlo, Cheryl D.; Brown, Jeremiah; Grado, Andres; Sankovich, James; Zwick, Harry; Lund, David J.; Stuck, Bruce E.

    2004-07-01

    There is no uniformly accepted objective method to diagnose the functional extent of retinal damage following laser eye injury, and there is no uniform therapy for laser retinal injury. J.T. Eells et al. reported the use of light emitting diode (LED) photoillumination (670 nm) for methanol-induced retinal toxicity in rats. The findings indicated a preservation of retinal architecture, as determined by histopathology, and a partial functional recovery of photoreceptors, as determined by electroretinogram (ERG), in the LED-exposed methanol-intoxicated rats. The purpose of this study is to use multifocal electroretinography (mfERG) to evaluate recovery of retinal function following treatment with LED photoillumination in a cynomolgus monkey laser retinal injury model. Control and LED array (670 nm) illuminated animals received macular Argon laser lesions (514 nm, 130 mW, 100 ms). LED array exposure was administered for 4 days at a total dose of 4 J/cm2 per day. Baseline and post-laser-exposure mfERGs were performed. mfERG results for five animals post-laser injury but prior to treatment (Day 0) showed increased implicit times and P1 waveform amplitudes when compared to a combined laboratory normal and each animal's baseline normal values. In general, preliminary mfERG results of our first five subjects, recorded using both the 103-hexagon and 509-hexagon patterns, indicate a more rapid functional recovery in the LED-illuminated animal as compared to the control by the end of the fourth day post-exposure. Research is continuing to determine whether this difference in functional return is seen in additional subjects and whether statistical significance exists.

  5. ZENO: N-body and SPH Simulation Codes

    NASA Astrophysics Data System (ADS)

    Barnes, Joshua E.

    2011-02-01

    The ZENO software package integrates N-body and SPH simulation codes with a large array of programs to generate initial conditions and analyze numerical simulations. Written in C, the ZENO system is portable between Mac, Linux, and Unix platforms. It is in active use at the Institute for Astronomy (IfA), at NRAO, and possibly elsewhere. Zeno programs can perform a wide range of simulation and analysis tasks. While many of these programs were first created for specific projects, they embody algorithms of general applicability and embrace a modular design strategy, so existing code is easily applied to new tasks. Major elements of the system include: Structured data file utilities facilitate basic operations on binary data, including import/export of ZENO data to other systems. Snapshot generation routines create particle distributions with various properties; systems with user-specified density profiles can be realized in collisionless or gaseous form, and multiple spherical and disk components may be set up in mutual equilibrium. Snapshot manipulation routines permit the user to sift, sort, and combine particle arrays, translate and rotate particle configurations, and assign new values to data fields associated with each particle. Simulation codes include both pure N-body and combined N-body/SPH programs: pure N-body codes are available in both uniprocessor and parallel versions, and SPH codes offer a wide range of options for gas physics, including isothermal, adiabatic, and radiating models. Snapshot analysis programs calculate temporal averages, evaluate particle statistics, measure shapes and density profiles, compute kinematic properties, and identify and track objects in particle distributions. Visualization programs generate interactive displays and produce still images and videos of particle distributions; the user may specify arbitrary color schemes and viewing transformations.

  6. Statistical analysis of kinetic energy entrainment in a model wind turbine array boundary layer

    NASA Astrophysics Data System (ADS)

    Cal, Raul Bayoan; Hamilton, Nicholas; Kang, Hyung-Suk; Meneveau, Charles

    2012-11-01

    For large wind farms, kinetic energy must be entrained from the flow above the wind turbines to replenish wakes and enable power extraction in the array. Various statistical features of turbulence causing vertical entrainment of mean-flow kinetic energy are studied using hot-wire velocimetry data taken in a model wind farm in a scaled wind tunnel experiment. Conditional statistics and spectral decompositions are employed to characterize the most relevant turbulent flow structures and determine their length-scales. Sweep and ejection events are shown to be the largest contributors to the vertical kinetic energy flux, although their relative contribution depends upon the location in the wake. Sweeps are shown to be dominant in the region above the wind turbine array. A spectral analysis of the data shows that large scales of the flow, about the size of the rotor diameter in length or larger, dominate the vertical entrainment. The flow is more incoherent below the array, causing decreased vertical fluxes there. The results show that improving the rate of vertical kinetic energy entrainment into wind turbine arrays is a standing challenge and would require modifying the large-scale structures of the flow. This work was funded in part by the National Science Foundation (CBET-0730922, CBET-1133800 and CBET-0953053).
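The sweep/ejection decomposition described above is a standard quadrant analysis of the instantaneous flux u'w'. A minimal numpy sketch of the idea (the function name and the simple zero-threshold quadrants are illustrative, not the authors' code):

```python
import numpy as np

def quadrant_flux(u, w):
    """Quadrant (conditional) analysis of the vertical flux u'w'.

    Ejections (Q2: u' < 0, w' > 0) and sweeps (Q4: u' > 0, w' < 0)
    typically dominate vertical entrainment. Returns the mean u'w'
    contribution of each quadrant Q1..Q4.
    """
    up = u - u.mean()            # streamwise fluctuation u'
    wp = w - w.mean()            # vertical fluctuation w'
    n = len(up)
    quads = {
        "Q1": (up > 0) & (wp > 0),   # outward interaction
        "Q2": (up < 0) & (wp > 0),   # ejection
        "Q3": (up < 0) & (wp < 0),   # inward interaction
        "Q4": (up > 0) & (wp < 0),   # sweep
    }
    return {k: float((up[m] * wp[m]).sum() / n) for k, m in quads.items()}

# demo on synthetic sheared turbulence: w' anti-correlated with u'
rng = np.random.default_rng(0)
u = rng.normal(size=2000)
w = -0.3 * u + rng.normal(size=2000)
flux = quadrant_flux(u, w)
```

Summing the four quadrant contributions recovers the total mean flux u'w'; "hole" thresholding (not shown) would additionally restrict the count to events exceeding a set magnitude.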

  7. An Objective Verification of the North American Mesoscale Model for Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2010-01-01

    The 45th Weather Squadron (45 WS) Launch Weather Officers (LWOs) use the 12-km resolution North American Mesoscale (NAM) model (MesoNAM) text and graphical product forecasts extensively to support launch weather operations. However, the actual performance of the model at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) has not been measured objectively. In order to have tangible evidence of model performance, the 45 WS tasked the Applied Meteorology Unit (AMU; Bauman et al., 2004) to conduct a detailed statistical analysis of model output compared to observed values. The model products are provided to the 45 WS by ACTA, Inc. and include hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The objective analysis compared the MesoNAM forecast winds, temperature (T) and dew point (Td), as well as the changes in these parameters over time, to the observed values from the sensors in the KSC/CCAFS wind tower network shown in Table 1. These objective statistics give the forecasters knowledge of the model's strengths and weaknesses, which will result in improved forecasts for operations.
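The abstract does not spell out which statistics were computed; as a minimal sketch, the two most common objective verification measures are the bias and root-mean-square error of forecast minus observation (function name hypothetical):

```python
import numpy as np

def verification_stats(forecast, observed):
    """Bias and RMSE of model forecasts against tower observations,
    the basic objective measures used in this kind of verification."""
    err = np.asarray(forecast, dtype=float) - np.asarray(observed, dtype=float)
    return {"bias": float(err.mean()),
            "rmse": float(np.sqrt((err ** 2).mean()))}

# toy example: three forecast/observation pairs (e.g., temperature in deg C)
stats = verification_stats([2.0, 3.0, 4.0], [1.0, 3.0, 5.0])
```

The same two numbers would be computed per parameter (wind, T, Td), per initialization time, and per forecast hour to build the model-performance tables the forecasters use.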

  8. Parameters optimization of laser brazing in crimping butt using Taguchi and BPNN-GA

    NASA Astrophysics Data System (ADS)

    Rong, Youmin; Zhang, Zhen; Zhang, Guojun; Yue, Chen; Gu, Yafei; Huang, Yu; Wang, Chunming; Shao, Xinyu

    2015-04-01

    Laser brazing (LB) is widely used in the automotive industry because of its high speed, small heat-affected zone, high weld-seam quality, and low heat input. Welding parameters play a significant role in determining the bead geometry and hence the quality of the weld joint. This paper addresses the optimization of the seam shape in the LB process with a welding crimping butt of 0.8 mm thickness using a back propagation neural network (BPNN) and a genetic algorithm (GA). A 3-factor, 5-level welding experiment is conducted with a Taguchi L25 orthogonal array through the statistical design method. The input parameters considered are welding speed (WS), wire feed rate (WF), and gap (GAP), each at 5 levels. The output results are the efficient connection lengths of the left and right sides, and the top width (WT) and bottom width (WB) of the weld bead. The experimental results are fed into the BPNN to establish the relationship between the input and output variables. The predictions of the BPNN are passed to the GA, which optimizes the process parameters subject to the objectives. The effects of WS, WF, and GAP on the summed values of the bead geometry are then discussed. Eventually, confirmation experiments are carried out to demonstrate that the optimal values are effective and reliable. On the whole, the proposed hybrid method, BPNN-GA, can be used to guide actual work and improve the efficiency and stability of the LB process.
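Neither the trained BPNN nor the experimental data are given in the abstract, so the sketch below pairs a generic real-coded genetic algorithm with a stand-in quadratic surrogate playing the role of the network; the variable bounds, GA settings, and the surrogate's optimum are all illustrative assumptions:

```python
import numpy as np

def surrogate(x):
    # Hypothetical stand-in for the trained BPNN: a smooth function of
    # normalized (welding speed, wire feed rate, gap) with a known optimum.
    ws, wf, gap = x
    return (ws - 0.6) ** 2 + (wf - 0.4) ** 2 + (gap - 0.2) ** 2

def ga_minimize(f, bounds, pop=40, gens=60, seed=0):
    """Minimal real-coded GA: truncation selection, uniform crossover,
    Gaussian mutation, and elitism, minimizing f over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, x)
        parents = x[np.argsort(fit)][: pop // 2]   # keep the fitter half
        i = rng.integers(0, len(parents), size=(pop, 2))
        mask = rng.random((pop, len(lo))) < 0.5
        x = np.where(mask, parents[i[:, 0]], parents[i[:, 1]])  # crossover
        x += rng.normal(scale=0.02, size=x.shape)               # mutation
        x = np.clip(x, lo, hi)
        x[0] = parents[0]                           # elitism: keep the best
    fit = np.apply_along_axis(f, 1, x)
    return x[np.argmin(fit)]

best = ga_minimize(surrogate, [(0, 1), (0, 1), (0, 1)])
```

In the paper's workflow the surrogate would be the BPNN fitted to the L25 experimental runs, and the GA's fitness would combine the bead-geometry objectives.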

  9. An application of Social Values for Ecosystem Services (SolVES) to three national forests in Colorado and Wyoming

    USGS Publications Warehouse

    Sherrouse, Benson C.; Semmens, Darius J.; Clement, Jessica M.

    2014-01-01

    Despite widespread recognition that social-value information is needed to inform stakeholders and decision makers regarding trade-offs in environmental management, it too often remains absent from ecosystem service assessments. Although quantitative indicators of social values need to be explicitly accounted for in the decision-making process, they need not be monetary. Ongoing efforts to map such values demonstrate how they can also be made spatially explicit and relatable to underlying ecological information. We originally developed Social Values for Ecosystem Services (SolVES) as a tool to assess, map, and quantify nonmarket values perceived by various groups of ecosystem stakeholders. With SolVES 2.0 we have extended the functionality by integrating SolVES with Maxent maximum entropy modeling software to generate more complete social-value maps from available value and preference survey data and to produce more robust models describing the relationship between social values and ecosystems. The current study has two objectives: (1) evaluate how effectively the value index, a quantitative, nonmonetary social-value indicator calculated by SolVES, reproduces results from more common statistical methods of social-survey data analysis and (2) examine how the spatial results produced by SolVES provide additional information that could be used by managers and stakeholders to better understand more complex relationships among stakeholder values, attitudes, and preferences. To achieve these objectives, we applied SolVES to value and preference survey data collected for three national forests, the Pike and San Isabel in Colorado and the Bridger–Teton and the Shoshone in Wyoming. Value index results were generally consistent with results found through more common statistical analyses of the survey data such as frequency, discriminant function, and correlation analyses. 
In addition, spatial analysis of the social-value maps produced by SolVES provided information that was useful for explaining relationships between stakeholder values and forest uses. Our results suggest that SolVES can effectively reproduce information derived from traditional statistical analyses while adding spatially explicit, social-value information that can contribute to integrated resource assessment, planning, and management of forests and other ecosystems.

  10. Large Ka-Band Slot Array for Digital Beam-Forming Applications

    NASA Technical Reports Server (NTRS)

    Rengarajan, Sembiam; Zawadzki, Mark S.; Hodges, Richard E.

    2011-01-01

    This work describes the development of a large Ka-band slot array for the Glacier and Land Ice Surface Topography Interferometer (GLISTIN), a proposed spaceborne interferometric synthetic aperture radar for topographic mapping of ice sheets and glaciers. GLISTIN will collect ice topography measurement data over a wide swath with sub-seasonal repeat intervals using a Ka-band digitally beamformed antenna. For technology demonstration purposes, a receive array of size 1 x 1 m, consisting of 160 x 160 radiating elements, was developed. The array is divided into 16 sticks, each consisting of 160 x 10 radiating elements, whose outputs are combined to produce 16 digital beams. A transmit array stick was also developed. The antenna arrays were designed using Elliott's design equations with an infinite-array mutual-coupling model. A Floquet wave model was used to account for external coupling between radiating slots. Because a uniform amplitude and phase distribution was used, the infinite-array model yielded identical values for all radiating elements apart from alternating offsets, and identical coupling elements apart from alternating positive and negative tilts. Waveguide-fed slot arrays are finding many applications in radar, remote sensing, and communications because of desirable properties such as low mass, low volume, and ease of design, manufacture, and deployment. Although waveguide-fed slot arrays have been designed, built, and tested in the past, this work represents several advances to the state of the art. The use of the infinite-array model for the radiating slots yielded a simple design process for radiating and coupling slots. A method-of-moments solution to the integral equations for alternating-offset radiating slots in an infinite-array environment was developed and validated against the commercial finite element code HFSS. For analysis, a method-of-moments code was developed for an infinite array of subarrays. Overall, the 1 x 1 m array was found successful in meeting the objectives of the GLISTIN demonstration antenna, especially with respect to the 0.042 deg (one tenth of a stick's beamwidth) relative beam alignment between sticks.

  11. Technical Objective Document. Fiscal Year 1989

    DTIC Science & Technology

    1987-12-01

    other special interest areas/technologies; and through a "delphi" process with the Center Technical Investment Committee develop a "puts and takes...radar and large optical systems in space, the detection and tracking of low observables, and the operation of sensors for tracking objects in space for...for reducing the processing time for adaptive beamforming in receive arrays, self-cohering techniques in large distributed arrays and array self

  12. Direct statistical modeling and its implications for predictive mapping in mining exploration

    NASA Astrophysics Data System (ADS)

    Sterligov, Boris; Gumiaux, Charles; Barbanson, Luc; Chen, Yan; Cassard, Daniel; Cherkasov, Sergey; Zolotaya, Ludmila

    2010-05-01

    Recent advances in geosciences make more and more multidisciplinary data available for mining exploration. These advances have allowed the development of methodologies for computing forecast ore maps from the statistical combination of such different input parameters, all based on inverse problem theory. Numerous statistical methods (e.g., the algebraic method, weight of evidence, the Siris method), with varying degrees of complexity in their development and implementation, have been proposed and/or adapted for ore-geology purposes. In the literature, such approaches are often presented through applications to natural examples, and the results obtained can present specificities due to local characteristics. Moreover, though crucial for statistical computations, the "minimum requirements" for input parameters (minimum number of data points, spatial distribution of objects, etc.) are often only poorly expressed. As a result, problems often arise when one must choose between one method and another for a specific question. In this study, a direct statistical modeling approach is developed in order to (i) evaluate the constraints on the input parameters and (ii) test the validity of different existing inversion methods. The approach focuses on the analysis of spatial relationships between the locations of points and various objects (e.g., polygons and/or polylines), which is particularly well adapted to constraining the influence of intrusive bodies - such as granites - and of faults or ductile shear zones on the spatial location of ore deposits (point objects). The method is designed to ensure a-dimensionality with respect to scale. In this approach, both the spatial distribution and the topology of objects (polygons and polylines) can be parametrized by the user (e.g., density of objects, length, surface, orientation, clustering). Then, the distance of points with respect to a given type of object (polygons or polylines) is given using a probability distribution. The location of points is computed assuming either independence or different grades of dependency between the two probability distributions. The results show that (i) the mean polygon surface, the mean polyline length, the number of objects, and their clustering are critical, and (ii) the validity of the different tested inversion methods strongly depends on the relative importance of, and the dependency between, the parameters used. In addition, this combined approach of direct and inverse modeling offers an opportunity to test the robustness of the inferred point-distribution laws with respect to the quality of the input data set.
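As a toy version of such a direct model, one can rejection-sample "deposit" points whose distance to a single fault polyline follows a prescribed exponential decay; the paper's generator additionally parametrizes object density, length, surface, orientation, and clustering, so the function name and parameters below are illustrative only:

```python
import numpy as np

def sample_deposits(n, segment, lam=0.1, seed=0):
    """Rejection-sample n points in the unit square whose distance d to a
    fault segment is drawn with acceptance probability exp(-d / lam)."""
    (x1, y1), (x2, y2) = segment
    a, b = np.array([x1, y1], dtype=float), np.array([x2, y2], dtype=float)
    rng = np.random.default_rng(seed)
    pts = []
    while len(pts) < n:
        p = rng.uniform(0.0, 1.0, size=2)
        # distance from the candidate point to the segment
        t = np.clip(np.dot(p - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
        d = np.linalg.norm(p - (a + t * (b - a)))
        if rng.random() < np.exp(-d / lam):
            pts.append(p)
    return np.array(pts)

# demo: deposits clustering around a vertical "fault" at x = 0.5
pts = sample_deposits(300, ((0.5, 0.0), (0.5, 1.0)), lam=0.05)
```

Feeding such synthetic point sets, generated with known distance laws and dependency grades, to an inversion method is exactly what lets one test where that method breaks down.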

  13. The influence of contextual reward statistics on risk preference

    PubMed Central

    Rigoli, Francesco; Rutledge, Robb B.; Dayan, Peter; Dolan, Raymond J.

    2016-01-01

    Decision theories mandate that organisms should adjust their behaviour in the light of contextual reward statistics. We tested this notion using a gambling choice task involving distinct contexts with different reward distributions. The best-fitting model of subjects' behaviour indicated that the subjective values of options depended on several factors, including a baseline gambling propensity, a gambling preference dependent on reward amount, and a contextual reward adaptation factor. Combining this behavioural model with simultaneous functional magnetic resonance imaging, we probed neural responses in three key regions linked to reward and value, namely the ventral tegmental area/substantia nigra (VTA/SN), ventromedial prefrontal cortex (vmPFC) and ventral striatum (VST). We show that activity in the VTA/SN reflected contextual reward statistics to the extent that context affected behaviour, that activity in the vmPFC represented a value difference between chosen and unchosen options, and that VST responses reflected a non-linear mapping between the actual objective rewards and their subjective value. The findings highlight a multifaceted basis for choice behaviour, with distinct mappings between components of this behaviour and value-sensitive brain regions. PMID:26707890

  14. [Application of statistics on chronic-diseases-relating observational research papers].

    PubMed

    Hong, Zhi-heng; Wang, Ping; Cao, Wei-hua

    2012-09-01

    To study the application of statistics in observational research papers on chronic diseases recently published in Chinese Medical Association journals with an impact factor above 0.5. Using a self-developed criterion, two investigators independently assessed the application of statistics in these journals; differences of opinion were resolved through discussion. A total of 352 papers from 6 journals, including the Chinese Journal of Epidemiology, Chinese Journal of Oncology, Chinese Journal of Preventive Medicine, Chinese Journal of Cardiology, Chinese Journal of Internal Medicine and Chinese Journal of Endocrinology and Metabolism, were reviewed. The rates of clear statements on research objectives, target audience, sample issues, objective inclusion criteria, and variable definitions were 99.43%, 98.57%, 95.43%, 92.86% and 96.87%, respectively. The rates of correct description of quantitative and qualitative data were 90.94% and 91.46%, respectively. The rates of correctly expressed results for statistical inference methods related to quantitative data, qualitative data, and modeling were 100%, 95.32% and 87.19%, respectively. In 89.49% of the papers, the conclusions directly addressed the research objectives. However, 69.60% of the papers did not state the exact name of the study design used, and 11.14% of the papers lacked further statement of the exclusion criteria. Only 5.16% of the papers clearly explained the sample-size estimation, and only 24.21% clearly described the variable value assignment. The rate of introduction of the statistical procedures and database methods used was only 24.15%, and 18.75% of the papers did not express the statistical inference methods sufficiently. A quarter of the papers did not use 'standardization' appropriately. As for statistical inference, the rate of description of statistical testing prerequisites was only 24.12%, while 9.94% of the papers did not employ the statistical inference methods that should have been used. The main deficiencies in the application of statistics in observational research papers on chronic diseases were as follows: lack of sample-size determination, insufficient description of variable value assignment, statistical methods not introduced clearly or properly, and lack of consideration of the prerequisites for the statistical inferences used.

  15. Micromirror arrays to assess luminescent nano-objects.

    PubMed

    Kawakami, Yoichi; Kanai, Akinobu; Kaneta, Akio; Funato, Mitsuru; Kikuchi, Akihiko; Kishino, Katsumi

    2011-05-01

    We propose an array of submicrometer mirrors to assess luminescent nano-objects. Micromirror arrays (MMAs) are fabricated on Si (001) wafers via selectively doping Ga using the focused ion beam technique to form p-type etch stop regions, subsequent anisotropic chemical etching, and Al deposition. MMAs provide two benefits: reflection of luminescence from nano-objects within MMAs toward the Si (001) surface normal and nano-object labeling. The former increases the probability of optics collecting luminescence and is demonstrated by simulations based on the ray-tracing and finite-difference time-domain methods as well as by experiments. The latter enables different measurements to be repeatedly performed on a single nano-object located at a certain micromirror. For example, a single InGaN∕GaN nanocolumn is assessed by scanning electron microscopy and microphotoluminescence spectroscopy.

  16. Statistical study of conductance properties in one-dimensional quantum wires focusing on the 0.7 anomaly

    NASA Astrophysics Data System (ADS)

    Smith, L. W.; Al-Taie, H.; Sfigakis, F.; See, P.; Lesage, A. A. J.; Xu, B.; Griffiths, J. P.; Beere, H. E.; Jones, G. A. C.; Ritchie, D. A.; Kelly, M. J.; Smith, C. G.

    2014-07-01

    The properties of conductance in one-dimensional (1D) quantum wires are statistically investigated using an array of 256 lithographically identical split gates, fabricated on a GaAs/AlGaAs heterostructure. All the split gates are measured during a single cooldown under the same conditions. Electron many-body effects give rise to an anomalous feature in the conductance of a one-dimensional quantum wire, known as the "0.7 structure" (or "0.7 anomaly"). To handle the large data set, a method of automatically estimating the conductance value of the 0.7 structure is developed. Large differences are observed in the strength and value of the 0.7 structure [from 0.63 to 0.84 × (2e²/h)], despite the constant temperature and identical device design. Variations in the 1D potential profile are quantified by estimating the curvature of the barrier in the direction of electron transport, following a saddle-point model. The 0.7 structure appears to be highly sensitive to the specific confining potential within individual devices.
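The paper's automated estimator is not specified in the abstract; one simple stand-in locates the 0.7 structure as the flattest point of the conductance trace between 0.5 and 0.95 × (2e²/h), i.e. the local minimum of |dG/dVg| below the first plateau (function name and window are assumptions):

```python
import numpy as np

def shoulder_conductance(vg, g):
    """Estimate the conductance value of a sub-plateau shoulder.

    g is conductance in units of 2e^2/h. The 0.7 structure appears as a
    region of reduced slope below the first plateau, so we return the
    conductance where |dG/dVg| is smallest within the 0.5-0.95 window.
    """
    dg = np.gradient(g, vg)
    idx = np.flatnonzero((g > 0.5) & (g < 0.95))
    return float(g[idx[np.argmin(np.abs(dg[idx]))]])

# synthetic trace: a riser to a shoulder near 0.72, then on to the 1.0 plateau
vg = np.linspace(-1.0, 0.0, 400)
g = (0.72 / (1 + np.exp(-(vg + 0.7) / 0.03))
     + 0.28 / (1 + np.exp(-(vg + 0.3) / 0.03)))
est = shoulder_conductance(vg, g)
```

Applied to 256 traces, such an estimator yields the distribution of shoulder values whose spread the paper reports.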

  17. Doppler radar detection of vortex hazard indicators

    NASA Technical Reports Server (NTRS)

    Nespor, Jerald D.; Hudson, B.; Stegall, R. L.; Freedman, Jerome E.

    1994-01-01

    Wake vortex experiments were conducted at White Sands Missile Range, NM using the AN/MPS-39 Multiple Object Tracking Radar (MOTR). The purpose of these experiments was twofold. The first objective was to verify that radar returns from wake vortex are observed for some time after the passage of an aircraft. The second objective was to verify that other vortex hazard indicators such as ambient wind speed and direction could also be detected. The present study addresses the Doppler characteristics of wake vortex and clear air returns based upon measurements employing MOTR, a very sensitive C-Band phased array radar. In this regard, the experiment was conducted so that the spectral characteristics could be determined on a dwell to-dwell basis. Results are presented from measurements of the backscattered power (equivalent structure constant), radial velocity and spectral width when the aircraft flies transverse and axial to the radar beam. The statistics of the backscattered power and spectral width for each case are given. In addition, the scan strategy, experimental test procedure and radar parameters are presented.

  18. Application of the Statistical ICA Technique in the DANCE Data Analysis

    NASA Astrophysics Data System (ADS)

    Baramsai, Bayarbadrakh; Jandel, M.; Bredeweg, T. A.; Rusev, G.; Walker, C. L.; Couture, A.; Mosby, S.; Ullmann, J. L.; Dance Collaboration

    2015-10-01

    The Detector for Advanced Neutron Capture Experiments (DANCE) at the Los Alamos Neutron Science Center is used to improve our understanding of the neutron capture reaction. DANCE is a highly efficient 4π γ-ray detector array consisting of 160 BaF2 crystals, making it an ideal tool for neutron capture experiments. The (n, γ) reaction Q-value equals the summed energy of all γ-rays emitted in the de-excitation cascades from the excited capture state to the ground state. The total γ-ray energy is used to identify reactions on different isotopes as well as the background. However, it is challenging to identify contributions to the Esum spectra from different isotopes with similar Q-values. Recently we have tested the applicability of modern statistical methods such as Independent Component Analysis (ICA) to identify and separate the (n, γ) reaction yields of the different isotopes present in the target material. ICA is a recently developed computational tool for separating multidimensional data into statistically independent additive subcomponents. In this conference talk, we present results of the application of ICA algorithms and their modifications to the DANCE experimental data analysis. This research is supported by the U.S. Department of Energy, Office of Science, Nuclear Physics under the Early Career Award No. LANL20135009.
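The core ICA idea is easy to demonstrate: given linearly mixed signals, a fixed-point FastICA iteration (tanh contrast, symmetric decorrelation) recovers statistically independent components up to order and sign. The sketch below is a bare-bones numpy illustration of that idea, not the collaboration's analysis code:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA with a tanh nonlinearity.

    X: (n_components, n_samples) mixed signals. Returns a matrix M such
    that M @ (X - mean) approximates the independent sources.
    """
    Xc = X - X.mean(axis=1, keepdims=True)
    # whiten via eigendecomposition of the covariance
    d, E = np.linalg.eigh(np.cov(Xc))
    K = E @ np.diag(d ** -0.5) @ E.T
    Z = K @ Xc
    n = Z.shape[0]
    W = np.random.default_rng(seed).normal(size=(n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W_new = G @ Z.T / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W_new)      # symmetric decorrelation
        W = u @ vt
    return W @ K

# demo: unmix two synthetic "spectra" mixed by an unknown matrix
rng = np.random.default_rng(1)
S = np.vstack([rng.uniform(-1, 1, 3000), rng.laplace(size=3000)])
X = np.array([[1.0, 0.5], [0.4, 1.0]]) @ S
Y = fastica(X) @ (X - X.mean(axis=1, keepdims=True))
```

In the DANCE setting, the "mixed signals" would be Esum spectra measured under varying conditions, and the recovered components the per-isotope yields.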

  19. Highly uniform parallel microfabrication using a large numerical aperture system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Zi-Yu; Su, Ya-Hui, E-mail: ustcsyh@ahu.edu.cn, E-mail: dongwu@ustc.edu.cn; Zhang, Chen-Chu

    In this letter, we report an improved algorithm to produce accurate phase patterns for generating highly uniform diffraction-limited multifocal arrays in a large numerical aperture objective system. It is shown that, based on the original diffraction integral, the uniformity of the diffraction-limited focal arrays can be improved from ∼75% to >97%, owing to the critical consideration of the aperture function and apodization effect associated with a large numerical aperture objective. The experimental results, e.g., 3 × 3 arrays of square and triangle shape and seven-microlens arrays with high uniformity, further verify the advantage of the improved algorithm. This algorithm enables laser parallel processing technology to realize uniform microstructures and functional devices in a microfabrication system with a large numerical aperture objective.
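The improved algorithm itself is not reproduced in the abstract; the baseline it builds on is typically a weighted Gerchberg-Saxton iteration, which equalizes spot intensities by re-weighting the target amplitudes each round. A low-NA, scalar-diffraction sketch of that baseline (the aperture-function and apodization corrections the paper adds for a high-NA objective are omitted):

```python
import numpy as np

def weighted_gs(targets, iters=50):
    """Weighted Gerchberg-Saxton for a phase-only multifocal hologram.

    targets: boolean mask of desired focal spots in the Fourier plane.
    Returns the phase pattern and the spot uniformity
    1 - (Imax - Imin) / (Imax + Imin).
    """
    shape = targets.shape
    phase = np.random.default_rng(0).uniform(0, 2 * np.pi, shape)
    w = np.ones(int(targets.sum()))
    for _ in range(iters):
        far = np.fft.fft2(np.exp(1j * phase))        # focal-plane field
        amps = np.abs(far[targets])
        w *= np.sqrt(amps.mean() / amps)             # damped boost of weak spots
        goal = np.zeros(shape)
        goal[targets] = w
        back = np.fft.ifft2(goal * np.exp(1j * np.angle(far)))
        phase = np.angle(back)                       # keep phase only
    far = np.fft.fft2(np.exp(1j * phase))
    I = np.abs(far[targets]) ** 2
    return phase, float(1 - (I.max() - I.min()) / (I.max() + I.min()))

# demo: a 3 x 3 spot array on a 128 x 128 grid
targets = np.zeros((128, 128), dtype=bool)
targets[np.ix_((16, 64, 112), (16, 64, 112))] = True
phase, uniformity = weighted_gs(targets)
```

The paper's contribution is, in effect, to evaluate the focal field with the full high-NA diffraction integral (aperture function and apodization included) instead of the plain FFT used here.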

  20. Non-native fish control below Glen Canyon Dam - Report from a structured decision-making project

    USGS Publications Warehouse

    Runge, Michael C.; Bean, Ellen; Smith, David; Kokos, Sonja

    2011-01-01

    This report describes the results of a structured decision-making project by the U.S. Geological Survey to provide substantive input to the Bureau of Reclamation (Reclamation) for use in the preparation of an Environmental Assessment concerning control of non-native fish below Glen Canyon Dam. A forum was created to allow the diverse cooperating agencies and Tribes to discuss, expand, and articulate their respective values; to develop and evaluate a broad set of potential control alternatives using the best available science; and to define individual preferences of each group on how to manage the inherent trade-offs in this non-native fish control problem. This project consisted of two face-to-face workshops, held in Mesa, Arizona, October 18-20 and November 8-10, 2010. At the first workshop, a diverse set of objectives was discussed, which represented the range of concerns of those agencies and Tribes present. A set of non-native fish control alternatives ('hybrid portfolios') was also developed. Over the 2-week period between the two workshops, four assessment teams worked to evaluate the control alternatives against the array of objectives. At the second workshop, the results of the assessment teams were presented. Multi-criteria decision analysis methods were used to examine the trade-offs inherent in the problem, and allowed the participating agencies and Tribes to express their individual judgments about how those trade-offs should best be managed in Reclamation's selection of a preferred alternative. A broad array of objectives was identified and defined, and an effort was made to understand how these objectives are likely to be achieved by a variety of strategies. In general, the objectives reflected desired future conditions over 30 years. A rich set of alternative approaches was developed, and the complex structure of those alternatives was documented. 
Multi-criteria decision analysis methods allowed the evaluation of those alternatives against the array of objectives, with the values of individual agencies and tribes deliberately preserved. Trout removal strategies aimed at the Paria to Badger Rapid reach (PBR), with a variety of permutations in deference to cultural values, and with backup removal at the Little Colorado River reach (LCR) if necessary, were identified as top-ranking portfolios for all agencies and Tribes. These PBR/LCR removal portfolios outperformed LCR-only removal portfolios, for cultural reasons and for effectiveness - the probability of keeping the humpback chub population above a desired threshold was estimated to be higher under the PBR/LCR portfolios than the LCR-only portfolios. The PBR/LCR removal portfolios also outperformed portfolios based on flow manipulations, primarily because of the effect of sport fishery and wilderness recreation objectives, as well as cultural objectives. The preference for the PBR/LCR removal portfolios was quite robust to variation in the objective weights and to uncertainty about the underlying dynamics, at least over the ranges of uncertainty investigated. Examination of the effect of uncertainty on the recommended outcomes allowed us to complete a 'value of information' analysis. The results of this analysis led to an adaptive strategy that includes three possible long-term management actions (no action; LCR removal; or PBR removal) and seeks to reduce uncertainty about the following two issues: the degree to which rainbow trout limit chub populations, and the effectiveness of PBR removal to reduce trout emigration downstream into Marble and eastern Grand Canyons, where the largest population of humpback chub exist. In the face of uncertainty about the effectiveness of PBR removal, a case might be made for including flow manipulations in an adaptive strategy, but formal analysis of this case was not conducted. 
The full set of conclusions described above is not definitive, however. This analysis described in this report is a simplified depiction of the t

  1. A review of statistical methods to analyze extreme precipitation and temperature events in the Mediterranean region

    NASA Astrophysics Data System (ADS)

    Lazoglou, Georgia; Anagnostopoulou, Christina; Tolika, Konstantia; Kolyva-Machera, Fotini

    2018-04-01

    The increasing intensity and frequency of temperature and precipitation extremes during the past decades has substantial environmental and socioeconomic impacts. Thus, the objective of the present study is to compare several statistical methods from extreme value theory (EVT) in order to identify which is the most appropriate for analyzing the behavior of extreme precipitation and of high- and low-temperature events in the Mediterranean region. Extremes were selected using both the block maxima and the peaks-over-threshold (POT) techniques, and consequently both the generalized extreme value (GEV) and generalized Pareto distributions (GPDs) were used to fit them. The results were compared in order to select the most appropriate distribution for characterizing extremes. Moreover, this study evaluates maximum likelihood estimation, the L-moments method, and the Bayesian method, based on both graphical and statistical goodness-of-fit tests. It was revealed that the GPD can accurately characterize both precipitation and temperature extreme events. Additionally, the GEV distribution with the Bayesian method is shown to be appropriate, especially for the largest extremes. Another important objective of this investigation was the estimation of precipitation and temperature return levels for three return periods (50, 100, and 150 years), classifying the data into groups with similar characteristics. Finally, the return levels were estimated with both GEV and GPD and with the three estimation methods, revealing that the choice of method can affect the return level values for both precipitation and temperature.
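The block-maxima/GEV and POT/GPD pipeline described in this abstract can be sketched with SciPy's distribution fitting. The data, threshold choice, and return period below are illustrative, not from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic block maxima (e.g. annual maximum precipitation, mm)
annual_max = rng.gumbel(loc=40.0, scale=10.0, size=60)

# Block maxima -> GEV fit (note: scipy's genextreme uses c = -xi)
c, loc, scale = stats.genextreme.fit(annual_max)
rl_100 = stats.genextreme.ppf(1 - 1/100, c, loc=loc, scale=scale)  # 100-year level

# Peaks over threshold -> GPD fit to excesses over a high quantile
daily = rng.gamma(shape=2.0, scale=8.0, size=20000)
threshold = np.quantile(daily, 0.99)
excesses = daily[daily > threshold] - threshold
xi, _, gscale = stats.genpareto.fit(excesses, floc=0.0)

print(f"GEV 100-year return level: {rl_100:.1f} mm, GPD shape: {xi:.3f}")
```

As the abstract notes, the same return level can also be computed from the GPD fit plus the threshold exceedance rate; comparing the two is one way the estimation methods can be contrasted.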

  2. The Morphology of the Rat Vibrissal Array: A Model for Quantifying Spatiotemporal Patterns of Whisker-Object Contact

    PubMed Central

    Gopal, Venkatesh; Solomon, Joseph H.; Hartmann, Mitra J. Z.

    2011-01-01

    In all sensory modalities, the data acquired by the nervous system is shaped by the biomechanics, material properties, and the morphology of the peripheral sensory organs. The rat vibrissal (whisker) system is one of the premier models in neuroscience to study the relationship between physical embodiment of the sensor array and the neural circuits underlying perception. To date, however, the three-dimensional morphology of the vibrissal array has not been characterized. Quantifying array morphology is important because it directly constrains the mechanosensory inputs that will be generated during behavior. These inputs in turn shape all subsequent neural processing in the vibrissal-trigeminal system, from the trigeminal ganglion to primary somatosensory (“barrel”) cortex. Here we develop a set of equations for the morphology of the vibrissal array that accurately describes the location of every point on every whisker to within ±5% of the whisker length. Given only a whisker's identity (row and column location within the array), the equations establish the whisker's two-dimensional (2D) shape as well as three-dimensional (3D) position and orientation. The equations were developed via parameterization of 2D and 3D scans of six rat vibrissal arrays, and the parameters were specifically chosen to be consistent with those commonly measured in behavioral studies. The final morphological model was used to simulate the contact patterns that would be generated as a rat uses its whiskers to tactually explore objects with varying curvatures. The simulations demonstrate that altering the morphology of the array changes the relationship between the sensory signals acquired and the curvature of the object. The morphology of the vibrissal array thus directly constrains the nature of the neural computations that can be associated with extraction of a particular object feature. 
These results illustrate the key role that the physical embodiment of the sensor array plays in the sensing process. PMID:21490724

  3. Beam-steering efficiency optimization method based on a rapid-search algorithm for liquid crystal optical phased array.

    PubMed

    Xiao, Feng; Kong, Lingjiang; Chen, Jian

    2017-06-01

    A rapid-search algorithm to improve the beam-steering efficiency of a liquid crystal optical phased array was proposed and experimentally demonstrated in this paper. The proposed algorithm, in which the steering efficiency is taken as the objective function and the controlling voltage codes are the optimization variables, consists of a detection stage and a construction stage. It optimizes the steering efficiency in the detection stage and adaptively adjusts its search direction in the construction stage to avoid getting caught in a wrong search space. Simulations were conducted to compare the proposed algorithm with the widely used pattern-search algorithm using convergence rate and optimized efficiency as criteria. Beam-steering optimization experiments were performed to verify the validity of the proposed method.
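The pattern-search baseline the abstract compares against can be illustrated with a minimal sketch. The two "voltage code" variables and the smooth surrogate objective below are hypothetical stand-ins for a measured steering efficiency.

```python
import numpy as np

def efficiency(v):
    # Surrogate objective with a single peak at v = (3, 7); in practice this
    # would be a measured steering efficiency, not an analytic function.
    return np.exp(-((v[0] - 3)**2 + (v[1] - 7)**2) / 10.0)

def pattern_search(f, x0, step=4.0, tol=1e-3, max_iter=200):
    """Maximize f by polling +/- steps along each axis, shrinking the mesh."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(2), -np.eye(2)]) * step:
            if f(x + d) > f(x):
                x, improved = x + d, True
        if not improved:
            step /= 2.0          # shrink the mesh when no poll point improves
            if step < tol:
                break
    return x

best = pattern_search(efficiency, [0.0, 0.0])
print(np.round(best, 2))
```

The rapid-search algorithm of the abstract differs in that it adapts its search direction to avoid wrong search spaces; the sketch above only shows the fixed-direction baseline.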

  4. Spatial Updating According to a Fixed Reference Direction of a Briefly Viewed Layout

    ERIC Educational Resources Information Center

    Zhang, Hui; Mou, Weimin; McNamara, Timothy P.

    2011-01-01

    Three experiments examined the role of reference directions in spatial updating. Participants briefly viewed an array of five objects. A non-egocentric reference direction was primed by placing a stick under two objects in the array at the time of learning. After a short interval, participants detected which object had been moved at a novel view…

  5. Objective research of auscultation signals in Traditional Chinese Medicine based on wavelet packet energy and support vector machine.

    PubMed

    Yan, Jianjun; Shen, Xiaojing; Wang, Yiqin; Li, Fufeng; Xia, Chunming; Guo, Rui; Chen, Chunfeng; Shen, Qingwei

    2010-01-01

    This study aims at utilising the Wavelet Packet Transform (WPT) and the Support Vector Machine (SVM) algorithm for objective, quantitative analysis of auscultation in Traditional Chinese Medicine (TCM) diagnosis. First, Wavelet Packet Decomposition (WPD) at level 6 was employed to split the auscultation signals into finer frequency bands. Then, statistical analysis was performed on the Wavelet Packet Energy (WPE) features extracted from the WPD coefficients. Furthermore, pattern recognition with an SVM was used to classify the statistical feature values of the sample groups. Finally, the experimental results showed that the classification accuracies were at a high level.
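The band-energy-plus-SVM pipeline can be sketched as follows. As an assumption, simple FFT band energies stand in for true wavelet packet energies (which would require a wavelet library), and the two-class signals are synthetic.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def band_energies(sig, n_bands=8):
    # Normalized energy per frequency band: a crude stand-in for WPE features.
    spec = np.abs(np.fft.rfft(sig)) ** 2
    bands = np.array_split(spec, n_bands)
    e = np.array([b.sum() for b in bands])
    return e / e.sum()

def make_signal(f):
    # Synthetic "auscultation" signal: a tone at frequency f plus noise.
    t = np.linspace(0, 1, 512, endpoint=False)
    return np.sin(2 * np.pi * f * t) + 0.3 * rng.standard_normal(512)

X = np.array([band_energies(make_signal(f)) for f in [20] * 30 + [80] * 30])
y = np.array([0] * 30 + [1] * 30)

acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```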

  6. Statistical analysis on the concordance of the radiological evaluation of fractures of the distal radius subjected to traction☆

    PubMed Central

    Machado, Daniel Gonçalves; da Cruz Cerqueira, Sergio Auto; de Lima, Alexandre Fernandes; de Mathias, Marcelo Bezerra; Aramburu, José Paulo Gabbi; Rodarte, Rodrigo Ribeiro Pinho

    2016-01-01

    Objective The objective of this study was to evaluate the current classifications for fractures of the distal extremity of the radius, since the classifications made using traditional radiographs in anteroposterior and lateral views have been questioned regarding their reproducibility. In the literature, it has been suggested that other options are needed, such as use of preoperative radiographs on fractures of the distal radius subjected to traction, with stratification by the evaluators. The aim was to demonstrate which classification systems present better statistical reliability. Results In the Universal classification, the results from the third-year resident group (R3) and from the group of more experienced evaluators (Staff) presented excellent correlation, with a statistically significant p-value (p < 0.05). Neither of the groups presented a statistically significant result through the Frykman classification. In the AO classification, there were high correlations in the R3 and Staff groups (respectively 0.950 and 0.800), with p-values lower than 0.05 (respectively <0.001 and 0.003). Conclusion It can be concluded that radiographs performed under traction showed good concordance in the Staff group and in the R3 group, and that this is a good tactic for radiographic evaluations of fractures of the distal extremity of the radius. PMID:26962498
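Interrater agreement on a categorical fracture classification of this kind is commonly summarized with Cohen's kappa. A minimal sketch with hypothetical ratings follows; the AO-like labels below are invented, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical classifications by two rater groups for 20 fractures
rater_r3    = ["A2", "A2", "B1", "C1", "A2", "B1", "C2", "A2", "B1", "C1",
               "A2", "B1", "C1", "C2", "A2", "B1", "A2", "C1", "B1", "C2"]
rater_staff = ["A2", "A2", "B1", "C1", "A2", "B2", "C2", "A2", "B1", "C1",
               "A2", "B1", "C1", "C2", "B1", "B1", "A2", "C1", "B1", "C2"]

# Kappa corrects raw agreement for the agreement expected by chance
kappa = cohen_kappa_score(rater_r3, rater_staff)
print(f"Cohen's kappa: {kappa:.2f}")
```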

  7. The Relationship Between Organizational Culture and Organizational Commitment in Zahedan University of Medical Sciences

    PubMed Central

    Azizollah, Arbabisarjou; Abolghasem, Farhang; Amin, Dadgar Mohammad

    2016-01-01

    Background and Objective: Organizations strive to achieve common goals, and organizational culture and organizational commitment are key concepts in management. The objective of the current research is to study the relationship between organizational culture and organizational commitment among the personnel of Zahedan University of Medical Sciences. Materials and Methods: This is a descriptive-correlational study. The statistical population was the whole tenured staff of Zahedan University of Medical Sciences who worked for the organization in 2012-2013. A random sampling method was used and 165 participants were chosen. Two standardized questionnaires, on organizational culture (Schein, 1984) and organizational commitment (Meyer & Allen, 2002), were applied. The face and construct validity of the questionnaires were approved by lecturers in management and by experts. The reliabilities of the organizational culture and organizational commitment questionnaires were 0.89 and 0.88, respectively, by Cronbach’s alpha coefficient. All statistical calculations were performed using the Statistical Package for the Social Sciences version 21.0 (SPSS Inc., Chicago, IL, USA). The level of significance was set at P<0.05. Findings: The findings of the study showed that there was a significant relationship between organizational culture and organizational commitment (P-value=0.027). The results also showed significant relationships between organizational culture and affective commitment (P-value=0.009), continuance commitment (P-value=0.009), and normative commitment (P-value=0.009). PMID:26925884
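The Cronbach's alpha reliability figures reported above (0.89 and 0.88) can be computed from raw item scores. A minimal sketch on synthetic questionnaire data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=(165, 1))                   # one underlying trait
scores = latent + 0.5 * rng.normal(size=(165, 10))   # 10 correlated items

alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}")
```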

  8. Investigation of Regional Influence of Magic-Angle Effect on T2 in Human Articular Cartilage with Osteoarthritis at 3 T

    PubMed Central

    Wang, Ligong; Regatte, Ravinder R.

    2014-01-01

    Rationale and Objectives The objectives of this research study were to determine the magic-angle effect on different subregions of in vivo human femoral cartilage through the quantitative assessment of the effect of static magnetic field orientation (B0) on transverse (T2) relaxation time at 3.0 T. Materials and Methods Healthy volunteers (n = 5; mean age, 36.4 years) and clinical patients (n = 5; mean age, 64 years) with early osteoarthritis (OA) were scanned at 3.0-T magnetic resonance using an 8-channel phased-array knee coil (transmit-receive). Results The T2 maps revealed significantly greater values in ventral than in dorsal regions. When the cartilage regions were oriented at 55° to B0 (magic angle), the longest T2 values were detected in comparison with the neighboring regions oriented 90° and 180° (0°) to B0. The subregions oriented 180° (0°) to B0 showed the lowest T2 values. Conclusions The differences in T2 values of different subregions suggest that the magic-angle effect needs to be considered when interpreting cartilage abnormalities in OA patients. PMID:25481517

  9. Predicting objective function weights from patient anatomy in prostate IMRT treatment planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Taewoo, E-mail: taewoo.lee@utoronto.ca; Hammad, Muhannad; Chan, Timothy C. Y.

    2013-12-15

    Purpose: Intensity-modulated radiation therapy (IMRT) treatment planning typically combines multiple criteria into a single objective function by taking a weighted sum. The authors propose a statistical model that predicts objective function weights from patient anatomy for prostate IMRT treatment planning. This study provides a proof of concept for geometry-driven weight determination. Methods: A previously developed inverse optimization method (IOM) was used to generate optimal objective function weights for 24 patients using their historical treatment plans (i.e., dose distributions). These IOM weights were around 1% for each of the femoral heads, while bladder and rectum weights varied greatly between patients. A regression model was developed to predict a patient's rectum weight using the ratio of the overlap volume of the rectum and bladder with the planning target volume at a 1 cm expansion as the independent variable. The femoral head weights were fixed to 1% each and the bladder weight was calculated as one minus the rectum and femoral head weights. The model was validated using leave-one-out cross validation. Objective values and dose distributions generated through inverse planning using the predicted weights were compared to those generated using the original IOM weights, as well as an average of the IOM weights across all patients. Results: The IOM weight vectors were on average six times closer to the predicted weight vectors than to the average weight vector, using the l2 distance. Likewise, the bladder and rectum objective values achieved by the predicted weights were more similar to the objective values achieved by the IOM weights. The difference in objective value performance between the predicted and average weights was statistically significant according to a one-sided sign test. 
For all patients, the difference in rectum V54.3 Gy, rectum V70.0 Gy, bladder V54.3 Gy, and bladder V70.0 Gy values between the dose distributions generated by the predicted weights and IOM weights was less than 5 percentage points. Similarly, the difference in femoral head V54.3 Gy values between the two dose distributions was less than 5 percentage points for all but one patient. Conclusions: This study demonstrates a proof of concept that patient anatomy can be used to predict appropriate objective function weights for treatment planning. In the long term, such geometry-driven weights may serve as a starting point for iterative treatment plan design or may provide information about the most clinically relevant region of the Pareto surface to explore.
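The weight-prediction step can be sketched as a linear regression with leave-one-out cross-validation, mirroring the abstract's setup. The overlap ratios, noise level, and weight normalization below are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(3)

# Hypothetical anatomy feature for 24 patients: rectum/bladder overlap ratio
overlap_ratio = rng.uniform(0.2, 2.0, size=(24, 1))
rectum_w = 0.2 + 0.25 * overlap_ratio[:, 0] + 0.03 * rng.standard_normal(24)

# Leave-one-out: each patient's weight predicted from the other 23
pred = cross_val_predict(LinearRegression(), overlap_ratio, rectum_w,
                         cv=LeaveOneOut())

# Femoral head weights fixed at 1% each; bladder weight takes the remainder
femoral = 0.01
bladder_w = 1.0 - pred - 2 * femoral

rmse = float(np.sqrt(np.mean((pred - rectum_w) ** 2)))
print(f"LOO RMSE of predicted rectum weight: {rmse:.3f}")
```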

  11. Flow noise of an underwater vector sensor embedded in a flexible towed array.

    PubMed

    Korenbaum, Vladimir I; Tagiltsev, Alexander A

    2012-05-01

    The objective of this work is to simulate the flow noise of a vector sensor embedded in a flexible towed array. The mathematical model developed, based on long-wavelength analysis of the inner space of a cylindrical multipole source, predicts the reduction of the flow noise of a vector sensor embedded in an underwater flexible towed array by means of intensimetric processing (cross-spectral density calculation of oscillatory velocity and sound-pressure-sensor responses). It is found experimentally that intensimetric processing results in flow noise reduction by 12-25 dB at mean levels and by 10-30 dB in fluctuations compared to a squared oscillatory velocity channel. The effect of flow noise suppression in the intensimetry channel relative to a squared sound pressure channel is observed, but only for frequencies above the threshold. These suppression values are 10-15 dB at mean noise levels and 3-6 dB in fluctuations. At towing velocities of 1.5-3 m s(-1) and an accumulation time of 98.3 s, the threshold frequency in fluctuations is between 30 and 45 Hz.
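The cross-spectral density at the core of the intensimetric processing can be sketched with scipy.signal.csd. The velocity- and pressure-like channels below are synthetic, sharing a 50 Hz tone in independent noise.

```python
import numpy as np
from scipy.signal import csd, welch

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)

tone = np.sin(2 * np.pi * 50 * t)                   # shared acoustic signal
velocity = tone + 2.0 * rng.standard_normal(t.size)  # oscillatory-velocity channel
pressure = tone + 2.0 * rng.standard_normal(t.size)  # sound-pressure channel

# Cross-spectrum: the coherent tone survives segment averaging, while the
# independent flow noise in each channel averages down, unlike in the
# single-channel squared spectrum Pxx.
f, Pxy = csd(velocity, pressure, fs=fs, nperseg=1024)
f2, Pxx = welch(velocity, fs=fs, nperseg=1024)

peak = f[np.argmax(np.abs(Pxy))]
print(f"cross-spectrum peak at {peak:.1f} Hz")
```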

  12. Statistical modeling of temperature, humidity and wind fields in the atmospheric boundary layer over the Siberian region

    NASA Astrophysics Data System (ADS)

    Lomakina, N. Ya.

    2017-11-01

    This work presents the results of an applied climatic division of the Siberian region into districts, based on an objective classification of atmospheric boundary layer climates by the "temperature-moisture-wind" complex, realized using the method of principal components and special similarity criteria for average profiles and the eigenvalues of correlation matrices. On the territory of Siberia, 14 homogeneous regions were identified for the winter season and 10 regions for the summer. Local statistical models were constructed for each region. These include vertical profiles of mean values, mean square deviations, and matrices of interlevel correlation of temperature, specific humidity, and zonal and meridional wind velocity. The advantage of the obtained local statistical models over the regional models is shown.
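The kind of local statistical model described (mean profiles, deviations, interlevel correlation matrices, and their principal components) can be sketched on synthetic boundary-layer profiles. The heights, lapse rate, and correlation scale below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
levels = np.linspace(0, 1600, 9)            # heights in m (illustrative)
base = 15 - 0.0065 * levels                 # mean lapse-rate temperature profile

# Synthetic profiles with exponentially decaying interlevel correlation
cov = 4.0 * np.exp(-np.abs(np.subtract.outer(levels, levels)) / 800)
profiles = base + rng.multivariate_normal(np.zeros(9), cov, size=300)

mean_profile = profiles.mean(axis=0)        # vertical profile of mean values
stds = profiles.std(axis=0, ddof=1)         # mean square deviations
corr = np.corrcoef(profiles, rowvar=False)  # interlevel correlation matrix

# Principal components = eigenvectors of the correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]
explained = eigvals[:2].sum() / eigvals.sum()
print(f"variance explained by first two PCs: {explained:.0%}")
```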

  13. Penalized likelihood and multi-objective spatial scans for the detection and inference of irregular clusters

    PubMed Central

    2010-01-01

    Background Irregularly shaped spatial clusters are difficult to delineate. A cluster found by an algorithm often spreads through large portions of the map, compromising its geographical meaning. Penalized likelihood methods for Kulldorff's spatial scan statistic have been used to control the excessive freedom of the shape of clusters. Penalty functions based on cluster geometry and non-connectivity have been proposed recently. Another approach involves the use of a multi-objective algorithm to maximize two objectives: the spatial scan statistic and the geometric penalty function. Results & Discussion We present a novel scan statistic algorithm employing a function based on graph topology to penalize the presence of under-populated disconnection nodes in candidate clusters, the disconnection nodes cohesion function. A disconnection node is defined as a region within a cluster whose removal disconnects the cluster. By applying this function, the most geographically meaningful clusters are sifted from the immense set of possible irregularly shaped candidate cluster solutions. To evaluate the statistical significance of solutions for multi-objective scans, a statistical approach based on the concept of the attainment function is used. In this paper we compared different penalized likelihoods employing the geometric and non-connectivity regularity functions and the novel disconnection nodes cohesion function. We also built multi-objective scans using those three functions and compared them with the previous penalized likelihood scans. An application is presented using comprehensive state-wide data for Chagas' disease in puerperal women in Minas Gerais state, Brazil. Conclusions We show that, compared to the other single-objective algorithms, multi-objective scans present better performance regarding power, sensitivity and positive predictive value. 
The multi-objective non-connectivity scan is faster and better suited for the detection of moderately irregularly shaped clusters. The multi-objective cohesion scan is most effective for the detection of highly irregularly shaped clusters. PMID:21034451

  14. Neurons with object-centered spatial selectivity in macaque SEF: do they represent locations or rules?

    PubMed

    Tremblay, Léon; Gettner, Sonya N; Olson, Carl R

    2002-01-01

    In macaque monkeys performing a task that requires eye movements to the leftmost or rightmost of two dots in a horizontal array, some neurons in the supplementary eye field (SEF) fire differentially according to which side of the array is the target regardless of the array's location on the screen. We refer to these neurons as exhibiting selectivity for object-centered location. This form of selectivity might arise from involvement of the neurons in either of two processes: representing the locations of targets or representing the rules by which targets are selected. To distinguish between these possibilities, we monitored neuronal activity in the SEF of two monkeys performing a task that required the selection of targets by either an object-centered spatial rule or a color rule. On each trial, a sample array consisting of two side-by-side dots appeared; then a cue flashed on one dot; then the display vanished and a delay ensued. Next a target array consisting of two side-by-side dots appeared at an unpredictable location and another delay ensued; finally the monkey had to make an eye movement to one of the target dots. On some trials, the monkey had to select the dot on the same side as the cue (right or left). On other trials, he had to select the target of the same color as the cue (red or green). Neuronal activity robustly encoded the object-centered locations first of the cue and then of the target regardless of whether the monkey was following a rule based on object-centered location or color. Neuronal activity was at most weakly affected by the type of rule the monkey was following (object-centered-location or color) or by the color of the cue and target (red or green). On trials involving a color rule, neuronal activity was moderately enhanced when the cue and target appeared on opposite sides of their respective arrays. 
We conclude that the general function of SEF neurons selective for object-centered location is to represent where the cue and target are in their respective arrays rather than to represent the rule for target selection.

  15. A Prospective, Multicenter, Single-Blind Study Assessing Indices of SNAP II Versus BIS VISTA on Surgical Patients Undergoing General Anesthesia

    PubMed Central

    Bergese, Sergio D; Puente, Erika G; Marcus, R-Jay L; Krohn, Randall J; Docsa, Steven; Soto, Roy G; Candiotti, Keith A

    2017-01-01

    Background Traditionally, anesthesiologists have relied on nonspecific subjective and objective physical signs to assess patients’ comfort level and depth of anesthesia. Commercial development of electrical monitors, which use low- and high-frequency electroencephalogram (EEG) signals, have been developed to enhance the assessment of patients’ level of consciousness. Multiple studies have shown that monitoring patients’ consciousness levels can help in reducing drug consumption, anesthesia-related adverse events, and recovery time. This clinical study will provide information by simultaneously comparing the performance of the SNAP II (a single-channel EEG device) and the bispectral index (BIS) VISTA (a dual-channel EEG device) by assessing their efficacy in monitoring different anesthetic states in patients undergoing general anesthesia. Objective The primary objective of this study is to establish the range of index values for the SNAP II corresponding to each anesthetic state (preinduction, loss of response, maintenance, first purposeful response, and extubation). The secondary objectives will assess the range of index values for BIS VISTA corresponding to each anesthetic state compared to published BIS VISTA range information, and estimate the area under the curve, sensitivity, and specificity for both devices. Methods This is a multicenter, prospective, double-arm, parallel assignment, single-blind study involving patients undergoing elective surgery that requires general anesthesia. The study will include 40 patients and will be conducted at the following sites: The Ohio State University Medical Center (Columbus, OH); Northwestern University Prentice Women's Hospital (Chicago, IL); and University of Miami Jackson Memorial Hospital (Miami, FL). 
The study will assess the predictive value of SNAP II versus BIS VISTA indices at various anesthetic states in patients undergoing general anesthesia (preinduction, loss of response, maintenance, first purposeful response, and extubation). The SNAP II and BIS VISTA electrode arrays will be placed on the patient’s forehead on opposite sides. The hemisphere location for both devices’ electrodes will be equally alternated among the patient population. The index values for both devices will be recorded and correlated with the scorings received by performing the Modified Observer’s Assessment of Alertness and Sedation and the American Society of Anesthesiologists Continuum of Depth of Sedation, at different stages of anesthesia. Results Enrollment for this study has been completed and statistical data analyses are currently underway. Conclusions The results of this trial will provide information that will simultaneously compare the performance of SNAP II and BIS VISTA devices with regard to monitoring different anesthesia states among patients. Trial Registration Clinicaltrials.gov NCT00829803; https://clinicaltrials.gov/ct2/show/NCT00829803 (Archived by WebCite at http://www.webcitation.org/6nmyi8YKO) PMID:28159731

  16. Mosaic of coded aperture arrays

    DOEpatents

    Fenimore, Edward E.; Cannon, Thomas M.

    1980-01-01

    The present invention pertains to a mosaic of coded aperture arrays which is capable of imaging off-axis sources with minimum detector size. Mosaics of the basic array pattern create a circular, or periodic, correlation of the object on a section of the picture plane. This section consists of elements of the central basic pattern as well as elements from neighboring patterns and is a cyclic version of the basic pattern. Since all object points contribute a complete cyclic version of the basic pattern, a section of the picture, which is the size of the basic aperture pattern, contains all the information necessary to image the object with no artifacts.
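The cyclic-correlation property behind such coded apertures can be demonstrated in one dimension with a quadratic-residue (Legendre) aperture, whose matched decoding pattern has a near-delta cyclic correlation. This is a 1D toy illustration, not the patented 2D mosaic itself.

```python
import numpy as np

p = 59  # prime aperture length (p = 3 mod 4 gives good correlation properties)
residues = {(i * i) % p for i in range(1, p)}
aperture = np.array([1.0 if i in residues else 0.0 for i in range(p)])  # open/closed
decode = 2 * aperture - 1.0   # balanced decoding pattern
decode[0] = 1.0               # standard adjustment for the zero position

obj = np.zeros(p)
obj[[10, 30]] = [1.0, 0.5]    # two point sources of different brightness

# Each source casts a cyclically shifted copy of the aperture pattern
picture = np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(aperture)))

# Decoding = cyclic cross-correlation of the picture with the decoding pattern;
# the near-delta correlation of aperture and decode recovers the sources
recon = np.real(np.fft.ifft(np.fft.fft(picture) * np.conj(np.fft.fft(decode))))
print(int(np.argmax(recon)))  # index of the brightest reconstructed pixel
```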

  17. A boundary-Fitted Coordinate Code for General Two-Dimensional Regions with Obstacles and Boundary Intrusions.

    DTIC Science & Technology

    1983-03-01

    values of these functions on the two sides of the slits. The acceleration parameters for the iteration at each point are in the field array WACC(I,J)... code will calculate a locally optimum value at each point in the field, these values being placed in the field array WACC. This calculation is... changes in x and y, are calculated by calling subroutine ERROR.) The acceleration parameter is placed in the field array WACC. The addition to the

  18. Explosive hazard detection using MIMO forward-looking ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Shaw, Darren; Ho, K. C.; Stone, Kevin; Keller, James M.; Popescu, Mihail; Anderson, Derek T.; Luke, Robert H.; Burns, Brian

    2015-05-01

    This paper proposes a machine learning algorithm for subsurface object detection on multiple-input-multiple-output (MIMO) forward-looking ground-penetrating radar (FLGPR). By detecting hazards with FLGPR, standoff distances of up to tens of meters can be achieved, but at the cost of degraded performance due to high false alarm rates. The proposed system utilizes an anomaly detection prescreener to identify potential object locations. At each alarm location, multiple one-dimensional (1D) spectral features, two-dimensional (2D) spectral features, and log-Gabor statistic features are extracted. The ability of these features to reduce the number of false alarms and increase the probability of detection is evaluated for both co-polarizations present in the Akela MIMO array. Classification is performed by a Support Vector Machine (SVM) with lane-based cross-validation for training and testing. Class imbalance and optimized SVM kernel parameters are considered during classifier training.

  19. Nevada's Children: Selected Educational and Social Statistics. Nevada and National.

    ERIC Educational Resources Information Center

    Horner, Mary P., Comp.

    This statistical report describes the successes and shortcomings of education in Nevada and compares some statistics concerning education in Nevada to national norms. The report, which provides a comprehensive array of information helpful to policy makers and citizens, is divided into three sections. The first section presents statistics about…

  20. A Compact Optical Instrument with Artificial Neural Network for pH Determination

    PubMed Central

    Capel-Cuevas, Sonia; López-Ruiz, Nuria; Martinez-Olmos, Antonio; Cuéllar, Manuel P.; Pegalajar, Maria del Carmen; Palma, Alberto José; de Orbe-Payá, Ignacio; Capitán-Vallvey, Luis Fermin

    2012-01-01

    The aim of this work was the determination of pH with a sensor array-based optical portable instrument. This sensor array consists of eleven membranes with selective colour changes at different pH intervals. The method for the pH calculation is based on the implementation of artificial neural networks that use the responses of the membranes to generate a final pH value. A multi-objective algorithm was used to select the minimum number of sensing elements required to achieve an accurate pH determination from the neural network, and also to minimise the network size. This helps to minimise instrument and array development costs and save on microprocessor energy consumption. A set of artificial neural networks that fulfils these requirements is proposed using different combinations of the membranes in the sensor array, and is evaluated in terms of accuracy and reliability. In the end, the network including the response of the eleven membranes in the sensor was selected for validation in the instrument prototype because of its high accuracy. The performance of the instrument was evaluated by measuring the pH of a large set of real samples, showing that high precision can be obtained in the full range. PMID:22778668
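The sensor-array-to-pH mapping via a small neural network can be sketched as follows; this is a hedged illustration in which the eleven sigmoidal membrane responses and their transition points are invented stand-ins for the real membranes.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Synthetic training set: pH values and simulated membrane colour responses
ph = rng.uniform(0, 14, 600)
centers = np.linspace(1, 13, 11)   # hypothetical transition pH of 11 membranes
X = 1 / (1 + np.exp(-(ph[:, None] - centers)))   # sigmoidal response per membrane
X += 0.02 * rng.standard_normal(X.shape)         # measurement noise

Xtr, Xte, ytr, yte = train_test_split(X, ph, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(Xtr, ytr)

mae = float(np.mean(np.abs(net.predict(Xte) - yte)))
print(f"test MAE: {mae:.2f} pH units")
```

In the study, a multi-objective algorithm additionally selected which membranes to feed the network; the sketch above simply uses all eleven.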

  1. Array heterogeneity prevents catastrophic forgetting in infants

    PubMed Central

    Zosh, Jennifer M.; Feigenson, Lisa

    2015-01-01

    Working memory is limited in adults and infants. But unlike adults, infants whose working memory capacity is exceeded often fail in a particularly striking way: they do not represent any of the presented objects, rather than simply remembering as many objects as they can and ignoring anything further (Feigenson & Carey 2003, 2005). Here we explored the nature of this “catastrophic forgetting,” asking whether stimuli themselves modulate the way in which infants’ memory fails. We showed 13-month old infants object arrays that either were within or that exceeded working memory capacity—but, unlike previous experiments, presented objects with contrasting features. Although previous studies have repeatedly documented infants’ failure to represent four identical hidden objects, in Experiments 1 and 2 we found that infants who saw four contrasting objects hidden, and then retrieved just two of the four, successfully continued searching for the missing objects. Perceptual contrast between objects sufficed to drive this success; infants succeeded regardless of whether the different objects were contrastively labeled, and regardless of whether the objects were semantically familiar or completely novel. In Experiment 3 we explored the nature of this surprising success, asking whether array heterogeneity actually expanded infants’ working memory capacity or rather prevented catastrophic forgetting. We found that infants successfully continued searching after seeing four contrasting objects hidden and retrieving two of them, but not after retrieving three of them. This suggests that, like adults, infants were able to remember up to, but not beyond, the limits of their working memory capacity when representing heterogeneous arrays. PMID:25543889

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perry, Russlee; Farley, M.; Hansen, Gabriel

In 1995, the Chief Joseph Kokanee Enhancement Project was established to mitigate the loss of anadromous fish due to the construction of Chief Joseph and Grand Coulee dams. The objectives of the Chief Joseph Enhancement Project are to determine the status of resident kokanee (Oncorhynchus nerka) populations above Chief Joseph and Grand Coulee dams and to enhance kokanee and rainbow trout (Oncorhynchus mykiss) populations. Studies conducted at Grand Coulee Dam documented substantial entrainment of kokanee through turbines at the third powerhouse. In response to finding high entrainment at Grand Coulee Dam, the Independent Scientific Review Panel (ISRP) recommended investigating the use of strobe lights to repel fish from the forebay of the third powerhouse. Therefore, our study focused on the third powerhouse and how strobe lights affected fish behavior in this area. The primary objective of our study was to assess the behavioral response of kokanee and rainbow trout to strobe lights using 3D acoustic telemetry, which yields explicit spatial locations of fish in three dimensions. Our secondary objectives were to (1) use a 3D acoustic system to mobile track tagged fish in the forebay and upriver of Grand Coulee Dam and (2) determine the feasibility of detecting fish using a hydrophone mounted in the tailrace of the third powerhouse. Within the fixed hydrophone array located in the third powerhouse cul-de-sac, we detected 50 kokanee and 30 rainbow trout, accounting for 47% and 45%, respectively, of the fish released. Kokanee had a median residence time of 0.20 h and rainbow trout had a median residence time of 1.07 h. We detected more kokanee in the array at night than during the day, and more rainbow trout during the day than at night. In general, kokanee and rainbow trout approached along the eastern shore, and the relative frequency of kokanee and rainbow trout detections was highest along the eastern shoreline of the 3D array. 
However, because we released fish near the eastern shore, this approach pattern may have resulted from our release location. A high percentage of rainbow trout (60%) approached within 35 m of the eastern shore, while fewer kokanee (40%) did so; kokanee were more evenly distributed across the entrance to the third powerhouse cul-de-sac area. During each of the strobe light treatments, very few fish were detected within 25 m of the strobe lights. The spatial distribution of fish detections showed that relatively few tagged fish swam through the center of the array where the strobe lights were located. We detected 11 kokanee and 12 rainbow trout within 25 m of the strobe lights, accounting for 10% and 18%, respectively, of the fish released. Both species exhibited very short residence times within 25 m of the strobe lights. No attraction or repulsion behavior was observed within 25 m of the strobe lights. Directional vectors of both kokanee and rainbow trout indicate that both species passed the strobe lights by moving in a downstream direction and slightly towards the third powerhouse. We statistically analyzed fish behavior during treatments using a randomization test to compare the mean distance at which fish were detected from the strobe lights. We compared treatments separately for day and night, with the data constrained to three distances from the strobe lights (< 85 m, < 50 m, and < 25 m). For kokanee, the only significant randomization test (of 10 tests) occurred during the day for the 3-On treatment constrained to within 85 m of the strobe lights, where kokanee were significantly further from the strobe lights than during the Off treatment (randomization test, P < 0.004, Table 1.5). However, one other test had a low P-value (P = 0.064), where kokanee were closer to the lights during the 3-On treatment at night within 85 m of the strobe lights compared to the Off treatment. 
For rainbow trout, none of the 11 tests was significant, but one test had a low P-value (P = 0.04), with fish further from the strobe lights during the 6-On treatment, within 50 m, during the day (Table 1.5). During 2002, it is unclear whether tagged fish truly had little response to the strobe lights, or whether too few fish near the strobe lights and short residence times prevented us from detecting a behavioral response. Although fish tended to be slightly further from the strobe lights during the 3-On and 6-On treatments compared to the Off treatment, only one of the 21 statistical tests indicated that these differences were significant. However, within 25 m of the strobe lights we may have had little power to detect a difference due to the few fish available for statistical comparison. We detected 32 kokanee and 7 rainbow trout in the tailrace of Grand Coulee Dam, accounting for 30% and 12%, respectively, of the fish released.
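The randomization (permutation) test used in the report to compare mean fish-to-light distances between treatments can be sketched as follows; the distance values and the function name are hypothetical, not the study's data:

```python
import numpy as np

def randomization_test(dist_on, dist_off, n_perm=10000, seed=0):
    """Two-sample randomization test on the difference in mean
    distance from the strobe lights (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    observed = np.mean(dist_on) - np.mean(dist_off)
    pooled = np.concatenate([dist_on, dist_off])
    n_on = len(dist_on)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of treatment groups
        diff = pooled[:n_on].mean() - pooled[n_on:].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, hits / n_perm

# Hypothetical detection distances (m) under the 3-On and Off treatments
on = np.array([62.0, 71.5, 80.2, 55.3, 77.8])
off = np.array([48.1, 52.6, 60.4, 45.9, 58.7])
obs, p = randomization_test(on, off)
```

The p-value is simply the fraction of relabelings whose mean difference is at least as extreme as the observed one, which is why the report can apply the same test at several distance cutoffs without distributional assumptions.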

  3. 7-Hexagon Multifocal Electroretinography for an Objective Functional Assessment of the Macula in 14 Seconds.

    PubMed

    Schönbach, Etienne M; Chaikitmongkol, Voraporn; Annam, Rachel; McDonnell, Emma C; Wolfson, Yulia; Fletcher, Emily; Scholl, Hendrik P N

    2017-01-01

    We present the multifocal electroretinogram (mfERG) with a 7-hexagon array as an objective test of macular function that can be recorded in 14 s. We provide normal values and investigate its reproducibility and validity. Healthy participants underwent mfERG testing according to International Society for Clinical Electrophysiology of Vision (ISCEV) standards using the Espion Profile/D310 multifocal ERG system (Diagnosys, LLC, Lowell, MA, USA). One standard recording of a 61-hexagon array and 2 repeated recordings of a custom 7-hexagon array were obtained. A total of 13 subjects (mean age 46.9 years) were included. The median response densities were 12.5 nV/deg2 in the center and 5.2 nV/deg2 in the periphery. Intereye correlations were strong in both the center (ρCenter = 0.821; p < 0.0001) and the periphery (ρPeriphery = 0.862; p < 0.0001). Intraeye correlations were even stronger: ρCenter = 0.904 with p < 0.0001 and ρPeriphery = 0.955 with p < 0.0001. Bland-Altman plots demonstrated an acceptable retest mean difference in both the center and periphery, and narrow limits of agreement. We found strong correlations of the center (ρCenter = 0.826; p < 0.0001) and periphery (ρPeriphery = 0.848; p < 0.0001), with recordings obtained by the 61-hexagon method. The 7-hexagon mfERG provides reproducible results in agreement with results obtained according to the ISCEV standard. © 2017 S. Karger AG, Basel.
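The Bland-Altman analysis referred to above (retest mean difference and limits of agreement) can be sketched as follows; the response-density values are invented for illustration:

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement
    between two repeated measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical central response densities (nV/deg2) from two recordings
first = [12.5, 11.8, 13.0, 12.2]
second = [12.3, 12.0, 12.8, 12.6]
bias, (lo, hi) = bland_altman(first, second)
```

A small bias with narrow limits of agreement is what the abstract describes as acceptable retest agreement.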

  4. The VLITE Post-Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Richards, Emily E.; Clarke, Tracy; Peters, Wendy; Polisensky, Emil; Kassim, Namir E.

    2018-01-01

A post-processing pipeline to adaptively extract and catalog point sources is being developed to enhance the scientific value and accessibility of data products generated by the VLA Low-band Ionosphere and Transient Experiment (VLITE) on the Karl G. Jansky Very Large Array (VLA). In contrast to other radio sky surveys, the commensal observing mode of VLITE results in varying depths, sensitivities, and spatial resolutions across the sky based on the configuration of the VLA, location on the sky, and time on source specified by the primary observer for their independent science objectives. Therefore, previously developed tools and methods for generating source catalogs and survey statistics are not always appropriate for VLITE's diverse and growing set of data. A raw catalog of point sources extracted from every VLITE image will be created from source fit parameters stored in a queryable database. Point sources will be measured using the Python Blob Detector and Source Finder software (PyBDSF; Mohan & Rafferty 2015). Sources in the raw catalog will be associated with previous VLITE detections in a resolution- and sensitivity-dependent manner, and cross-matched to other radio sky surveys to aid in the detection of transient and variable sources. Final data products will include separate, tiered point source catalogs grouped by sensitivity limit and spatial resolution.

  5. Statistical results on restorative dentistry experiments: effect of the interaction between main variables

    PubMed Central

    CAVALCANTI, Andrea Nóbrega; MARCHI, Giselle Maria; AMBROSANO, Gláucia Maria Bovi

    2010-01-01

Statistical analysis interpretation is a critical field in scientific research. When more than one main variable is studied in a research project, the effect of the interaction between those variables is fundamental to the discussion of experiments. However, doubts can arise when the p-value of the interaction is greater than the significance level. Objective: To determine the most adequate interpretation for factorial experiments with p-values of the interaction slightly higher than the significance level. Materials and methods: The p-values of the interactions found in two restorative dentistry experiments (0.053 and 0.068) were interpreted in two distinct ways: treating the interaction as not significant and as significant. Results: Different findings were observed between the two analyses, and study results became more coherent when the interaction was treated as significant. Conclusion: The p-value of the interaction between main variables must be analyzed with caution because it can change the outcomes of research studies. Researchers are strongly advised to interpret the results of their statistical analyses carefully in order to discuss the findings of their experiments properly. PMID:20857003

  6. Effect of Abdominoplasty in the Lipid Profile of Patients with Dyslipidemia

    PubMed Central

    Ramos-Gallardo, Guillermo; Pérez Verdin, Ana; Fuentes, Miguel; Godínez Gutiérrez, Sergio; Ambriz-Plascencia, Ana Rosa; González-García, Ignacio; Gómez-Fonseca, Sonia Mericia; Madrigal, Rosalio; González-Reynoso, Luis Iván; Figueroa, Sandra; Toscano Igartua, Xavier; Jiménez Gutierrez, Déctor Francisco

    2013-01-01

Introduction. Dyslipidemia, like other chronic degenerative diseases, is pandemic in Latin America and around the world. Many patients seeking body contouring surgery may have the disease without knowing it. Objective. Observe the lipid profile of patients with dyslipidemia, before and three months after an abdominoplasty. Methods. Patients who were candidates for abdominoplasty, without morbid obesity, were followed before and three months after the surgery. We compared the lipid profile, glucose, insulin, and HOMA (a cardiovascular risk marker) before and three months after the surgery. We used Student's t test to compare the results; a P value less than 0.05 was considered significant. Results. Twenty-six patients were observed before and after the surgery. At the third month, we found statistical differences only in LDL and triglyceride values (P = 0.04 and P = 0.03). The rest of the metabolic values did not reach statistical significance. Conclusion. In this group of patients with dyslipidemia, at the third month, only LDL and triglyceride values reached statistical significance. There was no significant change in glucose, insulin, HOMA, cholesterol, VLDL, or HDL. PMID:23956856
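A paired comparison of the kind used here (Student's t test on pre- and post-surgery values) can be computed directly with numpy; the LDL numbers below are illustrative, not the study's data, and 2.571 is the standard two-sided 5% critical value for 5 degrees of freedom:

```python
import numpy as np

# Hypothetical pre/post LDL values (mg/dL) for six patients
ldl_before = np.array([148.0, 152.0, 160.0, 139.0, 155.0, 150.0])
ldl_after = np.array([140.0, 147.0, 151.0, 135.0, 149.0, 146.0])

# Paired t statistic: mean difference over its standard error
diffs = ldl_before - ldl_after
t = diffs.mean() / (diffs.std(ddof=1) / np.sqrt(len(diffs)))

# |t| above the critical value for df = n - 1 = 5 means P < 0.05
significant = abs(t) > 2.571
```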

  7. Refocusing-range and image-quality enhanced optical reconstruction of 3-D objects from integral images using a principal periodic δ-function array

    NASA Astrophysics Data System (ADS)

    Ai, Lingyu; Kim, Eun-Soo

    2018-03-01

We propose a method for refocusing-range and image-quality enhanced optical reconstruction of three-dimensional (3-D) objects from integral images using only a 3 × 3 periodic δ-function array (PDFA), called a principal PDFA (P-PDFA). By directly convolving the elemental image array (EIA) captured from 3-D objects with P-PDFAs whose spatial periods correspond to each object's depth, a set of spatially-filtered EIAs (SF-EIAs) is extracted, from which 3-D objects can be reconstructed refocused at their real depths. Since the convolution operations are performed directly on each of the minimum 3 × 3 EIs of the picked-up EIA, the capturing and refocused-depth ranges of 3-D objects can be greatly enhanced, and 3-D objects with much improved image quality can be reconstructed without any preprocessing operations. Through ray-optical analysis and optical experiments with actual 3-D objects, the feasibility of the proposed method has been confirmed.

  8. Measurement of high-voltage and radiation-damage limitations to advanced solar array performance

    NASA Technical Reports Server (NTRS)

    Guidice, D. A.; Severance, P. S.; Keinhardt, K. C.

    1991-01-01

    A description is given of the reconfigured Photovoltaic Array Space Power (PASP) Plus experiment: its objectives, solar-array complement, and diagnostic sensors. Results from a successful spaceflight will lead to a better understanding of high-voltage and radiation-damage limitations in the operation of new-technology solar arrays.

  9. Intelligent data processing of an ultrasonic sensor system for pattern recognition improvements

    NASA Astrophysics Data System (ADS)

    Na, Seung You; Park, Min-Sang; Hwang, Won-Gul; Kee, Chang-Doo

    1999-05-01

Though conventional time-of-flight ultrasonic sensor systems are popular due to their low cost and simplicity, their usage is rather narrowly restricted to object detection and distance readings. There is a strong need to enlarge the amount of environmental information available to mobile applications in order to provide intelligent autonomy. Wide sectors of such neighboring-object recognition problems can be satisfactorily handled with coarse vision data, such as sonar maps, instead of accurate laser or optic measurements. For object pattern recognition, ultrasonic sensors have the inherent shortcomings of poor directionality and specularity, which result in low spatial resolution and indistinctiveness of object patterns. To resolve these problems, arrays with an increased number of sensor elements have been used for large objects. In this paper we propose a sensor array system with improved recognition capability, using electronic circuits accompanying the sensor array and neuro-fuzzy processing for data fusion. The circuit changes the transmitter output voltages of the array elements in several steps. Relying upon the known sensor characteristics, a set of different return signals from neighboring sensors is manipulated to provide enhanced pattern recognition of the inclination angle, size, and shift, as well as the distance, of objects. The results show improved resolution of the measurements for smaller targets.

  10. Color Image Segmentation Based on Statistics of Location and Feature Similarity

    NASA Astrophysics Data System (ADS)

    Mori, Fumihiko; Yamada, Hiromitsu; Mizuno, Makoto; Sugano, Naotoshi

The process of “image segmentation and extracting remarkable regions” is an important research subject in image understanding. However, algorithms based on global features are rarely found. The requirement for such an image segmentation algorithm is to reduce over-segmentation and over-unification as much as possible. We developed an algorithm that uses a multidimensional convex hull based on density as the global feature. Specifically, we propose a new algorithm in which regions are expanded according to region statistics, such as the mean, standard deviation, maximum, and minimum of pixel location, brightness, and color elements, with the statistics updated as regions grow. We also introduce a new concept of conspicuity degree and applied it to 21 different images to examine its effectiveness. The remarkable object regions extracted by the presented system coincided closely with those pointed out by the sixty-four subjects who took part in the psychological experiment.

  11. The theory precision analyse of RFM localization of satellite remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Jianqing; Xv, Biao

    2009-11-01

The traditional method of assessing the precision of the Rational Function Model (RFM) uses a large number of check points, computing the mean square error by comparing calculated coordinates with known coordinates. This method comes from probability theory: the mean square error is estimated from a large number of samples, and the estimate can be considered to approach the true value when the samples are sufficient. This paper instead approaches the problem from the perspective of survey adjustment, taking the law of propagation of error as its theoretical basis, and calculates the theoretical precision of RFM localization. SPOT5 three-line array imagery is then used as experimental data, and the results of the traditional method and the method described in this paper are compared. The comparison confirms that the traditional method is feasible and answers the question of its theoretical precision from the survey-adjustment perspective.
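The law of propagation of error cited as the theoretical basis can be illustrated with a minimal first-order sketch: for y = f(x) with Jacobian J and input covariance Sx, the output covariance is Sy = J Sx J^T (the numbers here are arbitrary):

```python
import numpy as np

# Toy model y = x1 + 2*x2, so the Jacobian is constant
J = np.array([[1.0, 2.0]])
Sx = np.diag([0.04, 0.01])   # uncorrelated input variances
Sy = J @ Sx @ J.T            # propagated output covariance
# var(y) = 1**2 * 0.04 + 2**2 * 0.01 = 0.08
```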

  12. Piecewise polynomial representations of genomic tracks.

    PubMed

    Tarabichi, Maxime; Detours, Vincent; Konopka, Tomasz

    2012-01-01

    Genomic data from micro-array and sequencing projects consist of associations of measured values to chromosomal coordinates. These associations can be thought of as functions in one dimension and can thus be stored, analyzed, and interpreted as piecewise-polynomial curves. We present a general framework for building piecewise polynomial representations of genome-scale signals and illustrate some of its applications via examples. We show that piecewise constant segmentation, a typical step in copy-number analyses, can be carried out within this framework for both array and (DNA) sequencing data offering advantages over existing methods in each case. Higher-order polynomial curves can be used, for example, to detect trends and/or discontinuities in transcription levels from RNA-seq data. We give a concrete application of piecewise linear functions to diagnose and quantify alignment quality at exon borders (splice sites). Our software (source and object code) for building piecewise polynomial models is available at http://sourceforge.net/projects/locsmoc/.
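The piecewise-constant segmentation step mentioned for copy-number analyses can be sketched as a least-squares search for a single breakpoint (a toy version, not the package's actual algorithm):

```python
import numpy as np

def best_single_breakpoint(y):
    """Return the split index minimizing the total squared error of a
    one-breakpoint piecewise-constant fit to a 1-D signal."""
    best_k, best_sse = None, np.inf
    for k in range(1, len(y)):
        left, right = y[:k], y[k:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

# Synthetic copy-number-like track with a level shift at index 50
y = np.concatenate([np.full(50, 2.0), np.full(50, 3.0)])
k = best_single_breakpoint(y)  # → 50
```

Real segmenters recurse on the sub-segments (or use dynamic programming) to place multiple breakpoints.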

  13. Evaluating focused ion beam patterning for position-controlled nanowire growth using computer vision

    NASA Astrophysics Data System (ADS)

    Mosberg, A. B.; Myklebost, S.; Ren, D.; Weman, H.; Fimland, B. O.; van Helvoort, A. T. J.

    2017-09-01

To efficiently evaluate the novel approach of focused ion beam (FIB) direct patterning of substrates for nanowire growth, a reference matrix of hole arrays has been used to study the effect of ion fluence and hole diameter on nanowire growth. Self-catalyzed GaAsSb nanowires were grown using molecular beam epitaxy and studied by scanning electron microscopy (SEM). To ensure an objective analysis, SEM images were analyzed with computer vision to automatically identify nanowires and characterize each array. It is shown that FIB milling parameters can be used to control nanowire growth. Lower ion fluence and smaller hole diameters result in a higher yield (up to 83%) of single vertical nanowires, while higher fluence and hole diameter exhibit a regime of multiple nanowires. The catalyst size distribution and placement uniformity of vertical nanowires are best for low-value parameter combinations, indicating how to improve the FIB parameters for position-controlled nanowire growth.

  14. High intensity click statistics from a 10 × 10 avalanche photodiode array

    NASA Astrophysics Data System (ADS)

    Kröger, Johannes; Ahrens, Thomas; Sperling, Jan; Vogel, Werner; Stolz, Heinrich; Hage, Boris

    2017-11-01

    Photon-number measurements are a fundamental technique for the discrimination and characterization of quantum states of light. Beyond the abilities of state-of-the-art devices, we present measurements with an array of 100 avalanche photodiodes exposed to photon-numbers ranging from well below to significantly above one photon per diode. Despite each single diode only discriminating between zero and non-zero photon-numbers we were able to extract a second order moment, which acts as a nonclassicality indicator. We demonstrate a vast enhancement of the applicable intensity range by two orders of magnitude relative to the standard application of such devices. It turns out that the probabilistic mapping of arbitrary photon-numbers on a finite number of registered clicks is not per se a disadvantage compared with true photon counters. Such detector arrays can bridge the gap between single-photon and linear detection, by investigation of the click statistics, without the necessity of photon statistics reconstruction.
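The relation between click statistics and photon statistics can be simulated: for coherent (Poissonian) light on an array of N on/off diodes, the click number is binomial, so the binomial moment-based indicator Q_B = N*var/(mean*(N - mean)) - 1 stays near zero (a sketch with arbitrary parameters, not the experiment's data):

```python
import numpy as np

rng = np.random.default_rng(1)
N, mu, trials = 100, 50.0, 20000   # diodes, mean photon number, repetitions

# Each trial: Poisson photon number; photons land uniformly on N diodes;
# a diode "clicks" if it receives at least one photon.
clicks = np.empty(trials)
for i in range(trials):
    n_ph = rng.poisson(mu)
    hit = rng.integers(0, N, size=n_ph)
    clicks[i] = len(np.unique(hit))

mean, var = clicks.mean(), clicks.var()
Q_B = N * var / (mean * (N - mean)) - 1   # ~0 for coherent light
```

In the click-counting formalism, a significantly negative Q_B (sub-binomial click statistics) is the kind of second-order nonclassicality indicator the abstract describes, obtained without reconstructing the photon statistics.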

  15. Ceramic processing: Experimental design and optimization

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.; Lauben, David N.; Madrid, Philip

    1992-01-01

    The objectives of this paper are to: (1) gain insight into the processing of ceramics and how green processing can affect the properties of ceramics; (2) investigate the technique of slip casting; (3) learn how heat treatment and temperature contribute to density, strength, and effects of under and over firing to ceramic properties; (4) experience some of the problems inherent in testing brittle materials and learn about the statistical nature of the strength of ceramics; (5) investigate orthogonal arrays as tools to examine the effect of many experimental parameters using a minimum number of experiments; (6) recognize appropriate uses for clay based ceramics; and (7) measure several different properties important to ceramic use and optimize them for a given application.
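Objective (5) above concerns orthogonal arrays; their defining property, that every pair of factor columns contains each level combination equally often, can be checked directly (using the standard two-level L4 array as an example):

```python
import numpy as np

# Taguchi L4(2^3) orthogonal array: 4 runs, 3 two-level factors
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

def is_orthogonal(arr):
    """True if every pair of two-level columns contains each of the
    four level combinations equally often."""
    n, k = arr.shape
    for i in range(k):
        for j in range(i + 1, k):
            pairs = list(zip(arr[:, i], arr[:, j]))
            counts = {p: pairs.count(p) for p in set(pairs)}
            if len(counts) != 4 or len(set(counts.values())) != 1:
                return False
    return True
```

Three two-level factors are covered in four runs instead of the 2^3 = 8 runs of a full factorial, which is the "minimum number of experiments" idea in the abstract.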

  16. Statistical analysis of atmospheric turbulence about a simulated block building

    NASA Technical Reports Server (NTRS)

    Steely, S. L., Jr.

    1981-01-01

    An array of towers instrumented to measure the three components of wind speed was used to study atmospheric flow about a simulated block building. Two-point spacetime correlations of the longitudinal velocity component were computed along with two-point spatial correlations. These correlations are in good agreement with fundamental concepts of fluid mechanics. The two-point spatial correlations computed directly were compared with correlations predicted by Taylor's hypothesis and excellent agreement was obtained at the higher levels which were out of the building influence. The correlations fall off significantly in the building wake but recover beyond the wake to essentially the same values in the undisturbed, higher regions.

  17. Statistical inference for classification of RRIM clone series using near IR reflectance properties

    NASA Astrophysics Data System (ADS)

    Ismail, Faridatul Aima; Madzhi, Nina Korlina; Hashim, Hadzli; Abdullah, Noor Ezan; Khairuzzaman, Noor Aishah; Azmi, Azrie Faris Mohd; Sampian, Ahmad Faiz Mohd; Harun, Muhammad Hafiz

    2015-08-01

RRIM clones are rubber breeding series produced by the RRIM (Rubber Research Institute of Malaysia) through its rubber breeding program to improve latex yield and produce clones attractive to farmers. The objective of this work is to analyse measurements from an optical sensing device on latex of selected clone series. The device transmits NIR light, and its reflectance is converted into a voltage. The reflectance index values obtained via this voltage were analysed using statistical techniques in order to find the discrimination among the clones. From the statistical results using error plots and a one-way ANOVA test, there is overwhelming evidence of discrimination among the RRIM 2002, RRIM 2007 and RRIM 3001 clone series, with p-value = 0.000. RRIM 2008 cannot be discriminated from RRIM 2014; however, both of these groups are distinct from the other clones.
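The one-way ANOVA used to discriminate the clone series reduces to an F statistic that can be computed directly (the reflectance-index numbers below are made up for illustration):

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of sample arrays."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, n = len(groups), len(all_x)
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical reflectance-index readings for three clone series
clone_a = np.array([1.0, 2.0, 3.0])
clone_b = np.array([2.0, 3.0, 4.0])
clone_c = np.array([6.0, 7.0, 8.0])
F = one_way_anova_F([clone_a, clone_b, clone_c])  # → 21.0
```

A large F (compared against the F distribution with k-1 and n-k degrees of freedom) is what yields the small p-values reported in the abstract.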

  18. Implementation of total focusing method for phased array ultrasonic imaging on FPGA

    NASA Astrophysics Data System (ADS)

    Guo, JianQiang; Li, Xi; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2015-02-01

This paper describes a multi-FPGA imaging system dedicated to real-time imaging using the Total Focusing Method (TFM) with Full Matrix Capture (FMC). The system was described entirely in the Verilog HDL language and implemented on an Altera Stratix IV GX FPGA development board. The algorithm proceeds as follows: establish an image coordinate system and divide it into grids; calculate the complete acoustic path between each transmitting and receiving array element, and transform it into an index value; index the sound pressure values from ROM and superimpose them to obtain the pixel value of one focal point; and repeat for all focal points to form the final image. The imaging results show that the algorithm produces defect images with a high SNR. The FPGA's parallel processing capability provides high speed, so the system offers a complete, high-performance imaging interface.
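The TFM steps listed above (grid the image, compute transmit-receive path delays, index the stored A-scans, and sum) can be sketched as a software reference model; the geometry, sampling rate, and synthetic data below are arbitrary, and this is not the paper's FPGA implementation:

```python
import numpy as np

def tfm_image(fmc, el_x, grid_x, grid_z, c, fs):
    """Delay-and-sum TFM: fmc[t, r] is the A-scan for transmitter t
    and receiver r; each pixel sums the samples at its round-trip delay."""
    n_el = len(el_x)
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            d = np.sqrt((el_x - x) ** 2 + z ** 2)  # element-to-pixel paths
            acc = 0.0
            for t in range(n_el):
                for r in range(n_el):
                    idx = int((d[t] + d[r]) / c * fs)  # delay -> sample index
                    if idx < fmc.shape[2]:
                        acc += fmc[t, r, idx]
            img[iz, ix] = abs(acc)
    return img

# Synthetic FMC data: one point scatterer at (x, z) = (0 mm, 10 mm)
c, fs = 1500.0, 1.0e6                # sound speed (m/s), sampling rate (Hz)
el_x = np.linspace(-5e-3, 5e-3, 4)   # 4-element linear array
d0 = np.sqrt(el_x ** 2 + (10e-3) ** 2)
fmc = np.zeros((4, 4, 200))
for t in range(4):
    for r in range(4):
        fmc[t, r, int((d0[t] + d0[r]) / c * fs)] = 1.0

grid_x = np.array([-2e-3, 0.0, 2e-3])
grid_z = np.array([8e-3, 10e-3, 12e-3])
img = tfm_image(fmc, el_x, grid_x, grid_z, c, fs)
# the brightest pixel falls at the scatterer position (row z = 10 mm, col x = 0)
```

The paper's contribution is mapping exactly this delay-index-and-sum loop onto parallel FPGA hardware, with the delays precomputed as ROM index values.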

  19. ArraySolver: an algorithm for colour-coded graphical display and Wilcoxon signed-rank statistics for comparing microarray gene expression data.

    PubMed

    Khan, Haseeb Ahmad

    2004-01-01

The massive surge in the production of microarray data poses a great challenge for proper analysis and interpretation. In recent years numerous computational tools have been developed to extract meaningful interpretation of microarray gene expression data. However, a convenient tool for two-groups comparison of microarray data is still lacking and users have to rely on commercial statistical packages that might be costly and require special skills, in addition to extra time and effort for transferring data from one platform to another. Various statistical methods, including the t-test, analysis of variance, Pearson test and Mann-Whitney U test, have been reported for comparing microarray data, whereas the utilization of the Wilcoxon signed-rank test, which is an appropriate test for two-groups comparison of gene expression data, has largely been neglected in microarray studies. The aim of this investigation was to build an integrated tool, ArraySolver, for colour-coded graphical display and comparison of gene expression data using the Wilcoxon signed-rank test. The results of software validation showed similar outputs from ArraySolver and SPSS for large datasets, whereas ArraySolver appeared to be more accurate for 25 or fewer pairs (n ≤ 25), suggesting its potential application in analysing molecular signatures that usually contain small numbers of genes. The main advantages of ArraySolver are easy data selection, convenient report format, accurate statistics and the familiar Excel platform.
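The core of the Wilcoxon signed-rank comparison that ArraySolver applies to paired expression values can be sketched as follows (toy expression values; ties in the absolute differences are assumed absent for simplicity):

```python
import numpy as np

def wilcoxon_w(x, y):
    """Wilcoxon signed-rank statistic W: rank the nonzero |differences|
    (no ties assumed) and return the smaller of the signed rank sums."""
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0]                                   # drop zero differences
    ranks = np.argsort(np.argsort(np.abs(d))) + 1   # ranks 1..n of |d|
    return min(ranks[d > 0].sum(), ranks[d < 0].sum())

# Hypothetical expression values for five genes in two conditions
cond_a = [2.1, 3.4, 1.8, 2.9, 3.3]
cond_b = [1.9, 3.0, 2.05, 2.55, 2.7]
W = wilcoxon_w(cond_a, cond_b)  # → 2
```

A small W relative to its null distribution indicates a systematic shift between the two groups; a full implementation would handle ties with average ranks and compute an exact or normal-approximation p-value.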

  1. A 3T Sodium and Proton Composite Array Breast Coil

    PubMed Central

    Kaggie, Joshua D.; Hadley, J. Rock; Badal, James; Campbell, John R.; Park, Daniel J.; Parker, Dennis L.; Morrell, Glen; Newbould, Rexford D.; Wood, Ali F.; Bangerter, Neal K.

    2013-01-01

    Purpose The objective of this study was to determine whether a sodium phased array would improve sodium breast MRI at 3T. The secondary objective was to create acceptable proton images with the sodium phased array in place. Methods A novel composite array for combined proton/sodium 3T breast MRI is compared to a coil with a single proton and sodium channel. The composite array consists of a 7-channel sodium receive array, a larger sodium transmit coil, and a 4-channel proton transceive array. The new composite array design utilizes smaller sodium receive loops than typically used in sodium imaging, uses novel decoupling methods between the receive loops and transmit loops, and uses a novel multi-channel proton transceive coil. The proton transceive coil reduces coupling between proton and sodium elements by intersecting the constituent loops to reduce their mutual inductance. The coil used for comparison consists of a concentric sodium and proton loop with passive decoupling traps. Results The composite array coil demonstrates a 2–5x improvement in SNR for sodium imaging and similar SNR for proton imaging when compared to a simple single-loop dual resonant design. Conclusion The improved SNR of the composite array gives breast sodium images of unprecedented quality in reasonable scan times. PMID:24105740

  2. Students' Spatial Structuring of 2D Arrays of Squares.

    ERIC Educational Resources Information Center

    Battista, Michael T.; Clements, Douglas H.; Arnoff, Judy; Battista, Kathryn; Van Auken Borrow, Caroline

    1998-01-01

    Defines spatial structuring as the mental operation of constructing an organization or form for an object/set of objects. Examines in detail students' structuring and enumeration of two-dimensional rectangular arrays of squares. Concludes that many students do not see row-by-column structure. Describes various levels of sophistication in students'…

  3. Low-cost silicon solar array project environmental hail model for assessing risk to solar collectors

    NASA Technical Reports Server (NTRS)

    Gonzalez, C.

    1977-01-01

    The probability of solar arrays being struck by hailstones of various sizes as a function of geographic location and service life was assessed. The study complements parallel studies of solar array sensitivity to hail damage, the final objective being an estimate of the most cost effective level for solar array hail protection.

  4. Cross correlation anomaly detection system

    NASA Technical Reports Server (NTRS)

    Micka, E. Z. (Inventor)

    1975-01-01

    This invention provides a method for automatically inspecting the surface of an object, such as an integrated circuit chip, whereby data obtained from the light reflected from the surface by a scanning light beam are automatically compared with data representing acceptable values for each unique surface. A signal output is provided indicating acceptance or rejection of the chip. Acceptance is based on predetermined statistical confidence intervals calculated from known good regions of the object being tested, or from their representative values. The method can utilize a known good chip, a photographic mask from which the I.C. was fabricated, or a computer-stored replica of each pattern being tested.

  5. (abstract) Scaling Nominal Solar Cell Impedances for Array Design

    NASA Technical Reports Server (NTRS)

    Mueller, Robert L; Wallace, Matthew T.; Iles, Peter

    1994-01-01

    This paper discusses a task whose objective is to characterize solar cell array AC impedance and develop scaling rules for the impedance characterization of large arrays by testing single solar cells and small arrays. This effort is aimed at formulating a methodology for estimating the AC impedance of the Mars Pathfinder (MPF) cruise and lander solar arrays based upon testing single cells and small solar cell arrays, and at creating a basis for the design of a single shunt limiter for MPF power control of flight solar arrays having very different impedances.

  6. Teacher's Guide: Social Studies, 5.

    ERIC Educational Resources Information Center

    Cortland-Madison Board of Cooperative Educational Services, Cortland, NY.

    Part of a sequential K-12 program, this teacher's guide provides objectives and activities for students in grade 5. Five major sections correspond to learning, inquiry, and discussion skills, concepts, and values and moral reasoning. Learning skills include listening, speaking, viewing, reading, writing, map, and statistical abilities. Students…

  7. Microarray expression profiling identifies genes with altered expression in HDL-deficient mice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callow, Matthew J.; Dudoit, Sandrine; Gong, Elaine L.

    2000-05-05

    Based on the assumption that severe alterations in the expression of genes known to be involved in HDL metabolism may affect the expression of other genes, we screened an array of over 5000 mouse expressed sequence tags (ESTs) for altered gene expression in the livers of two lines of mice with dramatic decreases in HDL plasma concentrations. Labeled cDNA from livers of apolipoprotein AI (apo AI) knockout mice, Scavenger Receptor BI (SR-BI) transgenic mice and control mice were co-hybridized to microarrays. Two-sample t-statistics were used to identify genes with altered expression levels in the knockout or transgenic mice compared with the control mice. In the SR-BI group we found 9 array elements representing at least 5 genes to be significantly altered on the basis of an adjusted p value of less than 0.05. In the apo AI knockout group 8 array elements representing 4 genes were altered compared with the control group (p < 0.05). Several of the genes identified in the SR-BI transgenic mice suggest altered sterol metabolism and oxidative processes. These studies illustrate the use of multiple-testing methods for the identification of genes with altered expression in replicated microarray experiments of apo AI knockout and SR-BI transgenic mice.
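
    A minimal sketch of the analysis pipeline described here — per-gene two-sample t-statistics followed by a multiple-testing adjustment — on simulated data. The paper's exact adjustment method is not given in this abstract; Bonferroni is used as a simple stand-in, and all data are invented:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_genes, n_rep = 1000, 8

# Simulated log-expression: replicate arrays for knockout vs. control.
# Genes 0-4 are given a true shift; the rest are null (assumed data).
ko = rng.normal(0.0, 0.5, size=(n_genes, n_rep))
ctl = rng.normal(0.0, 0.5, size=(n_genes, n_rep))
ko[:5] += 3.0

# Per-gene two-sample t-test, then a Bonferroni adjustment
t, p = ttest_ind(ko, ctl, axis=1)
p_adj = np.minimum(p * n_genes, 1.0)
hits = np.flatnonzero(p_adj < 0.05)
print(hits)
```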

  8. EGS hydraulic stimulation monitoring by surface arrays - location accuracy and completeness magnitude: the Basel Deep Heat Mining Project case study

    NASA Astrophysics Data System (ADS)

    Häge, Martin; Blascheck, Patrick; Joswig, Manfred

    2013-01-01

    The potential and limits of monitoring induced seismicity by surface-based mini arrays was evaluated for the hydraulic stimulation of the Basel Deep Heat Mining Project. This project aimed at the exploitation of geothermal heat from a depth of about 4,630 m. As reference for our results, a network of borehole stations by Geothermal Explorers Ltd. provided ground truth information. We utilized array processing, sonogram event detection and outlier-resistant, graphical jackknife location procedures to compensate for the decrease in signal-to-noise ratio at the surface. We could correctly resolve the NNW-SSE striking fault plane by relative master event locations. Statistical analysis of our catalog data yielded a completeness magnitude of M_L 0.36, but with significant day-to-night dependency. To compare with the performance of the borehole data, which have a completeness magnitude of M_W 0.9, we applied two methods for converting M_L to M_W, which raised our M_C to M_W values in the range of 0.99-1.13. Further, the b value for the duration of our measurement was calculated as 1.14 (relative to M_L) and 1.66 (relative to M_W), but changes over time could not be resolved from the error bars.
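
    The b value reported here is conventionally estimated by maximum likelihood; a sketch using the Aki (1965) estimator on a synthetic Gutenberg-Richter catalog (the catalog is simulated, not the Basel data):

```python
import numpy as np

def aki_b_value(mags, mc):
    """Maximum-likelihood b value (Aki, 1965) for events above the
    completeness magnitude mc: b = log10(e) / (mean(M) - mc)."""
    m = np.asarray(mags)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - mc)

# Synthetic catalog following Gutenberg-Richter with b = 1.14 above
# Mc = 0.36 (magnitudes above Mc are exponentially distributed).
rng = np.random.default_rng(2)
b_true = 1.14
mags = 0.36 + rng.exponential(scale=np.log10(np.e) / b_true, size=20000)
print(round(aki_b_value(mags, 0.36), 2))
```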

  9. Spatio-temporal dimension of lightning flashes based on three-dimensional Lightning Mapping Array

    NASA Astrophysics Data System (ADS)

    López, Jesús A.; Pineda, Nicolau; Montanyà, Joan; Velde, Oscar van der; Fabró, Ferran; Romero, David

    2017-11-01

    3D mapping systems like the LMA (Lightning Mapping Array) are a leap forward in lightning observation. LMA measurements have led to an improvement in the analysis of the fine structure of lightning, allowing the duration and maximum extension of the cloud fraction of a lightning flash to be characterized. During several years of operation, the first LMA deployed in Europe has been providing a large amount of data, which now allows a statistical approach to computing the full duration and horizontal extension of the in-cloud phase of a lightning flash. The "Ebro Lightning Mapping Array" (ELMA) is used in the present study. Summer and winter lightning were analyzed for seasonal periods (Dec-Feb and Jun-Aug). A simple method based on an ellipse fitting technique (EFT) has been used to characterize the spatio-temporal dimensions of a set of about 29,000 lightning flashes including both summer and winter events. Results show an average lightning flash duration of 440 ms (450 ms in winter) and a horizontal maximum length of 15.0 km (18.4 km in winter). The uncertainties for summer lightning lengths were about ±1.2 km and ±0.7 km for the mean and median values, respectively. For winter lightning, the uncertainty reaches 1 km for the mean and 0.7 km for the median. The flashes successfully correlated with CG discharges using the EFT method represent 6.9% and 35.5% of the total LMA flashes detected in summer and winter, respectively. Additionally, the median value of lightning lengths calculated through this correlative method was approximately 17 km for both seasons. The highest median ratios of lightning length to CG discharges in both summer and winter were reported for positive CG discharges.
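
    The ellipse fitting technique (EFT) is not detailed in this abstract; one common approach fits an ellipse to the cloud of LMA source locations via the covariance matrix of the points. A sketch on a synthetic flash, with 2-sigma axes as an assumed convention:

```python
import numpy as np

def flash_ellipse(x, y, n_std=2.0):
    """Fit an ellipse to LMA source locations (km) via the covariance
    matrix; returns full (major, minor) axis lengths in km."""
    pts = np.column_stack([x, y])
    cov = np.cov(pts, rowvar=False)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending
    return tuple(2 * n_std * np.sqrt(evals))

rng = np.random.default_rng(3)
# Synthetic flash: sources stretched along a ~15 km channel (assumed
# data); sigma chosen so the 2-sigma full span is about 15 km.
x = rng.normal(0.0, 15.0 / 4, size=400)
y = rng.normal(0.0, 1.0, size=400)
major, minor = flash_ellipse(x, y)
print(f"length ~ {major:.1f} km, width ~ {minor:.1f} km")
```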

  10. Diffraction mode terahertz tomography

    DOEpatents

    Ferguson, Bradley; Wang, Shaohong; Zhang, Xi-Cheng

    2006-10-31

    A method of obtaining a series of images of a three-dimensional object. The method includes the steps of transmitting pulsed terahertz (THz) radiation through the entire object from a plurality of angles, optically detecting changes in the transmitted THz radiation using pulsed laser radiation, and constructing a plurality of imaged slices of the three-dimensional object using the detected changes in the transmitted THz radiation. The THz radiation is transmitted through the object as a two-dimensional array of parallel rays. The optical detection is an array of detectors such as a CCD sensor.

  11. Study of labor-negotiation productivity concerns in the petroleum-refining industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, J.E.

    The primary objective of this study was to identify productivity factors relevant to negotiating future labor contracts with the Oil, Chemical and Atomic Workers International Union (OCAWIU). A Delphi research method comprising three rounds was used to accomplish this purpose. Round one involved the use of an open instrument to solicit productivity factors that would be beneficial in future negotiations with the OCAWIU. In round two, two separate instruments were sent to the panel members, who were asked to judge the value of each item on the first instrument and to rank the ten most significant items on the second. The round three instruments were individualized for each panel member. The productivity items were rated by the panel members, and descriptive statistics were used to describe the combined order of listings and weights for determining the relative importance of each factor in the consensus model. Nonparametric statistics were used to examine the degree of consensus between the mean values on the first instrument and the ranked values on the second instrument. No significant differences were found. Twenty-five productivity items were identified and prioritized as viable negotiable items with the OCAWIU.

  12. Spectral statistics and scattering resonances of complex primes arrays

    NASA Astrophysics Data System (ADS)

    Wang, Ren; Pinheiro, Felipe A.; Dal Negro, Luca

    2018-01-01

    We introduce a class of aperiodic arrays of electric dipoles generated from the distribution of prime numbers in complex quadratic fields (Eisenstein and Gaussian primes) as well as quaternion primes (Hurwitz and Lifschitz primes), and study the nature of their scattering resonances using the vectorial Green's matrix method. In these systems we demonstrate several distinctive spectral properties, such as the absence of level repulsion in the strongly scattering regime, critical statistics of level spacings, and the existence of critical modes, which are extended fractal modes with long lifetimes not supported by either random or periodic systems. Moreover, we show that one can predict important physical properties, such as the existence of spectral gaps, by analyzing the eigenvalue distribution of the Green's matrix of the arrays in the complex plane. Our results unveil the importance of aperiodic correlations in prime number arrays for the engineering of gapped photonic media that support far richer mode localization and spectral properties compared to usual periodic and random media.
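
    The Gaussian-prime arrays studied here are straightforward to generate: a + bi with a, b both nonzero is prime in Z[i] exactly when a² + b² is a rational prime, while a purely real or imaginary element is prime when its absolute value is a rational prime congruent to 3 mod 4. A sketch of generating dipole sites on such a lattice (the aperture size is arbitrary):

```python
def is_prime(n):
    """Trial-division primality test for small integers."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_gaussian_prime(a, b):
    """Primality of a + bi in the Gaussian integers Z[i]."""
    if a and b:
        return is_prime(a * a + b * b)
    n = abs(a or b)
    return is_prime(n) and n % 4 == 3

# Dipole positions: Gaussian primes inside a square aperture
# (lattice units; N sets the half-width of the aperture).
N = 15
sites = [(a, b) for a in range(-N, N + 1) for b in range(-N, N + 1)
         if is_gaussian_prime(a, b)]
print(len(sites), "dipole sites")
```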

  13. The effects of DRIE operational parameters on vertically aligned micropillar arrays

    NASA Astrophysics Data System (ADS)

    Miller, Kane; Li, Mingxiao; Walsh, Kevin M.; Fu, Xiao-An

    2013-03-01

    Vertically aligned silicon micropillar arrays have been created by deep reactive ion etching (DRIE) and used for a number of microfabricated devices including microfluidic devices, micropreconcentrators and photovoltaic cells. This paper delineates an experimental design performed on the Bosch process of DRIE of micropillar arrays. The arrays are fabricated with direct-write optical lithography without a photomask, and the effects of DRIE process parameters, including etch cycle time, passivation cycle time, platen power and coil power, on profile angle, scallop depth and scallop peak-to-peak distance are studied by statistical design of experiments. Scanning electron microscope images are used for measuring the resultant profile angles and characterizing the scalloping effect on the pillar sidewalls. The experimental results indicate the effects of the determining factors, etch cycle time, passivation cycle time and platen power, on the micropillar profile angles and scallop depths. An optimized DRIE process recipe for creating nearly 90° profiles and smooth surfaces (invisible scalloping) has been obtained as a result of the statistical design of experiments.
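
    The statistical design of experiments mentioned above can be illustrated with a two-level full factorial on three of the factors. The coded design and main-effects computation below are standard; the profile-angle responses are invented, not the paper's measurements:

```python
import numpy as np

# Hypothetical 2^3 full factorial on three DRIE factors (coded -1/+1):
# etch cycle time, passivation cycle time, platen power.
design = np.array([[e, p, w] for e in (-1, 1)
                             for p in (-1, 1)
                             for w in (-1, 1)])
# Response: sidewall profile angle in degrees (made-up measurements).
angle = np.array([88.1, 89.0, 87.6, 88.4, 88.9, 90.1, 88.2, 89.5])

# Main effect of each factor: mean response at +1 minus mean at -1
effects = {name: angle[design[:, i] == 1].mean()
                 - angle[design[:, i] == -1].mean()
           for i, name in enumerate(["etch", "passivation", "platen"])}
print(effects)
```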

  14. Efficient view based 3-D object retrieval using Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval because of the highly discriminative properties of 3-D objects and their multi-view representation. State-of-the-art methods depend heavily on a particular camera array setting for capturing views of the 3-D object and use a complex Zernike descriptor and HAC for representative view selection, which limits their practical application and makes them inefficient for retrieval. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. To move toward a general framework for efficient 3-D object retrieval that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning views are captured from any direction without any camera array restriction. The views (including query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. In our proposed method, the HMM is used in two ways: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained using these view clusters, and retrieval is performed by combining the query model with the HMM. The proposed approach removes the static camera array setting for view capturing and can be applied to any 3-D object database to retrieve objects efficiently and effectively. Experimental results demonstrate that the proposed scheme performs better than existing methods.
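
    The retrieval step — scoring a query's view-label sequence under competing HMMs and keeping the best-scoring model — can be sketched with a scaled forward algorithm. The models and label sequence below are toy values, not the EVBOR parameters:

```python
import numpy as np

def hmm_forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete
    observation sequence under an HMM (pi: initial state probs,
    A: state transition matrix, B: per-state emission probs)."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    log_lik = np.log(c)
    alpha = alpha / c                    # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        log_lik += np.log(c)
        alpha = alpha / c
    return log_lik

# Two toy "query models" over 3 quantized view-cluster labels.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B_model1 = np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]])  # favors label 0
B_model2 = np.array([[0.1, 0.2, 0.7], [0.1, 0.3, 0.6]])  # favors label 2

views = [0, 0, 1, 0]          # label sequence of a query object's views
score1 = hmm_forward_loglik(views, pi, A, B_model1)
score2 = hmm_forward_loglik(views, pi, A, B_model2)
print(score1, score2)         # retrieval keeps the higher-scoring model
```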

  15. Small aperture seismic arrays for studying planetary interiors and seismicity

    NASA Astrophysics Data System (ADS)

    Schmerr, N. C.; Lekic, V.; Fouch, M. J.; Panning, M. P.; Siegler, M.; Weber, R. C.

    2017-12-01

    Seismic arrays are a powerful tool for understanding the interior structure and seismicity of objects across the Solar System. Given the operational constraints of ground-based lander investigations, a small aperture seismic array can provide many of the benefits of a larger-scale network without necessitating a global deployment of instrumentation. Here we define a small aperture array as a deployment of multiple seismometers with a separation between instruments of 1-1000 meters. For example, small aperture seismic arrays were deployed on the Moon during the Apollo program: the Active Seismic Experiments of Apollo 14 and 16, and the Lunar Seismic Profiling Experiment deployed by the Apollo 17 astronauts. Both were high frequency geophone arrays with 50-meter spacing that provided information on the layering and velocity structure of the uppermost kilometer of the lunar crust. Ideally such arrays would consist of 3-axis short period or broadband seismometers. The instruments must have a sampling rate and frequency range sensitivity capable of distinguishing between waves arriving at each station in the array. Both terrestrial analogs and the data retrieved from the Apollo arrays demonstrate the efficacy of this approach. Future opportunities exist for deployment of seismic arrays on Europa, asteroids, and other objects throughout the Solar System. Here we will present both observational data and 3-D synthetic modeling results that reveal the sensing requirements and the primary advantages of a small aperture seismic array over a single-station approach. For example, at the smallest apertures of < 1 m, we find that sampling rates must exceed 500 Hz and instrument sensitivity must extend to 100 Hz or greater. Such advantages include the improved ability to resolve the location of sources near the array through detection of backazimuth and differential timing between stations, determination of the small-scale structure (layering, scattering bodies, density and velocity variations) in the vicinity of the array, and the ability to improve the signal-to-noise ratio of distant body waves by additive methods such as stacking and velocity-slowness analysis. These results will inform future missions to the surfaces of objects throughout the Solar System.
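
    The differential timing between stations mentioned above is typically measured by cross-correlating station pairs. A sketch with a synthetic pulse and the high sampling rate the abstract calls for (the waveform and delay are invented):

```python
import numpy as np

fs = 1000.0                        # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)

def pulse(t0):
    """Simple Gaussian wavelet arriving at time t0 (assumed source)."""
    return np.exp(-((t - t0) / 0.01) ** 2)

true_delay = 0.024                 # 24 ms between two stations (assumed)
s1 = pulse(0.300)                  # record at station 1
s2 = pulse(0.300 + true_delay)     # same wave arrives later at station 2

# Differential arrival time from the peak of the cross-correlation
lags = np.arange(-len(t) + 1, len(t)) / fs
xc = np.correlate(s2, s1, mode="full")
est = lags[np.argmax(xc)]
print(f"estimated delay: {est * 1000:.1f} ms")
```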

  16. Estimation of social value of statistical life using willingness-to-pay method in Nanjing, China.

    PubMed

    Yang, Zhao; Liu, Pan; Xu, Xin

    2016-10-01

    Rational decision making regarding the safety related investment programs greatly depends on the economic valuation of traffic crashes. The primary objective of this study was to estimate the social value of statistical life in the city of Nanjing in China. A stated preference survey was conducted to investigate travelers' willingness to pay for traffic risk reduction. Face-to-face interviews were conducted at stations, shopping centers, schools, and parks in different districts in the urban area of Nanjing. The respondents were categorized into two groups, including motorists and non-motorists. Both the binary logit model and mixed logit model were developed for the two groups of people. The results revealed that the mixed logit model is superior to the fixed coefficient binary logit model. The factors that significantly affect people's willingness to pay for risk reduction include income, education, gender, age, drive age (for motorists), occupation, whether the charged fees were used to improve private vehicle equipment (for motorists), reduction in fatality rate, and change in travel cost. The Monte Carlo simulation method was used to generate the distribution of value of statistical life (VSL). Based on the mixed logit model, the VSL had a mean value of 3,729,493 RMB ($586,610) with a standard deviation of 2,181,592 RMB ($343,142) for motorists; and a mean of 3,281,283 RMB ($505,318) with a standard deviation of 2,376,975 RMB ($366,054) for non-motorists. Using the tax system to illustrate the contribution of different income groups to social funds, the social value of statistical life was estimated. The average social value of statistical life was found to be 7,184,406 RMB ($1,130,032). Copyright © 2016 Elsevier Ltd. All rights reserved.
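
    The Monte Carlo step described above — propagating coefficient uncertainty into a distribution of VSL — can be sketched as follows. The coefficients are invented placeholders, not the paper's estimates, and VSL is taken as the usual marginal rate of substitution between the risk and cost coefficients:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical mixed-logit coefficient draws (not the paper's values):
# beta_risk is the disutility of fatality risk, beta_cost of travel cost.
n_draws = 100_000
beta_risk = rng.normal(-3.0, 0.5, n_draws)
beta_cost = rng.normal(-0.8, 0.1, n_draws)

# VSL as the marginal rate of substitution d(cost)/d(risk); the Monte
# Carlo draws turn coefficient uncertainty into a VSL distribution.
vsl = beta_risk / beta_cost
print(f"VSL mean = {vsl.mean():.2f}, sd = {vsl.std():.2f} (arbitrary units)")
```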

  17. Effects of wind waves on horizontal array performance in shallow-water conditions

    NASA Astrophysics Data System (ADS)

    Zavol'skii, N. A.; Malekhanov, A. I.; Raevskii, M. A.; Smirnov, A. V.

    2017-09-01

    We analyze the influence of statistical effects of the propagation of an acoustic signal, excited by a tone source in a shallow-water channel with a rough sea surface, on the efficiency of a horizontal phased array. As the array characteristics, we consider the angular function of the array response for a given direction to the source and the coefficient of amplification of the signal-to-noise ratio (array gain). Numerical simulation was conducted for the winter hydrological conditions of the Barents Sea over a wide range of parameters determining the spatial signal coherence. The results show the main physical effects of the influence of wind waves on the array characteristics and make it possible to quantitatively predict the efficiency of a large horizontal array in realistic shallow-water channels.

  18. Ambient noise imaging in warm shallow waters; robust statistical algorithms and range estimation.

    PubMed

    Chitre, Mandar; Kuselan, Subash; Pallayil, Venugopalan

    2012-08-01

    The high frequency ambient noise in warm shallow waters is dominated by snapping shrimp. The loud snapping noises they produce are impulsive and broadband. As the noise propagates through the water, it interacts with the seabed, sea surface, and submerged objects. An array of acoustic pressure sensors can produce images of the submerged objects using this noise as the source of acoustic "illumination." This concept is called ambient noise imaging (ANI) and was demonstrated using ADONIS, an ANI camera developed at the Scripps Institution of Oceanography. To overcome some of the limitations of ADONIS, a second generation ANI camera (ROMANIS) was developed at the National University of Singapore. The acoustic time series recordings made by ROMANIS during field experiments in Singapore show that the ambient noise is well modeled by a symmetric α-stable (SαS) distribution. As high-order moments of SαS distributions generally do not converge, ANI algorithms based on low-order moments and fractiles are developed and demonstrated. By localizing nearby snaps and identifying the echoes from an object, the range to the object can be passively estimated. This technique is also demonstrated using the data collected with ROMANIS.
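
    The motivation for fractile-based processing can be illustrated with the α = 1 member of the SαS family, the Cauchy distribution, which NumPy can sample directly: its mean does not converge, but the median remains a stable estimator:

```python
import numpy as np

rng = np.random.default_rng(5)

# Cauchy noise is the alpha = 1 symmetric alpha-stable distribution;
# its mean does not exist, so moment-based averaging is unreliable.
pixel_true = 2.0                       # assumed "pixel intensity" offset
samples = pixel_true + rng.standard_cauchy(10_000)

print("mean   :", samples.mean())      # unstable, dominated by outliers
print("median :", np.median(samples))  # fractile estimate stays near 2.0
```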

  19. The p-Value You Can't Buy.

    PubMed

    Demidenko, Eugene

    2016-01-02

    There is growing frustration with the concept of the p-value. Besides having an ambiguous interpretation, the p-value can be made as small as desired by increasing the sample size, n. The p-value is outdated and does not make sense with big data: everything becomes statistically significant. The root of the problem with the p-value is in the mean comparison. We argue that statistical uncertainty should be measured on the individual, not the group, level. Consequently, standard deviation (SD), not standard error (SE), error bars should be used to graphically present the data on two groups. We introduce a new measure based on the discrimination of individuals/objects from two groups, and call it the D-value. The D-value can be viewed as the n-of-1 p-value because it is computed in the same way as p while letting n equal 1. We show how the D-value is related to discrimination probability and the area above the receiver operating characteristic (ROC) curve. The D-value has a clear interpretation as the proportion of patients who get worse after the treatment, and as such makes it easier to weigh the likelihood of events under different scenarios. [Received January 2015. Revised June 2015.]
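
    The discrimination probability underlying the D-value can be computed directly by comparing individuals across the two groups. A sketch on simulated data (the direction and tie-handling conventions here are illustrative, not the paper's exact definition):

```python
import numpy as np

def discrimination_probability(treated, control):
    """Empirical P(treated < control): the proportion of treated/control
    pairs in which the treated individual scores lower (assuming lower
    scores are better); ties count one half."""
    t = np.asarray(treated)[:, None]
    c = np.asarray(control)[None, :]
    return (t < c).mean() + 0.5 * (t == c).mean()

rng = np.random.default_rng(6)
control = rng.normal(0.0, 1.0, 200)
treated = rng.normal(-0.5, 1.0, 200)   # modest individual-level benefit

d = discrimination_probability(treated, control)
print(round(d, 2))
```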

  20. Teacher's Guide: Social Studies, 8.

    ERIC Educational Resources Information Center

    Cortland-Madison Board of Cooperative Educational Services, Cortland, NY.

    This teacher's guide, part of a sequential K-12 series, provides objectives and activities for social studies students in grade 8. Five major sections focus on learning, inquiry, and discussion skills, concepts, and values and moral reasoning. Learning skills stress listening, speaking, viewing, reading, writing, map, and statistical abilities.…

  1. Teacher's Guide: Social Studies, 7.

    ERIC Educational Resources Information Center

    Cortland-Madison Board of Cooperative Educational Services, Cortland, NY.

    This teacher's guide, part of a sequential K-12 series, provides objectives and activities for social studies students in grade 7. Five major sections focus on learning, inquiry, and discussion skills, concepts, and values and moral reasoning. Learning skills stress listening, speaking, viewing, reading, writing, map, and statistical abilities.…

  2. Teacher's Guide: Social Studies, 9.

    ERIC Educational Resources Information Center

    Cortland-Madison Board of Cooperative Educational Services, Cortland, NY.

    This teacher's guide, part of a sequential K-12 series, provides objectives and activities for social studies students in grade 9. Five major sections focus on learning, inquiry, and discussion skills, concepts, and values and moral reasoning. Learning skills stress listening, speaking, viewing, reading, writing, map, and statistical abilities.…

  3. Teacher's Guide: Social Studies, 4.

    ERIC Educational Resources Information Center

    Cortland-Madison Board of Cooperative Educational Services, Cortland, NY.

    This teacher's guide, part of a K-12 sequential series, provides objectives and activities for students in grade 4. Five major sections focus on learning, inquiry, and discussion skills, concepts, and values and moral reasoning. Learning skills stress listening, speaking, viewing, reading, writing, map, and statistical abilities. Students role…

  4. Teacher's Guide: Social Studies, 6.

    ERIC Educational Resources Information Center

    Cortland-Madison Board of Cooperative Educational Services, Cortland, NY.

    This teacher's guide, part of a sequential K-12 series, provides objectives and activities for social studies students in grade 6. Five major sections focus on learning, inquiry, and discussion skills, concepts, and values and moral reasoning. Learning skills stress listening, speaking, viewing, reading, writing, map, and statistical abilities.…

  5. Rectangular Array Model Supporting Students' Spatial Structuring in Learning Multiplication

    ERIC Educational Resources Information Center

    Shanty, Nenden Octavarulia; Wijaya, Surya

    2012-01-01

    We examine how rectangular array model can support students' spatial structuring in learning multiplication. To begin, we define what we mean by spatial structuring as the mental operation of constructing an organization or form for an object or set of objects. For that reason, the eggs problem was chosen as the starting point in which the…

  6. Attention Capture by Faces

    ERIC Educational Resources Information Center

    Langton, Stephen R. H.; Law, Anna S.; Burton, A. Mike; Schweinberger, Stefan R.

    2008-01-01

    We report three experiments that investigate whether faces are capable of capturing attention when in competition with other non-face objects. In Experiment 1a participants took longer to decide that an array of objects contained a butterfly target when a face appeared as one of the distracting items than when the face did not appear in the array.…

  7. Assessment of Characteristic Function Modulus of Vibroacoustic Signal Given a Limit State Parameter of Diagnosed Equipment

    NASA Astrophysics Data System (ADS)

    Kostyukov, V. N.; Naumenko, A. P.; Kudryavtseva, I. S.

    2018-01-01

    Improving the criteria that distinguish defects of machinery and mechanisms from vibroacoustic signals is a current problem in technical diagnostics. The objective of this work is the assessment of instantaneous values by methods of statistical decision theory, and of the risk associated with regulatory values of the characteristic function modulus. The modulus of the characteristic function is determined for a fixed parameter of the characteristic function, which makes it possible to determine limits of the modulus that correspond to different machine conditions. These modulus values are used as diagnostic features in vibration diagnostics and monitoring systems. Using statistical decision-making methods such as the minimum number of wrong decisions, maximum likelihood, minimax and Neyman–Pearson, characteristic function modulus limits are determined that separate the conditions of a diagnosed object.

  8. Long range heliostat target using array of normal incidence pyranometers to evaluate a beam of solar radiation

    DOEpatents

    Ghanbari, Cheryl M; Ho, Clifford K; Kolb, Gregory J

    2014-03-04

    Various technologies described herein pertain to evaluating a beam reflected by a heliostat. A portable target that has an array of sensors mounted thereupon is configured to capture the beam reflected by the heliostat. The sensors in the array output measured values indicative of a characteristic of the beam reflected by the heliostat. Moreover, a computing device can generate and output data corresponding to the beam reflected by the heliostat based on the measured values indicative of the characteristic of the beam received from the sensors in the array.
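
    One simple way to "generate data corresponding to the beam" from such a sensor array is an irradiance-weighted centroid of the readings; the grid, beam shape, and values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 5x5 grid of normal-incidence pyranometer readings
# (kW/m^2) captured on the portable target while the heliostat beam
# falls on it; the beam is modeled as a Gaussian spot plus sensor noise.
xs, ys = np.meshgrid(np.arange(5), np.arange(5), indexing="xy")
beam = 1.2 * np.exp(-(((xs - 2.6) ** 2 + (ys - 1.8) ** 2) / 2.0))
readings = beam + rng.normal(0.0, 0.01, beam.shape)

# Irradiance-weighted centroid locates the beam on the sensor grid
total = readings.sum()
cx = (xs * readings).sum() / total
cy = (ys * readings).sum() / total
print(f"beam centroid ~ ({cx:.2f}, {cy:.2f}) in grid units")
```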

  9. State Comparisons of Education Statistics: 1969-70 to 1996-97.

    ERIC Educational Resources Information Center

    Snyder, Thomas D.; Hoffman, Charlene M.

    Information on elementary and secondary schools and institutions of higher learning aggregated at a state level is presented. The report contains a wide array of statistical data ranging from enrollments and enrollment ratios to teacher salaries and institutional finances. The state-level statistics most frequently requested from the National…

  10. Structural Models that Manage IT Portfolio Affecting Business Value of Enterprise Architecture

    NASA Astrophysics Data System (ADS)

    Kamogawa, Takaaki

    This paper examines the structural relationships between Information Technology (IT) governance and Enterprise Architecture (EA), with the objective of enhancing business value in the enterprise society. Structural models consisting of four related hypotheses reveal the relationship between IT governance and EA in the improvement of business value. We statistically examined the hypotheses by analyzing validated questionnaire items from qualified respondents within firms listed on the Japanese stock exchange. We concluded that firms which have organizational ability controlled by IT governance are more likely to deliver business value through IT portfolio management.

  11. Multi-Angle Snowflake Camera Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shkurko, Konstantin; Garrett, T.; Gaustad, K

    The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of depth of field for its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data is sent via FireWire 800 line, and fallspeed (and camera control) is sent via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: images rely on the OpenCV image processing library and derived aggregated statistics rely on some clever averaging. See Sections 4.1 and 4.2 for more details on what variables are computed.

  12. Statistical considerations for harmonization of the global multicenter study on reference values.

    PubMed

    Ichihara, Kiyoshi

    2014-05-15

    The global multicenter study on reference values coordinated by the Committee on Reference Intervals and Decision Limits (C-RIDL) of the IFCC was launched in December 2011, targeting 45 commonly tested analytes with the following objectives: 1) to derive reference intervals (RIs) country by country using a common protocol, and 2) to explore regionality/ethnicity of reference values by aligning test results among the countries. To achieve these objectives, it is crucial to harmonize 1) the protocol for recruitment and sampling, 2) statistical procedures for deriving the RI, and 3) test results through measurement of a panel of sera in common. For harmonized recruitment, very lenient inclusion/exclusion criteria were adopted in view of differences in interpretation of what constitutes healthiness by different cultures and investigators. This policy may require secondary exclusion of individuals according to the standard of each country at the time of deriving RIs. An iterative optimization procedure, called the latent abnormal values exclusion (LAVE) method, can be applied to automate the process of refining the choice of reference individuals. For global comparison of reference values, test results must be harmonized, based on the among-country, pair-wise linear relationships of test values for the panel. Traceability of reference values can be ensured based on values assigned indirectly to the panel through collaborative measurement of certified reference materials. The validity of the adopted strategies is discussed in this article, based on interim results obtained to date from five countries. Special considerations are made for dissociation of RIs by parametric and nonparametric methods and between-country difference in the effect of body mass index on reference values. Copyright © 2014 Elsevier B.V. All rights reserved.
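The LAVE method itself is an iterative multivariate procedure; the following is a deliberately simplified univariate analogue (the function name, the truncation factor, and the synthetic data are illustrative assumptions, not C-RIDL's algorithm) showing the basic idea of iteratively excluding latent abnormal values until the reference set stabilises.

```python
import statistics

def iterative_exclusion(values, k=2.81, max_iter=10):
    """Simplified univariate analogue of iterative outlier exclusion:
    repeatedly drop values outside mean +/- k*SD until the set stabilises.
    (The actual LAVE method is multivariate and more involved.)"""
    vals = list(values)
    for _ in range(max_iter):
        m = statistics.mean(vals)
        s = statistics.stdev(vals)
        kept = [v for v in vals if abs(v - m) <= k * s]
        if len(kept) == len(vals):
            break  # no further exclusions: reference set is stable
        vals = kept
    return vals

# Synthetic "reference" data with two grossly abnormal values mixed in.
data = [5.0] * 30 + [5.2] * 30 + [4.8] * 30 + [12.0, 15.0]
clean = iterative_exclusion(data)
print(len(data), len(clean))  # the two outliers are excluded
```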

  13. Sensor modeling and demonstration of a multi-object spectrometer for performance-driven sensing

    NASA Astrophysics Data System (ADS)

    Kerekes, John P.; Presnar, Michael D.; Fourspring, Kenneth D.; Ninkov, Zoran; Pogorzala, David R.; Raisanen, Alan D.; Rice, Andrew C.; Vasquez, Juan R.; Patel, Jeffrey P.; MacIntyre, Robert T.; Brown, Scott D.

    2009-05-01

    A novel multi-object spectrometer (MOS) is being explored for use as an adaptive performance-driven sensor that tracks moving targets. Developed originally for astronomical applications, the instrument utilizes an array of micromirrors to reflect light to a panchromatic imaging array. When an object of interest is detected, the individual micromirrors imaging the object are tilted to reflect the light to a spectrometer to collect a full spectrum. This paper will present example sensor performance from empirical data collected in laboratory experiments, as well as our approach in designing optical and radiometric models of the MOS channels and the micromirror array. Simulation of moving vehicles in a high-fidelity, hyperspectral scene is used to generate a dynamic video input for the adaptive sensor. Performance-driven algorithms for feature-aided target tracking and modality selection exploit multiple electromagnetic observables to track moving vehicle targets.

  14. Studies of encapsulant materials for terrestrial solar-cell arrays

    NASA Technical Reports Server (NTRS)

    Carmichael, D. C. (Compiler)

    1975-01-01

    Study 1 of this contract is entitled "Evaluation of World Experience and Properties of Materials for Encapsulation of Terrestrial Solar-Cell Arrays." The approach of this study is to review and analyze world experience and to compile data on properties of encapsulants for photovoltaic cells and for related applications. The objective of the effort is to recommend candidate materials and processes for encapsulating terrestrial photovoltaic arrays at low cost for a service life greater than 20 years. The objectives of Study 2, "Definition of Encapsulant Service Environments and Test Conditions," are to develop the climatic/environmental data required to define the frequency and duration of detrimental environmental conditions in a 20-year array lifetime and to develop a corresponding test schedule for encapsulant systems.

  15. 3D shape recovery from image focus using gray level co-occurrence matrix

    NASA Astrophysics Data System (ADS)

    Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid

    2018-04-01

    Recovering a precise and accurate 3-D shape of a target object with a robust 3-D shape recovery algorithm is an ultimate objective of the computer vision community. The focus measure algorithm plays an important role in this architecture: it converts the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this document, we propose the Gray Level Co-occurrence Matrix, along with its statistical features, for computing the focus information of the image dataset. The Gray Level Co-occurrence Matrix quantifies the texture present in the image through statistical features derived from the joint probability distribution of gray-level pairs in the input image. Finally, we quantify the focus value of the input image using a Gaussian Mixture Model. Owing to its low computational complexity, sharp focus measure curve, robustness to random noise sources, and accuracy, it is a superior alternative to most recently proposed 3-D shape recovery approaches. The algorithm is investigated in depth on real image sequences and a synthetic image dataset, and its efficiency is compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we show that this approach, in spite of its simplicity, generates accurate results.
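A minimal sketch of a GLCM-based focus measure, assuming NumPy. The quantisation level, the single-pixel offset, and the use of the contrast feature alone are simplifying assumptions rather than the authors' exact pipeline (which also feeds focus values to a Gaussian Mixture Model).

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    normalised to a joint probability distribution."""
    q = (img.astype(float) / 256 * levels).astype(int)  # quantise to `levels`
    q = np.clip(q, 0, levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    src = q[0:h - dy, 0:w - dx]
    dst = q[dy:h, dx:w]
    for i, j in zip(src.ravel(), dst.ravel()):
        m[i, j] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast feature: sum_ij (i-j)^2 p(i,j); high for sharp texture."""
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

rng = np.random.default_rng(0)
sharp = rng.random((64, 64)) * 255                           # high-frequency texture
blurred = np.repeat(np.repeat(sharp[::4, ::4], 4, 0), 4, 1)  # crude defocus proxy
print(contrast(glcm(sharp)) > contrast(glcm(blurred)))       # sharper -> higher contrast
```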

  16. Creation of diffraction-limited non-Airy multifocal arrays using a spatially shifted vortex beam

    NASA Astrophysics Data System (ADS)

    Lin, Han; Gu, Min

    2013-02-01

    Diffraction-limited non-Airy multifocal arrays are created by focusing a phase-modulated vortex beam through a high numerical-aperture objective. The modulated phase at the back aperture of the objective resulting from the superposition of two concentric phase-modulated vortex beams allows for the generation of a multifocal array of cylindrically polarized non-Airy patterns. Furthermore, we shift the spatial positions of the phase vortices to manipulate the intensity distribution at each focal spot, leading to the creation of a multifocal array of split-ring patterns. Our method is experimentally validated by generating the predicted phase modulation through a spatial light modulator. Consequently, the spatially shifted circularly polarized vortex beam adopted in a dynamic laser direct writing system facilitates the fabrication of a split-ring microstructure array in a polymer material by a single exposure of a femtosecond laser beam.

  17. Microshutter Array Development for the Multi-Object Spectrograph for the New Generation Space Telescope, and Its Ground-based Demonstrator

    NASA Technical Reports Server (NTRS)

    Woodgate, Bruce E.; Moseley, Harvey; Fettig, Rainer; Kutyrev, Alexander; Ge, Jian; Fisher, Richard R. (Technical Monitor)

    2001-01-01

    The 6.5-m NASA/ESA/Canada New Generation Space Telescope to be operated at the L2 Lagrangian point will require a multi-object spectrograph (MOS) operating from 1 to 5 microns. Up to 3000 targets will be selected for simultaneous spectroscopy using a programmable cryogenic (approx. 35K) aperture array, consisting of a mosaic of arrays of micromirrors or microshutters. We describe the current status of the GSFC microshutter array development. The 100 micron square shutters are opened magnetically and latched open or closed electrostatically. Selection will be by two crossed one-dimensional addressing circuits. We will demonstrate the use of a 512 x 512 unit array on a ground-based IR MOS which will cover 0.6 to 5 microns, and operate rapidly to include spectroscopy of gamma ray burst afterglows.

  18. Plasma chamber testing of advanced photovoltaic solar array coupons

    NASA Technical Reports Server (NTRS)

    Hillard, G. Barry

    1994-01-01

    The Solar Array Module Plasma Interactions Experiment is a space shuttle experiment designed to investigate and quantify high-voltage plasma interactions. One of the objectives of the experiment is to test the performance of the Advanced Photovoltaic Solar Array (APSA). The material properties of the array blanket are also studied as electrical insulators for APSA arrays under high-voltage conditions. Three twelve-cell prototype coupons of silicon cells were constructed and tested in a space simulation chamber.

  19. Hydrogeochemistry and water quality of the Kordkandi-Duzduzan plain, NW Iran: application of multivariate statistical analysis and PoS index.

    PubMed

    Soltani, Shahla; Asghari Moghaddam, Asghar; Barzegar, Rahim; Kazemian, Naeimeh; Tziritis, Evangelos

    2017-08-18

    Kordkandi-Duzduzan plain is one of the fertile plains of East Azarbaijan Province, NW Iran. Groundwater is an important resource for drinking and agricultural purposes due to the lack of surface water resources in the region. The main objectives of the present study are to identify the hydrogeochemical processes and the potential sources of major, minor, and trace metals and metalloids such as Cr, Mn, Cd, Fe, Al, and As by using joint hydrogeochemical techniques and multivariate statistical analysis, and to evaluate groundwater quality deterioration with the use of the PoS environmental index. To achieve these objectives, 23 groundwater samples were collected in September 2015. The Piper diagram shows that mixed Ca-Mg-Cl is the dominant groundwater type, and some of the samples have Ca-HCO3, Ca-Cl, and Na-Cl types. Multivariate statistical analyses indicate that weathering and dissolution of different rocks and minerals (e.g., silicates, gypsum, and halite), ion exchange, and agricultural activities influence the hydrogeochemistry of the study area. The cluster analysis divides the samples into two distinct clusters which, according to the ANOVA statistical test, differ significantly in EC (and its dependent variables such as Na+, K+, Ca2+, Mg2+, SO42-, and Cl-), Cd, and Cr. Based on the median values, the pH values and the concentrations of NO3-, SiO2, and As in cluster 1 are elevated compared with those of cluster 2, while their maximum values occur in cluster 2. According to the PoS index, the dominant parameter that controls quality deterioration is As, with a 60% contribution. Samples with the lowest PoS values are located in the southern and northern parts (recharge area), while samples with the highest values are located in the discharge area and the eastern part.

  20. Mn and BTEX Reference Value Arrays (Final Reports)

    EPA Science Inventory

    These final reports are a summary of reference value arrays with critical supporting documentation for the chemicals manganese, benzene, toluene, ethylbenzene, and xylene. Each chemical is covered in a separate document, and each is a statement of the status of the available inha...

  1. Variation in the iodine concentrations of foods: considerations for dietary assessment

    PubMed Central

    Carriquiry, Alicia L; Spungen, Judith H; Murphy, Suzanne P; Pehrsson, Pamela R; Dwyer, Johanna T; Juan, WenYen; Wirtz, Mark S

    2016-01-01

    Background: Food-composition tables typically give measured nutrient concentrations in foods as a single summary value, often the mean, without providing information as to the shape of the distribution. Objective: Our objective was to explore how the statistical approach chosen to describe the iodine concentrations of foods affects the proportion of the population identified as having either insufficient or excessive iodine intakes. Design: We used food intake data reported by the 2009−2010 NHANES and measured iodine concentrations of Total Diet Study (TDS) foods from 4 US regions sampled in 2004–2011. We created 4 data sets, each by using a different summary statistic (median, mean, and 10th and 90th percentiles), to represent the iodine concentration distribution of each TDS food. We estimated the iodine concentration distribution of each food consumed by NHANES participants as the 4 iodine concentration summary statistics of a similar TDS food and used these, along with NHANES food intake data, to develop 4 estimates of each participant’s iodine intake on each survey day. Using the 4 estimates in turn, we calculated 4 usual iodine intakes for each sex- and age-specific subgroup. We then compared these to guideline values and developed 4 estimates of the proportions of each subgroup with deficient and excessive usual iodine intakes. Results: In general, the distribution of iodine intakes was poorly characterized when food iodine concentrations were expressed as mean values. In addition, mean values predicted lower prevalences of iodine deficiency than did median values. For example, in women aged 19–50 y, the estimated prevalence of iodine deficiency was 25% when based on median food iodine concentrations but only 5.8% when based on mean values. 
Conclusion: For nutrients such as iodine with highly variable concentrations in important food sources, we recommend that food-composition tables provide useful variability information, including the mean, SD, and median. PMID:27534633
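The mean-versus-median effect the authors report can be illustrated with synthetic data (the lognormal concentrations, serving sizes, and adequacy threshold below are invented for illustration and are not TDS or NHANES values): in a right-skewed concentration distribution the mean exceeds the median, so mean-based intakes look higher and fewer people appear deficient.

```python
import random
import statistics

random.seed(42)
# Synthetic, right-skewed "food iodine concentration" values (arbitrary units).
conc = [random.lognormvariate(2.0, 1.0) for _ in range(5000)]

mean_c = statistics.mean(conc)
median_c = statistics.median(conc)
print(f"mean {mean_c:.1f} > median {median_c:.1f}")  # skew pulls the mean up

# Score each person's intake with a single summary concentration: the
# mean-based estimate exceeds the median-based one, so fewer people fall
# below a fixed adequacy threshold -> lower apparent deficiency prevalence.
threshold = 10.0
servings = [random.uniform(0.5, 2.0) for _ in range(1000)]
prev_mean = sum(s * mean_c < threshold for s in servings) / len(servings)
prev_median = sum(s * median_c < threshold for s in servings) / len(servings)
print(prev_mean, prev_median)
```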

  2. Micromachined Thermoelectric Sensors and Arrays and Process for Producing

    NASA Technical Reports Server (NTRS)

    Foote, Marc C. (Inventor); Jones, Eric W. (Inventor); Caillat, Thierry (Inventor)

    2000-01-01

    Linear arrays with up to 63 micromachined thermopile infrared detectors on silicon substrates have been constructed and tested. Each detector consists of a suspended silicon nitride membrane with 11 thermocouples of sputtered Bi-Te and Bi-Sb-Te thermoelectric element films. At room temperature and under vacuum, these detectors exhibit response times of 99 ms, zero-frequency D* values of 1.4 x 10(exp 9) cmHz(exp 1/2)/W, and responsivity values of 1100 V/W when viewing a 1000 K blackbody source. The only measured source of noise above 20 mHz is Johnson noise from the detector resistance. These results represent the best performance reported to date for an array of thermopile detectors. The arrays are well suited for uncooled dispersive point spectrometers. In another embodiment, also with Bi-Te and Bi-Sb-Te thermoelectric materials on micromachined silicon nitride membranes, detector arrays have been produced with D* values as high as 2.2 x 10(exp 9) cmHz(exp 1/2)/W for 83 ms response times.

  3. Increase in the energy density of the pinch plasma in 3D implosion of quasi-spherical wire arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aleksandrov, V. V., E-mail: alexvv@triniti.ru; Gasilov, V. A.; Grabovski, E. V.

    Results are presented from experimental studies of the characteristics of the soft X-ray (SXR) source formed in the implosion of quasi-spherical arrays made of tungsten wires and metalized kapron fibers. The experiments were carried out at the Angara-5-1 facility at currents of up to 3 MA. Analysis of the spatial distribution of hard X-ray emission with photon energies above 20 keV in the pinch images taken during the implosion of quasi-spherical tungsten wire arrays (QTWAs) showed that a compact quasi-spherical plasma object symmetric with respect to the array axis formed in the central region of the array. Using a grazing-incidence diffraction spectrograph, spectra of SXR emission with wavelengths of 20–400 Å from the central, axial, and peripheral regions of the emission source were measured with spatial resolution along the array radius and height in the implosion of QTWAs. It is shown that the emission spectra of the SXR sources formed under the implosion of quasi-spherical and cylindrical tungsten wire arrays at currents of up to 3 MA have a maximum in the wavelength range of 50–150 Å. It is found that, during the implosion of a QTWA with a profiled linear mass, a redistribution of energy in the emission spectrum takes place, which indicates that, during 3D implosion, the energy of longitudinal motion of the array material additionally contributes to the radiation energy. It is also found that, at close masses of the arrays and close values of the current in the range of 2.4–3 MA, the average energy density in the emission source formed during the implosion of a quasi-spherical wire array is larger by a factor of 7 than in the source formed during the implosion of a cylindrical wire array.
The experimental data were compared with results of 3D simulations of plasma dynamics and radiation generation during the implosion of quasi-spherical wire arrays with a profiled mass by using the MARPLE-3D radiative magnetohydrodynamic code, developed at the Keldysh Institute of Applied Mathematics, Russian Academy of Sciences.

  4. Read margin analysis of crossbar arrays using the cell-variability-aware simulation method

    NASA Astrophysics Data System (ADS)

    Sun, Wookyung; Choi, Sujin; Shin, Hyungsoon

    2018-02-01

    This paper proposes a new concept of read margin analysis of crossbar arrays using cell-variability-aware simulation. The size of the crossbar array should be considered to predict the read margin characteristic of the crossbar array because the read margin depends on the number of word lines and bit lines. However, an excessively long CPU time is required to simulate large arrays using a commercial circuit simulator. A variability-aware MATLAB simulator that considers independent variability sources is developed to analyze the characteristics of the read margin according to the array size. The developed MATLAB simulator provides an effective method for reducing the simulation time while maintaining the accuracy of the read margin estimation in the crossbar array. The simulation is also highly efficient in analyzing the characteristics of the crossbar memory array considering the statistical variations in the cell characteristics.
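A toy worst-case model conveys why the read margin depends on array size (the three-cell sneak-path approximation, the resistance values, and the sensing scheme below are textbook simplifications, not the paper's variability-aware simulator):

```python
def read_margin(n, r_lrs=1e3, r_hrs=1e6, r_load=1e3, v_read=1.0):
    """Toy worst-case read-margin model for an n x n crossbar (illustrative
    only; the paper's simulator models per-cell variability, omitted here).
    Sneak current is approximated by (n-1) parallel paths of three LRS cells."""
    r_sneak = 3 * r_lrs / (n - 1)

    def v_out(r_cell):
        r_eff = r_cell * r_sneak / (r_cell + r_sneak)  # cell || sneak paths
        return v_read * r_load / (r_load + r_eff)      # sense across load resistor

    return v_out(r_lrs) - v_out(r_hrs)                 # LRS/HRS distinguishability

for n in (2, 8, 32, 128):
    print(n, round(read_margin(n), 4))  # margin shrinks as the array grows
```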

  5. U.S. Marine Corps Level-Dependent Hearing Protector Assessment: Objective Measures of Hearing Protection Devices

    DTIC Science & Technology

    2014-01-01

    Target stimuli were presented from a spherical loudspeaker array consisting of 57 Meyer Sound MM-4XP miniature loudspeakers. Subjects were seated at the center of the loudspeaker array on a raised platform that placed their ears at the same elevation as the 0° loudspeaker ring, in a chair free to rotate 360°.

  6. OPEN PROBLEM: Orbits' statistics in chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Arnold, V.

    2008-07-01

    This paper shows how the measurement of the stochasticity degree of a finite sequence of real numbers, published by Kolmogorov in Italian in a journal of insurance statistics, can be usefully applied to measure the objective stochasticity degree of sequences originating from dynamical systems theory and from number theory. Namely, whenever the value of Kolmogorov's stochasticity parameter of a given sequence of numbers is too small (or too big), one may conclude that the conjecture describing this sequence as a sample of independent values of a random variable is highly improbable. Kolmogorov used this strategy fighting (in a paper in 'Doklady', 1940) against Lysenko, who had tried to disprove the classical genetic laws of Mendel experimentally. Calculating the value of his stochasticity parameter for the numbers from Lysenko's experiment reports, Kolmogorov deduced that, while these numbers differed from the exact fulfilment of Mendel's 3 : 1 law, any smaller deviation would be a manifestation of falsification of the reported numbers. The calculation of the values of the stochasticity parameter would be useful for many other generators of pseudorandom numbers and for many other chaotically looking statistics, including even the distribution of prime numbers (discussed in this paper as an example).
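Kolmogorov's stochasticity parameter is straightforward to compute: sqrt(n) times the largest gap between the empirical distribution function of the sequence and the hypothesised one. A minimal sketch (the uniform(0, 1) test case and the "plausible range" comment are assumed examples, not values from the paper):

```python
import math
import random

def kolmogorov_lambda(sample, cdf):
    """Kolmogorov's stochasticity parameter: sqrt(n) times the maximum gap
    between the empirical CDF and the hypothesised CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return math.sqrt(n) * d

random.seed(1)
sample = [random.random() for _ in range(1000)]
lam = kolmogorov_lambda(sample, cdf=lambda x: x)  # uniform(0,1) hypothesis
# For a genuinely random sample, lambda is typically neither tiny nor huge;
# extreme values in either direction cast doubt on the randomness hypothesis.
print(round(lam, 3))
```

An overly regular sequence such as i/n yields a suspiciously small lambda, which is exactly the "too small" case Kolmogorov used against Lysenko's reported counts.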

  7. The Spiral Arm Segments of the Galaxy within 3 kpc from the Sun: A Statistical Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griv, Evgeny; Jiang, Ing-Guey; Hou, Li-Gang, E-mail: griv@bgu.ac.il

    As can be reasonably expected, upcoming large-scale APOGEE, GAIA, GALAH, LAMOST, and WEAVE stellar spectroscopic surveys will yield rather noisy Galactic distributions of stars. In view of the possibility of employing these surveys, our aim is to present a statistical method to extract information about the spiral structure of the Galaxy from currently available data, and to demonstrate the effectiveness of this method. The model differs from previous works studying how objects are distributed in space in its calculation of the statistical significance of the hypothesis that some of the objects are actually concentrated in a spiral. A statistical analysis of the distribution of cold dust clumps within molecular clouds, H II regions, Cepheid stars, and open clusters in the nearby Galactic disk within 3 kpc from the Sun is carried out. As an application of the method, we obtain distances between the Sun and the centers of the neighboring Sagittarius arm segment, the Orion arm segment in which the Sun is located, and the Perseus arm segment. Pitch angles of the logarithmic spiral segments and their widths are also estimated. The hypothesis that the collected objects accidentally form spirals is refuted with almost 100% statistical confidence. We show that these four independent distributions of young objects lead to essentially the same results. We also demonstrate that our newly deduced values of the mean distances and pitch angles for the segments are not too far from those found recently by Reid et al. using VLBI-based trigonometric parallaxes of massive star-forming regions.

  8. Determination of the Wave Parameters from the Statistical Characteristics of the Image of a Linear Test Object

    NASA Astrophysics Data System (ADS)

    Weber, V. L.

    2018-03-01

    We statistically analyze the images of the objects of the "light-line" and "half-plane" types which are observed through a randomly irregular air-water interface. The expressions for the correlation function of fluctuations of the image of an object given in the form of a luminous half-plane are found. The possibility of determining the spatial and temporal correlation functions of the slopes of a rough water surface from these relationships is shown. The problem of the probability of intersection of a small arbitrarily oriented line segment by the contour image of a luminous straight line is solved. Using the results of solving this problem, we show the possibility of determining the values of the curvature variances of a rough water surface. A practical method for obtaining an image of a rectilinear luminous object in the light rays reflected from the rough surface is proposed. It is theoretically shown that such an object can be synthesized by temporal accumulation of the image of a point source of light rapidly moving in the horizontal plane with respect to the water surface.

  9. Optimization of focality and direction in dense electrode array transcranial direct current stimulation (tDCS)

    NASA Astrophysics Data System (ADS)

    Guler, Seyhmus; Dannhauer, Moritz; Erem, Burak; Macleod, Rob; Tucker, Don; Turovets, Sergei; Luu, Phan; Erdogmus, Deniz; Brooks, Dana H.

    2016-06-01

    Objective. Transcranial direct current stimulation (tDCS) aims to alter brain function non-invasively via electrodes placed on the scalp. Conventional tDCS uses two relatively large patch electrodes to deliver electrical current to the brain region of interest (ROI). Recent studies have shown that using dense arrays containing up to 512 smaller electrodes may increase the precision of targeting ROIs. However, this creates a need for methods to determine effective and safe stimulus patterns as the number of degrees of freedom is much higher with such arrays. Several approaches to this problem have appeared in the literature. In this paper, we describe a new method for calculating optimal electrode stimulus patterns for targeted and directional modulation in dense array tDCS which differs in some important aspects from methods reported to date. Approach. We optimize the stimulus pattern of dense arrays with fixed electrode placement to maximize the current density in a particular direction in the ROI. We impose a flexible set of safety constraints on the current power in the brain, individual electrode currents, and total injected current, to protect subject safety. The proposed optimization problem is convex and thus efficiently solved using existing optimization software to find unique and globally optimal electrode stimulus patterns. Main results. Solutions for four anatomical ROIs based on a realistic head model are shown as exemplary results. To illustrate the differences between our approach and previously introduced methods, we compare our method with two of the other leading methods in the literature. We also report on extensive simulations that show the effect of the values chosen for each proposed safety constraint bound on the optimized stimulus patterns. Significance. The proposed optimization approach employs volume based ROIs, easily adapts to different sets of safety constraints, and takes negligible time to compute. 
An in-depth comparison study gives insight into the relationship between different objective criteria and optimized stimulus patterns. In addition, the analysis of the interaction between optimized stimulus patterns and safety constraint bounds suggests that more precise current localization in the ROI, with improved safety criterion, may be achieved by careful selection of the constraint bounds.
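Stripped of the power and total-current constraints (which make the full problem require a numerical convex solver), the directional-maximisation idea reduces to a bang-bang pattern. The sketch below is a drastic simplification under assumed constraints, with an invented forward-model gain vector, and is not the authors' formulation:

```python
def optimal_pattern(gain, i_max):
    """Drastically simplified tDCS pattern optimisation: maximise the
    directional ROI current density sum(gain[i] * x[i]) subject only to
    current conservation sum(x) = 0 and per-electrode bound |x[i]| <= i_max.
    With just these constraints the optimum is bang-bang: sources on the
    highest-gain electrodes, sinks on the lowest. `gain` is a hypothetical
    forward-model row mapping electrode currents to ROI current density."""
    n = len(gain)
    assert n % 2 == 0  # keep the closed form simple
    order = sorted(range(n), key=lambda i: gain[i])
    x = [0.0] * n
    for i in order[: n // 2]:
        x[i] = -i_max          # sinks on low-gain electrodes
    for i in order[n // 2:]:
        x[i] = +i_max          # sources on high-gain electrodes
    return x

gain = [0.3, -0.1, 0.8, 0.05]  # toy gains for 4 electrodes
x = optimal_pattern(gain, i_max=1.0)
print(x, sum(x))               # injected currents balance to zero
```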

  10. Sequential, progressive, equal-power, reflective beam-splitter arrays

    NASA Astrophysics Data System (ADS)

    Manhart, Paul K.

    2017-11-01

    The equations to calculate the equal-power reflectivities of a sequential series of beam splitters are presented. Non-sequential optical design examples are offered for uniform illumination using diode lasers. Objects created using Boolean operators and swept surfaces are capable of reflecting light into predefined elevation and azimuth angles. Analysis of the illumination patterns for the array is also presented.
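For a lossless sequential chain splitting one beam into n equal parts, the standard result is that the k-th splitter must reflect 1/(n − k + 1) of the light reaching it; that this matches the paper's exact equations is an assumption. A quick numerical check:

```python
def equal_power_reflectivities(n):
    """Reflectivity of the k-th splitter in a sequential chain so that each
    of the n reflected beams carries 1/n of the input power: R_k = 1/(n-k+1).
    Standard lossless equal-power result; absorption is neglected."""
    return [1.0 / (n - k + 1) for k in range(1, n + 1)]

def reflected_powers(refl):
    """Power tapped off by each splitter for unit input power."""
    powers, remaining = [], 1.0
    for r in refl:
        powers.append(remaining * r)
        remaining *= (1.0 - r)
    return powers

refl = equal_power_reflectivities(5)   # [0.2, 0.25, 0.333..., 0.5, 1.0]
print([round(p, 6) for p in reflected_powers(refl)])  # five equal 0.2 shares
```

Note that the last splitter is a full mirror (R = 1), which is what makes the chain use all of the input power.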

  11. Optimal shortening of uniform covering arrays

    PubMed Central

    Rangel-Valdez, Nelson; Avila-George, Himer; Carrizalez-Turrubiates, Oscar

    2017-01-01

    Software test suites based on the concept of interaction testing are very useful for testing software components in an economical way. Test suites of this kind may be created using mathematical objects called covering arrays. A covering array, denoted by CA(N; t, k, v), is an N × k array over Z_v = {0, …, v−1} with the property that every N × t sub-array covers all t-tuples from Z_v^t at least once. Covering arrays can be used to test systems in which failures occur as a result of interactions among components or subsystems. They are often used in areas such as hardware Trojan detection, software testing, and network design. Because system testing is expensive, it is critical to reduce the amount of testing required. This paper addresses the Optimal Shortening of Covering ARrays (OSCAR) problem, an optimization problem whose objective is to construct, from an existing covering array matrix of uniform level, an array with dimensions of (N − δ) × (k − Δ) such that the number of missing t-tuples is minimized. Two applications of the OSCAR problem are (a) to produce smaller covering arrays from larger ones and (b) to obtain quasi-covering arrays (covering arrays in which the number of missing t-tuples is small) to be used as input to a meta-heuristic algorithm that produces covering arrays. In addition, it is proven that the OSCAR problem is NP-complete, and twelve different algorithms are proposed to solve it. An experiment was performed on 62 problem instances, and the results demonstrate the effectiveness of solving the OSCAR problem to facilitate the construction of new covering arrays. PMID:29267343
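The objective the OSCAR problem minimises, the number of missing t-tuples after shortening, can be computed directly. A small sketch (function name is illustrative) using the classic covering array CA(4; 2, 3, 2):

```python
from itertools import combinations

def missing_tuples(array, t, v):
    """Count the t-tuples not covered by the N x k array over {0..v-1}:
    the quantity the OSCAR problem minimises after deleting rows/columns."""
    k = len(array[0])
    missing = 0
    for cols in combinations(range(k), t):
        seen = {tuple(row[c] for c in cols) for row in array}
        missing += v ** t - len(seen)  # tuples never seen in these columns
    return missing

# CA(4; 2, 3, 2): every pair of columns covers all four 2-tuples over {0,1}.
ca = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(missing_tuples(ca, t=2, v=2))      # full covering array: 0 missing
print(missing_tuples(ca[:3], t=2, v=2))  # shortening one row loses coverage
```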

  12. Investigating the Trade-Off Between Power Generation and Environmental Impact of Tidal-Turbine Arrays Using Array Layout Optimisation and Habitat Suitability Modelling.

    NASA Astrophysics Data System (ADS)

    du Feu, R. J.; Funke, S. W.; Kramer, S. C.; Hill, J.; Piggott, M. D.

    2016-12-01

    The installation of tidal turbines into the ocean will inevitably affect the environment around them. However, due to the relative infancy of this sector, the extent and severity of such effects are unknown. The layout of an array of turbines is an important factor in determining not only the array's final yield but also how it will influence regional hydrodynamics. This in turn could affect, for example, sediment transportation or habitat suitability. The two potentially competing objectives of extracting energy from the tidal current, and of limiting any environmental impact consequent to influencing that current, are investigated here. This relationship is posed as a multi-objective optimisation problem. OpenTidalFarm, an array layout optimisation tool, and MaxEnt, habitat suitability modelling software, are used to evaluate scenarios off the coast of the UK. MaxEnt is used to estimate the likelihood of finding a species in a given location based upon environmental input data and presence data of the species. Environmental features which are known to impact habitat, specifically those affected by the presence of an array, such as bed shear stress, are chosen as inputs. MaxEnt then uses a maximum-entropy modelling approach to estimate population distribution across the modelled area. OpenTidalFarm is used to maximise the power generated by an array, or multiple arrays, through adjusting the position and number of turbines within them. It uses a 2D shallow water model with turbine arrays represented as adjustable friction fields. It also has the capability to optimise user-created functionals that can be expressed mathematically. This work uses two functionals: the power extracted by the array, and the suitability of habitat as predicted by MaxEnt. A gradient-based local optimisation is used to adjust the array layout at each iteration. This work presents arrays that are optimised for both yield and the viability of habitat for chosen species. 
In each scenario studied, a range of array formations is found expressing varying preferences for either functional. Further analyses then allow for the identification of trade-offs between the two key societal objectives of energy production and conservation. This in turn produces information valuable to stakeholders and policymakers when making decisions on array design.
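The trade-off analysis can be caricatured with a weighted-sum sweep over a single design variable (both functional shapes below are invented for illustration; the study evaluates power with a shallow-water model and habitat with MaxEnt):

```python
# Toy power-vs-habitat trade-off: sweep a weighted-sum objective over a
# scalar "array density" d and record the preferred design for each weight.

def power(d):
    return d * (1.0 - 0.4 * d)   # yield saturates as wake losses grow

def habitat(d):
    return 1.0 - d ** 2          # suitability degrades with density

def best_density(w, grid=200):
    """Maximise w*power + (1-w)*habitat over a grid of densities in [0, 1]."""
    ds = [i / grid for i in range(grid + 1)]
    return max(ds, key=lambda d: w * power(d) + (1 - w) * habitat(d))

front = [best_density(w) for w in (0.3, 0.5, 0.7, 0.9)]
print([round(d, 3) for d in front])  # heavier weight on power -> denser arrays
```

Each weight picks out one point on the trade-off curve; sweeping the weight traces the kind of array-formation range the abstract describes.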

  13. Promising Results from Three NASA SBIR Solar Array Technology Development Programs

    NASA Technical Reports Server (NTRS)

    Eskenazi, Mike; White, Steve; Spence, Brian; Douglas, Mark; Glick, Mike; Pavlick, Ariel; Murphy, David; O'Neill, Mark; McDanal, A. J.; Piszczor, Michael

    2005-01-01

    Results from three NASA SBIR solar array technology programs are presented. The programs discussed are: 1) Thin Film Photovoltaic UltraFlex Solar Array; 2) Low Cost/Mass Electrostatically Clean Solar Array (ESCA); and 3) Stretched Lens Array SquareRigger (SLASR). The purpose of the Thin Film UltraFlex (TFUF) Program is to mature and validate the use of advanced flexible thin film photovoltaics blankets as the electrical subsystem element within an UltraFlex solar array structural system. In this program operational prototype flexible array segments, using United Solar amorphous silicon cells, are being manufactured and tested for the flight qualified UltraFlex structure. In addition, large size (e.g. 10 kW GEO) TFUF wing systems are being designed and analyzed. Thermal cycle and electrical test and analysis results from the TFUF program are presented. The purpose of the second program entitled, Low Cost/Mass Electrostatically Clean Solar Array (ESCA) System, is to develop an Electrostatically Clean Solar Array meeting NASA's design requirements and ready this technology for commercialization and use on the NASA MMS and GED missions. The ESCA designs developed use flight-proven materials and processes to create an ESCA system that yields low cost, low mass, high reliability, and high power density, and is adaptable to any cell type and coverglass thickness. All program objectives, which included developing specifications, creating ESCA concepts, concept analysis and trade studies, producing detailed designs of the most promising ESCA treatments, manufacturing ESCA demonstration panels, and LEO (2,000 cycles) and GEO (1,350 cycles) thermal cycling testing of the down-selected designs were successfully achieved. 
The purpose of the third program, entitled "High Power Platform for the Stretched Lens Array," is to develop an extremely lightweight, high-efficiency, high-power, high-voltage, and low stowed volume solar array suitable for very high power (multi-kW to MW) applications. These objectives are achieved by combining two cutting-edge technologies, the SquareRigger solar array structure and the Stretched Lens Array (SLA). The SLA SquareRigger solar array is termed SLASR. All program objectives, which included developing specifications, creating preliminary designs for a near-term SLASR, detailed structural, mass, power, and sizing analyses, and fabrication and power testing of a functional flight-like SLASR solar blanket, were successfully achieved.

  14. Combined Ultraviolet and Optical Spectra of 48 Low-Redshift QSOs and the Relation of the Continuum and Emission-Line Properties

    NASA Astrophysics Data System (ADS)

    Corbin, Michael R.; Boroson, Todd A.

    1996-11-01

We present combined ultraviolet and optical spectra of 48 QSOs and Seyfert 1 galaxies in the redshift range 0.034-0.774. The UV spectra were obtained non-simultaneously with the optical and are derived from archival Hubble Space Telescope (HST) Faint Object Spectrograph and International Ultraviolet Explorer (IUE) observations. The sample consists of 22 radio-quiet objects, 12 flat radio spectrum radio-loud objects, and 14 steep radio spectrum objects, and it covers approximately 2.5 decades in ultraviolet continuum luminosity. The sample objects are among the most luminous known in this redshift range and include 3C 273 and Fairall 9, as well as many objects discovered in the Bright Quasar Survey. We measure and compare an array of emission-line and continuum parameters, including 2 keV X-ray luminosities derived from the Einstein database. We examine individual correlations and also apply a principal components analysis (PCA) in an effort to determine the underlying sources of variance among these observables. Our main results are as follows. 1. The C IV λ1549 profile asymmetry is correlated with the UV continuum luminosity measured at the position of that line, such that increasing continuum luminosity produces increasing redward asymmetry. This is the same correlation found between Hβ asymmetry and 2 keV luminosity in a larger sample of objects and appears to be followed by both radio-loud and radio-quiet sources. The C IV profile asymmetry is also correlated with the FWZI of the Lyα profile, with more redward asymmetric profiles associated with wider profile bases. The PCA reveals that the correlated increase in luminosity, C IV redward asymmetry, and profile base width accounts for over half the statistical variance in the sample. 2. There is a statistically significant difference between the FWZI distributions of the Lyα and Hβ lines, such that the former is wider on average by ~10^4^ km s^-1^. 
The FWHM values of the broad Hβ line are weakly correlated with those of C IV λ1549 and Lyα, and in contrast to the FWZI values the Hβ profiles are wider. Measures of the asymmetry of the Hβ and C IV profiles also show a weak correlation. The wavelength centroids at 3/4 maximum of the Lyα and C IV lines also show average blueshifts ~50-200 km s^-1^ from [O III] λ5007, versus an average redshift of 75 km s^-1^ for broad Hβ. 3. There is no clear evidence of narrow components to the stronger UV lines, even among objects in which the optical narrow lines including [O III] λλ4959, 5007 are unusually strong. We measure the average fractional contributions of such components to the Lyα and C III] λ1909 lines to be ~4%-5%, consistent with the findings from smaller samples. However, a sizable fraction (50%) of radio-loud objects display a narrow component of He II λ1640, the same as in the QSO population at intermediate redshifts, and such a component is likely to contribute to the other UV lines. We interpret the first result as the effect of a black hole mass/luminosity relation in which the profile widths and redward asymmetries are produced respectively by the virialized motions and gravitational redshift associated with 10^9^-10^10^ M_sun_ holes. This does not explain the cases of blueward profile asymmetries and blueshifted profile peaks, which require an effect acting oppositely to gravitational redshift. The peak redshift differences and relative weakness of the correlations between the UV profile widths and asymmetries and those of Hβ suggests a stratified ionization structure of the broad-line region (BLR), consistent with the variability studies of Seyfert 1 galaxies. Continuum variability and the dynamical evolution of the BLR gas may also influence these results. 
The difference between the Lyα and Hβ FWZI values provides additional evidence of an optically thin very broad line region (VBLR) lying interior to an intermediate line region (ILR) producing the profile cores. The smaller average FWHM values of the UV lines compared to Hβ indicate that they have a higher relative contribution of ILR emission, versus a more dominant VBLR component in the Balmer lines. The narrow He II λ1640 feature of radio-loud objects is likely associated with the inner regions of extended (100 kpc) ionized halos that are not present around radio-quiet objects, and which appear to be best explained as cooling flows around the QSO host galaxies.
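The PCA step described in this abstract can be sketched numerically. The sketch below uses synthetic stand-ins for three correlated observables (continuum luminosity, C IV redward asymmetry, profile base width are only placeholders, not the paper's data) and shows how the fraction of variance carried by the first principal component is obtained from the standardized parameter matrix:

```python
import numpy as np

def pca(X):
    """PCA via SVD of the standardized data matrix X (rows = objects,
    columns = measured parameters). Returns the fraction of variance
    carried by each principal component and the component loadings."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each parameter
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return s**2 / np.sum(s**2), Vt

# Synthetic stand-ins for three correlated observables.
rng = np.random.default_rng(0)
lum = rng.normal(size=60)
X = np.column_stack([
    lum,
    0.8 * lum + 0.2 * rng.normal(size=60),
    0.7 * lum + 0.3 * rng.normal(size=60),
])
var_frac, loadings = pca(X)
print(var_frac)  # the first component carries most of the variance
```

When the input parameters share one underlying source of variance, as the paper reports for luminosity, asymmetry, and base width, the first component dominates.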

  15. Statistical interpretation of transient current power-law decay in colloidal quantum dot arrays

    NASA Astrophysics Data System (ADS)

    Sibatov, R. T.

    2011-08-01

A new statistical model of charge transport in colloidal quantum dot arrays is proposed. It takes into account the Coulomb blockade forbidding multiple occupancy of nanocrystals and the influence of energetic disorder of the interdot space. The model explains power-law current transients and the presence of a memory effect. A fractional differential analogue of Ohm's law is found phenomenologically for nanocrystal arrays. The model combines ideas that were considered as conflicting by other authors: the Scher-Montroll idea about the power-law distribution of waiting times in localized states for disordered semiconductors is applied taking into account the Coulomb blockade; Novikov's condition about the asymptotic power-law distribution of time intervals between successful current pulses in conduction channels is fulfilled; and the carrier injection blocking predicted by Ginger and Greenham (2000 J. Appl. Phys. 87 1361) takes place.
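The Scher-Montroll ingredient of such models, power-law-distributed waiting times in localized states, can be illustrated with a toy renewal simulation (no Coulomb blockade or disorder model here; all parameters are illustrative). Carriers hop with waiting times whose tail goes as t^-(1+α), and the resulting hop rate, a proxy for the transient current, decays as t^(α-1):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.5        # dispersion parameter (0 < alpha < 1)
n_carriers = 2000
t_max = 1000.0

# Each carrier hops with Pareto-tailed waiting times psi(t) ~ t^-(1 + alpha).
events = []
for _ in range(n_carriers):
    t = rng.pareto(alpha) + 1.0        # waiting times >= 1
    while t <= t_max:
        events.append(t)
        t += rng.pareto(alpha) + 1.0
events = np.array(events)

# Bin hop times logarithmically; at long times the hop rate should decay
# as t^(alpha - 1), i.e. a power-law transient with exponent -(1 - alpha).
bins = np.logspace(0, 3, 16)
counts, _ = np.histogram(events, bins=bins)
rate = counts / np.diff(bins)
mid = np.sqrt(bins[:-1] * bins[1:])
ok = (rate > 0) & (mid > 10)           # keep the asymptotic regime
slope = np.polyfit(np.log(mid[ok]), np.log(rate[ok]), 1)[0]
print(round(slope, 2))  # close to alpha - 1 = -0.5
```

The fitted log-log slope approaches α - 1, the classic dispersive-transport signature the abstract refers to.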

  16. Statistical Analysis of Deflation in Covariance and Resultant Pc Values for AQUA, AURA and TERRA

    NASA Technical Reports Server (NTRS)

    Hasan, Syed O.

    2016-01-01

This presentation will display statistical analysis performed for raw conjunction CDMs received for the EOS Aqua, Aura and Terra satellites within the period of February 2015 through July 2016. The analysis performed indicates a discernible deflation in covariance calculated at the JSpOC after the utilization of the dynamic drag consider parameter was implemented operationally in May 2015. As a result, the overall diminution in the conjunction plane intersection of the primary and secondary objects appears to be leading to reduced probability of collision (Pc) values for these conjunction events. This presentation also displays evidence for this theory with analysis of Pc trending plots using data calculated by the SpaceNav CRMS system.
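The qualitative effect described here, smaller covariance leading to smaller Pc, can be sketched with a Monte Carlo probability-of-collision estimate in the conjunction plane. The miss vector, covariance, and hard-body radius below are made-up values, and the behavior shown holds in the regime where the covariance is already smaller than the miss distance:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative (made-up) conjunction geometry in the conjunction plane.
miss = np.array([300.0, 100.0])        # nominal miss vector, metres
cov = np.diag([200.0**2, 80.0**2])     # combined covariance, m^2
radius = 20.0                          # combined hard-body radius, m

def monte_carlo_pc(cov, n=1_000_000):
    """Fraction of sampled relative positions falling inside the combined
    hard-body radius, i.e. a Monte Carlo Pc estimate."""
    pts = rng.multivariate_normal(miss, cov, size=n)
    return np.mean(np.hypot(pts[:, 0], pts[:, 1]) < radius)

pc = monte_carlo_pc(cov)
pc_small = monte_carlo_pc(0.25 * cov)  # "deflated" covariance
print(pc, pc_small)  # the deflated covariance yields the smaller Pc here
```

Deflating the covariance concentrates the probability mass near the nominal miss point and away from the origin of the conjunction plane, reducing the integral over the hard-body disc.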

  17. Toward improved mechanical, tribological, corrosion and in-vitro bioactivity properties of mixed oxide nanotubes on Ti-6Al-7Nb implant using multi-objective PSO.

    PubMed

    Rafieerad, A R; Bushroa, A R; Nasiri-Tabrizi, B; Kaboli, S H A; Khanahmadi, S; Amiri, Ahmad; Vadivelu, J; Yusof, F; Basirun, W J; Wasa, K

    2017-05-01

Recently, robust optimization and prediction models have attracted considerable attention in the field of surface engineering and coating techniques as a way to obtain the highest possible output values from the fewest trial-and-error experiments. In addition, given the necessity of finding the optimum values of dependent variables, multi-objective metaheuristic models have been proposed to optimize various processes. Herein, oriented mixed oxide nanotubular arrays were grown on a Ti-6Al-7Nb (Ti67) implant using physical vapor deposition magnetron sputtering (PVDMS) designed by Taguchi, followed by electrochemical anodization. The obtained adhesion strength and hardness of Ti67/Nb were modeled by particle swarm optimization (PSO) to predict the output performance. According to the developed models, a multi-objective PSO (MOPSO) run aimed at finding the PVDMS inputs that maximize both outputs simultaneously. The resulting sputtering parameters were applied as a validation experiment and yielded higher adhesion strength and hardness of the layer interfaced with Ti67. The as-deposited Nb layers, before and after optimization, were anodized in a fluoride-based electrolyte for 300 min. To crystallize the coatings, the anodically grown mixed oxide TiO2-Nb2O5-Al2O3 nanotubes were annealed at 440 °C for 30 min. From the FESEM observations, the optimized adhesive Nb interlayer led to greater homogeneity of the mixed nanotube arrays. As a result of this surface modification, the anodized sample after annealing showed the best mechanical, tribological, corrosion-resistance and in-vitro bioactivity properties, with a thick bone-like apatite layer formed on the mixed oxide nanotube surface within 10 days of immersion in simulated body fluid (SBF) after applying MOPSO. The novel results of this study can be effective in optimizing a variety of surface properties of nanostructured implants. Copyright © 2016 Elsevier Ltd. All rights reserved.
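A minimal particle swarm optimizer conveys the mechanism behind the PSO modeling described above. This sketch scalarizes two hypothetical objectives into one weighted sum; the paper's MOPSO instead maintains a Pareto archive, and the objective functions and parameter bounds below are invented stand-ins for the coating responses:

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, seed=0):
    """Minimal particle swarm minimizer: each particle is pulled toward
    its personal best and the swarm's global best."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

# Hypothetical stand-ins for the two coating objectives (to be maximized),
# scalarized into one minimization problem by a weighted sum.
adhesion = lambda p: -(p[0] - 1.0) ** 2 + 5.0    # peaks at p[0] = 1
hardness = lambda p: -(p[1] - 2.0) ** 2 + 3.0    # peaks at p[1] = 2
cost = lambda p: -(0.5 * adhesion(p) + 0.5 * hardness(p))
best, best_val = pso(cost, [(0, 3), (0, 3)])
print(best)  # near [1.0, 2.0]
```

The swarm converges to the input combination maximizing both surrogate objectives, which is the role MOPSO plays for the sputtering parameters in the study.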

  18. Effect of Component Failures on Economics of Distributed Photovoltaic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lubin, Barry T.

    2012-02-02

This report describes an applied research program to assess the realistic costs of grid connected photovoltaic (PV) installations. A Board of Advisors was assembled that included management from the regional electric power utilities, as well as other participants from companies that work in the electric power industry. Although the program started with the intention of addressing effective load carrying capacity (ELCC) for utility-owned photovoltaic installations, results from the literature study and recommendations from the Board of Advisors led investigators to the conclusion that obtaining effective data for this analysis would be difficult, if not impossible. The effort was then re-focused on assessing the realistic costs and economic valuations of grid-connected PV installations. The 17 kW PV installation on the University of Hartford's Lincoln Theater was used as one source of actual data. The change in objective required a more technically oriented group. The re-organized working group (changes made due to the need for more technically oriented participants) made site visits to medium-sized PV installations in Connecticut with the objective of developing sources of operating histories. An extensive literature review helped to focus efforts in several technical and economic subjects. The objective of determining the consequences of component failures on both generation and economic returns required three analyses. The first was a Monte-Carlo-based simulation model for failure occurrences and the resulting downtime. Published failure data, though limited, was used to verify the results. A second model was developed to predict the reduction in or loss of electrical generation related to the downtime due to these failures. Finally, a comprehensive economic analysis, including these failures, was developed to determine realistic net present values of installed PV arrays. 
Two types of societal benefits were explored, with quantitative valuations developed for both. Some societal benefits associated with financial benefits to the utility of having a distributed generation capacity that is not fossil-fuel based have been included into the economic models. Also included and quantified in the models are several benefits to society more generally: job creation and some estimates of benefits from avoiding greenhouse emissions. PV system failures result in a lowering of the economic values of a grid-connected system, but this turned out to be a surprisingly small effect on the overall economics. The most significant benefit noted resulted from including the societal benefits accrued to the utility. This provided a marked increase in the valuations of the array and made the overall value proposition a financially attractive one, in that net present values exceeded installation costs. These results indicate that the Department of Energy and state regulatory bodies should consider focusing on societal benefits that create economic value for the utility, confirm these quantitative values, and work to have them accepted by the utilities and reflected in the rate structures for power obtained from grid-connected arrays. Understanding and applying the economic benefits evident in this work can significantly improve the business case for grid-connected PV installations. This work also indicates that the societal benefits to the population are real and defensible, but not nearly as easy to justify in a business case as are the benefits that accrue directly to the utility.
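The three-analysis chain described above (Monte Carlo failures, generation loss, net present value) can be sketched in one short simulation. Every figure below (cost, energy price, failure rate, repair time) is an illustrative assumption, not a number from the report:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions (not the report's actual figures):
years = 25
annual_kwh = 20000.0        # nominal generation of a ~17 kW array
price_per_kwh = 0.15        # value of the energy, $/kWh
discount = 0.06             # annual discount rate
install_cost = 60000.0
failures_per_year = 0.125   # mean component failure rate
repair_days = 14.0          # mean downtime per failure

def npv_one_trial():
    """One Monte Carlo trial: random failures -> downtime -> lost
    generation -> discounted cash flows -> net present value."""
    cash = -install_cost
    for y in range(1, years + 1):
        n_fail = rng.poisson(failures_per_year)
        downtime = min(n_fail * rng.exponential(repair_days), 365.0)
        energy = annual_kwh * (1.0 - downtime / 365.0)
        cash += energy * price_per_kwh / (1.0 + discount) ** y
    return cash

npvs = np.array([npv_one_trial() for _ in range(5000)])
print(npvs.mean())  # negative without any valuation of societal benefits
```

With these placeholder numbers the failure-related loss is under one percent of generation, echoing the report's finding that failures are a surprisingly small effect, while the NPV stays negative until additional benefit streams are credited.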

  19. Graphical Arrays of Chemical-Specific Health Effect Reference Values for Inhalation Exposures (2009 Final Report)

    EPA Science Inventory

    This document provides graphical arrays and tables of key information on the derivation of human inhalation health effect reference values for specific chemicals, allowing comparisons across durations, populations, and intended use. A number of program offices within the Agency, ...

  20. Comparison of Texture Analysis Techniques in Both Frequency and Spatial Domains for Cloud Feature Extraction

    DTIC Science & Technology

    1992-01-01

entropy, energy, variance, skewness, and kurtosis. These parameters are then used as...statistic. The co-occurrence matrix method is used in this study to derive texture values of entropy, homogeneity, energy (similar to the GLDV angular...from working with the co-occurrence matrix method. Seven convolution sizes were chosen to derive the texture values of entropy, local homogeneity, and

  1. Determination of Acidity in Donor Milk.

    PubMed

    Escuder-Vieco, Diana; Vázquez-Román, Sara; Sánchez-Pallás, Juan; Ureta-Velasco, Noelia; Mosqueda-Peña, Rocío; Pallás-Alonso, Carmen Rosa

    2016-11-01

    There is no uniformity among milk banks on milk acceptance criteria. The acidity obtained by the Dornic titration technique is a widely used quality control in donor milk. However, there are no comparative data with other acidity-measuring techniques, such as the pH meter. The objective of this study was to assess the correlation between the Dornic technique and the pH measure to determine the pH cutoff corresponding to the Dornic degree limit value used as a reference for donor milk quality control. Fifty-two human milk samples were obtained from 48 donors. Acidity was measured using the Dornic method and pH meter in triplicate. Statistical data analysis to estimate significant correlations between variables was carried out. The Dornic acidity value that led to rejecting donor milk was ≥ 8 Dornic degrees (°D). In the evaluated sample size, Dornic acidity measure and pH values showed a statistically significant negative correlation (τ = -0.780; P = .000). A pH value of 6.57 corresponds to 8°D and of 7.12 to 4°D. Donor milk with a pH over 6.57 may be accepted for subsequent processing in the milk bank. Moreover, the pH measurement seems to be more useful due to certain advantages over the Dornic method, such as objectivity, accuracy, standardization, the lack of chemical reagents required, and the fact that it does not destroy the milk sample.
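The negative Kendall rank correlation reported between Dornic acidity and pH can be reproduced with a small pure-Python tau computation. The paired readings below are hypothetical, chosen only so that acidity rises as pH falls:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation (tau-a): (concordant - discordant) pairs
    divided by the total number of pairs; tied pairs contribute zero."""
    n = len(x)
    s = 0
    for i, j in combinations(range(n), 2):
        d = (x[i] - x[j]) * (y[i] - y[j])
        s += (d > 0) - (d < 0)
    return s / (n * (n - 1) / 2)

# Hypothetical paired measurements: titratable acidity (degrees Dornic)
# rises as pH falls, so tau should be strongly negative.
acidity = [3, 4, 4, 5, 6, 7, 8, 9]
ph = [7.1, 7.0, 6.9, 6.8, 6.7, 6.6, 6.5, 6.4]
print(kendall_tau(acidity, ph))  # -0.964...; one tied pair keeps |tau| < 1
```

A strongly negative tau of this kind is what supports translating the 8 °D rejection threshold into a pH cutoff.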

  2. RCP: a novel probe design bias correction method for Illumina Methylation BeadChip.

    PubMed

    Niu, Liang; Xu, Zongli; Taylor, Jack A

    2016-09-01

The Illumina HumanMethylation450 BeadChip has been extensively utilized in epigenome-wide association studies. This array and its successor, the MethylationEPIC array, use two types of probes, Infinium I (type I) and Infinium II (type II), to increase genome coverage, but differences in probe chemistries result in different type I and type II distributions of methylation values. Ignoring the difference in distributions between the two probe types may bias downstream analysis. Here, we developed a novel method, called Regression on Correlated Probes (RCP), which uses the existing correlation between pairs of nearby type I and type II probes to adjust the beta values of all type II probes. We evaluate the effect of this adjustment on reducing probe design type bias, reducing technical variation in duplicate samples, improving accuracy of measurements against known standards, and retention of biological signal. We find that RCP is statistically significantly better than unadjusted data or adjustment with alternative methods including SWAN and BMIQ. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website (https://www.bioconductor.org/packages/release/bioc/html/ENmix.html). niulg@ucmail.uc.edu Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
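The core idea, using the correlation between nearby type I and type II probes to put type II beta values on the type I scale, can be sketched with toy data. This is a simplified linear caricature, not the published RCP implementation (which works on logit-transformed values within the ENmix package):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy paired probes: the type I probe reads the "true" beta value with a
# little noise, while the type II probe has a compressed, shifted range
# (a caricature of the chemistry difference, not real array data).
true_beta = rng.uniform(0.05, 0.95, size=500)
type1 = np.clip(true_beta + rng.normal(0, 0.02, 500), 0, 1)
type2 = np.clip(0.3 + 0.5 * true_beta + rng.normal(0, 0.02, 500), 0, 1)

# Regress type I on type II across the paired probes, then map type II
# values onto the type I scale with the fitted line.
b, a = np.polyfit(type2, type1, 1)
type2_adj = np.clip(a + b * type2, 0, 1)
print(type1.mean(), type2.mean(), type2_adj.mean())
```

Before adjustment the two probe types have visibly different beta distributions; after the regression mapping, their distributions align, which is the bias reduction the method targets.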

  3. Separation of parallel encoded complex-valued slices (SPECS) from a single complex-valued aliased coil image.

    PubMed

    Rowe, Daniel B; Bruce, Iain P; Nencka, Andrew S; Hyde, James S; Kociuba, Mary C

    2016-04-01

Achieving a reduction in scan time with minimal inter-slice signal leakage is one of the significant obstacles in parallel MR imaging. In fMRI, multiband-imaging techniques accelerate data acquisition by simultaneously magnetizing the spatial frequency spectrum of multiple slices. The SPECS model eliminates the consequential inter-slice signal leakage from the slice unaliasing, while maintaining an optimal reduction in scan time and activation statistics in fMRI studies. When the combined k-space array is inverse Fourier reconstructed, the resulting aliased image is separated into the un-aliased slices through a least squares estimator. Without the additional spatial information from a phased array of receiver coils, slice separation in SPECS is accomplished with aliased images acquired in a shifted-FOV aliasing pattern and a bootstrapping approach that incorporates reference calibration images in an orthogonal Hadamard pattern. The aliased slices are effectively separated with minimal expense to the spatial and temporal resolution. Functional activation is observed in the motor cortex, as the number of aliased slices is increased, in a bilateral finger-tapping fMRI experiment. The SPECS model incorporates calibration reference images together with coefficients of orthogonal polynomials into an un-aliasing estimator to achieve separated images, with virtually no residual artifacts, and functional activation detection in the separated images. Copyright © 2015 Elsevier Inc. All rights reserved.
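The least squares unaliasing step can be sketched on a real-valued toy problem (the actual SPECS data are complex-valued). Two slices are mixed into two acquisitions with known Hadamard-like coefficients, so each voxel's slice values are recovered by solving y = A x:

```python
import numpy as np

# Known mixing of two slices into two acquisitions (Hadamard-like
# reference encoding); each voxel then satisfies y = A @ x.
A = np.array([[1.0, 1.0],     # acquisition 1: slices added
              [1.0, -1.0]])   # acquisition 2: slices subtracted

rng = np.random.default_rng(7)
slices = rng.random((2, 8, 8))                 # true slice images
y = np.einsum('ms,sij->mij', A, slices)        # aliased acquisitions
y = y + rng.normal(0, 1e-3, y.shape)           # measurement noise

# Unalias every voxel at once with the least squares (pseudoinverse)
# estimator; with an invertible A this is exact up to the noise.
x_hat = np.einsum('sm,mij->sij', np.linalg.pinv(A), y)
print(np.max(np.abs(x_hat - slices)))          # residual set by the noise
```

Scaling the idea to more simultaneous slices just adds rows of encoding coefficients to A, which is where the shifted-FOV and calibration patterns come in.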

  4. Multi-kW solar arrays for Earth orbit applications

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The multi-kW solar array program is concerned with developing the technology required to enable the design of solar arrays required to power the missions of the 1990's. The present effort required the design of a modular solar array panel consisting of superstrate modules interconnected to provide the structural support for the solar cells. The effort was divided into two tasks: (1) superstrate solar array panel design, and (2) superstrate solar array panel-to-panel design. The primary objective was to systematically investigate critical areas of the transparent superstrate solar array and evaluate the flight capabilities of this low cost approach.

  5. GPS FOM Chimney Analysis using Generalized Extreme Value Distribution

    NASA Technical Reports Server (NTRS)

    Ott, Rick; Frisbee, Joe; Saha, Kanan

    2004-01-01

An objective of a statistical analysis is often to estimate a limit value, such as a 3-sigma 95% confidence upper limit, from a data sample. The generalized extreme value (GEV) distribution method can be profitably employed in many situations for such an estimate. It is well known that, according to the central limit theorem, the mean value of a large data set is normally distributed irrespective of the distribution of the data from which the mean is derived. In a somewhat similar fashion, the extreme value of a data set often has a distribution that can be described by a generalized extreme value distribution. In space shuttle entry with 3-string GPS navigation, the figure of merit (FOM) value gives a measure of GPS navigated-state accuracy. A GPS navigated state with a FOM of 6 or higher is deemed unacceptable and is said to form a FOM chimney: a period of time during which the FOM value stays higher than 5. A longer period of FOM values of 6 or higher causes the navigated state to accumulate more error for lack of a state update. For an acceptable landing it is imperative that the state error remain low; hence, at low altitude during entry, GPS data with a FOM greater than 5 must not last more than 138 seconds. To test GPS performance, many entry test cases were simulated at the Avionics Development Laboratory. Only high-value FOM chimneys are consequential, and the extreme value statistical technique is applied to analyze them. The maximum likelihood method is used to determine the parameters that characterize the GEV distribution, and the limit value statistics are then estimated.
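The block-maxima idea behind this analysis can be sketched with the Gumbel special case of the GEV (shape parameter zero), fitted by the method of moments for brevity rather than the maximum likelihood fit used in the paper. The simulated data are a generic stand-in, not FOM chimney durations:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy stand-in for per-run extreme values: block maxima of simulated
# noise. Block maxima of many distributions approach a GEV; here we fit
# the Gumbel special case (GEV shape = 0).
samples = rng.exponential(scale=10.0, size=(2000, 50))
block_max = samples.max(axis=1)

# Method-of-moments Gumbel fit: scale = std * sqrt(6) / pi,
# location = mean - (Euler-Mascheroni constant) * scale.
scale = block_max.std() * np.sqrt(6.0) / np.pi
loc = block_max.mean() - 0.5772156649 * scale

# 95% upper limit from the fitted Gumbel quantile function.
upper95 = loc - scale * np.log(-np.log(0.95))
print(upper95)
```

About 95% of the simulated extremes fall below the fitted limit, which is exactly the kind of limit-value statistic the abstract describes estimating for FOM chimneys.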

  6. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, Bahram A.; Maestre, Marcos F.; Fish, Richard H.; Johnston, William E.

    1997-01-01

An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage.

  7. Method and apparatus for accurately manipulating an object during microelectrophoresis

    DOEpatents

    Parvin, B.A.; Maestre, M.F.; Fish, R.H.; Johnston, W.E.

    1997-09-23

    An apparatus using electrophoresis provides accurate manipulation of an object on a microscope stage for further manipulations and reactions. The present invention also provides an inexpensive and easily accessible means to move an object without damage to the object. A plurality of electrodes are coupled to the stage in an array whereby the electrode array allows for distinct manipulations of the electric field for accurate manipulations of the object. There is an electrode array control coupled to the plurality of electrodes for manipulating the electric field. In an alternative embodiment, a chamber is provided on the stage to hold the object. The plurality of electrodes are positioned in the chamber, and the chamber is filled with fluid. The system can be automated using visual servoing, which manipulates the control parameters, i.e., x, y stage, applying the field, etc., after extracting the significant features directly from image data. Visual servoing includes an imaging device and computer system to determine the location of the object. A second stage having a plurality of tubes positioned on top of the second stage, can be accurately positioned by visual servoing so that one end of one of the plurality of tubes surrounds at least part of the object on the first stage. 11 figs.

  8. A real-time regional adaptive exposure method for saving dose-area product in x-ray fluoroscopy

    PubMed Central

    Burion, Steve; Speidel, Michael A.; Funk, Tobias

    2013-01-01

Purpose: Reduction of radiation dose in x-ray imaging has been recognized as a high priority in the medical community. Here the authors show that a regional adaptive exposure method can reduce dose-area product (DAP) in x-ray fluoroscopy. The authors' method is particularly geared toward providing dose savings for the pediatric population. Methods: The scanning beam digital x-ray system uses a large-area x-ray source with 8000 focal spots in combination with a small photon-counting detector. An imaging frame is obtained by acquiring and reconstructing up to 8000 detector images, each viewing only a small portion of the patient. Regional adaptive exposure was implemented by varying the exposure of the detector images depending on the local opacity of the object. A family of phantoms ranging in size from infant to obese adult was imaged in anteroposterior view with and without adaptive exposure. The DAP delivered to each phantom was measured in each case, and noise performance was compared by generating noise arrays to represent regional noise in the images. These noise arrays were generated by dividing the image into regions of about 6 mm^2, calculating the relative noise in each region, and placing the relative noise value of each region in a one-dimensional array (noise array) sorted from highest to lowest. Dose-area product savings were calculated as one minus the ratio of DAP with adaptive exposure to DAP without adaptive exposure. The authors modified this value by a correction factor that matches the noise arrays where relative noise is the highest to report a final dose-area product savings. Results: The average dose-area product saving across the phantom family was (42 ± 8)% with the highest dose-area product saving in the child-sized phantom (50%) and the lowest in the phantom mimicking an obese adult (23%). 
Conclusions: Phantom measurements indicate that a regional adaptive exposure method can produce large DAP savings without compromising the noise performance in the image regions with highest noise. PMID:23635281
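The noise-array construction described in the Methods section can be sketched directly: tile the image, compute the relative noise (standard deviation over mean) in each tile, and sort descending. The flat Poisson test image and the 8-pixel tile size are illustrative choices, not the paper's geometry:

```python
import numpy as np

def noise_array(img, region=8):
    """Split an image into region x region tiles, compute the relative
    noise (std/mean) in each tile, and return the values sorted from
    highest to lowest, as in the paper's noise arrays."""
    h, w = img.shape
    vals = []
    for r in range(0, h - region + 1, region):
        for c in range(0, w - region + 1, region):
            tile = img[r:r + region, c:c + region]
            if tile.mean() > 0:
                vals.append(tile.std() / tile.mean())
    return np.sort(vals)[::-1]

rng = np.random.default_rng(5)
img = rng.poisson(100.0, size=(64, 64)).astype(float)  # flat Poisson image
na = noise_array(img)
print(na[0])  # worst-region relative noise, near 1/sqrt(100) = 0.1
```

Comparing the high end of two such arrays, where relative noise is worst, is how the authors check that the DAP saving does not come at the cost of noisier image regions.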

  9. The control system of the 12-m medium-size telescope prototype: a test-ground for the CTA array control

    NASA Astrophysics Data System (ADS)

    Oya, I.; Anguner, E. A.; Behera, B.; Birsin, E.; Fuessling, M.; Lindemann, R.; Melkumyan, D.; Schlenstedt, S.; Schmidt, T.; Schwanke, U.; Sternberger, R.; Wegner, P.; Wiesand, S.

    2014-07-01

The Cherenkov Telescope Array (CTA) will be the next generation ground-based very-high-energy gamma-ray observatory. CTA will consist of two arrays: one in the Northern hemisphere composed of about 20 telescopes, and the other one in the Southern hemisphere composed of about 100 telescopes, both arrays containing telescopes of different sizes and types and in addition numerous auxiliary devices. In order to provide a test-ground for the CTA array control, the steering software of the 12-m medium-size telescope (MST) prototype deployed in Berlin has been implemented using the tools and design concepts under consideration to be used for the control of the CTA array. The prototype control system is implemented based on the Atacama Large Millimeter/submillimeter Array (ALMA) Common Software (ACS) control middleware, with components implemented in Java, C++ and Python. The interfacing to the hardware is standardized via the Object Linking and Embedding for Process Control Unified Architecture (OPC UA). In order to access the OPC UA servers from the ACS framework in a common way, a library has been developed that allows tying the OPC UA server nodes, methods and events to their equivalents in ACS components. The front-end of the archive system is able to identify the deployed components and to perform the sampling of the monitoring points of each component following time and value-change triggers according to the selected configurations. The back-end of the archive system of the prototype is composed of two different databases: MySQL and MongoDB. MySQL has been selected as the store for system configurations, while MongoDB is used to provide efficient storage of device monitoring data, CCD images, logging and alarm information. In this contribution, the details and conclusions on the implementation of the control software of the MST prototype are presented.
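The time and value-change triggers used by the archive front-end can be sketched with a small sampler. The class, thresholds, and signal below are illustrative inventions, not the prototype's actual configuration or API:

```python
class MonitorPoint:
    """Archive sampling with time and value-change triggers, loosely
    modeled on the behavior described for the MST prototype archiver."""

    def __init__(self, max_interval=60.0, min_delta=0.5):
        self.max_interval = max_interval  # force a sample this often (s)...
        self.min_delta = min_delta        # ...or when the value moves this much
        self.last_t = None
        self.last_v = None
        self.samples = []

    def update(self, t, value):
        due = self.last_t is None or t - self.last_t >= self.max_interval
        changed = (self.last_v is not None
                   and abs(value - self.last_v) >= self.min_delta)
        if due or changed:
            self.samples.append((t, value))
            self.last_t, self.last_v = t, value

mp = MonitorPoint(max_interval=60.0, min_delta=0.5)
for t in range(0, 300, 10):               # a flat signal with one jump
    mp.update(float(t), 20.0 if t < 150 else 23.0)
print(len(mp.samples))  # 6: startup, two timed, the jump, two more timed
```

A flat signal is archived only on the periodic trigger, while the step change is captured immediately, which keeps the monitoring database compact without losing transients.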

  10. Algorithm for Identifying Erroneous Rain-Gauge Readings

    NASA Technical Reports Server (NTRS)

    Rickman, Doug

    2005-01-01

    An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed to be erroneous readings. Heretofore, analyses of this type have been performed in burdensome manual procedures that have involved subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.
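One standard nonparametric building block for this kind of outlier screening is a median/MAD rule, sketched below on hypothetical gauge readings. This illustrates the flavor of the approach only; the actual algorithm additionally exploits the spatial distribution of the gauges:

```python
import numpy as np

def flag_outliers(readings, k=5.0):
    """Flag gauges whose reading deviates from the network median by more
    than k robust standard deviations (MAD-based, nonparametric)."""
    readings = np.asarray(readings, dtype=float)
    med = np.median(readings)
    mad = np.median(np.abs(readings - med))
    robust_sigma = 1.4826 * mad          # MAD -> sigma for normal data
    if robust_sigma == 0:
        return np.zeros(readings.shape, dtype=bool)
    return np.abs(readings - med) > k * robust_sigma

# Hypothetical simultaneous readings from nearby gauges, in mm.
gauges = [12.1, 11.8, 12.4, 11.9, 12.0, 48.7, 12.2]
print(flag_outliers(gauges))  # only the 48.7 mm reading is flagged
```

Because the median and MAD are insensitive to the suspect value itself, the rule avoids the arbitrary fixed limits that the abstract says the algorithm was designed to replace.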

  11. Check Calibration of the NASA Glenn 10- by 10-Foot Supersonic Wind Tunnel (2014 Test Entry)

    NASA Technical Reports Server (NTRS)

    Johnson, Aaron; Pastor-Barsi, Christine; Arrington, E. Allen

    2016-01-01

A check calibration of the 10- by 10-Foot Supersonic Wind Tunnel (SWT) was conducted in May/June 2014 using an array of five supersonic wedge probes to verify the 1999 calibration. This check calibration was necessary following a control systems upgrade and an integrated systems test (IST), and was required to verify that the tunnel flow quality was unchanged by the control systems upgrade before the next test customer began their test entry. The previous check calibration of the tunnel occurred in 2007, prior to the Mars Science Laboratory test program. Secondary objectives of this test entry included the validation of the new Cobra data acquisition system (DAS) against the current Escort DAS and the creation of statistical process control (SPC) charts through the collection of a series of repeated test points at certain predetermined tunnel parameters. The secondary objective of creating SPC charts was not completed due to schedule constraints. It is hoped that this effort will be readdressed and completed in the near future.

  12. Plasma Interaction with International Space Station High Voltage Solar Arrays

    NASA Technical Reports Server (NTRS)

    Heard, John W.

    2002-01-01

    The International Space Station (ISS) is presently being assembled in low-earth orbit (LEO), operating high voltage solar arrays (-160 V max, -140 V typical with respect to the ambient atmosphere). At the station's present altitude, there exists substantial ambient plasma that can interact with the solar arrays. Biasing an object immersed in plasma to an electric potential creates a plasma "sheath", a region of non-equilibrium plasma around the object that masks out the electric fields. A positively biased object can collect electrons from the plasma sheath, and the sheath will draw a current from the surrounding plasma. This parasitic current can enter the solar cells and effectively "short out" the potential across the cells, reducing the power that can be generated by the panels. Predictions of collected current based on previous high voltage experiments (SAMPIE (Solar Array Module Plasma Interactions Experiment), PASP+ (Photovoltaic Array Space Power)) were on the order of amperes. However, present measurements of parasitic current are on the order of several milliamperes, and the current collection mainly occurs during an "eclipse exit" event, i.e., when the space station comes out of darkness. This collection also has a time scale, t approx. 1000 s, that is much slower than any known plasma interaction time scale. The reason for the discrepancy between predictions and present electron collection is not understood and is under investigation by the PCU (Plasma Contactor Unit) "Tiger" team. This paper will examine the potential structure within and around the solar arrays, and the possible causes of the electron collection of the array.

  13. S-band antenna phased array communications system

    NASA Technical Reports Server (NTRS)

    Delzer, D. R.; Chapman, J. E.; Griffin, R. A.

    1975-01-01

    The development of an S-band antenna phased array for spacecraft to spacecraft communication is discussed. The system requirements, antenna array subsystem design, and hardware implementation are examined. It is stated that the phased array approach offers the greatest simplicity and lowest cost. The objectives of the development contract are defined as: (1) design of a medium gain active phased array S-band communications antenna, (2) development and test of a model of a seven element planar array of radiating elements mounted in the appropriate cavity matrix, and (3) development and test of a breadboard transmit/receive microelectronics module.

  14. Investigation of the influence of geometric parameters of carbon nanotube arrays on their adhesion properties

    NASA Astrophysics Data System (ADS)

    Il’ina, M. V.; Konshin, A. A.; Il’in, O. I.; Rudyk, N. N.; Fedotov, A. A.; Ageev, O. A.

    2018-03-01

    The results of experimental studies of the adhesion of carbon nanotube (CNT) arrays with different geometric parameters and orientations, using atomic-force microscopy, are presented. The measured adhesion of the CNT arrays ranged from 82 to 1315 nN, depending on the parameters of the array. It was established that the adhesion of a CNT array increases with increasing branching and disorientation of the array, as well as with the growth of the aspect ratio of the CNTs in the array.

  15. Characterization of neuroendocrine tumors of the pancreas by real-time quantitative polymerase chain reaction. A methodological approach.

    PubMed

    Annaratone, Laura; Volante, Marco; Asioli, Sofia; Rangel, Nelson; Bussolati, Gianni

    2013-06-01

    The aim of this study was to assess the suitability of using real-time quantitative PCR (RT-qPCR) to characterize neuroendocrine (NE) tumors of the pancreas. For a series of tumors, we evaluated several genes of interest, and the data were matched with the "classical" immunohistochemical (IHC) features. In 21 cases, we extracted RNA from formalin-fixed paraffin-embedded (FFPE) blocks, and in nine cases, we also extracted RNA from fresh-frozen tissue. The RT-qPCR procedure was performed using two sets of customized arrays. The test using the first set, covering 96 genes of interest, was focused on assessing the feasibility of the procedure, and the results were used to select 18 genes indicative of NE differentiation, clinical behavior, and therapeutic responsiveness for use in the second set of arrays. Threshold cycle (Ct) values were used to calculate the fold-changes in gene expression using the 2^(-ΔΔCt) method. Statistical procedures were used to analyze the results, which were matched with the IHC and follow-up data. Material from fresh-frozen samples performed better in terms of the level of amplification, but acceptable and concordant results were also obtained from FFPE samples. In addition, high concordance was observed between the mRNA and protein expression levels of somatostatin receptor type 2A (R = 0.52, p = 0.016). Genes associated with NE differentiation, as well as the gastrin-releasing peptide receptor and O-6-methylguanine-DNA methyltransferase genes, were underexpressed, whereas angiogenesis-associated markers (CDH13 and SLIT2) were overexpressed in tissues with malignant behavior. The RT-qPCR procedure is practical and feasible in economic terms for the characterization of NE tumors of the pancreas and can complement morphological and IHC-based evaluations. Thus, the results of the RT-qPCR procedure might offer an objective basis for therapeutic choices.
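    The 2^(-ΔΔCt) fold-change calculation is mechanical enough to state in a few lines; the Ct values below are hypothetical, not from the study:

    ```python
    def fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
        """Relative expression by the 2^(-ΔΔCt) method.
        ΔCt = Ct(target) - Ct(reference gene), computed for both the
        sample of interest and the calibrator (e.g. normal tissue)."""
        delta_sample = ct_target - ct_ref
        delta_cal = ct_target_cal - ct_ref_cal
        return 2 ** -(delta_sample - delta_cal)

    # Hypothetical Ct values: the target amplifies 2 cycles earlier
    # (relative to the reference gene) in the tumor than in the calibrator.
    print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0
    ```

    A fold change of 4.0 here means a four-fold overexpression relative to the calibrator, since each earlier cycle corresponds to a doubling of starting template.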

  16. MesoNAM Verification Phase II

    NASA Technical Reports Server (NTRS)

    Watson, Leela R.

    2011-01-01

    The 45th Weather Squadron Launch Weather Officers use the 12-km resolution North American Mesoscale model (MesoNAM) forecasts to support launch weather operations. In Phase I, the performance of the model at KSC/CCAFS was measured objectively by conducting a detailed statistical analysis of model output compared to observed values. The objective analysis compared the MesoNAM forecast winds, temperature, and dew point to the observed values from the sensors in the KSC/CCAFS wind tower network. In Phase II, the AMU modified the current tool by adding an additional 15 months of model output to the database and recalculating the verification statistics. The bias, standard deviation of the bias, root mean square error, and a hypothesis test for bias were calculated to verify the performance of the model. The results indicated that accuracy decreased as the forecast progressed; there was a diurnal signal in temperature, with a cool bias during the late night and a warm bias during the afternoon; and there was a diurnal signal in dew point temperature, with a low bias during the afternoon and a high bias during the late night.
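    The verification statistics named above can be computed directly from paired forecast/observation series; the values below are hypothetical illustrations, not MesoNAM output:

    ```python
    import math

    def verification_stats(forecast, observed):
        """Bias (mean error), sample standard deviation of the errors,
        and root mean square error of a forecast against observations."""
        errors = [f - o for f, o in zip(forecast, observed)]
        n = len(errors)
        bias = sum(errors) / n
        sd = math.sqrt(sum((e - bias) ** 2 for e in errors) / (n - 1))
        rmse = math.sqrt(sum(e ** 2 for e in errors) / n)
        return bias, sd, rmse

    fcst = [25.1, 26.0, 24.8, 27.2]   # hypothetical temperatures, deg C
    obs  = [24.9, 25.5, 25.0, 26.8]
    bias, sd, rmse = verification_stats(fcst, obs)
    print(round(bias, 3), round(rmse, 3))
    ```

    A positive bias indicates the model runs warm on average; RMSE additionally penalizes large individual misses, which is why both are tracked.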

  17. Solid state optical microscope

    DOEpatents

    Young, I.T.

    1983-08-09

    A solid state optical microscope wherein wide-field and high-resolution images of an object are produced at a rapid rate by utilizing conventional optics with a charge-coupled photodiode array. A galvanometer scanning mirror, for scanning in one of two orthogonal directions is provided, while the charge-coupled photodiode array scans in the other orthogonal direction. Illumination light from the object is incident upon the photodiodes, creating packets of electrons (signals) which are representative of the illuminated object. The signals are then processed, stored in a memory, and finally displayed as a video signal. 2 figs.

  18. Solid-state optical microscope

    DOEpatents

    Young, I.T.

    1981-01-07

    A solid state optical microscope is described wherein wide-field and high-resolution images of an object are produced at a rapid rate by utilizing conventional optics with a charge-coupled photodiode array. Means for scanning in one of two orthogonal directions are provided, while the charge-coupled photodiode array scans in the other orthogonal direction. Illumination light from the object is incident upon the photodiodes, creating packets of electrons (signals) which are representative of the illuminated object. The signals are then processed, stored in a memory, and finally displayed as a video signal.

  19. Solid state optical microscope

    DOEpatents

    Young, Ian T.

    1983-01-01

    A solid state optical microscope wherein wide-field and high-resolution images of an object are produced at a rapid rate by utilizing conventional optics with a charge-coupled photodiode array. A galvanometer scanning mirror, for scanning in one of two orthogonal directions is provided, while the charge-coupled photodiode array scans in the other orthogonal direction. Illumination light from the object is incident upon the photodiodes, creating packets of electrons (signals) which are representative of the illuminated object. The signals are then processed, stored in a memory, and finally displayed as a video signal.

  20. Photomask CD and LER characterization using Mueller matrix spectroscopic ellipsometry

    NASA Astrophysics Data System (ADS)

    Heinrich, A.; Dirnstorfer, I.; Bischoff, J.; Meiner, K.; Ketelsen, H.; Richter, U.; Mikolajick, T.

    2014-10-01

    Critical dimension and line edge roughness on photomask arrays are determined with Mueller matrix spectroscopic ellipsometry. Arrays with large sinusoidal perturbations are measured for different azimuth angles and compared with simulations based on rigorous coupled wave analysis. Experiment and simulation show that line edge roughness leads to characteristic changes in the different Mueller matrix elements. The influence of line edge roughness is interpreted as an increase in the isotropic character of the sample. The changes in the Mueller matrix elements are very similar when the arrays are statistically perturbed with rms roughness values in the nanometer range, suggesting that the results on the sinusoidal test structures are also relevant for "real" mask errors. Critical dimension errors and line edge roughness have a similar impact on the SE MM measurement. To distinguish between the two deviations, a strategy based on the calculation of sensitivities and correlation coefficients for all Mueller matrix elements is shown. The Mueller matrix elements M13/M31 and M34/M43 are the most suitable due to their high sensitivities to critical dimension errors and line edge roughness and, at the same time, a low correlation coefficient between the two influences. From the simulated sensitivities, it is estimated that the measurement accuracy has to be on the order of 0.01 and 0.001 for the detection of 1 nm critical dimension error and 1 nm line edge roughness, respectively.

  1. A Phased Array of Widely Separated Antennas for Space Communication and Planetary Radar

    NASA Astrophysics Data System (ADS)

    Geldzahler, B.; Bershad, C.; Brown, R.; Cox, R.; Hoblitzell, R.; Kiriazes, J.; Ledford, B.; Miller, M.; Woods, G.; Cornish, T.; D'Addario, L.; Davarian, F.; Lee, D.; Morabito, D.; Tsao, P.; Soloff, J.; Church, K.; Deffenbaugh, P.; Abernethy, K.; Anderson, W.; Collier, J.; Wellen, G.

    NASA has successfully demonstrated coherent uplink arraying with real-time compensation for atmospheric phase fluctuations at 7.145-7.190 GHz (X-band) and is pursuing a similar demonstration at 30-31 GHz (Ka-band) using three 12 m diameter COTS antennas separated by 60 m at the Kennedy Space Center in Florida. In addition, we have performed the same demonstration with up to three 34 m antennas separated by 250 m at the Goldstone Deep Space Communication Complex in California at X-band (7.1 GHz). We have begun to infuse the Goldstone capability into the Deep Space Network to provide a quasi-operational system. Such a demonstration can enable NASA to design and establish a high power (10 PW), high resolution (<10 cm), 24/7-availability radar system for (a) tracking and characterizing Near Earth Objects (NEOs), (b) tracking, characterizing and determining the statistics of small-scale (≤10 cm) orbital debris, (c) incorporating the capability into its space communication and navigation tracking stations for emergency spacecraft commanding in the Ka-band era which NASA is entering, and (d) fielding capabilities of interest to other US government agencies. We present herein the results of our phased-array uplink combining demonstrations near 7.17 and 8.3 GHz using widely separated antennas, our moderately successful attempts to rescue the STEREO-B spacecraft (at a distance of 2 astronomical units (185,000,000 miles)), the first two attempts at imaging and ranging of near-Earth asteroids, progress in developing telescopes that are fully capable at radio and optical frequencies, and progress toward implementing our vision for a high performance, low lifecycle-cost multi-element radar array.

  2. Using a binaural biomimetic array to identify bottom objects ensonified by echolocating dolphins

    USGS Publications Warehouse

    Heiweg, D.A.; Moore, P.W.; Martin, S.W.; Dankiewicz, L.A.

    2006-01-01

    The development of a unique dolphin biomimetic sonar produced data that were used to study signal processing methods for object identification. Echoes from four metallic objects proud on the bottom, and a substrate-only condition, were generated by bottlenose dolphins trained to ensonify the targets in very shallow water. Using the two-element ('binaural') receive array, object echo spectra were collected and submitted for identification to four neural network architectures. Identification accuracy was evaluated over two receive array configurations, and five signal processing schemes. The four neural networks included backpropagation, learning vector quantization, genetic learning and probabilistic network architectures. The processing schemes included four methods that capitalized on the binaural data, plus a monaural benchmark process. All the schemes resulted in above-chance identification accuracy when applied to learning vector quantization and backpropagation. Beam-forming or concatenation of spectra from both receive elements outperformed the monaural benchmark, with higher sensitivity and lower bias. Ultimately, best object identification performance was achieved by the learning vector quantization network supplied with beam-formed data. The advantages of multi-element signal processing for object identification are clearly demonstrated in this development of a first-ever dolphin biomimetic sonar. © 2006 IOP Publishing Ltd.

  3. Monitoring the WFC3/UVIS Relative Gain with Internal Flatfields

    NASA Astrophysics Data System (ADS)

    Fowler, J.; Baggett, S.

    2017-03-01

    The WFC3/UVIS gain stability has been monitored twice yearly. This project provides a new examination of gain stability, making use of the existing internal flatfield observations taken every three days (for the Bowtie monitor) for a regular look at relative gain stability. Amplifiers are examined for consistency both in comparison to each other and over time, by normalizing the B, C, and D amplifiers to A, and then plotting statistics for each of the three normalized amplifiers with time. We find minimal trends in these statistics, with a 0.02 - 0.2% change in mean amplifier ratio over 7.5 years. The trends in the amplifiers are well-behaved with the exception of the B/A ratio, which shows increased scatter in mean, median, and standard deviation. The cause of the scatter remains unclear though we find it is not dependent upon detector defects, filter features, or shutter effects, and is only observable after pixel flagging (both from the data quality arrays and outlier values) has been applied.
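    The amplifier-ratio monitoring described above (normalizing the B, C, and D amplifiers to A, then tracking statistics of the ratios over time) can be sketched minimally; the pixel values and the frame structure below are invented for illustration and do not reflect the actual WFC3 data format:

    ```python
    import statistics

    def amp_ratio_stats(flat):
        """Normalize the mean signal of each amplifier quadrant to amp A.
        `flat` maps amplifier name -> list of pixel values from one
        internal flatfield exposure (hypothetical structure)."""
        ref = statistics.fmean(flat["A"])
        return {amp: statistics.fmean(vals) / ref
                for amp, vals in flat.items() if amp != "A"}

    frame = {"A": [100.0, 102.0], "B": [99.0, 101.0],
             "C": [100.5, 101.5], "D": [98.0, 100.0]}
    ratios = amp_ratio_stats(frame)
    print(ratios)  # ≈ {'B': 0.9901, 'C': 1.0, 'D': 0.9802}
    ```

    Plotting these ratios for every three-day flatfield then exposes drifts such as the 0.02 - 0.2% changes and the B/A scatter the report describes.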

  4. NuSTAR AND Swift Observations of the Very High State in GX 339-4: Weighing the Black Hole With X-Rays

    NASA Technical Reports Server (NTRS)

    Parker, M. L.; Tomsick, J. A.; Kennea, J. A.; Miller, J. M.; Harrison, F. A.; Barret, D.; Boggs, S. E.; Christensen, F. E.; Craig, W. W.; Fabian, A. C.; hide

    2016-01-01

    We present results from spectral fitting of the very high state of GX 339-4 with the Nuclear Spectroscopic Telescope Array (NuSTAR) and Swift. We use relativistic reflection modeling to measure the spin of the black hole and the inclination of the inner disk, finding a spin of a = 0.95 (+0.08/-0.02) and an inclination of 30 deg +/- 1 deg (statistical errors). These values agree well with previous results from reflection modeling. With the exceptional sensitivity of NuSTAR at the high-energy side of the disk spectrum, we are able to constrain multiple physical parameters simultaneously using continuum fitting. By using the constraints from reflection as input for the continuum fitting method, we invert the conventional fitting procedure to estimate the mass and distance of GX 339-4 using just the X-ray spectrum, finding a mass of 9.0 (+1.6/-1.2) solar masses and a distance of 8.4 +/- 0.9 kpc (statistical errors).

  5. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, S

    2015-06-15

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two-dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data were tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data were normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements, as it did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration: 6 MeV was the least sensitive to array calibration selection, while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normal, the process was capable of meeting specifications, and the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors using routine measurement of electron beam energy constancy.
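    As an illustration of how individuals-chart control limits are commonly derived (the abstract does not state the exact method used; the 2.66 moving-range constant and the readings below are standard textbook assumptions, not the study's data):

    ```python
    def individuals_control_limits(x):
        """Individuals (I-chart) control limits from the average moving range.
        LCL/UCL = mean ± 2.66 * mean(|x[i] - x[i-1]|), using the standard
        I-MR chart constant 2.66 = 3 / d2 with d2 = 1.128 for n = 2."""
        mean = sum(x) / len(x)
        mrbar = sum(abs(a - b) for a, b in zip(x[1:], x[:-1])) / (len(x) - 1)
        return mean - 2.66 * mrbar, mean + 2.66 * mrbar

    # Hypothetical daily energy-constancy readings (relative signal ratio)
    readings = [1.002, 0.998, 1.001, 0.999, 1.000, 1.003, 0.997]
    lcl, ucl = individuals_control_limits(readings)
    print(round(lcl, 5), round(ucl, 5))
    ```

    Because these limits reflect the process's own short-term variation rather than tolerance specifications, they are typically tighter than TG-142 specification limits, consistent with the study's finding.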

  6. Three dimensional measurement with an electrically tunable focused plenoptic camera

    NASA Astrophysics Data System (ADS)

    Lei, Yu; Tong, Qing; Xin, Zhaowei; Wei, Dong; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng

    2017-03-01

    A liquid crystal microlens array (LCMLA) based on nematic liquid crystal materials, with an arrayed microhole pattern electrode, is presented; it was fabricated using traditional UV photolithography and wet etching. Its focusing performance is measured under different voltage signals applied between the electrodes of the LCMLA. The experimental results show that the focal length of the LCMLA can be tuned simply by changing the root mean square value of the applied voltage signal. The LCMLA is further integrated with a main lens and an imaging sensor to construct an LCMLA-based focused plenoptic camera (LCFPC) prototype. The focused range of the LCFPC can be shifted electrically along the optical axis of the imaging system. The principles and methods for acquiring several key parameters, such as three-dimensional (3D) depth, positioning, and motion expression, are given, and the depth resolution is discussed in detail. Experiments are carried out to obtain static and dynamic 3D information about selected objects.

  7. Three dimensional measurement with an electrically tunable focused plenoptic camera.

    PubMed

    Lei, Yu; Tong, Qing; Xin, Zhaowei; Wei, Dong; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng

    2017-03-01

    A liquid crystal microlens array (LCMLA) based on nematic liquid crystal materials, with an arrayed microhole pattern electrode, is presented; it was fabricated using traditional UV photolithography and wet etching. Its focusing performance is measured under different voltage signals applied between the electrodes of the LCMLA. The experimental results show that the focal length of the LCMLA can be tuned simply by changing the root mean square value of the applied voltage signal. The LCMLA is further integrated with a main lens and an imaging sensor to construct an LCMLA-based focused plenoptic camera (LCFPC) prototype. The focused range of the LCFPC can be shifted electrically along the optical axis of the imaging system. The principles and methods for acquiring several key parameters, such as three-dimensional (3D) depth, positioning, and motion expression, are given, and the depth resolution is discussed in detail. Experiments are carried out to obtain static and dynamic 3D information about selected objects.

  8. MicroRNA Expression Profiling of the Armed Forces Health Surveillance Branch Cohort for Identification of "Enviro-miRs" Associated With Deployment-Based Environmental Exposure.

    PubMed

    Dalgard, Clifton L; Polston, Keith F; Sukumar, Gauthaman; Mallon, Col Timothy M; Wilkerson, Matthew D; Pollard, Harvey B

    2016-08-01

    The aim of this study was to identify serum microRNA (miRNA) biomarkers that indicate deployment-associated exposures in service members at military installations with open burn pits. Another objective was to determine detection rates of miRNAs in Department of Defense Serum Repository (DoDSR) samples with a high-throughput methodology. Low-volume serum samples (n = 800) were profiled by miRNA-capture isolation, pre-amplification, and measurement by a quantitative PCR-based OpenArray platform. Normalized quantitative cycle values were used for differential expression analysis between groups. Assay specificity, dynamic range, reproducibility, and detection rates by OpenArray passed target desired specifications. Serum abundant miRNAs were consistently measured in study specimens. Four miRNAs were differentially expressed in the case deployment group subjects. miRNAs are suitable RNA species for biomarker discovery in the DoDSR serum specimens. Serum miRNAs are candidate biomarkers for deployment and environmental exposure in military service members.

  9. WO₃ thin film based multiple sensor array for electronic nose application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramgir, Niranjan S., E-mail: niranjanpr@yahoo.com, E-mail: deepakcct1991@gmail.com; Goyal, C. P.; Datta, N.

    2015-06-24

    A multiple sensor array comprising 16 × 2 sensing elements was realized using RF-sputtered WO₃ thin films. The sensor films were modified with a thin layer of sensitizers, namely Au, Ni, Cu, Al, Pd, Ti, and Pt. The resulting sensor array was tested for its response towards different gases, namely H₂S, NH₃, NO, and C₂H₅OH. The sensor response values measured from the response curves indicate that the sensor array generates a unique signature pattern (bar chart) for the gases. The sensor response values can be used to obtain both qualitative and quantitative information about the gas.

  10. Fast Noncircular 2D-DOA Estimation for Rectangular Planar Array

    PubMed Central

    Xu, Lingyun; Wen, Fangqing

    2017-01-01

    A novel scheme is proposed for direction finding with a uniform rectangular planar array. First, the characteristics of noncircular signals and Euler's formula are exploited to construct new real-valued rectangular array data. Then, the rotational invariance relations for the real-valued signal space are depicted in a new way. Finally, the real-valued propagator method is utilized to estimate the paired two-dimensional directions of arrival (2D-DOA). The proposed algorithm provides better angle estimation performance and can discern more sources than the 2D propagator method. At the same time, it has angle estimation performance very close to that of the noncircular propagator method (NC-PM), with reduced computational complexity. PMID:28417926

  11. Large array of 2048 tilting micromirrors for astronomical spectroscopy: optical and cryogenic characterization

    NASA Astrophysics Data System (ADS)

    Zamkotsian, Frédéric; Canonica, Michael; Lanzoni, Patrick; Noell, Wilfried; Lani, Sebastien

    2014-03-01

    Multi-object spectroscopy (MOS) is a powerful tool for space and ground-based telescopes for the study of the formation and evolution of galaxies. This technique requires a programmable slit mask for astronomical object selection. We are engaged in a European development of micromirror arrays (MMA), called MIRA, for generating reflective slit masks in future MOS instruments. MMA with 100 × 200 μm2 single-crystal silicon micromirrors were successfully designed, fabricated and tested. Arrays are composed of 2048 micromirrors (32 × 64) with a peak-to-valley deformation of less than 10 nm and a tilt angle of 24° for an actuation voltage of 130 V. The micromirrors were actuated successfully before, during and after cryogenic cooling, down to 162 K. The micromirror surface deformation was measured at cryogenic temperature and is below 30 nm peak-to-valley. These results demonstrate the ability of such a MOEMS device to work as an object selector in future generations of MOS instruments in both ground-based and space telescopes. In order to fill large focal planes (mosaicking of several chips), we are currently developing large micromirror arrays integrated with their electronics.

  12. The Advanced Photovoltaic Solar Array (APSA) technology status and performance

    NASA Technical Reports Server (NTRS)

    Stella, Paul M.; Kurland, Richard M.

    1991-01-01

    In 1985, the Jet Propulsion Laboratory initiated the Advanced Photovoltaic Solar Array (APSA) program. The program objective is to demonstrate a producible array system by the early 1990s with a specific performance of at least 130 W/kg (beginning-of-life) as an intermediate milestone towards the long-range goal of 300 W/kg. The APSA performance represents an approximately four-fold improvement over existing rigid array technology and a doubling of the performance of the first-generation NASA/OAST SAFE flexible blanket array of the early 1980s.

  13. The relationship between affect and constructivism as viewed by middle school science teachers

    NASA Astrophysics Data System (ADS)

    Black, Denise L.

    The purpose of this research was to examine middle school science teachers' perceptions of their students' affective behaviors at each level of the affective domain (receiving, responding, valuing, organization, characterization of value system), perceptions of the usefulness of constructivism as a curricular theory, and constructivist teaching strategies. This study investigated the relationship between affect and constructivism to determine whether constructivist strategies can predict levels of affective behavior. Affect is a broad construct that includes interests, attitudes, values, emotions, and feelings. The importance of this research relates to enhancing learning, increasing achievement, supporting participatory democracy, and facilitating understanding of science, as well as promoting the development of higher-order thinking skills. A nonexperimental, descriptive research design was used to determine the relationship between affect and constructivism. A total of 111 middle school teachers participated in this study. Three instruments were used: the Taxonomy of Affective Behavior (TAB), the Survey of Science Instruction (SSI), and a short demographic survey. One-sample t-tests provided evidence that teachers were aware that the affective domain was a viable construct and that they perceived constructivism as useful for teaching science to middle school students. Pearson product-moment correlation results indicated statistically significant relationships between perceptions of constructivism and associated constructivist teaching strategies. Stepwise multiple linear regression analysis revealed a relationship between affect and constructivism. Teacher responses indicated that they felt constrained from implementing constructivism due to an emphasis on testing.
Colleges of education, curriculum specialists, science teachers, and school districts may benefit from this research. Colleges of education could offer a course on developing objectives in the affective domain. Science curriculum specialists could use constructivist approaches as a rationale for curriculum development, as well as use the TAB to write and evaluate affective objectives. This strategy could assist curriculum leaders in writing goals and objectives that would meet the criteria of No Child Left Behind. Teachers could be shown how to implement affect and constructivism on in-service days. Finally, school districts could use the TAB to provide a value-added component to science instruction.

  14. Subjective global assessment of nutritional status in children.

    PubMed

    Mahdavi, Aida Malek; Ostadrahimi, Alireza; Safaiyan, Abdolrasool

    2010-10-01

    This study aimed to compare subjective and objective nutritional assessments and to analyse the performance of subjective global assessment (SGA) of nutritional status in diagnosing undernutrition in paediatric patients. One hundred and forty children (aged 2-12 years) hospitalized consecutively in Tabriz Paediatric Hospital from June 2008 to August 2008 underwent subjective assessment using the SGA questionnaire and objective assessment, including anthropometric and biochemical measurements. Agreement between the two assessment methods was analysed by the kappa (κ) statistic. Statistical indicators (sensitivity, specificity, predictive values, error rates, accuracy, powers, likelihood ratios and odds ratio) comparing SGA with the objective assessment method were determined. The overall prevalence of undernutrition according to the SGA (70.7%) was higher than that by objective assessment of nutritional status (48.5%). Agreement between the two evaluation methods was only fair to moderate (κ = 0.336, P < 0.001). The sensitivity, specificity, positive and negative predictive values of the SGA method for screening undernutrition in this population were 88.235%, 45.833%, 60.606% and 80.487%, respectively. Accuracy, positive and negative power of the SGA method were 66.428%, 56.074% and 41.25%, respectively. Likelihood ratio positive, likelihood ratio negative and odds ratio of the SGA method were 1.628, 0.256 and 6.359, respectively. Our findings indicated that in assessing the nutritional status of children, there is not a good level of agreement between SGA and objective nutritional assessment. In addition, SGA is a highly sensitive tool for assessing nutritional status and can identify children at risk of developing undernutrition. © 2009 Blackwell Publishing Ltd.
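    The reported percentages can be reproduced from a 2x2 table; the counts below (TP = 60, FP = 39, FN = 8, TN = 33) are inferred from the stated prevalences (n = 140, 48.5% undernourished by objective assessment, 70.7% flagged by SGA), not taken directly from the paper:

    ```python
    def screening_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV, and NPV from a 2x2 table,
        expressed as percentages."""
        return {
            "sensitivity": 100 * tp / (tp + fn),
            "specificity": 100 * tn / (tn + fp),
            "ppv": 100 * tp / (tp + fp),
            "npv": 100 * tn / (tn + fn),
        }

    # SGA-positive vs objective-assessment-positive counts (inferred)
    m = screening_metrics(tp=60, fp=39, fn=8, tn=33)
    print({k: round(v, 3) for k, v in m.items()})
    ```

    These counts reproduce the reported sensitivity (88.235%), specificity (45.833%), PPV (60.606%) and NPV (about 80.49%) to within rounding, which supports the inferred table.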

  15. Cytokines as a predictor of clinical response following hip arthroscopy: minimum 2-year follow-up.

    PubMed

    Shapiro, Lauren M; Safran, Marc R; Maloney, William J; Goodman, Stuart B; Huddleston, James I; Bellino, Michael J; Scuderi, Gaetano J; Abrams, Geoffrey D

    2016-08-01

    Hip arthroscopy in patients with osteoarthritis has been shown to have suboptimal outcomes. Elevated cytokine concentrations in hip synovial fluid have previously been shown to be associated with cartilage pathology. The purpose of this study was to determine whether a relationship exists between hip synovial fluid cytokine concentration and clinical outcomes at a minimum of 2 years following hip arthroscopy. Seventeen patients without radiographic evidence of osteoarthritis had synovial fluid aspirated at the time of portal establishment during hip arthroscopy. Analytes included fibronectin-aggrecan complex as well as a multiplex cytokine array. Patients completed the modified Harris Hip Score, Western Ontario and McMaster Universities Arthritis Index and the International Hip Outcomes Tool pre-operatively and at a minimum of 2 years following surgery. Pre- and post-operative scores were compared with a paired t-test, and the association between cytokine values and clinical outcome scores was assessed with Pearson's correlation coefficient, with an alpha value of 0.05 set as significant. Sixteen of seventeen patients completed 2-year follow-up questionnaires (94%). There was a significant increase from pre-operative to post-operative score for each clinical outcome measure. No statistically significant correlation was seen between any of the intra-operative cytokine values and either the 2-year follow-up scores or the change from pre-operative to final follow-up outcome values. No statistically significant associations were seen between hip synovial fluid cytokine concentrations and 2-year follow-up clinical outcome assessment scores for those undergoing hip arthroscopy.

  16. [Functional assessment of patients with vertigo and dizziness in occupational medicine].

    PubMed

    Zamysłowska-Szmytke, Ewa; Szostek-Rogula, Sylwia; Śliwińska-Kowalska, Mariola

    2018-03-09

    Balance assessment relies on symptoms, clinical examination and functional assessment, and on their verification in objective tests. Our study aimed to calculate the compatibility of assessments between questionnaires, functional scales and objective vestibular and balance examinations. A group of 131 patients (including 101 women; mean age: 59±14 years) of the audiology outpatient clinic was examined. Benign paroxysmal positional vertigo, phobic vertigo and central dizziness were the most common diseases observed in the study group. Patients' symptoms were assessed using the questionnaire on Cawthorne-Cooksey exercises (CC), the Dizziness Handicap Inventory (DHI) and the Duke Anxiety-Depression Scale. The Berg Balance Scale (BBS), Dynamic Gait Index (DGI), the Tinetti test, the Timed Up and Go test (TUG) and Dynamic Visual Acuity (DVA) were used for the functional balance assessment. Objective evaluation included the videonystagmography caloric test and static posturography. The study results revealed statistically significant but moderate compatibility between the functional tests (BBS, DGI, TUG, DVA) and caloric results (Kendall's W = 0.29), and higher compatibility for posturography (W = 0.33). The agreement between questionnaires and objective tests was very low (W = 0.08-0.11). The positive predictive values of the BBS were 42% for caloric and 62% for posturography tests; of the DGI, 46% and 57%, respectively. The results of the functional tests (BBS, DGI, TUG, DVA) revealed statistically significant correlations with objective balance tests, but their low predictive values do not allow these tests to be used for vestibular damage screening. Only half of the patients with functional disturbances showed abnormal caloric or posturography tests. Qualification for work based on objective tests alone ignores the functional state of the worker, which may influence the ability to work. Med Pr 2018;69(2):179-189. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
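
    Kendall's coefficient of concordance (W), the agreement statistic quoted above, can be computed directly from a matrix of ranks. A minimal sketch, with illustrative data rather than the study's:

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance for an (m raters x n items)
    rank matrix; W ranges from 0 (no agreement) to 1 (perfect agreement)."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()  # spread of the rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# three assessments ranking the same four patients identically -> W = 1.0
print(kendalls_w(np.array([[1, 2, 3, 4]] * 3)))
```

    Partially shuffled rankings give intermediate W, which is how a value like 0.29 quantifies "moderate" compatibility between two assessment methods.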

  17. Gene ARMADA: an integrated multi-analysis platform for microarray data implemented in MATLAB.

    PubMed

    Chatziioannou, Aristotelis; Moulos, Panagiotis; Kolisis, Fragiskos N

    2009-10-27

    The microarray data analysis realm is ever growing through the development of various tools, open source and commercial. However, there is an absence of predefined rational algorithmic analysis workflows or batch standardized processing to incorporate all steps, from raw data import up to the derivation of significantly differentially expressed gene lists. This absence obfuscates the analytical procedure and obstructs the massive comparative processing of genomic microarray datasets. Moreover, the solutions provided depend heavily on the programming skills of the user, whereas GUI-embedded solutions do not provide direct support for various raw image analysis formats or a versatile and simultaneously flexible combination of signal processing methods. We describe here Gene ARMADA (Automated Robust MicroArray Data Analysis), a MATLAB-implemented platform with a Graphical User Interface. This suite integrates all steps of microarray data analysis including automated data import, noise correction and filtering, normalization, statistical selection of differentially expressed genes, clustering, classification and annotation. In its current version, Gene ARMADA fully supports two-colour cDNA and Affymetrix oligonucleotide arrays, plus custom arrays for which experimental details are given in tabular form (Excel spreadsheet, comma-separated values, or tab-delimited text formats). It also supports the analysis of already processed results through its versatile import editor. Besides being fully automated, Gene ARMADA incorporates numerous functionalities of the Statistics and Bioinformatics Toolboxes of MATLAB. In addition, it provides numerous visualization and exploration tools, plus customizable export data formats for seamless integration with other analysis tools or MATLAB for further processing. Gene ARMADA requires MATLAB 7.4 (R2007a) or higher and is also distributed as a stand-alone application with MATLAB Component Runtime. 
Gene ARMADA provides a highly adaptable, integrative, yet flexible tool which can be used for automated quality control, analysis, annotation and visualization of microarray data, constituting a starting point for further data interpretation and integration with numerous other tools.

  18. Supply chain value creation methodology under BSC approach

    NASA Astrophysics Data System (ADS)

    Golrizgashti, Seyedehfatemeh

    2014-06-01

    The objective of this paper is to propose an extended balanced scorecard approach to measuring supply chain performance, with the aim of creating more value in manufacturing and business operations. The most important metrics were selected based on experts' opinions, acquired through in-depth interviews focused on creating more value for stakeholders. Using the factor analysis method, a survey was used to categorize the selected metrics into balanced scorecard perspectives. The result identifies the intensity of correlation between perspectives and the cause-and-effect chains among them, using a statistical method based on a real case study in the home appliance manufacturing industry.

  19. Spectral X-Ray Diffraction using a 6 Megapixel Photon Counting Array Detector.

    PubMed

    Muir, Ryan D; Pogranichniy, Nicholas R; Muir, J Lewis; Sullivan, Shane Z; Battaile, Kevin P; Mulichak, Anne M; Toth, Scott J; Keefe, Lisa J; Simpson, Garth J

    2015-03-12

    Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to separate dual-energy diffraction measurements into two monochromatic images. Target applications include multi-wavelength diffraction to aid in protein structure determination and X-ray diffraction imaging.
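
    The dual-energy separation described above can be treated as a per-pixel linear unmixing problem: each counting bin records a known mixture of the two monochromatic fluxes, and inverting the calibrated response recovers the two images. A noise-free sketch, with an assumed 2×2 response matrix rather than the detector's actual calibration:

```python
import numpy as np

# Hypothetical per-pixel response matrix from calibration: counts = R @ flux.
# Rows are the two counting bins; columns are the two X-ray energies.
R = np.array([[0.9, 0.3],
              [0.1, 0.7]])

rng = np.random.default_rng(1)
flux = rng.uniform(0.0, 100.0, size=(2, 64, 64))   # "true" monochromatic images
counts = np.einsum('ij,jyx->iyx', R, flux)         # simulated bin images

# Unmix: apply R^-1 at every pixel to recover the two monochromatic images
recovered = np.einsum('ij,jyx->iyx', np.linalg.inv(R), counts)
```

    In practice the response varies from pixel to pixel and the counts are Poisson-noisy, which is why the per-pixel statistical calibration matters.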

  20. Spectral x-ray diffraction using a 6 megapixel photon counting array detector

    NASA Astrophysics Data System (ADS)

    Muir, Ryan D.; Pogranichniy, Nicholas R.; Muir, J. Lewis; Sullivan, Shane Z.; Battaile, Kevin P.; Mulichak, Anne M.; Toth, Scott J.; Keefe, Lisa J.; Simpson, Garth J.

    2015-03-01

    Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to separate dual-energy diffraction measurements into two monochromatic images. Target applications include multi-wavelength diffraction to aid in protein structure determination and X-ray diffraction imaging.

  1. Three-Dimensional Microwave Imaging for Indoor Environments

    NASA Astrophysics Data System (ADS)

    Scott, Simon

    Microwave imaging involves the use of antenna arrays, operating at microwave and millimeter-wave frequencies, for capturing images of real-world objects. Typically, one or more antennas in the array illuminate the scene with a radio-frequency (RF) signal. Part of this signal reflects back to the other antennas, which record both the amplitude and phase of the reflected signal. These reflected RF signals are then processed to form an image of the scene. This work focuses on using planar antenna arrays, operating between 17 and 26 GHz, to capture three-dimensional images of people and other objects inside a room. Such an imaging system enables applications such as indoor positioning and tracking, health monitoring and hand gesture recognition. Microwave imaging techniques based on beamforming cannot be used for indoor imaging, as most objects lie within the array near-field. Therefore, the range-migration algorithm (RMA) is used instead, as it compensates for the curvature of the reflected wavefronts, hence enabling near-field imaging. It is also based on fast Fourier transforms and is therefore computationally efficient. A number of novel RMA variants were developed to support a wider variety of antenna array configurations, as well as to generate 3-D velocity maps of objects moving around a room. The choice of antenna array configuration, microwave transceiver components and transmit power has a significant effect on both the energy consumed by the imaging system and the quality of the resulting images. A generic microwave imaging testbed was therefore built to characterize the effect of these antenna array parameters on image quality in the 20 GHz band. All variants of the RMA were compared and found to produce good quality three-dimensional images with transmit power levels as low as 1 μW. 
    With an array size of 80×80 antennas, most of the imaging algorithms were able to image objects at 0.5 m range with 12.5 mm resolution, although some were only able to achieve 20 mm resolution. Increasing the size of the antenna array further results in a proportional improvement in image resolution and image SNR, until the resolution reaches the half-wavelength limit. While microwave imaging is not a new technology, it has seen little commercial success due to the cost and power consumption of the large number of antennas and radio transceivers required to build such a system. The cost and power consumption can be reduced by using low-power and low-cost components in both the transmit and receive RF chains, even if these components have poor noise figures. Alternatively, the cost and power consumption can be reduced by decreasing the number of antennas in the array, while keeping the aperture constant. This reduction in antenna count is achieved by randomly depopulating the array, resulting in a sparse antenna array. A novel compressive sensing algorithm, coupled with the wavelet transform, is used to process the samples collected by the sparse array and form a 3-D image of the scene. This algorithm works well for antenna arrays that are up to 96% sparse, equating to a 25-fold reduction in the number of required antennas. For microwave imaging to be useful, it needs to capture images of the scene in real time. The architecture of a system capable of capturing real-time 3-D microwave images is therefore designed. The system consists of a modular antenna array, constructed by plugging RF daughtercards into a carrier board. Each daughtercard is a self-contained radio system, containing an antenna, RF transceiver, baseband signal chain, and analog-to-digital converters. A small number of daughtercards have been built, and shown to be suitable for real-time microwave imaging. By arranging these daughtercards in different ways, any antenna array pattern can be built. 
This architecture allows real-time microwave imaging systems to be rapidly prototyped, while still being able to generate images at video frame rates.

  2. Difference Image Analysis of Defocused Observations With CSTAR

    NASA Astrophysics Data System (ADS)

    Oelkers, Ryan J.; Macri, Lucas M.; Wang, Lifan; Ashley, Michael C. B.; Cui, Xiangqun; Feng, Long-Long; Gong, Xuefei; Lawrence, Jon S.; Qiang, Liu; Luong-Van, Daniel; Pennypacker, Carl R.; Yang, Huigen; Yuan, Xiangyan; York, Donald G.; Zhou, Xu; Zhu, Zhenxi

    2015-02-01

    The Chinese Small Telescope ARray carried out high-cadence time-series observations of 27 square degrees centered on the South Celestial Pole during the Antarctic winter seasons of 2008-2010. Aperture photometry of the 2008 and 2010 i-band images resulted in the discovery of over 200 variable stars. Yearly servicing left the array defocused for the 2009 winter season, during which the system also suffered from intermittent frosting and power failures. Despite these technical issues, nearly 800,000 useful images were obtained using g, r, and clear filters. We developed a combination of difference imaging and aperture photometry to compensate for the highly crowded, blended, and defocused frames. We present details of this approach, which may be useful for the analysis of time-series data from other small-aperture telescopes regardless of their image quality. Using this approach, we were able to recover 68 previously known variables and detected variability in 37 additional objects. We also have determined the observing statistics for Dome A during the 2009 winter season; we find the extinction due to clouds to be less than 0.1 and 0.4 mag for 40% and 63% of the dark time, respectively.

  3. High-Resolution WRF Forecasts of Lightning Threat

    NASA Technical Reports Server (NTRS)

    Goodman, S. J.; McCaul, E. W., Jr.; LaCasse, K.

    2007-01-01

    Tropical Rainfall Measuring Mission (TRMM) lightning and precipitation observations have confirmed the existence of a robust relationship between lightning flash rates and the amount of large precipitating ice hydrometeors in storms. This relationship is exploited, in conjunction with the capabilities of the Weather Research and Forecasting (WRF) model, to forecast the threat of lightning from convective storms using the output fields from the model forecasts. The simulated vertical flux of graupel at -15°C is used in this study as a proxy for charge separation processes and their associated lightning risk. Initial experiments using 6-h simulations are conducted for a number of case studies for which three-dimensional lightning validation data from the North Alabama Lightning Mapping Array are available. The WRF has been initialized on a 2 km grid using Eta boundary conditions, Doppler radar radial velocity and reflectivity fields, and METAR and ACARS data. An array of subjective and objective statistical metrics is employed to document the utility of the WRF forecasts. The simulation results are also compared to other more traditional means of forecasting convective storms, such as those based on inspection of the convective available potential energy field.

  4. [Quantitative Evaluation of Metal Artifacts on CT Images on the Basis of Statistics of Extremes].

    PubMed

    Kitaguchi, Shigetoshi; Imai, Kuniharu; Ueda, Suguru; Hashimoto, Naomi; Hattori, Shouta; Saika, Takahiro; Ono, Yoshifumi

    2016-05-01

    It is well known that metal artifacts have a harmful effect on the image quality of computed tomography (CT) images; however, their physical properties remain poorly understood. In this study, we investigated the relationship between metal artifacts and tube current using statistics of extremes. A commercially available CT dose index phantom, 160 mm in diameter, was prepared, and a brass rod 13 mm in diameter was placed at the centerline of the phantom. This phantom was used as a target object for evaluating metal artifacts and was scanned with an area detector CT scanner at various tube currents under a constant tube voltage of 120 kV. Sixty parallel line segments, each 100 pixels long, were placed so as to cross the metal artifacts on the CT images, and the largest difference between two adjacent CT values in each of the 60 CT value profiles along these line segments was employed as a feature variable for measuring metal artifacts; these feature variables were analyzed on the basis of extreme value theory. The CT value variation induced by metal artifacts was statistically characterized by the Gumbel distribution, one of the extreme value distributions; that is, metal artifacts have the same statistical characteristics as streak artifacts. The Gumbel evaluation method therefore makes it possible to analyze not only streak artifacts but also metal artifacts. Furthermore, the location parameter of the Gumbel distribution was shown to be inversely proportional to the square root of the tube current. This result suggests that metal artifacts have the same dose dependence as image noise.
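
    The statistics-of-extremes analysis described above amounts to fitting a Gumbel distribution to the per-profile maxima. A sketch using synthetic maxima (the location and scale values are illustrative, not from the study):

```python
import numpy as np
from scipy.stats import gumbel_r

# Synthetic feature variables standing in for the per-profile maxima
# (the largest adjacent-CT-value difference on each line segment).
rng = np.random.default_rng(0)
maxima = gumbel_r.rvs(loc=50.0, scale=8.0, size=600, random_state=rng)

# Maximum-likelihood fit of the Gumbel (extreme value type I) distribution;
# the fitted location parameter is the quantity the study found to vary
# inversely with the square root of the tube current.
loc, scale = gumbel_r.fit(maxima)
```

    Repeating the fit at each tube current and plotting the fitted location against 1/sqrt(mA) would reproduce the kind of dose-dependence analysis the abstract reports.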

  5. Design and Application of Combined 8-Channel Transmit and 10-Channel Receive Arrays and Radiofrequency Shimming for 7-T Shoulder Magnetic Resonance Imaging

    PubMed Central

    Brown, Ryan; Deniz, Cem Murat; Zhang, Bei; Chang, Gregory; Sodickson, Daniel K.; Wiggins, Graham C.

    2014-01-01

    Objective: The objective of the study was to investigate the feasibility of 7-T shoulder magnetic resonance imaging by developing transmit and receive radiofrequency (RF) coil arrays and exploring RF shim methods. Materials and Methods: A mechanically flexible 8-channel transmit array and an anatomically conformable 10-channel receive array were designed and implemented. The transmit performance of various RF shim methods was assessed through local flip angle measurements in the right and left shoulders of 6 subjects. The receive performance was assessed through signal-to-noise ratio measurements using the developed 7-T coil and a baseline commercial 3-T coil. Results: The 7-T transmit array driven with phase-coherent RF shim weights provided adequate B1+ efficiency and uniformity for turbo spin echo shoulder imaging. B1+ twisting that is characteristic of high-field loop coils necessitates distinct RF shim weights in the right and left shoulders. The 7-T receive array provided a 2-fold signal-to-noise ratio improvement over the 3-T array in the deep articular shoulder cartilage. Conclusions: Shoulder imaging at 7-T is feasible with a custom transmit/receive array, either in a single-channel transmit mode with a fixed RF shim or in a parallel transmit mode with a subject-specific RF shim. PMID:24056112

  6. Statistical inference of seabed sound-speed structure in the Gulf of Oman Basin.

    PubMed

    Sagers, Jason D; Knobles, David P

    2014-06-01

    Addressed is the statistical inference of the sound-speed depth profile of a thick soft seabed from broadband sound propagation data recorded in the Gulf of Oman Basin in 1977. The acoustic data are in the form of time series signals recorded on a sparse vertical line array and generated by explosive sources deployed along a 280 km track. The acoustic data offer a unique opportunity to study a deep-water bottom-limited thickly sedimented environment because of the large number of time series measurements, very low seabed attenuation, and auxiliary measurements. A maximum entropy method is employed to obtain a conditional posterior probability distribution (PPD) for the sound-speed ratio and the near-surface sound-speed gradient. The multiple data samples allow for a determination of the average error constraint value required to uniquely specify the PPD for each data sample. Two complicating features of the statistical inference study are addressed: (1) the need to develop an error function that can both utilize the measured multipath arrival structure and mitigate the effects of data errors and (2) the effect of small bathymetric slopes on the structure of the bottom interacting arrivals.

  7. Automated control of linear constricted plasma source array

    DOEpatents

    Anders, Andre; Maschwitz, Peter A.

    2000-01-01

    An apparatus and method for controlling an array of constricted glow discharge chambers are disclosed, more particularly a linear array of constricted glow plasma sources whose polarity and geometry are set so that the contamination and energy of the ions discharged from the sources are minimized. The several sources can be mounted in parallel and in series to provide a sustained ultra-low source of ions in a plasma with contamination below practical detection limits. The quality of film along deposition "tracks" opposite the plasma sources can be measured and compared to desired absolute or relative values by optical and/or electrical sensors. Plasma quality can then be adjusted by adjusting the power and current values, gas feed pressure/flow, gas mixtures, or a combination of some or all of these to improve the match between the measured values and the desired values.

  8. Tracking interstellar space weather toward timing-array millisecond pulsars

    NASA Astrophysics Data System (ADS)

    Bhat, N. D. R.; Ord, S. M.; Tremblay, S. E.; Shannon, R. M.; van Straten, W.; Kaplan, D. L.; Macquart, J.-P.; Kirsten, F.

    2016-07-01

    The recent LIGO detection of gravitational wave (GW) signals from a black-hole merger event has further reinforced the important role of pulsar timing array (PTA) experiments in GW astronomy. PTAs exploit the clock-like stability of fast-spinning millisecond pulsars (MSPs) to make a direct detection of ultra-low frequency (nano-Hertz) gravitational waves. The science enabled by PTAs is thus highly complementary to that possible with LIGO-like detectors. PTAs are also a key science objective for the SKA. PTA efforts over the past few years suggest that interstellar propagation effects on pulsar signals may ultimately limit the detection sensitivity of PTAs unless they are accurately measured and corrected for in timing measurements. Interstellar medium (ISM) effects are much stronger at lower radio frequencies, and therefore the MWA presents an exciting and unique opportunity to calibrate interstellar propagation delays. This will potentially lead to enhanced sensitivity and scientific impact of PTA projects. Since our first demonstration of the ability to form a coherent (tied-array) beam by reprocessing the recorded VCS data (Bhat et al. 2016), we have successfully ported the full processing chain to the Galaxy cluster at Pawsey and demonstrated the value of the high-sensitivity multi-band pulsar observations that are now possible with the MWA. Here we propose further observations of the two most promising PTA pulsars, which will be night-time objects in the 2016B period. Our main science driver is to characterise the nature of the turbulent ISM through high-quality scintillation and dispersion studies, including the investigation of chromatic (frequency-dependent) DMs. Success of these efforts will define the breadth and scope of a more ambitious program in the future, bringing a new science niche to the MWA and SKA-low.

  9. Reduced-Drift Virtual Gyro from an Array of Low-Cost Gyros.

    PubMed

    Vaccaro, Richard J; Zaki, Ahmed S

    2017-02-11

    A Kalman filter approach for combining the outputs of an array of high-drift gyros to obtain a virtual lower-drift gyro has been known in the literature for more than a decade. The success of this approach depends on the correlations of the random drift components of the individual gyros. However, no method of estimating these correlations has appeared in the literature. This paper presents an algorithm for obtaining the statistical model for an array of gyros, including the cross-correlations of the individual random drift components. In order to obtain this model, a new statistic, called the "Allan covariance" between two gyros, is introduced. The gyro array model can be used to obtain the Kalman filter-based (KFB) virtual gyro. Instead, we consider a virtual gyro obtained by taking a linear combination of individual gyro outputs. The gyro array model is used to calculate the optimal coefficients, as well as to derive a formula for the drift of the resulting virtual gyro. The drift formula for the optimal linear combination (OLC) virtual gyro is identical to that previously derived for the KFB virtual gyro. Thus, a Kalman filter is not necessary to obtain a minimum drift virtual gyro. The theoretical results of this paper are demonstrated using simulated as well as experimental data. In experimental results with a 28-gyro array, the OLC virtual gyro has a drift spectral density 40 times smaller than that obtained by taking the average of the gyro signals.
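
    The optimal linear combination (OLC) described above has a closed form: with C the drift covariance matrix of the gyro array, the minimum-variance weights subject to summing to one are w = C⁻¹1 / (1ᵀC⁻¹1). A sketch with an assumed covariance matrix (illustrative, not the 28-gyro experimental data):

```python
import numpy as np

def olc_weights(cov):
    """Minimum-variance weights w = C^-1 1 / (1^T C^-1 1) for linearly
    combining gyro outputs; the weights are constrained to sum to one."""
    ones = np.ones(cov.shape[0])
    c_inv_ones = np.linalg.solve(cov, ones)
    return c_inv_ones / c_inv_ones.sum()

# Four uncorrelated gyros with equal drift variance: weights reduce to 1/N
cov = np.eye(4)
w = olc_weights(cov)
virtual_variance = w @ cov @ w   # sigma^2 / N = 0.25
```

    Correlations between the individual drift components, estimated via the Allan covariance introduced in the paper, populate C's off-diagonal terms and shift the optimal weights away from the simple average.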

  10. IkeNet: Social Network Analysis of E-mail Traffic in the Eisenhower Leadership Development Program

    DTIC Science & Technology

    2007-11-01

    'Create the recipients TO
    TempArray = Split(strTo, ";")
    For Each varArrayItem In TempArray
        nextGuy = Chr(34) & CStr(Trim(varArrayItem)) & Chr(34)
        MsgBox "next guy = " & nextGuy
        'Set oRecipient = Recipients.Add(nextGuy)
        Set oRecipient = Recipients.Add(CStr(Trim(varArrayItem)))
        oRecipient.Type = olTo
    ...
    TempArray = Split(strAttachments, ";")
    For Each varArrayItem In TempArray
        .Attachments.Add CStr(Trim(varArrayItem))
    Next varArrayItem
    .Send
    'No return value

  11. Low latency counter event indication

    DOEpatents

    Gara, Alan G [Mount Kisco, NY; Salapura, Valentina [Chappaqua, NY

    2008-09-16

    A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.
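
    The counting scheme described in the abstract can be sketched behaviourally: a narrow fast counter holds the low-order bits, an associated memory word holds the high-order bits, and the control logic arms and then fires the interrupt. The counter width and threshold below are illustrative choices, not values from the patent:

```python
class HybridCounter:
    """Behavioural sketch of one lane of the hybrid counter array."""
    LOW_BITS = 12  # width of the fast low-order counter (illustrative)

    def __init__(self, interrupt_threshold):
        self.low = 0             # fast counter register (lower-order bits)
        self.high = 0            # memory-array word (higher-order bits)
        self.overflow = False    # overflow bit monitored by the control device
        self.threshold = interrupt_threshold  # interrupt threshold register
        self.armed = False       # "interrupt arm" bit
        self.interrupt = False

    def count(self):
        """Record one event occurrence."""
        self.low += 1
        if self.low == (1 << self.LOW_BITS):
            self.low = 0
            self.overflow = True
            if self.armed:
                # the arm bit was set when the high word reached the threshold;
                # the interrupt fires on the subsequent roll-over of the low bits
                self.interrupt = True
        self._service()

    def _service(self):
        # control device: fold a pending overflow into the memory word and
        # compare the updated high count against the threshold register
        if self.overflow:
            self.overflow = False
            self.high += 1
            if self.high == self.threshold:
                self.armed = True

    @property
    def value(self):
        """Full count: higher-order bits from memory, lower-order from the counter."""
        return (self.high << self.LOW_BITS) | self.low
```

    With a threshold of 2, the arm bit is set at the second roll-over of the low bits and the interrupt fires at the third, matching the arm-then-fire sequence the patent describes.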

  12. Low latency counter event indication

    DOEpatents

    Gara, Alan G.; Salapura, Valentina

    2010-08-24

    A hybrid counter array device for counting events with interrupt indication includes a first counter portion comprising N counter devices, each for counting signals representing event occurrences and providing a first count value representing lower order bits. An overflow bit device associated with each respective counter device is additionally set in response to an overflow condition. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits. An operatively coupled control device monitors each associated overflow bit device and initiates incrementing a second count value stored at a corresponding memory location in response to a respective overflow bit being set. The incremented second count value is compared to an interrupt threshold value stored in a threshold register, and, when the second counter value is equal to the interrupt threshold value, a corresponding "interrupt arm" bit is set to enable a fast interrupt indication. On a subsequent roll-over of the lower bits of that counter, the interrupt will be fired.

  13. Use of Statistics from National Data Sources to Inform Rehabilitation Program Planning, Evaluation, and Advocacy

    ERIC Educational Resources Information Center

    Bruyere, Susanne M.; Houtenville, Andrew J.

    2006-01-01

    Data on people with disabilities can be used to confirm service needs and to evaluate the resulting impact of services. Disability statistics from surveys and administrative records can play a meaningful role in such efforts. In this article, the authors describe the array of available data and statistics and their potential uses in rehabilitation…

  14. IoT Big-Data Centred Knowledge Granule Analytic and Cluster Framework for BI Applications: A Case Base Analysis.

    PubMed

    Chang, Hsien-Tsung; Mishra, Nilamadhab; Lin, Chung-Chih

    2015-01-01

    The current rapid growth of Internet of Things (IoT) in various commercial and non-commercial sectors has led to the deposition of large-scale IoT data, of which the time-critical analytic and clustering of knowledge granules represent highly thought-provoking application possibilities. The objective of the present work is to inspect the structural analysis and clustering of complex knowledge granules in an IoT big-data environment. In this work, we propose a knowledge granule analytic and clustering (KGAC) framework that explores and assembles knowledge granules from IoT big-data arrays for a business intelligence (BI) application. Our work implements neuro-fuzzy analytic architecture rather than a standard fuzzified approach to discover the complex knowledge granules. Furthermore, we implement an enhanced knowledge granule clustering (e-KGC) mechanism that is more elastic than previous techniques when assembling the tactical and explicit complex knowledge granules from IoT big-data arrays. The analysis and discussion presented here show that the proposed framework and mechanism can be implemented to extract knowledge granules from an IoT big-data array in such a way as to present knowledge of strategic value to executives and enable knowledge users to perform further BI actions.

  15. IoT Big-Data Centred Knowledge Granule Analytic and Cluster Framework for BI Applications: A Case Base Analysis

    PubMed Central

    Chang, Hsien-Tsung; Mishra, Nilamadhab; Lin, Chung-Chih

    2015-01-01

    The current rapid growth of Internet of Things (IoT) in various commercial and non-commercial sectors has led to the deposition of large-scale IoT data, of which the time-critical analytic and clustering of knowledge granules represent highly thought-provoking application possibilities. The objective of the present work is to inspect the structural analysis and clustering of complex knowledge granules in an IoT big-data environment. In this work, we propose a knowledge granule analytic and clustering (KGAC) framework that explores and assembles knowledge granules from IoT big-data arrays for a business intelligence (BI) application. Our work implements neuro-fuzzy analytic architecture rather than a standard fuzzified approach to discover the complex knowledge granules. Furthermore, we implement an enhanced knowledge granule clustering (e-KGC) mechanism that is more elastic than previous techniques when assembling the tactical and explicit complex knowledge granules from IoT big-data arrays. The analysis and discussion presented here show that the proposed framework and mechanism can be implemented to extract knowledge granules from an IoT big-data array in such a way as to present knowledge of strategic value to executives and enable knowledge users to perform further BI actions. PMID:26600156

  16. Analyzing Responses of Chemical Sensor Arrays

    NASA Technical Reports Server (NTRS)

    Zhou, Hanying

    2007-01-01

NASA is developing a third-generation electronic nose (ENose) capable of continuous monitoring of the International Space Station's cabin atmosphere for specific, harmful airborne contaminants. Previous generations of the ENose have been described in prior NASA Tech Briefs issues. Sensor selection is critical in both (pre-fabrication) sensor material selection and (post-fabrication) data analysis for the ENose, which detects several analytes that are difficult to detect or that occur at very low concentrations. Existing sensor-selection approaches usually rely on a limited set of statistical measures, emphasizing selectivity while treating reliability and sensitivity as secondary concerns. When reliability and sensitivity are major limiting factors in detecting target compounds, such approaches cannot provide a meaningful selection that actually improves data-analysis results. The approach and software reported here consider more statistical measures (factors) than existing approaches for a similar purpose. The result is a more balanced and robust sensor selection from a less-than-ideal sensor array. The software offers quick, flexible, optimal sensor selection and weighting for a variety of purposes without a time-consuming, iterative search, by performing sensor calibrations to a known linear or nonlinear model, evaluating each sensor's statistics, scoring each sensor's overall performance, finding the best sensor array size to maximize class separation, finding optimal weights for the remaining sensor array, estimating limits of detection for the target compounds, evaluating fingerprint distance between group pairs, and finding the best event-detecting sensors.
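The score-then-select idea can be sketched in a few lines. Everything below is a hypothetical illustration, not the flight software's actual algorithm: the simulated sensor responses, the Fisher-style separation score, and the greedy search for the best array size are all assumptions.

```python
import random

random.seed(0)

# Hypothetical sensor responses: N_SENSORS sensors, two analyte classes,
# several repeat measurements each (all values are invented for illustration).
N_SENSORS, ANALYTES, REPEATS = 6, ("NH3", "SO2"), 5
responses = {
    s: {a: [random.gauss((s % 3 + 1) * (1.0 + 0.6 * i), 0.4 + 0.1 * s)
            for _ in range(REPEATS)]
        for i, a in enumerate(ANALYTES)}
    for s in range(N_SENSORS)
}

def mean(xs):
    return sum(xs) / len(xs)

def separation(sensors):
    """Average Fisher-style class separation over a sensor subset: squared
    distance between analyte means divided by the pooled variance."""
    total = 0.0
    for s in sensors:
        a, b = (responses[s][an] for an in ANALYTES)
        pooled = (sum((x - mean(a)) ** 2 for x in a) +
                  sum((x - mean(b)) ** 2 for x in b)) / (2 * REPEATS - 2)
        total += (mean(a) - mean(b)) ** 2 / (pooled + 1e-12)
    return total / len(sensors)

# Greedy forward selection: grow the array while the average per-sensor
# separation keeps improving, which also settles the array size.
chosen, remaining = [], list(range(N_SENSORS))
while remaining:
    best = max(remaining, key=lambda s: separation(chosen + [s]))
    if chosen and separation(chosen + [best]) <= separation(chosen):
        break
    chosen.append(best)
    remaining.remove(best)

print("selected sensors:", chosen)
```

Normalizing the score by subset size is what lets the greedy loop stop: adding an unreliable sensor lowers the average separation, so the array keeps only sensors that pull their weight.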

  17. Low-cost solar array project progress and plans

    NASA Technical Reports Server (NTRS)

    Callaghan, W. T.

    1981-01-01

The project under consideration is part of the DOE Photovoltaic Technology and Market Development Program, which is concerned with the development and utilization of cost-competitive photovoltaic systems. The project's objective is to develop, by 1986, a national capability to manufacture low-cost, long-life photovoltaic arrays at production rates that realize economies of scale, and at a price of less than $0.70/watt. The array performance objectives include an efficiency greater than 10% and an operating lifetime longer than 20 years. The objective of the silicon material task is to establish the practicality of processes for producing silicon suitable for terrestrial photovoltaic applications at a price of $14/kg. The large-area sheet task is concerned with the development of process technology for sheet formation. Low-cost encapsulation material systems are being developed under the encapsulation task. Another project goal is the development of economical process sequences.

  18. Continuous diffraction of molecules and disordered molecular crystals

    PubMed Central

    Yefanov, Oleksandr M.; Ayyer, Kartik; White, Thomas A.; Barty, Anton; Morgan, Andrew; Mariani, Valerio; Oberthuer, Dominik; Pande, Kanupriya

    2017-01-01

    The intensities of far-field diffraction patterns of orientationally aligned molecules obey Wilson statistics, whether those molecules are in isolation (giving rise to a continuous diffraction pattern) or arranged in a crystal (giving rise to Bragg peaks). Ensembles of molecules in several orientations, but uncorrelated in position, give rise to the incoherent sum of the diffraction from those objects, modifying the statistics in a similar way as crystal twinning modifies the distribution of Bragg intensities. This situation arises in the continuous diffraction of laser-aligned molecules or translationally disordered molecular crystals. This paper develops the analysis of the intensity statistics of such continuous diffraction to obtain parameters such as scaling, beam coherence and the number of contributing independent object orientations. When measured, continuous molecular diffraction is generally weak and accompanied by a background that far exceeds the strength of the signal. Instead of just relying upon the smallest measured intensities or their mean value to guide the subtraction of the background, it is shown how all measured values can be utilized to estimate the background, noise and signal, by employing a modified ‘noisy Wilson’ distribution that explicitly includes the background. Parameters relating to the background and signal quantities can be estimated from the moments of the measured intensities. The analysis method is demonstrated on previously published continuous diffraction data measured from crystals of photosystem II [Ayyer et al. (2016 ▸), Nature, 530, 202–206]. PMID:28808434
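The moment-based estimation idea can be illustrated with a toy stand-in for the paper's noisy-Wilson analysis: model each measured intensity as an exponential "Wilson" signal plus a Gaussian background and recover the parameters from the first three moments. The parameter values and the simplified model below are assumptions, not the paper's actual fit.

```python
import math
import random

random.seed(1)

# True parameters of the toy model (assumptions, for illustration only).
BETA, MU_B, SIGMA_B = 50.0, 200.0, 10.0
N = 200_000
intensities = [random.expovariate(1.0 / BETA) + random.gauss(MU_B, SIGMA_B)
               for _ in range(N)]

def central_moment(xs, k):
    m = sum(xs) / len(xs)
    return sum((x - m) ** k for x in xs) / len(xs)

# Method of moments for I = B + S with B ~ N(mu_b, sigma_b), S ~ Exp(beta):
#   E[I] = mu_b + beta,  Var[I] = sigma_b^2 + beta^2,  mu3[I] = 2 * beta^3
# (the Gaussian background contributes no third central moment).
mean_i = sum(intensities) / len(intensities)
beta_hat = (central_moment(intensities, 3) / 2.0) ** (1.0 / 3.0)
sigma_hat = math.sqrt(max(central_moment(intensities, 2) - beta_hat ** 2, 0.0))
mu_hat = mean_i - beta_hat

print(f"signal mean ~ {beta_hat:.1f}, background mean ~ {mu_hat:.1f}, "
      f"background sd ~ {sigma_hat:.1f}")
```

The skewness does the separating here: because the symmetric background drops out of the third central moment, the signal strength can be read off before the background mean and width are recovered from the lower moments.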

  19. Critical Values for Yen’s Q3: Identification of Local Dependence in the Rasch Model Using Residual Correlations

    PubMed Central

    Christensen, Karl Bang; Makransky, Guido; Horton, Mike

    2016-01-01

The assumption of local independence is central to all item response theory (IRT) models. Violations can lead to inflated estimates of reliability and problems with construct validity. For the most widely used fit statistic Q3, there are currently no well-documented suggestions of the critical values which should be used to indicate local dependence (LD), and for this reason, a variety of arbitrary rules of thumb are used. In this study, an empirical data example and Monte Carlo simulation were used to investigate the different factors that can influence the null distribution of residual correlations, with the objective of proposing guidelines that researchers and practitioners can follow when making decisions about LD during scale development and validation. We propose that a parametric bootstrapping procedure be implemented in each separate situation to obtain the critical value of LD applicable to the data set, and we provide example critical values for a number of data-structure situations. The results show that for the Q3 fit statistic, no single critical value is appropriate for all situations, as the percentiles in the empirical null distribution are influenced by the number of items, the sample size, and the number of response categories. Furthermore, the results show that LD should be considered relative to the average observed residual correlation, rather than to a uniform value, as this results in more stable percentiles for the null distribution of an adjusted fit statistic. PMID:29881087
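A minimal sketch of such a parametric bootstrap, assuming (for brevity) a dichotomous Rasch model with known item difficulties and normally distributed person abilities; the item difficulties, sample size, and replicate count are invented, and a real application would refit the model to each simulated data set before computing residuals.

```python
import math
import random
import statistics

random.seed(2)

# Hypothetical scale: 5 items, 200 persons, 200 bootstrap replicates.
DIFFICULTIES = [-1.0, -0.5, 0.0, 0.5, 1.0]
N_PERSONS, N_BOOT = 200, 200

def simulate_residuals():
    """Simulate Rasch responses; return one residual vector per item."""
    resid = [[] for _ in DIFFICULTIES]
    for _ in range(N_PERSONS):
        theta = random.gauss(0.0, 1.0)
        for j, b in enumerate(DIFFICULTIES):
            p = 1.0 / (1.0 + math.exp(b - theta))
            resid[j].append((1 if random.random() < p else 0) - p)
    return resid

def pearson(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (c - my) for a, c in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((c - my) ** 2 for c in y))
    return num / den

def max_q3(resid):
    """Largest pairwise residual correlation (Yen's Q3) in one data set."""
    n = len(resid)
    return max(pearson(resid[i], resid[j])
               for i in range(n) for j in range(i + 1, n))

# The 95th percentile of max-Q3 under the fitted model serves as the
# data-set-specific critical value for flagging local dependence.
boot = sorted(max_q3(simulate_residuals()) for _ in range(N_BOOT))
critical = boot[int(0.95 * N_BOOT)]
print(f"bootstrap critical value for max Q3: {critical:.3f}")
```

Because the null distribution depends on the number of items, persons, and response categories, the whole simulation is repeated per data set rather than reusing one tabled cutoff.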

  20. Teacher Evaluation: Alternate Measures of Student Growth. Q&A with Brian Gill. REL Mid-Atlantic Webinar

    ERIC Educational Resources Information Center

    Regional Educational Laboratory Mid-Atlantic, 2013

    2013-01-01

    This webinar described the findings of our literature review on alternative measures of student growth that are used in teacher evaluation. The review focused on two types of alternative growth measures: statistical growth/value-added models and teacher-developed student learning objectives. This Q&A addressed the questions participants had…

  1. Maternal-Child Health Data from the NLSY: 1988 Tabulations and Summary Discussion.

    ERIC Educational Resources Information Center

    Mott, Frank L.; Quinlan, Stephen V.

    This report uses data from the 1983 through 1988 rounds of the National Longitudinal Survey of Youth (NLSY) to provide information about prenatal, infant, and child health. Objectives of the report are to present statistics which should be of value to maternal and child health policymakers, and to provide NLSY users with baseline information about…

  2. The Photovoltaic Array Space Power plus Diagnostics (PASP Plus) Flight Experiment

    NASA Technical Reports Server (NTRS)

    Piszczor, Michael F.; Curtis, Henry B.; Guidice, Donald A.; Severance, Paul S.

    1992-01-01

    An overview of the Photovoltaic Array Space Power Plus Diagnostics (PASP Plus) flight experiment is presented in outline and graphic form. The goal of the experiment is to test a variety of photovoltaic cell and array technologies under various space environmental conditions. Experiment objectives, flight hardware, experiment control and diagnostic instrumentation, and illuminated thermal vacuum testing are addressed.

  3. 77 FR 17456 - Buy American Exception Under the American Recovery and Reinvestment Act of 2009

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-26

    ...,000.00 to Adon Construction for the construction of a 120kw photovoltaic solar array system to be built in eight 15kw sub-arrays at NIST's WWVH radio station in Kauai, HI. The objective of the solar... Recovery Act), for inverters necessary for the construction of a solar array system at NIST's WWVH radio...

  4. Use of Geometric Properties of Landmark Arrays for Reorientation Relative to Remote Cities and Local Objects

    ERIC Educational Resources Information Center

    Mou, Weimin; Nankoo, Jean-François; Zhou, Ruojing; Spetch, Marcia L.

    2014-01-01

Five experiments investigated how human adults use landmark arrays in the immediate environment to reorient relative to the local environment and relative to remote cities. Participants learned targets' directions in the presence of a proximal array of 4 poles forming a rectangle and a more distal array of poles, also forming a rectangle. Then…

  5. CRSP, numerical results for an electrical resistivity array to detect underground cavities

    NASA Astrophysics Data System (ADS)

    Amini, Amin; Ramazi, Hamidreza

    2017-03-01

This paper is devoted to the application of the Combined Resistivity Sounding and Profiling (CRSP) electrode configuration to detect underground cavities. Electrical resistivity surveying is among the most widely used geophysical methods across the geosciences owing to its nondestructive and economical nature. Several types of electrode arrays are applied to detect different objectives. On the one hand, the electrode array plays an important role in determining the output resolution and depth of investigation in all resistivity surveys. On the other hand, each array has its own merits and demerits in terms of depth of investigation, signal strength, and sensitivity to resistivity variations. In this article, several synthetic models simulating different conditions of cavity occurrence were used to examine the responses of some conventional electrode arrays and also the CRSP array. The results showed that the CRSP electrode configuration can detect the desired objectives with higher resolution than some other types of arrays. A field case study is also discussed, in which the electrical resistivity approach was applied at the Abshenasan expressway (Tehran, Iran) U-turn bridge site to detect potential cavities and/or loose filling materials. The results led to the detection of an aqueduct tunnel passing beneath the study area.

  6. Synthetic aperture ultrasound imaging with a ring transducer array: preliminary ex vivo results.

    PubMed

    Qu, Xiaolei; Azuma, Takashi; Yogi, Takeshi; Azuma, Shiho; Takeuchi, Hideki; Tamano, Satoshi; Takagi, Shu

    2016-10-01

Conventional medical ultrasound imaging has low lateral spatial resolution, and image quality depends on the depth of the imaging location. To overcome these problems, this study presents a synthetic aperture (SA) ultrasound imaging method using a ring transducer array. An experimental ring-transducer-array imaging system was constructed. The array was composed of 2048 transducer elements and had a diameter of 200 mm and an inter-element pitch of 0.325 mm. The imaging object was placed in the center of the ring transducer array, which was immersed in water. SA ultrasound imaging was then employed to scan the object and reconstruct the reflection image. Both wire-phantom and ex vivo experiments were conducted. The proposed method was found to be capable of producing isotropic high-resolution images of the wire phantom. In addition, preliminary ex vivo experiments using porcine organs demonstrated the ability of the method to reconstruct high-quality images without any depth dependence. The proposed ring transducer array and SA ultrasound imaging method were shown to be capable of producing isotropic high-resolution images whose quality was independent of depth.

  7. Detection of foreign body using fast thermoacoustic tomography with a multielement linear transducer array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie Liming; Xing Da; Yang Diwu

    2007-04-23

Current imaging modalities face challenges in clinical applications due to limitations in resolution or contrast. Microwave-induced thermoacoustic imaging may provide a complementary modality for medical imaging, particularly for detecting foreign objects due to their different absorption of electromagnetic radiation at specific frequencies. A thermoacoustic tomography system with a multielement linear transducer array was developed and used to detect foreign objects in tissue. Radiography and thermoacoustic images of objects with different electromagnetic properties, including glass, sand, and iron, were compared. The authors' results demonstrate that thermoacoustic imaging has the potential to become a fast method for surgical localization of occult foreign objects.

  8. Tomographic and analog 3-D simulations using NORA. [Non-Overlapping Redundant Image Array formed by multiple pinholes

    NASA Technical Reports Server (NTRS)

    Yin, L. I.; Trombka, J. I.; Bielefeld, M. J.; Seltzer, S. M.

    1984-01-01

    The results of two computer simulations demonstrate the feasibility of using the nonoverlapping redundant array (NORA) to form three-dimensional images of objects with X-rays. Pinholes admit the X-rays to nonoverlapping points on a detector. The object is reconstructed in the analog mode by optical correlation and in the digital mode by tomographic computations. Trials were run with a stick-figure pyramid and extended objects with out-of-focus backgrounds. Substitution of spherical optical lenses for the pinholes increased the light transmission sufficiently that objects could be easily viewed in a dark room. Out-of-focus aberrations in tomographic reconstruction could be eliminated using Chang's (1976) algorithm.

  9. Single-Grasp Object Classification and Feature Extraction with Simple Robot Hands and Tactile Sensors.

    PubMed

    Spiers, Adam J; Liarokapis, Minas V; Calli, Berk; Dollar, Aaron M

    2016-01-01

Classical robotic approaches to tactile object identification often involve rigid mechanical grippers, dense sensor arrays, and exploratory procedures (EPs). Though EPs are a natural method for humans to acquire object information, evidence also exists for meaningful tactile property inference from brief, non-exploratory motions (a 'haptic glance'). In this work, we implement tactile object identification and feature extraction techniques on data acquired during a single, unplanned grasp with a simple, underactuated robot hand equipped with inexpensive barometric pressure sensors. Our methodology utilizes two cooperating schemes based on an advanced machine learning technique (random forests) and parametric methods that estimate object properties. The available data are limited to actuator positions (one per two-link finger) and force sensor values (eight per finger). The schemes are able to work both independently and collaboratively, depending on the task scenario. When collaborating, the results of each method contribute to the other, improving the overall result in a synergistic fashion. Unlike prior work, the proposed approach does not require object exploration, re-grasping, grasp-release, or force modulation, and works for arbitrary object start positions and orientations. Due to these factors, the technique may be integrated into practical robotic grasping scenarios without adding time or manipulation overheads.
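As a rough illustration of this kind of classification scheme, here is a toy bagged ensemble of single-split trees in the spirit of a random forest, applied to simulated single-grasp data (two actuator positions plus eight pressure values per grasp). Every feature distribution and the two object classes are invented; the actual system trains a full random-forest implementation on measured sensor data.

```python
import random
import statistics

random.seed(3)

N_FEATURES = 10  # 2 actuator positions + 8 pressure sensor values

def sample_grasp(hard):
    """One simulated grasp; all distribution parameters are invented."""
    stiffness = 1.0 if hard else 0.4
    positions = [random.gauss(1.0 - 0.5 * stiffness, 0.1) for _ in range(2)]
    pressures = [random.gauss(2.0 * stiffness, 0.3) for _ in range(8)]
    return positions + pressures

train = [(sample_grasp(h), h) for h in (True, False) for _ in range(100)]

def majority(labels, default):
    return sum(labels) * 2 >= len(labels) if labels else default

def fit_stump(data):
    """One 'tree': a random feature with a mean-value threshold split."""
    f = random.randrange(N_FEATURES)
    thr = statistics.fmean(x[f] for x, _ in data)
    above = [y for x, y in data if x[f] > thr]
    below = [y for x, y in data if x[f] <= thr]
    return f, thr, majority(above, True), majority(below, False)

def fit_forest(data, n_trees=25):
    """Bagging: each stump sees a bootstrap resample of the grasps."""
    return [fit_stump([random.choice(data) for _ in data])
            for _ in range(n_trees)]

def predict(trees, x):
    votes = sum(va if x[f] > thr else vb for f, thr, va, vb in trees)
    return votes > len(trees) / 2

trees = fit_forest(train)
test = [(sample_grasp(h), h) for h in (True, False) for _ in range(50)]
accuracy = statistics.fmean(predict(trees, x) == y for x, y in test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Even this crude ensemble separates "hard" from "soft" grasps because both the finger positions and the pressure readings shift with object stiffness, which mirrors why a single unplanned grasp can carry enough signal for classification.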

  10. Microscopy imaging device with advanced imaging properties

    DOEpatents

    Ghosh, Kunal; Burns, Laurie; El Gamal, Abbas; Schnitzer, Mark J.; Cocker, Eric; Ho, Tatt Wei

    2015-11-24

Systems, methods and devices are implemented for microscope imaging solutions. One embodiment of the present disclosure is directed toward an epifluorescence microscope. The microscope includes an image capture circuit including an array of optical sensors. An optical arrangement is configured to direct excitation light of less than about 1 mW to a target object in a field of view that is at least 0.5 mm^2 and to direct epi-fluorescence emission caused by the excitation light to the array of optical sensors. The optical arrangement and array of optical sensors are each sufficiently close to the target object to provide at least 2.5 µm resolution for an image of the field of view.

  11. Microscopy imaging device with advanced imaging properties

    DOEpatents

    Ghosh, Kunal; Burns, Laurie; El Gamal, Abbas; Schnitzer, Mark J.; Cocker, Eric; Ho, Tatt Wei

    2016-10-25

Systems, methods and devices are implemented for microscope imaging solutions. One embodiment of the present disclosure is directed toward an epifluorescence microscope. The microscope includes an image capture circuit including an array of optical sensors. An optical arrangement is configured to direct excitation light of less than about 1 mW to a target object in a field of view that is at least 0.5 mm^2 and to direct epi-fluorescence emission caused by the excitation light to the array of optical sensors. The optical arrangement and array of optical sensors are each sufficiently close to the target object to provide at least 2.5 µm resolution for an image of the field of view.

  12. Microscopy imaging device with advanced imaging properties

    DOEpatents

    Ghosh, Kunal; Burns, Laurie; El Gamal, Abbas; Schnitzer, Mark J.; Cocker, Eric; Ho, Tatt Wei

    2016-11-22

Systems, methods and devices are implemented for microscope imaging solutions. One embodiment of the present disclosure is directed toward an epifluorescence microscope. The microscope includes an image capture circuit including an array of optical sensors. An optical arrangement is configured to direct excitation light of less than about 1 mW to a target object in a field of view that is at least 0.5 mm^2 and to direct epi-fluorescence emission caused by the excitation light to the array of optical sensors. The optical arrangement and array of optical sensors are each sufficiently close to the target object to provide at least 2.5 µm resolution for an image of the field of view.

  13. Microscopy imaging device with advanced imaging properties

    DOEpatents

    Ghosh, Kunal; Burns, Laurie; El Gamal, Abbas; Schnitzer, Mark J.; Cocker, Eric; Ho, Tatt Wei

    2017-04-25

Systems, methods and devices are implemented for microscope imaging solutions. One embodiment of the present disclosure is directed toward an epifluorescence microscope. The microscope includes an image capture circuit including an array of optical sensors. An optical arrangement is configured to direct excitation light of less than about 1 mW to a target object in a field of view that is at least 0.5 mm^2 and to direct epi-fluorescence emission caused by the excitation light to the array of optical sensors. The optical arrangement and array of optical sensors are each sufficiently close to the target object to provide at least 2.5 µm resolution for an image of the field of view.

  14. Generalized energy detector for weak random signals via vibrational resonance

    NASA Astrophysics Data System (ADS)

    Ren, Yuhao; Pan, Yan; Duan, Fabing

    2018-03-01

    In this paper, the generalized energy (GE) detector is investigated for detecting weak random signals via vibrational resonance (VR). By artificially injecting the high-frequency sinusoidal interferences into an array of GE statistics formed for the detector, we show that the normalized asymptotic efficacy can be maximized when the interference intensity takes an appropriate non-zero value. It is demonstrated that the normalized asymptotic efficacy of the dead-zone-limiter detector, aided by the VR mechanism, outperforms that of the GE detector without the help of high-frequency interferences. Moreover, the maximum normalized asymptotic efficacy of dead-zone-limiter detectors can approach a quarter of the second-order Fisher information for a wide range of non-Gaussian noise types.

  15. The impact of short-term stochastic variability in solar irradiance on optimal microgrid design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schittekatte, Tim; Stadler, Michael; Cardoso, Gonçalo

    2016-07-01

This paper proposes a new methodology to capture the impact of fast-moving clouds on utility power demand charges observed in microgrids with photovoltaic (PV) arrays, generators, and electrochemical energy storage. It consists of a statistical approach to introduce sub-hourly events into the hourly economic accounting process. The methodology is implemented in the Distributed Energy Resources Customer Adoption Model (DER-CAM), a state-of-the-art mixed integer linear model used to optimally size DER in decentralized energy systems. Results suggest that previous iterations of DER-CAM could undersize battery capacities. The improved model depicts more accurately the economic value of PV as well as the synergistic benefits of pairing PV with storage.

  16. PSHFT - COMPUTERIZED LIFE AND RELIABILITY MODELLING FOR TURBOPROP TRANSMISSIONS

    NASA Technical Reports Server (NTRS)

    Savage, M.

    1994-01-01

The computer program PSHFT calculates the life of a variety of aircraft transmissions. A generalized life and reliability model is presented for turboprop and parallel-shaft geared prop-fan aircraft transmissions. The transmission life and reliability model is a combination of the individual reliability models for all the bearings and gears in the main load paths. The bearing and gear reliability models are based on the statistical two-parameter Weibull failure distribution method and classical fatigue theories. The computer program developed to calculate the transmission model is modular. In its present form, the program can analyze five different transmission arrangements; moreover, it can be easily modified to include additional arrangements. PSHFT uses a common-block two-dimensional array to separate the component and transmission property values from the analysis subroutines. The rows correspond to specific components, with the first row containing the values for the entire transmission; the columns contain the values for specific properties. Since the subroutines that determine the transmission life and dynamic capacity interface solely with this property array, they are separated from any specific transmission configuration. The system analysis subroutines work in an identical manner for all transmission configurations considered; thus, other configurations can be added to the program simply by adding component property determination subroutines. PSHFT consists of a main program, a series of configuration-specific subroutines, generic component property analysis subroutines, system analysis subroutines, and a common block. The main program selects the routines to be used in the analysis and sequences their operation. The configuration-specific subroutines input the configuration data, perform the component force and life analyses (with the help of the generic component property analysis subroutines), fill the property array, call the system analysis routines, and finally print out the analysis results for the system and components. PSHFT is written in FORTRAN 77 and compiled with the Microsoft FORTRAN compiler. The program will run on an IBM PC AT compatible with at least 104K bytes of memory. The program was developed in 1988.
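The property-array decoupling described above can be sketched in a few lines (in Python rather than FORTRAN 77). The column names, component lives, and Weibull slope below are invented for illustration; only the pattern of rows-as-components, row 0 as the whole transmission, and analysis routines that touch nothing but the array comes from the abstract.

```python
# Hypothetical columns of the shared property array; the real PSHFT
# column layout and Weibull slope are not given in the abstract.
LIFE, DYN_CAPACITY, LOAD = 0, 1, 2
N_PROPS = 3

def make_property_array(n_components):
    """Row 0 holds whole-transmission values; rows 1.. hold components."""
    return [[0.0] * N_PROPS for _ in range(n_components + 1)]

def weibull_system_life(props, beta=1.5):
    """Generic system-analysis routine: combines component lives under a
    two-parameter Weibull series-system model, touching only the array."""
    total = sum(props[r][LIFE] ** -beta for r in range(1, len(props)))
    return total ** (-1.0 / beta)

# A configuration-specific routine would fill the component rows...
props = make_property_array(3)
for row, life in zip((1, 2, 3), (900.0, 1200.0, 1500.0)):
    props[row][LIFE] = life

# ...after which the generic routine writes the system row.
props[0][LIFE] = weibull_system_life(props)
print(f"system life: {props[0][LIFE]:.0f} h")
```

Because the system routine reads only the array, a new transmission arrangement needs only a new filling routine, which is exactly the extension mechanism the abstract describes.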

  17. Integrated Array/Metadata Analytics

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Baumann, Peter

    2015-04-01

Data comes in various forms and types, and integration usually presents a problem that is often simply ignored or solved with ad-hoc solutions. Multidimensional arrays are a ubiquitous data type that we find at the core of virtually all science and engineering domains, as sensor, model, image, and statistics data. Naturally, arrays are richly described by and intertwined with additional metadata (alphanumeric relational data, XML, JSON, etc.). Database systems, however, which form a fundamental building block of what we call "Big Data", lack adequate support for modelling and expressing these array data/metadata relationships. Array analytics is hence quite primitive, or entirely absent, in modern relational DBMS. Recognizing this, we extended SQL with a new SQL/MDA part that seamlessly integrates multidimensional array analytics into the standard database query language. We demonstrate the benefits of SQL/MDA with real-world examples executed in ASQLDB, an open-source mediator system based on HSQLDB and rasdaman that already implements SQL/MDA.

  18. The Statistic Results of the ISUAL Lightning Survey

    NASA Astrophysics Data System (ADS)

    Chuang, Chia-Wen; Bing-Chih Chen, Alfred; Liu, Tie-Yue; Lin, Shin-Fa; Su, Han-Tzong; Hsu, Rue-Ron

    2017-04-01

The ISUAL (Imager for Sprites and Upper Atmospheric Lightning) instrument onboard FORMOSAT-2 is the first science payload dedicated to the study of lightning-induced transient luminous events (TLEs). Transient events, including TLEs and lightning, were recorded by the intensified imager, spectrophotometer (SP), and array photometer (AP) simultaneously whenever the light variation observed by the SP exceeded a programmed threshold. ISUAL therefore surveys not only TLEs but also lightning globally, with good spatial, temporal, and spectral resolution. Over the past 12 years (2004-2016), approximately 300,000 transient events were registered, of which only 42,000 are classified as TLEs. Since the main mission objective is to explore the distribution and characteristics of TLEs, the remaining transient events, mainly lightning, can serve as a long-term global lightning survey. This huge number of events cannot be processed manually as the TLEs were; therefore, a data pipeline was developed to scan lightning patterns and derive their geolocations with an efficient algorithm. The 12-year statistical results, including occurrence rate, global distribution, seasonal variation, and a comparison with the LIS/OTD survey, are presented in this report.

  19. The non-equilibrium statistical mechanics of a simple geophysical fluid dynamics model

    NASA Astrophysics Data System (ADS)

    Verkley, Wim; Severijns, Camiel

    2014-05-01

Lorenz [1] has devised a dynamical system that has proved to be very useful as a benchmark system in geophysical fluid dynamics. The system in its simplest form consists of a periodic array of variables that can be associated with an atmospheric field on a latitude circle. The system is driven by a constant forcing, is damped by linear friction and has a simple advection term that causes the model to behave chaotically if the forcing is large enough. Our aim is to predict the statistics of Lorenz' model on the basis of a given average value of its total energy - obtained from a numerical integration - and the assumption of statistical stationarity. Our method is the principle of maximum entropy [2] which in this case reads: the information entropy of the system's probability density function shall be maximal under the constraints of normalization, a given value of the average total energy and statistical stationarity. Statistical stationarity is incorporated approximately by using `stationarity constraints', i.e., by requiring that the average first and possibly higher-order time-derivatives of the energy are zero in the maximization of entropy. The analysis [3] reveals that, if the first stationarity constraint is used, the resulting probability density function rather accurately reproduces the statistics of the individual variables. If the second stationarity constraint is used as well, the correlations between the variables are also reproduced quite adequately. The method can be generalized straightforwardly and holds the promise of a viable non-equilibrium statistical mechanics of the forced-dissipative systems of geophysical fluid dynamics. [1] E.N. Lorenz, 1996: Predictability - A problem partly solved, in Proc. Seminar on Predictability (ECMWF, Reading, Berkshire, UK), Vol. 1, pp. 1-18. [2] E.T. Jaynes, 2003: Probability Theory - The Logic of Science (Cambridge University Press, Cambridge). [3] W.T.M. Verkley and C.A. Severijns, 2014: The maximum entropy principle applied to a dynamical system proposed by Lorenz, Eur. Phys. J. B, 87:7, http://dx.doi.org/10.1140/epjb/e2013-40681-2 (open access).
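For reference, the benchmark in question (commonly called the Lorenz-96 model) and the energy diagnostic used as a constraint can be reproduced in a few lines. N = 40 and F = 8 are conventional choices in the literature, not values quoted in the abstract, and the integrator settings are assumptions.

```python
import statistics

# Lorenz '96: a periodic array of N variables on a "latitude circle",
#   dx_j/dt = (x_{j+1} - x_{j-2}) * x_{j-1} - x_j + F
# with forcing F, linear friction, and a quadratic advection term.
N, F, DT = 40, 8.0, 0.01

def tendency(x):
    # Python's negative indexing handles the periodic wrap-around.
    return [(x[(j + 1) % N] - x[j - 2]) * x[j - 1] - x[j] + F
            for j in range(N)]

def rk4_step(x):
    k1 = tendency(x)
    k2 = tendency([a + 0.5 * DT * b for a, b in zip(x, k1)])
    k3 = tendency([a + 0.5 * DT * b for a, b in zip(x, k2)])
    k4 = tendency([a + DT * b for a, b in zip(x, k3)])
    return [a + DT * (b + 2 * c + 2 * d + e) / 6.0
            for a, b, c, d, e in zip(x, k1, k2, k3, k4)]

x = [F] * N
x[0] += 0.01                       # perturb the steady state to trigger chaos
for _ in range(2000):              # discard the transient (spin-up)
    x = rk4_step(x)

energies = []                      # total energy E = (1/2) * sum(x_j ** 2)
for _ in range(5000):
    x = rk4_step(x)
    energies.append(0.5 * sum(v * v for v in x))

mean_energy = statistics.fmean(energies)
print(f"time-mean total energy: {mean_energy:.1f}")
```

The time-mean energy computed this way is exactly the kind of single constraint the abstract feeds into the maximum-entropy prediction of the model's statistics.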

  20. Costs and benefits of bicycling investments in Portland, Oregon.

    PubMed

    Gotschi, Thomas

    2011-01-01

    Promoting bicycling has great potential to increase overall physical activity; however, significant uncertainty exists with regard to the amount and effectiveness of investment needed for infrastructure. The objective of this study is to assess how costs of Portland's past and planned investments in bicycling relate to health and other benefits. Costs of investment plans are compared with 2 types of monetized health benefits, health care cost savings and value of statistical life savings. Levels of bicycling are estimated using past trends, future mode share goals, and a traffic demand model. By 2040, investments in the range of $138 to $605 million will result in health care cost savings of $388 to $594 million, fuel savings of $143 to $218 million, and savings in value of statistical lives of $7 to $12 billion. The benefit-cost ratios for health care and fuel savings are between 3.8 and 1.2 to 1, and an order of magnitude larger when value of statistical lives is used. This first of its kind cost-benefit analysis of investments in bicycling in a US city shows that such efforts are cost-effective, even when only a limited selection of benefits is considered.

  1. Effective real estate management helps IDSs meet strategic objectives.

    PubMed

    Campobasso, F D

    2000-05-01

    As IDSs expand their healthcare delivery networks, they acquire an increasingly diverse array of real estate assets. Managing these assets effectively requires a comprehensive real estate strategy. To develop such a strategy, the IDS should form a strategic real estate planning team. The team's role should be to conduct market research; assess the strategic value of the IDS's real estate portfolio; recommend strategies for disposing of unnecessary, underperforming, or mis-aligned assets; evaluate new real estate acquisitions or development projects that may be required to achieve the organization's mission and/or protect market share; and recommend a financing approach that fits the real estate strategy.

  2. Correlation between the reason for referral, clinical, and objective assessment of the risk for dysphagia.

    PubMed

    Mancopes, Renata; Gonçalves, Bruna Franciele da Trindade; Costa, Cintia Conceição; Favero, Talita Cristina; Drozdz, Daniela Rejane Constantino; Bilheri, Diego Fernando Dorneles; Schumacher, Stéfani Fernanda

    2014-01-01

    To correlate the reason for referral to speech therapy service at a university hospital with the results of clinical and objective assessment of risk for dysphagia. This is a cross-sectional, observational, retrospective analytical and quantitative study. The data were gathered from the database, and the information used was the reason for referral to speech therapy service, results of clinical assessment of the risk for dysphagia, and also from swallowing videofluoroscopy. There was a mean difference between the variables of the reason for the referral, results of the clinical and objective swallowing assessments, and scale of penetration/aspiration, although the values were not statistically significant. Statistically significant correlation was observed between clinical and objective assessments and the penetration scale, with the largest occurring between the results of objective assessment and penetration scale. There was a correlation between clinical and objective assessments of swallowing and mean difference between the variables of the reason for the referral with their respective assessment. This shows the importance of the association between the data of patient's history and results of clinical evaluation and complementary tests, such as videofluoroscopy, for correct identification of the swallowing disorders, being important to combine the use of severity scales of penetration/aspiration for diagnosis.

  3. Vibrotactile grasping force and hand aperture feedback for myoelectric forearm prosthesis users.

    PubMed

    Witteveen, Heidi J B; Rietman, Hans S; Veltink, Peter H

    2015-06-01

    User feedback about grasping force and hand aperture is very important in object handling with myoelectric forearm prostheses but is lacking in current prostheses. Vibrotactile feedback increases the performance of healthy subjects in virtual grasping tasks, but no extensive validation on potential users has been performed. To investigate the performance of subjects with upper-limb loss in grasping tasks with vibrotactile stimulation providing hand aperture and grasping force feedback. Cross-over trial. A total of 10 subjects with upper-limb loss performed virtual grasping tasks while perceiving vibrotactile feedback. Hand aperture feedback was provided through an array of coin motors, and grasping force feedback through a single miniature stimulator or an array of coin motors. Objects with varying sizes and weights had to be grasped by a virtual hand. The percentages of correctly applied hand apertures and correct grasping force levels were all higher for the vibrotactile feedback condition than for the no-feedback condition. With visual feedback, the results were always better than with vibrotactile feedback. Task durations were comparable for all feedback conditions. Vibrotactile grasping force and hand aperture feedback improves the grasping performance of subjects with upper-limb loss. However, it remains to be investigated whether this is of additional value in daily-life tasks. This study is a first step toward the implementation of sensory vibrotactile feedback for users of myoelectric forearm prostheses. Grasping force feedback is crucial for optimal object handling, and hand aperture feedback is essential for reducing the required visual attention. Grasping performance with feedback was evaluated for the potential users. © The International Society for Prosthetics and Orthotics 2014.

  4. Microstrip technology and its application to phased array compensation

    NASA Technical Reports Server (NTRS)

    Dudgeon, J. E.; Daniels, W. D.

    1972-01-01

    A systematic analysis of mutual coupling compensation using microstrip techniques is presented. A method for behind-the-array coupling compensation of a phased antenna array is investigated for feasibility. The matching scheme is tried on a rectangular array of half-wavelength dipoles, but it is not limited to this array element or geometry. In the example cited, the values of the discrete components required were so small that an L-C network was needed for realization. Such L-C tanks might limit an otherwise broadband array match; however, this is not significant for this dipole array. Other areas investigated were balun feeding and the power limits of spiral antenna elements.

  5. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
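    The first-order moment propagation the abstract describes can be sketched generically: for independent, normally distributed inputs, the output variance is approximated from first-order sensitivity derivatives evaluated at the input means. This is a minimal illustration with a toy function, not the quasi 3-D Euler code; the example function and the central-difference step size are assumptions.

```python
import numpy as np

def first_order_moments(f, x_mean, x_sigma, h=1e-6):
    """Approximate the mean and standard deviation of f(x) via
    first-order sensitivity derivatives at the input mean values."""
    x_mean = np.asarray(x_mean, dtype=float)
    grad = np.zeros_like(x_mean)
    for i in range(x_mean.size):
        dx = np.zeros_like(x_mean)
        dx[i] = h
        # central-difference approximation of df/dx_i
        grad[i] = (f(x_mean + dx) - f(x_mean - dx)) / (2 * h)
    mean_f = f(x_mean)  # first-order mean is f evaluated at the mean
    # independent inputs: variances of the linearized terms add
    var_f = np.sum((grad * np.asarray(x_sigma, dtype=float)) ** 2)
    return mean_f, np.sqrt(var_f)

# Toy example: f(x, y) = x^2 + 3y with input sigmas 0.1 and 0.2
mean, std = first_order_moments(lambda v: v[0] ** 2 + 3 * v[1],
                                [1.0, 2.0], [0.1, 0.2])
```

The approximation is accurate when the output is nearly linear over the spread of the inputs, which matches the paper's finding that the method is valid for robustness about the input mean values.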

  6. Wear behavior of AA 5083/SiC nano-particle metal matrix composite: Statistical analysis

    NASA Astrophysics Data System (ADS)

    Hussain Idrisi, Amir; Ismail Mourad, Abdel-Hamid; Thekkuden, Dinu Thomas; Christy, John Victor

    2018-03-01

    This paper reports a study on the statistical analysis of the wear characteristics of AA5083/SiC nanocomposite. Aluminum matrix composites with different wt % (0%, 1%, and 2%) of SiC nanoparticles were fabricated by the stir casting route. The developed composites were used in the manufacturing of spur gears, on which the study was conducted. A specially designed test rig was used to test the wear performance of the gears. The wear was investigated under different conditions of applied load (10 N, 20 N, and 30 N) and operation time (30, 60, 90, and 120 mins). The analysis was carried out at room temperature at a constant speed of 1450 rpm. The wear parameters were optimized using Taguchi’s method; in this statistical approach, an L27 orthogonal array was selected for the analysis of the output. Furthermore, analysis of variance (ANOVA) was used to investigate the influence of applied load, operation time, and SiC wt % on wear behaviour. The wear resistance was analyzed by selecting the “smaller is better” characteristic as the objective of the model. From this research, it is observed that operation time and SiC wt % have the most significant effect on wear performance, followed by the applied load.
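    The “smaller is better” Taguchi criterion mentioned above has a standard form, S/N = -10·log10(mean(y²)): the factor level with the highest S/N ratio minimizes the response. A minimal sketch with hypothetical wear values (the numbers are illustrative, not the paper's measurements):

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi 'smaller is better' signal-to-noise ratio:
    S/N = -10 * log10( mean(y_i^2) ). Higher S/N means less wear."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical replicate wear measurements (mg) for two SiC contents
wear_0pct = [4.2, 4.5, 4.1]
wear_2pct = [2.1, 2.3, 2.0]

sn0 = sn_smaller_is_better(wear_0pct)
sn2 = sn_smaller_is_better(wear_2pct)
# The 2 wt% composite has the higher S/N here, i.e. better wear resistance.
```

In a Taguchi analysis these S/N ratios are averaged per factor level across the L27 runs to rank the influence of load, time, and SiC content.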

  7. Solar Photovoltaic DC Systems: Basics and Safety: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McNutt, Peter F; Sekulic, William R; Dreifuerst, Gary

    Solar photovoltaic (PV) systems are common and growing, with 42.4 GW of installed capacity in the U.S. (almost 15 GW added in 2016). This paper will help electrical workers and emergency responders understand the basic operating principles and hazards of PV DC arrays. We briefly discuss the following aspects of solar photovoltaic (PV) DC systems: the effects of solar radiation and temperature on output power; PV module testing standards; common system configurations; a simple PV array sizing example; NEC guidelines and other safety features; DC array commissioning, periodic maintenance, and testing; arc-flash hazard potential; how electrical workers and emergency responders can and do work safely around PV arrays; whether moonlight and artificial lighting pose a real danger; typical safe operating procedures; and other potential DC-system hazards to be aware of. We also present some statistics on PV DC array electrical incidents and injuries. Safe PV array operation is possible with a good understanding of PV DC array basics and good safe operating procedures in place.
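    A "simple PV array sizing example" of the kind listed above typically follows the textbook load/insolation method: divide the daily energy demand by the site's peak sun hours and a system derate factor, then round up to a whole number of modules. The numbers below (load, sun hours, derate, module rating) are illustrative assumptions, not figures from the paper.

```python
import math

daily_load_kwh = 30.0    # assumed daily energy demand
peak_sun_hours = 5.0     # site-dependent average insolation
system_derate = 0.80     # wiring, inverter, soiling, temperature losses
module_power_kw = 0.40   # a 400 W module

# Required DC array size, then the module count rounded up
required_kw = daily_load_kwh / (peak_sun_hours * system_derate)
n_modules = math.ceil(required_kw / module_power_kw)
```

With these assumptions the array works out to 7.5 kW DC, or 19 modules of 400 W each.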

  8. WebArray: an online platform for microarray data analysis

    PubMed Central

    Xia, Xiaoqin; McClelland, Michael; Wang, Yipeng

    2005-01-01

    Background Many cutting-edge microarray analysis tools and algorithms, including the commonly used limma and affy packages in Bioconductor, need sophisticated knowledge of mathematics, statistics, and computer skills for implementation. Commercially available software can provide a user-friendly interface at considerable cost. To facilitate the use of these tools for microarray data analysis on an open platform, we developed an online microarray data analysis platform, WebArray, for bench biologists to use in exploring data from single/dual color microarray experiments. Results The currently implemented functions are based on the limma and affy packages from Bioconductor, the spacings LOESS histogram (SPLOSH) method, a PCA-assisted normalization method, and a genome mapping method. WebArray incorporates these packages and provides a user-friendly interface for accessing a wide range of key functions of limma and others, such as spot quality weighting, background correction, graphical plotting, normalization, linear modeling, empirical Bayes statistical analysis, false discovery rate (FDR) estimation, and chromosomal mapping for genome comparison. Conclusion WebArray offers a convenient platform for bench biologists to access several cutting-edge microarray data analysis tools. The website is freely available at . It runs on a Linux server with Apache and MySQL. PMID:16371165

  9. Perceptual impressions of causality are affected by common fate.

    PubMed

    White, Peter A

    2017-03-24

    Many studies of perceptual impressions of causality have used a stimulus in which a moving object (the launcher) contacts a stationary object (the target) and the latter then moves off. Such stimuli give rise to an impression that the launcher makes the target move. In the present experiments, instead of a single target object, an array of four vertically aligned objects was used. The launcher contacted none of them, but stopped at a point between the two central objects. The four objects then moved with similar motion properties, exhibiting the Gestalt property of common fate. Strong impressions of causality were reported for this stimulus. It is argued that the array of four objects was perceived, by the likelihood principle, as a single object with some parts unseen, that the launcher was perceived as contacting one of the unseen parts of this object, and that the causal impression resulted from that. Supporting that argument, stimuli in which kinematic features were manipulated so as to weaken or eliminate common fate yielded weaker impressions of causality.

  10. Simulation design of light field imaging based on ZEMAX

    NASA Astrophysics Data System (ADS)

    Zhou, Ke; Xiao, Xiangguo; Luan, Yadong; Zhou, Xiaobin

    2017-02-01

    Based on the principle of light field imaging, an objective lens and a microlens array were designed for capturing light field features, and the corresponding ZEMAX models were built. All parameters were then optimized using ZEMAX, and a simulated image was produced. The results point out that the positional relationship between the objective lens and the microlens array has a great effect on imaging, which provides guidance for developing a prototype.

  11. Vehicle and cargo container inspection system for drugs

    NASA Astrophysics Data System (ADS)

    Verbinski, Victor V.; Orphan, Victor J.

    1999-06-01

    A vehicle and cargo container inspection system has been developed which uses gamma-ray radiography to produce digital images useful for detection of drugs and other contraband. The system is comprised of a 1 Ci Cs137 gamma-ray source collimated into a fan beam which is aligned with a linear array of NaI gamma-ray detectors located on the opposite side of the container. The NaI detectors are operated in the pulse-counting mode. A digital image of the vehicle or container is obtained by moving the aligned source and detector array relative to the object. Systems have been demonstrated in which the object is stationary (source and detector array move on parallel tracks) and in which the object moves past a stationary source and detector array. Scanning speeds of ˜30 cm/s with a pixel size (at the object) of ˜1 cm have been achieved. Faster scanning speeds of ˜2 m/s have been demonstrated on railcars with more modest spatial resolution (4 cm pixels). Digital radiographic images are generated from the detector count rates. These images, recorded on a PC-based data acquisition and display system, are shown from several applications: 1) inspection of trucks and containers at a border crossing, 2) inspection of railcars at a border crossing, 3) inspection of outbound cargo containers for stolen automobiles, and 4) inspection of trucks and cars for terrorist bombs.

  12. A leading edge heating array and a flat surface heating array - operation, maintenance and repair manual

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A general description of the leading edge/flat surface heating array is presented along with its components, assembly instructions, installation instructions, operation procedures, maintenance instructions, repair procedures, schematics, spare parts lists, engineering drawings of the array, and functional acceptance test log sheets. The proper replacement of components, correct torque values, step-by-step maintenance instructions, and pretest checkouts are described.

  13. Finite Element Electromagnetic Scattering: An Interactive Micro-Computer Algorithm

    DTIC Science & Technology

    1988-06-01


  14. Small Arrays for Seismic Intruder Detections: A Simulation Based Experiment

    NASA Astrophysics Data System (ADS)

    Pitarka, A.

    2014-12-01

    Seismic sensors such as geophones and fiber optics have been increasingly recognized as promising technologies for intelligence surveillance, including intruder detection and perimeter defense systems. Geophone arrays have the capability to provide cost-effective intruder detection for protecting assets with large perimeters. A seismic intruder detection system uses one or multiple arrays of geophones designed to record seismic signals from footsteps and ground vehicles. Using a series of real-time signal processing algorithms, the system detects, classifies, and monitors the intruder's movement. We have carried out numerical experiments to demonstrate the capability of a seismic array to detect moving targets that generate seismic signals. The seismic source is modeled as a vertical force acting on the ground that generates continuous impulsive seismic signals with different predominant frequencies. Frequency-wavenumber analysis of the synthetic array data was used to demonstrate the array's capability to accurately determine the intruder's movement direction. The performance of the array was also analyzed in detecting two or more objects moving at the same time. One of the drawbacks of using a single-array system is its inefficiency at detecting seismic signals deflected by large underground objects. We will show simulation results of the effect of an underground concrete block on shielding the seismic signal coming from an intruder. Based on the simulations, we found that multiple small arrays can greatly improve the system's detection capability in the presence of underground structures. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
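    The direction-finding idea behind the frequency-wavenumber analysis mentioned above can be illustrated with a simpler delay-and-sum beamformer: traces are time-shifted for each trial azimuth, and the direction that maximizes the stacked (beam) power is selected. The array geometry, wave speed, and footstep-like source pulse below are all assumptions for a synthetic test, not the paper's simulation setup.

```python
import numpy as np

fs = 500.0                       # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
xy = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)  # geophones, m
c = 300.0                        # assumed surface-wave speed, m/s
true_az = np.deg2rad(60.0)       # propagation direction of the plane wave

# Synthetic footstep-like pulse arriving with plane-wave delays at each sensor
s = np.exp(-((t - 0.5) ** 2) / (2 * 0.01 ** 2)) * np.sin(2 * np.pi * 25 * t)
slowness = np.array([np.cos(true_az), np.sin(true_az)]) / c
data = np.array([np.interp(t - xy[k] @ slowness, t, s)
                 for k in range(len(xy))])

# Scan candidate azimuths: re-align the traces for each trial direction
# and keep the direction whose stacked trace has maximum power.
best_az, best_pow = None, -1.0
for az in np.deg2rad(np.arange(0, 360, 2)):
    sl = np.array([np.cos(az), np.sin(az)]) / c
    beam = sum(np.interp(t + xy[k] @ sl, t, data[k])
               for k in range(len(xy)))
    p = np.sum(beam ** 2)
    if p > best_pow:
        best_pow, best_az = p, az
```

On noiseless synthetic data the beam power peaks at the true azimuth; a full F-K analysis additionally resolves the apparent velocity, which helps separate footsteps from vehicles.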

  15. Multiplex array proteomics detects increased MMP-8 in CSF after spinal cord injury.

    PubMed

    Light, Matthew; Minor, Kenneth H; DeWitt, Peter; Jasper, Kyle H; Davies, Stephen J A

    2012-06-11

    A variety of methods have been used to study inflammatory changes in the acutely injured spinal cord. Recently, novel multiplex assays have been used in an attempt to overcome limitations in the number of targets that can be studied in a single experiment. Other technical challenges in developing pre-clinical rodent models to investigate biomarkers in cerebrospinal fluid (CSF) include relatively small sample volumes and low concentrations of target proteins. The primary objective of this study was to characterize the inflammatory profile present in CSF at a subacute time point in a clinically relevant rodent model of traumatic spinal cord injury (SCI). Our other aim was to test a microarray proteomics platform specifically for this application. A 34-cytokine sandwich ELISA microarray was used to study inflammatory changes in CSF samples taken 12 days post-cervical SCI in adult rats. The difference between the median foreground signal and the median background signal was measured. Bonferroni and Benjamini-Hochberg multiple testing corrections were applied to limit the false discovery rate (FDR), and a linear mixed model was used to account for repeated measures in the array. We report a novel subacute SCI biomarker, elevated levels of matrix metalloproteinase-8 protein in CSF, and discuss the application of statistical models designed for multiplex testing. Major advantages of this assay over conventional methods include its high-throughput format, good sensitivity, and reduced sample consumption. This method can be useful for creating comprehensive inflammatory profiles, and the biomarkers can be used in the clinic to assess injury severity and to objectively grade response to therapy.
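    The Benjamini-Hochberg correction applied in the study is a standard step-up procedure: sort the p-values, find the largest rank i with p_(i) ≤ i·α/m, and reject all hypotheses up to that rank. A generic sketch (the p-values are illustrative, not the cytokine-array results):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean array marking p-values rejected at FDR level alpha."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = p.size
    # BH step-up thresholds: i * alpha / m for rank i = 1..m
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])   # largest rank meeting its threshold
        reject[order[: k + 1]] = True    # reject everything up to that rank
    return reject

# Hypothetical p-values from a multiplex panel
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
rejected = benjamini_hochberg(pvals, alpha=0.05)
```

BH controls the expected proportion of false discoveries and is less conservative than Bonferroni, which is why both are often reported side by side for multiplex panels.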

  16. Dual-Polarization, Multi-Frequency Antenna Array for use with Hurricane Imaging Radiometer

    NASA Technical Reports Server (NTRS)

    Little, John

    2013-01-01

    Advancements in common aperture antenna technology were employed to utilize its proprietary genetic algorithmbased modeling tools in an effort to develop, build, and test a dual-polarization array for Hurricane Imaging Radiometer (HIRAD) applications. Final program results demonstrate the ability to achieve a lightweight, thin, higher-gain aperture that covers the desired spectral band. NASA employs various passive microwave and millimeter-wave instruments, such as spectral radiometers, for a range of remote sensing applications, from measurements of the Earth's surface and atmosphere, to cosmic background emission. These instruments such as the HIRAD, SFMR (Stepped Frequency Microwave Radiometer), and LRR (Lightweight Rainfall Radiometer), provide unique data accumulation capabilities for observing sea surface wind, temperature, and rainfall, and significantly enhance the understanding and predictability of hurricane intensity. These microwave instruments require extremely efficient wideband or multiband antennas in order to conserve space on the airborne platform. In addition, the thickness and weight of the antenna arrays is of paramount importance in reducing platform drag, permitting greater time on station. Current sensors are often heavy, single- polarization, or limited in frequency coverage. The ideal wideband antenna will have reduced size, weight, and profile (a conformal construct) without sacrificing optimum performance. The technology applied to this new HIRAD array will allow NASA, NOAA, and other users to gather information related to hurricanes and other tropical storms more cost effectively without sacrificing sensor performance or the aircraft time on station. The results of the initial analysis and numerical design indicated strong potential for an antenna array that would satisfy all of the design requirements for a replacement HIRAD array. 
    Multiple common aperture antenna methodologies were employed to achieve exceptional gain over the entire spectral frequency band while exhibiting superb VSWR (voltage standing wave ratio) values. Element size and spacing requirements were addressed for a direct replacement of the thicker, lower-performance, stacked patch antenna array currently employed for the HIRAD application. Several variants of the multiband arrays were developed that exhibited four equally spaced, high-efficiency, "sweet spot" frequency bands, as well as the option for a high-performance wideband array. The 0.25-in. (~6.4-mm) thickness of the antenna stack-up itself was achieved through the application of specialized antenna techniques and meta-materials to accomplish all design objectives.

  17. Systems and methods for detecting a failure event in a field programmable gate array

    NASA Technical Reports Server (NTRS)

    Ng, Tak-Kwong (Inventor); Herath, Jeffrey A. (Inventor)

    2009-01-01

    An embodiment generally relates to a method of self-detecting an error in a field programmable gate array (FPGA). The method includes writing a signature value into a signature memory in the FPGA and determining a conclusion of a configuration refresh operation in the FPGA. The method also includes reading an outcome value from the signature memory.
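    The signature-check scheme described can be paraphrased as: write a known value into signature memory, let the configuration refresh conclude, then read the value back; an outcome that differs from the written signature flags a failure event. A toy software model of that logic (not the patented hardware implementation; the class and names are hypothetical):

```python
SIGNATURE = 0xA5A5

class FakeFpga:
    """Toy stand-in for an FPGA with a signature memory register."""
    def __init__(self):
        self.signature_mem = 0

    def refresh(self, corrupt=False):
        # Placeholder for the configuration refresh operation; optionally
        # simulate an upset corrupting the signature memory mid-refresh.
        if corrupt:
            self.signature_mem ^= 0xFFFF

def refresh_detects_error(fpga, corrupt):
    fpga.signature_mem = SIGNATURE          # write signature value
    fpga.refresh(corrupt=corrupt)           # wait for refresh to conclude
    return fpga.signature_mem != SIGNATURE  # mismatch -> failure event

ok_run = refresh_detects_error(FakeFpga(), corrupt=False)   # no error detected
bad_run = refresh_detects_error(FakeFpga(), corrupt=True)   # error detected
```

The appeal of the scheme is that the device checks itself: no external readback of the full configuration bitstream is needed to notice that a refresh went wrong.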

  18. Affective three-dimensional brain-computer interface created using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-12-01

    To avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we applied a prism array-based display when presenting three-dimensional (3-D) objects. Emotional pictures were used as visual stimuli to increase the signal-to-noise ratios of steady-state visually evoked potentials (SSVEPs), because selective attention involuntarily motivated by affective mechanisms can enhance SSVEP amplitudes, thus producing increased interaction efficiency. Ten male and nine female participants voluntarily participated in our experiments. Participants were asked to control objects under three viewing conditions: two-dimensional (2-D), stereoscopic 3-D, and prism. The participants performed each condition in a counter-balanced order. One-way repeated measures analysis of variance showed significant increases in the positive predictive values in the prism condition compared to the 2-D and 3-D conditions. Participants' subjective ratings of realness and engagement were also significantly greater in the prism condition than in the 2-D and 3-D conditions, while the ratings for visual fatigue were significantly lower in the prism condition than in the 3-D condition. The proposed methods are expected to enhance the sense of reality in 3-D space without causing critical visual fatigue. In addition, people who are especially susceptible to stereoscopic 3-D may be able to use the affective brain-computer interface.

  19. Medicine and the call for a moral epistemology, part II: constructing a synthesis of values.

    PubMed

    Tauber, Alfred I

    2008-01-01

    The demands and needs of an individual patient require diverse value judgments to interpret and apply clinical data. Indeed, objective assessment takes on particular meaning in the context of the social and existential status of the patient, and thereby a complex calculus of values determines therapeutic goals. I have previously formulated how this moral thread of care becomes woven into the epistemological project as a "moral epistemology." Having argued its ethical justification elsewhere, I offer another perspective here: clinical choices employ diverse values directed at an array of goals, some of which are derived from a universal clinical science and others from the particular physiological, psychological, and social needs of the patient. Integrating these diverse elements that determine clinical care requires a complex synthesis of facts and judgments from several domains. This constructivist process relies on clinical facts, as well as on personal judgments and subjective assessments in an ongoing negotiation between patient and doctor. A philosophy of medicine must account for the conceptual basis of this process by identifying and addressing the judgments that govern the complex synthesis of these various elements.

  20. Correlation of Geophysical and Geotechnical Methods for Sediment Mapping in Sungai Batu, Kedah

    NASA Astrophysics Data System (ADS)

    Zakaria, M. T.; Taib, A.; Saidin, M. M.; Saad, R.; Muztaza, N. M.; Masnan, S. S. K.

    2018-04-01

    Exploration geophysics is widely used to map the subsurface characteristics of a region, to understand the underlying rock structures and the spatial distribution of rock units. 2-D resistivity and seismic refraction methods were conducted in the Sungai Batu locality with the objective of identifying and mapping the sediment deposits in correlation with borehole records. 2-D resistivity data were acquired using an ABEM SAS4000 system with a pole-dipole array and a 2.5 m minimum electrode spacing, while for seismic refraction an ABEM MK8 seismograph was used to record the seismic data, with a 5 kg sledgehammer as the seismic source and a geophone interval of 5 m. The inversion model of the 2-D resistivity results shows that resistivity values <100 Ωm were interpreted as a saturated zone, while high resistivity values >500 Ωm were interpreted as the hard layer of this study area. The seismic results indicate that velocity values <2000 m/s represent highly weathered soil consisting of clay and sand, while high velocity values >3600 m/s were interpreted as the hard layer in this locality.

  1. Shock and Vibration Symposium (59th) Held in Albuquerque, New Mexico on 18-20 October 1988. Volume 4

    DTIC Science & Technology

    1988-12-01

    Partial contents: A program to support TOPEX spacecraft design; Statistical energy analysis modeling of nonstructural mass on lightweight equipment panels using VAPEPS; Stress estimation and statistical energy analysis of the Magellan spacecraft solar array using VAPEPS; Dynamic measurement -- An automated

  2. The application of the statistical classifying models for signal evaluation of the gas sensors analyzing mold contamination of the building materials

    NASA Astrophysics Data System (ADS)

    Majerek, Dariusz; Guz, Łukasz; Suchorab, Zbigniew; Łagód, Grzegorz; Sobczuk, Henryk

    2017-07-01

    Mold that develops on moistened building barriers is a major cause of Sick Building Syndrome (SBS). Fungal contamination is normally evaluated using standard biological methods, which are time-consuming and require a lot of manual labor. Fungi emit volatile organic compounds (VOCs) that can be detected in indoor air using several detection techniques, e.g., chromatography. VOCs can also be detected using gas sensor arrays. All of the array's sensors generate particular voltage signals that ought to be analyzed using properly selected statistical methods of interpretation. This work is focused on the attempt to apply statistical classifying models to the evaluation of signals from a gas sensor matrix in order to analyze air sampled from the headspace of various types of building materials at different levels of contamination, as well as from clean reference materials.
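    One of the simplest statistical classifying models that could be applied to such sensor-array voltage signals is a nearest-centroid rule. The sketch below uses synthetic six-sensor steady-state voltage signatures; the sensor count, voltage levels, and class separation are all assumptions for the example, not the study's data or its actual classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic steady-state voltages from a 6-sensor array:
# mold VOCs are assumed to raise the mean sensor response.
clean_ref = rng.normal(loc=1.0, scale=0.05, size=(20, 6))     # clean material
contaminated = rng.normal(loc=1.4, scale=0.05, size=(20, 6))  # moldy material

X = np.vstack([clean_ref, contaminated])
y = np.array([0] * 20 + [1] * 20)

# One centroid (mean signature) per class
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(sample):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    d = np.linalg.norm(centroids - sample, axis=1)
    return int(np.argmin(d))

pred = np.array([classify(x) for x in X])
accuracy = np.mean(pred == y)
```

Real e-nose work typically uses richer features (response slopes, transient shapes) and cross-validated classifiers, but the pipeline is the same: feature vector per sample, fitted model, predicted contamination class.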

  3. Camera-Model Identification Using Markovian Transition Probability Matrix

    NASA Astrophysics Data System (ADS)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components from JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify the statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
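    The feature construction described — threshold a difference 2-D array, then estimate a Markov transition probability matrix along a direction — can be sketched as follows. For simplicity this operates on a raw pixel array rather than the paper's JPEG-domain difference arrays, and uses a single horizontal direction; with threshold T = 3 one direction yields (2T+1)² = 49 features.

```python
import numpy as np

def transition_matrix(arr2d, T=3):
    """Horizontal-difference array, clipped to [-T, T], then a Markov
    transition probability matrix over the 2T+1 difference states."""
    diff = arr2d[:, :-1].astype(int) - arr2d[:, 1:].astype(int)
    diff = np.clip(diff, -T, T) + T          # shift states into 0..2T
    n = 2 * T + 1
    tpm = np.zeros((n, n))
    # count transitions diff[i, j] -> diff[i, j+1] along each row
    for a, b in zip(diff[:, :-1].ravel(), diff[:, 1:].ravel()):
        tpm[a, b] += 1
    row_sums = tpm.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1              # avoid divide-by-zero
    return tpm / row_sums                    # each nonempty row sums to 1

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32))    # stand-in for a Y/Cb 2-D array
tpm = transition_matrix(img)
features = tpm.ravel()                       # 49 features for one direction
```

In the paper's setup, four directional matrices over the Y and Cb difference arrays are concatenated into the feature vector fed to the multi-class SVM.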

  4. Adjustment of Turbulent Boundary-Layer Flow to Idealized Urban Surfaces: A Large-Eddy Simulation Study

    NASA Astrophysics Data System (ADS)

    Cheng, Wai-Chi; Porté-Agel, Fernando

    2015-05-01

    Large-eddy simulations (LES) are performed to simulate the atmospheric boundary-layer (ABL) flow through idealized urban canopies represented by uniform arrays of cubes in order to better understand atmospheric flow over rural-to-urban surface transitions. The LES framework is first validated with wind-tunnel experimental data. Good agreement between the simulation results and the experimental data are found for the vertical and spanwise profiles of the mean velocities and velocity standard deviations at different streamwise locations. Next, the model is used to simulate ABL flows over surface transitions from a flat homogeneous terrain to aligned and staggered arrays of cubes with height . For both configurations, five different frontal area densities , equal to 0.028, 0.063, 0.111, 0.174 and 0.250, are considered. Within the arrays, the flow is found to adjust quickly and shows similar structure to the wake of the cubes after the second row of cubes. An internal boundary layer is identified above the cube arrays and found to have a similar depth in all different cases. At a downstream location where the flow immediately above the cube array is already adjusted to the surface, the spatially-averaged velocity is found to have a logarithmic profile in the vertical. The values of the displacement height are found to be quite insensitive to the canopy layout (aligned vs. staggered) and increase roughly from to as increases from 0.028 to 0.25. Relatively larger values of the aerodynamic roughness length are obtained for the staggered arrays, compared with the aligned cases, and a maximum value of is found at for both configurations. 
By explicitly calculating the drag exerted by the cubes on the flow and the drag coefficients of the cubes using our LES results, and comparing the results with existing theoretical expressions, we show that the larger values of for the staggered arrays are related to the relatively larger drag coefficients of the cubes for that configuration compared with the aligned one. The effective mixing length within and above different cube arrays is also calculated and a local maximum of within the canopy is found in all the cases, with values ranging from to . These patterns of are different from those used in existing urban canopy models.

  5. Innovations in Mission Architectures for Human and Robotic Exploration Beyond Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Cooke, Douglas R.; Joosten, B. Kent; Lo, Martin W.; Ford, Ken; Hansen, Jack

    2002-01-01

    Through the application of advanced technologies, mission concepts, and new ideas in combining capabilities, architectures for missions beyond Earth orbit have been dramatically simplified. These concepts enable a stepping stone approach to discovery driven, technology enabled exploration. Numbers and masses of vehicles required are greatly reduced, yet enable the pursuit of a broader range of objectives. The scope of missions addressed range from the assembly and maintenance of arrays of telescopes for emplacement at the Earth-Sun L2, to Human missions to asteroids, the moon and Mars. Vehicle designs are developed for proof of concept, to validate mission approaches and understand the value of new technologies. The stepping stone approach employs an incremental buildup of capabilities; allowing for decision points on exploration objectives. It enables testing of technologies to achieve greater reliability and understanding of costs for the next steps in exploration.

  6. THE RADIO/GAMMA-RAY CONNECTION IN ACTIVE GALACTIC NUCLEI IN THE ERA OF THE FERMI LARGE AREA TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ackermann, M.; Ajello, M.; Allafort, A.

    We present a detailed statistical analysis of the correlation between the radio and gamma-ray emission of the active galactic nuclei (AGNs) detected by Fermi during its first year of operation, with the largest data sets ever used for this purpose. We use both archival interferometric 8.4 GHz data (from the Very Large Array and ATCA, for the full sample of 599 sources) and concurrent single-dish 15 GHz measurements from the Owens Valley Radio Observatory (OVRO, for a subsample of 199 objects). Our unprecedentedly large sample permits us to assess with high accuracy the statistical significance of the correlation, using a surrogate data method designed to simultaneously account for common-distance bias and the effect of a limited dynamical range in the observed quantities. We find that the statistical significance of a positive correlation between the centimeter radio and the broadband (E > 100 MeV) gamma-ray energy flux is very high for the whole AGN sample, with a probability of <10^-7 for the correlation appearing by chance. Using the OVRO data, we find that concurrent data improve the significance of the correlation from 1.6 x 10^-6 to 9.0 x 10^-8. Our large sample size allows us to study the dependence of correlation strength and significance on specific source types and gamma-ray energy band. We find that the correlation is very significant (chance probability < 10^-7) for both flat spectrum radio quasars and BL Lac objects separately; a dependence of the correlation strength on the considered gamma-ray energy band is also present, but additional data will be necessary to constrain its significance.

  7. The radio/gamma-ray connection in active galactic nuclei in the era of the Fermi Large Area Telescope

    DOE PAGES

    Ackermann, M.; Ajello, M.; Allafort, A.; ...

    2011-10-12

    We present a detailed statistical analysis of the correlation between radio and gamma-ray emission of the active galactic nuclei (AGNs) detected by Fermi during its first year of operation, with the largest data sets ever used for this purpose. We use both archival interferometric 8.4 GHz data (from the Very Large Array and ATCA, for the full sample of 599 sources) and concurrent single-dish 15 GHz measurements from the Owens Valley Radio Observatory (OVRO, for a subsample of 199 objects). Our unprecedentedly large sample permits us to assess with high accuracy the statistical significance of the correlation, using a surrogate data method designed to simultaneously account for common-distance bias and the effect of a limited dynamical range in the observed quantities. We find that the statistical significance of a positive correlation between the centimeter radio and the broadband (E > 100 MeV) gamma-ray energy flux is very high for the whole AGN sample, with a probability of <10⁻⁷ for the correlation appearing by chance. Using the OVRO data, we find that concurrent data improve the significance of the correlation from 1.6 × 10⁻⁶ to 9.0 × 10⁻⁸. Our large sample size allows us to study the dependence of correlation strength and significance on specific source types and gamma-ray energy band. As a result, we find that the correlation is very significant (chance probability < 10⁻⁷) for both flat spectrum radio quasars and BL Lac objects separately; a dependence of the correlation strength on the considered gamma-ray energy band is also present, but additional data will be necessary to constrain its significance.

  8. The Radio/Gamma-Ray Connection in Active Galactic Nuclei in the Era of the Fermi Large Area Telescope

    NASA Technical Reports Server (NTRS)

    Ackermann, M.; Ajello, M.; Allafort, A.; Angelakis, E.; Axelsson, M.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bellazzini, R.; et al.

    2011-01-01

    We present a detailed statistical analysis of the correlation between radio and gamma-ray emission of the active galactic nuclei (AGNs) detected by Fermi during its first year of operation, with the largest data sets ever used for this purpose. We use both archival interferometric 8.4 GHz data (from the Very Large Array and ATCA, for the full sample of 599 sources) and concurrent single-dish 15 GHz measurements from the Owens Valley Radio Observatory (OVRO, for a subsample of 199 objects). Our unprecedentedly large sample permits us to assess with high accuracy the statistical significance of the correlation, using a surrogate data method designed to simultaneously account for common-distance bias and the effect of a limited dynamical range in the observed quantities. We find that the statistical significance of a positive correlation between the centimeter radio and the broadband (E > 100 MeV) gamma-ray energy flux is very high for the whole AGN sample, with a probability of <10⁻⁷ for the correlation appearing by chance. Using the OVRO data, we find that concurrent data improve the significance of the correlation from 1.6 × 10⁻⁶ to 9.0 × 10⁻⁸. Our large sample size allows us to study the dependence of correlation strength and significance on specific source types and gamma-ray energy band. We find that the correlation is very significant (chance probability < 10⁻⁷) for both flat spectrum radio quasars and BL Lac objects separately; a dependence of the correlation strength on the considered gamma-ray energy band is also present, but additional data will be necessary to constrain its significance.
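The chance probability quoted in these records comes from a surrogate data test. A minimal numpy sketch of the idea, assuming plain permutation surrogates (the paper's actual method additionally corrects for common-distance bias and limited dynamic range); the function name and parameters are illustrative:

```python
import numpy as np

def chance_probability(radio_flux, gamma_flux, n_surrogates=1000, seed=0):
    """Estimate the probability that a correlation at least as strong as
    the observed one arises by chance, by comparing the observed Pearson
    coefficient against surrogates built by shuffling one flux array."""
    rng = np.random.default_rng(seed)
    r_obs = np.corrcoef(radio_flux, gamma_flux)[0, 1]
    exceed = 0
    for _ in range(n_surrogates):
        # Permuting one array destroys any real radio/gamma association
        r_surr = np.corrcoef(rng.permutation(radio_flux), gamma_flux)[0, 1]
        if r_surr >= r_obs:
            exceed += 1
    return exceed / n_surrogates
```

With a strongly correlated sample, essentially no surrogate exceeds the observed coefficient and the estimated chance probability is bounded only by the number of surrogates drawn, which is why very large samples are needed to claim significances like 10⁻⁷.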

  9. A review of applications to constrain pumping test responses to improve on geological description and uncertainty

    NASA Astrophysics Data System (ADS)

    Raghavan, Rajagopal

    2004-12-01

    This review examines the single-phase flow of fluids to wells in heterogeneous porous media and explores procedures to evaluate pumping test or pressure-response curves. This paper examines how these curves may be used to improve descriptions of reservoir properties obtained from geology, geophysics, core analysis, outcrop measurements, and rock physics. We begin our discussion with a summary of the classical attempts to handle the issue of heterogeneity in well test analysis. We then review more recent advances concerning the evaluation of conductivity or permeability in terms of statistical variables and touch on perturbation techniques. Our current view to address the issue of heterogeneity by pumping tests may be simply summarized as follows. We assume a three-dimensional array (ordered set) of values for the properties of the porous medium as a function of the coordinates that is obtained as a result of measurements and interpretations. We presume that this array of values contains all relevant information available from prior geological and geophysical interpretations, core and outcrop measurements, and rock physics. These arrays consist of several million values of properties, and the information available is usually on a very fine scale (often <0.5 m in the vertical direction); for convenience, we refer to these as cell values. The properties are assumed to be constant over the volume of each of these cells; that is, the support volume is the cell volume, and the cell volumes define the geologic scale. In this view it is implicit that small-scale permeability affects the movement of fluids. Although more information than porosity is available, we refer to this system as a "porosity cube." 
Because it is not economically feasible to routinely carry out computations on a fine-scale model with modern resources, we discuss matters relating to the aggregation and scale-up of the fine-scale model from the perspective of testing and show that specific details need to be addressed. The focus is on single-phase flow. Addressing the issue of scale-up also permits us to comment on the application of the classical or analytical solutions to heterogeneous systems. The final part of the discussion outlines the inversion process and the adjustment of cell values to match observed performance. Because the computational scale and the scale of the porosity cube are different, we recommend that the inversion process incorporate adjustments at the fine scale. In this view the scale-up process becomes a part of the inversion algorithm.

  10. MTF measurement and analysis of linear array HgCdTe infrared detectors

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Lin, Chun; Chen, Honglei; Sun, Changhong; Lin, Jiamu; Wang, Xi

    2018-01-01

    The slanted-edge technique is the main method for measuring detector MTF; however, it is commonly applied to planar array detectors. In this paper the authors present a modified slanted-edge method to measure the MTF of linear array HgCdTe detectors. Crosstalk is one of the major factors that degrade the MTF value of such an infrared detector. This paper presents an ion implantation guard-ring structure designed to effectively absorb photo-carriers that may laterally diffuse between adjacent pixels, thereby suppressing crosstalk. Measurement and analysis of the MTF of linear array detectors with and without a guard-ring were carried out. The experimental results indicate that the ion implantation guard-ring structure effectively suppresses crosstalk and increases the MTF value.
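The core of any edge-based MTF measurement is the chain edge spread function → line spread function → Fourier magnitude. A minimal 1-D sketch of that chain (the full slanted-edge method first builds an oversampled ESF from a tilted edge image; the function name and the 64-sample test edges below are illustrative):

```python
import numpy as np

def mtf_from_edge(esf):
    """Compute an MTF estimate from a 1-D edge spread function:
    differentiate the ESF to get the line spread function (LSF),
    then take the normalized magnitude of its Fourier transform."""
    lsf = np.gradient(np.asarray(esf, dtype=float))
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]  # normalize so MTF(0) = 1
```

A sharp step edge yields a narrow LSF and hence high MTF at mid frequencies; a blurred (ramp) edge, mimicking crosstalk between adjacent pixels, yields a visibly lower MTF curve.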

  11. Manipulation of Liquids Using Phased Array Generation of Acoustic Radiation Pressure

    NASA Technical Reports Server (NTRS)

    Oeftering, Richard C. (Inventor)

    2000-01-01

    A phased array of piezoelectric transducers is used to control and manipulate contained as well as uncontained fluids in space and earth applications. The transducers in the phased array are individually activated while being commonly controlled to produce acoustic radiation pressure and acoustic streaming. The phased array is activated to produce a single pulse, a pulse burst or a continuous pulse to agitate, segregate or manipulate liquids and gases. The phased array generated acoustic radiation pressure is also useful in manipulating a drop, a bubble or other object immersed in a liquid. The transducers can be arranged in any number of layouts including linear single- or multi-dimensional, space-curved and annular arrays. The individual transducers in the array are activated by a controller, preferably driven by a computer.

  12. Solar-cell interconnect design for terrestrial photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.

    1984-01-01

    Useful solar cell interconnect reliability design and life prediction algorithms are presented, together with experimental data indicating that the classical strain cycle (fatigue) curve for the interconnect material does not account for the statistical scatter that is required in reliability predictions. This shortcoming is presently addressed by fitting a functional form to experimental cumulative interconnect failure rate data, which thereby yields statistical fatigue curves enabling not only the prediction of cumulative interconnect failures during the design life of an array field, but also the quantitative interpretation of data from accelerated thermal cycling tests. Optimal interconnect cost reliability design algorithms are also derived which may allow the minimization of energy cost over the design life of the array field.

  13. Solar-cell interconnect design for terrestrial photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.

    1984-11-01

    Useful solar cell interconnect reliability design and life prediction algorithms are presented, together with experimental data indicating that the classical strain cycle (fatigue) curve for the interconnect material does not account for the statistical scatter that is required in reliability predictions. This shortcoming is presently addressed by fitting a functional form to experimental cumulative interconnect failure rate data, which thereby yields statistical fatigue curves enabling not only the prediction of cumulative interconnect failures during the design life of an array field, but also the quantitative interpretation of data from accelerated thermal cycling tests. Optimal interconnect cost reliability design algorithms are also derived which may allow the minimization of energy cost over the design life of the array field.

  14. Development of a statistical method to help evaluating the transparency/opacity of decorative thin films

    NASA Astrophysics Data System (ADS)

    da Silva Oliveira, C. I.; Martinez-Martinez, D.; Al-Rjoub, A.; Rebouta, L.; Menezes, R.; Cunha, L.

    2018-04-01

    In this paper, we present a statistical method that allows evaluating the degree of transparency of a thin film. To do so, the color coordinates are measured on different substrates, and the standard deviation is evaluated. When the standard deviation is low, the color depends on the film and not on the substrate, and intrinsic colors are obtained. In contrast, transparent films lead to high values of standard deviation, since the color coordinates depend on the substrate. Between both extremes, colored films with a certain degree of transparency can be found. This method allows an objective and simple evaluation of the transparency of any film, improving on subjective visual inspection and avoiding the thickness problems related to optical spectroscopy evaluation. Zirconium oxynitride films deposited on three different substrates (Si, steel and glass) are used for testing the validity of this method, whose results have been validated with optical spectroscopy, and agree with the visual impression of the samples.
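The statistic at the heart of this method is just the spread of the measured color coordinates across substrates. A minimal sketch, assuming CIELAB-like coordinates; the function name, score definition (mean per-channel standard deviation), and thresholds are illustrative, not from the paper:

```python
import numpy as np

def transparency_score(coords):
    """coords: array-like of shape (n_substrates, 3), the color coordinates
    of the same film measured on different substrates. Returns the mean
    per-channel standard deviation: near zero suggests an opaque film with
    an intrinsic color; large values suggest a transparent film whose
    apparent color follows the substrate."""
    coords = np.asarray(coords, dtype=float)
    return float(np.mean(np.std(coords, axis=0)))
```

An opaque film measured on silicon, steel and glass gives nearly identical coordinates and a score near zero, while a transparent film gives widely scattered coordinates and a large score.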

  15. Polarization-interference Jones-matrix mapping of biological crystal networks

    NASA Astrophysics Data System (ADS)

    Ushenko, O. G.; Dubolazov, O. V.; Pidkamin, L. Y.; Sidor, M. I.; Pavlyukovich, N.; Pavlyukovich, O.

    2018-01-01

    The paper consists of two parts. The first part presents the theoretical basics of the method of Jones-matrix mapping with the help of a reference wave. Experimentally measured coordinate distributions of the moduli of the Jones-matrix elements of a polycrystalline bile film are provided, and the values and ranges of variation of the statistical moments characterizing such distributions are defined. The second part presents a statistical analysis of the distributions of the matrix elements of polycrystalline urine films from donors and from patients with albuminuria, and defines objective criteria for the differentiation of albuminuria.

  16. Optical Demonstrations with a Scanning Photodiode Array.

    ERIC Educational Resources Information Center

    Turman, Bobby N.

    1980-01-01

    Describes the photodiode array and the electrical connections necessary for it. Also shows a few of the optical demonstration possibilities: shadowgraphs for measuring small objects, interference and diffraction effects, angular resolution of an optical system, and a simple spectrometer. (Author/DS)

  17. Factors influencing successful physician recruitment in pediatrics and internal medicine.

    PubMed

    King, Kelvin; Camfield, Peter; Breau, Lynn

    2005-01-01

    The objective of the study was to survey physicians recently hired into Canadian Academic Departments of Pediatrics and Internal Medicine to understand the factors that underlie successful recruitment. Recruits and Chairs agreed on the 10 most important values. Chairs overvalued the 10 least important Recruit values. Statistical analysis revealed five core themes; in order of importance they are: family lifestyle and opportunities, compensation methodology, children/community (housing, schools, recreational), professional working conditions (technology, staffing, facilities), and academic opportunities. Core themes varied by demographics and academic profile.

  18. A first-passage scheme for determination of overall rate constants for non-diffusion-limited suspensions

    NASA Astrophysics Data System (ADS)

    Lu, Shih-Yuan; Yen, Yi-Ming

    2002-02-01

    A first-passage scheme is devised to determine the overall rate constant of suspensions under the non-diffusion-limited condition. The original first-passage scheme developed for diffusion-limited processes is modified to account for the finite incorporation rate at the inclusion surface by using a concept of the nonzero survival probability of the diffusing entity at entity-inclusion encounters. This nonzero survival probability is obtained from solving a relevant boundary value problem. The new first-passage scheme is validated by an excellent agreement between overall rate constant results from the present development and from an accurate boundary collocation calculation for the three common spherical arrays [J. Chem. Phys. 109, 4985 (1998)], namely simple cubic, body-centered cubic, and face-centered cubic arrays, for a wide range of P and f. Here, P is a dimensionless quantity characterizing the relative rate of diffusion versus surface incorporation, and f is the volume fraction of the inclusion. The scheme is further applied to random spherical suspensions and to investigate the effect of inclusion coagulation on overall rate constants. It is found that randomness in inclusion arrangement tends to lower the overall rate constant for f up to the near close-packing value of the regular arrays because of the inclusion screening effect. This screening effect turns stronger for regular arrays when f is near and above the close-packing value of the regular arrays, and consequently the overall rate constant of the random array exceeds that of the regular array. Inclusion coagulation too induces the inclusion screening effect, and leads to lower overall rate constants.
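The key modification described above is that, at each encounter with the inclusion surface, the diffusing entity is absorbed only with a finite probability rather than always (the diffusion-limited case). An illustrative 1-D random-walk sketch of that idea, not the paper's 3-D spherical-inclusion scheme; the function name and parameters are ours:

```python
import numpy as np

def mean_absorption_time(p_abs, n_walkers=500, n_sites=16, seed=0):
    """Mean first-passage time of a 1-D walker on sites 0..n_sites-1 with
    a reflecting right wall. Site 0 is a partially absorbing sink: at each
    encounter the walker reacts with probability p_abs, else it survives
    and is reflected. p_abs=1 is the diffusion-limited case; p_abs<1
    mimics a finite surface-incorporation rate."""
    rng = np.random.default_rng(seed)
    total_steps = 0
    for _ in range(n_walkers):
        pos, t = n_sites // 2, 0
        while True:
            t += 1
            pos += 2 * int(rng.integers(0, 2)) - 1  # unbiased +/-1 step
            if pos >= n_sites:        # reflecting outer wall
                pos = n_sites - 2
            if pos <= 0:              # encounter with the sink
                if rng.random() < p_abs:
                    break             # absorbed: reaction occurs
                pos = 1               # nonzero survival probability
        total_steps += t
    return total_steps / n_walkers
```

Lowering the absorption probability lengthens the mean first-passage time, which is the 1-D analogue of the overall rate constant decreasing under non-diffusion-limited conditions.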

  19. Observations of interplanetary dust by the Juno magnetometer investigation

    NASA Astrophysics Data System (ADS)

    Benn, M.; Jorgensen, J. L.; Denver, T.; Brauer, P.; Jorgensen, P. S.; Andersen, A. C.; Connerney, J. E. P.; Oliversen, R.; Bolton, S. J.; Levin, S.

    2017-05-01

    One of the Juno magnetometer investigation's star cameras was configured to search for unidentified objects during Juno's transit en route to Jupiter. This camera detects and registers luminous objects to magnitude 8. Objects persisting in more than five consecutive images and moving with an apparent angular rate of between 2 and 18,000 arcsec/s were recorded. Among the objects detected were a small group of objects tracked briefly in close proximity to the spacecraft. The trajectory of these objects demonstrates that they originated on the Juno spacecraft, evidently excavated by micrometeoroid impacts on the solar arrays. The majority of detections occurred just prior to and shortly after Juno's transit of the asteroid belt. This rather novel detection technique utilizes the Juno spacecraft's prodigious 60 m2 of solar array as a dust detector and provides valuable information on the distribution and motion of interplanetary (micrometer-sized and larger) dust.

  20. Markovian properties of wind turbine wakes within a 3×3 array

    NASA Astrophysics Data System (ADS)

    Melius, Matthew; Tutkun, Murat; Cal, Raúl Bayoán

    2012-11-01

    Wind turbine arrays have proven to be significant sources of renewable energy. Accurate projections of energy production are difficult to achieve because the wake of a wind turbine is highly intermittent and turbulent. Seeking to further the understanding of the downstream propagation of wind turbine wakes, a stochastic analysis of experimentally obtained turbulent flow data behind a wind turbine was performed. A 3×3 wind turbine array was constructed in the test section of a recirculating wind tunnel where X-wire anemometers were used to collect point velocity statistics. In this work, the mathematics of the theory of Markovian processes are applied to obtain a statistical description of longitudinal velocity increments inside the turbine wake using conditional probability density functions. Our results indicate the existence of Markovian properties at scales on the order of the Taylor microscale, λ, which has also been observed and documented in different turbulent flows. This leads to a characterization of the multi-point description of the wind turbine wakes using the most recent states of the flow.

  1. Impact of Using History of Mathematics on Students' Mathematics Attitude: A Meta-Analysis Study

    ERIC Educational Resources Information Center

    Bütüner, Suphi Onder

    2015-01-01

    The main objective of this study is to present the big picture by examining studies on the influence of using the history of mathematics on students' attitudes toward mathematics. Six studies, with a total of 14 effect sizes, that comply with the coding protocol and contain the statistical values necessary for meta-analysis are combined via the meta-analysis method…

  2. Learning from Success: How Original Research on Academic Resilience Informs What College Faculty Can Do to Increase the Retention of Low Socioeconomic Status Students

    ERIC Educational Resources Information Center

    Morales, Erik E.

    2014-01-01

    Utilizing resilience theory and original research conducted on fifty academically resilient low socioeconomic status students of color, this article presents specific objectives and values institutions of higher learning can adopt and emphasize to increase the retention and graduation of their most statistically at-risk students. Major findings…

  3. RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.

    PubMed

    Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang

    2017-01-03

    The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias on the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method in several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website ( https://www.bioconductor.org/packages/release/bioc/html/ENmix.html ). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms other alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.
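The regression-on-logarithms idea can be sketched compactly: fit a line relating the log intensities of the paired control probes in the two channels, then apply the fitted map to the red channel so both channels share a common scale. A simplified illustration in numpy, not the actual ENmix implementation; the function name and arguments are ours:

```python
import numpy as np

def relic_style_correction(red, red_ctrl, green_ctrl):
    """Fit log(green_ctrl) = intercept + slope * log(red_ctrl) across the
    paired internal control probes, then apply that linear map (in log
    space) to red-channel intensities, returning values on the green
    channel's scale."""
    slope, intercept = np.polyfit(np.log(red_ctrl), np.log(green_ctrl), 1)
    return np.exp(intercept + slope * np.log(np.asarray(red, dtype=float)))
```

If the two channels are related by a power law with a gain factor, the regression recovers exactly that relation from the controls and removes it from the signal probes.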

  4. Mangrove vegetation structure in Southeast Brazil from phased array L-band synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    de Souza Pereira, Francisca Rocha; Kampel, Milton; Cunha-Lignon, Marilia

    2016-07-01

    The potential use of phased array type L-band synthetic aperture radar (PALSAR) data for discriminating distinct physiographic mangrove types with different forest structure developments in a subtropical mangrove forest located in Cananéia on the Southern coast of São Paulo, Brazil, is investigated. The basin and fringe physiographic types and the structural development of mangrove vegetation were identified by applying the Kruskal-Wallis statistical test to the SAR backscatter values of 10 incoherent attributes. The best results for separating basin from fringe types were obtained using copolarized HH, cross-polarized HV, and the biomass index (BMI). Mangrove structural parameters were also estimated using multiple linear regressions. BMI and the canopy structure index were used as explanatory variables for canopy height, mean height, and mean diameter at breast height regression models, with significant R2=0.69, 0.73, and 0.67, respectively. The current study indicates that SAR L-band images can be used as a tool to discriminate physiographic types and to characterize mangrove forests. The results are relevant considering the growing availability of freely distributed SAR images, which can be further utilized for analysis, monitoring, and conservation of the mangrove ecosystem.

  5. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The LOOCV pricing errors show that interpolation using a fourth-order polynomial provides the best fit to option prices, as it has the lowest error value.
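The LOOCV pricing error is simple to state: leave out one option, fit the interpolant to the remaining (strike, price) pairs, predict the left-out price, and aggregate the errors. A minimal sketch for polynomial interpolation, assuming an RMSE aggregate (the paper's exact error definition may differ); function name and parameters are ours:

```python
import numpy as np

def loocv_error(strikes, prices, degree):
    """Leave-one-out cross-validation pricing error for polynomial
    interpolation of option prices over strikes: for each point, fit a
    polynomial of the given degree to all other points and record the
    prediction error at the held-out strike. Returns the RMSE."""
    strikes = np.asarray(strikes, dtype=float)
    prices = np.asarray(prices, dtype=float)
    errs = []
    for i in range(len(strikes)):
        mask = np.arange(len(strikes)) != i          # hold out point i
        coeffs = np.polyfit(strikes[mask], prices[mask], degree)
        errs.append(prices[i] - np.polyval(coeffs, strikes[i]))
    return float(np.sqrt(np.mean(np.square(errs))))
```

Comparing the LOOCV error across candidate degrees then selects the interpolation technique, mirroring the selection criterion proposed in the study.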

  6. 300 mm arrays and 30 nm Features: Frontiers in Sorting Biological Objects

    NASA Astrophysics Data System (ADS)

    Austin, Robert; Comella, Brandon; D'Silva, Joseph; Sturm, James

    2014-03-01

    One of the great challenges in prediction of metastasis is determining when the metastatic process actually begins. It is presumed that this process occurs due to passage of biological objects in the blood from tumor to remote sites. We will discuss our attempts to find both very large objects (circulating tumor cell clumps) and very small ones (exosomes) using a combination of extremely large scale photolithography on 300 mm wafers and deep-UV lithography to produce sub-100 nm arrays to sort exosomes. These technologies push the envelope of present-day academic facilities. Supported by the National Science Foundation and the National Cancer Institute.

  7. Coherent acoustic communication in a tidal estuary with busy shipping traffic.

    PubMed

    van Walree, Paul A; Neasham, Jeffrey A; Schrijver, Marco C

    2007-12-01

    High-rate acoustic communication experiments were conducted in a dynamic estuarine environment. Two current profilers deployed in a shipping lane were interfaced with acoustic modems, which modulated and transmitted the sensor readings every 200 s over a period of four days. QPSK modulation was employed at a raw data rate of 8 kbit/s on a 12-kHz carrier. Two 16-element hydrophone arrays, one horizontal and one vertical, were deployed near the shore. A multichannel decision-feedback equalizer was used to demodulate the modem signals received on both arrays. Long-term statistical analysis reveals the effects of the tidal cycle, subsea unit location, attenuation by the wake of passing vessels, and high levels of ship-generated noise on the fidelity of the communication links. The use of receiver arrays enables vast improvement in the overall reliability of data delivery compared with a single-receiver system, with performance depending strongly on array orientation. The vertical array offers the best performance overall, although the horizontal array proves more robust against shipping noise. Spatial coherence estimates, variation of array aperture, and inspection of array angular responses point to adaptive beamforming and coherent combining as the chief mechanisms of array gain.

  8. Reduction of solar vector magnetograph data using a microMSP array processor

    NASA Technical Reports Server (NTRS)

    Kineke, Jack

    1990-01-01

    The processing of raw data obtained by the solar vector magnetograph at NASA-Marshall requires extensive arithmetic operations on large arrays of real numbers. The objectives of this summer faculty fellowship study are to: (1) learn the programming language of the MicroMSP Array Processor and adapt some existing data reduction routines to exploit its capabilities; and (2) identify other applications and/or existing programs which lend themselves to array processor utilization which can be developed by undergraduate student programmers under the provisions of project JOVE.

  9. Synthetic aperture radar images with composite azimuth resolution

    DOEpatents

    Bielek, Timothy P; Bickel, Douglas L

    2015-03-31

    A synthetic aperture radar (SAR) image is produced by using all phase histories of a set of phase histories to produce a first pixel array having a first azimuth resolution, and using less than all phase histories of the set to produce a second pixel array having a second azimuth resolution that is coarser than the first azimuth resolution. The first and second pixel arrays are combined to produce a third pixel array defining a desired SAR image that shows distinct shadows of moving objects while preserving detail in stationary background clutter.

  10. Clinical evaluation of selected Yogic procedures in individuals with low back pain

    PubMed Central

    Pushpika Attanayake, A. M.; Somarathna, K. I. W. K.; Vyas, G. H.; Dash, S. C.

    2010-01-01

    The present study was conducted to evaluate selected yogic procedures on individuals with low back pain. The recognition of back pain as one of the commonest presentations in clinical practice motivated the present study. It has also been calculated that more than three-quarters of the world's population experience back pain at some time in their lives. Twelve patients were selected and randomly divided into two groups, viz., group A (yogic group) and group B (control group). Advice on lifestyle and diet was given to all the patients. The effect of the therapy was assessed subjectively and objectively. The scores for the yogic group and control group were individually analyzed before and after treatment, and the values were compared using standard statistical protocols. Yogic intervention revealed 79% relief in both subjective and objective parameters (7 out of 14 parameters showed statistically highly significant results (P < 0.01), while 4 showed significant results (P < 0.05)). The comparative effect of the yogic group and control group showed 79% relief in both subjective and objective parameters (6 out of 14 parameters showed statistically highly significant results (P < 0.01), while 5 showed significant results (P < 0.05)). PMID:22131719

  11. High resolution telescope

    DOEpatents

    Massie, Norbert A.; Oster, Yale

    1992-01-01

    A large effective-aperture, low-cost optical telescope with diffraction-limited resolution enables ground-based observation of near-earth space objects. The telescope has a non-redundant, thinned-aperture array in a center-mount, single-structure space frame. It employs speckle interferometric imaging to achieve diffraction-limited resolution. The signal-to-noise ratio problem is mitigated by moving the wavelength of operation to the near-IR, and the image is sensed by a silicon CCD. The steerable, single-structure array presents a constant pupil. The center-mount, radar-like mount enables low-earth orbit space objects to be tracked as well as increases stiffness of the space frame. In the preferred embodiment, the array has elemental telescopes with subapertures of 2.1 m in a circle-of-nine configuration. The telescope array has an effective aperture of 12 m which provides a diffraction-limited resolution of 0.02 arc seconds. Pathlength matching of the telescope array is maintained by an electro-optical system employing laser metrology. Speckle imaging relaxes the pathlength matching tolerance by one order of magnitude as compared to phased arrays. Many features of the telescope contribute to substantial reduction in costs. These include eliminating the conventional protective dome and reducing on-site construction activities. The cost of the telescope scales with the first power of the aperture rather than its third power as in conventional telescopes.

  12. Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Navarro, Robert

    2006-01-01

    The Deep Space Network Large Array will replace/augment 34 and 70 meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes, in the western U.S., Australia, and at European longitude, each with 400 12 m downlink antennas, and a DSN central facility at JPL. This facility will remotely conduct all real-time monitoring and control for the network. Signal processing objectives include: provide a means to evaluate the performance of the Breadboard Array's antenna subsystem; design and build prototype hardware; demonstrate and evaluate proposed signal processing techniques; and gain experience with various technologies that may be used in the Large Array. Results are summarized.

  13. Initial Operative Experience and Short Term Hearing Preservation Results with a Mid-Scala Cochlear Implant Electrode Array

    PubMed Central

    Svrakic, Maja; Roland, J. Thomas; McMenomey, Sean O.; Svirsky, Mario A.

    2016-01-01

    OBJECTIVE To describe our initial operative experience and hearing preservation results with the Advanced Bionics (AB) Mid Scala Electrode (MSE). STUDY DESIGN Retrospective review. SETTING Tertiary referral center. PATIENTS Sixty-three MSE implants in pediatric and adult patients were compared to age- and gender-matched 1j electrode implants from the same manufacturer. All patients were severely to profoundly deaf. INTERVENTION Cochlear implantation with either the AB 1j electrode or the AB MSE. MAIN OUTCOME MEASURES The MSE and 1j electrode were compared in their angular depth of insertion (aDOI) and pre- to post-operative change in hearing thresholds. Hearing preservation was analyzed as a function of aDOI. Secondary outcome measures included operative time, incidence of abnormal intraoperative impedance and telemetry values, and incidence of postsurgical complications. RESULTS Depth of insertion was similar for both electrodes, but was more consistent for the MSE array and more variable for the 1j array. Patients with MSE electrodes had better hearing preservation. Threshold shifts at four audiometric frequencies ranging from 250 to 2,000 Hz were 10 dB, 7 dB, 2 dB and 6 dB smaller for the MSE electrode than for the 1j (p<0.05). Hearing preservation at low frequencies was worse with deeper insertion, regardless of array. Secondary outcome measures were similar for both electrodes. CONCLUSIONS The MSE electrode resulted in more consistent insertion depth and somewhat better hearing preservation than the 1j electrode. Differences in other surgical outcome measures were small or unlikely to have a meaningful effect. PMID:27755356

  14. Breadboard linear array scan imager using LSI solid-state technology

    NASA Technical Reports Server (NTRS)

    Tracy, R. A.; Brennan, J. A.; Frankel, D. G.; Noll, R. E.

    1976-01-01

    The performance of large scale integration photodiode arrays in a linear array scan (pushbroom) breadboard was evaluated for application to multispectral remote sensing of the earth's resources. The technical approach, implementation, and test results of the program are described. Several self-scanned linear-array visible photodetector focal plane arrays were fabricated and evaluated in an optical bench configuration. A 1728-detector array operating in four bands (0.5 - 1.1 micrometer) was evaluated for noise, spectral response, dynamic range, crosstalk, MTF, noise equivalent irradiance, linearity, and image quality. Other results include image artifact data, temporal characteristics, radiometric accuracy, calibration experience, chip alignment, and array fabrication experience. Special studies and experimentation were included in long-array fabrication and real-time image processing for low-cost ground stations, including the use of computer image processing. High quality images were produced and all objectives of the program were attained.

  15. Statistical analysis of early failures in electromigration

    NASA Astrophysics Data System (ADS)

    Gall, M.; Capasso, C.; Jawarani, D.; Hernandez, R.; Kawasaki, H.; Ho, P. S.

    2001-07-01

    The detection of early failures in electromigration (EM) and the complicated statistical nature of this important reliability phenomenon have been difficult issues to treat in the past. A satisfactory experimental approach for the detection and statistical analysis of early failures has not yet been established, mainly due to the rare occurrence of early failures and the difficulty of testing large sample populations. Furthermore, experimental data on EM behavior as a function of the number of failure links are scarce. In this study, a technique utilizing large interconnect arrays in conjunction with the well-known Wheatstone Bridge is presented. Three types of structures with a varying number of Ti/TiN/Al(Cu)/TiN-based interconnects were used, starting from a small unit of five lines in parallel. A serial arrangement of this unit enabled testing of interconnect arrays encompassing 480 possible failure links. In addition, a Wheatstone Bridge-type wiring using four large arrays in each device enabled simultaneous testing of 1920 interconnects. In conjunction with a statistical deconvolution to the single interconnect level, the results indicate that the electromigration failure mechanism studied here follows perfect lognormal behavior down to the four sigma level. The statistical deconvolution procedure is described in detail. Over a temperature range from 155 to 200 °C, a total of more than 75 000 interconnects were tested. None of the samples showed an indication of early, or alternate, failure mechanisms. The activation energy of the EM mechanism studied here, namely the Cu incubation time, was determined to be Q=1.08±0.05 eV. We surmise that interface diffusion of Cu along the Al(Cu) sidewalls and along the top and bottom refractory layers, coupled with grain boundary diffusion within the interconnects, constitutes the Cu incubation mechanism.
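    The deconvolution from array level to single-interconnect level rests on the weakest-link relation F_array(t) = 1 − (1 − F_single(t))^N: an array fails as soon as any one of its N links fails. A minimal sketch of that inversion, assuming independent, identically distributed links (the function name and numbers are illustrative, not from the study):

```python
import math

def deconvolve_weakest_link(f_array, n_links):
    """Invert the weakest-link relation F_array = 1 - (1 - F_single)^n
    to recover the per-link failure probability at the same time point."""
    return 1.0 - (1.0 - f_array) ** (1.0 / n_links)

# A 480-link array (as in the serial arrangement described above) with a
# 50% chance of failure by some time t implies a tiny per-link probability.
f_single = deconvolve_weakest_link(0.5, 480)   # ~0.00144
```

    Applying this pointwise to an empirical array-level failure CDF yields the single-link distribution whose lognormality can then be tested.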

  16. Multivariate regression methods for estimating velocity of ictal discharges from human microelectrode recordings

    NASA Astrophysics Data System (ADS)

    Liou, Jyun-you; Smith, Elliot H.; Bateman, Lisa M.; McKhann, Guy M., II; Goodman, Robert R.; Greger, Bradley; Davis, Tyler S.; Kellis, Spencer S.; House, Paul A.; Schevon, Catherine A.

    2017-08-01

    Objective. Epileptiform discharges, an electrophysiological hallmark of seizures, can propagate across cortical tissue in a manner similar to traveling waves. Recent work has focused attention on the origination and propagation patterns of these discharges, yielding important clues to their source location and mechanism of travel. However, systematic studies of methods for measuring propagation are lacking. Approach. We analyzed epileptiform discharges in microelectrode array recordings of human seizures. The array records multiunit activity and local field potentials at 400 micron spatial resolution, from a small cortical site free of obstructions. We evaluated several computationally efficient statistical methods for calculating traveling wave velocity, benchmarking them to analyses of associated neuronal burst firing. Main results. Over 90% of discharges met statistical criteria for propagation across the sampled cortical territory. Detection rate, direction and speed estimates derived from a multiunit estimator were compared to four field potential-based estimators: negative peak, maximum descent, high gamma power, and cross-correlation. Interestingly, the methods that were computationally simplest and most efficient (negative peak and maximal descent) offer non-inferior results in predicting neuronal traveling wave velocities compared to the other two, more complex methods. Moreover, the negative peak and maximal descent methods proved to be more robust against reduced spatial sampling challenges. Using least absolute deviation in place of least squares error minimized the impact of outliers, and reduced the discrepancies between local field potential-based and multiunit estimators. Significance. Our findings suggest that ictal epileptiform discharges typically take the form of exceptionally strong, rapidly traveling waves, with propagation detectable across millimeter distances. 
The sequential activation of neurons in space can be inferred from clinically-observable EEG data, with a variety of straightforward computation methods available. This opens possibilities for systematic assessments of ictal discharge propagation in clinical and research settings.
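    The simplest of these estimators reduce to a plane-wave regression: per-electrode event times (e.g. negative-peak times) are regressed on electrode positions, and the fitted slowness vector gives speed and direction. A sketch under that assumption, with a hypothetical 4 x 4 grid, and ordinary least squares standing in for the least-absolute-deviation variant discussed above:

```python
import numpy as np

def wave_velocity_from_peak_times(positions, peak_times):
    """Least-squares plane-wave fit t_i = t0 + s . r_i, where s is the
    slowness vector; speed is 1/|s| and s/|s| is the travel direction."""
    A = np.column_stack([np.ones(len(peak_times)), positions])
    coef, *_ = np.linalg.lstsq(A, peak_times, rcond=None)
    s = coef[1:]
    speed = 1.0 / np.linalg.norm(s)
    return speed, s / np.linalg.norm(s)

# Hypothetical 4 x 4 electrode grid at 0.4 mm pitch, wave moving along +x
xs, ys = np.meshgrid(np.arange(4) * 0.4, np.arange(4) * 0.4)
pos = np.column_stack([xs.ravel(), ys.ravel()])
times = 0.010 + pos @ np.array([0.002, 0.0])   # slowness 2 ms/mm -> 500 mm/s
speed, direction = wave_velocity_from_peak_times(pos, times)
```

    Swapping the least-squares solve for an iteratively reweighted (least-absolute-deviation) fit would reproduce the outlier robustness the abstract reports.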

  17. Use of geometric properties of landmark arrays for reorientation relative to remote cities and local objects.

    PubMed

    Mou, Weimin; Nankoo, Jean-François; Zhou, Ruojing; Spetch, Marcia L

    2014-03-01

    Five experiments investigated how human adults use landmark arrays in the immediate environment to reorient relative to the local environment and relative to remote cities. Participants learned targets' directions in the presence of a proximal array of 4 poles forming a rectangle and a more distal array of poles, also forming a rectangle. Participants were then disoriented and pointed to targets in the presence of either the proximal poles or the distal poles. Participants' orientation was estimated by the mean of their pointing errors across targets. The targets could be 7 objects in the immediate local environment in which the poles were located or 7 cities around Edmonton (Alberta, Canada), where the experiments occurred. The directions of the 7 cities could be learned by reading a map first and then pointing to the cities while the poles were presented. The directions of the 7 cities could also be learned by viewing labels of cities moving back and forth in the specific direction in the immediate local environment in which the poles were located. The shape of the array of distal poles varied in salience by changing the number of poles on each edge of the rectangle (2 vs. 34). The results showed that participants regained their orientation relative to local objects using the distal poles with 2 poles on each edge; participants could not reorient relative to cities using the distal pole array with 2 poles on each edge but could do so using the array with 34 poles on each edge. These results indicate that the use of cues in reorientation depends not only on cue salience but also on which environment people need to reorient to.

  18. Laser-Ablated Ba(0.50)Sr(0.50)TiO3/LaAlO3 Films Analyzed Statistically for Microwave Applications

    NASA Technical Reports Server (NTRS)

    Romanofsky, Robert R.

    2003-01-01

    Scanning phased-array antennas represent a highly desirable solution for futuristic near-Earth and deep space communication scenarios requiring vibration-free, rapid beam steering and enhanced reliability. The current state-of-practice in scanning phased arrays is represented by gallium arsenide (GaAs) monolithic microwave integrated circuit (MMIC) technology or ferrite phase shifters. Cost and weight are significant impediments to space applications. Moreover, conventional manifold-fed arrays suffer from beam-forming loss that places considerable burden on MMIC amplifiers. The inefficiency can result in severe thermal management problems.

  19. Accuracy and Resolution Analysis of a Direct Resistive Sensor Array to FPGA Interface

    PubMed Central

    Oballe-Peinado, Óscar; Vidal-Verdú, Fernando; Sánchez-Durán, José A.; Castellanos-Ramos, Julián; Hidalgo-López, José A.

    2016-01-01

    Resistive sensor arrays are formed by a large number of individual sensors distributed in different ways. This paper proposes a direct connection between an FPGA and a resistive array distributed in M rows and N columns, without the need for analog-to-digital converters to obtain the resistance values in the sensor, and where the conditioning circuit is reduced to a capacitor in each of the columns of the matrix. The circuit allows parallel measurement of the N resistors that form each row of the array, eliminating the resistive crosstalk which is typical of these circuits. This is achieved by an addressing technique that requires no elements external to the FPGA. Although the typical resistive crosstalk between resistors measured simultaneously is eliminated, other elements that affect the measurement of discharge times appear in the proposed architecture and therefore contribute to the uncertainty in resistance measurements; these elements need to be studied. Finally, the performance of different calibration techniques is assessed experimentally on a discrete resistor array, obtaining, for a new calibration model, a maximum relative error of 0.066% over a range of resistor values corresponding to a tactile sensor. PMID:26840321

  20. Accuracy and Resolution Analysis of a Direct Resistive Sensor Array to FPGA Interface.

    PubMed

    Oballe-Peinado, Óscar; Vidal-Verdú, Fernando; Sánchez-Durán, José A; Castellanos-Ramos, Julián; Hidalgo-López, José A

    2016-02-01

    Resistive sensor arrays are formed by a large number of individual sensors distributed in different ways. This paper proposes a direct connection between an FPGA and a resistive array distributed in M rows and N columns, without the need for analog-to-digital converters to obtain the resistance values in the sensor, and where the conditioning circuit is reduced to a capacitor in each of the columns of the matrix. The circuit allows parallel measurement of the N resistors that form each row of the array, eliminating the resistive crosstalk which is typical of these circuits. This is achieved by an addressing technique that requires no elements external to the FPGA. Although the typical resistive crosstalk between resistors measured simultaneously is eliminated, other elements that affect the measurement of discharge times appear in the proposed architecture and therefore contribute to the uncertainty in resistance measurements; these elements need to be studied. Finally, the performance of different calibration techniques is assessed experimentally on a discrete resistor array, obtaining, for a new calibration model, a maximum relative error of 0.066% over a range of resistor values corresponding to a tactile sensor.
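    Because each column's conditioning circuit is just a capacitor, each resistance is inferred from how long the RC pair takes to discharge to the FPGA input threshold rather than from an ADC code. A hedged sketch of that inversion, assuming a simple exponential discharge (component values are illustrative, not from the paper):

```python
import math

def resistance_from_discharge(t_discharge, c_farads, v0, v_threshold):
    """Invert the RC discharge law v(t) = v0 * exp(-t / (R*C)) to recover
    R from the time taken to fall to the logic input threshold."""
    return t_discharge / (c_farads * math.log(v0 / v_threshold))

# Illustrative values: 100 nF column capacitor, 3.3 V supply, threshold at
# half the supply, and a 1 ms measured discharge time.
r_ohms = resistance_from_discharge(1e-3, 100e-9, 3.3, 1.65)   # ~14.4 kOhm
```

    Calibration models like those compared in the paper then correct for the additional circuit elements that perturb the measured discharge time.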

  1. Javascript Library for Developing Interactive Micro-Level Animations for Teaching and Learning Algorithms on One-Dimensional Arrays

    ERIC Educational Resources Information Center

    Végh, Ladislav

    2016-01-01

    The first data structure that first-year undergraduate students learn during the programming and algorithms courses is the one-dimensional array. For novice programmers, it might be hard to understand different algorithms on arrays (e.g. searching, mirroring, sorting algorithms), because the algorithms dynamically change the values of elements. In…

  2. Optimization of Microphone Locations for Acoustic Liner Impedance Eduction

    NASA Technical Reports Server (NTRS)

    Jones, M. G.; Watson, W. R.; June, J. C.

    2015-01-01

    Two impedance eduction methods are explored for use with data acquired in the NASA Langley Grazing Flow Impedance Tube. The first is an indirect method based on the convected Helmholtz equation, and the second is a direct method based on the Kumaresan and Tufts algorithm. Synthesized no-flow data, with random jitter to represent measurement error, are used to evaluate a number of possible microphone locations. Statistical approaches are used to evaluate the suitability of each set of microphone locations. Given the computational resources required, small sample statistics are employed for the indirect method. Since the direct method is much less computationally intensive, a Monte Carlo approach is employed to gather its statistics. A comparison of results achieved with full and reduced sets of microphone locations is used to determine which sets of microphone locations are acceptable. For the indirect method, each array that includes microphones in all three regions (upstream and downstream hard wall sections, and liner test section) provides acceptable results, even when as few as eight microphones are employed. The best arrays employ microphones well away from the leading and trailing edges of the liner. The direct method is constrained to use microphones opposite the liner. Although a number of arrays are acceptable, the optimum set employs 14 microphones positioned well away from the leading and trailing edges of the liner. The selected sets of microphone locations are also evaluated with data measured for ceramic tubular and perforate-over-honeycomb liners at three flow conditions (Mach 0.0, 0.3, and 0.5). They compare favorably with results attained using all 53 microphone locations. Although different optimum microphone locations are selected for the two impedance eduction methods, there is significant overlap. Thus, the union of these two microphone arrays is preferred, as it supports usage of both methods. 
This array contains 3 microphones in the upstream hard wall section, 14 microphones opposite the liner, and 3 microphones in the downstream hard wall section.

  3. Imaging through turbulence using a plenoptic sensor

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.

    2015-09-01

    Atmospheric turbulence can significantly affect imaging through paths near the ground. Atmospheric turbulence is generally treated as a time-varying inhomogeneity of the refractive index of the air, which disrupts the propagation of optical signals from the object to the viewer. Under conditions of deep or strong turbulence, the object is hard to recognize through direct imaging. Conventional imaging methods cannot handle these problems efficiently: the time required for lucky imaging increases significantly, and image processing approaches require much more complex, iterative de-blurring algorithms. We propose an alternative approach using a plenoptic sensor to resample and analyze the image distortions. The plenoptic sensor uses a shared objective lens and a microlens array to form a mini Keplerian telescope array. The image obtained by a conventional method is thereby separated into an array of images that contain multiple copies of the object's image with less correlated turbulence disturbances. A high-dimensional lucky imaging algorithm can then be applied to the video collected by the plenoptic sensor. The algorithm selects the most stable pixels from the various image cells and reconstructs the object's image as if only a weak turbulence effect were present. By comparing the reconstructed image with the recorded images in each MLA cell, the differences can be attributed to turbulence. As a result, retrieval of the object's image and extraction of the turbulence effect can be performed simultaneously.
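    One simple reading of "select the most stable pixels" is a per-pixel minimum of temporal variance across the co-registered image copies behind each microlens. The sketch below illustrates that notion only; the array shapes are hypothetical and this is not the authors' actual high-dimensional lucky imaging algorithm:

```python
import numpy as np

def reconstruct_stable(cells):
    """cells: (n_cells, n_frames, H, W) co-registered image copies from
    the microlens array.  For each pixel, keep the cell whose time series
    is most stable (lowest temporal variance) and return its temporal
    mean as the reconstructed image."""
    var = cells.var(axis=1)                  # (n_cells, H, W)
    best = var.argmin(axis=0)                # (H, W) index of steadiest cell
    mean = cells.mean(axis=1)                # (n_cells, H, W)
    return np.take_along_axis(mean, best[None], axis=0)[0]
```

    Subtracting the reconstruction from each cell's recording would then isolate the per-cell turbulence disturbance, mirroring the comparison step described above.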

  4. Ka-Band MMIC Subarray Technology Program (Ka-Mist)

    NASA Technical Reports Server (NTRS)

    Pottinger, W.

    1995-01-01

    Ka-band monolithic microwave integrated circuit (MMIC) arrays have been considered as having high potential for increasing the capability of space, aircraft, and land mobile communication systems in terms of scan performance, data rate, link margin, and flexibility, while offering a significant reduction in size, weight, and power consumption. Insertion of MMIC technology into antenna systems, particularly at millimeter-wave frequencies using low-power and low-noise amplifiers in close proximity to the radiating elements, offers a significant improvement in array transmit efficiency, receive system noise figure, and overall array reliability. Application of active array technology also leads to the use of advanced beamforming techniques that can improve beam agility, diversity, and adaptivity to complex signal environments. The objective of this program was to demonstrate the technical feasibility of the 'tile' array packaging architecture at EHF via the insertion of 1990 MMIC technology into a functional tile array or subarray module. The means of testing this objective was to demonstrate and deliver to NASA a minimum of two 4 x 4 (16 radiating element) subarray modules operating in a transmit mode at 29.6 GHz. Available (1990) MMIC technology was chosen to focus the program effort on the novel interconnect schemes and packaging requirements rather than on MMIC development. Major technical achievements of this program include the successful integration of two 4 x 4 subarray modules into a single antenna array. This 32-element array demonstrates a transmit EIRP of over 300 watts, yielding an effective directive power gain in excess of 55 dB at 29.63 GHz. The array has been actively used as the transmit link in airborne/terrestrial mobile communication experiments accomplished via the ACTS satellite launched in August 1993.

  5. The Value of a Well-Being Improvement Strategy

    PubMed Central

    Guo, Xiaobo; Coberley, Carter; Pope, James E.; Wells, Aaron

    2015-01-01

    Objective: The objective of this study is to evaluate effectiveness of a firm's 5-year strategy toward improving well-being while lowering health care costs amidst adoption of a Consumer-Driven Health Plan. Methods: Repeated measures statistical models were employed to test and quantify association between key demographic factors, employment type, year, individual well-being, and outcomes of health care costs, obesity, smoking, absence, and performance. Results: Average individual well-being trended upward by 13.5% over 5 years, monthly allowed amount health care costs declined 5.2% on average per person per year, and obesity and smoking rates declined by 4.8 and 9.7%, respectively, on average each year. The results show that individual well-being was significantly associated with each outcome and in the expected direction. Conclusions: The firm's strategy was successful in driving statistically significant, longitudinal well-being, biometric and productivity improvements, and health care cost reduction. PMID:26461860

  6. Promoting Statistical Thinking in Schools with Road Injury Data

    ERIC Educational Resources Information Center

    Woltman, Marie

    2017-01-01

    Road injury is an immediately relevant topic for 9-19 year olds. Current availability of Open Data makes it increasingly possible to find locally relevant data. Statistical lessons developed from these data can mutually reinforce life lessons about minimizing risk on the road. Devon County Council demonstrate how a wide array of statistical…

  7. A Matched Field Processing Framework for Coherent Detection Over Local and Regional Networks (Postprint)

    DTIC Science & Technology

    2011-12-30

    the term " superresolution "). The single-phase matched field statistic for a given template was also demonstrated to be a viable detection statistic... Superresolution with seismic arrays using empirical matched field processing, Geophys. J. Int. 182: 1455–1477. Kim, K.-H. and Park, Y. (2010): The 20

  8. Women, Work and Child Care.

    ERIC Educational Resources Information Center

    Mercer, Elizabeth

    This fact sheet provides an array of statistical data on working mothers, such as the need for child care, the child care providers, who supports child care, and work and family. Data sources include a number of federal government and private organizations. Among the statistics highlighted are the following: (1) in 1988, 65 percent of all women…

  9. A Prospective, Multicenter, Single-Blind Study Assessing Indices of SNAP II Versus BIS VISTA on Surgical Patients Undergoing General Anesthesia.

    PubMed

    Bergese, Sergio D; Uribe, Alberto A; Puente, Erika G; Marcus, R-Jay L; Krohn, Randall J; Docsa, Steven; Soto, Roy G; Candiotti, Keith A

    2017-02-03

    Traditionally, anesthesiologists have relied on nonspecific subjective and objective physical signs to assess patients' comfort level and depth of anesthesia. Commercial development of electrical monitors, which use low- and high-frequency electroencephalogram (EEG) signals, have been developed to enhance the assessment of patients' level of consciousness. Multiple studies have shown that monitoring patients' consciousness levels can help in reducing drug consumption, anesthesia-related adverse events, and recovery time. This clinical study will provide information by simultaneously comparing the performance of the SNAP II (a single-channel EEG device) and the bispectral index (BIS) VISTA (a dual-channel EEG device) by assessing their efficacy in monitoring different anesthetic states in patients undergoing general anesthesia. The primary objective of this study is to establish the range of index values for the SNAP II corresponding to each anesthetic state (preinduction, loss of response, maintenance, first purposeful response, and extubation). The secondary objectives will assess the range of index values for BIS VISTA corresponding to each anesthetic state compared to published BIS VISTA range information, and estimate the area under the curve, sensitivity, and specificity for both devices. This is a multicenter, prospective, double-arm, parallel assignment, single-blind study involving patients undergoing elective surgery that requires general anesthesia. The study will include 40 patients and will be conducted at the following sites: The Ohio State University Medical Center (Columbus, OH); Northwestern University Prentice Women's Hospital (Chicago, IL); and University of Miami Jackson Memorial Hospital (Miami, FL). The study will assess the predictive value of SNAP II versus BIS VISTA indices at various anesthetic states in patients undergoing general anesthesia (preinduction, loss of response, maintenance, first purposeful response, and extubation). 
The SNAP II and BIS VISTA electrode arrays will be placed on the patient's forehead on opposite sides. The hemisphere location for both devices' electrodes will be equally alternated among the patient population. The index values for both devices will be recorded and correlated with the scores obtained from the Modified Observer's Assessment of Alertness and Sedation and the American Society of Anesthesiologists Continuum of Depth of Sedation at different stages of anesthesia. Enrollment for this study has been completed and statistical data analyses are currently underway. The results of this trial will provide information that simultaneously compares the performance of the SNAP II and BIS VISTA devices with regard to monitoring different anesthetic states. Clinicaltrials.gov NCT00829803; https://clinicaltrials.gov/ct2/show/NCT00829803 (Archived by WebCite at http://www.webcitation.org/6nmyi8YKO). ©Sergio D Bergese, Alberto A Uribe, Erika G Puente, R-Jay L Marcus, Randall J Krohn, Steven Docsa, Roy G Soto, Keith A Candiotti. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 03.02.2017.

  10. A dynamical approach in exploring the unknown mass in the Solar system using pulsar timing arrays

    NASA Astrophysics Data System (ADS)

    Guo, Y. J.; Lee, K. J.; Caballero, R. N.

    2018-04-01

    The error in the Solar system ephemeris will lead to dipolar correlations in the residuals of pulsar timing arrays for widely separated pulsars. In this paper, we utilize such correlated signals and construct a Bayesian data-analysis framework to detect unknown masses in the Solar system and to measure their orbital parameters. The algorithm is designed to calculate the waveform of the induced pulsar-timing residuals due to unmodelled objects following Keplerian orbits in the Solar system. The algorithm incorporates a Bayesian-analysis suite used to simultaneously analyse the pulsar-timing data of multiple pulsars to search for coherent waveforms, evaluate the detection significance of unknown objects, and measure their parameters. When an object is not detectable, our algorithm can be used to place upper limits on its mass. The algorithm is verified using simulated data sets and cross-checked with analytical calculations. We also investigate the capability of future pulsar-timing-array experiments to detect unknown objects. We expect that future pulsar-timing data can limit unknown massive objects in the Solar system to be lighter than 10⁻¹¹–10⁻¹² M⊙, or measure the mass of the Jovian system to a fractional precision of 10⁻⁸–10⁻⁹.
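    The quoted mass limits can be sanity-checked with a back-of-envelope estimate: an unmodelled mass m at heliocentric distance a shifts the true barycentre away from the assumed one by roughly (m/M_total)·a, and that displacement appears in timing as a light-travel-time signal. The sketch below is an order-of-magnitude check only, not the paper's Bayesian framework:

```python
AU_M = 1.495978707e11      # astronomical unit, metres
C_M_S = 2.99792458e8       # speed of light, m/s

def residual_amplitude_ns(mass_msun, orbit_au, total_msun=1.0):
    """Barycentre displacement induced by an unmodelled mass on a circular
    orbit, expressed as the light-travel-time residual it imprints (ns)."""
    offset_m = (mass_msun / total_msun) * orbit_au * AU_M
    return offset_m / C_M_S * 1e9

# A 1e-11 solar-mass object at a Jupiter-like distance (~5.2 au) displaces
# the barycentre by metres, i.e. a tens-of-nanoseconds timing signal --
# comparable to the precision of current pulsar-timing data.
amp_ns = residual_amplitude_ns(1e-11, 5.2)
```

    That such a mass sits right at the edge of current timing precision is consistent with the 10⁻¹¹–10⁻¹² M⊙ limits the abstract projects.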

  11. PHOTOMETRY OF VARIABLE STARS FROM DOME A, ANTARCTICA: RESULTS FROM THE 2010 OBSERVING SEASON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lingzhi; Zhu, Zonghong; Macri, Lucas M.

    We present results from a season of observations with the Chinese Small Telescope ARray, obtained over 183 days of the 2010 Antarctic winter. We carried out high-cadence time-series aperture photometry of 9125 stars with i ≲ 15.3 mag located in a 23 deg² region centered on the south celestial pole. We identified 188 variable stars, including 67 new objects relative to our 2008 observations, thanks to broader synoptic coverage, a deeper magnitude limit, and a larger field of view. We used the photometric data set to derive site statistics for Dome A. Based on two years of observations, we find that extinction due to clouds at this site is less than 0.1 and 0.4 mag during 45% and 75% of the dark time, respectively.

  12. Speckle-metric-optimization-based adaptive optics for laser beam projection and coherent beam combining.

    PubMed

    Vorontsov, Mikhail; Weyrauch, Thomas; Lachinova, Svetlana; Gatz, Micah; Carhart, Gary

    2012-07-15

    Maximization of a projected laser beam's power density at a remotely located extended object (speckle target) can be achieved by using an adaptive optics (AO) technique based on sensing and optimization of the target-return speckle field's statistical characteristics, referred to here as speckle metrics (SM). SM AO was demonstrated in a target-in-the-loop coherent beam combining experiment using a bistatic laser beam projection system composed of a coherent fiber-array transmitter and a power-in-the-bucket receiver. SM sensing utilized a 50 MHz rate dithering of the projected beam that provided a stair-mode approximation of the outgoing combined beam's wavefront tip and tilt with subaperture piston phases. Fiber-integrated phase shifters were used for both the dithering and SM optimization with stochastic parallel gradient descent control.
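    The stochastic parallel gradient descent (SPGD) control law behind SM optimization can be illustrated with a toy metric. Everything below is invented for illustration (channel count, gains, and a stand-in metric that peaks when all piston phases align); the real experiment optimized a measured speckle metric through fiber-integrated phase shifters:

```python
import numpy as np

def spgd_maximize(metric, n_channels, gain=0.3, perturb=0.1, iters=2000, seed=1):
    """Stochastic parallel gradient descent: dither every control channel
    simultaneously with a random +/- perturbation, measure the metric
    difference, and step all channels in the direction that improves it."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-1.0, 1.0, n_channels)   # arbitrary starting phases
    for _ in range(iters):
        d = perturb * rng.choice([-1.0, 1.0], size=n_channels)
        dJ = metric(u + d) - metric(u - d)
        u += gain * dJ * d                   # ascend the measured gradient
    return u

# Toy stand-in for power-in-the-bucket: maximal when all piston phases
# are aligned (u_i = 0 mod 2*pi).
metric = lambda u: float(np.sum(np.cos(u)))
phases = spgd_maximize(metric, n_channels=8)
```

    The appeal of SPGD, as in the experiment above, is that one scalar metric measurement per dither suffices regardless of the number of control channels.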

  13. Evaluation of a miniature microscope objective designed for fluorescence array microscopy detection of Mycobacterium tuberculosis.

    PubMed

    McCall, Brian; Olsen, Randall J; Nelles, Nicole J; Williams, Dawn L; Jackson, Kevin; Richards-Kortum, Rebecca; Graviss, Edward A; Tkaczyk, Tomasz S

    2014-03-01

    A prototype miniature objective, designed for a point-of-care diagnostic array microscope for detection of Mycobacterium tuberculosis and previously fabricated and presented as a proof of concept, is evaluated for its effectiveness in detecting acid-fast bacteria. The goals are to evaluate the ability of the microscope to resolve submicron features and details in images of acid-fast microorganisms stained with a fluorescent dye, and to evaluate the accuracy of clinical diagnoses made with digital images acquired with the objective. The lens prescription data for the microscope design are presented. A test platform is built by combining parts of a standard microscope, the prototype objective, and a digital single-lens reflex camera. Counts of acid-fast bacteria made with the prototype objective are compared to counts obtained with a standard microscope over matched fields of view. Two sets of 20 smears, positive and negative, are diagnosed by 2 pathologists as sputum smear positive or sputum smear negative, using both a standard clinical microscope and the prototype objective under evaluation. The results are compared to a reference diagnosis of the same sample. More bacteria are counted in matched fields of view in digital images taken with the prototype objective than with the standard clinical microscope. All diagnostic results are found to be highly concordant. An array microscope built with this miniature lens design will be able to detect M tuberculosis with high sensitivity and specificity.

  14. Experiment on interface separation detection of concrete-filled steel tubular arch bridge using accelerometer array

    NASA Astrophysics Data System (ADS)

    Pan, Shengshan; Zhao, Xuefeng; Zhao, Hailiang; Mao, Jian

    2015-04-01

    Based on the vibration testing principle, and taking the local vibration of the steel tube at the interface separation area as the study object, a real-time monitoring and damage detection method for the interface separation of concrete-filled steel tube, using an accelerometer array with quantitative transient self-excitation, is proposed. The accelerometers are arranged on steel tube areas with and without voids, and the signals of all accelerometers are collected simultaneously and compared under different transient excitation points. The results show that, compared with the signal from the compact area, the peak value of the accelerometer signal in the void area increases and its attenuation slows markedly; the spectrum of the void area shows more numerous and more disordered peaks with clearly increased amplitude. Whether or not the transient excitation is applied at the void area does not affect the qualitative identification results. Thus, qualitative identification of the interface separation of concrete-filled steel tube based on accelerometer signals is feasible and valid.

  15. Temperature Effect of Hydrogen-Like Impurity on the Ground State Energy of Strong Coupling Polaron in a RbCl Quantum Pseudodot

    NASA Astrophysics Data System (ADS)

    Xiao, Jing-Lin

    2016-11-01

    We study the ground state energy and the mean number of LO phonons of the strong-coupling polaron in a RbCl quantum pseudodot (QPD) with a hydrogen-like impurity at the center. The variations of the ground state energy and the mean number of LO phonons with the temperature and the strength of the Coulombic impurity potential are obtained by employing the variational method of Pekar type and quantum statistical theory (VMPTQST). Our numerical results show that (i) the absolute value of the ground state energy increases (decreases) with increasing temperature in the lower (higher) temperature regime, (ii) the mean number of LO phonons increases with increasing temperature, and (iii) the absolute value of the ground state energy and the mean number of LO phonons are increasing functions of the strength of the Coulombic impurity potential.

  16. Reliability of high-power QCW arrays

    NASA Astrophysics Data System (ADS)

    Feeler, Ryan; Junghans, Jeremy; Remley, Jennifer; Schnurbusch, Don; Stephens, Ed

    2010-02-01

    Northrop Grumman Cutting Edge Optronics has developed a family of arrays for high-power QCW operation. These arrays are built using CTE-matched heat sinks and hard solder in order to maximize the reliability of the devices. A summary of a recent life test is presented in order to quantify the reliability of QCW arrays and associated laser gain modules. A statistical analysis of the raw lifetime data is presented in order to quantify the data in such a way that is useful for laser system designers. The life tests demonstrate the high level of reliability of these arrays in a number of operating regimes. For single-bar arrays, a MTTF of 19.8 billion shots is predicted. For four-bar samples, a MTTF of 14.6 billion shots is predicted. In addition, data representing a large pump source is analyzed and shown to have an expected lifetime of 13.5 billion shots. This corresponds to an expected operational lifetime of greater than ten thousand hours at repetition rates less than 370 Hz.
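
    The lifetime figures above relate shot count to operating hours through the repetition rate; a minimal sketch of that arithmetic (the function name is ours, not from the paper):

```python
def shots_to_hours(mttf_shots, rep_rate_hz):
    """Convert a shot-count MTTF into operational hours at a fixed repetition rate."""
    seconds = mttf_shots / rep_rate_hz
    return seconds / 3600.0

# Figures from the abstract: 13.5 billion shots at 370 Hz
hours = shots_to_hours(13.5e9, 370.0)
# ~10,100 hours, consistent with "greater than ten thousand hours"
```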

  17. Development and Implementation of an Empirical Ionosphere Variability Model

    NASA Technical Reports Server (NTRS)

    Minow, Joseph I.; Almond, Deborah (Technical Monitor)

    2002-01-01

    Spacecraft designers and operations support personnel involved in space environment analysis for low Earth orbit missions require ionospheric specification and forecast models that provide not only average ionospheric plasma parameters for a given set of geophysical conditions but also the statistical variations about the mean. This presentation describes the development of a prototype empirical model intended for use with the International Reference Ionosphere (IRI) to provide ionospheric Ne and Te variability. We first describe the database of on-orbit observations from a variety of spacecraft and ground-based radars over a wide range of latitudes and altitudes used to obtain estimates of the environment variability. Next, comparison of the observations with the IRI model provides estimates of the deviations from the average model as well as the range of possible values that may correspond to a given IRI output. Options for implementing the statistical variations in software that can be run with the IRI model are described. Finally, we provide example applications, including thrust estimates for tethered satellites and specification of sunrise Ne and Te conditions required to support spacecraft charging analyses for satellites with high-voltage solar arrays.

  18. Non-parametric early seizure detection in an animal model of temporal lobe epilepsy

    NASA Astrophysics Data System (ADS)

    Talathi, Sachin S.; Hwang, Dong-Uk; Spano, Mark L.; Simonotto, Jennifer; Furman, Michael D.; Myers, Stephen M.; Winters, Jason T.; Ditto, William L.; Carney, Paul R.

    2008-03-01

    The performance of five non-parametric, univariate seizure detection schemes (embedding delay, Hurst scale, wavelet scale, nonlinear autocorrelation and variance energy) was evaluated as a function of the sampling rate of EEG recordings, the electrode types used for EEG acquisition, and the spatial location of the EEG electrodes, in order to determine the applicability of the measures to real-time closed-loop seizure intervention. The criteria chosen for evaluating performance were high statistical robustness (as determined through the sensitivity and the specificity of a given measure in detecting a seizure) and the lag in seizure detection with respect to the seizure onset time (as determined by visual inspection of the EEG signal by a trained epileptologist). An optimality index was designed to evaluate the overall performance of each measure. For EEG data recorded with a microwire electrode array at a sampling rate of 12 kHz, the wavelet scale measure exhibited the best overall performance, detecting seizures with a high optimality index value and high sensitivity and specificity.
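
    The robustness criteria named above, sensitivity and specificity, reduce to simple counts over matched detector/ground-truth epochs. A minimal illustrative sketch follows; the paper's optimality index additionally folds in detection lag, and its exact form is not reproduced here:

```python
import numpy as np

def sensitivity_specificity(detected, actual):
    """Sensitivity and specificity from boolean detection flags over epochs.
    `detected`: detector output per epoch; `actual`: ground-truth labels."""
    detected = np.asarray(detected, dtype=bool)
    actual = np.asarray(actual, dtype=bool)
    tp = np.sum(detected & actual)    # true positives
    tn = np.sum(~detected & ~actual)  # true negatives
    fp = np.sum(detected & ~actual)   # false positives
    fn = np.sum(~detected & actual)   # false negatives
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec
```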

  19. Computational investigation of hydrokinetic turbine arrays in an open channel using an actuator disk-LES model

    NASA Astrophysics Data System (ADS)

    Kang, Seokkoo; Yang, Xiaolei; Sotiropoulos, Fotis

    2012-11-01

    While a considerable amount of work has focused on studying the effects and performance of wind farms, very little is known about the performance of hydrokinetic turbine arrays in open channels. Unlike large wind farms, where the vertical fluxes of momentum and energy from the atmospheric boundary layer comprise the main transport mechanisms, the presence of the free surface in hydrokinetic turbine arrays inhibits vertical transport. To explore this fundamental difference between wind and hydrokinetic turbine arrays, we carry out LES with an actuator disk model to systematically investigate various layouts of hydrokinetic turbine arrays mounted on the bed of a straight open channel, with fully developed turbulent flow fed at the channel inlet. Mean flow quantities and turbulence statistics within and downstream of the arrays will be analyzed, and the effect of the turbine arrays as a means of increasing the effective roughness of the channel bed will be extensively discussed. This work was supported by the Initiative for Renewable Energy & the Environment (IREE) (Grant No. RO-0004-12); computational resources were provided by the Minnesota Supercomputing Institute.

  20. Color filter array pattern identification using variance of color difference image

    NASA Astrophysics Data System (ADS)

    Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu

    2017-07-01

    A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel uses only one color, since the image sensor can measure only one color per pixel. Therefore, empty pixels are filled using an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming the color filter array pattern. We present an identification method of the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, the color difference block is constructed to emphasize the difference between the original pixel and the interpolated pixel. The variance measure of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern. Compared with conventional methods, our method provides superior performance.
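
    The pipeline described above (remove the local mean, then compare the variance of the difference samples at each candidate CFA phase) can be sketched roughly as follows. This is a simplified illustration of the idea on a single channel of a demosaiced image, not the authors' exact algorithm:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cfa_phase_variances(channel, block=2):
    """Variance of the mean-removed samples at each 2x2 CFA phase offset of one
    color channel. Original (sensor-measured) positions retain more high-pass
    energy than interpolated ones, so the largest-variance phase indicates the
    likely CFA sampling positions. Simplified sketch of the idea."""
    # remove the local mean (3x3 box) to suppress scene content
    hp = channel.astype(float) - uniform_filter(channel.astype(float), size=3)
    return {(dy, dx): float(np.var(hp[dy::block, dx::block]))
            for dy in range(block) for dx in range(block)}
```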

  1. Magnetoencephalography with temporal spread imaging to visualize propagation of epileptic activity.

    PubMed

    Shibata, Sumiya; Matsuhashi, Masao; Kunieda, Takeharu; Yamao, Yukihiro; Inano, Rika; Kikuchi, Takayuki; Imamura, Hisaji; Takaya, Shigetoshi; Matsumoto, Riki; Ikeda, Akio; Takahashi, Ryosuke; Mima, Tatsuya; Fukuyama, Hidenao; Mikuni, Nobuhiro; Miyamoto, Susumu

    2017-05-01

    We describe temporal spread imaging (TSI), which can identify the spatiotemporal pattern of epileptic activity using magnetoencephalography (MEG). A three-dimensional grid of voxels covering the brain is created. The array-gain minimum-variance spatial filter is applied to an interictal spike to estimate the magnitude of the source and the time (Ta) when the magnitude exceeds a predefined threshold at each voxel. This calculation is performed over all spikes. Each voxel is assigned the mean Ta and the spike number (Nsp), the number of spikes whose source exceeds the threshold. A random resampling method is then used to determine the cutoff value of Nsp for a statistically reproducible pattern of the activity. Finally, all the voxels where the source reproducibly exceeds the threshold are shown on the MRI with a color scale representing the mean Ta. Four patients with intractable mesial temporal lobe epilepsy (MTLE) were analyzed. In three patients, a common pattern of overlap between the propagation and the hypometabolism shown by fluorodeoxyglucose-positron emission tomography (FDG-PET) was identified. TSI can visualize statistically reproducible patterns of the temporal and spatial spread of epileptic activity and can assess the statistical significance of the spatiotemporal pattern based on its reproducibility. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  2. Identification of discriminant proteins through antibody profiling, methods and apparatus for identifying an individual

    DOEpatents

    Apel, William A.; Thompson, Vicki S; Lacey, Jeffrey A.; Gentillon, Cynthia A.

    2016-08-09

    A method for determining a plurality of proteins for discriminating and positively identifying an individual based on a biological sample. The method may include profiling a biological sample from a plurality of individuals against a protein array including a plurality of proteins. The protein array may include proteins attached to a support in a preselected pattern such that locations of the proteins are known. The biological sample may be contacted with the protein array such that a portion of antibodies in the biological sample reacts with and binds to the proteins, forming immune complexes. A statistical analysis method, such as discriminant analysis, may be performed to determine discriminating proteins for distinguishing individuals. Proteins of interest may be used to form a protein array. Such a protein array may be used, for example, to compare a forensic sample from an unknown source with a sample from a known source.

  3. Identification of discriminant proteins through antibody profiling, methods and apparatus for identifying an individual

    DOEpatents

    Thompson, Vicki S; Lacey, Jeffrey A; Gentillon, Cynthia A; Apel, William A

    2015-03-03

    A method for determining a plurality of proteins for discriminating and positively identifying an individual based on a biological sample. The method may include profiling a biological sample from a plurality of individuals against a protein array including a plurality of proteins. The protein array may include proteins attached to a support in a preselected pattern such that locations of the proteins are known. The biological sample may be contacted with the protein array such that a portion of antibodies in the biological sample reacts with and binds to the proteins, forming immune complexes. A statistical analysis method, such as discriminant analysis, may be performed to determine discriminating proteins for distinguishing individuals. Proteins of interest may be used to form a protein array. Such a protein array may be used, for example, to compare a forensic sample from an unknown source with a sample from a known source.

  4. Spiked Models of Large Dimensional Random Matrices Applied to Wireless Communications and Array Signal Processing

    DTIC Science & Technology

    2013-12-14

    population covariance matrix with application to array signal processing; and 5) a sample covariance matrix for which a CLT is studied on linear... Applications, (01 2012): 1150004. doi: Walid Hachem, Malika Kharouf, Jamal Najim, Jack W. Silverstein. A CLT FOR INFORMATION-THEORETIC STATISTICS... for Multi-source Power Estimation, (04 2010). Malika Kharouf, Jamal Najim, Jack W. Silverstein, Walid Hachem. A CLT FOR INFORMATION-THEORETIC

  5. Infrared spectral imaging as a novel approach for histopathological recognition in colon cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Nallala, Jayakrupakar; Gobinet, Cyril; Diebold, Marie-Danièle; Untereiner, Valérie; Bouché, Olivier; Manfait, Michel; Sockalingum, Ganesh Dhruvananda; Piot, Olivier

    2012-11-01

    Innovative diagnostic methods that can complement conventional histopathology in cancer diagnosis are urgently needed. In this perspective, we propose a new concept based on spectral histopathology, using IR spectral micro-imaging applied directly to a paraffinized colon tissue array stabilized in an agarose matrix without any chemical pre-treatment. In order to correct spectral interferences from paraffin and agarose, a mathematical procedure is implemented. The corrected spectral images are then processed by a multivariate clustering method to automatically recover, on the basis of their intrinsic molecular composition, the main histological classes of normal and tumoral colon tissue. The spectral signatures from different histological classes of the colonic tissues are analyzed using statistical methods (the Kruskal-Wallis test and principal component analysis) to identify the most discriminant IR features. These features allow characterizing some of the biomolecular alterations associated with malignancy. Thus, via a single analysis, in a label-free and nondestructive manner, the main changes in nucleotide, carbohydrate, and collagen features can be identified simultaneously between the compared normal and cancerous tissues. The present study demonstrates the potential of IR spectral imaging as a complementary modern tool to conventional histopathology for objective cancer diagnosis directly from paraffin-embedded tissue arrays.
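
    Screening spectral variables with the Kruskal-Wallis test across histological classes can be sketched with SciPy. This is an illustrative sketch with an assumed data layout, not the study's actual pipeline:

```python
import numpy as np
from scipy.stats import kruskal

def rank_discriminant_features(spectra_by_class):
    """Rank spectral variables by their Kruskal-Wallis H statistic across
    classes. `spectra_by_class` is a list of arrays, one per histological
    class, each shaped (n_samples, n_wavenumbers). Returns feature indices
    sorted from most to least discriminant."""
    n_feat = spectra_by_class[0].shape[1]
    h = np.empty(n_feat)
    for j in range(n_feat):
        # compare the j-th spectral variable across all classes
        h[j], _ = kruskal(*[cls[:, j] for cls in spectra_by_class])
    return np.argsort(h)[::-1]
```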

  6. Use of Pattern Classification Algorithms to Interpret Passive and Active Data Streams from a Walking-Speed Robotic Sensor Platform

    NASA Astrophysics Data System (ADS)

    Dieckman, Eric Allen

    In order to perform useful tasks for us, robots must have the ability to notice, recognize, and respond to objects and events in their environment. This requires the acquisition and synthesis of information from a variety of sensors. Here we investigate the performance of a number of sensor modalities in an unstructured outdoor environment, including the Microsoft Kinect, thermal infrared camera, and coffee can radar. Special attention is given to acoustic echolocation measurements of approaching vehicles, where an acoustic parametric array propagates an audible signal to the oncoming target and the Kinect microphone array records the reflected backscattered signal. Although useful information about the target is hidden inside the noisy time domain measurements, the Dynamic Wavelet Fingerprint process (DWFP) is used to create a time-frequency representation of the data. A small-dimensional feature vector is created for each measurement using an intelligent feature selection process for use in statistical pattern classification routines. Using our experimentally measured data from real vehicles at 50 m, this process is able to correctly classify vehicles into one of five classes with 94% accuracy. Fully three-dimensional simulations allow us to study the nonlinear beam propagation and interaction with real-world targets to improve classification results.

  7. WISDOM project - I. Black hole mass measurement using molecular gas kinematics in NGC 3665

    NASA Astrophysics Data System (ADS)

    Onishi, Kyoko; Iguchi, Satoru; Davis, Timothy A.; Bureau, Martin; Cappellari, Michele; Sarzi, Marc; Blitz, Leo

    2017-07-01

    As a part of the mm-Wave Interferometric Survey of Dark Object Masses (WISDOM) project, we present an estimate of the mass of the supermassive black hole (SMBH) in the nearby fast-rotator early-type galaxy NGC 3665. We obtained the Combined Array for Research in Millimeter Astronomy (CARMA) B and C array observations of the 12CO(J = 2 - 1) emission line with a combined angular resolution of 0.59 arcsec. We analysed and modelled the three-dimensional molecular gas kinematics, obtaining a best-fitting SMBH mass M_BH=5.75^{+1.49}_{-1.18} × 108 M⊙, a mass-to-light ratio at H-band (M/L)H = 1.45 ± 0.04 (M/L)⊙,H and other parameters describing the geometry of the molecular gas disc (statistical errors, all at 3σ confidence). We estimate the systematic uncertainties on the stellar M/L to be ≈0.2 (M/L)⊙,H, and on the SMBH mass to be ≈0.4 × 108 M⊙. The measured SMBH mass is consistent with that estimated from the latest correlations with galaxy properties. Following our older works, we also analysed and modelled the kinematics using only the major-axis position-velocity diagram, and conclude that the two methods are consistent.

  8. Detecting Seismic Activity with a Covariance Matrix Analysis of Data Recorded on Seismic Arrays

    NASA Astrophysics Data System (ADS)

    Seydoux, L.; Shapiro, N.; de Rosny, J.; Brenguier, F.

    2014-12-01

    Modern seismic networks record ground motion continuously all around the world, with very broadband, high-sensitivity sensors. The aim of our study is to apply statistical array-based approaches to the processing of these records. We use methods mainly drawn from random matrix theory to give a statistical description of seismic wavefields recorded at the Earth's surface. We estimate the array covariance matrix and explore the distribution of its eigenvalues, which contains information about the coherency of the sources that generated the studied wavefields. With this approach, we can distinguish between signals generated by isolated deterministic sources and the "random" ambient noise. We design an algorithm that uses the distribution of the array covariance matrix eigenvalues to detect signals corresponding to coherent seismic events. We investigate the detection capacity of our method at different scales and in different frequency ranges by applying it to the records of two networks: (1) the seismic monitoring network operating on the Piton de la Fournaise volcano at La Réunion island, composed of 21 receivers with an aperture of ~15 km, and (2) the transportable component of the USArray, composed of ~400 receivers with ~70 km inter-station spacing.
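
    A minimal sketch of the covariance-eigenvalue idea: estimate the array covariance matrix from windowed spectra, then use the concentration of its eigenvalue spectrum as a coherence indicator. The published algorithm's actual statistic and normalization may differ:

```python
import numpy as np

def spectral_width(records, n_windows=10):
    """Estimate the array covariance matrix from windowed records and return
    the ratio of its largest eigenvalue to its trace. Near 1 => one dominant
    (coherent) source; near 1/N_stations => incoherent ambient noise."""
    n_sta, n_samp = records.shape
    win = n_samp // n_windows
    cov = np.zeros((n_sta, n_sta), dtype=complex)
    for k in range(n_windows):
        seg = records[:, k * win:(k + 1) * win]
        spec = np.fft.rfft(seg, axis=1)
        cov += spec @ spec.conj().T  # accumulate over windows and frequencies
    eig = np.linalg.eigvalsh(cov)   # real, ascending (cov is Hermitian PSD)
    return eig[-1] / eig.sum()
```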

  9. High resolution telescope including an array of elemental telescopes aligned along a common axis and supported on a space frame with a pivot at its geometric center

    DOEpatents

    Norbert, M.A.; Yale, O.

    1992-04-28

    A large effective-aperture, low-cost optical telescope with diffraction-limited resolution enables ground-based observation of near-earth space objects. The telescope has a non-redundant, thinned-aperture array in a center-mount, single-structure space frame. It employs speckle interferometric imaging to achieve diffraction-limited resolution. The signal-to-noise ratio problem is mitigated by moving the wavelength of operation to the near-IR, and the image is sensed by a silicon CCD. The steerable, single-structure array presents a constant pupil. The center-mount, radar-like mount enables low-earth-orbit space objects to be tracked and increases the stiffness of the space frame. In the preferred embodiment, the array has elemental telescopes with a subaperture of 2.1 m in a circle-of-nine configuration. The telescope array has an effective aperture of 12 m, which provides a diffraction-limited resolution of 0.02 arc seconds. Pathlength matching of the telescope array is maintained by an electro-optical system employing laser metrology. Speckle imaging relaxes the pathlength matching tolerance by one order of magnitude compared to phased arrays. Many features of the telescope contribute to a substantial reduction in cost, including eliminating the conventional protective dome and reducing on-site construction activities. The cost of the telescope scales with the first power of the aperture rather than its third power as in conventional telescopes. 15 figs.

  10. High resolution telescope including an array of elemental telescopes aligned along a common axis and supported on a space frame with a pivot at its geometric center

    DOEpatents

    Norbert, Massie A.; Yale, Oster

    1992-01-01

    A large effective-aperture, low-cost optical telescope with diffraction-limited resolution enables ground-based observation of near-earth space objects. The telescope has a non-redundant, thinned-aperture array in a center-mount, single-structure space frame. It employs speckle interferometric imaging to achieve diffraction-limited resolution. The signal-to-noise ratio problem is mitigated by moving the wavelength of operation to the near-IR, and the image is sensed by a silicon CCD. The steerable, single-structure array presents a constant pupil. The center-mount, radar-like mount enables low-earth-orbit space objects to be tracked and increases the stiffness of the space frame. In the preferred embodiment, the array has elemental telescopes with a subaperture of 2.1 m in a circle-of-nine configuration. The telescope array has an effective aperture of 12 m, which provides a diffraction-limited resolution of 0.02 arc seconds. Pathlength matching of the telescope array is maintained by an electro-optical system employing laser metrology. Speckle imaging relaxes the pathlength matching tolerance by one order of magnitude compared to phased arrays. Many features of the telescope contribute to a substantial reduction in cost, including eliminating the conventional protective dome and reducing on-site construction activities. The cost of the telescope scales with the first power of the aperture rather than its third power as in conventional telescopes.

  11. Apparatus and method for reducing inductive coupling between levitation and drive coils within a magnetic propulsion system

    DOEpatents

    Post, Richard F.

    2001-01-01

    An apparatus and method is disclosed for reducing inductive coupling between levitation and drive coils within a magnetic levitation system. A pole array has a magnetic field. A levitation coil is positioned so that in response to motion of the magnetic field of the pole array a current is induced in the levitation coil. A first drive coil having a magnetic field coupled to drive the pole array also has a magnetic flux which induces a parasitic current in the levitation coil. A second drive coil having a magnetic field is positioned to attenuate the parasitic current in the levitation coil by canceling the magnetic flux of the first drive coil which induces the parasitic current. Steps in the method include generating a magnetic field with a pole array for levitating an object; inducing current in a levitation coil in response to motion of the magnetic field of the pole array; generating a magnetic field with a first drive coil for propelling the object; and generating a magnetic field with a second drive coil for attenuating effects of the magnetic field of the first drive coil on the current in the levitation coil.

  12. Range and egomotion estimation from compound photodetector arrays with parallel optical axis using optical flow techniques.

    PubMed

    Chahl, J S

    2014-01-20

    This paper describes an application for arrays of narrow-field-of-view sensors with parallel optical axes. These devices exhibit some complementary characteristics with respect to conventional perspective projection or angular projection imaging devices. Conventional imaging devices measure rotational egomotion directly by measuring the angular velocity of the projected image. Translational egomotion cannot be measured directly by these devices because the induced image motion depends on the unknown range of the viewed object. On the other hand, a known translational motion generates image velocities which can be used to recover the ranges of objects and hence the three-dimensional (3D) structure of the environment. A new method is presented for computing egomotion and range using the properties of linear arrays of independent narrow-field-of-view optical sensors. An approximate parallel projection can be used to measure translational egomotion in terms of the velocity of the image. On the other hand, a known rotational motion of the paraxial sensor array generates image velocities, which can be used to recover the 3D structure of the environment. Results of tests of an experimental array confirm these properties.
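
    The duality described above, translation flow giving egomotion directly under parallel projection while a known rotation makes flow proportional to range, can be reduced to a first-order sketch. The function names are ours, and the actual device geometry is not reproduced:

```python
import numpy as np

def egomotion_from_translation_flow(image_velocities):
    """Under the parallel-projection approximation of a paraxial narrow-FOV
    array, image velocity during pure translation equals the translation speed
    regardless of range, so egomotion is a robust average of the flow."""
    return np.median(image_velocities)

def ranges_from_rotation_flow(image_velocities, omega):
    """Conversely, during a known pure rotation at angular rate omega (rad/s),
    the apparent image velocity of each feature grows with its range; to first
    order, range ~ image_velocity / omega."""
    return np.asarray(image_velocities, dtype=float) / omega
```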

  13. A neurophysiological explanation for biases in visual localization.

    PubMed

    Moreland, James C; Boynton, Geoffrey M

    2017-02-01

    Observers show small but systematic deviations from equal weighting of all elements when asked to localize the center of an array of dots. Counter-intuitively, with small numbers of dots drawn from a Gaussian distribution, this bias results in subjects overweighting the influence of outlier dots - inconsistent with traditional statistical estimators of central tendency. Here we show that this apparent statistical anomaly can be explained by the observation that outlier dots also lie in regions of lower dot density. Using a standard model of V1 processing, which includes spatial integration followed by a compressive static nonlinearity, we can successfully predict the finding that dots in less dense regions of an array have a relatively greater influence on the perceived center.
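
    The density-weighting account can be illustrated with a toy model: Gaussian pooling estimates local dot density, a compressive nonlinearity makes the per-dot weight fall with density, and the predicted center shifts toward outliers. The pooling width and exponent here are assumptions for illustration, not fitted values from the paper:

```python
import numpy as np

def perceived_center(dots, sigma=1.0, exponent=0.5):
    """Predict the perceived centroid of a dot array under a simple V1-like
    model: Gaussian spatial pooling followed by a compressive nonlinearity.
    Each dot's weight is density**(exponent - 1), which decreases with local
    density, so dots in sparse regions (outliers) carry relatively more weight."""
    dots = np.asarray(dots, dtype=float)
    # local dot density at each dot via Gaussian pooling over all dots
    d2 = np.sum((dots[:, None, :] - dots[None, :, :]) ** 2, axis=-1)
    density = np.exp(-d2 / (2 * sigma**2)).sum(axis=1)
    # compressed per-dot response divided by dot count at that density
    weights = density**exponent / density
    return (weights[:, None] * dots).sum(axis=0) / weights.sum()
```

With five dots clustered at the origin and one outlier at x = 10, the arithmetic mean lies at x = 10/6, but the compressed weighting pulls the predicted center noticeably farther toward the outlier.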

  14. A proposal to classify happiness as a psychiatric disorder.

    PubMed Central

    Bentall, R P

    1992-01-01

    It is proposed that happiness be classified as a psychiatric disorder and be included in future editions of the major diagnostic manuals under the new name: major affective disorder, pleasant type. In a review of the relevant literature it is shown that happiness is statistically abnormal, consists of a discrete cluster of symptoms, is associated with a range of cognitive abnormalities, and probably reflects the abnormal functioning of the central nervous system. One possible objection to this proposal remains--that happiness is not negatively valued. However, this objection is dismissed as scientifically irrelevant. PMID:1619629

  15. Correlation between ERMI values and other Moisture and Mold Assessments of Homes in the American Healthy Home Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vesper, Stephen J.; McKinstry, Craig A.; Cox, David J.

    2009-11-30

    Objective: The objective of this study was to determine the correlation between ERMI values in the HUD American Healthy Home Survey (AHHS) homes and either inspector reports or occupant assessments of mold and moisture. Methods: In the AHHS, moisture and mold were assessed by a pair of inspectors and with an occupant questionnaire. These results were compared to the Environmental Relative Moldiness Index (ERMI) values for each home. Results: Homes in the highest ERMI quartile were most often in agreement with visual inspection and/or occupant assessment. However, in 52% of the fourth-quartile ERMI homes, the inspector and occupant assessments did not indicate water or mold problems. Yet the concentrations of each ERMI panel mold species detected in all fourth-quartile homes were statistically indistinguishable. Conclusions: About 50% of water-damaged, moldy homes were not detected by inspection or by questioning the occupants about water and mold.

  16. High Voltage Dielectrophoretic and Magnetophoretic Hybrid Integrated Circuit / Microfluidic Chip

    PubMed Central

    Issadore, David; Franke, Thomas; Brown, Keith A.; Hunt, Thomas P.; Westervelt, Robert M.

    2010-01-01

    A hybrid integrated circuit (IC) / microfluidic chip is presented that independently and simultaneously traps and moves microscopic objects suspended in fluid using both electric and magnetic fields. This hybrid chip controls the location of dielectric objects, such as living cells and drops of fluid, on a 60 × 61 array of pixels that are 30 × 38 μm2 in size, each of which can be individually addressed with a 50 V peak-to-peak, DC to 10 MHz radio frequency voltage. These high voltage pixels produce strong electric fields above the chip’s surface, resulting in strong dielectrophoresis (DEP) forces. Underneath the array of DEP pixels there is a magnetic matrix that consists of two perpendicular sets of 60 metal wires running across the chip. Each wire can be sourced with 120 mA to trap and move magnetically susceptible objects using magnetophoresis (MP). The DEP pixel array and magnetic matrix can be used simultaneously to apply forces to microscopic objects, such as living cells or lipid vesicles, that are tagged with magnetic nanoparticles. The capabilities of the hybrid IC / microfluidic chip demonstrated in this paper provide important building blocks for a platform for biological and chemical applications. PMID:20625468

  17. High Voltage Dielectrophoretic and Magnetophoretic Hybrid Integrated Circuit / Microfluidic Chip.

    PubMed

    Issadore, David; Franke, Thomas; Brown, Keith A; Hunt, Thomas P; Westervelt, Robert M

    2009-12-01

    A hybrid integrated circuit (IC) / microfluidic chip is presented that independently and simultaneously traps and moves microscopic objects suspended in fluid using both electric and magnetic fields. This hybrid chip controls the location of dielectric objects, such as living cells and drops of fluid, on a 60 × 61 array of pixels that are 30 × 38 μm2 in size, each of which can be individually addressed with a 50 V peak-to-peak, DC to 10 MHz radio frequency voltage. These high voltage pixels produce strong electric fields above the chip's surface, resulting in strong dielectrophoresis (DEP) forces. Underneath the array of DEP pixels there is a magnetic matrix that consists of two perpendicular sets of 60 metal wires running across the chip. Each wire can be sourced with 120 mA to trap and move magnetically susceptible objects using magnetophoresis (MP). The DEP pixel array and magnetic matrix can be used simultaneously to apply forces to microscopic objects, such as living cells or lipid vesicles, that are tagged with magnetic nanoparticles. The capabilities of the hybrid IC / microfluidic chip demonstrated in this paper provide important building blocks for a platform for biological and chemical applications.

  18. Physical studies of Centaurs and Trans-Neptunian Objects with the Atacama Large Millimeter Array

    NASA Astrophysics Data System (ADS)

    Moullet, Arielle; Lellouch, Emmanuel; Moreno, Raphael; Gurwell, Mark

    2011-05-01

    Once completed, the Atacama Large Millimeter Array (ALMA) will be the most powerful (sub)millimeter interferometer in terms of sensitivity, spatial resolution, and imaging. This paper presents the capabilities of ALMA applied to the observation of Centaurs and Trans-Neptunian Objects, and their possible output in terms of physical properties. Realistic simulations were performed to explore the performance of the different frequency bands and array configurations, and several projects are detailed along with their feasibility, their limitations, and their possible targets. Determination of diameters and albedos via the radiometric method appears to be possible for ˜500 objects, while sampling of the thermal lightcurve to derive the bodies' ellipticity could be performed on at least 30 bodies that display a significant optical lightcurve. On a limited number of objects, the spatial resolution allows for direct measurement of the size or even surface mapping with a resolution down to 13 milliarcsec. Finally, ALMA could separate members of multiple systems with a separation power comparable to that of the HST. The overall performance of ALMA will make it an invaluable instrument for exploring the outer Solar System, complementary to space-based telescopes and spacecraft.

  19. Enhancement of photoacoustic tomography by ultrasonic computed tomography based on optical excitation of elements of a full-ring transducer array.

    PubMed

    Xia, Jun; Huang, Chao; Maslov, Konstantin; Anastasio, Mark A; Wang, Lihong V

    2013-08-15

    Photoacoustic computed tomography (PACT) is a hybrid technique that combines optical excitation and ultrasonic detection to provide high-resolution images in deep tissues. In the image reconstruction, a constant speed of sound (SOS) is normally assumed. This assumption, however, is often not strictly satisfied in deep tissue imaging, due to acoustic heterogeneities within the object and between the object and the coupling medium. If these heterogeneities are not accounted for, they will cause distortions and artifacts in the reconstructed images. In this Letter, we incorporated ultrasonic computed tomography (USCT), which measures the SOS distribution within the object, into our full-ring array PACT system. Without the need for ultrasonic transmitting electronics, USCT was performed using the same laser beam as for PACT measurement. By scanning the laser beam on the array surface, we can sequentially fire different elements. As a first demonstration of the system, we studied the effect of acoustic heterogeneities on photoacoustic vascular imaging. We verified that constant SOS is a reasonable approximation when the SOS variation is small. When the variation is large, distortion will be observed in the periphery of the object, especially in the tangential direction.

  20. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size needed to closely estimate the statistics for particular parameters has long been an issue. Although a sample size may have been calculated with reference to the objective of a study, it is difficult to confirm whether the resulting statistics are close to the parameters for a particular population. Until now, a p-value of less than 0.05 has been widely used as inferential evidence. This study therefore audited results analyzed from various subsamples and statistical analyses and compared them with the parameters in three different populations. Eight types of statistical analysis, with eight subsamples for each analysis, were examined. The statistics were consistent and close to the parameters when the study sample covered at least 15% to 35% of the population. A larger sample size is needed to estimate parameters involving categorical variables than those involving numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.
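    The audit idea can be sketched as follows: draw subsamples covering increasing fractions of a known population and check how closely the sample statistic tracks the population parameter. A minimal illustration with a hypothetical numerical population and the mean as the statistic (the study used real records and eight types of analysis):

    ```python
    import random
    import statistics

    random.seed(42)
    # Hypothetical population of 10,000 numerical values.
    population = [random.gauss(50, 10) for _ in range(10_000)]
    parameter = statistics.mean(population)  # the "true" value being estimated

    # Audit: how close is the subsample statistic at each coverage fraction?
    for fraction in (0.05, 0.15, 0.35):
        subsample = random.sample(population, int(fraction * len(population)))
        estimate = statistics.mean(subsample)
        print(f"{fraction:>4.0%} coverage: estimate {estimate:.2f} "
              f"vs parameter {parameter:.2f}")
    ```

    Repeating such draws across statistics and populations is essentially what the audit above does at scale.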

  1. Feasibility study of an optically coherent telescope array in space

    NASA Technical Reports Server (NTRS)

    Traub, W. A.

    1983-01-01

    Numerical methods of image construction which can be used to produce very high angular resolution images at optical wavelengths of astronomical objects from an orbiting array of telescopes are discussed, and a concept is presented for a phase-coherent optical telescope array which may be deployed by the space shuttle in the 1990s. The system would start as a four-element linear array with a 12 m baseline. The initial module is a minimum redundant array with a photon-counting collecting area three times larger than that of the Space Telescope and a one-dimensional resolution of better than 0.01 arc seconds in the visible range. Later additions to the array would build up facility capability. The advantages of a VLBI observatory in space are considered, as well as apertures for the telescopes.

  2. Enhancement and Validation of an Arab Surname Database

    PubMed Central

    Schwartz, Kendra; Beebani, Ganj; Sedki, Mai; Tahhan, Mamon; Ruterbusch, Julie J.

    2015-01-01

    Objectives Arab Americans constitute a large, heterogeneous, and quickly growing subpopulation in the United States. Health statistics for this group are difficult to find because US governmental offices do not recognize Arab as separate from white. The development and validation of an Arab- and Chaldean-American name database will enhance research efforts in this population subgroup. Methods A previously validated name database was supplemented with newly identified names gathered primarily from vital statistics records and then evaluated using a multistep process. This process included 1) review by 4 Arabic- and Chaldean-speaking reviewers, 2) ethnicity assessment by social media searches, and 3) self-report of ancestry obtained from a telephone survey. Results Our Arab- and Chaldean-American name algorithm has a positive predictive value of 91% and a negative predictive value of 100%. Conclusions This enhanced name database and algorithm can be used to identify Arab Americans in health statistics data, such as cancer and hospital registries, where they are often coded as white, to determine the extent of health disparities in this population. PMID:24625771

  3. TPF-I Emma X-Array: 2007 Design Team Study

    NASA Technical Reports Server (NTRS)

    Martin, Stefan R.; Rodriguez, Jose; Scharf, Dan; Smith, Jim; McKinstry, David; Wirz, Richie; Purcell, George; Wayne, Len; Scherr, Larry; Mennesson, Bertrand; hide

    2007-01-01

    This viewgraph presentation is a study of an Emma design for Terrestrial Planet Finder (TPF) formation flying interferometer. The objective is to develop a design with reduced cost compared to TPF-I X-Array, derive mass and cost estimates, and study thermal and radiation issues.

  4. Optimal design of low-density SNP arrays for genomic prediction: algorithm and applications

    USDA-ARS?s Scientific Manuscript database

    Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for their optimal design. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optim...

  5. Area Array Technology Evaluations for Space and Military Applications

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    1996-01-01

    The Jet Propulsion Laboratory (JPL) is currently assessing the use of Area Array Packaging (AAP) for National Aeronautics and Space Administration (NASA) spaceflight applications. This work is being funded through NASA Headquarters, Code Q. The paper discusses the background, objectives, and uses of AAP.

  6. Balancing emotion and cognition: a case for decision aiding in conservation efforts.

    PubMed

    Wilson, Robyn S

    2008-12-01

    Despite advances in the quality of participatory decision making for conservation, many current efforts still suffer from an inability to bridge the gap between science and policy. Judgment and decision-making research suggests this gap may result from a person's reliance on affect-based shortcuts in complex decision contexts. I examined the results from 3 experiments that demonstrate how affect (i.e., the instantaneous reaction one has to a stimulus) influences individual judgments in these contexts and identified techniques from the decision-aiding literature that help encourage a balance between affect-based emotion and cognition in complex decision processes. In the first study, subjects displayed a lack of focus on their stated conservation objectives and made decisions that reflected their initial affective impressions. Value-focused approaches may help individuals incorporate all the decision-relevant objectives by making the technical and value-based objectives more salient. In the second study, subjects displayed a lack of focus on statistical risk and again made affect-based decisions. Trade-off techniques may help individuals incorporate relevant technical data, even when it conflicts with their initial affective impressions or other value-based objectives. In the third study, subjects displayed a lack of trust in decision-making authorities when the decision involved a negatively affect-rich outcome (i.e., a loss). Identifying shared salient values and increasing procedural fairness may help build social trust in both decision-making authorities and the decision process.

  7. Miniature objective lens for array digital pathology: design improvement based on clinical evaluation

    NASA Astrophysics Data System (ADS)

    McCall, Brian; Pierce, Mark; Graviss, Edward A.; Richards-Kortum, Rebecca R.; Tkaczyk, Tomasz S.

    2016-03-01

    A miniature objective designed for digital detection of Mycobacterium tuberculosis (MTB) was evaluated for diagnostic accuracy. The objective was designed for array microscopy, but fabricated and evaluated at this stage of development as a single objective. The counts and diagnoses of patient samples were directly compared for digital detection and standard microscopy. The results were found to be correlated and highly concordant. The evaluation of this lens by direct comparison to standard fluorescence sputum smear microscopy presented unique challenges and led to some new insights in the role played by the system parameters of the microscope. The design parameters and how they were developed are reviewed in light of these results. New system parameters are proposed with the goal of easing the challenges of evaluating the miniature objective and maintaining the optical performance that produced the agreeable results presented without over-optimizing. A new design is presented that meets and exceeds these criteria.

  8. Epistemological and methodological significance of quantitative studies of psychomotor activity for the explanation of clinical depression.

    PubMed

    Terziivanova, Petya; Haralanov, Svetlozar

    2012-12-01

    Psychomotor disturbances have been regarded as cardinal symptoms of depression for centuries, and their objective assessment may have predictive value with respect to the severity of clinical depression, treatment outcome and prognosis of the affective disorder. Symptom severity was assessed using the Montgomery-Åsberg Depression Rating Scale (MADRS) and the Hamilton Rating Scale for Anxiety (HAM-A). Psychomotor indicators of activity and reactivity were objectively recorded and measured by means of computerized ultrasonographic craniocorpography. We found a statistically significant correlation between disturbances in psychomotor indicators and the MADRS total score (r = 0.4; P < 0.0001). The HAM-A total score had no statistically significant correlation with psychomotor indicators (P > 0.05). Different items of the MADRS and HAM-A correlated with psychomotor disturbances with different strength and significance. Objectively measured psychomotor retardation was associated with greater severity of depressive symptoms assessed at the clinical level. Integration between different methods is needed in order to improve understanding of the psychopathology and the neurobiology of a disputable diagnosis such as clinical depression. © 2012 Blackwell Publishing Ltd.

  9. Evaluation and comparison of statistical methods for early temporal detection of outbreaks: A simulation-based study

    PubMed Central

    Le Strat, Yann

    2017-01-01

    The objective of this paper is to evaluate a panel of statistical algorithms for temporal outbreak detection. Based on a large dataset of simulated weekly surveillance time series, we performed a systematic assessment of 21 statistical algorithms, 19 implemented in the R package surveillance and two other methods. We estimated false positive rate (FPR), probability of detection (POD), probability of detection during the first week, sensitivity, specificity, negative and positive predictive values and F1-measure for each detection method. Then, to identify the factors associated with these performance measures, we ran multivariate Poisson regression models adjusted for the characteristics of the simulated time series (trend, seasonality, dispersion, outbreak sizes, etc.). The FPR ranged from 0.7% to 59.9% and the POD from 43.3% to 88.7%. Some methods had a very high specificity, up to 99.4%, but a low sensitivity. Methods with a high sensitivity (up to 79.5%) had a low specificity. All methods had a high negative predictive value, over 94%, while positive predictive values ranged from 6.5% to 68.4%. Multivariate Poisson regression models showed that performance measures were strongly influenced by the characteristics of time series. Past or current outbreak size and duration strongly influenced detection performances. PMID:28715489
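    The performance measures estimated in this study reduce to confusion-matrix counts of alarm weeks against true outbreak weeks. A minimal sketch with made-up indicator series (not the simulated data of the paper):

    ```python
    def detection_metrics(alarms, outbreaks):
        """Confusion-matrix performance measures for weekly alarm (1/0) and
        true outbreak (1/0) indicator series of equal length."""
        tp = sum(a and o for a, o in zip(alarms, outbreaks))
        fp = sum(a and not o for a, o in zip(alarms, outbreaks))
        fn = sum(not a and o for a, o in zip(alarms, outbreaks))
        tn = sum(not a and not o for a, o in zip(alarms, outbreaks))
        sens = tp / (tp + fn)      # sensitivity / probability of detection
        ppv = tp / (tp + fp)       # positive predictive value
        return {"FPR": fp / (fp + tn), "sensitivity": sens,
                "specificity": tn / (tn + fp), "PPV": ppv,
                "NPV": tn / (tn + fn), "F1": 2 * ppv * sens / (ppv + sens)}

    # Hypothetical 10-week series: alarms in weeks 3-5, outbreak in weeks 4-6.
    alarms    = [0, 0, 1, 1, 1, 0, 0, 0, 0, 0]
    outbreaks = [0, 0, 0, 1, 1, 1, 0, 0, 0, 0]
    print(detection_metrics(alarms, outbreaks))
    ```

    The study's per-algorithm figures come from aggregating such counts over many simulated series.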

  10. Evaluation of risk communication in a mammography patient decision aid

    PubMed Central

    Klein, Krystal A.; Watson, Lindsey; Ash, Joan S.; Eden, Karen B.

    2016-01-01

    Objectives We characterized patients’ comprehension, memory, and impressions of risk communication messages in a patient decision aid (PtDA), Mammopad, and clarified perceived importance of numeric risk information in medical decision making. Methods Participants were 75 women in their forties with average risk factors for breast cancer. We used mixed methods, comprising a risk estimation problem administered within a pretest–posttest design, and semi-structured qualitative interviews with a subsample of 21 women. Results Participants’ positive predictive value estimates of screening mammography improved after using Mammopad. Although risk information was only briefly memorable, through content analysis, we identified themes describing why participants value quantitative risk information, and obstacles to understanding. We describe ways the most complicated graphic was incompletely comprehended. Conclusions Comprehension of risk information following Mammopad use could be improved. Patients valued receiving numeric statistical information, particularly in pictograph format. Obstacles to understanding risk information, including potential for confusion between statistics, should be identified and mitigated in PtDA design. Practice implications Using simple pictographs accompanied by text, PtDAs may enhance a shared decision-making discussion. PtDA designers and providers should be aware of the benefits and limitations of graphical risk presentations. Incorporating comprehension checks could help identify and correct misapprehensions of graphically presented statistics. PMID:26965020
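    The positive predictive value that participants were asked to estimate follows from Bayes' rule given a test's sensitivity, specificity, and disease prevalence. A sketch with illustrative numbers (not the figures used in Mammopad):

    ```python
    def ppv(sensitivity, specificity, prevalence):
        """Positive predictive value via Bayes' rule: probability of disease
        given a positive test result."""
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # Illustrative values only: even a fairly accurate screening test yields
    # a low PPV when the condition is rare, which is exactly the point the
    # risk estimation problem probes.
    print(f"{ppv(0.9, 0.9, 0.01):.1%}")
    ```

    The counterintuitively low result is why this quantity is a common comprehension check in risk communication research.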

  11. Effect of spatial coherence of light on the photoregulation processes in cells

    NASA Astrophysics Data System (ADS)

    Budagovsky, A. V.; Solovykh, N. V.; Yankovskaya, M. B.; Maslova, M. V.; Budagovskaya, O. N.; Budagovsky, I. A.

    2016-07-01

    The effect of the statistical properties of light on the magnitude of the photoinduced reaction of biological objects differing in morphological and physiological characteristics, optical properties, and cell size was studied. The fruit of apple trees, the pollen of cherries, microcuttings of blackberries in vitro, and the spores and mycelium of fungi were irradiated with quasimonochromatic light fluxes with identical energy parameters but different values of coherence length and correlation radius. In all cases, the greatest stimulation effect occurred when the cells fit completely within the coherence volume of the field, while both temporal and spatial coherence had a significant and statistically reliable impact on the physiological activity of the cells. It was concluded that not only the spectral but also the statistical (coherence) properties of the acting light play an important role in the photoregulation process.

  12. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
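    The approximate first-order statistical moment method propagates input variances through the code outputs using sensitivity derivatives: for independent inputs, sigma_f² ≈ Σ_i (∂f/∂x_i)² σ_i², and a probabilistic constraint is recast as the mean value plus k standard deviations staying within the limit. A minimal sketch on an analytic stand-in for a CFD output (the function, inputs, and limit are illustrative):

    ```python
    import math

    def first_order_moments(f, x_mean, x_sigma, h=1e-6):
        """First-order statistical moment method: approximate the mean and
        standard deviation of f(x) for independent normal inputs, using
        finite-difference sensitivity derivatives."""
        mean = f(x_mean)
        var = 0.0
        for i, (xm, s) in enumerate(zip(x_mean, x_sigma)):
            xp = list(x_mean)
            xp[i] = xm + h
            dfdx = (f(xp) - mean) / h   # sensitivity derivative df/dx_i
            var += (dfdx * s) ** 2
        return mean, math.sqrt(var)

    # Stand-in output as a function of two uncertain flow parameters.
    f = lambda x: 3.0 * x[0] + 2.0 * x[1] ** 2
    mu, sigma = first_order_moments(f, [1.0, 2.0], [0.1, 0.05])

    # Probabilistic constraint: satisfied when mean + k*sigma <= limit,
    # where k sets the target probability (k ≈ 1.645 for 95%).
    print(mu, sigma, mu + 1.645 * sigma <= 12.0)
    ```

    In the paper the derivatives come from the CFD code's own first- and second-order sensitivity machinery rather than finite differences, but the propagation step is the same.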

  13. Electricity from photovoltaic solar cells. Flat-Plate Solar Array Project of the US Department of Energy's National Photovoltaics Program: 10 years of progress

    NASA Technical Reports Server (NTRS)

    Christensen, Elmer

    1985-01-01

    The objectives were to develop the flat-plate photovoltaic (PV) array technologies required for large-scale terrestrial use late in the 1980s and in the 1990s; advance crystalline silicon PV technologies; develop the technologies required to convert thin-film PV research results into viable module and array technology; and stimulate the transfer of knowledge of advanced PV materials, solar cells, modules, and arrays to the PV community. Progress toward these goals, along with recommendations for future work, is discussed.

  14. Efficient structures for geosynchronous spacecraft solar arrays. Phase 1, 2 and 3

    NASA Astrophysics Data System (ADS)

    Adams, L. R.; Hedgepeth, J. M.

    1981-09-01

    Structural concepts for deploying and supporting lightweight solar-array blankets for geosynchronous electrical power are evaluated. It is recommended that the STACBEAM solar-array system should be the object of further study and detailed evaluation. The STACBEAM system provides high stiffness at low mass, and with the use of a low mass deployment mechanism, full structural properties can be maintained throughout deployment. The stowed volume of the STACBEAM is acceptably small, and its linear deployment characteristic allows periodic attachments to the solar-array blanket to be established in the stowed configuration and maintained during deployment.

  15. Array Detector Modules for Spent Fuel Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolotnikov, Aleksey

    Brookhaven National Laboratory (BNL) proposes to evaluate arrays of position-sensitive virtual Frisch-grid (VFG) detectors for passive gamma-ray emission tomography (ET) to verify the spent fuel in storage casks before the casks are stored in geo-repositories. Our primary objective is to conduct a preliminary analysis of the arrays' capabilities and to perform field measurements to validate the effectiveness of the proposed array modules. The outcome of this proposal will consist of baseline designs for a future ET system which can ultimately be used together with neutron detectors. This will demonstrate the usage of this technology in spent fuel storage casks.

  16. Efficient structures for geosynchronous spacecraft solar arrays. Phase 1, 2 and 3

    NASA Technical Reports Server (NTRS)

    Adams, L. R.; Hedgepeth, J. M.

    1981-01-01

    Structural concepts for deploying and supporting lightweight solar-array blankets for geosynchronous electrical power are evaluated. It is recommended that the STACBEAM solar-array system should be the object of further study and detailed evaluation. The STACBEAM system provides high stiffness at low mass, and with the use of a low mass deployment mechanism, full structural properties can be maintained throughout deployment. The stowed volume of the STACBEAM is acceptably small, and its linear deployment characteristic allows periodic attachments to the solar-array blanket to be established in the stowed configuration and maintained during deployment.

  17. Array microscopy technology and its application to digital detection of Mycobacterium tuberculosis

    NASA Astrophysics Data System (ADS)

    McCall, Brian P.

    Tuberculosis causes more deaths worldwide than any other curable infectious disease. This is the case despite tuberculosis appearing to be on the verge of eradication midway through the last century. Efforts at reversing the spread of tuberculosis have intensified since the early 1990s. Since then, microscopy has been the primary frontline diagnostic. In this dissertation, advances in clinical microscopy towards array microscopy for digital detection of Mycobacterium tuberculosis are presented. Digital array microscopy separates the tasks of microscope operation and pathogen detection and will reduce the specialization needed to operate the microscope. Distributing the work and reducing specialization will allow this technology to be deployed at the point of care, taking the front-line diagnostic for tuberculosis from the microscopy center to the community health center. By improving access to microscopy centers, hundreds of thousands of lives can be saved. For this dissertation, a lens was designed that can be manufactured as a 4 × 6 array of microscopes. This lens design is diffraction limited, having less than 0.071 waves of aberration (root mean square) over the entire field of view. The total area imaged onto a full-frame digital image sensor is expected to be 3.94 mm², which according to tuberculosis microscopy guidelines is more than sufficient for a sensitive diagnosis. The design is tolerant to single-point diamond turning manufacturing errors, as found by tolerance analysis and by fabricating a prototype. Diamond micro-milling, a fabrication technique for lens array molds, was applied to plastic plano-concave and plano-convex lens arrays and found to produce high-quality optical surfaces. The micro-milling technique did not prove robust enough to produce bi-convex and meniscus lens arrays in a variety of lens shapes, however, and it required lengthy fabrication times.
In order to rapidly prototype new lenses, a new diamond machining technique was developed called 4-axis single-point diamond machining. This technique is 2-10x faster than micro-milling, depending on how advanced the micro-milling equipment is. With array microscope fabrication still in development, a single prototype of the lens designed for an array microscope was fabricated using single-point diamond turning. The prototype microscope objective was validated in a pre-clinical trial. The prototype was compared with a standard clinical microscope objective in diagnostic tests. High concordance, a Fleiss's kappa of 0.88, was found between diagnoses made using the prototype and standard microscope objectives and a reference test. With the lens designed and validated and an advanced fabrication process developed, array microscopy technology is advanced to the point where it is feasible to rapidly prototype an array microscope for detection of tuberculosis and translate the array microscope from an innovative concept to a device that can save lives.

  18. "TNOs are Cool": A survey of the trans-Neptunian region. XIII. Statistical analysis of multiple trans-Neptunian objects observed with Herschel Space Observatory

    NASA Astrophysics Data System (ADS)

    Kovalenko, I. D.; Doressoundiram, A.; Lellouch, E.; Vilenius, E.; Müller, T.; Stansberry, J.

    2017-11-01

    Context. Gravitationally bound multiple systems provide an opportunity to estimate the mean bulk density of the objects, a characteristic that is not available for single objects. Being a primitive population of the outer solar system, binary and multiple trans-Neptunian objects (TNOs) provide unique information about bulk density and internal structure, improving our understanding of their formation and evolution. Aims: The goal of this work is to analyse the parameters of multiple trans-Neptunian systems observed with the Herschel and Spitzer space telescopes. In particular, statistical analysis is performed for radiometric size and geometric albedo, obtained from photometric observations, and for estimated bulk density. Methods: We use Monte Carlo simulation to estimate the real size distribution of TNOs. For this purpose, we expand the dataset of diameters by adopting the Minor Planet Center database list with the available values of absolute magnitude therein, and the albedo distribution derived from Herschel radiometric measurements. We use the two-sample Anderson-Darling non-parametric statistical method to test whether two samples of diameters, for binary and single TNOs, come from the same distribution. Additionally, we use Spearman's coefficient as a measure of rank correlations between parameters. Uncertainties of the estimated parameters, together with the lack of data, are taken into account. Conclusions about correlations between parameters are based on statistical hypothesis testing. Results: We have found that the difference in the size distributions of multiple and single TNOs is biased by small objects. The test on correlations between parameters shows that the effective diameter of binary TNOs correlates strongly with heliocentric orbital inclination and with the magnitude difference between the components of a binary system. The correlation between diameter and magnitude difference implies that small and large binaries are formed by different mechanisms.
Furthermore, the statistical test indicates, although not significantly given the sample size, that a moderately strong correlation exists between diameter and bulk density. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
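    Spearman's coefficient, used above to test rank correlations between parameters, is the Pearson correlation computed on the ranks of the data. A minimal pure-Python sketch (the diameters and inclinations are hypothetical, not Herschel data):

    ```python
    def ranks(values):
        """Average ranks (1-based); ties share the mean of their positions."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1          # mean of 1-based positions i+1 .. j+1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    def spearman(x, y):
        """Spearman's rank correlation: Pearson correlation of the ranks."""
        rx, ry = ranks(x), ranks(y)
        n = len(x)
        mx, my = sum(rx) / n, sum(ry) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        vx = sum((a - mx) ** 2 for a in rx)
        vy = sum((b - my) ** 2 for b in ry)
        return cov / (vx * vy) ** 0.5

    # Hypothetical diameters (km) and orbital inclinations (deg) for five binaries:
    # a perfectly monotone relationship gives a coefficient of 1.0.
    diameters = [120, 250, 400, 800, 1200]
    inclinations = [3.1, 5.0, 12.4, 19.8, 28.0]
    print(spearman(diameters, inclinations))
    ```

    Because it depends only on ranks, the coefficient is robust to the large, asymmetric uncertainties typical of radiometric diameters.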

  19. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    PubMed

    Pavlacky, David C; Lukacs, Paul M; Blakesley, Jennifer A; Skorkowsky, Robert C; Klute, David S; Hahn, Beth A; Dreitz, Victoria J; George, T Luke; Hanni, David J

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. 
We demonstrate that integrating conservation and management objectives with rigorous statistical design and analyses ensures reliable knowledge about bird populations that is relevant and integral to bird conservation at multiple scales.

  20. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions

    PubMed Central

    Hahn, Beth A.; Dreitz, Victoria J.; George, T. Luke

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer’s sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer’s sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. 
We demonstrate that integrating conservation and management objectives with rigorous statistical design and analyses ensures reliable knowledge about bird populations that is relevant and integral to bird conservation at multiple scales. PMID:29065128
