NASA Technical Reports Server (NTRS)
Decker, Arthur J. (Inventor)
2006-01-01
An artificial neural network is disclosed that processes holography-generated characteristic patterns of vibrating structures along with finite-element models. The present invention provides a folding operation for conditioning training sets so that feed-forward neural networks can be optimally trained to process characteristic fringe patterns. The folding operation increases the sensitivity of the feed-forward network for detecting changes in the characteristic pattern. The folding routine scales input pixels according to their location in an intensity range rather than their position in the characteristic pattern.
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
2001-01-01
Artificial neural networks have been used for a number of years to process holography-generated characteristic patterns of vibrating structures. This technology depends critically on the selection and the conditioning of the training sets. A scaling operation called folding is discussed for conditioning training sets optimally for training feed-forward neural networks to process characteristic fringe patterns. Folding allows feed-forward nets to be trained easily to detect damage-induced vibration-displacement-distribution changes as small as 10 nm. A specific application to aerospace of neural-net processing of characteristic patterns is presented to motivate the conditioning and optimization effort.
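The folding transform itself is not spelled out in these abstracts; the sketch below is a minimal guess at the idea, assuming a triangular remapping that rescales each pixel by where its value sits in the frame's intensity range rather than by its position in the fringe pattern.

```python
import numpy as np

def fold_pattern(fringe, n_folds=4):
    """Rescale pixels by their position in the intensity range (a guess at
    the 'folding' conditioning step), not by their spatial position.

    Pixels are first normalized to [0, 1] over the frame's intensity range,
    then 'folded' with a triangular map so that small intensity changes
    anywhere in the range produce comparably large changes in the input
    seen by the feed-forward network.
    """
    f = fringe.astype(float)
    lo, hi = f.min(), f.max()
    x = (f - lo) / (hi - lo + 1e-12)        # location in the intensity range
    phase = (x * n_folds) % 1.0             # repeat the range n_folds times
    return 1.0 - np.abs(2.0 * phase - 1.0)  # triangular fold into [0, 1]

# Example: condition a synthetic 64x64 characteristic fringe pattern
pattern = np.abs(np.cos(np.linspace(0, 8 * np.pi, 64 * 64))).reshape(64, 64)
conditioned = fold_pattern(pattern, n_folds=4)
print(conditioned.shape, conditioned.min(), conditioned.max())
```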
NASA Astrophysics Data System (ADS)
Wang, Lynn T.-N.; Schroeder, Uwe Paul; Madhavan, Sriram
2017-03-01
A pattern-based methodology for optimizing SADP-compliant layout designs is developed based on identifying cut mask patterns and replacing them with pre-characterized fixing solutions. A pattern-based library of difficult-to-manufacture cut patterns with pre-characterized fixing solutions is built. A pattern-based engine searches for matching patterns in the decomposed layouts. When a match is found, the engine opportunistically replaces the detected pattern with a pre-characterized fixing solution. The methodology was demonstrated on a 7nm routed metal2 block. A small library of 30 cut patterns increased the number of more manufacturable cuts by 38% and metal-via enclosure by 13% with a small parasitic capacitance impact of 0.3%.
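A minimal sketch of the match-and-replace step, assuming the decomposed cut mask can be rasterized onto a grid; the library contents and the stencil representation are illustrative, not the ones used in the paper.

```python
import numpy as np

# Hypothetical illustration of the pattern-based flow: a library maps a
# difficult-to-manufacture cut configuration (here a tiny boolean stencil)
# to a pre-characterized fixing solution of the same footprint.
LIBRARY = {
    # two diagonally adjacent cuts -> shift one cut onto the same track
    ((1, 0), (0, 1)): ((1, 0), (1, 0)),
}

def fix_cut_mask(cuts):
    """Scan a rasterized cut mask for library patterns and replace matches."""
    cuts = cuts.copy()
    replaced = 0
    for pat, fix in LIBRARY.items():
        p, f = np.array(pat), np.array(fix)
        ph, pw = p.shape
        for r in range(cuts.shape[0] - ph + 1):
            for c in range(cuts.shape[1] - pw + 1):
                if np.array_equal(cuts[r:r+ph, c:c+pw], p):
                    cuts[r:r+ph, c:c+pw] = f   # opportunistic replacement
                    replaced += 1
    return cuts, replaced

mask = np.array([[1, 0, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
fixed, n = fix_cut_mask(mask)
print(n, "patterns replaced")
```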
Is countershading camouflage robust to lighting change due to weather?
Penacchio, Olivier; Lovell, P George; Harris, Julie M
2018-02-01
Countershading is a pattern of coloration thought to have evolved in order to implement camouflage. By adopting a pattern of coloration that makes the surface facing towards the sun darker and the surface facing away from the sun lighter, the overall amount of light reflected off an animal can be made more uniformly bright. Countershading could hence contribute to visual camouflage by increasing background matching or reducing cues to shape. However, the usefulness of countershading is constrained by a particular pattern delivering 'optimal' camouflage only for very specific lighting conditions. In this study, we test the robustness of countershading camouflage to lighting change due to weather, using human participants as a 'generic' predator. In a simulated three-dimensional environment, we constructed an array of simple leaf-shaped items and a single ellipsoidal target 'prey'. We set these items in two light environments: strongly directional 'sunny' and more diffuse 'cloudy'. The target object was given the optimal pattern of countershading for one of these two environment types or displayed a uniform pattern. By measuring detection time and accuracy, we explored whether and how target detection depended on the match between the pattern of coloration on the target object and scene lighting. Detection times were longest when the countershading was appropriate to the illumination; incorrectly camouflaged targets were detected with a similar pattern of speed and accuracy to uniformly coloured targets. We conclude that structural changes in light environment, such as caused by differences in weather, do change the effectiveness of countershading camouflage.
Immobilization, stabilization and patterning techniques for enzyme based sensor systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flounders, A.W.; Carichner, S.C.; Singh, A.K.
1997-01-01
Sandia National Laboratories has recently opened the Chemical and Radiation Detection Laboratory (CRDL) in Livermore CA to address the detection needs of a variety of government agencies (e.g., Department of Energy, Environmental Protection Agency, Department of Agriculture) as well as provide a fertile environment for the cooperative development of new industrial technologies. This laboratory consolidates a variety of existing chemical and radiation detection efforts and enables Sandia to expand into the novel area of biochemically based sensors. One aspect of this biosensor effort is further development and optimization of enzyme modified field effect transistors (EnFETs). Recent work has focused upon covalent attachment of enzymes to silicon dioxide and silicon nitride surfaces for EnFET fabrication. They are also investigating methods to pattern immobilized proteins; a critical component for development of array-based sensor systems. Novel enzyme stabilization procedures are key to patterning immobilized enzyme layers while maintaining enzyme activity. Results related to maximized enzyme loading, optimized enzyme activity and fluorescent imaging of patterned surfaces will be presented.
Overview of field gamma spectrometries based on Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
The design of optical-electronic devices and systems involves selecting technical solutions that are optimal according to certain criteria under the given initial requirements and conditions. The defining characteristic of an optical-electronic system (OES) for any purpose, and its most important capability, is threshold detection; the required functional quality of the device or system is achieved on the basis of this property. The design criteria and optimization methods must therefore be subordinated to the goal of better detectability, which generally reduces to the problem of optimally extracting the expected (predetermined) signals under the predetermined observation conditions. The main purpose of optimizing the system with respect to its detectability is thus the choice of circuits and components that provide the most effective detection of a target.
Optimization of illuminating system to detect optical properties inside a finger
NASA Astrophysics Data System (ADS)
Sano, Emiko; Shikai, Masahiro; Shiratsuki, Akihide; Maeda, Takuji; Matsushita, Masahito; Sasakawa, Koichi
2007-01-01
Biometrics performs personal authentication using individual bodily features such as fingerprints and faces. These technologies have been studied and developed for many years. In particular, fingerprint authentication has evolved over many years, and fingerprinting is currently one of the world's most established biometric authentication techniques. Not long ago this technique was only used for personal identification in criminal investigations and high-security facilities. In recent years, however, various biometric authentication techniques have appeared in everyday applications. Even though they provide great convenience, they have also produced a number of technical issues concerning operation. Generally, fingerprint authentication comprises a number of component technologies: (1) sensing technology for detecting the fingerprint pattern; (2) image processing technology for converting the captured pattern into feature data that can be used for verification; and (3) verification technology for comparing the feature data with a reference and determining whether it matches. Current fingerprint authentication issues, revealed in research results, originate with fingerprint sensing technology. Sensing methods for detecting a person's fingerprint pattern for image processing are particularly important because they impact overall fingerprint authentication performance. Current sensing methods suffer from the following problems in some cases: some fingers have fingerprints that are difficult to detect with conventional sensors, and because fingerprint patterns are easily affected by the finger's surface condition, noise such as discontinuities and thin spots can appear in patterns obtained from wrinkled or sweaty fingers. To address these problems, we proposed a novel fingerprint sensor based on new scientific knowledge. A characteristic of this new method is that the obtained fingerprint patterns are not easily affected by the finger's surface condition, because it detects the fingerprint pattern inside the finger using transmitted light. We examined optimization of the illumination system of this novel fingerprint sensor to detect a high-contrast fingerprint pattern over a wide area and to improve the image processing in step (2).
Rigorous ILT optimization for advanced patterning and design-process co-optimization
NASA Astrophysics Data System (ADS)
Selinidis, Kosta; Kuechler, Bernd; Cai, Howard; Braam, Kyle; Hoppe, Wolfgang; Domnenko, Vitaly; Poonawala, Amyn; Xiao, Guangming
2018-03-01
Despite the large difficulties involved in extending 193i multiple patterning and the slow ramp of EUV lithography to full manufacturing readiness, the pace of development for new technology node variations has been accelerating. Multiple new variations of new and existing technology nodes have been introduced for a range of device applications; each variation with at least a few new process integration methods, layout constructs and/or design rules. This has led to a strong increase in the demand for predictive technology tools which can be used to quickly guide important patterning and design co-optimization decisions. In this paper, we introduce a novel hybrid predictive patterning method combining two patterning technologies which have each individually been widely used for process tuning, mask correction and process-design co-optimization. These technologies are rigorous lithography simulation and inverse lithography technology (ILT). Rigorous lithography simulation has been extensively used for process development/tuning, lithography tool user setup, photoresist hot-spot detection, photoresist-etch interaction analysis, lithography-TCAD interactions/sensitivities, source optimization and basic lithography design rule exploration. ILT has been extensively used in a range of lithographic areas including logic hot-spot fixing, memory layout correction, dense memory cell optimization, assist feature (AF) optimization, source optimization, complex patterning design rules and design-technology co-optimization (DTCO). The combined optimization capability of these two technologies will therefore have a wide range of useful applications. We investigate the benefits of the new functionality for a few of these advanced applications including correction for photoresist top loss and resist scumming hotspots.
NASA Astrophysics Data System (ADS)
Wang, Lynn T.-N.; Madhavan, Sriram
2018-03-01
A pattern matching and rule-based polygon clustering methodology with DFM scoring is proposed to detect decomposition-induced manufacturability detractors and fix the layout designs prior to manufacturing. A pattern matcher scans the layout for pre-characterized patterns from a library. If a pattern is detected, rule-based clustering identifies the neighboring polygons that interact with those captured by the pattern. Then, DFM scores are computed for the possible layout fixes: the fix with the best score is applied. The proposed methodology was applied to two 20nm products with a chip area of 11 mm² on the metal 2 layer. All the hotspots were resolved. The number of DFM spacing violations decreased by 7-15%.
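The scoring step can be illustrated with a toy example; the metric names, weights, and candidate fixes below are assumptions, not the DFM scores used in the paper.

```python
# Hypothetical sketch of the "score the candidate fixes, apply the best"
# step. The metric names and weights are illustrative, not the foundry's.
def dfm_score(fix):
    """Higher is better: reward via enclosure, penalize spacing violations."""
    return 2.0 * fix["metal_via_enclosure_nm"] - 5.0 * fix["spacing_violations"]

candidate_fixes = [
    {"name": "widen_metal", "metal_via_enclosure_nm": 6, "spacing_violations": 1},
    {"name": "shift_via",   "metal_via_enclosure_nm": 5, "spacing_violations": 0},
]

best = max(candidate_fixes, key=dfm_score)
print("applying fix:", best["name"])
```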
An experimental sample of the field gamma-spectrometer based on solid state Si-photomultiplier
NASA Astrophysics Data System (ADS)
Denisov, Viktor; Korotaev, Valery; Titov, Aleksandr; Blokhina, Anastasia; Kleshchenok, Maksim
2017-05-01
The design of optical-electronic devices and systems involves selecting technical solutions that are optimal according to certain criteria under the given initial requirements and conditions. The defining characteristic of an optical-electronic system (OES) for any purpose, and its most important capability, is threshold detection; the required functional quality of the device or system is achieved on the basis of this property. The design criteria and optimization methods must therefore be subordinated to the goal of better detectability, which generally reduces to the problem of optimally extracting the expected (predetermined) signals under the predetermined observation conditions. The main purpose of optimizing the system with respect to its detectability is thus the choice of circuits and components that provide the most effective detection of a target.
Advanced Doppler radar physiological sensing technique for drone detection
NASA Astrophysics Data System (ADS)
Yoon, Ji Hwan; Xu, Hao; Garcia Carrillo, Luis R.
2017-05-01
A 24 GHz medium-range human detecting sensor, using the Doppler Radar Physiological Sensing (DRPS) technique, which can also detect unmanned aerial vehicles (UAVs or drones), is currently under development for potential rescue and anti-drone applications. DRPS systems are specifically designed to remotely monitor small movements of non-metallic human tissues such as cardiopulmonary activity and respiration. Once optimized, the unique capabilities of DRPS could be used to detect UAVs. Initial measurements have shown that DRPS technology is able to detect moving and stationary humans, as well as largely non-metallic multi-rotor drone helicopters. Further data processing will incorporate pattern recognition to detect multiple signatures (motor vibration and hovering patterns) of UAVs.
Root System Water Consumption Pattern Identification on Time Series Data
Figueroa, Manuel; Pope, Christopher
2017-01-01
In agriculture, soil and meteorological sensors are used along low power networks to capture data, which allows for optimal resource usage and minimizing environmental impact. This study uses time series analysis methods for outlier detection and pattern recognition on soil moisture sensor data to identify irrigation and consumption patterns and to improve a soil moisture prediction and irrigation system. This study compares three new algorithms with the current detection technique in the project; the results greatly decrease the number of false positives detected. The best result is obtained by the Series Strings Comparison (SSC) algorithm averaging a precision of 0.872 on the testing sets, vastly improving the current system's 0.348 precision. PMID:28621739
Root System Water Consumption Pattern Identification on Time Series Data.
Figueroa, Manuel; Pope, Christopher
2017-06-16
In agriculture, soil and meteorological sensors are used along low power networks to capture data, which allows for optimal resource usage and minimizing environmental impact. This study uses time series analysis methods for outlier detection and pattern recognition on soil moisture sensor data to identify irrigation and consumption patterns and to improve a soil moisture prediction and irrigation system. This study compares three new algorithms with the current detection technique in the project; the results greatly decrease the number of false positives detected. The best result is obtained by the Series Strings Comparison (SSC) algorithm averaging a precision of 0.872 on the testing sets, vastly improving the current system's 0.348 precision.
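The SSC algorithm is not described in these records; as a rough stand-in, the sketch below flags irrigation-like jumps in a soil moisture trace with a rolling z-score test.

```python
import numpy as np

def detect_irrigation_events(moisture, window=12, z_thresh=3.0):
    """Generic rolling z-score detector (a stand-in, not the paper's SSC):
    flag samples whose jump above the rolling mean exceeds z_thresh sigmas,
    which is how irrigation spikes typically appear in soil moisture data."""
    moisture = np.asarray(moisture, dtype=float)
    events = []
    for i in range(window, len(moisture)):
        hist = moisture[i - window:i]
        mu, sigma = hist.mean(), hist.std() + 1e-9
        if (moisture[i] - mu) / sigma > z_thresh:
            events.append(i)
    return events

# Synthetic trace: slow drying with an irrigation spike at index 40
rng = np.random.default_rng(0)
trace = np.linspace(30, 25, 60) + rng.normal(0, 0.05, 60)
trace[40:] += 8.0
print(detect_irrigation_events(trace))
```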
Consistency functional map propagation for repetitive patterns
NASA Astrophysics Data System (ADS)
Wang, Hao
2017-09-01
Repetitive patterns appear frequently in both man-made and natural environments. Automatically and robustly detecting such patterns from an image is a challenging problem. We study repetitive pattern alignment by embedding a segmentation cue within a functional map model. However, this model cannot tackle repetitive patterns directly due to the large photometric and geometric variations. Thus, a consistency functional map propagation (CFMP) algorithm that extends the functional map with dynamic propagation is proposed to address this issue. This propagation model operates in two steps. The first aligns the patterns from a local region, transferring segmentation functions among patterns; it can be cast as an L-norm optimization problem. The second step updates the template segmentation for the next round of pattern discovery by merging the transferred segmentation functions. Extensive experiments and comparative analyses have demonstrated an encouraging performance of the proposed algorithm in the detection and segmentation of repetitive patterns.
Smart detection of microRNAs through fluorescence enhancement on a photonic crystal.
Pasquardini, L; Potrich, C; Vaghi, V; Lunelli, L; Frascella, F; Descrovi, E; Pirri, C F; Pederzolli, C
2016-04-01
The detection of low-abundance biomarkers, such as circulating microRNAs, demands innovative detection methods with increased resolution, sensitivity and specificity. Here, a biofunctional surface was implemented for the selective capture of microRNAs, which were detected through fluorescence enhancement directly on a photonic crystal. To set up the optimal biofunctional surface, epoxy-coated commercially available microscope slides were spotted with specific anti-microRNA probes. The optimal concentrations of probe and passivating agent were selected and employed for titrating the microRNA hybridization. Cross-hybridization of different microRNAs was also tested and found to be negligible. Once optimized, the protocol was adapted to the photonic crystal surface, where fluorescent synthetic miR-16 was hybridized and imaged with dedicated equipment. The photonic crystal consists of a dielectric multilayer patterned with a grating structure. In this way, it is possible to take advantage of both resonant excitation of the fluorophores and angular redirection of the emitted radiation. As a result, a significant fluorescence enhancement due to the resonant structure is collected from the patterned photonic crystal with respect to the outer non-structured surface. The dedicated read-out system is compact and based on wide-field imaging detection, with little or no optical alignment issues, which makes this approach particularly interesting for further developments such as microarray-type bioassays.
Xiao, Feng; Kong, Lingjiang; Chen, Jian
2017-06-01
A rapid-search algorithm to improve the beam-steering efficiency of a liquid crystal optical phased array was proposed and experimentally demonstrated in this paper. The proposed algorithm, in which the steering efficiency is taken as the objective function and the controlling voltage codes are the optimization variables, consists of a detection stage and a construction stage. It optimizes the steering efficiency in the detection stage and adjusts its search direction adaptively in the construction stage to avoid getting caught in the wrong search space. Simulations were conducted to compare the proposed algorithm with the widely used pattern-search algorithm using convergence rate and optimized efficiency as criteria. Beam-steering optimization experiments were performed to verify the validity of the proposed method.
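For reference, a minimal version of the baseline pattern-search loop (the comparison method, not the proposed rapid-search algorithm), assuming the steering efficiency is available as a black-box function of the voltage codes.

```python
import numpy as np

def pattern_search(objective, x0, step=8, min_step=1, lo=0, hi=255):
    """Baseline integer pattern search: probe +/-step on each voltage code,
    keep improving moves, halve the step size when no move helps."""
    x = np.array(x0, dtype=int)
    best = objective(x)
    while step >= min_step:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = x.copy()
                trial[i] = np.clip(trial[i] + d, lo, hi)
                val = objective(trial)
                if val > best:
                    x, best, improved = trial, val, True
        if not improved:
            step //= 2
    return x, best

# Toy stand-in for a measured steering efficiency of an 8-element phase profile
target = np.array([32, 64, 96, 128, 160, 192, 224, 255])
efficiency = lambda codes: -np.sum((codes - target) ** 2)
codes, eff = pattern_search(efficiency, x0=[128] * 8)
print(codes)
```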
Desaturation Patterns Detected by Oximetry in a Large Population of Athletes
ERIC Educational Resources Information Center
Garrido-Chamorro, Raul P.; Gonzalez-Lorenzo, Marta; Sirvent-Belando, Jose; Blasco-Lafarga, Cristina; Roche, Enrique
2009-01-01
Optimal exercise performance in well trained athletes can be affected by arterial oxygen saturation failure. Noninvasive detection of this phenomenon when performing a routine ergometric test can be a valuable tool for subsequent planning of the athlete's training, recovery, and nutrition. Oximetry has been used to this end. The authors studied…
Quantifying Performance Bias in Label Fusion
2012-08-21
...detect), may provide the end-user with the means to appropriately adjust the performance and optimal thresholds for performance by fusing legacy systems...
Statistical detection of patterns in unidimensional distributions by continuous wavelet transforms
NASA Astrophysics Data System (ADS)
Baluev, R. V.
2018-04-01
Objective detection of specific patterns in statistical distributions, like groupings or gaps or abrupt transitions between different subsets, is a task with a rich range of applications in astronomy: Milky Way stellar population analysis, investigations of the exoplanets diversity, Solar System minor bodies statistics, extragalactic studies, etc. We adapt the powerful technique of the wavelet transforms to this generalized task, making a strong emphasis on the assessment of the patterns detection significance. Among other things, our method also involves optimal minimum-noise wavelets and minimum-noise reconstruction of the distribution density function. Based on this development, we construct a self-closed algorithmic pipeline aimed to process statistical samples. It is currently applicable to single-dimensional distributions only, but it is flexible enough to undergo further generalizations and development.
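A toy version of the idea, assuming a hand-rolled Ricker-wavelet transform of a binned density and a plain z-score threshold in place of the paper's careful significance assessment and minimum-noise wavelets.

```python
import numpy as np

def ricker(points, width):
    """Ricker (Mexican hat) wavelet sampled on `points` points."""
    t = np.arange(points) - (points - 1) / 2.0
    a = 1.0 / (np.sqrt(3 * width) * np.pi ** 0.25)
    return a * (1 - (t / width) ** 2) * np.exp(-(t / width) ** 2 / 2)

def cwt_detect(sample, grid, widths, threshold=3.0):
    """Bin the sample into a density estimate, convolve with wavelets of
    several widths, and flag grid bins whose normalized response exceeds
    `threshold` (a crude z-score, not the paper's significance machinery)."""
    density, _ = np.histogram(sample, bins=grid)
    detections = []
    for w in widths:
        kernel = ricker(10 * w, w)
        resp = np.convolve(density - density.mean(), kernel, mode="same")
        z = resp / (resp.std() + 1e-12)
        detections.extend(np.nonzero(z > threshold)[0].tolist())
    return sorted(set(detections))

# Uniform background with a clump near 0.7
rng = np.random.default_rng(0)
sample = np.concatenate([rng.uniform(0, 1, 500), rng.normal(0.7, 0.01, 60)])
print(cwt_detect(sample, grid=np.linspace(0, 1, 101), widths=[1, 2, 4]))
```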
Cui, Zhihua; Zhang, Yi
2014-02-01
As a promising and innovative research field, bioinformatics has attracted increasing attention recently. Beneath the enormous number of open problems in this field, one fundamental issue is about the accurate and efficient computational methodology that can deal with tremendous amounts of data. In this paper, we survey some applications of swarm intelligence to discover patterns of multiple sequences. To provide a deep insight, ant colony optimization, particle swarm optimization, artificial bee colony and artificial fish swarm algorithm are selected, and their applications to multiple sequence alignment and motif detecting problem are discussed.
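As one example of the algorithms surveyed, a minimal particle swarm optimizer is sketched below; the objective is a generic placeholder rather than an actual motif-scoring function.

```python
import numpy as np

def pso(score, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (minimization). In motif discovery
    the position vector would encode candidate motif parameters and `score`
    would measure how poorly the motif fits the sequences; here it is generic."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))     # positions
    v = np.zeros_like(x)                           # velocities
    pbest, pbest_val = x.copy(), np.array([score(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([score(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best, val = pso(lambda p: np.sum(p ** 2), dim=5)
print(best, val)
```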
Vicinal light inspection of translucent materials
Burns, George R [Albuquerque, NM]; Yang, Pin [Albuquerque, NM]
2010-01-19
The present invention includes methods and apparatus for inspecting vicinally illuminated non-patterned areas of translucent materials. An initial image of the material is received. A second image is received following a relative translation between the material being inspected and the device generating the images. Each vicinally illuminated image includes a portion having optimal illumination that can be extracted and stored in a composite image of the non-patterned area. The composite image includes aligned portions of the extracted image portions, and provides optimal illumination over the non-patterned area of the material to be inspected. The composite image can be processed by enhancement and object detection algorithms to determine the presence of, and characterize, any inhomogeneities present in the material.
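A rough sketch of the composite-assembly idea, assuming the optimally illuminated region of each frame can be approximated by its brightest column band; the patent does not specify this selection rule.

```python
import numpy as np

def build_composite(frames, offsets, strip_width):
    """From each vicinally illuminated frame, keep only the strip assumed to
    have optimal illumination (here, the brightest column band) and place it
    at the frame's translation offset in the composite image."""
    height = frames[0].shape[0]
    composite = np.zeros((height, offsets[-1] + strip_width))
    for frame, off in zip(frames, offsets):
        col_mean = frame.mean(axis=0)
        start = int(np.argmax(
            np.convolve(col_mean, np.ones(strip_width), mode="valid")))
        composite[:, off:off + strip_width] = frame[:, start:start + strip_width]
    return composite

# Three synthetic frames, each 100x200, translated by 50 px between captures
rng = np.random.default_rng(1)
frames = [rng.random((100, 200)) for _ in range(3)]
print(build_composite(frames, offsets=[0, 50, 100], strip_width=50).shape)
```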
The optimal community detection of software based on complex networks
NASA Astrophysics Data System (ADS)
Huang, Guoyan; Zhang, Peng; Zhang, Bing; Yin, Tengteng; Ren, Jiadong
2016-02-01
The community structure is important for software in terms of understanding design patterns and controlling the development and maintenance process. In order to detect the optimal community structure in a software network, a method called Optimal Partition Software Network (OPSN) is proposed based on the dependency relationships among software functions. First, by analyzing the information of multiple execution traces of one software system, we construct a Software Execution Dependency Network (SEDN). Second, based on the relationships among the function nodes in the network, we define Fault Accumulation (FA) to measure the importance of each function node and sort the nodes by this measure. Third, we select the top K (K=1,2,…) nodes as the cores of the initial communities (each community has exactly one core node). By comparing the dependency relationships between each node and the K communities, we put the node into the existing community with which it has the closest relationship. Finally, we calculate the modularity for different initial K to obtain the optimal division. Experiments verify that OPSN efficiently detects the optimal community structure in various software systems.
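A simplified sketch of the OPSN flow on an undirected graph, with node degree standing in for the paper's Fault Accumulation measure and networkx modularity used to pick K.

```python
import networkx as nx

def opsn_like_partition(G, k_max=5):
    """Rank nodes by a stand-in importance score (degree here, in place of
    Fault Accumulation), seed K communities with the top-K nodes, attach every
    other node to its nearest seed, and keep the K with the best modularity."""
    ranked = sorted(G.nodes, key=G.degree, reverse=True)
    best_q, best_parts = -1.0, None
    for k in range(1, min(k_max, len(ranked)) + 1):
        seeds = ranked[:k]
        parts = {s: {s} for s in seeds}
        for n in G.nodes:
            if n in seeds:
                continue
            dists = []
            for s in seeds:
                try:
                    dists.append((nx.shortest_path_length(G, n, s), s))
                except nx.NetworkXNoPath:
                    dists.append((float("inf"), s))
            parts[min(dists)[1]].add(n)   # closest seed wins
        q = nx.algorithms.community.modularity(G, list(parts.values()))
        if q > best_q:
            best_q, best_parts = q, list(parts.values())
    return best_parts, best_q

G = nx.karate_club_graph()
communities, q = opsn_like_partition(G)
print(len(communities), round(q, 3))
```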
Establishing the behavioural limits for countershaded camouflage.
Penacchio, Olivier; Harris, Julie M; Lovell, P George
2017-10-20
Countershading is a ubiquitous patterning of animals whereby the side that typically faces the highest illumination is darker. When tuned to specific lighting conditions and body orientation with respect to the light field, countershading minimizes the gradient of light the body reflects by counterbalancing shadowing due to illumination, and has therefore classically been thought of as an adaptation for visual camouflage. However, whether and how crypsis degrades when body orientation with respect to the light field is non-optimal has never been studied. We tested the behavioural limits on body orientation for countershading to deliver effective visual camouflage. We asked human participants to detect a countershaded target in a simulated three-dimensional environment. The target was optimally coloured for crypsis in a reference orientation and was displayed at different orientations. Search performance dramatically improved for deviations beyond 15 degrees. Detection time was significantly shorter and accuracy significantly higher than when the target orientation matched the countershading pattern. This work demonstrates the importance of maintaining body orientation appropriate for the displayed camouflage pattern, suggesting a possible selective pressure for animals to orient themselves appropriately to enhance crypsis.
Conditional anomaly detection methods for patient-management alert systems
Valko, Michal; Cooper, Gregory; Seybert, Amy; Visweswaran, Shyam; Saul, Melissa; Hauskrecht, Milos
2010-01-01
Anomaly detection methods can be very useful in identifying unusual or interesting patterns in data. A recently proposed conditional anomaly detection framework extends anomaly detection to the problem of identifying anomalous patterns on a subset of attributes in the data. The anomaly always depends on (is conditioned by) the values of the remaining attributes. The work presented in this paper focuses on instance-based methods for detecting conditional anomalies. The methods rely on a distance metric to identify examples in the dataset that are most critical for detecting the anomaly. We investigate various metrics and metric learning methods to optimize the performance of the instance-based anomaly detection methods. We show the benefits of the instance-based methods on two real-world detection problems: detection of unusual admission decisions for patients with community-acquired pneumonia and detection of unusual orders of an HPF4 test that is used to confirm Heparin-induced thrombocytopenia, a life-threatening condition caused by Heparin therapy. PMID:25392850
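A bare-bones instance-based scorer in the spirit of the approach, assuming a plain Euclidean metric on the context attributes rather than the learned metrics studied in the paper.

```python
import numpy as np

def conditional_anomaly_scores(X_context, y_target, k=5):
    """For each case, find the k nearest neighbours in the context attributes
    (Euclidean metric here) and score how unusual the case's target attribute
    is among those neighbours."""
    X = np.asarray(X_context, float)
    y = np.asarray(y_target, float)
    scores = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                              # exclude the case itself
        nbrs = np.argsort(d)[:k]
        frac_agree = np.mean(y[nbrs] == y[i])      # agreement with neighbours
        scores.append(1.0 - frac_agree)            # high score = anomalous
    return np.array(scores)

# Toy data: the decision y normally follows x > 0; case 0 violates that pattern
X = np.array([[2.0, 1.0], [1.5, 0.8], [1.8, 1.1], [-1.2, -0.9], [-1.0, -1.1]])
y = np.array([0, 1, 1, 0, 0])
print(conditional_anomaly_scores(X, y, k=2))
```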
Pattern-based IP block detection, verification, and variability analysis
NASA Astrophysics Data System (ADS)
Ahmad Ibrahim, Muhamad Asraf Bin; Muhsain, Mohamad Fahmi Bin; Kamal Baharin, Ezni Aznida Binti; Sweis, Jason; Lai, Ya-Chieh; Hurat, Philippe
2018-03-01
The goal of a foundry partner is to deliver high-quality silicon product to its customers on time. There is an assumed trust that the silicon will yield, function and perform as expected when the design meets all the sign-off criteria. The use of Intellectual Property (IP) blocks is very common today and provides the customer with pre-qualified and optimized functions for their design, thus shortening the design cycle. There are many methods by which an IP Block can be generated and placed within a layout. Even with the most careful methods and following of guidelines comes the responsibility of sign-off checking. A foundry needs to detect where these IP Blocks have been placed and look for any violations, including DRC-clean modifications to the IP Block which may or may not be intentional. Using a pattern-based approach to detect all IP Blocks used provides the foundry with advanced capabilities to analyze them further for any kind of changes which could void the OPC and process window optimizations. Any changes in an IP Block could cause functionality changes or even failures. This also opens the foundry to legal and cost issues while at the same time forcing re-spins of the design. In this publication, we discuss the methodology we have employed to avoid process issues and tape-out errors while at the same time reducing our manual work and improving the turnaround time. We are also able to use our pattern analysis to improve our OPC optimizations when modifications are encountered which have not been seen before.
Use of Acoustic Emission and Pattern Recognition for Crack Detection of a Large Carbide Anvil
Chen, Bin; Wang, Yanan; Yan, Zhaoli
2018-01-01
Large-volume cubic high-pressure apparatus is commonly used to produce synthetic diamond. Due to the high pressure, high temperature and alternating stresses in practical production, cracks often occur in the carbide anvil, thereby resulting in significant economic losses or even casualties. Conventional methods are unsuitable for crack detection of the carbide anvil. This paper is concerned with acoustic emission-based crack detection of carbide anvils, regarded as a pattern recognition problem; this is achieved using a microphone, with methods including sound pulse detection, feature extraction, feature optimization and classifier design. Through analyzing the characteristics of background noise, the cracked sound pulses are separated accurately from the originally continuous signal. Subsequently, three different kinds of features including a zero-crossing rate, sound pressure levels, and linear prediction cepstrum coefficients are presented for characterizing the cracked sound pulses. The original high-dimensional features are adaptively optimized using principal component analysis. A hybrid framework of a support vector machine with k nearest neighbors is designed to recognize the cracked sound pulses. Finally, experiments are conducted in a practical diamond workshop to validate the feasibility and efficiency of the proposed method. PMID:29382144
Use of Acoustic Emission and Pattern Recognition for Crack Detection of a Large Carbide Anvil.
Chen, Bin; Wang, Yanan; Yan, Zhaoli
2018-01-29
Large-volume cubic high-pressure apparatus is commonly used to produce synthetic diamond. Due to the high pressure, high temperature and alternating stresses in practical production, cracks often occur in the carbide anvil, thereby resulting in significant economic losses or even casualties. Conventional methods are unsuitable for crack detection of the carbide anvil. This paper is concerned with acoustic emission-based crack detection of carbide anvils, regarded as a pattern recognition problem; this is achieved using a microphone, with methods including sound pulse detection, feature extraction, feature optimization and classifier design. Through analyzing the characteristics of background noise, the cracked sound pulses are separated accurately from the originally continuous signal. Subsequently, three different kinds of features including a zero-crossing rate, sound pressure levels, and linear prediction cepstrum coefficients are presented for characterizing the cracked sound pulses. The original high-dimensional features are adaptively optimized using principal component analysis. A hybrid framework of a support vector machine with k nearest neighbors is designed to recognize the cracked sound pulses. Finally, experiments are conducted in a practical diamond workshop to validate the feasibility and efficiency of the proposed method.
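A much-simplified stand-in for the recognition pipeline: a zero-crossing-rate feature plus log band energies (in place of the sound pressure levels and LPCC features), PCA, and a plain SVM rather than the paper's SVM/k-NN hybrid; the signals are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def zero_crossing_rate(pulse):
    """Fraction of consecutive samples whose sign flips."""
    return np.mean(np.abs(np.diff(np.sign(pulse))) > 0)

def band_levels(pulse, n_bands=8):
    """Crude sound-pressure-level proxy: log energy in equal FFT bands."""
    spec = np.abs(np.fft.rfft(pulse)) ** 2
    bands = np.array_split(spec, n_bands)
    return np.log10(np.array([b.sum() + 1e-12 for b in bands]))

def features(pulse):
    return np.concatenate([[zero_crossing_rate(pulse)], band_levels(pulse)])

# Synthetic "normal" vs "crack-like" pulses (illustrative only)
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
normal = [np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(512)
          for _ in range(30)]
crack = [np.sin(2 * np.pi * 400 * t) * np.exp(-5 * t) + 0.1 * rng.standard_normal(512)
         for _ in range(30)]
X = np.array([features(p) for p in normal + crack])
y = np.array([0] * 30 + [1] * 30)

clf = make_pipeline(PCA(n_components=4), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))
```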
Searching for patterns in remote sensing image databases using neural networks
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library and nearly-optimal speedups have been obtained that help alleviate the long process of searching through imagery.
Design and scheduling for periodic concurrent error detection and recovery in processor arrays
NASA Technical Reports Server (NTRS)
Wang, Yi-Min; Chung, Pi-Yu; Fuchs, W. Kent
1992-01-01
Periodic application of time-redundant error checking provides the trade-off between error detection latency and performance degradation. The goal is to achieve high error coverage while satisfying performance requirements. We derive the optimal scheduling of checking patterns in order to uniformly distribute the available checking capability and maximize the error coverage. Synchronous buffering designs using data forwarding and dynamic reconfiguration are described. Efficient single-cycle diagnosis is implemented by error pattern analysis and direct-mapped recovery cache. A rollback recovery scheme using start-up control for local recovery is also presented.
Detection of pigment network in dermatoscopy images using texture analysis
Anantha, Murali; Moss, Randy H.; Stoecker, William V.
2011-01-01
Dermatoscopy, also known as dermoscopy or epiluminescence microscopy (ELM), is a non-invasive, in vivo technique, which permits visualization of features of pigmented melanocytic neoplasms that are not discernable by examination with the naked eye. ELM offers a completely new range of visual features. One such prominent feature is the pigment network. Two texture-based algorithms are developed for the detection of pigment network. These methods are applicable to various texture patterns in dermatoscopy images, including patterns that lack fine lines such as cobblestone, follicular, or thickened network patterns. Two texture algorithms, Laws energy masks and the neighborhood gray-level dependence matrix (NGLDM) large number emphasis, were optimized on a set of 155 dermatoscopy images and compared. Results suggest superiority of Laws energy masks for pigment network detection in dermatoscopy images. For both methods, a texel width of 10 pixels or approximately 0.22 mm is found for dermatoscopy images. PMID:15249068
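Laws texture energy is a standard construction; the sketch below applies one 5x5 Laws mask and thresholds the energy image, with the window size and threshold as placeholders for values that would be tuned on the 155 training images.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# Classic 1-D Laws vectors; 2-D masks are their outer products.
L5 = np.array([1, 4, 6, 4, 1], float)      # level
E5 = np.array([-1, -2, 0, 2, 1], float)    # edge
S5 = np.array([-1, 0, 2, 0, -1], float)    # spot

def laws_energy(image, a, b, window=15):
    """Texture energy image for the outer-product Laws mask of vectors a, b:
    filter, rectify, and average over a local window (size is illustrative)."""
    mask = np.outer(a, b)
    filtered = convolve(image.astype(float), mask, mode="reflect")
    return uniform_filter(np.abs(filtered), size=window)

def pigment_network_mask(image, threshold):
    """Toy decision rule: call a pixel 'network' when the E5E5 edge-energy
    response exceeds a threshold tuned on training images."""
    return laws_energy(image, E5, E5) > threshold

rng = np.random.default_rng(0)
img = rng.random((128, 128))
print(pigment_network_mask(img, threshold=1.0).mean())
```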
Early Obstacle Detection and Avoidance for All to All Traffic Pattern in Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Huc, Florian; Jarry, Aubin; Leone, Pierre; Moraru, Luminita; Nikoletseas, Sotiris; Rolim, Jose
This paper deals with early obstacles recognition in wireless sensor networks under various traffic patterns. In the presence of obstacles, the efficiency of routing algorithms is increased by voluntarily avoiding some regions in the vicinity of obstacles, areas which we call dead-ends. In this paper, we first propose a fast convergent routing algorithm with proactive dead-end detection together with a formal definition and description of dead-ends. Secondly, we present a generalization of this algorithm which improves performances in all to many and all to all traffic patterns. In a third part we prove that this algorithm produces paths that are optimal up to a constant factor of 2π + 1. In a fourth part we consider the reactive version of the algorithm which is an extension of a previously known early obstacle detection algorithm. Finally we give experimental results to illustrate the efficiency of our algorithms in different scenarios.
Early stage hot spot analysis through standard cell base random pattern generation
NASA Astrophysics Data System (ADS)
Jeon, Joong-Won; Song, Jaewan; Kim, Jeong-Lim; Park, Seongyul; Yang, Seung-Hune; Lee, Sooryong; Kang, Hokyu; Madkour, Kareem; ElManhawy, Wael; Lee, SeungJo; Kwan, Joe
2017-04-01
Due to the limited availability of DRC-clean patterns during process and RET recipe development, OPC recipes are not tested with high pattern coverage. Various kinds of patterns can help OPC engineers detect patterns sensitive to lithographic effects. Random pattern generation is needed to secure a robust OPC recipe. However, simple random patterns that do not reflect real product layout styles cannot cover patterning hotspots at production level, so they are not effective for OPC optimization; it is therefore important to generate random patterns similar to real product patterns. This paper presents a strategy for generating random patterns based on design architecture information and preventing hotspots in the early process development stage through a tool called Layout Schema Generator (LSG). Using LSG, we generate standard-cell-based random patterns reflecting the real design cell structure: fin pitch, gate pitch and cell height. The output standard cells from LSG are applied to an analysis methodology that assesses their hotspot severity by assigning a score according to their optical image parameters (NILS, MEEF, %PV band), so that potential hotspots can be identified from their ranking. This flow is demonstrated on Samsung 7nm technology, optimizing the OPC recipe early enough in the process to avoid using problematic patterns.
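The ranking step can be illustrated with a toy severity score; the normalizations and weights below are assumptions, not the scoring used in the Samsung flow.

```python
# Illustrative severity score combining the optical image parameters named in
# the paper; the weights below are assumptions, not the production recipe.
def hotspot_score(nils, meef, pv_band_pct):
    """Lower NILS, higher MEEF and a larger %PV band all make a pattern riskier."""
    return (1.0 / max(nils, 1e-6)) + 0.5 * meef + 0.1 * pv_band_pct

patterns = [
    {"id": "cellA_gate_tip", "nils": 1.4, "meef": 3.2, "pv_band_pct": 12.0},
    {"id": "cellB_m1_jog",  "nils": 2.3, "meef": 1.8, "pv_band_pct": 6.0},
]
ranked = sorted(patterns,
                key=lambda p: hotspot_score(p["nils"], p["meef"], p["pv_band_pct"]),
                reverse=True)
print([p["id"] for p in ranked])   # most severe pattern first
```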
NASA Technical Reports Server (NTRS)
Mcfarland, M. J.
1975-01-01
Horizontal wind components, potential temperature, and mixing ratio fields associated with a severe storm environment in the south central U.S. were analyzed from synoptic upper air observations with a nonhomogeneous, anisotropic weighting function. Each data field was filtered with variational optimization analysis techniques. Variational optimization analysis was also performed on the vertical motion field and was used to produce advective forecasts of the potential temperature and mixing ratio fields. Results show that the dry intrusion is characterized by warm air, the advection of which produces a well-defined upward motion pattern. A corresponding downward motion pattern comprising a deep vertical circulation in the warm air sector of the low pressure system was detected. The axes of maximum dry and warm advection were also found to be aligned with the axis of the tornado-producing squall line.
Evaluation of a New Digital Automated Glycemic Pattern Detection Tool.
Comellas, María José; Albiñana, Emma; Artes, Maite; Corcoy, Rosa; Fernández-García, Diego; García-Alemán, Jorge; García-Cuartero, Beatriz; González, Cintia; Rivero, María Teresa; Casamira, Núria; Weissmann, Jörg
2017-11-01
Blood glucose meters are reliable devices for data collection, providing electronic logs of historical data that are easier to interpret than handwritten logbooks. Automated tools to analyze these data are necessary to facilitate glucose pattern detection and support treatment adjustment. These tools are emerging in a broad variety, in a more or less unevaluated manner. The aim of this study was to compare eDetecta, a new automated pattern detection tool, to nonautomated pattern analysis in terms of time investment, data interpretation, and clinical utility, with the overarching goal of identifying, early in the development and implementation of the tool, areas of improvement and potential safety risks. A multicenter web-based evaluation was conducted in which 37 endocrinologists were asked to assess glycemic patterns of 4 real reports (2 continuous subcutaneous insulin infusion [CSII] and 2 multiple daily injection [MDI]). Endocrinologist and eDetecta analyses were compared on time spent to analyze each report and agreement on the presence or absence of defined patterns. The eDetecta module markedly reduced the time needed to analyze each case compared with manual analysis of the emminens eConecta reports (CSII: 18 min; MDI: 12.5 min). Agreement between endocrinologists and eDetecta varied depending on the patterns, with a high level of agreement on patterns of glycemic variability. Further analysis of areas with low agreement identified where the algorithms could be improved to optimize trend pattern identification. eDetecta was a useful tool for glycemic pattern detection, helping clinicians reduce the time required to review emminens eConecta glycemic reports. No safety risks were identified during the study.
NASA Astrophysics Data System (ADS)
He, Fei; Han, Ye; Wang, Han; Ji, Jinchao; Liu, Yuanning; Ma, Zhiqiang
2017-03-01
Gabor filters are widely utilized to detect iris texture information in several state-of-the-art iris recognition systems. However, the proper Gabor kernels and the generative pattern of iris Gabor features need to be predetermined in application. Traditional empirically chosen Gabor filters and shallow iris encoding schemes are incapable of dealing with the complex variations in iris imaging, including illumination, aging, deformation, and device variations. Therefore, an adaptive Gabor filter selection strategy and a deep learning architecture are presented. We first employ the particle swarm optimization approach and its binary version to define a set of data-driven Gabor kernels that fit the most informative filtering bands, and then capture complex patterns from the optimal Gabor-filtered coefficients with a trained deep belief network. A succession of comparative experiments validates that our optimal Gabor filters produce more distinctive Gabor coefficients and that our deep iris representations are more robust and stable than traditional iris Gabor codes. Furthermore, the depth and scales of the deep learning architecture are also discussed.
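A minimal Gabor filter bank and sign-coding sketch; in the paper the kernel parameters are selected by (binary) particle swarm optimization and the coefficients are fed to a deep belief network rather than binarized.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real-valued Gabor kernel (cosine carrier under a Gaussian envelope)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_features(iris_strip, wavelengths=(4, 8, 16), orientations=4):
    """Filter a normalized iris strip with a small bank and return the signs
    of the responses as a simple binary code."""
    feats = []
    for lam in wavelengths:
        for k in range(orientations):
            kern = gabor_kernel(15, lam, np.pi * k / orientations, sigma=lam / 2)
            resp = np.real(np.fft.ifft2(np.fft.fft2(iris_strip) *
                                        np.fft.fft2(kern, iris_strip.shape)))
            feats.append((resp > 0).astype(np.uint8).ravel())
    return np.concatenate(feats)

strip = np.random.default_rng(0).random((64, 256))   # stand-in iris strip
print(gabor_features(strip).shape)
```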
Evaluation of a New Digital Automated Glycemic Pattern Detection Tool
Albiñana, Emma; Artes, Maite; Corcoy, Rosa; Fernández-García, Diego; García-Alemán, Jorge; García-Cuartero, Beatriz; González, Cintia; Rivero, María Teresa; Casamira, Núria; Weissmann, Jörg
2017-01-01
Abstract Background: Blood glucose meters are reliable devices for data collection, providing electronic logs of historical data that are easier to interpret than handwritten logbooks. Automated tools to analyze these data are necessary to facilitate glucose pattern detection and support treatment adjustment. These tools are emerging in a broad variety, in a more or less unevaluated manner. The aim of this study was to compare eDetecta, a new automated pattern detection tool, to nonautomated pattern analysis in terms of time investment, data interpretation, and clinical utility, with the overarching goal of identifying, early in the development and implementation of the tool, areas of improvement and potential safety risks. Methods: Multicenter web-based evaluation in which 37 endocrinologists were asked to assess glycemic patterns of 4 real reports (2 continuous subcutaneous insulin infusion [CSII] and 2 multiple daily injection [MDI]). Endocrinologist and eDetecta analyses were compared on time spent to analyze each report and agreement on the presence or absence of defined patterns. Results: The eDetecta module markedly reduced the time needed to analyze each case compared with manual analysis of the emminens eConecta reports (CSII: 18 min; MDI: 12.5 min). Agreement between endocrinologists and eDetecta varied depending on the patterns, with a high level of agreement on patterns of glycemic variability. Further analysis of areas with low agreement identified where the algorithms could be improved to optimize trend pattern identification. Conclusion: eDetecta was a useful tool for glycemic pattern detection, helping clinicians reduce the time required to review emminens eConecta glycemic reports. No safety risks were identified during the study. PMID:29091477
Huang, Xuechen; Denprasert, Petcharat May; Zhou, Li; Vest, Adriana Nicholson; Kohan, Sam; Loeb, Gerald E
2017-09-01
We have developed and applied new methods to estimate the functional life of miniature, implantable, wireless electronic devices that rely on non-hermetic, adhesive encapsulants such as epoxy. A comb pattern board with a high density of interdigitated electrodes (IDE) could be used to detect incipient failure from water vapor condensation. Inductive coupling of an RF magnetic field was used to provide DC bias and to detect deterioration of an encapsulated comb pattern. Diodes in the implant converted part of the received energy into DC bias on the comb pattern. The capacitance of the comb pattern forms a resonant circuit with the inductor by which the implant receives power. Any moisture affects both the resonant frequency and the Q-factor of the resonance of the circuitry, which was detected wirelessly by its effects on the coupling between two orthogonal RF coils placed around the device. Various defects were introduced into the comb pattern devices to demonstrate sensitivity to failures and to correlate these signals with visual inspection of failures. Optimized encapsulation procedures were validated in accelerated life tests of both comb patterns and a functional neuromuscular stimulator under development. Strong adhesive bonding between epoxy and electronic circuitry proved to be necessary and sufficient to predict 1 year packaging reliability of 99.97% for the neuromuscular stimulator.
Optimal Patrol to Detect Attacks at Dispersed Heterogeneous Locations
2013-12-01
...path with one revisit; SPR2, shortest path with two revisits; SPR3, shortest path with three revisits; TSP, traveling salesman problem; UAV, unmanned aerial... path patrol pattern. Finding the shortest-path patrol pattern is an example of solving a traveling salesman problem, as described in Section 16.5 of... use of patrol paths based on the traveling salesman problem (TSP), where patrollers follow the shortest Hamiltonian cycle in a graph in order to...
Gang, G J; Siewerdsen, J H; Stayman, J W
2017-02-11
This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
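A toy rendering of the maxi-min formulation, with a 1-D Gaussian-basis fluence profile, a stand-in detectability model, and random search in place of CMA-ES and the full penalized-likelihood prediction.

```python
import numpy as np

N_VIEWS, N_BASIS = 60, 6
centers = np.linspace(0, N_VIEWS - 1, N_BASIS)

def fluence_profile(coeffs, width=8.0):
    """FFM parameterized by Gaussian basis functions (1-D here for brevity)."""
    views = np.arange(N_VIEWS)
    basis = np.exp(-0.5 * ((views[:, None] - centers[None, :]) / width) ** 2)
    return np.clip(basis @ coeffs, 0.01, None)   # keep fluence positive

def detectability(fluence, attenuation):
    """Stand-in d' model: per-location index grows with the fluence that
    survives attenuation along each view."""
    transmitted = fluence[None, :] * np.exp(-attenuation)
    return np.sqrt(transmitted.sum(axis=1))

def maximin_objective(coeffs, attenuation, dose_budget=60.0):
    f = fluence_profile(coeffs)
    f *= dose_budget / f.sum()                   # enforce a fixed total dose
    return detectability(f, attenuation).min()   # maxi-min over locations

# Toy anatomy: 5 sample locations, each with its own per-view attenuation
rng = np.random.default_rng(0)
attenuation = rng.uniform(0.5, 3.0, size=(5, N_VIEWS))

# Simple random search stands in for CMA-ES in this sketch
best_c, best_val = None, -np.inf
for _ in range(2000):
    c = rng.uniform(0, 1, N_BASIS)
    val = maximin_objective(c, attenuation)
    if val > best_val:
        best_c, best_val = c, val
print(round(best_val, 3), best_c.round(2))
```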
Pigeons ("Columba Livia") Approach Nash Equilibrium in Experimental Matching Pennies Competitions
ERIC Educational Resources Information Center
Sanabria, Federico; Thrailkill, Eric
2009-01-01
The game of Matching Pennies (MP), a simplified version of the more popular Rock, Paper, Scissors, schematically represents competitions between organisms with incentives to predict each other's behavior. Optimal performance in iterated MP competitions involves the production of random choice patterns and the detection of nonrandomness in the…
NASA Astrophysics Data System (ADS)
Sims, David W.
2015-09-01
The seminal papers by Viswanathan and colleagues in the late 1990s [1,2] proposed not only that scale-free, superdiffusive Lévy walks can describe the free-ranging movement patterns observed in animals such as the albatross [1], but that the Lévy walk was optimal for searching for sparsely and randomly distributed resource targets [2]. This distinct advantage, now shown to be present over a much broader set of conditions than originally theorised [3], implied that the Lévy walk is a search strategy that should be found very widely in organisms [4]. In the years since there have been several influential empirical studies showing that Lévy walks can indeed be detected in the movement patterns of a very broad range of taxa, from jellyfish, insects, fish, reptiles, seabirds, humans [5-10], and even in the fossilised trails of extinct invertebrates [11]. The broad optimality and apparent deep evolutionary origin of movement (search) patterns that are well approximated by Lévy walks led to the development of the Lévy flight foraging (LFF) hypothesis [12], which states that "since Lévy flights and walks can optimize search efficiencies, therefore natural selection should have led to adaptations for Lévy flight foraging".
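A Lévy walk with power-law step lengths is easy to simulate; the sketch below uses inverse-transform sampling of a Pareto tail with exponent mu (mu near 2 being the classic search optimum).

```python
import numpy as np

def levy_walk(n_steps, mu=2.0, l_min=1.0, seed=0):
    """2-D Lévy walk: step lengths drawn from the power-law tail
    p(l) ~ l**(-mu) with 1 < mu <= 3, directions uniform.
    Uses inverse-transform sampling of the Pareto tail."""
    rng = np.random.default_rng(seed)
    u = rng.random(n_steps)
    lengths = l_min * (1 - u) ** (-1.0 / (mu - 1.0))   # Pareto(mu-1) tail
    angles = rng.uniform(0, 2 * np.pi, n_steps)
    steps = np.column_stack([lengths * np.cos(angles), lengths * np.sin(angles)])
    return np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])

path = levy_walk(1000, mu=2.0)
print(path.shape, np.abs(np.diff(path, axis=0)).max())   # occasional long jumps
```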
Pietrowska, M; Marczak, L; Polanska, J; Nowicka, E; Behrent, K; Tarnawski, R; Stobiecki, M; Polanski, A; Widlak, P
2010-01-01
Mass spectrometry-based analysis of the serum proteome allows the identification of multi-peptide patterns/signatures specific to the blood of cancer patients, and thus has high potential value for cancer diagnostics. However, because of problems with optimization and standardization of the experimental and computational design, none of the identified proteome patterns/signatures has yet been approved for diagnostics in clinical practice. Here we compared two methods of serum sample preparation for mass spectrometry-based proteome pattern analysis aimed at identifying biomarkers that could be used in early detection of breast cancer. Blood samples were collected in a group of 92 patients diagnosed at early (I and II) stages of the disease before the start of therapy, and in a group of age-matched healthy controls (104 women). Serum specimens were purified and analyzed using MALDI-ToF spectrometry, either directly or after membrane filtration (50 kDa cut-off) to remove albumin and other large serum proteins. Mass spectra of the low-molecular-weight fraction (2-10 kDa) of the serum proteome were resolved using Gaussian mixture decomposition, and the identified spectral components were used to build classifiers that differentiated samples from breast cancer patients and healthy persons. Mass spectra of complete serum and membrane-filtered albumin-depleted samples have apparently different structures, and peaks specific for both types of samples could be identified. The optimal classifier built for the complete serum specimens consisted of 8 spectral components and had 81% specificity and 72% sensitivity, while that built for the membrane-filtered samples consisted of 4 components and had 80% specificity and 81% sensitivity. We concluded that pre-processing of samples to remove albumin might be recommended before MALDI-ToF mass spectrometric analysis of the low-molecular-weight components of human serum. Keywords: albumin removal; breast cancer; clinical proteomics; mass spectrometry; pattern analysis; serum proteome.
Automatic burst detection for the EEG of the preterm infant.
Jennekens, Ward; Ruijs, Loes S; Lommen, Charlotte M L; Niemarkt, Hendrik J; Pasman, Jaco W; van Kranen-Mastenbroek, Vivianne H J M; Wijn, Pieter F F; van Pul, Carola; Andriessen, Peter
2011-10-01
To aid with prognosis and stratification of clinical treatment for preterm infants, a method for automated detection of bursts, interburst-intervals (IBIs) and continuous patterns in the electroencephalogram (EEG) is developed. Results are evaluated for preterm infants with normal neurological follow-up at 2 years. The detection algorithm (MATLAB®) for burst, IBI and continuous pattern is based on selection by amplitude, time span, number of channels and numbers of active electrodes. Annotations of two neurophysiologists were used to determine threshold values. The training set consisted of EEG recordings of four preterm infants with postmenstrual age (PMA, gestational age + postnatal age) of 29-34 weeks. Optimal threshold values were based on overall highest sensitivity. For evaluation, both observers verified detections in an independent dataset of four EEG recordings with comparable PMA. Algorithm performance was assessed by calculation of sensitivity and positive predictive value. The results of algorithm evaluation are as follows: sensitivity values of 90% ± 6%, 80% ± 9% and 97% ± 5% for burst, IBI and continuous patterns, respectively. Corresponding positive predictive values were 88% ± 8%, 96% ± 3% and 85% ± 15%, respectively. In conclusion, the algorithm showed high sensitivity and positive predictive values for bursts, IBIs and continuous patterns in preterm EEG. Computer-assisted analysis of EEG may allow objective and reproducible analysis for clinical treatment.
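A simplified version of the rule-based detector, assuming illustrative thresholds rather than the values fitted to the neurophysiologists' annotations.

```python
import numpy as np

def detect_bursts(eeg, fs, amp_uV=25.0, min_dur_s=1.0, min_channels=3):
    """Call a burst wherever the absolute amplitude exceeds `amp_uV` on at
    least `min_channels` channels for at least `min_dur_s`; returns a list
    of (start_s, end_s) intervals. Thresholds here are illustrative."""
    active = (np.abs(eeg) > amp_uV).sum(axis=0) >= min_channels  # per sample
    bursts, start = [], None
    for i, on in enumerate(np.append(active, False)):
        if on and start is None:
            start = i
        elif not on and start is not None:
            if (i - start) / fs >= min_dur_s:
                bursts.append((start / fs, i / fs))
            start = None
    return bursts

rng = np.random.default_rng(0)
eeg = rng.normal(0, 5, size=(8, 30 * 256))          # 8 channels, 30 s at 256 Hz
eeg[:, 10 * 256:13 * 256] += 60                     # injected 3 s burst
print(detect_bursts(eeg, fs=256))
```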
Is countershading camouflage robust to lighting change due to weather?
2018-01-01
Countershading is a pattern of coloration thought to have evolved in order to implement camouflage. By adopting a pattern of coloration that makes the surface facing towards the sun darker and the surface facing away from the sun lighter, the overall amount of light reflected off an animal can be made more uniformly bright. Countershading could hence contribute to visual camouflage by increasing background matching or reducing cues to shape. However, the usefulness of countershading is constrained by a particular pattern delivering ‘optimal’ camouflage only for very specific lighting conditions. In this study, we test the robustness of countershading camouflage to lighting change due to weather, using human participants as a ‘generic’ predator. In a simulated three-dimensional environment, we constructed an array of simple leaf-shaped items and a single ellipsoidal target ‘prey’. We set these items in two light environments: strongly directional ‘sunny’ and more diffuse ‘cloudy’. The target object was given the optimal pattern of countershading for one of these two environment types or displayed a uniform pattern. By measuring detection time and accuracy, we explored whether and how target detection depended on the match between the pattern of coloration on the target object and scene lighting. Detection times were longest when the countershading was appropriate to the illumination; incorrectly camouflaged targets were detected with a similar pattern of speed and accuracy to uniformly coloured targets. We conclude that structural changes in light environment, such as caused by differences in weather, do change the effectiveness of countershading camouflage. PMID:29515822
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt Beran; John Christenson; Dragos Nica
2002-12-15
The goal of the project is to enable plant operators to detect with high sensitivity and reliability the onset of decalibration drifts in all of the instrumentation used as input to the reactor heat balance calculations. To achieve this objective, the collaborators developed and implemented at DBNPS an extension of the Multivariate State Estimation Technique (MSET) pattern recognition methodology pioneered by ANL. The extension was implemented during the second phase of the project and fully achieved the project goal.
Feature-space-based FMRI analysis using the optimal linear transformation.
Sun, Fengrong; Morris, Drew; Lee, Wayne; Taylor, Margot J; Mills, Travis; Babyn, Paul S
2010-09-01
The optimal linear transformation (OLT), a feature-space image analysis technique, was first presented in the field of MRI. This paper proposes a method of extending OLT from MRI to functional MRI (fMRI) to improve the activation-detection performance over conventional approaches of fMRI analysis. In this method, first, ideal hemodynamic response time series for different stimuli were generated by convolving the theoretical hemodynamic response model with the stimulus timing. Second, hypothetical signature vectors for the different activity patterns of interest were constructed from the ideal hemodynamic responses, and OLT was used to extract features of the fMRI data. The resultant feature space had particular geometric clustering properties. It was then classified into different groups, each pertaining to an activity pattern of interest; the applied signature vector for each group was obtained by averaging. Third, using the applied signature vectors, OLT was applied again to generate fMRI composite images with high SNRs for the desired activity patterns. Simulations and a blocked fMRI experiment were employed to verify the method and compare it with general linear model (GLM)-based analysis. The simulation studies and the experimental results indicated the superiority of the proposed method over the GLM-based analysis in detecting brain activities.
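The first step described above, generating ideal hemodynamic response time series by convolving a response model with the stimulus timing, can be sketched as follows; this is a minimal sketch assuming a standard double-gamma response shape and an event-type stimulus train with illustrative timing, not the authors' implementation.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(tr, duration=32.0):
    """Canonical double-gamma hemodynamic response sampled every `tr` seconds."""
    t = np.arange(0, duration, tr)
    hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)   # peak minus late undershoot
    return hrf / hrf.max()

def ideal_response(onsets, n_scans, tr):
    """Convolve an impulse train at the stimulus onsets with the HRF."""
    stim = np.zeros(n_scans)
    stim[(np.asarray(onsets) / tr).astype(int)] = 1.0
    return np.convolve(stim, double_gamma_hrf(tr))[:n_scans]

# One event every 40 s, TR = 2 s, 100 scans: the resulting time series is the
# building block for the hypothetical signature vectors.
tr, n_scans = 2.0, 100
onsets = np.arange(0.0, n_scans * tr, 40.0)
print(ideal_response(onsets, n_scans, tr)[:12].round(3))
```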
Jensen, Morten Hasselstrøm; Christensen, Toke Folke; Tarnow, Lise; Seto, Edmund; Dencker Johansen, Mette; Hejlesen, Ole Kristian
2013-07-01
Hypoglycemia is a potentially fatal condition. Continuous glucose monitoring (CGM) has the potential to detect hypoglycemia in real time and thereby reduce time in hypoglycemia and avoid any further decline in blood glucose level. However, CGM is inaccurate and shows a substantial number of cases in which the hypoglycemic event is not detected by the CGM. The aim of this study was to develop a pattern classification model to optimize real-time hypoglycemia detection. Features such as time since last insulin injection and linear regression, kurtosis, and skewness of the CGM signal in different time intervals were extracted from data of 10 male subjects experiencing 17 insulin-induced hypoglycemic events in an experimental setting. Nondiscriminative features were eliminated with SEPCOR and forward selection. The feature combinations were used in a Support Vector Machine model and the performance assessed by sample-based sensitivity and specificity and event-based sensitivity and number of false-positives. The best model used seven features and was able to detect 17 of 17 hypoglycemic events with one false-positive compared with 12 of 17 hypoglycemic events with zero false-positives for the CGM alone. Lead-time was 14 min and 0 min for the model and the CGM alone, respectively. This optimized real-time hypoglycemia detection provides a unique approach for the diabetes patient to reduce time in hypoglycemia and learn about patterns in glucose excursions. Although these results are promising, the model needs to be validated on CGM data from patients with spontaneous hypoglycemic events.
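A sketch of the feature-plus-classifier idea (window features such as regression slope, kurtosis and skewness of the CGM trace plus time since the last insulin injection, fed to a support vector machine) is given below; the window length, units, toy data and SVM settings are assumptions, and the SEPCOR/forward-selection step is omitted.

```python
import numpy as np
from scipy.stats import kurtosis, skew, linregress
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def window_features(glucose, minutes_since_insulin):
    """Features from one CGM window (assumed 5-min samples, mmol/L)."""
    t = np.arange(len(glucose))
    slope = linregress(t, glucose).slope
    return [slope, kurtosis(glucose), skew(glucose), glucose[-1], minutes_since_insulin]

# Toy training windows: falling low traces labelled hypoglycemia-prone (1), stable ones (0)
rng = np.random.default_rng(1)
X, y = [], []
for _ in range(200):
    hypo = int(rng.integers(0, 2))
    base, trend = (4.0, -0.15) if hypo else (7.0, 0.0)
    trace = base + trend * np.arange(12) + 0.2 * rng.standard_normal(12)
    X.append(window_features(trace, rng.uniform(0, 240)))
    y.append(hypo)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
falling = 6.0 - 0.2 * np.arange(12)
print(clf.predict([window_features(falling, 60.0)]))   # likely [1] for a falling trace
```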
Discovering biclusters in gene expression data based on high-dimensional linear geometries
Gan, Xiangchao; Liew, Alan Wee-Chung; Yan, Hong
2008-01-01
Background In DNA microarray experiments, discovering groups of genes that share similar transcriptional characteristics is instrumental in functional annotation, tissue classification and motif identification. However, in many situations a subset of genes only exhibits consistent pattern over a subset of conditions. Conventional clustering algorithms that deal with the entire row or column in an expression matrix would therefore fail to detect these useful patterns in the data. Recently, biclustering has been proposed to detect a subset of genes exhibiting consistent pattern over a subset of conditions. However, most existing biclustering algorithms are based on searching for sub-matrices within a data matrix by optimizing certain heuristically defined merit functions. Moreover, most of these algorithms can only detect a restricted set of bicluster patterns. Results In this paper, we present a novel geometric perspective for the biclustering problem. The biclustering process is interpreted as the detection of linear geometries in a high dimensional data space. Such a new perspective views biclusters with different patterns as hyperplanes in a high dimensional space, and allows us to handle different types of linear patterns simultaneously by matching a specific set of linear geometries. This geometric viewpoint also inspires us to propose a generic bicluster pattern, i.e. the linear coherent model that unifies the seemingly incompatible additive and multiplicative bicluster models. As a particular realization of our framework, we have implemented a Hough transform-based hyperplane detection algorithm. The experimental results on human lymphoma gene expression dataset show that our algorithm can find biologically significant subsets of genes. Conclusion We have proposed a novel geometric interpretation of the biclustering problem. We have shown that many common types of bicluster are just different spatial arrangements of hyperplanes in a high dimensional data space. An implementation of the geometric framework using the Fast Hough transform for hyperplane detection can be used to discover biologically significant subsets of genes under subsets of conditions for microarray data analysis. PMID:18433477
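As a small numerical illustration of the linear coherent model described above (not the authors' Hough-transform implementation), the toy example below shows that both additive and multiplicative biclusters collapse onto a low-dimensional affine subspace in condition space, which is exactly the kind of linear geometry a hyperplane detector can pick up.

```python
import numpy as np

rng = np.random.default_rng(0)
rows = 50

# Additive bicluster: x_ij = r_i + c_j (row effect plus condition effect)
r = rng.normal(size=rows)
c = np.array([0.0, 1.5, -2.0])
additive = r[:, None] + c[None, :]

# Multiplicative bicluster: x_ij = r_i * c_j, additive after taking logs
multiplicative = np.exp(r)[:, None] * np.exp(c)[None, :]

def affine_dim(points):
    """Dimension of the affine subspace spanned by the points (rows)."""
    centered = points - points.mean(axis=0)
    return np.linalg.matrix_rank(centered, tol=1e-10)

# Each gene is a point in 3-D condition space; both biclusters lie on a
# one-dimensional affine subspace, so they sit inside detectable hyperplanes.
print(affine_dim(additive), affine_dim(np.log(multiplicative)))   # 1 1
```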
Human connectome module pattern detection using a new multi-graph MinMax cut model.
De, Wang; Wang, Yang; Nie, Feiping; Yan, Jingwen; Cai, Weidong; Saykin, Andrew J; Shen, Li; Huang, Heng
2014-01-01
Many recent scientific efforts have been devoted to constructing the human connectome using Diffusion Tensor Imaging (DTI) data for understanding the large-scale brain networks that underlie higher-level cognition in humans. However, suitable computational network analysis tools are still lacking in human connectome research. To address this problem, we propose a novel multi-graph min-max cut model to detect consistent network modules from the brain connectivity networks of all studied subjects. A new multi-graph MinMax cut model is introduced to solve this challenging computational neuroscience problem, and an efficient optimization algorithm is derived. In the identified connectome module patterns, each network module shows similar connectivity patterns in all subjects, which potentially associate with specific brain functions shared by all subjects. We validate our method by analyzing the weighted fiber connectivity networks. The promising empirical results demonstrate the effectiveness of our method.
A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features.
Amudha, P; Karthik, S; Sivakumari, S
2015-01-01
Intrusion detection has become a major part of network security due to the huge number of attacks affecting computers, a consequence of the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, this paper proposes a hybrid algorithm that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) to address the intrusion detection problem. The algorithms are combined to obtain better optimization results, and classification accuracies are estimated by 10-fold cross-validation. The purpose of this paper is to select the most relevant features that represent the patterns of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the KDDCup'99 intrusion detection benchmark dataset from the UCI Machine Learning Repository is used. The performance of the proposed method is compared with other machine learning algorithms and found to be significantly different.
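A sketch of the wrapper-style search that such hybrids perform, scoring binary feature masks by 10-fold cross-validated accuracy, is shown below; a simple random-mutation loop stands in for the MABC/EPSO swarm, and synthetic data stand in for the KDDCup'99 records.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in data; the paper uses KDDCup'99 network-traffic records.
X, y = make_classification(n_samples=600, n_features=20, n_informative=6, random_state=0)
clf = DecisionTreeClassifier(random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    """10-fold cross-validated accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(clf, X[:, mask], y, cv=10).mean()

# Tiny random-mutation search over binary masks -- a placeholder for the
# MABC/EPSO hybrid, which explores the same search space more systematically.
best = rng.integers(0, 2, size=X.shape[1]).astype(bool)
best_score = fitness(best)
for _ in range(100):
    cand = best.copy()
    flip = rng.integers(0, len(cand))
    cand[flip] = not cand[flip]
    score = fitness(cand)
    if score > best_score:
        best, best_score = cand, score

print(f"{best.sum()} features selected, CV accuracy {best_score:.3f}")
```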
Damage Detection Using Holography and Interferometry
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
2003-01-01
This paper reviews classical approaches to damage detection using laser holography and interferometry. The paper then details the modern uses of electronic holography and neural-net-processed characteristic patterns to detect structural damage. The design of the neural networks and the preparation of the training sets are discussed. The use of a technique to optimize the training sets, called folding, is explained. Then a training procedure is detailed that uses the holography-measured vibration modes of the undamaged structures to impart damage-detection sensitivity to the neural networks. The inspections of an optical strain gauge mounting plate and an International Space Station cold plate are presented as examples.
Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography
Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.
2017-01-01
Purpose This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290
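A rough sketch of the optimization loop described above (a maxi-min detectability objective over sample locations, driven by CMA-ES through a low-dimensional basis parameterization of the fluence profile) is given below using the open-source cma package; the detectability surrogate, 1D basis functions and all dimensions are toy assumptions, not the paper's d′ model or 2D Gaussian basis.

```python
import numpy as np
import cma  # pip install cma

n_views, n_basis, n_locations = 180, 8, 25
angles = np.linspace(0, np.pi, n_views)
centers = np.linspace(0, np.pi, n_basis)

def fluence_profile(coeffs):
    """Fluence over view angle as a nonnegative sum of Gaussian basis functions."""
    basis = np.exp(-0.5 * ((angles[:, None] - centers[None, :]) / 0.3) ** 2)
    return np.clip(basis @ np.abs(coeffs), 1e-3, None)

rng = np.random.default_rng(0)
sensitivity = rng.uniform(0.2, 1.0, size=(n_locations, n_views))  # toy surrogate model

def neg_min_detectability(coeffs):
    """Maxi-min objective: maximize the worst-case detectability over locations."""
    f = fluence_profile(coeffs)
    f = f / f.sum() * n_views          # fixed total-fluence (dose) budget
    d = sensitivity @ f                # surrogate detectability per location
    return -d.min()                    # CMA-ES minimizes

es = cma.CMAEvolutionStrategy(np.ones(n_basis), 0.5, {"verbose": -9, "maxfevals": 2000})
es.optimize(neg_min_detectability)
print("worst-case detectability:", -neg_min_detectability(es.result.xbest))
```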
Quantitative approach for optimizing e-beam condition of photoresist inspection and measurement
NASA Astrophysics Data System (ADS)
Lin, Chia-Jen; Teng, Chia-Hao; Cheng, Po-Chung; Sato, Yoshishige; Huang, Shang-Chieh; Chen, Chu-En; Maruyama, Kotaro; Yamazaki, Yuichiro
2018-03-01
The tight process margins of advanced semiconductor technology nodes are controlled by e-beam metrology and e-beam inspection systems based on scanning electron microscopy (SEM) images. Larger-area images with higher image quality are required to collect massive amounts of data for metrology and to detect defects over a large area for inspection. Although photoresist patterning is one of the critical processes in semiconductor device manufacturing, observing photoresist patterns in SEM images is troublesome, especially for large images. The charging effect caused by e-beam irradiation of the photoresist pattern degrades image quality, increases CD variation in the metrology system, and makes it difficult to continue defect inspection over a large area for a long time. In this study, we established a quantitative approach for optimizing the e-beam condition with the "Die to Database" algorithm of NGR3500 on photoresist patterns to minimize the charging effect, and we enhanced measurement and inspection performance on photoresist patterns by using the optimized e-beam condition. NGR3500 is a geometry verification system based on a "Die to Database" algorithm that compares SEM images with design data [1]. By comparing SEM images and design data, key performance indicators (KPIs) of the SEM image such as "Sharpness", "S/N", "Gray level variation in FOV" and "Image shift" can be retrieved. These KPIs were analyzed for different e-beam conditions, defined by "Landing Energy", "Probe Current", "Scanning Speed" and "Scanning Method", and the best e-beam condition was chosen to achieve maximum image quality and scanning speed with minimum image shift. With this quantitative approach to optimizing the e-beam condition, we could observe the dependence of photoresist charging on the SEM condition. Using the optimized e-beam condition, measurements could be continued stably on photoresist patterns for over 24 hours, and the SEM-image KPIs confirmed that image quality remained sufficiently stable during measurement and inspection.
Ibrahim, Heba; Saad, Amr; Abdo, Amany; Sharaf Eldin, A
2016-04-01
Pharmacovigilance (PhV) is an important clinical activity with strong implications for population health and clinical research. The main goal of PhV is the timely detection of adverse drug events (ADEs) that are novel in their clinical nature, severity and/or frequency. Drug interactions (DI) pose an important problem in the development of new drugs and post-marketing PhV, contributing to 6-30% of all unexpected ADEs. Therefore, the early detection of DI is vital. Spontaneous reporting systems (SRS) have served as the core data collection system for post-marketing PhV since the 1960s. The main objective of our study was, in particular, to identify signals of DI from SRS. In addition, we present an optimized, tailored mining algorithm called "hybrid Apriori". The proposed algorithm is based on an optimized and modified association rule mining (ARM) approach. The hybrid Apriori algorithm has been applied to the SRS of the United States Food and Drug Administration's (U.S. FDA) adverse events reporting system (FAERS) in order to extract significant association patterns of drug interaction-adverse event (DIAE). We have assessed the resulting DIAEs qualitatively and quantitatively using two different triage features: a three-element taxonomy and three performance metrics. These features were applied to two random samples of 100 interacting and 100 non-interacting DIAE patterns. Additionally, we have employed the logistic regression (LR) statistical method to quantify the magnitude and direction of interactions in order to test for confounding by co-medication in unknown interacting DIAE patterns. Hybrid Apriori extracted 2933 interacting DIAE patterns (including 1256 serious ones) and 530 non-interacting DIAE patterns. Referring to current knowledge from four different reliable resources on DI, the results showed that the proposed method can extract signals of serious interacting DIAEs. Various association patterns could be identified based on the relationships among the elements that compose a pattern. The average performance of the method showed 85% precision, 80% negative predictive value, 81% sensitivity and 84% specificity. The LR modeling could provide the statistical context to guard against spurious DIAEs. The proposed method could efficiently detect DIAE signals from SRS data as well as identify rare adverse drug reactions (ADRs). Copyright © 2016 Elsevier Inc. All rights reserved.
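The association-rule core of this kind of signal detection can be sketched with a plain Apriori pass over one-hot-encoded reports, as below with the mlxtend library; the reports, item names and thresholds are invented for illustration, and the paper's specific "hybrid" modifications, triage features and LR confounding check are not reproduced.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy spontaneous reports: each row lists drugs (D_*) and adverse events (AE_*).
reports = [
    {"D_warfarin", "D_aspirin", "AE_bleeding"},
    {"D_warfarin", "D_aspirin", "AE_bleeding"},
    {"D_warfarin", "AE_bleeding"},
    {"D_aspirin"},
    {"D_metformin", "AE_nausea"},
    {"D_metformin", "D_aspirin", "AE_nausea"},
]
items = sorted(set().union(*reports))
onehot = pd.DataFrame([[item in r for item in items] for r in reports], columns=items)

# Frequent itemsets and rules; keeping rules whose antecedent is all drugs and
# whose consequent is all events surfaces candidate drug-interaction signals.
frequent = apriori(onehot, min_support=0.2, use_colnames=True)
rules = association_rules(frequent, metric="lift", min_threshold=1.0)
diae = rules[
    rules["antecedents"].apply(lambda s: all(x.startswith("D_") for x in s))
    & rules["consequents"].apply(lambda s: all(x.startswith("AE_") for x in s))
]
print(diae[["antecedents", "consequents", "support", "confidence", "lift"]])
```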
Best chirplet chain: Near-optimal detection of gravitational wave chirps
NASA Astrophysics Data System (ADS)
Chassande-Mottin, Éric; Pai, Archana
2006-02-01
The list of putative sources of gravitational waves possibly detected by the ongoing worldwide network of large-scale interferometers has been growing continuously in recent years. For some of them, the detection is made difficult by the lack of complete information about the expected signal. We concentrate on the case where the expected gravitational wave (GW) is a quasiperiodic frequency-modulated signal, i.e., a chirp. In this article, we address the question of detecting an a priori unknown GW chirp. We introduce a general chirp model and claim that it includes all physically realistic GW chirps. We produce a finite grid of template waveforms which samples the resulting set of possible chirps. If we followed the classical approach (used for the detection of inspiralling binary chirps, for instance), we would build a bank of quadrature matched filters comparing the data to each of the templates of this grid. The detection would then be achieved by thresholding the output, the maximum giving the individual which best fits the data. In the present case, this exhaustive search is not tractable because of the very large number of templates in the grid. We show that the exhaustive search can be reformulated (using approximations) as a pattern search in the time-frequency plane. This motivates an approximate but feasible alternative solution which is clearly linked to the optimal one. The time-frequency representation and pattern search algorithm are fully determined by the reformulation. This contrasts with the other time-frequency based methods presented in the literature for the same problem, where these choices are justified by “ad hoc” arguments. In particular, the time-frequency representation has to be unitary. Finally, we assess the performance, robustness and computational cost of the proposed method with several benchmarks using simulated data.
A fast process development flow by applying design technology co-optimization
NASA Astrophysics Data System (ADS)
Chen, Yi-Chieh; Yeh, Shin-Shing; Ou, Tsong-Hua; Lin, Hung-Yu; Mai, Yung-Ching; Lin, Lawrence; Lai, Jun-Cheng; Lai, Ya Chieh; Xu, Wei; Hurat, Philippe
2017-03-01
Beyond the 40 nm technology node, pattern weak points and hotspot types increase dramatically. The typical pattern sets used for lithography verification suffer from a huge turn-around time (TAT) in handling the design complexity. Therefore, in order to speed up process development and increase pattern variety, accurate design guidelines and realistic design combinations are required. This paper presents a flow for creating a cell-based layout, a lite realistic design, to identify early the problematic patterns that will negatively affect yield. A new random layout generation method, the Design Technology Co-Optimization Pattern Generator (DTCO-PG), is reported in this paper to create cell-based designs. DTCO-PG also characterizes randomness and fuzziness, so that a machine learning scheme can be built whose model is trained on previous results and then generates patterns never seen in a lite design. This methodology not only increases pattern diversity but also identifies potential hotspots at a preliminary stage. This paper also demonstrates an integrated flow from DTCO pattern generation to layout modification. Optical proximity correction (OPC) and lithographic simulation are then applied to the DTCO-PG design database to detect hotspots, which can then be fixed automatically through the procedure or handled manually. This flow gives process development a faster cycle time, more complex pattern designs, a higher probability of finding potential hotspots at an early stage, and a more holistic yield-ramping operation.
Regular Deployment of Wireless Sensors to Achieve Connectivity and Information Coverage
Cheng, Wei; Li, Yong; Jiang, Yi; Yin, Xipeng
2016-01-01
Coverage and connectivity are two of the most critical research subjects in WSNs, while regular deterministic deployment is an important deployment strategy and results in pattern-based lattice WSNs. Some studies of optimal regular deployment for generic values of rc/rs have appeared recently. However, most of these deployments assume a disk sensing model and cannot take advantage of data fusion. Meanwhile, some other studies adapt detection techniques and data fusion to sensing coverage to enhance the deployment scheme. In this paper, we provide results on optimal regular deployment patterns that achieve information coverage and connectivity over a range of rc/rs values, all based on data fusion through sensor collaboration, and propose a novel data fusion strategy for deployment patterns. First, the relation between rc/rs and the density of sensors needed to achieve information coverage and connectivity is derived in closed form for regular pattern-based lattice WSNs. Then a dual triangular pattern deployment based on our novel data fusion strategy is proposed, which utilizes collaborative data fusion more efficiently. The strip-based deployment is also extended to a new pattern that achieves information coverage and connectivity, and its characteristics are deduced in closed form. Discussions and simulations are given to show the efficiency of all the deployment patterns, both previous and proposed, to help developers make better-informed WSN deployment decisions. PMID:27529246
On optimal current patterns for electrical impedance tomography.
Demidenko, Eugene; Hartov, Alex; Soni, Nirmal; Paulsen, Keith D
2005-02-01
We develop a statistical criterion for optimal patterns in planar circular electrical impedance tomography. These patterns minimize the total variance of the estimation for the resistance or conductance matrix. It is shown that trigonometric patterns (Isaacson, 1986), originally derived from the concept of distinguishability, are a special case of our optimal statistical patterns. New optimal random patterns are introduced. Recovering the electrical properties of the measured body is greatly simplified when optimal patterns are used. The Neumann-to-Dirichlet map and the optimal patterns are derived for a homogeneous medium with an arbitrary distribution of the electrodes on the periphery. As a special case, optimal patterns are developed for a practical EIT system with a finite number of electrodes. For a general nonhomogeneous medium, with no a priori restriction, the optimal patterns for the resistance and conductance matrix are the same. However, for a homogeneous medium, the best current pattern is the worst voltage pattern and vice versa. We study the effect of the number and the width of the electrodes on the estimate of resistivity and conductivity in a homogeneous medium. We confirm experimentally that the optimal patterns produce minimum conductivity variance in a homogeneous medium. Our statistical model is able to discriminate between a homogenous agar phantom and one with a 2 mm air hole with error probability (p-value) 1/1000.
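For reference, the trigonometric current patterns cited above (Isaacson, 1986) for L equally spaced electrodes on a circular boundary can be generated as in the sketch below; this illustrates only the classical patterns, not the paper's new optimal random patterns or its statistical criterion.

```python
import numpy as np

def trigonometric_patterns(n_electrodes):
    """Trigonometric current patterns for electrodes at angles theta_l on a circle:
    cos(k*theta) for k = 1..L/2 and sin(k*theta) for k = 1..L/2-1, giving the
    standard L-1 linearly independent patterns."""
    theta = 2 * np.pi * np.arange(n_electrodes) / n_electrodes
    patterns = []
    for k in range(1, n_electrodes // 2 + 1):
        patterns.append(np.cos(k * theta))
        if k < n_electrodes // 2:
            patterns.append(np.sin(k * theta))
    return np.array(patterns)

P = trigonometric_patterns(16)
print(P.shape)                       # (15, 16): L-1 patterns for L = 16 electrodes
print(np.abs(P.sum(axis=1)).max())   # ~0: each pattern conserves total current
```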
Detection of movement intention from single-trial movement-related cortical potentials
NASA Astrophysics Data System (ADS)
Niazi, Imran Khan; Jiang, Ning; Tiberghien, Olivier; Feldbæk Nielsen, Jørgen; Dremstrup, Kim; Farina, Dario
2011-10-01
Detection of movement intention from neural signals combined with assistive technologies may be used for effective neurofeedback in rehabilitation. In order to promote plasticity, a causal relation between intended actions (detected for example from the EEG) and the corresponding feedback should be established. This requires reliable detection of motor intentions. In this study, we propose a method to detect movements from EEG with limited latency. In a self-paced asynchronous BCI paradigm, the initial negative phase of the movement-related cortical potentials (MRCPs), extracted from multi-channel scalp EEG was used to detect motor execution/imagination in healthy subjects and stroke patients. For MRCP detection, it was demonstrated that a new optimized spatial filtering technique led to better accuracy than a large Laplacian spatial filter and common spatial pattern. With the optimized spatial filter, the true positive rate (TPR) for detection of movement execution in healthy subjects (n = 15) was 82.5 ± 7.8%, with latency of -66.6 ± 121 ms. Although TPR decreased with motor imagination in healthy subject (n = 10, 64.5 ± 5.33%) and with attempted movements in stroke patients (n = 5, 55.01 ± 12.01%), the results are promising for the application of this approach to provide patient-driven real-time neurofeedback.
Comparison of three underwater antennas for use in radiotelemetry
Beeman, J.W.; Grant, C.; Haner, P.V.
2004-01-01
The radiation patterns of three versions of underwater radiotelemetry antennas were measured to compare the relative reception ranges in the horizontal and vertical planes, which are important considerations when designing detection systems. The received signal strengths of an antenna made by stripping shielding from a section of coaxial cable (stripped coax) and by two versions of a dipole antenna were measured at several orientations relative to a dipole transmit antenna under controlled field conditions. The received signal strengths were greater when the transmit and receive antennas were parallel to each other than when they were perpendicular, indicating that a parallel orientation provides optimal detection range. The horizontal plane radiation pattern of the flexible, stripped coax antenna was similar to that of a rigid dipole antenna, but movement of underwater stripped coax antennas in field applications could affect the orientation of transmit and receive antennas in some applications, resulting in decreased range and variation in received signal strengths. Compared with a standard dipole, a dipole antenna armored by housing within a polyvinyl chloride fitting had a smaller radiation pattern in the horizontal plane but a larger radiation pattern in the vertical plane. Each of these types of underwater antenna can be useful, but detection ranges can be maximized by choosing an appropriate antenna after consideration of the location, relation between transmit and receive antenna orientations, radiation patterns, and overall antenna resiliency.
Shin, Jae Hyuk; Lee, Boreom; Park, Kwang Suk
2011-05-01
In this study, we developed an automated behavior analysis system using infrared (IR) motion sensors to assist the independent living of the elderly who live alone and to improve the efficiency of their healthcare. An IR motion-sensor-based activity-monitoring system was installed in the houses of the elderly subjects to collect motion signals and three feature values: activity level, mobility level, and nonresponse interval (NRI). These factors were calculated from the measured motion signals. The support vector data description (SVDD) method was used to classify normal behavior patterns and to detect abnormal behavioral patterns based on the aforementioned three feature values. Simulation data and real data were used to verify the proposed method in individual analyses. A robust scheme is presented for optimally selecting the parameter values, in particular the scale parameter of the Gaussian kernel function used in SVDD training and the window length T of the circadian-rhythm approach, so that the SVDD can be applied to daily behavior patterns calculated over 24 h. Accuracies by positive predictive value (PPV) were 95.8% and 90.5% for the simulation and real data, respectively. The results suggest that the monitoring system utilizing the IR motion sensors and abnormal-behavior-pattern detection with SVDD are effective methods for home healthcare of elderly people living alone.
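Because SVDD with a Gaussian kernel is closely related to the one-class SVM, the normal/abnormal classification step can be sketched with scikit-learn's OneClassSVM on the three daily features; the feature values, gamma and nu below are illustrative assumptions, and the paper's parameter-selection scheme is not reproduced.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# One row per day: [activity level, mobility level, nonresponse interval (h)]
normal_days = np.column_stack([
    rng.normal(300, 40, 60),   # motion-sensor firings per day
    rng.normal(25, 5, 60),     # room-to-room transitions per day
    rng.normal(1.0, 0.3, 60),  # longest interval without motion, hours
])
abnormal_day = np.array([[80.0, 4.0, 9.0]])   # low activity, long nonresponse

# OneClassSVM with an RBF kernel as a stand-in for SVDD; gamma plays the role
# of the Gaussian scale parameter that the paper tunes.
model = make_pipeline(StandardScaler(), OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05))
model.fit(normal_days)
print(model.predict(normal_days[:3]), model.predict(abnormal_day))  # +1 = normal, -1 = abnormal
```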
Using evolutionary computation to optimize an SVM used in detecting buried objects in FLIR imagery
NASA Astrophysics Data System (ADS)
Paino, Alex; Popescu, Mihail; Keller, James M.; Stone, Kevin
2013-06-01
In this paper we describe an approach for optimizing the parameters of a Support Vector Machine (SVM) as part of an algorithm used to detect buried objects in forward looking infrared (FLIR) imagery captured by a camera installed on a moving vehicle. The overall algorithm consists of a spot-finding procedure (to look for potential targets) followed by the extraction of several features from the neighborhood of each spot. The features include local binary pattern (LBP) and histogram of oriented gradients (HOG) as these are good at detecting texture classes. Finally, we project and sum each hit into UTM space along with its confidence value (obtained from the SVM), producing a confidence map for ROC analysis. In this work, we use an Evolutionary Computation Algorithm (ECA) to optimize various parameters involved in the system, such as the combination of features used, parameters on the Canny edge detector, the SVM kernel, and various HOG and LBP parameters. To validate our approach, we compare results obtained from an SVM using parameters obtained through our ECA technique with those previously selected by hand through several iterations of "guess and check".
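The evolutionary-tuning idea can be sketched with a tiny hand-rolled genetic loop that evolves only the SVM's C and gamma against a cross-validated accuracy score; the data are synthetic, and the paper's additional genes (feature combinations, Canny, HOG and LBP parameters) and ROC-based scoring are omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=30, n_informative=8, random_state=0)
rng = np.random.default_rng(0)

def fitness(genome):
    """Cross-validated accuracy of an RBF SVM; genome = (log10 C, log10 gamma)."""
    C, gamma = 10.0 ** genome[0], 10.0 ** genome[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

# Minimal generational GA: truncation selection plus Gaussian mutation.
pop = rng.uniform([-2, -5], [3, 0], size=(12, 2))   # log10 ranges for C and gamma
for _ in range(15):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-4:]]          # keep the best four genomes
    children = parents[rng.integers(0, 4, size=8)] + rng.normal(0, 0.3, size=(8, 2))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]
print(f"best C = 10^{best[0]:.2f}, gamma = 10^{best[1]:.2f}")
```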
NASA Astrophysics Data System (ADS)
Hao, Yufang; Xie, Shaodong
2018-03-01
Air quality monitoring networks play a significant role in identifying the spatiotemporal patterns of air pollution, and they need to be deployed efficiently, with a minimum number of sites. The revision and optimal adjustment of existing monitoring networks is crucial for cities that have undergone rapid urban expansion and experience temporal variations in pollution patterns. An approach based on the Weather Research and Forecasting-California PUFF (WRF-CALPUFF) model and a genetic algorithm (GA) was developed to design an optimal monitoring network. The maximization of coverage with minimum overlap and the ability to detect violations of standards were adopted as the design objectives for redistributed networks. The non-dominated sorting genetic algorithm was applied to optimize the network size and site locations simultaneously for Shijiazhuang city, one of the most polluted cities in China. The assessment of the current network identified insufficient spatial coverage of SO2 and NO2 monitoring for the expanding city. The optimization results showed that significant improvements were achieved in multiple objectives by redistributing the original network. Efficient coverage of the resulting designs improved to 60.99% and 76.06% of the urban area for SO2 and NO2, respectively. A redistributed multi-pollutant design comprising 8 sites was also proposed, with spatial representation covering 52.30% of the urban area and overlapped areas decreased by 85.87% compared with the original network. The abilities to detect violations of standards were not improved as much as the other two objectives due to the conflicting nature of the multiple objectives. Additionally, the results demonstrated that the algorithm was slightly sensitive to the parameter settings, with the number of generations having the most significant effect. Overall, our study presents an effective and feasible procedure for air quality network optimization at a city scale.
Robust curb detection with fusion of 3D-Lidar and camera data.
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-05-21
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.
1985-01-01
Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.
Ueda, Erica; Feng, Wenqian; Levkin, Pavel A
2016-10-01
High-density microarrays can screen thousands of genetic and chemical probes at once in a miniaturized and parallelized manner, and thus are a cost-effective alternative to microwell plates. Here, high-density cell microarrays are fabricated by creating superhydrophilic-superhydrophobic micropatterns in thin, nanoporous polymer substrates such that the superhydrophobic barriers confine both aqueous solutions and adherent cells within each superhydrophilic microspot. The superhydrophobic barriers confine and prevent the mixing of larger droplet volumes, and also control the spreading of droplets independent of the volume, minimizing the variability that arises due to different liquid and surface properties. Using a novel liposomal transfection reagent, ScreenFect A, the method of reverse cell transfection is optimized on the patterned substrates and several factors that affect transfection efficiency and cytotoxicity are identified. Higher levels of transfection are achieved on HOOC- versus NH2-functionalized superhydrophilic spots, as well as when gelatin and fibronectin are added to the transfection mixture, while minimizing the amount of transfection reagent improves cell viability. Almost no diffusion of the printed transfection mixtures to the neighboring microspots is detected. Thus, superhydrophilic-superhydrophobic patterned surfaces can be used as cell microarrays and for optimizing reverse cell transfection conditions before performing further cell screenings. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Utilization of optical emission endpoint in photomask dry etch processing
NASA Astrophysics Data System (ADS)
Faure, Thomas B.; Huynh, Cuc; Lercel, Michael J.; Smith, Adam; Wagner, Thomas
2002-03-01
Use of accurate and repeatable endpoint detection during dry etch processing of photomasks is very important for obtaining good mask mean-to-target and CD uniformity performance. It was found that the typical laser reflectivity endpoint detection system used on photomask dry etch systems had several key limitations that caused unnecessary scrap and non-optimum image size performance. Consequently, work was performed to develop and implement a more robust optical emission endpoint detection system for chrome dry etch processing of photomasks. Initial feasibility studies showed that the emission technique was sensitive enough to monitor pattern loadings on contact and via level masks down to 3 percent pattern coverage. Additional work was performed to further improve this to 1 percent pattern coverage by optimizing the endpoint detection parameters. Comparison studies of mask mean-to-target performance and CD uniformity were performed with the use of optical emission endpoint versus laser endpoint for masks built using TOK IP3600 and ZEP 7000 resist systems. It was found that an improvement in mean-to-target performance and CD uniformity was realized on several types of production masks. In addition, part-to-part endpoint time repeatability was found to be significantly improved with the use of optical emission endpoint.
Pattern recognition for passive polarimetric data using nonparametric classifiers
NASA Astrophysics Data System (ADS)
Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.
2005-08-01
Passive polarization-based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization-based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of a given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of class conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), a nonparametric method that can compute Bayes-optimal boundaries, and a k-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
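A compact sketch of the two classifiers named above, a PNN-style Bayes rule built from per-class Parzen (Gaussian-kernel) density estimates and a KNN classifier, is given below on synthetic two-dimensional "polarimetric" features; the feature distributions and kernel bandwidth are assumptions, not the laboratory data.

```python
import numpy as np
from sklearn.neighbors import KernelDensity, KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic 2-D polarimetric features (e.g. degree and angle of linear polarization)
# for a 'background' class (label 0) and a 'target' class (label 1).
background = rng.normal([0.3, 0.5], 0.15, size=(400, 2))
target = rng.normal([0.6, 0.2], 0.08, size=(200, 2))
X = np.vstack([background, target])
y = np.array([0] * len(background) + [1] * len(target))

# PNN-style classifier: one Parzen density per class combined with class priors
# via Bayes' rule; the class with the larger posterior wins.
priors = np.array([np.mean(y == 0), np.mean(y == 1)])
kdes = [KernelDensity(bandwidth=0.05).fit(X[y == c]) for c in (0, 1)]

def pnn_predict(samples):
    log_post = np.column_stack([k.score_samples(samples) for k in kdes]) + np.log(priors)
    return log_post.argmax(axis=1)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

test = np.array([[0.62, 0.18], [0.25, 0.55]])
print(pnn_predict(test), knn.predict(test))   # both should give [1 0] (target, background)
```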
Alamaniotis, Miltiadis; Tsoukalas, Lefteri H.
2018-01-01
The analysis of measured data plays a significant role in enhancing nuclear nonproliferation, mainly by inferring the presence of patterns associated with special nuclear materials. Among various types of measurements, gamma-ray spectra are the most widely utilized type of data in nonproliferation applications. In this paper, a method that employs the fireworks algorithm (FWA) for analyzing gamma-ray spectra aiming at detecting gamma signatures is presented. In particular, FWA is utilized to fit a set of known signatures to a measured spectrum by optimizing an objective function, where non-zero coefficients express the detected signatures. FWA is tested on a set of experimentally obtained measurements optimizing various objective functions (MSE, RMSE, Theil-2, MAE, MAPE, MAP), with results exhibiting its potential in providing highly accurate and precise signature detection. Furthermore, FWA is benchmarked against genetic algorithms and multiple linear regression, showing its superiority over those algorithms regarding precision with respect to the MAE, MAPE, and MAP measures.
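The underlying fitting problem, expressing a measured spectrum as a nonnegative combination of known signatures by minimizing a squared-error objective, is sketched below with nonnegative least squares standing in for the fireworks algorithm; the signatures and the measured spectrum are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
channels = np.arange(512)

def signature(center, width=6.0):
    """Toy signature: a Gaussian photopeak on a smooth continuum."""
    return np.exp(-0.5 * ((channels - center) / width) ** 2) + 0.02 * np.exp(-channels / 300)

# Library of known signatures (columns) and a measured spectrum mixing two of them
library = np.column_stack([signature(80), signature(186), signature(352), signature(480)])
true_coeffs = np.array([0.0, 1.5, 0.0, 0.7])
measured = library @ true_coeffs + 0.02 * rng.standard_normal(len(channels))

# Nonnegative least squares minimizes ||library @ x - measured||^2 with x >= 0;
# the FWA in the paper searches the same coefficient space against MSE/MAE-type
# objectives, and non-zero coefficients flag detected signatures.
coeffs, _ = nnls(library, measured)
print(np.round(coeffs, 2))   # close to [0, 1.5, 0, 0.7]
```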
NASA Astrophysics Data System (ADS)
Zhou, Naiyun; Gao, Yi
2017-03-01
This paper presents a fully automatic approach to grade intermediate prostate malignancy with hematoxylin and eosin-stained whole slide images. Deep learning architectures such as convolutional neural networks have been utilized in the domain of histopathology for automated carcinoma detection and classification. However, few works have shown their power in discriminating intermediate Gleason patterns, due to the sporadic distribution of prostate glands on stained surgical section samples. We propose optimized hematoxylin decomposition on localized images, followed by a convolutional neural network to classify Gleason patterns 3+4 and 4+3 without handcrafted features or gland segmentation. Crucial gland morphology and structural relationships of nuclei are extracted twice, in different color spaces, by the multi-scale strategy to mimic pathologists' visual examination. Our novel classification scheme evaluated on 169 whole slide images yielded a 70.41% accuracy and a corresponding area under the receiver operating characteristic curve of 0.7247.
NASA Astrophysics Data System (ADS)
de Oliveira, Helder C. R.; Moraes, Diego R.; Reche, Gustavo A.; Borges, Lucas R.; Catani, Juliana H.; de Barros, Nestor; Melo, Carlos F. E.; Gonzaga, Adilson; Vieira, Marcelo A. C.
2017-03-01
This paper presents a new local micro-pattern texture descriptor for the detection of Architectural Distortion (AD) in digital mammography images. AD is a subtle contraction of breast parenchyma that may represent an early sign of breast cancer. Due to its subtlety and variability, AD is more difficult to detect compared to microcalcifications and masses, and is commonly found in retrospective evaluations of false-negative mammograms. Several computer-based systems have been proposed for automatic detection of AD, but their performance is still unsatisfactory. The proposed descriptor, Local Mapped Pattern (LMP), is a generalization of the Local Binary Pattern (LBP), which is considered one of the most powerful feature descriptors for texture classification in digital images. Compared to LBP, the LMP descriptor captures the minor differences between local image pixels more effectively. Moreover, LMP is a parametric model which can be optimized for the desired application. In our work, the LMP performance was compared to LBP and four of Haralick's texture descriptors for the classification of 400 regions of interest (ROIs) extracted from clinical mammograms. ROIs were selected and divided into four classes: AD, normal tissue, microcalcifications and masses. Feature vectors were used as input to a multilayer perceptron neural network with a single hidden layer. Results showed that LMP is a good descriptor to distinguish AD from other anomalies in digital mammography. LMP performance was slightly better than LBP and comparable to Haralick's descriptors (mean classification accuracy = 83%).
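Since LMP is presented as a generalization of LBP, the baseline LBP histogram feature it builds on can be sketched with scikit-image as below; the ROI textures are synthetic, and the LMP mapping and its optimized parameters are not reproduced here.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(roi, points=8, radius=1):
    """Uniform LBP histogram of a grayscale ROI -- the baseline descriptor that
    LMP generalizes by mapping local differences smoothly instead of binarizing."""
    img = (np.clip(roi, 0, 1) * 255).astype(np.uint8)   # LBP expects integer grayscale
    codes = local_binary_pattern(img, points, radius, method="uniform")
    n_bins = points + 2                                  # 'uniform' yields P+2 codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

rng = np.random.default_rng(0)
smooth_roi = rng.normal(0.5, 0.02, (64, 64))                    # low-texture tissue
yy, xx = np.mgrid[0:64, 0:64]
oriented_roi = 0.5 + 0.2 * np.sin(0.4 * xx + 0.002 * yy * xx)   # oriented texture

print(np.round(lbp_histogram(smooth_roi), 3))
print(np.round(lbp_histogram(oriented_roi), 3))   # a clearly different code distribution
```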
Comparison of detection methods for cell surface globotriaosylceramide.
Kim, Minji; Binnington, Beth; Sakac, Darinka; Fernandes, Kimberly R; Shi, Sheryl P; Lingwood, Clifford A; Branch, Donald R
2011-08-31
The cell surface-expressed glycosphingolipid (GSL), globotriaosylceramide (Gb(3)), is becoming increasingly important and is widely studied in the areas of verotoxin (VT)-mediated cytotoxicity, human immunodeficiency virus (HIV) infection, immunology and cancer. However, despite its diverse roles and implications, an optimized detection method for cell surface Gb(3) has not been determined. GSLs are differentially organized in the plasma membrane which can affect their availability for protein binding. To examine various detection methods for cell surface Gb(3), we compared four reagents for use in flow cytometry analysis. A natural ligand (VT1B) and three different monoclonal antibodies (mAbs) were optimized and tested on various human cell lines for Gb(3) detection. A differential detection pattern of cell surface Gb(3) expression, which was influenced by the choice of reagent, was observed. Two mAb were found to be suboptimal. However, two other methods were found to be useful as defined by their high percentage of positivity and mean fluorescence intensity (MFI) values. Rat IgM anti-Gb(3) mAb (clone 38-13) using phycoerythrin-conjugated secondary antibody was found to be the most specific detection method while the use of VT1B conjugated to Alexa488 fluorochrome was found to be the most sensitive; showing a rare crossreactivity only when Gb(4) expression was highly elevated. The findings of this study demonstrate the variability in detection of Gb(3) depending on the reagent and cell target used and emphasize the importance of selecting an optimal methodology in studies for the detection of cell surface expression of Gb(3). Copyright © 2011 Elsevier B.V. All rights reserved.
Community Detection in Complex Networks via Clique Conductance.
Lu, Zhenqi; Wahlström, Johan; Nehorai, Arye
2018-04-13
Network science plays a central role in understanding and modeling complex systems in many areas including physics, sociology, biology, computer science, economics, politics, and neuroscience. One of the most important features of networks is community structure, i.e., clustering of nodes that are locally densely interconnected. Communities reveal the hierarchical organization of nodes, and detecting communities is of great importance in the study of complex systems. Most existing community-detection methods consider low-order connection patterns at the level of individual links. But high-order connection patterns, at the level of small subnetworks, are generally not considered. In this paper, we develop a novel community-detection method based on cliques, i.e., local complete subnetworks. The proposed method overcomes the deficiencies of previous similar community-detection methods by considering the mathematical properties of cliques. We apply the proposed method to computer-generated graphs and real-world network datasets. When applied to networks with known community structure, the proposed method detects the structure with high fidelity and sensitivity. When applied to networks with no a priori information regarding community structure, the proposed method yields insightful results revealing the organization of these complex networks. We also show that the proposed method is guaranteed to detect near-optimal clusters in the bipartition case.
Interfacing of differential-capacitive biomimetic hair flow-sensors for optimal sensitivity
NASA Astrophysics Data System (ADS)
Dagamseh, A. M. K.; Bruinink, C. M.; Wiegerink, R. J.; Lammerink, T. S. J.; Droogendijk, H.; Krijnen, G. J. M.
2013-03-01
Biologically inspired sensor-designs are investigated as a possible path to surpass the performance of more traditionally engineered designs. Inspired by crickets, artificial hair sensors have shown the ability to detect minute flow signals. This paper addresses developments in the design, fabrication, interfacing and characterization of biomimetic hair flow-sensors towards sensitive high-density arrays. Improvement of the electrode design of the hair sensors has resulted in a reduction of the smallest hair movements that can be measured. In comparison to the arrayed hairs-sensor design, the detection-limit was arguably improved at least twelve-fold, down to 1 mm s-1 airflow amplitude at 250 Hz as measured in a bandwidth of 3 kHz. The directivity pattern closely resembles a figure-of-eight. These sensitive hair-sensors open possibilities for high-resolution spatio-temporal flow pattern observations.
NASA Astrophysics Data System (ADS)
Bin Abdul Rahim, Hazli Rafis; Bin Lokman, Muhammad Quisar; Harun, Sulaiman Wadi; Hornyak, Gabor Louis; Sterckx, Karel; Mohammed, Waleed Soliman; Dutta, Joydeep
2016-07-01
The width of spiral-patterned zinc oxide (ZnO) nanorod coatings on plastic optical fiber (POF) was optimized theoretically for light-side coupling and found to be 5 mm. Structured ZnO nanorods were grown on large-core POFs for the purpose of alcohol vapor sensing. The aim of the spiral patterns was to enhance signal transmission by reducing the effective ZnO growth area, thereby minimizing light leakage due to backscattering. The sensing mechanism utilized changes in the output signal due to adsorption of methanol, ethanol, and isopropanol vapors. Three spectral bands consisting of red (620 to 750 nm), green (495 to 570 nm), and blue (450 to 495 nm) were applied in measurements. The relative intensity modulation (RIM) was determined for concentrations between 25 and 300 ppm. Methanol presented the strongest response compared to ethanol and isopropanol in all three spectral channels. With regard to alcohol detection RIM by spectral band, the green channel demonstrated the highest RIM values, followed by the blue and red channels, respectively.
Automated mapping of linear dunefield morphometric parameters from remotely-sensed data
NASA Astrophysics Data System (ADS)
Telfer, M. W.; Fyfe, R. M.; Lewin, S.
2015-12-01
Linear dunes are among the world's most common desert dune types, and typically occur in dunefields arranged in remarkably organized patterns extending over hundreds of kilometers. The causes of the patterns, formed by dunes merging, bifurcating and terminating, are still poorly understood, although it is widely accepted that they are emergent properties of the complex system of interactions between the boundary layer and an often-vegetated erodible substrate. Where such dunefields are vegetated, they are typically used as extensive rangeland, yet it is evident that many currently stabilized dunefields have been reactivated repeatedly during the late Quaternary. It has been suggested that dunefield patterning and the temporal evolution of dunefields are related, and thus there is considerable interest in better understanding the boundary conditions controlling dune patterning, especially given the possibility of reactivation of currently-stabilized dunefields under 21st century climate change. However, the time-consuming process of manual dune mapping has hampered attempts at quantitative description of dunefield patterning. This study aims to develop and test methods for delineating linear dune trendlines automatically from freely-available remotely sensed datasets. The highest resolution free global topographic data presently available (Aster GDEM v2) proved to be of marginal use, as the topographic expression of the dunes is of the same order as the vertical precision of the dataset (∼10 m), but in regions with relatively simple patterning it defined dune trends adequately. Analysis of spectral data (panchromatic Landsat 8 data) proved more promising in five of the six test sites, and despite poor panchromatic signal/noise ratios for the sixth site, the reflectance in the deep blue/violet (Landsat 8 Band 1) offers an alternative method of delineating dune pattern. A new edge detection algorithm (LInear Dune Optimized edge detection; LIDO) is proposed, based on Sobel operators with directional filtering and topologically-constrained recursion to optimize the inclusion of marginal zones. The method offers the potential for rapid quantitative mapping of linear dunefield patterning, providing validation data for modeling studies, and offering for the first time the ability to readily remap dunefields to assess dune reorganization at the dunefield scale.
Optimal pattern synthesis for speech recognition based on principal component analysis
NASA Astrophysics Data System (ADS)
Korsun, O. N.; Poliyev, A. V.
2018-02-01
An algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern is formed by decomposing an initial pattern into principal components, which reduces the dimension of the multi-parameter optimization problem. In the next step, training samples are introduced and optimal estimates of the principal-component decomposition coefficients are obtained by a numerical parameter optimization algorithm. Finally, we present experimental results that show the improvement in speech recognition achieved by the proposed optimization algorithm.
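The two steps described, a principal-component decomposition of the pattern space followed by numerical optimization of the decomposition coefficients against training samples, can be sketched as follows; the surrogate objective, synthetic data and dimensions are assumptions, since the abstract does not give the paper's recognition criterion.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Training utterances of one word: noisy variants of a common underlying pattern
# (frames x features flattened into 120-dimensional vectors).
prototype = np.sin(np.linspace(0, 6 * np.pi, 120))
training = prototype + 0.3 * rng.standard_normal((40, 120))

# Step 1: principal-component decomposition of the initial pattern space.
pca = PCA(n_components=5).fit(training)

# Step 2: optimize the component coefficients so the synthesized pattern is as
# close as possible, on average, to the training samples (surrogate objective).
def mean_distance(coeffs):
    pattern = pca.mean_ + coeffs @ pca.components_
    return np.mean(np.linalg.norm(training - pattern, axis=1))

res = minimize(mean_distance, x0=np.zeros(5), method="Nelder-Mead")
optimal_pattern = pca.mean_ + res.x @ pca.components_
print(res.x.round(2), round(float(mean_distance(res.x)), 3))
```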
NASA Astrophysics Data System (ADS)
Gavrishchaka, Valeriy V.; Kovbasinskaya, Maria; Monina, Maria
2008-11-01
Novelty detection is a very desirable additional feature of any practical classification or forecasting system. Novelty and rare patterns detection is the main objective in such applications as fault/abnormality discovery in complex technical and biological systems, fraud detection and risk management in financial and insurance industry. Although many interdisciplinary approaches for rare event modeling and novelty detection have been proposed, significant data incompleteness due to the nature of the problem makes it difficult to find a universal solution. Even more challenging and much less formalized problem is novelty detection in complex strategies and models where practical performance criteria are usually multi-objective and the best state-of-the-art solution is often not known due to the complexity of the task and/or proprietary nature of the application area. For example, it is much more difficult to detect a series of small insider trading or other illegal transactions mixed with valid operations and distributed over long time period according to a well-designed strategy than a single, large fraudulent transaction. Recently proposed boosting-based optimization was shown to be an effective generic tool for the discovery of stable multi-component strategies/models from the existing parsimonious base strategies/models in financial and other applications. Here we outline how the same framework can be used for novelty and fraud detection in complex strategies and models.
Statistical evaluation of synchronous spike patterns extracted by frequent item set mining
Torre, Emiliano; Picado-Muiño, David; Denker, Michael; Borgelt, Christian; Grün, Sonja
2013-01-01
We recently proposed frequent itemset mining (FIM) as a method to perform an optimized search for patterns of synchronous spikes (item sets) in massively parallel spike trains. This search outputs the occurrence count (support) of individual patterns that are not trivially explained by the counts of any superset (closed frequent item sets). The number of patterns found by FIM makes direct statistical tests infeasible due to severe multiple testing. To overcome this issue, we proposed to test the significance not of individual patterns but of their signatures, defined as the pairs of pattern size z and support c. Here, we derive in detail a statistical test for the significance of the signatures under the null hypothesis of full independence (pattern spectrum filtering, PSF) by means of surrogate data. As a result, injected spike patterns that mimic assembly activity are well detected, yielding a low false negative rate. However, this approach is prone to additionally classify patterns resulting from chance overlap of real assembly activity and background spiking as significant. These patterns represent false positives with respect to the null hypothesis of having one assembly of given signature embedded in otherwise independent spiking activity. We propose the additional method of pattern set reduction (PSR) to remove these false positives by conditional filtering. By employing stochastic simulations of parallel spike trains with correlated activity in the form of injected spike synchrony in subsets of the neurons, we demonstrate for a range of parameter settings that the analysis scheme composed of FIM, PSF and PSR allows reliable detection of active assemblies in massively parallel spike trains. PMID:24167487
NASA Astrophysics Data System (ADS)
Humphries, Nicolas E.
2015-09-01
The comprehensive review of Lévy patterns observed in the moves and pauses of a vast array of organisms by Reynolds [1] makes clear a need to attempt to unify phenomena to understand how organism movement may have evolved. However, I would contend that the research on Lévy 'movement patterns' we detect in time series of animal movements has to a large extent been misunderstood. The statistical techniques, such as Maximum Likelihood Estimation, used to detect these patterns look only at the statistical distribution of move step-lengths and not at the actual pattern, or structure, of the movement path. The path structure is lost altogether when move step-lengths are sorted prior to analysis. Likewise, the simulated movement paths, with step-lengths drawn from a truncated power law distribution in order to test characteristics of the path, such as foraging efficiency, in no way match the actual paths, or trajectories, of real animals. These statistical distributions are, therefore, null models of searching or foraging activity. What has proved surprising about these step-length distributions is the extent to which they improve the efficiency of random searches over simple Brownian motion. It has been shown unequivocally that a power law distribution of move step lengths is more efficient, in terms of prey items located per unit distance travelled, than any other distribution of move step-lengths so far tested (up to 3 times better than Brownian), and over a range of prey field densities spanning more than 4 orders of magnitude [2].
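To make the step-length argument concrete, the toy simulation below draws move lengths from a truncated power law (via inverse-CDF sampling) and from an exponential distribution as a Brownian-like reference, then compares prey encounters per unit distance on a sparse random prey field. It is a deliberately simplified null model in the sense discussed above (detection is checked only at step end-points), not a reproduction of any published simulation; all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def truncated_powerlaw_steps(n, mu=2.0, lmin=1.0, lmax=1000.0):
    """Inverse-CDF sampling of step lengths from a truncated power law p(l) ~ l^-mu."""
    u = rng.uniform(size=n)
    a = lmin ** (1.0 - mu)
    b = lmax ** (1.0 - mu)
    return (a + u * (b - a)) ** (1.0 / (1.0 - mu))

def search_efficiency(steps, prey, detect_radius=1.0):
    """Prey encountered per unit distance along a random-turning walk (toy model:
    detection is evaluated only at step end-points, for brevity)."""
    pos = np.zeros(2)
    found, travelled = 0, 0.0
    remaining = prey.copy()
    for l in steps:
        theta = rng.uniform(0.0, 2.0 * np.pi)
        pos = pos + l * np.array([np.cos(theta), np.sin(theta)])
        travelled += l
        d = np.linalg.norm(remaining - pos, axis=1)
        found += int((d < detect_radius).sum())
        remaining = remaining[d >= detect_radius]
    return found / travelled

prey = rng.uniform(-500, 500, size=(200, 2))           # sparse prey field
levy = truncated_powerlaw_steps(2000, mu=2.0)
brown = rng.exponential(scale=levy.mean(), size=2000)  # Brownian-like comparison
print(search_efficiency(levy, prey), search_efficiency(brown, prey))
```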
Single-view phase retrieval of an extended sample by exploiting edge detection and sparsity
Tripathi, Ashish; McNulty, Ian; Munson, Todd; ...
2016-10-14
We propose a new approach to robustly retrieve the exit wave of an extended sample from its coherent diffraction pattern by exploiting sparsity of the sample's edges. This approach enables imaging of an extended sample with a single view, without ptychography. We introduce nonlinear optimization methods that promote sparsity, and we derive update rules to robustly recover the sample's exit wave. We test these methods on simulated samples by varying the sparsity of the edge-detected representation of the exit wave. Finally, our tests illustrate the strengths and limitations of the proposed method in imaging extended samples.
Multicolor microcontact printing of proteins on nanoporous surface for patterned immunoassay
NASA Astrophysics Data System (ADS)
Ng, Elaine; Gopal, Ashwini; Hoshino, Kazunori; Zhang, Xiaojing
2011-07-01
Large-scale patterning of therapeutic proteins is key to the efficient design, characterization, and production of biologics for cost-effective, high-throughput, and point-of-care detection and analysis systems. We demonstrate an efficient method for protein deposition and adsorption on nanoporous silica substrates in specific patterns using micro-contact printing. Multiple color-tagged proteins can be printed through sequential application of this micro-patterning technique. Two groups of experiments were performed. In the first group, the protein stamp was aligned precisely with the printing sites and applied multiple times. Optimal conditions for protein transfer and adsorption were identified using a porous silica thin film with a pore size of 4 nm and a thickness of 30 nm. In the second group, we demonstrate the patterning of two-color rabbit immunoglobulin labeled with fluorescein isothiocyanate and tetramethyl rhodamine isothiocyanate on porous silica substrates with a pore size of 4 nm, a porosity of 57%, and a porous-layer thickness of 30 nm. A pair of protein stamps, with corresponding alignment markings and coupled patterns, was aligned and used to produce a two-color stamp pattern of proteins on porous silica. Different colored proteins can be applied to represent the diverse protein composition within a sample. This method of multicolor microcontact printing can be used to perform a fluorescence-based patterned enzyme-linked immunosorbent assay to detect the presence of various proteins within a sample.
Robust Curb Detection with Fusion of 3D-Lidar and Camera Data
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-01-01
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
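The row-by-row linking step described above can be sketched as a simple dynamic program over per-row curb candidates, where a transition cost penalizes horizontal jumps between consecutive rows. This is a generic illustration of such a consistency model, not the authors' implementation; the unary costs and the jump penalty are hypothetical.

```python
import numpy as np

def link_curb_points(candidates, costs, jump_penalty=0.5):
    """Dynamic programming over per-row curb candidates.

    candidates: list over image rows; each entry is an array of column positions.
    costs: matching list of unary costs (e.g., how poorly a point fits the
           multi-scale normal pattern); lower is better.
    Returns one column position per row forming the smoothest low-cost path.
    """
    n_rows = len(candidates)
    dp = [np.asarray(costs[0], dtype=float)]
    back = []
    for r in range(1, n_rows):
        cur_cols = np.asarray(candidates[r], dtype=float)
        prev_cols = np.asarray(candidates[r - 1], dtype=float)
        # pairwise transition cost penalizes horizontal jumps between rows
        trans = jump_penalty * np.abs(cur_cols[:, None] - prev_cols[None, :])
        total = trans + dp[-1][None, :]
        back.append(total.argmin(axis=1))
        dp.append(np.asarray(costs[r], dtype=float) + total.min(axis=1))
    # backtrack the optimal path from the last row
    path = [int(dp[-1].argmin())]
    for r in range(n_rows - 1, 0, -1):
        path.append(int(back[r - 1][path[-1]]))
    path.reverse()
    return [candidates[r][i] for r, i in enumerate(path)]
```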
Accelerated wavefront determination technique for optical imaging through scattering medium
NASA Astrophysics Data System (ADS)
He, Hexiang; Wong, Kam Sing
2016-03-01
Wavefront shaping applied to scattered light is a promising optical imaging method for biological systems. Normally, optimized modulation is obtained by hardware iteration between a liquid-crystal spatial light modulator (LC-SLM) and a CCD. Here we introduce an improved method for this optimization process. The core of the proposed method is to first detect the disturbed wavefront and then calculate the modulation phase pattern by computer simulation. In particular, a phase-retrieval method combined with phase conjugation is most effective. In this way, the LC-SLM-based system can complete the wavefront optimization and image restoration within several seconds, which is two orders of magnitude faster than the conventional technique. The experimental results show good imaging quality and may contribute to real-time image recovery in scattering media.
Optimizing morphology through blood cell image analysis.
Merino, A; Puigví, L; Boldú, L; Alférez, S; Rodellar, J
2018-05-01
Morphological review of the peripheral blood smear is still a crucial diagnostic aid, as it provides relevant information for the diagnosis and is important for the selection of additional techniques. Nevertheless, the distinctive cytological characteristics of blood cells are subjective and influenced by the reviewer's interpretation and, because of that, translating subjective morphological examination into objective parameters is a challenge. The use of digital microscopy systems has expanded in clinical laboratories. Because automatic analyzers have some limitations for abnormal or neoplastic cell detection, it is of interest to identify quantitative features of the morphological characteristics of different cells through digital image analysis. Three main classes of features are used: geometric, color, and texture. Geometric parameters (nucleus/cytoplasm ratio, cell area, nucleus perimeter, cytoplasmic profile, RBC proximity, and others) are familiar to pathologists, as they are related to the visual cell patterns. Different color spaces can be used to investigate the rich amount of information that color may offer to describe abnormal lymphoid or blast cells. Texture is related to spatial patterns of color or intensity, which can be visually detected and quantitatively represented using statistical tools. This study reviews current and new quantitative features that can contribute to optimizing morphology through blood cell digital image processing techniques. © 2018 John Wiley & Sons Ltd.
Single-protein detection in crowded molecular environments in cryo-EM images
Rickgauer, J Peter; Grigorieff, Nikolaus; Denk, Winfried
2017-01-01
We present an approach to study macromolecular assemblies by detecting component proteins’ characteristic high-resolution projection patterns, calculated from their known 3D structures, in single electron cryo-micrographs. Our method detects single apoferritin molecules in vitreous ice with high specificity and determines their orientation and location precisely. Simulations show that high spatial-frequency information and—in the presence of protein background—a whitening filter are essential for optimal detection, in particular for images taken far from focus. Experimentally, we could detect small viral RNA polymerase molecules, distributed randomly among binding locations, inside rotavirus particles. Based on the currently attainable image quality, we estimate a threshold for detection that is 150 kDa in ice and 300 kDa in 100 nm thick samples of dense biological material. DOI: http://dx.doi.org/10.7554/eLife.25648.001 PMID:28467302
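A rough sketch of whitened template matching in the spirit of the approach above: flatten (whiten) the micrograph's spectrum, normalize the template's spectrum, and read particle candidates off the peaks of the resulting correlation map. Real pipelines estimate the whitening filter from the background power spectrum and search over orientations; the random image and template here are placeholders.

```python
import numpy as np

def whitened_match(image, template):
    """Cross-correlate a projection template with a micrograph after flattening
    the image amplitude spectrum; the peak marks the best-matching offset."""
    F_img = np.fft.fft2(image)
    F_tmp = np.fft.fft2(template, s=image.shape)
    white = F_img / (np.abs(F_img) + 1e-8)              # simple per-pixel whitening
    norm_tmp = F_tmp / (np.abs(F_tmp) + 1e-8)           # phase-normalized template
    return np.fft.ifft2(white * np.conj(norm_tmp)).real

# Hypothetical usage: peak location gives the most likely particle position
rng = np.random.default_rng(2)
image = rng.normal(size=(256, 256))
template = rng.normal(size=(32, 32))
corr = whitened_match(image, template)
print(np.unravel_index(corr.argmax(), corr.shape))
```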
Lasnon, Charline; Dugue, Audrey Emmanuelle; Briand, Mélanie; Blanc-Fournier, Cécile; Dutoit, Soizic; Louis, Marie-Hélène; Aide, Nicolas
2015-06-01
We compared conventional filtered back-projection (FBP), two-dimensional-ordered subsets expectation maximization (OSEM) and maximum a posteriori (MAP) NEMA NU 4-optimized reconstructions for therapy assessment. Varying reconstruction settings were used to determine the parameters for optimal image quality with two NEMA NU 4 phantom acquisitions. Subsequently, data from two experiments in which nude rats bearing subcutaneous tumors had received a dual PI3K/mTOR inhibitor were reconstructed with the NEMA NU 4-optimized parameters. Mann-Whitney tests were used to compare mean standardized uptake value (SUV(mean)) variations among groups. All NEMA NU 4-optimized reconstructions showed the same 2-deoxy-2-[(18)F]fluoro-D-glucose ([(18)F]FDG) kinetic patterns and detected a significant difference in SUV(mean) relative to day 0 between controls and treated groups for all time points with comparable p values. In the framework of therapy assessment in rats bearing subcutaneous tumors, all algorithms available on the Inveon system performed equally.
NASA Astrophysics Data System (ADS)
Jiang, Z.; Llandro, J.; Mitrelias, T.; Bland, J. A. C.
2006-04-01
A lab-on-a-chip integrated microfluidic cell has been developed for magnetic biosensing, which is comprised of anisotropic magnetoresistance (AMR) sensors optimized for the detection of single magnetic beads and electrodes to manipulate and sort the beads, integrated into a microfluidic channel. The device is designed to read out the real-time signal from 9 μm diameter magnetic beads moving over AMR sensors patterned into 18×4.5 μm rectangles and 10 μm diameter rings and arranged in Wheatstone bridges. The beads are moved over the sensors along a 75×75 μm wide channel patterned in SU8. Beads of different magnetic moments can be sorted through a magnetostatic sorting gate into different branches of the microfluidic channel using a magnetic field gradient applied by lithographically defined 120 nm thick Cu striplines carrying 0.2 A current.
Detecting Mental States by Machine Learning Techniques: The Berlin Brain-Computer Interface
NASA Astrophysics Data System (ADS)
Blankertz, Benjamin; Tangermann, Michael; Vidaurre, Carmen; Dickhaus, Thorsten; Sannelli, Claudia; Popescu, Florin; Fazli, Siamac; Danóczy, Márton; Curio, Gabriel; Müller, Klaus-Robert
The Berlin Brain-Computer Interface (BBCI) uses a machine learning approach to extract user-specific patterns from high-dimensional EEG features optimized for revealing the user's mental state. Classical BCI applications are brain-actuated tools for patients such as prostheses (see Section 4.1) or mental text entry systems ([1] and see [2-5] for an overview on BCI). In these applications, the BBCI uses natural motor skills of the users and specifically tailored pattern recognition algorithms for detecting the user's intent. But beyond rehabilitation, there is a wide range of possible applications in which BCI technology is used to monitor other mental states, often even covert ones (see also [6] in the fMRI realm). While this field is still largely unexplored, two examples from our studies are presented in Sections 4.3 and 4.4.
NASA Technical Reports Server (NTRS)
Loos, Alfred C.; Macrae, John D.; Hammond, Vincent H.; Kranbuehl, David E.; Hart, Sean M.; Hasko, Gregory H.; Markus, Alan M.
1993-01-01
A two-dimensional model of the resin transfer molding (RTM) process was developed which can be used to simulate the infiltration of resin into an anisotropic fibrous preform. Frequency dependent electromagnetic sensing (FDEMS) has been developed for in situ monitoring of the RTM process. Flow visualization tests were performed to obtain data which can be used to verify the sensor measurements and the model predictions. Results of the tests showed that FDEMS can accurately detect the position of the resin flow-front during mold filling, and that the model predicted flow-front patterns agreed well with the measured flow-front patterns.
Ma, Haixia; Gao, Min; Li, Jia; Zhou, Li; Guo, Jie; Liu, Junjuan; Han, Xu; Zhai, Lu; Wu, Ting
2016-11-01
This study was conducted to re-evaluate the serological change patterns of patients with acute hepatitis B (AHB) using a highly sensitive detection technology, as well as to explore methods for selecting the optimal treatment window. The biochemical and virological parameters of 558 AHB patients were analyzed retrospectively. The serological markers of hepatitis B virus and HBV DNA were detected by electrochemiluminescence immunoassay and automated real-time fluorescent quantitative PCR, respectively. At baseline, the positive rate of hepatitis B surface antigen (HBsAg) (86.2%) was significantly higher than the positive rate of HBV DNA (51.9%). Among the 58 patients with HBsAg-negative AHB, 16 were detected with trace amounts of HBV DNA at baseline. At 12 weeks, the HBsAg of 43 cases remained positive, and the mean level of HBsAg was 587.5 ± 313.4 IU/mL. A total of 18 patients with HBsAg levels greater than 1500 IU/mL at 12 weeks received interferon α-1b treatment and achieved HBsAg clearance within 24 weeks. Unlike traditional change patterns, the clearance of HBV DNA in the peripheral circulation for a few patients with AHB occurred later than HBsAg clearance. Detection of HBV DNA in the peripheral circulation by a highly sensitive detection technology could provide a diagnostic basis for those AHB patients who rapidly achieve HBsAg clearance before achieving HBV DNA clearance in their peripheral circulation, and prevent misdiagnosis. Dynamic monitoring of the changes in HBsAg levels through a highly sensitive detection technology could be used as a guide for the timely adoption of antiviral treatment with interferon, thereby preventing AHB chronicity. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Spectral saliency via automatic adaptive amplitude spectrum analysis
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Dai, Jialun; Zhu, Yafei; Zheng, Haiyong; Qiao, Xiaoyan
2016-03-01
Suppressing nonsalient patterns by smoothing the amplitude spectrum at an appropriate scale has been shown to effectively detect the visual saliency in the frequency domain. Different filter scales are required for different types of salient objects. We observe that the optimal scale for smoothing amplitude spectrum shares a specific relation with the size of the salient region. Based on this observation and the bottom-up saliency detection characterized by spectrum scale-space analysis for natural images, we propose to detect visual saliency, especially with salient objects of different sizes and locations via automatic adaptive amplitude spectrum analysis. We not only provide a new criterion for automatic optimal scale selection but also reserve the saliency maps corresponding to different salient objects with meaningful saliency information by adaptive weighted combination. The performance of quantitative and qualitative comparisons is evaluated by three different kinds of metrics on the four most widely used datasets and one up-to-date large-scale dataset. The experimental results validate that our method outperforms the existing state-of-the-art saliency models for predicting human eye fixations in terms of accuracy and robustness.
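A single-scale sketch of saliency from a smoothed amplitude spectrum, in the spirit of the method above: smooth the amplitude spectrum with a Gaussian of scale sigma, keep the original phase, and inverse-transform. The adaptive scale selection and weighted combination described in the abstract are not reproduced; the scales tried at the end are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(image, sigma, post_sigma=3.0):
    """Saliency from a smoothed amplitude spectrum at one scale."""
    F = np.fft.fft2(image)
    amplitude = np.abs(F)
    phase = np.angle(F)
    smoothed = gaussian_filter(amplitude, sigma)   # suppress spiky, repeated patterns
    recon = np.fft.ifft2(smoothed * np.exp(1j * phase))
    return gaussian_filter(np.abs(recon) ** 2, post_sigma)

# The scale-space idea: compute maps at several scales and select/weight them
image = np.random.default_rng(3).normal(size=(128, 128))
maps = {s: saliency_map(image, s) for s in (1, 2, 4, 8)}
```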
Improvement on Exoplanet Detection Methods and Analysis via Gaussian Process Fitting Techniques
NASA Astrophysics Data System (ADS)
Van Ross, Bryce; Teske, Johanna
2018-01-01
Planetary signals in radial velocity (RV) data are often accompanied by signals coming solely from stellar photospheric or chromospheric variation. Such variation can reduce the precision of planet detection and mass measurements, and cause misidentification of planetary signals. Recently, several authors have demonstrated the utility of Gaussian Process (GP) regression for disentangling planetary signals in RV observations (Aigrain et al. 2012; Angus et al. 2017; Czekala et al. 2017; Faria et al. 2016; Gregory 2015; Haywood et al. 2014; Rajpaul et al. 2015; Foreman-Mackey et al. 2017). GP regression models the covariance of multivariate data to make predictions about likely underlying trends in the data, which can be applied to regions where there are no existing observations. GP methods have been used to infer stellar rotation periods; to model and disentangle time series spectra; and to determine physical aspects, populations, and detection of exoplanets, among other astrophysical applications. Here, we apply similar analysis techniques to time series of the Ca II H and K activity indicator measured simultaneously with RVs in a small sample of stars from the large Keck/HIRES RV planet search program. Our goal is to characterize the pattern(s) of non-planetary variation in order to know what is and is not a planetary signal. We investigated ten different GP kernels and their respective hyperparameters to determine the optimal combination (e.g., the lowest Bayesian Information Criterion value) for each stellar data set. To assess the hyperparameters' errors, we sampled their posterior distributions using Markov chain Monte Carlo (MCMC) analysis on the optimized kernels. Our results demonstrate how GP analysis of stellar activity indicators alone can contribute to exoplanet detection in RV data, and highlight the challenges in applying GP analysis to relatively small, irregularly sampled time series.
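A hedged sketch of kernel comparison by Bayesian Information Criterion using scikit-learn Gaussian processes, assuming the BIC is computed from the optimized log marginal likelihood and the number of kernel hyperparameters. The kernels, the synthetic activity time series, and the BIC definition used here are illustrative assumptions, not the authors' exact ten-kernel setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, RationalQuadratic, ExpSineSquared, WhiteKernel

# Hypothetical activity-indicator time series (times t, values y)
rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 100, 60))[:, None]
y = np.sin(2 * np.pi * t[:, 0] / 25.0) + 0.1 * rng.normal(size=60)

kernels = {
    "RBF": RBF(length_scale=10.0) + WhiteKernel(),
    "RationalQuadratic": RationalQuadratic() + WhiteKernel(),
    "QuasiPeriodic": ExpSineSquared(periodicity=25.0) * RBF(length_scale=50.0) + WhiteKernel(),
}

def bic(gp, n):
    k = len(gp.kernel_.theta)               # number of optimized hyperparameters
    return -2.0 * gp.log_marginal_likelihood_value_ + k * np.log(n)

scores = {}
for name, kernel in kernels.items():
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)
    scores[name] = bic(gp, len(y))
best = min(scores, key=scores.get)          # lowest BIC wins
```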
NASA Astrophysics Data System (ADS)
Quitadamo, L. R.; Cavrini, F.; Sbernini, L.; Riillo, F.; Bianchi, L.; Seri, S.; Saggio, G.
2017-02-01
Support vector machines (SVMs) are widely used classifiers for detecting physiological patterns in human-computer interaction (HCI). Their success is due to their versatility, robustness, and the wide availability of free dedicated toolboxes. Frequently in the literature, insufficient details about the SVM implementation and/or parameter selection are reported, making it impossible to reproduce the analysis and results of a study. In order to perform an optimized classification and report a proper description of the results, it is necessary to have a comprehensive critical overview of the applications of SVM. The aim of this paper is to provide a review of the usage of SVM in the determination of brain and muscle patterns for HCI, focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, an overview of the basic principles of SVM theory is outlined, together with a description of several relevant literature implementations. Furthermore, details concerning the reviewed papers are listed in tables and statistics of SVM use in the literature are presented. The suitability of SVM for HCI is discussed and critical comparisons with other classifiers are reported.
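As a reproducibility-oriented illustration of the kind of detail the review calls for, the sketch below fits an RBF-kernel SVM to hypothetical EEG/EMG epoch features with an explicit grid over C and gamma, so that the chosen parameters can be reported alongside cross-validated accuracy. The feature matrix is random placeholder data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV, cross_val_score

# Hypothetical feature matrix (trials x features) and labels from EEG/EMG epochs
rng = np.random.default_rng(5)
X = rng.normal(size=(120, 32))
y = rng.integers(0, 2, size=120)

# Report kernel, C, and gamma explicitly so the analysis can be reproduced
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.1]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, cross_val_score(grid.best_estimator_, X, y, cv=5).mean())
```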
Petr, T.; Šmíd, V.; Šmídová, J.; Hůlková, H.; Jirkovská, M.; Elleder, M.; Muchová, L.; Vítek, L.; Šmíd, F.
2010-01-01
A comparison of histochemical detection of GM1 ganglioside in cryostat sections using cholera toxin B-subunit after fixation with 4% formaldehyde and dry acetone gave tissue-dependent results. In the liver, no pre-treatment showed detectable differences related to GM1 reaction products, while studies in the brain showed the superiority of acetone pre-extraction (followed by formaldehyde), which yielded sharper images compared with the diffuse, blurred staining pattern associated with formaldehyde. Therefore, the aim of our study was to define the optimal conditions for GM1 detection using cholera toxin B-subunit. Ganglioside extractability with acetone, a long-neglected topic, was tested by comparing anhydrous acetone with acetone containing an admixture of water. By TLC analysis, acetone-extractable GM1 ganglioside from liver sections did not exceed 2% of the total GM1 ganglioside content using anhydrous acetone at −20°C, and 4% at room temperature. The loss increased to 30.5% using 9:1 acetone/water. Similarly, photometric analysis of lipid sialic acid extracted from dried liver homogenates with anhydrous acetone showed a loss of gangliosides into acetone of only 3.0±0.3%. The loss from dried brain homogenate was 9.5±1.1%. Thus, anhydrous conditions (dry tissue samples and anhydrous acetone) are crucial for optimal in situ ganglioside detection using acetone pre-treatment. This ensures effective physical fixation, especially in tissues rich in polar lipids (precipitation, prevention of in situ diffusion), and removal of cholesterol, which can act as a hydrophobic blocking barrier. PMID:20558344
Keiter, David A.; Cunningham, Fred L.; Rhodes, Olin E.; Irwin, Brian J.; Beasley, James
2016-01-01
Collection of scat samples is common in wildlife research, particularly for genetic capture-mark-recapture applications. Due to high degradation rates of genetic material in scat, large numbers of samples must be collected to generate robust estimates. Optimization of sampling approaches to account for taxa-specific patterns of scat deposition is, therefore, necessary to ensure sufficient sample collection. While scat collection methods have been widely studied in carnivores, research to maximize scat collection and noninvasive sampling efficiency for social ungulates is lacking. Further, environmental factors or scat morphology may influence detection of scat by observers. We contrasted performance of novel radial search protocols with existing adaptive cluster sampling protocols to quantify differences in observed amounts of wild pig (Sus scrofa) scat. We also evaluated the effects of environmental (percentage of vegetative ground cover and occurrence of rain immediately prior to sampling) and scat characteristics (fecal pellet size and number) on the detectability of scat by observers. We found that 15- and 20-m radial search protocols resulted in greater numbers of scats encountered than the previously used adaptive cluster sampling approach across habitat types, and that fecal pellet size, number of fecal pellets, percent vegetative ground cover, and recent rain events were significant predictors of scat detection. Our results suggest that use of a fixed-width radial search protocol may increase the number of scats detected for wild pigs, or other social ungulates, allowing more robust estimation of population metrics using noninvasive genetic sampling methods. Further, as fecal pellet size affected scat detection, juvenile or smaller-sized animals may be less detectable than adult or large animals, which could introduce bias into abundance estimates. Knowledge of relationships between environmental variables and scat detection may allow researchers to optimize sampling protocols to maximize utility of noninvasive sampling for wild pigs and other social ungulates.
An Effective and Novel Neural Network Ensemble for Shift Pattern Detection in Control Charts.
Barghash, Mahmoud
2015-01-01
Pattern recognition in control charts is critical to striking a balance between discovering faults as early as possible and reducing the number of false alarms. This work is devoted to designing a multistage neural network ensemble that achieves this balance, reducing rework and scrap without reducing productivity. The ensemble under focus is composed of a series of neural network stages and a series of decision points. Initially, this work compared the use of multiple decision points with a single decision point, showing that multiple decision points are highly preferable. This work also tested the effect of population percentages on the ANN and used this to optimize the ANN's performance. It also combined optimized and nonoptimized ANNs in an ensemble and showed that using nonoptimized ANNs may reduce the performance of the ensemble. The ensemble that used only optimized ANNs improved performance over individual ANNs and the three-sigma rule. In that respect, using the designed ensemble can help reduce the number of false stops and increase productivity. It can also be used to discover even small shifts in the mean as early as possible.
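A minimal sketch of a two-stage ensemble with separate decision points for control-chart shift detection, assuming windows of standardized observations as inputs. The network sizes, thresholds, and the 1.5-sigma training shift are arbitrary assumptions and do not reproduce the ensemble design evaluated in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
WINDOW = 20

def make_windows(n, shift):
    """Windows of standardized control-chart observations; shift in sigma units."""
    return rng.normal(loc=shift, scale=1.0, size=(n, WINDOW))

X = np.vstack([make_windows(500, 0.0), make_windows(500, 1.5)])
y = np.repeat([0, 1], 500)                      # 0 = in control, 1 = shifted mean

# Two trained stages, each with its own decision point (probability threshold)
stages = [MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=k).fit(X, y)
          for k in range(2)]
thresholds = [0.9, 0.6]                         # the early stage is more conservative

def ensemble_alarm(window):
    """Raise an alarm only when some stage exceeds its decision threshold."""
    for net, thr in zip(stages, thresholds):
        if net.predict_proba(window.reshape(1, -1))[0, 1] >= thr:
            return True
    return False

print(ensemble_alarm(make_windows(1, 0.0)[0]), ensemble_alarm(make_windows(1, 2.0)[0]))
```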
Immersion and dry scanner extensions for sub-10nm production nodes
NASA Astrophysics Data System (ADS)
Weichselbaum, Stefan; Bornebroek, Frank; de Kort, Toine; Droste, Richard; de Graaf, Roelof F.; van Ballegoij, Rob; Botter, Herman; McLaren, Matthew G.; de Boeij, Wim P.
2015-03-01
Progressing towards the 10nm and 7nm imaging nodes, pattern-placement and layer-to-layer overlay requirements keep scaling down, driving system improvements in immersion (ArFi) and dry (ArF/KrF) scanners. A series of module enhancements in the NXT platform have been introduced; among others, the scanner is equipped with exposure stages with better dynamics and thermal control. Grid accuracy improvements with respect to calibration, setup, stability, and layout dependency tighten MMO performance and enable mix-and-match scanner operation. The same platform improvements also benefit focus control. Improvements in the detectability and reproducibility of low-contrast alignment marks enhance the alignment solution window for 10nm logic processes and beyond. The system's architecture allows dynamic use of high-order scanner optimization based on advanced actuators of the projection lens and scanning stages. This enables a holistic optimization approach for the scanner, the mask, and the patterning process. Scanner design modifications for productivity, especially higher stage speeds and optimized metrology schemes, provide lower layer costs for customers using immersion lithography as well as conventional dry technology. Imaging, overlay, focus, and productivity data are presented, demonstrating 10nm and 7nm node litho-capability for both (immersion and dry) platforms.
Optical and microwave detection using Bi-Sr-Ca-Cu-O thin films
NASA Technical Reports Server (NTRS)
Grabow, B. E.; Sova, R. M.; Boone, B. G.; Moorjani, K.; Kim, B. F.; Bohandy, J.; Adrian, F.; Green, W. J.
1990-01-01
Recent progress at the Johns Hopkins University Applied Physics Laboratory (JHU/APL) in the development of optical and microwave detectors using high temperature superconducting thin films is described. Several objectives of this work have been accomplished, including: deposition of Bi-Sr-Ca-Cu-O thin films by laser ablation processing (LAP); development of thin film patterning techniques, including in situ masking, wet chemical etching and laser patterning; measurements of bolometric and non-bolometric signatures in patterned Bi-Sr-Ca-Cu-O films using optical and microwave sources, respectively; analysis and design of an optimized bolometer through computer simulation, and investigation of its use in a Fourier transform spectrometer. The focus here is primarily on results from the measurement of the bolometric and non-bolometric response.
Optical and microwave detection using Bi-Sr-Ca-Cu-O thin films
NASA Technical Reports Server (NTRS)
Grabow, B. E.; Sova, R. M.; Boone, B. G.; Moorjani, K.; Kim, B. F.; Bohandy, J.; Adrian, F.; Green, W. J.
1991-01-01
Recent progress at the Johns Hopkins University Applied Physics Laboratory (JHU/APL) in the development of optical and microwave detectors using high temperature superconducting thin films is described. Several objectives of this work have been accomplished, including: deposition of Bi-Sr-Ca-Cu-O thin films by laser ablation processing (LAP); development of thin film patterning techniques, including in situ masking, wet chemical etching, and laser patterning; measurements of bolometric and non-bolometric signatures in patterned Bi-Sr-Ca-Cu-O films using optical and microwave sources, respectively; analysis and design of an optimized bolometer through computer simulation; and investigation of its use in a Fourier transform spectrometer. The focus here is primarily on results from the measurement of the bolometric and non-bolometric response.
Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh
Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality, and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics, and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy-efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole-node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
Sims, David W; Humphries, Nicolas E; Bradford, Russell W; Bruce, Barry D
2012-03-01
1. Search processes play an important role in physical, chemical and biological systems. In animal foraging, the search strategy predators should use to search optimally for prey is an enduring question. Some models demonstrate that when prey is sparsely distributed, an optimal search pattern is a specialised random walk known as a Lévy flight, whereas when prey is abundant, simple Brownian motion is sufficiently efficient. These predictions form part of what has been termed the Lévy flight foraging hypothesis (LFF) which states that as Lévy flights optimise random searches, movements approximated by optimal Lévy flights may have naturally evolved in organisms to enhance encounters with targets (e.g. prey) when knowledge of their locations is incomplete. 2. Whether free-ranging predators exhibit the movement patterns predicted in the LFF hypothesis in response to known prey types and distributions, however, has not been determined. We tested this using vertical and horizontal movement data from electronic tagging of an apex predator, the great white shark Carcharodon carcharias, across widely differing habitats reflecting different prey types. 3. Individual white sharks exhibited movement patterns that predicted well the prey types expected under the LFF hypothesis. Shark movements were best approximated by Brownian motion when hunting near abundant, predictable sources of prey (e.g. seal colonies, fish aggregations), whereas movements approximating truncated Lévy flights were present when searching for sparsely distributed or potentially difficult-to-detect prey in oceanic or shelf environments, respectively. 4. That movement patterns approximated by truncated Lévy flights and Brownian behaviour were present in the predicted prey fields indicates search strategies adopted by white sharks appear to be the most efficient ones for encountering prey in the habitats where such patterns are observed. This suggests that C. carcharias appears capable of exhibiting search patterns that are approximated as optimal in response to encountered changes in prey type and abundance, and across diverse marine habitats, from the surf zone to the deep ocean. 5. Our results provide some support for the LFF hypothesis. However, it is possible that the observed Lévy patterns of white sharks may not arise from an adaptive behaviour but could be an emergent property arising from simple, straight-line movements between complex (e.g. fractal) distributions of prey. Experimental studies are needed in vertebrates to test for the presence of Lévy behaviour patterns in the absence of complex prey distributions. © 2011 The Authors. Journal of Animal Ecology © 2011 British Ecological Society.
Tarumi, Toshiyasu; Small, Gary W; Combs, Roger J; Kroutil, Robert T
2004-04-01
Finite impulse response (FIR) filters and finite impulse response matrix (FIRM) filters are evaluated for use in the detection of volatile organic compounds with wide spectral bands by direct analysis of interferogram data obtained from passive Fourier transform infrared (FT-IR) measurements. Short segments of filtered interferogram points are classified by support vector machines (SVMs) to implement the automated detection of heated plumes of the target analyte, ethanol. The interferograms employed in this study were acquired with a downward-looking passive FT-IR spectrometer mounted on a fixed-wing aircraft. Classifiers are trained with data collected on the ground and subsequently used for the airborne detection. The success of the automated detection depends on the effective removal of background contributions from the interferogram segments. Removing the background signature is complicated when the analyte spectral bands are broad because there is significant overlap between the interferogram representations of the analyte and background. Methods to implement the FIR and FIRM filters while excluding background contributions are explored in this work. When properly optimized, both filtering procedures provide satisfactory classification results for the airborne data. Missed detection rates of 8% or smaller for ethanol and false positive rates of at most 0.8% are realized. The optimization of filter design parameters, the starting interferogram point for filtering, and the length of the interferogram segments used in the pattern recognition is discussed.
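A simplified sketch of the processing chain described above, assuming a band-pass FIR filter applied to short interferogram segments followed by an SVM classifier. The filter band, segment start point and length, and the random segments are placeholders; the optimized values reported in the paper are not reproduced here.

```python
import numpy as np
from scipy.signal import firwin, lfilter
from sklearn.svm import SVC

rng = np.random.default_rng(7)

# Hypothetical interferogram segments (rows) and labels (analyte present or not)
segments = rng.normal(size=(200, 256))
labels = rng.integers(0, 2, size=200)

# Band-pass FIR filter tuned (hypothetically) to the analyte's interferogram signature
taps = firwin(numtaps=65, cutoff=[0.05, 0.20], pass_zero=False)

def filter_segment(seg, start=64, length=100):
    """Apply the FIR filter, then keep a short post-transient segment for classification."""
    filtered = lfilter(taps, 1.0, seg)
    return filtered[start:start + length]

X = np.array([filter_segment(s) for s in segments])
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
```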
Confounder Detection in High-Dimensional Linear Models Using First Moments of Spectral Measures.
Liu, Furui; Chan, Laiwan
2018-06-12
In this letter, we study the confounder detection problem in the linear model, where a target variable is predicted from its potential causes. Based on an assumption of a rotation-invariant generating process for the model, a recent study shows that the spectral measure induced by the regression coefficient vector with respect to the covariance matrix of the causes is close to a uniform measure in purely causal cases, but differs from a uniform measure characteristically in the presence of a scalar confounder. Analyzing spectral measure patterns could therefore help to detect confounding. In this letter, we propose to use the first moment of the spectral measure for confounder detection. We calculate the first moment of the regression-vector-induced spectral measure and compare it with the first moment of a uniform spectral measure, both defined with respect to the covariance matrix of the causes. The two moments coincide in nonconfounding cases and differ from each other in the presence of confounding. This statistical causal-confounding asymmetry can be used for confounder detection. Without the need to analyze the spectral measure pattern, our method avoids the difficulty of metric choice and multiple parameter optimization. Experiments on synthetic and real data show the performance of this method.
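A toy numeric sketch of the moment comparison described above, assuming the spectral measure induced by the regression vector places weight proportional to the squared projections of that vector onto the covariance eigenvectors. This illustrates the two first moments only; it is not the paper's full decision rule or its treatment of high-dimensional regimes.

```python
import numpy as np

def spectral_first_moments(X, y):
    """Compare the first moment of the regression-vector-induced spectral measure
    with that of a uniform spectral measure (both w.r.t. the covariance of X)."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    a = np.linalg.lstsq(X, y, rcond=None)[0]          # regression coefficient vector
    w = (eigvecs.T @ a) ** 2
    w = w / w.sum()                                   # spectral measure induced by a
    m_vector = float(w @ eigvals)                     # its first moment
    m_uniform = float(eigvals.mean())                 # first moment of the uniform measure
    return m_vector, m_uniform

# The two moments are roughly equal without confounding and differ when a hidden
# scalar confounder drives both the causes and the target (a toy check only).
rng = np.random.default_rng(8)
X = rng.normal(size=(2000, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=2000)
print(spectral_first_moments(X, y))
```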
Hrabovský, Miroslav
2014-01-01
The purpose of this study is to propose an extension of a one-dimensional speckle correlation method, primarily intended for determining a one-dimensional object translation, to the detection of a general in-plane object translation. To that end, a numerical simulation of the displacement of the speckle field caused by a general in-plane translation of the object is presented. The translation components a_x and a_y, representing the projections of the object's displacement vector a onto the x- and y-axes in the object plane (x, y), are evaluated separately by means of the extended one-dimensional speckle correlation method. Moreover, one can further optimize the method by reducing the intensity values representing the detected speckle patterns. The theoretical relations between the translation components a_x and a_y of the object and the displacement of the speckle pattern for the selected geometrical arrangement are given and used to verify the correctness of the proposed method. PMID:24592180
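A small sketch of the separable-component idea: estimate a_x and a_y independently from one-dimensional cross-correlations of the speckle pattern's row and column projections. The circularly shifted random pattern is a stand-in for simulated speckle fields, and this is not the author's exact algorithm or geometry.

```python
import numpy as np

def shift_1d(ref, moved):
    """Displacement estimate from the peak of a 1D circular cross-correlation."""
    corr = np.fft.ifft(np.fft.fft(moved) * np.conj(np.fft.fft(ref))).real
    lag = int(np.argmax(corr))
    return lag if lag <= ref.size // 2 else lag - ref.size

def in_plane_translation(ref_pattern, moved_pattern):
    """Evaluate a_x and a_y separately from 1D projections of the speckle pattern."""
    ax = shift_1d(ref_pattern.sum(axis=0), moved_pattern.sum(axis=0))   # columns -> x
    ay = shift_1d(ref_pattern.sum(axis=1), moved_pattern.sum(axis=1))   # rows -> y
    return ax, ay

rng = np.random.default_rng(9)
ref = rng.random((256, 256))
moved = np.roll(np.roll(ref, 7, axis=1), -4, axis=0)   # simulated in-plane shift
print(in_plane_translation(ref, moved))                # approximately (7, -4)
```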
Detection Copy Number Variants from NGS with Sparse and Smooth Constraints.
Zhang, Yue; Cheung, Yiu-Ming; Xu, Bo; Su, Weifeng
2017-01-01
It is known that copy number variations (CNVs) are associated with complex diseases and particular tumor types, so reliable identification of CNVs is of great potential value. Recent advances in next generation sequencing (NGS) data analysis have helped reveal the richness of CNV information. However, the performance of existing methods is not consistent, and reliably finding CNVs in NGS data in an efficient way remains a challenging topic worthy of further investigation. Accordingly, we tackle the problem by formulating CNV identification as a quadratic optimization problem involving two constraints. By imposing the constraints of sparsity and smoothness, the read-depth signal reconstructed from NGS data is anticipated to fit the CNV patterns more accurately. An efficient numerical solution tailored from the alternating direction minimization (ADM) framework is elaborated. We demonstrate the advantages of the proposed method, named ADM-CNV, by comparing it with six popular CNV detection methods using synthetic, simulated, and empirical sequencing data. It is shown that the proposed approach can successfully reconstruct CNV patterns from raw data and achieve superior or comparable performance in CNV detection compared to existing counterparts.
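A simplified stand-in for the sparse-and-smooth reconstruction idea, using proximal gradient steps on a quadratic data-fit term with a quadratic smoothness penalty and an L1 (sparsity) proximal step. The actual ADM-CNV solver and its exact objective are not reproduced; penalties, step size, and the toy read-depth signal are assumptions.

```python
import numpy as np

def sparse_smooth_fit(y, lam_sparse=0.5, lam_smooth=2.0, step=0.1, iters=500):
    """Reconstruct a piecewise signal x from read depth y via proximal gradient steps
    on 0.5*||x - y||^2 + lam_smooth*||D x||^2, with soft-thresholding for sparsity."""
    x = np.zeros_like(y, dtype=float)
    for _ in range(iters):
        dx = np.diff(x)
        grad_smooth = np.zeros_like(x)
        grad_smooth[:-1] -= 2.0 * lam_smooth * dx      # gradient of sum (x[i+1]-x[i])^2
        grad_smooth[1:] += 2.0 * lam_smooth * dx
        x = x - step * ((x - y) + grad_smooth)
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam_sparse, 0.0)  # prox of L1
    return x

# Toy read-depth signal: zero baseline with one duplicated (elevated) segment
rng = np.random.default_rng(10)
y = rng.normal(0.0, 0.3, size=300)
y[120:180] += 2.0
x_hat = sparse_smooth_fit(y)
```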
Thermal luminescence spectroscopy chemical imaging sensor.
Carrieri, Arthur H; Buican, Tudor N; Roese, Erik S; Sutter, James; Samuels, Alan C
2012-10-01
The authors present a pseudo-active chemical imaging sensor model embodying irradiative transient heating, temperature nonequilibrium thermal luminescence spectroscopy, differential hyperspectral imaging, and artificial neural network technologies integrated together. We elaborate on various optimizations, simulations, and animations of the integrated sensor design and apply it to the terrestrial chemical contamination problem, where the interstitial contaminant compounds of detection interest (analytes) comprise liquid chemical warfare agents, their various derivative condensed phase compounds, and other material of a life-threatening nature. The sensor must measure and process a dynamic pattern of absorptive-emissive middle infrared molecular signature spectra of subject analytes to perform its chemical imaging and standoff detection functions successfully.
On-Chip, Amplification-Free Quantification of Nucleic Acid for Point-of-Care Diagnosis
NASA Astrophysics Data System (ADS)
Yen, Tony Minghung
This dissertation demonstrates three physical device concepts to overcome limitations in point-of-care quantification of nucleic acids. Enabling sensitive, high-throughput nucleic acid quantification on a chip, outside of the hospital and centralized laboratory setting, is crucial for improving pathogen detection and cancer diagnosis and prognosis. Among existing platforms, microarrays have the advantages of being amplification-free, having low instrument cost, and offering high throughput, but are generally less sensitive than sequencing and PCR assays. To bridge this performance gap, this dissertation presents theoretical and experimental progress toward a platform nucleic acid quantification technology that is drastically more sensitive than current microarrays while remaining compatible with the microarray architecture. The first device concept explores on-chip nucleic acid enrichment by natural evaporation of a nucleic acid solution droplet. Using a micro-patterned super-hydrophobic black silicon array device, evaporative enrichment is coupled with a nano-liter droplet self-assembly workflow to produce 50 aM concentration sensitivity, 6 orders of dynamic range, and rapid hybridization in under 5 minutes. The second device concept focuses on improving target copy-number sensitivity instead of concentration sensitivity. A comprehensive microarray physical model taking into account molecular transport, electrostatic intermolecular interactions, and reaction kinetics is used to guide device optimization. Device pattern size and target copy number are optimized based on model prediction to achieve maximal hybridization efficiency. At a 100-μm pattern size, a quantum leap in detection limit to 570 copies is achieved using the black silicon array device with a self-assembled pico-liter droplet workflow. Despite its merits, evaporative enrichment on the black silicon device suffers from the coffee-ring effect at the 100-μm pattern size and is thus not compatible with clinical patient samples. The third device concept utilizes an integrated optomechanical laser system and a Cytop microarray device to reverse the coffee-ring effect during evaporative enrichment at the 100-μm pattern size. This method, named "laser-induced differential evaporation", is expected to enable a 570-copy detection limit for clinical samples in the near future. While this work is ongoing as of the writing of this dissertation, a clear research plan is in place to implement the method on the microarray platform toward clinical sample testing for disease applications and future commercialization.
Ricciuti, Adriana; De Remigis, Alessandra; Landek-Salgado, Melissa A.; De Vincentiis, Ludovica; Guaraldi, Federica; Lupi, Isabella; Iwama, Shintaro; Wand, Gary S.; Salvatori, Roberto
2014-01-01
Context: Pituitary antibodies have been measured mainly to identify patients whose disease is caused or sustained by pituitary-specific autoimmunity. Although reported in over 100 publications, they have yielded variable results and are thus considered of limited clinical utility. Objectives: Our objectives were to analyze all publications reporting pituitary antibodies by immunofluorescence for detecting the major sources of variability, to experimentally test these sources and devise an optimized immunofluorescence protocol, and to assess prevalence and significance of pituitary antibodies in patients with pituitary diseases. Study Design and Outcome Measures: We first evaluated the effect of pituitary gland species, section fixation, autofluorescence quenching, blockade of unwanted antibody binding, and use of purified IgG on the performance of this antibody assay. We then measured cross-sectionally the prevalence of pituitary antibodies in 390 pituitary cases and 60 healthy controls, expressing results as present or absent and according to the (granular, diffuse, perinuclear, or mixed) staining pattern. Results: Human pituitary was the best substrate to detect pituitary antibodies and yielded an optimal signal-to-noise ratio when treated with Sudan black B to reduce autofluorescence. Pituitary antibodies were more common in cases (95 of 390, 24%) than controls (3 of 60, 5%, P = .001) but did not discriminate among pituitary diseases when reported dichotomously. However, when expressed according to their cytosolic staining, a granular pattern was highly predictive of pituitary autoimmunity (P < .0001). Conclusion: We report a comprehensive study of pituitary antibodies by immunofluorescence and provide a method and an interpretation scheme that should be useful for identifying and monitoring patients with pituitary autoimmunity. PMID:24606106
Oliker, Nurit; Ostfeld, Avi
2014-03-15
This study describes a decision support system that raises alerts for contamination events in water distribution systems. The developed model comprises a weighted support vector machine (SVM) for the detection of outliers, followed by a sequence analysis for the classification of contamination events. The contribution of this study is an improvement in the ability to detect contamination events and a multi-dimensional analysis of the data, in contrast to the parallel one-dimensional analyses conducted so far. The multivariate analysis examines the relationships between water quality parameters and detects changes in their mutual patterns. The weights of the SVM model accomplish two goals: blurring the difference in size between the two classes' data sets (as there are many more normal/regular measurements than event-time measurements), and incorporating the time factor through a time-decay coefficient that ascribes higher importance to recent observations when classifying a time-step measurement. All model parameters were determined by data-driven optimization, so the calibration of the model was completely autonomous. The model was trained and tested on a real water distribution system (WDS) data set with randomly simulated events superimposed on the original measurements. The model is prominent in its ability to detect events that were only partly expressed in the data (i.e., affecting only some of the measured parameters). The model showed high accuracy and better detection ability compared to previous modeling attempts at contamination event detection. Copyright © 2013 Elsevier Ltd. All rights reserved.
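A minimal sketch of the two weighting ideas described above using scikit-learn's SVC: class weighting to offset the imbalance between normal and event measurements, and an exponential time-decay sample weight that emphasizes recent observations. The feature set, decay constant, and simulated labels are placeholders, and the sequence-analysis stage is omitted.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(11)

# Hypothetical multivariate water-quality measurements (rows ordered in time)
n = 1000
X = rng.normal(size=(n, 6))                 # e.g. chlorine, pH, turbidity, ...
y = np.zeros(n, dtype=int)
y[-30:] = 1                                 # a few (simulated) event time steps

# Two weightings, in the spirit of the paper: class_weight blurs the strong class
# imbalance, and an exponential time-decay coefficient emphasizes recent samples.
age = np.arange(n)[::-1]                    # 0 for the most recent sample
time_decay = np.exp(-age / 200.0)

clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X, y, sample_weight=time_decay)
score = clf.decision_function(X[-1:])       # outlier score for the newest time step
```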
A Learning System for Discriminating Variants of Malicious Network Traffic
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beaver, Justin M; Symons, Christopher T; Gillen, Rob
Modern computer network defense systems rely primarily on signature-based intrusion detection tools, which generate alerts when patterns that are pre-determined to be malicious are encountered in network data streams. Signatures are created reactively, and only after in-depth manual analysis of a network intrusion. There is little ability for signature-based detectors to identify intrusions that are new or even variants of an existing attack, and little ability to adapt the detectors to the patterns unique to a network environment. Due to these limitations, the need exists for network intrusion detection techniques that can more comprehensively address both known and unknown network-based attacks and can be optimized for the target environment. This work describes a system that leverages machine learning to provide a network intrusion detection capability that analyzes behaviors in channels of communication between individual computers. Using examples of malicious and non-malicious traffic in the target environment, the system can be trained to discriminate between traffic types. The machine learning provides insight that would be difficult for a human to explicitly code as a signature because it evaluates many interdependent metrics simultaneously. With this approach, zero-day detection is possible by focusing on similarity to known traffic types rather than mining for specific bit patterns or conditions. This also reduces the burden on organizations to account for all possible attack variant combinations through signatures. The approach is presented along with results from a third-party evaluation of its performance.
Entropic One-Class Classifiers.
Livi, Lorenzo; Sadeghian, Alireza; Pedrycz, Witold
2015-12-01
The one-class classification problem is a well-known research endeavor in pattern recognition. The problem is also known under different names, such as outlier and novelty/anomaly detection. The core of the problem consists in modeling and recognizing patterns belonging only to a so-called target class. All other patterns are termed nontarget, and therefore, they should be recognized as such. In this paper, we propose a novel one-class classification system that is based on an interplay of different techniques. Primarily, we follow a dissimilarity representation-based approach; we embed the input data into the dissimilarity space (DS) by means of an appropriate parametric dissimilarity measure. This step allows us to process virtually any type of data. The dissimilarity vectors are then represented by weighted Euclidean graphs, which we use to determine the entropy of the data distribution in the DS and at the same time to derive effective decision regions that are modeled as clusters of vertices. Since the dissimilarity measure for the input data is parametric, we optimize its parameters by means of a global optimization scheme, which considers both mesoscopic and structural characteristics of the data represented through the graphs. The proposed one-class classifier is designed to provide both hard (Boolean) and soft decisions about the recognition of test patterns, allowing an accurate description of the classification process. We evaluate the performance of the system on different benchmarking data sets, containing either feature-based or structured patterns. Experimental results demonstrate the effectiveness of the proposed technique.
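A minimal sketch of the dissimilarity-representation step only, under simplifying assumptions (an exponentiated Euclidean dissimilarity and a distance-quantile decision rule); the entropy- and graph-based decision regions of the paper are not reproduced, and all names are illustrative:

```python
# Hedged sketch: embed samples into a dissimilarity space (DS) against
# target-class prototypes, then produce soft scores and a hard decision.
import numpy as np

def embed_in_dissimilarity_space(X, prototypes, sigma=1.0):
    # Parametric dissimilarity: exponentiated Euclidean distance (illustrative)
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    return 1.0 - np.exp(-d / sigma)

def one_class_decision(ds_vectors, ds_target, quantile=0.95):
    # Soft score: distance to the target-class mean in the DS;
    # hard (Boolean) decision: threshold at a chosen target quantile.
    center = ds_target.mean(axis=0)
    target_scores = np.linalg.norm(ds_target - center, axis=1)
    threshold = np.quantile(target_scores, quantile)
    scores = np.linalg.norm(ds_vectors - center, axis=1)
    return scores, scores <= threshold
```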
2007-01-01
Background The usage of synonymous codons shows considerable variation among mammalian genes. How and why this usage is non-random are fundamental biological questions and remain controversial. It is also important to explore whether mammalian genes that are selectively expressed at different developmental stages bear different molecular features. Results In two models of mouse stem cell differentiation, we established correlations between codon usage and the patterns of gene expression. We found that the optimal codons exhibited variation (AT- or GC-ending codons) in different cell types within the developmental hierarchy. We also found that genes that were enriched (developmental-pivotal genes) or specifically expressed (developmental-specific genes) at different developmental stages had different patterns of codon usage and local genomic GC (GCg) content. Moreover, at the same developmental stage, developmental-specific genes generally used more GC-ending codons and had higher GCg content compared with developmental-pivotal genes. Further analyses suggest that the model of translational selection might be consistent with the developmental stage-related patterns of codon usage, especially for the AT-ending optimal codons. In addition, our data show that after human-mouse divergence, the influence of selective constraints is still detectable. Conclusion Our findings suggest that developmental stage-related patterns of gene expression are correlated with codon usage (GC3) and GCg content in stem cell hierarchies. Moreover, this paper provides evidence for the influence of natural selection at synonymous sites in the mouse genome and novel clues for linking the molecular features of genes to their patterns of expression during mammalian ontogenesis. PMID:17349061
Computer-Assisted Classification Patterns in Autoimmune Diagnostics: The AIDA Project
Benammar Elgaaied, Amel; Cascio, Donato; Bruno, Salvatore; Ciaccio, Maria Cristina; Cipolla, Marco; Fauci, Alessandro; Morgante, Rossella; Taormina, Vincenzo; Gorgi, Yousr; Marrakchi Triki, Raja; Ben Ahmed, Melika; Louzir, Hechmi; Yalaoui, Sadok; Imene, Sfar; Issaoui, Yassine; Abidi, Ahmed; Ammar, Myriam; Bedhiafi, Walid; Ben Fraj, Oussama; Bouhaha, Rym; Hamdi, Khouloud; Soumaya, Koudhi; Neili, Bilel; Asma, Gati; Lucchese, Mariano; Catanzaro, Maria; Barbara, Vincenza; Brusca, Ignazio; Fregapane, Maria; Amato, Gaetano; Friscia, Giuseppe; Neila, Trai; Turkia, Souayeh; Youssra, Haouami; Rekik, Raja; Bouokez, Hayet; Vasile Simone, Maria; Fauci, Francesco; Raso, Giuseppe
2016-01-01
Antinuclear antibodies (ANAs) are significant biomarkers in the diagnosis of autoimmune diseases in humans; their detection is done by means of the Indirect ImmunoFluorescence (IIF) method and performed by analyzing staining patterns and fluorescence intensity. This paper introduces the AIDA Project (autoimmunity: diagnosis assisted by computer), developed in the framework of an Italy-Tunisia cross-border cooperation, and its preliminary results. A database of interpreted IIF images is being collected through the exchange of images and double reporting, and a Gold Standard database containing around 1000 double-reported images has been established. The Gold Standard database is used for optimization of a CAD (Computer Aided Detection) solution and for the assessment of its added value, in order to be applied along with an Immunologist as a second Reader in the detection of autoantibodies. This CAD system is able to identify the fluorescence intensity and the fluorescence pattern on IIF images. Preliminary results show that the CAD, used as a second Reader, appeared to perform better than Junior Immunologists and hence may significantly improve their efficacy; compared with two Junior Immunologists, the CAD system showed higher Intensity Accuracy (85.5% versus 66.0% and 66.0%), higher Pattern Accuracy (79.3% versus 48.0% and 66.2%), and higher Mean Class Accuracy (79.4% versus 56.7% and 64.2%). PMID:27042658
Comparisons of neural networks to standard techniques for image classification and correlation
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1994-01-01
Neural network techniques for multispectral image classification and spatial pattern detection are compared to the standard techniques of maximum-likelihood classification and spatial correlation. The neural network produced a more accurate classification than maximum-likelihood of a Landsat scene of Tucson, Arizona. Some of the errors in the maximum-likelihood classification are illustrated using decision region and class probability density plots. As expected, the main drawback to the neural network method is the long time required for the training stage. The network was trained using several different hidden layer sizes to optimize both the classification accuracy and training speed, and it was found that one node per class was optimal. The performance improved when 3x3 local windows of image data were entered into the net. This modification introduces texture into the classification without explicit calculation of a texture measure. Larger windows were successfully used for the detection of spatial features in Landsat and Magellan synthetic aperture radar imagery.
Automatic trajectory measurement of large numbers of crowded objects
NASA Astrophysics Data System (ADS)
Li, Hui; Liu, Ye; Chen, Yan Qiu
2013-06-01
Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare mainly due to the challenges of detection and tracking of large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain the optimal segmentation results. For tracking, cost matrix of assignment between consecutive frames is trainable via a random forest classifier with many spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.
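A minimal sketch of the final association step, assuming SciPy's linear assignment solver; the Euclidean-distance cost used here stands in for the paper's trainable random-forest cost, and all names and the rejection threshold are illustrative:

```python
# Hedged sketch: solve the frame-to-frame assignment between detections in
# consecutive frames, rejecting overly expensive matches (lost/new objects).
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(detections_prev, detections_next, max_cost=50.0):
    # detections_*: arrays of object centers with shape (n, 2)
    cost = np.linalg.norm(
        detections_prev[:, None, :] - detections_next[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
```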
NASA Astrophysics Data System (ADS)
Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf
2016-11-01
This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi-diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). Performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytic and simulation analysis by referring to bit error rate (BER), signal to noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code, when used with the SDD technique, provides high transmission capacity, reduces receiver complexity, and offers better performance compared to the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both up- and down-link transmission.
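A minimal sketch of the Gaussian-approximation relation between SNR and BER commonly used in SAC-OCDMA performance analyses; whether the paper evaluates BER in exactly this form is an assumption, and the example SNR value is illustrative:

```python
# Hedged sketch: BER ~= 0.5 * erfc(sqrt(SNR / 8)) under the Gaussian
# approximation often adopted in SAC-OCDMA analyses (assumed form).
import math

def ber_from_snr(snr_linear):
    return 0.5 * math.erfc(math.sqrt(snr_linear / 8.0))

# Example: an SNR of ~144 (about 21.6 dB) corresponds to a BER near 1e-9
print(ber_from_snr(144.0))
```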
Chen, Quansheng; Hu, Weiwei; Sun, Cuicui; Li, Huanhuan; Ouyang, Qin
2016-09-28
Rare earth-doped upconversion nanoparticles (UCNPs) have promising potential in biodetection due to their unique frequency upconverting capability and high detection sensitivity. This paper reports an improved UCNPs-based fluorescence probe for dual-sensing of Aflatoxin B1 (AFB1) and Deoxynivalenol (DON) using magnetism-induced separation and the specific formation of antibody-target complexes. Herein, the improved UCNPs, namely NaYF4:Yb/Ho/Gd and NaYF4:Yb/Tm/Gd, were systematically studied based on the optimization of reaction time, temperature and dopant-ion concentration with simultaneous phase- and size-controlled NaYF4 nanoparticles, and the targets were detected using a competitive combination assay format. Under optimized conditions, the advanced fluorescent probes revealed stronger fluorescence, broader biological applicability and better storage stability compared to traditional UCNPs-based probes, and ultrasensitive determination of AFB1 and DON was achieved over a wide sensing range of 0.001-0.1 ng ml^-1 with a limit of detection (LOD) of 0.001 ng ml^-1. Additionally, the applicability of the improved nanosensor for the detection of mycotoxins was also confirmed in adulterated oil samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Optimizing Grid Patterns on Photovoltaic Cells
NASA Technical Reports Server (NTRS)
Burger, D. R.
1984-01-01
The CELCAL computer program helps optimize grid patterns for different photovoltaic cell geometries and metalization processes. The program accounts for five different power-loss phenomena associated with the front-surface metal grid pattern on photovoltaic cells.
Statistical model for speckle pattern optimization.
Su, Yong; Zhang, Qingchuan; Gao, Zeren
2017-11-27
Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of the image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas for the optimal speckle radius are presented. Although the primary motivation is from the field of DIC, we believe that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
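A minimal sketch of generating a synthetic speckle pattern as a filtered Poisson process, as characterized above; the Gaussian speckle shape, image size, density, and radius are illustrative assumptions rather than the paper's optimal parameters:

```python
# Hedged sketch: speckle centers drawn from a homogeneous Poisson process,
# each "filtered" by a Gaussian-shaped speckle of a chosen radius.
import numpy as np

def generate_speckle_pattern(size=256, density=0.01, radius=3.0, seed=0):
    rng = np.random.default_rng(seed)
    n_speckles = rng.poisson(density * size * size)
    xs = rng.uniform(0, size, n_speckles)
    ys = rng.uniform(0, size, n_speckles)

    yy, xx = np.mgrid[0:size, 0:size]
    image = np.zeros((size, size))
    for x, y in zip(xs, ys):
        image += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * radius ** 2))
    return image / image.max()  # normalize intensities to [0, 1]
```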
Quality detection system and method of micro-accessory based on microscopic vision
NASA Astrophysics Data System (ADS)
Li, Dongjie; Wang, Shiwei; Fu, Yu
2017-10-01
Considering that the traditional manual inspection of micro-accessories suffers from heavy workload, low efficiency and large operator error, a quality inspection system for micro-accessories has been designed. Microscopic vision technology is used to inspect quality, which optimizes the structure of the detection system. A stepper motor drives a rotating micro-platform to transfer the parts to be inspected, and the microscopic vision system acquires image information of the micro-accessory. The system combines image processing and pattern matching, a variable-scale Sobel differential edge detection algorithm, and an improved Zernike-moment sub-pixel edge detection algorithm in order to achieve more detailed and accurate detection of defect edges. The proposed system can accurately extract edges from complex signals and thereby distinguish qualified from unqualified products with high recognition precision.
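A minimal sketch of the coarse edge-extraction stage only, assuming SciPy's standard Sobel filter; the variable-scale Sobel operator and the improved Zernike-moment sub-pixel refinement described above are not reproduced, and the threshold value is an illustrative assumption:

```python
# Hedged sketch: Sobel gradient magnitude followed by a simple threshold.
import numpy as np
from scipy.ndimage import sobel

def sobel_edges(image, threshold=0.2):
    gx = sobel(image.astype(float), axis=1)
    gy = sobel(image.astype(float), axis=0)
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() + 1e-12
    return magnitude > threshold  # boolean edge map
```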
Lens-free imaging of magnetic particles in DNA assays.
Colle, Frederik; Vercruysse, Dries; Peeters, Sara; Liu, Chengxun; Stakenborg, Tim; Lagae, Liesbet; Del-Favero, Jurgen
2013-11-07
We present a novel opto-magnetic system for the fast and sensitive detection of nucleic acids. The system is based on a lens-free imaging approach resulting in a compact and cheap optical readout of surface hybridized DNA fragments. In our system magnetic particles are attracted towards the detection surface thereby completing the labeling step in less than 1 min. An optimized surface functionalization combined with magnetic manipulation was used to remove all nonspecifically bound magnetic particles from the detection surface. A lens-free image of the specifically bound magnetic particles on the detection surface was recorded by a CMOS imager. This recorded interference pattern was reconstructed in software, to represent the particle image at the focal distance, using little computational power. As a result we were able to detect DNA concentrations down to 10 pM with single particle sensitivity. The possibility of integrated sample preparation by manipulation of magnetic particles, combined with the cheap and highly compact lens-free detection makes our system an ideal candidate for point-of-care diagnostic applications.
Real-time people and vehicle detection from UAV imagery
NASA Astrophysics Data System (ADS)
Gaszczak, Anna; Breckon, Toby P.; Han, Jiwan
2011-01-01
A generic and robust approach for the real-time detection of people and vehicles from an Unmanned Aerial Vehicle (UAV) is an important goal within the framework of fully autonomous UAV deployment for aerial reconnaissance and surveillance. Here we present an approach for the automatic detection of vehicles based on using multiple trained cascaded Haar classifiers with secondary confirmation in thermal imagery. Additionally we present a related approach for people detection in thermal imagery based on a similar cascaded classification technique combined with additional multivariate Gaussian shape matching. The results presented show the successful detection of vehicles and people under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. Performance of the detector is optimized to reduce the overall false positive rate by aiming at the detection of each object of interest (vehicle/person) at least once in the environment (i.e. per search pattern flight path) rather than every object in each image frame. Currently the detection rate for people is ~70% and for cars ~80%, although the overall episodic object detection rate for each flight pattern exceeds 90%.
Teleconnection Paths via Climate Network Direct Link Detection.
Zhou, Dong; Gozolchiani, Avi; Ashkenazy, Yosef; Havlin, Shlomo
2015-12-31
Teleconnections describe remote connections (typically thousands of kilometers) of the climate system. These are of great importance in climate dynamics as they reflect the transportation of energy and climate change on global scales (like the El Niño phenomenon). Yet, the path of influence propagation between such remote regions, and weighting associated with different paths, are only partially known. Here we propose a systematic climate network approach to find and quantify the optimal paths between remotely distant interacting locations. Specifically, we separate the correlations between two grid points into direct and indirect components, where the optimal path is found based on a minimal total cost function of the direct links. We demonstrate our method using near surface air temperature reanalysis data, on identifying cross-latitude teleconnections and their corresponding optimal paths. The proposed method may be used to quantify and improve our understanding regarding the emergence of climate patterns on global scales.
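A minimal sketch of the path-finding step once direct-link strengths are available, assuming the NetworkX library; the cost definition (one minus the absolute direct correlation) and all names are illustrative assumptions rather than the authors' exact cost function:

```python
# Hedged sketch: build a weighted graph from direct-link correlations and
# find the minimum-total-cost path between two remote grid points.
import networkx as nx

def optimal_teleconnection_path(direct_corr, source, target):
    """direct_corr: dict mapping (node_i, node_j) -> direct-link correlation."""
    g = nx.Graph()
    for (i, j), c in direct_corr.items():
        g.add_edge(i, j, cost=1.0 - abs(c))  # strong direct links cost less
    return nx.shortest_path(g, source, target, weight="cost")
```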
Modular and configurable optimal sequence alignment software: Cola.
Zamani, Neda; Sundström, Görel; Höppner, Marc P; Grabherr, Manfred G
2014-01-01
The fundamental challenge in optimally aligning homologous sequences is to define a scoring scheme that best reflects the underlying biological processes. Maximising the overall number of matches in the alignment does not always reflect the patterns by which nucleotides mutate. Efficiently implemented algorithms that can be parameterised to accommodate more complex non-linear scoring schemes are thus desirable. We present Cola, alignment software that implements different optimal alignment algorithms, also allowing for scoring contiguous matches of nucleotides in a nonlinear manner. The latter places more emphasis on short, highly conserved motifs, and less on the surrounding nucleotides, which can be more diverged. To illustrate the differences, we report results from aligning 14,100 sequences from 3' untranslated regions of human genes to 25 of their mammalian counterparts, where we found that a nonlinear scoring scheme is more consistent than a linear scheme in detecting short, conserved motifs. Cola is freely available under LGPL from https://github.com/nedaz/cola.
Design of compactly supported wavelet to match singularities in medical images
NASA Astrophysics Data System (ADS)
Fung, Carrson C.; Shi, Pengcheng
2002-11-01
Analysis and understanding of medical images has important clinical value for patient diagnosis and treatment, as well as technical implications for computer vision and pattern recognition. One of the most fundamental issues is the detection of object boundaries or singularities, which is often the basis for further processes such as organ/tissue recognition, image registration, motion analysis, measurement of anatomical and physiological parameters, etc. The focus of this work involved taking a correlation-based approach toward edge detection, by exploiting some of the desirable properties of wavelet analysis. This leads to the possibility of constructing a bank of detectors, consisting of multiple wavelet basis functions of different scales which are optimal for specific types of edges, in order to optimally detect all the edges in an image. Our work involved developing a set of wavelet functions which match the shape of the ramp and pulse edges. The matching algorithm used focuses on matching the edges in the frequency domain. It was proven that this technique could create matching wavelets applicable at all scales. Results have shown that matching wavelets can be obtained for the pulse edge while the ramp edge requires another matching algorithm.
Defrance, Matthieu; Janky, Rekin's; Sand, Olivier; van Helden, Jacques
2008-01-01
This protocol explains how to discover functional signals in genomic sequences by detecting over- or under-represented oligonucleotides (words) or spaced pairs thereof (dyads) with the Regulatory Sequence Analysis Tools (http://rsat.ulb.ac.be/rsat/). Two typical applications are presented: (i) predicting transcription factor-binding motifs in promoters of coregulated genes and (ii) discovering phylogenetic footprints in promoters of orthologous genes. The steps of this protocol include purging genomic sequences to discard redundant fragments, discovering over-represented patterns and assembling them to obtain degenerate motifs, scanning sequences and drawing feature maps. The main strength of the method is its statistical ground: the binomial significance provides an efficient control on the rate of false positives. In contrast with optimization-based pattern discovery algorithms, the method supports the detection of under- as well as over-represented motifs. Computation times vary from seconds (gene clusters) to minutes (whole genomes). The execution of the whole protocol should take approximately 1 h.
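A minimal sketch of the binomial significance computation that underpins the statistical control described above, assuming SciPy; the parameter names and example numbers are illustrative, and RSAT's actual background models are more elaborate:

```python
# Hedged sketch: binomial significance of an oligonucleotide observed
# k_observed times over n_positions possible positions, given an expected
# background probability p_expected per position.
from scipy.stats import binom

def binomial_pvalue(k_observed, n_positions, p_expected):
    # P(X >= k) under the binomial null model of word occurrences
    return binom.sf(k_observed - 1, n_positions, p_expected)

# Example: a word seen 15 times in 10,000 positions, expected p = 0.0005
print(binomial_pvalue(15, 10000, 0.0005))
```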
Strain measurement in semiconductor heterostructures by scanning transmission electron microscopy.
Müller, Knut; Rosenauer, Andreas; Schowalter, Marco; Zweck, Josef; Fritz, Rafael; Volz, Kerstin
2012-10-01
This article deals with the measurement of strain in semiconductor heterostructures from convergent beam electron diffraction patterns. In particular, three different algorithms in the field of (circular) pattern recognition are presented that are able to detect diffracted disc positions accurately, from which the strain in growth direction is calculated. Although the three approaches are very different, as one is based on edge detection, one on rotational averages, and one on cross correlation with masks, it is found that identical strain profiles result for an In(x)Ga(1-x)N(y)As(1-y)/GaAs heterostructure consisting of five compressively and tensile strained layers. We achieve a precision of strain measurement of 7-9·10^-4 and a spatial resolution of 0.5-0.7 nm over the whole width of the layer stack, which was 350 nm. Although the method is already very applicable to strain measurements in contemporary nanostructures, we additionally suggest, motivated by the present studies, future hardware and software designs optimized for fast and direct acquisition of strain distributions.
Discovering Structural Regularity in 3D Geometry
Pauly, Mark; Mitra, Niloy J.; Wallner, Johannes; Pottmann, Helmut; Guibas, Leonidas J.
2010-01-01
We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or mesh-based models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis. PMID:21170292
Chen, Zhenning; Shao, Xinxing; Xu, Xiangyang; He, Xiaoyuan
2018-02-01
The technique of digital image correlation (DIC), which has been widely used for noncontact deformation measurements in both the scientific and engineering fields, is greatly affected in its performance by the quality of the speckle patterns. This study was concerned with the optimization of the digital speckle pattern (DSP) for DIC in consideration of both accuracy and efficiency. The root-mean-square error of the inverse compositional Gauss-Newton algorithm and the average number of iterations were used as quality metrics. Moreover, the influence of subset size and the noise level of the images, which are the basic parameters in the quality assessment formulations, was also considered. The simulated binary speckle patterns were first compared with Gaussian speckle patterns and captured DSPs. Both single-radius and multi-radius DSPs were optimized. Experimental tests and analyses were conducted to obtain the optimized and recommended DSP. The vector diagram of the optimized speckle pattern was also made available as a reference.
Movement patterns of silvertip sharks (Carcharhinus albimarginatus) on coral reefs
NASA Astrophysics Data System (ADS)
Espinoza, Mario; Heupel, Michelle R.; Tobin, Andrew J.; Simpfendorfer, Colin A.
2015-09-01
Understanding how sharks use coral reefs is essential for assessing risk of exposure to fisheries, habitat loss, and climate change. Despite a wide Indo-Pacific distribution, little is known about the spatial ecology of silvertip sharks (Carcharhinus albimarginatus), compromising the ability to effectively manage their populations. We examined the residency and movements of silvertip sharks in the central Great Barrier Reef (GBR). An array of 56 VR2W acoustic receivers was used to monitor shark movements on 17 semi-isolated reefs. Twenty-seven individuals tagged with acoustic transmitters were monitored from 70 to 731 d. Residency index to the study site ranged from 0.05 to 0.97, with a mean residency (±SD) of 0.57 ± 0.26, but most individuals were detected at or near their tagging reef. Clear seasonal patterns were apparent, with fewer individuals detected between September and February. A large proportion of the tagged population (>71 %) moved regularly between reefs. Silvertip sharks were detected less during daytime and exhibited a strong diel pattern in depth use, which may be a strategy for optimizing energetic budgets and foraging opportunities. This study provides the first detailed examination of the spatial ecology and behavior of silvertip sharks on coral reefs. Silvertip sharks remained resident at coral reef habitats over long periods, but our results also suggest this species may have more complex movement patterns and use larger areas of the GBR than common reef shark species. Our findings highlight the need to further understand the movement ecology of silvertip sharks at different spatial and temporal scales, which is critical for developing effective management approaches.
Use of principal velocity patterns in the analysis of structural acoustic optimization.
Johnson, Wayne M; Cunefare, Kenneth A
2007-02-01
This work presents an application of principal velocity patterns in the analysis of the structural acoustic design optimization of an eight-ply composite cylindrical shell. The approach consists of performing structural acoustic optimizations of a composite cylindrical shell subject to external harmonic monopole excitation. The ply angles are used as the design variables in the optimization. The results of the ply angle design variable formulation are interpreted using the singular value decomposition of the interior acoustic potential energy. The decomposition of the acoustic potential energy provides surface velocity patterns associated with lower levels of interior noise. These surface velocity patterns are shown to correspond to those from the structural acoustic optimization results. Thus, it is demonstrated that the capacity to design multi-ply composite cylinders for quiet interiors is determined by how well the cylinder can be designed to exhibit particular surface velocity patterns associated with lower noise levels.
Cam, E.; Monnat, J.-Y.
2000-01-01
Heterogeneity in individual quality can be a major obstacle when interpreting age-specific variation in life-history traits. Heterogeneity is likely to lead to within-generation selection, and patterns observed at the population level may result from the combination of hidden patterns specific to subpopulations. Population-level patterns are not relevant to hypotheses concerning the evolution of age-specific reproductive strategies if they differ from patterns at the individual level. We addressed the influence of age and a variable used as a surrogate of quality (yearly reproductive state) on survival and breeding probability in the kittiwake. We found evidence of an effect of age and quality on both demographic parameters. Patterns observed in breeders are consistent with the selection hypothesis, which predicts age-related increases in survival and traits positively correlated with survival. Our results also reveal unexpected age effects specific to subgroups: the influence of age on survival and future breeding probability is not the same in nonbreeders and breeders. These patterns are observed in higher-quality breeding habitats, where the influence of extrinsic factors on breeding state is the weakest. Moreover, there is slight evidence of an influence of sex on breeding probability (not on survival), but the same overall pattern is observed in both sexes. Our results support the hypothesis that age-related variation in demographic parameters observed at the population level is partly shaped by heterogeneity among individuals. They also suggest processes specific to subpopulations. Recent theoretical developments lay emphasis on integration of sources of heterogeneity in optimization models to account for apparently 'sub-optimal' empirical patterns. Incorporation of sources of heterogeneity is also the key to investigation of age-related reproductive strategies in heterogeneous populations. Thwarting 'heterogeneity's ruses' has become a major challenge for detecting and understanding natural processes, and for a constructive confrontation between empirical and theoretical studies.
Optimizing Robinson Operator with Ant Colony Optimization As a Digital Image Edge Detection Method
NASA Astrophysics Data System (ADS)
Yanti Nasution, Tarida; Zarlis, Muhammad; K. M Nasution, Mahyuddin
2017-12-01
Edge detection serves to identify the boundaries of an object against a background with which it overlaps. One of the classic methods for edge detection is the Robinson operator. The Robinson operator produces thin, faint, gray edge lines. To overcome these deficiencies, an improved edge detection method is proposed that takes a graph-based approach using the Ant Colony Optimization algorithm. The improvements thicken the edges and reconnect edges that have been cut off. This research aims to optimize the Robinson operator with Ant Colony Optimization, compare the outputs, and infer the extent to which Ant Colony Optimization can improve the un-optimized edge detection result and the accuracy of Robinson edge detection. The parameters used in the performance measurement of edge detection are the morphology of the resulting edge lines, MSE and PSNR. The results show that the combined Robinson and Ant Colony Optimization method produces images with thicker, more distinct edges. Ant Colony Optimization can thus be used as a method for optimizing the Robinson operator, improving the edge detection result by an average of 16.77% over the classic Robinson result.
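A minimal sketch of the classic Robinson compass operator that the ACO step is intended to improve, assuming NumPy and SciPy; the eight masks are generated by 45-degree rotations of the first kernel, and the function names are illustrative:

```python
# Hedged sketch: convolve the image with the eight Robinson compass masks
# and keep the maximum absolute response per pixel as the edge strength.
import numpy as np
from scipy.ndimage import convolve

ROBINSON_MASKS = [
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),  # first compass mask
]
# The remaining seven masks are 45-degree rotations of the first one
for _ in range(7):
    m = ROBINSON_MASKS[-1]
    ROBINSON_MASKS.append(np.array([[m[0, 1], m[0, 2], m[1, 2]],
                                    [m[0, 0], 0,       m[2, 2]],
                                    [m[1, 0], m[2, 0], m[2, 1]]]))

def robinson_edges(image):
    responses = [np.abs(convolve(image.astype(float), k)) for k in ROBINSON_MASKS]
    return np.max(responses, axis=0)
```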
Stiffler, Lydia L.; Anderson, James T.; Welsh, Amy B.; Harding, Sergio R.; Costanzo, Gary R.; Katzner, Todd
2017-01-01
Surveys for secretive marsh birds could be improved with refinements to address regional and species-specific variation in detection probabilities and optimal times of day to survey. Diel variation in relation to naïve occupancy, detection rates, and vocalization rates of King (Rallus elegans) and Clapper (R. crepitans) rails was studied in intracoastal waterways in Virginia, USA. Autonomous acoustic devices recorded vocalizations of King and Clapper rails at 75 locations for 48-hr periods within a marsh complex. Naïve King and Clapper rail occupancy did not vary hourly at either the marsh or the study area level. Combined King and Clapper rail detections and vocalizations varied across marshes, decreased as the sampling season progressed, and, for detections, were greatest during low rising tides (P < 0.01). Hourly variation in vocalization and detection rates did not show a pattern but occurred in 7.8% of pairwise comparisons for detections and in 10.5% of pairwise comparisons for vocalizations (P < 0.01). Higher rates of detections and vocalizations occurred during the hours of 00:00–00:59, 05:00–05:59, 14:00–15:59, and lower rates during the hours of 07:00–09:59. Although statistically significant, because there were no patterns in these hourly differences, they may not be biologically relevant and are of little use to management. In fact, these findings demonstrate that surveys for King and Clapper rails in Virginia intracoastal waterways may be effectively conducted throughout the day.
Dopamine reward prediction-error signalling: a two-component response
Schultz, Wolfram
2017-01-01
Environmental stimuli and objects, including rewards, are often processed sequentially in the brain. Recent work suggests that the phasic dopamine reward prediction-error response follows a similar sequential pattern. An initial brief, unselective and highly sensitive increase in activity unspecifically detects a wide range of environmental stimuli, then quickly evolves into the main response component, which reflects subjective reward value and utility. This temporal evolution allows the dopamine reward prediction-error signal to optimally combine speed and accuracy. PMID:26865020
Cha, Wansik; Tung, Yi-Chung; Meyerhoff, Mark E.; Takayama, Shuichi
2010-01-01
This manuscript describes a thin amperometric nitric oxide (NO) sensor that can be microchannel embedded to enable direct real-time detection of NO produced by cells cultured within the microdevice. A key for achieving the thin (~ 1 mm) planar sensor configuration required for sensor-channel integration is the use of gold/indium-tin oxide patterned electrode directly on a porous polymer membrane (pAu/ITO) as the base working electrode. Electrochemically deposited Au-hexacyanoferrate layer on pAu/ITO is used to catalyze NO oxidation to nitrite at lower applied potentials (0.65 ~ 0.75 V vs. Ag/AgCl) and stabilize current output. Furthermore, use of a gas-permeable membrane to separate internal sensor compartments from the sample phase imparts excellent NO selectivity over common interferents (e.g., nitrite, ascorbate, ammonia, etc.) present in culture media and biological fluids. The optimized sensor design reversibly detects NO down to ~1 nM level in stirred buffer and <10 nM in flowing buffer when integrated within a polymeric microfluidic device. We demonstrate utility of the channel-embedded sensor by monitoring NO generation from macrophages cultured within non-gas permeable microchannels, as they are stimulated with endotoxin. PMID:20329749
NASA Astrophysics Data System (ADS)
Crosta, Giovanni Franco; Pan, Yong-Le; Aptowicz, Kevin B.; Casati, Caterina; Pinnick, Ronald G.; Chang, Richard K.; Videen, Gorden W.
2013-12-01
Measurement of two-dimensional angle-resolved optical scattering (TAOS) patterns is an attractive technique for detecting and characterizing micron-sized airborne particles. In general, the interpretation of these patterns and the retrieval of the particle refractive index, shape or size alone, are difficult problems. By reformulating the problem in statistical learning terms, a solution is proposed herewith: rather than identifying airborne particles from their scattering patterns, TAOS patterns themselves are classified through a learning machine, where feature extraction interacts with multivariate statistical analysis. Feature extraction relies on spectrum enhancement, which includes the discrete cosine Fourier transform and non-linear operations. Multivariate statistical analysis includes computation of the principal components and supervised training, based on the maximization of a suitable figure of merit. All algorithms have been combined together to analyze TAOS patterns, organize feature vectors, design classification experiments, carry out supervised training, assign unknown patterns to classes, and fuse information from different training and recognition experiments. The algorithms have been tested on a data set with more than 3000 TAOS patterns. The parameters that control the algorithms at different stages have been allowed to vary within suitable bounds and are optimized to some extent. Classification has been targeted at discriminating aerosolized Bacillus subtilis particles, a simulant of anthrax, from atmospheric aerosol particles and interfering particles, like diesel soot. By assuming that all training and recognition patterns come from the respective reference materials only, the most satisfactory classification result corresponds to 20% false negatives from B. subtilis particles and <11% false positives from all other aerosol particles. The most effective operations have consisted of thresholding TAOS patterns in order to reject defective ones, and forming training sets from three or four pattern classes. The presented automated classification method may be adapted into a real-time operation technique, capable of detecting and characterizing micron-sized airborne particles.
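A minimal sketch of a simplified version of the processing chain described above (spectrum enhancement via a 2D discrete cosine transform with a logarithmic non-linearity, principal-component projection, supervised classification), assuming SciPy and scikit-learn; the figure-of-merit-driven training and information fusion of the paper are not reproduced, and all names are illustrative:

```python
# Hedged sketch: DCT-based spectrum enhancement + PCA + generic classifier.
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import SVC

def enhance_spectrum(patterns):
    # patterns: 3D array (n_samples, h, w) of TAOS intensity patterns
    spectra = np.abs(dctn(patterns, axes=(1, 2)))
    return np.log1p(spectra).reshape(len(patterns), -1)  # non-linear step

def build_classifier(n_components=50):
    # n_components must not exceed the number of training patterns
    return make_pipeline(
        FunctionTransformer(enhance_spectrum),
        PCA(n_components=n_components),
        SVC(kernel="rbf"),
    )
```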
Pattern uniformity control in integrated structures
NASA Astrophysics Data System (ADS)
Kobayashi, Shinji; Okada, Soichiro; Shimura, Satoru; Nafus, Kathleen; Fonseca, Carlos; Biesemans, Serge; Enomoto, Masashi
2017-03-01
In our previous paper dealing with multi-patterning, we proposed a new indicator to quantify the quality of final wafer pattern transfer, called interactive pattern fidelity error (IPFE). It detects patterning failures resulting from any source of variation in creating integrated patterns. IPFE is a function of overlay and edge placement error (EPE) of all layers comprising the final pattern (i.e. lower and upper layers). In this paper, we extend the use cases with a Via case in addition to the bridge case (Block on Spacer). We propose an IPFE budget and CD budget using simple geometric and statistical models with analysis of variance (ANOVA). In addition, we validate the model with experimental data. The experimental results show that improvements in overlay, local CDU (LCDU) of contact hole (CH) or pillar patterns (especially stochastic pattern noise (SPN)) and pitch walking are all critical to meeting the budget requirements. We also provide a special note about the importance of the line length used in analyzing LWR. We find that the IPFE and CD budget requirements are consistent with the ITRS technical requirements table. Therefore the IPFE concept can be adopted for a variety of integrated structures comprising digital logic circuits. Finally, we suggest how to use IPFE for yield management and for optimizing requirements for each process.
Optimal design of a bank of spatio-temporal filters for EEG signal classification.
Higashi, Hiroshi; Tanaka, Toshihisa
2011-01-01
Spatial weights for electrodes, called common spatial patterns (CSP), are known to be effective in EEG signal classification for motor imagery based brain computer interfaces (MI-BCI). To achieve accurate classification with CSP, the frequency filter should be properly designed. To this end, several methods for designing the filter have been proposed. However, the existing methods cannot consider plural brain activities described by different frequency bands and different spatial patterns, such as activities of mu and beta rhythms. In order to efficiently extract these brain activities, we propose a method to design plural filters and spatial weights which extract the desired brain activity. The proposed method designs finite impulse response (FIR) filters and the associated spatial weights by optimization of an objective function which is a natural extension of CSP. Moreover, we show by a classification experiment that the bank of FIR filters designed by introducing an orthogonality constraint into the objective function can extract good discriminative features. The experimental results also suggest that the proposed method can automatically detect and extract brain activities related to motor imagery.
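A minimal sketch of the standard CSP baseline that the proposed objective function extends, computed via a generalized eigendecomposition of the class covariance matrices with SciPy; the joint FIR-filter design of the paper is not shown, and the function and parameter names are illustrative:

```python
# Hedged sketch: CSP spatial filters from the generalized eigenvalue problem
# C_a w = lambda (C_a + C_b) w; the most discriminative filters lie at both
# ends of the eigenvalue spectrum.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    eigvals, eigvecs = eigh(ca, ca + cb)   # generalized eigendecomposition
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T  # rows are spatial filters
```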
Improved classification of small-scale urban watersheds using thematic mapper simulator data
NASA Technical Reports Server (NTRS)
Owe, M.; Ormsby, J. P.
1984-01-01
The utility of Landsat MSS classification methods in the case of small, highly urbanized hydrological basins containing complex land-use patterns is limited, and is plagued by misclassifications due to the spectral response similarity of many dissimilar surfaces. Landsat MSS data for the Conley Creek basin near Atlanta, Georgia, have been compared to thematic mapper simulator (TMS) data obtained on the same day by aircraft. The TMS data were able to alleviate many of the recurring problems associated with MSS data, through bandwidth optimization, an increase of the number of spectral bands to seven, and an improvement of ground resolution to 30 m. The TMS is thereby able to detect small water bodies, powerline rights-of-way, and even individual buildings.
Optimization of landscape pattern [Chapter 8
John Hof; Curtis Flather
2007-01-01
A fundamental assumption in landscape ecology is that spatial patterns have significant influences on the flows of materials, energy, and information while processes create, modify, and maintain spatial patterns. Thus, it is of paramount importance in both theory and practice to address the questions of landscape pattern optimization ... For example, can landscape...
Optimizing countershading camouflage.
Cuthill, Innes C; Sanghera, N Simon; Penacchio, Olivier; Lovell, Paul George; Ruxton, Graeme D; Harris, Julie M
2016-11-15
Countershading, the widespread tendency of animals to be darker on the side that receives strongest illumination, has classically been explained as an adaptation for camouflage: obliterating cues to 3D shape and enhancing background matching. However, there have only been two quantitative tests of whether the patterns observed in different species match the optimal shading to obliterate 3D cues, and no tests of whether optimal countershading actually improves concealment or survival. We use a mathematical model of the light field to predict the optimal countershading for concealment that is specific to the light environment and then test this prediction with correspondingly patterned model "caterpillars" exposed to avian predation in the field. We show that the optimal countershading is strongly illumination-dependent. A relatively sharp transition in surface patterning from dark to light is only optimal under direct solar illumination; if there is diffuse illumination from cloudy skies or shade, the pattern provides no advantage over homogeneous background-matching coloration. Conversely, a smoother gradation between dark and light is optimal under cloudy skies or shade. The demonstration of these illumination-dependent effects of different countershading patterns on predation risk strongly supports the comparative evidence showing that the type of countershading varies with light environment.
Free-standing carbon nanotube composite sensing skin for distributed strain sensing in structures
NASA Astrophysics Data System (ADS)
Burton, Andrew R.; Minegishi, Kaede; Kurata, Masahiro; Lynch, Jerome P.
2014-04-01
The technical challenges of managing the health of critical infrastructure systems necessitate greater structural sensing capabilities. Among these needs is the ability for quantitative, spatial damage detection on critical structural components. Advances in material science have now opened the door for novel and cost-effective spatial sensing solutions specially tailored for damage detection in structures. However, challenges remain before spatial damage detection can be realized. Some of the technical challenges include sensor installations and extensive signal processing requirements. This work addresses these challenges by developing a patterned carbon nanotube composite thin film sensor whose pattern has been optimized for measuring the spatial distribution of strain. The carbon nanotube-polymer nanocomposite sensing material is fabricated on a flexible polyimide substrate using a layer-by-layer deposition process. The thin film sensors are then patterned into sensing elements using optical lithography processes common to microelectromechanical systems (MEMS) technologies. The sensor array is designed as a series of sensing elements with varying width to provide insight on the limitations of such patterning and implications of pattern geometry on sensing signals. Once fabrication is complete, the substrate and attached sensor are epoxy bonded to a poly vinyl composite (PVC) bar that is then tested with a uniaxial, cyclic load pattern and mechanical response is characterized. The fabrication processes are then utilized on a larger-scale to develop and instrument a component-specific sensing skin in order to observe the strain distribution on the web of a steel beam. The instrumented beam is part of a larger steel beam-column connection with a concrete slab in composite action. The beam-column subassembly is laterally loaded and strain trends in the web are observed using the carbon nanotube composite sensing skin. The results are discussed in the context of understanding the properties of the thin film sensor and how it may be advanced toward structural sensing applications.
Bio-inspired patterned networks (BIPS) for development of wearable/disposable biosensors
NASA Astrophysics Data System (ADS)
McLamore, E. S.; Convertino, M.; Hondred, John; Das, Suprem; Claussen, J. C.; Vanegas, D. C.; Gomes, C.
2016-05-01
Here we demonstrate a novel approach for fabricating point of care (POC) wearable electrochemical biosensors based on 3D patterning of bionanocomposite networks. To create Bio-Inspired Patterned network (BIPS) electrodes, we first generate fractal network in silico models that optimize transport of network fluxes according to an energy function. Network patterns are then inkjet printed onto flexible substrate using conductive graphene ink. We then deposit fractal nanometal structures onto the graphene to create a 3D nanocomposite network. Finally, we biofunctionalize the surface with biorecognition agents using covalent bonding. In this paper, BIPS are used to develop high efficiency, low cost biosensors for measuring glucose as a proof of concept. Our results on the fundamental performance of BIPS sensors show that the biomimetic nanostructures significantly enhance biosensor sensitivity, accuracy, response time, limit of detection, and hysteresis compared to conventional POC non fractal electrodes (serpentine, interdigitated, and screen printed electrodes). BIPs, in particular Apollonian patterned BIPS, represent a new generation of POC biosensors based on nanoscale and microscale fractal networks that significantly improve electrical connectivity, leading to enhanced sensor performance.
Research on the decision-making model of land-use spatial optimization
NASA Astrophysics Data System (ADS)
He, Jianhua; Yu, Yan; Liu, Yanfang; Liang, Fei; Cai, Yuqiu
2009-10-01
Using the optimized landscape pattern and land-use structure as constraints on the cellular automata (CA) simulation, a decision-making model for land-use spatial optimization is established by coupling the landscape pattern model with cellular automata, realizing quantitative and spatial land-use optimization simultaneously. Huangpi district is taken as a case study to verify the rationality of the model.
Visualizing frequent patterns in large multivariate time series
NASA Astrophysics Data System (ADS)
Hao, M.; Marwah, M.; Janetzko, H.; Sharma, R.; Keim, D. A.; Dayal, U.; Patnaik, D.; Ramakrishnan, N.
2011-01-01
The detection of previously unknown, frequently occurring patterns in time series, often called motifs, has been recognized as an important task. However, it is difficult to discover and visualize these motifs as their numbers increase, especially in large multivariate time series. To find frequent motifs, we use several temporal data mining and event encoding techniques to cluster and convert a multivariate time series to a sequence of events. Then we quantify the efficiency of the discovered motifs by linking them with a performance metric. To visualize frequent patterns in a large time series with potentially hundreds of nested motifs on a single display, we introduce three novel visual analytics methods: (1) motif layout, using colored rectangles for visualizing the occurrences and hierarchical relationships of motifs in a multivariate time series, (2) motif distortion, for enlarging or shrinking motifs as appropriate for easy analysis and (3) motif merging, to combine a number of identical adjacent motif instances without cluttering the display. Analysts can interactively optimize the degree of distortion and merging to get the best possible view. A specific motif (e.g., the most efficient or least efficient motif) can be quickly detected from a large time series for further investigation. We have applied these methods to two real-world data sets: data center cooling and oil well production. The results provide important new insights into the recurring patterns.
Inhomogeneity Based Characterization of Distribution Patterns on the Plasma Membrane
Paparelli, Laura; Corthout, Nikky; Wakefield, Devin L.; Sannerud, Ragna; Jovanovic-Talisman, Tijana; Annaert, Wim; Munck, Sebastian
2016-01-01
Cell surface protein and lipid molecules are organized in various patterns: randomly, along gradients, or clustered when segregated into discrete micro- and nano-domains. Their distribution is tightly coupled to events such as polarization, endocytosis, and intracellular signaling, but challenging to quantify using traditional techniques. Here we present a novel approach to quantify the distribution of plasma membrane proteins and lipids. This approach describes spatial patterns in degrees of inhomogeneity and incorporates an intensity-based correction to analyze images with a wide range of resolutions; we have termed it Quantitative Analysis of the Spatial distributions in Images using Mosaic segmentation and Dual parameter Optimization in Histograms (QuASIMoDOH). We tested its applicability using simulated microscopy images and images acquired by widefield microscopy, total internal reflection microscopy, structured illumination microscopy, and photoactivated localization microscopy. We validated QuASIMoDOH, successfully quantifying the distribution of protein and lipid molecules detected with several labeling techniques, in different cell model systems. We also used this method to characterize the reorganization of cell surface lipids in response to disrupted endosomal trafficking and to detect dynamic changes in the global and local organization of epidermal growth factor receptors across the cell surface. Our findings demonstrate that QuASIMoDOH can be used to assess protein and lipid patterns, quantifying distribution changes and spatial reorganization at the cell surface. An ImageJ/Fiji plugin of this analysis tool is provided. PMID:27603951
Mantel, Bruno; Stoffregen, Thomas A.; Campbell, Alain; Bardy, Benoît G.
2015-01-01
Body movement influences the structure of multiple forms of ambient energy, including optics and gravito-inertial force. Some researchers have argued that egocentric distance is derived from inferential integration of visual and non-visual stimulation. We suggest that accurate information about egocentric distance exists in perceptual stimulation as higher-order patterns that extend across optics and inertia. We formalize a pattern that specifies the egocentric distance of a stationary object across higher-order relations between optics and inertia. This higher-order parameter is created by self-generated movement of the perceiver in inertial space relative to the illuminated environment. For this reason, we placed minimal restrictions on the exploratory movements of our participants. We asked whether humans can detect and use the information available in this higher-order pattern. Participants judged whether a virtual object was within reach. We manipulated relations between body movement and the ambient structure of optics and inertia. Judgments were precise and accurate when the higher-order optical-inertial parameter was available. When only optic flow was available, judgments were poor. Our results reveal that participants perceived egocentric distance from the higher-order, optical-inertial consequences of their own exploratory activity. Analysis of participants’ movement trajectories revealed that self-selected movements were complex, and tended to optimize availability of the optical-inertial pattern that specifies egocentric distance. We argue that accurate information about egocentric distance exists in higher-order patterns of ambient energy, that self-generated movement can generate these higher-order patterns, and that these patterns can be detected and used to support perception of egocentric distance that is precise and accurate. PMID:25856410
Gomez-Cardona, Daniel; Hayes, John W; Zhang, Ran; Li, Ke; Cruz-Bastida, Juan Pablo; Chen, Guang-Hong
2018-05-01
Different low-signal correction (LSC) methods have been shown to efficiently reduce noise streaks and noise level in CT to provide acceptable images at low-radiation dose levels. These methods usually result in CT images with highly shift-variant and anisotropic spatial resolution and noise, which makes the parameter optimization process highly nontrivial. The purpose of this work was to develop a local task-based parameter optimization framework for LSC methods. Two well-known LSC methods, the adaptive trimmed mean (ATM) filter and the anisotropic diffusion (AD) filter, were used as examples to demonstrate how to use the task-based framework to optimize filter parameter selection. Two parameters, denoted by the set P, for each LSC method were included in the optimization problem. For the ATM filter, these parameters are the low- and high-signal threshold levels p_l and p_h; for the AD filter, the parameters are the exponents δ and γ in the brightness gradient function. The detectability index d' under the non-prewhitening (NPW) mathematical observer model was selected as the metric for parameter optimization. The optimization problem was formulated as an unconstrained optimization problem that consisted of maximizing an objective function d'_ij(P), where i and j correspond to the i-th imaging task and j-th spatial location, respectively. Since there is no explicit mathematical function to describe the dependence of d' on the set of parameters P for each LSC method, the optimization problem was solved via an experimentally measured d' map over a densely sampled parameter space. In this work, three high-contrast-high-frequency discrimination imaging tasks were defined to explore the parameter space of each of the LSC methods: a vertical bar pattern (task I), a horizontal bar pattern (task II), and a multidirectional feature (task III). Two spatial locations were considered for the analysis, a posterior region-of-interest (ROI) located within the noise streaks region and an anterior ROI, located further from the noise streaks region. Optimal results derived from the task-based detectability index metric were compared to other operating points in the parameter space with different noise and spatial resolution trade-offs. The optimal operating points determined through the d' metric depended on the interplay between the major spatial frequency components of each imaging task and the highly shift-variant and anisotropic noise and spatial resolution properties associated with each operating point in the LSC parameter space. This interplay influenced imaging performance the most when the major spatial frequency component of a given imaging task coincided with the direction of spatial resolution loss or with the dominant noise spatial frequency component; this was the case for imaging task II. The performance of imaging tasks I and III was influenced by this interplay on a smaller scale than that of imaging task II, since the major frequency component of task I was perpendicular to that of imaging task II, and because imaging task III did not have strong directional dependence. For both LSC methods, there was a strong dependence of the overall d' magnitude and shape of the contours on the spatial location within the phantom, particularly for imaging tasks II and III. The d' value obtained at the optimal operating point for each spatial location and imaging task was similar when comparing the LSC methods studied in this work.
A local task-based detectability framework to optimize the selection of parameters for LSC methods was developed. The framework takes into account the potential shift-variant and anisotropic spatial resolution and noise properties to maximize the imaging performance of the CT system. Optimal parameters for a given LSC method depend strongly on the spatial location within the image object. © 2018 American Association of Physicists in Medicine.
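The abstract above describes choosing LSC filter parameters by maximizing an experimentally measured detectability map rather than an analytic objective. The sketch below illustrates that idea on a hypothetical d' map sampled over a two-parameter grid; the grids, array shapes, and random values are placeholders, not data from the study.

```python
import numpy as np

# Hypothetical d' map: d_prime[i, j, a, b] is the measured detectability for
# imaging task i, ROI j, and filter parameters (p1_grid[a], p2_grid[b]).
rng = np.random.default_rng(0)
p1_grid = np.linspace(0.0, 1.0, 21)   # first filter parameter (e.g. p_l), placeholder range
p2_grid = np.linspace(0.0, 1.0, 21)   # second filter parameter (e.g. p_h), placeholder range
d_prime = rng.random((3, 2, p1_grid.size, p2_grid.size))

def optimal_operating_point(d_map, task, location):
    """Return (p1, p2, d') maximizing the measured d' for one task/ROI pair."""
    grid = d_map[task, location]
    a, b = np.unravel_index(np.argmax(grid), grid.shape)
    return p1_grid[a], p2_grid[b], grid[a, b]

for i in range(d_prime.shape[0]):          # imaging tasks I-III
    for j in range(d_prime.shape[1]):      # posterior / anterior ROI
        p1, p2, val = optimal_operating_point(d_prime, i, j)
        print(f"task {i}, ROI {j}: P* = ({p1:.2f}, {p2:.2f}), d' = {val:.3f}")
```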
Altomare, Cristina; Guglielmann, Raffaella; Riboldi, Marco; Bellazzi, Riccardo; Baroni, Guido
2015-02-01
In high-precision photon radiotherapy and in hadrontherapy, it is crucial to minimize the occurrence of geometrical deviations with respect to the treatment plan in each treatment session. To this end, point-based infrared (IR) optical tracking for patient set-up quality assessment is performed. Such tracking depends on the placement of external fiducial points. The main purpose of our work is to propose a new algorithm based on simulated annealing and augmented Lagrangian pattern search (SAPS), which is able to take into account prior knowledge, such as spatial constraints, during the optimization process. The SAPS algorithm was tested on data from head and neck and pelvic cancer patients who were fitted with external surface markers for IR optical tracking applied to preliminary patient set-up correction. The integrated algorithm was tested considering optimality measures obtained with Computed Tomography (CT) images (i.e. the ratio between the so-called target registration error and the fiducial registration error, TRE/FRE) and assessing the marker spatial distribution. Comparisons were performed with randomly selected marker configurations and with the GETS algorithm (Genetic Evolutionary Taboo Search), also taking into account the presence of organs at risk. The results obtained with SAPS highlight improvements with respect to the other approaches: (i) the TRE/FRE ratio decreases; (ii) the marker distribution satisfies both marker visibility and spatial constraints. We have also investigated how the TRE/FRE ratio is influenced by the number of markers, obtaining a significant TRE/FRE reduction with respect to the random configurations when a high number of markers is used. The SAPS algorithm is a valuable strategy for fiducial configuration optimization in IR optical tracking applied to patient set-up error detection and correction in radiation therapy, showing that taking prior knowledge into account is valuable in this optimization process. Further work will focus on the computational optimization of the SAPS algorithm toward fast point-of-care applications. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Xia; Fu, Lixia; Yan, Aihua; Guo, Fajun; Wu, Cong; Chen, Hong; Wang, Xinying; Lu, Ming
2018-02-01
The study of development well pattern optimization is a core part of oilfield development and a prerequisite for rational and effective development of an oilfield. Well pattern optimization mainly concerns the type and the density of well patterns. This paper takes the Aer-3 fault block as an example. Firstly, models were built for diamond-shaped inverted 9-spot patterns, rectangular 5-spot patterns, square inverted 9-spot patterns and inverted 7-spot patterns under the same well pattern density to compare the effect of different well patterns on development; secondly, a comprehensive analysis of well pattern density was conducted in terms of economy and technology using methods such as oil reservoir engineering, numerical simulation, economic limits and economic rationality. Finally, a development mode of vertical well + horizontal well was proposed according to the characteristics of the oil reservoirs in some well blocks, which achieved efficient development of this fault block.
Gao, Wenhua; Zhang, An; Chen, Yunsheng; Chen, Zixuan; Chen, Yaowen; Lu, Fushen; Chen, Zhanguang
2013-11-15
Biosensors based on DNA hybridization hold great potential for higher sensitivity, as optimal DNA hybridization efficiency can be achieved by controlling the distribution and orientation of probe strands on the transducer surface. In this work, an innovative strategy is reported to tap the sensitivity potential of the current electrochemiluminescence (ECL) biosensing system by dispersedly anchoring the DNA beacons on a gold nanoparticle (GNP) array electrodeposited on the glassy carbon electrode surface, rather than simply sprawling the coil-like strands onto a planar gold surface. The strategy was developed by designing a "signal-on" ECL biosensing switch fabricated on the GNP-nanopatterned electrode surface for enhanced, ultrasensitive detection of Hg(2+). A 57-mer hairpin-DNA labeled with ferrocene as ECL quencher and a 13-mer DNA labeled with Ru(bpy)3(2+) as reporter were hybridized to construct the signal generator in the off-state. A 31-mer thymine (T)-rich capture-DNA was introduced to form T-T mismatches with the loop sequence of the hairpin-DNA in the presence of Hg(2+) and induce the stem-loop to open, triggering the ECL "signal-on". Peak sensitivity, with a lowest detection limit of 0.1 nM, was achieved at the optimal GNP number density, whereas excessive GNP deposition resulted in sensitivity deterioration for the biosensor. We expect the present strategy could lead to a renovation of existing probe-immobilized ECL genosensor designs, achieving even higher sensitivity in ultralow-level target detection such as the identification of genetic diseases and disorders in basic research and clinical application. Copyright © 2013 Elsevier B.V. All rights reserved.
Image Correlation Pattern Optimization for Micro-Scale In-Situ Strain Measurements
NASA Technical Reports Server (NTRS)
Bomarito, G. F.; Hochhalter, J. D.; Cannon, A. H.
2016-01-01
The accuracy and precision of digital image correlation (DIC) is a function of three primary ingredients: image acquisition, image analysis, and the subject of the image. Development of the first two (i.e. image acquisition techniques and image correlation algorithms) has led to widespread use of DIC; however, fewer developments have been focused on the third ingredient. Typically, subjects of DIC images are mechanical specimens with either a natural surface pattern or a pattern applied to the surface. Research in the area of DIC patterns has primarily been aimed at identifying which surface patterns are best suited for DIC, by comparing patterns to each other. Because the easiest and most widespread methods of applying patterns have a high degree of randomness associated with them (e.g., airbrush, spray paint, particle decoration, etc.), less effort has been spent on exact construction of ideal patterns. With the development of patterning techniques such as microstamping and lithography, patterns can be applied to a specimen pixel by pixel from a patterned image. In these cases, especially because the patterns are reused many times, an optimal pattern is sought such that the error introduced into DIC from the pattern is minimized. DIC consists of tracking the motion of an array of nodes from a reference image to a deformed image. Every pixel in the images has an associated intensity (grayscale) value, with discretization depending on the bit depth of the image. Because individual pixel matching by intensity value yields a non-unique, scale-dependent problem, subsets around each node are used for identification. A correlation criterion is used to find the best match of a particular subset of a reference image within a deformed image. The reader is referred to the references for enumerations of typical correlation criteria. As illustrated by Schreier and Sutton and by Lu and Cary, systematic errors can be introduced by representing the underlying deformation with under-matched shape functions. An important implication, as discussed by Sutton et al., is that in the presence of highly localized deformations (e.g., crack fronts), error can be reduced by minimizing the subset size. In other words, smaller subsets allow more accurate resolution of localized deformations. Conversely, the choice of optimal subset size has been widely studied and a general consensus is that larger subsets with more information content are less prone to random error. Thus, an optimal subset size balances the systematic error from under-matched deformations with the random error from measurement noise. The alternative approach pursued in the current work is to choose a small subset size and optimize the information content within it (i.e., optimizing an applied DIC pattern), rather than finding an optimal subset size. In the literature, many pattern quality metrics have been proposed, e.g., sum of square intensity gradient (SSSIG), mean subset fluctuation, gray level co-occurrence, autocorrelation-based metrics, and speckle-based metrics. The majority of these metrics were developed to quantify the quality of common pseudo-random patterns after they have been applied, and were not created with the intent of pattern generation. As such, it is found that none of the metrics examined in this study are fit to be the objective function of a pattern generation optimization. In some cases, such as with speckle-based metrics, application to pixel-by-pixel patterns is ill-conditioned and requires somewhat arbitrary extensions.
In other cases, such as with the SSSIG, it is shown that trivial solutions exist for the optimum of the metric which are ill-suited for DIC (such as a checkerboard pattern). In the current work, a multi-metric optimization method is proposed whereby quality is viewed as a combination of individual quality metrics. Specifically, SSSIG and two auto-correlation metrics are used which have generally competitive objectives. Thus, each metric could be viewed as a constraint imposed upon the others, thereby precluding the achievement of their trivial solutions. In this way, optimization produces a pattern which balances the benefits of multiple quality metrics. The resulting pattern, along with randomly generated patterns, is subjected to numerical deformations and analyzed with DIC software. The optimal pattern is shown to outperform randomly generated patterns.
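As a rough illustration of the multi-metric idea described above, the following sketch scores candidate pixel patterns with a simple SSSIG term and an autocorrelation-based periodicity penalty, then keeps the best of many random candidates. The metric definitions, weights, and pattern size are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def sssig(pattern):
    """Sum of squared intensity gradients; favours strong local contrast."""
    gy, gx = np.gradient(pattern.astype(float))
    return np.sum(gx**2 + gy**2)

def autocorr_penalty(pattern):
    """Largest non-trivial autocorrelation peak; penalizes periodic patterns."""
    p = pattern - pattern.mean()
    f = np.fft.fft2(p)
    ac = np.fft.ifft2(f * np.conj(f)).real
    ac /= ac.flat[0]            # normalize so the zero-lag peak equals 1
    ac.flat[0] = 0.0            # ignore the trivial zero-lag peak
    return np.max(np.abs(ac))

def score(pattern, w=1e-3):
    # Competing objectives: maximize SSSIG while suppressing periodicity,
    # so checkerboard-like trivial optima are penalized.
    return w * sssig(pattern) - autocorr_penalty(pattern)

rng = np.random.default_rng(1)
best, best_score = None, -np.inf
for _ in range(200):            # simple random search over binary patterns
    candidate = (rng.random((32, 32)) < 0.5).astype(float)
    s = score(candidate)
    if s > best_score:
        best, best_score = candidate, s
print("best combined score:", best_score)
```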
NASA Astrophysics Data System (ADS)
Reynolds, A. M.
2008-04-01
A random Lévy-looping model of searching is devised and optimal random Lévy-looping searching strategies are identified for the location of a single target whose position is uncertain. An inverse-square power law distribution of loop lengths is shown to be optimal when the distance between the centre of the search and the target is much shorter than the size of the longest possible loop in the searching pattern. Optimal random Lévy-looping searching patterns have recently been observed in the flight patterns of honeybees (Apis mellifera) when attempting to locate their hive and when searching after a known food source becomes depleted. It is suggested that the searching patterns of desert ants (Cataglyphis) are consistent with the adoption of an optimal Lévy-looping searching strategy.
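For readers who want to experiment with the loop-length statistics mentioned above, the following sketch draws loop lengths from a truncated inverse-square power law by inverse-CDF sampling; the truncation bounds are illustrative assumptions.

```python
import numpy as np

def sample_loop_lengths(n, l_min=1.0, l_max=1000.0, rng=None):
    """Draw n loop lengths from p(l) ~ l^-2 truncated to [l_min, l_max]."""
    rng = rng or np.random.default_rng()
    u = rng.random(n)
    # Inverse CDF of the truncated l^-2 density.
    inv = 1.0 / l_min - u * (1.0 / l_min - 1.0 / l_max)
    return 1.0 / inv

lengths = sample_loop_lengths(100_000)
print("mean loop length:", lengths.mean())
print("fraction of loops longer than 10*l_min:", np.mean(lengths > 10.0))
```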
Investigation of Optimal Digital Image Correlation Patterns for Deformation Measurement
NASA Technical Reports Server (NTRS)
Bomarito, G. F.; Ruggles, T. J.; Hochhalter, J. D.; Cannon, A. H.
2016-01-01
Digital image correlation (DIC) relies on the surface texture of a specimen to measure deformation. When the specimen itself has little or no texture, a pattern is applied to the surface which deforms with the specimen and acts as an artificial surface texture. Because the applied pattern has an effect on the accuracy of DIC, an ideal pattern is sought for which the error introduced into DIC measurements is minimal. In this work, a study is performed in which several DIC pattern quality metrics from the literature are correlated to DIC measurement error. The resulting correlations give insight into the optimality of DIC patterns in general. Optimizations are then performed to produce patterns which are well suited for DIC. These patterns are tested to show their relative benefits. Chief among these benefits is a reduction in error of approximately 30% with respect to a randomly generated pattern.
NASA Astrophysics Data System (ADS)
Sui, Chaofan; Wang, Kaige; Wang, Shuang; Ren, Junying; Bai, Xiaohong; Bai, Jintao
2016-03-01
Most SERS applications are constrained by heterogeneous hotspots and aggregation of nanostructures, which result in low sensitivity and poor reproducibility of characteristic signals. This work introduces the SERS properties of a type of SERS-active substrate, Au-CuCl2-AAO, which is innovatively developed on a porous anodic aluminum oxide (AAO) template. Spectral measurements of Rhodamine 6G (R6G) on this substrate, optimized by controlling morphology and gold thickness, showed that the enhancement factor (2.30 × 10^7) and detection limit (10^-10 M) were both improved and represented better performance than the template AAO itself. Homogeneous hot spots across the region of interest were demonstrated by scanning the SERS intensity distribution for the band at 1505 cm^-1 over a 5 × 5 μm^2 area. Furthermore, the promising SERS activity of the flower-patterned substrate was theoretically explained through simulation of the electromagnetic field distribution. In addition, this SERS substrate is proposed for applications within the field of chemical and biochemical analyses. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr06771e
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panyala, Ajay; Chavarría-Miranda, Daniel; Manzano, Joseph B.
High performance, parallel applications with irregular data accesses are becoming a critical workload class for modern systems. In particular, the execution of such workloads on emerging many-core systems is expected to be a significant component of applications in data mining, machine learning, scientific computing and graph analytics. However, power and energy constraints limit the capabilities of individual cores, memory hierarchy and on-chip interconnect of such systems, thus leading to architectural and software trade-offs that must be understood in the context of the intended application's behavior. Irregular applications are notoriously hard to optimize given their data-dependent access patterns, lack of structured locality and complex data structures and code patterns. We have ported two irregular applications, graph community detection using the Louvain method (Grappolo) and high-performance conjugate gradient (HPCCG), to the Tilera many-core system and have conducted a detailed study of platform-independent and platform-specific optimizations that improve their performance as well as reduce their overall energy consumption. To conduct this study, we employ an auto-tuning based approach that explores the optimization design space along three dimensions - memory layout schemes, GCC compiler flag choices and OpenMP loop scheduling options. We leverage MIT's OpenTuner auto-tuning framework to explore and recommend energy-optimal choices for different combinations of parameters. We then conduct an in-depth architectural characterization to understand the memory behavior of the selected workloads. Finally, we perform a correlation study to demonstrate the interplay between the hardware behavior and application characteristics. Using auto-tuning, we demonstrate whole-node energy savings and performance improvements of up to 49.6% and 60% relative to a baseline instantiation, and up to 31% and 45.4% relative to manually optimized variants.
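The study above frames tuning as a search over memory layouts, compiler flags, and OpenMP schedules. The sketch below shows the shape of such a search in generic Python (it does not use OpenTuner); the configuration lists and the measurement stub are placeholders for an actual build-run-measure loop with hardware energy counters.

```python
import itertools
import random

# Hypothetical design space: layout scheme, compiler flag set, OpenMP schedule.
LAYOUTS   = ["row-major", "blocked", "struct-of-arrays"]
FLAG_SETS = ["-O2", "-O3", "-O3 -funroll-loops"]
SCHEDULES = ["static", "dynamic,64", "guided"]

def run_and_measure(layout, flags, schedule):
    """Placeholder: build and run the workload, return (energy_J, runtime_s)."""
    return random.uniform(50, 100), random.uniform(1, 2)

# Exhaustively evaluate the small space and keep the energy-optimal variant.
best = min(
    itertools.product(LAYOUTS, FLAG_SETS, SCHEDULES),
    key=lambda cfg: run_and_measure(*cfg)[0],   # minimize measured energy
)
print("energy-optimal configuration:", best)
```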
van Rossum, Huub H; Kemperman, Hans
2017-02-01
To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for the validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combinations of truncation limits, calculation algorithms and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts, the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various biases. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of the bias detection properties of multiple MA procedures. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for the selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for the optimization and validation of MA procedures.
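To make the bias detection curve concept concrete, the following sketch simulates one MA procedure (truncation limits, a moving-average window, and control limits), introduces a fixed bias, and records the median number of results needed before the MA exceeds a control limit. All numeric settings are illustrative, not validated settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def results_to_detection(baseline, bias, trunc=(130, 150), window=20,
                         limits=(138.0, 142.0)):
    """Return the number of results until the biased MA breaches a control limit."""
    buffer = []
    for k, x in enumerate(baseline + bias, start=1):
        if trunc[0] <= x <= trunc[1]:              # truncation step
            buffer.append(x)
        if len(buffer) >= window:
            ma = np.mean(buffer[-window:])
            if ma < limits[0] or ma > limits[1]:
                return k                           # results needed for detection
    return None                                    # bias not detected

baseline = rng.normal(140.0, 2.5, size=5000)       # sodium-like assay results
for bias in (1.0, 2.0, 3.0, 4.0):
    runs = [results_to_detection(rng.permutation(baseline), bias)
            for _ in range(50)]
    detected = [r for r in runs if r is not None]
    median = np.median(detected) if detected else float("nan")
    print(f"bias {bias:+.1f}: median results to detection = {median}")
```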
dos Santos, Bruno César Diniz Brito; Flumignan, Danilo Luiz; de Oliveira, José Eduardo
2012-10-01
A three-step development, optimization and validation strategy is described for gas chromatography (GC) fingerprints of Brazilian commercial diesel fuel. A suitable GC-flame ionization detection (FID) system was selected to assay a complex matrix such as diesel. The next step was to achieve acceptable chromatographic resolution with reduced analysis time, which is recommended for routine applications. Full three-level factorial designs were performed to optimize the flow rate, oven ramps, injection volume and split ratio in the GC system. Finally, several validation parameters were evaluated. The GC fingerprinting can be coupled with pattern recognition and multivariate regression analyses to determine fuel quality and fuel physicochemical parameters. This strategy can also be applied to develop fingerprints for quality control of other fuel types.
NASA Astrophysics Data System (ADS)
Sujono, A.; Santoso, B.; Juwana, W. E.
2016-03-01
Detonation (knock) in Otto (petrol) engines remains an unresolved problem, especially when attempting to improve engine performance. This research processed the sound and vibration signal of an engine, acquired with a microphone sensor, for the detection and identification of detonation. A microphone can be mounted without being attached to the cylinder block, which is at high temperature, so its performance is more stable and durable, and the sensor is inexpensive. However, the analysis method is not straightforward because of substantial noise (interference). Therefore, a new pattern recognition method was used, based on filtering and a normalized-envelope regression function. The result is quite good, achieving a success rate of about 95%.
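A minimal sketch of the general filter-plus-envelope idea for knock detection from a microphone signal is given below; it is not the authors' pattern-recognition pipeline, and the frequency band, synthetic burst, and threshold are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 44_100                                   # microphone sample rate (Hz)
rng = np.random.default_rng(8)
t = np.arange(0, 0.2, 1 / fs)
signal = 0.05 * rng.standard_normal(t.size)   # stand-in for engine sound
signal[2000:2300] += np.sin(2 * np.pi * 8000 * t[2000:2300])  # simulated knock burst

# Isolate the assumed knock band, then demodulate to the signal envelope.
b, a = butter(4, [5000, 15000], btype="bandpass", fs=fs)
band = filtfilt(b, a, signal)
envelope = np.abs(hilbert(band))
envelope /= envelope.max()

# A strong but short-lived envelope burst is flagged as suspected knock.
burst_fraction = np.mean(envelope > 0.5)
knock_detected = 0 < burst_fraction < 0.1
print("knock suspected:", bool(knock_detected))
```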
Fabrication of Homogeneous High-Density Antibody Microarrays for Cytokine Detection
Hospach, Ingeborg; Joseph, Yvonne; Mai, Michaela Kathrin; Krasteva, Nadejda; Nelles, Gabriele
2014-01-01
Cytokine proteins are known as biomarker molecules, characteristic of a disease or specific body condition. Monitoring of the cytokine pattern in body fluids can contribute to the diagnosis of diseases. Here we report on the development of an array comprised of different anti-cytokine antibodies on an activated solid support coupled with a fluorescence readout mechanism. Optimization of the array preparation was done with regard to spot homogeneity and spot size. The proinflammatory cytokines Tumor Necrosis Factor alpha (TNFα) and Interleukin 6 (IL-6) were chosen as the first targets of interest. First, the solid support for covalent antibody immobilization and an adequate fluorescent label were selected. Three differently functionalized glass substrates for spotting were compared: amine and epoxy, both having a two-dimensional structure, and an NHS-functionalized hydrogel (NHS-3D). The NHS-hydrogel functionalization of the substrate was best suited to antibody immobilization. Then, the optimization of plotting parameters and geometry as well as buffer media was investigated, considering the ambient analyte theory of Roger Ekins. As a first step towards real sample studies, a proof of principle of cytokine detection has been established. PMID:27600349
Selective Data Acquisition in NMR. The Quantification of Anti-phase Scalar Couplings
NASA Astrophysics Data System (ADS)
Hodgkinson, P.; Holmes, K. J.; Hore, P. J.
Almost all time-domain NMR experiments employ "linear sampling," in which the NMR response is digitized at equally spaced times, with uniform signal averaging. Here, the possibilities of nonlinear sampling are explored using anti-phase doublets in the indirectly detected dimensions of multidimensional COSY-type experiments as an example. The Cramér-Rao lower bounds are used to evaluate and optimize experiments in which the sampling points, or the extent of signal averaging at each point, or both, are varied. The optimal nonlinear sampling for the estimation of the coupling constant J, by model fitting, turns out to involve just a few key time points, for example, at the first node (t = 1/J) of the sin(πJt) modulation. Such sparse sampling patterns can be used to derive more practical strategies, in which the sampling or the signal averaging is distributed around the most significant time points. The improvements in the quantification of NMR parameters can be quite substantial, especially when, as is often the case for indirectly detected dimensions, the total number of samples is limited by the time available.
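The Cramér-Rao argument above can be reproduced numerically for a simplified anti-phase modulation model: compute the Fisher information for J from the model derivative at the chosen sampling times and compare uniform sampling with samples clustered near the first node t = 1/J. The model, parameter values, and noise level below are illustrative assumptions.

```python
import numpy as np

# Simplified anti-phase modulation model: s(t) = A*sin(pi*J*t)*exp(-t/T2),
# with additive white noise of standard deviation sigma.
A, J, T2, sigma = 1.0, 10.0, 0.2, 0.1

def crlb_J(times):
    """Cramér-Rao lower bound on var(J_hat) for a given set of sampling times."""
    t = np.asarray(times, dtype=float)
    # Analytic derivative of the model with respect to J.
    dsdJ = A * np.pi * t * np.cos(np.pi * J * t) * np.exp(-t / T2)
    fisher = np.sum(dsdJ**2) / sigma**2
    return 1.0 / fisher

n = 64
linear = np.linspace(0, 0.5, n)                      # conventional linear sampling
clustered = 1.0 / J + np.linspace(-0.02, 0.02, n)    # samples around the first node
print("CRLB(J), linear sampling   :", crlb_J(linear))
print("CRLB(J), clustered sampling:", crlb_J(clustered))
```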
Arunachalam, Kavitha; Maccarini, Paolo; De Luca, Valeria; Tognolatti, Piero; Bardati, Fernando; Snow, Brent; Stauffer, Paul
2011-06-01
Microwave (MW) radiometry is proposed for passive monitoring of kidney temperature to detect vesicoureteral reflux (VUR) of urine that is externally heated by a MW hyperthermia device and thereafter reflows from the bladder to the kidneys during reflux. Here, we characterize in tissue-mimicking phantoms the performance of a 1.375 GHz radiometry system connected to an electromagnetically (EM) shielded microstrip log spiral antenna optimized for VUR detection. Phantom EM properties are characterized using a coaxial dielectric probe and network analyzer (NA). Power reflection and receive patterns of the antenna are measured in a layered tissue phantom. Receiver spectral measurements are used to assess the EM shielding provided by a metal cup surrounding the antenna. Radiometer and fiberoptic temperature data are recorded for varying volumes (10-30 mL) and temperatures (40-46°C) of the urine phantom at 35 mm depth surrounded by 36.5°C muscle phantom. A directional receive pattern with about 5% power spectral density at 35 mm target depth and better than -10 dB return loss from the tissue load are measured for the antenna. Antenna measurements demonstrate no deterioration in power reception and effective EM shielding in the presence of the metal cup. Radiometry power measurements are in excellent agreement with the temperature of the kidney phantom. Laboratory testing of the radiometry system in temperature-controlled phantoms supports the feasibility of passive kidney thermometry for VUR detection.
Simultaneous-Fault Diagnosis of Gearboxes Using Probabilistic Committee Machine
Zhong, Jian-Hua; Wong, Pak Kin; Yang, Zhi-Xin
2016-01-01
This study combines signal de-noising, feature extraction, two pairwise-coupled relevance vector machines (PCRVMs) and particle swarm optimization (PSO) for parameter optimization to form an intelligent diagnostic framework for gearbox fault detection. Firstly, the noises of sensor signals are de-noised by using the wavelet threshold method to lower the noise level. Then, the Hilbert-Huang transform (HHT) and energy pattern calculation are applied to extract the fault features from de-noised signals. After that, an eleven-dimension vector, which consists of the energies of nine intrinsic mode functions (IMFs), maximum value of HHT marginal spectrum and its corresponding frequency component, is obtained to represent the features of each gearbox fault. The two PCRVMs serve as two different fault detection committee members, and they are trained by using vibration and sound signals, respectively. The individual diagnostic result from each committee member is then combined by applying a new probabilistic ensemble method, which can improve the overall diagnostic accuracy and increase the number of detectable faults as compared to individual classifiers acting alone. The effectiveness of the proposed framework is experimentally verified by using test cases. The experimental results show the proposed framework is superior to existing single classifiers in terms of diagnostic accuracies for both single- and simultaneous-faults in the gearbox. PMID:26848665
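As an illustration of the committee idea (not the paper's pairwise-coupled RVM ensemble), the sketch below fuses the per-fault probability vectors of two detectors, one assumed to be trained on vibration signals and one on sound, with a simple product-of-experts combination; the fault labels, probabilities, and weights are made up for the example.

```python
import numpy as np

def combine(p_vibration, p_sound, w=(0.5, 0.5)):
    """Fuse two committee members' class-probability vectors and renormalize."""
    p_v = np.asarray(p_vibration, dtype=float)
    p_s = np.asarray(p_sound, dtype=float)
    fused = (p_v ** w[0]) * (p_s ** w[1])      # product-of-experts style fusion
    return fused / fused.sum()

faults  = ["normal", "worn tooth", "broken tooth", "bearing defect"]
p_vib   = [0.10, 0.55, 0.25, 0.10]             # committee member 1 (vibration)
p_sound = [0.05, 0.30, 0.55, 0.10]             # committee member 2 (sound)

fused = combine(p_vib, p_sound)
print(dict(zip(faults, np.round(fused, 3))))
print("diagnosis:", faults[int(np.argmax(fused))])
```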
Library-based illumination synthesis for critical CMOS patterning.
Yu, Jue-Chin; Yu, Peichen; Chao, Hsueh-Yung
2013-07-01
In optical microlithography, the illumination source for critical complementary metal-oxide-semiconductor layers needs to be determined in the early stage of a technology node with very limited design information, leading to simple binary shapes. Recently, the availability of freeform sources permits us to increase pattern fidelity and relax mask complexities with minimal insertion risks to the current manufacturing flow. However, source optimization across many patterns is often treated as a design-of-experiments problem, which may not fully exploit the benefits of a freeform source. In this paper, a rigorous source-optimization algorithm is presented via linear superposition of optimal sources for pre-selected patterns. We show that analytical solutions are made possible by using Hopkins formulation and quadratic programming. The algorithm allows synthesized illumination to be linked with assorted pattern libraries, which has a direct impact on design rule studies for early planning and design automation for full wafer optimization.
Memory monitoring by animals and humans
NASA Technical Reports Server (NTRS)
Smith, J. D.; Shields, W. E.; Allendoerfer, K. R.; Washburn, D. A.; Rumbaugh, D. M. (Principal Investigator)
1998-01-01
The authors asked whether animals and humans would use similarly an uncertain response to escape indeterminate memories. Monkeys and humans performed serial probe recognition tasks that produced differential memory difficulty across serial positions (e.g., primacy and recency effects). Participants were given an escape option that let them avoid any trials they wished and receive a hint to the trial's answer. Across species, across tasks, and even across conspecifics with sharper or duller memories, monkeys and humans used the escape option selectively when more indeterminate memory traces were probed. Their pattern of escaping always mirrored the pattern of their primary memory performance across serial positions. Signal-detection analyses confirm the similarity of the animals' and humans' performances. Optimality analyses assess their efficiency. Several aspects of monkeys' performance suggest the cognitive sophistication of their decisions to escape.
NASA Astrophysics Data System (ADS)
Guler, Seyhmus; Dannhauer, Moritz; Erem, Burak; Macleod, Rob; Tucker, Don; Turovets, Sergei; Luu, Phan; Erdogmus, Deniz; Brooks, Dana H.
2016-06-01
Objective. Transcranial direct current stimulation (tDCS) aims to alter brain function non-invasively via electrodes placed on the scalp. Conventional tDCS uses two relatively large patch electrodes to deliver electrical current to the brain region of interest (ROI). Recent studies have shown that using dense arrays containing up to 512 smaller electrodes may increase the precision of targeting ROIs. However, this creates a need for methods to determine effective and safe stimulus patterns as the number of degrees of freedom is much higher with such arrays. Several approaches to this problem have appeared in the literature. In this paper, we describe a new method for calculating optimal electrode stimulus patterns for targeted and directional modulation in dense array tDCS which differs in some important aspects from methods reported to date. Approach. We optimize the stimulus pattern of dense arrays with fixed electrode placement to maximize the current density in a particular direction in the ROI. We impose a flexible set of safety constraints on the current power in the brain, individual electrode currents, and total injected current, to protect subject safety. The proposed optimization problem is convex and thus efficiently solved using existing optimization software to find unique and globally optimal electrode stimulus patterns. Main results. Solutions for four anatomical ROIs based on a realistic head model are shown as exemplary results. To illustrate the differences between our approach and previously introduced methods, we compare our method with two of the other leading methods in the literature. We also report on extensive simulations that show the effect of the values chosen for each proposed safety constraint bound on the optimized stimulus patterns. Significance. The proposed optimization approach employs volume-based ROIs, easily adapts to different sets of safety constraints, and takes negligible time to compute. An in-depth comparison study gives insight into the relationship between different objective criteria and optimized stimulus patterns. In addition, the analysis of the interaction between optimized stimulus patterns and safety constraint bounds suggests that more precise current localization in the ROI, with improved safety criterion, may be achieved by careful selection of the constraint bounds.
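A hedged sketch of the directional-targeting formulation described above is given below using cvxpy: maximize a linear function of the electrode currents (the directional current density in the ROI under a linear head model) subject to convex safety constraints. The transfer vector, power matrix, and all bounds are random or illustrative placeholders rather than values from a realistic head model.

```python
import cvxpy as cp
import numpy as np

# Placeholder model quantities: 'a' maps electrode currents to current density
# along the desired direction in the ROI; L factors the quadratic form that
# bounds current power in the brain.
rng = np.random.default_rng(3)
n_elec = 128
a = rng.standard_normal(n_elec)
L = rng.standard_normal((n_elec, n_elec)) / np.sqrt(n_elec)

x = cp.Variable(n_elec)                      # per-electrode currents (mA)
constraints = [
    cp.sum(x) == 0,                          # injected and returned currents balance
    cp.abs(x) <= 1.0,                        # individual electrode current bound
    cp.norm1(x) <= 2 * 2.0,                  # total injected current bound
    cp.sum_squares(L @ x) <= 1.0,            # bound on current power in the brain
]
problem = cp.Problem(cp.Maximize(a @ x), constraints)
problem.solve()
print("optimal directional current density (model units):", problem.value)
```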
Jones, Blake C; Lipson, Evan J; Childers, Brandon; Fishman, Elliot K; Johnson, Pamela T
The incidence of melanoma has risen dramatically over the past several decades. Oncologists rely on the ability of radiologists to identify subtle radiographic changes representing metastatic and recurrent melanoma in uncommon locations on multidetector computed tomography (MDCT) as the front-line imaging surveillance tool. To accomplish this goal, MDCT acquisition and display must be optimized and radiologist interpretation and search patterns must be tailored to identify the unique and often subtle metastatic lesions of melanoma. This article describes MDCT acquisition and display techniques that optimize the visibility of melanoma lesions, such as high-contrast display windows and multiplanar reconstructions. In addition, innovative therapies for melanoma, such as immunotherapy and small-molecule therapy, have altered clinical management and outcomes and have also changed the spectrum of therapeutic complications that can be detected on MDCT. Recent advances in melanoma therapy and potential complications that the radiologist can identify on MDCT are reviewed.
Experimental Evaluation of UWB Indoor Positioning for Sport Postures
Defraye, Jense; Steendam, Heidi; Gerlo, Joeri; De Clercq, Dirk; De Poorter, Eli
2018-01-01
Radio frequency (RF)-based indoor positioning systems (IPSs) use wireless technologies (including Wi-Fi, Zigbee, Bluetooth, and ultra-wide band (UWB)) to estimate the location of persons in areas where no Global Positioning System (GPS) reception is available, for example in indoor stadiums or sports halls. Of the above-mentioned forms of radio frequency (RF) technology, UWB is considered one of the most accurate approaches because it can provide positioning estimates with centimeter-level accuracy. However, it is not yet known whether UWB can also offer such accurate position estimates during strenuous dynamic activities in which moves are characterized by fast changes in direction and velocity. To answer this question, this paper investigates the capabilities of UWB indoor localization systems for tracking athletes during their complex (and most of the time unpredictable) movements. To this end, we analyze the impact of on-body tag placement locations and human movement patterns on localization accuracy and communication reliability. Moreover, two localization algorithms (particle filter and Kalman filter) with different optimizations (bias removal, non-line-of-sight (NLoS) detection, and path determination) are implemented. It is shown that although the optimal choice of optimization depends on the type of movement patterns, some of the improvements can reduce the localization error by up to 31%. Overall, depending on the selected optimization and on-body tag placement, our algorithms show good results in terms of positioning accuracy, with average errors in position estimates of 20 cm. This makes UWB a suitable approach for tracking dynamic athletic activities. PMID:29315267
Fourier-transform and global contrast interferometer alignment methods
Goldberg, Kenneth A.
2001-01-01
Interferometric methods are presented to facilitate alignment of image-plane components within an interferometer and for the magnified viewing of interferometer masks in situ. Fourier-transforms are performed on intensity patterns that are detected with the interferometer and are used to calculate pseudo-images of the electric field in the image plane of the test optic where the critical alignment of various components is being performed. Fine alignment is aided by the introduction and optimization of a global contrast parameter that is easily calculated from the Fourier-transform.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathi, Ashish; McNulty, Ian; Munson, Todd
We propose a new approach to robustly retrieve the exit wave of an extended sample from its coherent diffraction pattern by exploiting sparsity of the sample's edges. This approach enables imaging of an extended sample with a single view, without ptychography. We introduce nonlinear optimization methods that promote sparsity, and we derive update rules to robustly recover the sample's exit wave. We test these methods on simulated samples by varying the sparsity of the edge-detected representation of the exit wave. Finally, our tests illustrate the strengths and limitations of the proposed method in imaging extended samples.
NASA Astrophysics Data System (ADS)
Xu, Lili; Luo, Shuqian
2010-11-01
Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role in both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm basically comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs based on a mathematical-morphology black top hat; feature extraction, to characterize these candidates; and classification, based on a support vector machine (SVM), to validate MAs. The selection of the feature vector and of the SVM kernel function is very important to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the discriminating performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic polynomial SVM with a combination of features as the input shows the best discriminating performance.
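The kernel- and feature-selection step described above can be sketched as follows with scikit-learn: train SVMs with different kernels on candidate feature vectors and compare them by ROC AUC. The synthetic features and labels below merely stand in for real microaneurysm candidate descriptors.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for candidate features (shape, intensity, contrast, ...).
rng = np.random.default_rng(4)
X = rng.standard_normal((600, 8))
y = (X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + 0.3 * rng.standard_normal(600)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for kernel, kwargs in [("linear", {}), ("poly", {"degree": 2}), ("rbf", {})]:
    clf = SVC(kernel=kernel, probability=True, **kwargs).fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]
    print(f"{kernel:>6} kernel: ROC AUC = {roc_auc_score(y_te, scores):.3f}")
```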
Vision-based surface defect inspection for thick steel plates
NASA Astrophysics Data System (ADS)
Yun, Jong Pil; Kim, Dongseob; Kim, KyuHwan; Lee, Sang Jun; Park, Chang Hyun; Kim, Sang Woo
2017-05-01
There are several types of steel products, such as wire rods, cold-rolled coils, hot-rolled coils, thick plates, and electrical sheets. Surface stains on cold-rolled coils are considered defects, whereas surface stains on thick plates are not. A conventional optical structure is composed of a camera and a lighting module. A defect inspection system is proposed that uses a dual lighting structure to distinguish uneven defects from color changes caused by surface noise. In addition, an image processing algorithm that can be used to detect defects is presented in this paper. The algorithm consists of a Gabor filter that detects the switching pattern and a binarization method that extracts the shape of the defect. The optics module and detection algorithm, optimized using a simulator, were installed at a real plant, and the experimental results obtained on thick steel plate images from the steel production line show the effectiveness of the proposed method.
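A minimal sketch of the Gabor-filter-plus-binarization step is shown below; the kernel parameters, the synthetic plate image, and the threshold are illustrative assumptions rather than the plant system's settings.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=21, sigma=4.0, theta=np.pi / 2, wavelength=16.0):
    """Zero-mean real Gabor kernel; orientation/wavelength tuned to horizontal banding."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    kernel = envelope * np.cos(2 * np.pi * xr / wavelength)
    return kernel - kernel.mean()            # remove DC so flat regions respond weakly

def detect_defects(image, threshold=0.3):
    """Gabor response magnitude followed by binarization into a defect mask."""
    response = np.abs(fftconvolve(image, gabor_kernel(), mode="same"))
    response /= response.max() + 1e-12
    return response > threshold

rng = np.random.default_rng(5)
img = rng.normal(0.5, 0.02, (128, 128))      # synthetic "clean" plate surface
img[60:68, 20:100] -= 0.2                    # synthetic elongated defect
mask = detect_defects(img)
print("defect pixels flagged:", int(mask.sum()))
```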
Tumor response estimation in radar-based microwave breast cancer detection.
Kurrant, Douglas J; Fear, Elise C; Westwick, David T
2008-12-01
Radar-based microwave imaging techniques have been proposed for early stage breast cancer detection. A considerable challenge for the successful implementation of these techniques is the reduction of clutter, or components of the signal originating from objects other than the tumor. In particular, the reduction of clutter from the late-time scattered fields is required in order to detect small (subcentimeter diameter) tumors. In this paper, a method to estimate the tumor response contained in the late-time scattered fields is presented. The method uses a parametric function to model the tumor response. A maximum a posteriori estimation approach is used to evaluate the optimal values for the estimates of the parameters. A pattern classification technique is then used to validate the estimation. The ability of the algorithm to estimate a tumor response is demonstrated by using both experimental and simulated data obtained with a tissue sensing adaptive radar system.
Development of three-axis inkjet printer for gear sensors
NASA Astrophysics Data System (ADS)
Iba, Daisuke; Rodriguez Lopez, Ricardo; Kamimoto, Takahiro; Nakamura, Morimasa; Miura, Nanako; Iizuka, Takashi; Masuda, Arata; Moriwaki, Ichiro; Sone, Akira
2016-04-01
The long-term objective of our research is to develop sensor systems for the detection of gear failure signs. As a very first step, this paper proposes a new method to create sensors printed directly on gears using a printer and conductive ink, and shows the printing system configuration and the procedure of sensor development. The printer system under development is a laser sintering system consisting of a laser and CNC machinery. The laser is able to synthesize micro conductive patterns and is introduced to the CNC machinery as a tool. In order to synthesize sensors on gears, we first design the micro-circuit pattern on a gear through the use of 3D-CAD, and create a program (G-code) for the CNC machinery by CAM. This paper shows initial experiments with the laser sintering process in order to obtain the optimal parameters for the laser setting. The new method proposed here may provide a new manufacturing process for mechanical parts that have an additional functionality to detect failure, and possible improvements include creating more economical and sustainable systems.
Optimal ciliary beating patterns
NASA Astrophysics Data System (ADS)
Vilfan, Andrej; Osterman, Natan
2011-11-01
We introduce a measure for the energetic efficiency of single or collective biological cilia. We define the efficiency of a single cilium as Q^2/P, where Q is the volume flow rate of the pumped fluid and P is the dissipated power. For ciliary arrays, we define it as (ρQ)^2/(ρP), with ρ denoting the surface density of cilia. We then numerically determine the optimal beating patterns according to this criterion. For a single cilium, optimization leads to curly, somewhat counterintuitive patterns. But when looking at a densely ciliated surface, the optimal patterns become remarkably similar to what is observed in microorganisms like Paramecium. The optimal beating pattern then consists of a fast effective stroke and a slow sweeping recovery stroke. Metachronal waves lead to a significantly higher efficiency than synchronous beating. Efficiency also increases with an increasing density of cilia up to the point where crowding becomes a problem. We finally relate the pumping efficiency of cilia to the swimming efficiency of a spherical microorganism and show that the experimentally estimated efficiency of Paramecium is surprisingly close to the theoretically possible optimum.
Nine-analyte detection using an array-based biosensor
NASA Technical Reports Server (NTRS)
Taitt, Chris Rowe; Anderson, George P.; Lingerfelt, Brian M.; Feldstein, Mark J.; Ligler, Frances S.
2002-01-01
A fluorescence-based multianalyte immunosensor has been developed for simultaneous analysis of multiple samples. While the standard 6 x 6 format of the array sensor has been used to analyze six samples for six different analytes, this same format has the potential to allow a single sample to be tested for 36 different agents. The method described herein demonstrates proof of principle that the number of analytes detectable using a single array can be increased simply by using complementary mixtures of capture and tracer antibodies. Mixtures were optimized to allow detection of closely related analytes without significant cross-reactivity. Following this facile modification of patterning and assay procedures, the following nine targets could be detected in a single 3 x 3 array: Staphylococcal enterotoxin B, ricin, cholera toxin, Bacillus anthracis Sterne, Bacillus globigii, Francisella tularensis LVS, Yersinia pestis F1 antigen, MS2 coliphage, and Salmonella typhimurium. This work maximizes the efficiency and utility of the described array technology, increasing only reagent usage and cost; production and fabrication costs are not affected.
Linear Classifier with Reject Option for the Detection of Vocal Fold Paralysis and Vocal Fold Edema
NASA Astrophysics Data System (ADS)
Kotropoulos, Constantine; Arce, Gonzalo R.
2009-12-01
Two distinct two-class pattern recognition problems are studied, namely, the detection of male subjects who are diagnosed with vocal fold paralysis against male subjects who are diagnosed as normal, and the detection of female subjects who suffer from vocal fold edema against female subjects who do not suffer from any voice pathology. To do so, utterances of the sustained vowel "ah" are employed from the Massachusetts Eye and Ear Infirmary database of disordered speech. Linear prediction coefficients extracted from the aforementioned utterances are used as features. The receiver operating characteristic curve of the linear classifier, which stems from the Bayes classifier when Gaussian class-conditional probability density functions with equal covariance matrices are assumed, is derived. The optimal operating point of the linear classifier is specified with and without the reject option. First results using utterances of the "rainbow passage" are also reported for completeness. The reject option is shown to yield statistically significant improvements in the accuracy of detecting the voice pathologies under study.
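The reject option for a linear (equal-covariance Gaussian) classifier can be sketched as follows: classify by the linear discriminant, but withhold a decision when the larger class posterior falls below a confidence threshold. The synthetic features below stand in for linear prediction coefficients, and the threshold is an illustrative assumption.

```python
import numpy as np

# Synthetic two-class data with a shared covariance (stand-in for LPC features).
rng = np.random.default_rng(6)
mu0, mu1 = np.zeros(4), 0.8 * np.ones(4)
cov = np.eye(4)
X = np.vstack([rng.multivariate_normal(mu0, cov, 200),
               rng.multivariate_normal(mu1, cov, 200)])
y = np.repeat([0, 1], 200)

# Bayes rule under equal covariances and equal priors is linear in x.
cov_inv = np.linalg.inv(cov)
w = cov_inv @ (mu1 - mu0)                         # linear discriminant weights
b = -0.5 * (mu1 + mu0) @ w                        # decision threshold

def classify_with_reject(x, reject_below=0.7):
    score = x @ w + b
    p1 = 1.0 / (1.0 + np.exp(-score))             # posterior P(class 1 | x)
    if max(p1, 1.0 - p1) < reject_below:
        return "reject"                            # withhold an uncertain decision
    return int(p1 >= 0.5)

decisions = [classify_with_reject(x) for x in X]
rejected = decisions.count("reject")
accepted = [(d, t) for d, t in zip(decisions, y) if d != "reject"]
acc = np.mean([d == t for d, t in accepted])
print(f"rejected {rejected}/{len(X)}; accuracy on accepted = {acc:.3f}")
```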
NASA Astrophysics Data System (ADS)
Ballora, Mark; Hall, David L.
2010-04-01
Detection of intrusions is a continuing problem in network security. Due to the large volumes of data recorded in Web server logs, analysis is typically forensic, taking place only after a problem has occurred. This paper describes a novel method of representing Web log information through multi-channel sound, while simultaneously visualizing network activity using a 3-D immersive environment. We are exploring the detection of intrusion signatures and patterns, utilizing human aural and visual pattern recognition ability to detect intrusions as they occur. IP addresses and return codes are mapped to an informative and unobtrusive listening environment to act as a situational sound track of Web traffic. Web log data is parsed and formatted using Python, then read as a data array by the synthesis language SuperCollider [1], which renders it as a sonification. This can be done either for the study of pre-existing data sets or in monitoring Web traffic in real time. Components rendered aurally include IP address, geographical information, and server Return Codes. Users can interact with the data, speeding or slowing the speed of representation (for pre-existing data sets) or "mixing" sound components to optimize intelligibility for tracking suspicious activity.
Huang, Ning; Wang, Hong Ying; Lin, Tao; Liu, Qi Ming; Huang, Yun Feng; Li, Jian Xiong
2016-10-01
Watershed landscape pattern regulation and optimization based on 'source-sink' theory for non-point source pollution control is a cost-effective measure that is still in the exploratory stage. Taking the whole watershed as the research object, and building on landscape ecology, related theories and existing research results, a regulation framework of watershed landscape pattern for non-point source pollution control was developed at two levels based on 'source-sink' theory in this study: 1) at the watershed level, a reasonable basic combination and spatial pattern of 'source-sink' landscape was analyzed, and a holistic regulation and optimization method for the landscape pattern was then constructed; 2) at the landscape patch level, key 'source' landscape was taken as the focus of regulation and optimization. Firstly, four identification criteria for key 'source' landscape were developed, including landscape pollutant loading per unit area, landscape slope, long and narrow transfer 'source' landscape, and pollutant loading per unit length of 'source' landscape along the riverbank. Secondly, nine types of regulation and optimization methods for different key 'source' landscapes in rural and urban areas were established, according to three regulation and optimization rules: inlaying 'sink' landscape, supplementing banding 'sink' landscape, and enhancing the pollutant capacity of the original 'sink' landscape. Finally, the regulation framework was applied to the watershed of Maluan Bay in Xiamen City. A holistic regulation and optimization mode of the watershed landscape pattern of Maluan Bay and key 'source' landscape regulation and optimization measures for the three zones were developed, based on GIS technology, remote sensing images and a DEM model.
A reconfigurable image tube using an external electronic image readout
NASA Astrophysics Data System (ADS)
Lapington, J. S.; Howorth, J. R.; Milnes, J. S.
2005-08-01
We have designed and built a sealed tube microchannel plate (MCP) intensifier for optical/NUV photon counting applications suitable for 18, 25 and 40 mm diameter formats. The intensifier uses an electronic image readout to provide direct conversion of event position into electronic signals, without the drawbacks associated with phosphor screens and subsequent optical detection. The Image Charge technique is used to remove the readout from the intensifier vacuum enclosure, obviating the requirement for additional electrical vacuum feedthroughs and for the readout pattern to be UHV compatible. The charge signal from an MCP intensifier is capacitively coupled via a thin dielectric vacuum window to the electronic image readout, which is external to the sealed intensifier tube. The readout pattern is a separate item held in proximity to the dielectric window and can be easily detached, making the system easily reconfigurable. Since the readout pattern detects induced charge and is external to the tube, it can be constructed as a multilayer, eliminating the requirement for narrow insulator gaps and allowing it to be constructed using standard PCB manufacturing tolerances. We describe two readout patterns, the tetra wedge anode (TWA), an optimized 4 electrode device similar to the wedge and strip anode (WSA) but with a factor 2 improvement in resolution, and an 8 channel high speed 50 ohm device, both manufactured as multilayer PCBs. We present results of the detector imaging performance, image resolution, linearity and stability, and discuss the development of an integrated readout and electronics device based on these designs.
A random approach of test macro generation for early detection of hotspots
NASA Astrophysics Data System (ADS)
Lee, Jong-hyun; Kim, Chin; Kang, Minsoo; Hwang, Sungwook; Yang, Jae-seok; Harb, Mohammed; Al-Imam, Mohamed; Madkour, Kareem; ElManhawy, Wael; Kwan, Joe
2016-03-01
Multiple-Patterning Technology (MPT) is still the preferred choice over EUV for the advanced technology nodes, starting from the 20nm node. On the way down to the 7nm and 5nm nodes, Self-Aligned Multiple Patterning (SAMP) appears to be one of the effective multiple patterning techniques in terms of achieving small pitch of printed lines on wafer, yet its yield is in question. Predicting and enhancing the yield in the early stages of technology development are some of the main objectives for creating test macros on test masks. While conventional yield ramp techniques for a new technology node have relied on using designs from previous technology nodes as a starting point to identify patterns for Design of Experiment (DoE) creation, these techniques are challenging to apply in the case of introducing an MPT technique like SAMP that did not exist in previous nodes. This paper presents a new strategy for generating test structures based on random placement of unit patterns that can be combined into larger, more meaningful patterns. Specifications governing the relationships between those unit patterns can be adjusted to generate layout clips that look like realistic SAMP designs. A via chain can be constructed to connect the random DoE of SAMP structures through a routing layer to external pads for electrical measurement. These clips are decomposed according to the decomposition rules of the technology into the appropriate mandrel and cut masks. The decomposed clips can be tested through simulations, or electrically on silicon, to discover hotspots. The hotspots can be used in optimizing the fabrication process and models to fix them. They can also be used as learning patterns for DFM deck development. By expanding the size of the randomly generated test structures, more hotspots can be detected. This should provide a faster way to enhance the yield of a new technology node.
Optimal background matching camouflage.
Michalis, Constantine; Scott-Samuel, Nicholas E; Gibson, David P; Cuthill, Innes C
2017-07-12
Background matching is the most familiar and widespread camouflage strategy: avoiding detection by having a similar colour and pattern to the background. Optimizing background matching is straightforward in a homogeneous environment, or when the habitat has very distinct sub-types and there is divergent selection leading to polymorphism. However, most backgrounds have continuous variation in colour and texture, so what is the best solution? Not all samples of the background are likely to be equally inconspicuous, and laboratory experiments on birds and humans support this view. Theory suggests that the most probable background sample (in the statistical sense), at the size of the prey, would, on average, be the most cryptic. We present an analysis, based on realistic assumptions about low-level vision, that estimates the distribution of background colours and visual textures, and predicts the best camouflage. We present data from a field experiment that tests and supports our predictions, using artificial moth-like targets under bird predation. Additionally, we present analogous data for humans, under tightly controlled viewing conditions, searching for targets on a computer screen. These data show that, in the absence of predator learning, the best single camouflage pattern for heterogeneous backgrounds is the most probable sample. © 2017 The Authors.
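A rough sketch of the 'most probable sample' idea: summarize each prey-sized background patch with simple colour and texture statistics, fit a Gaussian to those statistics, and pick the patch with the smallest Mahalanobis distance (highest probability density). The features and the synthetic background below are illustrative; the paper's analysis uses a more realistic low-level vision model.

```python
import numpy as np

def patch_features(patch):
    """Simple luminance/texture statistics for one background patch."""
    return np.array([patch.mean(), patch.std(),
                     np.abs(np.diff(patch, axis=0)).mean(),   # vertical texture
                     np.abs(np.diff(patch, axis=1)).mean()])  # horizontal texture

rng = np.random.default_rng(7)
background = rng.normal(0.5, 0.15, (256, 256))       # stand-in background image
size = 16                                            # "prey"-sized patch

patches, feats = [], []
for i in range(0, 256 - size, size):
    for j in range(0, 256 - size, size):
        patches.append((i, j))
        feats.append(patch_features(background[i:i + size, j:j + size]))
F = np.array(feats)

# Fit a Gaussian to the patch statistics and pick the highest-density patch.
mean, cov = F.mean(axis=0), np.cov(F, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-9 * np.eye(F.shape[1]))
mahalanobis = np.einsum("ij,jk,ik->i", F - mean, cov_inv, F - mean)
best = patches[int(np.argmin(mahalanobis))]
print("most probable patch (top-left corner):", best)
```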
Desorption of micropollutant from spent carbon filters used for water purifier.
Kwon, Da-Sol; Tak, So-Yeon; Lee, Jung-Eun; Kim, Moon-Kyung; Lee, Young Hwa; Han, Doo Won; Kang, Sanghyeon; Zoh, Kyung-Duk
2017-07-01
In this study, to examine the micropollutants accumulated in spent carbon filters used in water purifiers, a method to desorb micropollutants from activated carbon was first developed and optimized. Then, using these optimized desorption conditions, we examined which micropollutants were present in spent carbon filters collected from houses in different regions of Korea where water purifiers were used. A total of 11 micropollutants (caffeine (CFF), acetaminophen (ACT), sulfamethazine (SMA), sulfamethoxazole (SMZ), metoprolol (MTP), carbamazepine (CBM), naproxen (NPX), bisphenol-A (BPA), ibuprofen (IBU), diclofenac (DCF), and triclocarban (TCB)) were analyzed using LC/MS-MS from the spent carbon filters. CFF, NPX, and DCF had the highest detection frequencies (>60%) in the carbon filters (n = 100), whereas SMA, SMZ, and MTP were detected only in the carbon filters, but not in the tap waters (n = 25), indicating that these micropollutants, present below the detection limit in tap water, were accumulated in the carbon filters. The regional micropollutant detection patterns in the carbon filters showed higher levels of micropollutants, especially NPX, BPA, IBU, and DCF, in carbon filters collected in the Han River and Nakdong River basins where large cities exist. The levels of micropollutants in the carbon filters were generally lower in regions where advanced oxidation processes (AOPs) were employed at nearby water treatment plants (WTPs), indicating that the AOP process in WTPs is quite effective in removing micropollutants. Our results suggest that desorption of micropollutants from used carbon filters can be a tool to identify micropollutants present in tap water at trace amounts or below the detection limit.
A computational method for optimizing fuel treatment locations
Mark A. Finney
2006-01-01
Modeling and experiments have suggested that spatial fuel treatment patterns can influence the movement of large fires. On simple theoretical landscapes consisting of two fuel types (treated and untreated) optimal patterns can be analytically derived that disrupt fire growth efficiently (i.e. with less area treated than random patterns). Although conceptually simple,...
Optimization of an acoustic telemetry array for detecting transmitter-implanted fish
Clements, S.; Jepsen, D.; Karnowski, M.; Schreck, C.B.
2005-01-01
The development of miniature acoustic transmitters and economical, robust automated receivers has enabled researchers to study the movement patterns and survival of teleosts in estuarine and ocean environments, including many species and age-classes that were previously considered too small for implantation. During 2001-2003, we optimized a receiver mooring system to minimize gear and data loss in areas where current or wave action and acoustic noise are high. In addition, we conducted extensive tests to determine (1) the performance of a transmitter and receiver (Vemco, Ltd.) that are widely used, particularly in North America and Europe, and (2) the optimal placement of receivers for recording the passage of fish past a point in a linear-flow environment. Our results suggest that in most locations the mooring system performs well with little loss of data; however, boat traffic remains a concern due to entanglement with the mooring system. We also found that the reception efficiency of the receivers depends largely on the method and location of deployment. In many cases, we observed a range of 0-100% reception efficiency (the percentage of known transmissions that are detected while the receiver is within range of the transmitter) when using a conventional method of mooring. The efficiency was improved by removal of the mounting bar and obstructions from the mooring line. © Copyright by the American Fisheries Society 2005.
The time course of cancer detection performance
NASA Astrophysics Data System (ADS)
Taylor-Phillips, Sian; Clarke, Aileen; Wallis, Matthew; Wheaton, Margot; Duncan, Alison; Gale, Alastair G.
2011-03-01
The purpose of this study was to measure how mammography readers' performance varies with time of day and time spent reading. This was investigated in screening practice and when reading an enriched case set. In screening practice, the time and date at which each case was read, along with the outcome (whether the woman was recalled for further tests, and biopsy results where performed), were extracted from the records of one breast screening centre in the UK (4 readers). Patterns of performance with time spent reading were also measured using an enriched test set (160 cases, 41% malignant, read three times by eight radiologists). Recall rates varied with time of day, with different patterns for each reader. Recall rates decreased as the reading session progressed, both when reading the enriched test set and in screening practice. Further work is needed to expand this work to a greater number of breast screening centres, and to determine whether these patterns of performance over time can be used to optimize overall performance.
Monazah, A; Zeinoddini, M; Saeeidinia, A R
2017-08-01
Coxsackievirus B3 (CVB3) is a member of the genus Enterovirus within the family Picornaviridae and is an important pathogen of viral myocarditis, accounting for more than 50% of viral myocarditis cases. VP1 is the major capsid protein, and this region has low homology in both amino acid and nucleotide sequences among enteroviruses; we therefore chose this region to design a set of RT-LAMP primers for CVB3 detection. Total RNA was extracted from HeLa cells 24 h post infection showing complete cytopathic effect (CPE), and applied to a one-step reverse transcription loop-mediated isothermal amplification (RT-LAMP) reaction using CVB3-specific primers. The RT-LAMP reaction was optimized with respect to three variable factors: MgSO4 concentration, incubation temperature, and incubation time. Amplification was analyzed using 2% agarose gel electrophoresis with ethidium bromide and SYBR Green staining. Our results showed the ladder-like pattern of VP1 gene amplification. The LAMP reaction mix was optimized, and the best result was observed at 4 mM MgSO4 and 60°C for a 90-min incubation. RT-LAMP had high sensitivity and specificity for detection of CVB3 infection. This method can be used as a rapid and easy diagnostic test for detection of CVB3 in clinical laboratories. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sierra-Pérez, Julián; Torres-Arredondo, M.-A.; Alvarez-Montoya, Joham
2018-01-01
Structural health monitoring consists of using sensors integrated within structures, together with algorithms, to perform load monitoring, damage detection, damage location, damage size and severity assessment, and prognosis. One possibility is to use strain sensors to infer structural integrity by comparing patterns in the strain field between the pristine and damaged conditions. In previous works, the authors have demonstrated that it is possible to detect small defects based on strain field pattern recognition by using robust machine learning techniques. They have focused on methodologies based on principal component analysis (PCA) and on the development of several unfolding and standardization techniques, which allow dealing with multiple load conditions. However, before a real implementation of this approach in engineering structures, changes in the strain field due to conditions other than damage occurrence need to be isolated. Since load conditions may vary in most engineering structures and promote significant changes in the strain field, it is necessary to implement novel techniques for uncoupling such changes from those produced by damage occurrence. A damage detection methodology based on optimal baseline selection (OBS) by means of clustering techniques is presented. The methodology includes the use of hierarchical nonlinear PCA as a nonlinear modeling technique in conjunction with Q and nonlinear-T2 damage indices. The methodology is experimentally validated using strain measurements obtained by 32 fiber Bragg grating sensors bonded to an aluminum beam under dynamic bending loads and simultaneously subjected to variations in its pitch angle. The results demonstrated the capability of the methodology to cluster data according to 13 different load conditions (pitch angles), perform the OBS, and detect six different damage states induced cumulatively. The proposed methodology showed a true positive rate of 100% and a false positive rate of 1.28% at a 99% confidence level.
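A minimal sketch of the PCA-based Q and T2 damage indices mentioned above is given below, assuming linear PCA on simulated strain snapshots; the hierarchical nonlinear PCA, unfolding, and optimal-baseline clustering of the actual methodology are not reproduced.

```python
import numpy as np

def fit_pca_baseline(X, n_comp=3):
    """Fit a linear PCA baseline on pristine strain snapshots X (samples x sensors)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_comp].T                        # loading matrix
    lam = (s[:n_comp] ** 2) / (len(X) - 1)   # retained eigenvalues
    return mu, P, lam

def damage_indices(x, mu, P, lam):
    """Q (squared prediction error) and T^2 indices for one strain snapshot x."""
    xc = x - mu
    t = P.T @ xc                             # scores in the retained subspace
    x_hat = P @ t
    Q = float(np.sum((xc - x_hat) ** 2))
    T2 = float(np.sum(t ** 2 / lam))
    return Q, T2

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pristine = rng.normal(size=(200, 32))           # 32 simulated strain sensors
    mu, P, lam = fit_pca_baseline(pristine)
    healthy = rng.normal(size=32)
    damaged = healthy + 0.5 * np.eye(32)[5]         # local strain change at sensor 5
    print("healthy:", damage_indices(healthy, mu, P, lam))
    print("damaged:", damage_indices(damaged, mu, P, lam))
```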
Layout optimization of DRAM cells using rigorous simulation model for NTD
NASA Astrophysics Data System (ADS)
Jeon, Jinhyuck; Kim, Shinyoung; Park, Chanha; Yang, Hyunjo; Yim, Donggyu; Kuechler, Bernd; Zimmermann, Rainer; Muelders, Thomas; Klostermann, Ulrich; Schmoeller, Thomas; Do, Mun-hoe; Choi, Jung-Hoe
2014-03-01
DRAM chip space is mainly determined by the size of the memory cell array patterns which consist of periodic memory cell features and edges of the periodic array. Resolution Enhancement Techniques (RET) are used to optimize the periodic pattern process performance. Computational Lithography such as source mask optimization (SMO) to find the optimal off axis illumination and optical proximity correction (OPC) combined with model based SRAF placement are applied to print patterns on target. For 20nm Memory Cell optimization we see challenges that demand additional tool competence for layout optimization. The first challenge is a memory core pattern of brick-wall type with a k1 of 0.28, so it allows only two spectral beams to interfere. We will show how to analytically derive the only valid geometrically limited source. Another consequence of two-beam interference limitation is a "super stable" core pattern, with the advantage of high depth of focus (DoF) but also low sensitivity to proximity corrections or changes of contact aspect ratio. This makes an array edge correction very difficult. The edge can be the most critical pattern since it forms the transition from the very stable regime of periodic patterns to non-periodic periphery, so it combines the most critical pitch and highest susceptibility to defocus. Above challenge makes the layout correction to a complex optimization task demanding a layout optimization that finds a solution with optimal process stability taking into account DoF, exposure dose latitude (EL), mask error enhancement factor (MEEF) and mask manufacturability constraints. This can only be achieved by simultaneously considering all criteria while placing and sizing SRAFs and main mask features. The second challenge is the use of a negative tone development (NTD) type resist, which has a strong resist effect and is difficult to characterize experimentally due to negative resist profile taper angles that perturb CD at bottom characterization by scanning electron microscope (SEM) measurements. High resist impact and difficult model data acquisition demand for a simulation model that hat is capable of extrapolating reliably beyond its calibration dataset. We use rigorous simulation models to provide that predictive performance. We have discussed the need of a rigorous mask optimization process for DRAM contact cell layout yielding mask layouts that are optimal in process performance, mask manufacturability and accuracy. In this paper, we have shown the step by step process from analytical illumination source derivation, a NTD and application tailored model calibration to layout optimization such as OPC and SRAF placement. Finally the work has been verified with simulation and experimental results on wafer.
Optimized temporal pattern of brain stimulation designed by computational evolution
Brocker, David T.; Swan, Brandon D.; So, Rosa Q.; Turner, Dennis A.; Gross, Robert E.; Grill, Warren M.
2017-01-01
Brain stimulation is a promising therapy for several neurological disorders, including Parkinson’s disease. Stimulation parameters are selected empirically and are limited to the frequency and intensity of stimulation. We used the temporal pattern of stimulation as a novel parameter of deep brain stimulation to ameliorate symptoms in a parkinsonian animal model and in humans with Parkinson’s disease. We used model-based computational evolution to optimize the stimulation pattern. The optimized pattern produced symptom relief comparable to that from standard high-frequency stimulation (a constant rate of 130 or 185 Hz) and outperformed frequency-matched standard stimulation in the parkinsonian rat and in patients. Both optimized and standard stimulation suppressed abnormal oscillatory activity in the basal ganglia of rats and humans. The results illustrate the utility of model-based computational evolution to design temporal pattern of stimulation to increase the efficiency of brain stimulation in Parkinson’s disease, thereby requiring substantially less energy than traditional brain stimulation. PMID:28053151
NASA Technical Reports Server (NTRS)
Laird, Philip
1992-01-01
We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.
Design and development of a microfluidic platform for use with colorimetric gold nanoprobe assays
NASA Astrophysics Data System (ADS)
Bernacka-Wojcik, Iwona
Due to the importance and wide applications of the DNA analysis, there is a need to make genetic analysis more available and more affordable. As such, the aim of this PhD thesis is to optimize a colorimetric DNA biosensor based on gold nanoprobes developed in CEMOP by reducing its price and the needed volume of solution without compromising the device sensitivity and reliability, towards the point of care use. Firstly, the price of the biosensor was decreased by replacing the silicon photodetector by a low cost, solution processed TiO2 photodetector. To further reduce the photodetector price, a novel fabrication method was developed: a cost-effective inkjet printing technology that enabled to increase TiO2 surface area. Secondly, the DNA biosensor was optimized by means of microfluidics that offer advantages of miniaturization, much lower sample/reagents consumption, enhanced system performance and functionality by integrating different components. In the developed microfluidic platform, the optical path length was extended by detecting along the channel and the light was transmitted by optical fibres enabling to guide the light very close to the analysed solution. Microfluidic chip of high aspect ratio ( 13), smooth and nearly vertical sidewalls was fabricated in PDMS using a SU-8 mould for patterning. The platform coupled to the gold nanoprobe assay enabled detection of Mycobacterium tuberculosis using 3 mul on DNA solution, i.e. 20 times less than in the previous state-of-the-art. Subsequently, the bio-microfluidic platform was optimized in terms of cost, electrical signal processing and sensitivity to colour variation, yielding 160% improvement of colorimetric AuNPs analysis. Planar microlenses were incorporated to converge light into the sample and then to the output fibre core increasing 6 times the signal-to-losses ratio. The optimized platform enabled detection of single nucleotide polymorphism related with obesity risk (FTO) using target DNA concentration below the limit of detection of the conventionally used microplate reader (i.e. 15 ng/mul) with 10 times lower solution volume (3 mul). The combination of the unique optical properties of gold nanoprobes with microfluidic platform resulted in sensitive and accurate sensor for single nucleotide polymorphism detection operating using small volumes of solutions and without the need for substrate functionalization or sophisticated instrumentation. Simultaneously, to enable on chip reagents mixing, a PDMS micromixer was developed and optimized for the highest efficiency, low pressure drop and short mixing length. The optimized device shows 80% of mixing efficiency at Re = 0.1 in 2.5 mm long mixer with the pressure drop of 6 Pa, satisfying requirements for the application in the microfluidic platform for DNA analysis.
Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.
Kim, Sehwi; Jung, Inkyung
2017-01-01
The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
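A schematic sketch of the Gini-coefficient selection step is shown below: for each candidate maximum reported cluster size, a Gini coefficient is computed over the case counts of the clusters reported under that size, and the size with the largest coefficient is kept. The cluster-reporting function and the numbers are placeholders; in a real analysis they would come from the ordinal spatial scan statistic.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a set of non-negative values."""
    v = np.sort(np.asarray(values, dtype=float))
    n = len(v)
    if n == 0 or v.sum() == 0:
        return 0.0
    cum = np.cumsum(v)
    # Standard Lorenz-curve based formula
    return float((n + 1 - 2 * np.sum(cum) / cum[-1]) / n)

def choose_max_reported_size(candidate_sizes, report_clusters):
    """`report_clusters(max_size)` is assumed to return the observed case counts
    of the significant, non-overlapping clusters reported under that maximum
    size (here mocked; in practice this comes from the scan statistic)."""
    scores = {s: gini(report_clusters(s)) for s in candidate_sizes}
    best = max(scores, key=scores.get)
    return best, scores

if __name__ == "__main__":
    # Toy stand-in for scan-statistic output at three candidate maximum sizes.
    mock = {0.10: [30, 25, 20], 0.25: [55, 20], 0.50: [75]}
    best, scores = choose_max_reported_size(mock.keys(), lambda s: mock[s])
    print("selected maximum reported cluster size:", best, scores)
```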
Nuclear fuel management optimization using genetic algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1995-07-01
The code independent genetic algorithm reactor optimization (CIGARO) system has been developed to optimize nuclear reactor loading patterns. It uses genetic algorithms (GAs) and a code-independent interface, so any reactor physics code (e.g., CASMO-3/SIMULATE-3) can be used to evaluate the loading patterns. The system is compared to other GA-based loading pattern optimizers. Tests were carried out to maximize the beginning-of-cycle k_eff for a pressurized water reactor core loading with a penalty function to limit power peaking. The CIGARO system performed well, increasing the k_eff after lowering the peak power. Tests of a prototype parallel evaluation method showed the potential for a significant speedup.
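The following toy genetic-algorithm loop illustrates the kind of search CIGARO performs, with a made-up `evaluate_loading` function standing in for the external reactor physics code and a simple penalty for power peaking; the encoding, operators and constants are illustrative only.

```python
import random

N_POSITIONS = 20          # core positions in the toy problem
FUEL_TYPES = [1, 2, 3]    # hypothetical enrichment classes

def evaluate_loading(pattern):
    """Stand-in for a reactor physics code (e.g. CASMO-3/SIMULATE-3):
    returns a surrogate (k_eff, peak_power) for a loading pattern."""
    k_eff = 0.9 + 0.01 * sum(pattern) / len(pattern)
    peak = 1.0 + 0.4 * max(pattern) / (1 + min(pattern))
    return k_eff, peak

def fitness(pattern, peak_limit=1.5, penalty=0.05):
    k_eff, peak = evaluate_loading(pattern)
    return k_eff - penalty * max(0.0, peak - peak_limit)

def genetic_search(pop_size=40, generations=60, p_mut=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(FUEL_TYPES) for _ in range(N_POSITIONS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_POSITIONS)
            child = a[:cut] + b[cut:]                    # one-point crossover
            child = [rng.choice(FUEL_TYPES) if rng.random() < p_mut else g
                     for g in child]                     # mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return best, fitness(best)

if __name__ == "__main__":
    best, score = genetic_search()
    print("best loading pattern:", best, "fitness:", round(score, 4))
```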
An improved genetic algorithm for designing optimal temporal patterns of neural stimulation
NASA Astrophysics Data System (ADS)
Cassar, Isaac R.; Titus, Nathan D.; Grill, Warren M.
2017-12-01
Objective. Electrical neuromodulation therapies typically apply constant-frequency stimulation, but non-regular temporal patterns of stimulation may be more effective and more efficient. However, the design space for temporal patterns is exceedingly large, and model-based optimization is required for pattern design. We designed and implemented a modified genetic algorithm (GA) intended for designing optimal temporal patterns of electrical neuromodulation. Approach. We tested and modified standard GA methods for application to designing temporal patterns of neural stimulation. We evaluated each modification individually and all modifications collectively by comparing performance to the standard GA across three test functions and two biophysically-based models of neural stimulation. Main results. The proposed modifications of the GA significantly improved performance across the test functions and performed best when all were used collectively. The standard GA found patterns that outperformed fixed-frequency, clinically-standard patterns in biophysically-based models of neural stimulation, but the modified GA, in many fewer iterations, consistently converged to higher-scoring, non-regular patterns of stimulation. Significance. The proposed improvements to standard GA methodology reduced the number of iterations required for convergence and identified superior solutions.
Dip-pen nanopatterning of photosensitive conducting polymer using a monomer ink
NASA Astrophysics Data System (ADS)
Su, Ming; Aslam, Mohammed; Fu, Lei; Wu, Nianqiang; Dravid, Vinayak P.
2004-05-01
Controlled patterning of conducting polymers at the micro- or nanoscale is the first step towards the fabrication of miniaturized functional devices. Here, we introduce an approach for the nanopatterning of conducting polymers using an improved monomer "ink" in dip-pen nanolithography (DPN). The nominal monomer "ink" is converted, in situ, to its conducting solid-state polymeric form after being patterned. Proof-of-concept experiments have been performed with acid-promoted polymerization of pyrrole in a less reactive environment (tetrahydrofuran). The ratios of reactants are optimized to give an appropriate rate to match the operation of DPN. A similar synthesis process for the same polymer in its bulk form shows high conductance and a crystalline structure. Miniaturized conducting polymer sensors with light detection ability are fabricated by DPN using the improved ink formula, and exhibit excellent response, recovery, and sensitivity parameters.
NASA Technical Reports Server (NTRS)
Nebenfuhr, A.; Lomax, T. L.
1998-01-01
We have developed an improved method for determination of gene expression levels with RT-PCR. The procedure is rapid and does not require extensive optimization or densitometric analysis. Since the detection of individual transcripts is PCR-based, small amounts of tissue samples are sufficient for the analysis of expression patterns in large gene families. Using this method, we were able to rapidly screen nine members of the Aux/IAA family of auxin-responsive genes and identify those genes which vary in message abundance in a tissue- and light-specific manner. While not offering the accuracy of conventional semi-quantitative or competitive RT-PCR, our method allows quick screening of large numbers of genes in a wide range of RNA samples with just a thermal cycler and standard gel analysis equipment.
Optimization of algorithm of coding of genetic information of Chlamydia
NASA Astrophysics Data System (ADS)
Feodorova, Valentina A.; Ulyanov, Sergey S.; Zaytsev, Sergey S.; Saltykov, Yury V.; Ulianova, Onega V.
2018-04-01
A new method of coding genetic information using coherent optical fields is developed. A universal technique for transforming the nucleotide sequence of a bacterial gene into a laser speckle pattern is suggested. Reference speckle patterns are generated for the nucleotide sequences of the omp1 gene of typical wild strains of Chlamydia trachomatis of genovars D, E, F, G, J and K, as well as Chlamydia psittaci serovar I. The algorithm for coding gene information into a speckle pattern is optimized. Fully developed speckles with Gaussian statistics have been used as the optimization criterion for the gene-based speckles.
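One possible way to realize a gene-to-speckle mapping of the kind described above is sketched below: nucleotides are mapped to phases on a two-dimensional screen and a far-field speckle intensity is obtained with an FFT. The phase assignment, diffuser term and propagation model are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

# Hypothetical nucleotide-to-phase mapping
PHASE = {"A": 0.0, "C": 0.5 * np.pi, "G": np.pi, "T": 1.5 * np.pi}

def sequence_to_speckle(seq, size=64, seed=0):
    """Map a nucleotide sequence to a 2-D phase screen and simulate a far-field
    speckle intensity pattern with an FFT (free-space propagation of a
    unit-amplitude coherent field is the modelling assumption here)."""
    rng = np.random.default_rng(seed)
    phases = np.array([PHASE[b] for b in seq.upper() if b in PHASE])
    screen = np.resize(phases, (size, size))                     # tile sequence onto a grid
    screen = screen + 0.1 * rng.standard_normal((size, size))    # weak diffuser term
    field = np.exp(1j * screen)
    far_field = np.fft.fftshift(np.fft.fft2(field))
    intensity = np.abs(far_field) ** 2
    return intensity / intensity.max()

if __name__ == "__main__":
    speckle = sequence_to_speckle("ATGCGTACGTTAGC" * 20)
    # Fully developed speckle should show strong intensity fluctuations:
    print("mean/std of normalized intensity:", speckle.mean(), speckle.std())
```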
Dounskaia, Natalia; Shimansky, Yury
2016-06-01
Optimality criteria underlying organization of arm movements are often validated by testing their ability to adequately predict hand trajectories. However, kinematic redundancy of the arm allows production of the same hand trajectory through different joint coordination patterns. We therefore consider movement optimality at the level of joint coordination patterns. A review of studies of multi-joint movement control suggests that a 'trailing' pattern of joint control is consistently observed during which a single ('leading') joint is rotated actively and interaction torque produced by this joint is the primary contributor to the motion of the other ('trailing') joints. A tendency to use the trailing pattern whenever the kinematic redundancy is sufficient and increased utilization of this pattern during skillful movements suggests optimality of the trailing pattern. The goal of this study is to determine the cost function minimization of which predicts the trailing pattern. We show that extensive experimental testing of many known cost functions cannot successfully explain optimality of the trailing pattern. We therefore propose a novel cost function that represents neural effort for joint coordination. That effort is quantified as the cost of neural information processing required for joint coordination. We show that a tendency to reduce this 'neurocomputational' cost predicts the trailing pattern and that the theoretically developed predictions fully agree with the experimental findings on control of multi-joint movements. Implications for future research of the suggested interpretation of the trailing joint control pattern and the theory of joint coordination underlying it are discussed.
On-line Machine Learning and Event Detection in Petascale Data Streams
NASA Astrophysics Data System (ADS)
Thompson, David R.; Wagstaff, K. L.
2012-01-01
Traditional statistical data mining involves off-line analysis in which all data are available and equally accessible. However, petascale datasets have challenged this premise since it is often impossible to store, let alone analyze, the relevant observations. This has led the machine learning community to investigate adaptive processing chains where data mining is a continuous process. Here pattern recognition permits triage and followup decisions at multiple stages of a processing pipeline. Such techniques can also benefit new astronomical instruments such as the Large Synoptic Survey Telescope (LSST) and Square Kilometre Array (SKA) that will generate petascale data volumes. We summarize some machine learning perspectives on real time data mining, with representative cases of astronomical applications and event detection in high volume datastreams. The first is a "supervised classification" approach currently used for transient event detection at the Very Long Baseline Array (VLBA). It injects known signals of interest - faint single-pulse anomalies - and tunes system parameters to recover these events. This permits meaningful event detection for diverse instrument configurations and observing conditions whose noise cannot be well-characterized in advance. Second, "semi-supervised novelty detection" finds novel events based on statistical deviations from previous patterns. It detects outlier signals of interest while considering known examples of false alarm interference. Applied to data from the Parkes pulsar survey, the approach identifies anomalous "peryton" phenomena that do not match previous event models. Finally, we consider online light curve classification that can trigger adaptive followup measurements of candidate events. Classifier performance analyses suggest optimal survey strategies, and permit principled followup decisions from incomplete data. These examples trace a broad range of algorithm possibilities available for online astronomical data mining. This talk describes research performed at the Jet Propulsion Laboratory, California Institute of Technology. Copyright 2012, All Rights Reserved. U.S. Government support acknowledged.
Channel modeling, signal processing and coding for perpendicular magnetic recording
NASA Astrophysics Data System (ADS)
Wu, Zheng
With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.
NASA Astrophysics Data System (ADS)
Ferrant, S.; Le Page, M.; Kerr, Y. H.; Selles, A.; Mermoz, S.; Al-Bitar, A.; Muddu, S.; Gascoin, S.; Marechal, J. C.; Durand, P.; Salmon-Monviola, J.; Ceschia, E.; Bustillo, V.
2016-12-01
Nitrogen transfers at the agricultural catchment level are intricately linked to water transfers. Agro-hydrological modeling approaches aim at integrating the spatial heterogeneity of catchment physical properties together with agricultural practices to spatially estimate the water and nitrogen cycles. As in hydrology, the calibration schemes are designed to optimize the performance of the temporal dynamics and biases in model simulations, while ignoring the simulated spatial pattern. Yet crop uses, i.e. transpiration and nitrogen exported by harvest, are the main fluxes at the catchment scale and are highly variable in space and time. Geo-information time series of vegetation and water indices from multi-spectral optical detection (S2), together with surface roughness time series from C-band radar detection (S1), are used to reset soil water holding capacity parameters (depth, porosity) and agricultural practices (sowing date, irrigated area extent) of a crop model coupled with a hydrological model. This study takes two agro-hydrological contexts as demonstrators: (1) spatial nitrogen excess estimation in the south-west of France, and (2) groundwater extraction for rice irrigation in southern India. The spatio-temporal patterns are involved, respectively, in surface water contamination due to over-fertilization and in local groundwater shortages due to over-pumping for rice inundation. Optimized Leaf Area Index profiles are simulated at the satellite image pixel level using an agro-hydrological model to reproduce spatial and temporal crop growth dynamics in the south-west of France, improving the simulated in-stream nitrogen fluxes by 12%. Accurate detection of irrigated area extents is obtained with the thresholding method based on optical indices, with a kappa of 0.81 for the 2016 dry season. The current monsoon season is being monitored and will be presented. These extents drive the groundwater pumping and are highly variable in time (from 2 to 8% of the total area).
[Optimized application of nested PCR method for detection of malaria].
Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C
2017-04-28
Objective To optimize the application of the nested PCR method for the detection of malaria in routine practice, so as to improve the efficiency of malaria detection. Methods A PCR premix solution, internal primers for further amplification, and newly designed primers targeting the two Plasmodium ovale subspecies were employed to optimize the reaction system, reaction conditions and specific primers for P. ovale on the basis of routine nested PCR. The specificity and sensitivity of the optimized method were then analyzed. Positive blood samples and malaria examination samples were tested by the routine nested PCR and the optimized method simultaneously, and the detection results were compared and analyzed. Results The optimized method showed good specificity, and its sensitivity reached the pg to fg level. When the two methods were used to detect the same positive malarial blood samples simultaneously, the PCR products of the two methods showed no significant difference, but with the optimized method the non-specific amplification was obviously reduced, the detection rate of P. ovale subspecies improved, and the overall specificity also increased. The actual detection results of 111 malarial blood samples showed that the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, and those of the optimized method were both 93.48%; there was no statistically significant difference between the two methods in sensitivity ( P > 0.05), but there was a statistically significant difference in specificity ( P < 0.05). Conclusion The optimized PCR can improve the specificity without reducing the sensitivity of the routine nested PCR; it can also reduce costs and increase the efficiency of malaria detection with fewer experimental steps.
Image projection optical system for measuring pattern electroretinograms
NASA Astrophysics Data System (ADS)
Starkey, Douglas E.; Taboada, John; Peters, Daniel
1994-06-01
The use of the pattern-electroretinogram (PERG) as a noninvasive diagnostic tool for the early detection of glaucoma has been supported by a number of recent studies. We have developed a unique device which uses a laser interferometer to generate a sinusoidal fringe pattern that is presented to the eye in Maxwellian view for the purpose of producing a PERG response. The projection system stimulates a large visual field and is designed to bypass the optics of the eye in order to measure the true retinal response to a temporally alternating fringe pattern. The contrast, spatial frequency, total power output, orientation, alternating temporal frequency, and field location of the fringe pattern presented to the eye can all be varied by the device. It is critical for these parameters to be variable so that optimal settings may be determined for the normal state and any deviation from it, i.e. early or preclinical glaucoma. Several interferometer designs and optical projection systems were studied in order to design a compact system which provided the desired variable pattern stimulus to the eye. This paper will present a description of the clinical research instrument and its performance with the primary emphasis on the optical system design as it relates to the fringe pattern generation and other optical parameters. Examples of its use in the study of glaucoma diagnosis will also be presented.
Zhang, Changsheng; Cai, Hongmin; Huang, Jingying; Song, Yan
2016-09-17
Variations in DNA copy number make an important contribution to the development of several diseases, including autism, schizophrenia and cancer. Single-cell sequencing technology allows the dissection of genomic heterogeneity at the single-cell level, thereby providing important evolutionary information about cancer cells. In contrast to traditional bulk sequencing, single-cell sequencing requires amplification of the whole genome of a single cell to accumulate enough sample for sequencing. However, the amplification process inevitably introduces amplification bias, resulting in an over-dispersed portion of the sequencing data. A recent study has shown that the over-dispersed portion of single-cell sequencing data can be well modelled by negative binomial distributions. We developed a read-depth based method, nbCNV, to detect copy number variants (CNVs). The nbCNV method uses two constraints, sparsity and smoothness, to fit the CNV patterns under the assumption that the read signals are negative binomially distributed. The problem of CNV detection was formulated as a quadratic optimization problem and was solved by an efficient numerical solution based on the classical alternating direction minimization method. Extensive experiments comparing nbCNV with existing benchmark models were conducted on both simulated data and empirical single-cell sequencing data. The results of those experiments demonstrate that nbCNV achieves superior performance and high robustness for the detection of CNVs in single-cell sequencing data.
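A toy version of the penalized fit behind nbCNV is sketched below. It keeps the sparsity and smoothness ideas but replaces the negative-binomial likelihood with least squares on log read depth and the alternating-direction solver with a smoothed-L1 objective handed to a generic quasi-Newton routine, so it is only a conceptual stand-in for the published method.

```python
import numpy as np
from scipy.optimize import minimize

def fit_cnv_profile(reads, lam_smooth=5.0, lam_sparse=0.5, eps=1e-3):
    """Estimate a piecewise-constant log2 copy-number profile from read depths.
    Data fidelity is least squares on log counts (not the negative binomial
    likelihood of nbCNV) and the L1 penalties are smoothed with sqrt(x^2+eps)
    so that a generic quasi-Newton solver can be used."""
    y = np.log2(np.asarray(reads, dtype=float) + 1.0)
    y = y - np.median(y)                       # 0 ~ normal copy number

    def objective(x):
        fidelity = np.sum((x - y) ** 2)
        smooth = np.sum(np.sqrt(np.diff(x) ** 2 + eps))   # penalizes jumps (smoothness)
        sparse = np.sum(np.sqrt(x ** 2 + eps))            # pulls toward baseline (sparsity)
        return fidelity + lam_smooth * smooth + lam_sparse * sparse

    res = minimize(objective, y.copy(), method="L-BFGS-B")
    return res.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    depth = rng.poisson(100, size=300).astype(float)
    depth[120:160] *= 2.0                      # a simulated copy-number gain
    profile = fit_cnv_profile(depth)
    print("mean log2 ratio inside the gain:", float(profile[120:160].mean()))
    print("outside:", float(np.concatenate([profile[:120], profile[160:]]).mean()))
```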
Miranda-Miranda, Estefan; Sánchez-Reyes, Ayixón; Cuervo-Soto, Laura; Aceves-Zamudio, Denise; Atriztán-Hernández, Karina; Morales-Herrera, Catalina; Rodríguez-Hernández, Rocío; Folch-Mallol, Jorge
2014-01-01
A moderate halophile and thermotolerant fungal strain was isolated from a sugarcane bagasse fermentation in the presence of 2 M NaCl that was set in the laboratory. This strain was identified by polyphasic criteria as Aspergillus caesiellus. The fungus showed an optimal growth rate in media containing 1 M NaCl at 28°C and could grow in media added with up to 2 M NaCl. This strain was able to grow at 37 and 42°C, with or without NaCl. A. caesiellus H1 produced cellulases, xylanases, manganese peroxidase (MnP) and esterases. No laccase activity was detected in the conditions we tested. The cellulase activity was thermostable, halostable, and no differential expression of cellulases was observed in media with different salt concentrations. However, differential band patterns for cellulase and xylanase activities were detected in zymograms when the fungus was grown in different lignocellulosic substrates such as wheat straw, maize stover, agave fibres, sugarcane bagasse and sawdust. Optimal temperature and pH were similar to other cellulases previously described. These results support the potential of this fungus to degrade lignocellulosic materials and its possible use in biotechnological applications. PMID:25162614
Kislinger, Thomas; Gramolini, Anthony O; MacLennan, David H; Emili, Andrew
2005-08-01
An optimized analytical expression profiling strategy based on gel-free multidimensional protein identification technology (MudPIT) is reported for the systematic investigation of biochemical (mal)-adaptations associated with healthy and diseased heart tissue. Enhanced shotgun proteomic detection coverage and improved biological inference is achieved by pre-fractionation of excised mouse cardiac muscle into subcellular components, with each organellar fraction investigated exhaustively using multiple repeat MudPIT analyses. Functional-enrichment, high-confidence identification, and relative quantification of hundreds of organelle- and tissue-specific proteins are achieved readily, including detection of low abundance transcriptional regulators, signaling factors, and proteins linked to cardiac disease. Important technical issues relating to data validation, including minimization of artifacts stemming from biased under-sampling and spurious false discovery, together with suggestions for further fine-tuning of sample preparation, are discussed. A framework for follow-up bioinformatic examination, pattern recognition, and data mining is also presented in the context of a stringent application of MudPIT for probing fundamental aspects of heart muscle physiology as well as the discovery of perturbations associated with heart failure.
Multicutter machining of compound parametric surfaces
NASA Astrophysics Data System (ADS)
Hatna, Abdelmadjid; Grieve, R. J.; Broomhead, P.
2000-10-01
Parametric free forms are used in industries as disparate as footwear, toys, sporting goods, ceramics, digital content creation, and conceptual design. Optimizing tool path patterns and minimizing the total machining time is a central issue in numerically controlled (NC) machining of free-form surfaces. We demonstrate in the present work that multi-cutter machining can achieve as much as a 60% reduction in total machining time for compound sculptured surfaces. The given approach is based upon the pre-processing, as opposed to the usual post-processing, of surfaces for the detection and removal of interference, followed by precise tracking of unmachined areas.
NASA Astrophysics Data System (ADS)
Chen, Yi-Chieh; Li, Tsung-Han; Lin, Hung-Yu; Chen, Kao-Tun; Wu, Chun-Sheng; Lai, Ya-Chieh; Hurat, Philippe
2018-03-01
As processes have improved and integrated circuit (IC) design complexity has increased, the failure rate caused by optical effects has grown in semiconductor manufacturing. In order to enhance chip quality, optical proximity correction (OPC) plays an indispensable role in the manufacturing industry. However, OPC, which includes model creation, correction, simulation and verification, is a bottleneck from design to manufacture due to the multiple iterations and the advanced mathematical description of physical behavior involved. Thus, this paper presents a pattern-based design technology co-optimization (PB-DTCO) flow that works in cooperation with OPC to find patterns which will negatively affect the yield and fix them automatically in advance, reducing the run-time of the OPC operation. The PB-DTCO flow can generate plenty of test patterns for model creation and yield gain, classify candidate patterns systematically, and quickly build up banks that include pairs of matching and optimization patterns. Those banks can be used for hotspot fixing and layout optimization, and can also be referenced for the next technology node. Therefore, the combination of the PB-DTCO flow with OPC not only helps reduce time-to-market but is also flexible and can easily be adapted to diverse OPC flows.
[Caloric intake in parenteral nutrition of very low weight infants].
Maggio, L; Gallini, F; De Carolis, M P; Frezza, S; Greco, F
1994-10-01
To evaluate the efficacy of a measure able to compare energy intake from parenteral and enteral nutrition, we documented growth patterns in a group of VLBW infants treated with parenteral nutrition (PN). To analyze the comparative energy intake from the two sources, we expressed both parenteral and enteral calories as percentages: the former (RCP%) relative to an optimal value of 85 non-protein calories and the latter (RCE%) relative to an optimal value of 150 total calories. Total energy intake was planned on the basis of the RCT% (RCP% + RCE%). We studied 75 VLBW infants with a mean BW of 1040 g and a mean GA of 29.5 weeks. The mean duration of PN was 25.8 +/- 10.4 days. The initial weight loss (10.2 +/- 5.3%), the time to regain BW (5.5 +/- 4 days) and the day of lowest weight (5.2 +/- 1.6 days of life) were in the normal range; the subsequent growth rate was 25.9 +/- 9.2 g/kg/day and did not change with GA or BW. Growth patterns for head circumference and length were above the third percentile. The mean age at which RCT% reached 100% was 11.4 +/- 4.8 days of PN; this value was higher for the more premature infants. Severe metabolic abnormalities were not detected. Our observations show the efficacy of the RCT% as an index of energy from both enteral and parenteral sources during PN: the growth pattern seems to be quite satisfactory without any severe metabolic complication.
Defining a region of optimization based on engine usage data
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-08-04
Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.
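A minimal sketch of the idea in the abstract above, accumulating detected operating conditions and gating the optimization routine on the most commonly visited region, is given below; the speed/load variables, bin edges and top-fraction threshold are illustrative assumptions rather than the patented implementation.

```python
import numpy as np

class OptimizationRegion:
    """Track detected (speed, load) operating points and define the region of
    optimization as the most frequently visited histogram bins."""

    def __init__(self, speed_edges, load_edges, top_fraction=0.2):
        self.speed_edges = np.asarray(speed_edges)
        self.load_edges = np.asarray(load_edges)
        self.counts = np.zeros((len(speed_edges) - 1, len(load_edges) - 1))
        self.top_fraction = top_fraction

    def _bin(self, speed, load):
        i = np.clip(np.searchsorted(self.speed_edges, speed) - 1, 0, self.counts.shape[0] - 1)
        j = np.clip(np.searchsorted(self.load_edges, load) - 1, 0, self.counts.shape[1] - 1)
        return i, j

    def record(self, speed, load):
        """Record one detected operating condition."""
        i, j = self._bin(speed, load)
        self.counts[i, j] += 1

    def in_region(self, speed, load):
        """True if this operating point lies in the most commonly visited bins,
        i.e. the optimization routine may be initiated here."""
        threshold = np.quantile(self.counts[self.counts > 0], 1 - self.top_fraction)
        i, j = self._bin(speed, load)
        return self.counts[i, j] >= threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    region = OptimizationRegion(speed_edges=np.arange(0, 7000, 500),
                                load_edges=np.arange(0, 110, 10))
    for _ in range(5000):                       # simulated drive cycle
        region.record(rng.normal(2200, 400), rng.normal(40, 10))
    print("optimize at 2200 rpm / 40% load:", region.in_region(2200, 40))
    print("optimize at 6000 rpm / 90% load:", region.in_region(6000, 90))
```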
Wenk, Jonathan F; Wall, Samuel T; Peterson, Robert C; Helgerson, Sam L; Sabbah, Hani N; Burger, Mike; Stander, Nielen; Ratcliffe, Mark B; Guccione, Julius M
2009-12-01
Heart failure continues to present a significant medical and economic burden throughout the developed world. Novel treatments involving the injection of polymeric materials into the myocardium of the failing left ventricle (LV) are currently being developed, which may reduce elevated myofiber stresses during the cardiac cycle and act to retard the progression of heart failure. A finite element (FE) simulation-based method was developed in this study that can automatically optimize the injection pattern of the polymeric "inclusions" according to a specific objective function, using commercially available software tools. The FE preprocessor TRUEGRID(R) was used to create a parametric axisymmetric LV mesh matched to experimentally measured end-diastole and end-systole metrics from dogs with coronary microembolization-induced heart failure. Passive and active myocardial material properties were defined by a pseudo-elastic-strain energy function and a time-varying elastance model of active contraction, respectively, that were implemented in the FE software LS-DYNA. The companion optimization software LS-OPT was used to communicate directly with TRUEGRID(R) to determine FE model parameters, such as defining the injection pattern and inclusion characteristics. The optimization resulted in an intuitive optimal injection pattern (i.e., the one with the greatest number of inclusions) when the objective function was weighted to minimize mean end-diastolic and end-systolic myofiber stress and ignore LV stroke volume. In contrast, the optimization resulted in a nonintuitive optimal pattern (i.e., 3 inclusions longitudinally x 6 inclusions circumferentially) when both myofiber stress and stroke volume were incorporated into the objective function with different weights.
Structural damage detection-oriented multi-type sensor placement with multi-objective optimization
NASA Astrophysics Data System (ADS)
Lin, Jian-Fu; Xu, You-Lin; Law, Siu-Seong
2018-05-01
A structural damage detection-oriented multi-type sensor placement method with multi-objective optimization is developed in this study. The multi-type response covariance sensitivity-based damage detection method is first introduced. Two objective functions for optimal sensor placement are then introduced in terms of the response covariance sensitivity and the response independence. The multi-objective optimization problem is formed by using the two objective functions, and the non-dominated sorting genetic algorithm (NSGA)-II is adopted to find the solution for the optimal multi-type sensor placement to achieve the best structural damage detection. The proposed method is finally applied to a nine-bay three-dimensional frame structure. Numerical results show that the optimal multi-type sensor placement determined by the proposed method can avoid redundant sensors and provide satisfactory results for structural damage detection. The restriction on the number of each type of sensors in the optimization can reduce the searching space in the optimization to make the proposed method more effective. Moreover, how to select a most optimal sensor placement from the Pareto solutions via the utility function and the knee point method is demonstrated in the case study.
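The multi-objective selection step can be illustrated with the small sketch below, which scores candidate two-type sensor layouts with toy proxies for the two objectives and keeps the non-dominated (Pareto) set; the actual method uses response covariance sensitivity, response independence and NSGA-II, none of which are reproduced here.

```python
import random

def non_dominated(solutions, objectives):
    """Return the subset of solutions whose objective vectors (to be maximized)
    are not dominated by any other solution."""
    front = []
    for i, fi in enumerate(objectives):
        dominated = any(all(fj[k] >= fi[k] for k in range(len(fi))) and fj != fi
                        for j, fj in enumerate(objectives) if j != i)
        if not dominated:
            front.append(solutions[i])
    return front

def toy_objectives(layout, n_locations=27):
    """Toy stand-ins for (i) response covariance sensitivity and
    (ii) response independence of a multi-type sensor layout."""
    accel, strain = layout
    coverage = len(set(accel) | set(strain)) / n_locations
    redundancy = len(set(accel) & set(strain)) / max(1, len(accel))
    return (coverage, 1.0 - redundancy)

if __name__ == "__main__":
    rng = random.Random(0)
    locations = range(27)                      # e.g. candidate nodes of a frame model
    candidates = [(tuple(rng.sample(locations, 4)), tuple(rng.sample(locations, 4)))
                  for _ in range(200)]         # 4 accelerometers + 4 strain gauges
    objs = [toy_objectives(c) for c in candidates]
    pareto = non_dominated(candidates, objs)
    print("Pareto-optimal layouts found:", len(pareto))
```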
Patterning control strategies for minimum edge placement error in logic devices
NASA Astrophysics Data System (ADS)
Mulkens, Jan; Hanna, Michael; Slachter, Bram; Tel, Wim; Kubis, Michael; Maslow, Mark; Spence, Chris; Timoshkov, Vadim
2017-03-01
In this paper we discuss the edge placement error (EPE) for multi-patterning semiconductor manufacturing. In a multi-patterning scheme the creation of the final pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources of the different process steps. We describe the fidelity of the final pattern in terms of EPE, which is defined as the relative displacement of the edges of two features from their intended target position. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As an experimental test vehicle we use the 7-nm logic device patterning process flow as developed by IMEC. This patterning process is based on Self-Aligned-Quadruple-Patterning (SAQP) using ArF lithography, combined with line cut exposures using EUV lithography. The computational metrology method to determine EPE is explained. It will be shown that ArF to EUV overlay, CDU from the individual process steps, and local CD and placement of the individual pattern features, are the important contributors. Based on the error budget, we developed an optimization strategy for each individual step and for the final pattern. Solutions include overlay and CD metrology based on angle resolved scatterometry, scanner actuator control to enable high order overlay corrections and computational lithography optimization to minimize imaging induced pattern placement errors of devices and metrology targets.
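A simplified edge placement error budget of the kind discussed above can be combined as in the sketch below, adding independent contributors in quadrature and counting CD-type terms at half value; the specific terms, weights and numbers are illustrative and do not reproduce the paper's budget.

```python
import math

def epe_budget(contributors):
    """Combine statistically independent EPE contributors (all expressed as
    3-sigma values in nm) in quadrature. CD-type terms enter at half value
    because only one edge of a feature moves with a CD change."""
    total_sq = 0.0
    for name, value, is_cd_term in contributors:
        term = value / 2.0 if is_cd_term else value
        total_sq += term ** 2
    return math.sqrt(total_sq)

if __name__ == "__main__":
    # Illustrative numbers only (nm, 3-sigma); not measured data from the paper.
    budget = [
        ("ArF-to-EUV overlay",    2.5, False),
        ("grating CDU",           1.5, True),
        ("block/cut CDU",         2.0, True),
        ("local CD (LCDU + LER)", 2.2, True),
        ("local placement error", 1.2, False),
    ]
    print(f"estimated EPE (3-sigma): {epe_budget(budget):.2f} nm")
```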
Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz
2008-02-01
A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
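A stripped-down mixed-variable pattern search in the spirit of the method above is sketched below, polling a continuous neuron-count step and a categorical transfer-function choice against a hypothetical stand-in for the trained-network error; the real algorithm additionally optimizes connectivity and uses a surrogate for expensive evaluations.

```python
TRANSFER_FUNCTIONS = ["tanh", "logistic", "relu"]

def objective(n_neurons, transfer):
    """Hypothetical stand-in for the validation error of a trained ANN."""
    base = {"tanh": 0.8, "logistic": 1.0, "relu": 0.9}[transfer]
    return base * (abs(n_neurons - 37) / 37.0 + 5.0 / n_neurons)

def mixed_variable_pattern_search(n0=10, transfer0="logistic",
                                  step=8, min_step=1, max_iter=100):
    n, tf = n0, transfer0
    best = objective(n, tf)
    for _ in range(max_iter):
        improved = False
        # Continuous-variable poll: step up/down in neuron count.
        for cand_n in (n + step, max(1, n - step)):
            if objective(cand_n, tf) < best:
                n, best, improved = cand_n, objective(cand_n, tf), True
        # Categorical poll: try the other transfer functions at the current size.
        for cand_tf in TRANSFER_FUNCTIONS:
            if cand_tf != tf and objective(n, cand_tf) < best:
                tf, best, improved = cand_tf, objective(n, cand_tf), True
        if not improved:
            if step <= min_step:
                break
            step = max(min_step, step // 2)   # contract the mesh and poll again
    return n, tf, best

if __name__ == "__main__":
    print(mixed_variable_pattern_search())
```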
A fuzzy pattern matching method based on graph kernel for lithography hotspot detection
NASA Astrophysics Data System (ADS)
Nitta, Izumi; Kanazawa, Yuzi; Ishida, Tsutomu; Banno, Koji
2017-03-01
In advanced technology nodes, lithography hotspot detection has become one of the most significant issues in design for manufacturability. Recently, machine learning based lithography hotspot detection has been widely investigated, but it involves a trade-off between detection accuracy and false alarms. To apply machine learning based techniques in the physical verification phase, designers need to minimize undetected hotspots to avoid yield degradation. They also need a ranking of known patterns similar to a detected hotspot in order to prioritize the layout patterns to be corrected. To achieve high detection accuracy and to prioritize detected hotspots, we propose a novel lithography hotspot detection method using Delaunay triangulation and graph kernel based machine learning. The Delaunay triangulation extracts features of hotspot patterns in which polygons are located irregularly and close to one another, and the graph kernel expresses the inner structure of the graphs. Additionally, our method provides the similarity between two patterns and creates a list of training patterns similar to a detected hotspot. Experimental results on the ICCAD 2012 benchmarks show that our method achieves high accuracy with an allowable range of false alarms. We also show the ranking of known patterns similar to a detected hotspot.
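The two ingredients named above, a Delaunay triangulation over layout polygons and a graph kernel for similarity, can be illustrated with the toy sketch below; the degree-histogram "kernel" is only a stand-in for the fuzzy graph kernel of the paper, and random centroids replace real polygon data.

```python
import numpy as np
from scipy.spatial import Delaunay

def clip_to_degrees(centroids):
    """Build an adjacency structure from the Delaunay triangulation of polygon
    centroids; the node degree is used here as a crude node label."""
    tri = Delaunay(np.asarray(centroids))
    edges = set()
    for a, b, c in tri.simplices:
        edges |= {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))}
    degree = np.zeros(len(centroids), dtype=int)
    for e in edges:
        for v in e:
            degree[v] += 1
    return degree

def label_histogram_kernel(deg1, deg2, max_label=12):
    """Toy graph kernel: normalized inner product of node-degree histograms
    (a stand-in for the fuzzy graph kernel of the paper)."""
    h1 = np.bincount(np.clip(deg1, 0, max_label), minlength=max_label + 1)
    h2 = np.bincount(np.clip(deg2, 0, max_label), minlength=max_label + 1)
    return float(h1 @ h2) / (np.linalg.norm(h1) * np.linalg.norm(h2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clip_a = rng.random((20, 2))
    clip_b = clip_a + 0.01 * rng.random((20, 2))   # a near-duplicate layout clip
    clip_c = rng.random((20, 2))                   # an unrelated clip
    ka = clip_to_degrees(clip_a)
    print("similarity A-B:", round(label_histogram_kernel(ka, clip_to_degrees(clip_b)), 3))
    print("similarity A-C:", round(label_histogram_kernel(ka, clip_to_degrees(clip_c)), 3))
```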
Okubo, Hitomi; Sasaki, Satoshi; Murakami, Kentaro; Yokoyama, Tetsuji; Hirota, Naoko; Notsu, Akiko; Fukui, Mitsuru; Date, Chigusa
2015-06-06
Simultaneous dietary achievement of a full set of nutritional recommendations is difficult. A diet optimization model using linear programming is a useful mathematical means of translating nutrient-based recommendations into realistic nutritionally optimal food combinations incorporating local and culture-specific foods. We used this approach to explore optimal food intake patterns that meet the nutrient recommendations of the Dietary Reference Intakes (DRIs) while incorporating typical Japanese food selections. As observed intake values, we used the food and nutrient intake data of 92 women aged 31-69 years and 82 men aged 32-69 years living in three regions of Japan. Dietary data were collected with semi-weighed dietary records on four non-consecutive days in each season of the year (16 days in total). The linear programming models were constructed to minimize the differences between observed and optimized food intake patterns while also meeting the DRIs for a set of 28 nutrients, setting energy equal to estimated requirements, and not exceeding typical quantities of each food consumed by each age (30-49 or 50-69 years) and gender group. We successfully developed mathematically optimized food intake patterns that met the DRIs for all 28 nutrients studied in each sex and age group. Achieving nutritional goals required minor modifications of existing diets in the older groups, particularly women, while major modifications were required to increase the intake of fruit and vegetables in the younger groups of both sexes. Across all sex and age groups, optimized food intake patterns demanded greatly increased intake of whole grains and reduced-fat dairy products in place of refined grains and full-fat dairy products. Salt intake goals were the most difficult to achieve, requiring marked reduction of salt-containing seasonings (65-80%) in all sex and age groups. Using a linear programming model, we identified optimal food intake patterns providing practical food choices and meeting nutritional recommendations for Japanese populations. The dietary modifications from current eating habits required to fulfil nutritional goals differed by age: more marked increases in food volume were required in the younger groups.
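A minimal linear-programming formulation in the spirit of the study above is sketched below with scipy.optimize.linprog: food intakes are shifted as little as possible from observed values subject to nutrient lower bounds, an energy equality and per-food caps. The four foods, three nutrients and all numbers are invented for illustration and are not the study's 28-nutrient model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 foods x 3 nutrients (per 100 g serving). All numbers are illustrative.
nutrients = np.array([[3.0, 0.1, 1.0],    # grains: protein, calcium, iron
                      [1.5, 12.0, 0.2],   # dairy
                      [1.0, 3.0, 0.8],    # vegetables
                      [20.0, 1.0, 2.5]])  # fish/meat
energy = np.array([350.0, 60.0, 30.0, 150.0])   # kcal per 100 g
observed = np.array([4.0, 1.0, 2.0, 1.0])       # observed intake (100 g units)
nutrient_lb = np.array([60.0, 20.0, 8.0])       # DRI-like lower bounds (toy)
energy_target = float(energy @ observed)        # keep energy equal to observed

n = len(observed)
# Decision vector z = [x (intakes), d (absolute deviations from observed)]
c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize total deviation

I = np.eye(n)
A_ub = np.vstack([np.hstack([ I, -I]),                          #  x - d <= observed
                  np.hstack([-I, -I]),                          # -x - d <= -observed
                  np.hstack([-nutrients.T, np.zeros((3, n))])]) # nutrient totals >= lower bound
b_ub = np.concatenate([observed, -observed, -nutrient_lb])
A_eq = np.hstack([energy, np.zeros(n)]).reshape(1, -1)          # energy equality
b_eq = np.array([energy_target])
bounds = [(0, 3 * o) for o in observed] + [(0, None)] * n       # cap foods at 3x observed

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("optimized intake (100 g units):", np.round(res.x[:n], 2))
print("total deviation from observed diet:", round(res.fun, 2))
```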
Esteve, Rosa; López-Martínez, Alicia E; Peters, Madelon L; Serrano-Ibáñez, Elena R; Ruiz-Párraga, Gema T; Ramírez-Maestre, Carmen
2018-01-01
Activity patterns are the product of pain and of the self-regulation of current goals in the context of pain. The aim of this study was to investigate the association between goal management strategies and activity patterns while taking into account the role of optimism/pessimism and positive/negative affect. Two hundred and thirty-seven patients with chronic musculoskeletal pain filled out questionnaires on optimism, positive and negative affect, pain intensity, and the activity patterns they employed in dealing with their pain. Questionnaires were also administered to assess their general goal management strategies: goal persistence, flexible goal adjustment, and disengagement and reengagement with goals. Structural equation modelling showed that higher levels of optimism were related to persistence, flexible goal management, and commitment to new goals. These strategies were associated with higher positive affect, persistence in finishing tasks despite pain, and infrequent avoidance behaviour in the presence or anticipation of pain. The strategies used by the patients with chronic musculoskeletal pain to manage their life goals are related to their activity patterns.
Optimal management strategies in variable environments: Stochastic optimal control methods
Williams, B.K.
1985-01-01
Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results were then used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. Similarities could be seen in the influence of both the discount rate and the climatic patterns on optimal harvest strategies. In general, decreases in either the discount rate or in the frequency of favorable weather patterns led to a more conservative defoliation policy. This did not hold, however, for plants in states of low vigor. Optimal control for shadscale and winterfat tended to stabilize on a policy of heavy defoliation stress, followed by one or more seasons of rest. Big sagebrush required a policy of heavy summer defoliation when sufficient active shoot material is present at the beginning of the growing season. The comparison of fixed and optimal strategies indicated considerable improvement in defoliation yields when optimal strategies are followed. The superior performance was attributable to increased defoliation of plants in states of high vigor. Improvements were found for both discounted and undiscounted yields.
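The algorithmic core named above, a finite-state, finite-action, infinite-horizon discounted Markov decision process, can be solved by value iteration as in the generic sketch below; the vigor states, transition probabilities and yields are placeholders, not the shrub production model of the study.

```python
import numpy as np

def value_iteration(P, R, discount=0.95, tol=1e-8):
    """P[a] is an (S, S) transition matrix and R[a] an (S,) expected reward
    vector for action a. Returns the optimal value function and policy for an
    infinite-horizon discounted Markov decision process."""
    n_actions, n_states = len(P), P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        Q = np.array([R[a] + discount * P[a] @ V for a in range(n_actions)])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

if __name__ == "__main__":
    # Toy model: 3 plant-vigor states (low, medium, high) and 2 actions
    # (0 = rest, 1 = heavy defoliation). Numbers are illustrative only.
    P = np.array([
        [[0.7, 0.3, 0.0], [0.2, 0.6, 0.2], [0.0, 0.3, 0.7]],   # rest
        [[0.9, 0.1, 0.0], [0.6, 0.4, 0.0], [0.1, 0.5, 0.4]],   # defoliate
    ])
    R = np.array([
        [0.0, 0.0, 0.0],        # no yield when resting
        [0.5, 2.0, 4.0],        # yield increases with plant vigor
    ])
    V, policy = value_iteration(P, R, discount=0.9)
    print("optimal values:", np.round(V, 2))
    print("optimal action per state (0=rest, 1=defoliate):", policy)
```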
Cheng, Wen-Chang
2012-01-01
In this paper we propose a robust lane detection and tracking method by combining particle filters with the particle swarm optimization method. This method mainly uses the particle filters to detect and track the local optimum of the lane model in the input image and then seeks the global optimal solution of the lane model by a particle swarm optimization method. The particle filter can effectively complete lane detection and tracking in complicated or variable lane environments. However, the result obtained is usually a local optimal system status rather than the global optimal system status. Thus, the particle swarm optimization method is used to further refine the global optimal system status among all system statuses. Since the particle swarm optimization method is a global optimization algorithm based on iterative computing, it can find the global optimal lane model by simulating the food-finding behaviour of fish schools or insects under the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method can complete the lane detection and tracking more accurately and effectively than existing options. PMID:23235453
Gonnissen, J; De Backer, A; den Dekker, A J; Sijbers, J; Van Aert, S
2016-11-01
In the present paper, the optimal detector design is investigated for both detecting and locating light atoms from high resolution scanning transmission electron microscopy (HR STEM) images. The principles of detection theory are used to quantify the probability of error for the detection of light atoms from HR STEM images. To determine the optimal experiment design for locating light atoms, use is made of the so-called Cramér-Rao Lower Bound (CRLB). It is investigated if a single optimal design can be found for both the detection and location problem of light atoms. Furthermore, the incoming electron dose is optimised for both research goals and it is shown that picometre range precision is feasible for the estimation of the atom positions when using an appropriate incoming electron dose under the optimal detector settings to detect light atoms. Copyright © 2016 Elsevier B.V. All rights reserved.
Co-optimization of lithographic and patterning processes for improved EPE performance
NASA Astrophysics Data System (ADS)
Maslow, Mark J.; Timoshkov, Vadim; Kiers, Ton; Jee, Tae Kwon; de Loijer, Peter; Morikita, Shinya; Demand, Marc; Metz, Andrew W.; Okada, Soichiro; Kumar, Kaushik A.; Biesemans, Serge; Yaegashi, Hidetami; Di Lorenzo, Paolo; Bekaert, Joost P.; Mao, Ming; Beral, Christophe; Larivière, Stephane
2017-03-01
Complementary lithography is already being used for advanced logic patterns. The tight pitches for 1D Metal layers are expected to be created using spacer based multiple patterning ArF-i exposures and the more complex cut/block patterns are made using EUV exposures. At the same time, control requirements of CDU, pattern shift and pitch-walk are approaching sub-nanometer levels to meet edge placement error (EPE) requirements. Local variability, such as Line Edge Roughness (LER), Local CDU, and Local Placement Error (LPE), are dominant factors in the total Edge Placement error budget. In the lithography process, improving the imaging contrast when printing the core pattern has been shown to improve the local variability. In the etch process, it has been shown that the fusion of atomic level etching and deposition can also improve these local variations. Co-optimization of lithography and etch processing is expected to further improve the performance over individual optimizations alone. To meet the scaling requirements and keep process complexity to a minimum, EUV is increasingly seen as the platform for delivering the exposures for both the grating and the cut/block patterns beyond N7. In this work, we evaluated the overlay and pattern fidelity of an EUV block printed in a negative tone resist on an ArF-i SAQP grating. High-order overlay modeling and corrections during the exposure can reduce overlay error after development, a significant component of the total EPE. During etch, additional degrees of freedom are available to improve the pattern placement error in single layer processes. Process control of advanced pitch nanoscale multi-patterning techniques as described above is exceedingly complicated in a high volume manufacturing environment. Incorporating potential patterning optimizations into both design and HVM controls for the lithography process is expected to bring a combined benefit over individual optimizations. In this work we will show the EPE performance improvement for a 32nm pitch SAQP + block patterned Metal 2 layer by co-optimizing the lithography and etch processes. Recommendations for further improvements and alternative processes will be given.
Cancer Detection in Microarray Data Using a Modified Cat Swarm Optimization Clustering Approach
M, Pandi; R, Balamurugan; N, Sadhasivam
2017-12-29
Objective: A better understanding of functional genomics can be obtained by extracting patterns hidden in gene expression data. This could have paramount implications for cancer diagnosis, gene treatments and other domains. Clustering may reveal natural structures and identify interesting patterns in underlying data. The main objective of this research was to derive a heuristic approach to detection of highly co-expressed genes related to cancer from gene expression data with minimum Mean Squared Error (MSE). Methods: A modified CSO algorithm using Harmony Search (MCSO-HS) for clustering cancer gene expression data was applied. Experimental results were analyzed using two cancer gene expression benchmark datasets, for leukaemia and for breast cancer. Result: In terms of MSE, MCSO-HS outperformed HS and CSO by 13% and 9%, respectively, on the leukaemia dataset, and by 22% and 17% on the breast cancer dataset. Conclusion: The results showed MCSO-HS to outperform HS and CSO with both benchmark datasets. To validate the clustering results, this work was tested with internal and external cluster validation indices. This work also points to biological validation of the clusters with gene ontology in terms of function, process and component.
NASA Astrophysics Data System (ADS)
Korganbayev, Sanzhar; Orazayev, Yerzhan; Sovetov, Sultan; Bazyl, Ali; Schena, Emiliano; Massaroni, Carlo; Gassino, Riccardo; Vallan, Alberto; Perrone, Guido; Saccomandi, Paola; Arturo Caponero, Michele; Palumbo, Giovanna; Campopiano, Stefania; Iadicicco, Agostino; Tosi, Daniele
2018-03-01
In this paper, we describe a novel method for spatially distributed temperature measurement with Chirped Fiber Bragg Grating (CFBG) fiber-optic sensors. The proposed method determines the thermal profile in the CFBG region from demodulation of the CFBG optical spectrum. The method is based on an iterative optimization that aims at minimizing the mismatch between the measured CFBG spectrum and a CFBG model based on coupled-mode theory (CMT), perturbed by a temperature gradient. In the demodulation part, we simulate different temperature distribution patterns with a Monte-Carlo approach on simulated CFBG spectra. Afterwards, we obtain a cost function that minimizes the difference between the measured and simulated spectra and yields the final temperature profile. Experiments and simulations have been carried out first with a linear gradient, demonstrating correct operation (error 2.9 °C); then, a setup was arranged to measure the temperature pattern on a 5-cm long section exposed to medical laser thermal ablation. Overall, the proposed method can operate as a real-time detection technique for thermal gradients over 1.5-5 cm regions, and serves as a key asset for the estimation of thermal gradients at the micro-scale in biomedical applications.
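A minimal sketch of the spectral-mismatch minimization described above, with the coupled-mode-theory forward model replaced by a crude placeholder (a sum of Gaussian reflection bands whose centres shift with local temperature) and the thermal profile parameterised as a linear gradient; all function names and constants are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

wavelengths = np.linspace(1545.0, 1555.0, 500)  # nm, illustrative CFBG band

def simulate_cfbg_spectrum(temp_profile, wl=wavelengths):
    """Placeholder forward model (NOT coupled-mode theory): each grating segment
    reflects a Gaussian band whose centre shifts ~10 pm/K with local temperature."""
    z = np.linspace(0.0, 1.0, temp_profile.size)          # normalised position along the grating
    centres = 1546.0 + 8.0 * z + 0.01 * temp_profile       # chirp + thermal shift
    return np.sum(np.exp(-((wl[:, None] - centres[None, :]) ** 2) / 0.05), axis=1)

def cost(params, measured):
    """Mismatch between measured and simulated spectra for a linear thermal profile."""
    t0, grad = params
    profile = t0 + grad * np.linspace(0.0, 1.0, 40)
    return np.mean((simulate_cfbg_spectrum(profile) - measured) ** 2)

# Synthetic "measured" spectrum with a known 25->60 degC linear gradient plus noise.
true_profile = 25.0 + 35.0 * np.linspace(0.0, 1.0, 40)
measured = simulate_cfbg_spectrum(true_profile) + np.random.default_rng(1).normal(0, 0.01, wavelengths.size)

res = minimize(cost, x0=[20.0, 10.0], args=(measured,), method='Nelder-Mead')
print("estimated T0 and gradient:", res.x)
```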
Optimality approaches to describe characteristic fluvial patterns on landscapes
Paik, Kyungrock; Kumar, Praveen
2010-01-01
Mother Nature has left amazingly regular geomorphic patterns on the Earth's surface. These patterns are often explained as having arisen as a result of some optimal behaviour of natural processes. However, there is little agreement on what is being optimized. As a result, a number of alternatives have been proposed, often with little a priori justification with the argument that successful predictions will lend a posteriori support to the hypothesized optimality principle. Given that maximum entropy production is an optimality principle attempting to predict the microscopic behaviour from a macroscopic characterization, this paper provides a review of similar approaches with the goal of providing a comparison and contrast between them to enable synthesis. While assumptions of optimal behaviour approach a system from a macroscopic viewpoint, process-based formulations attempt to resolve the mechanistic details whose interactions lead to the system level functions. Using observed optimality trends may help simplify problem formulation at appropriate levels of scale of interest. However, for such an approach to be successful, we suggest that optimality approaches should be formulated at a broader level of environmental systems' viewpoint, i.e. incorporating the dynamic nature of environmental variables and complex feedback mechanisms between fluvial and non-fluvial processes. PMID:20368257
Proanthocyanidin screening by LC-ESI-MS of Portuguese red wines made with teinturier grapes.
Teixeira, Natércia; Azevedo, Joana; Mateus, Nuno; de Freitas, Victor
2016-01-01
Proanthocyanidins (PAs) are one of the most important polyphenolic compounds in wine. Among PAs, prodelphinidin (PD) dimers and trimers have not been widely detected in wines due to the lack of available commercial standards and the difficulty of detecting and isolating them from natural sources. LC-ESI-MS (liquid chromatography-electrospray ionization-mass spectrometry) with the right chromatographic conditions has proven to be a powerful tool for PA detection and identification in complex samples. This technique has been applied to an exhaustive study of the PA composition of two Portuguese red wines made with teinturier grapes, especially for the identification of PD dimers and trimers. Tandem mass spectrometry (MS/MS) with an ion trap provided additional information about the structures of these compounds through the fragmentation patterns of the pseudomolecular ions. An LC-ESI-MS method was optimized and 41 different compounds were found, including 8 PD dimers and 13 PD trimers. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ground-based very high energy gamma ray astronomy: Observational highlights
NASA Technical Reports Server (NTRS)
Turver, K. E.
1986-01-01
It is now more than 20 years since the first ground based gamma ray experiments involving atmospheric Cerenkov radiation were undertaken. The present highlights in observational ground-based very high energy (VHE) gamma ray astronomy and the optimism about an interesting future for the field follow progress in these areas: (1) the detection, at increased levels of confidence, of an enlarged number of sources, so that claims have now been made for the detection, at the 4 to 5 standard deviation level of significance, of 8 point sources; (2) the replication of the claimed detections with, for the first time, confirmation of the nature and detail of the emission; and (3) the extension of gamma ray astronomy to the ultra high energy (UHE) domain. The pattern, if any, to emerge from the list of sources claimed so far is that X-ray binary sources appear to be copious emitters of gamma rays over at least 4 decades of energy. These X-ray sources which behave as VHE and UHE gamma ray emitters are examined.
Antibiotic Resistance in Sepsis Patients: Evaluation and Recommendation of Antibiotic Use
Pradipta, Ivan Surya; Sodik, Dian Chairunnisa; Lestari, Keri; Parwati, Ida; Halimah, Eli; Diantini, Ajeng; Abdulah, Rizky
2013-01-01
Background: The appropriate selection of empirical antibiotics based on the pattern of local antibiotic resistance can reduce the mortality rate and increase the rational use of antibiotics. Aims: We analyzed the pattern of antibiotic use and the sensitivity patterns of antibiotics to support the rational use of antibiotics in patients with sepsis. Materials and Methods: A retrospective observational study was conducted in adult sepsis patients at an Indonesian hospital during January-December 2011. Data were collected from the hospital medical record department. Descriptive analysis was used in the processing and interpretation of data. Results: A total of 76 patients were included as research subjects. Lung infection was the most common source of infection. In the 66.3% of clinical specimens that were culture positive for microbes, Klebsiella pneumoniae, Escherichia coli, and Staphylococcus hominis were detected with the highest frequency. The six most frequently used antibiotics, levofloxacin, ceftazidime, ciprofloxacin, cefotaxime, ceftriaxone, and erythromycin, showed an average resistance above 50%. Conclusions: The high use of antibiotics with high levels of resistance requires a policy to support their rational use. Local microbial patterns based on the site of infection and patterns of antibiotic sensitivity testing can be used as supporting data to optimize the appropriateness of empirical antibiotic therapy in sepsis patients. PMID:23923107
Error analysis of the crystal orientations obtained by the dictionary approach to EBSD indexing.
Ram, Farangis; Wright, Stuart; Singh, Saransh; De Graef, Marc
2017-10-01
The efficacy of the dictionary approach to Electron Back-Scatter Diffraction (EBSD) indexing was evaluated through the analysis of the error in the retrieved crystal orientations. EBSPs simulated by the Callahan-De Graef forward model were used for this purpose. Patterns were noised, distorted, and binned prior to dictionary indexing. Patterns with a high level of noise, with optical distortions, and with a 25 × 25 pixel size, when the error in projection center was 0.7% of the pattern width and the error in specimen tilt was 0.8°, were indexed with a 0.8° mean error in orientation. The same patterns, but 60 × 60 pixel in size, were indexed by the standard 2D Hough transform based approach with almost the same orientation accuracy. Optimal detection parameters in the Hough space were obtained by minimizing the orientation error. It was shown that if the error in detector geometry can be reduced to 0.1% in projection center and 0.1° in specimen tilt, the dictionary approach can retrieve a crystal orientation with a 0.2° accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
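The core of the dictionary approach is a best-match search of the experimental pattern against pre-simulated patterns; a common choice of similarity metric is the normalised dot product. The sketch below illustrates that lookup on random stand-in data; it is not the authors' code, and the actual implementation (pattern preprocessing, orientation refinement) is more involved.

```python
import numpy as np

def dictionary_index(exp_pattern, dictionary, orientations):
    """Return the orientation whose simulated pattern best matches exp_pattern.

    exp_pattern : (H, W) experimental EBSP
    dictionary  : (N, H, W) simulated EBSPs, one per candidate orientation
    orientations: (N, ...) candidate orientations (e.g. Euler angles)
    """
    d = dictionary.reshape(dictionary.shape[0], -1).astype(float)
    e = exp_pattern.ravel().astype(float)
    # Normalised dot product (cosine similarity) between experiment and each dictionary entry.
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    e /= np.linalg.norm(e)
    scores = d @ e
    best = int(np.argmax(scores))
    return orientations[best], scores[best]

# Toy usage with random data (stand-in for simulated patterns).
rng = np.random.default_rng(0)
dictionary = rng.random((1000, 25, 25))
eulers = rng.uniform(0, 2 * np.pi, (1000, 3))
exp = dictionary[123] + rng.normal(0, 0.1, (25, 25))   # noisy copy of entry 123
print(dictionary_index(exp, dictionary, eulers))
```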
EUV process establishment through litho and etch for N7 node
NASA Astrophysics Data System (ADS)
Kuwahara, Yuhei; Kawakami, Shinichiro; Kubota, Minoru; Matsunaga, Koichi; Nafus, Kathleen; Foubert, Philippe; Mao, Ming
2016-03-01
Extreme ultraviolet lithography (EUVL) technology is steadily reaching high volume manufacturing for the 16nm half pitch node and beyond. However, some challenges remain, for example scanner availability and resist performance (resolution, CD uniformity (CDU), LWR, etch behavior and so on). Advanced EUV patterning on the ASML NXE:3300/CLEAN TRACK LITHIUS Pro Z-EUV litho cluster has been launched at imec, allowing for finer pitch patterns for L/S and CH. Tokyo Electron Ltd. and imec are continuously collaborating to develop manufacturing-quality POR processes for the NXE:3300. TEL's technologies to enhance CDU, defectivity and LWR/LER can improve patterning performance. The patterning is characterized and optimized in both litho and etch for a more complete understanding of the final patterning performance. This paper reports on post-litho CDU improvement by litho process optimization and also post-etch LWR reduction by litho and etch process optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMillan, Kyle; Marleau, Peter; Brubaker, Erik
In coded aperture imaging, one of the most important factors determining the quality of reconstructed images is the choice of mask/aperture pattern. In many applications, uniformly redundant arrays (URAs) are widely accepted as the optimal mask pattern. Under ideal conditions (thin and highly opaque masks), URA patterns are mathematically constructed to provide artifact-free reconstruction; however, the number of URAs for a chosen number of mask elements is limited, and when highly penetrating particles such as fast neutrons and high-energy gamma-rays are being imaged, the optimum is seldom achieved. In this case more robust mask patterns that provide better reconstructed image quality may exist. Through the use of heuristic optimization methods and maximum likelihood expectation maximization (MLEM) image reconstruction, we show that for both point and extended neutron sources a random mask pattern can be optimized to provide better image quality than that of a URA.
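For reference, the multiplicative ML-EM update used in such reconstructions has a standard form; the sketch below applies it to a toy coded-aperture system matrix (the mask pattern and source are random placeholders, not the optimized patterns from this work).

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum-likelihood expectation-maximization reconstruction.

    A : (n_detector, n_source) system matrix encoding the mask pattern
    y : (n_detector,) measured counts
    Returns the estimated source distribution x of length n_source.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0) + eps          # sensitivity image (column sums)
    for _ in range(n_iter):
        proj = A @ x + eps              # forward projection of the current estimate
        x *= (A.T @ (y / proj)) / sens  # multiplicative ML-EM update
    return x

# Toy usage: random binary mask response matrix and a two-point source.
rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(200, 100)).astype(float)
x_true = np.zeros(100); x_true[[20, 70]] = [5.0, 3.0]
y = rng.poisson(A @ x_true).astype(float)
print(np.argsort(mlem(A, y))[-2:])      # indices of the two brightest reconstructed pixels
```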
Fundamental uncertainty limit for speckle displacement measurements.
Fischer, Andreas
2017-09-01
The basic metrological task in speckle photography is to quantify displacements of speckle patterns, allowing for instance the investigation of the mechanical load and modification of objects with rough surfaces. However, the fundamental limit of the measurement uncertainty due to photon shot noise is unknown. For this reason, the Cramér-Rao bound (CRB) is derived for speckle displacement measurements, representing the squared minimal achievable measurement uncertainty. As a result, the CRB for speckle patterns is only two times the CRB for an ideal point light source. Hence, speckle photography is an optimal measurement approach for contactless displacement measurements on rough surfaces. In agreement with a derivation from Heisenberg's uncertainty principle, the CRB depends on the number of detected photons and the diffraction limit of the imaging system described by the speckle size. The theoretical results are verified and validated, demonstrating the capability for displacement measurements with nanometer resolution.
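For context, the Cramér-Rao bound is the inverse of the Fisher information of the likelihood of the detected photon counts; consistent with the dependencies stated above, the shot-noise-limited displacement uncertainty then scales with the speckle size and the inverse square root of the photon number (the exact prefactor is derived in the paper):

```latex
\operatorname{Var}(\hat{s}) \;\ge\; \mathrm{CRB} = I(s)^{-1},
\qquad
I(s) = -\,\mathbb{E}\!\left[\frac{\partial^{2}\ln L(\mathbf{n}\,|\,s)}{\partial s^{2}}\right],
\qquad
\sigma_{\hat{s}} \;\gtrsim\; c\,\frac{d_{\mathrm{speckle}}}{\sqrt{N_{\mathrm{photons}}}}
```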
ImpulseDE: detection of differentially expressed genes in time series data using impulse models.
Sander, Jil; Schultze, Joachim L; Yosef, Nir
2017-03-01
Perturbations in the environment lead to distinctive gene expression changes within a cell. Observed over time, those variations can be characterized by single impulse-like progression patterns. ImpulseDE is an R package suited to capture these patterns in high throughput time series datasets. By fitting a representative impulse model to each gene, it reports differentially expressed genes across time points from a single or between two time courses from two experiments. To optimize running time, the code uses clustering and multi-threading. By applying ImpulseDE, we demonstrate its power to represent underlying biology of gene expression in microarray and RNA-Seq data. ImpulseDE is available on Bioconductor (https://bioconductor.org/packages/ImpulseDE/). niryosef@berkeley.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Generation-3 programmable array microscope (PAM) with digital micro-mirror device (DMD)
NASA Astrophysics Data System (ADS)
De Beule, Pieter A. A.; de Vries, Anthony H. B.; Arndt-Jovin, Donna J.; Jovin, Thomas M.
2011-03-01
We report progress on the construction of an optical sectioning programmable array microscope (PAM) implemented with a digital micro-mirror device (DMD) spatial light modulator (SLM) utilized for both fluorescence illumination and detection. The introduction of binary intensity modulation at the focal plane of a microscope objective in a computer-controlled pixelated mode allows the recovery of an optically sectioned image. Illumination patterns can be changed very quickly, in contrast to static Nipkow disk or aperture correlation implementations, thereby creating an optical system that can be optimized to the specimen in a convenient manner, e.g. for patterned photobleaching, photobleaching reduction, or spatial superresolution. We present a third generation (Gen-3) dual path PAM module incorporating the 25 kHz binary frame rate TI 1080p DMD and a newly developed optical system that offers diffraction limited imaging with compensation of tilt angle distortion.
Rumore, Jillian Leigh; Tschetter, Lorelee; Nadon, Celine
2016-05-01
The lack of pattern diversity among pulsed-field gel electrophoresis (PFGE) profiles for Escherichia coli O157:H7 in Canada does not consistently provide optimal discrimination; therefore, differentiating temporally and/or geographically associated sporadic cases from potential outbreak cases can at times be difficult, impeding investigations. To address this limitation, DNA sequence-based methods such as multilocus variable-number tandem-repeat analysis (MLVA) have been explored. To assess the performance of MLVA as a supplemental method to PFGE from the Canadian perspective, a retrospective analysis of all E. coli O157:H7 isolated in Canada from January 2008 to December 2012 (inclusive) was conducted. A total of 2285 E. coli O157:H7 isolates and 63 clusters of cases (by PFGE) were selected for the study. Based on the qualitative analysis, the addition of MLVA improved the categorization of cases for 60% of clusters, and no change was observed for ∼40% of clusters investigated. In the latter situations, MLVA serves to confirm PFGE results but may not add further information per se. The findings of this study demonstrate that MLVA data, when used in combination with PFGE-based analyses, provide additional resolution in the detection of clusters lacking PFGE diversity and demonstrate good epidemiological concordance. In addition, MLVA is able to identify cluster-associated isolates with variant PFGE pattern combinations that may have been previously missed by PFGE alone. Optimal laboratory surveillance in Canada is achieved with the application of PFGE and MLVA in tandem for routine surveillance, cluster detection, and outbreak response.
Zyout, Imad; Czajkowska, Joanna; Grzegorzek, Marcin
2015-12-01
The high number of false positives and the resulting number of avoidable breast biopsies are the major problems faced by current mammography Computer Aided Detection (CAD) systems. False positive reduction is a requirement not only for mass but also for calcification CAD systems, which are currently deployed for clinical use. This paper tackles two problems related to reducing the number of false positives in the detection of all lesions and of masses, respectively. Firstly, textural patterns of breast tissue have been analyzed using several multi-scale textural descriptors based on wavelets and the gray level co-occurrence matrix. The second problem addressed in this paper is parameter selection and performance optimization. For this, we adopt a model selection procedure based on Particle Swarm Optimization (PSO) for selecting the most discriminative textural features and for strengthening the generalization capacity of the supervised learning stage, based on a Support Vector Machine (SVM) classifier. For evaluating the proposed methods, two sets of suspicious mammogram regions have been used. The first one, obtained from the Digital Database for Screening Mammography (DDSM), contains 1494 regions (1000 normal and 494 abnormal samples). The second set of suspicious regions was obtained from the database of the Mammographic Image Analysis Society (mini-MIAS) and contains 315 (207 normal and 108 abnormal) samples. Results from both datasets demonstrate the efficiency of using PSO-based model selection for optimizing both the classifier hyper-parameters and the feature selection. Furthermore, the obtained results indicate the promising performance of the proposed textural features and, more specifically, those based on the co-occurrence matrix of the wavelet image representation. Copyright © 2015 Elsevier Ltd. All rights reserved.
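A minimal sketch of PSO-driven model selection in the spirit described above: particle positions encode the SVM hyper-parameters (log10 C, log10 gamma) plus a per-feature score whose sign selects the feature, and the fitness is cross-validated accuracy. The data here are synthetic and the PSO constants are common defaults, not the paper's settings or its wavelet/GLCM features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=0)

def fitness(position):
    """Position = [log10 C, log10 gamma, one score per feature (>0 keeps the feature)]."""
    mask = position[2:] > 0
    if not mask.any():
        return 0.0
    clf = SVC(C=10.0 ** position[0], gamma=10.0 ** position[1])
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

dim, n_particles = 2 + X.shape[1], 15
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(20):                      # PSO iterations
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best CV accuracy:", pbest_fit.max(), "features kept:", int((gbest[2:] > 0).sum()))
```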
Optimization and Characterization of a Novel Self Powered Solid State Neutron Detector
NASA Astrophysics Data System (ADS)
Clinton, Justin
There is a strong interest in detecting both the diversion of special nuclear material (SNM) from legitimate, peaceful purposes and the transport of illicit SNM across domestic and international borders and ports. A simple solid-state detector employs a planar solar-cell type p-n junction and a thin conversion layer that converts incident neutrons into detectable charged particles, such as protons, alpha-particles, and heavier ions. Although simple planar devices can act as highly portable, low cost detectors, they have historically been limited to relatively low detection efficiencies; ˜10% and ˜0.2% for thermal and fast detectors, respectively. To increase intrinsic detection efficiency, the incorporation of 3D microstructures into p-i-n silicon devices was proposed. In this research, a combination of existing and new types of detector microstructures were investigated; Monte Carlo models, based on analytical calculations, were constructed and characterized using the GEANT4 simulation toolkit. The simulation output revealed that an array of etched hexagonal holes arranged in a honeycomb pattern and filled with either enriched (99% 10B) boron or parylene resulted in the highest intrinsic detection efficiencies of 48% and 0.88% for thermal and fast neutrons, respectively. The optimal parameters corresponding to each model were utilized as the basis for the fabrication of several prototype detectors. A calibrated 252Cf spontaneous fission source was utilized to generate fast neutrons, while thermal neutrons were created by placing the 252Cf in an HDPE housing designed and optimized using the MCNP simulation software. Upon construction, thermal neutron calibration was performed via activation analysis of gold foils and measurements from a 6Li loaded glass scintillator. Experimental testing of the prototype detectors resulted in maximum intrinsic efficiencies of 4.5 and 0.12% for the thermal and fast devices, respectively. The prototype thermal device was filled with natural (19% 10B) boron; scaling the response to 99% 10B enriched boron resulted in an intrinsic efficiency of 22.5%, one of the highest results in the literature. A comparison of simulated and experimental detector responses demonstrated a high degree of correlation, validating the conceptual models.
Tang, Xianyan; Geater, Alan; McNeil, Edward; Deng, Qiuyun; Dong, Aihu; Zhong, Ge
2017-04-04
Outbreaks of measles re-emerged in Guangxi province during 2013-2014, where measles again became a major public health concern. A better understanding of the patterns of measles cases would help in identifying high-risk areas and periods for optimizing preventive strategies, yet these patterns remain largely unknown. Thus, this study aimed to determine the patterns of measles clusters in space, time and space-time at the county level over the period 2004-2014 in Guangxi. Annual data on measles cases and population sizes for each county were obtained from Guangxi CDC and the Guangxi Bureau of Statistics, respectively. Epidemic curves and Kulldorff's temporal scan statistics were used to identify seasonal peaks and high-risk periods. Tango's flexible scan statistics were implemented to determine irregular spatial clusters. Spatio-temporal clusters in elliptical cylinder shapes were detected by Kulldorff's scan statistics. The population attributable risk percent (PAR%) of children aged ≤24 months was used to identify regions with a heavy burden of measles. Seasonal peaks occurred between April and June, and a temporal measles cluster was detected in 2014. Spatial clusters were identified in West, Southwest and North Central Guangxi. Three phases of spatio-temporal clusters with high relative risk were detected: Central Guangxi during 2004-2005, Midwest Guangxi in 2007, and West and Southwest Guangxi during 2013-2014. Regions with high PAR% were mainly clustered in West, Southwest, North and Central Guangxi. A temporal uptrend of measles incidence existed in Guangxi between 2010 and 2014, while a downtrend existed during 2004-2009. The hotspots shifted from Central to West and Southwest Guangxi, regions overburdened with measles. Thus, intensifying surveillance of the timeliness and completeness of routine vaccination and implementing supplementary immunization activities for measles should be prioritized in these regions.
Network anomaly detection system with optimized DS evidence theory.
Liu, Yuan; Wang, Xiaofeng; Liu, Kaiyu
2014-01-01
Network anomaly detection has attracted increasing attention with the rapid development of computer networks. Some researchers have utilized fusion methods and DS evidence theory for network anomaly detection but achieved low performance, and they did not consider the complicated and varied nature of network features. To achieve a high detection rate, we present a novel network anomaly detection system with optimized Dempster-Shafer evidence theory (ODS) and a regression basic probability assignment (RBPA) function. In this model, we add a weight for each sensor to optimize DS evidence theory according to its previous prediction accuracy, and RBPA employs each sensor's regression ability to address complex networks. Through four kinds of experiments, we find that our novel network anomaly detection model has a better detection rate, and that both the RBPA and ODS optimization methods improve system performance significantly.
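The abstract does not spell out the weighting scheme, so the sketch below uses classical Shafer discounting to encode each sensor's past prediction accuracy as a reliability weight before combining the basic probability assignments with Dempster's rule; the sensor values and weights are illustrative.

```python
THETA = frozenset({'normal', 'anomaly'})   # frame of discernment

def discount(m, w):
    """Shafer discounting: scale beliefs by sensor reliability w, move the rest to Theta."""
    out = {A: w * v for A, v in m.items() if A != THETA}
    out[THETA] = 1.0 - w + w * m.get(THETA, 0.0)
    return out

def dempster(m1, m2):
    """Dempster's rule of combination for two basic probability assignments."""
    combined, conflict = {}, 0.0
    for A, v1 in m1.items():
        for B, v2 in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Two hypothetical sensors; the weights reflect each sensor's past prediction accuracy.
m_sensor1 = {frozenset({'anomaly'}): 0.7, frozenset({'normal'}): 0.1, THETA: 0.2}
m_sensor2 = {frozenset({'anomaly'}): 0.4, frozenset({'normal'}): 0.4, THETA: 0.2}
fused = dempster(discount(m_sensor1, 0.9), discount(m_sensor2, 0.6))
print(fused)
```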
Variability-aware double-patterning layout optimization for analog circuits
NASA Astrophysics Data System (ADS)
Li, Yongfu; Perez, Valerio; Tripathi, Vikas; Lee, Zhao Chuan; Tseng, I.-Lun; Ong, Jonathan Yoong Seang
2018-03-01
The semiconductor industry has adopted multi-patterning techniques to manage the delay in the extreme ultraviolet lithography technology. During the design process of double-patterning lithography layout masks, two polygons are assigned to different masks if their spacing is less than the minimum printable spacing. With these additional design constraints, it is very difficult to find experienced layout-design engineers who have a good understanding of the circuit to manually optimize the mask layers in order to minimize color-induced circuit variations. In this work, we investigate the impact of double-patterning lithography on analog circuits and provide quantitative analysis for our designers to select the optimal mask to minimize the circuit's mismatch. To overcome the problem and improve the turn-around time, we proposed our smart "anchoring" placement technique to optimize mask decomposition for analog circuits. We have developed a software prototype that is capable of providing anchoring markers in the layout, allowing industry standard tools to perform automated color decomposition process.
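The decomposition constraint described above (polygons closer than the minimum printable spacing must land on different masks) amounts to two-coloring a conflict graph. The sketch below shows that check on point-like polygon stand-ins; it ignores the paper's anchoring markers and mismatch-aware mask selection, and the geometry test is deliberately simplistic.

```python
from collections import deque

def decompose(polygon_centers, min_space):
    """Greedy two-coloring of the conflict graph: polygons closer than the minimum
    printable spacing must go on different masks. Returns a color per polygon, or
    None if an odd cycle makes two-mask decomposition impossible without a fix."""
    n = len(polygon_centers)

    def too_close(a, b):
        (x1, y1), (x2, y2) = polygon_centers[a], polygon_centers[b]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < min_space

    adj = [[j for j in range(n) if j != i and too_close(i, j)] for i in range(n)]
    color = [None] * n
    for s in range(n):
        if color[s] is not None:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None          # native conflict: needs anchoring/stitching or a layout fix
    return color

print(decompose([(0, 0), (1, 0), (2, 0), (0.5, 0.9)], min_space=1.2))
```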
Structured illumination diffuse optical tomography for noninvasive functional neuroimaging in mice.
Reisman, Matthew D; Markow, Zachary E; Bauer, Adam Q; Culver, Joseph P
2017-04-01
Optical intrinsic signal (OIS) imaging has been a powerful tool for capturing functional brain hemodynamics in rodents. Recent wide field-of-view implementations of OIS have provided efficient maps of functional connectivity from spontaneous brain activity in mice. However, OIS requires scalp retraction and is limited to superficial cortical tissues. Diffuse optical tomography (DOT) techniques provide noninvasive imaging, but previous DOT systems for rodent neuroimaging have been limited either by sparse spatial sampling or by slow speed. Here, we develop a DOT system with asymmetric source-detector sampling that combines the high-density spatial sampling (0.4 mm) detection of a scientific complementary metal-oxide-semiconductor camera with the rapid (2 Hz) imaging of a few ([Formula: see text]) structured illumination (SI) patterns. Analysis techniques are developed to take advantage of the system's flexibility and optimize trade-offs among spatial sampling, imaging speed, and signal-to-noise ratio. An effective source-detector separation for the SI patterns was developed and compared with light intensity for a quantitative assessment of data quality. The light fall-off versus effective distance was also used for in situ empirical optimization of our light model. We demonstrated the feasibility of this technique by noninvasively mapping the functional response in the somatosensory cortex of the mouse following electrical stimulation of the forepaw.
Image-enhanced endoscopy with I-scan technology for the evaluation of duodenal villous patterns.
Cammarota, Giovanni; Ianiro, Gianluca; Sparano, Lucia; La Mura, Rossella; Ricci, Riccardo; Larocca, Luigi M; Landolfi, Raffaele; Gasbarrini, Antonio
2013-05-01
I-scan technology is a newly developed endoscopic tool that works in real time and utilizes a digital contrast method to enhance the endoscopic image. We performed a feasibility study aimed at determining the diagnostic accuracy of i-scan technology for the evaluation of duodenal villous patterns, with histology as the reference standard. In this prospective, single-center, open study, patients undergoing upper endoscopy for a histological evaluation of the duodenal mucosa were enrolled. All patients underwent upper endoscopy using high-resolution view in association with i-scan technology. During endoscopy, duodenal villous patterns were evaluated and classified as normal, partial villous atrophy, or marked villous atrophy. Results were then compared with histology. One hundred fifteen subjects were recruited in this study. The endoscopist was able to find marked villous atrophy of the duodenum in 12 subjects, partial villous atrophy in 25, and normal villi in the remaining 78 individuals. The i-scan system demonstrated high accuracy (100%) in the detection of marked villous atrophy patterns. I-scan technology showed somewhat lower accuracy in determining partial villous atrophy or normal villous patterns (90% for both). Image-enhancing endoscopic technology allows a clear visualization of villous patterns in the duodenum. By switching from the standard to the i-scan view, it is possible to optimize the accuracy of endoscopy in recognizing villous alterations in subjects undergoing endoscopic evaluation.
Pneumothorax detection in chest radiographs using local and global texture signatures
NASA Astrophysics Data System (ADS)
Geva, Ofer; Zimmerman-Moreno, Gali; Lieberman, Sivan; Konen, Eli; Greenspan, Hayit
2015-03-01
A novel framework for automatic detection of pneumothorax abnormality in chest radiographs is presented. The suggested method is based on a texture analysis approach combined with supervised learning techniques. The proposed framework consists of two main steps: first, a texture analysis process is performed for detection of local abnormalities. Labeled image patches are extracted in the texture analysis procedure, following which the local analysis values are incorporated into a novel global image representation. The global representation is used for training and detection of the abnormality at the image level. The presented global representation is designed based on the distinctive shape of the lung, taking into account the characteristics of typical pneumothorax abnormalities. A supervised learning process was performed on both the local and global data, leading to a trained detection system. The system was tested on a dataset of 108 upright chest radiographs. Several state-of-the-art texture feature sets were experimented with (Local Binary Patterns, Maximum Response filters). The optimal configuration yielded a sensitivity of 81% with a specificity of 87%. The results of the evaluation are promising, establishing the current framework as a basis for additional improvements and extensions.
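As an illustration of the local step described above, the sketch below extracts uniform-LBP histograms from fixed-size patches of a radiograph (here a random array); the patch size and LBP parameters are assumptions, and the lung-shape-aware global pooling and the supervised classifier are omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def patch_lbp_histograms(image, patch=32, P=8, R=1.0):
    """Divide a radiograph into patches and describe each with a uniform-LBP histogram."""
    lbp = local_binary_pattern(image, P, R, method='uniform')
    n_bins = P + 2                      # number of 'uniform' LBP codes
    feats = []
    for r in range(0, image.shape[0] - patch + 1, patch):
        for c in range(0, image.shape[1] - patch + 1, patch):
            block = lbp[r:r + patch, c:c + patch]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            feats.append(hist)
    return np.array(feats)              # one row per patch -> input to a local classifier

# Toy usage on a random "radiograph"; the per-patch scores would then be pooled into
# a global, lung-shape-aware representation as described above.
img = (np.random.default_rng(0).random((256, 256)) * 255).astype(np.uint8)
print(patch_lbp_histograms(img).shape)
```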
Proescholdt, Martin A; Faltermeier, Rupert; Bele, Sylvia; Brawanski, Alexander
2017-01-01
Multimodal brain monitoring has been utilized to optimize treatment of patients with critical neurological diseases. However, the amount of data requires an integrative tool set to unmask pathological events in a timely fashion. Recently we have introduced a mathematical model allowing the simulation of pathophysiological conditions such as reduced intracranial compliance and impaired autoregulation. Utilizing a mathematical tool set called selected correlation analysis (sca), correlation patterns, which indicate impaired autoregulation, can be detected in patient data sets (scp). In this study we compared the results of the sca with the pressure reactivity index (PRx), an established marker for impaired autoregulation. Mean PRx values were significantly higher in time segments identified as scp compared to segments showing no selected correlations (nsc). The sca based approach predicted cerebral autoregulation failure with a sensitivity of 78.8% and a specificity of 62.6%. Autoregulation failure, as detected by the results of both analysis methods, was significantly correlated with poor outcome. Sca of brain monitoring data detects impaired autoregulation with high sensitivity and sufficient specificity. Since the sca approach allows the simultaneous detection of both major pathological conditions, disturbed autoregulation and reduced compliance, it may become a useful analysis tool for brain multimodal monitoring data.
Mouse epileptic seizure detection with multiple EEG features and simple thresholding technique
NASA Astrophysics Data System (ADS)
Tieng, Quang M.; Anbazhagan, Ashwin; Chen, Min; Reutens, David C.
2017-12-01
Objective. Epilepsy is a common neurological disorder characterized by recurrent, unprovoked seizures. The search for new treatments for seizures and epilepsy relies upon studies in animal models of epilepsy. To capture data on seizures, many applications require prolonged electroencephalography (EEG) with recordings that generate voluminous data. The desire for efficient evaluation of these recordings motivates the development of automated seizure detection algorithms. Approach. A new seizure detection method is proposed, based on multiple features and a simple thresholding technique. The features are derived from chaos theory, information theory and the power spectrum of EEG recordings and optimally exploit both linear and nonlinear characteristics of EEG data. Main result. The proposed method was tested with real EEG data from an experimental mouse model of epilepsy and distinguished seizures from other patterns with high sensitivity and specificity. Significance. The proposed approach introduces two new features: negative logarithm of adaptive correlation integral and power spectral coherence ratio. The combination of these new features with two previously described features, entropy and phase coherence, improved seizure detection accuracy significantly. Negative logarithm of adaptive correlation integral can also be used to compute the duration of automatically detected seizures.
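A toy illustration of per-window feature extraction and simple thresholding in the spirit of the approach above, using only two easily computed features (spectral entropy and a band-power ratio); the thresholds, window length and synthetic signal are illustrative, and the paper's adaptive correlation integral and phase coherence features are not reproduced here.

```python
import numpy as np

def window_features(sig, fs=256):
    """Spectral (Shannon) entropy and the fraction of power in the 2-40 Hz band."""
    psd = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, 1.0 / fs)
    p = psd / psd.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))
    band = psd[(freqs >= 2) & (freqs <= 40)].sum() / psd.sum()
    return entropy, band

def detect(eeg, fs=256, win=2.0, ent_thr=4.0, band_thr=0.8):
    """Per-window thresholding; thresholds are illustrative, not from the paper."""
    n = int(win * fs)
    flags = []
    for start in range(0, eeg.size - n + 1, n):
        ent, band = window_features(eeg[start:start + n], fs)
        flags.append(ent < ent_thr and band > band_thr)   # rhythmic, concentrated power
    return np.array(flags)

# Toy usage: white noise with a rhythmic 6 Hz "seizure" burst between 10 s and 20 s.
t = np.arange(0, 30, 1 / 256.0)
eeg = np.random.default_rng(0).normal(0, 1, t.size)
eeg[int(10 * 256):int(20 * 256)] += 5 * np.sin(2 * np.pi * 6 * t[:int(10 * 256)])
print(detect(eeg).astype(int))   # windows flagged as seizure
```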
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Zhenhuan; Boyuka, David; Zou, X
The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induces heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while also limiting the performance impact on running applications to a reasonable level.
Optimal Refueling Pattern Search for a CANDU Reactor Using a Genetic Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quang Binh, DO; Gyuhong, ROH; Hangbok, CHOI
2006-07-01
This paper presents the results from the application of genetic algorithms to a refueling optimization of a Canada deuterium uranium (CANDU) reactor. This work aims at making a mathematical model of the refueling optimization problem, including the objective function and constraints, and developing a method based on genetic algorithms to solve the problem. The model of the optimization problem and the proposed method comply with the key features of the refueling strategy of the CANDU reactor, which adopts an on-power refueling operation. In this study, a genetic algorithm combined with an elitism strategy was used to automatically search for the refueling patterns. The objective of the optimization was to maximize the discharge burn-up of the refueling bundles, minimize the maximum channel power, or minimize the maximum change in the zone controller unit (ZCU) water levels. A combination of these objectives was also investigated. The constraints include the discharge burn-up, maximum channel power, maximum bundle power, channel power peaking factor and the ZCU water level. A refueling pattern that represents the refueling rate and channels was coded by a one-dimensional binary chromosome, which is a string of binary numbers 0 and 1. A computer program was developed in FORTRAN 90 running on an HP 9000 workstation to conduct the search for the optimal refueling patterns for a CANDU reactor at the equilibrium state. The results showed that it was possible to apply genetic algorithms to automatically search for the refueling channels of the CANDU reactor. The optimal refueling patterns were compared with the solutions obtained from the AUTOREFUEL program and the results were consistent with each other. (authors)
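A bare-bones sketch of a genetic algorithm with elitism operating on a one-dimensional binary chromosome, as described above; the fitness function is a placeholder (it merely rewards selecting about eight well-spread channels) standing in for the reactor-physics objective and constraints.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS, POP, GENS, ELITE = 40, 60, 100, 2   # illustrative sizes

def fitness(chromosome):
    """Placeholder objective: reward selecting ~8 channels that are well spread along
    the core (stands in for discharge burn-up and channel-power terms)."""
    idx = np.flatnonzero(chromosome)
    if idx.size == 0:
        return -1e9
    spread = np.min(np.diff(idx)) if idx.size > 1 else 0
    return spread - abs(idx.size - 8)

pop = rng.integers(0, 2, size=(POP, N_CHANNELS))
for _ in range(GENS):
    fit = np.array([fitness(c) for c in pop])
    elite = pop[np.argsort(fit)[-ELITE:]]                 # elitism: carry the best forward
    # Tournament selection, single-point crossover, bit-flip mutation.
    i, j = rng.integers(0, POP, (2, POP - ELITE))
    parents = pop[np.where(fit[i] > fit[j], i, j)]
    mates = parents[rng.permutation(POP - ELITE)]
    cut = rng.integers(1, N_CHANNELS, POP - ELITE)
    children = np.where(np.arange(N_CHANNELS) < cut[:, None], parents, mates)
    flip = rng.random(children.shape) < 0.01
    children = np.where(flip, 1 - children, children)
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(c) for c in pop])]
print("selected refueling channels:", np.flatnonzero(best))
```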
Pierzyńska-Mach, Agnieszka; Szczurek, Aleksander; Cella Zanacchi, Francesca; Pennacchietti, Francesca; Drukała, Justyna; Diaspro, Alberto; Cremer, Christoph; Darzynkiewicz, Zbigniew; Dobrucki, Jurek W
2016-01-01
Unscheduled DNA synthesis (UDS) is the final stage of the process of repair of DNA lesions induced by UVC. We detected UDS using a DNA precursor, 5-ethynyl-2'-deoxyuridine (EdU). Using wide-field, confocal and super-resolution fluorescence microscopy and normal human fibroblasts, derived from healthy subjects, we demonstrate that the sub-nuclear pattern of UDS detected via incorporation of EdU is different from that when BrdU is used as DNA precursor. EdU incorporation occurs evenly throughout chromatin, as opposed to just a few small and large repair foci detected by BrdU. We attribute this difference to the fact that BrdU antibody is of much larger size than EdU, and its accessibility to the incorporated precursor requires the presence of denatured sections of DNA. It appears that under the standard conditions of immunocytochemical detection of BrdU only fragments of DNA of various length are being denatured. We argue that, compared with BrdU, the UDS pattern visualized by EdU constitutes a more faithful representation of sub-nuclear distribution of the final stage of nucleotide excision repair induced by UVC. Using the optimized integrated EdU detection procedure we also measured the relative amount of the DNA precursor incorporated by cells during UDS following exposure to various doses of UVC. Also described is the high degree of heterogeneity in terms of the UVC-induced EdU incorporation per cell, presumably reflecting various DNA repair efficiencies or differences in the level of endogenous dT competing with EdU within a population of normal human fibroblasts.
2016-01-01
The objectives of the study were to (1) investigate the potential of using monopolar psychophysical detection thresholds for estimating spatial selectivity of neural excitation with cochlear implants and to (2) examine the effect of site removal on speech recognition based on the threshold measure. Detection thresholds were measured in Cochlear Nucleus® device users using monopolar stimulation for pulse trains that were of (a) low rate and long duration, (b) high rate and short duration, and (c) high rate and long duration. Spatial selectivity of neural excitation was estimated by a forward-masking paradigm, where the probe threshold elevation in the presence of a forward masker was measured as a function of masker-probe separation. The strength of the correlation between the monopolar thresholds and the slopes of the masking patterns systematically reduced as neural response of the threshold stimulus involved interpulse interactions (refractoriness and sub-threshold adaptation), and spike-rate adaptation. Detection threshold for the low-rate stimulus most strongly correlated with the spread of forward masking patterns and the correlation reduced for long and high rate pulse trains. The low-rate thresholds were then measured for all electrodes across the array for each subject. Subsequently, speech recognition was tested with experimental maps that deactivated five stimulation sites with the highest thresholds and five randomly chosen ones. Performance with deactivating the high-threshold sites was better than performance with the subjects’ clinical map used every day with all electrodes active, in both quiet and background noise. Performance with random deactivation was on average poorer than that with the clinical map but the difference was not significant. These results suggested that the monopolar low-rate thresholds are related to the spatial neural excitation patterns in cochlear implant users and can be used to select sites for more optimal speech recognition performance. PMID:27798658
Pailler, Emma; Adam, Julien; Barthélémy, Amélie; Oulhen, Marianne; Auger, Nathalie; Valent, Alexander; Borget, Isabelle; Planchard, David; Taylor, Melissa; André, Fabrice; Soria, Jean Charles; Vielh, Philippe; Besse, Benjamin; Farace, Françoise
2013-06-20
The diagnostic test for ALK rearrangement in non-small-cell lung cancer (NSCLC) for crizotinib treatment is currently done on tumor biopsies or fine-needle aspirations. We evaluated whether ALK rearrangement diagnosis could be performed by using circulating tumor cells (CTCs). The presence of an ALK rearrangement was examined in CTCs of 18 ALK-positive and 14 ALK-negative patients by using a filtration enrichment technique and filter-adapted fluorescent in situ hybridization (FA-FISH), a FISH method optimized for filters. ALK-rearrangement patterns were determined in CTCs and compared with those present in tumor biopsies. ALK-rearranged CTCs and tumor specimens were characterized for epithelial (cytokeratins, E-cadherin) and mesenchymal (vimentin, N-cadherin) marker expression. ALK-rearranged CTCs were monitored in five patients treated with crizotinib. All ALK-positive patients had four or more ALK-rearranged CTCs per 1 mL of blood (median, nine CTCs per 1 mL; range, four to 34 CTCs per 1 mL). No or only one ALK-rearranged CTC (median, one per 1 mL; range, zero to one per 1 mL) was detected in ALK-negative patients. ALK-rearranged CTCs harbored a unique (3'5') split pattern, and heterogeneous patterns (3'5', only 3') of splits were present in tumors. ALK-rearranged CTCs expressed a mesenchymal phenotype contrasting with heterogeneous epithelial and mesenchymal marker expressions in tumors. Variations in ALK-rearranged CTC levels were detected in patients being treated with crizotinib. ALK rearrangement can be detected in CTCs of patients with ALK-positive NSCLC by using a filtration technique and FA-FISH, enabling both diagnostic testing and monitoring of crizotinib treatment. Our results suggest that CTCs harboring a unique ALK rearrangement and mesenchymal phenotype may arise from clonal selection of tumor cells that have acquired the potential to drive metastatic progression of ALK-positive NSCLC.
A novel method for repeatedly generating speckle patterns used in digital image correlation
NASA Astrophysics Data System (ADS)
Zhang, Juan; Sweedy, Ahmed; Gitzhofer, François; Baroud, Gamal
2018-01-01
Speckle patterns play a key role in Digital Image Correlation (DIC) measurement, and generating an optimal speckle pattern has been the goal for decades now. The usual method of generating a speckle pattern is by manually spraying the paint on the specimen. However, this makes it difficult to reproduce the optimal pattern for maintaining identical testing conditions and achieving consistent DIC results. This study proposed and evaluated a novel method using an atomization system to repeatedly generate speckle patterns. To verify the repeatability of the speckle patterns generated by this system, simulation and experimental studies were systematically performed. The results from both studies showed that the speckle patterns and, accordingly, the DIC measurements become highly accurate and repeatable using the proposed atomization system.
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2017-06-01
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM, and includes both the regularization strength that controls overall smoothness as well as directional weights that permits control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF)—each carrying dependencies on TCM and regularization. For the single location optimization, the local detectability index (d‧) of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP using a minimum variance criterion yielded a worse task-based performance compared to an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with the maximum improvement in d‧ of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction and strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
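Task-based frameworks of this kind typically compute the detectability index from the local MTF, NPS and task function; a commonly used form is the non-prewhitening observer model below (the specific observer model used in the paper may differ):

```latex
{d'}^{2}_{\mathrm{NPW}} \;=\;
\frac{\left[\displaystyle\iint \left|W_{\mathrm{task}}(f_x,f_y)\right|^{2}
      \mathrm{MTF}^{2}(f_x,f_y)\, \mathrm{d}f_x\, \mathrm{d}f_y\right]^{2}}
     {\displaystyle\iint \left|W_{\mathrm{task}}(f_x,f_y)\right|^{2}
      \mathrm{MTF}^{2}(f_x,f_y)\,\mathrm{NPS}(f_x,f_y)\, \mathrm{d}f_x\, \mathrm{d}f_y}
```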
Jamema, Swamidas V; Kirisits, Christian; Mahantshetty, Umesh; Trnkova, Petra; Deshpande, Deepak D; Shrivastava, Shyam K; Pötter, Richard
2010-12-01
Comparison of inverse planning with the standard clinical plan and with the manually optimized plan based on dose-volume parameters and loading patterns. Twenty-eight patients who underwent MRI-based HDR brachytherapy for cervix cancer were selected for this study. Three plans were calculated for each patient: (1) standard loading, (2) manually optimized, and (3) inverse optimized. Dosimetric outcomes from these plans were compared based on dose-volume parameters. The ratio of Total Reference Air Kerma of ovoid to tandem (TRAK(O/T)) was used to compare the loading patterns. The volume of HR CTV ranged from 9-68 cc with a mean of 41(±16.2) cc. Differences in mean V100 among the standard, manually optimized and inverse plans were not significant (p=0.35, 0.38, 0.4). Dose to bladder (7.8±1.6 Gy) and sigmoid (5.6±1.4 Gy) was high for standard plans; manual optimization reduced the dose to bladder (7.1±1.7 Gy, p=0.006) and sigmoid (4.5±1.0 Gy, p=0.005) without compromising the HR CTV coverage. The inverse plan resulted in a significant reduction in bladder dose (6.5±1.4 Gy, p=0.002). TRAK was found to be 0.49(±0.02), 0.44(±0.04) and 0.40(±0.04) cGy m(-2) for the standard loading, manually optimized and inverse plans, respectively. It was observed that TRAK(O/T) was 0.82(±0.05), 1.7(±1.04) and 1.41(±0.93) for the standard loading, manually optimized and inverse plans, respectively, while this ratio was 1 for the traditional loading pattern. Inverse planning offers good sparing of critical structures without compromising the target coverage. The average loading pattern of the whole patient cohort deviates from the standard Fletcher loading pattern. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Dai, Shengfa; Wei, Qingguo
2017-01-01
Common spatial pattern algorithm is widely used to estimate spatial filters in motor imagery based brain-computer interfaces. However, use of a large number of channels makes common spatial pattern prone to over-fitting and makes the classification of electroencephalographic signals time-consuming. To overcome these problems, it is necessary to choose an optimal subset of the whole channel set to save computational time and improve the classification accuracy. In this paper, a novel method based on the backtracking search optimization algorithm is proposed to automatically select the optimal channel set for common spatial pattern. Each individual in the population is an N-dimensional vector, with each component representing one channel. A population of binary codes is generated randomly at the beginning, and channels are then selected according to the evolution of these codes. The number and positions of 1's in a code denote the number and positions of the chosen channels. The objective function of the backtracking search optimization algorithm is defined as a combination of the classification error rate and the relative number of channels. Experimental results suggest that higher classification accuracy can be achieved with much fewer channels compared to standard common spatial pattern with all channels.
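A minimal sketch of the channel-selection encoding and objective is given below, assuming a binary code per individual and a weighted combination of error rate and relative channel count; `evaluate_error` is a hypothetical stand-in for the CSP-plus-classifier evaluation, not code from the paper.

```python
import numpy as np

N_CHANNELS = 64        # total number of EEG channels (assumed)
LAMBDA = 0.2           # weight of the relative-channel-count term (assumed)

def evaluate_error(selected_channels):
    """Hypothetical placeholder: train CSP + a classifier on the selected
    channels and return a cross-validated error rate in [0, 1]."""
    rng = np.random.default_rng(len(selected_channels))
    return float(rng.uniform(0.1, 0.4))

def fitness(code):
    """Objective = classification error + LAMBDA * relative number of channels.
    Lower is better; positions of 1's in the binary code are the chosen channels."""
    selected = np.flatnonzero(code)
    if selected.size == 0:
        return np.inf
    return evaluate_error(selected) + LAMBDA * selected.size / N_CHANNELS

# Random initial population of binary codes, as in backtracking search optimization.
rng = np.random.default_rng(0)
population = rng.integers(0, 2, size=(30, N_CHANNELS))
scores = np.array([fitness(ind) for ind in population])
print("best initial individual uses", population[scores.argmin()].sum(), "channels")
```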
Pattern formations and optimal packing.
Mityushev, Vladimir
2016-04-01
Patterns of different symmetries may arise after solving reaction-diffusion equations. Hexagonal arrays, layers and their perturbations are observed in different models after numerical solution of the corresponding initial-boundary value problems. We demonstrate an intimate connection between pattern formation and optimal random packing on the plane. The main study is based on the following two points. First, the diffusive flux in reaction-diffusion systems is approximated by piecewise linear functions in the framework of structural approximations. This leads to a discrete network approximation of the considered continuous problem. Second, the discrete energy minimization yields optimal random packing of the domains (disks) in the representative cell. Therefore, the general problem of pattern formation based on the reaction-diffusion equations is reduced to the geometric problem of random packing. It is demonstrated that all random packings can be divided into classes associated with classes of isomorphic graphs obtained from the Delaunay triangulation. The unique optimal solution is constructed in each class of random packings. If the number of disks per representative cell is finite, the number of classes of isomorphic graphs, and hence the number of optimal packings, is also finite. Copyright © 2016 Elsevier Inc. All rights reserved.
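As a rough illustration of the reduction to a geometric packing problem, the sketch below generates a random configuration of disk centres in a unit cell, builds its Delaunay triangulation, and summarises the resulting graph by its degree sequence; the degree sequence is only a crude isomorphism invariant used here for illustration, not the full graph-isomorphism classification of the paper.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
centres = rng.random((40, 2))          # disk centres in the unit cell (toy data)

tri = Delaunay(centres)

# Collect the edge set of the Delaunay graph from the triangle list.
edges = set()
for a, b, c in tri.simplices:
    edges.update({tuple(sorted(e)) for e in ((a, b), (b, c), (a, c))})

# Degree sequence as a simple summary of the graph class.
degree = np.zeros(len(centres), dtype=int)
for i, j in edges:
    degree[i] += 1
    degree[j] += 1
print("sorted degree sequence:", np.sort(degree))
```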
Optimization of a multi-well array SERS chip
NASA Astrophysics Data System (ADS)
Abell, J. L.; Driskell, J. D.; Dluhy, R. A.; Tripp, R. A.; Zhao, Y.-P.
2009-05-01
SERS-active substrates are fabricated by oblique angle deposition and patterned by a polymer-molding technique to provide a uniform array for high throughput biosensing and multiplexing. Using a conventional SERS-active molecule, 1,2-Bis(4-pyridyl)ethylene (BPE), we show that this device provides a uniform Raman signal enhancement from well to well. The patterning technique employed in this study demonstrates a flexibility allowing for patterning control and customization, and performance optimization of the substrate. Avian influenza is analyzed to demonstrate the ability of this multi-well patterned SERS substrate for biosensing.
Sereshti, Hassan; Poursorkh, Zahra; Aliakbarzadeh, Ghazaleh; Zarre, Shahin; Ataolahi, Sahar
2018-01-15
Quality of saffron, a valuable food additive, can considerably affect consumers' health. In this work, a novel preprocessing strategy for image analysis of saffron thin-layer chromatographic (TLC) patterns was introduced. This includes performing a series of image pre-processing techniques on TLC images, such as compression, inversion, elimination of the general baseline (using asymmetric least squares (AsLS)), removal of spot shifts and concavity (by correlation optimization warping (COW)), and finally conversion to RGB chromatograms. Subsequently, an unsupervised multivariate data analysis including principal component analysis (PCA) and k-means clustering was utilized to investigate the effect of soil salinity, as a cultivation parameter, on saffron TLC patterns. This method was used as a rapid and simple technique to obtain the chemical fingerprints of saffron TLC images. Finally, the separated TLC spots were chemically identified using high-performance liquid chromatography-diode array detection (HPLC-DAD). Accordingly, the saffron quality from different areas of Iran was evaluated and classified. Copyright © 2017 Elsevier Ltd. All rights reserved.
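The preprocessing and clustering chain lends itself to a short sketch. Below is a minimal version assuming an Eilers-style asymmetric least squares baseline followed by PCA and k-means on the resulting chromatograms; parameter values and the input array layout are assumptions, not the settings used in the study.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares baseline (Eilers & Boelens style estimate)."""
    L = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(L, L - 2))
    penalty = lam * D.dot(D.T)
    w = np.ones(L)
    z = y
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, L, L)
        z = spsolve(W + penalty, w * y)
        w = p * (y > z) + (1 - p) * (y < z)
    return z

# Toy data: one RGB chromatogram per saffron sample (rows), already aligned.
rng = np.random.default_rng(0)
chromatograms = rng.random((24, 300))

corrected = np.array([c - asls_baseline(c) for c in chromatograms])

scores = PCA(n_components=3).fit_transform(corrected)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(labels)
```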
NASA Astrophysics Data System (ADS)
Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas
2010-03-01
Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, thus a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining active shape model (ASM) with optimal graph search. The latter is used to achieve landmark refinement in the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various types of information (gradient, intensity distributions, and regional-property terms) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.
The Relationship between Distributed Leadership and Teachers' Academic Optimism
ERIC Educational Resources Information Center
Mascall, Blair; Leithwood, Kenneth; Straus, Tiiu; Sacks, Robin
2008-01-01
Purpose: The goal of this study was to examine the relationship between four patterns of distributed leadership and a modified version of a variable Hoy et al. have labeled "teachers' academic optimism." The distributed leadership patterns reflect the extent to which the performance of leadership functions is consciously aligned across…
In this paper, the methodological concept of landscape optimization presented by Seppelt and Voinov [Ecol. Model. 151 (2/3) (2002) 125] is analyzed. Two aspects are chosen for detailed study. First, we generalize the performance criterion to assess a vector of ecosystem functi...
A novel approach to describing and detecting performance anti-patterns
NASA Astrophysics Data System (ADS)
Sheng, Jinfang; Wang, Yihan; Hu, Peipei; Wang, Bin
2017-08-01
Anti-pattern, as an extension of the pattern concept, describes a widely used poor solution that can negatively affect application systems. To address the shortcomings of existing anti-pattern descriptions, an anti-pattern description method based on first-order predicates is proposed. This method synthesizes anti-pattern forms and symptoms, which makes the description more accurate while retaining good scalability and versatility. In order to improve the accuracy of anti-pattern detection, a Bayesian classification method is applied to validate detection results, which can reduce the false negatives and false positives of anti-pattern detection. Finally, the proposed approach is applied to a small e-commerce system, and its feasibility and effectiveness are demonstrated further through experiments.
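A small sketch of the validation idea, assuming each candidate detection is described by binary symptom flags and that a labelled history of confirmed true/false positives is available; the feature names and the use of scikit-learn's Bernoulli naive Bayes are illustrative assumptions, not the classifier configuration of the paper.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Each row: binary symptom flags for one candidate anti-pattern detection,
# e.g. [high call count, large payload, long response time, low utilisation].
X_train = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
])
# 1 = confirmed anti-pattern (true positive), 0 = false positive.
y_train = np.array([1, 1, 0, 0, 1, 0])

clf = BernoulliNB().fit(X_train, y_train)

candidate = np.array([[1, 0, 1, 1]])
print("posterior P(true anti-pattern):", clf.predict_proba(candidate)[0, 1])
```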
Multi-MHz time-of-flight electronic bandstructure imaging of graphene on Ir(111)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tusche, C., E-mail: c.tusche@fz-juelich.de; Peter Grünberg Institut; Goslawski, P.
2016-06-27
In the quest for detailed spectroscopic insight into the electronic structure at solid surfaces in a large momentum range, we have developed an advanced experimental approach. It combines the 3D detection scheme of a time-of-flight momentum microscope with an optimized filling pattern of the BESSY II storage ring. Here, comprehensive data sets covering the full surface Brillouin zone have been used to study faint substrate-film hybridization effects in the electronic structure of graphene on Ir(111), revealed by a pronounced linear dichroism in angular distribution. The method paves the way to 3D electronic bandmapping with unprecedented data recording efficiency.
Metallocarbohedrenes: Transmission Electron Microscopy of Mass Gated Deposits
NASA Astrophysics Data System (ADS)
Castleman, M. E. Lyn, Jr.
2002-03-01
Titanium and zirconium Met-Car cluster ions have been detected from the direct laser vaporization of metal-graphite mixtures using time-of-flight mass spectrometry. Optimization of the production conditions enabled sufficient intensities to mass select and deposit Met-Cars on surfaces. High-resolution transmission electron microscopy images of mass-gated Met-Car species reveal deposited nanocrystals 2 nm in diameter. Diffraction patterns indicate the presence of multiple species and show that the deposits have spatial orientation. Lattice parameters have been extracted. The implication of the findings will be discussed. Support for the work has been from the AFOSR F49620-01-1-0122.
Optimization of imprintable nanostructured a-Si solar cells: FDTD study.
Fisker, Christian; Pedersen, Thomas Garm
2013-03-11
We present a finite-difference time-domain (FDTD) study of an amorphous silicon (a-Si) thin film solar cell with nanoscale patterns on the substrate surface. The patterns, based on the geometry of anisotropically etched silicon gratings, are optimized with respect to the period and anti-reflection (AR) coating thickness for maximal absorption in the range of the solar spectrum. The structure is shown to increase the cell efficiency by 10.2% compared to a similar flat solar cell with an optimized AR coating thickness. An increased back reflection can be obtained with a 50 nm zinc oxide layer on the back reflector, which gives an additional efficiency increase, leading to a total of 14.9%. In addition, the patterned cells are shown to be up to 3.8% more efficient than an optimized textured reference cell based on the Asahi U-type glass surface. The effects of variations of the optimized solar cell structure due to the manufacturing process are investigated, and shown to be negligible for variations below ±10%.
NASA Astrophysics Data System (ADS)
Radzicki, Vincent R.; Boutte, David; Taylor, Paul; Lee, Hua
2017-05-01
Radar based detection of human targets behind walls or in dense urban environments is an important technical challenge with many practical applications in security, defense, and disaster recovery. Radar reflections from a human can be orders of magnitude weaker than those from objects encountered in urban settings such as walls, cars, or possibly rubble after a disaster. Furthermore, these objects can act as secondary reflectors and produce multipath returns from a person. To mitigate these issues, processing of radar return data needs to be optimized for recognizing human motion features such as walking, running, or breathing. This paper presents a theoretical analysis of the modulation effects human motion has on the radar waveform and how high levels of multipath can distort these motion effects. From this analysis, an algorithm is designed and optimized for tracking human motion in heavily cluttered environments. The tracking results will be used as the fundamental detection/classification tool to discriminate human targets from others by identifying human motion traits such as predictable walking patterns and periodicity in breathing rates. The theoretical formulations will be tested against simulation and measured data collected using a low power, portable see-through-the-wall radar system that could be practically deployed in real-world scenarios. Lastly, the performance of the algorithm is evaluated in a series of experiments where both a single person and multiple people are moving in an indoor, cluttered environment.
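The modulation analysis can be illustrated with a short sketch: a complex baseband return whose range is modulated by breathing produces a periodic micro-Doppler signature visible in a spectrogram. The signal model and parameters below are simplified assumptions for illustration, not the waveform or algorithm of the system described.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 200.0                      # slow-time sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)    # 30 s observation
wavelength = 0.06               # roughly a 5 GHz carrier (assumed)
rng = np.random.default_rng(0)

# Range to the chest wall: static offset plus small periodic breathing motion.
r = 3.0 + 0.005 * np.sin(2 * np.pi * 0.25 * t)      # 0.25 Hz breathing
clutter = 0.5                                        # static wall return
signal = clutter + np.exp(1j * 4 * np.pi * r / wavelength)
signal += 0.1 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

f, tau, S = spectrogram(signal, fs=fs, nperseg=256, noverlap=192,
                        return_onesided=False)
print("spectrogram shape (freq x time):", S.shape)
# Peaks of |S| away from 0 Hz recur at the breathing period, which is the kind
# of periodic feature a tracker can exploit to separate people from clutter.
```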
NASA Astrophysics Data System (ADS)
Manousaki, D.; Panagiotopoulou, A.; Bizimi, V.; Haynes, M. S.; Love, S.; Kallergi, M.
2017-11-01
The purpose of this study was the generation of ground truth files (GTFs) of the breast ducts from 3D images of the Invenia™ Automated Breast Ultrasound System (ABUS) (GE Healthcare, Little Chalfont, UK) and the application of these GTFs to the optimization of the imaging protocol and the evaluation of a computer aided detection (CADe) algorithm developed for automated duct detection. Six lactating, nursing volunteers were scanned with the ABUS before and right after breastfeeding their infants. An expert in breast ultrasound generated rough outlines of the milk-filled ducts in the transaxial slices of all image volumes, and the final GTFs were created using thresholding and smoothing tools in ImageJ. In addition, a CADe algorithm automatically segmented duct-like areas and its results were compared to the expert's GTFs by estimating the true positive fraction (TPF), or % overlap. The CADe output differed significantly from the expert's, but both detected a smaller than expected volume of the ducts due to insufficient contrast (ducts were partially filled with milk), discontinuities, and artifacts. GTFs were used to modify the imaging protocol and improve the CADe method. In conclusion, electronic GTFs provide a valuable tool in the optimization of a tomographic imaging system, the imaging protocol, and the CADe algorithms. Their generation, however, is an extremely time-consuming, strenuous process, particularly for multi-slice examinations, and alternatives based on phantoms or simulations are highly desirable.
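The overlap comparison reduces to a short computation; the sketch below assumes binary volumes of equal shape for the expert GTF and the CADe segmentation, with toy data standing in for one image volume.

```python
import numpy as np

def true_positive_fraction(cade_mask, gtf_mask):
    """Fraction of expert ground-truth voxels that the CADe output also marks."""
    cade = cade_mask.astype(bool)
    gtf = gtf_mask.astype(bool)
    return np.logical_and(cade, gtf).sum() / gtf.sum()

# Toy volumes standing in for one ABUS image volume.
rng = np.random.default_rng(0)
gtf = rng.random((40, 64, 64)) > 0.97
cade = np.logical_or(gtf & (rng.random(gtf.shape) > 0.4),
                     rng.random(gtf.shape) > 0.995)
print("TPF (% overlap):", 100 * true_positive_fraction(cade, gtf))
```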
2011-01-01
Background Epilepsy is a common neurological disorder characterized by recurrent electrophysiological activities, known as seizures. Without appropriate detection strategies, these seizure episodes can dramatically affect the quality of life for those afflicted. The rationale of this study is to develop an unsupervised algorithm for the detection of seizure states so that it may be implemented along with potential intervention strategies. Methods A hidden Markov model (HMM) was developed to interpret the state transitions of in vitro rat hippocampal slice local field potentials (LFPs) during seizure episodes. It can be used to estimate the probability of state transitions and the corresponding characteristics of each state. Wavelet features were clustered and used to differentiate the electrophysiological characteristics at each corresponding HMM state. Using an unsupervised training method, the HMM and the clustering parameters were obtained simultaneously. The HMM states were then assigned to the electrophysiological data using an expert-guided technique. Minimum redundancy maximum relevance (mRMR) analysis and the Akaike Information Criterion (AICc) were applied to reduce the effect of over-fitting. The sensitivity, specificity and optimality index of chronic seizure detection were compared for various HMM topologies. The ability to distinguish early and late tonic firing patterns prior to chronic seizures was also evaluated. Results Significant improvement in state detection performance was achieved when additional wavelet coefficient rate-of-change information was used as features. The final HMM topology obtained using mRMR and AICc was able to detect non-ictal (interictal), early and late tonic firing, chronic seizures and postictal activities. A mean sensitivity of 95.7%, mean specificity of 98.9% and optimality index of 0.995 in the detection of chronic seizures were achieved. The detection of early and late tonic firing was validated with experimental intracellular electrical recordings of seizures. Conclusions The HMM implementation of a seizure dynamics detector is an improvement over existing approaches using visual detection and complexity measures. The subjectivity involved in partitioning the observed data prior to training can be eliminated. It can also decipher the probabilities of seizure state transitions using the magnitude and rate-of-change wavelet information of the LFPs. PMID:21504608
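A compact sketch of the modelling chain is given below, assuming the `pywt` and `hmmlearn` packages; the wavelet family, window length, number of states and the synthetic signal are illustrative assumptions, not the configuration used in the study.

```python
import numpy as np
import pywt
from hmmlearn.hmm import GaussianHMM

fs = 1000                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
lfp = rng.standard_normal(120 * fs)         # stand-in for an LFP recording

def window_features(x, win=fs):
    """Per-window wavelet sub-band energies plus their rate of change."""
    feats = []
    for start in range(0, len(x) - win + 1, win):
        coeffs = pywt.wavedec(x[start:start + win], "db4", level=4)
        feats.append([np.sum(c ** 2) for c in coeffs])
    feats = np.log(np.array(feats) + 1e-12)
    rate = np.vstack([np.zeros(feats.shape[1]), np.diff(feats, axis=0)])
    return np.hstack([feats, rate])

X = window_features(lfp)
model = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50,
                    random_state=0).fit(X)
states = model.predict(X)                   # candidate non-ictal/tonic/ictal states
print(np.bincount(states))
```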
Comparison of dual-biomarker PIB-PET and dual-tracer PET in AD diagnosis.
Fu, Liping; Liu, Linwen; Zhang, Jinming; Xu, Baixuan; Fan, Yong; Tian, Jiahe
2014-11-01
To identify the optimal time window for capturing perfusion information from early (11)C-PIB imaging frames (perfusion PIB, (11)C-pPIB) and to compare the performance of (18)F-FDG PET and "dual biomarker" (11)C-PIB PET [(11)C-pPIB and amyloid PIB ((11)C-aPIB)] for classification of AD, MCI and CN subjects. Forty subjects (14 CN, 12 MCI and 14 AD patients) underwent (18)F-FDG and (11)C-PIB PET studies. Pearson correlation between the (18)F-FDG image and the sum of early (11)C-PIB frames was maximised to identify the optimal time window for (11)C-pPIB. The classification power of imaging parameters was evaluated with leave-one-out validation. A 7-min time window yielded the highest correlation between (18)F-FDG and (11)C-pPIB. (11)C-pPIB and (18)F-FDG images shared a similar radioactive distribution pattern. (18)F-FDG performed better than (11)C-pPIB for the classification of both AD vs. CN and MCI vs. CN. (11)C-pPIB + (11)C-aPIB and (18)F-FDG + (11)C-aPIB yielded the highest classification accuracy for AD vs. CN, and (18)F-FDG + (11)C-aPIB had the best classification performance for MCI vs. CN. (11)C-pPIB could serve as a useful biomarker of rCBF for measuring neural activity and improve the diagnostic power of PET for AD in conjunction with (11)C-aPIB. (18)F-FDG and (11)C-PIB dual-tracer PET examination could better detect MCI. • Dual-tracer PET examination provides neurofunctional and neuropathological information for AD diagnosis. • The identified optimal 11C-pPIB time frames had the highest correlation with 18F-FDG. • 11C-pPIB images shared a similar radioactive distribution pattern with 18F-FDG images. • 11C-pPIB can provide neurofunctional information. • Dual-tracer PET examination could better detect MCI.
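The time-window search amounts to correlating summed early frames with the FDG image over candidate windows; a minimal sketch under assumed array shapes is shown below (the 7-min result above comes from the study, not from this toy code).

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_frames = 20                                       # early dynamic PIB frames (assumed)
pib_frames = rng.random((n_frames, 32, 32, 16))     # toy dynamic PIB data
fdg = rng.random((32, 32, 16))                      # toy FDG image
frame_minutes = 0.5 * np.ones(n_frames)             # frame durations (assumed)

best = (None, -1.0)
for start in range(n_frames):
    for stop in range(start + 1, n_frames + 1):
        summed = pib_frames[start:stop].sum(axis=0).ravel()
        r, _ = pearsonr(summed, fdg.ravel())
        if r > best[1]:
            best = ((start, stop), r)

(start, stop), r = best
print("best window: %.1f-%.1f min, r=%.3f"
      % (frame_minutes[:start].sum(), frame_minutes[:stop].sum(), r))
```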
Optimal pattern distributions in Rete-based production systems
NASA Technical Reports Server (NTRS)
Scott, Stephen L.
1994-01-01
Since its introduction into the AI community in the early 1980's, the Rete algorithm has been widely used. This algorithm has formed the basis for many AI tools, including NASA's CLIPS. One drawback of Rete-based implementations, however, is that the network structures used internally by the Rete algorithm make it sensitive to the arrangement of individual patterns within rules. Thus, while rules may be more or less arbitrarily placed within source files, the distribution of individual patterns within these rules can significantly affect the overall system performance. Some heuristics have been proposed to optimize pattern placement, however these suggestions can be conflicting. This paper describes a systematic effort to measure the effect of pattern distribution on production system performance. An overview of the Rete algorithm is presented to provide context. A description of the methods used to explore the pattern ordering problem is presented, using internal production system metrics such as the number of partial matches, and coarse-grained operating system data such as memory usage and time. The results of this study should be of interest to those developing and optimizing software for Rete-based production systems.
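The effect of pattern ordering on partial matches can be illustrated with a toy join count (not a Rete implementation): placing the more selective pattern first leaves far fewer tokens stored before the join. The fact counts and attribute names below are invented for illustration.

```python
from itertools import product

# Toy working memory: many 'sensor' facts, few 'alarm' facts.
sensors = [{"type": "sensor", "id": i, "zone": i % 10} for i in range(1000)]
alarms = [{"type": "alarm", "zone": z} for z in (2, 5)]

def partial_matches(first, second):
    """Tokens stored after the first pattern plus joined tokens (shared 'zone')."""
    joined = [(a, b) for a, b in product(first, second) if a["zone"] == b["zone"]]
    return len(first) + len(joined)

print("sensor pattern first:", partial_matches(sensors, alarms))
print("alarm pattern first: ", partial_matches(alarms, sensors))
```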
Rodríguez-Medina, Inmaculada C; Beltrán-Debón, Raúl; Molina, Vicente Micol; Alonso-Villaverde, Carlos; Joven, Jorge; Menéndez, Javier A; Segura-Carretero, Antonio; Fernández-Gutiérrez, Alberto
2009-10-01
The phenolic fraction and other polar compounds of Hibiscus sabdariffa were separated and identified by HPLC with diode array detection coupled to electrospray TOF and IT tandem MS (DAD-HPLC-ESI-TOF-MS and IT-MS). The H. sabdariffa aqueous extract was filtered and directly injected into the LC system. The analysis of the compounds was carried out by RP HPLC coupled to DAD and TOF-MS in order to obtain molecular formulae and exact masses. Subsequent analyses with IT-MS provided the fragmentation patterns and confirmed the structures. The H. sabdariffa samples were successfully analyzed in positive and negative ionization modes with two optimized linear gradients. In positive mode, the two most representative anthocyanins and other compounds were identified, whereas the phenolic fraction, hydroxycitric acid and its lactone were identified using the negative ionization mode.
Flexible pressure sensors for burnt skin patient monitoring
NASA Astrophysics Data System (ADS)
Hong, Gwang-Wook; Kim, Se-Hoon; Kim, Joo-Hyung
2015-04-01
To monitor hypertrophic scars in burnt skin, we proposed and demonstrated a hybrid polymer/carbon-tube-based flexible pressure sensor. To monitor the pressure on skin by measurement, we focused on the fabrication of a well-defined hybrid polydimethylsiloxane/functionalized multi-walled carbon tube array formed on a patterned interdigital transducer in a controllable way for the application of flexible pressure sensing devices. As a result, detection at a pressure of 20 mmHg was achieved, which is a suggested optimal resistance value for sensing pressure. It should be noted that the achieved value of resistance at the pressure of 20 mmHg is highly desirable for the further development of sensitive flexible pressure sensors. In addition, we demonstrate the feasibility of a wearable pressure sensor capable of real-time detection of local pressure via a wireless communication module.
The scent of disease: human body odor contains an early chemosensory cue of sickness.
Olsson, Mats J; Lundström, Johan N; Kimball, Bruce A; Gordon, Amy R; Karshikoff, Bianka; Hosseini, Nishteman; Sorjonen, Kimmo; Olgart Höglund, Caroline; Solares, Carmen; Soop, Anne; Axelsson, John; Lekander, Mats
2014-03-01
Observational studies have suggested that with time, some diseases result in a characteristic odor emanating from different sources on the body of a sick individual. Evolutionarily, however, it would be more advantageous if the innate immune response were detectable by healthy individuals as a first line of defense against infection by various pathogens, to optimize avoidance of contagion. We activated the innate immune system in healthy individuals by injecting them with endotoxin (lipopolysaccharide). Within just a few hours, endotoxin-exposed individuals had a more aversive body odor relative to when they were exposed to a placebo. Moreover, this effect was statistically mediated by the individuals' level of immune activation. This chemosensory detection of the early innate immune response in humans represents the first experimental evidence that disease smells and supports the notion of a "behavioral immune response" that protects healthy individuals from sick ones by altering patterns of interpersonal contact.
Monosomy 3 by FISH in uveal melanoma: variability in techniques and results.
Aronow, Mary; Sun, Yang; Saunthararajah, Yogen; Biscotti, Charles; Tubbs, Raymond; Triozzi, Pierre; Singh, Arun D
2012-09-01
Tumor monosomy 3 confers a poor prognosis in patients with uveal melanoma. We critically review the techniques used for fluorescence in situ hybridization (FISH) detection of monosomy 3 in order to assess variability in practice patterns and to explain differences in results. Significant variability that has likely affected reported results was found in tissue sampling methods, selection of FISH probes, number of cells counted, and the cut-off point used to determine monosomy 3 status. Clinical parameters and specific techniques employed to report FISH results should be specified so as to allow meta-analysis of published studies. FISH-based detection of monosomy 3 in uveal melanoma has not been performed in a standardized manner, which limits conclusions regarding its clinical utility. FISH is a widely available, versatile technology, and when performed optimally has the potential to be a valuable tool for determining the prognosis of uveal melanoma. Copyright © 2012 Elsevier Inc. All rights reserved.
High granularity tracker based on a Triple-GEM optically read by a CMOS-based camera
NASA Astrophysics Data System (ADS)
Marafini, M.; Patera, V.; Pinci, D.; Sarti, A.; Sciubba, A.; Spiriti, E.
2015-12-01
The detection of photons produced during the avalanche development in gas chambers has been the subject of detailed studies in the past. The great progress achieved in recent years in the performance of micro-pattern gas detectors on one side and of photo-sensors on the other provides the possibility of making high-granularity and very sensitive particle trackers. In this paper, the results obtained with a triple-GEM structure read out by a CMOS-based sensor are described. The use of an He/CF4 (60/40) gas mixture and a detailed optimization of the electric fields made it possible to obtain a very high GEM light yield. About 80 photons per primary electron were detected by the sensor, resulting in a very good capability of tracking both muons from cosmic rays and electrons from natural radioactivity.
Paz, Beatriz; Riobó, Pilar; Souto, María L; Gil, Laura V; Norte, Manuel; Fernández, José J; Franco, José M
2006-11-01
The toxin composition of a culture of the dinoflagellate Protoceratium reticulatum was investigated using LC-FLD, after derivatization with DMEQ-TAD (4-(2-(6,7-dimethoxy-4-methyl-3-oxo-3,4-dihydroquinoxalimylethyl)-1,2,4-triazoline-3,5-dione)). Besides yessotoxin (YTX), the new YTX analogue glycoyessotoxin A (G-YTXA) was detected in the culture medium as well as in cells. The conditions for extraction were optimized and the production profile established. The retention time of the resulting fluorescent G-YTXA adduct was identified by comparison with the appropriate standard. Additionally, both G-YTXA and the DMEQ-TAD-G-YTXA adduct were confirmed by LC-MS, showing ion peaks at m/z 1273 [M-2Na+H](-) and m/z 1618 [M-2Na+H](-), respectively. The LC-MS(n) displayed a fragmentation pattern similar to that of the YTX series.
Immunological multimetal deposition for rapid visualization of sweat fingerprints.
He, Yayun; Xu, Linru; Zhu, Yu; Wei, Qianhui; Zhang, Meiqin; Su, Bin
2014-11-10
A simple method termed immunological multimetal deposition (iMMD) was developed for rapid visualization of sweat fingerprints with the naked eye, combining conventional MMD with the immunoassay technique. In this approach, antibody-conjugated gold nanoparticles (AuNPs) were used to specifically interact with the corresponding antigens in the fingerprint residue. The AuNPs serve as nucleation sites for autometallographic deposition of silver particles from the silver staining solution, generating a dark ridge pattern for visual detection. Using fingerprints inked with human immunoglobulin G (hIgG), we obtained the optimal formulation of iMMD, which was then successfully applied to visualize sweat fingerprints through the detection of two secreted polypeptides, epidermal growth factor and lysozyme. In comparison with conventional MMD, iMMD is faster and provides additional information beyond identification. Moreover, iMMD is facile and does not need expensive instruments. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Integration of mask and silicon metrology in DFM
NASA Astrophysics Data System (ADS)
Matsuoka, Ryoichi; Mito, Hiroaki; Sugiyama, Akiyuki; Toyoda, Yasutaka
2009-03-01
We have developed a highly integrated method of mask and silicon metrology. The method adopts a metrology management system based on DBM (Design Based Metrology), built on the highly accurate contouring created by the edge-detection algorithms used in mask CD-SEM and silicon CD-SEM. We verified the accuracy, stability and reproducibility of the integration in experiments; the accuracy is comparable with that of mask and silicon CD-SEM metrology. In this report, we introduce the experimental results and the application. As shrinkage of the design rule for semiconductor devices advances, OPC (Optical Proximity Correction) becomes aggressively dense in RET (Resolution Enhancement Technology). However, from the viewpoint of DFM (Design for Manufacturability), the cost of data processing for advanced MDP (Mask Data Preparation) and mask production is a problem. Such a trade-off between RET and mask production is a big issue in the semiconductor market, especially in the mask business. Looking at the silicon device production process, information sharing is not completely organized between the design section and the production section. Design data created with OPC and MDP should be linked to process control in production, but design data and process control data are optimized independently. Thus, we provide a DFM solution: advanced integration of mask metrology and silicon metrology. The system we propose here is composed of the following. 1) Design-based recipe creation: specify patterns on the design data for metrology. This step is fully automated since it is interfaced with hot-spot coordinate information detected by various verification methods. 2) Design-based image acquisition: acquire the images of mask and silicon automatically with a CD-SEM recipe based on the pattern design. This is a robust automated step because a wide range of design data is used for the image acquisition. 3) Contour profiling and GDS data generation: an image profiling process is applied to the acquired image based on the profiling method of the field-proven CD metrology algorithm. The detected edges are then converted to GDSII format, which is a standard format for design data, and utilized in various DFM systems such as simulation. Namely, by integrating the pattern shapes of mask and silicon formed during the manufacturing process into GDSII format, it becomes possible to bridge highly accurate pattern profile information over to the design field of various EDA systems. These steps are fully integrated with the design data and automated. Linking mask data and process control data allows bi-directional cross-probing between them. This method is a solution for total optimization that covers design, MDP, mask production and silicon device production, and it is therefore regarded as a strategic DFM approach in semiconductor metrology.
Influence of item distribution pattern and abundance on efficiency of benthic core sampling
Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.
2014-01-01
Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
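The simulation idea translates directly into a short Monte Carlo sketch; the plot size, densities, and Thomas-process clumping used below are illustrative assumptions rather than the GIS parameters of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
PLOT = 5.0                         # square plot side in metres (assumed)

def random_items(density):
    n = rng.poisson(density * PLOT ** 2)
    return rng.random((n, 2)) * PLOT

def clumped_items(density, parents=15, spread=0.2):
    """Thomas-like process: offspring scattered around random parent points."""
    total = rng.poisson(density * PLOT ** 2)
    centres = rng.random((parents, 2)) * PLOT
    picks = centres[rng.integers(0, parents, total)]
    return (picks + rng.normal(0, spread, (total, 2))) % PLOT

def simulate(item_generator, n_cores=30, core_area_cm2=50.0, reps=200):
    radius = np.sqrt(core_area_cm2 / 1e4 / np.pi)      # core radius in metres
    density_estimates, detect = [], 0.0
    for _ in range(reps):
        pts = item_generator()
        cores = rng.random((n_cores, 2)) * PLOT
        d = np.linalg.norm(pts[None, :, :] - cores[:, None, :], axis=2)
        counts = (d < radius).sum(axis=1)                # items caught per core
        density_estimates.append(counts.sum() / (n_cores * core_area_cm2 / 1e4))
        detect += (counts > 0).mean()                    # P(>=1 item in a core)
    return np.mean(density_estimates), detect / reps

for name, gen in [("random", lambda: random_items(500)),
                  ("clumped", lambda: clumped_items(500))]:
    est, p_detect = simulate(gen)
    print(f"{name:7s}: density estimate {est:6.0f}/m^2, detection prob {p_detect:.2f}")
```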
Effective protein extraction protocol for proteomics studies of Jerusalem artichoke leaves.
Zhang, Meide; Shen, Shihua
2013-07-01
Protein extraction is a crucial step for proteomics studies. To establish an effective protein extraction protocol suitable for two-dimensional electrophoresis (2DE) analysis in Jerusalem artichoke (Helianthus tuberosus L.), three different protein extraction methods (trichloroacetic acid/acetone, Mg/NP-40, and phenol/ammonium acetate) were evaluated using Jerusalem artichoke leaves as source materials. Of the three methods, trichloroacetic acid/acetone yielded the best protein separation pattern and highest number of protein spots in 2DE analysis. Proteins highly abundant in leaves, such as Rubisco, are typically problematic during leaf 2DE analysis, however, and this disadvantage was evident using trichloroacetic acid/acetone. To reduce the influence of abundant proteins on the detection of low-abundance proteins, we optimized the trichloroacetic acid/acetone method by incorporating a PEG fractionation approach. After optimization, 363 additional (36.2%) protein spots were detected on the 2DE gel. Our results suggest that trichloroacetic acid/acetone method is a better protein extraction technique than Mg/NP-40 and phenol/ammonium acetate in Jerusalem artichoke leaf 2DE analysis, and that trichloroacetic acid/acetone method combined with PEG fractionation procedure is the most effective approach for leaf 2DE analysis of Jerusalem artichoke. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
McKenney, Jesse K; Wei, Wei; Hawley, Sarah; Auman, Heidi; Newcomb, Lisa F; Boyer, Hilary D; Fazli, Ladan; Simko, Jeff; Hurtado-Coll, Antonio; Troyer, Dean A; Tretiakova, Maria S; Vakar-Lopez, Funda; Carroll, Peter R; Cooperberg, Matthew R; Gleave, Martin E; Lance, Raymond S; Lin, Dan W; Nelson, Peter S; Thompson, Ian M; True, Lawrence D; Feng, Ziding; Brooks, James D
2016-11-01
Histologic grading remains the gold standard for prognosis in prostate cancer, and assessment of Gleason score plays a critical role in active surveillance management. We sought to optimize the prognostic stratification of grading and developed a method of recording and studying individual architectural patterns by light microscopic evaluation that is independent of standard Gleason grade. Some of the evaluated patterns are not assessed by current Gleason grading (eg, reactive stromal response). Individual histologic patterns were correlated with recurrence-free survival in a retrospective postradical prostatectomy cohort of 1275 patients represented by the highest-grade foci of carcinoma in tissue microarrays. In univariable analysis, fibromucinous rupture with varied epithelial complexity had a significantly lower relative risk of recurrence-free survival in cases graded as 3+4=7. Cases having focal "poorly formed glands," which could be designated as pattern 3+4=7, had lower risk than cribriform patterns with either small cribriform glands or expansile cribriform growth. In separate multivariable Cox proportional hazard analyses of both Gleason score 3+3=6 and 3+4=7 carcinomas, reactive stromal patterns were associated with worse recurrence-free survival. Decision tree models demonstrate potential regrouping of architectural patterns into categories with similar risk. In summary, we argue that Gleason score assignment by current consensus guidelines is not entirely optimized for clinical use, including active surveillance. Our data suggest that focal poorly formed gland and cribriform patterns, currently classified as Gleason pattern 4, should be in separate prognostic groups, as the latter is associated with worse outcome. Patterns with extravasated mucin are likely overgraded in a subset of cases with more complex epithelial bridges, whereas stromogenic cancers have a worse outcome than conveyed by Gleason grade alone. These findings serve as a foundation to facilitate optimization of histologic grading and strongly support incorporating reactive stroma into routine assessment.
Efficient discovery of risk patterns in medical data.
Li, Jiuyong; Fu, Ada Wai-chee; Fahey, Paul
2009-01-01
This paper studies the problem of efficiently discovering risk patterns in medical data. Risk patterns are defined by a statistical metric, relative risk, which has been widely used in epidemiological research. To avoid fruitless search in the complete exploration of risk patterns, we define the optimal risk pattern set to exclude superfluous patterns, i.e. complicated patterns with lower relative risk than their corresponding simpler-form patterns. We prove that mining optimal risk pattern sets conforms to an anti-monotone property that supports an efficient mining algorithm. We propose an efficient algorithm for mining optimal risk pattern sets based on this property. We also propose a hierarchical structure to present discovered patterns for easy perusal by domain experts. The proposed approach is compared with two well-known rule discovery methods, decision tree and association rule mining approaches, on benchmark data sets and applied to a real-world application. The proposed method discovers more and better quality risk patterns than a decision tree approach. The decision tree method is not designed for such applications and is inadequate for pattern exploration. The proposed method does not discover a large number of uninteresting superfluous patterns as an association mining approach does. The proposed method is more efficient than an association rule mining method. A real-world case study shows that the method reveals some interesting risk patterns to medical practitioners. The proposed method is an efficient approach to explore risk patterns. It quickly identifies cohorts of patients that are vulnerable to a risk outcome from a large data set. The proposed method is useful for exploratory study on large medical data to generate and refine hypotheses. The method is also useful for designing medical surveillance systems.
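The statistical metric itself is simple to state; below is a minimal sketch, assuming boolean arrays marking pattern presence and the risk outcome per patient record.

```python
import numpy as np

def relative_risk(has_pattern, has_outcome):
    """RR = P(outcome | pattern present) / P(outcome | pattern absent)."""
    p = has_pattern.astype(bool)
    o = has_outcome.astype(bool)
    risk_exposed = o[p].mean()
    risk_unexposed = o[~p].mean()
    return risk_exposed / risk_unexposed

# Synthetic records: the pattern roughly triples the outcome probability.
rng = np.random.default_rng(0)
pattern = rng.random(2000) < 0.2
outcome = rng.random(2000) < np.where(pattern, 0.15, 0.05)
print("relative risk: %.2f" % relative_risk(pattern, outcome))
```

Roughly speaking, a pattern enters the optimal set only if no simpler sub-pattern already achieves at least its relative risk, which is the property the mining algorithm exploits for pruning.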
NASA Astrophysics Data System (ADS)
Cekli, Hakki Ergun; Nije, Jelle; Ypma, Alexander; Bastani, Vahid; Sonntag, Dag; Niesing, Henk; Zhang, Linmiao; Ullah, Zakir; Subramony, Venky; Somasundaram, Ravin; Susanto, William; Matsunobu, Masazumi; Johnson, Jeff; Tabery, Cyrus; Lin, Chenxi; Zou, Yi
2018-03-01
In addition to lithography process and equipment induced variations, processes like etching, annealing, film deposition and planarization exhibit variations, each having their own intrinsic characteristics and leaving an effect, a 'fingerprint', on the wafers. With ever tighter requirements for CD and overlay, controlling these process induced variations is both increasingly important and increasingly challenging in advanced integrated circuit (IC) manufacturing. For example, the on-product overlay (OPO) requirement for future nodes is approaching <3nm, requiring the allowable budget for process induced variance to become extremely small. Process variance control is seen as a bottleneck to further shrink, which drives the need for more sophisticated process control strategies. In this context we developed a novel 'computational process control' strategy which provides the capability of proactive control of each individual wafer with the aim of maximizing yield, without introducing a significant impact on metrology requirements, cycle time or productivity. The complexity of the wafer process is approached by characterizing the full wafer stack and building a fingerprint library containing key patterning performance parameters like overlay, focus, etc. Historical wafer metrology is decomposed into dominant fingerprints using Principal Component Analysis. By associating observed fingerprints with their origin, e.g. process steps, tools and variables, we can give an inline assessment of the strength and origin of the fingerprints on every wafer. Once the fingerprint library is established, wafer-specific fingerprint correction recipes can be determined based on each wafer's processing history. Data science techniques are used in real time to ensure that the library is adaptive. To realize this concept, ASML TWINSCAN scanners play a vital role with their on-board full wafer detection and exposure correction capabilities. High density metrology data is created by the scanner for each wafer and on every layer during the lithography steps. This metrology data will be used to obtain the process fingerprints. Also, the per-exposure and per-wafer correction potential of the scanners will be utilized for improved patterning control. Additionally, the fingerprint library will provide early detection of excursions for inline root cause analysis and process optimization guidance.
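The decomposition step can be sketched compactly: stacking per-wafer metrology maps into a matrix and applying PCA yields dominant fingerprints (components) and per-wafer strengths (scores). The array shapes and the use of scikit-learn below are assumptions for illustration, not the production implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_wafers, n_points = 500, 2000          # wafers x metrology points (assumed)

# Toy historical metrology: two synthetic process fingerprints plus noise.
fp_a = np.sin(np.linspace(0, 3 * np.pi, n_points))       # e.g. a radial-like term
fp_b = np.linspace(-1, 1, n_points)                       # e.g. a cross-wafer tilt
data = (rng.normal(0, 1, (n_wafers, 1)) * fp_a
        + rng.normal(0, 0.5, (n_wafers, 1)) * fp_b
        + rng.normal(0, 0.1, (n_wafers, n_points)))

pca = PCA(n_components=5).fit(data)
fingerprints = pca.components_          # dominant spatial fingerprints
scores = pca.transform(data)            # strength of each fingerprint per wafer
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
```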
10th Annual Systems Engineering Conference: Volume 2 Wednesday
2007-10-25
Autonomic system properties discussed include Self-Healing (detect hardware/software failures and reconfigure to permit continued operations), Self-Protecting (detect internal/external attacks and protect resources from exploitation), and Self-Optimizing (detect sub-optimal behaviors and intelligently optimize resource performance), alongside weapon/platform acoustics topics such as self-noise, radiated noise, beam forming, pulse types, wake and ice for submarines, surface ships, and platform sensors.
Multiple Detector Optimization for Hidden Radiation Source Detection
2015-03-26
important in achieving operationally useful methods for optimizing detector emplacement, the 2-D attenuation model approach promises to speed up the...process of hidden source detection significantly. The model focused on detection of the full energy peak of a radiation source. Methods to optimize... radioisotope identification is possible without using a computationally intensive stochastic model such as the Monte Carlo n-Particle (MCNP) code
Graph rigidity, cyclic belief propagation, and point pattern matching.
McAuley, Julian J; Caetano, Tibério S; Barbosa, Marconi S
2008-11-01
A recent paper [1] proposed a provably optimal polynomial time method for performing near-isometric point pattern matching by means of exact probabilistic inference in a chordal graphical model. Its fundamental result is that the chordal graph in question is shown to be globally rigid, implying that exact inference provides the same matching solution as exact inference in a complete graphical model. This implies that the algorithm is optimal when there is no noise in the point patterns. In this paper, we present a new graph that is also globally rigid but has an advantage over the graph proposed in [1]: Its maximal clique size is smaller, rendering inference significantly more efficient. However, this graph is not chordal, and thus, standard Junction Tree algorithms cannot be directly applied. Nevertheless, we show that loopy belief propagation in such a graph converges to the optimal solution. This allows us to retain the optimality guarantee in the noiseless case, while substantially reducing both memory requirements and processing time. Our experimental results show that the accuracy of the proposed solution is indistinguishable from that in [1] when there is noise in the point patterns.
Simultaneous optimization of loading pattern and burnable poison placement for PWRs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alim, F.; Ivanov, K.; Yilmaz, S.
2006-07-01
To solve the in-core fuel management optimization problem, GARCO-PSU (Genetic Algorithm Reactor Core Optimization - Pennsylvania State Univ.) was developed. This code is applicable to all types and geometries of PWR core structures with an unlimited number of fuel assembly (FA) types in the inventory. For this reason an innovative genetic algorithm was developed by modifying the classical representation of the genotype. In-core fuel management heuristic rules are introduced into GARCO. The core re-load design optimization has two parts, loading pattern (LP) optimization and burnable poison (BP) placement optimization. These parts depend on each other, but it is difficult to solve the combined problem due to its large size. Separating the problem into two parts provides a practical way to solve it; however, the result of this method does not reflect the real optimal solution. GARCO-PSU solves LP optimization and BP placement optimization simultaneously in an efficient manner. (authors)
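For orientation, a heavily simplified sketch of a permutation-coded evolutionary search over a loading pattern is given below; the fitness function is a hypothetical placeholder (a crude power-peaking proxy), and none of the genotype modifications or heuristic rules of GARCO-PSU are represented.

```python
import numpy as np

rng = np.random.default_rng(0)
N_POSITIONS = 40                            # core positions to fill (assumed)
burnups = rng.uniform(0, 45, N_POSITIONS)   # GWd/t of the available FAs (toy data)

def fitness(perm):
    """Hypothetical proxy: penalise placing fresh (low-burnup) fuel next to
    fresh fuel in a 1-D arrangement, standing in for a power-peaking limit."""
    freshness = 45.0 - burnups[perm]
    return float(np.max(freshness[:-1] + freshness[1:]))   # lower is better

def mutate(perm):
    child = perm.copy()
    i, j = rng.choice(N_POSITIONS, 2, replace=False)
    child[i], child[j] = child[j], child[i]
    return child

population = [rng.permutation(N_POSITIONS) for _ in range(40)]
for generation in range(200):
    population.sort(key=fitness)
    parents = population[:10]                      # simple elitist selection
    population = parents + [mutate(parents[rng.integers(0, 10)])
                            for _ in range(30)]
print("best proxy fitness:", fitness(min(population, key=fitness)))
```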
A rational eating model of binges, diets and obesity.
Dragone, Davide
2009-07-01
This paper addresses the rapid diffusion of obesity and the existence of different individual patterns of food consumption between non-dieters and chronic dieters. I propose a rational eating model where a forward-looking agent optimizes the intertemporal satisfaction from eating, taking into account the cost of changing consumption habits and the negative health consequences of having a non-optimal body weight. Consistent with the evidence, I show that the intertemporal maximization problem leads to a condition of overweightness, and that heterogeneity in the individual relevance of habits in consumption can determine the observed differences in the individual intertemporal patterns of food consumption and body weight. Sufficient conditions for determining when the convergence to the steady state implies oscillations or is monotonic are given. In the former case, the agent optimally alternates diets and binges until the steady state is reached; in the latter, a regular intertemporal pattern of food consumption is optimal.
Production and perception rules underlying visual patterns: effects of symmetry and hierarchy.
Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W Tecumseh
2012-07-19
Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups.
Cotton fabric-based electrochemical device for lactate measurement in saliva.
Malon, Radha S P; Chua, K Y; Wicaksono, Dedy H B; Córcoles, Emma P
2014-06-21
Lactate measurement is vital in clinical diagnostics, especially among trauma and sepsis patients. In recent years, it has been shown that saliva samples are an excellent alternative for non-invasive measurement of lactate. In this study, we describe a method for the determination of lactate concentration in saliva samples by using a simple and low-cost cotton fabric-based electrochemical device (FED). The device was fabricated using a template method for patterning the electrodes and a wax-patterning technique for creating the sample placement/reaction zone. Lactate oxidase (LOx) enzyme was immobilised at the reaction zone using a simple entrapment method. The LOx enzymatic reaction product, hydrogen peroxide (H2O2), was measured using chronoamperometric measurements at the optimal detection potential (-0.2 V vs. Ag/AgCl), at which the device exhibited a linear working range between 0.1 and 5 mM, a sensitivity (slope) of 0.3169 μA mM(-1) and a detection limit of 0.3 mM. The low detection limit and wide linear range were suitable for measuring salivary lactate (SL) concentration, thus saliva samples obtained under fasting conditions and after meals were evaluated using the FED. The measured SL varied among subjects and increased irregularly after meals. The proposed device provides a suitable analytical alternative for rapid and non-invasive determination of lactate in saliva samples. The device can also be adapted to a variety of other assays that require simplicity, low cost, portability and flexibility.
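The calibration figures quoted above follow from a standard linear fit; the sketch below shows that calculation on made-up chronoamperometric readings (the numbers are illustrative, not the paper's data), with the detection limit taken as three times the blank standard deviation divided by the slope.

```python
import numpy as np

# Made-up calibration data: lactate concentration (mM) vs. steady-state current (uA).
conc = np.array([0.1, 0.5, 1.0, 2.0, 3.0, 5.0])
current = np.array([0.04, 0.17, 0.33, 0.65, 0.96, 1.60])
blank_replicates = np.array([0.01, 0.06, 0.02, 0.07, 0.03])   # blank currents, uA

slope, intercept = np.polyfit(conc, current, 1)
lod = 3 * blank_replicates.std(ddof=1) / slope

print("sensitivity: %.4f uA/mM" % slope)
print("detection limit: %.2f mM" % lod)
```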
NASA Astrophysics Data System (ADS)
Rocha, Humberto; Dias, Joana M.; Ferreira, Brígida C.; Lopes, Maria C.
2013-05-01
Generally, the inverse planning of radiation therapy consists mainly of fluence optimization. Beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) consists of selecting appropriate radiation incidence directions and may influence the quality of IMRT plans, both by enhancing organ sparing and by improving tumor coverage. However, in clinical practice, most of the time, beam directions continue to be selected manually by the treatment planner without objective and rigorous criteria. The goal of this paper is to introduce a novel approach that uses beam's-eye-view dose ray tracing metrics within a pattern search method framework in the optimization of the highly non-convex BAO problem. Pattern search methods are derivative-free optimization methods that require few function evaluations to progress and converge and have the ability to better avoid local entrapment. The pattern search method framework is composed of a search step and a poll step at each iteration. The poll step performs a local search in a mesh neighborhood and ensures the convergence to a local minimizer or stationary point. The search step provides the flexibility for a global search since it allows searches away from the neighborhood of the current iterate. Beam's-eye-view dose metrics assign a score to each radiation beam direction and can be used within the pattern search framework, furnishing a priori knowledge of the problem so that directions with larger dosimetric scores are tested first. A set of clinical cases of head-and-neck tumors treated at the Portuguese Institute of Oncology of Coimbra is used to discuss the potential of this approach in the optimization of the BAO problem.
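To make the poll structure concrete, here is a minimal coordinate pattern search (poll the mesh directions, shrink on failure); it is only a generic sketch, not the paper's BAO method, and the `order` hook merely stands in for the idea of testing higher-scoring directions first. The toy objective is an invented placeholder.

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-3, max_iter=500, order=None):
    """Minimal pattern search: poll +/- coordinate directions on a mesh,
    accept the first improving point, otherwise halve the mesh size."""
    x = np.asarray(x0, float)
    fx = f(x)
    n = len(x)
    directions = np.vstack([np.eye(n), -np.eye(n)])
    for _ in range(max_iter):
        if step < tol:
            break
        polled = directions if order is None else directions[order(x, directions)]
        for d in polled:                      # poll step
            trial = x + step * d
            ft = f(trial)
            if ft < fx:                       # success: move, keep the mesh size
                x, fx = trial, ft
                break
        else:                                 # no improvement: refine the mesh
            step *= 0.5
    return x, fx

# Toy non-convex objective standing in for a fluence-based plan score.
f = lambda v: np.sin(3 * v[0]) + (v[0] - 1) ** 2 + (v[1] + 0.5) ** 2
print(pattern_search(f, [4.0, 4.0]))
```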
Estimation and detection information trade-off for x-ray system optimization
NASA Astrophysics Data System (ADS)
Cushing, Johnathan B.; Clarkson, Eric W.; Mandava, Sagar; Bilgin, Ali
2016-05-01
X-ray Computed Tomography (CT) systems perform complex imaging tasks that involve both detection and estimation of system parameters, such as a baggage imaging system performing threat detection while generating reconstructions. This leads to a desire to optimize both the detection and estimation performance of a system, but most metrics focus on only one of these aspects. When making design choices there is a need for a concise metric which considers both detection and estimation information parameters, and then provides the user with the collection of possible optimal outcomes. In this paper a graphical analysis of the Estimation and Detection Information Trade-off (EDIT) is explored. EDIT produces curves which allow a decision to be made for system optimization based on design constraints and the costs associated with estimation and detection. EDIT analyzes the system in the estimation-information and detection-information space, where the user is free to pick their own method of calculating these measures. The user of EDIT can choose any desired figure of merit for detection information and estimation information, and the EDIT curves will then provide the collection of optimal outcomes. The paper first looks at two methods of creating EDIT curves. These curves can be calculated using a wide variety of systems, finding the optimal system by maximizing a figure of merit. EDIT can also be found as an upper bound of the information from a collection of systems. These two methods allow the user to choose a method of calculation which best fits the constraints of their actual system.
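The "upper bound over a collection of systems" view corresponds to extracting the non-dominated points in the (detection information, estimation information) plane; a small sketch with made-up figure-of-merit values is shown below, and it is only an illustration of that idea, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
# Each row: (detection information, estimation information) for one candidate
# system configuration, in whatever figures of merit the user has chosen.
systems = rng.random((50, 2))

def pareto_front(points):
    """Indices of points not dominated in both coordinates (larger is better)."""
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points > p, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

front = pareto_front(systems)
curve = systems[front][np.argsort(systems[front][:, 0])]
print("EDIT-style trade-off curve (detection info, estimation info):")
print(np.round(curve, 3))
```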
Soy foods: are they useful for optimal bone health?
Lanou, Amy J
2011-12-01
Numerous studies have investigated the relationship between soy foods, soy protein, or isoflavone extracts and markers of bone health and osteoporosis prevention, and have come to conflicting conclusions. Research on dietary patterns, rather than on specific food ingredients or individual foods, may offer an opportunity for better understanding the role of soy foods in bone health. Evidence is reviewed regarding the question of whether soy foods contribute to a dietary pattern in humans that supports and promotes bone health. Soy foods are associated with improved markers of bone health and improved outcomes, especially among Asian women. Although the optimal amounts and types of soy foods needed to support bone health are not yet clear, dietary pattern evidence suggests that regular consumption of soy foods is likely to be useful for optimal bone health as an integral part of a dietary pattern that is built largely from whole plant foods.
False alarm reduction by the And-ing of multiple multivariate Gaussian classifiers
NASA Astrophysics Data System (ADS)
Dobeck, Gerald J.; Cobb, J. Tory
2003-09-01
The high-resolution sonar is one of the principal sensors used by the Navy to detect and classify sea mines in minehunting operations. For such sonar systems, substantial effort has been devoted to the development of automated detection and classification (D/C) algorithms. These have been spurred by several factors including (1) aids for operators to reduce work overload, (2) more optimal use of all available data, and (3) the introduction of unmanned minehunting systems. The environments where sea mines are typically laid (harbor areas, shipping lanes, and the littorals) give rise to many false alarms caused by natural, biologic, and man-made clutter. The objective of the automated D/C algorithms is to eliminate most of these false alarms while still maintaining a very high probability of mine detection and classification (PdPc). In recent years, the benefits of fusing the outputs of multiple D/C algorithms have been studied. We refer to this as Algorithm Fusion. The results have been remarkable, including reliable robustness to new environments. This paper describes a method for training several multivariate Gaussian classifiers such that their And-ing dramatically reduces false alarms while maintaining a high probability of classification. This training approach is referred to as the Focused-Training method. This work extends our 2001-2002 work where the Focused-Training method was used with three other types of classifiers: the Attractor-based K-Nearest Neighbor Neural Network (a type of radial-basis, probabilistic neural network), the Optimal Discrimination Filter Classifier (based on linear discrimination theory), and the Quadratic Penalty Function Support Vector Machine (QPFSVM). Although our experience has been gained in the area of sea mine detection and classification, the principles described herein are general and can be applied to a wide range of pattern recognition and automatic target recognition (ATR) problems.
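As a concrete illustration of the And-ing rule (not of the Focused-Training procedure itself), the sketch below scores a feature vector under several multivariate Gaussian models and declares a contact only when every classifier agrees; the thresholds are assumed to have been set separately to meet the desired probability of classification.

```python
import numpy as np

def gaussian_log_score(x, mean, cov):
    """Log-likelihood-style score of a feature vector under one multivariate Gaussian."""
    diff = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return -0.5 * diff @ np.linalg.solve(np.asarray(cov, dtype=float), diff)

def and_fusion(x, classifiers, thresholds):
    """And-ing rule: declare a mine-like contact only when every classifier
    individually exceeds its own threshold, which suppresses false alarms that
    trigger only a subset of the classifiers."""
    return all(gaussian_log_score(x, mean, cov) > thr
               for (mean, cov), thr in zip(classifiers, thresholds))
```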
Sedentary Behaviour Profiling of Office Workers: A Sensitivity Analysis of Sedentary Cut-Points
Boerema, Simone T.; Essink, Gerard B.; Tönis, Thijs M.; van Velsen, Lex; Hermens, Hermie J.
2015-01-01
Measuring sedentary behaviour and physical activity with wearable sensors provides detailed information on activity patterns and can serve health interventions. At the basis of activity analysis stands the ability to distinguish sedentary from active time. As there is no consensus regarding the optimal cut-point for classifying sedentary behaviour, we studied the consequences of using different cut-points for this type of analysis. We conducted a battery of sitting and walking activities with 14 office workers, wearing the Promove 3D activity sensor to determine the optimal cut-point (in counts per minute (m·s⁻²)) for classifying sedentary behaviour. Then, 27 office workers wore the sensor for five days. We evaluated the sensitivity of five sedentary pattern measures for various sedentary cut-points and found an optimal cut-point for sedentary behaviour of 1660 × 10⁻³ m·s⁻². Total sedentary time was not sensitive to cut-point changes within ±10% of this optimal cut-point; other sedentary pattern measures were not sensitive to changes within the ±20% interval. The results from studies analyzing sedentary patterns, using different cut-points, can be compared within these boundaries. Furthermore, commercial, hip-worn activity trackers can implement feedback and interventions on sedentary behaviour patterns, using these cut-points. PMID:26712758
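A minimal sketch of the kind of sensitivity sweep described above, assuming one-minute activity epochs expressed in the same units as the reported 1660 × 10⁻³ m·s⁻² optimum; the function names and the five-step sweep are illustrative only.

```python
import numpy as np

def sedentary_minutes(counts, cut_point):
    """Number of one-minute epochs classified as sedentary, i.e. epochs whose
    activity output falls below the cut-point."""
    return int(np.sum(np.asarray(counts, dtype=float) < cut_point))

def cut_point_sensitivity(counts, base_cut=1.660, rel_range=0.10, steps=5):
    """Sweep the cut-point within +/- rel_range of a base value and tabulate
    total sedentary time for each setting, mirroring the reported analysis."""
    cuts = np.linspace(base_cut * (1 - rel_range), base_cut * (1 + rel_range), steps)
    return {round(float(c), 3): sedentary_minutes(counts, c) for c in cuts}
```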
Cortical membrane potential signature of optimal states for sensory signal detection
McGinley, Matthew J.; David, Stephen V.; McCormick, David A.
2015-01-01
The neural correlates of optimal states for signal detection task performance are largely unknown. One hypothesis holds that optimal states exhibit tonically depolarized cortical neurons with enhanced spiking activity, such as occur during movement. We recorded membrane potentials of auditory cortical neurons in mice trained on a challenging tone-in-noise detection task while assessing arousal with simultaneous pupillometry and hippocampal recordings. Arousal measures accurately predicted multiple modes of membrane potential activity, including: rhythmic slow oscillations at low arousal, stable hyperpolarization at intermediate arousal, and depolarization during phasic or tonic periods of hyper-arousal. Walking always occurred during hyper-arousal. Optimal signal detection behavior and sound-evoked responses, at both sub-threshold and spiking levels, occurred at intermediate arousal when pre-decision membrane potentials were stably hyperpolarized. These results reveal a cortical physiological signature of the classically-observed inverted-U relationship between task performance and arousal, and that optimal detection exhibits enhanced sensory-evoked responses and reduced background synaptic activity. PMID:26074005
Sparse Substring Pattern Set Discovery Using Linear Programming Boosting
NASA Astrophysics Data System (ADS)
Kashihara, Kazuaki; Hatano, Kohei; Bannai, Hideo; Takeda, Masayuki
In this paper, we consider finding a small set of substring patterns which classifies the given documents well. We formulate the problem as a 1-norm soft margin optimization problem where each dimension corresponds to a substring pattern. Then we solve this problem by using LPBoost and an optimal substring discovery algorithm. Since the problem is a linear program, the resulting solution is likely to be sparse, which is useful for feature selection. We evaluate the proposed method for real data such as movie reviews.
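For intuition about the feature space being boosted over, the toy sketch below builds the binary document-by-pattern occurrence matrix from which the 1-norm soft margin (LPBoost) formulation selects sparse columns; the documents and pattern set are made up for illustration.

```python
def substring_feature_matrix(documents, patterns):
    """Binary document-by-pattern matrix: entry (i, j) is 1 if substring pattern j
    occurs anywhere in document i. Each column is one candidate feature."""
    return [[1 if p in doc else 0 for p in patterns] for doc in documents]

# Example: two toy "reviews" and three candidate substring patterns.
docs = ["great acting, great plot", "boring plot, bad acting"]
print(substring_feature_matrix(docs, ["great", "bad", "plot"]))
```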
Debaize, Lydie; Jakobczyk, Hélène; Rio, Anne-Gaëlle; Gandemer, Virginie; Troadec, Marie-Bérengère
2017-01-01
Genetic abnormalities, including chromosomal translocations, are described for many hematological malignancies. From the clinical perspective, detection of chromosomal abnormalities is relevant not only for diagnostic and treatment purposes but also for prognostic risk assessment. From the translational research perspective, the identification of fusion proteins and protein interactions has allowed crucial breakthroughs in understanding the pathogenesis of malignancies and consequently major achievements in targeted therapy. We describe the optimization of the Proximity Ligation Assay (PLA) to ascertain the presence of fusion proteins and protein interactions in non-adherent pre-B cells. PLA is an innovative method for detecting protein-protein colocalization that combines the advantages of microscopy with the precision of molecular biology, enabling detection of protein proximity theoretically ranging from 0 to 40 nm. We propose an optimized PLA procedure. We overcome the issue of maintaining non-adherent hematological cells by using traditional cytocentrifugation and optimized buffers, changing incubation times, and modifying washing steps. Further, we provide convincing negative and positive controls, and demonstrate that the optimized PLA procedure is sensitive to total protein level. The optimized PLA procedure allows the detection of fusion proteins and protein interactions on non-adherent cells. The optimized PLA procedure described here can be readily applied to various non-adherent hematological cells, from cell lines to patients' cells. The optimized PLA protocol enables detection of fusion proteins and their subcellular expression, and protein interactions in non-adherent cells. Therefore, the optimized PLA protocol provides a new tool that can be adopted in a wide range of applications in the biological field.
NASA Astrophysics Data System (ADS)
Clemens, Joshua William
Game theory has application across multiple fields, spanning from economic strategy to optimal control of an aircraft and missile on an intercept trajectory. The idea of game theory is fascinating in that we can actually mathematically model real-world scenarios and determine optimal decision making. It may not always be easy to mathematically model certain real-world scenarios; nonetheless, game theory gives us an appreciation for the complexity involved in decision making. This complexity is especially apparent when the players involved have access to different information upon which to base their decision making (a nonclassical information pattern). Here we will focus on the class of adversarial two-player games (sometimes referred to as pursuit-evasion games) with a nonclassical information pattern. We present a two-sided (simultaneous) optimization solution method for the two-player linear quadratic Gaussian (LQG) multistage game. This direct solution method allows for further interpretation of each player's decision making (strategy) as compared to previously used formal solution methods. In addition to the optimal control strategies, we present a saddle point proof and we derive an expression for the optimal performance index value. We provide some numerical results in order to further interpret the optimal control strategies and to highlight real-world application of this game-theoretic optimal solution.
Parallel Molecular Distributed Detection With Brownian Motion.
Rogers, Uri; Koh, Min-Sung
2016-12-01
This paper explores the in vivo distributed detection of an undesired biological agent's (BA's) biomarkers by a group of biologically sized nanomachines in an aqueous medium under drift. The term distributed indicates that the system information relative to the BA's presence is dispersed across the collection of nanomachines, where each nanomachine possesses limited communication, computation, and movement capabilities. Using Brownian motion with drift, a probabilistic detection and optimal data fusion framework, coined molecular distributed detection, will be introduced that combines theory from both molecular communication and distributed detection. Using the optimal data fusion framework as a guide, simulation indicates that a sub-optimal fusion method exists, allowing for a significant reduction in implementation complexity while retaining BA detection accuracy.
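The classical starting point for this kind of optimal decision fusion is the Chair-Varshney rule for combining binary local decisions. The sketch below shows only that generic rule, not the paper's molecular-channel-specific derivation or its sub-optimal variant; all parameter values are invented.

```python
import numpy as np

def fuse_local_decisions(decisions, pd, pfa, prior_present=0.5):
    """Chair-Varshney fusion: weight each nanomachine's binary vote by its
    detection / false-alarm reliability and compare the weighted sum (a
    log-likelihood ratio) with a prior-dependent threshold."""
    u = np.asarray(decisions, dtype=float)   # 1 = local "BA present" decision
    pd, pfa = np.asarray(pd, dtype=float), np.asarray(pfa, dtype=float)
    llr = np.sum(u * np.log(pd / pfa) + (1 - u) * np.log((1 - pd) / (1 - pfa)))
    return llr > np.log((1 - prior_present) / prior_present)

# Example: three sensors with different reliabilities, two of which fire.
print(fuse_local_decisions([1, 1, 0], pd=[0.9, 0.8, 0.7], pfa=[0.1, 0.2, 0.3]))
```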
NASA Astrophysics Data System (ADS)
Gromov, V. A.; Sharygin, G. S.; Mironov, M. V.
2012-08-01
An interval method of radar signal detection and selection based on a non-energetic polarization parameter, the ellipticity angle, is suggested. The examined method is optimal by the Neyman-Pearson criterion. The probability of correct detection for a preset probability of false alarm is calculated for different signal/noise ratios. Recommendations for optimization of the given method are provided.
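A minimal sketch of the constant-false-alarm idea behind a Neyman-Pearson test: the threshold on the ellipticity-angle statistic is set from its noise-only distribution so that the preset false alarm probability is met. The interval-selection logic of the actual method is not reproduced here.

```python
import numpy as np

def neyman_pearson_threshold(noise_only_statistics, pfa):
    """Pick the (1 - pfa) quantile of the detection statistic under noise alone,
    fixing the false alarm probability at the preset value."""
    return float(np.quantile(np.asarray(noise_only_statistics, dtype=float), 1.0 - pfa))

def detect(statistic, threshold):
    """Declare a detection when the measured statistic exceeds the threshold."""
    return statistic > threshold
```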
Learning optimal embedded cascades.
Saberian, Mohammad Javad; Vasconcelos, Nuno
2012-10-01
The problem of automatic and optimal design of embedded object detector cascades is considered. Two main challenges are identified: optimization of the cascade configuration and optimization of individual cascade stages, so as to achieve the best tradeoff between classification accuracy and speed, under a detection rate constraint. Two novel boosting algorithms are proposed to address these problems. The first, RCBoost, formulates boosting as a constrained optimization problem which is solved with a barrier penalty method. The constraint is the target detection rate, which is met at all iterations of the boosting process. This enables the design of embedded cascades of known configuration without extensive cross validation or heuristics. The second, ECBoost, searches over cascade configurations to achieve the optimal tradeoff between classification risk and speed. The two algorithms are combined into an overall boosting procedure, RCECBoost, which optimizes both the cascade configuration and its stages under a detection rate constraint, in a fully automated manner. Extensive experiments in face, car, pedestrian, and panda detection show that the resulting detectors achieve an accuracy versus speed tradeoff superior to those of previous methods.
2014-01-01
Background Melanoma incidence is growing and more people require follow-up to detect recurrent melanoma quickly. Those detecting their own recurrent melanoma appear to have the best prognosis, so total skin self examination (TSSE) is advocated, but practice is suboptimal. A digital intervention to support TSSE has potential but it is not clear which patient groups could benefit most. The aim of this study was to explore cutaneous melanoma recurrence patterns between 1991 and 2012 in Northeast Scotland. The objectives were to: determine how recurrent melanomas were detected during the period; explore factors potentially predictive of mode of recurrence detection; identify groups least likely to detect their own recurrent melanoma and with most potential to benefit from digital TSSE support. Methods Pathology records were used to identify those with a potential recurrent melanoma of any type (local, regional and distant). Following screening of potential cases, available secondary care-held records were subsequently scrutinised. Data were collected on demographics and clinical characteristics of the initial and recurrent melanoma. Data were handled in Microsoft Excel and transferred into SPSS 20.0 for statistical analysis. Factors predicting detection at interval or scheduled follow-up were explored using univariate techniques, with potentially influential factors combined in a multivariate binary logistic model to adjust for confounding. Results 149 potential recurrences were identified from the pathology database held at Aberdeen Royal Infirmary. Reliable data could be obtained on 94 cases of recurrent melanoma of all types. 30 recurrences (31.9%) were found by doctors at follow-up, and 64 (68.1%) in the interval between visits, usually by the patient themselves. Melanoma recurrences of all types occurring within one year were significantly more likely to be found at follow-up visits, and this remained so following adjustment for other factors that could be used to target digital TSSE support. Conclusions A digital intervention should be offered to all newly diagnosed patients. This group could benefit most from optimal TSSE practice. PMID:24612627
Digital authentication with copy-detection patterns
NASA Astrophysics Data System (ADS)
Picard, Justin
2004-06-01
Technologies for making high-quality copies of documents are getting more available, cheaper, and more efficient. As a result, the counterfeiting business engenders huge losses, ranging from 5% to 8% of worldwide sales of brand products, and endangers the reputation and value of the brands themselves. Moreover, the growth of the Internet drives the business of counterfeited documents (fake IDs, university diplomas, checks, and so on), which can be bought easily and anonymously from hundreds of companies on the Web. The incredible progress of digital imaging equipment has put in question the very possibility of verifying the authenticity of documents: how can we discern genuine documents from seemingly "perfect" copies? This paper proposes a solution based on creating digital images with specific properties, called copy-detection patterns (CDPs), that are printed on arbitrary documents, packages, etc. CDPs make optimal use of an "information loss principle": every time an image is printed or scanned, some information is lost about the original digital image. That principle applies even for the highest quality scanning, digital imaging, printing or photocopying equipment today, and will likely remain true for tomorrow. By measuring the amount of information contained in a scanned CDP, the CDP detector can take a decision on the authenticity of the document.
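A minimal sketch of the detection idea, using normalized cross-correlation as a stand-in for the information measure applied to the scanned pattern; the threshold value is purely illustrative and would have to be calibrated on genuine and counterfeit prints.

```python
import numpy as np

def cdp_score(original, scanned):
    """Normalized cross-correlation between the original digital CDP and its
    scanned print; each print/scan cycle loses information, so copies tend to
    score lower than genuine prints."""
    o = np.asarray(original, dtype=float)
    s = np.asarray(scanned, dtype=float)
    o = (o - o.mean()) / (o.std() + 1e-12)
    s = (s - s.mean()) / (s.std() + 1e-12)
    return float(np.mean(o * s))

def is_authentic(original, scanned, threshold=0.6):
    """Binary authenticity decision against an empirically calibrated threshold."""
    return cdp_score(original, scanned) > threshold
```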
NASA Astrophysics Data System (ADS)
Octarina, Sisca; Radiana, Mutia; Bangun, Putra B. J.
2018-01-01
The two-dimensional cutting stock problem (CSP) is the problem of determining cutting patterns from a set of stock with standard length and width to fulfill the demand for items. Cutting patterns were determined in order to minimize the usage of stock. This research implemented a pattern generation algorithm to formulate the Gilmore and Gomory model of the two-dimensional CSP. The constraints of the Gilmore and Gomory model ensure that the strips cut in the first stage are used in the second stage. The Branch and Cut method was used to obtain the optimal solution. Based on the results, many pattern combinations were found when the optimal cutting patterns corresponding to the first stage were combined with those of the second stage.
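For intuition about what "pattern generation" enumerates in the first stage, here is a brute-force sketch listing the feasible strip patterns for a small instance; real Gilmore-Gomory implementations generate patterns by solving knapsack pricing problems instead, so this is illustration only.

```python
from itertools import product

def strip_patterns(stock_width, item_widths, max_repeat=10):
    """Enumerate feasible first-stage cutting patterns for one strip:
    non-negative item counts whose total width fits within the stock width
    (brute force, suitable only for small instances)."""
    feasible = []
    for counts in product(range(max_repeat + 1), repeat=len(item_widths)):
        used = sum(c * w for c, w in zip(counts, item_widths))
        if 0 < used <= stock_width:
            feasible.append(counts)
    return feasible

# Example: a strip of width 10 cut into items of widths 3 and 4.
print(strip_patterns(10, [3, 4]))
```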
Patterns of Movement in Foster Care: An Optimal Matching Analysis
Havlicek, Judy
2011-01-01
Placement instability remains a vexing problem for child welfare agencies across the country. This study uses child welfare administrative data to retrospectively follow the entire placement histories (birth to age 17.5) of 474 foster youth who reached the age of majority in the state of Illinois and to search for patterns in their movement through the child welfare system. Patterns are identified through optimal matching and hierarchical cluster analyses. Multiple logistic regression is used to analyze administrative and survey data in order to examine covariates related to patterns. Five distinct patterns of movement are differentiated: Late Movers, Settled with Kin, Community Care, Institutionalized, and Early Entry. These patterns suggest high but variable rates of movement. Implications for child welfare policy and service provision are discussed. PMID:20873020
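To show what "optimal matching" computes at its core, here is a minimal edit-distance sketch over placement-state sequences; the state codes, costs, and example sequences are invented, and in the study the resulting pairwise distances feed a hierarchical cluster analysis.

```python
import numpy as np

def optimal_matching_distance(seq_a, seq_b, indel=1.0, sub=2.0):
    """Optimal matching (edit) distance between two placement-state sequences,
    computed by dynamic programming with insertion/deletion and substitution
    costs."""
    n, m = len(seq_a), len(seq_b)
    d = np.zeros((n + 1, m + 1))
    d[:, 0] = np.arange(n + 1) * indel
    d[0, :] = np.arange(m + 1) * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            swap = 0.0 if seq_a[i - 1] == seq_b[j - 1] else sub
            d[i, j] = min(d[i - 1, j] + indel,
                          d[i, j - 1] + indel,
                          d[i - 1, j - 1] + swap)
    return float(d[n, m])

# Example: monthly placement states (K = kin, F = foster home, I = institution)
# for two hypothetical histories.
print(optimal_matching_distance("KKKFFFI", "KKFFFFI"))
```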
Chemical Space Mapping and Structure-Activity Analysis of the ChEMBL Antiviral Compound Set.
Klimenko, Kyrylo; Marcou, Gilles; Horvath, Dragos; Varnek, Alexandre
2016-08-22
Curation, standardization and data fusion of the antiviral information present in the ChEMBL public database led to the definition of a robust data set, providing an association of antiviral compounds to seven broadly defined antiviral activity classes. Generative topographic mapping (GTM) subjected to evolutionary tuning was then used to produce maps of the antiviral chemical space, providing an optimal separation of compound families associated with the different antiviral classes. The ability to pinpoint the specific spots occupied (responsibility patterns) on a map by various classes of antiviral compounds opened the way for a GTM-supported search for privileged structural motifs, typical for each antiviral class. The privileged locations of antiviral classes were analyzed in order to highlight underlying privileged common structural motifs. Unlike in classical medicinal chemistry, where privileged structures are, almost always, predefined scaffolds, privileged structural motif detection based on GTM responsibility patterns has the decisive advantage of being able to automatically capture the nature ("resolution detail"-scaffold, detailed substructure, pharmacophore pattern, etc.) of the relevant structural motifs. Responsibility patterns were found to represent underlying structural motifs of various natures-from very fuzzy (groups of various "interchangeable" similar scaffolds), to the classical scenario in medicinal chemistry (underlying motif actually being the scaffold), to very precisely defined motifs (specifically substituted scaffolds).
Sensor fusion approaches for EMI and GPR-based subsurface threat identification
NASA Astrophysics Data System (ADS)
Torrione, Peter; Morton, Kenneth, Jr.; Besaw, Lance E.
2011-06-01
Despite advances in both electromagnetic induction (EMI) and ground penetrating radar (GPR) sensing and related signal processing, neither sensor alone provides a perfect tool for detecting the myriad of possible buried objects that threaten the lives of Soldiers and civilians. However, while neither GPR nor EMI sensing alone can provide optimal detection across all target types, the two approaches are highly complementary. As a result, many landmine systems seek to make use of both sensing modalities simultaneously and fuse the results from both sensors to improve detection performance for targets with widely varying metal content and GPR responses. Despite this, little work has focused on large-scale comparisons of different approaches to sensor fusion and machine learning for combining data from these highly orthogonal phenomenologies. In this work we explore a wide array of pattern recognition techniques for algorithm development and sensor fusion. Results with the ARA Nemesis landmine detection system suggest that nonlinear and non-parametric classification algorithms provide significant performance benefits for single-sensor algorithm development, and that fusion of multiple algorithms can be performed satisfactorily using basic parametric approaches, such as logistic discriminant classification, for the targets under consideration in our data sets.
Bui, Minh-Phuong N; Seo, Seong S
2014-01-01
We have developed an optical chemical sensor for the detection of organophosphate (OP) compounds using a polymerized crystalline colloidal array (PCCA) thin film composed of a close-packed colloidal array of polystyrene particles. The PCCA thin film was modified with β-cyclodextrin (β-CD) polymer as a capping cavity for the selective detection of paraoxon-ethyl and parathion-ethyl chemical agents. The fabrication of the modified PCCA thin film was optimized and the structure was characterized using scanning electron microscopy (SEM). The arrangement of polystyrene particles in the PCCA follows a pattern of the fcc (111) planes with a strong diffraction peak in the visible spectral region and pH dependence. The diffraction peak of the β-CD modified PCCA thin film showed a red shift according to the change of paraoxon-ethyl and parathion-ethyl concentrations, with a fast response time (10 s) and high sensitivity, with detection limits of 2.0 and 3.4 ppb, respectively. Furthermore, the proposed interaction mechanism of β-CD with paraoxon-ethyl and parathion-ethyl in the β-CD modified PCCA thin film was discussed.
Transducer selection and application in magnetoacoustic tomography with magnetic induction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yuqi; Wang, Jiawei; Ma, Qingyu, E-mail: maqingyu@njnu.edu.cn
2016-03-07
As an acoustic receiver, the transducer plays a vital role in signal acquisition and image reconstruction for magnetoacoustic tomography with magnetic induction (MAT-MI). In order to optimize signal acquisition, the expressions of acoustic pressure detection and waveform collection are theoretically studied based on the radiation theory of the acoustic dipole and the reception pattern of the transducer. Pressure distributions are simulated for a cylindrical phantom model using a planar piston transducer with different radii and bandwidths. The proposed theory is also verified by experimental measurements of acoustic waveform detection for an aluminum foil cylinder. It is proved that acoustic pressure with sharp and clear boundary peaks can be detected by the large-radius transducer with wide bandwidth, reflecting the differential of the induced Lorentz force accurately, which is helpful for precise conductivity reconstruction. To detect acoustic pressure with acceptable pressure amplitude, peak pressure ratio, amplitude ratio, and improved signal to noise ratio, a scanning radius of 5–10 times the radius of the object should be selected to improve the accuracy of image reconstruction. This study provides a theoretical and experimental basis for transducer selection and application in MAT-MI to obtain reconstructed images with improved resolution and definition.
Grid-Optimization Program for Photovoltaic Cells
NASA Technical Reports Server (NTRS)
Daniel, R. E.; Lee, T. S.
1986-01-01
CELLOPT program developed to assist in designing grid pattern of current-conducting material on photovoltaic cell. Analyzes parasitic resistance losses and shadow loss associated with metallized grid pattern on both round and rectangular solar cells. Though performs sensitivity studies, used primarily to optimize grid design in terms of bus bar and grid lines by minimizing power loss. CELLOPT written in APL.
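CELLOPT itself is an APL program whose loss model is not reproduced in the abstract; the sketch below is only a generic, textbook-style illustration of the shadow-loss versus sheet-resistance trade-off that grid-line optimization balances, using the standard fractional-loss expression rho_s*J*s^2/(12*V) and invented parameter values.

```python
import numpy as np

def fractional_power_loss(n_lines, cell_width, line_width,
                          sheet_resistance, current_density, voltage_mp):
    """Toy grid-design trade-off: shadow loss grows linearly with the number of
    grid lines while the emitter sheet-resistance loss (~ rho_s * J * s^2 / (12 * V),
    with s the line spacing) shrinks, so the total fractional loss has a minimum."""
    spacing = cell_width / n_lines
    shadow = n_lines * line_width / cell_width
    resistive = sheet_resistance * current_density * spacing**2 / (12.0 * voltage_mp)
    return shadow + resistive

# Sweep the grid-line count for a 2 cm cell with 100 um lines (illustrative values).
n = np.arange(2, 80)
losses = [fractional_power_loss(k, 0.02, 1.0e-4, 100.0, 350.0, 0.5) for k in n]
print(int(n[int(np.argmin(losses))]), min(losses))  # line count at minimum total loss
```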
Field-design optimization with triangular heliostat pods
NASA Astrophysics Data System (ADS)
Domínguez-Bravo, Carmen-Ana; Bode, Sebastian-James; Heiming, Gregor; Richter, Pascal; Carrizosa, Emilio; Fernández-Cara, Enrique; Frank, Martin; Gauché, Paul
2016-05-01
In this paper the optimization of a heliostat field with triangular heliostat pods is addressed. The use of structures which allow the combination of several heliostats into a common pod system aims to reduce the high costs associated with the heliostat field and therefore reduces the Levelized Cost of Electricity value. A pattern-based algorithm and two pattern-free algorithms are adapted to handle the field layout problem with triangular heliostat pods. Under the Helio100 project in South Africa, a new small-scale Solar Power Tower plant has been recently constructed. The Helio100 plant has 20 triangular pods (each with 6 heliostats) whose positions follow a linear pattern. The obtained field layouts after optimization are compared against the reference field Helio100.
Optimization of single photon detection model based on GM-APD
NASA Astrophysics Data System (ADS)
Chen, Yu; Yang, Yi; Hao, Peiyu
2017-11-01
High-precision laser ranging over one hundred kilometers requires a detector with very strong detection ability for very weak light. At present, the Geiger-mode avalanche photodiode (GM-APD) is widely used; it has high sensitivity and high photoelectric conversion efficiency. Selecting and designing the detector parameters according to the system requirements is of great importance for improving photon detection efficiency. Design optimization requires a good model. In this paper, we study the existing Poisson distribution model and consider the important detector parameters of dark count rate, dead time, quantum efficiency, and so on. We improve and optimize the detection model and select appropriate parameters to achieve optimal photon detection efficiency. The simulation is carried out using Matlab and compared with actual test results. The rationality of the model is verified. It has certain reference value in engineering applications.
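The sketch below is a simplified version of the kind of Poisson firing model discussed above: the range gate fires if at least one primary event occurs, with primaries contributed by signal photoelectrons and dark counts. Dead-time effects, which the paper also considers, are deliberately omitted, and all parameter values in the example are made up.

```python
import numpy as np

def gm_apd_detection_probability(mean_signal_photons, quantum_efficiency,
                                 dark_count_rate, gate_time):
    """Poisson model of a Geiger-mode APD range gate: probability that at least
    one primary event (signal photoelectron or dark count) occurs in the gate."""
    mean_primaries = (quantum_efficiency * mean_signal_photons
                      + dark_count_rate * gate_time)
    return 1.0 - np.exp(-mean_primaries)

# Example: 2 signal photons on average, 30% quantum efficiency,
# 100 kHz dark count rate, 10 ns range gate.
print(gm_apd_detection_probability(2.0, 0.30, 1.0e5, 10e-9))
```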
Constant-Envelope Waveform Design for Optimal Target-Detection and Autocorrelation Performances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
2013-01-01
We propose an algorithm to directly synthesize in the time domain a constant-envelope transmit waveform that achieves the optimal performance in detecting an extended target in the presence of signal-dependent interference. This approach is in contrast to the traditional indirect methods that synthesize the transmit signal following the computation of the optimal energy spectral density. Additionally, we aim to maintain a good autocorrelation property of the designed signal. Therefore, our waveform design technique solves a bi-objective optimization problem in order to simultaneously improve the detection and autocorrelation performances, which are in general conflicting in nature. We demonstrate these compromising characteristics of the detection and autocorrelation performances with numerical examples. Furthermore, in the absence of the autocorrelation criterion, our designed signal is shown to achieve a near-optimum detection performance.
Geurts, Sofie; van der Werf, Sieberen P.; Kessels, Roy P. C.
2015-01-01
The main focus of this review was to evaluate whether long-term forgetting rates (delayed tests, days to weeks after initial learning) are more sensitive measures than standard delayed recall measures to detect memory problems in various patient groups. It has been suggested that accelerated forgetting might be characteristic for epilepsy patients, but little research has been performed in other populations. Here, we identified eleven studies in a wide range of brain-injured patient groups, whose long-term forgetting patterns were compared to those of healthy controls. Signs of accelerated forgetting were found in three studies. The results of eight studies showed normal forgetting over time for the patient groups. However, most of the studies used only a recognition procedure, after optimizing initial learning. Based on these results, we recommend the use of a combined recall and recognition procedure to examine accelerated forgetting and we discuss the relevance of standard and optimized learning procedures in clinical practice. PMID:26106343
Recent developments of artificial intelligence in drying of fresh food: A review.
Sun, Qing; Zhang, Min; Mujumdar, Arun S
2018-03-01
Intellectualization is an important direction of drying development, and artificial intelligence (AI) technologies have been widely used to solve problems of nonlinear function approximation, pattern detection, data interpretation, optimization, simulation, diagnosis, control, data sorting, clustering, and noise reduction in different food drying technologies, owing to their self-learning ability, adaptability, strong fault tolerance, and high robustness in mapping the nonlinear structures of arbitrarily complex and dynamic phenomena. This article presents a comprehensive review of intelligent drying technologies and their applications. The paper starts with an introduction to the basic theoretical knowledge of ANN, fuzzy logic, and expert systems. Then, we summarize AI applications in modeling, predicting, and optimizing heat and mass transfer, thermodynamic performance parameters, and quality indicators as well as physicochemical properties of dried products in artificial biomimetic technology (electronic nose, computer vision) and different conventional drying technologies. Furthermore, opportunities and limitations of AI techniques in drying are also outlined to provide more ideas for researchers in this area.
NASA Astrophysics Data System (ADS)
Chen, Ting; Van Den Broeke, Doug; Hsu, Stephen; Hsu, Michael; Park, Sangbong; Berger, Gabriel; Coskun, Tamer; de Vocht, Joep; Chen, Fung; Socha, Robert; Park, JungChul; Gronlund, Keith
2005-11-01
Illumination optimization, often combined with optical proximity corrections (OPC) to the mask, is becoming one of the critical components for a production-worthy lithography process for 55nm-node DRAM/Flash memory devices and beyond. At low-k1, e.g. k1<0.31, both resolution and imaging contrast can be severely limited by the current imaging tools while using the standard illumination sources. Illumination optimization is a process where the source shape is varied, in both profile and intensity distribution, to achieve enhancement in the final image contrast as compared to using the non-optimized sources. The optimization can be done efficiently for repetitive patterns such as DRAM/Flash memory cores. However, illumination optimization often produces source shapes that are "free-form" like and they can be too complex to be directly applicable for production and lack the necessary radial and annular symmetries desirable for the diffractive optical element (DOE) based illumination systems in today's leading lithography tools. As a result, post-optimization rendering and verification of the optimized source shape are often necessary to meet the production-ready or manufacturability requirements and ensure optimal performance gains. In this work, we describe our approach to the illumination optimization for k1<0.31 DRAM/Flash memory patterns, using an ASML XT:1400i at NA 0.93, where all the necessary manufacturability requirements are fully accounted for during the optimization. The imaging contrast in the resist is optimized in a reduced solution space constrained by the manufacturability requirements, which include minimum distance between poles, minimum opening pole angles, minimum ring width and minimum source filling factor in the sigma space. For additional performance gains, the intensity within the optimized source can vary in a gray-tone fashion (eight shades used in this work). Although this new optimization approach can sometimes produce closely spaced solutions as gauged by the NILS based metrics, we show that the optimal and production-ready source shape solution can be easily determined by comparing the best solutions to the "free-form" solution and more importantly, by their respective imaging fidelity and process latitude ranking. Imaging fidelity and process latitude simulations are performed to analyze the impact and sensitivity of the manufacturability requirements on pattern specific illumination optimizations using ASML XT:1400i and other latest imaging systems. Mask model based OPC (MOPC) is applied and optimized sequentially to ensure that the CD uniformity requirements are met.
Optimization of optical systems.
Champagne, E B
1966-11-01
The power signal-to-noise ratios for coherent and noncoherent optical detection are presented, with the expression for noncoherent detection being examined in detail. It is found that for the long range optical system to compete with its microwave counterpart it is necessary to optimize the optical system. The optical system may be optimized by using coherent detection, or noncoherent detection if the signal is the dominant noise factor. A design procedure is presented which, in principle, always allows one to obtain signal shot-noise limited operation with noncoherent detection if pulsed operation is used. The technique should make extremely long range, high data rate systems of relatively simple design practicable.
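For reference, the standard textbook signal-to-noise ratios for the two regimes named above (not necessarily the exact expressions of this paper) are, for received signal power $P_s$, detector quantum efficiency $\eta$, photon energy $h\nu$, and bandwidth $B$:

```latex
\mathrm{SNR}_{\text{coherent (heterodyne)}} = \frac{\eta P_s}{h\nu B},
\qquad
\mathrm{SNR}_{\text{noncoherent, signal shot-noise limited}} = \frac{\eta P_s}{2\, h\nu B}.
```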
Design technology co-optimization for 14/10nm metal1 double patterning layer
NASA Astrophysics Data System (ADS)
Duan, Yingli; Su, Xiaojing; Chen, Ying; Su, Yajuan; Shao, Feng; Zhang, Recco; Lei, Junjiang; Wei, Yayi
2016-03-01
Design and technology co-optimization (DTCO) can satisfy the needs of the design, generate robust design rules, and avoid unfriendly patterns at the early stage of design to ensure a high level of manufacturability of the product within the technical capability of the present process. The DTCO methodology in this paper mainly includes design rule translation, layout analysis, model validation, hotspot classification and design rule optimization. The combination of DTCO and double patterning (DPT) can optimize the related design rules and generate a friendlier layout which meets the requirements of the 14/10nm technology node. The experiment demonstrates the methodology of DPT-compliant DTCO, which is applied to a metal1 layer from the 14/10nm node. The DTCO workflow proposed in this work is an efficient solution for optimizing the design rules for the 14/10nm technology node metal1 layer. The paper also discusses and verifies how to tune the design rules for U-shape and L-shape structures in a DPT-aware metal layer.
Design optimization of large-size format edge-lit light guide units
NASA Astrophysics Data System (ADS)
Hastanin, J.; Lenaerts, C.; Fleury-Frenette, K.
2016-04-01
In this paper, we present an original method of dot pattern generation dedicated to large-size format light guide plate (LGP) design optimization, such as for photo-bioreactors, where the number of dots greatly exceeds the maximum allowable number of optical objects supported by most common ray-tracing software. In the proposed method, in order to simplify the computational problem, the original optical system is replaced by an equivalent one. Accordingly, the original dot pattern is split into multiple small sections, inside which the dot size variation is less than the typical resolution of ink dot printing. Then, these sections are replaced by equivalent cells with a continuous diffusing film. After that, we adjust the TIS (Total Integrated Scatter) two-dimensional distribution over the grid of equivalent cells, using an iterative optimization procedure. Finally, the obtained optimal TIS distribution is converted into the dot size distribution by applying an appropriate conversion rule. An original semi-empirical equation dedicated to rectangular large-size LGPs is proposed for the initial guess of the TIS distribution. It allows a significant reduction of the total time needed for dot pattern optimization.
NASA Astrophysics Data System (ADS)
Wang, Qingrui; Liu, Ruimin; Men, Cong; Guo, Lijia
2018-05-01
The genetic algorithm (GA) was combined with the Conversion of Land Use and its Effect at Small regional extent (CLUE-S) model to obtain an optimized land use pattern for controlling non-point source (NPS) pollution. The performance of the combination was evaluated. The effect of the optimized land use pattern on the NPS pollution control was estimated by the Soil and Water Assessment Tool (SWAT) model and an assistant map was drawn to support the land use plan for the future. The Xiangxi River watershed was selected as the study area. Two scenarios were used to simulate the land use change. Under the historical trend scenario (Markov chain prediction), the forest area decreased by 2035.06 ha, and was mainly converted into paddy and dryland area. In contrast, under the optimized scenario (genetic algorithm (GA) prediction), up to 3370 ha of dryland area was converted into forest area. Spatially, the conversion of paddy and dryland into forest occurred mainly in the northwest and southeast of the watershed, where the slope land occupied a large proportion. The organic and inorganic phosphorus loads decreased by 3.6% and 3.7%, respectively, in the optimized scenario compared to those in the historical trend scenario. GA showed a better performance in optimized land use prediction. A comparison of the land use patterns in 2010 under the real situation and in 2020 under the optimized situation showed that Shennongjia and Shuiyuesi should convert 1201.76 ha and 1115.33 ha of dryland into forest areas, respectively, which represented the greatest changes in all regions in the watershed. The results of this study indicated that GA and the CLUE-S model can be used to optimize the land use patterns in the future and that SWAT can be used to evaluate the effect of land use optimization on non-point source pollution control. These methods may provide support for land use plan of an area.
Hayashi, Takahito; Ago, Kazutoshi; Nakamae, Takuma; Higo, Eri; Ogata, Mamoru
2015-09-01
Immunostaining for beta-amyloid precursor protein (APP) is recognized as an effective tool for detecting traumatic axonal injury, but it also detects axonal injury due to ischemic or other metabolic causes. Previously, we reported two different patterns of APP staining: labeled axons oriented along white matter bundles (pattern 1) and labeled axons scattered irregularly (pattern 2) (Hayashi et al., Leg Med (Tokyo) 11:S171-173, 2009). In this study, we investigated whether these two patterns are consistent with patterns of trauma and hypoxic brain damage, respectively. Sections of the corpus callosum from 44 cases of blunt head injury and equivalent control tissue were immunostained for APP. APP was detected in injured axons such as axonal bulbs and varicose axons in 24 of the 44 cases of head injuries that also survived for three or more hours after injury. In 21 of the 24 APP-positive cases, pattern 1 alone was observed in 14 cases, pattern 2 alone was not observed in any cases, and both patterns 1 and 2 were detected in 7 cases. APP-labeled injured axons were detected in 3 of the 44 control cases, all of which were pattern 2. These results suggest that pattern 1 indicates traumatic axonal injury, while pattern 2 results from hypoxic insult. These patterns may be useful to differentiate between traumatic and nontraumatic axonal injuries.
Galievsky, Victor A; Stasheuski, Alexander S; Krylov, Sergey N
2017-10-17
The limit-of-detection (LOD) in analytical instruments with fluorescence detection can be improved by reducing the noise of the optical background. Efficiently reducing optical background noise in systems with spectrally nonuniform background requires complex optimization of an emission filter, the main element of spectral filtration. Here, we introduce a filter-optimization method, which utilizes an expression for the signal-to-noise ratio (SNR) as a function of (i) all noise components (dark, shot, and flicker), (ii) the emission spectrum of the analyte, (iii) the emission spectrum of the optical background, and (iv) the transmittance spectrum of the emission filter. In essence, the noise components and the emission spectra are determined experimentally and substituted into the expression. This leaves a single variable, the transmittance spectrum of the filter, which is optimized numerically by maximizing the SNR. Maximizing the SNR provides an accurate way of filter optimization, while a previously used approach based on maximizing the signal-to-background ratio (SBR) is an approximation that can lead to much poorer LOD, specifically in detection of fluorescently labeled biomolecules. The proposed filter-optimization method will be an indispensable tool for developing new and improving existing fluorescence-detection systems aiming at ultimately low LOD.
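A minimal numerical sketch of the "maximize SNR over the transmittance spectrum" idea, with a deliberately simplified noise model (dark variance plus shot noise plus background-proportional flicker) and a greedy per-bin search; the real method's SNR expression and optimizer are not reproduced here.

```python
import numpy as np

def snr(T, analyte, background, dark_var=1.0, flicker=0.01):
    """Simplified SNR for a candidate emission-filter transmittance spectrum T
    (one value per wavelength bin): signal from the transmitted analyte spectrum,
    noise variance from dark noise, shot noise of signal plus background, and
    flicker noise proportional to the transmitted background."""
    signal = float(np.sum(T * analyte))
    bg = float(np.sum(T * background))
    noise_var = dark_var + (signal + bg) + (flicker * bg) ** 2
    return signal / np.sqrt(noise_var)

def optimize_filter(analyte, background, **noise):
    """Greedy numerical optimization: open wavelength bins one at a time, trying
    high signal-to-background bins first and keeping a bin only if it raises the SNR."""
    analyte = np.asarray(analyte, dtype=float)
    background = np.asarray(background, dtype=float)
    T = np.zeros_like(analyte)
    for i in np.argsort(-analyte / (background + 1e-12)):
        trial = T.copy()
        trial[i] = 1.0
        if snr(trial, analyte, background, **noise) > snr(T, analyte, background, **noise):
            T = trial
    return T
```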
Optimization of plasma etching of SiO2 as hard mask for HgCdTe dry etching
NASA Astrophysics Data System (ADS)
Chen, Yiyu; Ye, Zhenhua; Sun, Changhong; Zhang, Shan; Xin, Wen; Hu, Xiaoning; Ding, Ruijun; He, Li
2016-10-01
HgCdTe is one of the dominating materials for infrared detection. To pattern this material, our group has proven the feasibility of SiO2 as a hard mask in the dry etching process. In recent years, the SiO2 mask patterned by plasma with an auto-stopping layer of ZnS sandwiched between HgCdTe and SiO2 has been developed by our group. In this article, we report the optimization of SiO2 etching on HgCdTe. The etching of SiO2 is very mature nowadays. Multiple etching recipes with different gas mixtures can be used. We utilized a recipe containing Ar and CHF3. With strictly controlled photolithography, a high aspect-ratio profile of SiO2 was first achieved on a GaAs substrate. However, the same recipe could not work well on MCT because of the low thermal conductivity of HgCdTe and CdTe, resulting in overheated and deteriorated photoresist. By decreasing the temperature, the photoresist maintained its good profile. A starting table temperature around -5°C worked well enough, and a steep profile was achieved as checked by SEM. Further decreasing the temperature introduced a profile with a beveled corner. The process window of the temperature is around 10°C. Reproducibility and uniformity were also confirmed for this recipe.
Testing models of parental investment strategy and offspring size in ants.
Gilboa, Smadar; Nonacs, Peter
2006-01-01
Parental investment strategies can be fixed or flexible. A fixed strategy predicts making all offspring a single 'optimal' size. Dynamic models predict flexible strategies with more than one optimal size of offspring. Patterns in the distribution of offspring sizes may thus reveal the investment strategy. Static strategies should produce normal distributions. Dynamic strategies should often result in non-normal distributions. Furthermore, variance in morphological traits should be positively correlated with the length of developmental time the traits are exposed to environmental influences. Finally, the type of deviation from normality (i.e., skewed left or right, or platykurtic) should be correlated with the average offspring size. To test the latter prediction, we used simulations to detect significant departures from normality and categorize distribution types. Data from three species of ants strongly support the predicted patterns for dynamic parental investment. Offspring size distributions are often significantly non-normal. Traits fixed earlier in development, such as head width, are less variable than final body weight. The type of distribution observed correlates with mean female dry weight. The overall support for a dynamic parental investment model has implications for life history theory. Predicted conflicts over parental effort, sex investment ratios, and reproductive skew in cooperative breeders follow from assumptions of static parental investment strategies and omnipresent resource limitations. By contrast, with flexible investment strategies such conflicts can be either absent or maladaptive.
Muscle coordination is habitual rather than optimal.
de Rugy, Aymar; Loeb, Gerald E; Carroll, Timothy J
2012-05-23
When sharing load among multiple muscles, humans appear to select an optimal pattern of activation that minimizes costs such as the effort or variability of movement. How the nervous system achieves this behavior, however, is unknown. Here we show that contrary to predictions from optimal control theory, habitual muscle activation patterns are surprisingly robust to changes in limb biomechanics. We first developed a method to simulate joint forces in real time from electromyographic recordings of the wrist muscles. When the model was altered to simulate the effects of paralyzing a muscle, the subjects simply increased the recruitment of all muscles to accomplish the task, rather than recruiting only the useful muscles. When the model was altered to make the force output of one muscle unusually noisy, the subjects again persisted in recruiting all muscles rather than eliminating the noisy one. Such habitual coordination patterns were also unaffected by real modifications of biomechanics produced by selectively damaging a muscle without affecting sensory feedback. Subjects naturally use different patterns of muscle contraction to produce the same forces in different pronation-supination postures, but when the simulation was based on a posture different from the actual posture, the recruitment patterns tended to agree with the actual rather than the simulated posture. The results appear inconsistent with computation of motor programs by an optimal controller in the brain. Rather, the brain may learn and recall command programs that result in muscle coordination patterns generated by lower sensorimotor circuitry that are functionally "good-enough."
Singhal, Chaitali; Ingle, Aviraj; Chakraborty, Dhritiman; Pn, Anoop Krishna; Pundir, C S; Narang, Jagriti
2017-05-01
An impedimetric genosensor was fabricated for detection of hepatitis C virus (HCV) genotype 1 in serum, based on hybridization of the probe with complementary target cDNA from the sample. To achieve this, probe DNA complementary to the HCV gene was immobilized on the surface of a methylene blue (MB) doped silica nanoparticle (MB@SiNPs) modified fluorine doped tin oxide (FTO) electrode. The synthesized MB@SiNPs were characterized using scanning electron microscopy (SEM), high resolution transmission electron microscopy (HRTEM) and X-ray diffraction (XRD) patterns. This modified electrode (ssDNA/MB@SiNPs/FTO) served both as a signal amplification platform (due to the silica nanoparticles (SiNPs)) and as an electrochemical indicator (due to methylene blue (MB)) for the detection of HCV DNA in patient serum samples. The genosensor was optimized and evaluated. The sensor showed a dynamic linear range of 100-10⁶ copies/mL, with a detection limit of 90 copies/mL. The sensor was applied for detection of HCV in sera of hepatitis patients and could be renewed. The half-life of the sensor was 4 weeks. The MB@SiNPs/FTO electrode could also be used for the preparation of other genosensors. Copyright © 2017 Elsevier B.V. All rights reserved.
Visual information processing II; Proceedings of the Meeting, Orlando, FL, Apr. 14-16, 1993
NASA Technical Reports Server (NTRS)
Huck, Friedrich O. (Editor); Juday, Richard D. (Editor)
1993-01-01
Various papers on visual information processing are presented. Individual topics addressed include: aliasing as noise, satellite image processing using a Hamming neural network, edge-detection method using visual perception, adaptive vector median filters, design of a reading test for low-vision image warping, spatial transformation architectures, automatic image-enhancement method, redundancy reduction in image coding, lossless gray-scale image compression by predictive GDF, information efficiency in visual communication, optimizing JPEG quantization matrices for different applications, use of forward error correction to maintain image fidelity, and the effect of Peano scanning on image compression. Also discussed are: computer vision for autonomous robotics in space, optical processor for zero-crossing edge detection, fractal-based image edge detection, simulation of the neon spreading effect by bandpass filtering, wavelet transform (WT) on parallel SIMD architectures, nonseparable 2D wavelet image representation, adaptive image halftoning based on WT, wavelet analysis of global warming, use of the WT for signal detection, perfect reconstruction two-channel rational filter banks, N-wavelet coding for pattern classification, simulation of images of natural objects, and number-theoretic coding for iconic systems.
Detection of Drug-Resistant Mycobacterium tuberculosis.
Engström, Anna; Juréen, Pontus
2015-01-01
Tuberculosis (TB) remains a global health problem. The increasing prevalence of drug-resistant Mycobacterium tuberculosis, the causative agent of TB, demands new measures to combat the situation. Rapid and accurate diagnosis of the pathogen and its drug susceptibility pattern is essential for timely initiation of optimal treatment, and, ultimately, control of the disease. We have developed a molecular method for detection of first- and second-line drug resistance in M. tuberculosis by Pyrosequencing(®). The method consists of seven Pyrosequencing assays for the detection of mutations in the genes or promoter regions, which are most commonly responsible for resistance to the drugs rifampicin, isoniazid, ethambutol, amikacin, kanamycin, capreomycin, and fluoroquinolones. The method was validated on clinical isolates and it was shown that the sensitivity and specificity of the method were comparable to those of Sanger sequencing. In the protocol in this chapter we describe the steps necessary for setting up and performing Pyrosequencing for M. tuberculosis. The first part of the protocol describes the assay development and the second part of the protocol describes utilization of the method.
Miyata, Ryota; Ota, Keisuke; Aonishi, Toru
2013-01-01
Recently reported experimental findings suggest that the hippocampal CA1 network stores spatio-temporal spike patterns and retrieves temporally reversed and spread-out patterns. In this paper, we explore the idea that the properties of the neural interactions and the synaptic plasticity rule in the CA1 network enable it to function as a hetero-associative memory recalling such reversed and spread-out spike patterns. In line with Lengyel’s speculation (Lengyel et al., 2005), we firstly derive optimally designed spike-timing-dependent plasticity (STDP) rules that are matched to neural interactions formalized in terms of phase response curves (PRCs) for performing the hetero-associative memory function. By maximizing object functions formulated in terms of mutual information for evaluating memory retrieval performance, we search for STDP window functions that are optimal for retrieval of normal and doubly spread-out patterns under the constraint that the PRCs are those of CA1 pyramidal neurons. The system, which can retrieve normal and doubly spread-out patterns, can also retrieve reversed patterns with the same quality. Finally, we demonstrate that purposely designed STDP window functions qualitatively conform to typical ones found in CA1 pyramidal neurons. PMID:24204822
Mortality patterns and detection bias from carcass data: An example from wolf recovery in Wisconsin
Stenglein, Jennifer L.; Van Deelen, Timothy R.; Wydeven, Adrian P.; Mladenoff, David J.; Wiedenhoft, Jane E.; Businga, Nancy K.; Langenberg, Julia A.; Thomas, Nancy J.; Heisey, Dennis M.
2015-01-01
We developed models and provide computer code to make carcass recovery data more useful to wildlife managers. With these tools, wildlife managers can understand the spatial, temporal (e.g., across time periods, seasons), and demographic patterns in mortality causes from carcass recovery datasets. From datasets of radio-collared and non-collared carcasses, managers can calculate the detection bias by mortality cause in a non-collared carcass dataset compared to a collared carcass dataset. As a first step, we provide a standard procedure to assign mortality causes to carcasses. We provide an example of these methods for radio-collared wolves (n = 208) and non-collared wolves (n = 668) found dead in Wisconsin (1979–2012). We analyzed differences in mortality cause relative to season, age and sex classes, wolf harvest zones, and recovery phase (1979–1995: initial recovery, 1996–2002: early growth, 2003–2012: late growth). Seasonally, illegal kills and natural deaths were proportionally higher in winter (Oct–Mar) than summer (Apr–Sep) for collared wolves, whereas vehicle strikes and legal kills were higher in summer than winter. Spatially, more illegally killed collared wolves occurred in eastern wolf harvest zones where wolves reestablished more slowly and in the central forest region where optimal habitat is isolated by agriculture. Natural mortalities of collared wolves (e.g., disease, intraspecific strife, or starvation) were highest in western wolf harvest zones where wolves established earlier and existed at higher densities. Calculating detection bias in the non-collared dataset revealed that more than half of the non-collared carcasses on the landscape are not found. The lowest detection probabilities for non-collared carcasses (0.113–0.176) occurred in winter for natural, illegal, and unknown mortality causes.
Thin films with disordered nanohole patterns for solar radiation absorbers
NASA Astrophysics Data System (ADS)
Fang, Xing; Lou, Minhan; Bao, Hua; Zhao, C. Y.
2015-06-01
The radiation absorption in thin films with three disordered nanohole patterns, i.e., random position, non-uniform radius, and amorphous pattern, is numerically investigated by finite-difference time-domain (FDTD) simulations. Disorder can alter the absorption spectra and has an impact on the broadband absorption performance. Compared to random-position and non-uniform-radius nanoholes, the amorphous pattern can induce much better integrated absorption. The power density spectra indicate that amorphous-pattern nanoholes reduce the symmetry and provide more resonance modes that are desired for broadband absorption. The application conditions for amorphous-pattern nanoholes show that they are much more appropriate for absorption enhancement in weakly absorbing materials. Amorphous silicon thin films with disordered nanohole patterns are applied in solar radiation absorbers. Four configurations of thin films with different nanohole patterns show that interference between layers in absorbers will change the absorption performance. Therefore, it is necessary to optimize the whole radiation absorber even though a single thin film with amorphous-pattern nanoholes has reached optimal absorption.
A model for optimizing file access patterns using spatio-temporal parallelism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boonthanome, Nouanesengsy; Patchett, John; Geveci, Berk
2013-01-01
For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible file access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.
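The paper's actual model is not given in the abstract; the toy sketch below only illustrates the general shape such a read-time estimate can take (per-request latency plus bandwidth-limited transfer per concurrent reader), with all parameter values invented for the example.

```python
def estimate_read_time(total_bytes, n_readers, bandwidth_per_reader,
                       request_latency, requests_per_reader):
    """Toy read-time estimate for a parallel file system under a given access
    pattern: each reader pays a per-request latency plus transfer time at its
    own bandwidth; with an even decomposition and concurrent readers, the
    per-reader time is the total time."""
    bytes_per_reader = total_bytes / n_readers
    transfer_time = bytes_per_reader / bandwidth_per_reader
    latency_time = requests_per_reader * request_latency
    return transfer_time + latency_time

# Example: 1 TB read by 256 readers at 500 MB/s each, 64 requests of 1 ms latency each.
print(estimate_read_time(1e12, 256, 500e6, 1e-3, 64))
```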
Akbar, Umer; Raike, Robert S.; Hack, Nawaz; Hess, Christopher W.; Skinner, Jared; Martinez‐Ramirez, Daniel; DeJesus, Sol
2016-01-01
Objectives Evidence suggests that nonconventional programming may improve deep brain stimulation (DBS) therapy for movement disorders. The primary objective was to assess feasibility of testing the tolerability of several nonconventional settings in Parkinson's disease (PD) and essential tremor (ET) subjects in a single office visit. Secondary objectives were to explore for potential efficacy signals and to assess the energy demand on the implantable pulse‐generators (IPGs). Materials and Methods A custom firmware (FW) application was developed and acutely uploaded to the IPGs of eight PD and three ET subjects, allowing delivery of several nonconventional DBS settings, including narrow pulse widths, square biphasic pulses, and irregular pulse patterns. Standard clinical rating scales and several objective measures were used to compare motor outcomes with sham, clinically‐optimal and nonconventional settings. Blinded and randomized testing was conducted in a traditional office setting. Results Overall, the nonconventional settings were well tolerated. Under these conditions it was also possible to detect clinically‐relevant differences in DBS responses using clinical rating scales but not objective measures. Compared to the clinically‐optimal settings, some nonconventional settings appeared to offer similar benefit (e.g., narrow pulse widths) and others lesser benefit. Moreover, the results suggest that square biphasic pulses may deliver greater benefit. No unexpected IPG efficiency disadvantages were associated with delivering nonconventional settings. Conclusions It is feasible to acutely screen nonconventional DBS settings using controlled study designs in traditional office settings. Simple IPG FW upgrades may provide more DBS programming options for optimizing therapy. Potential advantages of narrow and biphasic pulses deserve follow up. PMID:27000764
Akbar, Umer; Raike, Robert S; Hack, Nawaz; Hess, Christopher W; Skinner, Jared; Martinez-Ramirez, Daniel; DeJesus, Sol; Okun, Michael S
2016-06-01
Evidence suggests that nonconventional programming may improve deep brain stimulation (DBS) therapy for movement disorders. The primary objective was to assess feasibility of testing the tolerability of several nonconventional settings in Parkinson's disease (PD) and essential tremor (ET) subjects in a single office visit. Secondary objectives were to explore for potential efficacy signals and to assess the energy demand on the implantable pulse-generators (IPGs). A custom firmware (FW) application was developed and acutely uploaded to the IPGs of eight PD and three ET subjects, allowing delivery of several nonconventional DBS settings, including narrow pulse widths, square biphasic pulses, and irregular pulse patterns. Standard clinical rating scales and several objective measures were used to compare motor outcomes with sham, clinically-optimal and nonconventional settings. Blinded and randomized testing was conducted in a traditional office setting. Overall, the nonconventional settings were well tolerated. Under these conditions it was also possible to detect clinically-relevant differences in DBS responses using clinical rating scales but not objective measures. Compared to the clinically-optimal settings, some nonconventional settings appeared to offer similar benefit (e.g., narrow pulse widths) and others lesser benefit. Moreover, the results suggest that square biphasic pulses may deliver greater benefit. No unexpected IPG efficiency disadvantages were associated with delivering nonconventional settings. It is feasible to acutely screen nonconventional DBS settings using controlled study designs in traditional office settings. Simple IPG FW upgrades may provide more DBS programming options for optimizing therapy. Potential advantages of narrow and biphasic pulses deserve follow up. © 2016 The Authors. Neuromodulation: Technology at the Neural Interface published by Wiley Periodicals, Inc. on behalf of International Neuromodulation Society.
Fuzzy usage pattern in customizing public transport fleet and its maintenance options
NASA Astrophysics Data System (ADS)
Husniah, H.; Herdiani, L.; Kusmaya; Supriatna, A. K.
2018-05-01
In this paper we study a two-dimensional maintenance contract for a fleet of public transport vehicles, such as buses and shuttles. The buses are sold with a two-dimensional warranty. The warranty and the maintenance contract are characterized by two parameters – age and usage – which define a two-dimensional region. However, we use a one-dimensional approach to model the age and usage of the buses. The underlying maintenance service contract is one that imposes a policy limit cost to protect the service provider (the agent) from over-claiming and to encourage the owner to perform maintenance below a specified cost in house. This in turn benefits both the owner of the buses and the agent offering the service contract. The decision problem for the agent is to determine the optimal price for each option offered, and for the owner it is to select the best contract option. We use a Nash game-theory formulation to obtain a win-win solution, i.e. the optimal price for the agent and the optimal option for the owner. We further assume three different usage patterns for the buses, i.e. low, medium, and high usage rates. In many situations the boundary between adjacent patterns is blurred. In this paper we therefore look for the optimal price for the agent and the optimal option for the owner that minimizes the expected total cost while accounting for the fuzziness of the usage-rate pattern.
Rizzo, Austin A.; Brown, Donald J.; Welsh, Stuart A.; Thompson, Patricia A.
2017-01-01
Population monitoring is an essential component of endangered species recovery programs. The federally endangered Diamond Darter Crystallaria cincotta is in need of an effective monitoring design to improve our understanding of its distribution and track population trends. Because of their small size, cryptic coloration, and nocturnal behavior, along with limitations associated with current sampling methods, individuals are difficult to detect at known occupied sites. Therefore, research is needed to determine if survey efforts can be improved by increasing the probability of individual detection. The primary objective of this study was to determine if there are seasonal and diel patterns in Diamond Darter detectability during population surveys. In addition to temporal factors, we also assessed five habitat variables that might influence individual detection. We used N-mixture models to estimate site abundances and relationships between covariates and individual detectability, and ranked models using Akaike's information criterion. During 2015, three known occupied sites were sampled 15 times each between May and Oct. The best supported model included water temperature as a quadratic function influencing individual detectability, with temperatures around 22 °C resulting in the highest detection probability. Detection probability when surveying at the optimal temperature was approximately 6% and 7.5% greater than when surveying at 16 °C and 29 °C, respectively. Time of night and day of year were not strong predictors of Diamond Darter detectability. The results of this study will allow researchers and agencies to maximize detection probability when surveying populations, resulting in greater monitoring efficiency and likely more precise abundance estimates.
NASA Astrophysics Data System (ADS)
Kuo, Hung-Fei; Kao, Guan-Hsuan; Zhu, Liang-Xiu; Hung, Kuo-Shu; Lin, Yu-Hsin
2018-02-01
This study used a digital micromirror device (DMD) to produce point-array patterns and employed a self-developed optical system to define line-and-space patterns on nonplanar substrates. First, field tracing was employed to analyze the aerial images of the lithographic system, which comprised an optical system and the DMD. Multiobjective particle swarm optimization was then applied to determine the spot overlapping rate used. The objective functions were set to minimize linewidth and maximize image log slope, through which the dose of the exposure agent could be effectively controlled and the quality of the nonplanar lithography could be enhanced. Laser beams with 405-nm wavelength were employed as the light source. Silicon substrates coated with photoresist were placed on a nonplanar translation stage. The DMD was used to produce lithographic patterns, during which the parameters were analyzed and optimized. The optimal delay time-sequence combinations were used to scan images of the patterns. Finally, an exposure linewidth of less than 10 μm was successfully achieved using the nonplanar lithographic process.
Inversion method based on stochastic optimization for particle sizing.
Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix
2016-08-01
A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
An Efficient Pattern Mining Approach for Event Detection in Multivariate Temporal Data
Batal, Iyad; Cooper, Gregory; Fradkin, Dmitriy; Harrison, James; Moerchen, Fabian; Hauskrecht, Milos
2015-01-01
This work proposes a pattern mining approach to learn event detection models from complex multivariate temporal data, such as electronic health records. We present Recent Temporal Pattern mining, a novel approach for efficiently finding predictive patterns for event detection problems. This approach first converts the time series data into time-interval sequences of temporal abstractions. It then constructs more complex time-interval patterns backward in time using temporal operators. We also present the Minimal Predictive Recent Temporal Patterns framework for selecting a small set of predictive and non-spurious patterns. We apply our methods for predicting adverse medical events in real-world clinical data. The results demonstrate the benefits of our methods in learning accurate event detection models, which is a key step for developing intelligent patient monitoring and decision support systems. PMID:26752800
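The first step of the approach, converting a numeric time series into time-interval sequences of abstractions, can be sketched roughly as below; the labels, thresholds, and example series are illustrative stand-ins, not the abstractions used in the paper.

# Sketch of the temporal-abstraction step described above: a numeric time series
# is converted into intervals labelled low/normal/high. Thresholds are illustrative.

def abstract_series(samples, low=70, high=180):
    """samples: list of (timestamp, value). Returns (label, start, end) intervals."""
    def label(v):
        return "low" if v < low else "high" if v > high else "normal"
    intervals = []
    for t, v in samples:
        lab = label(v)
        if intervals and intervals[-1][0] == lab:
            intervals[-1] = (lab, intervals[-1][1], t)   # extend the current interval
        else:
            intervals.append((lab, t, t))                # open a new interval
    return intervals

glucose = [(0, 95), (1, 110), (2, 190), (3, 210), (4, 150), (5, 65)]
print(abstract_series(glucose))
# [('normal', 0, 1), ('high', 2, 3), ('normal', 4, 4), ('low', 5, 5)]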
Decision Aids for Naval Air ASW
1980-03-15
The report describes the Algorithm for Zone Optimization Investigation (AZOI), developed by the Naval Air Development Center (NADC) for developing sonobuoy patterns for air ASW search; DAISY (Decision Aiding Information System; Wharton), addressing decision-making behavior; an artificial-intelligence sequential pattern-recognition algorithm for reconstructing the decision maker's utility functions; and a display presenting the uncertainty area of the target.
Soft-information flipping approach in multi-head multi-track BPMR systems
NASA Astrophysics Data System (ADS)
Warisarn, C.; Busyatras, W.; Myint, L. M. M.
2018-05-01
Inter-track interference is one of the most severe impairments in bit-patterned media recording system. This impairment can be effectively handled by a modulation code and a multi-head array jointly processing multiple tracks; however, such a modulation constraint has never been utilized to improve the soft-information. Therefore, this paper proposes the utilization of modulation codes with an encoded constraint defined by the criteria for soft-information flipping during a three-track data detection process. Moreover, we also investigate the optimal offset position of readheads to provide the most improvement in system performance. The simulation results indicate that the proposed systems with and without position jitter are significantly superior to uncoded systems.
Superconducting YBa2Cu3O7-δ Thin Film Detectors for Picosecond THz Pulses
NASA Astrophysics Data System (ADS)
Probst, P.; Scheuring, A.; Hofherr, M.; Wünsch, S.; Il'in, K.; Semenov, A.; Hübers, H.-W.; Judin, V.; Müller, A.-S.; Hänisch, J.; Holzapfel, B.; Siegel, M.
2012-06-01
Ultra-fast THz detectors from superconducting YBa2Cu3O7-δ (YBCO) thin films were developed to monitor picosecond THz pulses. YBCO thin films were optimized by the introduction of CeO2 and PrBaCuO buffer layers. The transition temperature of 10 nm thick films reaches 79 K. A 15 nm thick YBCO microbridge (transition temperature of 83 K, critical current density of 2.4 MA/cm² at 77 K) embedded in a planar log-spiral antenna was used to detect pulsed THz radiation of the ANKA storage ring. The first time-resolved measurements of the multi-bunch filling pattern are presented.
Spectral anomaly methods for aerial detection using KUT nuisance rejection
NASA Astrophysics Data System (ADS)
Detwiler, R. S.; Pfund, D. M.; Myjak, M. J.; Kulisek, J. A.; Seifert, C. E.
2015-06-01
This work discusses the application and optimization of a spectral anomaly method for the real-time detection of gamma radiation sources from an aerial helicopter platform. Aerial detection presents several key challenges over ground-based detection. For one, larger and more rapid background fluctuations are typical due to higher speeds, larger field of view, and geographically induced background changes. As well, the possible large altitude or stand-off distance variations cause significant steps in background count rate as well as spectral changes due to increased gamma-ray scatter with detection at higher altitudes. The work here details the adaptation and optimization of the PNNL-developed algorithm Nuisance-Rejecting Spectral Comparison Ratios for Anomaly Detection (NSCRAD), a spectral anomaly method previously developed for ground-based applications, for an aerial platform. The algorithm has been optimized for two multi-detector systems; a NaI(Tl)-detector-based system and a CsI detector array. The optimization here details the adaptation of the spectral windows for a particular set of target sources to aerial detection and the tailoring for the specific detectors. As well, the methodology and results for background rejection methods optimized for the aerial gamma-ray detection using Potassium, Uranium and Thorium (KUT) nuisance rejection are shown. Results indicate that use of a realistic KUT nuisance rejection may eliminate metric rises due to background magnitude and spectral steps encountered in aerial detection due to altitude changes and geographically induced steps such as at land-water interfaces.
Analysis of dynamically stable patterns in a maze-like corridor using the Wasserstein metric.
Ishiwata, Ryosuke; Kinukawa, Ryota; Sugiyama, Yuki
2018-04-23
The two-dimensional optimal velocity (2d-OV) model represents a dissipative system with asymmetric interactions, thus being suitable to reproduce behaviours such as pedestrian dynamics and the collective motion of living organisms. In this study, we found that particles in the 2d-OV model form optimal patterns in a maze-like corridor. Then, we estimated the stability of such patterns using the Wasserstein metric. Furthermore, we mapped these patterns into the Wasserstein metric space and represented them as points in a plane. As a result, we discovered that the stability of the dynamical patterns is strongly affected by the model sensitivity, which controls the motion of each particle. In addition, we verified the existence of two stable macroscopic patterns which were cohesive, stable, and appeared regularly over the time evolution of the model.
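A minimal sketch of the distance computation underlying this kind of stability analysis is given below, using scipy's one-dimensional Wasserstein distance on the x-coordinates of two synthetic particle snapshots; the paper's embedding of full 2d-OV patterns into the Wasserstein metric space is more involved.

# Sketch of using the Wasserstein metric to compare particle patterns, as in the
# stability analysis above. For simplicity the 2D particle positions are projected
# onto one axis; positions here are random stand-ins for 2d-OV snapshots.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
pattern_a = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # stand-in particle positions
pattern_b = rng.normal(loc=0.5, scale=1.2, size=(200, 2))

# Distance between the x-coordinate distributions of the two snapshots.
d = wasserstein_distance(pattern_a[:, 0], pattern_b[:, 0])
print(f"1-D Wasserstein distance between snapshots: {d:.3f}")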
NASA Astrophysics Data System (ADS)
Bohra, Murtaza
Legged rovers are often considered viable solutions for traversing unknown terrain. This work addresses the optimal locomotion reconfigurability of quadruped rovers, which consists of obtaining optimal locomotion modes and transitioning between them. A 2D sagittal-plane rover model is considered, based on a domestic cat. Using a genetic algorithm, the gait, pose, and control variables that minimize torque or maximize speed are found separately. The optimization approach takes into account the elimination of leg impact while considering the entire variable spectrum. The optimal solutions are consistent with other work on gait optimization and are similar to gaits found in quadruped animals. An online, model-free gait planning framework based on Central Pattern Generators is also implemented. It is used to generate joint and control trajectories for any arbitrarily varying speed profile and is shown to regulate locomotion transition and speed modulation, both endogenously and continuously.
Research on intrusion detection based on Kohonen network and support vector machine
NASA Astrophysics Data System (ADS)
Shuai, Chunyan; Yang, Hengcheng; Gong, Zeweiyi
2018-05-01
Support vector machines applied directly to network intrusion detection systems suffer from low detection accuracy and long detection times. Optimizing the SVM parameters can greatly improve detection accuracy, but the long optimization time makes the approach unsuitable for high-speed networks. A method based on Kohonen neural network feature selection is therefore proposed to reduce the parameter-optimization time of the support vector machine. First, the weights of the KDD99 network intrusion data are computed with a Kohonen network and features are selected by weight. Then, after feature selection is completed, a genetic algorithm (GA) and grid search are used for parameter optimization to find appropriate parameters, and the data are classified by support vector machines. Comparative experiments show that feature selection reduces the parameter-optimization time with little influence on classification accuracy. The experiments suggest that the support vector machine can be used in a network intrusion detection system while reducing the miss rate.
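A rough sketch of the second stage, with the genetic-algorithm search simplified to a plain grid search and the Kohonen-derived feature weights replaced by a placeholder array, might look as follows; the dataset, weight vector, and parameter grid are all assumptions for illustration.

# Sketch of the second stage described above: after features have been ranked by
# Kohonen-network weights (here a placeholder random ranking), an SVM is tuned by
# grid search on the reduced feature set. Dataset and weights are stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=41, n_informative=10, random_state=0)

som_weights = np.random.default_rng(1).random(X.shape[1])   # placeholder for SOM-derived weights
keep = np.argsort(som_weights)[::-1][:10]                   # keep the 10 highest-weighted features
X_sel = X[:, keep]

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
search = GridSearchCV(SVC(), {"C": [1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]}, cv=3)
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))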
CNV detection method optimized for high-resolution arrayCGH by normality test.
Ahn, Jaegyoon; Yoon, Youngmi; Park, Chihyun; Park, Sanghyun
2012-04-01
High-resolution arrayCGH platform makes it possible to detect small gains and losses which previously could not be measured. However, current CNV detection tools fitted to early low-resolution data are not applicable to larger high-resolution data. When CNV detection tools are applied to high-resolution data, they suffer from high false-positives, which increases validation cost. Existing CNV detection tools also require optimal parameter values. In most cases, obtaining these values is a difficult task. This study developed a CNV detection algorithm that is optimized for high-resolution arrayCGH data. This tool operates up to 1500 times faster than existing tools on a high-resolution arrayCGH of whole human chromosomes which has 42 million probes whose average length is 50 bases, while preserving false positive/negative rates. The algorithm also uses a normality test, thereby removing the need for optimal parameters. To our knowledge, this is the first formulation for CNV detecting problems that results in a near-linear empirical overall complexity for real high-resolution data. Copyright © 2012 Elsevier Ltd. All rights reserved.
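As a loose illustration of how a normality test can replace hand-tuned thresholds when screening log-ratios for candidate copy-number changes, consider the sketch below; the windowing scheme, the p-value cutoff, and the simulated gain are assumptions and do not reproduce the published algorithm.

# Rough sketch of how a normality test can stand in for hand-tuned thresholds when
# screening arrayCGH log-ratios for candidate gains/losses. This illustrates the
# idea only, not the algorithm published above.
import numpy as np
from scipy.stats import normaltest

rng = np.random.default_rng(2)
log_ratios = rng.normal(0.0, 0.2, 10_000)
log_ratios[4_000:4_300] += 0.8          # simulated single-copy gain

window = 300
candidates = []
for start in range(0, len(log_ratios) - window, window):
    # A window plus its flanking background should look like one normal distribution
    # unless the window contains a shifted (gained or lost) segment.
    segment = np.concatenate([log_ratios[max(0, start - window):start],
                              log_ratios[start:start + window]])
    stat, p = normaltest(segment)
    if p < 1e-3:
        candidates.append((start, start + window))
print(candidates)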
Optimal choice of word length when comparing two Markov sequences using a χ²-statistic.
Bai, Xin; Tang, Kujin; Ren, Jie; Waterman, Michael; Sun, Fengzhu
2017-10-03
Alignment-free sequence comparison using counts of word patterns (grams, k-tuples) has become an active research topic due to the large amount of sequence data from the new sequencing technologies. Genome sequences are frequently modelled by Markov chains, and the likelihood ratio test or the corresponding approximate χ²-statistic has been suggested to compare two sequences. However, it is not known how to best choose the word length k in such studies. We develop an optimal strategy to choose k by maximizing the statistical power of detecting differences between two sequences. Let the orders of the Markov chains for the two sequences be r1 and r2, respectively. We show through both simulations and theoretical studies that the optimal k = max(r1, r2) + 1 for both long sequences and next generation sequencing (NGS) read data. The orders of the Markov chains may be unknown and several methods have been developed to estimate the orders of Markov chains based on both long sequences and NGS reads. We study the power loss of the statistics when the estimated orders are used. It is shown that the power loss is minimal for some of the estimators of the orders of Markov chains. Our studies provide guidelines on choosing the optimal word length for the comparison of Markov sequences.
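A simplified illustration of the word-length rule is sketched below: words of length k = max(r1, r2) + 1 are counted in two sequences and the count tables compared with a plain contingency-table chi-square, which stands in for the Markov-model-based statistic of the paper; the sequences and assumed orders are synthetic.

# Simplified illustration of the word-length rule above. A plain contingency-table
# chi-square over k-word counts is used as a stand-in for the paper's statistic.
import random
from collections import Counter
from itertools import product
from scipy.stats import chi2_contingency

def word_counts(seq, k):
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts[''.join(w)] for w in product("ACGT", repeat=k)]

r1, r2 = 1, 2                     # assumed Markov orders of the two sequences
k = max(r1, r2) + 1               # optimal word length from the result above

random.seed(0)
seq1 = ''.join(random.choices("ACGT", k=5000))
seq2 = ''.join(random.choices("ACGT", weights=[3, 2, 2, 3], k=5000))

chi2, p, dof, _ = chi2_contingency([word_counts(seq1, k), word_counts(seq2, k)])
print(f"k={k}  chi2={chi2:.1f}  p={p:.3g}")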
Scavengers on the move: behavioural changes in foraging search patterns during the annual cycle.
López-López, Pascual; Benavent-Corai, José; García-Ripollés, Clara; Urios, Vicente
2013-01-01
Optimal foraging theory predicts that animals will tend to maximize foraging success by optimizing search strategies. However, how organisms detect sparsely distributed food resources remains an open question. When targets are sparse and unpredictably distributed, a Lévy strategy should maximize foraging success. By contrast, when resources are abundant and regularly distributed, simple brownian random movement should be sufficient. Although very different groups of organisms exhibit Lévy motion, the shift from a Lévy to a brownian search strategy has been suggested to depend on internal and external factors such as sex, prey density, or environmental context. However, animal response at the individual level has received little attention. We used GPS satellite-telemetry data of Egyptian vultures Neophron percnopterus to examine movement patterns at the individual level during consecutive years, with particular interest in the variations in foraging search patterns during the different periods of the annual cycle (i.e. breeding vs. non-breeding). Our results show that vultures followed a brownian search strategy in their wintering sojourn in Africa, whereas they exhibited a more complex foraging search pattern at breeding grounds in Europe, including Lévy motion. Interestingly, our results showed that individuals shifted between search strategies within the same period of the annual cycle in successive years. Results could be primarily explained by the different environmental conditions in which foraging activities occur. However, the high degree of behavioural flexibility exhibited during the breeding period in contrast to the non-breeding period is challenging, suggesting that not only environmental conditions explain individuals' behaviour but also individuals' cognitive abilities (e.g., memory effects) could play an important role. Our results support the growing awareness about the role of behavioural flexibility at the individual level, adding new empirical evidence about how animals in general, and particularly scavengers, solve the problem of efficiently finding food resources.
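One standard diagnostic in this kind of analysis is a maximum-likelihood fit of the power-law (Lévy) exponent to observed step lengths, mu = 1 + n / sum(ln(x_i / x_min)); the sketch below applies it to synthetic Pareto-distributed steps and is a generic illustration rather than the authors' model-selection pipeline.

# Sketch of a maximum-likelihood fit of the Lévy (power-law) exponent to step
# lengths. The data are synthetic; a full analysis would also compare against an
# exponential (Brownian-like) alternative.
import numpy as np

def powerlaw_mle(steps, x_min):
    x = np.asarray([s for s in steps if s >= x_min], dtype=float)
    mu_hat = 1.0 + len(x) / np.sum(np.log(x / x_min))
    return mu_hat, len(x)

rng = np.random.default_rng(3)
x_min = 1.0
alpha = 1.0                                                  # Pareto tail index; pdf exponent mu = alpha + 1 = 2
levy_steps = x_min * (1.0 - rng.random(5000)) ** (-1.0 / alpha)
mu, n = powerlaw_mle(levy_steps, x_min)
print(f"fitted Lévy exponent mu = {mu:.2f} from {n} steps")  # expect ~2 for these data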
Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing
2017-11-10
The speckle pattern (line-by-line) sequential extraction (SPSE) metric is proposed based on one-dimensional speckle intensity level-crossing theory. Through sequential extraction of the received speckle information, speckle metrics for estimating the variation of the focusing spot size on a remote diffuse target are obtained. Based on simulation, we discuss the SPSE metric's range of application under theoretical conditions and show that the aperture size affects the metric performance of the observation system. The results of the analyses are verified by experiment. The method is applied to the detection of a relatively static target (speckle jitter frequency less than the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance and, moreover, can estimate the spot size under some conditions. Therefore, monitoring and feedback of the far-field spot can be implemented in laser focusing system applications and help the system optimize its focusing performance.
Fogarty, Barbara A; Heppert, Kathleen E; Cory, Theodore J; Hulbutta, Kalonie R; Martin, R Scott; Lunte, Susan M
2005-06-01
The use of CO(2) laser ablation for the patterning of capillary electrophoresis (CE) microchannels in poly(dimethylsiloxane)(PDMS) is described. Low-cost polymer devices were produced using a relatively inexpensive CO(2) laser system that facilitated rapid patterning and ablation of microchannels. Device designs were created using a commercially available software package. The effects of PDMS thickness, laser focusing, power, and speed on the resulting channel dimensions were investigated. Using optimized settings, the smallest channels that could be produced averaged 33 microm in depth (11.1% RSD, N= 6) and 110 microm in width (5.7% RSD, N= 6). The use of a PDMS substrate allowed reversible sealing of microchip components at room temperature without the need for cleanroom facilities. Using a layer of pre-cured polymer, devices were designed, ablated, and assembled within minutes. The final devices were used for microchip CE separation and detection of the fluorescently labeled neurotransmitters aspartate and glutamate.
2010-11-01
... expected target motion. Along this line, Wettergren [5] analyzed the performance of track-before-detect schemes for sensor networks. Furthermore, ... addressed by Baumgartner and Ferrari [11] for the reorganization of the sensor field to achieve maximum coverage. The track-before-detect-based optimal ... confirming a target. In accordance with the track-before-detect paradigm [4], a moving target is detected if kd (typically kd = 3 or 4) sensors detect ...
Routing performance analysis and optimization within a massively parallel computer
Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen
2013-04-16
An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.
NASA Astrophysics Data System (ADS)
Selj, G. K.; Søderblom, M.
2015-10-01
Detection of a camouflaged object in natural sceneries requires the target to be distinguishable from its local background. The development of any new camouflage pattern therefore has to rely on a well-founded test methodology - which has to be correlated with the final purpose of the pattern - as well as an evaluation procedure containing suitable criteria for i) discriminating between the targets and ii) producing a final ranking of the targets. In this study we present results from a recent camouflage assessment trial where human observers were used in a search-by-photo methodology to assess generic test camouflage patterns. We conducted a study to investigate possible improvements in camouflage patterns for battle dress uniforms. The aim was a comparative study of potential, generic patterns intended for use in arid areas (sparsely vegetated, semi-desert). We developed a test methodology that was intended to be simple, reliable and realistic with respect to the operational benefit of camouflage. Therefore we chose to conduct a human-based observer trial founded on imagery of realistic targets in natural backgrounds. Inspired by a recent and similar trial in the UK, we developed new and purpose-based software to be able to conduct the observer trial. Our preferred assessment methodology - the observer trial - was based on target recordings in 12 different, but operationally relevant scenes, collected in a dry and sparsely vegetated area (Rhodes). The scenes were chosen with the intention to span as broadly as possible. The targets were human-shaped mannequins and were situated identically in each of the scenes to allow for a relative comparison of camouflage effectiveness in each scene. Tests of significance among the targets' performance were carried out with non-parametric tests, as the corresponding detection-time distributions were in general difficult to parameterize. From the trial, containing 12 different scenes from sparsely vegetated areas, we collected detection-time distributions for 6 generic targets through visual search by 148 observers. We found that the different targets performed differently, as shown by their corresponding detection-time distributions, within a single scene. Furthermore, we gained an overall ranking over all the 12 scenes by performing a weighted sum over all scenes, intended to keep as much of the vital information on the targets' signature effectiveness as possible. Our results show that it was possible to measure the targets' performance relative to one another also when summing over all scenes. We also compared our ranking based on our preferred criterion (detection time) with a secondary criterion (probability of detection) to assess the sensitivity of the final ranking to the test set-up and evaluation criterion. We found our observer-based approach to be well suited regarding its ability to discriminate between similar targets and to assign numeric values to the observed differences in performance. We believe our approach will be well suited as a tool whenever different aspects of camouflage are to be evaluated and understood further.
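The kind of non-parametric comparison described above can be sketched as follows, with a Kruskal-Wallis test across several targets' detection-time distributions followed by a pairwise Mann-Whitney test; the detection times here are synthetic stand-ins.

# Sketch of the non-parametric significance testing described above. Detection
# times are synthetic; in the trial each target had one distribution per scene.
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(4)
detection_times = {                       # seconds until detection, one array per target
    "target_A": rng.lognormal(mean=1.5, sigma=0.5, size=148),
    "target_B": rng.lognormal(mean=1.8, sigma=0.5, size=148),
    "target_C": rng.lognormal(mean=1.6, sigma=0.5, size=148),
}

h, p = kruskal(*detection_times.values())
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")

u, p_ab = mannwhitneyu(detection_times["target_A"], detection_times["target_B"],
                       alternative="two-sided")
print(f"A vs B Mann-Whitney: U={u:.0f}, p={p_ab:.4f}")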
Lee, Ju Seok; Chen, Junghuei; Deaton, Russell; Kim, Jin-Woo
2014-01-01
Genetic material extracted from in situ microbial communities has high promise as an indicator of biological system status. However, the challenge is to access genomic information from all organisms at the population or community scale to monitor the biosystem's state. Hence, there is a need for a better diagnostic tool that provides a holistic view of a biosystem's genomic status. Here, we introduce an in vitro methodology for genomic pattern classification of biological samples that taps large amounts of genetic information from all genes present and uses that information to detect changes in genomic patterns and classify them. We developed a biosensing protocol, termed Biological Memory, that has in vitro computational capabilities to "learn" and "store" genomic sequence information directly from genomic samples without knowledge of their explicit sequences, and that discovers differences in vitro between previously unknown inputs and learned memory molecules. The Memory protocol was designed and optimized based upon (1) common in vitro recombinant DNA operations using 20-base random probes, including polymerization, nuclease digestion, and magnetic bead separation, to capture a snapshot of the genomic state of a biological sample as a DNA memory and (2) the thermal stability of DNA duplexes between new input and the memory to detect similarities and differences. For efficient read out, a microarray was used as an output method. When the microarray-based Memory protocol was implemented to test its capability and sensitivity using genomic DNA from two model bacterial strains, i.e., Escherichia coli K12 and Bacillus subtilis, results indicate that the Memory protocol can "learn" input DNA, "recall" similar DNA, differentiate between dissimilar DNA, and detect relatively small concentration differences in samples. This study demonstrated not only the in vitro information processing capabilities of DNA, but also its promise as a genomic pattern classifier that could access information from all organisms in a biological system without explicit genomic information. The Memory protocol has high potential for many applications, including in situ biomonitoring of ecosystems, screening for diseases, biosensing of pathological features in water and food supplies, and non-biological information processing of memory devices, among many.
Application of APTES-Anti-E-cadherin film for early cancer monitoring.
Ben Ismail, Manel; Carreiras, Franck; Agniel, Rémy; Mili, Donia; Sboui, Dejla; Zanina, Nahla; Othmane, Ali
2016-10-01
Cancer staging is a way to classify cancer according to the extent of the disease in the body. The stage is usually determined by several factors such as the location of the primary tumor, the tumor size, the degree of spread into the surrounding tissues, etc. The study of E-cadherin (EC) expression on cancerous cells of patients has revealed variations in the molecular expression patterns of primary tumors and metastatic tumors. The detection of these cells requires a long procedure involving conventional techniques; thus, the need for new rapid devices that permit direct and highly sensitive detection stimulates progress in the sensing field. Here, we explore whether E-cadherin could be used as a biomarker to bind and detect epithelial cancer cells. Hence, sensitive and specific detection of E-cadherin expressed on epithelial cells is approached by immobilizing anti-E-cadherin antibody (AEC) onto an aminosilanized indium-tin oxide (ITO) surface. The immunosensing surfaces have been characterized by electrochemical measurements, wettability, and confocal microscopy, and their performance has been assessed in the presence of cancer cell lines. Under optimal conditions, the resulting immunosensor displayed selective detection of E-cadherin-expressing cells, which could be detected either by fluorescence or electrochemical techniques. The developed immunosensing surface could provide a simple tool that can be applied to cancer staging. Copyright © 2016 Elsevier B.V. All rights reserved.
Thibault, Vincent; Gaudy-Graffin, Catherine; Colson, Philippe; Gozlan, Joël; Schnepf, Nathalie; Trimoulet, Pascale; Pallier, Coralie; Saune, Karine; Branger, Michel; Coste, Marianne; Thoraval, Francoise Roudot
2013-03-15
Chronic hepatitis B (CHB) is a clinical concern in human immunodeficiency virus (HIV)-infected individuals due to substantial prevalence, difficulties to treat, and severe liver disease outcome. A large nationwide cross-sectional multicentre analysis of HIV-HBV co-infected patients was designed to describe and identify parameters associated with virological and clinical outcome of CHB in HIV-infected individuals with detectable HBV viremia. A multicenter collaborative cross-sectional study was launched in 19 French University hospitals distributed through the country. From January to December 2007, HBV load, genotype, clinical and epidemiological characteristics of 223 HBV-HIV co-infected patients with an HBV replication over 1000 IU/mL were investigated. Patients were mostly male (82%, mean age 42 years). Genotype distribution (A 52%; E 23.3%; D 16.1%) was linked to risk factors, geographic origin, and co-infection with other hepatitis viruses. This genotypic pattern highlights divergent contamination event timelines by HIV and HBV viruses. Most patients (74.7%) under antiretroviral treatment were receiving a drug with anti-HBV activity, including 47% receiving TDF. Genotypic lamivudine-resistance detected in 26% of the patients was linked to duration of lamivudine exposure, age, CD4 count and HIV load. Resistance to adefovir (rtA181T/V) was detected in 2.7% of patients. Advanced liver lesions were observed in 54% of cases and were associated with an older age and lower CD4 counts but not with viral load or genotype. Immune escape HBsAg variants were seldom detected. Despite the detection of advanced liver lesions in most patients, few were not receiving anti-HBV drugs and for those treated with the most potent anti-HBV drugs, persistent replication suggested non-optimal adherence. Heterogeneity in HBV strains reflects epidemiological differences that may impact liver disease progression. These findings are strong arguments to further optimize clinical management and to promote vaccination in HIV-infected patients.
Luo, Tian; Ke, Jing; Xie, Yunfei; Dong, Yuming
2017-10-01
Capillary electrophoresis (CE) with ultraviolet detection was applied to determine underivatized amino acids in beer, based on the coordination interaction of copper ions and amino acids. An online sweeping technique was combined with CE to improve detection sensitivity. Using the United Nations Food and Agriculture Organization/World Health Organization model of essential amino acid pattern and the flavor of amino acids, the quality and taste of three kinds of beer were evaluated. It was found that Beer2 had higher quality than the other two kinds and that the content of phenylalanine, proline, serine, and isoleucine was relatively large in all three kinds of beer, with a great influence on beer flavor. Optimal conditions for separation were as follows: 50 mM CuSO4 at pH 4.40 as buffer; total length of fused silica capillary, 73 cm; effective length, 65 cm; separation voltage, 22.5 kV; and optimized sweeping condition, 70 seconds. In the appropriate range, linearity (r² > 0.9989), precision with a relative standard deviation < 8.05% (n = 5), limits of detection (0.13-0.25 μg/mL), limits of quantification (0.43-0.83 μg/mL), and recovery (80.5-115.8%) were measured. This method was shown to be applicable to the separation of amino acids in beer and to perform quantitative analysis directly without derivatization for the first time. Copyright © 2017. Published by Elsevier B.V.
Optimization of Boiling Water Reactor Loading Pattern Using Two-Stage Genetic Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobayashi, Yoko; Aiyoshi, Eitaro
2002-10-15
A new two-stage optimization method based on genetic algorithms (GAs) using an if-then heuristic rule was developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). In the first stage, the LP is optimized using an improved GA operator. In the second stage, an exposure-dependent control rod pattern (CRP) is sought using GA with an if-then heuristic rule. The procedure of the improved GA is based on deterministic operators that consist of crossover, mutation, and selection. The handling of the encoding technique and constraint conditions by that GA reflects the peculiar characteristics of the BWR. In addition, strategies such as elitism and self-reproduction are effectively used in order to improve the search speed. The LP evaluations were performed with a three-dimensional diffusion code that coupled neutronic and thermal-hydraulic models. Strong axial heterogeneities and constraints dependent on three dimensions have always necessitated the use of three-dimensional core simulators for BWRs, so that optimization of computational efficiency is required. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant in two phases. One phase is only LP optimization applying the Haling technique. The other phase is an LP optimization that considers the CRP during reactor operation. In test calculations, candidates that shuffled fresh and burned fuel assemblies within a reasonable computation time were obtained.
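A bare skeleton of the GA loop described above (crossover, mutation, selection with elitism) is sketched below; the loading pattern is reduced to a toy permutation of assembly indices and the fitness is a placeholder, since a real evaluation would call the three-dimensional coupled neutronic/thermal-hydraulic simulator.

# Skeleton of the GA loop described above (crossover, mutation, selection with
# elitism). The "loading pattern" is a toy permutation of fuel-assembly indices and
# the fitness is a placeholder; a real evaluation would call a 3-D core simulator.
import random
random.seed(0)

N_POSITIONS, POP, GENERATIONS, ELITE = 20, 30, 50, 2

def fitness(lp):
    # Placeholder objective: prefer "fresher" assemblies near the core centre.
    centre = N_POSITIONS / 2
    return -sum(a * abs(i - centre) for i, a in enumerate(lp))

def crossover(p1, p2):
    cut = random.randrange(1, N_POSITIONS)
    head = p1[:cut]
    return head + [a for a in p2 if a not in head]   # order crossover keeps a permutation

def mutate(lp, rate=0.1):
    lp = lp[:]
    if random.random() < rate:
        i, j = random.sample(range(N_POSITIONS), 2)
        lp[i], lp[j] = lp[j], lp[i]                  # swap two assemblies
    return lp

population = [random.sample(range(N_POSITIONS), N_POSITIONS) for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[:ELITE]                       # elitism: carry the best LPs forward
    children = [mutate(crossover(*random.sample(population[:10], 2)))
                for _ in range(POP - ELITE)]
    population = elite + children
print("best fitness:", fitness(max(population, key=fitness)))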
Pixel-based OPC optimization based on conjugate gradients.
Ma, Xu; Arce, Gonzalo R
2011-01-31
Optical proximity correction (OPC) methods are resolution enhancement techniques (RET) used extensively in the semiconductor industry to improve the resolution and pattern fidelity of optical lithography. In pixel-based OPC (PBOPC), the mask is divided into small pixels, each of which is modified during the optimization process. Two critical issues in PBOPC are the required computational complexity of the optimization process, and the manufacturability of the optimized mask. Most current OPC optimization methods apply the steepest descent (SD) algorithm to improve image fidelity augmented by regularization penalties to reduce the complexity of the mask. Although simple to implement, the SD algorithm converges slowly. The existing regularization penalties, however, fall short in meeting the mask rule check (MRC) requirements often used in semiconductor manufacturing. This paper focuses on developing OPC optimization algorithms based on the conjugate gradient (CG) method which exhibits much faster convergence than the SD algorithm. The imaging formation process is represented by the Fourier series expansion model which approximates the partially coherent system as a sum of coherent systems. In order to obtain more desirable manufacturability properties of the mask pattern, a MRC penalty is proposed to enlarge the linear size of the sub-resolution assistant features (SRAFs), as well as the distances between the SRAFs and the main body of the mask. Finally, a projection method is developed to further reduce the complexity of the optimized mask pattern.
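A toy version of pixel-based mask optimization with a conjugate-gradient solver is sketched below; the partially coherent imaging model is crudely replaced by a Gaussian blur and a sigmoid resist threshold, and the bound penalty stands in for the regularization and MRC terms, so this mirrors only the structure of the approach rather than a production OPC flow.

# Toy illustration of pixel-based mask optimization with a conjugate-gradient
# solver. The optical and resist models and the penalty are simplified stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize

n = 32
target = np.zeros((n, n)); target[12:20, 8:24] = 1.0        # desired printed pattern

def print_image(mask):
    aerial = gaussian_filter(mask, sigma=1.5)               # stand-in optical model
    return 1.0 / (1.0 + np.exp(-25 * (aerial - 0.5)))       # sigmoid resist threshold

def objective(flat_mask):
    mask = flat_mask.reshape(n, n)
    fidelity = np.sum((print_image(mask) - target) ** 2)
    bound_penalty = np.sum(np.clip(mask, None, 0) ** 2 + np.clip(mask - 1, 0, None) ** 2)
    return fidelity + 10.0 * bound_penalty                  # keep pixels roughly in [0, 1]

result = minimize(objective, target.ravel(), method="CG",
                  options={"maxiter": 50})                  # CG converges faster than SD here
print("final objective:", result.fun)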
KIT gene mutations and patterns of protein expression in mucosal and acral melanoma.
Abu-Abed, Suzan; Pennell, Nancy; Petrella, Teresa; Wright, Frances; Seth, Arun; Hanna, Wedad
2012-01-01
Recently characterized KIT (CD117) gene mutations have revealed new pathways involved in melanoma pathogenesis. In particular, certain subtypes harbor mutations similar to those observed in gastrointestinal stromal tumors, which are sensitive to treatment with tyrosine kinase inhibitors. The purpose of this study was to characterize KIT gene mutations and patterns of protein expression in mucosal and acral melanoma. Formalin-fixed, paraffin-embedded tissues were retrieved from our archives. Histologic assessment included routine hematoxylin-eosin stains and immunohistochemical staining for KIT. Genomic DNA was used for polymerase chain reaction-based amplification of exons 11 and 13. We identified 59 acral and mucosal melanoma cases, of which 78% showed variable levels of KIT expression. Sequencing of exons 11 and 13 was completed on all cases, and 4 (6.8%) mutant cases were isolated. We successfully optimized conditions for the detection of KIT mutations and showed that 8.6% of mucosal and 4.2% of acral melanoma cases at our institution harbor KIT mutations; all mutant cases showed strong, diffuse KIT protein expression. Our case series represents the first Canadian study to characterize KIT gene mutations and patterns of protein expression in acral and mucosal melanoma.
A Quantitative Evaluation of Drive Pattern Selection for Optimizing EIT-Based Stretchable Sensors
Nefti-Meziani, Samia; Carbonaro, Nicola
2017-01-01
Electrical Impedance Tomography (EIT) is a medical imaging technique that has been recently used to realize stretchable pressure sensors. In this method, voltage measurements are taken at electrodes placed at the boundary of the sensor and are used to reconstruct an image of the applied touch pressure points. The drawback with EIT-based sensors, however, is their low spatial resolution due to the ill-posed nature of the EIT reconstruction. In this paper, we show our performance evaluation of different EIT drive patterns, specifically strategies for electrode selection when performing current injection and voltage measurements. We compare voltage data with Signal-to-Noise Ratio (SNR) and Boundary Voltage Changes (BVC), and study image quality with Size Error (SE), Position Error (PE) and Ringing (RNG) parameters, in the case of one-point and two-point simultaneous contact locations. The study shows that, in order to improve the performance of EIT based sensors, the electrode selection strategies should dynamically change correspondingly to the location of the input stimuli. In fact, the selection of one drive pattern over another can improve the target size detection and position accuracy up to 4.7% and 18%, respectively. PMID:28858252
A Quantitative Evaluation of Drive Pattern Selection for Optimizing EIT-Based Stretchable Sensors.
Russo, Stefania; Nefti-Meziani, Samia; Carbonaro, Nicola; Tognetti, Alessandro
2017-08-31
Electrical Impedance Tomography (EIT) is a medical imaging technique that has been recently used to realize stretchable pressure sensors. In this method, voltage measurements are taken at electrodes placed at the boundary of the sensor and are used to reconstruct an image of the applied touch pressure points. The drawback with EIT-based sensors, however, is their low spatial resolution due to the ill-posed nature of the EIT reconstruction. In this paper, we show our performance evaluation of different EIT drive patterns, specifically strategies for electrode selection when performing current injection and voltage measurements. We compare voltage data with Signal-to-Noise Ratio (SNR) and Boundary Voltage Changes (BVC), and study image quality with Size Error (SE), Position Error (PE) and Ringing (RNG) parameters, in the case of one-point and two-point simultaneous contact locations. The study shows that, in order to improve the performance of EIT based sensors, the electrode selection strategies should dynamically change correspondingly to the location of the input stimuli. In fact, the selection of one drive pattern over another can improve the target size detection and position accuracy up to 4.7% and 18%, respectively.
Drew, Gary S.; Bissonette, John A.
1997-01-01
Despite their temperate to subarctic geographic range, American martens (Martes americana) possess a thermally inefficient morphology. The lack of morphological adaptations for reducing thermal costs suggests that marten may use behavioral strategies to optimize thermal budgets. During the winters of 1989–1990 and 1990–1991, we radio-collared and monitored the diel activity of 7 martens. A log-linear model suggested that the presence or absence of light was the only factor associated with marten activity patterns (p < 0.001). A regression of the percentage of active fixes on ambient temperature failed to detect an association (b = −4.45, p = 0.084, n = 12). Contents of marten scats suggested that their activity was consistent with the prey-vulnerability hypothesis. While martens must balance multiple life requisites, their activity patterns suggest that they accept increased thermal costs in order to increase foraging efficiency. However, the nocturnal activity of martens during winter was also consistent with the hypothesis that they may be able to limit their own exposure to predation risk. The nocturnal habits of Newfoundland martens in the winter were consistent with the hypothesis of avoidance of predation risk.
Optimization of Contrast Detection Power with Probabilistic Behavioral Information
Cordes, Dietmar; Herzmann, Grit; Nandy, Rajesh; Curran, Tim
2012-01-01
Recent progress in the experimental design for event-related fMRI experiments made it possible to find the optimal stimulus sequence for maximum contrast detection power using a genetic algorithm. In this study, a novel algorithm is proposed for optimization of contrast detection power by including probabilistic behavioral information, based on pilot data, in the genetic algorithm. As a particular application, a recognition memory task is studied and the design matrix optimized for contrasts involving the familiarity of individual items (pictures of objects) and the recollection of qualitative information associated with the items (left/right orientation). Optimization of contrast efficiency is a complicated issue whenever subjects’ responses are not deterministic but probabilistic. Contrast efficiencies are not predictable unless behavioral responses are included in the design optimization. However, available software for design optimization does not include options for probabilistic behavioral constraints. If the anticipated behavioral responses are included in the optimization algorithm, the design is optimal for the assumed behavioral responses, and the resulting contrast efficiency is greater than what either a block design or a random design can achieve. Furthermore, improvements of contrast detection power depend strongly on the behavioral probabilities, the perceived randomness, and the contrast of interest. The present genetic algorithm can be applied to any case in which fMRI contrasts are dependent on probabilistic responses that can be estimated from pilot data. PMID:22326984
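The quantity such a genetic algorithm optimizes can be made concrete with the standard contrast-efficiency computation, eff(c) = 1 / (c' (X'X)^{-1} c), over a design matrix built by convolving condition onsets with a haemodynamic response function; the double-gamma HRF, event sequence, and contrast below are simplified stand-ins.

# Sketch of the contrast-efficiency computation that such a genetic algorithm
# optimizes: convolve condition onsets with an HRF to build the design matrix X,
# then evaluate 1 / (c' (X'X)^{-1} c) for a contrast c.
import numpy as np
from scipy.stats import gamma

TR, n_scans = 2.0, 200
t = np.arange(0, 32, TR)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)             # crude double-gamma HRF

def design_matrix(onsets_by_condition):
    cols = []
    for onsets in onsets_by_condition:
        stim = np.zeros(n_scans)
        stim[onsets] = 1.0
        cols.append(np.convolve(stim, hrf)[:n_scans])       # onsets convolved with the HRF
    cols.append(np.ones(n_scans))                           # constant (baseline) regressor
    return np.column_stack(cols)

rng = np.random.default_rng(5)
onsets = [np.sort(rng.choice(n_scans, 40, replace=False)) for _ in range(2)]
X = design_matrix(onsets)
c = np.array([1.0, -1.0, 0.0])                              # contrast: condition 1 - condition 2
efficiency = 1.0 / (c @ np.linalg.inv(X.T @ X) @ c)
print(f"contrast detection efficiency: {efficiency:.3f}")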
NASA Astrophysics Data System (ADS)
Welch, Kevin; Leonard, Jerry; Jones, Richard D.
2010-08-01
Increasingly stringent requirements on the performance of diffractive optical elements (DOEs) used in wafer scanner illumination systems are driving continuous improvements in their associated manufacturing processes. Specifically, these processes are designed to improve the output pattern uniformity of off-axis illumination systems to minimize degradation in the ultimate imaging performance of a lithographic tool. In this paper, we discuss performance improvements in both photolithographic patterning and RIE etching of fused silica diffractive optical structures. In summary, optimized photolithographic processes were developed to increase critical-dimension uniformity and feature-size linearity across the substrate. The photoresist film thickness was also optimized for integration with an improved etch process. This etch process was itself optimized for pattern transfer fidelity, sidewall profile (wall angle, trench bottom flatness), and across-wafer etch depth uniformity. Improvements observed with these processes on idealized test structures (for ease of analysis) led to their implementation in product flows, with comparable increases in performance and yield on customer designs.
Active control of the spatial MRI phase distribution with optimal control theory
NASA Astrophysics Data System (ADS)
Lefebvre, Pauline M.; Van Reeth, Eric; Ratiney, Hélène; Beuf, Olivier; Brusseau, Elisabeth; Lambert, Simon A.; Glaser, Steffen J.; Sugny, Dominique; Grenier, Denis; Tse Ve Koon, Kevin
2017-08-01
This paper investigates the use of Optimal Control (OC) theory to design Radio-Frequency (RF) pulses that actively control the spatial distribution of the MRI magnetization phase. The RF pulses are generated through the application of the Pontryagin Maximum Principle and optimized so that the resulting transverse magnetization reproduces various non-trivial and spatial phase patterns. Two different phase patterns are defined and the resulting optimal pulses are tested both numerically with the ODIN MRI simulator and experimentally with an agar gel phantom on a 4.7 T small-animal MR scanner. Phase images obtained in simulations and experiments are both consistent with the defined phase patterns. A practical application of phase control with OC-designed pulses is also presented, with the generation of RF pulses adapted for a Magnetic Resonance Elastography experiment. This study demonstrates the possibility to use OC-designed RF pulses to encode information in the magnetization phase and could have applications in MRI sequences using phase images.
Edge detection and localization with edge pattern analysis and inflection characterization
NASA Astrophysics Data System (ADS)
Jiang, Bo
2012-05-01
In general, edges are considered to be abrupt changes or discontinuities in the intensity distribution of a two-dimensional image signal. The accuracy of front-end edge detection methods in image processing impacts the eventual success of higher-level pattern analysis downstream. To generalize edge detectors designed from a simple ideal step-function model to the real distortions found in natural images, this research on one-dimensional edge pattern analysis proposes an edge detection algorithm built on three basic edge patterns: ramp, impulse, and step. From a mathematical analysis, general rules for edge representation based upon the classification of edge types into these three categories - ramp, impulse, and step (RIS) - are developed to reduce detection and localization errors, in particular the "double edge" effect that is an important drawback of derivative methods. However, when applying one-dimensional edge patterns in two-dimensional image processing, a new issue arises: the edge detector should correctly mark inflections or junctions of edges. Research on human visual perception of objects and on information theory has pointed out that a pattern lexicon of "inflection micro-patterns" carries more information than a straight line. Research on scene perception likewise suggests that contours carrying more information are a more important factor in determining the success of scene categorization. Inflections and junctions are therefore extremely useful features whose accurate description and reconstruction are significant in solving correspondence problems in computer vision. Aside from the adoption of edge pattern analysis, inflection and junction characterization is therefore also used to extend the traditional derivative edge detection algorithm. Experiments were conducted to test my propositions about improvements in edge detection and localization accuracy. The results support the idea that these edge detection method improvements are effective in enhancing the accuracy of edge detection and localization.
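To make the ramp/impulse/step taxonomy concrete, a heuristic one-dimensional classifier is sketched below; the rules and thresholds are ad hoc illustrations and do not reproduce the algorithm described above.

# Illustrative classifier for the three 1-D edge patterns named above (ramp,
# impulse, step). Heuristics and thresholds are ad hoc, meant only to make the
# taxonomy concrete.
import numpy as np

def classify_edge(profile, blur_thresh=3):
    p = np.asarray(profile, dtype=float)
    d = np.diff(p)
    i = int(np.argmax(np.abs(d)))                      # location of the strongest transition
    returns_to_start = abs(p[-1] - p[0]) < 0.2 * np.ptp(p)
    if returns_to_start:
        return "impulse", i                            # intensity spikes and comes back
    width = int(np.sum(np.abs(d) > 0.2 * np.abs(d).max()))
    return ("ramp" if width >= blur_thresh else "step"), i

print(classify_edge([0, 0, 0, 10, 10, 10]))            # ('step', 2)
print(classify_edge([0, 2, 4, 6, 8, 10, 10]))          # ('ramp', ...)
print(classify_edge([0, 0, 10, 0, 0]))                 # ('impulse', ...)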
Toward an optimal online checkpoint solution under a two-level HPC checkpoint model
Di, Sheng; Robert, Yves; Vivien, Frederic; ...
2016-03-29
The traditional single-level checkpointing method suffers from significant overhead on large-scale platforms. Hence, multilevel checkpointing protocols have been studied extensively in recent years. The multilevel checkpoint approach allows different levels of checkpoints to be set (each with different checkpoint overheads and recovery abilities), in order to further improve the fault tolerance performance of extreme-scale HPC applications. How to optimize the checkpoint intervals for each level, however, is an extremely difficult problem. In this paper, we construct an easy-to-use two-level checkpoint model. Checkpoint level 1 deals with errors with low checkpoint/recovery overheads such as transient memory errors, while checkpoint level 2 deals with hardware crashes such as node failures. Compared with previous optimization work, our new optimal checkpoint solution offers two improvements: (1) it is an online solution without requiring knowledge of the job length in advance, and (2) it shows that periodic patterns are optimal and determines the best pattern. We evaluate the proposed solution and compare it with the most up-to-date related approaches on an extreme-scale simulation testbed constructed based on a real HPC application execution. Simulation results show that our proposed solution outperforms other optimized solutions and can improve the performance significantly in some cases. Specifically, with the new solution the wall-clock time can be reduced by up to 25.3% over that of other state-of-the-art approaches. Lastly, a brute-force comparison with all possible patterns shows that our solution is always within 1% of the best pattern in the experiments.
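A first-order sketch of choosing a periodic two-level pattern is given below: every segment of length W ends with a level-1 checkpoint and every k-th segment also takes a level-2 checkpoint, and a rough Young/Daly-style waste model (checkpoint overhead plus expected rework) is minimized by brute force; the cost model and the rates are assumptions for illustration, not the optimal solution derived in the paper.

# First-order sketch of picking a periodic two-level checkpoint pattern. The waste
# model below is a rough approximation used for illustration only.

C1, R1, lam1 = 30.0, 20.0, 1 / 3600.0      # level-1 cost, recovery, error rate (per s)
C2, R2, lam2 = 300.0, 600.0, 1 / 86400.0   # level-2 cost, recovery, crash rate (per s)

def waste_per_second(W, k):
    overhead = (C1 + C2 / k) / W
    rework1 = lam1 * (W / 2 + R1)          # transient error: redo ~half a segment
    rework2 = lam2 * (k * W / 2 + R2)      # node crash: redo ~half a level-2 period
    return overhead + rework1 + rework2

best = min(((W, k) for W in range(60, 3601, 60) for k in range(1, 25)),
           key=lambda p: waste_per_second(*p))
print("best segment length (s), level-2 period (segments):", best,
      "waste fraction ~", round(waste_per_second(*best), 4))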
Impact of lesion characteristics on the prediction of optimal poststent fractional flow reserve.
Ando, Hirohiko; Takashima, Hiroaki; Suzuki, Akihiro; Sakurai, Shinichiro; Kumagai, Soichiro; Kurita, Akiyoshi; Waseda, Katsuhisa; Amano, Tetsuya
2016-12-01
Poststent fractional flow reserve (FFR) is a useful indicator of optimal percutaneous coronary intervention, and higher poststent FFR is associated with favorable long-term clinical outcome. However, little is known about the factors influencing poststent FFR. The purpose of this study was to determine the impact of lesion characteristics on poststent FFR. For patients who had scheduled stent implantation for stable angina, FFR measurements at maximum hyperemia were performed before and after coronary stent implantation. As one of lesion characteristics, the FFR pressure drop pattern was evaluated and classified as either an abrupt or a gradual pattern according to the pullback curve of FFR. A total of 205 lesions with physiological significant stenosis were evaluated. Fractional flow reserve value increased from 0.67±0.10 to 0.87±0.07 after stent implantation. Optimal poststent FFR was achieved in 75 lesions (36.6%). Logistic regression analysis demonstrated that optimal poststent FFR was positively correlated with an abrupt pressure drop pattern (hazard ratio [HR] 2.11, 95% CI 1.06-4.15, P=.03) and prestent FFR (HR 1.04, 95% CI 1.03-2.04, P=.03; per 0.1 increase), and negatively correlated with lesion localization to the left anterior descending artery (HR 0.18, 95% CI 0.09-0.36, P<.0001). The c statistic for predicting optimal poststent FFR was 0.763 (95% CI 0.702-0.819). Abrupt pressure drop patterns, prestent FFR, and lesion localization to the left anterior descending artery were independent predictors of optimal poststent FFR. Copyright © 2016 Elsevier Inc. All rights reserved.
Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias
2016-11-01
Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has only rarely been assessed and optimized so far. With the aim to fill this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples by the design of experiments (DoE) approach and compared to a manual evaluation. The results indicate that short CT significantly improves the quality of automatic peak detection, which means that full scan acquisition without additional MS2 experiments is suggested for nontarget screening. MZmine 2 detected 75-100% of the peaks compared to manual peak detection at an intensity level of 10^5 in a validation dataset on both spiked and real water samples under optimal parameter settings. Finally, we provide an optimization workflow of MZmine 2 for LC-HRMS data processing that is applicable for environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters.
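A designed parameter sweep of this kind can be sketched as a simple full-factorial search. The parameter names and the score() function below are placeholders, not the actual MZmine 2 settings or the authors' DoE procedure; in a real workflow score() would run the peak picker on the spiked extracts and count recovered target peaks.

```python
# Minimal full-factorial parameter sweep in the spirit of a DoE optimization.
from itertools import product

grid = {
    "noise_level":     [1e3, 5e3, 1e4],
    "min_peak_height": [1e4, 5e4],
    "min_duration_s":  [3, 5, 10],
}

def score(params):
    # Placeholder objective: in the real workflow this would run the peak
    # picker under these settings and count recovered target peaks (max 78).
    return 78 - abs(params["noise_level"] - 5e3) / 1e3 - params["min_duration_s"]

best = max((dict(zip(grid, values)) for values in product(*grid.values())),
           key=score)
print("best settings found:", best, "score:", score(best))
```

A fractional-factorial or response-surface design would reduce the number of runs when the grid grows, which is the usual motivation for a formal DoE approach.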
NASA Astrophysics Data System (ADS)
Altman, Michael B.
The increasing prevalence of intensity modulated radiation therapy (IMRT) as a treatment modality has led to a renewed interest in the potential for interaction between prolonged treatment time, as frequently associated with IMRT, and the underlying radiobiology of the irradiated tissue. A particularly relevant aspect of radiobiology is cell repair capacity, which influences cell survival, and thus directly relates to the ability to control tumors and spare normal tissues. For a single fraction of radiation, the linear quadratic (LQ) model is commonly used to relate the radiation dose to the fraction of cells surviving. The LQ model implies a dependence on two time-related factors which correlate to radiobiological effects: the duration of radiation application, and the functional form of how the dose is applied over that time (the "temporal pattern of applied dose"). Although the former has been well studied, the latter has not. Thus, the goal of this research is to investigate the impact of the temporal pattern of applied dose on the survival of human cells and to explore how the manipulation of this temporal dose pattern may be incorporated into an IMRT-based radiation therapy treatment planning scheme. The hypothesis is that the temporal pattern of applied dose in a single fraction of radiation can be optimized to maximize or minimize cell kill. Furthermore, techniques which utilize this effect could have clinical ramifications. In situations where increased cell kill is desirable, such as tumor control, or limiting the degree of cell kill is important, such as the sparing of normal tissue, temporal sequences of dose which maximize or minimize cell kill (temporally "optimized" sequences) may provide greater benefit than current clinically used radiation patterns. In the first part of this work, an LQ-based modeling analysis of effects of the temporal pattern of dose on cell kill is performed. Through this, patterns are identified for maximizing cell kill for a given radiation pattern by concentrating the highest doses in the middle of a fraction (a "Triangle" pattern), or minimizing cell kill by placing the highest doses near the beginning and end (a "V-shaped" pattern). The conditions under which temporal optimization effects are most acute are also identified: irradiation of low alpha/beta tissues, long fraction durations, and high doses/fx. An in vitro study is then performed which verifies that the temporal effects and trends predicted by the modeling study are clearly manifested in human cells. Following this a phantom which could allow similar in vitro radiobiological experiments in a 3-dimensional clinically-based environment is designed, created, and dosimetrically assessed using TLDs, film, and biological assay-based techniques. The phantom is found to be a useful and versatile tool for such experiments. A scheme for utilizing the phantom in a clinical treatment environment is then developed. This includes a demonstration of prototype methods for optimizing the temporal pattern of applied dose in clinical IMRT plans to manipulate tissue-dependent effects. Looking toward future experimental validation of such plans using the phantom, an analysis of the suitability of biological assays for use in phantom-based in vitro experiments is performed. Finally, a discussion is provided about the steps necessary to integrate temporal optimization into in vivo experiments and ultimately into a clinical radiation therapy environment. 
If temporal optimization is ultimately shown to have impact in vivo, the successful implementation of the methods developed in this study could enhance the efficacy and care of thousands of patients receiving radiotherapy.
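The temporal-pattern effect described above can be illustrated with the generalized LQ model, where sublethal-damage repair enters through the Lea-Catcheside protraction factor G. The sketch below compares a "Triangle" and a "V-shaped" dose-rate profile of equal total dose; the alpha, beta and repair half-time values are illustrative assumptions, not the dissertation's fitted parameters.

```python
# Generalized LQ model with the Lea-Catcheside protraction factor G,
# used to compare temporal dose patterns of equal total dose.
import numpy as np

def surviving_fraction(dose_rate, dt, alpha=0.15, beta=0.05, t_half=0.5):
    """dose_rate: Gy/h sampled every dt hours. Returns LQ surviving fraction."""
    mu = np.log(2.0) / t_half                    # sublethal-damage repair rate (1/h)
    D = np.sum(dose_rate) * dt                   # total dose (Gy)
    t = np.arange(len(dose_rate)) * dt
    decay = np.exp(-mu * (t[:, None] - t[None, :]))
    lower = np.tril(decay, k=-1)                 # only pairs with t' < t contribute
    G = 2.0 / D**2 * np.sum(dose_rate[:, None] * dose_rate[None, :] * lower) * dt * dt
    return np.exp(-(alpha * D + G * beta * D**2))

dt = 0.5 / 600                                   # 30-minute fraction, 600 steps
t = np.arange(601) * dt
total_dose = 10.0                                # Gy per fraction (illustrative)
triangle = np.interp(t, [0.0, 0.25, 0.5], [0.0, 1.0, 0.0])   # dose concentrated mid-fraction
v_shaped = np.interp(t, [0.0, 0.25, 0.5], [1.0, 0.0, 1.0])   # dose at the start and end
for name, shape in [("Triangle", triangle), ("V-shaped", v_shaped)]:
    rate = shape / (np.sum(shape) * dt) * total_dose          # scale to the same total dose
    print(name, "surviving fraction =", f"{surviving_fraction(rate, dt):.4f}")
```

With these assumed parameters the Triangle profile yields a larger G (less repair between dose increments) and hence a lower surviving fraction than the V-shaped profile, consistent with the maximize/minimize trend described above.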
Automated image based prominent nucleoli detection
Yap, Choon K.; Kalaw, Emarene M.; Singh, Malay; Chong, Kian T.; Giron, Danilo M.; Huang, Chao-Hui; Cheng, Li; Law, Yan N.; Lee, Hwee Kuan
2015-01-01
Introduction: Nucleolar changes in cancer cells are one of the cytologic features important to the tumor pathologist in cancer assessments of tissue biopsies. However, inter-observer variability and the manual approach to this work hamper the accuracy of the assessment by pathologists. In this paper, we propose a computational method for prominent nucleoli pattern detection. Materials and Methods: Thirty-five hematoxylin and eosin stained images were acquired from prostate cancer, breast cancer, renal clear cell cancer and renal papillary cell cancer tissues. Prostate cancer images were used for the development of a computer-based automated prominent nucleoli pattern detector built on a cascade farm. An ensemble of approximately 1000 cascades was constructed by permuting different combinations of classifiers such as support vector machines, eXclusive component analysis, boosting, and logistic regression. The output of cascades was then combined using the RankBoost algorithm. The output of our prominent nucleoli pattern detector is a ranked set of detected image patches of patterns of prominent nucleoli. Results: The mean number of detected prominent nucleoli patterns in the top 100 ranked detected objects was 58 in the prostate cancer dataset, 68 in the breast cancer dataset, 86 in the renal clear cell cancer dataset, and 76 in the renal papillary cell cancer dataset. The proposed cascade farm performs twice as well as the use of a single cascade proposed in the seminal paper by Viola and Jones. For comparison, a naive algorithm that randomly chooses a pixel as a nucleoli pattern would detect five correct patterns in the first 100 ranked objects. Conclusions: Detection of sparse nucleoli patterns in a large background of highly variable tissue patterns is a difficult challenge our method has overcome. This study developed an accurate prominent nucleoli pattern detector with the potential to be used in clinical settings. PMID:26167383
Automated image based prominent nucleoli detection.
Yap, Choon K; Kalaw, Emarene M; Singh, Malay; Chong, Kian T; Giron, Danilo M; Huang, Chao-Hui; Cheng, Li; Law, Yan N; Lee, Hwee Kuan
2015-01-01
Nucleolar changes in cancer cells are one of the cytologic features important to the tumor pathologist in cancer assessments of tissue biopsies. However, inter-observer variability and the manual approach to this work hamper the accuracy of the assessment by pathologists. In this paper, we propose a computational method for prominent nucleoli pattern detection. Thirty-five hematoxylin and eosin stained images were acquired from prostate cancer, breast cancer, renal clear cell cancer and renal papillary cell cancer tissues. Prostate cancer images were used for the development of a computer-based automated prominent nucleoli pattern detector built on a cascade farm. An ensemble of approximately 1000 cascades was constructed by permuting different combinations of classifiers such as support vector machines, eXclusive component analysis, boosting, and logistic regression. The output of cascades was then combined using the RankBoost algorithm. The output of our prominent nucleoli pattern detector is a ranked set of detected image patches of patterns of prominent nucleoli. The mean number of detected prominent nucleoli patterns in the top 100 ranked detected objects was 58 in the prostate cancer dataset, 68 in the breast cancer dataset, 86 in the renal clear cell cancer dataset, and 76 in the renal papillary cell cancer dataset. The proposed cascade farm performs twice as well as the use of a single cascade proposed in the seminal paper by Viola and Jones. For comparison, a naive algorithm that randomly chooses a pixel as a nucleoli pattern would detect five correct patterns in the first 100 ranked objects. Detection of sparse nucleoli patterns in a large background of highly variable tissue patterns is a difficult challenge our method has overcome. This study developed an accurate prominent nucleoli pattern detector with the potential to be used in clinical settings.
The Optimization Design of an AC-Electroosmotic Micromixer
NASA Astrophysics Data System (ADS)
Wang, Yangyang; Suh, Yongkweon; Kang, Sangmo
2007-11-01
We propose the optimization design of an AC-electroosmotic micro-mixer, which is composed of a channel and a series of pairs of electrodes attached to the bottom wall in zigzag patterns. The AC electric field is applied to the electrodes so that a fluid flow takes place around the electrodes across the channel, thus contributing to the mixing of the fluid within the channel. We have performed numerical simulations by using a commercial code (CFX 10) to optimize the shape and pattern of the electrodes via the concept of mixing index. It is found that the best combination of two kinds of electrodes, which leads to good mixing performance, is not a simple harmonic one. When the length ratio of the two kinds of electrodes is close to 2:1, the best mixing effect is obtained. Furthermore, we will visualize the flow pattern and measure the velocity field with a PTV technique to validate the numerical simulations. In addition, the mixing pattern will be visualized via the experiment.
A Neuro-Musculo-Skeletal Model for Insects With Data-driven Optimization.
Guo, Shihui; Lin, Juncong; Wöhrl, Toni; Liao, Minghong
2018-02-01
Simulating the locomotion of insects is beneficial to many areas such as experimental biology, computer animation and robotics. This work proposes a neuro-musculo-skeletal model, which integrates biological inspiration from real insects and reproduces the gait pattern on virtual insects. The neural system is a network of spiking neurons, whose spiking patterns are controlled by the input currents. The spiking pattern provides a uniform representation of sensory information, high-level commands and control strategy. The muscle models are designed following the characteristic Hill-type muscle with customized force-length and force-velocity relationships. The model parameters, including both the neural and muscular components, are optimized via an approach of evolutionary optimization, with the data captured from real insects. The results show that the simulated gait pattern, including joint trajectories, matches the experimental data collected from real ants walking in the free mode. The simulated character is capable of moving in different directions and traversing uneven terrains.
Repurposing Blu-ray movie discs as quasi-random nanoimprinting templates for photon management
NASA Astrophysics Data System (ADS)
Smith, Alexander J.; Wang, Chen; Guo, Dongning; Sun, Cheng; Huang, Jiaxing
2014-11-01
Quasi-random nanostructures have attracted significant interest for photon management purposes. To optimize such patterns, typically very expensive fabrication processes are needed to create the pre-designed, subwavelength nanostructures. While quasi-random photonic nanostructures are abundant in nature (for example, in structural coloration), interestingly, they also exist in Blu-ray movie discs, an already mass-produced consumer product. Here we uncover that Blu-ray disc patterns are surprisingly well suited for light-trapping applications. While the algorithms in the Blu-ray industrial standard were developed with the intention of optimizing data compression and error tolerance, they have also created a quasi-random arrangement of islands and pits on the final media discs that are nearly optimized for photon management over the solar spectrum, regardless of the information stored on the discs. As a proof-of-concept, imprinting polymer solar cells with the Blu-ray patterns indeed increases their efficiencies. Simulation suggests that Blu-ray patterns could be broadly applied for solar cells made of other materials.
NASA Astrophysics Data System (ADS)
Vaidya, Manushka
Although 1.5 and 3 Tesla (T) magnetic resonance (MR) systems remain the clinical standard, the number of 7 T MR systems has increased over the past decade because of the promise of higher signal-to-noise ratio (SNR), which can translate to images with higher resolution, improved image quality and faster acquisition times. However, there are a number of technical challenges that have prevented exploiting the full potential of ultra-high field (≥ 7 T) MR imaging (MRI), such as the inhomogeneous distribution of the radiofrequency (RF) electromagnetic field and specific energy absorption rate (SAR), which can compromise image quality and patient safety. To better understand the origin of these issues, we first investigated the dependence of the spatial distribution of the magnetic field associated with a surface RF coil on the operating frequency and electrical properties of the sample. Our results demonstrated that the asymmetries between the transmit (B1+) and receive (B1-) circularly polarized components of the magnetic field, which are in part responsible for RF inhomogeneity, depend on the electric conductivity of the sample. On the other hand, when sample conductivity is low, a high relative permittivity can result in an inhomogeneous RF field distribution, due to significant constructive and destructive interference patterns between forward and reflected propagating magnetic field within the sample. We then investigated the use of high permittivity materials (HPMs) as a method to alter the field distribution and improve transmit and receive coil performance in MRI. We showed that HPM placed at a distance from an RF loop coil can passively shape the field within the sample. Our results showed improvement in transmit and receive sensitivity overlap, extension of coil field-of-view, and enhancement in transmit/receive efficiency. We demonstrated the utility of this concept by employing HPM to improve performance of an existing commercial head coil for the inferior regions of the brain, where the specific coil's imaging efficiency was inherently poor. Results showed a gain in SNR, while the maximum local and head SAR values remained below the prescribed limits. We showed that increasing coil performance with HPM could improve detection of functional MR activation during a motor-based task for whole brain fMRI. Finally, to gain an intuitive understanding of how HPM improves coil performance, we investigated how HPM separately affects signal and noise sensitivity to improve SNR. For this purpose, we employed a theoretical model based on dyadic Green's functions to compare the characteristics of current patterns, i.e. the optimal spatial distribution of coil conductors, that would either maximize SNR (ideal current patterns), maximize signal reception (signal-only optimal current patterns), or minimize sample noise (dark mode current patterns). Our results demonstrated that the presence of a lossless HPM changed the relative balance of signal-only optimal and dark mode current patterns. For a given relative permittivity, increasing the thickness of the HPM altered the magnitude of the currents required to optimize signal sensitivity at the voxel of interest as well as decreased the net electric field in the sample, which is associated, via reciprocity, to the noise received from the sample. Our results also suggested that signal-only current patterns could be used to identify HPM configurations that lead to high SNR gain for RF coil arrays.
We anticipate that physical insights from this work could be utilized to build the next generation of high performing RF coils integrated with HPM.
Pürerfellner, Helmut; Sanders, Prashanthan; Sarkar, Shantanu; Reisfeld, Erin; Reiland, Jerry; Koehler, Jodi; Pokushalov, Evgeny; Urban, Luboš; Dekker, Lukas R C
2017-10-03
Intermittent change in p-wave discernibility during periods of ectopy and sinus arrhythmia is a cause of inappropriate atrial fibrillation (AF) detection in insertable cardiac monitors (ICM). To address this, we developed and validated an enhanced AF detection algorithm. Atrial fibrillation detection in Reveal LINQ ICM uses patterns of incoherence in RR intervals and absence of P-wave evidence over a 2-min period. The enhanced algorithm includes P-wave evidence during RR irregularity as evidence of sinus arrhythmia or ectopy to adaptively optimize sensitivity for AF detection. The algorithm was developed and validated using Holter data from the XPECT and LINQ Usability studies, which collected surface electrocardiogram (ECG) and continuous ICM ECG over a 24-48 h period. The algorithm detections were compared with Holter annotations, performed by multiple reviewers, to compute episode and duration detection performance. The validation dataset comprised 3187 h of valid Holter and LINQ recordings from 138 patients, with true AF in 37 patients yielding 108 true AF episodes ≥2-min and 449 h of AF. The enhanced algorithm reduced inappropriately detected episodes by 49% and duration by 66% with <1% loss in true episodes or duration. The algorithm correctly identified 98.9% of total AF duration and 99.8% of total sinus or non-AF rhythm duration. The algorithm detected 97.2% (99.7% per-patient average) of all AF episodes ≥2-min, and 84.9% (95.3% per-patient average) of detected episodes involved AF. An enhancement that adapts sensitivity for AF detection reduced inappropriately detected episodes and duration with minimal reduction in sensitivity. © The Author 2017. Published by Oxford University Press on behalf of the European Society of Cardiology.
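The core idea of scoring RR-interval incoherence over a 2-minute window can be illustrated with a generic sketch. This is not the Reveal LINQ algorithm; the irregularity metric and threshold below are simplifications chosen only to show how an irregularly irregular rhythm separates from sinus arrhythmia.

```python
# Generic sketch of RR-interval irregularity scoring over a ~2-minute window.
import numpy as np

def af_evidence(rr_ms, threshold=0.10):
    """rr_ms: RR intervals (ms) covering ~2 minutes. Returns (score, flagged)."""
    rr = np.asarray(rr_ms, dtype=float)
    drr = np.abs(np.diff(rr))
    score = np.median(drr) / np.median(rr)       # normalized successive variability
    return score, score > threshold

rng = np.random.default_rng(0)
sinus = 800 + 20 * np.sin(np.linspace(0, 6 * np.pi, 150))   # respiratory sinus arrhythmia
af = rng.uniform(450, 950, size=180)                         # irregularly irregular intervals
for label, rr in [("sinus", sinus), ("AF-like", af)]:
    s, flag = af_evidence(rr)
    print(f"{label}: irregularity={s:.3f}, flagged={flag}")
```

The enhancement described above goes further by checking for P-wave evidence during irregular stretches before declaring AF, which is what suppresses false detections from ectopy and sinus arrhythmia.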
Katsagoni, Christina N; Papatheodoridis, George V; Papageorgiou, Maria-Vasiliki; Ioannidou, Panagiota; Deutsch, Melanie; Alexopoulou, Alexandra; Papadopoulos, Nikolaos; Fragopoulou, Elisabeth; Kontogianni, Meropi D
2017-03-01
Several lifestyle habits have been described as risk factors for nonalcoholic fatty liver disease (NAFLD). Given that both healthy and unhealthy habits tend to cluster, the aim of this study was to identify lifestyle patterns and explore their potential associations with clinical characteristics of individuals with NAFLD. One hundred and thirty-six consecutive patients with ultrasound-proven NAFLD were included. Diet and physical activity level were assessed through appropriate questionnaires. Habitual night sleep hours and duration of midday naps were recorded. Optimal sleep duration was defined as sleep hours ≥ 7 and ≤ 9 h/day. Lifestyle patterns were identified using principal component analysis. Eight components were derived explaining 67% of the total variation in lifestyle characteristics. Lifestyle pattern 3, namely high consumption of low-fat dairy products, vegetables, fish, and optimal sleep duration, was negatively associated with insulin resistance (β = -1.66, P = 0.008) and liver stiffness (β = -1.62, P = 0.05) after controlling for age, sex, body mass index, energy intake, smoking habits, adiponectin, and tumor necrosis factor-α. Lifestyle pattern 1, namely high consumption of full-fat dairy products, refined cereals, potatoes, red meat, and high television viewing time, was positively associated with insulin resistance (β = 1.66, P = 0.005), although this association was weakened after adjusting for adiponectin and tumor necrosis factor-α. A "healthy diet-optimal sleep" lifestyle pattern was beneficially associated with insulin resistance and liver stiffness in NAFLD patients independent of body weight status and energy intake.
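Deriving lifestyle patterns with principal component analysis can be sketched as follows. The variable names and data here are synthetic stand-ins, not the study's food-group and sleep variables; in the study, the resulting pattern scores were then entered into regression models against insulin resistance and liver stiffness.

```python
# Minimal sketch of deriving lifestyle patterns with PCA on standardized data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
variables = ["low_fat_dairy", "vegetables", "fish", "sleep_hours",
             "full_fat_dairy", "refined_cereals", "red_meat", "tv_hours"]
X = rng.normal(size=(136, len(variables)))       # 136 patients, synthetic data

pca = PCA(n_components=8)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("variance explained:", pca.explained_variance_ratio_.round(2))
# Loadings link each derived pattern back to the original lifestyle variables.
loadings = pca.components_
print("pattern 1 loadings:", dict(zip(variables, loadings[0].round(2))))
```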
Automation for pattern library creation and in-design optimization
NASA Astrophysics Data System (ADS)
Deng, Rock; Zou, Elain; Hong, Sid; Wang, Jinyan; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh; Ding, Hua; Huang, Jason
2015-03-01
Semiconductor manufacturing technologies are becoming increasingly complex with every passing node. Newer technology nodes are pushing the limits of optical lithography and requiring multiple exposures with exotic material stacks for each critical layer. All of this added complexity usually amounts to further restrictions in what can be designed. Furthermore, the designs must be checked against all these restrictions in verification and sign-off stages. Design rules are intended to capture all the manufacturing limitations such that yield can be maximized for any given design adhering to all the rules. Most manufacturing steps employ some sort of model based simulation which characterizes the behavior of each step. The lithography models play a very big part in the overall yield and design restrictions in patterning. However, lithography models are not practical to run during design creation due to their slow and prohibitive run times. Furthermore, the models are not usually given to foundry customers because of the confidential and sensitive nature of every foundry's processes. The design layout locations where a model flags unacceptable simulated results can be used to define pattern rules which can be shared with customers. With advanced technology nodes we see a large growth of pattern based rules. This is due to the fact that pattern matching is very fast and the rules themselves can be very complex to describe in a standard DRC language. Therefore, the patterns are either left as pattern layout clips or abstracted into pattern-like syntax which a pattern matcher can use directly. The patterns themselves can be multi-layered with "fuzzy" designations such that groups of similar patterns can be found using one description. The pattern matcher is often integrated with a DRC tool such that verification and signoff can be done in one step. The patterns can be layout constructs that are "forbidden", "waived", or simply low-yielding in nature. The patterns can also contain remedies built in so that fixing happens either automatically or in a guided manner. Building a comprehensive library of patterns is a very difficult task, especially when a new technology node is being developed or the process keeps changing. The main dilemma is not having enough representative layouts to use for model simulation where pattern locations can be marked and extracted. This paper will present an automatic pattern library creation flow by using a few known yield detractor patterns to systematically expand the pattern library and generate optimized patterns. We will also look at the specific fixing hints in terms of edge movements, additive, or subtractive changes needed during optimization. Optimization will be shown for both the digital physical implementation and custom design methods.
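The matching core of such a flow can be sketched in a toy form: a small forbidden pattern with "fuzzy" (don't care) cells is searched in a binary layout grid. A production matcher, and the fixing-hint replacement step, are far richer than this; the pattern and layout below are invented for illustration.

```python
# Toy sketch of wildcard pattern matching over a binary layout grid.
import numpy as np

FORBIDDEN = np.array([[1, None, 1]], dtype=object)   # 1 = cut, None = don't care

def find_matches(layout, pattern):
    ph, pw = pattern.shape
    hits = []
    for r in range(layout.shape[0] - ph + 1):
        for c in range(layout.shape[1] - pw + 1):
            window = layout[r:r + ph, c:c + pw]
            if all(p is None or p == w
                   for p, w in zip(pattern.ravel(), window.ravel())):
                hits.append((r, c))
    return hits

layout = np.array([[0, 1, 1, 1, 0],
                   [0, 0, 0, 0, 0]])
print("forbidden-pattern matches at (row, col):", find_matches(layout, FORBIDDEN))
```

In a real flow each matched window would then be replaced by a pre-characterized fixing solution, or annotated with the edge-movement hints described above.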
Geometry-based ensembles: toward a structural characterization of the classification boundary.
Pujol, Oriol; Masip, David
2009-06-01
This paper introduces a novel binary discriminative learning technique based on the approximation of the nonlinear decision boundary by a piecewise linear smooth additive model. The decision border is geometrically defined by means of the characterizing boundary points-points that belong to the optimal boundary under a certain notion of robustness. Based on these points, a set of locally robust linear classifiers is defined and assembled by means of a Tikhonov regularized optimization procedure in an additive model to create a final lambda-smooth decision rule. As a result, a very simple and robust classifier with a strong geometrical meaning and nonlinear behavior is obtained. The simplicity of the method allows its extension to cope with some of today's machine learning challenges, such as online learning, large-scale learning or parallelization, with linear computational complexity. We validate our approach on the UCI database, comparing with several state-of-the-art classification techniques. Finally, we apply our technique in online and large-scale scenarios and in six real-life computer vision and pattern recognition problems: gender recognition based on face images, intravascular ultrasound tissue classification, speed traffic sign detection, Chagas' disease myocardial damage severity detection, old musical scores clef classification, and action recognition using 3D accelerometer data from a wearable device. The results are promising and this paper opens a line of research that deserves further attention.
A Fault Tolerance Mechanism for On-Road Sensor Networks
Feng, Lei; Guo, Shaoyong; Sun, Jialu; Yu, Peng; Li, Wenjing
2016-01-01
On-Road Sensor Networks (ORSNs) play an important role in capturing traffic flow data for predicting short-term traffic patterns, driving assistance and self-driving vehicles. However, this kind of network is prone to large-scale communication failure if a few sensors physically fail. In this paper, to ensure that the network works normally, an effective fault-tolerance mechanism for ORSNs which mainly consists of backup on-road sensor deployment, redundant cluster head deployment and an adaptive failure detection and recovery method is proposed. Firstly, based on the N − x principle and the sensors’ failure rate, this paper formulates the backup sensor deployment problem in the form of a two-objective optimization, which explains the trade-off between the cost and fault resumption. In consideration of improving the network resilience further, this paper introduces a redundant cluster head deployment model according to the coverage constraint. Then a common solving method combining integer-continuing and sequential quadratic programming is explored to determine the optimal locations for these two deployment problems. Moreover, an Adaptive Detection and Resume (ADR) protocol is designed to recover the system communication through route and cluster adjustment if there is a backup on-road sensor mismatch. The final experiments show that our proposed mechanism can achieve an average 90% recovery rate and reduce the average number of failed sensors by at most 35.7%. PMID:27918483
Offshore seismicity in the southeastern sea of Korea
NASA Astrophysics Data System (ADS)
Park, H.; Kang, T. S.
2017-12-01
The offshore southeastern sea area of Korea appears to have a slightly higher seismicity compared to the rest of the Korean Peninsula. According to the earthquake report by the Korean Meteorological Administration (KMA), earthquakes over ML 3 have persistently occurred more than once a year during the last ten years. In this study, we used 33 events in the KMA catalog, which occurred in the offshore Ulsan (35.0°N-35.85°N, 129.45°E-130.75°E) from April 2007 to June 2017, as mother earthquakes. The waveform matching filter technique was used to precisely detect microearthquakes (child earthquakes) that occurred after the mother earthquakes. It is the optimal linear filter for maximizing the signal-to-noise ratio in the presence of additive stochastic noise. Initially, we used the continuous seismic waveforms available from KMA and the Korea Institute of Geosciences and Mineral Resources. We added data from F-net to increase the reliability of the results. The detected events were located by using P- and S-wave arrival times. The hypocentral depths were constrained by an iterative optimal solution technique which is proven to be effective under a poorly known structure. Focal mechanism solutions were obtained from the analysis of P-wave first-motion polarities. Seismicity patterns of microearthquakes and their focal mechanism results were analyzed to understand their seismogenic characteristics and their relationship to subsea seismotectonic structures.
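The matched-filter step can be illustrated with a synthetic single-channel sketch: a known "mother" waveform is slid over continuous data and normalized cross-correlation peaks above a median-based threshold are declared candidate child events. A real workflow would operate on multi-station data (e.g., with ObsPy) and stack correlations across channels; the data and threshold here are purely illustrative.

```python
# Matched-filter (template matching) detection on a synthetic trace.
import numpy as np

def normalized_xcorr(template, data):
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    out = np.empty(len(data) - n + 1)
    for i in range(len(out)):
        w = data[i:i + n]
        out[i] = np.sum(t * (w - w.mean())) / (w.std() + 1e-12)
    return out

rng = np.random.default_rng(2)
template = np.sin(np.linspace(0, 8 * np.pi, 200)) * np.hanning(200)   # mother waveform
data = rng.normal(scale=0.5, size=5000)
data[1200:1400] += 0.8 * template                # buried child event
cc = normalized_xcorr(template, data)
detections = np.where(cc > 8 * np.median(np.abs(cc)))[0]
print("candidate detections near samples:", detections[:5], "... peak at", cc.argmax())
```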
EUV focus sensor: design and modeling
NASA Astrophysics Data System (ADS)
Goldberg, Kenneth A.; Teyssier, Maureen E.; Liddle, J. Alexander
2005-05-01
We describe performance modeling and design optimization of a prototype EUV focus sensor (FS) designed for use with existing 0.3-NA EUV projection-lithography tools. At 0.3-NA and 13.5-nm wavelength, the depth of focus shrinks to 150 nm, increasing the importance of high-sensitivity focal-plane detection tools. The FS is a free-standing Ni grating structure that works in concert with a simple mask pattern of regular lines and spaces at constant pitch. The FS pitch matches that of the image-plane aerial-image intensity: it transmits the light with high efficiency when the grating is aligned with the aerial image laterally and longitudinally. Using a single-element photodetector to detect the transmitted flux, the FS is scanned laterally and longitudinally so the plane of peak aerial-image contrast can be found. The design under consideration has a fixed image-plane pitch of 80-nm, with aperture widths of 12-40-nm (1-3 wavelengths), and aspect ratios of 2-8. TEMPEST-3D is used to model the light transmission. Careful attention is paid to the annular, partially coherent, unpolarized illumination and to the annular pupil of the Micro-Exposure Tool (MET) optics for which the FS is designed. The system design balances the opposing needs of high sensitivity and high throughput, optimizing the signal-to-noise ratio in the measured intensity contrast.
EUV Focus Sensor: Design and Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, Kenneth A.; Teyssier, Maureen E.; Liddle, J. Alexander
We describe performance modeling and design optimization of a prototype EUV focus sensor (FS) designed for use with existing 0.3-NA EUV projection-lithography tools. At 0.3-NA and 13.5-nm wavelength, the depth of focus shrinks to 150 nm, increasing the importance of high-sensitivity focal-plane detection tools. The FS is a free-standing Ni grating structure that works in concert with a simple mask pattern of regular lines and spaces at constant pitch. The FS pitch matches that of the image-plane aerial-image intensity: it transmits the light with high efficiency when the grating is aligned with the aerial image laterally and longitudinally. Using a single-element photodetector to detect the transmitted flux, the FS is scanned laterally and longitudinally so the plane of peak aerial-image contrast can be found. The design under consideration has a fixed image-plane pitch of 80-nm, with aperture widths of 12-40-nm (1-3 wavelengths), and aspect ratios of 2-8. TEMPEST-3D is used to model the light transmission. Careful attention is paid to the annular, partially coherent, unpolarized illumination and to the annular pupil of the Micro-Exposure Tool (MET) optics for which the FS is designed. The system design balances the opposing needs of high sensitivity and high throughput, optimizing the signal-to-noise ratio in the measured intensity contrast.
Modeling, Monitoring and Fault Diagnosis of Spacecraft Air Contaminants
NASA Technical Reports Server (NTRS)
Ramirez, W. Fred; Skliar, Mikhail; Narayan, Anand; Morgenthaler, George W.; Smith, Gerald J.
1998-01-01
Control of air contaminants is a crucial factor in the safety considerations of crewed space flight. Indoor air quality needs to be closely monitored during long range missions such as a Mars mission, and also on large complex space structures such as the International Space Station. This work mainly pertains to the detection and simulation of air contaminants in the space station, though much of the work is easily extended to buildings, and issues of ventilation systems. Here we propose a method with which to track the presence of contaminants using an accurate physical model, and also develop a robust procedure that would raise alarms when certain tolerance levels are exceeded. A part of this research concerns the modeling of air flow inside a spacecraft, and the consequent dispersal pattern of contaminants. Our objective is to also monitor the contaminants on-line, so we develop a state estimation procedure that makes use of the measurements from a sensor system and determines an optimal estimate of the contamination in the system as a function of time and space. The real-time optimal estimates in turn are used to detect faults in the system and also offer diagnoses as to their sources. This work is concerned with the monitoring of air contaminants aboard future generation spacecraft and seeks to satisfy NASA's requirements as outlined in their Strategic Plan document (Technology Development Requirements, 1996).
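The on-line state estimation step described above is typically realized with a recursive estimator such as a Kalman filter. The sketch below uses a tiny two-zone transport model with invented matrices and noise levels; in the actual work the dynamics would come from the airflow and dispersal model, and the estimate would feed the fault-detection and diagnosis logic.

```python
# Minimal Kalman-filter sketch for on-line contaminant estimation.
import numpy as np

A = np.array([[0.95, 0.03],      # zone-to-zone transport + decay (assumed)
              [0.04, 0.94]])
H = np.array([[1.0, 0.0]])       # a single sensor in zone 1
Q = 1e-4 * np.eye(2)             # process noise covariance
R = np.array([[1e-2]])           # sensor noise covariance

x_hat = np.zeros((2, 1)); P = np.eye(2)
rng = np.random.default_rng(3)
x_true = np.array([[1.0], [0.0]])                # initial release in zone 1
for k in range(50):
    x_true = A @ x_true
    z = H @ x_true + rng.normal(scale=0.1, size=(1, 1))
    # Predict
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P
print("true:", x_true.ravel().round(3), "estimate:", x_hat.ravel().round(3))
```

Persistent, statistically significant innovations (z minus the predicted measurement) are the usual trigger for raising an alarm and localizing the contaminant source.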
Adapted random sampling patterns for accelerated MRI.
Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf
2011-02-01
Variable density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions like polynomials of different order to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters which is very time consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and the generalization potential is tested by using a range of reference images. Quantitative evaluation is performed for the downsampling experiments using RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the method presented in this paper is comparable to that of an established model-based strategy when optimization of the model parameter is carried out and yields superior results to non-optimized model parameters. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
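The core idea of drawing a sampling mask from the power spectrum of reference data can be sketched as a per-point Bernoulli draw. This simplification omits the paper's implementation details (scaling, calibration region, readout constraints); because high-energy k-space points are clipped to probability one, the achieved acceleration is slightly higher than the nominal value.

```python
# Sketch: variable-density k-space sampling mask from a reference power spectrum.
import numpy as np

def adapted_sampling_mask(reference_image, acceleration=4, seed=0):
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(reference_image))) ** 2
    pdf = spectrum / spectrum.sum() * (reference_image.size / acceleration)
    pdf = np.clip(pdf, 0.0, 1.0)                 # per-point sampling probability
    rng = np.random.default_rng(seed)
    return rng.random(pdf.shape) < pdf           # Bernoulli draw per k-space point

phantom = np.outer(np.hanning(128), np.hanning(128))     # stand-in reference image
mask = adapted_sampling_mask(phantom, acceleration=4)
print("sampled fraction:", round(mask.mean(), 3))
```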
Analysis of a Compressed Thin Film Bonded to a Compliant Substrate: The Energy Scaling Law
NASA Astrophysics Data System (ADS)
Kohn, Robert V.; Nguyen, Hoai-Minh
2013-06-01
We consider the deformation of a thin elastic film bonded to a thick compliant substrate, when the (compressive) misfit is far beyond critical. We take a variational viewpoint—focusing on the total elastic energy, i.e. the membrane and bending energy of the film plus the elastic energy of the substrate—viewing the buckling of the film as a problem of energy-driven pattern formation. We identify the scaling law of the minimum energy with respect to the physical parameters of the problem, and we prove that a herringbone pattern achieves the optimal scaling. These results complement previous numerical studies, which have shown that an optimized herringbone pattern has lower energy than a number of other patterns. Our results are different, because (i) we make the scaling law achieved by the herringbone pattern explicit, and (ii) we give an elementary, ansatz-free proof that no pattern can achieve a better law.
NASA Astrophysics Data System (ADS)
Simpson, R. A.; Davis, D. E.
1982-09-01
This paper describes techniques to detect submicron pattern defects on optical photomasks with an enhanced direct-write, electron-beam lithographic tool. EL-3 is a third generation, shaped spot, electron-beam lithography tool developed by IBM to fabricate semiconductor devices and masks. This tool is being upgraded to provide 100% inspection of optical photomasks for submicron pattern defects, which are subsequently repaired. Fixed-size overlapped spots are stepped over the mask patterns while a signal derived from the back-scattered electrons is monitored to detect pattern defects. Inspection does not require pattern recognition because the inspection scan patterns are derived from the original design data. The inspection spot is square and larger than the minimum defect to be detected, to improve throughput. A new registration technique provides the beam-to-pattern overlay required to locate submicron defects. The "guard banding" of inspection shapes prevents mask and system tolerances from producing false alarms that would occur should the spots be mispositioned such that they only partially covered a shape being inspected. A rescanning technique eliminates noise-related false alarms and significantly improves throughput. Data is accumulated during inspection and processed offline, as required for defect repair. EL-3 will detect 0.5 µm pattern defects at throughputs compatible with mask manufacturing.
Optimization of Actuating Origami Networks
NASA Astrophysics Data System (ADS)
Buskohl, Philip; Fuchi, Kazuko; Bazzan, Giorgio; Joo, James; Reich, Gregory; Vaia, Richard
2015-03-01
Origami structures morph between 2D and 3D conformations along predetermined fold lines that efficiently program the form, function and mobility of the structure. By leveraging design concepts from action origami, a subset of origami art focused on kinematic mechanisms, reversible folding patterns for applications such as solar array packaging, tunable antennae, and deployable sensing platforms may be designed. However, the enormity of the design space and the need to identify the requisite actuation forces within the structure places a severe limitation on design strategies based on intuition and geometry alone. The present work proposes a topology optimization method, using truss and frame element analysis, to distribute foldline mechanical properties within a reference crease pattern. Known actuating patterns are placed within a reference grid and the optimizer adjusts the fold stiffness of the network to optimally connect them. Design objectives may include a target motion, stress level, or mechanical energy distribution. Results include the validation of known action origami structures and their optimal connectivity within a larger network. This design suite offers an important step toward systematic incorporation of origami design concepts into new, novel and reconfigurable engineering devices. This research is supported under the Air Force Office of Scientific Research (AFOSR) funding, LRIR 13RQ02COR.
Mining Recent Temporal Patterns for Event Detection in Multivariate Time Series Data
Batal, Iyad; Fradkin, Dmitriy; Harrison, James; Moerchen, Fabian; Hauskrecht, Milos
2015-01-01
Improving the performance of classifiers using pattern mining techniques has been an active topic of data mining research. In this work we introduce the recent temporal pattern mining framework for finding predictive patterns for monitoring and event detection problems in complex multivariate time series data. This framework first converts time series into time-interval sequences of temporal abstractions. It then constructs more complex temporal patterns backwards in time using temporal operators. We apply our framework to health care data of 13,558 diabetic patients and show its benefits by efficiently finding useful patterns for detecting and diagnosing adverse medical conditions that are associated with diabetes. PMID:25937993
Wilson, Nick; Nghiem, Nhung; Ni Mhurchu, Cliona; Eyles, Helen; Baker, Michael G; Blakely, Tony
2013-01-01
Global health challenges include non-communicable disease burdens, ensuring food security in the context of rising food prices, and environmental constraints around food production, e.g., greenhouse gas [GHG] emissions. We therefore aimed to consider optimized solutions to the mix of food items in daily diets for a developed country population: New Zealand (NZ). We conducted scenario development and linear programming to model 16 diets (some with uncertainty). Data inputs included nutrients in foods, food prices, food wastage and food-specific GHG emissions. This study identified daily dietary patterns that met key nutrient requirements for as little as a median of NZ$ 3.17 per day (US$ 2.41/d) (95% simulation interval [SI] = NZ$ 2.86 to 3.50/d). Diets that included "more familiar meals" for New Zealanders increased the cost. The optimized diets also had low GHG emission profiles compared with the estimate for the 'typical NZ diet', e.g., 1.62 kg CO2e/d for one scenario (95%SI = 1.39 to 1.85 kg CO2e) compared with 10.1 kg CO2e/d, respectively. All of the optimized low-cost and low-GHG dietary patterns had likely health advantages over the current NZ dietary pattern, i.e., lower cardiovascular disease and cancer risk. We identified optimal foods and dietary patterns that would lower the risk of non-communicable diseases at low cost and with low greenhouse gas emission profiles. These results could help guide central and local government decisions around which foods to focus policies on. That is, which foods are most suitable for: food taxes (additions and exemptions); healthy food vouchers and subsidies; and for increased use by public institutions involved in food preparation.
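A diet optimization of this kind is a classic linear program: minimize daily cost subject to nutrient floors and a GHG ceiling. The toy example below uses invented prices, nutrient contents and limits purely to show the structure of such a model, not the study's data or full constraint set.

```python
# Toy diet linear program: minimize cost subject to nutrient and GHG constraints.
from scipy.optimize import linprog

foods = ["oats", "milk", "lentils", "frozen_veg"]
cost_per_100g = [0.05, 0.08, 0.12, 0.20]           # NZ$ per 100 g (made up)
protein_per_100g = [13.0, 3.4, 9.0, 2.5]           # g protein per 100 g (made up)
energy_per_100g = [1600, 270, 480, 150]            # kJ per 100 g (made up)
ghg_per_100g = [0.06, 0.13, 0.09, 0.05]            # kg CO2e per 100 g (made up)

# Constraints written as A_ub @ x <= b_ub, so nutrient floors are negated.
A_ub = [[-p for p in protein_per_100g],            # protein >= 60 g/day
        [-e for e in energy_per_100g],             # energy  >= 9000 kJ/day
        ghg_per_100g]                              # GHG     <= 3 kg CO2e/day
b_ub = [-60.0, -9000.0, 3.0]
res = linprog(cost_per_100g, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 10)] * len(foods))       # at most 1 kg/day of each food
print("daily cost NZ$", round(res.fun, 2),
      dict(zip(foods, [round(x, 1) for x in res.x])))
```

Scenario constraints such as "more familiar meals" enter as additional rows (e.g., minimum servings of particular foods), which is why they raise the optimized cost.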
Time-Elastic Generative Model for Acceleration Time Series in Human Activity Recognition
Munoz-Organero, Mario; Ruiz-Blazquez, Ramona
2017-01-01
Body-worn sensors in general and accelerometers in particular have been widely used in order to detect human movements and activities. The execution of each type of movement by each particular individual generates sequences of time series of sensed data from which specific movement related patterns can be assessed. Several machine learning algorithms have been used over windowed segments of sensed data in order to detect such patterns in activity recognition based on intermediate features (either hand-crafted or automatically learned from data). The underlying assumption is that the computed features will capture statistical differences that can properly classify different movements and activities after a training phase based on sensed data. In order to achieve high accuracy and recall rates (and guarantee the generalization of the system to new users), the training data have to contain enough information to characterize all possible ways of executing the activity or movement to be detected. This could imply large amounts of data and a complex and time-consuming training phase, which has been shown to be even more relevant when automatically learning the optimal features to be used. In this paper, we present a novel generative model that is able to generate sequences of time series for characterizing a particular movement based on the time elasticity properties of the sensed data. The model is used to train a stack of auto-encoders in order to learn the particular features able to detect human movements. The results of movement detection using a newly generated database with information on five users performing six different movements are presented. The generalization of results using an existing database is also presented in the paper. The results show that the proposed mechanism is able to obtain acceptable recognition rates (F = 0.77) even in the case of using different people executing a different sequence of movements and using different hardware. PMID:28208736
Time-Elastic Generative Model for Acceleration Time Series in Human Activity Recognition.
Munoz-Organero, Mario; Ruiz-Blazquez, Ramona
2017-02-08
Body-worn sensors in general and accelerometers in particular have been widely used in order to detect human movements and activities. The execution of each type of movement by each particular individual generates sequences of time series of sensed data from which specific movement related patterns can be assessed. Several machine learning algorithms have been used over windowed segments of sensed data in order to detect such patterns in activity recognition based on intermediate features (either hand-crafted or automatically learned from data). The underlying assumption is that the computed features will capture statistical differences that can properly classify different movements and activities after a training phase based on sensed data. In order to achieve high accuracy and recall rates (and guarantee the generalization of the system to new users), the training data have to contain enough information to characterize all possible ways of executing the activity or movement to be detected. This could imply large amounts of data and a complex and time-consuming training phase, which has been shown to be even more relevant when automatically learning the optimal features to be used. In this paper, we present a novel generative model that is able to generate sequences of time series for characterizing a particular movement based on the time elasticity properties of the sensed data. The model is used to train a stack of auto-encoders in order to learn the particular features able to detect human movements. The results of movement detection using a newly generated database with information on five users performing six different movements are presented. The generalization of results using an existing database is also presented in the paper. The results show that the proposed mechanism is able to obtain acceptable recognition rates ( F = 0.77) even in the case of using different people executing a different sequence of movements and using different hardware.
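The "time elasticity" idea above can be illustrated by warping the time axis of a captured movement with a random smooth monotonic map and using the warped copies as synthetic training sequences for the auto-encoder stack. This is a simplified stand-in, not the authors' exact generative model.

```python
# Sketch: generate time-elastic variants of a recorded 1-D acceleration trace.
import numpy as np

def time_warp(signal, strength=0.2, seed=None):
    """Resample a 1-D trace along a random smooth monotonic time map."""
    rng = np.random.default_rng(seed)
    n = len(signal)
    knots = np.linspace(0, n - 1, 6)
    jitter = rng.normal(scale=strength * n / 6, size=knots.size)
    warped = np.maximum.accumulate(knots + jitter)           # keep the map monotonic
    warped = (warped - warped[0]) / (warped[-1] - warped[0]) * (n - 1)
    warp = np.interp(np.arange(n), knots, warped)
    return np.interp(warp, np.arange(n), signal)

t = np.linspace(0, 4 * np.pi, 400)
template = np.sin(t) + 0.1 * np.sin(5 * t)                   # one recorded movement
augmented = np.stack([time_warp(template, seed=s) for s in range(20)])
print("synthetic training set shape:", augmented.shape)      # 20 elastic variants
```

The augmented sequences would then be windowed and fed to the stacked auto-encoders to learn movement-specific features.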
Intermodal transport and distribution patterns in ports relationship to hinterland
NASA Astrophysics Data System (ADS)
Dinu, O.; Dragu, V.; Ruscă, F.; Ilie, A.; Oprea, C.
2017-08-01
It is of great importance to examine all interactions between ports, terminals, intermodal transport and the logistic actors of distribution channels, as their optimization can lead to operational improvement. The proposed paper starts with a brief overview of different goods types and the allocation of their logistic costs, with emphasis on the storage component. The present trend is to optimize storage costs by means of the buffer function of the port storage area, making the best use of the free storage time that most ports offer. As a research methodology, the starting point is to consider the cost structure of a generic intermodal transport chain (storage, handling and transport costs) and to link this to the intermodal distribution patterns most frequently used in the port's relationship to its hinterland. The next step is to evaluate the impact of storage costs on distribution pattern selection. For a given value of port free storage time, a corresponding value of total storage time in the distribution channel can be identified in order to substantiate a distribution pattern shift. Different scenarios of transport and handling cost variation, recorded when the distribution pattern shifts, are integrated in order to establish the reaction of the actors involved in port-related logistics, and the evolution of intermodal transport costs is analysed in order to optimize distribution pattern selection.
NASA Astrophysics Data System (ADS)
Baxandall, Shalese; Sharma, Shrushrita; Zhai, Peng; Pridham, Glen; Zhang, Yunyan
2018-03-01
Structural changes to nerve fiber tracts are extremely common in neurological diseases such as multiple sclerosis (MS). Accurate quantification is vital. However, while nerve fiber damage is often seen as multi-focal lesions in magnetic resonance imaging (MRI), measurement through visual perception is limited. Our goal was to characterize the texture pattern of the lesions in MRI and determine how texture orientation metrics relate to lesion structure using two new methods: phase congruency and multi-resolution spatial-frequency analysis. The former aims to optimize the detection of the 'edges and corners' of a structure, and the latter evaluates both the radial and angular distributions of image texture associated with the various forming scales of a structure. The radial texture spectra were previously confirmed to measure the severity of nerve fiber damage, and were thus included for validation. All measures were also done in the control brain white matter for comparison. Using clinical images of MS patients, we found that both phase congruency and weighted mean phase detected invisible lesion patterns and were significantly greater in lesions, suggesting higher structure complexity, than the control tissue. Similarly, multi-angular spatial-frequency analysis detected much higher texture across the whole frequency spectrum in lesions than the control areas. Such angular complexity was consistent with findings from radial texture. Analysis of the phase and texture alignment may prove to be a useful new approach for assessing invisible changes in lesions using clinical MRI and thereby lead to improved management of patients with MS and similar disorders.
Abstract ID: 242 Simulation of a Fast Timing Micro-Pattern Gaseous Detector for TOF-PET.
Radogna, Raffaella; Verwilligen, Piet
2018-01-01
Micro-Pattern Gas Detectors (MPGDs) are a new generation of gaseous detectors that have been developed thanks to advances in micro-structure technology. The main features of the MPGDs are: high rate capability (>50 MHz/cm²); excellent spatial resolution (down to 50 μm); good time resolution (down to 3 ns); reduced radiation length, affordable costs, and possible flexible geometries. A new detector layout has been recently proposed that aims at combining both the high spatial resolution and high rate capability (100 MHz/cm²) of the current state-of-the-art MPGDs with a high time resolution. This new type of MPGD is named the Fast Timing MPGD (FTM) detector [1,2]. The FTM developed for detecting charged particles can potentially reach sub-millimeter spatial resolution and 100 ps time resolution. This contribution introduces a Fast Timing MPGD technology optimized to detect photons, as an innovative PET imaging detector concept, and emphasizes the importance of full detector simulation to guide the design of the detector geometry. The design and development of a new FTM, combining excellent time and spatial resolution, while exploiting the advantages of a reasonable energy resolution, will be a boost for the design of an affordable TOF-PET scanner with improved image contrast. The use of such an affordable gas detector allows large areas to be instrumented in a cost-effective way and image contrast to be increased for shorter scanning times (lowering the risk for the patient) and better diagnosis of the disease. In this report a dedicated simulation study is performed to optimize the detector design in the context of the INFN project MPGD-Fatima. Results are obtained with ANSYS, COMSOL, GARFIELD++ and GEANT4 simulation tools. The final detector layout will be a trade-off between fast timing and good energy resolution. Copyright © 2017.
Identification of nuclear weapons
Mihalczo, J.T.; King, W.T.
1987-04-10
A method and apparatus for non-invasively identifying different types of nuclear weapons is disclosed. A neutron generator is placed against the weapon to generate a stream of neutrons causing fissioning within the weapon. A first particle detector detects the generation of the neutrons and produces a signal indicative thereof. A second particle detector located on the opposite side of the weapon detects the fission particles and produces signals indicative thereof. The signals are converted into a detected pattern and a computer compares the detected pattern with known patterns of weapons and indicates which known weapon has a substantially similar pattern. Either a time distribution pattern or noise analysis pattern, or both, is used. Gamma-neutron discrimination and a third particle detector for fission particles adjacent the second particle detector are preferably used. The neutrons are generated by either a decay neutron source or a pulsed neutron particle accelerator.
Optimal Detection of Global Warming using Temperature Profiles
NASA Technical Reports Server (NTRS)
Leroy, Stephen S.
1997-01-01
Optimal fingerprinting is applied to estimate the amount of time it would take to detect warming by increased concentrations of carbon dioxide in monthly averages of temperature profiles over the Indian Ocean.
Model-Based Design of Tree WSNs for Decentralized Detection.
Tantawy, Ashraf; Koutsoukos, Xenofon; Biswas, Gautam
2015-08-20
The classical decentralized detection problem of finding the optimal decision rules at the sensor and fusion center, as well as variants that introduce physical channel impairments have been studied extensively in the literature. The deployment of WSNs in decentralized detection applications brings new challenges to the field. Protocols for different communication layers have to be co-designed to optimize the detection performance. In this paper, we consider the communication network design problem for a tree WSN. We pursue a system-level approach where a complete model for the system is developed that captures the interactions between different layers, as well as different sensor quality measures. For network optimization, we propose a hierarchical optimization algorithm that lends itself to the tree structure, requiring only local network information. The proposed design approach shows superior performance over several contentionless and contention-based network design approaches.
Adedeji, A. J.; Abdu, P. A.; Luka, P. D.; Owoade, A. A.; Joannis, T. M.
2017-01-01
Aim: This study was designed to optimize and apply the use of loop-mediated isothermal amplification (LAMP) as an alternative to conventional polymerase chain reaction (PCR) for the detection of herpesvirus of turkeys (HVT) (FC 126 strain) in vaccinated and non-vaccinated poultry in Nigeria. Materials and Methods: HVT positive control (vaccine) was used for optimization of LAMP using six primers that target the HVT070 gene sequence of the virus. These primers can differentiate HVT, a Marek’s disease virus (MDV) serotype 3 from MDV serotypes 1 and 2. Samples were collected from clinical cases of Marek’s disease (MD) in chickens, processed and subjected to LAMP and PCR. Results: LAMP assay for HVT was optimized. HVT was detected in 60% (3/5) and 100% (5/5) of the samples analyzed by PCR and LAMP, respectively. HVT was detected in the feathers, liver, skin, and spleen with average DNA purity of 3.05-4.52 μg DNA/mg (A260/A280) using LAMP. Conventional PCR detected HVT in two vaccinated and one unvaccinated chicken samples, while LAMP detected HVT in two vaccinated and three unvaccinated corresponding chicken samples. However, LAMP was a faster and simpler technique to carry out than PCR. Conclusion: LAMP assay for the detection of HVT was optimized. LAMP and PCR detected HVT in clinical samples collected. LAMP assay can be a very good alternative to PCR for detection of HVT and other viruses. This is the first report of the use of LAMP for the detection of viruses of veterinary importance in Nigeria. LAMP should be optimized as a diagnostic and research tool for investigation of poultry diseases such as MD in Nigeria. PMID:29263603
High Grazing Angle Sea-Clutter Literature Review
2013-03-01
Fragmentary excerpt from report DSTO-GD-0736: section topics include parametric modelling, optimal and sub-optimal detection, and polarimetry for target detection from high grazing angles; detection relationships were found to be intrinsically related to Gaussian detection counterparts, and early polarimetry studies by Stacy et al. [45, 46] are reviewed.
Optimal detection and control strategies for invasive species management
Shefali V. Mehta; Robert G. Haight; Frances R. Homans; Stephen Polasky; Robert C. Venette
2007-01-01
The increasing economic and environmental losses caused by non-native invasive species amplify the value of identifying and implementing optimal management options to prevent, detect, and control invasive species. Previous literature has focused largely on preventing introductions of invasive species and post-detection control activities; few have addressed the role of...
Anomalous Cases of Astronaut Helmet Detection
NASA Technical Reports Server (NTRS)
Dolph, Chester; Moore, Andrew J.; Schubert, Matthew; Woodell, Glenn
2015-01-01
An astronaut's helmet is an invariant, rigid image element that is well suited for identification and tracking using current machine vision technology. Future space exploration will benefit from the development of astronaut detection software for search and rescue missions based on EVA helmet identification. However, helmets are solid white, except for metal brackets to attach accessories such as supplementary lights. We compared the performance of a widely used machine vision pipeline on a standard-issue NASA helmet with and without affixed experimental feature-rich patterns. Performance on the patterned helmet was far more robust. We found that four different feature-rich patterns are sufficient to identify a helmet and determine its orientation as it is rotated about the yaw, pitch, and roll axes. During helmet rotation, the field of view changes to frames containing parts of two or more feature-rich patterns. We took reference images in these locations to fill in detection gaps. These multiple-pattern reference images added substantial benefit to detection; however, they generated the majority of the anomalous cases. In these few instances, our algorithm keys in on one feature-rich pattern of a multiple-pattern reference and makes an incorrect prediction of the location of the other feature-rich patterns. We describe and make recommendations on ways to mitigate anomalous cases in which detection of one or more feature-rich patterns fails. While the number of such cases is only a small percentage of the tested helmet orientations, they illustrate important design considerations for future spacesuits. In addition to our four successful feature-rich patterns, we present unsuccessful patterns and discuss the causes of their poor performance from a machine vision perspective. Future helmets designed with these considerations will enable automated astronaut detection and thereby enhance mission operations and extraterrestrial search and rescue.
Optimal Attack Strategies Subject to Detection Constraints Against Cyber-Physical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2017-03-31
This paper studies an attacker against a cyber-physical system (CPS) whose goal is to move the state of the CPS to a target state while ensuring that his or her probability of being detected does not exceed a given bound. The attacker's probability of being detected is related to the nonnegative bias induced by his or her attack on the CPS's detection statistic. We formulate a linear quadratic cost function that captures the attacker's control goal and establish constraints on the induced bias that reflect the attacker's detection-avoidance objectives. When the attacker is constrained to be detected at the false-alarm rate of the detector, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. In the case that the attacker's bias is upper bounded by a positive constant, we provide two algorithms, an optimal algorithm and a sub-optimal, less computationally intensive algorithm, to find suitable attack sequences. Lastly, we illustrate our attack strategies in numerical examples based on a remotely controlled helicopter under attack.
New well pattern optimization methodology in mature low-permeability anisotropic reservoirs
NASA Astrophysics Data System (ADS)
Qin, Jiazheng; Liu, Yuetian; Feng, Yueli; Ding, Yao; Liu, Liu; He, Youwei
2018-02-01
In China, many well patterns were designed before the principal permeability direction of low-permeability anisotropic reservoirs was known. After several years of production, it often turns out that the well line direction is not parallel to the principal permeability direction. However, traditional well location optimization methods (in terms of objective functions such as net present value and/or ultimate recovery) are inapplicable, since wells are not free to move around in a mature oilfield. Thus, the well pattern optimization (WPO) of mature low-permeability anisotropic reservoirs is a significant but challenging task, since the original well pattern (WP) will be distorted and reconstructed due to permeability anisotropy. In this paper, we investigate the destruction and reconstruction of the WP when the principal permeability direction and well line direction are not parallel. A new methodology is developed to quantitatively optimize the well locations of a mature large-scale WP through a WPO algorithm based on coordinate transformation (i.e. rotating and stretching). For a mature oilfield, the large-scale WP has settled, so it is not economically viable to carry out further infill drilling. This paper circumvents this difficulty by combining the WPO algorithm with well status (open or shut-in) and schedule adjustment. Finally, the methodology is applied to an example. Cumulative oil production rates of the optimized WP are higher, and water-cut is lower, which highlights the potential of the WPO methodology for application in mature large-scale field development projects.
Derivation of Optimal Cropping Pattern in Part of Hirakud Command using Cuckoo Search
NASA Astrophysics Data System (ADS)
Rath, Ashutosh; Biswal, Sudarsan; Samantaray, Sandeep; Swain, Prakash Chandra, PROF.
2017-08-01
The economic growth of a nation depends on agriculture, which relies on the available water resources, land and crops. Supplying water in an appropriate quantity at the appropriate time plays a vital role in increasing agricultural production. Optimal utilization of available resources can be achieved by proper planning and management of water resources projects and adoption of appropriate technology. In the present work, the command area of the Sambalpur distributary system is taken up for investigation. Further, adoption of a fixed cropping pattern causes a reduction in yield. The present study aims at developing different crop planning strategies to increase the net benefit from the command area with minimum investment. Optimization models are developed for the Kharif season using LINDO and the Cuckoo Search (CS) algorithm for maximization of the net benefits. In developing the optimization models, factors such as cultivable land, seeds, fertilizers, manpower, and water cost are taken as constraints. The irrigation water needs of major crops and the total water available through canals in the command of the Sambalpur distributary are estimated. LINDO and Cuckoo Search models are formulated and used to derive the optimal cropping pattern yielding maximum net benefits. Net benefits of Rs. 585.0 lakhs in the Kharif season are obtained using LINGO and Rs. 596.07 lakhs using Cuckoo Search, whereas net benefits of Rs. 447.0 lakhs are realized by the farmers of the locality with the present cropping pattern.
Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Changsen; Liu, Feixiang
2017-02-15
Common spatial pattern (CSP) is most widely used in motor imagery based brain-computer interface (BCI) systems. In the conventional CSP algorithm, pairs of eigenvectors corresponding to both extreme eigenvalues are selected to construct the optimal spatial filter. In addition, an appropriate selection of subject-specific time segments and frequency bands plays an important role in its successful application. This study proposes to optimize spatial-frequency-temporal patterns for discriminative feature extraction. Spatial optimization is implemented by channel selection and by finding discriminative spatial filters adaptively on each time-frequency segment. A novel Discernibility of Feature Sets (DFS) criterion is designed for spatial filter optimization. In addition, discriminative features located in multiple time-frequency segments are selected automatically by the proposed sparse time-frequency segment common spatial pattern (STFSCSP) method, which exploits sparse regression for significant feature selection. Finally, a weight determined by the sparse coefficient is assigned to each selected CSP feature, and we propose a Weighted Naïve Bayesian Classifier (WNBC) for classification. Experimental results on two public EEG datasets demonstrate that optimizing spatial-frequency-temporal patterns in a data-driven manner for discriminative feature extraction greatly improves the classification performance. The proposed method gives significantly better classification accuracies in comparison with several competing methods in the literature. The proposed approach is a promising candidate for future BCI systems. Copyright © 2016 Elsevier B.V. All rights reserved.
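As background, the conventional CSP computation that the extensions above start from can be written compactly as a generalized eigendecomposition of the two class covariances. The sketch below is ours, not the authors' code; the random arrays merely stand in for band-pass filtered EEG trials, and the channel count and number of retained filter pairs are arbitrary.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Conventional CSP; trials_* have shape (n_trials, n_channels, n_samples)."""
    def avg_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))   # normalize each trial covariance by total power
        return np.mean(covs, axis=0)

    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigenproblem: ca w = lambda (ca + cb) w
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)               # eigenvalues in ascending order
    # Keep eigenvectors from both extremes (pairs of extreme eigenvalues)
    idx = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, idx]                    # columns are spatial filters

def log_variance_features(trials, W):
    feats = []
    for x in trials:
        z = W.T @ x
        v = np.var(z, axis=1)
        feats.append(np.log(v / v.sum()))
    return np.array(feats)

# Toy usage with random data standing in for band-pass filtered EEG
rng = np.random.default_rng(0)
a = rng.standard_normal((20, 8, 250))
b = rng.standard_normal((20, 8, 250))
W = csp_filters(a, b)
X = log_variance_features(np.concatenate([a, b]), W)
print(W.shape, X.shape)   # (8, 6) (40, 6)
```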
He, Jianfang; Fang, Xiaohui; Lin, Yuanhai; Zhang, Xinping
2015-05-04
Half-wave plates were introduced into an interference-lithography scheme consisting of three fibers that were arranged into a rectangular triangle. Such a flexible and compact geometry allows convenient tuning of the polarizations of both the UV laser source and each branch arm. This not only enables optimization of the contrast of the produced photonic structures with expected square lattices, but also multiplies the nano-patterning functions of a fixed design of fiber-based interference lithography. The patterns of the photonic structures can be thus tuned simply by rotating a half-wave plate.
Application of Hyperspectral Imaging to Detect Sclerotinia sclerotiorum on Oilseed Rape Stems
Kong, Wenwen; Zhang, Chu; Huang, Weihao
2018-01-01
Hyperspectral imaging covering the spectral range of 384–1034 nm combined with chemometric methods was used to detect Sclerotinia sclerotiorum (SS) on oilseed rape stems by two sample sets (60 healthy and 60 infected stems for each set). Second derivative spectra and PCA loadings were used to select the optimal wavelengths. Discriminant models were built and compared to detect SS on oilseed rape stems, including partial least squares-discriminant analysis, radial basis function neural network, support vector machine and extreme learning machine. The discriminant models using full spectra and optimal wavelengths showed good performance with classification accuracies of over 80% for the calibration and prediction set. Comparing all developed models, the optimal classification accuracies of the calibration and prediction set were over 90%. The similarity of selected optimal wavelengths also indicated the feasibility of using hyperspectral imaging to detect SS on oilseed rape stems. The results indicated that hyperspectral imaging could be used as a fast, non-destructive and reliable technique to detect plant diseases on stems. PMID:29300315
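A rough sketch of the wavelength-selection-plus-classifier workflow described above (not the authors' pipeline; random spectra stand in for the hyperspectral stem images, and PCA-loading ranking is only one of the two selection routes the abstract mentions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
wavelengths = np.linspace(384, 1034, 200)          # nm, illustrative spectral axis
X = rng.standard_normal((120, wavelengths.size))   # stand-in mean spectra of stems
y = rng.integers(0, 2, 120)                        # 0 = healthy, 1 = infected (toy labels)

# Rank wavelengths by the magnitude of the first few PCA loadings
pca = PCA(n_components=3).fit(X)
importance = np.abs(pca.components_).sum(axis=0)
optimal_idx = np.argsort(importance)[-12:]          # keep 12 "optimal" wavelengths
print("selected wavelengths (nm):", np.sort(wavelengths[optimal_idx]).round(1))

# Compare full-spectrum vs. optimal-wavelength discriminant models
for name, Xs in [("full spectra", X), ("optimal wavelengths", X[:, optimal_idx])]:
    acc = cross_val_score(SVC(kernel="rbf", C=1.0), Xs, y, cv=5).mean()
    print(f"{name}: CV accuracy = {acc:.2f}")
```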
NASA Astrophysics Data System (ADS)
Wang, Hongyan
2017-04-01
This paper addresses the waveform optimization problem for improving the detection performance of multi-input multi-output (MIMO) orthogonal frequency division multiplexing (OFDM) radar-based space-time adaptive processing (STAP) in complex environments. By maximizing the output signal-to-interference-and-noise ratio (SINR), the waveform optimization problem for improving the detection performance of STAP, subject to a constant modulus constraint, is derived. To tackle the resulting nonlinear and complicated optimization problem, a diagonal-loading-based method is proposed to reformulate it as a semidefinite programming problem, so that it can be solved very efficiently. The optimized waveform is then obtained to maximize the output SINR of MIMO-OFDM such that the detection performance of STAP can be improved. Simulation results show that the proposed method improves the output SINR and detection performance considerably compared with uncorrelated waveforms and the existing MIMO-based STAP method.
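The reformulation idea, turning a nonconvex SINR maximization into a semidefinite program, can be illustrated with a generic toy relaxation; the covariances below are random stand-ins and this is not the paper's diagonal-loading formulation or its constant-modulus constraint:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)); Rs = A @ A.T + np.eye(n)        # toy signal covariance
B = rng.standard_normal((n, n)); Ri = B @ B.T + 0.1 * np.eye(n)  # toy interference + noise covariance

# Relax max_w (w' Rs w)/(w' Ri w) by writing X = w w' and dropping the rank-1 constraint
X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Maximize(cp.trace(Rs @ X)), [cp.trace(Ri @ X) == 1])
prob.solve()

# Recover an approximate waveform/filter from the dominant eigenvector of X
vals, vecs = np.linalg.eigh(X.value)
w = vecs[:, -1] * np.sqrt(max(vals[-1], 0.0))
sinr = (w @ Rs @ w) / (w @ Ri @ w)
print("relaxed objective:", round(prob.value, 4), " achieved SINR:", round(sinr, 4))
```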
Zhang, Yu; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej
2015-11-30
Common spatial pattern (CSP) has been most popularly applied to motor-imagery (MI) feature extraction for classification in brain-computer interface (BCI) application. Successful application of CSP depends on the filter band selection to a large degree. However, the most proper band is typically subject-specific and can hardly be determined manually. This study proposes a sparse filter band common spatial pattern (SFBCSP) for optimizing the spatial patterns. SFBCSP estimates CSP features on multiple signals that are filtered from raw EEG data at a set of overlapping bands. The filter bands that result in significant CSP features are then selected in a supervised way by exploiting sparse regression. A support vector machine (SVM) is implemented on the selected features for MI classification. Two public EEG datasets (BCI Competition III dataset IVa and BCI Competition IV IIb) are used to validate the proposed SFBCSP method. Experimental results demonstrate that SFBCSP help improve the classification performance of MI. The optimized spatial patterns by SFBCSP give overall better MI classification accuracy in comparison with several competing methods. The proposed SFBCSP is a potential method for improving the performance of MI-based BCI. Copyright © 2015 Elsevier B.V. All rights reserved.
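A stripped-down illustration of selecting informative filter bands by sparse regression (an ordinary Lasso on per-band stand-in features, not the actual SFBCSP pipeline; band count, labels and noise are invented):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n_trials, n_bands = 80, 10
# Stand-in CSP feature (e.g., a log-variance score) per overlapping filter band
F = rng.standard_normal((n_trials, n_bands))
y = np.sign(F[:, 2] + 0.8 * F[:, 5] + 0.3 * rng.standard_normal(n_trials))  # bands 2 and 5 informative

# Sparse regression of the class label on band-wise features; nonzero weights = selected bands
model = Lasso(alpha=0.1).fit(F, y)
selected = np.flatnonzero(model.coef_)
print("selected filter bands:", selected)      # ideally picks up bands 2 and 5
```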
Storyline Visualization: A Compelling Way to Understand Patterns over Time and Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2017-10-16
Storyline visualization is a compelling way to understand patterns over time and space. Much effort has been spent developing efficient and aesthetically pleasing layout optimization algorithms. But what if those algorithms are optimizing the wrong things? To answer this question, we conducted a design study with different storyline layout algorithms. We found that layouts following our new design principles for storyline visualization outperform those produced by existing methods.
X-ray Polarimetry with a Micro-Pattern Gas Detector
NASA Technical Reports Server (NTRS)
Hill, Joe
2005-01-01
Topics covered include: Science drivers for X-ray polarimetry; Previous X-ray polarimetry designs; The photoelectric effect and imaging tracks; Micro-pattern gas polarimeter design concept. Further work includes: Verify results against simulator; Optimize pressure and characterize different gases for a given energy band; Optimize voltages for resolution and sensitivity; Test meshes with 80 micron pitch; Characterize ASIC operation; and Quantify quantum efficiency for optimum polarization sensitivity.
Extreme ultraviolet patterning of tin-oxo cages
NASA Astrophysics Data System (ADS)
Haitjema, Jarich; Zhang, Yu; Vockenhuber, Michaela; Kazazis, Dimitrios; Ekinci, Yasin; Brouwer, Albert M.
2017-07-01
We report on the extreme ultraviolet (EUV) patterning performance of tin-oxo cages. These cage molecules were already known to function as a negative tone photoresist for EUV radiation, but in this work, we significantly optimized their performance. Our results show that sensitivity and resolution are only meaningful photoresist parameters if the process conditions are optimized. We focus on contrast curves of the materials using large area EUV exposures and patterning of the cages using EUV interference lithography. It is shown that baking steps, such as postexposure baking, can significantly affect both the sensitivity and contrast in the open-frame experiments as well as the patterning experiments. A layer thickness increase reduced the necessary dose to induce a solubility change but decreased the patterning quality. The patterning experiments were affected by minor changes in processing conditions such as an increased rinsing time. In addition, we show that the anions of the cage can influence the sensitivity and quality of the patterning, probably through their effect on physical properties of the materials.
Derivation of an optimal directivity pattern for sweet spot widening in stereo sound reproduction
NASA Astrophysics Data System (ADS)
Ródenas, Josep A.; Aarts, Ronald M.; Janssen, A. J. E. M.
2003-01-01
In this paper the correction of the degradation of the stereophonic illusion during sound reproduction due to off-center listening is investigated. The main idea is that the directivity pattern of a loudspeaker array should have a well-defined shape such that a good stereo reproduction is achieved in a large listening area. Therefore, a mathematical description to derive an optimal directivity pattern l(opt) that achieves sweet spot widening in a large listening area for stereophonic sound applications is described. This optimal directivity pattern is based on parametrized time/intensity trading data coming from psycho-acoustic experiments within a wide listening area. After the study, the required digital FIR filters are determined by means of a least-squares optimization method for a given stereo base setup (two pairs of drivers for the loudspeaker arrays and 2.5-m distance between loudspeakers), which radiate sound in a broad range of listening positions in accordance with the derived l(opt). Informal listening tests have shown that the l(opt) worked as predicted by the theoretical simulations. They also demonstrated the correct central sound localization for speech and music for a number of listening positions. This application is referred to as "Position-Independent (PI) stereo."
Elimination of Hot Tears in Steel Castings by Means of Solidification Pattern Optimization
NASA Astrophysics Data System (ADS)
Kotas, Petr; Tutum, Cem Celal; Thorborg, Jesper; Hattel, Jesper Henri
2012-06-01
A methodology of how to exploit the Niyama criterion for the elimination of various defects such as centerline porosity, macrosegregation, and hot tearing in steel castings is presented. The tendency of forming centerline porosity is governed by the temperature distribution close to the end of the solidification interval, specifically by thermal gradients and cooling rates. The physics behind macrosegregation and hot tears indicate that these two defects also are dependent heavily on thermal gradients and pressure drop in the mushy zone. The objective of this work is to show that by optimizing the solidification pattern, i.e., establishing directional and progressive solidification with the help of the Niyama criterion, macrosegregation and hot tearing issues can be both minimized or eliminated entirely. An original casting layout was simulated using a transient three-dimensional (3-D) thermal fluid model incorporated in a commercial simulation software package to determine potential flaws and inadequacies. Based on the initial casting process assessment, multiobjective optimization of the solidification pattern of the considered steel part followed. That is, the multiobjective optimization problem of choosing the proper riser and chill designs has been investigated using genetic algorithms while simultaneously considering their impact on centerline porosity, the macrosegregation pattern, and primarily on hot tear formation.
NASA Astrophysics Data System (ADS)
Close, Dan M.; Hahn, Ruth E.; Patterson, Stacey S.; Baek, Seung J.; Ripp, Steven A.; Sayler, Gary S.
2011-04-01
Bioluminescent and fluorescent reporter systems have enabled the rapid and continued growth of the optical imaging field over the last two decades. Of particular interest has been noninvasive signal detection from mammalian tissues under both cell culture and whole animal settings. Here we report on the advantages and limitations of imaging using a recently introduced bacterial luciferase (lux) reporter system engineered for increased bioluminescent expression in the mammalian cellular environment. Comparison with the bioluminescent firefly luciferase (Luc) system and green fluorescent protein system under cell culture conditions demonstrated a reduced average radiance, but maintained a more constant level of bioluminescent output without the need for substrate addition or exogenous excitation to elicit the production of signal. Comparison with the Luc system following subcutaneous and intraperitoneal injection into nude mice hosts demonstrated the ability to obtain similar detection patterns with in vitro experiments at cell population sizes above 2.5 × 10⁴ cells but at the cost of increasing overall image integration time.
Artificial Intelligence in Medical Practice: The Question to the Answer?
Miller, D Douglas; Brown, Eric W
2018-02-01
Computer science advances and ultra-fast computing speeds find artificial intelligence (AI) broadly benefitting modern society-forecasting weather, recognizing faces, detecting fraud, and deciphering genomics. AI's future role in medical practice remains an unanswered question. Machines (computers) learn to detect patterns not decipherable using biostatistics by processing massive datasets (big data) through layered mathematical models (algorithms). Correcting algorithm mistakes (training) adds to AI predictive model confidence. AI is being successfully applied for image analysis in radiology, pathology, and dermatology, with diagnostic speed exceeding, and accuracy paralleling, medical experts. While diagnostic confidence never reaches 100%, combining machines plus physicians reliably enhances system performance. Cognitive programs are impacting medical practice by applying natural language processing to read the rapidly expanding scientific literature and collate years of diverse electronic medical records. In this and other ways, AI may optimize the care trajectory of chronic disease patients, suggest precision therapies for complex illnesses, reduce medical errors, and improve subject enrollment into clinical trials. Copyright © 2018 Elsevier Inc. All rights reserved.
Addeh, Abdoljalil; Khormali, Aminollah; Golilarz, Noorbakhsh Amiri
2018-05-04
Control chart patterns are the most commonly used statistical process control (SPC) tools to monitor process changes. When a control chart produces an out-of-control signal, this means that the process has changed. In this study, a new method based on an optimized radial basis function neural network (RBFNN) is proposed for control chart pattern (CCP) recognition. The proposed method consists of four main modules: feature extraction, feature selection, classification and learning algorithm. In the feature extraction module, shape and statistical features are used. Recently, various shape and statistical features have been presented for CCP recognition. In the feature selection module, the association rules (AR) method is employed to select the best set of shape and statistical features. In the classification module, an RBFNN is used; since the learning algorithm has a high impact on network performance, a new learning algorithm based on the bees algorithm is used in the learning module. Most studies have considered only six patterns: Normal, Cyclic, Increasing Trend, Decreasing Trend, Upward Shift and Downward Shift. Since the Normal, Stratification, and Systematic patterns are very similar to each other and difficult to distinguish, most studies have not considered Stratification and Systematic. To support continuous monitoring and control of the production process and exact identification of the type of problem encountered, eight patterns are investigated in this study. The proposed method is tested on a dataset containing 1600 samples (200 samples from each pattern), and the results show that the proposed method performs very well. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
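A compact prototype of the recognition task: synthetic control chart patterns, a handful of shape/statistical features, and an off-the-shelf kernel classifier standing in for the optimized RBFNN. All pattern definitions, sizes and noise levels are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
T = np.arange(32)

def make_pattern(kind):
    base = rng.standard_normal(T.size)             # in-control noise
    if kind == "up_trend":    return base + 0.1 * T
    if kind == "down_trend":  return base - 0.1 * T
    if kind == "up_shift":    return base + 2.0 * (T >= 16)
    if kind == "cyclic":      return base + 2.0 * np.sin(2 * np.pi * T / 8)
    return base                                    # "normal"

def features(x):
    # Simple shape/statistical descriptors of one chart window
    slope = np.polyfit(T, x, 1)[0]
    return [x.mean(), x.std(), slope, x.max() - x.min(), np.mean(np.abs(np.diff(x)))]

kinds = ["normal", "up_trend", "down_trend", "up_shift", "cyclic"]
X = np.array([features(make_pattern(k)) for k in kinds for _ in range(200)])
y = np.repeat(np.arange(len(kinds)), 200)
print("CV accuracy:", cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean().round(3))
```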
2013-01-01
Background The freshwater planarian Schmidtea mediterranea has emerged as a powerful model for studies of regenerative, stem cell, and germ cell biology. Whole-mount in situ hybridization (WISH) and whole-mount fluorescent in situ hybridization (FISH) are critical methods for determining gene expression patterns in planarians. While expression patterns for a number of genes have been elucidated using established protocols, determining the expression patterns for particularly low-abundance transcripts remains a challenge. Results We show here that a short bleaching step in formamide dramatically enhances signal intensity of WISH and FISH. To further improve signal sensitivity we optimized blocking conditions for multiple anti-hapten antibodies, developed a copper sulfate quenching step that virtually eliminates autofluorescence, and enhanced signal intensity through iterative rounds of tyramide signal amplification. For FISH on regenerating planarians, we employed a heat-induced antigen retrieval step that provides a better balance between permeabilization of mature tissues and preservation of regenerating tissues. We also show that azide most effectively quenches peroxidase activity between rounds of development for multicolor FISH experiments. Finally, we apply these modifications to elucidate the expression patterns of a few low-abundance transcripts. Conclusion The modifications we present here provide significant improvements in signal intensity and signal sensitivity for WISH and FISH in planarians. Additionally, these modifications might be of widespread utility for whole-mount FISH in other model organisms. PMID:23497040
Optimization and evaluation of a method to detect adenoviruses in river water
This dataset includes the recoveries of spiked adenovirus through various stages of experimental optimization procedures. This dataset is associated with the following publication:McMinn , B., A. Korajkic, and A. Grimm. Optimization and evaluation of a method to detect adenoviruses in river water. JOURNAL OF VIROLOGICAL METHODS. Elsevier Science Ltd, New York, NY, USA, 231(1): 8-13, (2016).
NASA Technical Reports Server (NTRS)
Clem, Michelle M.; Woike, Mark R.
2013-01-01
The Aeronautical Sciences Project under NASA's Fundamental Aeronautics Program is extremely interested in the development of novel measurement technologies, such as optical surface measurements in the internal parts of a flow path, for in situ health monitoring of gas turbine engines. In situ health monitoring has the potential to detect flaws, i.e. cracks in key components, such as engine turbine disks, before the flaws lead to catastrophic failure. In the present study, a cross-correlation imaging technique is investigated in a proof-of-concept study as a possible optical technique to measure the radial growth and strain field on an already cracked sub-scale turbine engine disk under loaded conditions in the NASA Glenn Research Center's High Precision Rotordynamics Laboratory. The optical strain measurement technique under investigation offers potential fault detection using an applied high-contrast random speckle pattern and imaging the pattern under unloaded and loaded conditions with a CCD camera. Spinning the cracked disk at high speeds induces an external load, resulting in a radial growth of the disk of approximately 50.0 µm in the flawed region and hence, a localized strain field. When imaging the cracked disk under static conditions, the disk will be undistorted; however, during rotation the cracked region will grow radially, thus causing the applied particle pattern to be 'shifted'. The resulting particle displacements between the two images will then be measured using the two-dimensional cross-correlation algorithms implemented in standard Particle Image Velocimetry (PIV) software to track the disk growth, which facilitates calculation of the localized strain field. In order to develop and validate this optical strain measurement technique an initial proof-of-concept experiment is carried out in a controlled environment. Using PIV optimization principles and guidelines, three potential speckle patterns, for future use on the rotating disk, are developed and investigated in the controlled experiment. A range of known shifts are induced on the patterns; reference and data images are acquired before and after the induced shift, respectively, and the images are processed using the cross-correlation algorithms in order to determine the particle displacements. The effectiveness of each pattern at resolving the known shift is evaluated and discussed in order to choose the most suitable pattern to be implemented onto a rotating disk in the Rotordynamics Lab. Although testing on the rotating disk has not yet been performed, the driving principles behind the development of the present optical technique are based upon critical aspects of the future experiment, such as the amount of expected radial growth, disk analysis, and experimental design and are therefore addressed in the paper.
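The displacement-tracking core of the technique (2-D cross-correlation of an interrogation window imaged before and after loading) can be sketched in a few lines. This toy uses a synthetic speckle array and a known integer shift, not the authors' PIV software:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(5)
ref = rng.random((64, 64))                      # reference speckle window (unloaded)
true_shift = (3, -2)                            # rows, cols
data = np.roll(ref, true_shift, axis=(0, 1))    # "loaded" window: shifted pattern

# Cross-correlate (correlation = convolution with a flipped, zero-mean template)
a = ref - ref.mean()
b = data - data.mean()
corr = fftconvolve(b, a[::-1, ::-1], mode="full")
peak = np.unravel_index(np.argmax(corr), corr.shape)
shift = (peak[0] - (ref.shape[0] - 1), peak[1] - (ref.shape[1] - 1))
print("estimated shift (rows, cols):", shift)   # expect (3, -2)
```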
Enhanced Sensitivity of a Surface Acoustic Wave Gyroscope
NASA Astrophysics Data System (ADS)
Zhang, Yanhua; Wang, Wen
2009-10-01
In this paper, we present an optimal design and performance evaluation of a surface acoustic wave (SAW) gyroscope. It consists of a two-port SAW resonator (SAWR) and a SAW sensor (SAWS) structured using a delay line pattern. The SAW resonator provides a stable reference vibration and creates a standing wave, and the vibrating metallic dot array at antinodes of the standing wave induces the second SAW in the normal direction by the Coriolis force, and the SAW sensor is used to detect the secondary SAW. By using the coupling of modes (COM), the SAW resonator was simulated, and the effects of the design parameters on the frequency response of the device were investigated. Also, a theoretical analysis was performed to investigate the effect of metallic dots on the frequency response of the SAW device. The measured frequency response S21 of the fabricated 80 MHz two-port SAW resonator agrees well with the simulated result, that is, a low insertion loss (~5 dB) and a single steep resonance peak were observed. In the gyroscopic experiments using a rate table, optimal metallic dot thickness was determined, and the sensitivity of the fabricated SAW gyroscope with an optimal metallic dot thickness of ~350 nm was determined to be 3.2 µV deg⁻¹ s⁻¹.
The Hydrogen Epoch of Reionization Array Dish. I. Beam Pattern Measurements and Science Implications
NASA Astrophysics Data System (ADS)
Neben, Abraham R.; Bradley, Richard F.; Hewitt, Jacqueline N.; DeBoer, David R.; Parsons, Aaron R.; Aguirre, James E.; Ali, Zaki S.; Cheng, Carina; Ewall-Wice, Aaron; Patra, Nipanjana; Thyagarajan, Nithyanandan; Bowman, Judd; Dickenson, Roger; Dillon, Joshua S.; Doolittle, Phillip; Egan, Dennis; Hedrick, Mike; Jacobs, Daniel C.; Kohn, Saul A.; Klima, Patricia J.; Moodley, Kavilan; Saliwanchik, Benjamin R. B.; Schaffner, Patrick; Shelton, John; Taylor, H. A.; Taylor, Rusty; Tegmark, Max; Wirt, Butch; Zheng, Haoxuan
2016-08-01
The Hydrogen Epoch of Reionization Array (HERA) is a radio interferometer aiming to detect the power spectrum of 21 cm fluctuations from neutral hydrogen from the epoch of reionization (EOR). Drawing on lessons from the Murchison Widefield Array and the Precision Array for Probing the EOR, HERA is a hexagonal array of large (14 m diameter) dishes with suspended dipole feeds. The dish not only determines overall sensitivity, but also affects the observed frequency structure of foregrounds in the interferometer. This is the first of a series of four papers characterizing the frequency and angular response of the dish with simulations and measurements. In this paper, we focus on the angular response (i.e., power pattern), which sets the relative weighting between sky regions of high and low delay and thus apparent source frequency structure. We measure the angular response at 137 MHz using the ORBCOMM beam mapping system of Neben et al. We measure a collecting area of 93 m² in the optimal dish/feed configuration, implying that HERA-320 should detect the EOR power spectrum at z ~ 9 with a signal-to-noise ratio of 12.7 using a foreground avoidance approach with a single season of observations and 74.3 using a foreground subtraction approach. Finally, we study the impact of these beam measurements on the distribution of foregrounds in Fourier space.
Alvarez-Meza, Andres M.; Orozco-Gutierrez, Alvaro; Castellanos-Dominguez, German
2017-01-01
We introduce Enhanced Kernel-based Relevance Analysis (EKRA) that aims to support the automatic identification of brain activity patterns using electroencephalographic recordings. EKRA is a data-driven strategy that incorporates two kernel functions to take advantage of the available joint information, associating neural responses to a given stimulus condition. Regarding this, a Centered Kernel Alignment functional is adjusted to learning the linear projection that best discriminates the input feature set, optimizing the required free parameters automatically. Our approach is carried out in two scenarios: (i) feature selection by computing a relevance vector from extracted neural features to facilitating the physiological interpretation of a given brain activity task, and (ii) enhanced feature selection to perform an additional transformation of relevant features aiming to improve the overall identification accuracy. Accordingly, we provide an alternative feature relevance analysis strategy that allows improving the system performance while favoring the data interpretability. For the validation purpose, EKRA is tested in two well-known tasks of brain activity: motor imagery discrimination and epileptic seizure detection. The obtained results show that the EKRA approach estimates a relevant representation space extracted from the provided supervised information, emphasizing the salient input features. As a result, our proposal outperforms the state-of-the-art methods regarding brain activity discrimination accuracy with the benefit of enhanced physiological interpretation about the task at hand. PMID:29056897
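For reference, the centered kernel alignment functional that EKRA builds on can be computed directly from two Gram matrices. The toy feature matrix and label kernel below are stand-ins, not the paper's EEG data or its learned projection:

```python
import numpy as np

def centered_kernel_alignment(K, L):
    """CKA between two kernel (Gram) matrices of the same size."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    Kc, Lc = H @ K @ H, H @ L @ H
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

rng = np.random.default_rng(6)
X = rng.standard_normal((50, 8))             # stand-in feature set (e.g., neural features)
y = rng.integers(0, 2, 50)                   # stimulus / class labels

K = X @ X.T                                  # linear kernel on features
L = (y[:, None] == y[None, :]).astype(float) # "ideal" label kernel
print("CKA(features, labels) =", round(centered_kernel_alignment(K, L), 3))
```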
A Bioinformatics Approach for Detecting Repetitive Nested Motifs using Pattern Matching.
Romero, José R; Carballido, Jessica A; Garbus, Ingrid; Echenique, Viviana C; Ponzoni, Ignacio
2016-01-01
The identification of nested motifs in genomic sequences is a complex computational problem. The detection of these patterns is important to allow the discovery of transposable element (TE) insertions, incomplete reverse transcripts, deletions, and/or mutations. In this study, a de novo strategy for detecting patterns that represent nested motifs was designed based on exhaustive searches for pairs of motifs and combinatorial pattern analysis. These patterns can be grouped into three categories, motifs within other motifs, motifs flanked by other motifs, and motifs of large size. The methodology used in this study, applied to genomic sequences from the plant species Aegilops tauschii and Oryza sativa, revealed that it is possible to identify putative nested TEs by detecting these three types of patterns. The results were validated through BLAST alignments, which revealed the efficacy and usefulness of the new method, which is called Mamushka.
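A toy version of the "motif within a motif" search, with regular expressions standing in for the exhaustive pair search in Mamushka; the sequence and motifs below are invented:

```python
import re

def find_nested(seq, outer, inner):
    """Report occurrences of `outer` that contain `inner` inside them."""
    hits = []
    for m in re.finditer(f"(?=({outer}))", seq):   # lookahead allows overlapping outer hits
        start, text = m.start(1), m.group(1)
        inner_pos = [start + i.start() for i in re.finditer(inner, text)]
        if inner_pos:
            hits.append((start, start + len(text), inner_pos))
    return hits

# Toy genomic fragment: a repeat-delimited motif with a short motif inserted inside it
seq = "ACGT" * 3 + "TGCA" + "GAATTC" + "TGCA" + "ACGT" * 3
print(find_nested(seq, r"TGCA\w+TGCA", r"GAATTC"))   # [(12, 26, [16])]
```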
NASA Astrophysics Data System (ADS)
Chen, Z.; Chen, J.; Zheng, X.; Jiang, F.; Zhang, S.; Ju, W.; Yuan, W.; Mo, G.
2014-12-01
In this study, we explore the feasibility of optimizing ecosystem photosynthetic and respiratory parameters from the seasonal variation pattern of the net carbon flux. An optimization scheme is proposed to estimate two key parameters (Vcmax and Q10) by exploiting the seasonal variation in the net ecosystem carbon flux retrieved by an atmospheric inversion system. This scheme is implemented to estimate Vcmax and Q10 of the Boreal Ecosystem Productivity Simulator (BEPS) to improve its NEP simulation in the Boreal North America (BNA) region. Simultaneously, in-situ NEE observations at six eddy covariance sites are used to evaluate the NEE simulations. The results show that the performance of the optimized BEPS is superior to that of BEPS with the default parameter values. These results have implications for using atmospheric CO2 data to optimize ecosystem parameters through atmospheric inversion or data assimilation techniques.
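A minimal sketch of the parameter-estimation idea, fitting Vcmax-like and Q10 parameters to a seasonal net-flux curve with nonlinear least squares. The flux model here is deliberately crude and is not BEPS; the seasonality, reference respiration rate and noise level are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

day = np.arange(365)
temp = 10 + 15 * np.sin(2 * np.pi * (day - 80) / 365)                   # deg C, idealized seasonality
par = np.clip(20 + 15 * np.sin(2 * np.pi * (day - 80) / 365), 0, None)  # light proxy

def nep_model(params, temp, par):
    vcmax, q10 = params
    gpp = vcmax * par / (par + 20.0)            # crude light-limited photosynthesis term
    resp = 2.0 * q10 ** ((temp - 10.0) / 10.0)  # Q10 respiration with a fixed reference rate
    return gpp - resp

true = (60.0, 2.0)
rng = np.random.default_rng(7)
obs = nep_model(true, temp, par) + rng.normal(0, 1.0, day.size)  # stand-in "retrieved" seasonal NEP

fit = least_squares(lambda p: nep_model(p, temp, par) - obs, x0=[30.0, 1.5])
print("estimated (Vcmax, Q10):", fit.x.round(2))   # should be close to (60, 2)
```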
Optimality Principles for Model-Based Prediction of Human Gait
Ackermann, Marko; van den Bogert, Antonie J.
2010-01-01
Although humans have a large repertoire of potential movements, gait patterns tend to be stereotypical and appear to be selected according to optimality principles such as minimal energy. When applied to dynamic musculoskeletal models such optimality principles might be used to predict how a patient’s gait adapts to mechanical interventions such as prosthetic devices or surgery. In this paper we study the effects of different performance criteria on predicted gait patterns using a 2D musculoskeletal model. The associated optimal control problem for a family of different cost functions was solved utilizing the direct collocation method. It was found that fatigue-like cost functions produced realistic gait, with stance phase knee flexion, as opposed to energy-related cost functions which avoided knee flexion during the stance phase. We conclude that fatigue minimization may be one of the primary optimality principles governing human gait. PMID:20074736
Reduction of shock induced noise in imperfectly expanded supersonic jets using convex optimization
NASA Astrophysics Data System (ADS)
Adhikari, Sam
2007-11-01
Imperfectly expanded jets generate screech noise. The imbalance between the backpressure and the exit pressure of imperfectly expanded jets produces shock cells and expansion or compression waves from the nozzle. The instability waves and the shock cells interact to generate the screech sound. The mathematical model consists of cylindrical coordinate based full Navier-Stokes equations and large-eddy-simulation turbulence modeling. Analytical and computational analysis of the three-dimensional helical effects provides a model that relates several parameters with shock cell patterns, screech frequency and the distribution of shock generation locations. Convex optimization techniques minimize the shock cell patterns and the instability waves. The objective functions are (convex) quadratic and the constraint functions are affine. In the quadratic optimization programs, minimization of the quadratic functions over a set of polyhedra provides the optimal result. Various industry standard methods like regression analysis, distance between polyhedra, bounding variance, Markowitz optimization, and second order cone programming are used for quadratic optimization.
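The quadratic-program structure described above (convex quadratic objective, affine constraints, minimization over a polyhedron) looks like the following in a generic convex-optimization toolbox; the matrices are random placeholders, not a jet-noise model:

```python
import numpy as np
import cvxpy as cp

# Minimize a convex quadratic over a polyhedron {x : Gx <= h}
rng = np.random.default_rng(8)
n = 5
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)                  # positive definite quadratic term
q = rng.standard_normal(n)
G = np.vstack([np.eye(n), -np.eye(n)])   # box constraints written as affine inequalities
h = np.ones(2 * n)

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.quad_form(x, P) + q @ x)
prob = cp.Problem(objective, [G @ x <= h])
prob.solve()
print("optimal value:", round(prob.value, 4), " x* =", np.round(x.value, 3))
```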
Effects of musical training on sound pattern processing in high-school students.
Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse
2009-05-01
Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited in different stimulus onset asynchrony (SOA) conditions for non-musicians than for musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training facilitates detection of auditory patterns, allowing the ability to automatically recognize sequential sound patterns over longer time periods than non-musical counterparts.
Tee-ngam, Prinjaporn; Nunant, Namthip; Rattanarat, Poomrat; Siangproh, Weena; Chailapakul, Orawon
2013-01-01
Ferulic acid is an important phenolic antioxidant found in or added to diet supplements, beverages, and cosmetic creams. Two designs of paper-based platforms for the fast, simple and inexpensive evaluation of ferulic acid contents in food and pharmaceutical cosmetics were evaluated. The first, a paper-based electrochemical device, was developed for ferulic acid detection in uncomplicated matrix samples and was created by the photolithographic method. The second, a paper-based colorimetric device was preceded by thin layer chromatography (TLC) for the separation and detection of ferulic acid in complex samples using a silica plate stationary phase and an 85:15:1 (v/v/v) chloroform: methanol: formic acid mobile phase. After separation, ferulic acid containing section of the TLC plate was attached onto the patterned paper containing the colorimetric reagent and eluted with ethanol. The resulting color change was photographed and quantitatively converted to intensity. Under the optimal conditions, the limit of detection of ferulic acid was found to be 1 ppm and 7 ppm (S/N = 3) for first and second designs, respectively, with good agreement with the standard HPLC-UV detection method. Therefore, these methods can be used for the simple, rapid, inexpensive and sensitive quantification of ferulic acid in a variety of samples. PMID:24077320
Small acid soluble proteins for rapid spore identification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Branda, Steven S.; Lane, Todd W.; VanderNoot, Victoria A.
2006-12-01
This one year LDRD addressed the problem of rapid characterization of bacterial spores such as those from the genus Bacillus, the group that contains pathogenic spores such as B. anthracis. In this effort we addressed the feasibility of using a proteomics based approach to spore characterization using a subset of conserved spore proteins known as the small acid soluble proteins or SASPs. We proposed developing techniques that built on our previous expertise in microseparations to rapidly characterize or identify spores. An alternative SASP extraction method was developed that was amenable to both the subsequent fluorescent labeling required for laser-induced fluorescence detection and the low ionic strength requirements for isoelectric focusing. For the microseparations, both capillary isoelectric focusing and chip gel electrophoresis were employed. A variety of methods were evaluated to improve the molecular weight resolution for the SASPs, which are in a molecular weight range that is not well resolved by the current methods. Isoelectric focusing was optimized and employed to resolve the SASPs using UV absorbance detection. Proteomic signatures of native wild type Bacillus spores and clones genetically engineered to produce altered SASP patterns were assessed by slab gel electrophoresis, capillary isoelectric focusing with absorbance detection as well as microchip based gel electrophoresis employing sensitive laser-induced fluorescence detection.
Ground penetrating radar applied to rebar corrosion inspection
NASA Astrophysics Data System (ADS)
Eisenmann, David; Margetan, Frank; Chiou, Chien-Ping T.; Roberts, Ron; Wendt, Scott
2013-01-01
In this paper we investigate the use of ground penetrating radar (GPR) to detect corrosion-induced thinning of rebar in concrete bridge structures. We consider a simple pulse/echo amplitude-based inspection, positing that the backscattered response from a thinned rebar will be smaller than the similar response from a fully-intact rebar. Using a commercial 1600-MHz GPR system we demonstrate that, for laboratory specimens, backscattered amplitude measurements can detect a thinning loss of 50% in rebar diameter over a short length. GPR inspections on a highway bridge then identify several rebar with unexpectedly low amplitudes, possibly signaling thinning. To field a practical amplitude-based system for detecting thinned rebar, one must be able to quantify and assess the many factors that can potentially contribute to GPR signal amplitude variations. These include variability arising from the rebar itself (e.g., thinning) and from other factors (concrete properties, antenna orientation and liftoff, etc.). We report on early efforts to model the GPR instrument and the inspection process so as to assess such variability and to optimize inspections. This includes efforts to map the antenna radiation pattern, to predict how backscattered responses will vary with rebar size and location, and to assess detectability improvements via synthetic aperture focusing techniques (SAFT).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zarepisheh, M; Li, R; Xing, L
Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm then continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the quality of the resultant treatment plans as compared with conventional VMAT or IMRT treatments.
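For readers unfamiliar with the third ingredient, a bare-bones derivative-free pattern (compass) search on a toy objective is sketched below; the objective, starting point and step schedule are placeholders, not the treatment-planning score used in the paper:

```python
import numpy as np

def compass_pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Derivative-free compass/pattern search: poll +/- each coordinate, shrink on failure."""
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):
            trial = x + step * d
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5            # no polling direction improved: shrink the mesh
            if step < tol:
                break
    return x, fx

# Toy convex objective standing in for a plan-quality score over two station parameters
objective = lambda v: (v[0] - 3) ** 2 + 2 * (v[1] + 1) ** 2 + 0.5 * v[0] * v[1]
x_opt, f_opt = compass_pattern_search(objective, [-1.2, 1.0])
print(np.round(x_opt, 3), round(f_opt, 6))
```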
spsann - optimization of sample patterns using spatial simulated annealing
NASA Astrophysics Data System (ADS)
Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia
2015-04-01
There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well known method with widespread use to solve optimization problems in the soil and geo-sciences. This is mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a computationally intensive method. As such, many strategies were used to reduce computation time and memory usage: a) bottlenecks were implemented in C++, b) a finite set of candidate locations is used for perturbing the sample points, and c) data matrices are computed only once and then updated at each iteration instead of being recomputed. spsann is available on GitHub under the GPL Version 2.0 licence and will be further developed to: a) allow the use of a cost surface, b) implement other sensitive parts of the source code in C++, c) implement other optimizing criteria, and d) allow points to be added to or deleted from an existing point pattern.
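spsann itself is an R package; purely as an illustration of the spatial simulated annealing loop it implements for the MSSD criterion, the Python toy below perturbs one point at a time over a finite candidate set. The grid sizes, temperature schedule and iteration count are arbitrary assumptions, and spsann's distance-shrinking and multi-point perturbation features are omitted.

```python
import numpy as np

rng = np.random.default_rng(9)
candidates = rng.random((2000, 2))        # finite set of candidate sampling locations
grid = rng.random((5000, 2))              # prediction nodes used to evaluate coverage

def mssd(sample):
    # Mean squared shortest distance from every prediction node to the sample
    d2 = ((grid[:, None, :] - sample[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

n_pts, n_iter, temp, cooling = 25, 3000, 0.01, 0.999
idx = rng.choice(len(candidates), n_pts, replace=False)
energy = mssd(candidates[idx])

for it in range(n_iter):
    trial = idx.copy()
    trial[rng.integers(n_pts)] = rng.integers(len(candidates))   # perturb one sample point
    e_new = mssd(candidates[trial])
    # Accept improvements always, worse states with a temperature-dependent probability
    if e_new < energy or rng.random() < np.exp((energy - e_new) / temp):
        idx, energy = trial, e_new
    temp *= cooling

print("optimized MSSD:", round(energy, 5))
```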
Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi
2015-12-01
A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction as in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions and antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than those on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. Antibiotics with highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.
Model-Based Design of Tree WSNs for Decentralized Detection †
Tantawy, Ashraf; Koutsoukos, Xenofon; Biswas, Gautam
2015-01-01
The classical decentralized detection problem of finding the optimal decision rules at the sensor and fusion center, as well as variants that introduce physical channel impairments have been studied extensively in the literature. The deployment of WSNs in decentralized detection applications brings new challenges to the field. Protocols for different communication layers have to be co-designed to optimize the detection performance. In this paper, we consider the communication network design problem for a tree WSN. We pursue a system-level approach where a complete model for the system is developed that captures the interactions between different layers, as well as different sensor quality measures. For network optimization, we propose a hierarchical optimization algorithm that lends itself to the tree structure, requiring only local network information. The proposed design approach shows superior performance over several contentionless and contention-based network design approaches. PMID:26307989
Zilinskas, Julius; Lančinskas, Algirdas; Guarracino, Mario Rosario
2014-01-01
In this paper we propose mathematical models to plan a Next Generation Sequencing (NGS) experiment for detecting rare mutations in pools of patients. A mathematical optimization problem is formulated for optimal pooling, with respect to minimization of the experiment cost. Then, two different strategies to replicate patients in pools are proposed, which have the advantage of decreasing the overall costs. Finally, a multi-objective optimization formulation is proposed, in which the trade-off between the probability of detecting a mutation and the overall costs is taken into account. The proposed solutions are designed to provide the following advantages: (i) the solution guarantees that mutations are detectable in the experimental setting, and (ii) the cost of the NGS experiment and its biological validation using Sanger sequencing is minimized. Simulations show that replicating pools can decrease the overall experimental cost, thus making pooling an interesting option.
Shu, Ting; Zhang, Bob; Yan Tang, Yuan
2017-04-01
Researchers have recently discovered that Diabetes Mellitus can be detected through a non-invasive computerized method. However, the focus has been on facial block color features. In this paper, we extensively study the effects of texture features extracted from specific facial regions at detecting Diabetes Mellitus using eight texture extractors. The eight methods are from four texture feature families: (1) the statistical texture feature family: Image Gray-scale Histogram, Gray-level Co-occurrence Matrix, and Local Binary Pattern; (2) the structural texture feature family: Voronoi Tessellation; (3) the signal processing based texture feature family: Gaussian, Steerable, and Gabor filters; and (4) the model based texture feature family: Markov Random Field. In order to determine the most appropriate extractor with optimal parameter(s), various parameter settings of each extractor were tested. For each extractor, the same dataset (284 Diabetes Mellitus and 231 Healthy samples), classifiers (k-Nearest Neighbors and Support Vector Machines), and validation method (10-fold cross validation) are used. According to the experiments, the first and third families achieved a better outcome at detecting Diabetes Mellitus than the other two. The best texture feature extractor for Diabetes Mellitus detection is the Image Gray-scale Histogram with bin number=256, obtaining an accuracy of 99.02%, a sensitivity of 99.64%, and a specificity of 98.26% by using SVM. Copyright © 2017 Elsevier Ltd. All rights reserved.
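A minimal sketch of the best-performing pipeline described above (gray-scale histogram features with 256 bins, an SVM classifier, and 10-fold cross validation), using scikit-learn. The images and labels here are synthetic placeholders, not the facial-region dataset from the study, and the preprocessing is an assumption.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def grayscale_histogram(image, bins=256):
    """Normalized gray-scale histogram feature vector (the best-performing extractor above)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

# Placeholder data standing in for the facial-region images (284 DM, 231 healthy in the study).
rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(100, 64, 64))   # hypothetical 64x64 gray-scale patches
labels = rng.integers(0, 2, size=100)               # hypothetical DM / healthy labels

X = np.array([grayscale_histogram(img) for img in images])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, labels, cv=10)      # 10-fold cross validation as in the study
print(f"mean CV accuracy: {scores.mean():.3f}")
```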
NASA Astrophysics Data System (ADS)
Min, Qing-xu; Zhu, Jun-zhen; Feng, Fu-zhou; Xu, Chao; Sun, Ji-wei
2017-06-01
In this paper, lock-in vibrothermography (LVT) is utilized for defect detection. Specifically, for a metal plate with an artificial fatigue crack, the temperature rise of the defective area is used to analyze the influence of different test conditions, i.e. engagement force, excitation intensity, and modulation frequency. Multivariate nonlinear and logistic regression models are employed to estimate the POD (probability of detection) and POA (probability of alarm) of the fatigue crack, respectively. The resulting optimal selection of test conditions is presented. The study aims to provide an optimized selection method for the test conditions of a vibrothermography system with enhanced detection ability.
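A minimal sketch of a logistic-regression POD estimate of the kind referred to above, assuming hypothetical hit/miss inspection outcomes against a single test condition (excitation intensity); the study's actual models are multivariate and use measured vibrothermography data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical hit/miss outcomes versus excitation intensity (arbitrary units); in the study
# the predictors were engagement force, excitation intensity, and modulation frequency.
rng = np.random.default_rng(0)
intensity = rng.uniform(0.1, 5.0, 200)
p_true = 1 / (1 + np.exp(-(2.0 * intensity - 4.0)))      # assumed ground-truth POD curve
detected = (rng.random(200) < p_true).astype(int)        # simulated hit/miss data

model = LogisticRegression().fit(intensity.reshape(-1, 1), detected)

# POD estimate at a given test condition, and the intensity giving 90% POD
pod_at_2 = model.predict_proba([[2.0]])[0, 1]
b0, b1 = model.intercept_[0], model.coef_[0, 0]
i90 = (np.log(0.9 / 0.1) - b0) / b1
print(f"POD at intensity 2.0: {pod_at_2:.2f}, intensity for 90% POD: {i90:.2f}")
```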
NASA Astrophysics Data System (ADS)
Larumbe, Belen; Laviada, Jaime; Ibáñez-Loinaz, Asier; Teniente, Jorge
2018-01-01
A real-time imaging system based on a frequency scanning antenna for conveyor belt setups is presented in this paper. The frequency scanning antenna, together with an inexpensive parabolic reflector, operates in the W band, enabling the detection of details with dimensions on the order of 2 mm. In addition, a low level of sidelobes is achieved by optimizing unequal dividers to window the power distribution. Furthermore, the quality of the images is enhanced by the radiation pattern properties. The performance of the system is validated by showing simulation as well as experimental results obtained in real time, proving the feasibility of these kinds of frequency scanning antennas for cost-effective imaging applications.
Wang, Yu; Zhang, Yaonan; Yao, Zhaomin; Zhao, Ruixue; Zhou, Fengfeng
2016-01-01
Non-lethal macular diseases greatly impact patients' quality of life and will cause vision loss at the late stages. Visual inspection of optical coherence tomography (OCT) images by experienced clinicians is the main diagnostic technique. We proposed a computer-aided diagnosis (CAD) model to discriminate age-related macular degeneration (AMD), diabetic macular edema (DME) and healthy macula. The linear configuration pattern (LCP) based features of the OCT images were screened by the Correlation-based Feature Subset (CFS) selection algorithm. The best model, based on the sequential minimal optimization (SMO) algorithm, achieved an overall accuracy of 99.3% for the three classes of samples. PMID:28018716
Application of DNA Machineries for the Barcode Patterned Detection of Genes or Proteins.
Zhou, Zhixin; Luo, Guofeng; Wulf, Verena; Willner, Itamar
2018-06-05
The study introduces an analytical platform for the detection of genes or aptamer-ligand complexes by nucleic acid barcode patterns generated by DNA machineries. The DNA machineries consist of nucleic acid scaffolds that include specific recognition sites for the different genes or aptamer-ligand analytes. The binding of the analytes to the scaffolds initiates, in the presence of the nucleotide mixture, a cyclic polymerization/nicking machinery that yields displaced strands of variable lengths. The electrophoretic separation of the resulting strands provides barcode patterns for the specific detection of the different analytes. Mixtures of DNA machineries that yield, upon sensing of different genes (or aptamer ligands), one-, two-, or three-band barcode patterns are described. The combination of nucleic acid scaffolds acting as DNA machineries in the presence of the polymerase/nicking enzyme and nucleotide mixture, generating multiband barcode patterns, provides an analytical platform for the detection of an individual gene out of many possible genes. The diversity of genes (or other analytes) that can be analyzed by the DNA machineries and the barcode-patterned imaging is given by Pascal's triangle. As a proof-of-concept, the detection of one of six genes, that is, TP53, Werner syndrome, Tay-Sachs normal gene, BRCA1, Tay-Sachs mutant gene, and cystic fibrosis disorder gene, by six two-band barcode patterns is demonstrated. The advantages and limitations of the detection of analytes by polymerase/nicking DNA machineries that yield barcode patterns as imaging readout signals are discussed.
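The role of Pascal's triangle here is combinatorial: the number of distinct k-band barcode patterns that can be formed from n resolvable bands is the binomial coefficient C(n, k). The snippet below simply tabulates these counts; the assumption that the six two-band patterns in the proof-of-concept correspond to n = 4 bands (C(4, 2) = 6) is an illustration, not a statement from the paper.

```python
from math import comb

# Number of distinct k-band barcode patterns from n resolvable band lengths: C(n, k),
# i.e. an entry of Pascal's triangle. Six two-band patterns is consistent with n = 4
# band positions (an assumption made here for illustration): C(4, 2) = 6.
for n in range(1, 7):
    print([comb(n, k) for k in range(n + 1)])

print("two-band patterns from 4 bands:", comb(4, 2))   # -> 6 distinguishable analytes
```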
Optimizing Probability of Detection Point Estimate Demonstration
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used to assess the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. These NDE methods are intended to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF), while keeping the flaw sizes in the set as small as possible.
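Under the binomial point estimate framework described above, and assuming the common acceptance criterion that all flaws in the set must be detected (e.g. 29 of 29), the probability of passing the demonstration is PPD = p^n for a true POD of p. The sketch below computes PPD and an analogous false-call probability with SciPy; the sample sizes and acceptance criteria are assumptions made for illustration, not NASA's qualification requirements.

```python
from scipy.stats import binom

def prob_pass_demo(true_pod, n_flaws=29, max_misses=0):
    """Probability of passing a binomial POD demonstration (point estimate method)."""
    # Pass if the number of detections is at least n_flaws - max_misses.
    return binom.sf(n_flaws - max_misses - 1, n_flaws, true_pod)

def prob_false_calls_ok(false_call_rate, n_blanks=29, max_false_calls=0):
    """Probability of staying within the allowed number of false calls on blank sites."""
    return binom.cdf(max_false_calls, n_blanks, false_call_rate)

for pod in (0.90, 0.95, 0.99):
    print(f"true POD {pod:.2f}: PPD = {prob_pass_demo(pod):.3f}")
print(f"P(no false calls | rate 2%): {prob_false_calls_ok(0.02):.3f}")
```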
Maximizing the Biochemical Resolving Power of Fluorescence Microscopy
Esposito, Alessandro; Popleteeva, Marina; Venkitaraman, Ashok R.
2013-01-01
Most recent advances in fluorescence microscopy have focused on achieving spatial resolutions below the diffraction limit. However, the inherent capability of fluorescence microscopy to non-invasively resolve different biochemical or physical environments in biological samples has not yet been formally described, because an adequate and general theoretical framework is lacking. Here, we develop a mathematical characterization of the biochemical resolution in fluorescence detection with Fisher information analysis. To improve the precision and the resolution of quantitative imaging methods, we demonstrate strategies for the optimization of fluorescence lifetime, fluorescence anisotropy and hyperspectral detection, as well as different multi-dimensional techniques. We describe optimized imaging protocols, provide optimization algorithms and characterize precision and resolving power in biochemical imaging by analyzing the general properties of Fisher information in fluorescence detection. These strategies enable optimal use of the information content available within the limited photon budget typical of fluorescence microscopy. This theoretical foundation leads to a generalized strategy for the optimization of multi-dimensional optical detection, and demonstrates how the parallel detection of all properties of fluorescence can maximize the biochemical resolving power of fluorescence microscopy, an approach we term Hyper Dimensional Imaging Microscopy (HDIM). Our work provides a theoretical framework for the description of the biochemical resolution in fluorescence microscopy, irrespective of spatial resolution, and for the development of a new class of microscopes that exploit multi-parametric detection systems. PMID:24204821
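For reference, the standard Fisher-information quantities underlying this kind of analysis can be written as follows; this is a generic restatement with assumed notation, not the authors' specific derivation.

```latex
% Generic Fisher-information bound underlying the biochemical-resolution analysis
% (a sketch of the standard definitions, not the paper's derivation).
\[
  I(\theta) \;=\; \mathbb{E}\!\left[\left(\frac{\partial \ln p(\mathbf{x}\,|\,\theta)}{\partial \theta}\right)^{\!2}\right],
  \qquad
  \operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{N\, I(\theta)} \quad \text{(Cram\'er--Rao bound)},
\]
% where $\theta$ is the biochemical parameter of interest (e.g.\ fluorescence lifetime),
% $p(\mathbf{x}\,|\,\theta)$ the per-photon detection likelihood, and $N$ the number of
% detected photons. A common figure of merit is the photon economy
% $F = \sigma_{\hat{\theta}}\sqrt{N}/\theta \ge 1$, which quantifies how efficiently the
% limited photon budget is converted into parameter precision.
```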
Optimization of entanglement witnesses
NASA Astrophysics Data System (ADS)
Lewenstein, M.; Kraus, B.; Cirac, J. I.; Horodecki, P.
2000-11-01
An entanglement witness (EW) is an operator that allows the detection of entangled states. We give necessary and sufficient conditions for such operators to be optimal, i.e., to detect entangled states in an optimal way. We show how to optimize general EWs, and then we particularize our results to the nondecomposable ones; the latter are those that can detect positive partial transpose entangled states (PPTES's). We also present a method to systematically construct and optimize this last class of operators based on the existence of "edge" PPTES's, i.e., states that violate the range separability criterion [Phys. Lett. A 232, 333 (1997)] in an extreme manner. This method also permits a systematic construction of nondecomposable positive maps (PM's). Our results lead to a sufficient condition for entanglement in terms of nondecomposable EW's and PM's. Finally, we illustrate our results by constructing optimal EWs acting on H = C^2 ⊗ C^4. The corresponding PM's constitute examples of PM's with minimal "qubit" domains, or, equivalently, minimal Hermitian conjugate codomains.
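For readers unfamiliar with the terminology, the standard definitions used above can be restated as follows (a paraphrase with assumed notation, not the paper's full construction):

```latex
% An operator W is an entanglement witness (EW) iff
\[
  \operatorname{Tr}(W\rho_s) \ge 0 \quad \forall\,\rho_s \ \text{separable},
  \qquad
  \exists\,\rho \ \text{entangled such that } \operatorname{Tr}(W\rho) < 0 .
\]
% A witness $W_2$ is finer than $W_1$ if every state detected by $W_1$ is also detected
% by $W_2$; $W$ is optimal if no witness is finer than it. The example in the text acts on
\[
  \mathcal{H} \;=\; \mathbb{C}^2 \otimes \mathbb{C}^4 .
\]
```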
NASA Astrophysics Data System (ADS)
Watanabe, Shuji; Takano, Hiroshi; Fukuda, Hiroya; Hiraki, Eiji; Nakaoka, Mutsuo
This paper deals with a digital control scheme for a multiple-paralleled high-frequency switching current amplifier with a four-quadrant chopper for generating gradient magnetic fields in MRI (Magnetic Resonance Imaging) systems. In order to track highly precise current patterns in the Gradient Coils (GC), the proposed current amplifier cancels the switching current ripples in the GC against each other and uses optimally designed switching gate pulse patterns that are not affected by the large filter current ripple amplitude. The optimal control implementation fits naturally with linear control theory for GC current amplifiers and yields excellent characteristics. The digital control system can be realized easily with DSPs or microprocessors. Multiple microprocessors operating in parallel realize a two- or higher-paralleled GC current pattern tracking amplifier with an optimal control design, and excellent results are presented for improving the image quality of MRI systems.
Raghavendra, U; Gudigar, Anjan; Maithri, M; Gertych, Arkadiusz; Meiburger, Kristen M; Yeong, Chai Hong; Madla, Chakri; Kongmebhol, Pailin; Molinari, Filippo; Ng, Kwan Hoong; Acharya, U Rajendra
2018-04-01
Ultrasound imaging is one of the most common visualizing tools used by radiologists to identify the location of thyroid nodules. However, visual assessment of nodules is difficult and often affected by inter- and intra-observer variabilities. Thus, a computer-aided diagnosis (CAD) system can be helpful to cross-verify the severity of nodules. This paper proposes a new CAD system to characterize thyroid nodules using optimized multi-level elongated quinary patterns. In this study, higher order spectral (HOS) entropy features extracted from these patterns appropriately distinguished benign and malignant nodules under particle swarm optimization (PSO) and support vector machine (SVM) frameworks. Our CAD algorithm achieved a maximum accuracy of 97.71% and 97.01% in private and public datasets respectively. The evaluation of this CAD system on both private and public datasets confirmed its effectiveness as a secondary tool in assisting radiological findings. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles
2008-12-01
In this paper, we propose a globally convergent baud-spaced blind equalization method. This method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The unimodal cost function used relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In the case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with full channel surfing reinitialization strategy. However, comparable performance is obtained for constant modulus signals.
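For context, the baseline constant modulus algorithm (CMA) against which the method is compared minimizes E[(|y|^2 - R2)^2] by stochastic gradient descent. The Python sketch below is a generic textbook CMA equalizer on a toy QPSK channel, assumed purely for illustration; it is not the paper's pattern-search equalizer.

```python
import numpy as np

def cma_equalizer(received, n_taps=11, mu=1e-3, r2=1.0):
    """Baseline constant modulus algorithm (CMA) equalizer (comparison method).
    Minimizes E[(|y|^2 - R2)^2] by stochastic gradient descent; a generic sketch."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                      # center-spike initialization
    out = np.empty(len(received) - n_taps, dtype=complex)
    for n in range(len(out)):
        x = received[n:n + n_taps][::-1]      # regressor (most recent sample first)
        y = np.dot(w, x)
        e = y * (np.abs(y) ** 2 - r2)         # CMA error term
        w -= mu * e * np.conj(x)              # stochastic gradient update
        out[n] = y
    return out, w

# Toy example: QPSK symbols through a mild 3-tap channel plus noise (illustrative only).
rng = np.random.default_rng(0)
sym = (rng.choice([1, -1], 4000) + 1j * rng.choice([1, -1], 4000)) / np.sqrt(2)
rx = np.convolve(sym, [1.0, 0.3 + 0.2j, 0.1], mode="same") + 0.01 * rng.standard_normal(4000)
y, w = cma_equalizer(rx)
print("output modulus spread:", np.std(np.abs(y[2000:])))
```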
Modeling of urban growth using cellular automata (CA) optimized by Particle Swarm Optimization (PSO)
NASA Astrophysics Data System (ADS)
Khalilnia, M. H.; Ghaemirad, T.; Abbaspour, R. A.
2013-09-01
In this paper, two satellite images of Tehran, the capital city of Iran, taken by TM and ETM+ in 1988 and 2010, are used as the base information layers to study the changes in the urban patterns of this metropolis. The patterns of urban growth for the city of Tehran are extracted over this period using cellular automata with logistic regression functions as transition functions. Furthermore, the weighting coefficients of the parameters affecting urban growth, i.e. distance from urban centers, distance from rural centers, distance from agricultural centers, and neighborhood effects, were selected using PSO. In order to evaluate the results of the prediction, the percent correct match index is calculated. According to the results, by combining optimization techniques with the cellular automata model, the urban growth patterns can be predicted with an accuracy of up to 75%.
Optimization of forest wildlife objectives
John Hof; Robert Haight
2007-01-01
This chapter presents an overview of methods for optimizing wildlife-related objectives. These objectives hinge on landscape pattern, so we refer to these methods as "spatial optimization." It is currently possible to directly capture deterministic characterizations of the most basic spatial relationships: proximity relationships (including those that lead to...
NASA Astrophysics Data System (ADS)
Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.
2004-07-01
The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions, which meet given constraints for the tumour as well as any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations of neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. Including nonlinear terms in the simultaneous projection algorithm to deal with dose-volume histogram (DVH) constraints does not compromise this property from our experimental observation. The smoothness properties are compared with those from other optimization algorithms which include simulated annealing and the gradient descent method. The simultaneous property of these algorithms is ideally suited to parallel computing technologies.
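A minimal sketch of Cimmino's simultaneous projection method for a system of linear inequalities A x <= b, the formulation mentioned above: every violated half-space projection is computed independently and a weighted average is applied, which is what makes the algorithm naturally parallel. The dose-influence matrix and bounds below are synthetic placeholders, and the nonlinear DVH terms discussed in the paper are omitted.

```python
import numpy as np

def cimmino(A, b, n_iter=5000, relax=1.8, weights=None):
    """Cimmino's simultaneous projection method for the linear inequalities A x <= b.
    All violated half-space projections are computed at once and averaged; a generic
    sketch, not the treatment-planning code."""
    m, n = A.shape
    w = np.full(m, 1.0 / m) if weights is None else weights
    row_norm2 = (A ** 2).sum(axis=1)
    x = np.zeros(n)
    for _ in range(n_iter):
        violation = np.maximum(A @ x - b, 0.0)          # only violated constraints project
        x = x - relax * (A.T @ (w * violation / row_norm2))
    return x

# Toy beamlet problem built around a known feasible intensity pattern x0:
# keep voxel doses within a +/-5% window of D @ x0 and keep beamlet intensities non-negative.
rng = np.random.default_rng(0)
D = rng.uniform(0.0, 1.0, (40, 10))                     # hypothetical dose-influence matrix
x0 = rng.uniform(0.5, 1.5, 10)
d = D @ x0
A = np.vstack([D, -D, -np.eye(10)])                     # D x <= 1.05 d, -D x <= -0.95 d, x >= 0
b = np.concatenate([1.05 * d, -0.95 * d, np.zeros(10)])
x = cimmino(A, b)
print("max violation at start:", np.maximum(-b, 0).max())
print("max violation at end:  ", np.maximum(A @ x - b, 0).max())
```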
Standardizing a simpler, more sensitive and accurate tail bleeding assay in mice
Liu, Yang; Jennings, Nicole L; Dart, Anthony M; Du, Xiao-Jun
2012-01-01
AIM: To optimize the experimental protocols for a simple, sensitive and accurate bleeding assay. METHODS: The bleeding assay was performed in mice by tail tip amputation, immersing the tail in saline at 37 °C, continuously monitoring bleeding patterns and measuring bleeding volume from changes in body weight. Sensitivity and extent of variation of bleeding time and bleeding volume were compared in mice treated with the P2Y12 receptor inhibitor prasugrel at various doses or in mice deficient in FcRγ, a signaling protein of the glycoprotein VI receptor. RESULTS: We described details of the bleeding assay with the aim of standardizing this commonly used assay. The bleeding assay detailed here was simple to operate and permitted continuous monitoring of the bleeding pattern and detection of re-bleeding. We also reported a simple and accurate way of quantifying bleeding volume from changes in body weight, which correlated well with a chemical assay of hemoglobin levels (r2 = 0.990, P < 0.0001). We determined by tail bleeding assay the dose-effect relation of the anti-platelet drug prasugrel from 0.015 to 5 mg/kg. Our results showed that the correlation between bleeding time and volume was unsatisfactory and that, compared with bleeding time, bleeding volume was more sensitive in detecting a partial inhibition of platelets' haemostatic activity (P < 0.01). Similarly, in mice with genetic disruption of FcRγ as a signaling molecule of P-selectin glycoprotein ligand-1 leading to platelet dysfunction, both increased bleeding volume and a repeated bleeding pattern defined the phenotype of the knockout mice better than a prolonged bleeding time did. CONCLUSION: Determination of bleeding pattern and bleeding volume, in addition to bleeding time, improved the sensitivity and accuracy of this assay, particularly when platelet function is partially inhibited. PMID:24520531
Detection of artifacts from high energy bursts in neonatal EEG.
Bhattacharyya, Sourya; Biswas, Arunava; Mukherjee, Jayanta; Majumdar, Arun Kumar; Majumdar, Bandana; Mukherjee, Suchandra; Singh, Arun Kumar
2013-11-01
Detection of non-cerebral activities or artifacts, intermixed within the background EEG, is essential to discard them from subsequent pattern analysis. The problem is much harder in neonatal EEG, where the background EEG contains spikes, waves, and rapid fluctuations in amplitude and frequency. Existing artifact detection methods are mostly limited to detect only a subset of artifacts such as ocular, muscle or power line artifacts. Few methods integrate different modules, each for detection of one specific category of artifact. Furthermore, most of the reference approaches are implemented and tested on adult EEG recordings. Direct application of those methods on neonatal EEG causes performance deterioration, due to greater pattern variation and inherent complexity. A method for detection of a wide range of artifact categories in neonatal EEG is thus required. At the same time, the method should be specific enough to preserve the background EEG information. The current study describes a feature based classification approach to detect both repetitive (generated from ECG, EMG, pulse, respiration, etc.) and transient (generated from eye blinking, eye movement, patient movement, etc.) artifacts. It focuses on artifact detection within high energy burst patterns, instead of detecting artifacts within the complete background EEG with wide pattern variation. The objective is to find true burst patterns, which can later be used to identify the Burst-Suppression (BS) pattern, which is commonly observed during newborn seizure. Such selective artifact detection is proven to be more sensitive to artifacts and specific to bursts, compared to the existing artifact detection approaches applied on the complete background EEG. Several time domain, frequency domain, statistical features, and features generated by wavelet decomposition are analyzed to model the proposed bi-classification between burst and artifact segments. A feature selection method is also applied to select the feature subset producing highest classification accuracy. The suggested feature based classification method is executed using our recorded neonatal EEG dataset, consisting of burst and artifact segments. We obtain 78% sensitivity and 72% specificity as the accuracy measures. The accuracy obtained using the proposed method is found to be about 20% higher than that of the reference approaches. Joint use of the proposed method with our previous work on burst detection outperforms reference methods on simultaneous burst and artifact detection. As the proposed method supports detection of a wide range of artifact patterns, it can be improved to incorporate the detection of artifacts within other seizure patterns and background EEG information as well. © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Mingjian; Zhang, Jie; Feng, Cong
Here, one of the biggest concerns associated with integrating a large amount of renewable energy into the power grid is the ability to handle large ramps in the renewable power output. For the sake of system reliability and economics, it is essential for power system operators to better understand the ramping features of renewables, load, and netload. An optimized swinging door algorithm (OpSDA) is used and extended to accurately and efficiently detect ramping events. For wind power ramp detection, a process of merging 'bumps' (segments with a different changing direction) into adjacent ramping segments is included to improve the performance of the OpSDA method. For solar ramp detection, ramping events that occur in both clear-sky and measured (or forecasted) solar power are removed to account for the diurnal pattern of solar generation. Ramping features are extracted and extensively compared between load and netload under different renewable penetration levels (9.77%, 15.85%, and 51.38%). Comparison results show that (i) netload ramp events with shorter durations and smaller magnitudes occur more frequently when the renewable penetration level increases, and the total number of ramping events also increases; and (ii) different ramping characteristics are observed in load and netload even at a low renewable penetration level.
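In the spirit of the bump-merging step described above, the following simplified Python sketch segments a power series into monotonic runs and absorbs small opposite-direction 'bumps' into the adjacent ramping segments. It is an illustration of the idea only, not the OpSDA implementation, and the thresholds and toy signal are assumptions.

```python
import numpy as np

def monotonic_segments(power):
    """Split a power series into maximal monotonic runs (start, end, direction)."""
    d = np.diff(power)
    segs, start, cur = [], 0, 0
    for i, di in enumerate(d):
        s = int(np.sign(di))
        if s == 0:
            continue                      # flat step continues the current run
        if cur == 0:
            cur = s
        elif s != cur:                    # direction change closes the run
            segs.append((start, i, cur))
            start, cur = i, s
    segs.append((start, len(power) - 1, cur))
    return segs

def merge_bumps(power, segs, min_magnitude):
    """Absorb small opposite-direction 'bumps' into the adjacent ramping segments
    (a simplified stand-in for the bump-merging step, not the OpSDA code)."""
    merged = [list(segs[0])]
    for s in segs[1:]:
        magnitude = abs(power[s[1]] - power[s[0]])
        prev = merged[-1]
        if magnitude < min_magnitude or s[2] == prev[2]:
            prev[1] = s[1]                # absorb the bump / extend the ramp
        else:
            merged.append(list(s))
    return [tuple(m) for m in merged]

# Toy wind-power-like signal: slow ramps plus small noise-induced bumps.
t = np.linspace(0, 10, 200)
wind = np.clip(np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(200), -1, 1)
segs = merge_bumps(wind, monotonic_segments(wind), min_magnitude=0.15)
ramps = [(a, b) for a, b, d in segs if abs(wind[b] - wind[a]) >= 0.5]
print(f"{len(ramps)} significant ramps detected")
```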
Results of land cover change detection analysis in and around Cordillera Azul National Park, Peru
Sleeter, Benjamin M.; Halsing, David L.
2005-01-01
The first product of the Optimizing Design and Management of Protected Areas for Conservation Project is a land cover change detection analysis based on Landsat thematic mapper (TM) and enhanced thematic mapper plus (ETM+) imagery collected at intervals between 1989 and 2002. The goal of this analysis was to quantify and analyze patterns of forest clearing, land conversion, and other disturbances in and around the Cordillera Azul National Park in Peru. After removing clouds and cloud shadows from the imagery using a series of automatic and manual processes, a Tasseled Cap Transformation was used to detect pixels of high reflectance, which were classified as bare ground and areas of likely forest clearing. Results showed a slow but steady increase in cleared ground prior to 1999 and a rapid and increasing conversion rate after that time. The highest concentrations of clearings have spread upward from the western border of the study area on the Huallaga River. To date, most disturbances have taken place in the buffer zone around the park, not within it, but the data show dense clearings occurring closer to the park border each year.
A novel method for overlapping community detection using Multi-objective optimization
NASA Astrophysics Data System (ADS)
Ebrahimi, Morteza; Shahmoradi, Mohammad Reza; Heshmati, Zainabolhoda; Salehi, Mostafa
2018-09-01
The problem of community detection, one of the most important applications of network science, can be addressed effectively by multi-objective optimization. In this paper, we present a novel efficient method based on this approach. In addition, the idea of using all Pareto fronts to detect overlapping communities is introduced. The proposed method has two main advantages compared to other multi-objective optimization based approaches. The first advantage is scalability, and the second is the ability to find overlapping communities. Unlike most previous works, the proposed method is able to find overlapping communities effectively. The new algorithm works by extracting appropriate communities from all the Pareto optimal solutions, instead of choosing a single optimal solution. Empirical experiments on different features of separated and overlapping communities, on both synthetic and real networks, show that the proposed method performs better in comparison with other methods.
DSP-Based dual-polarity mass spectrum pattern recognition for bio-detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riot, V; Coffee, K; Gard, E
2006-04-21
The Bio-Aerosol Mass Spectrometry (BAMS) instrument analyzes single aerosol particles using a dual-polarity time-of-flight mass spectrometer, recording simultaneously spectra of thirty to a hundred thousand points for each polarity. We describe here a real-time pattern recognition algorithm developed at Lawrence Livermore National Laboratory that has been implemented on a nine Digital Signal Processor (DSP) system from Signatec Incorporated. The algorithm first preprocesses the raw time-of-flight data independently through an adaptive baseline removal routine. The next step consists of a polarity-dependent calibration to a mass-to-charge representation, reducing the data to about five hundred to a thousand channels per polarity. The last step is the identification step, using a pattern recognition algorithm based on a library of known particle signatures including threat agents and background particles. The identification step includes integrating the two polarities for a final identification determination using a score-based rule tree. This algorithm, operating on multiple channels per polarity and multiple polarities, is well suited for parallel real-time processing. It has been implemented on the PMP8A from Signatec Incorporated, which is a computer-based board that can interface directly to the two one-Giga-Sample digitizers (PDA1000 from Signatec Incorporated) used to record the two polarities of time-of-flight data. By using optimized data separation, pipelining, and parallel processing across the nine DSPs, it is possible to achieve a processing speed of up to a thousand particles per second while maintaining the recognition rate observed in a non-real-time implementation. This embedded system has allowed the BAMS technology to improve its throughput and therefore its sensitivity, while maintaining a large dynamic range (number of channels and two polarities) and thus the system's specificity for bio-detection.
A simple and low-cost biofilm quantification method using LED and CMOS image sensor.
Kwak, Yeon Hwa; Lee, Junhee; Lee, Junghoon; Kwak, Soo Hwan; Oh, Sangwoo; Paek, Se-Hwan; Ha, Un-Hwan; Seo, Sungkyu
2014-12-01
A novel biofilm detection platform, which consists of a cost-effective red, green, and blue light-emitting diode (RGB LED) as a light source and a lens-free CMOS image sensor as a detector, is designed. This system can measure the diffraction patterns of cells from their shadow images, and gather light absorbance information according to the concentration of biofilms through a simple image processing procedure. Compared to a bulky and expensive commercial spectrophotometer, this platform can provide accurate and reproducible biofilm concentration detection and is simple, compact, and inexpensive. Biofilms originating from various bacterial strains, including Pseudomonas aeruginosa (P. aeruginosa), were tested to demonstrate the efficacy of this new biofilm detection approach. The results were compared with the results obtained from a commercial spectrophotometer. To utilize a cost-effective light source (i.e., an LED) for biofilm detection, the illumination conditions were optimized. For accurate and reproducible biofilm detection, a simple, custom-coded image processing algorithm was developed and applied to a five-megapixel CMOS image sensor, which is a cost-effective detector. The concentration of biofilms formed by P. aeruginosa was detected and quantified by varying the indole concentration, and the results were compared with the results obtained from a commercial spectrophotometer. The correlation value of the results from those two systems was 0.981 (N = 9, P < 0.01) and the coefficients of variation (CVs) were approximately threefold lower at the CMOS image-sensor platform. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hausmann, Anita; Duschek, Frank; Fischbach, Thomas; Pargmann, Carsten; Aleksejev, Valeri; Poryvkina, Larisa; Sobolev, Innokenti; Babichenko, Sergey; Handke, Jürgen
2014-05-01
The challenges of detecting hazardous biological materials are manifold: such material has to be discriminated from other substances in various natural surroundings, and the detection sensitivity should be extremely high. As living material may reproduce itself, even a single bacterium may represent a high risk. Of course, identification should be fast, with a low false alarm rate. Up to now, there is no single technique that solves this problem. Point sensors may collect material and identify it, but fast identification and especially appropriate positioning of local collectors remain difficult. On the other hand, laser-based standoff detection may instantaneously provide information on an accidental release of material by detecting the generated thin cloud. The LIF technique may classify but hardly identify the substance. A solution can be to use the LIF technique in a first step to collect primary data and, if necessary, to utilize these data for optimized positioning of point sensors. We perform studies on an open-air laser test range at distances between 20 and 135 m, applying the LIF technique to detect and classify aerosols. In order to employ the LIF capability, we use a laser source emitting alternately at two wavelengths, 280 and 355 nm. Moreover, the time dependence of the fluorescence spectra is recorded by a gated intensified CCD camera. Signal processing is performed by dedicated software for spectral pattern recognition. The direct comparison of all results leads to a basic classification of the various compounds.
Mai, Uyen; Mirarab, Siavash
2018-05-08
Sequence data used in reconstructing phylogenetic trees may include various sources of error. Typically, errors are detected at the sequence level, but when missed, the erroneous sequences often appear as unexpectedly long branches in the inferred phylogeny. We propose an automatic method to detect such errors. We build a phylogeny including all the data and then detect sequences that artificially inflate the tree diameter. We formulate an optimization problem, called the k-shrink problem, that seeks to find k leaves that can be removed to maximally reduce the tree diameter. We present an algorithm that finds the exact solution to this problem in polynomial time. We then use several statistical tests to find outlier species that have an unexpectedly high impact on the tree diameter. These tests can use a single tree or a set of related gene trees and can also adjust to species-specific patterns of branch length. The resulting method is called TreeShrink. We test our method on six phylogenomic biological datasets and an HIV dataset and show that the method successfully detects and removes long branches. TreeShrink removes sequences more conservatively than rogue taxon removal and often reduces gene tree discordance more than rogue taxon removal once the amount of filtering is controlled. TreeShrink is an effective method for detecting sequences that lead to unrealistically long branch lengths in phylogenetic trees. The tool is publicly available at https://github.com/uym2/TreeShrink .
Fringe-period selection for a multifrequency fringe-projection phase unwrapping method
NASA Astrophysics Data System (ADS)
Zhang, Chunwei; Zhao, Hong; Jiang, Kejian
2016-08-01
The multi-frequency fringe-projection phase unwrapping method (MFPPUM) is a typical phase unwrapping algorithm for fringe projection profilometry. It has the advantage of correctly accomplishing phase unwrapping even in the presence of surface discontinuities. If the fringe frequency ratio of the MFPPUM is too large, fringe order error (FOE) may be triggered, which results in phase unwrapping error. It is preferable to keep the phase unwrapping correct while using the fewest sets of lower-frequency fringe patterns. To achieve this goal, in this paper a parameter called the fringe order inaccuracy (FOI) is defined, the dominant factors that may induce FOE are theoretically analyzed, a method to optimally select the fringe periods for the MFPPUM is proposed with the aid of FOI, and experiments are conducted to investigate the impact of the dominant factors on phase unwrapping and demonstrate the validity of the proposed method. Some novel phenomena are revealed by these experiments. The proposed method helps to optimally select the fringe periods and to detect phase unwrapping errors for the MFPPUM.
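For reference, the fringe order computation that FOE refers to can be written in its standard two-frequency temporal phase unwrapping form below; the notation is assumed here and this generic relation is not reproduced from the paper.

```latex
% Standard two-frequency temporal phase unwrapping step (generic form; notation assumed).
% \phi_h: wrapped phase of the high-frequency pattern, \Phi_l: already-unwrapped phase of
% the lower-frequency pattern, f_h/f_l: fringe frequency ratio.
\[
  k_h \;=\; \operatorname{round}\!\left(\frac{(f_h/f_l)\,\Phi_l \;-\; \phi_h}{2\pi}\right),
  \qquad
  \Phi_h \;=\; \phi_h \;+\; 2\pi k_h .
\]
% A fringe order error (FOE) occurs when phase noise makes the argument of round() deviate
% by more than 1/2 from its true integer value, which bounds the usable frequency ratio.
```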
Scale-dependent feedbacks between patch size and plant reproduction in desert grassland
Svejcar, Lauren N.; Bestelmeyer, Brandon T.; Duniway, Michael C.; James, Darren K.
2015-01-01
Theoretical models suggest that scale-dependent feedbacks between plant reproductive success and plant patch size govern transitions from highly to sparsely vegetated states in drylands, yet there is scant empirical evidence for these mechanisms. Scale-dependent feedback models suggest that an optimal patch size exists for growth and reproduction of plants and that a threshold patch organization exists below which positive feedbacks between vegetation and resources can break down, leading to critical transitions. We examined the relationship between patch size and plant reproduction using an experiment in a Chihuahuan Desert grassland. We tested the hypothesis that reproductive effort and success of a dominant grass (Bouteloua eriopoda) would vary predictably with patch size. We found that focal plants in medium-sized patches featured higher rates of grass reproductive success than when plants occupied either large patch interiors or small patches. These patterns support the existence of scale-dependent feedbacks in Chihuahuan Desert grasslands and indicate an optimal patch size for reproductive effort and success in B. eriopoda. We discuss the implications of these results for detecting ecological thresholds in desert grasslands.
Fürbass, F; Hartmann, M M; Halford, J J; Koren, J; Herta, J; Gruber, A; Baumgartner, C; Kluge, T
2015-09-01
Continuous EEG from critical care patients needs to be evaluated in a time-efficient manner to maximize the treatment effect. A computational method is presented that detects rhythmic and periodic patterns according to the critical care EEG terminology (CCET) of the American Clinical Neurophysiology Society (ACNS). The aim is to show that these detected patterns support EEG experts in writing neurophysiological reports. First, three case reports exemplify the evaluation procedure using graphically presented detections. Second, 187 hours of EEG from 10 critical care patients were used in a comparative trial study. For each patient, the result of a review session using the EEG and the visualized pattern detections was compared to the original neurophysiology report. In three out of five patients with reported seizures, all seizures were reported correctly. In two patients, several subtle clinical seizures with unclear EEG correlation were missed. Lateralized periodic discharges (LPD) were correctly found in 2/2 patients and EEG slowing was correctly found in 7/9 patients. In 8/10 patients, additional EEG features were found, including LPDs, EEG slowing, and seizures. The use of automatic pattern detection will assist in the review of EEG and increase efficiency. The implementation of bedside surveillance devices using our detection algorithm appears to be feasible and remains to be confirmed in further multicenter studies. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
The performance of matched-field track-before-detect methods using shallow-water Pacific data.
Tantum, Stacy L; Nolte, Loren W; Krolik, Jeffrey L; Harmanci, Kerem
2002-07-01
Matched-field track-before-detect processing, which extends the concept of matched-field processing to include modeling of the source dynamics, has recently emerged as a promising approach for maintaining the track of a moving source. In this paper, optimal Bayesian and minimum variance beamforming track-before-detect algorithms which incorporate a priori knowledge of the source dynamics in addition to the underlying uncertainties in the ocean environment are presented. A Markov model is utilized for the source motion as a means of capturing the stochastic nature of the source dynamics without assuming uniform motion. In addition, the relationship between optimal Bayesian track-before-detect processing and minimum variance track-before-detect beamforming is examined, revealing how an optimal tracking philosophy may be used to guide the modification of existing beamforming techniques to incorporate track-before-detect capabilities. Further, the benefits of implementing an optimal approach over conventional methods are illustrated through application of these methods to shallow-water Pacific data collected as part of the SWellEX-1 experiment. The results show that incorporating Markovian dynamics for the source motion provides marked improvement in the ability to maintain target track without the use of a uniform velocity hypothesis.
Zheng, Qianwang; Mikš-Krajnik, Marta; Yang, Yishan; Xu, Wang; Yuk, Hyun-Gyun
2014-09-01
Conventional culture detection methods are time-consuming and labor-intensive. For this reason, an alternative rapid method combining real-time PCR and immunomagnetic separation (IMS) was investigated in this study to detect both healthy and heat-injured Salmonella Typhimurium on raw duck wings. Firstly, the IMS method was optimized by determining the capture efficiency of Dynabeads® for Salmonella cells on raw duck wings with different bead incubation (10, 30 and 60 min) and magnetic separation (3, 10 and 30 min) times. Secondly, three Taqman primer sets, Sal, invA and ttr, were evaluated to optimize the real-time PCR protocol by comparing five parameters: inclusivity, exclusivity, PCR efficiency, detection probability and limit of detection (LOD). Thirdly, the optimized real-time PCR, in combination with IMS (PCR-IMS) assay, was compared with a standard ISO and a real-time PCR (PCR) method by analyzing artificially inoculated raw duck wings with healthy and heat-injured Salmonella cells at 10^1 and 10^0 CFU/25 g. Finally, the optimized PCR-IMS assay was validated for Salmonella detection in naturally contaminated raw duck wing samples. Under optimal IMS conditions (30 min bead incubation and 3 min magnetic separation times), approximately 85 and 64% of S. Typhimurium cells were captured by Dynabeads® from pure culture and inoculated raw duck wings, respectively. Although the Sal and ttr primers exhibited 100% inclusivity and exclusivity for 16 Salmonella spp. and 36 non-Salmonella strains, the Sal primer showed a lower LOD (10^3 CFU/ml) and higher PCR efficiency (94.1%) than the invA and ttr primers. Moreover, for the Sal and invA primers, 100% detection probability in raw duck wing suspension was observed at 10^3 and 10^4 CFU/ml with and without IMS, respectively. Thus, the Sal primer was chosen for further experiments. The optimized PCR-IMS method was significantly (P=0.0011) better at detecting healthy Salmonella cells after 7-h enrichment than the traditional PCR method. However, there was no significant difference between the two methods with a longer enrichment time (14 h). The diagnostic accuracy of PCR-IMS was shown to be 98.3% through the validation study. These results indicate that the optimized PCR-IMS method could provide a sensitive, specific and rapid detection method for Salmonella on raw duck wings, enabling 10-h detection. However, a longer enrichment time could be needed for resuscitation and reliable detection of heat-injured cells. Copyright © 2014 Elsevier B.V. All rights reserved.
Vilas, Carlos; Balsa-Canto, Eva; García, Maria-Sonia G; Banga, Julio R; Alonso, Antonio A
2012-07-02
Systems biology allows the analysis of biological system behavior under different conditions through in silico experimentation. The possibility of perturbing biological systems in different manners calls for the design of perturbations to achieve particular goals. Examples include the design of a chemical stimulation to maximize the amplitude of a given cellular signal or to achieve a desired pattern in pattern formation systems. Such design problems can be mathematically formulated as dynamic optimization problems, which are particularly challenging when the system is described by partial differential equations. This work addresses the numerical solution of such dynamic optimization problems for spatially distributed biological systems. The usually nonlinear and large-scale nature of the mathematical models of this class of systems and the presence of constraints in the optimization problems impose a number of difficulties, such as the presence of suboptimal solutions, which call for robust and efficient numerical techniques. Here, the use of a control vector parameterization approach combined with efficient and robust hybrid global optimization methods and a reduced-order model methodology is proposed. The capabilities of this strategy are illustrated by solving two challenging problems: bacterial chemotaxis and the FitzHugh-Nagumo model. In the chemotaxis problem the objective was to efficiently compute the time-varying optimal concentration of chemoattractant at one of the spatial boundaries in order to achieve predefined cell distribution profiles. Results are in agreement with those previously published in the literature. The FitzHugh-Nagumo problem is also efficiently solved and illustrates very well how dynamic optimization may be used to force a system to evolve from an undesired to a desired pattern with a reduced number of actuators. The presented methodology can be used for the efficient dynamic optimization of generic distributed biological systems.
Transcranial Electrical Neuromodulation Based on the Reciprocity Principle
Fernández-Corazza, Mariano; Turovets, Sergei; Luu, Phan; Anderson, Erik; Tucker, Don
2016-01-01
A key challenge in multi-electrode transcranial electrical stimulation (TES) or transcranial direct current stimulation (tDCS) is to find a current injection pattern that delivers the necessary current density at a target and minimizes it in the rest of the head, which is mathematically modeled as an optimization problem. Such an optimization with the Least Squares (LS) or Linearly Constrained Minimum Variance (LCMV) algorithms is generally computationally expensive and requires multiple independent current sources. Based on the reciprocity principle in electroencephalography (EEG) and TES, it could be possible to find the optimal TES patterns quickly whenever the solution of the forward EEG problem is available for a brain region of interest. Here, we investigate the reciprocity principle as a guideline for finding optimal current injection patterns in TES that comply with safety constraints. We define four different trial cortical targets in a detailed seven-tissue finite element head model, and analyze the performance of the reciprocity family of TES methods in terms of electrode density, targeting error, focality, intensity, and directionality using the LS and LCMV solutions as the reference standards. It is found that the reciprocity algorithms show good performance comparable to the LCMV and LS solutions. Comparing the 128 and 256 electrode cases, we found that use of greater electrode density improves focality, directionality, and intensity parameters. The results show that reciprocity principle can be used to quickly determine optimal current injection patterns in TES and help to simplify TES protocols that are consistent with hardware and software availability and with safety constraints. PMID:27303311
Noise assisted pattern fabrication
NASA Astrophysics Data System (ADS)
Roy, Tanushree; Agarwal, V.; Singh, B. P.; Parmananda, P.
2018-04-01
Pre-selected patterns on an n-type Si surface are fabricated by electrochemical etching in the presence of a weak optical signal. The constructive role of noise, namely, stochastic resonance (SR), is exploited for these purposes. SR is a nonlinear phenomenon wherein at an optimal amplitude of noise, the information transfer from weak input sub-threshold signals to the system output is maximal. In the present work, the amplitude of internal noise was systematically regulated by varying the molar concentration of hydrofluoric acid (HF) in the electrolyte. Pattern formation on the substrate for two different amplitudes (25 ± 2 and 11 ± 1 mW) of the optical template (sub-threshold signal) was considered. To quantify the fidelity/quality of pattern formation, the spatial cross-correlation coefficient (CCC) between the constructed pattern and the template of the applied signal was calculated. The maximum CCC is obtained for the pattern formed at an optimal HF concentration, indicating SR. Simulations, albeit using external noise, on a spatial array of coupled FitzHugh-Nagumo oscillators revealed similar results.
Optic disc detection using ant colony optimization
NASA Astrophysics Data System (ADS)
Dias, Marcy A.; Monteiro, Fernando C.
2012-09-01
The retinal fundus images are used in the treatment and diagnosis of several eye diseases, such as diabetic retinopathy and glaucoma. This paper proposes a new method to detect the optic disc (OD) automatically, since knowledge of the OD location is essential for the automatic analysis of retinal images. Ant Colony Optimization (ACO) is an optimization algorithm inspired by the foraging behaviour of some ant species that has been applied in image processing for edge detection. Recently, ACO has been used on fundus images to detect edges and, therefore, to segment the OD and other anatomical retinal structures. We present an algorithm for the detection of the OD in the retina which takes advantage of the Gabor wavelet transform, entropy and the ACO algorithm. Forty retinal images from the DRIVE database were used to evaluate the performance of our method.
Dictionary Indexing of Electron Channeling Patterns.
Singh, Saransh; De Graef, Marc
2017-02-01
The dictionary-based approach to the indexing of diffraction patterns is applied to electron channeling patterns (ECPs). The main ingredients of the dictionary method are introduced, including the generalized forward projector (GFP), the relevant detector model, and a scheme to uniformly sample orientation space using the "cubochoric" representation. The GFP is used to compute an ECP "master" pattern. Derivative-free optimization algorithms, including the Nelder-Mead simplex and bound optimization by quadratic approximation, are used to determine the correct detector parameters and to refine the orientation obtained from the dictionary approach. The indexing method is applied to poly-silicon and shows excellent agreement with the calibrated values. Finally, it is shown that the method results in a mean disorientation error of 1.0° with 0.5° SD for a range of detector parameters.
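A minimal sketch of the refinement step, assuming a hypothetical forward projector simulate_ecp and using the normalized dot product as the similarity metric with SciPy's Nelder-Mead optimizer (one of the derivative-free methods named above). The toy pattern model and parameters are placeholders, not the actual master-pattern forward model.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_ecp(params, shape=(64, 64)):
    """Hypothetical stand-in for the forward projector: renders a pattern from
    detector parameters (pattern-center x, y and a detector-distance scale)."""
    pcx, pcy, dist = params
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    r = np.hypot(x - pcx, y - pcy) / max(dist, 1e-6)
    return np.cos(3.0 * r)                      # toy band-like intensity model

def neg_dot_product(params, experimental):
    """Negative normalized dot product between experimental and simulated patterns."""
    sim = simulate_ecp(params).ravel()
    exp = experimental.ravel()
    return -np.dot(sim, exp) / (np.linalg.norm(sim) * np.linalg.norm(exp))

# 'Experimental' pattern generated with known parameters, then refined from a coarse guess,
# mimicking the dictionary-then-refinement workflow (illustrative only).
true_params = np.array([30.0, 34.0, 12.0])
experimental = simulate_ecp(true_params) + 0.05 * np.random.default_rng(0).standard_normal((64, 64))

res = minimize(neg_dot_product, x0=[29.0, 35.0, 11.0], args=(experimental,),
               method="Nelder-Mead", options={"xatol": 1e-3, "fatol": 1e-6})
print("refined detector parameters:", np.round(res.x, 2))
```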
Neural mechanism of optimal limb coordination in crustacean swimming
Zhang, Calvin; Guy, Robert D.; Mulloney, Brian; Zhang, Qinghai; Lewis, Timothy J.
2014-01-01
A fundamental challenge in neuroscience is to understand how biologically salient motor behaviors emerge from properties of the underlying neural circuits. Crayfish, krill, prawns, lobsters, and other long-tailed crustaceans swim by rhythmically moving limbs called swimmerets. Over the entire biological range of animal size and paddling frequency, movements of adjacent swimmerets maintain an approximate quarter-period phase difference with the more posterior limbs leading the cycle. We use a computational fluid dynamics model to show that this frequency-invariant stroke pattern is the most effective and mechanically efficient paddling rhythm across the full range of biologically relevant Reynolds numbers in crustacean swimming. We then show that the organization of the neural circuit underlying swimmeret coordination provides a robust mechanism for generating this stroke pattern. Specifically, the wave-like limb coordination emerges robustly from a combination of the half-center structure of the local central pattern generating circuits (CPGs) that drive the movements of each limb, the asymmetric network topology of the connections between local CPGs, and the phase response properties of the local CPGs, which we measure experimentally. Thus, the crustacean swimmeret system serves as a concrete example in which the architecture of a neural circuit leads to optimal behavior in a robust manner. Furthermore, we consider all possible connection topologies between local CPGs and show that the natural connectivity pattern generates the biomechanically optimal stroke pattern most robustly. Given the high metabolic cost of crustacean swimming, our results suggest that natural selection has pushed the swimmeret neural circuit toward a connection topology that produces optimal behavior. PMID:25201976
DOE Office of Scientific and Technical Information (OSTI.GOV)
DuPont, Bryony; Cagan, Jonathan; Moriarty, Patrick
This paper presents a system of modeling advances that can be applied in the computational optimization of wind plants. These modeling advances include accurate cost and power modeling, partial wake interaction, and the effects of varying atmospheric stability. To validate the use of this advanced modeling system, it is employed within an Extended Pattern Search (EPS)-Multi-Agent System (MAS) optimization approach for multiple wind scenarios. The wind farm layout optimization problem involves optimizing the position and size of wind turbines such that the aerodynamic effects of upstream turbines are reduced, which increases the effective wind speed and resultant power at each turbine. The EPS-MAS optimization algorithm employs a profit objective, and an overarching search determines individual turbine positions, with a concurrent EPS-MAS determining the optimal hub height and rotor diameter for each turbine. Two wind cases are considered: (1) constant, unidirectional wind, and (2) three discrete wind speeds and varying wind directions, each of which have a probability of occurrence. Results show the advantages of applying the series of advanced models compared to previous application of an EPS with less advanced models to wind farm layout optimization, and imply best practices for computational optimization of wind farms with improved accuracy.
Investigation of Natural Gas Fugitive Leak Detection Using an Unmanned Aerial Vehicle
NASA Astrophysics Data System (ADS)
Yang, S.; Talbot, R. W.; Frish, M. B.; Golston, L.; Aubut, N. F.; Zondlo, M. A.
2017-12-01
The U.S. is now the world's largest natural gas producer, and methane (CH4) is the main component of natural gas. About 2% of the CH4 is lost through fugitive leaks. This research is under the DOE Methane Observation Networks with Innovative Technology to Obtain Reductions (MONITOR) program of ARPA-E. Our sentry measurement system is composed of four state-of-the-art technologies centered around the RMLD™ (Remote Methane Leak Detector). An open-path RMLD™ measures the column-integrated CH4 concentration, which incorporates fluctuations in the vertical CH4 distribution. Based on backscatter tunable diode laser absorption spectroscopy and small unmanned aerial vehicles, the sentry system can autonomously, consistently and cost-effectively monitor and quantify CH4 leakage from sites associated with natural gas production. This system provides an advanced capability for detecting leaks at hard-to-access sites (e.g., wellheads) compared to traditional manual methods. Automated leak detection and reporting algorithms combined with a wireless data link enable real-time reporting of leak information. Early data were gathered to set up and test the prototype system, and to optimize the leak localization and calculation strategies. The flight pattern is based on a raster scan which can generate interpolated CH4 concentration maps. The localization and quantification algorithms can be derived from the plume images combined with wind vectors. Currently, the localization algorithm is accurate to about 2 m and the flux calculation algorithm is accurate to within a factor of 2. This study places particular emphasis on flux quantification. The data collected at the Colorado and Houston test fields were processed, and the correlation between flux and other parameters analyzed. Higher wind speeds and lower wind variation are preferred to optimize flux estimation. Eventually, this system will supply an enhanced detection capability to significantly reduce fugitive CH4 emissions in the natural gas industry.
Structural damage identification using an enhanced thermal exchange optimization algorithm
NASA Astrophysics Data System (ADS)
Kaveh, A.; Dadras, A.
2018-03-01
The recently developed thermal exchange optimization (TEO) algorithm is enhanced and applied to a damage detection problem. An offline parameter-tuning approach is used to set the internal parameters of the TEO, resulting in the enhanced thermal exchange optimization (ETEO) algorithm. The damage detection problem is defined as an inverse problem, and ETEO is applied to a wide range of structures. Several scenarios with noisy and noise-free modal data are tested, and the locations and extents of damage are identified with good accuracy.
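For readers unfamiliar with how damage detection is posed as an inverse problem, the sketch below shows a typical formulation: per-element stiffness reduction factors are sought that minimize the misfit between measured and model natural frequencies. The structural model (a clamped spring chain) and the random-search loop that stands in for the TEO/ETEO metaheuristic are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

def assemble_stiffness(k_elems, damage):
    """Global stiffness of a clamped spring chain; 'damage' holds per-element
    stiffness reduction factors in [0, 1] (0 = intact)."""
    n = len(k_elems)
    K = np.zeros((n + 1, n + 1))
    for e, (k, d) in enumerate(zip(k_elems, damage)):
        K[e:e + 2, e:e + 2] += k * (1.0 - d) * np.array([[1, -1], [-1, 1]])
    return K[1:, 1:]                              # clamp the first DOF

def natural_freqs(K, M):
    return np.sqrt(np.clip(eigh(K, M, eigvals_only=True), 0, None))

def damage_cost(damage, k_elems, M, freqs_measured):
    """Misfit between 'measured' and model natural frequencies."""
    f = natural_freqs(assemble_stiffness(k_elems, damage), M)
    return float(np.sum((f - freqs_measured) ** 2))

# Hypothetical 5-element chain; element 2 has lost 30% of its stiffness.
k_elems = np.full(5, 1e6)
M = np.eye(5) * 10.0
true_damage = np.array([0.0, 0.0, 0.3, 0.0, 0.0])
freqs_measured = natural_freqs(assemble_stiffness(k_elems, true_damage), M)

# Generic random-search stand-in for the TEO/ETEO metaheuristic loop.
rng = np.random.default_rng(0)
best = np.zeros(5)
best_cost = damage_cost(best, k_elems, M, freqs_measured)
for _ in range(5000):
    cand = np.clip(best + rng.normal(0, 0.05, 5), 0, 1)
    c = damage_cost(cand, k_elems, M, freqs_measured)
    if c < best_cost:
        best, best_cost = cand, c
print("estimated damage factors:", np.round(best, 3))
```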
Si, Lei; Wang, Zhongbin; Liu, Xinhua; Tan, Chao; Liu, Ze; Xu, Jing
2016-01-01
Shearers play an important role in the fully mechanized coal mining face, and accurately identifying their cutting pattern is very helpful for improving the automation level of shearers and ensuring the safety of coal mining. The least squares support vector machine (LSSVM) has been proven to offer strong potential in prediction and classification problems, particularly when an appropriate meta-heuristic algorithm is employed to determine the values of its two parameters. However, these meta-heuristic algorithms have the drawbacks of being hard to understand and of converging slowly to the global optimal solution. In this paper, an improved fly optimization algorithm (IFOA) is presented to optimize the parameters of the LSSVM, and the LSSVM coupled with the IFOA (IFOA-LSSVM) is used to identify the shearer cutting pattern. The vibration acceleration signals of five cutting patterns were collected, and the special state features were extracted based on ensemble empirical mode decomposition (EEMD) and the kernel function. Examples of the IFOA-LSSVM model are presented, and the results are compared in detail with the LSSVM, PSO-LSSVM, GA-LSSVM and FOA-LSSVM models. The comparison results indicate that the proposed approach is feasible, efficient and outperforms the others. Finally, an industrial application example at the coal mining face is presented to demonstrate the effectiveness of the proposed system. PMID:26771615
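The following sketch shows the LSSVM core (Suykens formulation with an RBF kernel) whose two parameters, the regularization constant gamma and the kernel width sigma, are what such meta-heuristics tune. A random search over (gamma, sigma) on toy two-class data stands in for the IFOA here; the EEMD feature extraction and the five-class shearer signals are not reproduced.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

class LSSVM:
    """Binary least-squares SVM (Suykens formulation) with an RBF kernel."""
    def __init__(self, gamma=10.0, sigma=1.0):
        self.gamma, self.sigma = gamma, sigma

    def fit(self, X, y):                      # y in {-1, +1}
        n = len(y)
        Omega = np.outer(y, y) * rbf_kernel(X, X, self.sigma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:], A[1:, 0] = y, y
        A[1:, 1:] = Omega + np.eye(n) / self.gamma
        sol = np.linalg.solve(A, np.r_[0.0, np.ones(n)])
        self.b, self.alpha, self.X, self.y = sol[0], sol[1:], X, y
        return self

    def predict(self, Xnew):
        K = rbf_kernel(Xnew, self.X, self.sigma)
        return np.sign(K @ (self.alpha * self.y) + self.b)

# Random search stands in for the IFOA when tuning (gamma, sigma) on toy data.
rng = np.random.default_rng(1)
X = np.r_[rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))]
y = np.r_[-np.ones(50), np.ones(50)]
best = max(((g, s, np.mean(LSSVM(g, s).fit(X, y).predict(X) == y))
            for g, s in zip(10 ** rng.uniform(-1, 3, 30),
                            10 ** rng.uniform(-1, 1, 30))),
           key=lambda t: t[2])
print(f"gamma={best[0]:.2f}, sigma={best[1]:.2f}, train acc={best[2]:.2f}")
```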
Time and frequency constrained sonar signal design for optimal detection of elastic objects.
Hamschin, Brandon; Loughlin, Patrick J
2013-04-01
In this paper, the task of model-based transmit signal design for optimizing detection is considered. Building on past work that designs the spectral magnitude for optimizing detection, two methods for synthesizing minimum duration signals with this spectral magnitude are developed. The methods are applied to the design of signals that are optimal for detecting elastic objects in the presence of additive noise and self-noise. Elastic objects are modeled as linear time-invariant systems with known impulse responses, while additive noise (e.g., ocean noise or receiver noise) and acoustic self-noise (e.g., reverberation or clutter) are modeled as stationary Gaussian random processes with known power spectral densities. The first approach finds the waveform that preserves the optimal spectral magnitude while achieving the minimum temporal duration. The second approach yields a finite-length time-domain sequence by maximizing temporal energy concentration, subject to the constraint that the spectral magnitude is close (in a least-squares sense) to the optimal spectral magnitude. The two approaches are then connected analytically, showing the former is a limiting case of the latter. Simulation examples that illustrate the theory are accompanied by discussions that address practical applicability and how one might satisfy the need for target and environmental models in the real world.
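One standard way to realize the first kind of synthesis, building a short (minimum-energy-delay) waveform from a prescribed spectral magnitude, is a minimum-phase construction via the real cepstrum. The sketch below illustrates that construction on an assumed band-pass magnitude; it is offered only as a related textbook technique, not as a reproduction of the authors' derivation or constraints.

```python
import numpy as np

def minimum_phase_from_magnitude(mag):
    """Construct a minimum-phase time sequence whose DFT magnitude matches
    'mag' (sampled on a full FFT grid), via the real cepstrum:
      1) c = IFFT(log|H|)
      2) fold the cepstrum (double positive quefrencies, zero negative ones)
      3) h_min = IFFT(exp(FFT(folded c)))
    """
    mag = np.asarray(mag, float)
    n = len(mag)
    log_mag = np.log(np.maximum(mag, 1e-12))   # guard against log(0)
    cep = np.fft.ifft(log_mag).real
    fold = np.zeros(n)
    fold[0] = cep[0]
    fold[1:n // 2] = 2 * cep[1:n // 2]
    if n % 2 == 0:
        fold[n // 2] = cep[n // 2]
    return np.fft.ifft(np.exp(np.fft.fft(fold))).real

# Hypothetical band-pass "optimal" magnitude centered at 0.2 of the sample rate.
n = 256
f = np.fft.fftfreq(n)
mag = np.exp(-((np.abs(f) - 0.2) ** 2) / (2 * 0.03 ** 2))
h = minimum_phase_from_magnitude(mag)
err = np.max(np.abs(np.abs(np.fft.fft(h)) - mag))
print(f"max magnitude mismatch: {err:.3e}")
```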
Towards high efficiency heliostat fields
NASA Astrophysics Data System (ADS)
Arbes, Florian; Wöhrbach, Markus; Gebreiter, Daniel; Weinrebe, Gerhard
2017-06-01
CSP power plants have great potential to contribute substantially to the world's energy supply. To realize this potential, cost reductions are required for future projects. Heliostat field layout optimization offers a great opportunity to improve field efficiency. Field efficiency primarily depends on the positions of the heliostats around the tower, commonly known as the heliostat field layout; heliostat shape also influences efficiency. Improvements to optical efficiency result in electricity cost reductions without adding any extra technical complexity. Due to computational challenges, heliostat fields are often arranged in patterns. The mathematical models of the radial staggered or spiral patterns are based on two parameters and thus lead to uniform patterns. The optical efficiency of a heliostat field does not change uniformly with distance to the tower, and it even differs between the northern and southern parts of the field. A fixed pattern is therefore not optimal in many parts of the heliostat field, especially for large-scale fields. In this paper, two methods are described that allow the field density to be adapted to spatially varying field efficiencies. New software for large-scale heliostat field evaluation is presented; it allows fast optimization of several parameters of the pattern modification routines. It was used to design a heliostat field with 23,000 heliostats, which is currently planned for a site in South Africa.
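For reference, the sketch below generates the classic two-parameter radial-staggered pattern the abstract alludes to, with a hypothetical density hook that stretches the radial spacing with distance from the tower. The paper's actual density-modification routines and efficiency models are not reproduced, and all spacing parameters are illustrative assumptions.

```python
import numpy as np

def radial_staggered_layout(n_rings=20, r0=100.0, heliostat_width=12.0,
                            radial_gap=1.4, azimuthal_gap=1.3,
                            densify=lambda r: 1.0 + 0.002 * r):
    """Generate (x, y) heliostat positions on staggered rings around a tower
    at the origin. 'radial_gap' and 'azimuthal_gap' are the two classic
    pattern parameters; 'densify' is a hypothetical hook that widens the
    radial spacing with distance to mimic non-uniform field density."""
    positions, r = [], r0
    for ring in range(n_rings):
        circumference = 2 * np.pi * r
        n_helio = int(circumference // (heliostat_width * azimuthal_gap))
        offset = 0.5 * (ring % 2)                 # stagger alternate rings
        for k in range(n_helio):
            theta = 2 * np.pi * (k + offset) / n_helio
            positions.append((r * np.cos(theta), r * np.sin(theta)))
        r += heliostat_width * radial_gap * densify(r)
    return np.array(positions)

field = radial_staggered_layout()
print(f"{len(field)} heliostats, outermost ring radius "
      f"{np.hypot(field[:, 0], field[:, 1]).max():.0f} m")
```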