NASA Astrophysics Data System (ADS)
Shahid, Nauman; Perraudin, Nathanael; Kalofolias, Vassilis; Puy, Gilles; Vandergheynst, Pierre
2016-06-01
Mining useful clusters from high dimensional data has received significant attention of the computer vision and pattern recognition community in the recent years. Linear and non-linear dimensionality reduction has played an important role to overcome the curse of dimensionality. However, often such methods are accompanied with three different problems: high computational complexity (usually associated with the nuclear norm minimization), non-convexity (for matrix factorization methods) and susceptibility to gross corruptions in the data. In this paper we propose a principal component analysis (PCA) based solution that overcomes these three issues and approximates a low-rank recovery method for high dimensional datasets. We target the low-rank recovery by enforcing two types of graph smoothness assumptions, one on the data samples and the other on the features by designing a convex optimization problem. The resulting algorithm is fast, efficient and scalable for huge datasets with O(nlog(n)) computational complexity in the number of data samples. It is also robust to gross corruptions in the dataset as well as to the model parameters. Clustering experiments on 7 benchmark datasets with different types of corruptions and background separation experiments on 3 video datasets show that our proposed model outperforms 10 state-of-the-art dimensionality reduction models. Our theoretical analysis proves that the proposed model is able to recover approximate low-rank representations with a bounded error for clusterable data.
A Fast and Robust Text Spotter
Qin, Siyang; Manduchi, Roberto
2016-01-01
We introduce an algorithm for text detection and localization (“spotting”) that is computationally efficient and produces state-of-the-art results. Our system uses multi-channel MSERs to detect a large number of promising regions, then subsamples these regions using a clustering approach. Representatives of region clusters are binarized and then passed on to a deep network. A final line grouping stage forms word-level segments. On the ICDAR 2011 and 2015 benchmarks, our algorithm obtains an F-score of 82% and 83%, respectively, at a computational cost of 1.2 seconds per frame. We also introduce a version that is three times as fast, with only a slight reduction in performance.
Robust and fast learning for fuzzy cerebellar model articulation controllers.
Su, Shun-Feng; Lee, Zne-Jung; Wang, Yan-Ping
2006-02-01
In this paper, the online learning capability and the robust property for the learning algorithms of cerebellar model articulation controllers (CMAC) are discussed. Both the traditional CMAC and fuzzy CMAC are considered. In the study, we find a way of embeding the idea of M-estimators into the CMAC learning algorithms to provide the robust property against outliers existing in training data. An annealing schedule is also adopted for the learning constant to fulfill robust learning. In the study, we also extend our previous work of adopting the credit assignment idea into CMAC learning to provide fast learning for fuzzy CMAC. From demonstrated examples, it is clearly evident that the proposed algorithm indeed has faster and more robust learning. In our study, we then employ the proposed CMAC for an online learning control scheme used in the literature. In the implementation, we also propose to use a tuning parameter instead of a fixed constant to achieve both online learning and fine-tuning effects. The simulation results indeed show the effectiveness of the proposed approaches. PMID:16468579
Fast and robust quantum computation with ionic Wigner crystals
Baltrusch, J. D.; Negretti, A.; Taylor, J. M.; Calarco, T.
2011-04-15
We present a detailed analysis of the modulated-carrier quantum phase gate implemented with Wigner crystals of ions confined in Penning traps. We elaborate on a recent scheme, proposed by two of the authors, to engineer two-body interactions between ions in such crystals. We analyze the situation in which the cyclotron ({omega}{sub c}) and the crystal rotation ({omega}{sub r}) frequencies do not fulfill the condition {omega}{sub c}=2{omega}{sub r}. It is shown that even in the presence of the magnetic field in the rotating frame the many-body (classical) Hamiltonian describing small oscillations from the ion equilibrium positions can be recast in canonical form. As a consequence, we are able to demonstrate that fast and robust two-qubit gates are achievable within the current experimental limitations. Moreover, we describe a realization of the state-dependent sign-changing dipole forces needed to realize the investigated quantum computing scheme.
Reasoning with Vectors: A Continuous Model for Fast Robust Inference
Widdows, Dominic; Cohen, Trevor
2015-01-01
This paper describes the use of continuous vector space models for reasoning with a formal knowledge base. The practical significance of these models is that they support fast, approximate but robust inference and hypothesis generation, which is complementary to the slow, exact, but sometimes brittle behavior of more traditional deduction engines such as theorem provers. The paper explains the way logical connectives can be used in semantic vector models, and summarizes the development of Predication-based Semantic Indexing, which involves the use of Vector Symbolic Architectures to represent the concepts and relationships from a knowledge base of subject-predicate-object triples. Experiments show that the use of continuous models for formal reasoning is not only possible, but already demonstrably effective for some recognized informatics tasks, and showing promise in other traditional problem areas. Examples described in this paper include: predicting new uses for existing drugs in biomedical informatics; removing unwanted meanings from search results in information retrieval and concept navigation; type-inference from attributes; comparing words based on their orthography; and representing tabular data, including modelling numerical values. The algorithms and techniques described in this paper are all publicly released and freely available in the Semantic Vectors open-source software package.1 PMID:26582967
Fast and robust microseismic event detection using very fast simulated annealing
NASA Astrophysics Data System (ADS)
Velis, Danilo R.; Sabbione, Juan I.; Sacchi, Mauricio D.
2013-04-01
The study of microseismic data has become an essential tool in many geoscience fields, including oil reservoir geophysics, mining and CO2 sequestration. In hydraulic fracturing, microseismicity studies permit the characterization and monitoring of the reservoir dynamics in order to optimize the production and the fluid injection process itself. As the number of events is usually large and the signal-to-noise ratio is in general very low, fast, automated, and robust detection algorithms are required for most applications. Also, real-time functionality is commonly needed to control the fluid injection in the field. Generally, events are located by means of grid search algorithms that rely on some approximate velocity model. These techniques are very effective and accurate, but computationally intensive when dealing with large three or four-dimensional grids. Here, we present a fast and robust method that allows to automatically detect and pick an event in 3C microseismic data without any input information about the velocity model. The detection is carried out by means of a very fast simulated annealing (VFSA) algorithm. To this end, we define an objective function that measures the energy of a potential microseismic event along the multichannel signal. This objective function is based on the stacked energy of the envelope of the signals calculated within a predefined narrow time window that depends on the source position, receivers geometry and velocity. Once an event has been detected, the source location can be estimated, in a second stage, by inverting the corresponding traveltimes using a standard technique, which would naturally require some knowledge of the velocity model. Since the proposed technique focuses on the detection of the microseismic events only, the velocity model is not required, leading to a fast algorithm that carries out the detection in real-time. Besides, the strategy is applicable to data with very low signal-to-noise ratios, for it relies
Fast swept-volume distance for robust collision detection
Xavier, P.G.
1997-04-01
The need for collision detection arises in several robotics areas, including motion-planning, online collision avoidance, and simulation. At the heart of most current methods are algorithms for interference detection and/or distance computation. A few recent algorithms and implementations are very fast, but to use them for accurate collision detection, very small step sizes can be necessary, reducing their effective efficiency. We present a fast, implemented technique for doing exact distance computation and interference detection for translationally-swept bodies. For rotationally swept bodies, we adapt this technique to improve accuracy, for any given step size, in distance computation and interference detection. We present preliminary experiments that show that the combination of basic and swept-body calculations holds much promise for faster accurate collision detection.
Improvement in fast particle track reconstruction with robust statistics
NASA Astrophysics Data System (ADS)
Aartsen, M. G.; Abbasi, R.; Abdou, Y.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Altmann, D.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Bechet, S.; Becker Tjus, J.; Becker, K.-H.; Benabderrahmane, M. L.; BenZvi, S.; Berghaus, P.; Berley, D.; Bernardini, E.; Bernhard, A.; Besson, D. Z.; Binder, G.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohaichuk, S.; Bohm, C.; Bose, D.; Böser, S.; Botner, O.; Brayeur, L.; Bretz, H.-P.; Brown, A. M.; Bruijn, R.; Brunner, J.; Carson, M.; Casey, J.; Casier, M.; Chirkin, D.; Christov, A.; Christy, B.; Clark, K.; Clevermann, F.; Coenders, S.; Cohen, S.; Cowen, D. F.; Cruz Silva, A. H.; Danninger, M.; Daughhetee, J.; Davis, J. C.; Day, M.; De Clercq, C.; De Ridder, S.; Desiati, P.; de Vries, K. D.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; Dunkman, M.; Eagan, R.; Eberhardt, B.; Eisch, J.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Fedynitch, A.; Feintzeig, J.; Feusels, T.; Filimonov, K.; Finley, C.; Fischer-Wasels, T.; Flis, S.; Franckowiak, A.; Frantzen, K.; Fuchs, T.; Gaisser, T. K.; Gallagher, J.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Golup, G.; Gonzalez, J. G.; Goodman, J. A.; Góra, D.; Grandmont, D. T.; Grant, D.; Groß, A.; Ha, C.; Haj Ismail, A.; Hallen, P.; Hallgren, A.; Halzen, F.; Hanson, K.; Heereman, D.; Heinen, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Homeier, A.; Hoshina, K.; Huelsnitz, W.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Ishihara, A.; Jacobi, E.; Jacobsen, J.; Jagielski, K.; Japaridze, G. S.; Jero, K.; Jlelati, O.; Kaminsky, B.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kiryluk, J.; Kläs, J.; Klein, S. R.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kopper, C.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Krasberg, M.; Krings, K.; Kroll, G.; Kunnen, J.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Landsman, H.; Larson, M. J.; Lesiak-Bzdak, M.; Leuermann, M.; Leute, J.; Lünemann, J.; Macías, O.; Madsen, J.; Maggi, G.; Maruyama, R.; Mase, K.; Matis, H. S.; McNally, F.; Meagher, K.; Merck, M.; Meures, T.; Miarecki, S.; Middell, E.; Milke, N.; Miller, J.; Mohrmann, L.; Montaruli, T.; Morse, R.; Nahnhauer, R.; Naumann, U.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke, A.; Odrowski, S.; Olivas, A.; Omairat, A.; O'Murchadha, A.; Paul, L.; Pepper, J. A.; Pérez de los Heros, C.; Pfendner, C.; Pieloth, D.; Pinat, E.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Rädel, L.; Rameez, M.; Rawlins, K.; Redl, P.; Reimann, R.; Resconi, E.; Rhode, W.; Ribordy, M.; Richman, M.; Riedel, B.; Rodrigues, J. P.; Rott, C.; Ruhe, T.; Ruzybayev, B.; Ryckbosch, D.; Saba, S. M.; Salameh, T.; Sander, H.-G.; Santander, M.; Sarkar, S.; Schatto, K.; Scheriau, F.; Schmidt, T.; Schmitz, M.; Schoenen, S.; Schöneberg, S.; Schönwald, A.; Schukraft, A.; Schulte, L.; Schulz, O.; Seckel, D.; Sestayo, Y.; Seunarine, S.; Shanidze, R.; Sheremata, C.; Smith, M. W. E.; Soldin, D.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stasik, A.; Stezelberger, T.; Stokstad, R. G.; Stößl, A.; Strahler, E. A.; Ström, R.; Sullivan, G. W.; Taavola, H.; Taboada, I.; Tamburro, A.; Tepe, A.; Ter-Antonyan, S.; Tešić, G.; Tilav, S.; Toale, P. A.; Toscano, S.; Unger, E.; Usner, M.; Vallecorsa, S.; van Eijndhoven, N.; Van Overloop, A.; van Santen, J.; Vehring, M.; Voge, M.; Vraeghe, M.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Weaver, Ch.; Wellons, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Williams, D. R.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; Ziemann, J.; Zierke, S.; Zoll, M.; Recht, B.; Ré, C.
2014-02-01
The IceCube project has transformed 1 km3 of deep natural Antarctic ice into a Cherenkov detector. Muon neutrinos are detected and their direction is inferred by mapping the light produced by the secondary muon track inside the volume instrumented with photomultipliers. Reconstructing the muon track from the observed light is challenging due to noise, light scattering in the ice medium, and the possibility of simultaneously having multiple muons inside the detector, resulting from the large flux of cosmic ray muons. This paper describes work on two problems: (1) the track reconstruction problem, in which, given a set of observations, the goal is to recover the track of a muon; and (2) the coincident event problem, which is to determine how many muons are active in the detector during a time window. Rather than solving these problems by developing more complex physical models that are applied at later stages of the analysis, our approach is to augment the detector's early reconstruction with data filters and robust statistical techniques. These can be implemented at the level of on-line reconstruction and, therefore, improve all subsequent reconstructions. Using the metric of median angular resolution, a standard metric for track reconstruction, we improve the accuracy in the initial reconstruction direction by 13%. We also present improvements in measuring the number of muons in coincident events: we can accurately determine the number of muons 98% of the time.
Fast, accurate, robust and Open Source Brain Extraction Tool (OSBET)
NASA Astrophysics Data System (ADS)
Namias, R.; Donnelly Kehoe, P.; D'Amato, J. P.; Nagel, J.
2015-12-01
The removal of non-brain regions in neuroimaging is a critical task to perform a favorable preprocessing. The skull-stripping depends on different factors including the noise level in the image, the anatomy of the subject being scanned and the acquisition sequence. For these and other reasons, an ideal brain extraction method should be fast, accurate, user friendly, open-source and knowledge based (to allow for the interaction with the algorithm in case the expected outcome is not being obtained), producing stable results and making it possible to automate the process for large datasets. There are already a large number of validated tools to perform this task but none of them meets the desired characteristics. In this paper we introduced an open source brain extraction tool (OSBET), composed of four steps using simple well-known operations such as: optimal thresholding, binary morphology, labeling and geometrical analysis that aims to assemble all the desired features. We present an experiment comparing OSBET with other six state-of-the-art techniques against a publicly available dataset consisting of 40 T1-weighted 3D scans and their corresponding manually segmented images. OSBET gave both: a short duration with an excellent accuracy, getting the best Dice Coefficient metric. Further validation should be performed, for instance, in unhealthy population, to generalize its usage for clinical purposes.
Robust, accurate and fast automatic segmentation of the spinal cord.
De Leener, Benjamin; Kadoury, Samuel; Cohen-Adad, Julien
2014-09-01
Spinal cord segmentation provides measures of atrophy and facilitates group analysis via inter-subject correspondence. Automatizing this procedure enables studies with large throughput and minimizes user bias. Although several automatic segmentation methods exist, they are often restricted in terms of image contrast and field-of-view. This paper presents a new automatic segmentation method (PropSeg) optimized for robustness, accuracy and speed. The algorithm is based on the propagation of a deformable model and is divided into three parts: firstly, an initialization step detects the spinal cord position and orientation using a circular Hough transform on multiple axial slices rostral and caudal to the starting plane and builds an initial elliptical tubular mesh. Secondly, a low-resolution deformable model is propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a local contrast-to-noise adaptation at each iteration. Thirdly, a refinement process and a global deformation are applied on the propagated mesh to provide an accurate segmentation of the spinal cord. Validation was performed in 15 healthy subjects and two patients with spinal cord injury, using T1- and T2-weighted images of the entire spinal cord and on multiecho T2*-weighted images. Our method was compared against manual segmentation and against an active surface method. Results show high precision for all the MR sequences. Dice coefficients were 0.9 for the T1- and T2-weighted cohorts and 0.86 for the T2*-weighted images. The proposed method runs in less than 1min on a normal computer and can be used to quantify morphological features such as cross-sectional area along the whole spinal cord. PMID:24780696
Fast and robust segmentation in the SDO-AIA era
NASA Astrophysics Data System (ADS)
Verbeeck, Cis; Delouille, Véronique; Mampaey, Benjamin; Hochedez, Jean-François; Boyes, David; Barra, Vincent
Solar images from the Atmospheric Imaging Assembly (AIA) aboard the Solar Dynamics Ob-servatory (SDO) will flood the solar physics community with a wealth of information on solar variability, of great importance both in solar physics and in view of Space Weather applica-tions. Obtaining this information, however, requires the ability to automatically process large amounts of data in an objective fashion. In previous work, we have proposed a multi-channel unsupervised spatially-constrained multi-channel fuzzy clustering algorithm (SPoCA) that automatically segments EUV solar images into Active Regions (AR), Coronal Holes (CH), and Quiet Sun (QS). This algorithm will run in near real time on AIA data as part of the SDO Feature Finding Project, a suite of software pipeline modules for automated feature recognition and analysis for the imagery from SDO. After having corrected for the limb brightening effect, SPoCA computes an optimal clustering with respect to the regions of interest using fuzzy logic on a quality criterion to manage the various noises present in the images and the imprecision in the definition of the above regions. Next, the algorithm applies a morphological opening operation, smoothing the cluster edges while preserving their general shape. The process is fast and automatic. A lower size limit is used to distinguish AR from Bright Points. As the algorithm segments the coronal images according to their brightness, it might happen that an AR is detected as several disjoint pieces, if the brightness in between is somewhat lower. Morphological dilation is employed to reconstruct the AR themselves from their constituent pieces. Combining SPoCA's detection of AR, CH, and QS on subsequent images allows automatic tracking and naming of any region of interest. In the SDO software pipeline, SPoCA will auto-matically populate the Heliophysics Events Knowledgebase(HEK) with Active Region events. Further, the algorithm has a huge potential for correct and
Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds
NASA Astrophysics Data System (ADS)
Roynard, X.; Deschaud, J.-E.; Goulette, F.
2016-06-01
Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators in the streets to identify the changes in car locations. In this paper, we propose a method that performs a fast and robust segmentation and classification of urban point clouds, that can be used for change detection. We apply this method to detect the cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds, using elevation images. The interest to work on images is that processing is much faster, proven and robust. However there may be a loss of information in complex 3D cases: for example when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retain the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region-growing using an octree, for the segmentation, and specific descriptors with Random-Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art, and that it gives more robust results in the complex 3D cases.
A Matrix Computation View of the FastMap and RobustMap Dimension Reduction Algorithms
Ostrouchov, George
2009-01-01
Given a set of pairwise object distances and a dimension $k$, FastMap and RobustMap algorithms compute a set of $k$-dimensional coordinates for the objects. These metric space embedding methods implicitly assume a higher-dimensional coordinate representation and are a sequence of translations and orthogonal projections based on a sequence of object pair selections (called pivot pairs). We develop a matrix computation viewpoint of these algorithms that operates on the coordinate representation explicitly using Householder reflections. The resulting Coordinate Mapping Algorithm (CMA) is a fast approximate alternative to truncated principal component analysis (PCA) and it brings the FastMap and RobustMap algorithms into the mainstream of numerical computation where standard BLAS building blocks are used. Motivated by the geometric nature of the embedding methods, we further show that truncated PCA can be computed with CMA by specific pivot pair selections. Describing FastMap, RobustMap, and PCA as CMA computations with different pivot pair choices unifies the methods along a pivot pair selection spectrum. We also sketch connections to the semi-discrete decomposition and the QLP decomposition.
A Fast and Robust Ellipse-Detection Method Based on Sorted Merging
Ren, Guanghui; Zhao, Yaqin; Jiang, Lihui
2014-01-01
A fast and robust ellipse-detection method based on sorted merging is proposed in this paper. This method first represents the edge bitmap approximately with a set of line segments and then gradually merges the line segments into elliptical arcs and ellipses. To achieve high accuracy, a sorted merging strategy is proposed: the merging degrees of line segments/elliptical arcs are estimated, and line segments/elliptical arcs are merged in descending order of the merging degrees, which significantly improves the merging accuracy. During the merging process, multiple properties of ellipses are utilized to filter line segment/elliptical arc pairs, making the method very efficient. In addition, an ellipse-fitting method is proposed that restricts the maximum ratio of the semimajor axis and the semiminor axis, further improving the merging accuracy. Experimental results indicate that the proposed method is robust to outliers, noise, and partial occlusion and is fast enough for real-time applications. PMID:24782661
NASA Technical Reports Server (NTRS)
Ryan, R.
1993-01-01
Robustness is a buzz word common to all newly proposed space systems design as well as many new commercial products. The image that one conjures up when the word appears is a 'Paul Bunyon' (lumberjack design), strong and hearty; healthy with margins in all aspects of the design. In actuality, robustness is much broader in scope than margins, including such factors as simplicity, redundancy, desensitization to parameter variations, control of parameter variations (environments flucation), and operational approaches. These must be traded with concepts, materials, and fabrication approaches against the criteria of performance, cost, and reliability. This includes manufacturing, assembly, processing, checkout, and operations. The design engineer or project chief is faced with finding ways and means to inculcate robustness into an operational design. First, however, be sure he understands the definition and goals of robustness. This paper will deal with these issues as well as the need for the requirement for robustness.
NASA Astrophysics Data System (ADS)
Tilly, David; Ahnesjö, Anders
2015-07-01
A fast algorithm is constructed to facilitate dose calculation for a large number of randomly sampled treatment scenarios, each representing a possible realisation of a full treatment with geometric, fraction specific displacements for an arbitrary number of fractions. The algorithm is applied to construct a dose volume coverage probability map (DVCM) based on dose calculated for several hundred treatment scenarios to enable the probabilistic evaluation of a treatment plan. For each treatment scenario, the algorithm calculates the total dose by perturbing a pre-calculated dose, separately for the primary and scatter dose components, for the nominal conditions. The ratio of the scenario specific accumulated fluence, and the average fluence for an infinite number of fractions is used to perturb the pre-calculated dose. Irregularities in the accumulated fluence may cause numerical instabilities in the ratio, which is mitigated by regularisation through convolution with a dose pencil kernel. Compared to full dose calculations the algorithm demonstrates a speedup factor of ~1000. The comparisons to full calculations show a 99% gamma index (2%/2 mm) pass rate for a single highly modulated beam in a virtual water phantom subject to setup errors during five fractions. The gamma comparison shows a 100% pass rate in a moving tumour irradiated by a single beam in a lung-like virtual phantom. DVCM iso-probability lines computed with the fast algorithm, and with full dose calculation for each of the fractions, for a hypo-fractionated prostate case treated with rotational arc therapy treatment were almost indistinguishable.
Tilly, David; Ahnesjö, Anders
2015-07-21
A fast algorithm is constructed to facilitate dose calculation for a large number of randomly sampled treatment scenarios, each representing a possible realisation of a full treatment with geometric, fraction specific displacements for an arbitrary number of fractions. The algorithm is applied to construct a dose volume coverage probability map (DVCM) based on dose calculated for several hundred treatment scenarios to enable the probabilistic evaluation of a treatment plan.For each treatment scenario, the algorithm calculates the total dose by perturbing a pre-calculated dose, separately for the primary and scatter dose components, for the nominal conditions. The ratio of the scenario specific accumulated fluence, and the average fluence for an infinite number of fractions is used to perturb the pre-calculated dose. Irregularities in the accumulated fluence may cause numerical instabilities in the ratio, which is mitigated by regularisation through convolution with a dose pencil kernel.Compared to full dose calculations the algorithm demonstrates a speedup factor of ~1000. The comparisons to full calculations show a 99% gamma index (2%/2 mm) pass rate for a single highly modulated beam in a virtual water phantom subject to setup errors during five fractions. The gamma comparison shows a 100% pass rate in a moving tumour irradiated by a single beam in a lung-like virtual phantom. DVCM iso-probability lines computed with the fast algorithm, and with full dose calculation for each of the fractions, for a hypo-fractionated prostate case treated with rotational arc therapy treatment were almost indistinguishable. PMID:26118844
a Fast and Robust Algorithm for Road Edges Extraction from LIDAR Data
NASA Astrophysics Data System (ADS)
Qiu, Kaijin; Sun, Kai; Ding, Kou; Shu, Zhen
2016-06-01
Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. How to extract various road edges fast and robustly is a challenging task. In this paper, we present a fast and robust algorithm for the automatic road edges extraction from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most roads around edges have difference in elevation and road edges with pavement are seen in two different planes. In our algorithm, we firstly extract a rough plane based on RANSAC algorithm, and then multiple refined planes which only contains pavement are extracted from the rough plane. The road edges are extracted based on these refined planes. In practice, there is a serious problem that the rough and refined planes usually extracted badly due to rough roads and different density of point cloud. To eliminate the influence of rough roads, the technology which is similar with the difference of DSM (digital surface model) and DTM (digital terrain model) is used, and we also propose a method which adjust the point clouds to a similar density to eliminate the influence of different density. Experiments show the validities of the proposed method with multiple datasets (e.g. urban road, highway, and some rural road). We use the same parameters through the experiments and our algorithm can achieve real-time processing speeds.
Robust, Scalable, and Fast Bootstrap Method for Analyzing Large Scale Data
NASA Astrophysics Data System (ADS)
Basiri, Shahab; Ollila, Esa; Koivunen, Visa
2016-02-01
In this paper we address the problem of performing statistical inference for large scale data sets i.e., Big Data. The volume and dimensionality of the data may be so high that it cannot be processed or stored in a single computing node. We propose a scalable, statistically robust and computationally efficient bootstrap method, compatible with distributed processing and storage systems. Bootstrap resamples are constructed with smaller number of distinct data points on multiple disjoint subsets of data, similarly to the bag of little bootstrap method (BLB) [1]. Then significant savings in computation is achieved by avoiding the re-computation of the estimator for each bootstrap sample. Instead, a computationally efficient fixed-point estimation equation is analytically solved via a smart approximation following the Fast and Robust Bootstrap method (FRB) [2]. Our proposed bootstrap method facilitates the use of highly robust statistical methods in analyzing large scale data sets. The favorable statistical properties of the method are established analytically. Numerical examples demonstrate scalability, low complexity and robust statistical performance of the method in analyzing large data sets.
A robust and fast line segment detector based on top-down smaller eigenvalue analysis
NASA Astrophysics Data System (ADS)
Liu, Dong; Wang, Yongtao; Tang, Zhi; Lu, Xiaoqing
2014-01-01
In this paper, we propose a robust and fast line segment detector, which achieves accurate results with a controlled number of false detections and requires no parameter tuning. It consists of three steps: first, we propose a novel edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input image; second, we propose a top-down scheme based on smaller eigenvalue analysis to extract line segments within each obtained edge segment; third, we employ Desolneux et al.'s method to reject false detections. Experiments demonstrate that it is very efficient and more robust than two state of the art methods—LSD and EDLines.
Sawaya, Nicolas P D; Huh, Joonsuk; Fujita, Takatoshi; Saikin, Semion K; Aspuru-Guzik, Alán
2015-03-11
Chlorosomes are efficient light-harvesting antennas containing up to hundreds of thousands of bacteriochlorophyll molecules. With massively parallel computer hardware, we use a nonperturbative stochastic Schrödinger equation, while including an atomistically derived spectral density, to study excitonic energy transfer in a realistically sized chlorosome model. We find that fast short-range delocalization leads to robust long-range transfer due to the antennae's concentric-roll structure. Additionally, we discover anomalous behavior arising from different initial conditions, and outline general considerations for simulating excitonic systems on the nanometer to micrometer scale. PMID:25694170
Ghiglia, D.C.; Romero, L.A. )
1994-01-01
Two-dimensional (2D) phase unwrapping continues to find applications in a wide variety of scientific and engineering areas including optical and microwave interferometry, adaptive optics, compensated imaging, and synthetic-aperture-radar phase correction, and image processing. We have developed a robust method (not based on any path-following scheme) for unwrapping 2D phase principal values (in a least-squares sense) by using fast cosine transforms. If the 2D phase values are associated with a 2D weighting, the fast transforms can still be used in iterative methods for solving the weighted unwrapping problem. Weighted unwrapping can be used to isolate inconsistent regions (i.e., phase shear) in an elegant fashion.
Real-Time Robust Tracking for Motion Blur and Fast Motion via Correlation Filters.
Xu, Lingyun; Luo, Haibo; Hui, Bin; Chang, Zheng
2016-01-01
Visual tracking has extensive applications in intelligent monitoring and guidance systems. Among state-of-the-art tracking algorithms, Correlation Filter methods perform favorably in robustness, accuracy and speed. However, it also has shortcomings when dealing with pervasive target scale variation, motion blur and fast motion. In this paper we proposed a new real-time robust scheme based on Kernelized Correlation Filter (KCF) to significantly improve performance on motion blur and fast motion. By fusing KCF and STC trackers, our algorithm also solve the estimation of scale variation in many scenarios. We theoretically analyze the problem for CFs towards motions and utilize the point sharpness function of the target patch to evaluate the motion state of target. Then we set up an efficient scheme to handle the motion and scale variation without much time consuming. Our algorithm preserves the properties of KCF besides the ability to handle special scenarios. In the end extensive experimental results on benchmark of VOT datasets show our algorithm performs advantageously competed with the top-rank trackers. PMID:27618046
Towards binary robust fast features using the comparison of pixel blocks
NASA Astrophysics Data System (ADS)
Oszust, Mariusz
2016-03-01
Binary descriptors have become popular in many vision-based applications, as a fast and efficient replacement of floating point, heavy counterparts. They achieve a short computation time and low memory footprint due to many simplifications. Consequently, their robustness against a variety of image transformations is lowered, since they rely on pairwise pixel intensity comparisons. This observation has led to the emergence of techniques performing tests on intensities of predefined pixel regions. These approaches, despite a visible improvement in the quality of the obtained results, suffer from a long computation time, and their patch partitioning strategies produce long binary strings requiring the use of salient bit detection techniques. In this paper, a novel binary descriptor is proposed to address these shortcomings. The approach selects image patches around a keypoint, divides them into a small number of pixel blocks and performs binary tests on gradients which are determined for the blocks. The size of each patch depends on the keypoint’s scale. The robustness and distinctiveness of the descriptor are evaluated according to five demanding image benchmarks. The experimental results show that the proposed approach is faster to compute, produces a short binary string and offers a better performance than state-of-the-art binary and floating point descriptors.
SERF: A Simple, Effective, Robust, and Fast Image Super-Resolver From Cascaded Linear Regression.
Hu, Yanting; Wang, Nannan; Tao, Dacheng; Gao, Xinbo; Li, Xuelong
2016-09-01
Example learning-based image super-resolution techniques estimate a high-resolution image from a low-resolution input image by relying on high- and low-resolution image pairs. An important issue for these techniques is how to model the relationship between high- and low-resolution image patches: most existing complex models either generalize hard to diverse natural images or require a lot of time for model training, while simple models have limited representation capability. In this paper, we propose a simple, effective, robust, and fast (SERF) image super-resolver for image super-resolution. The proposed super-resolver is based on a series of linear least squares functions, namely, cascaded linear regression. It has few parameters to control the model and is thus able to robustly adapt to different image data sets and experimental settings. The linear least square functions lead to closed form solutions and therefore achieve computationally efficient implementations. To effectively decrease these gaps, we group image patches into clusters via k-means algorithm and learn a linear regressor for each cluster at each iteration. The cascaded learning process gradually decreases the gap of high-frequency detail between the estimated high-resolution image patch and the ground truth image patch and simultaneously obtains the linear regression parameters. Experimental results show that the proposed method achieves superior performance with lower time consumption than the state-of-the-art methods. PMID:27323364
NASA Astrophysics Data System (ADS)
Hughes, Steve; Chapman, David
2009-11-01
Development of a robust hybrid code is useful for efficient calculation of fast electron transport, in conjunction with a radiation hydrodynamics code. The code THOR has been developed for coupling to a fluid code in this fashion for modelling this fast electron population generated during short-pulse laser experiments. It is built on the hybrid philosophy of work by J.R. Davies, which provides an intuitive and relatively straightforward computational framework, and makes it easier to take advantage of parallelism for reducing noise in the solution. The basic algorithms of the code are described along with the approximations and limitations of the current implementation. Recent experiments by D. Hoarty at AWE have demonstrated a method of heating solid density Aluminium layers to hundreds of eV, buried at various depths in a plastic target. Application of the THOR code in reproducing these measurements is shown with encouraging results. The quality of the match to the data is discussed with layers placed at various depths as in the experiments, and with different laser sources. The problems of comparing the code outputs with the measurement technique used in the experiment are also described.
Robust fast automatic skull stripping of MRI-T2 data
NASA Astrophysics Data System (ADS)
Rajagopalan, Srinivasan; Karwoski, Ronald A.; Robb, Richard
2005-04-01
The efficacy of image processing and analysis on skull stripped MR images vis-a-vis the original images is well established. Additionally, compliance with the Health Insurance Portability and Accountability Act (HIPAA) requires neuroimage repositories to anonymise the images before sharing them. This makes the non-trivial skull stripping process all the more significant. While a number of optimal approaches exist to strip the skull from T1-weighted MR images to the best of our knowledge, there is no simple, robust, fast, parameter free and fully automatic technique to perform the same on T2-weighted images. This paper presents a strategy to fill this gap. It employs a fast parameterization of the T2 image intensity onto a standardized T1 intensity scale. The parametric "T1-like" image obtained via the transformation, which takes only a few seconds to compute, is subsequently processed by any of the many T1-based brain extraction techniques to derive the brain mask. Masking the original T2 image with this brain mask strips the skull. By standardizing the intensity of the parametric image, preset algorithm-specific parameters (if any) could be used across multiple datasets. The proposed scheme has been used in a number of phantom and clinical T2 brain datasets to successfully strip the skull.
Robust Blur Kernel Estimation for License Plate Images From Fast Moving Vehicles.
Lu, Qingbo; Zhou, Wengang; Fang, Lu; Li, Houqiang
2016-05-01
As the unique identification of a vehicle, license plate is a key clue to uncover over-speed vehicles or the ones involved in hit-and-run accidents. However, the snapshot of over-speed vehicle captured by surveillance camera is frequently blurred due to fast motion, which is even unrecognizable by human. Those observed plate images are usually in low resolution and suffer severe loss of edge information, which cast great challenge to existing blind deblurring methods. For license plate image blurring caused by fast motion, the blur kernel can be viewed as linear uniform convolution and parametrically modeled with angle and length. In this paper, we propose a novel scheme based on sparse representation to identify the blur kernel. By analyzing the sparse representation coefficients of the recovered image, we determine the angle of the kernel based on the observation that the recovered image has the most sparse representation when the kernel angle corresponds to the genuine motion angle. Then, we estimate the length of the motion kernel with Radon transform in Fourier domain. Our scheme can well handle large motion blur even when the license plate is unrecognizable by human. We evaluate our approach on real-world images and compare with several popular state-of-the-art blind image deblurring algorithms. Experimental results demonstrate the superiority of our proposed approach in terms of effectiveness and robustness. PMID:26955030
A fast, robust algorithm for power line interference cancellation in neural recording
NASA Astrophysics Data System (ADS)
Keshtkaran, Mohammad Reza; Yang, Zhi
2014-04-01
Objective. Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. Approach. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. Main results. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence (<100 ms) and substantial interference rejection (output SNR >30 dB) in different conditions of interference strengths (input SNR from -30 to 30 dB), power line frequencies (45-65 Hz) and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. Software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. Significance. The proposed
Definition of a Robust Supervisory Control Scheme for Sodium-Cooled Fast Reactors
Ponciroli, Roberto; Passerini, Stefano; Vilim, Richard B.
2016-01-01
In this work, an innovative control approach for metal-fueled Sodium-cooled Fast Reactors is proposed. With respect to the classical approach adopted for base-load Nuclear Power Plants, an alternative control strategy for operating the reactor at different power levels by respecting the system physical constraints is presented. In order to achieve a higher operational flexibility along with ensuring that the implemented control loops do not influence the system inherent passive safety features, a dedicated supervisory control scheme for the dynamic definition of the corresponding set-points to be supplied to the PID controllers is designed. In particular, the traditional approach based on the adoption of tabulated lookup tables for the set-point definition is found not to be robust enough when failures of the implemented SISO (Single Input Single Output) actuators occur. Therefore, a feedback algorithm based on the Reference Governor approach, which allows for the optimization of reference signals according to the system operating conditions, is proposed.
Fast and robust measurement of microstructural dimensions using temporal diffusion spectroscopy
NASA Astrophysics Data System (ADS)
Li, Hua; Gore, John C.; Xu, Junzhong
2014-05-01
Mapping axon sizes non-invasively is of interest for neuroscientists and may have significant clinical potential because nerve conduction velocity is directly dependent on axon size. Current approaches to measuring axon sizes using diffusion-weighted MRI, e.g. q-space imaging with pulsed gradient spin echo (PGSE) sequences usually require long scan times and high q-values to detect small axons (diameter <2 μm). The oscillating gradient spin echo (OGSE) method has been shown to be able to achieve very short diffusion times and hence may be able to detect smaller axons with high sensitivity. In the current study, OGSE experiments were performed to measure the inner diameters of hollow microcapillaries with a range of sizes (∼1.5-19.3 μm) that mimic axons in the human central nervous system. The results suggest that OGSE measurements, even with only moderately high frequencies, are highly sensitive to compartment sizes, and a minimum of two ADC values with different frequencies may be sufficient to extract the microcapillary size accurately. This suggests that the OGSE method may serve as a fast and robust measurement method for mapping axon sizes non-invasively.
Fast and robust 3D ultrasound registration--block and game theoretic matching.
Banerjee, Jyotirmoy; Klink, Camiel; Peters, Edward D; Niessen, Wiro J; Moelker, Adriaan; van Walsum, Theo
2015-02-01
Real-time 3D US has potential for image guidance in minimally invasive liver interventions. However, motion caused by patient breathing makes it hard to visualize a localized area, and to maintain alignment with pre-operative information. In this work we develop a fast affine registration framework to compensate in real-time for liver motion/displacement due to breathing. The affine registration of two consecutive ultrasound volumes in time is performed using block-matching. For a set of evenly distributed points in one volume and their correspondences in the other volume, we propose a robust outlier rejection method to reject false matches. The inliers are then used to determine the affine transformation. The approach is evaluated on 13 4D ultrasound sequences acquired from 8 subjects. For 91 pairs of 3D ultrasound volumes selected from these sequences, a mean registration error of 1.8mm is achieved. A graphics processing unit (GPU) implementation runs the 3D US registration at 8 Hz. PMID:25484018
A fast and robust method for automated analysis of axonal transport.
Welzel, Oliver; Knörr, Jutta; Stroebel, Armin M; Kornhuber, Johannes; Groemer, Teja W
2011-09-01
Cargo movement along axons and dendrites is indispensable for the survival and maintenance of neuronal networks. Key parameters of this transport such as particle velocities and pausing times are often studied using kymograph construction, which converts the transport along a line of interest from a time-lapse movie into a position versus time image. Here we present a method for the automatic analysis of such kymographs based on the Hough transform, which is a robust and fast technique to extract lines from images. The applicability of the method was tested on simulated kymograph images and real data from axonal transport of synaptophysin and tetanus toxin as well as the velocity analysis of synaptic vesicle sharing between adjacent synapses in hippocampal neurons. Efficiency analysis revealed that the algorithm is able to detect a wide range of velocities and can be used at low signal-to-noise ratios. The present work enables the quantification of axonal transport parameters with high throughput with no a priori assumptions and minimal human intervention. PMID:21695534
WATSFAR: numerical simulation of soil WATer and Solute fluxes using a FAst and Robust method
NASA Astrophysics Data System (ADS)
Crevoisier, David; Voltz, Marc
2013-04-01
To simulate the evolution of hydro- and agro-systems, numerous spatialised models are based on a multi-local approach and improvement of simulation accuracy by data-assimilation techniques are now used in many application field. The latest acquisition techniques provide a large amount of experimental data, which increase the efficiency of parameters estimation and inverse modelling approaches. In turn simulations are often run on large temporal and spatial domains which requires a large number of model runs. Eventually, despite the regular increase in computing capacities, the development of fast and robust methods describing the evolution of saturated-unsaturated soil water and solute fluxes is still a challenge. Ross (2003, Agron J; 95:1352-1361) proposed a method, solving 1D Richards' and convection-diffusion equation, that fulfil these characteristics. The method is based on a non iterative approach which reduces the numerical divergence risks and allows the use of coarser spatial and temporal discretisations, while assuring a satisfying accuracy of the results. Crevoisier et al. (2009, Adv Wat Res; 32:936-947) proposed some technical improvements and validated this method on a wider range of agro- pedo- climatic situations. In this poster, we present the simulation code WATSFAR which generalises the Ross method to other mathematical representations of soil water retention curve (i.e. standard and modified van Genuchten model) and includes a dual permeability context (preferential fluxes) for both water and solute transfers. The situations tested are those known to be the less favourable when using standard numerical methods: fine textured and extremely dry soils, intense rainfall and solute fluxes, soils near saturation, ... The results of WATSFAR have been compared with the standard finite element model Hydrus. The analysis of these comparisons highlights two main advantages for WATSFAR, i) robustness: even on fine textured soil or high water and solute
A Fast and Robust Poisson-Boltzmann Solver Based on Adaptive Cartesian Grids.
Boschitsch, Alexander H; Fenley, Marcia O
2011-05-10
An adaptive Cartesian grid (ACG) concept is presented for the fast and robust numerical solution of the 3D Poisson-Boltzmann Equation (PBE) governing the electrostatic interactions of large-scale biomolecules and highly charged multi-biomolecular assemblies such as ribosomes and viruses. The ACG offers numerous advantages over competing grid topologies such as regular 3D lattices and unstructured grids. For very large biological molecules and multi-biomolecule assemblies, the total number of grid-points is several orders of magnitude less than that required in a conventional lattice grid used in the current PBE solvers thus allowing the end user to obtain accurate and stable nonlinear PBE solutions on a desktop computer. Compared to tetrahedral-based unstructured grids, ACG offers a simpler hierarchical grid structure, which is naturally suited to multigrid, relieves indirect addressing requirements and uses fewer neighboring nodes in the finite difference stencils. Construction of the ACG and determination of the dielectric/ionic maps are straightforward, fast and require minimal user intervention. Charge singularities are eliminated by reformulating the problem to produce the reaction field potential in the molecular interior and the total electrostatic potential in the exterior ionic solvent region. This approach minimizes grid-dependency and alleviates the need for fine grid spacing near atomic charge sites. The technical portion of this paper contains three parts. First, the ACG and its construction for general biomolecular geometries are described. Next, a discrete approximation to the PBE upon this mesh is derived. Finally, the overall solution procedure and multigrid implementation are summarized. Results obtained with the ACG-based PBE solver are presented for: (i) a low dielectric spherical cavity, containing interior point charges, embedded in a high dielectric ionic solvent - analytical solutions are available for this case, thus allowing rigorous
A Fast-and-Robust Profiler for Improving Polymerase Chain Reaction Diagnostics
Besseris, George J.
2014-01-01
Polymerase chain reaction (PCR) is an in vitro technology in molecular genetics that progressively amplifies minimal copies of short DNA sequences in a fast and inexpensive manner. However, PCR performance is sensitive to suboptimal processing conditions. Compromised PCR conditions lead to artifacts and bias that downgrade the discriminatory power and reproducibility of the results. Promising attempts to resolve the PCR performance optimization issue have been guided by quality improvement tactics adopted in the past for industrial trials. Thus, orthogonal arrays (OAs) have been employed to program quick-and-easy structured experiments. Profiling of influences facilitates the quantification of effects that may counteract the detectability of amplified DNA fragments. Nevertheless, the attractive feature of reducing greatly the amount of work and expenditures by planning trials with saturated-unreplicated OA schemes is known to be relinquished in the subsequent analysis phase. This is because of an inherent incompatibility of ordinary multi-factorial comparison techniques to convert small yet dense datasets. Treating unreplicated-saturated data with either the analysis of variance (ANOVA) or regression models destroys the information extraction process. Both of those mentioned approaches are rendered blind to error since the examined effects absorb all available degrees of freedom. Therefore, in lack of approximating an experimental uncertainty, any outcome interpretation is rendered subjective. We propose a profiling method that permits the non-linear maximization of amplicon resolution by eliminating the necessity for direct error estimation. Our approach is distribution-free, calibration-free, simulation-free and sparsity-free with well-known power properties. It is also user-friendly by promoting rudimentary analytics. Testing our method on published amplicon count data, we found that the preponderant effect is the concentration of MgCl2 (p<0.05) followed by the
F2DPR: a fast and robust cross-correlation technique for volumetric PIV
NASA Astrophysics Data System (ADS)
Earl, Thomas; Jeon, Young Jin; Lecordier, Bertrand; David, Laurent
2016-08-01
The current state-of-the-art in cross-correlation based time-resolved particle image velocimetry (PIV) techniques are the fluid trajectory correlation, FTC (Lynch and Scarano 2013) and the fluid trajectory evaluation based on an ensemble-averaged cross-correlation, FTEE (Jeon et al 2014a). These techniques compute the velocity vector as a polynomial trajectory Γ in space and time, enabling the extraction of beneficial quantities such as material acceleration whilst significantly increasing the accuracy of the particle displacement prediction achieved by standard two-frame PIV. In the context of time-resolved volumetric PIV, the drawback of trajectory computation is the computational expense of the three-dimensional (3D) cross-correlation, exacerbated by the requirement to perform N ‑ 1 cross-correlations, where N (for typically 5≤slant N≤slant 9 ) is the number of sequential particle volumes, for each velocity field. Therefore, the acceleration of this calculation is highly desirable. This paper re-examines the application of two-dimensional (2D) cross-correlation methods to three-dimensional (3D) datasets by Bilsky et al (2011) and the binning techniques of Discetti and Astarita (2012). A new and robust version of the 2D methods is proposed and described, called fast 2D projection—re-projection (f2dpr). Performance tests based on computational time and accuracy for both two-frame and multi-frame PIV are carried out on synthetically generated data. The cases presented herein include uniaxial uniform linear displacements and shear, and simulated turbulence data. The proposed algorithm is shown to be in the order of 10 times faster than a standard 3D FFT without loss of precision for a wide range of synthetic test cases, while combining with the binning technique can yield 50 times faster computation. The algorithm is also applied to reconstructed synthetic turbulent particle fields to investigate reconstruction noise on its performance and no
Scatterometry—fast and robust measurements of nano-textured surfaces
NASA Astrophysics Data System (ADS)
Hannibal Madsen, Morten; Hansen, Poul-Erik
2016-06-01
Scatterometry is a fast, precise and low cost way to determine the mean pitch and dimensional parameters of periodic structures with lateral resolution of a few nanometer. It is robust enough for in-line process control and precise and accurate enough for metrology measurements. Furthermore, scatterometry is a non-destructive technique capable of measuring buried structures, for example a grating covered by a thick oxide layer. As scatterometry is a non-imaging technique, mathematical modeling is needed to retrieve structural parameters that describe a surface. In this review, the three main steps of scatterometry are discussed: the data acquisition, the simulation of diffraction efficiencies and the comparison of data and simulations. First, the intensity of the diffracted light is measured with a scatterometer as a function of incoming angle, diffraction angle and/or wavelength. We discuss the evolution of the scatterometers from the earliest angular scatterometers to the new imaging scatterometers. The basic principle of measuring diffraction efficiencies in scatterometry has remained the same since the beginning, but the instrumental improvements have made scatterometry a state-of-the-art solution for fast and accurate measurements of nano-textured surfaces. The improvements include extending the wavelength range from the visible to the extreme ultra-violet range, development of Fourier optics to measure all diffraction orders simultaneously, and an imaging scatterometer to measure area of interests smaller than the spot size. Secondly, computer simulations of the diffraction efficiencies are discussed with emphasis on the rigorous coupled-wave analysis (RCWA) method. RCWA has, since the mid-1990s, been the preferred method for grating simulations due to the speed of the algorithms. In the beginning the RCWA method suffered from a very slow convergence rate, and we discuss the historical improvements to overcome this challenge, e.g. by the introduction of Li
MTC: A Fast and Robust Graph-Based Transductive Learning Method.
Zhang, Yan-Ming; Huang, Kaizhu; Geng, Guang-Gang; Liu, Cheng-Lin
2015-09-01
Despite the great success of graph-based transductive learning methods, most of them have serious problems in scalability and robustness. In this paper, we propose an efficient and robust graph-based transductive classification method, called minimum tree cut (MTC), which is suitable for large-scale data. Motivated from the sparse representation of graph, we approximate a graph by a spanning tree. Exploiting the simple structure, we develop a linear-time algorithm to label the tree such that the cut size of the tree is minimized. This significantly improves graph-based methods, which typically have a polynomial time complexity. Moreover, we theoretically and empirically show that the performance of MTC is robust to the graph construction, overcoming another big problem of traditional graph-based methods. Extensive experiments on public data sets and applications on web-spam detection and interactive image segmentation demonstrate our method's advantages in aspect of accuracy, speed, and robustness. PMID:25376047
NASA Astrophysics Data System (ADS)
Touati, F.; Idres, M.; Kahlouche, S.
2010-12-01
A method is presented for the fast and robust computation of the spherical harmonic coefficients of the terrestrial gravitational field from precise kinematic orbit of GOCE satellite. To reduce the influence of outliers in the kinematic orbit, Huber's M-estimation is applied. The computational aspect of this method is studied with great importance by investigating the Newton's procedure which converges faster than the iteratively reweighted least squares (IRLS) algorithm. The processing strategy of the orbit data is based on satellite accelerations, which are derived from GPS position time-series by Newton's interpolation. The gradient of the gravitational potential with respect to rectangular coordinates is expressed using the Cunningham-Metris method. The Newton's law of motion performs the equality between satellite accelerations and the gradient of the gravitational potential in an inertial frame system. Numerical results using simulated data are realized in order to test the robustness and the computational efficiency of the proposed method.
Fast and Robust Newton strategies for non-linear geodynamics problems
NASA Astrophysics Data System (ADS)
Le Pourhiet, Laetitia; May, Dave
2014-05-01
Geodynamic problems are inherently non-linear, with sources of non-inearities arising from the (i) rheology, (ii) boundary conditions and (iii) the choice of time integration scheme. We have developed a robust non-linear scheme utilizing PETSc's non-linear solver framework; SNES. Through the SNES framework, we have access to a wide range of globalization techniques. In this work we extensively use line search implementation. We explored a wide range different strategies for solving a variety of non-linear problems specific to geodynamics. In this presentation, we report of the most robust line-searching techniques which we have found for the three classes of non-linearities previously identified. Among the class of rheological non-linearities, the shear banding instability using visco-plastic flow rules is the most difficult to solve. Distinctively from its sibling, the elasto-plastic rheology, the visco-plastic rheology causes instantaneous shear localisation. As a results, decreasing time-stepping is not a viable approach to better capture the initial phase of localisation. Furthermore, return map algorithms based on a consistent tangent cannot be used as the slope of the tangent is infinite. Obtaining a converged non-linear solution to this problem only relies on the robustness non-linear solver. After presenting a Newton methodology suitable for rheological non-linearities, we examine the performance of this formulation when frictional sliding boundary conditions are introduced. We assess the robustness of the non-linear solver when applied to critical taper type problems.
Development of Chemically and Thermally Robust Lithium Fast Ion Conducting Chalcogenide Glasses
NASA Technical Reports Server (NTRS)
Martin, Steve W.; Hagedorn, Norman (Technical Monitor)
2002-01-01
In this project, a new research thrust into the development of an entirely new class of FIC glasses has begun that may lead to a new set of optimized thin-film lithium ion conducting materials. New chemically robust FIC glasses are being prepared that are expected to exhibit unusually high chemical and electrochemical stability. New thermally robust FIC glasses are being prepared that exhibit softening points in excess of 500 C which will dramatically expand the usable operating temperature range of batteries, fuel-cells, and sensors using such electrolytes. Glasses are being explored in the general compositional series xLi2S+ yGa2S3 + (1-x-y)GeS2. Li2S is added as the source of the conductive lithium ions. GeS2 is the base glass-forming phase and the trivalent sulfides, Ga2S3, is added to increase the "refractoniness" of the glass, that is to significantly increase the softening point of the glass as well as its chemical stability. By optimizing the composition of the glass, new glasses and glass-ceramic FIC materials have been prepared with softening points in excess of 500 C and conductivities above 10(exp -3)/Ohm cm at room temperature. These latter attributes are currently not available in any FIC glasses to date.
NASA Astrophysics Data System (ADS)
Baranwal, Mayank; Gorugantu, Ram S.; Salapaka, Srinivasa M.
2015-08-01
This paper aims at control design and its implementation for robust high-bandwidth precision (nanoscale) positioning systems. Even though modern model-based control theoretic designs for robust broadband high-resolution positioning have enabled orders of magnitude improvement in performance over existing model independent designs, their scope is severely limited by the inefficacies of digital implementation of the control designs. High-order control laws that result from model-based designs typically have to be approximated with reduced-order systems to facilitate digital implementation. Digital systems, even those that have very high sampling frequencies, provide low effective control bandwidth when implementing high-order systems. In this context, field programmable analog arrays (FPAAs) provide a good alternative to the use of digital-logic based processors since they enable very high implementation speeds, moreover with cheaper resources. The superior flexibility of digital systems in terms of the implementable mathematical and logical functions does not give significant edge over FPAAs when implementing linear dynamic control laws. In this paper, we pose the control design objectives for positioning systems in different configurations as optimal control problems and demonstrate significant improvements in performance when the resulting control laws are applied using FPAAs as opposed to their digital counterparts. An improvement of over 200% in positioning bandwidth is achieved over an earlier digital signal processor (DSP) based implementation for the same system and same control design, even when for the DSP-based system, the sampling frequency is about 100 times the desired positioning bandwidth.
Sakama, Makoto; Kanematsu, Nobuyuki; Inaniwa, Taku
2016-08-01
A simple and efficient approach is needed for robustness evaluation and optimization of treatment planning in routine clinical particle therapy. Here we propose a robustness analysis method using dose standard deviation (SD) in possible scenarios such as the robustness indicator and a fast dose warping method, i.e. deformation of dose distributions, taking into account the setup and range errors in carbon-ion therapy. The dose warping method is based on the nominal dose distribution and the water-equivalent path length obtained from planning computed tomography data with a clinically commissioned treatment planning system (TPS). We compared, in a limited number of scenarios at the extreme boundaries of the assumed error, the dose SD distributions obtained by the warping method with those obtained using the TPS dose recalculations. The accuracy of the warping method was examined by the standard-deviation-volume histograms (SDVHs) for varying degrees of setup and range errors for three different tumor sites. Furthermore, the influence of dose fractionation on the combined dose uncertainty, taking into consideration the correlation of setup and range errors between fractions, was evaluated with simple equations using the SDVHs and the mean value of SDs in the defined volume of interest. The results of the proposed method agreed well with those obtained with the dose recalculations in these comparisons, and the effectiveness of dose SD evaluations at the extreme boundaries of given errors was confirmed from the responsivity and DVH analysis of relative SD values for each error. The combined dose uncertainties depended heavily on the number of fractions, assumed errors and tumor sites. The typical computation time of the warping method is approximately 60 times less than that of the full dose calculation method using the TPS. The dose SD distributions and SDVHs with the fractionation effect will be useful indicators for robustness analysis in treatment planning, and the
NASA Astrophysics Data System (ADS)
Sakama, Makoto; Kanematsu, Nobuyuki; Inaniwa, Taku
2016-08-01
A simple and efficient approach is needed for robustness evaluation and optimization of treatment planning in routine clinical particle therapy. Here we propose a robustness analysis method using dose standard deviation (SD) in possible scenarios such as the robustness indicator and a fast dose warping method, i.e. deformation of dose distributions, taking into account the setup and range errors in carbon-ion therapy. The dose warping method is based on the nominal dose distribution and the water-equivalent path length obtained from planning computed tomography data with a clinically commissioned treatment planning system (TPS). We compared, in a limited number of scenarios at the extreme boundaries of the assumed error, the dose SD distributions obtained by the warping method with those obtained using the TPS dose recalculations. The accuracy of the warping method was examined by the standard-deviation-volume histograms (SDVHs) for varying degrees of setup and range errors for three different tumor sites. Furthermore, the influence of dose fractionation on the combined dose uncertainty, taking into consideration the correlation of setup and range errors between fractions, was evaluated with simple equations using the SDVHs and the mean value of SDs in the defined volume of interest. The results of the proposed method agreed well with those obtained with the dose recalculations in these comparisons, and the effectiveness of dose SD evaluations at the extreme boundaries of given errors was confirmed from the responsivity and DVH analysis of relative SD values for each error. The combined dose uncertainties depended heavily on the number of fractions, assumed errors and tumor sites. The typical computation time of the warping method is approximately 60 times less than that of the full dose calculation method using the TPS. The dose SD distributions and SDVHs with the fractionation effect will be useful indicators for robustness analysis in treatment planning, and the
Robust micromagnet design for fast electrical manipulations of single spins in quantum dots
NASA Astrophysics Data System (ADS)
Yoneda, Jun; Otsuka, Tomohiro; Takakura, Tatsuki; Pioro-Ladrière, Michel; Brunner, Roland; Lu, Hong; Nakajima, Takashi; Obata, Toshiaki; Noiri, Akito; Palmstrøm, Christopher J.; Gossard, Arthur C.; Tarucha, Seigo
2015-08-01
Tailoring spin coupling to electric fields is central to spintronics and spin-based quantum information processing. We present an optimal micromagnet design that produces appropriate stray magnetic fields to mediate fast electrical spin manipulations in nanodevices. We quantify the practical requirements for spatial field inhomogeneity and tolerance for misalignment with spins, and propose a design scheme to improve the spin-rotation frequency (to exceed 50 MHz in GaAs nanostructures). We then validate our design by experiments in separate devices. Our results will open a route to rapidly control solid-state electron spins with limited lifetimes and to study coherent spin dynamics in solids.
QMLE: fast, robust, and efficient estimation of distribution functions based on quantiles.
Brown, Scott; Heathcote, Andrew
2003-11-01
Quantile maximum likelihood (QML) is an estimation technique, proposed by Heathcote, Brown, and Mewhort (2002), that provides robust and efficient estimates of distribution parameters, typically for response time data, in sample sizes as small as 40 observations. In view of the computational difficulty inherent in implementing QML, we provide open-source Fortran 90 code that calculates QML estimates for parameters of the ex-Gaussian distribution, as well as standard maximum likelihood estimates. We show that parameter estimates from QML are asymptotically unbiased and normally distributed. Our software provides asymptotically correct standard error and parameter intercorrelation estimates, as well as producing the outputs required for constructing quantile-quantile plots. The code is parallelizable and can easily be modified to estimate parameters from other distributions. Compiled binaries, as well as the source code, example analysis files, and a detailed manual, are available for free on the Internet. PMID:14748492
Baranwal, Mayank; Gorugantu, Ram S; Salapaka, Srinivasa M
2015-08-01
This paper aims at control design and its implementation for robust high-bandwidth precision (nanoscale) positioning systems. Even though modern model-based control theoretic designs for robust broadband high-resolution positioning have enabled orders of magnitude improvement in performance over existing model independent designs, their scope is severely limited by the inefficacies of digital implementation of the control designs. High-order control laws that result from model-based designs typically have to be approximated with reduced-order systems to facilitate digital implementation. Digital systems, even those that have very high sampling frequencies, provide low effective control bandwidth when implementing high-order systems. In this context, field programmable analog arrays (FPAAs) provide a good alternative to the use of digital-logic based processors since they enable very high implementation speeds, moreover with cheaper resources. The superior flexibility of digital systems in terms of the implementable mathematical and logical functions does not give significant edge over FPAAs when implementing linear dynamic control laws. In this paper, we pose the control design objectives for positioning systems in different configurations as optimal control problems and demonstrate significant improvements in performance when the resulting control laws are applied using FPAAs as opposed to their digital counterparts. An improvement of over 200% in positioning bandwidth is achieved over an earlier digital signal processor (DSP) based implementation for the same system and same control design, even when for the DSP-based system, the sampling frequency is about 100 times the desired positioning bandwidth. PMID:26329226
Evaluating the CDM-Robustness of the input buffer with very fast transmission line pulse
NASA Astrophysics Data System (ADS)
Kao, Tzu-Cheng; Lee, Jian-Hsing; Hung, Chung-Yu; Lien, Chen-Hsin; Su, Hung-Der
2015-02-01
In this paper, a scheme for how to utilize VFTLP (very fast transmission line pulse) data to design an input buffer circuit for CDM (charged-device model) ESD protection is reported. The impedance of the ESD device under VFTLP stress is nearly 120 Ω at the beginning of turn-on transient, and decreases with time toward 10 Ω prior to the voltage falling below 0 V. In this work, the fact that the dynamic-characteristic impedance of the ESD device under VFTLP testing is independent of the stress current is found. Since both VFTLP zapping and the CDM are nanosecond events, the dynamic-characteristic impedance of the ESD device can be used to evaluate the CDM threshold voltage of the input buffer based on the equivalent and simplified RLC circuit.
HIFI-C: a robust and fast method for determining NMR couplings from adaptive 3D to 2D projections.
Cornilescu, Gabriel; Bahrami, Arash; Tonelli, Marco; Markley, John L; Eghbalnia, Hamid R
2007-08-01
We describe a novel method for the robust, rapid, and reliable determination of J couplings in multi-dimensional NMR coupling data, including small couplings from larger proteins. The method, "High-resolution Iterative Frequency Identification of Couplings" (HIFI-C) is an extension of the adaptive and intelligent data collection approach introduced earlier in HIFI-NMR. HIFI-C collects one or more optimally tilted two-dimensional (2D) planes of a 3D experiment, identifies peaks, and determines couplings with high resolution and precision. The HIFI-C approach, demonstrated here for the 3D quantitative J method, offers vital features that advance the goal of rapid and robust collection of NMR coupling data. (1) Tilted plane residual dipolar couplings (RDC) data are collected adaptively in order to offer an intelligent trade off between data collection time and accuracy. (2) Data from independent planes can provide a statistical measure of reliability for each measured coupling. (3) Fast data collection enables measurements in cases where sample stability is a limiting factor (for example in the presence of an orienting medium required for residual dipolar coupling measurements). (4) For samples that are stable, or in experiments involving relatively stronger couplings, robust data collection enables more reliable determinations of couplings in shorter time, particularly for larger biomolecules. As a proof of principle, we have applied the HIFI-C approach to the 3D quantitative J experiment to determine N-C' RDC values for three proteins ranging from 56 to 159 residues (including a homodimer with 111 residues in each subunit). A number of factors influence the robustness and speed of data collection. These factors include the size of the protein, the experimental set up, and the coupling being measured, among others. To exhibit a lower bound on robustness and the potential for time saving, the measurement of dipolar couplings for the N-C' vector represents a realistic
Fast robust non-sequential optical ray-tracing with implicit algebraic surfaces
NASA Astrophysics Data System (ADS)
Greynolds, Alan W.
2015-09-01
The fastest, most robust, general technique for non-sequentially ray-tracing a large class of imaging and non-imaging optical systems is by geometric modeling with algebraic (i.e. polynomial) implicit surfaces. The basic theory of these surfaces with special attention to optimizing their precise intersection with a ray (even at grazing incidence) is outlined for an admittedly limited software implementation. On a couple of "tame" examples, a 64-bit Windows 7 version is significantly faster than the fastest commercial design software (all multi-threaded). Non-sequential ray-surface interactions approaching 30M/sec are achieved on a 12-core 2.67 GHz Mac Pro desktop computer. For a more exotic example of a 6th degree Wood's horn beam dump (light trap), a 32-bit Windows single thread version traces rays nearly 4 times faster than the commercial ASAP software's implicit algebraic surface and over 13 times faster than its equivalent NURBS surface. However, implicit surfaces are foreign to most CAD systems and thus unfortunately, don't easily fit into a modern workflow.
A fast and Robust Algorithm for general inequality/equality constrained minimum time problems
Briessen, B.; Sadegh, N.
1995-12-01
This paper presents a new algorithm for solving general inequality/equality constrained minimum time problems. The algorithm`s solution time is linear in the number of Runge-Kutta steps and the number of parameters used to discretize the control input history. The method is being applied to a three link redundant robotic arm with torque bounds, joint angle bounds, and a specified tip path. It solves case after case within a graphical user interface in which the user chooses the initial joint angles and the tip path with a mouse. Solve times are from 30 to 120 seconds on a Hewlett Packard workstation. A zero torque history is always used in the initial guess, and the algorithm has never crashed, indicating its robustness. The algorithm solves for a feasible solution for large trajectory execution time t{sub f} and then reduces t{sub f} and then reduces t{sub f} by a small amount and re-solves. The fixed time re- solve uses a new method of finding a near-minimum-2-norm solution to a set of linear equations and inequalities that achieves quadratic convegence to a feasible solution of the full nonlinear problem.
NASA Astrophysics Data System (ADS)
Theis, L. S.; Motzoi, F.; Wilhelm, F. K.
2016-01-01
We present a few-parameter ansatz for pulses to implement a broad set of simultaneous single-qubit rotations in frequency-crowded multilevel systems. Specifically, we consider a system of two qutrits whose working and leakage transitions suffer from spectral crowding (detuned by δ ). In order to achieve precise controllability, we make use of two driving fields (each having two quadratures) at two different tones to simultaneously apply arbitrary combinations of rotations about axes in the X -Y plane to both qubits. Expanding the waveforms in terms of Hanning windows, we show how analytic pulses containing smooth and composite-pulse features can easily achieve gate errors less than 10-4 and considerably outperform known adiabatic techniques. Moreover, we find a generalization of the WAHWAH (Weak AnHarmonicity With Average Hamiltonian) method by Schutjens et al. [R. Schutjens, F. A. Dagga, D. J. Egger, and F. K. Wilhelm, Phys. Rev. A 88, 052330 (2013)], 10.1103/PhysRevA.88.052330 that allows precise separate single-qubit rotations for all gate times beyond a quantum speed limit. We find in all cases a quantum speed limit slightly below 2 π /δ for the gate time and show that our pulses are robust against variations in system parameters and filtering due to transfer functions, making them suitable for experimental implementations.
NASA Astrophysics Data System (ADS)
Spuhler, Christoph; Harders, Matthias; Székely, Gábor
2006-03-01
We present a fast and robust approach for automatic centerline extraction of tubular structures. The underlying idea is to cut traditional snakes into a set of shorter, independent segments - so-called snakelets. Following the same variational principles, each snakelet acts locally and extracts a subpart of the overall structure. After a parallel optimization step, outliers are detected and the remaining segments then form an implicit centerline. No manual initialization of the snakelets is necessary, which represents one advantage of the method. Moreover, computational complexity does not directly depend on dataset size, but on the number of snake segments necessary to cover the structure of interest, resulting in short computation times. Lastly, the approach is robust even for very complex datasets such as the small intestine. Our approach was tested on several medical datasets (CT datasets of colon, small bowel, and blood vessels) and yielded smooth, connected centerlines with few or no branches. The computation time needed is less than a minute using standard computing hardware.
X-PROP: a fast and robust diffusion-weighted propeller technique.
Li, Zhiqiang; Pipe, James G; Lee, Chu-Yu; Debbins, Josef P; Karis, John P; Huo, Donglai
2011-08-01
Diffusion-weighted imaging (DWI) has shown great benefits in clinical MR exams. However, current DWI techniques have shortcomings of sensitivity to distortion or long scan times or combinations of the two. Diffusion-weighted echo-planar imaging (EPI) is fast but suffers from severe geometric distortion. Periodically rotated overlapping parallel lines with enhanced reconstruction diffusion-weighted imaging (PROPELLER DWI) is free of geometric distortion, but the scan time is usually long and imposes high Specific Absorption Rate (SAR) especially at high fields. TurboPROP was proposed to accelerate the scan by combining signal from gradient echoes, but the off-resonance artifacts from gradient echoes can still degrade the image quality. In this study, a new method called X-PROP is presented. Similar to TurboPROP, it uses gradient echoes to reduce the scan time. By separating the gradient and spin echoes into individual blades and removing the off-resonance phase, the off-resonance artifacts in X-PROP are minimized. Special reconstruction processes are applied on these blades to correct for the motion artifacts. In vivo results show its advantages over EPI, PROPELLER DWI, and TurboPROP techniques. PMID:21661046
Fast and robust pointing and tracking using a second-generation star tracker
NASA Astrophysics Data System (ADS)
Joergensen, John L.; Pickles, Andrew J.
1998-05-01
Second generation star trackers work by taking wide-angle optical pictures of star fields, correlating the image against a star catalogue in ROM, centroiding many stars to derive an accurate position and orientation. This paper describes a miniature instrument, fast and lightweight, including database and search engine. It can be attached to any telescope to deliver an accurate absolute attitude reference via a serial line. It is independent of encoders or control system, and works whenever it can see the sky. Position update rates in the range of 1 to 5 Hz enable closed-loop operations. The paper describes the instrument operational principles, and its application as an attitude reference unit for a telescope. Actual data obtained at the University of Hawaii's 0.6-m telescope are presented, and their utility for correcting mechanical alignment discussed. The system has great potential as a positioner and guider for (i) remotely operated optical telescopes, (ii) IR telescopes operating in dark clouds, and (iii) radio telescopes. Other application recommendations and the performance estimates are given.
Analysis and Development of A Robust Fuel for Gas-Cooled Fast Reactors
Knight, Travis W
2010-01-31
The focus of this effort was on the development of an advanced fuel for gas-cooled fast reactor (GFR) applications. This composite design is based on carbide fuel kernels dispersed in a ZrC matrix. The choice of ZrC is based on its high temperature properties and good thermal conductivity and improved retention of fission products to temperatures beyond that of traditional SiC based coated particle fuels. A key component of this study was the development and understanding of advanced fabrication techniques for GFR fuels that have potential to reduce minor actinide (MA) losses during fabrication owing to their higher vapor pressures and greater volatility. The major accomplishments of this work were the study of combustion synthesis methods for fabrication of the ZrC matrix, fabrication of high density UC electrodes for use in the rotating electrode process, production of UC particles by rotating electrode method, integration of UC kernels in the ZrC matrix, and the full characterization of each component. Major accomplishments in the near-term have been the greater characterization of the UC kernels produced by the rotating electrode method and their condition following the integration in the composite (ZrC matrix) following the short time but high temperature combustion synthesis process. This work has generated four journal publications, one conference proceeding paper, and one additional journal paper submitted for publication (under review). The greater significance of the work can be understood in that it achieved an objective of the DOE Generation IV (GenIV) roadmap for GFR Fuel—namely the demonstration of a composite carbide fuel with 30% volume fuel. This near-term accomplishment is even more significant given the expected or possible time frame for implementation of the GFR in the years 2030 -2050 or beyond.
The ITS-90 realisation. A survey
Crovini, L.; Steur, P.P.M.
1994-12-31
The present state of the realisation of the International Temperature scale of 1990 is illustrated using the results of a recent inquiry conducted by the Consultative Committee for Thermometry of the CIPM among a sample group of metrology laboratories.
Hagberg, Emma E; Krarup, Anders; Fahnøe, Ulrik; Larsen, Lars E; Dam-Tuxen, Rebekka; Pedersen, Anders G
2016-08-01
Aleutian Mink Disease Virus (AMDV) is a frequently encountered pathogen associated with commercial mink breeding. AMDV infection leads to increased mortality and compromised animal health and welfare. Currently little is known about the molecular evolution of the virus, and the few existing studies have focused on limited regions of the viral genome. This paper describes a robust, reliable, and fast protocol for amplification of the full AMDV genome using long-range PCR. The method was used to generate next generation sequencing data for the non-virulent cell-culture adapted AMDV-G strain as well as for the virulent AMDV-Utah strain. Comparisons at nucleotide- and amino acid level showed that, in agreement with existing literature, the highest variability between the two virus strains was found in the left open reading frame, which encodes the non-structural (NS1-3) genes. This paper also reports a number of differences that potentially can be linked to virulence and host range. To the authors' knowledge, this is the first study to apply next generation sequencing on the entire AMDV genome. The results from the study will facilitate the development of new diagnostic tools and can form the basis for more detailed molecular epidemiological analyses of the virus. PMID:27060623
Jaafar, Haryati; Ibrahim, Salwani; Ramli, Dzati Athiar
2015-01-01
Mobile implementation is a current trend in biometric design. This paper proposes a new approach to palm print recognition, in which smart phones are used to capture palm print images at a distance. A touchless system was developed because of public demand for privacy and sanitation. Robust hand tracking, image enhancement, and fast computation processing algorithms are required for effective touchless and mobile-based recognition. In this project, hand tracking and the region of interest (ROI) extraction method were discussed. A sliding neighborhood operation with local histogram equalization, followed by a local adaptive thresholding or LHEAT approach, was proposed in the image enhancement stage to manage low-quality palm print images. To accelerate the recognition process, a new classifier, improved fuzzy-based k nearest centroid neighbor (IFkNCN), was implemented. By removing outliers and reducing the amount of training data, this classifier exhibited faster computation. Our experimental results demonstrate that a touchless palm print system using LHEAT and IFkNCN achieves a promising recognition rate of 98.64%. PMID:26113861
NASA Astrophysics Data System (ADS)
Lai, Jun; Kobayashi, Motoki; Barnett, Alex
2015-10-01
We present a solver for plane wave scattering from a periodic dielectric grating with a large number M of inclusions lying in each period of its middle layer. Such composite material geometries have a growing role in modern photonic devices and solar cells. The high-order scheme is based on boundary integral equations, and achieves many digits of accuracy with ease. The usual way to periodize the integral equation-via the quasi-periodic Green's function-fails at Wood's anomalies. We instead use the free-space Green's kernel for the near field, add auxiliary basis functions for the far field, and enforce periodicity in an expanded linear system; this is robust for all parameters. Inverting the periodic and layer unknowns, we are left with a square linear system involving only the inclusion scattering coefficients. Preconditioning by the single-inclusion scattering matrix, this is solved iteratively in O (M) time using a fast matrix-vector product. Numerical experiments show that a diffraction grating containing M = 1000 inclusions per period can be solved to 9-digit accuracy in under 5 minutes on a laptop.
Jaafar, Haryati; Ibrahim, Salwani; Ramli, Dzati Athiar
2015-01-01
Mobile implementation is a current trend in biometric design. This paper proposes a new approach to palm print recognition, in which smart phones are used to capture palm print images at a distance. A touchless system was developed because of public demand for privacy and sanitation. Robust hand tracking, image enhancement, and fast computation processing algorithms are required for effective touchless and mobile-based recognition. In this project, hand tracking and the region of interest (ROI) extraction method were discussed. A sliding neighborhood operation with local histogram equalization, followed by a local adaptive thresholding or LHEAT approach, was proposed in the image enhancement stage to manage low-quality palm print images. To accelerate the recognition process, a new classifier, improved fuzzy-based k nearest centroid neighbor (IFkNCN), was implemented. By removing outliers and reducing the amount of training data, this classifier exhibited faster computation. Our experimental results demonstrate that a touchless palm print system using LHEAT and IFkNCN achieves a promising recognition rate of 98.64%. PMID:26113861
The SKYLON Spaceplane - Progress to Realisation
NASA Astrophysics Data System (ADS)
Varvill, R.; Bond, A.
The Skylon spaceplane will enable single stage to orbit delivery of payloads with aircraft like operations. The key to realising this goal is a combined cycle engine that can operate both in airbreathing and pure rocket modes. To achieve this new low mass structure concepts and several new engine technologies need to be proven. An extensive program of technology development has addressed these issues with very positive results. This now allows the project to proceed to the final concept proving stage before full development commences.
Realising the Promise of Cancer Predisposition Genes
Rahman, Nazneen
2016-01-01
Genes in which germline mutations confer high or moderate increased risks of cancer are called cancer predisposition genes (CPG). Over 100 CPGs have been identified providing important scientific insights in many areas, particularly mechanisms of cancer causation. Moreover, clinical utilisation of CPGs has had substantial impact in diagnosis, optimised management and prevention of cancer. The recent transformative advances in DNA sequencing bring the promise of many more CPG discoveries and greater, broader clinical applications. However, there is also considerable potential for incorrect inferences and inappropriate clinical applications. Realising the promise of cancer predisposition genes for science and medicine will thus require careful navigation. PMID:24429628
Stable LPV realisation of the Smith predictor
NASA Astrophysics Data System (ADS)
Blanchini, Franco; Casagrande, Daniele; Miani, Stefano; Viaro, Umberto
2016-07-01
The paper is concerned with the control of a linear plant with an output delay. As is known, when the plant parameters do not vary in time, the transfer function approach can be used to find a high-performing controller with the Smith-predictor structure. Such an approach in the domain of the Laplace transform is not directly applicable in the time-variant case. Nevertheless, it is shown that the transfer function of the Smith controller valid for constant values of the parameters can be realised in such a way that closed-loop stability, as well as point-wise optimal performance, is ensured also when the parameters vary with time. The suggested technique is applied to the control of a heat exchanger whose varying parameters include a measurement delay.
NASA Astrophysics Data System (ADS)
Or-Guil, Michal
2009-03-01
Germinal centers (GCs) are dynamic microstructures that form in lymphatic tissues during immune responses. There, B cells undergo rapid proliferation and mutation of their B cell receptors (BCRs). Selection of B cells bearing BCRs that bind to the pathogen causing the immune response ultimately leads to BCRs that, when secreted as antibodies, form a new, effective, and pathogen specific antibody repertoire. However, the details of this evolutionary process are poorly understood, since currently available experimental techniques do not allow for direct observation of the prevailing mechanisms [Or-Guil et al., Imm.Rev. 2007]. Based on optimality considerations, we put forward the assumption that GCs are not isolated entities where evolutionary processes occur independently, but interconnected structures which allow for continuous exchange of B cells. We show that this architecture leads to a system whose response is much more robust towards different antigen variants than a set of independently working GCs could ever be. We test this hypothesis by generating our own experimental data (time course of 3-D volume distribution of GCs, analysis of high-throughput BCR sequences), and show that available data is consistent with the outlined hypothesis.
Kiefer, T; Villanueva, L G; Fargier, F; Favier, F; Brugger, J
2010-12-17
Fast hydrogen sensors based on discontinuous palladium (Pd) films on supporting polyimide layers, fabricated by a cost-efficient and full-wafer compatible process, are presented. The films, deposited by electron-beam evaporation with a nominal thickness of 1.5 nm, consist of isolated Pd islands that are separated by nanoscopic gaps. On hydrogenation, the volume expansion of Pd brings initially separated islands into contact which leads to the creation of new electrical pathways through the film. The supporting polyimide layer provides both sufficient elasticity for the Pd nanoclusters to expand on hydrogenation and a sufficiently high surface energy for good adhesion of both film and contacting electrodes. The novel order of the fabrication processes involves a dicing step prior to the Pd deposition and stencil lithography for the patterning of microelectrodes. This allows us to preserve the as-deposited film properties. The devices work at room temperature, show response times of a few seconds and have a low power consumption of some tens of nW. PMID:21098952
Robust 2D/3D registration for fast-flexion motion of the knee joint using hybrid optimization.
Ohnishi, Takashi; Suzuki, Masahiko; Kobayashi, Tatsuya; Naomoto, Shinji; Sukegawa, Tomoyuki; Nawata, Atsushi; Haneishi, Hideaki
2013-01-01
Previously, we proposed a 2D/3D registration method that uses Powell's algorithm to obtain 3D motion of a knee joint by 3D computed-tomography and bi-plane fluoroscopic images. The 2D/3D registration is performed consecutively and automatically for each frame of the fluoroscopic images. This method starts from the optimum parameters of the previous frame for each frame except for the first one, and it searches for the next set of optimum parameters using Powell's algorithm. However, if the flexion motion of the knee joint is fast, it is likely that Powell's algorithm will provide a mismatch because the initial parameters are far from the correct ones. In this study, we applied a hybrid optimization algorithm (HPS) combining Powell's algorithm with the Nelder-Mead simplex (NM-simplex) algorithm to overcome this problem. The performance of the HPS was compared with the separate performances of Powell's algorithm and the NM-simplex algorithm, the Quasi-Newton algorithm and hybrid optimization algorithm with the Quasi-Newton and NM-simplex algorithms with five patient data sets in terms of the root-mean-square error (RMSE), target registration error (TRE), success rate, and processing time. The RMSE, TRE, and the success rate of the HPS were better than those of the other optimization algorithms, and the processing time was similar to that of Powell's algorithm alone. PMID:23138929
NASA Astrophysics Data System (ADS)
Timofey, Sizonenko; Karsanina, Marina; Byuk, Irina; Gerke, Kirill
2016-04-01
To characterize pore structure relevant to single and multi-phase flow modelling it is of special interest to extract topology of the pore space. This is usually achieved using so-called pore-network models. Such models are useful not only to characterize pore space and pore size distributions, but also provide means to simulate flow and transport with very limited computational resources compared to other pore-scale modelling techniques. The main drawback of the pore-network approach is that they have first to simplify the pore space geometry. This crucial step is both time consuming and prone to numerous errors. Two most popular methods based on median axis or inscribed maximal balls have their own strong sides and disadvantages. To address aforementioned problems related to pore-network extraction here we propose a novel method utilizing the advantages of both popular approaches. Combining two algorithms resulted in much faster and robust extraction methodology. Moreover, we have found that accurate topology representation requires extension of the conventional pore-body and pore-throat classification. We test our new methodology using pore structures with "analytical solutions" such as different sphere packs. In addition, we rigorously compare it against inscribed maximal balls methodology's results using numerous 3D images of sandstone and carbonate rocks, soils and some other porous materials. Another verification includes permeability calculations which are also compared both against lab data and voxel based pore-scale modelling simulations. This work was partially supported by RFBR grant 15-34-20989 (X-ray tomography and image fusion) and RSF grant 14-17-00658 (image segmentation and pore-scale modelling).
Lochy, Aliette; Van Belle, Goedele; Rossion, Bruno
2015-01-01
Despite decades of research on reading, including the relatively recent contributions of neuroimaging and electrophysiology, identifying selective representations of whole visual words (in contrast to pseudowords) in the human brain remains challenging, in particular without an explicit linguistic task. Here we measured discrimination responses to written words by means of electroencephalography (EEG) during fast periodic visual stimulation. Sequences of pseudofonts, nonwords, or pseudowords were presented through sinusoidal contrast modulation at a periodic 10 Hz frequency rate (F), in which words were interspersed at regular intervals of every fifth item (i.e., F/5, 2 Hz). Participants monitored a central cross color change and had no linguistic task to perform. Within only 3 min of stimulation, a robust discrimination response for words at 2 Hz (and its harmonics, i.e., 4 and 6 Hz) was observed in all conditions, located predominantly over the left occipito-temporal cortex. The magnitude of the response was largest for words embedded in pseudofonts, and larger in nonwords than in pseudowords, showing that list context effects classically reported in behavioral lexical decision tasks are due to visual discrimination rather than decisional processes. Remarkably, the oddball response was significant even for the critical words/pseudowords discrimination condition in every individual participant. A second experiment replicated this words/pseudowords discrimination, and showed that this effect is not accounted for by a higher bigram frequency of words than pseudowords. Without any explicit task, our results highlight the potential of an EEG fast periodic visual stimulation approach for understanding the representation of written language. Its development in the scientific community might be valuable to rapidly and objectively measure sensitivity to word processing in different human populations, including neuropsychological patients with dyslexia and other reading
NASA Astrophysics Data System (ADS)
Cools, S.; Vanroose, W.
2016-03-01
This paper improves the convergence and robustness of a multigrid-based solver for the cross sections of the driven Schrödinger equation. Adding a Coupled Channel Correction Step (CCCS) after each multigrid (MG) V-cycle efficiently removes the errors that remain after the V-cycle sweep. The combined iterative solution scheme (MG-CCCS) is shown to feature significantly improved convergence rates over the classical MG method at energies where bound states dominate the solution, resulting in a fast and scalable solution method for the complex-valued Schrödinger break-up problem for any energy regime. The proposed solver displays optimal scaling; a solution is found in a time that is linear in the number of unknowns. The method is validated on a 2D Temkin-Poet model problem, and convergence results both as a solver and preconditioner are provided to support the O (N) scalability of the method. This paper extends the applicability of the complex contour approach for far field map computation (Cools et al. (2014) [10]).
NASA Astrophysics Data System (ADS)
Seidel, A.; Wagner, S.; Dreizler, A.; Ebert, V.
2015-05-01
We have developed a fast, spatially scanning direct tunable diode laser absorption spectrometer (dTDLAS) that combines four polygon-mirror based scanning units with low-cost retro-reflective foils. With this instrument, tomographic measurements of absolute 2-D water vapor concentration profiles are possible without any calibration using a reference gas. A spatial area of 0.8 m x 0.8 m was covered, which allows for application in soil physics, where greenhouse gas emission from certain soil structures shall be monitored. The whole concentration field was measured with up to 2.5 Hz. In this paper, we present the setup and spectroscopic performance of the instrument regarding the influence of the polygon rotation speed and mode on the absorption signal. Homogeneous H2O distributions were measured and compared to a single channel, bi-static reference TDLAS spectrometer for validation of the instrument. Good accuracy and precision with errors of less than 6% of the absolute concentration and length and bandwidth normalized detection limits of up to 1.1 ppmv . m (Hz)-0.5 were achieved. The spectrometer is a robust and easy to set up instrument for tomographic reconstructions of 2-D-concentration fields that can be considered as a good basis for future field measurements in environmental research.
NASA Astrophysics Data System (ADS)
Seidel, A.; Wagner, S.; Dreizler, A.; Ebert, V.
2014-12-01
We have developed a fast, spatially direct scanning tunable diode laser absorption spectrometer (dTDLAS) that combines four polygon-mirror based scanning units with low-cost retro-reflective foils. With this instrument, tomographic measurements of absolute 2-D water vapour concentration profiles are possible without any calibration using a reference gas. A spatial area of 0.8 m × 0.8 m was covered, which allows for application in soil physics, where greenhouse gas emission from certain soil structures shall be monitored. The whole concentration field was measured with up to 2.5 Hz. In this paper, we present the setup and spectroscopic performance of the instrument regarding the influence of the polygon rotation speed and mode on the absorption signal. Homogeneous H2O distributions were measured and compared to a single channel, bi-static reference TDLAS spectrometer for validation of the instrument. Good accuracy and precision with errors of less than 6% of the absolute concentration and length and bandwidth normalized detection limits of up to 1.1 ppmv · m · √Hz-1 were achieved. The spectrometer is a robust and easy to set up instrument for tomographic reconstructions of 2-D-concentration fields that can be considered a good basis for future field measurements in environmental research.
Realising the Uncertainty Enabled Model Web
NASA Astrophysics Data System (ADS)
Cornford, D.; Bastin, L.; Pebesma, E. J.; Williams, M.; Stasch, C.; Jones, R.; Gerharz, L.
2012-12-01
The FP7 funded UncertWeb project aims to create the "uncertainty enabled model web". The central concept here is that geospatial models and data resources are exposed via standard web service interfaces, such as the Open Geospatial Consortium (OGC) suite of encodings and interface standards, allowing the creation of complex workflows combining both data and models. The focus of UncertWeb is on the issue of managing uncertainty in such workflows, and providing the standards, architecture, tools and software support necessary to realise the "uncertainty enabled model web". In this paper we summarise the developments in the first two years of UncertWeb, illustrating several key points with examples taken from the use case requirements that motivate the project. Firstly we address the issue of encoding specifications. We explain the usage of UncertML 2.0, a flexible encoding for representing uncertainty based on a probabilistic approach. This is designed to be used within existing standards such as Observations and Measurements (O&M) and data quality elements of ISO19115 / 19139 (geographic information metadata and encoding specifications) as well as more broadly outside the OGC domain. We show profiles of O&M that have been developed within UncertWeb and how UncertML 2.0 is used within these. We also show encodings based on NetCDF and discuss possible future directions for encodings in JSON. We then discuss the issues of workflow construction, considering discovery of resources (both data and models). We discuss why a brokering approach to service composition is necessary in a world where the web service interfaces remain relatively heterogeneous, including many non-OGC approaches, in particular the more mainstream SOAP and WSDL approaches. We discuss the trade-offs between delegating uncertainty management functions to the service interfaces themselves and integrating the functions in the workflow management system. We describe two utility services to address
Kormány, Róbert; Fekete, Jenő; Guillarme, Davy; Fekete, Szabolcs
2014-02-01
The goal of this study was to evaluate the accuracy of simulated robustness testing using commercial modelling software (DryLab) and state-of-the-art stationary phases. For this purpose, a mixture of amlodipine and its seven related impurities was analyzed on short narrow bore columns (50×2.1mm, packed with sub-2μm particles) providing short analysis times. The performance of commercial modelling software for robustness testing was systematically compared to experimental measurements and DoE based predictions. We have demonstrated that the reliability of predictions was good, since the predicted retention times and resolutions were in good agreement with the experimental ones at the edges of the design space. In average, the retention time relative errors were <1.0%, while the predicted critical resolution errors were comprised between 6.9 and 17.2%. Because the simulated robustness testing requires significantly less experimental work than the DoE based predictions, we think that robustness could now be investigated in the early stage of method development. Moreover, the column interchangeability, which is also an important part of robustness testing, was investigated considering five different C8 and C18 columns packed with sub-2μm particles. Again, thanks to modelling software, we proved that the separation was feasible on all columns within the same analysis time (less than 4min), by proper adjustments of variables. PMID:24252726
Realising Haldane's vision for a Chern insulator in buckled lattices.
Wright, Anthony R
2013-01-01
The Chern insulator displays a quantum Hall effect with no net magnetic field. Proposed by Haldane over 20 years ago, it laid the foundation for the fields of topological order, unconventional quantum Hall effects, and topological insulators. Despite enormous impact over two decades, Haldane's original vision of a staggered magnetic field within a crystal lattice has been prohibitively difficult to realise. In fact, in the original paper Haldane stresses his idea is probably merely a toy model. I show that buckled lattices with only simple hopping terms, within in-plane magnetic fields, can realise these models, requiring no exotic interactions or experimental parameters. As a concrete example of this very broad, and remarkably simple principle, I consider silicene, a honeycomb lattice with out-of-plane sublattice anisotropy, in an in-plane magnetic field, and show that it is a Chern insulator, even at negligibly small magnetic fields, which is analogous to Haldane's original model. PMID:24061332
Tinkum, Kelsey L.; White, Lynn S.; Marpegan, Luciano; Herzog, Erik; Piwnica-Worms, David; Piwnica-Worms, Helen
2013-01-01
Reporter mice that enable the activity of the endogenous p21 promoter to be dynamically monitored in real time in vivo and under a variety of experimental conditions revealed ubiquitous p21 expression in mouse organs including the brain. Low light bioluminescence microscopy was employed to localize p21 expression to specific regions of the brain. Interestingly, p21 expression was observed in the paraventricular, arcuate, and dorsomedial nuclei of the hypothalamus, regions that detect nutrient levels in the blood stream and signal metabolic actions throughout the body. These results suggested a link between p21 expression and metabolic regulation. We found that short-term food deprivation (fasting) potently induced p21 expression in tissues involved in metabolic regulation including liver, pancreas and hypothalamic nuclei. Conditional reporter mice were generated that enabled hepatocyte-specific expression of p21 to be monitored in vivo. Bioluminescence imaging demonstrated that fasting induced a 7-fold increase in p21 expression in livers of reporter mice and Western blotting demonstrated an increase in protein levels as well. The ability of fasting to induce p21 expression was found to be independent of p53 but dependent on FOXO1. Finally, occupancy of the endogenous p21 promoter by FOXO1 was observed in the livers of fasted but not fed mice. Thus, fasting promotes loading of FOXO1 onto the p21 promoter to induce p21 expression in hepatocytes. PMID:23918930
Tinkum, Kelsey L; White, Lynn S; Marpegan, Luciano; Herzog, Erik; Piwnica-Worms, David; Piwnica-Worms, Helen
2013-09-27
Reporter mice that enable the activity of the endogenous p21 promoter to be dynamically monitored in real time in vivo and under a variety of experimental conditions revealed ubiquitous p21 expression in mouse organs including the brain. Low light bioluminescence microscopy was employed to localize p21 expression to specific regions of the brain. Interestingly, p21 expression was observed in the paraventricular, arcuate, and dorsomedial nuclei of the hypothalamus, regions that detect nutrient levels in the blood stream and signal metabolic actions throughout the body. These results suggested a link between p21 expression and metabolic regulation. We found that short-term food deprivation (fasting) potently induced p21 expression in tissues involved in metabolic regulation including liver, pancreas and hypothalamic nuclei. Conditional reporter mice were generated that enabled hepatocyte-specific expression of p21 to be monitored in vivo. Bioluminescence imaging demonstrated that fasting induced a 7-fold increase in p21 expression in livers of reporter mice and Western blotting demonstrated an increase in protein levels as well. The ability of fasting to induce p21 expression was found to be independent of p53 but dependent on FOXO1. Finally, occupancy of the endogenous p21 promoter by FOXO1 was observed in the livers of fasted but not fed mice. Thus, fasting promotes loading of FOXO1 onto the p21 promoter to induce p21 expression in hepatocytes. PMID:23918930
Reyhan, Meral; Kim, Hyun J.; Brown, Matthew S.; Ennis, Daniel B.
2013-01-01
Purpose To assess the intra- and inter-scan reproducibility of LV twist using FAST. Assessing the reproducibility of the measurement of new magnetic resonance imaging (MRI) biomarkers is an important part of validation. Fourier Analysis of STimulated Echoes (FAST) is a new MRI tissue tagging method that has recently been shown to compare favorably to conventional estimates of left ventricular (LV) twist from cardiac tagged images, but with significantly reduced user interaction time. Materials and Methods Healthy volunteers (N=10) were scanned twice using FAST over one week. On Day-1 two measurements of LV twist were collected for intra-scan comparisons. Measurements for LV twist were again collected on Day-8 for inter-scan assessment. LV short-axis tagged images were acquired on a 3T scanner in order to ensure detectability of tags during early and mid-diastole. Peak LV twist is reported as mean±SD. Reproducibility was assessed using the concordance correlation coefficient (CCC) and the repeatability coefficient (RC) (95%-CI range). Results Mean peak twist measurements were 13.4±4.3° (Day-1, Scan-1), 13.6±3.7° (Day-1, Scan-2), and 13.0±2.7° (Day-8). Bland-Altman analysis resulted in intra- and inter-scan bias and 95%-CI of −0.6° [−1.0°, 1.6°] and 1.4° [−1.0°, 3.0°], respectively. The Bland-Altman RC for peak LV twist was 2.6° and 4.0° for intra- and inter-scan respectively. The CCC was 0.9 and 0.6 for peak LV twist for intra- and inter-scan respectively. Conclusion FAST is a semi-automated method that provides a quick and quantitative assessment of LV systolic and diastolic twist that demonstrates high intra-scan and moderate inter-scan reproducibility in preliminary studies. PMID:23633244
Automated eigensystem realisation algorithm for operational modal analysis
NASA Astrophysics Data System (ADS)
Zhang, Guowen; Ma, Jinghua; Chen, Zhuo; Wang, Ruirong
2014-07-01
The eigensystem realisation algorithm (ERA) is one of the most popular methods in civil engineering applications for estimating modal parameters. Three issues have been addressed in the paper: spurious mode elimination, estimating the energy relationship between different modes, and automatic analysis of the stabilisation diagram. On spurious mode elimination, a new criterion, modal similarity index (MSI) is proposed to measure the reliability of the modes obtained by ERA. On estimating the energy relationship between different modes, the mode energy level (MEL) was introduced to measure the energy contribution of each mode, which can be used to indicate the dominant mode. On automatic analysis of the stabilisation diagram, an automation of the mode selection process based on a hierarchical clustering algorithm was developed. An experimental example of the parameter estimation for the Chaotianmen bridge model in Chongqing, China, is presented to demonstrate the efficacy of the proposed method.
Castets, Charles R; Ribot, Emeline J; Lefrançois, William; Trotier, Aurélien J; Thiaudière, Eric; Franconi, Jean-Michel; Miraux, Sylvain
2015-07-01
Mapping longitudinal relaxation times in 3D is a promising quantitative and non-invasive imaging tool to assess cardiac remodeling. Few methods are proposed in the literature allowing us to perform 3D T1 mapping. These methods often require long scan times and use a low number of 3D images to calculate T1 . In this project, a fast 3D T1 mapping method using a stack-of-spirals sampling scheme and regular RF pulse excitation at 7 T is presented. This sequence, combined with a newly developed fitting procedure, allowed us to quantify T1 of the whole mouse heart with a high spatial resolution of 208 × 208 × 315 µm(3) in 10-12 min acquisition time. The sensitivity of this method for measuring T1 variations was demonstrated on mouse hearts after several injections of manganese chloride (doses from 25 to 150 µmol kg(-1) ). T1 values were measured in vivo in both pre- and post-contrast experiments. This protocol was also validated on ischemic mice to demonstrate its efficiency to visualize tissue damage induced by a myocardial infarction. This study showed that combining spiral gradient shape and steady RF excitation enabled fast and robust 3D T1 mapping of the entire heart with a high spatial resolution. PMID:25989986
Jović, Ozren; Smrečki, Neven; Popović, Zora
2016-04-01
A novel quantitative prediction and variable selection method called interval ridge regression (iRR) is studied in this work. The method is performed on six data sets of FTIR, two data sets of UV-vis and one data set of DSC. The obtained results show that models built with ridge regression on optimal variables selected with iRR significantly outperfom models built with ridge regression on all variables in both calibration (6 out of 9 cases) and validation (2 out of 9 cases). In this study, iRR is also compared with interval partial least squares regression (iPLS). iRR outperfomed iPLS in validation (insignificantly in 6 out of 9 cases and significantly in one out of 9 cases for p<0.05). Also, iRR can be a fast alternative to iPLS, especially in case of unknown degree of complexity of analyzed system, i.e. if upper limit of number of latent variables is not easily estimated for iPLS. Adulteration of hempseed (H) oil, a well known health beneficial nutrient, is studied in this work by mixing it with cheap and widely used oils such as soybean (So) oil, rapeseed (R) oil and sunflower (Su) oil. Binary mixture sets of hempseed oil with these three oils (HSo, HR and HSu) and a ternary mixture set of H oil, R oil and Su oil (HRSu) were considered. The obtained accuracy indicates that using iRR on FTIR and UV-vis data, each particular oil can be very successfully quantified (in all 8 cases RMSEP<1.2%). This means that FTIR-ATR coupled with iRR can very rapidly and effectively determine the level of adulteration in the adulterated hempseed oil (R(2)>0.99). PMID:26838379
Huang, Dong; Cabral, Ricardo; De la Torre, Fernando
2016-02-01
Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features ( X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers common in realistic training sets due to occlusion, specular reflections or noise. It is important to notice that existing discriminative approaches assume the input variables X to be noise free. Thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances on rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification and facial attribute classification with missing data are used to illustrate the benefits of RR. PMID:26761740
Free-field realisations of the BMS3 algebra and its extensions
NASA Astrophysics Data System (ADS)
Banerjee, Nabamita; Jatkar, Dileep P.; Mukhi, Sunil; Neogi, Turmoli
2016-06-01
We construct an explicit realisation of the BMS3 algebra with nonzero central charges using holomorphic free fields. This can be extended by the addition of chiral matter to a realisation having arbitrary values for the two independent central charges. Via the introduction of additional free fields, we extend our construction to the minimally supersymmetric BMS3 algebra and to the nonlinear higher-spin BMS3-W3 algebra. We also describe an extended system that realises both the SU(2) current algebra as well as BMS3 via the Wakimoto representation, though in this case introducing a central extension also brings in new non-central operators.
Mechanically robust, chemically inert superhydrophobic charcoal surfaces.
Xie, Jian-Bo; Li, Liang; Knyazeva, Anastassiya; Weston, James; Naumov, Panče
2016-08-11
We report a fast and cost-effective strategy towards the preparation of superhydrophobic composites where a double-sided adhesive tape is paved with charcoal particles. The composites are mechanically robust, and resistant to strong chemical agents. PMID:27405255
George, Angela; Riddell, Daniel; Seal, Sheila; Talukdar, Sabrina; Mahamdallie, Shazia; Ruark, Elise; Cloke, Victoria; Slade, Ingrid; Kemp, Zoe; Gore, Martin; Strydom, Ann; Banerjee, Susana; Hanson, Helen; Rahman, Nazneen
2016-01-01
Advances in DNA sequencing have made genetic testing fast and affordable, but limitations of testing processes are impeding realisation of patient benefits. Ovarian cancer exemplifies the potential value of genetic testing and the shortcomings of current pathways to access testing. Approximately 15% of ovarian cancer patients have a germline BRCA1 or BRCA2 mutation which has substantial implications for their personal management and that of their relatives. Unfortunately, in most countries, routine implementation of BRCA testing for ovarian cancer patients has been inconsistent and largely unsuccessful. We developed a rapid, robust, mainstream genetic testing pathway in which testing is undertaken by the trained cancer team with cascade testing to relatives performed by the genetics team. 207 women with ovarian cancer were offered testing through the mainstream pathway. All accepted. 33 (16%) had a BRCA mutation. The result informed management of 79% (121/154) women with active disease. Patient and clinician feedback was very positive. The pathway offers a 4-fold reduction in time and 13-fold reduction in resource requirement compared to the conventional testing pathway. The mainstream genetic testing pathway we present is effective, efficient and patient-centred. It can deliver rapid, robust, large-scale, cost-effective genetic testing of BRCA1 and BRCA2 and may serve as an exemplar for other genes and other diseases. PMID:27406733
George, Angela; Riddell, Daniel; Seal, Sheila; Talukdar, Sabrina; Mahamdallie, Shazia; Ruark, Elise; Cloke, Victoria; Slade, Ingrid; Kemp, Zoe; Gore, Martin; Strydom, Ann; Banerjee, Susana; Hanson, Helen; Rahman, Nazneen
2016-01-01
Advances in DNA sequencing have made genetic testing fast and affordable, but limitations of testing processes are impeding realisation of patient benefits. Ovarian cancer exemplifies the potential value of genetic testing and the shortcomings of current pathways to access testing. Approximately 15% of ovarian cancer patients have a germline BRCA1 or BRCA2 mutation which has substantial implications for their personal management and that of their relatives. Unfortunately, in most countries, routine implementation of BRCA testing for ovarian cancer patients has been inconsistent and largely unsuccessful. We developed a rapid, robust, mainstream genetic testing pathway in which testing is undertaken by the trained cancer team with cascade testing to relatives performed by the genetics team. 207 women with ovarian cancer were offered testing through the mainstream pathway. All accepted. 33 (16%) had a BRCA mutation. The result informed management of 79% (121/154) women with active disease. Patient and clinician feedback was very positive. The pathway offers a 4-fold reduction in time and 13-fold reduction in resource requirement compared to the conventional testing pathway. The mainstream genetic testing pathway we present is effective, efficient and patient-centred. It can deliver rapid, robust, large-scale, cost-effective genetic testing of BRCA1 and BRCA2 and may serve as an exemplar for other genes and other diseases. PMID:27406733
Positive Stable Realisation of Fractional Electrical Circuits Consisting of n Subsystem
NASA Astrophysics Data System (ADS)
Markowski, Konrad Andrzej
2015-11-01
This paper presents a method of the determination of a positive stable realisation of the fractional continuous-time positive system consisting of n subsystems with one fractional order and with different fractional orders. For the proposed method, a digraph-based algorithm was constructed. In this paper, we have shown how we can realise the transfer matrix using electrical circuits consisting of resistances, inductances, capacitances and source voltages. The proposed method was discussed and illustrated with some numerical examples.
Investigation of resistive guiding of fast electrons in ultra-intense laser-solid interactions
NASA Astrophysics Data System (ADS)
Green, James; Booth, Nicola; Robinson, Alex; Lancaster, Kate; Murphy, Chris; Ridgers, Chris
2015-11-01
A key issue in realising the development of a number of high-intensity laser-plasma applications is the critical problem of fast electron divergence. Previous experimental measurements have indicated that the electron divergence angle is considerable at relativistic intensities (> 1018 Wcm-2) and that self-pinching of the electron beam will not be sufficient to produce the collimated propagation that is required for applications such as WDM studies or bright, short-pulse X-ray sources. A number of concepts have been proposed to improve fast electron collimation, with one promising approach being to exploit resistivity gradients inside targets to magnetically guide fast electrons. Here we present experimental work using a novel conical target geometry that uses a high/low Z interface to produce such guiding. A range of target designs have been tested using the Vulcan Petawatt laser to investigate improvements in fast electron transport and collimation. Preliminary results will be presented from a number of complementary diagnostics in order to assess the degree and robustness of the focusing mechanism.
Realising the European network of biodosimetry: RENEB—status quo
Kulka, U.; Ainsbury, L.; Atkinson, M.; Barnard, S.; Smith, R.; Barquinero, J. F.; Barrios, L.; Bassinet, C.; Beinke, C.; Cucu, A.; Darroudi, F.; Fattibene, P.; Bortolin, E.; Monaca, S. Della; Gil, O.; Gregoire, E.; Hadjidekova, V.; Haghdoost, S.; Hatzi, V.; Hempel, W.; Herranz, R.; Jaworska, A.; Lindholm, C.; Lumniczky, K.; M'kacher, R.; Mörtl, S.; Montoro, A.; Moquet, J.; Moreno, M.; Noditi, M.; Ogbazghi, A.; Oestreicher, U.; Palitti, F.; Pantelias, G.; Popescu, I.; Prieto, M. J.; Roch-Lefevre, S.; Roessler, U.; Romm, H.; Rothkamm, K.; Sabatier, L.; Sebastià, N.; Sommer, S.; Terzoudi, G.; Testa, A.; Thierens, H.; Trompier, F.; Turai, I.; Vandevoorde, C.; Vaz, P.; Voisin, P.; Vral, A.; Ugletveit, F.; Wieser, A.; Woda, C.; Wojcik, A.
2015-01-01
Creating a sustainable network in biological and retrospective dosimetry that involves a large number of experienced laboratories throughout the European Union (EU) will significantly improve the accident and emergency response capabilities in case of a large-scale radiological emergency. A well-organised cooperative action involving EU laboratories will offer the best chance for fast and trustworthy dose assessments that are urgently needed in an emergency situation. To this end, the EC supports the establishment of a European network in biological dosimetry (RENEB). The RENEB project started in January 2012 involving cooperation of 23 organisations from 16 European countries. The purpose of RENEB is to increase the biodosimetry capacities in case of large-scale radiological emergency scenarios. The progress of the project since its inception is presented, comprising the consolidation process of the network with its operational platform, intercomparison exercises, training activities, proceedings in quality assurance and horizon scanning for new methods and partners. Additionally, the benefit of the network for the radiation research community as a whole is addressed. PMID:25205835
Robust efficient video fingerprinting
NASA Astrophysics Data System (ADS)
Puri, Manika; Lubin, Jeffrey
2009-02-01
We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.
ERIC Educational Resources Information Center
Riddell, Richard
2013-01-01
Taking recent policy on education and social mobility as a working example, this article examines developments in the mechanisms for realising policy over the past ten years, as indicative of changes in the neoliberal state. This initial analysis suggests that, despite similarities in the process of policy formation before and after the General…
Realising Graduate Attributes in the Research Degree: The Role of Peer Support Groups
ERIC Educational Resources Information Center
Stracke, Elke; Kumar, Vijay
2014-01-01
This paper discusses the role of peer support groups (PSGs) in realising graduate attributes in the research degree. The literature indicates that top-down embedding of graduate attributes has met with only limited success. By taking a bottom-up approach, this paper shows that PSGs offer an opportunity to improve the graduate attribute outcomes of…
Technology Transfer Automated Retrieval System (TEKTRAN)
Genotyping-by-sequencing (GBS) provides an opportunity for fast and inexpensive generation of unbiased SNPs. However, due to its low coverage, GBS SNPs have a higher proportion of missing data and genotyping error associated with heterozygote undercalling than traditional genotyping platforms. These...
Mechanisms for Robust Cognition.
Walsh, Matthew M; Gluck, Kevin A
2015-08-01
To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within variable environments. This raises the question, how do cognitive systems achieve similarly high degrees of robustness? The aim of this study was to identify a set of mechanisms that enhance robustness in cognitive systems. We identify three mechanisms that enhance robustness in biological and engineered systems: system control, redundancy, and adaptability. After surveying the psychological literature for evidence of these mechanisms, we provide simulations illustrating how each contributes to robust cognition in a different psychological domain: psychomotor vigilance, semantic memory, and strategy selection. These simulations highlight features of a mathematical approach for quantifying robustness, and they provide concrete examples of mechanisms for robust cognition. PMID:25352094
Realising effective theories of tribrid inflation: are there effects from messenger fields?
Antusch, Stefan; Nolde, David
2015-09-22
Tribrid inflation is a variant of supersymmetric hybrid inflation in which the inflaton is a matter field (which can be charged under gauge symmetries) and inflation ends by a GUT-scale phase transition of a waterfall field. These features make tribrid inflation a promising framework for realising inflation with particularly close connections to particle physics. Superpotentials of tribrid inflation involve effective operators suppressed by some cutoff scale, which is often taken as the Planck scale. However, these operators may also be generated by integrating out messenger superfields with masses below the Planck scale, which is in fact quite common in GUT and/or flavour models. The values of the inflaton field during inflation can then lie above this mass scale, which means that for reliably calculating the model predictions one has to go beyond the effective theory description. We therefore discuss realisations of effective theories of tribrid inflation and specify in which cases effects from the messenger fields are expected, and under which conditions they can safely be neglected. In particular, we point out how to construct realisations where, despite the fact that the inflaton field values are above the messenger mass scale, the predictions for the observables are (to a good approximation) identical to the ones calculated in the effective theory treatment where the messenger mass scale is identified with the (apparent) cutoff scale.
Realising effective theories of tribrid inflation: are there effects from messenger fields?
NASA Astrophysics Data System (ADS)
Antusch, Stefan; Nolde, David
2015-09-01
Tribrid inflation is a variant of supersymmetric hybrid inflation in which the inflaton is a matter field (which can be charged under gauge symmetries) and inflation ends by a GUT-scale phase transition of a waterfall field. These features make tribrid inflation a promising framework for realising inflation with particularly close connections to particle physics. Superpotentials of tribrid inflation involve effective operators suppressed by some cutoff scale, which is often taken as the Planck scale. However, these operators may also be generated by integrating out messenger superfields with masses below the Planck scale, which is in fact quite common in GUT and/or flavour models. The values of the inflaton field during inflation can then lie above this mass scale, which means that for reliably calculating the model predictions one has to go beyond the effective theory description. We therefore discuss realisations of effective theories of tribrid inflation and specify in which cases effects from the messenger fields are expected, and under which conditions they can safely be neglected. In particular, we point out how to construct realisations where, despite the fact that the inflaton field values are above the messenger mass scale, the predictions for the observables are (to a good approximation) identical to the ones calculated in the effective theory treatment where the messenger mass scale is identified with the (apparent) cutoff scale.
NASA Technical Reports Server (NTRS)
Narendra, K. S.; Annaswamy, A. M.
1985-01-01
Several concepts and results in robust adaptive control are are discussed and is organized in three parts. The first part surveys existing algorithms. Different formulations of the problem and theoretical solutions that have been suggested are reviewed here. The second part contains new results related to the role of persistent excitation in robust adaptive systems and the use of hybrid control to improve robustness. In the third part promising new areas for future research are suggested which combine different approaches currently known.
Mechanisms for Robust Cognition
ERIC Educational Resources Information Center
Walsh, Matthew M.; Gluck, Kevin A.
2015-01-01
To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within…
NASA Astrophysics Data System (ADS)
Óskarsson, G. J.; Kjesbu, O. S.; Slotte, A.
2002-08-01
Maturing Norwegian spring-spawning (NSS) herring, Clupea harengus, were collected for reproductive analyses along the Norwegian coast prior to the spawning seasons of 1997-2000. Over this time period there was a marked change in weight (W) at length (TL) with 1998 showing extremely low values and 2000 high values in a historical perspective. Potential fecundity, amounting to about 20 000-100 000 developing (vitellogenic) oocytes per fish and positively related to fish size, increased significantly with fish condition. Relative somatic potential fecundity (RF P, number of oocytes per g ovary-free body weight) in NSS herring was found to vary by 35-55% between years. Unexpectedly, females in 2000 showed low RF P-values, possibly due to negative feedback from previous reproductive investments at low condition. A clear threshold value for Fulton's condition factor, K (K=100×W/TL 3), of 0.65-0.70 existed below which there was considerable atresia (resorption of vitellogenic oocytes). Thus, these components of the spawning stock, amounting to 1-46% in the period 1980-1999, obviously contributed relatively little to the total egg production. This was confirmed by low ovary weights and examples of delayed oocyte development in these individuals. An up-to-date atresia model is presented. The established oocyte growth curve, and to a lesser degree the assumed atretic oocytic turnover rate, was critical for the estimation of realised fecundity (number of eggs spawned). Modelled realised fecundity was significantly below observed potential fecundity. Females that had migrated the shortest distance from the over-wintering area, Vestfjorden, northern Norway, were in the poorest condition, had the least developed oocytes and the lowest potential and realised fecundities. In agreement with previously published studies on temporal and spatial changes in gonad weights, those females reaching the main spawning grounds in the south-western part of the coast (Møre) were the most
The Realisation and Validation of the Galileo Terrestrial Reference Frame (GTRF)
NASA Astrophysics Data System (ADS)
Söhne, W.; Dach, R.; GTRF Team
2008-12-01
After a time of uncertainty the European Global Navigation Satellite System (GNSS) Galileo is going to be realized in the next years. Two test satellites - GIOVE A and B - have been placed in the orbit to validate various signals and component. One basic element of the overall Galileo system is the so-called Galileo Terrestrial Reference Frame (GTRF) as the basis for all Galileo products and services. The realization and maintenance of such a TRF has been given to an external consortium, named the Galileo Geodetic Service Provider (GGSP), which consists of seven institutions under the lead of GeoForschungsZentrum Potsdam. The project is funded within the sixth framework programme (FP6) of the European Union and managed by the European GNSS Supervisory Authority (GSA). It will last until May 2009. The GTRF will be a realisation of the International Terrestrial Reference Frame (ITRF) on a position precision level of 3 cm (2 sigma). Since the GTRF will already be required by the time when the first Galileo signals are going to be emitted during the In-Orbit-Validation (IOV) phase, an initial realisation of the GTRF has to be based on other positioning data, notably GPS. In addition to the GTRF, the GGSP will generate additional products and information, such as Earth Rotation Parameters, satellites orbits, clocks for satellites and stations. The presentation describes the strategy for the GTRF realisation following the "state of the art" TRF implementation. Since the Galileo tracking stations, named Galileo Sensor Stations (GSS), will form a sparse global network, it is necessary to densify the network with additional stations to get the highest possible precision and stability for the GTRF. The connection to the ITRF is realized and validated by IGS stations, which are part of the ITRF, and especially by local ties to other geodetic techniques like satellite laser ranging and VLBI. Results from the first analysis campaigns will be shown with special concern to the so
Ruggedness and robustness testing.
Dejaegher, Bieke; Heyden, Yvan Vander
2007-07-27
Due to the strict regulatory requirements, especially in pharmaceutical analysis, analysis results with an acceptable quality should be reported. Thus, a proper validation of the measurement method is required. In this context, ruggedness and robustness testing becomes increasingly more important. In this review, the definitions of ruggedness and robustness are given, followed by a short explanation of the different approaches applied to examine the ruggedness or the robustness of an analytical method. Then, case studies, describing ruggedness or robustness tests of high-performance liquid chromatographic (HPLC), capillary electrophoretic (CE), gas chromatographic (GC), supercritical fluid chromatographic (SFC), and ultra-performance liquid chromatographic (UPLC) assay methods, are critically reviewed and discussed. Mainly publications of the last 10 years are considered. PMID:17379230
NASA Astrophysics Data System (ADS)
Cox, Henry; Heaney, Kevin D.
2003-04-01
The term robustness in signal processing applications usually refers to approaches that are not degraded significantly when the assumptions that were invoked in defining the processing algorithm are no longer valid. Highly tuned algorithms that fall apart in real-world conditions are useless. The classic example is super-directive arrays of closely spaced elements. The very narrow beams and high directivity could be predicted under ideal conditions, could not be achieved under realistic conditions of amplitude, phase and position errors. The robust design tries to take into account the real environment as part of the optimization problem. This problem led to the introduction of the white noise gain constraint and diagonal loading in adaptive beam forming. Multiple linear constraints have been introduced in pursuit of robustness. Sonar systems such as towed arrays operate in less than ideal conditions, making robustness a concern. A special problem in sonar systems is failed array elements. This leads to severe degradation in beam patterns and bearing response patterns. Another robustness issue arises in matched field processing that uses an acoustic propagation model in the beamforming. Knowledge of the environmental parameters is usually limited. This paper reviews the various approaches to achieving robustness in sonar systems.
Chiang, Chern-En; Naditch-Brûlé, Lisa; Brette, Sandrine; Silva-Cardoso, José; Gamra, Habib; Murin, Jan; Zharinov, Oleg J.; Steg, Philippe Gabriel
2016-01-01
Background Atrial fibrillation (AF) can be managed with rhythm- or rate-control strategies. There are few data from routine clinical practice on the frequency with which each strategy is used and their correlates in terms of patients’ clinical characteristics, AF control, and symptom burden. Methods RealiseAF was an international, cross-sectional, observational survey of 11,198 patients with AF. The aim of this analysis was to describe patient profiles and symptoms according to the AF management strategy used. A multivariate logistic regression identified factors associated with AF management strategy at the end of the visit. Results Among 10,497 eligible patients, 53.7% used a rate-control strategy, compared with 34.5% who used a rhythm-control strategy. In 11.8% of patients, no clear strategy was stated. The proportion of patients with AF-related symptoms (EHRA Class > = II) was 78.1% (n = 4396/5630) for those using a rate-control strategy vs. 67.8% for those using a rhythm-control strategy (p<0.001). Multivariate logistic regression analysis revealed that age <75 years or the paroxysmal or persistent form of AF favored the choice of a rhythm-control strategy. A change in strategy was infrequent, even in patients with European Heart Rhythm Association (EHRA) Class > = II. Conclusions In the RealiseAF routine clinical practice survey, rate control was more commonly used than rhythm control, and a change in strategy was uncommon, even in symptomatic patients. In almost 12% of patients, no clear strategy was stated. Physician awareness regarding optimal management strategies for AF may be improved. PMID:26800084
Engineering robust intelligent robots
NASA Astrophysics Data System (ADS)
Hall, E. L.; Ali, S. M. Alhaj; Ghaffari, M.; Liao, X.; Cao, M.
2010-01-01
The purpose of this paper is to discuss the challenge of engineering robust intelligent robots. Robust intelligent robots may be considered as ones that not only work in one environment but rather in all types of situations and conditions. Our past work has described sensors for intelligent robots that permit adaptation to changes in the environment. We have also described the combination of these sensors with a "creative controller" that permits adaptive critic, neural network learning, and a dynamic database that permits task selection and criteria adjustment. However, the emphasis of this paper is on engineering solutions which are designed for robust operations and worst case situations such as day night cameras or rain and snow solutions. This ideal model may be compared to various approaches that have been implemented on "production vehicles and equipment" using Ethernet, CAN Bus and JAUS architectures and to modern, embedded, mobile computing architectures. Many prototype intelligent robots have been developed and demonstrated in terms of scientific feasibility but few have reached the stage of a robust engineering solution. Continual innovation and improvement are still required. The significance of this comparison is that it provides some insights that may be useful in designing future robots for various manufacturing, medical, and defense applications where robust and reliable performance is essential.
Robust control of accelerators
Johnson, W.J.D. ); Abdallah, C.T. )
1990-01-01
The problem of controlling the variations in the rf power system can be effectively cast as an application of modern control theory. Two components of this theory are obtaining a model and a feedback structure. The model inaccuracies influence the choice of a particular controller structure. Because of the modeling uncertainty, one has to design either a variable, adaptive controller or a fixed, robust controller to achieve the desired objective. The adaptive control scheme usually results in very complex hardware; and, therefore, shall not be pursued in this research. In contrast, the robust control methods leads to simpler hardware. However, robust control requires a more accurate mathematical model of the physical process than is required by adaptive control. Our research at the Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) has led to the development and implementation of a new robust rf power feedback system. In this paper, we report on our research progress. In section one, the robust control problem for the rf power system and the philosophy adopted for the beginning phase of our research is presented. In section two, the results of our proof-of-principle experiments are presented. In section three, we describe the actual controller configuration that is used in LANL FEL physics experiments. The novelty of our approach is that the control hardware is implemented directly in rf without demodulating, compensating, and then remodulating.
Robust control of accelerators
NASA Astrophysics Data System (ADS)
Joel, W.; Johnson, D.; Chaouki, Abdallah T.
1991-07-01
The problem of controlling the variations in the rf power system can be effectively cast as an application of modern control theory. Two components of this theory are obtaining a model and a feedback structure. The model inaccuracies influence the choice of a particular controller structure. Because of the modelling uncertainty, one has to design either a variable, adaptive controller or a fixed, robust controller to achieve the desired objective. The adaptive control scheme usually results in very complex hardware; and, therefore, shall not be pursued in this research. In contrast, the robust control method leads to simpler hardware. However, robust control requires a more accurate mathematical model of the physical process than is required by adaptive control. Our research at the Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) has led to the development and implementation of a new robust rf power feedback system. In this article, we report on our research progress. In section 1, the robust control problem for the rf power system and the philosophy adopted for the beginning phase of our research is presented. In section 2, the results of our proof-of-principle experiments are presented. In section 3, we describe the actual controller configuration that is used in LANL FEL physics experiments. The novelty of our approach is that the control hardware is implemented directly in rf. without demodulating, compensating, and then remodulating.
Realising the Potential of New Technology? Assessing the Legacy of New Labour's ICT Agenda 1997-2007
ERIC Educational Resources Information Center
Selwyn, Neil
2008-01-01
"Realising the potential of new technology" was one of the central educational themes of New Labour's 1997 election manifesto, with "information and communications technology" (ICT) established subsequently as a prominent feature of the Blair administration policy portfolio. As such New Labour can claim rightly to have made an unprecedented and…
ERIC Educational Resources Information Center
Soriano, Encarnacion; Franco, Clemente; Sleeter, Christine
2011-01-01
This study analysed the effects a values education programme can have on the feelings of self-realisation, self-concept and self-esteem of Romany adolescents in southern Spain. To do this, an experimental group received a values education intervention but a control group did not. The intervention programme was adapted to the Romany culture. The…
ERIC Educational Resources Information Center
Zibeniene, Gintaute
2004-01-01
The author analyzes the nature of study programme assessment with regard to the assurance of study quality. The organisation of the assessment process of the non-university study programmes which were developed and submitted for realisation in Lithuania and other countries is also presented and compared. It is being analysed whether it is possible…
Realisation d'un detecteur de radioactivite pour un systeme microfluidique
NASA Astrophysics Data System (ADS)
Girard Baril, Frederique
Pour etablir le comportement pharmacocinetique de nouveaux radiotraceurs en imagerie moleculaire, il est necessaire d'approfondir l'analyse realisee a partir d'une image par l'ajout d'une mesure dynamique de la radioactivite dans le sang. L'Universite de Sherbrooke developpe presentement une plateforme microfluidique d'echantillonnage et d'analyse permettant la mesure de la radioactivite du plasma en temps reel. L'objectif du present projet de maitrise etait de realiser le composant optoelectronique responsable de la detection des positrons et de l'integrer a la puce microfluidique. L'option retenue a ete l'utilisation de photodiodes PIN en silicium. Un procede de fabrication, ainsi qu'une serie de photomasques ont ete developpes afin de produire une premiere iteration de prototypes. Les detecteurs ont ete concus de maniere a optimiser leur sensibilite en fonction du type de rayonnement a detecter. En effet, la region de detection doit etre suffisamment epaisse et sensible pour absorber le maximum de particules energetiques. Egalement, il est essentiel de minimiser le courant de fuite en noirceur afin d'obtenir un photocourant directement proportionnel a l'energie des radiations incidentes. Les caracteristiques electriques obtenues avec les premiers detecteurs ont ete demontrees proches des performances de detecteurs commerciaux similaires. De plus, il a ete possible d'integrer un canal microfluidique au substrat contenant les photodiodes et d'en realiser l'encapsulation sans alterer les performances electriques initiales des detecteurs. Une courbe de l'activite radioactive du 18F a ete mesuree, celle-ci se comparant a l'activite theorique associee a ce radioisotope communement utilise en TEP. Enfin, un spectre en energie des emissions radiatives du 18F a ete mesure et compare aux performances de systemes utilisant des photodiodes commerciales. Il a ete demontre que le prototype offrait un rapport signal sur bruit similaire aux systemes bases sur des photodiodes
Robustness of spatial micronetworks
NASA Astrophysics Data System (ADS)
McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.
2015-04-01
Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.
Robustness of spatial micronetworks.
McAndrew, Thomas C; Danforth, Christopher M; Bagrow, James P
2015-04-01
Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure. PMID:25974553
Advances in genetic engineering of the avian genome: "Realising the promise".
Doran, Timothy J; Cooper, Caitlin A; Jenkins, Kristie A; Tizard, Mark L V
2016-06-01
This review provides an historic perspective of the key steps from those reported at the 1st Transgenic Animal Research Conference in 1997 through to the very latest developments in avian transgenesis. Eighteen years later, on the occasion of the 10th conference in this series, we have seen breakthrough advances in the use of viral vectors and transposons to transform the germline via the direct manipulation of the chicken embryo, through to the establishment of PGC cultures allowing in vitro modification, expansion into populations to analyse the genetic modifications and then injection of these cells into embryos to create germline chimeras. We have now reached an unprecedented time in the history of chicken transgenic research where we have the technology to introduce precise, targeted modifications into the chicken genome, ranging from; new transgenes that provide improved phenotypes such as increased resilience to economically important diseases; the targeted disruption of immunoglobulin genes and replacement with human sequences to generate transgenic chickens that express "humanised" antibodies for biopharming; and the deletion of specific nucleotides to generate targeted gene knockout chickens for functional genomics. The impact of these advances is set to be realised through applications in chickens, and other bird species as models in scientific research, for novel biotechnology and to protect and improve agricultural productivity. PMID:26820412
NASA Astrophysics Data System (ADS)
Oshri, Ilan; Kotlarsky, Julia
These days firms are, more than ever, pressed to demonstrate returns on their investment in outsourcing. While the initial returns can always be associated with one-off cost cutting, outsourcing arrangements are complex, often involving inter-related high-value activities, which makes the realisation of long-term benefits from outsourcing ever more challenging. Executives in client firms are no longer satisfied with the same level of service delivery through the outsourcing lifecycle. They seek to achieve business transformation and innovation in their present and future services, beyond satisfying service level agreements (SLAs). Clearly the business world is facing a new challenge: an outsourcing delivery system of high-value activities that demonstrates value over time and across business functions. However, despite such expectations, many client firms are in the dark when trying to measure and quantify the return on outsourcing investments: results of this research show that less than half of all CIOs and CFOs (43%) have attempted to calculate the financial impact of outsourcing to their bottom line, indicating that the financial benefits are difficult to quantify (51%).
Khayat, Olfa; Kilani, Afef; Chedly-Debbiche, Achraf; Zeddini, Abdelfattah; Gargouri, Dalila; Kharrat, Jamel; Souissi, Adnene; Ghorbel, Abdel Jabbar; Ben Ayed, Mohamed; Ben Khelifa, Habib
2006-06-01
It's a prospective study leaded between September 1997 and july 1999 (23 months ) in 75 patients with duodenal ulcer and positif for Helicobacter pylori. All patients had a first endoscopy with antral, fundic and duodenal biopsies, followed one month later by a second control fibroscopy with biopsies of the same sites. A total of 420 biopsies was realised. Chronic gastritis was evaluated according to sydney system. Patients was divided by randomisation in 4 groups. Every group was received a different therapeutic association. The results was conform to liberation concering activity 80%, intestinal metaplasia 12%. inflammation 100%. Atrophy was observed in 56% of cases, this percentage is variable in literature; chronic gastritis was predominant in antre relatively to fundus (p<0.005). After treatment, a significative fall of Helicobacter pylori and activity and atrophy was established, contrarity to intestinal metaplasia and chronic inflammation witch are persisted. The prevalence of follicular gastritis was 57%. The better rate of ulcer cicatrisation and Helicobacter pylori eradication was respectively of 79% and 66% in group 1 treated by omeprazol, amoxcillin, metronidazol by comparison with the others 3 groups (p<0.005). PMID:17042205
Fast-coding robust motion estimation model in a GPU
NASA Astrophysics Data System (ADS)
García, Carlos; Botella, Guillermo; de Sande, Francisco; Prieto-Matias, Manuel
2015-02-01
Nowadays vision systems are used with countless purposes. Moreover, the motion estimation is a discipline that allow to extract relevant information as pattern segmentation, 3D structure or tracking objects. However, the real-time requirements in most applications has limited its consolidation, considering the adoption of high performance systems to meet response times. With the emergence of so-called highly parallel devices known as accelerators this gap has narrowed. Two extreme endpoints in the spectrum of most common accelerators are Field Programmable Gate Array (FPGA) and Graphics Processing Systems (GPU), which usually offer higher performance rates than general propose processors. Moreover, the use of GPUs as accelerators involves the efficient exploitation of any parallelism in the target application. This task is not easy because performance rates are affected by many aspects that programmers should overcome. In this paper, we evaluate OpenACC standard, a programming model with directives which favors porting any code to a GPU in the context of motion estimation application. The results confirm that this programming paradigm is suitable for this image processing applications achieving a very satisfactory acceleration in convolution based problems as in the well-known Lucas & Kanade method.
Steingrimsson, Jon Arni; Diao, Liqun; Molinaro, Annette M; Strawderman, Robert L
2016-09-10
Estimating a patient's mortality risk is important in making treatment decisions. Survival trees are a useful tool and employ recursive partitioning to separate patients into different risk groups. Existing 'loss based' recursive partitioning procedures that would be used in the absence of censoring have previously been extended to the setting of right censored outcomes using inverse probability censoring weighted estimators of loss functions. In this paper, we propose new 'doubly robust' extensions of these loss estimators motivated by semiparametric efficiency theory for missing data that better utilize available data. Simulations and a data analysis demonstrate strong performance of the doubly robust survival trees compared with previously used methods. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27037609
NASA Astrophysics Data System (ADS)
Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim
2016-02-01
We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data and expert informed error estimation including uncertainties for both the solution itself and order of convergence. Our method produces high quality results for the well-behaved cases relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.
Robust Collaborative Recommendation
NASA Astrophysics Data System (ADS)
Burke, Robin; O'Mahony, Michael P.; Hurley, Neil J.
Collaborative recommender systems are vulnerable to malicious users who seek to bias their output, causing them to recommend (or not recommend) particular items. This problem has been an active research topic since 2002. Researchers have found that the most widely-studied memory-based algorithms have significant vulnerabilities to attacks that can be fairly easily mounted. This chapter discusses these findings and the responses that have been investigated, especially detection of attack profiles and the implementation of robust recommendation algorithms.
Robust impedance shaping telemanipulation
Colgate, J.E.
1993-08-01
When a human operator performs a task via a bilateral manipulator, the feel of the task is embodied in the mechanical impedance of the manipulator. Traditionally, a bilateral manipulator is designed for transparency; i.e., so that the impedance reflected through the manipulator closely approximates that of the task. Impedance shaping bilateral control, introduced here, differs in that it treats the bilateral manipulator as a means of constructively altering the impedance of a task. This concept is particularly valuable if the characteristic dimensions (e.g., force, length, time) of the task impedance are very different from those of the human limb. It is shown that a general form of impedance shaping control consists of a conventional power-scaling bilateral controller augmented with a real-time interactive task simulation (i.e., a virtual environment). An approach to impedance shaping based on kinematic similarity between tasks of different scale is introduced and illustrated with an example. It is shown that an important consideration in impedance shaping controller design is robustness; i.e., guaranteeing the stability of the operator/manipulator/task system. A general condition for the robustness of a bilateral manipulator is derived. This condition is based on the structured singular value ({mu}). An example of robust impedance shaping bilateral control is presented and discussed.
Robust quantitative scratch assay
Vargas, Andrea; Angeli, Marc; Pastrello, Chiara; McQuaid, Rosanne; Li, Han; Jurisicova, Andrea; Jurisica, Igor
2016-01-01
The wound healing assay (or scratch assay) is a technique frequently used to quantify the dependence of cell motility—a central process in tissue repair and evolution of disease—subject to various treatments conditions. However processing the resulting data is a laborious task due its high throughput and variability across images. This Robust Quantitative Scratch Assay algorithm introduced statistical outputs where migration rates are estimated, cellular behaviour is distinguished and outliers are identified among groups of unique experimental conditions. Furthermore, the RQSA decreased measurement errors and increased accuracy in the wound boundary at comparable processing times compared to previously developed method (TScratch). Availability and implementation: The RQSA is freely available at: http://ophid.utoronto.ca/RQSA/RQSA_Scripts.zip. The image sets used for training and validation and results are available at: (http://ophid.utoronto.ca/RQSA/trainingSet.zip, http://ophid.utoronto.ca/RQSA/validationSet.zip, http://ophid.utoronto.ca/RQSA/ValidationSetResults.zip, http://ophid.utoronto.ca/RQSA/ValidationSet_H1975.zip, http://ophid.utoronto.ca/RQSA/ValidationSet_H1975Results.zip, http://ophid.utoronto.ca/RQSA/RobustnessSet.zip, http://ophid.utoronto.ca/RQSA/RobustnessSet.zip). Supplementary Material is provided for detailed description of the development of the RQSA. Contact: juris@ai.utoronto.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26722119
Robustness of metabolic networks
NASA Astrophysics Data System (ADS)
Jeong, Hawoong
2009-03-01
We investigated the robustness of cellular metabolism by simulating the system-level computational models, and also performed the corresponding experiments to validate our predictions. We address the cellular robustness from the ``metabolite''-framework by using the novel concept of ``flux-sum,'' which is the sum of all incoming or outgoing fluxes (they are the same under the pseudo-steady state assumption). By estimating the changes of the flux-sum under various genetic and environmental perturbations, we were able to clearly decipher the metabolic robustness; the flux-sum around an essential metabolite does not change much under various perturbations. We also identified the list of the metabolites essential to cell survival, and then ``acclimator'' metabolites that can control the cell growth were discovered. Furthermore, this concept of ``metabolite essentiality'' should be useful in developing new metabolic engineering strategies for improved production of various bioproducts and designing new drugs that can fight against multi-antibiotic resistant superbacteria by knocking-down the enzyme activities around an essential metabolite. Finally, we combined a regulatory network with the metabolic network to investigate its effect on dynamic properties of cellular metabolism.
Robustness of Interdependent Networks
NASA Astrophysics Data System (ADS)
Havlin, Shlomo
2011-03-01
In interdependent networks, when nodes in one network fail, they cause dependent nodes in other networks to also fail. This may happen recursively and can lead to a cascade of failures. In fact, a failure of a very small fraction of nodes in one network may lead to the complete fragmentation of a system of many interdependent networks. We will present a framework for understanding the robustness of interacting networks subject to such cascading failures and provide a basic analytic approach that may be useful in future studies. We present exact analytical solutions for the critical fraction of nodes that upon removal will lead to a failure cascade and to a complete fragmentation of two interdependent networks in a first order transition. Surprisingly, analyzing complex systems as a set of interdependent networks may alter a basic assumption that network theory has relied on: while for a single network a broader degree distribution of the network nodes results in the network being more robust to random failures, for interdependent networks, the broader the distribution is, the more vulnerable the networks become to random failure. We also show that reducing the coupling between the networks leads to a change from a first order percolation phase transition to a second order percolation transition at a critical point. These findings pose a significant challenge to the future design of robust networks that need to consider the unique properties of interdependent networks.
Van Dyke, W.J.
1992-04-07
A fast valve is disclosed that can close on the order of 7 milliseconds. It is closed by the force of a compressed air spring with the moving parts of the valve designed to be of very light weight and the valve gate being of wedge shaped with O-ring sealed faces to provide sealing contact without metal to metal contact. The combination of the O-ring seal and an air cushion create a soft final movement of the valve closure to prevent the fast air acting valve from having a harsh closing. 4 figs.
Van Dyke, William J.
1992-01-01
A fast valve is disclosed that can close on the order of 7 milliseconds. It is closed by the force of a compressed air spring with the moving parts of the valve designed to be of very light weight and the valve gate being of wedge shaped with O-ring sealed faces to provide sealing contact without metal to metal contact. The combination of the O-ring seal and an air cushion create a soft final movement of the valve closure to prevent the fast air acting valve from having a harsh closing.
Bayer, T.; Wollenhaupt, M.; Sarpe-Tudoran, C.; Baumert, T.
2009-01-16
We experimentally demonstrate a strong-field coherent control mechanism that combines the advantages of photon locking (PL) and rapid adiabatic passage (RAP). Unlike earlier implementations of PL and RAP by pulse sequences or chirped pulses, we use shaped pulses generated by phase modulation of the spectrum of a femtosecond laser pulse with a generalized phase discontinuity. The novel control scenario is characterized by a high degree of robustness achieved via adiabatic preparation of a state of maximum coherence. Subsequent phase control allows for efficient switching among different target states. We investigate both properties by photoelectron spectroscopy on potassium atoms interacting with the intense shaped light field.
Carlson, J. M.; Doyle, John
2002-01-01
Highly optimized tolerance (HOT) was recently introduced as a conceptual framework to study fundamental aspects of complexity. HOT is motivated primarily by systems from biology and engineering and emphasizes, (i) highly structured, nongeneric, self-dissimilar internal configurations, and (ii) robust yet fragile external behavior. HOT claims these are the most important features of complexity and not accidents of evolution or artifices of engineering design but are inevitably intertwined and mutually reinforcing. In the spirit of this collection, our paper contrasts HOT with alternative perspectives on complexity, drawing on real-world examples and also model systems, particularly those from self-organized criticality. PMID:11875207
Ballance, Robert A.
2003-01-01
The Robust Systems Test Framework (RSTF) provides a means of specifying and running test programs on various computation platforms. RSTF provides a level of specification above standard scripting languages. During a set of runs, standard timing information is collected. The RSTF specification can also gather job-specific information, and can include ways to classify test outcomes. All results and scripts can be stored into and retrieved from an SQL database for later data analysis. RSTF also provides operations for managing the script and result files, and for compiling applications and gathering compilation information such as optimization flags.
NASA Astrophysics Data System (ADS)
Tulsi, Avatar
2016-07-01
Quantum spatial search has been widely studied with most of the study focusing on quantum walk algorithms. We show that quantum walk algorithms are extremely sensitive to systematic errors. We present a recursive algorithm which offers significant robustness to certain systematic errors. To search N items, our recursive algorithm can tolerate errors of size O(1{/}√{ln N}) which is exponentially better than quantum walk algorithms for which tolerable error size is only O(ln N{/}√{N}). Also, our algorithm does not need any ancilla qubit. Thus our algorithm is much easier to implement experimentally compared to quantum walk algorithms.
NASA Astrophysics Data System (ADS)
Tulsi, Avatar
2016-04-01
Quantum spatial search has been widely studied with most of the study focusing on quantum walk algorithms. We show that quantum walk algorithms are extremely sensitive to systematic errors. We present a recursive algorithm which offers significant robustness to certain systematic errors. To search N items, our recursive algorithm can tolerate errors of size O(1{/}√{N}) which is exponentially better than quantum walk algorithms for which tolerable error size is only O(ln N{/}√{N}) . Also, our algorithm does not need any ancilla qubit. Thus our algorithm is much easier to implement experimentally compared to quantum walk algorithms.
Energy Science and Technology Software Center (ESTSC)
2003-01-01
The Robust Systems Test Framework (RSTF) provides a means of specifying and running test programs on various computation platforms. RSTF provides a level of specification above standard scripting languages. During a set of runs, standard timing information is collected. The RSTF specification can also gather job-specific information, and can include ways to classify test outcomes. All results and scripts can be stored into and retrieved from an SQL database for later data analysis. RSTF alsomore » provides operations for managing the script and result files, and for compiling applications and gathering compilation information such as optimization flags.« less
Robust Kriged Kalman Filtering
Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo; Giannakis, Georgios B.
2015-11-11
Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events, or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator that jointly predicts the spatial-temporal process at unmonitored locations, while identifying measurement outliers is put forth. Numerical tests are conducted on a synthetic Internet protocol (IP) network, and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.
Robust control for uncertain structures
NASA Technical Reports Server (NTRS)
Douglas, Joel; Athans, Michael
1991-01-01
Viewgraphs on robust control for uncertain structures are presented. Topics covered include: robust linear quadratic regulator (RLQR) formulas; mismatched LQR design; RLQR design; interpretations of RLQR design; disturbance rejection; and performance comparisons: RLQR vs. mismatched LQR.
Robustness and modeling error characterization
NASA Technical Reports Server (NTRS)
Lehtomaki, N. A.; Castanon, D. A.; Sandell, N. R., Jr.; Levy, B. C.; Athans, M.; Stein, G.
1984-01-01
The results on robustness theory presented here are extensions of those given in Lehtomaki et al., (1981). The basic innovation in these new results is that they utilize minimal additional information about the structure of the modeling error, as well as its magnitude, to assess the robustness of feedback systems for which robustness tests based on the magnitude of modeling error alone are inconclusive.
ERIC Educational Resources Information Center
Essexville-Hampton Public Schools, MI.
Described are components of Project FAST (Functional Analysis Systems Training) a nationally validated project to provide more effective educational and support services to learning disordered children and their regular elementary classroom teachers. The program is seen to be based on a series of modules of delivery systems ranging from mainstream…
Sniedovich, Moshe
2012-10-01
One would have expected the considerable public debate created by Nassim Taleb's two best selling books on uncertainty, Fooled by Randomness and The Black Swan, to inspire greater caution to the fundamental difficulties posed by severe uncertainty. Yet, methodologies exhibiting an incautious approach to uncertainty have been proposed recently in a range of publications. So, the objective of this short note is to call attention to a prime example of an incautious approach to severe uncertainty that is manifested in the proposition to use the concept radius of stability as a measure of robustness against severe uncertainty. The central proposition of this approach, which is exemplified in info-gap decision theory, is this: use a simple radius of stability model to analyze and manage a severe uncertainty that is characterized by a vast uncertainty space, a poor point estimate, and a likelihood-free quantification of uncertainty. This short discussion serves then as a reminder that the generic radius of stability model is a model of local robustness. It is, therefore, utterly unsuitable for the treatment of severe uncertainty when the latter is characterized by a poor estimate of the parameter of interest, a vast uncertainty space, and a likelihood-free quantification of uncertainty. PMID:22384828
Robustness in multicellular systems
NASA Astrophysics Data System (ADS)
Xavier, Joao
2011-03-01
Cells and organisms cope with the task of maintaining their phenotypes in the face of numerous challenges. Much attention has recently been paid to questions of how cells control molecular processes to ensure robustness. However, many biological functions are multicellular and depend on interactions, both physical and chemical, between cells. We use a combination of mathematical modeling and molecular biology experiments to investigate the features that convey robustness to multicellular systems. Cell populations must react to external perturbations by sensing environmental cues and acting coordinately in response. At the same time, they face a major challenge: the emergence of conflict from within. Multicellular traits are prone to cells with exploitative phenotypes that do not contribute to shared resources yet benefit from them. This is true in populations of single-cell organisms that have social lifestyles, where conflict can lead to the emergence of social ``cheaters,'' as well as in multicellular organisms, where conflict can lead to the evolution of cancer. I will describe features that diverse multicellular systems can have to eliminate potential conflicts as well as external perturbations.
Tuteja, Anish; Choi, Wonjae; Mabry, Joseph M.; McKinley, Gareth H.; Cohen, Robert E.
2008-01-01
Superhydrophobic surfaces display water contact angles greater than 150° in conjunction with low contact angle hysteresis. Microscopic pockets of air trapped beneath the water droplets placed on these surfaces lead to a composite solid-liquid-air interface in thermodynamic equilibrium. Previous experimental and theoretical studies suggest that it may not be possible to form similar fully-equilibrated, composite interfaces with drops of liquids, such as alkanes or alcohols, that possess significantly lower surface tension than water (γlv = 72.1 mN/m). In this work we develop surfaces possessing re-entrant texture that can support strongly metastable composite solid-liquid-air interfaces, even with very low surface tension liquids such as pentane (γlv = 15.7 mN/m). Furthermore, we propose four design parameters that predict the measured contact angles for a liquid droplet on a textured surface, as well as the robustness of the composite interface, based on the properties of the solid surface and the contacting liquid. These design parameters allow us to produce two different families of re-entrant surfaces— randomly-deposited electrospun fiber mats and precisely fabricated microhoodoo surfaces—that can each support a robust composite interface with essentially any liquid. These omniphobic surfaces display contact angles greater than 150° and low contact angle hysteresis with both polar and nonpolar liquids possessing a wide range of surface tensions. PMID:19001270
Robust reflective pupil slicing technology
NASA Astrophysics Data System (ADS)
Meade, Jeffrey T.; Behr, Bradford B.; Cenko, Andrew T.; Hajian, Arsen R.
2014-07-01
Tornado Spectral Systems (TSS) has developed the High Throughput Virtual Slit (HTVSTM), robust all-reflective pupil slicing technology capable of replacing the slit in research-, commercial- and MIL-SPEC-grade spectrometer systems. In the simplest configuration, the HTVS allows optical designers to remove the lossy slit from pointsource spectrometers and widen the input slit of long-slit spectrometers, greatly increasing throughput without loss of spectral resolution or cross-dispersion information. The HTVS works by transferring etendue between image plane axes but operating in the pupil domain rather than at a focal plane. While useful for other technologies, this is especially relevant for spectroscopic applications by performing the same spectral narrowing as a slit without throwing away light on the slit aperture. HTVS can be implemented in all-reflective designs and only requires a small number of reflections for significant spectral resolution enhancement-HTVS systems can be efficiently implemented in most wavelength regions. The etendueshifting operation also provides smooth scaling with input spot/image size without requiring reconfiguration for different targets (such as different seeing disk diameters or different fiber core sizes). Like most slicing technologies, HTVS provides throughput increases of several times without resolution loss over equivalent slitbased designs. HTVS technology enables robust slit replacement in point-source spectrometer systems. By virtue of pupilspace operation this technology has several advantages over comparable image-space slicer technology, including the ability to adapt gracefully and linearly to changing source size and better vertical packing of the flux distribution. Additionally, this technology can be implemented with large slicing factors in both fast and slow beams and can easily scale from large, room-sized spectrometers through to small, telescope-mounted devices. Finally, this same technology is directly
Evolving Robust Gene Regulatory Networks
Noman, Nasimul; Monjo, Taku; Moscato, Pablo; Iba, Hitoshi
2015-01-01
Design and implementation of robust network modules is essential for construction of complex biological systems through hierarchical assembly of ‘parts’ and ‘devices’. The robustness of gene regulatory networks (GRNs) is ascribed chiefly to the underlying topology. The automatic designing capability of GRN topology that can exhibit robust behavior can dramatically change the current practice in synthetic biology. A recent study shows that Darwinian evolution can gradually develop higher topological robustness. Subsequently, this work presents an evolutionary algorithm that simulates natural evolution in silico, for identifying network topologies that are robust to perturbations. We present a Monte Carlo based method for quantifying topological robustness and designed a fitness approximation approach for efficient calculation of topological robustness which is computationally very intensive. The proposed framework was verified using two classic GRN behaviors: oscillation and bistability, although the framework is generalized for evolving other types of responses. The algorithm identified robust GRN architectures which were verified using different analysis and comparison. Analysis of the results also shed light on the relationship among robustness, cooperativity and complexity. This study also shows that nature has already evolved very robust architectures for its crucial systems; hence simulation of this natural process can be very valuable for designing robust biological systems. PMID:25616055
Robust springback compensation
NASA Astrophysics Data System (ADS)
Carleer, Bart; Grimm, Peter
2013-12-01
Springback simulation and springback compensation are more and more applied in productive use of die engineering. In order to successfully compensate a tool accurate springback results are needed as well as an effective compensation approach. In this paper a methodology has been introduce in order to effectively compensate tools. First step is the full process simulation meaning that not only the drawing operation will be simulated but also all secondary operations like trimming and flanging. Second will be the verification whether the process is robust meaning that it obtains repeatable results. In order to effectively compensate a minimum clamping concept will be defined. Once these preconditions are fulfilled the tools can be compensated effectively.
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.
1995-01-01
The potential for a revolutionary step in the durability of reusable rocket engines is made possible by the combination of several emerging technologies. The recent creation and analytical demonstration of life extending (or damage mitigating) control technology enables rapid rocket engine transients with minimum fatigue and creep damage. This technology has been further enhanced by the formulation of very simple but conservative continuum damage models. These new ideas when combined with recent advances in multidisciplinary optimization provide the potential for a large (revolutionary) step in reusable rocket engine durability. This concept has been named the robust rocket engine concept (RREC) and is the basic contribution of this paper. The concept also includes consideration of design innovations to minimize critical point damage.
Robust automated knowledge capture.
Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt
2011-10-01
This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high consequent national security applications, and in particular, the individual characteristics that underlie adaptive thinking.
Robustness in Digital Hardware
NASA Astrophysics Data System (ADS)
Woods, Roger; Lightbody, Gaye
The growth in electronics has probably been the equivalent of the Industrial Revolution in the past century in terms of how much it has transformed our daily lives. There is a great dependency on technology whether it is in the devices that control travel (e.g., in aircraft or cars), our entertainment and communication systems, or our interaction with money, which has been empowered by the onset of Internet shopping and banking. Despite this reliance, there is still a danger that at some stage devices will fail within the equipment's lifetime. The purpose of this chapter is to look at the factors causing failure and address possible measures to improve robustness in digital hardware technology and specifically chip technology, giving a long-term forecast that will not reassure the reader!
Robust snapshot interferometric spectropolarimetry.
Kim, Daesuk; Seo, Yoonho; Yoon, Yonghee; Dembele, Vamara; Yoon, Jae Woong; Lee, Kyu Jin; Magnusson, Robert
2016-05-15
This Letter describes a Stokes vector measurement method based on a snapshot interferometric common-path spectropolarimeter. The proposed scheme, which employs an interferometric polarization-modulation module, can extract the spectral polarimetric parameters Ψ(k) and Δ(k) of a transmissive anisotropic object by which an accurate Stokes vector can be calculated in the spectral domain. It is inherently strongly robust to the object 3D pose variation, since it is designed distinctly so that the measured object can be placed outside of the interferometric module. Experiments are conducted to verify the feasibility of the proposed system. The proposed snapshot scheme enables us to extract the spectral Stokes vector of a transmissive anisotropic object within tens of msec with high accuracy. PMID:27176992
Chen, Li; Shen, Cencheng; Vogelstein, Joshua T; Priebe, Carey E
2016-03-01
For random graphs distributed according to stochastic blockmodels, a special case of latent position graphs, adjacency spectral embedding followed by appropriate vertex classification is asymptotically Bayes optimal; but this approach requires knowledge of and critically depends on the model dimension. In this paper, we propose a sparse representation vertex classifier which does not require information about the model dimension. This classifier represents a test vertex as a sparse combination of the vertices in the training set and uses the recovered coefficients to classify the test vertex. We prove consistency of our proposed classifier for stochastic blockmodels, and demonstrate that the sparse representation classifier can predict vertex labels with higher accuracy than adjacency spectral embedding approaches via both simulation studies and real data experiments. Our results demonstrate the robustness and effectiveness of our proposed vertex classifier when the model dimension is unknown. PMID:26340770
High-performance quantitative robust switching control for optical telescopes
NASA Astrophysics Data System (ADS)
Lounsbury, William P.; Garcia-Sanz, Mario
2014-07-01
This paper introduces an innovative robust and nonlinear control design methodology for high-performance servosystems in optical telescopes. The dynamics of optical telescopes typically vary according to azimuth and altitude angles, temperature, friction, speed and acceleration, leading to nonlinearities and plant parameter uncertainty. The methodology proposed in this paper combines robust Quantitative Feedback Theory (QFT) techniques with nonlinear switching strategies that achieve simultaneously the best characteristics of a set of very active (fast) robust QFT controllers and very stable (slow) robust QFT controllers. A general dynamic model and a variety of specifications from several different commercially available amateur Newtonian telescopes are used for the controller design as well as the simulation and validation. It is also proven that the nonlinear/switching controller is stable for any switching strategy and switching velocity, according to described frequency conditions based on common quadratic Lyapunov functions (CQLF) and the circle criterion.
NASA Astrophysics Data System (ADS)
Yang, Qianli; Pitkow, Xaq
2015-03-01
Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP was supported by a grant from the McNair foundation.
Healy, M J F
2015-09-01
The quest for new sensing phenomena continues because detecting, discriminating, identifying, measuring and monitoring nuclear materials and their radiation from greater range, at lower concentrations, and in a more timely fashion brings greater safety, security and efficiency. The potential phenomena are diverse, and those that have been realised can be found in disparate fields of science, engineering and medicine, which makes the full range difficult to realise and record. The framework presented here offers a means to systematically and comprehensively explore nuclear sensing phenomena. The approach is based on the fundamental concepts of matter and energy, where the sequence starts with the original nuclear material and its emissions, and progressively considers signatures arising from secondary effects and the emissions from associated materials and the environment. Concepts of operations such as active and passive interrogation, and networked sensing are considered. In this operational light, unpacking nuclear signatures forces a fresh look at the sensing concept. It also exposes how some phenomena that exist in established technology may be considered novel based on how they could be exploited rather than what they fundamentally are. This article selects phenomena purely to illustrate the framework and how it can be best used to foster creativity in the quest for novel phenomena rather than exhaustively listing, categorising or comparing any practical aspects of candidate phenomena. PMID:26270745
Benders, Titia
2013-12-01
Exaggeration of the vowel space in infant-directed speech (IDS) is well documented for English, but not consistently replicated in other languages or for other speech-sound contrasts. A second attested, but less discussed, pattern of change in IDS is an overall rise of the formant frequencies, which may reflect an affective speaking style. The present study investigates longitudinally how Dutch mothers change their corner vowels, voiceless fricatives, and pitch when speaking to their infant at 11 and 15 months of age. In comparison to adult-directed speech (ADS), Dutch IDS has a smaller vowel space, higher second and third formant frequencies in the vowels, and a higher spectral frequency in the fricatives. The formants of the vowels and spectral frequency of the fricatives are raised more strongly for infants at 11 than at 15 months, while the pitch is more extreme in IDS to 15-month olds. These results show that enhanced positive affect is the main factor influencing Dutch mothers' realisation of speech sounds in IDS, especially to younger infants. This study provides evidence that mothers' expression of emotion in IDS can influence the realisation of speech sounds, and that the loss or gain of speech clarity may be secondary effects of affect. PMID:24239878
Robust Understanding of Statistical Variation
ERIC Educational Resources Information Center
Peters, Susan A.
2011-01-01
This paper presents a framework that captures the complexity of reasoning about variation in ways that are indicative of robust understanding and describes reasoning as a blend of design, data-centric, and modeling perspectives. Robust understanding is indicated by integrated reasoning about variation within each perspective and across…
Robust, Optimal Subsonic Airfoil Shapes
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2014-01-01
A method has been developed to create an airfoil robust enough to operate satisfactorily in different environments. This method determines a robust, optimal, subsonic airfoil shape, beginning with an arbitrary initial airfoil shape, and imposes the necessary constraints on the design. Also, this method is flexible and extendible to a larger class of requirements and changes in constraints imposed.
Inherent robustness of discrete-time adaptive control systems
NASA Technical Reports Server (NTRS)
Ma, C. C. H.
1986-01-01
Global stability robustness with respect to unmodeled dynamics, arbitrary bounded internal noise, as well as external disturbance is shown to exist for a class of discrete-time adaptive control systems when the regressor vectors of these systems are persistently exciting. Although fast adaptation is definitely undesirable, so far as attaining the greatest amount of global stability robustness is concerned, slow adaptation is shown to be not necessarily beneficial. The entire analysis in this paper holds for systems with slowly varying return difference matrices; the plants in these systems need not be slowly varying.
Robust levitation control for maglev systems with guaranteed bounded airgap.
Xu, Jinquan; Chen, Ye-Hwa; Guo, Hong
2015-11-01
The robust control design problem for the levitation control of a nonlinear uncertain maglev system is considered. The uncertainty is (possibly) fast time-varying. The system has magnitude limitation on the airgap between the suspended chassis and the guideway in order to prevent undesirable contact. Furthermore, the (global) matching condition is not satisfied. After a three-step state transformation, a robust control scheme for the maglev vehicle is proposed, which is able to guarantee the uniform boundedness and uniform ultimate boundedness of the system, regardless of the uncertainty. The magnitude limitation of the airgap is guaranteed, regardless of the uncertainty. PMID:26524957
NASA Technical Reports Server (NTRS)
Westall, F.; Steele, A.; Toporski, J.; Walsh, M. M.; Allen, C. C.; Guidry, S.; McKay, D. S.; Gibson, E. K.; Chafetz, H. S.
2000-01-01
containing fossil biofilm, including the 3.5 b.y..-old carbonaceous cherts from South Africa and Australia. As a result of the unique compositional, structural and "mineralisable" properties of bacterial polymer and biofilms, we conclude that bacterial polymers and biofilms constitute a robust and reliable biomarker for life on Earth and could be a potential biomarker for extraterrestrial life.
RSRE: RNA structural robustness evaluator.
Shu, Wenjie; Bo, Xiaochen; Zheng, Zhiqiang; Wang, Shengqi
2007-07-01
Biological robustness, defined as the ability to maintain stable functioning in the face of various perturbations, is an important and fundamental topic in current biology, and has become a focus of numerous studies in recent years. Although structural robustness has been explored in several types of RNA molecules, the origins of robustness are still controversial. Computational analysis results are needed to make up for the lack of evidence of robustness in natural biological systems. The RNA structural robustness evaluator (RSRE) web server presented here provides a freely available online tool to quantitatively evaluate the structural robustness of RNA based on the widely accepted definition of neutrality. Several classical structure comparison methods are employed; five randomization methods are implemented to generate control sequences; sub-optimal predicted structures can be optionally utilized to mitigate the uncertainty of secondary structure prediction. With a user-friendly interface, the web application is easy to use. Intuitive illustrations are provided along with the original computational results to facilitate analysis. The RSRE will be helpful in the wide exploration of RNA structural robustness and will catalyze our understanding of RNA evolution. The RSRE web server is freely available at http://biosrv1.bmi.ac.cn/RSRE/ or http://biotech.bmi.ac.cn/RSRE/. PMID:17567615
Robustness of airline route networks
NASA Astrophysics Data System (ADS)
Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David
2016-03-01
Airlines shape their route network by defining their routes through supply and demand considerations, paying little attention to network performance indicators, such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and all its geographical area of influence. The aim of this study is to analyze the topology and robustness of the network route of airlines following Low Cost Carriers (LCCs) and Full Service Carriers (FSCs) business models. Results show that FSC hubs are more central than LCC bases in their route network. As a result, LCC route networks are more robust than FSC networks.
Pervasive robustness in biological systems.
Félix, Marie-Anne; Barkoulas, Michalis
2015-08-01
Robustness is characterized by the invariant expression of a phenotype in the face of a genetic and/or environmental perturbation. Although phenotypic variance is a central measure in the mapping of the genotype and environment to the phenotype in quantitative evolutionary genetics, robustness is also a key feature in systems biology, resulting from nonlinearities in quantitative relationships between upstream and downstream components. In this Review, we provide a synthesis of these two lines of investigation, converging on understanding how variation propagates across biological systems. We critically assess the recent proliferation of studies identifying robustness-conferring genes in the context of the nonlinearity in biological systems. PMID:26184598
Robust mobility in human-populated environments
NASA Astrophysics Data System (ADS)
Gonzalez, Juan Pablo; Phillips, Mike; Neuman, Brad; Likhachev, Max
2012-06-01
Creating robots that can help humans in a variety of tasks requires robust mobility and the ability to safely navigate among moving obstacles. This paper presents an overview of recent research in the Robotics Collaborative Technology Alliance (RCTA) that addresses many of the core requirements for robust mobility in human-populated environments. Safe Interval Path Planning (SIPP) allows for very fast planning in dynamic environments when planning timeminimal trajectories. Generalized Safe Interval Path Planning extends this concept to trajectories that minimize arbitrary cost functions. Finally, generalized PPCP algorithm is used to generate plans that reason about the uncertainty in the predicted trajectories of moving obstacles and try to actively disambiguate the intentions of humans whenever necessary. We show how these approaches consider moving obstacles and temporal constraints and produce high-fidelity paths. Experiments in simulated environments show the performance of the algorithms under different controlled conditions, and experiments on physical mobile robots interacting with humans show how the algorithms perform under the uncertainties of the real world.
Heo, Gaeun; Pyo, Kyoung-Hee; Lee, Da Hee; Kim, Youngmin; Kim, Jong-Woong
2016-01-01
This paper presents the successful fabrication of a transparent electrode comprising a sandwich structure of silicone/Ag nanowires (AgNWs)/silicone equipped with Diels-Alder (DA) adducts as crosslinkers to realise highly stable stretchability. Because of the reversible DA reaction, the crosslinked silicone successfully bonds with the silicone overcoat, which should completely seal the electrode. Thus, any surrounding liquid cannot leak through the interfaces among the constituents. Furthermore, the nanowires are protected by the silicone cover when they are stressed by mechanical loads such as bending, folding, and stretching. After delicate optimisation of the layered silicone/AgNW/silicone sandwich structure, a stretchable transparent electrode which can withstand 1000 cycles of 50% stretching-releasing with an exceptionally high stability and reversibility was fabricated. This structure can be used as a transparent strain sensor; it possesses a strong piezoresistivity with a gauge factor greater than 11. PMID:27140436
Heo, Gaeun; Pyo, Kyoung-hee; Lee, Da Hee; Kim, Youngmin; Kim, Jong-Woong
2016-01-01
This paper presents the successful fabrication of a transparent electrode comprising a sandwich structure of silicone/Ag nanowires (AgNWs)/silicone equipped with Diels–Alder (DA) adducts as crosslinkers to realise highly stable stretchability. Because of the reversible DA reaction, the crosslinked silicone successfully bonds with the silicone overcoat, which should completely seal the electrode. Thus, any surrounding liquid cannot leak through the interfaces among the constituents. Furthermore, the nanowires are protected by the silicone cover when they are stressed by mechanical loads such as bending, folding, and stretching. After delicate optimisation of the layered silicone/AgNW/silicone sandwich structure, a stretchable transparent electrode which can withstand 1000 cycles of 50% stretching–releasing with an exceptionally high stability and reversibility was fabricated. This structure can be used as a transparent strain sensor; it possesses a strong piezoresistivity with a gauge factor greater than 11. PMID:27140436
A grating-less in-fibre magnetometer realised in a polymer-MOF infiltrated using ferrofluid
NASA Astrophysics Data System (ADS)
Candiani, A.; Argyros, A.; Lwin, R.; Leon-Saval, S. G.; Zito, G.; Selleri, S.; Pissadakis, S.
2012-04-01
We report a grating-less, in-fibre magnetometer realised in a polymethylmethacrylate (PMMA) microstructured optical fibre that has been infiltrated using a hydrocarbon oil based ferrofluid. The lossy magnetic fluid has been infiltrated by capillarity action into the microcapillaries of the fiber cladding, resulting in a generation of a short cut-off band located in the vicinity of 600nm. When the magnetic field is applied perpendicular to the fiber axis, the ferrofluid undergoes refractive index and scattering loss changes, modulating the transmission properties of the infiltrated microstructured fibre. Spectral measurements of the transmitted signal are reported for magnetic field changes up to 1300Gauss, revealing a strong decrease of the signal near its bandgap edge proportionally with the increase of the magnetic field. Instead, when the magnetic field is applied with respect to the rotational symmetry the fibre axis, the sensor exhibits high polarisation sensitivity for a specific wavelength band, providing the possibility of directional measurements.
NASA Astrophysics Data System (ADS)
Heo, Gaeun; Pyo, Kyoung-Hee; Lee, Da Hee; Kim, Youngmin; Kim, Jong-Woong
2016-05-01
This paper presents the successful fabrication of a transparent electrode comprising a sandwich structure of silicone/Ag nanowires (AgNWs)/silicone equipped with Diels–Alder (DA) adducts as crosslinkers to realise highly stable stretchability. Because of the reversible DA reaction, the crosslinked silicone successfully bonds with the silicone overcoat, which should completely seal the electrode. Thus, any surrounding liquid cannot leak through the interfaces among the constituents. Furthermore, the nanowires are protected by the silicone cover when they are stressed by mechanical loads such as bending, folding, and stretching. After delicate optimisation of the layered silicone/AgNW/silicone sandwich structure, a stretchable transparent electrode which can withstand 1000 cycles of 50% stretching–releasing with an exceptionally high stability and reversibility was fabricated. This structure can be used as a transparent strain sensor; it possesses a strong piezoresistivity with a gauge factor greater than 11.
Use of a genetic algorithm to analyze robust stability problems
Murdock, T.M.; Schmitendorf, W.E.; Forrest, S.
1990-01-01
This note resents a genetic algorithm technique for testing the stability of a characteristic polynomial whose coefficients are functions of unknown but bounded parameters. This technique is fast and can handle a large number of parametric uncertainties. We also use this method to determine robust stability margins for uncertain polynomials. Several benchmark examples are included to illustrate the two uses of the algorithm. 27 refs., 4 figs.
Robust Optimization of Biological Protocols
Flaherty, Patrick; Davis, Ronald W.
2015-01-01
When conducting high-throughput biological experiments, it is often necessary to develop a protocol that is both inexpensive and robust. Standard approaches are either not cost-effective or arrive at an optimized protocol that is sensitive to experimental variations. We show here a novel approach that directly minimizes the cost of the protocol while ensuring the protocol is robust to experimental variation. Our approach uses a risk-averse conditional value-at-risk criterion in a robust parameter design framework. We demonstrate this approach on a polymerase chain reaction protocol and show that our improved protocol is less expensive than the standard protocol and more robust than a protocol optimized without consideration of experimental variation. PMID:26417115
Dosimetry robustness with stochastic optimization
NASA Astrophysics Data System (ADS)
Nohadani, Omid; Seco, Joao; Martin, Benjamin C.; Bortfeld, Thomas
2009-06-01
All radiation therapy treatment planning relies on accurate dose calculation. Uncertainties in dosimetric prediction can significantly degrade an otherwise optimal plan. In this work, we introduce a robust optimization method which handles dosimetric errors and warrants for high-quality IMRT plans. Unlike other dose error estimations, we do not rely on the detailed knowledge about the sources of the uncertainty and use a generic error model based on random perturbation. This generality is sought in order to cope with a large variety of error sources. We demonstrate the method on a clinical case of lung cancer and show that our method provides plans that are more robust against dosimetric errors and are clinically acceptable. In fact, the robust plan exhibits a two-fold improved equivalent uniform dose compared to the non-robust but optimized plan. The achieved speedup will allow computationally extensive multi-criteria or beam-angle optimization approaches to warrant for dosimetrically relevant plans.
Robust controls with structured perturbations
NASA Technical Reports Server (NTRS)
Keel, Leehyun
1993-01-01
This final report summarizes the recent results obtained by the principal investigator and his coworkers on the robust stability and control of systems containing parametric uncertainty. The starting point is a generalization of Kharitonov's theorem obtained in 1989, and its generalization to the multilinear case, the singling out of extremal stability subsets, and other ramifications now constitutes an extensive and coherent theory of robust parametric stability that is summarized in the results contained here.
A robust chaotic algorithm for digital image steganography
NASA Astrophysics Data System (ADS)
Ghebleh, M.; Kanso, A.
2014-06-01
This paper proposes a new robust chaotic algorithm for digital image steganography based on a 3-dimensional chaotic cat map and lifted discrete wavelet transforms. The irregular outputs of the cat map are used to embed a secret message in a digital cover image. Discrete wavelet transforms are used to provide robustness. Sweldens' lifting scheme is applied to ensure integer-to-integer transforms, thus improving the robustness of the algorithm. The suggested scheme is fast, efficient and flexible. Empirical results are presented to showcase the satisfactory performance of our proposed steganographic scheme in terms of its effectiveness (imperceptibility and security) and feasibility. Comparison with some existing transform domain steganographic schemes is also presented.
Robustness Elasticity in Complex Networks
Matisziw, Timothy C.; Grubesic, Tony H.; Guo, Junyu
2012-01-01
Network robustness refers to a network’s resilience to stress or damage. Given that most networks are inherently dynamic, with changing topology, loads, and operational states, their robustness is also likely subject to change. However, in most analyses of network structure, it is assumed that interaction among nodes has no effect on robustness. To investigate the hypothesis that network robustness is not sensitive or elastic to the level of interaction (or flow) among network nodes, this paper explores the impacts of network disruption, namely arc deletion, over a temporal sequence of observed nodal interactions for a large Internet backbone system. In particular, a mathematical programming approach is used to identify exact bounds on robustness to arc deletion for each epoch of nodal interaction. Elasticity of the identified bounds relative to the magnitude of arc deletion is assessed. Results indicate that system robustness can be highly elastic to spatial and temporal variations in nodal interactions within complex systems. Further, the presence of this elasticity provides evidence that a failure to account for nodal interaction can confound characterizations of complex networked systems. PMID:22808060
NASA Astrophysics Data System (ADS)
Kwakkel, Jan; Haasnoot, Marjolijn
2015-04-01
In response to climate and socio-economic change, in various policy domains there is increasingly a call for robust plans or policies. That is, plans or policies that performs well in a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics on decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, and accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set of options includes dike raising, dike strengthening, creating more space for the river, and flood proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan will be based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014). We solve the multi-objective robust optimization problem using several alternative robustness metrics, including both satisficing robustness metrics and regret based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret based robustness metrics compare the
FAST: FAST Analysis of Sequences Toolbox.
Lawrence, Travis J; Kauffman, Kyle T; Amrine, Katherine C H; Carper, Dana L; Lee, Raymond S; Becich, Peter J; Canales, Claudia J; Ardell, David H
2015-01-01
FAST (FAST Analysis of Sequences Toolbox) provides simple, powerful open source command-line tools to filter, transform, annotate and analyze biological sequence data. Modeled after the GNU (GNU's Not Unix) Textutils such as grep, cut, and tr, FAST tools such as fasgrep, fascut, and fastr make it easy to rapidly prototype expressive bioinformatic workflows in a compact and generic command vocabulary. Compact combinatorial encoding of data workflows with FAST commands can simplify the documentation and reproducibility of bioinformatic protocols, supporting better transparency in biological data science. Interface self-consistency and conformity with conventions of GNU, Matlab, Perl, BioPerl, R, and GenBank help make FAST easy and rewarding to learn. FAST automates numerical, taxonomic, and text-based sorting, selection and transformation of sequence records and alignment sites based on content, index ranges, descriptive tags, annotated features, and in-line calculated analytics, including composition and codon usage. Automated content- and feature-based extraction of sites and support for molecular population genetic statistics make FAST useful for molecular evolutionary analysis. FAST is portable, easy to install and secure thanks to the relative maturity of its Perl and BioPerl foundations, with stable releases posted to CPAN. Development as well as a publicly accessible Cookbook and Wiki are available on the FAST GitHub repository at https://github.com/tlawrence3/FAST. The default data exchange format in FAST is Multi-FastA (specifically, a restriction of BioPerl FastA format). Sanger and Illumina 1.8+ FastQ formatted files are also supported. FAST makes it easier for non-programmer biologists to interactively investigate and control biological data at the speed of thought. PMID:26042145
FAST: FAST Analysis of Sequences Toolbox
Lawrence, Travis J.; Kauffman, Kyle T.; Amrine, Katherine C. H.; Carper, Dana L.; Lee, Raymond S.; Becich, Peter J.; Canales, Claudia J.; Ardell, David H.
2015-01-01
FAST (FAST Analysis of Sequences Toolbox) provides simple, powerful open source command-line tools to filter, transform, annotate and analyze biological sequence data. Modeled after the GNU (GNU's Not Unix) Textutils such as grep, cut, and tr, FAST tools such as fasgrep, fascut, and fastr make it easy to rapidly prototype expressive bioinformatic workflows in a compact and generic command vocabulary. Compact combinatorial encoding of data workflows with FAST commands can simplify the documentation and reproducibility of bioinformatic protocols, supporting better transparency in biological data science. Interface self-consistency and conformity with conventions of GNU, Matlab, Perl, BioPerl, R, and GenBank help make FAST easy and rewarding to learn. FAST automates numerical, taxonomic, and text-based sorting, selection and transformation of sequence records and alignment sites based on content, index ranges, descriptive tags, annotated features, and in-line calculated analytics, including composition and codon usage. Automated content- and feature-based extraction of sites and support for molecular population genetic statistics make FAST useful for molecular evolutionary analysis. FAST is portable, easy to install and secure thanks to the relative maturity of its Perl and BioPerl foundations, with stable releases posted to CPAN. Development as well as a publicly accessible Cookbook and Wiki are available on the FAST GitHub repository at https://github.com/tlawrence3/FAST. The default data exchange format in FAST is Multi-FastA (specifically, a restriction of BioPerl FastA format). Sanger and Illumina 1.8+ FastQ formatted files are also supported. FAST makes it easier for non-programmer biologists to interactively investigate and control biological data at the speed of thought. PMID:26042145
Robust adiabatic sum frequency conversion.
Suchowski, Haim; Prabhudesai, Vaibhav; Oron, Dan; Arie, Ady; Silberberg, Yaron
2009-07-20
We discuss theoretically and demonstrate experimentally the robustness of the adiabatic sum frequency conversion method. This technique, borrowed from an analogous scheme of robust population transfer in atomic physics and nuclear magnetic resonance, enables the achievement of nearly full frequency conversion in a sum frequency generation process for a bandwidth up to two orders of magnitude wider than in conventional conversion schemes. We show that this scheme is robust to variations in the parameters of both the nonlinear crystal and of the incoming light. These include the crystal temperature, the frequency of the incoming field, the pump intensity, the crystal length and the angle of incidence. Also, we show that this extremely broad bandwidth can be tuned to higher or lower central wavelengths by changing either the pump frequency or the crystal temperature. The detailed study of the properties of this converter is done using the Landau-Zener theory dealing with the adiabatic transitions in two level systems. PMID:19654679
ERIC Educational Resources Information Center
Stamelos, Georgios; Bartzakli, Marianna
2013-01-01
The purpose of this article is to analyse and interpret the effect of the primary school teachers' trade union in Greece insofar as the formation and realisation of education policy is concerned, and, more precisely, insofar as it concerns the issue of teacher evaluation. The research material used comes from the filing and analysis of the…
ERIC Educational Resources Information Center
Lahtero, Tapio Juhani; Kuusilehto-Awale, Lea
2013-01-01
This article introduces a quantitative research into how the leadership team members of 49 basic education schools in the city of Vantaa, Finland, experienced the realisation of strategic leadership in their leadership teams' work. The data were collected by a survey of 24 statements, rated on a five-point Likert scale, and analysed with the…
Robust Portfolio Optimization Using Pseudodistances
2015-01-01
The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948
Network Robustness: the whole story
NASA Astrophysics Data System (ADS)
Longjas, A.; Tejedor, A.; Zaliapin, I. V.; Ambroj, S.; Foufoula-Georgiou, E.
2014-12-01
A multitude of actual processes operating on hydrological networks may exhibit binary outcomes such as clean streams in a river network that may become contaminated. These binary outcomes can be modeled by node removal processes (attacks) acting in a network. Network robustness against attacks has been widely studied in fields as diverse as the Internet, power grids and human societies. However, the current definition of robustness is only accounting for the connectivity of the nodes unaffected by the attack. Here, we put forward the idea that the connectivity of the affected nodes can play a crucial role in proper evaluation of the overall network robustness and its future recovery from the attack. Specifically, we propose a dual perspective approach wherein at any instant in the network evolution under attack, two distinct networks are defined: (i) the Active Network (AN) composed of the unaffected nodes and (ii) the Idle Network (IN) composed of the affected nodes. The proposed robustness metric considers both the efficiency of destroying the AN and the efficiency of building-up the IN. This approach is motivated by concrete applied problems, since, for example, if we study the dynamics of contamination in river systems, it is necessary to know both the connectivity of the healthy and contaminated parts of the river to assess its ecological functionality. We show that trade-offs between the efficiency of the Active and Idle network dynamics give rise to surprising crossovers and re-ranking of different attack strategies, pointing to significant implications for decision making.
Mental Models: A Robust Definition
ERIC Educational Resources Information Center
Rook, Laura
2013-01-01
Purpose: The concept of a mental model has been described by theorists from diverse disciplines. The purpose of this paper is to offer a robust definition of an individual mental model for use in organisational management. Design/methodology/approach: The approach adopted involves an interdisciplinary literature review of disciplines, including…
Robust design of dynamic observers
NASA Technical Reports Server (NTRS)
Bhattacharyya, S. P.
1974-01-01
The two (identity) observer realizations z = Mz + Ky and z = transpose of Az + transpose of K(y - transpose of Cz), respectively called the open loop and closed loop realizations, for the linear system x = Ax, y = Cx are analyzed with respect to the requirement of robustness; i.e., the requirement that the observer continue to regulate the error x - z satisfactorily despite small variations in the observer parameters from the projected design values. The results show that the open loop realization is never robust, that robustness requires a closed loop implementation, and that the closed loop realization is robust with respect to small perturbations in the gains transpose of K if and only if the observer can be built to contain an exact replica of the unstable and underdamped dynamics of the system being observed. These results clarify the stringent accuracy requirements on both models and hardware that must be met before an observer can be considered for use in a control system.
Robust Sliding Window Synchronizer Developed
NASA Technical Reports Server (NTRS)
Chun, Kue S.; Xiong, Fuqin; Pinchak, Stanley
2004-01-01
The development of an advanced robust timing synchronization scheme is crucial for the support of two NASA programs--Advanced Air Transportation Technologies and Aviation Safety. A mobile aeronautical channel is a dynamic channel where various adverse effects--such as Doppler shift, multipath fading, and shadowing due to precipitation, landscape, foliage, and buildings--cause the loss of symbol timing synchronization.
Robust template matching using run-length encoding
NASA Astrophysics Data System (ADS)
Lee, Hunsue; Suh, Sungho; Cho, Hansang
2015-09-01
In this paper we propose a novel template matching algorithm for visual inspection of bare printed circuit board (PCB).1 In the conventional template matching for PCB inspection, the matching score and its relevant offsets are acquired by calculating the maximum value among the convolutions of template image and camera image. While the method is fast, the robustness and accuracy of matching are not guaranteed due to the gap between a design and an implementation resulting from defects and process variations. To resolve this problem, we suggest a new method which uses run-length encoding (RLE). For the template image to be matched, we accumulate data of foreground and background, and RLE data for each row and column in the template image. Using the data, we can find the x and y offsets which minimize the optimization function. The efficiency and robustness of the proposed algorithm are verified through a series of experiments. By comparing the proposed algorithm with the conventional approach, we could realize that the proposed algorithm is not only fast but also more robust and reliable in matching results.
ERIC Educational Resources Information Center
Fullick, Leisha; Field, John; Rees, Teresa; Gilchrist, Helen
2009-01-01
The Inquiry into the Future for Lifelong Learning proposes a strategy for lifelong learning for the next quarter-century. In this article, four of the Inquiry's commissioners--Leisha Fullick, John Field, Teresa Rees and Helen Gilchrist--reflect on some of the report's key themes. Fullick discusses the role of "local responsiveness" in…
Algebraic connectivity and graph robustness.
Feddema, John Todd; Byrne, Raymond Harry; Abdallah, Chaouki T.
2009-07-01
Recent papers have used Fiedler's definition of algebraic connectivity to show that network robustness, as measured by node-connectivity and edge-connectivity, can be increased by increasing the algebraic connectivity of the network. By the definition of algebraic connectivity, the second smallest eigenvalue of the graph Laplacian is a lower bound on the node-connectivity. In this paper we show that for circular random lattice graphs and mesh graphs algebraic connectivity is a conservative lower bound, and that increases in algebraic connectivity actually correspond to a decrease in node-connectivity. This means that the networks are actually less robust with respect to node-connectivity as the algebraic connectivity increases. However, an increase in algebraic connectivity seems to correlate well with a decrease in the characteristic path length of these networks - which would result in quicker communication through the network. Applications of these results are then discussed for perimeter security.
Robust dynamic mitigation of instabilities
Kawata, S.; Karino, T.
2015-04-15
A dynamic mitigation mechanism for instability growth was proposed and discussed in the paper [S. Kawata, Phys. Plasmas 19, 024503 (2012)]. In the present paper, the robustness of the dynamic instability mitigation mechanism is discussed further. The results presented here show that the mechanism of the dynamic instability mitigation is rather robust against changes in the phase, the amplitude, and the wavelength of the wobbling perturbation applied. Generally, instability would emerge from the perturbation of the physical quantity. Normally, the perturbation phase is unknown so that the instability growth rate is discussed. However, if the perturbation phase is known, the instability growth can be controlled by a superposition of perturbations imposed actively: If the perturbation is induced by, for example, a driving beam axis oscillation or wobbling, the perturbation phase could be controlled, and the instability growth is mitigated by the superposition of the growing perturbations.
Robust, optimal subsonic airfoil shapes
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor)
2008-01-01
Method system, and product from application of the method, for design of a subsonic airfoil shape, beginning with an arbitrary initial airfoil shape and incorporating one or more constraints on the airfoil geometric parameters and flow characteristics. The resulting design is robust against variations in airfoil dimensions and local airfoil shape introduced in the airfoil manufacturing process. A perturbation procedure provides a class of airfoil shapes, beginning with an initial airfoil shape.
Robust flight control of rotorcraft
NASA Astrophysics Data System (ADS)
Pechner, Adam Daniel
With recent design improvement in fixed wing aircraft, there has been a considerable interest in the design of robust flight control systems to compensate for the inherent instability necessary to achieve desired performance. Such systems are designed for maximum available retention of stability and performance in the presence of significant vehicle damage or system failure. The rotorcraft industry has shown similar interest in adopting these reconfigurable flight control schemes specifically because of their ability to reject disturbance inputs and provide a significant amount of robustness for all but the most catastrophic of situations. The research summarized herein focuses on the extension of the pseudo-sliding mode control design procedure interpreted in the frequency domain. Application of the technique is employed and simulated on two well known helicopters, a simplified model of a hovering Sikorsky S-61 and the military's Black Hawk UH-60A also produced by Sikorsky. The Sikorsky helicopter model details are readily available and was chosen because it can be limited to pitch and roll motion reducing the number of degrees of freedom and yet contains two degrees of freedom, which is the minimum requirement in proving the validity of the pseudo-sliding control technique. The full order model of a hovering Black Hawk system was included both as a comparison to the S-61 helicopter design system and as a means to demonstrate the scaleability and effectiveness of the control technique on sophisticated systems where design robustness is of critical concern.
Robust image registration of biological microscopic images.
Wang, Ching-Wei; Ka, Shuk-Man; Chen, Ann
2014-01-01
Image registration of biological data is challenging as complex deformation problems are common. Possible deformation effects can be caused in individual data preparation processes, involving morphological deformations, stain variations, stain artifacts, rotation, translation, and missing tissues. The combining deformation effects tend to make existing automatic registration methods perform poor. In our experiments on serial histopathological images, the six state of the art image registration techniques, including TrakEM2, SURF + affine transformation, UnwarpJ, bUnwarpJ, CLAHE + bUnwarpJ and BrainAligner, achieve no greater than 70% averaged accuracies, while the proposed method achieves 91.49% averaged accuracy. The proposed method has also been demonstrated to be significantly better in alignment of laser scanning microscope brain images and serial ssTEM images than the benchmark automatic approaches (p < 0.001). The contribution of this study is to introduce a fully automatic, robust and fast image registration method for 2D image registration. PMID:25116443
Robust retrieval from compressed medical image archives
NASA Astrophysics Data System (ADS)
Sidorov, Denis N.; Lerallut, Jean F.; Cocquerez, Jean-Pierre; Azpiroz, Joaquin
2005-04-01
Paper addresses the computational aspects of extracting important features directly from compressed images for the purpose of aiding biomedical image retrieval based on content. The proposed method for treatment of compressed medical archives follows the JPEG compression standard and exploits algorithm based on spacial analysis of the image cosine spectrum coefficients amplitude and location. The experiments on modality-specific archive of osteoarticular images show robustness of the method based on measured spectral spatial statistics. The features, which were based on the cosine spectrum coefficients' values, could satisfy different types of queries' modalities (MRI, US, etc), which emphasized texture and edge properties. In particular, it has been shown that there is wealth of information in the AC coefficients of the DCT transform, which can be utilized to support fast content-based image retrieval. The computational cost of proposed signature generation algorithm is low. Influence of conventional and the state-of-the-art compression techniques based on cosine and wavelet integral transforms on the performance of content-based medical image retrieval has been also studied. We found no significant differences in retrieval efficiencies for non-compressed and JPEG2000-compressed images even at the lowest bit rate tested.
Punctuated evolution and robustness in morphogenesis
Grigoriev, D.; Reinitz, J.; Vakulenko, S.; Weber, A.
2014-01-01
This paper presents an analytic approach to the pattern stability and evolution problem in morphogenesis. The approach used here is based on the ideas from the gene and neural network theory. We assume that gene networks contain a number of small groups of genes (called hubs) controlling morphogenesis process. Hub genes represent an important element of gene network architecture and their existence is empirically confirmed. We show that hubs can stabilize morphogenetic pattern and accelerate the morphogenesis. The hub activity exhibits an abrupt change depending on the mutation frequency. When the mutation frequency is small, these hubs suppress all mutations and gene product concentrations do not change, thus, the pattern is stable. When the environmental pressure increases and the population needs new genotypes, the genetic drift and other effects increase the mutation frequency. For the frequencies that are larger than a critical amount the hubs turn off; and as a result, many mutations can affect phenotype. This effect can serve as an engine for evolution. We show that this engine is very effective: the evolution acceleration is an exponential function of gene redundancy. Finally, we show that the Eldredge-Gould concept of punctuated evolution results from the network architecture, which provides fast evolution, control of evolvability, and pattern robustness. To describe analytically the effect of exponential acceleration, we use mathematical methods developed recently for hard combinatorial problems, in particular, for so-called k-SAT problem, and numerical simulations. PMID:24996115
NASA Astrophysics Data System (ADS)
Santos, Abel; Yoo, Jeong Ha; Rohatgi, Charu Vashisth; Kumeria, Tushar; Wang, Ye; Losic, Dusan
2016-01-01
This study is the first realisation of true optical rugate filters (RFs) based on nanoporous anodic alumina (NAA) by sinusoidal waves. An innovative and rationally designed sinusoidal pulse anodisation (SPA) approach in galvanostatic mode is used with the aim of engineering the effective medium of NAA in a sinusoidal fashion. A precise control over the different anodisation parameters (i.e. anodisation period, anodisation amplitude, anodisation offset, number of pulses, anodisation temperature and pore widening time) makes it possible to engineer the characteristic reflection peaks and interferometric colours of NAA-RFs, which can be finely tuned across the UV-visible-NIR spectrum. The effect of the aforementioned anodisation parameters on the photonic properties of NAA-RFs (i.e. characteristic reflection peaks and interferometric colours) is systematically assessed in order to establish for the first time a comprehensive rationale towards NAA-RFs with fully controllable photonic properties. The experimental results are correlated with a theoretical model (Looyenga-Landau-Lifshitz - LLL), demonstrating that the effective medium of these photonic nanostructures can be precisely described by the effective medium approximation. NAA-RFs are also demonstrated as chemically selective photonic platforms combined with reflectometric interference spectroscopy (RIfS). The resulting optical sensing system is used to assess the reversible binding affinity between a model drug (i.e. indomethacin) and human serum albumin (HSA) in real-time. Our results demonstrate that this system can be used to determine the overall pharmacokinetic profile of drugs, which is a critical aspect to be considered for the implementation of efficient medical therapies.This study is the first realisation of true optical rugate filters (RFs) based on nanoporous anodic alumina (NAA) by sinusoidal waves. An innovative and rationally designed sinusoidal pulse anodisation (SPA) approach in galvanostatic
Robust Path Planning and Feedback Design Under Stochastic Uncertainty
NASA Technical Reports Server (NTRS)
Blackmore, Lars
2008-01-01
Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value.In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as wellas the reference trajectory.The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.
Membrane Resonance Enables Stable and Robust Gamma Oscillations
Moca, Vasile V.; Nikolić, Danko; Singer, Wolf; Mureşan, Raul C.
2014-01-01
Neuronal mechanisms underlying beta/gamma oscillations (20–80 Hz) are not completely understood. Here, we show that in vivo beta/gamma oscillations in the cat visual cortex sometimes exhibit remarkably stable frequency even when inputs fluctuate dramatically. Enhanced frequency stability is associated with stronger oscillations measured in individual units and larger power in the local field potential. Simulations of neuronal circuitry demonstrate that membrane properties of inhibitory interneurons strongly determine the characteristics of emergent oscillations. Exploration of networks containing either integrator or resonator inhibitory interneurons revealed that: (i) Resonance, as opposed to integration, promotes robust oscillations with large power and stable frequency via a mechanism called RING (Resonance INduced Gamma); resonance favors synchronization by reducing phase delays between interneurons and imposes bounds on oscillation cycle duration; (ii) Stability of frequency and robustness of the oscillation also depend on the relative timing of excitatory and inhibitory volleys within the oscillation cycle; (iii) RING can reproduce characteristics of both Pyramidal INterneuron Gamma (PING) and INterneuron Gamma (ING), transcending such classifications; (iv) In RING, robust gamma oscillations are promoted by slow but are impaired by fast inputs. Results suggest that interneuronal membrane resonance can be an important ingredient for generation of robust gamma oscillations having stable frequency. PMID:23042733
Mitigation of Remedial Action Schemes by Decentralized Robust Governor Control
Elizondo, Marcelo A.; Marinovici, Laurentiu D.; Lian, Jianming; Kalsi, Karanjit; Du, Pengwei
2014-04-15
This paper presents transient stability improvement by a new distributed hierarchical control architecture (DHC). The integration of remedial action schemes (RAS) to the distributed hierarchical control architecture is studied. RAS in power systems are designed to maintain stability and avoid undesired system conditions by rapidly switching equipment and/or changing operating points according to predetermined rules. The acceleration trend relay currently in use in the US western interconnection is an example of RAS that trips generators to maintain transient stability. The link between RAS and DHC is through fast acting robust turbine/governor control that can also improve transient stability. In this paper, the influence of the decentralized robust turbine/governor control on the design of RAS is studied. Benefits of combining these two schemes are increasing power transfer capability and mitigation of RAS generator tripping actions; the later benefit is shown through simulations.
Fast-track for fast times: catching and keeping generation Y in the nursing workforce.
Walker, Kim
2007-04-01
There is little doubt we find ourselves in challenging times as never before has there been such generational diversity in the nursing workforce. Currently, nurses from four distinct (and now well recognised and discussed) generational groups jostle for primacy of recognition and reward. Equally significant is the acute realisation that our ageing profession must find ways to sustain itself in the wake of huge attrition as the 'baby boomer' nurses start retiring over the next ten to fifteen years. These realities impel us to become ever more strategic in our thinking about how best to manage the workforce of the future. This paper presents two exciting and original innovations currently in train at one of Australia's leading Catholic health care providers: firstly, a new fast-track bachelor of nursing program for fee-paying domestic students. This is a collaborative venture between St Vincent's and Mater Health, Sydney (SV&MHS) and the University of Tasmania (UTas); as far as we know, it is unprecedented in Australia. As well, the two private facilities of SV&MHS, St Vincent's Private (SVPH) and the Mater Hospitals, have developed and implemented a unique 'accelerated progression pathway' (APP) to enable registered nurses with talent and ambition to fast track their career through a competency and merit based system of performance management and reward. Both these initiatives are aimed squarely at the gen Y demographic and provide potential to significantly augment our capacity to recruit and retain quality people well into the future. PMID:17563323
Fast foods are quick, reasonably priced, and readily available alternatives to home cooking. While convenient and economical for a busy lifestyle, fast foods are typically high in calories, fat, saturated fat, ...
... challenge to eat healthy when going to a fast food place. In general, avoiding items that are deep ... challenge to eat healthy when going to a fast food place. In general, avoiding items that are deep ...
... this page: //medlineplus.gov/ency/article/003766.htm Acid-fast stain To use the sharing features on this page, please enable JavaScript. The acid-fast stain is a laboratory test that determines ...
Light field reconstruction robust to signal dependent noise
NASA Astrophysics Data System (ADS)
Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai
2014-11-01
Capturing four dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from low signal noise ratio. Although multiplexing can help raise the acquisition quality, noise is still a big issue especially for fast acquisition. To address this problem, this paper proposes a noise robust light field reconstruction method. Firstly, scene dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We build a prototype by hacking an off-the-shelf camera for data capturing and prove the concept. The effectiveness of this method is validated with experiments on the real captured data.
Garber, Andrea K; Lustig, Robert H
2011-09-01
Studies of food addiction have focused on highly palatable foods. While fast food falls squarely into that category, it has several other attributes that may increase its salience. This review examines whether the nutrients present in fast food, the characteristics of fast food consumers or the presentation and packaging of fast food may encourage substance dependence, as defined by the American Psychiatric Association. The majority of fast food meals are accompanied by a soda, which increases the sugar content 10-fold. Sugar addiction, including tolerance and withdrawal, has been demonstrated in rodents but not humans. Caffeine is a "model" substance of dependence; coffee drinks are driving the recent increase in fast food sales. Limited evidence suggests that the high fat and salt content of fast food may increase addictive potential. Fast food restaurants cluster in poorer neighborhoods and obese adults eat more fast food than those who are normal weight. Obesity is characterized by resistance to insulin, leptin and other hormonal signals that would normally control appetite and limit reward. Neuroimaging studies in obese subjects provide evidence of altered reward and tolerance. Once obese, many individuals meet criteria for psychological dependence. Stress and dieting may sensitize an individual to reward. Finally, fast food advertisements, restaurants and menus all provide environmental cues that may trigger addictive overeating. While the concept of fast food addiction remains to be proven, these findings support the role of fast food as a potentially addictive substance that is most likely to create dependence in vulnerable populations. PMID:21999689
Robust recognition of 1D barcodes using Hough transform
NASA Astrophysics Data System (ADS)
Dwinell, John; Bian, Peng; Bian, Long Xiang
2012-01-01
In this paper we present an algorithm for the recognition of 1D barcodes using the Hough transform, which is highly robust regarding the typical degraded image. The algorithm addresses various typical image distortions, such as inhomogeneous illumination, reflections, damaged barcode or blurriness etc. Other problems arise from recognizing low quality printing (low contrast or poor ink receptivity). Traditional approaches are unable to provide a fast solution for handling such complex and mixed noise factors. A multi-level method offers a better approach to best manage competing constraints of complex noise and fast decode. At the lowest level, images are processed in gray scale. At the middle level, the image is transformed into the Hough domain. At the top level, global results, including missing information, is processed within a global context including domain heuristics as well as OCR. The three levels work closely together by passing information up and down between levels.
Recent Progress toward Robust Photocathodes
Mulhollan, G. A.; Bierman, J. C.
2009-08-04
RF photoinjectors for next generation spin-polarized electron accelerators require photo-cathodes capable of surviving RF gun operation. Free electron laser photoinjectors can benefit from more robust visible light excited photoemitters. A negative electron affinity gallium arsenide activation recipe has been found that diminishes its background gas susceptibility without any loss of near bandgap photoyield. The highest degree of immunity to carbon dioxide exposure was achieved with a combination of cesium and lithium. Activated amorphous silicon photocathodes evince advantageous properties for high current photoinjectors including low cost, substrate flexibility, visible light excitation and greatly reduced gas reactivity compared to gallium arsenide.
Robust Software Architecture for Robots
NASA Technical Reports Server (NTRS)
Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael
2009-01-01
Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and software that embodies the architecture. The architecture was conceived in the spirit of current practice in designing modular, hard, realtime aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.
Wisz, Mary Susanne; Pottier, Julien; Kissling, W Daniel; Pellissier, Loïc; Lenoir, Jonathan; Damgaard, Christian F; Dormann, Carsten F; Forchhammer, Mads C; Grytnes, John-Arvid; Guisan, Antoine; Heikkinen, Risto K; Høye, Toke T; Kühn, Ingolf; Luoto, Miska; Maiorano, Luigi; Nilsson, Marie-Charlotte; Normand, Signe; Öckinger, Erik; Schmidt, Niels M; Termansen, Mette; Timmermann, Allan; Wardle, David A; Aastrup, Peter; Svenning, Jens-Christian
2013-01-01
Predicting which species will occur together in the future, and where, remains one of the greatest challenges in ecology, and requires a sound understanding of how the abiotic and biotic environments interact with dispersal processes and history across scales. Biotic interactions and their dynamics influence species' relationships to climate, and this also has important implications for predicting future distributions of species. It is already well accepted that biotic interactions shape species' spatial distributions at local spatial extents, but the role of these interactions beyond local extents (e.g. 10 km2 to global extents) are usually dismissed as unimportant. In this review we consolidate evidence for how biotic interactions shape species distributions beyond local extents and review methods for integrating biotic interactions into species distribution modelling tools. Drawing upon evidence from contemporary and palaeoecological studies of individual species ranges, functional groups, and species richness patterns, we show that biotic interactions have clearly left their mark on species distributions and realised assemblages of species across all spatial extents. We demonstrate this with examples from within and across trophic groups. A range of species distribution modelling tools is available to quantify species environmental relationships and predict species occurrence, such as: (i) integrating pairwise dependencies, (ii) using integrative predictors, and (iii) hybridising species distribution models (SDMs) with dynamic models. These methods have typically only been applied to interacting pairs of species at a single time, require a priori ecological knowledge about which species interact, and due to data paucity must assume that biotic interactions are constant in space and time. To better inform the future development of these models across spatial scales, we call for accelerated collection of spatially and temporally explicit species data. Ideally
The Robustness of Acoustic Analogies
NASA Technical Reports Server (NTRS)
Freund, J. B.; Lele, S. K.; Wei, M.
2004-01-01
Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(right arrow q) = 0 into a nominal sound source S(right arrow q) and sound propagation operator L such that L(right arrow q) = S(right arrow q). In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S can not be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.
Robust fusion with reliabilities weights
NASA Astrophysics Data System (ADS)
Grandin, Jean-Francois; Marques, Miguel
2002-03-01
The reliability is a value of the degree of trust in a given measurement. We analyze and compare: ML (Classical Maximum Likelihood), MLE (Maximum Likelihood weighted by Entropy), MLR (Maximum Likelihood weighted by Reliability), MLRE (Maximum Likelihood weighted by Reliability and Entropy), DS (Credibility Plausibility), DSR (DS weighted by reliabilities). The analysis is based on a model of a dynamical fusion process. It is composed of three sensors, which have each it's own discriminatory capacity, reliability rate, unknown bias and measurement noise. The knowledge of uncertainties is also severely corrupted, in order to analyze the robustness of the different fusion operators. Two sensor models are used: the first type of sensor is able to estimate the probability of each elementary hypothesis (probabilistic masses), the second type of sensor delivers masses on union of elementary hypotheses (DS masses). In the second case probabilistic reasoning leads to sharing the mass abusively between elementary hypotheses. Compared to the classical ML or DS which achieves just 50% of correct classification in some experiments, DSR, MLE, MLR and MLRE reveals very good performances on all experiments (more than 80% of correct classification rate). The experiment was performed with large variations of the reliability coefficients for each sensor (from 0 to 1), and with large variations on the knowledge of these coefficients (from 0 0.8). All four operators reveal good robustness, but the MLR reveals to be uniformly dominant on all the experiments in the Bayesian case and achieves the best mean performance under incomplete a priori information.
Robust Inflation from fibrous strings
NASA Astrophysics Data System (ADS)
Burgess, C. P.; Cicoli, M.; de Alwis, S.; Quevedo, F.
2016-05-01
Successful inflationary models should (i) describe the data well; (ii) arise generically from sensible UV completions; (iii) be insensitive to detailed fine-tunings of parameters and (iv) make interesting new predictions. We argue that a class of models with these properties is characterized by relatively simple potentials with a constant term and negative exponentials. We here continue earlier work exploring UV completions for these models—including the key (though often ignored) issue of modulus stabilisation—to assess the robustness of their predictions. We show that string models where the inflaton is a fibration modulus seem to be robust due to an effective rescaling symmetry, and fairly generic since most known Calabi-Yau manifolds are fibrations. This class of models is characterized by a generic relation between the tensor-to-scalar ratio r and the spectral index ns of the form r propto (ns‑1)2 where the proportionality constant depends on the nature of the effects used to develop the inflationary potential and the topology of the internal space. In particular we find that the largest values of the tensor-to-scalar ratio that can be obtained by generalizing the original set-up are of order r lesssim 0.01. We contrast this general picture with specific popular models, such as the Starobinsky scenario and α-attractors. Finally, we argue the self consistency of large-field inflationary models can strongly constrain non-supersymmetric inflationary mechanisms.
Reliable and robust entanglement witness
NASA Astrophysics Data System (ADS)
Yuan, Xiao; Mei, Quanxin; Zhou, Shan; Ma, Xiongfeng
2016-04-01
Entanglement, a critical resource for quantum information processing, needs to be witnessed in many practical scenarios. Theoretically, witnessing entanglement is by measuring a special Hermitian observable, called an entanglement witness (EW), which has non-negative expected outcomes for all separable states but can have negative expectations for certain entangled states. In practice, an EW implementation may suffer from two problems. The first one is reliability. Due to unreliable realization devices, a separable state could be falsely identified as an entangled one. The second problem relates to robustness. A witness may not be optimal for a target state and fail to identify its entanglement. To overcome the reliability problem, we employ a recently proposed measurement-device-independent entanglement witness scheme, in which the correctness of the conclusion is independent of the implemented measurement devices. In order to overcome the robustness problem, we optimize the EW to draw a better conclusion given certain experimental data. With the proposed EW scheme, where only data postprocessing needs to be modified compared to the original measurement-device-independent scheme, one can efficiently take advantage of the measurement results to maximally draw reliable conclusions.
NASA Technical Reports Server (NTRS)
Walatka, Pamela P.; Clucas, Jean; McCabe, R. Kevin; Plessel, Todd; Potter, R.; Cooper, D. M. (Technical Monitor)
1994-01-01
The Flow Analysis Software Toolkit, FAST, is a software environment for visualizing data. FAST is a collection of separate programs (modules) that run simultaneously and allow the user to examine the results of numerical and experimental simulations. The user can load data files, perform calculations on the data, visualize the results of these calculations, construct scenes of 3D graphical objects, and plot, animate and record the scenes. Computational Fluid Dynamics (CFD) visualization is the primary intended use of FAST, but FAST can also assist in the analysis of other types of data. FAST combines the capabilities of such programs as PLOT3D, RIP, SURF, and GAS into one environment with modules that share data. Sharing data between modules eliminates the drudgery of transferring data between programs. All the modules in the FAST environment have a consistent, highly interactive graphical user interface. Most commands are entered by pointing and'clicking. The modular construction of FAST makes it flexible and extensible. The environment can be custom configured and new modules can be developed and added as needed. The following modules have been developed for FAST: VIEWER, FILE IO, CALCULATOR, SURFER, TOPOLOGY, PLOTTER, TITLER, TRACER, ARCGRAPH, GQ, SURFERU, SHOTET, and ISOLEVU. A utility is also included to make the inclusion of user defined modules in the FAST environment easy. The VIEWER module is the central control for the FAST environment. From VIEWER, the user can-change object attributes, interactively position objects in three-dimensional space, define and save scenes, create animations, spawn new FAST modules, add additional view windows, and save and execute command scripts. The FAST User Guide uses text and FAST MAPS (graphical representations of the entire user interface) to guide the user through the use of FAST. Chapters include: Maps, Overview, Tips, Getting Started Tutorial, a separate chapter for each module, file formats, and system
Mechanisms of mutational robustness in transcriptional regulation
Payne, Joshua L.; Wagner, Andreas
2015-01-01
Robustness is the invariance of a phenotype in the face of environmental or genetic change. The phenotypes produced by transcriptional regulatory circuits are gene expression patterns that are to some extent robust to mutations. Here we review several causes of this robustness. They include robustness of individual transcription factor binding sites, homotypic clusters of such sites, redundant enhancers, transcription factors, redundant transcription factors, and the wiring of transcriptional regulatory circuits. Such robustness can either be an adaptation by itself, a byproduct of other adaptations, or the result of biophysical principles and non-adaptive forces of genome evolution. The potential consequences of such robustness include complex regulatory network topologies that arise through neutral evolution, as well as cryptic variation, i.e., genotypic divergence without phenotypic divergence. On the longest evolutionary timescales, the robustness of transcriptional regulation has helped shape life as we know it, by facilitating evolutionary innovations that helped organisms such as flowering plants and vertebrates diversify. PMID:26579194
Robust temporal alignment of multimodal cardiac sequences
NASA Astrophysics Data System (ADS)
Perissinotto, Andrea; Queirós, Sandro; Morais, Pedro; Baptista, Maria J.; Monaghan, Mark; Rodrigues, Nuno F.; D'hooge, Jan; Vilaça, João. L.; Barbosa, Daniel
2015-03-01
Given the dynamic nature of cardiac function, correct temporal alignment of pre-operative models and intraoperative images is crucial for augmented reality in cardiac image-guided interventions. As such, the current study focuses on the development of an image-based strategy for temporal alignment of multimodal cardiac imaging sequences, such as cine Magnetic Resonance Imaging (MRI) or 3D Ultrasound (US). First, we derive a robust, modality-independent signal from the image sequences, estimated by computing the normalized cross-correlation between each frame in the temporal sequence and the end-diastolic frame. This signal is a resembler for the left-ventricle (LV) volume curve over time, whose variation indicates different temporal landmarks of the cardiac cycle. We then perform the temporal alignment of these surrogate signals derived from MRI and US sequences of the same patient through Dynamic Time Warping (DTW), allowing to synchronize both sequences. The proposed framework was evaluated in 98 patients, which have undergone both 3D+t MRI and US scans. The end-systolic frame could be accurately estimated as the minimum of the image-derived surrogate signal, presenting a relative error of 1.6 +/- 1.9% and 4.0 +/- 4.2% for the MRI and US sequences, respectively, thus supporting its association with key temporal instants of the cardiac cycle. The use of DTW reduces the desynchronization of the cardiac events in MRI and US sequences, allowing to temporally align multimodal cardiac imaging sequences. Overall, a generic, fast and accurate method for temporal synchronization of MRI and US sequences of the same patient was introduced. This approach could be straightforwardly used for the correct temporal alignment of pre-operative MRI information and intra-operative US images.
Robust characterization of leakage errors
NASA Astrophysics Data System (ADS)
Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph
2016-04-01
Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.
CONTAINER MATERIALS, FABRICATION AND ROBUSTNESS
Dunn, K.; Louthan, M.; Rawls, G.; Sindelar, R.; Zapp, P.; Mcclard, J.
2009-11-10
The multi-barrier 3013 container used to package plutonium-bearing materials is robust and thereby highly resistant to identified degradation modes that might cause failure. The only viable degradation mechanisms identified by a panel of technical experts were pressurization within and corrosion of the containers. Evaluations of the container materials and the fabrication processes and resulting residual stresses suggest that the multi-layered containers will mitigate the potential for degradation of the outer container and prevent the release of the container contents to the environment. Additionally, the ongoing surveillance programs and laboratory studies should detect any incipient degradation of containers in the 3013 storage inventory before an outer container is compromised.
Robust holographic storage system design.
Watanabe, Takahiro; Watanabe, Minoru
2011-11-21
Demand is increasing daily for large data storage systems that are useful for applications in spacecraft, space satellites, and space robots, which are all exposed to radiation-rich space environment. As candidates for use in space embedded systems, holographic storage systems are promising because they can easily provided the demanded large-storage capability. Particularly, holographic storage systems, which have no rotation mechanism, are demanded because they are virtually maintenance-free. Although a holographic memory itself is an extremely robust device even in a space radiation environment, its associated lasers and drive circuit devices are vulnerable. Such vulnerabilities sometimes engendered severe problems that prevent reading of all contents of the holographic memory, which is a turn-off failure mode of a laser array. This paper therefore presents a proposal for a recovery method for the turn-off failure mode of a laser array on a holographic storage system, and describes results of an experimental demonstration. PMID:22109441
Probabilistic Reasoning for Plan Robustness
NASA Technical Reports Server (NTRS)
Schaffer, Steve R.; Clement, Bradley J.; Chien, Steve A.
2005-01-01
A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. A method is devised for directly reasoning about the uncertainty in continuous activity duration and resource usage for planning problems. By representing random variables as parametric distributions, computing projected system state can be simplified in some cases. Common approximation and novel methods are compared for over-constrained and lightly constrained domains. The system compares a few common approximation methods for an iterative repair planner. Results show improvements in robustness over the conventional non-probabilistic representation by reducing the number of constraint violations witnessed by execution. The improvement is more significant for larger problems and problems with higher resource subscription levels but diminishes as the system is allowed to accept higher risk levels.
Towards designing robust coupled networks
Schneider, Christian M.; Yazdani, Nuri; Araújo, Nuno A. M.; Havlin, Shlomo; Herrmann, Hans J.
2013-01-01
Natural and technological interdependent systems have been shown to be highly vulnerable due to cascading failures and an abrupt collapse of global connectivity under initial failure. Mitigating the risk by partial disconnection endangers their functionality. Here we propose a systematic strategy of selecting a minimum number of autonomous nodes that guarantee a smooth transition in robustness. Our method which is based on betweenness is tested on various examples including the famous 2003 electrical blackout of Italy. We show that, with this strategy, the necessary number of autonomous nodes can be reduced by a factor of five compared to a random choice. We also find that the transition to abrupt collapse follows tricritical scaling characterized by a set of exponents which is independent on the protection strategy. PMID:23752705
The structure of robust observers
NASA Technical Reports Server (NTRS)
Bhattacharyya, S. P.
1975-01-01
Conventional observers for linear time-invariant systems are shown to be structurally inadequate from a sensitivity standpoint. It is proved that if a linear dynamic system is to provide observer action despite arbitrary small perturbations in a specified subset of its parameters, it must: (1) be a closed loop system, be driven by the observer error, (2) possess redundancy, the observer must be generating, implicitly or explicitly, at least one linear combination of states that is already contained in the measurements, and (3) contain a perturbation-free model of the portion of the system observable from the external input to the observer. The procedure for design of robust observers possessing the above structural features is established and discussed.
Robust matching for voice recognition
NASA Astrophysics Data System (ADS)
Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.
1994-10-01
This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, work or phonetic marketing, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.
How robust are distributed systems
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.
1989-01-01
A distributed system is made up of large numbers of components operating asynchronously from one another and hence with imcomplete and inaccurate views of one another's state. Load fluctuations are common as new tasks arrive and active tasks terminate. Jointly, these aspects make it nearly impossible to arrive at detailed predictions for a system's behavior. It is important to the successful use of distributed systems in situations in which humans cannot provide the sorts of predictable realtime responsiveness of a computer, that the system be robust. The technology of today can too easily be affected by worn programs or by seemingly trivial mechanisms that, for example, can trigger stock market disasters. Inventors of a technology have an obligation to overcome flaws that can exact a human cost. A set of principles for guiding solutions to distributed computing problems is presented.
A Robust Feedforward Model of the Olfactory System
Zhang, Yilun; Sharpee, Tatyana O.
2016-01-01
Most natural odors have sparse molecular composition. This makes the principles of compressed sensing potentially relevant to the structure of the olfactory code. Yet, the largely feedforward organization of the olfactory system precludes reconstruction using standard compressed sensing algorithms. To resolve this problem, recent theoretical work has shown that signal reconstruction could take place as a result of a low dimensional dynamical system converging to one of its attractor states. However, the dynamical aspects of optimization slowed down odor recognition and were also found to be susceptible to noise. Here we describe a feedforward model of the olfactory system that achieves both strong compression and fast reconstruction that is also robust to noise. A key feature of the proposed model is a specific relationship between how odors are represented at the glomeruli stage, which corresponds to a compression, and the connections from glomeruli to third-order neurons (neurons in the olfactory cortex of vertebrates or Kenyon cells in the mushroom body of insects), which in the model corresponds to reconstruction. We show that should this specific relationship hold true, the reconstruction will be both fast and robust to noise, and in particular to the false activation of glomeruli. The predicted connectivity rate from glomeruli to third-order neurons can be tested experimentally. PMID:27065441
A Robust Feedforward Model of the Olfactory System
NASA Astrophysics Data System (ADS)
Zhang, Yilun; Sharpee, Tatyana
Most natural odors have sparse molecular composition. This makes the principles of compressing sensing potentially relevant to the structure of the olfactory code. Yet, the largely feedforward organization of the olfactory system precludes reconstruction using standard compressed sensing algorithms. To resolve this problem, recent theoretical work has proposed that signal reconstruction could take place as a result of a low dimensional dynamical system converging to one of its attractor states. The dynamical aspects of optimization, however, would slow down odor recognition and were also found to be susceptible to noise. Here we describe a feedforward model of the olfactory system that achieves both strong compression and fast reconstruction that is also robust to noise. A key feature of the proposed model is a specific relationship between how odors are represented at the glomeruli stage, which corresponds to a compression, and the connections from glomeruli to Kenyon cells, which in the model corresponds to reconstruction. We show that provided this specific relationship holds true, the reconstruction will be both fast and robust to noise, and in particular to failure of glomeruli. The predicted connectivity rate from glomeruli to the Kenyon cells can be tested experimentally. This research was supported by James S. McDonnell Foundation, NSF CAREER award IIS-1254123, NSF Ideas Lab Collaborative Research IOS 1556388.
Robust Mosaicking of Uav Images with Narrow Overlaps
NASA Astrophysics Data System (ADS)
Kim, J.; Kim, T.; Shin, D.; Kim, S. H.
2016-06-01
This paper considers fast and robust mosaicking of UAV images under a circumstance that each UAV images have very narrow overlaps in-between. Image transformation for image mosaicking consists of two estimations: relative transformations and global transformations. For estimating relative transformations between adjacent images, projective transformation is widely considered. For estimating global transformations, panoramic constraint is widely used. While perspective transformation is a general transformation model in 2D-2D transformation, this may not be optimal with weak stereo geometry such as images with narrow overlaps. While panoramic constraint works for reliable conversion of global transformation for panoramic image generation, this constraint is not applicable to UAV images in linear motions. For these reasons, a robust approach is investigated to generate a high quality mosaicked image from narrowly overlapped UAV images. For relative transformations, several transformation models were considered to ensure robust estimation of relative transformation relationship. Among them were perspective transformation, affine transformation, coplanar relative orientation, and relative orientation with reduced adjustment parameters. Performance evaluation for each transformation model was carried out. The experiment results showed that affine transformation and adjusted coplanar relative orientation were superior to others in terms of stability and accuracy. For global transformation, we set initial approximation by converting each relative transformation to a common transformation with respect to a reference image. In future work, we will investigate constrained relative orientation for enhancing geometric accuracy of image mosaicking and bundle adjustments of each relative transformation model for optimal global transformation.
Dynamic optimization of bioprocesses: efficient and robust numerical strategies.
Banga, Julio R; Balsa-Canto, Eva; Moles, Carmen G; Alonso, Antonio A
2005-06-29
The dynamic optimization (open loop optimal control) of non-linear bioprocesses is considered in this contribution. These processes can be described by sets of non-linear differential and algebraic equations (DAEs), usually subject to constraints in the state and control variables. A review of the available solution techniques for this class of problems is presented, highlighting the numerical difficulties arising from the non-linear, constrained and often discontinuous nature of these systems. In order to surmount these difficulties, we present several alternative stochastic and hybrid techniques based on the control vector parameterization (CVP) approach. The CVP approach is a direct method which transforms the original problem into a non-linear programming (NLP) problem, which must be solved by a suitable (efficient and robust) solver. In particular, a hybrid technique uses a first global optimization phase followed by a fast second phase based on a local deterministic method, so it can handle the nonconvexity of many of these NLPs. The efficiency and robustness of these techniques is illustrated by solving several challenging case studies regarding the optimal control of fed-batch bioreactors and other bioprocesses. In order to fairly evaluate their advantages, a careful and critical comparison with several other direct approaches is provided. The results indicate that the two-phase hybrid approach presents the best compromise between robustness and efficiency. PMID:15888349
Robust Read Channel System Directly Processing Asynchronous Sampling Data
NASA Astrophysics Data System (ADS)
Yamamoto, Akira; Mouri, Hiroki; Yamamoto, Takashi
2006-02-01
In this study, we describe a robust read channel employing a novel timing recovery system and a unique Viterbi detector which extracts channel timing and channel data directly from asynchronous sampling data. The timing recovery system in the proposed read channel has feed-forward architecture and consists entirely of digital circuits. Thus, it enables robust timing recovery at high-speed and has no performance deterioration caused by variations in analog circuits. The Viterbi detector not only detects maximum-likelihood data using a reference level generator, but also transforms asynchronous data into pseudosynchronous data using two clocks, such as an asynchronous clock generated by a frequency synthesizer and a pseudosynchronous clock generated by a timing detector. The proposed read channel has achieved a constant and fast frequency acquisition time against initial frequency error and has improved its bit error rate performance. This robust read channel system can be used for high-speed signal processing and LSIs using nanometer-scale semiconductor processes.
Robust Optimal Adaptive Control Method with Large Adaptive Gain
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation is referred to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the Y2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.
... quick, reasonably priced, and readily available alternatives to home cooking. While convenient and economical for a busy lifestyle, fast foods are typically high in calories, fat, saturated fat, ...
Grey Ballard, Austin Benson
2014-11-26
This software provides implementations of fast matrix multiplication algorithms. These algorithms perform fewer floating point operations than the classical cubic algorithm. The software uses code generation to automatically implement the fast algorithms based on high-level descriptions. The code serves two general purposes. The first is to demonstrate that these fast algorithms can out-perform vendor matrix multiplication algorithms for modest problem sizes on a single machine. The second is to rapidly prototype many variations of fast matrix multiplication algorithms to encourage future research in this area. The implementations target sequential and shared memory parallel execution.
fastSCOP: a fast web server for recognizing protein structural domains and SCOP superfamilies.
Tung, Chi-Hua; Yang, Jinn-Moon
2007-07-01
The fastSCOP is a web server that rapidly identifies the structural domains and determines the evolutionary superfamilies of a query protein structure. This server uses 3D-BLAST to scan quickly a large structural classification database (SCOP1.71 with <95% identity with each other) and the top 10 hit domains, which have different superfamily classifications, are obtained from the hit lists. MAMMOTH, a detailed structural alignment tool, is adopted to align these top 10 structures to refine domain boundaries and to identify evolutionary superfamilies. Our previous works demonstrated that 3D-BLAST is as fast as BLAST, and has the characteristics of BLAST (e.g. a robust statistical basis, effective search and reliable database search capabilities) in large structural database searches based on a structural alphabet database and a structural alphabet substitution matrix. The classification accuracy of this server is approximately 98% for 586 query structures and the average execution time is approximately 5. This server was also evaluated on 8700 structures, which have no annotations in the SCOP; the server can automatically assign 7311 (84%) proteins (9420 domains) to the SCOP superfamilies in 9.6 h. These results suggest that the fastSCOP is robust and can be a useful server for recognizing the evolutionary classifications and the protein functions of novel structures. The server is accessible at http://fastSCOP.life.nctu.edu.tw. PMID:17485476
Robust neuro-sliding mode multivariable control strategy for powered wheelchairs.
Nguyen, Tuan Nghia; Su, Steven W; Nguyen, Hung T
2011-02-01
This paper proposes an advanced robust multivariable control strategy for a powered wheelchair system. The new control strategy is based on a combination of the systematic triangularization technique and the robust neuro-sliding mode control approach. This strategy effectively copes with parameter uncertainties and external disturbances in real-time in order to achieve robustness and optimal performance of a multivariable system. This novel strategy reduces coupling effects on a multivariable system, eliminates chattering phenomena, and avoids the plant Jacobian calculation problem. Furthermore, the strategy can also achieve fast and global convergence using less computation. The effectiveness of the new multivariable control strategy is verified in real-time implementation on a powered wheelchair system. The obtained results confirm that robustness and desired performance of the overall system are guaranteed, even under parameter uncertainty and external disturbance effects. PMID:20805057
Valiant load-balanced robust routing under hose model for WDM mesh networks
NASA Astrophysics Data System (ADS)
Zhang, Xiaoning; Li, Lemin; Wang, Sheng
2006-09-01
In this paper, we propose Valiant Load-Balanced robust routing scheme for WDM mesh networks under the model of polyhedral uncertainty (i.e., hose model), and the proposed routing scheme is implemented with traffic grooming approach. Our Objective is to maximize the hose model throughput. A mathematic formulation of Valiant Load-Balanced robust routing is presented and three fast heuristic algorithms are also proposed. When implementing Valiant Load-Balanced robust routing scheme to WDM mesh networks, a novel traffic-grooming algorithm called MHF (minimizing hop first) is proposed. We compare the three heuristic algorithms with the VPN tree under the hose model. Finally we demonstrate in the simulation results that MHF with Valiant Load-Balanced robust routing scheme outperforms the traditional traffic-grooming algorithm in terms of the throughput for the uniform/non-uniform traffic matrix under the hose model.
Gavrankapetanović, F
1997-01-01
Fasting (arabic-savm) was proclaimed through islam, and thus it is an obligation for Holly Prophet Muhammad s.a.v.s.-Peace be to Him-in the second year after Hijra (in 624 after Milad-born of Isa a.s.). There is a month of fasting-Ramadan-each lunar (hijra) year. So, it was 1415th fasting this year. Former Prophets have brought obligative messages on fasting to their people; so there are also certain forms of fasting with other religions i.e. with Catholics, Jews, Orthodox. These kinds of fasting above differ from muslim fasting, but they also appear obligative. All revelations have brought fasting as obligative. From medical point of view, fasting has two basical components: psychical and physical. Psychical sphere correlate closely with its fundamental ideological message. Allah dz.s. says in Quran: "... Fasting is obligative for you, as it was obligative to your precedents, as to avoid sins; during very few days (II, II, 183 & 184)." Will strength, control of passions, effort and self-discipline makes a pure faithfull person, who purify its mind and body through fasting. Thinking about The Creator is more intensive, character is more solid; and spirit and will get stronger. We will mention the hadith saying: "Essaihune humus saimun!" That means: "Travellers at the Earth are fasters (of my ummet)." The commentary of this hadith, in the Collection of 1001 hadiths (Bin bir hadis), number 485, says: "There are no travelling dervishs or monks in islam; thus there is no such a kind of relligousity in islam. In stead, it is changed by fasting and constant attending of mosque. That was proclaimed as obligation, although there were few cases of travelling in the name of relligousity, like travelling dervishs and sheichs." In this paper, the author discusses medical aspects of fasting and its positive characteristics in the respect of healthy life style and prevention of many sicks. The author mentions positive influence of fasting to certain system and organs of human
Integrative Physiology of Fasting.
Secor, Stephen M; Carey, Hannah V
2016-04-01
Extended bouts of fasting are ingrained in the ecology of many organisms, characterizing aspects of reproduction, development, hibernation, estivation, migration, and infrequent feeding habits. The challenge of long fasting episodes is the need to maintain physiological homeostasis while relying solely on endogenous resources. To meet that challenge, animals utilize an integrated repertoire of behavioral, physiological, and biochemical responses that reduce metabolic rates, maintain tissue structure and function, and thus enhance survival. We have synthesized in this review the integrative physiological, morphological, and biochemical responses, and their stages, that characterize natural fasting bouts. Underlying the capacity to survive extended fasts are behaviors and mechanisms that reduce metabolic expenditure and shift the dependency to lipid utilization. Hormonal regulation and immune capacity are altered by fasting; hormones that trigger digestion, elevate metabolism, and support immune performance become depressed, whereas hormones that enhance the utilization of endogenous substrates are elevated. The negative energy budget that accompanies fasting leads to the loss of body mass as fat stores are depleted and tissues undergo atrophy (i.e., loss of mass). Absolute rates of body mass loss scale allometrically among vertebrates. Tissues and organs vary in the degree of atrophy and downregulation of function, depending on the degree to which they are used during the fast. Fasting affects the population dynamics and activities of the gut microbiota, an interplay that impacts the host's fasting biology. Fasting-induced gene expression programs underlie the broad spectrum of integrated physiological mechanisms responsible for an animal's ability to survive long episodes of natural fasting. PMID:27065168
Selection for Robustness in Mutagenized RNA Viruses
Furió, Victoria; Holmes, Edward C; Moya, Andrés
2007-01-01
Mutational robustness is defined as the constancy of a phenotype in the face of deleterious mutations. Whether robustness can be directly favored by natural selection remains controversial. Theory and in silico experiments predict that, at high mutation rates, slow-replicating genotypes can potentially outcompete faster counterparts if they benefit from a higher robustness. Here, we experimentally validate this hypothesis, dubbed the “survival of the flattest,” using two populations of the vesicular stomatitis RNA virus. Characterization of fitness distributions and genetic variability indicated that one population showed a higher replication rate, whereas the other was more robust to mutation. The faster replicator outgrew its robust counterpart in standard competition assays, but the outcome was reversed in the presence of chemical mutagens. These results show that selection can directly favor mutational robustness and reveal a novel viral resistance mechanism against treatment by lethal mutagenesis. PMID:17571922
Tail mean and related robust solution concepts
NASA Astrophysics Data System (ADS)
Ogryczak, Włodzimierz
2014-01-01
Robust optimisation might be viewed as a multicriteria optimisation problem where objectives correspond to the scenarios although their probabilities are unknown or imprecise. The simplest robust solution concept represents a conservative approach focused on the worst-case scenario results optimisation. A softer concept allows one to optimise the tail mean thus combining performances under multiple worst scenarios. We show that while considering robust models allowing the probabilities to vary only within given intervals, the tail mean represents the robust solution for only upper bounded probabilities. For any arbitrary intervals of probabilities the corresponding robust solution may be expressed by the optimisation of appropriately combined mean and tail mean criteria thus remaining easily implementable with auxiliary linear inequalities. Moreover, we use the tail mean concept to develope linear programming implementable robust solution concepts related to risk averse optimisation criteria.
Efficient robust conditional random fields.
Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A
2015-10-01
Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that an OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate [Formula: see text] (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs. PMID:26080050
Robust Face Sketch Style Synthesis.
Shengchuan Zhang; Xinbo Gao; Nannan Wang; Jie Li
2016-01-01
Heterogeneous image conversion is a critical issue in many computer vision tasks, among which example-based face sketch style synthesis provides a convenient way to make artistic effects for photos. However, existing face sketch style synthesis methods generate stylistic sketches depending on many photo-sketch pairs. This requirement limits the generalization ability of these methods to produce arbitrarily stylistic sketches. To handle such a drawback, we propose a robust face sketch style synthesis method, which can convert photos to arbitrarily stylistic sketches based on only one corresponding template sketch. In the proposed method, a sparse representation-based greedy search strategy is first applied to estimate an initial sketch. Then, multi-scale features and Euclidean distance are employed to select candidate image patches from the initial estimated sketch and the template sketch. In order to further refine the obtained candidate image patches, a multi-feature-based optimization model is introduced. Finally, by assembling the refined candidate image patches, the completed face sketch is obtained. To further enhance the quality of synthesized sketches, a cascaded regression strategy is adopted. Compared with the state-of-the-art face sketch synthesis methods, experimental results on several commonly used face sketch databases and celebrity photos demonstrate the effectiveness of the proposed method. PMID:26595919
Nanotechnology Based Environmentally Robust Primers
Barbee, T W Jr; Gash, A E; Satcher, J H Jr; Simpson, R L
2003-03-18
An initiator device structure consisting of an energetic metallic nano-laminate foil coated with a sol-gel derived energetic nano-composite has been demonstrated. The device structure consists of a precision sputter deposition synthesized nano-laminate energetic foil of non-toxic and non-hazardous metals along with a ceramic-based energetic sol-gel produced coating made up of non-toxic and non-hazardous components such as ferric oxide and aluminum metal. Both the nano-laminate and sol-gel technologies are versatile commercially viable processes that allow the ''engineering'' of properties such as mechanical sensitivity and energy output. The nano-laminate serves as the mechanically sensitive precision igniter and the energetic sol-gel functions as a low-cost, non-toxic, non-hazardous booster in the ignition train. In contrast to other energetic nanotechnologies these materials can now be safely manufactured at application required levels, are structurally robust, have reproducible and engineerable properties, and have excellent aging characteristics.
A Robust, Microwave Rain Gauge
NASA Astrophysics Data System (ADS)
Mansheim, T. J.; Niemeier, J. J.; Kruger, A.
2008-12-01
Researchers at The University of Iowa have developed an all-electronic rain gauge that uses microwave sensors operating at either 10 GHz or 23 GHz, and measures the Doppler shift caused by falling raindrops. It is straightforward to interface these sensors with conventional data loggers, or integrate them into a wireless sensor network. A disadvantage of these microwave rain gauges is that they consume significant power when they are operating. However, this may be partially negated by using data loggers' or sensors networks' sleep-wake-sleep mechanism. Advantages of the microwave rain gauges are that one can make them very robust, they cannot clog, they don't have mechanical parts that wear out, and they don't have to be perfectly level. Prototype microwave rain gauges were collocated with tipping-bucket rain gauges, and data were collected for two seasons. At higher rain rates, microwave rain gauge measurements compare well with tipping-bucket measurements. At lower rain rates, the microwave rain gauges provide more detailed information than tipping buckets, which quantize measurement typically in 1 tip per 0.01 inch, or 1 tip per mm of rainfall.
Robust boosting via convex optimization
NASA Astrophysics Data System (ADS)
Rätsch, Gunnar
2001-12-01
In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues: o The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution. o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms. o How to make Boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness. o How to adapt boosting to regression problems
Gelman, Hannah; Gruebele, Martin
2014-01-01
Fast folding proteins have been a major focus of computational and experimental study because they are accessible to both techniques: they are small and fast enough to be reasonably simulated with current computational power, but have dynamics slow enough to be observed with specially developed experimental techniques. This coupled study of fast folding proteins has provided insight into the mechanisms which allow some proteins to find their native conformation well less than 1 ms and has uncovered examples of theoretically predicted phenomena such as downhill folding. The study of fast folders also informs our understanding of even “slow” folding processes: fast folders are small, relatively simple protein domains and the principles that govern their folding also govern the folding of more complex systems. This review summarizes the major theoretical and experimental techniques used to study fast folding proteins and provides an overview of the major findings of fast folding research. Finally, we examine the themes that have emerged from studying fast folders and briefly summarize their application to protein folding in general as well as some work that is left to do. PMID:24641816
O'Brien, Travis A.; Kashinath, Karthik
2015-05-22
This software implements the fast, self-consistent probability density estimation described by O'Brien et al. (2014, doi: ). It uses a non-uniform fast Fourier transform technique to reduce the computational cost of an objective and self-consistent kernel density estimation method.
Trueland, Jennifer
2013-12-18
The 5.2 diet involves two days of fasting each week. It is being promoted as the key to sustained weight loss, as well as wider health benefits, despite the lack of evidence on the long-term effects. Nurses need to support patients who wish to try intermittent fasting. PMID:24345130
Robust satisficing and the probability of survival
NASA Astrophysics Data System (ADS)
Ben-Haim, Yakov
2014-01-01
Concepts of robustness are sometimes employed when decisions under uncertainty are made without probabilistic information. We present a theorem that establishes necessary and sufficient conditions for non-probabilistic robustness to be equivalent to the probability of satisfying the specified outcome requirements. When this holds, probability is enhanced (or maximised) by enhancing (or maximising) robustness. Two further theorems establish important special cases. These theorems have implications for success or survival under uncertainty. Applications to foraging and finance are discussed.
NASA Technical Reports Server (NTRS)
Wie, Bong; Liu, Qiang
1992-01-01
Both feedback and feedforward control approaches for uncertain dynamical systems (in particular, with uncertainty in structural mode frequency) are investigated. The control objective is to achieve a fast settling time (high performance) and robustness (insensitivity) to plant uncertainty. Preshaping of an ideal, time optimal control input using a tapped-delay filter is shown to provide a fast settling time with robust performance. A robust, non-minimum-phase feedback controller is synthesized with particular emphasis on its proper implementation for a non-zero set-point control problem. It is shown that a properly designed, feedback controller performs well, as compared with a time optimal open loop controller with special preshaping for performance robustness. Also included are two separate papers by the same authors on this subject.
NASA Astrophysics Data System (ADS)
Zhang, Haiyan; Nan, Rendong; Gan, Hengqian; Yue, Youling; Wu, Mingchang; Zhang, Zhiwei; Jin, Chengjin; Peng, Bo
2015-08-01
Five-hundred-meter Aperture Spherical radio Telescope (FAST) is a Chinese mega-science project to build the largest single dish radio telescope in the world. The construction was officially commenced in March 2011. The first light of FAST is expected in 2016. Due to the high sensitivity of FAST, Radio Frequency Interference (RFI) mitigation for the telescope is required to assure the realization of the scientific goals. In order to protect the radio environment of FAST site, the local government has established a radio quiet zone with 30 km radius. Moreover, Electromagnetic Compatibility (EMC) designs and measurements for FAST have also been carried out, and some examples, such as EMC designs for actuator and focus cabin, have been introduced briefly.
Robust Fixed-Structure Controller Synthesis
NASA Technical Reports Server (NTRS)
Corrado, Joseph R.; Haddad, Wassim M.; Gupta, Kajal (Technical Monitor)
2000-01-01
The ability to develop an integrated control system design methodology for robust high performance controllers satisfying multiple design criteria and real world hardware constraints constitutes a challenging task. The increasingly stringent performance specifications required for controlling such systems necessitates a trade-off between controller complexity and robustness. The principle challenge of the minimal complexity robust control design is to arrive at a tractable control design formulation in spite of the extreme complexity of such systems. Hence, design of minimal complexitY robust controllers for systems in the face of modeling errors has been a major preoccupation of system and control theorists and practitioners for the past several decades.
Robust Hypothesis Testing with alpha -Divergence
NASA Astrophysics Data System (ADS)
Gul, Gokhan; Zoubir, Abdelhak M.
2016-09-01
A robust minimax test for two composite hypotheses, which are determined by the neighborhoods of two nominal distributions with respect to a set of distances - called $\\alpha-$divergence distances, is proposed. Sion's minimax theorem is adopted to characterize the saddle value condition. Least favorable distributions, the robust decision rule and the robust likelihood ratio test are derived. If the nominal probability distributions satisfy a symmetry condition, the design procedure is shown to be simplified considerably. The parameters controlling the degree of robustness are bounded from above and the bounds are shown to be resulting from a solution of a set of equations. The simulations performed evaluate and exemplify the theoretical derivations.
Fast skin color detector for face extraction
NASA Astrophysics Data System (ADS)
Chen, Lihui; Grecos, Christos
2005-02-01
Face detection is the first step for an automatic face recognition system. For color images, skin color filter is considered as an important method for removing non-face pixels. In the paper, we will propose a novel and efficient detector for skin color regions for face extraction. The detector processes the image in four steps: lighting compensation, skin color filter and mask refinement and fast patch identification. Experimental results show that our detector is more robust and efficient than other skin color filters.
Southeast Asia: `A robust market`
Pagano, S.S.
1997-04-01
Southeast Asia is emerging as a robust market for exploration and field development activities. While much of the worldwide attention is focused on lucrative deep water drilling and production in the U.S. Gulf of Mexico, Brazil, and West Africa, the burgeoning Pacific Rim region is very much in the spotlight. As the industry approaches the next century. Southeast Asia is a key growth area that will be the focus of extensive drilling and development. Regional licensing activity is buoyant as oil and gas companies continue to express interest in Southeast Asian opportunities. During 1996, about 75 new license awards were granted. This year, at least an equal number of licenses likely will be awarded to international major and independent oil companies. In the past five years, the number of production-sharing contracts and concessions awarded declined slightly as oil companies apparently opted to invest in other foreign markets. Brunei government officials plan to open offshore areas to licensing in 1997, including what may prove to be attractive deep water areas. Indonesia`s state oil company Pertamina will offer 26 offshore tracts under production-sharing and technical assistance contracts this year. Malaysia expects to attract international interest in some 30 blocks it will soon offer under production-sharing terms. Bangladesh expects to call for tenders for an unspecified number of concessions later this year. Nearby, bids were submitted earlier this year to the Australian government for rights to explore 38 offshore areas. Results are expected to be announced by mid-year.
Robust control with structured perturbations
NASA Technical Reports Server (NTRS)
Keel, Leehyun
1988-01-01
Two important problems in the area of control systems design and analysis are discussed. The first is the robust stability using characteristic polynomial, which is treated first in characteristic polynomial coefficient space with respect to perturbations in the coefficients of the characteristic polynomial, and then for a control system containing perturbed parameters in the transfer function description of the plant. In coefficient space, a simple expression is first given for the l(sup 2) stability margin for both monic and non-monic cases. Following this, a method is extended to reveal much larger stability region. This result has been extended to the parameter space so that one can determine the stability margin, in terms of ranges of parameter variations, of the closed loop system when the nominal stabilizing controller is given. The stability margin can be enlarged by a choice of better stabilizing controller. The second problem describes the lower order stabilization problem, the motivation of the problem is as follows. Even though the wide range of stabilizing controller design methodologies is available in both the state space and transfer function domains, all of these methods produce unnecessarily high order controllers. In practice, the stabilization is only one of many requirements to be satisfied. Therefore, if the order of a stabilizing controller is excessively high, one can normally expect to have a even higher order controller on the completion of design such as inclusion of dynamic response requirements, etc. Therefore, it is reasonable to have a lowest possible order stabilizing controller first and then adjust the controller to meet additional requirements. The algorithm for designing a lower order stabilizing controller is given. The algorithm does not necessarily produce the minimum order controller; however, the algorithm is theoretically logical and some simulation results show that the algorithm works in general.