A dynamical systems approach to the tilted Bianchi models of solvable type
NASA Astrophysics Data System (ADS)
Coley, Alan; Hervik, Sigbjørn
2005-02-01
We use a dynamical systems approach to analyse the tilted spatially homogeneous Bianchi models of solvable type (e.g., types VIh and VIIh) with a perfect fluid and a linear barotropic γ-law equation of state. In particular, we study the late-time behaviour of tilted Bianchi models, with an emphasis on the existence of equilibrium points and their stability properties. We briefly discuss the tilted Bianchi type V models and the late-time asymptotic behaviour of irrotational Bianchi type VII0 models. We prove the important result that for non-inflationary Bianchi type VIIh models vacuum plane-wave solutions are the only future attracting equilibrium points in the Bianchi type VIIh invariant set. We then investigate the dynamics close to the plane-wave solutions in more detail, and discover some new features that arise in the dynamical behaviour of Bianchi cosmologies with the inclusion of tilt. We point out that in a tiny open set of parameter space in the type IV model (the loophole) there exist closed curves which act as attracting limit cycles. More interestingly, in the Bianchi type VIIh models there is a bifurcation in which a set of equilibrium points turns into closed orbits. There is a region in which both sets of closed curves coexist, and it appears that for the type VIIh models in this region the solution curves approach a compact surface which is topologically a torus.
An extended affinity propagation clustering method based on different data density types.
Zhao, XiuLi; Xu, WeiXiang
2015-01-01
The affinity propagation (AP) algorithm is a novel clustering method that does not require users to specify initial cluster centers in advance; it treats all data points equally as potential exemplars (cluster centers) and groups them purely by the degree of similarity among the data points. But in many cases different intensive areas exist within the same data set, meaning that the data are not distributed homogeneously; in such situations the AP algorithm cannot group the data points into ideal clusters. In this paper, we propose an extended AP clustering algorithm to deal with this problem. Our method has two steps: first, the data set is partitioned into several data density types according to the nearest distance of each data point; then the AP clustering method is applied separately to group the data points into clusters within each data density type. Two experiments are carried out to evaluate the performance of our algorithm: one uses an artificial data set and the other a real seismic data set. The results show that our algorithm obtains groups more accurately than both OPTICS and the standard AP clustering algorithm.
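A minimal sketch of this two-step procedure, assuming scikit-learn's AffinityPropagation; the quantile split on nearest-neighbour distance and the n_density_types parameter are illustrative stand-ins for the paper's density-typing rule, not the authors' exact method:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.neighbors import NearestNeighbors

def extended_ap(X, n_density_types=2):
    """Two-step clustering: split points into density types, then run AP per type."""
    # Step 1: local density proxy = distance from each point to its nearest neighbour.
    dists, _ = NearestNeighbors(n_neighbors=2).fit(X).kneighbors(X)
    nearest = dists[:, 1]                      # column 0 is the point itself
    # Illustrative density-typing rule: split on quantiles of the nearest distance.
    edges = np.quantile(nearest, np.linspace(0.0, 1.0, n_density_types + 1))
    labels, offset = np.full(len(X), -1), 0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (nearest >= lo) & (nearest <= hi)
        if mask.sum() < 2:
            continue
        # Step 2: plain affinity propagation inside each density type.
        ap = AffinityPropagation(random_state=0).fit(X[mask])
        labels[mask] = ap.labels_ + offset
        offset += len(ap.cluster_centers_indices_)
    return labels
```

On data with one dense and one sparse region, running AP separately per density type typically avoids the over-segmentation of the sparse region that a single global run produces.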
17 CFR 230.605 - Filing and use of the offering circular.
Code of Federal Regulations, 2011 CFR
2011-04-01
... similar process which will result in clearly legible copies. If printed, it shall be set in roman type at... tabular matter may be set in roman type at least as large as eight-point modern type. All type shall be...
On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems
NASA Astrophysics Data System (ADS)
Junge, Oliver; Kevrekidis, Ioannis G.
2017-06-01
We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as, e.g., saddle type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and illustrate the procedure through corresponding numerical experiments.
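A hedged sketch of the variational idea, with the Hénon map standing in for the dynamics; the one-sided nearest-point distance and the Lennard-Jones weight below are simplifying assumptions rather than the exact functional used by the authors:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def henon(P, a=1.4, b=0.3):
    """Illustrative dynamics: the Henon map applied row-wise to an (n, 2) array."""
    x, y = P[:, 0], P[:, 1]
    return np.column_stack([1.0 - a * x**2 + y, b * x])

def objective(flat, n, lj_weight=1e-4, sigma=0.05):
    P = flat.reshape(n, 2)
    # Invariance term: each image point should lie close to some point of the set.
    invariance = np.sum(cdist(henon(P), P).min(axis=1) ** 2)
    # Lennard-Jones-type term to spread the approximating points evenly.
    r = cdist(P, P) + np.eye(n)        # pad the diagonal to avoid division by zero
    lj = np.sum((sigma / r) ** 12 - (sigma / r) ** 6)
    return invariance + lj_weight * lj

n = 100
P0 = np.random.default_rng(0).uniform(-1.0, 1.0, size=(n, 2))
res = minimize(objective, P0.ravel(), args=(n,), method="L-BFGS-B")
approx_invariant_set = res.x.reshape(n, 2)   # an approximately invariant point set
```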
24 CFR 1715.50 - Advertising disclaimers; subdivisions registered and effective with HUD.
Code of Federal Regulations, 2010 CFR
2010-04-01
... statement may be set in type of at least six point font. (c) This disclaimer statement need not appear on... at the bottom of the front page. The disclaimer statement shall be set in type of at least ten point font. Obtain the Property Report required by Federal law and read it before signing anything. No...
Spectroscopic evidence for a type II Weyl semimetallic state in MoTe2
Huang, Lunan; McCormick, Timothy M.; Ochi, Masayuki; ...
2016-07-11
In a type I Dirac or Weyl semimetal, the low-energy states are squeezed to a single point in momentum space when the chemical potential μ is tuned precisely to the Dirac/Weyl point. Recently, a type II Weyl semimetal was predicted to exist, where the Weyl states connect hole and electron bands, separated by an indirect gap. This leads to unusual energy states, where hole and electron pockets touch at the Weyl point. Here we present the discovery of a type II topological Weyl semimetal state in pure MoTe2, where two sets of Weyl points (W_2^±, W_3^±) exist at the touching points of electron and hole pockets and are located at different binding energies above E_F. Using angle-resolved photoemission spectroscopy, modelling, density functional theory and calculations of Berry curvature, we identify the Weyl points and demonstrate that they are connected by different sets of Fermi arcs for each of the two surface terminations. We also find new surface 'track states' that form closed loops and are unique to type II Weyl semimetals. Lastly, this material provides an exciting new platform to study the properties of Weyl fermions.
Spectroscopic evidence for a type II Weyl semimetallic state in MoTe2
NASA Astrophysics Data System (ADS)
Huang, Lunan; McCormick, Timothy M.; Ochi, Masayuki; Zhao, Zhiying; Suzuki, Michi-To; Arita, Ryotaro; Wu, Yun; Mou, Daixiang; Cao, Huibo; Yan, Jiaqiang; Trivedi, Nandini; Kaminski, Adam
2016-11-01
In a type I Dirac or Weyl semimetal, the low-energy states are squeezed to a single point in momentum space when the chemical potential μ is tuned precisely to the Dirac/Weyl point. Recently, a type II Weyl semimetal was predicted to exist, where the Weyl states connect hole and electron bands, separated by an indirect gap. This leads to unusual energy states, where hole and electron pockets touch at the Weyl point. Here we present the discovery of a type II topological Weyl semimetal state in pure MoTe2, where two sets of Weyl points (W_2^±, W_3^±) exist at the touching points of electron and hole pockets and are located at different binding energies above E_F. Using angle-resolved photoemission spectroscopy, modelling, density functional theory and calculations of Berry curvature, we identify the Weyl points and demonstrate that they are connected by different sets of Fermi arcs for each of the two surface terminations. We also find new surface 'track states' that form closed loops and are unique to type II Weyl semimetals. This material provides an exciting new platform to study the properties of Weyl fermions.
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
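The dynamic-programming scheme can be sketched generically as below. The measure function (union of the item sets in a segment) and the segment difference (summed symmetric difference) are plausible illustrative choices, not necessarily the paper's definitions:

```python
def segment_difference(item_sets, i, j):
    """Cost of one segment covering time points i..j-1: the total symmetric
    difference between each point's item set and the segment's union item set
    (one illustrative measure function)."""
    seg = set().union(*item_sets[i:j])
    return sum(len(seg ^ s) for s in item_sets[i:j])

def optimal_segmentation(item_sets, k):
    """Split the series into k segments minimizing the total segment difference."""
    n = len(item_sets)
    INF = float("inf")
    # cost[m][j] = best cost of covering the first j points with m segments
    cost = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    cost[0][0] = 0.0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = cost[m - 1][i] + segment_difference(item_sets, i, j)
                if c < cost[m][j]:
                    cost[m][j], back[m][j] = c, i
    # Recover the segment boundaries by walking the backpointers.
    cuts, j = [], n
    for m in range(k, 0, -1):
        cuts.append((back[m][j], j))
        j = back[m][j]
    return list(reversed(cuts)), cost[k][n]

series = [{"a", "b"}, {"a"}, {"a", "c"}, {"d"}, {"d", "e"}]
print(optimal_segmentation(series, 2))   # -> ([(0, 3), (3, 5)], 5.0)
```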
Local, smooth, and consistent Jacobi set simplification
Bhatia, Harsh; Wang, Bei; Norgard, Gregory; ...
2014-10-31
The relation between two Morse functions defined on a smooth, compact, and orientable 2-manifold can be studied in terms of their Jacobi set. The Jacobi set contains points in the domain where the gradients of the two functions are aligned. Both the Jacobi set itself as well as the segmentation of the domain it induces have been shown to be useful in various applications. In practice, unfortunately, functions often contain noise and discretization artifacts, causing their Jacobi set to become unmanageably large and complex. Although there exist techniques to simplify Jacobi sets, they are unsuitable for most applications as they lack fine-grained control over the process, and heavily restrict the type of simplifications possible. In this paper, we introduce a new framework that generalizes critical point cancellations in scalar functions to Jacobi sets in two dimensions. We present a new interpretation of Jacobi set simplification based on the perspective of domain segmentation. Generalizing the cancellation of critical points from scalar functions to Jacobi sets, we focus on simplifications that can be realized by smooth approximations of the corresponding functions, and show how these cancellations imply simultaneous simplification of contiguous subsets of the Jacobi set. Using these extended cancellations as atomic operations, we introduce an algorithm to successively cancel subsets of the Jacobi set with minimal modifications to some user-defined metric. We show that for simply connected domains, our algorithm reduces a given Jacobi set to its minimal configuration, that is, one with no birth–death points (a birth–death point is a specific type of singularity within the Jacobi set where the level sets of the two functions and the Jacobi set have a common normal direction).
Reading Materials in Large Type. Reference Circular No. 87-4.
ERIC Educational Resources Information Center
Library of Congress, Washington, DC. National Library Service for the Blind and Physically Handicapped.
This circular provides information about reading materials in large type, i.e., materials set in type that is a minimum size of 14-point and, most commonly, 16- to 18-point size. Most of the materials listed are typeset, but a few are photographically enlarged conventionally printed books or typewritten materials prepared using a large-print…
NASA Astrophysics Data System (ADS)
Schwind, Michael
Structure from Motion (SfM) is a photogrammetric technique whereby three-dimensional (3D) structures are estimated from overlapping two-dimensional (2D) image sequences. It is studied in the field of computer vision and utilized in fields such as archeology, engineering, and the geosciences. Currently, many SfM software packages exist that allow for the generation of 3D point clouds. Little work has been done to show how topographic data generated by these software packages differ over varying terrain types and why they might produce different results. This work aims to compare and characterize the differences between point clouds generated by three different SfM software packages: two well-known proprietary solutions (Pix4D, Agisoft PhotoScan) and one open source solution (OpenDroneMap). Five terrain types were imaged utilizing a DJI Phantom 3 Professional small unmanned aircraft system (sUAS): a marsh environment, a gently sloped sandy beach and jetties, a forested peninsula, a house, and a flat parking lot. Each set of imagery was processed with each software package and the results directly compared. Before processing, the software settings were analyzed and chosen to be as similar as possible across the three packages, in an attempt to minimize point cloud differences caused by dissimilar settings. The characteristics of the resultant point clouds were then compared with each other. Furthermore, a terrestrial light detection and ranging (LiDAR) survey was conducted over the flat parking lot using a Riegl VZ-400 scanner. These data served as ground truth for an accuracy assessment of the sUAS-SfM point clouds. Differences were found between the results, apparent not only in the characteristics of the clouds but also in their accuracy. This study allows users of SfM photogrammetry to better understand how different processing software packages compare and the inherent sensitivity of SfM automation in 3D reconstruction. Because this study used mostly default settings within the software, further research would benefit from investigating the effects that changing parameters have on the fidelity of point cloud datasets generated from different SfM software packages.
Genetic identification of brain cell types underlying schizophrenia.
Skene, Nathan G; Bryois, Julien; Bakken, Trygve E; Breen, Gerome; Crowley, James J; Gaspar, Héléna A; Giusti-Rodriguez, Paola; Hodge, Rebecca D; Miller, Jeremy A; Muñoz-Manchado, Ana B; O'Donovan, Michael C; Owen, Michael J; Pardiñas, Antonio F; Ryge, Jesper; Walters, James T R; Linnarsson, Sten; Lein, Ed S; Sullivan, Patrick F; Hjerling-Leffler, Jens
2018-06-01
With few exceptions, the marked advances in knowledge about the genetic basis of schizophrenia have not converged on findings that can be confidently used for precise experimental modeling. By applying knowledge of the cellular taxonomy of the brain from single-cell RNA sequencing, we evaluated whether the genomic loci implicated in schizophrenia map onto specific brain cell types. We found that the common-variant genomic results consistently mapped to pyramidal cells, medium spiny neurons (MSNs) and certain interneurons, but far less consistently to embryonic, progenitor or glial cells. These enrichments were due to sets of genes that were specifically expressed in each of these cell types. We also found that many of the diverse gene sets previously associated with schizophrenia (genes involved in synaptic function, those encoding mRNAs that interact with FMRP, antipsychotic targets, etc.) generally implicated the same brain cell types. Our results suggest a parsimonious explanation: the common-variant genetic results for schizophrenia point at a limited set of neurons, and the gene sets point to the same cells. The genetic risk associated with MSNs did not overlap with that of glutamatergic pyramidal cells and interneurons, suggesting that different cell types have biologically distinct roles in schizophrenia.
Ren, Biye
2003-01-01
Structure-boiling point relationships are studied for a series of oxo organic compounds by means of multiple linear regression (MLR) analysis. Excellent MLR models based on the recently introduced Xu index and the atom-type-based AI indices are obtained for the two subsets containing respectively 77 ethers and 107 carbonyl compounds and a combined set of 184 oxo compounds. The best models are tested using the leave-one-out cross-validation and an external test set, respectively. The MLR model produces a correlation coefficient of r = 0.9977 and a standard error of s = 3.99 degrees C for the training set of 184 compounds, and r(cv) = 0.9974 and s(cv) = 4.16 degrees C for the cross-validation set, and r(pred) = 0.9949 and s(pred) = 4.38 degrees C for the prediction set of 21 compounds. For the two subsets containing respectively 77 ethers and 107 carbonyl compounds, the quality of the models is further improved. The standard errors are reduced to 3.30 and 3.02 degrees C, respectively. Furthermore, the results obtained from this study indicate that the boiling points of the studied oxo compound dominantly depend on molecular size and also depend on individual atom types, especially oxygen heteroatoms in molecules due to strong polar interactions between molecules. These excellent structure-boiling point models not only provide profound insights into the role of structural features in a molecule but also illustrate the usefulness of these indices in QSPR/QSAR modeling of complex compounds.
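The reported workflow, multiple linear regression on topological descriptors with leave-one-out cross-validation, has a standard scikit-learn shape. In this sketch, random placeholder descriptors stand in for the Xu and atom-type AI indices, so the printed statistics are illustrative only:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Placeholder descriptor matrix: rows = compounds, columns = (Xu index,
# atom-type AI indices, ...). Values here are random stand-ins, not real indices.
rng = np.random.default_rng(1)
X = rng.normal(size=(184, 4))
bp = 100.0 + X @ np.array([40.0, 5.0, 3.0, 2.0]) + rng.normal(0, 4, 184)

model = LinearRegression().fit(X, bp)
r_train = np.corrcoef(bp, model.predict(X))[0, 1]
# Leave-one-out cross-validation, as in the paper's model validation step.
pred_cv = cross_val_predict(LinearRegression(), X, bp, cv=LeaveOneOut())
r_cv = np.corrcoef(bp, pred_cv)[0, 1]
s_cv = np.sqrt(np.mean((bp - pred_cv) ** 2))
print(f"r = {r_train:.4f}, r(cv) = {r_cv:.4f}, s(cv) = {s_cv:.2f} degrees C")
```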
Code of Federal Regulations, 2010 CFR
2010-01-01
... height) knocked out of a 1″ (2.54 cm) deep band. The type for the words “MINIMUM” and the principal... should be set as “normal.” The type for the fuel name is 50 point (1/2″ 1.27 cm) cap height) knocked out... point (1/2″ 1.27 cm) cap height) knocked out of a 1″ (2.54 cm) deep band. All other type is 24 pt. (1/4...
Code of Federal Regulations, 2011 CFR
2011-01-01
... height) knocked out of a 1″ (2.54 cm) deep band. The type for the words “MINIMUM” and the principal... should be set as “normal.” The type for the fuel name is 50 point (1/2″ 1.27 cm) cap height) knocked out... point (1/2″ 1.27 cm) cap height) knocked out of a 1″ (2.54 cm) deep band. All other type is 24 pt. (1/4...
Trivial dynamics in discrete-time systems: carrying simplex and translation arcs
NASA Astrophysics Data System (ADS)
Niu, Lei; Ruiz-Herrera, Alfonso
2018-06-01
In this paper we show that the dynamical behavior in the first octant of the classical Kolmogorov systems of competitive type admitting a carrying simplex can sometimes be determined completely by the number of fixed points on the boundary and the local behavior around them. Roughly speaking, the map T has trivial dynamics (i.e. the omega limit set of any orbit is a connected set contained in the set of fixed points) provided T has exactly four hyperbolic nontrivial fixed points on the boundary of the first octant, with the local attractors and local repellers lying on the carrying simplex, and there exists a unique hyperbolic fixed point in the interior. Our results are applied to some classical models including the Leslie–Gower models, Atkinson-Allen systems and Ricker maps.
A Logical Basis In The Layered Computer Vision Systems Model
NASA Astrophysics Data System (ADS)
Tejwani, Y. J.
1986-03-01
In this paper a four-layer computer vision system model is described. The model uses a finite memory scratch pad. In this model planar objects are defined as predicates. Predicates are relations on a k-tuple. The k-tuple consists of primitive points and relationships between primitive points. The relationship between points can be of the direct type or the indirect type. Entities are goals which are satisfied by a set of clauses. The grammar used to construct these clauses is examined.
Benchmark Design and Installation: A synthesis of Existing Information.
1987-07-01
casings (15 ft deep) drilled to rock and filled with concrete. Disks: 1. Set on vertically stable structures (e.g., dam monoliths). 2. Set in rock ... Structural movement survey: 1. Rock outcrops (first choice): chiseled square on high point. 2. Massive concrete structure (second choice): cut square on ... bolt marker (type 2). Table C1. Recommended benchmarks, by type of condition or terrain and type of marker: bedrock, rock outcrops ...
Metabolic vs. hedonic obesity: a conceptual distinction and its clinical implications
Zhang, Y.; Mechanick, J. I.; Korner, J.; Peterli, R.
2015-01-01
Summary Body weight is determined via both metabolic and hedonic mechanisms. Metabolic regulation of body weight centres around the ‘body weight set point’, which is programmed by energy balance circuitry in the hypothalamus and other specific brain regions. The metabolic body weight set point has a genetic basis, but exposure to an obesogenic environment may elicit allostatic responses and upward drift of the set point, leading to a higher maintained body weight. However, an elevated steady‐state body weight may also be achieved without an alteration of the metabolic set point, via sustained hedonic over‐eating, which is governed by the reward system of the brain and can override homeostatic metabolic signals. While hedonic signals are potent influences in determining food intake, metabolic regulation involves the active control of both food intake and energy expenditure. When overweight is due to elevation of the metabolic set point (‘metabolic obesity’), energy expenditure theoretically falls onto the standard energy–mass regression line. In contrast, when a steady‐state weight is above the metabolic set point due to hedonic over‐eating (‘hedonic obesity’), a persistent compensatory increase in energy expenditure per unit metabolic mass may be demonstrable. Recognition of the two types of obesity may lead to more effective treatment and prevention of obesity. PMID:25588316
NASA Astrophysics Data System (ADS)
Kanoveĭ, V. G.; Linton, Tom; Uspensky, Vladimir A.
2008-12-01
Lebesgue measure of point sets is characterized in terms of the existence of various strategies in a certain coin-flipping game. 'Rational' and 'discrete' modifications of this game are investigated. We prove that if one of the players has a winning strategy in a game of this type depending on a given set P ⊆ [0, 1], then this set is measurable. Bibliography: 11 titles.
Selecting the most appropriate time points to profile in high-throughput studies
Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv
2017-01-01
Biological systems are increasingly being studied by high-throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method that solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation of the expression values at the non-selected points. Further, even though the selection is based only on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high-throughput time series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972
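A hedged sketch of the underlying idea: keep the subset of time points from which interpolation best reconstructs the densely sampled pilot profiles. Greedy backward elimination with cubic interpolation is a simple stand-in for TPS's actual optimization:

```python
import numpy as np
from scipy.interpolate import interp1d

def reconstruction_error(t, Y, chosen):
    """Mean squared error of reconstructing all profiles from the chosen points."""
    f = interp1d(t[chosen], Y[:, chosen], kind="cubic", axis=1,
                 fill_value="extrapolate")
    return np.mean((f(t) - Y) ** 2)

def select_time_points(t, Y, k):
    """Greedy backward elimination: repeatedly drop the point whose removal
    hurts reconstruction least; the endpoints are always kept."""
    chosen = list(range(len(t)))
    while len(chosen) > k:
        best = None
        for c in chosen[1:-1]:                   # never drop the endpoints
            trial = np.array([i for i in chosen if i != c])
            err = reconstruction_error(t, Y, trial)
            if best is None or err < best[0]:
                best = (err, c)
        chosen.remove(best[1])
    return chosen

t = np.linspace(0, 10, 20)                   # densely sampled pilot series
Y = np.sin(np.outer([1.0, 1.5, 2.0], t))     # three illustrative gene profiles
print(select_time_points(t, Y, 8))           # indices of the 8 retained points
```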
ERIC Educational Resources Information Center
Cosgun Ögeyik, Muhlise
2017-01-01
In English language teaching settings, the type of lecture is important since students should be exposed to instantly recognisable linguistic features in the target language through interaction. This quasi-experimental study was designed to compare the effectiveness of PowerPoint presentations (PPP) and conventional lecture/discussion sessions on…
Child care choices, food intake, and children's obesity status in the United States.
Mandal, Bidisha; Powell, Lisa M
2014-07-01
This article studies two pathways in which selection into different types of child care settings may affect the likelihood of childhood obesity. First, frequency of intake of high energy-dense and low energy-dense food items may vary across care settings, affecting weight outcomes. We find that increased use of paid and regulated care settings, such as center care and Head Start, is associated with higher consumption of fruits and vegetables. Among children from single-mother households, the probability of obesity increases by 15 percentage points with an increase in intake of soft drinks from four to six times a week to daily consumption, and by 25 percentage points with an increase in intake of fast food from one to three times a week to four to six times a week. Among children from two-parent households, eating vegetables one additional time a day is associated with a 10 percentage point decrease in the probability of obesity, while one additional drink of juice a day is associated with a 10 percentage point increase. Second, variation across care types could be manifested through differences in the structure of the physical environment not captured by differences in food intake alone. This type of effect is found to be marginal and is statistically significant among children from two-parent households only. Data are used from the Early Childhood Longitudinal Study - Birth Cohort surveys (N=10,700; years=2001-2008). Children's age ranged from four to six years in the sample. Copyright © 2014 Elsevier B.V. All rights reserved.
Effects of lines of progress and semilogarithmic charts on ratings of charted data
Bailey, Donald B.
1984-01-01
The extent to which interrater agreement and ratings of significance on both changes in level and trend are affected by lines of progress and semilogarithmic charts was investigated. Thirteen graduate students rated four sets of charts, each set containing 19 phase changes. Set I data were plotted on equal interval charts. In Set II a line of progress was drawn through each phase on each chart. In Set III data points were replotted on semilogarithmic charts. In Set IV a line of progress was drawn through each phase of each Set III chart. A significant main effect on interrater agreement was found for lines of progress as well as a significant 2-way interaction between lines of progress and change type. Three main effects (chart type, lines of progress, and type of change) and a significant 3-way interaction were found for ratings of significance. Implications of these data for visual analysis of charted data are discussed. PMID:16795676
The influence of sampling interval on the accuracy of trail impact assessment
Leung, Y.-F.; Marion, J.L.
1999-01-01
Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The loss of accuracy in lineal extent estimates with increasing sampling interval varied across impact types, while the loss in frequency-of-occurrence estimates was consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing effort in data collection.
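The resampling-simulation method can be illustrated directly. The trail length and impact segments below are invented numbers; a real census would record many impact types over many trails:

```python
import numpy as np

def point_sample(impacts, trail_len, interval):
    """Systematic point sampling: which sample points fall inside an impact?"""
    pts = np.arange(0.0, trail_len, interval)
    hits = [any(s <= p <= e for s, e in impacts) for p in pts]
    occurrence = any(hits)                  # is the impact detected at all?
    extent = np.mean(hits) * trail_len      # estimated lineal extent (m)
    return occurrence, extent

# Census (ground truth): impact segments in metres along a 5 km trail.
impacts = [(120.0, 180.0), (900.0, 960.0), (3300.0, 3450.0)]
true_extent = sum(e - s for s, e in impacts)
for interval in (20, 50, 100, 200, 500):
    occ, ext = point_sample(impacts, 5000.0, interval)
    print(f"{interval:>4} m: occurrence={occ}, extent={ext:7.1f} (true {true_extent})")
```

Running this shows the pattern the paper reports: extent estimates stay serviceable at moderate intervals, while occurrence detection degrades sharply once the interval exceeds the typical impact length.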
NASA Astrophysics Data System (ADS)
Lilja, Dan
2018-03-01
Since its inception in the 1970s at the hands of Feigenbaum and, independently, Coullet and Tresser the study of renormalization operators in dynamics has been very successful at explaining universality phenomena observed in certain families of dynamical systems. The first proof of existence of a hyperbolic fixed point for renormalization of area-preserving maps was given by Eckmann et al. (Mem Am Math Soc 47(289):vi+122, 1984). However, there are still many things that are unknown in this setting, in particular regarding the invariant Cantor sets of infinitely renormalizable maps. In this paper we show that the invariant Cantor set of period doubling type of any infinitely renormalizable area-preserving map in the universality class of the Eckmann-Koch-Wittwer renormalization fixed point is always contained in a Lipschitz curve but never contained in a smooth curve. This extends previous results by de Carvalho, Lyubich and Martens about strongly dissipative maps of the plane close to unimodal maps to the area-preserving setting. The method used for constructing the Lipschitz curve is very similar to the method used in the dissipative case but proving the nonexistence of smooth curves requires new techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaufmann, Ralph M., E-mail: rkaufman@math.purdue.edu; Khlebnikov, Sergei, E-mail: skhleb@physics.purdue.edu; Wehefritz-Kaufmann, Birgit, E-mail: ebkaufma@math.purdue.edu
2012-11-15
Motivated by the Double Gyroid nanowire network we develop methods to detect Dirac points and classify level crossings, i.e., singularities in the spectrum of a family of Hamiltonians. The approach we use is singularity theory. Using this language, we obtain a characterization of Dirac points and also show that the branching behavior of the level crossings is given by an unfolding of A_n type singularities. Which type of singularity occurs can be read off a characteristic region inside the miniversal unfolding of an A_k singularity. We then apply these methods in the setting of families of graph Hamiltonians, such as those for wire networks. In the particular case of the Double Gyroid we analytically classify its singularities and show that it has Dirac points. This indicates that nanowire systems of this type should have very special physical properties. Highlights: (1) New method for analytically finding Dirac points. (2) Novel relation of level crossings to singularity theory. (3) More precise version of the von Neumann-Wigner theorem for arbitrary smooth families of Hamiltonians of fixed size. (4) Analytical proof of the existence of Dirac points for the Gyroid wire network.
The Julia sets of basic uniCremer polynomials of arbitrary degree
NASA Astrophysics Data System (ADS)
Blokh, Alexander; Oversteegen, Lex
Let P be a polynomial of degree d with a Cremer point p and no repelling or parabolic periodic bi-accessible points. We show that there are two types of such Julia sets J_P. The red dwarf J_P are nowhere connected im kleinen and such that the intersection of all impressions of external angles is a continuum containing p and the orbits of all critical images. The solar J_P are such that every angle with dense orbit has a degenerate impression disjoint from other impressions and J_P is connected im kleinen at its landing point. We study bi-accessible points and locally connected models of J_P and show that such sets J_P appear through polynomial-like maps for generic polynomials with Cremer points. Since known tools break down for d>2 (if d>2, it is not known if there are small cycles near p, while if d=2, this result is due to Yoccoz), we introduce wandering ray continua in J_P and provide a new application of Thurston laminations.
Superposition and alignment of labeled point clouds.
Fober, Thomas; Glinca, Serghei; Klebe, Gerhard; Hüllermeier, Eyke
2011-01-01
Geometric objects are often represented approximately in terms of a finite set of points in three-dimensional Euclidean space. In this paper, we extend this representation to what we call labeled point clouds. A labeled point cloud is a finite set of points, where each point is not only associated with a position in three-dimensional space, but also with a discrete class label that represents a specific property. This type of model is especially suitable for modeling biomolecules such as proteins and protein binding sites, where a label may represent an atom type or a physico-chemical property. Proceeding from this representation, we address the question of how to compare two labeled point clouds in terms of their similarity. Using fuzzy modeling techniques, we develop a suitable similarity measure as well as an efficient evolutionary algorithm to compute it. Moreover, we consider the problem of establishing an alignment of the structures in the sense of a one-to-one correspondence between their basic constituents. From a biological point of view, alignments of this kind are of great interest, since mutually corresponding molecular constituents offer important information about evolution and heredity, and can also serve as a means to explain a degree of similarity. In this paper, we therefore develop a method for computing pairwise or multiple alignments of labeled point clouds. To this end, we proceed from an optimal superposition of the corresponding point clouds and construct an alignment which is as much as possible in agreement with the neighborhood structure established by this superposition. We apply our methods to the structural analysis of protein binding sites.
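A minimal sketch of the superposition step: rigid (Kabsch) alignment of corresponding points, followed by a simple label-aware similarity score. The paper's fuzzy similarity measure and evolutionary optimizer are replaced here by much simpler stand-ins, and the known one-to-one correspondence between the clouds is an assumption:

```python
import numpy as np

def kabsch_superpose(P, Q):
    """Optimal rotation/translation of Q onto P (rows are corresponding points)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Qc.T @ Pc)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return Qc @ R + P.mean(0)

def label_similarity(P, lp, Q, lq, scale=1.0):
    """Soft similarity: nearest-neighbour distances count only between like labels."""
    score = 0.0
    for i, p in enumerate(P):
        same = Q[lq == lp[i]]
        if len(same):
            score += np.exp(-np.min(np.linalg.norm(same - p, axis=1)) / scale)
    return score / len(P)

P = np.random.default_rng(0).normal(size=(30, 3))
lp = np.repeat([0, 1, 2], 10)                 # e.g., three atom types
theta = 0.3                                   # Q: a rotated, noisy copy of P
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + 0.01 * np.random.default_rng(1).normal(size=P.shape)
print(label_similarity(P, lp, kabsch_superpose(P, Q), lp))   # close to 1
```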
Yu, Weiyu; Wardrop, Nicola A; Bain, Robert; Wright, Jim A
2017-07-01
Sustainable Development Goal (SDG) 6 has expanded the Millennium Development Goals' focus from improved drinking-water to safely managed water services. This expanded focus, which includes issues such as water quality, requires richer monitoring data and potentially integration of datasets from different sources. Relevant data sets include water point mapping (WPM), the survey of boreholes, wells and other water points, census data and household survey data. This study examined inconsistencies between population census and WPM datasets for Cambodia, Liberia and Tanzania, and identified potential barriers to integrating the two datasets to meet monitoring needs. Published estimates of the number of people served per water point were used to convert WPM data to population served by water source type per area, which was then compared with census reports. For Cambodia and Tanzania, discrepancies with census data suggested incomplete WPM coverage. In Liberia, where the data sets were consistent, WPM-derived data on functionality, quantity and quality of drinking water were further combined with census area statistics to generate an enhanced drinking-water access measure for protected wells and springs. The process revealed barriers to integrating census and WPM data, including the exclusion of water points not used for drinking by households; the matching of census and WPM source types; temporal mismatches between data sources; data quality issues such as missing or implausible values; and underlying assumptions about the population served by different water point technologies. However, integration of these two data sets could be used to identify and rectify gaps in WPM coverage. If WPM databases become more complete and the above barriers are addressed, integration could also be used to develop more realistic measures of household drinking-water access for monitoring. Copyright © 2017 Elsevier GmbH. All rights reserved.
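The conversion-and-comparison step has a simple tabular shape. The sketch below uses pandas with invented districts, counts, and persons-per-source figures; real values come from the literature and vary by country and technology:

```python
import pandas as pd

# Illustrative persons-served-per-water-point assumptions (placeholder values).
PERSONS_PER_POINT = {"borehole": 300, "protected_well": 200, "protected_spring": 150}

wpm = pd.DataFrame({
    "district": ["A", "A", "B", "B", "B"],
    "type": ["borehole", "protected_well", "borehole", "protected_spring", "borehole"],
    "functional": [True, True, True, False, True],
})
census_served = pd.Series({"A": 520, "B": 900})   # census: improved-source users

# Non-functional points serve nobody; others serve the assumed population.
wpm["served"] = wpm["type"].map(PERSONS_PER_POINT).where(wpm["functional"], 0)
comparison = pd.DataFrame({
    "wpm": wpm.groupby("district")["served"].sum(),
    "census": census_served,
})
comparison["ratio"] = comparison["wpm"] / comparison["census"]
print(comparison)   # ratios well below 1 suggest incomplete WPM coverage
```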
Eigenstrain as a mechanical set-point of cells.
Lin, Shengmao; Lampi, Marsha C; Reinhart-King, Cynthia A; Tsui, Gary; Wang, Jian; Nelson, Carl A; Gu, Linxia
2018-02-05
Cell contraction regulates how cells sense their mechanical environment. We sought to identify the set-point of cell contraction, also referred to as tensional homeostasis. In this work, bovine aortic endothelial cells (BAECs), cultured on substrates with different stiffness, were characterized using traction force microscopy (TFM). Numerical models were developed to provide insights into the mechanics of cell-substrate interactions. Cell contraction was modeled as eigenstrain which could induce isometric cell contraction without external forces. The predicted traction stresses matched well with TFM measurements. Furthermore, our numerical model provided cell stress and displacement maps for inspecting the fundamental regulating mechanism of cell mechanosensing. We showed that cell spread area, traction force on a substrate, as well as the average stress of a cell were increased in response to a stiffer substrate. However, the cell average strain, which is cell type-specific, was kept at the same level regardless of the substrate stiffness. This indicated that the cell average strain is the tensional homeostasis that each type of cell tries to maintain. Furthermore, cell contraction in terms of eigenstrain was found to be the same for both BAECs and fibroblast cells in different mechanical environments. This implied a potential mechanical set-point across different cell types. Our results suggest that additional measurements of contractility might be useful for monitoring cell mechanosensing as well as dynamic remodeling of the extracellular matrix (ECM). This work could help to advance the understanding of the cell-ECM relationship, leading to better regenerative strategies.
Spray CVD for Making Solar-Cell Absorber Layers
NASA Technical Reports Server (NTRS)
Banger, Kulbinder K.; Harris, Jerry; Jin, Michael H.; Hepp, Aloysius
2007-01-01
Spray chemical vapor deposition (spray CVD) processes of a special type have been investigated for use in making CuInS2 absorber layers of thin-film solar photovoltaic cells from either of two subclasses of precursor compounds: [(PBu3)2Cu(SEt)2In(SEt)2] or [(PPh3)2Cu(SEt)2In(SEt)2]. The CuInS2 films produced in the experiments have been characterized by x-ray diffraction, scanning electron microscopy, energy-dispersive spectroscopy, and four-point-probe electrical tests.
NASA Astrophysics Data System (ADS)
Altin, Necmi
2018-05-01
An interval type-2 fuzzy logic controller-based maximum power point tracking algorithm and direct current-direct current (DC-DC) converter topology are proposed for photovoltaic (PV) systems. The proposed maximum power point tracking algorithm is designed based on an interval type-2 fuzzy logic controller that has an ability to handle uncertainties. The change in PV power and the change in PV voltage are determined as inputs of the proposed controller, while the change in duty cycle is determined as the output of the controller. Seven interval type-2 fuzzy sets are determined and used as membership functions for input and output variables. The quadratic boost converter provides high voltage step-up ability without any reduction in performance and stability of the system. The performance of the proposed system is validated through MATLAB/Simulink simulations. It is seen that the proposed system provides high maximum power point tracking speed and accuracy even for fast changing atmospheric conditions and high voltage step-up requirements.
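The controller's interface, inputs ΔP and ΔV with a step adjustment as output, can be illustrated with a crisp stand-in. The sketch below is plain perturb-and-observe sign logic, not an interval type-2 fuzzy controller, and the toy PV curve and step size are invented; the fuzzy controller instead maps the same inputs through type-2 membership functions to a smoothly varying duty-cycle change:

```python
def pv_power(v):
    """Toy PV curve with a maximum near 17 V (illustrative, not a diode model)."""
    return max(0.0, 60.0 - 0.5 * (v - 17.0) ** 2)

v, v_prev, p_prev = 12.0, 12.0, 0.0
for _ in range(60):
    p = pv_power(v)
    dP, dV = p - p_prev, v - v_prev
    v_prev, p_prev = v, p
    # Sign logic shared by P&O and the fuzzy rule base: climb the P-V curve.
    if dP == 0:
        step = 0.0
    elif (dP > 0) == (dV > 0):
        step = 0.2      # power rose: keep moving in the same direction
    else:
        step = -0.2     # power fell: reverse direction
    v += step
print(f"settled near V = {v:.1f} V (true MPP at 17 V)")
```

The fixed step causes the oscillation around the maximum power point that the fuzzy controller is designed to suppress by shrinking its output as ΔP approaches zero.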
Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean
2017-12-04
Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint-type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications, from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole cell screens, individual proteins, physicochemical properties, as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient, and others. Based on ranked normalized scores for the metrics or data sets, deep neural networks (DNN) ranked higher than SVM, which in turn ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar-type plots indicates when models are inferior or perhaps overtrained. These results also suggest the need to assess deep learning further using multiple metrics, with much larger scale comparisons, prospective testing, and assessment of different fingerprints and DNN architectures beyond those used.
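The comparison scaffold is straightforward with scikit-learn. Random binary features below stand in for FCFP6 fingerprints (which would require a cheminformatics toolkit and real structures), and the model list is a small subset of the methods compared:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (cohen_kappa_score, f1_score,
                             matthews_corrcoef, roc_auc_score)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Random binary features standing in for FCFP6 fingerprints.
X, y = make_classification(n_samples=1000, n_features=256, random_state=0)
X = (X > 0).astype(float)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = {
    "DNN": MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
}
for name, m in models.items():
    m.fit(Xtr, ytr)
    prob, pred = m.predict_proba(Xte)[:, 1], m.predict(Xte)
    print(f"{name}: AUC={roc_auc_score(yte, prob):.3f} "
          f"F1={f1_score(yte, pred):.3f} "
          f"kappa={cohen_kappa_score(yte, pred):.3f} "
          f"MCC={matthews_corrcoef(yte, pred):.3f}")
```

Ranking models by normalized scores across several such metrics, rather than by a single metric, is the aggregation the study uses to compare methods across data sets.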
On E-discretization of tori of compact simple Lie groups. II
NASA Astrophysics Data System (ADS)
Hrivnák, Jiří; Juránek, Michal
2017-10-01
Ten types of discrete Fourier transforms of Weyl orbit functions are developed. Generalizing one-dimensional cosine, sine, and exponential, each type of the Weyl orbit function represents an exponential symmetrized with respect to a subgroup of the Weyl group. Fundamental domains of even affine and dual even affine Weyl groups, governing the argument and label symmetries of the even orbit functions, are determined. The discrete orthogonality relations are formulated on finite sets of points from the refinements of the dual weight lattices. Explicit counting formulas for the number of points of the discrete transforms are deduced. Real-valued Hartley orbit functions are introduced, and all ten types of the corresponding discrete Hartley transforms are detailed.
Forest type mapping of the Interior West
Bonnie Ruefenacht; Gretchen G. Moisen; Jock A. Blackard
2004-01-01
This paper develops techniques for the mapping of forest types in Arizona, New Mexico, and Wyoming. The methods involve regression-tree modeling using a variety of remote sensing and GIS layers along with Forest Inventory Analysis (FIA) point data. Regression-tree modeling is a fast and efficient technique of estimating variables for large data sets with high accuracy...
The integrable case of Adler-van Moerbeke. Discriminant set and bifurcation diagram
NASA Astrophysics Data System (ADS)
Ryabov, Pavel E.; Oshemkov, Andrej A.; Sokolov, Sergei V.
2016-09-01
The Adler-van Moerbeke integrable case of the Euler equations on the Lie algebra so(4) is investigated. For the L-A pair found by Reyman and Semenov-Tian-Shansky for this system, we explicitly present a spectral curve and construct the corresponding discriminant set. The singularities of the Adler-van Moerbeke integrable case and its bifurcation diagram are discussed. We explicitly describe singular points of rank 0, determine their types, and show that the momentum mapping takes them to self-intersection points of the real part of the discriminant set. In particular, the described structure of singularities of the Adler-van Moerbeke integrable case shows that it is topologically different from the other known integrable cases on so(4).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poyer, D.A.
In this report, tests of statistical significance of five sets of variables with household energy consumption (at the point of end-use) are described. Five models, in sequence, were empirically estimated and tested for statistical significance by using the Residential Energy Consumption Survey of the US Department of Energy, Energy Information Administration. Each model incorporated additional information, embodied in a set of variables not previously specified in the energy demand system. The variable sets were generally labeled as economic variables, weather variables, household-structure variables, end-use variables, and housing-type variables. The tests of statistical significance showed each of the variable sets to be highly significant in explaining the overall variance in energy consumption. The findings imply that the contemporaneous interaction of different types of variables, and not just one exclusive set of variables, determines the level of household energy consumption.
A fast learning method for large scale and multi-class samples of SVM
NASA Astrophysics Data System (ADS)
Fan, Yu; Guo, Huiming
2017-06-01
A fast learning method for multi-class classification SVMs (support vector machines), based on a binary tree, is presented to address the low learning efficiency of SVMs when processing large-scale multi-class samples. This paper adopts a bottom-up method to set up the binary tree hierarchy; according to the resulting hierarchy, a sub-classifier learns from the corresponding samples of each node. During learning, several class clusters are generated after a first clustering of the training samples. Central points are extracted from those class clusters that contain only one type of sample. For clusters that contain two types of samples, the cluster numbers of their positive and negative samples are set according to their degree of mixture, and a secondary clustering is undertaken, after which central points are extracted from the resulting sub-class clusters. Sub-classifiers are then obtained by learning from the reduced samples formed by the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, can maintain high classification accuracy while greatly reducing the number of samples and effectively improving learning efficiency.
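The sample-reduction step can be sketched as follows: cluster the training samples, replace each cluster by its centre labeled by majority vote, and train the SVM on the centres. The binary-tree class hierarchy and the mixture-degree rule for choosing cluster numbers are omitted here for brevity:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

def reduce_by_clustering(X, y, n_clusters=30):
    """Replace the training set by cluster centres labeled by majority vote."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    centers, labels = [], []
    for c in range(n_clusters):
        members = y[km.labels_ == c]
        if len(members) == 0:
            continue
        centers.append(km.cluster_centers_[c])
        labels.append(np.bincount(members).argmax())   # majority class
    return np.array(centers), np.array(labels)

X, y = make_blobs(n_samples=5000, centers=4, cluster_std=1.5, random_state=0)
Xr, yr = reduce_by_clustering(X, y, n_clusters=40)
full = SVC().fit(X, y)
reduced = SVC().fit(Xr, yr)                 # trains on 40 points instead of 5000
print("full-data accuracy:   ", full.score(X, y))
print("reduced-data accuracy:", reduced.score(X, y))
```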
Fixed point theorems for generalized contractions in ordered metric spaces
NASA Astrophysics Data System (ADS)
O'Regan, Donal; Petrusel, Adrian
2008-05-01
The purpose of this paper is to present some fixed point results for self-generalized contractions in ordered metric spaces. Our results generalize and extend some recent results of A.C.M. Ran, M.C. Reurings [A.C.M. Ran, M.C. Reurings, A fixed point theorem in partially ordered sets and some applications to matrix equations, Proc. Amer. Math. Soc. 132 (2004) 1435-1443], J.J. Nieto, R. Rodríguez-López [J.J. Nieto, R. Rodríguez-López, Contractive mapping theorems in partially ordered sets and applications to ordinary differential equations, Order 22 (2005) 223-239; J.J. Nieto, R. Rodríguez-López, Existence and uniqueness of fixed points in partially ordered sets and applications to ordinary differential equations, Acta Math. Sin. (Engl. Ser.) 23 (2007) 2205-2212], J.J. Nieto, R.L. Pouso, R. Rodríguez-López [J.J. Nieto, R.L. Pouso, R. Rodríguez-López, Fixed point theorems in ordered abstract sets, Proc. Amer. Math. Soc. 135 (2007) 2505-2517], A. Petrusel, I.A. Rus [A. Petrusel, I.A. Rus, Fixed point theorems in ordered L-spaces, Proc. Amer. Math. Soc. 134 (2006) 411-418] and R.P. Agarwal, M.A. El-Gebeily, D. O'Regan [R.P. Agarwal, M.A. El-Gebeily, D. O'Regan, Generalized contractions in partially ordered metric spaces, Appl. Anal., in press]. As applications, existence and uniqueness results for Fredholm and Volterra type integral equations are given.
Australian sea-floor survey data, with images and expert annotations.
Bewley, Michael; Friedman, Ariell; Ferrari, Renata; Hill, Nicole; Hovey, Renae; Barrett, Neville; Marzinelli, Ezequiel M; Pizarro, Oscar; Figueira, Will; Meyer, Lisa; Babcock, Russ; Bellchambers, Lynda; Byrne, Maria; Williams, Stefan B
2015-01-01
This Australian benthic data set (BENTHOZ-2015) consists of an expert-annotated set of georeferenced benthic images and associated sensor data, captured by an autonomous underwater vehicle (AUV) around Australia. This type of data is of interest to marine scientists studying benthic habitats and organisms. AUVs collect georeferenced images over an area with consistent illumination and altitude, and make it possible to generate broad scale, photo-realistic 3D maps. Marine scientists then typically spend several minutes on each of thousands of images, labeling substratum type and biota at a subset of points. Labels from four Australian research groups were combined using the CATAMI classification scheme, a hierarchical classification scheme based on taxonomy and morphology for scoring marine imagery. This data set consists of 407,968 expert labeled points from around the Australian coast, with associated images, geolocation and other sensor data. The robotic surveys that collected this data form part of Australia's Integrated Marine Observing System (IMOS) ongoing benthic monitoring program. There is reuse potential in marine science, robotics, and computer vision research.
Level set method for image segmentation based on moment competition
NASA Astrophysics Data System (ADS)
Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai
2015-05-01
We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.
Touch and gravitropic set-point angle interact to modulate gravitropic growth in roots
NASA Technical Reports Server (NTRS)
Massa, G. D.; Gilroy, S.
2003-01-01
Plant roots must sense and respond to a variety of environmental stimuli as they grow through the soil. Touch and gravity represent two of the mechanical signals that roots must integrate to elicit the appropriate root growth patterns and root system architecture. Obstacles such as rocks will impede the general downwardly directed gravitropic growth of the root system and so these soil features must be sensed and this information processed for an appropriate alteration in gravitropic growth to allow the root to avoid the obstruction. We show that primary and lateral roots of Arabidopsis do appear to sense and respond to mechanical barriers placed in their path of growth in a qualitatively similar fashion. Both types of roots exhibited a differential growth response upon contacting the obstacle that directed the main axis of elongation parallel to the barrier. This growth habit was maintained until the obstacle was circumvented, at which point normal gravitropic growth was resumed. Thus, the gravitational set-point angles of the primary and lateral roots prior to encountering the barrier were 95 degrees and 136 degrees respectively, and after growing off the end of the obstacle identical set-point angles were reinstated. However, whilst tracking across the barrier, quantitative differences in response were observed between these two classes of roots. The root tip of the primary root maintained an angle of 136 degrees to the horizontal as it traversed the barrier whereas the lateral roots adopted an angle of 154 degrees. Thus, this root tip angle appeared dependent on the gravitropic set-point angle of the root type, with the difference in tracking angle quantitatively reflecting differences in initial set-point angle. Concave and convex barriers were also used to analyze the response of the root to tracking along a continuously varying surface. The roots maintained a fairly fixed angle to gravity on the curved surface, implying a constant resetting of this tip angle/tracking response as the curve of the surface changed. We propose that touch and gravity sensing/response systems interact to strictly control the tropic growth of the root. Such signal integration is likely a critical part of growth control in the stimulus-rich environment of the soil. © 2003 COSPAR. Published by Elsevier Ltd. All rights reserved.
Data approximation using a blending type spline construction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalmo, Rune; Bratlie, Jostein
2014-11-18
Generalized expo-rational B-splines (GERBS) are a blending-type spline construction in which local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is to partition the data set into subsets and fit a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.
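Since the method hinges on approximating derivatives by finite differences for feature-point detection, here is a minimal Python sketch of that step; using the second-derivative magnitude as the feature criterion is an assumption, as the paper's differential-geometry criterion is not spelled out in the abstract.

```python
import numpy as np

def feature_points(y, rel_threshold=0.5):
    """Flag candidate feature points by the magnitude of the
    finite-difference second derivative (an assumed criterion)."""
    y = np.asarray(y, dtype=float)
    d2 = np.gradient(np.gradient(y))      # central differences, applied twice
    return np.flatnonzero(np.abs(d2) > rel_threshold * np.abs(d2).max())

t = np.linspace(0, 2 * np.pi, 200)
data = np.sin(t) + 0.5 * (t > np.pi)      # a jump creates a clear feature
print(feature_points(data))               # indices cluster near the jump
```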
NASA Astrophysics Data System (ADS)
Zhang, Dianjun; Zhou, Guoqing
2015-12-01
Soil moisture (SM) is a key variable that has been widely used in many environmental studies. The land surface temperature versus vegetation index (LST-VI) space has become a common way to estimate SM in optical remote sensing applications. A normalized LST-VI space is established from the normalized LST and VI to obtain comparable SM estimates in Zhang et al. (Validation of a practical normalized soil moisture model with in situ measurements in humid and semiarid regions [J]. International Journal of Remote Sensing, DOI: 10.1080/01431161.2015.1055610). The boundary conditions in that study were set to constrain point A (the driest bare soil) and point B (the wettest bare soil) for surface energy closure. However, no constraint was imposed on point D (full vegetation cover). In this paper, several vegetation types, such as crop, grass and mixed forest, are simulated with the Noah LSM 3.2 land surface model to analyze their effects on soil moisture estimation. The location of point D changes with vegetation type. The normalized LST of point D for forest is much lower than that for crop and grass, while the location of point D is essentially unchanged between crop and grass.
Selection and Characterization of Vegetable Crop Cultivars for use in Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Langhans, Robert W.
1997-01-01
Cultivar evaluation for controlled environments is a lengthy and multifaceted activity. The chapters of this thesis cover eight steps preparatory to yield trials, and the final step of cultivar selection after data are collected. The steps are as follows: 1. Examination of the literature on the crop and crop cultivars to assess the state of knowledge. 2. Selection of standard cultivars with which to explore crop response to major growth factors and determine set points for screening and, later, production. 3. Determination of practical growing techniques for the crop in controlled environments. 4. Design of experiments for determination of crop responses to the major growth factors, with particular emphasis on photoperiod, daily light integral and air temperature. 5. Developing a way of measuring yield appropriate to the crop type by sampling through the harvest period and calculating a productivity function. 6. Narrowing down the pool of cultivars and breeding lines according to a set of criteria and breeding history. 7. Determination of environmental set points for cultivar evaluation through calculating production cost as a function of set points and size of target facility. 8. Design of screening and yield trial experiments emphasizing efficient use of space. 9. Final evaluation of cultivars after data collection, in terms of production cost and value to the consumer. For each of the steps, relevant issues are addressed. In selecting standards to determine set points for screening, set points that optimize cost of production for the standards may not be applicable to all cultivars. Production of uniform and equivalent-sized seedlings is considered as a means of countering possible differences in seed vigor. Issues of spacing and re-spacing are also discussed.
Generalization of the Time-Energy Uncertainty Relation of Anandan-Aharonov Type
NASA Technical Reports Server (NTRS)
Hirayama, Minoru; Hamada, Takeshi; Chen, Jin
1996-01-01
A new type of time-energy uncertainty relation was proposed recently by Anandan and Aharonov. Their formula, which estimates the lower bound of the time integral of the energy fluctuation in a quantum state, is generalized to one involving a set of quantum states. This is achieved by obtaining an explicit formula for the distance between two finitely separated points in the Grassmann manifold.
Making data matter: Voxel printing for the digital fabrication of data across scales and domains.
Bader, Christoph; Kolb, Dominik; Weaver, James C; Sharma, Sunanda; Hosny, Ahmed; Costa, João; Oxman, Neri
2018-05-01
We present a multimaterial voxel-printing method that enables the physical visualization of data sets commonly associated with scientific imaging. Leveraging voxel-based control of multimaterial three-dimensional (3D) printing, our method enables additive manufacturing of discontinuous data types such as point cloud data, curve and graph data, image-based data, and volumetric data. By converting data sets into dithered material deposition descriptions, through modifications to rasterization processes, we demonstrate that data sets frequently visualized on screen can be converted into physical, materially heterogeneous objects. Our approach alleviates the need to postprocess data sets to boundary representations, preventing alteration of data and loss of information in the produced physicalizations. Therefore, it bridges the gap between digital information representation and physical material composition. We evaluate the visual characteristics and features of our method, assess its relevance and applicability in the production of physical visualizations, and detail the conversion of data sets for multimaterial 3D printing. We conclude with exemplary 3D-printed data sets produced by our method pointing toward potential applications across scales, disciplines, and problem domains.
Li, Mingjie; Zhou, Ping; Wang, Hong; ...
2017-09-19
As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, saving energy, and reducing emissions in its operation. In this correspondence, an optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at the set-point tracking objective of pulp quality, an economic objective, and a specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times is employed to construct the subprocess model of the state process model for the HC refining system; the Wiener-type model is then obtained by combining the mechanism model of Canadian Standard Freeness with the state process model, whose structure is determined by the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes the set-point tracking objective of pulp quality and SE consumption is proposed, using the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective of pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. The simulation results demonstrate that the proposed methods enable the HC refining system to provide better set-point tracking of pulp quality when these predictive controllers are employed. In addition, the optimal predictive controllers oriented toward the comprehensive economic objective and the SE consumption objective are shown to significantly reduce energy consumption.
Abgrall, N.; Arnquist, I. J.; Avignone, F. T.; ...
2016-11-11
Here, a search for Pauli-exclusion-principle-violating Kα electron transitions was performed using 89.5 kg-d of data collected with a p-type point contact high-purity germanium detector operated at the Kimballton Underground Research Facility. A lower limit on the transition lifetime of 5.8 × 10³⁰ s at 90% C.L. was set by looking for a peak at 10.6 keV resulting from the X-ray and Auger electrons present following the transition. A similar analysis was done to look for the decay of atomic K-shell electrons into neutrinos, resulting in a lower limit of 6.8 × 10³⁰ s at 90% C.L. It is estimated that the Majorana Demonstrator, a 44 kg array of p-type point contact detectors that will search for the neutrinoless double-beta decay of ⁷⁶Ge, could improve upon these exclusion limits by an order of magnitude after three years of operation.
Personalizing Androgen Suppression for Prostate Cancer Using Mathematical Modeling.
Hirata, Yoshito; Morino, Kai; Akakura, Koichiro; Higano, Celestia S; Aihara, Kazuyuki
2018-02-08
Using a dataset of 150 patients treated with intermittent androgen suppression (IAS) on a fixed treatment schedule, we retrospectively designed a mathematically personalized treatment schedule for each patient. We estimated 100 sets of parameter values for each patient by randomly resampling each patient's time points to take into account the observational uncertainty of prostate specific antigen (PSA). We then identified 3 types and classified patients accordingly: in type (i), relapse, namely the divergence of PSA, can be prevented by IAS; in type (ii), relapse can be delayed by IAS longer than by continuous androgen suppression (CAS); in type (iii), IAS is not beneficial and therefore CAS would be more appropriate in the long run. Moreover, we obtained a hormone therapy treatment schedule by exhaustively searching all possible treatment schedules and minimizing the PSA level 3 years later under the worst-case scenario among the 100 parameter sets. If the most frequent type among the 100 sets was type (i), the maximal PSA tended to be kept below 100 ng/ml longer under IAS than under CAS, while there was no statistical difference in the other cases. Thus, mathematically personalized IAS should be studied prospectively.
Dorman, Michael F; Natale, Sarah; Loiselle, Louise
2018-03-01
Sentence understanding scores for patients with cochlear implants (CIs) when tested in quiet are relatively high. However, sentence understanding scores for patients with CIs plummet with the addition of noise. To assess, for patients with CIs (MED-EL), (1) the value to speech understanding of two new, noise-reducing microphone settings and (2) the effect of the microphone settings on sound source localization. Single-subject, repeated measures design. For tests of speech understanding, repeated measures on (1) number of CIs (one, two), (2) microphone type (omni, natural, adaptive beamformer), and (3) type of noise (restaurant, cocktail party). For sound source localization, repeated measures on type of signal (low-pass [LP], high-pass [HP], broadband noise). Ten listeners, ranging in age from 48 to 83 yr (mean = 57 yr), participated in this prospective study. Speech understanding was assessed in two noise environments using monaural and bilateral CIs fit with three microphone types. Sound source localization was assessed using three microphone types. In Experiment 1, sentence understanding scores (in terms of percent words correct) were obtained in quiet and in noise. For each patient, noise was first added to the signal to drive performance off of the ceiling in the bilateral CI-omni microphone condition. The other conditions were then administered at that signal-to-noise ratio in quasi-random order. In Experiment 2, sound source localization accuracy was assessed for three signal types using a 13-loudspeaker array over a 180° arc. The dependent measure was root-mean-square error. Both the natural and adaptive microphone settings significantly improved speech understanding in the two noise environments. The magnitude of the improvement varied between 16 and 19 percentage points for tests conducted in the restaurant environment and between 19 and 36 percentage points for tests conducted in the cocktail party environment. In the restaurant and cocktail party environments, both the natural and adaptive settings, when implemented on a single CI, allowed scores that were as good as, or better than, scores in the bilateral omni test condition. Sound source localization accuracy was unaltered by either the natural or adaptive settings for LP, HP, or wideband noise stimuli. The data support the use of the natural microphone setting as a default setting. The natural setting (1) provides better speech understanding in noise than the omni setting, (2) does not impair sound source localization, and (3) retains low-frequency sensitivity to signals from the rear. Moreover, bilateral CIs equipped with adaptive beamforming technology can engender speech understanding scores in noise that fall only a little short of scores for a single CI in quiet. American Academy of Audiology
Lebwohl, David; Kay, Andrea; Berg, William; Baladi, Jean Francois; Zheng, Ji
2009-01-01
In clinical trials of oncology drugs, overall survival (OS) is a direct measure of clinical efficacy and is considered the gold standard primary efficacy end point. The purpose of this study was to discuss the difficulties in using OS as a primary efficacy end point in the setting of evolving cancer therapies. We suggest that progression-free survival is an appropriate efficacy end point in many types of cancer, specifically those for which OS is expected to be prolonged and for which subsequent treatments are expected to affect OS.
Hip and Wrist Accelerometer Algorithms for Free-Living Behavior Classification.
Ellis, Katherine; Kerr, Jacqueline; Godbole, Suneeta; Staudenmayer, John; Lanckriet, Gert
2016-05-01
Accelerometers are a valuable tool for objective measurement of physical activity (PA). Wrist-worn devices may improve compliance over standard hip placement, but more research is needed to evaluate their validity for measuring PA in free-living settings. Traditional cut-point methods for accelerometers can be inaccurate and need testing in free-living settings with wrist-worn devices. In this study, we developed and tested the performance of machine learning (ML) algorithms for classifying PA types from both hip and wrist accelerometer data. Forty overweight or obese women (mean age = 55.2 ± 15.3 yr; BMI = 32.0 ± 3.7) wore two ActiGraph GT3X+ accelerometers (right hip, nondominant wrist; ActiGraph, Pensacola, FL) for seven free-living days. Wearable cameras captured ground truth activity labels. A classifier consisting of a random forest and hidden Markov model classified the accelerometer data into four activities (sitting, standing, walking/running, and riding in a vehicle). Free-living wrist and hip ML classifiers were compared with each other, with traditional accelerometer cut points, and with an algorithm developed in a laboratory setting. The ML classifier obtained average values of 89.4% and 84.6% balanced accuracy over the four activities using the hip and wrist accelerometer, respectively. In our data set, with average values of 28.4 min of walking or running per day, the ML classifier predicted average values of 28.5 and 24.5 min of walking or running using the hip and wrist accelerometer, respectively. Intensity-based cut points and the laboratory algorithm significantly underestimated walking minutes. Our results demonstrate the superior performance of our PA-type classification algorithm, particularly in comparison with traditional cut points. Although the hip algorithm performed better, the additional compliance achieved with wrist devices might justify using a slightly lower-performing algorithm.
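A minimal sketch of the classifier pipeline described above: a random forest produces per-window class posteriors, which a hidden Markov model smooths into a temporally coherent activity sequence. The features here are synthetic placeholders, and the sticky transition matrix is an assumed stand-in for a matrix trained on labeled data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 6)), rng.integers(0, 4, 500)
X_test = rng.normal(size=(100, 6))        # placeholder feature windows

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
emissions = rf.predict_proba(X_test)      # per-window class posteriors

# Viterbi smoothing with an assumed sticky transition matrix.
n = emissions.shape[1]
trans = np.full((n, n), 0.02)
np.fill_diagonal(trans, 1 - 0.02 * (n - 1))
log_e, log_t = np.log(emissions + 1e-12), np.log(trans)

delta, back = log_e[0].copy(), np.zeros((len(log_e), n), dtype=int)
for t in range(1, len(log_e)):
    scores = delta[:, None] + log_t       # scores[i, j]: from state i to j
    back[t] = scores.argmax(axis=0)
    delta = scores.max(axis=0) + log_e[t]
path = [int(delta.argmax())]
for t in range(len(log_e) - 1, 0, -1):
    path.append(back[t][path[-1]])
labels = path[::-1]                       # smoothed activity sequence
```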
Control system for an artificial heart
NASA Technical Reports Server (NTRS)
Gebben, V. D.; Webb, J. A., Jr.
1970-01-01
Inexpensive industrial pneumatic components are combined to produce control system to drive sac-type heart-assistance blood pump with controlled pulsatile pressure that makes pump rate of flow sensitive to venous /atrial/ pressure, while stroke is centered about set operating point and pump is synchronized with natural heart.
Maximum power point tracker for photovoltaic power plants
NASA Astrophysics Data System (ADS)
Arcidiacono, V.; Corsi, S.; Lambri, L.
The paper describes two different closed-loop control criteria for the maximum power point tracking of the voltage-current characteristic of a photovoltaic generator. The two criteria are discussed and compared, inter alia, with regard to the setting-up problems that they pose. Although a detailed analysis is not embarked upon, the paper also provides some quantitative information on the energy advantages obtained by using electronic maximum power point tracking systems, as compared with the situation in which the point of operation of the photovoltaic generator is not controlled at all. Lastly, the paper presents two high-efficiency MPPT converters for experimental photovoltaic plants of the stand-alone and the grid-interconnected type.
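The abstract does not name its two closed-loop criteria, so as general background here is a sketch of perturb-and-observe, one widely used closed-loop MPPT criterion; it is not presented as either of the paper's methods.

```python
def perturb_and_observe(power_at, v0=30.0, dv=0.5, steps=200):
    """Hill climbing on the P-V curve: keep stepping the operating voltage
    in the same direction while output power rises, reverse otherwise."""
    v, direction = v0, 1.0
    p_prev = power_at(v)
    for _ in range(steps):
        v += direction * dv
        p = power_at(v)
        if p < p_prev:
            direction = -direction        # passed the maximum: back up
        p_prev = p
    return v

# Toy P-V curve with its maximum power point near 35 V.
print(perturb_and_observe(lambda v: 100.0 - (v - 35.0) ** 2))  # ~35
```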
Implementation of Steiner point of fuzzy set.
Liang, Jiuzhen; Wang, Dejiang
2014-01-01
This paper deals with the implementation of the Steiner point of a fuzzy set. Some definitions and properties of the Steiner point are investigated and extended to fuzzy sets. The paper focuses on establishing efficient methods to compute the Steiner point of a fuzzy set, and two strategies are proposed. One is a linear combination of the Steiner points computed from a series of crisp α-cut sets of the fuzzy set. The other is an approximate method, which tries to find the optimal α-cut set approximating the fuzzy set. Stability analysis of the Steiner point of a fuzzy set is also studied. Some experiments on image processing are given, in which the two methods are applied to compute the Steiner point of a fuzzy image, and each strategy shows its own advantages.
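A minimal numpy sketch of the first strategy, under two assumptions flagged in the comments: the planar Steiner point is evaluated from the support-function integral s(K) = (1/π) ∮ h_K(u) u dθ, and the α-cut Steiner points are combined with equal weights, since the abstract does not fix the weighting.

```python
import numpy as np

def steiner_point_2d(points):
    """Steiner point of the convex hull of a planar point set, via the
    support-function integral s(K) = (1/pi) * integral of h_K(u) u dtheta."""
    thetas = np.linspace(0, 2 * np.pi, 720, endpoint=False)
    u = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)   # directions
    h = (points @ u.T).max(axis=0)                           # support values
    return (h[:, None] * u).sum(axis=0) * (2 * np.pi / len(thetas)) / np.pi

def fuzzy_steiner_point(coords, membership, alphas=np.linspace(0.1, 0.8, 8)):
    """Equal-weight linear combination (an assumption) of the Steiner
    points of the crisp alpha-cut sets."""
    return np.mean([steiner_point_2d(coords[membership >= a])
                    for a in alphas], axis=0)

rng = np.random.default_rng(1)
coords = rng.uniform(-1, 1, size=(2000, 2))
mu = np.clip(1 - np.linalg.norm(coords, axis=1), 0, 1)   # cone membership
print(fuzzy_steiner_point(coords, mu))                   # near the origin
```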
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xuetao; Zhu, Quanxin, E-mail: zqx22@126.com
2015-12-15
In this paper, we are mainly concerned with a class of stochastic neutral functional differential equations of Sobolev type with Poisson jumps. Under two different sets of conditions, we establish the existence of the mild solution by applying the Leray-Schauder alternative theory and Sadovskii's fixed point theorem, respectively. Furthermore, we use Bihari's inequality to prove Osgood-type uniqueness. The mean square exponential stability is also investigated by applying the Gronwall inequality. Finally, two examples are given to illustrate the theoretical results.
Three-stage sorption type cryogenic refrigeration systems and methods employing heat regeneration
NASA Technical Reports Server (NTRS)
Bard, Steven (Inventor); Jones, Jack A. (Inventor)
1992-01-01
A three-stage sorption type cryogenic refrigeration system, each stage containing a fluid having a respectively different boiling point, is presented. Each stage includes a compressor in which a respective fluid is heated to be placed in a high pressure gaseous state. The compressor for that fluid which is heated to the highest temperature is enclosed by the other two compressors to permit heat to be transferred from the inner compressor to the surrounding compressors. The system may include two sets of compressors, each having the structure described above, with the interior compressors of the two sets coupled together to permit selective heat transfer therebetween, resulting in more efficient utilization of input power.
The Comparison of Point Data Models for the Output of WRF Hydro Model in the IDV
NASA Astrophysics Data System (ADS)
Ho, Y.; Weber, J.
2017-12-01
WRF Hydro netCDF output files contain streamflow, flow depth, longitude, latitude, altitude and stream order values for each forecast point. However, the data are not CF compliant. The total number of forecast points for the US CONUS is approximately 2.7 million, which is a big challenge for any visualization and analysis tool. The IDV point cloud display shows point data as a set of points colored by parameter. This display is very efficient compared to a standard point type display for rendering a large number of points. One remaining problem is that data I/O can become a bottleneck when dealing with a large collection of point input files. In this presentation, we experiment with different point data models and their APIs to access the same WRF Hydro model output. The results will help us construct a CF-compliant netCDF point data format for the community.
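A short sketch of the I/O pattern under discussion, reading per-point variables into one contiguous array for a point-cloud style display. The file name and variable names are assumptions for illustration; the actual WRF Hydro output schema may use different names.

```python
import numpy as np
from netCDF4 import Dataset

# "wrf_hydro_points.nc" and the variable names are hypothetical.
with Dataset("wrf_hydro_points.nc") as nc:
    flow = np.asarray(nc.variables["streamflow"][:])
    lat = np.asarray(nc.variables["latitude"][:])
    lon = np.asarray(nc.variables["longitude"][:])

# Pack position plus the colouring parameter into one contiguous array,
# the layout a point-cloud display can render efficiently.
cloud = np.column_stack([lon, lat, flow])
print(cloud.shape, round(cloud.nbytes / 1e6, 1), "MB")
```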
Robust non-rigid registration algorithm based on local affine registration
NASA Astrophysics Data System (ADS)
Wu, Liyang; Xiong, Lei; Du, Shaoyi; Bi, Duyan; Fang, Ting; Liu, Kun; Wu, Dongpeng
2018-04-01
To address the low precision and slow convergence of traditional point set non-rigid registration algorithms on data with complex local deformations, this paper proposes a robust non-rigid registration algorithm based on local affine registration. The algorithm uses a hierarchical iterative method to perform point set non-rigid registration from coarse to fine. In each iteration, the sub data point sets and sub model point sets are divided and the shape control points of each sub point set are updated. A control-point-guided affine ICP algorithm then solves the local affine transformation between the corresponding sub point sets. Next, the local affine transformation obtained in the previous step is used to update the sub data point sets and their shape control point sets. When the algorithm reaches the maximum iteration layer K, the loop ends and the updated sub data point sets are output. Experimental results demonstrate that the accuracy and convergence of our algorithm are greatly improved compared with traditional point set non-rigid registration algorithms.
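A compact sketch of the affine ICP core used per sub point set: alternate nearest-neighbour matching with a least-squares affine fit. The control-point guidance and the hierarchical splitting of sub point sets are omitted, so this is only the innermost step, not the full algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def affine_icp(data, model, iters=20):
    """Alternate closest-point matching with a least-squares affine fit
    minimising ||[data, 1] @ sol - matches|| (homogeneous coordinates)."""
    tree = cKDTree(model)
    src = np.asarray(data, dtype=float).copy()
    for _ in range(iters):
        _, idx = tree.query(src)                       # closest model points
        X = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords
        sol, *_ = np.linalg.lstsq(X, model[idx], rcond=None)
        src = X @ sol                                  # apply affine update
    return src

# Toy usage: undo a known mild affine warp of a 2D point set.
rng = np.random.default_rng(0)
model = rng.uniform(size=(300, 2))
data = model @ np.array([[1.1, 0.1], [-0.05, 0.9]]) + np.array([0.2, -0.1])
print(np.abs(affine_icp(data, model) - model).mean())  # mean residual
```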
Expected Number of Fixed Points in Boolean Networks with Arbitrary Topology.
Mori, Fumito; Mochizuki, Atsushi
2017-07-14
Boolean network models describe genetic, neural, and social dynamics in complex networks, where the dynamics depend generally on network topology. Fixed points in a genetic regulatory network are typically considered to correspond to cell types in an organism. We prove that the expected number of fixed points in a Boolean network, with Boolean functions drawn from probability distributions that are not required to be uniform or identical, is one, independent of network topology, provided that a feedback arc set satisfies a stochastic neutrality condition. We also demonstrate that the expected number is increased by the predominance of positive feedback in a cycle.
NASA Astrophysics Data System (ADS)
Guo, W. C.; Yang, J. D.; Chen, J. P.; Peng, Z. Y.; Zhang, Y.; Chen, C. C.
2016-11-01
The load rejection test is one of the essential tests carried out before a hydroelectric generating set is formally put into operation. The test aims at inspecting the rationality of the design of the water diversion and power generation system of a hydropower station, the reliability of the generating set equipment, and the dynamic characteristics of the hydro-turbine governing system. Starting from the different accident conditions of a hydroelectric generating set, this paper presents the transient processes of load rejection corresponding to the different accident conditions and elaborates the characteristics of the different types of load rejection. A numerical simulation method for the different types of load rejection is then established, and an engineering project is calculated to verify the validity of the method. Finally, based on the numerical simulation results, the relationships among the different types of load rejection and their roles in the design of hydropower stations and the operation of load rejection tests are pointed out. The results indicate that load rejection caused by an accident within the hydroelectric generating set is realized by the emergency distributing valve, and it is the basis for optimizing the closing law of the guide vanes and for the calculation of regulation and guarantee. Load rejection caused by an accident outside the hydroelectric generating set is realized by the governor. It is the most efficient measure for inspecting the dynamic characteristics of the hydro-turbine governing system, and the closure rate of the guide vanes set in the governor depends on the optimization result of the former type of load rejection.
17 CFR 230.431 - Summary prospectuses.
Code of Federal Regulations, 2011 CFR
2011-04-01
... States or any State or Territory or the District of Columbia and has its principal business operations in... published in a newspaper, magazine or other periodical need only be set in type at least as large as 7 point... a newspaper, magazine, or other periodical, if such reprints are clearly legible. (g) Eight copies...
17 CFR 230.431 - Summary prospectuses.
Code of Federal Regulations, 2013 CFR
2013-04-01
... States or any State or Territory or the District of Columbia and has its principal business operations in... published in a newspaper, magazine or other periodical need only be set in type at least as large as 7 point... a newspaper, magazine, or other periodical, if such reprints are clearly legible. (g) Eight copies...
17 CFR 230.431 - Summary prospectuses.
Code of Federal Regulations, 2010 CFR
2010-04-01
... States or any State or Territory or the District of Columbia and has its principal business operations in... published in a newspaper, magazine or other periodical need only be set in type at least as large as 7 point... a newspaper, magazine, or other periodical, if such reprints are clearly legible. (g) Eight copies...
17 CFR 230.431 - Summary prospectuses.
Code of Federal Regulations, 2014 CFR
2014-04-01
... States or any State or Territory or the District of Columbia and has its principal business operations in... published in a newspaper, magazine or other periodical need only be set in type at least as large as 7 point... a newspaper, magazine, or other periodical, if such reprints are clearly legible. (g) Eight copies...
17 CFR 230.431 - Summary prospectuses.
Code of Federal Regulations, 2012 CFR
2012-04-01
... States or any State or Territory or the District of Columbia and has its principal business operations in... published in a newspaper, magazine or other periodical need only be set in type at least as large as 7 point... a newspaper, magazine, or other periodical, if such reprints are clearly legible. (g) Eight copies...
40 CFR 86.605-88 - Maintenance of records; submittal of information.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., pressure increase across the pump, and the temperature set point of the temperature control system. (2... samples are being collected. (3) Humidity of dilution air. (4) Manufacturer, model, type and serial number..., ambient temperature and humidity. (2) Data and time of day. (ii) In lieu of recording test equipment...
40 CFR 86.605-88 - Maintenance of records; submittal of information.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., pressure increase across the pump, and the temperature set point of the temperature control system. (2... samples are being collected. (3) Humidity of dilution air. (4) Manufacturer, model, type and serial number..., ambient temperature and humidity. (2) Data and time of day. (ii) In lieu of recording test equipment...
40 CFR 86.605-88 - Maintenance of records; submittal of information.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., pressure increase across the pump, and the temperature set point of the temperature control system. (2... samples are being collected. (3) Humidity of dilution air. (4) Manufacturer, model, type and serial number..., ambient temperature and humidity. (2) Data and time of day. (ii) In lieu of recording test equipment...
Computer program documentation: ISOCLS iterative self-organizing clustering program, program C094
NASA Technical Reports Server (NTRS)
Minter, R. T. (Principal Investigator)
1972-01-01
The author has identified the following significant results. This program implements an algorithm which, ideally, sorts a given set of multivariate data points into similar groups or clusters. The program is intended for use in the evaluation of multispectral scanner data; however, the algorithm could be used for other data types as well. The user may specify a set of initial estimated cluster means to begin the procedure, or may begin with the assumption that all the data belong to one cluster. The procedure is initialized by assigning each data point to the nearest (in absolute distance) cluster mean. If no initial cluster means were input, all of the data are assigned to cluster 1. The means and standard deviations are then calculated for each cluster.
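A minimal sketch of the assignment pass just described: each point goes to the nearest cluster mean in absolute (L1) distance, after which per-cluster means and standard deviations are recomputed. Function and variable names are illustrative, not the original program's interface.

```python
import numpy as np

def isocls_pass(data, means=None):
    """One ISOCLS-style pass: nearest-mean assignment in absolute distance,
    then per-cluster means and standard deviations. With no initial means,
    all data starts in cluster 1."""
    if means is None:
        means = data.mean(axis=0, keepdims=True)
    dists = np.abs(data[:, None, :] - means[None, :, :]).sum(axis=2)
    labels = dists.argmin(axis=1)
    stats = []
    for k in range(len(means)):
        members = data[labels == k]
        if len(members) == 0:                 # keep empty clusters inert
            stats.append((means[k], np.zeros(data.shape[1])))
        else:
            stats.append((members.mean(axis=0), members.std(axis=0)))
    return labels, stats

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(5, 1, (100, 4))])
labels, stats = isocls_pass(data, means=np.array([[0.0] * 4, [5.0] * 4]))
print(np.bincount(labels))                    # roughly 100 points each
```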
NASA Technical Reports Server (NTRS)
Hoffer, R. M. (Principal Investigator)
1975-01-01
The author has reported the following significant results. A data set containing SKYLAB, LANDSAT, and topographic data has been overlayed, registered, and geometrically corrected to a scale of 1:24,000. After geometrically correcting both sets of data, the SKYLAB data were overlayed on the LANDSAT data. Digital topographic data were then obtained, reformatted, and a data channel containing elevation information was then digitally overlayed onto the LANDSAT and SKYLAB spectral data. The 14,039 square kilometers involving 2,113,776 LANDSAT pixels represents a relatively large data set available for digital analysis. The overlayed data set enables investigators to numerically analyze and compare two sources of spectral data and topographic data from any point in the scene. This capability is new and it will permit a numerical comparison of spectral response with elevation, slope, and aspect. Utilization of the spectral and topographic data together to obtain more accurate classifications of the various cover types present is feasible.
Bidshahri, Roza; Attali, Dean; Fakhfakh, Kareem; McNeil, Kelly; Karsan, Aly; Won, Jennifer R; Wolber, Robert; Bryan, Jennifer; Hughesman, Curtis; Haynes, Charles
2016-03-01
A need exists for robust and cost-effective assays to detect a single or small set of actionable point mutations, or a complete set of clinically informative mutant alleles. Knowledge of these mutations can be used to alert the clinician to a rare mutation that might necessitate more aggressive clinical monitoring or a personalized course of treatment. An example is BRAF, a (proto)oncogene susceptible to either common or rare mutations in codon V600 and adjacent codons. We report a diagnostic technology that leverages the unique capabilities of droplet digital PCR to achieve accurate and sensitive detection not only of BRAF(V600E) but also of all known somatic point mutations within the BRAF V600 codon. The simple and inexpensive two-well droplet digital PCR assay uses a chimeric locked nucleic acid/DNA probe against wild-type BRAF and a novel wild-type-negative screening paradigm. The assay shows complete diagnostic accuracy when applied to formalin-fixed, paraffin-embedded tumor specimens from metastatic colorectal cancer patients deficient for MutL homologue 1. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Goede, Simon L; Leow, Melvin Khee-Shing
2013-01-01
This treatise investigates error sources in measurements applicable to the hypothalamus-pituitary-thyroid (HPT) system of analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in thyroid function test (TFT) measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainty are (1) diurnal variations in [TSH]; (2) TFT measurement variations influenced by the timing of thyroid medications; (3) error sensitivity in the ranges of [TSH] and [FT4] (laboratory assay dependent); (4) rounding/truncation of decimals in [FT4], which in turn amplifies curve fitting errors in the [TSH] domain in the lower [FT4] range; and (5) memory effects (a rate-independent hysteresis effect). When the main uncertainties in TFTs are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.
Type-2 fuzzy set extension of DEMATEL method combined with perceptual computing for decision making
NASA Astrophysics Data System (ADS)
Hosseini, Mitra Bokaei; Tarokh, Mohammad Jafar
2013-05-01
Most decision making methods used to evaluate a system or reveal its weak and strong points are based on fuzzy sets and evaluate the criteria with words that are modeled as fuzzy sets. The ambiguity and vagueness of words, and the different perceptions of a word, are not considered in these methods. For this reason, decision making methods that consider the perceptions of decision makers are desirable. Perceptual computing is a subjective judgment method built on the observation that words mean different things to different people. This method models words with interval type-2 fuzzy sets, which capture the uncertainty of the words. Also, there are interrelations and dependencies between decision making criteria in the real world; therefore, decision making methods that cannot consider these relations are not feasible in some situations. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method considers the interrelations between decision making criteria. The current study combines DEMATEL and perceptual computing in order to improve decision making methods. To this end, the fuzzy DEMATEL method is extended to type-2 fuzzy sets in order to obtain the weights of dependent criteria based on words. The application of the proposed method is presented for knowledge management evaluation criteria.
Radiation-hardened transistor and integrated circuit
Ma, Kwok K.
2007-11-20
A composite transistor is disclosed for use in radiation hardening a CMOS IC formed on an SOI or bulk semiconductor substrate. The composite transistor has a circuit transistor and a blocking transistor connected in series with a common gate connection. A body terminal of the blocking transistor is connected only to a source terminal thereof, and to no other connection point. The blocking transistor acts to prevent a single-event transient (SET) occurring in the circuit transistor from being coupled outside the composite transistor. Similarly, when a SET occurs in the blocking transistor, the circuit transistor prevents the SET from being coupled outside the composite transistor. N-type and P-type composite transistors can be used for each and every transistor in the CMOS IC to radiation harden the IC, and can be used to form inverters and transmission gates which are the building blocks of CMOS ICs.
A healthy lifestyle coaching-persuasive application for patients with type 2 diabetes.
Fico, G; Fioravanti, A; Arredondo, M T; Ardigó, D; Guillén, A
2010-01-01
Losing weight can be one of the toughest objectives related to diabetes treatment, especially for Type 2 diabetes mellitus. This paper describes a tool to set goals for achieving lifestyle behavioral changes and to keep track of the benefits derived from these changes. The strategy leans on the capability of evaluating users' compliance with treatment, identifying key points where a lack of motivation causes therapy dropout, and on giving physicians better resources to adjust treatments and prescriptions.
A new technique for solving the Parker-type wind equations
NASA Technical Reports Server (NTRS)
Melia, Fulvio
1988-01-01
Substitution of the novel function Phi for velocity, as one of the dependent variables in Parker-type solar wind equations, removes the critical point, and therefore the numerical difficulties encountered, from the set of coupled differential wind equations. The method has already been successfully used in a study of radiatively-driven mass loss from the surface of X-ray bursting neutron stars. The present technique for solving the equations of time-independent mass loss can be useful in similar applications.
Equilibrium points of the tilted perfect fluid Bianchi VIh state space
NASA Astrophysics Data System (ADS)
Apostolopoulos, Pantelis S.
2005-05-01
We present the full set of evolution equations for the spatially homogeneous cosmologies of type VIh filled with a tilted perfect fluid and we provide the corresponding equilibrium points of the resulting dynamical state space. It is found that a self-similar solution exists only when the group parameter satisfies h > -1. In particular we show that for h > -1/9 there exists a self-similar equilibrium point provided that γ ∈ (2(3+√(-h))/(5+3√(-h)), 3/2), whereas for h < -1/9 the state parameter belongs to the interval γ ∈ (1, 2(3+√(-h))/(5+3√(-h))). This family of new exact self-similar solutions belongs to the subclass n^α_α = 0 having non-zero vorticity. In both cases the equilibrium points have a six-dimensional stable manifold and may act as future attractors at least for the models satisfying n^α_α = 0. Also we give the exact form of the self-similar metrics in terms of the state and group parameters. As an illustrative example we provide the explicit form of the corresponding self-similar radiation model (γ = 4/3), parametrised by the group parameter h. Finally we show that there are no tilted self-similar models of type III and no irrotational models of type VIh.
High-order time-marching reinitialization for regional level-set functions
NASA Astrophysics Data System (ADS)
Pan, Shucheng; Lyu, Xiuxiu; Hu, Xiangyu Y.; Adams, Nikolaus A.
2018-02-01
In this work, the time-marching reinitialization method is extended to compute the unsigned distance function in multi-region systems involving an arbitrary number of regions. High order and interface preservation are achieved by applying a simple mapping that transforms the regional level-set function to the level-set function, together with a high-order two-step reinitialization method combining the closest-point finding procedure and the HJ-WENO scheme. The convergence failure of the closest-point finding procedure in three dimensions is addressed by employing a proposed multiple-junction treatment and a directional optimization algorithm. Simple test cases show that our method exhibits 4th-order accuracy for reinitializing the regional level-set functions and strictly satisfies the interface-preserving property. The reinitialization results for more complex cases with randomly generated diagrams show the capability of our method for an arbitrary number of regions N, with a computational effort independent of N. The proposed method has been applied to dynamic interfaces with different types of flows, and the results demonstrate high accuracy and robustness.
On the structure of the set of coincidence points
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arutyunov, A V; Gel'man, B D
2015-03-31
We consider the set of coincidence points for two maps between metric spaces. Cardinality, metric and topological properties of the coincidence set are studied. We obtain conditions which guarantee that this set (a) consists of at least two points; (b) consists of at least n points; (c) contains a countable subset; (d) is uncountable. The results are applied to study the structure of the double point set and the fixed point set for multivalued contractions. Bibliography: 12 titles.
Experimental Comparison between the Engineering Acid Dew Point and the Thermodynamic Acid Dew Point
NASA Astrophysics Data System (ADS)
Song, Jinghui; Yuan, Hui; Deng, Jianhua
2018-06-01
In order to achieve accurate prediction of the acid dew point, a measurement system for the acid dew point of the flue gas at the boiler tail was designed and built, and measurements were taken at the air preheater outlet of a 1000 MW power plant unit. The results show that, under the same conditions, as the test temperature decreases, the Nusselt number of the heat transfer tubes changes and the fouling and corrosion of the tube walls and corrosion test pieces gradually deepen. The measured acid dew point is then compared with the acid dew point obtained from existing empirical formulas for the same coal type. The engineering acid dew point is usually about 40 °C lower than the thermodynamic acid dew point because of the coupling effect of fouling on the acid liquid; it better reflects the actual behaviour of flue gas in engineering practice and provides theoretical guidance for the design and operation of deep waste-heat utilization systems.
NASA Astrophysics Data System (ADS)
Mika, Janos; Ivady, Anett; Fulop, Andrea; Makra, László
2010-05-01
Synoptic climatology, i.e. the classification of the endless variety of everyday weather states according to the pressure configuration and frontal systems relative to the point or region of interest, has a long history in meteorology. Its logical alternative, classification of weather according to the observed local weather elements, was less popular until recent times, when numerical weather forecasts became able to predict not only the synoptic situation but also the near-surface meteorological variables. Nowadays, computer-based statistical tools are able to operate on matrices of multivariate diurnal samples as well. The paper presents an attempt to define a set of local weather types using point-wise series at five stations, Szombathely, Pécs, Budapest, Szeged and Debrecen, in the 1961-1990 reference period. Ten local variables are used: the diurnal mean temperature; the diurnal temperature range; the cloudiness; the sunshine duration; the water vapour pressure; the precipitation on a logarithmic scale, also distinguishing trace (below 0.1 mm) from no precipitation; the relative humidity and wind speed; and extremity indicators of the two latter parameters, i.e. the number of hours with relative humidity above 80% and with wind gusts above 15 m/s. Factor analysis of these ten variables was performed, leading to 5 fairly independent variables retained for cluster analysis to obtain the local weather types. Hierarchical cluster analysis was performed to classify the 840-930 days within each month of the 30-year period. A furthest-neighbour approach based on Euclidean metrics was preferred to establish the optimum number of types. The 12 months and the 5 stations exhibited slightly different results, but the optimum number of types was always between 4 and 12, which is a quite reasonable number from practical considerations. As a further reasonable compromise, a common number of types acceptable for all stations and months sets the common optimum number of local weather types at nine. This set of weather types, specified for each station, was used to "explain" the possible portion of local inter-diurnal variance of seven daily urban air quality measurements, i.e. CO, NO, NO2, NOx, O3, SO2 and PM10. Another data set for testing the types is mortality from chronic illnesses, i.e. cardiovascular and respiratory illnesses. This set of 35 years of data (1971-2005) is stratified into the capital city (Budapest, 2 million inhabitants) and the rest of the country (towns of at most 200,000 inhabitants). The use of complex weather types is likely better than the common use of individual weather elements, e.g. the diurnal mean temperature or a bioclimatic index. The ability of the types to reduce the variability of both sets of target variables is also compared with the analogous ability of the macrosynoptic classification by Peczely.
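A compact sketch of the statistical pipeline described above: factor analysis reduces the ten elements to five fairly independent variables, and furthest-neighbour (complete-linkage) hierarchical clustering with Euclidean metrics cuts the tree at nine types. The input matrix is a random placeholder for one station's days.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
days = rng.normal(size=(900, 10))     # placeholder: ~900 days x 10 elements

factors = FactorAnalysis(n_components=5).fit_transform(days)

# 'complete' is the furthest-neighbour criterion used in the study.
Z = linkage(factors, method="complete", metric="euclidean")
types = fcluster(Z, t=9, criterion="maxclust")   # nine local weather types
print(np.bincount(types)[1:])                    # days per weather type
```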
Evaluation of Humidity Control Options in Hot-Humid Climate Homes (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2011-12-01
This technical highlight describes NREL research to analyze the indoor relative humidity in three home types in the hot-humid climate zone, and examine the impacts of various dehumidification equipment and controls. As the Building America program researches construction of homes that achieve greater source energy savings over typical mid-1990s construction, proper modeling of whole-house latent loads and operation of humidity control equipment has become a high priority. Long-term high relative humidity can cause health and durability problems in homes, particularly in a hot-humid climate. In this study, researchers at the National Renewable Energy Laboratory (NREL) used the latest EnergyPlus tool equipped with the moisture capacitance model to analyze the indoor relative humidity in three home types: a Building America high-performance home; a mid-1990s reference home; and a 2006 International Energy Conservation Code (IECC)-compliant home in hot-humid climate zones. They examined the impacts of various dehumidification equipment and controls on the high-performance home, where the dehumidification equipment energy use can become a much larger portion of whole-house energy consumption. The research included a number of simulated cases: thermostat reset, A/C with energy recovery ventilator, heat exchanger assisted A/C, A/C with condenser reheat, A/C with desiccant wheel dehumidifier, A/C with DX dehumidifier, and A/C with energy recovery ventilator and DX dehumidifier. Space relative humidity, thermal comfort, and whole-house source energy consumption were compared for indoor relative humidity set points of 50%, 55%, and 60%. The study revealed why similar trends of high humidity were observed in all three homes regardless of energy efficiency, and why humidity problems are not necessarily unique to the high-performance home. Thermal comfort analysis indicated that occupants are unlikely to notice indoor humidity problems. The study confirmed that supplemental dehumidification is needed to maintain space relative humidity (RH) below 60% in a hot-humid climate home. Researchers also concluded that while all the active dehumidification options included in the study successfully controlled space relative humidity excursions, the increase in whole-house energy consumption was much more sensitive to the humidity set point than to the chosen technology option. In the high-performance home, supplemental dehumidification equipment results in a significant source energy consumption penalty at the 50% RH set point (12.6%-22.4%) compared to the consumption at the 60% RH set point (1.5%-2.7%). At the 50% and 55% RH set points, A/C with desiccant wheel dehumidifier and A/C with ERV and high-efficiency DX dehumidifier stand out as the two cases resulting in the smallest increase of source energy consumption. At an RH set point of 60%, all explicit dehumidification technologies result in similar insignificant increases in source energy consumption and thus are equally competitive.
Stability of the Kasner universe in f(T) gravity
NASA Astrophysics Data System (ADS)
Paliathanasis, Andronikos; Said, Jackson Levi; Barrow, John D.
2018-02-01
f(T) gravity theory offers an alternative context in which to consider gravitational interactions, where torsion, rather than curvature, is the mechanism by which gravitation is communicated. We investigate the stability of the Kasner solution for several forms of the arbitrary Lagrangian function examined within the f(T) context. This is a Bianchi type-I vacuum solution with anisotropic expansion factors. In the f(T) gravity setting, the solution must conform to a set of conditions in order to continue to be a vacuum solution of the generalized field equations. With this solution in hand, the perturbed field equations are determined for power-law and exponential forms of the f(T) function. We find that the point which describes the Kasner solution is a saddle point, which means that the singular solution is unstable. However, we find the de Sitter universe is a late-time attractor. In general relativity, the cosmological constant drives the isotropization of the spacetime, while in this setting the extra f(T) contributions now provide this impetus.
Approximating scatterplots of large datasets using distribution splats
NASA Astrophysics Data System (ADS)
Camuto, Matthew; Crawfis, Roger; Becker, Barry G.
2000-02-01
Many situations exist where the plotting of large data sets with categorical attributes is desired in a 3D coordinate system. For example, a marketing company may conduct a survey involving one million subjects and then plot people's favorite car type against their weight, height and annual income. Scatter point plotting, in which each point is individually plotted at its corresponding Cartesian location using a defined primitive, is usually used to render a plot of this type. If the dependent variable is continuous, we can discretize the 3D space into bins or voxels and retain the average value of all records falling within each voxel. Previous work employed volume rendering techniques, in particular splatting, to represent this aggregated data by mapping each average value to a representative color.
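The voxel-aggregation step described above reduces to a binned average, sketched below with numpy; the bin count and toy data are arbitrary choices.

```python
import numpy as np

def splat_bins(points, values, n_bins=32):
    """Discretise 3D space into voxels and keep the average value of all
    records falling within each voxel (NaN where a voxel is empty)."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    ijk = ((points - lo) / (hi - lo + 1e-12) * (n_bins - 1)).astype(int)
    flat = np.ravel_multi_index(ijk.T, (n_bins,) * 3)
    sums = np.bincount(flat, weights=values, minlength=n_bins ** 3)
    counts = np.bincount(flat, minlength=n_bins ** 3)
    with np.errstate(invalid="ignore"):
        return (sums / counts).reshape((n_bins,) * 3)

pts = np.random.default_rng(0).uniform(size=(100000, 3))
vals = pts[:, 0]                          # value correlated with the x axis
grid = splat_bins(pts, vals)
print(np.nanmin(grid), np.nanmax(grid))
```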
The four fixed points of scale invariant single field cosmological models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, BingKan, E-mail: bxue@princeton.edu
2012-10-01
We introduce a new set of flow parameters to describe the time dependence of the equation of state and the speed of sound in single field cosmological models. A scale invariant power spectrum is produced if these flow parameters satisfy specific dynamical equations. We analyze the flow of these parameters and find four types of fixed points that encompass all known single field models. Moreover, near each fixed point we uncover new models where the scale invariance of the power spectrum relies on having a simultaneously time-varying speed of sound and equation of state. We describe several distinctive new models and discuss constraints from strong coupling and superluminality.
No Special K! A Signal Detection Framework for the Strategic Regulation of Memory Accuracy
ERIC Educational Resources Information Center
Higham, Philip A.
2007-01-01
Two experiments investigated criterion setting and metacognitive processes underlying the strategic regulation of accuracy on the Scholastic Aptitude Test (SAT) using Type-2 signal detection theory (SDT). In Experiment 1, report bias was manipulated by penalizing participants either 0.25 (low incentive) or 4 (high incentive) points for each error.…
Capitanio, John P.; Abel, Kristina; Mendoza, Sally P.; Blozis, Shelley A.; McChesney, Michael B.; Cole, Steve W.; Mason, William A.
2008-01-01
From the beginning of the AIDS epidemic, stress has been a suspected contributor to the wide variation seen in disease progression, and some evidence supports this idea. Not all individuals respond to a stressor in the same way, however, and little is known about the biological mechanisms by which variations in individuals’ responses to their environment affect disease-relevant immunologic processes. Using the simian immunodeficiency virus/rhesus macaque model of AIDS, we explored how personality (sociability) and genotype (serotonin transporter promoter) independently interact with social context (stable or unstable social conditions) to influence behavioral expression, plasma cortisol concentrations, SIV-specific IgG, and expression of genes associated with Type I interferon early in infection. SIV viral RNA set-point was strongly and negatively correlated with survival as expected. Set-point was also associated with expression of interferon-stimulated genes, with CXCR3 expression, and with SIV-specific IgG titers. Poorer immune responses, in turn, were associated with display of sustained aggression and submission. Personality and genotype acted independently as well as in interaction with social condition to affect behavioral responses. Together, the data support an “interactionist” perspective (Eysenck, 1991) on disease. Given that an important goal of HIV treatment is to maintain viral set-point as low as possible, our data suggest that supplementing anti-retroviral therapy with behavioral or pharmacologic modulation of other aspects of an organism’s functioning might prolong survival, particularly among individuals living under conditions of threat or uncertainty. PMID:17719201
NASA Technical Reports Server (NTRS)
Edwards, T. R. (Inventor)
1985-01-01
Apparatus for doubling the data density rate of an analog to digital converter or doubling the data density storage capacity of a memory device is discussed. An interstitial data point midway between adjacent data points in a data stream having an even number of equal interval data points is generated by applying a set of predetermined one-dimensional convolute integer coefficients, which can include a set of multiplier coefficients and a normalizer coefficient. Interpolator means apply the coefficients to the data points by weighting equally on each side of the center of the even number of equal interval data points to obtain an interstitial point value at the center of the data points. A one-dimensional output data set which is twice as dense as a one-dimensional equal interval input data set can be generated, where the output data set includes interstitial points interdigitated between adjacent data points in the input data set. The method for generating the set of interstitial points is a weighted, nearest-neighbor, non-recursive, moving, smoothing averaging technique, equivalent to applying a polynomial regression calculation to the data set.
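The scheme reads as midpoint interpolation with a small integer-coefficient filter. The sketch below uses the classic (-1, 9, 9, -1)/16 cubic midpoint taps as an illustrative multiplier/normalizer set; the patent's own coefficient sets may differ.

```python
import numpy as np

def interstitial_points(data, multipliers=(-1, 9, 9, -1), normalizer=16):
    """Interdigitate midpoints between adjacent samples using symmetric
    integer multiplier coefficients and a normalizer coefficient.
    The (-1, 9, 9, -1)/16 taps are illustrative, not the patent's."""
    m = np.asarray(multipliers, dtype=float)
    half = len(m) // 2
    mids = np.convolve(data, m[::-1], mode="valid") / normalizer
    out = np.empty(2 * len(mids) + 1)
    out[0::2] = data[half - 1: half + len(mids)]   # original interior samples
    out[1::2] = mids                               # interstitial points
    return out

x = np.sin(np.linspace(0, np.pi, 8))
print(interstitial_points(x).round(3))   # twice as dense over the interior
```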
AI and simulation: What can they learn from each other
NASA Technical Reports Server (NTRS)
Colombano, Silvano P.
1988-01-01
Simulation and Artificial Intelligence share a fertile common ground, both from a practical and from a conceptual point of view. Strengths and weaknesses of both Knowledge Based Systems and Modeling and Simulation are examined, and three types of systems that combine the strengths of both technologies are discussed. These types of systems are a practical starting point; however, the real strengths of both technologies will be exploited only when they are combined in a common knowledge representation paradigm. From an even deeper conceptual point of view, one might argue that the ability to reason from a set of facts (i.e., an Expert System) is less representative of human reasoning than the ability to make a model of the world, change it as required, and derive conclusions about the expected behavior of world entities. This is a fundamental problem in AI, and Modeling Theory can contribute to its solution. The application of Knowledge Engineering technology to a Distributed Processing Network Simulator (DPNS) is discussed.
DIORAMA Location Type User's Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terry, James Russell
2015-01-29
The purpose of this report is to present the current design and implementation of the DIORAMA location type object (LocationType) and to provide examples and use cases. The LocationType object is included in the diorama-app package in the diorama::types namespace. Abstractly, the object is intended to capture the full time history of the location of an object or reference point. For example, a location may be specified as a near-Earth orbit in terms of a two-line element set, in which case the location type is capable of propagating the orbit both forward and backward in time to provide a location for any given time. Alternatively, the location may be specified as a fixed set of geodetic coordinates (latitude, longitude, and altitude), in which case the geodetic location of the object is expected to remain constant for all time. From an implementation perspective, the location type is defined as a union of multiple independent objects defined in the DIORAMA tle library. Types presently included in the union are listed and described in subsections below, and all conversions or transformations between these location types are handled by utilities provided by the tle library, with the exception of the "special-values" location type.
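The union-of-variants design is easy to mirror in code. Below is a minimal Python sketch of the same idea, a tagged union whose variants answer a common location query; the names (FixedGeodetic, TwoLineElement) are illustrative, not DIORAMA's actual C++ API, and the TLE variant leans on the third-party sgp4 package.

```python
from dataclasses import dataclass
from typing import Union
from sgp4.api import Satrec, jday

@dataclass
class FixedGeodetic:
    lat_deg: float
    lon_deg: float
    alt_m: float
    def location(self, when):                 # constant for all time
        return (self.lat_deg, self.lon_deg, self.alt_m)

@dataclass
class TwoLineElement:
    line1: str
    line2: str
    def location(self, when):                 # propagate orbit to `when`
        sat = Satrec.twoline2rv(self.line1, self.line2)
        jd, fr = jday(*when)                  # when = (Y, M, D, h, m, s)
        err, r_km, v_kms = sat.sgp4(jd, fr)   # TEME position, km
        if err:
            raise ValueError(f"SGP4 error code {err}")
        return r_km

# the "union of multiple independent objects" as a Python type alias
LocationType = Union[FixedGeodetic, TwoLineElement]
```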
Matrix Sturm-Liouville equation with a Bessel-type singularity on a finite interval
NASA Astrophysics Data System (ADS)
Bondarenko, Natalia
2017-03-01
The matrix Sturm-Liouville equation on a finite interval with a Bessel-type singularity at one end of the interval is studied. Special fundamental systems of solutions for this equation are constructed: analytic Bessel-type solutions with prescribed behavior at the singular point, and Birkhoff-type solutions with known asymptotics for large values of the spectral parameter. Asymptotic formulas for the Stokes multipliers connecting these two fundamental systems of solutions are derived. We also set boundary conditions and obtain asymptotic formulas for the spectral data (the eigenvalues and the weight matrices) of the boundary value problem. Our results will be useful in the theory of direct and inverse spectral problems.
NASA Astrophysics Data System (ADS)
Bassier, M.; Bonduel, M.; Van Genechten, B.; Vergauwen, M.
2017-11-01
Point cloud segmentation is a crucial step in scene understanding and interpretation. The goal is to decompose the initial data into sets of workable clusters with similar properties. Additionally, it is a key aspect in the automated procedure from point cloud data to BIM. Current approaches typically segment only a single type of primitive, such as planes or cylinders; they also tend to oversegment the data and are often sensor or scene dependent. In this work, a method is presented to automatically segment large unstructured point clouds of buildings. More specifically, the segmentation is formulated as a graph optimisation problem. First, the data is oversegmented with a greedy octree-based region growing method, conditioned on the segmentation of planes as well as smooth surfaces. Next, the candidate clusters are represented by a Conditional Random Field, after which the most likely configuration of candidate clusters is computed given a set of local and contextual features. The experiments show that the method is a fast and reliable framework for unstructured point cloud segmentation. Processing speeds of up to 40,000 points per second are recorded for the region growing. Additionally, the recall and precision of the graph clustering are approximately 80%, and oversegmentation is reduced by nearly 22% by clustering the data. These clusters will be classified and used as a basis for the reconstruction of BIM models.
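A minimal sketch of the oversegmentation step follows: greedy region growing on a point cloud, seeded at unvisited points and grown while neighbours have similar normals (a plane/smooth-surface criterion). This is a generic stand-in for the authors' octree-based implementation, and it assumes per-point normals have been precomputed (e.g., by PCA on local neighbourhoods).

```python
import numpy as np
from scipy.spatial import cKDTree

def region_grow(points, normals, radius=0.1, angle_deg=10.0):
    """points: (n, 3) array; normals: (n, 3) unit vectors; returns segment ids."""
    tree = cKDTree(points)
    cos_thresh = np.cos(np.radians(angle_deg))
    labels = np.full(len(points), -1)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:                                   # flood-fill the region
            i = stack.pop()
            for j in tree.query_ball_point(points[i], radius):
                if labels[j] == -1 and abs(normals[i] @ normals[j]) > cos_thresh:
                    labels[j] = current
                    stack.append(j)
        current += 1
    return labels   # one (over)segment per point; CRF clustering would merge these
```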
Tang, Hao-chen; Xiang, Ming; Chen, Hang; Hu, Xiao-chuan; Yang, Guo-yong
2016-01-01
To investigate the therapeutic efficacy of bone-setting manipulative reduction and small splint fixation combined with micro-movement theory exercise for the treatment of humeral shaft fractures. From March 2011 to February 2014, 64 cases of humeral shaft fractures were treated by bone-setting manipulative reduction and small splint fixation, including 28 males and 36 females with an average age of 38.1 years (range, 22 to 67 years). According to the AO/OTA classification, there were 10 cases of type A1, 12 of type A2, 11 of type A3, 10 of type B1, 12 of type B2, 7 of type B3, 2 of type C1, 1 of type C2, and 1 of type C3. After closed reduction, early functional exercise was performed according to micro-movement theory. No patient had fractures of other parts, neurovascular injury, or serious medical problems. Patients were followed up for fracture healing, recovery of shoulder and elbow joint function, and curative effect. All patients were followed up for 10 to 12 months (average, 10.3 months). Of them, 2 cases showed only a small amount of callus growth at 3 months after closed reduction and 2 cases developed radial nerve symptoms after closed reduction; these cases were converted to operative treatment. The remaining patients achieved osseous healing in 8 to 12 weeks (average, 10.2 weeks). After osseous healing, the average Constant-Murley score was (93.5 ± 3.2) points: excellent in 29 cases, good in 29, and fair in 6, for an excellent-good rate of 90.3%. The average Mayo score was (93.7 ± 4.2) points: excellent in 35 cases, good in 23, and fair in 6, for an excellent-good rate of 91.9%. Bone-setting manipulative reduction and small splint fixation combined with micro-movement theory exercise for the treatment of humeral shaft fractures is effective, simple, and inexpensive; the treatment has a relevant scientific basis and practical value, and it can effectively reduce complications and promote early recovery.
MM Algorithms for Geometric and Signomial Programming
Lange, Kenneth; Zhou, Hua
2013-01-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545
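To make the AM-GM separation concrete, here is a toy MM iteration for the posynomial f(x, y) = xy + 4/x + 4/y with x, y > 0, an illustration of the generic principle only (the paper's full signomial machinery also needs the supporting-hyperplane step for negative terms). At the iterate (x_m, y_m), the coupled term is majorized by xy <= (x_m*y_m/2)*[(x/x_m)^2 + (y/y_m)^2], which separates the variables and makes each one-dimensional surrogate minimum closed-form.

```python
def mm_posynomial(x=1.0, y=3.0, iters=50):
    """MM for f(x, y) = x*y + 4/x + 4/y; the true minimizer is x = y = 4**(1/3)."""
    for _ in range(iters):
        # surrogate in x alone: (y_m/(2*x_m))*x**2 + 4/x, minimized where
        # x**3 = 4*x_m/y_m (and symmetrically for y); RHS uses old (x, y)
        x, y = (4 * x / y) ** (1 / 3), (4 * y / x) ** (1 / 3)
    return x, y

print(mm_posynomial())   # -> (1.5874..., 1.5874...), i.e. x = y = 4**(1/3)
```

Each surrogate touches f at the current iterate (AM-GM holds with equality there), so every update is guaranteed not to increase f, which is the descent property the MM principle trades on.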
A method for improved accuracy in three dimensions for determining wheel/rail contact points
NASA Astrophysics Data System (ADS)
Yang, Xinwen; Gu, Shaojie; Zhou, Shunhua; Zhou, Yu; Lian, Songliang
2015-11-01
Searching for the contact points between wheels and rails is important because these points represent the points at which contact forces are exerted. In order to obtain accurate contact points and an in-depth description of wheel/rail contact behaviours on a curved track or in a turnout, a method with improved accuracy in three dimensions is proposed to determine the contact points and the contact patches between the wheel and the rail, taking into account the effect of the yaw angle and the roll angle on the motion of the wheel set. The proposed method, which requires no curve fitting of the wheel and rail profiles, can accurately, directly, and comprehensively determine the contact interface distances between the wheel and the rail. A range iteration algorithm is used to improve computational efficiency and reduce the calculation required. The method is applied to the analysis of contact between CHN 75 kg/m rails and the worn-type treads of wheel sets used on China's freight cars. The results of the proposed method are shown to be consistent with those of Kalker's program CONTACT, with a maximum deviation in wheel/rail contact patch area of approximately 5% between the two methods. The proposed method can also be used to investigate static wheel/rail contact. Some wheel/rail contact points and contact patch distributions are discussed and assessed for both non-worn and worn wheel and rail profiles.
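A minimal sketch of the contact-search idea: rotate the wheel profile by the wheelset's roll, then scan the two discretized profiles for the point of minimum vertical separation, the candidate contact point. The single-point search and 2-D profiles are illustrative simplifications of the paper's 3-D distance method, and the rail abscissae are assumed increasing for interpolation.

```python
import numpy as np

def contact_point(wheel_xy, rail_xy, roll=0.0):
    """wheel_xy, rail_xy: (n, 2) lateral/vertical profile samples."""
    c, s = np.cos(roll), np.sin(roll)
    R = np.array([[c, -s], [s, c]])
    w = wheel_xy @ R.T                        # roll the wheel profile
    # vertical gap between every wheel sample and the rail surface,
    # with the rail height interpolated at each wheel abscissa
    rail_z = np.interp(w[:, 0], rail_xy[:, 0], rail_xy[:, 1])
    gap = w[:, 1] - rail_z
    i = np.argmin(gap)                        # smallest (or most negative) gap
    return w[i], gap[i]                       # contact point and its clearance
```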
[Research on spectra recognition method for cabbages and weeds based on PCA and SIMCA].
Zu, Qin; Deng, Wei; Wang, Xiu; Zhao, Chun-Jiang
2013-10-01
In order to improve the accuracy and efficiency of weed identification, differences in spectral reflectance were employed to distinguish between crops and weeds. Firstly, different combinations of Savitzky-Golay (SG) convolutional derivation and the multiplicative scattering correction (MSC) method were applied to preprocess the raw spectral data. Then, clustering analysis of the various types of plants was performed using principal component analysis (PCA), and the feature wavelengths that were sensitive for classifying the various plant types were extracted according to the loading plots of the optimal principal components in the PCA results. Finally, with the feature wavelengths as input variables, the soft independent modeling of class analogy (SIMCA) classification method was used to identify the various types of plants. The experimental results for classifying cabbages and weeds showed that, on the basis of optimal pretreatment by a combined application of MSC and SG convolutional derivation with SG parameters set to a 1st-order derivative, a 3rd-degree polynomial, and 51 smoothing points, 23 feature wavelengths were extracted in accordance with the top three principal components of the PCA results. When the SIMCA method was used for classification with the previously selected 23 feature wavelengths as input variables, the classification rates of the modeling set and the prediction set were up to 98.6% and 100%, respectively.
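A sketch of the preprocessing and feature-selection chain described above: MSC, a 1st-order Savitzky-Golay derivative (3rd-degree polynomial, 51-point window), PCA, and then picking the wavelengths with the largest loadings on the top three components. `spectra` is an (n_samples, n_wavelengths) array with at least 51 wavelengths; the paper's particular 23 wavelengths depend on their data, so we simply take the top-k loadings here.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

def msc(spectra):
    """Multiplicative scattering correction against the mean spectrum."""
    ref = spectra.mean(axis=0)
    out = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        b, a = np.polyfit(ref, s, 1)                 # fit s ~ a + b*ref
        out[i] = (s - a) / b
    return out

def feature_wavelengths(spectra, k=23):
    x = savgol_filter(msc(spectra), 51, 3, deriv=1, axis=1)
    pca = PCA(n_components=3).fit(x)
    score = np.abs(pca.components_).sum(axis=0)      # loading magnitude per wavelength
    return np.sort(np.argsort(score)[-k:])           # indices of the k feature wavelengths
```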
Evolution of Query Optimization Methods
NASA Astrophysics Data System (ADS)
Hameurlain, Abdelkader; Morvan, Franck
Query optimization is the most critical phase in query processing. In this paper, we concisely describe the evolution of query optimization methods from uniprocessor relational database systems to data Grid systems, through parallel, distributed, and data integration systems. We point out a set of parameters to characterize and compare query optimization methods, mainly: (i) size of the search space, (ii) type of method (static or dynamic), (iii) modification types of execution plans (re-optimization or re-scheduling), (iv) level of modification (intra-operator and/or inter-operator), (v) type of event (estimation errors, delay, user preferences), and (vi) nature of decision-making (centralized or decentralized control).
Non-smooth Hopf-type bifurcations arising from impact–friction contact events in rotating machinery
Mora, Karin; Budd, Chris; Glendinning, Paul; Keogh, Patrick
2014-01-01
We analyse the novel dynamics arising in a nonlinear rotor dynamic system by investigating the discontinuity-induced bifurcations corresponding to collisions with the rotor housing (touchdown bearing surface interactions). The simplified Föppl/Jeffcott rotor with clearance and mass unbalance is modelled by a two degree of freedom impact–friction oscillator, as appropriate for a rigid rotor levitated by magnetic bearings. Two types of motion observed in experiments are of interest in this paper: no contact and repeated instantaneous contact. We study how these are affected by damping and stiffness present in the system using analytical and numerical piecewise-smooth dynamical systems methods. By studying the impact map, we show that these types of motion arise at a novel non-smooth Hopf-type bifurcation from a boundary equilibrium bifurcation point for certain parameter values. A local analysis of this bifurcation point allows us a complete understanding of this behaviour in a general setting. The analysis identifies criteria for the existence of such smooth and non-smooth bifurcations, which is an essential step towards achieving reliable and robust controllers that can take compensating action. PMID:25383034
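A generic single-degree-of-freedom analogue of the repeated-instantaneous-contact motion can be simulated with event detection: a harmonically forced, damped oscillator hits a rigid stop at a clearance and a Newtonian restitution law v -> -e*v is applied at each impact. The parameters are illustrative only; the paper's rotor model is two-degree-of-freedom with friction.

```python
import numpy as np
from scipy.integrate import solve_ivp

zeta, w, e, gap = 0.05, 1.8, 0.8, 1.0       # damping, forcing freq., restitution, clearance

def rhs(t, y):                               # forced, damped linear oscillator
    x, v = y
    return [v, -2 * zeta * v - x + np.cos(w * t)]

def hit(t, y):                               # event: displacement reaches the stop
    return y[0] - gap
hit.terminal, hit.direction = True, 1

t, y, impacts = 0.0, [0.0, 0.0], []
while t < 200.0 and len(impacts) < 500:
    sol = solve_ivp(rhs, (t, 200.0), y, events=hit, max_step=0.05, rtol=1e-8)
    if sol.t_events[0].size == 0:
        break                                # motion stays out of contact
    t = sol.t_events[0][0]
    v_hit = sol.y_events[0][0][1]
    impacts.append((t, v_hit))               # samples of the impact map
    y = [gap - 1e-9, -e * v_hit]             # restitution law; nudge off the stop
print(len(impacts), "impacts recorded")
```

Plotting successive (t, v_hit) pairs against each other is one way to visualize the impact map whose fixed points and bifurcations the paper analyzes.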
Improving Music Mood Classification Using Lyrics, Audio and Social Tags
ERIC Educational Resources Information Center
Hu, Xiao
2010-01-01
The affective aspect of music (popularly known as music mood) is a newly emerging metadata type and access point to music information, but it has not been well studied in information science. There has yet to be developed a suitable set of mood categories that can reflect the reality of music listening and can be well adopted in the Music…
Cyber OODA: Towards a Conceptual Cyberspace Framework
2010-06-01
settings (cannot view), or a combination of physical and syntactic limits, or viewing a content-rich PowerPoint presentation on a BlackBerry ... This is an ideal type definition. VPNs tunnel through traditional networks, but do not ... exchange information other than travel instructions. As long as the VPN tunnel remains secure, it is treated as a separate cyberspace. If security
ERIC Educational Resources Information Center
Nam, Ta Thanh; Trinh, Lap Q.
2012-01-01
In Vietnamese secondary education, translation and visuals are traditionally used as major techniques in teaching new English lexical items. Responding to the Vietnamese government policy issued in 2008 on using IT for a quality education, the application of PowerPoint has been considered the most prevalent type of technology used in the…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-01
... accepted. Submissions must be submitted in English by the applicable deadlines set forth in this notice. To... "Intent to Testify," "Pre-hearing brief" or a "Post hearing brief." Submissions must be in English, with the total submission not to exceed 30 single-spaced standard letter-size pages in 12-point type...
ERIC Educational Resources Information Center
Wesson, David A.
Copyfitting is probably the least exciting portion of any course that deals with design and production of print advertising. Students find the transformation of manuscript copy into set type difficult to visualize. The math, though no more than multiplication and division, seems insurmountable to some--probably because the entities such as points,…
Chandler, Mark A.; Goggin, David J.; Horne, Patrick J.; Kocurek, Gary G.; Lake, Larry W.
1989-01-01
For making rapid, non-destructive permeability measurements in the field, a portable minipermeameter of the kind having a manually-operated gas injection tip is provided with a microcomputer system which operates a flow controller to precisely regulate gas flow rate to a test sample, and reads a pressure sensor which senses the pressure across the test sample. The microcomputer system automatically turns on the gas supply at the start of each measurement, senses when a steady state is reached, collects and records pressure and flow rate data, and shuts off the gas supply immediately after the measurement is completed. Preferably, temperature is also sensed to correct for changes in gas viscosity. The microcomputer system may also provide automatic zero-point adjustment, sensor calibration, and over-range sensing, and may select controllers, sensors, and set-points for obtaining the most precise measurements. Electronic sensors may provide increased accuracy and precision. Preferably, one microcomputer is used for sensing, instrument control, and data collection, and a second microcomputer is dedicated to recording and processing the data, selecting the sensors and set-points for obtaining the most precise measurements, and instructing the user how to set up and operate the minipermeameter. To provide mass data collection and user-friendly operation, the second microcomputer is preferably a laptop-type portable microcomputer having a non-volatile or battery-backed CMOS memory.
Precise determination of time to reach viral load set point after acute HIV-1 infection.
Huang, Xiaojie; Chen, Hui; Li, Wei; Li, Haiying; Jin, Xia; Perelson, Alan S; Fox, Zoe; Zhang, Tong; Xu, Xiaoning; Wu, Hao
2012-12-01
The HIV viral load set point has long been used as a prognostic marker of disease progression and, more recently, as an end-point parameter in HIV vaccine clinical trials. The definition of the set point, however, is variable. Moreover, the earliest time at which the set point is reached after the onset of infection has never been clearly defined. In this study, we obtained sequential plasma viral load data from 60 acutely HIV-infected Chinese patients in a cohort of men who have sex with men, mathematically determined viral load set point levels, and estimated the time to attain the set point after infection. We also compared the results derived from our models with those obtained from an empirical method. With a novel, uncomplicated mathematical model, we found that the time to reach the set point may vary from 21 to 119 days, depending on the patient's initial viral load trajectory. The viral load set points were 4.28 ± 0.86 and 4.25 ± 0.87 log10 copies per milliliter (P = 0.08), respectively, as determined by our model and the empirical method, indicating excellent agreement between the old and new methods. We provide a novel method to estimate the viral load set point at the very early stage of HIV infection. Application of this model can accurately and reliably determine the set point, thus providing a new tool for physicians to better monitor early intervention strategies in acutely infected patients and for scientists to rationally design preventative vaccine studies.
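The abstract does not specify the authors' model, but a simple trajectory of the same shape makes the idea concrete: fit a single-exponential decay from peak to plateau in log10 space, and report the first time the fitted curve comes within a tolerance (say 0.1 log10) of the plateau. This is a hedged sketch, not the paper's method.

```python
import numpy as np
from scipy.optimize import curve_fit

def traj(t, s, p, d):                       # log10 VL decaying from peak p to plateau s
    return s + (p - s) * np.exp(-d * t)

def setpoint_and_time(t_days, log10_vl, tol=0.1):
    (s, p, d), _ = curve_fit(traj, t_days, log10_vl, p0=[4.0, 6.0, 0.1])
    t_set = np.log((p - s) / tol) / d       # first time within tol log10 of plateau s
    return s, t_set

t = np.array([0, 7, 14, 21, 42, 63, 84], float)      # days since onset (illustrative)
vl = np.array([6.5, 5.8, 5.2, 4.8, 4.4, 4.3, 4.3])   # log10 copies/mL (illustrative)
print(setpoint_and_time(t, vl))             # set point ~4.3, time to set point in days
```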
Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F
2015-01-01
Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and the lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets and the principal modes of variation in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles of the heart, using a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.
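The model above operates on whole point sets; as a loose, off-the-shelf analogue of two of its ingredients, variational Bayes and automatic selection of the number of clusters, one can run scikit-learn's BayesianGaussianMixture on fixed-length shape descriptors (one vector per point set). This is only a conceptual stand-in: it assumes such descriptors exist, whereas the paper works directly on raw, correspondence-free point sets.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
descriptors = np.vstack([rng.normal(0, 1, (40, 5)),     # e.g. "healthy" shapes
                         rng.normal(3, 1, (40, 5))])    # e.g. "pathological" shapes

vb = BayesianGaussianMixture(n_components=10,           # deliberately too many
                             weight_concentration_prior=1e-2,
                             max_iter=500, random_state=0).fit(descriptors)
labels = vb.predict(descriptors)
print(np.unique(labels).size, "effective clusters")     # surplus components are pruned
```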
NASA Astrophysics Data System (ADS)
Zeng, Hao; Zhang, Jingrui
2018-04-01
The low-thrust version of fuel-optimal transfers between periodic orbits with different energies in the vicinity of the five libration points is investigated in depth in the Circular Restricted Three-Body Problem. An indirect optimization technique incorporating constraint gradients is employed to further improve the computational efficiency and accuracy of the algorithm. The required optimal thrust magnitude and direction can be determined to create the bridging trajectory that connects the invariant manifolds. A hierarchical design strategy that divides the constraint set is proposed to seek the optimal solution when the problem cannot be solved directly. The solution procedure and the value ranges of the variables used are also summarized. To demonstrate the effectiveness of the transfer scheme across different types of libration point orbits, transfer trajectories between sample orbits, including Lyapunov orbits, planar orbits, halo orbits, axial orbits, vertical orbits, and butterfly orbits for the collinear and triangular libration points, are investigated with various times of flight. Numerical results show that the fuel consumption varies from a few kilograms to tens of kilograms, depending on the locations and types of the mission orbits as well as the corresponding invariant manifold structures, and indicate that low-thrust transfers may be a beneficial option for extended science missions around different libration points.
Forcucci, Alessandra; Pawlowski, Michal E.; Majors, Catherine; Richards-Kortum, Rebecca; Tkaczyk, Tomasz S.
2015-01-01
Three-part differential white blood cell counts are used for disease diagnosis and monitoring at the point-of-care. A low-cost, miniature achromatic microscope was fabricated for identification of lymphocytes, monocytes, and granulocytes in samples of whole blood stained with acridine orange. The microscope was manufactured using rapid prototyping techniques of diamond turning and 3D printing and is intended for use at the point-of-care in low-resource settings. The custom-designed microscope requires no manual adjustment between samples and was successfully able to classify three white blood cell types (lymphocytes, granulocytes, and monocytes) using samples of peripheral whole blood stained with acridine orange. PMID:26601006
Dynamics of a durable commodity market involving trade at disequilibrium
NASA Astrophysics Data System (ADS)
Panchuk, A.; Puu, T.
2018-05-01
The present work considers a simple model of a durable commodity market involving two agents who trade stocks of two different types. Stock commodities, in contrast to flow commodities, remain on the market from period to period and, consequently, neither a unique demand function nor a unique supply function exists. We also set up exact conditions for trade at disequilibrium, an issue that is usually neglected, though it is a fact of reality. The induced iterative system has an infinite number of fixed points and path-dependent dynamics. We show that a typical orbit is either attracted to one of the fixed points or eventually sticks at a no-trade point. In the latter case the stock distribution always remains the same while the price displays periodic or chaotic oscillations.
Minimum airflow reset of single-duct VAV terminal boxes
NASA Astrophysics Data System (ADS)
Cho, Young-Hum
Single duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve the level of thermal comfort and indoor air quality (IAQ) while at the same time lowering overall energy costs. In principle, this minimum rate should be calculated according to the minimum ventilation requirement based on ASHRAE Standard 62.1 and the maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may result in occupant discomfort and energy waste. If the minimum rate of airflow is set too high, the AHUs will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the control algorithms of advanced VAV terminal box controllers without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified experimentally. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort levels and reducing overall energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points. Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and applied to actual systems for performance validation. The results of the theoretical analysis, numerical simulations, and experiments show that the optimal control algorithms can automatically identify the minimum rate of heating airflow under actual working conditions. The improved control helps to stabilize room air temperatures; the vertical difference in room air temperature remained below the comfort limit. Measurements of room CO2 levels indicate that reducing the minimum airflow set point did not adversely affect indoor air quality. According to the measured energy results, the optimal control algorithms yield lower reheat energy consumption than conventional controls.
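The two bounds that drive a minimum-airflow reset can be sketched directly: the ASHRAE 62.1 breathing-zone ventilation requirement and the airflow needed to meet the zone heating load at a given discharge air temperature. The default values (Rp, Ra, Ez) and the single-zone form below are illustrative, not the dissertation's control algorithm.

```python
def min_airflow_cfm(people, area_ft2, q_heat_btuh, t_discharge_f, t_room_f,
                    rp=5.0, ra=0.06, ez=0.8):
    """Larger of the ventilation and heating airflow requirements, in cfm."""
    v_bz = rp * people + ra * area_ft2        # breathing-zone outdoor air (62.1)
    v_oz = v_bz / ez                          # zone outdoor airflow
    v_heat = q_heat_btuh / (1.08 * (t_discharge_f - t_room_f))  # sensible heat eq.
    return max(v_oz, v_heat)                  # reset point: the binding requirement

# e.g. min_airflow_cfm(people=4, area_ft2=400, q_heat_btuh=6000,
#                      t_discharge_f=90, t_room_f=70) -> max(55, 277.8) cfm
```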
Chen, Brian R; Poon, Emily; Alam, Murad
2017-08-01
Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.
Sacchetti, Rossella; De Luca, Giovanna; Guberti, Emilia; Zanetti, Franca
2015-01-01
Municipal tap water is increasingly treated at the point of use (POU) to improve the acceptability and palatability of its taste. The aim of this study was to assess the bacteriologic and nutritional characteristics of tap water treated at the point of use in residential healthcare facilities for the elderly. Two types of POU devices were used: microfiltered water dispensers (MWDs) and reverse-osmosis water dispensers (ROWDs). All samples of water entering the devices and leaving them were tested for the bacteriological parameters set by Italian regulations for drinking water and for opportunistic pathogens associated with various infections in healthcare settings; in addition, the degree of mineralization of the water was assessed. The results revealed widespread bacterial contamination in the POU treatment devices, particularly from potentially pathogenic species. As expected, the use of ROWDs led to a decrease in the saline content of the water. In conclusion, the use of POU treatment in healthcare facilities for the elderly can be considered advisable only if the devices are constantly and carefully maintained. PMID:26371025
Goldbach, Hayley; Chang, Aileen Y; Kyer, Andrea; Ketshogileng, Dineo; Taylor, Lynne; Chandra, Amit; Dacso, Matthew; Kung, Shiang-Ju; Rijken, Taatske; Fontelo, Paul; Littman-Quinn, Ryan; Seymour, Anne K; Kovarik, Carrie L
2014-01-01
Objective Many mobile phone resources have been developed to increase access to health education in the developing world, yet few studies have compared these resources or quantified their performance in a resource-limited setting. This study aims to compare the performance of resident physicians in answering clinical scenarios using PubMed abstracts accessed via the PubMed for Handhelds (PubMed4Hh) website versus medical/drug reference applications (Medical Apps) accessed via software on the mobile phone. Methods A two-arm comparative study with crossover design was conducted. Subjects, who were resident physicians at the University of Botswana, completed eight scenarios, each with multi-part questions. The primary outcome was a grade for each question. The primary independent variable was the intervention arm and other independent variables included residency and question. Results Within each question type there were significant differences in ‘percentage correct’ between Medical Apps and PubMed4Hh for three of the six types of questions: drug-related, diagnosis/definitions, and treatment/management. Within each of these question types, Medical Apps had a higher percentage of fully correct responses than PubMed4Hh (63% vs 13%, 33% vs 12%, and 41% vs 13%, respectively). PubMed4Hh performed better for epidemiologic questions. Conclusions While mobile access to primary literature remains important and serves an information niche, mobile applications with condensed content may be more appropriate for point-of-care information needs. Further research is required to examine the specific information needs of clinicians in resource-limited settings and to evaluate the appropriateness of current resources in bridging location- and context-specific information gaps. PMID:23535665
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldegunde, Manuel, E-mail: M.A.Aldegunde-Rodriguez@warwick.ac.uk; Kermode, James R., E-mail: J.R.Kermode@warwick.ac.uk; Zabaras, Nicholas
This paper presents the development of a new exchange–correlation functional from the point of view of machine learning. Using atomization energies of solids and small molecules, we train a linear model for the exchange enhancement factor using a Bayesian approach which allows for the quantification of uncertainties in the predictions. A relevance vector machine is used to automatically select the most relevant terms of the model. We then test this model on atomization energies and also on bulk properties. The average model provides a mean absolute error of only 0.116 eV for the test points of the G2/97 set but a larger 0.314 eV for the test solids. In terms of bulk properties, the prediction for transition metals and monovalent semiconductors has a very low test error. However, as expected, predictions for types of materials not represented in the training set, such as ionic solids, show much larger errors.
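A hedged sketch of the statistical ingredients named above, a sparse Bayesian linear model (relevance-vector-machine style) fit with automatic term selection and predictive uncertainties, using scikit-learn's ARDRegression. The features would be basis terms of the exchange enhancement factor; here X and y are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 12))                  # 12 candidate basis terms
y = X[:, 0] - 0.5 * X[:, 3] + 0.05 * rng.normal(size=60)  # only 2 terms matter

ard = ARDRegression().fit(X, y)
mean, std = ard.predict(X[:5], return_std=True)  # predictions with uncertainties
relevant = np.flatnonzero(np.abs(ard.coef_) > 1e-3)
print("terms kept by ARD:", relevant)            # sparsity = automatic term selection
```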
Low energy sign illumination system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minogue, R.W.
A low energy sign construction is described for illumination of signs of the type having translucent illuminated faces. An opaque sign border is bridged by a reflector extending generally parallel to the illuminated face and having a truncated sawtooth profile. For single-sided signs, one set of sawtooth points is truncated; for dual-sided signs, both sets of sawtooth points are truncated. Bayonet-mounted lighting sockets are mounted at apertures in the respective truncations and utilize the metallic reflective surface as one side of a low voltage (10.5-volt) AC circuit. The reflector forms a cooled heat sink mounting the bulbs as well as a supporting matrix. The lamps, as mounted to this supporting matrix, are typically spaced at distances which do not exceed twice the distance from the lamp filament to the translucent face. By the expedient of using 14-V lamps, prolonged lamp life with low energy illumination results.
Stijkel, A; van Eijndhoven, J C; Bal, R
1996-12-01
The Dutch procedure for standard setting for occupational exposure to chemicals, just like the European Union (EU) procedure, is characterized by an organizational separation between considerations of health on the one side, and of technology, economics, and policy on the other side. Health considerations form the basis for numerical guidelines. These guidelines are next combined with technical-economical considerations. Standards are then proposed, and are finally set by the Ministry of Social Affairs and Employment. An analysis of this procedure might be of relevance to the US, where other procedures are used and criticized. In this article we focus on the first stage of the standard-setting procedure. In this stage, the Dutch Expert Committee on Occupational Standards (DECOS) drafts a criteria document in which a health-based guideline is proposed. The drafting is based on a set of starting points for assessing toxicity. We raise the questions, "Does DECOS limit itself only to health considerations? And if not, what are the consequences of such a situation?" We discuss DECOS' starting points and analyze the relationships between those starting points, and then explore eight criteria documents where DECOS was considering reproductive risks as a possible critical effect. For various reasons, it will be concluded that the starting points leave much interpretative space, and that this space is widened further by the manner in which DECOS utilizes it. This is especially true in situations involving sex-specific risks and uncertainties in knowledge. Consequently, even at the first stage, where health considerations alone are intended to play a role, there is much room for other than health-related factors to influence decision making, although it is unavoidable that some interpretative space will remain. We argue that separating the various types of consideration should not be abandoned. Rather, through adjustments in the starting points and aspects of the procedure, clarity should be guaranteed about the way the interpretative space is being employed.
Measuring the operational efficiency of individual theme park attractions.
Kim, Changhee; Kim, Soowook
2016-01-01
This study assesses the operational efficiency of theme park attractions using data envelopment analysis, utilizing actual data on 15 attractions at Samsung Everland, located in Yongin-si, Republic of Korea. In particular, this study identifies crowding and waiting time as among the main determinants of visitor satisfaction, and analyzes the efficiency of individual attractions in terms of waiting time. The installation area, installation cost, and annual repair cost are set as input factors, and the number of annual users and customer satisfaction as output factors. The results show that roller coaster-type attractions were less efficient than other types of attractions, while rotating-type attractions were relatively more efficient. However, an importance-performance analysis of individual attractions' efficiency and satisfaction showed that operational efficiency should not be the sole consideration in attraction installation. In addition, the projection points for input factors for efficient use of attractions and the appropriate reference set for benchmarking are provided as guidelines for attraction efficiency management.
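A minimal input-oriented CCR DEA sketch with scipy: for each attraction (DMU) solve min theta subject to X·lam <= theta·x_o and Y·lam >= y_o with lam >= 0. Inputs X (e.g., area, installation cost, repair cost) and outputs Y (annual users, satisfaction) are column-per-DMU matrices; the data layout is generic, not the Everland figures.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr(X, Y):
    """X: (m inputs, n DMUs); Y: (s outputs, n DMUs); returns theta* per DMU."""
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                   # minimize theta
        A_in = np.hstack([-X[:, [o]], X])             # X lam - theta x_o <= 0
        A_out = np.hstack([np.zeros((s, 1)), -Y])     # -Y lam <= -y_o
        A = np.vstack([A_in, A_out])
        b = np.r_[np.zeros(m), -Y[:, o]]
        res = linprog(c, A_ub=A, b_ub=b,
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.fun)                        # theta* in (0, 1]; 1 = efficient
    return np.array(scores)
```

The optimal lambda weights identify each inefficient attraction's reference set, and theta*·x_o gives the projection points for its inputs mentioned above.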
Origami building blocks: Generic and special four-vertices
NASA Astrophysics Data System (ADS)
Waitukaitis, Scott; van Hecke, Martin
2016-02-01
Four rigid panels connected by hinges that meet at a point form a four-vertex, the fundamental building block of origami metamaterials. Most materials designed so far are based on the same four-vertex geometry, and little is known regarding how different geometries affect folding behavior. Here we systematically categorize and analyze the geometries and resulting folding motions of Euclidean four-vertices. Comparing the relative sizes of sector angles, we identify three types of generic vertices and two accompanying subtypes. We determine which folds can fully close and the possible mountain-valley assignments. Next, we consider what occurs when sector angles or sums thereof are set equal, which results in 16 special vertex types. One of these, flat-foldable vertices, has been studied extensively, but we show that a wide variety of qualitatively different folding motions exist for the other 15 special and 3 generic types. Our work establishes a straightforward set of rules for understanding the folding motion of both generic and special four-vertices and serves as a roadmap for designing origami metamaterials.
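A small checker in the spirit of the categorization above: the four sector angles of a Euclidean vertex must sum to 2π (developability), and the much-studied flat-foldable special type is characterized by Kawasaki's condition (alternating angles each sum to π). Only these two equality conditions are tested here; the paper's full taxonomy also compares sums of sector angles to identify the other special types.

```python
import math

def classify_four_vertex(angles, tol=1e-9):
    """angles: the four sector angles, in radians, in cyclic order."""
    a = list(angles)
    if abs(sum(a) - 2 * math.pi) > tol:
        raise ValueError("sector angles must sum to 2*pi for a Euclidean vertex")
    flat_foldable = abs(a[0] + a[2] - math.pi) < tol   # Kawasaki's condition
    has_equal_sectors = len({round(x, 9) for x in a}) < 4
    return {"flat_foldable": flat_foldable,
            "generic": not flat_foldable and not has_equal_sectors}

# e.g. classify_four_vertex([math.pi/3, math.pi/2, 2*math.pi/3, math.pi/2])
# -> {'flat_foldable': True, 'generic': False}; most random angle sets are generic
```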
Region 9 NPL Sites (Superfund Sites 2013)
NPL site POINT locations for the US EPA Region 9. NPL (National Priorities List) sites are hazardous waste sites that are eligible for extensive long-term cleanup under the Superfund program. Eligibility is determined by a scoring method called the Hazard Ranking System. Sites with high scores are listed on the NPL. The majority of the locations are derived from polygon centroids of digitized site boundaries. The remaining locations were generated from address geocoding and digitizing. Areas covered by this data set include Arizona, California, Nevada, Hawaii, Guam, American Samoa, the Northern Marianas, and the Trust Territories. Attributes include NPL status codes, NPL industry type codes, and environmental indicators. The related table NPL_Contaminants contains information about contaminated media types and chemicals. This is a one-to-many relate and can be related to the feature class using the relationship classes under the Feature Data Set ENVIRO_CONTAMINANT.
Information Needs, Infobutton Manager Use, and Satisfaction by Clinician Type: A Case Study
Collins, Sarah A.; Currie, Leanne M.; Bakken, Suzanne; Cimino, James J.
2009-01-01
To effectively meet clinician information needs at the point of care, we must understand how their needs are dependent on both context and clinician type. The Infobutton Manager (IM), accessed through a clinical information system, anticipates the clinician's questions and provides links to pertinent electronic resources. We conducted an observational usefulness case study of medical residents (MDs), nurse practitioners (NPs), registered nurses (RNs), and a physician assistant (PA), using the IM in a laboratory setting. Generic question types and success rates for each clinician's information needs were characterized. Question type frequency differed by clinician type. All clinician types asked for institution-specific protocols. The MDs asked about unfamiliar domains, RNs asked about physician order rationales, and NPs asked questions similar to both MDs and RNs. Observational data suggest that IM success rates may be improved by tailoring anticipated questions to clinician type. Clinicians reported that a more visible Infobutton may increase use. PMID:18952943
Feature detection on 3D images of dental imprints
NASA Astrophysics Data System (ADS)
Mokhtari, Marielle; Laurendeau, Denis
1994-09-01
A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the positions of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars, and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
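A sketch of the multi-scale idea behind the watershed-style detector described above: find local minima of the range image at four Gaussian smoothing scales and keep only the minima that persist (reappear within a small radius) across all scales. The tracking details and imprint-specific rules of the actual algorithm are omitted.

```python
import numpy as np
from scipy import ndimage

def persistent_minima(height, scales=(1, 2, 4, 8), radius=3):
    """height: 2D range image; returns (row, col) feature-point candidates."""
    maps = []
    for s in scales:
        sm = ndimage.gaussian_filter(height, s)
        minima = (sm == ndimage.minimum_filter(sm, size=2 * radius + 1))
        maps.append(minima)
    # a minimum "persists" if every coarser scale has a minimum nearby
    keep = maps[0].copy()
    for m in maps[1:]:
        keep &= ndimage.binary_dilation(m, iterations=radius)
    return np.argwhere(keep)
```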
Autoimmune regulator is acetylated by transcription coactivator CBP/p300
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saare, Mario, E-mail: mario.saare@ut.ee; Rebane, Ana; SIAF, Swiss Institute of Allergy and Asthma Research, University of Zuerich, Davos
2012-08-15
The Autoimmune Regulator (AIRE) is a regulator of transcription in the thymic medulla, where it controls the expression of a large set of peripheral-tissue specific genes. AIRE interacts with the transcriptional coactivator and acetyltransferase CBP and synergistically cooperates with it in transcriptional activation. Here, we aimed to study a possible role of AIRE acetylation in the modulation of its activity. We found that AIRE is acetylated in tissue culture cells and this acetylation is enhanced by overexpression of CBP and the CBP paralog p300. The acetylated lysines were located within the nuclear localization signal and the SAND domain. AIRE with mutations that mimicked acetylated K243 and K253 in the SAND domain had reduced transactivation activity and accumulated into fewer and larger nuclear bodies, whereas mutations that mimicked the unacetylated lysines were functionally similar to wild-type AIRE. Analogously to CBP, p300 localized to AIRE-containing nuclear bodies; however, the overexpression of p300 did not enhance the transcriptional activation of AIRE-regulated genes. Further studies showed that overexpression of p300 stabilized the AIRE protein. Interestingly, gene expression profiling revealed that AIRE with mutations mimicking K243/K253 acetylation in SAND was able to activate gene expression, although the affected genes were different and the activation level was lower than for genes regulated by wild-type AIRE. Our results suggest that AIRE acetylation can influence the selection of AIRE-activated genes. Highlights: AIRE is acetylated by the acetyltransferases p300 and CBP. Acetylation occurs between the CARD and SAND domains and within the SAND domain. Acetylation increases the size of AIRE nuclear dots. Acetylation increases AIRE protein stability. An AIRE acetylation mimic regulates a different set of AIRE target genes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Spencer; Rodrigues, George, E-mail: george.rodrigues@lhsc.on.ca; Department of Epidemiology/Biostatistics, University of Western Ontario, London
2013-01-01
Purpose: To perform a rigorous technological assessment and statistical validation of a software technology for anatomic delineations of the prostate on MRI datasets. Methods and Materials: A 3-phase validation strategy was used. Phase I consisted of anatomic atlas building using 100 prostate cancer MRI data sets to provide training data sets for the segmentation algorithms. In phase II, 2 experts contoured 15 new MRI prostate cancer cases using 3 approaches (manual, N points, and region of interest). In phase III, 5 new physicians with variable MRI prostate contouring experience segmented the same 15 phase II datasets using 3 approaches: manual, N points with no editing, and full autosegmentation with user editing allowed. Statistical analyses for time and accuracy (using the Dice similarity coefficient) endpoints used traditional descriptive statistics, analysis of variance, analysis of covariance, and the pooled Student t test. Results: In phase I, the average (SD) total and per-slice contouring times for the 2 physicians were 228 (75), 17 (3.5), 209 (65), and 15 (3.9) seconds, respectively. In phase II, statistically significant differences in physician contouring time were observed based on physician, type of contouring, and case sequence. The N points strategy resulted in superior segmentation accuracy when initial autosegmented contours were compared with final contours. In phase III, statistically significant differences in contouring time were again observed based on physician, type of contouring, and case sequence. The average relative time savings for N points and autosegmentation were 49% and 27%, respectively, compared with manual contouring. The N points and autosegmentation strategies resulted in average Dice values of 0.89 and 0.88, respectively. Pre- and postedited autosegmented contours demonstrated a higher average Dice similarity coefficient of 0.94. Conclusion: The software provided robust contours with minimal editing required. Time savings were observed for all physicians irrespective of experience level and baseline manual contouring speed.
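The accuracy endpoint above is the Dice similarity coefficient; for two binary masks A and B it is 2|A∩B| / (|A| + |B|), which is a few lines of numpy:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# e.g. dice(auto_mask, manual_mask); the edited autosegmentations in this
# study reached an average of 0.94 on this measure.
```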
Optical texture analysis for automatic cytology and histology: a Markovian approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pressman, N.J.
1976-10-12
Markovian analysis is a method to measure optical texture based on gray-level transition probabilities in digitized images. The experiments described in this dissertation investigate the classification performance of parameters generated by this method. Three types of data sets are used: images of (1) human blood leukocytes (nuclei of monocytes, neutrophils, and lymphocytes; Wright stain; (0.125 μm)²/picture point), (2) cervical exfoliative cells (nuclei of normal intermediate squamous cells and dysplastic and carcinoma in situ cells; azure-A/Feulgen stain; (0.125 μm)²/picture point), and (3) lymph-node tissue sections (6-μm-thick sections from normal, acute lymphadenitis, and Hodgkin lymph nodes; hematoxylin and eosin stain; (0.625 μm)²/picture point). Each image consists of 128 x 128 picture points originally scanned with a 256 gray-level resolution. Each image class is defined by 75 images.
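A sketch of the Markovian texture measure: estimate the gray-level transition (co-occurrence) probability matrix for one-pixel horizontal steps, then derive scalar texture parameters from it. The image is assumed to be a nonnegative grayscale array, quantized here to a small number of levels; the dissertation's specific parameter set is not reproduced.

```python
import numpy as np

def transition_matrix(img, levels=16):
    """Horizontal gray-level transition probabilities of a 2D image."""
    q = (img * levels / (img.max() + 1)).astype(int)     # quantize gray levels
    pairs = np.stack([q[:, :-1].ravel(), q[:, 1:].ravel()])
    P = np.zeros((levels, levels))
    np.add.at(P, (pairs[0], pairs[1]), 1)                # count transitions
    return P / P.sum()                                   # normalize to probabilities

def texture_params(P, eps=1e-12):
    return {"energy": (P ** 2).sum(),
            "entropy": -(P * np.log(P + eps)).sum()}
```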
Alexander, Jeffrey A; Young, Gary J; Weiner, Bryan J; Hearld, Larry R
2008-04-01
Recent investigations into the activities of nonprofit hospitals have pointed to weak or lax governance on the part of some of these organizations. As a result of these events, various federal and state initiatives are now either under way or under discussion to strengthen the governance of hospitals and other nonprofit corporations through mandatory board structures and practices. However, despite policy makers' growing interest in these types of governance reforms, there is in fact little empirical evidence to support their contribution to the effectiveness of hospital boards. The purpose of this article is to report the results of a study examining the relationship between the structure and practices of nonprofit hospital boards relative to the hospital's provision of community benefits. Our results point to modest relationships between these sets of variables, suggesting considerable limitations to what federal and state policy makers can accomplish through legislative initiatives to improve the governance of nonprofit hospitals.
Simulation study into the identification of nuclear materials in cargo containers using cosmic rays
NASA Astrophysics Data System (ADS)
Blackwell, T. B.; Kudryavtsev, V. A.
2015-04-01
Muon tomography represents a new type of imaging technique that can be used in detecting high-Z materials. Monte Carlo simulations for muon scattering in different types of target materials are presented. The dependence of the detector's capability to identify high-Z targets on spatial resolution has been studied. Muon tracks are reconstructed using a basic point of closest approach (PoCA) algorithm. In this article we report the development of a secondary analysis algorithm that is applied to the reconstructed PoCA points. This algorithm efficiently ascertains clusters of voxels with high average scattering angles to identify 'areas of interest' within the inspected volume. Using this approach, the effects of other parameters, such as the distance between detectors and the number of detectors per set, on material identification are also presented. Finally, false positive and false negative rates for detecting shielded HEU in realistic scenarios with low-Z clutter are presented.
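The PoCA step is standard line geometry and fits in a few lines: given the incoming and outgoing muon tracks as (point, direction) pairs, find the closest points on the two lines, take their midpoint as the scattering vertex, and read the scattering angle from the direction vectors.

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Point of closest approach of two 3D lines (point, direction) and the
    scattering angle between their directions."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2          # a = c = 1 after normalizing
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                        # ~0 for (near-)parallel tracks
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    vertex = 0.5 * ((p1 + s * d1) + (p2 + t * d2))
    theta = np.arccos(np.clip(d1 @ d2, -1.0, 1.0))
    return vertex, theta
```

Binning the returned vertices into voxels and averaging theta per voxel yields exactly the scattering-angle map on which the secondary clustering algorithm above operates.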
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abgrall, N.; Arnquist, I. J.; Avignone, F. T.
Here, a search for Pauli-exclusion-principle-violating Kα electron transitions was performed using 89.5 kg-d of data collected with a p-type point contact high-purity germanium detector operated at the Kimballton Underground Research Facility. A lower limit on the transition lifetime of 5.8 × 10³⁰ s at 90% C.L. was set by looking for a peak at 10.6 keV resulting from the X-ray and Auger electrons present following the transition. A similar analysis was done to look for the decay of atomic K-shell electrons into neutrinos, resulting in a lower limit of 6.8 × 10³⁰ s at 90% C.L. It is estimated that the Majorana Demonstrator, a 44 kg array of p-type point contact detectors that will search for the neutrinoless double-beta decay of ⁷⁶Ge, could improve upon these exclusion limits by an order of magnitude after three years of operation.
Coulomb matrix elements in multi-orbital Hubbard models.
Bünemann, Jörg; Gebhard, Florian
2017-04-26
Coulomb matrix elements are needed in all studies in solid-state theory that are based on Hubbard-type multi-orbital models. Due to symmetries, the matrix elements are not independent. We determine a set of independent Coulomb parameters for a d-shell and an f-shell and all point groups with up to 16 elements (O_h, O, T_d, T_h, D_6h, and D_4h). Furthermore, we express all other matrix elements as a function of the independent Coulomb parameters. Apart from the solution of the general point-group problem, we investigate in detail the spherical approximation and first-order corrections to the spherical approximation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, T.A.
1990-01-01
A study undertaken on an Eocene-age coal bed in southeast Kalimantan, Indonesia determined that there was a relationship between megascopically determined coal types and the kinds and sizes of organic components. The study also concluded that the most efficient way to characterize the seam was to collect two 3 cm blocks from each layer or bench defined by megascopic character, and that a maximum of 125 point counts was needed on each block. Microscopic examination of uncrushed block samples showed the coal to be composed of plant parts and tissues set in a matrix of both fine-grained and amorphous material. The particulate matrix is composed of cell wall and liptinite fragments, resins, spores, algae, and fungal material. The amorphous matrix consists of unstructured (at 400x) huminite and liptinite. Size measurements showed that each particulate component possessed its own size distribution, which approached normality when transformed to a log₂ (phi) scale. Degradation of the plant material during peat accumulation probably controlled grain size in the coal types. This notion is further supported by the increased concentration of decay-resistant resin and cell fillings in the nonbanded and dull coal types. In the sampling design experiment, two blocks from each layer and two layers from each coal type were collected. On each block, 2 to 4 traverses totaling 500 point counts per block were performed to test the minimum number of points needed to characterize a block. A hierarchical analysis of variance showed that most of the petrographic variation occurred between coal types. The results from these analyses also indicated that, within a coal type, sampling should concentrate on the layer level and that only 250 point counts, split between two blocks, were needed to characterize a layer.
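The binomial sampling error of point counting makes the 250-points-per-layer recommendation concrete: for a component at true proportion p, counting N points gives a standard error of sqrt(p*(1-p)/N), so doubling the counting effort buys only a modest gain in precision.

```python
import math

def point_count_se(p, n):
    """Standard error of a point-counted proportion (binomial sampling)."""
    return math.sqrt(p * (1 - p) / n)

# worst case p = 0.5: SE ~ 3.2% at N = 250 vs ~ 2.2% at N = 500
print(point_count_se(0.5, 250), point_count_se(0.5, 500))
```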
A general scientific information system to support the study of climate-related data
NASA Technical Reports Server (NTRS)
Treinish, L. A.
1984-01-01
The development and use of NASA's Pilot Climate Data System (PCDS) are discussed. The PCDS is used as a focal point for managing and providing access to a large collection of actively used data for the Earth, ocean and atmospheric sciences. The PCDS provides uniform data catalogs, inventories, and access methods for selected NASA and non-NASA data sets. Scientific users can preview the data sets using graphical and statistical methods. The system has evolved from its original purpose as a climate data base management system in response to a national climate program, into an extensive package of capabilities to support many types of data sets from both spaceborne and surface based measurements with flexible data selection and analysis functions.
Scaling fixed-field alternating gradient accelerators with a small orbit excursion.
Machida, Shinji
2009-10-16
A novel scaling type of fixed-field alternating gradient (FFAG) accelerator is proposed that solves the major problems of conventional scaling and nonscaling types. This scaling FFAG accelerator can achieve a much smaller orbit excursion by taking a larger field index k. A triplet focusing structure makes it possible to set the operating point in the second stability region of Hill's equation with a reasonable sensitivity to various errors. The orbit excursion is about 5 times smaller than in a conventional scaling FFAG accelerator and the beam size growth due to typical errors is at most 10%.
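As a rough, self-contained illustration of the stability criterion at play (not the authors' FFAG lattice model), the sketch below integrates Hill's equation y'' + K(s)y = 0 over one period with a hypothetical focusing function and inspects the trace of the one-period transfer matrix; |trace| < 2 signals stable motion, and scanning the strength of K locates the successive stability regions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def transfer_matrix(k_func, period, n=2000):
    """One-period transfer matrix of Hill's equation y'' + K(s) y = 0."""
    def rhs(s, y):
        return [y[1], -k_func(s) * y[0]]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):        # two unit initial conditions
        sol = solve_ivp(rhs, (0.0, period), y0, max_step=period / n)
        cols.append(sol.y[:, -1])
    return np.array(cols).T

K = lambda s: 25.0 * np.cos(2.0 * np.pi * s)   # hypothetical focusing function
M = transfer_matrix(K, period=1.0)
trace = np.trace(M)
print("stable" if abs(trace) < 2.0 else "unstable", trace)
```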
NASA Astrophysics Data System (ADS)
Bakoban, Rana A.
2017-08-01
The coefficient of variation (CV) has several applications in applied statistics. In this paper, we adopt Bayesian and non-Bayesian approaches to the estimation of the CV under type-II censored data from the extension exponential distribution (EED). Point and interval estimates of the CV are obtained by both maximum likelihood and parametric bootstrap techniques. A Bayesian approach using an MCMC method is also presented. A real data set is presented and analyzed, and the results are used to assess the theoretical findings.
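A minimal sketch of the bootstrap part of such an analysis, on a complete (uncensored) synthetic sample rather than the paper's type-II censored EED data:

```python
import numpy as np

rng = np.random.default_rng(0)

def cv(x):
    """Sample coefficient of variation."""
    return np.std(x, ddof=1) / np.mean(x)

data = rng.exponential(scale=2.0, size=40)     # stand-in for the real data set

boot = np.array([cv(rng.choice(data, size=data.size, replace=True))
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"CV estimate {cv(data):.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```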
Clapham, Renee P; van As-Brooks, Corina J; van Son, Rob J J H; Hilgers, Frans J M; van den Brekel, Michiel W M
2015-07-01
To investigate the relationship between acoustic signal typing and perceptual evaluation of sustained vowels produced by tracheoesophageal (TE) speakers and the use of signal typing in the clinical setting. Two evaluators independently categorized 1.75-second segments of narrow-band spectrograms according to acoustic signal typing and independently evaluated the recording of the same segments on a visual analog scale according to overall perceptual acoustic voice quality. The relationship between acoustic signal typing and overall voice quality (as a continuous scale and as a four-point ordinal scale) was investigated and the proportion of inter-rater agreement as well as the reliability between the two measures is reported. The agreement between signal type (I-IV) and ordinal voice quality (four-point scale) was low but significant, and there was a significant linear relationship between the variables. Signal type correctly predicted less than half of the voice quality data. There was a significant main effect of signal type on continuous voice quality scores with significant differences in median quality scores between signal types I-IV, I-III, and I-II. Signal typing can be used as an adjunct to perceptual and acoustic evaluation of the same stimuli for TE speech as part of a multidimensional evaluation protocol. Signal typing in its current form provides limited predictive information on voice quality, and there is significant overlap between signal types II and III and perceptual categories. Future work should consider whether the current four signal types could be refined. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Relatives as spouses: preferences and opportunities for kin marriage in a Western society.
Bras, Hilde; Van Poppel, Frans; Mandemakers, Kees
2009-01-01
This article investigates the determinants of kin marriage on the basis of a large-scale database covering a major rural part of The Netherlands during the period 1840-1922. We studied three types of kin marriage: first cousin marriage, deceased spouse's sibling marriage, and sibling set exchange marriage. Almost 2% of all marriages were between first cousins, 0.85% concerned the sibling of a former spouse, while 4.14% were sibling set exchange marriages. While the first two types generally declined across the study period, sibling set exchange marriage reached a high point of almost 5% between 1890 and 1900. We found evidence for three mechanisms explaining the choice for relatives as spouses, centering both on preferences and on opportunities for kin marriage. Among the higher and middle strata and among farmers, kin marriages were commonly practiced and played an important role in the process of social class formation in the late nineteenth century. An increased choice for cousin marriage as a means of enculturation was observed among orthodox Protestants in the Bible Belt area of The Netherlands. Finally, all studied types of kin marriage took place more often in the relatively isolated, inland provinces of The Netherlands. Sibling set exchange marriages were a consequence of the enlarged supply of same-generation kin as a result of the demographic transition.
Method of Characteristic (MOC) Nozzle Flowfield Solver - User’s Guide and Input Manual
2013-01-01
Description: axisymmetric or planar calculation, selected by DELTA (value 0.0 = planar solution, 1.0 = axisymmetric solution; default 0.0). &INPUT namelist: NI, data type integer, ... angle error. Sample control-value settings from the manual:

  DELTA = 1.0   !1 axi, 0 planar (Mass flux not working correctly)
  NI = 81 ...
  DELTA = 1.0   !1 axi, 0 planar
  NI = 71       !NUMBER OF RADIAL POINTS ON INFLOW PLANE (Max 99)
  NT = 35       !NUMBER OF ...
Timoshenko-Type Theory in the Stability Analysis of Corrugated Cylindrical Shells
NASA Astrophysics Data System (ADS)
Semenyuk, N. P.; Neskhodovskaya, N. A.
2002-06-01
A technique is proposed for stability analysis of longitudinally corrugated shells under axial compression. The technique employs the equations of the Timoshenko-type nonlinear theory of shells. The geometrical parameters of shells are specified on a discrete set of points and are approximated by segments of Fourier series. Infinite systems of homogeneous algebraic equations are derived from a variational equation written in displacements to determine the critical loads and buckling modes. Specific types of corrugated isotropic metal and fiberglass shells are considered. The calculated results are compared with those obtained within the framework of the classical theory of shells. It is shown that the Timoshenko-type theory extends significantly the possibility of exact allowance for the geometrical parameters and material properties of corrugated shells compared with the Kirchhoff-Love theory.
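The Fourier-series approximation of a discretely sampled corrugation profile can be sketched in a few lines; the profile below is hypothetical, and NumPy's FFT is used to compute and truncate the series:

```python
import numpy as np

n = 256
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
# Hypothetical corrugation radius sampled at discrete circumferential stations
r = 1.0 + 0.05 * np.cos(8 * theta) + 0.01 * np.cos(16 * theta)

c = np.fft.rfft(r) / n        # complex Fourier coefficients of the profile
keep = 20                     # retain only the first 20 harmonics
c[keep:] = 0.0
r_approx = np.fft.irfft(c * n, n)

print("max truncation error:", np.max(np.abs(r - r_approx)))
```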
Statistical analysis of content of Cs-137 in soils in Bansko-Razlog region
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobilarov, R. G., E-mail: rkobi@tu-sofia.bg
Statistical analysis of the data set consisting of the activity concentrations of 137Cs in soils in the Bansko-Razlog region is carried out in order to establish the dependence of the deposition and migration of 137Cs on the soil type. The descriptive statistics and the test of normality show that the data set does not have a normal distribution. A positively skewed distribution and possible outlying values of the activity of 137Cs in soils were observed. After reduction of the effects of outliers, the data set is divided into two parts, depending on the soil type. Tests of normality of the two new data sets show that they have normal distributions. The ordinary kriging technique is used to characterize the spatial distribution of the activity of 137Cs over an area covering 40 km2 (the whole Razlog valley). The result (a map of the spatial distribution of the activity concentration of 137Cs) can be used as a reference point for future studies on the assessment of radiological risk to the population and of soil erosion in the study area.
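A minimal ordinary-kriging sketch in the spirit of this study, using the third-party pykrige package on hypothetical coordinates and activities (not the paper's data):

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # assumes the pykrige package is installed

rng = np.random.default_rng(1)
# Hypothetical sampling locations (km) and 137Cs activities (Bq/kg)
x, y = rng.uniform(0, 8, 50), rng.uniform(0, 5, 50)
z = 20.0 + 5.0 * np.sin(x) + rng.normal(0.0, 1.0, 50)

ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
gridx, gridy = np.arange(0, 8, 0.1), np.arange(0, 5, 0.1)
zmap, variance = ok.execute("grid", gridx, gridy)  # prediction map + kriging variance
print(zmap.shape)
```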
Representing Simple Geometry Types in NetCDF-CF
NASA Astrophysics Data System (ADS)
Blodgett, D. L.; Koziol, B. W.; Whiteaker, T. L.; Simons, R.
2016-12-01
The Climate and Forecast (CF) metadata convention is well-suited for representing gridded and point-based observational datasets. However, CF currently has no accepted mechanism for representing simple geometry types such as lines and polygons. Lack of support for simple geometries within CF has unintentionally excluded a broad set of geoscientific data types from NetCDF-CF data encodings. For example, hydrologic datasets often contain polygon watershed catchments and polyline stream reaches in addition to point sampling stations and water management infrastructure; of these, only point data has an associated CF specification. In the interest of supporting all simple geometry types within CF, a working group was formed following an EarthCube workshop on Advancing NetCDF-CF [1] to draft a CF specification for simple geometries: points, lines, polygons, and their associated multi-geometry representations [2]. The draft also includes parametric geometry types such as circles and ellipses. This presentation will provide an overview of the scope and content of the proposed specification, focusing on mechanisms for representing coordinate arrays using variable-length or continuous ragged arrays, capturing multi-geometries, and accounting for type-specific geometry artifacts such as polygon holes/interiors, node ordering, etc. The concepts contained in the specification proposal will be described with a use case representing streamflow in rivers and evapotranspiration from HUC12 watersheds. We will also introduce Python and R reference implementations developed alongside the technical specification. These in-development, open-source Python and R libraries convert between commonly used GIS software objects (i.e., GEOS-based primitives) and their associated simple-geometry CF representation. [1] http://www.unidata.ucar.edu/events/2016CFWorkshop/ [2] https://github.com/bekozi/netCDF-CF-simple-geometry
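A minimal sketch of the continuous ragged-array idea for two polygons, written with the netCDF4-python package; the variable names (node_count, x_nodes, y_nodes) are illustrative placeholders, not necessarily those adopted by the draft specification:

```python
from netCDF4 import Dataset  # assumes the netCDF4-python package

ds = Dataset("polygons.nc", "w")
ds.createDimension("instance", 2)   # two watershed polygons
ds.createDimension("node", 7)       # total node count across all polygons

# Continuous ragged array: node coordinates of all polygons laid end to
# end, with a per-instance count giving each polygon's number of nodes.
count = ds.createVariable("node_count", "i4", ("instance",))
xn = ds.createVariable("x_nodes", "f8", ("node",))
yn = ds.createVariable("y_nodes", "f8", ("node",))

count[:] = [4, 3]                       # a quadrilateral and a triangle
xn[:] = [0.0, 1.0, 1.0, 0.0, 2.0, 3.0, 2.5]
yn[:] = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0]
ds.close()
```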
Hierarchical extraction of urban objects from mobile laser scanning data
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Zhao, Gang; Dai, Wenxia
2015-01-01
Point clouds collected in urban scenes contain a huge number of points (e.g., billions), numerous objects with significant size variability, complex and incomplete structures, and variable point densities, raising great challenges for the automated extraction of urban objects in the field of photogrammetry, computer vision, and robotics. This paper addresses these challenges by proposing an automated method to extract urban objects robustly and efficiently. The proposed method generates multi-scale supervoxels from 3D point clouds using the point attributes (e.g., colors, intensities) and spatial distances between points, and then segments the supervoxels rather than individual points by combining graph based segmentation with multiple cues (e.g., principal direction, colors) of the supervoxels. The proposed method defines a set of rules for merging segments into meaningful units according to types of urban objects and forms the semantic knowledge of urban objects for the classification of objects. Finally, the proposed method extracts and classifies urban objects in a hierarchical order ranked by the saliency of the segments. Experiments show that the proposed method is efficient and robust for extracting buildings, streetlamps, trees, telegraph poles, traffic signs, cars, and enclosures from mobile laser scanning (MLS) point clouds, with an overall accuracy of 92.3%.
NASA Astrophysics Data System (ADS)
Jordan, C. H.; Murray, S.; Trott, C. M.; Wayth, R. B.; Mitchell, D. A.; Rahimi, M.; Pindor, B.; Procopio, P.; Morgan, J.
2017-11-01
We detail new techniques for analysing ionospheric activity, using Epoch of Reionization data sets obtained with the Murchison Widefield Array, calibrated by the 'real-time system' (RTS). Using the high spatial- and temporal-resolution information of the ionosphere provided by the RTS calibration solutions over 19 nights of observing, we find four distinct types of ionospheric activity, and have developed a metric to provide an 'at a glance' value for data quality under differing ionospheric conditions. For each ionospheric type, we analyse variations of this metric as we reduce the number of pierce points, revealing that a modest number of pierce points is required to identify the intensity of ionospheric activity; it is possible to calibrate in real time, providing continuous information on the phase screen. We also analyse temporal correlations, determine diffractive scales, examine the relative fractions of time occupied by various types of ionospheric activity and detail a method to reconstruct the total electron content responsible for the ionospheric data we observe. These techniques have been developed to be instrument agnostic, useful for application on the LOw Frequency ARray and the Square Kilometre Array-Low.
NASA Astrophysics Data System (ADS)
Li, Youping; Lu, Jinsong; Cheng, Jian; Yin, Yongzhen; Wang, Jianlan
2017-04-01
Based on a summary of the rules for vibration measurement of hydro-generator sets in the relevant standards, the key issues of vibration measurement, such as measurement modes and transducer selection, are illustrated. In addition, problems existing in vibration measurement are pointed out. Actual acquisition data of head-cover vertical vibration, obtained by a seismic transducer and an eddy current transducer in on-site hydraulic turbine performance tests during the rise of the upstream reservoir level at a hydraulic power plant, are compared. The difference between the data obtained by the two types of transducers and the potential reasons are presented. The application conditions of seismic transducers and eddy current transducers for hydro-generator set vibration measurement are given based on the analysis. Research subjects that should be the focus of future work on the topic discussed in this paper are suggested.
El Youssef, Joseph; Bakhtiani, Parkash A.; Cai, Yu; Stobbe, Jade M.; Branigan, Deborah; Ramsey, Katrina; Jacobs, Peter; Reddy, Ravi; Woods, Mark; Ward, W. Kenneth
2015-01-01
OBJECTIVE To evaluate subjects with type 1 diabetes for hepatic glycogen depletion after repeated doses of glucagon, simulating delivery in a bihormonal closed-loop system. RESEARCH DESIGN AND METHODS Eleven adult subjects with type 1 diabetes participated. Subjects underwent estimation of hepatic glycogen using 13C MRS. MRS was performed at the following four time points: fasting and after a meal at baseline, and fasting and after a meal after eight doses of subcutaneously administered glucagon at a dose of 2 µg/kg, for a total mean dose of 1,126 µg over 16 h. The primary and secondary end points were, respectively, estimated hepatic glycogen by MRS and incremental area under the glucose curve for a 90-min interval after glucagon administration. RESULTS In the eight subjects with complete data sets, estimated glycogen stores were similar at baseline and after repeated glucagon doses. In the fasting state, glycogen averaged 21 ± 3 g/L before glucagon administration and 25 ± 4 g/L after glucagon administration (mean ± SEM) (P = NS). In the fed state, glycogen averaged 40 ± 2 g/L before glucagon administration and 34 ± 4 g/L after glucagon administration (P = NS). With the use of an insulin action model, the rise in glucose after the last dose of glucagon was comparable to the rise after the first dose, as measured by the 90-min incremental area under the glucose curve. CONCLUSIONS In adult subjects with well-controlled type 1 diabetes (mean A1C 7.2%), glycogen stores and the hyperglycemic response to glucagon administration are maintained even after receiving multiple doses of glucagon. This finding supports the safety of repeated glucagon delivery in the setting of a bihormonal closed-loop system. PMID:26341131
Jadoon, Khalid A; Ratcliffe, Stuart H; Barrett, David A; Thomas, E Louise; Stott, Colin; Bell, Jimmy D; O'Sullivan, Saoirse E; Tan, Garry D
2016-10-01
Cannabidiol (CBD) and Δ9-tetrahydrocannabivarin (THCV) are nonpsychoactive phytocannabinoids affecting lipid and glucose metabolism in animal models. This study set out to examine the effects of these compounds in patients with type 2 diabetes. In this randomized, double-blind, placebo-controlled study, 62 subjects with noninsulin-treated type 2 diabetes were randomized to five treatment arms: CBD (100 mg twice daily), THCV (5 mg twice daily), 1:1 ratio of CBD and THCV (5 mg/5 mg, twice daily), 20:1 ratio of CBD and THCV (100 mg/5 mg, twice daily), or matched placebo for 13 weeks. The primary end point was a change in HDL-cholesterol concentrations from baseline. Secondary/tertiary end points included changes in glycemic control, lipid profile, insulin sensitivity, body weight, liver triglyceride content, adipose tissue distribution, appetite, markers of inflammation, markers of vascular function, gut hormones, circulating endocannabinoids, and adipokine concentrations. Safety and tolerability end points were also evaluated. Compared with placebo, THCV significantly decreased fasting plasma glucose (estimated treatment difference [ETD] = -1.2 mmol/L; P < 0.05) and improved pancreatic β-cell function (HOMA2 β-cell function [ETD = -44.51 points; P < 0.01]), adiponectin (ETD = -5.9 × 10^6 pg/mL; P < 0.01), and apolipoprotein A (ETD = -6.02 μmol/L; P < 0.05), although plasma HDL was unaffected. Compared with baseline (but not placebo), CBD decreased resistin (-898 pg/mL; P < 0.05) and increased glucose-dependent insulinotropic peptide (21.9 pg/mL; P < 0.05). None of the combination treatments had a significant impact on end points. CBD and THCV were well tolerated. THCV could represent a new therapeutic agent in glycemic control in subjects with type 2 diabetes. © 2016 by the American Diabetes Association.
Embolic Strokes of Undetermined Source in the Athens Stroke Registry: An Outcome Analysis.
Ntaios, George; Papavasileiou, Vasileios; Milionis, Haralampos; Makaritsis, Konstantinos; Vemmou, Anastasia; Koroboki, Eleni; Manios, Efstathios; Spengos, Konstantinos; Michel, Patrik; Vemmos, Konstantinos
2015-08-01
Information about outcomes in Embolic Stroke of Undetermined Source (ESUS) patients is unavailable. This study provides a detailed analysis of outcomes of a large ESUS population. The data set was derived from the Athens Stroke Registry. ESUS was defined according to the Cryptogenic Stroke/ESUS International Working Group criteria. End points were mortality, stroke recurrence, functional outcome, and a composite cardiovascular end point comprising recurrent stroke, myocardial infarction, aortic aneurysm rupture, systemic embolism, or sudden cardiac death. We performed Kaplan-Meier analyses to estimate cumulative probabilities of outcomes by stroke type and Cox regression to investigate whether stroke type was an outcome predictor. 2731 patients were followed up for a mean of 30.5±24.1 months. There were 73 (26.5%) deaths, 60 (21.8%) recurrences, and 78 (28.4%) composite cardiovascular end points in the 275 ESUS patients. The cumulative probability of survival in ESUS was 65.6% (95% confidence intervals [CI], 58.9%-72.2%), significantly higher compared with cardioembolic stroke (38.8%, 95% CI, 34.9%-42.7%). The cumulative probability of stroke recurrence in ESUS was 29.0% (95% CI, 22.3%-35.7%), similar to cardioembolic strokes (26.8%, 95% CI, 22.1%-31.5%), but significantly higher compared with all types of noncardioembolic stroke. One hundred seventy-two (62.5%) ESUS patients had favorable functional outcome compared with 280 (32.2%) in cardioembolic and 303 (60.9%) in large-artery atherosclerotic. ESUS patients had similar risk of the composite cardiovascular end point as all other stroke types, with the exception of lacunar strokes, which had significantly lower risk (adjusted hazard ratio, 0.70 [95% CI, 0.52-0.94]). Long-term mortality risk in ESUS is lower compared with cardioembolic strokes, despite similar rates of recurrence and composite cardiovascular end point. Recurrent stroke risk is higher in ESUS than in noncardioembolic strokes. © 2015 American Heart Association, Inc.
Functional Test on (TES) Thermal Enclosure System
NASA Technical Reports Server (NTRS)
1992-01-01
MSFC Test Engineer performing a functional test on the TES. The TES can be operated as a refrigerator, with a minimum set point temperature of 4.0 degrees C, or as an incubator, with a maximum set point temperature of 40.0 degrees C. The TES can be set to maintain a constant temperature or programmed to change temperature settings over time, with the internal temperature recorded by a data logger.
NASA Astrophysics Data System (ADS)
Turrini, Paolo; Grossi, Davide; Broersen, Jan; Meyer, John-Jules Ch.
The purpose of this contribution is to set up a language to evaluate the results of concerted action among interdependent agents against predetermined properties that we can recognise as desirable from a deontic point of view. Unlike the standard view of logics for reasoning about coalitionally rational action, the capacity of a set of agents to take a rational decision will be restricted to what we will call agreements, which can be seen as solution concepts for a dependence structure present in a certain game. The language will identify in concise terms those agreements that accord or conflict with the desirable properties set out in the beginning, and will reveal, by logical reasoning, a variety of structural properties of this type of collective action.
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft's Photosynth service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for properties of the imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479
Radiometric Normalization of Large Airborne Image Data Sets Acquired by Different Sensor Types
NASA Astrophysics Data System (ADS)
Gehrke, S.; Beshah, B. T.
2016-06-01
Generating seamless mosaics of aerial images is a particularly challenging task when the mosaic comprises a large number of images, collected over longer periods of time and with different sensors under varying imaging conditions. Such large mosaics typically consist of very heterogeneous image data, both spatially (different terrain types and atmosphere) and temporally (unstable atmospheric properties and even changes in land coverage). We present a new radiometric normalization or, respectively, radiometric aerial triangulation approach that takes advantage of our knowledge about each sensor's properties. The current implementation supports medium and large format airborne imaging sensors of the Leica Geosystems family, namely the ADS line-scanner as well as DMC and RCD frame sensors. A hierarchical modelling - with parameters for the overall mosaic, the sensor type, different flight sessions, strips and individual images - allows for adaptation to each sensor's geometric and radiometric properties. Additional parameters at different hierarchy levels can compensate radiometric differences of various origins to compensate for shortcomings of the preceding radiometric sensor calibration as well as BRDF and atmospheric corrections. The final, relative normalization is based on radiometric tie points in overlapping images, absolute radiometric control points and image statistics. It is computed in a global least squares adjustment for the entire mosaic by altering each image's histogram using a location-dependent mathematical model. This model involves contrast and brightness corrections at radiometric fix points with bilinear interpolation for corrections in-between. The distribution of the radiometry fixes is adaptive to each image and generally increases with image size, hence enabling optimal local adaptation even for very long image strips as typically captured by a line-scanner sensor. The normalization approach is implemented in HxMap software. It has been successfully applied to large sets of heterogeneous imagery, including the adjustment of original sensor images prior to quality control and further processing as well as radiometric adjustment for ortho-image mosaic generation.
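The relative normalization can be caricatured as a global least-squares problem. The sketch below is a deliberate simplification - one gain/offset pair per image instead of the paper's location-dependent model with bilinear interpolation - fitted to hypothetical radiometric tie points:

```python
import numpy as np

# Tie points (image_i, value_i, image_j, value_j): hypothetical digital
# numbers of the same ground patch seen in two overlapping images.
ties = [(0, 100.0, 1, 120.0), (0, 150.0, 1, 171.0),
        (1, 80.0, 2, 95.0), (1, 200.0, 2, 226.0)]
n_img = 3

# Unknowns: gain a_i and offset b_i per image. Image 0 is held fixed
# (a_0 = 1, b_0 = 0) to remove the global gauge freedom.
rows, rhs = [], []
for i, vi, j, vj in ties:
    row = np.zeros(2 * n_img)
    row[2 * i], row[2 * i + 1] = vi, 1.0     # +(a_i * vi + b_i)
    row[2 * j], row[2 * j + 1] = -vj, -1.0   # -(a_j * vj + b_j)
    rows.append(row)
    rhs.append(0.0)
for col, val in ((0, 1.0), (1, 0.0)):        # gauge constraints on image 0
    row = np.zeros(2 * n_img)
    row[col] = 1.0
    rows.append(row)
    rhs.append(val)

sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(sol.reshape(n_img, 2))                 # [gain, offset] per image
```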
Chien, Ming-Nan; Chen, Yen-Ling; Hung, Yi-Jen; Wang, Shu-Yi; Lu, Wen-Tsung; Chen, Chih-Hung; Lin, Ching-Ling; Huang, Tze-Pao; Tsai, Ming-Han; Tseng, Wei-Kung; Wu, Ta-Jen; Ho, Cheng; Lin, Wen-Yu; Chen, Bill; Chuang, Lee-Ming
2016-11-01
The aim of the present study was to assess the glycemic control, adherence and treatment satisfaction in a real-world setting with basal insulin therapy in type 2 diabetes patients in Taiwan. This was a multicenter, prospective, observational registry. A total of 836 patients with type 2 diabetes taking oral antidiabetic drugs with glycated hemoglobin (HbA1c) >7% entered the study. Basal insulin was given for 24 weeks. All treatment choices and medical instructions were at the physician's discretion to reflect real-life practice. After 24-week treatment, 11.7% of patients reached set HbA1c goals without severe hypoglycemia (primary effectiveness end-point). HbA1c and fasting blood glucose were significantly decreased from (mean ± SD) 10.1 ± 1.9% to 8.7 ± 1.7% (-1.4 ± 2.1%, P < 0.0001) and from 230.6 ± 68.8 mg/dL to 159.1 ± 55.6 mg/dL (-67.4 ± 72.3 mg/dL, P < 0.0001), respectively. Patients received insulin therapy at a frequency of nearly one shot per day on average, whereas self-monitoring of blood glucose was carried out approximately four times a week. Hypoglycemia was reported by 11.4% of patients, and only 0.7% of patients experienced severe hypoglycemia. Slight changes in weight (0.7 ± 2.4 kg) and a low incidence of adverse drug reactions (0.4%) were also noted. The score of 7-point treatment satisfaction rated by patients was significantly improved by 1.9 ± 1.7 (P < 0.0001). Basal insulin therapy was associated with a decrease in HbA1c and fasting blood glucose, and an improved treatment satisfaction. Most patients complied with physicians' instructions. The treatment was generally well tolerated by patients with type 2 diabetes, but findings pointed out the need to reinforce the early and appropriate uptitration to achieve treatment targets. © 2016 The Authors. Journal of Diabetes Investigation published by Asian Association for the Study of Diabetes (AASD) and John Wiley & Sons Australia, Ltd.
Gap-minimal systems of notations and the constructible hierarchy
NASA Technical Reports Server (NTRS)
Lucian, M. L.
1972-01-01
If a constructibly countable ordinal alpha is a gap ordinal, then the order type of the set of index ordinals smaller than alpha is exactly alpha. The gap ordinals are the only points of discontinuity of a certain ordinal-valued function. The notion of gap minimality for well ordered systems of notations is defined, and the existence of gap-minimal systems of notations of arbitrarily large constructibly countable length is established.
Rational Approximations with Hankel-Norm Criterion
1980-01-01
[OCR fragment of a DTIC report cover form; only the following is recoverable.] Rational Approximations with Hankel-Norm Criterion, Y. Genin, Philips Research Lab.: the problem is proved to be reducible to obtaining a two-variable all-pass rational function, interpolating a set of parametric values at specified points inside ...
NIF Ignition Target 3D Point Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, O; Marinak, M; Milovich, J
2008-11-05
We have developed an input file for running 3D NIF hohlraums that is optimized such that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) configuration-controlled input files; (2) a common file for 2D and 3D and for different types of capsules (symcap, etc.); and (3) the ability to obtain target dimensions, laser pulse, and diagnostics settings automatically from the NIF Campaign Management Tool. We are using 3D Hydra calculations to investigate different problems: (1) intrinsic 3D asymmetry; (2) tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) synthetic diagnostics.
Haneda, Masakazu; Koya, Daisuke; Kondo, Keiko; Tanaka, Sachiko; Arima, Hisatomi; Kume, Shinji; Nakazawa, Jun; Chin-Kanasaki, Masami; Ugi, Satoshi; Kawai, Hiromichi; Araki, Hisazumi; Uzu, Takashi; Maegawa, Hiroshi
2015-01-01
Background and objectives We investigated the association of urinary potassium and sodium excretion with the incidence of renal failure and cardiovascular disease in patients with type 2 diabetes. Design, setting, participants, & measurements A total of 623 Japanese type 2 diabetic patients with eGFR≥60 ml/min per 1.73 m2 were enrolled in this observational follow-up study between 1996 and 2003 and followed-up until 2013. At baseline, a 24-hour urine sample was collected to estimate urinary potassium and sodium excretion. The primary end point was renal and cardiovascular events (RRT, myocardial infarction, angina pectoris, stroke, and peripheral vascular disease). The secondary renal end points were the incidence of a 50% decline in eGFR, progression to CKD stage 4 (eGFR<30 ml/min per 1.73 m2), and the annual decline rate in eGFR. Results During the 11-year median follow-up period, 134 primary end points occurred. Higher urinary potassium excretion was associated with lower risk of the primary end point, whereas urinary sodium excretion was not. The adjusted hazard ratios for the primary end point in Cox proportional hazards analysis were 0.56 (95% confidence interval [95% CI], 0.33 to 0.95) in the third quartile of urinary potassium excretion (2.33–2.90 g/d) and 0.33 (95% CI, 0.18 to 0.62) in the fourth quartile (>2.90 g/d) compared with the lowest quartile (<1.72 g/d). Similar associations were observed for the secondary renal end points. The annual decline rate in eGFR in the fourth quartile of urinary potassium excretion (–1.3 ml/min per 1.73 m2/y; 95% CI, –1.5 to –1.0) was significantly slower than those in the first quartile (–2.2; 95% CI, –2.4 to –1.8). Conclusions Higher urinary potassium excretion was associated with the slower decline of renal function and the lower incidence of cardiovascular complications in type 2 diabetic patients with normal renal function. Interventional trials are necessary to determine whether increasing dietary potassium is beneficial. PMID:26563378
Liberato, Selma C; Bailie, Ross; Brimblecombe, Julie
2014-09-05
Point-of-sale is a potentially important opportunity to promote healthy eating through nutrition education and environment modification. The aim of this review was to describe and review the evidence of effectiveness of various types of interventions that have been used at point-of-sale to encourage purchase and/or eating of healthier food and to improve health outcomes, and the extent to which effectiveness was related to intensity, duration and intervention setting. Records from searches in databases were screened and assessed against inclusion criteria. Included studies had their risk of bias assessed. Intervention effectiveness was assessed for two outcomes: i) purchase and/or intake of healthier food options and/or nutrient intake; and ii) mediating factors that might affect the primary outcome. The search identified 5635 references. Thirty-two papers met the inclusion criteria. Twelve studies had low risk of bias and were classified as strong, nine were moderate and 11 were weak. Six intervention types and a range of different outcome measures were described in these papers: i) nutrition education and promotion alone through supermarkets/stores; ii) nutrition education plus enhanced availability of healthy food; iii) monetary incentive alone; iv) nutrition education plus monetary incentives; v) nutrition intervention through vending machines; and vi) nutrition intervention through shopping online. The evidence of this review indicates that monetary incentives offered to customers over the short term look promising for increasing purchase of healthier food options when the intervention is applied by itself in stores or supermarkets. There was a lack of good quality studies addressing all other types of relevant point-of-sale interventions examining change in purchase and/or intake of healthier food options. There were few studies that examined factors that might mediate the effect of relevant interventions on the primary outcomes. A range of intervention types have been used at point-of-sale to encourage healthy purchasing and/or intake of healthier food options and to improve health outcomes. There is a need for more well-designed studies on the effectiveness of a range of point-of-sale interventions to encourage healthier eating and improve health outcomes, and of the mediating factors that might impact these interventions.
Some spectral approximation of one-dimensional fourth-order problems
NASA Technical Reports Server (NTRS)
Bernardi, Christine; Maday, Yvon
1989-01-01
Spectral-type collocation methods well suited to the approximation of fourth-order systems are proposed. The model problem is the biharmonic equation, in one and two dimensions, when the boundary conditions are periodic in one direction. It is proved that the standard Gauss-Lobatto nodes are not the best choice for the collocation points. Then, a new set of nodes related to some generalized Gauss-type quadrature formulas is proposed. A complete analysis of these formulas is also provided, including some new results on the asymptotic behavior of the weights, and these results are applied to the analysis of the collocation method.
An efficient, scalable, and adaptable framework for solving generic systems of level-set PDEs
Mosaliganti, Kishore R.; Gelas, Arnaud; Megason, Sean G.
2013-01-01
In the last decade, level-set methods have been actively developed for applications in image registration, segmentation, tracking, and reconstruction. However, the development of a wide variety of level-set PDEs and their numerical discretization schemes, coupled with hybrid combinations of PDE terms, stopping criteria, and reinitialization strategies, has created a software logistics problem. In the absence of an integrative design, current toolkits support only specific types of level-set implementations which restrict future algorithm development since extensions require significant code duplication and effort. In the new NIH/NLM Insight Toolkit (ITK) v4 architecture, we implemented a level-set software design that is flexible to different numerical (continuous, discrete, and sparse) and grid representations (point, mesh, and image-based). Given that a generic PDE is a summation of different terms, we used a set of linked containers to which level-set terms can be added or deleted at any point in the evolution process. This container-based approach allows the user to explore and customize terms in the level-set equation at compile-time in a flexible manner. The framework is optimized so that repeated computations of common intensity functions (e.g., gradient and Hessians) across multiple terms is eliminated. The framework further enables the evolution of multiple level-sets for multi-object segmentation and processing of large datasets. For doing so, we restrict level-set domains to subsets of the image domain and use multithreading strategies to process groups of subdomains or level-set functions. Users can also select from a variety of reinitialization policies and stopping criteria. Finally, we developed a visualization framework that shows the evolution of a level-set in real-time to help guide algorithm development and parameter optimization. We demonstrate the power of our new framework using confocal microscopy images of cells in a developing zebrafish embryo. PMID:24501592
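A toy version of the container-based design, in Python rather than ITK's C++ and using none of ITK's actual API: the right-hand side of the level-set PDE is assembled as a sum over pluggable terms held in a container, so terms can be added or removed without touching the evolver.

```python
import numpy as np

class LevelSetEvolver:
    """Sum-of-terms right-hand side held in a pluggable container."""

    def __init__(self):
        self.terms = []                      # list of (weight, callable)

    def add_term(self, weight, term):
        self.terms.append((weight, term))

    def step(self, phi, dt=0.1):
        rhs = sum(w * t(phi) for w, t in self.terms)
        return phi + dt * rhs

def curvature_like(phi):
    # Crude 5-point Laplacian standing in for a mean-curvature term.
    return (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
            np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)

evolver = LevelSetEvolver()
evolver.add_term(1.0, curvature_like)        # terms can be added or removed freely
phi = np.random.default_rng(2).normal(size=(64, 64))
for _ in range(10):
    phi = evolver.step(phi)
```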
Gong, Chunmei; Yang, Bin; Shi, Yarong; Liu, Zhongqiong; Wan, Lili; Zhang, Hong; Jiang, Denghua; Zhang, Lian
2016-08-01
Objectives The aim of this study was to investigate factors affecting ablative efficiency of high intensity focused ultrasound (HIFU) for adenomyosis. Materials and methods In all, 245 patients with adenomyosis who underwent ultrasound guided HIFU (USgHIFU) were retrospectively reviewed. All patients underwent dynamic contrast-enhanced magnetic resonance imaging (MRI) before and after HIFU treatment. The non-perfused volume (NPV) ratio, energy efficiency factor (EEF) and greyscale change were set as dependent variables, while the factors possibly affecting ablation efficiency were set as independent variables. These variables were used to build multiple regression models. Results A total of 245 patients with adenomyosis successfully completed HIFU treatment. Enhancement type on T1 weighted image (WI), abdominal wall thickness, volume of adenomyotic lesion, the number of hyperintense points, location of the uterus, and location of adenomyosis all had a linear relationship with the NPV ratio. Distance from skin to the adenomyotic lesion's ventral side, enhancement type on T1WI, volume of adenomyotic lesion, abdominal wall thickness, and signal intensity on T2WI all had a linear relationship with EEF. Location of the uterus and abdominal wall thickness also both had a linear relationship with greyscale change. Conclusion The enhancement type on T1WI, signal intensity on T2WI, volume of adenomyosis, location of the uterus and adenomyosis, number of hyperintense points, abdominal wall thickness, and distance from the skin to the adenomyotic lesion's ventral side can all be used as predictors of HIFU for adenomyosis.
Structure and spectral features of H+(H2O)7: Eigen versus Zundel forms
NASA Astrophysics Data System (ADS)
Shin, Ilgyou; Park, Mina; Min, Seung Kyu; Lee, Eun Cheol; Suh, Seung Bum; Kim, Kwang S.
2006-12-01
The two dimensional (2D) to three dimensional (3D) transition for the protonated water cluster has been controversial, in particular, for H+(H2O)7. For H+(H2O)7 the 3D structure is predicted to be lower in energy than the 2D structure at most levels of theory without zero-point energy (ZPE) correction. On the other hand, with ZPE correction it is predicted to be either 2D or 3D depending on the calculational levels. Although the ZPE correction favors the 3D structure at the level of coupled cluster theory with singles, doubles, and perturbative triples excitations [CCSD(T)] using the aug-cc-pVDZ basis set, the result based on the anharmonic zero-point vibrational energy correction favors the 2D structure. Therefore, the authors investigated the energies based on the complete basis set limit scheme (which we devised in an unbiased way) at the resolution of the identity approximation Møller-Plesset second order perturbation theory and CCSD(T) levels, and found that the 2D structure has the lowest energy for H+(H2O)7 [though nearly isoenergetic to the 3D structure for D+(D2O)7]. This structure has the Zundel-type configuration, but it shows the quantum probabilistic distribution including some of the Eigen-type configuration. The vibrational spectra of MP2/aug-cc-pVDZ calculations and Car-Parrinello molecular dynamics simulations, taking into account the thermal and dynamic effects, show that the 2D Zundel-type form is in good agreement with experiments.
Reynolds, Teri Ann; Amato, Stas; Kulola, Irene; Chen, Chuan-Jay Jeffrey; Mfinanga, Juma; Sawe, Hendry Robert
2018-01-01
Point of care ultrasound (PoCUS) is an efficient, inexpensive, safe, and portable imaging modality that can be particularly useful in resource-limited settings. However, its impact on clinical decision making in such settings has not been well studied. The objective of this study is to describe the utilization and impact of PoCUS on clinical decision making at an urban emergency department in Dar es Salaam, Tanzania. This was a prospective descriptive cross-sectional study of patients receiving PoCUS at Muhimbili National Hospital's Emergency Medical Department (MNH EMD). Data on PoCUS studies during a period of 10 months at MNH EMD was collected on consecutive patients during periods when research assistants were available. Data collected included patient age and sex, indications for ultrasound, findings, interpretations, and provider-reported diagnostic impression and disposition plan before and after PoCUS. Descriptive statistics, including medians and interquartile ranges, and counts and percentages, are reported. Pearson chi squared tests and p-values were used to evaluate categorical data for significant differences. PoCUS data was collected for 986 studies performed on 784 patients. Median patient age was 32 years; 56% of patients were male. Top indications for PoCUS included trauma, respiratory presentations, and abdomino-pelvic pain. The most frequent study types performed were eFAST, cardiac, and obstetric or gynaecologic studies. Overall, clinicians reported that the use of PoCUS changed either diagnostic impression or disposition plan in 29% of all cases. Rates of change in diagnostic impression or disposition plan increased to 45% in patients for whom more than one PoCUS study type was performed. In resource-limited emergency care settings, PoCUS can be utilized for a wide range of indications and has substantial impact on clinical decision making, especially when more than one study type is performed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinho, Graca; Pires, Ana, E-mail: ana.lourenco.pires@gmail.com; Saraiva, Luanha
Highlights:
• The article shows WEEE plastics characterization from a recycling unit in Portugal.
• The recycling unit has little machinery, with hand sorting of plastic elements.
• The most common polymers are PS, ABS, PC/ABS, HIPS and PP.
• Most plastics found have no identification of plastic type or flame retardants.
• Ecodesign is still not practiced for EEE, with repercussions at the end-of-life stage.
Abstract: This paper describes a direct analysis study carried out in a recycling unit for waste electrical and electronic equipment (WEEE) in Portugal to characterize the plastic constituents of WEEE. Approximately 3400 items, including cooling appliances, small WEEE, printers, copying equipment, central processing units, cathode ray tube (CRT) monitors and CRT televisions were characterized, with the analysis finding around 6000 kg of plastics with several polymer types. The most common polymers are polystyrene, acrylonitrile-butadiene-styrene, polycarbonate blends, high-impact polystyrene and polypropylene. Additives to darken color are common contaminants in these plastics when used in CRT televisions and small WEEE. These additives can make plastic identification difficult, along with missing polymer identification and flame retardant identification marks. These drawbacks contribute to the inefficiency of manual dismantling of WEEE, which is the typical recycling process in Portugal. The information found here can be used to set a baseline for the plastics recycling industry and provide information for ecodesign in electrical and electronic equipment production.
Kuu, Wei Y; Nail, Steven L; Sacha, Gregory
2009-03-01
The purpose of this study was to perform a rapid determination of vial heat transfer parameters, that is, the contact parameter K(cs) and the separation distance l(v), using the sublimation rate profiles measured by tunable diode laser absorption spectroscopy (TDLAS). In this study, each size of vial was filled with pure water followed by a freeze-drying cycle using a LyoStar II dryer (FTS Systems) with step-changes of the chamber pressure set-point to 25, 50, 100, 200, 300, and 400 mTorr. K(cs) was independently determined by nonlinear parameter estimation using the sublimation rates measured at the pressure set-point of 25 mTorr. After obtaining K(cs), the l(v) value for each vial size was determined by nonlinear parameter estimation using the pooled sublimation rate profiles obtained at 25 to 400 mTorr. The vial heat transfer coefficient K(v), as a function of the chamber pressure, was readily calculated using the obtained K(cs) and l(v) values. It is interesting to note the significant difference in K(v) of two similar types of 10 mL Schott tubing vials, primarily due to the geometry of the vial bottom, as demonstrated by the images of the contact areas of the vial bottom. (c) 2008 Wiley-Liss, Inc. and the American Pharmacists Association
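Assuming a commonly used saturating pressure dependence for the vial heat transfer coefficient - a pressure-independent floor plus a gas-conduction term - the parameters can be estimated by nonlinear least squares; the functional form and the numbers below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def kv_model(p, kcs, a, b):
    """Pressure-independent floor plus saturating gas-conduction term."""
    return kcs + a * p / (1.0 + b * p)

# Illustrative chamber pressures (Torr) and heat transfer coefficients
p = np.array([0.025, 0.05, 0.1, 0.2, 0.3, 0.4])
kv = np.array([3.1, 4.0, 5.4, 7.2, 8.3, 9.0])

(kcs, a, b), _ = curve_fit(kv_model, p, kv, p0=(3.0, 30.0, 3.0))
print(f"Kcs = {kcs:.2f}, a = {a:.1f}, b = {b:.1f}")
```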
Das, Saptarshi; Pan, Indranil; Das, Shantanu
2013-07-01
Fuzzy logic based PID controllers have been studied in this paper, considering several combinations of hybrid controllers by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. Fractional order (FO) rate of error signal and FO integral of control signal have been used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF) along with the integro-differential operators are tuned with real coded genetic algorithm (GA) to produce optimum closed loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional order oscillatory processes with various levels of relative dominance between time constant and time delay have been used to test the comparative merits of the proposed family of hybrid fractional order fuzzy PID controllers. Performance comparison of the different FO fuzzy PID controller structures has been done in terms of optimal set-point tracking, load disturbance rejection and minimal variation of manipulated variable or smaller actuator requirement etc. In addition, multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal trade-offs between the set point tracking and control signal, and the set point tracking and load disturbance performance for each of the controller structure to handle the three different types of processes. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
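The fractional-order rate and integral operators used in such controllers can be approximated numerically with the Grünwald-Letnikov recursion; the sketch below is generic, not the authors' implementation:

```python
import numpy as np

def gl_fractional_derivative(x, alpha, dt):
    """Grunwald-Letnikov approximation of the order-alpha derivative of
    the uniformly sampled signal x with step dt."""
    n = len(x)
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):                    # binomial-coefficient recursion
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    d = np.array([np.dot(c[:k + 1], x[k::-1]) for k in range(n)])
    return d / dt ** alpha

t = np.linspace(0.0, 1.0, 101)
e = t ** 2                                   # example error signal
print(gl_fractional_derivative(e, alpha=0.5, dt=t[1] - t[0])[-1])
```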
NASA Astrophysics Data System (ADS)
Gézero, L.; Antunes, C.
2017-05-01
Digital terrain models (DTM) play an essential role in all types of road maintenance, water supply and sanitation projects. The demand for such information is greater in developing countries, where the lack of infrastructure is more pronounced. In recent years, the use of Mobile LiDAR Systems (MLS) has proved to be a very efficient technique for the acquisition of precise and dense point clouds. These point clouds can be a solution for obtaining the data needed to produce DTM in remote areas, due mainly to the safety, precision, speed of acquisition and the detail of the information gathered. However, point cloud filtering and algorithms to separate "terrain points" from "non-terrain points", quickly and consistently, remain a challenge that has caught the interest of researchers. This work presents a method to create the DTM from point clouds collected by MLS. The method is based on two steps. The first step reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step is based on the Delaunay triangulation of the points resulting from the first step. The achieved results encourage a wider use of this technology as a solution for large-scale DTM production in remote areas.
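A bare-bones sketch of the two steps, with the paper's terrain-adaptive point reduction replaced for brevity by a simple lowest-point-per-grid-cell filter (an assumption of ours), followed by SciPy's Delaunay triangulation:

```python
import numpy as np
from scipy.spatial import Delaunay

def dtm_from_cloud(points, cell=1.0):
    """Keep the lowest point per planimetric grid cell, then triangulate."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    lowest = {}
    for key, p in zip(map(tuple, keys), points):
        if key not in lowest or p[2] < lowest[key][2]:
            lowest[key] = p
    terrain = np.array(list(lowest.values()))
    return terrain, Delaunay(terrain[:, :2])

rng = np.random.default_rng(3)
cloud = rng.uniform(0.0, 20.0, size=(5000, 3))   # hypothetical MLS point cloud
terrain, tri = dtm_from_cloud(cloud)
print(terrain.shape, tri.simplices.shape)
```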
Personal computer wallpaper user segmentation based on Sasang typology.
Lee, Joung-Youn
2015-03-01
As human-computer interaction (HCI) is becoming a significant part of all human life, the user's emotional satisfaction is an important factor to consider. These changes have been pointed out by several researchers who claim that a user's personality may become the most important factor in the design. The objective of this study is to examine Sasang typology as a user segmentation method in the area of HCI design. To test HCI usage patterns in terms of the user's personality and temperament, this study focuses on personal computer (PC) or lap-top wallpaper settings. One hundred and four Facebook friends completed a QSCC II survey assessing Sasang typology type and sent a captured image of their personal PC or lap-top wallpaper. To classify the computer usage pattern, folder organization and wallpaper setting were investigated. The research showed that So-Yang type organized folders and icons in an orderly manner, whereas So-Eum type did not organize folders and icons at all. With regard to wallpaper settings, So-Yang type used the default wallpaper provided by the PC but So-Eum type used landscape images. Because So-Yang type was reported to be emotionally stable and extrovert, they tended to be highly concerned with online privacy compared with So-Eum type. So-Eum type use a lot of images of landscapes as the background image, which demonstrates So-Eum's low emotional stability, anxiety, and the desire to obtain analogy throughout the computer screen. Also, So-Yang's wallpapers display family or peripheral figures and this is due to the sociability that extrovert So-Yang types possess. By proposing the Sasang typology as a factor in influencing an HCI usage pattern in this study, it can be used to predict the user's HCI experience, or suggest a native design methodology that can actively cope with the user's psychological environment.
Reducing the likelihood of long tennis matches.
Barnett, Tristan; Brown, Alan; Pollard, Graham
2006-01-01
Long matches can cause problems for tournaments. For example, the starting times of subsequent matches can be substantially delayed causing inconvenience to players, spectators, officials and television scheduling. They can even be seen as unfair in the tournament setting when the winner of a very long match, who may have negative aftereffects from such a match, plays the winner of an average or shorter length match in the next round. Long matches can also lead to injuries to the participating players. One factor that can lead to long matches is the use of the advantage set as the fifth set, as in the Australian Open, the French Open and Wimbledon. Another factor is long rallies and a greater than average number of points per game. This tends to occur more frequently on the slower surfaces such as at the French Open. The mathematical method of generating functions is used to show that the likelihood of long matches can be substantially reduced by using the tiebreak game in the fifth set, or more effectively by using a new type of game, the 50-40 game, throughout the match. Key Points: (1) the cumulant generating function has nice properties for calculating the parameters of distributions in a tennis match; (2) a final tiebreaker set, as currently used in the US Open, reduces the length of matches; (3) a new 50-40 game reduces the length of matches whilst maintaining comparable probabilities for the better player to win the match.
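As a quick illustration of the kind of calculation generating functions support here, the sketch below computes the server's probability of winning a 50-40 game by direct recursion over the score. It is a minimal stand-in for the paper's generating-function machinery, and the scoring rule assumed (server needs four points, receiver needs three, no deuce) is my reading of the 50-40 game rather than a detail stated in the abstract.

```python
# Probability that the server wins a "50-40" game, assuming the server must
# win four points and the receiver three, with no deuce (an assumed reading
# of the 50-40 rule). p is the server's single-point win probability.
from functools import lru_cache

@lru_cache(maxsize=None)
def p_server_wins(p, s_need=4, r_need=3):
    if s_need == 0:      # server has reached 50: game won
        return 1.0
    if r_need == 0:      # receiver has reached 40: game lost
        return 0.0
    return (p * p_server_wins(p, s_need - 1, r_need)
            + (1 - p) * p_server_wins(p, s_need, r_need - 1))

for p in (0.55, 0.60, 0.65):
    print(f"p(point) = {p:.2f} -> p(game) = {p_server_wins(p):.3f}")
```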
NASA Astrophysics Data System (ADS)
Yin, Dong-shan; Gao, Yu-ping; Zhao, Shu-hong
2017-07-01
Millisecond pulsars can generate another type of time scale that is totally independent of the atomic time scale, because the physical mechanisms of the pulsar time scale and the atomic time scale are quite different from each other. Usually the pulsar timing observations are not evenly sampled, and the intervals between two data points range from several hours to more than half a month. Furthermore, these data sets are sparse. All this makes it difficult to generate an ensemble pulsar time scale. Hence, a new algorithm to calculate the ensemble pulsar time scale is proposed. Firstly, a cubic spline interpolation is used to densify the data set and make the intervals between data points uniform. Then, the Vondrak filter is employed to smooth the data set and remove the high-frequency noise, and finally the weighted average method is adopted to generate the ensemble pulsar time scale. The newly released NANOGrav (North American Nanohertz Observatory for Gravitational Waves) 9-year data set is used to generate the ensemble pulsar time scale. This data set includes the 9-year observational data of 37 millisecond pulsars observed by the 100-meter Green Bank telescope and the 305-meter Arecibo telescope. It is found that the algorithm used in this paper can effectively reduce the influence of the noise in the pulsar timing residuals, and improve the long-term stability of the ensemble pulsar time scale. Results indicate that the long-term (> 1 yr) stability of the ensemble pulsar time scale is better than 3.4 × 10^-15.
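The three-stage pipeline (densify, smooth, weight) can be sketched compactly. In the stand-in below, a cubic spline regularises each pulsar's unevenly sampled residuals onto a common grid, a simple moving average takes the place of the Vondrak filter, and the ensemble is an inverse-variance weighted mean; the smoothing window and the weighting scheme are illustrative assumptions, not details from the paper.

```python
# Sketch of the ensemble pulsar time scale pipeline: spline densification,
# smoothing (moving average standing in for the Vondrak filter), and an
# inverse-variance weighted average across pulsars (assumed weighting).
import numpy as np
from scipy.interpolate import CubicSpline

def ensemble_timescale(datasets, grid):
    """datasets: list of (t, residual) pairs with uneven sampling."""
    smoothed = []
    for t, r in datasets:
        dense = CubicSpline(t, r)(grid)             # step 1: densify onto grid
        kernel = np.ones(11) / 11.0                 # step 2: smooth out
        smoothed.append(np.convolve(dense, kernel, mode='same'))
    smoothed = np.array(smoothed)
    w = 1.0 / smoothed.var(axis=1, keepdims=True)   # step 3: weight and average
    return (w * smoothed).sum(axis=0) / w.sum()

rng = np.random.default_rng(0)
grid = np.linspace(0.1, 8.9, 400)                   # years, inside the data span
data = [(np.sort(rng.uniform(0.0, 9.0, 80)), rng.normal(0.0, 1e-6, 80))
        for _ in range(5)]
print(ensemble_timescale(data, grid)[:3])
```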
Svedbom, Axel; Borgström, Fredrik; Hernlund, Emma; Ström, Oskar; Alekna, Vidmantas; Bianchi, Maria Luisa; Clark, Patricia; Curiel, Manuel Díaz; Dimai, Hans Peter; Jürisson, Mikk; Uusküla, Anneli; Lember, Margus; Kallikorm, Riina; Lesnyak, Olga; McCloskey, Eugene; Ershova, Olga; Sanders, Kerrie M; Silverman, Stuart; Tamulaitiene, Marija; Thomas, Thierry; Tosteson, Anna N A; Jönsson, Bengt; Kanis, John A
2018-03-01
The International Costs and Utilities Related to Osteoporotic fractures Study is a multinational observational study set up to describe the costs and quality of life (QoL) consequences of fragility fracture. This paper aims to estimate and compare QoL after hip, vertebral, and distal forearm fracture using time-trade-off (TTO), the EuroQol (EQ) Visual Analogue Scale (EQ-VAS), and the EQ-5D-3L valued using the hypothetical UK value set. Data were collected at four time-points for five QoL point estimates: within 2 weeks after fracture (including pre-fracture recall), and at 4, 12, and 18 months after fracture. Health state utility values (HSUVs) were derived for each fracture type and time-point using the three approaches (TTO, EQ-VAS, EQ-5D-3L). HSUVs were used to estimate accumulated QoL loss and QoL multipliers. In total, 1410 patients (505 with hip, 316 with vertebral, and 589 with distal forearm fracture) were eligible for analysis. Across all time-points for the three fracture types, TTO provided the highest HSUVs, whereas EQ-5D-3L consistently provided the lowest HSUVs directly after fracture. Except for 13-18 months after distal forearm fracture, EQ-5D-3L generated lower QoL multipliers than the other two methods, whereas no equally clear pattern was observed between EQ-VAS and TTO. On average, the most marked differences between the three approaches were observed immediately after the fracture. The approach used to derive QoL markedly influences the estimated QoL impact of fracture. Therefore, the choice of approach may be important for the outcome and interpretation of cost-effectiveness analysis of fracture prevention.
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan
2015-11-01
To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT), the authors have developed a level-set based surface reconstruction method. The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, this continuous representation is particularly advantageous for subsequent surface registration and motion tracking, because it eliminates the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulty detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. On phantom point clouds, their method achieved submillimeter reconstruction RMSE under the different configurations, demonstrating quantitatively the fidelity of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness to variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were obtained from both the reconstructed surface and the CT surface, with mean and standard deviation of (μ_recon = -2.7×10^-3 mm^-1, σ_recon = 7.0×10^-3 mm^-1) and (μ_CT = -2.5×10^-3 mm^-1, σ_CT = 5.3×10^-3 mm^-1), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrates the ability of the proposed method to faithfully represent the underlying patient surface. The authors have developed and integrated an accurate level-set based continuous surface reconstruction method for point clouds acquired by a 3D surface photogrammetry system. The proposed method generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy.
Meta-Analysis of Effect Sizes Reported at Multiple Time Points Using General Linear Mixed Model.
Musekiwa, Alfred; Manda, Samuel O M; Mwambi, Henry G; Chen, Ding-Geng
2016-01-01
Meta-analysis of longitudinal studies combines effect sizes measured at pre-determined time points. The most common approach involves performing separate univariate meta-analyses at individual time points. This simplistic approach ignores dependence between longitudinal effect sizes, which might result in less precise parameter estimates. In this paper, we show how to conduct a meta-analysis of longitudinal effect sizes where we contrast different covariance structures for the dependence between effect sizes, both within and between studies. We propose new combinations of covariance structures for the dependence between effect sizes and utilize a practical example involving a meta-analysis of 17 trials comparing postoperative treatments for a type of cancer, where survival is measured at 6, 12, 18 and 24 months post randomization. Although the results from this particular data set show the benefit of accounting for within-study serial correlation between effect sizes, simulations are required to confirm these results.
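At its core, accounting for dependence between longitudinal effect sizes replaces the univariate inverse-variance average with a generalized least squares (GLS) step over a joint covariance matrix. The toy sketch below (illustrative numbers; the exchangeable within-study correlation rho is an assumption, not a value from the paper) estimates one pooled effect per time point from two studies, each contributing effects at two time points.

```python
# GLS pooling of longitudinal effect sizes with within-study correlation.
import numpy as np

def gls_combine(y, V, X):
    Vinv = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
    cov = np.linalg.inv(X.T @ Vinv @ X)
    return beta, np.sqrt(np.diag(cov))

# Two studies x two time points: effect sizes, variances, assumed rho = 0.5.
y = np.array([0.30, 0.25, 0.40, 0.35])
v = np.array([0.04, 0.05, 0.03, 0.04])
rho = 0.5
V = np.zeros((4, 4))
for i in (0, 2):                          # within-study 2x2 covariance blocks
    s = np.sqrt(v[i:i + 2])
    V[i:i + 2, i:i + 2] = np.outer(s, s) * np.array([[1, rho], [rho, 1]])
X = np.kron(np.ones((2, 1)), np.eye(2))   # one pooled mean per time point
beta, se = gls_combine(y, V, X)
print("time-point effects:", beta, "SEs:", se)
```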
Mauk, Michael G.; Song, Jinzhao; Liu, Changchun; Bau, Haim H.
2018-01-01
Designs and applications of microfluidics-based devices for molecular diagnostics (Nucleic Acid Amplification Tests, NAATs) in infectious disease testing are reviewed, with emphasis on minimally instrumented, point-of-care (POC) tests for resource-limited settings. Microfluidic cartridges (‘chips’) that combine solid-phase nucleic acid extraction; isothermal enzymatic nucleic acid amplification; pre-stored, paraffin-encapsulated lyophilized reagents; and real-time or endpoint optical detection are described. These chips can be used with a companion module for separating plasma from blood through a combined sedimentation-filtration effect. Three reporter types (fluorescence, colorimetric dyes, and bioluminescence) are compared, along with a new paradigm for end-point detection based on a diffusion-reaction column. Multiplexing (parallel amplification and detection of multiple targets) is demonstrated. Low-cost detection and added functionality (data analysis, control, communication) can be realized using a cellphone platform with the chip. Some related and similar-purposed approaches by others are surveyed. PMID:29495424
Alternator control for battery charging
Brunstetter, Craig A.; Jaye, John R.; Tallarek, Glen E.; Adams, Joseph B.
2015-07-14
In accordance with an aspect of the present disclosure, an electrical system for an automotive vehicle has an electrical generating machine and a battery. A set point voltage, which sets an output voltage of the electrical generating machine, is set by an electronic control unit (ECU). The ECU selects one of a plurality of control modes for controlling the alternator based on an operating state of the vehicle as determined from vehicle operating parameters. The ECU selects a range for the set point voltage based on the selected control mode and then sets the set point voltage within the range based on feedback parameters for that control mode. In an aspect, the control modes include a trickle charge mode and battery charge current is the feedback parameter and the ECU controls the set point voltage within the range to maintain a predetermined battery charge current.
Floating shoulders: Clinical and radiographic analysis at a mean follow-up of 11 years
Pailhes, Régis; Bonnevialle, Nicolas; Laffosse, Jean-Michel; Tricoire, Jean-Louis; Cavaignac, Etienne; Chiron, Philippe
2013-01-01
Context: The floating shoulder (FS) is an uncommon injury, which can be managed conservatively or surgically. The therapeutic option remains controversial. Aims: The goal of our study was to evaluate the long-term results and to identify predictive factors of functional outcomes. Settings and Design: Retrospective monocentric study. Materials and Methods: Forty consecutive FS were included (24 nonoperated and 16 operated) from 1984 to 2009. Clinical results were assessed with the Simple Shoulder Test (SST), Oxford Shoulder Score (OSS), Single Assessment Numeric Evaluation (SANE), Short Form-12 (SF12), Disabilities of the Arm Shoulder and Hand score (DASH), and Constant score (CST). Plain radiographs were reviewed to evaluate secondary displacement, fracture healing, and modification of the lateral offset of the gleno-humeral joint (chest X-rays). New radiographs were made to evaluate osteoarthritis during follow-up. Statistical Analysis Used: t-test, Mann-Whitney test, and Pearson's correlation coefficient were used. The significance level was set at 0.05. Results: At a mean follow-up of 135 months (range 12-312), clinical results were satisfactory with regard to the different mean scores: SST 10.5 points, OSS 14 points, SANE 81%, SF12 (50 points and 60 points), DASH 14.5 points and CST 84 points. There were no significant differences between the operative and non-operative groups. However, the loss of lateral offset influenced the results negatively. Osteoarthritis was diagnosed in five patients (12.5%) without correlation to fracture patterns or type of treatment. Conclusions: This study suggests that floating shoulders may be treated either conservatively or surgically with satisfactory long-term clinical outcomes. However, the loss of gleno-humeral lateral offset should be evaluated carefully before choosing a therapeutic option. PMID:23960364
Accuracy assessment of the global TanDEM-X Digital Elevation Model with GPS data
NASA Astrophysics Data System (ADS)
Wessel, Birgit; Huber, Martin; Wohlfart, Christian; Marschalk, Ursula; Kosmann, Detlev; Roth, Achim
2018-05-01
The primary goal of the German TanDEM-X mission is the generation of a highly accurate and global Digital Elevation Model (DEM) with global accuracies of at least 10 m absolute height error (linear 90% error). The global TanDEM-X DEM, acquired with single-pass SAR interferometry, was finished in September 2016. This paper provides a unique accuracy assessment of the final TanDEM-X global DEM using two different GPS point reference data sets, which are distributed across all continents, to fully characterize the absolute height error. Firstly, the absolute vertical accuracy is examined by about three million globally distributed kinematic GPS (KGPS) points derived from 19 KGPS tracks covering a total length of about 66,000 km. Secondly, a comparison is performed with more than 23,000 "GPS on Bench Marks" (GPS-on-BM) points provided by the US National Geodetic Survey (NGS), scattered across 14 different land cover types of the US National Land Cover Database (NLCD). Both GPS comparisons prove an absolute vertical mean error of the TanDEM-X DEM smaller than ±0.20 m, a Root Mean Square Error (RMSE) smaller than 1.4 m, and an excellent absolute 90% linear height error below 2 m. The RMSE values are sensitive to land cover types. For low vegetation the RMSE is ±1.1 m, whereas it is slightly higher for developed areas (±1.4 m) and for forests (±1.8 m). This validation confirms an outstanding absolute height error at the 90% confidence level for the global TanDEM-X DEM, outperforming the requirement by a factor of five. Due to its extensive and globally distributed reference data sets, this study is of considerable interest for scientific and commercial applications.
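The error measures used in this assessment are easy to reproduce. Below is a minimal sketch (synthetic stand-in data, not the TanDEM-X validation code) computing the mean error, RMSE, and a 90% linear error taken as the 90th percentile of absolute DEM-minus-GPS differences; treating LE90 as that percentile is a common convention and an assumption here.

```python
# DEM accuracy metrics against GPS reference points: mean error, RMSE, LE90.
import numpy as np

def dem_accuracy(dem_heights, gps_heights):
    err = np.asarray(dem_heights) - np.asarray(gps_heights)
    return {"mean_error": err.mean(),
            "rmse": np.sqrt((err ** 2).mean()),
            "le90": np.percentile(np.abs(err), 90)}  # 90% linear error

rng = np.random.default_rng(1)
gps = rng.uniform(0, 2000, 10000)              # reference heights (m)
dem = gps + rng.normal(0.1, 1.2, gps.size)     # simulated DEM: small bias + noise
print({k: round(v, 2) for k, v in dem_accuracy(dem, gps).items()})
```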
Gagaoua, Mohammed; Micol, Didier; Picard, Brigitte; Terlouw, Claudia E M; Moloney, Aidan P; Juin, Hervé; Meteau, Karine; Scollan, Nigel; Richardson, Ian; Hocquette, Jean-François
2016-12-01
Eating quality of the same meat samples from different animal types cooked at two end-point cooking temperatures (55°C and 74°C) was evaluated by trained panels in France and the United Kingdom. Tenderness and juiciness scores were greater at 55°C than at 74°C, irrespective of the animal type and location of the panel. The UK panel, independently of animal type, gave greater scores for beef flavour (+7 to +24%, P<0.001) but lower scores for abnormal flavour (-10 to -17%, P<0.001) at 74°C. Abnormal flavour score by the French panel was higher at 74°C than at 55°C (+26%, P<0.001). Irrespective of the data set, tenderness was correlated with juiciness and beef flavour. Overall, this study found that cooking beef at a lower temperature increased tenderness and juiciness, irrespective of the location of the panel. In contrast, cooking beef at higher temperatures increased beef flavour and decreased abnormal flavour for the UK panelists but increased abnormal flavour for the French panel. Copyright © 2016 Elsevier Ltd. All rights reserved.
Variance Analysis of Unevenly Spaced Time Series Data
NASA Technical Reports Server (NTRS)
Hackman, Christine; Parker, Thomas E.
1996-01-01
We have investigated the effect of uneven data spacing on the computation of δ_χ(γ). Evenly spaced simulated data sets were generated for noise processes ranging from white phase modulation (PM) to random walk frequency modulation (FM). δ_χ(γ) was then calculated for each noise type. Data were subsequently removed from each simulated data set using typical two-way satellite time and frequency transfer (TWSTFT) data patterns to create two unevenly spaced sets with average intervals of 2.8 and 3.6 days. δ_χ(γ) was then calculated for each sparse data set using two different approaches. First, the missing data points were replaced by linear interpolation and δ_χ(γ) was calculated from this now full data set. The second approach ignored the fact that the data were unevenly spaced and calculated δ_χ(γ) as if the data were equally spaced with average spacing of 2.8 or 3.6 days. Both approaches have advantages and disadvantages, and techniques are presented for correcting errors caused by uneven data spacing in typical TWSTFT data sets.
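The two handling strategies compared above are simple to demonstrate. The sketch below (synthetic random-walk phase data; a plain first-difference variance is used as a generic stand-in statistic, not the paper's δ_χ(γ)) fills the gaps by linear interpolation onto the original even grid in one branch, and treats the sparse samples as if equally spaced in the other.

```python
# Two ways to handle unevenly spaced time-transfer data before computing
# a variance-type statistic: interpolate to an even grid, or ignore spacing.
import numpy as np

rng = np.random.default_rng(2)
t_full = np.arange(0, 300.0)                          # daily samples
x_full = np.cumsum(rng.normal(0, 1, t_full.size))     # random-walk phase
keep = np.sort(rng.choice(t_full.size, 100, replace=False))
t, x = t_full[keep], x_full[keep]                     # uneven ~3-day subset

# (a) fill missing points by linear interpolation, then difference
var_interp = np.var(np.diff(np.interp(t_full, t, x)))

# (b) pretend the sparse samples are evenly spaced
var_asif = np.var(np.diff(x))

print(f"interpolated: {var_interp:.2f}   as-if-even: {var_asif:.2f}")
```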
Carpenter, Afton S; Sullivan, Joanne H; Deshmukh, Arati; Glisson, Scott R; Gallo, Stephen A
2015-01-01
Objective With the use of teleconferencing for grant peer-review panels increasing, further studies are necessary to determine the efficacy of the teleconference setting compared to the traditional onsite/face-to-face setting. The objective of this analysis was to examine the effects of discussion, namely changes in application scoring premeeting and postdiscussion, in these settings. We also investigated other parameters, including the magnitude of score shifts and application discussion time in face-to-face and teleconference review settings. Design The investigation involved a retrospective, quantitative analysis of premeeting and postdiscussion scores and discussion times for teleconference and face-to-face review panels. The analysis included 260 and 212 application score data points and 212 and 171 discussion time data points for the face-to-face and teleconference settings, respectively. Results The effect of discussion was found to be small, on average, in both settings. However, discussion was found to be important for at least 10% of applications, regardless of setting, with these applications moving over a potential funding line in either direction (fundable to unfundable or vice versa). Small differences were uncovered relating to the effect of discussion between settings, including a decrease in the magnitude of the effect in the teleconference panels as compared to face-to-face. Discussion time (despite teleconferences having shorter discussions) was observed to have little influence on the magnitude of the effect of discussion. Additionally, panel discussion was found to often result in a poorer score (as opposed to an improvement) when compared to reviewer premeeting scores. This was true regardless of setting or assigned reviewer type (primary or secondary reviewer). Conclusions Subtle differences were observed between settings, potentially due to reduced engagement in teleconferences. Overall, further research is required on the psychology of decision-making, team performance and persuasion to better elucidate the group dynamics of telephonic and virtual ad-hoc peer-review panels. PMID:26351194
Multiple μ-stability of neural networks with unbounded time-varying delays.
Wang, Lili; Chen, Tianping
2014-05-01
In this paper, we are concerned with a class of recurrent neural networks with unbounded time-varying delays. Based on the geometrical configuration of activation functions, the phase space R^n can be divided into several Φ_η-type subsets. Accordingly, a new set of regions Ω_η are proposed, and rigorous mathematical analysis is provided to derive the existence of an equilibrium point and its local μ-stability in each Ω_η. It is concluded that the n-dimensional neural networks can exhibit at least 3^n equilibrium points, 2^n of which are μ-stable. Furthermore, due to the compatible property, a set of new conditions are presented to address the dynamics in the remaining 3^n - 2^n subset regions. As direct applications of these results, we can obtain criteria on multiple exponential stability, multiple power stability, multiple log-stability, multiple log-log-stability and so on. In addition, the approach and results can also be extended to neural networks with K-level nonlinear activation functions and unbounded time-varying delays, which can store (2K+1)^n equilibrium points, (K+1)^n of which are locally μ-stable. Numerical examples are given to illustrate the effectiveness of our results. Copyright © 2014 Elsevier Ltd. All rights reserved.
Latash, M; Gottlieb, G
1990-01-01
Problems of single-joint movement variability are analysed in the framework of the equilibrium-point hypothesis (the lambda-model). Control of the movements is described with three parameters related to movement amplitude, speed, and time. Three strategies emerge from this description. Only one of them is likely to lead to a Fitts'-type speed-accuracy trade-off. Experiments were performed to test one of the predictions of the model. Subjects performed identical sets of single-joint fast movements with open or closed eyes and somewhat different instructions. Movements performed with closed eyes were characterized by higher peak speeds and unchanged variability, in seeming violation of Fitts' law and in good correspondence with the model.
Terminal attractors in neural networks
NASA Technical Reports Server (NTRS)
Zak, Michail
1989-01-01
A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.
Designing a SCADA system simulator for fast breeder reactor
NASA Astrophysics Data System (ADS)
Nugraha, E.; Abdullah, A. G.; Hakim, D. L.
2016-04-01
SCADA (Supervisory Control and Data Acquisition) system simulator is a Human Machine Interface-based software that is able to visualize the process of a plant. This study describes the results of the process of designing a SCADA system simulator that aims to facilitate the operator in monitoring, controlling, handling alarms, and accessing historical data and historical trends in a Nuclear Power Plant (NPP) of the Fast Breeder Reactor (FBR) type. The simulator models the FBR-type NPP at Kalpakkam, India. It was developed using Wonderware InTouch 10 software and is equipped with a main menu, plant overview, area graphics, control displays, set point displays, an alarm system, real-time trending, historical trending and a security system. The simulator properly reproduces the principle of energy flow and the energy conversion process in an FBR-type NPP, and can be used as a training medium for prospective operators of such plants.
Montesano, Francesco F.; Serio, Francesco; Mininni, Carlo; Signore, Angelo; Parente, Angelo; Santamaria, Pietro
2015-01-01
Automatic irrigation scheduling based on real-time measurement of soilless substrate water status has been recognized as a promising approach for efficient greenhouse irrigation management. Identification of proper irrigation set points is crucial for optimal crop performance, both in terms of yield and quality, and optimal use of water resources. The objective of the present study was to determine the effects of irrigation management based on matric potential control on growth, plant–water relations, yield, fruit quality traits, and water-use efficiency of subirrigated (through bench system) soilless tomato. Tensiometers were used for automatic irrigation control. Two cultivars, “Kabiria” (cocktail type) and “Diana” (intermediate type), and substrate water potential set-points (−30 and −60 hPa, for “Diana,” and −30, −60, and −90 hPa for “Kabiria”), were compared. Compared with −30 hPa, water stress (corresponding to a −60 hPa irrigation set-point) reduced water consumption (14%), leaf area (18%), specific leaf area (19%), total yield (10%), and mean fruit weight (13%), irrespective of the cultivars. At −60 hPa, leaf-water status of plants, irrespective of the cultivars, showed an osmotic adjustment corresponding to a 9% average osmotic potential decrease. Total yield, mean fruit weight, plant water, and osmotic potential decreased linearly when −30, −60, and −90 hPa irrigation set-points were used in “Kabiria.” Unmarketable yield in “Diana” increased when water stress was imposed (187 vs. 349 g·plant−1, respectively, at −30 and −60 hPa), whereas the opposite effect was observed in “Kabiria,” where marketable yield loss decreased linearly [by 1.05 g·plant−1 per unit of substrate water potential (in the tested range from −30 to −90 hPa)]. In the second cluster, total soluble solids of the fruit and dry matter increased irrespective of the cultivars. In the seventh cluster, in “Diana,” only a slight increase was observed from −30 vs. −60 hPa (3.3 and 1.3%, respectively, for TSS and dry matter), whereas in “Kabiria,” the increase was more pronounced (8.7 and 12.0%, respectively, for TSS and dry matter), and further reduction in matric potential from −60 to −90 hPa confirmed the linear increase for both parameters. Both glucose and fructose concentrations increased linearly in “Kabiria” fruits on decreasing the substrate matric potential, whereas in “Diana,” there was no increase. It is feasible to act on matric potential irrigation set-points to control plant response in terms of fruit quality parameters. Precise control of substrate water status may offer the possibility to steer crop response by enhancing different crop-performance components, namely yield and fruit quality, in subirrigated tomato. Small-sized fruit varieties benefit more from controlled water stress in terms of reduced unmarketable yield loss and fruit quality improvements. PMID:26779189
78 FR 24816 - Pricing for the 2013 American Eagle West Point Two-Coin Silver Set
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-26
... DEPARTMENT OF THE TREASURY United States Mint Pricing for the 2013 American Eagle West Point Two-Coin Silver Set AGENCY: United States Mint, Department of the Treasury. ACTION: Notice. SUMMARY: The United States Mint is announcing the price of the 2013 American Eagle West Point Two-Coin Silver Set. The...
GridTool: A surface modeling and grid generation tool
NASA Technical Reports Server (NTRS)
Samareh-Abolhassani, Jamshid
1995-01-01
GridTool is designed around the concept that the surface grids are generated on a set of bi-linear patches. This type of grid generation is quite easy to implement, and it avoids the problems associated with complex CAD surface representations and associated surface parameterizations. However, the resulting surface grids are close to but not on the original CAD surfaces. This problem can be alleviated by projecting the resulting surface grids onto the original CAD surfaces. GridTool is designed primarily for unstructured grid generation systems. Currently, GridTool supports the VGRID and FELISA systems, and it can be easily extended to support other unstructured grid generation systems. The data in GridTool are stored parametrically so that once the problem is set up, one can modify the surfaces and the entire set of points, curves and patches will be updated automatically. This is very useful in a multidisciplinary design and optimization process. GridTool is written entirely in ANSI 'C', the interface is based on the FORMS library, and the graphics are based on the GL library. The code has been tested successfully on IRIS workstations running IRIX 4.0 and above. Memory is allocated dynamically; therefore, memory size will depend on the complexity of the geometry/grid. The GridTool data structure is based on a linked-list structure, which allows the required memory to expand and contract dynamically according to the user's data size and actions. The data structure contains several types of objects such as points, curves, patches, sources and surfaces. At any given time there is always an active object, which is drawn in magenta or in its highlighted color as defined by the resource file, which will be discussed later.
Jones, Graham R D; Albarede, Stephanie; Kesseler, Dagmar; MacKenzie, Finlay; Mammen, Joy; Pedersen, Morten; Stavelin, Anne; Thelen, Marc; Thomas, Annette; Twomey, Patrick J; Ventura, Emma; Panteghini, Mauro
2017-06-27
External Quality Assurance (EQA) is vital to ensure acceptable analytical quality in medical laboratories. A key component of an EQA scheme is an analytical performance specification (APS) for each measurand that a laboratory can use to assess the extent of deviation of the obtained results from the target value. A consensus conference held in Milan in 2014 proposed three models for setting APS, and these can be applied to setting APS for EQA. A goal arising from this conference is the harmonisation of EQA APS between different schemes to deliver consistent quality messages to laboratories irrespective of location and the choice of EQA provider. At this time there are wide differences in the APS used in different EQA schemes for the same measurands. Contributing factors to this variation are that the APS in different schemes are established using different criteria, applied to different types of data (e.g. single data points, multiple data points), used for different goals (e.g. improvement of analytical quality; licensing), and with the aim of eliciting different responses from participants. This paper provides recommendations from the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Task and Finish Group on Performance Specifications for External Quality Assurance Schemes (TFG-APSEQA) on clear terminology for EQA APS. The recommended terminology covers six elements required to understand APS: 1) a statement on the EQA material matrix and its commutability; 2) the method used to assign the target value; 3) the data set to which APS are applied; 4) the applicable analytical property being assessed (i.e. total error, bias, imprecision, uncertainty); 5) the rationale for the selection of the APS; and 6) the type of the Milan model(s) used to set the APS. The terminology is required for EQA participants and other interested parties to understand the meaning of meeting or not meeting APS.
Editing ERTS-1 data to exclude land aids cluster analysis of water targets
NASA Technical Reports Server (NTRS)
Erb, R. B. (Principal Investigator)
1973-01-01
The author has identified the following significant results. It has been determined that an increase in the number of spectrally distinct coastal water types is achieved when data values over the adjacent land areas are excluded from the processing routine. This finding resulted from an automatic clustering analysis of ERTS-1 system corrected MSS scene 1002-18134 of 25 July 1972 over Monterey Bay, California. When the entire study area data set was submitted to the clustering only two distinct water classes were extracted. However, when the land area data points were removed from the data set and resubmitted to the clustering routine, four distinct groupings of water features were identified. Additionally, unlike the previous separation, the four types could be correlated to features observable in the associated ERTS-1 imagery. This exercise demonstrates that by proper selection of data submitted to the processing routine, based upon the specific application of study, additional information may be extracted from the ERTS-1 MSS data.
Willsey, A. Jeremy; Sanders, Stephan J.; Li, Mingfeng; Dong, Shan; Tebbenkamp, Andrew T.; Muhle, Rebecca A.; Reilly, Steven K.; Lin, Leon; Fertuzinhos, Sofia; Miller, Jeremy A.; Murtha, Michael T.; Bichsel, Candace; Niu, Wei; Cotney, Justin; Ercan-Sencicek, A. Gulhan; Gockley, Jake; Gupta, Abha; Han, Wenqi; He, Xin; Hoffman, Ellen; Klei, Lambertus; Lei, Jing; Liu, Wenzhong; Liu, Li; Lu, Cong; Xu, Xuming; Zhu, Ying; Mane, Shrikant M.; Lein, Edward S.; Wei, Liping; Noonan, James P.; Roeder, Kathryn; Devlin, Bernie; Šestan, Nenad; State, Matthew W.
2013-01-01
Autism spectrum disorder (ASD) is a complex developmental syndrome of unknown etiology. Recent studies employing exome- and genome-wide sequencing have identified nine high-confidence ASD (hcASD) genes. Working from the hypothesis that ASD-associated mutations in these biologically pleiotropic genes will disrupt intersecting developmental processes to contribute to a common phenotype, we have attempted to identify time periods, brain regions, and cell types in which these genes converge. We have constructed coexpression networks based on the hcASD “seed” genes, leveraging a rich expression data set encompassing multiple human brain regions across human development and into adulthood. By assessing enrichment of an independent set of probable ASD (pASD) genes, derived from the same sequencing studies, we demonstrate a key point of convergence in midfetal layer 5/6 cortical projection neurons. This approach informs when, where, and in what cell types mutations in these specific genes may be productively studied to clarify ASD pathophysiology. PMID:24267886
Random catalytic reaction networks
NASA Astrophysics Data System (ADS)
Stadler, Peter F.; Fontana, Walter; Miller, John H.
1993-03-01
We study networks that are a generalization of replicator (or Lotka-Volterra) equations. They model the dynamics of a population of object types whose binary interactions determine the specific type of interaction product. Such a system always reduces its dimension to a subset that contains production pathways for all of its members. The network equation can be rewritten at a level of collectives in terms of two basic interaction patterns: replicator sets and cyclic transformation pathways among sets. Although the system contains well-known cases that exhibit very complicated dynamics, the generic behavior of randomly generated systems is found (numerically) to be extremely robust: convergence to a globally stable rest point. It is easy to tailor networks that display replicator interactions where the replicators are entire self-sustaining subsystems, rather than structureless units. A numerical scan of random systems highlights the special properties of elementary replicators: they reduce the effective interconnectedness of the system, resulting in enhanced competition, and strong correlations between the concentrations.
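A minimal numerical experiment of this kind is easy to set up. The sketch below (illustrative parameters, using SciPy's solve_ivp) integrates a random catalytic rule table, where types j and k react to produce a third type, with a dilution flux that keeps the total concentration on the simplex, so one can watch which production pathways survive.

```python
# Random catalytic network: binary interactions j + k -> product[j, k],
# with a dilution flux phi that conserves total concentration (simplex).
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(3)
n = 8
product = rng.integers(0, n, size=(n, n))    # random reaction rule table

def rhs(_, x):
    prod = np.zeros(n)
    for j in range(n):
        for k in range(n):
            prod[product[j, k]] += x[j] * x[k]
    phi = prod.sum()                          # total production = dilution
    return prod - phi * x

x0 = rng.dirichlet(np.ones(n))                # start on the simplex
sol = solve_ivp(rhs, (0.0, 200.0), x0)
print("final concentrations:", np.round(sol.y[:, -1], 3))
```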
Point Set Denoising Using Bootstrap-Based Radial Basis Function.
Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad
2016-01-01
This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
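The projection step of the algorithm can be sketched with standard SciPy pieces: a k-nearest-neighbour query followed by a smoothed thin-plate spline fit and re-evaluation at each point. The smoothing value below is a fixed illustrative constant; in the paper it is chosen by bootstrap test-error estimation, which this stand-in omits. Height-field geometry (z over xy) is also an assumption made for brevity.

```python
# Point set denoising sketch: kNN search, then projection of each point's
# height onto a smoothed thin-plate spline fitted to its neighbourhood.
import numpy as np
from scipy.spatial import cKDTree
from scipy.interpolate import RBFInterpolator

def denoise(points, k=30, smoothing=1e-3):
    tree = cKDTree(points[:, :2])
    out = points.copy()
    for i, p in enumerate(points):
        _, idx = tree.query(p[:2], k=k)          # k nearest neighbours
        nb = points[idx]
        tps = RBFInterpolator(nb[:, :2], nb[:, 2],
                              kernel='thin_plate_spline', smoothing=smoothing)
        out[i, 2] = tps(p[:2][None, :])[0]       # project z onto the patch
    return out

rng = np.random.default_rng(4)
xy = rng.uniform(-1, 1, (500, 2))
z = np.sin(2 * xy[:, 0]) * np.cos(2 * xy[:, 1]) + rng.normal(0, 0.05, 500)
clean = denoise(np.column_stack([xy, z]))
```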
Testing Spatial Symmetry Using Contingency Tables Based on Nearest Neighbor Relations
Ceyhan, Elvan
2014-01-01
We consider two types of spatial symmetry, namely, symmetry in the mixed or shared nearest neighbor (NN) structures. We use Pielou's and Dixon's symmetry tests, which are defined using contingency tables based on the NN relationships between the data points. We generalize these tests to multiple classes and demonstrate that both the asymptotic and exact versions of Pielou's first type of symmetry test are extremely conservative in rejecting symmetry in the mixed NN structure and hence should be avoided, or only the Monte Carlo randomized version should be used. Under RL (random labeling), we derive the asymptotic distribution for Dixon's symmetry test and also observe that the usual independence test seems to be appropriate for Pielou's second type of test. Moreover, we apply variants of Fisher's exact test on the shared NN contingency table for Pielou's second test and determine the most appropriate version for our setting. We also consider pairwise and one-versus-rest type tests in post hoc analysis after a significant overall symmetry test. We investigate the asymptotic properties of the tests, prove their consistency under appropriate null hypotheses, and investigate their finite-sample performance through extensive Monte Carlo simulations. The methods are illustrated on a real-life ecological data set. PMID:24605061
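The object underlying both tests is the NN contingency table. The sketch below (plain Python/SciPy, illustrative only) builds the table N whose entry N[i, j] counts points of class i whose nearest neighbour has class j; symmetry tests of the kind discussed above then probe whether N is compatible with the relevant null hypothesis.

```python
# Nearest-neighbour contingency table for class-labelled spatial points.
import numpy as np
from scipy.spatial import cKDTree

def nn_contingency(points, labels, n_classes):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=2)     # k=2: first hit is the point itself
    nn_labels = labels[idx[:, 1]]
    table = np.zeros((n_classes, n_classes), dtype=int)
    for a, b in zip(labels, nn_labels):
        table[a, b] += 1
    return table

rng = np.random.default_rng(5)
pts = rng.uniform(0, 1, (300, 2))
lab = rng.integers(0, 2, 300)
print(nn_contingency(pts, lab, 2))   # off-diagonal symmetry is what the tests probe
```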
An efficient transport solver for tokamak plasmas
Park, Jin Myung; Murakami, Masanori; St. John, H. E.; ...
2017-01-03
A simple approach to efficiently solve a coupled set of 1-D diffusion-type transport equations with a stiff transport model for tokamak plasmas is presented based on the 4th order accurate Interpolated Differential Operator scheme along with a nonlinear iteration method derived from a root-finding algorithm. Here, numerical tests using the Trapped Gyro-Landau-Fluid model show that the presented high order method provides an accurate transport solution using a small number of grid points with robust nonlinear convergence.
Dental services advertising: does it affect consumers?
Sanchez, P M; Bonner, P G
1989-12-01
Dental services advertising appears to be increasing. Despite their negative attitude toward advertising, as many as 20% of all dentists may now be advertising to meet changing conditions in a highly competitive market. Research on dental services advertising has provided a useful starting point for developing dental advertising strategies. However, it affords little understanding of how consumers may respond to the many types of information provided in dental services advertisements. The authors extend knowledge in this area by examining consumer response to dental advertising in an experimental setting.
Unified Pairwise Spatial Relations: An Application to Graphical Symbol Retrieval
NASA Astrophysics Data System (ADS)
Santosh, K. C.; Wendling, Laurent; Lamiroy, Bart
In this paper, we present a novel unifying concept of pairwise spatial relations. We develop two-way directional relations with respect to a unique point set, based on the topology of the studied objects, thus avoiding problems related to erroneous choices of reference objects while preserving symmetry. The method is robust to any type of image configuration since the directional relations are topologically guided. A prototype automatic graphical symbol retrieval system is presented in order to establish the expressiveness of the approach.
Farhadi, Khosro; Choubsaz, Mansour; Setayeshi, Khosro; Kameli, Mohammad; Bazargan-Hejazi, Shahrzad; Heidari Zadie, Zahra; Ahmadi, Alireza
2016-09-01
Postoperative nausea and vomiting (PONV) is a common complication after general anesthesia, with a prevalence ranging between 25% and 30%. The aim of this study was to determine the preventive effects of dry cupping on PONV by stimulating point P6 in the wrist. This was a randomized controlled trial conducted at the Imam Reza Hospital in Kermanshah, Iran. The final study sample included 206 patients (107 experimental and 99 controls). Inclusion criteria were: female sex; age > 18 years; ASA Class I-II; type of surgery: laparoscopic cholecystectomy; type of anesthesia: general anesthesia. Exclusion criteria were: change in the type of surgery, that is, from laparoscopic cholecystectomy to laparotomy, and ASA classification III or more. Intervention: before the induction of anesthesia, the experimental group received dry cupping on point P6 of the dominant hand's wrist with activation of intermittent negative pressure. The sham group received cupping without activation of negative pressure at the same point. The main outcome measure was the severity of PONV, assessed with a visual analogue scale. The experimental group that received dry cupping had significantly lower levels of PONV severity after surgery (P < 0.001) than the control group. The differences were maintained after controlling for age and ASA class in regression models (P < 0.01). Traditional dry cupping delivered in an operating room setting prevented PONV in laparoscopic cholecystectomy patients.
Inagaki, Nobuya; Sano, Hiroki; Seki, Yoshifumi; Kuroda, Shingo; Kaku, Kohei
2018-03-01
Trelagliptin, a novel once-weekly oral dipeptidyl peptidase-4 (DPP-4) inhibitor, has shown favorable efficacy and safety in type 2 diabetes mellitus patients. Trelagliptin was launched in Japan, and is expected to be initially used for switchover from a daily DPP-4 inhibitor in the clinical setting. Thus, the present study was carried out to explore the efficacy and safety of trelagliptin after switching from a daily DPP-4 inhibitor. This was an open-label, phase 3 exploratory study to evaluate the efficacy and safety of trelagliptin in Japanese type 2 diabetes mellitus patients who had stable glycemic control on once-daily sitagliptin therapy. Eligible patients received trelagliptin 100 mg orally before breakfast once a week for 12 weeks. The primary end-point was blood glucose by the meal tolerance test, and additional end-points were glycemic control (efficacy) and safety. Altogether, 14 patients received the study drug. Blood glucose did not markedly change from baseline at the major assessment points in the meal tolerance test, and a decrease in blood glucose was observed at several other assessment points. Adverse events were reported in 42.9% (6/14) of patients, but all were mild or moderate in severity, and most were not related to the study drug. No cases of death, serious adverse events or hypoglycemia were reported. It is considered possible to switch from a once-daily DPP-4 inhibitor to trelagliptin in type 2 diabetes mellitus patients with stable glycemic control in combination with diet and exercise therapy, without any major influence on glycemic control or safety. © 2017 The Authors. Journal of Diabetes Investigation published by Asian Association for the Study of Diabetes (AASD) and John Wiley & Sons Australia, Ltd.
Neural network based optimal control of HVAC&R systems
NASA Astrophysics Data System (ADS)
Ning, Min
Heating, Ventilation, Air-Conditioning and Refrigeration (HVAC&R) systems have wide applications in providing a desired indoor environment for different types of buildings. It is well acknowledged that 30%-40% of the total energy generated is consumed by buildings, and HVAC&R systems alone account for more than 50% of building energy consumption. Low operational efficiency, especially under partial load conditions, and poor control are among the reasons for such high energy consumption. To improve energy efficiency, HVAC&R systems should be properly operated to maintain a comfortable and healthy indoor environment under dynamic ambient and indoor conditions with the least energy consumption. This research focuses on the optimal operation of HVAC&R systems. The optimization problem is formulated and solved to find the optimal set points for the chilled water supply temperature, discharge air temperature and AHU (air handling unit) fan static pressure such that the indoor environment is maintained with the least chiller and fan energy consumption. To achieve this objective, a dynamic system model is developed first to simulate the system behavior under different control schemes and operating conditions. The system model is modular in structure, and includes a water-cooled vapor compression chiller model and a two-zone VAV system model. A fuzzy-set based extended transformation approach is then applied to investigate the uncertainties of this model caused by uncertain parameters and the sensitivities of the control inputs with respect to the model outputs of interest. A multi-layer feed-forward neural network is constructed and trained in unsupervised mode to minimize the cost function, which comprises the overall energy cost and a penalty cost incurred when one or more constraints are violated. After training, the network is implemented as a supervisory controller to compute the optimal settings for the system. In order to implement the optimal set points predicted by the supervisory controller, a set of five adaptive PI (proportional-integral) controllers is designed, one for each of the five local control loops of the HVAC&R system. The five controllers are used to track the optimal set points and zone air temperature set points. Parameters of these PI controllers are tuned online to reduce tracking errors. The updating rules are derived from Lyapunov stability analysis. Simulation results show that compared to the conventional night reset operation scheme, the optimal operation scheme saves around 10% energy under full load conditions and 19% under partial load conditions.
Processing Uav and LIDAR Point Clouds in Grass GIS
NASA Astrophysics Data System (ADS)
Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.
2016-06-01
Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure-from-motion (SfM) technique, and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community but also by the original authors themselves.
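As a concrete illustration of one decimation family mentioned above, the sketch below (plain Python/NumPy, not GRASS GIS code; the cell size is an arbitrary choice) thins a dense point cloud by keeping one representative point per XY grid cell.

```python
# Grid-based decimation: keep the first point encountered in each XY cell.
import numpy as np

def grid_decimate(points, cell=0.5):
    keys = np.floor(points[:, :2] / cell).astype(np.int64)  # cell index per point
    _, first = np.unique(keys, axis=0, return_index=True)   # first point per cell
    return points[np.sort(first)]

rng = np.random.default_rng(7)
cloud = rng.uniform(0, 50, (200000, 3))     # dense stand-in cloud (x, y, z)
thinned = grid_decimate(cloud, cell=1.0)
print(len(cloud), "->", len(thinned), "points")
```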
Inverse consistent non-rigid image registration based on robust point set matching
2014-01-01
Background: Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images, because RPM is a unidirectional image matching approach. Therefore, improving image registration based on RPM is an important issue. Methods: In our work, a consistent image registration approach based on point set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of only estimating the forward transformation between the source and target point sets, as in state-of-the-art RPM algorithms, the forward and backward transformations between two point sets are estimated concurrently in our algorithm. Inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between two point sets are estimated based on both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into point matching in order to improve image matching. Results: Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than those of RPM. In particular, our algorithm preserves the topology of the transformations well, even for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM and maintain a downward trend overall, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated. Again, our algorithm achieves lower registration errors within the same number of iterations. The determinant of the Jacobian matrix of the deformation field is used to analyse the smoothness of the forward and backward transformations. The forward and backward transformations estimated by our algorithm are smooth for small deformations. For registration of lung slices and individual brain slices, large or small determinants of the Jacobian matrix of the deformation fields are observed. Conclusions: Results indicate the improvement of the proposed algorithm in bi-directional image registration and the decrease of the inverse consistent errors between the forward and the reverse transformations between two images. PMID:25559889
Geometric registration of images by similarity transformation using two reference points
NASA Technical Reports Server (NTRS)
Kang, Yong Q. (Inventor); Jo, Young-Heon (Inventor); Yan, Xiao-Hai (Inventor)
2011-01-01
A method for registering a first image to a second image using a similarity transformation. Each image includes a plurality of pixels. The first image pixels are mapped to a set of first image coordinates and the second image pixels are mapped to a set of second image coordinates. The first image coordinates of two reference points in the first image are determined. The second image coordinates of these reference points in the second image are determined. A Cartesian translation of the set of second image coordinates is performed such that the second image coordinates of the first reference point match its first image coordinates. A similarity transformation of the translated set of second image coordinates is performed. This transformation scales and rotates the second image coordinates about the first reference point such that the second image coordinates of the second reference point match its first image coordinates.
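A minimal numeric sketch of this two-point similarity registration follows (plain Python/NumPy, illustrative rather than the patented method): translate so the first reference point coincides, then solve for the single complex factor that scales and rotates about it so the second reference point lands on its target.

```python
# Two-point similarity registration; complex arithmetic makes the 2D
# scale-plus-rotation a single division.
import numpy as np

def similarity_from_two_points(ref1_a, ref2_a, ref1_b, ref2_b, coords_b):
    """Map coordinates of image B into the frame of image A."""
    a1, a2 = complex(*ref1_a), complex(*ref2_a)
    b1, b2 = complex(*ref1_b), complex(*ref2_b)
    s = (a2 - a1) / (b2 - b1)        # scale * exp(i * rotation)
    z = coords_b[:, 0] + 1j * coords_b[:, 1]
    w = a1 + s * (z - b1)            # translate, then scale/rotate about ref1
    return np.column_stack([w.real, w.imag])

pts_b = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 5.0]])
mapped = similarity_from_two_points((2, 3), (12, 13), (0, 0), (10, 0), pts_b)
print(mapped)   # both reference points land exactly on their image-A coordinates
```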
Perceived synchrony for realistic and dynamic audiovisual events.
Eg, Ragnhild; Behne, Dawn M
2015-01-01
In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli.
Perceived synchrony for realistic and dynamic audiovisual events
Eg, Ragnhild; Behne, Dawn M.
2015-01-01
In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli. PMID:26082738
Coal gasification system with a modulated on/off control system
Fasching, George E.
1984-01-01
A modulated control system is provided for improving regulation of the bed level in a fixed-bed coal gasifier into which coal is fed from a rotary coal feeder. A nuclear bed level gauge using a cobalt source and an ion chamber detector is used to detect the coal bed level in the gasifier. The detector signal is compared to a bed level set point signal in a primary controller which operates in proportional/integral modes to produce an error signal. The error signal is modulated by the injection of a triangular wave signal of a frequency of about 0.0004 Hz and an amplitude of about 80% of the primary deadband. The modulated error signal is fed to a triple-deadband secondary controller which jogs the coal feeder speed up or down by on/off control of a feeder speed change driver such that the gasifier bed level is driven toward the set point while preventing excessive cycling (oscillation) common in on/off mode automatic controllers of this type. Regulation of the bed level is achieved without excessive feeder speed control jogging.
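A toy simulation may make the modulation scheme concrete: a PI error signal is dithered with a slow triangular wave (frequency about 0.0004 Hz, amplitude about 80% of the deadband) before a deadband stage jogs the feeder speed up or down. All plant dynamics and gain values below are hypothetical; the sketch only illustrates the control structure, not the patented gasifier controller.

```python
import numpy as np

dt = 1.0                                   # 1 s control steps
t = np.arange(0, 20000.0, dt)              # ~5.5 h of operation
kp, ki, deadband = 0.8, 0.002, 1.0
set_point, level, feeder = 50.0, 45.0, 1.0
integ = 0.0

# Slow triangular dither: ~0.0004 Hz, amplitude 80% of the deadband.
tri = 0.8 * deadband * (2 * np.abs(2 * ((t * 4e-4) % 1) - 1) - 1)

for k in range(t.size):
    err = set_point - level
    integ += ki * err * dt
    e_mod = kp * err + integ + tri[k]      # modulated PI error signal
    if e_mod > deadband:                   # jog the feeder speed up or down
        feeder += 0.001
    elif e_mod < -deadband:
        feeder -= 0.001
    level += (0.05 * feeder - 0.045) * dt  # toy bed-level dynamics

print(f"final bed level: {level:.2f} (set point {set_point})")
```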
Point-source inversion techniques
NASA Astrophysics Data System (ADS)
Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.
1982-11-01
A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
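The linear core of such moment-tensor inversions can be sketched compactly: observed waveforms d are modeled as Green's-function excitations G weighted by the six independent moment-tensor components m, and m is recovered by least squares. The G and d below are synthetic stand-ins, not real seismograms or Green's functions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_mt = 500, 6                      # waveform samples, tensor components
G = rng.standard_normal((n_samples, n_mt))    # synthetic Green's-function matrix
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2])     # hypothetical source
d = G @ m_true + 0.05 * rng.standard_normal(n_samples)  # noisy "waveform data"

m_est, *_ = np.linalg.lstsq(G, d, rcond=None)  # generalized (least-squares) inversion
print(np.round(m_est, 2))                      # close to m_true
```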
Analyses of pressure ulcer point prevalence at the first skin assessment in a Portuguese hospital.
Garcez Sardo, Pedro Miguel; Simões, Cláudia Sofia Oliveira; Alvarelhão, José Joaquim Marques; de Oliveira e Costa, César Telmo; Simões, Carlos Jorge Cardoso; Figueira, Jorge Manuel Rodrigues; Simões, João Filipe Fernandes Lindo; Amado, Francisco Manuel Lemos; Amaro, António José Monteiro; Pinheiro de Melo, Elsa Maria Oliveira
2016-05-01
To analyze the first pressure ulcer risk and skin assessment records of hospitalized adult patients in medical and surgical areas of Aveiro Hospital during 2012 in association with their demographic and clinical characteristics. Retrospective cohort analysis of an electronic health record database of 7132 adult patients admitted to medical and surgical areas in a Portuguese hospital during 2012. The presence of (at least) one pressure ulcer at the first skin assessment in the inpatient setting was associated with age, gender, type of admission, specialty units, length of stay, patient discharge and ICD-9 diagnosis. The point prevalence of participants with a pressure ulcer category/stage I-IV was 7.9% at the first skin assessment in the inpatient setting. A total of 1455 pressure ulcers were documented, most of them category/stage I. The heels and the sacrum/coccyx were the most problematic areas. Participants with a pressure ulcer commonly had two or more pressure ulcers. The point prevalence of participants with a pressure ulcer in our study was similar to that reported in the international literature. The presence of a pressure ulcer at the first skin assessment could be an important measure of frailty, and participants with a pressure ulcer commonly had more than one documented pressure ulcer. Advanced age, lower Braden Scale scores or Emergency Service admission were relevant variables for the presence of (at least) one pressure ulcer at the first skin assessment in the inpatient setting, as were respiratory, infectious or genitourinary system diseases. Copyright © 2016 Tissue Viability Society. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
de Jong, G. Theodoor; Geerke, Daan P.; Diefenbach, Axel; Matthias Bickelhaupt, F.
2005-06-01
We have evaluated the performance of 24 popular density functionals for describing the potential energy surface (PES) of the archetypal oxidative addition reaction of the methane C-H bond to the palladium atom by comparing the results with our recent ab initio [CCSD(T)] benchmark study of this reaction. The density functionals examined cover the local density approximation (LDA), the generalized gradient approximation (GGA), meta-GGAs as well as hybrid density functional theory. Relativistic effects are accounted for through the zeroth-order regular approximation (ZORA). The basis-set dependence of the density-functional-theory (DFT) results is assessed for the Becke-Lee-Yang-Parr (BLYP) functional using a hierarchical series of Slater-type orbital (STO) basis sets ranging from unpolarized double-ζ (DZ) to quadruply polarized quadruple-ζ quality (QZ4P). Stationary points on the reaction surface have been optimized using various GGA functionals, all of which yield geometries that differ only marginally. Counterpoise-corrected relative energies of stationary points are converged to within a few tenths of a kcal/mol if one uses the doubly polarized triple-ζ (TZ2P) basis set and the basis-set superposition error (BSSE) drops to 0.0 kcal/mol for our largest basis set (QZ4P). Best overall agreement with the ab initio benchmark PES is achieved by functionals of the GGA, meta-GGA, and hybrid-DFT type, with mean absolute errors of 1.3-1.4 kcal/mol and errors in activation energies ranging from +0.8 to -1.4 kcal/mol. Interestingly, the well-known BLYP functional compares very reasonably with an only slightly larger mean absolute error of 2.5 kcal/mol and an underestimation by -1.9 kcal/mol of the overall barrier (i.e., the difference in energy between the TS and the separate reactants). For comparison, with B3LYP we arrive at a mean absolute error of 3.8 kcal/mol and an overestimation of the overall barrier by 4.5 kcal/mol.
Viral Aggregation: Impact on Virus Behavior in the Environment.
Gerba, Charles P; Betancourt, Walter Q
2017-07-05
Aggregates of viruses can have a significant impact on the quantification and behavior of viruses in the environment. Viral aggregates may be formed in numerous ways. Viruses may form crystal-like structures and aggregates in the host cell during replication, or aggregates may form due to changes in environmental conditions after virus particles are released from host cells. Aggregates tend to form near the isoelectric point of the virus, under the influence of certain salts and salt concentrations in solution, cationic polymers, and suspended organic matter. The conditions under which aggregates form in the environment depend strongly on the type of virus, the type of salts in solution (cation, anion, monovalent, divalent) and pH. However, virus type greatly influences the conditions under which aggregation/disaggregation will occur, making predictions difficult under any given set of water quality conditions. Most studies have shown that viral aggregates increase the survival of viruses in the environment and their resistance to disinfectants, especially the more reactive disinfectants. The presence of viral aggregates may also result in overestimation of removal by filtration processes. Virus aggregation-disaggregation is a complex process, and predicting the behavior of any individual virus under a given set of environmental circumstances is difficult without actual experimental data.
Choice of data types in time resolved fluorescence enhanced diffuse optical tomography.
Riley, Jason; Hassan, Moinuddin; Chernomordik, Victor; Gandjbakhche, Amir
2007-12-01
In this paper we examine possible data types for time resolved fluorescence enhanced diffuse optical tomography (FDOT). FDOT is a particular case of diffuse optical tomography, where our goal is to analyze fluorophores deeply embedded in a turbid medium. We focus on the relative robustness of the different sets of data types to noise. We use an analytical model to generate the expected temporal point spread function (TPSF) and generate the data types from this. Varying levels of noise are applied to the TPSF before generating the data types. We show that local data types are more robust to noise than global data types, and should provide enhanced information to the inverse problem. We go on to show that with a simple reconstruction algorithm, depth and lifetime (the parameters of interest) of the fluorophore are better reconstructed using the local data types. Further we show that the relationship between depth and lifetime is better preserved for the local data types, suggesting they are in some way not only more robust, but also self-regularizing. We conclude that while the local data types may be more expensive to generate in the general case, they do offer clear advantages over the standard global data types.
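The robustness comparison described here amounts to a Monte Carlo exercise: perturb a model TPSF with noise many times and measure the spread of each derived data type. The gamma-shaped TPSF and the two data types below are illustrative stand-ins for the paper's definitions, chosen only to show the machinery of such a comparison.

```python
import numpy as np

t = np.linspace(0.01, 10, 400)           # time axis in ns (hypothetical)
tpsf = t**2 * np.exp(-t / 1.2)           # model temporal point spread function
tpsf /= tpsf.sum()

rng = np.random.default_rng(2)
trials = {"integrated intensity": [], "mean arrival time": []}
for _ in range(2000):
    noisy = tpsf + 0.02 * tpsf.max() * rng.standard_normal(t.size)
    trials["integrated intensity"].append(noisy.sum())
    trials["mean arrival time"].append((t * noisy).sum() / noisy.sum())

for name, v in trials.items():
    v = np.asarray(v)
    print(f"{name}: relative spread = {v.std() / abs(v.mean()):.4f}")
```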
Match Duration and Number of Rallies in Men’s and Women’s 2000–2010 FIVB World Tour Beach Volleyball
Palao, José Manuel; Valades, David; Ortega, Enrique
2012-01-01
After the 2000 Olympic Games, the Fédération Internationale de Volleyball (FIVB) modified the scoring system used in beach volleyball from side-out to a rally point system. The goal was to facilitate the comprehension of the game and to stabilize match duration. The purpose of this study was to assess the duration and number of rallies in men’s and women’s beach volleyball matches (2000–2010 FIVB World Tour). Data from 14,432 men’s matches and 14,175 women’s matches of the 2000–2010 World Tour were collected. The variables studied were: match duration, total rallies per set and match, number of sets, team that won the set and match, type of match (equality in score), and gender. The average match duration in beach volleyball is stable, ranging from 30 to 64 minutes, regardless of the number of sets, the stage of the tournament (qualifying round or main draw), or gender. The average number of rallies per match was 78–80 for two-set matches and 94–96 for three-set matches. Matches from the main draw are more balanced than matches from the qualifying round. More balanced matches (smaller point difference between teams) have longer durations. It is not clear why there is no relationship between the number of rallies and match duration. Future studies are needed to clarify this aspect. The results can serve as a reference to guide beach volleyball training (with regard to duration and number of rallies) and to help understand the effect of the rule change. PMID:23486703
Tests of a Semi-Analytical Case 1 and Gelbstoff Case 2 SeaWiFS Algorithm with a Global Data Set
NASA Technical Reports Server (NTRS)
Carder, Kendall L.; Hawes, Steve K.; Lee, Zhongping
1997-01-01
A semi-analytical algorithm was tested with a total of 733 points of either unpackaged or packaged-pigment data, with corresponding algorithm parameters for each data type. The 'unpackaged' type consisted of data sets that were generally consistent with the Case 1 CZCS algorithm and other well calibrated data sets. The 'packaged' type consisted of data sets apparently containing somewhat more packaged pigments, requiring modification of the absorption parameters of the model consistent with the CalCOFI study area. This resulted in two equally divided data sets. A more thorough scrutiny of these and other data sets using a semi-analytical model requires improved knowledge of the phytoplankton and gelbstoff of the specific environment studied. Since the semi-analytical algorithm is dependent upon 4 spectral channels including the 412 nm channel, while most other algorithms are not, a means of testing data sets for consistency was sought. A numerical filter was developed to classify data sets into the above classes. The filter uses reflectance ratios, which can be determined from space. The sensitivity of such numerical filters to measurement errors resulting from atmospheric correction and sensor noise requires further study. The semi-analytical algorithm performed superbly on each of the data sets after classification, resulting in RMS1 errors of 0.107 and 0.121, respectively, for the unpackaged and packaged data-set classes, with little bias and slopes near 1.0. In combination, the RMS1 performance was 0.114. While these numbers appear rather sterling, one must bear in mind what mis-classification does to the results. Using an average or compromise parameterization on the modified global data set yielded an RMS1 error of 0.171, while using the unpackaged parameterization on the global evaluation data set yielded an RMS1 error of 0.284. So, without classification, the algorithm performs better globally using the average parameters than it does using the unpackaged parameters. Finally, the effects of even more extreme pigment packaging must be examined in order to improve algorithm performance at high latitudes. Note, however, that the North Sea and Mississippi River plume studies contributed data to the packaged and unpackaged classes, respectively, with little effect on algorithm performance. This suggests that gelbstoff-rich Case 2 waters do not seriously degrade performance of the semi-analytical algorithm.
Phase transitions in coupled map lattices and in associated probabilistic cellular automata.
Just, Wolfram
2006-10-01
Analytical tools are applied to investigate piecewise linear coupled map lattices in terms of probabilistic cellular automata. The so-called disorder condition of probabilistic cellular automata is closely related with attracting sets in coupled map lattices. The importance of this condition for the suppression of phase transitions is illustrated by spatially one-dimensional systems. Invariant densities and temporal correlations are calculated explicitly. Ising type phase transitions are found for one-dimensional coupled map lattices acting on repelling sets and for a spatially two-dimensional Miller-Huse-like system with stable long time dynamics. Critical exponents are calculated within a finite size scaling approach. The relevance of detailed balance of the resulting probabilistic cellular automaton for the critical behavior is pointed out.
Taylor, Kathryn S; Verbakel, Jan Y; Feakins, Benjamin G; Price, Christopher P; Perera, Rafael; Bankhead, Clare; Plüddemann, Annette
2018-05-21
To assess the diagnostic accuracy of point-of-care natriuretic peptide tests in patients with chronic heart failure, with a focus on the ambulatory care setting. Systematic review and meta-analysis. Ovid Medline, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, Embase, Health Technology Assessment Database, Science Citation Index, and Conference Proceedings Citation Index until 31 March 2017. Eligible studies evaluated point-of-care natriuretic peptide testing (B-type natriuretic peptide (BNP) or N terminal fragment pro B-type natriuretic peptide (NTproBNP)) against any relevant reference standard, including echocardiography, clinical examination, or combinations of these, in humans. Studies were excluded if reported data were insufficient to construct 2×2 tables. No language restrictions were applied. 42 publications of 39 individual studies met the inclusion criteria and 40 publications of 37 studies were included in the analysis. Of the 37 studies, 30 evaluated BNP point-of-care testing and seven evaluated NTproBNP testing. 15 studies were done in ambulatory care settings in populations with a low prevalence of chronic heart failure. Five studies were done in primary care. At thresholds >100 pg/mL, the sensitivity of BNP, measured with the point-of-care index device Triage, was generally high and was 0.95 (95% confidence interval 0.90 to 0.98) at 100 pg/mL. At thresholds <100 pg/mL, sensitivity ranged from 0.46 to 0.97 and specificity from 0.31 to 0.98. Primary care studies that used NTproBNP testing reported a sensitivity of 0.99 (0.57 to 1.00) and specificity of 0.60 (0.44 to 0.74) at 135 pg/mL. No statistically significant difference in diagnostic accuracy was found between point-of-care BNP and NTproBNP tests. Given the lack of studies in primary care, the paucity of NTproBNP data, and potential methodological limitations in these studies, large scale trials in primary care are needed to assess the role of point-of-care natriuretic peptide testing and clarify appropriate thresholds to improve care of patients with suspected or chronic heart failure. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
GENERAL: Bursting Ca2+ Oscillations and Synchronization in Coupled Cells
NASA Astrophysics Data System (ADS)
Ji, Quan-Bao; Lu, Qi-Shao; Yang, Zhuo-Qin; Duan, Li-Xia
2008-11-01
A mathematical model proposed by Grubelnik et al. [Biophys. Chem. 94 (2001) 59] is employed to study the physiological role of mitochondria and the cytosolic proteins in generating complex Ca2+ oscillations. Intracellular bursting calcium oscillations of point-point, point-cycle and two-folded limit cycle types are observed and explanations are given based on the fast/slow dynamical analysis, especially for the point-cycle and two-folded limit cycle types, which have not been reported before. Furthermore, synchronization of coupled bursters of Ca2+ oscillations via gap junctions and the effect of bursting types on the synchronization of coupled cells are studied. It is argued that bursting oscillations of point-point type may be better suited to achieving synchronization than those of point-cycle type.
Three-phase receiving coil of wireless power transmission system for gastrointestinal robot
NASA Astrophysics Data System (ADS)
Jia, Z. W.; Jiang, T.; Liu, Y.
2017-11-01
Power shortage is the bottleneck for the wide application of gastrointestinal (GI) robots. Owing to the limited volume and freely changing orientation of the receiving set in the GI tract, optimization of the receiving set is the key to improving the transmission efficiency of the wireless power transmission system. A new type of receiving set, similar to the winding of a three-phase asynchronous motor, is presented and compared with the original three-dimensional orthogonal coil. For a given volume and space utilization ratio, the parameters of the three-phase and three-orthogonal receiving sets are optimized and compared. Both the transmission efficiency and stability are analyzed and verified by in vitro experiments. Animal experiments show that the new receiving set can provide at least 420 mW of power in a volume of Φ11 × 13 mm with a uniformity of 78.3% for the GI robot.
Project Delivery System Mode Decision Based on Uncertain AHP and Fuzzy Sets
NASA Astrophysics Data System (ADS)
Kaishan, Liu; Huimin, Li
2017-12-01
The project delivery system mode determines the contract pricing type, the project management mode and the risk allocation among all participants. Different project delivery system modes have different characteristics and applicable scopes. For owners, the selection of the delivery mode is key to whether the project can achieve the expected benefits; it bears on the success or failure of project construction. Comprehensively considering the factors that influence the delivery mode, a project delivery system mode decision model was set up on the basis of uncertain AHP and fuzzy sets, which accounts for uncertainty and fuzziness in index evaluation and weight determination, so that the most suitable delivery mode can be identified rapidly and effectively according to project characteristics. The effectiveness of the model is verified through an actual case analysis, providing a reference for selecting the delivery system mode of construction projects.
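The AHP step at the heart of such a model is the principal-eigenvector weighting of a pairwise-comparison matrix, checked with Saaty's consistency ratio. A minimal sketch follows; the comparison matrix (over, say, three hypothetical delivery modes such as DBB, DB, and CM-at-risk) is invented for illustration, and the uncertain/fuzzy extensions of the paper are omitted.

```python
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale for three hypothetical modes.
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3.0, 1.0, 2.0],
              [1 / 5.0, 1 / 2.0, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)                 # principal eigenvalue
weights = np.abs(vecs[:, k].real)
weights /= weights.sum()                 # priority vector

n = A.shape[0]
ci = (vals[k].real - n) / (n - 1)        # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
print("weights:", np.round(weights, 3), " CR:", round(ci / ri, 3))
```

A consistency ratio below about 0.1 is conventionally taken to mean the judgments are acceptably coherent.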
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2011-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The Tool for Rapid Analysis of Monte Carlo simulations (TRAM) has been used in recent design and analysis work for the Orion vehicle, greatly decreasing the time it takes to evaluate performance requirements. A previous version of this tool was developed to automatically identify driving design variables in Monte Carlo data sets. This paper describes a new, parallel version of TRAM implemented on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
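The kind of sorting TRAM automates can be sketched as ranking input variables by how strongly their distributions differ between passing and failing Monte Carlo runs. A two-sample Kolmogorov-Smirnov statistic is one reasonable separation measure; this is an assumption for illustration, not necessarily TRAM's internal metric, and the data are synthetic.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(10)
n = 2000
inputs = {f"var{i}": rng.standard_normal(n) for i in range(5)}
# Synthetic failure flag secretly driven by var2.
fail = (inputs["var2"] + 0.3 * rng.standard_normal(n)) > 1.0

ranking = sorted(((name, ks_2samp(x[fail], x[~fail]).statistic)
                  for name, x in inputs.items()), key=lambda kv: -kv[1])
for name, stat in ranking:
    print(f"{name}: KS separation = {stat:.3f}")   # var2 ranks first
```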
Fluid/electrolyte and endocrine changes in space flight
NASA Technical Reports Server (NTRS)
Huntoon, Carolyn Leach
1989-01-01
The primary effects of space flight that influence the endocrine system and fluid and electrolyte regulation are the reduction of hydrostatic gradients, reduction in use and gravitational loading of bone and muscle, and stress. Each of these sets into motion a series of responses that culminates in alteration of some homeostatic set points for the environment of space. Set point alterations are believed to include decreases in venous pressure; red blood cell mass; total body water; plasma volume; and serum sodium, chloride, potassium, and osmolality. Serum calcium and phosphate increase. Hormones such as erythropoietin, atrial natriuretic peptide, aldosterone, cortisol, antidiuretic hormone, and growth hormone are involved in the dynamic processes that bring about the new set points. The inappropriateness of microgravity set points for 1-G conditions contributes to astronaut postflight responses.
Comparison of social and physical free energies on a toy model.
Kasac, Josip; Stefancic, Hrvoje; Stepanic, Josip
2004-01-01
Social free energy has been recently introduced as a measure of social action obtainable in a given social system, without changes in its structure. The authors of this paper argue that social free energy bridges the gap between the verbally formulated value sets of social systems and quantitatively based predictions. This point is further developed by analyzing the relation between the social and the physical free energy. Generically, this is done for a particular type of social dynamics. The extracted type of social dynamics is one of many realistic types, which differ in the proportion of social and economic elements. Numerically, this has been done for a toy model of interacting agents. The values of the social and physical free energies are, within the numerical accuracy, equivalent in the class of nontrivial, quasistationary model states.
Ultimate boundedness stability and controllability of hereditary systems
NASA Technical Reports Server (NTRS)
Chukwu, E. N.
1979-01-01
By generalizing the Liapunov-Yoshizawa techniques, necessary and sufficient conditions are given for uniform boundedness and uniform ultimate boundedness of a rather general class of nonlinear differential equations of neutral type. Among the applications treated by the methods are the Lienard equation of neutral type and hereditary systems of Lurie type. The absolute stability of this latter equation is also investigated. A certain existence result for a solution of a neutral functional differential inclusion with two-point boundary values is applied to study the exact function space controllability of a nonlinear neutral functional differential control system. A geometric growth condition is used to characterize both the function space and Euclidean controllability of another nonlinear delay system which has a compact and convex control set. This yields conditions under which perturbed nonlinear delay controllable systems are controllable.
NASA Astrophysics Data System (ADS)
Lei, Jie
2011-03-01
In order to understand the electronic and transport properties of organic field-effect transistor (FET) materials, we theoretically studied polarons in two-dimensional systems using a tight-binding model with Holstein-type and Su-Schrieffer-Heeger-type electron-lattice couplings. Numerical calculations show that a carrier can take on four kinds of localization, named the point polaron, the two-dimensional polaron, the one-dimensional polaron, and the extended state. The degree of localization is sensitive to the following parameters of the model: the strength and type of the electron-lattice couplings, and the signs and relative magnitudes of the transfer integrals. When a parameter set for a single-crystal phase of pentacene is applied within the Holstein model, a considerably delocalized hole polaron is found, consistent with the bandlike transport mechanism.
T-duality of singular spacetime compactifications in an H-flux
NASA Astrophysics Data System (ADS)
Linshaw, Andrew; Mathai, Varghese
2018-07-01
We begin by presenting a symmetric version of the circle equivariant T-duality result in a joint work of the second author with Siye Wu, thereby generalizing the results there. We then initiate the study of twisted equivariant Courant algebroids and equivariant generalized geometry and apply it to our context. As before, T-duality exchanges type IIA and type IIB string theories. In our theory, both spacetime and the T-dual spacetime can be singular spaces when the fixed point set is non-empty; the singularities correspond to Kaluza-Klein monopoles. We propose that the Ramond-Ramond charges of type II string theories on the singular spaces are classified by twisted equivariant cohomology groups, consistent with the previous work of Mathai and Wu, and prove that they are naturally isomorphic. We also establish the corresponding isomorphism of twisted equivariant Courant algebroids.
NASA Astrophysics Data System (ADS)
Postnov, Sergey
2017-11-01
Two kinds of optimal control problem are investigated for linear time-invariant fractional-order systems with lumped parameters whose dynamics are described by equations with a Hadamard-type derivative: the problem of control with minimal norm, and the problem of control with minimal time under a given restriction on the control norm. The problem setting with nonlocal initial conditions is studied. Admissible controls are allowed to be p-integrable functions (p > 1) on a half-interval. The optimal control problems are studied by the moment method. The correctness and solvability conditions for the corresponding moment problem are derived. For several special cases the optimal control problems stated are solved analytically. Some analogies are pointed out between the results obtained and those known for integer-order systems and for fractional-order systems described by equations with Caputo- and Riemann-Liouville-type derivatives.
Programming of left hand exploits task set but that of right hand depends on recent history.
Tang, Rixin; Zhu, Hong
2017-07-01
There are many differences between the left hand and the right hand. But it is not clear whether there is a difference in programming between the left and right hands when they perform the same movement. In the current study, we carried out two experiments to investigate whether the programming of the two hands is equivalent or whether they exploit different strategies. In the first experiment, participants were required to use one hand to grasp an object with visual feedback or to point to the center of an object without visual feedback on alternate trials, or to grasp an object without visual feedback and to point to the center of an object with visual feedback on alternating trials. They then performed the tasks with the other hand. The result was that a previous pointing task affected current grasping when it was performed by the left hand, but not the right hand. In Experiment 2, we studied whether the programming of the left (or right) hand would be affected by the pointing task performed on the previous trial not only by the same hand, but also by the right (or left) hand. Participants pointed and grasped the objects alternately with the two hands. The result was similar to that of Experiment 1: left-hand grasping was affected by right-hand pointing, whereas right-hand grasping was immune to interference from the left hand. Taken together, the results suggest that when open- and closed-loop trials are interleaved, motor programming of grasping with the right hand is affected by the nature of the online feedback on the previous trial only if it was a grasping trial, suggesting that the trial-to-trial transfer depends on sensorimotor memory and not on task set. In contrast, motor programming of grasping with the left hand can use information about the nature of the online feedback on the previous trial to specify the parameters of the movement, even when the type of movement that occurred was quite different (i.e., pointing) and was performed with the right hand. This suggests that trial-to-trial transfer with the left hand depends on some sort of carry-over of task set for dealing with the availability of visual feedback.
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit; Ruan, Dan
2015-01-01
Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiment, the authors generated a series of surfaces each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degree of noise and missing levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiment, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method achieved submillimeter reconstruction RMSE under different configurations, demonstrating quantitatively the faith of the proposed method in preserving local structural properties of the underlying surface in the presence of noise and missing measurements, and its robustness toward variations of such characteristics. On point clouds from the human subject, the proposed method successfully reconstructed all patient surfaces, filling regions where raw point coordinate readings were missing. Within two comparable regions of interest in the chest area, similar mean curvature distributions were acquired from both their reconstructed surface and CT surface, with mean and standard deviation of (μrecon = − 2.7 × 10−3 mm−1, σrecon = 7.0 × 10−3 mm−1) and (μCT = − 2.5 × 10−3 mm−1, σCT = 5.3 × 10−3 mm−1), respectively. The agreement of local geometry properties between the reconstructed surfaces and the CT surface demonstrated the ability of the proposed method in faithfully representing the underlying patient surface. Conclusions: The authors have integrated and developed an accurate level-set based continuous surface reconstruction method on point clouds acquired by a 3D surface photogrammetry system. The proposed method has generated a continuous representation of the underlying phantom and patient surfaces with good robustness against noise and missing measurements. It serves as an important first step for further development of motion tracking methods during radiotherapy. PMID:26520747
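A much-simplified stand-in for this reconstruction, assuming the chest-like surface can be treated as a height field z = f(x, y): fit a smoothed thin-plate spline to noisy, incomplete points. The authors' method evolves an implicit level-set function instead; this sketch only demonstrates continuous, noise-robust fitting that fills a missing patch, and all data below are synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
xy = rng.uniform(-1, 1, (800, 2))
keep = ~((np.abs(xy[:, 0]) < 0.2) & (np.abs(xy[:, 1]) < 0.2))
xy = xy[keep]                                     # a missing central patch
z = 0.3 * np.exp(-3 * (xy**2).sum(axis=1))        # smooth chest-like bump...
z += 0.01 * rng.standard_normal(len(xy))          # ...with measurement noise

# Regularized continuous fit; smoothing trades data fidelity for robustness.
surf = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1e-3)

grid = np.stack(np.meshgrid(np.linspace(-1, 1, 50),
                            np.linspace(-1, 1, 50)), axis=-1).reshape(-1, 2)
z_hat = surf(grid)                                # patch is filled in smoothly
print(f"reconstructed range: [{z_hat.min():.3f}, {z_hat.max():.3f}]")
```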
A comparison of methods for determining HIV viral set point.
Mei, Y; Wang, L; Holte, S E
2008-01-15
During a course of human immunodeficiency virus (HIV-1) infection, the viral load usually increases sharply to a peak following infection and then drops rapidly to a steady state, where it remains until progression to AIDS. This steady state is often referred to as the viral set point. It is believed that the HIV viral set point results from an equilibrium between the HIV virus and immune response and is an important indicator of AIDS disease progression. In this paper, we analyze a real data set of viral loads measured before antiretroviral therapy is initiated, and propose two-phase regression models to utilize all available data to estimate the viral set point. The advantages of the proposed methods are illustrated by comparing them with two empirical methods, and the reason behind the improvement is also studied. Our results illustrate that for our data set, the viral load data are highly correlated and it is cost effective to estimate the viral set point based on one or two measurements obtained between 5 and 12 months after HIV infection. The utility and limitations of this recommendation will be discussed. Copyright (c) 2007 John Wiley & Sons, Ltd.
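A two-phase regression of the kind compared here can be sketched as a changepoint fit: log viral load declines linearly after the peak and then flattens at the set point, with the changepoint, slope, and plateau estimated jointly. The model form and the synthetic data below are assumptions for illustration, not the authors' exact specification.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_phase(t, t0, slope, setpt):
    # Linear decline in log10 viral load before t0, constant set point after.
    return np.where(t < t0, setpt + slope * (t0 - t), setpt)

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 24, 80))               # months since infection
y = two_phase(t, 6.0, 0.5, 4.2) + 0.25 * rng.standard_normal(t.size)

(t0, slope, setpt), _ = curve_fit(two_phase, t, y, p0=[5.0, 0.3, 4.0])
print(f"estimated set point: {setpt:.2f} log10 copies/mL, reached ~{t0:.1f} months")
```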
Gschwind, Michael K [Chappaqua, NY]
2011-03-01
Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.
Pilot points method for conditioning multiple-point statistical facies simulation on flow data
NASA Astrophysics Data System (ADS)
Ma, Wei; Jafarpour, Behnam
2018-05-01
We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, their calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
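The pilot-point placement step lends itself to a short sketch: normalize the three information maps, combine them into a score map, and take the top-k cells while suppressing neighborhoods so the points stay spread out. The synthetic maps, the product combination rule, and the suppression radius are all assumptions for illustration, not the paper's exact scoring scheme.

```python
import numpy as np

rng = np.random.default_rng(5)
nx, ny, k = 50, 50, 8
uncertainty = rng.uniform(0, 1, (nx, ny))   # spread of facies probabilities
sensitivity = rng.uniform(0, 1, (nx, ny))   # model response sensitivity
mismatch = rng.uniform(0, 1, (nx, ny))      # contribution to data mismatch

score = uncertainty * sensitivity * mismatch  # combined score map

pilots, work = [], score.copy()
gx, gy = np.ogrid[:nx, :ny]
for _ in range(k):
    i, j = np.unravel_index(np.argmax(work), work.shape)
    pilots.append((i, j))
    work[(gx - i)**2 + (gy - j)**2 < 36] = -1.0  # keep pilot points apart
print("pilot point locations:", pilots)
```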
NASA Astrophysics Data System (ADS)
Ma, W.; Jafarpour, B.
2017-12-01
We develop a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, their calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) and its multiple data assimilation variant (ES-MDA) are adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at select locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
Constraining the Fundamental Parameters of the O-Type Binary CPD -41 7733
NASA Astrophysics Data System (ADS)
Sana, H.; Rauw, G.; Gosset, E.
2007-04-01
Using a set of high-resolution spectra, we studied the physical and orbital properties of the O-type binary CPD -41 7733, located in the core of NGC 6231. We report the unambiguous detection of a secondary spectral signature and we derive the first SB2 orbital solution of the system. The period is 5.6815+/-0.0015 days, and the orbit has no significant eccentricity. CPD -41 7733 probably consists of stars of spectral types O8.5 and B3. As for other objects in the cluster, we observe discrepant luminosity classifications while using spectroscopic or brightness criteria. Still, the present analysis suggests that both components display physical parameters close to those of typical O8.5 and B3 dwarfs. We also analyze the X-ray light curves and spectra obtained during six 30 ks XMM-Newton pointings spread over the 5.7 day period. We find no significant variability between the different pointings, nor within the individual observations. The CPD -41 7733 X-ray spectrum is well reproduced by a three-temperature thermal mekal model with temperatures of 0.3, 0.8, and 2.4 keV. No X-ray overluminosity, resulting, e.g., from a possible wind interaction, is observed. The emission of CPD -41 7733 is thus very representative of typical O-type star X-ray emission.
Influence of mono-axis random vibration on reading activity.
Bhiwapurkar, M K; Saran, V H; Harsha, S P; Goel, V K; Berg, Mats
2010-01-01
Recent studies of train passengers' activities found that many passengers engage in some form of work, e.g., reading and writing, while traveling by train. A majority of the passengers reported that their activities were disturbed by vibrations or motions during travel. A laboratory study was therefore set up to examine how low-frequency random vibration influences the difficulty of reading. The study involved 18 healthy male subjects in the 23 to 32 yr age group. Random vibrations were applied in the frequency range (1-10 Hz) at 0.5, 1.0 and 1.5 m/s(2) rms amplitude along three directions (longitudinal, lateral and vertical). The effect of vibration on reading activity was investigated using a word chain presented in two different font types (Times New Roman and Arial) and three different font sizes (10, 12 and 14 points) for each type. Subjects performed the reading tasks in two sitting positions (with backrest support and leaning over a table). Judgments of perceived reading difficulty were rated on a 7-point discomfort scale. The results show that reading difficulty increases with increasing vibration magnitude and was greatest in the longitudinal direction in the leaning-over-a-table position. Compared with Times New Roman, subjects perceived less difficulty with the Arial type for all font sizes under all vibration magnitudes.
Computing convex quadrangulations
Schiffer, T.; Aurenhammer, F.; Demuth, M.
2012-01-01
We use projected Delaunay tetrahedra and a maximum independent set approach to compute large subsets of convex quadrangulations on a given set of points in the plane. The new method improves over the popular pairing method based on triangulating the point set. PMID:22389540
A perfect match condition for point-set matching problems using the optimal mass transport approach
Chen, Pengwen; Lin, Ching-Long; Chern, I-Liang
2013-01-01
We study the performance of optimal mass transport-based methods applied to point-set matching problems. The present study, which is based on the L2 mass transport cost, states that perfect matches always occur when the product of the point-set cardinality and the norm of the curl of the non-rigid deformation field does not exceed some constant. This analytic result is justified by a numerical study of matching two sets of pulmonary vascular tree branch points whose displacement is caused by the lung volume changes in the same human subject. The nearly perfect match performance verifies the effectiveness of this mass transport-based approach. PMID:23687536
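For equal-cardinality sets with uniform weights, discrete L2 optimal transport reduces to a linear assignment on squared distances, which is easy to sketch with scipy. The synthetic points below stand in for the pulmonary branch points of the study; the sketch shows the matching computation itself, not the paper's perfect-match analysis.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(6)
P = rng.uniform(0, 1, (40, 3))                 # branch points at one lung volume
Q = P + 0.02 * rng.standard_normal(P.shape)    # displaced by the volume change
Q = Q[rng.permutation(len(Q))]                 # correspondence is unknown

cost = cdist(P, Q, "sqeuclidean")              # L2 transport cost matrix
rows, cols = linear_sum_assignment(cost)       # optimal one-to-one matching
print(f"mean matched distance: {np.sqrt(cost[rows, cols]).mean():.4f}")
```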
3D Reconstruction from UAV-Based Hyperspectral Images
NASA Astrophysics Data System (ADS)
Liu, L.; Xu, L.; Peng, J.
2018-04-01
Reconstructing the 3D profile from a set of UAV-based images can provide hyperspectral information, as well as the 3D coordinates of any point on the profile. Our images are captured with the Cubert UHD185 (UHD) hyperspectral camera, a new type of high-speed onboard imaging spectrometer that captures a hyperspectral image and a panchromatic image simultaneously. The panchromatic image has a higher spatial resolution than the hyperspectral image, while each hyperspectral image provides considerable information on the spatial-spectral distribution of the object. Thus there is an opportunity to derive a high-quality 3D point cloud from the panchromatic images and considerable spectral information from the hyperspectral images. The purpose of this paper is to introduce our processing chain, which derives a database providing both the hyperspectral information and the 3D position of each point. First, we adopt a free and open-source software, VisualSFM, which is based on the structure from motion (SFM) algorithm, to recover the 3D point cloud from the panchromatic images. We then obtain the spectral information of each point from the hyperspectral images using a self-developed MATLAB program. The product can be used to support further research and applications.
González-Recio, O; Haile-Mariam, M; Pryce, J E
2016-01-01
The objectives of this study were (1) to propose changing the selection criterion trait for evaluating fertility in Australia from calving interval to conception rate at d 42 after the beginning of the mating season and (2) to use type traits as early fertility predictors, to increase the reliability of estimated breeding values for fertility. The breeding goal in Australia is conception within 6 wk of the start of the mating season. Currently, the Australian model to predict fertility breeding values (expressed as a linear transformation of calving interval) is a multitrait model that includes calving interval (CVI), lactation length (LL), calving to first service (CFS), first nonreturn rate (FNRR), and conception rate. However, CVI has a lower genetic correlation with the breeding goal (conception within 6 wk of the start of the mating season) than conception rate. Milk yield, type, and fertility data from 164,318 cows sired by 4,766 bulls were used. Principal component analysis and genetic correlation estimates between type and fertility traits were used to select type traits that could subsequently be used in a multitrait analysis. Angularity, foot angle, and pin set were chosen as type traits to include in an index with the traits that are included in the multitrait fertility model: CVI, LL, CFS, FNRR, and conception rate at d 42 (CR42). An index with these 8 traits is expected to achieve an average bull first proof reliability of 0.60 on the breeding objective (conception within 6 wk of the start of the mating season) compared with reliabilities of 0.39 and 0.45 for CR42 only or the current 5-trait Australian model. Subsequently, we used the first eigenvector of a principal component analysis with udder texture, bone quality, angularity, and body condition score to calculate an energy status indicator trait. The inclusion of the energy status indicator trait composite in a multitrait index with CVI, LL, CFS, FNRR, and CR42 achieved a 12-point increase in fertility breeding value reliability (i.e., increased by 30%; up to 0.72 points of reliability), whereas a lower increase in reliability (4 points, i.e., increased by 10%) was obtained by including angularity, foot angle, and pin set in the index. In situations where a limited number of daughters have been phenotyped for CR42, including type data for sires increased reliabilities compared with when type data were omitted. However, sires with more than 80 daughters with CR42 records achieved reliability estimates close to 80% on average, and there did not appear to be a benefit from having daughters with type records. The cost of phenotyping to obtain such reliabilities (assuming a cost of AU$14 per cow with type data and AU$5 per cow with pregnancy diagnosed) is lower if more pregnancy data are collected in preference to type data. That is, efforts to increase the reliability of fertility EBV are most cost effective when directed at obtaining a larger number of pregnancy tests. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model
NASA Astrophysics Data System (ADS)
Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef
2016-10-01
We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many training images and what training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second method relies on a rate of change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.
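The distance-based selection idea can be sketched as clustering snapshots by pixel distance and keeping the snapshot nearest each cluster center as a training image. The use of k-means and the random stand-in snapshots below are assumptions; the paper's own clustering and the demon-algorithm alternative differ in detail.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(7)
snapshots = rng.uniform(0, 1, (120, 32 * 32))   # flattened overhead images
k = 4                                           # training images to select

centroids, labels = kmeans2(snapshots, k, minit="++", seed=7)
train_ids = [int(np.argmin(((snapshots - c)**2).sum(axis=1))) for c in centroids]
print("snapshots kept as training images:", train_ids)
```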
On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)
NASA Astrophysics Data System (ADS)
Huffman, G. J.
2013-12-01
Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an ongoing problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°×2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°×2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.
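A toy Monte Carlo illustrates the scatter-diagram behavior described above: intermittent, skewed "precipitation" with multiplicative retrieval noise correlates poorly with truth at fine scale and typically better after time/space averaging. All distributions and parameters below are synthetic assumptions, not GPCP statistics.

```python
import numpy as np

rng = np.random.default_rng(8)
truth = rng.gamma(0.3, 2.0, 4096) * (rng.uniform(size=4096) < 0.4)  # intermittent
est = truth * rng.lognormal(0.0, 0.8, truth.size)                   # noisy retrieval

for block in (1, 16, 256):                      # progressively coarser averaging
    t_agg = truth.reshape(-1, block).mean(axis=1)
    e_agg = est.reshape(-1, block).mean(axis=1)
    print(f"aggregation x{block:3d}: corr = {np.corrcoef(t_agg, e_agg)[0, 1]:.3f}")
```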
Tóth, Gergely; Bodai, Zsolt; Héberger, Károly
2013-10-01
Coefficient of determination (R²) and its leave-one-out cross-validated analogue (denoted by Q² or R²cv) are the most frequently published values used to characterize the predictive performance of models. In this article we use R² and Q² in a reversed aspect to determine uncommon points, i.e., influential points in any data set. The term (1 - Q²)/(1 - R²) corresponds to the ratio of the predictive residual sum of squares to the residual sum of squares. The ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F test on the (1 - Q²)/(1 - R²) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded upon the routinely calculated Q² and R² values and warns model builders to verify the training set, to perform influence analysis or even to change to robust modeling.
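A minimal numerical sketch of this diagnostic for an ordinary least-squares model: it computes R², the leave-one-out Q² via the hat matrix, and their (1 - Q²)/(1 - R²) ratio. The design matrix, response, and injected outlier are synthetic illustrations, and the sketch omits the article's exact F-test construction.

```python
import numpy as np

def q2_r2_ratio(X, y):
    H = X @ np.linalg.pinv(X)                 # hat matrix of the OLS fit
    resid = y - H @ y                         # ordinary residuals
    press_resid = resid / (1.0 - np.diag(H))  # leave-one-out (PRESS) residuals
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - np.sum(resid ** 2) / ss_tot
    q2 = 1.0 - np.sum(press_resid ** 2) / ss_tot
    return (1.0 - q2) / (1.0 - r2)

# Clean data vs. the same data with one gross outlier injected
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(30), rng.normal(size=30)])
y = 2.0 + 3.0 * X[:, 1] + rng.normal(scale=0.3, size=30)
print("clean   ratio: %.2f" % q2_r2_ratio(X, y))
y[0] += 10.0                                  # hypothetical influential point
print("outlier ratio: %.2f" % q2_r2_ratio(X, y))
```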
Discrete cosine and sine transforms generalized to honeycomb lattice
NASA Astrophysics Data System (ADS)
Hrivnák, Jiří; Motlochová, Lenka
2018-06-01
The discrete cosine and sine transforms are generalized to a triangular fragment of the honeycomb lattice. The honeycomb point sets are constructed by subtracting the root lattice from the weight lattice points of the crystallographic root system A2. The two-variable orbit functions of the Weyl group of A2, discretized simultaneously on the weight and root lattices, induce a novel parametric family of extended Weyl orbit functions. The periodicity and von Neumann and Dirichlet boundary properties of the extended Weyl orbit functions are detailed. Three types of discrete complex Fourier-Weyl transforms and real-valued Hartley-Weyl transforms are described. Unitary transform matrices and interpolating behavior of the discrete transforms are exemplified. Consequences of the developed discrete transforms for transversal eigenvibrations of the mechanical graphene model are discussed.
Glimcher, Paul W.
2011-01-01
The ability of human subjects to choose between disparate kinds of rewards suggests that the neural circuits for valuing different reward types must converge. Economic theory suggests that these convergence points represent the subjective values (SVs) of different reward types on a common scale for comparison. To examine these hypotheses and to map the neural circuits for reward valuation we had food and water-deprived subjects make risky choices for money, food, and water both in and out of a brain scanner. We found that risk preferences across reward types were highly correlated; the level of risk aversion an individual showed when choosing among monetary lotteries predicted their risk aversion toward food and water. We also found that partially distinct neural networks represent the SVs of monetary and food rewards and that these distinct networks showed specific convergence points. The hypothalamic region mainly represented the SV for food, and the posterior cingulate cortex mainly represented the SV for money. In both the ventromedial prefrontal cortex (vmPFC) and striatum there was a common area representing the SV of both reward types, but only the vmPFC significantly represented the SVs of money and food on a common scale appropriate for choice in our data set. A correlation analysis demonstrated interactions across money and food valuation areas and the common areas in the vmPFC and striatum. This may suggest that partially distinct valuation networks for different reward types converge on a unified valuation network, which enables a direct comparison between different reward types and hence guides valuation and choice. PMID:21994386
Are fractal dimensions of the spatial distribution of mineral deposits meaningful?
Raines, G.L.
2008-01-01
It has been proposed that the spatial distribution of mineral deposits is bifractal. An implication of this property is that the number of deposits in a permissive area is a function of the shape of the area. This is because the fractal density functions of deposits are dependent on the distance from known deposits. A long thin permissive area with most of the deposits in one end, such as the Alaskan porphyry permissive area, has a major portion of the area far from known deposits and consequently a low density of deposits associated with most of the permissive area. On the other hand, a more equi-dimensioned permissive area, such as the Arizona porphyry permissive area, has a more uniform density of deposits. Another implication of the fractal distribution is that the Poisson assumption typically used for estimating deposit numbers is invalid. Based on datasets of mineral deposits classified by type as inputs, the distributions of many different deposit types are found to have characteristically two fractal dimensions over separate non-overlapping spatial scales in the range of 5-1000 km. In particular, one typically observes a local dimension at spatial scales less than 30-60 km, and a regional dimension at larger spatial scales. The deposit type, geologic setting, and sample size influence the fractal dimensions. The consequence of the geologic setting can be diminished by using deposits classified by type. The crossover point between the two fractal domains is proportional to the median size of the deposit type. A plot of the crossover points for porphyry copper deposits from different geologic domains against median deposit sizes defines linear relationships and identifies regions that are significantly underexplored. Plots of the fractal dimension can also be used to define density functions from which the number of undiscovered deposits can be estimated. This density function is only dependent on the distribution of deposits and is independent of the definition of the permissive area. Density functions for porphyry copper deposits appear to be significantly different for regions in the Andes, Mexico, United States, and western Canada. Consequently, depending on which regional density function is used, quite different estimates of numbers of undiscovered deposits can be obtained. These fractal properties suggest that geologic studies based on mapping at scales of 1:24,000 to 1:100,000 may not recognize processes that are important in the formation of mineral deposits at scales larger than the crossover points at 30-60 km. © 2008 International Association for Mathematical Geology.
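As an illustration of the two-dimension idea (not the paper's actual estimation procedure), the following sketch box-counts a set of deposit coordinates and fits separate log-log slopes below and above an assumed crossover scale; the point set, scales, and crossover value are all synthetic.

```python
import numpy as np

def box_counts(points, eps):
    """Number of occupied boxes of side eps covering the 2-D point set."""
    return len({tuple(c) for c in np.floor(points / eps).astype(int)})

def bifractal_dimensions(points, scales, crossover):
    """Fit separate box-counting dimensions below/above a crossover scale."""
    logs = np.log([(s, box_counts(points, s)) for s in scales]).T
    fine = scales < crossover
    d_local = -np.polyfit(logs[0, fine], logs[1, fine], 1)[0]
    d_regional = -np.polyfit(logs[0, ~fine], logs[1, ~fine], 1)[0]
    return d_local, d_regional

rng = np.random.default_rng(2)
deposits = rng.random((500, 2)) * 1000.0   # toy "deposits" in a 1000 km square
scales = np.array([5.0, 10, 20, 40, 80, 160, 320])
print(bifractal_dimensions(deposits, scales, crossover=50.0))
```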
Sets that Contain Their Circle Centers
ERIC Educational Resources Information Center
Martin, Greg
2008-01-01
Say that a subset S of the plane is a "circle-center set" if S is not a subset of a line, and whenever we choose three non-collinear points from S, the center of the circle through those three points is also an element of S. A problem appearing on the Macalester College Problem of the Week website stated that a finite set of points in the plane,…
The Building America Indoor Temperature and Humidity Measurement Protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metzger, C.; Norton, Paul
2014-02-01
When modeling homes using simulation tools, the heating and cooling set points can have a significant impact on home energy use. Every four years, the Energy Information Administration (EIA) Residential Energy Consumption Survey (RECS) asks homeowners about their heating and cooling set points. Unfortunately, no temperature data is measured, and most of the time, the homeowner may be guessing at this number. Even one degree Fahrenheit difference in heating set point can make a 5% difference in heating energy use! So, the survey-based RECS data cannot be used as the definitive reference for the set point for the "average occupant" in simulations. The purpose of this document is to develop a protocol for collecting consistent data for heating/cooling set points and relative humidity so that an average set point can be determined for asset energy models in residential buildings. This document covers the decision making process for researchers to determine how many sensors should be placed in each home, where to put those sensors, and what kind of asset data should be taken while they are in the home. The authors attempted to design the protocols to maximize the value of this study and minimize the resources required to achieve that value.
Building America Indoor Temperature and Humidity Measurement Protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engebrecht-Metzger, Cheryn; Norton, Paul
2014-02-01
When modeling homes using simulation tools, the heating and cooling set points can have a significant impact on home energy use. Every 4 years the Energy Information Administration (EIA) Residential Energy Consumption Survey (RECS) asks homeowners about their heating and cooling set points. Unfortunately, no temperature data is measured, and most of the time, the homeowner may be guessing at this number. Even one degree Fahrenheit difference in heating set point can make a 5% difference in heating energy use! So, the survey-based RECS data cannot be used as the definitive reference for the set point for the 'average occupant' in simulations. The purpose of this document is to develop a protocol for collecting consistent data for heating/cooling set points and relative humidity so that an average set point can be determined for asset energy models in residential buildings. This document covers the decision making process for researchers to determine how many sensors should be placed in each home, where to put those sensors, and what kind of asset data should be taken while they are in the home. The authors attempted to design the protocols to maximize the value of this study and minimize the resources required to achieve that value.
Point- and line-based transformation models for high resolution satellite image rectification
NASA Astrophysics Data System (ADS)
Abd Elrahman, Ahmed Mohamed Shaker
Rigorous mathematical models with the aid of satellite ephemeris data can present the relationship between the satellite image space and the object space. With government funded satellites, access to calibration and ephemeris data has allowed the development and use of these models. However, for commercial high-resolution satellites, which have been recently launched, these data are withheld from users, and therefore alternative empirical models should be used. In general, the existing empirical models are based on the use of control points and involve linking points in the image space and the corresponding points in the object space. But the lack of control points in some remote areas and the questionable accuracy of the identified discrete conjugate points provide a catalyst for the development of algorithms based on features other than control points. This research, concerned with image rectification and 3D geo-positioning determination using High-Resolution Satellite Imagery (HRSI), has two major objectives. First, the effects of satellite sensor characteristics, number of ground control points (GCPs), and terrain elevation variations on the performance of several point based empirical models are studied. Second, a new mathematical model, using only linear features as control features, or linear features with a minimum number of GCPs, is developed. To meet the first objective, several experiments for different satellites such as Ikonos, QuickBird, and IRS-1D have been conducted using different point based empirical models. Various data sets covering different terrain types are presented and results from representative sets of the experiments are shown and analyzed. The results demonstrate the effectiveness and the superiority of these models under certain conditions. From the results obtained, several alternatives to circumvent the effects of the satellite sensor characteristics, the number of GCPs, and the terrain elevation variations are introduced. To meet the second objective, a new model named the Line Based Transformation Model (LBTM) is developed for HRSI rectification. The model has the flexibility to either solely use linear features or use linear features and a number of control points to define the image transformation parameters. Unlike point features, which must be explicitly defined, linear features have the advantage that they can be implicitly defined by any segment along the line. (Abstract shortened by UMI.)
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
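The selection step can be illustrated with a hedged greedy search that picks the rows of a sensitivity matrix maximizing the determinant of the information-like matrix SᵀS (the D-optimality criterion); the matrix dimensions and the greedy strategy below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def greedy_d_optimal(S, k):
    """Greedily pick k rows of S maximizing det(S_sel.T @ S_sel)."""
    chosen, remaining = [], list(range(S.shape[0]))
    for _ in range(k):
        best_i, best_det = None, -np.inf
        for i in remaining:
            sel = S[chosen + [i], :]
            d = np.linalg.det(sel.T @ sel)
            if d > best_det:
                best_i, best_det = i, d
        chosen.append(best_i)
        remaining.remove(best_i)
    return sorted(chosen)

# 90 candidate time points, 3 model parameters, keep 10 (illustrative sizes)
rng = np.random.default_rng(3)
S = rng.normal(size=(90, 3))                  # hypothetical sensitivity matrix
print(greedy_d_optimal(S, 10))
```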
Leroy, S; Grenier, J; Rohe, D; Even, C; Pieranski, P
2006-05-01
From experiments with metal crystals, in the vicinity of their crystal/liquid/vapor triple points, it is known that melting of crystals starts on their surfaces and is anisotropic. Recently, we have shown that anisotropic surface melting occurs also in lyotropic systems. In our previous paper (Eur. Phys. J. E 19, 223 (2006)), we focused on the case of poor faceting at the Pn3m/L1 interface in C12EO2/water binary mixtures. There, anisotropic melting occurs in the vicinity of a Pn3m/L3/L1 triple point. In the present paper, we focus on the opposite case of a rich devil's-staircase-type faceting at Ia3d/vapor interfaces in monoolein/water and phytantriol/water mixtures. We show that anisotropic surface melting takes place in these systems in a narrow humidity range close to the Ia3d-L2 transition. As whole (hkl) sets of facets disappear one after another when the transition is approached, surface melting proceeds in a facet-by-facet manner.
Review Article: Increasing physical activity with point-of-choice prompts--a systematic review.
Nocon, Marc; Müller-Riemenschneider, Falk; Nitzschke, Katleen; Willich, Stefan N
2010-08-01
Stair climbing is an activity that can easily be integrated into everyday life and has positive health effects. Point-of-choice prompts are informational or motivational signs near stairs and elevators/escalators aimed at increased stair climbing. The aim of this review was to assess the effectiveness of point-of-choice prompts for the promotion of stair climbing. In a systematic search of the literature, studies that assessed the effectiveness of point-of-choice prompts to increase the rate of stair climbing in the general population were identified. No restrictions were made regarding the setting, the duration of the intervention, or the kind of message. A total of 25 studies were identified. Point-of-choice prompts were predominantly posters or stair-riser banners in public traffic stations, shopping malls or office buildings. The 25 studies reported 42 results. Of 10 results for elevator settings, only three reported a significant increase in stair climbing, whereas 28 of 32 results for escalator settings reported a significant increase in stair climbing. Overall, point-of-choice prompts are able to increase the rate of stair climbing, especially in escalator settings. In elevator settings, point-of-choice prompts seem less effective. The long-term efficacy and the most efficient message format have yet to be determined in methodologically rigorous studies.
Use of satellite imagery for wildland resource evaluation in the Great Basin
NASA Technical Reports Server (NTRS)
Tueller, P. T. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Most major vegetation types of Nevada have been mapped with success. The completed set of mosaic overlays will be more accurate and detailed than previous maps compiled by various State and Federal agencies due to the excellent vantage point that ERTS-1 data affords. This new vegetation type map will greatly aid resource agencies in their daily work. Such information as suitable grazing areas, wildlife habitat, forage production, and approximate wildland production potentials can be inferred from such a map. There has been some success in detecting vegetational changes with the use of ERTS-1 MSS imagery, but exposure differences have somewhat confounded the results. Future plans include work to solve this problem.
Performance of a 14.9-kW laminated-frame dc series motor with chopper controller
NASA Technical Reports Server (NTRS)
Schwab, J. R.
1979-01-01
A traction motor was tested using two types of excitation: ripple-free dc from a motor-generator set for baseline data, and chopped dc as supplied by a battery and chopper controller. For the same average values of input voltage and current, the power output was independent of the type of excitation. At the same speeds, motor efficiency at low power output (corresponding to low duty cycle of the controller) was 5 to 10 percentage points less on chopped dc than on ripple-free dc. This illustrates that for chopped waveforms, it is incorrect to calculate input power as the product of average voltage and average current. Locked-rotor torque, no-load losses, and magnetic saturation data were also determined.
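The point about chopped waveforms can be checked numerically: for a square-chopped voltage, the average of the instantaneous v·i product differs from the product of the averages. All waveform values below are hypothetical.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10_000, endpoint=False)   # 1 s of samples
duty = 0.5                                           # 50% duty cycle (assumed)
on = (t % 1e-3) < duty * 1e-3                        # 1 kHz chopping
v = np.where(on, 120.0, 0.0)    # hypothetical armature voltage, volts
i = np.where(on, 100.0, 20.0)   # current persists through freewheeling, amps

print("mean(v) * mean(i) =", v.mean() * i.mean())   # 3600 W, underestimates
print("mean(v * i)       =", (v * i).mean())        # 6000 W, the true input power
```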
Statistical inferences with jointly type-II censored samples from two Pareto distributions
NASA Astrophysics Data System (ADS)
Abu-Zinadah, Hanaa H.
2017-08-01
In several industrial fields the product comes from more than one production line, which calls for comparative life tests. This requires sampling from the different production lines, from which the joint censoring scheme arises. In this article we consider the Pareto lifetime distribution under a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals, of the model parameters are obtained. Also, Bayesian point estimates and credible intervals of the model parameters are presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.
LiDAR change detection using building models
NASA Astrophysics Data System (ADS)
Kim, Angela M.; Runyon, Scott C.; Jalobeanu, Andre; Esterline, Chelsea H.; Kruse, Fred A.
2014-06-01
Terrestrial LiDAR scans of building models collected with a FARO Focus3D and a RIEGL VZ-400 were used to investigate point-to-point and model-to-model LiDAR change detection. LiDAR data were scaled, decimated, and georegistered to mimic real world airborne collects. Two physical building models were used to explore various aspects of the change detection process. The first model was a 1:250-scale representation of the Naval Postgraduate School campus in Monterey, CA, constructed from Lego blocks and scanned in a laboratory setting using both the FARO and RIEGL. The second model at 1:8-scale consisted of large cardboard boxes placed outdoors and scanned from rooftops of adjacent buildings using the RIEGL. A point-to-point change detection scheme was applied directly to the point-cloud datasets. In the model-to-model change detection scheme, changes were detected by comparing Digital Surface Models (DSMs). The use of physical models allowed analysis of effects of changes in scanner and scanning geometry, and performance of the change detection methods on different types of changes, including building collapse or subsidence, construction, and shifts in location. Results indicate that at low false-alarm rates, the point-to-point method slightly outperforms the model-to-model method. The point-to-point method is less sensitive to misregistration errors in the data. Best results are obtained when the baseline and change datasets are collected using the same LiDAR system and collection geometry.
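A minimal sketch of the point-to-point scheme, assuming both epochs are available as N x 3 coordinate arrays: flag each point of the later collect whose nearest baseline neighbour is farther than a threshold. The clouds and the threshold are synthetic stand-ins for the study's data.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_point_change(baseline, later, threshold):
    """Boolean change mask over the rows of `later` (N x 3 coordinates)."""
    dists, _ = cKDTree(baseline).query(later, k=1)
    return dists > threshold

rng = np.random.default_rng(4)
base = rng.random((5000, 3))
later = np.vstack([base[:4500] + rng.normal(scale=0.005, size=(4500, 3)),
                   rng.random((500, 3)) + [2.0, 0.0, 0.0]])  # a "new" structure
mask = point_to_point_change(base, later, threshold=0.05)
print("points flagged as change:", int(mask.sum()), "of", len(later))
```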
Horizontal visibility graphs generated by type-I intermittency
NASA Astrophysics Data System (ADS)
Núñez, Ángel M.; Luque, Bartolo; Lacasa, Lucas; Gómez, Jose Patricio; Robledo, Alberto
2013-05-01
The type-I intermittency route to (or out of) chaos is investigated within the horizontal visibility (HV) graph theory. For that purpose, we address the trajectories generated by unimodal maps close to an inverse tangent bifurcation and construct their associated HV graphs. We show how the alternation of laminar episodes and chaotic bursts imprints a fingerprint in the resulting graph structure. Accordingly, we derive a phenomenological theory that predicts quantitative values for several network parameters. In particular, we predict that the characteristic power-law scaling of the mean length of laminar trend sizes is fully inherited by the variance of the graph degree distribution, in good agreement with the numerics. We also report numerical evidence on how the characteristic power-law scaling of the Lyapunov exponent as a function of the distance to the tangent bifurcation is inherited in the graph by an analogous scaling of block entropy functionals defined on the graph. Furthermore, we are able to recast the full set of HV graphs generated by intermittent dynamics into a renormalization-group framework, where the fixed points of its graph-theoretical renormalization-group flow account for the different types of dynamics. We also establish that the nontrivial fixed point of this flow coincides with the tangency condition and that the corresponding invariant graph exhibits extremal entropic properties.
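The HV mapping itself is simple to state in code. The sketch below builds the degree sequence of the HV graph for a trajectory of an illustrative intermittent (Pomeau-Manneville-type) map; it is a quadratic-time reference implementation, not the authors' code.

```python
import numpy as np

def hv_degrees(x):
    """Degree sequence of the horizontal visibility graph of series x."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            # i and j see each other iff all intermediate values lie below both
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                deg[i] += 1
                deg[j] += 1
    return deg

# Trajectory of an illustrative intermittent map near a tangent bifurcation
x, eps, traj = 0.5, 1e-4, []
for _ in range(500):
    x = (x + x * x + eps) % 1.0   # Pomeau-Manneville-type normal form
    traj.append(x)
deg = hv_degrees(np.array(traj))
print("mean degree %.3f, degree variance %.3f" % (deg.mean(), deg.var()))
```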
Mishra, S; Xu, J; Agarwal, U; Gonzales, J; Levin, S; Barnard, N D
2013-07-01
To determine the effects of a low-fat plant-based diet program on anthropometric and biochemical measures in a multicenter corporate setting. Employees from 10 sites of a major US company with body mass index ≥ 25 kg/m² and/or previous diagnosis of type 2 diabetes were randomized to either follow a low-fat vegan diet, with weekly group support and work cafeteria options available, or make no diet changes for 18 weeks. Dietary intake, body weight, plasma lipid concentrations, blood pressure and glycated hemoglobin (HbA1C) were determined at baseline and 18 weeks. Mean body weight fell 2.9 kg and 0.06 kg in the intervention and control groups, respectively (P<0.001). Total and low-density lipoprotein (LDL) cholesterol fell 8.0 and 8.1 mg/dl in the intervention group and 0.01 and 0.9 mg/dl in the control group (P<0.01). HbA1C fell 0.6 percentage point and 0.08 percentage point in the intervention and control group, respectively (P<0.01). Among study completers, mean changes in body weight were -4.3 kg and -0.08 kg in the intervention and control groups, respectively (P<0.001). Total and LDL cholesterol fell 13.7 and 13.0 mg/dl in the intervention group and 1.3 and 1.7 mg/dl in the control group (P<0.001). HbA1C levels decreased 0.7 percentage point and 0.1 percentage point in the intervention and control group, respectively (P<0.01). An 18-week dietary intervention using a low-fat plant-based diet in a corporate setting improves body weight, plasma lipids, and, in individuals with diabetes, glycemic control.
Mishra, S; Xu, J; Agarwal, U; Gonzales, J; Levin, S; Barnard, N D
2013-01-01
Background/objectives: To determine the effects of a low-fat plant-based diet program on anthropometric and biochemical measures in a multicenter corporate setting. Subjects/methods: Employees from 10 sites of a major US company with body mass index ⩾25 kg/m2 and/or previous diagnosis of type 2 diabetes were randomized to either follow a low-fat vegan diet, with weekly group support and work cafeteria options available, or make no diet changes for 18 weeks. Dietary intake, body weight, plasma lipid concentrations, blood pressure and glycated hemoglobin (HbA1C) were determined at baseline and 18 weeks. Results: Mean body weight fell 2.9 kg and 0.06 kg in the intervention and control groups, respectively (P<0.001). Total and low-density lipoprotein (LDL) cholesterol fell 8.0 and 8.1 mg/dl in the intervention group and 0.01 and 0.9 mg/dl in the control group (P<0.01). HbA1C fell 0.6 percentage point and 0.08 percentage point in the intervention and control group, respectively (P<0.01). Among study completers, mean changes in body weight were −4.3 kg and −0.08 kg in the intervention and control groups, respectively (P<0.001). Total and LDL cholesterol fell 13.7 and 13.0 mg/dl in the intervention group and 1.3 and 1.7 mg/dl in the control group (P<0.001). HbA1C levels decreased 0.7 percentage point and 0.1 percentage point in the intervention and control group, respectively (P<0.01). Conclusions: An 18-week dietary intervention using a low-fat plant-based diet in a corporate setting improves body weight, plasma lipids, and, in individuals with diabetes, glycemic control. PMID:23695207
Pippel, Kristina; Meinck, M; Lübke, N
2017-06-01
Mobile geriatric rehabilitation can be provided in the setting of nursing homes, short-term care (STC) facilities and exclusively in private homes. This study analyzed the common features and differences of mobile rehabilitation interventions in various settings. Stratified by setting, 1,879 anonymized mobile geriatric rehabilitation treatments between 2011 and 2014 from 11 participating institutions were analyzed with respect to patient, process and outcome-related features. Significant differences between the settings nursing home (n = 514, 27 %), STC (n = 167, 9 %) and private homes (n = 1198, 64 %) were evident for mean age (83 years, 83 years and 80 years, respectively), percentage of women (72 %, 64 % and 55 %), degree of dependency on pre-existing care (92 %, 76 % and 64 %), total treatment sessions (TS; 38 TS, 42 TS and 41 TS), treatment duration (54 days, 61 days and 58 days) as well as the Barthel index at the start of rehabilitation (34 points, 39 points and 46 points) and the gain in the Barthel index (15 points, 21 points and 18 points); the gain in the capacity for self-sufficiency was significant in all settings. The setting-specific evaluation of mobile geriatric rehabilitation showed differences for relevant patient, process and outcome-related features. Compared to inpatient rehabilitation, mobile rehabilitation in all settings made an above-average contribution to the rehabilitation of patients with pre-existing dependency on care. The gains in the capacity for self-sufficiency achieved in all settings support the efficacy of mobile geriatric rehabilitation under the current prerequisites for applicability.
Sequential structural damage diagnosis algorithm using a change point detection method
NASA Astrophysics Data System (ADS)
Noh, H.; Rajagopal, R.; Kiremidjian, A. S.
2013-11-01
This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method. The general change point detection method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori, unless we are looking for a known specific type of damage. Therefore, we introduce an additional algorithm that estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using a set of experimental data collected from a four-story steel special moment-resisting frame and multiple sets of simulated data. Various features of different dimensions have been explored, and the algorithm was able to identify damage, particularly when it uses multidimensional damage sensitive features and lower false alarm rates, with a known post-damage feature distribution. For unknown feature distribution cases, the post-damage distribution was consistently estimated and the detection delays were only a few time steps longer than the delays from the general method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
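The sequential test with known pre- and post-damage distributions can be sketched as a one-sided CUSUM over log-likelihood ratios; the Gaussian feature distributions, threshold, and change time below are illustrative assumptions rather than the paper's experimental values.

```python
import numpy as np
from scipy.stats import norm

def cusum_detect(stream, pre, post, threshold):
    """pre/post are (mean, std) pairs; returns detection index or None."""
    s = 0.0
    for t, x in enumerate(stream):
        # accumulate log-likelihood ratios, resetting at zero (one-sided CUSUM)
        s = max(0.0, s + norm.logpdf(x, *post) - norm.logpdf(x, *pre))
        if s > threshold:
            return t
    return None

rng = np.random.default_rng(5)
feature = np.concatenate([rng.normal(0.0, 1.0, 200),   # undamaged regime
                          rng.normal(0.8, 1.0, 200)])  # damage begins at t=200
print("damage declared at t =", cusum_detect(feature, (0.0, 1.0), (0.8, 1.0), 8.0))
```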
Robustly Aligning a Shape Model and Its Application to Car Alignment of Unknown Pose.
Li, Yan; Gu, Leon; Kanade, Takeo
2011-09-01
Precisely localizing in an image a set of feature points that form a shape of an object, such as car or face, is called alignment. Previous shape alignment methods attempted to fit a whole shape model to the observed data, based on the assumption of Gaussian observation noise and the associated regularization process. However, such an approach, though able to deal with Gaussian noise in feature detection, turns out not to be robust or precise because it is vulnerable to gross feature detection errors or outliers resulting from partial occlusions or spurious features from the background or neighboring objects. We address this problem by adopting a randomized hypothesis-and-test approach. First, a Bayesian inference algorithm is developed to generate a shape-and-pose hypothesis of the object from a partial shape or a subset of feature points. For alignment, a large number of hypotheses are generated by randomly sampling subsets of feature points, and then evaluated to find the one that minimizes the shape prediction error. This method of randomized subset-based matching can effectively handle outliers and recover the correct object shape. We apply this approach on a challenging data set of over 5,000 different-posed car images, spanning a wide variety of car types, lighting, background scenes, and partial occlusions. Experimental results demonstrate favorable improvements over previous methods on both accuracy and robustness.
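A toy version of the hypothesis-and-test loop for 2-D point sets: each random pair of correspondences determines a similarity transform (a hypothesis), which is scored by its number of inliers. This RANSAC-style sketch deliberately omits the paper's Bayesian shape-and-pose inference and uses synthetic data.

```python
import numpy as np

def align(model, detected, trials=500, tol=0.05, rng=None):
    """RANSAC-style alignment: best similarity transform z -> a*z + b."""
    rng = rng if rng is not None else np.random.default_rng()
    zm = model[:, 0] + 1j * model[:, 1]      # points as complex numbers
    zd = detected[:, 0] + 1j * detected[:, 1]
    best, best_inliers = None, -1
    for _ in range(trials):
        i, j = rng.choice(len(zm), size=2, replace=False)
        a = (zd[i] - zd[j]) / (zm[i] - zm[j])   # hypothesis from two matches
        b = zd[i] - a * zm[i]
        inliers = int(np.sum(np.abs(a * zm + b - zd) < tol))
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best, best_inliers

rng = np.random.default_rng(6)
model = rng.random((20, 2))
z = 1.3 * np.exp(0.4j) * (model[:, 0] + 1j * model[:, 1]) + (0.2 + 0.1j)
detected = np.column_stack([z.real, z.imag]) + rng.normal(scale=0.01, size=(20, 2))
detected[:4] += 0.5                              # four gross outliers
_, n_inliers = align(model, detected, rng=rng)
print("inliers:", n_inliers, "of", len(model))
```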
Vegetation community change points suggest that critical loads of nutrient nitrogen may be too high
NASA Astrophysics Data System (ADS)
Wilkins, Kayla; Aherne, Julian; Bleasdale, Andy
2016-12-01
It is widely accepted that elevated nitrogen deposition can have detrimental effects on semi-natural ecosystems, including changes to plant diversity. Empirical critical loads of nutrient nitrogen have been recommended to protect many sensitive European habitats from significant harmful effects. In this study, we used Threshold Indicator Taxa Analysis (TITAN) to investigate shifts in vegetation communities along an atmospheric nitrogen deposition gradient for twenty-two semi-natural habitat types (as described under Annex I of the European Union Habitats Directive) in Ireland. Significant changes in vegetation community, i.e., change points, were determined for twelve habitats, with seven habitats showing a decrease in the number of positive indicator species. Community-level change points indicated a decrease in species abundance along a nitrogen deposition gradient ranging from 3.9 to 15.3 kg N ha⁻¹ yr⁻¹, which were significantly lower than recommended critical loads (Wilcoxon signed-rank test; V = 6, p < 0.05). These results suggest that lower critical loads of empirical nutrient nitrogen deposition may be required to protect many European habitats. Changes to vegetation communities may mean a loss of sensitive indicator species and potentially rare species in these habitats, highlighting how emission reduction policies set under the National Emissions Ceilings Directive may be directly linked to meeting the goal set out under the European Union's Biodiversity Strategy of "halting the loss of biodiversity" across Europe by 2020.
Florindo, Alex Antonio; Guimarães, Vanessa Valente; Cesar, Chester Luiz Galvão; Barros, Marilisa Berti de Azevedo; Alves, Maria Cecília Goi Porto; Goldbaum, Moisés
2009-09-01
To estimate the prevalence of and identify factors associated with physical activity in leisure, transportation, occupational, and household settings. This was a cross-sectional study aimed at investigating living and health conditions among the population of São Paulo, Brazil. Data on 1318 adults aged 18 to 65 years were used. To assess physical activity, the long version of the International Physical Activity Questionnaire was applied. Multivariate analysis was conducted using a hierarchical model. The greatest prevalence of insufficient activity was found in the transportation setting (91.7%), followed by the leisure (77.5%), occupational (68.9%), and household settings (56.7%). The variables associated with insufficient levels of physical activity in leisure were female sex, older age, low education level, nonwhite skin color, smoking, and self-reported poor health; in occupational settings, female sex, white skin color, high education level, self-reported poor health, nonsmoking, and obesity; in transportation settings, female sex; and in household settings, male sex, separated or widowed status, and high education level. Physical activity in transportation and leisure settings should be encouraged. This study will serve as a reference point in monitoring different types of physical activities and implementing public physical activity policies in developing countries.
Meshless Geometric Subdivision
2004-10-01
Only fragments of this abstract survive: a figure caption noting that the Michelangelo Youthful data set is shown on the right; an Eikonal-type approximation in which the geodesic distance d_M(p, ·), for p ∈ M and with boundary condition d_M(q, q) = 0, is approximated by |∇d(p, ·)| = F̃(p); a statement that the meshless subdivision operator is applied to a base point set of 10,088 points generated from the Michelangelo data when dealing with more complex geometry; an acknowledgment that permission to use the Michelangelo point sets was granted by the Stanford Computer Graphics group; and a truncated reference to the Isis model at 50% decimation.
Validating a Monotonically-Integrated Large Eddy Simulation Code for Subsonic Jet Acoustics
NASA Technical Reports Server (NTRS)
Ingraham, Daniel; Bridges, James
2017-01-01
The results of subsonic jet validation cases for the Naval Research Lab's Jet Engine Noise REduction (JENRE) code are reported. Two set points from the Tanna matrix, set point 3 (Ma = 0.5, unheated) and set point 7 (Ma = 0.9, unheated), are attempted on three different meshes. After a brief discussion of the JENRE code and the meshes constructed for this work, the turbulent statistics for the axial velocity are presented and compared to experimental data, with favorable results. Preliminary simulations for set point 23 (Ma = 0.5, Tj/T∞ = 1.764) on one of the meshes are also described. Finally, the proposed configuration for the far-field noise prediction with JENRE's Ffowcs Williams-Hawkings solver is detailed.
Determination system for solar cell layout in traffic light network using dominating set
NASA Astrophysics Data System (ADS)
Eka Yulia Retnani, Windi; Fambudi, Brelyanes Z.; Slamin
2018-04-01
Graph Theory is one of the fields in Mathematics that solves discrete problems, and its applications are used to solve a variety of problems in daily life. One topic in Graph Theory used for such problems is the dominating set. The concept of a dominating set is used, for example, to locate a collection of objects systematically. In this study, dominating sets are used to determine the placement of solar panels, where each vertex represents a traffic light point and each edge represents the connection between traffic light points. The dominating points for the solar panels are found using a greedy algorithm, which determines the locations of the solar panels. This research produced an application that determines the locations of solar panels with optimal results, that is, a minimum set of dominating points.
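A minimal sketch of the greedy dominating-set heuristic, assuming the traffic-light network is given as an adjacency mapping; at each step the vertex covering the most still-undominated vertices becomes a solar panel location. The toy network is hypothetical.

```python
def greedy_dominating_set(adj):
    """adj: dict mapping vertex -> set of neighbouring vertices."""
    undominated = set(adj)
    panels = []
    while undominated:
        # pick the vertex whose closed neighbourhood covers the most
        # still-undominated vertices
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        panels.append(v)
        undominated -= {v} | adj[v]
    return panels

# Toy network of six traffic-light points
network = {1: {2, 3}, 2: {1, 4}, 3: {1, 4}, 4: {2, 3, 5}, 5: {4, 6}, 6: {5}}
print(greedy_dominating_set(network))   # a small dominating set, e.g. [4, 1, 5]
```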
Landmark-based elastic registration using approximating thin-plate splines.
Rohr, K; Stiehl, H S; Sprengel, R; Buzug, T M; Weese, J; Kuhn, M H
2001-06-01
We consider elastic image registration based on a set of corresponding anatomical point landmarks and approximating thin-plate splines. This approach is an extension of the original interpolating thin-plate spline approach and allows landmark localization errors to be taken into account. The extension is important for clinical applications since landmark extraction is always prone to error. Our approach is based on a minimizing functional and can cope with isotropic as well as anisotropic landmark errors. In particular, in the latter case it is possible to include different types of landmarks, e.g., unique point landmarks as well as arbitrary edge points. Also, the scheme is general with respect to the image dimension and the order of smoothness of the underlying functional. Optimal affine transformations as well as interpolating thin-plate splines are special cases of this scheme. To localize landmarks we use a semi-automatic approach which is based on three-dimensional (3-D) differential operators. Experimental results are presented for two-dimensional as well as 3-D tomographic images of the human brain.
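For intuition, an approximating (smoothed) thin-plate spline warp can be sketched with SciPy's RBF machinery as a stand-in for the paper's formulation; the scalar smoothing term plays the role of an isotropic landmark error, and the anisotropic case is not captured. All landmark values are synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(7)
src = rng.random((12, 2)) * 100.0                # landmarks in image 1
dst = src + [5.0, -3.0] + rng.normal(scale=1.0, size=src.shape)  # noisy matches

# smoothing=0 would interpolate the landmarks exactly; a positive value
# approximates them, so localization errors no longer force local distortions
warp = RBFInterpolator(src, dst, kernel="thin_plate_spline", smoothing=10.0)

grid = np.stack(np.meshgrid(np.linspace(0, 100, 5),
                            np.linspace(0, 100, 5)), axis=-1).reshape(-1, 2)
print(warp(grid)[:3])                            # warped grid positions
```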
elevatr: Access Elevation Data from Various APIs
Several web services are available that provide access to elevation data. This package provides access to several of those services and returns elevation data either as a SpatialPointsDataFrame from point elevation services or as a raster object from raster elevation services. Currently, the package supports access to the Mapzen Elevation Service, Mapzen Terrain Service, and the USGS Elevation Point Query Service. The R language for statistical computing is increasingly used for spatial data analysis. This R package, elevatr, is a response to this and provides access to elevation data from various sources directly in R. The impact of `elevatr` is that it will 1) facilitate spatial analysis in R by providing access to a foundational dataset for many types of analyses (e.g. hydrology, limnology), 2) open up a new set of users and uses for APIs widely used outside of R, and 3) provide an excellent example of federal open source development as promoted by the Federal Source Code Policy (https://sourcecode.cio.gov/).
Using Lin's method to solve Bykov's problems
NASA Astrophysics Data System (ADS)
Knobloch, Jürgen; Lamb, Jeroen S. W.; Webster, Kevin N.
2014-10-01
We consider nonwandering dynamics near heteroclinic cycles between two hyperbolic equilibria. The constituting heteroclinic connections are assumed to be such that one of them is transverse and isolated. Such heteroclinic cycles are associated with the termination of a branch of homoclinic solutions, and called T-points in this context. We study codimension-two T-points and their unfoldings in Rⁿ. In our consideration we distinguish between cases with real and complex leading eigenvalues of the equilibria. In doing so we establish Lin's method as a unified approach to (re)gain and extend results of Bykov's seminal studies and related works. To a large extent our approach reduces the study to the discussion of intersections of lines and spirals in the plane.

Case (RR): Under open conditions on the eigenvalues, there exist open sets in parameter space for which there exist periodic orbits close to the heteroclinic cycle. In addition, there exist two one-parameter families of homoclinic orbits to each of the saddle points p1 and p2. See Theorem 2.1 and Proposition 2.2 for precise statements and Fig. 2 for bifurcation diagrams.

Cases (RC) and (CC): At the bifurcation point μ = 0 and for each N ≥ 2, there exists an invariant set S^N_0 close to the heteroclinic cycle on which the first return map is topologically conjugate to a full shift on N symbols. For any fixed N ≥ 2, the invariant set S^N_μ persists for |μ| sufficiently small. In addition, there exist infinitely many transversal and non-transversal heteroclinic orbits connecting the saddle points p1 and p2 in a neighbourhood of μ = 0, as well as infinitely many one-parameter families of homoclinic orbits to each of the saddle points. For full statements of the results see Theorem 2.3 and Propositions 2.4, 2.5 and Fig. 3 for bifurcation diagrams.

The dynamics near T-points has been studied previously by Bykov [6-10], Glendinning and Sparrow [20], Kokubu [27,28] and Labouriau and Rodrigues [30,31,38]. See also the surveys by Homburg and Sandstede [24], Shilnikov et al. [43] and Fiedler [18]. The occurrence of T-points in local bifurcations has been discussed by Barrientos et al. [4], and by Lamb et al. [32] in the context of reversible systems. All these studies consider dynamics in R³ using a geometric return map approach, and their results reflect the description of types of nonwandering dynamics described above. Further related studies concerning T-points can be found in [34] and [37], where inclination flips were considered in this context. In [5], numerical studies of T-points are performed using kneading invariants. The main aim of this paper is to present a comprehensive study of dynamics near T-points, including detailed proofs of all results, employing a unified functional-analytic approach, without making any assumption on the dimension of the phase space. In the process, we recover and generalise to higher dimensional settings all previously reported results for T-points in R³. In addition, we reveal the existence of richer dynamics in the (RC) and (CC) cases. A detailed discussion of our results is contained in Section 2. The functional analytic approach we follow is commonly referred to as Lin's method, after the seminal paper by Lin [33], and employs a reduction on an appropriate Banach space of piecewise continuous functions approximating the initial heteroclinic cycle to yield bifurcation equations whose solutions represent orbits of the nonwandering set.
The development of such an approach is typical for the school of Hale, and is in contrast to the analysis contained in previous T-point studies, which relies on the construction of a first return map. Our choice of analytical framework is motivated by the fact that Lin's method provides a unified approach to study global bifurcations in arbitrary dimension, and has been shown to extend to a larger class of settings, such as delay and advance-delay equations [19,33].
Wang, Lu; Xu, Lisheng; Zhao, Dazhe; Yao, Yang; Song, Dan
2015-04-01
Because arterial pulse waves contain vital information related to the condition of the cardiovascular system, considerable attention has been devoted to the study of pulse waves in recent years. Accurate acquisition is essential to investigate arterial pulse waves. However, at the stage of developing equipment for acquiring and analyzing arterial pulse waves, specific pulse signals may be unavailable for debugging and evaluating the system under development. To produce test signals that reflect specific physiological conditions, in this paper, an arterial pulse wave generator has been designed and implemented using a field programmable gate array (FPGA), which can produce the desired pulse waves according to the feature points set by users. To reconstruct a periodic pulse wave from the given feature points, a method known as piecewise Gaussian-cosine fitting is also proposed in this paper. Using a test database that contains four types of typical pulse waves with each type containing 25 pulse wave signals, the maximum residual error of each sampling point of the fitted pulse wave in comparison with the real pulse wave is within 8%. In addition, the function for adding baseline drift and three types of noises is integrated into the developed system because the baseline occasionally wanders, and noise needs to be added for testing the performance of the designed circuits and the analysis algorithms. The proposed arterial pulse wave generator can be considered as a special signal generator with a simple structure, low cost and compact size, which can also provide flexible solutions for many other related research purposes.
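A much-simplified sketch of generating a periodic pulse wave from user-set feature points, using a plain sum of wrapped Gaussians; the paper's piecewise Gaussian-cosine scheme is more elaborate, and the feature triples below are hypothetical.

```python
import numpy as np

def synth_pulse(t, features, period=1.0):
    """Sum wrapped Gaussian bumps so the waveform repeats every `period`."""
    y = np.zeros_like(t)
    for center, amp, width in features:
        # wrap the phase distance so each bump recurs once per period
        d = (t - center + period / 2.0) % period - period / 2.0
        y += amp * np.exp(-0.5 * (d / width) ** 2)
    return y

# Hypothetical (time, amplitude, width) triples for the percussion, tidal,
# and dicrotic waves of a radial pulse
features = [(0.15, 1.00, 0.05), (0.35, 0.45, 0.08), (0.55, 0.25, 0.06)]
t = np.linspace(0.0, 2.0, 1000)                   # two cardiac cycles
wave = synth_pulse(t, features)
wave += 0.05 * np.sin(2 * np.pi * 0.3 * t)        # optional baseline drift
print(wave[:5])
```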
Ecological transcriptomics of lake-type and riverine sockeye salmon (Oncorhynchus nerka)
2011-01-01
Background There are a growing number of genomes sequenced with tentative functions assigned to a large proportion of the individual genes. Model organisms in laboratory settings form the basis for the assignment of gene function, and the ecological context of gene function is lacking. This work addresses this shortcoming by investigating expressed genes of sockeye salmon (Oncorhynchus nerka) muscle tissue. We compared morphology and gene expression in natural juvenile sockeye populations related to river and lake habitats. Based on previously documented divergent morphology, feeding strategy, and predation in association with these distinct environments, we expect that burst swimming is favored in riverine population and continuous swimming is favored in lake-type population. In turn we predict that morphology and expressed genes promote burst swimming in riverine sockeye and continuous swimming in lake-type sockeye. Results We found the riverine sockeye population had deep, robust bodies and lake-type had shallow, streamlined bodies. Gene expression patterns were measured using a 16K microarray, discovering 141 genes with significant differential expression. Overall, the identity and function of these genes was consistent with our hypothesis. In addition, Gene Ontology (GO) enrichment analyses with a larger set of differentially expressed genes found the "biosynthesis" category enriched for the riverine population and the "metabolism" category enriched for the lake-type population. Conclusions This study provides a framework for understanding sockeye life history from a transcriptomic perspective and a starting point for more extensive, targeted studies determining the ecological context of genes. PMID:22136247
Ecological transcriptomics of lake-type and riverine sockeye salmon (Oncorhynchus nerka).
Pavey, Scott A; Sutherland, Ben J G; Leong, Jong; Robb, Adrienne; von Schalburg, Kris; Hamon, Troy R; Koop, Ben F; Nielsen, Jennifer L
2011-12-02
There are a growing number of genomes sequenced with tentative functions assigned to a large proportion of the individual genes. Model organisms in laboratory settings form the basis for the assignment of gene function, and the ecological context of gene function is lacking. This work addresses this shortcoming by investigating expressed genes of sockeye salmon (Oncorhynchus nerka) muscle tissue. We compared morphology and gene expression in natural juvenile sockeye populations related to river and lake habitats. Based on previously documented divergent morphology, feeding strategy, and predation in association with these distinct environments, we expect that burst swimming is favored in riverine population and continuous swimming is favored in lake-type population. In turn we predict that morphology and expressed genes promote burst swimming in riverine sockeye and continuous swimming in lake-type sockeye. We found the riverine sockeye population had deep, robust bodies and lake-type had shallow, streamlined bodies. Gene expression patterns were measured using a 16 k microarray, discovering 141 genes with significant differential expression. Overall, the identity and function of these genes was consistent with our hypothesis. In addition, Gene Ontology (GO) enrichment analyses with a larger set of differentially expressed genes found the "biosynthesis" category enriched for the riverine population and the "metabolism" category enriched for the lake-type population. This study provides a framework for understanding sockeye life history from a transcriptomic perspective and a starting point for more extensive, targeted studies determining the ecological context of genes.
Representation and display of vector field topology in fluid flow data sets
NASA Technical Reports Server (NTRS)
Helman, James; Hesselink, Lambertus
1989-01-01
The visualization of physical processes in general and of vector fields in particular is discussed. An approach to visualizing flow topology that is based on the physics and mathematics underlying the physical phenomenon is presented. It involves determining critical points in the flow where the velocity vector vanishes. The critical points, connected by principal lines or planes, determine the topology of the flow. The complexity of the data is reduced without sacrificing the quantitative nature of the data set. By reducing the original vector field to a set of critical points and their connections, a representation of the topology of a two-dimensional vector field that is much smaller than the original data set but retains with full precision the information pertinent to the flow topology is obtained. This representation can be displayed as a set of points and tangent curves or as a graph. Analysis (including algorithms), display, interaction, and implementation aspects are discussed.
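The critical-point step can be sketched for a gridded 2-D field: flag cells where both velocity components change sign, then classify each candidate by the eigenvalues of a finite-difference Jacobian. The field below is a synthetic saddle; the classification labels are the standard ones, not the authors' exact taxonomy.

```python
import numpy as np

def critical_cells(u, v):
    """Grid cells (i, j) in which both velocity components change sign."""
    def sign_change(f):
        corners = np.stack([f[:-1, :-1], f[1:, :-1], f[:-1, 1:], f[1:, 1:]])
        return (corners.max(axis=0) > 0) & (corners.min(axis=0) < 0)
    return np.argwhere(sign_change(u) & sign_change(v))

def classify(u, v, i, j, h):
    """Classify via eigenvalues of a central-difference Jacobian."""
    J = np.array([[u[i+1, j] - u[i-1, j], u[i, j+1] - u[i, j-1]],
                  [v[i+1, j] - v[i-1, j], v[i, j+1] - v[i, j-1]]]) / (2.0 * h)
    ev = np.linalg.eigvals(J)
    if ev.real.min() < 0 < ev.real.max():
        return "saddle"
    return "focus" if np.iscomplex(ev).any() else "node"

# Synthetic saddle flow u = x, v = -y on a 20 x 20 grid
x, y = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20), indexing="ij")
cells = critical_cells(x, -y)
i, j = cells[0]
print(cells, classify(x, -y, i, j, h=2.0 / 19.0))   # -> [[9 9]] saddle
```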
Vector fields and nilpotent Lie algebras
NASA Technical Reports Server (NTRS)
Grayson, Matthew; Grossman, Robert
1987-01-01
An infinite-dimensional family of flows E is described with the property that the associated dynamical system ẋ(t) = E(x(t)), where x(0) ∈ R^N, is explicitly integrable in closed form. These flows E are of the form E = E1 + E2, where E1 and E2 are the generators of a nilpotent Lie algebra, which is either free, or satisfies some relations at a point. These flows can then be used to approximate the flows of more general types of dynamical systems.
Modeling concepts for communication of geometric shape data
NASA Technical Reports Server (NTRS)
Collins, M. F.; Emnett, R. F.; Magedson, R. L.; Shu, H. H.
1984-01-01
ANSI5, an abbreviation for Section 5 of the American National Standard under Engineering Drawing and Related Documentation Practices (Committee Y14) on Digital Representation for Communication of Product Definition Data (ANSI Y14.26M-1981), allows encoding of a broad range of geometric shapes to be communicated through digital channels. A brief review of its underlying concepts is presented. The intent of ANSI5 is to devise a unified set of concise language formats for transmission of data pertaining to five types of geometric entities in Euclidean 3-space (E³). These are regarded as point-like, curve-like, surface-like, solid-like, and a combination of these types. For the first four types, ANSI5 makes a distinction between the geometry and topology. Geometry is a description of the spatial occupancy of the entity, and topology describes the interconnectedness of the entity's boundary components.
The Pendulum Weaves All Knots and Links
NASA Astrophysics Data System (ADS)
Starrett, John
2003-08-01
From a topological point of view, periodic orbits of three dimensional dynamical systems are knots, that is, circles (S¹) embedded in the three sphere (S³) or in R³. The ensemble of periodic orbits comprising the skeleton of a 3-D strange attractor form a link: a collection of (not necessarily linked) knots. Joan Birman and Robert Williams used a topological device known as the template, a branched two-manifold that results when the stable direction is collapsed out of an attractor, to analyze the knot and link types appearing in the geometric Lorenz attractor. More recently, Robert Ghrist has shown the existence of universal templates: templates that support all knot and link types. I show that the template constructed from the geometric attractor of a forced physical pendulum contains a universal template as a subtemplate, and therefore the orbit set of the pendulum contains every knot and link type.
Kanna, T; Sakkaravarthi, K; Tamilselvan, K
2013-12-01
We consider the multicomponent Yajima-Oikawa (YO) system and show that the two-component YO system can be derived in a physical setting of a three-coupled nonlinear Schrödinger (3-CNLS) type system by the asymptotic reduction method. The derivation is further generalized to the multicomponent case. This set of equations describes the dynamics of nonlinear resonant interaction between a one-dimensional long wave and multiple short waves. The Painlevé analysis of the general multicomponent YO system shows that the underlying set of evolution equations is integrable for arbitrary nonlinearity coefficients, which results in three different sets of equations corresponding to positive, negative, and mixed nonlinearity coefficients. We obtain the general bright N-soliton solution of the multicomponent YO system in the Gram determinant form by using Hirota's bilinearization method and explicitly analyze the one- and two-soliton solutions of the multicomponent YO system for the above mentioned three choices of nonlinearity coefficients. We also point out that the 3-CNLS system admits special asymptotic solitons of bright, dark, anti-dark, and gray types when the long-wave-short-wave resonance takes place. The short-wave component solitons undergo two types of energy-sharing collisions. Specifically, in the two-component YO system, we demonstrate that two types of energy-sharing collisions result for two different choices of nonlinearity coefficients: (i) energy switching with opposite nature for a particular soliton in the two components, and (ii) a similar kind of energy switching for a given soliton in both components. The solitons appearing in the long-wave component always exhibit elastic collisions whereas those of the short-wave components exhibit standard elastic collisions only for a specific choice of parameters. We have also investigated the collision dynamics of asymptotic solitons in the original 3-CNLS system. For completeness, we explore the three-soliton interaction and demonstrate the pairwise nature of collisions and unravel the fascinating state restoration property.
Ocular stability and set-point adaptation
Jareonsettasin, P.; Leigh, R. J.
2017-01-01
A fundamental challenge to the brain is how to prevent intrusive movements when quiet is needed. Unwanted limb movements such as tremor impair fine motor control and unwanted eye drifts such as nystagmus impair vision. A stable platform is also necessary to launch accurate movements. Accordingly, nature has designed control systems with agonist (excitation) and antagonist (inhibition) muscle pairs functioning in push–pull, around a steady level of balanced tonic activity, the set-point. Sensory information can be organized similarly, as in the vestibulo-ocular reflex, which generates eye movements that compensate for head movements. The semicircular canals, working in coplanar pairs, one in each labyrinth, are reciprocally excited and inhibited as they transduce head rotations. The relative change in activity is relayed to the vestibular nuclei, which operate around a set-point of stable balanced activity. When a pathological imbalance occurs, producing unwanted nystagmus without head movement, an adaptive mechanism restores the proper set-point and eliminates the nystagmus. Here we used 90 min of continuous 7 T magnetic field labyrinthine stimulation (MVS) in normal humans to produce sustained nystagmus simulating vestibular imbalance. We identified multiple time-scale processes towards a new zero set-point showing that MVS is an excellent paradigm to investigate the neurobiology of set-point adaptation. This article is part of the themed issue ‘Movement suppression: brain mechanisms for stopping and stillness’. PMID:28242733
Genomics, "Discovery Science," Systems Biology, and Causal Explanation: What Really Works?
Davidson, Eric H
2015-01-01
Diverse and non-coherent sets of epistemological principles currently inform research in the general area of functional genomics. Here, from the personal point of view of a scientist with over half a century of immersion in hypothesis driven scientific discovery, I compare and deconstruct the ideological bases of prominent recent alternatives, such as "discovery science," some productions of the ENCODE project, and aspects of large data set systems biology. The outputs of these types of scientific enterprise qualitatively reflect their radical definitions of scientific knowledge, and of its logical requirements. Their properties emerge in high relief when contrasted (as an example) to a recent, system-wide, predictive analysis of a developmental regulatory apparatus that was instead based directly on hypothesis-driven experimental tests of mechanism.
Tool for Rapid Analysis of Monte Carlo Simulations
NASA Technical Reports Server (NTRS)
Restrepo, Carolina; McCall, Kurt E.; Hurtado, John E.
2013-01-01
Designing a spacecraft, or any other complex engineering system, requires extensive simulation and analysis work. Oftentimes, the large amounts of simulation data generated are very difficult and time-consuming to analyze, with the added risk of overlooking potentially critical problems in the design. The authors have developed a generic data analysis tool that can quickly sort through large data sets and point an analyst to the areas in the data set that cause specific types of failures. The first version of this tool was a serial code and the current version is a parallel code, which has greatly increased the analysis capabilities. This paper describes the new implementation of this analysis tool on a graphical processing unit, and presents analysis results for NASA's Orion Monte Carlo data to demonstrate its capabilities.
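The core idea (scan many Monte Carlo runs, then rank which dispersed inputs best separate failing from passing cases) can be sketched as follows; column layout and the separation score are our assumptions, and the actual tool is a parallel GPU code.

    import numpy as np

    def rank_failure_drivers(inputs, failed):
        """inputs: (n_runs, n_params) dispersed parameters; failed: bool (n_runs,).
        Rank parameters by the normalized separation of pass/fail means."""
        scores = {}
        for j in range(inputs.shape[1]):
            x = inputs[:, j]
            sep = abs(x[failed].mean() - x[~failed].mean()) / (x.std() + 1e-12)
            scores[j] = sep
        # highest-scoring parameters point the analyst at likely failure drivers
        return sorted(scores.items(), key=lambda kv: -kv[1])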
Spectral Characteristics of the Unitary Critical Almost-Mathieu Operator
NASA Astrophysics Data System (ADS)
Fillman, Jake; Ong, Darren C.; Zhang, Zhenghe
2017-04-01
We discuss spectral characteristics of a one-dimensional quantum walk whose coins are distributed quasi-periodically. The unitary update rule of this quantum walk shares many spectral characteristics with the critical Almost-Mathieu Operator; however, it possesses a feature not present in the Almost-Mathieu Operator, namely singularity of the associated cocycles (this feature is, however, present in the so-called Extended Harper's Model). We show that this operator has empty absolutely continuous spectrum and that the Lyapunov exponent vanishes on the spectrum; hence, this model exhibits Cantor spectrum of zero Lebesgue measure for all irrational frequencies and arbitrary phase, which in physics is known as Hofstadter's butterfly. In fact, we will show something stronger, namely, that all spectral parameters in the spectrum are of critical type, in the language of Avila's global theory of analytic quasiperiodic cocycles. We further prove that it has empty point spectrum for each irrational frequency and away from a frequency-dependent set of phases having Lebesgue measure zero. The key ingredients in our proofs are an adaptation of Avila's Global Theory to the present setting, self-duality via the Fourier transform, and a Johnson-type theorem for singular dynamically defined CMV matrices which characterizes their spectra as the set of spectral parameters at which the associated cocycles fail to admit a dominated splitting.
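For intuition, the Lyapunov exponent of a quasi-periodic cocycle can be estimated numerically from transfer-matrix products. The sketch below uses the classical (self-adjoint) almost-Mathieu Schrödinger cocycle rather than the unitary CMV cocycles studied in the paper; at the critical coupling the exponent vanishes on the spectrum.

    import numpy as np

    def lyapunov_amo(E, lam, alpha, theta=0.3, n=200000):
        """Estimate the Lyapunov exponent of the almost-Mathieu cocycle
        A(x) = [[E - 2*lam*cos(2*pi*x), -1], [1, 0]] over x -> x + alpha."""
        x, v, acc = theta, np.array([1.0, 0.0]), 0.0
        for _ in range(n):
            A = np.array([[E - 2 * lam * np.cos(2 * np.pi * x), -1.0],
                          [1.0, 0.0]])
            v = A @ v
            nv = np.linalg.norm(v)
            acc += np.log(nv)   # accumulate log growth, renormalize to avoid overflow
            v /= nv
            x = (x + alpha) % 1.0
        return acc / n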
NASA Astrophysics Data System (ADS)
Flores-Marquez, Leticia Elsa; Ramirez Rojaz, Alejandro; Telesca, Luciano
2015-04-01
Two statistical approaches are analyzed for two different types of data sets: one is the seismicity generated by the subduction processes that occurred at the southern Pacific coast of Mexico between 2005 and 2012, and the other corresponds to synthetic seismic data generated by a stick-slip experimental model. The statistical methods used for the present study are the visibility graph, to investigate the time dynamics of the series, and the scaled probability density function in the natural time domain, to investigate the critical order of the system. The purpose of this comparison is to show the similarities between the dynamical behaviors of both types of data sets from the point of view of critical systems. The observed behaviors allow us to conclude that the experimental set-up globally reproduces the behavior observed in the statistical approaches used to analyse the seismicity of the subduction zone. The present study was supported by the Bilateral Project Italy-Mexico "Experimental stick-slip models of tectonic faults: innovative statistical approaches applied to synthetic seismic sequences", jointly funded by MAECI (Italy) and AMEXCID (Mexico) in the framework of the Bilateral Agreement for Scientific and Technological Cooperation PE 2014-2016.
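The natural visibility graph used here links two samples whenever the straight line between them passes above all intermediate samples; a direct O(N^2) sketch of the standard criterion:

    def visibility_graph(y):
        """Return the edge list of the natural visibility graph of series y."""
        n, edges = len(y), []
        for a in range(n):
            for b in range(a + 1, n):
                # (a, b) are mutually visible if every intermediate sample lies
                # strictly below the chord joining (a, y[a]) and (b, y[b])
                if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                       for c in range(a + 1, b)):
                    edges.append((a, b))
        return edges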
Cubature versus Fekete-Gauss nodes for spectral element methods on simplicial meshes
NASA Astrophysics Data System (ADS)
Pasquetti, Richard; Rapetti, Francesca
2017-10-01
In a recent JCP paper [9], a higher-order triangular spectral element method (TSEM) was proposed to address seismic wave field modeling. The main interest of this TSEM is that the mass matrix is diagonal, so that an explicit time marching becomes very cheap. This property results from the fact that, similarly to the usual SEM (say, QSEM), the basis functions are Lagrange polynomials based on a set of points that shows both nice interpolation and quadrature properties. In the quadrangle, i.e., for the QSEM, the set of points is simply obtained by tensor product of Gauss-Lobatto-Legendre (GLL) points. In the triangle, finding such an appropriate set of points is, however, not trivial. Thus, the work of [9] follows earlier works that started in the 2000s [2,6,11] and now provides cubature nodes and weights up to N = 9, where N is the total degree of the polynomial approximation. Here we wish to evaluate the accuracy of this cubature-nodes TSEM with respect to the Fekete-Gauss one, see, e.g., [12], which makes use of two sets of points, namely the Fekete points and the Gauss points of the triangle, for interpolation and quadrature, respectively. Because the Fekete-Gauss TSEM is in the spirit of any nodal hp-finite element method, one may expect that the conclusions of this Note will remain relevant if using other sets of carefully defined interpolation points.
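For the quadrangle case mentioned above, the GLL points are the endpoints ±1 together with the roots of P'_N, with the standard weights 2 / (N (N+1) P_N(x_i)^2); a small sketch of this classical construction:

    import numpy as np
    from numpy.polynomial import legendre as L

    def gauss_lobatto_legendre(N):
        """Nodes and weights of the (N+1)-point Gauss-Lobatto-Legendre rule."""
        cN = np.zeros(N + 1)
        cN[-1] = 1.0                                  # Legendre coefficients of P_N
        nodes = np.concatenate(([-1.0], L.legroots(L.legder(cN)), [1.0]))
        weights = 2.0 / (N * (N + 1) * L.legval(nodes, cN) ** 2)
        return nodes, weights

    # 2D tensor-product points for the quadrangle, as used by the usual QSEM
    x, w = gauss_lobatto_legendre(6)
    X, Y = np.meshgrid(x, x)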
Maxwell Strata and Cut Locus in the Sub-Riemannian Problem on the Engel Group
NASA Astrophysics Data System (ADS)
Ardentov, Andrei A.; Sachkov, Yuri L.
2017-12-01
We consider the nilpotent left-invariant sub-Riemannian structure on the Engel group. This structure gives a fundamental local approximation of a generic rank 2 sub-Riemannian structure on a 4-manifold near a generic point (in particular, of the kinematic models of a car with a trailer). On the other hand, this is the simplest sub-Riemannian structure of step three. We describe the global structure of the cut locus (the set of points where geodesics lose their global optimality), the Maxwell set (the set of points that admit more than one minimizer), and the intersection of the cut locus with the caustic (the set of conjugate points along all geodesics). The group of symmetries of the cut locus is described: it is generated by a one-parameter group of dilations R+ and a discrete group of reflections Z2 × Z2 × Z2. The cut locus admits a stratification with 6 three-dimensional strata, 12 two-dimensional strata, and 2 one-dimensional strata. Three-dimensional strata of the cut locus are Maxwell strata of multiplicity 2 (for each point there are 2 minimizers). Two-dimensional strata of the cut locus consist of conjugate points. Finally, one-dimensional strata are Maxwell strata of infinite multiplicity; they consist of conjugate points as well. Projections of sub-Riemannian geodesics to the 2-dimensional plane of the distribution are Euler elasticae. For each point of the cut locus, we describe the Euler elasticae corresponding to minimizers coming to this point. Finally, we describe the structure of the optimal synthesis, i.e., the set of minimizers for each terminal point in the Engel group.
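For concreteness, the Engel group is often presented in coordinates (x, y, z, v) on R^4 with the distribution spanned by the orthonormal frame (this normal form is an assumption drawn from the standard literature and may differ from the paper's normalization):

    X_1 = \partial_x,  X_2 = \partial_y + x \, \partial_z + \tfrac{x^2}{2} \, \partial_v,

and sub-Riemannian minimizers are horizontal curves \dot q = u_1 X_1 + u_2 X_2 that minimize the length \int \sqrt{u_1^2 + u_2^2} \, dt; their projections to the (x, y)-plane are the Euler elasticae mentioned above.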
Carpenter, Afton S; Sullivan, Joanne H; Deshmukh, Arati; Glisson, Scott R; Gallo, Stephen A
2015-09-08
With the use of teleconferencing for grant peer-review panels increasing, further studies are necessary to determine the efficacy of the teleconference setting compared to the traditional onsite/face-to-face setting. The objective of this analysis was to examine the effects of discussion, namely changes in application scoring premeeting and postdiscussion, in these settings. We also investigated other parameters, including the magnitude of score shifts and application discussion time in face-to-face and teleconference review settings. The investigation involved a retrospective, quantitative analysis of premeeting and postdiscussion scores and discussion times for teleconference and face-to-face review panels. The analysis included 260 and 212 application score data points and 212 and 171 discussion time data points for the face-to-face and teleconference settings, respectively. The effect of discussion was found to be small, on average, in both settings. However, discussion was found to be important for at least 10% of applications, regardless of setting, with these applications moving over a potential funding line in either direction (fundable to unfundable or vice versa). Small differences were uncovered relating to the effect of discussion between settings, including a decrease in the magnitude of the effect in the teleconference panels as compared to face-to-face. Discussion time (despite teleconferences having shorter discussions) was observed to have little influence on the magnitude of the effect of discussion. Additionally, panel discussion was found to often result in a poorer score (as opposed to an improvement) when compared to reviewer premeeting scores. This was true regardless of setting or assigned reviewer type (primary or secondary reviewer). Subtle differences were observed between settings, potentially due to reduced engagement in teleconferences. Overall, further research is required on the psychology of decision-making, team performance and persuasion to better elucidate the group dynamics of telephonic and virtual ad-hoc peer-review panels.
Using Gaussian windows to explore a multivariate data set
NASA Technical Reports Server (NTRS)
Jaeckel, Louis A.
1991-01-01
In an earlier paper, I recounted an exploratory analysis, using Gaussian windows, of a data set derived from the Infrared Astronomical Satellite. Here, my goals are to develop strategies for finding structural features in a data set in a many-dimensional space, and to find ways to describe the shape of such a data set. After a brief review of Gaussian windows, I describe the current implementation of the method. I give some ways of describing features that we might find in the data, such as clusters and saddle points, and also extended structures such as a 'bar', which is an essentially one-dimensional concentration of data points. I then define a distance function, which I use to determine which data points are 'associated' with a feature. Data points not associated with any feature are called 'outliers'. I then explore the data set, giving the strategies that I used and quantitative descriptions of the features that I found, including clusters, bars, and a saddle point. I tried to use strategies and procedures that could, in principle, be used in any number of dimensions.
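The basic operation described, weighting each data point by a Gaussian centered at a trial location and reading off the local weighted mean and covariance, can be sketched as follows (function name and the spherical-window choice are ours):

    import numpy as np

    def gaussian_window(data, center, sigma):
        """Local weighted mean and covariance of data (n, d) under a spherical
        Gaussian window at `center`; re-centering on the mean and iterating
        climbs toward a local concentration of points (a cluster or bar)."""
        d2 = ((data - center) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2 / sigma ** 2)
        w /= w.sum()
        mean = (w[:, None] * data).sum(axis=0)
        centered = data - mean
        cov = (w[:, None, None] *
               np.einsum('ni,nj->nij', centered, centered)).sum(axis=0)
        return mean, cov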
Arndt, Michael; Hitzmann, Bernd
2004-01-01
A glucose control system is presented, which is able to control cultivations of Saccharomyces cerevisiae even at low glucose concentrations. Glucose concentrations are determined using a special flow injection analysis (FIA) system, which does not require a sampling module. An extended Kalman filter is employed for smoothing the glucose measurements as well as for the prediction of glucose and biomass concentration, the maximum specific growth rate, and the volume of the culture broth. The predicted values are utilized for feedforward/feedback control of the glucose concentration at set points of 0.08 and 0.05 g/L. The controller established well-defined conditions over several hours up to biomass concentrations of 13.5 and 20.7 g/L, respectively. The specific glucose uptake rates at both set points were 1.04 and 0.68 g/g/h, respectively. It is demonstrated that during fed-batch cultivation an overall pure oxidative metabolism of glucose is maintained at the lower set point and a specific ethanol production rate of 0.18 g/g/h at the higher set point.
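A minimal sketch of the extended Kalman filter recursion used for smoothing and prediction, in generic form: the state vector, model functions and noise covariances here are placeholders, not the cultivation model of the paper.

    import numpy as np

    def ekf_step(x, P, z, f, F, h, H, Q, R):
        """One predict/update cycle of an extended Kalman filter.
        f, h: process and measurement functions; F, H: their Jacobians."""
        x_pred = f(x)                        # predict state (e.g., glucose, biomass)
        P_pred = F(x) @ P @ F(x).T + Q       # predict covariance
        y = z - h(x_pred)                    # innovation from the FIA measurement
        S = H(x_pred) @ P_pred @ H(x_pred).T + R
        K = P_pred @ H(x_pred).T @ np.linalg.inv(S)   # Kalman gain
        x_new = x_pred + K @ y
        P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
        return x_new, P_new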
A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.
Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang
2017-01-01
Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection for noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and considers the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements of fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.
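For contrast with the single-time-point detectors the authors improve on, here is a minimal point-wise CUSUM baseline (a classical method, not the proposed doubly stochastic algorithm; allowance and threshold values are placeholders):

    import numpy as np

    def cusum(x, mu0, k=0.5, h=5.0):
        """Flag upward mean shifts in series x relative to baseline mu0.
        k: allowance (in std units), h: decision threshold."""
        s, alarms = 0.0, []
        sigma = np.std(x) + 1e-12
        for i, xi in enumerate(x):
            s = max(0.0, s + (xi - mu0) / sigma - k)
            if s > h:
                alarms.append(i)   # suspected change point
                s = 0.0
        return alarms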
Families of FPGA-Based Accelerators for Approximate String Matching
Van Court, Tom; Herbordt, Martin C.
2011-01-01
Dynamic programming for approximate string matching is a large family of different algorithms, which vary significantly in purpose, complexity, and hardware utilization. Many implementations have reported impressive speed-ups, but have typically been point solutions – highly specialized and addressing only one or a few of the many possible options. The problem to be solved is creating a hardware description that implements a broad range of behavioral options without losing efficiency due to feature bloat. We report a set of three component types that address different parts of the approximate string matching problem. This allows each application to choose the feature set required, then make maximum use of the FPGA fabric according to that application's specific resource requirements. Multiple, interchangeable implementations are available for each component type. We show that these methods allow the efficient generation of a large, if not complete, family of accelerators for this application. This flexibility was obtained while retaining high performance: we have evaluated a sample against serial reference codes and found speed-ups of 150× to 400× over a high-end PC. PMID:21603598
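The dynamic-programming kernel that such accelerators parallelize is, in its simplest (edit-distance) variant, the following; FPGA designs typically pipeline the anti-diagonals of the table.

    def edit_distance(s, t):
        """Classic O(len(s) * len(t)) dynamic-programming edit distance."""
        m, n = len(s), len(t)
        D = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            D[i][0] = i
        for j in range(n + 1):
            D[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                D[i][j] = min(D[i - 1][j] + 1,                       # deletion
                              D[i][j - 1] + 1,                       # insertion
                              D[i - 1][j - 1] + (s[i - 1] != t[j - 1]))  # substitution
        return D[m][n]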
Two copies of the Einstein-Podolsky-Rosen state of light lead to refutation of EPR ideas.
Rosołek, Krzysztof; Stobińska, Magdalena; Wieśniak, Marcin; Żukowski, Marek
2015-03-13
Bell's theorem applies to the normalizable approximations of the original Einstein-Podolsky-Rosen (EPR) state. The constructions of the proof require measurements that are difficult to perform, and dichotomic observables. By noticing that the four-mode squeezed vacuum state produced in type-II down-conversion can be seen both as two copies of approximate EPR states and as a kind of polarization supersinglet, we show a straightforward way to test violations of the EPR concepts with direct use of their state. The observables involved are simply photon numbers at the outputs of polarizing beam splitters. Suitable chained Bell inequalities are based on the geometric concept of distance. For a few settings they are potentially a new tool for quantum information applications, involving observables of a nondichotomic nature, and thus of higher informational capacity. In the limit of infinitely many settings we get a Greenberger-Horne-Zeilinger-type contradiction: EPR reasoning points to a correlation, while the quantum prediction is an anticorrelation. Violations of the inequalities are fully resistant to multipair emissions in Bell experiments using parametric down-conversion sources.
Anderson, Eric; Adams, Adi; Rivers, Ian
2012-04-01
In this article, we combined data from 145 interviews and three ethnographic investigations of heterosexual male students in the U.K. from multiple educational settings. Our results indicate that 89% have, at some point, kissed another male on the lips which they reported as being non-sexual: a means of expressing platonic affection among heterosexual friends. Moreover, 37% also reported engaging in sustained same-sex kissing, something they construed as non-sexual and non-homosexual. Although the students in our study understood that this type of kissing remains somewhat culturally symbolized as a taboo sexual behavior, they nonetheless reconstructed it, making it compatible with heteromasculinity by recoding it as homosocial. We hypothesize that both these types of kissing behaviors are increasingly permissible due to rapidly decreasing levels of cultural homophobia. Furthermore, we argue that there has been a loosening of the restricted physical and emotional boundaries of traditional heteromasculinity in these educational settings, something which may also gradually assist in the erosion of prevailing heterosexual hegemony.
Salnikova, L E; Kolobkov, D S
2016-06-01
Oncologists have pointed out an urgent need for biomarkers that can be useful for clinical application to predict the susceptibility of patients to preoperative therapy. This review collects, evaluates and combines data on the influence of reported somatic and germline genetic variations on histological tumor regression in neoadjuvant settings of rectal and esophageal cancers. Five hundred and twenty-seven articles were identified, 204 retrieved and 61 studies included. Among 24 and 14 genetic markers reported for rectal and esophageal cancers, respectively, significant associations in meta-analyses were demonstrated for the following markers. In rectal cancer, major response was more frequent in carriers of the TYMS genotype 2R/2R-2R/3R (rs34743033), MTHFR genotype 677C/C (rs1801133), wild-type TP53 and KRAS genes. In esophageal cancer, successful therapy appeared to correlate with wild-type TP53. These results may be useful for future research directions to translate reported data into practical clinical use.
X-ray backscatter radiography with lower open fraction coded masks
NASA Astrophysics Data System (ADS)
Muñoz, André A. M.; Vella, Anna; Healy, Matthew J. F.; Lane, David W.; Jupp, Ian; Lockley, David
2017-09-01
Single sided radiographic imaging would find great utility for medical, aerospace and security applications. While coded apertures can be used to form such an image from backscattered X-rays, they suffer from near-field limitations that introduce noise. Several theoretical studies have indicated that for an extended source the image's signal-to-noise ratio may be optimised by using a low open fraction (<0.5) mask. However, few experimental results have been published for such low open fraction patterns, and details of their formulation are often unavailable or ambiguous. In this paper we address this process for two types of low open fraction mask, the dilute URA and the Singer set array. For the dilute URA, the procedure for producing multiple 2D array patterns from given 1D binary sequences (Barker codes) is explained. Their point spread functions are calculated and their imaging properties are critically reviewed. These results are then compared to those from the Singer set, and experimental exposures are presented for both types of pattern; their prospects for near-field imaging are discussed.
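The point spread function of a coded mask can be checked numerically by correlating the mask pattern with a decoding array; for a well-designed pattern the result is a delta-like peak on a flat pedestal. A sketch using generic balanced decoding (an illustration, not the specific dilute-URA construction of the paper):

    import numpy as np
    from scipy.signal import correlate2d

    def coded_mask_psf(mask):
        """PSF of a binary mask under balanced correlation decoding:
        open cells decode as +1, closed cells as -rho/(1-rho), so the
        decoder has zero mean and uniform background cancels."""
        rho = mask.mean()                  # open fraction of the mask
        decoder = np.where(mask > 0, 1.0, -rho / (1.0 - rho))
        return correlate2d(mask, decoder, mode='same', boundary='wrap')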
Permitted and forbidden sets in symmetric threshold-linear networks.
Hahnloser, Richard H R; Seung, H Sebastian; Slotine, Jean-Jacques
2003-03-01
The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is nonconnected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
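A sketch of the dynamics in question, plus an easy spectral check: the full convergence criterion involves copositivity of a matrix built from the synaptic weights, which is hard to test in general, so the helper below only checks positive semidefiniteness, the condition the abstract ties to multistability (the choice of matrix passed in is left to the caller and is an assumption here).

    import numpy as np

    def simulate_tln(W, b, x0, dt=0.01, steps=20000):
        """Symmetric threshold-linear network dx/dt = -x + [W x + b]_+ ."""
        x = x0.copy()
        for _ in range(steps):
            x += dt * (-x + np.maximum(0.0, W @ x + b))
        return x   # an attractive fixed point when the convergence condition holds

    def is_positive_semidefinite(M, tol=1e-10):
        """If the relevant network matrix is PSD, the set of attractive fixed
        points is connected; otherwise multiple attractors may coexist."""
        return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) >= -tol))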
Digital data base application to porphyry copper mineralization in Alaska; case study summary
Trautwein, Charles M.; Greenlee, David D.; Orr, Donald G.
1982-01-01
The purpose of this report is to summarize the progress in use of digital image analysis techniques in developing a conceptual model for assessing porphyry copper mineral potential. The study area consists of approximately the southern one-half of the 1° by 3° Nabesna quadrangle in east-central Alaska. The digital geologic data base consists of data compiled under the Alaskan Mineral Resource Assessment Program (AMRAP) as well as digital elevation data and Landsat spectral reflectance data from the Multispectral Scanner System. The digital data base used to develop and implement a conceptual model for porphyry-type copper mineralization consisted of 16 original data types and 18 derived data sets formatted in a grid-cell (raster) structure and registered to a map base in the Universal Transverse Mercator (UTM) projection. Minimum curvature and inverse distance squared interpolation techniques were used to generate continuous surfaces from sets of irregularly spaced data points. Processing requirements included: (1) merging or overlaying of data sets, (2) display and color coding of maps and images, (3) univariate and multivariate statistical analyses, and (4) compound overlaying operations. Data sets were merged and processed to create stereoscopic displays of continuous surfaces. Ratios of several data sets were calculated to evaluate relative variations and to enhance the display of surface alteration (gossans). Factor analysis and principal components analysis techniques were used to determine complex relationships and correlations between data sets. The resultant model consists of 10 parameters that identify three areas most likely to contain porphyry copper mineralization; two of these areas are known occurrences of mineralization and the third is not well known. Field studies confirmed that the three areas identified by the model have significant copper potential.
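Of the two gridding methods mentioned, inverse distance squared weighting is the simpler; a sketch for scattered control points (names are ours):

    import numpy as np

    def idw_grid(xy, values, grid_xy, power=2.0, eps=1e-12):
        """Inverse-distance-weighted interpolation of scattered data (xy, values)
        onto the cell centers grid_xy; power=2 gives inverse distance squared."""
        out = np.empty(len(grid_xy))
        for k, g in enumerate(grid_xy):
            d2 = ((xy - g) ** 2).sum(axis=1) + eps   # eps avoids divide-by-zero
            w = 1.0 / d2 ** (power / 2.0)
            out[k] = (w * values).sum() / w.sum()
        return out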
Modak, Isitri; Sexton, J Bryan; Lux, Thomas R; Helmreich, Robert L; Thomas, Eric J
2007-01-01
Provider attitudes about issues pertinent to patient safety may be related to errors and adverse events. We know of no instruments that measure safety-related attitudes in the outpatient setting. To adapt the safety attitudes questionnaire (SAQ) to the outpatient setting and compare attitudes among different types of providers in the outpatient setting. We modified the SAQ to create a 62-item SAQ-ambulatory version (SAQ-A). Patient care staff in a multispecialty, academic practice rated their agreement with the items using a 5-point Likert scale. Cronbach's alpha was calculated to determine reliability of scale scores. Differences in SAQ-A scores between providers were assessed using ANOVA. Of the 409 staff, 282 (69%) returned surveys. One hundred ninety (46%) surveys were included in the analyses. Cronbach's alpha ranged from 0.68 to 0.86 for the scales: teamwork climate, safety climate, perceptions of management, job satisfaction, working conditions, and stress recognition. Physicians had the least favorable attitudes about perceptions of management while managers had the most favorable attitudes (mean scores: 50.4 +/- 22.5 vs 72.5 +/- 19.6, P < 0.05; percent with positive attitudes 18% vs 70%, respectively). Nurses had the most positive stress recognition scores (mean score 66.0 +/- 24.0). All providers had similar attitudes toward teamwork climate, safety climate, job satisfaction, and working conditions. The SAQ-A is a reliable tool for eliciting provider attitudes about the ambulatory work setting. Attitudes relevant to medical error may differ among provider types and reflect behavior and clinic operations that could be improved.
Thermoelectric Control Of Temperatures Of Pressure Sensors
NASA Technical Reports Server (NTRS)
Burkett, Cecil G., Jr.; West, James W.; Hutchinson, Mark A.; Lawrence, Robert M.; Crum, James R.
1995-01-01
Prototype controlled-temperature enclosure containing thermoelectric devices developed to house electronically scanned array of pressure sensors. Enclosure needed because (1) temperatures of transducers in sensors must be maintained at specified set point to ensure proper operation and calibration and (2) sensors sometimes used to measure pressure in hostile environments (wind tunnels in original application) that are hotter or colder than set point. Thus, depending on temperature of pressure-measurement environment, thermoelectric devices in enclosure used to heat or cool transducers to keep them at set point.
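The heat-or-cool behavior around a set point is a classic bidirectional control problem; a minimal PI sketch (illustrative gains and interface, not the flight hardware's controller):

    def pi_thermoelectric(temp_reading, setpoint, state, kp=2.0, ki=0.05, dt=1.0):
        """Return a signed drive for a thermoelectric device: positive output
        heats, negative output cools. `state` carries the integral term."""
        error = setpoint - temp_reading
        state['integral'] = state.get('integral', 0.0) + error * dt
        return kp * error + ki * state['integral']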
Visual Communication in PowerPoint Presentations in Applied Linguistics
ERIC Educational Resources Information Center
Kmalvand, Ayad
2014-01-01
PowerPoint knowledge presentation as a digital genre has established itself as the main software by which the findings of theses are disseminated in the academic settings. Although the importance of PowerPoint presentations is typically realized in academic settings like lectures, conferences, and seminars, the study of the visual features of…
Mathematical construction and perturbation analysis of Zernike discrete orthogonal points.
Shi, Zhenguang; Sui, Yongxin; Liu, Zhenyu; Peng, Ji; Yang, Huaijiang
2012-06-20
Zernike functions are orthogonal within the unit circle, but they are not orthogonal over discrete point sets such as CCD arrays or finite element grids. This loss of orthogonality results in reconstruction errors. By using roots of Legendre polynomials, a set of points within the unit circle can be constructed so that Zernike functions are discretely orthogonal over the set. In addition, the location tolerances of the points are studied by perturbation analysis, and the requirements on positioning precision are found to be not very strict. Computer simulations show that this approach provides a very accurate wavefront reconstruction with the proposed sampling set.
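One standard way to build such a grid (an illustration consistent with the abstract, not necessarily the authors' exact construction) maps Gauss-Legendre roots x_k to radii r_k = sqrt((1 + x_k)/2), which makes quadrature exact for polynomials in r^2, and samples angles uniformly:

    import numpy as np

    def zernike_sampling(n_r, n_theta):
        """Disk sampling that keeps Zernike radial polynomials discretely
        orthogonal: radii from Legendre roots in r**2, angles uniform."""
        x, w = np.polynomial.legendre.leggauss(n_r)   # nodes/weights on [-1, 1]
        r = np.sqrt((x + 1.0) / 2.0)                  # substitution r**2 = (x+1)/2
        theta = 2.0 * np.pi * np.arange(n_theta) / n_theta
        R = np.repeat(r, n_theta)
        T = np.tile(theta, n_r)
        W = np.repeat(w, n_theta) / (2.0 * n_theta)   # sums f to (1/pi) * disk integral
        return R, T, W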
Apparatus and method for implementing power saving techniques when processing floating point values
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Young Moon; Park, Sang Phill
An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.
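The energy saving comes from biasing stored bit patterns toward zeros; one way to see the idea (a toy software illustration, not the patented encoding) is to force a specified set of low-order mantissa bit positions of a float32 to read as 0s:

    import struct

    def zero_low_mantissa(value, bits=8):
        """Force the `bits` least significant mantissa bits of a float32 to 0,
        trading a little precision for fewer binary 1s on the read-out lines."""
        (u,) = struct.unpack('<I', struct.pack('<f', value))
        u &= ~((1 << bits) - 1)          # clear the specified bit positions
        (out,) = struct.unpack('<f', struct.pack('<I', u))
        return out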
NASA Astrophysics Data System (ADS)
Dolan, B.; Rutledge, S. A.; Barnum, J. I.; Matsui, T.; Tao, W. K.; Iguchi, T.
2017-12-01
POLarimetric Radar Retrieval and Instrument Simulator (POLARRIS) is a framework that has been developed to simulate radar observations from cloud-resolving model (CRM) output and subject model data and observations to the same retrievals, analysis and visualization. This framework not only enables validation of bulk microphysical model-simulated properties, but also offers an opportunity to study the uncertainties associated with retrievals such as hydrometeor classification (HID). For the CSU HID, membership beta functions (MBFs) are built using a set of simulations with realistic microphysical assumptions about axis ratio, density, canting angles, and size distributions for each of ten hydrometeor species. These assumptions are tested using POLARRIS to understand their influence on the resulting simulated polarimetric data and final HID classification. Several of these parameters (density, size distributions) are set by the model microphysics, and therefore the specific assumptions of axis ratio and canting angle are carefully studied. Through these sensitivity studies, we hope to be able to provide uncertainties in retrieved polarimetric variables and HID as applied to CRM output. HID retrievals assign a classification to each point by determining the highest score, thereby identifying the dominant hydrometeor type within a volume. However, in nature, there is rarely just a single hydrometeor type at a particular point. Models allow for mixing ratios of different hydrometeors within a grid point. We use the mixing ratios from CRM output in concert with the HID scores and classifications to understand how the HID algorithm can provide information about mixtures within a volume, as well as to calculate a confidence in the classifications. We leverage the POLARRIS framework to additionally probe radar wavelength differences toward the possibility of a multi-wavelength HID which could utilize the strengths of different wavelengths to improve HID classifications. With these uncertainties and algorithm improvements, cases of convection are studied in a continental (Oklahoma) and a maritime (Darwin, Australia) regime. Observations from C-band polarimetric data in both locations are compared to CRM simulations from NU-WRF using the POLARRIS framework.
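The MBFs at the heart of such fuzzy-logic HID schemes are one-dimensional membership curves; a functional form commonly used in the fuzzy-HID literature (the parameter values below are placeholders, not the CSU HID's tuned values) is:

    def membership_beta(x, m, a, b):
        """Fuzzy membership score in [0, 1]: m = center, a = half-width,
        b = slope of the curve's shoulders."""
        return 1.0 / (1.0 + (((x - m) / a) ** 2) ** b)

    # Hypothetical example: score a reflectivity of 45 dBZ against one class
    score = membership_beta(45.0, m=40.0, a=10.0, b=4.0)

Per-variable scores are aggregated for each hydrometeor species, and the highest aggregate determines the classification.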
Cluster synchronization in networks of identical oscillators with α-function pulse coupling.
Chen, Bolun; Engelbrecht, Jan R; Mirollo, Renato
2017-02-01
We study a network of N identical leaky integrate-and-fire model neurons coupled by α-function pulses, weighted by a coupling parameter K. Studies of the dynamics of this system have mostly focused on the stability of the fully synchronized and the fully asynchronous splay states, which naturally depends on the sign of K, i.e., excitation vs inhibition. We find that there is also a rich set of attractors consisting of clusters of fully synchronized oscillators, such as fixed (N-1,1) states, which have synchronized clusters of sizes N-1 and 1, as well as splay states of clusters with equal sizes greater than 1. Additionally, we find limit cycles that clarify the stability of previously observed quasiperiodic behavior. Our framework exploits the neutrality of the dynamics for K=0 which allows us to implement a dimensional reduction strategy that simplifies the dynamics to a continuous flow on a codimension 3 subspace with the sign of K determining the flow direction. This reduction framework naturally incorporates a hierarchy of partially synchronized subspaces in which the new attracting states lie. Using high-precision numerical simulations, we describe completely the sequence of bifurcations and the stability of all fixed points and limit cycles for N=2-4. The set of possible attracting states can be used to distinguish different classes of neuron models. For instance from our previous work [Chaos 24, 013114 (2014); doi:10.1063/1.4858458] we know that of the types of partially synchronized states discussed here, only the (N-1,1) states can be stable in systems of identical coupled sinusoidal (i.e., Kuramoto type) oscillators, such as θ-neuron models. Upon introducing a small variation in individual neuron parameters, the attracting fixed points we discuss here generalize to equivalent fixed points in which neurons need not fire coincidently.
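A direct simulation of this class of models can be sketched as follows; the α-function pulse a(t) = α² t e^(−αt) is realized as a second-order linear filter kicked at each spike (parameter values and integration scheme are illustrative, not the paper's high-precision setup):

    import numpy as np

    def simulate_lif_alpha(N=3, K=0.1, alpha=5.0, I=1.1, T=200.0, dt=1e-3):
        """N leaky integrate-and-fire neurons (threshold 1, reset 0) coupled
        all-to-all by alpha-function pulses a(t) = alpha**2 * t * exp(-alpha*t)."""
        v = np.random.rand(N)
        s = np.zeros(N)                           # synaptic output of each neuron
        ds = np.zeros(N)                          # its time derivative
        for _ in range(int(T / dt)):
            drive = K * (s.sum() - s) / (N - 1)   # mean field from the other neurons
            v += dt * (I - v + drive)
            dds = -2 * alpha * ds - alpha**2 * s  # (d/dt + alpha)^2 s = 0 between spikes
            s += dt * ds
            ds += dt * dds
            fired = v >= 1.0
            v[fired] = 0.0
            ds[fired] += alpha**2                 # each spike injects one alpha pulse
        return v                                  # final state; instrument as needed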
Williamson, Joyce E.; Jarrell, Gregory J.; Clawges, Rick M.; Galloway, Joel M.; Carter, Janet M.
2000-01-01
This compact disk contains digital data produced as part of the 1:100,000-scale map products for the Black Hills Hydrology Study conducted in western South Dakota. The digital data include 28 individual Geographic Information System (GIS) data sets: data sets for the hydrogeologic unit map including all mapped hydrogeologic units within the study area (1 data set) and major geologic structure including anticlines and synclines (1 data set); data sets for potentiometric maps including the potentiometric contours for the Inyan Kara, Minnekahta, Minnelusa, Madison, and Deadwood aquifers (5 data sets), wells used as control points for each aquifer (5 data sets), and springs used as control points for the potentiometric contours (1 data set); and data sets for the structure-contour maps including the structure contours for the top of each formation that contains major aquifers (5 data sets), wells and tests holes used as control points for each formation (5 data sets), and surficial deposits (alluvium and terrace deposits) that directly overlie each of the major aquifer outcrops (5 data sets). These data sets were used to produce the maps published by the U.S. Geological Survey.
Design of air-gapped magnetic-core inductors for superimposed direct and alternating currents
NASA Technical Reports Server (NTRS)
Ohri, A. K.; Wilson, T. G.; Owen, H. A., Jr.
1976-01-01
Using data on standard magnetic-material properties and standard core sizes for air-gap-type cores, an algorithm designed for a computer solution is developed which optimally determines the air-gap length and locates the quiescent point on the normal magnetization curve so as to yield an inductor design with the minimum number of turns for a given ac voltage and frequency and with a given dc bias current superimposed in the same winding. Magnetic-material data used in the design are the normal magnetization curve and a family of incremental permeability curves. A second procedure, which requires a simpler set of calculations, starts from an assigned quiescent point on the normal magnetization curve and first screens candidate core sizes for suitability, then determines the required turns and air-gap length.
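A sketch of the simpler screening-type calculation, using textbook magnetics formulas with assumed symbols (the paper's full algorithm instead works from the measured normal-magnetization and incremental-permeability curves):

    import numpy as np

    MU0 = 4e-7 * np.pi   # permeability of free space (H/m)

    def screen_core(V_ac, freq, B_ac_max, A_e, L_target):
        """Estimate turns from Faraday's law for sinusoidal drive, and the
        air-gap length giving the target inductance when the gap reluctance
        dominates (fringing and core reluctance neglected)."""
        N = int(np.ceil(V_ac / (4.44 * freq * B_ac_max * A_e)))  # required turns
        gap = MU0 * N**2 * A_e / L_target                        # air-gap length (m)
        return N, gap

A candidate core passes the screen if the resulting gap and the dc-bias operating point remain within the material's usable magnetization range.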
NASA Technical Reports Server (NTRS)
Spera, David A.
2008-01-01
Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally-stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets including 585 test points. Mean deviation was found to be -0.4 percent and standard deviation was 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, mean deviation from 54 data points was -1.3 percent and standard deviation was 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.
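The flat-plate, post-stall behavior that such models capture is often written in Viterna-type form; below is that standard extension with its aspect-ratio-dependent maximum drag coefficient (a related classical method, not necessarily the exact equations developed in this paper):

    import numpy as np

    def viterna_poststall(alpha, alpha_s, cl_s, cd_s, aspect_ratio):
        """Lift/drag coefficients beyond stall (alpha in radians, between the
        stall angle and 90 deg), matched to the stall point (alpha_s, cl_s, cd_s).
        Viterna-Corrigan flat-plate extension."""
        cd_max = 1.11 + 0.018 * aspect_ratio
        kl = ((cl_s - cd_max * np.sin(alpha_s) * np.cos(alpha_s))
              * np.sin(alpha_s) / np.cos(alpha_s) ** 2)
        kd = (cd_s - cd_max * np.sin(alpha_s) ** 2) / np.cos(alpha_s)
        cl = 0.5 * cd_max * np.sin(2 * alpha) + kl * np.cos(alpha) ** 2 / np.sin(alpha)
        cd = cd_max * np.sin(alpha) ** 2 + kd * np.cos(alpha)
        return cl, cd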
NASA Astrophysics Data System (ADS)
Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas
2018-06-01
In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.
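For orientation, the weighted TV functional for a spatially varying weight w (written schematically in the smooth setting, before the relaxation discussed above) and the corresponding saddle-point reformulation are

    TV_w(u) = \sup \Big\{ \int_\Omega u \, \mathrm{div}\,\varphi \, dx \;:\; \varphi \in C_c^1(\Omega; \mathbb{R}^d), \ |\varphi(x)| \le w(x) \Big\},

so that instead of the Tikhonov problem \min_u \tfrac12 \|Ku - f\|^2 + TV_w(u) one may solve

    \min_u \max_{|\varphi(x)| \le w(x)} \; \tfrac12 \|Ku - f\|^2 + \int_\Omega u \, \mathrm{div}\,\varphi \, dx,

which requires no explicit integral formula for the relaxed functional.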
Meta-analysis: exercise therapy for nonspecific low back pain.
Hayden, Jill A; van Tulder, Maurits W; Malmivaara, Antti V; Koes, Bart W
2005-05-03
Exercise therapy is widely used as an intervention in low back pain. To evaluate the effectiveness of exercise therapy in adult nonspecific acute, subacute, and chronic low back pain versus no treatment and other conservative treatments. MEDLINE, EMBASE, PsycINFO, CINAHL, and Cochrane Library databases to October 2004; citation searches and bibliographic reviews of previous systematic reviews. Randomized, controlled trials evaluating exercise therapy for adult nonspecific low back pain and measuring pain, function, return to work or absenteeism, and global improvement outcomes. Two reviewers independently selected studies and extracted data on study characteristics, quality, and outcomes at short-, intermediate-, and long-term follow-up. 61 randomized, controlled trials (6390 participants) met inclusion criteria: acute (11 trials), subacute (6 trials), and chronic (43 trials) low back pain (1 trial was unclear). Evidence suggests that exercise therapy is effective in chronic back pain relative to comparisons at all follow-up periods. Pooled mean improvement (of 100 points) was 7.3 points (95% CI, 3.7 to 10.9 points) for pain and 2.5 points (CI, 1.0 to 3.9 points) for function at earliest follow-up. In studies investigating patients (people seeking care for back pain), mean improvement was 13.3 points (CI, 5.5 to 21.1 points) for pain and 6.9 points (CI, 2.2 to 11.7 points) for function, compared with studies where some participants had been recruited from a general population (for example, with advertisements). Some evidence suggests effectiveness of a graded-activity exercise program in subacute low back pain in occupational settings, although the evidence for other types of exercise therapy in other populations is inconsistent. In acute low back pain, exercise therapy and other programs were equally effective (pain, 0.03 point [CI, -1.3 to 1.4 points]). Limitations of the literature include low-quality studies with heterogeneous outcome measures, inconsistent and poor reporting, and the possibility of publication bias. Exercise therapy seems to be slightly effective at decreasing pain and improving function in adults with chronic low back pain, particularly in health care populations. In subacute low back pain populations, some evidence suggests that a graded-activity program improves absenteeism outcomes, although evidence for other types of exercise is unclear. In acute low back pain populations, exercise therapy is as effective as either no treatment or other conservative treatments.
Ishihara, Hisashi; Ota, Nobuyuki; Asada, Minoru
2017-11-27
It is quite difficult for android robots to replicate the numerous and various types of human facial expressions owing to limitations in terms of space, mechanisms, and materials. This situation could be improved with greater knowledge regarding these expressions and their deformation rules, i.e. by using the biomimetic approach. In a previous study, we investigated 16 facial deformation patterns and found that each facial point moves almost only in its own principal direction and different deformation patterns are created with different combinations of moving lengths. However, the replication errors caused by moving each control point of a face in only their principal direction were not evaluated for each deformation pattern at that time. Therefore, we calculated the replication errors in this study using the second principal component scores of the 16 sets of flow vectors at each point on the face. More than 60% of the errors were within 1 mm, and approximately 90% of them were within 3 mm. The average error was 1.1 mm. These results indicate that robots can replicate the 16 investigated facial expressions with errors within 3 mm and 1 mm for about 90% and 60% of the vectors, respectively, even if each point on the robot face moves in only its own principal direction. This finding seems promising for the development of robots capable of showing various facial expressions because significantly fewer types of movements than previously predicted are necessary.
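The error measure described, namely how much of each point's motion is lost when it may move only along its first principal direction, reduces to the residual captured by the higher principal components; a sketch (array shapes and names are ours):

    import numpy as np

    def principal_direction_residuals(flows):
        """flows: (16, 2) displacement vectors of one facial point across the
        16 deformation patterns. Returns the per-pattern error when motion is
        restricted to the first principal direction."""
        mean = flows.mean(axis=0)
        U, S, Vt = np.linalg.svd(flows - mean, full_matrices=False)
        pc1 = Vt[0]                                    # principal direction
        proj = mean + np.outer((flows - mean) @ pc1, pc1)
        return np.linalg.norm(flows - proj, axis=1)    # = |second-PC score| in 2D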
Protein and Genetic Composition of Four Chromatin Types in Drosophila melanogaster Cell Lines.
Boldyreva, Lidiya V; Goncharov, Fyodor P; Demakova, Olga V; Zykova, Tatyana Yu; Levitsky, Victor G; Kolesnikov, Nikolay N; Pindyurin, Alexey V; Semeshin, Valeriy F; Zhimulev, Igor F
2017-04-01
Recently, we analyzed genome-wide protein binding data for the Drosophila cell lines S2, Kc, BG3 and Cl.8 (modENCODE Consortium) and identified a set of 12 proteins enriched in the regions corresponding to interbands of salivary gland polytene chromosomes. Using these data, we developed a bioinformatic pipeline that partitioned the Drosophila genome into four chromatin types that we hereby refer to as aquamarine, lazurite, malachite and ruby. Here, we describe the properties of these chromatin types across different cell lines. We show that aquamarine chromatin tends to harbor transcription start sites (TSSs) and 5' untranslated regions (5'UTRs) of the genes, is enriched in diverse "open" chromatin proteins, histone modifications, nucleosome remodeling complexes and transcription factors. It encompasses most of the tRNA genes and shows enrichment for non-coding RNAs and miRNA genes. Lazurite chromatin typically encompasses gene bodies. It is rich in proteins involved in transcription elongation. Frequency of both point mutations and natural deletion breakpoints is elevated within lazurite chromatin. Malachite chromatin shows higher frequency of insertions of natural transposons. Finally, ruby chromatin is enriched for proteins and histone modifications typical for the "closed" chromatin. Ruby chromatin has a relatively low frequency of point mutations and is essentially devoid of miRNA and tRNA genes. Aquamarine and ruby chromatin types are highly stable across cell lines and have contrasting properties. Lazurite and malachite chromatin types also display characteristic protein composition, as well as enrichment for specific genomic features. We found that two types of chromatin, aquamarine and ruby, retain their complementary protein patterns in four Drosophila cell lines.
Architecture of chaotic attractors for flows in the absence of any singular point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Letellier, Christophe; Malasoma, Jean-Marc
2016-06-15
Some chaotic attractors produced by three-dimensional dynamical systems without any singular point have now been identified, but explaining how they are structured in the state space remains an open question. We here want to explain—in the particular case of the Wei system—such a structure, using one-dimensional sets obtained by vanishing two of the three derivatives of the flow. The neighborhoods of these sets are made of points which are characterized by the eigenvalues of a 2 × 2 matrix describing the stability of flow in a subspace transverse to it. We will show that the attractor is spiralling and twisted in the neighborhood of one-dimensional sets where points are characterized by a pair of complex conjugated eigenvalues. We then show that such one-dimensional sets are also useful in explaining the structure of attractors produced by systems with singular points, by considering the case of the Lorenz system.
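A sketch of the computation in the spirit of this method, applied to the Lorenz system also mentioned above: on the one-dimensional set where two of the three derivatives vanish (here ydot = zdot = 0, which forces y = x(rho - z) and z = rho x² / (beta + x²), leaving x as the parameter), one examines the eigenvalues of the 2 × 2 Jacobian of the two vanishing components with respect to the transverse coordinates.

    import numpy as np

    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # classical Lorenz parameters

    def transverse_eigs(x):
        """Eigenvalues of d(ydot, zdot)/d(y, z) on the set ydot = zdot = 0,
        parametrized by x. Complex pairs mark spiralling of the nearby flow."""
        M = np.array([[-1.0, -x],
                      [x, -beta]])
        return np.linalg.eigvals(M)

    for x in (0.5, 5.0):
        print(x, transverse_eigs(x))   # real pair near x = 0.5, complex at x = 5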
Developing points-based risk-scoring systems in the presence of competing risks.
Austin, Peter C; Lee, Douglas S; D'Agostino, Ralph B; Fine, Jason P
2016-09-30
Predicting the occurrence of an adverse event over time is an important issue in clinical medicine. Clinical prediction models and associated points-based risk-scoring systems are popular statistical methods for summarizing the relationship between a multivariable set of patient risk factors and the risk of the occurrence of an adverse event. Points-based risk-scoring systems are popular amongst physicians as they permit a rapid assessment of patient risk without the use of computers or other electronic devices. The use of such points-based risk-scoring systems facilitates evidence-based clinical decision making. There is a growing interest in cause-specific mortality and in non-fatal outcomes. However, when considering these types of outcomes, one must account for competing risks whose occurrence precludes the occurrence of the event of interest. We describe how points-based risk-scoring systems can be developed in the presence of competing events. We illustrate the application of these methods by developing risk-scoring systems for predicting cardiovascular mortality in patients hospitalized with acute myocardial infarction. Code in the R statistical programming language is provided for the implementation of the described methods. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
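The usual recipe for turning regression coefficients into an integer points table is the Sullivan/Framingham approach, sketched below as an assumption-labeled illustration (in the competing-risks setting the coefficients would come from a subdistribution hazard model such as Fine-Gray, per the paper):

    def points_table(betas, ref_values, category_values, base_increment):
        """betas: dict var -> coefficient; ref_values: reference value per var;
        category_values: dict var -> {category: representative value};
        base_increment: the beta-distance worth 1 point (e.g., the coefficient
        associated with 5 years of age). Returns integer points per category."""
        table = {}
        for var, beta in betas.items():
            table[var] = {cat: round(beta * (val - ref_values[var]) / base_increment)
                          for cat, val in category_values[var].items()}
        return table

A patient's total score is the sum of the points for their categories, which is then mapped back to a predicted risk.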
Almiron-Roig, Eva; Domínguez, Angélica; Vaughan, David; Solis-Trapala, Ivonne; Jebb, Susan A
2016-12-01
Exposure to large portion sizes is a risk factor for obesity. Specifically designed tableware may modulate how much is eaten and help with portion control. We examined the experience of using a guided crockery set (CS) and a calibrated serving spoon set (SS) by individuals trying to manage their weight. Twenty-nine obese adults who had completed 7-12 weeks of a community weight-loss programme were invited to use both tools for 2 weeks each, in a crossover design, with minimal health professional contact. A paper-based questionnaire was used to collect data on acceptance, perceived changes in portion size, frequency, and type of meal when the tool was used. Scores describing acceptance, ease of use and perceived effectiveness were derived from five-point Likert scales from which binary indicators (high/low) were analysed using logistic regression. Mean acceptance, ease of use and perceived effectiveness were moderate to high (3·7-4·4 points). Tool type did not have an impact on indicators of acceptance, ease of use and perceived effectiveness (P>0·32 for all comparisons); 55 % of participants used the CS on most days v. 21 % for the SS. The CS was used for all meals, whereas the SS was mostly used for evening meals. Self-selected portion sizes increased for vegetables and decreased for chips and potatoes with both tools. Participants rated both tools as equally acceptable, easy to use and with similar perceived effectiveness. Formal trials to evaluate the impact of such tools on weight control are warranted.
Methods and apparatuses for detection of radiation with semiconductor image sensors
Cogliati, Joshua Joseph
2018-04-10
A semiconductor image sensor is repeatedly exposed to high-energy photons while a visible light obstructer is in place to block visible light from impinging on the sensor to generate a set of images from the exposures. A composite image is generated from the set of images with common noise substantially removed so the composite image includes image information corresponding to radiated pixels that absorbed at least some energy from the high-energy photons. The composite image is processed to determine a set of bright points in the composite image, each bright point being above a first threshold. The set of bright points is processed to identify lines with two or more bright points that include pixels therebetween that are above a second threshold and identify a presence of the high-energy particles responsive to a number of lines.
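A sketch of the described pipeline: build a composite that suppresses common (fixed-pattern) noise, threshold for bright points, then keep pairs whose connecting pixels also exceed a second threshold; the thresholds and the median-based noise removal are our placeholder choices.

    import numpy as np

    def detect_tracks(images, t1, t2):
        """images: stack (k, h, w) captured with visible light blocked.
        Subtract the per-pixel median (common noise), take the brightest
        residual per pixel, then find bright points and line segments
        between them that stay above the second threshold."""
        composite = images.astype(float) - np.median(images, axis=0)
        frame = composite.max(axis=0)
        bright = list(zip(*np.where(frame > t1)))
        tracks = []
        for i, p in enumerate(bright):
            for q in bright[i + 1:]:
                n = max(abs(q[0] - p[0]), abs(q[1] - p[1])) + 1
                rr = np.linspace(p[0], q[0], n).round().astype(int)
                cc = np.linspace(p[1], q[1], n).round().astype(int)
                if np.all(frame[rr, cc] > t2):
                    tracks.append((p, q))   # candidate high-energy particle track
        return tracks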
Viswanathan, Vijay; Mohan, Viswanathan; Subramani, Poongothai; Parthasarathy, Nandakumar; Subramaniyam, Gayathri; Manoharan, Deepa; Sundaramoorthy, Chandru; Gnudi, Luigi; Viberti, Giancarlo
2013-01-01
Background and objectives: Thiazolidinediones (pioglitazone and rosiglitazone) induce renal epithelial sodium channel (ENaC)–mediated sodium reabsorption, resulting in plasma volume (PV) expansion. Incidence and long-term management of fluid retention induced by thiazolidinediones remain unclear. Design, setting, participants, & measurements: In a 4-week run-in period, rosiglitazone, 4 mg twice daily, was added to a background anti-diabetic therapy in 260 South Indian patients with type 2 diabetes mellitus. Patients with PV expansion (absolute reduction in hematocrit in run-in, ≥1.5 percentage points) entered a randomized, placebo-controlled study to evaluate the effects of amiloride and spironolactone on attenuating rosiglitazone-induced fluid retention. The primary endpoint was change in hematocrit in each diuretic group versus placebo (control group). Results: Of the 260 patients, 70% (n=180) had PV expansion. These 180 patients (70% male; mean age, 47.8 years [range, 30–80 years]) were randomly assigned to rosiglitazone, 4 mg twice daily, plus spironolactone, 50 mg once daily; rosiglitazone, 4 mg twice daily, plus amiloride, 10 mg once daily; or rosiglitazone, 4 mg twice daily, plus placebo for 24 weeks. Hematocrit continued to decrease significantly in the control and spironolactone groups (mean absolute change, −1.2 [P=0.01] and −0.7 [P=0.02] percentage points, respectively), suggesting continued PV expansion. No change occurred with amiloride (mean change, 0.0 percentage points). Amiloride, but not spironolactone, was superior to control (mean hematocrit difference [95% confidence interval] relative to control, 1.27 [0.21–2.55] and 0.49 [−0.79–1.77] percentage points [P=0.04 and P=0.61], respectively). Conclusions: The prevalence of rosiglitazone-induced fluid retention in South Indian patients with type 2 diabetes is high. Amiloride, a direct ENaC blocker, but not spironolactone, prevented protracted fluid retention in these patients. PMID:23184569
Yang, Y; Zhu, X R; Xu, Q G; Metcalfe, H; Wang, Z C; Yang, J K
2012-04-01
To assess the efficacy of using magnetic resonance imaging measurements of retinal oxygenation response to detect early diabetic retinopathy in patients with Type 2 diabetes. Magnetic resonance imaging was conducted during 100% oxygen inhalation in patients with Type 2 diabetes with either no diabetic retinopathy (n = 12) or mild to moderate background diabetic retinopathy (n = 12), as well as in healthy control subjects (n = 12). Meanwhile, changes in retinal oxygenation response were measured. In the healthy control group, levels of retinal oxygenation response increased slowly during 100% oxygen inhalation. In contrast, they increased more quickly and attained homeostasis much earlier in the groups with background diabetic retinopathy (at the 20-min time point) and with no diabetic retinopathy (at the 25-min time point) than in the healthy control group (at the 42-min time point). Furthermore, levels of retinal oxygenation response in the group with background diabetic retinopathy increased more than those of the group with no diabetic retinopathy, which in turn increased more than those of the healthy control group. There are statistically significant differences between the group with background diabetic retinopathy and the healthy control group at the 6-, 8-, 10-, 15-, 20- and 25-min time points (P < 0.05). Taking fundus photography results as the 'gold standard' and using the normal range of the healthy control group, the sensitivity, specificity, positive predictive value, negative predictive value and receiver operating characteristic area for detecting early diabetic retinopathy were 83.33%, 58.33%, 50%, 87.5% and 0.774, respectively. The results indicate that magnetic resonance imaging is a potential screening method and probably a quantitative physiological biomarker for finding early diabetic retinopathy in patients with Type 2 diabetes. © 2011 The Authors. Diabetic Medicine © 2011 Diabetes UK.
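For reference, the reported screening statistics follow from the standard confusion-matrix definitions (a generic helper; any counts plugged in are hypothetical):

    def screening_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
        return {'sensitivity': tp / (tp + fn),
                'specificity': tn / (tn + fp),
                'ppv': tp / (tp + fp),
                'npv': tn / (tn + fn)}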
Bergenstal, Richard M; Freemantle, Nick; Leyk, Malgorzata; Cutler, Gordon B; Hayes, Risa P; Muchmore, Douglas B
2009-09-01
In the concordance model, physician and patient discuss treatment options, explore the impact of treatment decisions from the patient's perspective, and make treatment choices together. We tested, in a concordance setting, whether the availability of AIR inhaled insulin (developed by Alkermes, Inc. [Cambridge, MA] and Eli Lilly and Company [Indianapolis, IN]; AIR is a registered trademark of Alkermes, Inc.), as compared with existing treatment options alone, leads to greater initiation and maintenance of insulin therapy and improves glycemic control in patients with type 2 diabetes. This was a 9-month, multicenter, parallel, open-label study in adult, nonsmoking patients with diabetes not optimally controlled by two or more oral antihyperglycemic medications. Patients were randomized to the Standard Options group (n = 516), in which patients chose a regimen from drugs in each major treatment class excluding inhaled insulin, or the Standard Options + AIR insulin group (n = 505), in which patients had the same choices plus AIR insulin. The primary end points were the proportion of patients in each group using insulin at end point and change in hemoglobin A1C (A1C) from baseline to end point. At end point, 53% of patients in the Standard Options group and 59% in the Standard Options + AIR insulin group were using insulin (P = 0.07). Both groups reduced A1C by about 1.2% and reported increased well-being and treatment satisfaction. The most common adverse event with AIR insulin was transient cough. The opportunity to choose AIR insulin did not affect overall use of insulin at end point or A1C outcomes. Regardless of group assignment, utilizing a shared decision-making approach to treatment choices (concordance model), resulted in improved treatment satisfaction and A1C values at end point. Therefore, increasing patient involvement in treatment decisions may improve outcomes.
Bengoetxea, Ana; Leurs, Françoise; Hoellinger, Thomas; Cebolla, Ana Maria; Dan, Bernard; Cheron, Guy; McIntyre, Joseph
2014-01-01
A central question in neuroscience is how the nervous system generates the spatiotemporal commands needed to realize complex gestures, such as handwriting. A key postulate is that the central nervous system (CNS) builds up complex movements from a set of simpler motor primitives or control modules. In this study we examined the control modules underlying the generation of muscle activations when performing different types of movement: discrete, point-to-point movements in eight different directions and continuous figure-eight movements in both the normal, upright orientation and rotated 90°. To test for the effects of biomechanical constraints, movements were performed in the frontal-parallel or sagittal planes, corresponding to two different nominal flexion/abduction postures of the shoulder. In all cases we measured limb kinematics and surface electromyographic (EMG) signals for seven different muscles acting around the shoulder. We first performed principal component analysis (PCA) of the EMG signals on a movement-by-movement basis. We found a surprisingly consistent pattern of muscle groupings across movement types and movement planes, although we could detect systematic differences between the PCs derived from movements performed in each shoulder posture and between the principal components associated with the different orientations of the figure. Unexpectedly, we found no systematic differences between the figure eights and the point-to-point movements. The first three principal components could be associated with a general co-contraction of all seven muscles plus two patterns of reciprocal activation. From these results, we surmise that both "discrete-rhythmic" movements such as the figure eight and discrete point-to-point movements may be constructed from three fundamental modules: one regulating the impedance of the limb over the time span of the movement and two others operating to generate movement, one aligned with the vertical and the other with the horizontal.
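A minimal sketch of the movement-by-movement PCA step described above, assuming an EMG envelope matrix of shape (time samples × 7 muscles); the data below are random placeholders, not the recorded signals:

```python
# Hedged sketch: PCA of surface EMG envelopes to extract candidate
# muscle groupings (modules); `emg` stands in for one movement's
# (n_samples x 7) envelope matrix, one column per shoulder muscle.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
emg = np.abs(rng.standard_normal((1000, 7)))   # placeholder envelopes

pca = PCA(n_components=3)
scores = pca.fit_transform(emg)       # temporal activation of each module
modules = pca.components_             # (3, 7) muscle weightings per module
print(pca.explained_variance_ratio_)  # variance captured by each module
```

Comparing the `modules` rows across movement types and planes would mirror the consistency analysis the abstract reports.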
Factors influencing sustainability of communally-managed water facilities in rural areas of Zimbabwe
NASA Astrophysics Data System (ADS)
Kativhu, T.; Mazvimavi, D.; Tevera, D.; Nhapi, I.
2017-08-01
Sustainability of point water facilities is a major development challenge in many rural settings of developing countries, including those in Sub-Saharan Africa. This study was done in Zimbabwe to investigate the factors influencing sustainability of rural water supply systems. A total of 399 water points were studied in Nyanga, Chivi and Gwanda districts. Data were collected using a questionnaire, observation checklist and key informant interview guide. Multi-criteria analysis was used to assess the sustainability of water points, and inferential statistics such as Chi-square tests and Analysis of Variance (ANOVA) were used to determine whether there were significant differences in selected variables across districts and types of lifting devices used in the study area. A thematic approach was used to analyze qualitative data. Results show that most water points were not functional and only 17% across the districts were found to be sustainable. A combination of social, technical, financial, environmental and institutional factors was found to influence sustainability. On technical factors, the ANOVA results show that the type of lifting device fitted at a water point significantly influences sustainability (F = 37.4, p < 0.01). Availability of spare parts at community level was found to determine the downtime of different lifting devices in the studied wards. Absence of user committees was found to be central in influencing sustainability, as water points that did not have user committees were not sustainable and most were not functional at the time of the survey. Active participation by communities at the planning stage of water projects was also found to be critical for sustainability, although field results showed passive participation by communities at this critical project stage. Financial factors, namely the adequacy of financial contributions and the establishment of operation and maintenance funds, were also found to be of great importance in sustaining water supply systems. It is recommended that all factors be considered when assessing sustainability since they are interrelated.
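A hedged illustration of the kind of one-way ANOVA reported above (the F-test across lifting-device types); the device groups and sustainability scores are invented placeholders, not the study data:

```python
# Sketch: one-way ANOVA of sustainability scores across three
# hypothetical lifting-device types (synthetic data).
from scipy import stats

device_a = [0.62, 0.55, 0.70, 0.48, 0.66]
device_b = [0.41, 0.38, 0.52, 0.45, 0.36]
device_c = [0.58, 0.61, 0.49, 0.67, 0.71]

f_stat, p_value = stats.f_oneway(device_a, device_b, device_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```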
Spline curve matching with sparse knot sets
Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman
2004-01-01
This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on the relative locations of corresponding knot points and is therefore reliable primarily for dense point sets, we use the deformation energy of a thin-plate-spline mapping between sparse knot points and normalized local...
Bradbury, Penelope; Seymour, Lesley
2009-01-01
Phase II clinical trials have long been used to screen new cancer therapeutics for antitumor activity ("efficacy") worthy of further evaluation. Traditionally, the primary end point used in these screening trials has been the objective response rate (RR), with the desired rate set arbitrarily by the researchers before initiation of the trial. For cytotoxic agents, especially in common tumor types, response has been a reasonably robust and validated surrogate of benefit. Phase II trials with response as an end point have a modest sample size (15-40 patients) and are completed rapidly, allowing early decisions regarding the future development of a given agent. More recently, a number of new agents have proven successful in pivotal phase III studies despite a low or very modest RR in early clinical trials. Researchers have postulated that these novel agents, as a class, may not induce significant regression of tumors, that the use of RR as an end point for phase II studies will therefore result in false-negative results, and that not all available data are used in making the decision. Others have pointed out that even novel agents have proven unsuccessful in pivotal trials when objective responses were not demonstrated in early clinical trials. We review here the historical and current information regarding objective tumor response.
A 3D clustering approach for point clouds to detect and quantify changes at a rock glacier front
NASA Astrophysics Data System (ADS)
Micheletti, Natan; Tonini, Marj; Lane, Stuart N.
2016-04-01
Terrestrial Laser Scanners (TLS) are extensively used in geomorphology to remotely sense landforms and surfaces of any type and to derive digital elevation models (DEMs). Modern devices are able to collect many millions of points, so that working on the resulting dataset is often troublesome in terms of computational effort. Indeed, it is not unusual that raw point clouds are filtered prior to DEM creation, so that only a subset of points is retained and the interpolation process becomes less of a burden. Whilst this procedure is in many cases necessary, it implies a considerable loss of valuable information. First, and even without eliminating points, the common interpolation of points to a regular grid causes a loss of potentially useful detail. Second, it inevitably causes the transition from 3D information to only 2.5D data, where each (x,y) pair must have a unique z-value. Vector-based DEMs (e.g. triangulated irregular networks) partially mitigate these issues, but still require a set of parameters to be set and impose a considerable burden in terms of calculation and storage. For these reasons, being able to perform geomorphological research directly on point clouds would be profitable. Here, we propose an approach to identify erosion and deposition patterns on a very active rock glacier front in the Swiss Alps to monitor sediment dynamics. The general aim is to set up a semiautomatic method to isolate mass movements using 3D-feature identification directly from LiDAR data. An ultra-long range LiDAR RIEGL VZ-6000 scanner was employed to acquire point clouds during three consecutive summers. In order to isolate single clusters of erosion and deposition we applied Density-Based Spatial Clustering of Applications with Noise (DBSCAN), previously employed successfully by Tonini and Abellan (2014) in a similar case for rockfall detection. DBSCAN requires two input parameters, which strongly influence the number, shape and size of the detected clusters: the minimum number of points (i) at a maximum distance (ii) around each core point. Under this condition, seed points are said to be density-reachable by a core point delimiting a cluster around it. A chain of intermediate seed points can connect contiguous clusters, allowing clusters of arbitrary shape to be defined. The novelty of the proposed approach consists in the implementation of a 3D DBSCAN module, where the xyz-coordinates identify each point and the density of points within a sphere is considered. This allows volumetric features to be detected with a higher accuracy, depending only on the actual sampling resolution. The approach is truly 3D and exploits all TLS measurements without the need for interpolation or data reduction. Using this method, enhanced geomorphological activity during the summer of 2015 with respect to the previous two years was observed. We attribute this result to the exceptionally high temperatures of that summer, which we deem responsible for accelerating the melting process at the rock glacier front and probably also increasing creep velocities. References: - Tonini, M. and Abellan, A. (2014). Rockfall detection from terrestrial LiDAR point clouds: A clustering approach using R. Journal of Spatial Information Science, No. 8, pp. 95-110. - Hennig, C. Package fpc: Flexible procedures for clustering. https://cran.r-project.org/web/packages/fpc/index.html, 2015. Accessed 2016-01-12.
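A minimal sketch of the 3D clustering step described above (not the authors' implementation): DBSCAN applied directly to xyz coordinates, with the two input parameters the abstract names; the input file and parameter values are hypothetical:

```python
# Sketch: isolate erosion/deposition clusters in a 3D change-detection
# point cloud with DBSCAN; eps is the maximum neighbour distance and
# min_samples the minimum number of points around each core point.
import numpy as np
from sklearn.cluster import DBSCAN

points = np.loadtxt("change_points.xyz")          # hypothetical (N, 3) file
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(points)

# label -1 marks noise; every other label is one candidate feature
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{n_clusters} clusters, {np.sum(labels == -1)} noise points")
```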
Analysis of Mass Averaged Tissue Doses in CAM, CAF, MAX, and FAX
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Qualls, Garry D.; Clowdsley, Martha S.; Blattnig, Steve R.; Simonsen, Lisa C.; Walker, Steven A.; Singleterry, Robert C.
2009-01-01
To estimate astronaut health risk due to space radiation, one must have the ability to calculate exposure-related quantities averaged over specific organs and tissue types. In this study, we first examine the anatomical properties of the Computerized Anatomical Man (CAM), Computerized Anatomical Female (CAF), Male Adult voXel (MAX), and Female Adult voXel (FAX) models by comparing the masses of various tissues to the reference values specified by the International Commission on Radiological Protection (ICRP). Major discrepancies are found between the CAM and CAF tissue masses and the ICRP reference data for almost all of the tissues. We next examine the distribution of target points used with the deterministic transport code HZETRN to compute mass averaged exposure quantities. A numerical algorithm is used to generate multiple point distributions for many of the effective dose tissues identified in CAM, CAF, MAX, and FAX. It is concluded that the previously published CAM and CAF point distributions were under-sampled and that the set of point distributions presented here should be adequate for future studies involving CAM, CAF, MAX, or FAX. It is concluded that MAX and FAX are more accurate than CAM and CAF for space radiation analyses.
Matsumoto, Keiichi; Kitamura, Keishi; Mizuta, Tetsuro; Shimizu, Keiji; Murase, Kenya; Senda, Michio
2006-02-20
Transmission scanning can be successfully performed with a Cs-137 single-photon-emitting point source for three-dimensional PET imaging. This method is effective for postinjection transmission scanning because of the difference in photon energies. However, scatter contamination in the transmission data lowers the measured attenuation coefficients. The purpose of this study was to investigate the influence of object scatter on the accuracy of attenuation coefficients measured on transmission images. We also compared the results with the conventional germanium line-source method. Two different types of PET scanner, the SET-3000 G/X (Shimadzu Corp.) and the ECAT EXACT HR(+) (Siemens/CTI), were used. For transmission scanning, the SET-3000 G/X used a Cs-137 point source and the ECAT HR(+) a Ge-68/Ga-68 line source. With the SET-3000 G/X, we performed transmission measurements at two energy gate settings, the standard 600-800 keV as well as 500-800 keV. The energy gate setting of the ECAT HR(+) was 350-650 keV. The effects of scattering in a uniform phantom with cross-sectional areas of 201 cm², 314 cm², 628 cm² (two 20-cm-diameter phantoms apposed) and 943 cm² (three 20-cm-diameter phantoms stacked) were acquired without emission activity. First, we evaluated the attenuation coefficients of the two different types of transmission scanning using region of interest (ROI) analysis. In addition, we evaluated the attenuation coefficients with and without segmentation for Cs-137 transmission images using the same analysis. The segmentation method was a histogram-based soft-tissue segmentation process that can also be applied to reconstructed transmission images. In the Cs-137 experiment, the maximum underestimation was 3% without segmentation, which was reduced to less than 1% with segmentation at the center of the largest phantom. In the Ge-68/Ga-68 experiment, the difference in mean attenuation coefficients was stable across all phantoms. We evaluated the accuracy of attenuation coefficients from Cs-137 single-transmission scans. The results for Cs-137 suggest that the amount of scattered photons depends on object size. Although Cs-137 single-transmission scans contained scattered photons, the attenuation coefficient error could be reduced using the segmentation method.
Methods for converging correlation energies within the dielectric matrix formalism
NASA Astrophysics Data System (ADS)
Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario
2018-03-01
Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. The RPA-type methods, however, have a significantly higher computational cost and, similarly to correlated quantum-chemical methods, are characterized by a slow basis-set convergence. In this work we analyzed two different schemes to converge the correlation energy: one based on a more traditional complete basis set extrapolation and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, on six points of the potential-energy surface of the methane-formaldehyde complex, and on reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.
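One common two-point extrapolation model for correlation energies, E(X) = E_CBS + A·X⁻³ in the basis-set cardinal number X, gives the flavor of the "traditional complete basis set extrapolation" route; this Helgaker-type formula is an assumption here, not necessarily the exact scheme the authors used:

```python
# Sketch: two-point CBS extrapolation assuming E(X) = E_CBS + A * X**-3.
def cbs_extrapolate(e_x, x, e_y, y):
    """Extrapolate correlation energies computed with cardinal numbers x < y."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# hypothetical correlation energies (hartree) in triple- and quadruple-zeta bases
print(cbs_extrapolate(-0.3105, 3, -0.3214, 4))
```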
Learning From Past Failures of Oral Insulin Trials.
Michels, Aaron W; Gottlieb, Peter A
2018-07-01
Very recently one of the largest type 1 diabetes prevention trials using daily administration of oral insulin or placebo was completed. After 9 years of study enrollment and follow-up, the randomized controlled trial failed to delay the onset of clinical type 1 diabetes, which was the primary end point. The unfortunate outcome follows the previous large-scale trial, the Diabetes Prevention Trial-Type 1 (DPT-1), which again failed to delay diabetes onset with oral insulin or low-dose subcutaneous insulin injections in a randomized controlled trial with relatives at risk for type 1 diabetes. These sobering results raise the important question, "Where does the type 1 diabetes prevention field move next?" In this Perspective, we advocate for a paradigm shift in which smaller mechanistic trials are conducted to define immune mechanisms and potentially identify treatment responders. The stage is set for these interventions in individuals at risk for type 1 diabetes as Type 1 Diabetes TrialNet has identified thousands of relatives with islet autoantibodies and general population screening for type 1 diabetes risk is under way. Mechanistic trials will allow for better trial design and patient selection based upon molecular markers prior to large randomized controlled trials, moving toward a personalized medicine approach for the prevention of type 1 diabetes. © 2018 by the American Diabetes Association.
Does rational selection of training and test sets improve the outcome of QSAR modeling?
Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander
2012-10-22
Prior to using a quantitative structure-activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform statistical external validation, in which the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
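A hedged sketch of the Kennard-Stone rational division named above: training compounds are chosen to span descriptor space by repeatedly picking the candidate farthest from the already-selected set; the descriptor matrix is a random placeholder:

```python
# Sketch: Kennard-Stone selection of a training set from a descriptor matrix X.
import numpy as np
from scipy.spatial.distance import cdist

def kennard_stone(X, n_train):
    d = cdist(X, X)
    # seed with the two mutually most distant points
    i, j = np.unravel_index(np.argmax(d), d.shape)
    selected = [int(i), int(j)]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_train:
        # each remaining point's distance to its nearest selected point
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining.pop(int(np.argmax(min_d))))
    return selected                 # row indices of the training set

X = np.random.rand(100, 5)          # placeholder descriptors
train_idx = kennard_stone(X, 80)
test_idx = sorted(set(range(100)) - set(train_idx))
```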
NASA Astrophysics Data System (ADS)
Howle, J. F.; Bawden, G. W.; Hunter, L. E.; Rose, R. S.
2009-12-01
High-resolution (centimeter-level) three-dimensional point-cloud imagery of offset glacial outwash deposits was collected using ground-based tripod LiDAR (T-LiDAR) to characterize the cumulative fault slip across the recently identified Polaris fault (Hunter et al., 2009) near Truckee, California. The type-section site for the Polaris fault is located 6.5 km east of Truckee, where progressive right-lateral displacement of middle to late Pleistocene deposits is evident. Glacial outwash deposits, aggraded during the Tioga glaciation, form a flat-lying 'fill' terrace on both the north and south sides of the modern Truckee River. During the Tioga deglaciation, melt water incised into the terrace, producing fluvial scarps or terrace risers (Birkeland, 1964). Subsequently, the terrace risers on both banks have been right-laterally offset by the Polaris fault. By using T-LiDAR on an elevated tripod (4.25 m high), we collected 3D high-resolution (thousands of points per square meter; ± 4 mm) point-cloud imagery of the offset terrace risers. Vegetation was removed from the data using commercial software, and large protruding boulders were manually deleted to generate a bare-earth point-cloud dataset with an average data density of over 240 points per square meter. From the bare-earth point cloud we mathematically reconstructed a pristine terrace/scarp morphology on both sides of the fault, defined coupled sets of piercing points, and extracted a corresponding displacement vector. First, the Polaris fault was approximated as a vertical plane that bisects the offset terrace risers, as well as bisecting linear swales and tectonic depressions in the outwash terrace. Then, piercing points to the vertical fault plane were constructed from the geometry of the geomorphic elements on either side of the fault. On each side of the fault, the best-fit modeled outwash plane is projected laterally and the best-fit modeled terrace riser projected upward to a virtual intersection in space, creating a vector. These constructed vectors were projected to intersection with the fault plane, defining statistically significant piercing points. The distance between the coupled set of piercing points, within the plane of the fault, is the cumulative displacement vector. To assess the variability of the modeled geomorphic surfaces, including surface roughness and nonlinearity, we generated a suite of displacement models by systematically incorporating larger areas of the model domain symmetrically about the fault. Preliminary results of 10 models yield an average cumulative displacement of 5.6 m (1 Std Dev = 0.31 m). As previously described, Tioga deglaciation melt water incised into the outwash terrace, leaving terrace risers that were subsequently offset by the Polaris fault. Therefore, the age of the Tioga outwash terrace represents a maximum limiting age of the tectonic displacement. Using regional age constraints of 15 to 13 kya for the Tioga outwash terrace (Benson et al., 1990; Clark and Gillespie, 1997; James et al., 2002) and the above model results, we estimate a preliminary minimum fault slip rate of 0.40 ± 0.05 mm/yr for the Polaris type-section site.
NASA Astrophysics Data System (ADS)
Zolotaryuk, A. V.
2017-06-01
Several families of one-point interactions are derived from the system consisting of two and three δ-potentials which are regularized by piecewise constant functions. In physical terms such an approximating system represents two or three extremely thin layers separated by some distance. The two-scale squeezing of this heterostructure to one point, as both the width of the δ-approximating functions and the distance between these functions simultaneously tend to zero, is studied using a power parameterization through a squeezing parameter $\varepsilon \to 0$, so that the intensity of each δ-potential is $c_j = a_j \varepsilon^{1-\mu}$, $a_j \in \mathbb{R}$, $j = 1, 2, 3$, the width of each layer is $l = \varepsilon$ and the distance between the layers is $r = c\varepsilon^{\tau}$, $c > 0$. It is shown that at some values of the intensities $a_1$, $a_2$ and $a_3$, the transmission across the limit point potentials is non-zero, whereas outside these (resonance) values the one-point interactions are opaque, splitting the system at the point of singularity into two independent subsystems. Within the interval $1 < \mu < 2$, the resonance sets consist of two curves on the $(a_1, a_2)$-plane and three surfaces in the $(a_1, a_2, a_3)$-space. As the parameter $\mu$ approaches the value $\mu = 2$, three types of splitting of the one-point interactions into countable families are observed.
Alfvén wave dynamics at the neighborhood of a 2.5D magnetic null-point
NASA Astrophysics Data System (ADS)
Sabri, S.; Vasheghani Farahani, S.; Ebadi, H.; Hosseinpour, M.; Fazel, Z.
2018-05-01
The aim of the present study is to highlight the energy transfer via the interaction of magnetohydrodynamic waves with a 2.5D magnetic null-point in a finite plasma-β regime of the solar corona. An initially symmetric Alfvén pulse at a specific distance from a magnetic null-point is kicked towards the isothermal null-point. A shock-capturing Godunov-type PLUTO code is used to solve the set of ideal magnetohydrodynamic equations in the context of wave-plasma energy transfer. As the Alfvén wave propagates towards the magnetic null-point it experiences a lowering of its speed, which ends up releasing energy along the separatrices. In this way, owing to the Alfvén wave, a series of events takes place that contributes towards coronal heating. Nonlinearly induced waves are by-products of the torsional Alfvén interaction with magnetic null-points. The energy of these induced waves, which are fast magnetoacoustic (transverse) and slow magnetoacoustic (longitudinal) waves, is supplied by the Alfvén wave. The nonlinearly induced density perturbations are proportional to the Alfvén wave energy loss. This supplies energy for the propagation of fast and slow magnetoacoustic waves, where in contrast to the fast wave the slow wave experiences a continuous energy increase. As such, the slow wave may transfer its energy to the medium at later times, maintaining a continuous heating mechanism in the neighborhood of a magnetic null-point.
ERIC Educational Resources Information Center
Mamona-Downs, Joanna K.; Megalou, Foteini J.
2013-01-01
The aim of this paper is to examine students' understanding of the limiting behavior of a function from $\mathbb{R}^2$ to $\mathbb{R}$ at a point $P$. This understanding depends on which definition is used for a limit. Several definitions are considered; two of these concern the notion of a neighborhood of $P$, while…
BOREAS RSS-20 POLDER C-130 Measurements of Surface BRDF
NASA Technical Reports Server (NTRS)
Leroy, Marc; Hall, Forrest G. (Editor); Nickerson, Jaime (Editor); Smith, David E. (Technical Monitor)
2000-01-01
This Boreal Ecosystem-Atmosphere Study (BOREAS) Remote Sensing Science (RSS)-20 data set contains measurements of surface bidirectional reflectance distribution function (BRDF) made by the POLarization and Directionality of the Earth's Reflectances (POLDER) instrument over several surface types (pine, spruce, fen) of the BOREAS southern study area (SSA) during the 1994 intensive field campaigns (IFCs). Single-point BRDF values were acquired either from the NASA Ames Research Center (ARC) C-130 aircraft or from a NASA Wallops Flight Facility (WFF) helicopter. A related data set collected from the helicopter platform is available, as is POLDER imagery acquired from the C-130. The data are stored in tabular ASCII files. The data files are available on a CD-ROM (see document number 20010000884) or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
Health maintenance organizations, independent practice associations, and cesarean section rates.
Tussing, A D; Wojtowycz, M A
1994-04-01
This study tests two hypotheses: that a given delivery is less likely to be by cesarean section (c-section) in an HMO (closed-panel health maintenance organization) or IPA (independent practice association), than in other settings; and that where HMO and IPA penetration is high, the probability of a c-section will be reduced for all deliveries, whether in prepaid groups or not. A data set consisting of 104,595 obstetric deliveries in New York state in 1986 is analyzed. A series of probit regressions is estimated, in which the dependent variable is either the probability that a given delivery is by c-section, or that a given delivery will result in a c-section for dystocia or fetal distress. The Live Birth File is linked with SPARCS hospital discharge data and other variables. HMO setting reduces the probability of a cesarean section by 2.5 to 3.0 percentage points. However, this result is likely to be partly an artifact of offsetting diagnostic labeling and of choice of method of delivery, given diagnosis; a better estimate of the effect of HMO setting is -1.3 percentage points. IPA setting appears to affect the probability of a cesarean section even less, perhaps not at all. And HMO and IPA penetration in a region, as measured by HMO and IPA deliveries, respectively, as a percent of all deliveries, has relatively large depressing effects on the probability of a cesarean section. Ceteris paribus, the probability of a c-section is lower for an HMO delivery than for a fee-for-service delivery; however, HMO effects are smaller than previously reported in the literature for other types of inpatient care. For IPA deliveries, the effects are still smaller, perhaps nil. However, HMO and IPA penetration, possibly measuring the degree of competition in obstetrics markets, have important effects on c-section rates, not only in HMO/IPA settings, but throughout an area. These results appear to have important implications for public policy.
NASA Astrophysics Data System (ADS)
Gronz, Oliver; Seeger, Manuel; Klaes, Björn; Casper, Markus C.; Ries, Johannes B.
2015-04-01
Accurate and dense 3D models of soil surfaces can be used in various ways: as initial shapes for erosion models, as benchmark shapes for erosion model outputs, or to derive metrics such as random roughness. One easy and low-cost method to produce these models is structure from motion (SfM). Using this method, two questions arise: Does the soil moisture, which changes the colour, albedo and reflectivity of the soil, influence the model quality? How can the model quality be evaluated? To answer these questions, a suitable data set has been produced: soil has been placed on a tray and areas with different roughness structures have been formed. For different moisture states - dry, medium, saturated - and two different lighting conditions - direct and indirect - sets of high-resolution images at the same camera positions have been taken. From the six image sets, 3D point clouds have been produced using VisualSfM. Visual inspection of the 3D models showed that all models have areas where holes of different sizes occur, but determining a model's quality by visual inspection is obviously a subjective task. One typical approach to evaluate model quality objectively is to estimate the point density on a regular two-dimensional grid: the number of 3D points in each grid cell projected onto a plane is calculated. This works well for surfaces that do not show vertical structures. Along vertical structures, however, many points will be projected onto the same grid cell, so the point density depends more on the shape of the surface than on the quality of the model. Another approach has been applied using the points resulting from Poisson surface reconstructions. One of this algorithm's properties is the filling of holes: new points are interpolated inside the holes. Using the original 3D point cloud and the interpolated Poisson point set, two analyses have been performed: For all Poisson points, the distance to the closest original point cloud member has been calculated, and histograms of the resulting distance distributions have been produced. As the Poisson points also form a connected mesh, the size and distribution of single holes can be estimated by labeling Poisson points that belong to the same hole: each hole gets a specific number, and the area of the mesh formed by each set of Poisson hole points can then be calculated. The result is a set of distinct holes and their sizes. The two approaches showed that the hole-ness of the point cloud depends on the soil moisture and hence on the reflectivity: the distance distribution of the model of the saturated soil shows the smallest number of large distances, the histogram of the medium state shows more large distances, and the dry model shows the largest distances. Models resulting from indirect lighting are better than models resulting from direct light for all moisture states.
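A minimal sketch of the first hole analysis described above, assuming two (N, 3) coordinate arrays are available: for every Poisson-reconstructed point, find the distance to its closest original SfM point, then summarize the distance distribution; the file names are hypothetical:

```python
# Sketch: nearest-neighbour distances from Poisson points to the
# original SfM cloud, as a proxy for hole size and model quality.
import numpy as np
from scipy.spatial import cKDTree

original = np.loadtxt("sfm_points.xyz")       # hypothetical input files
poisson = np.loadtxt("poisson_points.xyz")

dist, _ = cKDTree(original).query(poisson)    # one distance per Poisson point
hist, edges = np.histogram(dist, bins=50)
print(f"median = {np.median(dist):.4f}, max = {dist.max():.4f}")
```

Large distances mark interpolated points deep inside holes, so a heavier histogram tail indicates a hole-ier model.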
Control methods for merging ALSM and ground-based laser point clouds acquired under forest canopies
NASA Astrophysics Data System (ADS)
Slatton, Kenneth C.; Coleman, Matt; Carter, William E.; Shrestha, Ramesh L.; Sartori, Michael
2004-12-01
Merging of point data acquired from ground-based and airborne scanning laser rangers has been demonstrated for cases in which a common set of targets can be readily located in both data sets. However, direct merging of point data is not generally possible if the two data sets do not share common targets. This is often the case for ranging measurements acquired in forest canopies, where airborne systems image the canopy crowns well but receive a relatively sparse set of points from the ground and understory. Conversely, ground-based scans of the understory do not generally sample the upper canopy. An experiment was conducted to establish a viable procedure for acquiring and georeferencing laser ranging data underneath a forest canopy. Once georeferenced, the ground-based data points can be merged with airborne points even in cases where no natural targets are common to both data sets. Two ground-based laser scans were merged and georeferenced with a final absolute error in the target locations of less than 10 cm. This is comparable to the accuracy of the georeferenced airborne data. Thus, merging of the georeferenced ground-based and airborne data should be feasible. The motivation for this investigation is to facilitate a thorough characterization of airborne laser ranging phenomenology over forested terrain as a function of vertical location in the canopy.
Jeon, Jin-Hun; Kim, Hae-Young; Kim, Ji-Hwan; Kim, Woong-Chul
2014-12-01
This study aimed to evaluate the accuracy of digitizing dental impressions of abutment teeth using a white light scanner and to compare the findings among tooth types. To assess precision, impressions of a canine, premolar, and molar prepared to receive all-ceramic crowns were repeatedly scanned to obtain five sets of 3-D data (STL files). Point clouds were compared and error sizes were measured (n=10 per type). Next, to evaluate trueness, impressions of the teeth were rotated by 10°-20° and scanned. The obtained data were compared with the first set of data from the precision assessment, and the error sizes were measured (n=5 per type). The Kruskal-Wallis test was performed to evaluate precision and trueness among the three tooth types, and post-hoc comparisons were performed using the Mann-Whitney U test with Bonferroni correction (α=.05). Precision discrepancies for the canine, premolar, and molar were 3.7 µm, 3.2 µm, and 7.3 µm, respectively, indicating the poorest precision for the molar (P<.001). Trueness discrepancies for the three tooth types were 6.2 µm, 11.2 µm, and 21.8 µm, respectively, indicating the poorest trueness for the molar (P=.007). With respect to accuracy, the molar showed the largest discrepancies compared with the canine and premolar. Digitizing dental impressions of abutment teeth using a white light scanner was assessed to be a highly accurate method and provided discrepancy values in a clinically acceptable range. Further study is needed to improve the digitizing performance of white light scanning on axial walls.
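A hedged illustration of the statistics reported above: a Kruskal-Wallis test across the three tooth types followed by pairwise Mann-Whitney U tests at a Bonferroni-corrected alpha; the discrepancy values are synthetic placeholders:

```python
# Sketch: Kruskal-Wallis plus Bonferroni-corrected pairwise Mann-Whitney U
# tests on (synthetic) precision discrepancies in micrometres.
from itertools import combinations
from scipy import stats

disc = {"canine":   [3.5, 3.9, 3.6, 3.8, 3.7],
        "premolar": [3.1, 3.3, 3.2, 3.0, 3.4],
        "molar":    [7.0, 7.5, 7.2, 7.6, 7.1]}

h, p = stats.kruskal(*disc.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

alpha = 0.05 / 3                      # Bonferroni correction for 3 pairs
for a, b in combinations(disc, 2):
    u, p = stats.mannwhitneyu(disc[a], disc[b])
    print(a, b, f"p = {p:.4f}", "significant" if p < alpha else "n.s.")
```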
Phelps, G.A.
2008-01-01
This report describes some simple spatial statistical methods to explore the relationships of scattered points to geologic or other features, represented by points, lines, or areas. It also describes statistical methods to search for linear trends and clustered patterns within the scattered point data. Scattered points are often contained within irregularly shaped study areas, necessitating the use of methods largely unexplored in the point pattern literature. The methods take advantage of the power of modern GIS toolkits to numerically approximate the null hypothesis of randomly located data within an irregular study area. Observed distributions can then be compared with the null distribution of a set of randomly located points. The methods are non-parametric and are applicable to irregularly shaped study areas. Patterns within the point data are examined by comparing the distribution of the orientations of the vectors defined by each pair of points within the data with the equivalent distribution for a random set of points within the study area. A simple model is proposed to describe linear or clustered structure within scattered data. A scattered data set of damage to pavement and pipes, recorded after the 1989 Loma Prieta earthquake, is used as an example to demonstrate the analytical techniques. The damage is found to be located preferentially nearer a set of mapped lineaments than randomly scattered damage, suggesting that range-front faulting along the base of the Santa Cruz Mountains is related to both the earthquake damage and the mapped lineaments. The damage also exhibits two non-random patterns: a single cluster of damage centered in the town of Los Gatos, California, and a linear alignment of damage along the range front of the Santa Cruz Mountains, California. The linear alignment of damage is strongest between 45° and 50° northwest. This agrees well with the mean trend of the mapped lineaments, measured as 49° northwest.
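A loose sketch of the pair-orientation test described above, assuming the shapely library for the irregular study-area polygon: the azimuth distribution of all pairs of observed points is compared against pairs of randomly located points in the same area; the polygon outline and point data are placeholders:

```python
# Sketch: compare pairwise orientations of scattered points against a
# Monte Carlo null of random points in an irregular study area.
import numpy as np
from shapely.geometry import Point, Polygon

def pair_orientations(pts):
    dx = pts[:, None, 0] - pts[None, :, 0]
    dy = pts[:, None, 1] - pts[None, :, 1]
    ang = np.degrees(np.arctan2(dy, dx)) % 180.0   # undirected orientations
    return ang[np.triu_indices(len(pts), k=1)]

def random_in_polygon(poly, n, rng):
    minx, miny, maxx, maxy = poly.bounds
    pts = []
    while len(pts) < n:                            # rejection sampling
        p = rng.uniform((minx, miny), (maxx, maxy))
        if poly.contains(Point(*p)):
            pts.append(p)
    return np.array(pts)

rng = np.random.default_rng(1)
area = Polygon([(0, 0), (10, 0), (12, 7), (3, 9)])   # placeholder outline
observed = random_in_polygon(area, 50, rng)          # stands in for real data
obs_angles = pair_orientations(observed)
null_angles = pair_orientations(random_in_polygon(area, 50, rng))
```

Comparing histograms of `obs_angles` against many replicates of `null_angles` approximates the null distribution for an irregular study area.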
Occupancy in community-level studies
MacKenzie, Darryl I.; Nichols, James; Royle, Andy; Pollock, Kenneth H.; Bailey, Larissa L.; Hines, James
2018-01-01
Another type of multi-species study is that focused on community-level metrics such as species richness. In this chapter we detail how some of the single-species occupancy models described in earlier chapters have been applied, or extended, for use in such studies, while accounting for imperfect detection. We highlight how Bayesian methods using MCMC are particularly useful in such settings for easily calculating relevant community-level summaries based on presence/absence data. These modeling approaches can be used to assess richness at a single point in time, or to investigate changes in the species pool over time.
Rutgers zodiacal light experiment on OSO-6
NASA Technical Reports Server (NTRS)
Carroll, B.
1975-01-01
A detector was placed in a slowly spinning wheel on OSO-6, whose axis was perpendicular to the line drawn to the sun, to measure the surface brightness and polarization at all elongations from the immediate neighborhood of the sun to the anti-solar point. Different wavelength settings and polarizations were calculated from the known order-of-magnitude brightness of the zodiacal light. The measuring sequence was arranged to give longer integration times for the regions of lower surface brightness. Three types of analysis to which the data from OSO-6 were subjected are outlined: (1) photometry, (2) colorimetry and (3) polarimetry.
The SE role in establishing, verifying and controlling top-level program requirements
NASA Technical Reports Server (NTRS)
Mathews, Charles W.
1993-01-01
The program objectives and requirements described in the preceding paragraphs emphasize mission demonstrations. Obtaining desired science or applications information is another type of program objective. The program requirements then state the need for specific data, usually specifying a particular instrument or instrument set; the operating conditions under which the data is to be obtained (e.g., orbit altitude, field of view, and pointing accuracy); and the data handling and use. Conversely, a new instrument may be conceived or created with the program objective to establish its use potential. The Multispectral Scanner employed in the Landsat program is an example.
USGS Toxic Substances Hydrology Program, 2010
Buxton, Herbert T.
2010-01-01
The U.S. Geological Survey (USGS) Toxic Substances Hydrology Program adapts research priorities to address the most important contamination issues facing the Nation and to identify new threats to environmental health. The Program investigates two major types of contamination problems: * Subsurface Point-Source Contamination, and * Watershed and Regional Contamination. Research objectives include developing remediation methods that use natural processes, characterizing and remediating contaminant plumes in fractured-rock aquifers, identifying new environmental contaminants, characterizing new and understudied pesticides in common pesticide-use settings, explaining mercury methylation and bioaccumulation, and developing approaches for remediating watersheds affected by active and historic mining.
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2012 CFR
2012-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2010 CFR
2010-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2014 CFR
2014-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
El-Zawawy, Mohamed A.
2014-01-01
This paper introduces new approaches for the analysis of frequent statement and dereference elimination for imperative and object-oriented distributed programs running on parallel machines equipped with hierarchical memories. The paper uses languages whose address spaces are globally partitioned. Distributed programs allow defining data layout and threads writing to and reading from other thread memories. Three type systems (for imperative distributed programs) are the tools of the proposed techniques. The first type system defines for every program point a set of calculated (ready) statements and memory accesses. The second type system uses an enriched version of the types of the first type system and determines which of the ready statements and memory accesses are used later in the program. The third type system uses the information gathered so far to eliminate unnecessary statement computations and memory accesses (the analysis of frequent statement and dereference elimination). Extensions to these type systems are also presented to cover object-oriented distributed programs. Two advantages of our work over related work are the following. The hierarchical style of concurrent parallel computers is similar to the memory model used in this paper. In our approach, each analysis result is assigned a type derivation (which serves as a correctness proof). PMID:24892098
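A very loose sketch, far simpler than the paper's type systems, of the flavor of the first analysis: a forward pass over straight-line code collects expressions already computed ("ready") at each program point, so a later pass can flag recomputations as elimination candidates; the toy program and the substring-based kill rule are illustrative only:

```python
# Sketch: forward "ready expressions" pass over straight-line assignments.
prog = [("a", "x+y"), ("b", "x+y"), ("x", "1"), ("c", "x+y")]

ready = set()
for target, expr in prog:
    if expr in ready:
        print(f"{target} = {expr}  <- redundant, earlier value may be reused")
    ready.add(expr)
    # kill: an assignment invalidates expressions mentioning the target
    # (crude substring test; a real analysis would parse the expressions)
    ready = {e for e in ready if target not in e}
```

Here the second `x+y` is flagged as ready, while the one after the write to `x` is not.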
DOT National Transportation Integrated Search
2016-09-01
We consider the problem of subspace clustering: given points that lie on or near the union of many low-dimensional linear subspaces, recover the subspaces. To this end, one first identifies sets of points close to the same subspace and uses the sets ...
Napadow, Vitaly; Liu, Jing; Kaptchuk, Ted J
2004-12-01
Acupuncture textbooks mention a wide assortment of indications for each acupuncture point and, conversely, each disease or indication can be treated by a wide assortment of acupoints. However, little systematic information exists on how acupuncture is actually used in practice: i.e. which points are actually selected and for which conditions. This study prospectively gathered data on acupuncture point usage in two primarily acupuncture hospital clinics in Beijing, China. Of the more than 150 unique acupoints, the 30 most commonly used points represented 68% of the total number of acupoints needled at the first clinic, and 63% of points needled at the second clinic. While acupuncturists use a similar set of most prevalent points, such as LI-4 (used in >65% of treatments at both clinic sites), this core of points only partially overlaps. These results support the hypothesis that while the most commonly used points are similar from one acupuncturist to another, each practitioner tends to have certain acupoints, which are favorites as core points or to round out the point prescription. In addition, the results of this study are consistent with the recent development of "manualized" protocols in randomized controlled trials of acupuncture where a fixed set of acupoints are augmented depending on individualized signs and symptoms (TCM patterns).
Colman, Kerri L; Dobbe, Johannes G G; Stull, Kyra E; Ruijter, Jan M; Oostra, Roelof-Jan; van Rijn, Rick R; van der Merwe, Alie E; de Boer, Hans H; Streekstra, Geert J
2017-07-01
Almost all European countries lack contemporary skeletal collections for the development and validation of forensic anthropological methods. Furthermore, legal, ethical and practical considerations hinder the development of skeletal collections. A virtual skeletal database derived from clinical computed tomography (CT) scans provides a potential solution. However, clinical CT scans are typically generated with varying settings. This study investigates the effects of image segmentation and varying imaging conditions on the precision of virtually modelled pelves. An adult human cadaver was scanned using varying imaging conditions, such as scanner type and standard patient scanning protocol, slice thickness and exposure level. The pelvis was segmented from the various CT images, resulting in virtually modelled pelves. The precision of the virtual modelling was determined per polygon mesh point. The fraction of mesh points resulting in point-to-point distance variations of 2 mm or less (95% confidence interval (CI)) was reported. Colour mapping was used to visualise modelling variability. At almost all (>97%) locations across the pelvis, the point-to-point distance variation is less than 2 mm (CI = 95%). In >91% of the locations, the point-to-point distance variation was less than 1 mm (CI = 95%). This indicates that the geometric variability of the virtual pelvis as a result of segmentation and imaging conditions rarely exceeds the generally accepted linear error of 2 mm. Colour mapping shows that areas with large variability are predominantly joint surfaces. Therefore, results indicate that segmented bone elements from patient-derived CT scans are a sufficiently precise source for creating a virtual skeletal database.
MID Plot: a new lithology technique. [Matrix identification plot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clavier, C.; Rust, D.H.
1976-01-01
Lithology interpretation by the Litho-Porosity (M-N) method has been used for years, but is evidently too cumbersome and ambiguous for widespread acceptance as a field technique. To set aside these objections, another method has been devised. Instead of the log-derived parameters M and N, the MID Plot uses quasi-physical quantities, $(\rho_{ma})_a$ and $(\Delta t_{ma})_a$, as its porosity-independent variables. These parameters, taken from suitably scaled Neutron-Density and Sonic-Neutron crossplots, define a unique matrix mineral or mixture for each point on the logs. The matrix points on the MID Plot thus remain constant in spite of changes in mud filtrate, porosity, or neutron tool types (all of which significantly affect the M-N Plot). This new development is expected to bring welcome relief in areas where lithology identification is a routine part of log analysis.
Determination of stores pointing error due to wing flexibility under flight load
NASA Technical Reports Server (NTRS)
Lokos, William A.; Bahm, Catherine M.; Heinle, Robert A.
1995-01-01
The in-flight elastic wing twist of a fighter-type aircraft was studied to provide an improved on-board real-time prediction of pointing variations at three wing store stations. This is an important capability for correcting sensor-pod alignment variation or for establishing the initial conditions of iron bombs or smart weapons prior to release. The original algorithm was based upon coarse measurements. The electro-optical Flight Deflection Measurement System (FDMS) measured the deformed wing shape in flight under maneuver loads to provide a higher-resolution database from which an improved twist prediction algorithm could be developed. The FDMS produced excellent, repeatable data. In addition, a NASTRAN finite-element analysis was performed to provide additional elastic deformation data. The FDMS data combined with the NASTRAN analysis indicated that an improved prediction algorithm could be derived by using a different set of aircraft parameters, namely normal acceleration, stores configuration, Mach number, and gross weight.
Performance, emissions, and physical characteristics of a rotating combustion aircraft engine
NASA Technical Reports Server (NTRS)
Berkowitz, M.; Hermes, W. L.; Mount, R. E.; Myers, D.
1976-01-01
The RC2-75, a liquid cooled two chamber rotary combustion engine (Wankel type), designed for aircraft use, was tested and representative baseline (212 KW, 285 BHP) performance and emissions characteristics established. The testing included running fuel/air mixture control curves and varied ignition timing to permit selection of desirable and practical settings for running wide open throttle curves, propeller load curves, variable manifold pressure curves covering cruise conditions, and EPA cycle operating points. Performance and emissions data were recorded for all of the points run. In addition to the test data, information required to characterize the engine and evaluate its performance in aircraft use is provided over a range from one half to twice its present power. The exhaust emissions results are compared to the 1980 EPA requirements. Standard day take-off brake specific fuel consumption is 356 g/KW-HR (.585 lb/BHP-HR) for the configuration tested.
Model selection bias and Freedman's paradox
Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.
2010-01-01
In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and of model selection bias, the bias introduced by using the data to select a single seemingly "best" model from an (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level, whereas traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
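A minimal sketch of the Akaike-weight machinery behind a model-averaging estimator of this kind: each candidate model's AIC is converted into a weight, and a common parameter estimate is averaged across the model set; the AIC values and per-model estimates are hypothetical:

```python
# Sketch: Akaike weights and a model-averaged estimate of one coefficient.
import numpy as np

aic = np.array([102.3, 100.1, 105.7, 101.0])   # hypothetical AICs
beta = np.array([0.42, 0.38, 0.55, 0.40])      # that coefficient per model

delta = aic - aic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()                                   # Akaike weights
print("weights:", np.round(w, 3))
print("model-averaged estimate:", float(np.dot(w, beta)))
```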
Control allocation for gimballed/fixed thrusters
NASA Astrophysics Data System (ADS)
Servidia, Pablo A.
2010-02-01
Some overactuated control systems use a control distribution law between the controller and the set of actuators, usually called a control allocator. Beyond the control allocator, the configuration of actuators may be designed to be able to operate after a single point of failure, or for system optimization and/or decentralization objectives. For some types of actuators, a control allocator is used even without redundancy, a good example being the design and operation of thruster configurations. In fact, as the thruster mass flow direction and magnitude can only be changed within certain limits, this must be considered in the feedback implementation. In this work, the thruster configuration design is considered for the fixed (F), single-gimbal (SG) and double-gimbal (DG) thruster cases. The minimum number of thrusters for each case is obtained, and for the resulting configurations a specific control allocation is proposed using a nonlinear programming algorithm, under nominal and single-point-of-failure conditions.
Environment parameters and basic functions for floating-point computation
NASA Technical Reports Server (NTRS)
Brown, W. S.; Feldman, S. I.
1978-01-01
A language-independent proposal for environment parameters and basic functions for floating-point computation is presented. Basic functions are proposed to analyze, synthesize, and scale floating-point numbers. The model provides a small set of parameters and a small set of axioms along with sharp measures of roundoff error. The parameters and functions can be used to write portable and robust codes that deal intimately with the floating-point representation. Subject to underflow and overflow constraints, a number can be scaled by a power of the floating-point radix inexpensively and without loss of precision. A specific representation for FORTRAN is included.
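A hedged modern analogue of the analyze/synthesize/scale functions proposed (Python standard library rather than the paper's FORTRAN binding): `math.frexp` decomposes a number into fraction and exponent, and `math.ldexp` rescales by a power of the radix exactly, with no rounding, mirroring the exact scaling property described:

```python
# Sketch: analyze, synthesize and scale a float by powers of the radix (2).
import math

x = 0.15625
frac, exp = math.frexp(x)      # analyze: x == frac * 2**exp, 0.5 <= |frac| < 1
print(frac, exp)               # 0.625 -2

y = math.ldexp(frac, exp + 3)  # synthesize/scale: multiply by 2**3 exactly
assert y == x * 8.0            # exact, barring overflow/underflow
```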
An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification
ERIC Educational Resources Information Center
Wang, Jun; Samal, Ashok; Rong, Panying; Green, Jordan R.
2016-01-01
Purpose: The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements. Method: The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of…
Halovic, Shaun; Kroos, Christian
2017-12-01
This data set describes the experimental data collected and reported in the research article "Walking my way? Walker gender and display format confounds the perception of specific emotions" (Halovic and Kroos, in press) [1]. The data set represents perceiver identification rates for different emotions (happiness, sadness, anger, fear and neutral), as displayed by full-light, point-light and synthetic point-light walkers. The perceiver identification scores have been transformed into H_t rates, which represent proportions/percentages of correct identifications above what would be expected by chance. The data set also provides H_t rates separately for male, female and ambiguously gendered walkers.
Influence of alignment errors of a telescope system on its aberration field
NASA Astrophysics Data System (ADS)
Shack, R. V.; Thompson, K.
1980-01-01
The study of aberrations in a system is considered. It is pointed out that a system in which the elements are tilted and decentered has no axial symmetry, and in fact no symmetry at all if the tilts and decentrations are not coplanar. It is customary in such a case to give up on an aberration-theoretic treatment and simply trace enough rays to produce a set of spot diagrams for various points in the field. However, in connection with the lack of symmetry, it is necessary to select a relatively large number of points. The considered investigation is concerned with an aberration-theoretic approach which can be applied to such systems. This approach provides insight into the field behavior of the aberrations with great economy in the calculation. It is based on a concept suggested by Buchroeder (1976). In the given case, this concept considers, for the component fields corresponding to the various surfaces, centers of symmetry which do not coincide. Attention is given to the procedure for locating the centers of symmetry, aberration fields, spherical aberration, and various types of astigmatism.
Accurate attitude determination of the LACE satellite
NASA Technical Reports Server (NTRS)
Miglin, M. F.; Campion, R. E.; Lemos, P. J.; Tran, T.
1993-01-01
The Low-power Atmospheric Compensation Experiment (LACE) satellite, launched in February 1990 by the Naval Research Laboratory, uses a magnetic damper on a gravity gradient boom and a momentum wheel with its axis perpendicular to the plane of the orbit to stabilize and maintain its attitude. Satellite attitude is determined using three types of sensors: a conical Earth scanner, a set of sun sensors, and a magnetometer. The Ultraviolet Plume Instrument (UVPI), on board LACE, consists of two intensified CCD cameras and a gimballed pointing mirror. The primary purpose of the UVPI is to image rocket plumes from space in the ultraviolet and visible wavelengths. Secondary objectives include imaging stars, atmospheric phenomena, and ground targets. The problem facing the UVPI experimenters is that the sensitivity of the LACE satellite attitude sensors is not always adequate to correctly point the UVPI cameras. Our solution is to point the UVPI cameras at known targets and use the information thus gained to improve attitude measurements. This paper describes the three methods developed to determine improved attitude values using the UVPI for both real-time operations and post observation analysis.
Lotka-Volterra competition models for sessile organisms.
Spencer, Matthew; Tanner, Jason E
2008-04-01
Markov models are widely used to describe the dynamics of communities of sessile organisms, because they are easily fitted to field data and provide a rich set of analytical tools. In typical ecological applications, at any point in time, each point in space is in one of a finite set of states (e.g., species, empty space). The models aim to describe the probabilities of transitions between states. In most Markov models for communities, these transition probabilities are assumed to be independent of state abundances. This assumption is often suspected to be false and is rarely justified explicitly. Here, we start with simple assumptions about the interactions among sessile organisms and derive a model in which transition probabilities depend on the abundance of destination states. This model is formulated in continuous time and is equivalent to a Lotka-Volterra competition model. We fit this model and a variety of alternatives in which transition probabilities do not depend on state abundances to a long-term coral reef data set. The Lotka-Volterra model describes the data much better than all models we consider other than a saturated model (a model with a separate parameter for each transition at each time interval, which by definition fits the data perfectly). Our approach provides a basis for further development of stochastic models of sessile communities, and many of the methods we use are relevant to other types of community. We discuss possible extensions to spatially explicit models.
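As a rough illustration of the model class (not the authors' fitted coral reef model), the sketch below integrates a two-state Lotka-Volterra competition system with made-up parameters, showing how transition rates depend on the abundances of the destination states.

```python
import numpy as np

def lv_competition_step(x, r, alpha, dt):
    """One Euler step of Lotka-Volterra competition,
    dx_i/dt = r_i * x_i * (1 - sum_j alpha_ij * x_j)."""
    x = np.asarray(x, dtype=float)
    dx = r * x * (1.0 - alpha @ x)
    return np.clip(x + dt * dx, 0.0, None)   # abundances stay non-negative

# Two competing sessile 'states' with illustrative (made-up) parameters
x = np.array([0.3, 0.2])                     # initial cover fractions
r = np.array([0.8, 0.5])
alpha = np.array([[1.0, 0.5], [0.4, 1.0]])   # coexistence at (0.625, 0.75)
for _ in range(1000):
    x = lv_competition_step(x, r, alpha, dt=0.01)
print(x)
```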
Radiometer Calibrations: Saving Time by Automating the Gathering and Analysis Procedures
NASA Technical Reports Server (NTRS)
Sadino, Jeffrey L.
2005-01-01
Mr. Abtahi custom-designs radiometers for Mr. Hook's research group. Inherently, when the radiometers report the temperature of arbitrary surfaces, the results are affected by errors in accuracy. This problem can be reduced if the errors can be accounted for in a polynomial. This is achieved by pointing the radiometer at a constant-temperature surface. We have been using a Hartford Scientific WaterBath. The measurements from the radiometer are collected at many different temperatures and compared to the measurements made by a Hartford Chubb thermometer with four-decimal-place resolution. The data are analyzed and fitted to a fifth-order polynomial. This formula is then uploaded into the radiometer software, enabling accurate data gathering. Traditionally, Mr. Abtahi has done this by hand, spending several hours of his time setting the temperature, waiting for stabilization, taking measurements, and then repeating for other temperatures. My program, written in the Python language, has enabled the data gathering and analysis process to be handed off to a less-senior member of the team. Simply by entering several initial settings, the program will simultaneously control all three instruments and organize the data in a form suitable for computer analysis, thus giving the desired fifth-order polynomial. This will save time, allow for a more complete calibration data set, and allow for base calibrations to be developed. The program is expandable to simultaneously take any type of measurement from up to nine distinct instruments.
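The calibration step reduces to an ordinary least-squares polynomial fit. A minimal sketch follows; the reading pairs are placeholders, not the project's data.

```python
import numpy as np

# Hypothetical calibration pairs: raw radiometer readings vs. reference
# thermometer readings (values are illustrative only).
radiometer = np.array([10.2, 15.1, 20.3, 25.2, 30.4, 35.1, 40.2, 45.3])
reference  = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0, 45.0])

# Fit the fifth-order correction polynomial described above.
coeffs = np.polyfit(radiometer, reference, deg=5)
correct = np.poly1d(coeffs)

print(correct(22.7))   # corrected temperature for a raw reading
```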
Factors that impact clinical laboratory scientists' commitment to their work organizations.
Bamberg, Richard; Akroyd, Duane; Moore, Ti'eshia M
2008-01-01
To assess the predictive ability of various aspects of the work environment for organizational commitment. A questionnaire measuring three dimensions of organizational commitment along with five aspects of work environment and 10 demographic and work setting characteristics was sent to a national, convenience sample of clinical laboratory professionals. All persons obtaining the CLS certification by NCA from January 1, 1997 to December 31, 2006. Only respondents who worked full-time in a clinical laboratory setting were included in the database. Levels of affective, normative, and continuance organizational commitment, organizational support, role clarity, role conflict, transformational leadership behavior of supervisor, and organizational type, total years work experience in clinical laboratories, and educational level of respondents. Questionnaire items used either a 7-point or 5-point Likert response scale. Based on multiple regression analysis for the 427 respondents, organizational support and transformational leadership behavior were found to be significant positive predictors of affective and normative organizational commitment. Work setting (non-hospital laboratory) and total years of work experience in clinical laboratories were found to be significant positive predictors of continuance organizational commitment. Overall the organizational commitment levels for all three dimensions were at the neutral rating or below in the slightly disagree range. The results indicate a less than optimal level of organizational commitment to employers, which were predominantly hospitals, by CLS practitioners. This may result in continuing retention problems for hospital laboratories. The results offer strategies for improving organizational commitment via the significant predictors.
17 CFR 240.14a-5 - Presentation of information in proxy statement.
Code of Federal Regulations, 2011 CFR
2011-04-01
... roman type at least as large and as legible as 10-point modern type, except that to the extent necessary..., may be in roman type at least as large and as legible as 8-point modern type. All such type shall be...
Hydrologic indices for nontidal wetlands
Lent, Robert M.; Weiskel, Peter K.; Lyford, Forest P.; Armstrong, David S.
1997-01-01
Two sets of hydrologic indices were developed to characterize the water-budget components of nontidal wetlands. The first set consisted of six water-budget indices for input and output variables, and the second set consisted of two hydrologic interaction indices derived from the water-budget indices. The indices then were applied to 19 wetlands with previously published water-budget data. Two trilinear diagrams for each wetland were constructed, one for the three input indices and another for the three output indices. These two trilinear diagrams then were combined with a central quadrangle to form a Piper-type diagram, with data points from the trilinear diagrams projected onto the quadrangle. The quadrangle then was divided into nine fields that summarized the water-budget information. Two quantitative "interaction indices" were calculated from two of the six water-budget indices (precipitation and evapotranspiration). They also were obtained graphically from the water-budget indices, which were first projected to the central quadrangle of a Piper-type diagram from the flanking trilinear plots. The first interaction index (l) defines the strength of interaction between a wetland and the surrounding ground- and surface-water system. The second interaction index (S) defines the nature of the interaction between the wetland and the surrounding ground- and surface-water system (source versus sink). Evaluation of these indices using published wetland water-budget data illustrates the usefulness of the technique.
Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman
2003-01-01
Splines can be used to approximate noisy data with a few control points. This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on relative locations of corresponding knot points and is thus reliable primarily for dense point sets, we use deformation energy of...
NASA Astrophysics Data System (ADS)
Erener, Arzu; Sivas, A. Abdullah; Selcuk-Kestel, A. Sevtap; Düzgün, H. Sebnem
2017-07-01
All quantitative landslide susceptibility mapping (QLSM) methods require two basic data types, namely a landslide inventory and factors that influence landslide occurrence (landslide influencing factors, LIF). Depending on the type of landslide, the nature of the triggers and the LIF, the accuracy of QLSM methods differs. Moreover, how to balance the number of 0s (non-occurrence) and 1s (occurrence) in the training set obtained from the landslide inventory, and how to select which of the 1s and 0s to include in QLSM models, play a critical role in the accuracy of the QLSM. Although the performance of various QLSM methods has been investigated extensively in the literature, the challenge of training set construction has not been adequately investigated for QLSM methods. In order to tackle this challenge, in this study three different training set selection strategies, along with the original data set, are used to test the performance of three different regression methods, namely Logistic Regression (LR), Bayesian Logistic Regression (BLR) and Fuzzy Logistic Regression (FLR). The first sampling strategy is proportional random sampling (PRS), which takes into account a weighted selection of landslide occurrences in the sample set. The second method, non-selective nearby sampling (NNS), includes randomly selected sites and their surrounding neighboring points at certain preselected distances to include the impact of clustering. Selective nearby sampling (SNS) is the third method, which concentrates on the group of 1s and their surrounding neighborhood; a randomly selected group of landslide sites and their neighborhood are considered in the analyses, with parameters similar to NNS. It is found that the LR-PRS, FLR-PRS and BLR-whole-data set-ups, in that order, yield the best fits among the alternatives. The results indicate that in QLSM based on regression models, avoidance of spatial correlation in the data set is critical for the model's performance.
NASA Astrophysics Data System (ADS)
Piasecki, M.; Ji, P.
2014-12-01
Geoscience data come in many flavors determined by the type of data: continuous data on a grid or mesh, or discrete data collected at points either as one-time samples or as streams coming off sensors; they can also encompass digital files of any type, such as text files, WORD or EXCEL documents, or audio and video files. We present a storage facility comprised of six nodes, each specialized to host a certain data type: grid-based data (netCDF on a THREDDS server), GIS data (shapefiles using GeoServer), point time series data (CUAHSI ODM), sample data (EDBS), and any digital data (RAMADAA), plus a server for remote sensing data and its products. While there is overlap in data type storage capabilities (rasters can go into several of these nodes), we prefer to use dedicated storage facilities that a) are freeware, b) have a good degree of maturity, and c) have shown their utility for storing a certain type. This arrangement also allows these commonly used software stacks and storage solutions to be placed side-by-side to develop interoperability strategies. We have used a DRUPAL-based system to handle user registration and authentication, and also use that system for data submission and data search. In support of this system we developed an extensive controlled vocabulary that amalgamates various CVs used in the geoscience community in order to achieve as high a degree of recognition as possible, such as the CF conventions, CUAHSI CVs, NASA (GCMD), EPA and USGS taxonomies, and GEMET, in addition to ontological representations such as SWEET.
Mitochondrial flashes regulate ATP homeostasis in the heart
Wang, Xianhua; Zhang, Xing; Wu, Di; Huang, Zhanglong; Hou, Tingting; Jian, Chongshu; Yu, Peng; Lu, Fujian; Zhang, Rufeng; Sun, Tao; Li, Jinghang; Qi, Wenfeng; Wang, Yanru; Gao, Feng; Cheng, Heping
2017-01-01
The maintenance of a constant ATP level (‘set-point’) is a vital homeostatic function shared by eukaryotic cells. In particular, mammalian myocardium exquisitely safeguards its ATP set-point despite 10-fold fluctuations in cardiac workload. However, the exact mechanisms underlying this regulation of ATP homeostasis remain elusive. Here we show that mitochondrial flashes (mitoflashes), a recently discovered dynamic activity of mitochondria, play an essential role in the auto-regulation of the ATP set-point in the heart. Specifically, mitoflashes negatively regulate ATP production in isolated respiring mitochondria, and their activity waxes and wanes to counteract the ATP supply-demand imbalance caused by superfluous substrate and altered workload in cardiomyocytes. Moreover, manipulating mitoflash activity is sufficient to inversely shift the otherwise stable ATP set-point. Mechanistically, the Bcl-xL-regulated proton leakage through F1Fo-ATP synthase appears to mediate the coupling between mitoflash production and ATP set-point regulation. These findings indicate that mitoflashes appear to constitute a digital auto-regulator for ATP homeostasis in the heart. DOI: http://dx.doi.org/10.7554/eLife.23908.001 PMID:28692422
Data Point Averaging for Computational Fluid Dynamics Data
NASA Technical Reports Server (NTRS)
Norman, Jr., David (Inventor)
2016-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
Data Point Averaging for Computational Fluid Dynamics Data
NASA Technical Reports Server (NTRS)
Norman, David, Jr. (Inventor)
2014-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
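A minimal sketch of the averaging step described in these two records follows. The function name and the toy data are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def subarea_averages(values, assign):
    """Average a CFD flow parameter over sub-areas: `values` holds the
    parameter at each surface point and `assign` the sub-area index of
    that point, mirroring the averaging step described above."""
    values = np.asarray(values, dtype=float)
    assign = np.asarray(assign)
    return {int(a): float(values[assign == a].mean())
            for a in np.unique(assign)}

# Toy example: six CFD points split between two sub-areas
print(subarea_averages([1.0, 1.2, 0.9, 3.0, 3.2, 2.8], [0, 0, 0, 1, 1, 1]))
```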
Bonetti, Marco; Pagano, Marcello
2005-03-15
The topic of this paper is the distribution of the distance between two points distributed independently in space. We illustrate the use of this interpoint distance distribution to describe the characteristics of a set of points within some fixed region. The properties of its sample version, and thus the inference about this function, are discussed both in the discrete and in the continuous setting. We illustrate its use in the detection of spatial clustering by application to a well-known leukaemia data set, and report on the results of a simulation experiment designed to study the power characteristics of the methods within that study region and in an artificial homogenous setting. Copyright (c) 2004 John Wiley & Sons, Ltd.
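A small sketch of the sample version of the interpoint distance distribution follows, computed for an artificial homogeneous point set; it illustrates the descriptive statistic itself, not the authors' clustering test.

```python
import numpy as np

def interpoint_distances(points):
    """All pairwise Euclidean distances within a point set."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(len(pts), k=1)      # each pair counted once
    return d[iu]

def ecdf(distances, r):
    """Empirical interpoint distance distribution F(r) = P(D <= r)."""
    distances = np.sort(distances)
    return np.searchsorted(distances, r, side="right") / len(distances)

rng = np.random.default_rng(0)
pts = rng.uniform(size=(200, 2))             # artificial homogeneous setting
print(ecdf(interpoint_distances(pts), 0.5))
```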
S66: A Well-balanced Database of Benchmark Interaction Energies Relevant to Biomolecular Structures
2011-01-01
With numerous new quantum chemistry methods being developed in recent years and the promise of even more new methods to be developed in the near future, it is clearly critical that highly accurate, well-balanced, reference data for many different atomic and molecular properties be available for the parametrization and validation of these methods. One area of research that is of particular importance in many areas of chemistry, biology, and material science is the study of noncovalent interactions. Because these interactions are often strongly influenced by correlation effects, it is necessary to use computationally expensive high-order wave function methods to describe them accurately. Here, we present a large new database of interaction energies calculated using an accurate CCSD(T)/CBS scheme. Data are presented for 66 molecular complexes, at their reference equilibrium geometries and at 8 points systematically exploring their dissociation curves; in total, the database contains 594 points: 66 at equilibrium geometries, and 528 in dissociation curves. The data set is designed to cover the most common types of noncovalent interactions in biomolecules, while keeping a balanced representation of dispersion and electrostatic contributions. The data set is therefore well suited for testing and development of methods applicable to bioorganic systems. In addition to the benchmark CCSD(T) results, we also provide decompositions of the interaction energies by means of DFT-SAPT calculations. The data set was used to test several correlated QM methods, including those parametrized specifically for noncovalent interactions. Among these, the SCS-MI-CCSD method outperforms all other tested methods, with a root-mean-square error of 0.08 kcal/mol for the S66 data set. PMID:21836824
Dietscher, Christina
2017-02-01
Networks in health promotion (HP) have, after the launch of WHO's Ottawa Charter (World Health Organization (WHO) (eds). (1986) Ottawa Charter on Health Promotion. Towards A New Public Health. World Health Organization, Geneva), become a widespread tool to disseminate HP, especially in conjunction with the settings approach. Despite their allegedly high importance for HP practice and more than two decades of experience with networking so far, a sound theoretical basis to support effective planning, formation, coordination and strategy development for networks in the settings approach of HP (HPSN) is still widely missing. Brößkamp-Stone's multi-faceted interorganizational network assessment framework (2004) provides a starting point but falls short of specifying the outcomes that can be reasonably expected from the specific network type of HPSN, and the specific processes/strategies and structures that are needed to achieve them. Based on outcome models in HP, on social, managerial and health science theories of networks, settings and organizations, a sociological systems theory approach and the capacity approach in HP, this article points out why existing approaches to studying networks are insufficient for HPSN, what can be understood by their functioning and effectiveness, what preconditions there are for HPSN effectiveness and how an HPSN functioning and effectiveness framework proposed on these grounds can be used for researching networks in practice, drawing on experiences from the ‘Project on an Internationally Comparative Evaluation Study of the International Network of Health Promoting Hospitals and Health Services’ (PRICES-HPH), which was coordinated by the WHO Collaborating Centre for Health Promotion in Hospitals and Health Services (Vienna WHO-CC) from 2008 to 2012.
Duan, Yong; Wu, Chun; Chowdhury, Shibasish; Lee, Mathew C; Xiong, Guoming; Zhang, Wei; Yang, Rong; Cieplak, Piotr; Luo, Ray; Lee, Taisung; Caldwell, James; Wang, Junmei; Kollman, Peter
2003-12-01
Molecular mechanics models have been applied extensively to study the dynamics of proteins and nucleic acids. Here we report the development of a third-generation point-charge all-atom force field for proteins. Following the earlier approach of Cornell et al., the charge set was obtained by fitting to the electrostatic potentials of dipeptides calculated using B3LYP/cc-pVTZ//HF/6-31G** quantum mechanical methods. The main-chain torsion parameters were obtained by fitting to the energy profiles of Ace-Ala-Nme and Ace-Gly-Nme dipeptides calculated using MP2/cc-pVTZ//HF/6-31G** quantum mechanical methods. All other parameters were taken from the existing AMBER data base. The major departure from previous force fields is that all quantum mechanical calculations were done in the condensed phase with continuum solvent models and an effective dielectric constant of epsilon = 4. We anticipate that this force field parameter set will address certain critical shortcomings of previous force fields in condensed-phase simulations of proteins. Initial tests on peptides demonstrated a high degree of similarity between the calculated and the statistically measured Ramachandran maps for both Ace-Gly-Nme and Ace-Ala-Nme dipeptides. Some highlights of our results include (1) well-preserved balance between the extended and helical region distributions, and (2) favorable type-II poly-proline helical region in agreement with recent experiments. Backward compatibility between the new and Cornell et al. charge sets, as judged by overall agreement between dipole moments, allows a smooth transition to the new force field in the area of ligand-binding calculations. Test simulations on a large set of proteins are also discussed. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 1999-2012, 2003
Dynamic control of type I IFN signalling by an integrated network of negative regulators.
Porritt, Rebecca A; Hertzog, Paul J
2015-03-01
Whereas type I interferons (IFNs) have critical roles in protection from pathogens, excessive IFN responses contribute to pathology in both acute and chronic settings, pointing to the importance of balancing activating signals with regulatory mechanisms that appropriately tune the response. Here we review evidence for an integrated network of negative regulators of IFN production and action, which function at all levels of the activating and effector signalling pathways. We propose that the aim of this extensive network is to limit tissue damage while enabling an IFN response that is temporally appropriate and of sufficient magnitude. Understanding the architecture and dynamics of this network, and how it differs in distinct tissues, will provide new insights into IFN biology and aid the design of more effective therapeutics. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Schwab, J. R.
1979-01-01
Performance data obtained through experimental testing of a 22.4 kW traction motor using two types of excitation are presented. Ripple free dc from a motor-generator set for baseline data and pulse width modulated dc as supplied by a battery pack and chopper controller were used for excitation. For the same average values of input voltage and current, the motor power output was independent of the type of excitation. However, at the same speeds, the motor efficiency at low power output (corresponding to low duty cycle of the controller) was 5 to 10 percentage points lower on chopped dc than on ripple free dc. The chopped dc locked-rotor torque was approximately 1 to 3 percent greater than the ripple free dc torque for the same average current.
Attitude control requirements for various solar sail missions
NASA Technical Reports Server (NTRS)
Williams, Trevor
1990-01-01
The differences are summarized between the attitude control requirements for various types of proposed solar sail missions (Earth orbiting; heliocentric; asteroid rendezvous). In particular, it is pointed out that the most demanding type of mission is the Earth orbiting one, with the solar orbit case quite benign and asteroid station keeping only slightly more difficult. It is then shown, using numerical results derived for the British Solar Sail Group Earth orbiting design, that the disturbance torques acting on a realistic sail can completely dominate the torques required for nominal maneuvering of an 'ideal' sail. This is obviously an important consideration when sizing control actuators; not so obvious is the fact that it makes the standard rotating vane actuator unsatisfactory in practice. The reason for this is given, and a set of new actuators described which avoids the difficulty.
NASA Technical Reports Server (NTRS)
Montgomery, H. E.; Chan, F. K.
1973-01-01
A study is made of the mathematical solution of the differential equation of motion of a test particle in the equatorial plane of the Kerr gravitational field, using S (Schwarzschild-like) coordinates. A qualitative solution of this equation leads to the conclusion that there can only be 25 different types of orbits. For each value of a, the results are presented in a master diagram for which h and e are the parameters. A master diagram divides the h, e parameter space into regions such that at each point within one of these regions the types of admissible orbits are qualitatively the same. A pictorial representation of the physical orbits in the r, phi plane is also given.
17 CFR 230.420 - Legibility of prospectus.
Code of Federal Regulations, 2010 CFR
2010-04-01
... size, type size and font, bold-face type, italics and red ink, by presenting all required information... data included therein shall be in roman type at least as large and as legible as 10-point modern type... rule 482 (17 CFR 230.482) may be in roman type at least as large and as legible as 8-point modern type...
EXPLORING DATA-DRIVEN SPECTRAL MODELS FOR APOGEE M DWARFS
NASA Astrophysics Data System (ADS)
Lua Birky, Jessica; Hogg, David; Burgasser, Adam J.
2018-01-01
The Cannon (Ness et al. 2015; Casey et al. 2016) is a flexible, data-driven spectral modeling and parameter inference framework, demonstrated on high-resolution Apache Point Galactic Evolution Experiment (APOGEE; λ/Δλ~22,500, 1.5-1.7µm) spectra of giant stars to estimate stellar labels (Teff, logg, [Fe/H], and chemical abundances) to precisions higher than the model-grid pipeline. The lack of reliable stellar parameters reported by the APOGEE pipeline for temperatures less than ~3550K motivates extension of this approach to M dwarf stars. Using a training set of 51 M dwarfs with spectral types ranging M0-M9 obtained from SDSS optical spectra, we demonstrate that the Cannon can infer spectral types to a precision of +/-0.6 types, making it an effective tool for classifying high-resolution near-infrared spectra. We discuss the potential for extending this work to determine the physical stellar labels Teff, logg, and [Fe/H]. This work is supported by the SDSS Faculty and Student (FAST) initiative.
Intrinsic point defects in β-In2S3 studied by means of hybrid density-functional theory
NASA Astrophysics Data System (ADS)
Ghorbani, Elaheh; Albe, Karsten
2018-03-01
We have employed first-principles total energy calculations in the framework of density functional theory, with plane-wave basis sets and screened exchange hybrid functionals, to study the incorporation of intrinsic defects in bulk β-In2S3. The results are obtained for In-rich and S-rich experimental growth conditions. The charge transition level is discussed for all native defects, including VIn, VS, Ini, Si, SIn, and InS, and a comparison between the theoretically calculated charge transition levels and the available experimental findings is presented. The results imply that β-In2S3 shows n-type conductivity under both In-rich and S-rich growth conditions. The indium antisite (InS), the indium interstitial (Ini), and the sulfur vacancy (VS') are found to be the leading sources of the sample's n-type conductivity. When going from the In-rich to the S-rich condition, the conductivity of the material decreases; however, the type of conductivity remains unchanged.
NASA Astrophysics Data System (ADS)
Kang, Zhizhong
2013-10-01
This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from the 3D point clouds and then uses the SIFT algorithm to extract keypoints and identify corresponding image points. The 3D corresponding points, from which transformation parameters between point clouds are computed, are acquired by mapping the 2D ones onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probability of each data point using a simplified Bayes' rule, for the purpose of improving computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm can achieve high registration accuracy on all experimental datasets.
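A simplified sketch of one BaySAC-style iteration follows: conditional sampling of the n highest-probability correspondences, plus a Bayes-style down-weighting of inconsistent points. The likelihood values and the `test_hypothesis` callable are illustrative assumptions, not the paper's exact update rule.

```python
import numpy as np

def baysac_iteration(prob, n, test_hypothesis):
    """One simplified BaySAC-style iteration (a sketch, not the paper's code).

    prob : (N,) current inlier probabilities of the correspondences
    n    : hypothesis set size
    test_hypothesis : callable(idx) -> (N,) boolean mask of points
                      consistent with the model fitted on `idx`
    """
    # Conditional sampling: the n points with the highest inlier probability
    idx = np.argsort(prob)[-n:]
    consistent = test_hypothesis(idx)
    # Bayes-style update: down-weight points inconsistent with the hypothesis
    likelihood = np.where(consistent, 0.9, 0.1)   # illustrative values
    posterior = likelihood * prob
    return posterior / posterior.max()            # keep probabilities in (0, 1]

# Demo with a fake consistency test that favours the first 8 points
prob = np.full(10, 0.5)
fake_test = lambda idx: np.arange(10) < 8
print(baysac_iteration(prob, n=3, test_hypothesis=fake_test))
```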
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Thomas, James L.; Biedron, Robert T.; Diskin, Boris
2005-01-01
FMG3D (full multigrid 3 dimensions) is a pilot computer program that solves equations of fluid flow using a finite difference representation on a structured grid. Infrastructure exists for three dimensions but the current implementation treats only two dimensions. Written in Fortran 90, FMG3D takes advantage of the recursive subroutine feature, dynamic memory allocation, and structured-programming constructs of that language. FMG3D supports multi-block grids with three types of block-to-block interfaces: periodic, C-zero, and C-infinity. For all three types, grid points must match at interfaces. For periodic and C-infinity types, derivatives of grid metrics must be continuous at interfaces. The available equation sets are as follows: scalar elliptic equations, scalar convection equations, and the pressure-Poisson formulation of the Navier-Stokes equations for an incompressible fluid. All the equation sets are implemented with nonzero forcing functions to enable the use of user-specified solutions to assist in verification and validation. The equations are solved with a full multigrid scheme using a full approximation scheme to converge the solution on each succeeding grid level. Restriction to the next coarser mesh uses direct injection for variables and full weighting for residual quantities; prolongation of the coarse grid correction from the coarse mesh to the fine mesh uses bilinear interpolation; and prolongation of the coarse grid solution uses bicubic interpolation.
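For intuition about the grid-transfer operators, the sketch below shows 1D analogues of full-weighting restriction and linear-interpolation prolongation; FMG3D itself uses direct injection for variables, full weighting for residuals, bilinear prolongation of corrections, and bicubic prolongation of solutions in 2D.

```python
import numpy as np

def restrict_full_weighting(r_fine):
    """Full weighting of a fine-grid residual (2m+1 nodes) onto a coarse
    grid (m+1 nodes); boundary values are injected directly."""
    r = np.asarray(r_fine, dtype=float)
    coarse = r[::2].copy()
    coarse[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return coarse

def prolong_linear(v_coarse):
    """Linear interpolation of a coarse-grid correction to the fine grid
    (the 1D analogue of bilinear prolongation)."""
    v = np.asarray(v_coarse, dtype=float)
    fine = np.empty(2 * len(v) - 1)
    fine[::2] = v                          # coincident nodes copied
    fine[1::2] = 0.5 * (v[:-1] + v[1:])    # midpoints interpolated
    return fine

r = np.sin(np.linspace(0.0, np.pi, 9))     # fine-grid residual, m = 4
print(prolong_linear(restrict_full_weighting(r)).shape)   # (9,)
```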
Ness, Roberta B; Catov, Janet
2007-12-15
Birth weight is associated with later-life cardiovascular risk. A new study by Romundstad et al. (Am J Epidemiol 2007;166:1359-1364) challenges us to consider influences on birth weight with respect to timing and type. Timing of effects on birth weight, according to the "fetal origins hypothesis," is in utero. Alternatively, familial aggregation--genetics or shared environment--may explain birth weight and suggests prepregnancy influences. The Romundstad et al. findings support familial effects: maternal metabolic factors predicted birth weight for gestational age. However, because maternal physiology sets the fetal environment, these data do not necessarily counter the fetal origins hypothesis. Types of maternal metabolic influences demonstrated by Romundstad et al. include elevations in blood pressure being associated with lower birth weight for gestational age, whereas unfavorable glucose and lipid levels were associated with higher birth weight. These findings are consistent with the authors prior hypothesis that vascular dysfunction and metabolic profile (glucose and lipids) have divergent effects during pregnancy. Moreover, these new data underscore that both extremes of birth weight may be related to cardiovascular risk. Few data sets contain prepregnancy, pregnancy, and childhood information. Without all such time points, life course effects will remain only partially understood. It is hoped that studies such as the forthcoming National Children's Study will generate critical understanding of this issue.
Labaj, Wojciech; Papiez, Anna; Polanski, Andrzej; Polanska, Joanna
2017-03-01
Large data collections in studies of cancers such as leukaemia necessitate tailored analysis algorithms to ensure optimal information extraction. In this work, a custom-fit pipeline is demonstrated for thorough investigation of the voluminous MILE gene expression data set. Three analyses are accomplished, each aimed at gaining a deeper understanding of the processes underlying leukaemia types and subtypes. First, the main disease groups are tested for differential expression against the healthy control, as in a standard case-control study; here the basic knowledge on molecular mechanisms is confirmed quantitatively and by literature references. Second, pairwise comparison testing is performed to juxtapose the main leukaemia types with one another; in this case the general relations are pointed out by means of the Dice similarity coefficient, and lists of candidate biomarkers for the main leukaemia groups are proposed. Finally, building on the success of this approach, the third analysis provides insight into all of the studied subtypes and yields four leukaemia subtype biomarkers. In addition, the class-enhanced DEG signature obtained from the novel pipeline leads to significantly better classification power for multi-class data classifiers. The developed methodology, consisting of batch-effect adjustment and adaptive noise and feature filtration coupled with adequate statistical testing and biomarker definition, proves to be an effective approach to knowledge discovery in high-throughput molecular biology experiments.
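The Dice coefficient used for the pairwise comparisons is straightforward to compute over differentially expressed gene (DEG) sets; the gene symbols below are hypothetical examples, not results from the study.

```python
def dice_coefficient(set_a, set_b):
    """Dice similarity between two gene signatures (sets of gene IDs):
    2|A ∩ B| / (|A| + |B|)."""
    a, b = set(set_a), set(set_b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical DEG lists for two leukaemia types
print(dice_coefficient({"BCR", "ABL1", "CD19"}, {"ABL1", "CD19", "FLT3"}))
```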
NIST-NRC Comparison of Total Immersion Liquid-in-Glass Thermometers
NASA Astrophysics Data System (ADS)
Hill, K. D.; Gee, D. J.; Cross, C. D.; Strouse, G. F.
2009-02-01
The use of liquid-in-glass (LIG) thermometers is described in many documentary standards in the fields of environmental testing, material testing, and material transfer. Many national metrology institutes, including the National Institute of Standards and Technology (NIST) and the National Research Council of Canada (NRC), list calibration services for these thermometers among the Calibration Measurement Capabilities of Appendix C of the BIPM Key Comparison Database. NIST and NRC arranged a bilateral comparison of a set of total-immersion ASTM-type LIG thermometers to validate their uncertainty claims. Two each of ASTM thermometer types 62C through 69C were calibrated at NIST and at NRC at four temperatures distributed over the range appropriate to each thermometer, in addition to the ice point. Collectively, the thermometers span a temperature range of −38 °C to 305 °C. In total, 160 measurements (80 pairs) comprise the comparison data set. Pair-wise differences (T_NIST − T_NRC) were formed for each thermometer at each temperature. For 8 of the 80 pairs (10 %), the differences exceed the k = 2 combined uncertainties. These results support the claimed capabilities of NIST and NRC for the calibration of LIG thermometers.
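The acceptance criterion reduces to comparing each pairwise difference against the k = 2 combined standard uncertainty; a minimal sketch with illustrative (non-comparison) numbers follows.

```python
import numpy as np

def comparison_flags(t_nist, t_nrc, u_nist, u_nrc, k=2.0):
    """Flag pairwise differences exceeding the k=2 combined uncertainty.
    u_nist, u_nrc are standard (k=1) uncertainties of each laboratory."""
    diff = np.asarray(t_nist) - np.asarray(t_nrc)
    u_comb = k * np.sqrt(np.asarray(u_nist) ** 2 + np.asarray(u_nrc) ** 2)
    return diff, np.abs(diff) > u_comb

# Illustrative numbers only (not the comparison data)
d, flag = comparison_flags([0.010, -0.003], [0.002, 0.001],
                           [0.004, 0.004], [0.003, 0.003])
print(d, flag)
```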
Enhanced Uranium Ore Concentrate Analysis by Handheld Raman Sensor: FY15 Status Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryan, Samuel A.; Johnson, Timothy J.; Orton, Christopher R.
2015-11-11
High-purity uranium ore concentrates (UOC) represent a potential proliferation concern. A cost-effective, “point and shoot” in-field analysis capability to identify ore types, phases of materials present, and impurities, as well as estimate the overall purity would be prudent. Handheld, Raman-based sensor systems are capable of identifying chemical properties of liquid and solid materials. While handheld Raman systems have been extensively applied to many other applications, they have not been broadly studied for application to UOC, nor have they been optimized for this class of chemical compounds. PNNL was tasked in Fiscal Year 2015 by the Office of International Safeguards (NA-241) to explore the use of Raman for UOC analysis and characterization. This report summarizes the activities in FY15 related to this project. The following tasks were included: creation of an expanded library of Raman spectra of a UOC sample set, creation of optimal chemometric analysis methods to classify UOC samples by their type and level of impurities, and exploration of the various Raman wavelengths to identify the ideal instrument settings for UOC sample interrogation.
Soluyanov, Alexey A; Gresch, Dominik; Wang, Zhijun; Wu, QuanSheng; Troyer, Matthias; Dai, Xi; Bernevig, B Andrei
2015-11-26
Fermions--elementary particles such as electrons--are classified as Dirac, Majorana or Weyl. Majorana and Weyl fermions had not been observed experimentally until the recent discovery of condensed matter systems such as topological superconductors and semimetals, in which they arise as low-energy excitations. Here we propose the existence of a previously overlooked type of Weyl fermion that emerges at the boundary between electron and hole pockets in a new phase of matter. This particle was missed by Weyl because it breaks the stringent Lorentz symmetry in high-energy physics. Lorentz invariance, however, is not present in condensed matter physics, and by generalizing the Dirac equation, we find the new type of Weyl fermion. In particular, whereas Weyl semimetals--materials hosting Weyl fermions--were previously thought to have standard Weyl points with a point-like Fermi surface (which we refer to as type-I), we discover a type-II Weyl point, which is still a protected crossing, but appears at the contact of electron and hole pockets in type-II Weyl semimetals. We predict that WTe2 is an example of a topological semimetal hosting the new particle as a low-energy excitation around such a type-II Weyl point. The existence of type-II Weyl points in WTe2 means that many of its physical properties are very different to those of standard Weyl semimetals with point-like Fermi surfaces.
Construction of Gallium Point at NMIJ
NASA Astrophysics Data System (ADS)
Widiatmo, J. V.; Saito, I.; Yamazawa, K.
2017-03-01
Two open-type gallium point cells were fabricated using ingots whose nominal purities are 7N. Measurement systems for realizing the melting point of gallium using these cells were built, and the melting point was realized repeatedly to evaluate repeatability. Measurements were also performed to evaluate the effect of the hydrostatic pressure of the molten gallium present during the melting process and the effect of the gas pressure that fills the cell. Direct comparisons between the two cells were conducted, aimed at evaluating the consistency of each cell, especially in relation to the nominal purity. A direct comparison between the open-type and a sealed-type gallium point cell was also conducted. Chemical analysis was performed on samples extracted from the ingots used in both newly built open-type gallium point cells, from which the effect of impurities in the ingots was evaluated.
A common type system for clinical natural language processing
2013-01-01
Background One challenge in reusing clinical data stored in electronic medical records is that these data are heterogeneous. Clinical Natural Language Processing (NLP) plays an important role in transforming information in clinical text to a standard representation that is comparable and interoperable. Information may be processed and shared when a type system specifies the allowable data structures. Therefore, we aim to define a common type system for clinical NLP that enables interoperability between structured and unstructured data generated in different clinical settings. Results We describe a common type system for clinical NLP that has an end target of deep semantics based on Clinical Element Models (CEMs), thus interoperating with structured data and accommodating diverse NLP approaches. The type system has been implemented in UIMA (Unstructured Information Management Architecture) and is fully functional in a popular open-source clinical NLP system, cTAKES (clinical Text Analysis and Knowledge Extraction System) versions 2.0 and later. Conclusions We have created a type system that targets deep semantics, thereby allowing NLP systems to encapsulate knowledge from text and share it alongside heterogeneous clinical data sources. Rather than surface semantics that are typically the end product of NLP algorithms, CEM-based semantics explicitly build in deep clinical semantics as the point of interoperability with more structured data types. PMID:23286462
A common type system for clinical natural language processing.
Wu, Stephen T; Kaggal, Vinod C; Dligach, Dmitriy; Masanz, James J; Chen, Pei; Becker, Lee; Chapman, Wendy W; Savova, Guergana K; Liu, Hongfang; Chute, Christopher G
2013-01-03
One challenge in reusing clinical data stored in electronic medical records is that these data are heterogeneous. Clinical Natural Language Processing (NLP) plays an important role in transforming information in clinical text to a standard representation that is comparable and interoperable. Information may be processed and shared when a type system specifies the allowable data structures. Therefore, we aim to define a common type system for clinical NLP that enables interoperability between structured and unstructured data generated in different clinical settings. We describe a common type system for clinical NLP that has an end target of deep semantics based on Clinical Element Models (CEMs), thus interoperating with structured data and accommodating diverse NLP approaches. The type system has been implemented in UIMA (Unstructured Information Management Architecture) and is fully functional in a popular open-source clinical NLP system, cTAKES (clinical Text Analysis and Knowledge Extraction System) versions 2.0 and later. We have created a type system that targets deep semantics, thereby allowing NLP systems to encapsulate knowledge from text and share it alongside heterogeneous clinical data sources. Rather than surface semantics that are typically the end product of NLP algorithms, CEM-based semantics explicitly build in deep clinical semantics as the point of interoperability with more structured data types.
21 CFR 130.14 - General statements of substandard quality and substandard fill of container.
Code of Federal Regulations, 2010 CFR
2010-04-01
... pound, the type of the first line is 12-point, and of the second, 8-point. If such quantity is 1 pound or more, the type of the first line is 14-point, and of the second, 10-point. Such statement is enclosed within lines, not less than 6 points in width, forming a rectangle. Such statement, with enclosing...
Papasavvas, Emmanouil; Foulkes, Andrea; Yin, Xiangfan; Joseph, Jocelin; Ross, Brian; Azzoni, Livio; Kostman, Jay R; Mounzer, Karam; Shull, Jane; Montaner, Luis J
2015-07-01
The identification of immune correlates of HIV control is important for the design of immunotherapies that could support cure or antiretroviral therapy (ART) intensification-related strategies. ART interruptions may facilitate this task through exposure of a partially ART-reconstituted immune system to endogenous virus. We investigated the relationship between set-point plasma HIV viral load (VL) during an ART interruption and innate/adaptive parameters before or after interruption. Dendritic cell (DC), natural killer (NK) cell and HIV Gag p55-specific T-cell functional responses were measured in paired cryopreserved peripheral blood mononuclear cells obtained at the beginning (on ART) and at set-point of an open-ended interruption from 31 ART-suppressed chronically HIV-1(+) patients. Spearman correlation and linear regression modeling were used. Frequencies of plasmacytoid DC (pDC), and HIV Gag p55-specific CD3(+) CD4(-) perforin(+) IFN-γ(+) cells at the beginning of interruption associated negatively with set-point plasma VL. Inclusion of both variables with interaction into a model resulted in the best fit (adjusted R(2) = 0.6874). Frequencies of pDC or HIV Gag p55-specific CD3(+) CD4(-) CFSE(lo) CD107a(+) cells at set-point associated negatively with set-point plasma VL. The dual contribution of pDC and anti-HIV T-cell responses to viral control, supported by our models, suggests that these variables may serve as immune correlates of viral control and could be integrated in cure or ART-intensification strategies. © 2015 John Wiley & Sons Ltd.
Nanomedicinal products: a survey on specific toxicity and side effects
Giannakou, Christina; De Jong, Wim H; Kooi, Myrna W; Park, Margriet VDZ; Vandebriel, Rob J; Bosselaers, Irene EM; Scholl, Joep HG; Geertsma, Robert E
2017-01-01
Due to their specific properties and pharmacokinetics, nanomedicinal products (NMPs) may present different toxicity and side effects compared to non-nanoformulated, conventional medicines. To facilitate the safety assessment of NMPs, we aimed to gain insight into toxic effects specific for NMPs by systematically analyzing the available toxicity data on approved NMPs in the European Union. In addition, by comparing five sets of products with the same active pharmaceutical ingredient (API) in a conventional formulation versus a nanoformulation, we aimed to identify any side effects specific for the nano aspect of NMPs. The objective was to investigate whether specific toxicity could be related to certain structural types of NMPs and whether a nanoformulation of an API altered the nature of side effects of the product in humans compared to a conventional formulation. The survey of toxicity data did not reveal nanospecific toxicity that could be related to certain types of structures of NMPs, other than those reported previously in relation to accumulation of iron nanoparticles (NPs). However, given the limited data for some of the product groups or toxicological end points in the analysis, conclusions with regard to (a lack of) potential nanomedicine-specific effects need to be considered carefully. Results from the comparison of side effects of five sets of drugs (mainly liposomes and/or cytostatics) confirmed the induction of pseudo-allergic responses associated with specific NMPs in the literature, in addition to the side effects common to both nanoformulations and regular formulations, eg, with liposomal doxorubicin, and possibly liposomal daunorubicin. Based on the available data, immunotoxicological effects of certain NMPs cannot be excluded, and we conclude that this end point requires further attention. PMID:28883724
Reconstruction of reflectance data using an interpolation technique.
Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh
2009-03-01
A linear interpolation method is applied for reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of applied color datasets as well as employed color spaces are investigated. Results of recovery are evaluated by the mean and the maximum color difference values under other sets of standard light sources. The mean and the maximum values of root mean square (RMS) error between the reconstructed and the actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent for interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis method. According to the results, using the CIEXYZ tristimulus values as a source space shows priority over the CIELAB color space. Besides, the colorimetric position of a desired sample is a key factor in the success of the approach. In fact, because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of available samples in the dataset. The resultant spectra that have been reconstructed by this technique show considerable improvement in terms of RMS error between the actual and the reconstructed reflectance spectra as well as CIELAB color differences under the other light source in comparison with those obtained from the standard PCA technique.
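In Python, this LUT-style linear interpolation from colorimetric to spectral space can be sketched with SciPy's LinearNDInterpolator. The random training data below stand in for measured Munsell/ColorChecker chips; queries outside the convex hull of the training chips return NaN, matching the gamut caveat above.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(1)
xyz_train = rng.random((500, 3))          # stand-in CIEXYZ coordinates
spectra_train = rng.random((500, 31))     # stand-in reflectances, 400-700 nm

# Build the LUT-style linear interpolant from the source (CIEXYZ) to the
# destination (reflectance) space. Out-of-gamut queries return NaN.
lut = LinearNDInterpolator(xyz_train, spectra_train)
print(lut(np.array([[0.4, 0.5, 0.3]])).shape)   # (1, 31)
```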
A minimization method on the basis of embedding the feasible set and the epigraph
NASA Astrophysics Data System (ADS)
Zabotin, I. Ya; Shulgina, O. N.; Yarullin, R. S.
2016-11-01
We propose a conditional minimization method for a convex nonsmooth function which belongs to the class of cutting-plane methods. During the construction of iteration points, the feasible set and the epigraph of the objective function are approximated by polyhedral sets, so the auxiliary problems of constructing iteration points are linear programming problems. The optimization process offers the opportunity to update the sets which approximate the epigraph; these updates are performed by periodically dropping the cutting planes which form the embedding sets. Convergence of the proposed method is proved, and some realizations of the method are discussed.
Modeling radiative transfer with the doubling and adding approach in a climate GCM setting
NASA Astrophysics Data System (ADS)
Lacis, A. A.
2017-12-01
The nonlinear dependence of multiply scattered radiation on particle size, optical depth, and solar zenith angle makes accurate treatment of multiple scattering in the climate GCM setting problematic, primarily because of computational cost. The accurate multiple-scattering methods that are available are far too expensive for climate GCM applications, while two-stream-type radiative transfer approximations may be fast enough but at the cost of reduced accuracy. We describe here a parameterization of the doubling/adding method used in the GISS climate GCM: an adaptation of the doubling/adding formalism configured to operate with a look-up table utilizing a single gauss quadrature point with an extra-angle formulation. It is designed to closely reproduce the accuracy of full-angle doubling and adding for the multiple-scattering effects of clouds and aerosols in a realistic atmosphere as a function of particle size, optical depth, and solar zenith angle. With an additional inverse look-up table, this single-gauss-point doubling/adding approach can be adapted to model fractional cloud cover for any GCM grid box in the independent pixel approximation as a function of the fractional cloud particle sizes, optical depths, and solar zenith angle.
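The scalar, angle-integrated skeleton of doubling and adding is compact; the sketch below shows it for a single reflection/transmission pair and is only a caricature of the GISS parameterization, which additionally handles the gauss-quadrature angle treatment and the look-up tables. The layer values are illustrative.

```python
def add_layers(r1, t1, r2, t2):
    """Adding equations for two layers (azimuth-averaged scalar sketch):
    the denominator sums the geometric series of inter-layer reflections."""
    denom = 1.0 - r1 * r2
    r = r1 + t1 * r2 * t1 / denom
    t = t1 * t2 / denom
    return r, t

def double_thin_layer(r0, t0, n_doublings):
    """Doubling: combine a thin homogeneous layer with itself repeatedly,
    so the optical thickness grows by a factor of 2**n_doublings."""
    r, t = r0, t0
    for _ in range(n_doublings):
        r, t = add_layers(r, t, r, t)
    return r, t

# Thin starting layer (illustrative single-scattering values)
print(double_thin_layer(r0=2.5e-4, t0=0.999, n_doublings=10))
```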
NASA Astrophysics Data System (ADS)
Banesh, D.; Oskin, M. E.; Mu, A.; Vu, C.; Westerteiger, R.; Krishnan, A.; Hamann, B.; Glennie, C. L.; Hinojosa, A.; Borsa, A. A.
2013-12-01
Differential LiDAR provides unprecedented images of near-field ground deformation and fault slip due to earthquakes. Here we examine the performance of the Iterative Closest Point (ICP) technique for data registration between pre- and post-earthquake LiDAR point clouds of varying density. We use the 2010 El Mayor-Cucapah data set as our region of interest, since this earthquake produced different types of surface ruptures, yielding a variety of deformation styles for analysis. We also test a simpler Chi-squared minimization approach and find that it produces good results when compared to ICP. We present different techniques for visualizing large vector fields and show how each method highlights a unique feature in the data set. Dense vector fields are useful when analyzing smaller deformations of the surface, whereas a sparse, averaged vector field captures the larger, overall shifts without interference from small details. Flow-based visualizations such as Line Integral Convolution (LIC) graphs provide insight into particular artifacts of data collection, such as distortions due to uncorrected pitch and yaw of the aircraft during the survey. Animations of the vector field establish the direction of movement in the landscape, quickly highlighting areas of interest.
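A minimal point-to-point ICP sketch (SVD/Kabsch-based) follows to make the registration step concrete; it is a generic implementation, not the production differential-LiDAR pipeline, and the demo clouds are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, dst, n_iter=20):
    """Minimal rigid point-to-point ICP: repeatedly match nearest
    neighbours, then solve for the best-fit rotation and translation."""
    src = np.asarray(src, dtype=float).copy()
    dst = np.asarray(dst, dtype=float)
    tree = cKDTree(dst)
    for _ in range(n_iter):
        _, idx = tree.query(src)              # closest-point correspondences
        q = dst[idx]
        pc, qc = src - src.mean(0), q - q.mean(0)
        u, _, vt = np.linalg.svd(pc.T @ qc)
        if np.linalg.det(u @ vt) < 0:         # guard against reflections
            vt[-1] *= -1
        rot = (u @ vt).T                      # optimal rotation (Kabsch)
        trans = q.mean(0) - rot @ src.mean(0)
        src = src @ rot.T + trans
    return src

rng = np.random.default_rng(3)
cloud = rng.random((100, 3))
shifted = cloud + np.array([0.05, -0.02, 0.01])   # pure translation
# Residual misalignment; should shrink toward zero for this easy case
print(np.abs(icp_point_to_point(shifted, cloud) - cloud).max())
```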
NASA Astrophysics Data System (ADS)
Marreiros, Filipe M. M.; Wang, Chunliang; Rossitti, Sandro; Smedby, Örjan
2016-03-01
In this study we present a non-rigid point set registration method for 3D curves (composed of 3D point sets). The method was evaluated on the task of registering 3D superficial vessels of the brain, where it was used to match vessel centerline points. It combines Coherent Point Drift (CPD) with Thin-Plate Spline (TPS) semilandmarks: CPD performs the initial matching of centerline 3D points, while the semilandmark method iteratively relaxes/slides the points. For the evaluation, a Magnetic Resonance Angiography (MRA) dataset was used. Deformations were applied to the extracted vessel centerlines to simulate brain bulging and sinking, using a TPS deformation in which a few control points were manipulated to obtain the desired transformation (T1). Once the correspondences are known, the corresponding points are used to define a new TPS deformation (T2). The errors are measured in the deformed space by transforming the original points using T1 and T2 and measuring the distance between them. To simulate cases where the deformed vessel data are incomplete, parts of the reference vessels were cut and then deformed. Furthermore, anisotropic normally distributed noise was added. The results show that the error estimates (root mean square error and mean error) are below 1 mm, even in the presence of noise and incomplete data.
NASA Astrophysics Data System (ADS)
Cristescu, Constantin P.; Stan, Cristina; Scarlat, Eugen I.; Minea, Teofil; Cristescu, Cristina M.
2012-04-01
We present a novel method for the parameter-oriented analysis of mutual correlation between independent time series, or between equivalent structures such as ordered data sets. The proposed method is based on the sliding-window technique, defines a new type of correlation measure, and can be applied to time series from all domains of science and technology, whether experimental or simulated. A specific parameter characterizing the time series is computed for each window, and a cross-correlation analysis is carried out on the sets of values obtained for the time series under investigation. We apply this method to the study of some daily currency exchange rates from the point of view of the Hurst exponent and the intermittency parameter. Interesting correlation relationships are revealed, and a tentative crisis prediction is presented.
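The sliding-window scheme can be sketched generically (a hedged illustration: the paper computes the Hurst exponent and an intermittency parameter per window, whereas the placeholder below uses the standard deviation):

```python
# Sketch of the sliding-window idea: compute a scalar parameter per
# window for each of two series, then cross-correlate the two parameter
# tracks. `param` is any per-window estimator; np.std is a placeholder.
import numpy as np

def window_parameter(x, width, step, param=np.std):
    return np.array([param(x[i:i + width])
                     for i in range(0, len(x) - width + 1, step)])

def sliding_correlation(x, y, width=250, step=10):
    px = window_parameter(x, width, step)
    py = window_parameter(y, width, step)
    return np.corrcoef(px, py)[0, 1]   # Pearson correlation of the tracks
```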
Constructing a polynomial whose nodal set is the three-twist knot 52
NASA Astrophysics Data System (ADS)
Dennis, Mark R.; Bode, Benjamin
2017-06-01
We describe a procedure that creates an explicit complex-valued polynomial function of three-dimensional space, whose nodal lines are the three-twist knot 52. The construction generalizes a similar approach for lemniscate knots: a braid representation is engineered from finite Fourier series and then considered as the nodal set of a certain complex polynomial which depends on an additional parameter. For sufficiently small values of this parameter, the nodal lines form the three-twist knot. Further mathematical properties of this map are explored, including the relationship of the phase critical points with the Morse-Novikov number, which is nonzero as this knot is not fibred. We also find analogous functions for other simple knots and links. The particular function we find, and the general procedure, should be useful for designing knotted fields of particular knot types in various physical systems.
Research on TRIZ and CAIs Application Problems for Technology Innovation
NASA Astrophysics Data System (ADS)
Li, Xiangdong; Li, Qinghai; Bai, Zhonghang; Geng, Lixiao
To realize the application of the theory of inventive problem solving (TRIZ) and computer-aided innovation software (CAIs), several key problems need to be solved, such as the choice of technology innovation mode, the establishment of a technology innovation organization network (TION), and the achievement of an innovation process based on TRIZ and CAIs. This paper derives the demands for TRIZ and CAIs from the characteristics and existing problems of manufacturing enterprises. It explains that manufacturing enterprises need to set up an open, enterprise-led TION and achieve longitudinal cooperative innovation with institutions of higher learning. The process of technology innovation based on TRIZ and CAIs is set up from a research and development point of view. The application of TRIZ and CAIs in FY Company is summarized, and its effect is illustrated using the technology innovation of the company's close goggle valve product.
Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H
2005-01-01
In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining how they should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging.
Soler, Miguel A; de Marco, Ario; Fortuna, Sara
2016-10-10
Nanobodies (VHHs) have proved to be valuable substitutes for conventional antibodies in molecular recognition. Their small size is a precious advantage for rational mutagenesis based on modelling. Here we address the problem of predicting how Camelidae nanobody sequences tolerate mutations by developing a simulation protocol based on all-atom molecular dynamics and whole-molecule docking. The method was tested on two sets of nanobodies characterized experimentally for their biophysical features. One set contained point mutations introduced to humanize a wild-type sequence; in the second, the CDRs were swapped between single-domain frameworks with Camelidae and human hallmarks. The method yielded accurate scoring approaches for predicting experimental yields and enabled identification of the structural modifications induced by mutations. This work is a promising tool for the in silico development of single-domain antibodies and opens up the possibility of customizing single functional domains of larger macromolecules.
NASA Astrophysics Data System (ADS)
Yakub, Eugene; Ronchi, Claudio; Staicu, Dragos
2007-09-01
Results of molecular dynamics (MD) simulations of UO2 over a wide temperature range are presented and discussed. A new approach to the calibration of a partly ionic Busing-Ida-type model is proposed, and a potential parameter set is obtained that reproduces the experimental density of solid UO2 over a wide range of temperatures. A conventional simulation of high-temperature stoichiometric UO2 on large MD cells, based on a novel fast method of computing Coulomb forces, reveals characteristic features of a premelting λ transition at a temperature near that observed experimentally (Tλ = 2670 K). A strong deviation from Arrhenius behavior of the oxygen self-diffusion coefficient was found in the vicinity of the transition point. Predictions for liquid UO2, based on the same potential parameter set, are in good agreement with existing experimental data and theoretical calculations.
Dance choreography is coordinated with song repertoire in a complex avian display.
Dalziell, Anastasia H; Peters, Richard A; Cockburn, Andrew; Dorland, Alexandra D; Maisey, Alex C; Magrath, Robert D
2013-06-17
All human cultures have music and dance, and the two activities are so closely integrated that many languages use just one word to describe both. Recent research points to a deep cognitive connection between music and dance-like movements in humans, fueling speculation that music and dance have coevolved and prompting the need for studies of audiovisual displays in other animals. However, little is known about how nonhuman animals integrate acoustic and movement display components. One striking property of human displays is that performers coordinate dance with music by matching types of dance movements with types of music, as when dancers waltz to waltz music. Here, we show that a bird also temporally coordinates a repertoire of song types with a repertoire of dance-like movements. During displays, male superb lyrebirds (Menura novaehollandiae) sing four different song types, matching each with a unique set of movements and delivering song and dance types in a predictable sequence. Crucially, display movements are both unnecessary for the production of sound and voluntary, because males sometimes sing without dancing. Thus, the coordination of independently produced repertoires of acoustic and movement signals is not a uniquely human trait. Copyright © 2013 Elsevier Ltd. All rights reserved.
Accurate Typing of Human Leukocyte Antigen Class I Genes by Oxford Nanopore Sequencing.
Liu, Chang; Xiao, Fangzhou; Hoisington-Lopez, Jessica; Lang, Kathrin; Quenzel, Philipp; Duffy, Brian; Mitra, Robi David
2018-04-03
Oxford Nanopore Technologies' MinION has expanded the current DNA sequencing toolkit by delivering long read lengths and extreme portability. The MinION has the potential to enable expedited point-of-care human leukocyte antigen (HLA) typing, an assay routinely used to assess the immunologic compatibility between organ donors and recipients, but the platform's high error rate makes it challenging to type alleles with accuracy. We developed and validated accurate typing of HLA by Oxford nanopore (Athlon), a bioinformatic pipeline that i) maps nanopore reads to a database of known HLA alleles, ii) identifies candidate alleles with the highest read coverage at different resolution levels that are represented as branching nodes and leaves of a tree structure, iii) generates consensus sequences by remapping the reads to the candidate alleles, and iv) calls the final diploid genotype by blasting consensus sequences against the reference database. Using two independent data sets generated on the R9.4 flow cell chemistry, Athlon achieved a 100% accuracy in class I HLA typing at the two-field resolution. Copyright © 2018 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
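Step (ii) of such a pipeline reduces to coverage counting over the levels of the HLA nomenclature tree. A toy sketch follows; the colon-delimited allele-string format and the function name are our assumptions for illustration, not the Athlon code.

```python
# Toy sketch of candidate-allele selection by read coverage: truncate
# each aligned allele name to a given field resolution and pick the
# name with the most supporting reads at each level.
from collections import Counter

def best_alleles(alignments, fields):
    """alignments: allele names (e.g. 'A*02:01:01') hit by mapped reads."""
    counts = Counter(alignments)
    best = {}
    for level in range(1, fields + 1):
        truncated = Counter()
        for allele, n in counts.items():
            truncated[":".join(allele.split(":")[:level])] += n
        best[level] = truncated.most_common(1)[0]   # (candidate, coverage)
    return best

print(best_alleles(["A*02:01:01", "A*02:01:02", "A*02:01:01", "A*03:01:01"], 3))
```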
Catellani, Alessandra; Calzolari, Arrigo
2017-01-01
We report on first-principles investigations of the electrical character of Li-X codoped ZnO transparent conductive oxides (TCOs). We studied a set of possible X codopants, including both unintentional dopants typically present in the system (e.g., H, O) and monovalent acceptor groups based on nitrogen and halogens (F, Cl, I). The interplay between dopants and structural point defects in the host (such as vacancies) is also taken explicitly into account, demonstrating the crucial effect that zinc and oxygen vacancies have on the final properties of TCOs. Our results show that Li-ZnO has a p-type character when Li is included as a Zn substitutional dopant, but turns n-type when Li occupies interstitial sites. The inclusion of X codopants is considered as a way to deactivate the n-type character of interstitial Li atoms: the total Li-X compensation effect and the corresponding electrical character of the doped compounds depend selectively on the presence of vacancies in the host. We prove that LiF-doped ZnO is the only codoped system that exhibits a p-type character in the presence of Zn vacancies. PMID:28772691
NASA Astrophysics Data System (ADS)
Abadjiev, Valentin; Abadjieva, Emilia
2016-06-01
Hyperboloid gear drives with face mating gears are used to transform rotation between shafts with non-parallel and non-intersecting axes. A special case of these transmissions are Spiroid and Helicon gear drives, the classical versions of which are the Archimedean ones. The objects of this study are hyperboloid gear drives with face meshing in which the pinion has threads of conic convolute, Archimedean or involute type, or threads of cylindrical convolute, Archimedean or involute type. For simplicity, all three types of transmissions with face mating gears and a conic pinion are titled Spiroid, and all three types with a cylindrical pinion are titled Helicon. Principles of the mathematical modelling of tooth contact synthesis are discussed in this study. The presented research shows that the synthesis is realized by applying two mathematical models: a pitch contact point model and a mesh region model. Two approaches to the synthesis of the gear drives in accordance with Olivier's principles are illustrated. Algorithms and computer programs for the optimization synthesis and design of the studied hyperboloid gear drives are presented.
Multi-font printed Mongolian document recognition system
NASA Astrophysics Data System (ADS)
Peng, Liangrui; Liu, Changsong; Ding, Xiaoqing; Wang, Hua; Jin, Jianming
2009-01-01
Mongolian is one of the major ethnic languages in China, and large numbers of printed Mongolian documents need to be digitized for digital libraries and various other applications. Traditional Mongolian script has a unique writing style and multi-font-type variations, which pose challenges for Mongolian OCR research. Because traditional Mongolian script has some special characteristics (for example, one character may be part of another character), we define the character set for recognition according to the segmented components, and the components are combined into characters by a rule-based post-processing module. For character recognition, a method based on visual directional features and multi-level classifiers is presented. For character segmentation, a scheme is used to find the segmentation points by analyzing the properties of the projection profile and the connected components; a sketch of the projection step is given below. As Mongolian font types fall into two major groups, the segmentation parameters are adjusted for each group, and a font-type classification method for the two groups is introduced. For recognition of Mongolian text mixed with Chinese and English, language identification and the relevant character recognition kernels are integrated. Experiments show that the presented methods are effective: the text recognition rate is 96.9% on test samples from practical documents with multiple font types and mixed scripts.
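A toy version of projection-based segmentation, under assumed conventions (the paper's scheme additionally analyzes connected components and adjusts parameters per font group; the axis choice along the writing direction is also an assumption):

```python
# Toy projection-profile segmentation: sum ink along the axis
# perpendicular to the writing direction and cut where the profile
# drops below a threshold.
import numpy as np

def projection_cuts(binary_line_img, thresh=1):
    ink = binary_line_img.sum(axis=0)   # projection profile of the text line
    low = ink < thresh                  # near-empty columns = candidate gaps
    # segmentation points: positions where a low-ink run ends and ink resumes
    return [i for i in range(1, len(ink)) if low[i - 1] and not low[i]]
```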
Lombardi, A M
2017-09-18
Stochastic models provide quantitative evaluations of earthquake occurrence. A basic component of this type of model is the uncertainty in defining the main features of an intrinsically random process. Even if, at a fundamental level, any attempt to distinguish between types of uncertainty is questionable, a usual way to deal with this topic is to separate epistemic uncertainty, due to lack of knowledge, from aleatory variability, due to randomness. In the present study this problem is addressed in the narrow context of short-term modeling of earthquakes and, specifically, of ETAS modeling. By means of an application of a specific version of the ETAS model to the seismicity of Central Italy, recently struck by a sequence with a main event of Mw 6.5, the aleatory and epistemic (parametric) uncertainties are separated and quantified. The main result of the paper is that the parametric uncertainty of the ETAS-type model adopted here is much lower than the aleatory variability in the process. This result points out two main aspects: an analyst has a good chance of calibrating an ETAS-type model, but may retrospectively describe and forecast earthquake occurrences with still limited precision and accuracy.
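For reference, a common form of the temporal ETAS conditional intensity (the paper's specific version may parameterize it differently) is lambda(t | H_t) = mu + sum over t_i < t of K * exp(alpha * (m_i - M0)) * (t - t_i + c)^(-p), which can be evaluated directly:

```python
# Standard temporal ETAS conditional intensity (a common textbook form):
# background rate mu plus Omori-law aftershock triggering from all past
# events, with productivity growing exponentially in magnitude.
import numpy as np

def etas_intensity(t, times, mags, mu, K, alpha, c, p, M0):
    past = times < t
    dt = t - times[past]
    return mu + np.sum(K * np.exp(alpha * (mags[past] - M0)) * (dt + c) ** (-p))
```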
Summary of the Fourth AIAA CFD Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Vassberg, John C.; Tinoco, Edward N.; Mani, Mori; Rider, Ben; Zickuhr, Tom; Levy, David W.; Brodersen, Olaf P.; Eisfeld, Bernhard; Crippa, Simone; Wahls, Richard A.;
2010-01-01
Results from the Fourth AIAA Drag Prediction Workshop (DPW-IV) are summarized. The workshop focused on the prediction of both absolute and differential drag levels for wing-body and wing-body-horizontal-tail configurations that are representative of transonic transport aircraft. Numerical calculations are performed using industry-relevant test cases that include lift-specific flight conditions, trimmed drag polars, downwash variations, drag rises and Reynolds-number effects. Drag, lift and pitching moment predictions from numerous Reynolds-Averaged Navier-Stokes computational fluid dynamics methods are presented. Solutions are performed on structured, unstructured and hybrid grid systems. The structured-grid sets include point-matched multi-block meshes and overset grid systems. The unstructured and hybrid grid sets are comprised of tetrahedral, pyramid, prismatic, and hexahedral elements. Effort is made to provide a high-quality and parametrically consistent family of grids for each grid type about each configuration under study. The wing-body-horizontal families are comprised of a coarse, medium and fine grid; an optional extra-fine grid augments several of the grid families. These mesh sequences are utilized to determine asymptotic grid-convergence characteristics of the solution sets, and to estimate grid-converged absolute drag levels of the wing-body-horizontal configuration using Richardson extrapolation.
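Richardson extrapolation, as used for such grid-convergence estimates, can be sketched for a coarse/medium/fine sequence with constant refinement ratio r (the numeric values below are made up for illustration, not workshop data):

```python
# Richardson extrapolation for a monotonically converging sequence of
# solutions f_fine, f_med, f_coarse with constant grid refinement ratio r.
import math

def richardson(f_fine, f_med, f_coarse, r):
    # observed order of convergence from the three-grid ratio
    p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)
    # extrapolated (grid-converged) value
    f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)
    return f_exact, p

# e.g. drag counts on fine/medium/coarse grids, refinement ratio ~1.5
print(richardson(270.1, 271.0, 272.5, 1.5))
```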
Flow analysis of human chromosome sets by means of mixing-stirring device
NASA Astrophysics Data System (ADS)
Zenin, Valeri V.; Aksenov, Nicolay D.; Shatrova, Alla N.; Klopov, Nicolay V.; Cram, L. Scott; Poletaev, Andrey I.
1997-05-01
A new mixing and stirring device (MSD) was used to perform flow karyotype analysis of human mitotic chromosomes in a way that maintains the identity of chromosomes derived from the same cell. An improved method for cell preparation and intracellular staining of chromosomes was developed. The method includes enzyme treatment, incubation with saponin, and separation of prestained cells from debris on a sucrose gradient. Mitotic cells are injected one by one into the MSD, which is located inside the flow chamber, where the cells are ruptured, thereby releasing their chromosomes. Each set of chromosomes then flows in single file to the point of analysis. The device works in a stepwise manner, and the concentration of cells in the sample must be kept low to ensure that only one cell at a time enters the breaking chamber. Time-gated accumulation of data in listmode files makes it possible to separate the chromosome sets of individual cells. The software that was developed classifies chromosome sets according to different criteria: the total number of chromosomes, the overall DNA content of the set, and the number of chromosomes of certain types. This approach combines the high performance of flow cytometry with the advantages of image analysis. Examples obtained with different human cell lines are presented.
Jankowska, Marzena; Kupka, Teobald; Stobiński, Leszek; Faber, Rasmus; Lacerda, Evanildo G; Sauer, Stephan P A
2016-02-05
Hartree-Fock and density functional theory with the hybrid B3LYP and generalized gradient KT2 exchange-correlation functionals were used for nonrelativistic and relativistic nuclear magnetic shielding calculations of helium, neon, argon, krypton, and xenon dimers and free atoms. Relativistic corrections were calculated with the scalar and spin-orbit zeroth-order regular approximation Hamiltonian in combination with the large Slater-type basis set QZ4P, as well as with the four-component Dirac-Coulomb Hamiltonian using Dyall's acv4z basis sets. The relativistic corrections to the nuclear magnetic shieldings and chemical shifts are combined with nonrelativistic coupled cluster singles and doubles with noniterative triple excitations [CCSD(T)] calculations using the very large polarization-consistent basis sets aug-pcSseg-4 for He, Ne, and Ar, aug-pcSseg-3 for Kr, and the AQZP basis set for Xe. For the dimers, zero-point vibrational (ZPV) corrections, obtained at the CCSD(T) level with the same basis sets, were also added. Best estimates of the dimer chemical shifts are generated from these nuclear magnetic shieldings, and the relative importance of electron correlation, ZPV, and relativistic corrections for the shieldings and chemical shifts is analyzed. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Tehrany, M. Sh.; Jones, S.
2017-10-01
This paper explores the influence of the extent and density of inventory data on flood susceptibility mapping, examining the impact of different formats and extents of the flood inventory on the final susceptibility map. The extreme 2011 Brisbane flood event was used as the case study. A logistic regression (LR) model was applied using both polygon and point formats of the inventory data; LR was selected because it is a well-known algorithm in natural hazard modelling, being easy to interpret, fast to process, and accurate. Random sets of 1000, 700, 500, 300, 100, and 50 points were selected, and susceptibility mapping was undertaken using each group of random points. The resulting maps were assessed visually and statistically using the area under the curve (AUC) method. The prediction rates measured for the susceptibility maps produced from the polygon data and from the 1000, 700, 500, 300, 100, and 50 random points were 63%, 76%, 88%, 80%, 74%, 71%, and 65%, respectively. Evidently, using the polygon format of the inventory data did not lead to reasonable outcomes. In the case of random points, raising the number of points increased the prediction rates, except for 1000 points. Hence, minimum and maximum thresholds for the extent of the inventory must be set prior to the analysis. It is concluded that the extent and format of the inventory data are two of the influential components in the precision of the modelling.
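The core experimental loop can be sketched as follows (library calls and variable names are illustrative assumptions, not the authors' code; the paper also measures prediction rates on validation data rather than on the training sample as done here for brevity):

```python
# Sketch: fit a logistic-regression susceptibility model on n sampled
# flood points plus non-flood points, and score it with AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def auc_for_sample_size(X_flood, X_nonflood, n, seed=0):
    rng = np.random.default_rng(seed)
    pos = X_flood[rng.choice(len(X_flood), size=n, replace=False)]
    X = np.vstack([pos, X_nonflood])
    y = np.r_[np.ones(len(pos)), np.zeros(len(X_nonflood))]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])
```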
Constrained tracking control for nonlinear systems.
Khani, Fatemeh; Haeri, Mohammad
2017-09-01
This paper proposes a tracking control strategy for nonlinear systems that does not require prior knowledge of the reference trajectory. The proposed method consists of a set of local controllers with appropriate overlaps in their stability regions, together with an online switching strategy that implements these controllers and uses augmented intermediate controllers to steer the system states to the desired set points without redesigning the controller for each set-point change. The approach provides smooth transient responses despite switching among the local controllers. The stability regions of the proposed controllers can be estimated off-line for a range of set-point changes. The efficiency of the proposed algorithm is illustrated via two example simulations. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Progress in the development of paper-based diagnostics for low-resource point-of-care settings
Byrnes, Samantha; Thiessen, Gregory; Fu, Elain
2014-01-01
This Review focuses on recent work in the field of paper microfluidics that specifically addresses the goal of translating the multistep processes that are characteristic of gold-standard laboratory tests to low-resource point-of-care settings. A major challenge is to implement multistep processes with the robust fluid control required to achieve the necessary sensitivity and specificity of a given application in a user-friendly package that minimizes equipment. We review key work in the areas of fluidic controls for automation in paper-based devices, readout methods that minimize dedicated equipment, and power and heating methods that are compatible with low-resource point-of-care settings. We also highlight a focused set of recent applications and discuss future challenges. PMID:24256361
Henwood, Patricia C; Mackenzie, David C; Rempell, Joshua S; Murray, Alice F; Leo, Megan M; Dean, Anthony J; Liteplo, Andrew S; Noble, Vicki E
2014-09-01
The value of point-of-care ultrasound education in resource-limited settings is increasingly recognized, though little guidance exists on how to best construct a sustainable training program. Herein we offer a practical overview of core factors to consider when developing and implementing a point-of-care ultrasound education program in a resource-limited setting. Considerations include analysis of needs assessment findings, development of locally relevant curriculum, access to ultrasound machines and related technological and financial resources, quality assurance and follow-up plans, strategic partnerships, and outcomes measures. Well-planned education programs in these settings increase the potential for long-term influence on clinician skills and patient care. Copyright © 2014 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
Lincoff, A Michael; Tardif, Jean-Claude; Schwartz, Gregory G; Nicholls, Stephen J; Rydén, Lars; Neal, Bruce; Malmberg, Klas; Wedel, Hans; Buse, John B; Henry, Robert R; Weichert, Arlette; Cannata, Ruth; Svensson, Anders; Volz, Dietmar; Grobbee, Diederick E
2014-04-16
No therapy directed against diabetes has been shown to unequivocally reduce the excess risk of cardiovascular complications. Aleglitazar is a dual agonist of peroxisome proliferator-activated receptors with insulin-sensitizing and glucose-lowering actions and favorable effects on lipid profiles. The aim was to determine whether the addition of aleglitazar to standard medical therapy reduces cardiovascular morbidity and mortality among patients with type 2 diabetes mellitus and a recent acute coronary syndrome (ACS). AleCardio was a phase 3, multicenter, randomized, double-blind, placebo-controlled trial conducted in 720 hospitals in 26 countries throughout North America, Latin America, Europe, and Asia-Pacific regions. The enrollment of 7226 patients hospitalized for ACS (myocardial infarction or unstable angina) with type 2 diabetes occurred between February 2010 and May 2012; treatment was planned to continue until patients were followed up for at least 2.5 years and 950 primary end point events were positively adjudicated. Patients were randomized in a 1:1 ratio to receive aleglitazar 150 µg or placebo daily. The primary efficacy end point was time to cardiovascular death, nonfatal myocardial infarction, or nonfatal stroke. Principal safety end points were hospitalization due to heart failure and changes in renal function. The trial was terminated on July 2, 2013, after a median follow-up of 104 weeks, upon recommendation of the data and safety monitoring board, due to futility for efficacy at an unplanned interim analysis and increased rates of safety end points. A total of 3.1% of patients were lost to follow-up and 3.2% of patients withdrew consent. The primary end point occurred in 344 patients (9.5%) in the aleglitazar group and 360 patients (10.0%) in the placebo group (hazard ratio, 0.96 [95% CI, 0.83-1.11]; P = .57). Rates of serious adverse events, including heart failure (3.4% for aleglitazar vs 2.8% for placebo, P = .14), gastrointestinal hemorrhages (2.4% for aleglitazar vs 1.7% for placebo, P = .03), and renal dysfunction (7.4% for aleglitazar vs 2.7% for placebo, P < .001), were increased. Among patients with type 2 diabetes and a recent ACS, use of aleglitazar did not reduce the risk of cardiovascular outcomes. These findings do not support the use of aleglitazar in this setting with a goal of reducing cardiovascular risk. clinicaltrials.gov Identifier: NCT01042769.
Remote temperature-set-point controller
Burke, W.F.; Winiecki, A.L.
1984-10-17
An instrument is described for carrying out mechanical strain tests on metallic samples, with the addition of means for varying the temperature with strain. The instrument includes opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and the set point for varying the temperature of the sample linearly with strain during the tests.
Remote temperature-set-point controller
Burke, William F.; Winiecki, Alan L.
1986-01-01
An instrument for carrying out mechanical strain tests on metallic samples with the addition of an electrical system for varying the temperature with strain, the instrument including opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and set point for varying the temperature of the sample linearly with strain during the tests.
Preconditioning 2D Integer Data for Fast Convex Hull Computations.
Cadenas, José Oswaldo; Megson, Graham M; Luengo Hendriks, Cris L
2016-01-01
In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speed up gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved.
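One O(n) preconditioning pass in the spirit described, though not necessarily the paper's exact algorithm, keeps only the extreme points of each occupied x column; the survivors contain the hull and, read left to right along the minima and then right to left along the maxima, form a simple polygonal chain:

```python
# Sketch: for integer points in a p x q box, keep the min- and max-y
# point of every occupied x column, returned as one closed chain.
def precondition(points, p):
    lo = [None] * p   # min y per x column
    hi = [None] * p   # max y per x column
    for x, y in points:
        if lo[x] is None or y < lo[x]:
            lo[x] = y
        if hi[x] is None or y > hi[x]:
            hi[x] = y
    lower = [(x, lo[x]) for x in range(p) if lo[x] is not None]
    upper = [(x, hi[x]) for x in range(p - 1, -1, -1)
             if hi[x] is not None and hi[x] != lo[x]]
    return lower + upper   # feed this chain to an O(n) hull algorithm
```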
Extracting Exact Answers to Questions Based on Structural Links
2002-01-01
Answer extraction matches the type of the asking point with the type of the answer point (e.g., an NePerson asking point matches NePerson and its sub-types NeMan and NeWoman; 'how' matches manner-modifiers; a person-seeking question can be answered through a verb-subject (V-S) link such as win [John Smith]/NeMan). Some sample results are given in section 4 to illustrate how answer points are identified based on matching binary structural links.
21 CFR Appendix A to Part 201 - Examples of Graphic Enhancements Used by FDA
Code of Federal Regulations, 2010 CFR
2010-04-01
... (e.g., “Ask a doctor or pharmacist before use if you are”) are set in 6 point Helvetica Bold, left... doctor or pharmacist before use if you are”) are set in 6 point Helvetica Bold, left justified. 4. The...
21 CFR Appendix A to Part 201 - Examples of Graphic Enhancements Used by FDA
Code of Federal Regulations, 2013 CFR
2013-04-01
... (e.g., “Ask a doctor or pharmacist before use if you are”) are set in 6 point Helvetica Bold, left... doctor or pharmacist before use if you are”) are set in 6 point Helvetica Bold, left justified. 4. The...
21 CFR Appendix A to Part 201 - Examples of Graphic Enhancements Used by FDA
Code of Federal Regulations, 2011 CFR
2011-04-01
... (e.g., “Ask a doctor or pharmacist before use if you are”) are set in 6 point Helvetica Bold, left... doctor or pharmacist before use if you are”) are set in 6 point Helvetica Bold, left justified. 4. The...
21 CFR Appendix A to Part 201 - Examples of Graphic Enhancements Used by FDA
Code of Federal Regulations, 2012 CFR
2012-04-01
... (e.g., “Ask a doctor or pharmacist before use if you are”) are set in 6 point Helvetica Bold, left... doctor or pharmacist before use if you are”) are set in 6 point Helvetica Bold, left justified. 4. The...
21 CFR Appendix A to Part 201 - Examples of Graphic Enhancements Used by FDA
Code of Federal Regulations, 2014 CFR
2014-04-01
... (e.g., “Ask a doctor or pharmacist before use if you are”) are set in 6 point Helvetica Bold, left... doctor or pharmacist before use if you are”) are set in 6 point Helvetica Bold, left justified. 4. The...
Public Data Set: Control and Automation of the Pegasus Multi-point Thomson Scattering System
Bodner, Grant M. [University of Wisconsin-Madison] (ORCID:0000000324979172); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000000231609746); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000000294386762); Reusch, Joshua A. [University of Wisconsin-Madison] (ORCID:0000000284249422); Rodriguez Sanchez, Cuauhtemoc [University of Wisconsin-Madison] (ORCID:0000000334712586); Schlossberg, David J. [University of Wisconsin-Madison] (ORCID:0000000287139448)
2016-08-12
This public data set contains openly-documented, machine readable digital research data corresponding to figures published in G.M. Bodner et al., 'Control and Automation of the Pegasus Multi-point Thomson Scattering System,' Rev. Sci. Instrum. 87, 11E523 (2016).
Northrip, Kimberly; Chen, Candice; Marsh, Jennifer
2008-04-29
Key informants are individuals with insight into a community or a problem of interest. Our objective was to evaluate the effect of the employment type of key informants on the outcome of a pediatric needs assessment for an urban community. Twenty-one interviews were conducted during the course of a pediatric community needs assessment. As part of the interview, informants were asked to list the top three problems facing children in their community. We analyzed their answers to determine if informant responses differed by employment type. Key informants were divided into four employment types: health care setting, social service, business, and infrastructure. Responses were coded as being primarily one of three types: medical, social, or resource. Our results showed that those informants who worked in a health care setting listed medical problems more often than those who did not (p < 0.04). Those who worked in social services listed resource problems more often than those who did not (p < 0.05). Those in business and infrastructure positions listed more social problems (p < 0.37). No difference was observed in response type between those who had lived in the community at some point and those who had not. This study lends support to the hypothesis that informants' reporting of community problems is biased by their vocation. Clinicians often focus their needs assessments on health care workers. This study suggests, however, that we need to take into consideration the bias this presents and to seek to interview people with diverse work experiences. By limiting the process to health care workers, clinicians are likely to get a skewed perspective of a community's needs and wants.
Robust group-wise rigid registration of point sets using t-mixture model
NASA Astrophysics Data System (ADS)
Ravikumar, Nishant; Gooya, Ali; Frangi, Alejandro F.; Taylor, Zeike A.
2016-03-01
We present a probabilistic framework for robust, group-wise rigid alignment of point sets using a mixture of Student's t-distributions, aimed especially at cases where the point sets are of varying lengths, are corrupted by an unknown degree of outliers, or contain missing data. Medical images (in particular magnetic resonance (MR) images), their segmentations, and consequently the point sets generated from these are highly susceptible to corruption by outliers. This poses a problem for robust correspondence estimation and accurate alignment of shapes, which are necessary for training statistical shape models (SSMs). To address these issues, this study proposes a t-mixture model (TMM) to approximate the underlying joint probability density of a group of similar shapes and align them to a common reference frame. The heavy-tailed nature of t-distributions provides a more robust registration framework than state-of-the-art algorithms. Significant reductions in alignment error are achieved in the presence of outliers using the proposed TMM-based group-wise rigid registration method, in comparison with its Gaussian mixture model (GMM) counterparts. The proposed TMM framework is compared with a group-wise variant of the well-known Coherent Point Drift (CPD) algorithm and two other group-wise GMM-based methods, using both synthetic and real data sets. Rigid alignment errors for groups of shapes are quantified using the Hausdorff distance (HD) and quadratic surface distance (QSD) metrics.
PIV study of the wake of a model wind turbine transitioning between operating set points
NASA Astrophysics Data System (ADS)
Houck, Dan; Cowen, Edwin (Todd)
2016-11-01
Wind turbines are ideally operated at their most efficient tip speed ratio for a given wind speed. There is increasing interest, however, in operating turbines at other set points to increase the overall power production of a wind farm. Specifically, Goit and Meyers (2015) used LES to examine a wind farm optimized by unsteady operation of its turbines. In this study, the wake of a model wind turbine is measured in a water channel using PIV. We measure the wake response to a change in the operational set point of the model turbine, e.g., from low to high tip speed ratio or vice versa, to examine how it might influence a downwind turbine. A modified torque transducer after Kang et al. (2010) is used to calibrate in situ voltage measurements of the model turbine's generator, operating across a resistance, against the torque on the generator. Changes in operational set point are made by changing the resistance or the flow speed, which changes the rotation rate measured by an encoder. Single-camera PIV on vertical planes reveals statistics of the wake at various distances downstream as the turbine transitions from one set point to another. From these measurements, we infer how the unsteady operation of a turbine may affect the performance of a downwind turbine through its incoming flow. Funding: National Science Foundation and the Atkinson Center for a Sustainable Future.
Variationally consistent discretization schemes and numerical algorithms for contact problems
NASA Astrophysics Data System (ADS)
Wohlmuth, Barbara
We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of possible applications and show the performance of the space discretization scheme, non-linear solver, adaptive refinement process and time integration.
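For the complementarity reformulation and the primal-dual active set strategy, a minimal sketch on a 1D obstacle problem (our illustration under simplified assumptions, not the paper's formulation) looks like this: with -u'' = f + lam, u >= g, lam >= 0, and lam*(u - g) = 0, the semi-smooth Newton step reduces to an active-set update.

```python
# Primal-dual active set for a 1D obstacle problem on (0,1) with
# homogeneous Dirichlet BCs: predict the contact set from the
# complementarity function lam + c*(g - u), enforce u = g there,
# recover the multiplier from the residual, and repeat.
import numpy as np

def obstacle_pdas(n=100, c=100.0, iters=50):
    h = 1.0 / (n + 1)
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2   # discrete -d2/dx2
    f = -10.0 * np.ones(n)                       # load pushing u downward
    g = -0.1 * np.ones(n)                        # obstacle from below
    u, lam = np.zeros(n), np.zeros(n)
    for _ in range(iters):
        active = (lam + c * (g - u)) > 0         # predicted contact set
        idx = np.where(active)[0]
        Aa, b = A.copy(), f.copy()
        Aa[idx, :] = 0.0
        Aa[idx, idx] = 1.0
        b[idx] = g[idx]                          # enforce u = g on contact
        u = np.linalg.solve(Aa, b)
        lam = np.zeros(n)
        lam[idx] = (A @ u - f)[idx]              # residual acts as multiplier
        # converged once the active set stops changing (check omitted)
    return u, lam
```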
Bendifallah, Sofiane; Canlorbe, Geoffroy; Arsène, Emmanuelle; Collinet, Pierre; Huguet, Florence; Coutant, Charles; Hudry, Delphine; Graesslin, Olivier; Raimond, Emilie; Touboul, Cyril; Daraï, Emile; Ballester, Marcos
2015-08-01
This study was designed to develop a risk scoring system (RSS) for predicting lymph node (LN) metastases in patients with early-stage endometrial cancer (EC). Data on 457 patients with early-stage EC who received primary surgical treatment between January 2001 and December 2012 were abstracted from a prospective, multicentre database (training set). A risk model based on factors affecting LN metastases was developed. To assess the discrimination of the RSS, both internal validation by the bootstrap approach and external validation (validation set) were adopted. Overall, the LN metastasis rate was 11.8 % (54/457). LN metastases were associated with five variables: age ≥60 years, histological grade 3 and/or type 2, primary tumor diameter ≥1.5 cm, depth of myometrial invasion ≥50 %, and positive lymphovascular space involvement status. These variables were included in the RSS and assigned scores, with totals ranging from 0 to 9. The discrimination of the RSS was 0.81 [95 % confidence interval (CI) 0.78-0.84] in the training set. The area under the receiver-operating characteristic curve for predicting LN metastases after internal and external validation was 0.80 (95 % CI 0.77-0.83) and 0.85 (95 % CI 0.81-0.89), respectively. A total score of 6 points corresponded to the optimal threshold of the RSS, with LN metastasis rates of 7.5 % (29/385) for low-risk patients (≤6 points) and 34.7 % (25/72) for high-risk patients (>6 points). At this threshold, the diagnostic accuracy was 83 %. This RSS could be useful in clinical practice for determining which patients with early-stage EC should undergo secondary surgical staging including complete lymphadenectomy.
Text vectorization based on character recognition and character stroke modeling
NASA Astrophysics Data System (ADS)
Fan, Zhigang; Zhou, Bingfeng; Tse, Francis; Mu, Yadong; He, Tao
2014-03-01
In this paper, a text vectorization method is proposed that uses OCR (Optical Character Recognition) and character stroke modeling. It is based on the observation that, for a particular character, the font glyphs may have different shapes but often share the same stroke structure. Like many other methods, the proposed algorithm contains two procedures: dominant point determination and data fitting. The first partitions the outlines into segments, and the second fits a curve to each segment. In the proposed method, the dominant points are classified as "major" (specifying stroke structures) or "minor" (specifying serif shapes). A set of rules (parameters) is determined offline, specifying for each character the number of major and minor dominant points, and for each dominant point the detection and fitting parameters (projection directions, boundary conditions, and smoothness). For minor points, multiple sets of parameters can be used for different fonts. During operation, OCR is performed and the parameters associated with the recognized character are selected. Both major and minor dominant points are detected by a maximization process specified by the parameter set. For minor points, an additional step can be performed to test competing hypotheses and detect degenerate cases.
Path planning during combustion mode switch
Jiang, Li; Ravi, Nikhil
2015-12-29
Systems and methods are provided for transitioning between a first combustion mode and a second combustion mode in an internal combustion engine. A current operating point of the engine is identified and a target operating point for the internal combustion engine in the second combustion mode is also determined. A predefined optimized transition operating point is selected from memory. While operating in the first combustion mode, one or more engine actuator settings are adjusted to cause the operating point of the internal combustion engine to approach the selected optimized transition operating point. When the engine is operating at the selected optimized transition operating point, the combustion mode is switched from the first combustion mode to the second combustion mode. While operating in the second combustion mode, one or more engine actuator settings are adjusted to cause the operating point of the internal combustion to approach the target operating point.
Tange, Mio; Matsumoto, Akino; Yoshida, Miyako; Kojima, Honami; Haraguchi, Tamami; Uchida, Takahiro
2017-01-01
The purpose of the study was to evaluate the adsorption of filgrastim on infusion sets (comprising infusion bag, line and filter) and to compare the adsorption of the original filgrastim preparation with biosimilar preparations using HPLC. The inhibitory effect of polysorbate 80 on this adsorption was also evaluated. Filgrastim was mixed with isotonic sodium chloride solution or 5% (w/v) glucose solution in the infusion fluid. Filgrastim adsorption on infusion sets was observed with all preparations and with both types of infusion solution. The adsorption ratio was about 30% in all circumstances. Filgrastim adsorption on all parts of the infusion set (bag, line and filter) was dramatically decreased by the addition of polysorbate 80 solution at concentrations at or over its critical micelle concentration (CMC). The filgrastim adsorption ratio was highest at a solution pH of 5.65, which is the isoelectric point (pI) of filgrastim. This study showed that the degree of filgrastim adsorption on infusion sets is similar for original and biosimilar preparations, but that the addition of polysorbate 80 to the infusion solution at concentrations at or above its CMC is effective in preventing filgrastim adsorption. The addition of a total-vitamin preparation with a polysorbate 80 concentration over its CMC may be an effective way of preventing filgrastim adsorption on infusion sets.
Convex Hull Aided Registration Method (CHARM).
Fan, Jingfan; Yang, Jian; Zhao, Yitian; Ai, Danni; Liu, Yonghuai; Wang, Ge; Wang, Yongtian
2017-09-01
Non-rigid registration finds many applications such as photogrammetry, motion tracking, model retrieval, and object recognition. In this paper we propose a novel convex hull aided registration method (CHARM) to match two point sets subject to a non-rigid transformation. First, two convex hulls are extracted from the source and target respectively. Then, all points of the point sets are projected onto the reference plane through each triangular facet of the hulls. From these projections, invariant features are extracted and matched optimally. The matched feature point pairs are mapped back onto the triangular facets of the convex hulls to remove outliers that are outside any relevant triangular facet. The rigid transformation from the source to the target is robustly estimated by the random sample consensus (RANSAC) scheme through minimizing the distance between the matched feature point pairs. Finally, these feature points are utilized as the control points to achieve non-rigid deformation in the form of thin-plate spline of the entire source point set towards the target one. The experimental results based on both synthetic and real data show that the proposed algorithm outperforms several state-of-the-art ones with respect to sampling, rotational angle, and data noise. In addition, the proposed CHARM algorithm also shows higher computational efficiency compared to these methods.
Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2016-06-01
We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction, and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
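One common recipe for such multi-scale neighborhood features, offered here as a hedged sketch since the paper's exact feature set differs in detail, derives linearity/planarity/scatter descriptors from local covariance eigenvalues at several radii:

```python
# Sketch of eigenvalue-based geometric features per point at several
# neighborhood radii (linearity, planarity, scatter).
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(cloud, radii=(0.25, 0.5, 1.0)):
    """cloud: (n, 3) array. Returns an (n, 3, len(radii)) feature tensor."""
    tree = cKDTree(cloud)
    per_radius = []
    for r in radii:
        rows = []
        for p in cloud:
            nb = cloud[tree.query_ball_point(p, r)]
            if len(nb) < 3:                      # too few neighbours at this scale
                rows.append((0.0, 0.0, 0.0))
                continue
            lam = np.linalg.eigvalsh(np.cov(nb.T))[::-1]   # l1 >= l2 >= l3 >= 0
            l1 = max(lam[0], 1e-12)              # guard against degenerate patches
            rows.append(((lam[0] - lam[1]) / l1,  # linearity
                         (lam[1] - lam[2]) / l1,  # planarity
                         lam[2] / l1))            # scatter / sphericity
        per_radius.append(rows)
    return np.dstack([np.asarray(rows) for rows in per_radius])
```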
A new prognostic model for chemotherapy-induced febrile neutropenia.
Ahn, Shin; Lee, Yoon-Seon; Lee, Jae-Lyun; Lim, Kyung Soo; Yoon, Sung-Cheol
2016-02-01
The objective of this study was to develop and validate a new prognostic model for febrile neutropenia (FN). This study comprised 1001 episodes of FN: 718 for the derivation set and 283 for the validation set. Multivariate logistic regression analysis was performed with unfavorable outcome as the primary endpoint and bacteremia as the secondary endpoint. In the derivation set, risk factors for adverse outcomes comprised age ≥ 60 years (2 points), procalcitonin ≥ 0.5 ng/mL (5 points), ECOG performance score ≥ 2 (2 points), oral mucositis grade ≥ 3 (3 points), systolic blood pressure <90 mmHg (3 points), and respiratory rate ≥ 24 breaths/min (3 points). The model stratified patients into three severity classes, with adverse event rates of 6.0 % in class I (score ≤ 2), 27.3 % in class II (score 3-8), and 67.9 % in class III (score ≥ 9). Bacteremia was present in 1.1, 11.5, and 29.8 % of patients in class I, II, and III, respectively. The outcomes of the validation set were similar in each risk class. When the derivation and validation sets were integrated, unfavorable outcomes occurred in 5.9 % of the low-risk group classified by the new prognostic model and in 12.2 % classified by the Multinational Association for Supportive Care in Cancer (MASCC) risk index. With the new prognostic model, we can classify patients with FN into three classes of increasing adverse outcomes and bacteremia. Early discharge would be possible for class I patients, short-term observation could safely manage class II patients, and inpatient admission is warranted for class III patients.
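The published point assignments translate directly into a scoring function (a sketch for illustration only; clinical use would require the original paper's exact definitions and validation):

```python
# Risk class from the febrile-neutropenia score described above:
# class I (<= 2 points), class II (3-8), class III (>= 9).
def fn_risk_class(age, pct, ecog, mucositis_grade, sbp, rr):
    score = 0
    score += 2 if age >= 60 else 0
    score += 5 if pct >= 0.5 else 0            # procalcitonin, ng/mL
    score += 2 if ecog >= 2 else 0
    score += 3 if mucositis_grade >= 3 else 0
    score += 3 if sbp < 90 else 0              # systolic BP, mmHg
    score += 3 if rr >= 24 else 0              # respiratory rate, breaths/min
    return "I" if score <= 2 else ("II" if score <= 8 else "III")

print(fn_risk_class(age=65, pct=0.8, ecog=1, mucositis_grade=1, sbp=110, rr=18))  # -> "II"
```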
Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets
NASA Astrophysics Data System (ADS)
Gold, P. O.; Cowgill, E.; Kreylos, O.
2009-12-01
Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaiced LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is essential for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments. To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point cloud, we scan from multiple locations an object of known geometry (a cylinder mounted above a square box). Preliminary results show that even in a controlled experimental scan of an object of known dimensions, there is significant variability in the precision of the registered point cloud. For example, when 3 scans of the central object are registered using 4 known points (maximum time, maximum equipment), the point clouds align to within ~1 cm (normal to the object surface). However, when the same point clouds are registered with only 1 known point (minimum time, minimum equipment), misalignment of the point clouds can range from 2.5 to 5 cm, depending on target type. The greater misalignment of the 3 point clouds when registered with fewer known points stems from the field method employed in acquiring the dataset and demonstrates the impact of field workflow on LiDAR dataset precision. By quantifying the degree of scan mismatch in results such as this, we can provide users with the information needed to maximize efficiency in remote field surveys.
Gschwind, Michael K
2013-04-16
Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.
40 CFR 1065.659 - Removed water correction.
Code of Federal Regulations, 2011 CFR
2011-07-01
... know that saturated water vapor conditions exist. Use good engineering judgment to measure the... absolute pressure based on an alarm set point, a pressure regulator set point, or good engineering judgment... from raw exhaust, you may determine the amount of water based on intake-air humidity, plus a chemical...
Assessment of information needs in diabetes: Development and evaluation of a questionnaire.
Chernyak, N; Stephan, A; Bächle, C; Genz, J; Jülich, F; Icks, A
2016-08-01
To develop a questionnaire suitable for assessing the information needs of individuals with diabetes mellitus types 1 and 2 in diverse healthcare settings (e.g. primary care or long-term care) and at different time points during the course of the disease. The initial questionnaire was developed on the basis of literature search and analysis, reviewed by clinical experts, and evaluated in two focus groups. The revised version was pilot-tested on 39 individuals with diabetes type 2, type 1 and gestational diabetes. The final questionnaire reveals the most important information needs in diabetes. A choice task, a rating task and open-ended questions are combined. First, participants have to choose three topics that interest them out of a list with 12 general topics and specify in their own words their particular information needs for the chosen topics. They are then asked how informed they feel with regard to all topics (4-point Likert-scale), and whether information is currently desired (yes/no). The questionnaire ends with an open-ended question asking for additional topics of interest. Careful selection of topics and inclusion of open-ended questions seem to be essential prerequisites for the unbiased assessment of information needs. The questionnaire can be applied in surveys in order to examine patterns of information needs across various groups and changes during the course of the disease. Such knowledge would contribute to more patient-guided information, counselling and support. Copyright © 2015 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.
Chaotic phase synchronization in bursting-neuron models driven by a weak periodic force
NASA Astrophysics Data System (ADS)
Ando, Hiroyasu; Suetani, Hiromichi; Kurths, Jürgen; Aihara, Kazuyuki
2012-07-01
We investigate the entrainment of a neuron model exhibiting a chaotic spiking-bursting behavior in response to a weak periodic force. This model exhibits two types of oscillations with different characteristic time scales, namely, long and short time scales. Several types of phase synchronization are observed, such as 1:1 phase locking between a single spike and one period of the force and 1:l phase locking between the period of slow oscillation underlying bursts and l periods of the force. Moreover, spiking-bursting oscillations with chaotic firing patterns can be synchronized with the periodic force. Such a type of phase synchronization is detected from the position of a set of points on a unit circle, which is determined by the phase of the periodic force at each spiking time. We show that this detection method is effective for a system with multiple time scales. Owing to the existence of both the short and the long time scales, two characteristic phenomena are found around the transition point to chaotic phase synchronization. One phenomenon shows that the average time interval between successive phase slips exhibits a power-law scaling against the driving force strength and that the scaling exponent has an unsmooth dependence on the changes in the driving force strength. The other phenomenon shows that Kuramoto's order parameter before the transition exhibits stepwise behavior as a function of the driving force strength, contrary to the smooth transition in a model with a single time scale.
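A minimal sketch of the detection idea described above: map each spike time t_k to the phase of the periodic force, phi_k = 2*pi*(t_k mod T)/T, place the points exp(i*phi_k) on the unit circle, and measure their concentration with Kuramoto's order parameter. The spike trains below are invented.

    import numpy as np

    def order_parameter(spike_times, period):
        phases = 2.0 * np.pi * (np.asarray(spike_times) % period) / period
        return np.abs(np.mean(np.exp(1j * phases)))  # ~1 = locked, ~0 = unlocked

    T = 1.0
    locked = np.arange(100) * T + 0.23          # spikes at a fixed force phase
    rng = np.random.default_rng(0)
    unlocked = np.sort(rng.uniform(0, 100 * T, 100))
    print(order_parameter(locked, T), order_parameter(unlocked, T))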
NASA Astrophysics Data System (ADS)
Buta, Ronald J.
2017-10-01
Dark gaps are commonly seen in early-to-intermediate-type barred galaxies having inner and outer rings or related features. In this paper, the morphologies of 54 barred and oval ringed galaxies have been examined with the goal of determining what the dark gaps are telling us about the structure and evolution of barred galaxies. The analysis is based mainly on galaxies selected from the Galaxy Zoo 2 data base and the Catalogue of Southern Ringed Galaxies. The dark gaps between inner and outer rings are of interest because of their likely association with the L4 and L5 Lagrangian points that would be present in the gravitational potential of a bar or oval. Since the points are theoretically expected to lie very close to the corotation resonance (CR) of the bar pattern, the gaps provide the possibility of locating corotation in some galaxies simply by measuring the radius r_gp of the gap region and setting r_CR = r_gp. With the additional assumption of generally flat rotation curves, the locations of other resonances can be predicted and compared with observed morphological features. It is shown that this `gap method' provides remarkably consistent interpretations of the morphology of early-to-intermediate-type barred galaxies. The paper also brings attention to cases where the dark gaps lie inside an inner ring, rather than between inner and outer rings. These may have a different origin compared to the inner/outer ring gaps.
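The arithmetic behind the gap method can be made concrete. A short sketch assuming only the flat rotation curve V(r) = V_0 stated above (the numerical factors follow from that assumption, not from the paper):

    \Omega(r) = \frac{V_0}{r}, \qquad
    \kappa(r) = \sqrt{2}\,\frac{V_0}{r}, \qquad
    \Omega_p = \Omega(r_{\mathrm{CR}}), \quad r_{\mathrm{CR}} = r_{\mathrm{gp}},

    % Outer Lindblad resonance, \Omega_p = \Omega + \kappa/2:
    r_{\mathrm{OLR}} = \Bigl(1 + \tfrac{1}{\sqrt{2}}\Bigr)\, r_{\mathrm{CR}} \approx 1.71\, r_{\mathrm{gp}},
    % inner 4:1 resonance, \Omega_p = \Omega - \kappa/4:
    r_{4/1} = \Bigl(1 - \tfrac{\sqrt{2}}{4}\Bigr)\, r_{\mathrm{CR}} \approx 0.65\, r_{\mathrm{gp}} .

A single measurement of the gap radius therefore fixes the predicted radii of the other resonances, which can then be compared with the observed ring morphology.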
Map generation in unknown environments by AUKF-SLAM using line segment-type and point-type landmarks
NASA Astrophysics Data System (ADS)
Nishihta, Sho; Maeyama, Shoichi; Watanebe, Keigo
2018-02-01
Recently, autonomous mobile robots that collect information at disaster sites have been under development. Since it is difficult to obtain maps of disaster sites in advance, robots capable of autonomous movement in unknown environments are required. To this end, the robots have to build maps as well as estimate their own location; this is called the SLAM problem. In particular, AUKF-SLAM, which uses corners in the environment as point-type landmarks, has been developed as a solution method. However, when a robot moves in an environment like a corridor, which has few point-type features, the accuracy of the self-location estimated from such landmarks decreases and causes distortions in the map. In this research, we propose an AUKF-SLAM that uses walls in the environment as line segment-type landmarks. We demonstrate that the robot can generate maps in unknown environments by AUKF-SLAM using both line segment-type and point-type landmarks.
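A minimal sketch, not taken from the paper, of a standard measurement model for a line-segment-type landmark stored in Hessian normal form (rho, theta): the wall's expected range and bearing as seen from a robot pose (x, y, phi). The innovation between this prediction and the observed wall is what a UKF-style update would consume.

    import numpy as np

    def expected_line_measurement(rho_w, theta_w, pose):
        """Transform a world-frame line (rho_w, theta_w) into the robot frame."""
        x, y, phi = pose
        rho_r = rho_w - (x * np.cos(theta_w) + y * np.sin(theta_w))
        theta_r = theta_w - phi
        if rho_r < 0:  # keep rho non-negative by flipping the line normal
            rho_r, theta_r = -rho_r, theta_r + np.pi
        return rho_r, (theta_r + np.pi) % (2 * np.pi) - np.pi

    # A wall 5 m ahead along the world x-axis, robot at the origin facing +x.
    print(expected_line_measurement(5.0, 0.0, (0.0, 0.0, 0.0)))  # -> (5.0, 0.0)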
Multi-Beam Approach for Accelerating Alignment and Calibration of HyspIRI-Like Imaging Spectrometers
NASA Technical Reports Server (NTRS)
Eastwood, Michael L.; Green, Robert O.; Mouroulis, Pantazis; Hochberg, Eric B.; Hein, Randall C.; Kroll, Linley A.; Geier, Sven; Coles, James B.; Meehan, Riley
2012-01-01
A paper describes an optical stimulus that produces more consistent results and can be automated for unattended, routine generation of the data analysis products needed by the integration and testing team assembling a high-fidelity imaging spectrometer system. One key attribute of the system is an arrangement of pick-off mirrors that supplies multiple input beams (five in this implementation) to simultaneously deliver stimulus light to several field angles along the field of view of the sensor under test, allowing one data set to contain all the information that previously required five separately collected data sets. This stimulus can also be fed by quickly reconfigured sources that ultimately provide three data set types that would previously be collected separately using three different setups: Spectral Response Function (SRF), Cross-track Response Function (CRF), and Along-track Response Function (ARF). This method also lends itself to expansion of the number of field points if less interpolation across the field of view is desirable. An absolute minimum of three is required at the beginning stages of imaging spectrometer alignment.
Richter, E.; Barach, P.; Berman, T.; Ben-David, G; Weinberger, Z.
2001-01-01
To examine the ethical issues involved in governmental decisions with potential health risks, we review the history of the decision to raise the interurban speed limit in Israel in light of its impact on road deaths and injuries. In 1993, the Israeli Ministry of Transportation initiated an "experiment" to raise the interurban speed limit from 90 to 100 kph. The "experiment" did not include a protocol and did not specify cut-off points for early termination in the case of adverse results. After the speed limit was raised, the death toll on interurban roads rose as a result of a sudden increase in speeds and case fatality rates. The committee's decision is a case study in unfettered human experimentation and public health risk when the setting is non-medical and lacks a defined ethical framework. The case study makes the case for extending Helsinki-type safeguards to experimentation in non-medical settings. Key Words: Declaration of Helsinki • human experimentation • speed limit PMID:11314157
An improved level set method for brain MR images segmentation and bias correction.
Chen, Yunjie; Zhang, Jianwei; Macione, Jim
2009-10-01
Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term of the level set framework. Our method is able to capture bias fields of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.
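For concreteness, one plausible form of such a localized clustering energy (the notation here is assumed, not quoted from the paper: K is a window function centred at x, b(x) the local bias factor, c_i the cluster centres, and M_i(phi) the membership functions defined by the level set function phi):

    \mathcal{E}_x = \sum_{i=1}^{N} \int_{\Omega} K(y - x)\,
        \bigl| I(y) - b(x)\, c_i \bigr|^{2}\, M_i\bigl(\phi(y)\bigr)\,\mathrm{d}y,
    \qquad
    \mathcal{E} = \int_{\Omega} \mathcal{E}_x \,\mathrm{d}x .

Alternately minimizing such an energy over b, the c_i and phi is what yields a joint bias estimate and segmentation.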
Nonrigid Image Registration in Digital Subtraction Angiography Using Multilevel B-Spline
2013-01-01
We address the problem of motion artifact reduction in digital subtraction angiography (DSA) using image registration techniques. Most of the registration algorithms proposed for application in DSA have been designed for peripheral and cerebral angiography images, in which we mainly deal with global rigid motions. These algorithms did not yield good results when applied to coronary angiography images because of the complex nonrigid motions that exist in this type of angiography image. Multiresolution and iterative algorithms have been proposed to cope with this problem, but these algorithms are associated with a high computational cost which makes them unacceptable for real-time clinical applications. In this paper we propose a nonrigid image registration algorithm for coronary angiography images that is significantly faster than multiresolution and iterative blocking methods and outperforms competing algorithms evaluated on the same data sets. The algorithm is based on a sparse set of matched feature point pairs, and the elastic registration is performed by means of multilevel B-spline image warping. Experimental results with several clinical data sets demonstrate the effectiveness of our approach. PMID:23971026
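A minimal sketch of the overall pipeline shape: matched feature point pairs define sparse displacements, which are interpolated to a dense warp field. Here scipy's thin-plate-spline RBF interpolator stands in for the multilevel B-spline scattered-data approximation used in the paper, and the point pairs are invented; resampling the mask image with the warp is not shown.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    mask_pts = np.array([[10.0, 12.0], [40.0, 8.0], [25.0, 30.0], [5.0, 45.0]])
    live_pts = np.array([[11.0, 12.5], [40.5, 7.5], [26.0, 31.0], [5.0, 44.0]])
    displacements = live_pts - mask_pts

    # Dense per-pixel displacement field on a 64x64 image grid.
    yy, xx = np.mgrid[0:64, 0:64]
    grid = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    field = RBFInterpolator(mask_pts, displacements, kernel="thin_plate_spline")(grid)
    warp = field.reshape(64, 64, 2)  # (dx, dy) used to resample the mask image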
Guided discovery of the nine-point circle theorem and its proof
NASA Astrophysics Data System (ADS)
Buchbinder, Orly
2018-01-01
The nine-point circle theorem is one of the most beautiful and surprising theorems in Euclidean geometry. It establishes an existence of a circle passing through nine points, all of which are related to a single triangle. This paper describes a set of instructional activities that can help students discover the nine-point circle theorem through investigation in a dynamic geometry environment, and consequently prove it using a method of guided discovery. The paper concludes with a variety of suggestions for the ways in which the whole set of activities can be implemented in geometry classrooms.
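A short numeric check of the theorem, of the kind such a dynamic-geometry investigation makes visible: for an arbitrary triangle, the three side midpoints, the three feet of the altitudes and the three midpoints between the orthocenter and the vertices all lie on one circle. The triangle coordinates are arbitrary.

    import numpy as np

    A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

    def perp(v):
        return np.array([-v[1], v[0]])

    def foot(P, Q, R):  # foot of the perpendicular from P onto line QR
        d = R - Q
        return Q + d * np.dot(P - Q, d) / np.dot(d, d)

    def circumcenter(P, Q, R):  # equidistance gives two linear equations
        M = 2.0 * np.array([Q - P, R - P])
        b = np.array([np.dot(Q, Q) - np.dot(P, P), np.dot(R, R) - np.dot(P, P)])
        return np.linalg.solve(M, b)

    # Orthocenter H: walk from A perpendicular to BC until (H - B).(C - A) = 0.
    d = perp(C - B)
    t = np.dot(B - A, C - A) / np.dot(d, C - A)
    H = A + t * d

    nine = [(A + B) / 2, (B + C) / 2, (C + A) / 2,
            foot(A, B, C), foot(B, C, A), foot(C, A, B),
            (A + H) / 2, (B + H) / 2, (C + H) / 2]
    N = circumcenter(nine[0], nine[1], nine[2])
    radii = [np.linalg.norm(p - N) for p in nine]
    print(np.allclose(radii, radii[0]))  # True: all nine points on one circle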
Holographic calculation for large interval Rényi entropy at high temperature
NASA Astrophysics Data System (ADS)
Chen, Bin; Wu, Jie-qiang
2015-11-01
In this paper, we study the holographic Rényi entropy of a large interval on a circle at high temperature for the two-dimensional conformal field theory (CFT) dual to pure AdS3 gravity. In the field theory, the Rényi entropy is encoded in the CFT partition function on an n-sheeted torus whose sheets are connected with each other by a large branch cut. As proposed by Chen and Wu [Large interval limit of Rényi entropy at high temperature,
Dengue expansion in Africa-not recognized or not happening?
Jaenisch, Thomas; Junghanss, Thomas; Wills, Bridget; Brady, Oliver J; Eckerle, Isabella; Farlow, Andrew; Hay, Simon I; McCall, Philip J; Messina, Jane P; Ofula, Victor; Sall, Amadou A; Sakuntabhai, Anavaj; Velayudhan, Raman; Wint, G R William; Zeller, Herve; Margolis, Harold S; Sankoh, Osman
2014-10-01
An expert conference on Dengue in Africa was held in Accra, Ghana, in February 2013 to consider key questions regarding the possible expansion of dengue in Africa. Four key action points were highlighted to advance our understanding of the epidemiology of dengue in Africa. First, dengue diagnostic tools must be made more widely available in the healthcare setting in Africa. Second, representative data need to be collected across Africa to uncover the true burden of dengue. Third, established networks should collaborate to produce these types of data. Fourth, policy needs to be informed so the necessary steps can be taken to provide dengue vector control and health services.
Solution of internal ballistic problem for SRM with grain of complex shape during main firing phase
NASA Astrophysics Data System (ADS)
Kiryushkin, A. E.; Minkov, L. L.
2017-10-01
Solid rocket motor (SRM) internal ballistics problems belong to the class of problems with moving boundaries. An algorithm able to solve such problems in an axisymmetric formulation on a Cartesian mesh with an arbitrary order of accuracy is considered in this paper. The basis of this algorithm is ghost point extrapolation using the inverse Lax-Wendroff procedure. The level set method is used as an implicit representation of the domain boundary. As an example, the internal ballistics problem for an SRM with an umbrella-type grain was solved for the main firing phase. In addition, the flow parameter distribution in the combustion chamber was obtained for different time moments.
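A much-simplified 1-D illustration (plain linear extrapolation, not the inverse Lax-Wendroff procedure itself) of the ghost-point idea: the burning grain surface is carried implicitly as the zero level set of phi, and gas-phase values are extrapolated across it so the flow solver can keep working on a fixed Cartesian mesh. All values are invented.

    import numpy as np

    x = np.linspace(0.0, 1.0, 11)
    phi = x - 0.55        # level set: interface at x = 0.55, fluid where phi < 0
    u = np.where(phi < 0, 1.0 + 2.0 * x, np.nan)  # fluid field, undefined in grain

    inside = np.where(phi < 0)[0]
    i1, i2 = inside[-1], inside[-2]               # last two fluid nodes
    for g in np.where(phi >= 0)[0]:               # linear extrapolation to ghosts
        u[g] = u[i1] + (u[i1] - u[i2]) / (x[i1] - x[i2]) * (x[g] - x[i1])
    print(u)  # ghost nodes now hold smoothly extrapolated values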
Discussion on “A Fuzzy Method for Medical Diagnosis of Headache”
NASA Astrophysics Data System (ADS)
Hung, Kuo-Chen; Wou, Yu-Wen; Julian, Peterson
This paper is in response to the report of Ahn, Mun, Kim, Oh, and Han published in IEICE Trans. Inf. & Syst., Vol.E91-D, No.4, 2008, pp.1215-1217. They tried to extend their previous paper, published in IEICE Trans. Inf. & Syst., Vol.E86-D, No.12, 2003, pp.2790-2793. However, we point out that their extension relies on detailed data giving the frequencies of the three headache types, so their new occurrence information based on intuitionistic fuzzy sets for the medical diagnosis of headache becomes redundant. We advise researchers to use the detailed data directly to decide the diagnosis of headache.
A new continuous light source for high-speed imaging
NASA Astrophysics Data System (ADS)
Paton, R. T.; Hall, R. E.; Skews, B. W.
2017-02-01
Xenon arc lamps have been identified as a suitable continuous light source for high-speed imaging, specifically high-speed schlieren and shadowgraphy. One issue when setting up such systems is the time that it takes to reduce a finite source to the approximation of a point source for z-type schlieren. A preliminary design of a compact compound lens for use with a commercial xenon arc lamp was tested for suitability. While it was found that there is some dimming of the illumination at the spot periphery, the overall spectral and luminance distribution of the compact source is quite acceptable, especially considering the time benefit that it represents.
Structural optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.
1983-01-01
A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization, and its algorithm is fully described for two-level optimization of structures assembled from finite elements of arbitrary type. Numerical results are given for an example framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers, to work concurrently on the same large problem.
A Method for the Registration of Hemispherical Photographs and TLS Intensity Images
NASA Astrophysics Data System (ADS)
Schmidt, A.; Schilling, A.; Maas, H.-G.
2012-07-01
Terrestrial laser scanners generate dense and accurate 3D point clouds with minimal effort, which represent the geometry of real objects, while image data contains texture information of object surfaces. Based on the complementary characteristics of both data sets, a combination is very appealing for many applications, including forest-related tasks. In the scope of our research project, independent data sets of a plain birch stand have been taken by a full-spherical laser scanner and a hemispherical digital camera. Previously, both kinds of data sets have been considered separately: Individual trees were successfully extracted from large 3D point clouds, and so-called forest inventory parameters could be determined. Additionally, a simplified tree topology representation was retrieved. From hemispherical images, leaf area index (LAI) values, as a very relevant parameter for describing a stand, have been computed. The objective of our approach is to merge a 3D point cloud with image data in a way that RGB values are assigned to each 3D point. So far, segmentation and classification of TLS point clouds in forestry applications was mainly based on geometrical aspects of the data set. However, a 3D point cloud with colour information provides valuable cues exceeding simple statistical evaluation of geometrical object features and thus may facilitate the analysis of the scan data significantly.
Moser, Othmar; Tschakert, Gerhard; Mueller, Alexander; Groeschl, Werner; Pieber, Thomas R; Koehler, Gerd; Eckstein, Max L; Bracken, Richard M; Hofmann, Peter
2017-06-30
Therapy must be adapted for people with type 1 diabetes to avoid exercise-induced hypoglycemia caused by increased exercise-related glucose uptake into muscles. Therefore, the preexercise short-acting insulin dose must be reduced for safety reasons. We report a case of a man with long-standing type 1 diabetes in whom no blood glucose decrease was found during different types of exercise with varying intensities and modes, despite physiological hormone responses. A Caucasian man diagnosed with type 1 diabetes 24 years earlier performed three different continuous and high-intensity interval cycle ergometer exercise sessions as part of a clinical trial (ClinicalTrials.gov identifier NCT02075567). Intensities for both modes of exercise were set at 5% below and 5% above the first lactate turn point and at 5% below the second lactate turn point. Short-acting insulin doses were reduced by 25%, 50%, and 75%, respectively. Measurements taken included blood glucose, blood lactate, gas exchange, heart rate, adrenaline, noradrenaline, cortisol, glucagon, and insulin-like growth factor-1. Unexpectedly, no significant blood glucose decreases were observed during any of the exercise sessions (start versus end, 12.97 ± 2.12 versus 12.61 ± 2.66 mmol/L, p = 0.259). All hormones showed the expected response, dependent on the different intensities and modes of exercise. People with type 1 diabetes typically experience a decrease in blood glucose levels, particularly during low- and moderate-intensity exercise. In our patient, we clearly found no decline in blood glucose, despite a normal hormone response and no history of any insulin insensitivity. This report indicates that there might be patients for whom the recommended preexercise therapy adaptation to avoid exercise-induced hypoglycemia needs to be questioned, because this could increase the risk of severe hyperglycemia and ketosis.
Schroeder, Lee F; Elbireer, Ali; Jackson, J Brooks; Amukele, Timothy K
2015-01-01
Diagnostic laboratory tests are routinely defined in terms of their sensitivity, specificity, and ease of use. But the actual clinical impact of a diagnostic test also depends on its availability and price. This is especially true in resource-limited settings such as sub-Saharan Africa. We present a first-of-its-kind report of diagnostic test types, availability, and prices in Kampala, Uganda. Test types (identity) and availability were based on menus and volumes obtained from clinical laboratories in late 2011 in Kampala using a standard questionnaire. As a measure of test availability, we used the Availability Index (AI). AI is the combined daily testing volumes of laboratories offering a given test, divided by the combined daily testing volumes of all laboratories in Kampala. Test prices were based on a sampling of prices collected in person and via telephone surveys in 2015. Test volumes and menus were obtained for 95% (907/954) of laboratories in Kampala city. These 907 laboratories offered 100 different test types. The ten most commonly offered tests in decreasing order were Malaria, HCG, HIV serology, Syphilis, Typhoid, Urinalysis, Brucellosis, Stool Analysis, Glucose, and ABO/Rh. In terms of AI, the 100 tests clustered into three groups: high (12 tests), moderate (33 tests), and minimal (55 tests) availability. 50% and 36% of overall availability was provided through private and public laboratories, respectively. Point-of-care laboratories contributed 35% to the AI of high availability tests, but only 6% to the AI of the other tests. The mean price of the most commonly offered test types was $2.62 (range $1.83-$3.46). One hundred different laboratory test types were in use in Kampala in late 2011. Both public and private laboratories were critical to test availability. The tests offered in point-of-care laboratories tended to be the most available tests. Prices of the most common tests ranged from $1.83-$3.46.
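A minimal sketch of the Availability Index (AI) as defined above: the combined daily testing volume of laboratories offering a given test, divided by the combined daily volume of all laboratories. The laboratory menus and volumes below are invented for illustration.

    def availability_index(labs, test):
        total = sum(lab["daily_volume"] for lab in labs)
        offering = sum(lab["daily_volume"] for lab in labs if test in lab["menu"])
        return offering / total

    labs = [
        {"daily_volume": 120, "menu": {"Malaria", "HIV serology", "Glucose"}},
        {"daily_volume": 60,  "menu": {"Malaria", "Urinalysis"}},
        {"daily_volume": 20,  "menu": {"Malaria"}},
    ]
    print(availability_index(labs, "HIV serology"))  # 120/200 = 0.6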
Störmer method for a problem of point injection of charged particles into a magnetic dipole field
NASA Astrophysics Data System (ADS)
Kolesnikov, E. K.
2017-03-01
The problem of point injection of charged particles into a magnetic dipole field was considered. Analytical expressions were obtained by the Störmer method for the regions of allowed momenta of charged particles at arbitrary points of a dipole field for a given position of the point source of particles. It was found that, for a fixed location of the studied point, there was a specific structure of the coordinate space in the form of a set of seven regions, where the injector location in each region corresponded to a definite form of the allowed momentum region at the studied point. It was shown that the boundaries of the allowed regions in four of the mentioned regions were surfaces of revolution of conic sections.
Combining statistical inference and decisions in ecology
Williams, Perry J.; Hooten, Mevin B.
2016-01-01
Statistical decision theory (SDT) is a sub-field of decision theory that formally incorporates statistical investigation into a decision-theoretic framework to account for uncertainties in a decision problem. SDT provides a unifying analysis of three types of information: statistical results from a data set, knowledge of the consequences of potential choices (i.e., loss), and prior beliefs about a system. SDT links the theoretical development of a large body of statistical methods including point estimation, hypothesis testing, and confidence interval estimation. The theory and application of SDT have mainly been developed and published in the fields of mathematics, statistics, operations research, and other decision sciences, but have had limited exposure in ecology. Thus, we provide an introduction to SDT for ecologists and describe its utility for linking the conventionally separate tasks of statistical investigation and decision making in a single framework. We describe the basic framework of both Bayesian and frequentist SDT, its traditional use in statistics, and discuss its application to decision problems that occur in ecology. We demonstrate SDT with two types of decisions: Bayesian point estimation, and an applied management problem of selecting a prescribed fire rotation for managing a grassland bird species. Central to SDT, and decision theory in general, are loss functions. Thus, we also provide basic guidance and references for constructing loss functions for an SDT problem.
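A minimal numerical illustration of the role of loss functions in SDT: the Bayes decision minimizes posterior expected loss, so the optimal point estimate is the posterior mean under squared-error loss but the posterior median under absolute-error loss. The posterior draws below are simulated from an arbitrary skewed distribution.

    import numpy as np

    rng = np.random.default_rng(1)
    posterior = rng.lognormal(mean=0.0, sigma=0.75, size=100_000)  # skewed posterior

    candidates = np.linspace(0.1, 4.0, 400)
    sq_loss  = [np.mean((posterior - a) ** 2) for a in candidates]
    abs_loss = [np.mean(np.abs(posterior - a)) for a in candidates]

    print("argmin squared loss:", candidates[np.argmin(sq_loss)],
          "posterior mean:", posterior.mean())
    print("argmin absolute loss:", candidates[np.argmin(abs_loss)],
          "posterior median:", np.median(posterior))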
Stringy horizons and generalized FZZ duality in perturbation theory
NASA Astrophysics Data System (ADS)
Giribet, Gaston
2017-02-01
We study scattering amplitudes in two-dimensional string theory on a black hole background. We start with a simple derivation of the Fateev-Zamolodchikov-Zamolodchikov (FZZ) duality, which associates correlation functions of the sine-Liouville integrable model on the Riemann sphere to tree-level string amplitudes on the Euclidean two-dimensional black hole. This derivation of FZZ duality is based on perturbation theory, and it relies on a trick originally due to Fateev, which involves duality relations between different Selberg type integrals. This enables us to rewrite the correlation functions of sine-Liouville theory in terms of a special set of correlators in the gauged Wess-Zumino-Witten (WZW) theory, and use this to perform further consistency checks of the recently conjectured Generalized FZZ (GFZZ) duality. In particular, we prove that n-point correlation functions in sine-Liouville theory involving n - 2 winding modes actually coincide with the correlation functions in the SL(2,R)/U(1) gauged WZW model that include n - 2 oscillator operators of the type described by Giveon, Itzhaki and Kutasov in reference [1]. This proves the GFZZ duality for the case of tree level maximally winding violating n-point amplitudes with arbitrary n. We also comment on the connection between GFZZ and other marginal deformations previously considered in the literature.
Aguilar, Carlos A.; Shcherbina, Anna; Ricke, Darrell O.; Pop, Ramona; Carrigan, Christopher T.; Gifford, Casey A.; Urso, Maria L.; Kottke, Melissa A.; Meissner, Alexander
2015-01-01
Traumatic lower-limb musculoskeletal injuries are pervasive amongst athletes and the military and typically an individual returns to activity prior to fully healing, increasing a predisposition for additional injuries and chronic pain. Monitoring healing progression after a musculoskeletal injury typically involves different types of imaging but these approaches suffer from several disadvantages. Isolating and profiling transcripts from the injured site would abrogate these shortcomings and provide enumerative insights into the regenerative potential of an individual’s muscle after injury. In this study, a traumatic injury was administered to a mouse model and healing progression was examined from 3 hours to 1 month using high-throughput RNA-Sequencing (RNA-Seq). Comprehensive dissection of the genome-wide datasets revealed the injured site to be a dynamic, heterogeneous environment composed of multiple cell types and thousands of genes undergoing significant expression changes in highly regulated networks. Four independent approaches were used to determine the set of genes, isoforms, and genetic pathways most characteristic of different time points post-injury and two novel approaches were developed to classify injured tissues at different time points. These results highlight the possibility to quantitatively track healing progression in situ via transcript profiling using high-throughput sequencing. PMID:26381351
NASA Astrophysics Data System (ADS)
Vaughan, Jessica M.; England, John H.; Evans, David J. A.
2014-05-01
Hill-hole pairs, comprising an ice-pushed hill and associated source depression, cluster in a belt along the west coast of Banks Island, NT. Ongoing coastal erosion at Worth Point, southwest Banks Island, has exposed a section (6 km long and ˜30 m high) through an ice-pushed hill that was transported ˜ 2 km from a corresponding source depression to the southeast. The exposed stratigraphic sequence is polydeformed and comprises folded and faulted rafts of Early Cretaceous and Late Tertiary bedrock, a prominent organic raft, Quaternary glacial sediments, and buried glacial ice. Three distinct structural domains can be identified within the stratigraphic sequence that represent proximal to distal deformation in an ice-marginal setting. Complex thrust sequences, interfering fold-sets, brecciated bedrock and widespread shear structures superimposed on this ice-marginally deformed sequence record subsequent deformation in a subglacial shear zone. Analysis of cross-cutting relationships within the stratigraphic sequence combined with OSL dating indicate that the Worth Point hill-hole pair was deformed during two separate glaciotectonic events. Firstly, ice sheet advance constructed the hill-hole pair and glaciotectonized the strata ice-marginally, producing a proximal to distal deformation sequence. A glacioisostatically forced marine transgression resulted in extensive reworking of the strata and the deposition of a glaciomarine diamict. A readvance during this initial stage redeformed the strata in a subglacial shear zone, overprinting complex deformation structures and depositing a glaciotectonite ˜20 m thick. Outwash channels that incise the subglacially deformed strata record a deglacial marine regression, whereas aggradation of glaciofluvial sand and gravel infilling the channels record a subsequent marine transgression. Secondly, a later, largely non-erosive ice margin overrode Worth Point, deforming only the most surficial units in the section and depositing a capping till. The investigation of the Worth Point stratigraphic sequence provides the first detailed description of the internal architecture of a polydeformed hill-hole pair, and as such provides an insight into the formation and evolution of an enigmatic landform. Notably, the stratigraphic sequence documents ice-marginal and subglacial glaciotectonics in permafrost terrain, as well as regional glacial and relative sea level histories. The reinterpreted stratigraphy fundamentally rejects the long-established paleoenvironmental history of Worth Point that assumed a simple ‘layer-cake’ stratigraphy including the type-site for an organically rich, preglacial interval (Worth Point Fm).
Automatic Monitoring of Tunnel Deformation Based on High Density Point Clouds Data
NASA Astrophysics Data System (ADS)
Du, L.; Zhong, R.; Sun, H.; Wu, Q.
2017-09-01
An automated method for tunnel deformation monitoring using high-density point cloud data is presented. Firstly, the 3D point cloud data are converted to a two-dimensional surface by projection onto the XOY plane; the projection point set of the central axis on the XOY plane, denoted Uxoy, is calculated by combining the Alpha Shape algorithm with the RANSAC (Random Sample Consensus) algorithm, and the projection point set of the central axis on the YOZ plane, denoted Uyoz, is then obtained from the highest and lowest points, which are extracted by intersecting the tunnel point cloud with straight lines that pass through each point of Uxoy perpendicular to the two-dimensional surface; Uxoy and Uyoz together form the 3D central axis. Secondly, the buffer of each cross section is calculated by the K-nearest-neighbor algorithm, and the initial cross-sectional point set is quickly constructed by the projection method. Finally, the cross sections are denoised and the section lines are fitted using an iterative ellipse-fitting method. In order to improve the accuracy of the cross section, a fine adjustment method is proposed that rotates the initial sectional plane around the intercept point in the horizontal and vertical directions within the buffer. The proposed method is used in a Shanghai subway tunnel, and the deformation of each section in the direction of 0 to 360 degrees is calculated. The result shows that the cross sections become flattened circles rather than regular circles due to the great pressure at the top of the tunnel.
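A minimal sketch of one common algebraic approach to the section-fitting step: a least-squares fit of a conic A x^2 + B xy + C y^2 + D x + E y = 1 to the cross-section points. The paper's iterative refinement and denoising are omitted, and the test points are synthetic.

    import numpy as np

    def fit_ellipse(xy):
        x, y = xy[:, 0], xy[:, 1]
        M = np.column_stack([x * x, x * y, y * y, x, y])
        coeffs, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
        return coeffs  # (A, B, C, D, E)

    # Synthetic slightly flattened ring, as in a deformed tunnel section.
    t = np.linspace(0, 2 * np.pi, 200)
    noise = 0.005 * np.random.default_rng(2).normal(size=(200, 2))
    pts = np.column_stack([2.75 * np.cos(t), 2.60 * np.sin(t)]) + noise
    A, B, C, D, E = fit_ellipse(pts)
    print(np.sqrt(1 / A), np.sqrt(1 / C))  # ~2.75 and ~2.60 for this centred case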
NASA Astrophysics Data System (ADS)
Xu, Lixin
2012-06-01
In this paper, the holographic dark energy model, where the future event horizon is taken as an IR cutoff, is confronted by using currently available cosmic observational data sets which include type Ia supernovae, baryon acoustic oscillation, and cosmic microwave background radiation from the full information of the WMAP 7-yr data. Via the Markov chain Monte Carlo method, we obtain the value of the model parameter $c = 0.696^{+0.0736\,+0.159\,+0.264}_{-0.0737\,-0.132\,-0.190}$ with 1, 2, 3σ regions. Therefore, one can conclude that at least at the 3σ level the future Universe will be dominated by phantom-like dark energy. This is not consistent with the positive energy condition; however, this condition must be satisfied to derive the holographic bound. This implies that the current cosmic observational data disfavor the holographic dark energy model.
Spinocerebellar ataxia in monozygotic twins.
Anderson, John H; Christova, Peka S; Xie, Ting-dong; Schott, Kelly S; Ward, Kenneth; Gomez, Christopher M
2002-12-01
Although phenotypic heterogeneity in autosomal dominant spinocerebellar ataxia (SCA) has been explained in part by genotypic heterogeneity, clinical observations suggest the influence of additional factors. To demonstrate, quantitate, and localize physiologic abnormalities attributable to nongenetic factors in the development of hereditary SCA. Quantitative assessments of ocular motor function and postural control in 2 sets of identical twins, one with SCA type 2 and the other with episodic ataxia type 2. University laboratory. Saccadic velocity and amplitude, pursuit gain, and dynamic posturography. We found significant differences in saccade velocity, saccade metrics, and postural stability between each monozygotic twin. The differences point to differential involvement between twins of discrete regions in the cerebellum and brainstem. These results demonstrate the presence of quantitative differences in the severity, rate of progression, and regional central nervous system involvement in monozygotic twins with SCA that must be owing to the existence of nongermline or external factors.
Validation and Improvement of SRTM Performance over Rugged Terrain
NASA Technical Reports Server (NTRS)
Zebker, Howard A.
2004-01-01
We have previously reported work related to basic technique development in phase unwrapping and generation of digital elevation models (DEMs). In the final year of this work we applied our technique development to the improvement of DEMs produced by SRTM. In particular, we have developed a rigorous mathematical algorithm and means to fill in missing data over rough terrain from other data sets. We illustrate this method by using a higher resolution, but globally less accurate, DEM produced by the TOPSAR airborne instrument over the Galapagos Islands to augment the SRTM data set in this area. We combine this data set with SRTM, using each set to fill in holes left by the other imaging system. The infilling is done by first interpolating each data set using a prediction-error filter that reproduces the same statistical characterization as exhibited by the entire data set within the interpolated region. After this procedure is implemented on each data set, the two are combined on a point-by-point basis with weights that reflect the accuracy of each data point in its original image. In areas that are better covered by SRTM, TOPSAR data are weighted down but still retain TOPSAR statistics. The reverse is true for regions better covered by TOPSAR. The resulting DEM passes statistical tests and appears quite plausible to the eye, but as this DEM is the best available for the region we cannot fully verify its accuracy. Spot checks with GPS points show that locally the technique results in a more comprehensive and accurate map than either data set alone.
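A minimal sketch of the weighted point-by-point combination step described above (the prediction-error-filter infilling is not shown): two co-registered DEMs, each with voids marked as NaN, are merged with per-pixel weights reflecting the accuracy of each source. The arrays and weights below are invented.

    import numpy as np

    def merge_dems(dem_a, dem_b, w_a, w_b):
        a_ok, b_ok = ~np.isnan(dem_a), ~np.isnan(dem_b)
        wa = np.where(a_ok, w_a, 0.0)          # zero weight where a source has a void
        wb = np.where(b_ok, w_b, 0.0)
        out = np.nan_to_num(dem_a) * wa + np.nan_to_num(dem_b) * wb
        total = wa + wb
        return np.where(total > 0, out / np.maximum(total, 1e-12), np.nan)

    srtm   = np.array([[100.0, np.nan], [102.0, 103.0]])
    topsar = np.array([[100.4, 101.2], [np.nan, 103.6]])
    print(merge_dems(srtm, topsar, w_a=0.7, w_b=0.3))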
Joint classification and contour extraction of large 3D point clouds
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2017-08-01
We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several million points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows, both, to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems. Point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generating a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >10^9 points.
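A minimal sketch, with assumed radii and sizes, of the multi-scale per-point neighbourhood idea: for each point, gather neighbours at several radii with a k-d tree and derive covariance-eigenvalue features that could feed a point-wise classifier. This is an illustration of the general technique, not the paper's exact feature set.

    import numpy as np
    from scipy.spatial import cKDTree

    def multiscale_features(points, radii=(0.5, 1.0, 2.0)):
        tree = cKDTree(points)
        feats = []
        for r in radii:
            per_scale = []
            for nbrs in tree.query_ball_point(points, r):
                if len(nbrs) < 3:                 # too few neighbours at this scale
                    per_scale.append([0.0, 0.0, 0.0])
                    continue
                ev = np.linalg.eigvalsh(np.cov(points[nbrs].T))  # ascending
                per_scale.append(ev / max(ev.sum(), 1e-12))      # shape cues
            feats.append(per_scale)
        return np.concatenate(feats, axis=1)  # (N, 3 * len(radii))

    pts = np.random.default_rng(3).uniform(0, 5, size=(2000, 3))
    print(multiscale_features(pts).shape)  # (2000, 9)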
Conceptual framework for holistic dialysis management based on key performance indicators.
Liu, Hu-Chen; Itoh, Kenji
2013-10-01
This paper develops a theoretical framework of holistic hospital management based on performance indicators that can be applied to dialysis hospitals, clinics or departments in Japan. Selection of a key indicator set and its validity tests were performed primarily by a questionnaire survey to dialysis experts as well as their statements obtained through interviews. The expert questionnaire asked respondents to rate the degree of "usefulness" for each of 66 indicators on a three-point scale (19 responses collected). Applying the theoretical framework, we selected a minimum set of key performance indicators for dialysis management that can be used in the Japanese context. The indicator set comprised 27 indicators and items that will be collected through three surveys: patient satisfaction, employee satisfaction, and safety culture. The indicators were confirmed by expert judgment from viewpoints of face, content and construct validity as well as their usefulness. This paper established a theoretical framework of performance measurement for holistic dialysis management from primary healthcare stakeholders' perspectives. In this framework, performance indicators were largely divided into healthcare outcomes and performance shaping factors. Indicators of the former type may be applied for the detection of operational problems or weaknesses in a dialysis hospital, clinic or department, while latent causes of each problem can be more effectively addressed by the latter type of indicators in terms of process, structure and culture/climate within the organization. © 2013 The Authors. Therapeutic Apheresis and Dialysis © 2013 International Society for Apheresis.
Quantifying inhomogeneity in fractal sets
NASA Astrophysics Data System (ADS)
Fraser, Jonathan M.; Todd, Mike
2018-04-01
An inhomogeneous fractal set is one which exhibits different scaling behaviour at different points. The Assouad dimension of a set is a quantity which finds the ‘most difficult location and scale’ at which to cover the set, and its difference from the box dimension can be thought of as a first-level overall measure of how inhomogeneous the set is. For the next level of analysis, we develop a quantitative theory of inhomogeneity by considering the measure of the set of points around which the set exhibits a given level of inhomogeneity at a certain scale. For a set of examples, a family of (×m, ×n)-invariant subsets of the 2-torus, we show that this quantity satisfies a large deviations principle. We compare members of this family, demonstrating how the rate function gives us a deeper understanding of their inhomogeneity.
SEMANTIC3D.NET: A New Large-Scale Point Cloud Classification Benchmark
NASA Astrophysics Data System (ADS)
Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J. D.; Schindler, K.; Pollefeys, M.
2017-05-01
This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a work horse, which already show remarkable performance improvements over the state of the art. CNNs have become the de-facto standard for many tasks in computer vision and machine learning like semantic segmentation or object detection in images, but have not yet led to a true breakthrough for 3D point cloud labelling tasks due to lack of training data. With the massive data set presented in this paper, we aim at closing this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides more dense and complete point clouds with a much higher overall number of labelled points compared to those already available to the research community. We further provide baseline method descriptions and comparison between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.
Lithium-ion drifting: Application to the study of point defects in floating-zone silicon
NASA Technical Reports Server (NTRS)
Walton, J. T.; Wong, Y. K.; Zulehner, W.
1997-01-01
The use of lithium-ion (Li(+)) drifting to study the properties of point defects in p-type Floating-Zone (FZ) silicon crystals is reported. The Li(+) drift technique is used to detect the presence of vacancy-related defects (D defects) in certain p-type FZ silicon crystals. SUPREM-IV modeling suggests that the silicon point defect diffusivities are considerably higher than those commonly accepted, but are in reasonable agreement with values recently proposed. These results demonstrate the utility of Li(+) drifting in the study of silicon point defect properties in p-type FZ crystals. Finally, a straightforward measurement of the Li(+) compensation depth is shown to yield estimates of the vacancy-related defect concentration in p-type FZ crystals.
Common fixed point theorems for maps under a contractive condition of integral type
NASA Astrophysics Data System (ADS)
Djoudi, A.; Merghadi, F.
2008-05-01
Two common fixed point theorems for mappings of a complete metric space satisfying a general contractive inequality of integral type and minimal commutativity conditions are proved. These results extend and improve several previous results, particularly Theorem 4 of Rhoades [B.E. Rhoades, Two fixed point theorems for mappings satisfying a general contractive condition of integral type, Int. J. Math. Math. Sci. 63 (2003) 4007-4013] and Theorem 4 of Sessa [S. Sessa, On a weak commutativity condition of mappings in fixed point considerations, Publ. Inst. Math. (Beograd) (N.S.) 32 (46) (1982) 149-153].
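For readers unfamiliar with the contractive condition in question, the integral-type inequality of Branciari, as generalized in the Rhoades paper cited above, has the following shape, with phi a nonnegative Lebesgue-integrable function whose integral over every [0, ε], ε > 0, is positive:

    \int_{0}^{d(fx,\,fy)} \varphi(t)\,\mathrm{d}t
        \;\le\; c \int_{0}^{m(x,y)} \varphi(t)\,\mathrm{d}t,
    \qquad 0 \le c < 1,

    m(x,y) = \max\Bigl\{ d(x,y),\; d(x,fx),\; d(y,fy),\;
        \tfrac{1}{2}\bigl[ d(x,fy) + d(y,fx) \bigr] \Bigr\}.

Taking phi identically 1 recovers the classical Ćirić-type quasi-contraction, which is why results of this kind extend the earlier fixed point theorems.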
NASA Astrophysics Data System (ADS)
Roelfsema, Chris M.; Kovacs, Eva M.; Phinn, Stuart R.
2015-08-01
This paper describes seagrass species and percentage cover point-based field data sets derived from georeferenced photo transects. Annually or biannually over a ten-year period (2004-2014), data sets were collected using 30-50 transects, 500-800 m in length, distributed across a 142 km2 shallow, clear-water seagrass habitat, the Eastern Banks, Moreton Bay, Australia. Each of the eight data sets includes seagrass property information derived from approximately 3000 georeferenced, downward-looking photographs captured at 2-4 m intervals along the transects. Photographs were manually interpreted to estimate seagrass species composition and percentage cover (Coral Point Count with Excel extensions; CPCe). Understanding seagrass biology, ecology and dynamics for scientific and management purposes requires point-based data on species composition and cover. This data set, and the methods used to derive it, are a globally unique example for seagrass ecological applications. It provides the basis for multiple further studies at this site, for regional to global comparative studies, and for the design of similar monitoring programs elsewhere.
Barasz, Kate; John, Leslie K; Keenan, Elizabeth A; Norton, Michael I
2017-10-01
Pseudo-set framing, arbitrarily grouping items or tasks together as part of an apparent "set", motivates people to reach perceived completion points. Pseudo-set framing changes gambling choices (Study 1), effort (Studies 2 and 3), giving behavior (Field Data and Study 4), and purchase decisions (Study 5). These effects persist in the absence of any reward, when a cost must be incurred, and after participants are explicitly informed of the arbitrariness of the set. Drawing on Gestalt psychology, we develop a conceptual account that predicts what will, and will not, act as a pseudo-set, and defines the psychological process through which these pseudo-sets affect behavior: over and above typical reference points, pseudo-set framing alters perceptions of (in)completeness, making intermediate progress seem less complete. In turn, these feelings of incompleteness motivate people to persist until the pseudo-set has been fulfilled. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Schlossberg, David J. [University of Wisconsin-Madison] (ORCID:0000000287139448); Bodner, Grant M. [University of Wisconsin-Madison] (ORCID:0000000324979172); Reusch, Joshua A. [University of Wisconsin-Madison] (ORCID:0000000284249422); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000000231609746); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000000294386762); Rodriguez Sanchez, Cuauhtemoc [University of Wisconsin-Madison] (ORCID:0000000334712586)
2016-09-16
This public data set contains openly-documented, machine readable digital research data corresponding to figures published in D.J. Schlossberg et al., 'A Novel, Cost-Effective, Multi-Point Thomson Scattering System on the Pegasus Toroidal Experiment,' Rev. Sci. Instrum. 87, 11E403 (2016).
Two-stage fan. 4: Performance data for stator setting angle optimization
NASA Technical Reports Server (NTRS)
Burger, G. D.; Keenan, M. J.
1975-01-01
Stator setting angle optimization tests were conducted on a two-stage fan to improve efficiency at overspeed, stall margin at design speed, and both efficiency and stall margin at part speed. The fan has a design pressure ratio of 2.8, a flow rate of 184.2 lb/sec (83.55 kg/sec), and a 1st-stage rotor tip speed of 1450 ft/sec (441.96 m/sec). Performance was obtained at 70, 100, and 105 percent of design speed with different combinations of 1st-stage and 2nd-stage stator settings. One combination of settings, other than design, was common to all three speeds. At design speed, a 2.0 percentage point increase in stall margin was obtained at the expense of a 1.3 percentage point efficiency decrease. At 105 percent speed, efficiency was improved by 1.8 percentage points but stall margin decreased 4.7 percentage points. At 70 percent speed, no change in stall margin or operating-line efficiency was obtained with stator resets, although considerable speed-flow regulation occurred.
Zhang, Feifan; Bhattacharya, Abhishek; Nelson, Jessica C; Abe, Namiko; Gordon, Patricia; Lloret-Fernandez, Carla; Maicas, Miren; Flames, Nuria; Mann, Richard S; Colón-Ramos, Daniel A; Hobert, Oliver
2014-01-01
Transcription factors that drive neuron type-specific terminal differentiation programs in the developing nervous system are often expressed in several distinct neuronal cell types, but to what extent they have similar or distinct activities in individual neuronal cell types is generally not well explored. We investigate this problem using, as a starting point, the C. elegans LIM homeodomain transcription factor ttx-3, which acts as a terminal selector to drive the terminal differentiation program of the cholinergic AIY interneuron class. Using a panel of different terminal differentiation markers, including neurotransmitter synthesizing enzymes, neurotransmitter receptors and neuropeptides, we show that ttx-3 also controls the terminal differentiation program of two additional, distinct neuron types, namely the cholinergic AIA interneurons and the serotonergic NSM neurons. We show that the type of differentiation program that is controlled by ttx-3 in different neuron types is specified by a distinct set of collaborating transcription factors. One of the collaborating transcription factors is the POU homeobox gene unc-86, which collaborates with ttx-3 to determine the identity of the serotonergic NSM neurons. unc-86 in turn operates independently of ttx-3 in the anterior ganglion where it collaborates with the ARID-type transcription factor cfi-1 to determine the cholinergic identity of the IL2 sensory and URA motor neurons. In conclusion, transcription factors operate as terminal selectors in distinct combinations in different neuron types, defining neuron type-specific identity features.
An integrated set of UNIX based system tools at control room level
NASA Astrophysics Data System (ADS)
Potepan, F.; Scafuri, C.; Bortolotto, C.; Surace, G.
1994-12-01
The design effort of providing a simple point-and-click approach to equipment access has led to the definition and realization of a modular set of software tools to be used at the ELETTRA control room level. Point-to-point equipment access requires neither programming nor specific knowledge of the control system architecture. The development and integration of the communication, graphic, editing and global database modules are described in depth, followed by a report on their use during the first commissioning period.
Marrero-Ponce, Yovani; Medina-Marrero, Ricardo; Castillo-Garit, Juan A; Romero-Zaldivar, Vicente; Torrens, Francisco; Castro, Eduardo A
2005-04-15
A novel approach to bio-macromolecular design from a linear algebra point of view is introduced. A protein's total (whole-protein) and local (one or more amino acids) linear indices are a new set of bio-macromolecular descriptors of relevance to protein QSAR/QSPR studies. These amino-acid-level biochemical descriptors are based on the calculation of linear maps on R^n [f_k(x_mi): R^n --> R^n] in the canonical basis. These bio-macromolecular indices are calculated from the kth power of the macromolecular pseudograph alpha-carbon atom adjacency matrix. Total linear indices are linear functionals on R^n; that is, the kth total linear indices are linear maps from R^n to the scalar field R [f_k(x_m): R^n --> R]. Thus, the kth total linear indices are calculated by summing the amino-acid linear indices of all amino acids in the protein molecule. A study of the protein stability effects for a complete set of alanine substitutions in the Arc repressor illustrates this approach. A quantitative model that discriminates near-wild-type-stability alanine mutants from reduced-stability ones in a training series was obtained. This model permitted the correct classification of 97.56% (40/41) and 91.67% (11/12) of proteins in the training and test sets, respectively. It shows a high Matthews correlation coefficient (MCC = 0.952) for the training set and MCC = 0.837 for the external prediction set. Additionally, canonical regression analysis corroborated the statistical quality of the classification model (R_canc = 0.824). This analysis was also used to compute biological stability canonical scores for each Arc alanine mutant. On the other hand, the piecewise linear regression model compared favorably with the linear regression one in predicting the melting temperature (tm) of the Arc alanine mutants. The linear model explains almost 81% of the variance of the experimental tm (R = 0.90 and s = 4.29), and the LOO PRESS statistics evidenced its predictive ability (q^2 = 0.72 and s_cv = 4.79). Moreover, the TOMOCOMD-CAMPS method produced a piecewise linear regression (R = 0.97) between protein backbone descriptors and tm values for alanine mutants of the Arc repressor. A break-point value of 51.87 degrees C characterized two mutant clusters and coincided perfectly with the experimental scale. For this reason, we can use the linear discriminant analysis and piecewise models in combination to classify and predict the stability of the mutant Arc homodimers. These models also permitted the interpretation of the driving forces of such a folding process, indicating that topologic/topographic protein backbone interactions control the stability profile of wild-type Arc and its alanine mutants.
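As a concrete illustration of the adjacency-matrix-power construction described above, here is a minimal Python sketch with an invented 4-residue chain and invented property values; the names M and x are assumptions, not the paper's notation. The kth local indices are the entries of M^k x and the kth total index is their sum.

    import numpy as np

    M = np.array([[0, 1, 0, 0],      # toy 4-residue chain: adjacency of the
                  [1, 0, 1, 0],      # alpha-carbon pseudograph
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    x = np.array([0.07, 0.61, 0.37, 1.95])   # invented amino-acid property values

    for k in range(3):
        local = np.linalg.matrix_power(M, k) @ x  # kth local (amino-acid) indices
        print(k, local, local.sum())              # ...and the kth total index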
A Voxel-Based Filtering Algorithm for Mobile LIDAR Data
NASA Astrophysics Data System (ADS)
Qin, H.; Guan, G.; Yu, Y.; Zhong, L.
2018-04-01
This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points are first partitioned in the xy-plane into a set of two-dimensional (2-D) blocks with a given block size, in each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward-growing process is performed to roughly separate terrain from non-terrain points with global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. The voxel-based filtering algorithm is comprehensively assessed through analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
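A minimal sketch of the first partitioning step described above: points are binned into 2-D blocks and then into 3-D voxels by integer division of their coordinates. The block and voxel sizes are assumed values, and the upward-growing and curvature refinement stages are not shown.

    import numpy as np
    from collections import defaultdict

    def voxelize(points, block_size=20.0, voxel_size=0.5):
        blocks = defaultdict(lambda: defaultdict(list))
        block_ids = np.floor(points[:, :2] / block_size).astype(int)
        voxel_ids = np.floor(points / voxel_size).astype(int)
        for i, (b, v) in enumerate(zip(map(tuple, block_ids), map(tuple, voxel_ids))):
            blocks[b][v].append(i)
        return blocks  # block -> voxel -> point indices, ready for upward growing

    pts = np.random.default_rng(4).uniform(0, 40, size=(1000, 3))
    blocks = voxelize(pts)
    print(len(blocks), "blocks,", sum(len(v) for v in blocks.values()), "voxels")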
FPFH-based graph matching for 3D point cloud registration
NASA Astrophysics Data System (ADS)
Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua
2018-04-01
Correspondence detection is a vital step in point cloud registration, and it helps to obtain a reliable initial alignment. In this paper, we put forward an advanced point-feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial possible correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by the simulated annealing algorithm to obtain the final group of correct correspondences. Finally, we present a novel set partitioning method which can transform the NP-hard optimization problem into an O(n^3)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method obtains better results in terms of both accuracy and time cost compared with other point cloud registration methods.
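A minimal sketch, using the Open3D library, of the first step described above: computing FPFH descriptors and collecting nearest-neighbour feature matches as initial candidate correspondences. The graph-matching and set-partitioning stages of the paper are not shown, and the radii are assumed values, not the paper's settings.

    import open3d as o3d

    def fpfh(pcd, radius=0.25):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=radius, max_nn=30))
        return o3d.pipelines.registration.compute_fpfh_feature(
            pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=radius * 2, max_nn=100))

    def candidate_correspondences(src, dst):
        """Nearest-neighbour matches in FPFH feature space (src index, dst index)."""
        f_src, f_dst = fpfh(src), fpfh(dst)
        tree = o3d.geometry.KDTreeFlann(f_dst)
        return [(i, tree.search_knn_vector_xd(f_src.data[:, i], 1)[1][0])
                for i in range(f_src.data.shape[1])]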