Spline curve matching with sparse knot sets
Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman
2004-01-01
This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on the relative locations of corresponding knot points and is therefore reliable primarily for dense point sets, we use the deformation energy of a thin-plate-spline mapping between sparse knot points and normalized local...
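The thin-plate-spline deformation energy this abstract relies on can be sketched numerically. Below is a minimal NumPy version of the textbook TPS bending-energy computation between two sparse knot sets; it follows the standard formulation, not the authors' implementation, and all names are ours.

```python
import numpy as np

def tps_bending_energy(src, dst):
    """Bending energy of the thin-plate-spline map taking src to dst.

    src, dst: (n, 2) arrays of corresponding knot points (n >= 4,
    not all collinear). Returns the standard TPS integral bending
    energy, which is zero for any purely affine deformation.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    # Kernel U(r) = r^2 log r, written via squared distances d2.
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d2 > 0.0, 0.5 * d2 * np.log(d2), 0.0)
    P = np.hstack([np.ones((n, 1)), src])          # affine basis [1, x, y]
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    V = np.zeros((n + 3, 2))
    V[:n] = dst
    W = np.linalg.solve(L, V)[:n]                  # non-affine coefficients
    return float(np.trace(W.T @ K @ W))
```

Because the energy vanishes for affine maps, it measures only the non-rigid part of the deformation, which is what makes it usable with sparse knot sets where a residual-error criterion breaks down.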
Pippel, Kristina; Meinck, M; Lübke, N
2017-06-01
Mobile geriatric rehabilitation can be provided in the setting of nursing homes, short-term care (STC) facilities and exclusively in private homes. This study analyzed the common features and differences of mobile rehabilitation interventions in various settings. Stratified by setting, 1,879 anonymized mobile geriatric rehabilitation treatments carried out between 2011 and 2014 at 11 participating institutions were analyzed with respect to patient, process and outcome-related features. Significant differences between the settings nursing home (n = 514, 27 %), STC (n = 167, 9 %) and private homes (n = 1,198, 64 %) were evident for mean age (83 years, 83 years and 80 years, respectively), percentage of women (72 %, 64 % and 55 %), degree of dependency on pre-existing care (92 %, 76 % and 64 %), total treatment sessions (TS; 38 TS, 42 TS and 41 TS), treatment duration (54 days, 61 days and 58 days), the Barthel index at the start of rehabilitation (34 points, 39 points and 46 points) and the gain in the Barthel index (15 points, 21 points and 18 points); the gain in the capacity for self-sufficiency was significant in all settings. The setting-specific evaluation of mobile geriatric rehabilitation showed differences for relevant patient, process and outcome-related features. Compared to inpatient rehabilitation, mobile rehabilitation in all settings made an above-average contribution to the rehabilitation of patients with pre-existing dependency on care. The gains in the capacity for self-sufficiency achieved in all settings support the efficacy of mobile geriatric rehabilitation under the current prerequisites for applicability.
Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman
2003-01-01
Splines can be used to approximate noisy data with a few control points. This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on the relative locations of corresponding knot points and is therefore reliable primarily for dense point sets, we use the deformation energy of...
The Building America Indoor Temperature and Humidity Measurement Protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metzger, C.; Norton, Paul
2014-02-01
When modeling homes using simulation tools, the heating and cooling set points can have a significant impact on home energy use. Every four years, the Energy Information Administration (EIA) Residential Energy Consumption Survey (RECS) asks homeowners about their heating and cooling set points. Unfortunately, no temperature data is measured, and most of the time, the homeowner may be guessing at this number. Even one degree Fahrenheit difference in heating set point can make a 5% difference in heating energy use! So, the survey-based RECS data cannot be used as the definitive reference for the set point for the "average occupant" in simulations. The purpose of this document is to develop a protocol for collecting consistent data for heating/cooling set points and relative humidity so that an average set point can be determined for asset energy models in residential buildings. This document covers the decision making process for researchers to determine how many sensors should be placed in each home, where to put those sensors, and what kind of asset data should be taken while they are in the home. The authors attempted to design the protocols to maximize the value of this study and minimize the resources required to achieve that value.
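The 5%-per-degree figure quoted above can be turned into a rough rule of thumb. Compounding it across a set-point difference (our extrapolation for illustration, not the report's model) shows why a few degrees of survey error in reported set points matters for simulation results:

```python
# Illustrative only: the protocol cites roughly a 5% change in heating
# energy per 1 degree F of heating set point. Compounding that rule of
# thumb (an assumption, not the report's simulation) across a set-point
# difference gives a feel for the stakes.

def heating_energy_ratio(setpoint_f, reference_f, pct_per_degree=0.05):
    """Approximate heating-energy multiplier vs. a reference set point."""
    return (1.0 + pct_per_degree) ** (setpoint_f - reference_f)

ratio = heating_energy_ratio(72, 68)
print(round(ratio, 3))  # 1.216: roughly 22% more heating energy at 72 F
```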
Guided discovery of the nine-point circle theorem and its proof
NASA Astrophysics Data System (ADS)
Buchbinder, Orly
2018-01-01
The nine-point circle theorem is one of the most beautiful and surprising theorems in Euclidean geometry. It establishes the existence of a circle passing through nine points, all of which are related to a single triangle. This paper describes a set of instructional activities that can help students discover the nine-point circle theorem through investigation in a dynamic geometry environment, and consequently prove it using a method of guided discovery. The paper concludes with a variety of suggestions for the ways in which the whole set of activities can be implemented in geometry classrooms.
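The theorem invites a quick numerical check, a natural warm-up for the dynamic-geometry investigation the paper describes. The sketch below (our construction, using standard Euclidean formulas) builds the nine points for an arbitrary triangle together with the circle that should contain them:

```python
import numpy as np

def nine_point_circle(A, B, C):
    """Nine notable points of triangle ABC and the circle through them.

    Returns (points, center, radius): the 3 side midpoints, the 3 feet
    of the altitudes, and the 3 midpoints from each vertex to the
    orthocenter, plus the nine-point center and radius (half the
    circumradius).
    """
    A, B, C = (np.asarray(p, float) for p in (A, B, C))

    # Circumcenter O: equidistant from A, B, C (two linear equations).
    M = 2.0 * np.array([B - A, C - A])
    rhs = np.array([B @ B - A @ A, C @ C - A @ A])
    O = np.linalg.solve(M, rhs)
    H = A + B + C - 2.0 * O          # orthocenter, via a vector identity

    def foot(P, Q, R):
        # Foot of the perpendicular from P onto line QR.
        d = R - Q
        return Q + ((P - Q) @ d) / (d @ d) * d

    pts = [
        (A + B) / 2, (B + C) / 2, (C + A) / 2,        # side midpoints
        foot(A, B, C), foot(B, C, A), foot(C, A, B),  # altitude feet
        (A + H) / 2, (B + H) / 2, (C + H) / 2,        # Euler points
    ]
    center = (O + H) / 2.0           # nine-point center
    radius = np.linalg.norm(A - O) / 2.0
    return pts, center, radius
```

Running it on any non-degenerate triangle and checking that all nine distances to the center equal the radius is exactly the kind of empirical evidence the guided-discovery activities aim to elicit before the proof.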
Clinical Management of Heat-Related Illnesses
2012-01-01
...rhabdomyolysis and multiorgan dysfunction syndrome, and it may result in death from overwhelming cell necrosis caused by a lethal heat-shock exposure... acetaminophen lower Tco by normalizing the elevated hypothalamic set point that is caused by pyrogens; in heatstroke, the set point is normal, with
Henwood, Patricia C; Mackenzie, David C; Rempell, Joshua S; Murray, Alice F; Leo, Megan M; Dean, Anthony J; Liteplo, Andrew S; Noble, Vicki E
2014-09-01
The value of point-of-care ultrasound education in resource-limited settings is increasingly recognized, though little guidance exists on how to best construct a sustainable training program. Herein we offer a practical overview of core factors to consider when developing and implementing a point-of-care ultrasound education program in a resource-limited setting. Considerations include analysis of needs assessment findings, development of locally relevant curriculum, access to ultrasound machines and related technological and financial resources, quality assurance and follow-up plans, strategic partnerships, and outcomes measures. Well-planned education programs in these settings increase the potential for long-term influence on clinician skills and patient care. Copyright © 2014 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
Local, smooth, and consistent Jacobi set simplification
Bhatia, Harsh; Wang, Bei; Norgard, Gregory; ...
2014-10-31
The relation between two Morse functions defined on a smooth, compact, and orientable 2-manifold can be studied in terms of their Jacobi set. The Jacobi set contains the points in the domain where the gradients of the two functions are aligned. Both the Jacobi set itself and the segmentation of the domain it induces have been shown to be useful in various applications. In practice, unfortunately, functions often contain noise and discretization artifacts, causing their Jacobi set to become unmanageably large and complex. Although there exist techniques to simplify Jacobi sets, they are unsuitable for most applications as they lack fine-grained control over the process and heavily restrict the type of simplifications possible. In this paper, we introduce a new framework that generalizes critical point cancellations in scalar functions to Jacobi sets in two dimensions. We present a new interpretation of Jacobi set simplification based on the perspective of domain segmentation. Generalizing the cancellation of critical points from scalar functions to Jacobi sets, we focus on simplifications that can be realized by smooth approximations of the corresponding functions, and show how these cancellations imply simultaneous simplification of contiguous subsets of the Jacobi set. Using these extended cancellations as atomic operations, we introduce an algorithm to successively cancel subsets of the Jacobi set with minimal modifications to some user-defined metric. We show that for simply connected domains, our algorithm reduces a given Jacobi set to its minimal configuration, that is, one with no birth-death points (a birth-death point is a specific type of singularity within the Jacobi set where the level sets of the two functions and the Jacobi set have a common normal direction).
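The definition the abstract starts from, points where the two gradients align, is easy to approximate on a grid. The sketch below is a crude discretization for illustration only (it ignores the noise and simplification machinery that is the paper's actual contribution; tolerance handling is our choice):

```python
import numpy as np

def jacobi_mask(f, g, tol=1e-3):
    """Grid points where the gradients of f and g are (nearly) aligned.

    f, g: 2D arrays sampling two functions on the same grid. A point
    belongs to the approximate Jacobi set when the 2D cross product of
    the two gradients vanishes, i.e. the gradients are parallel (or one
    of them is zero).
    """
    fy, fx = np.gradient(f)
    gy, gx = np.gradient(g)
    cross = fx * gy - fy * gx                    # det [grad f | grad g]
    scale = np.hypot(fx, fy) * np.hypot(gx, gy)  # for a relative tolerance
    return np.abs(cross) <= tol * np.maximum(scale, 1e-12)

# Example: f = x and g = x^2 + y^2 have aligned gradients exactly on
# the x-axis, so the mask should pick out the row y = 0.
ys, xs = np.mgrid[-1.0:1.0:41j, -1.0:1.0:41j]
mask = jacobi_mask(xs, xs ** 2 + ys ** 2)
```

On noisy data a mask like this becomes a dense tangle, which is precisely the motivation for the controlled simplification framework the paper introduces.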
Ocular stability and set-point adaptation
Jareonsettasin, P.; Leigh, R. J.
2017-01-01
A fundamental challenge to the brain is how to prevent intrusive movements when quiet is needed. Unwanted limb movements such as tremor impair fine motor control, and unwanted eye drifts such as nystagmus impair vision. A stable platform is also necessary to launch accurate movements. Accordingly, nature has designed control systems with agonist (excitation) and antagonist (inhibition) muscle pairs functioning in push-pull, around a steady level of balanced tonic activity, the set-point. Sensory information can be organized similarly, as in the vestibulo-ocular reflex, which generates eye movements that compensate for head movements. The semicircular canals, working in coplanar pairs, one in each labyrinth, are reciprocally excited and inhibited as they transduce head rotations. The relative change in activity is relayed to the vestibular nuclei, which operate around a set-point of stable balanced activity. When a pathological imbalance occurs, producing unwanted nystagmus without head movement, an adaptive mechanism restores the proper set-point and eliminates the nystagmus. Here we used 90 min of continuous labyrinthine stimulation in a 7 T magnetic field (magnetic vestibular stimulation, MVS) in normal humans to produce sustained nystagmus simulating vestibular imbalance. We identified multiple time-scale adaptation processes towards a new zero set-point, showing that MVS is an excellent paradigm to investigate the neurobiology of set-point adaptation. This article is part of the themed issue 'Movement suppression: brain mechanisms for stopping and stillness'. PMID:28242733
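The multiple-time-scale adaptation the authors identify can be illustrated with a toy two-rate model. This is our invention for illustration, not the model fitted in the paper: a constant imbalance appears at t = 0, and a fast leaky state and a slow retentive state each learn to cancel it, so the residual drift decays on two time scales.

```python
def simulate(imbalance=10.0, steps=1000):
    """Residual drift at each step under a toy two-rate adaptation.

    All rates are arbitrary stand-ins chosen only to separate the two
    time scales; the slow state retains perfectly, so it gradually
    takes over the correction and re-zeroes the set-point.
    """
    fast = slow = 0.0
    residual = []
    for _ in range(steps):
        error = imbalance - (fast + slow)   # drive still uncancelled
        residual.append(error)
        fast = 0.95 * fast + 0.20 * error   # fast: learns quickly, leaky
        slow = 1.00 * slow + 0.02 * error   # slow: learns slowly, retains
    return residual

r = simulate()
# r falls quickly at first (fast state), then creeps toward zero as the
# slow state re-establishes the set-point.
```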
Normalization of relative and incomplete temporal expressions in clinical narratives.
Sun, Weiyi; Rumshisky, Anna; Uzuner, Ozlem
2015-09-01
To improve the normalization of relative and incomplete temporal expressions (RI-TIMEXes) in clinical narratives. We analyzed the RI-TIMEXes in temporally annotated corpora and proposed two hypotheses regarding the normalization of RI-TIMEXes in the clinical narrative domain: the anchor point hypothesis and the anchor relation hypothesis. We annotated the RI-TIMEXes in three corpora to study the characteristics of RI-TIMEXes in different domains. This informed the design of our RI-TIMEX normalization system for the clinical domain, which consists of an anchor point classifier, an anchor relation classifier, and a rule-based RI-TIMEX text span parser. We experimented with different feature sets and performed an error analysis for each system component. The annotation confirmed the hypotheses that we can simplify the RI-TIMEX normalization task using two multi-label classifiers. Our system achieves anchor point classification, anchor relation classification, and rule-based parsing accuracy of 74.68%, 87.71%, and 57.2% (82.09% under relaxed matching criteria), respectively, on the held-out test set of the 2012 i2b2 temporal relation challenge. Experiments with feature sets revealed some interesting findings, such as that the verbal tense feature does not inform the anchor relation classification in clinical narratives as much as the tokens near the RI-TIMEX. Error analysis showed that underrepresented anchor point and anchor relation classes are difficult to detect. We formulate the RI-TIMEX normalization problem as a pair of multi-label classification problems. Considering only RI-TIMEX extraction and normalization, the system achieves statistically significant improvement over the RI-TIMEX results of the best systems in the 2012 i2b2 challenge. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
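The decomposition the paper describes, pick an anchor point, pick a relation to it, then compose a date, can be shown with a deliberately tiny toy. Everything below (the anchor labels, the keyword rules, the day-only arithmetic) is invented for the sketch and is far simpler than the authors' trained classifiers and span parser:

```python
from datetime import date, timedelta

ANCHOR_RULES = {            # token -> likely anchor (assumed labels)
    "admission": "ADMISSION",
    "discharge": "DISCHARGE",
    "postoperative": "OPERATION",
}

def classify_anchor(tokens):
    """Toy anchor-point 'classifier': keyword lookup with a default."""
    for t in tokens:
        if t in ANCHOR_RULES:
            return ANCHOR_RULES[t]
    return "PREVIOUS_TIMEX"  # default anchor, following narrative order

def normalize(tokens, anchors):
    """Resolve e.g. 'two days after admission' against anchor dates.

    Only handles day units and small number words; real RI-TIMEXes need
    the full span parser described in the abstract.
    """
    words_to_num = {"one": 1, "two": 2, "three": 3}
    n = next((words_to_num[t] for t in tokens if t in words_to_num), 0)
    sign = -1 if "before" in tokens else 1       # toy anchor relation
    return anchors[classify_anchor(tokens)] + timedelta(days=sign * n)

anchors = {"ADMISSION": date(2012, 3, 1), "DISCHARGE": date(2012, 3, 8),
           "OPERATION": date(2012, 3, 2), "PREVIOUS_TIMEX": date(2012, 3, 5)}
print(normalize("two days after admission".split(), anchors))  # 2012-03-03
```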
Miller, Stephan W.
1981-01-01
A second set of related problems deals with how this format and other representations of spatial entities, such as vector formats for point and line features, can be interrelated for manipulation, retrieval, and analysis by a spatial database management subsystem. Methods have been developed for interrelating areal data sets in the raster format with point and line data in a vector format and these are described.
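One minimal form of the raster/vector interrelation described is mapping a vector point feature into the raster cell that contains it via the raster's georeferencing parameters. The sketch below assumes a simple north-up raster; the affine parameters and the convention (origin at the top-left corner, rows increasing southward) are illustrative assumptions, not the subsystem described in the record:

```python
def world_to_cell(x, y, origin_x, origin_y, cell_size):
    """Row/col of the raster cell containing vector point (x, y).

    Assumes a north-up raster: origin at the top-left corner, rows
    increasing southward, square cells of side cell_size.
    """
    col = int((x - origin_x) // cell_size)
    row = int((origin_y - y) // cell_size)
    return row, col

# Point (103.7, 96.2) in a 10 m raster whose top-left corner is (100, 100):
print(world_to_cell(103.7, 96.2, 100.0, 100.0, 10.0))  # (0, 0)
```

With this correspondence in hand, point and line features stored in vector form can be retrieved and analyzed against the raster cells they fall in, which is the kind of interrelation the record refers to.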
Teaching Light Compensation Point: A New Practical Approach.
ERIC Educational Resources Information Center
Aston, T. J.; Robinson, G.
1986-01-01
Describes a simple method for measuring respiration, net photosynthesis, and compensation points of plants in relation to light intensity. Outlines how the method can be used in teaching physiological adaptation. Includes a set of the experiment's results. (ML)
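The data handling behind a compensation-point measurement is a zero crossing: the light intensity at which net photosynthesis changes sign. The sketch below (our illustrative interpolation, not the article's apparatus or data) finds it from a set of readings:

```python
def compensation_point(intensities, net_exchange):
    """Light intensity at which net photosynthesis crosses zero.

    Linear interpolation between the last non-positive and first
    positive net-exchange readings; inputs must be sorted by intensity.
    Returns None if there is no crossing.
    """
    pairs = list(zip(intensities, net_exchange))
    for (i0, n0), (i1, n1) in zip(pairs, pairs[1:]):
        if n0 <= 0.0 <= n1:
            return i0 + (0.0 - n0) * (i1 - i0) / (n1 - n0)
    return None

# Example readings: net CO2 exchange vs light intensity (arbitrary units);
# negative values mean respiration exceeds photosynthesis.
cp = compensation_point([0, 20, 40, 60], [-4.0, -1.0, 2.0, 5.0])
```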
Positive psychology in rehabilitation medicine: a brief report.
Bertisch, Hilary; Rath, Joseph; Long, Coralynn; Ashman, Teresa; Rashid, Tayyab
2014-01-01
The field of positive psychology has grown exponentially within the last decade. To date, however, there have been few empirical initiatives to clarify the constructs within positive psychology as they relate to rehabilitation medicine. Character strengths, and in particular resilience, following neurological trauma are clinically observable within rehabilitation settings, and greater knowledge of the way in which these factors relate to treatment variables may allow for enhanced treatment conceptualization and planning. The goal of this study was to explore the relationships between positive psychology constructs (character strengths, resilience, and positive mood) and rehabilitation-related variables (perceptions of functional ability post-injury and beliefs about treatment) within a baseline data set, a six-month follow-up data set, and longitudinally across time points. Pearson correlations and supplementary multiple regression analyses were conducted within and across these time points from a starting sample of thirty-nine individuals with acquired brain injury (ABI) in an outpatient rehabilitation program. Positive psychology constructs were related to rehabilitation-related variables within the baseline data set, within the follow-up data set, and longitudinally between baseline positive psychology variables and follow-up rehabilitation-related data. These preliminary findings support relationships between character strengths, resilience, and positive mood states with perceptions of functional ability and expectations of treatment, respectively, which are primary factors in treatment success and quality of life outcomes in rehabilitation medicine settings. The results suggest the need for more research in this area, with an ultimate goal of incorporating positive psychology constructs into rehabilitation conceptualization and treatment planning.
Evaluation of Humidity Control Options in Hot-Humid Climate Homes (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2011-12-01
This technical highlight describes NREL research to analyze the indoor relative humidity in three home types in the hot-humid climate zone, and to examine the impacts of various dehumidification equipment and controls. As the Building America program researches construction of homes that achieve greater source energy savings over typical mid-1990s construction, proper modeling of whole-house latent loads and operation of humidity control equipment has become a high priority. Long-term high relative humidity can cause health and durability problems in homes, particularly in a hot-humid climate. In this study, researchers at the National Renewable Energy Laboratory (NREL) used the latest EnergyPlus tool, equipped with the moisture capacitance model, to analyze the indoor relative humidity in three home types: a Building America high-performance home; a mid-1990s reference home; and a 2006 International Energy Conservation Code (IECC)-compliant home in hot-humid climate zones. They examined the impacts of various dehumidification equipment and controls on the high-performance home, where the dehumidification equipment energy use can become a much larger portion of whole-house energy consumption. The research included a number of simulated cases: thermostat reset, A/C with energy recovery ventilator, heat exchanger assisted A/C, A/C with condenser reheat, A/C with desiccant wheel dehumidifier, A/C with DX dehumidifier, and A/C with energy recovery ventilator and DX dehumidifier. Space relative humidity, thermal comfort, and whole-house source energy consumption were compared for indoor relative humidity set points of 50%, 55%, and 60%. The study revealed why similar trends of high humidity were observed in all three homes regardless of energy efficiency, and why humidity problems are not necessarily unique to the high-performance home. Thermal comfort analysis indicated that occupants are unlikely to notice indoor humidity problems.
The study confirmed that supplemental dehumidification is needed to maintain space relative humidity (RH) below 60% in a hot-humid climate home. Researchers also concluded that while all the active dehumidification options included in the study successfully controlled space relative humidity excursions, the increase in whole-house energy consumption was much more sensitive to the humidity set point than to the chosen technology option. In the high-performance home, supplemental dehumidification equipment results in a significant source energy consumption penalty at a 50% RH set point (12.6%-22.4%) compared to the consumption at a 60% RH set point (1.5%-2.7%). At 50% and 55% RH set points, A/C with desiccant wheel dehumidifier and A/C with ERV and high-efficiency DX dehumidifier stand out as the two cases resulting in the smallest increase of source energy consumption. At an RH set point of 60%, all explicit dehumidification technologies result in similar insignificant increases in source energy consumption and thus are equally competitive.
Is there a relation between the 2D Causal Set action and the Lorentzian Gauss-Bonnet theorem?
NASA Astrophysics Data System (ADS)
Benincasa, Dionigi M. T.
2011-07-01
We investigate the relation between the two-dimensional Causal Set action, S, and the Lorentzian Gauss-Bonnet theorem (LGBT). We give compelling reasons why the answer to the title's question is no. In support of this point of view we calculate the causal set inspired action of causal intervals in some two-dimensional spacetimes: Minkowski, the flat cylinder and the flat trousers.
Modeling fixation locations using spatial point processes.
Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix
2013-10-01
Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixate where they do. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular, we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
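The inhomogeneous spatial Poisson processes the tutorial focuses on are easy to simulate by thinning, which makes the fixations-as-point-process view concrete. The sketch below uses a center-biased intensity as a stand-in "saliency" map (the intensity function and parameters are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def inhomogeneous_poisson(intensity, lam_max, width=1.0, height=1.0):
    """Simulate fixation-like points from an inhomogeneous spatial
    Poisson process on [0, width] x [0, height] by thinning.

    intensity(x, y) must be bounded above by lam_max.
    """
    # Homogeneous candidates at the dominating rate lam_max ...
    n = rng.poisson(lam_max * width * height)
    xs = rng.uniform(0.0, width, n)
    ys = rng.uniform(0.0, height, n)
    # ... each kept with probability intensity / lam_max (thinning).
    keep = rng.uniform(0.0, 1.0, n) < intensity(xs, ys) / lam_max
    return xs[keep], ys[keep]

# A center-biased intensity: "fixations" are denser mid-image.
lam = lambda x, y: 500.0 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.05)
xs, ys = inhomogeneous_poisson(lam, 500.0)
```

Comparing simulated point patterns like this one against measured fixations is exactly the kind of quantitative question the point-process framework enables.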
Fast Semantic Segmentation of 3d Point Clouds with Strongly Varying Density
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2016-06-01
We describe an effective and efficient method for point-wise semantic classification of 3D point clouds. The method can handle unstructured and inhomogeneous point clouds such as those derived from static terrestrial LiDAR or photogrammetric reconstruction; and it is computationally efficient, making it possible to process point clouds with many millions of points in a matter of minutes. The key issue, both to cope with strong variations in point density and to bring down computation time, turns out to be careful handling of neighborhood relations. By choosing appropriate definitions of a point's (multi-scale) neighborhood, we obtain a feature set that is both expressive and fast to compute. We evaluate our classification method both on benchmark data from a mobile mapping platform and on a variety of large, terrestrial laser scans with greatly varying point density. The proposed feature set outperforms the state of the art with respect to per-point classification accuracy, while at the same time being much faster to compute.
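The idea of multi-scale neighborhood features can be illustrated with the common eigenvalue recipe: at each radius, the covariance of a point's neighbors yields linearity/planarity/sphericity descriptors. This is a generic sketch of that idea, not the paper's exact feature set or its fast neighborhood data structures:

```python
import numpy as np

def neighborhood_features(cloud, query, radii):
    """Multi-scale covariance features for one 3D point (a sketch).

    For each radius, gather the neighbors of `query` within that
    radius and, from the sorted eigenvalues l1 >= l2 >= l3 of their
    covariance, return (linearity, planarity, sphericity).
    """
    feats = []
    for r in radii:
        nbrs = cloud[np.linalg.norm(cloud - query, axis=1) <= r]
        if len(nbrs) < 3:
            feats.append((0.0, 0.0, 0.0))   # too few neighbors at this scale
            continue
        l3, l2, l1 = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))
        s = l1 + 1e-12
        feats.append(((l1 - l2) / s, (l2 - l3) / s, l3 / s))
    return feats
```

On a planar patch the planarity term dominates, on a cable-like structure the linearity term does; evaluating this at several radii is what makes the descriptor robust to strongly varying point density.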
Control methods for merging ALSM and ground-based laser point clouds acquired under forest canopies
NASA Astrophysics Data System (ADS)
Slatton, Kenneth C.; Coleman, Matt; Carter, William E.; Shrestha, Ramesh L.; Sartori, Michael
2004-12-01
Merging of point data acquired from ground-based and airborne scanning laser rangers has been demonstrated for cases in which a common set of targets can be readily located in both data sets. However, direct merging of point data is generally not possible if the two data sets do not share common targets. This is often the case for ranging measurements acquired in forest canopies, where airborne systems image the canopy crowns well but receive a relatively sparse set of points from the ground and understory. Conversely, ground-based scans of the understory do not generally sample the upper canopy. An experiment was conducted to establish a viable procedure for acquiring and georeferencing laser ranging data underneath a forest canopy. Once georeferenced, the ground-based data points can be merged with airborne points even in cases where no natural targets are common to both data sets. Two ground-based laser scans are merged and georeferenced with a final absolute error in the target locations of less than 10 cm. This is comparable to the accuracy of the georeferenced airborne data. Thus, merging of the georeferenced ground-based and airborne data should be feasible. The motivation for this investigation is to facilitate a thorough characterization of airborne laser ranging phenomenology over forested terrain as a function of vertical location in the canopy.
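For context, the target-based registration that *is* possible when common targets exist is the classic least-squares rigid fit (Kabsch/Horn, via SVD). The paper's contribution is the georeferencing procedure for when no common targets exist; the sketch below shows only the standard baseline it replaces:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) with R @ src_i + t ~= dst_i.

    The classic SVD solution for registering one scan against another
    when corresponding targets are available in both.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

Once both clouds are independently georeferenced, as in the experiment described, no such correspondence step is needed: the merge is a direct union in the common coordinate frame.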
NASA Technical Reports Server (NTRS)
Frew, A. M.; Eisenhut, D. F.; Farrenkopf, R. L.; Gates, R. F.; Iwens, R. P.; Kirby, D. K.; Mann, R. J.; Spencer, D. J.; Tsou, H. S.; Zaremba, J. G.
1972-01-01
The precision pointing control system (PPCS) is an integrated system for precision attitude determination and orientation of gimbaled experiment platforms. The PPCS concept configures the system to perform orientation of up to six independent gimbaled experiment platforms to design goal accuracy of 0.001 degrees, and to operate in conjunction with a three-axis stabilized earth-oriented spacecraft in orbits ranging from low altitude (200-2500 n.m., sun synchronous) to 24 hour geosynchronous, with a design goal life of 3 to 5 years. The system comprises two complementary functions: (1) attitude determination where the attitude of a defined set of body-fixed reference axes is determined relative to a known set of reference axes fixed in inertial space; and (2) pointing control where gimbal orientation is controlled, open-loop (without use of payload error/feedback) with respect to a defined set of body-fixed reference axes to produce pointing to a desired target.
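The two complementary functions can be sketched end-to-end for a single platform: given an attitude estimate (the determination step), open-loop pointing reduces to expressing the target direction in body axes and reading off gimbal angles. This is a generic illustration of that split; the two-axis az/el convention is our assumption, not the PPCS design:

```python
import numpy as np

def gimbal_angles(target_inertial, R_body_from_inertial):
    """Open-loop two-axis gimbal commands to point at an inertial target.

    R_body_from_inertial: rotation matrix taking inertial vectors into
    body axes (the output of attitude determination). Returns
    (azimuth, elevation) in degrees, with azimuth measured from +X
    toward +Y and elevation toward +Z.
    """
    t = R_body_from_inertial @ np.asarray(target_inertial, float)
    t = t / np.linalg.norm(t)
    az = np.degrees(np.arctan2(t[1], t[0]))
    el = np.degrees(np.arcsin(t[2]))
    return az, el

# Identity attitude, target along +Y: azimuth 90 deg, elevation 0.
az, el = gimbal_angles([0.0, 1.0, 0.0], np.eye(3))
```

Because the loop is open (no payload feedback), the 0.001 degree design goal falls entirely on the accuracy of the attitude estimate and the gimbal mechanism, which is why the two functions are engineered together.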
NASA Astrophysics Data System (ADS)
Rerikh, K. V.
1998-02-01
Using classic results of algebraic geometry for birational mappings of the plane CP^2, we present a general approach to the algebraic integrability of autonomous dynamical systems in C^2 with discrete time, and of systems of two autonomous functional equations for meromorphic functions in one complex variable defined by birational maps in C^2. General theorems defining the invariant curves and the dynamics of a birational mapping, and a general theorem giving necessary and sufficient conditions for integrability of birational plane mappings, are proved on the basis of a new idea: a decomposition of the orbit set of indeterminacy points of the direct map relative to the action of the inverse mapping. A general method of generating integrable mappings and their rational integrals (invariants) I is proposed. We introduce numerical characteristics N_k of the intersections of the orbits Φ^{n-k} O_i of the fundamental (indeterminacy) points O_i ∈ O ∩ S of the mapping Φ^n, where O = {O_i} is the set of indeterminacy points of Φ^n and S is the analogous set for the invariant I, with the corresponding set O' ∩ S, where O' = {O'_i} is the set of indeterminacy points of the inverse mapping Φ^{-n}. Using the method proposed we obtain all nine integrable multiparameter quadratic birational reversible mappings with zero fixed point and linear projective symmetry S = C Λ C^{-1}, Λ = diag(±1), with rational invariants generated by invariant straight lines and conics. For the integrable mappings obtained, relations are established between the numbers N_k and such numerical characteristics of discrete dynamical systems as the Arnold complexity and their integrability, and the Arnold complexities of the integrable mappings are determined. The main results are presented in Theorems 2-5, in Tables 1 and 2, and in Appendix A.
Models for the hotspot distribution
NASA Technical Reports Server (NTRS)
Jurdy, Donna M.; Stefanick, Michael
1990-01-01
Published hotspot catalogs all show a hemispheric concentration beyond what can be expected by chance. Cumulative distributions about the center of concentration are described by a power law with a fractal dimension closer to 1 than 2. Random sets of the corresponding sizes do not show this effect. A simple shift of the random sets away from a point would produce distributions similar to those of hotspot sets. The possible relation of the hotspots to the locations of ridges and subduction zones is tested using large sets of randomly-generated points to estimate areas within given distances of the plate boundaries. The probability of finding the observed number of hotspots within 10 deg of the ridges is about what is expected.
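The Monte Carlo logic described, using large random point sets to estimate the area within a given angular distance of plate boundaries, is simple to reproduce for a case with a known answer. The sketch below takes the equator as a stand-in "ridge"; the exact area fraction within 10 degrees of it is sin(10 deg), and uniform random points on the sphere recover it (our toy geometry, not the paper's boundary data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Uniform points on the sphere: latitude via uniform cos(colatitude).
lat = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 200_000)))

# Fraction of the sphere within 10 degrees of the equator; the exact
# value is sin(10 deg) ~ 0.174, and the Monte Carlo estimate should
# land close to it.
frac = float(np.mean(np.abs(lat) <= 10.0))
```

With real ridge geometry the exact area has no closed form, which is why the paper estimates it the same way before asking whether the observed hotspot count near ridges is surprising.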
Heidema, A Geert; Thissen, Uwe; Boer, Jolanda M A; Bouwman, Freek G; Feskens, Edith J M; Mariman, Edwin C M
2009-06-01
In this study, we applied the multivariate statistical tool Partial Least Squares (PLS) to analyze the relative importance of 83 plasma proteins in relation to coronary heart disease (CHD) mortality and the intermediate end points body mass index, HDL-cholesterol and total cholesterol. From a Dutch monitoring project for cardiovascular disease risk factors, men who died of CHD between initial participation (1987-1991) and end of follow-up (January 1, 2000) (N = 44) and matched controls (N = 44) were selected. Baseline plasma concentrations of proteins were measured by a multiplex immunoassay. With the use of PLS, we identified 15 proteins with prognostic value for CHD mortality and sets of proteins associated with the intermediate end points. Subsequently, sets of proteins and intermediate end points were analyzed together by Principal Components Analysis, indicating that proteins involved in inflammation explained most of the variance, followed by proteins involved in metabolism and proteins associated with total-C. This study is one of the first in which the association of a large number of plasma proteins with CHD mortality and intermediate end points is investigated by applying multivariate statistics, providing insight into the relationships among proteins, intermediate end points and CHD mortality, and a set of proteins with prognostic value.
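The core of PLS for a single response is easy to sketch: the first weight vector is the (normalized) covariance direction X^T y, and variables with large weights are the candidates of prognostic value. The one-component sketch below runs on random stand-in data, not the study's 88-subject protein panel, and omits the deflation steps of full multi-component PLS:

```python
import numpy as np

def pls1_first_component(X, y):
    """First PLS component for a single response (PLS1).

    Returns the unit weight vector (the covariance direction X^T y
    after centering) and the sample scores, i.e. the projections of
    the centered data onto it.
    """
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc
    w /= np.linalg.norm(w)
    return w, Xc @ w

rng = np.random.default_rng(3)
X = rng.normal(size=(88, 83))     # stand-in "plasma protein" matrix
y = 4.0 * X[:, 0] - 3.0 * X[:, 5] + rng.normal(scale=0.1, size=88)
w, scores = pls1_first_component(X, y)
# Variables with large |w| drive the component; with this synthetic
# end point, indices 0 and 5 should carry most of the weight.
```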
A novel plant protection strategy for transient reactors
NASA Astrophysics Data System (ADS)
Bhattacharyya, Samit K.; Lipinski, Walter C.; Hanan, Nelson A.
A plant protection system (PPS) has been defined for use in the TREAT-Upgrade (TU) reactor for controlled transient operation during reactor-fuel behavior testing under simulated reactor-accident conditions. A PPS with energy-dependent trip set points lowered worst-case clad temperatures by as much as 180 K relative to the use of conventional fixed-level trip set points. The multilayered, multilevel protection strategy represents the state of the art in terrestrial transient reactor protection systems, and should be applicable to multi-MW space reactors.
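The idea of an energy-dependent trip set point, as opposed to a single fixed level, can be shown with a small interpolation table. The numbers below are invented purely to illustrate the schedule's shape; they are not TREAT-Upgrade design values:

```python
def trip_setpoint(energy_mj,
                  table=((0.0, 900.0), (50.0, 820.0), (100.0, 720.0))):
    """Energy-dependent trip set point (illustrative).

    Linear interpolation in a (released energy [MJ], trip temperature
    [K]) table: the trip level is lowered as transient energy
    accumulates, instead of sitting at one fixed value.
    """
    pts = sorted(table)
    if energy_mj <= pts[0][0]:
        return pts[0][1]
    for (e0, t0), (e1, t1) in zip(pts, pts[1:]):
        if energy_mj <= e1:
            return t0 + (energy_mj - e0) / (e1 - e0) * (t1 - t0)
    return pts[-1][1]
```

A fixed-level trip must be set at the most conservative value for the whole transient; letting the set point track released energy is what buys the worst-case clad temperature margin the abstract reports.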
Schema Knowledge Structures for Representing and Understanding Arithmetic Story Problems.
1987-03-01
...do so on a common unit of measure. Implicit in the CP relation is the concept of one-to-one matching of one element in the problem with the other. As...engages in one-to-one matching, removing one member from each set and setting them apart as a matched pair. The smaller of the two sets is the one...to be critical. As we pointed out earlier, some of the semantic relations can be present in situations that demand any of the four arithmetic
Uniqueness of the joint measurement and the structure of the set of compatible quantum measurements
NASA Astrophysics Data System (ADS)
Guerini, Leonardo; Terra Cunha, Marcelo
2018-04-01
We address the problem of characterising the compatible tuples of measurements that admit a unique joint measurement. We derive a uniqueness criterion based on the method of perturbations and apply it to show that extremal points of the set of compatible tuples admit a unique joint measurement, while all tuples that admit a unique joint measurement lie in the boundary of such a set. We also provide counter-examples showing that none of these properties are both necessary and sufficient, thus completely describing the relation between the joint measurement uniqueness and the structure of the compatible set. As a by-product of our investigations, we completely characterise the extremal and boundary points of the set of general tuples of measurements and of the subset of compatible tuples.
NASA Technical Reports Server (NTRS)
Draine, B. T.; Goodman, Jeremy
1993-01-01
We derive the dispersion relation for electromagnetic waves propagating on a lattice of polarizable points. From this dispersion relation we obtain a prescription for choosing dipole polarizabilities so that an infinite lattice with finite lattice spacing will mimic a continuum with dielectric constant. The discrete dipole approximation is used to calculate scattering and absorption by a finite target by replacing the target with an array of point dipoles. We compare different prescriptions for determining the dipole polarizabilities. We show that the most accurate results are obtained when the lattice dispersion relation is used to set the polarizabilities.
Trajectories of Dop Points on a Machining Wheel During Grinding of High Quality Plane Surfaces
NASA Astrophysics Data System (ADS)
Petrikova, I.; Vrzala, R.; Kafka, J.
The basic requirement for plane grinding of synthetic monocrystals is uniform wear of the grinding tool. This article deals with the case where the grinding process is carried out by relative motion between the front faces of rotating wheels with parallel axes. The dop is attached to the end of a pendulous arm whose movement is controlled by a cam. Kinematic relations have been derived for the relative motion of the dop points with respect to the abrasive wheel. The aim of the work is to establish a methodology for assessing the uniformity, or nonuniformity, of the motion of dop points on the abrasive wheel. The computational program was written in MATLAB. The numbers of passes were summed for transmission ratios in the range 0.4-1: the number of passes of selected points on the dop over the cells of a square mesh was counted. The density of trajectory passes depends on four factors: the speed of both wheels, the number of arm operating cycles, the angle of the arm swing, and the cam shape. All these dependencies were investigated. The uniformity of the density of passes is one of the criteria for setting up the grinding machine.
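The pass-density counting described above can be sketched as a 2-D histogram of a sampled relative trajectory over a square mesh. The abstract gives neither the kinematic equations nor the code, so the epicycloid-like trajectory, its parameters (transmission ratio, radii), and the function name below are illustrative assumptions only.

```python
import numpy as np

def pass_density(traj_x, traj_y, n_cells=50, extent=1.0):
    """Count how often a trajectory visits each cell of a square mesh
    laid over the abrasive wheel: a 2-D histogram of trajectory samples."""
    h, _, _ = np.histogram2d(traj_x, traj_y,
                             bins=n_cells,
                             range=[[-extent, extent], [-extent, extent]])
    return h

# Illustrative relative motion of one dop point (parameters assumed):
# wheel rotation composed with arm motion at transmission ratio i_ratio.
t = np.linspace(0.0, 200.0 * np.pi, 200_000)
i_ratio, r_dop, r_offset = 0.7, 0.3, 0.5
x = r_offset * np.cos(t) + r_dop * np.cos(i_ratio * t)
y = r_offset * np.sin(t) + r_dop * np.sin(i_ratio * t)

density = pass_density(x, y)
# A uniform density matrix would indicate uniform wear of the wheel.
print(density.max(), density.min())
```

A flat histogram (small spread between max and min counts) would correspond to the uniform wear the article takes as the grinding-quality criterion.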
Error detection and data smoothing based on local procedures
NASA Technical Reports Server (NTRS)
Guerra, V. M.
1974-01-01
An algorithm is presented which is able to locate isolated bad points and correct them without contaminating the rest of the good data. This work has been greatly influenced and motivated by what is currently done in the manual loft. It is not within the scope of this work to handle small random errors characteristic of a noisy system, and it is therefore assumed that the bad points are isolated and relatively few compared with the total number of points. Motivated by the desire to imitate the loftsman, a visual experiment was conducted to determine what is considered smooth data. This criterion is used to determine how much the data should be smoothed and to prove that this method produces such data. The method ultimately converges to a set of points that lies on the polynomial interpolating the first and last points; however, convergence to such a set is definitely not the purpose of our algorithm. The proof of convergence is necessary to demonstrate that oscillation does not take place and that in a finite number of steps the method produces a set as smooth as desired.
Implementation of Steiner point of fuzzy set.
Liang, Jiuzhen; Wang, Dejiang
2014-01-01
This paper deals with the implementation of the Steiner point of a fuzzy set. Some definitions and properties of the Steiner point are investigated and extended to fuzzy sets. The paper focuses on establishing efficient methods to compute the Steiner point of a fuzzy set, and two strategies are proposed. One is a linear combination of the Steiner points computed from a series of crisp α-cut sets of the fuzzy set. The other is an approximate method that tries to find the optimal α-cut set approaching the fuzzy set. Stability analysis of the Steiner point of a fuzzy set is also studied. Some experiments on image processing are given, in which the two methods are applied to compute the Steiner point of a fuzzy image, and both strategies show their own advantages.
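The first strategy, a linear combination over α-cuts, can be illustrated with a minimal sketch. The abstract does not give the authors' formulas, so this assumes a one-dimensional fuzzy set sampled on a grid, takes the interval midpoint as the Steiner point of each crisp α-cut, and averages the levels uniformly; the function name and the uniform level weighting are illustrative assumptions.

```python
import numpy as np

def steiner_point_alpha_cuts(x, mu, n_levels=50):
    """Approximate the Steiner point of a 1-D fuzzy set as a uniform
    linear combination of the Steiner points (here: interval midpoints)
    of its crisp alpha-cut sets.  Illustrative sketch only."""
    alphas = np.linspace(1e-6, mu.max(), n_levels)
    mids = []
    for a in alphas:
        support = x[mu >= a]          # crisp alpha-cut of the fuzzy set
        if support.size:
            mids.append(0.5 * (support.min() + support.max()))
    return float(np.mean(mids))

x = np.linspace(0.0, 4.0, 401)
mu = np.clip(1.0 - np.abs(x - 2.0), 0.0, 1.0)  # symmetric triangular fuzzy number
print(steiner_point_alpha_cuts(x, mu))          # by symmetry, close to 2.0
```

For a symmetric membership function every α-cut midpoint coincides with the axis of symmetry, so the combination reproduces it exactly; asymmetric fuzzy sets give a level-averaged location instead.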
Phelps, G.A.
2008-01-01
This report describes some simple spatial statistical methods to explore the relationships of scattered points to geologic or other features, represented by points, lines, or areas. It also describes statistical methods to search for linear trends and clustered patterns within the scattered point data. Scattered points are often contained within irregularly shaped study areas, necessitating the use of methods largely unexplored in the point pattern literature. The methods take advantage of the power of modern GIS toolkits to numerically approximate the null hypothesis of randomly located data within an irregular study area. Observed distributions can then be compared with the null distribution of a set of randomly located points. The methods are non-parametric and are applicable to irregularly shaped study areas. Patterns within the point data are examined by comparing the distribution of the orientations of the vectors defined by each pair of points within the data with the equivalent distribution for a random set of points within the study area. A simple model is proposed to describe linear or clustered structure within scattered data. A scattered data set of damage to pavement and pipes, recorded after the 1989 Loma Prieta earthquake, is used as an example to demonstrate the analytical techniques. The damage is found to be located preferentially nearer a set of mapped lineaments than randomly scattered damage would be, suggesting that range-front faulting along the base of the Santa Cruz Mountains is related to both the earthquake damage and the mapped lineaments. The damage also exhibits two non-random patterns: a single cluster of damage centered in the town of Los Gatos, California, and a linear alignment of damage along the range front of the Santa Cruz Mountains, California. The linear alignment of damage is strongest between 45° and 50° northwest. This agrees well with the mean trend of the mapped lineaments, measured as 49° northwest.
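The pairwise-orientation comparison can be sketched roughly as follows. This is not the report's code: it assumes a rectangular study area, synthetic stand-in data aligned along a 45° trend, and a simple Monte Carlo null of uniformly random points, whereas the report handles irregular study areas via GIS.

```python
import numpy as np

def pair_orientations(pts):
    """Orientation in degrees (0-180) of the vector defined by every
    pair of points; direction is ignored, hence the modulo 180."""
    d = pts[None, :, :] - pts[:, None, :]
    i, j = np.triu_indices(len(pts), k=1)
    return np.degrees(np.arctan2(d[i, j, 1], d[i, j, 0])) % 180.0

rng = np.random.default_rng(0)

# Observed pattern: synthetic points scattered along a 45-degree trend.
t = rng.uniform(0.0, 1.0, 60)
obs = np.column_stack([t, t + rng.normal(0.0, 0.02, 60)])

# Null pattern: uniformly random points in the same (here square) study area.
null = rng.uniform(0.0, 1.0, (60, 2))

print(np.median(pair_orientations(obs)), np.median(pair_orientations(null)))
```

Comparing the observed orientation histogram with many such null realizations is the non-parametric test the report applies to the Loma Prieta damage data.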
A Method for the Registration of Hemispherical Photographs and TLS Intensity Images
NASA Astrophysics Data System (ADS)
Schmidt, A.; Schilling, A.; Maas, H.-G.
2012-07-01
Terrestrial laser scanners generate dense and accurate 3D point clouds with minimal effort, which represent the geometry of real objects, while image data contains texture information of object surfaces. Based on the complementary characteristics of both data sets, a combination is very appealing for many applications, including forest-related tasks. In the scope of our research project, independent data sets of a plain birch stand have been taken by a full-spherical laser scanner and a hemispherical digital camera. Previously, both kinds of data sets have been considered separately: Individual trees were successfully extracted from large 3D point clouds, and so-called forest inventory parameters could be determined. Additionally, a simplified tree topology representation was retrieved. From hemispherical images, leaf area index (LAI) values, as a very relevant parameter for describing a stand, have been computed. The objective of our approach is to merge a 3D point cloud with image data in a way that RGB values are assigned to each 3D point. So far, segmentation and classification of TLS point clouds in forestry applications was mainly based on geometrical aspects of the data set. However, a 3D point cloud with colour information provides valuable cues exceeding simple statistical evaluation of geometrical object features and thus may facilitate the analysis of the scan data significantly.
Uncertainty representation of grey numbers and grey sets.
Yang, Yingjie; Liu, Sifeng; John, Robert
2014-09-01
In the literature, there is a presumption that a grey set and an interval-valued fuzzy set are equivalent. This presumption ignores the existence of discrete components in a grey number. In this paper, new measurements of uncertainties of grey numbers and grey sets, consisting of both absolute and relative uncertainties, are defined to give a comprehensive representation of uncertainties in a grey number and a grey set. Some simple examples are provided to illustrate that the proposed uncertainty measurement can give an effective representation of both absolute and relative uncertainties in a grey number and a grey set. The relationships between grey sets and interval-valued fuzzy sets are also analyzed from the point of view of the proposed uncertainty representation. The analysis demonstrates that grey sets and interval-valued fuzzy sets provide different but overlapping models for uncertainty representation in sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamburrini, G.; Termini, S.
1982-01-01
The general thesis underlying the present paper is that there are very strong methodological relations among cybernetics, system science, artificial intelligence, fuzzy sets and many other related fields. In order to understand better both the achievements and the weak points of all these disciplines, one should look for common features within this general frame. What will be done is to present a brief analysis of the primitive program of cybernetics, presenting it as a case study useful for developing the previous thesis. Among the discussed points are the problems of interdisciplinarity and of the unity of cybernetics. Some implications of this analysis for a new reading of general system theory and fuzzy sets are briefly outlined at the end of the paper. 3 references.
Papasavvas, Emmanouil; Foulkes, Andrea; Yin, Xiangfan; Joseph, Jocelin; Ross, Brian; Azzoni, Livio; Kostman, Jay R; Mounzer, Karam; Shull, Jane; Montaner, Luis J
2015-07-01
The identification of immune correlates of HIV control is important for the design of immunotherapies that could support cure or antiretroviral therapy (ART) intensification-related strategies. ART interruptions may facilitate this task through exposure of a partially ART-reconstituted immune system to endogenous virus. We investigated the relationship between set-point plasma HIV viral load (VL) during an ART interruption and innate/adaptive parameters before or after interruption. Dendritic cell (DC), natural killer (NK) cell and HIV Gag p55-specific T-cell functional responses were measured in paired cryopreserved peripheral blood mononuclear cells obtained at the beginning (on ART) and at set-point of an open-ended interruption from 31 ART-suppressed chronically HIV-1(+) patients. Spearman correlation and linear regression modeling were used. Frequencies of plasmacytoid DC (pDC) and HIV Gag p55-specific CD3(+) CD4(-) perforin(+) IFN-γ(+) cells at the beginning of interruption associated negatively with set-point plasma VL. Inclusion of both variables with interaction into a model resulted in the best fit (adjusted R(2) = 0.6874). Frequencies of pDC or HIV Gag p55-specific CD3(+) CD4(-) CFSE(lo) CD107a(+) cells at set-point associated negatively with set-point plasma VL. The dual contribution of pDC and anti-HIV T-cell responses to viral control, supported by our models, suggests that these variables may serve as immune correlates of viral control and could be integrated in cure or ART-intensification strategies. © 2015 John Wiley & Sons Ltd.
Point-of-Care Diagnostics for Improving Maternal Health in South Africa
Mashamba-Thompson, Tivani P.; Sartorius, Benn; Drain, Paul K.
2016-01-01
Improving maternal health is a global priority, particularly in high HIV-endemic, resource-limited settings. Failure to use health care facilities due to poor access is one of the main causes of maternal deaths in South Africa. “Point-of-care” (POC) diagnostics are an innovative healthcare approach to improve healthcare access and health outcomes in remote and resource-limited settings. In this review, POC testing is defined as a diagnostic test that is carried out near patients and leads to rapid clinical decisions. We review the current and emerging POC diagnostics for maternal health, with a specific focus on the World Health Organization (WHO) quality-ASSURED (Affordability, Sensitivity, Specificity, User friendly, Rapid and robust, Equipment free and Delivered) criteria for an ideal point-of-care test in resource-limited settings. The performance of POC diagnostics, barriers and challenges related to implementing POC diagnostics for maternal health in rural and resource-limited settings are reviewed. Innovative strategies for overcoming these barriers are recommended to achieve substantial progress on improving maternal health outcomes in these settings. PMID:27589808
Universal relations between non-Gaussian fluctuations in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Chen, Jiunn-Wei; Deng, Jian; Kohyama, Hiroaki; Labun, Lance
2017-01-01
We show that universality near a critical end point implies a characteristic relation between third- and fourth-order baryon susceptibilities χ3 and χ4, resulting in a banana-shaped loop when χ4 is plotted as a function of χ3 along a freeze-out line. This result relies only on the derivative relation between χ3 and χ4, the enhancement of the correlation length and the scaling symmetry near a critical point, and the freeze-out line near the critical point not too parallel to the μB axis. Including the individual enhancements of χ3 and χ4 near a critical point, these features may be a consistent set of observations supporting the interpretation of baryon fluctuation data as arising from criticality.
‘Parabolic’ trapped modes and steered Dirac cones in platonic crystals
McPhedran, R. C.; Movchan, A. B.; Movchan, N. V.; Brun, M.; Smith, M. J. A.
2015-01-01
This paper discusses the properties of flexural waves governed by the biharmonic operator, and propagating in a thin plate pinned at doubly periodic sets of points. The emphases are on the design of dispersion surfaces having the Dirac cone topology, and on the related topic of trapped modes in plates for a finite set (cluster) of pinned points. The Dirac cone topologies we exhibit have at least two cones touching at a point in the reciprocal lattice, augmented by another band passing through the point. We show that these Dirac cones can be steered along symmetry lines in the Brillouin zone by varying the aspect ratio of rectangular lattices of pins, and that, as the cones are moved, the involved band surfaces tilt. We link Dirac points with a parabolic profile in their neighbourhood, and the characteristic of this parabolic profile decides the direction of propagation of the trapped mode in finite clusters. PMID:27547089
Robust non-rigid registration algorithm based on local affine registration
NASA Astrophysics Data System (ADS)
Wu, Liyang; Xiong, Lei; Du, Shaoyi; Bi, Duyan; Fang, Ting; Liu, Kun; Wu, Dongpeng
2018-04-01
To address the low precision and slow convergence of traditional point-set non-rigid registration algorithms on data with complex local deformations, this paper proposes a robust non-rigid registration algorithm based on local affine registration. The algorithm uses a hierarchical iterative method to complete the point-set non-rigid registration from coarse to fine. In each iteration, the sub data point sets and sub model point sets are divided and the shape control points of each sub point set are updated. A control-point-guided affine ICP algorithm then solves the local affine transformation between the corresponding sub point sets. Next, this local affine transformation is used to update the sub data point sets and their shape control point sets. When the algorithm reaches the maximum iteration layer K, the loop ends and the updated sub data point sets are output. Experimental results demonstrate that the accuracy and convergence of our algorithm are greatly improved compared with traditional point-set non-rigid registration algorithms.
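The core per-subset solve in such a scheme, fitting a local affine transform between corresponding sub point sets by least squares, might look like the sketch below. The function name, the use of homogeneous coordinates, and the omission of the ICP surroundings (correspondence search, hierarchy over layers) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform (A, t) mapping src points onto dst,
    i.e. the solve performed for each sub point set inside one affine-ICP
    iteration, given already-matched correspondences."""
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)    # (3, 2): rows = A^T, then t
    return M[:2].T, M[2]

rng = np.random.default_rng(1)
src = rng.uniform(-1.0, 1.0, (50, 2))
A_true = np.array([[1.2, 0.3], [-0.1, 0.9]])           # synthetic local deformation
t_true = np.array([0.5, -0.2])
dst = src @ A_true.T + t_true

A, t = fit_affine(src, dst)
print(np.allclose(A, A_true), np.allclose(t, t_true))
```

In the full algorithm this solve would be repeated per sub point set and per hierarchy layer, with the recovered transforms used to warp the data points before the next, finer subdivision.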
Using the range to calculate the coefficient of variation.
Rhiel, G Steven
2004-12-01
In this research a coefficient of variation (CVhigh-low) is calculated from the highest and lowest values in a set of data. Use of CVhigh-low when the population is normal, leptokurtic, and skewed is discussed. The statistic is most effective when sampling from the normal distribution. With leptokurtic distributions, CVhigh-low works well for comparing the relative variability between two or more distributions but does not provide a very "good" point estimate of the population coefficient of variation. With skewed distributions CVhigh-low works well in identifying which data set has the greater relative variation, but it does not specify how large the difference in variation is, nor does it provide a "good" point estimate.
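One plausible reading of such a range-based estimator can be sketched as follows: the standard deviation is estimated from the range via the usual normal-theory d2 factor, and the mean from the midrange. Rhiel's exact formula is not given in the abstract and may differ, so treat the definition below as an illustrative assumption.

```python
import statistics

# d2 factors: expected range of n standard-normal observations
# (standard control-chart tables), for a few sample sizes.
D2 = {5: 2.326, 10: 3.078, 15: 3.472, 20: 3.735}

def cv_high_low(data):
    """Range-based coefficient of variation: sigma estimated as range/d2,
    mean estimated as the midrange.  A sketch of one possible form of the
    estimator, not necessarily the one used in the paper."""
    r = max(data) - min(data)
    midrange = (max(data) + min(data)) / 2.0
    return (r / D2[len(data)]) / midrange

sample = [9.8, 10.1, 10.4, 9.9, 10.0, 10.2, 9.7, 10.3, 10.1, 9.9]
print(cv_high_low(sample), statistics.stdev(sample) / statistics.mean(sample))
```

For near-normal data the two values land close together, which matches the paper's finding that the statistic behaves best under normality.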
Estimating animal resource selection from telemetry data using point process models
Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.
2013-01-01
To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.
Validation and Improvement of SRTM Performance over Rugged Terrain
NASA Technical Reports Server (NTRS)
Zebker, Howard A.
2004-01-01
We have previously reported work related to basic technique development in phase unwrapping and generation of digital elevation models (DEM). In the final year of this work we have applied our techniques to the improvement of DEMs produced by SRTM. In particular, we have developed a rigorous mathematical algorithm and means to fill in missing data over rough terrain from other data sets. We illustrate this method by using a higher resolution, but globally less accurate, DEM produced by the TOPSAR airborne instrument over the Galapagos Islands to augment the SRTM data set in this area. We combine this data set with SRTM, using each set to fill in holes left by the other imaging system. The infilling is done by first interpolating each data set using a prediction error filter that reproduces the same statistical characterization as exhibited by the entire data set within the interpolated region. After this procedure is implemented on each data set, the two are combined on a point by point basis with weights that reflect the accuracy of each data point in its original image. In areas that are better covered by SRTM, TOPSAR data are weighted down but still retain TOPSAR statistics. The reverse is true for regions better covered by TOPSAR. The resulting DEM passes statistical tests and appears quite plausible to the eye, but as this DEM is the best available for the region we cannot fully verify its accuracy. Spot checks with GPS points show that locally the technique results in a more comprehensive and accurate map than either data set alone.
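The point-by-point weighted combination with mutual hole-filling described above might be sketched as follows. The NaN void encoding, equal example weights, and function name are assumptions for illustration; the authors' prediction-error-filter interpolation step is omitted here.

```python
import numpy as np

def merge_dems(srtm, topsar, w_srtm, w_topsar):
    """Combine two co-registered DEM grids point by point with accuracy
    weights; where one grid has a void (NaN), take the other grid's value."""
    return np.where(np.isnan(srtm), topsar,
           np.where(np.isnan(topsar), srtm,
                    (w_srtm * srtm + w_topsar * topsar) / (w_srtm + w_topsar)))

# Tiny illustrative grids with one void in each data set.
srtm   = np.array([[1.0, np.nan], [3.0, 4.0]])
topsar = np.array([[2.0, 2.0],   [np.nan, 2.0]])
print(merge_dems(srtm, topsar, 1.0, 1.0))
```

The weights may also be full arrays, so per-pixel accuracy maps (down-weighting TOPSAR where SRTM coverage is better, and vice versa) fit the same signature.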
Schiavon, S; Yang, B; Donner, Y; Chang, V W-C; Nazaroff, W W
2017-05-01
In a warm and humid climate, increasing the temperature set point offers considerable energy benefits with low first costs. Elevated air movement generated by a personally controlled fan can compensate for the negative effects caused by an increased temperature set point. Fifty-six tropically acclimatized persons in common Singaporean office attire (0.7 clo) were exposed for 90 minutes to each of five conditions: 23, 26, and 29°C and in the latter two cases with and without occupant-controlled air movement. Relative humidity was maintained at 60%. We tested thermal comfort, perceived air quality, sick building syndrome symptoms, and cognitive performance. We found that thermal comfort, perceived air quality, and sick building syndrome symptoms are equal or better at 26°C and 29°C than at the common set point of 23°C if a personally controlled fan is available for use. The best cognitive performance (as indicated by task speed) was obtained at 26°C; at 29°C, the availability of an occupant-controlled fan partially mitigated the negative effect of the elevated temperature. The typical Singaporean indoor air temperature set point of 23°C yielded the lowest cognitive performance. An elevated set point in air-conditioned buildings augmented with personally controlled fans might yield benefits for reduced energy use and improved indoor environmental quality in tropical climates. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Goede, Simon L; Leow, Melvin Khee-Shing
2013-01-01
This treatise investigates error sources in measurements used in analysis of the hypothalamus-pituitary-thyroid (HPT) system for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in thyroid function test (TFT) measurements and examine how these can be interpreted to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainty are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by timing of thyroid medications, (3) error sensitivity in ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4], which in turn amplifies curve-fitting errors in the [TSH] domain in the lower [FT4] range, and (5) memory effects (rate-independent hysteresis). When these main uncertainties in TFTs are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.
Anomaly detection in forward looking infrared imaging using one-class classifiers
NASA Astrophysics Data System (ADS)
Popescu, Mihail; Stone, Kevin; Havens, Timothy; Ho, Dominic; Keller, James
2010-04-01
In this paper we describe a method for generating cues of possible abnormal objects present in the field of view of an infrared (IR) camera installed on a moving vehicle. The proposed method has two steps. In the first step, for each frame, we generate a set of possible points of interest using a corner detection algorithm. In the second step, the points related to the background are discarded from the point set using a one-class classifier (OCC) trained on features extracted from a local neighborhood of each point. The advantage of using an OCC is that we do not need examples from the "abnormal object" class to train the classifier. Instead, the OCC is trained using corner points from images known to be free of abnormal objects, i.e., that contain only background scenes. To further reduce the number of false alarms we use a temporal fusion procedure: a region has to be detected as "interesting" in m out of n, m
Nonlocal games and optimal steering at the boundary of the quantum set
NASA Astrophysics Data System (ADS)
Zhen, Yi-Zheng; Goh, Koon Tong; Zheng, Yu-Lin; Cao, Wen-Fei; Wu, Xingyao; Chen, Kai; Scarani, Valerio
2016-08-01
The boundary between classical and quantum correlations is well characterized by linear constraints called Bell inequalities. It is much harder to characterize the boundary of the quantum set itself in the space of no-signaling correlations. For the points on the quantum boundary that violate maximally some Bell inequalities, J. Oppenheim and S. Wehner [Science 330, 1072 (2010), 10.1126/science.1192065] pointed out a complex property: Alice's optimal measurements steer Bob's local state to the eigenstate of an effective operator corresponding to its maximal eigenvalue. This effective operator is the linear combination of Bob's local operators induced by the coefficients of the Bell inequality, and it can be interpreted as defining a fine-grained uncertainty relation. It is natural to ask whether the same property holds for other points on the quantum boundary, using the Bell expression that defines the tangent hyperplane at each point. We prove that this is indeed the case for a large set of points, including some that were believed to provide counterexamples. The price to pay is to acknowledge that the Oppenheim-Wehner criterion does not respect equivalence under the no-signaling constraint: for each point, one has to look for specific forms of writing the Bell expressions.
On the structure of the set of coincidence points
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arutyunov, A V; Gel'man, B D
2015-03-31
We consider the set of coincidence points for two maps between metric spaces. Cardinality, metric and topological properties of the coincidence set are studied. We obtain conditions which guarantee that this set (a) consists of at least two points; (b) consists of at least n points; (c) contains a countable subset; (d) is uncountable. The results are applied to study the structure of the double point set and the fixed point set for multivalued contractions. Bibliography: 12 titles.
40 CFR 51.50 - What definitions apply to this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... accuracy description (MAD) codes means a set of six codes used to define the accuracy of latitude/longitude data for point sources. The six codes and their definitions are: (1) Coordinate Data Source Code: The... physical piece of or a closely related set of equipment. The EPA's reporting format for a given inventory...
Inattentional blindness: A combination of a relational set and a feature inhibition set?
Goldstein, Rebecca R; Beck, Melissa R
2016-07-01
Two experiments were conducted to directly test the feature set hypothesis and the relational set hypothesis in an inattentional blindness task. The feature set hypothesis predicts that unexpected objects that match the to-be-attended stimuli will be reported most. The relational set hypothesis predicts that unexpected objects that match the relationship between the to-be-attended and the to-be-ignored stimuli will be reported the most. Experiment 1 manipulated the luminance of the stimuli. Participants were instructed to monitor the gray letter shapes and to ignore either black or white letter shapes. The unexpected objects that exhibited the luminance relation of the to-be-attended to the to-be-ignored stimuli were reported by participants the most. Experiment 2 manipulated the color of the stimuli. Participants were instructed to monitor the yellower orange or the redder orange letter shapes and to ignore the redder orange or yellower orange letter shapes, respectively. The unexpected objects that exhibited the color relation of the to-be-attended to the to-be-ignored stimuli were reported the most. The results do not support the use of a feature set to accomplish the task and instead support the use of a relational set. In addition, the results point to the concurrent use of multiple attentional sets that are both excitatory and inhibitory.
Explore Stochastic Instabilities of Periodic Points by Transition Path Theory
NASA Astrophysics Data System (ADS)
Cao, Yu; Lin, Ling; Zhou, Xiang
2016-06-01
We consider noise-induced transitions from a linearly stable periodic orbit consisting of T periodic points in a randomly perturbed discrete logistic map. Traditional large deviation theory and asymptotic analysis in the small-noise limit cannot distinguish the quantitative differences in noise-induced stochastic instability among the T periodic points. To attack this problem, we generalize transition path theory to discrete-time, continuous-space stochastic processes. In our first criterion for quantifying the relative instability among the T periodic points, we use the distribution of the last passage location for transitions from the whole periodic orbit to a prescribed disjoint set; this distribution is related to the individual contribution of each periodic point to the transition rate. The second criterion is based on the competency of the transition paths associated with each periodic point. Both criteria utilize the reactive probability current of transition path theory. Our numerical results for the logistic map reveal the transition mechanism of escaping from the stable periodic orbit and identify which periodic point is more prone to lose stability so as to make successful transitions under random perturbations.
NASA Astrophysics Data System (ADS)
Frisch, Michael J.; Binkley, J. Stephen; Schaefer, Henry F., III
1984-08-01
The relative energies of the stationary points on the FH2 and H2CO nuclear potential energy surfaces relevant to the hydrogen atom abstraction, H2 elimination and 1,2-hydrogen shift reactions have been examined using fourth-order Møller-Plesset perturbation theory and a variety of basis sets. The theoretical absolute zero activation energy for the F+H2→FH+H reaction is in better agreement with experiment than previous theoretical studies, and part of the disagreement between earlier theoretical calculations and experiment is found to result from the use of assumed rather than calculated zero-point vibrational energies. The fourth-order reaction energy for the elimination of hydrogen from formaldehyde is within 2 kcal mol-1 of the experimental value using the largest basis set considered. The qualitative features of the H2CO surface are unchanged by expansion of the basis set beyond the polarized triple-zeta level, but diffuse functions and several sets of polarization functions are found to be necessary for quantitative accuracy in predicted reaction and activation energies. Basis sets and levels of perturbation theory which represent good compromises between computational efficiency and accuracy are recommended.
Application of the QSPR approach to the boiling points of azeotropes.
Katritzky, Alan R; Stoyanova-Slavova, Iva B; Tämm, Kaido; Tamm, Tarmo; Karelson, Mati
2011-04-21
CODESSA Pro derivative descriptors were calculated for a data set of 426 azeotropic mixtures by the centroid approximation and the weighted-contribution-factor approximation. The two approximations produced almost identical four-descriptor QSPR models relating the structural characteristics of the individual components of the azeotropes to the azeotropic boiling points. These models were supported by internal and external validations. The descriptors contributing to the QSPR models are directly related to the three components of the enthalpy (heat) of vaporization.
Intrinsic time quantum geometrodynamics
NASA Astrophysics Data System (ADS)
Ita, Eyo Eyo; Soo, Chopin; Yu, Hoi-Lai
2015-08-01
Quantum geometrodynamics with intrinsic time development and momentric variables is presented. An underlying SU(3) group structure at each spatial point regulates the theory. The intrinsic time behavior of the theory is analyzed, together with its ground state and primordial quantum fluctuations. Cotton-York potential dominates at early times when the universe was small; the ground state naturally resolves Penrose's Weyl curvature hypothesis, and thermodynamic and gravitational "arrows of time" point in the same direction. Ricci scalar potential corresponding to Einstein's general relativity emerges as a zero-point energy contribution. A new set of fundamental commutation relations without Planck's constant emerges from the unification of gravitation and quantum mechanics.
Dissipative and nonunitary solutions of operator commutation relations
NASA Astrophysics Data System (ADS)
Makarov, K. A.; Tsekanovskii, E.
2016-01-01
We study the (generalized) semi-Weyl commutation relations U_g A U_g* = g(A) on Dom(A), where A is a densely defined operator and G ∋ g ↦ U_g is a unitary representation of a subgroup G of the affine group, the group of affine orientation-preserving transformations of the real axis. If A is a symmetric operator, then the group G induces an action/flow on the operator unit ball of contracting transformations from Ker(A* - iI) to Ker(A* + iI). We establish several fixed-point theorems for this flow. In the case of one-parameter continuous subgroups of linear transformations, self-adjoint (maximal dissipative) operators associated with the fixed points of the flow yield solutions of the (restricted) generalized Weyl commutation relations. We show that in the dissipative setting, the restricted Weyl relations admit a variety of representations that are not unitarily equivalent. For deficiency indices (1, 1), the basic results can be strengthened and set in a separate case.
NASA Technical Reports Server (NTRS)
Edwards, T. R. (Inventor)
1985-01-01
Apparatus for doubling the data density rate of an analog to digital converter or doubling the data density storage capacity of a memory device is discussed. An interstitial data point midway between adjacent data points in a data stream having an even number of equal interval data points is generated by applying a set of predetermined one-dimensional convolute integer coefficients, which can include a set of multiplier coefficients and a normalizer coefficient. Interpolator means apply the coefficients to the data points, weighting equally on each side of the center of the even number of equal interval data points, to obtain an interstitial point value at the center of the data points. A one-dimensional output data set, which is twice as dense as a one-dimensional equal interval input data set, can be generated where the output data set includes interstitial points interdigitated between adjacent data points in the input data set. The method for generating the set of interstitial points is a weighted, nearest-neighbor, non-recursive, moving, smoothing averaging technique, equivalent to applying a polynomial regression calculation to the data set.
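The interpolation can be sketched in Python with a standard symmetric four-point midpoint stencil; the integer coefficients (-1, 9, 9, -1) with normalizer 16 are a common illustrative choice, not necessarily the coefficient set claimed in the patent:

```python
def densify(data, coeffs=(-1, 9, 9, -1), norm=16):
    """Double the density of an equal-interval data set by inserting an
    interstitial point midway between each interior pair of samples.

    The midpoint value is a weighted nearest-neighbor average: the integer
    multiplier coefficients are applied symmetrically about the interval
    center, then divided by the normalizer coefficient.
    """
    out = []
    for i in range(1, len(data) - 2):
        mid = sum(c * data[i - 1 + k] for k, c in enumerate(coeffs)) / norm
        out.extend([data[i], mid])
    out.append(data[-2])
    return out
```

On linear data the stencil reproduces exact midpoints: `densify([0, 1, 2, 3, 4, 5])` returns `[1, 1.5, 2, 2.5, 3, 3.5, 4]` (edge points without a full neighborhood are dropped).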
Wallace, Lorraine S; Keenum, Amy J
2008-08-01
To evaluate the readability and related features of English language Quick Reference Guides (QRGs) and User Manuals (UMs) accompanying home blood pressure monitors (HBPMs). We evaluated QRGs and UMs for 22 HBPMs [arm (n=12); wrist (n=10)]. Using established criteria, we evaluated reading grade level, language availability, dimensions, text point size, use of illustrations, layout/formatting characteristics, and emphasis of key points of English-language patient instructions accompanying HBPMs. Readability was calculated using McLaughlin's Simple Measure of Gobbledygook (SMOG). Items from the Suitability of Materials Assessment and User-Friendliness Tool were used to assess various layout features. SMOG scores of both QRGs (mean+/-SD=9.1+/-0.8) and UMs (9.3+/-0.8) ranged from 8th to 10th grade. QRGs and UMs presented steps in chronological order, used active voice throughout, avoided use of specialty fonts, focused on need to know, and used realistic illustrations. Seven sets of instructions included all seven key points related to proper HBPM use, whereas three sets of instructions included less than or equal to three key points (mean=4.8+/-1.9). Although most QRGs and UMs met at least some recommended low-literacy formatting guidelines, all instructional materials should be developed and tested to meet the needs of the patient population at large. Key points related to proper HBPM use should not only be included within these instructions, but highlighted to emphasize their importance.
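The SMOG metric used above has a closed-form grade formula; a minimal sketch using McLaughlin's published constants (the syllable counting itself is assumed done elsewhere):

```python
import math

def smog_grade(polysyllables, sentences):
    """McLaughlin's SMOG reading grade: scale the polysyllabic-word count
    to a 30-sentence sample and apply the published constants."""
    return 3.1291 + 1.0430 * math.sqrt(polysyllables * 30.0 / sentences)
```

For example, 36 polysyllabic words in a 30-sentence sample gives a grade of about 9.4, squarely in the 8th-to-10th-grade band reported above.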
Code of Federal Regulations, 2014 CFR
2014-04-01
... standard for biometric data specifications for personal identity verification. Operating point means a... records on its servers. Audit trail means a record showing who has accessed an information technology... information on a local server or hard drive. Certificate policy means a named set of rules that sets forth the...
Making Quality Health Websites a National Public Health Priority: Toward Quality Standards.
Devine, Theresa; Broderick, Jordan; Harris, Linda M; Wu, Huijuan; Hilfiker, Sandra Williams
2016-08-02
Most US adults have limited health literacy skills. They struggle to understand complex health information and services and to make informed health decisions. The Internet has quickly become one of the most popular places for people to search for information about their health, thereby making access to quality information on the Web a priority. However, there are no standardized criteria for evaluating Web-based health information. Every 10 years, the US Department of Health and Human Services' Office of Disease Prevention and Health Promotion (ODPHP) develops a set of measurable objectives for improving the health of the nation over the coming decade, known as Healthy People. There are two objectives in Healthy People 2020 related to website quality. The first is objective Health Communication and Health Information Technology (HC/HIT) 8.1: increase the proportion of health-related websites that meet 3 or more evaluation criteria for disclosing information that can be used to assess information reliability. The second is objective HC/HIT-8.2: increase the proportion of health-related websites that follow established usability principles. The ODPHP conducted a nationwide assessment of the quality of Web-based health information using the Healthy People 2020 objectives. The ODPHP aimed to establish (1) a standardized approach to defining and measuring the quality of health websites; (2) benchmarks for measurement; (3) baseline data points to capture the current status of website quality; and (4) targets to drive improvement. The ODPHP developed the National Quality Health Website Survey instrument to assess the quality of health-related websites. The ODPHP used this survey to review 100 top-ranked health-related websites in order to set baseline data points for these two objectives. The ODPHP then set targets to drive improvement by 2020. This study reviewed 100 health-related websites. 
For objective HC/HIT-8.1, a total of 58 out of 100 (58.0%) websites met 3 or more out of 6 reliability criteria. For objective HC/HIT-8.2, a total of 42 out of 100 (42.0%) websites followed 10 or more out of 19 established usability principles. On the basis of these baseline data points, ODPHP set targets for the year 2020 that represent a minimal statistically significant improvement: raising the objective HC/HIT-8.1 data point to 70.5% and the objective HC/HIT-8.2 data point to 55.7%. This research is a critical first step in evaluating the quality of Web-based health information. The criteria proposed by ODPHP provide methods to assess website quality for professionals designing, developing, and managing health-related websites. The criteria, baseline data, and targets are valuable tools for driving quality improvement.
Impact of fatty acid status on immune function of children in low-income countries.
Prentice, Andrew M; van der Merwe, Liandré
2011-04-01
In vitro and animal studies point to numerous mechanisms by which fatty acids, especially long-chain polyunsaturated fatty acids (LCPUFA), can modulate the innate and adaptive arms of the immune system. These data strongly suggest that improving the fatty acid supply of young children in low-income countries might have immune benefits. Unfortunately, there have been virtually no studies of fatty acid/immune interactions in such settings. Clinical trial registers list over 150 randomized controlled trials (RCTs) involving PUFAs, only one in a low-income setting (the Gambia). We summarize those results here. There was evidence for improved growth and nutritional status, but the primary end point of chronic environmental enteropathy showed no benefit, possibly because the infants were still substantially breastfed. In high-income settings, there have been RCTs with fatty acids (usually LCPUFAs) in relation to 18 disease end points, for some of which there have been numerous trials (asthma, inflammatory bowel disease and rheumatoid arthritis). For these diseases, the evidence is judged reasonable for risk reduction for childhood asthma (but not in adults), as yielding possible benefit in Crohn's disease (insufficient evidence in ulcerative colitis) and for convincing evidence for rheumatoid arthritis at sufficient dose levels, though formal meta-analyses are not yet available. This analysis suggests that fatty acid interventions could yield immune benefits in children in poor settings, especially in non-breastfed children and in relation to inflammatory conditions such as persistent enteropathy. Benefits might include improved responses to enteric vaccines, which frequently perform poorly in low-income settings, and these questions merit randomized trials. © 2011 Blackwell Publishing Ltd.
Circular motion geometry using minimal data.
Jiang, Guang; Quan, Long; Tsui, Hung-Tat
2004-06-01
Circular motion or single axis motion is widely used in computer vision and graphics for 3D model acquisition. This paper describes a new and simple method for recovering the geometry of uncalibrated circular motion from a minimal set of only two points in four images. This problem has been previously solved using nonminimal data, either by computing the fundamental matrix and trifocal tensor in three images or by fitting conics to tracked points in five or more images. It is first established that two sets of tracked points in different images under circular motion for two distinct space points are related by a homography. Then, we compute a plane homography from a minimal set of two points in four images. After that, we show that the unique pair of complex conjugate eigenvectors of this homography is the image of the circular points of the parallel planes of the circular motion. Subsequently, all other motion and structure parameters are computed from this homography in a straightforward manner. The experiments on real image sequences demonstrate the simplicity, accuracy, and robustness of the new method.
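The step of reading off the imaged circular points as the complex-conjugate eigenvector pair of the recovered homography can be sketched with NumPy; this is an illustration of the idea, not the authors' implementation:

```python
import numpy as np

def imaged_circular_points(H, tol=1e-9):
    """Return the complex-conjugate eigenvector pair of a 3x3 plane
    homography. For the homography induced by circular motion, this pair
    is the image of the circular points of the planes parallel to the
    motion plane."""
    vals, vecs = np.linalg.eig(H)
    for k in range(len(vals)):
        if abs(vals[k].imag) > tol:
            v = vecs[:, k]
            return v, np.conj(v)
    raise ValueError("homography has no complex-conjugate eigenvalue pair")
```

For a homography that is a pure image-plane rotation, the recovered pair is proportional to the circular points (1, ±i, 0) themselves.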
Wigner surmises and the two-dimensional homogeneous Poisson point process.
Sakhr, Jamal; Nieminen, John M
2006-04-01
We derive a set of identities that relate the higher-order interpoint spacing statistics of the two-dimensional homogeneous Poisson point process to the Wigner surmises for the higher-order spacing distributions of eigenvalues from the three classical random matrix ensembles. We also report a remarkable identity that equates the second-nearest-neighbor spacing statistics of the points of the Poisson process and the nearest-neighbor spacing statistics of complex eigenvalues from Ginibre's ensemble of 2 x 2 complex non-Hermitian random matrices.
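A small Monte Carlo check of the underlying point process is straightforward; this sketch samples a binomial approximation to a homogeneous Poisson process on the unit torus and collects nearest-neighbour distances (the sample size and seed are arbitrary choices):

```python
import math
import random

def nn_distances(n=800, seed=42):
    """Nearest-neighbour distances for n uniform points on the unit torus,
    a binomial approximation to a homogeneous Poisson point process."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    dists = []
    for i, (x, y) in enumerate(pts):
        best = float("inf")
        for j, (u, v) in enumerate(pts):
            if i != j:
                dx = min(abs(x - u), 1.0 - abs(x - u))  # periodic wrap
                dy = min(abs(y - v), 1.0 - abs(y - v))
                best = min(best, dx * dx + dy * dy)
        dists.append(math.sqrt(best))
    return dists
```

For intensity ρ = n on the unit torus, the theoretical mean nearest-neighbour distance is 1/(2√ρ); normalizing the sampled spacings to unit mean lets them be compared against the Wigner surmise P(s) = (π/2) s exp(−πs²/4).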
A Logical Basis In The Layered Computer Vision Systems Model
NASA Astrophysics Data System (ADS)
Tejwani, Y. J.
1986-03-01
In this paper a four layer computer vision system model is described. The model uses a finite memory scratch pad. In this model planar objects are defined as predicates. Predicates are relations on a k-tuple. The k-tuple consists of primitive points and relationship between primitive points. The relationship between points can be of the direct type or the indirect type. Entities are goals which are satisfied by a set of clauses. The grammar used to construct these clauses is examined.
Apparatus and methods for humidity control
NASA Technical Reports Server (NTRS)
Dinauer, William R. (Inventor); Otis, David R. (Inventor); El-Wakil, Mohamed M. (Inventor); Vignali, John C. (Inventor); Macaulay, Philip D. (Inventor)
1994-01-01
Apparatus is provided which controls humidity in a gas. The apparatus employs a porous interface that is preferably a manifolded array of stainless steel tubes through whose porous surface water vapor can pass. One side of the porous interface is in contact with water and the opposing side is in contact with gas whose humidity is being controlled. Water vapor is emitted from the porous surface of the tubing into the gas when the gas is being humidified, and water vapor is removed from the gas through the porous surfaces when the gas is being dehumidified. The temperature of the porous interface relative to the gas temperature determines whether humidification or dehumidification is being carried out. The humidity in the gas is sensed and compared to the set point humidity. The water temperature, and consequently the porous interface temperature, are automatically controlled in response to changes in the gas humidity level above or below the set point. Any deviation from the set point humidity is thus corrected.
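The feedback described above, comparing sensed humidity to the set point and adjusting the water (and hence interface) temperature in response, can be sketched as a simple proportional controller; the gain value is purely illustrative and not from the patent:

```python
def humidity_control_step(sensed_humidity, set_point, water_temp, gain=0.5):
    """One proportional-control update: raise the water (and hence porous
    interface) temperature when the gas is drier than the set point, which
    promotes evaporation; lower it when the gas is too humid, which
    promotes condensation."""
    error = set_point - sensed_humidity
    return water_temp + gain * error
```

At 40% humidity against a 50% set point with 20 °C water, one step raises the water temperature to 25 °C.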
Precise determination of time to reach viral load set point after acute HIV-1 infection.
Huang, Xiaojie; Chen, Hui; Li, Wei; Li, Haiying; Jin, Xia; Perelson, Alan S; Fox, Zoe; Zhang, Tong; Xu, Xiaoning; Wu, Hao
2012-12-01
The HIV viral load set point has long been used as a prognostic marker of disease progression and more recently as an end-point parameter in HIV vaccine clinical trials. The definition of set point, however, is variable. Moreover, the earliest time at which the set point is reached after the onset of infection has never been clearly defined. In this study, we obtained sequential plasma viral load data from 60 acutely HIV-infected Chinese patients among a cohort of men who have sex with men, mathematically determined viral load set point levels, and estimated the time to attain the set point after infection. We also compared the results derived from our models with those obtained from an empirical method. With a novel, uncomplicated mathematical model, we discovered that the time to reach the set point may vary from 21 to 119 days depending on the patient's initial viral load trajectory. The viral load set points were 4.28 ± 0.86 and 4.25 ± 0.87 log10 copies per milliliter (P = 0.08), respectively, as determined by our model and an empirical method, suggesting an excellent agreement between the old and new methods. We provide a novel method to estimate the viral load set point at the very early stage of HIV infection. Application of this model can accurately and reliably determine the set point, thus providing a new tool for physicians to better monitor early intervention strategies in acutely infected patients and for scientists to rationally design preventative vaccine studies.
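The abstract does not reproduce the authors' model; as a hypothetical illustration only, a plateau can be declared once a sliding-window slope of log10 viral load flattens below a tolerance (the window size and tolerance here are assumptions, not study parameters):

```python
def window_slope(xs, ys):
    """Least-squares slope over a short window."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

def time_to_set_point(times, log_vl, window=3, slope_tol=0.05):
    """Hypothetical heuristic (not the authors' model): report the first
    time at which a sliding-window slope of log10 viral load flattens
    below slope_tol, i.e. the trajectory has reached its plateau."""
    for i in range(len(times) - window + 1):
        if abs(window_slope(times[i:i + window], log_vl[i:i + window])) < slope_tol:
            return times[i + window - 1]
    return None
```

For a synthetic trajectory declining from 7 to a 4.25 log10 plateau over a week, the heuristic reports day 6 as the set point time.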
Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F
2015-01-01
Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets, and the principal modes of variations in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles in heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.
Atomic and vibrational origins of mechanical toughness in bioactive cement during setting
Tian, Kun V.; Yang, Bin; Yue, Yuanzheng; Bowron, Daniel T.; Mayers, Jerry; Donnan, Robert S.; Dobó-Nagy, Csaba; Nicholson, John W.; Fang, De-Cai; Greer, A. Lindsay; Chass, Gregory A.; Greaves, G. Neville
2015-01-01
Bioactive glass ionomer cements (GICs) have been in widespread use for ∼40 years in dentistry and medicine. However, these composites fall short of the toughness needed for permanent implants. A significant impediment to improvement has been the requisite use of conventional destructive mechanical testing, which is necessarily retrospective. Here we show quantitatively, through the novel use of calorimetry, terahertz (THz) spectroscopy and neutron scattering, how GIC's developing fracture toughness during setting is related to interfacial THz dynamics, changing atomic cohesion and fluctuating interfacial configurations. Contrary to convention, we find setting is non-monotonic, characterized by abrupt features not previously detected, including a glass–polymer coupling point, an early setting point, where decreasing toughness unexpectedly recovers, followed by stress-induced weakening of interfaces. Subsequently, toughness declines asymptotically to long-term fracture test values. We expect the insight afforded by these in situ non-destructive techniques will assist in raising understanding of the setting mechanisms and associated dynamics of cementitious materials. PMID:26548704
NASA Astrophysics Data System (ADS)
Alpers, Andreas; Gritzmann, Peter
2018-03-01
We consider the problem of reconstructing the paths of a set of points over time, where, at each of a finite set of moments in time the current positions of points in space are only accessible through some small number of their x-rays. This particular particle tracking problem, with applications, e.g. in plasma physics, is the basic problem in dynamic discrete tomography. We introduce and analyze various different algorithmic models. In particular, we determine the computational complexity of the problem (and various of its relatives) and derive algorithms that can be used in practice. As a byproduct we provide new results on constrained variants of min-cost flow and matching problems.
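For a handful of particles, the frame-to-frame correspondence at the core of the tracking problem can be illustrated by brute-force minimum-cost matching; realistic instances use the min-cost flow and matching algorithms the paper analyzes:

```python
from itertools import permutations

def match_points(prev_pts, new_pts):
    """Brute-force minimum-cost matching of tracked points between two
    time steps (a toy stand-in for the paper's min-cost-flow formulation);
    cost is the sum of squared displacements."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(new_pts))):
        cost = sum((prev_pts[i][0] - new_pts[j][0]) ** 2 +
                   (prev_pts[i][1] - new_pts[j][1]) ** 2
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return list(best)
```

`match_points([(0, 0), (5, 5)], [(5.1, 5.0), (0.2, 0.1)])` pairs each old point with its nearby new position, returning `[1, 0]`.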
Stability of Poisson Equilibria and Hamiltonian Relative Equilibria by Energy Methods
NASA Astrophysics Data System (ADS)
Patrick, George W.; Roberts, Mark; Wulff, Claudia
2004-12-01
We develop a general stability theory for equilibrium points of Poisson dynamical systems and relative equilibria of Hamiltonian systems with symmetries, including several generalisations of the Energy-Casimir and Energy-Momentum Methods. Using a topological generalisation of Lyapunov’s result that an extremal critical point of a conserved quantity is stable, we show that a Poisson equilibrium is stable if it is an isolated point in the intersection of a level set of a conserved function with a subset of the phase space that is related to the topology of the symplectic leaf space at that point. This criterion is applied to generalise the energy-momentum method to Hamiltonian systems which are invariant under non-compact symmetry groups for which the coadjoint orbit space is not Hausdorff. We also show that a G-stable relative equilibrium satisfies the stronger condition of being A-stable, where A is a specific group-theoretically defined subset of G which contains the momentum isotropy subgroup of the relative equilibrium. The results are illustrated by an application to the stability of a rigid body in an ideal irrotational fluid.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zapata, Francisco; Kreinovich, Vladik; Joslyn, Cliff A.
2013-08-01
To make a decision, we need to compare the values of quantities. In many practical situations, we know the values with interval uncertainty. In such situations, we need to compare intervals. Allen's algebra describes all possible relations between intervals on the real line, and ordering relations between such intervals are well studied. In this paper, we extend this description to intervals in an arbitrary partially ordered set (poset). In particular, we explicitly describe ordering relations between intervals that generalize relations between points. As auxiliary results, we provide a logical interpretation of the relation between intervals, and extend the results about interval graphs to intervals over posets.
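A toy sketch of interval comparison over a poset, here using the divisibility order on positive integers; the relation names are an illustrative Allen-style subset, not the paper's full classification:

```python
def interval_relation(i1, i2, leq):
    """Classify two intervals over a poset whose order is given by the
    predicate leq(a, b), meaning 'a precedes-or-equals b'. The relation
    names are an illustrative Allen-style subset, not a full taxonomy."""
    a, b = i1
    c, d = i2
    if leq(b, c):
        return "precedes"
    if leq(d, a):
        return "preceded-by"
    if leq(c, a) and leq(b, d):
        return "during"
    return "overlapping-or-incomparable"
```

Under divisibility, the interval (2, 4) precedes (8, 16) because 4 divides 8, while (4, 8) lies during (2, 16).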
Sullivan, Patrick S.; Fideli, Ulgen; Wall, Kristin M.; Chomba, Elwyn; Vwalika, Cheswa; Kilembe, William; Tichacek, Amanda; Luisi, Nicole; Mulenga, Joseph; Hunter, Eric; Boeras, Debrah; Allen, Susan
2013-01-01
Objective To describe symptoms, physical exam findings, and set point viral load associated with acute HIV seroconversion in a heterosexual cohort of discordant couples in Zambia. Design We followed HIV serodiscordant couples in Lusaka, Zambia from 1995–2009 with HIV testing of negative partners and symptom inventories 3-monthly, and physical examinations annually. Methods We compared prevalence of self-reported or treated symptoms (malaria syndrome, chronic diarrhea, asthenia, night sweats, and oral candidiasis) and annual physical exam [PE] findings (unilateral or bilateral neck, axillary, or inguinal adenopathy; and dermatosis) in seroconverting versus HIV-negative or HIV-positive intervals, controlling for repeated observations, age, and sex. A composite score composed of significant symptoms and PE findings predictive of seroconversion versus HIV-negative intervals was constructed. We modeled the relationship between number of symptoms and PE findings at seroconversion and log set-point viral load [VL] using linear regression. Results 2,388 HIV-negative partners were followed for a median of 18 months; 429 seroconversions occurred. Neither symptoms nor PE findings were reported for most seroconverters. Seroconversion was significantly associated with malaria syndrome among non-diarrheic patients (adjusted odds ratio [aOR]=4.0), night sweats (aOR=1.4), and bilateral axillary (aOR = 1.6), inguinal (aOR=2.2), and neck (aOR=2.2) adenopathy relative to HIV-negative intervals. Median number of symptoms was positively associated with set-point VL (p<0.001). Conclusions Though most acute and early infections were asymptomatic, malaria syndrome was more common and more severe during seroconversion compared with HIV-negative and HIV-positive intervals. When present, symptoms and physical exam findings were non-specific and associated with higher set point viremia. PMID:22089380
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for probability density. Normally, a set of 29 flaws of the same size within some tolerance is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. The flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e. the 90% probability flaw size, to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaws and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on the minimum required PPD, the maximum allowable POF, the flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
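The binomial arithmetic behind the 29-flaw demonstration is easy to verify: requiring 29 detections out of 29 means a technique whose true POD is only 0.90 passes with probability 0.9^29 ≈ 0.047, which is what gives the point estimate its 95% confidence.

```python
from math import comb

def prob_pass_demo(pod, n=29, allowed_misses=0):
    """Probability of passing a binomial POD demonstration: at most
    allowed_misses missed detections among n independent flaws, each
    detected with probability pod."""
    return sum(comb(n, k) * (1.0 - pod) ** k * pod ** (n - k)
               for k in range(allowed_misses + 1))
```

`prob_pass_demo(0.90)` is below 0.05, so passing a 29-of-29 demonstration rules out POD ≤ 0.90 at the 95% confidence level.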
NASA Astrophysics Data System (ADS)
Lewis, Debra
2013-05-01
Relative equilibria of Lagrangian and Hamiltonian systems with symmetry are critical points of appropriate scalar functions parametrized by the Lie algebra (or its dual) of the symmetry group. Setting aside the structures - symplectic, Poisson, or variational - generating dynamical systems from such functions highlights the common features of their construction and analysis, and supports the construction of analogous functions in non-Hamiltonian settings. If the symmetry group is nonabelian, the functions are invariant only with respect to the isotropy subgroup of the given parameter value. Replacing the parametrized family of functions with a single function on the product manifold and extending the action using the (co)adjoint action on the algebra or its dual yields a fully invariant function. An invariant map can be used to reverse the usual perspective: rather than selecting a parametrized family of functions and finding their critical points, conditions under which functions will be critical on specific orbits, typically distinguished by isotropy class, can be derived. This strategy is illustrated using several well-known mechanical systems - the Lagrange top, the double spherical pendulum, the free rigid body, and the Riemann ellipsoids - and generalizations of these systems.
Survival analysis with error-prone time-varying covariates: a risk set calibration approach
Liao, Xiaomei; Zucker, David M.; Li, Yi; Spiegelman, Donna
2010-01-01
Summary Occupational, environmental, and nutritional epidemiologists are often interested in estimating the prospective effect of time-varying exposure variables, such as cumulative exposure or cumulative updated average exposure, in relation to chronic disease endpoints such as cancer incidence and mortality. From exposure validation studies, it is apparent that many of the variables of interest are measured with moderate to substantial error. Although the ordinary regression calibration approach is approximately valid and efficient for measurement error correction of relative risk estimates from the Cox model with time-independent point exposures when the disease is rare, it is not adaptable for use with time-varying exposures. By re-calibrating the measurement error model within each risk set, a risk set regression calibration (RRC) method is proposed for this setting. An algorithm for a bias-corrected point estimate of the relative risk using the RRC approach is presented, followed by the derivation of an estimate of its variance, resulting in a sandwich estimator. Emphasis is on methods applicable to the main study/external validation study design, which arises in important applications. Simulation studies under several assumptions about the error model were carried out, which demonstrated the validity and efficiency of the method in finite samples. The method was applied to a study of diet and cancer from Harvard's Health Professionals Follow-up Study (HPFS). PMID:20486928
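Ordinary regression calibration, which the proposed method re-applies within each risk set, replaces an error-prone measurement by its conditional expectation given the observed value; a one-dimensional sketch under the classical additive error model (names and values are illustrative):

```python
def calibrate(w, reliability, mean_w):
    """Ordinary regression calibration under the classical additive error
    model: E[X | W = w] = mean + lambda * (w - mean), where lambda is the
    reliability ratio var(X) / var(W)."""
    return mean_w + reliability * (w - mean_w)
```

With reliability 0.5 and mean 4, an observed value of 10 calibrates to 7, shrinking halfway toward the mean.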
Dynamic connectivity regression: Determining state-related changes in brain connectivity
Cribben, Ivor; Haraldsdottir, Ragnheidur; Atlas, Lauren Y.; Wager, Tor D.; Lindquist, Martin A.
2014-01-01
Most statistical analyses of fMRI data assume that the nature, timing and duration of the psychological processes being studied are known. However, often it is hard to specify this information a priori. In this work we introduce a data-driven technique for partitioning the experimental time course into distinct temporal intervals with different multivariate functional connectivity patterns between a set of regions of interest (ROIs). The technique, called Dynamic Connectivity Regression (DCR), detects temporal change points in functional connectivity and estimates a graph, or set of relationships between ROIs, for data in the temporal partition that falls between pairs of change points. Hence, DCR allows for estimation of both the time of change in connectivity and the connectivity graph for each partition, without requiring prior knowledge of the nature of the experimental design. Permutation and bootstrapping methods are used to perform inference on the change points. The method is applied to various simulated data sets as well as to an fMRI data set from a study (N=26) of a state anxiety induction using a socially evaluative threat challenge. The results illustrate the method’s ability to observe how the networks between different brain regions changed with subjects’ emotional state. PMID:22484408
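DCR itself estimates a graph within each partition; as a much simpler stand-in, a single connectivity change point can be located by the split that maximizes the difference in correlation between the two resulting segments:

```python
def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def best_change_point(xs, ys, min_seg=5):
    """Toy single-change-point search (a stand-in for DCR, not DCR itself):
    choose the split maximizing the absolute difference in correlation
    between the two resulting segments."""
    best_t, best_gap = None, -1.0
    for t in range(min_seg, len(xs) - min_seg):
        gap = abs(correlation(xs[:t], ys[:t]) - correlation(xs[t:], ys[t:]))
        if gap > best_gap:
            best_t, best_gap = t, gap
    return best_t
```

On a series whose correlation flips sign at index 10, the search recovers the change point exactly; DCR adds graph estimation per partition and permutation/bootstrap inference on the change points.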
NASA Astrophysics Data System (ADS)
Hurter, F.; Maier, O.
2013-11-01
We reconstruct atmospheric wet refractivity profiles for the western part of Switzerland with a least-squares collocation approach from data sets of (a) zenith path delays that are a byproduct of the GPS (global positioning system) processing, (b) ground meteorological measurements, (c) wet refractivity profiles from radio occultations whose tangent points lie within the study area, and (d) radiosonde measurements. Wet refractivity is a parameter partly describing the propagation of electromagnetic waves and depends on the atmospheric parameters temperature and water vapour pressure. In addition, we have measurements of a lower V-band microwave radiometer at Payerne. It delivers temperature profiles at high temporal resolution, especially in the range from ground to 3000 m a.g.l., though vertical information content decreases with height. The temperature profiles together with the collocated wet refractivity profiles provide near-continuous dew point temperature or relative humidity profiles at Payerne for the study period from 2009 to 2011. In the validation of the humidity profiles, we adopt a two-step procedure. We first investigate the reconstruction quality of the wet refractivity profiles at the location of Payerne by comparing them to wet refractivity profiles computed from radiosonde profiles available for that location. We also assess the individual contributions of the data sets to the reconstruction quality and demonstrate a clear benefit from the data combination. Secondly, the accuracy of the conversion from wet refractivity to dew point temperature and relative humidity profiles with the radiometer temperature profiles is examined, comparing them also to radiosonde profiles. 
For the least-squares collocation solution combining GPS and ground meteorological measurements, we achieve the following error figures with respect to the radiosonde reference: the maximum median offset of the relative refractivity error is -16% and quartiles are 5% to 40% for the lower troposphere. We further added 189 radio occultations that met our requirements. They mostly improved the accuracy in the upper troposphere: maximum median offsets decreased from 120% relative error to 44% at 8 km height. Dew point temperature profiles after the conversion with radiometer temperatures compare to radiosonde profiles as follows: absolute dew point temperature errors in the lower troposphere have a maximum median offset of -2 K and maximum quartiles of 4.5 K. For relative humidity, we get a maximum mean offset of 7.3%, with standard deviations of 12-20%. The methodology presented allows us to reconstruct humidity profiles at any location where temperature profiles are available but no atmospheric humidity measurements other than GPS. Additional data sets of wet refractivity are shown to be easily integrated into the framework and strongly aid the reconstruction. Since the data sets used are all operational and available in near-real time, we envisage the methodology of this paper as a tool for nowcasting clouds and rain and for understanding processes in the boundary layer and at its top.
A Semiparametric Change-Point Regression Model for Longitudinal Observations.
Xing, Haipeng; Ying, Zhiliang
2012-12-01
Many longitudinal studies involve relating an outcome process to a set of possibly time-varying covariates, giving rise to the usual regression models for longitudinal data. When the purpose of the study is to investigate the covariate effects when the experimental environment undergoes abrupt changes, or to locate the periods with different levels of covariate effects, a simple and easy-to-interpret approach is to introduce change-points in the regression coefficients. In this connection, we propose a semiparametric change-point regression model in which the error process (stochastic component) is nonparametric, the baseline mean function (functional part) is completely unspecified, the observation times are allowed to be subject-specific, and the number, locations and magnitudes of the change-points are unknown and need to be estimated. We further develop an estimation procedure that combines recent advances in semiparametric analysis based on counting-process arguments with multiple change-point inference, and discuss its large-sample properties, including consistency and asymptotic normality, under suitable regularity conditions. Simulation results show that the proposed methods work well under a variety of scenarios. An application to a real data set is also given.
Duan, Fenghai; Xu, Ye
2017-01-01
To analyze a microarray experiment to identify the genes with expressions varying after the diagnosis of breast cancer. A total of 44 928 probe sets in an Affymetrix microarray data set publicly available on Gene Expression Omnibus from 249 patients with breast cancer were analyzed by nonparametric multivariate adaptive splines. The identified genes with turning points were then grouped by K-means clustering, and their network relationships were subsequently analyzed by the Ingenuity Pathway Analysis. In total, 1640 probe sets (genes) were reliably identified as having turning points along the age at diagnosis in their expression profiles, of which 927 were expressed lower after the turning points and 713 were expressed higher. K-means clustered them into 3 groups with turning points centered at 54, 62.5, and 72, respectively. The pathway analysis showed that the identified genes were actively involved in various cancer-related functions or networks. In this article, we applied the nonparametric multivariate adaptive splines method to publicly available gene expression data and successfully identified genes with expressions varying before and after breast cancer diagnosis.
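The grouping step above is ordinary K-means on the scalar turning-point ages. A minimal one-dimensional sketch of Lloyd's algorithm is given below; the data in the usage example are synthetic, chosen only to mirror the reported cluster centers (54, 62.5 and 72), and the quantile-based seeding is an assumption of this sketch, not the paper's procedure.

```python
def kmeans_1d(values, k, iters=100):
    """Lloyd's algorithm on scalars; centers seeded at spread quantiles."""
    vs = sorted(values)
    centers = [vs[(len(vs) * (2 * j + 1)) // (2 * k)] for j in range(k)]
    for _ in range(iters):
        # assignment step: nearest center
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[idx].append(v)
        # update step: cluster means (keep old center for empty clusters)
        new = [sum(grp) / len(grp) if grp else centers[j]
               for j, grp in enumerate(clusters)]
        if new == centers:
            break
        centers = new
    return sorted(centers)
```

For ages clustered around three turning points, the returned centers approximate the group means.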
ERIC Educational Resources Information Center
Nevo, Dorit; McClean, Ron; Nevo, Saggi
2010-01-01
This paper discusses the relative advantage offered by online Students' Evaluations of Teaching (SET) and describes a study conducted at a Canadian university to identify critical success factors of online evaluations from students' point of view. Factors identified as important by the students include anonymity, ease of use (of both SET survey…
Zaylaa, Amira; Charara, Jamal; Girault, Jean-Marc
2015-08-01
The analysis of biomedical signals demonstrating complexity through recurrence plots is challenging. Quantification of recurrences is often biased by sojourn points that hide dynamic transitions. To overcome this problem, time series have previously been embedded at high dimensions. However, the elimination of sojourn points and the rate of detection have not been quantified, nor has the enhancement of transition detection been investigated. This paper reports our ongoing efforts to improve the detection of dynamic transitions from logistic maps and fetal hearts by reducing sojourn points. Three signal-based recurrence plots were developed, i.e. embedded with specific settings, derivative-based and m-time pattern. Determinism, cross-determinism and the percentage of reduced sojourn points were computed to detect transitions. For logistic maps, an increase of 50% and 34.3% in detection sensitivity over alternatives was achieved by the m-time pattern and the embedded recurrence plots with specific settings, respectively, with 100% specificity. For fetal heart rates, embedded recurrence plots with specific settings provided the best performance, followed by the derivative-based recurrence plot, then the unembedded recurrence plot using the determinism parameter. The relative errors between healthy and distressed fetuses were 153%, 95% and 91%, respectively. More than 50% of sojourn points were eliminated, allowing better detection of heart transitions triggered by gaseous exchange factors. This could be significant in improving the diagnosis of fetal state. Copyright © 2014 Elsevier Ltd. All rights reserved.
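The determinism measure discussed above comes from recurrence quantification analysis. Below is a minimal sketch of an unembedded recurrence plot and a determinism (DET) computation counting recurrent points on diagonal lines; published RQA definitions vary in details such as normalization and the minimum line length, so this is an illustrative simplification rather than the paper's specific variants.

```python
def recurrence_matrix(series, eps):
    """R[i][j] = 1 when |x_i - x_j| <= eps (unembedded recurrence plot)."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) <= eps else 0
             for j in range(n)] for i in range(n)]

def determinism(R, lmin=2):
    """Fraction of off-diagonal recurrent points lying on diagonal
    lines of length >= lmin (upper triangle; R is symmetric)."""
    n = len(R)
    total, on_lines = 0, 0
    for d in range(1, n):           # diagonals above the main one
        run = 0
        for i in range(n - d):
            if R[i][i + d]:
                total += 1
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
        if run >= lmin:
            on_lines += run
    return on_lines / total if total else 0.0
```

A strictly periodic series yields DET = 1, while a monotonically drifting series produces no off-diagonal recurrences at all.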
Chan, Poh Yin; Tong, Chi Ming; Durrant, Marcus C
2011-09-01
An empirical method for estimation of the boiling points of organic molecules based on density functional theory (DFT) calculations with polarized continuum model (PCM) solvent corrections has been developed. The boiling points are calculated as the sum of three contributions. The first term is calculated directly from the structural formula of the molecule, and is related to its effective surface area. The second is a measure of the electronic interactions between molecules, based on the DFT-PCM solvation energy, and the third is employed only for planar aromatic molecules. The method is applicable to a very diverse range of organic molecules, with normal boiling points in the range of -50 to 500 °C, and includes ten different elements (C, H, Br, Cl, F, N, O, P, S and Si). Plots of observed versus calculated boiling points gave R²=0.980 for a training set of 317 molecules, and R²=0.979 for a test set of 74 molecules. The role of intramolecular hydrogen bonding in lowering the boiling points of certain molecules is quantitatively discussed.
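Calibrating an additive model of this kind amounts to a least-squares fit of the contribution coefficients against known boiling points. The sketch below fits a two-term model y ≈ c1·x1 + c2·x2 by solving the 2×2 normal equations; the descriptors and data are synthetic stand-ins (the paper's actual descriptors, effective surface area and DFT-PCM solvation energy, require quantum chemistry calculations not reproduced here).

```python
def fit_two_term(xs1, xs2, ys):
    """Least-squares fit y ~ c1*x1 + c2*x2 via the 2x2 normal equations."""
    s11 = sum(a * a for a in xs1)
    s12 = sum(a * b for a, b in zip(xs1, xs2))
    s22 = sum(b * b for b in xs2)
    t1 = sum(a * y for a, y in zip(xs1, ys))
    t2 = sum(b * y for b, y in zip(xs2, ys))
    det = s11 * s22 - s12 * s12          # assumed nonzero (descriptors not collinear)
    return (t1 * s22 - t2 * s12) / det, (s11 * t2 - s12 * t1) / det
```

With exact synthetic data the true coefficients are recovered; with real training data the fit would instead minimize the residual, and quality would be reported as R² as in the abstract.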
Point process analyses of variations in smoking rate by setting, mood, gender, and dependence
Shiffman, Saul; Rathbun, Stephen L.
2010-01-01
The immediate emotional and situational antecedents of ad libitum smoking are still not well understood. We re-analyzed data from Ecological Momentary Assessment using novel point-process analyses, to assess how craving, mood, and social setting influence smoking rate, as well as assessing the moderating effects of gender and nicotine dependence. 304 smokers recorded craving, mood, and social setting using electronic diaries when smoking and at random nonsmoking times over 16 days of smoking. Point-process analysis, which makes use of the known random sampling scheme for momentary variables, examined main effects of setting and interactions with gender and dependence. Increased craving was associated with higher rates of smoking, particularly among women. Negative affect was not associated with smoking rate, even in interaction with arousal, but restlessness was associated with substantially higher smoking rates. Women's smoking tended to be less affected by negative affect. Nicotine dependence had little moderating effect on situational influences. Smoking rates were higher when smokers were alone or with others smoking, and smoking restrictions reduced smoking rates. However, the presence of others smoking undermined the effects of restrictions. The more sensitive point-process analyses confirmed earlier findings, including the surprising conclusion that negative affect by itself was not related to smoking rates. Contrary to hypothesis, men's and not women's smoking was influenced by negative affect. Both smoking restrictions and the presence of others who are not smoking suppress smoking, but others’ smoking undermines the effects of restrictions. Point-process analyses of EMA data can bring out even small influences on smoking rate. PMID:21480683
Do gravid female Anolis nebulosus thermoregulate differently than males and non-gravid females?
Woolrich-Piña, Guillermo A; Smith, Geoffrey R; Lemos-Espinal, Julio A; Ramírez-Silva, Juan Pablo
2015-08-01
In lizards, the role of gravid oviparous females in controlling the temperature experienced by developing embryos prior to oviposition has been rarely examined. In particular, relatively little is known about the effect of gravidity on the thermal ecology of female Anolis lizards. Here we examine the thermal ecology of Anolis nebulosus from Nayarit, Mexico, with a particular goal of comparing the thermal ecology of gravid females to that of non-gravid females and males. The thermal efficiency (E) of gravid female A. nebulosus (E=0.782) was higher than in males (E=0.464), and to a lesser extent, non-gravid females (E=0.637), despite no significant differences observed in body, air, operative, or set point temperatures among males, gravid females, and non-gravid females. Gravid females had smaller differences between body temperatures and set point temperatures (db), but did not differ in the difference between operative temperature and set point temperature (de). Gravid females used sun-shade and shaded microhabitats proportionately more than males and non-gravid females, and rarely used sunny microhabitats. Our results suggest that gravid A. nebulosus are using a different and more efficient thermoregulatory strategy than other adults in the population. Such efficient thermoregulation is possibly related to females attempting to provide a thermal environment that is conducive to the development of embryos in eggs prior to oviposition.
Beran, Michael J; James, Brielle T; Whitham, Will; Parrish, Audrey E
2016-10-01
The reverse-reward contingency task presents 2 food sets to an animal, which is required to choose the smaller of the 2 sets in order to receive the larger food set. Intriguingly, the majority of species tested on the reverse-reward task fail to learn this contingency in the absence of large trial counts, correction trials, and punishment techniques. The unique difficulty of this seemingly simple task likely reflects a failure of the inhibitory control that is required to point toward a smaller and less desirable reward rather than a larger and more desirable reward. This failure by chimpanzees and other primates to pass the reverse-reward task is striking given the self-control they exhibit in a variety of other paradigms. For example, chimpanzees have consistently demonstrated a high capacity for delay of gratification in order to maximize accumulating food rewards, in which foods are added item-by-item to a growing set until the subject consumes the rewards. To study the mechanisms underlying success in the accumulation task and failure in the reverse-reward task, we presented chimpanzees with several combinations of these 2 tasks to determine when chimpanzees might succeed in pointing to smaller food sets over larger food sets and how the nature of the task might determine the animals' success or failure. Across experiments, 3 chimpanzees repeatedly failed to solve the reverse-reward task, whereas they accumulated nearly all food items across all instances of the accumulation self-control task, even when they had to point to small amounts of food to accumulate larger amounts. These data indicate that the constraints of these 2 related but still different tasks of behavioral inhibition depend upon the animals' perceptions of the choice set, their sense of control over the contents of choice sets, and the nature of the task constraints.
Vermeer, Willemijn M; Steenhuis, Ingrid H M; Seidell, Jacob C
2009-04-01
Food portion-sizes might be a promising starting point for interventions targeting obesity. The purpose of this qualitative study was to assess how representatives of point-of-purchase settings perceived the feasibility of interventions aimed at portion-size. Semi-structured interviews were conducted with 22 representatives of various point-of-purchase settings. Constructs derived from the diffusion of innovations theory were incorporated into the interview guide. Each interview was recorded and transcribed verbatim. Data were coded and analysed with Atlas.ti 5.2 using the framework approach. According to the participants, offering a larger variety of portion-sizes had the most relative advantages, and reducing portions was the most disadvantageous. The participants also considered portion-size reduction and linear pricing of portion-sizes to be risky. Lastly, a larger variety of portion-sizes, pricing strategies and portion-size labelling were seen as the most complex interventions. In general, participants considered offering a larger variety of portion-sizes, portion-size labelling and, to a lesser extent, pricing strategies with respect to portion-sizes as most feasible to implement. Interventions aimed at portion-size were seen as innovative by most participants. Developing adequate communication strategies about portion-size interventions with both decision-makers in point-of-purchase settings and the general public is crucial for successful implementation.
Scroll bar growth on the coastal Trinity River, TX, USA
NASA Astrophysics Data System (ADS)
Mason, J.; Hassenruck-Gudipati, H. J.; Mohrig, D. C.
2017-12-01
The processes leading to the formation and growth of scroll bars remain relatively mysterious despite how often they are referenced in fluvial literature. Their definition is descriptive; they are characterized as arcuate topographic highs present on the inner banks of channel bends on meandering rivers, landward of point bars. Often, they are used as proxies for previous positions of point bars. This assumption of a one-to-one correspondence between point bars and scroll bars should be reconsidered as 1) planform curvature for scroll bars is consistently smaller than the curvature for adjacent point bars, and 2) deposition on the scroll bar is typically distinct and disconnected from the adjacent point bar deposition. Results from time-lapse airborne lidar data as well as from trenches through five separate scroll bar - point bar pairings on the Trinity River in east TX, USA, will be discussed in relation to formative scroll bar processes and their connection to point bars. On the lidar difference map, scroll bar growth appears as a strip of increased deposition flanked on both the land- and channel-ward sides by areas with no or limited deposition. Trenches perpendicular to these scrolls typically show a base of dune-scale cross stratification interpreted to be associated with a previous position of the point bar. These dune sets are overlain by sets of climbing-ripple cross-strata that form the core of the modern scroll bar and preserve a record of multiple transport directions (away from, towards, and parallel to the channel). Preliminary Trinity River grain-size analyses show that the constructional scrolls are enriched in all grain sizes less than 250 microns in diameter, while point bars are enriched in all grain sizes above this cut off. Scroll bars are hypothesized to be akin to levees along the inner banks of channels: flow expansion caused by the presence of point bars induces deposition of suspended sediment that defines the positions of the scroll bars.
Silva, Diego S; Gibson, Jennifer L; Robertson, Ann; Bensimon, Cécile M; Sahni, Sachin; Maunula, Laena; Smith, Maxwell J
2012-03-26
Pandemic influenza may exacerbate existing scarcity of life-saving medical resources. As a result, decision-makers may be faced with making tough choices about who will receive care and who will have to wait or go without. Although previous studies have explored ethical issues in priority setting from the perspective of clinicians and policymakers, there has been little investigation into how the public views priority setting during a pandemic influenza, in particular related to intensive care resources. To bridge this gap, we conducted three public town hall meetings across Canada to explore Canadians' perspectives on this ethical challenge. Town hall group discussions were digitally recorded, transcribed, and analyzed using thematic analysis. Six interrelated themes emerged from the town hall discussions related to: ethical and empirical starting points for deliberation; criteria for setting priorities; pre-crisis planning; in-crisis decision-making; the need for public deliberation and input; and participants' deliberative struggle with the ethical issues. Our findings underscore the importance of public consultation in pandemic planning for sustaining public trust in a public health emergency. Participants appreciated the empirical and ethical uncertainty of decision-making in an influenza pandemic and demonstrated nuanced ethical reasoning about priority setting of intensive care resources in an influenza pandemic. Policymakers may benefit from a better understanding of the public's empirical and ethical 'starting points' in developing effective pandemic plans.
Bar-Kochva, Irit
2011-01-01
Orthographies range from shallow orthographies with transparent grapheme-phoneme relations, to deep orthographies, in which these relations are opaque. Two forms of script transcribe the Hebrew language: the shallow pointed script (with diacritics) and the deep unpointed script (without diacritics). This study set out to examine whether the reading of these scripts evokes distinct brain activity. Preliminary results indicate distinct event-related potentials (ERPs). As an equivalent finding was absent when ERPs for non-orthographic stimuli with and without meaningless diacritics were compared, the results imply that print-specific aspects of processing account for the distinct activity elicited by the pointed and unpointed scripts.
Pointing Device Performance in Steering Tasks.
Senanayake, Ransalu; Goonetilleke, Ravindra S
2016-06-01
Use of touch-screen-based interactions is growing rapidly. Hence, knowing the maneuvering efficacy of touch screens relative to other pointing devices is of great importance in the context of graphical user interfaces. Movement time, accuracy, and user preferences of four pointing device settings were evaluated on a computer with 14 participants aged 20.1 ± 3.13 years. It was found that, depending on the difficulty of the task, the optimal settings differ for ballistic and visual control tasks. With a touch screen, resting the arm increased movement time for steering tasks. When both performance and comfort are considered, whether to use a mouse or a touch screen for human-computer interaction depends on the steering difficulty. Hence, an input device should be chosen based on the application, and should be optimized to match the graphical user interface.
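Steering-task difficulty of the kind studied here is conventionally quantified with the Accot-Zhai steering law, MT = a + b·ID, where the index of difficulty ID integrates ds/W(s) along the tunnel (so a straight tunnel of length A and constant width W gives ID = A/W). The sketch below computes a discrete ID and the predicted movement time; the regression constants a and b in the example are invented, not values from this study.

```python
def steering_id(path_lengths, widths):
    """Steering-law index of difficulty: sum of segment length / width,
    a discrete approximation of the integral of ds/W(s) along the path."""
    return sum(l / w for l, w in zip(path_lengths, widths))

def predicted_mt(a, b, idiff):
    """Accot-Zhai steering law: MT = a + b * ID, with a, b fitted
    per device from measured movement times."""
    return a + b * idiff
```

Comparing fitted b values across devices (mouse vs. touch screen) is one standard way to express the maneuvering-efficacy differences the abstract describes.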
Reconfiguration Schemes for Fault-Tolerant Processor Arrays
1992-10-15
The notion of a linear schedule is easily related to similar models and concepts used in [1]-[13] and several other works. Computations are the points of a partially ordered subset of a multidimensional integer lattice (called the index set); that is, the points of this lattice are the indices of computations. Data dependencies are represented as vectors that connect points of the lattice, and the total completion time of all computations of the algorithm is to be minimized. If a
Towards semi-automatic rock mass discontinuity orientation and set analysis from 3D point clouds
NASA Astrophysics Data System (ADS)
Guo, Jiateng; Liu, Shanjun; Zhang, Peina; Wu, Lixin; Zhou, Wenhui; Yu, Yinan
2017-06-01
Obtaining accurate information on rock mass discontinuities is important for deformation analysis and the evaluation of rock mass stability. Obtaining measurements for high and steep zones with the traditional compass method is difficult. Photogrammetry, three-dimensional (3D) laser scanning and other remote sensing methods have gradually become mainstream methods. In this study, a method based on a 3D point cloud is proposed to semi-automatically extract rock mass structural plane information. The original data are pre-treated prior to segmentation by removing outlier points. The next step is to segment the point cloud into different point subsets. Various parameters, such as the normal vector, dip direction and dip, can be calculated for each point subset after obtaining the equation of the best-fit plane for the relevant point subset. A cluster analysis (in which a point subset that satisfies certain conditions forms a cluster) is performed on the normal vectors by introducing the firefly algorithm (FA) and the fuzzy c-means (FCM) algorithm. Finally, clusters that belong to the same discontinuity sets are merged and coloured for visualization purposes. A prototype system is developed based on this method to extract the points of rock discontinuities from a 3D point cloud. A comparison with existing software shows that this method is feasible. This method can provide a reference for rock mechanics, 3D geological modelling and other related fields.
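The per-subset step described above (best-fit plane, then orientation parameters) can be sketched in a few lines: fit z = ax + by + c by least squares and take the dip angle from the plane normal (a, b, -1). This is a minimal illustration only; the paper's pipeline additionally includes outlier removal, segmentation, and FA/FCM clustering of the normals, none of which are reproduced here, and the solver below is a plain Cramer's-rule solve for the 3×3 normal equations.

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_plane_dip(points):
    """Least-squares plane z = a*x + b*y + c through 3D points; returns
    the dip angle in degrees (angle of the normal (a, b, -1) from vertical)."""
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points); n = len(points)
    sxx = sum(p[0] * p[0] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points)
    syz = sum(p[1] * p[2] for p in points)
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    r = [sxz, syz, sz]
    d = det3(A)                      # assumed nonzero (points not collinear)
    def cramer(i):
        m = [row[:] for row in A]
        for j in range(3):
            m[j][i] = r[j]
        return det3(m) / d
    a, b = cramer(0), cramer(1)
    return math.degrees(math.atan(math.hypot(a, b)))
```

For points lying exactly on z = 0.5x, the recovered dip is atan(0.5), roughly 26.6 degrees; a horizontal plane gives a dip of zero.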
Mismatch removal via coherent spatial relations
NASA Astrophysics Data System (ADS)
Chen, Jun; Ma, Jiayi; Yang, Changcai; Tian, Jinwen
2014-07-01
We propose a method for removing mismatches from the given putative point correspondences in image pairs based on "coherent spatial relations." Under the Bayesian framework, we formulate our approach as a maximum likelihood problem and solve a coherent spatial relation between the putative point correspondences using an expectation-maximization (EM) algorithm. Our approach associates each point correspondence with a latent variable indicating it as being either an inlier or an outlier, and alternatively estimates the inlier set and recovers the coherent spatial relation. It can handle not only the case of image pairs with rigid motions but also the case of image pairs with nonrigid motions. To parameterize the coherent spatial relation, we choose two-view geometry and thin-plate spline as models for rigid and nonrigid cases, respectively. The mismatches could be successfully removed via the coherent spatial relations after the EM algorithm converges. The quantitative results on various experimental data demonstrate that our method outperforms many state-of-the-art methods, it is not affected by low initial correct match percentages, and is robust to most geometric transformations including a large viewing angle, image rotation, and affine transformation.
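The latent inlier/outlier formulation above is the classical EM mixture setup. As a heavily simplified one-dimensional illustration (the paper works in 2-D with two-view geometry or thin-plate splines as the spatial model), the sketch below runs EM on scalar match residuals under a zero-mean Gaussian (inliers) plus uniform (outliers) mixture and returns the posterior inlier probability of each correspondence. The initialization values and the uniform support are assumptions of this sketch.

```python
import math

def em_inlier_posteriors(residuals, span=10.0, iters=50):
    """EM for a zero-mean Gaussian (inliers) + uniform-on-[0, span]
    (outliers) mixture over match residuals; returns P(inlier | r_i)."""
    gamma, var = 0.5, 1.0                  # initial mixing weight and variance
    unif = 1.0 / span                      # outlier density
    post = [0.5] * len(residuals)
    for _ in range(iters):
        # E-step: responsibility of the inlier component for each residual
        post = []
        for r in residuals:
            g = math.exp(-r * r / (2 * var)) / math.sqrt(2 * math.pi * var)
            post.append(gamma * g / (gamma * g + (1 - gamma) * unif))
        # M-step: update mixing weight and inlier variance
        w = sum(post)
        gamma = w / len(residuals)
        var = max(sum(p * r * r for p, r in zip(post, residuals)) / w, 1e-6)
    return post
```

After convergence, thresholding the posteriors (e.g. at 0.5) recovers the inlier set, which is the analogue of the mismatch-removal decision in the paper.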
Performance analysis of grazing incidence imaging systems. [X ray telescope aberrations
NASA Technical Reports Server (NTRS)
Winkler, C. E.; Korsch, D.
1977-01-01
An exact expression relating the coordinates of a point on the incident ray, a point of reflection from an arbitrary surface, and a point on the reflected ray is derived. The exact relation is then specialized for the case of grazing incidence, and first order and third order systematic analyses are carried out for a single reflective surface and then for a combination of two surfaces. The third order treatment yields a complete set of primary aberrations for single element and two element systems. The importance of a judicious choice for a coordinate system in showing field curvature to clearly be the predominant aberration for a two element system is discussed. The validity of the theory is verified through comparisons with the exact ray trace results for the case of the telescope.
Thermal diffusivity of UO2 up to the melting point
NASA Astrophysics Data System (ADS)
Vlahovic, L.; Staicu, D.; Küst, A.; Konings, R. J. M.
2018-02-01
The thermal diffusivity of uranium dioxide was measured from 500 to 3060 K with two different set-ups, both based on the laser-flash technique. Above 1600 K the measurements were performed with an advanced laser-flash technique, which was slightly improved in comparison with a former work. In the temperature range 500-2000 K the thermal diffusivity decreases, then remains relatively constant up to 2700 K, and tends to increase on approaching the melting point. Measurements of the thermal diffusivity in the vicinity of the melting point are possible under certain conditions, which are discussed in this paper.
Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams
NASA Astrophysics Data System (ADS)
Zhong, Xu; Kealy, Allison; Duckham, Matt
2016-05-01
Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that these two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
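The baseline being accelerated here is the standard ordinary Kriging solve: an (n+1)×(n+1) linear system whose extra row enforces that the weights sum to one via a Lagrange multiplier, which is where the O(n³) cost comes from. A minimal sketch follows, using a simple linear variogram by default; this is the textbook formulation, not the paper's incremental or recursive variants.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def ordinary_kriging(pts, vals, q, variogram=lambda h: h):
    """Ordinary Kriging estimate at location q from 2D source points pts.
    Solves the (n+1)-system with a Lagrange multiplier enforcing that
    the weights sum to one."""
    n = len(pts)
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    A = [[variogram(dist(pts[i], pts[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [variogram(dist(p, q)) for p in pts] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * v for wi, v in zip(w, vals))
```

Ordinary Kriging is an exact interpolator: querying at a source location returns that source's value, which makes a convenient sanity check. The incremental and recursive strategies of the paper reuse work across iterations of exactly this solve when the source locations repeat.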
Unified Pairwise Spatial Relations: An Application to Graphical Symbol Retrieval
NASA Astrophysics Data System (ADS)
Santosh, K. C.; Wendling, Laurent; Lamiroy, Bart
In this paper, we present a novel unifying concept of pairwise spatial relations. We develop two-way directional relations with respect to a unique point set, based on the topology of the studied objects, thus avoiding problems related to erroneous choices of reference objects while preserving symmetry. The method is robust to any type of image configuration since the directional relations are topologically guided. A prototype for automatic graphical symbol retrieval is presented in order to establish its expressiveness.
ERIC Educational Resources Information Center
Calvert, Carol Elaine
2014-01-01
This case study relates to distance learning students on open access courses. It demonstrates the use of predictive analytics to generate a model of the probabilities of success and retention at different points, or milestones, in a student journey. A core set of explanatory variables has been established and their varying relative importance at…
Large-scale urban point cloud labeling and reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Liqiang; Li, Zhuqiang; Li, Anjian; Liu, Fangyu
2018-04-01
The large number of object categories and the many overlapping or closely neighboring objects in large-scale urban scenes pose great challenges for point cloud classification. In this paper, a novel framework is proposed for the classification and reconstruction of airborne laser scanning point cloud data. To label point clouds, we present a rectified linear units neural network named ReLu-NN, in which rectified linear units (ReLu) rather than the traditional sigmoid are taken as the activation function in order to speed up convergence. Since the features of the point cloud are sparse, we reduce the number of active neurons through dropout to avoid over-fitting during training. The set of feature descriptors for each 3D point is encoded through self-taught learning, forming a discriminative feature representation that is taken as the input of the ReLu-NN. The segmented building points are consolidated through an edge-aware point set resampling algorithm, and then reconstructed into 3D lightweight models using the 2.5D contouring method (Zhou and Neumann, 2010). Compared with deep learning approaches, the ReLu-NN can easily classify unorganized point clouds without rasterizing the data, and it does not need a large number of training samples. Most of the parameters in the network are learned, and thus the intensive parameter tuning cost is significantly reduced. Experimental results on various datasets demonstrate that the proposed framework achieves better performance than other related algorithms in terms of classification accuracy and reconstruction quality.
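The two ingredients named above, ReLU activations and dropout, can be sketched compactly. Below is a minimal forward pass for a small fully connected network with ReLU hidden layers and optional inverted dropout; the layer sizes and weights in the example are invented, and the actual ReLu-NN architecture, feature encoding and training procedure are not reproduced.

```python
import random

def relu(v):
    """Element-wise rectified linear unit: max(0, x)."""
    return [max(0.0, x) for x in v]

def dense(x, W, b):
    """Fully connected layer: y = W x + b."""
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def forward(x, layers, drop_p=0.0, rng=None):
    """ReLU MLP forward pass with optional inverted dropout on hidden
    activations (drop_p = 0 disables it, as at inference time)."""
    h = x
    for i, (W, b) in enumerate(layers):
        h = dense(h, W, b)
        if i < len(layers) - 1:            # hidden layers only
            h = relu(h)
            if drop_p > 0.0:
                # inverted dropout: zero some units, rescale the rest
                h = [0.0 if rng.random() < drop_p else v / (1 - drop_p)
                     for v in h]
    return h
```

ReLU's non-saturating gradient is what speeds up convergence relative to sigmoid, and dropout randomly silencing hidden units during training is what counteracts over-fitting on sparse features.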
Functional Test on (TES) Thermal Enclosure System
NASA Technical Reports Server (NTRS)
1992-01-01
MSFC Test Engineer performing a functional test on the TES. The TES can be operated as a refrigerator, with a minimum set point temperature of 4.0 degrees C, or as an incubator, with a maximum set point temperature of 40.0 degrees C. The TES can be set to maintain a constant temperature or programmed to change temperature settings over time, with the internal temperature recorded by a data logger.
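The operating envelope described in the caption (set points constrained to 4.0-40.0 degrees C, either held constant or varied on a programmed schedule) can be sketched as a small controller helper. All names and the schedule format below are hypothetical illustrations, not the TES flight software.

```python
TES_MIN_C, TES_MAX_C = 4.0, 40.0   # refrigerator and incubator limits

def clamp_set_point(t_c):
    """Constrain a requested set point to the TES operating range."""
    return max(TES_MIN_C, min(TES_MAX_C, t_c))

def scheduled_set_point(schedule, elapsed_h):
    """Piecewise-constant temperature program: schedule is a list of
    (start_hour, set_point) pairs sorted by start_hour; returns the
    set point active at elapsed_h, clamped to the operating range."""
    current = schedule[0][1]
    for start, sp in schedule:
        if elapsed_h >= start:
            current = sp
    return clamp_set_point(current)
```

A constant-temperature run is just a one-entry schedule; a programmed run steps through successive (time, temperature) pairs.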
Access and Quality of HIV-Related Point-of-Care Diagnostic Testing in Global Health Programs.
Fonjungo, Peter N; Boeras, Debrah I; Zeh, Clement; Alexander, Heather; Parekh, Bharat S; Nkengasong, John N
2016-02-01
Access to point-of-care testing (POCT) improves patient care, especially in resource-limited settings where laboratory infrastructure is poor and the bulk of the population lives in rural settings. However, because of challenges in rolling out the technology and weak quality assurance measures, the promise of human immunodeficiency virus (HIV)-related POCT in resource-limited settings has not been fully exploited to improve patient care and impact public health. Because of these challenges, the Joint United Nations Programme on HIV/AIDS (UNAIDS), in partnership with other organizations, recently launched the Diagnostics Access Initiative. Expanding HIV programs, including the "test and treat" strategies and the newly established UNAIDS 90-90-90 targets, will require increased access to reliable and accurate POCT results. In this review, we examine various components that could improve access and uptake of quality-assured POC tests to ensure coverage and public health impact. These components include evaluation, policy, regulation, and innovative approaches to strengthen the quality of POCT.
Duncan, Gertrude Florence; Roth, Lisa Marie; Donner-Banzhoff, Norbert; Boesner, Stefan
2016-04-18
A general practice rotation is mandatory in most undergraduate medical education programs. However, little is known about the student-teacher interaction that takes place in this setting. In this study we analyzed the occurrence and content of teaching points. From April to December 2012, 410 individual patient consultations were observed in twelve teaching practices associated with the Philipps University Marburg, Germany. Material was collected using structured field-note forms and videotaping. Data analysis was descriptive in form. A teaching point is defined here as a general rule or specific, case-related information divulged by the teaching practitioner. According to the analysis of 410 consultations, teaching points were made in 66.3% of consultations. Of these consultations, 74.3% contained general and 46.3% case-related teaching points; multiple categorizations were possible. Of seven possible topics, therapy was most common, followed, in frequency of occurrence, by patient history, diagnostic procedure, physical examination, disease pathology, differential diagnosis, risk factors and case presentation. The majority of consultations conducted in the presence of students contained teaching points, most frequently concerning therapy. General teaching points were more common than specific teaching points. Whilst it is encouraging that most consultations included teaching points, faculty development aimed at raising awareness of teaching and learning techniques is important.
A density based algorithm to detect cavities and holes from planar points
NASA Astrophysics Data System (ADS)
Zhu, Jie; Sun, Yizhong; Pang, Yueyong
2017-12-01
Delaunay-based shape reconstruction algorithms are widely used in approximating the shape from planar points. However, these algorithms cannot ensure the optimality of varied reconstructed cavity boundaries and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary based on an iterative removal of the Delaunay triangulation. Our algorithm is mainly divided into two steps, namely, rough and refined shape reconstructions. The rough shape reconstruction performed by the algorithm is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction mainly aims to detect holes and pure cavities. A cavity or hole is conceptualized as a structure with a low-density region surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical formulation, termed the compactness of a point, based on the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a sharp gradient change in compactness over the point set. The experimental comparison with other shape reconstruction approaches shows that the proposed algorithm is able to accurately yield the boundaries of cavities and holes with varying point set densities and distributions.
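The compactness measure described above lends itself to a short sketch. The abstract does not give the exact formula, so the coefficient of variation of incident edge lengths is used here as one plausible reading; the helper `compactness` and the sample points are illustrative only.

```python
from math import dist, sqrt

def compactness(point, neighbors):
    """One plausible reading of the paper's measure: the coefficient of
    variation (std / mean) of the Delaunay edge lengths incident to a point.
    Points on a cavity or hole boundary mix short interior edges with long
    edges spanning the low-density region, so their variation is high."""
    lengths = [dist(point, n) for n in neighbors]
    mean = sum(lengths) / len(lengths)
    var = sum((l - mean) ** 2 for l in lengths) / len(lengths)
    return sqrt(var) / mean

# Interior point: incident edges are all of similar length.
interior = compactness((0, 0), [(1, 0), (0, 1), (-1, 0), (0, -1)])
# Boundary point: one edge spans a hole, so lengths vary strongly.
boundary = compactness((0, 0), [(1, 0), (0, 1), (-1, 0), (0, -5)])
print(interior, boundary)
```

A gradient change in this quantity across neighboring points would then mark the cavity or hole boundary, as the abstract describes.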
2012-01-01
Background Pandemic influenza may exacerbate existing scarcity of life-saving medical resources. As a result, decision-makers may be faced with making tough choices about who will receive care and who will have to wait or go without. Although previous studies have explored ethical issues in priority setting from the perspective of clinicians and policymakers, there has been little investigation into how the public views priority setting during a pandemic influenza, in particular related to intensive care resources. Methods To bridge this gap, we conducted three public town hall meetings across Canada to explore Canadians' perspectives on this ethical challenge. Town hall group discussions were digitally recorded, transcribed, and analyzed using thematic analysis. Results Six interrelated themes emerged from the town hall discussions related to: ethical and empirical starting points for deliberation; criteria for setting priorities; pre-crisis planning; in-crisis decision-making; the need for public deliberation and input; and participants' deliberative struggle with the ethical issues. Conclusions Our findings underscore the importance of public consultation in pandemic planning for sustaining public trust in a public health emergency. Participants appreciated the empirical and ethical uncertainty of decision-making in an influenza pandemic and demonstrated nuanced ethical reasoning about priority setting of intensive care resources in an influenza pandemic. Policymakers may benefit from a better understanding of the public's empirical and ethical 'starting points' in developing effective pandemic plans. PMID:22449119
Schierz, Oliver; Reissmann, Daniel
2016-10-01
To compare the impact of canine guided vs. bilateral balanced occlusion on oral health related quality of life (OHRQoL) as a patient-reported outcome measure. In this randomized single-blind crossover trial, 19 patients were provided with new complete dentures in the maxilla and mandible. OHRQoL was assessed using the 49-item Oral Health Impact Profile (OHIP-49) before the start of the prosthodontic treatment (B), 3 months after insertion of the new dentures (T1), and 3 months after rework into the alternative concept (T2). Multilevel mixed-effect linear regression models were computed to determine the effect provided by the new set of dentures and the specific impact of the occlusal concept on OHRQoL, using summary scores of the OHIP-49 and of a 19-item subset specific for edentulous patients (OHIP-EDENT). At baseline, participants' OHRQoL was substantially impaired, indicated by an average OHIP-49 score of 42.1 points and an OHIP-EDENT score of 21.1 points. The effect of provision of a new set of complete dentures was a statistically significant decrease of 8.3 points (OHIP-49) and 4.0 points (OHIP-EDENT), respectively, representing a clinically relevant improvement in OHRQoL. When wearing dentures with bilateral balanced occlusion, participants showed on average 1.6 points higher OHIP-49 scores and 0.9 points higher OHIP-EDENT scores compared to canine guided dentures. This effect of the occlusal concept was neither statistically nor clinically significant. Both investigated occlusal concepts for complete dentures were comparable in their effect on patients' perceptions, with neither being considerably superior in terms of OHRQoL.
Case Management and Rehabilitation Counseling: Procedures and Techniques. Fourth Edition
ERIC Educational Resources Information Center
Roessler, Richard T.; Rubin, Stanford E.
2006-01-01
"Case Management and Rehabilitation Counseling" discusses procedures that are useful to rehabilitation professionals working in many settings. Specifically, this book reviews the finer points relating to diagnosing, arranging services, monitoring program outcomes, arranging for placement, planning for accommodations, ethical decision making,…
18F-FDG PET/MRI fusion in characterizing pancreatic tumors: comparison to PET/CT.
Tatsumi, Mitsuaki; Isohashi, Kayako; Onishi, Hiromitsu; Hori, Masatoshi; Kim, Tonsok; Higuchi, Ichiro; Inoue, Atsuo; Shimosegawa, Eku; Takeda, Yutaka; Hatazawa, Jun
2011-08-01
To demonstrate that positron emission tomography (PET)/magnetic resonance imaging (MRI) fusion was feasible in characterizing pancreatic tumors (PTs), comparing MRI and computed tomography (CT) as mapping images for fusion with PET, as well as fused PET/MRI and PET/CT. We retrospectively reviewed 47 sets of (18)F-fluorodeoxyglucose ((18)F-FDG) PET/CT and MRI examinations to evaluate suspected or known pancreatic cancer. To assess the ability of mapping images for fusion with PET, CT (of PET/CT) and T1- and T2-weighted (w) MR images (all non-contrast) were graded regarding the visibility of PT (5-point confidence scale). Fused PET/CT and PET/T1-w or T2-w MR images of the upper abdomen were evaluated to determine whether mapping images provided additional diagnostic information to PET alone (3-point scale). The overall quality of PET/CT or PET/MRI sets in diagnosis was also assessed (3-point scale). These PET/MRI-related scores were compared to PET/CT-related scores, and the accuracy in characterizing PTs was compared. Forty-three PTs were visualized on CT or MRI, including 30 with abnormal FDG uptake and 13 without. The confidence score for the visibility of PT was significantly higher on T1-w MRI than CT. The scores for additional diagnostic information to PET and overall quality of each image set in diagnosis were significantly higher for the PET/T1-w MRI set than the PET/CT set. The diagnostic accuracy was higher on PET/T1-w or PET/T2-w MRI (93.0 and 90.7%, respectively) than PET/CT (88.4%), but the difference did not reach statistical significance. PET/MRI fusion, especially PET with T1-w MRI, was demonstrated to be superior to PET/CT in characterizing PTs, offering better mapping and fusion image quality.
NASA Technical Reports Server (NTRS)
Estefan, Jeff A.; Giovannoni, Brian J.
2014-01-01
The Advanced Multi-Mission Operations Systems (AMMOS) is NASA's premier space mission operations product line offering for use in deep-space robotic and astrophysics missions. The general approach to AMMOS modernization over the course of its 29-year history exemplifies a continual, evolutionary approach with periods of sponsor investment peaks and valleys in between. Today, the Multimission Ground Systems and Services (MGSS) office, the program office that manages the AMMOS for NASA, actively pursues modernization initiatives and continues to evolve the AMMOS by incorporating enhanced capabilities and newer technologies into its end-user tool and service offerings. Despite the myriad modernization investments that have been made over the evolutionary course of the AMMOS, pain points remain. These pain points, based on interviews with numerous flight project mission operations personnel, can be classified principally into two major categories: 1) information-related issues, and 2) process-related issues. By information-related issues, we mean pain points associated with the management and flow of MOS data across the various system interfaces. By process-related issues, we mean pain points associated with the MOS activities performed by mission operators (i.e., humans) and the supporting software infrastructure used in support of those activities. In this paper, three foundational concepts, Timeline, Closed Loop Control, and Separation of Concerns, collectively form the basis for expressing a set of core architectural tenets that provides a multifaceted approach to AMMOS system architecture modernization intended to address the information- and process-related issues. Each of these architectural tenets will be further explored in this paper.
Ultimately, we envision the application of these core tenets resulting in a unified vision of a future-state architecture for the AMMOS-one that is intended to result in a highly adaptable, highly efficient, and highly cost-effective set of multimission MOS products and services.
2015-01-01
Glider pilots do not use the temporal waypoint estimate from GOST in their control of the glider; waypoints are simply treated as a sequence of points... glider pilots' interpretation of points as only sequential instead of temporal will eliminate this as a hindrance to the system's use. Further
NASA Astrophysics Data System (ADS)
Ge, Xuming
2017-08-01
The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.
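Once semantic keypoints have been matched between the two clouds, the final rigid alignment reduces to a least-squares pose estimate. The sketch below shows the standard Kabsch/SVD solution for that step (not the authors' modified 4PCS, which also establishes the correspondences):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    Kabsch/SVD solution for already-matched keypoint pairs. This is the
    core alignment step once correspondences are known; 4PCS-style
    methods exist to find such correspondences without markers."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check: rotate a point set 30 degrees about z and shift it.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
src = np.random.default_rng(0).random((10, 3))
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(src, dst)
print(np.allclose(src @ R.T + t, dst))  # True
```

With noiseless matched keypoints the transform is recovered exactly; with real scan data the same estimate minimizes the sum of squared residuals over the matches.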
Generalization of the Time-Energy Uncertainty Relation of Anandan-Aharonov Type
NASA Technical Reports Server (NTRS)
Hirayama, Minoru; Hamada, Takeshi; Chen, Jin
1996-01-01
A new type of time-energy uncertainty relation was proposed recently by Anandan and Aharonov. Their formula, which estimates the lower bound of the time integral of the energy fluctuation in a quantum state, is generalized to one involving a set of quantum states. This is achieved by obtaining an explicit formula for the distance between two finitely separated points in the Grassmann manifold.
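For reference, the single-state Anandan-Aharonov relation being generalized can be written as follows (this is the standard form from the geometric-phase literature; the multi-state generalization of the paper itself is not reproduced here):

```latex
% Fubini-Study path length traced out by |\psi(t)\rangle over [0, T],
% bounded below by the geodesic distance between initial and final states:
s \;=\; \frac{2}{\hbar}\int_0^T \Delta E(t)\,dt
  \;\ge\; s_0 \;=\; 2\arccos\bigl|\langle\psi(0)|\psi(T)\rangle\bigr|,
\qquad\text{hence}\qquad
\int_0^T \Delta E(t)\,dt \;\ge\; \hbar\,\arccos\bigl|\langle\psi(0)|\psi(T)\rangle\bigr|.
```

Replacing the two states by finitely separated points of a Grassmann manifold, as the abstract describes, requires the explicit distance formula the authors derive.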
Single Rapamycin Administration Induces Prolonged Downward Shift in Defended Body Weight in Rats
Hebert, Mark; Licursi, Maria; Jensen, Brittany; Baker, Ashley; Milway, Steve; Malsbury, Charles; Grant, Virginia L.; Adamec, Robert; Hirasawa, Michiru; Blundell, Jacqueline
2014-01-01
Manipulation of body weight set point may be an effective weight loss and maintenance strategy as the homeostatic mechanism governing energy balance remains intact even in obese conditions and counters the effort to lose weight. However, how the set point is determined is not well understood. We show that a single injection of rapamycin (RAP), an mTOR inhibitor, is sufficient to shift the set point in rats. Intraperitoneal RAP decreased food intake and daily weight gain for several days, but surprisingly, there was also a long-term reduction in body weight which lasted at least 10 weeks without additional RAP injection. These effects were not due to malaise or glucose intolerance. Two RAP administrations with a two-week interval had additive effects on body weight without desensitization and significantly reduced the white adipose tissue weight. When challenged with food deprivation, vehicle and RAP-treated rats responded with rebound hyperphagia, suggesting that RAP was not inhibiting compensatory responses to weight loss. Instead, RAP animals defended a lower body weight achieved after RAP treatment. Decreased food intake and body weight were also seen with intracerebroventricular injection of RAP, indicating that the RAP effect is at least partially mediated by the brain. In summary, we found a novel effect of RAP that maintains lower body weight by shifting the set point long-term. Thus, RAP and related compounds may be unique tools to investigate the mechanisms by which the defended level of body weight is determined; such compounds may also be used to complement weight loss strategy. PMID:24787262
Synergies in the space of control variables within the equilibrium-point hypothesis
Ambike, Satyajit; Mattos, Daniela; Zatsiorsky, Vladimir M.; Latash, Mark L.
2015-01-01
We use an approach rooted in the recent theory of synergies to analyze possible co-variation between two hypothetical control variables involved in finger force production based in the equilibrium-point hypothesis. These control variables are the referent coordinate (R) and apparent stiffness (C) of the finger. We tested a hypothesis that inter-trial co-variation in the {R; C} space during repeated, accurate force production trials stabilizes the fingertip force. This was expected to correspond to a relatively low amount of inter-trial variability affecting force and a high amount of variability keeping the force unchanged. We used the “inverse piano” apparatus to apply small and smooth positional perturbations to fingers during force production tasks. Across trials, R and C showed strong co-variation with the data points lying close to a hyperbolic curve. Hyperbolic regressions accounted for over 99% of the variance in the {R; C} space. Another analysis was conducted by randomizing the original {R; C} data sets and creating surrogate data sets that were then used to compute predicted force values. The surrogate sets always showed much higher force variance compared to the actual data, thus reinforcing the conclusion that finger force control was organized in the {R; C} space, as predicted by the equilibrium-point hypothesis, and involved co-variation in that space stabilizing total force. PMID:26701299
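The hyperbolic co-variation and the surrogate analysis can be reproduced on synthetic data. Assuming the simple force model F = C(x - R) at a fixed fingertip position x (an idealization, not the authors' exact fit), trials lying on the hyperbola C = F_target / (x - R) keep force nearly constant, while destroying the trial-by-trial {R; C} pairing inflates force variance:

```python
import numpy as np

rng = np.random.default_rng(1)

# Model assumption (illustrative, not the authors' fit): fingertip force
# F = C * (x - R) at fixed fingertip position x. Trials varying R and C
# along the hyperbola C = F_target / (x - R) keep the force stable.
x, F_target, n = 10.0, 4.0, 200
R = rng.uniform(0.0, 8.0, n)
C = F_target / (x - R) * (1 + 0.02 * rng.standard_normal(n))  # small noise

force = C * (x - R)

# Surrogate data: break the trial-by-trial pairing by shuffling C,
# mirroring the surrogate analysis described in the abstract.
C_shuffled = rng.permutation(C)
force_surrogate = C_shuffled * (x - R)

print(np.var(force), np.var(force_surrogate))
```

The surrogate force variance is orders of magnitude larger than the actual one, which is the signature the authors use to argue that control is organized along the {R; C} hyperbola.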
Executive Cognitive Function and Food Intake in Children
ERIC Educational Resources Information Center
Riggs, Nathaniel R.; Spruijt-Metz, Donna; Sakuma, Kari-Lyn; Chou, Chih-Ping; Pentz, Mary Ann
2010-01-01
Objective: The current study investigated relations among neurocognitive skills important for behavioral regulation, and the intake of fruit, vegetables, and snack food in children. Design: Participants completed surveys at a single time point. Setting: Assessments took place during school. Participants: Participants were 107 fourth-grade children…
International "best practices" in health care: the roles of context and innovation.
Goes, Jim; Savage, Grant T; Friedman, Leonard H
2015-01-01
Explores recent approaches to international best practices and how they relate to context and innovation in health services. Critical review of existing research on best practices and how they are created, diffused, and translated in the international setting. Best practices are widely used and discussed, but the processes by which they are developed and diffused across international settings are not well understood. Further research is needed on innovation and dissemination of best practices internationally. This commentary points out directions for future research on innovation and diffusion of best practices, particularly in the international setting.
Shallow ground-water quality beneath a major urban center: Denver, Colorado, USA
Bruce, B.W.; McMahon, P.B.
1996-01-01
A survey of the chemical quality of ground water in the unconsolidated alluvial aquifer beneath a major urban center (Denver, Colorado, USA) was performed in 1993 with the objective of characterizing the quality of shallow ground-water in the urban area and relating water quality to land use. Thirty randomly selected alluvial wells were each sampled once for a broad range of dissolved constituents. The urban land use at each well site was subclassified into one of three land-use settings: residential, commercial, and industrial. Shallow ground-water quality was highly variable in the urban area and the variability could be related to these land-use setting classifications. Sulfate (SO4) was the predominant anion in most samples from the residential and commercial land-use settings, whereas bicarbonate (HCO3) was the predominant anion in samples from the industrial land-use setting, indicating a possible shift in redox conditions associated with land use. Only three of 30 samples had nitrate concentrations that exceeded the US national drinking-water standard of 10 mg l-1 as nitrogen, indicating that nitrate contamination of shallow ground water may not be a serious problem in this urban area. However, the highest median nitrate concentration (4.2 mg l-1) was in samples from the residential setting, where fertilizer application is assumed to be most intense. Twenty-seven of 30 samples had detectable pesticides and nine of 82 analyzed pesticide compounds were detected at low concentrations, indicating that pesticides are widely distributed in shallow ground water in this urban area. Although the highest median total pesticide concentration (0.17 µg l-1) was in the commercial setting, the herbicides prometon and atrazine were found in each land-use setting. Similarly, 25 of 29 samples analyzed had detectable volatile organic compounds (VOCs) indicating these compounds are also widely distributed in this urban area.
The total VOC concentrations in sampled wells ranged from nondetectable to 23,442 µg l-1. Widespread detections and occasionally high concentrations point to VOCs as the major anthropogenic ground-water impact in this urban environment. Generally, the highest VOC concentrations occurred in samples from the industrial setting. The most frequently detected VOC was the gasoline additive methyl tert-butyl ether (MTBE, in 23 of 29 wells). Results from this study indicate that the quality of shallow ground water in major urban areas can be related to land-use settings. Moreover, some VOCs and pesticides may be widely distributed at low concentrations in shallow ground water throughout major urban areas. As a result, the differentiation between point and non-point sources for these compounds in urban areas may be difficult.
Safe landing area determination for a Moon lander by reachability analysis
NASA Astrophysics Data System (ADS)
Arslantaş, Yunus Emre; Oehlschlägel, Thimo; Sagliano, Marco
2016-11-01
In the last decades, developments in space technology have paved the way to more challenging missions like asteroid mining, space tourism and human expansion into the Solar System. These missions result in difficult tasks such as guidance schemes for re-entry, landing on celestial bodies and implementation of large angle maneuvers for spacecraft. There is a need for a safety system to increase the robustness and success of these missions. Reachability analysis meets this requirement by obtaining the set of all achievable states for a dynamical system starting from an initial condition with given admissible control inputs of the system. This paper proposes an algorithm for the approximation of nonconvex reachable sets (RS) by using optimal control. To this end, a subset of the state space is discretized by equidistant points and for each grid point a distance function is defined. This distance function acts as an objective function for a related optimal control problem (OCP). Each infinite dimensional OCP is transcribed into a finite dimensional Nonlinear Programming Problem (NLP) by using Pseudospectral Methods (PSM). Finally, the NLPs are solved using available tools, resulting in approximated reachable sets with information about the states of the dynamical system at these grid points. The algorithm is applied to a generic Moon landing mission. The proposed method computes approximated reachable sets and the attainable safe landing region with information about propellant consumption and time.
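The grid-plus-distance-function idea can be illustrated on a toy system. In place of the paper's pseudospectral NLPs, the sketch below brute-forces piecewise-constant controls for a discrete double integrator and marks a grid point as reachable when its distance function (distance to the nearest achievable final state) is small; the dynamics, horizon, and tolerance are all illustrative:

```python
import itertools
import numpy as np

# Toy stand-in for the paper's pipeline: a discrete double integrator
# x' = x + dt*v, v' = v + dt*u with |u| <= 1 over N steps. Instead of
# solving one NLP per grid point, enumerate piecewise-constant control
# sequences and evaluate the distance function at each grid point.
dt, N = 0.5, 6
finals = []
for u_seq in itertools.product([-1.0, 0.0, 1.0], repeat=N):
    x, v = 0.0, 0.0
    for u in u_seq:
        x, v = x + dt * v, v + dt * u
    finals.append((x, v))
finals = np.array(finals)

grid = np.array([(gx, gv) for gx in np.linspace(-4, 4, 9)
                          for gv in np.linspace(-4, 4, 9)])
# Distance function: distance from each grid point to the nearest
# achievable final state; small values mean "approximately reachable".
dists = np.min(np.linalg.norm(grid[:, None, :] - finals[None, :, :], axis=2),
               axis=1)
reachable = grid[dists < 0.3]
print(len(reachable), "of", len(grid), "grid points approximately reachable")
```

The paper replaces the brute-force enumeration with one properly transcribed OCP per grid point, which scales to the full Moon-landing dynamics.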
2009-01-01
employs a set of reference targets such as asteroids that are relatively numerous, more or less uniformly distributed around the Sun, and relatively... point source-like. Just such a population exists: 90 km-class asteroids. There are about 100 of these objects with relatively well-known orbits... These are main belt objects that are approximately evenly distributed around the Sun. They are large enough to be quasi-spherical in nature, and as a
Reliability and validity of the Dutch pediatric Voice Handicap Index.
Veder, Laura; Pullens, Bas; Timmerman, Marieke; Hoeve, Hans; Joosten, Koen; Hakkesteegt, Marieke
2017-05-01
The pediatric voice handicap index (pVHI) has been developed to provide better insight into parents' perception of their child's voice-related quality of life. The purpose of the present study was to validate the Dutch pVHI by evaluating its internal consistency and reliability. Furthermore, we determined the optimal cut-off point for a normal pVHI score. All items of the English pVHI were translated into Dutch. Parents of children in our dysphonic and control groups were asked to fill out the questionnaire. For the test-retest analysis we used a different study group who filled out the pVHI twice as part of a large follow-up study. Internal consistency was analyzed through Cronbach's α coefficient. The test-retest reliability was assessed by determining Pearson's correlation coefficient. The Mann-Whitney test was used to compare the questionnaire scores of the control group with those of the dysphonic group. By calculating receiver operating characteristic (ROC) curves, sensitivity, and specificity, we were able to set a cut-off point. We obtained data from 122 asymptomatic children and from 79 dysphonic children. The questionnaire scores differed significantly between the two groups. The internal consistency showed an overall Cronbach α coefficient of 0.96, and the total pVHI questionnaire showed excellent test-retest reliability with a Pearson's correlation coefficient of 0.90. A cut-off point for the total pVHI questionnaire was set at 7 points, with a specificity of 85% and sensitivity of 100%. A cut-off point for the VAS score was set at 13, with a specificity of 93% and sensitivity of 97%. The Dutch pVHI is a valid and reliable tool for the assessment of children with voice problems. With a cut-off point of 7 points for the total pVHI score and of 13 for the VAS score, the pVHI may be used as a screening tool to assess dysphonic complaints and may be a useful, complementary tool to identify children with dysphonia.
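The ROC-style cut-off selection described in the abstract can be sketched with synthetic scores (the numbers below are made up, not the study's data): candidate cut-offs with perfect sensitivity are kept, and the most specific one among them is chosen.

```python
def sens_spec(scores_pos, scores_neg, cutoff):
    """Sensitivity and specificity for 'score >= cutoff means dysphonic'."""
    tp = sum(s >= cutoff for s in scores_pos)
    tn = sum(s < cutoff for s in scores_neg)
    return tp / len(scores_pos), tn / len(scores_neg)

# Illustrative (synthetic) pVHI totals, not the study's data:
dysphonic = [9, 12, 15, 20, 8, 30, 11, 25]
controls = [0, 2, 1, 5, 3, 0, 6, 4, 13, 2]

# Scan candidate cut-offs, keep those with sensitivity == 1.0, then pick
# the one with the highest specificity, mirroring the ROC-based procedure.
best = max((c for c in range(0, 31)
            if sens_spec(dysphonic, controls, c)[0] == 1.0),
           key=lambda c: sens_spec(dysphonic, controls, c)[1])
print(best, sens_spec(dysphonic, controls, best))
```

On real data the same scan is usually read off the ROC curve, trading a point of sensitivity for specificity where clinically appropriate.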
Alternator control for battery charging
Brunstetter, Craig A.; Jaye, John R.; Tallarek, Glen E.; Adams, Joseph B.
2015-07-14
In accordance with an aspect of the present disclosure, an electrical system for an automotive vehicle has an electrical generating machine and a battery. A set point voltage, which sets an output voltage of the electrical generating machine, is set by an electronic control unit (ECU). The ECU selects one of a plurality of control modes for controlling the alternator based on an operating state of the vehicle as determined from vehicle operating parameters. The ECU selects a range for the set point voltage based on the selected control mode and then sets the set point voltage within the range based on feedback parameters for that control mode. In an aspect, the control modes include a trickle charge mode and battery charge current is the feedback parameter and the ECU controls the set point voltage within the range to maintain a predetermined battery charge current.
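A minimal sketch of the control scheme the patent abstract describes, with hypothetical mode names, voltage ranges, and gain (the abstract does not specify these values): the ECU selects a mode from the vehicle state, the mode fixes a set-point range, and a mode-specific feedback parameter moves the set point within that range.

```python
# Hypothetical sketch of the described scheme; mode names, ranges, and
# the gain are illustrative, not taken from the patent.
MODE_RANGES = {"trickle_charge": (13.2, 13.8), "bulk_charge": (14.0, 14.8)}

def select_mode(battery_soc):
    """Pick a control mode from the vehicle operating state (here just
    battery state of charge, as a stand-in for the full parameter set)."""
    return "trickle_charge" if battery_soc > 0.9 else "bulk_charge"

def set_point_voltage(mode, setpoint, feedback_error, gain=0.05):
    """Nudge the set point by the mode's feedback parameter (for trickle
    mode, the battery charge current error), clamped to the mode range."""
    lo, hi = MODE_RANGES[mode]
    return min(hi, max(lo, setpoint + gain * feedback_error))

mode = select_mode(battery_soc=0.95)
v = set_point_voltage(mode, 13.5, feedback_error=2.0)
print(mode, v)  # charge current below target, so the set point rises
```

The clamp is what keeps the feedback loop inside the range the selected mode allows, matching the two-stage selection the abstract describes.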
General subspace learning with corrupted training data via graph embedding.
Bao, Bing-Kun; Liu, Guangcan; Hong, Richang; Yan, Shuicheng; Xu, Changsheng
2013-11-01
We address the following subspace learning problem: supposing we are given a set of labeled, corrupted training data points, how to learn the underlying subspace, which contains three components: an intrinsic subspace that captures certain desired properties of a data set, a penalty subspace that fits the undesired properties of the data, and an error container that models the gross corruptions possibly existing in the data. Given a set of data points, these three components can be learned by solving a nuclear norm regularized optimization problem, which is convex and can be efficiently solved in polynomial time. Using the method as a tool, we propose a new discriminant analysis (i.e., supervised subspace learning) algorithm called Corruptions Tolerant Discriminant Analysis (CTDA), in which the intrinsic subspace is used to capture the features with high within-class similarity, the penalty subspace takes the role of modeling the undesired features with high between-class similarity, and the error container takes charge of fitting the possible corruptions in the data. We show that CTDA can well handle the gross corruptions possibly existing in the training data, whereas previous linear discriminant analysis algorithms arguably fail in such a setting. Extensive experiments conducted on two benchmark human face data sets and one object recognition data set show that CTDA outperforms the related algorithms.
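Nuclear norm regularized problems of this kind are typically solved with singular value thresholding, the proximal operator of the nuclear norm. The sketch below shows only that building block, not the paper's full three-component CTDA program:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of
    tau * (nuclear norm). Shrinks every singular value by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

nuclear_norm = lambda A: np.linalg.svd(A, compute_uv=False).sum()

rng = np.random.default_rng(0)
L = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # low rank
M = L.copy()
M[4, 7] += 25.0                    # one gross corruption
X = svt(M, tau=5.0)
print(nuclear_norm(X) < nuclear_norm(M))  # shrinkage reduces the norm: True
```

Iterating steps like this inside a splitting scheme is how the convex program in the abstract is solved in polynomial time; separating the intrinsic subspace, penalty subspace, and error container requires the additional structure of the full CTDA formulation.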
Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean
2017-12-04
Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole cell screens, individual proteins, physicochemical properties as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient and others. Based on ranked normalized scores for the metrics or data sets, Deep Neural Networks (DNN) ranked higher than SVM, which in turn ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar type plots indicates when models are inferior or perhaps overtrained.
These results also suggest the need for assessing deep learning further using multiple metrics with much larger scale comparisons, prospective testing as well as assessment of different fingerprints and DNN architectures beyond those used.
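The comparison metrics named above (F1 score, Cohen's kappa, Matthews correlation coefficient) all derive from the binary confusion matrix; a minimal pure-Python sketch:

```python
from math import sqrt

def confusion(y_true, y_pred):
    """Binary confusion counts: (TP, TN, FP, FN)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def f1(y_true, y_pred):
    tp, tn, fp, fn = confusion(y_true, y_pred)
    return 2 * tp / (2 * tp + fp + fn)

def mcc(y_true, y_pred):
    tp, tn, fp, fn = confusion(y_true, y_pred)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom

def cohen_kappa(y_true, y_pred):
    tp, tn, fp, fn = confusion(y_true, y_pred)
    n = tp + tn + fp + fn
    po = (tp + tn) / n                                   # observed agreement
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2  # chance
    return (po - pe) / (1 - pe)

y_true = [1, 1, 0, 0]
y_pred = [1, 0, 0, 0]
print(f1(y_true, y_pred), mcc(y_true, y_pred), cohen_kappa(y_true, y_pred))
```

Kappa and MCC correct for chance agreement and class imbalance, which is why the study ranks models on such an array of metrics rather than accuracy alone.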
Rea, A.H.; Becker, C.J.
1997-01-01
This compact disc contains 25 digital map data sets covering the State of Oklahoma that may be of interest to the general public, private industry, schools, and government agencies. Fourteen data sets are statewide. These data sets include: administrative boundaries; 104th U.S. Congressional district boundaries; county boundaries; latitudinal lines; longitudinal lines; geographic names; indexes of U.S. Geological Survey 1:100,000- and 1:250,000-scale topographic quadrangles; a shaded-relief image; Oklahoma State House of Representatives district boundaries; Oklahoma State Senate district boundaries; locations of U.S. Geological Survey stream gages; watershed boundaries and hydrologic cataloging unit numbers; and locations of weather stations. Eleven data sets are divided by county and are located in 77 county subdirectories. These data sets include: census block group boundaries with selected demographic data; city and major highways text; geographic names; land surface elevation contours; elevation points; an index of U.S. Geological Survey 1:24,000-scale topographic quadrangles; roads, streets, and address ranges; highway text; school district boundaries; streams, rivers, and lakes; and the public land survey system. All data sets are provided in a readily accessible format. Most data sets are provided in Digital Line Graph (DLG) format. The attributes for many of the DLG files are stored in related dBASE(R)-format files and may be joined to the data set polygon attribute or arc attribute tables using dBASE(R)-compatible software. (Any use of trade names in this publication is for descriptive purposes only and does not imply endorsement by the U.S. Government.) Point attribute tables are provided in dBASE(R) format only, and include the X and Y map coordinates of each point. Annotation (text plotted in map coordinates) is provided in AutoCAD Drawing Exchange Format (DXF) files. The shaded-relief image is provided in TIFF format.
All data sets except the shaded-relief image are also provided in ARC/INFO export-file format.
Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set
NASA Astrophysics Data System (ADS)
Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.
2000-06-01
Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter Set (FDP). We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular Whissel suggested that emotions are points in a space, which seem to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissel's; instead they tend to form an approximately circular pattern, called 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can be defined as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP point and the activation and angular parameters we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
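The interpolation of intermediate expressions along the emotion wheel described above can be sketched as follows. The anchor angles and FAP displacement values below are invented for illustration; they are not taken from the MPEG-4 standard or from the cited studies:

```python
import math

# Hypothetical anchor angles (radians) on the "emotion wheel" and example
# FAP displacement values for two neighbouring archetypal expressions.
ANCHORS = {
    "joy":      {"angle": math.radians(20.0), "faps": [0.8, 0.3, 0.0]},
    "surprise": {"angle": math.radians(70.0), "faps": [0.2, 0.9, 0.6]},
}

def intermediate_expression(angle, a, b):
    """Interpolate FAP displacements between two neighbouring anchor
    expressions according to angular position on the wheel."""
    t = (angle - a["angle"]) / (b["angle"] - a["angle"])
    t = max(0.0, min(1.0, t))  # stay on the arc between the anchors
    return [(1 - t) * fa + t * fb for fa, fb in zip(a["faps"], b["faps"])]

# An expression half-way (by angle) between the two anchors:
faps = intermediate_expression(math.radians(45.0),
                               ANCHORS["joy"], ANCHORS["surprise"])
```

The same scheme extends to any pair of neighbouring anchors around the wheel.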
Fu, Yili; Gao, Wenpeng; Chen, Xiaoguang; Zhu, Minwei; Shen, Weigao; Wang, Shuguo
2010-01-01
The reference system based on the fourth ventricular landmarks (including the fastigial point and ventricular floor plane) is used in medical image analysis of the brain stem. The objective of this study was to develop a rapid, robust, and accurate method for the automatic identification of this reference system on T1-weighted magnetic resonance images. The fully automated method developed in this study consisted of four stages: preprocessing of the data set, expectation-maximization algorithm-based extraction of the fourth ventricle in the region of interest, a coarse-to-fine strategy for identifying the fastigial point, and localization of the base point. The method was evaluated qualitatively on 27 BrainWeb data sets and quantitatively on 18 Internet Brain Segmentation Repository data sets and 30 clinical scans. The results of the qualitative evaluation indicated that the method was robust to rotation, landmark variation, noise, and inhomogeneity. The results of the quantitative evaluation indicated that the method was able to identify the reference system with an accuracy of 0.7 +/- 0.2 mm for the fastigial point and 1.1 +/- 0.3 mm for the base point. It took <6 seconds for the method to identify the related landmarks on a personal computer with an Intel Core 2 6300 processor and 2 GB of random-access memory. The proposed method for the automatic identification of the reference system based on the fourth ventricular landmarks was shown to be rapid, robust, and accurate. The method has potential utility in image registration and computer-aided surgery.
System Theory Aspects of Multi-Body Dynamics.
1978-08-18
systems are described from a system theory point of view. Various system theory concepts and research topics which have applicability to this class of...systems are identified and briefly described. The subject of multi-body dynamics is presented in a vector space setting and is related to system theory concepts. (Author)
NASA Technical Reports Server (NTRS)
House, Frederick B.
1986-01-01
The Nimbus 7 Earth Radiation Budget (ERB) data set is reviewed to examine its strong and weak points. In view of the timing of this report relative to the processing schedule of Nimbus 7 ERB observations, emphasis is placed on the methodology of interpreting the scanning radiometer data to develop directional albedo models. These findings enhance the value of the Nimbus 7 ERB data set and can be applied to the interpretation of both the scanning and nonscanning radiometric observations.
2012-02-12
is the total number of data points, is an approximately unbiased estimate of the "expected relative Kullback-Leibler distance" (information loss...possible models). Thus, after each model from Table 2 is fit to a data set, we can compute the Akaike weights for the set of candidate models and use ...computed from the OLS best-fit model solution (top), from a deconvolution of the data using normal curves (middle) and from a deconvolution of the data
Kosmulski, Marek
2012-01-01
The numerical values of points of zero charge (PZC, obtained by potentiometric titration) and of isoelectric points (IEP) of various materials reported in the literature have been analyzed. In sets of results reported for the same chemical compound (corresponding to a certain chemical formula and crystallographic structure), the IEP are relatively consistent. In contrast, in materials other than metal oxides, the sets of PZC are inconsistent. In view of the inconsistency in the sets of PZC and of the discrepancies between PZC and IEP reported for the same material, it seems that IEP is more suitable than PZC as the unique number characterizing the pH-dependent surface charging of materials other than metal oxides. The present approach is opposite to the usual approach, in which the PZC and IEP are considered as two equally important parameters characterizing the pH-dependent surface charging of materials other than metal oxides.
Stability of the Kasner universe in f(T) gravity
NASA Astrophysics Data System (ADS)
Paliathanasis, Andronikos; Said, Jackson Levi; Barrow, John D.
2018-02-01
f(T) gravity theory offers an alternative context in which to consider gravitational interactions, where torsion, rather than curvature, is the mechanism by which gravitation is communicated. We investigate the stability of the Kasner solution with several forms of the arbitrary Lagrangian function examined within the f(T) context. This is a Bianchi type-I vacuum solution with anisotropic expansion factors. In the f(T) gravity setting, the solution must conform to a set of conditions in order to continue to be a vacuum solution of the generalized field equations. With this solution in hand, the perturbed field equations are determined for power-law and exponential forms of the f(T) function. We find that the point which describes the Kasner solution is a saddle point, which means that the singular solution is unstable. However, we find the de Sitter universe is a late-time attractor. In general relativity, the cosmological constant drives the isotropization of the spacetime, while in this setting the extra f(T) contributions now provide this impetus.
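For reference, the Kasner solution discussed above can be written explicitly. The metric and the two Kasner conditions below are the standard general-relativistic form; in the f(T) setting, the abstract notes that additional conditions on the Lagrangian must hold, which are not reproduced here:

```latex
ds^2 = -dt^2 + t^{2p_1}\,dx^2 + t^{2p_2}\,dy^2 + t^{2p_3}\,dz^2,
\qquad
\sum_{i=1}^{3} p_i = 1,
\qquad
\sum_{i=1}^{3} p_i^{2} = 1 .
```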
78 FR 24816 - Pricing for the 2013 American Eagle West Point Two-Coin Silver Set
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-26
... DEPARTMENT OF THE TREASURY United States Mint Pricing for the 2013 American Eagle West Point Two-Coin Silver Set AGENCY: United States Mint, Department of the Treasury. ACTION: Notice. SUMMARY: The United States Mint is announcing the price of the 2013 American Eagle West Point Two-Coin Silver Set. The...
Quantifying morphological changes of cape-related shoals
NASA Astrophysics Data System (ADS)
Paniagua-Arroyave, J. F.; Adams, P. N.; Parra, S. M.; Valle-Levinson, A.
2017-12-01
The rising demand for marine resources has motivated the study of inner shelf transport processes, especially in locations with highly developed coastlines, endangered-species habitats, and valuable economic resources. These characteristics are found at Cape Canaveral shoals, on the Florida Atlantic coast, where transport dynamics and morphological evolution are not well understood. To study morphological changes at these shoals, two sets of paired upward- and downward-pointing acoustic Doppler current profilers (ADCPs) were deployed in winter 2015-2016. One set was deployed at the inner swale of Shoal E, 20 km southeast of the cape tip in 13 m depth, while the other set was located at the edge of Southeast Shoal in 5 m depth. Upward-pointing velocity profiles and suspended particle concentrations were implemented in the Exner equation to quantify instantaneous rates of change in bed elevation. This computation includes changes in sediment concentration and the advection of suspended particles, but does not account for spatial gradients in bed-load fluxes and water velocities. The results of the computation were then compared to bed change rates measured directly by the downward-pointing ADCPs. At the easternmost ridge, quantified bed elevation change rates ranged from -7×10^-7 to 4×10^-7 m/s, and those at the inner swale ranged from -4×10^-7 to 8×10^-7 m/s. These values were two orders of magnitude smaller than rates measured by downward-pointing ADCPs. Moreover, the cumulative changes were two orders of magnitude larger at the ridge (-0.33 m, downward, and -0.13 m, upward) than at the inner swale (cf. -6×10^-3 m, downward, and 3×10^-3 m, upward). These values suggest that bedform migration may be occurring at the ridge, that suspended sediments account for up to 30% of total bed changes, and that gradients in bed-load fluxes exert control on morphological change over the shoals.
Despite uncertainties related to the ADCP-derived sediment concentrations, these findings provide preliminary evidence about the spatial variability in morphological changes over cape-related shoals.
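A minimal sketch of the simplified Exner balance described above, retaining only the suspended-load storage and advection terms and neglecting bed-load flux gradients; the porosity value and the input numbers are hypothetical:

```python
def bed_change_rate(load_t0, load_t1, dt, flux_x0, flux_x1, dx, porosity=0.4):
    """Instantaneous bed-elevation change rate (m/s) from a simplified
    Exner balance:
        (1 - porosity) * d(eta)/dt ~ -( d/dt integral(C dz)
                                      + d/dx integral(u*C dz) )
    Bed-load flux gradients are neglected, as in the study above."""
    storage = (load_t1 - load_t0) / dt      # change of suspended storage
    advection = (flux_x1 - flux_x0) / dx    # gradient of advective flux
    return -(storage + advection) / (1.0 - porosity)

# Hypothetical depth-integrated loads (m) and fluxes (m^2/s):
rate = bed_change_rate(1.0, 1.0001, 100.0, 0.5, 0.5, 10.0)
```

Here an increase in suspended storage with no flux gradient implies erosion of the bed (a negative rate), consistent with mass conservation.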
GOES-12 SXI Operational Calibration
NASA Astrophysics Data System (ADS)
Pizzo, V. J.; Hill, S. M.; Balch, C.
2002-12-01
The prototype Solar X-ray Imager (SXI) was lofted into orbit aboard the NOAA GOES-12 spacecraft on 23 July 2001. The results of pre-launch ground-based optical tests have been combined with an extensive set of imagery taken during the post-launch checkout period from late August through mid December 2001 to establish an operational calibration for the full instrument performance. Although the nickel-coated mirror is a conventional Wolter-I grazing incidence optic, the detector consists of an MCP-enhanced CCD configuration not previously used for direct solar imaging. A full set of calibration data for each optical component (mirror, filters, detector) as well as for net system throughput have been derived and are available on the SXI website (http://sec.noaa.gov/sxi/ScienceUserGuide.html). In addition, a wide variety of information on instrument spatial resolution, point-spread function, dynamic range, photon statistics, and gain dependence (related to voltage settings for the MCP) have been derived. An improved background correction has been developed and applied to the recent release of the post-launch data now publicly available in FITS format. Special instrument topics including issues related to solar pointing and image timing aboard a geo-synchronous platform, CCD blooming properties, detector flat-field effects, and response to SEP events are also detailed.
Using Lin's method to solve Bykov's problems
NASA Astrophysics Data System (ADS)
Knobloch, Jürgen; Lamb, Jeroen S. W.; Webster, Kevin N.
2014-10-01
We consider nonwandering dynamics near heteroclinic cycles between two hyperbolic equilibria. The constituting heteroclinic connections are assumed to be such that one of them is transverse and isolated. Such heteroclinic cycles are associated with the termination of a branch of homoclinic solutions, and called T-points in this context. We study codimension-two T-points and their unfoldings in R^n. In our consideration we distinguish between cases with real and complex leading eigenvalues of the equilibria. In doing so we establish Lin's method as a unified approach to (re)gain and extend results of Bykov's seminal studies and related works. To a large extent our approach reduces the study to the discussion of intersections of lines and spirals in the plane. Case (RR): Under open conditions on the eigenvalues, there exist open sets in parameter space for which there exist periodic orbits close to the heteroclinic cycle. In addition, there exist two one-parameter families of homoclinic orbits to each of the saddle points p1 and p2. See Theorem 2.1 and Proposition 2.2 for precise statements and Fig. 2 for bifurcation diagrams. Cases (RC) and (CC): At the bifurcation point μ=0 and for each N≥2, there exists an invariant set S_0^N close to the heteroclinic cycle on which the first return map is topologically conjugate to a full shift on N symbols. For any fixed N≥2, the invariant set S_μ^N persists for |μ| sufficiently small. In addition, there exist infinitely many transversal and non-transversal heteroclinic orbits connecting the saddle points p1 and p2 in a neighbourhood of μ=0, as well as infinitely many one-parameter families of homoclinic orbits to each of the saddle points. For full statements of the results see Theorem 2.3 and Propositions 2.4, 2.5 and Fig. 3 for bifurcation diagrams. The dynamics near T-points has been studied previously by Bykov [6-10], Glendinning and Sparrow [20], Kokubu [27,28] and Labouriau and Rodrigues [30,31,38].
See also the surveys by Homburg and Sandstede [24], Shilnikov et al. [43] and Fiedler [18]. The occurrence of T-points in local bifurcations has been discussed by Barrientos et al. [4], and by Lamb et al. [32] in the context of reversible systems. All these studies consider dynamics in R^3 using a geometric return map approach, and their results reflect the description of types of nonwandering dynamics described above. Further related studies concerning T-points can be found in [34] and [37], where inclination flips were considered in this context. In [5], numerical studies of T-points are performed using kneading invariants. The main aim of this paper is to present a comprehensive study of dynamics near T-points, including detailed proofs of all results, employing a unified functional-analytic approach, without making any assumption on the dimension of the phase space. In the process, we recover and generalise to higher dimensional settings all previously reported results for T-points in R^3. In addition, we reveal the existence of richer dynamics in the (RC) and (CC) cases. A detailed discussion of our results is contained in Section 2. The functional analytic approach we follow is commonly referred to as Lin's method, after the seminal paper by Lin [33], and employs a reduction on an appropriate Banach space of piecewise continuous functions approximating the initial heteroclinic cycle to yield bifurcation equations whose solutions represent orbits of the nonwandering set. The development of such an approach is typical for the school of Hale, and is in contrast to the analysis contained in previous T-point studies, which relies on the construction of a first return map. Our choice of analytical framework is motivated by the fact that Lin's method provides a unified approach to study global bifurcations in arbitrary dimension, and has been shown to extend to a larger class of settings, such as delay and advance-delay equations [19,33].
Compensation for positioning error of industrial robot for flexible vision measuring system
NASA Astrophysics Data System (ADS)
Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui
2013-01-01
Positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods for positioning error based on the kinematic model of the robot have a significant limitation: they are not effective in the whole measuring space. A new compensation method for the positioning error of the robot, based on vision measuring techniques, is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and place two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS spatial positioning error is 3.422 mm with the single camera and 0.031 mm with the dual cameras. It is concluded that the algorithm of the single-camera method needs improvement to achieve higher accuracy, whereas the accuracy of the dual-camera method is suitable for the application.
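In two dimensions, estimating a sensor pose from measured control points reduces to a closed-form rigid (rotation plus translation) fit. This is a generic illustration of the idea, not the authors' implementation, and the point values used below are invented:

```python
import math

def fit_rigid_2d(src, dst):
    """Closed-form least-squares 2-D rigid transform (rotation theta plus
    translation t) mapping control points `src` onto `dst`."""
    n = len(src)
    cxs = sum(p[0] for p in src) / n
    cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n
    cyd = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs, ys = xs - cxs, ys - cys          # centre both point sets
        xd, yd = xd - cxd, yd - cyd
        num += xs * yd - ys * xd             # cross terms -> sin(theta)
        den += xs * xd + ys * yd             # dot terms   -> cos(theta)
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    t = (cxd - (c * cxs - s * cys), cyd - (s * cxs + c * cys))
    return theta, t

def transform(theta, t, p):
    """Apply the rigid transform (theta, t) to a 2-D point p."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])
```

The full 3-D problem is solved analogously (e.g. with an SVD-based Procrustes fit), but the 2-D closed form conveys the principle of recovering the sensor-to-global transformation from control points.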
Experimental data filtration algorithm
NASA Astrophysics Data System (ADS)
Oanta, E.; Tamas, R.; Danisor, A.
2017-08-01
Experimental data reduction is an important topic because the resulting information is used to calibrate theoretical models and to verify the accuracy of their results. The paper presents some ideas used to extract a subset of points from the initial set of points that defines an experimentally acquired curve. The objective is to obtain a subset with significantly fewer points than the initial data set which still accurately defines a smooth curve that preserves the shape of the initial curve. Being a general study, we used only data filtering criteria based on geometric features, which at a later stage may be related to upper-level conditions specific to the phenomenon under investigation. Five algorithms were conceived and implemented in original software consisting of more than 1800 lines of computer code, with a flexible structure that allows us to easily update it with new algorithms. The software instrument was used to process the data of several case studies. Conclusions are drawn regarding the values of the parameters used in the algorithms to decide whether a series of points may be considered either noise or a relevant part of the curve. Being a general analysis, the result is a computer-based trial-and-error method that efficiently solves this kind of problem.
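The paper does not give its five algorithms explicitly. A classical geometric reduction criterion of the kind described, keeping a point only if it deviates from the local chord by more than a tolerance, is the Douglas-Peucker scheme, sketched here as a minimal stand-in:

```python
import math

def simplify(points, tol):
    """Douglas-Peucker reduction: keep an interior point only if its
    perpendicular distance from the chord between the endpoints exceeds
    `tol`; otherwise treat it as noise."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # Find the interior point farthest from the chord.
    dmax, idx = -1.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tol:
        return [points[0], points[-1]]        # interior points are noise
    left = simplify(points[:idx + 1], tol)    # split at the farthest point
    right = simplify(points[idx:], tol)
    return left[:-1] + right                  # farthest point kept once
```

The tolerance plays the role of the noise-versus-relevance parameter discussed in the abstract: small values preserve detail, larger values discard more points as noise.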
Gadd, C S; Baskaran, P; Lobach, D F
1998-01-01
Extensive utilization of point-of-care decision support systems will be largely dependent on the development of user interaction capabilities that make them effective clinical tools in patient care settings. This research identified critical design features of point-of-care decision support systems that are preferred by physicians, through a multi-method formative evaluation of an evolving prototype of an Internet-based clinical decision support system. Clinicians used four versions of the system--each highlighting a different functionality. Surveys and qualitative evaluation methodologies assessed clinicians' perceptions regarding system usability and usefulness. Our analyses identified features that improve perceived usability, such as telegraphic representations of guideline-related information, facile navigation, and a forgiving, flexible interface. Users also preferred features that enhance usefulness and motivate use, such as an encounter documentation tool and the availability of physician instruction and patient education materials. In addition to identifying design features that are relevant to efforts to develop clinical systems for point-of-care decision support, this study demonstrates the value of combining quantitative and qualitative methods of formative evaluation with an iterative system development strategy to implement new information technology in complex clinical settings.
Point Set Denoising Using Bootstrap-Based Radial Basis Function.
Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad
2016-01-01
This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
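The method above projects points onto a bootstrap-tuned thin-plate spline surface. As a much simpler stand-in that illustrates only the k-nearest-neighbour step, each point can be moved part-way toward its neighbourhood centroid (Laplacian-style smoothing, not the authors' method):

```python
import math

def knn(points, i, k):
    """Indices of the k nearest neighbours of points[i] (brute force)."""
    order = sorted(range(len(points)),
                   key=lambda j: math.dist(points[i], points[j]))
    return order[1:k + 1]  # order[0] is the point itself

def denoise(points, k=4, lam=0.5):
    """Move each 3-D point a fraction `lam` toward the centroid of its k
    nearest neighbours -- a crude stand-in for projecting onto a locally
    fitted smooth surface such as a thin-plate spline."""
    out = []
    for i, p in enumerate(points):
        nbrs = knn(points, i, k)
        centroid = tuple(sum(points[j][axis] for j in nbrs) / len(nbrs)
                         for axis in range(3))
        out.append(tuple((1 - lam) * a + lam * b
                         for a, b in zip(p, centroid)))
    return out
```

In the paper, the projection target is the fitted spline surface and the smoothing strength is selected by bootstrap error estimation rather than by a fixed `lam`.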
Combined-probability space and certainty or uncertainty relations for a finite-level quantum system
NASA Astrophysics Data System (ADS)
Sehrawat, Arun
2017-08-01
The Born rule provides a probability vector (distribution) with a quantum state for a measurement setting. For two settings, we have a pair of vectors from the same quantum state. Each pair forms a combined-probability vector that obeys certain quantum constraints, which are triangle inequalities in our case. Such a restricted set of combined vectors, called the combined-probability space, is presented here for a d-level quantum system (qudit). The combined space is a compact convex subset of a Euclidean space, and all its extreme points come from a family of parametric curves. Considering a suitable concave function on the combined space to estimate the uncertainty, we deliver an uncertainty relation by finding its global minimum on the curves for a qudit. If one chooses an appropriate concave (or convex) function, then there is no need to search for the absolute minimum (maximum) over the whole space; it will be on the parametric curves. So these curves are quite useful for establishing an uncertainty (or a certainty) relation for a general pair of settings. We also demonstrate that many known tight certainty or uncertainty relations for a qubit can be obtained with the triangle inequalities.
NASA Astrophysics Data System (ADS)
Kreylos, O.; Bawden, G. W.; Kellogg, L. H.
2005-12-01
We are developing a visualization application to display and interact with very large (tens of millions of points) four-dimensional point position datasets in an immersive environment, such that point groups from repeated Tripod LiDAR (Light Detection And Ranging) surveys can be selected, measured, and analyzed for land surface change using 3D interactions. Ground-based tripod or terrestrial LiDAR (T-LiDAR) can remotely collect ultra-high-resolution (centimeter to subcentimeter) and accurate (± 4 mm) digital imagery of the scanned target; at scanning rates of 2,000 (x, y, z, i) (3D position + intensity) points per second, over 7 million points can be collected for a given target in an hour. We developed a multiresolution point set data representation based on octrees to display large T-LiDAR point cloud datasets at the frame rates required for immersive display (between 60 Hz and 120 Hz). Data inside an observer's region of interest is shown in full detail, whereas data outside the field of view or far away from the observer is shown at reduced resolution to provide context. Using 3D input devices at the University of California Davis KeckCAVES, users can navigate large point sets, accurately select related point groups in two or more point sets by sweeping regions of space, and guide the software in deriving positional information from point groups to compute their displacements between surveys. We used this new software application in the KeckCAVES to analyze 4D T-LiDAR imagery from the June 1, 2005 Blue Bird Canyon landslide in Laguna Beach, southern California. Over 50 million (x, y, z, i) data points were collected between 10 and 21 days after the landslide to evaluate T-LiDAR as a natural hazards response tool. The visualization of the T-LiDAR scans within the immediate landslide showed minor readjustments in the weeks following the primary landslide, with no observable continued motion on the primary landslide.
Recovery and demolition efforts across the landslide, such as the building of new roads and removal of unstable structures, are easily identified and assessed with the new software through the differencing of aligned imagery.
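A minimal sketch of the octree-based multiresolution idea described above: each node stores a representative centroid, so distant regions can be drawn with one point per coarse node while nearby regions recurse to full detail. Parameters such as leaf size and maximum depth are illustrative, not the application's actual settings:

```python
class Octree:
    """Minimal octree over 3-D points; each node stores a centroid so a
    renderer can substitute one representative point per node when the
    region is far from the viewer."""

    def __init__(self, points, center, half, depth=0, max_depth=4, leaf_size=8):
        self.center, self.half = center, half
        n = len(points)
        self.centroid = tuple(sum(p[i] for p in points) / n for i in range(3))
        self.children = []
        if n > leaf_size and depth < max_depth:
            buckets = [[] for _ in range(8)]
            for p in points:
                idx = sum((p[i] >= center[i]) << i for i in range(3))
                buckets[idx].append(p)
            for idx, bucket in enumerate(buckets):
                if bucket:
                    off = [(1 if (idx >> i) & 1 else -1) * half / 2
                           for i in range(3)]
                    c = tuple(center[i] + off[i] for i in range(3))
                    self.children.append(Octree(bucket, c, half / 2,
                                                depth + 1, max_depth, leaf_size))
        else:
            self.points = points  # leaf: keep the raw points

    def collect(self, max_level, level=0):
        """Representative points down to `max_level`; coarser levels
        stand in for regions far from the observer."""
        if not self.children or level >= max_level:
            return [self.centroid]
        out = []
        for ch in self.children:
            out.extend(ch.collect(max_level, level + 1))
        return out
```

A renderer would choose `max_level` per region from the viewer distance, which is the essence of the level-of-detail scheme described in the abstract.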
Variance-Based Cluster Selection Criteria in a K-Means Framework for One-Mode Dissimilarity Data.
Vera, J Fernando; Macías, Rodrigo
2017-06-01
One of the main problems in cluster analysis is that of determining the number of groups in the data. In general, the approach taken depends on the cluster method used. For K-means, some of the most widely employed criteria are formulated in terms of the decomposition of the total point scatter, regarding a two-mode data set of N points in p dimensions, which are optimally arranged into K classes. This paper addresses the formulation of criteria to determine the number of clusters, in the general situation in which the available information for clustering is a one-mode N × N dissimilarity matrix describing the objects. In this framework, p and the coordinates of points are usually unknown, and the application of criteria originally formulated for two-mode data sets is dependent on their possible reformulation in the one-mode situation. The decomposition of the variability of the clustered objects is proposed in terms of the corresponding block-shaped partition of the dissimilarity matrix. Within-block and between-block dispersion values for the partitioned dissimilarity matrix are derived, and variance-based criteria are subsequently formulated in order to determine the number of groups in the data. A Monte Carlo experiment was carried out to study the performance of the proposed criteria. For simulated clustered points in p dimensions, greater efficiency in recovering the number of clusters is obtained when the criteria are calculated from the related Euclidean distances instead of the known two-mode data set, in general, for unequal-sized clusters and for low-dimensionality situations. For simulated dissimilarity data sets, the proposed criteria always outperform the results obtained when these criteria are calculated from their original formulation, using dissimilarities instead of distances.
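The decomposition described above can be sketched directly from a one-mode dissimilarity matrix, without recovering coordinates: for Euclidean dissimilarities, the pairwise squared distances within each block reproduce the classical within-cluster sum of squares. A minimal sketch (not the paper's full set of criteria):

```python
def within_dispersion(D, labels):
    """Within-block dispersion from a one-mode dissimilarity matrix D.
    For Euclidean d_ij this equals the classical K-means within-cluster
    sum of squares, via sum_{i,j in C_k} d_ij^2 / (2 * n_k)."""
    groups = {}
    for i, g in enumerate(labels):
        groups.setdefault(g, []).append(i)
    w = 0.0
    for members in groups.values():
        n_k = len(members)
        w += sum(D[i][j] ** 2 for i in members for j in members) / (2 * n_k)
    return w

def total_dispersion(D):
    """Total point scatter from the dissimilarity matrix alone."""
    n = len(D)
    return sum(D[i][j] ** 2 for i in range(n) for j in range(n)) / (2 * n)
```

The between-block dispersion follows as total minus within, so variance-based criteria (e.g. elbow-type statistics over K) can be evaluated from the dissimilarity matrix alone.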
Stijkel, A; van Eijndhoven, J C; Bal, R
1996-12-01
The Dutch procedure for standard setting for occupational exposure to chemicals, just like the European Union (EU) procedure, is characterized by an organizational separation between considerations of health on the one side, and of technology, economics, and policy on the other side. Health considerations form the basis for numerical guidelines. These guidelines are next combined with technical-economical considerations. Standards are then proposed, and are finally set by the Ministry of Social Affairs and Employment. An analysis of this procedure might be of relevance to the US, where other procedures are used and criticized. In this article we focus on the first stage of the standard-setting procedure. In this stage, the Dutch Expert Committee on Occupational Standards (DECOS) drafts a criteria document in which a health-based guideline is proposed. The drafting is based on a set of starting points for assessing toxicity. We raise the questions, "Does DECOS limit itself only to health considerations? And if not, what are the consequences of such a situation?" We discuss DECOS' starting points and analyze the relationships between those starting points, and then explore eight criteria documents where DECOS was considering reproductive risks as a possible critical effect. For various reasons, it will be concluded that the starting points leave much interpretative space, and that this space is widened further by the manner in which DECOS utilizes it. This is especially true in situations involving sex-specific risks and uncertainties in knowledge. Consequently, even at the first stage, where health considerations alone are intended to play a role, there is much room for other than health-related factors to influence decision making, although it is unavoidable that some interpretative space will remain. We argue that separating the various types of consideration should not be abandoned. 
Rather, through adjustments in the starting points and aspects of the procedure, clarity should be guaranteed about the way the interpretative space is being employed.
NASA Astrophysics Data System (ADS)
Wang, Jingmei; Gong, Adu; Li, Jing; Chen, Yanling
2017-04-01
Typhoon is a kind of strong weather system formed over tropical or subtropical oceans. China, located on the west side of the Pacific Ocean, is the country affected by typhoons most frequently and seriously. To provide theoretical support for effectively reducing the damage caused by typhoons, the variation law of typhoon frequency is explored by analyzing the distribution of typhoon paths and landing sites, the sphere of influence, and the statistical characteristics of typhoons for every 5 years. In this study, the typhoon point data set was formed using the Best Track Data Set (0.1° × 0.1°) compiled by the China Meteorological Administration from 1950 to 2014. By using the Point to Line tool in the ArcGIS software, the typhoon paths are produced from the point data set. The sphere of influence of a typhoon is calculated from the Euclidean distance of the typhoon, whose threshold is set to 1°. The typhoon landing sites were extracted by using the Chinese vector layer provided by the research group. By counting the frequency of typhoons, the landing sites, and the sphere of influence, some conclusions can be drawn as follows. In recent years, the number of typhoons generated has been reduced and typhoon intensity is relatively stable, but the area impacted by typhoons has increased. Specific performance can be seen from the typhoon statistical and spatial distribution characteristics in China. In terms of frequency of typhoon landing, the number of typhoons landing in China has increased while the total number of typhoons is reduced. In terms of distribution of landing sites, the range of typhoon landing fluctuates. However, during the process of fluctuation, the range is gradually expanding. For example, in the south of China, Hainan Island is affected by typhoons more frequently, while China's northeast region is also gradually affected, which was extremely unusual before. Key words: spatial point model, distribution of typhoon, frequency of typhoon
Standard map in magnetized relativistic systems: fixed points and regular acceleration.
de Sousa, M C; Steffens, F M; Pakter, R; Rizzato, F B
2010-08-01
We investigate the concept of a standard map for the interaction of relativistic particles and electrostatic waves of arbitrary amplitudes, under the action of external magnetic fields. The map is adequate for physical settings where waves and particles interact impulsively, and allows a series of analytical results to be obtained exactly. Unlike the traditional form of the standard map, the present map is nonlinear in the wave amplitude and displays a series of peculiar properties. Among these properties we discuss the relation involving fixed points of the map and accelerator regimes.
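For comparison, the traditional (Chirikov) standard map, which the map above generalizes, can be iterated as follows; note that the paper's map, unlike this classical form, is nonlinear in the wave amplitude:

```python
import math

TWO_PI = 2.0 * math.pi

def standard_map(theta, p, K, steps):
    """Iterate the classical (Chirikov) standard map:
         p'     = p + K*sin(theta)  (mod 2*pi)
         theta' = theta + p'        (mod 2*pi)
    The origin (theta, p) = (0, 0) is a fixed point for every K."""
    for _ in range(steps):
        p = (p + K * math.sin(theta)) % TWO_PI
        theta = (theta + p) % TWO_PI
    return theta, p
```

Fixed points of such maps are exactly the orbits relevant to the regular-acceleration analysis mentioned in the abstract.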
Iterative algorithms for computing the feedback Nash equilibrium point for positive systems
NASA Astrophysics Data System (ADS)
Ivanov, I.; Imsland, Lars; Bogdanova, B.
2017-03-01
The paper studies N-player linear quadratic differential games on an infinite time horizon with deterministic feedback information structure. It introduces two iterative methods (the Newton method as well as its accelerated modification) in order to compute the stabilising solution of a set of generalised algebraic Riccati equations. The latter is related to the Nash equilibrium point of the considered game model. Moreover, we derive the sufficient conditions for convergence of the proposed methods. Finally, we discuss two numerical examples so as to illustrate the performance of both of the algorithms.
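In the scalar one-player limit, a generalized algebraic Riccati equation reduces to a single quadratic, and the Newton iteration takes the following elementary form. This sketches the idea only; it is not the paper's coupled N-player algorithm, and the coefficient values below are invented:

```python
def care_newton(a, b, q, r, x0, tol=1e-12, max_iter=50):
    """Newton iteration for the scalar algebraic Riccati equation
        2*a*x - (b*b/r)*x**2 + q = 0,
    a one-dimensional analogue of the generalized Riccati systems
    solved iteratively in the paper."""
    x = x0
    for _ in range(max_iter):
        f = 2.0 * a * x - (b * b / r) * x * x + q   # residual
        fp = 2.0 * a - 2.0 * (b * b / r) * x        # derivative
        step = f / fp
        x -= step
        if abs(step) < tol:
            break
    return x

# With a = -1 and b = q = r = 1, the stabilizing root is sqrt(2) - 1.
x = care_newton(-1.0, 1.0, 1.0, 1.0, 1.0)
```

In the matrix case, each Newton step solves a Lyapunov equation instead of a scalar division, which is where the accelerated modification discussed in the paper pays off.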
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1977-01-01
The problem of mathematically defining a smooth surface passing through a finite set of given points is studied. Literature relating to the problem is briefly reviewed. An algorithm is described that first constructs a triangular grid in the (x,y) domain and then estimates first partial derivatives at the nodal points. Interpolation in the triangular cells using a method that gives C^1 continuity overall is examined. Performance of software implementing the algorithm is discussed. Theoretical results are presented that provide valuable guidance in the development of algorithms for constructing triangular grids.
Inverse consistent non-rigid image registration based on robust point set matching
2014-01-01
Background Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images because it is a unidirectional matching approach. Improving image registration based on RPM is therefore an important issue. Methods In our work, a consistent image registration approach based on point-set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of estimating only the forward transformation between the source and target point sets, as state-of-the-art RPM algorithms do, our algorithm estimates the forward and backward transformations between the two point sets concurrently. Inverse-consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between the two point sets are estimated from both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into point matching in order to improve image matching. Results Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse-consistency errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is well preserved by our algorithm even for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM and maintain a downward trend as a whole, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated. Again, our algorithm achieves lower registration errors for the same number of iterations. The determinant of the Jacobian matrix of the deformation field is used to analyse the smoothness of the forward and backward transformations; the transformations estimated by our algorithm are smooth for small deformations. For the registration of lung slices and individual brain slices, both large and small determinants of the Jacobian matrix of the deformation fields are observed. Conclusions The results indicate the improvement of the proposed algorithm in bi-directional image registration and a decrease in the inverse-consistency errors of the forward and reverse transformations between two images. PMID:25559889
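A minimal sketch of the inverse-consistency issue the paper addresses (this is not the authors' coupled algorithm): fit the forward and backward thin-plate-spline maps independently, as a unidirectional approach would, and measure how far the round trip drifts from the identity off the control points. The point sets and the warp are synthetic; SciPy's RBFInterpolator supplies the thin-plate-spline kernel:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
src = rng.uniform(0, 1, (30, 2))                     # source control points
tgt = src + 0.05 * np.sin(2 * np.pi * src[:, ::-1])  # smooth non-rigid warp

# Forward and backward thin-plate-spline maps, fitted independently
fwd = RBFInterpolator(src, tgt, kernel='thin_plate_spline')
bwd = RBFInterpolator(tgt, src, kernel='thin_plate_spline')

# At the control points the round trip bwd(fwd(x)) is exact; elsewhere the
# two maps are generally not inverses -- the inverse-consistency error that
# a coupled (bidirectional) formulation is designed to reduce
grid = rng.uniform(0.1, 0.9, (200, 2))
ice = np.linalg.norm(bwd(fwd(grid)) - grid, axis=1)
print(ice.mean(), ice.max())
```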
Geometric registration of images by similarity transformation using two reference points
NASA Technical Reports Server (NTRS)
Kang, Yong Q. (Inventor); Jo, Young-Heon (Inventor); Yan, Xiao-Hai (Inventor)
2011-01-01
A method for registering a first image to a second image using a similarity transformation. Each image includes a plurality of pixels. The first image pixels are mapped to a set of first image coordinates and the second image pixels are mapped to a set of second image coordinates. The first image coordinates of two reference points in the first image are determined. The second image coordinates of these reference points in the second image are determined. A Cartesian translation of the set of second image coordinates is performed such that the second image coordinates of the first reference point match its first image coordinates. A similarity transformation of the translated set of second image coordinates is performed. This transformation scales and rotates the second image coordinates about the first reference point such that the second image coordinates of the second reference point match its first image coordinates.
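The two-point registration described above can be sketched compactly by treating 2-D coordinates as complex numbers, so that a single complex factor carries both the scale and the rotation about the first reference point (a simplified sketch, not the patented implementation; the points below are invented):

```python
import numpy as np

def register_two_points(p1, p2, q1, q2, pts):
    """Map second-image coordinates `pts` so the reference points q1, q2
    land on their first-image counterparts p1, p2 (translation followed by
    a scale-and-rotate about the first reference point)."""
    c = lambda v: complex(v[0], v[1])
    s = (c(p2) - c(p1)) / (c(q2) - c(q1))  # scale * rotation, one complex factor
    z = pts[:, 0] + 1j * pts[:, 1]
    w = c(p1) + s * (z - c(q1))            # send q1 -> p1, then transform about p1
    return np.column_stack([w.real, w.imag])

# The q-points are the p-points rotated 90 degrees and doubled in size
p1, p2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
q1, q2 = np.array([0.0, 0.0]), np.array([0.0, 2.0])
out = register_two_points(p1, p2, q1, q2, np.array([[0.0, 2.0], [0.0, 0.0]]))
print(out)  # q2 lands on p2 and q1 on p1
```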
Fickian dispersion is anomalous
Cushman, John H.; O’Malley, Dan
2015-06-22
The thesis put forward here is that the occurrence of Fickian dispersion in geophysical settings is a rare event and consequently should be labeled as anomalous. What people classically call anomalous is really the norm. In a Lagrangian setting, a process whose mean square displacement is proportional to time is generally labeled as Fickian dispersion. With a number of counterexamples we show why this definition is fraught with difficulty. In a related discussion, we show that an infinite second moment does not necessarily imply the process is superdispersive. By employing a rigorous mathematical definition of Fickian dispersion we illustrate why it is so hard to find a Fickian process. We go on to employ a number of renormalization group approaches to classify non-Fickian dispersive behavior. Scaling laws for the probability density function of a dispersive process, the distribution of first passage times, the mean first passage time, and the finite-size Lyapunov exponent are presented for fixed points of both deterministic and stochastic renormalization group operators. The fixed points of the renormalization group operators are p-self-similar processes. A generalized renormalization group operator is introduced whose fixed points form a set of generalized self-similar processes. Finally, power-law clocks are introduced to examine multi-scaling behavior. Several examples of these ideas are presented and discussed.
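The Lagrangian definition quoted above (mean square displacement proportional to time) can be checked numerically for the textbook Fickian case; a small sketch with synthetic Gaussian random walkers, where the MSD slope equals the step variance:

```python
import numpy as np

rng = np.random.default_rng(3)
steps = rng.normal(0.0, 1.0, (5000, 200))  # 5000 walkers, 200 unit-time steps
paths = np.cumsum(steps, axis=1)           # Brownian-like trajectories

msd = np.mean(paths**2, axis=0)            # ensemble mean square displacement
t = np.arange(1, 201)
slope = np.polyfit(t, msd, 1)[0]           # MSD ~ slope * t for a Fickian process
print(slope)                               # close to the step variance, 1.0
```

Heavy-tailed step distributions or long-range correlated steps, of the kind the paper's counterexamples exploit, would break this linear scaling.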
Tactile recognition and localization using object models: the case of polyhedra on a plane.
Gaston, P C; Lozano-Perez, T
1984-03-01
This paper discusses how data from multiple tactile sensors may be used to identify and locate one object, from among a set of known objects. We use only local information from sensors: 1) the position of contact points and 2) ranges of surface normals at the contact points. The recognition and localization process is structured as the development and pruning of a tree of consistent hypotheses about pairings between contact points and object surfaces. In this paper, we deal with polyhedral objects constrained to lie on a known plane, i.e., having three degrees of positioning freedom relative to the sensors. We illustrate the performance of the algorithm by simulation.
Theoretical study of the XP3 (X = Al, B, Ga) clusters
NASA Astrophysics Data System (ADS)
Ueno, Leonardo T.; Lopes, Cinara; Malaspina, Thaciana; Roberto-Neto, Orlando; Canuto, Sylvio; Machado, Francisco B. C.
2012-05-01
The lowest singlet and triplet states of AlP3, GaP3 and BP3 molecules with Cs, C2v and C3v symmetries were characterized using the B3LYP functional and the aug-cc-pVTZ and aug-cc-pVQZ correlated consistent basis sets. Geometrical parameters and vibrational frequencies were calculated and compared to existent experimental and theoretical data. Relative energies were obtained with single point CCSD(T) calculations using the aug-cc-pVTZ, aug-cc-pVQZ and aug-cc-pV5Z basis sets, and then extrapolating to the complete basis set (CBS) limit.
Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology
NASA Astrophysics Data System (ADS)
Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya
A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After estimating the locations and signals of the virtual sources, the spatial sound is constructed at the selected point by convolving the controlled acoustic transfer functions with each signal. In experiments, a sound field made by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposing algorithm as well as the virtual source representation is confirmed.
Health Instruction Packages: Dental Personnel.
ERIC Educational Resources Information Center
Hayes, Gary E.; And Others
Text, illustrations, and exercises are utilized in this set of four learning modules designed to instruct non-professional dental personnel in selected job-related skills. The first module, by Gary E. Hayes, describes how to locate the hinge axis point of the jaw, place and secure a bitefork, and perform a facebow transfer. The second module,…
Youth and Work: Toward a Model of Lifetime Economic Prospects.
ERIC Educational Resources Information Center
Carroll, Stephen J.; Pascal, Anthony H.
As part of an effort to reduce inequalities in economic opportunities confronting the young, this general model of youth behavior and opportunity was developed. Underlying the model are three sets of variables which influence economic opportunities: experience, perceptions, and opportunities. The relations between behavior at a point in time and…
A Lattice Boltzmann Method for Turbomachinery Simulations
NASA Technical Reports Server (NTRS)
Hsu, A. T.; Lopez, I.
2003-01-01
The Lattice Boltzmann (LB) method is a relatively new method for flow simulations. Its starting point is statistical mechanics and the Boltzmann equation. The LB method sets up its model at the molecular scale and simulates the flow at the macroscopic scale. LBM has been applied mostly to incompressible flows and simple geometries.
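A minimal sketch of the two LB sub-steps (BGK collision toward a local equilibrium, then streaming along the lattice directions) on a D2Q9 lattice; the grid size, relaxation time, and initial density ripple are invented, and real turbomachinery solvers add boundary conditions and forcing:

```python
import numpy as np

# D2Q9 lattice: weights and discrete velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann expansion on the lattice."""
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.sum(u**2, axis=-1, keepdims=True)
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.8):
    rho = f.sum(-1)                                      # macroscopic density
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]  # macroscopic velocity
    f = f + (equilibrium(rho, u) - f) / tau              # BGK collision
    for q, (cx, cy) in enumerate(c):                     # periodic streaming
        f[..., q] = np.roll(np.roll(f[..., q], cx, axis=0), cy, axis=1)
    return f

nx = ny = 16
rho0 = 1 + 0.01 * np.sin(2*np.pi*np.arange(nx)/nx)[:, None] * np.ones((1, ny))
f = equilibrium(rho0, np.zeros((nx, ny, 2)))
mass0 = f.sum()
for _ in range(10):
    f = step(f)
print(abs(f.sum() - mass0))   # collision and streaming conserve mass
```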
Visualizing Three-Dimensional Calculus Concepts: The Study of a Manipulative's Effectiveness
ERIC Educational Resources Information Center
McGee, Daniel, Jr.; Moore-Russo, Deborah; Ebersole, Dennis; Lomen, David O.; Quintero, Maider Marin
2012-01-01
With the help of the National Science Foundation, the Department of Mathematics at the University of Puerto Rico in Mayaguez has developed a set of manipulatives to help students of science and engineering visualize concepts relating to points, surfaces, curves, contours, and vectors in three dimensions. This article will present the manipulatives…
ERIC Educational Resources Information Center
White, Jane; Connelly, Graham; Thompson, Lucy; Wilson, Phil
2013-01-01
Background: Emotional and behavioural disorders in early childhood are related to poorer academic attainment and school engagement, and difficulties already evident at the point of starting school can affect a child's later social and academic development. Successful transfer from pre-school settings to primary education is helped by communication…
Leaders Behaving Badly: Using Power to Generate Undiscussables in Action Learning Sets
ERIC Educational Resources Information Center
Donovan, Paul Jeffrey
2014-01-01
"Undiscussables" are topics associated with threat or embarrassment that are avoided by groups, where that avoidance is also not discussed. Their deleterious effect on executive groups has been a point of discussion for several decades. More recently critical action learning (AL) has brought a welcome focus to power relations within AL…
Abel inversion using fast Fourier transforms.
Kalal, M; Nugent, K A
1988-05-15
A fast Fourier transform based Abel inversion technique is proposed. The method is faster than previously used techniques, potentially very accurate (even for a relatively small number of points), and capable of handling large data sets. The technique is discussed in the context of its use with 2-D digital interferogram analysis algorithms. Several examples are given.
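The Fourier route to Abel inversion rests on the projection-slice theorem: the 1-D Fourier transform of the projection equals the zero-order Hankel transform of the radial profile. A small numerical sketch (direct quadrature standing in for the FFTs of the paper; the Gaussian test profile is chosen because its Abel transform is known in closed form):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import j0

# Abel transform of f(r) = exp(-r^2) is F(y) = sqrt(pi) * exp(-y^2)
y = np.linspace(-8, 8, 2001)
F = np.sqrt(np.pi) * np.exp(-y**2)

# Step 1: 1-D Fourier transform of the projection (real and even -> cosine)
k = np.linspace(0, 2, 2001)
Fhat = trapezoid(F[None, :] * np.cos(2*np.pi*np.outer(k, y)), y, axis=1)

# Step 2: zero-order Hankel transform recovers the radial profile f(r)
r = np.linspace(0, 2, 21)
f = 2*np.pi * trapezoid(Fhat[None, :] * j0(2*np.pi*np.outer(r, k)) * k, k, axis=1)
print(np.max(np.abs(f - np.exp(-r**2))))  # small reconstruction error
```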
AOD Screening Tools for College Students. Prevention Update
ERIC Educational Resources Information Center
Higher Education Center for Alcohol, Drug Abuse, and Violence Prevention, 2012
2012-01-01
According to the National Institute on Alcohol Abuse and Alcoholism (NIAAA), the goal of screening in student health or other college settings is to reduce alcohol-related harm. NIAAA points out that identifying those students at greatest risk for alcohol problems is the first step in prevention. Colleges and universities have used a number of…
Poon, Betty P.K
2011-01-01
Interactions between genetic regions located across the genome maintain its three-dimensional organization and function. Recent studies point to key roles for a set of coiled-coil domain-containing complexes (cohibin, cohesin, condensin and monopolin) and related factors in the regulation of DNA-DNA connections across the genome. These connections are critical to replication, recombination, gene expression as well as chromosome segregation. PMID:21822055
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernal, José Luis; Cuesta, Antonio J.; Verde, Licia, E-mail: joseluis.bernal@icc.ub.edu, E-mail: liciaverde@icc.ub.edu, E-mail: ajcuesta@icc.ub.edu
We perform an empirical consistency test of General Relativity/dark energy by disentangling expansion history and growth of structure constraints. We replace each late-universe parameter that describes the behavior of dark energy with two meta-parameters: one describing geometrical information in cosmological probes, and the other controlling the growth of structure. If the underlying model (a standard wCDM cosmology with General Relativity) is correct, that is under the null hypothesis, the two meta-parameters coincide. If they do not, it could indicate a failure of the model or systematics in the data. We present a global analysis using state-of-the-art cosmological data sets, which points in the direction that cosmic structures prefer a weaker growth than that inferred by background probes. This result could signify inconsistencies of the model, the necessity of extensions to it, or the presence of systematic errors in the data. We examine all these possibilities. The fact that the result is mostly driven by a specific sub-set of galaxy cluster abundance data points to the need for a better understanding of this probe.
Guidelines for a cancer prevention smartphone application: A mixed-methods study.
Ribeiro, Nuno; Moreira, Luís; Barros, Ana; Almeida, Ana Margarida; Santos-Silva, Filipe
2016-10-01
This study sought to explore the views and experiences of healthy young adults concerning the fundamental features of a cancer prevention smartphone app that seeks behaviour change. Three focus groups were conducted with 16 healthy young adults that explored prior experiences, points of view and opinions about currently available health-related smartphone apps. Then, an online questionnaire was designed and applied to a larger sample of healthy young adults. Focus group and online questionnaire data were analysed and compared. Study results identified behaviour tracking, goal setting, tailored information and use of reminders as the most desired features in a cancer prevention app. Participants highlighted the importance of privacy and were reluctant to share personal health information with other users. The results also point out important dimensions related to usability and perceived usefulness to be considered for long-term use of health promotion apps. Participants didn't consider gamification features as important dimensions for long-term use of apps. This study allowed the definition of a set of guidelines for the development of a cancer prevention app. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Efficient Algorithms for Segmentation of Item-Set Time Series
NASA Astrophysics Data System (ADS)
Chundi, Parvathi; Rosenkrantz, Daniel J.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, etc. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation of a time series partitions the time series into a sequence of segments where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment, using a function which we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic programming based scheme to construct an optimal segmentation of the given item-set time series. We use the item-set time series segmentation techniques to analyze the temporal content of three different data sets—Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed based on the number of time points in each segment, without examining the item set data at the time points, and can be used to analyze different types of temporal data.
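The dynamic-programming scheme can be sketched as follows; the 'union' measure function and the small example series are invented for illustration, and the segment difference here simply counts symmetric-difference items between a segment's item set and each of its time points:

```python
from functools import lru_cache

def segment_itemset_series(series, k):
    """Optimal k-segmentation of an item-set time series by dynamic
    programming, minimizing the total segment difference."""
    n = len(series)

    def seg_cost(i, j):
        """Segment difference for series[i:j] under a union measure function."""
        seg = set().union(*series[i:j])
        return sum(len(seg ^ s) for s in series[i:j])

    INF = float('inf')

    @lru_cache(maxsize=None)
    def best(j, m):
        """Minimum cost (and cut points) for series[:j] split into m segments."""
        if m == 0:
            return (0, ()) if j == 0 else (INF, ())
        out = (INF, ())
        for i in range(m - 1, j):          # last segment is series[i:j]
            prev_cost, cuts = best(i, m - 1)
            cand = prev_cost + seg_cost(i, j)
            if cand < out[0]:
                out = (cand, cuts + (i,))
        return out

    cost, cuts = best(n, k)
    return cost, cuts[1:]                  # cost and interior segment boundaries

series = [{'a'}, {'a'}, {'a', 'b'}, {'c'}, {'c', 'd'}, {'c', 'd'}]
cost, cuts = segment_itemset_series(series, 2)
print(cost, cuts)   # best 2-segmentation splits between the 'a' and 'c' items
```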
Surrogate markers for time-varying treatments and outcomes
Hsu, Jesse Y; Kennedy, Edward H; Roy, Jason A; Stephens-Shields, Alisa J; Small, Dylan S; Joffe, Marshall M
2015-01-01
Background A surrogate marker is a variable commonly used in clinical trials to guide treatment decisions when the outcome of ultimate interest is not available. A good surrogate marker is one where the treatment effect on the surrogate is a strong predictor of the effect of treatment on the outcome. We review the situation when there is one treatment delivered at baseline, one surrogate measured at one later time point and one ultimate outcome of interest, and discuss new issues arising when variables are time-varying. Methods Most of the literature on surrogate markers has only considered simple settings with one treatment, one surrogate, and one outcome of interest at a fixed time point. However, more complicated time-varying settings are common in practice. In this paper, we describe the unique challenges in two settings, time-varying treatments and time-varying surrogates, while relating the ideas back to the causal-effects and causal-association paradigms. Conclusions In addition to discussing and extending popular notions of surrogacy to time-varying settings, we give examples illustrating that one can be misled by not taking into account time-varying information about the surrogate or treatment. We hope this paper has provided some motivation for future work on estimation and inference in such settings. PMID:25948621
Fluid/electrolyte and endocrine changes in space flight
NASA Technical Reports Server (NTRS)
Huntoon, Carolyn Leach
1989-01-01
The primary effects of space flight that influence the endocrine system and fluid and electrolyte regulation are the reduction of hydrostatic gradients, reduction in use and gravitational loading of bone and muscle, and stress. Each of these sets into motion a series of responses that culminates in alteration of some homeostatic set points for the environment of space. Set point alterations are believed to include decreases in venous pressure; red blood cell mass; total body water; plasma volume; and serum sodium, chloride, potassium, and osmolality. Serum calcium and phosphate increase. Hormones such as erythropoietin, atrial natriuretic peptide, aldosterone, cortisol, antidiuretic hormone, and growth hormone are involved in the dynamic processes that bring about the new set points. The inappropriateness of microgravity set points for 1-G conditions contributes to astronaut postflight responses.
A comparison of methods for determining HIV viral set point.
Mei, Y; Wang, L; Holte, S E
2008-01-15
During a course of human immunodeficiency virus (HIV-1) infection, the viral load usually increases sharply to a peak following infection and then drops rapidly to a steady state, where it remains until progression to AIDS. This steady state is often referred to as the viral set point. It is believed that the HIV viral set point results from an equilibrium between the HIV virus and immune response and is an important indicator of AIDS disease progression. In this paper, we analyze a real data set of viral loads measured before antiretroviral therapy is initiated, and propose two-phase regression models to utilize all available data to estimate the viral set point. The advantages of the proposed methods are illustrated by comparing them with two empirical methods, and the reason behind the improvement is also studied. Our results illustrate that for our data set, the viral load data are highly correlated and it is cost effective to estimate the viral set point based on one or two measurements obtained between 5 and 12 months after HIV infection. The utility and limitations of this recommendation will be discussed. Copyright (c) 2007 John Wiley & Sons, Ltd.
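A toy sketch of the two-phase idea (not the authors' model): fit a linear decline up to a changepoint and a constant plateau after it, choosing the changepoint by least squares over a grid. The simulated log10 viral loads, the changepoint, and the noise level are all invented:

```python
import numpy as np

def fit_set_point(t, y):
    """Two-phase fit of log10 viral load: linear decline up to a changepoint
    tau, constant plateau (the set point) after it; grid search over tau."""
    best = (np.inf, None, None)
    for tau in t[1:-1]:
        early, late = t <= tau, t > tau
        if early.sum() < 2 or late.sum() < 1:
            continue
        coef = np.polyfit(t[early], y[early], 1)   # decline phase
        plateau = y[late].mean()                   # steady-state set point
        rss = (np.sum((np.polyval(coef, t[early]) - y[early])**2)
               + np.sum((y[late] - plateau)**2))
        if rss < best[0]:
            best = (rss, tau, plateau)
    return best[1], best[2]

rng = np.random.default_rng(1)
t = np.arange(1, 13, dtype=float)                  # months after infection
y = np.where(t <= 5, 7 - 0.6 * t, 4.0)             # true set point: 4.0 log10
y = y + rng.normal(0, 0.05, t.size)                # measurement noise
tau, sp = fit_set_point(t, y)
print(tau, sp)   # changepoint near month 5, set point near 4.0
```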
Salient Point Detection in Protrusion Parts of 3D Object Robust to Isometric Variations
NASA Astrophysics Data System (ADS)
Mirloo, Mahsa; Ebrahimnezhad, Hosein
2018-03-01
In this paper, a novel method is proposed to detect 3D object salient points that is robust to isometric variations and stable against scaling and noise. Salient points can be used as representative points of object protrusion parts in order to improve object matching and retrieval algorithms. The proposed algorithm starts by determining the first salient point of the model based on the average geodesic distance of several random points. Then, according to the previous salient points, a new point is added to this set in each iteration. With every added salient point, the decision function is updated, creating a condition for selecting the next point so that it is not extracted from the same protrusion part; drawing a representative point from every protrusion part is thus guaranteed. The method is stable against model variations under isometric transformations, scaling, and noise of different strengths, owing to the use of a feature robust to isometric variations and to considering the relation between the salient points. In addition, the number of points used in the averaging process is decreased, which leads to lower computational complexity in comparison with other salient point detection algorithms.
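The first step (picking the first salient point by average geodesic distance) can be sketched on a graph; a toy path graph stands in for a 3D mesh here, and SciPy's Dijkstra routine supplies the geodesic distances:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def first_salient_point(adj, n_samples=10, rng=None):
    """First salient point: the vertex with the largest average geodesic
    distance to a random sample of vertices (protrusion tips maximize it)."""
    if rng is None:
        rng = np.random.default_rng(0)
    sample = rng.choice(adj.shape[0], size=n_samples, replace=False)
    d = dijkstra(adj, indices=sample)       # geodesic distances from the sample
    return int(np.argmax(d.mean(axis=0)))   # farthest-on-average vertex

# Toy "shape": a path graph 0-1-2-...-9 whose tips are the protrusion ends
n = 10
rows = np.arange(n - 1)
adj = csr_matrix((np.ones(n - 1), (rows, rows + 1)), shape=(n, n))
adj = adj + adj.T
tip = first_salient_point(adj)
print(tip)   # one of the two endpoints, 0 or 9
```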
Gschwind, Michael K [Chappaqua, NY
2011-03-01
Mechanisms for implementing a floating point only single instruction multiple data instruction set architecture are provided. A processor is provided that comprises an issue unit, an execution unit coupled to the issue unit, and a vector register file coupled to the execution unit. The execution unit has logic that implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA). The floating point vector registers of the vector register file store both scalar and floating point values as vectors having a plurality of vector elements. The processor may be part of a data processing system.
Trapp, Georgina S. A.; Knuiman, Matthew; Hooper, Paula; Ambrosini, Gina L.
2018-01-01
Large, longitudinal surveys often lack consistent dietary data, limiting the use of existing tools and methods that are available to measure diet quality. This study describes a method that was used to develop a simple index for ranking individuals according to their diet quality in a longitudinal study. The RESIDential Environments (RESIDE) project (2004–2011) collected dietary data in varying detail, across four time points. The most detailed dietary data were collected using a 24-item questionnaire at the final time point (n = 555; age ≥ 25 years). At preceding time points, sub-sets of the 24 items were collected. A RESIDE dietary guideline index (RDGI) that was based on the 24-items was developed to assess diet quality in relation to the Australian Dietary Guidelines. The RDGI scores were regressed on the longitudinal sub-sets of six and nine questionnaire items at T4, from which two simple index scores (S-RDGI1 and S-RDGI2) were predicted. The S-RDGI1 and S-RDGI2 showed reasonable agreement with the RDGI (Spearman’s rho = 0.78 and 0.84; gross misclassification = 1.8%; correct classification = 64.9% and 69.7%; and, Cohen’s weighted kappa = 0.58 and 0.64, respectively). For all of the indices, higher diet quality was associated with being female, undertaking moderate to high amounts of physical activity, not smoking, and self-reported health. The S-RDGI1 and S-RDGI2 explained 62% and 73% of the variation in RDGI scores, demonstrating that a large proportion of the variability in diet quality scores can be captured using a relatively small sub-set of questionnaire items. The methods described in this study can be applied elsewhere, in situations where limited dietary data are available, to generate a sample-specific score for ranking individuals according to diet quality. PMID:29652828
Caregiver informational support in different patient care settings at end of life.
Lavalley, Susan A
2018-01-01
Caregivers of the terminally ill face many complicated tasks including providing direct patient care, communicating with clinicians, and managing the logistical demands of daily activities. They require instructive information at all points in the illness process and across several settings where patients receive end-of-life care. This study examines how the setting where a patient receives end-of-life care affects caregivers' informational support needs by thematically analyzing data from caregiver interviews and clinical observations. Caregivers providing care for patients at home received informational support related to meeting patients' mobility, medication, and nutritional needs. Caregivers who provided care remotely received informational support to navigate transitions between patient care settings or long-term care arrangements, including financial considerations and insurance logistics. The findings document that interventions designed to enhance information for caregivers should account for caregiving context and that health care providers should proactively and repeatedly assess caregiver information needs related to end-of-life patient care.
Computing convex quadrangulations
Schiffer, T.; Aurenhammer, F.; Demuth, M.
2012-01-01
We use projected Delaunay tetrahedra and a maximum independent set approach to compute large subsets of convex quadrangulations on a given set of points in the plane. The new method improves over the popular pairing method based on triangulating the point set. PMID:22389540
A perfect match condition for point-set matching problems using the optimal mass transport approach
Chen, Pengwen; Lin, Ching-Long; Chern, I-Liang
2013-01-01
We study the performance of optimal mass transport-based methods applied to point-set matching problems. The present study, which is based on the L2 mass transport cost, states that perfect matches always occur when the product of the point-set cardinality and the norm of the curl of the non-rigid deformation field does not exceed some constant. This analytic result is justified by a numerical study of matching two sets of pulmonary vascular tree branch points whose displacement is caused by the lung volume changes in the same human subject. The nearly perfect match performance verifies the effectiveness of this mass transport-based approach. PMID:23687536
Pedersen, N E; Oestergaard, D; Lippert, A
2016-05-01
When investigating early warning scores and similar physiology-based risk stratification tools, death, cardiac arrest and intensive care unit admission are traditionally used as end points. A large proportion of the patients identified by these end points cannot be saved, even with optimal treatment. This could pose a limitation to studies using these end points. We studied current expert opinion on end points for validating tools for the identification of patients in hospital wards at risk of imminent critical illness. The Delphi consensus methodology was used. We identified 22 experts based on objective criteria; 17 participated in the study. Each expert panel member's suggestions for end points were collected and distributed to the entire expert panel in anonymised form. The experts reviewed, rated and commented the suggested end points through the rounds in the Delphi process, and the experts' combined rating of the usefulness of each suggestion was established. A gross list of 86 suggestions for end points, relating to 13 themes, was produced. No items were uniformly recognised as ideal. The themes cardiac arrest, death, and level of care contained the items receiving highest ratings. End points relating to death, cardiac arrest and intensive care unit admission currently comprise the most obvious compromises for investigating early warning scores and similar risk stratification tools. Additional end points from the gross list of suggested end points could become feasible with the increased availability of large data sets with a multitude of recorded parameters. © 2015 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Cancer diagnostics using neural network sorting of processed images
NASA Astrophysics Data System (ADS)
Wyman, Charles L.; Schreeder, Marshall; Grundy, Walt; Kinser, Jason M.
1996-03-01
A combination of image processing with neural network sorting was conducted to demonstrate feasibility of automated cervical smear screening. Nuclei were isolated to generate a series of data points relating to the density and size of individual nuclei. This was followed by segmentation to isolate entire cells for subsequent generation of data points to bound the size of the cytoplasm. Data points were taken on as many as ten cells per image frame and included correlation against a series of filters providing size and density readings on nuclei. Additional point data was taken on nuclei images to refine size information and on whole cells to bound the size of the cytoplasm; in total, twenty data points per assessed cell were generated. These data point sets, designated as neural tensors, comprise the inputs for training and use of a unique neural network to sort the images and identify those indicating evidence of disease. The neural network, named the Fast Analog Associative Memory, accumulates data and establishes lookup tables for comparison against images to be assessed. Six networks were trained to differentiate normal cells from those evidencing various levels of abnormality that may lead to cancer. A blind test was conducted on 77 images to evaluate system performance. The image set included 31 positives (diseased) and 46 negatives (normal). Our system correctly identified all 31 positives and 41 of the negatives with 5 false positives. We believe this technology can lead to more efficient automated screening of cervical smears.
Tóth, Gergely; Bodai, Zsolt; Héberger, Károly
2013-10-01
The coefficient of determination (R²) and its leave-one-out cross-validated analogue (denoted Q² or R²cv) are the most frequently published values used to characterize the predictive performance of models. In this article we use R² and Q² in a reversed role, to detect uncommon, i.e. influential, points in any data set. The term (1 - Q²)/(1 - R²) equals the ratio of the predictive residual sum of squares (PRESS) to the residual sum of squares (RSS). This ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F test on the (1 - Q²)/(1 - R²) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded on the routinely calculated Q² and R² values and warns model builders to verify the training set, perform influence analysis, or even switch to robust modeling.
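The PRESS/RSS identity behind the proposed test can be checked numerically for ordinary least squares. The hat-matrix shortcut below (leave-one-out residual e_i/(1-h_ii)) is a standard regression result, not code from the paper:

```python
import numpy as np

def r2_q2_ratio(X, y):
    """R^2 (fit), Q^2 (leave-one-out cross-validated R^2) and the
    (1-Q^2)/(1-R^2) = PRESS/RSS ratio for ordinary least squares."""
    X1 = np.column_stack([np.ones(len(y)), X])     # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    rss = np.sum(resid ** 2)
    # Leave-one-out residuals via the hat matrix: e_loo = e / (1 - h),
    # so PRESS needs no model refitting.
    h = np.diag(X1 @ np.linalg.inv(X1.T @ X1) @ X1.T)
    press = np.sum((resid / (1 - h)) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    return 1 - rss / tss, 1 - press / tss, press / rss

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=40)
r2, q2, ratio = r2_q2_ratio(X, y)
```

Since each leverage h_ii is strictly positive, PRESS always exceeds RSS, so the ratio is above 1 even for clean data; the article's point is that influential points push it markedly higher.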
Voronoi cell patterns: Theoretical model and applications
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2011-11-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We use our model to describe the Voronoi cell patterns of several systems. Specifically, we study the island nucleation with irreversible attachment, the 1D car-parking problem, the formation of second-level administrative divisions, and the pattern formed by the Paris Métro stations.
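In 1D the Voronoi cell sizes are easy to compute directly: each cell runs between the midpoints to its neighbouring points. This small sketch, assuming points on the unit interval, produces the size samples whose distribution the fragmentation model describes:

```python
import numpy as np

def voronoi_1d_cell_sizes(points):
    """1D Voronoi cell sizes for points on [0, 1]: each cell is bounded
    by the midpoints to its neighbours; the domain ends clip the two
    outermost cells."""
    x = np.sort(np.asarray(points, dtype=float))
    mids = (x[:-1] + x[1:]) / 2.0
    edges = np.concatenate(([0.0], mids, [1.0]))
    return np.diff(edges)

rng = np.random.default_rng(2)
sizes = voronoi_1d_cell_sizes(rng.random(1000))  # homogeneous point set
```

The sizes partition the interval, so they sum to 1; histogramming `sizes * len(sizes)` gives the normalized cell-size distribution studied in the paper.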
Voronoi Cell Patterns: theoretical model and application to submonolayer growth
NASA Astrophysics Data System (ADS)
González, Diego Luis; Einstein, T. L.
2012-02-01
We use a simple fragmentation model to describe the statistical behavior of the Voronoi cell patterns generated by a homogeneous and isotropic set of points in 1D and in 2D. In particular, we are interested in the distribution of sizes of these Voronoi cells. Our model is completely defined by two probability distributions in 1D and again in 2D, the probability to add a new point inside an existing cell and the probability that this new point is at a particular position relative to the preexisting point inside this cell. In 1D the first distribution depends on a single parameter while the second distribution is defined through a fragmentation kernel; in 2D both distributions depend on a single parameter. The fragmentation kernel and the control parameters are closely related to the physical properties of the specific system under study. We apply our model to describe the Voronoi cell patterns of island nucleation for critical island sizes i=0,1,2,3. Experimental results for the Voronoi cells of InAs/GaAs quantum dots are also described by our model.
Point-of-decision prompts for increasing park-based physical activity: a crowdsource analysis
Wilhelm Stanis, Sonja A.; Hipp, J. Aaron
2014-01-01
Objective To examine the potential efficacy of using point-of-decision prompts to influence intentions to be active in a park setting. Methods In June 2013, participants from across the U.S. (n=250) completed an online experiment using Amazon’s Mechanical Turk and Survey Monkey. Participants were randomly exposed to a park photo containing a persuasive, theoretically-based message in the form of a sign (treatment) or an identical photo with no sign (control). Differences in intentions to engage in moderate-to-vigorous physical activity within the park were examined between the two conditions for multiple gender, age, and race groups. Results Participants who were exposed to the park photo with the sign reported significantly greater intentions to be active than those who viewed the photo without a sign. This effect was especially strong for women compared to men, but no differences were observed across age or race groups. Conclusion Point-of-decision prompts are a relatively inexpensive, simple, sustainable, and scalable strategy for evoking behavior change in parks and further testing of diverse messages in actual park settings is warranted. PMID:25204987
Interactive algebraic grid-generation technique
NASA Technical Reports Server (NTRS)
Smith, R. E.; Wiese, M. R.
1986-01-01
An algebraic grid generation technique and use of an associated interactive computer program are described. The technique, called the two boundary technique, is based on Hermite cubic interpolation between two fixed, nonintersecting boundaries. The boundaries are referred to as the bottom and top, and they are defined by two ordered sets of points. Left and right side boundaries which intersect the bottom and top boundaries may also be specified by two ordered sets of points. When side boundaries are specified, linear blending functions are used to conform interior interpolation to the side boundaries. Spacing between physical grid coordinates is determined as a function of boundary data and uniformly spaced computational coordinates. Control functions relating computational coordinates to parametric intermediate variables that affect the distance between grid points are embedded in the interpolation formulas. A versatile control function technique with smooth-cubic-spline functions is presented. The technique works best in an interactive graphics environment where computational displays and user responses are quickly exchanged. An interactive computer program based on the technique and called TBGG (two boundary grid generation) is also described.
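The blending described above can be sketched with the standard cubic Hermite basis. The tangent scaling used here (`s0`, `s1`) is an illustrative stand-in for spacing control, not TBGG's actual spline control functions:

```python
import numpy as np

def two_boundary_grid(bottom, top, n_eta, s0=0.5, s1=2.0):
    """Hermite cubic blending between two fixed boundary curves.
    bottom, top: (n_xi, 2) arrays of (x, y) points.  s0, s1 scale the
    eta-direction tangents and act as a crude grid-spacing control
    (small s0 clusters grid lines near the bottom boundary)."""
    eta = np.linspace(0.0, 1.0, n_eta)[:, None, None]
    h00 = 2*eta**3 - 3*eta**2 + 1          # cubic Hermite basis functions
    h10 = eta**3 - 2*eta**2 + eta
    h01 = -2*eta**3 + 3*eta**2
    h11 = eta**3 - eta**2
    d = top - bottom                        # shared tangent direction
    return h00*bottom + h01*top + (s0*h10 + s1*h11)*d

xi = np.linspace(0.0, 1.0, 20)
bottom = np.column_stack((xi, np.zeros_like(xi)))
top = np.column_stack((xi, 1.0 + 0.2*np.sin(np.pi*xi)))
grid = two_boundary_grid(bottom, top, 15)   # (n_eta, n_xi, 2) point array
```

Because h00(0)=1 and h01(1)=1 while the other basis functions vanish at the ends, the first and last grid lines coincide exactly with the bottom and top boundaries.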
Rigter, Tessel; Henneman, Lidewij; Kristoffersson, Ulf; Hall, Alison; Yntema, Helger G; Borry, Pascal; Tönnies, Holger; Waisfisz, Quinten; Elting, Mariet W; Dondorp, Wybo J; Cornel, Martina C
2013-01-01
High-throughput nucleotide sequencing (often referred to as next-generation sequencing; NGS) is increasingly being chosen as a diagnostic tool for cases of expected but unresolved genetic origin. When exploring a higher number of genetic variants, there is a higher chance of detecting unsolicited findings. The consequential increased need for decisions on disclosure of these unsolicited findings poses a challenge for the informed consent procedure. This article discusses the ethical and practical dilemmas encountered when contemplating informed consent for NGS in diagnostics from a multidisciplinary point of view. By exploring recent similar experiences with unsolicited findings in other settings, an attempt is made to describe what can be learned so far for implementing NGS in standard genetic diagnostics. The article concludes with a set of points to consider in order to guide decision-making on the extent of return of results in relation to the mode of informed consent. We hereby aim to provide a sound basis for developing guidelines for optimizing the informed consent procedure. PMID:23784691
Elder, William P.; Saul, LouElla
1993-01-01
The Pigeon Point Formation crops out along the San Mateo County coastline in a northern and southern sequence of folded and faulted strata. Correlation of the two sequences remains somewhat equivocal, although on the basis of biostratigraphy and a reversed magnetic interval both appear to have been deposited during the early to middle Campanian. Sedimentary structures suggest that the northern sequence was deposited by turbidity currents in a continental rise setting, whereas the southern sequence primarily reflects deposition in shelf and slope environments. Right-lateral offset on the San Andreas and subsidiary faults to the east of the Pigeon Point Formation can account for 100s of km of northward transport since its deposition. However, Champion and others (1984) suggested 2500 km of northward transport from a tropical setting of about 21°N. Molluscan assemblages in the formation argue strongly for a less tropical site of deposition. Relative abundances of warm and temperate taxa and the presence or absence of key species are similar to those of the Santa Ana Mountains Cretaceous section, and are indicative of a war
Sets that Contain Their Circle Centers
ERIC Educational Resources Information Center
Martin, Greg
2008-01-01
Say that a subset S of the plane is a "circle-center set" if S is not a subset of a line, and whenever we choose three non-collinear points from S, the center of the circle through those three points is also an element of S. A problem appearing on the Macalester College Problem of the Week website stated that a finite set of points in the plane,…
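The circle center referred to in the definition is the circumcenter of the three chosen points. A minimal computation, solving the linear system that the perpendicular bisectors define, looks like:

```python
import numpy as np

def circumcenter(p, q, r):
    """Center of the circle through three non-collinear points in the
    plane.  From |c-p|^2 = |c-q|^2 = |c-r|^2 one gets the linear system
    2(q-p).c = q.q - p.p and 2(r-p).c = r.r - p.p."""
    p, q, r = (np.asarray(v, dtype=float) for v in (p, q, r))
    A = 2.0 * np.array([q - p, r - p])
    b = np.array([q @ q - p @ p, r @ r - p @ p])
    return np.linalg.solve(A, b)

c = circumcenter((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
```

For the right triangle above the circumcenter is the hypotenuse midpoint (0.5, 0.5); a circle-center set must contain this point whenever it contains the three vertices.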
Servo-control for maintaining abdominal skin temperature at 36C in low birth weight infants.
Sinclair, J C
2000-01-01
Randomized trials have shown that the neonatal mortality rate of low birth-weight babies can be reduced by keeping them warm. For low birth-weight babies nursed in incubators, warm conditions may be achieved either by heating the air to a desired temperature, or by servo-controlling the baby's body temperature at a desired set-point. In low birth weight infants, to determine the effect on death and other important clinical outcomes of targeting body temperature rather than air temperature as the end-point of control of incubator heating. Standard search strategy of the Cochrane Neonatal Collaborative Review Group. Randomized or quasi-randomized trials which test the effects of having the heat output of the incubator servo-controlled from body temperature compared with setting a constant incubator air temperature. Trial methodologic quality was systematically assessed. Outcome measures included death, timing of death, cause of death, and other clinical outcomes. Categorical outcomes were analyzed using relative risk and risk difference. Meta-analysis assumed a fixed effect model. Compared to setting a constant incubator air temperature of 31.8C, servo-control of abdominal skin temperature at 36C reduces the neonatal death rate among low birth weight infants: relative risk 0.72 (95% CI 0.54, 0.97); risk difference -12.7% (95% CI -1.6, -23.9). This effect is even greater among VLBW infants. During at least the first week after birth, low birth weight babies should be provided with a carefully regulated thermal environment that is near the thermoneutral point. For LBW babies in incubators, this can be achieved by adjusting incubator temperature to maintain an anterior abdominal skin temperature of at least 36C, using either servo-control or frequent manual adjustment of incubator air temperature.
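Servo-control of skin temperature is, in essence, closed-loop (thermostat-style) regulation toward the 36C set-point, as opposed to fixing the air temperature. The toy simulation below uses made-up heat-loss and heater constants and only illustrates the control idea, not incubator physics:

```python
def simulate_servo(set_point=36.0, hours=4.0, dt=0.01):
    """Bang-bang servo control of a lumped 'skin temperature': the
    heater switches on whenever the measured temperature drops below
    the set-point.  All constants are illustrative assumptions."""
    t_air = 25.0      # passive incubator-air floor, deg C
    k_loss = 1.5      # heat-loss rate constant, 1/h
    q_heat = 25.0     # heating rate with the heater on, deg C/h
    temp = 33.0       # initial skin temperature, deg C
    history = []
    for _ in range(int(hours / dt)):
        heater = q_heat if temp < set_point else 0.0   # servo decision
        # Newton cooling toward the air temperature, plus heater input
        temp += dt * (-k_loss * (temp - t_air) + heater)
        history.append(temp)
    return history

trace = simulate_servo()
```

The trace rises to the set-point and then oscillates in a narrow band around it, which is exactly the behaviour the trials compare against a constant air-temperature setting.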
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
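One simple way to realize D-optimal time-point selection is a greedy search over the rows of the model's sensitivity matrix. The biexponential-decay sensitivities below are an illustrative stand-in for the actual FLIM-FRET decay model, and greedy selection is only an approximation to the full combinatorial optimum:

```python
import numpy as np

def greedy_d_optimal(S, k):
    """Greedily select k rows (time points) of the sensitivity matrix S
    to maximise det(S_sel^T S_sel), the D-optimality criterion."""
    n, p = S.shape
    chosen = []
    for _ in range(k):
        best_i, best_det = -1, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            sub = S[chosen + [i]]
            # A tiny ridge keeps early (rank-deficient) candidates comparable.
            d = np.linalg.det(sub.T @ sub + 1e-9 * np.eye(p))
            if d > best_det:
                best_i, best_det = i, d
        chosen.append(best_i)
    return sorted(chosen)

# Stand-in model: a1*exp(-t/tau1) + a2*exp(-t/tau2); the sensitivities
# w.r.t. the fractional amplitudes a1, a2 are the two exponentials.
t = np.linspace(0.0, 10.0, 90)
S = np.column_stack((np.exp(-t / 0.5), np.exp(-t / 3.0)))
picked = greedy_d_optimal(S, 10)   # 10 of the 90 candidate time points
```

This mirrors the paper's setting of reducing ~90 sampled time points to ~10 while keeping the parameter-information determinant as large as possible.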
Effect of electromagnetic field on Kordylewski clouds formation
NASA Astrophysics Data System (ADS)
Salnikova, Tatiana; Stepanov, Sergey
2018-05-01
In previous papers the authors suggested a clarification of the phenomenon of appearance-disappearance of the Kordylewski clouds: accumulations of cosmic dust in the vicinity of the triangular libration points of the Earth-Moon system. Under the gravitational and light perturbation of the Sun, the triangular libration points are not points of relative equilibrium. However, there exist stable periodic motions of particles surrounding each of the triangular libration points. Due to this fact we can consider a probabilistic model of dust cloud formation, in which the clouds move along periodic orbits in a small vicinity of these points. To continue this research we suggest a mathematical model to investigate also the electromagnetic influences that arise when charged dust particles are considered in the vicinity of the triangular libration points of the Earth-Moon system. In this model we take into consideration the self-induced force field within the set of charged particles; the probability distribution density evolves according to the Vlasov equation.
Datum Feature Extraction and Deformation Analysis Method Based on Normal Vector of Point Cloud
NASA Astrophysics Data System (ADS)
Sun, W.; Wang, J.; Jin, F.; Liang, Z.; Yang, Y.
2018-04-01
To address the lack of applicable analysis methods in applying three-dimensional laser scanning technology to deformation monitoring, an efficient method for extracting datum features and analysing deformation based on point cloud normal vectors is proposed. Firstly, a kd-tree is used to establish the topological relations. Datum points are detected by tracking the normal vectors of the point cloud, determined from the normal vectors of local planes. Then, cubic B-spline curve fitting is performed on the datum points. Finally, the datum elevation and the inclination angle of the radial points are calculated according to the fitted curve, and the deformation information is analysed. The proposed approach was verified on a real large-scale tank data set captured with a terrestrial laser scanner in a chemical plant. The results show that the method can obtain the entire information of the monitored object quickly and comprehensively, and accurately reflect the deformation of the datum features.
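The per-point normal used for datum detection is typically obtained by a local plane fit over the k nearest neighbours. A minimal sketch, assuming `scipy` for the kd-tree and PCA of the neighbourhood covariance for the plane fit:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=12):
    """Unit normal at each 3D point via PCA of its k nearest neighbours:
    the eigenvector of the local covariance with the smallest eigenvalue
    is the normal of the best-fit local plane."""
    tree = cKDTree(points)                     # kd-tree topology
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        w, v = np.linalg.eigh(nbrs.T @ nbrs)   # ascending eigenvalues
        normals[i] = v[:, 0]                   # smallest -> plane normal
    return normals

rng = np.random.default_rng(3)
flat = np.column_stack((rng.random((200, 2)), np.zeros(200)))  # z = 0 plane
normals = estimate_normals(flat)
```

For a planar patch all estimated normals align (up to sign) with the plane normal, which is what makes normal tracking usable for detecting planar datum features.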
NASA Astrophysics Data System (ADS)
Robinson, W. P.; Gillibrand, E.
2004-06-01
The primary purpose was to investigate the efficacy of a full year of single-sex (SS) teaching of science. The secondary aims were to locate any differentiation by set and gender, and to relate these to more proximal variables. Participants were 13-year-olds. Higher set girls gave evidence of clear benefits overall, and higher set boys also, except in biology. Lower set pupils performed at or below expectations. Analyses of additional questionnaire and interview data pointed to further reasons for avoiding the making of unqualified generalizations about SS teaching. Pupil preferences for SS teaching were relevant, as were gender differences in attitudes to biology and physics. Qualitative data suggested higher set girls benefited from more learning-related classroom interaction and less interference and exploitation of girls by boys in SS classes. Lower set pupils complained that SS teaching deprived them of social interaction with the other sex. The concluding suggestion was that SS teaching offers affordances of benefits when mixed-sex teaching has specifiable disadvantages.
On the Asymptotic Relative Efficiency of Planned Missingness Designs.
Rhemtulla, Mijke; Savalei, Victoria; Little, Todd D
2016-03-01
In planned missingness (PM) designs, certain data are set a priori to be missing. PM designs can increase validity and reduce cost; however, little is known about the loss of efficiency that accompanies these designs. The present paper compares PM designs to reduced sample (RN) designs that have the same total number of data points concentrated in fewer participants. In 4 studies, we consider models for both observed and latent variables, designs that do or do not include an "X set" of variables with complete data, and a full range of between- and within-set correlation values. All results are obtained using asymptotic relative efficiency formulas, and thus no data are generated; this novel approach allows us to examine whether PM designs have theoretical advantages over RN designs removing the impact of sampling error. Our primary findings are that (a) in manifest variable regression models, estimates of regression coefficients have much lower relative efficiency in PM designs as compared to RN designs, (b) relative efficiency of factor correlation or latent regression coefficient estimates is maximized when the indicators of each latent variable come from different sets, and (c) the addition of an X set improves efficiency in manifest variable regression models only for the parameters that directly involve the X-set variables, but it substantially improves efficiency of most parameters in latent variable models. We conclude that PM designs can be beneficial when the model of interest is a latent variable model; recommendations are made for how to optimize such a design.
Review Article: Increasing physical activity with point-of-choice prompts--a systematic review.
Nocon, Marc; Müller-Riemenschneider, Falk; Nitzschke, Katleen; Willich, Stefan N
2010-08-01
Stair climbing is an activity that can easily be integrated into everyday life and has positive health effects. Point-of-choice prompts are informational or motivational signs near stairs and elevators/escalators aimed at increased stair climbing. The aim of this review was to assess the effectiveness of point-of-choice prompts for the promotion of stair climbing. In a systematic search of the literature, studies that assessed the effectiveness of point-of-choice prompts to increase the rate of stair climbing in the general population were identified. No restrictions were made regarding the setting, the duration of the intervention, or the kind of message. A total of 25 studies were identified. Point-of-choice prompts were predominantly posters or stair-riser banners in public traffic stations, shopping malls or office buildings. The 25 studies reported 42 results. Of 10 results for elevator settings, only three reported a significant increase in stair climbing, whereas 28 of 32 results for escalator settings reported a significant increase in stair climbing. Overall, point-of-choice prompts are able to increase the rate of stair climbing, especially in escalator settings. In elevator settings, point-of-choice prompts seem less effective. The long-term efficacy and the most efficient message format have yet to be determined in methodologically rigorous studies.
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Pereira, N F; Sitek, A
2011-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496
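The maximum likelihood expectation maximization (MLEM) algorithm named above has a compact multiplicative update. This generic dense-matrix sketch is independent of the voxel-vs-tetrahedral basis question studied in the paper:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """MLEM for emission data:  x <- x * A^T(y / Ax) / A^T 1,
    where A is the system matrix (projection bins x basis functions)
    and y the measured counts; all entries are non-negative."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])              # sensitivity (column sums)
    sens = np.where(sens > 0, sens, 1.0)
    for _ in range(n_iter):
        proj = A @ x                              # forward projection
        ratio = np.where(proj > 0, y / np.where(proj > 0, proj, 1.0), 0.0)
        x *= (A.T @ ratio) / sens                 # multiplicative update
    return x

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_hat = mlem(A, A @ x_true)                       # noiseless, consistent data
```

On noiseless consistent data the iteration converges to the true activity; on the Poisson noise realizations described in the abstract it converges to the maximum-likelihood estimate instead.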
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
NASA Astrophysics Data System (ADS)
Pereira, N. F.; Sitek, A.
2010-09-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
3D active shape models of human brain structures: application to patient-specific mesh generation
NASA Astrophysics Data System (ADS)
Ravikumar, Nishant; Castro-Mateos, Isaac; Pozo, Jose M.; Frangi, Alejandro F.; Taylor, Zeike A.
2015-03-01
The use of biomechanics-based numerical simulations has attracted growing interest in recent years for computer-aided diagnosis and treatment planning. With this in mind, a method for automatic mesh generation of brain structures of interest, using statistical models of shape (SSM) and appearance (SAM), for personalised computational modelling is presented. SSMs are constructed as point distribution models (PDMs) while SAMs are trained using intensity profiles sampled from a training set of T1-weighted magnetic resonance images. The brain structures of interest are the cortical surface (cerebrum, cerebellum & brainstem), lateral ventricles and falx-cerebri membrane. Two methods for establishing correspondences across the training set of shapes are investigated and compared (based on SSM quality): the Coherent Point Drift (CPD) point-set registration method and a B-spline mesh-to-mesh registration method. The MNI-305 (Montreal Neurological Institute) average brain atlas is used to generate the template mesh, which is deformed and registered to each training case, to establish correspondence over the training set of shapes. 18 healthy patients' T1-weighted MR images form the training set used to generate the SSM and SAM. Both model-training and model-fitting are performed over multiple brain structures simultaneously. Compactness and generalisation errors of the BSpline-SSM and CPD-SSM are evaluated and used to quantitatively compare the SSMs. Leave-one-out cross validation is used to evaluate SSM quality in terms of these measures. The mesh-based SSM is found to generalise better and is more compact, relative to the CPD-based SSM. Quality of the best-fit model instance from the trained SSMs to test cases is evaluated using the Hausdorff distance (HD) and mean absolute surface distance (MASD) metrics.
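Once correspondence is established, a point distribution model reduces to PCA over aligned shape vectors. This sketch shows model building and shape synthesis under that simplification (the correspondence and alignment steps, the hard part above, are assumed done):

```python
import numpy as np

def build_pdm(shapes, var_keep=0.95):
    """Point distribution model from aligned shapes, given as an
    (n_shapes, n_points*dim) array: mean shape plus the principal
    modes covering `var_keep` of the total shape variance."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    var = s ** 2 / (len(shapes) - 1)
    cum = np.cumsum(var) / var.sum()
    n_modes = int(np.searchsorted(cum, var_keep)) + 1
    return mean, Vt[:n_modes], var[:n_modes]

def synthesize(mean, modes, b):
    """New shape instance: mean + sum_i b_i * mode_i."""
    return mean + b @ modes

# Toy training set: shapes varying along a single known mode.
rng = np.random.default_rng(4)
base = rng.random(6)                                   # 3 points in 2D, flattened
mode = np.array([1.0, 0.0, 0.0, -1.0, 0.0, 0.0]) / np.sqrt(2.0)
shapes = base + rng.normal(size=(10, 1)) * mode
mean, modes, var = build_pdm(shapes)
```

Fitting such a model to a new image then amounts to searching over the low-dimensional coefficients `b` (guided by the appearance model) rather than over all mesh vertices.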
Smart POI: Open and linked spatial data
NASA Astrophysics Data System (ADS)
Cerba, Otakar; Berzins, Raitis; Charvat, Karel; Mildorf, Tomas
2016-04-01
The Smart Point of Interest (SPOI) represents a unique seamless spatial data set based on standards recommended for Linked and open data, which are supported by scientists and researchers as well as by several government authorities and the European Union. This data set, developed in cooperation between partners of the SDI4Apps project, contains almost 24 million points of interest focused mainly on tourism, natural features, transport or citizen services. The SPOI data covers almost all countries and territories of the world. It is created as a harmonized combination of global data resources (selected points from OpenStreetMap, Natural Earth and GeoNames.org) and several local data sets (for example data published by the Citadel on the Move project, data from the Posumavi region in the Czech Republic or experimental ontologies developed at the University of West Bohemia, including ski regions in Europe and historical sights in Rome). The added value of the SDI4Apps approach in comparison to other similar solutions consists in the implementation of the linked data approach (several objects are connected to DBpedia or GeoNames.org), the use of the universal RDF format, the use of standardized and respected properties and vocabularies (for example FOAF or GeoSPARQL) and the development of a completely harmonized data set with a uniform data model and common classification (not only a copy of original resources). The SPOI data is published as a SPARQL endpoint as well as in the map client. The SPOI dataset is a specific set of POIs which could be "a data fuel" for applications and services related to tourism, local business, statistics or landscape monitoring. It can also be used as a background data layer for thematic maps.
Alternative forms of the Spencer-Fano equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inokuti, M.; Kowari, K.
We point out a relation between the electron degradation spectra determined by two differing cross-section sets but subject to the same source. The relation takes a form of the Fredholm integral equation of the second kind and may be viewed as an alternative form of the Spencer-Fano equation. The relation leads to a precise definition of the partial degradation spectra of electrons of successive generations. It also provides a basis for the perturbation theory by which one calculates effects of small changes of cross-section data upon the electron degradation spectrum.
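A Fredholm integral equation of the second kind can be handled numerically by the standard Nyström discretization. This is a generic sketch with a made-up kernel, not the Spencer-Fano cross-section operators themselves:

```python
import numpy as np

def solve_fredholm2(kernel, f, a, b, n=200):
    """Nystrom method for  y(s) = f(s) + int_a^b K(s, t) y(t) dt:
    discretise on a uniform grid with trapezoid weights and solve the
    resulting linear system (I - K*w) y = f."""
    s = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))      # trapezoid quadrature weights
    w[0] *= 0.5
    w[-1] *= 0.5
    K = kernel(s[:, None], s[None, :])
    y = np.linalg.solve(np.eye(n) - K * w[None, :], f(s))
    return s, y

# Check against a known solution: with K(s,t) = s*t on [0, 1] and
# f(s) = (2/3)s, the exact solution is y(s) = s, since
# int_0^1 s*t*t dt = s/3 and s = (2/3)s + s/3.
s, y = solve_fredholm2(lambda s, t: s * t, lambda s: (2.0 / 3.0) * s, 0.0, 1.0)
```

The perturbation-theory use mentioned in the abstract corresponds to resolving such a system for a slightly changed kernel and comparing the resulting degradation spectra.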
Public Perceptions and the Situation of Males in Early Childhood Settings
ERIC Educational Resources Information Center
Tufan, Mumin
2018-01-01
The main focus areas of this research are pointing out the public perceptions and beliefs about male preschool teachers, fear of child sexual molestation, moral panic, and power relations in the society. The sample of the study composed of one white, female preschool teacher with a single interview transcript, working in the city of Tempe,…
40 CFR 63.11527 - What are the monitoring requirements for new and existing sources?
Code of Federal Regulations, 2011 CFR
2011-07-01
... alarm that will sound when an increase in relative PM loadings is detected over the alarm set point... operating a bag leak detection system, if an alarm sounds, conduct visual monitoring of the monovent or... maintain a continuous parameter monitoring system (CPMS) to measure and record the 3-hour average pressure...
Systematic Approach to the Goalsetting of Higher Education in the Field of Tourism and Hospitality
ERIC Educational Resources Information Center
Romanova, Galina; Maznichenko, Marina; Neskoromnyh, Nataliya
2016-01-01
The article deals with key problems and contradictions of training of university graduates for the tourism and hospitality industry in Russia, primarily associated with the setting of educational goals. The article formulates the discussion points related to the updating of the existing educational standards for the enlarged "Service and…
Beginning Plant Biotechnology Laboratories Using Fast Plants.
ERIC Educational Resources Information Center
Williams, Mike
This set of 16 laboratory activities is designed to illustrate the life cycle of Brassica plants from seeds in pots to pods in 40 days. At certain points along the production cycle of the central core of labs, there are related lateral labs to provide additional learning opportunities employing this family of plants, referred to as "fast…
USDA-ARS?s Scientific Manuscript database
Thirty-one years of spatially distributed air temperature, relative humidity, dew point temperature, precipitation amount, and precipitation phase data are presented for the Reynolds Creek Experimental Watershed. The data are spatially distributed over a 10 m Lidar-derived digital elevation model at ...
Symbolic Dynamics, Flower Automata and Infinite Traces
NASA Astrophysics Data System (ADS)
Foryś, Wit; Oprocha, Piotr; Bakalarski, Slawomir
Considering a finite alphabet as a set of allowed instructions, we can identify finite words with basic actions or programs. Hence infinite paths on a flower automaton can represent the order in which these programs are executed, and the flower shift related to it represents the list of instructions to be executed at some mid-point of the computation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramachandran, Thiagarajan; Kundu, Soumya; Chen, Yan
This paper develops and utilizes an optimization-based framework to investigate the maximal energy efficiency potentially attainable by HVAC system operation in a non-predictive context. Performance is evaluated relative to the existing state-of-the-art set-point reset strategies. The expected efficiency increase driven by relaxing operating constraints is evaluated.
Connecting the Forgotten Half: The School-to-Work Transition of Noncollege-Bound Youth
ERIC Educational Resources Information Center
Ling, Thomson J.; O'Brien, Karen M.
2013-01-01
While previous research has examined the school-to-work transition of noncollege-bound youth, most studies have considered how a limited set of variables relates to job attainment at a single point in time. This exploratory study extended beyond the identification of constructs associated with obtaining a job to investigate how several factors, collected…
Region 9 NPL Sites (Superfund Sites 2013)
NPL site POINT locations for the US EPA Region 9. NPL (National Priorities List) sites are hazardous waste sites that are eligible for extensive long-term cleanup under the Superfund program. Eligibility is determined by a scoring method called the Hazard Ranking System. Sites with high scores are listed on the NPL. The majority of the locations are derived from polygon centroids of digitized site boundaries. The remaining locations were generated from address geocoding and digitizing. Areas covered by this data set include Arizona, California, Nevada, Hawaii, Guam, American Samoa, the Northern Marianas and Trust Territories. Attributes include NPL status codes, NPL industry type codes and environmental indicators. The related table NPL_Contaminants contains information about contaminated media types and chemicals. This is a one-to-many relate and can be related to the feature class using the relationship classes under the Feature Data Set ENVIRO_CONTAMINANT.
Meshless Geometric Subdivision
2004-10-01
The Michelangelo Youthful data set is shown on the right. For p ∈ M, the distance function with boundary condition d_M(q, q) = 0 is approximated by |∇d(p, ·)| = F̃(p) … dealing with more complex geometry. We apply our meshless subdivision operator to a base point set of 10088 points generated from the Michelangelo … We acknowledge the permission to use the Michelangelo point sets granted by the Stanford Computer Graphics group. The Isis, 50% decimated and non…
Validating a Monotonically-Integrated Large Eddy Simulation Code for Subsonic Jet Acoustics
NASA Technical Reports Server (NTRS)
Ingraham, Daniel; Bridges, James
2017-01-01
The results of subsonic jet validation cases for the Naval Research Lab's Jet Engine Noise REduction (JENRE) code are reported. Two set points from the Tanna matrix, set point 3 (Ma = 0.5, unheated) and set point 7 (Ma = 0.9, unheated), are attempted on three different meshes. After a brief discussion of the JENRE code and the meshes constructed for this work, the turbulent statistics for the axial velocity are presented and compared to experimental data, with favorable results. Preliminary simulations for set point 23 (Ma = 0.5, Tj/T1 = 1.764) on one of the meshes are also described. Finally, the proposed configuration for the far-field noise prediction with JENRE's Ffowcs Williams-Hawkings solver is detailed.
Bench press exercise: the key points.
Padulo, J; Laffaye, G; Chaouachi, A; Chamari, K
2015-06-01
The bench press exercise (BPE) is receiving increasing interest as a field-testing and training/therapeutic modality to improve neuromuscular performance or to increase bone mass density. Several studies have used BPE as a standard for increasing upper-limb strength. For this purpose, the position of the bar, the loads, the sets, the number of repetitions, the recovery time between sets, the movement speed, the muscular work and the determination of the one-repetition maximum (1-RM) are the classical tools investigated in the literature that have been shown to affect the effect of BPE on neuromuscular performance. The goal of the present short review is to give a picture of the current knowledge on the bench press exercise, which could be very helpful for a better understanding of this standard movement and its effects. Based on the related literature, several recommendations on these key points are presented here.
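The abstract refers to determining the one-repetition maximum (1-RM). As a hedged aside, not a method from this review, a common field estimate of the 1-RM from a submaximal set is the Epley formula; the sketch below assumes that formula:

```python
def epley_one_rm(weight_kg: float, reps: int) -> float:
    """Estimate the one-repetition maximum from a submaximal set using
    the Epley formula 1-RM = w * (1 + reps / 30). This is a generic
    field estimate, not a procedure taken from the cited study."""
    if reps < 1:
        raise ValueError("reps must be >= 1")
    return weight_kg * (1 + reps / 30)

# e.g. 100 kg lifted for 10 repetitions
estimate = epley_one_rm(100, 10)
```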
Managing distance and covariate information with point-based clustering.
Whigham, Peter A; de Graaf, Brandon; Srivastava, Rashmi; Glue, Paul
2016-09-01
Geographic perspectives of disease and the human condition often involve point-based observations and questions of clustering or dispersion within a spatial context. These problems involve a finite set of point observations and are constrained by a larger, but finite, set of locations where the observations could occur. Developing a rigorous method for pattern analysis in this context requires handling spatial covariates, a method for constrained finite spatial clustering, and addressing bias in geographic distance measures. An approach, based on Ripley's K and applied to the problem of clustering with deliberate self-harm (DSH), is presented. Point-based Monte-Carlo simulation of Ripley's K, accounting for socio-economic deprivation and sources of distance measurement bias, was developed to estimate clustering of DSH at a range of spatial scales. A rotated Minkowski L1 distance metric allowed variation in physical distance and clustering to be assessed. Self-harm data was derived from an audit of 2 years' emergency hospital presentations (n = 136) in a New Zealand town (population ~50,000). The study area was defined by residential (housing) land parcels representing a finite set of possible point addresses. Area-based deprivation was spatially correlated. Accounting for deprivation and distance bias showed evidence for clustering of DSH for spatial scales up to 500 m with a one-sided 95 % CI, suggesting that social contagion may be present for this urban cohort. Many problems involve finite locations in geographic space that require estimates of distance-based clustering at many scales. A Monte-Carlo approach to Ripley's K, incorporating covariates and models for distance bias, is crucial when assessing health-related clustering. The case study showed that social network structure defined at the neighbourhood level may account for aspects of neighbourhood clustering of DSH.
Accounting for covariate measures that exhibit spatial clustering, such as deprivation, is crucial when assessing point-based clustering.
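A minimal sketch of the kind of analysis described: point-based Monte-Carlo simulation of Ripley's K over a finite set of candidate locations. The function names and the naive O(n²) estimator (no edge correction, no covariate or distance-bias model) are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def ripleys_k(points, radii, area):
    """Naive Ripley's K estimate (no edge correction) for 2-D points."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    lam = n / area  # intensity
    return np.array([(d[d > 0] < r).sum() / (lam * n) for r in radii])

def mc_envelope(candidates, n_events, radii, area, n_sims=99, seed=0):
    """Monte-Carlo envelope: random subsets of the finite candidate
    locations (e.g. residential parcels) play the role of randomness."""
    rng = np.random.default_rng(seed)
    sims = np.array([
        ripleys_k(candidates[rng.choice(len(candidates), n_events, replace=False)],
                  radii, area)
        for _ in range(n_sims)
    ])
    return sims.min(axis=0), sims.max(axis=0)
```

An observed K lying above the simulated envelope at some radius would indicate clustering at that scale.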
Subsurface failure in spherical bodies. A formation scenario for linear troughs on Vesta’s surface
Stickle, Angela M.; Schultz, P. H.; Crawford, D. A.
2014-10-13
Many asteroids in the Solar System exhibit unusual, linear features on their surface. The Dawn mission recently observed two sets of linear features on the surface of the asteroid 4 Vesta. Geologic observations indicate that these features are related to the two large impact basins at the south pole of Vesta, though no specific mechanism of origin has been determined. Furthermore, the orientation of the features is offset from the center of the basins. Experimental and numerical results reveal that the offset angle is a natural consequence of oblique impacts into a spherical target. We demonstrate that a set of shear planes develops in the subsurface of the body opposite to the point of first contact. Moreover, these subsurface failure zones then propagate to the surface under combined tensile-shear stress fields after the impact to create sets of approximately linear faults on the surface. Comparison between the orientation of damage structures in the laboratory and failure regions within Vesta can be used to constrain impact parameters (e.g., the approximate impact point and likely impact trajectory).
Determination system for solar cell layout in traffic light network using dominating set
NASA Astrophysics Data System (ADS)
Eka Yulia Retnani, Windi; Fambudi, Brelyanes Z.; Slamin
2018-04-01
Graph Theory is one of the fields in Mathematics that solves discrete problems. In daily life, applications of Graph Theory are used to solve various problems. One of the topics in Graph Theory that is used to solve such problems is the dominating set. The concept of a dominating set is used, for example, to locate some objects systematically. In this study, dominating sets are used to determine the dominating points for solar panels, where a vertex represents a traffic light point and an edge represents the connection between the points of the traffic light network. The greedy algorithm is used to search for the dominating points and thereby determine the locations of the solar panels. This research produced an application that can determine the locations of solar panels with optimal results, that is, a minimum set of dominating points.
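The greedy heuristic for dominating sets mentioned above can be sketched as follows; the adjacency representation and tie-breaking rule are illustrative choices, not necessarily those of the cited application:

```python
def greedy_dominating_set(adj):
    """Greedy heuristic for a small dominating set.
    adj maps each vertex (e.g. a traffic light) to the set of its
    neighbours; at each step, pick the vertex whose closed neighbourhood
    covers the most not-yet-dominated vertices."""
    undominated = set(adj)
    dominating = set()
    while undominated:
        best = max(adj, key=lambda v: len((adj[v] | {v}) & undominated))
        dominating.add(best)
        undominated -= adj[best] | {best}
    return dominating

# path graph 1-2-3-4-5: the heuristic returns {2, 4}
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
ds = greedy_dominating_set(path)
```

Every vertex is then either in the returned set or adjacent to a member of it.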
Reverse engineering of machine-tool settings with modified roll for spiral bevel pinions
NASA Astrophysics Data System (ADS)
Liu, Guanglei; Chang, Kai; Liu, Zeliang
2013-05-01
Although a great deal of research has been dedicated to the synthesis of spiral bevel gears, little related to reverse engineering can be found. An approach is proposed to reverse the machine-tool settings of the pinion of a spiral bevel gear drive on the basis of the blank and tooth surface data obtained by a coordinate measuring machine (CMM). Real tooth contact analysis (RTCA) is performed to preliminarily ascertain the contact pattern, the motion curve, as well as the position of the mean contact point. Then the tangent to the contact path and the motion curve are interpolated in the least-squares sense to extract the initial values of the bias angle and the higher-order coefficients (HOC) of the modified roll motion. A trial tooth surface is generated by machine-tool settings derived from the local synthesis relating to the initial meshing performances and modified roll motion. An optimization objective is formed which equals the tooth surface deviation between the real tooth surface and the trial tooth surface. The design variables are the parameters describing the meshing performances at the mean contact point in addition to the HOC. When the objective is optimized within an arbitrarily given convergence tolerance, the machine-tool settings together with the HOC are obtained. The proposed approach is verified by a spiral bevel pinion used in the accessory gearbox of an aviation engine. The trial tooth surfaces approach the real tooth surface on the whole in the example. The results show that the convergent tooth surface deviation for the concave side is on average less than 0.5 μm, and less than 1.3 μm for the convex side. The biggest tooth surface deviation is 6.7 μm, located at the corner of the grid on the convex side. The nodes with relatively larger tooth surface deviations are all located at the boundary of the grid.
An approach is proposed to determine the machine-tool settings of a spiral bevel pinion by way of reverse engineering, without knowing the theoretical tooth surfaces or the corresponding machine-tool settings.
Gadd, C. S.; Baskaran, P.; Lobach, D. F.
1998-01-01
Extensive utilization of point-of-care decision support systems will be largely dependent on the development of user interaction capabilities that make them effective clinical tools in patient care settings. This research identified critical design features of point-of-care decision support systems that are preferred by physicians, through a multi-method formative evaluation of an evolving prototype of an Internet-based clinical decision support system. Clinicians used four versions of the system--each highlighting a different functionality. Surveys and qualitative evaluation methodologies assessed clinicians' perceptions regarding system usability and usefulness. Our analyses identified features that improve perceived usability, such as telegraphic representations of guideline-related information, facile navigation, and a forgiving, flexible interface. Users also preferred features that enhance usefulness and motivate use, such as an encounter documentation tool and the availability of physician instruction and patient education materials. In addition to identifying design features that are relevant to efforts to develop clinical systems for point-of-care decision support, this study demonstrates the value of combining quantitative and qualitative methods of formative evaluation with an iterative system development strategy to implement new information technology in complex clinical settings. PMID:9929188
Optimization of pressure probe placement and data analysis of engine-inlet distortion
NASA Astrophysics Data System (ADS)
Walter, S. F.
The purpose of this research is to examine methods by which quantification of inlet flow distortion may be improved. Specifically, this research investigates how data interpolation affects results, how to optimize sampling locations of the flow, and how sensitive the results are to the number of sample locations. The main parameters that are indicative of a "good" design are total pressure recovery, mass flow capture, and distortion. This work focuses on total pressure distortion, which describes the amount of non-uniformity that exists in the flow as it enters the engine. All engines must tolerate some level of distortion; however, too much distortion can cause the engine to stall or the inlet to unstart. Flow distortion is measured at the interface between the inlet and the engine. To determine inlet flow distortion, a combination of computational and experimental pressure data is generated and then collapsed into an index that indicates the amount of distortion. Computational simulations generate continuous contour maps, but experimental data are discrete. Researchers require continuous contour maps to evaluate the overall distortion pattern, yet there is no guidance on how best to manipulate discrete points into a continuous pattern. Using one experimental 320-probe data set and one 320-point computational data set, with three test runs each, this work compares the pressure results obtained using all 320 points of data from the original sets, both quantitatively and qualitatively, with results derived from selecting 40-point subsets and interpolating to 320 grid points. Each of the two 40-point sets was interpolated to 320 grid points using four different interpolation methods in an attempt to establish the best method for interpolating small sets of data into an accurate, continuous contour map. The interpolation methods investigated are bilinear, spline, and Kriging in Cartesian space, as well as angular in polar space.
Spline interpolation methods should be used, as they result in the most accurate, precise, and visually correct predictions when compared with results achieved from the full data sets. Researchers were also interested in whether fewer than the recommended 40 probes could be used, especially when placed in areas of high interest, while still obtaining equivalent or better results. For this investigation, the computational results from a two-dimensional inlet and experimental results from an axisymmetric inlet were used. To find the areas of interest, a uniform sampling of all possible locations was run through a Monte Carlo simulation with a varying number of probes, and a probability density function of the resultant distortion index was plotted. Certain probes are required to come within the desired accuracy level of the distortion index based on the full data set. For the experimental results, all three test cases could be characterized with 20 probes. For the axisymmetric inlet, placing 40 probes in select locations could bring the results for parameters of interest within 10% of the exact solution for almost all cases. For the two-dimensional inlet, the results were not as clear: 80 probes were required to get within 10% of the exact solution for all run numbers, although this is largely due to the small value of the exact result. The sensitivity of each probe added to the experiment was analyzed. Instead of looking at the overall pattern established by optimizing probe placements, the focus is on varying the number of sampled probes from 20 to 40. The number of points falling within a 1% tolerance band of the exact solution were counted as good points. The results were normalized for each data set and a general sensitivity function was found to determine the sensitivity of the results. A linear regression was used to generalize the results for all data sets used in this work.
The results can also be assessed by directly comparing the number of good points obtained with various numbers of probes. The sensitivity of the results is higher when fewer probes are used and gradually tapers off near 40 probes. There is a bigger gain in good points when the number of probes is increased from 20 to 21 than from 39 to 40.
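As an illustration of the simplest of the four interpolation methods named (bilinear), here is a minimal pure-NumPy sketch; the grid shapes and coordinate conventions are assumptions for the example, not the study's actual probe layout:

```python
import numpy as np

def bilinear(coarse, xf, yf):
    """Bilinearly interpolate a coarse 2-D grid onto fractional
    coordinates (xf, yf) given in coarse-grid index units."""
    x0 = np.clip(np.floor(xf).astype(int), 0, coarse.shape[1] - 2)
    y0 = np.clip(np.floor(yf).astype(int), 0, coarse.shape[0] - 2)
    tx, ty = xf - x0, yf - y0
    c00 = coarse[y0, x0]
    c10 = coarse[y0, x0 + 1]
    c01 = coarse[y0 + 1, x0]
    c11 = coarse[y0 + 1, x0 + 1]
    return (c00 * (1 - tx) * (1 - ty) + c10 * tx * (1 - ty)
            + c01 * (1 - tx) * ty + c11 * tx * ty)

# upsample a 5x8 (40-point) grid to a 20x16 (320-point) grid
coarse = np.arange(40, dtype=float).reshape(5, 8)
yf, xf = np.meshgrid(np.linspace(0, 4, 20), np.linspace(0, 7, 16), indexing="ij")
fine = bilinear(coarse, xf, yf)
```

Bilinear interpolation reproduces any linear field exactly; spline and Kriging methods would differ only in the weights applied to the surrounding samples.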
ERIC Educational Resources Information Center
Lopez Alonso, A. O.
From the best-fit lines corresponding to two families of conditional judgements (the constant stimulus family and the constant condition family, both defined for the same scale object), the coordinates of the point of intersection of the two lines (the indifference point) are obtained. These values are studied in relation to the mean values of the…
Automatic ground control point recognition with parallel associative memory
NASA Technical Reports Server (NTRS)
Al-Tahir, Raid; Toth, Charles K.; Schenck, Anton F.
1990-01-01
The basic principle of the associative memory is to match an unknown input pattern against a stored training set and respond with the 'closest match' and the corresponding label. Generally, an associative memory system requires two preparatory steps: selecting attributes of the pattern class, and training the system by associating patterns with labels. Experimental results gained from using Parallel Associative Memory are presented. The primary concern is an automatic search for ground control points in aerial photographs. Synthetic patterns are tested first, followed by real data. The results are encouraging, as a relatively high level of correct matches is reached.
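A toy sketch of the associative-memory recall principle described (match against a stored training set, respond with the closest match's label); the Hamming-distance similarity measure and the label names are assumptions for illustration:

```python
import numpy as np

def associative_recall(memory, labels, pattern):
    """Return the label of the stored binary pattern with the smallest
    Hamming distance to the input pattern."""
    mem = np.asarray(memory)
    dists = (mem != np.asarray(pattern)).sum(axis=1)
    return labels[int(np.argmin(dists))]

# two stored binary patterns with hypothetical ground-control-point labels
memory = [[1, 1, 0, 0], [0, 0, 1, 1]]
labels = ["road-corner", "field-mark"]
```

A noisy input such as [1, 0, 0, 0] is still recalled as "road-corner", since it differs from that stored pattern in only one bit.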
Condensation and critical exponents of an ideal non-Abelian gas
NASA Astrophysics Data System (ADS)
Talaei, Zahra; Mirza, Behrouz; Mohammadzadeh, Hosein
2017-11-01
We investigate an ideal gas obeying non-Abelian statistics and derive expressions for some of its thermodynamic quantities. It is found that the thermodynamic quantities are finite at the condensation point, where their derivatives diverge, and near this point they behave as |T - T_c|^{-ρ}, in which T_c denotes the condensation temperature and ρ is a critical exponent. The critical exponents related to the heat capacity and compressibility are obtained by fitting numerical results; the others are obtained using the scaling law hypothesis for a three-dimensional non-Abelian ideal gas. This set of critical exponents introduces a new universality class.
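Extracting a critical exponent from numerical results near T_c can be sketched as a log-log linear regression; the synthetic data below are an assumption for illustration only:

```python
import numpy as np

# synthetic quantity diverging as |T - Tc|^(-rho) near the condensation point
Tc, rho, A = 2.0, 0.5, 1.3
T = Tc + np.logspace(-4, -1, 50)       # approach Tc from above
Q = A * np.abs(T - Tc) ** (-rho)

# log Q = log A - rho * log|T - Tc|, so a log-log fit recovers the exponent
slope, intercept = np.polyfit(np.log(np.abs(T - Tc)), np.log(Q), 1)
fitted_rho = -slope
```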
Representation and display of vector field topology in fluid flow data sets
NASA Technical Reports Server (NTRS)
Helman, James; Hesselink, Lambertus
1989-01-01
The visualization of physical processes in general and of vector fields in particular is discussed. An approach to visualizing flow topology that is based on the physics and mathematics underlying the physical phenomenon is presented. It involves determining critical points in the flow where the velocity vector vanishes. The critical points, connected by principal lines or planes, determine the topology of the flow. The complexity of the data is reduced without sacrificing the quantitative nature of the data set. By reducing the original vector field to a set of critical points and their connections, a representation of the topology of a two-dimensional vector field that is much smaller than the original data set but retains with full precision the information pertinent to the flow topology is obtained. This representation can be displayed as a set of points and tangent curves or as a graph. Analysis (including algorithms), display, interaction, and implementation aspects are discussed.
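Critical points of a 2-D vector field are commonly classified by the eigenvalues of the velocity Jacobian at the point where the velocity vanishes; below is a minimal sketch (the category names and tolerance are conventional choices, not this paper's exact taxonomy):

```python
import numpy as np

def classify_critical_point(J, tol=1e-9):
    """Classify a 2-D critical point from the Jacobian J of the velocity
    field, evaluated where the velocity vector vanishes."""
    ev = np.linalg.eigvals(J)
    re, im = ev.real, ev.imag
    if np.all(np.abs(im) > tol):          # complex pair: rotation present
        if np.all(np.abs(re) < tol):
            return "center"
        return "attracting focus" if re[0] < 0 else "repelling focus"
    if re[0] * re[1] < 0:                 # real eigenvalues, opposite signs
        return "saddle"
    return "attracting node" if re.max() < 0 else "repelling node"
```

Connecting such points by the principal curves emanating from saddles yields the compact topological representation the abstract describes.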
Automated crystallographic system for high-throughput protein structure determination.
Brunzelle, Joseph S; Shafaee, Padram; Yang, Xiaojing; Weigand, Steve; Ren, Zhong; Anderson, Wayne F
2003-07-01
High-throughput structural genomic efforts require software that is highly automated, distributive and requires minimal user intervention to determine protein structures. Preliminary experiments were set up to test whether automated scripts could utilize a minimum set of input parameters and produce a set of initial protein coordinates. From this starting point, a highly distributive system was developed that could determine macromolecular structures at a high throughput rate, warehouse and harvest the associated data. The system uses a web interface to obtain input data and display results. It utilizes a relational database to store the initial data needed to start the structure-determination process as well as generated data. A distributive program interface administers the crystallographic programs which determine protein structures. Using a test set of 19 protein targets, 79% were determined automatically.
Applications of invariants in general relativity
NASA Astrophysics Data System (ADS)
Pelavas, Nicos
This thesis explores various kinds of invariants and their use in general relativity. To start, the simplest invariants, those polynomial in the Riemann tensor, are examined and the currently accepted Carminati-Zakhary set is compared to the Carminati-McLenaghan set. A number of algebraic relations linking the two sets are given. The concept of gravitational entropy, as proposed by Penrose, has some physically appealing properties which have motivated attempts to quantify this notion using various invariants. We study this in the context of self-similar spacetimes. A general result is obtained which gives the Lie derivative of any invariant or ratio of invariants along a homothetic trajectory. A direct application of this result shows that the currently used gravitational epoch function fails to satisfy certain criteria. Based on this work, candidates for a gravitational epoch function are proposed that behave accordingly in these models. The instantaneous ergo surface in the Kerr solution is studied and shown to possess conical points at the poles when embedded in three dimensional Euclidean space. These intrinsic singularities had remained undiscovered for a generation. We generalize the Gauss-Bonnet theorem to accommodate these points and use it to compute a topological invariant, the Euler characteristic, for this surface. Interest in solutions admitting a cosmological constant has prompted us to study ergo surfaces in stationary non-asymptotically flat spacetimes. In these cases we show that there is in fact a family of ergo surfaces. By using a kinematic invariant constructed from timelike Killing vectors we try to find a preferred ergo surface. We illustrate to what extent this invariant fails to provide such a measure.
Meteorological conditions are associated with physical activities performed in open-air settings
NASA Astrophysics Data System (ADS)
Suminski, Richard R.; Poston, Walker C.; Market, Patrick; Hyder, Melissa; Sara, Pyle A.
2008-01-01
Meteorological conditions (MC) are believed to modify physical activity. However, studies in this area are limited and none have looked at the associations between MC and physical activity in open-air settings. Therefore, we examined the relationships between MC and physical activities performed on sidewalks/streets and outdoor oval tracks. Observation techniques were used to count individuals walking to school, exercising on oval tracks and walking/jogging/biking on sidewalks/streets. Meteorological conditions were obtained from an Automated Surface Observing System located at a nearby airport for the same time periods during which physical activities were observed. On weekdays, fewer children were seen walking to school and more bicyclists were observed on sidewalks/streets as wind speed increased (p < 0.05). Ambient and apparent temperatures were positively (p < 0.05) and humidity and barometric pressure negatively (p < 0.005) related to the number of individuals walking on the track. Meteorological conditions were not significantly associated with physical activities observed on weekends. Multiple linear regression analyses showed that apparent temperature (+), barometric pressure (-) and dew point (-) accounted for 58.0% of the variance in the number of walkers on the track. A significant proportion of the variance (>30%) in the number of joggers and the length of time they jogged was accounted for by apparent temperature (+) and dew point (-). We found that meteorological conditions are related to physical activity in open-air settings. The results embellish the context in which environmental-physical activity relationships should be interpreted and provide important information for researchers applying the observation method in open-air settings.
Reliability of an experimental method to analyse the impact point on a golf ball during putting.
Richardson, Ashley K; Mitchell, Andrew C S; Hughes, Gerwyn
2015-06-01
This study aimed to examine the reliability of an experimental method identifying the location of the impact point on a golf ball during putting. Forty trials were completed using a mechanical putting robot set to reproduce a putt of 3.2 m, with four different putter-ball combinations. After locating the centre of the dimple pattern (centroid), the following variables were tested: the distance of the impact point from the centroid, the angle of the impact point from the centroid, and the distance of the impact point from the centroid derived from the X, Y coordinates. Good to excellent reliability was demonstrated in all impact variables, reflected in very strong relative (ICC = 0.98-1.00) and absolute reliability (SEM% = 0.9-4.3%). The highest SEM% observed was 7% for the angle of the impact point from the centroid. In conclusion, the experimental method was shown to be reliable at locating the centroid location of a golf ball, therefore allowing for the identification of the point of impact with the putter head, and is suitable for use in subsequent studies.
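The distance and angle of the impact point from the centroid, derived from X, Y coordinates as described, amount to a polar-coordinate conversion; a minimal sketch (the coordinate conventions are assumed, not taken from the paper):

```python
import math

def impact_offset(centroid, impact):
    """Distance and angle (degrees, counter-clockwise from +x) of the
    impact point relative to the ball's dimple-pattern centroid."""
    dx = impact[0] - centroid[0]
    dy = impact[1] - centroid[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))
```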
Du, Shaoyi; Xu, Yiting; Wan, Teng; Hu, Huaizhong; Zhang, Sirui; Xu, Guanglin; Zhang, Xuetao
2017-01-01
The iterative closest point (ICP) algorithm is efficient and accurate for rigid registration, but it needs good initial parameters and easily fails when the rotation angle between two point sets is large. To deal with this problem, a new objective function is proposed by introducing a rotation-invariant feature based on the Euclidean distance between each point and a global reference point, where the global reference point is a rotation invariant. This optimization problem is then solved by a variant of the ICP algorithm, which is an iterative method. Firstly, the accurate correspondence is established by using the weighted rotation-invariant feature distance and position distance together. Secondly, the rigid transformation is solved by the singular value decomposition method. Thirdly, the weight is adjusted to control the relative contributions of the positions and features. Finally, the new algorithm accomplishes the registration in a coarse-to-fine way whatever the initial rotation angle is, and is demonstrated to converge monotonically. The experimental results validate that the proposed algorithm is more accurate and robust compared with the original ICP algorithm.
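The SVD step mentioned (solving the rigid transformation from established correspondences) is the classical Kabsch solution used inside each ICP iteration; the sketch below shows only that standard step, not the paper's rotation-invariant-feature weighting:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t with R @ p + t ≈ q
    for corresponded point sets P, Q (the SVD/Kabsch step of ICP)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    S = np.eye(P.shape[1])
    S[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ S @ U.T
    return R, cq - R @ cp
```

In full ICP this step alternates with re-establishing nearest-neighbour correspondences until the alignment converges.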
Synergies in the space of control variables within the equilibrium-point hypothesis.
Ambike, S; Mattos, D; Zatsiorsky, V M; Latash, M L
2016-02-19
We use an approach rooted in the recent theory of synergies to analyze possible co-variation between two hypothetical control variables involved in finger force production based on the equilibrium-point (EP) hypothesis. These control variables are the referent coordinate (R) and apparent stiffness (C) of the finger. We tested the hypothesis that inter-trial co-variation in the {R; C} space during repeated, accurate force production trials stabilizes the fingertip force. This was expected to correspond to a relatively low amount of inter-trial variability affecting force and a high amount of variability keeping the force unchanged. We used the "inverse piano" apparatus to apply small and smooth positional perturbations to fingers during force production tasks. Across trials, R and C showed strong co-variation, with the data points lying close to a hyperbolic curve. Hyperbolic regressions accounted for over 99% of the variance in the {R; C} space. Another analysis was conducted by randomizing the original {R; C} data sets and creating surrogate data sets that were then used to compute predicted force values. The surrogate sets always showed much higher force variance compared to the actual data, thus reinforcing the conclusion that finger force control was organized in the {R; C} space, as predicted by the EP hypothesis, and involved co-variation in that space stabilizing total force.
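The surrogate-data analysis described (randomizing the {R; C} pairings and comparing force variance) can be sketched as follows, assuming the simple EP-style force model F = C(x - R) and illustrative parameter ranges:

```python
import numpy as np

rng = np.random.default_rng(42)
x, F_target = 0.0, 10.0                  # fingertip position and target force
C = rng.uniform(2.0, 8.0, 200)           # apparent stiffness varies across trials
R = x - F_target / C + rng.normal(0.0, 0.01, 200)  # referent coordinate co-varies
F = C * (x - R)                          # force reconstructed per trial

# destroy the {R; C} co-variation by shuffling the pairings
F_surrogate = C * (x - rng.permutation(R))
```

With the hyperbolic co-variation intact, force variance stays small; the shuffled surrogate shows far larger variance, mirroring the analysis in the abstract.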
A novel quantum solution to secure two-party distance computation
NASA Astrophysics Data System (ADS)
Peng, Zhen-wan; Shi, Run-hua; Wang, Pan-hong; Zhang, Shun
2018-06-01
Secure Two-Party Distance Computation is an important primitive of Secure Multiparty Computational Geometry: it involves two parties, each holding a private point, who want to jointly compute the distance between their points without revealing anything about their respective private information. Secure Two-Party Distance Computation has important potential applications in settings with high security requirements, such as privacy-preserving determination of spatial location relations, determination of polygon similarity, and so on. In this paper, we present a quantum protocol for Secure Two-Party Distance Computation using QKD-based Quantum Private Query. The security of the protocol rests on the physical principles of quantum mechanics rather than on computational difficulty assumptions, and it can therefore ensure higher security than related classical protocols.
Cubature versus Fekete-Gauss nodes for spectral element methods on simplicial meshes
NASA Astrophysics Data System (ADS)
Pasquetti, Richard; Rapetti, Francesca
2017-10-01
In a recent JCP paper [9], a higher-order triangular spectral element method (TSEM) is proposed to address seismic wave field modeling. The main interest of this TSEM is that the mass matrix is diagonal, so that an explicit time marching becomes very cheap. This property results from the fact that, similarly to the usual SEM (say QSEM), the basis functions are Lagrange polynomials based on a set of points that shows both nice interpolation and quadrature properties. In the quadrangle, i.e. for the QSEM, the set of points is simply obtained by tensorial product of Gauss-Lobatto-Legendre (GLL) points. In the triangle, however, finding such an appropriate set of points is not trivial. Thus, the work of [9] follows earlier works that started in the 2000s [2,6,11] and now provides cubature nodes and weights up to N = 9, where N is the total degree of the polynomial approximation. Here we wish to evaluate the accuracy of this cubature-nodes TSEM with respect to the Fekete-Gauss one, see e.g. [12], which makes use of two sets of points, namely the Fekete points and the Gauss points of the triangle for interpolation and quadrature, respectively. Because the Fekete-Gauss TSEM is in the spirit of any nodal hp-finite element method, one may expect that the conclusions of this Note will remain relevant if other sets of carefully defined interpolation points are used.
Maxwell Strata and Cut Locus in the Sub-Riemannian Problem on the Engel Group
NASA Astrophysics Data System (ADS)
Ardentov, Andrei A.; Sachkov, Yuri L.
2017-12-01
We consider the nilpotent left-invariant sub-Riemannian structure on the Engel group. This structure gives a fundamental local approximation of a generic rank 2 sub-Riemannian structure on a 4-manifold near a generic point (in particular, of the kinematic models of a car with a trailer). On the other hand, this is the simplest sub-Riemannian structure of step three. We describe the global structure of the cut locus (the set of points where geodesics lose their global optimality), the Maxwell set (the set of points that admit more than one minimizer), and the intersection of the cut locus with the caustic (the set of conjugate points along all geodesics). The group of symmetries of the cut locus is described: it is generated by a one-parameter group of dilations R+ and a discrete group of reflections Z2 × Z2 × Z2. The cut locus admits a stratification with 6 three-dimensional strata, 12 two-dimensional strata, and 2 one-dimensional strata. Three-dimensional strata of the cut locus are Maxwell strata of multiplicity 2 (for each point there are 2 minimizers). Two-dimensional strata of the cut locus consist of conjugate points. Finally, one-dimensional strata are Maxwell strata of infinite multiplicity; they consist of conjugate points as well. Projections of sub-Riemannian geodesics to the 2-dimensional plane of the distribution are Euler elasticae. For each point of the cut locus, we describe the Euler elasticae corresponding to minimizers coming to this point. Finally, we describe the structure of the optimal synthesis, i.e., the set of minimizers for each terminal point in the Engel group.
Morse Theory and Relative Equilibria in the Planar n-Vortex Problem
NASA Astrophysics Data System (ADS)
Roberts, Gareth E.
2018-04-01
Morse theoretical ideas are applied to the study of relative equilibria in the planar n-vortex problem. For the case of positive circulations, we prove that the Morse index of a critical point of the Hamiltonian restricted to a level surface of the angular impulse is equal to the number of pairs of real eigenvalues of the corresponding relative equilibrium periodic solution. The Morse inequalities are then used to prove the instability of some families of relative equilibria in the four-vortex problem with two pairs of equal vorticities. We also show that, for positive circulations, relative equilibria cannot accumulate on the collision set.
Using Gaussian windows to explore a multivariate data set
NASA Technical Reports Server (NTRS)
Jaeckel, Louis A.
1991-01-01
In an earlier paper, I recounted an exploratory analysis, using Gaussian windows, of a data set derived from the Infrared Astronomical Satellite. Here, my goals are to develop strategies for finding structural features in a data set in a many-dimensional space, and to find ways to describe the shape of such a data set. After a brief review of Gaussian windows, I describe the current implementation of the method. I give some ways of describing features that we might find in the data, such as clusters and saddle points, and also extended structures such as a 'bar', which is an essentially one-dimensional concentration of data points. I then define a distance function, which I use to determine which data points are 'associated' with a feature. Data points not associated with any feature are called 'outliers'. I then explore the data set, giving the strategies that I used and quantitative descriptions of the features that I found, including clusters, bars, and a saddle point. I tried to use strategies and procedures that could, in principle, be used in any number of dimensions.
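As a rough sketch of the windowing idea (not the paper's implementation), a Gaussian window centred at a trial point weights every datum by its distance to the centre; the weighted mean and covariance then summarize the local shape of the data, e.g. a nearly rank-one covariance hints at a 'bar':

```python
import numpy as np

def gaussian_window(data, center, sigma):
    """Weighted local summary of `data` under a Gaussian window.

    Returns the weighted mean and covariance; the eigenstructure of the
    covariance characterizes the local feature (cluster, bar, ...)."""
    d2 = ((data - center) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2 / sigma ** 2)   # Gaussian weight per data point
    w /= w.sum()
    mu = w @ data                        # weighted mean
    X = data - mu
    cov = (w[:, None] * X).T @ X         # weighted covariance
    return mu, cov
```

Repeatedly moving the window centre toward the local weighted mean and inspecting the covariance eigenvalues is one way to walk toward a feature and describe its shape, in the spirit of the exploration strategies discussed above.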
Arndt, Michael; Hitzmann, Bernd
2004-01-01
A glucose control system is presented, which is able to control cultivations of Saccharomyces cerevisiae even at low glucose concentrations. Glucose concentrations are determined using a special flow injection analysis (FIA) system, which does not require a sampling module. An extended Kalman filter is employed for smoothing the glucose measurements as well as for the prediction of glucose and biomass concentration, the maximum specific growth rate, and the volume of the culture broth. The predicted values are utilized for feedforward/feedback control of the glucose concentration at set points of 0.08 and 0.05 g/L. The controller established well-defined conditions over several hours up to biomass concentrations of 13.5 and 20.7 g/L, respectively. The specific glucose uptake rates at both set points were 1.04 and 0.68 g/g/h, respectively. It is demonstrated that during fed-batch cultivation an overall pure oxidative metabolism of glucose is maintained at the lower set point and a specific ethanol production rate of 0.18 g/g/h at the higher set point.
A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.
Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang
2017-01-01
Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection in noisy biological time series. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements from a fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstad, A., E-mail: anna.bernstad@chemeng.lth.se; Cour Jansen, J. la
2012-12-15
Highlights: • GHG emissions from different treatment alternatives vary largely in 25 reviewed comparative LCAs of bio-waste management. • System-boundary settings often vary largely in the reviewed studies. • Existing LCA guidelines give varying recommendations in relation to several key issues. - Abstract: Twenty-five comparative life cycle assessments (LCAs) addressing food waste treatment were reviewed, including the treatment alternatives landfill, thermal treatment, composting (small and large scale) and anaerobic digestion. The global warming potential related to these treatment alternatives varies largely amongst the studies. Large differences in the setting of system boundaries, methodological choices and variations in the input data used were seen between the studies. Also, a number of internal contradictions were identified, many times resulting in biased comparisons between alternatives. Thus, the noticed differences in global warming potential are found to result not from actual differences in the environmental impacts of the studied systems, but rather from differences in how the studies were performed. A number of key issues with a high impact on the overall global warming potential of different treatment alternatives for food waste were identified through one-way sensitivity analyses in relation to a previously performed LCA of food waste management. Assumptions related to the characteristics of the treated waste; losses and emissions of carbon, nutrients and other compounds during collection, storage and pretreatment; potential energy recovery through combustion; emissions from composting; emissions from storage and land use of bio-fertilizers and chemical fertilizers; and eco-profiles of substituted goods were all identified as highly relevant for the outcomes of this type of comparison.
As the use of LCA in this area is likely to increase in coming years, it is highly relevant to establish more detailed guidelines within this field in order to increase both the general quality of assessments and the potential for cross-study comparisons.
Permitted and forbidden sets in symmetric threshold-linear networks.
Hahnloser, Richard H R; Seung, H Sebastian; Slotine, Jean-Jacques
2003-03-01
The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is nonconnected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
An infinite set of Ward identities for adiabatic modes in cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinterbichler, Kurt; Hui, Lam; Khoury, Justin, E-mail: khinterbichler@perimeterinstitute.ca, E-mail: lh399@columbia.edu, E-mail: jkhoury@sas.upenn.edu
2014-01-01
We show that the correlation functions of any single-field cosmological model with constant growing modes are constrained by an infinite number of novel consistency relations, which relate (N+1)-point correlation functions with a soft-momentum scalar or tensor mode to a symmetry transformation on N-point correlation functions of hard-momentum modes. We derive these consistency relations from Ward identities for an infinite tower of non-linearly realized global symmetries governing scalar and tensor perturbations. These symmetries can be labeled by an integer n. At each order n, the consistency relations constrain, completely for n = 0, 1 and partially for n ≥ 2, the q^n behavior of the soft limits. The identities at n = 0 recover Maldacena's original consistency relations for a soft scalar and tensor mode, n = 1 gives the recently discovered conformal consistency relations, and the identities for n ≥ 2 are new. As a check, we verify directly that the n = 2 identity is satisfied by known correlation functions in slow-roll inflation.
Thermoelectric Control Of Temperatures Of Pressure Sensors
NASA Technical Reports Server (NTRS)
Burkett, Cecil G., Jr.; West, James W.; Hutchinson, Mark A.; Lawrence, Robert M.; Crum, James R.
1995-01-01
Prototype controlled-temperature enclosure containing thermoelectric devices developed to house electronically scanned array of pressure sensors. Enclosure needed because (1) temperatures of transducers in sensors must be maintained at specified set point to ensure proper operation and calibration and (2) sensors sometimes used to measure pressure in hostile environments (wind tunnels in original application) that are hotter or colder than set point. Thus, depending on temperature of pressure-measurement environment, thermoelectric devices in enclosure used to heat or cool transducers to keep them at set point.
Visual Communication in PowerPoint Presentations in Applied Linguistics
ERIC Educational Resources Information Center
Kmalvand, Ayad
2014-01-01
PowerPoint knowledge presentation as a digital genre has established itself as the main software by which the findings of theses are disseminated in the academic settings. Although the importance of PowerPoint presentations is typically realized in academic settings like lectures, conferences, and seminars, the study of the visual features of…
Statistical density modification using local pattern matching
Terwilliger, Thomas C.
2007-01-23
A computer implemented method modifies an experimental electron density map. A set of selected known experimental and model electron density maps is provided, and standard templates of electron density are created from the selected maps by clustering and averaging values of electron density in a spherical region about each point in a grid that defines each selected map. Histograms are also created from the selected experimental and model electron density maps that relate the value of electron density at the center of each of the spherical regions to a correlation coefficient of the density surrounding each corresponding grid point in each one of the standard templates. The standard templates and the histograms are applied to grid points on the experimental electron density map to form new estimates of electron density at each grid point in the experimental electron density map.
Fundamental limits of repeaterless quantum communications
Pirandola, Stefano; Laurenza, Riccardo; Ottaviani, Carlo; Banchi, Leonardo
2017-01-01
Quantum communications promises reliable transmission of quantum information, efficient distribution of entanglement and generation of completely secure keys. For all these tasks, we need to determine the optimal point-to-point rates that are achievable by two remote parties at the ends of a quantum channel, without restrictions on their local operations and classical communication, which can be unlimited and two-way. These two-way assisted capacities represent the ultimate rates that are reachable without quantum repeaters. Here, by constructing an upper bound based on the relative entropy of entanglement and devising a dimension-independent technique dubbed ‘teleportation stretching', we establish these capacities for many fundamental channels, namely bosonic lossy channels, quantum-limited amplifiers, dephasing and erasure channels in arbitrary dimension. In particular, we exactly determine the fundamental rate-loss tradeoff affecting any protocol of quantum key distribution. Our findings set the limits of point-to-point quantum communications and provide precise and general benchmarks for quantum repeaters. PMID:28443624
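For the bosonic lossy channel, the rate-loss tradeoff established here takes a closed form: the two-way assisted secret-key capacity is K = -log2(1 - eta) secret bits per channel use for transmissivity eta, which decays linearly as eta/ln 2 ≈ 1.44 eta at high loss. A few lines suffice to evaluate this benchmark:

```python
import numpy as np

def secret_key_capacity(eta):
    """Repeaterless secret-key capacity of a lossy channel with
    transmissivity eta (two-way assisted): K = -log2(1 - eta)."""
    return -np.log2(1.0 - eta)

# At 3 dB loss (eta = 0.5) the capacity is exactly 1 secret bit per use;
# at high loss it falls off linearly, roughly 1.44 * eta bits per use.
```

Any point-to-point QKD protocol's rate can be checked against this curve, which is the sense in which the result benchmarks quantum repeaters.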
Van Dornshuld, Eric; Holy, Christina M; Tschumper, Gregory S
2014-05-08
This work provides the first characterization of five stationary points of the homogeneous thioformaldehyde dimer, (CH2S)2, and seven stationary points of the heterogeneous formaldehyde/thioformaldehyde dimer, CH2O/CH2S, with correlated ab initio electronic structure methods. Full geometry optimizations and corresponding harmonic vibrational frequencies were computed with second-order Møller-Plesset perturbation theory (MP2) and 13 different density functionals in conjunction with triple-ζ basis sets augmented with diffuse and multiple sets of polarization functions. The MP2 results indicate that the three stationary points of (CH2S)2 and four of CH2O/CH2S are minima, in contrast to two stationary points of the formaldehyde dimer, (CH2O)2. Single-point energies were also computed using the explicitly correlated MP2-F12 and CCSD(T)-F12 methods and basis sets as large as heavy-aug-cc-pVTZ. The (CH2O)2 and CH2O/CH2S MP2 and MP2-F12 binding energies deviated from the CCSD(T)-F12 binding energies by no more than 0.2 and 0.4 kcal mol^-1, respectively. The (CH2O)2 and CH2O/CH2S global minimum is the same at every level of theory. However, the MP2 methods overbind (CH2S)2 by as much as 1.1 kcal mol^-1, effectively altering the energetic ordering of the thioformaldehyde dimer minima relative to the CCSD(T)-F12 energies. The CCSD(T)-F12 binding energies of the (CH2O)2 and CH2O/CH2S stationary points are quite similar, with the former ranging from around -2.4 to -4.6 kcal mol^-1 and the latter from about -1.1 to -4.4 kcal mol^-1. Corresponding (CH2S)2 stationary points have appreciably smaller CCSD(T)-F12 binding energies ranging from ca. -1.1 to -3.4 kcal mol^-1. The vibrational frequency shifts upon dimerization are also reported for each minimum on the MP2 potential energy surfaces.
Bandholm, Thomas; Thorborg, Kristian; Lunn, Troels Haxholdt; Kehlet, Henrik; Jakobsen, Thomas Linding
2014-01-01
Background Loading and contraction failure (muscular exhaustion) are strength-training variables known to influence neural activation of the exercising muscle in healthy subjects, which may help reduce neural inhibition of the quadriceps muscle following total knee arthroplasty (TKA). It is unknown how these exercise variables influence knee pain after TKA. Objective To investigate the effect of loading and contraction failure on knee pain during strength training, shortly following TKA. Design Cross-sectional study. Setting Consecutive sample of patients from the Copenhagen area, Denmark, receiving a TKA between November 2012 and April 2013. Participants Seventeen patients, no more than 3 weeks after their TKA. Main outcome measures In a randomized order, the patients performed 1 set of 4 standardized knee extensions, using relative loads of 8, 14, and 20 repetition maximum (RM), and ended with 1 single set to contraction failure (14 RM load). The individual loadings (kilograms) were determined during a familiarization session >72 hours prior. The patients rated their knee pain during each repetition, using a numerical rating scale (0–10). Results Two patients were lost to follow-up. Knee pain increased with increasing load (20 RM: 3.1±2.0 points, 14 RM: 3.5±1.8 points, 8 RM: 4.3±2.5 points, P = 0.006), and with repetitions to contraction failure (10% failure: 3.2±1.9 points, 100% failure: 5.4±1.6 points, P<0.001). Resting knee pain 60 seconds after the final repetition (2.7±2.4 points) was not different from that recorded before strength training (2.7±1.8 points, P = 0.88). Conclusion Both loading and repetitions performed to contraction failure during knee-extension strength training increased post-operative knee pain during strength training implemented shortly following TKA. However, only the increase in pain during repetitions to contraction failure exceeded the threshold defined as clinically relevant, and it was very short-lived.
Trial Registration ClinicalTrials.gov NCT01729520 PMID:24614574
Mathematical construction and perturbation analysis of Zernike discrete orthogonal points.
Shi, Zhenguang; Sui, Yongxin; Liu, Zhenyu; Peng, Ji; Yang, Huaijiang
2012-06-20
Zernike functions are orthogonal within the unit circle, but they are not over the discrete points such as CCD arrays or finite element grids. This will result in reconstruction errors for loss of orthogonality. By using roots of Legendre polynomials, a set of points within the unit circle can be constructed so that Zernike functions over the set are discretely orthogonal. Besides that, the location tolerances of the points are studied by perturbation analysis, and the requirements of the positioning precision are not very strict. Computer simulations show that this approach provides a very accurate wavefront reconstruction with the proposed sampling set.
Apparatus and method for implementing power saving techniques when processing floating point values
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Young Moon; Park, Sang Phill
An apparatus and method are described for reducing power when reading and writing graphics data. For example, one embodiment of an apparatus comprises: a graphics processor unit (GPU) to process graphics data including floating point data; a set of registers, at least one of the registers of the set partitioned to store the floating point data; and encode/decode logic to reduce a number of binary 1 values being read from the at least one register by causing a specified set of bit positions within the floating point data to be read out as 0s rather than 1s.
Williamson, Joyce E.; Jarrell, Gregory J.; Clawges, Rick M.; Galloway, Joel M.; Carter, Janet M.
2000-01-01
This compact disk contains digital data produced as part of the 1:100,000-scale map products for the Black Hills Hydrology Study conducted in western South Dakota. The digital data include 28 individual Geographic Information System (GIS) data sets: data sets for the hydrogeologic unit map including all mapped hydrogeologic units within the study area (1 data set) and major geologic structure including anticlines and synclines (1 data set); data sets for potentiometric maps including the potentiometric contours for the Inyan Kara, Minnekahta, Minnelusa, Madison, and Deadwood aquifers (5 data sets), wells used as control points for each aquifer (5 data sets), and springs used as control points for the potentiometric contours (1 data set); and data sets for the structure-contour maps including the structure contours for the top of each formation that contains major aquifers (5 data sets), wells and tests holes used as control points for each formation (5 data sets), and surficial deposits (alluvium and terrace deposits) that directly overlie each of the major aquifer outcrops (5 data sets). These data sets were used to produce the maps published by the U.S. Geological Survey.
Geometry and combinatorics of Julia sets of real quadratic maps
NASA Astrophysics Data System (ADS)
Barnsley, M. F.; Geronimo, J. S.; Harrington, A. N.
1984-10-01
For real λ a correspondence is made between the Julia set B_λ for z → (z - λ)^2, in the hyperbolic case, and the set of λ-chains λ ± √(λ ± √(λ ± ⋯)), with the aid of Cremer's theorem. It is shown how a number of features of B_λ can be understood in terms of λ-chains. The structure of B_λ is determined by certain equivalence classes of λ-chains, fixed by orders of visitation of certain real cycles; and the bifurcation history of a given cycle can be conveniently computed via the combinatorics of λ-chains. The functional equations obeyed by attractive cycles are investigated, and their relation to λ-chains is given. The first cascade of period-doubling bifurcations is described from the point of view of the associated Julia sets and λ-chains. Certain "Julia sets" associated with the Feigenbaum function and some theorems of Lanford are discussed.
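The λ-chain description translates directly into a backward-iteration scheme for sampling B_λ: a preimage of w under z → (z - λ)^2 is λ ± √w, so random sign choices generate truncated λ-chains that accumulate on the Julia set. A minimal sketch, with λ = 1 as an arbitrary real choice in the hyperbolic regime (the map is conjugate to w → w^2 - λ):

```python
import numpy as np

lam = 1.0                       # real lambda; conjugate to w -> w^2 - 1
rng = np.random.default_rng(1)

z = 0.5 + 0.0j                  # complex start so sqrt stays complex-valued
pts = []
for k in range(3000):
    # one more level of the lambda-chain  lam ± sqrt(lam ± sqrt(...))
    z = lam + rng.choice([-1.0, 1.0]) * np.sqrt(z)
    if k >= 100:                # discard transient before hitting B_lam
        pts.append(z)
pts = np.array(pts)
# The orbit stays bounded: |z_next| <= |lam| + sqrt(|z|), whose fixed
# point (3 + sqrt(5))/2 ≈ 2.62 bounds every sampled chain value.
```

Plotting `pts` in the complex plane traces out B_λ; choosing the ± signs according to a fixed pattern rather than at random singles out the equivalence classes of λ-chains discussed in the abstract.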
Maternal Talk about Mental States and the Emergence of Joint Visual Attention
ERIC Educational Resources Information Center
Slaughter, Virginia; Peterson, Candida C.; Carpenter, Malinda
2008-01-01
Twenty-four infants were tested monthly for gaze and point following between 9 and 15 months of age and mother-infant free play sessions were also conducted at 9, 12, and 15 months (Carpenter, Nagell, & Tomasello, 1998). Using this data set, this study explored relations between maternal talk about mental states during mothers' free play with…
ERIC Educational Resources Information Center
El Kadri, Michele Salles; Roth, Wolff-Michael
2013-01-01
Many teachers point to the theory-practice gap between university training and their school-based work. Coteaching in conjunction with cogenerative dialoguing as a means of teacher induction has been shown to overcome this gap. In this paper, we articulate teacher development in the praxis-centered (coteaching/cogenerative dialoguing) setting of…
ERIC Educational Resources Information Center
Bondestam, Fredrik
2004-01-01
This article explores the effects of certain discourses as they relate to sexual harassment in a Swedish higher education setting. Using a semiological perspective, the author analyzes notions of existence, range, prevention, and stability in order to demonstrate the way they aim at signifying a limited and, from a bureaucratic point of view,…
NASA Astrophysics Data System (ADS)
Vilacoba Ramos, Andrés
2007-04-01
Ethics is the set of moral rules that govern human conduct. Hegel, for his part, asserted that ethical life implied the full realization of freedom, as well as the suppression of freedom as arbitrariness. In this paper, we point out that, through the relation between law and ethics, we can discover how high the ethics of a society stand, as well as how closely its members adhere to them.
Strategies to Avoid Audism in Adult Educational Settings
ERIC Educational Resources Information Center
Ballenger, Sheryl
2013-01-01
Humphries first defined the term "audism" as "the notion that one is superior based on one's ability to hear or behave in the manner of one who hears". Audism is a prejudice related to the physical hearing condition of the human body. The point of this article is not to substantiate or negate the term's importance but…
ERIC Educational Resources Information Center
Rosen, Rachel
2017-01-01
Despite critiques pointing out that racism has become normalised in early childhood settings, relatively little attention has been paid in such contexts to the everyday practices in which racial inequities are made. In seeking to interrogate the ways in which racism roosts in the routine, this article interrogates quotidian responses to children's…
ERIC Educational Resources Information Center
Ngan, Chun-Kit
2013-01-01
Making decisions over multivariate time series is an important topic which has gained significant interest in the past decade. A time series is a sequence of data points which are measured and ordered over uniform time intervals. A multivariate time series is a set of multiple, related time series in a particular domain in which domain experts…
ERIC Educational Resources Information Center
Morton, Emily; McKenzie, Jamie
2001-01-01
Includes four articles that discuss issues relating to use of the Internet in classroom settings. Topics include the use of email; curriculum rich strategies that require professional and program development; ranking search engines; and beneficial business partnerships with schools. (LRW)
Radiometric Correction of Multitemporal Hyperspectral UAS Image Mosaics of Seedling Stands
NASA Astrophysics Data System (ADS)
Markelin, L.; Honkavaara, E.; Näsi, R.; Viljanen, N.; Rosnell, T.; Hakala, T.; Vastaranta, M.; Koivisto, T.; Holopainen, M.
2017-10-01
Novel miniaturized multi- and hyperspectral imaging sensors on board unmanned aerial vehicles have recently shown great potential in various environmental monitoring and measuring tasks such as precision agriculture and forest management. These systems can be used to collect dense 3D point clouds and spectral information over small areas such as single forest stands or sample plots. Accurate radiometric processing and atmospheric correction are required when data sets from different dates and sensors, collected in varying illumination conditions, are combined. The performance of a novel radiometric block adjustment method, developed at the Finnish Geospatial Research Institute, is evaluated with a multitemporal hyperspectral data set of seedling stands collected during spring and summer 2016. Illumination conditions during the campaigns varied from bright to overcast. We use two different methods to produce homogeneous image mosaics and hyperspectral point clouds: image-wise relative correction, and image-wise relative correction with BRDF. Radiometric data sets are converted to reflectance using reference panels, and changes in reflectance spectra are analysed. The tested methods improved image mosaic homogeneity by 5 % to 25 %. The results show that the evaluated method can produce consistent reflectance mosaics and consistent reflectance spectra shapes between different areas and dates.
The detection of oral cancer using differential pathlength spectroscopy
NASA Astrophysics Data System (ADS)
Sterenborg, H. J. C. M.; Kanick, S.; de Visscher, S.; Witjes, M.; Amelink, A.
2010-02-01
The development of optical techniques for non-invasive diagnosis of cancer is an ongoing challenge in biomedical optics. For head and neck cancer we see two main fields of potential application: 1) screening for second primaries in patients with a history of oral cancer, which requires imaging techniques or an approach where a larger area can be scanned quickly; and 2) distinguishing potentially malignant visible primary lesions from benign ones, where fiberoptic point measurements can be used, as the location of the lesion is known. This presentation will focus on point measurement techniques. Various techniques for point measurements have been developed and investigated clinically for different applications. Differential Pathlength Spectroscopy (DPS) is a recently developed fiberoptic point measurement technique that measures scattered light over a broad spectrum. Due to the specific fiberoptic geometry, we measure only scattered photons that have travelled a predetermined pathlength. This allows us to analyse the spectrum mathematically and translate the measured curve into a set of parameters that are related to the microvasculature and to the intracellular morphology. DPS has been extensively evaluated on optical phantoms and tested in various clinical applications. The first measurements in biopsy-proven squamous cell carcinoma showed significant changes in both vascular and morphological parameters. Measurements on thick keratinized lesions, however, failed to generate any vascular signatures. This is related to the sampling depth of the standard optical fibers used. Recently we developed a fiberoptic probe with a ~1 mm sampling depth. Measurements on several leukoplakias showed that with this new probe we sample just below the keratin layer and can obtain vascular signatures. The results of a first set of clinical measurements will be presented and their significance for clinical diagnostics will be discussed.
Screen for intracranial dural arteriovenous fistulae with carotid duplex sonography.
Tsai, L-K; Yeh, S-J; Chen, Y-C; Liu, H-M; Jeng, J-S
2009-11-01
Early diagnosis and management of intracranial dural arteriovenous fistulae (DAVF) may prevent the occurrence of stroke. This study aimed to identify the best carotid duplex sonography (CDS) parameters for screening DAVF. 63 DAVF patients and 170 non-DAVF patients received both CDS and conventional angiography. The use of seven CDS haemodynamic parameter sets related to the resistance index (RI) of the external carotid artery (ECA) for the diagnosis of DAVF was validated and the applicability of the best CDS parameter set in 20 400 patients was tested. The CDS parameter set (ECA RI (cut-off point = 0.7) and internal carotid artery (ICA) to ECA RI ratio (cut-off point = 0.9)) had the highest specificity (99%) for diagnosis of DAVF with moderate sensitivity (51%). Location of the DAVF was a significant determinant of sensitivity of detection, which was 70% for non-cavernous DAVF and 0% for cavernous sinus DAVF (p<0.001). The above parameter set detected abnormality in 92 of 20 400 patients. These abnormalities included DAVF (n = 25), carotid stenosis (n = 32), vertebral artery stenosis (n = 7), intracranial arterial stenosis (n = 6), head and neck tumour (n = 3) and unknown aetiology (n = 19). Combined CDS parameters of ECA RI and ICA to ECA RI ratio can be used as a screening tool for the diagnosis of DAVF.
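The best-performing parameter set above is a simple two-threshold rule, which can be sketched directly. The cut-off values (0.7 for ECA RI, 0.9 for the ICA/ECA RI ratio) are from the abstract; the direction of each inequality is an illustrative assumption, consistent with a low-resistance ECA feeding a fistula:

```python
def davf_screen_positive(eca_ri: float, ica_ri: float) -> bool:
    """Flag a carotid duplex study as screen-positive for DAVF.

    Assumed rule (direction of the inequalities is an illustration, not
    taken from the paper): ECA RI below 0.7 AND ICA/ECA RI ratio above 0.9.
    """
    return eca_ri < 0.7 and (ica_ri / eca_ri) > 0.9

# A lowered ECA RI together with a raised ICA/ECA ratio screens positive.
assert davf_screen_positive(eca_ri=0.55, ica_ri=0.60)
# Normal resistance indices screen negative.
assert not davf_screen_positive(eca_ri=0.80, ica_ri=0.60)
```

Combining two haemodynamic conditions in this way trades sensitivity (51% in the study) for very high specificity (99%), which is the usual design choice for a screening rule meant to trigger confirmatory angiography.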
Point-of-care D-dimer testing in emergency departments.
Marquardt, Udo; Apau, Daniel
2015-09-01
Overcrowding and prolonged patient stays in emergency departments (EDs) affect patients' experiences and outcomes, and increase healthcare costs. One way of addressing these problems is through using point-of-care blood tests, laboratory testing undertaken near patient locations with rapidly available results. D-dimer tests are used to exclude venous thromboembolism (VTE), a common presentation in EDs, in low-risk patients. However, data on the effects of point-of-care D-dimer testing in EDs and other urgent care settings are scarce. This article reports the results of a literature review that examined the benefits to patients of point-of-care D-dimer testing in terms of reduced turnaround times (time to results), and time to diagnosis, discharge or referral. It also considers the benefits to organisations in relation to reduced ED crowding and increased cost effectiveness. The review concludes that undertaking point-of-care D-dimer tests, combined with pre-test probability scores, can be a quick and safe way of ruling out VTE and improving patients' experience.
Enhancing biomedical text summarization using semantic relation extraction.
Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao
2011-01-01
Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) we extract semantic relations in each sentence using the semantic knowledge representation tool SemRep; 2) we develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation; 3) for relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate the text summary, using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization.
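Stages 2 and 3 can be sketched with toy data: given per-sentence (subject, predicate, object) relations (stage 1 would come from a tool like SemRep), keep the relations involving the query concept and rank sentences by how many relevant relations they carry. The relations and sentence IDs below are made up for illustration:

```python
# Hypothetical per-sentence relation triples (stand-ins for SemRep output).
sentences = {
    0: [("H1N1", "CAUSES", "influenza"), ("oseltamivir", "TREATS", "H1N1")],
    1: [("aspirin", "TREATS", "headache")],
    2: [("H1N1", "ISA", "virus")],
}

def rank_sentences(query, sent_relations):
    # Score each sentence by the number of relations mentioning the query
    # concept as subject or object, then rank descending.
    scores = {
        sid: sum(query in (subj, obj) for subj, _, obj in rels)
        for sid, rels in sent_relations.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

order = rank_sentences("H1N1", sentences)   # sentence 0 carries two relevant relations
```

A real implementation would weight relation relevance rather than count exact matches, but the count captures the paper's core idea that relation-bearing sentences are the informative ones.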
Automated extraction and analysis of rock discontinuity characteristics from 3D point clouds
NASA Astrophysics Data System (ADS)
Bianchetti, Matteo; Villa, Alberto; Agliardi, Federico; Crosta, Giovanni B.
2016-04-01
A reliable characterization of fractured rock masses requires an exhaustive geometrical description of discontinuities, including orientation, spacing, and size. These are required to describe discontinuum rock mass structure, perform Discrete Fracture Network and DEM modelling, or provide input for rock mass classification or equivalent continuum estimates of rock mass properties. Although several advanced methodologies have been developed in the last decades, a complete characterization of discontinuity geometry in practice is still challenging, due to the scale-dependent variability of fracture patterns and the difficult accessibility of large outcrops. Recent advances in remote survey techniques, such as terrestrial laser scanning and digital photogrammetry, allow fast and accurate acquisition of dense 3D point clouds, which has promoted the development of several semi-automatic approaches to extract discontinuity features. Nevertheless, these often need user supervision of algorithm parameters, which can be difficult to assess. To overcome this problem, we developed an original Matlab tool allowing fast, fully automatic extraction and analysis of discontinuity features with no requirements on point cloud accuracy, density or homogeneity. The tool consists of a set of algorithms which: (i) process raw 3D point clouds, (ii) automatically characterize discontinuity sets, (iii) identify individual discontinuity surfaces, and (iv) analyse their spacing and persistence. The tool operates in either a supervised or an unsupervised mode, starting from an automatic preliminary exploratory data analysis. The identification and geometrical characterization of discontinuity features is divided into steps. First, coplanar surfaces are identified in the whole point cloud using K-Nearest Neighbor and Principal Component Analysis algorithms optimized on point cloud accuracy and a specified typical facet size.
Then, discontinuity set orientation is calculated using Kernel Density Estimation and principal vector similarity criteria. Poles to points are assigned to individual discontinuity objects using simple custom vector clustering and Jaccard-distance approaches, and each object is segmented into planar clusters using an improved version of the DBSCAN algorithm. Modal set orientations are then recomputed by cluster-based orientation statistics to avoid the effects of biases related to cluster size and density heterogeneity of the point cloud. Finally, spacing values are measured between individual discontinuity clusters along scanlines parallel to modal pole vectors, whereas individual feature size (persistence) is measured using 3D convex hull bounding boxes. Spacing and size are provided both as raw population data and as summary statistics. The tool is optimized for parallel computing on 64-bit systems, and a Graphical User Interface (GUI) has been developed to manage data processing, provide several outputs (including reclassified point clouds, tables, plots, and derived fracture intensity parameters), and export to modelling software tools. We present test applications performed both on synthetic 3D data (simple 3D solids) and on real case studies, validating the results against existing geomechanical datasets.
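The PCA facet-detection step can be sketched on a synthetic neighbourhood: fit a plane to a point's neighbours via the covariance eigendecomposition, take the eigenvector of the smallest eigenvalue as the local normal, and use the smallest-eigenvalue ratio as a coplanarity score. The synthetic patch and thresholds below are illustrative stand-ins for a real laser-scanning neighbourhood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic near-planar patch: z = 0 plus tiny noise (illustrative stand-in
# for a k-nearest-neighbour patch of a terrestrial laser scan).
pts = np.column_stack([rng.uniform(-1, 1, 50),
                       rng.uniform(-1, 1, 50),
                       rng.normal(0, 1e-3, 50)])

centered = pts - pts.mean(axis=0)
cov = centered.T @ centered / len(pts)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
normal = eigvecs[:, 0]                     # direction of least variance
planarity = eigvals[0] / eigvals.sum()     # ~0 for a perfectly planar patch
```

A facet is accepted when the planarity score falls below a tolerance tied to the sensor's point accuracy, which is how point cloud accuracy enters the optimization mentioned in the abstract.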
Metabolic vs. hedonic obesity: a conceptual distinction and its clinical implications
Zhang, Y.; Mechanick, J. I.; Korner, J.; Peterli, R.
2015-01-01
Summary Body weight is determined via both metabolic and hedonic mechanisms. Metabolic regulation of body weight centres around the ‘body weight set point’, which is programmed by energy balance circuitry in the hypothalamus and other specific brain regions. The metabolic body weight set point has a genetic basis, but exposure to an obesogenic environment may elicit allostatic responses and upward drift of the set point, leading to a higher maintained body weight. However, an elevated steady‐state body weight may also be achieved without an alteration of the metabolic set point, via sustained hedonic over‐eating, which is governed by the reward system of the brain and can override homeostatic metabolic signals. While hedonic signals are potent influences in determining food intake, metabolic regulation involves the active control of both food intake and energy expenditure. When overweight is due to elevation of the metabolic set point (‘metabolic obesity’), energy expenditure theoretically falls onto the standard energy–mass regression line. In contrast, when a steady‐state weight is above the metabolic set point due to hedonic over‐eating (‘hedonic obesity’), a persistent compensatory increase in energy expenditure per unit metabolic mass may be demonstrable. Recognition of the two types of obesity may lead to more effective treatment and prevention of obesity. PMID:25588316
Architecture of chaotic attractors for flows in the absence of any singular point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Letellier, Christophe; Malasoma, Jean-Marc
2016-06-15
Some chaotic attractors produced by three-dimensional dynamical systems without any singular point have now been identified, but explaining how they are structured in the state space remains an open question. We here want to explain, in the particular case of the Wei system, such a structure, using one-dimensional sets obtained by vanishing two of the three derivatives of the flow. The neighborhoods of these sets are made of points which are characterized by the eigenvalues of a 2 × 2 matrix describing the stability of flow in a subspace transverse to it. We will show that the attractor is spiralling and twisted in the neighborhood of one-dimensional sets where points are characterized by a pair of complex conjugated eigenvalues. We then show that such one-dimensional sets are also useful in explaining the structure of attractors produced by systems with singular points, by considering the case of the Lorenz system.
NBOD2- PROGRAM TO DERIVE AND SOLVE EQUATIONS OF MOTION FOR COUPLED N-BODY SYSTEMS
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1994-01-01
The analysis of the dynamic characteristics of a complex system, such as a spacecraft or a robot, is usually best accomplished through the study of a simulation model. The simulation model must have the same dynamic characteristics as the complex system, while lending itself to mathematical quantification. The NBOD2 computer program was developed to aid in the analysis of spacecraft attitude dynamics. NBOD2 is a very general program that may be applied to a large class of problems involving coupled N-body systems. NBOD2 provides the dynamics analyst with the capability to automatically derive and numerically solve the equations of motion for any system that can be modeled as a topological tree of coupled rigid bodies, flexible bodies, point masses, and symmetrical momentum wheels. NBOD2 uses a topological tree model of the dynamic system to derive the vector-dyadic equations of motion for the system. The user builds this topological tree model by using rigid and flexible bodies, point masses, and symmetrical momentum wheels with appropriate connections. To ensure that the relative motion between contiguous bodies is kinematically constrained, NBOD2 assumes that contiguous rigid and flexible bodies are connected by physically realizable 0-, 1-, 2-, or 3-degree-of-freedom gimbals. These gimbals prohibit relative translational motion, while permitting up to 3 degrees of relative rotational freedom at hinge points. Point masses may have 0, 1, 2, or 3 degrees of relative translational freedom, and symmetric momentum wheels may have a single degree of rotational freedom relative to the body in which they are embedded. Flexible bodies may possess several degrees of vibrational freedom in addition to the degrees of freedom associated with the connection gimbals. Data concerning the natural modes and vibrations of the flexible bodies must be supplied by the user.
NBOD2 combines the best features of the discrete-body approach and the nested-body approach to reduce the topological tree to a complete set of nonlinear equations of motion in vector-dyadic form for the system being analyzed. NBOD2 can then numerically solve the equations of motion. Input to NBOD2 consists of a user-supplied description of the system to be modeled. The NBOD2 system includes an interactive, tutorial, input support program to aid the NBOD2 user in preparing input data. Output from NBOD2 consists of a listing of the complete set of nonlinear equations of motion in vector-dyadic form and any user-specified set of system state variables. The NBOD2 program is written in FORTRAN 77 for batch execution and has been implemented on a DEC VAX-11/780 computer. The NBOD2 program was developed in 1978 and last updated in 1982.
NASA Astrophysics Data System (ADS)
Morard, G.; Boccato, S.; Rosa, A. D.; Anzellini, S.; Miozzi Ferrini, F.; Laura, H.; Garbarino, G.; Harmand, M.; Guyot, F. J.; Boulard, E.; Kantor, I.; Irifune, T.; Torchio, R.
2017-12-01
Iron is the main constituent of planetary cores. Studying its phase diagram under high pressure is necessary to constrain the properties of planetary interiors and to model key parameters such as the generation of the magnetic field. However, strong controversy over the melting curve of pure Fe remains. Recently, Aquilanti et al. (PNAS, 2015) reported an Fe melting curve based on XANES measurements which is in open disagreement with previous X-ray diffraction results (Anzellini et al., Science, 2013). Discrepancies in the melting temperature exceed several hundred degrees close to Mbar pressures, which may be related to differences in temperature measurement techniques, melting diagnostics, or chemical reactions of the sample with the surrounding medium. We therefore performed new in situ high P/T XANES experiments on pure Fe (up to 115 GPa and 4000 K) at the ESRF beamline ID24, combining the energy-dispersive absorption set-up with laser-heated diamond anvil cells. X-ray diffraction maps were collected from all recovered samples in order to identify and characterize laser-heated spots. The XANES melting criterion was further cross-checked by analyzing the recovered sample textures using FIB cutting techniques and SEM imaging. We found systematically that low melting temperatures are related to the presence of Fe3C, implying that in those cases chemical reactions occurred during heating, resulting in carbon contamination from the diamonds. These low melting points fall onto the melting line reported by Aquilanti et al. (2015). Uncontaminated points are in agreement with the melting curve of Anzellini et al. (2013) within their uncertainties. Moreover, this data set allowed us to refine the location of the triple point in the Fe phase diagram to 105 (±10) GPa and 3600 (±200) K, which may imply a small kink in the melting curve around this point. This refined Fe phase diagram can then be used to compute thermodynamic models for planetary cores.
Methods and apparatuses for detection of radiation with semiconductor image sensors
Cogliati, Joshua Joseph
2018-04-10
A semiconductor image sensor is repeatedly exposed to high-energy photons while a visible light obstructer is in place to block visible light from impinging on the sensor to generate a set of images from the exposures. A composite image is generated from the set of images with common noise substantially removed so the composite image includes image information corresponding to radiated pixels that absorbed at least some energy from the high-energy photons. The composite image is processed to determine a set of bright points in the composite image, each bright point being above a first threshold. The set of bright points is processed to identify lines with two or more bright points that include pixels therebetween that are above a second threshold and identify a presence of the high-energy particles responsive to a number of lines.
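The composite-and-threshold stage of the patented pipeline can be sketched on synthetic frames. The median combine for noise suppression, the max combine for hit retention, and the threshold value are illustrative choices, not the patent's exact processing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dark frames: a fixed pattern (common noise) plus read noise.
fixed_pattern = rng.integers(0, 5, size=(8, 8)).astype(float)
frames = [fixed_pattern + rng.normal(0, 0.1, (8, 8)) for _ in range(5)]

# Inject a radiation hit into one frame at pixel (3, 4).
frames[2][3, 4] += 50.0

# Composite with common noise substantially removed: the per-pixel median
# estimates the fixed pattern; the per-pixel max retains any transient hit.
composite = np.max(frames, axis=0) - np.median(frames, axis=0)

threshold = 10.0                                    # illustrative first threshold
bright_points = list(zip(*np.where(composite > threshold)))
```

The patent's second stage would then test pairs of bright points for intervening pixels above a second threshold to identify particle tracks (lines); with a single hit, this sketch stops at the bright-point set.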
Finite-density transition line for QCD with 695 MeV dynamical fermions
NASA Astrophysics Data System (ADS)
Greensite, Jeff; Höllwieser, Roman
2018-06-01
We apply the relative weights method to SU(3) gauge theory with staggered fermions of mass 695 MeV at a set of temperatures in the range 151 ≤T ≤267 MeV , to obtain an effective Polyakov line action at each temperature. We then apply a mean field method to search for phase transitions in the effective theory at finite densities. The result is a transition line in the plane of temperature and chemical potential, with an end point at high temperature, as expected, but also a second end point at a lower temperature. We cannot rule out the possibilities that a transition line reappears at temperatures lower than the range investigated, or that the second end point is absent for light quarks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaufmann, Ralph M., E-mail: rkaufman@math.purdue.edu; Khlebnikov, Sergei, E-mail: skhleb@physics.purdue.edu; Wehefritz-Kaufmann, Birgit, E-mail: ebkaufma@math.purdue.edu
2012-11-15
Motivated by the Double Gyroid nanowire network, we develop methods to detect Dirac points and classify level crossings, a.k.a. singularities, in the spectrum of a family of Hamiltonians. The approach we use is singularity theory. Using this language, we obtain a characterization of Dirac points and also show that the branching behavior of the level crossings is given by an unfolding of A_n type singularities. Which type of singularity occurs can be read off a characteristic region inside the miniversal unfolding of an A_k singularity. We then apply these methods in the setting of families of graph Hamiltonians, such as those for wire networks. In the particular case of the Double Gyroid we analytically classify its singularities and show that it has Dirac points. This indicates that nanowire systems of this type should have very special physical properties. Highlights: New method for analytically finding Dirac points. Novel relation of level crossings to singularity theory. More precise version of the von Neumann-Wigner theorem for arbitrary smooth families of Hamiltonians of fixed size. Analytical proof of the existence of Dirac points for the Gyroid wire network.
Nonlinear equations of dynamics for spinning paraboloidal antennas
NASA Technical Reports Server (NTRS)
Utku, S.; Shoemaker, W. L.; Salama, M.
1983-01-01
The nonlinear strain-displacement and velocity-displacement relations of spinning imperfect rotational paraboloidal thin-shell antennas are derived for nonaxisymmetrical deformations. Using these relations with the admissible trial functions in the principal functional of dynamics, the nonlinear equations of stress-inducing motion are expressed in the form of a set of quasi-linear ordinary differential equations in the undetermined functions by means of the Rayleigh-Ritz procedure. These equations include all nonlinear terms up to and including the third degree. Explicit expressions are given for the coefficient matrices appearing in these equations. Both translational and rotational offsets of the axis of revolution (and also of the apex point of the paraboloid) with respect to the spin axis are considered. Although the material of the antenna is assumed linearly elastic, it can be anisotropic.
A general scientific information system to support the study of climate-related data
NASA Technical Reports Server (NTRS)
Treinish, L. A.
1984-01-01
The development and use of NASA's Pilot Climate Data System (PCDS) are discussed. The PCDS is used as a focal point for managing and providing access to a large collection of actively used data for the Earth, ocean and atmospheric sciences. The PCDS provides uniform data catalogs, inventories, and access methods for selected NASA and non-NASA data sets. Scientific users can preview the data sets using graphical and statistical methods. The system has evolved from its original purpose as a climate data base management system in response to a national climate program, into an extensive package of capabilities to support many types of data sets from both spaceborne and surface based measurements with flexible data selection and analysis functions.
Condition assessment of nonlinear processes
Hively, Lee M.; Gailey, Paul C.; Protopopescu, Vladimir A.
2002-01-01
A reliable technique is presented for measuring condition change in nonlinear data such as brain waves. The nonlinear data are filtered and discretized into windowed data sets. The system dynamics within each data set are represented by a sequence of connected phase-space points, and for each data set a distribution function is derived. New metrics are introduced that evaluate the distance between distribution functions. The metrics are properly renormalized to provide robust and sensitive relative measures of condition change. As an example, these measures can be used on EEG data to provide timely discrimination between normal, preseizure, seizure, and post-seizure states in epileptic patients. Apparatus utilizing hardware or software to perform the method and provide an indicative output is also disclosed.
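A minimal sketch of the idea: embed each windowed series in a discretized phase space, form a visit distribution over the bins, and compare windows with a distance between distributions. The 2-D delay embedding, bin count, and L1 distance are illustrative assumptions; the patent's actual metrics and renormalization are not reproduced here:

```python
import numpy as np

def phase_distribution(x, n_bins=10, delay=1):
    # 2-D delay embedding of the windowed series, binned into a
    # normalized visit distribution over phase-space cells.
    pts = np.column_stack([x[:-delay], x[delay:]])
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    hist, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=[edges, edges])
    return hist / hist.sum()

def distribution_distance(p, q):
    # Total variation (scaled L1) distance between two distributions, in [0, 1].
    return 0.5 * np.abs(p - q).sum()

rng = np.random.default_rng(2)
baseline = np.sin(np.linspace(0, 20 * np.pi, 2000))
changed = np.sin(np.linspace(0, 20 * np.pi, 2000)) + rng.normal(0, 0.3, 2000)

d_same = distribution_distance(phase_distribution(baseline),
                               phase_distribution(baseline))
d_diff = distribution_distance(phase_distribution(baseline),
                               phase_distribution(changed))
```

A condition change is declared when the distance from a baseline window exceeds a calibrated threshold; the renormalization step in the patent serves to make that threshold comparable across subjects and channels.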
Combat-related, chronic posttraumatic stress disorder: implications for group-therapy intervention.
Makler, S; Sigal, M; Gelkopf, M; Kochba, B B; Horeb, E
1990-07-01
The patient with combat-related chronic Posttraumatic Stress Disorder suffers from a wide spectrum of maladaptive behaviors. This paper delineates the work that has been done with such a population in group therapy. The plan that is proposed takes into account three interrelated sets of factors: factors important for creating an effective working relation; curative factors; and particular themes. Each of these factors is analyzed in the light of the particularities of group work with such a population. Each of the points discussed is based upon the relevant literature, upon the experience of the therapist, and illustrated with examples.
Mean Field Games for Stochastic Growth with Relative Utility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Minyi, E-mail: mhuang@math.carleton.ca; Nguyen, Son Luu, E-mail: sonluu.nguyen@upr.edu
This paper considers continuous time stochastic growth-consumption optimization in a mean field game setting. The individual capital stock evolution is determined by a Cobb–Douglas production function, consumption and stochastic depreciation. The individual utility functional combines an own utility and a relative utility with respect to the population. The use of the relative utility reflects human psychology, leading to a natural pattern of mean field interaction. The fixed point equation of the mean field game is derived with the aid of some ordinary differential equations. Due to the relative utility interaction, our performance analysis depends on some ratio based approximation error estimate.
NASA Astrophysics Data System (ADS)
Xu, Jun; Kong, Fan
2018-05-01
Extreme value distribution (EVD) evaluation is a critical topic in the reliability analysis of nonlinear structural dynamic systems. In this paper, a new method is proposed to obtain the EVD. The maximum entropy method (MEM) with fractional moments as constraints is employed to derive the entire range of the EVD. Then, an adaptive cubature formula is proposed for the fractional moments assessment involved in the MEM, which is closely related to the efficiency and accuracy of the reliability analysis. Three point sets, which include a total of 2d² + 1 integration points in dimension d, are generated in the proposed formula. In this regard, the efficiency of the proposed formula is ensured. Besides, a "free" parameter is introduced, which makes the proposed formula adaptive with the dimension. The "free" parameter is determined by arranging one point set adjacent to the boundary of the hyper-sphere which contains the bulk of the total probability. In this regard, the tail distribution may be better reproduced and the fractional moments can be evaluated accurately. Finally, the proposed method is applied to a ten-storey shear frame structure under seismic excitations, which exhibits strong nonlinearity. The numerical results demonstrate the efficacy of the proposed method.
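The constraints the MEM works with are fractional moments M_α = E[|Y|^α]. As a minimal sketch, the snippet below estimates a few of them by Monte Carlo for a lognormal response (a stand-in distribution chosen for illustration, since its moments are known in closed form), and records the cited cubature's point count 2d² + 1:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative response samples: for a lognormal(mu, sigma) variable,
# E[Y^a] = exp(a*mu + a**2 * sigma**2 / 2), so the estimates can be checked.
samples = rng.lognormal(mean=0.0, sigma=0.25, size=200_000)

alphas = [0.5, 1.0, 1.5]
frac_moments = {a: np.mean(np.abs(samples) ** a) for a in alphas}

def n_cubature_points(d):
    # Total points in the three point sets of the cited adaptive formula.
    return 2 * d * d + 1
```

The paper's contribution is replacing the 200,000 samples above with 2d² + 1 carefully placed integration points, one set of which sits near the probability-concentration hyper-sphere boundary to capture the tail.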
NASA Astrophysics Data System (ADS)
Rhodes, Andrew P.; Christian, John A.; Evans, Thomas
2017-12-01
With the availability and popularity of 3D sensors, it is advantageous to re-examine the use of point cloud descriptors for the purpose of pose estimation and spacecraft relative navigation. One popular descriptor is the oriented unique repeatable clustered viewpoint feature histogram (
Ibarbalz, Federico M; Pérez, María Victoria; Figuerola, Eva L M; Erijman, Leonardo
2014-01-01
The performance of two sets of primers targeting the variable regions V1-V3 and V4 of the 16S rRNA gene was compared with respect to their ability to describe changes of bacterial diversity and temporal turnover in full-scale activated sludge. Duplicate sets of high-throughput amplicon sequencing data for the two 16S rRNA regions shared a collection of core taxa that were observed across a series of twelve monthly samples, although the relative abundance of each taxon was substantially different between regions. A case in point was the change in the relative abundance of the filamentous bacterium Thiothrix, which caused a large effect on diversity indices, but only in the V1-V3 data set. Yet the relative abundance of Thiothrix in the amplicon sequencing data from both regions correlated with the estimation of its abundance determined using fluorescence in situ hybridization. In nonmetric multidimensional scaling analysis, samples were distributed along the first ordination axis according to the sequenced region rather than according to sample identity. The dynamics of the microbial communities indicated that the V1-V3 and V4 regions of the 16S rRNA gene yielded comparable patterns of: 1) the changes occurring within the communities along fixed time intervals, 2) the slow turnover of activated sludge communities, and 3) the rate of species replacement calculated from the taxa-time relationships. Temperature was the only operational variable that showed a significant correlation with the composition of the bacterial communities over time for the sets of data obtained with both pairs of primers. In conclusion, we show that despite the bias introduced by amplicon sequencing, the variable regions V1-V3 and V4 can be confidently used for the quantitative assessment of bacterial community dynamics, and provide a proper qualitative account of general taxa in the community, especially when the data are obtained over a convenient time window rather than at a single time point.
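Why a single dominant filament can swing a diversity index is easy to show with Shannon entropy; the abundance vectors below are made up for the example and do not come from the study:

```python
import numpy as np

def shannon(abundances):
    # Shannon diversity H = -sum(p * ln p) over nonzero relative abundances.
    p = np.asarray(abundances, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

even_community = [25, 25, 25, 25]    # four taxa, even abundances (illustrative)
bloom_community = [85, 5, 5, 5]      # one taxon (e.g. a Thiothrix-like filament) dominant

h_even = shannon(even_community)     # maximal for 4 taxa: ln(4)
h_bloom = shannon(bloom_community)   # depressed by the dominant taxon
```

If a primer set over-amplifies the dominant taxon, as V1-V3 apparently did for Thiothrix, the computed index drops even though the underlying community is unchanged, which is the effect the abstract describes.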
An extended affinity propagation clustering method based on different data density types.
Zhao, XiuLi; Xu, WeiXiang
2015-01-01
Affinity propagation (AP), as a novel clustering method, does not require users to specify initial cluster centers in advance: it treats all data points equally as potential exemplars (cluster centers) and forms clusters solely from the similarities among the data points. In many cases, however, regions of different density exist within the same data set, i.e., the data are not distributed homogeneously. In such situations the AP algorithm cannot group the data points into ideal clusters. In this paper, we propose an extended AP clustering algorithm to deal with this problem. There are two steps in our method: first, the data set is partitioned into several density types according to the nearest-neighbour distance of each data point; then, the AP clustering method is used to group the data points into clusters within each density type. Two experiments were carried out to evaluate the performance of our algorithm: one uses an artificial data set and the other a real seismic data set. The experimental results show that our algorithm obtains more accurate groups than OPTICS and the AP algorithm itself.
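The first step, partitioning by density type, can be sketched with nearest-neighbour distances on synthetic data; the median split used here is an illustrative choice, not the paper's exact rule, and the second step would then run AP separately within each part:

```python
import numpy as np

rng = np.random.default_rng(4)
dense = rng.normal(0.0, 0.05, size=(40, 2))    # tightly packed cluster
sparse = rng.normal(5.0, 1.0, size=(40, 2))    # loosely packed cluster
X = np.vstack([dense, sparse])

# Pairwise distances; ignore self-distances on the diagonal.
diff = X[:, None, :] - X[None, :, :]
dists = np.sqrt((diff ** 2).sum(-1))
np.fill_diagonal(dists, np.inf)
nn_dist = dists.min(axis=1)                    # nearest-neighbour distance per point

threshold = np.median(nn_dist)                 # illustrative split point
density_type = (nn_dist > threshold).astype(int)   # 0 = dense type, 1 = sparse type
```

Clustering each density type separately keeps AP's similarity preference meaningful within a part, which is the failure mode the extended algorithm targets when densities are mixed.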
Kegler, Michelle C; Swan, Deanne W; Alcantara, Iris; Wrensford, Louise; Glanz, Karen
2012-09-01
This study examines the relative contribution of social (eg, social support) and physical (eg, programs and facilities) aspects of worksite, church, and home settings to physical activity levels among adults in rural communities. Data are from a cross-sectional survey of 268 African American and Caucasian adults, ages 40-70, living in southwest Georgia. Separate regression models were developed for walking, moderate, vigorous, and total physical activity as measured in METs-minutes-per-week. Social support for physical activity was modest in all 3 settings (mean scores 1.5-1.9 on a 4-point scale). Participants reported limited (<1) programs and facilities for physical activity at their worksites and churches. An interaction of physical and social aspects of the home setting was observed for vigorous and moderate physical activity and total METs. There were also interactions between gender and social support at church for vigorous activity among women, and between race and the physical environment at church for moderate physical activity. A cross-over interaction was found between home and church settings for vigorous physical activity. Social support at church was associated with walking and total METs. Homes and churches may be important behavioral settings for physical activity among adults in rural communities.
Parametric motion control of robotic arms: A biologically based approach using neural networks
NASA Technical Reports Server (NTRS)
Bock, O.; D'Eleuterio, G. M. T.; Lipitkas, J.; Grodski, J. J.
1993-01-01
A neural network based system is presented which is able to generate point-to-point movements of robotic manipulators. The foundation of this approach is the use of prototypical control torque signals which are defined by a set of parameters. The parameter set is used for scaling and shaping of these prototypical torque signals to effect a desired outcome of the system. This approach is based on neurophysiological findings that the central nervous system stores generalized cognitive representations of movements called synergies, schemas, or motor programs. It has been proposed that these motor programs may be stored as torque-time functions in central pattern generators which can be scaled with appropriate time and magnitude parameters. The central pattern generators use these parameters to generate stereotypical torque-time profiles, which are then sent to the joint actuators. Hence, only a small number of parameters need to be determined for each point-to-point movement instead of the entire torque-time trajectory. This same principle is implemented for controlling the joint torques of robotic manipulators where a neural network is used to identify the relationship between the task requirements and the torque parameters. Movements are specified by the initial robot position in joint coordinates and the desired final end-effector position in Cartesian coordinates. This information is provided to the neural network which calculates six torque parameters for a two-link system. The prototypical torque profiles (one per joint) are then scaled by those parameters. After appropriate training of the network, our parametric control design allowed the reproduction of a trained set of movements with relatively high accuracy, and the production of previously untrained movements with comparable accuracy. We conclude that our approach was successful in discriminating between trained movements and in generalizing to untrained movements.
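The scaling of a prototypical torque-time profile by magnitude and duration parameters can be sketched as follows (the sinusoidal prototype and all names here are hypothetical illustrations, not the paper's stored synergies):

```python
import numpy as np

def scaled_torque(prototype, amplitude, duration, n=100):
    """Scale a prototypical torque-time profile in magnitude and time.
    `prototype` is a function on normalized time [0, 1]; returns torque
    samples and the corresponding times over the movement's duration."""
    s = np.linspace(0.0, 1.0, n)
    return amplitude * prototype(s), s * duration

# a biphasic (accelerate/decelerate) prototype, one plausible shape
proto = lambda s: np.sin(2 * np.pi * s)

# a network would output (amplitude, duration) per joint; here fixed
tau, t = scaled_torque(proto, amplitude=2.5, duration=0.8)
```

Only the two scalars per joint need to be produced by the network; the full torque-time trajectory is reconstructed from the stored prototype.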
Perturbed Equations of Motion for Formation Flight Near the Sun-Earth L2 Point
NASA Technical Reports Server (NTRS)
Luquette, Richard; Segerman, A. M.; Zedd, M. F.
2005-01-01
NASA is planning missions to the vicinity of the Sun-Earth L2 point, some involving a distributed system of telescope spacecraft configured in a plane about a hub. Several sets of differential equations, with varying levels of fidelity, are written for the formation flight of such telescopes relative to the hub. Effects are cast as additive perturbations to the circular restricted three-body problem, expanded in terms of the system distances, to an accuracy of 10-20 m. These include Earth's orbital eccentricity, lunar motion, solar radiation pressure, and small thrusting forces. Simulations validating the expanded differential equations are presented.
Coherent states and parasupersymmetric quantum mechanics
NASA Technical Reports Server (NTRS)
Debergh, Nathalie
1992-01-01
It is well known that parafermi and parabose statistics are natural extensions of the usual Fermi and Bose ones, involving trilinear (anti)commutation relations instead of bilinear ones. Due to this generalization, positive parameters appear: the so-called orders of paraquantization p (= 1, 2, 3, ...) and h_0 (= 1/2, 1, 3/2, ...), respectively, the first value leading in each case to the usual statistics. The superposition of the parabosonic and parafermionic operators gives rise to parasupermultiplets, for which mixed trilinear relations have already been studied, leading to two (nonequivalent) sets: the relative parabose and the relative parafermi ones. For the specific values p = 1 = 2h_0, these sets reduce to the well-known supersymmetry. Coherent states associated with this last model have recently been exhibited through the annihilation-operator point of view as well as the group-theoretical (displacement-operator) approach. We propose to carry out the corresponding studies within the new context p = 2 = 2h_0, which is then directly extended to any order of paraquantization.
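For reference, the trilinear structure mentioned above can be written explicitly; a standard single-mode form (Green's formulation, stated here from general knowledge rather than from this abstract) is

\[
\text{parafermi:}\quad [[a^{\dagger}, a], a] = -2a, \qquad a^{p+1} = 0,
\]
\[
\text{parabose:}\quad [\{a^{\dagger}, a\}, a] = -2a,
\]

with the order of paraquantization fixed by the chosen Fock-space representation; for p = 1 these reduce to the usual bilinear Fermi and Bose relations.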
Gene function in early mouse embryonic stem cell differentiation
Sene, Kagnew Hailesellasse; Porter, Christopher J; Palidwor, Gareth; Perez-Iratxeta, Carolina; Muro, Enrique M; Campbell, Pearl A; Rudnicki, Michael A; Andrade-Navarro, Miguel A
2007-01-01
Background Little is known about the genes that drive embryonic stem cell differentiation. However, such knowledge is necessary if we are to exploit the therapeutic potential of stem cells. To uncover the genetic determinants of mouse embryonic stem cell (mESC) differentiation, we have generated and analyzed 11-point time series of DNA microarray data for three biologically equivalent but genetically distinct mESC lines (R1, J1, and V6.5) undergoing undirected differentiation into embryoid bodies (EBs) over a period of two weeks. Results We identified the initial 12-hour period as reflecting the early stages of mESC differentiation and studied probe sets showing consistent changes of gene expression in that period. Gene function analysis indicated significant up-regulation of genes related to regulation of transcription and mRNA splicing, and down-regulation of genes related to intracellular signaling. Phylogenetic analysis indicated that the genes showing the largest expression changes were more likely to have originated in metazoans. The probe sets with the most consistent gene changes in the three cell lines represented 24 down-regulated and 12 up-regulated genes, all with closely related human homologues. Whereas some of these genes are known to be involved in embryonic developmental processes (e.g. Klf4, Otx2, Smn1, Socs3, Tagln, Tdgf1), our analysis points to others (such as transcription factor Phf21a, extracellular matrix related Lama1 and Cyr61, or endoplasmic reticulum related Sc4mol and Scd2) that have not been previously related to mESC function. The majority of identified functions were related to transcriptional regulation, intracellular signaling, and the cytoskeleton. Genes involved in other cellular functions important in ESC differentiation, such as chromatin remodeling and transmembrane receptors, were not observed in this set.
Conclusion Our analysis profiles for the first time gene expression at a very early stage of mESC differentiation, and identifies a functional and phylogenetic signature for the genes involved. The data generated constitute a valuable resource for further studies. All DNA microarray data used in this study are available in the StemBase database of stem cell gene expression data [1] and in the NCBI's GEO database. PMID:17394647
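The "consistent change across the three cell lines" filter can be sketched with pandas (the column names and log-fold-change threshold are hypothetical; the study's actual microarray statistics are more involved):

```python
import numpy as np
import pandas as pd

def consistent_probes(logfc, min_abs=1.0):
    """Keep probe sets whose early log2 fold changes agree in sign across
    all cell lines and exceed `min_abs` in every line.
    `logfc`: DataFrame, rows = probe sets, columns = cell lines."""
    same_sign = np.sign(logfc).nunique(axis=1) == 1
    big_enough = (logfc.abs() >= min_abs).all(axis=1)
    return logfc[same_sign & big_enough]

# toy values for three lines; gene names taken from the abstract
logfc = pd.DataFrame(
    {"R1": [2.1, -1.5, 0.3], "J1": [1.8, -2.0, 1.2], "V6.5": [2.4, -1.1, -0.9]},
    index=["Klf4", "Socs3", "Phf21a"])
hits = consistent_probes(logfc)
```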
Epidemiology of Vocal Health in Young Adults Attending College in the United States.
Hartley, Naomi A; Breen, Ellen; Thibeault, Susan L
2016-10-01
The purpose of this study was to document typical vocal health characteristics (including voice-related activities, behaviors, and symptomatology) of young adults attending college and to determine lifetime and point prevalence rates of voice disorders. Undergraduates at the University of Wisconsin-Madison completed an anonymous online survey detailing vocal use, symptomatology, impact, sociodemographics, and voice-related quality of life. Univariate analyses and multivariate regression models isolated risk factors for lifetime and point prevalence rates of a voice disorder. Vocal health and associated factors were analyzed for 652 students (predominantly 18-25 years of age). Lifetime prevalence rate of a voice disorder was 33.9% (point prevalence = 4.45%). Change in voice function (odds ratio [OR] = 2.77), seasonal or chronic postnasal drip (OR = 2.11), hoarseness (OR = 2.08), and restrictions to social activity (OR = 2.07; all p < .05) were identified as the strongest predictors of a disorder. A total of 46% of students reported some form of voice problem in the past year, most frequently lasting between 1 and 6 days (39%). Voice usage in social and work settings exceeded demands in the classroom. Young adults in college frequently experience disturbances to vocal health; however, this is not usually perceived to interfere with communication. Relative weighting of risk factors appears to differ from older adults, highlighting the need for individualized evaluation and management, with reference to age-appropriate normative reference points.
NASA Astrophysics Data System (ADS)
Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.
2018-04-01
In this work, we report a novel way of generating a ground-truth dataset for analyzing point clouds from different sensors and for validating algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid of the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within voxels at multiple resolutions, in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained with various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds of the same scene from different sensors simply by looking up the labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms for point cloud interpretation and semantic segmentation.
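The voting-based label transfer can be sketched on a single-resolution uniform grid (a simplification of the paper's octree-based multi-resolution structure; all names here are illustrative):

```python
import numpy as np
from collections import Counter

def voxel_vote_labels(ref_pts, ref_labels, voxel=1.0):
    """Majority-vote a semantic label for each occupied voxel from
    labeled reference points (uniform grid; the paper uses an octree)."""
    keys = np.floor(ref_pts / voxel).astype(int)
    votes = {}
    for k, lab in zip(map(tuple, keys), ref_labels):
        votes.setdefault(k, Counter())[lab] += 1
    return {k: c.most_common(1)[0][0] for k, c in votes.items()}

def annotate(new_pts, voxel_labels, voxel=1.0, unknown=-1):
    """Transfer voxel labels onto a new point cloud of the same scene."""
    keys = map(tuple, np.floor(new_pts / voxel).astype(int))
    return np.array([voxel_labels.get(k, unknown) for k in keys])
```

A new cloud from another sensor, once registered to the same frame, inherits labels by simple voxel lookup.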
Clendenin, C.W.; Diehl, S.F.
1999-01-01
A pronounced, subparallel set of northeast-striking faults occurs in southeastern Missouri, but little is known about these faults because of poor exposure. The Commerce fault system is the southernmost exposed fault system in this set and has an ancestry related to Reelfoot rift extension. Recent published work indicates that this fault system has a long history of reactivation. The northeast-striking Grays Point fault zone is a segment of the Commerce fault system and is well exposed along the southeast rim of an inactive quarry. Our mapping shows that the Grays Point fault zone also has a complex history of polyphase reactivation, involving three periods of Paleozoic reactivation that occurred in the Late Ordovician, the Devonian, and post-Mississippian time. Each period is characterized by divergent, right-lateral oblique-slip faulting. Petrographic examination of sidewall rip-out clasts in calcite-filled faults associated with the Grays Point fault zone supports a minimum of three periods of right-lateral oblique-slip. The reported observations imply that a genetic link exists between intracratonic fault reactivation and strain produced by Paleozoic orogenies affecting the eastern margin of Laurentia (North America). Interpretation of this link indicates that right-lateral oblique-slip has occurred on all of the northeast-striking faults in southeastern Missouri as a result of strain influenced by the convergence directions of the different Paleozoic orogenies.
NASA Astrophysics Data System (ADS)
Nassiri, Isar; Lombardo, Rosario; Lauria, Mario; Morine, Melissa J.; Moyseos, Petros; Varma, Vijayalakshmi; Nolen, Greg T.; Knox, Bridgett; Sloper, Daniel; Kaput, Jim; Priami, Corrado
2016-07-01
The investigation of the complex processes involved in cellular differentiation must be based on unbiased, high throughput data processing methods to identify relevant biological pathways. A number of bioinformatics tools are available that can generate lists of pathways ranked by statistical significance (i.e. by p-value), while ideally it would be desirable to functionally score the pathways relative to each other or to other interacting parts of the system or process. We describe a new computational method (Network Activity Score Finder - NASFinder) to identify tissue-specific, omics-determined sub-networks and the connections with their upstream regulator receptors to obtain a systems view of the differentiation of human adipocytes. Adipogenesis of human SBGS pre-adipocyte cells in vitro was monitored with a transcriptomic data set comprising six time points (0, 6, 48, 96, 192, 384 hours). To elucidate the mechanisms of adipogenesis, NASFinder was used to perform time-point analysis by comparing each time point against the control (0 h) and time-lapse analysis by comparing each time point with the previous one. NASFinder identified the coordinated activity of seemingly unrelated processes between each comparison, providing the first systems view of adipogenesis in culture. NASFinder has been implemented into a web-based, freely available resource associated with novel, easy to read visualization of omics data sets and network modules.
ERIC Educational Resources Information Center
Mamona-Downs, Joanna K.; Megalou, Foteini J.
2013-01-01
The aim of this paper is to examine students' understanding of the limiting behavior of a function from ℝ² to ℝ at a point "P." This understanding depends on which definition is used for a limit. Several definitions are considered; two of these concern the notion of a neighborhood of "P", while…
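For context, the neighborhood-based definition the abstract alludes to can be stated as follows (a standard formulation, not quoted from the paper):

\[
\lim_{(x,y)\to P} f(x,y) = L
\iff
\forall \varepsilon > 0\ \exists \delta > 0:\;
0 < \lVert (x,y) - P \rVert < \delta \implies |f(x,y) - L| < \varepsilon,
\]

or, equivalently, for every neighborhood \(V\) of \(L\) there is a deleted neighborhood \(U\) of \(P\) with \(f(U \cap \operatorname{dom} f) \subseteq V\).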
NASA Astrophysics Data System (ADS)
Espinosa-Garcia, J.
Ab initio molecular orbital theory was used to study parts of the reaction between the CH2Br radical and the HBr molecule, and two possibilities were analysed: attack on the hydrogen and attack on the bromine of the HBr molecule. Optimized geometries and harmonic vibrational frequencies were calculated at the second-order Møller-Plesset perturbation theory level, and comparison with available experimental data was favourable. Single-point calculations were then performed at several higher levels of calculation. In the attack on the hydrogen of HBr, two stationary points were located on the direct hydrogen abstraction reaction path: a very weak hydrogen-bonded complex of reactants, C···HBr, close to the reactants, followed by the saddle point (SP). The effects of the level of calculation (method + basis set), spin projection, zero-point energy, thermal corrections (298 K), spin-orbit coupling and basis set superposition error (BSSE) on the energy changes were analysed. Taking the reaction enthalpy (298 K) as reference, agreement with experiment was obtained only when high correlation energy and large basis sets were used. It was concluded that at room temperature (i.e., with zero-point energy and thermal corrections), when the BSSE was included, the complex disappears and the activation enthalpy (298 K) ranges from 0.8 kcal mol⁻¹ to 1.4 kcal mol⁻¹ above the reactants, depending on the level of calculation. It was also concluded that this result is the balance of a complicated interplay of many factors, which are affected by uncertainties in the theoretical calculations. Finally, another possible complex (X complex), in which the alkyl radical is attracted to the halogen end of HBr (C···BrH), was also explored. It was concluded that this X complex does not exist at room temperature.
Is a changing climate affecting the tropical cyclone behavior of Cape Verde?
NASA Astrophysics Data System (ADS)
Emmenegger, T. W.; Mann, M. E.; Evans, J. L.
2016-12-01
An existing dataset of synthetic tropical cyclone (TC) tracks derived from climate change simulations was used to explore TC variability within a 250 km radius of the Cape Verde Islands (16.5388N, 23.0418W). The synthetic sets were examined according to genesis point location, track projection, intensity, frequency, and seasonality within the observational era (1851 AD to present). These factors of TC variability have been shown to be strongly related to climate oscillations, so the historical era was grouped by the increasing and decreasing regimes of sea surface temperature (SST) in the main development region (MDR) of the Atlantic Ocean. Numerous studies have examined Atlantic Basin activity throughout this era; the goal of this study is to investigate possible variations in TC behavior around Cape Verde, ultimately determining whether Cape Verde experiences similar fluctuations in activity to those observed basin-wide. We find that several facets of TC variability, such as intensity, seasonality, and genesis point location around Cape Verde, are not significantly different from those of the entire basin, so basin-wide forecasts in these respects may also apply to our site. A long-term trend of increasing TC frequency can be identified basin-wide within the observed set, yet activity around Cape Verde does not display this behavior either observationally or in any synthetic set. A relationship between the location of genesis points and the regimes of SST fluctuation is shown to exist. We find more genesis points, both observed and synthetic, in the vicinity of Cape Verde during cool periods, and an eastward and equatorward shift in cyclogenesis is evident during warm regimes. This southeastern shift in genesis points contributes to the increased intensities of TCs seen during periods of warmer SST. Years of increased SST are additionally linked to an earlier TC seasonality at Cape Verde.
Mack, Frederick K.; Wheeler, J.C.; Curtin, Stephen E.
1982-01-01
The map is based on the differences between two sets of water-level measurements made in 65 observation wells. One set was made in 1977, a relatively dry year, and the other set was made in 1980, another relatively dry year. The map shows that the potentiometric surface was higher in 1980, by as much as 9 feet, than it was in 1977, in a band a few miles wide near the outcrop and subcrop areas of the aquifer in northern Prince Georges County and central Anne Arundel County. In the remainder of the map area, the 1980 potentiometric surface was lower than it had been in 1977, with declines as great as 20 feet measured in well fields at Waldorf and Chalk Point. The network of observation wells was developed and is operated and maintained as part of the cooperative program between the U.S. Geological Survey and agencies of the Maryland Department of Natural Resources. (USGS)
Online tracking of outdoor lighting variations for augmented reality with moving cameras.
Liu, Yanli; Granier, Xavier
2012-04-01
In augmented reality, one of the key tasks in achieving a convincing visual consistency between virtual objects and video scenes is to maintain coherent illumination along the whole sequence. As outdoor illumination depends largely on the weather, the lighting condition may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations from videos captured with moving cameras. Our key idea is to estimate the relative intensities of sunlight and skylight via a sparse set of planar feature points extracted from each frame. To address the inevitable feature misalignments, a set of constraints is introduced to select the most reliable ones. Exploiting the spatial and temporal coherence of illumination, the relative intensities of sunlight and skylight are finally estimated through an optimization process. We validate our technique on a set of real-life videos and show that the results with our estimations are visually coherent along the video sequences.
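A minimal version of the per-frame estimation step might look like the following least-squares fit under a simple Lambertian model (the model and all names are assumptions for illustration; the paper's optimization also exploits temporal coherence and robust feature selection):

```python
import numpy as np

def estimate_sun_sky(normals, sun_dir, intensities):
    """Least-squares estimate of relative sunlight/skylight intensities
    from planar feature points, assuming the Lambertian model
    I_i = w_sun * max(n_i . l, 0) + w_sky."""
    shading = np.clip(normals @ sun_dir, 0.0, None)   # sun shading term
    A = np.column_stack([shading, np.ones_like(shading)])
    (w_sun, w_sky), *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return w_sun, w_sky
```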
NASA Astrophysics Data System (ADS)
Gronz, Oliver; Seeger, Manuel; Klaes, Björn; Casper, Markus C.; Ries, Johannes B.
2015-04-01
Accurate and dense 3D models of soil surfaces can be used in various ways: They can be used as initial shapes for erosion models. They can be used as benchmark shapes for erosion model outputs. They can be used to derive metrics, such as random roughness... One easy and low-cost method to produce these models is structure from motion (SfM). Using this method, two questions arise: Does the soil moisture, which changes the colour, albedo and reflectivity of the soil, influence the model quality? How can the model quality be evaluated? To answer these questions, a suitable data set has been produced: soil has been placed on a tray and areas with different roughness structures have been formed. For different moisture states - dry, medium, saturated - and two different lighting conditions - direct and indirect - sets of high-resolution images at the same camera positions have been taken. From the six image sets, 3D point clouds have been produced using VisualSfM. The visual inspection of the 3D models showed that all models have different areas where holes of different sizes occur. But determining the model's quality by visual inspection is obviously a subjective task. One typical approach to evaluate model quality objectively is to estimate the point density on a regular, two-dimensional grid: the number of 3D points in each grid cell projected on a plane is counted. This works well for surfaces that do not show vertical structures. Along vertical structures, many points will be projected onto the same grid cell, and thus the point density depends more on the shape of the surface than on the quality of the model. Another approach has been applied by using the points resulting from Poisson surface reconstruction. One of this algorithm's properties is the filling of holes: new points are interpolated inside the holes.
Using the original 3D point cloud and the interpolated Poisson point set, two analyses have been performed: For all Poisson points, the distance to the closest original point cloud member has been calculated. For the resulting set of distances, histograms have been produced that show the distribution of point distances. As the Poisson points also make up a connected mesh, the size and distribution of single holes can also be estimated by labeling Poisson points that belong to the same hole: each hole gets a specific number. Afterwards, the area of the mesh formed by each set of Poisson hole points can be calculated. The result is a set of distinctive holes and their sizes. The two approaches showed that the hole-ness of the point cloud depends on the soil moisture respectively the reflectivity: the distance distribution of the model of the saturated soil shows the smallest number of large distances. The histogram of the medium state shows more large distances and the dry model shows the largest distances. Models resulting from indirect lighting are better than the models resulting from direct light for all moisture states.
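The distance-to-closest-original-point analysis can be sketched with a k-d tree (an illustrative sketch, not the authors' tool chain):

```python
import numpy as np
from scipy.spatial import cKDTree

def poisson_gap_distances(original_pts, poisson_pts):
    """For every Poisson-reconstructed point, the distance to the nearest
    original point; large distances flag interpolated hole regions."""
    tree = cKDTree(original_pts)
    d, _ = tree.query(poisson_pts)
    return d

def hole_fraction(d, thresh):
    """Fraction of Poisson points farther than a gap threshold."""
    return float(np.mean(d > thresh))
```

A histogram of `d` per model then gives the distance distributions compared in the text; models with more and larger holes show heavier tails.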
Field Comparison between Sling Psychrometer and Meteorological Measuring Set AN/TMQ-22
The Meteorological Measuring Set AN/TMQ-22 was compared in the field with the ML-224 Sling Psychrometer. From a series of independent tests designed to minimize error, it was concluded that the AN/TMQ-22 yielded a more accurate dew point reading. The average relative humidity error using the sling psychrometer was +9%, while the AN/TMQ-22 had an error of ±2%. Even with cautious measurement the sling yielded a +4% error.
Stability and Change in Interests: A Longitudinal Study of Adolescents from Grades 8 through 12
ERIC Educational Resources Information Center
Tracey, Terence J. G.; Robbins, Steven B.; Hofsess, Christy D.
2005-01-01
The patterns of RIASEC interests and academic skills were assessed longitudinally from a large-scale national database at three time points: 8th grade, 10th grade, and 12th grade. Validation and cross-validation samples of 1000 males and 1000 females in each set were used to test the pattern of these scores over time relative to mean changes,…
Equilibrium relative humidity as a tool to monitor seed moisture
Robert P. Karrfalt
2010-01-01
The importance of seed moisture in maintaining high seed viability is well known. The seed storage chapters in the Tropical Tree Seed Manual (Hong and Ellis 2003) and the Woody Plant Seed Manual (Bonner 2008a) give a detailed discussion and many references on this point. Working with seeds in an operational setting requires a test of seed moisture status. It is...
Economics of regulation: externalities and institutional issues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahn, A.E.
In two previous articles, ''Can An Economist Find Happiness Setting Public Utility Rates'' and ''Applications of Economics to Public Utility Rate Structures'', appearing in Public Utilities Fortnightly January 5 and January 19, 1978, respectively, the author summarized his experiences in applying elementary economic principles to the regulation of public utilities in New York state, specifically to setting utility rates. In this article, Mr. Kahn discusses second-best considerations and externalities. He points out that opponents of marginal-cost pricing--particularly of electricity--have in recent years become enthusiastic exponents of the theory of second best. What is required, he feels, is an examination of how other, most directly pertinent prices in the economy actually stand relative to their marginal costs. These would be the prices of goods and services for which electricity is a substitute; with which electricity is used as a complement; in whose supply electricity is an input; and which themselves constitute inputs in the production and delivery of electricity. Oil and gas are more complicated cases. External costs, such as abatement requirements, are considered when setting rates. The author points out other regulatory issues to be considered in decision making to conclude this series of articles. (MCW)
Das, Saptarshi; Pan, Indranil; Das, Shantanu
2013-07-01
Fuzzy logic based PID controllers are studied in this paper, considering several combinations of hybrid controllers obtained by grouping the proportional, integral and derivative actions with fuzzy inferencing in different forms. The fractional-order (FO) rate of the error signal and the FO integral of the control signal are used in the design of a family of decomposed hybrid FO fuzzy PID controllers. The input and output scaling factors (SF), along with the integro-differential operators, are tuned with a real-coded genetic algorithm (GA) to produce optimum closed-loop performance by simultaneous consideration of the control loop error index and the control signal. Three different classes of fractional-order oscillatory processes, with various levels of relative dominance between time constant and time delay, are used to test the comparative merits of the proposed family of hybrid fractional-order fuzzy PID controllers. Performance of the different FO fuzzy PID controller structures is compared in terms of optimal set-point tracking, load disturbance rejection, and minimal variation of the manipulated variable (i.e., smaller actuator requirements). In addition, the multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) is used to study the Pareto-optimal trade-offs between set-point tracking and the control signal, and between set-point tracking and load disturbance performance, for each controller structure on the three types of processes.
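The fractional-order operators at the heart of such controllers are typically discretized; a minimal Grünwald-Letnikov sketch (illustrative only, not the paper's controller) is:

```python
import numpy as np

def gl_fractional_derivative(x, alpha, h):
    """Grunwald-Letnikov approximation of the order-alpha derivative of a
    sampled signal x with step h (alpha in [0, 1]; alpha=1 recovers the
    backward difference, alpha=0 the identity)."""
    n = len(x)
    # binomial weights w_k = (-1)^k * C(alpha, k), built recursively
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    y = np.empty(n)
    for j in range(n):
        y[j] = np.dot(w[: j + 1], x[j::-1]) / h**alpha
    return y
```

In an FO fuzzy PID loop, this operator would supply the fractional rate of the error fed into the fuzzy inference block.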
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Howard; Braun, James E.
2015-12-31
This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.
Hu, Zhiyong; Liebens, Johan; Rao, K Ranga
2008-01-01
Background Relatively few studies have examined the association between air pollution and stroke mortality. Inconsistent and inconclusive results from existing studies on air pollution and stroke justify the need to continue to investigate the linkage between stroke and air pollution. No studies have investigated the association between stroke and greenness. The objective of this study was to examine whether stroke is associated with air pollution, income and greenness in northwest Florida. Results Our study used an ecological geographical approach and a dasymetric mapping technique. We adopted a Bayesian hierarchical model with a convolution prior considering five census-tract-specific covariates. A 95% credible set, which defines an interval having a 0.95 posterior probability of containing the parameter for each covariate, was calculated from Markov chain Monte Carlo simulations. The 95% credible sets are (-0.286, -0.097) for household income, (0.034, 0.144) for the traffic air pollution effect, (0.419, 1.495) for the emission density of monitored point-source polluters, (0.413, 1.522) for the simple point density of point-source polluters without emission data, and (-0.289, -0.031) for greenness. Household income and greenness show negative effects (the posterior densities primarily cover negative values). Air pollution covariates have positive effects (the 95% credible sets cover positive values). Conclusion High risk of stroke mortality was found in areas with low income levels, high air pollution levels, and low exposure to green space. PMID:18452609
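The reported 95% credible sets are equal-tailed posterior intervals; from MCMC draws they can be computed as in this sketch (illustrative, not the study's code):

```python
import numpy as np

def credible_interval(samples, mass=0.95):
    """Equal-tailed credible interval from MCMC draws of one parameter."""
    lo = (1.0 - mass) / 2.0
    return tuple(np.quantile(samples, [lo, 1.0 - lo]))

def effect_sign(interval):
    """Sign check used in such studies: an effect is called negative
    (positive) when the whole interval lies below (above) zero."""
    a, b = interval
    return "negative" if b < 0 else "positive" if a > 0 else "uncertain"
```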
Naugle, Alecia Larew; Barlow, Kristina E; Eblen, Denise R; Teter, Vanessa; Umholtz, Robert
2006-11-01
The U.S. Food Safety and Inspection Service (FSIS) tests sets of samples of selected raw meat and poultry products for Salmonella to ensure that federally inspected establishments meet performance standards defined in the pathogen reduction-hazard analysis and critical control point system (PR-HACCP) final rule. In the present report, sample set results are described and associations between set failure and set and establishment characteristics are identified for 4,607 sample sets collected from 1998 through 2003. Sample sets were obtained from seven product classes: broiler chicken carcasses (n = 1,010), cow and bull carcasses (n = 240), market hog carcasses (n = 560), steer and heifer carcasses (n = 123), ground beef (n = 2,527), ground chicken (n = 31), and ground turkey (n = 116). Of these 4,607 sample sets, 92% (4,255) were collected as part of random testing efforts (A sets), and 93% (4,166) passed. However, the percentage of positive samples relative to the maximum number of positive results allowable in a set increased over time for broilers but decreased or stayed the same for the other product classes. Three factors associated with set failure were identified: establishment size, product class, and year. Set failures were more likely early in the testing program (relative to 2003). Small and very small establishments were more likely to fail than large ones. Set failure was less likely in ground beef than in other product classes. Despite an overall decline in set failures through 2003, these results highlight the need for continued vigilance to reduce Salmonella contamination in broiler chicken and continued implementation of programs designed to assist small and very small establishments with PR-HACCP compliance issues.
DOT National Transportation Integrated Search
2016-09-01
We consider the problem of subspace clustering: given points that lie on or near the union of many low-dimensional linear subspaces, recover the subspaces. To this end, one first identifies sets of points close to the same subspace and uses the sets ...
Napadow, Vitaly; Liu, Jing; Kaptchuk, Ted J
2004-12-01
Acupuncture textbooks mention a wide assortment of indications for each acupuncture point and, conversely, each disease or indication can be treated by a wide assortment of acupoints. However, little systematic information exists on how acupuncture is actually used in practice: i.e. which points are actually selected and for which conditions. This study prospectively gathered data on acupuncture point usage in two primarily acupuncture hospital clinics in Beijing, China. Of the more than 150 unique acupoints, the 30 most commonly used points represented 68% of the total number of acupoints needled at the first clinic, and 63% of points needled at the second clinic. While acupuncturists use a similar set of most prevalent points, such as LI-4 (used in >65% of treatments at both clinic sites), this core of points only partially overlaps. These results support the hypothesis that while the most commonly used points are similar from one acupuncturist to another, each practitioner tends to have certain acupoints, which are favorites as core points or to round out the point prescription. In addition, the results of this study are consistent with the recent development of "manualized" protocols in randomized controlled trials of acupuncture where a fixed set of acupoints are augmented depending on individualized signs and symptoms (TCM patterns).
Environment parameters and basic functions for floating-point computation
NASA Technical Reports Server (NTRS)
Brown, W. S.; Feldman, S. I.
1978-01-01
A language-independent proposal for environment parameters and basic functions for floating-point computation is presented. Basic functions are proposed to analyze, synthesize, and scale floating-point numbers. The model provides a small set of parameters and a small set of axioms along with sharp measures of roundoff error. The parameters and functions can be used to write portable and robust codes that deal intimately with the floating-point representation. Subject to underflow and overflow constraints, a number can be scaled by a power of the floating-point radix inexpensively and without loss of precision. A specific representation for FORTRAN is included.
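The analyze/synthesize/scale primitives proposed in this report closely resemble the frexp/ldexp pair later adopted by standard libraries; the correspondence to the proposal's exact function names is our assumption. A minimal Python illustration:

```python
import math

x = 0.15625

# "Analyze": decompose x into a fraction in [0.5, 1) and a base-2 exponent.
frac, exp = math.frexp(x)   # frac = 0.625, exp = -2

# "Synthesize": rebuild the number exactly; no rounding occurs because
# only the exponent field changes.
assert math.ldexp(frac, exp) == x

# "Scale": multiplying by a power of the radix (2) is exact, subject only
# to overflow/underflow limits, as the abstract notes.
y = math.ldexp(x, 10)       # x * 2**10, computed without precision loss
assert y == x * 1024
```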
NASA Technical Reports Server (NTRS)
Robinson, E. L.; Fuller, C. A.
1999-01-01
Whole body heat production (HP) and heat loss (HL) were examined to determine their relative contributions to light masking of the circadian rhythm in body temperature (Tb). Squirrel monkey metabolism (n = 6) was monitored by both indirect and direct calorimetry, with telemetered measurement of body temperature and activity. Feeding was also measured. Responses to an entraining light-dark (LD) cycle (LD 12:12) and a masking LD cycle (LD 2:2) were compared. HP and HL contributed to both the daily rhythm and the masking changes in Tb. All variables showed phase-dependent masking responses. Masking transients at L or D transitions were generally greater during subjective day; however, L masking resulted in sustained elevation of Tb, HP, and HL during subjective night. Parallel, apparently compensatory, changes of HL and HP suggest action by both the circadian timing system and light masking on Tb set point. Furthermore, transient HL increases during subjective night suggest that gain change may supplement set point regulation of Tb.
An Optimal Set of Flesh Points on Tongue and Lips for Speech-Movement Classification
ERIC Educational Resources Information Center
Wang, Jun; Samal, Ashok; Rong, Panying; Green, Jordan R.
2016-01-01
Purpose: The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements. Method: The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of…
NASA Astrophysics Data System (ADS)
Kamath, Aditya; Vargas-Hernández, Rodrigo A.; Krems, Roman V.; Carrington, Tucker; Manzhos, Sergei
2018-06-01
For molecules with more than three atoms, it is difficult to fit or interpolate a potential energy surface (PES) from a small number of (usually ab initio) energies at points. Many methods have been proposed in recent decades, each claiming a set of advantages. Unfortunately, there are few comparative studies. In this paper, we compare neural networks (NNs) with Gaussian process (GP) regression. We re-fit an accurate PES of formaldehyde and compare PES errors on the entire point set used to solve the vibrational Schrödinger equation, i.e., the only error that matters in quantum dynamics calculations. We also compare the vibrational spectra computed on the underlying reference PES and the NN and GP potential surfaces. The NN and GP surfaces are constructed with exactly the same points, and the corresponding spectra are computed with the same points and the same basis. The GP fitting error is lower, and the GP spectrum is more accurate. The best NN fits to 625/1250/2500 symmetry unique potential energy points have global PES root mean square errors (RMSEs) of 6.53/2.54/0.86 cm-1, whereas the best GP surfaces have RMSE values of 3.87/1.13/0.62 cm-1, respectively. When fitting 625 symmetry unique points, the error in the first 100 vibrational levels is only 0.06 cm-1 with the best GP fit, whereas the spectrum on the best NN PES has an error of 0.22 cm-1, with respect to the spectrum computed on the reference PES. This error is reduced to about 0.01 cm-1 when fitting 2500 points with either the NN or GP. We also find that the GP surface produces a relatively accurate spectrum when obtained based on as few as 313 points.
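The GP-versus-NN comparison above can be illustrated with a minimal NumPy-only Gaussian-process fit of a toy one-dimensional "PES". The kernel, length scale, and surface below are illustrative assumptions, not the paper's formaldehyde setup.

```python
import numpy as np

# Squared-exponential (RBF) kernel between two 1-D point sets.
def rbf(a, b, length=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

# Training set: energies at a small number of geometries (toy quadratic PES).
x_train = np.linspace(0.0, 3.0, 12)
y_train = (x_train - 1.2) ** 2

# GP posterior mean at test points (noise-free interpolation with jitter
# for numerical stability).
K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)
x_test = np.linspace(0.2, 2.8, 50)
y_pred = rbf(x_test, x_train) @ alpha

# Global fit error against the known toy surface, analogous to the paper's
# PES RMSE on the point set used for quantum dynamics.
rmse = np.sqrt(np.mean((y_pred - (x_test - 1.2) ** 2) ** 2))
```

With a dozen points the GP interpolates a smooth surface to small RMSE, which is the qualitative behavior the paper quantifies for formaldehyde in many dimensions.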
Halovic, Shaun; Kroos, Christian
2017-12-01
This data set describes the experimental data collected and reported in the research article "Walking my way? Walker gender and display format confounds the perception of specific emotions" (Halovic and Kroos, in press) [1]. The data set represents perceiver identification rates for different emotions (happiness, sadness, anger, fear and neutral), as displayed by full-light, point-light and synthetic point-light walkers. The perceiver identification scores have been transformed into Ht rates, which represent proportions/percentages of correct identifications above what would be expected by chance. This data set also provides Ht rates separately for male, female and ambiguously gendered walkers.
Trivial dynamics in discrete-time systems: carrying simplex and translation arcs
NASA Astrophysics Data System (ADS)
Niu, Lei; Ruiz-Herrera, Alfonso
2018-06-01
In this paper we show that the dynamical behavior in the first octant of the classical Kolmogorov systems of competitive type admitting a carrying simplex can sometimes be determined completely by the number of fixed points on the boundary and the local behavior around them. Roughly speaking, T has trivial dynamics (i.e. the omega limit set of any orbit is a connected set contained in the set of fixed points) provided T has exactly four hyperbolic nontrivial fixed points on the boundary, with local attractors and local repellers on the carrying simplex, and there exists a unique hyperbolic fixed point in the interior. Our results are applied to some classical models including the Leslie–Gower models, Atkinson-Allen systems and Ricker maps.
Rebeiro, Geraldine; Edward, Karen-leigh; Chapman, Rose; Evans, Alicia
2015-12-01
A significant proportion of undergraduate nursing education occurs in the clinical setting in the form of practising skills and competencies, and is a requirement of all nursing curricula for registration to practice. Education in the clinical setting is facilitated by registered nurses, yet this interpersonal relationship has not been examined well. To investigate the experience of interpersonal relationships between registered nurses and student nurses in the clinical setting from the point of view of the registered nurse. Design: Integrative review. Review methods: The databases of MEDLINE, CINAHL and OVID were searched. Key words used included: Registered Nurse, Preceptor, Buddy Nurse, Clinical Teacher, Mentor, Student Nurse, Nursing Student, Interpersonal Relationships, Attitudes and Perceptions. Additional review of the literature was manually undertaken through university library textbooks. 632 abstracts were returned after duplicates were removed. Twenty-one articles were identified for full text read (quantitative n=2, mixed n=6, qualitative n=14); of these, seven articles addressed the experience of interpersonal relationships between registered nurses and student nurses in the clinical setting from the point of view of the registered nurse and these were reviewed. Providing education for registered nurses to enable them to lead student education in the clinical setting communicates the organizational value of the role. Registered nurses identified that being supported and having time to teach were important in facilitating the clinical teaching role. The integrative review did not provide evidence related to the impact diverse clinical settings can have on the relationships between registered nurses and student nurses, revealing an area for further examination. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Kumar, H; Daykin, J; Holder, R; Watkinson, J C; Sheppard, M C; Franklyn, J A
2001-06-01
Thyroid cancer is the most common endocrine malignancy but is nonetheless rare. Some aspects of its management remain controversial. Previous audits of patient management in the United Kingdom have revealed deficiencies, especially in communication between specialists. We have audited patient management in a large university-associated teaching hospital, assessing points of good practice identified from published guidelines and reviews, and have compared findings in groups of patients managed jointly by specialists with an interest in thyroid cancer (including surgeon, endocrinologist and oncologist) with a group managed by other clinicians outside that setting. Retrospective case-note review of 205 patients with differentiated (papillary or follicular) cancer including group A (n = 134; managed in a specialist multi-disciplinary clinic setting) and group B (n = 71; managed in other clinic settings). Points of good practice investigated were adequacy of surgery, surgical complications, prescription and adequacy of T4 treatment, adequacy of monitoring by measurement of serum thyroglobulin and action taken and appropriate administration of ablative radioiodine. Deficiencies in management of the cohort as a whole were identified, including inadequate surgery and inadequate TSH suppression in approximately one-fifth of the cases. Monitoring with thyroglobulin measurements and action when serum thyroglobulin was high were also inadequate in some cases and ablative radioiodine was not given, despite being indicated in 11.7% of the cohort. Inadequate surgery and failure to administer radioiodine were less common in those managed in a specialist clinic setting than in those managed in other clinic settings. The findings highlight the need for locally agreed protocols in managing relatively rare endocrine disorders such as thyroid cancer and argue in favour of centralization of expertise and patient management in multi-disciplinary specialist clinic settings.
Carpenter, Afton S; Sullivan, Joanne H; Deshmukh, Arati; Glisson, Scott R; Gallo, Stephen A
2015-01-01
Objective: With the use of teleconferencing for grant peer-review panels increasing, further studies are necessary to determine the efficacy of the teleconference setting compared to the traditional onsite/face-to-face setting. The objective of this analysis was to examine the effects of discussion, namely changes in application scoring premeeting and postdiscussion, in these settings. We also investigated other parameters, including the magnitude of score shifts and application discussion time in face-to-face and teleconference review settings. Design: The investigation involved a retrospective, quantitative analysis of premeeting and postdiscussion scores and discussion times for teleconference and face-to-face review panels. The analysis included 260 and 212 application score data points and 212 and 171 discussion time data points for the face-to-face and teleconference settings, respectively. Results: The effect of discussion was found to be small, on average, in both settings. However, discussion was found to be important for at least 10% of applications, regardless of setting, with these applications moving over a potential funding line in either direction (fundable to unfundable or vice versa). Small differences were uncovered relating to the effect of discussion between settings, including a decrease in the magnitude of the effect in the teleconference panels as compared to face-to-face. Discussion time (despite teleconferences having shorter discussions) was observed to have little influence on the magnitude of the effect of discussion. Additionally, panel discussion was found to often result in a poorer score (as opposed to an improvement) when compared to reviewer premeeting scores. This was true regardless of setting or assigned reviewer type (primary or secondary reviewer). Conclusions: Subtle differences were observed between settings, potentially due to reduced engagement in teleconferences.
Overall, further research is required on the psychology of decision-making, team performance and persuasion to better elucidate the group dynamics of telephonic and virtual ad-hoc peer-review panels. PMID:26351194
'Dem DEMs: Comparing Methods of Digital Elevation Model Creation
NASA Astrophysics Data System (ADS)
Rezza, C.; Phillips, C. B.; Cable, M. L.
2017-12-01
Topographic details of Europa's surface yield implications for large-scale processes that occur on the moon, including surface strength, modification, composition, and formation mechanisms for geologic features. In addition, small scale details presented from this data are imperative for future exploration of Europa's surface, such as by a potential Europa Lander mission. A comparison of different methods of Digital Elevation Model (DEM) creation and variations between them can help us quantify the relative accuracy of each model and improve our understanding of Europa's surface. In this work, we used data provided by Phillips et al. (2013, AGU Fall meeting, abs. P34A-1846) and Schenk and Nimmo (2017, in prep.) to compare DEMs that were created using Ames Stereo Pipeline (ASP), SOCET SET, and Paul Schenk's own method. We began by locating areas of the surface with multiple overlapping DEMs, and our initial comparisons were performed near the craters Manannan, Pwyll, and Cilix. For each region, we used ArcGIS to draw profile lines across matching features to determine elevation. Some of the DEMs had vertical or skewed offsets, and thus had to be corrected. The vertical corrections were applied by adding or subtracting the global minimum of the data set to create a common zero-point. The skewed data sets were corrected by rotating the plot so that it had a global slope of zero and then subtracting for a zero-point vertical offset. Once corrections were made, we plotted the three methods on one graph for each profile of each region. Upon analysis, we found relatively good feature correlation between the three methods. The smoothness of a DEM depends on both the input set of images and the stereo processing methods used. In our comparison, the DEMs produced by SOCET SET were less smoothed than those from ASP or Schenk. Height comparisons show that ASP and Schenk's model appear similar, alternating in maximum height. 
SOCET SET has more topographic variability due to its decreased smoothing, which is borne out by preliminary offset calculations. In the future, we plan to expand upon this preliminary work with more regions of Europa, continue quantifying the height differences and relative accuracy of each method, and generate more DEMs to expand our available comparison regions.
Do We Know the Actual Magnetopause Position for Typical Solar Wind Conditions?
NASA Technical Reports Server (NTRS)
Samsonov, A. A.; Gordeev, E.; Tsyganenko, N. A.; Safrankova, J.; Nemecek, Z.; Simunek, J.; Sibeck, D. G.; Toth, G.; Merkin, V. G.; Raeder, J.
2016-01-01
We compare predicted magnetopause positions at the subsolar point and four reference points in the terminator plane obtained from several empirical and numerical MHD (magnetohydrodynamic) models. Empirical models using various sets of magnetopause crossings and making different assumptions about the magnetopause shape predict significantly different magnetopause positions, with a scatter greater than 1 Earth radius (R_E) even at the subsolar point. Axisymmetric magnetopause models cannot reproduce the cusp indentations or the changes related to the dipole tilt effect, and most of them predict the magnetopause closer to the Earth than non-axisymmetric models for typical solar wind conditions and zero tilt angle. Predictions of two global non-axisymmetric models do not match each other, and the models need additional verification. MHD models often predict the magnetopause closer to the Earth than the non-axisymmetric empirical models, but the predictions of MHD simulations may need corrections for the ring current effect and decreases of the solar wind pressure that occur in the foreshock. Comparing MHD models in which the ring current magnetic field is taken into account with the empirical Lin et al. model, we find that the differences in the reference point positions predicted by these models are relatively small for B_z = 0, where B_z is the north-south component of the interplanetary magnetic field. Therefore, we assume that these predictions indicate the actual magnetopause position, but future investigations are still needed.
Servo-control for maintaining abdominal skin temperature at 36C in low birth weight infants.
Sinclair, J C
2002-01-01
Randomized trials have shown that the neonatal mortality rate of low birth-weight babies can be reduced by keeping them warm. For low birth-weight babies nursed in incubators, warm conditions may be achieved either by heating the air to a desired temperature, or by servo-controlling the baby's body temperature at a desired set-point. In low birth weight infants, to determine the effect on death and other important clinical outcomes of targeting body temperature rather than air temperature as the end-point of control of incubator heating. Standard search strategy of the Cochrane Neonatal Review Group. Searches were made of the Cochrane Controlled Trials Register (CCTR) (Cochrane Library, Issue 4, 2001) and MEDLINE, 1966 to November 2001. Randomized or quasi-randomized trials which test the effects of having the heat output of the incubator servo-controlled from body temperature compared with setting a constant incubator air temperature. Trial methodologic quality was systematically assessed. Outcome measures included death, timing of death, cause of death, and other clinical outcomes. Categorical outcomes were analyzed using relative risk and risk difference. Meta-analysis assumed a fixed effect model. Two eligible trials were found. In total, they included 283 babies and 112 deaths. Compared to setting a constant incubator air temperature of 31.8C, servo-control of abdominal skin temperature at 36C reduces the neonatal death rate among low birth weight infants: relative risk 0.72 (95% CI 0.54, 0.97); risk difference -12.7% (95% CI -1.6, -23.9). This effect is even greater among VLBW infants. During at least the first week after birth, low birth weight babies should be provided with a carefully regulated thermal environment that is near the thermoneutral point. 
For LBW babies in incubators, this can be achieved by adjusting incubator temperature to maintain an anterior abdominal skin temperature of at least 36C, using either servo-control or frequent manual adjustment of incubator air temperature.
NASA Astrophysics Data System (ADS)
Székely, B.; Kania, A.; Standovár, T.; Heilmeier, H.
2016-06-01
The horizontal variation and vertical layering of the vegetation are important properties of the canopy structure determining the habitat; the three-dimensional (3D) distribution of objects (shrub layers, understory vegetation, etc.) is related to environmental factors (e.g., illumination, visibility). It has been shown that gaps in forests and mosaic-like structures are essential to biodiversity; various methods have been introduced to quantify this property. As the distribution of gaps in the vegetation is a multi-scale phenomenon, scale-independent methods are preferred in order to capture it in its entirety; one of these is the calculation of lacunarity. We used Airborne Laser Scanning point clouds measured over a forest plantation situated in a former floodplain. The flat topographic relief ensured that tree growth is independent of topographic effects. The tree pattern in the plantation crops provided various quasi-regular and irregular patterns, as well as various ages of the stands. The point clouds were voxelized and layers of voxels were treated as images for two-dimensional input. Images computed for a certain vicinity of each reference point served as input for the computation of lacunarity curves, providing a stack of lacunarity curves for each reference point. These sets of curves have been compared to reveal spatial changes of this property. As the dynamic range of the lacunarity values is very large, the natural logarithms of the values were considered. Logarithms of lacunarity functions show canopy-related variations; we analysed these variations along transects. The spatial variation can be related to forest properties and ecology-specific aspects.
Estimating the relative utility of screening mammography.
Abbey, Craig K; Eckstein, Miguel P; Boone, John M
2013-05-01
The concept of diagnostic utility is a fundamental component of signal detection theory, going back to some of its earliest works. Attaching utility values to the various possible outcomes of a diagnostic test should, in principle, lead to meaningful approaches to evaluating and comparing such systems. However, in many areas of medical imaging, utility is not used because it is presumed to be unknown. In this work, we estimate relative utility (the utility benefit of a detection relative to that of a correct rejection) for screening mammography using its known relation to the slope of a receiver operating characteristic (ROC) curve at the optimal operating point. The approach assumes that the clinical operating point is optimal for the goal of maximizing expected utility and therefore the slope at this point implies a value of relative utility for the diagnostic task, for known disease prevalence. We examine utility estimation in the context of screening mammography using the Digital Mammographic Imaging Screening Trials (DMIST) data. We show how various conditions can influence the estimated relative utility, including characteristics of the rating scale, verification time, probability model, and scope of the ROC curve fit. Relative utility estimates range from 66 to 227. We argue for one particular set of conditions that results in a relative utility estimate of 162 (±14%). This is broadly consistent with values in screening mammography determined previously by other means. At the disease prevalence found in the DMIST study (0.59% at 365-day verification), optimal ROC slopes are near unity, suggesting that utility-based assessments of screening mammography will be similar to those found using Youden's index.
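The slope-to-utility relation the authors exploit can be sketched with the standard decision-theoretic form m_opt = (1 - p) / (p * U_r) at the utility-optimal ROC operating point (our notation; the paper's exact formulation may differ). Inverting it recovers relative utility from an observed slope and prevalence:

```python
# Relative utility U_r from the optimal ROC slope m and disease prevalence p,
# assuming the standard decision-theoretic relation m = (1 - p) / (p * U_r).

def relative_utility(slope: float, prevalence: float) -> float:
    return (1.0 - prevalence) / (prevalence * slope)

# DMIST disease prevalence at 365-day verification, from the abstract.
p = 0.0059

# A slope near unity at the clinical operating point implies a relative
# utility broadly consistent with the paper's estimate of 162.
u = relative_utility(1.0, p)
```

At p = 0.59% a unit slope gives U_r of roughly 168, in the neighborhood of the 162 (+/- 14%) estimate the abstract argues for.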
Mitochondrial flashes regulate ATP homeostasis in the heart
Wang, Xianhua; Zhang, Xing; Wu, Di; Huang, Zhanglong; Hou, Tingting; Jian, Chongshu; Yu, Peng; Lu, Fujian; Zhang, Rufeng; Sun, Tao; Li, Jinghang; Qi, Wenfeng; Wang, Yanru; Gao, Feng; Cheng, Heping
2017-01-01
The maintenance of a constant ATP level (‘set-point’) is a vital homeostatic function shared by eukaryotic cells. In particular, mammalian myocardium exquisitely safeguards its ATP set-point despite 10-fold fluctuations in cardiac workload. However, the exact mechanisms underlying this regulation of ATP homeostasis remain elusive. Here we show that mitochondrial flashes (mitoflashes), a recently discovered dynamic activity of mitochondria, play an essential role in the auto-regulation of the ATP set-point in the heart. Specifically, mitoflashes negatively regulate ATP production in isolated respiring mitochondria, and their activity waxes and wanes to counteract the ATP supply-demand imbalance caused by superfluous substrate and altered workload in cardiomyocytes. Moreover, manipulating mitoflash activity is sufficient to inversely shift the otherwise stable ATP set-point. Mechanistically, the Bcl-xL-regulated proton leakage through F1Fo-ATP synthase appears to mediate the coupling between mitoflash production and ATP set-point regulation. These findings indicate that mitoflashes constitute a digital auto-regulator for ATP homeostasis in the heart. DOI: http://dx.doi.org/10.7554/eLife.23908.001 PMID:28692422
Data Point Averaging for Computational Fluid Dynamics Data
NASA Technical Reports Server (NTRS)
Norman, Jr., David (Inventor)
2016-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
Data Point Averaging for Computational Fluid Dynamics Data
NASA Technical Reports Server (NTRS)
Norman, David, Jr. (Inventor)
2014-01-01
A system and method for generating fluid flow parameter data for use in aerodynamic heating analysis. Computational fluid dynamics data is generated for a number of points in an area on a surface to be analyzed. Sub-areas corresponding to areas of the surface for which an aerodynamic heating analysis is to be performed are identified. A computer system automatically determines a sub-set of the number of points corresponding to each of the number of sub-areas and determines a value for each of the number of sub-areas using the data for the sub-set of points corresponding to each of the number of sub-areas. The value is determined as an average of the data for the sub-set of points corresponding to each of the number of sub-areas. The resulting parameter values then may be used to perform an aerodynamic heating analysis.
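A rough sketch of the averaging step the patents describe: points on a surface are binned into sub-areas and each sub-area's parameter value is the mean over its subset of points. The toy surface and the 2 x 2 grid below are invented for illustration.

```python
import numpy as np

# Hypothetical CFD surface data: (x, y) sample locations and a flow
# parameter value at each point (names and values are illustrative only).
rng = np.random.default_rng(1)
points = rng.uniform(0.0, 4.0, size=(500, 2))
values = np.sin(points[:, 0]) + 0.1 * rng.normal(size=500)

# Sub-areas: a 2 x 2 grid of square patches covering the surface region.
edges = np.array([0.0, 2.0, 4.0])
ix = np.digitize(points[:, 0], edges) - 1
iy = np.digitize(points[:, 1], edges) - 1

# For each sub-area, select the subset of points falling inside it and
# take the mean of their values, as the claimed method describes.
sub_area_means = np.full((2, 2), np.nan)
for i in range(2):
    for j in range(2):
        mask = (ix == i) & (iy == j)
        if mask.any():
            sub_area_means[i, j] = values[mask].mean()
```

The resulting per-sub-area values would then feed the aerodynamic heating analysis in place of the raw point data.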
Closed-field capacitive liquid level sensor
Kronberg, James W.
1998-01-01
A liquid level sensor based on a closed field circuit comprises a ring oscillator using a symmetrical array of plate units that creates a displacement current. The displacement current varies as a function of the proximity of a liquid to the plate units. The ring oscillator circuit produces an output signal with a frequency inversely proportional to the presence of a liquid. A continuous liquid level sensing device and a two point sensing device are both proposed sensing arrangements. A second set of plates may be located inside of the probe housing relative to the sensing plate units. The second set of plates prevents any interference between the sensing plate units.
Closed-field capacitive liquid level sensor
Kronberg, J.W.
1998-03-03
A liquid level sensor based on a closed field circuit comprises a ring oscillator using a symmetrical array of plate units that creates a displacement current. The displacement current varies as a function of the proximity of a liquid to the plate units. The ring oscillator circuit produces an output signal with a frequency inversely proportional to the presence of a liquid. A continuous liquid level sensing device and a two point sensing device are both proposed sensing arrangements. A second set of plates may be located inside of the probe housing relative to the sensing plate units. The second set of plates prevents any interference between the sensing plate units. 12 figs.
Closed-field capacitive liquid level sensor
Kronberg, J.W.
1995-01-01
A liquid level sensor based on a closed field circuit comprises a ring oscillator using a symmetrical array of plate units that creates a displacement current. The displacement current varies as a function of the proximity of a liquid to the plate units. The ring oscillator circuit produces an output signal with a frequency inversely proportional to the presence of a liquid. A continuous liquid level sensing device and a two point sensing device are both proposed sensing arrangements. A second set of plates may be located inside of the probe housing relative to the sensing plate units. The second set of plates prevents any interference between the sensing plate units.
Bonetti, Marco; Pagano, Marcello
2005-03-15
The topic of this paper is the distribution of the distance between two points distributed independently in space. We illustrate the use of this interpoint distance distribution to describe the characteristics of a set of points within some fixed region. The properties of its sample version, and thus the inference about this function, are discussed both in the discrete and in the continuous setting. We illustrate its use in the detection of spatial clustering by application to a well-known leukaemia data set, and report on the results of a simulation experiment designed to study the power characteristics of the methods within that study region and in an artificial homogenous setting. Copyright (c) 2004 John Wiley & Sons, Ltd.
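The interpoint distance distribution described above has a direct empirical version: the ECDF of all pairwise distances among observed points. A minimal sketch on invented uniform data (a homogeneous reference pattern, not the leukaemia data set):

```python
import numpy as np

# Illustrative point pattern: 200 points uniform on the unit square.
rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 1.0, size=(200, 2))

# All pairwise distances between distinct points.
diff = pts[:, None, :] - pts[None, :, :]
d = np.sqrt((diff ** 2).sum(-1))
dist = d[np.triu_indices(len(pts), k=1)]

def ecdf(t: float) -> float:
    """Empirical interpoint distance distribution F(t) = P(D <= t)."""
    return float((dist <= t).mean())
```

F is nondecreasing from 0 to 1; spatial clustering shows up as excess mass at short distances relative to the homogeneous pattern, which is the comparison underlying the paper's detection method.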
Fast-food menu offerings vary in dietary quality, but are consistently poor
Kirkpatrick, Sharon I; Reedy, Jill; Kahle, Lisa L; Harris, Jennifer L; Ohri-Vachaspati, Punam; Krebs-Smith, Susan M
2013-01-01
Objective To evaluate five popular fast-food chains’ menus in relation to dietary guidance. Design Menus posted on chains’ websites were coded using the Food and Nutrient Database for Dietary Studies and MyPyramid Equivalents Database to enable Healthy Eating Index-2005 (HEI-2005) scores to be assigned. Dollar or value and kids’ menus and sets of items promoted as healthy or nutritious were also assessed. Setting Five popular fast-food chains in the USA. Subjects Not applicable. Results Full menus scored lower than 50 out of 100 possible points on the HEI-2005. Scores for Total Fruit, Whole Grains and Sodium were particularly dismal. Compared with full menus, scores on dollar or value menus were 3 points higher on average, whereas kids’ menus scored 10 points higher on average. Three chains marketed subsets of items as healthy or nutritious; these scored 17 points higher on average compared with the full menus. No menu or subset of menu items received a score higher than 72 out of 100 points. Conclusions The poor quality of fast-food menus is a concern in light of increasing away-from-home eating, aggressive marketing to children and minorities, and the tendency for fast-food restaurants to be located in low-income and minority areas. The addition of fruits, vegetables and legumes; replacement of refined with whole grains; and reformulation of offerings high in sodium, solid fats and added sugars are potential strategies to improve fast-food offerings. The HEI may be a useful metric for ongoing monitoring of fast-food menus. PMID:23317511
Das, Anirban; Trehan, Amita; Oberoi, Sapna; Bansal, Deepak
2017-06-01
The study aims to validate a score predicting risk of complications in pediatric patients with chemotherapy-related febrile neutropenia (FN) and evaluate the performance of previously published models for risk stratification. Children diagnosed with cancer and presenting with FN were evaluated in a prospective single-center study. A score predicting the risk of complications, previously derived in the unit, was validated on a prospective cohort. Performance of six predictive models published from geographically distinct settings was assessed on the same cohort. Complications were observed in 109 (26.3%) of 414 episodes of FN over 15 months. A risk score based on undernutrition (two points), time from last chemotherapy (<7 days = two points), presence of a nonupper respiratory focus of infection (two points), C-reactive protein (>60 mg/l = five points), and absolute neutrophil count (<100 per μl = two points) was used to stratify patients into "low risk" (score <7, n = 208) and "high risk" groups; the score was assessed using the following parameters: overall performance (Nagelkerke R² = 34.4%), calibration (calibration slope = 0.39; P = 0.25 in Hosmer-Lemeshow test), discrimination (c-statistic = 0.81), overall sensitivity (86%), negative predictive value (93%), and clinical net benefit (0.43). Six previously published rules demonstrated inferior performance in this cohort. An indigenous decision rule using five simple predefined variables was successful in identifying children at risk for complications. Prediction models derived in developed nations may not be appropriate for low-middle-income settings and need to be validated before use. © 2016 Wiley Periodicals, Inc.
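The published score translates directly into code; a sketch using only the weights and cut-offs quoted in the abstract (function and argument names are illustrative, not from the paper):

```python
def fn_risk_score(undernourished, days_since_chemo,
                  nonupper_resp_focus, crp_mg_per_l, anc_per_ul):
    """Point total from the five predictors quoted in the abstract."""
    score = 0
    if undernourished:
        score += 2
    if days_since_chemo < 7:
        score += 2
    if nonupper_resp_focus:
        score += 2
    if crp_mg_per_l > 60:
        score += 5
    if anc_per_ul < 100:
        score += 2
    return score

def is_low_risk(score):
    # the abstract defines "low risk" as a total score below 7
    return score < 7

# e.g. high CRP (5 points) plus profound neutropenia (2 points)
# already reaches the cut-off of 7
s = fn_risk_score(False, 10, False, 80, 50)
```

Such additive point scores are easy to apply at the bedside, which is presumably why the authors favored them over regression outputs.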
Discrete Fourier transforms of nonuniformly spaced data
NASA Technical Reports Server (NTRS)
Swan, P. R.
1982-01-01
Time series or spatial series of measurements taken with nonuniform spacings have failed to yield fully to analysis using the Discrete Fourier Transform (DFT). This is due to the fact that the formal DFT is the convolution of the transform of the signal with the transform of the nonuniform spacings. Two original methods are presented for deconvolving such transforms for signals containing significant noise. The first method solves a set of linear equations relating the observed data to values defined at uniform grid points, and then obtains the desired transform as the DFT of the uniform interpolates. The second method solves a set of linear equations relating the real and imaginary components of the formal DFT directly to those of the desired transform. The results of numerical experiments with noisy data are presented in order to demonstrate the capabilities and limitations of the methods.
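The first method can be sketched with NumPy, assuming periodic data and a simple linear-interpolation model relating the nonuniform samples to the uniform grid values (the interpolation choice and all names here are ours, not the paper's):

```python
import numpy as np

def dft_from_nonuniform(t, y, n_grid, period):
    """Method-1 sketch: solve a linear system relating the nonuniform
    samples to values at uniform grid points, then take the ordinary
    DFT of the uniform interpolates."""
    grid = np.linspace(0.0, period, n_grid, endpoint=False)
    dt = period / n_grid
    # linear-interpolation design matrix: y_j ~= sum_k A[j, k] * u_k
    A = np.zeros((len(t), n_grid))
    for j, tj in enumerate(t):
        k = int(tj // dt) % n_grid
        w = (tj - grid[k]) / dt
        A[j, k] = 1.0 - w
        A[j, (k + 1) % n_grid] = w          # periodic wrap-around
    u, *_ = np.linalg.lstsq(A, y, rcond=None)  # uniform interpolates
    return np.fft.fft(u)

# demo: recover the spectrum of cos(2*pi*t) from 50 scattered samples
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, 50)
F = dft_from_nonuniform(t, np.cos(2.0 * np.pi * t), n_grid=8, period=1.0)
```

With noise-free data and enough samples per grid cell, the energy concentrates in the correct frequency bin; the abstract's point is that this least-squares step effectively deconvolves the transform of the sampling pattern.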
Station coordinates, baselines, and earth rotation from Lageos laser ranging - 1976-1984
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Schultz, B. E.; Eanes, R. J.
1985-01-01
The orbit of the Lageos satellite is well suited as a reference frame for studying the rotation of the earth and the relative motion of points on the earth's crust. The satellite laser measurements can determine the location of a set of tracking stations in an appropriate terrestrial coordinate system. The motion of the earth's rotation axis relative to this system can be studied on the basis of the established tracking station locations. The present investigation is concerned with an analysis of 7.7 years of Lageos laser ranging data. In the first solution considered, the entire data span was used to adjust a single set of station positions simultaneously with orbit and earth rotation parameters. Attention is given to the accuracy of earth rotation parameters which are determined as an inherent part of the solution process.
Benefits and costs of HIV testing.
Bloom, D E; Glied, S
1991-06-28
The benefits and costs of human immunodeficiency virus (HIV) testing in employment settings are examined from two points of view: that of private employers whose profitability may be affected by their testing policies and that of public policy-makers who may affect social welfare through their design of regulations related to HIV testing. The results reveal that HIV testing is clearly not cost-beneficial for most firms, although the benefits of HIV testing may outweigh the costs for some large firms that offer generous fringe-benefit packages and that recruit workers from populations in which the prevalence of HIV infection is high. The analysis also indicates that the testing decisions of unregulated employers are not likely to yield socially optimal economic outcomes and that existing state and federal legislation related to HIV testing in employment settings has been motivated primarily by concerns over social equity.
Genome-wide differences in hepatitis C- vs alcoholism-associated hepatocellular carcinoma
Derambure, Céline; Coulouarn, Cédric; Caillot, Frédérique; Daveau, Romain; Hiron, Martine; Scotte, Michel; François, Arnaud; Duclos, Celia; Goria, Odile; Gueudin, Marie; Cavard, Catherine; Terris, Benoit; Daveau, Maryvonne; Salier, Jean-Philippe
2008-01-01
AIM: To look at a comprehensive picture of etiology-dependent gene abnormalities in hepatocellular carcinoma in Western Europe. METHODS: With a liver-oriented microarray, transcript levels were compared in nodules and cirrhosis from a training set of patients with hepatocellular carcinoma (alcoholism, 12; hepatitis C, 10) and 5 controls. Loose or tight selection of informative transcripts with an abnormal abundance was statistically valid and the tightly selected transcripts were next quantified by qRT-PCR in the nodules from our training set (12 + 10) and a test set (6 + 7). RESULTS: A selection of 475 transcripts pointed to significant gene over-representation on chromosome 8 (alcoholism) or chromosome 2 (hepatitis C) and ontology indicated a predominant inflammatory response (alcoholism) or changes in cell cycle regulation, transcription factors and interferon responsiveness (hepatitis C). A stringent selection of 23 transcripts whose differences between etiologies were significant in nodules but not in cirrhotic tissue indicated that the above dysregulations take place in tumor but not in the surrounding cirrhosis. These 23 transcripts separated our test set according to etiologies. The inflammation-associated transcripts pointed to limited alterations of free iron metabolism in alcoholic vs hepatitis C tumors. CONCLUSION: Etiology-specific abnormalities (chromosome preference; differences in transcriptomes and related functions) have been identified in hepatocellular carcinoma driven by alcoholism or hepatitis C. This may open novel avenues for differential therapies in this disease. PMID:18350606
Sensible and latent heat forced divergent circulations in the West African Monsoon System
NASA Astrophysics Data System (ADS)
Hagos, S.; Zhang, C.
2008-12-01
Field properties of divergent circulation are utilized to identify the roles of various diabatic processes in forcing moisture transport in the dynamics of the West African Monsoon and its seasonal cycle. In this analysis, the divergence field is treated as a set of point sources and is partitioned into two sub-sets corresponding to latent heat release and surface sensible heat flux at each respective point. The divergent circulation associated with each set is then calculated from Poisson's equation using Gauss-Seidel iteration. Moisture transport by each set of divergent circulation is subsequently estimated. The results show different roles of the divergent circulations forced by surface sensible and latent heating in the monsoon dynamics. Surface sensible heating drives a shallow meridional circulation, which transports moisture deep into the continent at the polar side of the monsoon rain band and thereby promotes the seasonal northward migration of monsoon precipitation during the monsoon onset season. In contrast, the circulation directly associated with latent heating is deep and the corresponding moisture convergence is within the region of precipitation. Latent heating also induces dry air advection from the north. Neither effect promotes the seasonal northward migration of precipitation. The relative contributions of the processes associated with latent and sensible heating to the net moisture convergence, and hence the seasonal evolution of monsoon precipitation, depend on the background moisture.
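The core numerical step named in the abstract, recovering a potential from a field of point sources via Poisson's equation with Gauss-Seidel iteration, can be sketched on a small grid (grid size, zero boundary condition, and iteration count are illustrative choices only):

```python
import numpy as np

def gauss_seidel_poisson(source, h=1.0, n_iter=5000):
    """Solve the Poisson equation  laplacian(chi) = source  on a
    rectangular grid with chi = 0 on the boundary, by in-place
    Gauss-Seidel sweeps."""
    chi = np.zeros_like(source, dtype=float)
    ny, nx = source.shape
    for _ in range(n_iter):
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                chi[i, j] = 0.25 * (chi[i - 1, j] + chi[i + 1, j]
                                    + chi[i, j - 1] + chi[i, j + 1]
                                    - h * h * source[i, j])
    return chi

# a single unit point source (e.g. one heating point) on a small grid
src = np.zeros((9, 9))
src[4, 4] = 1.0
chi = gauss_seidel_poisson(src)
```

The divergent wind components would then follow as the gradient of the converged potential; partitioning the source field into sensible- and latent-heating sub-sets, as the abstract describes, simply means running this solve once per sub-set.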
Predict Brain MR Image Registration via Sparse Learning of Appearance and Transformation
Wang, Qian; Kim, Minjeong; Shi, Yonghong; Wu, Guorong; Shen, Dinggang
2014-01-01
We propose a new approach to register the subject image with the template by leveraging a set of intermediate images that are pre-aligned to the template. We argue that, if points in the subject and the intermediate images share similar local appearances, they may have common correspondence in the template. In this way, we learn the sparse representation of a certain subject point to reveal several similar candidate points in the intermediate images. Each selected intermediate candidate can bridge the correspondence from the subject point to the template space, thus predicting the transformation associated with the subject point at the confidence level that relates to the learned sparse coefficient. Following this strategy, we first predict transformations at selected key points, and retain multiple predictions on each key point, instead of allowing only a single correspondence. Then, by utilizing all key points and their predictions with varying confidences, we adaptively reconstruct the dense transformation field that warps the subject to the template. We further embed the prediction-reconstruction protocol above into a multi-resolution hierarchy. Finally, we refine the estimated transformation field using an existing registration method. We apply our method to registering brain MR images and conclude that the proposed framework substantially improves registration performance. PMID:25476412
Independent calculation of monitor units for VMAT and SPORT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xin; Bush, Karl; Ding, Aiping
Purpose: Dose and monitor units (MUs) represent two important facets of a radiation therapy treatment. In current practice, verification of a treatment plan is commonly done in the dose domain, in which a phantom measurement or forward dose calculation is performed to examine the dosimetric accuracy and the MU settings of a given treatment plan. While it is desirable to verify the MU settings directly, a computational framework for obtaining the MU values from a known dose distribution has yet to be developed. This work presents a strategy to independently calculate the MUs from a given dose distribution of volumetric modulated arc therapy (VMAT) and station parameter optimized radiation therapy (SPORT). Methods: The dose at a point can be expressed as a sum of contributions from all the station points (or control points). This relationship forms the basis of the proposed MU verification technique. To proceed, the authors first obtain the matrix elements which characterize the dosimetric contribution of the involved station points by computing the doses at a series of voxels, typically on the prescription surface of the VMAT/SPORT treatment plan, with unit MU setting for all the station points. An in-house Monte Carlo (MC) software is used for the dose matrix calculation. The MUs of the station points are then derived by minimizing the least-squares difference between the doses computed by the treatment planning system (TPS) and those of the MC for the selected set of voxels on the prescription surface. The technique is applied to 16 clinical cases with a variety of energies, disease sites, and TPS dose calculation algorithms. Results: For all plans except the lung cases with large tissue density inhomogeneity, the independently computed MUs agree with those of the TPS to within 2.7% for all the station points. In the dose domain, no significant difference between the MC and Eclipse Anisotropic Analytical Algorithm (AAA) dose distributions is found in terms of isodose contours, dose profiles, gamma index, and dose volume histogram (DVH) for these cases. For the lung cases, the MC-calculated MUs differ significantly from those of the treatment plan computed using AAA. However, the discrepancies are reduced to within 3% when the TPS dose calculation algorithm is switched to a transport equation-based technique (Acuros™). Comparison in the dose domain between the MC and Eclipse AAA/Acuros calculations yields conclusions consistent with the MU calculation. Conclusions: A computational framework relating the MU and dose domains has been established. The framework not only enables direct verification of the MU values of the involved station points of a VMAT plan in the MU domain but also provides a much needed mechanism to adaptively modify the MU values of the station points in accordance with a specific change in the dose domain.
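The least-squares relationship between station-point MUs and voxel doses can be illustrated on synthetic data; the dose matrix below is a random placeholder for the Monte Carlo unit-MU doses, and no clinical data is implied:

```python
import numpy as np

# Synthetic stand-in for the dose matrix: entry (i, j) is the dose at
# voxel i per unit MU of station point j.
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(200, 10))    # 200 voxels, 10 station points
mu_true = rng.uniform(50.0, 150.0, size=10)  # "planned" MU values
d_tps = A @ mu_true                          # dose the TPS would report

# Independent MU check: least-squares fit of station-point MUs to the
# given dose distribution, mirroring the minimization in the abstract.
mu_fit, *_ = np.linalg.lstsq(A, d_tps, rcond=None)
agreement = np.max(np.abs(mu_fit - mu_true) / mu_true)
```

In this noise-free toy setting the fit recovers the MUs essentially exactly; in practice the residual reflects differences between the MC and TPS dose engines, which is what the 2.7% figure in the abstract quantifies.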
Chapple, Will
2013-10-01
In spite of the extensive research on acupuncture mechanisms, no comprehensive and systematic peer-reviewed reference list of the stratified anatomical and the neuroanatomical features of all 361 acupuncture points exists. This study creates a reference list of the neuroanatomy and the stratified anatomy for each of the 361 acupuncture points on the 14 classical channels and for 34 extra points. Each acupuncture point was individually assessed to relate the point's location to anatomical and neuroanatomical features. The design of the catalogue is intended to be useful for any style of acupuncture or Oriental medicine treatment modality. The stratified anatomy was divided into shallow, intermediate and deep insertion. A separate stratified anatomy was presented for different needle angles and directions. The following are identified for each point: additional specifications for point location, the stratified anatomy, motor innervation, cutaneous nerve and sensory innervation, dermatomes, Langer's lines, and somatotopic organization in the primary sensory and motor cortices. Acupuncture points for each muscle, dermatome and myotome are also reported. This reference list can aid clinicians, practitioners and researchers in furthering the understanding and accurate practice of acupuncture. Additional research on the anatomical variability around acupuncture points, the frequency of needle contact with an anatomical structure in a clinical setting, and conformational imaging should be done to verify this catalogue. Copyright © 2013. Published by Elsevier B.V.
ERIC Educational Resources Information Center
Christensen, Bo T.; Hartmann, Peter V. W.; Rasmussen, Thomas Hedegaard
2017-01-01
A large sample of leaders (N = 4257) was used to test the link between leader innovativeness and intelligence. The threshold theory of the link between creativity and intelligence assumes that below a certain IQ level (approximately IQ 120), there is some correlation between IQ and creative potential, but above this cutoff point, there is no…
NASA Astrophysics Data System (ADS)
Kang, Zhizhong
2013-10-01
This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from 3D point clouds and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which transformation parameters between point clouds are computed, are acquired by mapping the 2D ones onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probability of each data point using a simplified Bayes' rule, improving computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and lower computational cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm achieves high registration accuracy on all experimental datasets.
On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems
NASA Astrophysics Data System (ADS)
Junge, Oliver; Kevrekidis, Ioannis G.
2017-06-01
We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as, e.g., saddle type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and illustrate the procedure through corresponding numerical experiments.
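The variational idea, minimizing the distance between a point set and its image under the dynamics, can be demonstrated on a one-dimensional toy case. The sketch below uses a crude finite-difference gradient descent in place of the authors' optimizer, and a single-point invariant set (a fixed point of the logistic map); all names and parameter values are our own:

```python
import numpy as np

def objective(X, f):
    """Sum over points of the squared distance from f(x_i) to the
    nearest point of the set X; zero when the set maps into itself."""
    FX = f(X)
    d2 = (FX[:, None] - X[None, :]) ** 2
    return d2.min(axis=1).sum()

def minimize_invariance(f, X0, lr=0.05, n_iter=2000, eps=1e-6):
    """Toy finite-difference gradient descent on the objective, a
    stand-in for the paper's variational minimization."""
    X = X0.astype(float).copy()
    for _ in range(n_iter):
        base = objective(X, f)
        grad = np.zeros_like(X)
        for k in range(len(X)):
            Xp = X.copy()
            Xp[k] += eps
            grad[k] = (objective(Xp, f) - base) / eps
        X -= lr * grad
    return X

# 1D demo: the logistic map f(x) = 2.5 x (1 - x) has a nontrivial
# fixed point at x = 0.6 -- an invariant set consisting of one point.
f = lambda x: 2.5 * x * (1.0 - x)
X = minimize_invariance(f, np.array([0.5]))
```

Note that the descent converges to the fixed point even though it is unstable under iteration of the map itself; this is exactly the advantage of the variational formulation that the abstract emphasizes. The Lennard-Jones term mentioned in the abstract would be added to `objective` to spread multiple points evenly over a larger invariant set.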
Zhang, F; de Dear, R
2017-01-01
As one of the most common strategies for managing peak electricity demand, direct load control (DLC) of air-conditioners involves cycling the compressors on and off at predetermined intervals. In university lecture theaters, the implementation of DLC induces temperature cycles which might compromise university students' learning performance. In these experiments, university students' learning performance, represented by four cognitive skills of memory, concentration, reasoning, and planning, was closely monitored under DLC-induced temperature cycles and control conditions simulated in a climate chamber. In Experiment 1, with a cooling set point temperature of 22°C, subjects' cognitive performance was relatively stable or even slightly promoted by the mild heat intensity and short heat exposure resulting from temperature cycles; in Experiment 2, with a cooling set point of 24°C, subjects' reasoning and planning performance showed a declining trend at the higher heat intensity and longer heat exposure. Results confirm that simpler cognitive tasks are less susceptible to temperature effects than more complex tasks; the effect of thermal variations on cognitive performance follows an extended-U relationship, with performance being relatively stable across a range of temperatures. DLC appears to be feasible in university lecture theaters if DLC algorithms are implemented judiciously. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Glenn Edward; Song, Xuehang; Ye, Ming
A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates for the first time ensemble data assimilation with traditional transition probability-based geostatistics. The concept of level set is introduced to build shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response in hydraulic head field with better accuracy compared to data assimilation with no constraints on spatial continuity on facies.
Shape dependence of holographic Rényi entropy in general dimensions
Bianchi, Lorenzo; Chapman, Shira; Dong, Xi; ...
2016-11-29
We present a holographic method for computing the response of Rényi entropies in conformal field theories to small shape deformations around a flat (or spherical) entangling surface. Our strategy employs the stress tensor one-point function in a deformed hyperboloid background and relates it to the coefficient in the two-point function of the displacement operator. We obtain explicit numerical results for d = 3, …, 6 spacetime dimensions, and also evaluate analytically the limits where the Rényi index approaches 1 and 0 in general dimensions. We use our results to extend the work of 1602.08493 and disprove a set of conjectures in the literature regarding the relation between the Rényi shape dependence and the conformal weight of the twist operator. As a result, we also extend our analysis beyond leading order in derivatives in the bulk theory by studying Gauss-Bonnet gravity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berezhiani, Lasha; Khoury, Justin; Wang, Junpu, E-mail: lashaber@gmail.com, E-mail: jkhoury@sas.upenn.edu, E-mail: jwang217@jhu.edu
Single-field perturbations satisfy an infinite number of consistency relations constraining the squeezed limit of correlation functions at each order in the soft momentum. These can be understood as Ward identities for an infinite set of residual global symmetries, or equivalently as Slavnov-Taylor identities for spatial diffeomorphisms. In this paper, we perform a number of novel, non-trivial checks of the identities in the context of single field inflationary models with arbitrary sound speed. We focus for concreteness on identities involving 3-point functions with a soft external mode, and consider all possible scalar and tensor combinations for the hard-momentum modes. In all these cases, we check the consistency relations up to and including cubic order in the soft momentum. For this purpose, we compute for the first time the 3-point functions involving 2 scalars and 1 tensor, as well as 2 tensors and 1 scalar, for arbitrary sound speed.
From spinning conformal blocks to matrix Calogero-Sutherland models
NASA Astrophysics Data System (ADS)
Schomerus, Volker; Sobko, Evgeny
2018-04-01
In this paper we develop further the relation between conformal four-point blocks involving external spinning fields and Calogero-Sutherland quantum mechanics with matrix-valued potentials. To this end, the analysis of [1] is extended to arbitrary dimensions and to the case of boundary two-point functions. In particular, we construct the potential for any set of external tensor fields. Some of the resulting Schrödinger equations are mapped explicitly to the known Casimir equations for 4-dimensional seed conformal blocks. Our approach furnishes solutions of Casimir equations for external fields of arbitrary spin and dimension in terms of functions on the conformal group. This allows us to reinterpret standard operations on conformal blocks in terms of group-theoretic objects. In particular, we shall discuss the relation between the construction of spinning blocks in any dimension through differential operators acting on seed blocks and the action of left/right invariant vector fields on the conformal group.
Vibrational treatment of the formic acid double minimum case in valence coordinates
NASA Astrophysics Data System (ADS)
Richter, Falk; Carbonnière, P.
2018-02-01
One single full dimensional valence coordinate HCOOH ground state potential energy surface, accurate for both cis and trans conformers for all levels up to 6000 cm-1 relative to the trans zero point energy, has been generated at the CCSD(T)-F12a/aug-cc-pVTZ level. The fundamentals and a set of eigenfunctions complete up to about 3120 and 2660 cm-1 for trans- and cis-HCOOH, respectively, have been calculated and assigned using the improved relaxation method of the Heidelberg multi-configuration time-dependent Hartree package and an exact expression for the kinetic energy in valence coordinates generated by the TANA program. The calculated trans fundamental transition frequencies agree with experiment to within 5 cm-1. A few reassignments are suggested. Our results rule out any cis-trans delocalization effects for vibrational eigenfunctions up to 3640 cm-1 relative to the trans zero point energy.
Industrial pollution and the management of river water quality: a model of Kelani River, Sri Lanka.
Gunawardena, Asha; Wijeratne, E M S; White, Ben; Hailu, Atakelty; Pandit, Ram
2017-08-19
Water quality of the Kelani River has become a critical issue in Sri Lanka due to the high cost of maintaining drinking water standards and the market and non-market costs of deteriorating river ecosystem services. By integrating a catchment model with a river model of water quality, we developed a method to estimate the effect of pollution sources on ambient water quality. Using integrated model simulations, we estimate (1) the relative contribution from point (industrial and domestic) and non-point sources (river catchment) to river water quality and (2) pollutant transfer coefficients for zones along the lower section of the river. Transfer coefficients provide the basis for policy analyses in relation to the location of new industries and the setting of priorities for industrial pollution control. They also offer valuable information to design socially optimal economic policy to manage industrialized river catchments.
Enhancing Biomedical Text Summarization Using Semantic Relation Extraction
Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao
2011-01-01
Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization. PMID:21887336
Tcof1-Related Molecular Networks in Treacher Collins Syndrome.
Dai, Jiewen; Si, Jiawen; Wang, Minjiao; Huang, Li; Fang, Bing; Shi, Jun; Wang, Xudong; Shen, Guofang
2016-09-01
Treacher Collins syndrome (TCS) is a rare, autosomal-dominant disorder characterized by craniofacial deformities, and is primarily caused by mutations in the Tcof1 gene. This article aimed to perform a comprehensive literature review and systematic bioinformatic analysis of Tcof1-related molecular networks in TCS. First, the up- and down-regulated genes in Tcof1 heterozygous haploinsufficient mutant mice embryos and in Tcof1 knockdown and Tcof1 over-expressed neuroblastoma N1E-115 cells were obtained from the Gene Expression Omnibus database. The GeneDecks database was used to calculate the 500 genes most closely related to Tcof1. Then, the relationships between 4 gene sets (a predicted set and sets comparing the wildtype with the 3 Gene Expression Omnibus datasets) were analyzed using the DAVID, GeneMANIA and STRING databases. The analysis results showed that the Tcof1-related genes were enriched in various biological processes, including cell proliferation, apoptosis, cell cycle, differentiation, and migration. They were also enriched in several signaling pathways, such as the ribosome, p53, cell cycle, and WNT signaling pathways. Additionally, these genes clearly had direct or indirect interactions with Tcof1 and with each other. Findings from the literature review and bioinformatic analysis imply that special attention should be given to these pathways, as they may offer target points for TCS therapies.
Quartz Microbalance Study of 400-angstrom Thick Films near the lambda Point
NASA Technical Reports Server (NTRS)
Chan, Moses H. W.
2003-01-01
In a recent measurement we observed the thinning of an adsorbed helium film induced by the confinement of critical fluctuations a few millikelvin below the lambda point. A capacitor set-up was used to measure this Casimir effect. In this poster we will present our measurement of an adsorbed helium film of 400 angstroms near the lambda point with a quartz microbalance. For films this thick, we must take into account the non-linear dynamics of the shear waves in the fluid. In spite of the added complications, we were able to confirm the thinning of the film due to the Casimir effect and the onset of the superfluid transition. In addition, we observe a sharp anomaly at the bulk lambda point, most likely related to critical dissipation of the first sound. This work is carried out in collaboration with Rafael Garcia, Stephen Jordon and John Lazzaretti. This work is funded by NASA's Office of Biological and Physical Research under grant.
CMOS Cell Sensors for Point-of-Care Diagnostics
Adiguzel, Yekbun; Kulah, Haluk
2012-01-01
The burden of health-care-related services in a global era, with a continuously increasing population and inefficient dissipation of resources, requires effective solutions. From this perspective, point-of-care diagnostics is an in-demand field in clinics. It is also necessary both for prompt diagnosis and for providing health services evenly throughout the population, including rural districts. These requirements can only be fulfilled by technologies whose productivity has already been proven, such as complementary metal-oxide semiconductors (CMOS). CMOS-based products can enable clinical tests in a fast, simple, safe, and reliable manner, with improved sensitivities. Portability due to diminished sensor dimensions and compactness of the test set-ups, along with low sample and power consumption, is another vital feature. CMOS-based sensors for cell studies have the potential to become essential counterparts of point-of-care diagnostics technologies. Hence, this review surveys the sensors fabricated with CMOS technology for point-of-care diagnostic studies, with a focus on CMOS image sensors and capacitance sensors for cell studies. PMID:23112587
Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators
NASA Astrophysics Data System (ADS)
McKenna, S. A.; Wahi, A. K.
2003-12-01
Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimates of both the magnitude and the orientation increases linearly with the increase in measurement error, in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0.
These limitations reduce the set of all possible well combinations by 98 percent and show that size alone, as defined by triangle area, is not a valid discriminator of whether or not the estimator provides accurate estimates of the gradient magnitude and orientation. This research was funded by WIPP programs administered by the U.S. Department of Energy. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
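The core of a three-point estimator can be sketched briefly: fit a plane through three (x, y, head) measurements and read the gradient off its coefficients. This is a minimal illustration of the estimator itself, not of the Monte Carlo error analysis described above; the coordinates and heads below are hypothetical.

```python
import numpy as np

def three_point_gradient(points):
    """Estimate the hydraulic gradient from three (x, y, head) measurements.

    Fits the plane h = a*x + b*y + c through the three points; the
    gradient vector of the fitted head surface is (a, b).
    Returns (magnitude, azimuth in degrees)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(3)])
    a, b, _ = np.linalg.solve(A, pts[:, 2])
    magnitude = np.hypot(a, b)
    azimuth = np.degrees(np.arctan2(b, a))
    return magnitude, azimuth

# Head decreases by 0.01 per metre in +x; the fitted slope is (-0.01, 0)
mag, az = three_point_gradient([(0, 0, 10.0), (100, 0, 9.0), (0, 100, 10.0)])
```

With real well data, the three rows of `A` would come from surveyed well coordinates; a degenerate (collinear) triangle makes the system singular, which is one reason estimator shape matters.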
Closing the chasm between research and practice: evidence of and for change.
Green, Lawrence W
2014-04-01
The usual remedy suggested for bridging the science-to-practice gap is to improve the efficiency of disseminating the evidence-based practices to practitioners. This reflection on the gap takes the position that it is the relevance and fit of the evidence with the majority of practices that limit its applicability and application in health promotion and related behavioural, community and population-level interventions where variations in context, values and norms make uniform interventions inappropriate. To make the evidence more relevant and actionable to practice settings and populations will require reforms at many points in the research-to-practice pipeline. These points in the pipeline are described and remedies for them suggested.
Temperature calibration of cryoscopic solutions used in the milk industry by adiabatic calorimetry
NASA Astrophysics Data System (ADS)
Méndez-Lango, E.; Lira-Cortes, L.; Quiñones-Ibarra, R.
2013-09-01
One method to detect extraneous water in milk is cryoscopy, which measures the freezing point of milk. For the calibration of a cryoscope there is a set of standardized solutions with known freezing-point values. These values are related to the solute concentration and are based on data that is almost a century old; no more recent results were found. We also found that the reference solutions are not certified in temperature: they have no traceability to the temperature unit or standards. We prepared four solutions and measured them on a cryoscope and on an adiabatic calorimeter. The results obtained with one technique do not coincide with those obtained with the other.
Threshold-adaptive canny operator based on cross-zero points
NASA Astrophysics Data System (ADS)
Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu
2018-03-01
Canny edge detection[1] is a technique to extract useful structural information from different vision objects and dramatically reduce the amount of data to be processed. It has been widely applied in various computer vision systems. Two thresholds have to be set before edges are segregated from the background. Usually, two static values are chosen as the thresholds based on the experience of developers[2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantages for stable edge detection under changing illumination.
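The paper's interpolation over cross-zero points cannot be reproduced from the abstract alone, but the general idea of deriving the two Canny thresholds from image statistics rather than fixed constants can be sketched with a simple percentile heuristic (the percentile values here are arbitrary illustrative choices, not the paper's method):

```python
import numpy as np

def adaptive_canny_thresholds(image, low_pct=70, high_pct=90):
    """Derive Canny thresholds from the gradient-magnitude distribution.

    A percentile-based heuristic shown only to illustrate data-driven
    thresholding; the paper instead deduces the thresholds from an
    interpolation function over cross-zero points."""
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)          # finite-difference image gradient
    mag = np.hypot(gx, gy)             # gradient magnitude
    nonzero = mag[mag > 0]             # ignore flat regions
    low = np.percentile(nonzero, low_pct)
    high = np.percentile(nonzero, high_pct)
    return low, high

# Synthetic test image: a bright square on a dark background
img = np.zeros((64, 64))
img[16:48, 16:48] = 200.0
low, high = adaptive_canny_thresholds(img)
```

The returned pair would then be passed to a Canny implementation (e.g., OpenCV's `cv2.Canny`) in place of hand-tuned constants.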
Effect Size as the Essential Statistic in Developing Methods for mTBI Diagnosis.
Gibson, Douglas Brandt
2015-01-01
The descriptive statistic known as "effect size" measures the distinguishability of two sets of data. Distinguishability is at the core of diagnosis. This article is intended to point out the importance of effect size in the development of effective diagnostics for mild traumatic brain injury, and the applicability of the effect size statistic in comparing diagnostic efficiency across the main proposed TBI diagnostic methods: psychological, physiological, biochemical, and radiologic. Comparing diagnostic approaches is difficult because different researchers in different fields have different approaches to measuring efficacy. Converting diverse measures to effect sizes, as is done in meta-analysis, is a relatively easy way to make studies comparable.
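As a concrete illustration, Cohen's d, the most common effect-size statistic, is the difference between two group means divided by their pooled standard deviation. The score sets below are hypothetical, not data from any diagnostic study:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: standardized difference between two sample means,
    using the pooled (Bessel-corrected) standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Two hypothetical score sets, e.g. patients vs. controls on some measure
d = cohens_d([95, 98, 102, 100, 105], [110, 108, 115, 112, 114])
```

Because d is unitless, values from psychological, physiological, biochemical, and radiologic studies can be placed on the same scale, which is exactly the comparison the article advocates.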
On E-discretization of tori of compact simple Lie groups. II
NASA Astrophysics Data System (ADS)
Hrivnák, Jiří; Juránek, Michal
2017-10-01
Ten types of discrete Fourier transforms of Weyl orbit functions are developed. Generalizing one-dimensional cosine, sine, and exponential, each type of the Weyl orbit function represents an exponential symmetrized with respect to a subgroup of the Weyl group. Fundamental domains of even affine and dual even affine Weyl groups, governing the argument and label symmetries of the even orbit functions, are determined. The discrete orthogonality relations are formulated on finite sets of points from the refinements of the dual weight lattices. Explicit counting formulas for the number of points of the discrete transforms are deduced. Real-valued Hartley orbit functions are introduced, and all ten types of the corresponding discrete Hartley transforms are detailed.
Wagner, Michael M.; Levander, John D.; Brown, Shawn; Hogan, William R.; Millett, Nicholas; Hanna, Josh
2013-01-01
This paper describes the Apollo Web Services and Apollo-SV, its related ontology. The Apollo Web Services give an end-user application a single point of access to multiple epidemic simulators. An end user can specify an analytic problem—which we define as a configuration and a query of results—exactly once and submit it to multiple epidemic simulators. The end user represents the analytic problem using a standard syntax and vocabulary, not the native languages of the simulators. We have demonstrated the feasibility of this design by implementing a set of Apollo services that provide access to two epidemic simulators and two visualizer services. PMID:24551417
A minimization method on the basis of embedding the feasible set and the epigraph
NASA Astrophysics Data System (ADS)
Zabotin, I. Ya; Shulgina, O. N.; Yarullin, R. S.
2016-11-01
We propose a conditional minimization method for convex nonsmooth functions that belongs to the class of cutting-plane methods. When constructing iteration points, the feasible set and the epigraph of the objective function are approximated by polyhedral sets, so the auxiliary problems of constructing iteration points are linear programming problems. During the optimization process there is some opportunity to update the sets which approximate the epigraph. These updates are performed by periodically dropping the cutting planes which form the embedding sets. Convergence of the proposed method is proved and some realizations of the method are discussed.
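A minimal sketch of the cutting-plane idea, in one dimension, is Kelley's classical method: the epigraph is approximated from below by accumulated tangent cuts, and each iterate solves a small linear program in (x, t). This is only the textbook scheme the paper builds on; the paper's own algorithm additionally embeds the feasible set and periodically drops old cuts.

```python
from scipy.optimize import linprog

def kelley_minimize(f, grad, lo, hi, x0, iters=30):
    """Kelley's cutting-plane method for a 1-D convex function on [lo, hi].

    Each cut t >= f(xk) + g(xk)*(x - xk) is a supporting line of the
    epigraph; the next iterate minimizes t over the polyhedral model."""
    cuts = []
    x = x0
    for _ in range(iters):
        fx, g = f(x), grad(x)
        cuts.append((g, fx - g * x))            # store slope and intercept
        A = [[gi, -1.0] for gi, _ in cuts]      # g*x - t <= -intercept
        b = [-bi for _, bi in cuts]
        res = linprog([0.0, 1.0], A_ub=A, b_ub=b,
                      bounds=[(lo, hi), (None, None)])  # minimize t
        x = float(res.x[0])
    return x

f = lambda x: (x - 1.0) ** 2        # smooth convex test function
grad = lambda x: 2.0 * (x - 1.0)    # its (sub)gradient
x_star = kelley_minimize(f, grad, -2.0, 2.0, x0=-2.0)
```

For a nonsmooth objective, `grad` would return any subgradient; the LP structure is unchanged, which is the practical appeal the abstract points out.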
NASA Astrophysics Data System (ADS)
Marreiros, Filipe M. M.; Wang, Chunliang; Rossitti, Sandro; Smedby, Örjan
2016-03-01
In this study we present a non-rigid point set registration method for 3D curves (composed of sets of 3D points). The method was evaluated in the task of registering 3D superficial vessels of the brain, where it was used to match vessel centerline points. It consists of a combination of Coherent Point Drift (CPD) and Thin-Plate Spline (TPS) semilandmarks. The CPD is used to perform the initial matching of centerline 3D points, while the semilandmark method iteratively relaxes/slides the points. For the evaluation, a Magnetic Resonance Angiography (MRA) dataset was used. Deformations were applied to the extracted vessel centerlines to simulate brain bulging and sinking, using a TPS deformation in which a few control points were manipulated to obtain the desired transformation (T1). Once the correspondences are known, the corresponding points are used to define a new TPS deformation (T2). The errors are measured in the deformed space, by transforming the original points using T1 and T2 and measuring the distance between them. To simulate cases where the deformed vessel data is incomplete, parts of the reference vessels were cut and then deformed. Furthermore, anisotropic normally distributed noise was added. The results show that the error estimates (root mean square error and mean error) are below 1 mm, even in the presence of noise and incomplete data.
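The error measurement described above can be sketched as follows; the two transforms here are toy translations standing in for the TPS warps T1 and T2:

```python
import numpy as np

def registration_error(points, t_true, t_est):
    """Evaluation scheme from the text: map the original points with the
    ground-truth transform T1 and the recovered transform T2, then
    measure the distance between the two images of each point.
    Returns (root mean square error, mean error)."""
    p1 = t_true(points)
    p2 = t_est(points)
    d = np.linalg.norm(p1 - p2, axis=1)
    return np.sqrt(np.mean(d ** 2)), np.mean(d)

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(100, 3))               # synthetic centerline points
t1 = lambda p: p + np.array([1.0, 0.5, 0.0])          # "true" deformation stand-in
t2 = lambda p: p + np.array([1.0, 0.5, 0.001])        # slightly-off recovered one
rmse, mean_err = registration_error(pts, t1, t2)
```

Measuring in the deformed space, as done here, penalizes the recovered transform everywhere along the curve rather than only at matched landmarks.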
Post-Structural Methodology at the Quilting Point: Intercultural Encounters.
Gillett, Grant
Lacan's quilting point connects a network of signifiers with the lived world as a place of voices, memory, and adaptation "seen in" the mirror of language. Crossing cultures can obscure the ways we make sense of the world. Some planes of signification, in aiming to be universal in their knowledge (such as the natural sciences), try to track objects and events independent of our thoughts about them and the ways that signifiers may slide past each other. However, cross-structural comparison and the analysis of cross-cultural encounters cannot treat their objects of interest that way. Thus we need a theory and methodology that effectively connects the multilayered discourses of subjectivities from diverse cultures and allows triangulation between them in relation to points of shared experience. At such points we need a critical attitude to our own framework and an openness to the uneasy reflective equilibrium that uncovers assumptions and modes of thinking that will hamper us. Quilting points are points where different discourses converge on a single event or set of events so as to mark "vertical" connections, allowing tentative alignments between ways of meaning so that we can begin to build real cross-cultural understanding.
Using Laser Scanners to Augment the Systematic Error Pointing Model
NASA Astrophysics Data System (ADS)
Wernicke, D. R.
2016-08-01
The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.
Constrained tracking control for nonlinear systems.
Khani, Fatemeh; Haeri, Mohammad
2017-09-01
This paper proposes a tracking control strategy for nonlinear systems without needing prior knowledge of the reference trajectory. The proposed method consists of a set of local controllers with appropriate overlaps in their stability regions and an on-line switching strategy which implements these controllers and uses some augmented intermediate controllers to ensure steering the system states to the desired set points without needing to redesign the controller for each value of set-point change. The proposed approach provides smooth transient responses despite switching among the local controllers. It should be mentioned that the stability regions of the proposed controllers can be estimated off-line for a range of set-point changes. The efficiency of the proposed algorithm is illustrated via two example simulations. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Progress in the development of paper-based diagnostics for low-resource point-of-care settings
Byrnes, Samantha; Thiessen, Gregory; Fu, Elain
2014-01-01
This Review focuses on recent work in the field of paper microfluidics that specifically addresses the goal of translating the multistep processes that are characteristic of gold-standard laboratory tests to low-resource point-of-care settings. A major challenge is to implement multistep processes with the robust fluid control required to achieve the necessary sensitivity and specificity of a given application in a user-friendly package that minimizes equipment. We review key work in the areas of fluidic controls for automation in paper-based devices, readout methods that minimize dedicated equipment, and power and heating methods that are compatible with low-resource point-of-care settings. We also highlight a focused set of recent applications and discuss future challenges. PMID:24256361
NASA Astrophysics Data System (ADS)
Spackman, Peter R.; Karton, Amir
2015-05-01
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/Lα two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol-1.
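For reference, the A + B/Lα formula can be inverted in closed form given energies at two cardinal numbers L. A small sketch with synthetic numbers (the "true" CBS value is built in, so the extrapolation should recover it exactly):

```python
def cbs_two_point(e_small, e_large, l_small, l_large, alpha=3.0):
    """Two-point basis-set extrapolation for E(L) = E_CBS + B / L**alpha.

    Solving the pair of equations at the two cardinal numbers gives
    E_CBS = (E2*L2**a - E1*L1**a) / (L2**a - L1**a).
    alpha = 3 is the conventional global exponent; the system-dependent
    scheme in the text instead fits alpha per system from MP2 energies."""
    p_small, p_large = l_small ** alpha, l_large ** alpha
    return (e_large * p_large - e_small * p_small) / (p_large - p_small)

# Synthetic correlation energies generated from E_CBS = -1.000, B = 0.5
e_dz = -1.000 + 0.5 / 2 ** 3    # L = 2 (DZ)
e_tz = -1.000 + 0.5 / 3 ** 3    # L = 3 (TZ)
e_cbs = cbs_two_point(e_dz, e_tz, 2, 3)
```

With real CCSD energies the recovered limit is only as good as the 1/L^α model, which is precisely the system dependence the study quantifies.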
Remote temperature-set-point controller
Burke, W.F.; Winiecki, A.L.
1984-10-17
An instrument is described for carrying out mechanical strain tests on metallic samples with the addition of means for varying the temperature with strain. The instrument includes opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and set point for varying the temperature of the sample linearly with strain during the tests.
Remote temperature-set-point controller
Burke, William F.; Winiecki, Alan L.
1986-01-01
An instrument for carrying out mechanical strain tests on metallic samples with the addition of an electrical system for varying the temperature with strain, the instrument including opposing arms and associated equipment for holding a sample and varying the mechanical strain on the sample through a plurality of cycles of increasing and decreasing strain within predetermined limits, circuitry for producing an output signal representative of the strain during the tests, apparatus including a set point and a coil about the sample for providing a controlled temperature in the sample, and circuitry interconnected between the strain output signal and set point for varying the temperature of the sample linearly with strain during the tests.
Mind the Gap: The Prospects of Missing Data.
McConnell, Meghan; Sherbino, Jonathan; Chan, Teresa M
2016-12-01
The increasing use of workplace-based assessments (WBAs) in competency-based medical education has led to large data sets that assess resident performance longitudinally. With large data sets, problems that arise from missing data are increasingly likely. The purpose of this study is to examine (1) whether data are missing at random across various WBAs, and (2) the relationship between resident performance and the proportion of missing data. During 2012-2013, a total of 844 WBAs of CanMEDS Roles were completed for 9 second-year emergency medicine residents. To identify whether missing data were randomly distributed across various WBAs, the total number of missing data points was calculated for each Role. To examine whether the amount of missing data was related to resident performance, 5 faculty members rank-ordered the residents based on performance. A median rank score was calculated for each resident and was correlated with the proportion of missing data. More data were missing for Health Advocate and Professional WBAs relative to other competencies (P < .001). Furthermore, resident rankings were not related to the proportion of missing data points (r = 0.29, P > .05). The results of the present study illustrate that some CanMEDS Roles are less likely to be assessed than others. At the same time, the amount of missing data did not correlate with resident performance, suggesting lower-performing residents are no more likely to have missing data than their higher-performing peers. This article discusses several approaches to dealing with missing data.
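The rank-versus-missingness correlation used in the study can be computed with a plain Spearman coefficient; the ranks and missing-data proportions below are hypothetical, not the study's data:

```python
from statistics import mean

def spearman_rho(xs, ys):
    """Spearman rank correlation (assumes no ties): Pearson correlation
    computed on the ranks of the two sequences."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical data: median performance rank vs. share of missing WBAs
rank = [1, 2, 3, 4, 5, 6, 7, 8, 9]
missing = [0.10, 0.30, 0.05, 0.20, 0.15, 0.25, 0.12, 0.08, 0.22]
rho = spearman_rho(rank, missing)
```

A rho near zero, as in the study's r = 0.29 (P > .05), supports the conclusion that missingness is unrelated to performance.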
Preconditioning 2D Integer Data for Fast Convex Hull Computations.
Cadenas, José Oswaldo; Megson, Graham M; Luengo Hendriks, Cris L
2016-01-01
In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speed up gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved.
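One simple preconditioning filter with the same flavor, though not the paper's algorithm, keeps only the extreme y values per integer x-column: every hull vertex is extreme in y within its own column (an interior column point is a convex combination of the column's extremes), so the hull is preserved while interior points are discarded. A monotone-chain hull is included to check the claim:

```python
def column_filter(points):
    """Reduce integer points before a hull computation: for each x, keep
    only the lowest and highest y.  All hull vertices survive.  This
    mirrors the preconditioning idea above, though the paper's own O(n)
    procedure additionally yields a simple polygonal chain."""
    lo, hi = {}, {}
    for x, y in points:
        if x not in lo or y < lo[x]:
            lo[x] = y
        if x not in hi or y > hi[x]:
            hi[x] = y
    keep = {(x, lo[x]) for x in lo} | {(x, hi[x]) for x in hi}
    return sorted(keep)

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # pop while the turn h[-2] -> h[-1] -> p is not strictly left
            while len(h) >= 2 and (
                (h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0]) <= 0):
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

pts = [(x, y) for x in range(20) for y in range(20)]  # dense 20x20 grid
reduced = column_filter(pts)
```

On this grid the filter keeps 40 of 400 points, and the hull computed from the reduced set matches the hull of the full set.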
21 CFR Appendix A to Part 201 - Examples of Graphic Enhancements Used by FDA
Code of Federal Regulations, 2010 CFR
2010-04-01
… (e.g., “Ask a doctor or pharmacist before use if you are”) are set in 6 point Helvetica Bold, left justified. …
21 CFR Appendix A to Part 201 - Examples of Graphic Enhancements Used by FDA
Code of Federal Regulations, 2013 CFR
2013-04-01
… (e.g., “Ask a doctor or pharmacist before use if you are”) are set in 6 point Helvetica Bold, left justified. …
21 CFR Appendix A to Part 201 - Examples of Graphic Enhancements Used by FDA
Code of Federal Regulations, 2011 CFR
2011-04-01
… (e.g., “Ask a doctor or pharmacist before use if you are”) are set in 6 point Helvetica Bold, left justified. …
21 CFR Appendix A to Part 201 - Examples of Graphic Enhancements Used by FDA
Code of Federal Regulations, 2012 CFR
2012-04-01
… (e.g., “Ask a doctor or pharmacist before use if you are”) are set in 6 point Helvetica Bold, left justified. …
21 CFR Appendix A to Part 201 - Examples of Graphic Enhancements Used by FDA
Code of Federal Regulations, 2014 CFR
2014-04-01
… (e.g., “Ask a doctor or pharmacist before use if you are”) are set in 6 point Helvetica Bold, left justified. …
Public Data Set: Control and Automation of the Pegasus Multi-point Thomson Scattering System
Bodner, Grant M. [University of Wisconsin-Madison] (ORCID:0000000324979172); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000000231609746); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000000294386762); Reusch, Joshua A. [University of Wisconsin-Madison] (ORCID:0000000284249422); Rodriguez Sanchez, Cuauhtemoc [University of Wisconsin-Madison] (ORCID:0000000334712586); Schlossberg, David J. [University of Wisconsin-Madison] (ORCID:0000000287139448)
2016-08-12
This public data set contains openly-documented, machine readable digital research data corresponding to figures published in G.M. Bodner et al., 'Control and Automation of the Pegasus Multi-point Thomson Scattering System,' Rev. Sci. Instrum. 87, 11E523 (2016).
Robust group-wise rigid registration of point sets using t-mixture model
NASA Astrophysics Data System (ADS)
Ravikumar, Nishant; Gooya, Ali; Frangi, Alejandro F.; Taylor, Zeike A.
2016-03-01
We present a probabilistic framework for robust, group-wise rigid alignment of point sets using a mixture of Student's t-distributions, especially when the point sets are of varying lengths, are corrupted by an unknown degree of outliers, or contain missing data. Medical images (in particular magnetic resonance (MR) images), their segmentations, and consequently the point sets generated from these are highly susceptible to corruption by outliers. This poses a problem for robust correspondence estimation and accurate alignment of shapes, necessary for training statistical shape models (SSMs). To address these issues, this study proposes to use a t-mixture model (TMM) to approximate the underlying joint probability density of a group of similar shapes and align them to a common reference frame. The heavy-tailed nature of t-distributions provides a more robust registration framework in comparison to state-of-the-art algorithms. A significant reduction in alignment errors is achieved in the presence of outliers using the proposed TMM-based group-wise rigid registration method, in comparison to its Gaussian mixture model (GMM) counterparts. The proposed TMM framework is compared with a group-wise variant of the well-known Coherent Point Drift (CPD) algorithm and two other group-wise methods using GMMs, using both synthetic and real data sets. Rigid alignment errors for groups of shapes are quantified using the Hausdorff distance (HD) and quadratic surface distance (QSD) metrics.
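The Hausdorff distance used above to quantify alignment error has a compact definition: the largest of all nearest-neighbour distances between the two sets, taken in both directions. A small sketch on toy 2D point sets:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets: the worst
    nearest-neighbour distance, taken in both directions."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    # pairwise distance matrix, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(0.1, 0), (1.1, 0), (1.1, 1), (0.1, 1)]
h = hausdorff(square, shifted)
```

For large sets, a KD-tree nearest-neighbour query replaces the dense distance matrix, but the definition is identical.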
PIV study of the wake of a model wind turbine transitioning between operating set points
NASA Astrophysics Data System (ADS)
Houck, Dan; Cowen, Edwin (Todd)
2016-11-01
Wind turbines are ideally operated at their most efficient tip speed ratio for a given wind speed. There is increasing interest, however, in operating turbines at other set points to increase the overall power production of a wind farm. Specifically, Goit and Meyers (2015) used LES to examine a wind farm optimized by unsteady operation of its turbines. In this study, the wake of a model wind turbine is measured in a water channel using PIV. We measure the wake response to a change in operational set point of the model turbine, e.g., from low to high tip speed ratio or vice versa, to examine how it might influence a downwind turbine. A modified torque transducer after Kang et al. (2010) is used to calibrate in situ voltage measurements of the model turbine's generator operating across a resistance to the torque on the generator. Changes in operational set point are made by changing the resistance or the flow speed, which change the rotation rate measured by an encoder. Single camera PIV on vertical planes reveals statistics of the wake at various distances downstream as the turbine transitions from one set point to another. From these measurements, we infer how the unsteady operation of a turbine may affect the performance of a downwind turbine as its incoming flow. This work was supported by the National Science Foundation and the Atkinson Center for a Sustainable Future.
Text vectorization based on character recognition and character stroke modeling
NASA Astrophysics Data System (ADS)
Fan, Zhigang; Zhou, Bingfeng; Tse, Francis; Mu, Yadong; He, Tao
2014-03-01
In this paper, a text vectorization method is proposed using OCR (Optical Character Recognition) and character stroke modeling. This is based on the observation that for a particular character, its font glyphs may have different shapes, but often share the same stroke structures. Like many other methods, the proposed algorithm contains two procedures, dominant point determination and data fitting. The first one partitions the outlines into segments and the second one fits a curve to each segment. In the proposed method, the dominant points are classified as "major" (specifying stroke structures) and "minor" (specifying serif shapes). A set of rules (parameters) is determined offline, specifying for each character the number of major and minor dominant points and, for each dominant point, the detection and fitting parameters (projection directions, boundary conditions and smoothness). For minor points, multiple sets of parameters could be used for different fonts. During operation, OCR is performed and the parameters associated with the recognized character are selected. Both major and minor dominant points are detected via a maximization process specified by the parameter set. For minor points, an additional step could be performed to test competing hypotheses and detect degenerate cases.
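The "dominant points via maximization" idea can be sketched with a generic corner-strength measure: score each outline vertex by its turning angle and keep the k strongest. This is a toy stand-in for the paper's character-specific detection parameters:

```python
import math

def turn_angle(p_prev, p, p_next):
    """Exterior turning angle at p; large values indicate corners."""
    a1 = math.atan2(p[1] - p_prev[1], p[0] - p_prev[0])
    a2 = math.atan2(p_next[1] - p[1], p_next[0] - p[0])
    d = abs(a2 - a1)
    return min(d, 2 * math.pi - d)

def dominant_points(outline, k):
    """Pick the k outline vertices with maximal turning angle."""
    n = len(outline)
    scored = [(turn_angle(outline[i - 1], outline[i], outline[(i + 1) % n]), i)
              for i in range(n)]
    return sorted(i for _, i in sorted(scored, reverse=True)[:k])

# Square outline with edge midpoints: the 4 true corners should win.
sq = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
print(dominant_points(sq, 4))  # → [0, 2, 4, 6]
```

A real vectorizer would replace the fixed k and the turning-angle score with the per-character rules the abstract describes.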
Neural modeling and functional neuroimaging.
Horwitz, B; Sporns, O
1994-01-01
Two research areas that so far have had little interaction with one another are functional neuroimaging and computational neuroscience. The application of computational models and techniques to the inherently rich data sets generated by "standard" neurophysiological methods has proven useful for interpreting these data sets and for providing predictions and hypotheses for further experiments. We suggest that both theory- and data-driven computational modeling of neuronal systems can help to interpret data generated by functional neuroimaging methods, especially those used with human subjects. In this article, we point out four sets of questions, addressable by computational neuroscientists whose answere would be of value and interest to those who perform functional neuroimaging. The first set consist of determining the neurobiological substrate of the signals measured by functional neuroimaging. The second set concerns developing systems-level models of functional neuroimaging data. The third set of questions involves integrating functional neuroimaging data across modalities, with a particular emphasis on relating electromagnetic with hemodynamic data. The last set asks how one can relate systems-level models to those at the neuronal and neural ensemble levels. We feel that there are ample reasons to link functional neuroimaging and neural modeling, and that combining the results from the two disciplines will result in furthering our understanding of the central nervous system. © 1994 Wiley-Liss, Inc. This Article is a US Goverment work and, as such, is in the public domain in the United State of America. Copyright © 1994 Wiley-Liss, Inc.
Harden, Stephen L.; Cuffney, Thomas F.; Terziotti, Silvia; Kolb, Katharine R.
2013-01-01
Data collected between 1997 and 2008 at 48 stream sites were used to characterize relations between watershed settings and stream nutrient yields throughout central and eastern North Carolina. The focus of the investigation was to identify environmental variables in watersheds that influence nutrient export for supporting the development and prioritization of management strategies for restoring nutrient-impaired streams. Nutrient concentration data and streamflow data compiled for the 1997 to 2008 study period were used to compute stream yields of nitrate, total nitrogen (N), and total phosphorus (P) for each study site. Compiled environmental data (including variables for land cover, hydrologic soil groups, base-flow index, streams, wastewater treatment facilities, and concentrated animal feeding operations) were used to characterize the watershed settings for the study sites. Data for the environmental variables were analyzed in combination with the stream nutrient yields to explore relations based on watershed characteristics and to evaluate whether particular variables were useful indicators of watersheds having relatively higher or lower potential for exporting nutrients. Data evaluations included an examination of median annual nutrient yields based on a watershed land-use classification scheme developed as part of the study. An initial examination of the data indicated that the highest median annual nutrient yields occurred at both agricultural and urban sites, especially for urban sites having large percentages of point-source flow contributions to the streams. The results of statistical testing identified significant differences in annual nutrient yields when sites were analyzed on the basis of watershed land-use category. 
When statistical differences in median annual yields were noted, the results for nitrate, total N, and total P were similar in that highly urbanized watersheds (greater than 30 percent developed land use) and (or) watersheds with greater than 10 percent point-source flow contributions to streamflow had higher yields relative to undeveloped watersheds (having less than 10 and 15 percent developed and agricultural land uses, respectively) and watersheds with relatively low agricultural land use (between 15 and 30 percent). The statistical tests further indicated that the median annual yields for total P were statistically higher for watersheds with high agricultural land use (greater than 30 percent) compared to the undeveloped watersheds and watersheds with low agricultural land use. The total P yields also were higher for watersheds with low urban land use (between 10 and 30 percent developed land) compared to the undeveloped watersheds. The study data indicate that grouping and examining stream nutrient yields based on the land-use classifications used in this report can be useful for characterizing relations between watershed settings and nutrient yields in streams located throughout central and eastern North Carolina. Compiled study data also were analyzed with four regression tree models as a means of determining which watershed environmental variables or combination of variables result in basins that are likely to have high or low nutrient yields. The regression tree analyses indicated that some of the environmental variables examined in this study were useful for predicting yields of nitrate, total N, and total P. When the median annual nutrient yields for all 48 sites were evaluated as a group (Model 1), annual point-source flow yields had the greatest influence on nitrate and total N yields observed in streams, and annual streamflow yields had the greatest influence on yields of total P. 
The Model 1 results indicated that watersheds with higher annual point-source flow yields had higher annual yields of nitrate and total N, and watersheds with higher annual streamflow yields had higher annual yields of total P. When sites with high point-source flows (greater than 10 percent of total streamflow) were excluded from the regression tree analyses (Models 2–4), the percentage of forested land in the watersheds was identified as the primary environmental variable influencing stream yields for both total N and total P. Models 2, 3 and 4 did not identify any watershed environmental variables that could adequately explain the observed variability in the nitrate yields among the set of sites examined by each of these models. The results for Models 2, 3, and 4 indicated that watersheds with higher percentages of forested land had lower annual total N and total P yields compared to watersheds with lower percentages of forested land, which had higher median annual total N and total P yields. Additional environmental variables determined to further influence the stream nutrient yields included median annual percentage of point-source flow contributions to the streams, variables of land cover (percentage of forested land, agricultural land, and (or) forested land plus wetlands) in the watershed and (or) in the stream buffer, and drainage area. The regression tree models can serve as a tool for relating differences in select watershed attributes to differences in stream yields of nitrate, total N, and total P, which can provide beneficial information for improving nutrient management in streams throughout North Carolina and for reducing nutrient loads to coastal waters.
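The regression tree analyses above repeatedly pick the single environmental variable and threshold that best separate high-yield from low-yield watersheds. One node of such a tree can be sketched with the CART-style squared-error criterion (the forest-cover and yield numbers below are made up for illustration, not study data):

```python
def best_split(x, y):
    """One node of a regression tree: scan thresholds on a single predictor
    and return the one minimizing the summed squared error around the two
    child means (the criterion underlying CART-style regression trees)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    xs = [x[i] for i in order]
    ys = [y[i] for i in order]

    def sse(v):
        m = sum(v) / len(v)
        return sum((u - m) ** 2 for u in v)

    best_err, best_thr = float("inf"), None
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue  # no valid threshold between equal values
        err = sse(ys[:i]) + sse(ys[i:])
        if err < best_err:
            best_err, best_thr = err, (xs[i - 1] + xs[i]) / 2
    return best_thr

# Hypothetical data: percent forested land vs. total-N yield.
forest = [10, 20, 30, 70, 80, 90]
yield_n = [9, 8, 10, 2, 3, 1]
print(best_split(forest, yield_n))  # → 50.0
```

Recursing this split on each child reproduces the tree structure the report used to rank watershed variables.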
Selecting the most appropriate time points to profile in high-throughput studies
Kleyman, Michael; Sefer, Emre; Nicola, Teodora; Espinoza, Celia; Chhabra, Divya; Hagood, James S; Kaminski, Naftali; Ambalavanan, Namasivayam; Bar-Joseph, Ziv
2017-01-01
Biological systems are increasingly being studied by high throughput profiling of molecular data over time. Determining the set of time points to sample in studies that profile several different types of molecular data is still challenging. Here we present the Time Point Selection (TPS) method that solves this combinatorial problem in a principled and practical way. TPS utilizes expression data from a small set of genes sampled at a high rate. As we show by applying TPS to study mouse lung development, the points selected by TPS can be used to reconstruct an accurate representation for the expression values of the non selected points. Further, even though the selection is only based on gene expression, these points are also appropriate for representing a much larger set of protein, miRNA and DNA methylation changes over time. TPS can thus serve as a key design strategy for high throughput time series experiments. Supporting Website: www.sb.cs.cmu.edu/TPS DOI: http://dx.doi.org/10.7554/eLife.18541.001 PMID:28124972
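The combinatorial core of time-point selection can be illustrated with a greedy sketch: keep the endpoints, then repeatedly add whichever point most reduces the error of reconstructing the full profiles from the selected subset. TPS itself uses spline fits to densely sampled gene expression; this sketch substitutes linear interpolation and synthetic data to show only the selection idea:

```python
from bisect import bisect_left

def interp(xs, ys, x):
    """Piecewise-linear interpolation with flat extrapolation."""
    i = bisect_left(xs, x)
    if i == 0:
        return ys[0]
    if i >= len(xs):
        return ys[-1]
    w = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] * (1 - w) + ys[i] * w

def select_points(times, profiles, k):
    """Greedy forward selection: keep both endpoints, then repeatedly add
    the time point that most reduces the total squared error of linearly
    reconstructing every profile at the non-selected points."""
    chosen = [0, len(times) - 1]
    while len(chosen) < k:
        best_err, best_c = None, None
        for c in range(len(times)):
            if c in chosen:
                continue
            cand = sorted(chosen + [c])
            xs = [times[i] for i in cand]
            err = sum((interp(xs, [p[i] for i in cand], times[j]) - p[j]) ** 2
                      for p in profiles for j in range(len(times)))
            if best_err is None or err < best_err:
                best_err, best_c = err, c
        chosen = sorted(chosen + [best_c])
    return chosen

# A profile with a sharp transition: the selected point lands near it.
sel = select_points(list(range(7)), [[0, 0, 0, 3, 3, 3, 3]], 3)
print(sel)  # → [0, 4, 6]
```

The greedy step is a common heuristic for this kind of subset-selection objective; the paper's method optimizes a related criterion more carefully.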
González-José, Rolando; Charlin, Judith
2012-01-01
The specific use of different prehistoric weapons is mainly determined by their physical properties, which provide a relative advantage or disadvantage in performing a given, particular function. Since these physical properties are integrated to accomplish that function, examining design variables and their pattern of integration or modularity is of interest for estimating the past function of a point. Here we analyze a composite sample of lithic points from southern Patagonia, likely formed by arrows, thrown spears and hand-held points, to test whether they can be viewed as a two-module system formed by the blade and the stem, and to evaluate the degree to which shape, size, asymmetry, blade:stem length ratio, and tip angle explain the observed variance and differentiation among points supposedly aimed at different functions. To do so we performed a geometric morphometric analysis on 118 lithic points, departing from 24 two-dimensional landmarks and semilandmarks placed on the point's contour. Klingenberg's covariational modularity tests were used to evaluate different modularity hypotheses, and a composite PCA including shape, size, asymmetry, blade:stem length ratio, and tip angle was used to estimate the importance of each attribute in explaining variation patterns. Results show that the blade and the stem can be seen as "near decomposable units" in the points integrating the studied sample. However, this modular pattern changes after removing the effects of reduction. Indeed, a resharpened point tends to show a tip/rest-of-the-point modular pattern. The composite PCA analyses evidenced three different patterns of morphometric attributes compatible with arrows, thrown spears, and hand-held tools. Interestingly, when analyzed independently, these groups show differences in their modular organization.
Our results indicate that stone tools can be approached as flexible designs, characterized by a composite set of interacting morphometric attributes, and evolving in a modular way.
Nutritional ecology of obesity: from humans to companion animals.
Raubenheimer, David; Machovsky-Capuska, Gabriel E; Gosby, Alison K; Simpson, Stephen
2015-01-01
We apply nutritional geometry, a framework for modelling the interactive effects of nutrients on animals, to help understand the role of modern environments in the obesity pandemic. Evidence suggests that humans regulate the intake of protein energy (PE) more strongly than non-protein energy (nPE), and consequently will over- and under-ingest nPE on diets with low or high PE, respectively. This pattern of macronutrient regulation has led to the protein leverage hypothesis, which proposes that the rise in obesity has been caused partly by a shift towards diets with reduced PE:nPE ratios relative to the set point for protein regulation. We discuss potential causes of this mismatch, including environmentally induced reductions in the protein density of the human diet and factors that might increase the regulatory set point for protein and hence exacerbate protein leverage. Economics--the high price of protein compared with fats and carbohydrates--is one factor that might contribute to the reduction of dietary protein concentrations. The possibility that rising atmospheric CO₂ levels could also play a role through reducing the PE:nPE ratios in plants and animals in the human food chain is discussed. Factors that reduce protein efficiency, for example by increasing the use of ingested amino acids in energy metabolism (hepatic gluconeogenesis), are highlighted as potential drivers of increased set points for protein regulation. We recommend that a similar approach is taken to understand the rise of obesity in other species, and identify some key gaps in the understanding of nutrient regulation in companion animals.
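The protein leverage arithmetic is simple enough to state directly: if absolute protein intake is defended at a fixed target, total energy intake equals that target divided by the diet's protein-energy fraction, so diluting protein drives total intake up. A sketch with hypothetical numbers (the 1500 kJ/day target is illustrative, not a value from the paper):

```python
def total_energy_intake(protein_target_kj, pe_fraction):
    """If absolute protein intake is regulated to a fixed target, total
    energy intake is the target divided by the dietary protein-energy
    fraction; lowering the fraction leverages total intake upward."""
    return protein_target_kj / pe_fraction

# Hypothetical defended protein target of 1500 kJ/day:
base = total_energy_intake(1500.0, 0.15)      # 15% PE   -> 10000 kJ/day
diluted = total_energy_intake(1500.0, 0.125)  # 12.5% PE -> 12000 kJ/day
```

A modest drop in dietary PE fraction (15% to 12.5%) thus produces a 20% rise in total energy intake under strict protein regulation.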
Blumenthal, Karen J; Chien, Alyna T; Singer, Sara J
2018-05-18
There remains a need to improve patient safety in primary care settings. Studies have demonstrated that creating high-performing teams can improve patient safety and encourage a safety culture within hospital settings, but little is known about this relationship in primary care. To examine how team dynamics relate to perceptions of safety culture in primary care and whether care coordination plays an intermediating role. This is a cross-sectional survey study with 63% response (n = 1082). The study participants were attending clinicians, resident physicians and other staff who interacted with patients from 19 primary care practices affiliated with Harvard Medical School. Three domains corresponding with our main measures: team dynamics, care coordination and safety culture. All items were measured on a 5-point Likert scale. We used linear regression clustered by practice site to assess the relationship between team dynamics and perceptions of safety culture. We also performed a mediation analysis to determine the extent to which care coordination explains the relationship between perceptions of team dynamics and of safety culture. For every 1-point increase in overall team dynamics, there was a 0.76-point increase in perception of safety culture [95% confidence interval (CI) 0.70-0.82, P < 0.001]. Care coordination mediated the relationship between team dynamics and the perception of safety culture. Our findings suggest there is a relationship between team dynamics, care coordination and perceptions of patient safety in a primary care setting. To make patients safer, we may need to pay more attention to how primary care providers work together to coordinate care.
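The mediation analysis described above can be sketched with the classic Baron-Kenny decomposition: the total effect of team dynamics on safety culture minus the direct effect (controlling for care coordination) gives the mediated portion. This uses synthetic data and plain covariance formulas; the study's actual model was clustered by practice site, which this sketch omits:

```python
import random

random.seed(0)
n = 2000

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

# Synthetic data with a built-in mediated pathway X -> M -> Y:
team = [random.gauss(0, 1) for _ in range(n)]                 # X: team dynamics
coord = [0.8 * x + random.gauss(0, 1) for x in team]          # M: care coordination
safety = [0.5 * m + 0.2 * x + random.gauss(0, 0.5)
          for x, m in zip(team, coord)]                       # Y: safety culture

# Total effect c (Y ~ X) via simple regression:
total = cov(team, safety) / cov(team, team)

# Direct effect c' (Y ~ X + M) via the two-predictor OLS closed form:
den = cov(team, team) * cov(coord, coord) - cov(team, coord) ** 2
direct = (cov(team, safety) * cov(coord, coord)
          - cov(coord, safety) * cov(team, coord)) / den

indirect = total - direct  # the portion mediated by care coordination
```

With the simulated coefficients, the total effect (~0.6) splits into a direct part (~0.2) and a mediated part (~0.4), mirroring the paper's finding that coordination carries much of the team-dynamics effect.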
Uncertainty in gridded CO2 emissions estimates
Hogue, Susannah; Marland, Eric; Andres, Robert J.; ...
2016-05-19
We are interested in the spatial distribution of fossil-fuel-related emissions of CO2 for both geochemical and geopolitical reasons, but it is important to understand the uncertainty that exists in spatially explicit emissions estimates. Working from one of the widely used gridded data sets of CO2 emissions, we examine the elements of uncertainty, focusing on gridded data for the United States at the scale of 1° latitude by 1° longitude. Uncertainty is introduced in the magnitude of total United States emissions, the magnitude and location of large point sources, the magnitude and distribution of non-point sources, and from the use of proxy data to characterize emissions. For the United States, we develop estimates of the contribution of each component of uncertainty. At 1° resolution, in most grid cells, the largest contribution to uncertainty comes from how well the distribution of the proxy (in this case population density) represents the distribution of emissions. In other grid cells, the magnitude and location of large point sources make the major contribution to uncertainty. Uncertainty in population density can be important where a large gradient in population density occurs near a grid cell boundary. Uncertainty is strongly scale-dependent, with uncertainty increasing as grid size decreases. In conclusion, uncertainty for our data set with 1° grid cells for the United States is typically on the order of ±150%, but this is perhaps not excessive in a data set where emissions per grid cell vary over 8 orders of magnitude.
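When the per-cell uncertainty components are treated as independent, a common way to build such an error budget is to combine fractional uncertainties in quadrature. The component magnitudes below are hypothetical, chosen only to reproduce the ±150% order of magnitude the abstract reports; the paper's actual error model is more detailed:

```python
import math

def cell_uncertainty(components):
    """Combine independent fractional (1-sigma) uncertainty components in
    quadrature, a common choice for gridded-inventory error budgets."""
    return math.sqrt(sum(c * c for c in components))

# Hypothetical component magnitudes for one 1-degree cell:
total = cell_uncertainty([0.05,   # national emissions total
                          0.50,   # large point-source magnitude/location
                          1.40])  # proxy (population) representativeness
print(round(total, 2))  # ~1.49, i.e. on the order of ±150%
```

The quadrature sum also makes the abstract's qualitative point visible: the largest single component (here the proxy term) dominates the cell total.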
NASA Astrophysics Data System (ADS)
Hussnain, Zille; Oude Elberink, Sander; Vosselman, George
2016-06-01
In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which involves utilizing corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique, which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondence has achieved pixel-level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract features necessary to improve the MLSPC accuracy to pixel level.
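The arrangement-based outlier filter can be sketched with a simplified relative-distance consistency check: a correspondence survives only if its distances to the other matched points agree between the two sets. This is a hypothetical stand-in for the paper's filter, which also uses angles:

```python
import math

def filter_matches(src, dst, tol=0.1):
    """Keep match i only if, for most other matches j, the distance
    src[i]-src[j] agrees (within a relative tolerance) with the distance
    dst[i]-dst[j]. A simplified relative-distance consistency filter."""
    def d(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    keep = []
    for i in range(len(src)):
        votes = sum(
            1 for j in range(len(src))
            if i != j and abs(d(src[i], src[j]) - d(dst[i], dst[j]))
            <= tol * max(d(src[i], src[j]), 1e-9)
        )
        if votes >= (len(src) - 1) / 2:
            keep.append(i)
    return keep

# Three consistent matches plus one gross outlier (the last pair):
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(0, 0), (1, 0), (0, 1), (5, 5)]
print(filter_matches(src, dst))  # → [0, 1, 2]
```

Because distances are invariant to rotation and translation, the check works even when the two patches are rigidly misaligned.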
Development and Validation of a Monte Carlo Simulation Tool for Multi-Pinhole SPECT
Mok, Greta S. P.; Du, Yong; Wang, Yuchuan; Frey, Eric C.; Tsui, Benjamin M. W.
2011-01-01
Purpose In this work, we developed and validated a Monte Carlo simulation (MCS) tool for investigation and evaluation of multi-pinhole (MPH) SPECT imaging. Procedures This tool was based on a combination of the SimSET and MCNP codes. Photon attenuation and scatter in the object, as well as penetration and scatter through the collimator detector, are modeled in this tool. It allows accurate and efficient simulation of MPH SPECT with focused pinhole apertures and user-specified photon energy, aperture material, and imaging geometry. The MCS method was validated by comparing the point response function (PRF), detection efficiency (DE), and image profiles obtained from point sources and phantom experiments. A prototype single-pinhole collimator and focused four- and five-pinhole collimators fitted on a small animal imager were used for the experimental validations. We have also compared computational speed among various simulation tools for MPH SPECT, including SimSET-MCNP, MCNP, SimSET-GATE, and GATE for simulating projections of a hot sphere phantom. Results We found good agreement between the MCS and experimental results for PRF, DE, and image profiles, indicating the validity of the simulation method. The relative computational speeds for SimSET-MCNP, MCNP, SimSET-GATE, and GATE are 1: 2.73: 3.54: 7.34, respectively, for 120-view simulations. We also demonstrated the application of this MCS tool in small animal imaging by generating a set of low-noise MPH projection data of a 3D digital mouse whole body phantom. Conclusions The new method is useful for studying MPH collimator designs, data acquisition protocols, image reconstructions, and compensation techniques. It also has great potential to be applied for modeling the collimator-detector response with penetration and scatter effects for MPH in the quantitative reconstruction method. PMID:19779896
Path planning during combustion mode switch
Jiang, Li; Ravi, Nikhil
2015-12-29
Systems and methods are provided for transitioning between a first combustion mode and a second combustion mode in an internal combustion engine. A current operating point of the engine is identified and a target operating point for the internal combustion engine in the second combustion mode is also determined. A predefined optimized transition operating point is selected from memory. While operating in the first combustion mode, one or more engine actuator settings are adjusted to cause the operating point of the internal combustion engine to approach the selected optimized transition operating point. When the engine is operating at the selected optimized transition operating point, the combustion mode is switched from the first combustion mode to the second combustion mode. While operating in the second combustion mode, one or more engine actuator settings are adjusted to cause the operating point of the internal combustion engine to approach the target operating point.
Convex Hull Aided Registration Method (CHARM).
Fan, Jingfan; Yang, Jian; Zhao, Yitian; Ai, Danni; Liu, Yonghuai; Wang, Ge; Wang, Yongtian
2017-09-01
Non-rigid registration finds many applications such as photogrammetry, motion tracking, model retrieval, and object recognition. In this paper we propose a novel convex hull aided registration method (CHARM) to match two point sets subject to a non-rigid transformation. First, two convex hulls are extracted from the source and target respectively. Then, all points of the point sets are projected onto the reference plane through each triangular facet of the hulls. From these projections, invariant features are extracted and matched optimally. The matched feature point pairs are mapped back onto the triangular facets of the convex hulls to remove outliers that are outside any relevant triangular facet. The rigid transformation from the source to the target is robustly estimated by the random sample consensus (RANSAC) scheme through minimizing the distance between the matched feature point pairs. Finally, these feature points are utilized as the control points to achieve non-rigid deformation in the form of thin-plate spline of the entire source point set towards the target one. The experimental results based on both synthetic and real data show that the proposed algorithm outperforms several state-of-the-art ones with respect to sampling, rotational angle, and data noise. In addition, the proposed CHARM algorithm also shows higher computational efficiency compared to these methods.
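The first step of CHARM, extracting convex hulls from the source and target point sets, can be sketched with a standard 2D hull algorithm. Andrew's monotone chain is one common choice, not necessarily the authors'; the point set below is illustrative:

```python
def convex_hull(points):
    """Andrew's monotone chain: return hull vertices in CCW order,
    dropping interior points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (1, 0.5)]
print(convex_hull(pts))  # interior points (1,1) and (1,0.5) are dropped
```

CHARM then projects all points onto planes through the hull facets to build the invariant features used for matching; the hull itself is only the scaffold for that step.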
Improving HVAC operational efficiency in small-and medium-size commercial buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Woohyun; Katipamula, Srinivas; Lutes, Robert
Small- and medium-size (<100,000 sf) commercial buildings (SMBs) represent over 95% of the U.S. commercial building stock and consume over 60% of total site energy consumption. Many of these buildings use rudimentary controls that are mostly manual, with limited scheduling capability, no monitoring, or failure management. Therefore, many of these buildings are operated inefficiently and consume excess energy. SMBs typically use packaged rooftop units (RTUs) that are controlled by an individual thermostat. There is increased urgency to improve the operating efficiency of existing commercial building stock in the United States for many reasons, chief among them being to mitigate the climate change impacts. Studies have shown that managing set points and schedules of the RTUs will result in up to 20% energy and cost savings. Another problem associated with RTUs is short cycling, when an RTU goes through ON and OFF cycles too frequently. Excessive cycling can lead to excessive wear and to premature failure of the compressor or its components. Also, short cycling can result in a significantly decreased average efficiency (up to 10%), even if there are no physical failures in the equipment. Ensuring correct use of the zone set points and eliminating frequent cycling of RTUs thereby leading to persistent building operations can significantly increase the operational efficiency of the SMBs. A growing trend is to use low-cost control infrastructure that can enable scalable and cost-effective intelligent building operations. The work reported in this paper describes two algorithms for detecting the zone set point temperature and RTU cycling rate that can be deployed on the low-cost infrastructure. These algorithms only require the zone temperature data for detection. The algorithms have been tested and validated using field data from a number of RTUs from six buildings in different climate locations.
Overall, the algorithms were successful in detecting the set points and ON/OFF cycles accurately using the peak detection technique. The paper describes the two algorithms, results from testing the algorithms using field data, how the algorithms can be used to improve SMB efficiency, and presents related conclusions.
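The idea of recovering set points and cycle counts from zone temperature alone can be sketched as follows. The paper uses a peak-detection technique; this hysteresis-based sketch on synthetic data shows the same principle: estimate the set point from the temperature distribution, then count excursions that cross it with a dead-band to reject sensor noise:

```python
from statistics import median

def detect_setpoint_and_cycles(temps, dead_band=0.5):
    """Estimate the zone set point as the median temperature, then count
    cooling cycles as (rise above set point) -> (pull-back below it),
    using a dead-band so noise does not register as cycling."""
    sp = median(temps)
    cycles, armed = 0, False
    for t in temps:
        if t > sp + dead_band:
            armed = True       # zone drifted warm: RTU should run
        elif armed and t < sp - dead_band:
            cycles += 1        # pulled back below set point: one cycle
            armed = False
    return sp, cycles

# Synthetic trace oscillating around a 22 °C set point (4 full cycles):
trace = [22 + 1.2 * (1 if (i // 5) % 2 == 0 else -1) for i in range(40)]
sp, cycles = detect_setpoint_and_cycles(trace)
```

Dividing the cycle count by the trace duration gives the cycling rate, which is the quantity compared against short-cycling thresholds.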
Relatives as spouses: preferences and opportunities for kin marriage in a Western society.
Bras, Hilde; Van Poppel, Frans; Mandemakers, Kees
2009-01-01
This article investigates the determinants of kin marriage on the basis of a large-scale database covering a major rural part of The Netherlands during the period 1840-1922. We studied three types of kin marriage: first cousin marriage, deceased spouse's sibling marriage, and sibling set exchange marriage. Almost 2% of all marriages were between first cousins, 0.85% concerned the sibling of a former spouse, while 4.14% were sibling set exchange marriages. While the first two types generally declined across the study period, sibling set exchange marriage reached a high point of almost 5% between 1890 and 1900. We found evidence for three mechanisms explaining the choice for relatives as spouses, centering both on preferences and on opportunities for kin marriage. Among the higher and middle strata and among farmers, kin marriages were commonly practiced and played an important role in the process of social class formation in the late nineteenth century. An increased choice for cousin marriage as a means of enculturation was observed among orthodox Protestants in the Bible Belt area of The Netherlands. Finally, all studied types of kin marriage took place more often in the relatively isolated, inland provinces of The Netherlands. Sibling set exchange marriages were a consequence of the enlarged supply of same-generation kin as a result of the demographic transition.
Wavelet analysis of the impedance cardiogram waveforms
NASA Astrophysics Data System (ADS)
Podtaev, S.; Stepanov, R.; Dumler, A.; Chugainov, S.; Tziberkin, K.
2012-12-01
Impedance cardiography has been used for diagnosing atrial and ventricular dysfunctions, valve disorders, aortic stenosis, and vascular diseases. Almost all applications of impedance cardiography require determination of some of the characteristic points of the ICG waveform. The ICG waveform has a set of characteristic points known as A, B, E ((dZ/dt)max), X, Y, O and Z. These points are related to distinct physiological events in the cardiac cycle. The objective of this work is the evaluation of a new method for processing and interpreting impedance cardiogram waveforms using wavelet analysis. A method of computer thoracic tetrapolar polyrheocardiography is used for hemodynamic registration. Use of an original wavelet differentiation algorithm allows filtering and calculation of the derivatives of the rheocardiogram to be combined. The proposed approach can be used in clinical practice for early diagnostics of cardiovascular system remodelling in the course of different pathologies.
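Combining filtering and differentiation in one convolution, as the wavelet approach does, can be illustrated with a derivative-of-Gaussian kernel (a wavelet-like filter; the authors' specific wavelet is not stated in the abstract, so this is only an analogous sketch):

```python
import math

def smoothed_derivative_kernel(sigma, radius):
    """Derivative-of-Gaussian kernel, normalized so the response to a
    unit-slope ramp is exactly 1 (away from the boundaries)."""
    offsets = list(range(-radius, radius + 1))
    k = [x * math.exp(-x * x / (2 * sigma * sigma)) for x in offsets]
    norm = sum(x * kx for x, kx in zip(offsets, k))
    return [kx / norm for kx in k]

def smoothed_derivative(signal, sigma=2.0, radius=6):
    """Filter and differentiate in a single pass, analogous to wavelet
    differentiation of the ICG waveform; edges use clamped indexing."""
    k = smoothed_derivative_kernel(sigma, radius)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for x, w in zip(range(-radius, radius + 1), k):
            j = min(max(i + x, 0), len(signal) - 1)
            acc += w * signal[j]
        out.append(acc)
    return out
```

Characteristic points such as (dZ/dt)max then fall out as extrema of the smoothed derivative, without a separate noise-filtering stage.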
Hata, Kiyomi
2014-12-01
With the rising number of patients who rely on medical care, it is necessary to use evolving health care technology appropriately, to control health care costs, and to enhance the well-being of patients in the home care setting. Point-of-care testing (POCT) is an instrumental system for meeting such demands in home care; however, the term remains relatively unknown in Japan. For this research, I conducted a qualitative analysis of factors based on accounts obtained through group interviews of 11 experienced home-visiting nurses who work at three home-visit nursing stations, for the purpose of clarifying issues in the introduction of POCT. The results of the research identified five categories and 16 subcategories of issues in the introduction of POCT. The identified categories are expected to be useful for the spread of POCT in the future. Key words: Point of care testing, Home care nursing.
New drugs and patient-centred end-points in old age: setting the wheels in motion.
Mangoni, Arduino A; Pilotto, Alberto
2016-01-01
Older patients with various degrees of frailty and disability, a key population target of pharmacological interventions in acute and chronic disease states, are virtually neglected in pre-marketing studies assessing the efficacy and safety of investigational drugs. Moreover, aggressively pursuing established therapeutic targets in old age, e.g. blood pressure, serum glucose or cholesterol concentrations, is not necessarily associated with the beneficial effects, and the acceptable safety, reported in younger patient cohorts. Measures of self-reported health and functional status might represent additional, more meaningful, therapeutic end-points in the older population, particularly in patients with significant frailty and relatively short life expectancy, e.g. in the presence of cancer and/or neurodegenerative disease conditions. Strategies enhancing early knowledge about key pharmacological characteristics of investigational drugs targeting older adults are discussed, together with the rationale for incorporating non-traditional, patient-centred, end-points in this ever-increasing group.
Research on stratified evolution of composite materials under four-point bending loading
NASA Astrophysics Data System (ADS)
Hao, M. J.; You, Q. J.; Zheng, J. C.; Yue, Z.; Xie, Z. P.
2017-12-01
In order to explore the effect of delamination and its evolution on the load capacity and service life of composite materials under four-point bending loading, artificial delamination defects were introduced at different positions. Four-point bending tests were carried out, the whole process was recorded by acoustic emission, and the damage degree of the composite laminate was judged from the specimen's cumulative impact count, the time-amplitude history chart, the load-time-relative-energy history chart, and the acoustic-emission impact-signal location map. The results show that delamination defects near the surface of the specimen accelerate the process of material failure and crack expansion. The location of the delamination defects changes the bending performance of the composites to a great extent: the closer the delamination defects are to the surface of the specimen, the greater the damage and the worse the service capacity of the specimen.
Holographic definition of points and distances
NASA Astrophysics Data System (ADS)
Czech, Bartłomiej; Lamprou, Lampros
2014-11-01
We discuss the way in which field theory quantities assemble the spatial geometry of three-dimensional anti-de Sitter space (AdS3). The field theory ingredients are the entanglement entropies of boundary intervals. A point in AdS3 corresponds to a collection of boundary intervals which is selected by a variational principle we discuss. Coordinates in AdS3 are integration constants of the resulting equation of motion. We propose a distance function for this collection of points, which obeys the triangle inequality as a consequence of the strong subadditivity of entropy. Our construction correctly reproduces the static slice of AdS3 and the Ryu-Takayanagi relation between geodesics and entanglement entropies. We discuss how these results extend to quotients of AdS3 —the conical defect and the BTZ geometries. In these cases, the set of entanglement entropies must be supplemented by other field theory quantities, which can carry the information about lengths of nonminimal geodesics.
Anderson, Julie A; Tschumper, Gregory S
2006-06-08
Ten stationary points on the water dimer potential energy surface have been examined with ten density functional methods (X3LYP, B3LYP, B971, B98, MPWLYP, PBE1PBE, PBE, MPW1K, B3P86, and BHandHLYP). Geometry optimizations and vibrational frequency calculations were carried out with the TZ2P(f,d)+dif basis set. All ten of the density functionals correctly describe the relative energies of the ten stationary points. However, correctly describing the curvature of the potential energy surface is far more difficult. Only one functional (BHandHLYP) reproduces the number of imaginary frequencies from CCSD(T) calculations. The other nine density functionals fail to correctly characterize the nature of at least one of the ten (H(2)O)(2) stationary points studied here.
Mauk, Michael G.; Song, Jinzhao; Liu, Changchun; Bau, Haim H.
2018-01-01
Designs and applications of microfluidics-based devices for molecular diagnostics (Nucleic Acid Amplification Tests, NAATs) in infectious disease testing are reviewed, with emphasis on minimally instrumented, point-of-care (POC) tests for resource-limited settings. Microfluidic cartridges (‘chips’) that combine solid-phase nucleic acid extraction; isothermal enzymatic nucleic acid amplification; pre-stored, paraffin-encapsulated lyophilized reagents; and real-time or endpoint optical detection are described. These chips can be used with a companion module for separating plasma from blood through a combined sedimentation-filtration effect. Three reporter types: Fluorescence, colorimetric dyes, and bioluminescence; and a new paradigm for end-point detection based on a diffusion-reaction column are compared. Multiplexing (parallel amplification and detection of multiple targets) is demonstrated. Low-cost detection and added functionality (data analysis, control, communication) can be realized using a cellphone platform with the chip. Some related and similar-purposed approaches by others are surveyed. PMID:29495424
A new prognostic model for chemotherapy-induced febrile neutropenia.
Ahn, Shin; Lee, Yoon-Seon; Lee, Jae-Lyun; Lim, Kyung Soo; Yoon, Sung-Cheol
2016-02-01
The objective of this study was to develop and validate a new prognostic model for febrile neutropenia (FN). This study comprised 1001 episodes of FN: 718 for the derivation set and 283 for the validation set. Multivariate logistic regression analysis was performed with unfavorable outcome as the primary endpoint and bacteremia as the secondary endpoint. In the derivation set, risk factors for adverse outcomes comprised age ≥ 60 years (2 points), procalcitonin ≥ 0.5 ng/mL (5 points), ECOG performance score ≥ 2 (2 points), oral mucositis grade ≥ 3 (3 points), systolic blood pressure <90 mmHg (3 points), and respiratory rate ≥ 24 breaths/min (3 points). The model stratified patients into three severity classes, with adverse event rates of 6.0 % in class I (score ≤ 2), 27.3 % in class II (score 3-8), and 67.9 % in class III (score ≥ 9). Bacteremia was present in 1.1, 11.5, and 29.8 % of patients in class I, II, and III, respectively. The outcomes of the validation set were similar in each risk class. When the derivation and validation sets were integrated, unfavorable outcomes occurred in 5.9 % of the low-risk group classified by the new prognostic model and in 12.2 % classified by the Multinational Association for Supportive Care in Cancer (MASCC) risk index. With the new prognostic model, we can classify patients with FN into three classes of increasing adverse outcomes and bacteremia. Early discharge would be possible for class I patients, short-term observation could safely manage class II patients, and inpatient admission is warranted for class III patients.
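The abstract above fully specifies the scoring rule and class cut-offs of the new prognostic model, so the classification step can be sketched directly. The function and variable names below are our own; only the point values, thresholds, and class boundaries come from the abstract.

```python
# Sketch of the febrile-neutropenia (FN) risk score described above.
# Point values and class cut-offs are taken verbatim from the abstract;
# everything else (names, interface) is our own illustration.

def fn_risk_score(age, procalcitonin, ecog, mucositis_grade,
                  systolic_bp, resp_rate):
    """Return (score, risk class) for one FN episode."""
    score = 0
    if age >= 60:             score += 2
    if procalcitonin >= 0.5:  score += 5   # ng/mL
    if ecog >= 2:             score += 2   # ECOG performance score
    if mucositis_grade >= 3:  score += 3   # oral mucositis grade
    if systolic_bp < 90:      score += 3   # mmHg
    if resp_rate >= 24:       score += 3   # breaths/min
    if score <= 2:
        cls = "I"    # adverse event rate 6.0 %
    elif score <= 8:
        cls = "II"   # adverse event rate 27.3 %
    else:
        cls = "III"  # adverse event rate 67.9 %
    return score, cls
```

Per the abstract, class I patients would be candidates for early discharge, class II for short-term observation, and class III for inpatient admission.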
Gschwind, Michael K
2013-04-16
Mechanisms for generating and executing programs for a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA) are provided. A computer program product comprising a computer recordable medium having a computer readable program recorded thereon is provided. The computer readable program, when executed on a computing device, causes the computing device to receive one or more instructions and execute the one or more instructions using logic in an execution unit of the computing device. The logic implements a floating point (FP) only single instruction multiple data (SIMD) instruction set architecture (ISA), based on data stored in a vector register file of the computing device. The vector register file is configured to store both scalar and floating point values as vectors having a plurality of vector elements.
40 CFR 1065.659 - Removed water correction.
Code of Federal Regulations, 2011 CFR
2011-07-01
... know that saturated water vapor conditions exist. Use good engineering judgment to measure the... absolute pressure based on an alarm set point, a pressure regulator set point, or good engineering judgment... from raw exhaust, you may determine the amount of water based on intake-air humidity, plus a chemical...
40 CFR 1065.659 - Removed water correction.
Code of Federal Regulations, 2010 CFR
2010-07-01
... know that saturated water vapor conditions exist. Use good engineering judgment to measure the... absolute pressure based on an alarm set point, a pressure regulator set point, or good engineering judgment... from raw exhaust, you may determine the amount of water based on intake-air humidity, plus a chemical...
Wireless local area network security.
Bergeron, Bryan P
2004-01-01
Wireless local area networks (WLANs) are increasingly popular in clinical settings because they facilitate the use of wireless PDAs, laptops, and other pervasive computing devices at the point of care. However, because of the relative immaturity of wireless network technology and evolving standards, WLANs, if improperly configured, can present significant security risks. Understanding the security limitations of the technology and available fixes can help minimize the risks of clinical data loss and maintain compliance with HIPAA guidelines.
Debugging and Logging Services for Defence Service Oriented Architectures
2012-02-01
Service A software component and callable end point that provides a logically related set of operations, each of which perform a logical step in a...important to note that in some cases when the fault is identified to lie in uneditable code such as program libraries, or outsourced software services ...debugging is limited to characterisation of the fault, reporting it to the software or service provider and development of work-arounds and management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.
This study discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.
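The key idea of Stride Search, as described above, is that search regions cover a constant physical area rather than a constant number of grid points. A minimal sketch of one way to lay out such regions is given below; the centre-spacing rule and all names are our own simplification, not the authors' implementation.

```python
import math

# Minimal sketch of fixed-physical-radius search-region centres, in the
# spirit of Stride Search: longitudinal spacing widens toward the poles
# so every region spans roughly the same physical distance.
# This is an illustrative assumption, not the published algorithm.

EARTH_RADIUS_KM = 6371.0

def stride_search_centres(radius_km):
    """Return (lat, lon) centres spaced ~radius_km apart on the sphere."""
    dlat = math.degrees(radius_km / EARTH_RADIUS_KM)
    centres = []
    lat = -90.0
    while lat <= 90.0:
        # Shrinking circumference at high latitude -> larger lon stride.
        cos_lat = max(math.cos(math.radians(lat)), 1e-6)
        dlon = dlat / cos_lat
        lon = 0.0
        while lon < 360.0:
            centres.append((lat, lon))
            lon += dlon
        lat += dlat
    return centres
```

Because the stride is set in physical units, high latitudes get far fewer centres than a uniform lat-lon grid would place there, which is consistent with the wall-clock advantage reported in the abstract.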
Solvation effects on chemical shifts by embedded cluster integral equation theory.
Frach, Roland; Kast, Stefan M
2014-12-11
The accurate computational prediction of nuclear magnetic resonance (NMR) parameters like chemical shifts represents a challenge if the species studied is immersed in strongly polarizing environments such as water. Common approaches to treating a solvent in the form of, e.g., the polarizable continuum model (PCM) ignore strong directional interactions such as H-bonds to the solvent which can have substantial impact on magnetic shieldings. We here present a computational methodology that accounts for atomic-level solvent effects on NMR parameters by extending the embedded cluster reference interaction site model (EC-RISM) integral equation theory to the prediction of chemical shifts of N-methylacetamide (NMA) in aqueous solution. We examine the influence of various so-called closure approximations of the underlying three-dimensional RISM theory as well as the impact of basis set size and different treatment of electrostatic solute-solvent interactions. We find considerable and systematic improvement over reference PCM and gas phase calculations. A smaller basis set in combination with a simple point charge model already yields good performance which can be further improved by employing exact electrostatic quantum-mechanical solute-solvent interaction energies. A larger basis set benefits more significantly from exact over point charge electrostatics, which can be related to differences of the solvent's charge distribution.
NASA Astrophysics Data System (ADS)
Székely, Balázs; Kania, Adam; Varga, Katalin; Heilmeier, Hermann
2017-04-01
Lacunarity, a measure of the spatial distribution of empty space, is found to be a useful descriptive quantity of forest structure. Its calculation, based on laser-scanned point clouds, results in a four-dimensional data set. The evaluation of results needs sophisticated tools and visualization techniques. To simplify the evaluation, it is straightforward to use approximation functions fitted to the results. The lacunarity function L(r), being a measure of scale-independent structural properties, has a power-law character. Previous studies showed that the log(log(L(r))) transformation is suitable for analysis of spatial patterns. Accordingly, transformed lacunarity functions can be approximated by appropriate functions either in the original or in the transformed domain. As input data we have used a number of laser-scanned point clouds of various forests. The lacunarity distribution has been calculated along a regular horizontal grid at various (relative) elevations. The lacunarity data cube has then been logarithm-transformed and the resulting values became the input of parameter estimation at each point (point of interest, POI). This way at each POI a parameter set is generated that is suitable for spatial analysis. The expectation is that the horizontal variation and vertical layering of the vegetation can be characterized by this procedure. The results show that the transformed L(r) functions can typically be approximated by exponentials individually, and the residual values remain low in most cases. However, (1) in most cases the residuals may vary considerably, and (2) neighbouring POIs often give rather differing estimates both in horizontal and in vertical directions, of which the vertical variation seems to be more characteristic. In the vertical sense, the distribution of estimates shows abrupt changes in places, presumably related to the vertical structure of the forest.
In low relief areas horizontal similarity is more typical, in higher relief areas horizontal similarity fades out in short distances. Some of the input data have been acquired in the framework of the ChangeHabitats2 project financed by the European Union. BS contributed as an Alexander von Humboldt Research Fellow.
Dual capacity compressor with reversible motor and controls arrangement therefor
Sisk, Francis J.
1980-12-02
A hermetic reciprocating compressor such as may be used in heat pump applications is provided for dual capacity operation by providing the crankpin of the crankshaft with an eccentric ring rotatably mounted thereon, and with the end of the connecting rod opposite the piston encompassing the outer circumference of the eccentric ring, with means limiting the rotation of the eccentric ring upon the crankpin between one end point and an opposite angularly displaced end point to provide different values of eccentricity depending upon which end point the eccentric ring is rotated to upon the crankpin, and a reversible motor in the hermetic shell of the compressor for rotating the crankshaft, the motor operating in one direction effecting the angular displacement of the eccentric ring relative to the crankpin to the one end point, and in the opposite direction effecting the angular displacement of the eccentric ring relative to the crankpin to the opposite end point, this arrangement automatically giving different stroke lengths depending upon the direction of motor rotation. The mechanical structure of the arrangement may take various forms including at least one in which any impact of reversal is reduced by utilizing lubricant passages and chambers at the interface area of the crankpin and eccentric ring to provide a dashpot effect. In the main intended application of the arrangement according to the invention, that is, in a refrigerating or air conditioning system, it is desirable to insure a delay during reversal of the direction of compressor operation. 
A control arrangement is provided in which the control system controls the direction of motor operation in accordance with temperature conditions, the system including control means for effecting operation in a low capacity direction or alternatively in a high capacity direction in response to one set, and another set, respectively, of temperature conditions and with timer means delaying a restart of the compressor motor for at least a predetermined time in response to a condition of the control means operative to initiate a change in the operating direction of the compressor when it restarts.
Saltiel, Philippe; d'Avella, Andrea; Tresch, Matthew C; Wyler, Kuno; Bizzi, Emilio
2017-01-01
The central pattern generator (CPG) architecture for rhythm generation remains partly elusive. We compare cat and frog locomotion results, where the component unrelated to pattern formation appears as a temporal grid, and traveling wave respectively. Frog spinal cord microstimulation with N-methyl-D-Aspartate (NMDA), a CPG activator, produced a limited set of force directions, sometimes tonic, but more often alternating between directions similar to the tonic forces. The tonic forces were topographically organized, and sites evoking rhythms with different force subsets were located close to the constituent tonic force regions. Thus CPGs consist of topographically organized modules. Modularity was also identified as a limited set of muscle synergies whose combinations reconstructed the EMGs. The cat CPG was investigated using proprioceptive inputs during fictive locomotion. Critical points identified both as abrupt transitions in the effect of phasic perturbations, and burst shape transitions, had biomechanical correlates in intact locomotion. During tonic proprioceptive perturbations, discrete shifts between these critical points explained the changes in burst durations, and amplitude changes occurred at one of these points. Besides confirming CPG modularity, these results suggest a fixed temporal grid of anchoring points, to shift modules onsets and offsets. Frog locomotion, reconstructed with the NMDA synergies, showed a partially overlapping synergy activation sequence. Examining the early synergy output evoked by NMDA at different spinal sites revealed a rostrocaudal topographic organization, where each synergy is preferentially evoked from a few, albeit overlapping, cord regions. Comparing the locomotor synergy sequence with this topography suggests that a rostrocaudal traveling wave would activate the synergies in the proper sequence for locomotion. This output was reproduced in a two-layer model using this topography and a traveling wave.
Together our results suggest two CPG components: modules, i.e., synergies; and temporal patterning, seen as a temporal grid in the cat, and a traveling wave in the frog. Animal and limb navigation have similarities. Research relating grid cells to the theta rhythm and on segmentation during navigation may relate to our temporal grid and traveling wave results. Winfree's mathematical work, combining critical phases and a traveling wave, also appears important. We conclude suggesting tracing, and imaging experiments to investigate our CPG model.
Dybek, Inga; Bischof, Gallus; Grothues, Janina; Reinhardt, Susa; Meyer, Christian; Hapke, Ulfert; John, Ulrich; Broocks, Andreas; Hohagen, Fritz; Rumpf, Hans-Jürgen
2006-05-01
Our goal was to analyze the retest reliability and validity of the Alcohol Use Disorders Identification Test (AUDIT) in a primary-care setting and recommend a cut-off value for the different alcohol-related diagnoses. Participants recruited from general practices (GPs) in two northern German cities received the AUDIT, which was embedded in a health-risk questionnaire. In total, 10,803 screenings were conducted. The retest reliability was tested on a subsample of 99 patients, with an intertest interval of 30 days. Sensitivity and specificity at a number of different cut-off values were estimated for the sample of alcohol consumers (n=8237). For this study, 1109 screen-positive patients received a diagnostic interview. Individuals who scored less than five points in the AUDIT and also tested negative in a second alcohol-related screen were defined as "negative" (n=6003). This definition was supported by diagnostic interviews of 99 screen-negative patients from which no false negatives could be detected. As the gold standard for detection of an alcohol-use disorder (AUD), we used the Munich-Composite International Diagnostic Interview (MCIDI), which is based on Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, criteria. On the item level, the reliability, measured by the intraclass correlation coefficient (ICC), ranged between .39 (Item 9) and .98 (Item 10). For the total score, the ICC was .95. For cut-off values of eight points and five points, 87.5% and 88.9%, respectively, of the AUDIT-positives, and 98.9% and 95.1%, respectively, of the AUDIT-negatives were identically identified at retest, with kappa = .86 and kappa = .81. At the cut-off value of five points, we determined good combinations of sensitivity and specificity for the following diagnoses: alcohol dependence (sensitivity and specificity of .97 and .88, respectively), AUD (.97 and .92), and AUD and/or at-risk consumption (.97 and .91). 
Embedded in a health-risk questionnaire in primary-care settings, the AUDIT is a reliable and valid screening instrument to identify at-risk drinkers and patients with an AUD. Our findings strongly suggest a lowering of the recommended cut-off value of eight points.
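The cut-off evaluation underlying the AUDIT result above is a standard screening calculation: at a chosen cut-off, sensitivity and specificity are computed against a gold-standard diagnosis (here the MCIDI). The sketch below illustrates that calculation with invented toy data; only the idea of evaluating a score cut-off comes from the abstract.

```python
# Generic sensitivity/specificity computation for a screening cut-off,
# as used to evaluate the AUDIT against a gold-standard diagnosis.
# The data format and names are our own illustration.

def screen_performance(scores_and_truth, cutoff):
    """scores_and_truth: iterable of (score, has_disorder).
    A score >= cutoff counts as screen-positive."""
    tp = fp = tn = fn = 0
    for score, truth in scores_and_truth:
        positive = score >= cutoff
        if positive and truth:
            tp += 1          # true positive
        elif positive:
            fp += 1          # false positive
        elif truth:
            fn += 1          # false negative (missed case)
        else:
            tn += 1          # true negative
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity
```

Lowering the cut-off (e.g. from eight points to five, as the study recommends) moves cases from screen-negative to screen-positive, which raises sensitivity at some cost in specificity.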
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.
2014-01-01
Computational Aerodynamic simulations of a 1484 ft/sec tip speed quiet high-speed fan system were performed at five different operating points on the fan operating line, in order to provide detailed internal flow field information for use with fan acoustic prediction methods presently being developed, assessed and validated. The fan system is a sub-scale, low-noise research fan/nacelle model that has undergone experimental testing in the 9- by 15-foot Low Speed Wind Tunnel at the NASA Glenn Research Center. Details of the fan geometry, the computational fluid dynamics methods, the computational grids, and various computational parameters relevant to the numerical simulations are discussed. Flow field results for three of the five operating points simulated are presented in order to provide a representative look at the computed solutions. Each of the five fan aerodynamic simulations involved the entire fan system, which includes a core duct and a bypass duct that merge upstream of the fan system nozzle. As a result, only fan rotational speed and the system bypass ratio, set by means of a translating nozzle plug, were adjusted in order to set the fan operating point, leading to operating points that lie on a fan operating line and making mass flow rate a fully dependent parameter. The resulting mass flow rates are in good agreement with measurement values. Computed blade row flow fields at all fan operating points are, in general, aerodynamically healthy. Rotor blade and fan exit guide vane flow characteristics are good, including incidence and deviation angles, chordwise static pressure distributions, blade surface boundary layers, secondary flow structures, and blade wakes. Examination of the computed flow fields reveals no excessive or critical boundary layer separations or related secondary-flow problems, with the exception of the hub boundary layer at the core duct entrance. At that location a significant flow separation is present. 
The region of local flow recirculation extends through a mixing plane, however, which for the particular mixing-plane model used is now known to exaggerate the recirculation. In any case, the flow separation has relatively little impact on the computed rotor and FEGV flow fields.
Parametric embedding for class visualization.
Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B
2007-09-01
We propose a new method, parametric embedding (PE), that embeds objects with the class structure into a low-dimensional visualization space. PE takes as input a set of class conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
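The PE objective described above can be written down compactly: given input class posteriors p(c|x_n) and embedding coordinates, the embedding-space posteriors q(c|z_n) come from a Gaussian mixture with equal covariances, and the loss is the sum of KL divergences. The sketch below evaluates that objective (with unit covariances); variable names and the 2-D setting are our own simplification of the paper's formulation.

```python
import math

# Sketch of the parametric embedding (PE) objective: sum over points of
# KL(p(c|x_n) || q(c|z_n)), where q is induced by a Gaussian mixture
# with equal (here unit) covariances in the embedding space.

def pe_objective(p, points, centers):
    """p[n][c]: given class posteriors; points[n], centers[c]: coords."""
    total = 0.0
    for pn, z in zip(p, points):
        # q(c|z) proportional to exp(-||z - m_c||^2 / 2), normalised.
        logits = [-0.5 * sum((zi - mi) ** 2 for zi, mi in zip(z, m))
                  for m in centers]
        mx = max(logits)                      # log-sum-exp stabilisation
        norm = sum(math.exp(l - mx) for l in logits)
        q = [math.exp(l - mx) / norm for l in logits]
        total += sum(pc * math.log(pc / qc)
                     for pc, qc in zip(pn, q) if pc > 0)
    return total
```

Minimising this objective over both the point coordinates and the class centers (e.g. by gradient descent) yields the low-dimensional visualization; the cost of one evaluation scales with the product of the number of objects and the number of classes, matching the complexity claim in the abstract.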
Detection of Cheating by Decimation Algorithm
NASA Astrophysics Data System (ADS)
Yamanaka, Shogo; Ohzeki, Masayuki; Decelle, Aurélien
2015-02-01
We expand the item response theory to study the case of "cheating students" for a set of exams, trying to detect them by applying a greedy inference algorithm. This extended model is closely related to Boltzmann machine learning. In this paper we aim to infer the correct biases and interactions of our model from a relatively small number of sets of training data. Nevertheless, the greedy algorithm employed in the present study exhibits good performance with a small number of training data. The key point is the sparseness of the interactions in our problem in the context of Boltzmann machine learning: the existence of cheating students is expected to be very rare (possibly even in the real world). We compare a standard approach for inferring sparse interactions in Boltzmann machine learning to our greedy algorithm and find the latter to be superior in several aspects.
NASA Technical Reports Server (NTRS)
Rajkumar, T.; Aragon, Cecilia; Bardina, Jorge; Britten, Roy
2002-01-01
A fast, reliable way of predicting aerodynamic coefficients is produced using a neural network optimized by a genetic algorithm. Basic aerodynamic coefficients (e.g. lift, drag, pitching moment) are modelled as functions of angle of attack and Mach number. The neural network is first trained on a relatively rich set of data from wind tunnel tests or numerical simulations to learn an overall model. Most of the aerodynamic parameters can be well fitted using polynomial functions. A new set of data, which can be relatively sparse, is then supplied to the network to produce a new model consistent with the previous model and the new data. Because the new model interpolates realistically between the sparse test data points, it is suitable for use in piloted simulations. The genetic algorithm is used to choose a neural network architecture that gives the best results, avoiding over- and under-fitting of the test data.
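The abstract notes that most aerodynamic coefficients are well fitted by polynomials in angle of attack. The sketch below shows a plain 1-D polynomial least-squares fit via the normal equations, as a baseline for the kind of interpolation described; it is illustrative only and is not the paper's neural-network/genetic-algorithm method.

```python
# Minimal 1-D polynomial least-squares fit (normal equations +
# Gaussian elimination). Illustrates fitting, e.g., a lift coefficient
# against angle of attack; not the paper's NN/GA approach.

def polyfit(xs, ys, degree):
    """Return coefficients c[0] + c[1]*x + ... + c[degree]*x**degree."""
    n = degree + 1
    # Normal equations A c = b for the Vandermonde least-squares system.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coeffs = [0.0] * n
    for r in reversed(range(n)):
        coeffs[r] = (b[r] - sum(A[r][c] * coeffs[c]
                                for c in range(r + 1, n))) / A[r][r]
    return coeffs
```

A low-degree fit like this interpolates smoothly between sparse test points, which is the same requirement (realistic interpolation for piloted simulation) that motivates the paper's more flexible neural-network model.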
An inventory of undiscovered Canadian mineral resources
NASA Technical Reports Server (NTRS)
Labovitz, M. L.; Griffiths, J. C.
1982-01-01
Unit regional value (URV) and unit regional weight are area standardized measures of the expected value and quantity, respectively, of the mineral resources of a region. Estimation and manipulation of the URV statistic is the basis of an approach to mineral resource evaluation. Estimates of the kind and value of exploitable mineral resources yet to be discovered in the provinces of Canada are used as an illustration of the procedure. The URV statistic is set within a previously developed model wherein geology, as measured by point counting geologic maps, is related to the historical record of mineral resource production of well-developed regions of the world, such as the 50 states of the U.S.A.; these may be considered the training set. The Canadian provinces are related to this training set using geological information obtained in the same way from geologic maps of the provinces. The desired predictions of yet to be discovered mineral resources in the Canadian provinces arise as a consequence. The implicit assumption is that regions of similar geology, if equally well developed, will produce similar weights and values of mineral resources.
Störmer method for a problem of point injection of charged particles into a magnetic dipole field
NASA Astrophysics Data System (ADS)
Kolesnikov, E. K.
2017-03-01
The problem of point injection of charged particles into a magnetic dipole field was considered. Analytical expressions were obtained by the Störmer method for the regions of allowed momenta of charged particles at arbitrary points of a dipole field for a given position of the point source of particles. It was found that, for a fixed location of the studied point, there was a specific structure of the coordinate space in the form of a set of seven regions, where the injector location in each region corresponded to a definite form of the allowed momentum region at the studied point. It was shown that the boundaries of the allowed regions in four of the mentioned regions were surfaces of revolution of conic sections.
Singularities and the geometry of spacetime
NASA Astrophysics Data System (ADS)
Hawking, Stephen
2014-11-01
The aim of this essay is to investigate certain aspects of the geometry of the spacetime manifold in the General Theory of Relativity with particular reference to the occurrence of singularities in cosmological solutions and their relation with other global properties. Section 2 gives a brief outline of Riemannian geometry. In Section 3, the General Theory of Relativity is presented in the form of two postulates and two requirements which are common to it and to the Special Theory of Relativity, and a third requirement, the Einstein field equations, which distinguish it from the Special Theory. There does not seem to be any alternative set of field equations which would not have some undesirable features. Some exact solutions are described. In Section 4, the physical significance of curvature is investigated using the deviation equation for timelike and null curves. The Riemann tensor is decomposed into the Ricci tensor which represents the gravitational effect at a point of matter at that point and the Weyl tensor which represents the effect at a point of gravitational radiation and matter at other points. The two tensors are related by the Bianchi identities which are presented in a form analogous to the Maxwell equations. Some lemmas are given for the occurrence of conjugate points on timelike and null geodesics and their relation with the variation of timelike and null curves is established. Section 5 is concerned with properties of causal relations between points of spacetime. It is shown that these could be used to determine physically the manifold structure of spacetime if the strong causality assumption held. The concepts of a null horizon and a partial Cauchy surface are introduced and are used to prove a number of lemmas relating to the existence of a timelike curve of maximum length between two sets. In Section 6, the definition of a singularity of spacetime is given in terms of geodesic incompleteness.
The various energy assumptions needed to prove the occurrence of singularities are discussed, and then a number of theorems are presented which prove the occurrence of singularities in most cosmological solutions. A procedure is given which could be used to describe and classify the singularities, and their expected nature is discussed. Sections 2 and 3 are reviews of standard work. In Section 4, the deviation equation is standard, but the matrix method used to analyse it is the author's own, as is the decomposition given of the Bianchi identities (this was also obtained independently by Trümper). Variation of curves and conjugate points are standard in a positive-definite metric, but this seems to be the first full account for timelike and null curves in a Lorentz metric. Except where otherwise indicated in the text, Sections 5 and 6 are the work of the author who, however, apologises if through ignorance or inadvertence he has failed to make acknowledgements where due. Some of this work has been described in [Hawking S.W. 1965b. Occurrence of singularities in open universes. Phys. Rev. Lett. 15: 689-690; Hawking S.W. and G.F.R. Ellis. 1965c. Singularities in homogeneous world models. Phys. Rev. Lett. 17: 246-247; Hawking S.W. 1966a. Singularities in the universe. Phys. Rev. Lett. 17: 444-445; Hawking S.W. 1966c. The occurrence of singularities in cosmology. Proc. Roy. Soc. Lond. A 294: 511-521]. Undoubtedly, the most important results are the theorems in Section 6 on the occurrence of singularities. These seem to imply either that the General Theory of Relativity breaks down or that there could be particles whose histories did not exist before (or after) a certain time. The author's own opinion is that the theory probably does break down, but only when quantum gravitational effects become important. This would not be expected to happen until the radius of curvature of spacetime became about 10^-14 cm.
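For context, the deviation equation referred to in Section 4 takes, in a common sign convention, the standard (Jacobi) form for a connecting vector $\xi^a$ between neighbouring timelike geodesics with unit tangent $u^a$:

```latex
\frac{D^2 \xi^a}{d\tau^2} = -R^a{}_{bcd}\, u^b\, \xi^c\, u^d
```

The Ricci part of $R^a{}_{bcd}$ then drives the volume focusing produced by matter at the point itself, while the Weyl part encodes the shear and tidal distortion produced by gravitational radiation and matter elsewhere, matching the decomposition described in the abstract.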
Automatic Monitoring of Tunnel Deformation Based on High Density Point Clouds Data
NASA Astrophysics Data System (ADS)
Du, L.; Zhong, R.; Sun, H.; Wu, Q.
2017-09-01
An automated method for tunnel deformation monitoring using high density point cloud data is presented. Firstly, the 3D point cloud is converted to a two-dimensional surface by projection onto the XOY plane, and the projection point set of the central axis on the XOY plane, named Uxoy, is calculated by combining the Alpha Shape algorithm with the RANSAC (Random Sample Consensus) algorithm. The projection point set of the central axis on the YOZ plane, named Uyoz, is then obtained from the highest and lowest points, which are extracted by intersecting the tunnel point cloud with straight lines that pass through each point of Uxoy perpendicular to the two-dimensional surface; finally, Uxoy and Uyoz together form the 3D central axis. Secondly, the buffer of each cross section is calculated by the K-nearest neighbor algorithm, and the initial cross-sectional point set is quickly constructed by projection. Finally, the cross sections are denoised and the section lines are fitted using iterative ellipse fitting. In order to improve the accuracy of the cross section, a fine adjustment method is proposed to rotate the initial sectional plane around the intercept point in the horizontal and vertical directions within the buffer. The proposed method is applied to a Shanghai subway tunnel, and the deformation of each section in the direction of 0 to 360 degrees is calculated. The result shows that the cross sections have become flattened circles rather than regular circles due to the great pressure at the top of the tunnel.
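The ellipse-fitting step at the core of such section-line extraction can be sketched as a least-squares conic fit. The following Python sketch (toy data, not the paper's implementation) fits a general conic a*x^2 + b*xy + c*y^2 + d*x + e*y + f = 0 to noisy cross-section points; in practice one would iterate, rejecting points with large residuals, to realize the "iterative ellipse fitting" denoising described above:

```python
import numpy as np

def fit_conic(points):
    """Least-squares conic fit: returns the unit-norm coefficient vector
    (a, b, c, d, e, f) minimizing the algebraic residual ||D @ p||."""
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    # The smallest right singular vector minimizes ||D @ p|| with ||p|| = 1.
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

def residuals(points, coeffs):
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    return D @ coeffs

# Toy cross-section: noisy samples of a flattened (elliptical) tunnel ring.
t = np.linspace(0, 2*np.pi, 200, endpoint=False)
pts = np.column_stack([3.0*np.cos(t), 2.0*np.sin(t)])
pts += np.random.default_rng(0).normal(scale=0.01, size=pts.shape)
p = fit_conic(pts)
print(np.abs(residuals(pts, p)).max())  # small algebraic residual
```
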
Hippman, Catriona; Davis, Claire
2016-08-01
What bearing have you set your sights on? How do you navigate the ever-changing swells and winds of our professional landscape? Are you feeling a nebulous desire for change, a sense that your career is not going in the direction you were expecting, worry about a lack of future opportunities, or even deep dissatisfaction in your current position? You are not alone. The formation of the Committee on Advanced Training for Certified Genetic Counselors (CATCGC) was partly in response to such sentiments, expressed within a vibrant dialogue amongst members of the genetic counseling community. The CATCGC sought to understand how genetic counselors chart courses for their careers by conducting a Decision Points exercise during a pre-conference symposium (PCS) at the 2014 NSGC Annual Education Conference. Participants were asked to identify a decision point at which they were most satisfied with their careers and one at which they were least satisfied, and to describe the situation, their personal goals and intentions, any actions they took, and the outcomes. Qualitative analysis in the constructivist tradition was conducted on participants' responses and facilitators' notes from the PCS to explore what personal meanings were made of the decision points; twelve themes related to Career High Points, Low Points, and how genetic counselors made career transitions were identified. Using a constructivist framework, themes are presented in the context of the authors' personal experiences, and the authors share their reflections on these data. We wrote this article to offer you a window into your peers' experiences - the good, the bad, and the ugly - hoping to encourage and challenge you to reflect deeply, no matter where you are on your career journey.
Figuerola, Eva L. M.; Erijman, Leonardo
2014-01-01
The performance of two sets of primers targeting variable regions of the 16S rRNA gene, V1–V3 and V4, was compared in their ability to describe changes of bacterial diversity and temporal turnover in full-scale activated sludge. Duplicate sets of high-throughput amplicon sequencing data of the two 16S rRNA regions shared a collection of core taxa that were observed across a series of twelve monthly samples, although the relative abundance of each taxon was substantially different between regions. A case in point was the changes in the relative abundance of the filamentous bacteria Thiothrix, which caused a large effect on diversity indices, but only in the V1–V3 data set. Yet the relative abundance of Thiothrix in the amplicon sequencing data from both regions correlated with the estimation of its abundance determined using fluorescence in situ hybridization. In nonmetric multidimensional scaling analysis, samples were distributed along the first ordination axis according to the sequenced region rather than according to sample identity. The dynamics of microbial communities indicated that the V1–V3 and V4 regions of the 16S rRNA gene yielded comparable patterns of: 1) the changes occurring within the communities along fixed time intervals, 2) the slow turnover of activated sludge communities and 3) the rate of species replacement calculated from the taxa–time relationships. Temperature was the only operational variable that showed significant correlation with the composition of bacterial communities over time for the sets of data obtained with both pairs of primers. In conclusion, we show that despite the bias introduced by amplicon sequencing, the variable regions V1–V3 and V4 can be confidently used for the quantitative assessment of bacterial community dynamics, and provide a proper qualitative account of general taxa in the community, especially when the data are obtained over a convenient time window rather than at a single time point. PMID:24923665
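The large effect of a single abundant taxon on diversity indices, as reported for Thiothrix in the V1–V3 data, is easy to illustrate with the Shannon index (a toy sketch with hypothetical counts, not the study's data):

```python
import numpy as np

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero taxa."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

# A bloom of one filamentous taxon depresses H' relative to an even community.
even = [25, 25, 25, 25]
bloom = [85, 5, 5, 5]
print(shannon(even), shannon(bloom))
```
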
Joint classification and contour extraction of large 3D point clouds
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2017-08-01
We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several million points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows both defining an expressive feature set and extracting topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems: point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and a small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with more than 10^9 points.
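A common way to realize per-point neighborhoods at multiple scales is to collect radius neighborhoods and derive covariance-eigenvalue descriptors per scale (the basis for linearity/planarity/scatter features). The sketch below uses SciPy's k-d tree on random toy points; it illustrates the idea, not the authors' exact feature set:

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points, radii):
    """Per-point sorted covariance eigenvalues at several neighborhood radii.

    Returns an (n_points, 3 * len(radii)) array; rows stay zero where a
    neighborhood has too few points to estimate a covariance.
    """
    tree = cKDTree(points)
    feats = []
    for r in radii:
        cols = np.zeros((len(points), 3))
        for i, nbrs in enumerate(tree.query_ball_point(points, r)):
            if len(nbrs) >= 3:
                # Descending eigenvalues of the local 3x3 covariance matrix.
                cols[i] = np.sort(np.linalg.eigvalsh(np.cov(points[nbrs].T)))[::-1]
        feats.append(cols)
    return np.hstack(feats)

rng = np.random.default_rng(1)
pts = rng.uniform(size=(500, 3))
f = eigen_features(pts, radii=[0.1, 0.2])
print(f.shape)  # (500, 6)
```
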
Quantifying inhomogeneity in fractal sets
NASA Astrophysics Data System (ADS)
Fraser, Jonathan M.; Todd, Mike
2018-04-01
An inhomogeneous fractal set is one which exhibits different scaling behaviour at different points. The Assouad dimension of a set is a quantity which finds the ‘most difficult location and scale’ at which to cover the set and its difference from box dimension can be thought of as a first-level overall measure of how inhomogeneous the set is. For the next level of analysis, we develop a quantitative theory of inhomogeneity by considering the measure of the set of points around which the set exhibits a given level of inhomogeneity at a certain scale. For a set of examples, a family of -invariant subsets of the 2-torus, we show that this quantity satisfies a large deviations principle. We compare members of this family, demonstrating how the rate function gives us a deeper understanding of their inhomogeneity.
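For reference, the Assouad dimension mentioned above is usually defined as follows (a standard definition, stated here for context, where $N_r(E)$ denotes the smallest number of open $r$-balls needed to cover $E$):

```latex
\dim_A F \;=\; \inf\Big\{ \alpha \ge 0 \;:\; \exists\, C>0 \text{ such that } \forall\, 0<r<R,\;\; \sup_{x\in F} N_r\big(B(x,R)\cap F\big) \le C\Big(\tfrac{R}{r}\Big)^{\alpha} \Big\}
```

The "most difficult location and scale" phrasing in the abstract corresponds to the supremum over $x$ and the worst-case pair of scales $r<R$ in this definition.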
SEMANTIC3D.NET: A New Large-Scale Point Cloud Classification Benchmark
NASA Astrophysics Data System (ADS)
Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J. D.; Schindler, K.; Pollefeys, M.
2017-05-01
This paper presents a new 3D point cloud classification benchmark data set with over four billion manually labelled points, meant as input for data-hungry (deep) learning methods. We also discuss first submissions to the benchmark that use deep convolutional neural networks (CNNs) as a workhorse, which already show remarkable performance improvements over the state-of-the-art. CNNs have become the de facto standard for many tasks in computer vision and machine learning, like semantic segmentation or object detection in images, but have not yet led to a true breakthrough for 3D point cloud labelling tasks due to lack of training data. With the massive data set presented in this paper, we aim at closing this data gap to help unleash the full potential of deep learning methods for 3D labelling tasks. Our semantic3D.net data set consists of dense point clouds acquired with static terrestrial laser scanners. It contains 8 semantic classes and covers a wide range of urban outdoor scenes: churches, streets, railroad tracks, squares, villages, soccer fields and castles. We describe our labelling interface and show that our data set provides denser and more complete point clouds, with a much higher overall number of labelled points, than those already available to the research community. We further provide baseline method descriptions and a comparison between methods submitted to our online system. We hope semantic3D.net will pave the way for deep learning methods in 3D point cloud labelling to learn richer, more general 3D representations, and first submissions after only a few months indicate that this might indeed be the case.
NASA Astrophysics Data System (ADS)
Roelfsema, Chris M.; Kovacs, Eva M.; Phinn, Stuart R.
2015-08-01
This paper describes seagrass species and percentage cover point-based field data sets derived from georeferenced photo transects. Annually or biannually over a ten-year period (2004-2014), data sets were collected using 30-50 transects, 500-800 m in length, distributed across a 142 km2 shallow, clear water seagrass habitat, the Eastern Banks, Moreton Bay, Australia. Each of the eight data sets includes seagrass property information derived from approximately 3000 georeferenced, downward looking photographs captured at 2-4 m intervals along the transects. Photographs were manually interpreted to estimate seagrass species composition and percentage cover (Coral Point Count with Excel extensions; CPCe). Understanding seagrass biology, ecology and dynamics for scientific and management purposes requires point-based data on species composition and cover. This data set and the methods used to derive it are a globally unique example for seagrass ecological applications. It provides the basis for multiple further studies at this site, regional to global comparative studies, and for the design of similar monitoring programs elsewhere.
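Point-intercept photo interpretation of the kind performed in CPCe reduces, in essence, to tallying category labels over sample points and normalizing to percent cover. A minimal sketch, with hypothetical labels:

```python
from collections import Counter

def percent_cover(labels):
    """Percent cover per category from point-intercept photo labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()}

# Hypothetical labels for four sample points on one photograph.
print(percent_cover(["Zostera", "Zostera", "sand", "Halophila"]))
```
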
NASA Astrophysics Data System (ADS)
Oriani, F.; Stisen, S.
2016-12-01
Rainfall amount is one of the most sensitive inputs to distributed hydrological models. Its spatial representation is of primary importance to correctly study the uncertainty of basin recharge and its propagation to the surface and underground circulation. We consider here the 10-km-grid rainfall product provided by the Danish Meteorological Institute as input to the National Water Resources Model of Denmark. Due to a drastic reduction in the rain gauge network in recent years (from approximately 500 stations in the period 1996-2006, to 250 in the period 2007-2014), the grid rainfall product, based on the interpolation of these data, is much less reliable. Consequently, the related hydrological model shows a significantly lower prediction power. To give a better estimation of spatial rainfall at the grid points far from ground measurements, we use the direct sampling technique (DS) [1], belonging to the family of multiple-point geostatistics. DS, already applied to rainfall and spatial variable estimation [2, 3], simulates a grid value by sampling a training data set where a similar data neighborhood occurs. In this way, complex statistical relations are preserved by generating spatial patterns similar to the ones found in the training data set. Using the reliable grid product from the period 1996-2006 as a training data set, we first test the technique by simulating part of this data set; then we apply the technique to the grid product of the period 2007-2014 and subsequently analyze the uncertainty propagation to the hydrological model. We show that DS can improve the reliability of the rainfall product by generating more realistic rainfall patterns, with a significant repercussion on the hydrological model. The reduction of rain gauge networks is a global phenomenon which has huge implications for hydrological model performance and the uncertainty assessment of water resources.
Therefore, the presented methodology can potentially be used in many regions where historical records can act as training data. [1] G. Mariethoz et al. (2010), Water Resour. Res., 10.1029/2008WR007621. [2] F. Oriani et al. (2014), Hydrol. Earth Syst. Sc., 10.5194/hessd-11-3213-2014. [3] G. Mariethoz et al. (2012), Water Resour. Res., 10.1029/2012WR012115.
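The core direct-sampling step, simulating a value by scanning a training data set for a similar data neighborhood, can be illustrated with a one-dimensional toy (entirely hypothetical data and thresholds, not the DMI product or the DS implementation of [1]):

```python
import numpy as np

def direct_sample(train, pattern, tol=0.1, rng=None):
    """Toy direct sampling: scan the training series in random order and
    return the value following the first window whose mean absolute
    difference from `pattern` is below `tol` (best match as fallback)."""
    rng = rng if rng is not None else np.random.default_rng()
    m = len(pattern)
    best, best_d = None, np.inf
    for i in rng.permutation(len(train) - m):
        d = np.mean(np.abs(train[i:i + m] - pattern))
        if d < tol:
            return train[i + m]
        if d < best_d:
            best, best_d = train[i + m], d
    return best

# Training data: a smooth periodic "rainfall-like" signal.
t = np.linspace(0, 20 * np.pi, 2000)
train = np.sin(t)
pattern = train[100:105]   # neighborhood observed around an unknown grid cell
sim = direct_sample(train, pattern, tol=0.01, rng=np.random.default_rng(0))
print(abs(sim - train[105]))  # simulated value close to the true continuation
```

Because any window matching the conditioning pattern closely sits at nearly the same phase of the signal, the sampled continuation reproduces the local pattern statistics, which is the essential DS idea.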
Organic food as a healthy lifestyle: A phenomenological psychological analysis
Von Essen, Elisabeth; Englander, Magnus
2013-01-01
This study explored the phenomenon of the lived experience of choosing a healthy lifestyle based upon an organic diet as seen from the perspective of the young adult. Interviews were collected in Sweden and analyzed using the descriptive phenomenological psychological research method. The results showed the general psychological structure of the phenomenon, comprising four constituents: (1) the lived body as the starting point for life exploration, (2) a narrative self through emotional-relational food memories, (3) a conscious life strategy for well-being and vitality, and (4) a personal set of values in relation to ethical standards. The results provide plausible insights into the intricate relation between psychological meaning and the natural world. PMID:23769652
Sampling functions for geophysics
NASA Technical Reports Server (NTRS)
Giacaglia, G. E. O.; Lunquist, C. A.
1972-01-01
A set of spherical sampling functions is defined such that they are related to spherical-harmonic functions in the same way that the sampling functions of information theory are related to sine and cosine functions. An orderly distribution of (N + 1)^2 sampling points on a sphere is given, for which the (N + 1)^2 spherical sampling functions span the same linear manifold as do the spherical-harmonic functions through degree N. The transformations between the spherical sampling functions and the spherical-harmonic functions are given by recurrence relations. The spherical sampling functions of two arguments are extended to three arguments and to nonspherical reference surfaces. Typical applications of this formalism to geophysical topics are sketched.
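The defining duality, that each sampling function equals one at its own sample point and zero at the others while spanning the same space as the harmonics, can be demonstrated numerically for degree N = 1 (a minimal sketch with arbitrarily chosen sample points, not the paper's orderly distribution or recurrence relations):

```python
import numpy as np

# Real spherical harmonics through degree N = 1 (normalization irrelevant here).
def harmonics(theta, phi):
    """[Y00, Y1,-1, Y10, Y11] at colatitude theta, longitude phi."""
    return np.array([1.0,
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta),
                     np.sin(theta) * np.cos(phi)])

# (N + 1)^2 = 4 sample points on the sphere (colatitude, longitude).
pts = [(0.3, 0.0), (1.2, 1.0), (2.0, 3.0), (2.8, 5.0)]
A = np.array([harmonics(t, p) for t, p in pts])  # harmonic values at samples

# Sampling functions S_j = sum_k (A^-1)[k, j] * Y_k are harmonic combinations
# with S_j(point_i) = delta_ij, i.e. the interpolatory "sinc-like" property.
Ainv = np.linalg.inv(A)
S_at_points = A @ Ainv
print(np.allclose(S_at_points, np.eye(4)))  # True
```
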
Inferring phylogenetic trees from the knowledge of rare evolutionary events.
Hellmuth, Marc; Hernandez-Rosales, Maribel; Long, Yangjing; Stadler, Peter F
2018-06-01
Rare events have played an increasing role in molecular phylogenetics as potentially homoplasy-poor characters. In this contribution we analyze the phylogenetic information content from a combinatorial point of view by considering the binary relation on the set of taxa defined by the existence of a single event separating two taxa. We show that the graph-representation of this relation must be a tree. Moreover, we characterize completely the relationship between the tree of such relations and the underlying phylogenetic tree. With directed operations such as tandem-duplication-random-loss events in mind, we demonstrate how non-symmetric information constrains the position of the root in the partially reconstructed phylogeny.
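The tree property asserted above is checkable with the standard criterion that a graph on n vertices is a tree iff it is connected and has n - 1 edges; a union-find sketch (hypothetical taxa and relation, for illustration only):

```python
def is_tree(n, edges):
    """A graph on n vertices is a tree iff it has n - 1 edges and no cycle."""
    if len(edges) != n - 1:
        return False
    parent = list(range(n))

    def find(x):  # union-find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False   # cycle found
        parent[ru] = rv
    return True            # n - 1 edges and acyclic => connected tree

# A hypothetical single-event relation on taxa {0..4}: qualifies as a tree.
print(is_tree(5, [(0, 1), (1, 2), (1, 3), (3, 4)]))  # True
```
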
Set membership experimental design for biological systems.
Marvel, Skylar W; Williams, Cranos M
2012-03-21
Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. The practicability of our approach is illustrated with a case study. 
This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models.
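The bounded-error flavor of this framework, propagating interval enclosures of states and parameters to candidate measurement times rather than probability densities, can be illustrated with a one-state toy system (hypothetical model and bounds, not the authors' case study or their interval-analysis machinery):

```python
def step_interval(lo, hi, k_lo, k_hi, dt):
    """One Euler step of x' = -k*x with interval-valued state and parameter.

    For x >= 0 and 0 < k*dt < 1, the map x -> x - k*x*dt is increasing in x
    and decreasing in k, so the interval bounds are attained at box corners
    (a minimal bounded-error propagation sketch)."""
    return lo * (1 - k_hi * dt), hi * (1 - k_lo * dt)

# Propagate the state enclosure to a candidate measurement time t = 1.
lo, hi = 0.9, 1.1            # uncertain initial condition
k_lo, k_hi = 0.4, 0.6        # uncertain decay-rate parameter
for _ in range(100):         # 100 steps of dt = 0.01
    lo, hi = step_interval(lo, hi, k_lo, k_hi, 0.01)
print(lo, hi)  # predicted measurement range at t = 1
```

Comparing the widths of such predicted ranges across candidate time points is the essence of ranking which additional measurement would most reduce model uncertainty.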
Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model
NASA Astrophysics Data System (ADS)
Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef
2016-10-01
We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. This geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many and which training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second method relies on a rate-of-change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.
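Distance-based selection of representative snapshots can be sketched as clustering flattened overhead images and keeping the snapshot nearest each cluster center (a toy stand-in with synthetic two-mode data; the paper's actual distances, clustering scheme, and demon-algorithm alternative are not reproduced here):

```python
import numpy as np

def select_training_images(snapshots, k, iters=20, seed=0):
    """Pick k representative snapshots: farthest-point seeding, k-means on
    flattened images, then the snapshot nearest each centroid (medoid)."""
    rng = np.random.default_rng(seed)
    X = snapshots.reshape(len(snapshots), -1).astype(float)
    idx = [int(rng.integers(len(X)))]
    for _ in range(k - 1):          # greedy farthest-point initialization
        d = ((X[:, None] - X[idx][None]) ** 2).sum(-1).min(1)
        idx.append(int(d.argmax()))
    centers = X[idx].copy()
    for _ in range(iters):          # standard k-means updates
        d = ((X[:, None] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    d = ((X[:, None] - centers[None]) ** 2).sum(-1)
    return [int(d[:, j].argmin()) for j in range(k)]

# Toy "overhead snapshots": two distinct pattern modes plus noise.
rng = np.random.default_rng(1)
mode_a = np.zeros((8, 8)); mode_a[:, :4] = 1.0   # "channelized" half
mode_b = np.ones((8, 8)) * 0.5                   # "depositional" sheet
snaps = np.stack([m + rng.normal(scale=0.05, size=(8, 8))
                  for m in [mode_a] * 10 + [mode_b] * 10])
print(sorted(select_training_images(snaps, k=2)))  # one snapshot per mode
```
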
Barasz, Kate; John, Leslie K; Keenan, Elizabeth A; Norton, Michael I
2017-10-01
Pseudo-set framing, arbitrarily grouping items or tasks together as part of an apparent "set", motivates people to reach perceived completion points. Pseudo-set framing changes gambling choices (Study 1), effort (Studies 2 and 3), giving behavior (Field Data and Study 4), and purchase decisions (Study 5). These effects persist in the absence of any reward, when a cost must be incurred, and after participants are explicitly informed of the arbitrariness of the set. Drawing on Gestalt psychology, we develop a conceptual account that predicts what will, and will not, act as a pseudo-set, and defines the psychological process through which these pseudo-sets affect behavior: over and above typical reference points, pseudo-set framing alters perceptions of (in)completeness, making intermediate progress seem less complete. In turn, these feelings of incompleteness motivate people to persist until the pseudo-set has been fulfilled. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Schlossberg, David J. [University of Wisconsin-Madison] (ORCID:0000000287139448); Bodner, Grant M. [University of Wisconsin-Madison] (ORCID:0000000324979172); Reusch, Joshua A. [University of Wisconsin-Madison] (ORCID:0000000284249422); Bongard, Michael W. [University of Wisconsin-Madison] (ORCID:0000000231609746); Fonck, Raymond J. [University of Wisconsin-Madison] (ORCID:0000000294386762); Rodriguez Sanchez, Cuauhtemoc [University of Wisconsin-Madison] (ORCID:0000000334712586)
2016-09-16
This public data set contains openly documented, machine-readable digital research data corresponding to figures published in D.J. Schlossberg et al., 'A Novel, Cost-Effective, Multi-Point Thomson Scattering System on the Pegasus Toroidal Experiment,' Rev. Sci. Instrum. 87, 11E403 (2016).
Two-stage fan. 4: Performance data for stator setting angle optimization
NASA Technical Reports Server (NTRS)
Burger, G. D.; Keenan, M. J.
1975-01-01
Stator setting angle optimization tests were conducted on a two-stage fan to improve efficiency at overspeed, stall margin at design speed, and both efficiency and stall margin at part speed. The fan has a design pressure ratio of 2.8, a flow rate of 184.2 lb/sec (83.55 kg/sec), and a 1st-stage rotor tip speed of 1450 ft/sec (441.96 m/sec). Performance was obtained at 70, 100, and 105 percent of design speed with different combinations of 1st-stage and 2nd-stage stator settings. One combination of settings, other than design, was common to all three speeds. At design speed, a 2.0 percentage point increase in stall margin was obtained at the expense of a 1.3 percentage point efficiency decrease. At 105 percent speed, efficiency was improved by 1.8 percentage points but stall margin decreased 4.7 percentage points. At 70 percent speed, no change in stall margin or operating line efficiency was obtained with stator resets, although considerable speed-flow regulation occurred.
Arismendi-Morillo, G; Hernández, I; Mengual, E; Abreu, N; Molero, N; Fuenmayor, A; Romero, G; Lizarzábal, M
2013-01-01
Severity of chronic gastritis associated with Helicobacter pylori infection (CGAHpI) could play a role in evaluating the potential risk of developing gastric cancer. Our aim was to estimate the risk for gastric cancer in a clinical setting, according to histopathologic criteria, by applying the gastric cancer risk index (GCRI). A histopathologic study of gastric biopsies (corpus-antrum) from consecutive adult patients that underwent esophagogastroduodenoscopy was carried out, and the GCRI was applied in patients presenting with CGAHpI. One hundred eleven patients (77% female) with a mean age of 38.6±13.1 years were included. Active Helicobacter pylori infection (aHpi) was diagnosed in 77 cases (69.40%). In 45% of the cases with aHpi, pangastritis (23%) or corpus-predominant gastritis (22%) was diagnosed. Nine cases were diagnosed with intestinal metaplasia (8%), 7 of which (77.70%) were in the aHpi group. Twenty-one percent of the patients with aHpi had a GCRI of 2 (18.10%) or 3 (2.50%) points (high risk index), while 79.10% accumulated a GCRI of 0 or 1 points (low risk index). Of the patients with no aHpi, none had 3 points (p=0.001). Of the 18 patients that accumulated 2 or 3 points, 6 (33.30%) presented with intestinal metaplasia (all with pangastritis or corpus-predominant gastritis), of which 4 cases (66.60%) had aHpi. The estimated gastric cancer risk in patients with CGAHpI in the clinical setting studied was relatively low, and 5% of the patients had a histopathologic phenotype associated with an elevated risk of developing gastric cancer. Copyright © 2012 Asociación Mexicana de Gastroenterología. Published by Masson Doyma México S.A. All rights reserved.
Knopman, Debra S.; Voss, Clifford I.
1989-01-01
Sampling design for site characterization studies of solute transport in porous media is formulated as a multiobjective problem. Optimal design of a sampling network is a sequential process in which the next phase of sampling is designed on the basis of all available physical knowledge of the system. Three objectives are considered: model discrimination, parameter estimation, and cost minimization. For the first two objectives, physically based measures of the value of information obtained from a set of observations are specified. In model discrimination, value of information of an observation point is measured in terms of the difference in solute concentration predicted by hypothesized models of transport. Points of greatest difference in predictions can contribute the most information to the discriminatory power of a sampling design. Sensitivity of solute concentration to a change in a parameter contributes information on the relative variance of a parameter estimate. Inclusion of points in a sampling design with high sensitivities to parameters tends to reduce variance in parameter estimates. Cost minimization accounts for both the capital cost of well installation and the operating costs of collection and analysis of field samples. Sensitivities, discrimination information, and well installation and sampling costs are used to form coefficients in the multiobjective problem in which the decision variables are binary (zero/one), each corresponding to the selection of an observation point in time and space. The solution to the multiobjective problem is a noninferior set of designs. To gain insight into effective design strategies, a one-dimensional solute transport problem is hypothesized. Then, an approximation of the noninferior set is found by enumerating 120 designs and evaluating objective functions for each of the designs. Trade-offs between pairs of objectives are demonstrated among the models. 
The value of an objective function for a given design is shown to correspond to the ability of a design to actually meet an objective.
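Finding the noninferior set among enumerated designs reduces to a Pareto-dominance filter over the objective values. A minimal sketch with hypothetical objective vectors (not the 120 designs of the study):

```python
import numpy as np

def noninferior(costs):
    """Indices of the noninferior (Pareto-optimal) designs.

    `costs` is (n_designs, n_objectives) with all objectives to be minimized.
    A design is inferior (dominated) if another design is at least as good in
    every objective and strictly better in at least one."""
    keep = []
    for i, c in enumerate(costs):
        dominated = any(np.all(d <= c) and np.any(d < c)
                        for j, d in enumerate(costs) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Toy enumeration: (negated discrimination info, parameter variance, cost).
designs = np.array([
    [-3.0, 0.5, 10.0],   # strong discrimination, cheap
    [-2.0, 0.2, 12.0],   # best parameter variance
    [-3.0, 0.5, 15.0],   # dominated by design 0 (same, but costlier)
    [-1.0, 0.9,  8.0],   # cheapest
])
print(noninferior(designs))  # [0, 1, 3]
```
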
An integrated set of UNIX based system tools at control room level
NASA Astrophysics Data System (ADS)
Potepan, F.; Scafuri, C.; Bortolotto, C.; Surace, G.
1994-12-01
The design effort of providing a simple point-and-click approach to equipment access has led to the definition and realization of a modular set of software tools for use at the ELETTRA control-room level. Point-to-point equipment access requires neither programming nor specific knowledge of the control system architecture. The development and integration of the communication, graphics, editing and global database modules are described in depth, followed by a report on their use in the first commissioning period.
A Voxel-Based Filtering Algorithm for Mobile LiDAR Data
NASA Astrophysics Data System (ADS)
Qin, H.; Guan, G.; Yu, Y.; Zhong, L.
2018-04-01
This paper presents a stepwise voxel-based filtering algorithm for mobile LiDAR data. In the first step, to improve computational efficiency, mobile LiDAR points are partitioned in the xy-plane into a set of two-dimensional (2-D) blocks of a given block size, within each of which all laser points are further organized into an octree partition structure with a set of three-dimensional (3-D) voxels. Then, a voxel-based upward-growing process is performed to roughly separate terrain from non-terrain points using global and local terrain thresholds. In the second step, the extracted terrain points are refined by computing voxel curvatures. The filtering algorithm is comprehensively discussed through analyses of parameter sensitivity and overall performance. An experimental study performed on multiple point cloud samples, collected by different commercial mobile LiDAR systems, showed that the proposed algorithm provides a promising solution to terrain point extraction from mobile point clouds.
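The block-then-voxel partitioning of the first step can be sketched as follows; a flat dictionary stands in for the paper's octree structure, and the block and voxel sizes are arbitrary:

```python
from collections import defaultdict

def partition(points, block_size, voxel_size):
    """Group 3-D points first into 2-D xy blocks, then into 3-D voxels inside
    each block. Indices come from integer division of coordinates; a nested
    dict is a flat stand-in for the octree used in the paper."""
    blocks = defaultdict(lambda: defaultdict(list))
    for x, y, z in points:
        b = (int(x // block_size), int(y // block_size))   # 2-D block index
        v = (int(x // voxel_size), int(y // voxel_size),
             int(z // voxel_size))                          # 3-D voxel index
        blocks[b][v].append((x, y, z))
    return blocks

# Two nearby points fall into the same block and voxel; the third is isolated.
pts = [(0.2, 0.3, 0.1), (0.25, 0.31, 0.12), (5.0, 5.0, 2.0)]
grid = partition(pts, block_size=1.0, voxel_size=0.5)
```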
FPFH-based graph matching for 3D point cloud registration
NASA Astrophysics Data System (ADS)
Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua
2018-04-01
Correspondence detection is a vital step in point cloud registration, as it helps obtain a reliable initial alignment. In this paper, we put forward a point feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms (FPFH) are first used to determine the initial possible correspondences. Next, a new objective function is proposed to make graph matching more suitable for partially overlapping point clouds. The objective function is optimized by a simulated annealing algorithm to obtain the final group of correct correspondences. Finally, we present a novel set partitioning method which can transform the NP-hard optimization problem into an O(n³)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method obtains better results in terms of both accuracy and time cost compared with other point cloud registration methods.
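The initial-correspondence step can be illustrated with a toy feature matcher. The two-dimensional descriptors and the ratio test below are simplifications standing in for 33-dimensional FPFH descriptors and the paper's actual selection rule:

```python
def nearest_feature_matches(feats_src, feats_dst, ratio=0.8):
    """Initial correspondence candidates by nearest neighbour in feature
    space, with a ratio test that discards ambiguous matches (a toy stand-in
    for FPFH-based candidate generation)."""
    matches = []
    for i, f in enumerate(feats_src):
        dists = sorted((sum((a - b) ** 2 for a, b in zip(f, g)), j)
                       for j, g in enumerate(feats_dst))
        (d1, j1), (d2, _) = dists[0], dists[1]
        if d1 <= ratio * ratio * d2:   # squared-distance ratio test
            matches.append((i, j1))
    return matches

# Toy descriptors: src feature 0 matches dst feature 1 cleanly; src feature 1
# is nearly equidistant to two dst features and is rejected as ambiguous.
src = [(0.0, 1.0), (0.5, 0.5)]
dst = [(0.9, 0.1), (0.05, 0.95), (0.45, 0.55), (0.55, 0.45)]
```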
NASA Astrophysics Data System (ADS)
Tan, Xiangli; Yang, Jungang; Deng, Xinpu
2018-04-01
In the process of geometric correction of remote sensing images, a large number of redundant control points may occasionally result in low correction accuracy. To solve this problem, a control point filtering algorithm based on RANdom SAmple Consensus (RANSAC) is proposed. The basic idea of the RANSAC algorithm is to use the smallest possible data set to estimate the model parameters and then enlarge this set with consistent data points. In this paper, unlike traditional methods of geometric correction using Ground Control Points (GCPs), simulation experiments are carried out to correct remote sensing images using visible stars as control points. In addition, the accuracy of geometric correction without Star Control Point (SCP) optimization is also shown. The experimental results show that the SCP filtering method based on the RANSAC algorithm greatly improves the accuracy of remote sensing image correction.
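The RANSAC idea described above (estimate from a minimal sample, then enlarge the consensus set) can be sketched for the simplest possible model, a 2-D translation between control-point pairs; the actual correction model and thresholds would differ:

```python
import random

def ransac_translation(pairs, n_iter=200, tol=0.1, seed=0):
    """RANSAC on (src, dst) control-point pairs: fit a 2-D translation from a
    minimal sample (a single pair), then enlarge the consensus set with every
    pair consistent with it. A toy stand-in for the full correction model."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iter):
        (sx, sy), (dx, dy) = rng.choice(pairs)   # minimal sample
        tx, ty = dx - sx, dy - sy                # candidate translation
        inliers = [p for p in pairs
                   if abs(p[1][0] - p[0][0] - tx) < tol
                   and abs(p[1][1] - p[0][1] - ty) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Three pairs share the translation (1, 2); the last pair is an outlier.
pairs = [((0, 0), (1, 2)), ((1, 1), (2, 3)), ((2, 0), (3, 2)),
         ((5, 5), (9, 9))]
good = ransac_translation(pairs)
```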
Diffuse Scattering Investigations of Orientational Pair Potentials in C_60
NASA Astrophysics Data System (ADS)
Wochner, Peter
1996-03-01
Premonitory orientational fluctuations above the first-order phase transition of C_60 at 260 K have been studied by diffuse X-ray scattering experiments. These experiments probe the orientational pair correlations between C_60 molecules as a function of their separation and therefore the orientational pair potential. In addition to the diffuse scattering due to the orientational disorder of single molecules, we have observed zone-boundary diffuse scattering at the X-points, related to the Pa3̄ low-temperature structure, up to 300 K. An additional set of diffuse peaks, which even at room temperature are comparable in intensity to the former ones, has been found at (0.5,0.5,0.5) positions (L-point). Similar results have recently been reported by P. Launois et al. [P. Launois, S. Ravy, R. Moret, PRB 52, 5414 (1995)] and L. Pintschovius et al. [L. Pintschovius, S.L. Chaplot, G. Roth, G. Heger, PRL 75, 2843 (1995)]. The temperature dependence of the integrated intensity of both sets of diffuse peaks shows only a weak increase on approaching T_c, indicative of a strongly first-order transition. Additional intensity with a very weak temperature dependence but similar correlation length has also been found at (0.5,0.5,0) and (0.5,0,0) positions. The diffuse intensity at the L, Σ and Δ points probably has its origin in competing phases which are not stabilized at low temperatures. Recent DSC measurements show closely lying transitions at 260 K with a separation of ≈ 0.2-0.3 K which might be related to these competing phases (J. Fischer, private communication). The data will be compared with model calculations using orientational pair potentials which have been used in the literature to describe the orientational phase transition in C_60.
Feed-Back Moisture Sensor Control for the Delivery of Water to Plants Cultivated in Space
NASA Technical Reports Server (NTRS)
Levine, Howard G.; Prenger, Jessica J.; Rouzan, Donna T.; Spinale, April C.; Murdoch, Trevor; Burtness, Kevin A.
2005-01-01
The development of a spaceflight-rated Porous Tube Insert Module (PTIM) nutrient delivery tray has facilitated a series of studies evaluating various aspects of water and nutrient delivery to plants as they would be cultivated in space. We report here on our first experiment using the PTIM with a software-driven feedback moisture sensor control strategy for maintaining root zone wetness level set-points. One-day-old wheat seedlings (Triticum aestivum cv. Apogee; N=15) were inserted into each of three Substrate Compartments (SCs) pre-packed with 0.25-1 mm Profile™ substrate and maintained at root zone relative water content levels of 70, 80 and 90%. The SCs contained a bottom-situated porous tube around which a capillary mat was wrapped. Three porous tubes were planted using similar protocols (but without the substrate) and also maintained at these three moisture level set-points. Half-strength modified Hoagland's nutrient solution was used to supply water and nutrients. Results on hardware performance, water usage rates and wheat developmental differences between the experimental treatments are presented.
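A feedback set-point controller of the general kind described might be sketched as a hysteresis (deadband) rule; the band width and the interface below are hypothetical, not the PTIM flight software:

```python
def water_pulse_needed(moisture, set_point, band=2.0, watering=False):
    """Hysteresis (deadband) set-point control: start watering when the
    sensed moisture drops below set_point - band, stop once it recovers to
    the set-point, and hold the current state inside the deadband. A generic
    sketch of feedback moisture control, not the PTIM implementation."""
    if moisture < set_point - band:
        return True
    if moisture >= set_point:
        return False
    return watering  # inside the deadband: keep doing what we were doing
```

The deadband prevents the valve from chattering on and off around the set-point as sensor readings fluctuate.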
Representing perturbed dynamics in biological network models
NASA Astrophysics Data System (ADS)
Stoll, Gautier; Rougemont, Jacques; Naef, Felix
2007-07-01
We study the dynamics of gene activities in relatively small size biological networks (up to a few tens of nodes), e.g., the activities of cell-cycle proteins during the mitotic cell-cycle progression. Using the framework of deterministic discrete dynamical models, we characterize the dynamical modifications in response to structural perturbations in the network connectivities. In particular, we focus on how perturbations affect the set of fixed points and sizes of the basins of attraction. Our approach uses two analytical measures: the basin entropy H and the perturbation size Δ , a quantity that reflects the distance between the set of fixed points of the perturbed network and that of the unperturbed network. Applying our approach to the yeast-cell-cycle network introduced by Li [Proc. Natl. Acad. Sci. U.S.A. 101, 4781 (2004)] provides a low-dimensional and informative fingerprint of network behavior under large classes of perturbations. We identify interactions that are crucial for proper network function, and also pinpoint functionally redundant network connections. Selected perturbations exemplify the breadth of dynamical responses in this cell-cycle model.
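The fixed-point and basin analysis can be sketched for a toy deterministic network; the two-node update rule below is illustrative, far smaller than the yeast-cell-cycle network studied:

```python
import math
from itertools import product

def fixed_points_and_basins(update, n):
    """Enumerate all 2**n states of a deterministic discrete network, find the
    fixed points, and measure basin sizes and the basin entropy H."""
    states = [tuple(s) for s in product((0, 1), repeat=n)]

    def attractor(s):
        seen = []
        while s not in seen:      # iterate until the trajectory revisits
            seen.append(s)
            s = update(s)
        return s if update(s) == s else None  # only count fixed points

    basins = {}
    for s in states:
        a = attractor(s)
        if a is not None:
            basins[a] = basins.get(a, 0) + 1
    total = sum(basins.values())
    H = -sum((b / total) * math.log2(b / total) for b in basins.values())
    return basins, H

# Toy 2-node network: each node becomes the AND of both nodes; two fixed
# points, (0,0) with basin size 3 and (1,1) with basin size 1.
update = lambda s: (s[0] & s[1], s[0] & s[1])
basins, H = fixed_points_and_basins(update, 2)
```

Comparing (basins, H) before and after deleting or flipping an interaction is the kind of perturbation fingerprint the paper builds, with Δ measuring how far the fixed-point set moved.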
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Morgan, Rhonda M.; Green, Joseph J.; Ohara, Catherine M.; Redding, David C.
2007-01-01
We have developed a new, adaptive cross-correlation (ACC) algorithm to estimate with high accuracy a shift as large as several pixels between two extended-scene images captured by a Shack-Hartmann wavefront sensor (SH-WFS). It determines the positions of all of the extended-scene image cells relative to a reference cell using an FFT-based iterative image shifting algorithm. It works with both point-source spot images and extended-scene images. We have also set up a testbed for an extended-scene SH-WFS, and tested the ACC algorithm with measured data of both point-source and extended-scene images. In this paper we describe our algorithm and present our experimental results.
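The core of correlation-based shift estimation can be illustrated in one dimension. The ACC algorithm itself is FFT-based, iterative, and subpixel-accurate; this brute-force integer-shift sketch only shows the underlying correlation-peak idea:

```python
def correlation_shift(ref, img, max_shift):
    """Estimate the integer shift between two 1-D signals from the peak of
    their cross-correlation. A positive result means `img` is shifted right
    relative to `ref`. (FFT-based methods compute the same correlation in
    the frequency domain and refine to subpixel accuracy.)"""
    best_shift, best_score = 0, float("-inf")
    n = len(ref)
    for s in range(-max_shift, max_shift + 1):
        score = sum(ref[i] * img[i + s] for i in range(n)
                    if 0 <= i + s < n)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

ref = [0, 0, 1, 3, 1, 0, 0, 0]
img = [0, 0, 0, 0, 1, 3, 1, 0]   # ref shifted right by 2 samples
```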
Latash, M; Gottleib, G
1990-01-01
Problems of single-joint movement variability are analysed in the framework of the equilibrium-point hypothesis (the lambda-model). Control of the movements is described with three parameters related to movement amplitude, speed, and time. Three strategies emerge from this description. Only one of them is likely to lead to a Fitts-type speed-accuracy trade-off. Experiments were performed to test one of the predictions of the model. Subjects performed identical sets of single-joint fast movements with open or closed eyes and somewhat different instructions. Movements performed with closed eyes were characterized by higher peak speeds and unchanged variability, in seeming violation of Fitts' law and in good correspondence with the model.
Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom
NASA Astrophysics Data System (ADS)
Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J.; Spadinger, Ingrid
2016-01-01
The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as the bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM the slowest.
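Two of the evaluation metrics, an RMS of nearest-neighbour distances and the symmetric Hausdorff distance, can be sketched directly; the exact definitions used in the study may differ in detail:

```python
import math

def rms_and_hausdorff(A, B):
    """Compare two point-sets after registration: RMS of nearest-neighbour
    distances (both directions) and the symmetric Hausdorff distance, the
    worst-case nearest-neighbour distance."""
    def nn(p, S):
        return min(math.dist(p, q) for q in S)
    nn_a = [nn(p, B) for p in A]   # each A point to its closest B point
    nn_b = [nn(q, A) for q in B]   # and vice versa
    rms = math.sqrt(sum(x * x for x in nn_a + nn_b)
                    / (len(nn_a) + len(nn_b)))
    hd = max(max(nn_a), max(nn_b))
    return rms, hd

# Toy contours: identical except one point displaced by 0.1.
A = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
B = [(0.0, 0.1), (1.0, 0.0), (0.0, 1.0)]
rms, hd = rms_and_hausdorff(A, B)
```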
A technique for treating local breast cancer using a single set-up point and asymmetric collimation.
Rosenow, U F; Valentine, E S; Davis, L W
1990-07-01
Using both pairs of asymmetric jaws of a linear accelerator, local-regional breast cancer may be treated from a single set-up point. This point is placed at the abutment of the supraclavicular fields with the medial and lateral tangential fields. Positioning the jaws to create a half-beam superiorly permits treatment of the supraclavicular field. Positioning both jaws asymmetrically at midline to define a single beam in the inferoanterior quadrant permits treatment of the breast from medial and lateral tangents. The highest possible matching accuracy between the supraclavicular and tangential fields is inherently provided by this technique. For treatment of all fields at 100 cm source-to-axis distance (SAD), the lateral placement and depth of the set-up point may be determined by simulation and simple trigonometry. We elaborate on the clinical procedure. For the technologists, treatment of all fields from a single set-up point is simple and efficient. Since the tissue at the superior border of the tangential fields is generally firmer than in mid-breast, greater accuracy in day-to-day set-up is permitted. This technique eliminates the need for table angles even when only tangential fields are planned. Because of half-beam collimation, the limit to the tangential field length is 20 cm. Means are suggested to overcome this limitation in the few cases where it occurs. Another modification is suggested for linear accelerators with only one independent pair of jaws.
Preconditioning 2D Integer Data for Fast Convex Hull Computations
2016-01-01
In order to accelerate computing the convex hull on a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, which also contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in time within O(n); second, no explicit sorting of data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be directly pipelined into an O(n) time convex hull algorithm. This paper empirically evaluates and quantifies the speed up gained by preconditioning a set of points by a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found from experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n is in the dataset, the greater the speedup factor achieved. PMID:26938221
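A preconditioning step of this general kind can be sketched as follows. The column-extremes reduction below is a simpler stand-in for the paper's algorithm, but it shares the key property that the reduced set still contains every hull vertex, so the final hull is unchanged:

```python
def precondition(points):
    """Keep only the lowest and highest point in each integer x-column.
    Every hull vertex is extreme in y for its column, so the reduced set
    still contains the same convex hull (a simpler reduction than the
    paper's, for illustration)."""
    cols = {}
    for x, y in points:
        lo, hi = cols.get(x, (y, y))
        cols[x] = (min(lo, y), max(hi, y))
    keep = set()
    for x, (lo, hi) in cols.items():
        keep.add((x, lo))
        keep.add((x, hi))
    return sorted(keep)

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Integer points in a 5x4 box; (0,1) and (0,2) are dropped by the reduction.
pts = [(0, 0), (0, 1), (0, 2), (0, 3), (4, 0), (4, 3),
       (2, 1), (2, 2), (1, 1), (3, 2)]
```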
Design of Video Games for Children's Diet and Physical Activity Behavior Change.
Baranowski, Tom; Thompson, Debbe; Buday, Richard; Lu, Amy Shirong; Baranowski, Janice
2010-01-01
Serious video games (VG) offer new opportunities for promoting health-related diet and physical activity change among children. Games can be designed to use storylines, characters, and behavior change procedures, including modeling (e.g., engaging characters make changes themselves, and face and overcome challenges related to fruit and vegetable (FV) and physical activity (PA) goal attainment and/or consumption), skill development (e.g., asking behaviors; virtual recipe preparation), self-regulatory behaviors (problem solving, goal setting, goal review, decision making), rewards (e.g., points and positive statements generated by the program), immediate feedback (e.g., through characters and/or statements that appear on the computer screen at critical decision points), and personalization (e.g., tailored choices offered at critical junctures, based on responses to baseline questions related to preferences, outcome expectancies, etc.). We are in the earliest stages of learning how to optimally design effective behavior change procedures for use in VG, and yet they have been demonstrated to change behavior. As we learn, VG offer more and better opportunities for obesity prevention that can adjust to individual needs and preferences.
Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras
Wu, Dewen; Chen, Ruizhi; Chen, Liang
2017-01-01
Artificial Intelligence (AI) technologies and their related applications are now developing at a rapid pace. Indoor positioning will be one of the core technologies that enable AI applications because people spend 80% of their time indoors. Humans can locate themselves relative to a visually well-defined object, e.g., a door, based on their visual observations. Can a smartphone camera do a similar job when it points to an object? In this paper, a visual positioning solution was developed based on a single image captured by a smartphone camera pointing to a well-defined object. The smartphone camera simulates the process of human eyes for the purpose of relative self-localization against a well-defined object. Extensive experiments were conducted with five types of smartphones in three different indoor settings, including a meeting room, a library, and a reading room. Experimental results showed that the average positioning accuracy of the solution based on five smartphone cameras is 30.6 cm, while that for the human-observed solution with 300 samples from 10 different people is 73.1 cm. PMID:29144420
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sommer, C. M., E-mail: christof.sommer@med.uni-heidelberg.de; Arnegger, F.; Koch, V.
2012-06-15
Purpose: This study was designed to analyze the effect of two different ablation modes ('temperature control' and 'power control') of a microwave system on procedural outcome in porcine kidneys in vivo. Methods: A commercially available microwave system (Avecure Microwave Generator; MedWaves, San Diego, CA) was used. The system offers the possibility to ablate with two different ablation modes: temperature control and power control. Thirty-two microwave ablations were performed in 16 kidneys of 8 pigs. In each animal, one kidney was ablated twice by applying temperature control (ablation duration set point at 60 s, ablation temperature set point at 96 °C, automatic power set point; group I). The other kidney was ablated twice by applying power control (ablation duration set point at 60 s, ablation temperature set point at 96 °C, ablation power set point at 24 W; group II). Procedural outcome was analyzed: (1) technical success (e.g., system failures, duration of the ablation cycle), and (2) ablation geometry (e.g., long axis diameter, short axis diameter, and circularity). Results: System failures occurred in 0% in group I and 13% in group II. Duration of the ablation cycle was 60 ± 0 s in group I and 102 ± 21 s in group II. Long axis diameter was 20.3 ± 4.6 mm in group I and 19.8 ± 3.5 mm in group II (not significant (NS)). Short axis diameter was 10.3 ± 2 mm in group I and 10.5 ± 2.4 mm in group II (NS). Circularity was 0.5 ± 0.1 in group I and 0.5 ± 0.1 in group II (NS). Conclusions: Microwave ablations performed with temperature control showed fewer system failures and were finished faster. Both ablation modes demonstrated no significant differences with respect to ablation geometry.
Precision Adjustable Liquid Regulator (ALR)
NASA Astrophysics Data System (ADS)
Meinhold, R.; Parker, M.
2004-10-01
A passive mechanical regulator has been developed for the control of fuel or oxidizer flow to a 450 N class bipropellant engine for use on commercial and interplanetary spacecraft. There are several potential benefits to the propulsion system, depending on mission requirements and spacecraft design. This system design enables more precise control of main engine mixture ratio and inlet pressure, and simplifies the pressurization system by transferring the function of main engine flow rate control from the pressurization/propellant tank assemblies to a single component, the ALR. This design can also reduce the thermal control requirements on the propellant tanks, avoid costly qualification testing of bipropellant engines for missions with more stringent requirements, and reduce the overall propulsion system mass and power usage. In order to realize these benefits, the ALR must meet stringent design requirements. The main advantage of this regulator over other units available on the market is that it can regulate about its nominal set point to within ±0.85%, and change its regulation set point in flight by ±4% about that nominal point. The set point change is handled actively via a stepper-motor-driven actuator, which converts rotary into linear motion to adjust the spring preload acting on the regulator. Once adjusted to a particular set point, the actuator remains in its final position unpowered, and the regulator passively maintains outlet pressure. The very precise outlet regulation pressure is possible due to new technology developed by Moog, Inc., which reduces typical regulator mechanical hysteresis to near zero. The ALR requirements specified an outlet pressure set point range from 225 to 255 psi, and equivalent water flow rates required were in the 0.17 lb/sec range. The regulation output pressure is maintained at ±2 psi about the set point over a ΔP (differential pressure) of 20 to over 100 psid. Maximum upstream system pressure was specified at 320 psi.
The regulator is fault tolerant in that it was purposely designed with no shutoff capability, such that the minimum flow position of the poppet still allows the subsystem to provide adequate flow to the main engine for basic operation.
NASA Astrophysics Data System (ADS)
Schwind, Michael
Structure from Motion (SfM) is a photogrammetric technique whereby three-dimensional (3D) structures are estimated from overlapping two-dimensional (2D) image sequences. It is studied in the field of computer vision and utilized in fields such as archeology, engineering, and the geosciences. Currently, many SfM software packages exist that allow for the generation of 3D point clouds. Little work has been done to show how topographic data generated from these software packages differ over varying terrain types and why they might produce different results. This work aims to compare and characterize the differences between point clouds generated by three different SfM software packages: two well-known proprietary solutions (Pix4D, Agisoft PhotoScan) and one open source solution (OpenDroneMap). Five terrain types were imaged utilizing a DJI Phantom 3 Professional small unmanned aircraft system (sUAS). These terrain types include a marsh environment, a gently sloped sandy beach and jetties, a forested peninsula, a house, and a flat parking lot. Each set of imagery was processed with each software package and the results were directly compared to each other. Before processing the sets of imagery, the software settings were analyzed and chosen in a manner that allowed for the most similar settings across the three software types, in an attempt to minimize point cloud differences caused by dissimilar settings. The characteristics of the resultant point clouds were then compared with each other. Furthermore, a terrestrial light detection and ranging (LiDAR) survey was conducted over the flat parking lot using a Riegl VZ-400 scanner. These data served as ground truth for an accuracy assessment of the sUAS-SfM point clouds. Differences were found between the results, apparent not only in the characteristics of the clouds, but also in their accuracy.
This study allows users of SfM photogrammetry to better understand how different processing software packages compare and the inherent sensitivity of SfM automation in 3D reconstruction. Because this study used mostly default settings within the software, it would be beneficial for further research to investigate the effects that changing parameters have on the fidelity of point cloud datasets generated from different SfM software packages.
NASA Astrophysics Data System (ADS)
Nemoto, Mitsutaka; Nomura, Yukihiro; Hanaoka, Shohei; Masutani, Yoshitaka; Yoshikawa, Takeharu; Hayashi, Naoto; Yoshioka, Naoki; Ohtomo, Kuni
Anatomical point landmarks, as the most primitive form of anatomical knowledge, are useful for medical image understanding. In this study, we propose a detection method for anatomical point landmarks based on appearance models, which include gray-level statistical variations at the landmarks and their surrounding areas. The models are built from the results of Principal Component Analysis (PCA) of sample data sets. In addition, we employed a generative learning method by transforming the ROIs of the sample data. We evaluated our method on 24 data sets of body trunk CT images and obtained an average sensitivity of 95.8 ± 7.3% over 28 landmarks.
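The PCA step at the heart of the appearance models can be sketched with a minimal power-iteration implementation on toy "ROI" vectors; real landmark models would use high-dimensional gray-level patches and retain several components:

```python
def pca_first_component(samples, iters=100):
    """First principal component of mean-centred sample vectors via power
    iteration on the (implicit) covariance matrix. A minimal stand-in for
    the PCA appearance models built from landmark ROIs."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    X = [[s[j] - mean[j] for j in range(d)] for s in samples]
    v = [1.0] * d
    for _ in range(iters):
        # w = (X^T X) v, accumulated row by row without forming X^T X
        w = [0.0] * d
        for row in X:
            proj = sum(row[j] * v[j] for j in range(d))
            for j in range(d):
                w[j] += proj * row[j]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

# Toy "ROI" vectors varying mainly along the first axis, so the first
# principal component should point (almost) along it.
samples = [[0.0, 0.0], [2.0, 0.1], [4.0, -0.1], [6.0, 0.0]]
mean, pc1 = pca_first_component(samples)
```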
Carpenter, Afton S; Sullivan, Joanne H; Deshmukh, Arati; Glisson, Scott R; Gallo, Stephen A
2015-09-08
With the use of teleconferencing for grant peer-review panels increasing, further studies are necessary to determine the efficacy of the teleconference setting compared to the traditional onsite/face-to-face setting. The objective of this analysis was to examine the effects of discussion, namely changes in application scoring premeeting and postdiscussion, in these settings. We also investigated other parameters, including the magnitude of score shifts and application discussion time in face-to-face and teleconference review settings. The investigation involved a retrospective, quantitative analysis of premeeting and postdiscussion scores and discussion times for teleconference and face-to-face review panels. The analysis included 260 and 212 application score data points and 212 and 171 discussion time data points for the face-to-face and teleconference settings, respectively. The effect of discussion was found to be small, on average, in both settings. However, discussion was found to be important for at least 10% of applications, regardless of setting, with these applications moving over a potential funding line in either direction (fundable to unfundable or vice versa). Small differences were uncovered relating to the effect of discussion between settings, including a decrease in the magnitude of the effect in the teleconference panels as compared to face-to-face. Discussion time (despite teleconferences having shorter discussions) was observed to have little influence on the magnitude of the effect of discussion. Additionally, panel discussion was found to often result in a poorer score (as opposed to an improvement) when compared to reviewer premeeting scores. This was true regardless of setting or assigned reviewer type (primary or secondary reviewer). Subtle differences were observed between settings, potentially due to reduced engagement in teleconferences. 
Overall, further research is required on the psychology of decision-making, team performance and persuasion to better elucidate the group dynamics of telephonic and virtual ad-hoc peer-review panels.
1980-02-12
planet across the limb of the Sun at the end of a transit. Elements of an Orbit - See orbital elements. Elevation - The height of a point on the... That component of libration due to variations in the geometric position of the Earth relative to the Moon. Orbital Elements - The quantities which completely describe the size, shape, and orientation of an object's orbit as well as its location in it. The classical set consists of the semi-major
Hidden-service Statistics Reported by Relays
2015-06-01
received, then an adversary that knows the .onion address of a hidden service (and thus can obtain its Introduction Points) could infer how many... hide any single or repeated publication of any given group of at most 8 onion services (e.g. a set of 8 or fewer related onion addresses that are... onion router. In Proceedings of the 13th USENIX Security Symposium, 2004. [5] Cynthia Dwork. Differential privacy. In ICALP, 2006. [6] Cynthia Dwork
Invite yourself to the table: librarian contributions to the electronic medical record.
Brandes, Susan; Wells, Karen; Bandy, Margaret
2013-01-01
Librarians from Exempla Healthcare hospitals initiated contact with the chief medical information officer regarding evidence-based medicine activities related to the development of the system's Electronic Medical Record (EMR). This column reviews the librarians' involvement in specific initiatives that included providing comparative information on point-of-care resources to integrate into the EMR, providing evidence as needed for the order sets being developed, and participating with clinicians on an evidence-based advisory committee.
Interpretations of Quantum Theory in the Light of Modern Cosmology
NASA Astrophysics Data System (ADS)
Castagnino, Mario; Fortin, Sebastian; Laura, Roberto; Sudarsky, Daniel
2017-11-01
The difficult issues related to the interpretation of quantum mechanics and, in particular, the "measurement problem" are revisited using as motivation the process of generation of structure from quantum fluctuations in inflationary cosmology. The unessential mathematical complexity of the particular problem is bypassed, facilitating the discussion of the conceptual issues, by considering, within the paradigm set up by the cosmological problem, another problem where symmetry serves as a focal point: a simplified version of Mott's problem.
Piecewise multivariate modelling of sequential metabolic profiling data.
Rantalainen, Mattias; Cloarec, Olivier; Ebbels, Timothy M D; Lundstedt, Torbjörn; Nicholson, Jeremy K; Holmes, Elaine; Trygg, Johan
2008-02-19
Modelling the time-related behaviour of biological systems is essential for understanding their dynamic responses to perturbations. In metabolic profiling studies, the sampling rate and number of sampling points are often restricted due to experimental and biological constraints. A supervised multivariate modelling approach with the objective to model the time-related variation in the data for short and sparsely sampled time-series is described. A set of piecewise Orthogonal Projections to Latent Structures (OPLS) models are estimated, describing changes between successive time points. The individual OPLS models are linear, but the piecewise combination of several models accommodates modelling and prediction of changes which are non-linear with respect to the time course. We demonstrate the method on both simulated and metabolic profiling data, illustrating how time related changes are successfully modelled and predicted. The proposed method is effective for modelling and prediction of short and multivariate time series data. A key advantage of the method is model transparency, allowing easy interpretation of time-related variation in the data. The method provides a competitive complement to commonly applied multivariate methods such as OPLS and Principal Component Analysis (PCA) for modelling and analysis of short time-series data.
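The piecewise idea, a chain of local linear models whose combination tracks a non-linear time course, can be illustrated with a deliberately simplified sketch. This is not the authors' OPLS code: each "model" below is plain linear interpolation between successive time points of a mock one-dimensional trace, and all names and numbers are invented for illustration.

```python
# Minimal sketch of the piecewise idea (not the authors' OPLS code): one linear
# model per pair of successive time points, chained so that the sequence of
# linear pieces tracks a non-linear time course. The "metabolite" trace and all
# numbers are made up for illustration.
import math

times = [0.0, 1.0, 2.0, 3.0, 4.0]
profile = [math.exp(-t) for t in times]   # sparsely sampled non-linear response

def piecewise_predict(t):
    # locate the segment [t_i, t_{i+1}] containing t; each segment acts as one
    # local linear model in the piecewise chain
    for i in range(len(times) - 1):
        if times[i] <= t <= times[i + 1]:
            w = (t - times[i]) / (times[i + 1] - times[i])
            return (1.0 - w) * profile[i] + w * profile[i + 1]
    raise ValueError("t outside sampled range")

def global_linear_predict(t):
    # single least-squares line through all points, for contrast
    n = len(times)
    mt = sum(times) / n
    mp = sum(profile) / n
    num = sum((x - mt) * (y - mp) for x, y in zip(times, profile))
    den = sum((x - mt) ** 2 for x in times)
    return mp + (num / den) * (t - mt)

t_new = 3.5                                # a time point between two samples
err_piece = abs(piecewise_predict(t_new) - math.exp(-t_new))
err_global = abs(global_linear_predict(t_new) - math.exp(-t_new))
```

The chained local models predict the held-out time point far better than one global linear fit, which is the transparency-plus-flexibility trade-off the abstract describes.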
Scaling Relations of Starburst-driven Galactic Winds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanner, Ryan; Cecil, Gerald; Heitsch, Fabian, E-mail: rytanner@augusta.edu
2017-07-10
Using synthetic absorption lines generated from 3D hydrodynamical simulations, we explore how the velocity of a starburst-driven galactic wind correlates with the star formation rate (SFR) and SFR density. We find strong correlations for neutral and low ionized gas, but no correlation for highly ionized gas. The correlations for neutral and low ionized gas only hold for SFRs below a critical limit set by the mass loading of the starburst, above which point the scaling relations flatten abruptly. Below this point the scaling relations depend on the temperature regime being probed by the absorption line, not on the mass loading. The exact scaling relation depends on whether the maximum or mean velocity of the absorption line is used. We find that the outflow velocity of neutral gas can be up to five times lower than the average velocity of ionized gas, with the velocity difference increasing for higher ionization states. Furthermore, the velocity difference depends on both the SFR and mass loading of the starburst. Thus, absorption lines of neutral or low ionized gas cannot easily be used as a proxy for the outflow velocity of the hot gas.
Bayesian `hyper-parameters' approach to joint estimation: the Hubble constant from CMB measurements
NASA Astrophysics Data System (ADS)
Lahav, O.; Bridle, S. L.; Hobson, M. P.; Lasenby, A. N.; Sodré, L.
2000-07-01
Recently several studies have jointly analysed data from different cosmological probes with the motivation of estimating cosmological parameters. Here we generalize this procedure to allow freedom in the relative weights of various probes. This is done by including in the joint χ² function a set of `hyper-parameters', which are dealt with using Bayesian considerations. The resulting algorithm, which assumes uniform priors on the log of the hyper-parameters, is very simple: instead of minimizing ∑_j χ_j² (where χ_j² is the chi-squared per data set j) we propose to minimize ∑_j N_j ln(χ_j²) (where N_j is the number of data points per data set j). We illustrate the method by estimating the Hubble constant H0 from different sets of recent cosmic microwave background (CMB) experiments (including Saskatoon, Python V, MSAM1, TOCO and Boomerang). The approach can be generalized for combinations of cosmic probes, and for other priors on the hyper-parameters.
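The weighting recipe can be sketched numerically. The two mock "data sets" and all numbers below are invented; only the objective, minimizing ∑_j N_j ln(χ_j²) rather than ∑_j χ_j², follows the abstract's proposal.

```python
# Hedged numerical sketch of the hyper-parameters idea (mock data, not the CMB
# sets used in the paper): instead of minimizing sum_j chi2_j(theta), minimize
# sum_j N_j * ln(chi2_j(theta)), which effectively down-weights data sets whose
# quoted errors fit the model poorly.
import math
import random

random.seed(0)
TRUE_H0 = 65.0
# data set 1: honest error bars; data set 2: scatter larger than the quoted sigma
ds1 = [(TRUE_H0 + random.gauss(0.0, 5.0), 5.0) for _ in range(10)]
ds2 = [(TRUE_H0 + random.gauss(0.0, 15.0), 5.0) for _ in range(5)]

def chi2(dataset, theta):
    return sum(((y - theta) / s) ** 2 for y, s in dataset)

def hyper_objective(theta):
    # sum over data sets j of N_j * ln(chi2_j), per the abstract's recipe
    return sum(len(ds) * math.log(chi2(ds, theta)) for ds in (ds1, ds2))

grid = [50.0 + 0.01 * i for i in range(3001)]          # H0 candidates 50..80
best = min(grid, key=hyper_objective)                  # hyper-parameter estimate
naive = min(grid, key=lambda t: chi2(ds1, t) + chi2(ds2, t))  # plain joint chi2
```

A grid search stands in for a proper minimizer; the point is only the shape of the objective, which no longer lets a data set with misestimated errors dominate the joint fit.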
NASA Technical Reports Server (NTRS)
Hoffer, R. M. (Principal Investigator)
1975-01-01
The author has reported the following significant results. A data set containing SKYLAB, LANDSAT, and topographic data has been overlayed, registered, and geometrically corrected to a scale of 1:24,000. After geometrically correcting both sets of data, the SKYLAB data were overlayed on the LANDSAT data. Digital topographic data were then obtained, reformatted, and a data channel containing elevation information was then digitally overlayed onto the LANDSAT and SKYLAB spectral data. The 14,039 square kilometers, comprising 2,113,776 LANDSAT pixels, represent a relatively large data set available for digital analysis. The overlayed data set enables investigators to numerically analyze and compare two sources of spectral data and topographic data from any point in the scene. This capability is new and it will permit a numerical comparison of spectral response with elevation, slope, and aspect. Utilization of the spectral and topographic data together to obtain more accurate classifications of the various cover types present is feasible.
NASA Technical Reports Server (NTRS)
Holladay, Jon; Day, Greg; Gill, Larry
2004-01-01
Spacecraft are typically designed with a primary focus on weight in order to meet launch vehicle performance parameters. However, for pressurized and/or man-rated spacecraft, it is also necessary to have an understanding of the vehicle operating environments to properly size the pressure vessel. Proper sizing of the pressure vessel requires an understanding of the space vehicle's life cycle and a comparison of the physical design optimization (weight and launch "cost") to downstream operational complexity and total life cycle cost. This paper will provide an overview of some major environmental design drivers and provide examples for calculating the optimal design pressure versus a selected set of design parameters related to thermal and environmental perspectives. In addition, this paper will provide a generic set of cracking pressures for both positive and negative pressure relief valves that encompasses worst case environmental effects for a variety of launch/landing sites. Finally, several examples are included to highlight pressure relief set points and vehicle weight impacts for a selected set of orbital missions.
Selection and Characterization of Vegetable Crop Cultivars for use in Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Langhans, Robert W.
1997-01-01
Cultivar evaluation for controlled environments is a lengthy and multifaceted activity. The chapters of this thesis cover eight steps preparatory to yield trials, and the final step of cultivar selection after data are collected. The steps are as follows: 1. Examination of the literature on the crop and crop cultivars to assess the state of knowledge. 2. Selection of standard cultivars with which to explore crop response to major growth factors and determine set points for screening and, later, production. 3. Determination of practical growing techniques for the crop in controlled environments. 4. Design of experiments for determination of crop responses to the major growth factors, with particular emphasis on photoperiod, daily light integral and air temperature. 5. Development of a way of measuring yield appropriate to the crop type by sampling through the harvest period and calculating a productivity function. 6. Narrowing down the pool of cultivars and breeding lines according to a set of criteria and breeding history. 7. Determination of environmental set points for cultivar evaluation through calculating production cost as a function of set points and size of target facility. 8. Design of screening and yield trial experiments emphasizing efficient use of space. 9. Final evaluation of cultivars after data collection, in terms of production cost and value to the consumer. For each of the steps, relevant issues are addressed. In selecting standards to determine set points for screening, set points that optimize cost of production for the standards may not be applicable to all cultivars. Production of uniform and equivalent-sized seedlings is considered as a means of countering possible differences in seed vigor. Issues of spacing and re-spacing are also discussed.
Advanced control of neutral beam injected power in DIII-D
Pawley, Carl J.; Crowley, Brendan J.; Pace, David C.; ...
2017-03-23
In the DIII-D tokamak, one of the most powerful techniques to control the density, temperature and plasma rotation is by eight independently modulated neutral beam sources with a total power of 20 MW. The rapid modulation requires a high degree of reproducibility and precise control of the ion source plasma and beam acceleration voltage. Recent changes have been made to the controls to provide a new capability to smoothly vary the beam current and beam voltage during a discharge, while maintaining the modulation capability. The ion source plasma inside the arc chamber is controlled through feedback from the Langmuir probes measuring plasma density near the extraction end. To provide the new capability, the plasma control system (PCS) has been enabled to change the Langmuir probe set point and the beam voltage set point in real time. When the PCS varies the Langmuir set point, the plasma density is directly controlled in the arc chamber, thus changing the beam current (perveance) and power going into the tokamak. Alternately, the PCS can sweep the beam voltage set point by 20 kV or more and adjust the Langmuir probe setting to match, keeping the perveance constant and beam divergence at a minimum. This changes the beam power and average neutral particle energy, which changes deposition in the tokamak plasma. The ion separating magnetic field must accurately match the beam voltage to protect the beam line. To do this, the magnet current control accurately tracks the beam voltage set point. In conclusion, these new capabilities allow continuous in-shot variation of neutral beam ion energy.
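The "keep the perveance constant" step can be made concrete with the standard space-charge-limited relation P = I / V^(3/2). The sketch below is illustrative only: the current and voltage values are made up, not DIII-D set points, and the control system itself is not modeled.

```python
# Illustrative use of the perveance relation implied in the abstract: for a
# space-charge-limited source, perveance P = I / V**1.5, so when the beam
# voltage set point is swept, the current set point that keeps P (and hence the
# beam divergence) constant must scale as V**1.5. Numbers below are made up.
I0, V0 = 60.0, 80.0e3          # hypothetical beam current (A) and voltage (V)
P = I0 / V0 ** 1.5             # perveance value to be held constant

def matched_current(V):
    # current set point that preserves the perveance at voltage V
    return P * V ** 1.5

I_low = matched_current(60.0e3)   # after sweeping the voltage down by 20 kV
```

Sweeping the voltage down by 20 kV under this relation calls for roughly a 35% reduction in beam current, which is the kind of coordinated set point change the PCS performs in real time.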
Zhang, Melvyn; Bingham, Kathleen; Kantarovich, Karin; Laidlaw, Jennifer; Urbach, David; Sockalingam, Sanjeev; Ho, Roger
2016-04-30
Delirium is a common medical condition with a high prevalence in hospital settings. Effective delirium management requires a multi-component intervention, including the use of interprofessional teams and evidence-based interventions at the point of care. One vehicle for increasing access of delirium practice tools at the point of care is E-health. There has been a paucity of studies describing the implementation of delirium-related clinical applications. The purpose of this study is to acquire users' perceptions of the utility, feasibility and effectiveness of a smartphone application for delirium care in a general surgery unit. In addition, the authors aimed to elucidate the potential challenges with implementing this application. This study was conducted between January 2015 and June 2015 at the University Health Network, Toronto General Hospital site. Participants met inclusion criteria if they were clinical staff on the General Surgery Unit at the Toronto General Hospital site and had experience caring for patients with delirium. At the conclusion of the 4 weeks after the implementation of the intervention, participants were invited by email to participate in a focus group to discuss their perspectives related to using the delirium application. Our findings identified several themes related to the implementation and use of this smartphone application in an acute care clinical setting. These themes will provide clinicians preparing to use a smartphone application to support delirium care with an implementation framework. This study is one of the first to demonstrate the potential utility of a smartphone application for delirium inter-professional education. While this technology does appeal to healthcare professionals, it is important to note potential implementation challenges.
Our findings provide insights into these potential barriers and can be used to assist healthcare professionals considering the development and use of an inter-professional clinical care application in their setting.
National smokefree law in New Zealand improves air quality inside bars, pubs and restaurants.
Wilson, Nick; Edwards, Richard; Maher, Anthony; Näthe, Jenny; Jalali, Rafed
2007-05-18
We aimed to: (i) assess compliance with a new smokefree law in a range of hospitality settings; and (ii) assess the impact of the new law by measuring air quality and making comparisons with air quality in outdoor smoking areas and with international data from hospitality settings. We included 34 pubs, restaurants and bars, 10 transportation settings, nine other indoor settings, six outdoor smoking areas of bars and restaurants, and six other outdoor settings. These were selected using a mix of random, convenience and purposeful sampling. The number of lit cigarettes among occupants at defined time points in each venue was observed and a portable real-time aerosol monitor was used to measure fine particulate levels (PM2.5). No smoking was observed during the data collection periods among the more than 3785 people present in the indoor venues, nor in any of the transportation settings. The levels of fine particulates were relatively low inside the bars, pubs and restaurants in the urban and rural settings (mean 30-minute level = 16 μg/m3 for 34 venues; range of mean levels for each category: 13 μg/m3 to 22 μg/m3). The results for other smokefree indoor settings (shops, offices, etc.) and for smokefree transportation settings (e.g., buses, trains) were even lower. However, some "outdoor" smoking areas attached to bars/restaurants had high levels of fine particulates, especially those that were partly enclosed (e.g., up to a 30-minute mean value of 182 μg/m3 and a peak value of 284 μg/m3). The latter are far above WHO guideline levels for 24-hour exposure (i.e., 25 μg/m3). There was very high compliance with the new national smokefree law and this was also reflected by the relatively good indoor air quality in hospitality settings (compared to the "outdoor" smoking areas and the comparable settings in countries that permit indoor smoking).
Nevertheless, adopting enhanced regulations (as used in various US and Canadian jurisdictions) may be needed to address hazardous air quality in relatively enclosed "outdoor" smoking areas.
The 2008-2012 French Alzheimer plan: description of the national Alzheimer information system.
Le Duff, Franck; Develay, Aude Emmanuelle; Quetel, Julien; Lafay, Pierre; Schück, Stéphane; Pradier, Christian; Robert, Philippe
2012-01-01
In France, one of the aims of the current national Alzheimer's disease plan is to collect data from all memory centers (memory units, memory resource and research centers, independent neurologists) throughout the country. Here we describe the French Alzheimer Information System and present a 'snapshot' of the data collected throughout the country during the first year of operation. We analyzed all data transmitted by memory centers between January 2010 and December 2010. Each participating center is required to transmit information on patients to the French National Alzheimer databank (BNA). This involves completing a computer file containing 31 variables corresponding to a limited data set on AD (CIMA: Corpus minimum d'information Alzheimer). In 2010, the BNA received data from 320 memory centers relating to 199,113 consultations involving 118,776 patients. An analysis of the data shows that the initial MMSE (Mini-Mental State Examination) mean score for patients in France was 16.8 points for Alzheimer's disease, 25.7 points for mild cognitive impairment, and 18.8 points for related disorders. The BNA will provide longitudinal data that can be used to assess the needs of individual local health areas and size specialized care provision in each regional health scheme. By contributing to the BNA, the memory centers enhance their clinical activity and help to advance knowledge in epidemiology and medical research in the important field of Alzheimer's disease and related dementias.
Baudischova, L; Straznicka, J; Pokladnikova, J; Jahodar, L
2018-02-01
Background: The purchase of dietary supplements (DS) via the Internet is increasing worldwide as well as in the Czech Republic. Objective: The aim of the study is to evaluate the quality of information on DS available on the Internet. Setting: Czech websites related to dietary supplements. Methods: A cross-sectional study was carried out involving the analysis of information placed on the websites related to the 100 top-selling DS in the Czech Republic in 2014, according to IMS Health data. Main outcome measure: The following criteria were evaluated: contact details for the manufacturer, recommended dosage, information on active substances as well as overall composition, permitted health claims, % of the daily reference intake value (DRIV) for vitamins and minerals, link for online counseling, pregnancy/breastfeeding, allergy information, contraindications, adverse reactions, and supplement-drug interactions (some criteria were evaluated from both the regulatory and the clinical point of view). Results: A total of 199 web domains and 850 websites were evaluated. From the regulatory point of view, all the criteria were fulfilled by 11.3% of websites. Almost 9% of the websites reported information referring to the treatment, cure, or prevention of a disease. From the clinical point of view, all the criteria were met by only one website. Conclusions: The quality of information related to DS available on the Internet in the Czech Republic is quite low. Consumers should consult a specialist when using DS purchased online.
Multiple-hopping trajectories near a rotating asteroid
NASA Astrophysics Data System (ADS)
Shen, Hong-Xin; Zhang, Tian-Jiao; Li, Zhao; Li, Heng-Nian
2017-03-01
We present a study of the transfer orbits connecting landing points of irregular-shaped asteroids. The landing points do not touch the surface of the asteroids and are chosen several meters above the surface. The ant colony optimization technique is used to calculate the multiple-hopping trajectories near an arbitrary irregular asteroid. This new method has three steps, which are as follows: (1) the search for the maximal clique of candidate target landing points; (2) leg optimization connecting all landing point pairs; and (3) the hopping sequence optimization. In particular this method is applied to asteroids 433 Eros and 216 Kleopatra. We impose a critical constraint on the target landing points to allow for extensive exploration of the asteroid: the relative distance between all the arrived target positions should be larger than a minimum allowed value. Ant colony optimization is applied to find the set and sequence of targets, and the differential evolution algorithm is used to solve for the hopping orbits. The minimum velocity-increment tours of hopping trajectories connecting all the landing positions are obtained by ant colony optimization. The results from different-sized asteroids indicate that the cost of the minimum velocity-increment tour depends on the size of the asteroids.
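The three-step structure can be shown in a schematic miniature. This is emphatically not the paper's ant colony or differential evolution machinery: the "delta-v" cost is replaced by planar distance, the separation filter is a simple greedy pass rather than a maximal-clique search, and all coordinates are invented.

```python
# Schematic reduction of the tour-construction step (not the paper's ant colony
# or differential evolution machinery): with a mock "delta-v" cost equal to the
# planar distance between candidate landing points, enforce the minimum mutual
# separation constraint and brute-force the cheapest hopping sequence. All
# coordinates are invented.
from itertools import permutations
import math

sites = [(0.0, 0.0), (1.0, 0.2), (2.1, 0.0), (0.1, 0.05), (1.0, 1.4)]
min_sep = 0.5                     # required separation between chosen targets

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# step 1: greedily keep a set of mutually well-separated candidate targets
chosen = []
for s in sites:
    if all(dist(s, c) >= min_sep for c in chosen):
        chosen.append(s)

# steps 2-3: brute-force the cheapest open tour over the chosen targets
def tour_cost(order):
    return sum(dist(chosen[order[i]], chosen[order[i + 1]])
               for i in range(len(order) - 1))

best = min(permutations(range(len(chosen))), key=tour_cost)
```

One of the five candidates is dropped for violating the separation constraint, and the exhaustive search then plays the role that ant colony optimization plays at realistic problem sizes.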
Tsafrir, D; Tsafrir, I; Ein-Dor, L; Zuk, O; Notterman, D A; Domany, E
2005-05-15
We introduce a novel unsupervised approach for the organization and visualization of multidimensional data. At the heart of the method is a presentation of the full pairwise distance matrix of the data points, viewed in pseudocolor. The ordering of points is iteratively permuted in search of a linear ordering, which can be used to study embedded shapes. Several examples indicate how the shapes of certain structures in the data (elongated, circular and compact) manifest themselves visually in our permuted distance matrix. It is important to identify the elongated objects since they are often associated with a set of hidden variables, underlying continuous variation in the data. The problem of determining an optimal linear ordering is shown to be NP-complete, and therefore an iterative search algorithm with O(n^3) step-complexity is suggested. By using sorting points into neighborhoods (SPIN) to analyze colon cancer expression data we were able to address the serious problem of sample heterogeneity, which hinders identification of metastasis-related genes in our data. Our methodology brings to light the continuous variation of heterogeneity, starting with homogeneous tumor samples and gradually increasing the amount of another tissue. Ordering the samples according to their degree of contamination by unrelated tissue allows the separation of genes associated with irrelevant contamination from those related to cancer progression. A software package will be available for academic users upon request.
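The linear-ordering objective can be demonstrated exactly on a toy instance. This is a brute-force miniature, not the paper's SPIN heuristic: for data with a single hidden 1-D variable, the ordering that minimizes the summed neighbour-to-neighbour distance in the permuted matrix is the monotone one, so the hidden ordering is recovered. All data values are invented.

```python
# Miniature of the linear-ordering search on a pairwise distance matrix (brute
# force over a toy example; the paper's SPIN uses an iterative O(n^3) heuristic
# because the exact problem is NP-complete). For 1-D data the ordering that
# minimizes the summed neighbour-to-neighbour distance is the sorted order, so
# the "elongated" structure is recovered exactly.
from itertools import permutations

pts = [0.9, 0.1, 0.5, 0.3, 0.7, 0.0]          # shuffled 1-D "hidden variable"
D = [[abs(a - b) for b in pts] for a in pts]  # full pairwise distance matrix

def path_cost(perm):
    # total distance between neighbours along the candidate linear ordering
    return sum(D[perm[i]][perm[i + 1]] for i in range(len(perm) - 1))

best = min(permutations(range(len(pts))), key=path_cost)
ordered = [pts[i] for i in best]              # monotone: structure recovered
```

At six points the 720 permutations are trivially enumerable; the NP-completeness result is what forces the iterative search once n reaches the size of an expression data set.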
Base and precious metal occurrences along the San Andreas Fault, Point Delgada, California
McLaughlin, Robert J.; Sorg, D.H.; Ohlin, H.N.; Heropoulos, Chris
1979-01-01
Previously unrecognized veins containing lead, zinc, and copper sulfide minerals at Point Delgada, Calif., are associated with late Mesozoic(?) and Tertiary volcanic and sedimentary rocks of the Franciscan assemblage. Sulfide minerals include pyrite, sphalerite, galena, and minor chalcopyrite, and galena-rich samples contain substantial amounts of silver. These minerals occur in a quartz-carbonate gangue along northeast-trending faults and fractures that exhibit (left?) lateral and vertical slip. The sense of fault movement and the northeasterly strike are consistent with predicted conjugate fault sets of the present San Andreas fault system. The sulfide mineralization is younger than the Franciscan rocks of Point Delgada and King Range, and it may have accompanied or postdated the inception of San Andreas faulting. Mineralization largely preceded uplift, the formation of a marine terrace, and the emplacement of landslide-related debris-flow breccias that overlie the mineralized rocks and truncate the sulfide veins. These field relations indicate that the sulfide mineralization and inception of San Andreas faulting were clearly more recent than the early Miocene and that the mineralization could be younger than about 1.2 m.y. The sulfide veins at Point Delgada may be of economic significance. However, prior to any exploitation of the occurrence, economic and environmental conflicts of interest involving private land ownership, the Shelter Cove home development, and proximity of the coast must be resolved.
Continuum Limit of Total Variation on Point Clouds
NASA Astrophysics Data System (ADS)
García Trillos, Nicolás; Slepčev, Dejan
2016-04-01
We consider point clouds obtained as random samples of a measure on a Euclidean domain. A graph representing the point cloud is obtained by assigning weights to edges based on the distance between the points they connect. Our goal is to develop mathematical tools needed to study the consistency, as the number of available data points increases, of graph-based machine learning algorithms for tasks such as clustering. In particular, we study when the cut capacity, and more generally total variation, on these graphs is a good approximation of the perimeter (total variation) in the continuum setting. We address this question in the setting of Γ-convergence. We obtain almost optimal conditions on the scaling, as the number of points increases, of the size of the neighborhood over which the points are connected by an edge for the Γ-convergence to hold. Passage to the limit is enabled by a transportation-based metric which allows us to suitably compare functionals defined on different point clouds.
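The object under study can be computed directly for a small sample. The sketch below builds an ε-neighborhood graph on a random cloud and evaluates the graph total variation ∑_{ij} w_ij |u(x_i) − u(x_j)| with unit edge weights, a simplification of the distance-based weights in the abstract; the sample and parameters are illustrative.

```python
# Hedged sketch of the functional studied in the abstract: the graph total
# variation sum over edges of |u(x_i) - u(x_j)| on an epsilon-neighbourhood
# graph built over a random point cloud (unit edge weights for simplicity).
# Constants have zero TV; an indicator of a half-plane picks up contributions
# only from edges crossing the interface, mimicking a perimeter.
import random

random.seed(0)
n, eps = 200, 0.15
cloud = [(random.random(), random.random()) for _ in range(n)]

def graph_tv(u):
    tv = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dx = cloud[i][0] - cloud[j][0]
            dy = cloud[i][1] - cloud[j][1]
            if dx * dx + dy * dy <= eps * eps:   # connect points within eps
                tv += abs(u[i] - u[j])
    return tv

half = [1.0 if x > 0.5 else 0.0 for x, _ in cloud]   # indicator of {x > 1/2}
constant = [1.0] * n

tv_half = graph_tv(half)
tv_const = graph_tv(constant)
```

The consistency question in the paper is precisely how tv_half, properly rescaled in n and eps, converges to the length of the interface {x = 1/2} as the sample grows.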
Fixed point theorems for generalized contractions in ordered metric spaces
NASA Astrophysics Data System (ADS)
O'Regan, Donal; Petrusel, Adrian
2008-05-01
The purpose of this paper is to present some fixed point results for self-generalized contractions in ordered metric spaces. Our results generalize and extend some recent results of A.C.M. Ran, M.C.B. Reurings [A.C.M. Ran, M.C.B. Reurings, A fixed point theorem in partially ordered sets and some applications to matrix equations, Proc. Amer. Math. Soc. 132 (2004) 1435-1443], J.J. Nieto, R. Rodríguez-López [J.J. Nieto, R. Rodríguez-López, Contractive mapping theorems in partially ordered sets and applications to ordinary differential equations, Order 22 (2005) 223-239; J.J. Nieto, R. Rodríguez-López, Existence and uniqueness of fixed points in partially ordered sets and applications to ordinary differential equations, Acta Math. Sin. (Engl. Ser.) 23 (2007) 2205-2212], J.J. Nieto, R.L. Pouso, R. Rodríguez-López [J.J. Nieto, R.L. Pouso, R. Rodríguez-López, Fixed point theorems in ordered abstract spaces, Proc. Amer. Math. Soc. 135 (2007) 2505-2517], A. Petrusel, I.A. Rus [A. Petrusel, I.A. Rus, Fixed point theorems in ordered L-spaces, Proc. Amer. Math. Soc. 134 (2006) 411-418] and R.P. Agarwal, M.A. El-Gebeily, D. O'Regan [R.P. Agarwal, M.A. El-Gebeily, D. O'Regan, Generalized contractions in partially ordered metric spaces, Appl. Anal., in press]. As applications, existence and uniqueness results for Fredholm and Volterra type integral equations are given.
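For orientation, the prototype result being generalized, the Ran-Reurings theorem, can be stated as follows. The phrasing is a common paraphrase, not quoted from the paper, and the regularity and uniqueness hypotheses vary slightly between formulations in the literature.

```latex
% Ran--Reurings-type fixed point theorem (paraphrased statement)
\begin{theorem}
Let $(X,\preceq)$ be a partially ordered set and $d$ a metric on $X$
such that $(X,d)$ is complete. Let $f\colon X\to X$ be continuous and
monotone, and suppose there exists $k\in(0,1)$ with
\[
  d\bigl(f(x),f(y)\bigr)\le k\, d(x,y)
  \quad\text{for all comparable } x,y\in X .
\]
If some $x_0\in X$ satisfies $x_0\preceq f(x_0)$ (or $f(x_0)\preceq x_0$),
then $f$ has a fixed point; it is unique when every pair of points of $X$
has an upper or a lower bound.
\end{theorem}
```

The papers cited above progressively weaken the contraction, continuity, and order hypotheses; the present paper's "self-generalized contractions" continue that line.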
Columbia Glacier, Alaska, photogrammetry data set, 1981-82 and 1984-85
Krimmel, R.M.
1987-01-01
Photogrammetric processing of 12 sets of vertical aerial photography of the Columbia Glacier, Alaska, has measured the altitude and velocity fields of the lowest 14,000 m of the glacier during the periods of September 1981 to October 1982 and October 1984 to September 1985. The data set consists of the location of 3,604 points on the glacier, 1,161 points along the glacier terminus, and 1,116 points along the top of the terminus ice cliff. During the 1981 to 1985 period the terminus of the glacier receded 1,350 m, the ice near the terminus thinned at a rate of 18 m/year, and ice velocity near the terminus tripled, reaching as much as 6,000 m/year. (Author's abstract)
The Julia sets of basic uniCremer polynomials of arbitrary degree
NASA Astrophysics Data System (ADS)
Blokh, Alexander; Oversteegen, Lex
Let P be a polynomial of degree d with a Cremer point p and no repelling or parabolic periodic bi-accessible points. We show that there are two types of such Julia sets J_P. The red dwarf J_P are nowhere connected im kleinen and such that the intersection of all impressions of external angles is a continuum containing p and the orbits of all critical images. The solar J_P are such that every angle with dense orbit has a degenerate impression disjoint from other impressions and J_P is connected im kleinen at its landing point. We study bi-accessible points and locally connected models of J_P and show that such sets J_P appear through polynomial-like maps for generic polynomials with Cremer points. Since known tools break down for d>2 (if d>2, it is not known if there are small cycles near p, while if d=2, this result is due to Yoccoz), we introduce wandering ray continua in J_P and provide a new application of Thurston laminations.
Häuser, Winfried; Kühn-Becker, Hedi; von Wilmoswky, Hubertus; Settan, Margit; Brähler, Elmar; Petzke, Frank
2011-04-01
Gender differences in the clinical picture of fibromyalgia syndrome (FMS) are often assumed to be well established. However, studies on gender differences in demographic and clinical features of FMS have contradictory results. Their significance is limited by the small number of patients included and the selection bias of single settings. The purpose of this study was to compare demographic characteristics (age, family status) and clinical variables (duration of chronic pain and FMS diagnosis, tender point count, number of pain sites, and somatic and depressive symptoms) of male and female patients in different settings (general population, FMS self-help organization, and different clinical settings). FMS was diagnosed according to survey criteria in the general population and in the self-help organization setting and by the 1990 criteria of the American College of Rheumatology in the clinical settings. Tender point examination was performed according to the manual tender point survey protocol in clinical settings. Somatic and depressive symptoms were assessed by validated questionnaires. A total of 1023 patients (885 female, 138 male) were included in the analysis. Compared with male participants, female participants reported a longer duration of chronic widespread pain (P = 0.009) and time since FMS diagnosis (P = 0.05), and they had a higher tender point count (P = 0.04). There were no gender differences in age, family status, number of pain sites, or somatic and depressive symptoms. We found no relevant gender differences in the clinical picture of FMS. The assumption of well-established gender differences in the clinical picture of FMS could not be supported. Copyright © 2011 Elsevier HS Journals, Inc. All rights reserved.
Sato, Atsushi; Okuda, Yutaka; Fujita, Takaaki; Kimura, Norihiko; Hoshina, Noriyuki; Kato, Sayaka; Tanaka, Shigenari
2016-01-01
This study aimed to clarify which cognitive and physical factors are associated with the need for toileting assistance in stroke patients and to calculate cut-off values for discriminating between independent-supervision and dependent toileting ability. This cross-sectional study included 163 first-stroke patients in nine convalescent rehabilitation wards. Based on their FIM® instrument score for toileting, the patients were divided into an independent-supervision group and a dependent group. Multiple logistic regression analysis and receiver operating characteristic analysis were performed to identify factors related to toileting performance. The Mini-Mental State Examination (MMSE); the Stroke Impairment Assessment Set (SIAS) score for the affected lower limb, speech, and visuospatial functions; and the Functional Assessment for Control of Trunk (FACT) were analyzed as independent variables. The multiple logistic regression analysis showed that the FIM® instrument score for toileting was associated with the SIAS score for the affected lower limb function, the MMSE, and the FACT. On receiver operating characteristic analysis, the cut-off value for the SIAS score for the affected lower limb function was 8/7 points, the MMSE cut-off value was 25/24 points, and the FACT cut-off value was 14/13 points. Affected lower limb function, cognitive function, and trunk function were related to the need for toileting assistance. These cut-off values may be useful for judging whether toileting assistance is needed in stroke patients.
Cabrieto, Jedelyn; Tuerlinckx, Francis; Kuppens, Peter; Grassmann, Mariel; Ceulemans, Eva
2017-06-01
Change point detection in multivariate time series is a complex task since, next to the mean, the correlation structure of the monitored variables may also alter when change occurs. DeCon was recently developed to detect such changes in mean and/or correlation by combining a moving windows approach and robust PCA. However, in the literature, several other methods have been proposed that employ other non-parametric tools: E-divisive, Multirank, and KCP. Since these methods use different statistical approaches, two issues need to be tackled. First, applied researchers may find it hard to appraise the differences between the methods. Second, a direct comparison of the relative performance of all these methods for capturing change points signaling correlation changes is still lacking. Therefore, we present the basic principles behind DeCon, E-divisive, Multirank, and KCP and the corresponding algorithms, to make them more accessible to readers. We further compared their performance through extensive simulations using the settings of Bulteel et al. (Biological Psychology, 98 (1), 29-42, 2014), implying changes in mean and in correlation structure, and those of Matteson and James (Journal of the American Statistical Association, 109 (505), 334-345, 2014), implying different numbers of (noise) variables. KCP emerged as the best method in almost all settings. However, in the case of more than two noise variables, only DeCon performed adequately in detecting correlation changes.
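The methods compared above all share one core operation: scoring candidate split points of a series by how homogeneous the resulting segments are. A minimal sketch of that step for a single mean change, in the style of binary segmentation (a simpler relative of E-divisive and KCP, not any of the compared implementations):

```python
import numpy as np

def best_split(x):
    """Locate a single change point in the mean by choosing the split that
    minimizes the total within-segment sum of squares."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_k, best_cost = None, np.inf
    for k in range(1, n):                         # boundary between x[:k] and x[k:]
        cost = (((x[:k] - x[:k].mean()) ** 2).sum()
                + ((x[k:] - x[k:].mean()) ** 2).sum())
        if cost < best_cost:
            best_cost, best_k = cost, k
    return best_k
```

E-divisive, Multirank, and KCP generalize this scheme with divergence measures, rank statistics, and kernels, respectively, which is what lets them pick up correlation changes as well as mean shifts.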
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adrián-Martínez, S.; Ardid, M.; Bou-Cabo, M.
2014-05-01
A search for cosmic neutrino sources using six years of data collected by the ANTARES neutrino telescope has been performed. Clusters of muon neutrinos over the expected atmospheric background have been looked for. No clear signal has been found. The most signal-like accumulation of events is located at equatorial coordinates R.A. = -46.8° and decl. = -64.9° and corresponds to a 2.2σ background fluctuation. In addition, upper limits on the flux normalization of an E^-2 muon neutrino energy spectrum have been set for 50 pre-selected astrophysical objects. Finally, motivated by an accumulation of seven events relatively close to the Galactic Center in the recently reported neutrino sample of the IceCube telescope, a search for point sources in a broad region around this accumulation has been carried out. No indication of a neutrino signal has been found in the ANTARES data and upper limits on the flux normalization of an E^-2 energy spectrum of neutrinos from point sources in that region have been set. The 90% confidence level upper limits on the muon neutrino flux normalization vary between 3.5 and 5.1 × 10^-8 GeV cm^-2 s^-1, depending on the exact location of the source.
Project Delivery System Mode Decision Based on Uncertain AHP and Fuzzy Sets
NASA Astrophysics Data System (ADS)
Kaishan, Liu; Huimin, Li
2017-12-01
The project delivery system mode determines the contract pricing type, the project management mode and the risk allocation among all participants. Different project delivery system modes have different characteristics and applicable scopes. For the owner, the selection of the delivery mode is the key decision in whether the project can achieve the expected benefits; it bears directly on the success or failure of project construction. Taking comprehensive account of the factors that influence the delivery mode, a project delivery system mode decision model was set up on the basis of uncertain AHP and fuzzy sets, which captures the uncertainty and fuzziness involved in index evaluation and weight determination, so that the most suitable delivery mode can be identified rapidly and effectively according to project characteristics. The effectiveness of the model has been verified through an actual case analysis, providing a reference for construction project delivery system mode selection.
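The weight-confirmation step in AHP-based models like this one usually derives priority weights from a pairwise-comparison matrix via its principal eigenvector, with Saaty's consistency ratio guarding against incoherent judgments. A generic sketch of that classical step (not the paper's uncertain-AHP variant; the example matrix is invented):

```python
import numpy as np

def ahp_weights(A):
    """Priority weights from a pairwise-comparison matrix A via the principal
    eigenvector; also returns Saaty's consistency ratio (CR < 0.1 is acceptable)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                   # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                  # normalize weights to sum to 1
    ci = (eigvals[k].real - n) / (n - 1)          # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    return w, (ci / ri if ri > 0 else 0.0)
```

For a perfectly consistent 3x3 matrix built from the ratios 4:2:1, the weights come out as 4/7, 2/7, 1/7 with CR = 0; the uncertain-AHP/fuzzy-set extension replaces the crisp entries of A with interval or fuzzy judgments.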
Home advantage in soccer – A matter of expectations, goal setting and tactical decisions of coaches?
Staufenbiel, Kathrin; Lobinger, Babett; Strauss, Bernd
2015-01-01
In soccer, home teams win about 67% of decided games. The causes for this home advantage are still unresolved. There is a shortage of research on the psychological states of the actors involved. In this study, we examined soccer coaches' expectations, goal setting and tactical decisions in relation to game location. Soccer coaches (N = 297) with different expertise levels participated in an experimental, online management game and were randomly assigned to one of two groups, "home game (HG)" or "away game." Participants received information on the game for which they were asked to make decisions at multiple points. The only information differing between groups was game location. Regardless of expertise, HG coaches had higher expectations to win, set more challenging goals and decided on more offensive and courageous playing tactics. Possible consequences of these findings concerning home advantage in soccer are discussed.
Visual space under free viewing conditions.
Doumen, Michelle J A; Kappers, Astrid M L; Koenderink, Jan J
2005-10-01
Most research on visual space has been done under restricted viewing conditions and in reduced environments. In our experiments, observers performed an exocentric pointing task, a collinearity task, and a parallelity task in an entirely visible room. We varied the relative distances between the objects and the observer and the separation angle between the two objects. We were able to compare our data directly with data from experiments in an environment with less monocular depth information present. We expected that in a richer environment and under less restrictive viewing conditions, the settings would deviate less from the veridical settings. However, large systematic deviations from veridical settings were found for all three tasks. The structure of these deviations was task dependent, and the structure and the deviations themselves were comparable to those obtained under more restricted circumstances. Thus, the additional information was not used effectively by the observers.
Towards a Full-sky, High-resolution Dust Extinction Map with WISE and Planck
NASA Astrophysics Data System (ADS)
Meisner, Aaron M.; Finkbeiner, D. P.
2014-01-01
We have recently completed a custom processing of the entire WISE 12 micron All-sky imaging data set. The result is a full-sky map of diffuse, mid-infrared Galactic dust emission with angular resolution of 15 arcseconds, and with contaminating artifacts such as compact sources removed. At the same time, the 2013 Planck HFI maps represent a complementary data set in the far-infrared, with zero-point relatively immune to zodiacal contamination and angular resolution superior to previous full-sky data sets at similar frequencies. Taken together, these WISE and Planck data products present an opportunity to improve upon the SFD (1998) dust extinction map, by virtue of enhanced angular resolution and potentially better-controlled systematics on large scales. We describe our continuing efforts to construct and test high-resolution dust extinction and temperature maps based on our custom WISE processing and Planck HFI data.
Gender, affiliation, assertion, and the interactive context of parent-child play.
Leaper, C
2000-05-01
Ninety-eight young U.S. children (mean age = 48 months) with either European, Latin American, or multiple ethnic backgrounds were videotaped with their mothers and their fathers on separate occasions in their families' homes. Parent-child pairs played for 8 min each with a feminine-stereotyped toy set (foods and plates) and a masculine-stereotyped toy set (track and cars). Levels of affiliation (engaging vs. distancing) and assertion (direct vs. nondirect) were rated on 7-point scales every 5 s from the videotapes for both parent and child. Overall, the play activity accounted for a large proportion of the variance in parents' and children's mean affiliation and assertion ratings. Some hypothesized gender-related differences in behavior were also observed. In addition, exploratory analyses revealed some differences between the different ethnic groups. The results highlight the importance of role modeling and activity settings in the socialization and social construction of gender.
Ignorance is a bliss: Mathematical structure of many-box models
NASA Astrophysics Data System (ADS)
Tylec, Tomasz I.; Kuś, Marek
2018-03-01
We show that the propositional system of a many-box model is always a set-representable effect algebra. In particular cases of 2-box and 1-box models, it is an orthomodular poset and an orthomodular lattice, respectively. We discuss the relation of the obtained results with the so-called Local Orthogonality principle. We argue that non-classical properties of box models are the result of a dual enrichment of the set of states caused by the impoverishment of the set of propositions. On the other hand, quantum mechanical models always have more propositions as well as more states than the classical ones. Consequently, we show that the box models cannot be considered as generalizations of quantum mechanical models, and seeking additional principles that could allow us to "recover quantum correlations" in box models is, at least from the fundamental point of view, pointless.
Does gastric bypass surgery change body weight set point?
Hao, Z; Mumphrey, M B; Morrison, C D; Münzberg, H; Ye, J; Berthoud, H R
2016-01-01
The relatively stable body weight during adulthood is attributed to a homeostatic regulatory mechanism residing in the brain which uses feedback from the body to control energy intake and expenditure. This mechanism guarantees that if perturbed up or down by design, body weight will return to pre-perturbation levels, defined as the defended level or set point. The fact that weight regain is common after dieting suggests that obese subjects defend a higher level of body weight. Thus, the set point for body weight is flexible and likely determined by the complex interaction of genetic, epigenetic and environmental factors. Unlike dieting, bariatric surgery does a much better job of producing sustained suppression of food intake and body weight, and an intensive search for the underlying mechanisms has started. Although one explanation for this lasting effect, particularly of Roux-en-Y gastric bypass surgery (RYGB), is simple physical restriction due to the invasive surgery, a more exciting explanation is that the surgery physiologically reprograms the body weight defense mechanism. In this non-systematic review, we present behavioral evidence from our own and other studies that defended body weight is lowered after RYGB and sleeve gastrectomy. After these surgeries, rodents return to their preferred lower body weight if over- or underfed for a period of time, and the ability to drastically increase food intake during the anabolic phase strongly argues against the physical restriction hypothesis. However, the underlying mechanisms remain obscure. Although the mechanism involves central leptin and melanocortin signaling pathways, other peripheral signals such as gut hormones and their neural effector pathways likely contribute. Future research using both targeted and non-targeted 'omics' techniques in both humans and rodents as well as modern, genetically targeted, neuronal manipulation techniques in rodents will be necessary.
NASA Astrophysics Data System (ADS)
Feyen, Luc; Caers, Jef
2006-06-01
In this work, we address the problem of characterizing the heterogeneity and uncertainty of hydraulic properties for complex geological settings. We distinguish between two scales of heterogeneity, namely the hydrofacies structure and the intrafacies variability of the hydraulic properties. We employ multiple-point geostatistics to characterize the hydrofacies architecture. The multiple-point statistics are borrowed from a training image that is designed to reflect the prior geological conceptualization. The intrafacies variability of the hydraulic properties is represented using conventional two-point correlation methods, more precisely, spatial covariance models under a multi-Gaussian spatial law. We address the different levels and sources of uncertainty in characterizing the subsurface heterogeneity, and explore their effect on groundwater flow and transport predictions. Typically, uncertainty is assessed by way of many images, termed realizations, of a fixed statistical model. However, in many cases, sampling from a fixed stochastic model does not adequately represent the space of uncertainty. It neglects the uncertainty related to the selection of the stochastic model and the estimation of its input parameters. We acknowledge the uncertainty inherent in the definition of the prior conceptual model of aquifer architecture and in the estimation of global statistics, anisotropy, and correlation scales. Spatial bootstrap is used to assess the uncertainty of the unknown statistical parameters. As an illustrative example, we employ a synthetic field that represents a fluvial setting consisting of an interconnected network of channel sands embedded within finer-grained floodplain material. For this highly non-stationary setting we quantify the groundwater flow and transport model prediction uncertainty for various levels of hydrogeological uncertainty. Results indicate the importance of accurately describing the facies geometry, especially for transport predictions.
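The spatial bootstrap mentioned above builds on the ordinary bootstrap: resample the data with replacement many times and use the spread of the re-estimated statistic as its uncertainty. The plain i.i.d. version is sketched below for illustration only; the spatial variant additionally resamples in a way that honors spatial correlation, which this sketch does not do:

```python
import random
import statistics

def bootstrap_se(data, stat, n_boot=2000, seed=0):
    """Bootstrap standard error of a statistic: resample the data with
    replacement n_boot times and take the standard deviation of the
    resampled estimates."""
    rng = random.Random(seed)                     # fixed seed for reproducibility
    n = len(data)
    reps = [stat([data[rng.randrange(n)] for _ in range(n)])
            for _ in range(n_boot)]
    return statistics.stdev(reps)
```

For the mean of a small sample, the bootstrap standard error should land near the classical sd/sqrt(n), which is an easy sanity check before applying the idea to variogram or training-image statistics.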
The lead time tradeoff: the case of health states better than dead.
Pinto-Prades, José Luis; Rodríguez-Míguez, Eva
2015-04-01
Lead time tradeoff (L-TTO) is a variant of the time tradeoff (TTO). L-TTO introduces a lead period in full health before illness onset, avoiding the need to use 2 different procedures for states better and worse than dead. To estimate utilities, additive separability is assumed. We tested to what extent violations of this assumption can bias utilities estimated with L-TTO. A sample of 500 members of the Spanish general population evaluated 24 health states, using face-to-face interviews. A total of 188 subjects were interviewed with L-TTO and the rest with TTO. Both samples evaluated the same set of 24 health states, divided into 4 groups with 6 health states per set. Each subject evaluated 1 of the sets. A random effects regression model was fitted to our data. Only health states better than dead were included in the regression since it is in this subset where additive separability can be tested clearly. Utilities were higher in L-TTO in relation to TTO (on average L-TTO adds about 0.2 points to the utility of health states), suggesting that additive separability is violated. The difference between methods increased with the severity of the health state. Thus, L-TTO adds about 0.14 points to the average utility of the less severe states, 0.23 to the intermediate states, and 0.28 points to the more severe states. L-TTO produced higher utilities than TTO. Health problems are perceived as less severe if a lead period in full health is added upfront, implying that there are interactions between disjointed time periods. The advantages of this method have to be compared with the cost of modeling the interaction between periods.
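Under additive separability, both elicitation procedures reduce to simple arithmetic on the indifference point, which is exactly the assumption the study tests. A sketch of the two scalings (illustrative only, not the authors' estimation model):

```python
def tto_utility(x, t):
    """Classic TTO for states better than dead: indifference between t years
    in the health state and x years in full health gives u = x / t."""
    return x / t

def ltto_utility(x, lead, t):
    """Lead-time TTO: 'lead' years in full health precede t years in the
    state; indifference at x years in full health in total gives, under
    additive separability, u = (x - lead) / t."""
    return (x - lead) / t
```

With a 10-year disease period and a 10-year lead, an indifference point of 16 years implies u = 0.6 under both scalings; the study's finding that L-TTO answers sit about 0.2 points higher than TTO for the same states indicates that respondents do not treat the lead and disease periods as separable in this way.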
Validation of Foot Placement Locations from Ankle Data of a Kinect v2 Sensor.
Geerse, Daphne; Coolen, Bert; Kolijn, Detmar; Roerdink, Melvyn
2017-10-10
The Kinect v2 sensor may be a cheap and easy to use sensor to quantify gait in clinical settings, especially when applied in set-ups integrating multiple Kinect sensors to increase the measurement volume. Reliable estimates of foot placement locations are required to quantify spatial gait parameters. This study aimed to systematically evaluate the effects of distance from the sensor, side and step length on estimates of foot placement locations based on Kinect's ankle body points. Subjects (n = 12) performed stepping trials at imposed foot placement locations distanced 2 m or 3 m from the Kinect sensor (distance), for left and right foot placement locations (side), and for five imposed step lengths. Body points' time series of the lower extremities were recorded with a Kinect v2 sensor, placed in a frontoparallel orientation on the left side, and a gold-standard motion-registration system. Foot placement locations, step lengths, and stepping accuracies were compared between systems using repeated-measures ANOVAs, agreement statistics and two one-sided t-tests to test equivalence. For the right side at the 2 m distance from the sensor, we found significant between-systems differences in foot placement locations and step lengths, and evidence for nonequivalence. This distance by side effect was likely caused by differences in body orientation relative to the Kinect sensor. It can be reduced by using Kinect's higher-dimensional depth data to estimate foot placement locations directly from the foot's point cloud and/or by using smaller inter-sensor distances in the case of a multi-Kinect v2 set-up to estimate foot placement locations at greater distances from the sensor.
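The two one-sided t-tests (TOST) procedure used above declares two measurement systems equivalent when their paired differences are shown to fall within a pre-set margin on both sides. A generic paired-sample sketch (not the study's analysis code; the margin and toy differences are invented):

```python
import numpy as np
from scipy import stats

def tost_paired(diff, delta):
    """Two one-sided t-tests for equivalence of paired differences 'diff'
    within +/- delta. Equivalence is claimed if BOTH one-sided tests reject;
    returns the larger of the two one-sided p-values."""
    diff = np.asarray(diff, dtype=float)
    n = len(diff)
    m = diff.mean()
    se = diff.std(ddof=1) / np.sqrt(n)
    t_low = (m + delta) / se                      # H0: true mean <= -delta
    t_high = (m - delta) / se                     # H0: true mean >= +delta
    p_low = 1 - stats.t.cdf(t_low, df=n - 1)
    p_high = stats.t.cdf(t_high, df=n - 1)
    return max(p_low, p_high)
```

Unlike a conventional t-test, a non-significant difference here is not enough: equivalence must be demonstrated positively, which is why the study could report both "significant differences" and "evidence for nonequivalence" for the same condition.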
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2013-11-01
Maintaining comfort in a home can be challenging in hot-humid climates. At the common summer temperature set point of 75 degrees F, the perceived air temperature can vary by 11 degrees F because higher indoor humidity reduces comfort. Often the air conditioner (AC) thermostat set point is lower than the desirable cooling level to try to increase moisture removal so that the interior air is not humid or "muggy." However, this method is not always effective in maintaining indoor relative humidity (RH) or comfort. In order to quantify the performance of a combined whole-house dehumidifier (WHD) AC system, researchers from the U.S. Department of Energy's Building America team Consortium of Advanced Residential Buildings (CARB) monitored the operation of two Lennox AC systems coupled with a Honeywell DH150 TrueDRY whole-house dehumidifier for a six-month period. By using a WHD to control moisture levels (latent cooling) and optimizing a central AC to control temperature (sensible cooling), improvements in comfort can be achieved while reducing utility costs. Indoor comfort for this study was defined as maintaining indoor conditions at below 60% RH and a humidity ratio of 0.012 lbm/lbm while at common dry bulb set point temperatures of 74-80 degrees F. In addition to enhanced comfort, controlling moisture to these levels can reduce the risk of other potential issues such as mold growth, pests, and building component degradation. Because a standard AC must also reduce dry bulb air temperature in order to remove moisture, a WHD is typically needed to support these latent loads when sensible heat removal is not desired.
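The 0.012 lbm/lbm humidity-ratio target above can be checked with basic psychrometrics: saturation vapor pressure from dry-bulb temperature, scaled by RH, then converted to a mass ratio. A rough sketch using the Magnus approximation (not the study's method; ASHRAE correlations are more exact):

```python
import math

def humidity_ratio(temp_f, rh_percent, pressure_pa=101325.0):
    """Humidity ratio (lbm water per lbm dry air) from dry-bulb temperature
    (deg F) and relative humidity (%), at a given total pressure.
    Saturation vapor pressure via the Magnus approximation."""
    t_c = (temp_f - 32.0) * 5.0 / 9.0                       # convert to Celsius
    p_sat = 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))  # Pa, Magnus formula
    p_v = rh_percent / 100.0 * p_sat                        # partial vapor pressure
    return 0.622 * p_v / (pressure_pa - p_v)                # mass ratio of vapor to dry air
```

At the study's boundary condition of 60% RH and a 75 degrees F set point, this gives a humidity ratio of roughly 0.011 lbm/lbm, consistent with the 0.012 lbm/lbm comfort ceiling holding across the 74-80 degrees F set-point range only when RH is actively controlled.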