Sample records for single reference point

  1. Improving the Patron Experience: Sterling Memorial Library's Single Service Point

    ERIC Educational Resources Information Center

    Sider, Laura Galas

    2016-01-01

    This article describes the planning process and implementation of a single service point at Yale University's Sterling Memorial Library. While much recent scholarship on single service points (SSPs) has focused on the virtues or hazards of eliminating reference desks in libraries nationwide, this essay explores the ways in which single service…

  2. Precise aircraft single-point positioning using GPS post-mission orbits and satellite clock corrections

    NASA Astrophysics Data System (ADS)

    Lachapelle, G.; Cannon, M. E.; Qiu, W.; Varner, C.

    1996-09-01

    Aircraft single point position accuracy is assessed through a comparison of the single point coordinates with corresponding DGPS-derived coordinates. The platform utilized for this evaluation is a Naval Air Warfare Center P-3 Orion aircraft. Data were collected over a period of about 40 hours, spread over six days, off Florida's East Coast in July 1994, using DGPS reference stations in Jacksonville, FL, and Warminster, PA. The analysis of results shows that the consistency between the coordinates obtained in single point positioning mode and DGPS mode is about 1 m (rms) in latitude and longitude, and 2 m (rms) in height, with instantaneous errors of up to a few metres due to the effect of the ionosphere on the single point L1 solutions.
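
    A minimal sketch of the kind of consistency check described above: comparing a single-point coordinate series against the DGPS-derived series and reporting per-component rms differences. The array shapes, local-level (east/north/up) frame, and noise levels are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def rms_differences(single_point_enu: np.ndarray, dgps_enu: np.ndarray) -> dict:
        """Compare two (N, 3) series of local-level coordinates (east, north, up) in metres.

        DGPS is treated as the reference trajectory; returns per-component rms differences.
        """
        diff = single_point_enu - dgps_enu
        rms = np.sqrt(np.mean(diff ** 2, axis=0))
        return {"east_rms_m": rms[0], "north_rms_m": rms[1], "up_rms_m": rms[2]}

    # Hypothetical data: ~1 m horizontal / ~2 m vertical scatter around the DGPS track.
    rng = np.random.default_rng(0)
    dgps = np.zeros((1000, 3))
    single = dgps + rng.normal(scale=[1.0, 1.0, 2.0], size=(1000, 3))
    print(rms_differences(single, dgps))
    ```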

  3. Ecosystem approach to fisheries: Exploring environmental and trophic effects on Maximum Sustainable Yield (MSY) reference point estimates

    PubMed Central

    Kumar, Rajeev; Pitcher, Tony J.; Varkey, Divya A.

    2017-01-01

    We present a comprehensive analysis of estimation of fisheries Maximum Sustainable Yield (MSY) reference points using an ecosystem model built for Mille Lacs Lake, the second largest lake within Minnesota, USA. Data from single-species modelling output, extensive annual sampling for species abundances, annual catch-survey, stomach-content analysis for predator-prey interactions, and expert opinions were brought together within the framework of an Ecopath with Ecosim (EwE) ecosystem model. An increase in the lake water temperature was observed in the last few decades; therefore, we also incorporated a temperature forcing function in the EwE model to capture the influences of changing temperature on the species composition and food web. The EwE model was fitted to abundance and catch time-series for the period 1985 to 2006. Using the ecosystem model, we estimated reference points for most of the fished species in the lake at single-species as well as ecosystem levels with and without considering the influence of temperature change; therefore, our analysis investigated the trophic and temperature effects on the reference points. The paper concludes that reference points such as MSY are not stationary, but change when (1) environmental conditions alter species productivity and (2) fishing on predators alters the compensatory response of their prey. Thus, it is necessary for management to re-estimate or re-evaluate the reference points when changes in environmental conditions and/or major shifts in species abundance or community structure are observed. PMID:28957387

  4. An innovative use of instant messaging technology to support a library's single-service point.

    PubMed

    Horne, Andrea S; Ragon, Bart; Wilson, Daniel T

    2012-01-01

    A library service model that provides reference and instructional services by summoning reference librarians from a single service point is described. The system utilizes Libraryh3lp, an open-source, multioperator instant messaging system. The selection and refinement of this solution and technical challenges encountered are explored, as is the design of public services around this technology, usage of the system, and best practices. This service model, while a major cultural and procedural change at first, is now a routine aspect of customer service for this library.

  5. Phase-shifting point diffraction interferometer mask designs

    DOEpatents

    Goldberg, Kenneth Alan

    2001-01-01

    In a phase-shifting point diffraction interferometer, different image-plane mask designs can improve the operation of the interferometer. By keeping the test beam window of the mask small compared to the separation distance between the beams, the problem of energy from the reference beam leaking through the test beam window is reduced. By rotating the grating and mask 45.degree., only a single one-dimensional translation stage is required for phase-shifting. By keeping two reference pinholes in the same orientation about the test beam window, only a single grating orientation, and thus a single one-dimensional translation stage, is required. The use of a two-dimensional grating allows for a multiplicity of pinholes to be used about the pattern of diffracted orders of the grating at the mask. Orientation marks on the mask can be used to orient the device and indicate the position of the reference pinholes.

  6. Linking language to the visual world: Neural correlates of comprehending verbal reference to objects through pointing and visual cues.

    PubMed

    Peeters, David; Snijders, Tineke M; Hagoort, Peter; Özyürek, Aslı

    2017-01-27

    In everyday communication speakers often refer in speech and/or gesture to objects in their immediate environment, thereby shifting their addressee's attention to an intended referent. The neurobiological infrastructure involved in the comprehension of such basic multimodal communicative acts remains unclear. In an event-related fMRI study, we presented participants with pictures of a speaker and two objects while they concurrently listened to her speech. In each picture, one of the objects was singled out, either through the speaker's index-finger pointing gesture or through a visual cue that made the object perceptually more salient in the absence of gesture. A mismatch (compared to a match) between speech and the object singled out by the speaker's pointing gesture led to enhanced activation in left IFG and bilateral pMTG, showing the importance of these areas in conceptual matching between speech and referent. Moreover, a match (compared to a mismatch) between speech and the object made salient through a visual cue led to enhanced activation in the mentalizing system, arguably reflecting an attempt to converge on a jointly attended referent in the absence of pointing. These findings shed new light on the neurobiological underpinnings of the core communicative process of comprehending a speaker's multimodal referential act and stress the power of pointing as an important natural device to link speech to objects. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Rapid fast-mapping abilities in 2-year-olds.

    PubMed

    Spiegel, Chad; Halberda, Justin

    2011-05-01

    Learning a new word consists of two primary tasks that have often been conflated into a single process: referent selection, in which a child must determine the correct referent of a novel label, and referent retention, which is the ability to store this newly formed label-object mapping in memory for later use. In addition, children must be capable of performing these tasks rapidly and repeatedly as they are frequently exposed to novel words during the course of natural conversation. Here we used a preferential pointing task to investigate 2-year-olds' (N=72) ability to infer the referent of a novel noun from a single ambiguous exposure and their ability to retain this mapping over time. Children were asked to identify the referent of a novel label on six critical trials distributed throughout the course of a 10-min study involving many familiar and novel objects. On these critical trials, images of a known object and a novel object (e.g., a ball and a nameless artifact constructed in the laboratory) appeared on two computer screens and a voice asked children to "point at the _____ [e.g., glark]." Following label onset, children were allowed only 3s during which to infer the correct referent, point at it, and potentially store this new word-object mapping. In a final posttest trial, all previously labeled novel objects appeared and children were asked to point to one of them (e.g., "Can you find the glark?"). To succeed, children needed to have initially mapped the novel labels correctly and retained these mappings over the course of the study. Despite the difficult demands of the current task, children successfully identified the target object on the retention trial. We conclude that 2-year-olds are able to fast map novel nouns during a brief single exposure under ambiguous labeling conditions. Copyright © 2010 Elsevier Inc. All rights reserved.

  8. Alignment reference device

    DOEpatents

    Patton, Gail Y.; Torgerson, Darrel D.

    1987-01-01

    An alignment reference device provides a collimated laser beam that minimizes angular deviations therein. A laser beam source outputs the beam into a single mode optical fiber. The output end of the optical fiber acts as a source of radiant energy and is positioned at the focal point of a lens system where the focal point is positioned within the lens. The output beam reflects off a mirror back to the lens that produces a collimated beam.

  9. A tri-reference point theory of decision making under risk.

    PubMed

    Wang, X T; Johnson, Joseph G

    2012-11-01

    The tri-reference point (TRP) theory takes into account minimum requirements (MR), the status quo (SQ), and goals (G) in decision making under risk. The 3 reference points demarcate risky outcomes and risk perception into 4 functional regions: success (expected value of x ≥ G), gain (SQ < x < G), loss (MR ≤ x < SQ), and failure (x < MR). The psychological impact of achieving or failing to achieve these reference points is rank ordered as MR > G > SQ. We present TRP assumptions and value functions and a mathematical formalization of the theory. We conducted empirical tests of crucial TRP predictions using both explicit and implicit reference points. We show that decision makers consider both G and MR and give greater weight to MR than G, indicating failure aversion (i.e., the disutility of a failure is greater than the utility of a success in the same task) in addition to loss aversion (i.e., the disutility of a loss is greater than the utility of the same amount of gain). Captured by a double-S shaped value function with 3 inflection points, risk preferences switched between risk seeking and risk aversion when the distribution of a gamble straddled a different reference point. The existence of MR (not G) significantly shifted choice preference toward risk aversion even when the outcome distribution of a gamble was well above the MR. Single reference point based models such as prospect theory cannot consistently account for these findings. The TRP theory provides simple guidelines for evaluating risky choices for individuals and organizational management. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
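
    A minimal sketch, under our own simplifying assumptions, of how the three reference points partition outcomes into the four regions named above. The piecewise value function is only a placeholder illustrating the region-dependent weighting (MR weighted above G, G above SQ); it is not the calibrated double-S function from the paper.

    ```python
    def trp_region(x: float, mr: float, sq: float, g: float) -> str:
        """Classify an outcome x against minimum requirement (MR), status quo (SQ) and goal (G)."""
        assert mr <= sq <= g, "reference points must be ordered MR <= SQ <= G"
        if x < mr:
            return "failure"
        if x < sq:
            return "loss"
        if x < g:
            return "gain"
        return "success"

    def trp_value(x: float, mr: float, sq: float, g: float,
                  w_mr: float = 3.0, w_g: float = 2.0) -> float:
        """Toy piecewise value function: extra penalty below MR, extra reward above G.

        The weights encode the MR > G > SQ priority (failure aversion plus goal seeking).
        """
        region = trp_region(x, mr, sq, g)
        base = x - sq                       # value relative to the status quo
        if region == "failure":
            return base - w_mr * (mr - x)   # steep drop below the minimum requirement
        if region == "success":
            return base + w_g * (x - g)     # boost above the goal
        return base

    print(trp_region(40, mr=50, sq=100, g=150))   # -> failure
    print(trp_value(160, mr=50, sq=100, g=150))   # boosted because the goal is exceeded
    ```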

  10. Novel switching method for single-phase NPC three-level inverter with neutral-point voltage control

    NASA Astrophysics Data System (ADS)

    Lee, June-Seok; Lee, Seung-Joo; Lee, Kyo-Beum

    2018-02-01

    This paper proposes a novel switching method with neutral-point voltage control for a single-phase neutral-point-clamped three-level inverter (SP-NPCI) used in photovoltaic systems. The proposed switching method improves the efficiency of the SP-NPCI. The main concept is to fix the switching state of one leg; as a result, the switching loss decreases and the total efficiency improves. In addition, the method enables maximum power-point-tracking operation by applying the proposed neutral-point voltage control algorithm, which is implemented by modifying the reference signal. Simulation and experimental results verify the performance of the proposed switching method with neutral-point voltage control.

  11. Redefining Roles and Responsibilities: Implementing a Triage Reference Model at a Single Service Point

    ERIC Educational Resources Information Center

    LaMagna, Michael; Hartman-Caverly, Sarah; Marchetti, Lori

    2016-01-01

    As academic institutions continue to renovate and remodel existing libraries to include colocated services, it is important to understand how this new environment requires the redefining of traditional library roles and responsibilities. This case study examines how Delaware County Community College redefined reference and research service by…

  12. Superior Cognitive Mapping through Single Landmark-Related Learning than through Boundary-Related Learning

    ERIC Educational Resources Information Center

    Zhou, Ruojing; Mou, Weimin

    2016-01-01

    Cognitive mapping is assumed to be through hippocampus-dependent place learning rather than striatum-dependent response learning. However, we proposed that either type of spatial learning, as long as it involves encoding metric relations between locations and reference points, could lead to a cognitive map. Furthermore, the fewer reference points…

  13. Single-Pulse Multi-Point Multi-Component Interferometric Rayleigh Scattering Velocimeter

    NASA Technical Reports Server (NTRS)

    Bivolaru, Daniel; Danehy, Paul M.; Lee, Joseph W.; Gaffney, Richard L., Jr.; Cutler, Andrew D.

    2006-01-01

    A simultaneous multi-point, multi-component velocimeter using interferometric detection of the Doppler shift of Rayleigh, Mie, and Rayleigh-Brillouin scattered light in supersonic flow is described. The system uses up to three sets of collection optics and one beam combiner for the reference laser light to form a single collimated beam. The planar Fabry-Perot interferometer used in the imaging mode for frequency detection preserves the spatial distribution of the signal reasonably well. Single-pulse multi-point measurements of up to two orthogonal and one non-orthogonal components of velocity in a Mach 2 free jet were performed to demonstrate the technique. The average velocity measurements show a close agreement with the CFD calculations using the VULCAN code.

  14. Superior cognitive mapping through single landmark-related learning than through boundary-related learning.

    PubMed

    Zhou, Ruojing; Mou, Weimin

    2016-08-01

    Cognitive mapping is assumed to be through hippocampus-dependent place learning rather than striatum-dependent response learning. However, we proposed that either type of spatial learning, as long as it involves encoding metric relations between locations and reference points, could lead to a cognitive map. Furthermore, the fewer reference points to specify individual locations, the more accurate a cognitive map of these locations will be. We demonstrated that participants have more accurate representations of vectors between 2 locations and of configurations among 3 locations when locations are individually encoded in terms of a single landmark than when locations are encoded in terms of a boundary. Previous findings have shown that learning locations relative to a boundary involve stronger place learning and higher hippocampal activation whereas learning relative to a single landmark involves stronger response learning and higher striatal activation. Recognizing this, we have provided evidence challenging the cognitive map theory but favoring our proposal. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Geoscience laser altimeter system-stellar reference system

    NASA Astrophysics Data System (ADS)

    Millar, Pamela S.; Sirota, J. Marcos

    1998-01-01

    GLAS is an EOS space-based laser altimeter being developed to profile the height of the Earth's ice sheets with ~15 cm single shot accuracy from space under NASA's Mission to Planet Earth (MTPE). The primary science goal of GLAS is to determine if the ice sheets are increasing or diminishing for climate change modeling. This is achieved by measuring the ice sheet heights over Greenland and Antarctica to 1.5 cm/yr over 100 km×100 km areas by crossover analysis (Zwally 1994). This measurement performance requires the instrument to determine the pointing of the laser beam to ~5 µrad (1 arcsecond), 1-sigma, with respect to the inertial reference frame. The GLAS design incorporates a stellar reference system (SRS) to relate the laser beam pointing angle to the star field with this accuracy. This is the first time a spaceborne laser altimeter is measuring pointing to such high accuracy. The design for the stellar reference system combines an attitude determination system (ADS) with a laser reference system (LRS) to meet this requirement. The SRS approach and expected performance are described in this paper.

  16. Validation of a modification to Performance-Tested Method 070601: Reveal Listeria Test for detection of Listeria spp. in selected foods and selected environmental samples.

    PubMed

    Alles, Susan; Peng, Linda X; Mozola, Mark A

    2009-01-01

    A modification to Performance-Tested Method (PTM) 070601, Reveal Listeria Test (Reveal), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there was a statistically significant difference in performance between the Reveal and reference culture [U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA/BAM) or U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS)] methods for only a single food in one trial (pasteurized crab meat) at the 27 h enrichment time point, with more positive results obtained with the FDA/BAM reference method. No foods showed statistically significant differences in method performance at the 30 h time point. Independent laboratory testing of 3 foods again produced a statistically significant difference in results for crab meat at the 27 h time point; otherwise results of the Reveal and reference methods were statistically equivalent. Overall, considering both internal and independent laboratory trials, sensitivity of the Reveal method relative to the reference culture procedures in testing of foods was 85.9% at 27 h and 97.1% at 30 h. Results from 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the Reveal method was more productive than the reference USDA-FSIS culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the Reveal method at the 24 h time point. Overall, sensitivity of the Reveal method at 24 h relative to that of the USDA-FSIS method was 153%. The Reveal method exhibited extremely high specificity, with only a single false-positive result in all trials combined for overall specificity of 99.5%.

  17. eLISA Telescope In-field Pointing and Scattered Light Study

    NASA Astrophysics Data System (ADS)

    Livas, J.; Sankar, S.; West, G.; Seals, L.; Howard, J.; Fitzsimons, E.

    2017-05-01

    The orbital motion of the three spacecraft that make up the eLISA Observatory constellation causes long-arm line of sight variations of approximately ± one degree over the course of a year. The baseline solution is to package the telescope, the optical bench, and the gravitational reference sensor (GRS) into an optical assembly at each end of the measurement arm, and then to articulate the assembly. An optical phase reference is exchanged between the moving optical benches with a single mode optical fiber (“backlink” fiber). An alternative solution, referred to as in-field pointing, embeds a steering mirror into the optical design, fixing the optical benches and eliminating the backlink fiber, but requiring the additional complication of a two-stage optical design for the telescope. We examine the impact of an in-field pointing design on the scattered light performance.

  18. Flow cytogenetics and chromosome sorting.

    PubMed

    Cram, L S

    1990-06-01

    This review of flow cytogenetics and chromosome sorting provides an overview of general information in the field and describes recent developments in more detail. From the early developments of chromosome analysis involving single parameter or one color analysis to the latest developments in slit scanning of single chromosomes in a flow stream, the field has progressed rapidly and most importantly has served as an important enabling technology for the human genome project. Technological innovations that advanced flow cytogenetics are described and referenced. Applications in basic cell biology, molecular biology, and clinical investigations are presented. The necessary characteristics for large number chromosome sorting are highlighted. References to recent review articles are provided as a starting point for locating individual references that provide more detail. Specific references are provided for recent developments.

  19. Real-time time-division color electroholography using a single GPU and a USB module for synchronizing reference light.

    PubMed

    Araki, Hiromitsu; Takada, Naoki; Niwase, Hiroaki; Ikawa, Shohei; Fujiwara, Masato; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2015-12-01

    We propose real-time time-division color electroholography using a single graphics processing unit (GPU) and a simple synchronization system of reference light. To facilitate real-time time-division color electroholography, we developed a light emitting diode (LED) controller with a universal serial bus (USB) module and the drive circuit for reference light. A one-chip RGB LED connected to a personal computer via an LED controller was used as the reference light. A single GPU calculates three computer-generated holograms (CGHs) suitable for red, green, and blue colors in each frame of a three-dimensional (3D) movie. After CGH calculation using a single GPU, the CPU can synchronize the CGH display with the color switching of the one-chip RGB LED via the LED controller. Consequently, we succeeded in real-time time-division color electroholography for a 3D object consisting of around 1000 points per color when an NVIDIA GeForce GTX TITAN was used as the GPU. Furthermore, we implemented the proposed method in various GPUs. The experimental results showed that the proposed method was effective for various GPUs.

  20. First Point-Spread Function and X-Ray Phase Contrast Imaging Results with an 88-mm Diameter Single Crystal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumpkin, A. H.; Garson, A. B.; Anastasio, M. A.

    In this study, we report initial demonstrations of the use of single crystals in indirect x-ray imaging with a benchtop implementation of propagation-based (PB) x-ray phase contrast imaging. Based on single Gaussian peak fits to the x-ray images, we observed a four times smaller system point-spread function (PSF) with the 50-μm thick single crystal scintillators than with the reference polycrystalline phosphor/scintillator. Fiber-optic plate depth-of-focus and Al reflective-coating aspects are also elucidated. Guided by the results from the 25-mm diameter crystal samples, we report additionally the first results with a unique 88-mm diameter single crystal bonded to a fiber optic plate and coupled to the large format CCD. Both PSF and x-ray phase contrast imaging data are quantified and presented.

  1. Experimental Evaluation of a Deformable Registration Algorithm for Motion Correction in PET-CT Guided Biopsy.

    PubMed

    Khare, Rahul; Sala, Guillaume; Kinahan, Paul; Esposito, Giuseppe; Banovac, Filip; Cleary, Kevin; Enquobahrie, Andinet

    2013-01-01

    Positron emission tomography computed tomography (PET-CT) images are increasingly being used for guidance during percutaneous biopsy. However, due to the physics of image acquisition, PET-CT images are susceptible to problems due to respiratory and cardiac motion, leading to inaccurate tumor localization, shape distortion, and attenuation correction. To address these problems, we present a method for motion correction that relies on respiratory gated CT images aligned using a deformable registration algorithm. In this work, we use two deformable registration algorithms and two optimization approaches for registering the CT images obtained over the respiratory cycle. The two algorithms are the BSpline and the symmetric forces Demons registration. In the first optimization approach, CT images at each time point are registered to a single reference time point. In the second approach, deformation maps are obtained to align each CT time point with its adjacent time point. These deformations are then composed to find the deformation with respect to a reference time point. We evaluate these two algorithms and optimization approaches using respiratory gated CT images obtained from 7 patients. Our results show that overall the BSpline registration algorithm with the reference optimization approach gives the best results.
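
    A minimal sketch of the second optimization approach described above: composing per-interval deformations so that points at any time point can be carried back to the reference time point. The deformations are modelled here as plain coordinate-mapping callables, with small rigid shifts standing in for real BSpline/Demons outputs; the registration itself is assumed to be done elsewhere, and the direction convention is ours.

    ```python
    from typing import Callable, Sequence
    import numpy as np

    # A deformation is modelled as a function mapping physical points (N, 3) at
    # time point i+1 onto their corresponding locations at time point i.
    Deformation = Callable[[np.ndarray], np.ndarray]

    def compose_to_reference(adjacent_maps: Sequence[Deformation]) -> Deformation:
        """Compose per-interval deformations so points at the last time point are
        carried all the way back to the reference (first) time point."""
        def composed(points: np.ndarray) -> np.ndarray:
            for mapping in reversed(adjacent_maps):   # last interval first
                points = mapping(points)
            return points
        return composed

    # Hypothetical per-interval maps: rigid shifts standing in for registration output.
    shifts = [np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.5, -1.0])]
    maps = [lambda p, s=s: p + s for s in shifts]

    to_reference = compose_to_reference(maps)
    print(to_reference(np.array([[10.0, 20.0, 30.0]])))  # point at t=2 expressed at t=0
    ```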

  2. Finite Element Analysis of Doorframe Structure of Single Oblique Pole Type in Container Crane

    NASA Astrophysics Data System (ADS)

    Cheng, X. F.; Wu, F. Q.; Tang, G.; Hu, X.

    2017-07-01

    Compared with the composite type, the single oblique pole type has several advantages, such as a simpler structure, lower steel consumption, and a high safe overhead clearance. The finite element model of the single oblique pole type is established node by node in ANSYS, and more details are considered when the model is simplified, such as the cross-sections of the Girder and Boom, the torque induced in the Girder and Boom by the Machinery House and Trolley, and the density adjusted according to the way of simplification. The stress and deformation of ten observation points are compared and analyzed when the trolley is in nine dangerous positions. Based on the results of the analysis, six dangerous points are selected to provide a reference for the detection and evaluation of the container crane.

  3. Single-chip microcomputer application in high-altitude balloon orientation system

    NASA Technical Reports Server (NTRS)

    Lim, T. S.; Ehrmann, C. H.; Allison, S. R.

    1980-01-01

    This paper describes the application of a single-chip microcomputer in a high-altitude balloon instrumentation system. The system, consisting of a magnetometer, a stepping motor, a microcomputer and a gray code shaft encoder, is used to provide an orientation reference to point a scientific instrument at an object in space. The single-chip microcomputer, Intel's 8748, consisting of a CPU, program memory, data memory and I/O ports, is used to control the orientation of the system.

  4. Multisurface fixture permits easy grinding of tool bit angles

    NASA Technical Reports Server (NTRS)

    Jones, C. R.

    1966-01-01

    Multisurface fixture with a tool holder permits accurate grinding and finishing of right and left hand single point threading tools. All angles are ground by changing the fixture position to rest at various reference angles without removing the tool from the holder.

  5. High-resolution velocimetry in energetic tidal currents using a convergent-beam acoustic Doppler profiler

    NASA Astrophysics Data System (ADS)

    Sellar, Brian; Harding, Samuel; Richmond, Marshall

    2015-08-01

    An array of single-beam acoustic Doppler profilers has been developed for the high resolution measurement of three-dimensional tidal flow velocities and subsequently tested in an energetic tidal site. This configuration has been developed to increase spatial resolution of velocity measurements in comparison to conventional acoustic Doppler profilers (ADPs) which characteristically use divergent acoustic beams emanating from a single instrument. This is achieved using geometrically convergent acoustic beams creating a sample volume at the focal point of 0.03 m3. Away from the focal point, the array is also able to simultaneously reconstruct three-dimensional velocity components in a profile throughout the water column, and is referred to herein as a convergent-beam acoustic Doppler profiler (C-ADP). Mid-depth profiling is achieved through integration of the sensor platform with the operational commercial-scale Alstom 1 MW DeepGen-IV Tidal Turbine deployed at the European Marine Energy Center, Orkney Isles, UK. This proof-of-concept paper outlines the C-ADP system configuration and comparison to measurements provided by co-installed reference instrumentation. Comparison of C-ADP to standard divergent ADP (D-ADP) velocity measurements reveals a mean difference of 8 mm s-1, standard deviation of 18 mm s-1, and an order of magnitude reduction in realisable length scale. C-ADP focal point measurements compared to a proximal single-beam reference show peak cross-correlation coefficient of 0.96 over 4.0 s averaging period and a 47% reduction in Doppler noise. The dual functionality of the C-ADP as a profiling instrument with a high resolution focal point makes this configuration a unique and valuable advancement in underwater velocimetry enabling improved quantification of flow turbulence. Since waves are simultaneously measured via profiled velocities, pressure measurements and surface detection, it is expected that derivatives of this system will be a powerful tool in wave-current interaction studies.
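
    A minimal sketch of the focal-point comparison quoted above: the peak of the normalised cross-correlation between two velocity time series (C-ADP focal point versus a co-located single-beam reference). The sampling rate, record length, and signal content are invented for illustration.

    ```python
    import numpy as np

    def peak_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
        """Peak of the normalised cross-correlation between two equal-length series."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        xcorr = np.correlate(a, b, mode="full") / len(a)
        return float(xcorr.max())

    # Hypothetical 4 s records at 4 Hz: a shared tidal signal plus instrument noise.
    rng = np.random.default_rng(1)
    t = np.arange(0, 4.0, 0.25)
    flow = 2.5 + 0.3 * np.sin(2 * np.pi * 0.2 * t)
    c_adp = flow + rng.normal(scale=0.02, size=t.size)
    reference = flow + rng.normal(scale=0.03, size=t.size)
    print(f"peak correlation: {peak_cross_correlation(c_adp, reference):.2f}")
    ```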

  6. The limits of boundaries: unpacking localization and cognitive mapping relative to a boundary.

    PubMed

    Zhou, Ruojing; Mou, Weimin

    2018-05-01

    Previous research (Zhou, Mou, Journal of Experimental Psychology: Learning, Memory and Cognition 42(8):1316-1323, 2016) showed that learning individual locations relative to a single landmark, compared to learning relative to a boundary, led to more accurate inferences of inter-object spatial relations (cognitive mapping of multiple locations). Following our past findings, the current study investigated whether the larger number of reference points provided by a homogeneous circular boundary, as well as less accessible knowledge of direct spatial relations among the multiple reference points, would lead to less effective cognitive mapping relative to the boundary. Accordingly, we manipulated (a) the number of primary reference points (one segment drawn from a circular boundary, four such segments, vs. the complete boundary) available when participants were localizing four objects sequentially (Experiment 1) and (b) the extendedness of each of the four segments (Experiment 2). The results showed that cognitive mapping was the least accurate in the whole boundary condition. However, expanding each of the four segments did not affect the accuracy of cognitive mapping until the four were connected to form a continuous boundary. These findings indicate that when encoding locations relative to a homogeneous boundary, participants segmented the boundary into differentiated pieces and subsequently chose the most informative local part (i.e., the segment closest in distance to one location) as the primary reference point for a particular location. During this process, direct spatial relations among the reference points were likely not attended to. These findings suggest that people might encode and represent bounded space in a fragmented fashion when localizing within a homogeneous boundary.

  7. Spacecraft attitude calibration/verification baseline study

    NASA Technical Reports Server (NTRS)

    Chen, L. C.

    1981-01-01

    A baseline study for a generalized spacecraft attitude calibration/verification system is presented. It can be used to define software specifications for three major functions required by a mission: the pre-launch parameter observability and data collection strategy study; the in-flight sensor calibration; and the post-calibration attitude accuracy verification. Analytical considerations are given for both single-axis and three-axis spacecrafts. The three-axis attitudes considered include the inertial-pointing attitudes, the reference-pointing attitudes, and attitudes undergoing specific maneuvers. The attitude sensors and hardware considered include the Earth horizon sensors, the plane-field Sun sensors, the coarse and fine two-axis digital Sun sensors, the three-axis magnetometers, the fixed-head star trackers, and the inertial reference gyros.

  8. Definition of a Robust Supervisory Control Scheme for Sodium-Cooled Fast Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponciroli, R.; Passerini, S.; Vilim, R. B.

    In this work, an innovative control approach for metal-fueled Sodium-cooled Fast Reactors is proposed. With respect to the classical approach adopted for base-load Nuclear Power Plants, an alternative control strategy for operating the reactor at different power levels by respecting the system physical constraints is presented. In order to achieve a higher operational flexibility along with ensuring that the implemented control loops do not influence the system inherent passive safety features, a dedicated supervisory control scheme for the dynamic definition of the corresponding set-points to be supplied to the PID controllers is designed. In particular, the traditional approach based on the adoption of tabulated lookup tables for the set-point definition is found not to be robust enough when failures of the implemented SISO (Single Input Single Output) actuators occur. Therefore, a feedback algorithm based on the Reference Governor approach, which allows for the optimization of reference signals according to the system operating conditions, is proposed.

  9. Interactive Reference Point Procedure Based on the Conic Scalarizing Function

    PubMed Central

    2014-01-01

    In multiobjective optimization methods, multiple conflicting objectives are typically converted into a single objective optimization problem with the help of scalarizing functions. The conic scalarizing function is a general characterization of Benson proper efficient solutions of non-convex multiobjective problems in terms of saddle points of scalar Lagrangian functions. This approach preserves convexity. The conic scalarizing function, as a part of a posteriori or a priori methods, has successfully been applied to several real-life problems. In this paper, we propose a conic scalarizing function based interactive reference point procedure where the decision maker actively takes part in the solution process and directs the search according to her or his preferences. An algorithmic framework for the interactive solution of multiple objective optimization problems is presented and is utilized for solving some illustrative examples. PMID:24723795

  10. Optimal Strategy for Integrated Dynamic Inventory Control and Supplier Selection in Unknown Environment via Stochastic Dynamic Programming

    NASA Astrophysics Data System (ADS)

    Sutrisno; Widowati; Solikhin

    2016-06-01

    In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single product inventory control problem and supplier selection problem where the demand and purchasing cost parameters are random. For each time period, by using the proposed model, we decide the optimal supplier and calculate the optimal product volume purchased from the optimal supplier so that the inventory level will be located at some point as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. From the results, for each time period, the proposed model generated the optimal supplier, and the inventory level tracked the reference point well.
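
    A minimal sketch of the decision problem described above, with deterministic stand-ins for the random demand and purchase costs and a brute-force enumeration standing in for the stochastic dynamic program in the paper: in each period, pick the supplier and order volume that keep the inventory level closest to the reference point at minimal cost. All numbers and the quadratic tracking penalty are illustrative assumptions.

    ```python
    from itertools import product

    # Illustrative data: two suppliers with unit costs and maximum order volumes.
    suppliers = {"A": {"cost": 2.0, "max_order": 60}, "B": {"cost": 2.4, "max_order": 100}}
    expected_demand = [50, 70, 40]       # per period
    reference_level = 30                 # desired inventory level
    tracking_weight = 0.5                # penalty per unit of squared deviation

    def plan(initial_inventory: int):
        """Enumerate supplier/volume decisions per period (small problem, exact search)."""
        best = None
        order_grid = range(0, 101, 10)
        for choices in product(product(suppliers, order_grid), repeat=len(expected_demand)):
            inv, cost, feasible = initial_inventory, 0.0, True
            for (name, qty), demand in zip(choices, expected_demand):
                if qty > suppliers[name]["max_order"]:
                    feasible = False
                    break
                inv = inv + qty - demand
                cost += suppliers[name]["cost"] * qty
                cost += tracking_weight * (inv - reference_level) ** 2
            if feasible and (best is None or cost < best[0]):
                best = (cost, choices)
        return best

    print(plan(initial_inventory=20))
    ```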

  11. Single-mode fiber systems for deep space communication network

    NASA Technical Reports Server (NTRS)

    Lutes, G.

    1982-01-01

    The present investigation is concerned with the development of single-mode optical fiber distribution systems. It is pointed out that single-mode fibers represent potentially a superior medium for the distribution of frequency and timing reference signals and wideband (400 MHz) IF signals. In this connection, single-mode fibers have the potential to improve the capability and precision of NASA's Deep Space Network (DSN). Attention is given to problems related to precise time synchronization throughout the DSN, questions regarding the selection of a transmission medium, and the function of the distribution systems, taking into account specific improvements possible by an employment of single-mode fibers.

  12. Dosimetric evaluation of two treatment planning systems for high dose rate brachytherapy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shwetha, Bondel; Ravikumar, Manickam, E-mail: drravikumarm@gmail.com; Supe, Sanjay S.

    2012-04-01

    Various treatment planning systems are used to design plans for the treatment of cervical cancer using high-dose-rate brachytherapy. The purpose of this study was to make a dosimetric comparison of the 2 treatment planning systems from Varian medical systems, namely ABACUS and BrachyVision. The dose distribution of Ir-192 source generated with a single dwell position was compared using ABACUS (version 3.1) and BrachyVision (version 6.5) planning systems. Ten patients with intracavitary applications were planned on both systems using orthogonal radiographs. Doses were calculated at the prescription points (point A, right and left) and reference points RU, LU, RM, LM, bladder, and rectum. For single dwell position, little difference was observed in the doses to points along the perpendicular bisector. The mean difference between ABACUS and BrachyVision for these points was 1.88%. The mean difference in the dose calculated toward the distal end of the cable by ABACUS and BrachyVision was 3.78%, whereas along the proximal end the difference was 19.82%. For the patient case there was approximately 2% difference between ABACUS and BrachyVision planning for dose to the prescription points. The dose difference for the reference points ranged from 0.4-1.5%. For bladder and rectum, the differences were 5.2% and 13.5%, respectively. The dose difference between the rectum points was statistically significant. There is considerable difference between the dose calculations performed by the 2 treatment planning systems. It is seen that these discrepancies are caused by the differences in the calculation methodology adopted by the 2 systems.

  13. From Relativistic Electrons to X-ray Phase Contrast Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumpkin, A. H.; Garson, A. B.; Anastasio, M. A.

    2017-10-09

    We report the initial demonstrations of the use of single crystals in indirect x-ray imaging for x-ray phase contrast imaging at the Washington University in St. Louis Computational Bioimaging Laboratory (CBL). Based on single Gaussian peak fits to the x-ray images, we observed a four times smaller system point spread function (21 μm (FWHM)) with the 25-mm diameter single crystals than the reference polycrystalline phosphor’s 80-μm value. Potential fiber-optic plate depth-of-focus aspects and 33-μm diameter carbon fiber imaging are also addressed.

  14. Limitations of the Mycobacterium tuberculosis reference genome H37Rv in the detection of virulence-related loci.

    PubMed

    O'Toole, Ronan F; Gautam, Sanjay S

    2017-10-01

    The genome sequence of Mycobacterium tuberculosis strain H37Rv is an important and valuable reference point in the study of M. tuberculosis phylogeny, molecular epidemiology, and drug-resistance mutations. However, it is becoming apparent that use of H37Rv as a sole reference genome in analysing clinical isolates presents some limitations to fully investigating M. tuberculosis virulence. Here, we examine the presence of single locus variants and the absence of entire genes in H37Rv with respect to strains that are responsible for cases and outbreaks of tuberculosis. We discuss how these polymorphisms may affect phenotypic properties of H37Rv including pathogenicity. Based on our observations and those of other researchers, we propose that use of a single reference genome, H37Rv, is not sufficient for the detection and characterisation of M. tuberculosis virulence-related loci. We recommend incorporation of genome sequences of other reference strains, in particular, direct clinical isolates, in such analyses in addition to H37Rv. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Bone mineral content measurement in small infants by single-photon absorptiometry: current methodologic issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steichen, J.J.; Asch, P.A.; Tsang, R.C.

    1988-07-01

    Single-photon absorptiometry (SPA), developed in 1963 and adapted for infants by Steichen et al. in 1976, is an important tool to quantitate bone mineralization in infants. Studies of infants in which SPA was used include studies of fetal bone mineralization and postnatal bone mineralization in very low birth weight infants. The SPA technique has also been used as a research tool to investigate longitudinal bone mineralization and to study the effect of nutrition and disease processes such as rickets or osteopenia of prematurity. At present, it has little direct clinical application for diagnosing bone disease in single patients. The bones most often used to measure bone mineral content (BMC) are the radius, the ulna, and, less often, the humerus. The radius appears to be preferred as a suitable bone to measure BMC in infants. It is easily accessible; anatomic reference points are easily palpated and have a constant relationship to the radial mid-shaft site; soft tissue does not affect either palpation of anatomic reference points or BMC quantitation in vivo. The peripheral location of the radius minimizes body radiation exposure. Trabecular and cortical bone can be measured separately. Extensive background studies exist on radial BMC in small infants. Most important, the radius has a relatively long zone of constant BMC. Finally, SPA for BMC in the radius has a high degree of precision and accuracy. 61 references.

  16. Echoes of a Forgotten Past: Eugenics, Testing, and Education Reform.

    ERIC Educational Resources Information Center

    Stoskopf, Alan

    2002-01-01

    Review of the work of Goddard, Terman, and Thorndike and the role of eugenics and the intelligence quotient in testing points out dangers to be avoided in the current testing climate, such as use of the business model, single-number scores, and tracking. (Contains 42 references.) (SK)

  17. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video camera, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
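
    A minimal sketch of the image-differencing step described above: subtracting a pre-illumination frame from the laser-illuminated frame so that common background pixels cancel, locating the remaining bright spot, and measuring its offset from a fixed reference point. The threshold, frame sizes, and reference location are illustrative assumptions.

    ```python
    import numpy as np

    def laser_spot_disparity(before: np.ndarray, after: np.ndarray,
                             reference_xy: tuple[float, float],
                             threshold: int = 40) -> tuple[float, float]:
        """Locate the laser spot as the centroid of pixels that brightened between frames,
        and return its (dx, dy) disparity from the reference point."""
        diff = after.astype(np.int32) - before.astype(np.int32)
        mask = diff > threshold                  # background pixels common to both frames cancel
        if not mask.any():
            raise ValueError("no laser spot detected")
        ys, xs = np.nonzero(mask)
        spot_x, spot_y = xs.mean(), ys.mean()
        return spot_x - reference_xy[0], spot_y - reference_xy[1]

    # Hypothetical 8-bit frames: identical background, plus a bright spot in the second frame.
    before = np.full((120, 160), 30, dtype=np.uint8)
    after = before.copy()
    after[60:63, 100:103] = 220
    print(laser_spot_disparity(before, after, reference_xy=(80.0, 60.0)))
    ```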

  18. Control system design for the large space systems technology reference platform

    NASA Technical Reports Server (NTRS)

    Edmunds, R. S.

    1982-01-01

    Structural models and classical frequency domain control system designs were developed for the large space systems technology (LSST) reference platform which consists of a central bus structure, solar panels, and platform arms on which a variety of experiments may be mounted. It is shown that operation of multiple independently articulated payloads on a single platform presents major problems when subarc second pointing stability is required. Experiment compatibility will be an important operational consideration for systems of this type.

  19. How best to geo-reference farms? A case study from Cornwall, England.

    PubMed

    Durr, P A; Froggatt, A E A

    2002-11-29

    The commonest way of geo-referencing farms as single points is using the location of the farmhouse as either read off a map or approximated by its postcode. While these two methods may be adequate for small farms, they are unlikely to be satisfactory for large ones, or alternatively when they are comprised of several discrete units or holdings. In order to investigate the best representation of the total farm polygon(s) by a single point, we undertook a study using nearly 500 actual farm boundaries in the county of Cornwall, England. For each farm, the farm boundaries were digitised, and its area and centroid determined using ArcView 3.2. A variety of point geo-referencing systems were tested to find the best single point location for a farm, as judged by the proportion of farm area captured. Whilst the centroid was found to capture the largest area, the main farm building was judged to be the best geo-referencing method for practical purposes. In contrast, the various systems of geo-coding using the farm postal address performed relatively poorly. Where there are separate parcels of land managed together in a single parish, they may be identified as a single unit, but if there are separate parcels in different parishes they should be identified as separate units. The implications of these results for Great Britain's national animal health information system (VETNET) are discussed.
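
    A minimal sketch of computing a farm polygon's centroid, one of the candidate single-point geo-references compared above, using the standard shoelace formulas. The coordinates are invented, and multi-parcel handling is omitted.

    ```python
    def polygon_centroid(vertices):
        """Centroid of a simple (non-self-intersecting) polygon given as [(x, y), ...]."""
        area2 = 0.0          # twice the signed area
        cx = cy = 0.0
        n = len(vertices)
        for i in range(n):
            x0, y0 = vertices[i]
            x1, y1 = vertices[(i + 1) % n]
            cross = x0 * y1 - x1 * y0
            area2 += cross
            cx += (x0 + x1) * cross
            cy += (y0 + y1) * cross
        return cx / (3.0 * area2), cy / (3.0 * area2)

    # Hypothetical field boundary in local easting/northing metres.
    field = [(0, 0), (400, 0), (420, 250), (30, 300)]
    print(polygon_centroid(field))
    ```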

  20. Communication: translational Brownian motion for particles of arbitrary shape.

    PubMed

    Cichocki, Bogdan; Ekiel-Jeżewska, Maria L; Wajnryb, Eligiusz

    2012-02-21

    A single Brownian particle of arbitrary shape is considered. The time-dependent translational mean square displacement W(t) of a reference point at this particle is evaluated from the Smoluchowski equation. It is shown that at times larger than the characteristic time scale of the rotational Brownian relaxation, the slope of W(t) becomes independent of the choice of a reference point. Moreover, it is proved that in the long-time limit, the slope of W(t) is determined uniquely by the trace of the translational-translational mobility matrix μ(tt) evaluated with respect to the hydrodynamic center of mobility. The result is applicable to dynamic light scattering measurements, which indeed are performed in the long-time limit. © 2012 American Institute of Physics
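
    A hedged restatement, in equations, of the long-time result summarised above, assuming the standard Einstein relation between mobility and diffusion; the notation is ours, not the paper's.

    ```latex
    % W(t): translational mean square displacement of a reference point on the particle;
    % \tau_{\mathrm{rot}}: rotational relaxation time.
    W(t) \equiv \bigl\langle \lvert \mathbf{r}(t) - \mathbf{r}(0) \rvert^{2} \bigr\rangle ,
    \qquad
    \lim_{t \gg \tau_{\mathrm{rot}}} \frac{\mathrm{d}W}{\mathrm{d}t}
      \;=\; 2\, k_{B} T \,\operatorname{Tr} \boldsymbol{\mu}^{tt}
      \;=\; 6 \bar{D},
    \qquad
    \bar{D} \;=\; \tfrac{1}{3}\, k_{B} T \,\operatorname{Tr} \boldsymbol{\mu}^{tt},
    % with \boldsymbol{\mu}^{tt} evaluated at the hydrodynamic center of mobility,
    % so the long-time slope does not depend on the chosen reference point.
    ```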

  1. Device for modular input high-speed multi-channel digitizing of electrical data

    DOEpatents

    VanDeusen, Alan L.; Crist, Charles E.

    1995-09-26

    A multi-channel high-speed digitizer module converts a plurality of analog signals to digital signals (digitizing) and stores the signals in a memory device. The analog input channels are digitized simultaneously at high speed with a relatively large number of on-board memory data points per channel. The module provides an automated calibration based upon a single voltage reference source. Low signal noise at such a high density and sample rate is accomplished by ensuring the A/D converters are clocked at the same point in the noise cycle each time so that synchronous noise sampling occurs. This sampling process, in conjunction with an automated calibration, yields signal noise levels well below the noise level present on the analog reference voltages.
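
    A minimal sketch of the automated-calibration idea described in the patent abstract: deriving a per-channel gain and offset from a reading of the known reference voltage, then applying the correction to raw codes. The two-point scheme (using a grounded input as the second point) and all numbers are our own illustrative assumptions, not taken from the patent.

    ```python
    import numpy as np

    def calibrate_channel(code_at_ground: float, code_at_reference: float,
                          v_reference: float):
        """Return (gain, offset) so that volts = gain * code + offset."""
        gain = v_reference / (code_at_reference - code_at_ground)
        offset = -gain * code_at_ground
        return gain, offset

    def apply_calibration(raw_codes: np.ndarray, gain: float, offset: float) -> np.ndarray:
        """Convert raw ADC codes to calibrated voltages."""
        return gain * raw_codes + offset

    # Hypothetical 12-bit channel: the 5.000 V reference reads 4061 counts, ground reads 12.
    gain, offset = calibrate_channel(code_at_ground=12, code_at_reference=4061, v_reference=5.0)
    print(apply_calibration(np.array([12, 2036, 4061]), gain, offset))
    ```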

  2. Comparison of the pharmacokinetics and safety of three formulations of infliximab (CT-P13, EU-approved reference infliximab and the US-licensed reference infliximab) in healthy subjects: a randomized, double-blind, three-arm, parallel-group, single-dose, Phase I study.

    PubMed

    Park, Won; Lee, Sang Joon; Yun, Jihye; Yoo, Dae Hyun

    2015-01-01

    To compare the pharmacokinetics (PK), safety and tolerability of biosimilar infliximab (CT-P13 [Remsima(®), Inflectra(®)]) with two formulations of the reference medicinal product (RMP) (Remicade(®)) from either Europe (EU-RMP) or the USA (US-RMP). This was a double-blind, three-arm, parallel-group study (EudraCT number: 2013-003173-10). Healthy subjects received single doses (5 mg/kg) of CT-P13 (n = 71), EU-RMP (n = 71) or US-RMP (n = 71). The primary objective was to compare the PK profiles for the three formulations. Assessments of comparative safety and tolerability were secondary objectives. Baseline demographics were well balanced across the three groups. Primary end points (Cmax, AUClast and AUCinf) were equivalent between all formulations (CT-P13 vs EU-RMP; CT-P13 vs US-RMP; EU-RMP vs US-RMP). All other PK end points supported the high similarity of the three treatments. Tolerability profiles of the formulations were similar. The PK profile of CT-P13 is highly similar to EU-RMP and US-RMP. All three formulations were equally well tolerated.

  3. Modeling Canadian Quality Control Test Program for Steroid Hormone Receptors in Breast Cancer: Diagnostic Accuracy Study.

    PubMed

    Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan

    The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. Results show 88 laboratories participated in quality control at up to 13 time points using typically 37 to 54 histology samples. In meta-analysis across all time points no laboratories have sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to reference standard identified with Generalized Estimating Equation modeling also have reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with a significantly lower performance to be targeted for advice.
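
    A minimal sketch of the contrast discussed above: per-run sensitivity with a binomial confidence interval versus a pooled estimate across all runs, which narrows the interval. The counts are invented, and the normal-approximation interval stands in for whatever interval the programme actually reports.

    ```python
    import math

    def sensitivity_with_ci(true_pos: int, false_neg: int, z: float = 1.96):
        """Sensitivity and a normal-approximation 95% confidence interval."""
        n = true_pos + false_neg
        p = true_pos / n
        half_width = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half_width), min(1.0, p + half_width)

    # Hypothetical per-run counts (true positives, false negatives) for one laboratory.
    runs = [(38, 2), (41, 1), (36, 3), (40, 2)]

    for i, (tp, fn) in enumerate(runs, start=1):
        print(f"run {i}: sensitivity {sensitivity_with_ci(tp, fn)}")

    pooled_tp = sum(tp for tp, _ in runs)
    pooled_fn = sum(fn for _, fn in runs)
    print("pooled:", sensitivity_with_ci(pooled_tp, pooled_fn))  # narrower interval
    ```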

  4. Active Ingredient - AZ

    EPA Pesticide Factsheets

    EPA Pesticide Chemical Search allows a user to easily find the pesticide chemical or active ingredient that they are interested in by using an array of simple to advanced search options. Chemical Search provides a single point of reference for easy access to information previously published in a variety of locations, including various EPA web pages and Regulations.gov.

  5. Field testing of a convergent array of acoustic Doppler profilers for high-resolution velocimetry in energetic tidal currents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, Samuel F.; Sellar, Brian; Richmond, Marshall C.

    An array of single-beam acoustic Doppler profilers has been developed for the high resolution measurement of three-dimensional tidal flow velocities and subsequently tested in an energetic tidal site. This configuration has been developed to increase spatial resolution of velocity measurements in comparison to conventional acoustic Doppler profilers (ADPs) which characteristically use divergent acoustic beams emanating from a single instrument. This is achieved using geometrically convergent acoustic beams creating a sample volume at the focal point of 0.03 m3. Away from the focal point, the array is also able to simultaneously reconstruct three-dimensional velocity components in a profile throughout the water column, and is referred to herein as a convergent-beam acoustic Doppler profiler (C-ADP). Mid-depth profiling is achieved through integration of the sensor platform with the operational commercial-scale Alstom 1MW DeepGen-IV Tidal Turbine deployed at the European Marine Energy Center, Orkney Isles, UK. This proof-of-concept paper outlines the C-ADP system configuration and comparison to measurements provided by co-installed reference instrumentation.

  6. Performance evaluation of thermally treated graphite felt electrodes for vanadium redox flow battery and their four-point single cell characterization

    NASA Astrophysics Data System (ADS)

    Mazúr, P.; Mrlík, J.; Beneš, J.; Pocedič, J.; Vrána, J.; Dundálek, J.; Kosek, J.

    2018-03-01

    In our contribution we study the electrocatalytic effect of oxygen functionalization of thermally treated graphite felt on kinetics of electrode reactions of vanadium redox flow battery. Chemical and morphological changes of the felts are analysed by standard physico-chemical characterization techniques. A complex four-point method is developed and employed for characterization of the felts in a laboratory single-cell. The method is based on electrochemical impedance spectroscopy and load curves measurements of positive and negative half-cells using platinum wire pseudo-reference electrodes. The distribution of ohmic and faradaic losses within a single-cell is evaluated for both symmetric and asymmetric electrode set-up with respect to the treatment conditions. Positive effect of oxygen functionalization is observed only for negative electrode, whereas kinetics of positive electrode reaction is almost unaffected by the treatment. This is in contradiction to the results of typically employed cyclovoltammetric characterization which indicate that both electrodes are enhanced by the treatment to a similar extent. The developed four-point characterization method can be further used, e.g., for component screening and in-situ durability studies on single-cell scale redox flow batteries of various chemistries.

  7. Alveolar Ridge Contouring with Free Connective Tissue Graft at Implant Placement: A 5-Year Consecutive Clinical Study.

    PubMed

    Hanser, Thomas; Khoury, Fouad

    2016-01-01

    This study evaluated volume stability after alveolar ridge contouring with free connective tissue grafts at implant placement in single-tooth gaps. A total of 52 single-tooth gaps with labial volume deficiencies in the maxilla (incisors, canines, and premolars) were consecutively treated with implants and concomitant free palatal connective tissue grafts in 46 patients between 2006 and 2009. Implants had to be covered with at least 2 mm peri-implant local bone after insertion. At implant placement, a free connective tissue graft from the palate was fixed inside a labial split-thickness flap to form an existing concave buccal alveolar ridge contour due to tissue volume deficiency into a convex shape. Standardized volumetric measurements of the labial alveolar contour using a template were evaluated before connective tissue grafting and at 2 weeks, 1 year, and 5 years after implant-prosthetic incorporation. Tissue volume had increased significantly (P < .05) in all six reference points representing the outer alveolar soft tissue contour of the implant from before connective tissue grafting to baseline (2 weeks after implant-prosthetic incorporation). Statistically, 50% of the reference points (P > .05) kept their volume from baseline to 1 year after prosthetic incorporation and from baseline to 5 years after prosthetic incorporation, respectively, whereas reference points located within the area of the implant sulcus showed a significant (P < .05) decrease in volume. Clinically, 5 years after prosthetic incorporation the originally concave buccal alveolar contour was still convex in all implants, leading to a continuous favorable anatomical shape and improved esthetic situation. Intraoral radiographs confirmed osseointegration and stable peri-implant parameters with a survival rate of 100% after a follow-up of approximately 5 years. Implant placement with concomitant free connective tissue grafting appears to be an appropriate long-term means to contour preexisting buccal alveolar volume deficiencies in single implants.

  8. Device for modular input high-speed multi-channel digitizing of electrical data

    DOEpatents

    VanDeusen, A.L.; Crist, C.E.

    1995-09-26

    A multi-channel high-speed digitizer module converts a plurality of analog signals to digital signals (digitizing) and stores the signals in a memory device. The analog input channels are digitized simultaneously at high speed with a relatively large number of on-board memory data points per channel. The module provides an automated calibration based upon a single voltage reference source. Low signal noise at such a high density and sample rate is accomplished by ensuring the A/D converters are clocked at the same point in the noise cycle each time so that synchronous noise sampling occurs. This sampling process, in conjunction with an automated calibration, yields signal noise levels well below the noise level present on the analog reference voltages. 1 fig.

  9. Design and Shielding of Radiotherapy Treatment Facilities; IPEM Report 75, 2nd Edition

    NASA Astrophysics Data System (ADS)

    Horton, Patrick; Eaton, David

    2017-07-01

    Design and Shielding of Radiotherapy Treatment Facilities provides readers with a single point of reference for radiation-protection advice relating to the construction and modification of radiotherapy facilities. The book assembles a faculty of national and international experts on all modalities, including megavoltage and kilovoltage photons, brachytherapy and high-energy particles, and on conventional and Monte Carlo shielding calculations. This book is a comprehensive reference for qualified experts and radiation-shielding designers in radiation physics, and is also useful to anyone involved in the design of radiotherapy facilities.

  10. New ab initio adiabatic potential energy surfaces and bound state calculations for the singlet ground X˜ 1A1 and excited C˜ 1B2(21A') states of SO2

    NASA Astrophysics Data System (ADS)

    Kłos, Jacek; Alexander, Millard H.; Kumar, Praveen; Poirier, Bill; Jiang, Bin; Guo, Hua

    2016-05-01

    We report new and more accurate adiabatic potential energy surfaces (PESs) for the ground X˜ 1A1 and electronically excited C˜ 1B2(21A') states of the SO2 molecule. Ab initio points are calculated using the explicitly correlated internally contracted multi-reference configuration interaction (icMRCI-F12) method. A second less accurate PES for the ground X ˜ state is also calculated using an explicitly correlated single-reference coupled-cluster method with single, double, and non-iterative triple excitations [CCSD(T)-F12]. With these new three-dimensional PESs, we determine energies of the vibrational bound states and compare these values to existing literature data and experiment.

  11. An improved AVC strategy applied in distributed wind power system

    NASA Astrophysics Data System (ADS)

    Zhao, Y. N.; Liu, Q. H.; Song, S. Y.; Mao, W.

    2016-08-01

    The traditional AVC strategy is mainly used in wind farms and considers only the grid connection point, which makes it unsuitable for distributed wind power systems. Therefore, this paper proposes an improved AVC strategy for distributed wind power systems. The strategy takes all nodes of the distribution network into consideration and chooses the node with the most severe voltage deviation as the control point for calculating the reactive power reference. The distribution principle then depends on the connection configuration: when wind generators are connected to the network at a single node, the reactive power reference is distributed according to reactive power capacity; when they are connected at multiple nodes, the reference is distributed according to sensitivity, as illustrated in the sketch below. Simulation results show the correctness and reliability of the strategy. Compared with the traditional control strategy, the strategy described in this paper makes full use of the generators' reactive power output capability according to the voltage condition of the distribution network and effectively improves the network voltage level.
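
    As a rough illustration of the two distribution rules, the following sketch splits a reactive power reference either by capacity weights or by sensitivity weights; the generator names, capacities and sensitivities are hypothetical, and the paper's actual controller is not reproduced here.

```python
# Hedged sketch of the two distribution rules described above (not the paper's code).
# Generator names and numbers are hypothetical illustrations.

def distribute_by_capacity(q_ref_total, capacities):
    """Single-node connection: split the reactive power reference
    in proportion to each generator's available reactive capacity."""
    total = sum(capacities.values())
    return {g: q_ref_total * c / total for g, c in capacities.items()}

def distribute_by_sensitivity(q_ref_total, sensitivities):
    """Multi-node connection: weight each generator by the voltage
    sensitivity of the worst-deviation node to its reactive injection."""
    total = sum(sensitivities.values())
    return {g: q_ref_total * s / total for g, s in sensitivities.items()}

if __name__ == "__main__":
    q_total = 1.2  # MVar needed to correct the worst node's voltage deviation
    print(distribute_by_capacity(q_total, {"WG1": 0.8, "WG2": 0.4}))
    print(distribute_by_sensitivity(q_total, {"WG1": 0.05, "WG2": 0.02}))
```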

  12. Points of Focus and Position: Intertextual Reference in PhD Theses

    ERIC Educational Resources Information Center

    Thompson, Paul

    2005-01-01

    This paper investigates the nature of texts produced for assessment at the highest level of advanced academic literacy: PhD theses. Eight theses from within a single department (Agricultural Botany) at a British university are the subject of study, and the contexts in which these texts were written are investigated through interviews with the…

  13. Fiber optic inclination detector system having a weighted sphere with reference points

    DOEpatents

    Cwalinski, Jeffrey P.

    1995-01-01

    A fiber optic inclination detector system for determining the angular displacement of an object from a reference surface includes a simple mechanical transducer which requires a minimum number of parts and no electrical components. The system employs a single light beam which is split into two light beams and provided to the transducer. Each light beam is amplitude modulated upon reflecting off the transducer to detect inclination. The power values associated with each of the reflected light beams are converted by a pair of photodetectors into voltage signals, and a microprocessor manipulates the voltage signals to provide a measure of the angular displacement between the object and the reference surface.

  14. Methane Flux Estimation from Point Sources using GOSAT Target Observation: Detection Limit and Improvements with Next Generation Instruments

    NASA Astrophysics Data System (ADS)

    Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.

    2017-12-01

    Atmospheric methane (CH4) plays an important role in global radiative forcing of climate, but its emission estimates have larger uncertainties than those of carbon dioxide (CO2). The area of anthropogenic emission sources is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the earth's surface. It has an agile pointing system and its footprint covers 87 km2 with a single detector. By specifying pointing angles and observation times for every orbit, TANSO-FTS can target various CH4 point sources together with reference points every 3 days over several years. We selected a reference point that represents the CH4 background density before or after targeting a point source. By combining the satellite-measured enhancement of the CH4 column density with surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts; a simplified version of this mass-balance step is sketched below. Here, we selected two sites on the US West Coast, where clear-sky frequency is high and a series of data is available. The natural gas leak at Aliso Canyon showed a large enhancement and its decrease with time since the initial blowout. We present a time series of flux estimates assuming the source is a single point without influx. The cattle feedlot in Chino, California has a weather station within the TANSO-FTS footprint. The wind speed is monitored continuously and the wind direction is stable at the time of the GOSAT overpass. The large TANSO-FTS footprint and strong wind decrease the enhancement below the noise level. Weak wind yields visible CH4 enhancements, but the velocity data have large uncertainties. We show the detection limit of single samples and how to reduce uncertainty using a time series of satellite data. We propose that next-generation instruments for accurate anthropogenic CO2 and CH4 flux estimation should have improved spatial resolution (~1 km2) to further enhance column density changes. We also propose adding imaging capability to monitor plume orientation. We will present laboratory model results and a sampling pattern optimization study that combines local emission source and global survey observations.
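
    The mass-balance idea behind such flux estimates can be illustrated with a minimal sketch: column enhancement times wind speed times an effective plume width. The numbers and the constant-width assumption below are purely illustrative and are not the study's retrieval.

```python
# Minimal mass-balance sketch: flux ~ column enhancement x wind speed x plume width.
# All values and the constant-width assumption are illustrative, not the study's numbers.

def point_source_flux(delta_column_kg_m2, wind_speed_m_s, plume_width_m):
    """Return an order-of-magnitude CH4 flux (kg/s) from a column enhancement
    (kg CH4 per m^2 above background), a transport wind speed, and an assumed
    cross-wind plume width."""
    return delta_column_kg_m2 * wind_speed_m_s * plume_width_m

# Example: 1e-4 kg/m^2 enhancement, 3 m/s wind, 5 km effective plume width
flux = point_source_flux(1e-4, 3.0, 5000.0)
print(f"Estimated flux: {flux:.2f} kg/s")
```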

  15. Surprising performance for vibrational frequencies of the distinguishable clusters with singles and doubles (DCSD) and MP2.5 approximations

    NASA Astrophysics Data System (ADS)

    Kesharwani, Manoj K.; Sylvetsky, Nitai; Martin, Jan M. L.

    2017-11-01

    We show that the DCSD (distinguishable clusters with all singles and doubles) correlation method permits the calculation of vibrational spectra at near-CCSD(T) quality but at no more than CCSD cost, and with comparatively inexpensive analytical gradients. For systems dominated by a single reference configuration, even MP2.5 is a viable alternative, at MP3 cost. MP2.5 performance for vibrational frequencies is comparable to double hybrids such as DSD-PBEP86-D3BJ, but without resorting to empirical parameters. DCSD is also quite suitable for computing zero-point vibrational energies in computational thermochemistry.

  16. Validation of a modification to Performance-Tested Method 010403: microwell DNA hybridization assay for detection of Listeria spp. in selected foods and selected environmental surfaces.

    PubMed

    Alles, Susan; Peng, Linda X; Mozola, Mark A

    2009-01-01

    A modification to Performance-Tested Method 010403, GeneQuence Listeria Test (DNAH method), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C, and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there were statistically significant differences in method performance between the DNAH method and reference culture procedures for only 2 foods (pasteurized crab meat and lettuce) at the 27 h enrichment time point and for only a single food (pasteurized crab meat) in one trial at the 30 h enrichment time point. Independent laboratory testing with 3 foods showed statistical equivalence between the methods for all foods, and results support the findings of the internal trials. Overall, considering both internal and independent laboratory trials, sensitivity of the DNAH method relative to the reference culture procedures was 90.5%. Results of testing 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the DNAH method was more productive than the reference U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the DNAH method at the 24 h time point. Overall, sensitivity of the DNAH method at 24 h relative to that of the USDA-FSIS method was 152%. The DNAH method exhibited extremely high specificity, with only 1% false-positive reactions overall.

  17. Evaluating a hybrid three-dimensional metrology system: merging data from optical and touch probe devices

    NASA Astrophysics Data System (ADS)

    Gerde, Janice R.; Christens-Barry, William A.

    2011-08-01

    In a project to meet requirements for CBP Laboratory analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS), a hybrid metrology system comprising both optical and touch probe devices has been assembled. A unique requirement must be met: to identify the interface (typically obscured in samples of concern) between the "external surface area upper" (ESAU) and the sole without physically destroying the sample. The sample outer surface is determined by discrete point cloud coordinates obtained using laser scanner optical measurements. Measurements from the optically inaccessible insole region are obtained using a coordinate measuring machine (CMM). That surface is similarly defined by point cloud data. Mathematically, the individual CMM and scanner data sets are transformed into a single, common reference frame. Custom software then fits a polynomial surface to the insole data and extends it to intersect the mesh fitted to the outer surface point cloud. This line of intersection defines the required ESAU boundary, thus permitting further fractional area calculations to determine the percentage of materials present. With a draft method in place, and first-level method validation underway, we examine the transformation of the two dissimilar data sets into the single, common reference frame. We will also consider the six previously identified potential error factors versus the method process. This paper reports our ongoing work and discusses our findings to date.
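
    One common way to bring two such point clouds into a single reference frame is a best-fit rigid transform over shared fiducial points (Kabsch/SVD). The sketch below illustrates that idea under the assumption of known correspondences; it is not the custom software described in the paper, and the coordinates are made up.

```python
# Hedged sketch of merging two point clouds into one reference frame with a
# best-fit rigid transform (Kabsch/SVD). Correspondences are assumed known
# (e.g., common fiducials); coordinates are illustrative only.
import numpy as np

def rigid_transform(src, dst):
    """Return rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# CMM fiducials (source) and the same fiducials seen in the scanner frame (target)
cmm = [[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]]
scan = [[1, 2, 3], [11, 2, 3], [1, 12, 3], [1, 2, 13]]
R, t = rigid_transform(cmm, scan)
merged = (np.asarray(cmm) @ R.T) + t         # CMM data expressed in the scanner frame
```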

  18. Improved method for selection of the NOAEL.

    PubMed

    Calabrese, E J; Baldwin, L A

    1994-02-01

    The paper proposes that the NOAEL be defined as the highest dosage tested that is not statistically significantly different from the control group while also being statistically significantly different from the LOAEL. This new definition requires that the NOAEL be defined from two points of reference rather than the current approach (i.e., a single point of reference), in which the NOAEL represents only the highest dosage not statistically significantly different from the control group. This proposal is necessary in order to differentiate NOAELs which are statistically distinguishable from the LOAEL. Under the new regime, only those satisfying both criteria would be designated a true NOAEL, while those satisfying only one criterion (i.e., not statistically significantly different from the control group) would be designated a "quasi" NOAEL and handled differently (i.e., via an uncertainty factor) for risk assessment purposes.
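
    The two-criteria rule can be illustrated with a short sketch that screens dose groups with Welch t-tests; the test choice, alpha level and data layout are illustrative assumptions, not prescriptions from the paper.

```python
# Illustrative sketch of the two-reference-point rule described above, using
# Welch t-tests as the significance screen; dose groups and data layout are assumed.
from scipy import stats

def pick_noael(control, dose_groups, alpha=0.05):
    """dose_groups: dict {dose: responses}. Returns (NOAEL dose, 'true'/'quasi'/None)."""
    doses = sorted(dose_groups)
    # LOAEL: lowest dose significantly different from the control group
    loael = next((d for d in doses
                  if stats.ttest_ind(control, dose_groups[d], equal_var=False).pvalue < alpha),
                 None)
    # candidate NOAEL: highest dose NOT significantly different from the control group
    candidates = [d for d in doses
                  if stats.ttest_ind(control, dose_groups[d], equal_var=False).pvalue >= alpha]
    if not candidates:
        return None, None
    noael = max(candidates)
    if loael is None:
        return noael, "quasi"
    # a "true" NOAEL must also differ significantly from the LOAEL group
    p = stats.ttest_ind(dose_groups[noael], dose_groups[loael], equal_var=False).pvalue
    return noael, ("true" if p < alpha else "quasi")
```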

  19. High sensitivity detection and quantitation of DNA copy number and single nucleotide variants with single color droplet digital PCR.

    PubMed

    Miotke, Laura; Lau, Billy T; Rumma, Rowza T; Ji, Hanlee P

    2014-03-04

    In this study, we present a highly customizable method for quantifying copy number and point mutations utilizing a single-color, droplet digital PCR platform. Droplet digital polymerase chain reaction (ddPCR) is rapidly replacing real-time quantitative PCR (qRT-PCR) as an efficient method of independent DNA quantification. Compared to quantitative PCR, ddPCR eliminates the need for traditional standards; instead, it measures target and reference DNA within the same well. The applications for ddPCR are widespread, including targeted quantitation of genetic aberrations, which is commonly achieved with a two-color fluorescent oligonucleotide probe (TaqMan) design. However, the overall cost and need for optimization can be greatly reduced with an alternative method of distinguishing between target and reference products using the nonspecific DNA binding properties of EvaGreen (EG) dye. By manipulating the length of the target and reference amplicons, we can distinguish between their fluorescent signals and quantify each independently. We demonstrate the effectiveness of this method by examining copy number in the proto-oncogene FLT3 and the common V600E point mutation in BRAF. Using a series of well-characterized control samples and cancer cell lines, we confirmed the accuracy of our method in quantifying mutation percentage and integer value copy number changes. As another novel feature, our assay was able to detect a mutation comprising less than 1% of an otherwise wild-type sample, as well as copy number changes from cancers even in the context of significant dilution with normal DNA. This flexible and cost-effective method of independent DNA quantification proves to be a robust alternative to the commercialized TaqMan assay.
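
    Independent of the single-color readout described here, ddPCR concentrations are generally recovered from droplet counts through a Poisson correction; a minimal sketch follows, with droplet counts and the droplet volume chosen purely for illustration.

```python
# Hedged sketch of the Poisson step common to droplet digital PCR quantification;
# droplet counts and the ~0.85 nL droplet volume are illustrative, not from the study.
import math

def ddpcr_concentration(positive, total, droplet_volume_nl=0.85):
    """Copies per microlitre from counts of positive and total droplets."""
    p = positive / total
    lam = -math.log(1.0 - p)                 # mean copies per droplet (Poisson)
    return lam / (droplet_volume_nl * 1e-3)  # convert nL to uL

target = ddpcr_concentration(4200, 15000)    # e.g., long target amplicon
reference = ddpcr_concentration(2100, 15000) # e.g., short reference amplicon
print("copy-number ratio:", target / reference)
```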

  20. The effect of vapor polarity and boiling point on breakthrough for binary mixtures on respirator carbon.

    PubMed

    Robbins, C A; Breysse, P N

    1996-08-01

    This research evaluated the effect of the polarity of a second vapor on the adsorption of a polar and a nonpolar vapor using the Wheeler model. To examine the effect of polarity, it was also necessary to observe the effect of component boiling point. The 1% breakthrough time (1% tb), kinetic adsorption capacity (W(e)), and rate constant (kv) of the Wheeler model were determined for vapor challenges on carbon beds for both p-xylene and pyrrole (referred to as test vapors) individually, and in equimolar binary mixtures with the polar and nonpolar vapors toluene, p-fluorotoluene, o-dichlorobenzene, and p-dichlorobenzene (referred to as probe vapors). Probe vapor polarity (0 to 2.5 Debye) did not systematically alter the 1% tb, W(e), or kv of the test vapors. The 1% tb and W(e) for test vapors in binary mixtures can be estimated reasonably well, using the Wheeler model, from single-vapor data (1% tb +/- 30%, W(e) +/- 20%). The test vapor 1% tb depended mainly on total vapor concentration in both single and binary systems. W(e) was proportional to test vapor fractional molar concentration (mole fraction) in mixtures. The kv for p-xylene was significantly different (p < or = 0.001) when compared according to probe boiling point; however, these differences were apparently of limited importance in estimating 1% tb for the range of boiling points tested (111 to 180 degrees C). Although the polarity and boiling point of chemicals in the range tested are not practically important in predicting 1% tb with the Wheeler model, an effect due to probe boiling point is suggested, and tests with chemicals of more widely ranging boiling point are warranted. Since the 1% tb, and thus, respirator service life, depends mainly on total vapor concentration, these data underscore the importance of taking into account the presence of other vapors when estimating respirator service life for a vapor in a mixture.
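
    For orientation, the modified Wheeler equation that relates these parameters is commonly written in the form below; the symbols are defined in the comment, and this is the standard textbook form rather than an equation quoted from the paper.

```latex
% Modified Wheeler equation (common form): W = carbon bed weight, Q = volumetric
% flow rate, C_0 = inlet concentration, C_x = breakthrough concentration,
% \rho_B = bulk density of the packed bed, W_e = kinetic adsorption capacity,
% k_v = adsorption rate constant.
t_b \;=\; \frac{W_e}{C_0\,Q}\left[\,W \;-\; \frac{\rho_B\,Q}{k_v}\,\ln\!\frac{C_0}{C_x}\right]
```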

  1. Microgravity Experiments Safety and Integration Requirements Document Tree

    NASA Technical Reports Server (NTRS)

    Hogan, Jean M.

    1995-01-01

    This report is a document tree of the safety and integration documents required to develop a space experiment. Pertinent document information for each of the top level (tier one) safety and integration documents, and their applicable and reference (tier two) documents has been identified. This information includes: document title, revision level, configuration management, electronic availability, listed applicable and reference documents, source for obtaining the document, and document owner. One of the main conclusions of this report is that no single document tree exists for all safety and integration documents, regardless of the Shuttle carrier. This document also identifies the need for a single point of contact for customers wishing to access documents. The data in this report serves as a valuable information source for the NASA Lewis Research Center Project Documentation Center, as well as for all developers of space experiments.

  2. Derivation of the Biot-Savart Law from Ampere's Law Using the Displacement Current

    ERIC Educational Resources Information Center

    Buschauer, Robert

    2013-01-01

    The equation describing the magnetic field due to a single, nonrelativistic charged particle moving at constant velocity is often referred to as the "Biot-Savart law for a point charge." Introductory calculus-based physics books usually state this law without proof. Advanced texts often present it either without proof or as a special…
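
    For reference, the law in question is usually written as:

```latex
% Biot-Savart law for a point charge q moving with constant, nonrelativistic
% velocity \mathbf{v}; \hat{\mathbf{r}} points from the charge to the field point.
\mathbf{B} \;=\; \frac{\mu_0}{4\pi}\,\frac{q\,\mathbf{v}\times\hat{\mathbf{r}}}{r^{2}}
```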

  3. A randomised, single-blind, single-dose, three-arm, parallel-group study in healthy subjects to demonstrate pharmacokinetic equivalence of ABP 501 and adalimumab

    PubMed Central

    Kaur, Primal; Chow, Vincent; Zhang, Nan; Moxness, Michael; Kaliyaperumal, Arunan; Markus, Richard

    2017-01-01

    Objective To demonstrate pharmacokinetic (PK) similarity of biosimilar candidate ABP 501 relative to adalimumab reference product from the USA and European Union (EU) and evaluate safety, tolerability and immunogenicity of ABP 501. Methods Randomised, single-blind, single-dose, three-arm, parallel-group study; healthy subjects were randomised to receive ABP 501 (n=67), adalimumab (USA) (n=69) or adalimumab (EU) (n=67) 40 mg subcutaneously. Primary end points were area under the serum concentration-time curve from time 0 extrapolated to infinity (AUCinf) and the maximum observed concentration (Cmax). Secondary end points included safety and immunogenicity. Results AUCinf and Cmax were similar across the three groups. Geometrical mean ratio (GMR) of AUCinf was 1.11 between ABP 501 and adalimumab (USA), and 1.04 between ABP 501 and adalimumab (EU). GMR of Cmax was 1.04 between ABP 501 and adalimumab (USA) and 0.96 between ABP 501 and adalimumab (EU). The 90% CIs for the GMRs of AUCinf and Cmax were within the prespecified standard PK equivalence criteria of 0.80 to 1.25. Treatment-related adverse events were mild to moderate and were reported for 35.8%, 24.6% and 41.8% of subjects in the ABP 501, adalimumab (USA) and adalimumab (EU) groups; incidence of antidrug antibodies (ADAbs) was similar among the study groups. Conclusions Results of this study demonstrated PK similarity of ABP 501 with adalimumab (USA) and adalimumab (EU) after a single 40-mg subcutaneous injection. No new safety signals with ABP 501 were identified. The safety and tolerability of ABP 501 was similar to the reference products, and similar ADAb rates were observed across the three groups. Trial registration number EudraCT number 2012-000785-37; Results. PMID:27466231

  4. Bioequivalence of generic alendronate sodium tablets (70 mg) to Fosamax® tablets (70 mg) in fasting, healthy volunteers: a randomized, open-label, three-way, reference-replicated crossover study

    PubMed Central

    Zhang, Yifan; Chen, Xiaoyan; Tang, Yunbiao; Lu, Youming; Guo, Lixia; Zhong, Dafang

    2017-01-01

    Purpose The aim of this study was to evaluate the bioequivalence of a generic product 70 mg alendronate sodium tablets with the reference product Fosamax® 70 mg tablet. Materials and methods A single-center, open-label, randomized, three-period, three-sequence, reference-replicated crossover study was performed in 36 healthy Chinese male volunteers under fasting conditions. In each study period, the volunteers received a single oral dose of the generic or reference product (70 mg). Blood samples were collected at pre-dose and up to 8 h after administration. The bioequivalence of the generic product to the reference product was assessed using the US Food and Drug Administration (FDA) and European Medicines Agency (EMA) reference-scaled average bioequivalence (RSABE) methods. Results The average maximum concentrations (Cmax) of alendronic acid were 64.78±43.76, 56.62±31.95, and 60.15±37.12 ng/mL after the single dose of the generic product and the first and second doses of the reference product, respectively. The areas under the plasma concentration–time curves from time 0 to the last timepoint (AUC0–t) were 150.36±82.90, 148.15±85.97, and 167.11±110.87 h⋅ng/mL, respectively. Reference scaling was used because the within-subject standard deviations of the reference product (sWR) for Cmax and AUC0–t were all higher than the cutoff value of 0.294. The 95% upper confidence bounds were −0.16 and −0.17 for Cmax and AUC0–t, respectively, and the point estimates for the generic/reference product ratio were 1.08 and 1.00, which satisfied the RSABE acceptance criteria of the FDA. The 90% CIs for Cmax and AUC0–t were 90.35%–129.04% and 85.31%–117.15%, respectively, which were within the limits of the EMA for the bioequivalence of 69.84%–143.19% and 80.00%–125.00%. Conclusion The generic product was bioequivalent to the reference product in terms of the rate and extent of alendronate absorption after a single 70 mg oral dose under fasting conditions. PMID:28744102

  5. A Comparative Study of Precise Point Positioning (PPP) Accuracy Using Online Services

    NASA Astrophysics Data System (ADS)

    Malinowski, Marcin; Kwiecień, Janusz

    2016-12-01

    Precise Point Positioning (PPP) is a technique used to determine the position of a receiver antenna without communication with a reference station. It can be an alternative to differential measurements, which require maintaining a connection with a single RTK station or a regional network (RTN) of reference stations. This situation is especially common in areas with poorly developed ground-station infrastructure. Much of the research conducted so far on the PPP technique has concerned the processing of entire-day observation sessions. This paper, however, presents a comparative analysis of the accuracy of absolute position determination from observations lasting between 1 and 7 hours, using four permanent online services that perform PPP calculations: the Automatic Precise Positioning Service (APPS), the Canadian Spatial Reference System Precise Point Positioning (CSRS-PPP), the GNSS Analysis and Positioning Software (GAPS) and magicPPP - Precise Point Positioning Solution (magicGNSS). On the basis of the acquired measurements, it can be concluded that sessions of at least two hours allow an absolute position to be obtained with an accuracy of 2-4 cm. We also evaluated how simultaneous positioning of a three-point test network affects the horizontal distances and relative height differences between the measured triangle vertices. Distances and relative height differences between the points of the triangular test network measured with a Leica TDRA6000 laser station were adopted as references. The analyses show that measurement sessions of at least two hours can be used to determine the horizontal distance or the height difference with an accuracy of 1-2 cm. Rapid products employed in the PPP calculations reached an accuracy of coordinate determination close to that of solutions based on Final products.

  6. Detection of Single Tree Stems in Forested Areas from High Density ALS Point Clouds Using 3d Shape Descriptors

    NASA Astrophysics Data System (ADS)

    Amiri, N.; Polewski, P.; Yao, W.; Krzystek, P.; Skidmore, A. K.

    2017-09-01

    Airborne Laser Scanning (ALS) is a widespread method for forest mapping and management purposes. While common ALS techniques provide valuable information about the forest canopy and intermediate layers, the point density near the ground may be poor due to dense overstory conditions. The current study highlights a new method for detecting stems of single trees in 3D point clouds obtained from high density ALS with a density of 300 points/m2. Compared to standard ALS data, this elevated point density, achieved through a lower flight height (150-200 m), leads to more laser reflections from tree stems. In this work, we propose a three-tiered method which works on the point, segment and object levels. First, for each point we calculate the likelihood that it belongs to a tree stem, derived from the radiometric and geometric features of its neighboring points. In the next step, we construct short stem segments based on high-probability stem points, and classify the segments by considering the distribution of points around them as well as their spatial orientation, which encodes the prior knowledge that trees are mainly vertically aligned due to gravity. Finally, we apply hierarchical clustering on the positively classified segments to obtain point sets corresponding to single stems, and perform ℓ1-based orthogonal distance regression to robustly fit lines through each stem point set. The ℓ1-based method is less sensitive to outliers than least-squares approaches. From the fitted lines, the planimetric tree positions can then be derived. Experiments were performed on two plots from the Hochficht forest in the Oberösterreich region of Austria. We marked a total of 196 reference stems in the point clouds of both plots by visual interpretation. The evaluation of the automatically detected stems showed a classification precision of 0.86 and 0.85 for Plots 1 and 2, respectively, with recall values of 0.7 and 0.67.
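
    A robust line fit of the kind described can be sketched as follows; the paper uses an ℓ1-based orthogonal distance regression, which is only approximated here by iteratively reweighted PCA, so this is an illustration under stated assumptions rather than the authors' implementation.

```python
# Hedged sketch: robust 3D line fit to stem points. The paper's l1-based orthogonal
# distance regression is approximated by iteratively reweighted PCA (weights ~ 1 /
# orthogonal distance), which is not the authors' exact solver.
import numpy as np

def fit_stem_line(points, iters=20, eps=1e-6):
    """Return (centroid, unit direction) of a robust best-fit line through points."""
    pts = np.asarray(points, float)
    w = np.ones(len(pts))
    for _ in range(iters):
        c = np.average(pts, axis=0, weights=w)          # weighted centroid
        X = (pts - c) * np.sqrt(w)[:, None]
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        d = Vt[0]                                        # dominant direction
        resid = pts - c
        ortho = np.linalg.norm(resid - np.outer(resid @ d, d), axis=1)
        w = 1.0 / np.maximum(ortho, eps)                 # l1-style reweighting
    return c, d

# The planimetric tree position follows from intersecting the fitted line with the terrain.
```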

  7. Pointing Reference Scheme for Free-Space Optical Communications Systems

    NASA Technical Reports Server (NTRS)

    Wright, Malcolm; Ortiz, Gerardo; Jeganathan, Muthu

    2006-01-01

    A scheme is proposed for referencing the propagation direction of the transmit laser signal in pointing a free-space optical communications terminal. This recently developed scheme enables the use of low-cost, commercial silicon-based sensors for tracking the direction of the transmit laser, regardless of the transmit wavelength. Compared with previous methods, the scheme offers some advantages of less mechanical and optical complexity and avoids expensive and exotic sensor technologies. In free-space optical communications, the transmit beam must be accurately pointed toward the receiver in order to maintain the communication link. The current approaches to achieve this function call for part of the transmit beam to be split off and projected onto an optical sensor used to infer the pointed direction. This requires that the optical sensor be sensitive to the wavelength of the transmit laser. If a different transmit wavelength is desired, for example to obtain a source capable of higher data rates, this can become quite impractical because of the unavailability or inefficiency of sensors at these wavelengths. The innovation proposed here decouples this requirement by allowing any transmit wavelength to be used with any sensor. We have applied this idea to a particular system that transmits at the standard telecommunication wavelength of 1,550 nm and uses a silicon-based sensor, sensitive from 0.5 to 1.0 micrometers, to determine the pointing direction. The scheme shown in the figure involves integrating a low-power 980-nm reference or boresight laser beam coupled to the 1,550-nm transmit beam via a wavelength-division-multiplexed fiber coupler. Both of these signals propagate through the optical fiber where they achieve an extremely high level of co-alignment before they are launched into the telescope. The telescope uses a dichroic beam splitter to reflect the 980-nm beam onto the silicon image sensor (a quad detector, charge-coupled device, or active-pixel-sensor array) while the 1,550-nm signal beam is transmitted through the optical assembly toward the remotely located receiver. Since the 980-nm reference signal originates from the same single-mode fiber-coupled source as the transmit signal, its position on the sensor is used to accurately determine the propagation direction of the transmit signal. The optics are considerably simpler in the proposed scheme due to the use of a single aperture for transmitting and receiving. Moreover, the issue of mechanical misalignment does not arise because the reference signal and transmitted laser beams are inherently co-aligned. The beam quality of the 980-nm reference signal used for tracking is required to be circularly symmetric and stable at the tracking-plane sensor array in order to minimize error in the centroiding algorithm of the pointing system. However, since the transmit signal is delivered through a fiber that supports a single mode at 1,550 nm, propagation of higher order 980-nm modes is possible. Preliminary analysis shows that the overall mode profile is dominated by the fundamental mode, giving a near symmetric profile. The instability of the mode was also measured and found to be negligible in comparison to the other error contributions in the centroid position on the sensor array.

  8. Motor Synergies and the Equilibrium-Point Hypothesis

    PubMed Central

    Latash, Mark L.

    2010-01-01

    The article offers a way to unite three recent developments in the field of motor control and coordination: (1) The notion of synergies is introduced based on the principle of motor abundance; (2) The uncontrolled manifold hypothesis is described as offering a computational framework to identify and quantify synergies; and (3) The equilibrium-point hypothesis is described for a single muscle, single joint, and multi-joint systems. Merging these concepts into a single coherent scheme requires focusing on control variables rather than performance variables. The principle of minimal final action is formulated as the guiding principle within the referent configuration hypothesis. Motor actions are associated with setting two types of variables by a controller, those that ultimately define average performance patterns and those that define associated synergies. Predictions of the suggested scheme are reviewed, such as the phenomenon of anticipatory synergy adjustments, quick actions without changes in synergies, atypical synergies, and changes in synergies with practice. A few models are briefly reviewed. PMID:20702893

  9. Motor synergies and the equilibrium-point hypothesis.

    PubMed

    Latash, Mark L

    2010-07-01

    The article offers a way to unite three recent developments in the field of motor control and coordination: (1) The notion of synergies is introduced based on the principle of motor abundance; (2) The uncontrolled manifold hypothesis is described as offering a computational framework to identify and quantify synergies; and (3) The equilibrium-point hypothesis is described for a single muscle, single joint, and multijoint systems. Merging these concepts into a single coherent scheme requires focusing on control variables rather than performance variables. The principle of minimal final action is formulated as the guiding principle within the referent configuration hypothesis. Motor actions are associated with setting two types of variables by a controller, those that ultimately define average performance patterns and those that define associated synergies. Predictions of the suggested scheme are reviewed, such as the phenomenon of anticipatory synergy adjustments, quick actions without changes in synergies, atypical synergies, and changes in synergies with practice. A few models are briefly reviewed.

  10. Development of Low-Cost Instrumentation for Single Point Autofluorescence Lifetime Measurements.

    PubMed

    Lagarto, João; Hares, Jonathan D; Dunsby, Christopher; French, Paul M W

    2017-09-01

    Autofluorescence lifetime measurements, which can provide label-free readouts in biological tissues, contrasting e.g. different types and states of tissue matrix components and different cellular metabolites, may have significant clinical potential for diagnosis and to provide surgical guidance. However, the cost of the instrumentation typically used currently presents a barrier to wider implementation. We describe a low-cost single point time-resolved autofluorescence instrument, exploiting modulated laser diodes for excitation and FPGA-based circuitry for detection, together with a custom constant fraction discriminator. Its temporal accuracy is compared against a "gold-standard" instrument incorporating commercial TCSPC circuitry by resolving the fluorescence decays of reference fluorophores presenting single and double exponential decay profiles. To illustrate the potential to read out intrinsic contrast in tissue, we present preliminary measurements of autofluorescence lifetime measurements of biological tissues ex vivo. We believe that the lower cost of this instrument could enhance the potential of autofluorescence lifetime metrology for clinical deployment and commercial development.

  11. Single-phase power distribution system power flow and fault analysis

    NASA Technical Reports Server (NTRS)

    Halpin, S. M.; Grigsby, L. L.

    1992-01-01

    Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
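
    For contrast with the generalized formulation described above, a standard nodal admittance matrix can be assembled from branch data as in the sketch below; the branch impedances are made-up values, and the paper's generalized single-phase component models are not reproduced.

```python
# Minimal sketch of assembling a standard nodal admittance (Y-bus) matrix from branch
# data, shown only for contrast with the generalized approach; branch values are made up.
import numpy as np

def build_ybus(n_nodes, branches):
    """branches: list of (from_node, to_node, series_admittance[, total_shunt_admittance])."""
    Y = np.zeros((n_nodes, n_nodes), dtype=complex)
    for br in branches:
        i, j, y = br[0], br[1], br[2]
        ysh = br[3] if len(br) > 3 else 0.0
        Y[i, i] += y + ysh / 2       # half of the line charging at each end
        Y[j, j] += y + ysh / 2
        Y[i, j] -= y
        Y[j, i] -= y
    return Y

ybus = build_ybus(3, [(0, 1, 1 / (0.01 + 0.05j)), (1, 2, 1 / (0.02 + 0.06j))])
print(ybus)
```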

  12. Defect interactions in GaAs single crystals

    NASA Technical Reports Server (NTRS)

    Gatos, H. C.; Lagowski, J.

    1984-01-01

    The two-sublattice structural configuration of GaAs and deviations from stoichiometry render the generation and interaction of electrically active point defects (and point defect complexes) critically important for device applications and very complex. Of the defect-induced energy levels, those lying deep in the energy gap are very effective lifetime "killers". The level 0.82 eV below the conduction band, commonly referred to as EL2, is a major deep level, particularly in melt-grown GaAs. This level is associated with an antisite defect complex (AsGa-VAs). Possible mechanisms of its formation and annihilation were further developed.

  13. Vehicle Localization by LIDAR Point Correlation Improved by Change Detection

    NASA Astrophysics Data System (ADS)

    Schlichting, A.; Brenner, C.

    2016-06-01

    LiDAR sensors are proven sensors for accurate vehicle localization. Instead of detecting and matching features in the LiDAR data, we want to use the entire information provided by the scanners. As dynamic objects, like cars, pedestrians or even construction sites, could lead to wrong localization results, we use a change detection algorithm to detect these objects in the reference data. If an object occurs in a certain number of measurements at the same position, we mark it and every point it contains as static. In the next step, we merge the data of the single measurement epochs into one reference dataset, using only static points. Furthermore, we use a classification algorithm to detect trees. For the online localization of the vehicle, we use simulated data of a vertically aligned automotive LiDAR sensor. As we only want to use static objects in this case as well, we use a random forest classifier to detect dynamic scan points online. Since the automotive data is derived from the LiDAR Mobile Mapping System, we are able to use the labelled objects from the reference data generation step to create the training data and further to detect dynamic objects online. The localization can then be done by a point-to-image correlation method using only static objects. We achieved a localization standard deviation of about 5 cm (position) and 0.06° (heading), and were able to successfully localize the vehicle in about 93 % of the cases along a trajectory of 13 km in Hannover, Germany.

  14. Evaluation of a Modified Version of the Posttraumatic Growth Inventory-Short Form

    DTIC Science & Technology

    2017-04-20

    psychological well-being at a single time point. Methods: The study population (N = 135,843) was randomly and equally split into exploratory and confirmatory...with social support and personal mastery. Conclusions: The modified PTGI-SF in this study captures psychological well-being in cross-sectional...Keywords: Psychological well-being, Posttraumatic Growth Inventory, Military, Psychometrics Background Posttraumatic growth refers to the positive

  15. Feature Relevance Assessment of Multispectral Airborne LIDAR Data for Tree Species Classification

    NASA Astrophysics Data System (ADS)

    Amiri, N.; Heurich, M.; Krzystek, P.; Skidmore, A. K.

    2018-04-01

    The presented experiment investigates the potential of Multispectral Laser Scanning (MLS) point clouds for single tree species classification. The basic idea is to simulate an MLS sensor by combining two different LiDAR sensors providing three different wavelengths. The available data were acquired in summer 2016 on the same date in leaf-on condition with an average point density of 37 points/m2. For the purpose of classification, we segmented the combined 3D point clouds consisting of three different spectral channels into 3D clusters using a Normalized Cut segmentation approach. We then extracted four groups of features from the 3D point cloud space. Once a variety of features had been extracted, we applied forward stepwise feature selection in order to reduce the number of irrelevant or redundant features. For the classification, we used multinomial logistic regression with L1 regularization (see the sketch below). Our study was conducted using 586 ground-measured single trees from 20 sample plots in the Bavarian Forest National Park, Germany. Due to a lack of reference data for some rare species, we focused on four species classes. The results show an improvement of 4-10 percentage points in tree species classification when using MLS data in comparison to a single-wavelength approach. A cross-validated (15-fold) accuracy of 0.75 can be achieved when all feature sets from the three spectral channels are used. Our results clearly indicate that the use of MLS point clouds has great potential to improve detailed forest species mapping.
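
    A minimal sketch of the classification step, using scikit-learn's saga solver for an L1-regularized multinomial (softmax) model; the feature matrix and labels are random placeholders rather than the Bavarian Forest data, and the hyperparameters are assumptions.

```python
# Hedged sketch of L1-regularized multinomial logistic regression with 15-fold
# cross-validation; data are random placeholders, not the study's features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(586, 40))      # one row per segmented tree, 40 candidate features
y = rng.integers(0, 4, size=586)    # four species classes

# saga supports the l1 penalty and fits a multinomial (softmax) model for multiclass y
clf = LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000)
scores = cross_val_score(clf, X, y, cv=15)
print(round(scores.mean(), 3))
```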

  16. Reference genotype and exome data from an Australian Aboriginal population for health-based research

    PubMed Central

    Tang, Dave; Anderson, Denise; Francis, Richard W.; Syn, Genevieve; Jamieson, Sarra E.; Lassmann, Timo; Blackwell, Jenefer M.

    2016-01-01

    Genetic analyses, including genome-wide association studies and whole exome sequencing (WES), provide powerful tools for the analysis of complex and rare genetic diseases. To date there are no reference data for Aboriginal Australians to underpin the translation of health-based genomic research. Here we provide a catalogue of variants called after sequencing the exomes of 72 Aboriginal individuals to a depth of 20X coverage in ∼80% of the sequenced nucleotides. We determined 320,976 single nucleotide variants (SNVs) and 47,313 insertions/deletions using the Genome Analysis Toolkit. We had previously genotyped a subset of the Aboriginal individuals (70/72) using the Illumina Omni2.5 BeadChip platform and found ~99% concordance at overlapping sites, which suggests high quality genotyping. Finally, we compared our SNVs to six publicly available variant databases, such as dbSNP and the Exome Sequencing Project, and 70,115 of our SNVs did not overlap any of the single nucleotide polymorphic sites in all the databases. Our data set provides a useful reference point for genomic studies on Aboriginal Australians. PMID:27070114

  17. Reference genotype and exome data from an Australian Aboriginal population for health-based research.

    PubMed

    Tang, Dave; Anderson, Denise; Francis, Richard W; Syn, Genevieve; Jamieson, Sarra E; Lassmann, Timo; Blackwell, Jenefer M

    2016-04-12

    Genetic analyses, including genome-wide association studies and whole exome sequencing (WES), provide powerful tools for the analysis of complex and rare genetic diseases. To date there are no reference data for Aboriginal Australians to underpin the translation of health-based genomic research. Here we provide a catalogue of variants called after sequencing the exomes of 72 Aboriginal individuals to a depth of 20X coverage in ∼80% of the sequenced nucleotides. We determined 320,976 single nucleotide variants (SNVs) and 47,313 insertions/deletions using the Genome Analysis Toolkit. We had previously genotyped a subset of the Aboriginal individuals (70/72) using the Illumina Omni2.5 BeadChip platform and found ~99% concordance at overlapping sites, which suggests high quality genotyping. Finally, we compared our SNVs to six publicly available variant databases, such as dbSNP and the Exome Sequencing Project, and 70,115 of our SNVs did not overlap any of the single nucleotide polymorphic sites in all the databases. Our data set provides a useful reference point for genomic studies on Aboriginal Australians.

  18. Fabrication of a mini multi-fixed-point cell for the calibration of industrial platinum resistance thermometers

    NASA Astrophysics Data System (ADS)

    Ragay-Enot, Monalisa; Lee, Young Hee; Kim, Yong-Gyoo

    2017-07-01

    A mini multi-fixed-point cell (length 118 mm, diameter 33 mm) containing three materials (In-Zn eutectic (mass fraction 3.8% Zn), Sn and Pb) in a single crucible was designed and fabricated for the easy and economical fixed-point calibration of industrial platinum resistance thermometers (IPRTs) for use in industrial temperature measurements. The melting and freezing behaviors of the metals were investigated and the phase transition temperatures were determined using a commercial dry-block calibrator. Results showed that the melting plateaus are generally easy to realize and are reproducible, flatter and of longer duration. On the other hand, the freezing process is generally difficult, especially for Sn, due to the high supercooling required to initiate freezing. The observed melting temperatures at optimum set conditions were 143.11 °C (In-Zn), 231.70 °C (Sn) and 327.15 °C (Pb) with expanded uncertainties (k  = 2) of 0.12 °C, 0.10 °C and 0.13 °C, respectively. This multi-fixed-point cell can be treated as a sole reference temperature-generating system. Based on the results, the realization of melting points of the mini multi-fixed-point cell can be recommended for the direct calibration of IPRTs in industrial applications without the need for a reference thermometer.

  19. Automated estimation of leaf distribution for individual trees based on TLS point clouds

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Rutzinger, Martin; Bremer, Magnus

    2017-04-01

    Light Detection and Ranging (LiDAR), especially ground-based LiDAR (Terrestrial Laser Scanning - TLS), is an operationally used and widely available measurement tool supporting forest inventory updating and research in forest ecology. High-resolution point clouds from TLS already represent single leaves, which can be used for a more precise estimation of Leaf Area Index (LAI) and for more accurate biomass estimation. However, a methodology for extracting single leaves from the unclassified point clouds of individual trees is still missing. The aim of this study is to present a novel segmentation approach to extract single leaves and derive features related to leaf morphology (such as area, slope, length and width) for each single leaf from TLS point cloud data. For the study, two exemplary single trees were scanned in leaf-on condition on the university campus of Innsbruck during calm wind conditions. A northern red oak (Quercus rubra) was scanned with a discrete-return Optech ILRIS-3D TLS scanner and a tulip tree (Liriodendron tulipifera) with a Riegl VZ-6000 scanner. During the scanning campaign a reference dataset was measured in parallel: 230 leaves were randomly collected around the lower branches of the trees and photographed. The developed workflow consists of the following steps. First, normal vectors and eigenvalues were calculated based on a user-specified neighborhood. Then, using the direction of the largest eigenvalue, outliers (i.e. ghost points) were removed. Next, region-growing segmentation based on curvature and the angles between normal vectors was applied to the filtered point cloud. On each segment a RANSAC plane-fitting algorithm was applied in order to extract segment-based normal vectors (a minimal sketch of this step is given below). Using the derived segment features, the stem and branches were labeled as non-leaf and the remaining segments were classified as leaf. The segmentation parameters were validated as follows: i) the summed area of the collected leaves and of the point cloud, ii) the segmented leaf length-width ratio, and iii) the distribution of leaf area for the segmented and reference leaves were compared, and the ideal parameter set was determined. The results show that the leaves can be captured with the developed workflow and that the slope can be determined robustly for the segmented leaves. However, the area, length and width values depend systematically on the angle and the distance from the scanner. To correct this systematic underestimation, more systematic measurements or LiDAR simulation are required for further detailed analysis. The results of the leaf segmentation algorithm show high potential for generating more precise tree models with correctly located leaves, providing a more precise input for biological modeling of LAI or atmospheric correction studies. The presented workflow can also be used to monitor changes in leaf angle due to sun irradiation, water balance, and the day-night rhythm.
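
    The per-segment plane fit can be sketched with a basic RANSAC loop as below; the iteration count, inlier tolerance and input points are illustrative assumptions, not the study's settings.

```python
# Hedged sketch of a per-segment RANSAC plane fit used to obtain segment normals;
# parameters and input data are illustrative only, not the study's implementation.
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.005, rng=np.random.default_rng(0)):
    """Return (unit normal, d) of the plane n.x + d = 0 with the most inliers."""
    pts = np.asarray(points, float)
    best_n, best_d, best_count = None, 0.0, -1
    for _ in range(n_iter):
        p = pts[rng.choice(len(pts), 3, replace=False)]   # random minimal sample
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                                      # degenerate (collinear) sample
        n /= norm
        d = -n @ p[0]
        count = np.sum(np.abs(pts @ n + d) < tol)         # inliers within tolerance
        if count > best_count:
            best_n, best_d, best_count = n, d, count
    return best_n, best_d
```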

  20. Semi-automatic delineation of the spino-laminar junction curve on lateral x-ray radiographs of the cervical spine

    NASA Astrophysics Data System (ADS)

    Narang, Benjamin; Phillips, Michael; Knapp, Karen; Appelboam, Andy; Reuben, Adam; Slabaugh, Greg

    2015-03-01

    Assessment of the cervical spine using x-ray radiography is an important task when providing emergency room care to trauma patients suspected of a cervical spine injury. In routine clinical practice, a physician will inspect the alignment of the cervical spine vertebrae by mentally tracing three alignment curves along the anterior and posterior sides of the cervical vertebral bodies, as well as one along the spinolaminar junction. In this paper, we propose an algorithm to semi-automatically delineate the spinolaminar junction curve, given a single reference point and the corners of each vertebral body. From the reference point, our method extracts a region of interest, and performs template matching using normalized cross-correlation to find matching regions along the spinolaminar junction. Matching points are then fit to a third order spline, producing an interpolating curve. Experimental results demonstrate promising results, on average producing a modified Hausdorff distance of 1.8 mm, validated on a dataset consisting of 29 patients including those with degenerative change, retrolisthesis, and fracture.
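
    A minimal sketch of the matching-and-fitting steps, using OpenCV template matching and a SciPy cubic spline; the NCC threshold, the assumed ordering of matched points and the image data are illustrative assumptions, not the clinical implementation.

```python
# Hedged sketch: normalized cross-correlation matches around the reference point,
# then a cubic spline through the matched points; thresholds and data are assumed.
import numpy as np
import cv2
from scipy.interpolate import splprep, splev

def find_matches(image, template, threshold=0.7):
    """Return (x, y) patch centres where the template matches above the NCC threshold."""
    ncc = cv2.matchTemplate(image, template, cv2.TM_CCORR_NORMED)
    ys, xs = np.where(ncc >= threshold)
    h, w = template.shape
    return np.column_stack([xs + w // 2, ys + h // 2])

def fit_curve(points, n_samples=100):
    """Fit a cubic B-spline through ordered (x, y) points and resample it."""
    pts = np.asarray(points, float)
    tck, _ = splprep([pts[:, 0], pts[:, 1]], k=3, s=0)
    u = np.linspace(0, 1, n_samples)
    return np.column_stack(splev(u, tck))
```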

  1. A Proposal of New Reference System for the Standard Axial, Sagittal, Coronal Planes of Brain Based on the Serially-Sectioned Images

    PubMed Central

    Park, Jin Seo; Park, Hyo Seok; Shin, Dong Sun; Har, Dong-Hwan; Cho, Zang-Hee; Kim, Young-Bo; Han, Jae-Yong; Chi, Je-Geun

    2010-01-01

    Sectional anatomy of the human brain is useful for examining the diseased brain as well as the normal brain. However, intracerebral reference points for the axial, sagittal, and coronal planes of the brain have not been standardized in anatomical sections or radiological images. We made 2,343 serially sectioned images of a cadaver head with 0.1 mm intervals, 0.1 mm pixel size, and 48 bit color, and obtained axial, sagittal, and coronal images based on the proposed reference system. This reference system consists of one principal reference point and two ancillary reference points. The two ancillary reference points are the anterior commissure and the posterior commissure, and the principal reference point is the midpoint between them; it resides in the center of the whole brain. From the principal reference point, Cartesian x, y, z coordinates can be constructed to define the standard axial, sagittal, and coronal planes.

  2. Determination of the carbon, hydrogen and nitrogen contents of alanine and their uncertainties using the certified reference material L-alanine (NMIJ CRM 6011-a).

    PubMed

    Itoh, Nobuyasu; Sato, Ayako; Yamazaki, Taichi; Numata, Masahiko; Takatsu, Akiko

    2013-01-01

    The carbon, hydrogen, and nitrogen (CHN) contents of alanine and their uncertainties were estimated using a CHN analyzer and the certified reference material (CRM) L-alanine. The CHN contents and their uncertainties, as measured using the single-point calibration method, were 40.36 ± 0.20% for C, 7.86 ± 0.13% for H, and 15.66 ± 0.09% for N; the results obtained using the bracket calibration method were also comparable. The method described in this study is reasonable, convenient, and meets the general requirement of having uncertainties ≤ 0.4%.

  3. LC-oscillator with automatic stabilized amplitude via bias current control. [power supply circuit for transducers

    NASA Technical Reports Server (NTRS)

    Hamlet, J. F. (Inventor)

    1974-01-01

    A stable excitation supply for measurement transducers is described. It consists of a single-transistor oscillator with a coil connected to the collector and a capacitor connected from the collector to the emitter. The output of the oscillator is rectified and the rectified signal acts as one input to a differential amplifier; the other input being a reference potential. The output of the amplifier is connected at a point between the emitter of the transistor and ground. When the rectified signal is greater than the reference signal, the differential amplifier produces a signal of polarity to reduce bias current and, consequently, amplification.

  4. A stellar tracking reference system

    NASA Technical Reports Server (NTRS)

    Klestadt, B.

    1971-01-01

    A stellar attitude reference system concept for satellites was studied which promises to permit continuous precision pointing of payloads with accuracies of 0.001 degree without the use of gyroscopes. It is accomplished with the use of a single, clustered star tracker assembly mounted on a non-orthogonal, two gimbal mechanism, driven so as to unwind satellite orbital and orbit precession rates. A set of eight stars was found which assures the presence of an adequate inertial reference on a continuous basis in an arbitrary orbit. Acquisition and operational considerations were investigated and inherent reference redundancy/reliability was established. Preliminary designs for the gimbal mechanism, its servo drive, and the star tracker cluster with its associated signal processing were developed for a baseline sun-synchronous, noon-midnight orbit. The functions required of the onboard computer were determined and the equations to be solved were found. In addition detailed error analyses were carried out, based on structural, thermal and other operational considerations.

  5. Extremely Lightweight Intrusion Detection (ELIDe)

    DTIC Science & Technology

    2013-12-01

    devices that would be more commonly found in a dynamic tactical environment. As a point of reference, the Raspberry Pi single-chip computer (4) is...the ELIDe application onto a resource-constrained hardware platform more likely to be used in a mobile tactical network, and the Raspberry Pi was...chosen as that representative platform. ELIDe was successfully tested on a Raspberry Pi, its throughput was benchmarked at approximately 8.3 megabits

  6. REQUEST: A Recursive QUEST Algorithm for Sequential Attitude Determination

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.

    1996-01-01

    In order to find the attitude of a spacecraft with respect to a reference coordinate system, vector measurements are taken. The vectors are pairs of measurements of the same generalized vector, taken in the spacecraft body coordinates as well as in the reference coordinate system. We are interested in finding the best estimate of the transformation between these coordinate systems. The algorithm called QUEST yields that estimate, where the attitude is expressed by a quaternion. QUEST is an efficient algorithm which provides a least-squares fit of the quaternion of rotation to the vector measurements. QUEST, however, is a single-time-point (single-frame) batch algorithm, so measurements taken at previous time points are discarded. The algorithm presented in this work provides a recursive routine which considers all past measurements. The algorithm is based on the fact that the so-called K matrix, one of whose eigenvectors is the sought quaternion, is linearly related to the measured pairs, and on the ability to propagate K. The extraction of the appropriate eigenvector is done according to the classical QUEST algorithm. This stage, however, can be eliminated, and the computation simplified, if a standard eigenvalue-eigenvector solver is used. The development of the recursive algorithm is presented and illustrated via a numerical example.
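
    For orientation, in the standard QUEST/Davenport formulation the K matrix is built from the weighted vector pairs and the optimal quaternion is its dominant eigenvector; the expression below is reconstructed in its usual form and is not quoted from the report.

```latex
% Weighted vector pairs (b_i in body frame, r_i in reference frame, weights w_i):
B = \sum_i w_i\,\mathbf{b}_i\mathbf{r}_i^{\mathsf T},\quad
S = B + B^{\mathsf T},\quad
\sigma = \operatorname{tr}(B),\quad
\mathbf{z} = \sum_i w_i\,\mathbf{b}_i\times\mathbf{r}_i
\qquad\Rightarrow\qquad
K = \begin{pmatrix} S-\sigma I_3 & \mathbf{z}\\ \mathbf{z}^{\mathsf T} & \sigma\end{pmatrix},
\qquad K\,\mathbf{q}^{\ast} = \lambda_{\max}\,\mathbf{q}^{\ast}
```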

  7. Multi-model inference for incorporating trophic and climate uncertainty into stock assessments

    NASA Astrophysics Data System (ADS)

    Ianelli, James; Holsman, Kirstin K.; Punt, André E.; Aydin, Kerim

    2016-12-01

    Ecosystem-based fisheries management (EBFM) approaches allow a broader and more extensive consideration of objectives than is typically possible with conventional single-species approaches. Ecosystem linkages may include trophic interactions and climate change effects on productivity for the relevant species within the system. Presently, models are evolving to include a comprehensive set of fishery and ecosystem information to address these broader management considerations. The increased scope of EBFM approaches is accompanied by a greater number of plausible models to describe the systems. This can lead to harvest recommendations and biological reference points that differ considerably among models. Model selection for projections (and specific catch recommendations) often occurs through a process that tends to adopt familiar, often simpler, models without considering those that incorporate more complex ecosystem information. Multi-model inference provides a framework that resolves this dilemma by providing a means of including information from alternative, often divergent models to inform biological reference points and possible catch consequences. We apply an example of this approach to data for three species of groundfish in the Bering Sea: walleye pollock, Pacific cod, and arrowtooth flounder, using three models: 1) an age-structured "conventional" single-species model, 2) an age-structured single-species model with temperature-specific weight at age, and 3) a temperature-specific multi-species stock assessment model. The latter two approaches also include consideration of alternative future climate scenarios, adding another dimension to evaluate model projection uncertainty. We show how Bayesian model-averaging methods can be used to incorporate such trophic and climate information to broaden single-species stock assessments by using an EBFM approach that may better characterize uncertainty.
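
    For the model-averaging step mentioned above, a minimal numerical sketch (the three model names mirror the abstract, but the weights, reference-point values, and uncertainties are invented placeholders):

```python
import numpy as np

# Hypothetical posterior model probabilities and per-model reference-point
# estimates (e.g. an MSY-related biomass, arbitrary units); illustration only.
models = {
    "single_species":      {"w": 0.20, "ref": 210.0, "sd": 25.0},
    "single_species_temp": {"w": 0.35, "ref": 180.0, "sd": 30.0},
    "multi_species_temp":  {"w": 0.45, "ref": 150.0, "sd": 40.0},
}

w = np.array([m["w"] for m in models.values()])
mu = np.array([m["ref"] for m in models.values()])
sd = np.array([m["sd"] for m in models.values()])

avg = np.sum(w * mu)                          # model-averaged point estimate
var = np.sum(w * (sd**2 + (mu - avg)**2))     # within- plus between-model variance
print(f"model-averaged reference point: {avg:.1f} +/- {np.sqrt(var):.1f}")
```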

  8. Pharmacokinetics of lacosamide and omeprazole coadministration in healthy volunteers: results from a phase I, randomized, crossover trial.

    PubMed

    Cawello, Willi; Mueller-Voessing, Christa; Fichtner, Andreas

    2014-05-01

    The antiepileptic drug lacosamide has a low potential for drug-drug interactions, but is a substrate and moderate inhibitor of the cytochrome P450 (CYP) enzyme CYP2C19. This phase I, randomized, open-label, two-way crossover trial evaluated the pharmacokinetic effects of lacosamide and omeprazole coadministration. Healthy, White, male volunteers (n = 36) who were not poor metabolizers of CYP2C19 were randomized to treatment A (single-dose 40 mg omeprazole on days 1 and 8 together with 6 days of multiple-dose lacosamide [200-600 mg/day] on days 3-8) and treatment B (single doses of 300 mg lacosamide on days 1 and 8 with 7 days of 40 mg/day omeprazole on days 3-9) in pseudorandom order, separated by a ≥ 7-day washout period. Area under the concentration-time curve (AUC) and peak concentration (C(max)) were the primary pharmacokinetic parameters measured for lacosamide or omeprazole administered alone (reference) or in combination (test). Bioequivalence was determined if the 90% confidence interval (CI) of the ratio (test/reference) fell within the acceptance range of 0.8-1.25. The point estimates (90% CI) of the ratio of omeprazole + lacosamide coadministered versus omeprazole alone for AUC (1.098 [0.996-1.209]) and C(max) (1.105 [0.979-1.247]) fell within the acceptance range for bioequivalence. The point estimates (90% CI) of the ratio of lacosamide + omeprazole coadministration versus lacosamide alone also fell within the acceptance range for bioequivalence (AUC 1.133 [1.102-1.165]; C(max) 0.996 [0.947-1.047]). Steady-state lacosamide did not influence omeprazole single-dose pharmacokinetics, and multiple-dose omeprazole did not influence lacosamide single-dose pharmacokinetics.
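
    The bioequivalence criterion used here (90% CI of the test/reference ratio inside 0.8-1.25) is computed on the log scale; a simplified sketch for paired crossover data, with synthetic numbers and without the period/sequence terms of a full ANOVA:

```python
import numpy as np
from scipy import stats

def gmr_with_ci(test, reference, alpha=0.10):
    """Geometric mean ratio (test/reference) and its 90% CI from paired,
    log-transformed exposure values (AUC or Cmax)."""
    d = np.log(np.asarray(test)) - np.log(np.asarray(reference))
    n, mean_d, se = d.size, d.mean(), d.std(ddof=1) / np.sqrt(d.size)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    return np.exp(mean_d), (np.exp(mean_d - t_crit * se), np.exp(mean_d + t_crit * se))

rng = np.random.default_rng(0)
ref = rng.lognormal(np.log(100.0), 0.20, size=36)          # synthetic AUC, drug alone
test = ref * rng.lognormal(np.log(1.05), 0.10, size=36)    # synthetic AUC, coadministered
gmr, (lo, hi) = gmr_with_ci(test, ref)
print(f"GMR {gmr:.3f}, 90% CI [{lo:.3f}, {hi:.3f}], bioequivalent: {0.8 <= lo and hi <= 1.25}")
```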

  9. The potential of high resolution airborne laser scanning for deriving geometric properties of single trees

    NASA Astrophysics Data System (ADS)

    Morsdorf, F.; Meier, E.; Koetz, B.; Nüesch, D.; Itten, K.; Allgöwer, B.

    2003-04-01

    The potential of airborne laser scanning for mapping forest stands has been intensively evaluated in the past few years. Algorithms deriving structural forest parameters in a stand-wise manner from laser data have been successfully implemented by a number of researchers. However, with very high point density laser data (>20 points/m^2), we pursue the approach of deriving these parameters on a single-tree basis. We explore the potential of delineating single trees from laser scanner raw data (x,y,z triples) and validate this approach with a dataset of more than 2000 georeferenced trees, including tree height and crown diameter, gathered on a long-term forest monitoring site by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL). The accuracy of the laser scanner is evaluated through 6 reference targets, each 3x3 m^2 in size and horizontally planar, validating both the horizontal and vertical accuracy of the laser scanner by matching of triangular irregular networks (TINs). Single trees are segmented by a clustering analysis in all three coordinate dimensions and their geometric properties can then be derived directly from the tree cluster.
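
    A simplified sketch of the clustering idea, segmenting candidate tree crowns from raw x, y, z returns with a density-based clusterer (the library choice, parameter values, and height threshold are assumptions; the study's own clustering procedure may differ, and heights here are assumed to be already normalised above ground):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def segment_trees(points, eps=1.5, min_pts=20, min_height=2.0):
    """points: (N, 3) array of x, y, z laser returns in metres."""
    canopy = points[points[:, 2] > min_height]             # drop ground/low returns
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(canopy)
    trees = []
    for lab in set(labels) - {-1}:                          # label -1 marks noise
        cluster = canopy[labels == lab]
        extent = cluster[:, :2].max(axis=0) - cluster[:, :2].min(axis=0)
        trees.append({"tree_height_m": float(cluster[:, 2].max()),
                      "crown_diameter_m": float(extent.mean())})
    return trees
```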

  10. An approach for aerodynamic optimization of transonic fan blades

    NASA Astrophysics Data System (ADS)

    Khelghatibana, Maryam

    Aerodynamic design optimization of transonic fan blades is a highly challenging problem due to the complexity of the flow field inside the fan, the conflicting design requirements and the high-dimensional design space. In order to address all these challenges, an aerodynamic design optimization method is developed in this study. This method automates the design process by integrating a geometrical parameterization method, a CFD solver and numerical optimization methods that can be applied to both single and multi-point optimization design problems. A multi-level blade parameterization is employed to modify the blade geometry. Numerical analyses are performed by solving 3D RANS equations combined with the SST turbulence model. Genetic algorithms and hybrid optimization methods are applied to solve the optimization problem. In order to verify the effectiveness and feasibility of the optimization method, a single-point optimization problem aiming to maximize design efficiency is formulated and applied to redesign a test case. However, transonic fan blade design is inherently a multi-faceted problem that deals with several objectives such as efficiency, stall margin, and choke margin. The proposed multi-point optimization method in the current study is formulated as a bi-objective problem to maximize design and near-stall efficiencies while maintaining the required design pressure ratio. Enhancing these objectives significantly deteriorates the choke margin, specifically at high rotational speeds. Therefore, another constraint is embedded in the optimization problem in order to prevent the reduction of choke margin at high speeds. Since capturing stall inception is numerically very expensive, stall margin has not been considered as an objective in the problem statement. However, improving near-stall efficiency results in a better performance at stall condition, which could enhance the stall margin. An investigation is therefore performed on the Pareto-optimal solutions to demonstrate the relation between near-stall efficiency and stall margin. The proposed method is applied to redesign NASA rotor 67 for single and multiple operating conditions. The single-point design optimization showed +0.28 points improvement of isentropic efficiency at design point, while the design pressure ratio and mass flow are, respectively, within 0.12% and 0.11% of the reference blade. Two cases of multi-point optimization are performed: First, the proposed multi-point optimization problem is relaxed by removing the choke margin constraint in order to demonstrate the relation between near-stall efficiency and stall margin. An investigation on the Pareto-optimal solutions of this optimization shows that the stall margin has been increased with improving near-stall efficiency. The second multi-point optimization case is performed considering all the objectives and constraints. One selected optimized design on the Pareto front presents +0.41, +0.56 and +0.9 points improvement in near-peak efficiency, near-stall efficiency and stall margin, respectively. The design pressure ratio and mass flow are, respectively, within 0.3% and 0.26% of the reference blade. Moreover, the optimized design maintains the required choking margin. Detailed aerodynamic analyses are performed to investigate the effect of shape optimization on shock occurrence, secondary flows, tip leakage and shock/tip-leakage interactions in both single and multi-point optimizations.

  11. Outcomes of the modified Brostrom procedure using suture anchors for chronic lateral ankle instability--a prospective, randomized comparison between single and double suture anchors.

    PubMed

    Cho, Byung-Ki; Kim, Yong-Min; Kim, Dong-Soo; Choi, Eui-Sung; Shon, Hyun-Chul; Park, Kyoung-Jin

    2013-01-01

    The present prospective, randomized study was conducted to compare the clinical outcomes of the modified Brostrom procedure using single and double suture anchors for chronic lateral ankle instability. A total of 50 patients were followed up for more than 2 years after undergoing the modified Brostrom procedure. Of the 50 procedures, 25 each were performed using single and double suture anchors by 1 surgeon. The Karlsson scale had improved significantly to 89.8 points and 90.6 points in the single and double anchor groups, respectively. Using the Sefton grading system, 23 cases (92%) in the single anchor group and 22 (88%) in the double anchor group achieved satisfactory results. The talar tilt angle and anterior talar translation on stress radiographs using the Telos device had improved significantly to an average of 5.7° and 4.6 mm in the single anchor group and 4.5° and 4.3 mm in the double anchor group, respectively. The double anchor technique was superior with respect to the postoperative talar tilt. The single and double suture anchor techniques produced similar clinical and functional outcomes, with the exception of talar tilt as a reference of mechanical stability. The modified Brostrom procedure using both single and double suture anchors appears to be an effective treatment method for chronic lateral ankle instability. Copyright © 2013 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  12. A Stereophotogrammetric System For The Detection Of Prosthesis Loosening In Total Hip Arthroplasty

    NASA Astrophysics Data System (ADS)

    Baumrind, Sheldon; Genant, Harry K.; Hunter, John; Miller, David; Moffitt, Francis; Murray, William R.; Ross, Steven E.

    1980-07-01

    Loosening of the prosthetic device occurs in about 5% of cases following placement of total hip prostheses (THP). Early detection of loosening is much desired but is difficult to achieve using conventional methods. Due to errors of projection, it is quite possible to fail to detect mobility of even as much as 5 mm on single x-ray films. We are attempting to develop a simplified photogrammetric system suitable for general hospital use which could detect loosening of 0.8 mm at the 95% level of confidence without use of complex stereoplotting equipment. Metal reference markers are placed in the shaft of the femur and in the acetabular region of the pelvis at the time of surgery. The distances between these reference markers and certain unambiguous points on the prostheses are computed analytically using an X-Y acoustical digitizer (accuracy ± 0.1 mm) and software developed previously for craniofacial measurement. Separate stereopairs of the joint region are taken under weight-bearing and nonweight-bearing conditions. Differences in the measured distances between the bone markers and the prosthetic components on the two stereopairs are taken as indicators of prosthesis loosening. Measurements on a phantom using ten different x-ray stereopairs taken from as many different perspectives have established that true linear distances between reference points and prostheses can be measured at the desired reliability with the present low precision system. Preliminary in vivo measurements indicate that the main unresolved problem is the movement of the subject between the two exposures of each single stereopair. Two possible solutions to this problem are discussed.

  13. Investigation of L-band shipboard antennas for maritime satellite applications

    NASA Technical Reports Server (NTRS)

    Heckert, G. P.

    1972-01-01

    A basic conceptual investigation of low cost L-band antenna subsystems for shipboard use was conducted by identifying the various pertinent design trade-offs and related performance characteristics peculiar to the civilian maritime application, and by comparing alternate approaches for their simplicity and general suitability. The study was not directed at a single specific proposal, but was intended to be parametric in nature. Antenna system concepts were to be investigated for a range of gain of 3 to 18 dB, with a value of about 10 dB considered as a baseline reference. Since the beam pointing or selecting mechanism is the primary source of potential complexity in shipboard antennas with beamwidths less than hemispherical, major emphasis was directed at this aspect. Three categories of antenna system concepts were identified: (1) mechanically pointed, single-beam antennas; (2) fixed antennas with switched-beams; and (3) electronically-steered phased arrays. It is recommended that an L-band short backfire antenna subsystem, including a two-axis motor driven gimbal mount, and necessary single channel monopulse tracking receiver portions be developed for demonstration of performance and subsystem simplicity.

  14. Exploring the reference point in prospect theory: gambles for length of life.

    PubMed

    van Osch, Sylvie M C; van den Hout, Wilbert B; Stiggelbout, Anne M

    2006-01-01

    Attitude toward risk is an important factor determining patient preferences. Risk behavior has been shown to be strongly dependent on the perception of the outcome as either a gain or a loss. According to prospect theory, the reference point determines how an outcome is perceived. However, no theory on the location of the reference point exists, and for the health domain, there is no direct evidence for the location of the reference point. This article combines qualitative with quantitative data to provide evidence of the reference point in life-year certainty equivalent (CE) gambles and to explore the psychology behind the reference point. The authors argue that goals (aspirations) in life influence the reference point. While thinking aloud, 45 healthy respondents gave certainty equivalents for life-year CE gambles with long and short durations of survival. Contrary to suggestions from the literature, qualitative data argued that the offered certainty equivalent most frequently served as the reference point. Thus, respondents perceived life-year CE gambles as mixed. Framing of the question and goals set in life appeared to be important factors behind the psychology of the reference point. On the basis of the authors' quantitative and qualitative data, they argue that goals alter the perception of outcomes as described by prospect theory by influencing the reference point. This relationship is more apparent for the near future as opposed to the remote future, as goals are mostly set for the near future.
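
    A small numeric illustration of the point being made: with the commonly cited Tversky-Kahneman value-function parameters (used here purely for illustration, probability weighting omitted), coding the same life-year gamble against the offered certainty equivalent instead of against zero turns it into a mixed gains/losses prospect and lowers its subjective value:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, steeper (loss-averse)
    and convex for losses, measured relative to the reference point."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def gamble_value(outcomes, probs, reference):
    return sum(p * value(x - reference) for x, p in zip(outcomes, probs))

# 50/50 gamble over 5 or 25 remaining life-years vs. an offered CE of 12 years
outcomes, probs = [5.0, 25.0], [0.5, 0.5]
print("reference = 0 (pure gains):   ", round(gamble_value(outcomes, probs, 0.0), 2))
print("reference = offered CE of 12: ", round(gamble_value(outcomes, probs, 12.0), 2))
```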

  15. Combined influence of visual scene and body tilt on arm pointing movements: gravity matters!

    PubMed

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R; Bourdin, Christophe; Mestre, Daniel R; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., 'combined' tilts equal to the sum of 'single' tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues.

  16. A randomised, single-blind, single-dose, three-arm, parallel-group study in healthy subjects to demonstrate pharmacokinetic equivalence of ABP 501 and adalimumab.

    PubMed

    Kaur, Primal; Chow, Vincent; Zhang, Nan; Moxness, Michael; Kaliyaperumal, Arunan; Markus, Richard

    2017-03-01

    To demonstrate pharmacokinetic (PK) similarity of biosimilar candidate ABP 501 relative to adalimumab reference product from the USA and European Union (EU) and evaluate safety, tolerability and immunogenicity of ABP 501. Randomised, single-blind, single-dose, three-arm, parallel-group study; healthy subjects were randomised to receive ABP 501 (n=67), adalimumab (USA) (n=69) or adalimumab (EU) (n=67) 40 mg subcutaneously. Primary end points were area under the serum concentration-time curve from time 0 extrapolated to infinity (AUCinf) and the maximum observed concentration (Cmax). Secondary end points included safety and immunogenicity. AUCinf and Cmax were similar across the three groups. Geometric mean ratio (GMR) of AUCinf was 1.11 between ABP 501 and adalimumab (USA), and 1.04 between ABP 501 and adalimumab (EU). GMR of Cmax was 1.04 between ABP 501 and adalimumab (USA) and 0.96 between ABP 501 and adalimumab (EU). The 90% CIs for the GMRs of AUCinf and Cmax were within the prespecified standard PK equivalence criteria of 0.80 to 1.25. Treatment-related adverse events were mild to moderate and were reported for 35.8%, 24.6% and 41.8% of subjects in the ABP 501, adalimumab (USA) and adalimumab (EU) groups; incidence of antidrug antibodies (ADAbs) was similar among the study groups. Results of this study demonstrated PK similarity of ABP 501 with adalimumab (USA) and adalimumab (EU) after a single 40-mg subcutaneous injection. No new safety signals with ABP 501 were identified. The safety and tolerability of ABP 501 was similar to the reference products, and similar ADAb rates were observed across the three groups. EudraCT number 2012-000785-37. Published by the BMJ Publishing Group Limited.

  17. Motor control theories and their applications.

    PubMed

    Latash, Mark L; Levin, Mindy F; Scholz, John P; Schöner, Gregor

    2010-01-01

    We describe several influential hypotheses in the field of motor control including the equilibrium-point (referent configuration) hypothesis, the uncontrolled manifold hypothesis, and the idea of synergies based on the principle of motor abundance. The equilibrium-point hypothesis is based on the idea of control with thresholds for activation of neuronal pools; it provides a framework for analysis of both voluntary and involuntary movements. In particular, control of a single muscle can be adequately described with changes in the threshold of motor unit recruitment during slow muscle stretch (threshold of the tonic stretch reflex). Unlike the ideas of internal models, the equilibrium-point hypothesis does not assume neural computations of mechanical variables. The uncontrolled manifold hypothesis is based on the dynamic system approach to movements; it offers a toolbox to analyze synergic changes within redundant sets of elements related to stabilization of potentially important performance variables. The referent configuration hypothesis and the principle of abundance can be naturally combined into a single coherent scheme of control of multi-element systems. A body of experimental data on healthy persons and patients with movement disorders is reviewed in support of the mentioned hypotheses. In particular, movement disorders associated with spasticity are considered as consequences of an impaired ability to shift the threshold of the tonic stretch reflex within the whole normal range. Technical details and applications of the mentioned hypotheses to studies of motor learning are described. We view the mentioned hypotheses as the most promising ones in the field of motor control, based on a solid physical and neurophysiological foundation.

  18. A Cross-Cultural Study of Reference Point Adaptation: Evidence from China, Korea, and the US

    ERIC Educational Resources Information Center

    Arkes, Hal R.; Hirshleifer, David; Jiang, Danling; Lim, Sonya S.

    2010-01-01

    We examined reference point adaptation following gains or losses in security trading using participants from China, Korea, and the US. In both questionnaire studies and trading experiments with real money incentives, reference point adaptation was larger for Asians than for Americans. Subjects in all countries adapted their reference points more…

  19. Intermittent child employment and its implications for estimates of child labour

    PubMed Central

    LEVISON, Deborah; HOEK, Jasper; LAM, David; DURYEA, Suzanne

    2008-01-01

    Using longitudinal data from urban Brazil, the authors track the employment patterns of thousands of children aged 10-16 during four months of their lives in the 1980s and 1990s. The proportion of children who work at some point during a four-month period is substantially higher than the fraction observed working in any single month. The authors calculate an intermittency multiplier to summarize the difference between employment rates in one reference week vs. four reference weeks over a four-month period. They conclude that intermittent employment is a crucial characteristic of child labour which must be recognized to capture levels of child employment adequately and identify child workers. PMID:18815624
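
    The intermittency multiplier described above is simply the ratio of the "worked at some point in the window" rate to the average single-period rate; a toy sketch with synthetic panel data (the real study uses reference weeks within each of four monthly interviews):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic panel: rows are children, columns are 4 monthly observations,
# 1 = observed working in that month's reference week (purely illustrative).
work = rng.binomial(1, 0.15, size=(5000, 4))

single_period_rate = work.mean()             # average rate in any one month
window_rate = work.any(axis=1).mean()        # worked at some point in the 4 months
print(f"single-period rate {single_period_rate:.3f}, "
      f"four-month rate {window_rate:.3f}, "
      f"intermittency multiplier {window_rate / single_period_rate:.2f}")
```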

  20. A benchmark theoretical study of the electronic ground state and of the singlet-triplet split of benzene and linear acenes

    NASA Astrophysics Data System (ADS)

    Hajgató, B.; Szieberth, D.; Geerlings, P.; De Proft, F.; Deleuze, M. S.

    2009-12-01

    A benchmark theoretical study of the electronic ground state and of the vertical and adiabatic singlet-triplet (ST) excitation energies of benzene (n = 1) and n-acenes (C4n+2H2n+4) ranging from naphthalene (n = 2) to heptacene (n = 7) is presented, on the basis of single- and multireference calculations based on restricted or unrestricted zero-order wave functions. High-level and large-scale treatments of electronic correlation in the ground state are found to be necessary for compensating giant but unphysical symmetry-breaking effects in unrestricted single-reference treatments. The composition of multiconfigurational wave functions, the topologies of natural orbitals in symmetry-unrestricted CASSCF calculations, the T1 diagnostics of coupled cluster theory, and further energy-based criteria demonstrate that all investigated systems exhibit an A1g singlet closed-shell electronic ground state. Singlet-triplet (S0-T1) energy gaps can therefore be very accurately determined by applying the principles of a focal point analysis onto the results of a series of single-point and symmetry-restricted calculations employing correlation consistent cc-pVXZ basis sets (X=D, T, Q, 5) and single-reference methods [HF, MP2, MP3, MP4SDQ, CCSD, CCSD(T)] of improving quality. According to our best estimates, which amount to a dual extrapolation of energy differences to the level of coupled cluster theory including single, double, and perturbative estimates of connected triple excitations [CCSD(T)] in the limit of an asymptotically complete basis set (cc-pV∞Z), the S0-T1 vertical excitation energies of benzene (n = 1) and n-acenes (n = 2-7) amount to 100.79, 76.28, 56.97, 40.69, 31.51, 22.96, and 18.16 kcal/mol, respectively. Values of 87.02, 62.87, 46.22, 32.23, 24.19, 16.79, and 12.56 kcal/mol are correspondingly obtained at the CCSD(T)/cc-pV∞Z level for the S0-T1 adiabatic excitation energies, upon including B3LYP/cc-pVTZ corrections for zero-point vibrational energies. In line with the absence of Peierls distortions, extrapolations of results indicate a vanishingly small S0-T1 energy gap of 0 to ~4 kcal/mol (~0.17 eV) in the limit of an infinitely large polyacene.
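
    The basis-set part of the extrapolation referred to above is commonly done with a two-point inverse-cube formula; a minimal sketch (the Helgaker-style formula is standard, but the gap values below are placeholders, and the paper's full focal-point scheme also extrapolates across correlation methods):

```python
def cbs_two_point(e_high, e_low, x_high, x_low):
    """Two-point X**-3 extrapolation of correlation contributions computed
    with cc-pV(x_low)Z and cc-pV(x_high)Z basis sets."""
    return (x_high**3 * e_high - x_low**3 * e_low) / (x_high**3 - x_low**3)

# Placeholder CCSD(T) correlation contributions to an S0-T1 gap (kcal/mol)
gap_tz, gap_qz = 78.1, 77.0            # cc-pVTZ (X=3), cc-pVQZ (X=4); illustrative only
print(f"estimated CBS-limit contribution: {cbs_two_point(gap_qz, gap_tz, 4, 3):.2f} kcal/mol")
```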

  1. A bioequivalence study of two memantine formulations in healthy Chinese male volunteers.

    PubMed

    Deng, Ying; Zhuang, Jialang; Wu, Jingguo; Chen, Jiangying; Ding, Liang; Wang, Xueding; Huang, Lihui; Zeng, Guixiong; Chen, Jie; Ma, Zhongfu; Chen, Xiao; Zhong, Guoping; Huang, Min; Zhao, Xianglan

    2017-10-01

    The aim of the current study is to evaluate the bioequivalence between the test and reference formulations of memantine in a single-dose, two-period and two-sequence crossover study with a 44-day washout interval. A total of 20 healthy Chinese male volunteers were enrolled and completed the study, after oral administration of single doses of 10 mg test and reference formulations of memantine. The blood samples were collected at different time points and memantine concentrations were determined by a fully validated HPLC-MS/MS method. The evaluated pharmacokinetic parameters (test vs. reference), including Cmax (18 ± 3.2 vs. 17.8 ± 3.4), AUC0-t (1,188.5 ± 222.2 vs. 1,170.9 ± 135.7), and AUC0-∞ (1,353.3 ± 258.6 vs. 1,291.3 ± 136.7), were assessed for bioequivalence based on current guidelines. The observed pharmacokinetic parameters of the memantine test drug were similar to those of the reference formulation. The 90% confidence intervals of test/reference ratios for Cmax, AUC0-t, and AUC0-∞ were within the bioequivalence acceptance range of 80-125%. The results obtained from the healthy Chinese subjects in this study suggest that the test formulation of the memantine 10 mg tablet is bioequivalent to the reference formulation (Ebixa® 10 mg tablet).

  2. Microstructural and compositional contributions towards the mechanical behavior of aging human bone measured by cyclic and impact reference point indentation.

    PubMed

    Abraham, Adam C; Agarwalla, Avinesh; Yadavalli, Aditya; Liu, Jenny Y; Tang, Simon Y

    2016-06-01

    The assessment of fracture risk often relies primarily on measuring bone mineral density, thereby accounting for only a single pathology: the loss of bone mass. However, bone's ability to resist fracture is a result of its biphasic composition and hierarchical structure that imbue it with high strength and toughness. Reference point indentation (RPI) testing is designed to directly probe bone mechanical behavior at the microscale in situ, although it remains unclear which aspects of bone composition and structure influence the results at this scale. Therefore, our goal in this study was to investigate factors that contribute to bone mechanical behavior measured by cyclic reference point indentation, impact reference point indentation, and three-point bending. Twenty-eight female cadavers (ages 57-97) were subjected to cyclic and impact RPI in parallel at the unmodified tibia mid-diaphysis. After RPI, the mid-diaphyseal tibiae were removed, scanned using micro-CT to obtain cortical porosity (Ct.Po.) and tissue mineral density (TMD), then tested using three-point bending, and lastly assayed for the accumulation of advanced glycation end-products (AGEs). Both the indentation distance increase from cyclic RPI (IDI) and bone material strength index from impact RPI (BMSi) were significantly correlated with TMD (r=-0.390, p=0.006 and r=0.430, p=0.002, respectively). Accumulation of AGEs was significantly correlated with IDI (r=0.281, p=0.046), creep indentation distance (CID, r=0.396, p=0.004), and BMSi (r=-0.613, p<0.001). There were no significant relationships between tissue TMD or AGE accumulation and the quasi-static material properties. Toughness decreased with increasing tissue Ct.Po. (r=-0.621, p<0.001). Other three-point bending measures also correlated with tissue Ct.Po., including the bending modulus (r=-0.50, p<0.001) and ultimate stress (r=-0.56, p<0.001). The effects of Ct.Po. on indentation were less pronounced, with IDI (r=0.290, p=0.043) and BMSi (r=-0.299, p=0.037) correlating only modestly with tissue Ct.Po. These results suggest that RPI may be sensitive to bone quality changes relating to collagen. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Self-referenced single-shot THz detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell, Brandon K.; Ofori-Okai, Benjamin K.; Chen, Zhijiang

    We demonstrate a self-referencing method to reduce noise in a single-shot terahertz detection scheme. By splitting a single terahertz pulse and using a reflective echelon, both the signal and reference terahertz time-domain waveforms were measured using one laser pulse. Simultaneous acquisition of these waveforms significantly reduces noise originating from shot-to-shot fluctuations. Here, we show that correlation-function-based referencing, which is not limited to polarization dependent measurements, can achieve a noise floor that is comparable to state-of-the-art polarization-gated balanced detection. Lastly, we extract the DC conductivity of a 30 nm free-standing gold film using a single THz pulse. The measured value of σ0 = 1.3 ± 0.4 × 10^7 S m^-1 is in good agreement with the value measured by four-point probe, indicating the viability of this method for measuring dynamical changes and small signals.

  4. Self-referenced single-shot THz detection

    DOE PAGES

    Russell, Brandon K.; Ofori-Okai, Benjamin K.; Chen, Zhijiang; ...

    2017-06-29

    We demonstrate a self-referencing method to reduce noise in a single-shot terahertz detection scheme. By splitting a single terahertz pulse and using a reflective echelon, both the signal and reference terahertz time-domain waveforms were measured using one laser pulse. Simultaneous acquisition of these waveforms significantly reduces noise originating from shot-to-shot fluctuations. Here, we show that correlation-function-based referencing, which is not limited to polarization dependent measurements, can achieve a noise floor that is comparable to state-of-the-art polarization-gated balanced detection. Lastly, we extract the DC conductivity of a 30 nm free-standing gold film using a single THz pulse. The measured value of σ0 = 1.3 ± 0.4 × 10^7 S m^-1 is in good agreement with the value measured by four-point probe, indicating the viability of this method for measuring dynamical changes and small signals.
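
    For the conductivity extraction mentioned in both records above, a common route is the thin-film (Tinkham) relation; a hedged sketch for a free-standing film, not necessarily the exact analysis used by the authors (the transmission value below is chosen only to illustrate the order of magnitude):

```python
Z0 = 376.730313668   # impedance of free space, ohms

def freestanding_film_sigma(t, d):
    """Thin-film (Tinkham) relation for a free-standing film in vacuum:
    t = E_sample / E_reference = 2 / (2 + Z0 * sigma * d); solved for sigma [S/m]."""
    return (2.0 / (Z0 * d)) * (1.0 / t - 1.0)

d = 30e-9            # film thickness, m
t = 0.0136           # illustrative field transmission relative to the reference pulse
print(f"sigma ~ {freestanding_film_sigma(t, d):.2e} S/m")
```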

  5. Experimental observation of optical Weyl points and Fermi arcs

    NASA Astrophysics Data System (ADS)

    Rechtsman, Mikael

    We directly observe the presence of type-II Weyl points for optical photons in a three-dimensional dielectric structure comprising arrays of evanescently-coupled, single-mode, helical waveguides. We also observe the corresponding Fermi arc surface states emerging from Weyl points (despite the use of the 'Fermi arc' terminology, we are referring to bosons rather than fermions). The Weyl points are manifested by the presence of conical diffraction at the Weyl frequency in the photonic band structure, and the Fermi arc states are manifested by the emergence of surface states as we scan in frequency past the Weyl point. We map the Weyl points to Dirac points of the isofrequency surface, and the Fermi arcs to chiral edge states of an anomalous Floquet insulator. In collaboration with: Jiho Noh, Sheng Huang, Daniel Leykam*, Y. D. Chong, Kevin Chen, and Mikael C. Rechtsman. M.C.R. acknowledges the National Science Foundation under Award Number ECCS-1509546, the Penn State MRSEC, Center for Nanoscale Science, under Award Number NSF DMR-1420620, and the Alfred P. Sloan Foundation under fellowship number FG-2016-6418.

  6. Accurate dew-point measurement over a wide temperature range using a quartz crystal microbalance dew-point sensor

    NASA Astrophysics Data System (ADS)

    Kwon, Su-Yong; Kim, Jong-Chul; Choi, Buyng-Il

    2008-11-01

    Quartz crystal microbalance (QCM) dew-point sensors are based on frequency measurement, and so have fast response time, high sensitivity and high accuracy. Recently, we have reported that they have the very convenient attribute of being able to distinguish between supercooled dew and frost from a single scan through the resonant frequency of the quartz resonator as a function of the temperature. In addition to these advantages, by using three different types of heat sinks, we have developed a QCM dew/frost-point sensor with a very wide working temperature range (-90 °C to 15 °C). The temperature of the quartz surface can be obtained effectively by measuring the temperature of the quartz crystal holder and using temperature compensation curves (which showed a high level of repeatability and reproducibility). The measured dew/frost points showed very good agreement with reference values and were within ±0.1 °C over the whole temperature range.

  7. Toward a Global Bundle Adjustment of SPOT 5 - HRS Images

    NASA Astrophysics Data System (ADS)

    Massera, S.; Favé, P.; Gachet, R.; Orsoni, A.

    2012-07-01

    The HRS (High Resolution Stereoscopic) instrument carried on SPOT 5 enables quasi-simultaneous acquisition of stereoscopic images on wide segments - 120 km wide - with two forward and backward-looking telescopes observing the Earth with an angle of 20° ahead and behind the vertical. For 8 years IGN (Institut Géographique National) has been developing techniques to achieve spatiotriangulation of these images. During this time the capacities of bundle adjustment of SPOT 5 - HRS spatial images have largely improved. Today a global single block composed of about 20,000 images can be computed in reasonable calculation time. The progression was achieved step by step: first computed blocks were only composed of 40 images, then bigger blocks were computed. Finally only one global block is now computed. At the same time, calculation tools have improved: for example the adjustment of 2,000 images of North Africa takes about 2 minutes whereas 8 hours were needed two years ago. To reach such a result, new independent software was developed to compute fast and efficient bundle adjustments. At the same time, equipment - GCPs (Ground Control Points) and tie points - and techniques have also evolved over the last 10 years. Studies were made to get recommendations about the equipment in order to make an accurate single block. Tie points can now be quickly and automatically computed with SURF (Speeded Up Robust Features) techniques. Today the updated equipment is composed of about 500 GCPs and studies show that the ideal configuration is around 100 tie points per square degree. With such equipment, the location of the global HRS block becomes accurate to a few meters, whereas non-adjusted images are only accurate to 15 m. This paper will describe the methods used in IGN Espace to compute a global single block composed of almost 20,000 HRS images, 500 GCPs and several million tie points in reasonable calculation time. Many advantages can be found in using such a block. Because the global block is unique, it becomes easier to manage the history and the different evolutions of the computations (new images, new GCPs or tie points). The location is now unique and consequently coherent all around the world, avoiding steps and artifacts on the borders of DSMs (Digital Surface Models) and OrthoImages historically calculated from different blocks. No extrapolation far from GCPs in the limits of images is done anymore. Using the global block as a reference will allow new images from other sources to be easily located on this reference.

  8. Pulse advancement and delay in an integrated-optical two-port ring-resonator circuit: direct experimental observations.

    PubMed

    Uranus, H P; Zhuang, L; Roeloffzen, C G H; Hoekstra, H J W M

    2007-09-01

    We report experimental observations of the negative-group-velocity (v(g)) phenomenon in an integrated-optical two-port ring-resonator circuit. We demonstrate that when the v(g) is negative, the (main) peak of the output pulse appears earlier than the peak of a reference pulse, while for a positive v(g), the situation is the other way around. We observed that a pulse splitting phenomenon occurs in the neighborhood of the critical-coupling point. This pulse splitting limits the maximum achievable delay and advancement of a single device as well as facilitating a smooth transition from highly advanced to highly delayed pulse, and vice versa, across the critical-coupling point.

  9. Ranging error analysis of single photon satellite laser altimetry under different terrain conditions

    NASA Astrophysics Data System (ADS)

    Huang, Jiapeng; Li, Guoyuan; Gao, Xiaoming; Wang, Jianmin; Fan, Wenfeng; Zhou, Shihong

    2018-02-01

    Single-photon satellite laser altimetry is based on Geiger-mode detection, which is characterized by a small footprint, a high repetition rate, etc. In this paper, the ranging error formula for sloped terrain is derived and evaluated numerically. The Monte Carlo method is used to simulate measurements over different terrain. The simulation results show that ranging accuracy is not affected by the spot size over flat terrain, but inclined terrain can influence the ranging error dramatically: when the satellite pointing angle is 0.001° and the terrain slope is about 12°, the ranging error can reach 0.5 m, while the accuracy cannot meet the requirement when the slope exceeds 70°. The Monte Carlo simulations also show that a single-photon laser altimetry satellite with a high repetition rate can improve ranging accuracy over complex terrain. In order to ensure repeated observation of the same point 25 times, according to the parameters of ICESat-2, we deduce the quantitative relation between the footprint size, the footprint, and the repetition frequency. These conclusions can provide a reference for the design and demonstration of a domestic single-photon laser altimetry satellite.
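
    A toy Monte Carlo of the slope effect described above: returns are drawn from random positions inside the footprint on a uniformly sloping surface, and the spread of surface heights across the footprint appears as ranging error (the 10 m footprint diameter and the flat-plane geometry are assumptions for illustration, not the paper's configuration):

```python
import numpy as np

def ranging_error_std(slope_deg, footprint_diam_m, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    r = (footprint_diam_m / 2.0) * np.sqrt(rng.uniform(size=n))  # uniform over the disc
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    x = r * np.cos(theta)                         # along-slope coordinate of each return
    dz = x * np.tan(np.radians(slope_deg))        # surface-height offset of each return
    return dz.std()

for slope in (0, 12, 30, 70):
    print(f"slope {slope:2d} deg: ranging spread ~ {ranging_error_std(slope, 10.0):.2f} m")
```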

  10. The LIFE Laser Design in Context: A Comparison to the State-of-the-Art

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deri, R J; Bayramian, A J; Erlandson, A C

    2011-03-21

    The current point design for the LIFE laser leverages decades of solid-state laser development in order to achieve the performance and attributes required for inertial fusion energy. This document provides a brief comparison of the LIFE laser point design to other state-of-the-art solid-state lasers. Table I compares the attributes of the current LIFE laser point design to other systems. The state-of-the-art for single-shot performance at fusion-relevant beamline energies is exemplified by performance observed on the National Ignition Facility. The state-of-the-art for high average power is exemplified by the Northrop Grumman JHPSSL laser. Several items in Table I deal with the laser efficiency; a more detailed discussion of efficiency can be found in reference 5. The electrical-to-optical efficiency of the LIFE design exceeds that of reference 4 due to the availability of higher efficiency laser diode pumps (70% vs. ~50% used in reference 4). LIFE diode pumps are discussed in greater detail in reference 6. The 'beam steering' state of the art is represented by the deflection device that will be used in the LIFE laser, not a laser system. Inspection of Table I shows that most LIFE laser attributes have already been experimentally demonstrated. The two cases where the LIFE design is somewhat better than prior experimental work do not involve the development of new concepts: beamline power is increased simply by increasing aperture (as demonstrated by the power/aperture comparison in Table I), and efficiency increases are achieved by employing state-of-the-art diode pumps. In conclusion, the attributes anticipated for the LIFE laser are consistent with the demonstrated performance of existing solid-state lasers.

  11. An ultra-precision tool nanoindentation instrument for replication of single point diamond tool cutting edges

    NASA Astrophysics Data System (ADS)

    Cai, Yindi; Chen, Yuan-Liu; Xu, Malu; Shimizu, Yuki; Ito, So; Matsukuma, Hiraku; Gao, Wei

    2018-05-01

    Precision replication of the diamond tool cutting edge is required for non-destructive tool metrology. This paper presents an ultra-precision tool nanoindentation instrument designed and constructed for replication of the cutting edge of a single point diamond tool onto a selected soft metal workpiece by precisely indenting the tool cutting edge into the workpiece surface. The instrument has the ability to control the indentation depth with a nanometric resolution, enabling the replication of tool cutting edges with high precision. The motion of the diamond tool along the indentation direction is controlled by the piezoelectric actuator of a fast tool servo (FTS). An integrated capacitive sensor of the FTS is employed to detect the displacement of the diamond tool. The soft metal workpiece is attached to an aluminum cantilever whose deflection is monitored by another capacitive sensor, referred to as an outside capacitive sensor. The indentation force and depth can be accurately evaluated from the diamond tool displacement, the cantilever deflection and the cantilever spring constant. Experiments were carried out by replicating the cutting edge of a single point diamond tool with a nose radius of 2.0 mm on a copper workpiece surface. The profile of the replicated tool cutting edge was measured using an atomic force microscope (AFM). The effectiveness of the instrument in precision replication of diamond tool cutting edges is well-verified by the experimental results.
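
    The force/depth bookkeeping implied by the abstract, written out as a tiny sketch (the function name, spring constant, and readings are invented placeholders; the actual calibration procedure is described in the paper):

```python
def force_and_depth(tool_disp_um, cantilever_defl_um, k_newton_per_um):
    """Indentation force as the cantilever spring constant times its measured
    deflection; indentation depth as tool displacement minus that deflection."""
    force_newton = k_newton_per_um * cantilever_defl_um
    depth_um = tool_disp_um - cantilever_defl_um
    return force_newton, depth_um

f, d = force_and_depth(tool_disp_um=1.20, cantilever_defl_um=0.35, k_newton_per_um=0.08)
print(f"force ~ {f * 1e3:.1f} mN, indentation depth ~ {d * 1e3:.0f} nm")
```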

  12. RNAcentral: A comprehensive database of non-coding RNA sequences

    DOE PAGES

    Williams, Kelly Porter; Lau, Britney Yan

    2016-10-28

    RNAcentral is a database of non-coding RNA (ncRNA) sequences that aggregates data from specialised ncRNA resources and provides a single entry point for accessing ncRNA sequences of all ncRNA types from all organisms. Since its launch in 2014, RNAcentral has integrated twelve new resources, taking the total number of collaborating databases to 22, and began importing new types of data, such as modified nucleotides from MODOMICS and PDB. We created new species-specific identifiers that refer to unique RNA sequences within the context of a single species. Furthermore, the website has been subject to continuous improvements focusing on text and sequence similarity searches as well as genome browsing functionality.

  13. RNAcentral: A comprehensive database of non-coding RNA sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Kelly Porter; Lau, Britney Yan

    RNAcentral is a database of non-coding RNA (ncRNA) sequences that aggregates data from specialised ncRNA resources and provides a single entry point for accessing ncRNA sequences of all ncRNA types from all organisms. Since its launch in 2014, RNAcentral has integrated twelve new resources, taking the total number of collaborating databases to 22, and began importing new types of data, such as modified nucleotides from MODOMICS and PDB. We created new species-specific identifiers that refer to unique RNA sequences within the context of a single species. Furthermore, the website has been subject to continuous improvements focusing on text and sequence similarity searches as well as genome browsing functionality.

  14. Here, There, and Everywhere: Reference at the Point-of-Need.

    ERIC Educational Resources Information Center

    Trump, Judith F.; Tuttle, Ian P.

    2001-01-01

    Growing numbers of libraries are experimenting with a new form of interactive reference to extend service to their patrons at the point-of-need and time-of-need. Examines digital reference for the altered user culture and point-of-need service products. Describes a pilot project offering chat reference service. Discusses developing service…

  15. A Gyroless Safehold Control Law Using Angular Momentum as an Inertial Reference Vector

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric; Lebsock, Ken

    2008-01-01

    A novel safehold control law was developed for the nadir-pointing Vegetation Canopy Lidar (VCL) spacecraft, necessitated by a challenging combination of constraints. The instrument optics did not have a reclosable cover to protect them from potentially catastrophic damage if they were exposed to direct sunlight. The baseline safehold control law relied on a single-string inertial reference unit. A gyroless safehold law was developed to give a degree of robustness to gyro failures. Typical safehold solutions were not viable; thermal constraints made spin stabilization unsuitable, and an inertial hold based solely on magnetometer measurements wandered unacceptably during eclipse. The novel approach presented here maintains a momentum bias vector not for gyroscopic stiffness, but to use as an inertial reference direction during eclipse. The control law design is presented. The effect on stability of the rank-deficiency of magnetometer-based rate derivation is assessed. The control law's performance is evaluated by simulation.

  16. A Gyroless Safehold Control Law using Angular Momentum as an Inertial Reference Vector

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric; Lebsock, Ken

    2008-01-01

    A novel safehold control law was developed for the nadir-pointing Vegetation Canopy Lidar (VCL) spacecraft, necessitated by a challenging combination of constraints. The instrument optics did not have a reclosable cover to protect them from potentially catastrophic damage if they were exposed to direct sunlight. The baseline safehold control law relied on a single-string inertial reference unit. A gyroless safehold law was developed to give a degree of robustness to gyro failures. Typical safehold solutions were not viable; thermal constraints made spin stabilization unsuitable, and an inertial hold based solely on magnetometer measurements wandered unacceptably during eclipse. The novel approach presented here maintains a momentum bias vector not for gyroscopic stiffness, but to use as an inertial reference direction during eclipse. The control law design is presented. The effect on stability of the rank-deficiency of magnetometer-based rate derivation is assessed. The control law's performance is evaluated by simulation.

  17. Combining near-field scanning optical microscopy with spectral interferometry for local characterization of the optical electric field in photonic structures.

    PubMed

    Trägårdh, Johanna; Gersen, Henkjan

    2013-07-15

    We show how a combination of near-field scanning optical microscopy with crossed beam spectral interferometry allows a local measurement of the spectral phase and amplitude of light propagating in photonic structures. The method only requires measurement at the single point of interest and at a reference point, to correct for the relative phase of the interferometer branches, to retrieve the dispersion properties of the sample. Furthermore, since the measurement is performed in the spectral domain, the spectral phase and amplitude could be retrieved from a single camera frame, here in 70 ms for a signal power of less than 100 pW limited by the dynamic range of the 8-bit camera. The method is substantially faster than most previous time-resolved NSOM methods that are based on time-domain interferometry, which also reduced problems with drift. We demonstrate how the method can be used to measure the refractive index and group velocity in a waveguide structure.

  18. 'Constraint consistency' at all orders in cosmological perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in

    2015-08-01

    We study the equivalence of two approaches to cosmological perturbation theory at all orders, order-by-order Einstein's equations and the reduced action, for different models of inflation. We point out a crucial consistency check, which we refer to as the 'constraint consistency' condition, that needs to be satisfied in order for the two approaches to lead to an identical single-variable equation of motion. The method we propose here is a quick and efficient way to check the consistency for any model, including modified gravity models. Our analysis points out an important feature which is crucial for inflationary model building, i.e., all 'constraint'-inconsistent models have higher-order Ostrogradsky instabilities, but the reverse is not true. In other words, one can have models in which the Lapse function and Shift vector remain constraints, though they may still have Ostrogradsky instabilities. We also obtain the single-variable equation for a non-canonical scalar field in the limit of power-law inflation for the second-order perturbed variables.

  19. Stereo multiplexed holographic particle image velocimeter

    DOEpatents

    Adrian, Ronald J.; Barnhart, Donald H.; Papen, George A.

    1996-01-01

    A holographic particle image velocimeter employs stereoscopic recording of particle images, taken from two different perspectives and at two distinct points in time for each perspective, on a single holographic film plate. The different perspectives are provided by two optical assemblies, each including a collecting lens, a prism and a focusing lens. Collimated laser energy is pulsed through a fluid stream, with elements carried in the stream scattering light, some of which is collected by each collecting lens. The respective focusing lenses are configured to form images of the scattered light near the holographic plate. The particle images stored on the plate are reconstructed using the same optical assemblies employed in recording, by transferring the film plate and optical assemblies as a single integral unit to a reconstruction site. At the reconstruction site, reconstruction beams, phase conjugates of the reference beams used in recording the image, are directed to the plate, then selectively through either one of the optical assemblies, to form an image reflecting the chosen perspective at the two points in time.

  20. Stereo multiplexed holographic particle image velocimeter

    DOEpatents

    Adrian, R.J.; Barnhart, D.H.; Papen, G.A.

    1996-08-20

    A holographic particle image velocimeter employs stereoscopic recording of particle images, taken from two different perspectives and at two distinct points in time for each perspective, on a single holographic film plate. The different perspectives are provided by two optical assemblies, each including a collecting lens, a prism and a focusing lens. Collimated laser energy is pulsed through a fluid stream, with elements carried in the stream scattering light, some of which is collected by each collecting lens. The respective focusing lenses are configured to form images of the scattered light near the holographic plate. The particle images stored on the plate are reconstructed using the same optical assemblies employed in recording, by transferring the film plate and optical assemblies as a single integral unit to a reconstruction site. At the reconstruction site, reconstruction beams, phase conjugates of the reference beams used in recording the image, are directed to the plate, then selectively through either one of the optical assemblies, to form an image reflecting the chosen perspective at the two points in time. 13 figs.

  1. Geometric registration of images by similarity transformation using two reference points

    NASA Technical Reports Server (NTRS)

    Kang, Yong Q. (Inventor); Jo, Young-Heon (Inventor); Yan, Xiao-Hai (Inventor)

    2011-01-01

    A method for registering a first image to a second image using a similarity transformation. Each image includes a plurality of pixels. The first image pixels are mapped to a set of first image coordinates and the second image pixels are mapped to a set of second image coordinates. The first image coordinates of two reference points in the first image are determined. The second image coordinates of these reference points in the second image are determined. A Cartesian translation of the set of second image coordinates is performed such that the second image coordinates of the first reference point match its first image coordinates. A similarity transformation of the translated set of second image coordinates is performed. This transformation scales and rotates the second image coordinates about the first reference point such that the second image coordinates of the second reference point match its first image coordinates.
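
    Because a similarity transform in the plane is multiplication by a complex number plus a translation, the two-reference-point construction above can be written very compactly; a sketch (the function name and sample coordinates are mine):

```python
import numpy as np

def two_point_similarity(p1_ref, p2_ref, p1_mov, p2_mov):
    """Map second-image (moving) coordinates into the first image's frame:
    translate so the first reference point matches, then scale and rotate
    about it so the second reference point matches too."""
    a_ref, b_ref = complex(*p1_ref), complex(*p2_ref)
    a_mov, b_mov = complex(*p1_mov), complex(*p2_mov)
    s = (b_ref - a_ref) / (b_mov - a_mov)       # combined scale-and-rotation factor

    def transform(xy):
        w = a_ref + s * (complex(*xy) - a_mov)  # translate, then scale/rotate about p1
        return np.array([w.real, w.imag])

    return transform

t = two_point_similarity((10, 20), (110, 20), (5, 5), (55, 55))
print(t((5, 5)), t((55, 55)))   # both reference points land on their first-image coordinates
```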

  2. Qualitative and quantitative three-dimensional accuracy of a single tooth captured by elastomeric impression materials: an in vitro study.

    PubMed

    Schaefer, Oliver; Schmidt, Monika; Goebel, Roland; Kuepper, Harald

    2012-09-01

    The accuracy of impressions has been described in 1 or 2 dimensions, whereas it is most desirable to evaluate the accuracy of impressions spatially, in 3 dimensions. The purpose of this study was to demonstrate the accuracy and reproducibility of a 3-dimensional (3-D) approach to assessing impression precision and to quantitatively compare the occlusal correctness of gypsum dies made with different impression materials. By using an aluminum replica of a maxillary molar, single-step dual viscosity impressions were made with 1 polyether/vinyl polysiloxane hybrid material (Identium), 1 vinyl polysiloxane (Panasil), and 1 polyether (Impregum) (n=5). Corresponding dies were made of Type IV gypsum and were optically digitized and aligned to the virtual reference of the aluminum tooth. Accuracy was analyzed by computing mean quadratic deviations between the virtual reference and the gypsum dies, while deviations of the dies among one another determined the reproducibility of the method. The virtual reference was adapted to create 15 occlusal contact points. The percentage of contact points deviating within a ±10 µm tolerance limit (PDP(10) = Percentage of Deviating Points within ±10 µm Tolerance) was set as the index for assessing occlusal accuracy. Visual results for the difference from the reference tooth were displayed with colors, whereas mean deviation values as well as mean PDP(10) differences were analyzed with a 1-way ANOVA and Scheffé post hoc comparisons (α=.05). Objective characterization of accuracy showed smooth axial surfaces to be undersized, whereas occlusal surfaces were accurate or enlarged when compared to the original tooth. The accuracy of the gypsum replicas ranged between 3 and 6 µm, while reproducibility results varied from 2 to 4 µm. Mean (SD) PDP(10)-values were: Panasil 91% (±11), Identium 77% (±4) and Impregum 29% (±3). One-way ANOVA detected significant differences among the tested impression materials (P<.001). The accuracy and reproducibility of impressions were determined by 3-D analysis. Results were presented as color images and the newly developed PDP(10)-index was successfully used to quantify spatial dimensions for complex occlusal anatomy. Impression materials with high PDP(10)-values were shown to reproduce occlusal dimensions the most accurately. Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
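
    The PDP(10) index defined above reduces to a one-line computation once the per-contact-point deviations are known; a sketch with synthetic deviations:

```python
import numpy as np

def pdp(deviations_um, tol_um=10.0):
    """Percentage of contact-point deviations within +/- tol_um (PDP(10) for tol_um=10)."""
    return 100.0 * np.mean(np.abs(np.asarray(deviations_um)) <= tol_um)

# 15 synthetic occlusal contact-point deviations (micrometres) for one die
devs = [3, -8, 5, 12, -2, 7, -15, 9, 1, -6, 4, 11, -3, 2, 8]
print(f"PDP(10) = {pdp(devs):.0f}%")   # -> 80% for this synthetic die
```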

  3. High-resolution velocimetry in energetic tidal currents using a convergent-beam acoustic Doppler profiler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sellar, Brian; Harding, Samuel F.; Richmond, Marshall C.

    An array of convergent acoustic Doppler velocimeters has been developed and tested for the high resolution measurement of three-dimensional tidal flow velocities in an energetic tidal site. This configuration has been developed to increase spatial resolution of velocity measurements in comparison to conventional acoustic Doppler profilers (ADPs) which characteristically use diverging acoustic beams emanating from a single instrument. This is achieved using converging acoustic beams with a sample volume at the focal point of 0.03 m³. The array is also able to simultaneously measure three-dimensional velocity components in a profile throughout the water column, and as such is referred to herein as a converging-beam acoustic Doppler profiler (CADP). Mid-depth profiling is achieved through integration of the sensor platform with the operational Alstom 1MW DeepGen-IV Tidal Turbine. This proof-of-concept paper outlines system configuration and comparison to measurements provided by co-installed reference instrumentation. Comparison of CADP to standard ADP velocity measurements reveals a mean difference of 8 mm/s, standard deviation of 18 mm/s, and order-of-magnitude reduction in realizable length-scale. CADP focal point measurements compared to a proximal single-beam reference show peak cross-correlation coefficient of 0.96 over 4.0 s averaging period and a 47% reduction in Doppler noise. The dual functionality of the CADP as a profiling instrument with a high resolution focal point makes this configuration a unique and valuable advancement in underwater velocimetry enabling improved turbulence, resource and structural loading quantification and validation of numerical simulations. Alternative modes of operation have been implemented including noise-reducing bi-static sampling. Since waves are simultaneously measured it is expected that derivatives of this system will be a powerful tool in wave-current interaction studies.

  4. Priorities and prospect theory.

    PubMed

    Happich, M; Mazurek, B

    2002-01-01

    Whose preferences are to be used for cost-effectiveness analysis? It has been recommended that community preferences for health states are the most appropriate ones for use in a reference case analysis. However, critics maintain that persons are not able properly to judge a health state if they have not experienced the condition themselves. This problem is analyzed here in the framework of Prospect Theory. It can be argued that the differing reference points of patients and the general public are responsible for deviating results. In addition, we argue that risk attitudes with respect to health-related quality of life are an indicator of reference points. If patients and the general public refer to the same reference point, i.e., they have the same risk attitude, the hypothesis is that deviations no longer significantly differ. Evaluations of the health condition of tinnitus by 210 patients and 210 unaffected persons were compared. The Time Tradeoff and Standard Gamble methods were applied to elicit preferences. Risk attitude was measured with the question of whether participants would undergo a treatment that could either improve or worsen their health condition, both with an equal chance (five possible answers between "in no case" and "in any case"). Affected persons indicated significantly higher values for tinnitus-related quality of life according to the Standard Gamble method. The difference between Time Tradeoff values was less dramatic but still significant. In addition, nonaffected persons are more risk-averse than affected persons. However, differences in evaluations are not significant considering single risk groups (e.g., those who answered "in no case"). Prospect Theory is a reasonable framework for considering the question of whose preferences count. If this result can be generalized for other diseases as well, it allows the mathematical combination of "objective" evaluations by the general public with the illness experience of patients. These evaluations should be weighted with patients' risk attitudes, i.e., community preferences can be used if they are corrected for risk attitudes.
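
    To make the reference-point dependence concrete, a small sketch using the standard Tversky-Kahneman value function is given below (the parameters alpha = beta = 0.88 and lambda = 2.25 are the commonly cited estimates, not values from this study); the same health state scores differently depending on the evaluator's reference point:

        def value(outcome, reference, alpha=0.88, beta=0.88, lam=2.25):
            """Prospect-theory value of an outcome relative to a reference point."""
            x = outcome - reference
            return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

        # Hypothetical quality-of-life score for living with tinnitus (0-1 scale)
        health_state = 0.75
        print(value(health_state, reference=0.70))  # patient: small gain relative to an adapted state
        print(value(health_state, reference=1.00))  # general public: loss relative to full health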

  5. reaxFF Reactive Force Field for Disulfide Mechanochemistry, Fitted to Multireference ab Initio Data.

    PubMed

    Müller, Julian; Hartke, Bernd

    2016-08-09

    Mechanochemistry, in particular in the form of single-molecule atomic force microscopy experiments, is difficult to model theoretically, for two reasons: Covalent bond breaking is not captured accurately by single-determinant, single-reference quantum chemistry methods, and experimental times of milliseconds or longer are hard to simulate with any approach. Reactive force fields have the potential to alleviate both problems, as demonstrated in this work: Using nondeterministic global parameter optimization by evolutionary algorithms, we have fitted a reaxFF force field to high-level multireference ab initio data for disulfides. The resulting force field can be used to reliably model large, multifunctional mechanochemistry units with disulfide bonds as designed breaking points. Explorative calculations show that a significant part of the time scale gap between AFM experiments and dynamical simulations can be bridged with this approach.

  6. Laser-Induced Melting of Co-C Eutectic Cells as a New Research Tool

    NASA Astrophysics Data System (ADS)

    van der Ham, E.; Ballico, M.; Jahan, F.

    2015-08-01

    A new laser-based technique to examine heat transfer and energetics of phase transitions in metal-carbon fixed points, and potentially to improve the quality of phase transitions in furnaces with poor uniformity, is reported. Being reproducible below 0.1 K, metal-carbon fixed points are increasingly used as reference standards for the calibration of thermocouples and radiation thermometers. At NMIA, the Co-C eutectic point is used for the calibration of thermocouples, with the fixed point traceable to the International Temperature Scale (ITS-90) using radiation thermometry. For thermocouple use, these cells are deep inside a high-uniformity furnace, easily obtaining excellent melting plateaus. However, when used with radiation thermometers, the essential large viewing cone to the crucible restricts the furnace depth and introduces large heat losses from the front furnace zone, affecting the quality of the phase transition. Short laser bursts have been used to illuminate the cavity of a conventional Co-C fixed-point cell during various points in its melting phase transition. The laser is employed to partially melt the metal at the rear of the crucible, providing a liquid-solid interface close to the region being observed by the reference pyrometer. As the laser power is known, a quantitative estimate can be made of the Co-C latent heat of fusion. Using a single laser pulse during a furnace-induced melt, a plateau up to 8 min is observed before the crucible resumes a characteristic conventional melt curve. Although this plateau is satisfyingly flat, well within 100 mK, it is observed that the plateau is laser energy dependent and elevates from the conventional melt "inflection-point" value.

  7. Limited Sampling Strategy for the Prediction of Area Under the Curve (AUC) of Statins: Reliability of a Single Time Point for AUC Prediction for Pravastatin and Simvastatin.

    PubMed

    Srinivas, N R

    2016-02-01

    Statins are widely prescribed medicines and are also available in fixed dose combinations with other drugs to treat several chronic ailments. Given the safety issues associated with statins, it may be important to assess the feasibility of a single time-point concentration strategy for prediction of exposure (area under the curve; AUC). The peak concentration (Cmax) was used to establish a relationship with AUC separately for pravastatin and simvastatin using published pharmacokinetic data. The regression equations generated for statins were used to predict the AUC values from various literature references. The fold difference of the observed divided by predicted values along with correlation coefficient (r) were used to judge the feasibility of the single time point approach. Both pravastatin and simvastatin showed excellent correlation of Cmax vs. AUC values with r value ≥ 0.9638 (p<0.001). The fold difference was within 0.5-fold to 2-fold for 220 out of 227 AUC predictions and >81% of the predicted values were in a narrower range of >0.75-fold but <1.5-fold difference. Predicted vs. observed AUC values showed excellent correlation for pravastatin (r=0.9708, n=115; p<0.001) and simvastatin (r=0.9810; n=117; p<0.001) suggesting the utility of Cmax for AUC predictions. On the basis of the present work, it is feasible to develop a single concentration time point strategy that coincides with Cmax occurrence for both pravastatin and simvastatin from a therapeutic drug monitoring perspective. © Georg Thieme Verlag KG Stuttgart · New York.
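
    A minimal sketch of the single-time-point idea (illustrative numbers only, not the published statin data): fit AUC against Cmax by linear regression, then judge each prediction by the observed/predicted fold difference:

        import numpy as np

        # Hypothetical paired Cmax (ng/mL) and AUC (ng*h/mL) observations
        cmax = np.array([20.0, 35.0, 50.0, 65.0, 80.0, 95.0])
        auc = np.array([55.0, 90.0, 130.0, 170.0, 205.0, 250.0])

        slope, intercept = np.polyfit(cmax, auc, 1)      # simple least-squares line
        r = np.corrcoef(cmax, auc)[0, 1]                 # correlation coefficient

        predicted = slope * cmax + intercept
        fold_difference = auc / predicted                # observed / predicted
        print(f"r = {r:.4f}, slope = {slope:.2f}, intercept = {intercept:.2f}")
        print("fraction within 0.75-1.5 fold:", np.mean((fold_difference > 0.75) & (fold_difference < 1.5)))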

  8. 47 CFR 76.53 - Reference points.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 4 2011-10-01 2011-10-01 false Reference points. 76.53 Section 76.53 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.53 Reference points. The...

  9. 47 CFR 76.53 - Reference points.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 4 2013-10-01 2013-10-01 false Reference points. 76.53 Section 76.53 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.53 Reference points. The...

  10. 47 CFR 76.53 - Reference points.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 4 2014-10-01 2014-10-01 false Reference points. 76.53 Section 76.53 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.53 Reference points. The...

  11. 47 CFR 76.53 - Reference points.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 4 2012-10-01 2012-10-01 false Reference points. 76.53 Section 76.53 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Carriage of Television Broadcast Signals § 76.53 Reference points. The...

  12. Passive serialization in a multitasking environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hennessey, J.P.; Osisek, D.L.; Seigh, J.W. II

    1989-02-28

    In a multiprocessing system having a control program in which data objects are shared among processes, this patent describes a method for serializing references to a data object by the processes so as to prevent invalid references to the data object by any process when an operation requiring exclusive access is performed by another process, comprising the steps of: permitting the processes to reference data objects on a shared access basis without obtaining a shared lock; monitoring a point of execution of the control program which is common to all processes in the system, which occurs regularly in the process's execution and across which no references to any data object can be maintained by any process, except references using locks; establishing a system reference point which occurs after each process in the system has passed the point of execution at least once since the last such system reference point; requesting an operation requiring exclusive access on a selected data object; preventing subsequent references by other processes to the selected data object; waiting until two of the system reference points have occurred; and then performing the requested operation.
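
    The claimed scheme is, in effect, a grace-period mechanism: readers take no locks, and a writer defers its exclusive operation until every process has passed the common execution point, so no stale reference can survive. A loose sketch of that idea under stated assumptions (per-thread quiescent counters and cooperative readers; not the patent's actual implementation):

        import threading
        import time

        class PassiveSerializer:
            """Toy grace-period tracker. Readers call quiescent_point() at a code location
            across which they hold no shared-object references; a writer waits for two
            system reference points before performing its exclusive operation."""

            def __init__(self, num_threads):
                self.counters = [0] * num_threads      # per-process pass counts
                self.lock = threading.Lock()

            def quiescent_point(self, tid):
                with self.lock:
                    self.counters[tid] += 1

            def _snapshot(self):
                with self.lock:
                    return list(self.counters)

            def wait_for_system_reference_point(self):
                # A system reference point occurs once every process has passed the
                # common execution point at least once since the snapshot was taken.
                start = self._snapshot()
                while any(c <= s for c, s in zip(self._snapshot(), start)):
                    time.sleep(0.001)                  # poll; a real system would block/notify

            def exclusive_operation(self, operation):
                # New references would be prevented here (omitted), then wait twice.
                self.wait_for_system_reference_point()
                self.wait_for_system_reference_point()
                operation()                            # safe: no process holds a stale reference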

  13. Stages in Learning Motor Synergies: A View Based on the Equilibrium-Point Hypothesis

    PubMed Central

    Latash, Mark L.

    2009-01-01

    This review describes a novel view on stages in motor learning based on recent developments of the notion of synergies, the uncontrolled manifold hypothesis, and the equilibrium-point hypothesis (referent configuration) that allow these notions to be merged into a single scheme of motor control. The principle of abundance and the principle of minimal final action form the foundation for analyses of natural motor actions performed by redundant sets of elements. Two main stages of motor learning are introduced corresponding to (1) discovery and strengthening of motor synergies stabilizing salient performance variable(s), and (2) their weakening when other aspects of motor performance are optimized. The first stage may be viewed as consisting of two steps, the elaboration of an adequate referent configuration trajectory and the elaboration of multi-joint (multi-muscle) synergies stabilizing the referent configuration trajectory. Both steps are expected to lead to more variance in the space of elemental variables that is compatible with a desired time profile of the salient performance variable (“good variability”). Adjusting control to other aspects of performance during the second stage (for example, esthetics, energy expenditure, time, fatigue, etc.) may lead to a drop in the “good variability”. Experimental support for the suggested scheme is reviewed. PMID:20060610

  14. Stages in learning motor synergies: a view based on the equilibrium-point hypothesis.

    PubMed

    Latash, Mark L

    2010-10-01

    This review describes a novel view on stages in motor learning based on recent developments of the notion of synergies, the uncontrolled manifold hypothesis, and the equilibrium-point hypothesis (referent configuration) that allow these notions to be merged into a single scheme of motor control. The principle of abundance and the principle of minimal final action form the foundation for analyses of natural motor actions performed by redundant sets of elements. Two main stages of motor learning are introduced corresponding to (1) discovery and strengthening of motor synergies stabilizing salient performance variable(s) and (2) their weakening when other aspects of motor performance are optimized. The first stage may be viewed as consisting of two steps, the elaboration of an adequate referent configuration trajectory and the elaboration of multi-joint (multi-muscle) synergies stabilizing the referent configuration trajectory. Both steps are expected to lead to more variance in the space of elemental variables that is compatible with a desired time profile of the salient performance variable ("good variability"). Adjusting control to other aspects of performance during the second stage (for example, esthetics, energy expenditure, time, fatigue, etc.) may lead to a drop in the "good variability". Experimental support for the suggested scheme is reviewed. Copyright © 2009 Elsevier B.V. All rights reserved.

  15. Ionization chamber-based reference dosimetry of intensity modulated radiation beams.

    PubMed

    Bouchard, Hugo; Seuntjens, Jan

    2004-09-01

    The present paper addresses reference dose measurements using thimble ionization chambers for quality assurance in IMRT fields. In these radiation fields, detector fluence perturbation effects invalidate the application of open-field dosimetry protocol data for the derivation of absorbed dose to water from ionization chamber measurements. We define a correction factor C(Q)IMRT to correct the absorbed dose to water calibration coefficient N(D, w)Q for fluence perturbation effects in individual segments of an IMRT delivery, and develop a calculation method to evaluate the factor. The method consists of precalculating, using accurate Monte Carlo techniques, the ionization chamber type-dependent cavity air dose and the in-phantom dose to water at the reference point for zero-width pencil beams, as a function of the position of the pencil beams impinging on the phantom surface. These precalculated kernels are convolved with the IMRT fluence distribution to arrive at the dose-to-water to dose-to-cavity-air ratio [D(a)w (IMRT)] for IMRT fields, and with a 10x10 cm2 open-field fluence to arrive at the same ratio D(a)w (Q) for the 10x10 cm2 reference field. The correction factor C(Q)IMRT is then calculated as the ratio of D(a)w (IMRT) and D(a)w (Q). The calculation method was experimentally validated, and the magnitude of chamber correction factors in reference dose measurements in single static and dynamic IMRT fields was studied. The results show that, for thimble-type ionization chambers, the correction factor in a single, realistic dynamic IMRT field can be of the order of 10% or more. We therefore propose that for accurate reference dosimetry of complete n-beam IMRT deliveries, ionization chamber fluence perturbation correction factors must explicitly be taken into account.
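
    A schematic sketch of the calculation described above (the kernels and fluence maps are crude placeholders; the real kernels come from Monte Carlo simulation): convolve precalculated pencil-beam dose kernels with the fluence, form the dose-to-water to dose-to-cavity-air ratios, and take their ratio to obtain the correction factor:

        import numpy as np

        def dose_ratio(fluence, kernel_water, kernel_air):
            """D_w / D_air at the reference point for a given incident fluence map."""
            dw = np.sum(fluence * kernel_water)   # convolution evaluated at the reference point
            da = np.sum(fluence * kernel_air)
            return dw / da

        def correction_factor(fluence_imrt, fluence_ref, kernel_water, kernel_air):
            """C_Q^IMRT = [D_w/D_air](IMRT field) / [D_w/D_air](reference field)."""
            return (dose_ratio(fluence_imrt, kernel_water, kernel_air) /
                    dose_ratio(fluence_ref, kernel_water, kernel_air))

        # Hypothetical example: a narrow IMRT segment versus an open reference field
        n = 64
        y, x = np.mgrid[:n, :n]
        r2 = (x - n / 2) ** 2 + (y - n / 2) ** 2
        kernel_water = np.exp(-r2 / 200.0)                    # placeholder water kernel
        kernel_air = np.exp(-r2 / 150.0)                      # placeholder cavity-air kernel
        fluence_ref = np.ones((n, n))                         # open reference field
        fluence_imrt = (np.abs(x - n / 2) < 4).astype(float)  # narrow strip segment
        print(correction_factor(fluence_imrt, fluence_ref, kernel_water, kernel_air))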

  16. Application of microprocessors in an upper atmosphere instrument package

    NASA Technical Reports Server (NTRS)

    Lim, T. S.; Ehrman, C. H.; Allison, S.

    1981-01-01

    A servo-driven magnetometer table measuring offset from magnetic north has been developed by NASA to calculate payload azimuth required to point at a celestial target. Used as an aid to the study of gamma-ray phenomena, the high-altitude balloon-borne instrument determines a geocentric reference system, and calculates a set of pointing directions with respect to the system. Principal components include the magnetometer, stepping motor, microcomputer, and gray code shaft encoder. The single-chip microcomputer is used to control the orientation of the system, and consists of a central processing unit, program memory, data memory and input/output ports. Principal advantages include a low power requirement, consuming 6 watts, as compared to 30 watts consumed by the previous system.

  17. Kingdom Animalia: the zoological malaise from a microbial perspective

    NASA Technical Reports Server (NTRS)

    Margulis, L.

    1990-01-01

    Pain and cognitive dissonance abounds amongst biologists: the plant-animal, botany-zoology wound has nearly healed and the new gash--revealed by department and budget reorganizations--is "molecular" vs. "organismic" biology. Here I contend that resolution of these tensions within zoology requires that an autopoietic-gaian view replace a mechanical-neodarwinian perspective; in the interest of brevity and since many points have been discussed elsewhere, rather than develop detailed arguments I must make staccato statements and refer to a burgeoning literature. The first central concept is that animals, all organisms developing from blastular embryos, evolved from single protist cells that were unable to reproduce their undulipodia. The second points to the usefulness of recognizing the analogy between cyclically established symbioses and meiotic sexuality.

  18. Experimental Testing of a Generic Submarine Model in the DSTO Low Speed Wind Tunnel. Phase 2

    DTIC Science & Technology

    2014-03-01

    ... axis, z-axis (Nm); l, model reference length (1.35 m); L, lift force (N); MRP, moment reference point; q, dynamic pressure, q = ½ρU² (Pa) ... moment reference point (MRP). The moment reference point was defined as the mid-length position on the centre-line of the model. Figure 5 presents the ...

  19. Response to the Point of View of Gregory B. Pauly, David M. Hillis, and David C. Cannatella, by the Anuran Subcommittee of the SSAR/HL/ASIH Scientific and Standard English Names List

    USGS Publications Warehouse

    Frost, Darrel R.; McDiarmid, Roy W.; Mendelson, Joseph R.

    2009-01-01

    The Point of View by Gregory Pauly, David Hillis, and David Cannatella misrepresents the motives and activities of the anuran subcommittee of the Scientific and Standard English Names Committee, contains a number of misleading statements, omits evidence and references to critical literature that have already rejected or superseded their positions, and cloaks the limitations of their nomenclatural approach in ambiguous language. Their Point of View is not about promoting transparency in the process of constructing the English Names list, assuring that its taxonomy is adequately reviewed, or promoting nomenclatural stability in any global sense. Rather, their Point of View focuses in large part on a single publication, The Amphibian Tree of Life, which is formally unrelated to the Standard English Names List, and promotes an approach to nomenclature mistakenly asserted by them to be compatible with both the International Code of Zoological Nomenclature and one of its competitors, the PhyloCode.

  20. Efficient clustering aggregation based on data fragments.

    PubMed

    Wu, Ou; Hu, Weiming; Maybank, Stephen J; Zhu, Mingliang; Li, Bing

    2012-06-01

    Clustering aggregation, known as clustering ensembles, has emerged as a powerful technique for combining different clustering results to obtain a single better clustering. Existing clustering aggregation algorithms are applied directly to data points, in what is referred to as the point-based approach. The algorithms are inefficient if the number of data points is large. We define an efficient approach for clustering aggregation based on data fragments. In this fragment-based approach, a data fragment is any subset of the data that is not split by any of the clustering results. To establish the theoretical bases of the proposed approach, we prove that clustering aggregation can be performed directly on data fragments under two widely used goodness measures for clustering aggregation taken from the literature. Three new clustering aggregation algorithms are described. The experimental results obtained using several public data sets show that the new algorithms have lower computational complexity than three well-known existing point-based clustering aggregation algorithms (Agglomerative, Furthest, and LocalSearch); nevertheless, the new algorithms do not sacrifice the accuracy.
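
    A small sketch of the fragment idea (assumed labels, not the authors' algorithms): a fragment is the set of points that receive the same label tuple across all input clusterings, so aggregation can operate on the few fragments instead of the many points:

        from collections import defaultdict

        def data_fragments(clusterings):
            """Group point indices whose label tuple is identical across all clusterings."""
            fragments = defaultdict(list)
            for idx, labels in enumerate(zip(*clusterings)):
                fragments[labels].append(idx)
            return list(fragments.values())

        # Three input clusterings of six points
        c1 = [0, 0, 0, 1, 1, 1]
        c2 = [0, 0, 1, 1, 1, 1]
        c3 = [0, 0, 0, 0, 1, 1]
        print(data_fragments([c1, c2, c3]))
        # -> [[0, 1], [2], [3], [4, 5]] : four fragments instead of six points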

  1. Linear LIDAR versus Geiger-mode LIDAR: impact on data properties and data quality

    NASA Astrophysics Data System (ADS)

    Ullrich, A.; Pfennigbauer, M.

    2016-05-01

    LIDAR has become an indispensable technology for providing accurate 3D data quickly and reliably, even in adverse measurement situations and harsh environments. It provides highly accurate point clouds with a significant number of additional valuable attributes per point. LIDAR systems based on Geiger-mode avalanche photo diode arrays, also called single photon avalanche photo diode arrays, earlier employed for military applications, now seek to enter the commercial market of 3D data acquisition, advertising higher point acquisition speeds from longer ranges compared to conventional techniques. Publications pointing out the advantages of these new systems refer to the other category of LIDAR as "linear LIDAR", as the prime receiver elements for detecting the laser echo pulses - avalanche photo diodes - are used in a linear mode of operation. We analyze the differences between the two LIDAR technologies and the fundamental differences in the data they provide. The limitations imposed by physics on both approaches to LIDAR are also addressed, and the advantages of linear LIDAR over the photon counting approach are discussed.

  2. Reference-dependent preferences for maternity wards: an exploration of two reference points.

    PubMed

    Neuman, Einat

    2014-01-01

    It is now well established that a person's valuation of the benefit from an outcome of a decision is determined by the intrinsic "consumption utility" of the outcome itself and also by the relation of the outcome to some reference point. The most notable expression of such reference-dependent preferences is loss aversion. What precisely this reference point is, however, is less clear. This paper claims, and provides empirical evidence for, the existence of more than one reference point. Using a discrete choice experiment in the Israeli public health-care sector, within a sample of 219 women who had given birth, it is shown that respondents refer to two reference points: (i) a constant scenario that is used in the experiment; and (ii) the actual state of the quantitative attributes of the service (number of beds in the hospitalization room and travel time from residence to hospital). In line with loss aversion theory, it is also shown that losses (vis-à-vis the constant scenario and vis-à-vis the actual state) accumulate and have reinforced effects, while gains do not.

  3. Modelling lidar volume-averaging and its significance to wind turbine wake measurements

    NASA Astrophysics Data System (ADS)

    Meyer Forsting, A. R.; Troldborg, N.; Borraccino, A.

    2017-05-01

    Lidar velocity measurements need to be interpreted differently than conventional in-situ readings. A commonly ignored factor is “volume-averaging”, which refers to the fact that a lidar does not sample a single, distinct point but the entire length of its beam. It can be detrimental, however, especially in regions with large velocity gradients such as the rotor wake. Hence, an efficient algorithm mimicking lidar flow sampling is presented, which considers both pulsed and continuous-wave lidar weighting functions. The flow field around a 2.3 MW turbine is simulated using Detached Eddy Simulation in combination with an actuator line to test the algorithm and investigate the potential impact of volume-averaging. Even with very few points discretising the lidar beam, volume-averaging is captured accurately. The difference between a lidar and a point measurement is greatest at the wake edges and increases from 30% one rotor diameter (D) downstream of the rotor to 60% at 3D.
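
    A short sketch of the sampling idea (a Gaussian range weighting is assumed purely for illustration; the paper uses the pulsed and continuous-wave lidar weighting functions): the lidar estimate is the weighted average of the line-of-sight velocity at points distributed along the beam:

        import numpy as np

        def lidar_sample(velocity_field, beam_origin, beam_dir, focus_range,
                         n_points=20, probe_length=30.0):
            """Volume-averaged line-of-sight velocity along a lidar beam."""
            beam_dir = np.asarray(beam_dir, float)
            ranges = np.linspace(focus_range - probe_length, focus_range + probe_length, n_points)
            weights = np.exp(-0.5 * ((ranges - focus_range) / (probe_length / 3.0)) ** 2)
            weights /= weights.sum()

            los = 0.0
            for r, w in zip(ranges, weights):
                u = np.asarray(velocity_field(np.asarray(beam_origin, float) + r * beam_dir))
                los += w * np.dot(u, beam_dir)   # project onto the beam direction
            return los

        # Hypothetical wake-like field: 8 m/s free stream with a deficit near x = 200 m
        wake = lambda p: np.array([8.0 - 3.0 * np.exp(-((p[0] - 200.0) / 40.0) ** 2), 0.0, 0.0])
        print(lidar_sample(wake, beam_origin=[0.0, 0.0, 0.0], beam_dir=[1.0, 0.0, 0.0], focus_range=200.0))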

  4. FRACTURE STRENGTH AND TIME DEPENDENT PROPERTIES OF 0/90 AND ±55-BRAIDED WEAVE SiC/SiC TYPE-S FIBER COMPOSITES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henager, Charles H.

    PNNL has performed mechanical property tests on two types of Hi-Nicalon Type-S fiber SiC/SiC composites for the general purpose of evaluating such composites for control rod guide tube applications in the NGNP high-temperature gas-cooled reactor design. The mechanical testing consisted of 4-point bend strength, 4-point single-edge notched bend fracture toughness, and 4-point bend slow crack growth testing on both composites from ambient to 1600 °C (1873 K). The two composite materials that were tested included a ±55°-braided-weave composite with Type-S fibers inclined at 55° to the principal composite axes to simulate a braided tube architecture and a Type-S 0/90 satin-weave composite as a reference material.

  5. Phase-shifting point diffraction interferometer

    DOEpatents

    Medecki, H.

    1998-11-10

    Disclosed is a point diffraction interferometer for evaluating the quality of a test optic. In operation, the point diffraction interferometer includes a source of radiation, the test optic, a beam divider, a reference wave pinhole located at an image plane downstream from the test optic, and a detector for detecting an interference pattern produced between a reference wave emitted by the pinhole and a test wave emitted from the test optic. The beam divider produces separate reference and test beams which focus at different laterally separated positions on the image plane. The reference wave pinhole is placed at a region of high intensity (e.g., the focal point) for the reference beam. This allows the reference wave to be produced at a relatively high intensity. Also, the beam divider may include elements for phase shifting one or both of the reference and test beams. 8 figs.

  6. Phase-shifting point diffraction interferometer

    DOEpatents

    Medecki, Hector

    1998-01-01

    Disclosed is a point diffraction interferometer for evaluating the quality of a test optic. In operation, the point diffraction interferometer includes a source of radiation, the test optic, a beam divider, a reference wave pinhole located at an image plane downstream from the test optic, and a detector for detecting an interference pattern produced between a reference wave emitted by the pinhole and a test wave emitted from the test optic. The beam divider produces separate reference and test beams which focus at different laterally separated positions on the image plane. The reference wave pinhole is placed at a region of high intensity (e.g., the focal point) for the reference beam. This allows the reference wave to be produced at a relatively high intensity. Also, the beam divider may include elements for phase shifting one or both of the reference and test beams.

  7. Erosion Remineralization Efficacy of Gel-to-Foam Fluoride Toothpastes in situ: A Randomized Clinical Trial.

    PubMed

    Nehme, Marc; Jeffery, Peter; Mason, Stephen; Lippert, Frank; Zero, Domenick T; Hara, Anderson T

    2016-01-01

    This single-center, randomized, placebo-controlled, four-treatment, four-period crossover study compared the enamel remineralization effects of low- and medium-abrasivity gel-to-foam toothpastes and a reference toothpaste (all 1,450 ppm fluoride as NaF) versus placebo toothpaste (0 ppm fluoride) using a short-term in situ erosion model. Subjects (n = 56) wearing a palatal appliance holding acid-softened bovine enamel specimens brushed their teeth with the test toothpastes. Thereafter, the specimens were removed for analysis of percent surface microhardness recovery (%SMHR) and percent relative erosion resistance (%RER) at 2, 4, and 8 h. Both low- and medium-abrasivity gel-to-foam fluoride toothpastes and the reference toothpaste provided significantly greater %SMHR than placebo at all assessment time points (all p < 0.05). No statistically significant difference of %SMHR was observed between the fluoride treatment groups at any time point. Similarly, all fluoride products provided significantly superior %RER versus placebo (all p < 0.0001), whereas no significant difference of this parameter was noted between the fluoride treatment groups. Increasing numerical improvements of %SMHR and %RER were observed in all four treatment groups over time (2, 4, and 8 h). The present in situ model is a sensitive tool to investigate intrinsic and fluoride-enhanced rehardening of eroded enamel. All three fluoride toothpastes were more efficacious than placebo, and there were no safety concerns following single dosing in this short-term in situ model. © 2016 The Author(s) Published by S. Karger AG, Basel.

  8. Co-C and Pd-C Eutectic Fixed Points for Radiation Thermometry and Thermocouple Thermometry

    NASA Astrophysics Data System (ADS)

    Wang, L.

    2017-12-01

    Two Co-C and Pd-C eutectic fixed point cells for both radiation thermometry and thermocouple thermometry were constructed at NMC. This paper describes details of the cell design, materials used, and fabrication of the cells. The melting curves of the Co-C and Pd-C cells were measured with a reference radiation thermometer, realized in both a single-zone furnace and a three-zone furnace in order to investigate the furnace effect. The transition temperatures in terms of ITS-90 were determined to be 1324.18 °C and 1491.61 °C, with corresponding combined standard uncertainties of 0.44 °C and 0.31 °C for Co-C and Pd-C, respectively, taking into account the differences between the two types of furnaces used. The determined ITS-90 temperatures are also compared with those of INRIM cells obtained using the same reference radiation thermometer and the same furnaces with the same settings during a previous bilateral comparison exercise (Battuello et al. in Int J Thermophys 35:535-546, 2014). The agreement is within the k = 1 uncertainty for the Co-C cell and the k = 2 uncertainty for the Pd-C cell. The plateau shapes of the NMC and INRIM cells are also compared, and furnace effects are analyzed. The melting curves of the Co-C and Pd-C cells realized in the single-zone furnace are also measured by a Pt/Pd thermocouple, and the preliminary results are presented as well.

  9. Fast and fuzzy multi-objective radiotherapy treatment plan generation for head and neck cancer patients with the lexicographic reference point method (LRPM)

    NASA Astrophysics Data System (ADS)

    van Haveren, Rens; Ogryczak, Włodzimierz; Verduijn, Gerda M.; Keijzer, Marleen; Heijmen, Ben J. M.; Breedveld, Sebastiaan

    2017-06-01

    Previously, we have proposed Erasmus-iCycle, an algorithm for fully automated IMRT plan generation based on prioritised (lexicographic) multi-objective optimisation with the 2-phase ɛ-constraint (2pɛc) method. For each patient, the output of Erasmus-iCycle is a clinically favourable, Pareto optimal plan. The 2pɛc method uses a list of objective functions that are consecutively optimised, following a strict, user-defined prioritisation. The novel lexicographic reference point method (LRPM) is capable of solving multi-objective problems in a single optimisation, using a fuzzy prioritisation of the objectives. Trade-offs are made globally, aiming for large favourable gains for lower prioritised objectives at the cost of only slight degradations for higher prioritised objectives, or vice versa. In this study, the LRPM is validated for 15 head and neck cancer patients receiving bilateral neck irradiation. The generated plans using the LRPM are compared with the plans resulting from the 2pɛc method. Both methods were capable of automatically generating clinically relevant treatment plans for all patients. For some patients, the LRPM allowed large favourable gains in some treatment plan objectives at the cost of only small degradations for the others. Moreover, because of the applied single optimisation instead of multiple optimisations, the LRPM reduced the average computation time from 209.2 to 9.5 min, a speed-up factor of 22 relative to the 2pɛc method.
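
    To convey the flavour of a reference-point scalarisation (a generic achievement function, not the actual LRPM used in Erasmus-iCycle), the sketch below scores a plan by its worst weighted deviation from aspiration levels, with weights standing in for the fuzzy prioritisation:

        def achievement(objectives, aspirations, weights, rho=1e-3):
            """Generic achievement scalarising function for minimisation objectives."""
            deviations = [w * (f - a) for f, a, w in zip(objectives, aspirations, weights)]
            return max(deviations) + rho * sum(deviations)   # worst deviation + small augmentation

        # Two hypothetical plans scored on (mean organ dose in Gy, homogeneity penalty)
        aspirations = [20.0, 0.05]
        weights = [1.0, 0.3]          # organ dose prioritised over homogeneity
        print(achievement([22.0, 0.04], aspirations, weights))
        print(achievement([19.0, 0.10], aspirations, weights))   # lower score = preferred plan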

  10. Method and Apparatus of Multiplexing and Acquiring Data from Multiple Optical Fibers Using a Single Data Channel of an Optical Frequency-Domain Reflectometry (OFDR) System

    NASA Technical Reports Server (NTRS)

    Parker, Jr., Allen R (Inventor); Chan, Hon Man (Inventor); Piazza, Anthony (Nino) (Inventor); Richards, William Lance (Inventor)

    2014-01-01

    A method and system for multiplexing a network of parallel fiber Bragg grating (FBG) sensor-fibers to a single acquisition channel of a closed Michelson interferometer system via a fiber splitter by distinguishing each branch of fiber sensors in the spatial domain. On each branch of the splitter, the fibers have a specific pre-determined length, effectively separating each branch of fiber sensors spatially. In the spatial domain the fiber branches are seen as part of one acquisition channel on the interrogation system. However, the FBG-reference arm beat frequency information for each fiber is retained. Since the beat frequency is generated between the reference arm and each sensing fiber, the effective fiber length of each successive branch includes the entire length of the preceding branch. The multiple branches are seen as one fiber having three segments where the segments can be resolved. This greatly simplifies optical, electronic and computational complexity, and is especially suited for use in multiplexed or branched OFS networks for SHM of large and/or distributed structures which require many measurement points.

  11. An ATP System for Deep-Space Optical Communication

    NASA Technical Reports Server (NTRS)

    Lee, Shinhak; Ortiz, Gerardo; Alexander, James

    2008-01-01

    An acquisition, tracking, and pointing (ATP) system is proposed for aiming an optical-communications downlink laser beam from deep space. In providing for a direction reference, the concept exploits the mature technology of star trackers to eliminate the need for a costly and potentially hazardous laser beacon. The system would include one optical and two inertial sensors, each contributing primarily to a different portion of the frequency spectrum of the pointing signal: a star tracker (<10 Hz), a gyroscope (<50 Hz), and a precise fluid-rotor inertial angular-displacement sensor (sometimes called, simply, "angle sensor") for the frequency range >50 Hz. The outputs of these sensors would be combined in an iterative averaging process to obtain high-bandwidth, high-accuracy pointing knowledge. The accuracy of pointing knowledge obtainable by use of the system was estimated on the basis of an 8-cm-diameter telescope and known parameters of commercially available star trackers and inertial sensors: The single-axis pointing-knowledge error was found to be characterized by a standard deviation of 150 nanoradians - below the maximum value (between 200 and 300 nanoradians) likely to be tolerable in deep-space optical communications.
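
    A toy sketch of blending sensors by frequency band (a simple first-order complementary filter with made-up constants, not the proposed iterative averaging scheme): the star tracker supplies the low-frequency attitude reference while the gyro rate fills in the higher-frequency motion:

        import random

        def complementary_filter(star_tracker, gyro_rate, dt, tau=0.1):
            """Fuse low-rate absolute attitude (star tracker) with high-rate gyro data."""
            alpha = tau / (tau + dt)
            estimate = star_tracker[0]
            estimates = [estimate]
            for angle, rate in zip(star_tracker[1:], gyro_rate[1:]):
                # propagate with the gyro, then correct slowly toward the star tracker
                estimate = alpha * (estimate + rate * dt) + (1.0 - alpha) * angle
                estimates.append(estimate)
            return estimates

        # Hypothetical 100 Hz data: constant 1 mrad/s slew with a noisy absolute reference
        dt, n = 0.01, 200
        truth = [1e-3 * i * dt for i in range(n)]
        st = [t + random.gauss(0.0, 2e-4) for t in truth]
        gyro = [1e-3 + random.gauss(0.0, 1e-5) for _ in range(n)]
        print(complementary_filter(st, gyro, dt)[-1], "vs truth", truth[-1])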

  12. A preliminary report on the genetic variation in pointed gourd (Trichosanthes dioica Roxb.) as assessed by random amplified polymorphic DNA.

    PubMed

    Adhikari, S; Biswas, A; Bandyopadhyay, T K; Ghosh, P D

    2014-06-01

    Pointed gourd (Trichosanthes dioica Roxb.) is an economically important cucurbit and is extensively propagated through vegetative means, viz. vine and root cuttings. As the accessions are poorly characterized, it is important at the beginning of a breeding programme to discriminate among available genotypes to establish the level of genetic diversity. The genetic diversity of 10 pointed gourd races, referred to as accessions, was evaluated. DNA profiling was generated using 10 sequence-independent RAPD markers. A total of 58 scorable loci were observed, of which 18 (31.03%) were considered polymorphic. Genetic diversity parameters [average and effective number of alleles, Shannon's index, percent polymorphism, Nei's gene diversity, polymorphic information content (PIC)] for RAPD, along with UPGMA clustering based on Jaccard's coefficient, were estimated. In the UPGMA dendrogram constructed from the RAPD analysis, the 10 pointed gourd accessions were grouped in a single cluster and may represent members of one heterotic group. RAPD analysis showed promise as an effective tool for estimating genetic polymorphism in different accessions of pointed gourd.

  13. Performance of Four-Leg VSC based DSTATCOM using Single Phase P-Q Theory

    NASA Astrophysics Data System (ADS)

    Jampana, Bangarraju; Veramalla, Rajagopal; Askani, Jayalaxmi

    2017-02-01

    This paper presents single-phase P-Q theory for a four-leg VSC based distributed static compensator (DSTATCOM) in the distribution system. The proposed DSTATCOM maintains unity power factor at the source, provides zero voltage regulation, eliminates current harmonics, and performs load balancing and neutral current compensation. The advantage of using a four-leg VSC based DSTATCOM is to eliminate the isolated/non-isolated transformer connection at the point of common coupling (PCC) for neutral current compensation. The elimination of the transformer connection at the PCC with the proposed topology will reduce the cost of the DSTATCOM. The single-phase P-Q theory control algorithm is used to extract the fundamental active and reactive current components for generation of the reference source currents, based on the indirect current control method. The proposed DSTATCOM is modelled and the results are validated with various consumer loads under unity power factor and zero voltage regulation modes in the MATLAB R2013a environment using the SimPowerSystems toolbox.
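
    A rough sketch of the single-phase p-q idea (a quarter-cycle-delayed copy of voltage and current serves as the fictitious beta axis; the signal values and the crude averaging are simplified placeholders, not the paper's control implementation):

        import numpy as np

        f, fs = 50.0, 10000.0
        t = np.arange(0.0, 0.2, 1.0 / fs)
        quarter = int(fs / f / 4)                  # samples in a quarter fundamental cycle

        v = 325.0 * np.sin(2 * np.pi * f * t)                                            # PCC voltage
        i = 20.0 * np.sin(2 * np.pi * f * t - 0.5) + 5.0 * np.sin(6 * np.pi * f * t)     # distorted load current

        # Build alpha/beta pairs: beta is the alpha signal delayed by 90 degrees
        v_a, v_b = v[quarter:], v[:-quarter]
        i_a, i_b = i[quarter:], i[:-quarter]

        p = v_a * i_a + v_b * i_b                  # instantaneous active power (twice the average power)
        q = v_b * i_a - v_a * i_b                  # instantaneous reactive power

        # The mean of p carries the fundamental active power; the reference source
        # current is the in-phase sinusoid that delivers exactly that power.
        V_peak = 325.0
        P_active = np.mean(p[len(p) // 2:]) / 2.0          # crude low-pass on the settled half
        i_source_ref = (2.0 * P_active / V_peak ** 2) * v_a
        print(f"fundamental active power ~ {P_active:.0f} W")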

  14. Metonymy and Reference-Point Errors in Novice Programming

    ERIC Educational Resources Information Center

    Miller, Craig S.

    2014-01-01

    When learning to program, students often mistakenly refer to an element that is structurally related to the element that they intend to reference. For example, they may indicate the attribute of an object when their intention is to reference the whole object. This paper examines these reference-point errors through the context of metonymy.…

  15. Normalization of Reverse Transcription Quantitative PCR Data During Ageing in Distinct Cerebral Structures.

    PubMed

    Bruckert, G; Vivien, D; Docagne, F; Roussel, B D

    2016-04-01

    Reverse transcription quantitative-polymerase chain reaction (RT-qPCR) has become a routine method in many laboratories. Normalization of data from experimental conditions is critical for data processing and is usually achieved by the use of a single reference gene. Nevertheless, as pointed out by the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines, several reference genes should be used for reliable normalization. Ageing is a physiological process that results in a decline in the expression of many genes. Reliable normalization of RT-qPCR data becomes crucial when studying ageing. Here, we present an RT-qPCR study of four mouse brain regions (cortex, hippocampus, striatum and cerebellum) at different ages (from 8 weeks to 22 months) in which we studied the expression of nine commonly used reference genes. With the use of two different algorithms, we found that all brain structures need at least two genes for a good normalization step. We propose specific pairs of genes for efficient data normalization in the four brain regions studied. These results underline the importance of reliable reference genes for specific brain regions in ageing.
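
    As a concrete illustration of multi-gene normalization (the Cq values below are hypothetical; the recommended gene pairs themselves are the paper's result), relative expression can be computed against the geometric mean of two reference genes:

        from statistics import geometric_mean

        def relative_expression(cq_target, cq_refs, cq_target_ctrl, cq_refs_ctrl, efficiency=2.0):
            """Delta-delta-Cq style quantification normalized to several reference genes."""
            ref_sample = geometric_mean([efficiency ** -c for c in cq_refs])
            ref_control = geometric_mean([efficiency ** -c for c in cq_refs_ctrl])
            target_sample = efficiency ** -cq_target
            target_control = efficiency ** -cq_target_ctrl
            return (target_sample / ref_sample) / (target_control / ref_control)

        # Hypothetical aged-cortex sample versus young control, normalized to two reference genes
        print(relative_expression(cq_target=26.0, cq_refs=[20.1, 22.3],
                                  cq_target_ctrl=24.5, cq_refs_ctrl=[20.0, 22.0]))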

  16. Assessment of reference gene stability in Rice stripe virus and Rice black streaked dwarf virus infection rice by quantitative Real-time PCR.

    PubMed

    Fang, Peng; Lu, Rongfei; Sun, Feng; Lan, Ying; Shen, Wenbiao; Du, Linlin; Zhou, Yijun; Zhou, Tong

    2015-10-24

    Normalization to stably expressed reference gene(s) is important for understanding gene expression patterns by quantitative Real-time PCR (RT-qPCR), particularly for Rice stripe virus (RSV) and Rice black streaked dwarf virus (RBSDV), which cause serious damage to rice plants in China and Southeast Asia. The expression of fourteen commonly used reference genes of Oryza sativa L. was evaluated by RT-qPCR in RSV- and RBSDV-infected rice plants. Suitable normalization reference gene(s) were identified by the geNorm and NormFinder algorithms. UBQ 10 + GAPDH and UBC + Actin1 were identified as suitable reference genes for RT-qPCR normalization under RSV and RBSDV infection, respectively. When using multiple reference genes, the expression patterns of OsPRIb and OsWRKY, two virus resistance genes, were similar to those reported previously. Comparatively, when using a single reference gene (TIP41-Like), a weaker inducible response was observed. We propose that the combination of two reference genes can provide more accurate and reliable normalization of RT-qPCR results in RSV- and RBSDV-infected plants. This work therefore sheds light on establishing a standardized RT-qPCR procedure in RSV- and RBSDV-infected rice plants, and might serve as an important point for discovering complex regulatory networks and identifying genes relevant to biological processes or implicated in virus infection.

  17. Possible generational effects of habitat degradation on alligator reproduction

    USGS Publications Warehouse

    Fujisaki, Ikuko; Rice, K.G.; Woodward, A.R.; Percival, H.F.

    2007-01-01

    Population decline of the American alligator (Alligator mississippiensis) was observed in Lake Apopka in central Florida, USA, in the early 1980s. This decline was thought to result from adult mortality and nest failure caused by anthropogenic increases in sediment loads, nutrients, and contaminants. Reproductive impairment also was reported. Extensive restoration of marshes associated with Lake Apopka has been conducted, as well as some limited restoration measures on the lake. Monitoring by the Florida Fish and Wildlife Conservation Commission (FFWCC) has indicated that the adult alligator population began increasing in the early 1990s. We expected that the previously reported high proportion of complete nest failure during the 1980s may have decreased. We collected clutches from alligator nests in Lake Apopka from 1983 to 2003 and from 5 reference areas from 1988 to 1991, and we artificially incubated them. We used a Bayesian framework with a Gibbs sampler for Markov chain Monte Carlo simulation to analyze the proportion of complete nest failure. The estimated proportion was consistently higher in Lake Apopka than in the reference areas, and the difference ranged from 0.19 to 0.56. We conducted change point analysis to identify and test the significance of the change point in this proportion in Lake Apopka between 1983 and 2003, indicating the point of reproductive recovery. The estimated Bayes factor strongly supported the single change point hypothesis against the no change point hypothesis. The major downward shift probably occurred in the mid-1990s, approximately a generation after the major population decline in the 1980s. Furthermore, the estimated values after the change point (0.21) were comparable with those of the reference areas (0.07-0.31). These results, combined with the monitoring by FFWCC, seem to suggest that anthropogenic habitat degradation caused reproductive impairment of adult females and that decreases in complete nest failure occurred with the sexual maturity of a new generation of breeding females. Long-term monitoring is essential to understand population changes due to habitat restoration. Such information can be used as an input in planning and evaluating restoration activities.

  18. Systems and methods that generate height map models for efficient three dimensional reconstruction from depth information

    DOEpatents

    Frahm, Jan-Michael; Pollefeys, Marc Andre Leon; Gallup, David Robert

    2015-12-08

    Methods of generating a three dimensional representation of an object in a reference plane from a depth map including distances from a reference point to pixels in an image of the object taken from a reference point. Weights are assigned to respective voxels in a three dimensional grid along rays extending from the reference point through the pixels in the image based on the distances in the depth map from the reference point to the respective pixels, and a height map including an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.
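
    A compact sketch of the voxel-weighting idea in the claims (the grid dimensions, the unit weights and the single-layer reduction are illustrative assumptions, not the patented algorithm's exact form):

        import numpy as np

        def height_map_from_depth(depth, fx, fy, cx, cy, grid_res=0.1, grid_size=64):
            """Accumulate per-voxel weights along camera rays, then reduce to a height map."""
            h, w = depth.shape
            weights = np.zeros((grid_size, grid_size, grid_size))   # (lateral, depth, up) voxels

            us, vs = np.meshgrid(np.arange(w), np.arange(h))
            zs = depth                                              # distance along the optical axis
            xs = (us - cx) / fx * zs                                # back-projected lateral position
            ys = (vs - cy) / fy * zs

            for x, y, z in zip(xs.ravel(), ys.ravel(), zs.ravel()):
                i = int(x / grid_res) + grid_size // 2
                j = int(z / grid_res)
                k = int(-y / grid_res) + grid_size // 2             # image y points down
                if 0 <= i < grid_size and 0 <= j < grid_size and 0 <= k < grid_size:
                    weights[i, j, k] += 1.0                         # weight from this depth sample

            occupied = weights > 0
            top = np.argmax(occupied[:, :, ::-1], axis=2)           # offset of highest occupied voxel
            return np.where(occupied.any(axis=2), (grid_size - 1 - top) * grid_res, np.nan)

        # Hypothetical flat wall 3 m in front of a small pinhole camera
        depth = np.full((48, 64), 3.0)
        hm = height_map_from_depth(depth, fx=60.0, fy=60.0, cx=32.0, cy=24.0)
        print(hm.shape, np.nanmax(hm))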

  19. Novel Payload Architectures for LISA

    NASA Astrophysics Data System (ADS)

    Johann, Ulrich A.; Gath, Peter F.; Holota, Wolfgang; Schulte, Hans Reiner; Weise, Dennis

    2006-11-01

    As part of the current LISA Mission Formulation Study, and based on prior internal investigations, Astrium Germany has defined and preliminarily assessed novel payload architectures, potentially reducing overall complexity and improving budgets and costs. A promising concept is characterized by a single active inertial sensor attached to a single optical bench and serving both adjacent interferometer arms via two rigidly connected off-axis telescopes. The in-plane triangular constellation "breathing angle" compensation is accomplished by common telescope in-field-of-view pointing actuation of the transmit/received beams' line of sight. A dedicated actuation mechanism located on the optical bench is required in addition to the on-bench actuators for differential pointing of the transmit and receive direction perpendicular to the constellation plane. Both actuators operate in a sinusoidal yearly period. A technical challenge is the actuation mechanism pointing jitter and the monitoring and calibration of the laser phase walk which occurs while changing the optical path inside the optical assembly during re-pointing. Calibration or monitoring of instrument-internal phase effects, e.g. by a laser metrology truss derived from the existing interferometry, is required. The architecture exploits in full the two-step interferometry (strap-down) concept, separating functionally the inter-spacecraft and intra-spacecraft interferometry (reference mass laser metrology degrees-of-freedom sensing). The single test mass is maintained as cubic, but in free fall in the lateral degrees of freedom within the constellation plane. The option of a completely free spherical test mass with full laser interferometer readout has also been conceptually investigated. The spherical test mass would rotate slowly, and would be allowed to tumble. Imperfections in roundness and density would be calibrated from differential wavefront sensing in a tetrahedral arrangement, supported by added attitude information via a grid of tick marks etched onto the surface and monitored by the laser readout.

  20. Three-Dimensional Imaging by Self-Reference Single-Channel Digital Incoherent Holography

    PubMed Central

    Rosen, Joseph; Kelner, Roy

    2016-01-01

    Digital holography offers a reliable and fast method to image a three-dimensional scene from a single perspective. This article reviews recent developments of self-reference single-channel incoherent hologram recorders. Hologram recorders in which both interfering beams, commonly referred to as the signal and the reference beams, originate from the same observed objects are considered as self-reference systems. Moreover, the hologram recorders reviewed herein are configured in a setup of a single channel interferometer. This unique configuration is achieved through the use of one or more spatial light modulators. PMID:28757811

  1. Real-time GPS seismology using a single receiver: method comparison, error analysis and precision validation

    NASA Astrophysics Data System (ADS)

    Li, Xingxing

    2014-05-01

    Earthquake monitoring and early warning systems for hazard assessment and mitigation have traditionally been based on seismic instruments. However, for large seismic events, it is difficult for traditional seismic instruments to produce accurate and reliable displacements because of the saturation of broadband seismometers and problematic integration of strong-motion data. Compared with traditional seismic instruments, GPS can measure arbitrarily large dynamic displacements without saturation, making it particularly valuable in the case of large earthquakes and tsunamis. The GPS relative positioning approach is usually adopted to estimate seismic displacements, since centimeter-level accuracy can be achieved in real time by processing double-differenced carrier-phase observables. However, the relative positioning method requires a local reference station, which might itself be displaced during a large seismic event, resulting in misleading GPS analysis results. Meanwhile, the relative/network approach is time-consuming and particularly difficult for the simultaneous and real-time analysis of GPS data from hundreds or thousands of ground stations. In recent years, several single-receiver approaches for real-time GPS seismology, which can overcome the reference station problem of the relative positioning approach, have been successfully developed and applied. One available method is real-time precise point positioning (PPP), which relies on precise satellite orbit and clock products. However, real-time PPP needs a long (re)convergence period, of about thirty minutes, to resolve integer phase ambiguities and achieve centimeter-level accuracy. In comparison with PPP, Colosimo et al. (2011) proposed a variometric approach to determine the change of position between two adjacent epochs; displacements are then obtained by a single integration of the delta positions. This approach does not suffer from a convergence process, but the single integration from delta positions to displacements is accompanied by a drift due to potentially uncompensated errors. Li et al. (2013) presented a temporal point positioning (TPP) method to quickly capture coseismic displacements with a single GPS receiver in real time. The TPP approach can overcome the convergence problem of precise point positioning (PPP), and also avoids the integration and de-trending process of the variometric approach. The performance of TPP is demonstrated to be at the few-centimeter level of displacement accuracy even over a twenty-minute interval with real-time precise orbit and clock products. In this study, we first present and compare the observation models and processing strategies of the existing single-receiver methods for real-time GPS seismology. Furthermore, we propose several refinements to the variometric approach in order to eliminate the drift trend in the integrated coseismic displacements. The mathematical relationship between these methods is discussed in detail and their equivalence is also proved. The impact of error components such as satellite ephemeris, ionospheric delay, tropospheric delay, and geometry change on the retrieved displacements is carefully analyzed and investigated. Finally, the performance of these single-receiver approaches for real-time GPS seismology is validated using 1 Hz GPS data collected during the Tohoku-Oki earthquake (Mw 9.0, March 11, 2011) in Japan. It is shown that accuracy of a few centimeters in coseismic displacements is achievable.
Keywords: High-rate GPS; real-time GPS seismology; a single receiver; PPP; variometric approach; temporal point positioning; error analysis; coseismic displacement; fault slip inversion;
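
    A toy illustration of the variometric integration and de-trending discussed above (synthetic numbers, not the Tohoku data): epoch-to-epoch position changes are integrated to displacement, and the drift caused by small uncompensated errors is removed using the pre-event window:

        import numpy as np

        rng = np.random.default_rng(0)
        dt, n = 1.0, 600                                  # 1 Hz data, 10 minutes
        t = np.arange(n) * dt

        # Synthetic coseismic displacement: a 0.5 m permanent offset arriving near t = 300 s
        true_disp = 0.5 / (1.0 + np.exp(-(t - 300.0) / 10.0))

        # "Variometric" observable: per-epoch displacement change plus a small systematic error
        delta = np.diff(true_disp, prepend=0.0) + 2e-4 + rng.normal(0.0, 5e-4, n)

        raw = np.cumsum(delta)                            # single integration -> drifting displacement
        trend = np.polyfit(t[:250], raw[:250], 1)         # estimate the drift on the pre-event window
        detrended = raw - np.polyval(trend, t)

        print(f"final displacement: raw {raw[-1]:.3f} m, de-trended {detrended[-1]:.3f} m, true {true_disp[-1]:.3f} m")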

  2. Summary Diagrams for Coupled Hydrodynamic-Ecosystem Model Skill Assessment

    DTIC Science & Technology

    2009-01-01

    ... reference point have the smallest unbiased RMSD value (Fig. 3). It would appear that the cluster of model points closest to the reference point may ... total RMSD values. This is particularly the case for phytoplankton absorption (Fig. 3B), where the cluster of points closest to the reference ... pattern statistics and the bias (difference of mean values) each contribute to the magnitude of the total Root-Mean-Square Difference (RMSD). An alternative skill score and ...
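
    For context, the standard decomposition relating these quantities on such diagrams (with m and o the model and observation series and overbars their means; notation assumed here) is

        \mathrm{RMSD}^2 \;=\; (\bar{m}-\bar{o})^2 \;+\; \frac{1}{N}\sum_{i=1}^{N}\bigl[(m_i-\bar{m})-(o_i-\bar{o})\bigr]^2

    i.e., the squared total RMSD is the squared bias plus the squared unbiased (pattern) RMSD.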

  3. The Influence of the Tri-reference Points on Fairness and Satisfaction Perception

    PubMed Central

    Zhao, Lei; Ye, Junhui; Wu, Xuexian; Hu, Fengpei

    2018-01-01

    We examined the influence of three reference points (minimum requirements [MR], the status quo [SQ], and goal [G]) proposed by the tri-reference point (TRP) theory on fairness and satisfaction perceptions of pay in three laboratory experiments. To test the effects, we manipulated these three reference points both implicitly (Experiment 1) and explicitly (Experiments 2 and 3). We also provided information about the salary offered to a peer, which was lower than, equal to, or higher than the salary offered to the participant. As hypothesized, the results demonstrated the important role of these reference points in judging the fairness of and satisfaction with pay when they were explicitly set (an interaction between reference points and social comparison in Experiments 2 and 3, but not in Experiment 1). Participants altered their judgments when the salary was in different regions. When the salary was below MR, participants perceived very low fairness and satisfaction, even when the offer equalled or exceeded others'. When the salary was above G, participants perceived much higher fairness and satisfaction, even with disadvantageous inequality. Participants were more strongly affected when they were explicitly informed of the reference points (Experiments 2 and 3) than when they were not (Experiment 1). Moreover, MR appeared to be the most important, followed by G. A salary below MR was judged as very unacceptable, with very low fairness and satisfaction ratings. PMID:29515503

  4. Young Children Follow Pointing over Words in Interpreting Acts of Reference

    ERIC Educational Resources Information Center

    Grassmann, Susanne; Tomasello, Michael

    2010-01-01

    Adults refer young children's attention to things in two basic ways: through the use of pointing (and other deictic gestures) and words (and other linguistic conventions). In the current studies, we referred young children (2- and 4-year-olds) to things in conflicting ways, that is, by pointing to one object while indicating linguistically (in…

  5. Reproducibility of the Helium-3 Constant-Volume Gas Thermometry and New Data Down to 1.9 K at NMIJ/AIST

    NASA Astrophysics Data System (ADS)

    Nakano, Tohru; Shimazaki, Takeshi; Tamura, Osamu

    2017-07-01

    This study confirms reproducibility of the International Temperature Scale of 1990 (ITS-90) realized by interpolation using the constant-volume gas thermometer (CVGT) of National Metrology Institute of Japan (NMIJ)/AIST with ³He as the working gas from 3 K to 24.5561 K by comparing the newly obtained results and those of earlier reports, indicating that the CVGT has retained its capability after renovation undertaken since strong earthquakes struck Japan. The thermodynamic temperature T is also obtained using the single-isotherm fit to four working gas densities (127 mol·m⁻³, 145 mol·m⁻³, 171 mol·m⁻³ and 278 mol·m⁻³) down to 1.9 K, using the triple point temperature of Ne as a reference temperature. In this study, only the second virial coefficient is taken into account for the single-isotherm fit. Differences between T and the ITS-90 temperature, T - T₉₀, reported in earlier works down to 3 K were confirmed in this study. At the temperatures below 3 K down to 2.5 K, T - T₉₀ is much smaller than the standard combined uncertainty of thermodynamic temperature measurement. However, T - T₉₀ seems to increase with decreasing temperature below 2.5 K down to 1.9 K, although still within the standard combined uncertainty of thermodynamic temperature measurement. In this study, T is obtained also from the CVGT with a single gas density of 278 mol·m⁻³ using the triple-point temperature of Ne as a reference temperature by making correction for the deviation from the ideal gas using theoretical values of the second and third virial coefficients down to 2.6 K, which is the lowest temperature of the theoretical values of the third virial coefficient. T values obtained using this method agree well with those obtained from the single-isotherm fit. We also found that the second virial coefficient obtained by the single-isotherm fit to experimental results agrees well with that obtained by the single-isotherm fit to the theoretically expected behavior of ³He gas with the theoretical second and third virial coefficients at the four gas densities used in the present work.
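    For orientation, the single-isotherm fit mentioned above rests on the standard virial equation of state, quoted here from general thermodynamics rather than from this record:

        p = \rho R T \left[ 1 + B(T)\,\rho + C(T)\,\rho^{2} + \cdots \right]

    where \rho is the molar density and B(T) and C(T) are the second and third virial coefficients. Truncating after B(T), as stated above, and measuring the pressure at several densities on the same isotherm allows T and B(T) to be extracted by a fit; retaining C(T) instead requires its theoretical value, which is reported to be available only down to 2.6 K.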

  6. Optimized Reduction of Unsteady Radial Forces in a Single-Channel Pump for Wastewater Treatment

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Hyuk; Cho, Bo-Min; Choi, Young-Seok; Lee, Kyoung-Yong; Peck, Jong-Hyeon; Kim, Seon-Chang

    2016-11-01

    A single-channel pump for wastewater treatment was optimized to reduce unsteady radial force sources caused by impeller-volute interactions. The steady and unsteady Reynolds-averaged Navier-Stokes equations using the shear-stress transport turbulence model were discretized by finite volume approximations and solved on tetrahedral grids to analyze the flow in the single-channel pump. The sweep area of radial force during one revolution and the distance of the sweep-area center of mass from the origin were selected as the objective functions; the two design variables were related to the internal flow cross-sectional area of the volute. These objective functions were integrated into one objective function by applying the weighting factor for optimization. Latin hypercube sampling was employed to generate twelve design points within the design space. A response-surface approximation model was constructed as a surrogate model for the objectives, based on the objective function values at the generated design points. The optimized results showed considerable reduction in the unsteady radial force sources in the optimum design, relative to those of the reference design.
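    The surrogate-based workflow described above (Latin hypercube sampling of two design variables, a response-surface approximation, and a weighted single objective) can be sketched generically as below. The design-variable bounds, the placeholder objective functions and the weighting factor are assumptions for illustration only; the real objectives come from the CFD analysis, not from closed-form expressions.

        import numpy as np

        rng = np.random.default_rng(1)

        def latin_hypercube(n, bounds):
            # One stratified sample per interval in each dimension, randomly paired.
            samples = np.empty((n, len(bounds)))
            for j, (lo, hi) in enumerate(bounds):
                strata = (rng.permutation(n) + rng.random(n)) / n
                samples[:, j] = lo + strata * (hi - lo)
            return samples

        def fit_quadratic_surface(X, y):
            # Least-squares surface f = c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2.
            x1, x2 = X[:, 0], X[:, 1]
            A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
            coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
            return coeffs

        def surface(c, x1, x2):
            return c[0] + c[1]*x1 + c[2]*x2 + c[3]*x1**2 + c[4]*x2**2 + c[5]*x1*x2

        def combined_objective(x, w=0.5):
            # Placeholders standing in for the two CFD objectives (sweep area and
            # centre-of-mass distance), merged with a weighting factor w.
            f1 = (x[0] - 0.3)**2 + 0.5 * (x[1] - 0.7)**2
            f2 = 0.8 * (x[0] - 0.4)**2 + (x[1] - 0.6)**2
            return w * f1 + (1 - w) * f2

        bounds = [(0.0, 1.0), (0.0, 1.0)]      # normalised volute cross-section variables
        X = latin_hypercube(12, bounds)        # twelve design points, as in the study
        y = np.array([combined_objective(x) for x in X])
        coeffs = fit_quadratic_surface(X, y)
        g1, g2 = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
        vals = surface(coeffs, g1, g2)
        i, j = np.unravel_index(np.argmin(vals), vals.shape)
        print("surrogate optimum near:", g1[i, j], g2[i, j])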

  7. A study on using pre-forming blank in single point incremental forming process by finite element analysis

    NASA Astrophysics Data System (ADS)

    Abass, K. I.

    2016-11-01

    The Single Point Incremental Forming (SPIF) process is a forming technique for sheet material based on layered manufacturing principles. The edges of the sheet material are clamped while the forming tool is moved along the tool path, and a CNC milling machine is used to manufacture the product. SPIF involves extensive plastic deformation, and the description of the process is further complicated by highly nonlinear boundary conditions, namely contact and frictional effects. Due to the complex nature of these models, numerical approaches dominated by Finite Element Analysis (FEA) are now in widespread use. The paper presents the data and main results of a study on the effect of using a pre-formed blank in SPIF, carried out through FEA. The SPIF process has been studied under defined process conditions (test workpiece, tool, etc.) using ANSYS 11. The results show that the simulation model can predict an ideal profile of the processing track, the behaviour of the tool-workpiece contact, and the product accuracy, by evaluating its thickness, surface strain and the stress distribution along the deformed blank section during the deformation stages.

  8. Strategies for single-point diamond machining a large format germanium blazed immersion grating

    NASA Astrophysics Data System (ADS)

    Montesanti, R. C.; Little, S. L.; Kuzmenko, P. J.; Bixler, J. V.; Jackson, J. L.; Lown, J. G.; Priest, R. E.; Yoxall, B. E.

    2016-07-01

    A large format germanium immersion grating was flycut with a single-point diamond tool on the Precision Engineering Research Lathe (PERL) at the Lawrence Livermore National Laboratory (LLNL) in November - December 2015. The grating, referred to as 002u, has an area of 59 mm x 67 mm (along-groove and cross-groove directions), a line pitch of 88 lines/mm, and a blaze angle of 32 degrees. Based on total groove length, the 002u grating is five times larger than the previous largest grating (ZnSe) cut on PERL, and forty-five times larger than the previous largest germanium grating cut on PERL. The key risks associated with cutting the 002u grating were tool wear and keeping the PERL machine running uninterrupted in a stable machining environment. This paper presents the strategies employed to mitigate these risks, introduces pre-machining of the as-etched grating substrate to produce a smooth, flat, damage-free surface into which the grooves are cut, and reports on trade-offs that drove decisions and experimental results.

  9. Validation of reference genes for quantitative gene expression analysis in experimental epilepsy.

    PubMed

    Sadangi, Chinmaya; Rosenow, Felix; Norwood, Braxton A

    2017-12-01

    To grasp the molecular mechanisms and pathophysiology underlying epilepsy development (epileptogenesis) and epilepsy itself, it is important to understand the gene expression changes that occur during these phases. Quantitative real-time polymerase chain reaction (qPCR) is a technique that rapidly and accurately determines gene expression changes. It is crucial, however, that stable reference genes are selected for each experimental condition to ensure that accurate values are obtained for genes of interest. If reference genes are unstably expressed, this can lead to inaccurate data and erroneous conclusions. To date, epilepsy studies have used mostly single, nonvalidated reference genes. This is the first study to systematically evaluate reference genes in male Sprague-Dawley rat models of epilepsy. We assessed 15 potential reference genes in hippocampal tissue obtained from 2 different models during epileptogenesis, 1 model during chronic epilepsy, and a model of noninjurious seizures. Reference gene ranking varied between models and also differed between epileptogenesis and chronic epilepsy time points. There was also some variance between the four mathematical models used to rank reference genes. Notably, we found novel reference genes to be more stably expressed than those most often used in experimental epilepsy studies. The consequence of these findings is that reference genes suitable for one epilepsy model may not be appropriate for others and that reference genes can change over time. It is, therefore, critically important to validate potential reference genes before using them as normalizing factors in expression analysis in order to ensure accurate, valid results. © 2017 Wiley Periodicals, Inc.

  10. A reference Pelton turbine - High speed visualization in the rotating frame

    NASA Astrophysics Data System (ADS)

    Solemslie, Bjørn W.; Dahlhaug, Ole G.

    2016-11-01

    To enable a detailed study of the flow mechanisms affecting the flow within the reference Pelton runner designed at the Waterpower Laboratory (NTNU), a flow visualization system has been developed. The system enables high speed filming of the hydraulic surface of a single bucket in the rotating frame of reference. It is built with an angular borescope adapter entering the turbine along the rotational axis and a borescope embedded within a bucket. A stationary high speed camera located outside the turbine housing has been connected to the optical arrangement by a non-contact coupling. The viewpoint of the system includes the whole hydraulic surface of one half of a bucket. The system has been designed to minimize the amount of vibrations and to ensure that the vibrations felt by the borescope are the same as those affecting the camera. The preliminary results captured with the system are promising and enable a detailed study of the flow within the turbine.

  11. Method and apparatus for white-light dispersed-fringe interferometric measurement of corneal topography

    NASA Technical Reports Server (NTRS)

    Hochberg, Eric B. (Inventor); Baroth, Edmund C. (Inventor)

    1994-01-01

    A novel interferometric apparatus and method for measuring the topography of aspheric surfaces, without requiring any form of scanning or phase shifting. The apparatus and method of the present invention utilize a white-light interferometer, such as a white-light Twyman-Green interferometer, combined with a means for dispersing a polychromatic interference pattern, using a fiber-optic bundle and a disperser such as a prism, for determining the monochromatic spectral intensities of the polychromatic interference pattern, which intensities uniquely define the optical path difference (OPD) between the surface under test and a reference surface such as a reference sphere. Consequently, the present invention comprises a snapshot approach to measuring aspheric surface topographies such as the human cornea, thereby obviating vibration-sensitive scanning which would otherwise reduce the accuracy of the measurement. The invention utilizes a polychromatic interference pattern in the pupil image plane, which is dispersed on a point-wise basis, by using a special area-to-line fiber-optic manifold, onto a CCD or other type of detector comprising a plurality of columns of pixels. Each such column is dedicated to a single point of the fringe pattern for enabling determination of the spectral content of the pattern. The auto-correlation of the dispersed spectrum of the fringe pattern is uniquely characteristic of a particular optical path difference between the surface under test and a reference surface.

  12. Apparatus and method for mapping an area of interest

    DOEpatents

    Staab, Torsten A.; Cohen, Daniel L.; Feller, Samuel [Fairfax, VA]

    2009-12-01

    An apparatus and method are provided for mapping an area of interest using polar coordinates or Cartesian coordinates. The apparatus includes a range finder, an azimuth angle measuring device to provide a heading and an inclinometer to provide an angle of inclination of the range finder as it relates to primary reference points and points of interest. A computer is provided to receive signals from the range finder, inclinometer and azimuth angle measurer to record location data and calculate relative locations between one or more points of interest and one or more primary reference points. The method includes mapping of an area of interest to locate points of interest relative to one or more primary reference points and to store the information in the desired manner. The device may optionally also include an illuminator which can be utilized to paint the area of interest to indicate both points of interest and primary points of reference during and/or after data acquisition.
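    The geometry implied by the description above (range, heading and inclination relative to a primary reference point) reduces to a simple polar-to-Cartesian conversion. The sketch below is a hedged illustration of that conversion; the axis conventions (azimuth clockwise from north, inclination from the horizontal) are assumptions, not taken from the patent.

        import math

        def polar_to_cartesian(range_m, azimuth_deg, inclination_deg):
            # Convert a slant range, heading and inclination into local
            # (east, north, up) offsets from the instrument position.
            az = math.radians(azimuth_deg)
            inc = math.radians(inclination_deg)
            horizontal = range_m * math.cos(inc)
            east = horizontal * math.sin(az)
            north = horizontal * math.cos(az)
            up = range_m * math.sin(inc)
            return east, north, up

        def offset_between(point, reference):
            # Relative location of a point of interest with respect to a
            # primary reference point (both as east/north/up offsets).
            return tuple(p - r for p, r in zip(point, reference))

        ref = polar_to_cartesian(25.0, 40.0, -2.0)    # primary reference point
        poi = polar_to_cartesian(60.0, 135.0, 5.0)    # point of interest
        print(offset_between(poi, ref))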

  13. Factors influencing superimposition error of 3D cephalometric landmarks by plane orientation method using 4 reference points: 4 point superimposition error regression model.

    PubMed

    Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul

    2014-01-01

    Superimposition has been used as a method to evaluate the changes brought about by orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3-dimensional changes after treatment by superimposition became possible. Four-point plane orientation is one of the simplest ways to achieve superimposition of 3-dimensional images. To find factors influencing the superimposition error of cephalometric landmarks under the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and underwent CBCT for the diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoidal bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4-point plane orientation system may produce a reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.

  14. Topical nasal decongestant oxymetazoline (0.05%) provides relief of nasal symptoms for 12 hours.

    PubMed

    Druce, H M; Ramsey, D L; Karnati, S; Carr, A N

    2018-05-22

    Nasal congestion, often referred to as stuffy nose or blocked nose, is one of the most prevalent and bothersome symptoms of an upper respiratory tract infection. Oxymetazoline, a widely used intranasal decongestant, offers fast symptom relief, but little is known about the duration of effect. We report the results of 2 randomized, double-blind, vehicle-controlled, single-dose, parallel clinical studies (Study 1, n=67; Study 2, n=61) in which the efficacy of an oxymetazoline (0.05% Oxy) nasal spray in patients with acute coryzal rhinitis was assessed over a 12-hour period. Data were collected on both subjective relief of nasal congestion (6-point nasal congestion scale) and objective measures of nasal patency (anterior rhinomanometry) in both studies. A pooled study analysis showed statistically significant changes from baseline in subjective nasal congestion for 0.05% oxymetazoline and vehicle at each hourly time-point from Hour 1 through Hour 12 (marginally significant at Hour 11). An objective measure of nasal flow was statistically significant at each time-point up to 12 hours. Adverse events on either treatment were infrequent. The number of subjects who achieved an improvement in subjective nasal congestion scores of at least 1.0 on the 6-point nasal congestion scale was significantly higher in the Oxy group vs. vehicle at all hourly time-points. This study shows for the first time that oxymetazoline provides both statistically significant and clinically meaningful relief of nasal congestion and improves nasal airflow for up to 12 hours following a single dose.

  15. A Tri-Reference Point Theory of Decision Making under Risk

    ERIC Educational Resources Information Center

    Wang, X. T.; Johnson, Joseph G.

    2012-01-01

    The tri-reference point (TRP) theory takes into account minimum requirements (MR), the status quo (SQ), and goals (G) in decision making under risk. The 3 reference points demarcate risky outcomes and risk perception into 4 functional regions: success (expected value of x greater than or equal to G), gain (SQ less than x less than G), loss (MR…
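    The four regions described above lend themselves to a one-line classification rule. The sketch below is an illustration of that partition; the handling of values exactly equal to a reference point and the example numbers are assumptions, not taken from the theory paper.

        def trp_region(x, mr, sq, g):
            # Classify an outcome x against the three reference points of TRP theory:
            # failure (below MR), loss (MR..SQ), gain (SQ..G), success (at or above G).
            if x < mr:
                return "failure"
            if x < sq:
                return "loss"
            if x < g:
                return "gain"
            return "success"

        # Example: monthly salary outcomes against MR = 2000, SQ = 3000, G = 4500
        for salary in (1800, 2500, 3800, 5000):
            print(salary, trp_region(salary, 2000, 3000, 4500))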

  16. 16 CFR Figure 5 to Subpart A of... - Zero Reference Point Related to Detecting Plane

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Zero Reference Point Related to Detecting Plane 5 Figure 5 to Subpart A of Part 1209 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION.... 1209, Subpt. A, Fig. 5 Figure 5 to Subpart A of Part 1209—Zero Reference Point Related to Detecting...

  17. 16 CFR Figure 5 to Subpart A of... - Zero Reference Point Related to Detecting Plane

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 2 2012-01-01 2012-01-01 false Zero Reference Point Related to Detecting Plane 5 Figure 5 to Subpart A of Part 1209 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION.... 1209, Subpt. A, Fig. 5 Figure 5 to Subpart A of Part 1209—Zero Reference Point Related to Detecting...

  18. 16 CFR Figure 5 to Subpart A of... - Zero Reference Point Related to Detecting Plane

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 2 2014-01-01 2014-01-01 false Zero Reference Point Related to Detecting Plane 5 Figure 5 to Subpart A of Part 1209 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION.... 1209, Subpt. A, Fig. 5 Figure 5 to Subpart A of Part 1209—Zero Reference Point Related to Detecting...

  19. Accuracy of body mass index for age to diagnose obesity in Mexican schoolchildren.

    PubMed

    Mendoza Pablo, Pedro A; Valdés, Jesús; Ortiz-Hernández, Luis

    2015-06-01

    To compare the accuracy of three BMI-for-age references (the World Health Organization reference, WHO; the updated International Obesity Task Force reference, IOTF; and the Centers for Disease Control and Prevention (CDC) growth charts) to diagnose obesity in Mexican children. A convenience sample of Mexican schoolchildren (n = 218) was assessed. The gold standard was the percentage of body fat estimated by the deuterium dilution technique. The sensitivity and specificity of the classical cutoff point of BMI-for-age to identify obesity (i.e. > 2.00 standard deviations, SD) were estimated. The accuracy (i.e. area under the curve, AUC) of the three BMI-for-age references for the diagnosis of obesity was estimated with the receiver operating characteristic (ROC) curve method. The optimal cutoff point (OCP) was determined. The cutoff points to identify obesity had low (WHO reference: 57.6%, CDC: 53.5%) to very low (IOTF reference: 40.4%) sensitivities, but adequate specificities (91.6%, 95.0%, and 97.5%, respectively). The AUCs of the three references were adequate (0.89). For the IOTF reference, the AUC was lower among the older children. The OCP for the CDC reference (1.24 SD) was lower than the OCPs for the WHO (1.53 SD) and IOTF charts (1.47 SD). The classical cutoff point for obesity has low sensitivity--especially for the IOTF reference. The accuracy of the three references was similar. However, to obtain comparable diagnoses of obesity, different cutoff points should be used depending on the reference. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
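    As a generic illustration of how an optimal cutoff point can be read off a ROC analysis like the one above, the sketch below sweeps candidate BMI-for-age z-score cutoffs and picks the one maximising the Youden index. The criterion, the toy z-scores and the obesity labels are assumptions for illustration; the study's own OCP criterion is not specified in this abstract.

        import numpy as np

        def roc_optimal_cutoff(scores, is_obese):
            # Return (cutoff, Youden index) maximising sensitivity + specificity - 1.
            scores = np.asarray(scores, float)
            is_obese = np.asarray(is_obese, bool)
            best = (None, -np.inf)
            for c in np.unique(scores):
                pred = scores >= c
                tp = np.sum(pred & is_obese)
                fn = np.sum(~pred & is_obese)
                tn = np.sum(~pred & ~is_obese)
                fp = np.sum(pred & ~is_obese)
                sens = tp / (tp + fn) if (tp + fn) else 0.0
                spec = tn / (tn + fp) if (tn + fp) else 0.0
                if sens + spec - 1 > best[1]:
                    best = (float(c), sens + spec - 1)
            return best

        # Made-up z-scores and gold-standard obesity labels, for illustration only.
        z = [0.2, 0.8, 1.1, 1.3, 1.6, 1.9, 2.1, 2.4, 0.5, 1.7]
        obese = [0, 0, 0, 1, 1, 0, 1, 1, 0, 1]
        print(roc_optimal_cutoff(z, obese))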

  20. Mode Matching for Optical Antennas

    NASA Astrophysics Data System (ADS)

    Feichtner, Thorsten; Christiansen, Silke; Hecht, Bert

    2017-11-01

    The emission rate of a point dipole can be strongly increased in the presence of a well-designed optical antenna. Yet, optical antenna design is largely based on radio-frequency rules, ignoring, e.g., Ohmic losses and non-negligible field penetration in metals at optical frequencies. Here, we combine reciprocity and Poynting's theorem to derive a set of optical-frequency antenna design rules for benchmarking and optimizing the performance of optical antennas driven by single quantum emitters. Based on these findings a novel plasmonic cavity antenna design is presented exhibiting a considerably improved performance compared to a reference two-wire antenna. Our work will be useful for the design of high-performance optical antennas and nanoresonators for diverse applications ranging from quantum optics to antenna-enhanced single-emitter spectroscopy and sensing.

  1. Robust Vision-Based Pose Estimation Algorithm for a UAV with Known Gravity Vector

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2016-06-01

    Accurate estimation of camera external orientation with respect to a known object is one of the central problems in photogrammetry and computer vision. In recent years this problem has been gaining increasing attention in the field of UAV autonomous flight. Such an application requires real-time performance and robustness of the external orientation estimation algorithm. The accuracy of the solution is strongly dependent on the number of reference points visible in the given image. The problem only has an analytical solution if 3 or more reference points are visible. However, in limited visibility conditions it is often necessary to perform external orientation with only 2 visible reference points. In such a case the solution can be found if the gravity vector direction in the camera coordinate system is known. A number of algorithms for external orientation estimation for the case of 2 known reference points and a gravity vector have been developed to date. Most of these algorithms provide an analytical solution in the form of a polynomial equation that is subject to large errors for complex reference point configurations. This paper is focused on the development of a new computationally effective and robust algorithm for external orientation based on the positions of 2 known reference points and a gravity vector. The algorithm implementation for guidance of a Parrot AR.Drone 2.0 micro-UAV is discussed. The experimental evaluation of the algorithm proved its computational efficiency and robustness against errors in reference point positions and complex configurations.

  2. Residual Stress Analysis Based on Acoustic and Optical Methods.

    PubMed

    Yoshida, Sanichiro; Sasaki, Tomohiro; Usui, Masaru; Sakamoto, Shuichi; Gurney, David; Park, Ik-Keun

    2016-02-16

    Co-application of acoustoelasticity and optical interferometry to residual stress analysis is discussed. The underlying idea is to combine the advantages of both methods. Acoustoelasticity is capable of evaluating a residual stress absolutely but it is a single point measurement. Optical interferometry is able to measure deformation yielding two-dimensional, full-field data, but it is not suitable for absolute evaluation of residual stresses. By theoretically relating the deformation data to residual stresses, and calibrating it with absolute residual stress evaluated at a reference point, it is possible to measure residual stresses quantitatively, nondestructively and two-dimensionally. The feasibility of the idea has been tested with a butt-jointed dissimilar plate specimen. A steel plate 18.5 mm wide, 50 mm long and 3.37 mm thick is braze-jointed to a cemented carbide plate of the same dimension along the 18.5 mm-side. Acoustoelasticity evaluates the elastic modulus at reference points via acoustic velocity measurement. A tensile load is applied to the specimen at a constant pulling rate in a stress range substantially lower than the yield stress. Optical interferometry measures the resulting acceleration field. Based on the theory of harmonic oscillation, the acceleration field is correlated to compressive and tensile residual stresses qualitatively. The acoustic and optical results show reasonable agreement in the compressive and tensile residual stresses, indicating the feasibility of the idea.

  3. Single payers, multiple systems: the scope and limits of subnational variation under a Federal health policy framework.

    PubMed

    Tuohy, Carolyn Hughes

    2009-08-01

    In political discourse, the term "single-payer system" originated in an attempt to stake out a middle ground between the public and private sectors in providing universal access to health care. In this view, a single-payer system is one in which health care is financed by government and delivered by privately owned and operated health care providers. The term appears to have been coined in U.S. policy debates to provide a rhetorical reference point for universal health insurance other than the "socialized medicine" of state-owned and -operated health care providers. This article, like others in this special issue, is meant to provide a more nuanced view of single-payer systems. In particular, it reviews experience in the prototypical single-payer system for physician and hospital services: the Canadian case. Given Canada's federal governance structure, this example also aptly illuminates the scope and limits of subnational variation within this single model of health care finance. And what it demonstrates in essence is that the very feature that defines the single-payer prototype -- the maintenance of independent providers remunerated by a single public payer in each province -- also leads to a set of profession-state bargains that define the limits of variation.

  4. Development and Validation of Limited-Sampling Strategies for Predicting Amoxicillin Pharmacokinetic and Pharmacodynamic Parameters

    PubMed Central

    Suarez-Kurtz, Guilherme; Ribeiro, Frederico Mota; Vicente, Flávio L.; Struchiner, Claudio J.

    2001-01-01

    Amoxicillin plasma concentrations (n = 1,152) obtained from 48 healthy subjects in two bioequivalence studies were used to develop limited-sampling strategy (LSS) models for estimating the area under the concentration-time curve (AUC), the maximum concentration of drug in plasma (Cmax), and the time interval of concentration above MIC susceptibility breakpoints in plasma (T>MIC). Each subject received 500-mg amoxicillin, as reference and test capsules or suspensions, and plasma concentrations were measured by a validated microbiological assay. Linear regression analysis and a “jack-knife” procedure revealed that three-point LSS models accurately estimated (R2, 0.92; precision, <5.8%) the AUC from 0 h to infinity (AUC0-∞) of amoxicillin for the four formulations tested. Validation tests indicated that a three-point LSS model (1, 2, and 5 h) developed for the reference capsule formulation predicts the following accurately (R2, 0.94 to 0.99): (i) the individual AUC0-∞ for the test capsule formulation in the same subjects, (ii) the individual AUC0-∞ for both reference and test suspensions in 24 other subjects, and (iii) the average AUC0-∞ following single oral doses (250 to 1,000 mg) of various amoxicillin formulations in 11 previously published studies. A linear regression equation was derived, using the same sampling time points of the LSS model for the AUC0-∞, but using different coefficients and intercept, for estimating Cmax. Bioequivalence assessments based on LSS-derived AUC0-∞'s and Cmax's provided results similar to those obtained using the original values for these parameters. Finally, two-point LSS models (R2 = 0.86 to 0.95) were developed for T>MICs of 0.25 or 2.0 μg/ml, which are representative of microorganisms susceptible and resistant to amoxicillin. PMID:11600352
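    The form of a three-point limited-sampling-strategy model like the one described above is an ordinary least-squares regression of AUC on the concentrations at the chosen sampling times. The sketch below shows that form with made-up concentrations and AUC values used purely to exercise the code; the actual coefficients reported in the study are not reproduced here.

        import numpy as np

        def fit_lss_model(conc_at_times, auc_values):
            # Fit AUC ~ b0 + b1*C(t1) + b2*C(t2) + b3*C(t3) by least squares,
            # where conc_at_times is an (n_subjects, 3) array of concentrations
            # at the three sampling times (e.g. 1, 2 and 5 h).
            X = np.column_stack([np.ones(len(auc_values)), np.asarray(conc_at_times, float)])
            coeffs, *_ = np.linalg.lstsq(X, np.asarray(auc_values, float), rcond=None)
            return coeffs

        def predict_auc(coeffs, conc_at_times):
            X = np.column_stack([np.ones(len(conc_at_times)), np.asarray(conc_at_times, float)])
            return X @ coeffs

        # Made-up concentrations (mg/L) at 1, 2 and 5 h and reference AUC values.
        C = np.array([[6.1, 4.8, 1.2], [7.4, 5.5, 1.6], [5.2, 4.1, 1.0],
                      [8.0, 6.2, 1.9], [6.8, 5.0, 1.4], [5.9, 4.6, 1.1]])
        auc = np.array([21.5, 26.3, 18.4, 29.8, 24.1, 20.6])
        b = fit_lss_model(C, auc)
        print(predict_auc(b, C[:2]))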

  5. Big Geo Data Services: From More Bytes to More Barrels

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Baumann, Peter

    2016-04-01

    The data deluge is affecting the oil and gas industry just as much as many other industries. However, aside from the sheer volume there is the challenge of data variety, such as regular and irregular grids, multi-dimensional space/time grids, point clouds, and TINs and other meshes. A uniform conceptualization for modelling and serving them could save substantial effort, such as the proverbial "department of reformatting". The notion of a coverage can actually accomplish this. Its abstract model in ISO 19123, together with the concrete, interoperable OGC Coverage Implementation Schema (CIS), which is currently under adoption as ISO 19123-2, provides a common platform for representing any n-D grid type, point clouds, and general meshes. This is paired with the OGC Web Coverage Service (WCS) together with its datacube analytics language, the OGC Web Coverage Processing Service (WCPS). The OGC WCS Core Reference Implementation, rasdaman, relies on Array Database technology, i.e. a NewSQL/NoSQL approach. It supports the grid part of coverages, with installations of 100+ TB known and single queries parallelized across 1,000+ cloud nodes. Recent research attempts to address the point cloud and mesh parts through a unified query model. The Holy Grail envisioned is that these approaches can be merged into a single service interface at some time. We present both grid and point cloud / mesh approaches and discuss status, implementation, standardization, and research perspectives, including a live demo.

  6. Remote Sensing Analysis of Temperature and Suspended Sediment Concentration in Ayeyarwady River in Myanmar

    NASA Astrophysics Data System (ADS)

    Thanda Ko, Nyein; Rutten, Martine

    2017-04-01

    Detailed spatial coverage of water quality parameters is crucial to better manage rivers. However, collection of water quality parameters is both time consuming and costly for large rivers. This study demonstrates that the Operational Land Imager (OLI) sensor on board Landsat 8 can be successfully applied for the detection of spatial patterns of water temperature as well as suspended sediment concentration (SSC), using the Ayeyarwady river, Myanmar as a case study. Water temperature estimation was obtained from the brightness thermal Band 10 by using the split-window algorithm. The study finds that there is close agreement between the remote sensing temperature and in-situ temperature, with relative error in the range of 4.5% to 8.2%. The sediment load of the Ayeyarwady river is ranked as the third-largest among the world's rivers, but very little is known about this important parameter due to a lack of adequate gauge data. The single-band reflectance of the Landsat image (Band 5) appears to be a good indicator for the estimation of SSC, with relative error of less than 10%, but the empirical formula developed from a power relation with only seven ground reference points is too uncertain to apply to the entire river basin. An important constraint for the sediment analysis is the availability of spatial and temporal ground reference data. Future studies should also focus on improving the ground reference data points to make them more reliable, because most rivers in Asia, especially in Myanmar, do not have readily available continuous ground sediment data due to a lack of measurement gauge stations along the river.
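    The power relation between single-band reflectance and SSC mentioned above is usually fitted by linear regression in log-log space. The sketch below shows that generic fit; the reflectance-SSC pairs are made up for illustration and are not the seven ground reference points used in the study.

        import numpy as np

        def fit_power_law(reflectance, ssc):
            # Fit SSC = a * R**b by regressing log(SSC) on log(R).
            logR = np.log(np.asarray(reflectance, float))
            logS = np.log(np.asarray(ssc, float))
            b, log_a = np.polyfit(logR, logS, 1)
            return np.exp(log_a), b

        def predict_ssc(a, b, reflectance):
            return a * np.asarray(reflectance, float) ** b

        R = [0.05, 0.08, 0.11, 0.15, 0.19, 0.24, 0.30]   # Band 5 reflectance (made up)
        S = [45, 95, 160, 260, 380, 540, 760]            # SSC in mg/L (made up)
        a, b = fit_power_law(R, S)
        rel_err = np.abs(predict_ssc(a, b, R) - np.array(S)) / np.array(S)
        print(a, b, rel_err.max())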

  7. The processing and collaborative assay of a reference endotoxin.

    PubMed

    Hochstein, H D; Mills, D F; Outschoorn, A S; Rastogi, S C

    1983-10-01

    A preparation of Escherichia coli bacterial endotoxin, the latest of successive lots drawn from bulk material which has been studied in laboratory tests and in animals and humans for suitability as a reference endotoxin, has been filled and lyophilized in a large number of vials. Details of its characterization, including stability studies, are given. A collaborative assay was conducted by 14 laboratories using gelation end-points with Limulus amebocyte lysates. Approximate continuity of the unit of potency with the existing national unit was achieved. The lot was made from the single final bulk but had to be freeze-dried in five sublimators. An assessment was therefore made for possible heterogeneity. The results indicate that the lot can be used as a large homogeneous quantity. The advantages of using it widely as a standard for endotoxins are discussed.

  8. Adjoint-Based Design of Rotors using the Navier-Stokes Equations in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Jones, William T.

    2009-01-01

    Optimization of rotorcraft flowfields using an adjoint method generally requires a time-dependent implementation of the equations. The current study examines an intermediate approach in which a subset of rotor flowfields are cast as steady problems in a noninertial reference frame. This technique permits the use of an existing steady-state adjoint formulation with minor modifications to perform sensitivity analyses. The formulation is valid for isolated rigid rotors in hover or where the freestream velocity is aligned with the axis of rotation. Discrete consistency of the implementation is demonstrated using comparisons with a complex-variable technique, and a number of single- and multi-point optimizations for the rotorcraft figure of merit function are shown for varying blade collective angles. Design trends are shown to remain consistent as the grid is refined.

  9. An Electron Density Source-Function Study of DNA Base Pairs in Their Neutral and Ionized Ground States†.

    PubMed

    Gatti, Carlo; Macetti, Giovanni; Boyd, Russell J; Matta, Chérif F

    2018-07-05

    The source function (SF) decomposes the electron density at any point into contributions from all other points in the molecule, complex, or crystal. The SF "illuminates" those regions in a molecule that most contribute to the electron density at a point of reference. When this point of reference is the bond critical point (BCP), a commonly used surrogate of chemical bonding, then the SF analysis at an atomic resolution within the framework of Bader's Quantum Theory of Atoms in Molecules returns the contribution of each atom in the system to the electron density at that BCP. The SF is used to locate the important regions that control the hydrogen bonds in both Watson-Crick (WC) DNA dimers (adenine:thymine (AT) and guanine:cytosine (GC)), which are studied in their neutral and their singly ionized (radical cationic and anionic) ground states. The atomic contributions to the electron density at the BCPs of the hydrogen bonds in the two dimers are found to be delocalized to various extents. Surprisingly, gaining or losing an electron has similar net effects on some hydrogen bonds, concealing subtle compensations traced to atomic source contributions. Coarser levels of resolution (groups, rings, and/or monomers-in-dimers) reveal that distant groups and rings often have non-negligible effects, especially on the weaker hydrogen bonds such as the third weak CH⋅⋅⋅O hydrogen bond in AT. Interestingly, neither the purine nor the pyrimidine in the neutral or ionized forms dominates any given hydrogen bond, despite the fact that the former has more atoms that can act as a source or sink for the density at its BCP. © 2018 Wiley Periodicals, Inc.

  10. Establishment of reference scores and interquartile ranges for the Japanese Orthopaedic Association Back Pain Evaluation Questionnaire (JOABPEQ) in patients with low back pain.

    PubMed

    Tominaga, Ryoji; Sekiguchi, Miho; Yonemoto, Koji; Kakuma, Tatsuyuki; Konno, Shin-Ichi

    2018-05-01

    The Japanese Orthopaedic Association Back Pain Evaluation Questionnaire (JOABPEQ) was developed in 2007, including the five domains of Pain-related disorder, Lumbar spine dysfunction, Gait disturbance, Social life disturbance, and Psychological disorder. It is used by physicians to evaluate treatment efficacy by comparing scores before and after treatment. However, the JOABPEQ does not allow evaluation of the severity of a patient's condition compared to the general population at a single time point. Given the unavailability of a standard measurement of back pain, we sought to establish reference scores and interquartile ranges using data obtained from a multicenter, cross-sectional survey taken in Japanese primary care settings. The Lumbar Spinal Stenosis Diagnosis Support Tool project was conducted from 2011 to 2012 in 1657 hospitals in Japan to investigate the establishment of reference scores using the JOABPEQ. Patients aged ≥ 20 years undergoing medical examinations by either non-orthopaedic primary care physicians or general orthopedists were considered for enrollment. A total of 10,651 consecutive low back pain patients (5331 men, 5320 women, 18 subjects with missing sex data) who had undergone a medical examination were included. Reference scores and interquartile ranges for each of the five domains of the JOABPEQ according to age and sex were recorded. The median score and interquartile range are the same in the domain of Pain-related disorder in all ages and sexes. The reference scores for Gait disturbance, Social life disturbance and Psychological disorder declined with increasing age in both age- and sex-stratified groups, while Lumbar spine dysfunction showed somewhat different trends between men and women. Reference scores and interquartile ranges for the JOABPEQ were generated based on the examination data. These provide a measurement standard to assess patient perceptions of low back pain at any time point during evaluation or therapy. Copyright © 2018 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.

  11. A new methodology for automatic detection of reference points in 3D cephalometry: A pilot study.

    PubMed

    Ed-Dhahraouy, Mohammed; Riri, Hicham; Ezzahmouly, Manal; Bourzgui, Farid; El Moutaoukkil, Abdelmajid

    2018-04-05

    The aim of this study was to develop a new method for the automatic detection of reference points in 3D cephalometry to overcome the limits of 2D cephalometric analyses. A specific application was designed using the C++ language for automatic and manual identification of 21 (reference) points on the craniofacial structures. Our algorithm is based on the implementation of an anatomical and geometrical network adapted to the craniofacial structure. This network was constructed based on the anatomical knowledge of the 3D cephalometric (reference) points. The proposed algorithm was tested on five CBCT images. The proposed approach for the automatic 3D cephalometric identification was able to detect 21 points with a mean error of 2.32 mm. In this pilot study, we propose an automated methodology for the identification of the 3D cephalometric (reference) points. A larger sample will be implemented in the future to assess the method's validity and reliability. Copyright © 2018 CEO. Published by Elsevier Masson SAS. All rights reserved.

  12. Pharmacokinetics and bioequivalence of two strontium ranelate formulations after single oral administration in healthy Chinese subjects.

    PubMed

    Zhang, Dan; Du, Aihua; Wang, Xiaolin; Zhang, Lina; Yang, Man; Ma, Jingyi; Deng, Ming; Liu, Huichen

    2018-05-08

    The pharmacokinetics of exogenous strontium (Sr) and the bioequivalence of a new oral formulation of strontium ranelate compared with the brand-name drug in healthy Chinese subjects were evaluated. A balanced, randomized, single-dose, two-treatment parallel study was conducted in 36 healthy Chinese subjects. Subjects were randomly allocated into two groups of 18 to receive a single oral dose of the test formulation or the reference formulation under a fasting state, respectively. Blood samples were collected at 19 designated time points up to 240-h post-dose. Serum concentrations of Sr were quantified by ICP-MS. A total of 36 subjects were enrolled and completed the study. Nine mild adverse events in 6 subjects were reported. The Cmax, AUC0-72h, AUC0-t, and AUC0-∞ of the test and reference formulations, shown as mean ± SD, were 6.97 ± 1.78 and 6.78 ± 1.80 µg/mL, 199 ± 51 and 187 ± 38 µg·h/mL, 303 ± 89 and 278 ± 54 µg·h/mL, and 337 ± 109 and 305 ± 60 µg·h/mL, respectively. The two formulations were bioequivalent, and both were generally well tolerated.

  13. 49 CFR 571.210 - Standard No. 210; Seat belt assembly anchorages.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... reference point, shall extend forward from that contact point at an angle with the horizontal of not less... torso belt first contacts the uppermost torso belt anchorage.Seat belt anchorage means any component... line from the seating reference point to the nearest contact point of the belt with the anchorage shall...

  14. Handwriting assessment of Franco-Quebec primary school-age students

    PubMed

    Couture, Mélanie; Morin, Marie-France; Coallier, Mélissa; Lavigne, Audrey; Archambault, Patricia; Bolduc, Émilie; Chartier, Émilie; Liard, Karolane; Jasmin, Emmanuelle

    2016-12-01

    Reasons for referring school-age children to occupational therapy mainly relate to handwriting problems. However, there are no validated tools or reference values for assessing handwriting in francophone children in Canada. This study aimed to adapt and validate the writing tasks described in an English Canadian handwriting assessment protocol and to develop reference values for handwriting speed for francophone children. Three writing tasks from the Handwriting Assessment Protocol-2nd Edition (near-point and far-point copying and dictation) were adapted for Québec French children and administered to 141 Grade 1 (n = 73) and Grade 2 (n = 68) students. Reference values for handwriting speed were obtained for near-point and far-point copying tasks. This adapted protocol and these reference values for speed will improve occupational therapy handwriting assessments for the target population.

  15. Postural stabilization after single-leg vertical jump in individuals with chronic ankle instability.

    PubMed

    Nunes, Guilherme S; de Noronha, Marcos

    2016-11-01

    To investigate the impact that different ways of defining reference balance can have when analysing time to stabilization (TTS). Secondarily, to investigate the difference in TTS between people with chronic ankle instability (CAI) and healthy controls. Cross-sectional study. Laboratory. Fifty recreational athletes (25 CAI, 25 controls). TTS of the center of pressure (CoP) after a maximal single-leg vertical jump, using as reference methods the single-leg stance, the pre-jump period, and the post-jump period; and the CoP variability during the reference methods. The post-jump reference period had lower values for TTS in the anterior-posterior (AP) direction when compared to single-leg stance (P = 0.001) and to pre-jump (P = 0.002). For TTS in the medio-lateral (ML) direction, the post-jump reference period showed lower TTS when compared to single-leg stance (P = 0.01). We found no difference between the CAI and control groups in TTS for any direction. The CAI group showed more CoP variability than the control group in the single-leg stance reference period for both directions. Different reference periods will produce different results for TTS. There is no difference in TTS after a maximum vertical jump between groups. People with CAI have more CoP variability in both directions during single-leg stance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Identifying, Assessing, and Mitigating Risk of Single-Point Inspections on the Space Shuttle Reusable Solid Rocket Motor

    NASA Technical Reports Server (NTRS)

    Greenhalgh, Phillip O.

    2004-01-01

    In the production of each Space Shuttle Reusable Solid Rocket Motor (RSRM), over 100,000 inspections are performed. ATK Thiokol Inc. reviewed these inspections to ensure a robust inspection system is maintained. The principal effort within this endeavor was the systematic identification and evaluation of inspections considered to be single-point. Single-point inspections are those accomplished on components, materials, and tooling by only one person, involving no other check. The purpose was to more accurately characterize risk and ultimately address and/or mitigate risk associated with single-point inspections. After the initial review of all inspections and identification/assessment of single-point inspections, review teams applied risk prioritization methodology similar to that used in a Process Failure Modes Effects Analysis to derive a Risk Prioritization Number for each single-point inspection. After the prioritization of risk, all single-point inspection points determined to have significant risk were provided either with risk-mitigating actions or rationale for acceptance. This effort gave confidence to the RSRM program that the correct inspections are being accomplished, that there is appropriate justification for those that remain as single-point inspections, and that risk mitigation was applied to further reduce risk of higher risk single-point inspections. This paper examines the process, results, and lessons learned in identifying, assessing, and mitigating risk associated with single-point inspections accomplished in the production of the Space Shuttle RSRM.
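    The Risk Prioritization Number referred to above follows the familiar PFMEA pattern of multiplying ratings for severity, occurrence and detectability. The sketch below is a generic illustration of that pattern; the 1-10 scale, the inspection names and the ratings are assumptions, not values from the RSRM review.

        def risk_priority_number(severity, occurrence, detection):
            # Product of ratings for severity, likelihood of occurrence and
            # difficulty of detection, each assumed here to be on a 1-10 scale.
            for rating in (severity, occurrence, detection):
                if not 1 <= rating <= 10:
                    raise ValueError("ratings are expected on a 1-10 scale")
            return severity * occurrence * detection

        # Rank a few hypothetical single-point inspections, highest risk first.
        inspections = {
            "insulation bondline visual check": (9, 3, 7),
            "tooling serial-number verification": (4, 2, 2),
            "igniter seal surface finish": (8, 2, 6),
        }
        for name, ratings in sorted(inspections.items(),
                                    key=lambda kv: risk_priority_number(*kv[1]),
                                    reverse=True):
            print(name, risk_priority_number(*ratings))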

  17. Derivation of the Biot-Savart Law from Ampere's Law Using the Displacement Current

    NASA Astrophysics Data System (ADS)

    Buschauer, Robert

    2013-12-01

    The equation describing the magnetic field due to a single, nonrelativistic charged particle moving at constant velocity is often referred to as the "Biot-Savart law for a point charge." Introductory calculus-based physics books usually state this law without proof.2 Advanced texts often present it either without proof or as a special case of a complicated mathematical formalism.3 Either way, little or no physical insight is provided to the student regarding the underlying physics. This paper presents a novel, basic, and transparent derivation of the Biot-Savart law for a point charge based only on Maxwell's displacement current term in Ampere's law. This derivation can serve many pedagogical purposes. For example, it can be used as lecture material at any academic level to obtain the Biot-Savart law for a point charge from simple principles. It can also serve as a practical example of the important fact that a changing electric flux produces a magnetic field.
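    For reference, the law discussed above has the standard textbook form for a non-relativistic point charge q moving with velocity \mathbf{v} (quoted from general electromagnetism, not reproduced from the paper's derivation):

        \mathbf{B} = \frac{\mu_0}{4\pi} \, \frac{q\,\mathbf{v} \times \hat{\mathbf{r}}}{r^{2}}

    where \mathbf{r} is the vector from the charge to the field point and \hat{\mathbf{r}} is its unit vector.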

  18. The gallium melting-point standard: its role in our temperature measurement system.

    PubMed

    Mangum, B W

    1977-01-01

    The latest internationally-adopted temperature scale, the International Practical Temperature Scale of 1968 (amended edition of 1975), is discussed in some detail and a brief description is given of its evolution. The melting point of high-purity gallium (stated to be at least 99.99999% pure) as a secondary temperature reference point is evaluated. I believe that this melting-point temperature of gallium should be adopted by the various medical professional societies and voluntary standards groups as the reaction temperature for enzyme reference methods in clinical enzymology. Gallium melting-point cells are available at the National Bureau of Standards as Standard Reference Material No. 1968.

  19. Impact of Multiple Complex Plaques on Short- and Long-Term Clinical Outcomes in Patients Presenting with ST-Segment Elevation Myocardial Infarction (From the Harmonizing Outcomes with Revascularization and Stents in Acute Myocardial Infarction [HORIZONS-AMI] Trial)

    PubMed Central

    Keeley, Ellen C.; Mehran, Roxana; Brener, Sorin J.; Witzenbichler, Bernhard; Guagliumi, Giulio; Dudek, Dariusz; Kornowski, Ran; Dressler, Ovidiu; Fahy, Martin; Xu, Ke; Grines, Cindy L.; Stone, Gregg W.

    2014-01-01

    It is not known whether the extent and severity of non-culprit coronary lesions correlate with outcomes in patients with STEMI referred for primary PCI. We sought to quantify complex plaques in ST-segment elevation myocardial infarction (STEMI) patients referred for primary percutaneous coronary intervention (PCI) and to determine their effect on short- and long-term clinical outcomes by examining the core laboratory database for plaque analysis from the HORIZONS-AMI study. Baseline demographic, angiographic, and procedural details were compared between patients with single vs. multiple complex plaques undergoing single vessel PCI. Multivariable analysis was performed for predictors of long-term major adverse cardiac events (MACE), a combined end point of death, reinfarction, ischemic target vessel revascularization, or stroke, and for death alone. Single vessel PCI was performed in 3,137 patients (87%): 2,174 (69%) had multiple complex plaques and 963 (31%) had a single complex plaque. Compared to those with a single complex plaque, patients with multiple complex plaques were older (p<0.0001) and had more comorbidities. The presence of multiple complex plaques was an independent predictor of 3-year MACE (hazard ratio [HR]: 1.58; 95% confidence interval [CI]: 1.26–1.98, p<0.0001), and death alone (HR: 1.68; 95% CI: 1.05–2.70, p=0.03). In conclusion, multiple complex plaques are present in the majority of STEMI patients undergoing primary PCI and their presence is an independent predictor of short- and long-term MACE, including death. (Harmonizing Outcomes With Revascularization and Stents in Acute Myocardial Infarction [HORIZONS-AMI]; NCT00433966) PMID:24703369

  20. Active dendrites: colorful wings of the mysterious butterflies.

    PubMed

    Johnston, Daniel; Narayanan, Rishikesh

    2008-06-01

    Santiago Ramón y Cajal had referred to neurons as the 'mysterious butterflies of the soul.' Wings of these butterflies--their dendrites--were traditionally considered as passive integrators of synaptic information. Owing to a growing body of experimental evidence, it is now widely accepted that these wings are colorful, endowed with a plethora of active conductances, with each family of these butterflies made of distinct hues and shades. Furthermore, rapidly evolving recent literature also provides direct and indirect demonstrations for activity-dependent plasticity of these active conductances, pointing toward chameleonic adaptability in these hues. These experimental findings firmly establish the immense computational power of a single neuron, and thus constitute a turning point toward the understanding of various aspects of neuronal information processing. In this brief historical perspective, we track important milestones in the chameleonic transmogrification of these mysterious butterflies.

  1. Traceability of pH measurements by glass electrode cells: performance characteristic of pH electrodes by multi-point calibration.

    PubMed

    Naumann, R; Alexander-Weber, Ch; Eberhardt, R; Giera, J; Spitzer, P

    2002-11-01

    Routine pH measurements are carried out with pH meter-glass electrode assemblies. In most cases the glass and reference electrodes are thereby fashioned into a single probe, the so-called 'combination electrode' or simply 'the pH electrode'. The use of these electrodes is subject to various effects, described below, producing uncertainties of unknown magnitude. Therefore, the measurement of pH of a sample requires a suitable calibration by certified standard buffer solutions (CRMs) traceable to primary pH standards. The procedures in use are based on calibrations at one point, at two points bracketing the sample pH and at a series of points, the so-called multi-point calibration. The multi-point calibration (MPC) is recommended if minimum uncertainty and maximum consistency are required over a wide range of unknown pH values. Details of uncertainty computations for the two-point and MPC procedure are given. Furthermore, the multi-point calibration is a useful tool to characterise the performance of pH electrodes. This is demonstrated with different commercial pH electrodes.
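    A multi-point calibration of the kind described above amounts to a straight-line fit of electrode EMF against the pH of several certified buffers, with the residual scatter giving a rough performance indicator. The sketch below illustrates this; the buffer values and EMF readings are invented for illustration, and the residual standard deviation is only a simplified stand-in for a full uncertainty budget.

        import numpy as np

        def multipoint_calibration(ph_buffers, emf_mv):
            # Fit E = E0 + s * pH over the certified buffers and report the slope,
            # intercept and residual standard deviation of the calibration line.
            ph = np.asarray(ph_buffers, float)
            E = np.asarray(emf_mv, float)
            s, e0 = np.polyfit(ph, E, 1)
            residuals = E - (e0 + s * ph)
            dof = len(ph) - 2
            s_res = np.sqrt(np.sum(residuals**2) / dof) if dof > 0 else float("nan")
            return s, e0, s_res

        def ph_from_emf(emf, slope, intercept):
            # Invert the calibration line to obtain the sample pH from a measured EMF.
            return (emf - intercept) / slope

        buffers = [1.68, 4.01, 6.86, 9.18, 10.01]            # certified buffer pH values
        emf = [295.0, 160.2, -4.5, -138.9, -186.5]           # made-up EMF readings in mV
        slope, e0, s_res = multipoint_calibration(buffers, emf)
        print(slope, e0, s_res, ph_from_emf(-60.0, slope, e0))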

  2. Evaluation of the new electron-transport algorithm in MCNP6.1 for the simulation of dose point kernel in water

    NASA Astrophysics Data System (ADS)

    Antoni, Rodolphe; Bourgois, Laurent

    2017-12-01

    In this work, the calculation of specific dose distributions in water is evaluated in MCNP6.1 with the regular condensed-history algorithm, the "detailed electron energy-loss straggling logic", and with the newly proposed electron transport algorithm, the "single event algorithm". The Dose Point Kernel (DPK) is calculated with monoenergetic electrons of 50, 100, 500, 1000 and 3000 keV for different scoring cell dimensions. A comparison between MCNP6 results and well-validated codes for electron dosimetry, i.e., EGSnrc or Penelope, is performed. When the detailed electron energy-loss straggling logic is used with default settings (down to the cut-off energy of 1 keV), we infer that the depth of the dose peak increases with decreasing thickness of the scoring cell, largely due to combined step-size and boundary-crossing artifacts. This finding is less prominent for the 500 keV, 1 MeV and 3 MeV dose profiles. With an appropriate number of sub-steps (ESTEP value in MCNP6), the dose-peak shift is almost completely absent for 50 keV and 100 keV electrons. However, the dose peak is more prominent compared to EGSnrc and the absorbed dose tends to be underestimated at greater depths, meaning that boundary-crossing artifacts still occur while step-size artifacts are greatly reduced. When the single-event mode is used for the whole transport, we observe good agreement between the reference and calculated profiles for 50 and 100 keV electrons. The remaining artifacts vanish entirely, showing a possible transport treatment for energies below about a hundred keV, with agreement with the reference for any scoring cell dimension, even though the single-event method was initially intended to support electron transport at energies below 1 keV. Conversely, results for 500 keV, 1 MeV and 3 MeV show a dramatic discrepancy with the reference curves. These poor results, and thus the current unreliability of the method at these energies, are in part due to inappropriate elastic cross-section treatment from the ENDF/B-VI.8 library in those energy ranges. Accordingly, special care has to be taken in the choice of settings when calculating electron dose distributions with MCNP6, in particular with regard to dosimetry or nuclear medicine applications.

  3. Estimating implementation and operational costs of an integrated tiered CD4 service including laboratory and point of care testing in a remote health district in South Africa.

    PubMed

    Cassim, Naseem; Coetzee, Lindi M; Schnippel, Kathryn; Glencross, Deborah K

    2014-01-01

    An integrated tiered service delivery model (ITSDM) has been proposed to provide 'full coverage' of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and the number of referring health-facilities. These include: (1) a Tier-1/decentralized point-of-care (POC) service at a single site; (2) a Tier-2/POC-hub processing < 30-40 samples from 8-10 health-clinics; (3) Tier-3/community laboratories servicing ∼ 50 health-clinics and processing < 150 samples/day; and (4-5) high-volume centralized laboratories (Tier-4 and Tier-5) processing < 300 or > 600 samples/day and serving > 100 or > 200 health-clinics, respectively. The objective of this study was to establish the costs of existing and ITSDM tiers 1, 2 and 3 in a remote, under-serviced district in South Africa. Historical health-facility workload volumes from the Pixley-ka-Seme district, and the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, linked to the locations of all referring clinics and related laboratory-to-result turn-around time (LTR-TAT) data, were extracted from the NHLS Corporate Data Warehouse for the period April 2012 to March 2013. Tiers were costed separately (as a cost-per-result) including equipment, staffing, reagents and test consumable costs. A one-way sensitivity analysis provided for changes in reagent price, test volumes and personnel time. The lowest cost-per-result was noted for the existing laboratory-based Tiers 4 and 5 ($6.24 and $5.37 respectively), but with a related increased LTR-TAT of > 24-48 hours. Full service coverage with a TAT < 6 hours could be achieved with placement of twenty-seven Tier-1/POC sites or eight Tier-2/POC-hubs, at a cost-per-result of $32.32 and $15.88 respectively. A single district Tier-3 laboratory also ensured 'full service coverage' and a < 24-hour LTR-TAT for the district at $7.42 per test. Implementing a single Tier-3/community laboratory to extend and improve delivery of services in Pixley-ka-Seme, with an estimated local ∼ 12-24-hour LTR-TAT, is ∼ $2 more per test than existing referred services, but 2-4-fold cheaper than implementing eight Tier-2/POC-hubs or providing twenty-seven Tier-1/POCT CD4 services.
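    The cost-per-result comparison above can be pictured with a very simplified calculation: annualised equipment plus staffing spread over the annual test volume, plus per-test reagents and consumables. All numbers below are hypothetical placeholders, not the study's cost inputs, and the real costing included more components (quality control, service, training, etc.).

        def cost_per_result(annual_volume, equipment_cost, equipment_life_years,
                            annual_staff_cost, reagent_cost_per_test, consumable_cost_per_test):
            # Annualised fixed costs divided by test volume, plus variable per-test costs.
            fixed_per_test = (equipment_cost / equipment_life_years + annual_staff_cost) / annual_volume
            return fixed_per_test + reagent_cost_per_test + consumable_cost_per_test

        # Hypothetical comparison of a POC-hub-like site and a community-laboratory-like site.
        print(cost_per_result(9000, 40000, 5, 30000, 6.0, 1.5))
        print(cost_per_result(45000, 120000, 5, 90000, 3.5, 1.0))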

  4. [Comparison between references of the overweight and obesity prevalence, through the Body Mass Index, in Argentinean children].

    PubMed

    Padula, Gisel; Salceda, Susana A

    2008-12-01

    The evaluation of child nutritional status is highly dependent on the growth charts used. The aim of this study was to compare different references for assessing overweight and obesity in a child population, through the Body Mass Index. A total of 737 healthy children born at term, aged 2-5 years, were included (cross-sectional study). Participation was voluntary and with consent. Body Mass Index (kg/m2) was estimated. The measurement techniques were based on national guidelines. We compared three references: (1) Centers for Disease Control and Prevention (CDC) (> Pc85: overweight; > Pc95: obesity); (2) International Obesity Task Force (IOTF) (sex- and age-specific body mass index cut-offs); (3) World Health Organization (WHO) (+2 standard deviations: overweight; +3: obesity). The Epi Info 6.0 software was used for the statistical evaluation (chi2, p ≤ .05). The prevalence of overweight was 1.1 and 2.33 times higher when applying the CDC reference than with the IOTF and WHO references, respectively. The prevalence of overweight was 2.1 times higher with the IOTF than with the WHO reference (p = .00001). The prevalence of obesity was 5.4 and 23.9 times higher when applying the CDC reference than with the IOTF and WHO references, respectively. The prevalence of obesity was 4.4 times higher with the IOTF than with the WHO reference (p = .0000001). The prevalence of overweight and obesity, calculated through the BMI, differs substantially according to the reference and cut-off points used. In the absence of a single agreed criterion, each of the references should be used with care.
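
    A minimal sketch of the comparison logic only (not the study's analysis): each child's BMI is classified against the overweight/obesity cut-offs supplied by a given reference, and prevalences are tallied per reference. The cut-off lookup functions and numbers below are placeholders; in practice they come from the CDC growth charts (85th/95th percentiles), the IOTF age- and sex-specific tables and the WHO standards (+2 SD/+3 SD).

    ```python
    # Classify children against reference-specific BMI cut-offs and tally prevalences.
    def classify(bmi, overweight_cutoff, obesity_cutoff):
        if bmi >= obesity_cutoff:
            return "obese"
        if bmi >= overweight_cutoff:
            return "overweight"
        return "normal"

    def prevalence(children, cutoff_lookup):
        """children: list of dicts with keys bmi, age, sex.
        cutoff_lookup(age, sex) -> (overweight_cutoff, obesity_cutoff)."""
        counts = {"normal": 0, "overweight": 0, "obese": 0}
        for c in children:
            ow, ob = cutoff_lookup(c["age"], c["sex"])
            counts[classify(c["bmi"], ow, ob)] += 1
        n = len(children)
        return {k: v / n for k, v in counts.items()}

    # Placeholder lookups -- replace with the real reference tables.
    cdc_lookup = lambda age, sex: (17.3, 18.5)
    iotf_lookup = lambda age, sex: (17.9, 20.1)
    who_lookup = lambda age, sex: (18.2, 20.9)

    children = [{"bmi": 18.0, "age": 4.0, "sex": "M"},
                {"bmi": 21.5, "age": 4.0, "sex": "M"}]
    for name, lookup in [("CDC", cdc_lookup), ("IOTF", iotf_lookup), ("WHO", who_lookup)]:
        print(name, prevalence(children, lookup))
    ```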

  5. Development of discrete choice model considering internal reference points and their effects in travel mode choice context

    NASA Astrophysics Data System (ADS)

    Sarif; Kurauchi, Shinya; Yoshii, Toshio

    2017-06-01

    In the conventional travel behavior models such as logit and probit, decision makers are assumed to conduct absolute evaluations of the attributes of the choice alternatives. On the other hand, many researchers in cognitive psychology and marketing science have suggested that the perceptions of attributes are characterized by benchmarks called "reference points" and that relative evaluations based on them are often employed in various choice situations. Therefore, this study developed a travel behavior model based on mental accounting theory in which internal reference points are explicitly considered. A questionnaire survey about shopping trips to the CBD in Matsuyama city was conducted, and the roles of reference points in travel mode choice contexts were then investigated. The result showed that the goodness-of-fit of the developed model was higher than that of the conventional model, indicating that internal reference points might play a major role in the choice of travel mode. It was also shown that the respondents seem to utilize various reference points: some tend to adopt the lowest fuel price they have experienced, while others employ the fare price they feel in their perceptions of the travel cost.

  6. Safety characteristics of the monolithic CFC divertor

    NASA Astrophysics Data System (ADS)

    Zucchetti, M.; Merola, M.; Matera, R.

    1994-09-01

    The main distinguishing feature of the monolithic CFC divertor is the use of a single material, a carbon fibre reinforced carbon, for the protective armour, the heat sink and the cooling channels. This removes joint interface problems, which are one of the most important concerns related to the reference solutions of the ITER CDA divertor. An activation analysis of the different coolant options for this concept is presented. It turns out that neither short-term nor long-term activation is a concern for any of the coolants investigated. Therefore the proposed concept also proves to be attractive from a safety standpoint.

  7. Opportunities and Efficiencies in Building a New Service Desk Model.

    PubMed

    Mayo, Alexa; Brown, Everly; Harris, Ryan

    2017-01-01

    In July 2015, the Health Sciences and Human Services Library (HS/HSL) at the University of Maryland, Baltimore (UMB), merged its reference and circulation services, creating the Information Services Department and Information Services Desk. Designing the Information Services Desk with a team approach allowed for the re-examination of the HS/HSL's service model from the ground up. With the creation of a single service point, the HS/HSL was able to create efficiencies, improve the user experience by eliminating handoffs, create a collaborative team environment, and engage information services staff in a variety of new projects.

  8. CCD correlation techniques

    NASA Technical Reports Server (NTRS)

    Hewes, C. R.; Bosshart, P. W.; Eversole, W. L.; Dewit, M.; Buss, D. D.

    1976-01-01

    Two CCD techniques were discussed for performing an N-point sampled data correlation between an input signal and an electronically programmable reference function. The design and experimental performance of an implementation of the direct time correlator utilizing two analog CCDs and MOS multipliers on a single IC were evaluated. The performance of a CCD implementation of the chirp z transform was described, and the design of a new CCD integrated circuit for performing correlation by multiplication in the frequency domain was presented. This chip provides a discrete Fourier transform (DFT) or inverse DFT, multipliers, and complete support circuitry for the CCD CZT. The two correlation techniques are compared.
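
    A hedged numerical sketch of the second technique summarized above, correlation by multiplication in the frequency domain: the N-point circular correlation of an input signal with a programmable reference equals the inverse DFT of DFT(x) times the conjugate of DFT(ref), which the snippet checks against a direct time-domain sum.

    ```python
    # Circular correlation via the frequency domain, verified against the direct sum.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 64
    x = rng.standard_normal(N)        # sampled input signal
    ref = rng.standard_normal(N)      # electronically programmable reference function

    # Correlation by multiplication in the frequency domain.
    corr_freq = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(ref))).real

    # Direct time-domain circular correlation: corr[k] = sum_n x[n] * ref[(n - k) mod N].
    corr_time = np.array([np.dot(x, np.roll(ref, k)) for k in range(N)])

    print(np.allclose(corr_freq, corr_time))   # True
    ```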

  9. Single Event Effects Test Results for the Actel ProASIC Plus and Altera Stratix-II Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Allen, Gregory R.; Swift, Gary M.

    2006-01-01

    This work describes radiation testing of Actel's ProASIC Plus and Altera's Stratix-II FPGAs. The Actel Device Under Test (DUT) was a ProASIC Plus APA300-PQ208 nonvolatile, field-reprogrammable device based on a 0.22-micron flash-based LVCMOS technology. Limited investigation has taken place into flash-based FPGA technologies; therefore, this test served as a preliminary reference point for various SEE behaviors. The Altera DUT was a Stratix-II EP2S60F1020C4. Single Event Upset (SEU) and Single Event Latchup (SEL) were the focus of these studies. For the Actel, a latchup test was done at an effective LET of 75.0 MeV·cm²/mg at room temperature, and no latchup was detected when irradiated to a total fluence of 1 × 10⁷ particles/cm². The Altera part was shown to latch up at room temperature.

  10. Portable open-path optical remote sensing (ORS) Fourier transform infrared (FTIR) instrumentation miniaturization and software for point and click real-time analysis

    NASA Astrophysics Data System (ADS)

    Zemek, Peter G.; Plowman, Steven V.

    2010-04-01

    Advances in hardware have miniaturized the emissions spectrometer and associated optics, rendering them easily deployed in the field. Such systems are also suitable for vehicle mounting, and can provide high-quality data and concentration information in minutes. Advances in software have accompanied this hardware evolution, enabling the development of portable point-and-click OP-FTIR systems that weigh less than 16 lbs. These systems are ideal for first-responders, military, law enforcement, forensics, and screening applications using optical remote sensing (ORS) methodologies. With canned methods and interchangeable detectors, the new generation of OP-FTIR technology is coupled to the latest forward reference-type model software to provide point-and-click technology. These software models have been established for some time. However, refined user-friendly models that use active, passive, and solar occultation methodologies now allow the user to quickly field-screen and quantify plumes, fence-lines, and combustion incident scenarios at high temporal resolution. Synthetic background generation is now redundant, as the models use highly accurate instrument line shape (ILS) convolutions and several other parameters, in conjunction with radiative transfer model databases, to fit a single calibration spectrum to the collected sample spectra. Data retrievals are performed directly on single-beam spectra using non-linear classical least squares (NLCLS). Typically, the Hitran line database is used to generate the initial calibration spectrum contained within the software.

  11. Sideloading - Ingestion of Large Point Clouds Into the Apache Spark Big Data Engine

    NASA Astrophysics Data System (ADS)

    Boehm, J.; Liu, K.; Alis, C.

    2016-06-01

    In the geospatial domain we have now reached the point where data volumes we handle have clearly grown beyond the capacity of most desktop computers. This is particularly true in the area of point cloud processing. It is therefore naturally lucrative to explore established big data frameworks for big geospatial data. The very first hurdle is the import of geospatial data into big data frameworks, commonly referred to as data ingestion. Geospatial data is typically encoded in specialised binary file formats, which are not naturally supported by the existing big data frameworks. Instead such file formats are supported by software libraries that are restricted to single CPU execution. We present an approach that allows the use of existing point cloud file format libraries on the Apache Spark big data framework. We demonstrate the ingestion of large volumes of point cloud data into a compute cluster. The approach uses a map function to distribute the data ingestion across the nodes of a cluster. We test the capabilities of the proposed method to load billions of points into a commodity hardware compute cluster and we discuss the implications on scalability and performance. The performance is benchmarked against an existing native Apache Spark data import implementation.
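
    The ingestion pattern described above can be sketched as follows (an illustration, not the authors' code): a list of point cloud file paths is parallelised, and each Spark executor uses an ordinary single-CPU reader to load its files inside a flatMap, so the ingestion itself is distributed across the cluster. The laspy reader and file names are stand-ins for whatever format library and tiles are actually used.

    ```python
    # Distributed point cloud ingestion: parallelise file paths, read on the executors.
    from pyspark.sql import SparkSession

    def read_points(path):
        """Runs on an executor: read one LAS file and yield (x, y, z) tuples."""
        import laspy                       # imported on the worker, not the driver
        las = laspy.read(path)
        for x, y, z in zip(las.x, las.y, las.z):
            yield (float(x), float(y), float(z))

    if __name__ == "__main__":
        spark = SparkSession.builder.appName("pointcloud-sideload").getOrCreate()
        paths = ["tile_001.las", "tile_002.las"]          # placeholder file names
        rdd = (spark.sparkContext
                    .parallelize(paths, numSlices=len(paths))
                    .flatMap(read_points))
        df = rdd.toDF(["x", "y", "z"])                    # columnar view for analytics
        print(df.count())
        spark.stop()
    ```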

  12. The Long-Wave Infrared Earth Image as a Pointing Reference for Deep-Space Optical Communications

    NASA Astrophysics Data System (ADS)

    Biswas, A.; Piazzolla, S.; Peterson, G.; Ortiz, G. G.; Hemmati, H.

    2006-11-01

    Optical communications from space require an absolute pointing reference. Whereas at near-Earth and even planetary distances out to Mars and Jupiter a laser beacon transmitted from Earth can serve as such a pointing reference, for farther distances extending to the outer reaches of the solar system, the means for meeting this requirement remains an open issue. We discuss in this article the prospects and consequences of utilizing the Earth image sensed in the long-wave infrared (LWIR) spectral band as a beacon to satisfy the absolute pointing requirements. We have used data from satellite-based thermal measurements of Earth to synthesize images at various ranges and have shown the centroiding accuracies that can be achieved with prospective LWIR image sensing arrays. The nonuniform emissivity of Earth causes a mispointing bias error term that exceeds a provisional pointing budget allocation when using simple centroiding algorithms. Other issues related to implementing thermal imaging of Earth from deep space for the purposes of providing a pointing reference are also reported.
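
    A minimal sketch of the simple centroiding step discussed above: the intensity-weighted centre of mass of a synthetic LWIR Earth image on a focal-plane array. The non-uniform brightness of the disc shifts the centroid away from the geometric centre, which is the bias term the article analyses; the image here is synthetic, not satellite data.

    ```python
    # Intensity-weighted centroid of a synthetic, non-uniformly bright Earth disc.
    import numpy as np

    def centroid(image):
        """Intensity-weighted centroid (row, col) in pixel coordinates."""
        total = image.sum()
        rows, cols = np.indices(image.shape)
        return (rows * image).sum() / total, (cols * image).sum() / total

    # Synthetic 32x32 "Earth" disc with a warm (bright) and a cold (dim) hemisphere.
    n = 32
    r, c = np.indices((n, n))
    disc = ((r - 15.5) ** 2 + (c - 15.5) ** 2) <= 10 ** 2
    image = np.where(disc, 1.0, 0.0)
    image[:, 16:] *= 0.7                 # emissivity/temperature non-uniformity

    print("geometric centre:", (15.5, 15.5))
    print("measured centroid:", centroid(image))   # shifted toward the brighter half
    ```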

  13. Solving multi-objective optimization problems in conservation with the reference point method

    PubMed Central

    Dujardin, Yann; Chadès, Iadine

    2018-01-01

    Managing the biodiversity extinction crisis requires wise decision-making processes able to account for the limited resources available. In most decision problems in conservation biology, several conflicting objectives have to be taken into account. Most methods used in conservation either provide suboptimal solutions or use strong assumptions about the decision-maker’s preferences. Our paper reviews some of the existing approaches to solve multi-objective decision problems and presents new multi-objective linear programming formulations of two multi-objective optimization problems in conservation, allowing the use of a reference point approach. Reference point approaches solve multi-objective optimization problems by interactively representing the preferences of the decision-maker with a point in the criteria (objectives) space, called the reference point. We modelled and solved the following two problems in conservation: a dynamic multi-species management problem under uncertainty and a spatial allocation resource management problem. Results show that the reference point method outperforms classic methods while illustrating the use of an interactive methodology for solving combinatorial problems with multiple objectives. The method is general and can be adapted to a wide range of ecological combinatorial problems. PMID:29293650
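
    A toy illustration of the reference point idea (not the paper's conservation models): two linear objectives on a simplex are optimised by minimising an augmented Chebyshev distance to a decision-maker's reference point, solved as a single linear programme.

    ```python
    # Reference point scalarisation of a two-objective toy problem as a linear programme.
    import numpy as np
    from scipy.optimize import linprog

    q = np.array([0.8, 0.5])      # reference (aspiration) point in objective space
    lam = np.array([1.0, 1.0])    # scalarising weights
    rho = 1e-3                    # small augmentation to avoid weakly efficient solutions

    # Decision variables y = [x1, x2, t]; maximise f1 = x1, f2 = x2 on x1 + x2 <= 1
    # by minimising t - rho*(x1 + x2) subject to lam_i*(q_i - f_i) <= t.
    c = np.array([-rho, -rho, 1.0])
    A_ub = np.array([
        [1.0, 1.0, 0.0],          # feasibility: x1 + x2 <= 1
        [-lam[0], 0.0, -1.0],     # lam1*(q1 - x1) <= t
        [0.0, -lam[1], -1.0],     # lam2*(q2 - x2) <= t
    ])
    b_ub = np.array([1.0, -lam[0] * q[0], -lam[1] * q[1]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None), (None, None)], method="highs")

    x1, x2, t = res.x
    print(f"solution on the Pareto frontier: f = ({x1:.3f}, {x2:.3f}), max shortfall t = {t:.3f}")
    ```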

  14. Using single-case experimental design methodology to evaluate the effects of the ABC method for nursing staff on verbal aggressive behaviour after acquired brain injury.

    PubMed

    Winkens, Ieke; Ponds, Rudolf; Pouwels, Climmy; Eilander, Henk; van Heugten, Caroline

    2014-01-01

    The ABC method is a basic and simplified form of behavioural modification therapy for use by nurses. ABC refers to the identification of Antecedent events, target Behaviours, and Consequent events. A single-case experimental AB design was used to evaluate the effects of the ABC method on a woman diagnosed with olivo-ponto-cerebellar ataxia. Target behaviour was verbal aggressive behaviour during ADL care, assessed at 9 time points immediately before implementation of the ABC method and at 36 time points after implementation. A randomisation test showed a significant treatment effect between the baseline and intervention phases (t = .58, p = .03; ES [Nonoverlap All Pairs] = .62). Visual analysis, however, showed that the target behaviour was still present after implementation of the method and that on some days the nurses even judged the behaviour to be more severe than at baseline. Although the target behaviour was still present after treatment, the ABC method seems to be a promising tool for decreasing problem behaviour in patients with acquired brain injury. It is worth investigating the effects of this method in future studies. When interpreting single-subject data, both visual inspection and statistical analysis are needed to determine whether treatment is effective and whether the effects lead to clinically desirable results.
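
    For readers unfamiliar with the Nonoverlap of All Pairs (NAP) effect size reported above, a hedged sketch: every baseline observation is paired with every intervention observation, and NAP is the share of pairs showing improvement (here, lower aggression scores), with ties counted as half. The scores below are made-up placeholders, not the study's data.

    ```python
    # Nonoverlap of All Pairs (NAP) for an AB single-case design.
    def nap(baseline, intervention, improvement="decrease"):
        better = ties = 0
        for a in baseline:
            for b in intervention:
                if b == a:
                    ties += 1
                elif (b < a) if improvement == "decrease" else (b > a):
                    better += 1
        return (better + 0.5 * ties) / (len(baseline) * len(intervention))

    baseline_scores = [4, 5, 3, 4, 5, 4, 3, 4, 5]          # 9 baseline time points
    intervention_scores = [3, 2, 4, 2, 3, 1, 2, 3, 2, 1,   # 36 intervention time points
                           2, 3, 1, 2, 2, 3, 2, 1, 2, 3,
                           2, 2, 1, 3, 2, 1, 2, 2, 3, 1,
                           2, 1, 2, 2, 1, 2]
    print(f"NAP = {nap(baseline_scores, intervention_scores):.2f}")
    ```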

  15. Why are we afraid to love?

    PubMed

    Tavormina, Romina

    2014-11-01

    The crisis of the couple in today's society increases the number of singles and causes difficulty in creating stable and lasting relationships over time. The last two censuses of the Italian population show an increase in the number of singles, especially among the young generation. From this figure we have tried to understand the causes of this phenomenon both from the point of view of sociology and psychology. The sociologist Baumann considers the crisis of the couple and the increase in singles as a result of the crisis of modern society. The author says that we live in a fluid world where everything changes quickly, there is nothing stable, and this also affects the couple. From a psychological point of view, reference has been made to all that psychoanalytic writers, from Freud onwards, have said about the crisis of the couple. It is clear that some subjects have difficulty maintaining stable and long-lasting relationships because they are suffering from philophobia (fear of love). We have tried to understand what the causes of the different types of philophobia are for both men and women. It was found that the origins are created in the early childhood relationships with their parents. It was found that people who have had problems in their childhood relationships with their parents, because of their lack of love, tend to reproduce in adulthood the same dysfunctional relational model learned in childhood.

  16. Quantum-mechanical analysis of low-gain free-electron laser oscillators

    NASA Astrophysics Data System (ADS)

    Fares, H.; Yamada, M.; Chiadroni, E.; Ferrario, M.

    2018-05-01

    In the previous classical theory of low-gain free-electron laser (FEL) oscillators, the electron is described as a point-like particle, a delta function in position space. On the other hand, in the previous quantum treatments, the electron is described as a plane wave with a single momentum state, a delta function in momentum space. In reality, an electron must have statistical uncertainties in the position and momentum domains. Thus, the electron is neither a point-like charge nor a plane wave of a single momentum. In this paper, we rephrase the theory of the low-gain FEL where the interacting electron is represented quantum mechanically by a plane wave with a finite spreading length (i.e., a wave packet). Using the concepts of the transformation of reference frames and statistical quantum mechanics, an expression for the single-pass radiation gain is derived. The spectral broadening of the radiation is expressed in terms of the spreading length of an electron, the relaxation time characterizing the energy spread of electrons, and the interaction time. We introduce a comparison between our results and those obtained in the already known classical analyses, where good agreement between both results is shown. While the correspondence between our results and the classical results is shown, novel insights into the electron dynamics and the interaction mechanism are presented.

  17. Verification of an optimized stimulation point on the abdominal wall for transcutaneous neuromuscular electrical stimulation for activation of deep lumbar stabilizing muscles.

    PubMed

    Baek, Seung Ok; Cho, Hee Kyung; Jung, Gil Su; Son, Su Min; Cho, Yun Woo; Ahn, Sang Ho

    2014-09-01

    Transcutaneous neuromuscular electrical stimulation (NMES) can stimulate contractions in deep lumbar stabilizing muscles. An optimal protocol has not been devised for the activation of these muscles by NMES, and information is lacking regarding an optimal stimulation point on the abdominal wall. The goal was to determine a single optimized stimulation point on the abdominal wall for transcutaneous NMES for the activation of deep lumbar stabilizing muscles. Ultrasound images of the spinal stabilizing muscles were captured during NMES at three sites on the lateral abdominal wall. After an optimal location for the placement of the electrodes was determined, changes in the thickness of the lumbar multifidus (LM) were measured during NMES. Three stimulation points were investigated using 20 healthy physically active male volunteers. A reference point R, 1 cm superior to the iliac crest along the midaxillary line, was used. Three study points were used: stimulation point S1 was located 2 cm superior and 2 cm medial to the anterior superior iliac spine, stimulation point S3 was 2 cm below the lowest rib along the same sagittal plane as S1, and stimulation point S2 was midway between S1 and S3. Sessions were conducted stimulating at S1, S2, or S3 using R for reference. Real-time ultrasound imaging (RUSI) of the abdominal muscles was captured during each stimulation session. In addition, RUSI images were captured of the LM during stimulation at S1. Thickness, as measured by RUSI, of the transverse abdominis (TrA), obliquus internus, and obliquus externus was greater during NMES than at rest for all three study points (p<.05). Transverse abdominis was significantly stimulated more by NMES at S1 than at the other points (p<.05). The LM thickness was also significantly greater during NMES at S1 than at rest (p<.05). Neuromuscular electrical stimulation at S1 optimally activated deep spinal stabilizing muscles, TrA and LM, as evidenced by RUSI. The authors recommend this optimal stimulation point be used for NMES in the course of lumbar spine stabilization training in patients having difficulty initiating contraction of these muscles. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Booster propulsion/vehicle impact study, 2

    NASA Technical Reports Server (NTRS)

    Johnson, P.; Satterthwaite, S.; Carson, C.; Schnackel, J.

    1988-01-01

    This is the final report in a study examining the impact of various boost propulsion design options on launch vehicles. These options included: differing boost phase engines using different combinations of fuels and coolants, including RP-1, methane, propane (subcooled and normal boiling point), and hydrogen; variable and high mixture ratio hydrogen engines; translating nozzles on boost phase engines; and cross-feeding propellants from the booster to the second stage. Vehicles examined included a fully reusable two-stage cargo vehicle and a single-stage-to-orbit vehicle. The use of subcooled propane as a fuel generated vehicles with the lowest total vehicle dry mass. Engines with hydrogen cooling generated only slight mass reductions from the reference, all-hydrogen vehicle. Cross-feeding propellants generated the most significant mass reductions from the reference two-stage vehicle. The use of high mixture ratio or variable mixture ratio hydrogen engines in the boost phase of flight resulted in vehicles with total dry mass 20 percent greater than the reference hydrogen vehicle. Translating nozzles for boost phase engines generated a heavier vehicle. Also examined were the design impacts on the vehicle and ground support subsystems when subcooled propane is used as a fuel. The most significant cost difference between facilities to handle normal boiling point versus subcooled propane is 5 million dollars. Vehicle cost differences were negligible. A significant technical challenge exists for properly conditioning the vehicle propellant on the ground and in flight when subcooled propane is used as fuel.

  19. Patron Preference in Reference Service Points.

    ERIC Educational Resources Information Center

    Morgan, Linda

    1980-01-01

    Behavior of patrons choosing between a person sitting at a counter and one sitting at a desk at each of two reference points was observed at the reference department during remodeling at the M. D. Anderson Library of the University of Houston. Results showed a statistically relevant preference for the counter. (Author/JD)

  20. Pointing As a Socio-Pragmatic Cue to Particular vs. Generic Reference

    ERIC Educational Resources Information Center

    Meyer, Meredith; Baldwin, Dare A.

    2013-01-01

    Generic noun phrases, or generics, refer to abstract kind categories ("Dogs" bark) rather than particular individuals ("Those dogs" bark). How do children distinguish these distinct kinds of reference? We examined the role of one socio-pragmatic cue, namely pointing, in producing and comprehending generic versus particular…

  1. Null test fourier domain alignment technique for phase-shifting point diffraction interferometer

    DOEpatents

    Naulleau, Patrick; Goldberg, Kenneth Alan

    2000-01-01

    Alignment technique for calibrating a phase-shifting point diffraction interferometer involves three independent steps where the first two steps independently align the image points and pinholes in rotation and separation to a fixed reference coordinate system, e.g., the CCD. Once the two sub-elements have been properly aligned to the reference in two parameters (separation and orientation), the third step is to align the two sub-element coordinate systems to each other in the two remaining parameters (x, y) using standard methods of locating the pinholes relative to some easy-to-find reference point.

  2. The Nuclear Energy Density Functional Formalism

    NASA Astrophysics Data System (ADS)

    Duguet, T.

    The present document focuses on the theoretical foundations of the nuclear energy density functional (EDF) method. As such, it does not aim at reviewing the status of the field, at covering all possible ramifications of the approach or at presenting recent achievements and applications. The objective is to provide a modern account of the nuclear EDF formalism that is at variance with traditional presentations that rely, at one point or another, on a Hamiltonian-based picture. The latter is not general enough to encompass what the nuclear EDF method represents as of today. Specifically, the traditional Hamiltonian-based picture does not allow one to grasp the difficulties associated with the fact that currently available parametrizations of the energy kernel E[g',g] at play in the method do not derive from a genuine Hamilton operator, even an effective one. The method is formulated from the outset through the most general multi-reference, i.e. beyond mean-field, implementation such that the single-reference, i.e. "mean-field", implementation derives as a particular case. As such, a key point of the presentation provided here is to demonstrate that the multi-reference EDF method can indeed be formulated in a mathematically meaningful fashion even if E[g',g] does not derive from a genuine Hamilton operator. In particular, the restoration of symmetries can be entirely formulated without making any reference to a projected state, i.e. within a genuine EDF framework. However, and as is illustrated in the present document, a mathematically meaningful formulation does not guarantee that the formalism is sound from a physical standpoint. The price at which the latter can also be enforced in the future is eventually alluded to.

  3. Attenuated coupled cluster: a heuristic polynomial similarity transformation incorporating spin symmetry projection into traditional coupled cluster theory

    NASA Astrophysics Data System (ADS)

    Gomez, John A.; Henderson, Thomas M.; Scuseria, Gustavo E.

    2017-11-01

    In electronic structure theory, restricted single-reference coupled cluster (CC) captures weak correlation but fails catastrophically under strong correlation. Spin-projected unrestricted Hartree-Fock (SUHF), on the other hand, misses weak correlation but captures a large portion of strong correlation. The theoretical description of many important processes, e.g. molecular dissociation, requires a method capable of accurately capturing both weak and strong correlation simultaneously, and would likely benefit from a combined CC-SUHF approach. Based on what we have recently learned about SUHF written as particle-hole excitations out of a symmetry-adapted reference determinant, we here propose a heuristic CC doubles model to attenuate the dominant spin collective channel of the quadratic terms in the CC equations. Proof of principle results presented here are encouraging and point to several paths forward for improving the method further.

  4. Exhaled breath condensate – from an analytical point of view

    PubMed Central

    Dodig, Slavica; Čepelak, Ivana

    2013-01-01

    Over the past three decades, the goal of many researchers has been the analysis of exhaled breath condensate (EBC) as a noninvasively obtained sample. Total quality in the laboratory diagnostic process for EBC analysis was investigated across the pre-analytical (formation, collection, storage of EBC), analytical (sensitivity of applied methods, standardization) and post-analytical (interpretation of results) phases. EBC analysis is still used as a research tool. Limitations relating to the pre-analytical, analytical, and post-analytical phases of EBC analysis are numerous, e.g. low concentrations of EBC constituents, single-analyte methods lacking sensitivity, multi-analyte methods not yet fully explored, and reference values not established. When all pre-analytical, analytical and post-analytical requirements are met, EBC biomarkers as well as biomarker patterns can be selected and EBC analysis can hopefully be used in clinical practice, in both the diagnosis and the longitudinal follow-up of patients, resulting in better outcomes of disease. PMID:24266297

  5. LiDAR Vegetation Investigation and Signature Analysis System (LVISA)

    NASA Astrophysics Data System (ADS)

    Höfle, Bernhard; Koenig, Kristina; Griesbaum, Luisa; Kiefer, Andreas; Hämmerle, Martin; Eitel, Jan; Koma, Zsófia

    2015-04-01

    Our physical environment undergoes constant changes in space and time with strongly varying triggers, frequencies, and magnitudes. Monitoring these environmental changes is crucial to improve our scientific understanding of complex human-environmental interactions and helps us to respond to environmental change by adaptation or mitigation. The three-dimensional (3D) description of Earth surface features and the detailed monitoring of surface processes using 3D spatial data have gained increasing attention within the last decades, such as in climate change research (e.g., glacier retreat), carbon sequestration (e.g., forest biomass monitoring), precision agriculture and natural hazard management. In all those areas, 3D data have helped to improve our process understanding by allowing quantification of the structural properties of earth surface features and their changes over time. This advancement has been fostered by technological developments and the increased availability of 3D sensing systems. In particular, LiDAR (light detection and ranging) technology, also referred to as laser scanning, has made significant progress and has evolved into an operational tool in environmental research and geosciences. The main result of LiDAR measurements is a highly spatially resolved 3D point cloud. Each point within the LiDAR point cloud has an XYZ coordinate associated with it and often additional information such as the strength of the returned backscatter. The point cloud provided by LiDAR contains rich geospatial, structural, and potentially biochemical information about the surveyed objects. To deal with the inherently unorganized datasets and the large data volume (frequently millions of XYZ coordinates) of LiDAR datasets, a multitude of algorithms for automatic 3D object detection (e.g., of single trees) and physical surface description (e.g., biomass) have been developed. However, so far the exchange of datasets and approaches (i.e., extraction algorithms) among LiDAR users lags behind. We propose a novel concept, the LiDAR Vegetation Investigation and Signature Analysis System (LVISA), which shall enhance sharing of i) reference datasets of single vegetation objects with rich reference data (e.g., plant species, basic plant morphometric information) and ii) approaches for information extraction (e.g., single tree detection, tree species classification based on waveform LiDAR features). We will build an extensive LiDAR data repository for supporting the development and benchmarking of LiDAR-based object information extraction. The LiDAR Vegetation Investigation and Signature Analysis System (LVISA) uses international web service standards (Open Geospatial Consortium, OGC) for geospatial data access and also analysis (e.g., OGC Web Processing Services). This will allow the research community to identify plant-object-specific vegetation features from LiDAR data, while accounting for differences in LiDAR systems (e.g., beam divergence), settings (e.g., point spacing), and calibration techniques. It is the goal of LVISA to develop generic 3D information extraction approaches, which can be seamlessly transferred to other datasets, timestamps and also extraction tasks. The current prototype of LVISA can be visited and tested online via http://uni-heidelberg.de/lvisa. Video tutorials provide a quick overview and entry into the functionality of LVISA.
We will present the current advances of LVISA and we will highlight future research and extension of LVISA, such as integrating low-cost LiDAR data and datasets acquired by highly temporal scanning of vegetation (e.g., continuous measurements). Everybody is invited to join the LVISA development and share datasets and analysis approaches in an interoperable way via the web-based LVISA geoportal.

  6. Statistical Attitude Determination

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    2010-01-01

    All spacecraft require attitude determination at some level of accuracy. This can be a very coarse requirement of tens of degrees, in order to point solar arrays at the sun, or a very fine requirement in the milliarcsecond range, as required by the Hubble Space Telescope. A toolbox of attitude determination methods, applicable across this wide range, has been developed over the years. There have been many advances in the thirty years since the publication of the earlier reference, but the fundamentals remain the same. One significant change is that onboard attitude determination has largely superseded ground-based attitude determination, due to the greatly increased power of onboard computers. The availability of relatively inexpensive radiation-hardened microprocessors has led to the development of "smart" sensors, with autonomous star trackers being the first spacecraft application. Another new development is attitude determination using interferometry of radio signals from the Global Positioning System (GPS) constellation. This article reviews both the classic material and these newer developments at approximately the same level, with emphasis on methods suitable for use onboard a spacecraft. We discuss both "single frame" methods that are based on measurements taken at a single point in time, and sequential methods that use information about spacecraft dynamics to combine the information from a time series of measurements.
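
    As a hedged illustration of one classic "single frame" method alluded to above, the sketch below solves Wahba's problem with an SVD: it finds the rotation that best maps reference-frame unit vectors (e.g. catalogue star directions) onto body-frame measurements. This is a generic textbook construction, not code from the article.

    ```python
    # Single-frame attitude determination: SVD solution of Wahba's problem.
    import numpy as np

    def svd_attitude(body_vecs, ref_vecs, weights=None):
        """Return the body-from-reference rotation matrix minimising
        sum_i w_i * ||b_i - A r_i||^2."""
        body = np.asarray(body_vecs, dtype=float)
        ref = np.asarray(ref_vecs, dtype=float)
        w = np.ones(len(body)) if weights is None else np.asarray(weights, dtype=float)
        B = (w[:, None, None] * body[:, :, None] * ref[:, None, :]).sum(axis=0)
        U, _, Vt = np.linalg.svd(B)
        d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))   # enforce a proper rotation
        return U @ np.diag([1.0, 1.0, d]) @ Vt

    # Self-check with a known rotation and two noise-free observations.
    theta = np.radians(30.0)
    A_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    refs = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    bodies = refs @ A_true.T
    print(np.allclose(svd_attitude(bodies, refs), A_true))   # True
    ```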

  7. 40 CFR 63.652 - Emissions averaging provisions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... emissions average. This must include any Group 1 emission points to which the reference control technology... controls for a Group 1 emission point, the pollution prevention measure alone does not have to reduce... in control after November 15, 1990; (2) Group 1 emission points that are controlled by a reference...

  8. 40 CFR 63.652 - Emissions averaging provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... emissions average. This must include any Group 1 emission points to which the reference control technology... controls for a Group 1 emission point, the pollution prevention measure alone does not have to reduce... in control after November 15, 1990; (2) Group 1 emission points that are controlled by a reference...

  9. Practical wavelength calibration considerations for UV-visible Fourier-transform spectroscopy.

    PubMed

    Salit, M L; Travis, J C; Winchester, M R

    1996-06-01

    The intrinsic wavelength scale in a modern reference laser-controlled Michelson interferometer (sometimes referred to as the Connes advantage) offers excellent wavelength accuracy with relative ease. Truly superb wavelength accuracy, with a total relative uncertainty in line position of the order of several parts in 10⁸, should be within reach with single-point, multiplicative calibration. The need for correction of the wavelength scale arises from two practical effects: the use of a finite aperture, from which off-axis rays propagate through the interferometer, and imperfect geometric alignment of the sample beam with the reference beam and the optical axis of the moving mirror. Although an analytical correction can be made for the finite-aperture effect, calibration with a trusted wavelength standard is typically used to accomplish both corrections. Practical aspects of accurate calibration of an interferometer in the UV-visible region are discussed. Critical issues regarding accurate use of a standard external to the sample source and the evaluation and selection of an appropriate standard are addressed. Anomalous results for two different potential wavelength standards measured by Fabry-Perot interferometry (Ar II and (198)Hg I) are observed.
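
    The single-point multiplicative calibration mentioned above can be sketched in a few lines: one trusted standard line yields a scale factor that is applied to the whole wavenumber axis. The line positions used here are placeholders, not recommended standards.

    ```python
    # Single-point multiplicative wavelength-scale correction (placeholder numbers).
    observed_standard = 15798.10      # measured wavenumber of the standard line (cm^-1)
    true_standard = 15798.0025        # accepted value of the same line (cm^-1)

    scale = true_standard / observed_standard       # single multiplicative correction

    def calibrate(observed_wavenumber):
        return observed_wavenumber * scale

    print(f"correction factor: {scale:.9f}")
    print(f"corrected line at 20000 cm^-1 (observed): {calibrate(20000.0):.4f} cm^-1")
    ```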

  10. Reference Clinical Database for Fixation Stability Metrics in Normal Subjects Measured with the MAIA Microperimeter.

    PubMed

    Morales, Marco U; Saker, Saker; Wilde, Craig; Pellizzari, Carlo; Pallikaris, Aristophanes; Notaroberto, Neil; Rubinstein, Martin; Rui, Chiara; Limoli, Paolo; Smolek, Michael K; Amoaku, Winfried M

    2016-11-01

    The purpose of this study was to establish a normal reference database for fixation stability measured with the bivariate contour ellipse area (BCEA) in the Macular Integrity Assessment (MAIA) microperimeter. Subjects were 358 healthy volunteers who had the MAIA examination. Fixation stability was assessed using two BCEA fixation indices (63% and 95% proportional values) and the percentage of fixation points within 1° and 2° from the fovea (P1 and P2). Statistical analysis was performed with linear regression and Pearson's product moment correlation coefficient. Average areas of 0.80 deg² (min = 0.03, max = 3.90, SD = 0.68) for the index BCEA@63% and 2.40 deg² (min = 0.20, max = 11.70, SD = 2.04) for the index BCEA@95% were found. The average values of P1 and P2 were 95% (min = 76, max = 100, SD = 5.31) and 99% (min = 91, max = 100, SD = 1.42), respectively. The Pearson's product moment test showed an almost perfect correlation index, r = 0.999, between BCEA@63% and BCEA@95%. Index P1 showed a very strong correlation with BCEA@63%, r = -0.924, as well as with BCEA@95%, r = -0.925. Index P2 demonstrated a slightly lower correlation with both BCEA@63% and BCEA@95%, r = -0.874 and -0.875, respectively. The single parameter of the BCEA@95% may be taken as accurately reporting fixation stability and serves as a reference database of normal subjects with a cutoff area of 2.40 ± 2.04 deg² in the MAIA microperimeter. Fixation stability can be measured with different indices. This study establishes reference fixation values for the MAIA using a single fixation index.
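
    A hedged sketch of the BCEA computation behind these indices: for fixation samples (x, y) in degrees, BCEA = 2·k·π·σx·σy·√(1−ρ²) with k = −ln(1−P), so P = 0.63 gives k ≈ 1 and P = 0.95 gives k ≈ 3. The fixation samples below are synthetic.

    ```python
    # Bivariate contour ellipse area (BCEA) at two coverage proportions.
    import numpy as np

    def bcea(x, y, proportion):
        k = -np.log(1.0 - proportion)
        sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)
        rho = np.corrcoef(x, y)[0, 1]
        return 2.0 * k * np.pi * sx * sy * np.sqrt(1.0 - rho ** 2)

    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 0.30, 1500)      # horizontal fixation positions (deg)
    y = rng.normal(0.0, 0.25, 1500)      # vertical fixation positions (deg)

    print(f"BCEA@63% = {bcea(x, y, 0.63):.2f} deg^2")
    print(f"BCEA@95% = {bcea(x, y, 0.95):.2f} deg^2")
    ```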

  11. A Reference Field for GCR Simulation and an LET-Based Implementation at NSRL

    NASA Technical Reports Server (NTRS)

    Slaba, Tony C.; Blattnig, Steve R.; Walker, Steven A.; Norbury, John W.

    2015-01-01

    Exposure to galactic cosmic rays (GCR) on long duration deep space missions presents a serious health risk to astronauts, with large uncertainties connected to the biological response. In order to reduce the uncertainties and gain understanding about the basic mechanisms through which space radiation initiates cancer and other endpoints, radiobiology experiments are performed. Some of the accelerator facilities supporting such experiments have matured to a point where simulating the broad range of particles and energies characteristic of the GCR environment in a single experiment is feasible from a technology, usage, and cost perspective. In this work, several aspects of simulating the GCR environment in the laboratory are discussed. First, comparisons are made between direct simulation of the external, free space GCR field and simulation of the induced tissue field behind shielding. It is found that upper energy constraints at the NASA Space Radiation Laboratory (NSRL) limit the ability to simulate the external, free space field directly (i.e. shielding placed in the beam line in front of a biological target and exposed to a free space spectrum). Second, variation in the induced tissue field associated with shielding configuration and solar activity is addressed. It is found that the observed variation is within physical uncertainties, allowing a single reference field for deep space missions to be defined. Third, an approach for simulating the reference field at NSRL is presented. The approach allows for the linear energy transfer (LET) spectrum of the reference field to be approximately represented with discrete ion and energy beams and implicitly maintains a reasonably accurate charge spectrum (or, average quality factor). Drawbacks of the proposed methodology are discussed and weighed against alternative simulation strategies. The neutron component and track structure characteristics of the proposed strategy are discussed in this context.

  12. The Neurobiology of Reference-Dependent Value Computation

    PubMed Central

    De Martino, Benedetto; Kumaran, Dharshan; Holt, Beatrice; Dolan, Raymond J.

    2009-01-01

    A key focus of current research in neuroeconomics concerns how the human brain computes value. Although value has generally been viewed as an absolute measure (e.g., expected value, reward magnitude), much evidence suggests that value is more often computed with respect to a changing reference point, rather than in isolation. Here, we present the results of a study aimed at dissociating brain regions involved in reference-independent (i.e., “absolute”) value computations from those involved in value computations relative to a reference point. During functional magnetic resonance imaging, subjects acted as buyers and sellers during a market exchange of lottery tickets. At a behavioral level, we demonstrate that subjects systematically accorded a higher value to objects they owned relative to those they did not, an effect that results from a shift in reference point (i.e., status quo bias or endowment effect). Our results show that activity in orbitofrontal cortex and dorsal striatum tracks parameters such as the expected value of lottery tickets, indicating the computation of reference-independent value. In contrast, activity in ventral striatum indexed the degree to which stated prices, at a within-subjects and between-subjects level, were distorted with respect to a reference point. The findings speak to the neurobiological underpinnings of reference dependency during real market value computations. PMID:19321780

  13. Standardization of clinical enzyme analysis using frozen human serum pools with values assigned by the International Federation of Clinical Chemistry and Laboratory Medicine reference measurement procedures.

    PubMed

    Tong, Qing; Chen, Baorong; Zhang, Rui; Zuo, Chang

    Variation in clinical enzyme analysis, particularly across different measuring systems and laboratories, represents a critical but long-standing problem in diagnosis. Calibrators with traceability and commutability are urgently needed to harmonize analysis in laboratory medicine. Fresh frozen human serum pools were assigned values for alanine aminotransferase (ALT), aspartate aminotransferase (AST), gamma-glutamyltransferase (GGT), creatine kinase (CK) and lactate dehydrogenase (LDH) by six laboratories with established International Federation of Clinical Chemistry and Laboratory Medicine reference measurement procedures. These serum pools were then used across 76 laboratories as a calibrator in the analysis of the five enzymes. Bias and imprecision in the measurement of the five enzymes tested were significantly reduced by using the value-assigned serum in analytical systems with open and single-point calibration. The median (interquartile range) of the relative biases of ALT, AST, GGT, CK and LDH were 2.0% (0.6-3.4%), 0.8% (-0.8-2.3%), 1.0% (-0.5-2.0%), 0.2% (-0.3-1.0%) and 0.2% (-0.9-1.1%), respectively. Before calibration, the interlaboratory coefficients of variation (CVs) in the analysis of patient serum samples were 8.0-8.2%, 7.3-8.5%, 8.1-8.7%, 5.1-5.9% and 5.8-6.4% for ALT, AST, GGT, CK and LDH, respectively; after calibration, the CVs decreased to 2.7-3.3%, 3.0-3.6%, 1.6-2.1%, 1.8-1.9% and 3.3-3.5%, respectively. The results suggest that the use of fresh frozen serum pools significantly improved the comparability of test results in analytical systems with open and single-point calibration.
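
    An illustrative sketch (made-up numbers only) of the single-point recalibration idea: the value-assigned serum pool yields a calibration factor for each analyser, and bias and between-laboratory CV can be compared before and after applying it.

    ```python
    # Single-point recalibration with a value-assigned serum pool (placeholder numbers).
    import statistics

    assigned_value = 98.0          # reference-procedure value of the serum pool (U/L)
    measured_pool = 104.5          # the same pool measured on a routine analyser (U/L)
    factor = assigned_value / measured_pool      # single-point calibration factor

    patient_results = [52.0, 87.0, 130.0]        # routine results on that analyser (U/L)
    recalibrated = [r * factor for r in patient_results]
    print("recalibrated results:", [round(r, 1) for r in recalibrated])

    # Between-laboratory comparability: CV of one patient sample across laboratories.
    labs_before = [118.0, 125.0, 109.0, 131.0]
    labs_after = [120.5, 122.0, 118.9, 123.4]    # after each lab applies its own factor
    cv = lambda v: statistics.stdev(v) / statistics.mean(v) * 100
    print(f"CV before: {cv(labs_before):.1f}%  CV after: {cv(labs_after):.1f}%")
    ```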

  14. 3D heart motion from single-plane angiography of the coronary vasculature: a model-based approach

    NASA Astrophysics Data System (ADS)

    Sherknies, Denis; Meunier, Jean; Tardif, Jean-Claude

    2004-05-01

    In order to complete a thorough examination of a patient's heart muscle, physicians practice two common invasive procedures: ventriculography, which allows the determination of the ejection fraction, and coronarography, giving among other things information on stenosis of arteries. We propose a method that allows the determination of a contraction index similar to ejection fraction, using only single-plane coronarography. Our method first reconstructs, in 3D, selected points on the angiogram, using a 3D model devised from data published by Dodge et al. ['88, '92]. We then follow the point displacements through a complete heart contraction cycle. The objective function, minimizing the RMS distances between the angiogram and the model, relies on affine transformations, i.e. translation, rotation and isotropic scaling. We validate our method on simulated projections using cases from the Dodge data. In order to avoid any bias, a leave-one-out strategy was used, which excludes the reference case when constructing the 3D coronary heart model. The simulated projections are created by transforming the reference case, with scaling, translation and rotation transformations, and by adding random 3D noise for each frame in the contraction cycle. Comparing the true scaling parameters to the reconstructed sequence, our method is quite robust (R2=96.6%, P<1%), even when the noise error level is as high as 1 cm. Using 10 clinical cases we then proceeded to reconstruct the contraction sequence for a complete cardiac cycle starting at end-diastole. A simple heart contraction mathematical model permitted us to link the measured ejection fraction of the different cases to the maximum heart contraction amplitude (R2=57%, P<1%) determined by our method.

  15. Routine Microsecond Molecular Dynamics Simulations with AMBER on GPUs. 1. Generalized Born

    PubMed Central

    2012-01-01

    We present an implementation of generalized Born implicit solvent all-atom classical molecular dynamics (MD) within the AMBER program package that runs entirely on CUDA enabled NVIDIA graphics processing units (GPUs). We discuss the algorithms that are used to exploit the processing power of the GPUs and show the performance that can be achieved in comparison to simulations on conventional CPU clusters. The implementation supports three different precision models in which the contributions to the forces are calculated in single precision floating point arithmetic but accumulated in double precision (SPDP), or everything is computed in single precision (SPSP) or double precision (DPDP). In addition to performance, we have focused on understanding the implications of the different precision models on the outcome of implicit solvent MD simulations. We show results for a range of tests including the accuracy of single point force evaluations and energy conservation as well as structural properties pertaining to protein dynamics. The numerical noise due to rounding errors within the SPSP precision model is sufficiently large to lead to an accumulation of errors which can result in unphysical trajectories for long time scale simulations. We recommend the use of the mixed-precision SPDP model since the numerical results obtained are comparable with those of the full double precision DPDP model and the reference double precision CPU implementation but at significantly reduced computational cost. Our implementation provides performance for GB simulations on a single desktop that is on par with, and in some cases exceeds, that of traditional supercomputers. PMID:22582031
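
    A tiny generic illustration (not AMBER code) of the precision-model point made above: per-term contributions held in single precision are summed either with a single-precision accumulator (the SPSP idea) or a double-precision accumulator (the SPDP idea), and the accumulation error differs by many orders of magnitude.

    ```python
    # Single- vs double-precision accumulation of single-precision contributions.
    import numpy as np

    n = 1_000_000
    contributions = np.full(n, 1.0e-3, dtype=np.float32)   # per-term contributions

    ref = float(np.sum(contributions.astype(np.float64)))  # double-precision reference

    acc32 = np.float32(0.0)                                 # SPSP-style accumulator
    for value in contributions:
        acc32 = np.float32(acc32 + value)

    acc64 = float(np.sum(contributions, dtype=np.float64))  # SPDP-style accumulator

    print(f"SPSP error: {abs(float(acc32) - ref):.2e}")
    print(f"SPDP error: {abs(acc64 - ref):.2e}")
    ```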

  16. Reference in Action: Links between Pointing and Language

    ERIC Educational Resources Information Center

    Cooperrider, Kensy Andrew

    2011-01-01

    When referring to things in the world, speakers produce utterances that are composites of speech and action. Pointing gestures are a pervasive part of such composite utterances, but many questions remain about exactly how pointing is integrated with speech. In this dissertation I present three strands of research that investigate relations of…

  17. Unsteady steady-states: Central causes of unintentional force drift

    PubMed Central

    Ambike, Satyajit; Mattos, Daniela; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2016-01-01

    We applied the theory of synergies to analyze the processes that lead to unintentional decline in isometric fingertip force when visual feedback of the produced force is removed. We tracked the changes in hypothetical control variables involved in single fingertip force production based on the equilibrium-point hypothesis, namely, the fingertip referent coordinate (RFT) and its apparent stiffness (CFT). The system's state is defined by a point in the {RFT; CFT} space. We tested the hypothesis that, after visual feedback removal, this point (1) moves along directions leading to drop in the output fingertip force, and (2) has even greater motion along directions that leaves the force unchanged. Subjects produced a prescribed fingertip force using visual feedback, and attempted to maintain this force for 15 s after the feedback was removed. We used the “inverse piano” apparatus to apply small and smooth positional perturbations to fingers at various times after visual feedback removal. The time courses of RFT and CFT showed that force drop was mostly due to a drift in RFT towards the actual fingertip position. Three analysis techniques, namely, hyperbolic regression, surrogate data analysis, and computation of motor-equivalent and non-motor-equivalent motions, suggested strong co-variation in RFT and CFT stabilizing the force magnitude. Finally, the changes in the two hypothetical control variables {RFT; CFT} relative to their average trends also displayed covariation. On the whole the findings suggest that unintentional force drop is associated with (a) a slow drift of the referent coordinate that pulls the system towards a low-energy state, and (b) a faster synergic motion of RFT and CFT that tends to stabilize the output fingertip force about the slowly-drifting equilibrium point. PMID:27540726

  18. Unsteady steady-states: central causes of unintentional force drift.

    PubMed

    Ambike, Satyajit; Mattos, Daniela; Zatsiorsky, Vladimir M; Latash, Mark L

    2016-12-01

    We applied the theory of synergies to analyze the processes that lead to unintentional decline in isometric fingertip force when visual feedback of the produced force is removed. We tracked the changes in hypothetical control variables involved in single fingertip force production based on the equilibrium-point hypothesis, namely the fingertip referent coordinate (R FT ) and its apparent stiffness (C FT ). The system's state is defined by a point in the {R FT ; C FT } space. We tested the hypothesis that, after visual feedback removal, this point (1) moves along directions leading to drop in the output fingertip force, and (2) has even greater motion along directions that leaves the force unchanged. Subjects produced a prescribed fingertip force using visual feedback and attempted to maintain this force for 15 s after the feedback was removed. We used the "inverse piano" apparatus to apply small and smooth positional perturbations to fingers at various times after visual feedback removal. The time courses of R FT and C FT showed that force drop was mostly due to a drift in R FT toward the actual fingertip position. Three analysis techniques, namely hyperbolic regression, surrogate data analysis, and computation of motor-equivalent and non-motor-equivalent motions, suggested strong covariation in R FT and C FT stabilizing the force magnitude. Finally, the changes in the two hypothetical control variables {R FT ; C FT } relative to their average trends also displayed covariation. On the whole, the findings suggest that unintentional force drop is associated with (a) a slow drift of the referent coordinate that pulls the system toward a low-energy state and (b) a faster synergic motion of R FT and C FT that tends to stabilize the output fingertip force about the slowly drifting equilibrium point.

  19. Photogrammetry Tool for Forensic Analysis

    NASA Technical Reports Server (NTRS)

    Lane, John

    2012-01-01

    A system allows crime scene and accident scene investigators the ability to acquire visual scene data using cameras for processing at a later time. This system uses a COTS digital camera, a photogrammetry calibration cube, and 3D photogrammetry processing software. In a previous instrument developed by NASA, the laser scaling device made use of parallel laser beams to provide a photogrammetry solution in 2D. This device and associated software work well under certain conditions. In order to make use of a full 3D photogrammetry system, a different approach was needed. When using multiple cubes, whose locations relative to each other are unknown, a procedure that would merge the data from each cube would be as follows: 1. One marks a reference point on cube 1, then marks points on cube 2 as unknowns. This locates cube 2 in cube 1's coordinate system. 2. One marks reference points on cube 2, then marks points on cube 1 as unknowns. This locates cube 1 in cube 2's coordinate system. 3. This procedure is continued for all combinations of cubes. 4. The coordinates of all of the found coordinate systems are then merged into a single global coordinate system. In order to achieve maximum accuracy, measurements are done in one of two ways, depending on scale: when measuring the size of objects, the coordinate system corresponding to the nearest cube is used, or when measuring the location of objects relative to a global coordinate system, a merged coordinate system is used. Presently, traffic accident analysis is time-consuming and not very accurate. Using cubes with differential GPS would give absolute positions of cubes in the accident area, so that individual cubes would provide local photogrammetry calibration to objects near a cube.
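
    The merging step described above can be sketched with a standard rigid-transform estimate (an illustration, not the NASA software): given the same physical points expressed in two cube coordinate systems, the Kabsch method recovers the rotation and translation between them, after which cube-2 points can be expressed in cube 1's frame. The point values are synthetic.

    ```python
    # Merge two cube coordinate systems via a least-squares rigid transform (Kabsch).
    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares R, t such that dst ~ src @ R.T + t (no scale)."""
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        d = np.sign(np.linalg.det(U @ Vt))
        R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
        t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
        return R, t

    # Synthetic check: the same points known in cube 2's frame and in cube 1's frame.
    rng = np.random.default_rng(3)
    pts_cube2 = rng.uniform(-0.5, 0.5, (6, 3))
    angle = np.radians(20.0)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([2.0, -1.0, 0.3])
    pts_cube1 = pts_cube2 @ R_true.T + t_true

    R, t = rigid_transform(pts_cube2, pts_cube1)
    merged = pts_cube2 @ R.T + t            # cube-2 points expressed in cube 1's frame
    print(np.allclose(merged, pts_cube1))   # True
    ```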

  20. An Investigation on the Contribution of GLONASS to the Precise Point Positioning for Short Time Observations

    NASA Astrophysics Data System (ADS)

    Ulug, R.; Ozludemir, M. T.

    2016-12-01

    After 2011, through the modernization process of GLONASS, the number of satellites increased rapidly. This progress has made GLONASS the only fully operational system alternative to GPS for point positioning. So far, many studies have been conducted to investigate the contribution of GLONASS to point positioning considering different methods such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP). The latter, PPP, is a method that performs precise position determination using a single GNSS receiver. The PPP method has become very attractive since the early 2000s and has provided great advantages for engineering and scientific applications. However, the PPP method needs at least 2 hours of observation time, and the required observation length may be longer depending on several factors, such as the number of satellites, satellite configuration, etc. The more satellites, the shorter the observation time. Nevertheless, the impact of the number of satellites included must be known very well. In this study, to determine the contribution of GLONASS to PPP, GLONASS satellite observations were added one by one, from 1 to 5 satellites, to 2, 4 and 6 hours of observations. For this purpose, the data collected at the IGS site ISTA were used. Data processing has been done for Day of Year (DOY) 197 in 2016. 24-hour GPS observations have been processed by the Bernese 5.2 PPP module and the output was selected as the reference, while 2, 4 and 6 hour GPS and GPS/GLONASS observations have been processed by the magicGNSS PPP module. The results clearly showed that GPS/GLONASS observations improved positional accuracy, precision, dilution of precision and convergence to the reference coordinates. In this context, coordinate differences between 24-hour GPS observations and 6-hour GPS/GLONASS observations have been obtained as less than 2 cm.

  1. SU-E-T-167: Characterization of In-House Plastic Scintillator Detectors Array for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T; Liu, H; Dimofte, A

    Purpose: To characterize basic performance of a plastic scintillator detector (PSD) array designed for dosimetry in radiation therapy. Methods: An in-house PSD array has been developed by placing single point PSDs into a customized 2D holder. Each point PSD is a plastic scintillating fiber-based detector designed for highly accurate measurement of small radiotherapy fields used in patient plan verification and machine commissioning and QA procedures. A parallel fiber without PSD is used for Cerenkov separation by subtracting from PSD readings. Cerenkov separation was confirmed by optical spectroscopy. Alternative Cerenkov separation approaches are also investigated. The optical signal was converted to an electronic signal with a photodiode and then subsequently amplified. We measured its dosimetry performance, including percentage depth dose and output factor, and compared with reference ion chamber measurements. The PSD array is then placed along the radiation beam for multiple point dose measurement, representing subsets of PDD measurements, or perpendicular to the beam for profile measurements. Results: The dosimetry results of PSD point measurements agree well with reference ion chamber measurements. For percentage depth dose, the maximal differences between PSD and ion chamber results are 3.5% and 2.7% for 6MV and 15MV beams, respectively. For the output factors, PSD measurements are within 3% of ion chamber results. PDD and profile measurements with the PSD array are also performed. Conclusions: The current design of the multichannel PSD array is feasible for dosimetry measurement in radiation therapy. Dose distribution along or perpendicular to the beam path could be measured. It might as well be used for range verification in proton therapy. A PS hollow fiber detector will be investigated to eliminate the Cerenkov radiation effect so that all 32 channels can be used.

  2. Accurate bond energies of hydrocarbons from complete basis set extrapolated multi-reference singles and doubles configuration interaction.

    PubMed

    Oyeyemi, Victor B; Pavone, Michele; Carter, Emily A

    2011-12-09

    Quantum chemistry has become one of the most reliable tools for characterizing the thermochemical underpinnings of reactions, such as bond dissociation energies (BDEs). The accurate prediction of these particular properties (BDEs) is challenging for ab initio methods based on perturbative corrections or coupled cluster expansions of the single-determinant Hartree-Fock wave function: the processes of bond breaking and forming are inherently multi-configurational and require an accurate description of non-dynamical electron correlation. To this end, we present a systematic ab initio approach for computing BDEs that is based on three components: 1) multi-reference single and double excitation configuration interaction (MRSDCI) for the electronic energies; 2) a two-parameter scheme for extrapolating MRSDCI energies to the complete basis set limit; and 3) DFT-B3LYP calculations of minimum-energy structures and vibrational frequencies to account for zero-point energy and thermal corrections. We validated our methodology against a set of reliable experimental BDE values of CC and CH bonds of hydrocarbons. The goal of chemical accuracy is achieved, on average, without applying any empirical corrections to the MRSDCI electronic energies. We then use this composite scheme to make predictions of BDEs in a large number of hydrocarbon molecules for which there are no experimental data, so as to provide needed thermochemical estimates for fuel molecules. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
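
    As an illustration of the extrapolation step, the sketch below uses a common two-point inverse-cube form, E(X) = E_CBS + A·X^-3; the authors' exact two-parameter scheme may differ, and the energies shown are hypothetical.

    ```python
    def cbs_two_point(e_x, x, e_y, y, power=3):
        """Two-parameter extrapolation assuming E(L) = E_CBS + A * L**(-power).

        e_x, e_y are energies obtained with basis sets of cardinal numbers x and y
        (e.g. cc-pVTZ -> 3, cc-pVQZ -> 4); returns the estimated basis-set limit.
        """
        return (x**power * e_x - y**power * e_y) / (x**power - y**power)

    # Hypothetical MRSDCI total energies (hartree) for one molecule:
    e_cbs = cbs_two_point(-79.6523, 3, -79.6701, 4)
    print(f"Extrapolated CBS energy: {e_cbs:.4f} Eh")

    # A BDE would then combine such CBS energies for the molecule and its fragments,
    # plus the DFT-B3LYP zero-point and thermal corrections.
    ```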

  3. Resolution Measurement from a Single Reconstructed Cryo-EM Density Map with Multiscale Spectral Analysis.

    PubMed

    Yang, Yu-Jiao; Wang, Shuai; Zhang, Biao; Shen, Hong-Bin

    2018-06-25

    As a relatively new technology for solving the three-dimensional (3D) structure of a protein or protein complex, single-particle reconstruction (SPR) of cryogenic electron microscopy (cryo-EM) images shows clear advantages and is developing rapidly. Resolution measurement in SPR, which evaluates the quality of a reconstructed 3D density map, plays a critical role in promoting methodology development of SPR and structural biology. Because there is no benchmark map in the generation of a new structure, how to estimate the resolution of a new map remains an open problem. Existing approaches try to generate a hypothetical benchmark map by reconstructing two 3D models from two halves of the original 2D images for cross-reference, which may result in a premature estimation with a half-data model. In this paper, we report a new self-reference-based resolution estimation protocol, called SRes, that requires only a single reconstructed 3D map. The core idea of SRes is to perform a multiscale spectral analysis (MSSA) on the map through multiple size-variable masks segmenting the map. The MSSA-derived multiscale spectral signal-to-noise ratios (mSSNRs) reveal that their corresponding estimated resolutions show a cliff jump phenomenon, indicating a significant change in the SSNR properties. The critical point on the cliff borderline is demonstrated to be the right estimator for the resolution of the map.

  4. Instrument Pointing Capabilities: Past, Present, and Future

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars; Murray, Emmanuell; Scharf, Daniel P.; Aung, Mimi; Bayard, David; Brugarolas, Paul; Hadaegh, Fred; Lee, Allan; Milman, Mark; Sirlin, Sam

    2011-01-01

    This paper surveys the instrument pointing capabilities of past, present and future space telescopes and interferometers. As an important aspect of this survey, we present a taxonomy for "apples-to-apples" comparisons of pointing performances. First, pointing errors are defined relative to either an inertial frame or a celestial target. Pointing error can then be further sub-divided into DC, that is, steady state, and AC components. We refer to the magnitude of the DC error relative to the inertial frame as absolute pointing accuracy, and we refer to the magnitude of the DC error relative to a celestial target as relative pointing accuracy. The magnitude of the AC error is referred to as pointing stability. While an AC/DC partition is not new, we leverage previous work by some of the authors to quantitatively clarify and compare varying definitions of jitter and time window averages. With this taxonomy and for sixteen past, present, and future missions, pointing accuracies and stabilities, both required and achieved, are presented. In addition, we describe the attitude control technologies used to achieve these pointing performances and, for future missions, planned to achieve them.

  5. The power of wholeness, consciousness, and caring a dialogue on nursing science, art, and healing.

    PubMed

    Cowling, W Richard; Smith, Marlaine C; Watson, Jean

    2008-01-01

    Wholeness, consciousness, and caring are 3 critical concepts singled out and positioned in the disciplinary discourse of nursing to distinguish it from other disciplines. This article is an outgrowth of a dialogue among 4 scholars, 3 of whom have participated extensively in work aimed at synthesizing converging points in nursing theory development. It proposes a unified vision of nursing knowledge that builds on their work as a reference point for extending reflection and dialogue about the discipline of nursing. We seek an awakening of a higher/deeper place of wholeness, consciousness, and caring that will synthesize new ethical and intellectual forms and norms of "ontological caring literacy" to arrive at a unitary caring science praxis. We encourage the evolution of a mature caring-healing-health discipline and profession, helping affirm and sustain humanity, caring, and wholeness in our daily work and in the world.

  6. Point cloud registration from local feature correspondences-Evaluation on challenging datasets.

    PubMed

    Petricek, Tomas; Svoboda, Tomas

    2017-01-01

    Registration of laser scans, or point clouds in general, is a crucial step of localization and mapping with mobile robots or in object modeling pipelines. A coarse alignment of the point clouds is generally needed before applying local methods such as the Iterative Closest Point (ICP) algorithm. We propose a feature-based approach to point cloud registration and evaluate the proposed method and its individual components on challenging real-world datasets. For a moderate overlap between the laser scans, the method provides a superior registration accuracy compared to state-of-the-art methods including Generalized ICP, 3D Normal-Distribution Transform, Fast Point-Feature Histograms, and 4-Points Congruent Sets. Compared to the surface normals, the points as the underlying features yield higher performance in both keypoint detection and establishing local reference frames. Moreover, sign disambiguation of the basis vectors proves to be an important aspect in creating repeatable local reference frames. A novel method for sign disambiguation is proposed which yields highly repeatable reference frames.
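
    One generic way to build a repeatable local reference frame with sign disambiguation is sketched below (eigenvectors of the neighbourhood covariance, with each axis flipped toward the majority of neighbours); this is an illustration of the general idea under stated assumptions, not a reproduction of the paper's detector, features, or disambiguation rule.

    ```python
    import numpy as np

    def local_reference_frame(points, keypoint):
        """Build a local reference frame (LRF) at `keypoint` from its neighbourhood.

        Eigenvectors of the neighbourhood covariance define the axes; each in-plane
        axis sign is disambiguated so that it points towards the majority of the
        neighbours, and the third axis is recomputed to keep the frame right-handed.
        """
        centred = points - keypoint
        cov = centred.T @ centred / len(points)
        eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
        axes = eigvecs[:, ::-1].T                   # rows: largest-variance axis first

        for i in range(2):
            if np.sum(centred @ axes[i] >= 0) < len(points) / 2:
                axes[i] = -axes[i]                  # flip so the sign is repeatable
        axes[2] = np.cross(axes[0], axes[1])        # right-handed frame
        return axes

    # Toy neighbourhood around a keypoint at the origin.
    rng = np.random.default_rng(0)
    neighbours = rng.normal(size=(100, 3)) * np.array([3.0, 1.5, 0.2])
    print(np.round(local_reference_frame(neighbours, np.zeros(3)), 3))
    ```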

  7. Circulating intact and cleaved forms of the urokinase-type plasminogen activator receptor: biological variation, reference intervals and clinical useful cut-points.

    PubMed

    Thurison, Tine; Christensen, Ib J; Lund, Ida K; Nielsen, Hans J; Høyer-Hansen, Gunilla

    2015-01-15

    High levels of circulating forms of the urokinase-type plasminogen activator receptor (uPAR) are significantly associated with poor prognosis in cancer patients. Our aim was to determine biological variations and reference intervals of the uPAR forms in blood, and in addition, to test the clinical relevance of using these as cut-points in colorectal cancer (CRC) prognosis. uPAR forms were measured in citrated and EDTA plasma samples using time-resolved fluorescence immunoassays. Diurnal, intra- and inter-individual variations were assessed in plasma samples from cohorts of healthy individuals. Reference intervals were determined in plasma from healthy individuals randomly selected from a Danish multi-center cross-sectional study. A cohort of CRC patients was selected from the same cross-sectional study. The reference intervals showed a slight increase with age and women had ~20% higher levels. The intra- and inter-individual variations were ~10% and ~20-30%, respectively, and the measured levels of the uPAR forms were within the determined 95% reference intervals. No diurnal variation was found. Applying the normal upper limit of the reference intervals as a cut-point for dichotomizing CRC patients revealed significantly decreased overall survival of patients with levels above this cut-point for any uPAR form. The reference intervals for the different uPAR forms are valid and the upper normal limits are clinically relevant cut-points for CRC prognosis. Copyright © 2014 Elsevier B.V. All rights reserved.
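
    The basic arithmetic of a 95% reference interval and an upper-limit cut-point can be sketched as follows; the distributions are simulated placeholders rather than the study's uPAR measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    healthy = rng.lognormal(mean=1.0, sigma=0.3, size=500)    # hypothetical healthy uPAR levels

    lower, upper = np.percentile(healthy, [2.5, 97.5])         # central 95% reference interval
    print(f"reference interval: {lower:.2f} - {upper:.2f}")

    patients = rng.lognormal(mean=1.2, sigma=0.4, size=200)    # hypothetical CRC patient levels
    above_cut = patients > upper                               # dichotomise at the upper normal limit
    print(f"{above_cut.mean():.1%} of patients above the cut-point")
    ```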

  8. Automated and continual determination of radio telescope reference points with sub-mm accuracy: results from a campaign at the Onsala Space Observatory

    NASA Astrophysics Data System (ADS)

    Lösler, Michael; Haas, Rüdiger; Eschelbach, Cornelia

    2013-08-01

    The Global Geodetic Observing System (GGOS) requires sub-mm accuracy, automated and continual determinations of the so-called local tie vectors at co-location stations. Co-location stations host instrumentation for several space geodetic techniques and the local tie surveys involve the relative geometry of the reference points of these instruments. Thus, these reference points need to be determined in a common coordinate system, which is a particular challenge for rotating equipment like radio telescopes for geodetic Very Long Baseline Interferometry. In this work we describe a concept to achieve automated and continual determinations of radio telescope reference points with sub-mm accuracy. We developed a monitoring system, including Java-based sensor communication for automated surveys, network adjustment and further data analysis. This monitoring system was tested during a monitoring campaign performed at the Onsala Space Observatory in the summer of 2012. The results obtained in this campaign show that it is possible to perform automated determination of a radio telescope reference point during normal operations of the telescope. Accuracies on the sub-mm level can be achieved, and continual determinations can be realized by repeated determinations and recursive estimation methods.

  9. Debye screening in single-molecule carbon nanotube field-effect sensors.

    PubMed

    Sorgenfrei, Sebastian; Chiu, Chien-Yang; Johnston, Matthew; Nuckolls, Colin; Shepard, Kenneth L

    2011-09-14

    Point-functionalized carbon nanotube field-effect transistors can serve as highly sensitive detectors for biomolecules. With a probe molecule covalently bound to a defect in the nanotube sidewall, two-level random telegraph noise (RTN) in the conductance of the device is observed as a result of a charged target biomolecule binding and unbinding at the defect site. Charge in proximity to the defect modulates the potential (and transmission) of the conductance-limiting barrier created by the defect. In this Letter, we study how these single-molecule electronic sensors are affected by ionic screening. Both charge in proximity to the defect site and buffer concentration are found to affect RTN amplitude in a manner that follows from simple Debye length considerations. RTN amplitude is also dependent on the potential of the electrolyte gate as applied to the reference electrode; at high enough gate potentials, the target DNA is completely repelled and RTN is suppressed.

  10. Debye screening in single-molecule carbon nanotube field-effect transistors

    PubMed Central

    Sorgenfrei, Sebastian; Chiu, Chien-yang; Johnston, Matthew; Nuckolls, Colin; Shepard, Kenneth L.

    2013-01-01

    Point-functionalized carbon nanotube field-effect transistors can serve as highly sensitive detectors for biomolecules. With a probe molecule covalently bound to a defect in the nanotube sidewall, two-level random telegraph noise (RTN) in the conductance of the device is observed as a result of a charged target biomolecule binding and unbinding at the defect site. Charge in proximity to the defect modulates the potential (and transmission) of the conductance-limiting barrier created by the defect. In this Letter, we study how these single-molecule electronic sensors are affected by ionic screening. Both charge in proximity to the defect site and buffer concentration are found to affect RTN amplitude in a manner that follows from simple Debye length considerations. RTN amplitude is also dependent on the potential of the electrolyte gate as applied to the reference electrode; at high enough repulsive potentials, the target DNA is completely repelled and RTN is suppressed. PMID:21806018

  11. Referent control and motor equivalence of reaching from standing

    PubMed Central

    Tomita, Yosuke; Feldman, Anatol G.

    2016-01-01

    Motor actions may result from central changes in the referent body configuration, defined as the body posture at which muscles begin to be activated or deactivated. The actual body configuration deviates from the referent configuration, particularly because of body inertia and environmental forces. Within these constraints, the system tends to minimize the difference between these configurations. For pointing movement, this strategy can be expressed as the tendency to minimize the difference between the referent trajectory (RT) and actual trajectory (QT) of the effector (hand). This process may underlie motor equivalent behavior that maintains the pointing trajectory regardless of the number of body segments involved. We tested the hypothesis that the minimization process is used to produce pointing in standing subjects. With eyes closed, 10 subjects reached from a standing position to a remembered target located beyond arm length. In randomly chosen trials, hip flexion was unexpectedly prevented, forcing subjects to take a step during pointing to prevent falling. The task was repeated when subjects were instructed to intentionally take a step during pointing. In most cases, reaching accuracy and trajectory curvature were preserved due to adaptive condition-specific changes in interjoint coordination. Results suggest that referent control and the minimization process associated with it may underlie motor equivalence in pointing. NEW & NOTEWORTHY Motor actions may result from minimization of the deflection of the actual body configuration from the centrally specified referent body configuration, in the limits of neuromuscular and environmental constraints. The minimization process may maintain reaching trajectory and accuracy regardless of the number of body segments involved (motor equivalence), as confirmed in this study of reaching from standing in young healthy individuals. Results suggest that the referent control process may underlie motor equivalence in reaching. PMID:27784802

  12. Identification of Suitable Reference Genes for Gene Expression Normalization in qRT-PCR Analysis in Watermelon

    PubMed Central

    Gao, Lingyun; Zhao, Shuang; Jiang, Wei; Huang, Yuan; Bie, Zhilong

    2014-01-01

    Watermelon is one of the major Cucurbitaceae crops and the recent availability of its genome sequence greatly facilitates fundamental research on it. Quantitative real-time reverse transcriptase PCR (qRT–PCR) is the preferred method for gene expression analyses, and using validated reference genes for normalization is crucial to ensure the accuracy of this method. However, a systematic validation of reference genes has not been conducted on watermelon. In this study, transcripts of 15 candidate reference genes were quantified in watermelon using qRT–PCR, and the stability of these genes was compared using geNorm and NormFinder. geNorm identified ClTUA and ClACT, ClEF1α and ClACT, and ClCAC and ClTUA as the best pairs of reference genes in watermelon organs and tissues under normal growth conditions, abiotic stress, and biotic stress, respectively. NormFinder identified ClYLS8, ClUBCP, and ClCAC as the best single reference genes under the above experimental conditions, respectively. ClYLS8 and ClPP2A were identified as the best reference genes across all samples. Two to nine reference genes were required for more reliable normalization depending on the experimental conditions. The widely used watermelon reference gene 18SrRNA was less stable than the other reference genes under the experimental conditions. Catalase family genes were identified in the watermelon genome and used to validate the reliability of the identified reference genes. ClCAT1 and ClCAT2 were induced and upregulated in the first 24 h, whereas ClCAT3 was downregulated in the leaves under low temperature stress. However, the expression levels of these genes were significantly overestimated and misinterpreted when 18SrRNA was used as a reference gene. These results provide a good starting point for reference gene selection in qRT–PCR analyses involving watermelon. PMID:24587403
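
    The geNorm-style stability measure used in this kind of screening can be sketched as below (average pairwise variation of log-ratios across samples); this is a generic illustration with toy data, not the authors' implementation or the watermelon expression values.

    ```python
    import numpy as np

    def genorm_stability(expr):
        """geNorm-style stability measure M for candidate reference genes.

        expr : (n_samples, n_genes) array of relative expression quantities.
        M_j is the mean, over all other genes k, of the standard deviation of
        log2(expr_j / expr_k) across samples; lower M means a more stable gene.
        """
        log_expr = np.log2(expr)
        n_genes = expr.shape[1]
        m_values = np.empty(n_genes)
        for j in range(n_genes):
            pairwise_sd = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                           for k in range(n_genes) if k != j]
            m_values[j] = np.mean(pairwise_sd)
        return m_values

    # Toy data: 8 samples x 4 candidate genes with increasing noise.
    rng = np.random.default_rng(2)
    expression = rng.lognormal(mean=0.0, sigma=[0.1, 0.15, 0.3, 0.6], size=(8, 4))
    print(np.round(genorm_stability(expression), 3))   # smaller = more stable
    ```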

  13. Identification of suitable reference genes for gene expression normalization in qRT-PCR analysis in watermelon.

    PubMed

    Kong, Qiusheng; Yuan, Jingxian; Gao, Lingyun; Zhao, Shuang; Jiang, Wei; Huang, Yuan; Bie, Zhilong

    2014-01-01

    Watermelon is one of the major Cucurbitaceae crops and the recent availability of its genome sequence greatly facilitates fundamental research on it. Quantitative real-time reverse transcriptase PCR (qRT-PCR) is the preferred method for gene expression analyses, and using validated reference genes for normalization is crucial to ensure the accuracy of this method. However, a systematic validation of reference genes has not been conducted on watermelon. In this study, transcripts of 15 candidate reference genes were quantified in watermelon using qRT-PCR, and the stability of these genes was compared using geNorm and NormFinder. geNorm identified ClTUA and ClACT, ClEF1α and ClACT, and ClCAC and ClTUA as the best pairs of reference genes in watermelon organs and tissues under normal growth conditions, abiotic stress, and biotic stress, respectively. NormFinder identified ClYLS8, ClUBCP, and ClCAC as the best single reference genes under the above experimental conditions, respectively. ClYLS8 and ClPP2A were identified as the best reference genes across all samples. Two to nine reference genes were required for more reliable normalization depending on the experimental conditions. The widely used watermelon reference gene 18SrRNA was less stable than the other reference genes under the experimental conditions. Catalase family genes were identified in the watermelon genome and used to validate the reliability of the identified reference genes. ClCAT1 and ClCAT2 were induced and upregulated in the first 24 h, whereas ClCAT3 was downregulated in the leaves under low temperature stress. However, the expression levels of these genes were significantly overestimated and misinterpreted when 18SrRNA was used as a reference gene. These results provide a good starting point for reference gene selection in qRT-PCR analyses involving watermelon.

  14. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    ERIC Educational Resources Information Center

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  15. Single-baseline RTK GNSS Positioning for Hydrographic Surveying

    NASA Astrophysics Data System (ADS)

    Metin Alkan, Reha; Murat Ozulu, I.; Ilçi, Veli; Kahveci, Muzaffer

    2015-04-01

    Positioning with GNSS techniques can be carried out in two ways: absolute and relative. It has been possible to reach absolute point positioning accuracies of a few metres in real time since Selective Availability (SA) was permanently disabled in May 2000. Today, the accuracies obtainable from absolute point positioning using code observations are not sufficient for most surveying applications. Thus, to meet higher accuracy requirements, differential methods using single- or dual-frequency geodetic-grade GNSS receivers that measure carrier phase have to be used. However, this method requires time-consuming field and office work, and if the measurement is not carried out with the conventional RTK method, the user needs GNSS data processing software to estimate the coordinates. If RTK is used, at least two GNSS receivers are required, one as a reference and the other as a rover. Moreover, the distance between the receivers must not exceed 15-20 km in order to rapidly and reliably resolve the carrier phase ambiguities. On the other hand, based on the innovations and improvements in satellite geodesy and GNSS modernization over the last decade, many new positioning methods and approaches have been developed. One of them is Network RTK (commonly known as CORS) and another is Single-baseline RTK. These methods are widely used for many surveying applications in many countries. The user can obtain his/her position with a few cm of accuracy in real time with only a single GNSS receiver that has Network RTK (CORS) capability. Compared with the conventional differential and RTK methods, this technique has several significant advantages, as it is easy to use and produces accurate, cost-effective and rapid solutions. In Turkey, the establishment of a multi-base RTK network was completed and opened for civilian use in 2009. This network, called CORS-TR, consists of 146 reference stations with interstation distances of about 80-100 km. It is thus possible for a user to determine his/her position with a few cm of accuracy in real time anywhere in Turkey. Besides, some province municipalities in Turkey have established their own local CORS networks, such as Istanbul (with 9 reference stations) and Ankara (with 10 reference stations). There is also a local RTK base station, operated by Çorum Municipality, which disseminates real-time position corrections for surveyors in Çorum province. This is the first step towards establishing a complete local CORS network in Çorum (the municipality plans to increase this number and establish a CORS network within a few years). At the time of this study, the national CORS-TR stations in Çorum province were unfortunately under maintenance, so corrections from the national CORS network could not be received. Instead, the corrections from Çorum province's local RTK reference station were used during the study. The main purpose of this study is to investigate the accuracy performance of the Single-baseline RTK GNSS system operated by Çorum Municipality in a marine environment. For this purpose, a kinematic test measurement was carried out at Obruk Dam, Çorum, Turkey. During the test measurement, a small vessel equipped with a dual-frequency geodetic-grade GNSS receiver, a Spectra Precision ProMark 500, was used. The coordinates of the vessel were obtained from the Single-baseline RTK system in the ITRF datum in real time with fixed solutions.
    At the same time, the raw kinematic GNSS data were also recorded by the receiver in order to estimate reference coordinates of the vessel with the post-processed differential kinematic technique. In this way, GPS data were collected under the same conditions, which allowed a precise assessment of the system under test. The measurements were carried out along the survey profiles for about 1 hour. During the kinematic test, another receiver was set up on a geodetic point at the shore and data were collected in static mode to calculate the coordinates of the vessel for each epoch. As mentioned above, the vessel coordinates were estimated very accurately by using the data collected on shore and on the vessel with the differential GNSS technique. The Single-baseline RTK-derived coordinates were compared with those obtained from post-processing of the GNSS data for each epoch. The computed differences show that the coordinates agree with the relative solutions to within 7 cm in position. Some marine applications, such as precise hydrographic surveying; monitoring silt accretion and erosion in rivers, lakes, estuaries, coastal waters and harbor areas; marine geodynamics; automatic docking; dredging; construction work; and attitude control of ships, buoys and floating platforms, require accuracies better than 0.1 m in position and height. The results obtained from this application show that Single-baseline RTK and/or CORS systems can reliably be used for the above-mentioned marine applications and others, especially for positioning, as a strong alternative to the conventional differential methods.
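
    The epoch-by-epoch comparison described above amounts to differencing two coordinate time series; the sketch below uses a simulated vessel track and noise levels chosen only for illustration, not the Obruk Dam data.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_epochs = 600
    # Hypothetical horizontal coordinates (metres, local grid): post-processed reference
    # track plus a noisier real-time single-baseline RTK solution of the same track.
    reference = np.cumsum(rng.normal(scale=0.02, size=(n_epochs, 2)), axis=0)
    rtk = reference + rng.normal(scale=0.03, size=(n_epochs, 2))

    horizontal_diff = np.linalg.norm(rtk - reference, axis=1)   # per-epoch 2D difference
    print(f"mean = {horizontal_diff.mean():.3f} m, "
          f"95th percentile = {np.percentile(horizontal_diff, 95):.3f} m, "
          f"max = {horizontal_diff.max():.3f} m")
    ```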

  16. Metonymy and reference-point errors in novice programming

    NASA Astrophysics Data System (ADS)

    Miller, Craig S.

    2014-07-01

    When learning to program, students often mistakenly refer to an element that is structurally related to the element that they intend to reference. For example, they may indicate the attribute of an object when their intention is to reference the whole object. This paper examines these reference-point errors through the context of metonymy. Metonymy is a rhetorical device where the speaker states a referent that is structurally related to the intended referent. For example, the following sentence states an office bureau but actually refers to a person working at the bureau: The tourist asked the travel bureau for directions to the museum. Drawing upon previous studies, I discuss how student reference errors may be consistent with the use of metonymy. In particular, I hypothesize that students are more likely to reference an identifying element even when a structurally related element is intended. I then present two experiments, which produce results consistent with this analysis. In both experiments, students are more likely to produce reference-point errors that involve identifying attributes than descriptive attributes. Given these results, I explore the possibility that students are relying on habits of communication rather than the mechanistic principles needed for successful programming. Finally I discuss teaching interventions using live examples and how metonymy may be presented to non-computing students as pedagogy for computational thinking.
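
    A hypothetical code fragment may make the reference-point error concrete: the intended referent is a whole object, but the student names one of its identifying attributes instead (or passes the attribute where the object is required). The class and function names below are invented for illustration only.

    ```python
    class Student:
        def __init__(self, name, gpa):
            self.name = name   # identifying attribute
            self.gpa = gpa     # descriptive attribute

    def enroll(student):
        """Expects a whole Student object."""
        print(f"Enrolled {student.name} (GPA {student.gpa})")

    alice = Student("Alice", 3.7)

    enroll(alice)          # correct: pass the object itself
    # enroll(alice.name)   # metonymy-like error: the identifying attribute stands in
                           # for the whole object and fails with AttributeError at runtime
    ```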

  17. SU-C-BRA-04: Automated Segmentation of Head-And-Neck CT Images for Radiotherapy Treatment Planning Via Multi-Atlas Machine Learning (MAML)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, X; Gao, H; Sharp, G

    Purpose: Accurate image segmentation is a crucial step during image guided radiation therapy. This work proposes a multi-atlas machine learning (MAML) algorithm for automated segmentation of head-and-neck CT images. Methods: As the first step, the algorithm utilizes normalized mutual information as the similarity metric, affine registration combined with multiresolution B-Spline registration, and then fuses the results together using the label fusion strategy via Plastimatch. As the second step, the following feature selection strategy is proposed to extract five feature components from reference or atlas images: intensity (I), distance map (D), box (B), center of gravity (C) and stable point (S). The box feature B is novel. It describes the relative position from each point to the minimum inscribed rectangle of the ROI. The center-of-gravity feature C is the 3D Euclidean distance from a sample point to the ROI center of gravity, and S is the distance of the sample point to the landmarks. We then adopt random forest (RF) in Scikit-learn, a Python module integrating a wide range of state-of-the-art machine learning algorithms, as the classifier. Different feature and atlas strategies are used for different ROIs for improved performance, such as a multi-atlas strategy with reference box for the brainstem, and a single-atlas strategy with reference landmark for the optic chiasm. Results: The algorithm was validated on a set of 33 CT images with manual contours using a leave-one-out cross-validation strategy. Dice similarity coefficients between manual contours and automated contours were calculated: the proposed MAML method improved from 0.79 to 0.83 for the brainstem and from 0.11 to 0.52 for the optic chiasm with respect to the multi-atlas segmentation method (MA). Conclusion: A MAML method has been proposed for automated segmentation of head-and-neck CT images with improved performance. It provides a comparable result for the brainstem and an improved result for the optic chiasm compared with MA. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
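
    A simplified sketch of the learning stage under stated assumptions: each voxel is described by the five feature components named above and labelled by a scikit-learn random forest. The feature values, labels, and train/test split below are toy data, not the head-and-neck CT features.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(3)
    n_voxels = 5000
    features = rng.normal(size=(n_voxels, 5))    # columns stand in for I, D, B, C, S
    labels = (features[:, 1] + 0.5 * features[:, 3] < 0).astype(int)   # toy ground truth

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features[:4000], labels[:4000])       # train on atlas-derived voxels
    predicted = clf.predict(features[4000:])      # label the remaining voxels

    # Dice similarity coefficient between predicted and reference labels.
    ref = labels[4000:]
    dice = 2 * np.sum((predicted == 1) & (ref == 1)) / (np.sum(predicted == 1) + np.sum(ref == 1))
    print(f"Dice = {dice:.3f}")
    ```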

  18. Spiral-syllabus course in wave phenomena to introduce majors and nonmajors to physics

    NASA Astrophysics Data System (ADS)

    Touger, Jerold S.

    1981-09-01

    A single course to introduce physics to both nonscience and physics majors has been developed, dealing with light, sound, and signal transmission and reception, and emphasizing wave aspects of these phenomena. Themes such as the observational basis of physics, the progression from qualitative observation to measurement, physical models, mathematical modeling, and the utility of models in developing technology are stressed. Modes of presentation, consistent with the notion of a spiral syllabus, are explained with reference to the cognitive and educational theories of Bruner and Piaget. Reasons are discussed for choosing this subject matter in preference to Newtonian mechanics as a starting point for physics majors.

  19. A novel multi-model neuro-fuzzy-based MPPT for three-phase grid-connected photovoltaic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaouachi, Aymen; Kamel, Rashad M.; Nagasaka, Ken

    This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three multi-layered feed-forward Artificial Neural Networks (ANN). Inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate ANN for either training or estimation, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single neural network-based approach, is its distinct generalization ability with regard to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural network-based multi-model machine learning approach that defines a set of local models emulating the complex and nonlinear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations showed that the proposed MPPT method achieved the highest efficiency compared to a conventional single neural network and the Perturb and Observe (P&O) algorithm. (author)

  20. Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective

    ERIC Educational Resources Information Center

    Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith

    2010-01-01

    The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…

  1. [Study on the experimental application of floating-reference method to noninvasive blood glucose sensing].

    PubMed

    Yu, Hui; Qi, Dan; Li, Heng-da; Xu, Ke-xin; Yuan, Wei-jie

    2012-03-01

    Weak signal, low instrument signal-to-noise ratio, continuous variation of the human physiological environment and interference from other components in blood make it difficult to extract blood glucose information from near-infrared spectra in noninvasive blood glucose measurement. The floating-reference method, which analyses the effect of glucose concentration variation on the absorption coefficient and scattering coefficient, acquires spectra at the reference point, where the light intensity variations from absorption and scattering cancel each other, and at the measurement point, where they are largest. By using the spectrum from the reference point as a reference, the floating-reference method can reduce the interference from variations of the physiological environment and the experimental circumstances. In the present paper, the effectiveness of the floating-reference method in improving prediction precision and stability was assessed through application experiments. A comparison was made between models whose data were processed with and without the floating-reference method. The results showed that the root mean square error of prediction (RMSEP) decreased by up to 34.7%. The floating-reference method could reduce the influences of changes in the samples' state, instrument noise and drift, and improve the models' prediction precision and stability effectively.

  2. Evolution of motion uncertainty in rectal cancer: implications for adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Kleijnen, Jean-Paul J. E.; van Asselen, Bram; Burbach, Johannes P. M.; Intven, Martijn; Philippens, Marielle E. P.; Reerink, Onne; Lagendijk, Jan J. W.; Raaymakers, Bas W.

    2016-01-01

    Reduction of motion uncertainty by applying adaptive radiotherapy strategies depends largely on the temporal behavior of this motion. To fully optimize adaptive strategies, insight into target motion is needed. The purpose of this study was to analyze stability and evolution in time of motion uncertainty of both the gross tumor volume (GTV) and clinical target volume (CTV) for patients with rectal cancer. We scanned 16 patients daily during one week, on a 1.5 T MRI scanner in treatment position, prior to each radiotherapy fraction. Single slice sagittal cine MRIs were made at the beginning, middle, and end of each scan session, for one minute at 2 Hz temporal resolution. GTV and CTV motion were determined by registering a delineated reference frame to time-points later in time. The 95th percentile of observed motion (dist95%) was taken as a measure of motion. The stability of motion in time was evaluated within each cine-MRI separately. The evolution of motion was investigated between the reference frame and the cine-MRIs of a single scan session and between the reference frame and the cine-MRIs of several days later in the course of treatment. This observed motion was then converted into a PTV-margin estimate. Within a one minute cine-MRI scan, motion was found to be stable and small. Independent of the time-point within the scan session, the average dist95% remains below 3.6 mm and 2.3 mm for CTV and GTV, respectively 90% of the time. We found similar motion over time intervals from 18 min to 4 days. When reducing the time interval from 18 min to 1 min, a large reduction in motion uncertainty is observed. A reduction in motion uncertainty, and thus the PTV-margin estimate, of 71% and 75% for CTV and tumor was observed, respectively. Time intervals of 15 and 30 s yield no further reduction in motion uncertainty compared to a 1 min time interval.

  3. Noncontact on-machine measurement system based on capacitive displacement sensors for single-point diamond turning

    NASA Astrophysics Data System (ADS)

    Li, Xingchang; Zhang, Zhiyu; Hu, Haifei; Li, Yingjie; Xiong, Ling; Zhang, Xuejun; Yan, Jiwang

    2018-04-01

    On-machine measurements can improve the form accuracy of optical surfaces in single-point diamond turning applications; however, commercially available linear variable differential transformer sensors are inaccurate and can potentially scratch the surface. We present an on-machine measurement system based on capacitive displacement sensors for high-precision optical surfaces. In the proposed system, a position-trigger method of measurement was developed to ensure strict correspondence between the measurement points and the measurement data with no intervening time-delay. In addition, a double-sensor measurement was proposed to reduce the electric signal noise during spindle rotation. Using the proposed system, the repeatability of 80-nm peak-to-valley (PV) and 8-nm root-mean-square (RMS) was achieved through analyzing four successive measurement results. The accuracy of 109-nm PV and 14-nm RMS was obtained by comparing with the interferometer measurement result. An aluminum spherical mirror with a diameter of 300 mm was fabricated, and the resulting measured form error after one compensation cut was decreased to 254 nm in PV and 52 nm in RMS. These results confirm that the measurements of the surface form errors were successfully used to modify the cutting tool path during the compensation cut, thereby ensuring that the diamond turning process was more deterministic. In addition, the results show that the noise level was significantly reduced with the reference sensor even under a high rotational speed.

  4. Effect of Receiver Choosing on Point Positions Determination in Network RTK

    NASA Astrophysics Data System (ADS)

    Bulbul, Sercan; Inal, Cevat

    2016-04-01

    Nowadays, developments in GNSS techniques allow point positions to be determined in real time. Initially, point positioning was determined by RTK (Real Time Kinematic) based on a single reference station. However, to avoid systematic errors in this method, the distance between the reference point and the rover receiver must be shorter than 10 km. To overcome this restriction of the RTK method, the idea of using more than one reference point was suggested, and CORS (Continuously Operating Reference Stations) was put into practice. Today, countries such as the USA, Germany and Japan have set up CORS networks. The CORS-TR network, which has 146 reference points, was established in 2009 in Turkey. In the CORS-TR network, the active CORS approach was adopted. In Turkey, the CORS-TR reference stations covering the whole country are interconnected, and the positions of these stations and atmospheric corrections are continuously calculated. In this study, at a selected point, RTK measurements based on CORS-TR were made with different receivers (JAVAD TRIUMPH-1, TOPCON Hiper V, MAGELLAN PRoMark 500, PENTAX SMT888-3G, SATLAB SL-600) and with different correction techniques (VRS, FKP, MAC). In the measurements, the epoch interval was taken as 5 seconds and the measurement time as 1 hour. For each receiver and each correction technique, the means and the differences between maximum and minimum values of the measured coordinates, the root mean square errors along the coordinate axes, and the 2D and 3D positioning precisions were calculated; the results were evaluated by statistical methods and the resulting graphics were interpreted. After evaluation of the measurements and calculations, for each receiver and each correction technique, the coordinate differences between maximum and minimum values were less than 8 cm, the root mean square errors along the coordinate axes were less than ±1.5 cm, and the 2D and 3D point positioning precisions were less than ±1.5 cm. At the measurement point, it was concluded that the VRS correction technique generally performs better than the other correction techniques.

  5. Effect of endogenous reference genes on digital PCR assessment of genetically engineered canola events.

    PubMed

    Demeke, Tigst; Eng, Monika

    2018-05-01

    Droplet digital PCR (ddPCR) has been used for absolute quantification of genetically engineered (GE) events. Absolute quantification of GE events by duplex ddPCR requires the use of appropriate primers and probes for the target and reference gene sequences in order to accurately determine the amount of GE material. Single-copy reference genes are generally preferred for absolute quantification of GE events by ddPCR. However, no study has compared reference genes for absolute quantification of GE canola events by ddPCR. The suitability of four endogenous reference sequences (HMG-I/Y, FatA(A), CruA and Ccf) for absolute quantification of GE canola events by ddPCR was investigated. The effect of DNA extraction methods and DNA quality on the assessment of reference gene copy numbers was also investigated. ddPCR results were affected by the use of single- vs. two-copy reference genes. The single-copy FatA(A) reference gene was found to be stable and suitable for absolute quantification of GE canola events by ddPCR. For the copy numbers measured, the HMG-I/Y reference gene was less consistent than the FatA(A) reference gene. The expected ddPCR values were underestimated when CruA and Ccf (two-copy endogenous cruciferin sequences) were used, because of their higher copy number. It is therefore important to make an adjustment if two-copy reference genes are used for ddPCR in order to obtain accurate results. In contrast, real-time quantitative PCR results were not affected by the use of single- vs. two-copy reference genes.
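
    The copy-number adjustment discussed above reduces to simple arithmetic, sketched below with hypothetical droplet concentrations.

    ```python
    def ge_percentage(target_copies_per_ul, ref_copies_per_ul, ref_gene_copy_number=1):
        """Estimate GE content (%) from duplex ddPCR concentrations.

        When the endogenous reference sequence is present in more than one copy per
        haploid genome (e.g. a two-copy cruciferin target), its measured concentration
        must be divided by that copy number before taking the ratio, otherwise the GE
        percentage is underestimated.  Values here are hypothetical.
        """
        genome_equivalents = ref_copies_per_ul / ref_gene_copy_number
        return 100.0 * target_copies_per_ul / genome_equivalents

    print(ge_percentage(50.0, 1000.0, ref_gene_copy_number=1))  # single-copy reference -> 5.0 %
    print(ge_percentage(50.0, 2000.0, ref_gene_copy_number=2))  # two-copy reference, adjusted -> 5.0 %
    print(ge_percentage(50.0, 2000.0, ref_gene_copy_number=1))  # unadjusted -> 2.5 % (biased low)
    ```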

  6. Development of a reference material of a single DNA molecule for the quality control of PCR testing.

    PubMed

    Mano, Junichi; Hatano, Shuko; Futo, Satoshi; Yoshii, Junji; Nakae, Hiroki; Naito, Shigehiro; Takabatake, Reona; Kitta, Kazumi

    2014-09-02

    We developed a reference material consisting of a single DNA molecule with a specific nucleotide sequence. A double-stranded linear DNA with PCR target sequences at both ends was prepared as the reference DNA molecule, and we named the PCR targets on each side the confirmation sequence and the standard sequence. A highly diluted solution of the reference molecule was dispensed into the 96 wells of a plastic PCR plate so that the average number of molecules per well was below one. Subsequently, the presence or absence of the reference molecule in each well was checked by real-time PCR targeting the confirmation sequence. After an enzymatic treatment of the reaction mixture in the positive wells to digest the PCR products, the resulting solution was used as the reference material of a single DNA molecule carrying the standard sequence. PCR analyses revealed that the prepared samples included only one reference molecule with high probability. The single-molecule reference material developed in this study will be useful for the absolute evaluation of the detection limit of PCR-based testing methods, the quality control of PCR analyses, performance evaluations of PCR reagents and instruments, and the preparation of an accurate calibration curve for real-time PCR quantitation.
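
    Under the usual limiting-dilution assumption that molecule counts per well are Poisson-distributed (an assumption made here for illustration, not the authors' stated analysis), the probability that a PCR-positive well holds exactly one reference molecule can be computed as follows.

    ```python
    import math

    def p_single_given_positive(mean_molecules_per_well):
        """P(exactly one molecule | well is PCR-positive) for a Poisson occupancy model."""
        lam = mean_molecules_per_well
        return lam * math.exp(-lam) / (1.0 - math.exp(-lam))

    for lam in (1.0, 0.5, 0.1):
        print(f"lambda = {lam:.1f}:  P(exactly 1 | positive) = {p_single_given_positive(lam):.3f}")
    ```

    The sketch shows why the dilution is pushed well below one molecule per well on average: the lower the mean occupancy, the more likely a positive well is truly single-molecule.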

  7. A Comparison of Quantum and Molecular Mechanical Methods to Estimate Strain Energy in Druglike Fragments.

    PubMed

    Sellers, Benjamin D; James, Natalie C; Gobbi, Alberto

    2017-06-26

    Reducing internal strain energy in small molecules is critical for designing potent drugs. Quantum mechanical (QM) and molecular mechanical (MM) methods are often used to estimate these energies. In an effort to determine which methods offer an optimal balance in accuracy and performance, we have carried out torsion scan analyses on 62 fragments. We compared nine QM and four MM methods to reference energies calculated at a higher level of theory: CCSD(T)/CBS single point energies (coupled cluster with single, double, and perturbative triple excitations at the complete basis set limit) calculated on optimized geometries using MP2/6-311+G**. The results show that both the more recent MP2.X perturbation method as well as MP2/CBS perform quite well. In addition, combining a Hartree-Fock geometry optimization with a MP2/CBS single point energy calculation offers a fast and accurate compromise when dispersion is not a key energy component. Among MM methods, the OPLS3 force field accurately reproduces CCSD(T)/CBS torsion energies on more test cases than the MMFF94s or Amber12:EHT force fields, which struggle with aryl-amide and aryl-aryl torsions. Using experimental conformations from the Cambridge Structural Database, we highlight three example structures for which OPLS3 significantly overestimates the strain. The energies and conformations presented should enable scientists to estimate the expected error for the methods described and we hope will spur further research into QM and MM methods.

  8. Confidence intervals for single-case effect size measures based on randomization test inversion.

    PubMed

    Michiels, Bart; Heyvaert, Mieke; Meulders, Ann; Onghena, Patrick

    2017-02-01

    In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100 (1 - α) % two-sided CI to construct CIs for any effect size measure θ that contain all point null hypothesis θ values that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
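
    A brute-force sketch of the proposed inversion for the unstandardized mean difference in a completely randomized two-condition design is shown below in Python (the authors supply R code); the data, grid, and significance level are illustrative only.

    ```python
    import itertools
    import numpy as np

    def randomization_p_value(a, b, theta0):
        """Two-sided randomization-test p-value for H0: mean(A) - mean(B) = theta0.
        Under H0 the shifted observations are exchangeable across the two conditions."""
        shifted = np.concatenate([a - theta0, b])     # remove the hypothesised effect
        n_a = len(a)
        observed = shifted[:n_a].mean() - shifted[n_a:].mean()
        stats = []
        for idx in itertools.combinations(range(len(shifted)), n_a):
            mask = np.zeros(len(shifted), dtype=bool)
            mask[list(idx)] = True
            stats.append(shifted[mask].mean() - shifted[~mask].mean())
        stats = np.array(stats)
        return np.mean(np.abs(stats) >= abs(observed) - 1e-12)

    def rti_confidence_interval(a, b, alpha=0.05):
        """Randomization test inversion: keep every theta0 the test cannot reject."""
        span = a.mean() - b.mean()
        grid = np.linspace(span - 8, span + 8, 81)
        accepted = [t for t in grid if randomization_p_value(a, b, t) > alpha]
        return min(accepted), max(accepted)

    # Toy single-case data: 6 measurements per condition.
    a = np.array([9.0, 11.0, 10.0, 12.0, 11.5, 10.5])   # treatment A
    b = np.array([6.0, 7.5, 6.5, 7.0, 8.0, 6.0])        # treatment B
    print(rti_confidence_interval(a, b))
    ```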

  9. A single-frequency double-pulse Ho:YLF laser for CO2-lidar

    NASA Astrophysics Data System (ADS)

    Kucirek, P.; Meissner, A.; Eiselt, P.; Höfer, M.; Hoffmann, D.

    2016-03-01

    A single-frequency Q-switched Ho:YLF laser oscillator with a bow-tie ring resonator, specifically designed for high spectral stability, is reported. It is pumped with a dedicated Tm:YLF laser at 1.9 μm. The ramp-and-fire method with a DFB-diode laser as a reference is employed for generating single-frequency emission at 2051 nm. The laser is tested with different operating modes, including cw-pumping at different pulse repetition frequencies and gain-switched pumping. The standard deviation of the emission wavelength of the laser pulses is measured with the heterodyne technique at the different operating modes. Its dependence on the single-pass gain in the crystal and on the cavity finesse is investigated. At specific operating points the spectral stability of the laser pulses is 1.5 MHz (rms over 10 s). Under gain-switched pumping with 20% duty cycle and 2 W of average pump power, stable single-frequency pulse pairs with a temporal separation of 580 μs are produced at a repetition rate of 50 Hz. The measured pulse energy is 2 mJ (<2 % rms error on the pulse energy over 10 s) and the measured pulse duration is approx. 20 ns for each of the two pulses in the burst.

  10. Structured perceptual input imposes an egocentric frame of reference-pointing, imagery, and spatial self-consciousness.

    PubMed

    Marcel, Anthony; Dobel, Christian

    2005-01-01

    Perceptual input imposes and maintains an egocentric frame of reference, which enables orientation. When blindfolded, people tended to mistake the assumed intrinsic axes of symmetry of their immediate environment (a room) for their own egocentric relation to features of the room. When asked to point to the door and window, known to be at mid-points of facing (or adjacent) walls, they pointed with their arms at 180 degrees (or 90 degrees) angles, irrespective of where they thought they were in the room. People did the same when requested to imagine the situation. They justified their responses (inappropriately) by logical necessity or a structural description of the room rather than (appropriately) by the relative location of themselves and the reference points. In eight experiments, we explored the effects on this, in perception and imagery, of: perceptual input (without perceptibility of the target reference points); imagining oneself versus another person; aids to explicit spatial self-consciousness; order of questions about self-location; and the relation of targets to the axes of symmetry of the room. The results indicate that, if one is deprived of structured perceptual input, as well as losing one's bearings, (a) one is likely to lose one's egocentric frame of reference itself, and (b) instead of pointing to reference points, one demonstrates their structural relation by adopting the intrinsic axes of the environment as one's own. This is prevented by providing noninformative perceptual input or by inducing subjects to imagine themselves from the outside, which makes explicit the fact of their being located relative to the world. The role of perceptual contact with a structured world is discussed in relation to sensory deprivation and imagery, appeal is made to Gibson's theory of joint egoreception and exteroception, and the data are related to recent theories of spatial memory and navigation.

  11. Comparative MR study of hepatic fat quantification using single-voxel proton spectroscopy, two-point dixon and three-point IDEAL.

    PubMed

    Kim, Hyeonjin; Taksali, Sara E; Dufour, Sylvie; Befroy, Douglas; Goodman, T Robin; Petersen, Kitt Falk; Shulman, Gerald I; Caprio, Sonia; Constable, R Todd

    2008-03-01

    Hepatic fat fraction (HFF) was measured in 28 lean/obese humans by single-voxel proton spectroscopy (MRS), a two-point Dixon (2PD), and a three-point iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL) method (3PI). For the lean, obese, and total subject groups, the range of HFF measured by MRS was 0.3-3.5% (1.1 +/- 1.4%), 0.3-41.5% (11.7 +/- 12.1), and 0.3-41.5% (10.1 +/- 11.6%), respectively. For the same groups, the HFF measured by 2PD was -6.3-2.2% (-2.0 +/- 3.7%), -2.4-42.9% (12.9 +/- 13.8%), and -6.3-42.9% (10.5 +/- 13.7%), respectively, and for 3PI they were 7.9-12.8% (10.1 +/- 2.0%), 11.1-49.3% (22.0 +/- 12.2%), and 7.9-49.3% (20.0 +/- 11.8%), respectively. The HFF measured by MRS was highly correlated with those measured by 2PD (r = 0.954, P < 0.001) and 3PI (r = 0.973, P < 0.001). With the MRS data as a reference, the percentages of correct differentiation between normal and fatty liver with the MRI methods ranged from 68-93% for 2PD and 64-89% for 3PI. Our study demonstrates that the apparent HFF measured by the MRI methods can significantly vary depending on the choice of water-fat separation methods and sequences. Such variability may limit the clinical application of the MRI methods, particularly when a diagnosis of early fatty liver needs to be performed. Therefore, protocol-specific establishment of cutoffs for liver fat content may be necessary. (c) 2008 Wiley-Liss, Inc.
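
    The two-point Dixon arithmetic underlying one of the compared methods can be sketched as follows; this is a textbook illustration that ignores the T1/T2* and field-inhomogeneity corrections relevant in practice, and the signal values are hypothetical.

    ```python
    def dixon_fat_fraction(in_phase, out_phase):
        """Two-point Dixon estimate of the fat signal fraction.

        in_phase = W + F and out_phase = W - F for water (W) and fat (F) signals, so
        W = (IP + OP) / 2, F = (IP - OP) / 2, and the fat fraction is F / (W + F).
        """
        water = (in_phase + out_phase) / 2.0
        fat = (in_phase - out_phase) / 2.0
        return fat / (water + fat)

    # Hypothetical mean liver signals from in-phase and opposed-phase images:
    print(f"HFF = {dixon_fat_fraction(520.0, 430.0):.1%}")
    ```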

  12. Examining patterns of bat activity in Bandelier National Monument, New Mexico, using walking point transects

    USGS Publications Warehouse

    Ellison, L.E.; Everette, A.L.; Bogan, M.A.

    2005-01-01

    We conducted a preliminary study using small field crews, a single Anabat II detector coupled with a laptop computer, and point transects to examine patterns of bat activity at a scale of interest to local resource managers. The study was conducted during summers of 1996–1998 in Bandelier National Monument in the Jemez Mountains of northern New Mexico, a landscape with distinct vegetation zones and high species richness of bats. We developed simple models that described general patterns of acoustic activity within 4 vegetation zones based primarily on nightly variation and a qualitative index of habitat complexity. Bat acoustic activity (number of bat passes/point) did not vary dramatically among a limited sample of transects within a vegetation zone during 1996. In 1997 and 1998, single transects within each vegetation zone were established, and bat activity did not vary annually within these zones. Acoustic activity differed among the 4 vegetation zones of interest, with the greatest activity occurring in riparian canyon bottomland, intermediate activity in coniferous forest and a 1977 burned zone, and lowest activity in piñon-juniper woodlands. We identified 68.5% of 2,529 bat passes recorded during point-transect surveys to species using an echolocation call reference library we established for the area and qualitative characteristics of bat calls. Bat species richness and composition differed among vegetation zones. Results of these efforts were consistent with general knowledge of where different bat species typically forage and with the natural history of bats of New Mexico, suggesting such a method might have value for drawing inferences about bat activity in different vegetation zones.

  13. Joint Processing of Envelope Alignment and Phase Compensation for Isar Imaging

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Jin, Guanghu; Dong, Zhen

    2018-04-01

    Range envelope alignment and phase compensation are split into two isolated parts in the classical methods of translational motion compensation in Inverse Synthetic Aperture Radar (ISAR) imaging. In the classical method of rotating-object imaging, the two reference points used for envelope alignment and for Phase Difference (PD) estimation are generally not the same point, making it difficult to decouple the coupling term when performing the correction of Migration Through Resolution Cells (MTRC). In this paper, an improved joint-processing approach that chooses a certain scattering point as the sole reference point is proposed, utilizing the Prominent Point Processing (PPP) method. To this end, we first obtain an initial image using the classical methods, from which a suitable scattering point can be chosen. The envelope alignment and phase compensation are then conducted using the selected scattering point as the common reference point. The keystone transform can thus be smoothly applied to further improve imaging quality. Both simulation experiments and real data processing are provided to demonstrate the performance of the proposed method compared with the classical method.

  14. Comparing theories of reference-dependent choice.

    PubMed

    Bhatia, Sudeep

    2017-09-01

    Preferences are influenced by the presence or absence of salient choice options, known as reference points. This behavioral tendency is traditionally attributed to the loss aversion and diminishing sensitivity assumptions of prospect theory. In contrast, some psychological research suggests that reference dependence is caused by attentional biases that increase the subjective weighting of the reference point's primary attributes. Although both theories are able to successfully account for behavioral findings involving reference dependence, this article shows that these theories make diverging choice predictions when available options are inferior to the reference point. It presents the results of 2 studies that use settings with inferior choice options to compare these 2 theories. The analysis involves quantitative fits to participant-level choice data, and the results indicate that most participants are better described by models with attentional bias than they are by models with loss aversion and diminishing sensitivity. These differences appear to be caused by violations of loss aversion and diminishing sensitivity in losses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
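
    For reference, a standard prospect-theory value function with loss aversion and diminishing sensitivity can be written as below; the parameter values are common textbook defaults, not the values fitted in this paper, and the attentional-bias alternative discussed above is not modelled here.

    ```python
    def prospect_value(x, reference, alpha=0.88, beta=0.88, lam=2.25):
        """Prospect-theory value of attribute level x relative to a reference point,
        with diminishing sensitivity (alpha, beta) and loss aversion (lam)."""
        gain = x - reference
        return gain ** alpha if gain >= 0 else -lam * (-gain) ** beta

    print(prospect_value(60, reference=50))   # a modest gain above the reference point
    print(prospect_value(40, reference=50))   # the same-sized loss is weighted more heavily
    ```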

  15. Pressure-Drop Considerations in the Characterization of Dew-Point Transfer Standards at High Temperatures

    NASA Astrophysics Data System (ADS)

    Mitter, H.; Böse, N.; Benyon, R.; Vicente, T.

    2012-09-01

    During calibration of precision optical dew-point hygrometers (DPHs), it is usually necessary to take into account the pressure drop induced by the gas flow between the "point of reference" and the "point of use" (mirror or measuring head of the DPH), either as a correction of the reference dew-point temperature or as part of the uncertainty estimation. At dew-point temperatures in the range of ambient temperature and below, it is sufficient to determine the pressure drop for the required gas flow and to keep the volumetric flow constant during the measurements. In this case, it is feasible to keep the dry-gas flow into the dew-point generator constant or to measure the flow downstream of the DPH at ambient temperature. In normal operation, at least one DPH in addition to the monitoring DPH is used, and this operation has to be applied to each instrument. The situation is different at high dew-point temperatures up to 95 °C, the currently achievable upper limit reported in this paper. With increasing dew-point temperatures, the reference gas contains increasing amounts of water vapour, and a constant dry-gas flow will lead to a significantly enhanced volume flow at the conditions at the point of use, and therefore to a significantly varying pressure drop depending on the applied dew-point temperature. At dew-point temperatures above ambient temperature, it is also necessary to heat the reference gas and the mirror head of the DPH sufficiently to avoid condensation, which will additionally increase the volume flow and the pressure drop. In this paper, a method is provided to calculate the dry-gas flow rate needed to maintain a known wet-gas flow rate through a chilled mirror for a range of temperatures and pressures.
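
    A minimal sketch of the kind of calculation described in the last sentence, assuming ideal mixing, flows expressed on a standard (molar-equivalent) basis, and a Magnus-type saturation vapour pressure; a metrological implementation would use a reference formulation plus the water vapour enhancement factor, and all numbers below are illustrative rather than taken from the paper.

    ```python
    import math

    def saturation_vapour_pressure_pa(t_celsius):
        # Magnus-type approximation of the water saturation vapour pressure (Pa).
        # Adequate for a sketch; metrological work would use a reference
        # formulation (e.g. Sonntag) with an enhancement factor.
        return 611.2 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

    def dry_gas_flow(wet_flow_sccm, dew_point_c, pressure_at_point_of_use_pa):
        # Dry carrier-gas flow (standard cm^3/min) needed so that, after
        # humidification to the given dew point, the wet-gas flow arriving at the
        # mirror equals wet_flow_sccm. Assumes ideal mixing.
        x_water = saturation_vapour_pressure_pa(dew_point_c) / pressure_at_point_of_use_pa
        return wet_flow_sccm * (1.0 - x_water)

    # At a 95 degC dew point near atmospheric pressure, water vapour makes up most
    # of the wet-gas flow, so only a small dry-gas flow is required:
    print(dry_gas_flow(1000.0, 95.0, 101325.0))
    ```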

  16. Multi-Objective Optimization of a Turbofan for an Advanced, Single-Aisle Transport

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.; Guynn, Mark D.

    2012-01-01

    Considerable interest surrounds the design of the next generation of single-aisle commercial transports in the Boeing 737 and Airbus A320 class. Aircraft designers will depend on advanced, next-generation turbofan engines to power these airplanes. The focus of this study is to apply single- and multi-objective optimization algorithms to the conceptual design of ultrahigh bypass turbofan engines for this class of aircraft, using NASA's Subsonic Fixed Wing Project metrics as multidisciplinary objectives for optimization. The independent design variables investigated include three continuous variables: sea level static thrust, wing reference area, and aerodynamic design point fan pressure ratio, and four discrete variables: overall pressure ratio, fan drive system architecture (i.e., direct- or gear-driven), bypass nozzle architecture (i.e., fixed- or variable geometry), and the high- and low-pressure compressor work split. Ramp weight, fuel burn, noise, and emissions are the parameters treated as dependent objective functions. These optimized solutions provide insight into the ultrahigh bypass engine design process and provide information to NASA program management to help guide its technology development efforts.

  17. Construction and validation of the midsagittal reference plane based on the skull base symmetry for three-dimensional cephalometric craniofacial analysis.

    PubMed

    Kim, Hak-Jin; Kim, Bong Chul; Kim, Jin-Geun; Zhengguo, Piao; Kang, Sang Hoon; Lee, Sang-Hwy

    2014-03-01

    The objective of this study was to determine a reliable midsagittal (MS) reference plane in practical ways for three-dimensional craniofacial analysis on three-dimensional computed tomography images. Five normal human dry skulls and 20 normal subjects without any dysmorphoses or asymmetries were used. The accuracy and stability of repeated plane construction for almost every possible candidate MS plane based on the skull base structures were examined by comparing the discrepancies in distances and orientations from the reference points and planes of the skull base and facial bones on three-dimensional computed tomography images. The following reference points of these planes were stable, and their distribution was balanced: nasion and foramen cecum at the anterior part of the skull base, sella at the middle part, and basion and opisthion at the posterior part. The candidate reference planes constructed using the aforementioned reference points were thought to be reliable for use as an MS reference plane for the three-dimensional analysis of maxillofacial dysmorphosis.

  18. Comparative Bioavailability and Tolerability of a Single 2-mg Dose of 2 Repaglinide Tablet Formulations in Fasting, Healthy Chinese Male Volunteers: An Open-Label, Randomized-Sequence, 2-Period Crossover Study☆

    PubMed Central

    Zhai, Xue-jia; Hu, Kai; Chen, Fen; Lu, Yong-ning

    2013-01-01

    Background Repaglinide, an oral insulin secretagogue, was the first meglitinide analogue to be approved for use in patients with type 2 diabetes mellitus. Objective In our study, the bioavailability and tolerability of the proposed generic formulation with the established reference formulation of repaglinide 2 mg were compared in a fasting, healthy Chinese male population. Methods This 2-week, open-label, randomized-sequence, single-dose, 2-period crossover study was conducted in 22 healthy native Han Chinese male volunteers. Eligible subjects were randomly assigned in a 1:1 ratio to receive a single 2-mg dose of the test or reference formulation, followed by a 7-day washout period and administration of the alternate formulation. After an overnight fast, subjects received a single oral dose of repaglinide (2 mg). Blood samples were drawn at predetermined time points (0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, and 6.0 hours). All plasma concentrations of repaglinide were measured by LC-MS/MS. The observed Cmax, Tmax, t1/2, and AUC were assessed. The formulations were to be considered bioequivalent if the ln-transformed ratios of Cmax and AUC were within the predetermined bioequivalence range of 80% to 125% established by the State Food and Drug Administration of the People’s Republic of China. Tolerability was assessed throughout the study via subject interview, vital signs, and blood sampling. Results The mean (SD) age of the subjects was 24.2 (2.3) years; their mean (SD) weight was 62.6 (5.8) kg, their mean (SD) height was 172 (5.7) cm, and their mean (SD) body mass index was 21.0 (1.1). The mean (SD) Cmax values for repaglinide with the test and reference formulations were 20.0 (5.1) and 18.7 (8.7) ng/mL. The AUC0–t for the test formulation was 46.3 (15.1) and AUC0–∞ was 47.9 (16.5) ng•h/mL. With the reference formulation, the corresponding values were 46.4 (26.1) and 49.0 (31.3) ng•h/mL. The mean (SD) Tmax values with the test and reference formulations were 1.2 (0.7) hours and 1.5 (0.8) hours, and the mean (SD) t1/2 values were 1.0 (0.3) and 0.9 (0.3) hours, respectively. The ln-transformed ratios of Cmax, AUC0–t, and AUC0–∞ were 113.6:1, 105.6:1, and 104.7:1. The corresponding 90% CIs were 99.8 to 129.2, 93.4 to 119.5, and 91.8 to 119.5, respectively. Conclusions This single-dose study found that the test and reference formulations of repaglinide met the regulatory criteria for bioequivalence in these fasting, healthy Chinese male volunteers. Both formulations appeared to be well tolerated. ClinicalTrials.gov identifier: 2012L01684. PMID:24465043
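
    For orientation, the sketch below shows how a 90% CI for the geometric-mean ratio of a ln-transformed pharmacokinetic metric can be computed. It is a simplified paired analysis (the regulatory analysis fits a crossover ANOVA with sequence and period effects), and the sample data are hypothetical, not the study's raw values.

    ```python
    import numpy as np
    from scipy import stats

    def bioequivalence_90ci(test_vals, ref_vals):
        # 90% CI for the geometric-mean ratio test/reference on ln-transformed,
        # within-subject data (equivalent to two one-sided tests at alpha = 0.05).
        diff = np.log(np.asarray(test_vals, float)) - np.log(np.asarray(ref_vals, float))
        n = diff.size
        mean, se = diff.mean(), diff.std(ddof=1) / np.sqrt(n)
        t_crit = stats.t.ppf(0.95, df=n - 1)
        lo, hi = np.exp(mean - t_crit * se), np.exp(mean + t_crit * se)
        # Percent scale, to be compared with the 80%-125% acceptance range.
        return np.exp(mean) * 100, lo * 100, hi * 100

    # Hypothetical Cmax data, for illustration only:
    test = [21.3, 18.9, 24.0, 17.5, 20.2, 19.8]
    ref = [19.1, 20.5, 22.7, 16.8, 18.3, 21.0]
    print(bioequivalence_90ci(test, ref))
    ```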

  19. Comparative Bioavailability and Tolerability of a Single 2-mg Dose of 2 Repaglinide Tablet Formulations in Fasting, Healthy Chinese Male Volunteers: An Open-Label, Randomized-Sequence, 2-Period Crossover Study.

    PubMed

    Zhai, Xue-Jia; Hu, Kai; Chen, Fen; Lu, Yong-Ning

    2013-12-01

    Repaglinide, an oral insulin secretagogue, was the first meglitinide analogue to be approved for use in patients with type 2 diabetes mellitus. In our study, the bioavailability and tolerability of the proposed generic formulation with the established reference formulation of repaglinide 2 mg were compared in a fasting, healthy Chinese male population. This 2-week, open-label, randomized-sequence, single-dose, 2-period crossover study was conducted in 22 healthy native Han Chinese male volunteers. Eligible subjects were randomly assigned in a 1:1 ratio to receive a single 2-mg dose of the test or reference formulation, followed by a 7-day washout period and administration of the alternate formulation. After an overnight fast, subjects received a single oral dose of repaglinide (2 mg). Blood samples were drawn at predetermined time points (0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0, and 6.0 hours). All plasma concentrations of repaglinide were measured by LC-MS/MS. The observed Cmax, Tmax, t1/2, and AUC were assessed. The formulations were to be considered bioequivalent if the ln-transformed ratios of Cmax and AUC were within the predetermined bioequivalence range of 80% to 125% established by the State Food and Drug Administration of the People's Republic of China. Tolerability was assessed throughout the study via subject interview, vital signs, and blood sampling. The mean (SD) age of the subjects was 24.2 (2.3) years; their mean (SD) weight was 62.6 (5.8) kg, their mean (SD) height was 172 (5.7) cm, and their mean (SD) body mass index was 21.0 (1.1). The mean (SD) Cmax values for repaglinide with the test and reference formulations were 20.0 (5.1) and 18.7 (8.7) ng/mL. The AUC0-t for the test formulation was 46.3 (15.1) and AUC0-∞ was 47.9 (16.5) ng•h/mL. With the reference formulation, the corresponding values were 46.4 (26.1) and 49.0 (31.3) ng•h/mL. The mean (SD) Tmax values with the test and reference formulations were 1.2 (0.7) hours and 1.5 (0.8) hours, and the mean (SD) t1/2 values were 1.0 (0.3) and 0.9 (0.3) hours, respectively. The ln-transformed ratios of Cmax, AUC0-t, and AUC0-∞ were 113.6:1, 105.6:1, and 104.7:1. The corresponding 90% CIs were 99.8 to 129.2, 93.4 to 119.5, and 91.8 to 119.5, respectively. This single-dose study found that the test and reference formulations of repaglinide met the regulatory criteria for bioequivalence in these fasting, healthy Chinese male volunteers. Both formulations appeared to be well tolerated. ClinicalTrials.gov identifier: 2012L01684.

  20. Mapping the vestibular evoked myogenic potential (VEMP).

    PubMed

    Colebatch, James G

    2012-01-01

    Effects of different electrode placements and indifferent electrodes were investigated for the vestibular evoked myogenic potential (VEMP) recorded from the sternocleidomastoid muscle (SCM). In 5 normal volunteers, the motor point of the left SCM was identified and an electrode placed there. A grid of 7 additional electrodes was laid out, along and across the SCM, based upon the location of the motor point. One reference electrode was placed over the sternoclavicular joint and another over C7. There were clear morphological changes with differing recording sites and for the two reference electrodes, but the earliest and largest responses were recorded from the motor point. The C7 reference affected the level of rectified EMG and was associated with an initial negativity in some electrodes. The latencies of the p13 potentials increased with distance from the motor point but the n23 latencies did not. Thus the p13 potential behaved as a travelling wave whereas the n23 behaved as a standing wave. The C7 reference may be contaminated by other evoked myogenic activity. Ideally recordings should be made with an active electrode over the motor point.

  1. Automation of the Image Analysis for Thermographic Inspection

    NASA Technical Reports Server (NTRS)

    Plotnikov, Yuri A.; Winfree, William P.

    1998-01-01

    Several data processing procedures for pulse thermal inspection require preliminary determination of an unflawed region. Typically, an initial analysis of the thermal images is performed by an operator to determine the locations of the unflawed and defective areas. In the present work, an algorithm is developed for automatically determining a reference point corresponding to an unflawed region. Results are obtained for defects which are arbitrarily located in the inspection region. A comparison is presented of the distributions of derived values with correct and incorrect localization of the reference point. Different algorithms for automatic determination of the reference point are compared.
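
    One simple way to automate the choice of an unflawed reference point is sketched below under the assumption that defective pixels are a minority: pick the pixel whose cooling curve is closest to the image-median cooling curve. This is an illustrative heuristic, not necessarily the algorithm developed in the paper.

    ```python
    import numpy as np

    def unflawed_reference_pixel(frames):
        # frames: thermogram sequence of shape (time, rows, cols).
        # Assumes most pixels are unflawed, so the median cooling curve is a
        # reasonable stand-in for sound material.
        t, rows, cols = frames.shape
        curves = frames.reshape(t, -1)                          # one cooling curve per pixel
        median_curve = np.median(curves, axis=1, keepdims=True)
        deviation = np.abs(curves - median_curve).sum(axis=0)   # distance to the median curve
        idx = int(deviation.argmin())
        return divmod(idx, cols)                                # (row, col) of the reference point

    # Usage: ref_row, ref_col = unflawed_reference_pixel(thermal_sequence)
    ```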

  2. Local sensitivity of per-recruit fishing mortality reference points.

    PubMed

    Cadigan, N G; Wang, S

    2016-12-01

    We study the sensitivity of fishery management per-recruit harvest rates which may be part of a quantitative harvest strategy designed to achieve some objective for catch or population size. We use a local influence sensitivity analysis to derive equations that describe how these reference harvest rates are affected by perturbations to productivity processes. These equations give a basic theoretical understanding of sensitivity that can be used to predict what the likely impacts of future changes in productivity will be. Our results indicate that per-recruit reference harvest rates are more sensitive to perturbations when the equilibrium catch or population size per recruit, as functions of the harvest rate, have less curvature near the reference point. Overall our results suggest that per recruit reference points will, with some exceptions, usually increase if (1) growth rates increase, (2) natural mortality rates increase, or (3) fishery selectivity increases to an older age.
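
    To make the dependence on growth, natural mortality, and selectivity concrete, the sketch below computes a standard Thompson-Bell yield-per-recruit curve and the F_max reference point. The weights-at-age, selectivity, and M are made-up illustrative values, and the calculation is generic rather than the authors' per-recruit model.

    ```python
    import numpy as np

    def yield_per_recruit(F, M, weight, selectivity):
        # Thompson-Bell yield-per-recruit for one recruit entering at the first age.
        n, ypr = 1.0, 0.0
        for w, s in zip(weight, selectivity):
            z = M + F * s                                   # total mortality at age
            if z > 0:
                ypr += (F * s / z) * n * (1.0 - np.exp(-z)) * w
            n *= np.exp(-z)                                 # survivors to the next age
        return ypr

    # Illustrative (made-up) biology for a 10-age stock:
    ages = np.arange(1, 11)
    weight = 5.0 * (1.0 - np.exp(-0.3 * ages)) ** 3         # von Bertalanffy-like growth
    select = 1.0 / (1.0 + np.exp(-2.0 * (ages - 3.0)))      # logistic fishery selectivity
    M = 0.2

    F_grid = np.linspace(0.0, 2.0, 2001)
    ypr = np.array([yield_per_recruit(F, M, weight, select) for F in F_grid])
    print(f"F_max = {F_grid[np.argmax(ypr)]:.2f}")          # per-recruit reference point
    ```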

  3. Singlet-paired coupled cluster theory for open shells

    NASA Astrophysics Data System (ADS)

    Gomez, John A.; Henderson, Thomas M.; Scuseria, Gustavo E.

    2016-06-01

    Restricted single-reference coupled cluster theory truncated to single and double excitations accurately describes weakly correlated systems, but often breaks down in the presence of static or strong correlation. Good coupled cluster energies in the presence of degeneracies can be obtained by using a symmetry-broken reference, such as unrestricted Hartree-Fock, but at the cost of good quantum numbers. A large body of work has shown that modifying the coupled cluster ansatz allows for the treatment of strong correlation within a single-reference, symmetry-adapted framework. The recently introduced singlet-paired coupled cluster doubles (CCD0) method is one such model, which recovers correct behavior for strong correlation without requiring symmetry breaking in the reference. Here, we extend singlet-paired coupled cluster for application to open shells via restricted open-shell singlet-paired coupled cluster singles and doubles (ROCCSD0). The ROCCSD0 approach retains the benefits of standard coupled cluster theory and recovers correct behavior for strongly correlated, open-shell systems using a spin-preserving ROHF reference.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gomez, John A.; Henderson, Thomas M.; Scuseria, Gustavo E.

    Restricted single-reference coupled cluster theory truncated to single and double excitations accurately describes weakly correlated systems, but often breaks down in the presence of static or strong correlation. Good coupled cluster energies in the presence of degeneracies can be obtained by using a symmetry-broken reference, such as unrestricted Hartree-Fock, but at the cost of good quantum numbers. A large body of work has shown that modifying the coupled cluster ansatz allows for the treatment of strong correlation within a single-reference, symmetry-adapted framework. The recently introduced singlet-paired coupled cluster doubles (CCD0) method is one such model, which recovers correct behavior for strong correlation without requiring symmetry breaking in the reference. Here, we extend singlet-paired coupled cluster for application to open shells via restricted open-shell singlet-paired coupled cluster singles and doubles (ROCCSD0). The ROCCSD0 approach retains the benefits of standard coupled cluster theory and recovers correct behavior for strongly correlated, open-shell systems using a spin-preserving ROHF reference.

  5. GenLocDip: A Generalized Program to Calculate and Visualize Local Electric Dipole Moments.

    PubMed

    Groß, Lynn; Herrmann, Carmen

    2016-09-30

    Local dipole moments (i.e., dipole moments of atomic or molecular subsystems) are essential for understanding various phenomena in nanoscience, such as solvent effects on the conductance of single molecules in break junctions or the interaction between the tip and the adsorbate in atomic force microscopy. We introduce GenLocDip, a program for calculating and visualizing local dipole moments of molecular subsystems. GenLocDip currently uses the Atoms-In-Molecules (AIM) partitioning scheme and is interfaced to various AIM programs. This enables postprocessing of a variety of electronic structure output formats including cube and wavefunction files and, in general, output from any other code capable of writing the electron density on a three-dimensional grid. It uses a modified version of Bader's and Laidig's approach for achieving origin-independence of local dipoles by referring to internal reference points, which can (but need not) be bond critical points (BCPs). Furthermore, the code allows the export of critical points and local dipole moments into a POV-Ray-readable input format. It is particularly designed for fragments of large systems, for which no BCPs have been calculated for computational efficiency reasons, because large interfragment distances prevent their identification, or because a local partitioning scheme different from AIM was used. The program requires only minimal user input and is written in the Fortran90 programming language. To demonstrate the capabilities of the program, examples are given for covalently and non-covalently bound systems, in particular molecular adsorbates. © 2016 Wiley Periodicals, Inc.
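
    A minimal sketch of the charge-transfer part of a local dipole moment evaluated relative to an internal reference point (for example, a bond critical point). The partial charges and coordinates are hypothetical, and AIM-based schemes such as the one described above add atomic polarization contributions on top of this term.

    ```python
    import numpy as np

    def local_dipole(charges, positions, reference_point):
        # Point-charge (charge-transfer) contribution to a fragment dipole,
        # taken relative to an internal reference point so that the value of a
        # charged fragment does not depend on the global coordinate origin.
        q = np.asarray(charges, float)
        r = np.asarray(positions, float) - np.asarray(reference_point, float)
        return (q[:, None] * r).sum(axis=0)   # in e * length units of the input

    # Hypothetical partial charges/coordinates of a small fragment (illustration only):
    q = [0.4, -0.2, -0.2]
    xyz = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [-0.5, 0.8, 0.0]]
    bcp = [0.5, 0.0, 0.0]                      # e.g. a bond critical point as reference
    print(local_dipole(q, xyz, bcp))
    ```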

  6. Estimating Implementation and Operational Costs of an Integrated Tiered CD4 Service including Laboratory and Point of Care Testing in a Remote Health District in South Africa

    PubMed Central

    Cassim, Naseem; Coetzee, Lindi M.; Schnippel, Kathryn; Glencross, Deborah K.

    2014-01-01

    Background An integrated tiered service delivery model (ITSDM) has been proposed to provide ‘full-coverage’ of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and number of referring health-facilities. These include: (1) Tier-1/decentralized point-of-care service (POC) in a single site; Tier-2/POC-hubs processing <30–40 samples from 8–10 health-clinics; Tier-3/Community laboratories servicing ∼50 health-clinics, processing <150 samples/day; high-volume centralized laboratories (Tier-4 and Tier-5) processing <300 or >600 samples/day and serving >100 or >200 health-clinics, respectively. The objective of this study was to establish costs of existing and ITSDM-tiers 1, 2 and 3 in a remote, under-serviced district in South Africa. Methods Historical health-facility workload volumes from the Pixley-ka-Seme district, and the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, linked to locations of all referring clinics and related laboratory-to-result turn-around time (LTR-TAT) data, were extracted from the NHLS Corporate-Data-Warehouse for the period April-2012 to March-2013. Tiers were costed separately (as a cost-per-result) including equipment, staffing, reagents and test consumable costs. A one-way sensitivity analysis provided for changes in reagent price, test volumes and personnel time. Results The lowest cost-per-result was noted for the existing laboratory-based Tiers 4 and 5 ($6.24 and $5.37 respectively), but with related increased LTR-TAT of >24–48 hours. Full service coverage with TAT <6-hours could be achieved with placement of twenty-seven Tier-1/POC or eight Tier-2/POC-hubs, at a cost-per-result of $32.32 and $15.88 respectively. A single district Tier-3 laboratory also ensured ‘full service coverage’ and <24 hour LTR-TAT for the district at $7.42 per-test. Conclusion Implementing a single Tier-3/community laboratory to extend and improve delivery of services in Pixley-ka-Seme, with an estimated local ∼12–24-hour LTR-TAT, is ∼$2 more than existing referred services per-test, but 2–4 fold cheaper than implementing eight Tier-2/POC-hubs or providing twenty-seven Tier-1/POCT CD4 services.

  7. Creation and validation of a novel body condition scoring method for the magellanic penguin (Spheniscus magellanicus) in the zoo setting.

    PubMed

    Clements, Julie; Sanchez, Jessica N

    2015-11-01

    This research aims to validate a novel, visual body scoring system created for the Magellanic penguin (Spheniscus magellanicus) suitable for the zoo practitioner. Magellanics go through marked seasonal fluctuations in body mass gains and losses. A standardized multi-variable visual body condition guide may provide a more sensitive and objective assessment tool compared to the previously used single-variable method. Accurate body condition scores paired with seasonal weight variation measurements give veterinary and keeper staff a clearer understanding of an individual's nutritional status. San Francisco Zoo staff previously used a nine-point body condition scale based on the classic bird standard of a single point of keel palpation with the bird restrained in hand, with no standard measure of reference assigned to each scoring category. We created a novel, visual body condition scoring system that does not require restraint and assesses subcutaneous fat and muscle at seven body landmarks using illustrations and descriptive terms. The scores range from one, the least robust or under-conditioned, to five, the most robust or over-conditioned. The ratio of body weight to wing length was used as a "gold standard" index of body condition and compared to both the novel multi-variable and previously used single-variable body condition scores. The novel multi-variable scale showed improved agreement with weight:wing ratio compared to the single-variable scale, demonstrating greater accuracy and reliability when a trained assessor uses the multi-variable body condition scoring system. Zoo staff may use this tool to manage both the colony and the individual to assist in seasonally appropriate Magellanic penguin nutrition assessment. © 2015 Wiley Periodicals, Inc.

  8. Model reference adaptive control for the azimuth-pointing system of a balloon-borne stabilized platform

    NASA Technical Reports Server (NTRS)

    Lubin, Philip M.; Tomizuka, Masayoshi; Chingcuanco, Alfredo O.; Meinhold, Peter R.

    1991-01-01

    A balloon-borne stabilized platform has been developed for the remotely operated altitude-azimuth pointing of a millimeter wave telescope system. This paper presents the development and implementation of model reference adaptive control (MRAC) for the azimuth-pointing system of the stabilized platform. The primary goal of the controller is to achieve pointing rms better than 0.1 deg. Simulation results indicate that MRAC can achieve pointing rms better than 0.1 deg. Ground test results show pointing rms better than 0.03 deg. Data from the first flight at the National Scientific Balloon Facility (NSBF) in Palestine, Texas, show pointing rms better than 0.02 deg.
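
    For readers unfamiliar with MRAC, the sketch below shows the textbook MIT-rule adaptation of a feedforward gain for a first-order plant tracking a reference model. The plant, model, and gain values are illustrative assumptions and do not represent the flight system's dynamics or controller.

    ```python
    import numpy as np

    # Discrete-time sketch of MIT-rule model-reference adaptive control of a
    # feedforward gain: plant y' = -a*y + k*u with u = theta*r, reference model
    # y_m' = -a*y_m + k0*r encoding the desired pointing response.
    dt, T = 0.001, 20.0
    a, k, k0, gamma = 1.0, 2.0, 1.0, 0.5          # illustrative values only
    y = y_m = theta = 0.0
    t = np.arange(0.0, T, dt)
    r = np.sign(np.sin(0.5 * t))                   # square-wave azimuth command

    for ri in r:
        u = theta * ri                             # adaptive feedforward control
        y += dt * (-a * y + k * u)                 # plant response
        y_m += dt * (-a * y_m + k0 * ri)           # reference-model response
        e = y - y_m                                # tracking error
        theta += dt * (-gamma * y_m * e)           # MIT adaptation rule

    print("final adapted gain:", theta, "(ideal k0/k =", k0 / k, ")")
    ```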

  9. PLUME-FEATHER, Referencing and Finding Software for Research and Education

    NASA Astrophysics Data System (ADS)

    Bénassy, O.; Caron, C.; Ferret-Canape, C.; Cheylus, A.; Courcelle, E.; Dantec, C.; Dayre, P.; Dostes, T.; Durand, A.; Facq, A.; Gambini, G.; Geahchan, E.; Helft, C.; Hoffmann, D.; Ingarao, M.; Joly, P.; Kieffer, J.; Larré, J.-M.; Libes, M.; Morris, F.; Parmentier, H.; Pérochon, L.; Porte, O.; Romier, G.; Rousse, D.; Tournoy, R.; Valeins, H.

    2014-06-01

    PLUME-FEATHER is a non-profit project created to Promote economicaL, Useful and Maintained softwarE For the Higher Education And THE Research communities. The site references software, mainly Free/Libre Open Source Software (FLOSS), from French universities and national research organisations (CNRS, INRA...), laboratories or departments, as well as other FLOSS software used and evaluated by users within these institutions. Each software package is represented by a reference card, which describes origin, aim, installation, cost (if applicable) and user experience from the point of view of an academic user for academic users. Presently over 1000 programs are referenced on PLUME by more than 900 contributors. Although the server is maintained by a French institution, it is open to international contributions in the academic domain. All contained and validated contents are visible to the anonymous public, whereas (presently more than 2000) registered users can contribute, starting with comments on single software reference cards up to help with the organisation and presentation of the referenced software products. The project was presented to the HEP community for the first time in 2012 [1]. This is an update of the status and a call for (further) contributions.

  10. High accuracy attitude reference stabilization and pointing using the Teledyne SDG-5 gyro and the DRIRU II inertial reference unit

    NASA Astrophysics Data System (ADS)

    Green, K. N.; van Alstine, R. L.

    This paper presents the current performance levels of the SDG-5 gyro, a high performance two-axis dynamically tuned gyro, and the DRIRU II redundant inertial reference unit relating to stabilization and pointing applications. Also presented is a discussion of a product improvement program aimed at further noise reductions to meet the demanding requirements of future space defense applications.

  11. Wollaston prism phase-stepping point diffraction interferometer and method

    DOEpatents

    Rushford, Michael C.

    2004-10-12

    A Wollaston prism phase-stepping point diffraction interferometer for testing a test optic. The Wollaston prism shears light into reference and signal beams, and provides phase stepping at increased accuracy by translating the Wollaston prism in a lateral direction with respect to the optical path. The reference beam produced by the Wollaston prism is directed through a pinhole of a diaphragm to produce a perfect spherical reference wave. The spherical reference wave is recombined with the signal beam to produce an interference fringe pattern of greater accuracy.

  12. Relativistic spin-orbit interactions of photons and electrons

    NASA Astrophysics Data System (ADS)

    Smirnova, D. A.; Travin, V. M.; Bliokh, K. Y.; Nori, F.

    2018-04-01

    Laboratory optics, typically dealing with monochromatic light beams in a single reference frame, exhibits numerous spin-orbit interaction phenomena due to the coupling between the spin and orbital degrees of freedom of light. Similar phenomena appear for electrons and other spinning particles. Here we examine transformations of paraxial photon and relativistic-electron states carrying the spin and orbital angular momenta (AM) under the Lorentz boosts between different reference frames. We show that transverse boosts inevitably produce a rather nontrivial conversion from spin to orbital AM. The converted part is then separated between the intrinsic (vortex) and extrinsic (transverse shift or Hall effect) contributions. Although the spin, intrinsic-orbital, and extrinsic-orbital parts all point in different directions, such complex behavior is necessary for the proper Lorentz transformation of the total AM of the particle. Relativistic spin-orbit interactions can be important in scattering processes involving photons, electrons, and other relativistic spinning particles, as well as when studying light emitted by fast-moving bodies.

  13. Image processing of HCMM-satellite thermal images for superposition with other satellite imagery and topographic and thematic maps. [Upper Rhine River Valley and surrounding highlands Switzerland, Germany, and France

    NASA Technical Reports Server (NTRS)

    Gossmann, H.; Haberaecker, P. (Principal Investigator)

    1980-01-01

    The southwestern part of Central Europe between Basel and Frankfurt was used in a study to determine the accuracy with which a regionally bounded HCMM scene could be rectified with respect to a preassigned coordinate system. The scale to which excerpts from HCMM data can be sensibly enlarged and the question of how large natural structures must be in order to be identified in a satellite thermal image with the given resolution were also examined. Relief, forest and population distribution maps, and a land use map derived from LANDSAT data were digitized, adapted to a common reference system, and then combined in a single multichannel data system. The control points for geometrical rectification were determined using the coordinates of the reference system. The multichannel scene was evaluated in several different ways, such as the correlation of surface temperature and relief, surface temperature and land use, or surface temperature and built-up areas.

  14. Using features of Arden Syntax with object-oriented medical data models for guideline modeling.

    PubMed

    Peleg, M; Ogunyemi, O; Tu, S; Boxwala, A A; Zeng, Q; Greenes, R A; Shortliffe, E H

    2001-01-01

    Computer-interpretable guidelines (CIGs) can deliver patient-specific decision support at the point of care. CIGs base their recommendations on eligibility and decision criteria that relate medical concepts to patient data. CIG models use expression languages for specifying these criteria, and define models for medical data to which the expressions can refer. In developing version 3 of the GuideLine Interchange Format (GLIF3), we used existing standards as the medical data model and expression language. We investigated the object-oriented HL7 Reference Information Model (RIM) as a default data model. We developed an expression language, called GEL, based on Arden Syntax's logic grammar. Together with other GLIF constructs, GEL reconciles incompatibilities between the data models of Arden Syntax and the HL7 RIM. These incompatibilities include Arden's lack of support for complex data types and time intervals, and the mismatch between Arden's single primary time and multiple time attributes of the HL7 RIM.

  15. An Indoor Positioning Technique Based on a Feed-Forward Artificial Neural Network Using Levenberg-Marquardt Learning Method

    NASA Astrophysics Data System (ADS)

    Pahlavani, P.; Gholami, A.; Azimi, S.

    2017-09-01

    This paper presents an indoor positioning technique based on a multi-layer feed-forward (MLFF) artificial neural network (ANN). Most indoor received signal strength (RSS)-based WLAN positioning systems use the fingerprinting technique, which can be divided into two phases: the offline (calibration) phase and the online (estimation) phase. In this paper, RSSs were collected at all reference points in four directions and two periods of time (morning and evening). Hence, RSS readings were sampled at a regular time interval and specific orientation at each reference point. The proposed ANN-based model used the Levenberg-Marquardt algorithm for learning and fitting the network to the training data. The RSS readings at all reference points, together with the known positions of these reference points, were prepared for the training phase of the proposed MLFF neural network. Eventually, the average positioning error for this network using 30% check and validation data was computed as approximately 2.20 meters.
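
    A sketch of the two-phase fingerprinting workflow on synthetic data, assuming a simple log-distance RSS model; note that scikit-learn's MLP uses Adam/L-BFGS rather than the Levenberg-Marquardt optimizer of the paper, so this only illustrates the overall offline/online structure, not the paper's training method.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    # Offline phase: synthetic RSS fingerprints at known reference points.
    rng = np.random.default_rng(0)
    n_points, n_aps = 200, 6
    xy = rng.uniform(0, 30, size=(n_points, 2))            # reference-point coordinates (m)
    ap_xy = rng.uniform(0, 30, size=(n_aps, 2))             # access-point locations
    dist = np.linalg.norm(xy[:, None, :] - ap_xy[None], axis=2)
    rss = -40 - 20 * np.log10(dist + 1) + rng.normal(0, 2, dist.shape)   # log-distance model

    X_train, X_test, y_train, y_test = train_test_split(rss, xy, test_size=0.3, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    model.fit(X_train, y_train)

    # Online phase: predict coordinates from new RSS readings.
    err = np.linalg.norm(model.predict(X_test) - y_test, axis=1)
    print(f"mean positioning error: {err.mean():.2f} m")
    ```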

  16. Development of Single-Seed Near-Infrared Spectroscopic Predictions of Corn and Soybean Constituents Using Bulk Reference Values and Mean Spectra

    USDA-ARS?s Scientific Manuscript database

    Near-Infrared reflectance spectroscopic prediction models were developed for common constituents of corn and soybeans using bulk reference values and mean spectra from single-seeds. The bulk reference model and a true single-seed model for soybean protein were compared to determine how well the bul...

  17. Robust iterative closest point algorithm based on global reference point for rotation invariant registration.

    PubMed

    Du, Shaoyi; Xu, Yiting; Wan, Teng; Hu, Huaizhong; Zhang, Sirui; Xu, Guanglin; Zhang, Xuetao

    2017-01-01

    The iterative closest point (ICP) algorithm is efficient and accurate for rigid registration, but it needs good initial parameters. It easily fails when the rotation angle between the two point sets is large. To deal with this problem, a new objective function is proposed by introducing a rotation-invariant feature based on the Euclidean distance between each point and a global reference point, where the global reference point is a rotation invariant. After that, this optimization problem is solved by a variant of the ICP algorithm, which is an iterative method. Firstly, the accurate correspondence is established by using the weighted rotation-invariant feature distance and position distance together. Secondly, the rigid transformation is solved by the singular value decomposition method. Thirdly, the weight is adjusted to control the relative contribution of the positions and features. Finally, the new algorithm accomplishes registration in a coarse-to-fine manner regardless of the initial rotation angle, and is demonstrated to converge monotonically. The experimental results validate that the proposed algorithm is more accurate and robust compared with the original ICP algorithm.
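
    A compact sketch of the core idea, assuming the centroid is used as the global reference point (one rotation-invariant choice) and a simple linear decay of the feature weight for the coarse-to-fine schedule; it is not the authors' exact formulation.

    ```python
    import numpy as np

    def rigid_transform(P, Q):
        # Best rotation R and translation t mapping P onto Q (Kabsch / SVD).
        cp, cq = P.mean(0), Q.mean(0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        return R, cq - R @ cp

    def feature_icp(src, dst, iters=50, w0=5.0):
        # ICP variant with a rotation-invariant feature: the distance of every
        # point to a global reference point (here the centroid). The feature
        # weight w starts large (rotation-robust matching) and decays toward
        # pure positional ICP (fine alignment).
        f_dst = np.linalg.norm(dst - dst.mean(0), axis=1)
        cur = src.copy()
        for k in range(iters):
            w = w0 * (1.0 - k / iters)                      # coarse-to-fine weighting
            f_cur = np.linalg.norm(cur - cur.mean(0), axis=1)
            # combined cost: squared position distance + weighted squared feature distance
            cost = ((cur[:, None, :] - dst[None]) ** 2).sum(-1) \
                   + w * (f_cur[:, None] - f_dst[None]) ** 2
            idx = cost.argmin(axis=1)                        # correspondences
            R, t = rigid_transform(cur, dst[idx])
            cur = cur @ R.T + t
        return cur

    # Usage: aligned = feature_icp(source_points_Nx3, target_points_Mx3)
    ```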

  18. Robust iterative closest point algorithm based on global reference point for rotation invariant registration

    PubMed Central

    Du, Shaoyi; Xu, Yiting; Wan, Teng; Zhang, Sirui; Xu, Guanglin; Zhang, Xuetao

    2017-01-01

    The iterative closest point (ICP) algorithm is efficient and accurate for rigid registration, but it needs good initial parameters. It easily fails when the rotation angle between the two point sets is large. To deal with this problem, a new objective function is proposed by introducing a rotation-invariant feature based on the Euclidean distance between each point and a global reference point, where the global reference point is a rotation invariant. After that, this optimization problem is solved by a variant of the ICP algorithm, which is an iterative method. Firstly, the accurate correspondence is established by using the weighted rotation-invariant feature distance and position distance together. Secondly, the rigid transformation is solved by the singular value decomposition method. Thirdly, the weight is adjusted to control the relative contribution of the positions and features. Finally, the new algorithm accomplishes registration in a coarse-to-fine manner regardless of the initial rotation angle, and is demonstrated to converge monotonically. The experimental results validate that the proposed algorithm is more accurate and robust compared with the original ICP algorithm. PMID:29176780

  19. Improving representation of canopy temperatures for modeling subcanopy incoming longwave radiation to the snow surface

    NASA Astrophysics Data System (ADS)

    Webster, Clare; Rutter, Nick; Jonas, Tobias

    2017-09-01

    A comprehensive analysis of canopy surface temperatures was conducted around a small and a large gap at a forested alpine site in the Swiss Alps during the 2015 and 2016 snowmelt seasons (March-April). Canopy surface temperatures within the small gap were within 2-3°C of measured reference air temperature. Vertical and horizontal variations in canopy surface temperatures were greatest around the large gap, varying up to 18°C above measured reference air temperature during clear-sky days. Nighttime canopy surface temperatures around the study site were up to 3°C cooler than reference air temperature. These measurements were used to develop a simple parameterization for correcting reference air temperature for elevated canopy surface temperatures during (1) nighttime conditions (subcanopy shortwave radiation is 0 W m-2) and (2) periods of increased subcanopy shortwave radiation >400 W m-2, representing penetration of shortwave radiation through the canopy. Subcanopy shortwave and longwave radiation collected at a single point in the subcanopy over a 24 h clear-sky period was used to calculate a nighttime bulk offset of 3°C for scenario 1 and to develop a multiple linear regression model for scenario 2 using reference air temperature and subcanopy shortwave radiation to predict canopy surface temperature with a root-mean-square error (RMSE) of 0.7°C. Outside of these two scenarios, reference air temperature was used to predict subcanopy incoming longwave radiation. Modeling at 20 radiometer locations throughout two snowmelt seasons using these parameterizations reduced the mean bias and RMSE to below 10 W m-2 at all locations.
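
    A minimal sketch of the two-scenario correction described above, with placeholder regression coefficients (the fitted values are not reproduced here) and the standard Stefan-Boltzmann step for converting the parameterized surface temperature into emitted longwave radiation.

    ```python
    import numpy as np

    def canopy_surface_temp(t_air_c, sw_subcanopy, coeffs=(0.0, 1.0, 0.02)):
        # Parameterized canopy surface temperature (deg C). The regression
        # coefficients are placeholders, not the paper's fitted values.
        t_air = np.asarray(t_air_c, float)
        sw = np.asarray(sw_subcanopy, float)
        b0, b1, b2 = coeffs
        t_canopy = t_air.copy()                              # default: use air temperature
        night = sw == 0.0
        t_canopy[night] = t_air[night] - 3.0                 # scenario 1: nighttime bulk offset
        hot = sw > 400.0                                      # scenario 2: direct-beam penetration
        t_canopy[hot] = b0 + b1 * t_air[hot] + b2 * sw[hot]   # multiple linear regression
        return t_canopy

    def canopy_longwave(t_surface_c, emissivity=0.98):
        # Longwave emission from the canopy at the parameterized surface temperature.
        sigma = 5.670374419e-8
        return emissivity * sigma * (np.asarray(t_surface_c, float) + 273.15) ** 4

    t_air = np.array([-2.0, 5.0, 8.0])
    sw = np.array([0.0, 120.0, 650.0])
    print(canopy_longwave(canopy_surface_temp(t_air, sw)))   # W m-2
    ```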

  20. Single point aerosol sampling: evaluation of mixing and probe performance in a nuclear stack.

    PubMed

    Rodgers, J C; Fairchild, C I; Wood, G O; Ortiz, C A; Muyshondt, A; McFarland, A R

    1996-01-01

    Alternative reference methodologies have been developed for sampling of radionuclides from stacks and ducts, which differ from the methods previously required by the United States Environmental Protection Agency. These alternative reference methodologies have recently been approved by the U.S. EPA for use in lieu of the current standard techniques. The standard EPA methods are prescriptive in selection of sampling locations and in design of sampling probes, whereas the alternative reference methodologies are performance driven. Tests were conducted in a stack at Los Alamos National Laboratory to demonstrate the efficacy of some aspects of the alternative reference methodologies. Coefficients of variation of velocity, tracer gas, and aerosol particle profiles were determined at three sampling locations. Results showed that numerical criteria placed upon the coefficients of variation by the alternative reference methodologies were met at sampling stations located 9 and 14 stack diameters from the flow entrance, but not at a location that was 1.5 diameters downstream from the inlet. Experiments were conducted to characterize the transmission of 10-micron aerodynamic diameter liquid aerosol particles through three types of sampling probes. The transmission ratio (ratio of aerosol concentration at the probe exit plane to the concentration in the free stream) was 107% for a 113 L min-1 (4-cfm) anisokinetic shrouded probe, but only 20% for an isokinetic probe that follows the existing EPA standard requirements. A specially designed isokinetic probe showed a transmission ratio of 63%. The shrouded probe performance would conform to the alternative reference methodologies criteria; however, the isokinetic probes would not.

  1. Preset pivotal tool holder

    DOEpatents

    Asmanes, Charles

    1979-01-01

    A tool fixture is provided for precise pre-alignment of a radiused edge cutting tool in a tool holder relative to a fixed reference pivot point established on said holder about which the tool holder may be selectively pivoted relative to the fixture base member to change the contact point of the tool cutting edge with a workpiece while maintaining the precise same tool cutting radius relative to the reference pivot point.

  2. The multiple personalities of Watson and Crick strands.

    PubMed

    Cartwright, Reed A; Graur, Dan

    2011-02-08

    In genetics it is customary to refer to double-stranded DNA as containing a "Watson strand" and a "Crick strand." However, there seems to be no consensus in the literature on the exact meaning of these two terms, and the many usages contradict one another as well as the original definition. Here, we review the history of the terminology and suggest retaining a single sense that is currently the most useful and consistent. The Saccharomyces Genome Database defines the Watson strand as the strand which has its 5'-end at the short-arm telomere and the Crick strand as its complement. The Watson strand is always used as the reference strand in their database. Using this as the basis of our standard, we recommend that Watson and Crick strand terminology only be used in the context of genomics. When possible, the centromere or other genomic feature should be used as a reference point, dividing the chromosome into two arms of unequal lengths. Under our proposal, the Watson strand is standardized as the strand whose 5'-end is on the short arm of the chromosome, and the Crick strand as the one whose 5'-end is on the long arm. Furthermore, the Watson strand should be retained as the reference (plus) strand in a genomic database. This usage not only makes the determination of Watson and Crick unambiguous, but also allows unambiguous selection of reference strands for genomics. This article was reviewed by John M. Logsdon, Igor B. Rogozin (nominated by Andrey Rzhetsky), and William Martin.

  3. Isolation and selection of suitable reference genes for real-time PCR analyses in the skeletal muscle of the fine flounder in response to nutritional status: assessment and normalization of gene expression of growth-related genes.

    PubMed

    Fuentes, Eduardo N; Safian, Diego; Valdés, Juan Antonio; Molina, Alfredo

    2013-08-01

    In the present study, different reference genes were isolated, and their stability in the skeletal muscle of fine flounder subjected to different nutritional states was assessed using geNorm and NormFinder. The combinations between 18S and ActB; Fau and 18S; and Fau and Tubb were chosen as the most stable gene combinations in feeding, long-term fasting and refeeding, and short-term refeeding conditions, respectively. In all periods, ActB was identified as the single least stable gene. Subsequently, the expression of the myosin heavy chain (MYH) and the insulin-like growth factor-I receptor (IGF-IR) was assessed. A large variation in MYH and IGF-IR expression was found depending on the reference gene that was chosen for normalizing the expression of both genes. Using the most stable reference genes, mRNA levels of MYH decreased and IGF-IR increased during fasting, with both returning to basal levels during refeeding. However, the drop in mRNA levels for IGF-IR occurred during short-term refeeding, in contrast with the observed events in the expression of MYH, which occurred during long-term refeeding. The present study highlights the vast differences incurred when using unsuitable versus suitable reference genes for normalizing gene expression, pointing out that normalization without proper validation could result in a bias of gene expression.

  4. Method for Correcting Control Surface Angle Measurements in Single Viewpoint Photogrammetry

    NASA Technical Reports Server (NTRS)

    Burner, Alpheus W. (Inventor); Barrows, Danny A. (Inventor)

    2006-01-01

    A method of determining a corrected control surface angle for use in single viewpoint photogrammetry to correct control surface angle measurements affected by wing bending. First and second visual targets are spaced apart from one another on a control surface of an aircraft wing. The targets are positioned at a semispan distance along the aircraft wing. A reference target separation distance is determined using single viewpoint photogrammetry for a "wind off" condition. An apparent target separation distance is then computed for "wind on." The difference between the reference and apparent target separation distances is minimized by recomputing the single viewpoint photogrammetric solution for incrementally changed values of target semispan distances. A final single viewpoint photogrammetric solution is then generated that uses the corrected semispan distance that produced the minimized difference between the reference and apparent target separation distances. The final single viewpoint photogrammetric solution set is used to determine the corrected control surface angle.
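
    The minimization step can be sketched as a one-dimensional search over trial semispan values; `apparent_separation` below is a hypothetical stand-in for recomputing the single viewpoint photogrammetric solution at a trial semispan and is not part of the patented method's interface.

    ```python
    from scipy.optimize import minimize_scalar

    def corrected_semispan(d_ref, apparent_separation, y_guess, window=0.5):
        # Find the semispan at which the recomputed target separation matches
        # the wind-off reference separation d_ref. apparent_separation(y) is a
        # user-supplied callable standing in for the full photogrammetric solution.
        res = minimize_scalar(lambda y: (apparent_separation(y) - d_ref) ** 2,
                              bounds=(y_guess - window, y_guess + window),
                              method="bounded")
        return res.x

    # Toy stand-in: computed separation scales with the assumed semispan, so the
    # search recovers the semispan that reproduces the wind-off separation.
    d_ref, y_true = 0.200, 4.80                    # metres, illustrative values
    toy = lambda y: d_ref * y / y_true
    print(corrected_semispan(d_ref, toy, y_guess=5.0))   # close to 4.80
    ```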

  5. Can prospect theory explain risk-seeking behavior by terminally ill patients?

    PubMed

    Rasiel, Emma B; Weinfurt, Kevin P; Schulman, Kevin A

    2005-01-01

    Patients with life-threatening conditions sometimes appear to make risky treatment decisions as their condition declines, contradicting the risk-averse behavior predicted by expected utility theory. Prospect theory accommodates such decisions by describing how individuals evaluate outcomes relative to a reference point and how they exhibit risk-seeking behavior over losses relative to that point. The authors show that a patient's reference point for his or her health is a key factor in determining which treatment option the patient selects, and they examine under what circumstances the more risky option is selected. The authors argue that patients' reference points may take time to adjust following a change in diagnosis, with implications for predicting under what circumstances a patient may select experimental or conventional therapies or select no treatment.

  6. Computing Relative Free Energies of Solvation using Single Reference Thermodynamic Integration Augmented with Hamiltonian Replica Exchange.

    PubMed

    Khavrutskii, Ilja V; Wallqvist, Anders

    2010-11-09

    This paper introduces an efficient single-topology variant of Thermodynamic Integration (TI) for computing relative transformation free energies in a series of molecules with respect to a single reference state. The presented TI variant, which we refer to as Single-Reference TI (SR-TI), combines well-established molecular simulation methodologies into a practical computational tool. Augmented with Hamiltonian Replica Exchange (HREX), the SR-TI variant can deliver enhanced sampling in select degrees of freedom. The utility of the SR-TI variant is demonstrated in calculations of relative solvation free energies for a series of benzene derivatives with increasing complexity. Notably, the SR-TI variant with the HREX option provides converged results in a challenging case of an amide molecule with a high (13-15 kcal/mol) barrier for internal cis/trans interconversion using simulation times of only 1 to 4 ns.
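
    For orientation, the final quadrature step shared by TI calculations of this kind can be sketched as below; the λ schedule and per-window averages are hypothetical, and the HREX enhancement only changes how those averages are sampled, not this integration step.

    ```python
    import numpy as np

    def ti_free_energy(lambdas, dudl_means):
        # Thermodynamic integration: Delta F = integral_0^1 <dU/d lambda> d lambda,
        # evaluated here with the trapezoidal rule over the simulated lambda windows.
        return np.trapz(np.asarray(dudl_means, float), np.asarray(lambdas, float))

    # Hypothetical per-window averages (kcal/mol), for illustration only:
    lam = np.linspace(0.0, 1.0, 11)
    dudl = 12.0 * (1.0 - lam) - 4.0 * lam          # made-up smooth profile
    print(f"Delta F = {ti_free_energy(lam, dudl):.2f} kcal/mol")
    ```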

  7. RAINE Public Communities

    EPA Pesticide Factsheets

    The file geodatabase (fgdb) contains the New England Town Boundaries and information related specifically to the Resilience and Adaptation in New England (RAINE) web application. This includes data tables relating to particular aspects of towns, notably features, funding, impacts, partners, plans, and programs (refer to V_MAP_STATIC tables). The New England Town Boundary coverage is a compilation of coverages received from the six New England State GIS Offices. The EPA New England GIS Center appended the coverages together into a single file and generated attributes to link to the Facility Identification Online system. These feature class points represent the communities (Communities in gdb) and featured RAINE communities (RAINE_Communities_201609), which contain more detailed information that is contained within the included data tables.

  8. Wideband laser locking to an atomic reference with modulation transfer spectroscopy.

    PubMed

    Negnevitsky, V; Turner, L D

    2013-02-11

    We demonstrate that conventional modulated spectroscopy apparatus, used for laser frequency stabilization in many atomic physics laboratories, can be enhanced to provide a wideband lock delivering deep suppression of frequency noise across the acoustic range. Using an acousto-optic modulator driven with an agile oscillator, we show that wideband frequency modulation of the pump laser in modulation transfer spectroscopy produces the unique single lock-point spectrum previously demonstrated with electro-optic phase modulation. We achieve a laser lock with 100 kHz feedback bandwidth, limited by our laser control electronics. This bandwidth is sufficient to reduce frequency noise by 30 dB across the acoustic range and narrows the imputed linewidth by a factor of five.

  9. Fiber-Optic Pyrometer with Optically Powered Switch for Temperature Measurements

    PubMed Central

    Pérez-Prieto, Sandra; López-Cardona, Juan D.; Blanco, Enrique; Moreno-López, Jorge

    2018-01-01

    We report the experimental results on a new infrared fiber-optic pyrometer for very localized and high-speed temperature measurements ranging from 170 to 530 °C using low-noise photodetectors and high-gain transimpedance amplifiers with a single gain mode in the whole temperature range. We also report a shutter based on an optical fiber switch which is optically powered to provide a reference signal in an optical fiber pyrometer measuring from 200 to 550 °C. The tests show the potential of remotely powering via optical means a 300 mW power-hungry optical switch at a distance of 100 m, avoiding any electromagnetic interference close to the measuring point. PMID:29415477

  10. Fiber-Optic Pyrometer with Optically Powered Switch for Temperature Measurements.

    PubMed

    Vázquez, Carmen; Pérez-Prieto, Sandra; López-Cardona, Juan D; Tapetado, Alberto; Blanco, Enrique; Moreno-López, Jorge; Montero, David S; Lallana, Pedro C

    2018-02-06

    We report the experimental results on a new infrared fiber-optic pyrometer for very localized and high-speed temperature measurements ranging from 170 to 530 °C using low-noise photodetectors and high-gain transimpedance amplifiers with a single gain mode in the whole temperature range. We also report a shutter based on an optical fiber switch which is optically powered to provide a reference signal in an optical fiber pyrometer measuring from 200 to 550 °C. The tests show the potential of remotely powering via optical means a 300 mW power-hungry optical switch at a distance of 100 m, avoiding any electromagnetic interference close to the measuring point.

  11. Multipoint vibrometry with dynamic and static holograms.

    PubMed

    Haist, T; Lingel, C; Osten, W; Winter, M; Giesen, M; Ritter, F; Sandfort, K; Rembe, C; Bendel, K

    2013-12-01

    We report on two multipoint vibrometers with user-adjustable position of the measurement spots. Both systems are using holograms for beam deflection. The measurement is based on heterodyne interferometry with a frequency difference of 5 MHz between reference and object beam. One of the systems uses programmable positioning of the spots in the object volume but is limited concerning the light efficiency. The other system is based on static holograms in combination with mechanical adjustment of the measurement spots and does not have such a general efficiency restriction. Design considerations are given and we show measurement results for both systems. In addition, we analyze the sensitivity of the systems which is a major limitation compared to single point scanning systems.

  12. The hospital library and the enterprise portal.

    PubMed

    Bandy, Margaret; Fosmire, Brenda

    2004-01-01

    At Exempla Healthcare, the medical librarians and the e-Business staff are creating an enterprise information portal where medical reference is targeted, easily accessible, and supported by the medical librarians. A team approach has been essential. The e-Business department has worked for nine months coordinating technical challenges required to support personalization, targeted communications, and a single access point for clinical patient data. Exempla medical librarians have been involved in the definition and design of information access needs from the very beginning. The Clinicians Portal was the first developed, with other customizations to follow. Many challenges remain, but by definition, a portal is designed to be flexible and adapt to the changing needs of the enterprise it supports.

  13. Leave or Stay as a Risky Choice: Effects of Salary Reference Points and Anchors on Turnover Intention

    PubMed Central

    Xiong, Guanxing; Wang, X. T.; Li, Aimei

    2018-01-01

    Within a risky choice framework, we examine how multiple reference points and anchors regulate pay perception and turnover intentions in real organizational contexts with actual employees. We hypothesize that the salary range is psychologically demarcated into four regions by three reference points: the minimum requirement (MR), the status quo (SQ), and the goal (G). Three studies were conducted: Study 1 analyzed the relationship between turnover intention and the subjective likelihood of falling into each of the four expected salary regions; Study 2 tested the mediating effect of pay satisfaction on salary reference point-dependent turnover intention; and Study 3 explored the anchoring effect of estimated peer salaries. The results show that turnover intention was higher in the region below MR or between SQ and G, but lower in the region above G or between MR and SQ. That is, turnover intention can be high even after a salary raise, if the raise falls below the salary goal (i.e., leaving for a lack of opportunity), and low even after a salary loss, if the expected salary is still above the MR (i.e., staying for security). In addition, turnover intention was regulated by pay satisfaction and peer salaries. In conclusion, turnover intention can be viewed as a risky choice adapted to salary reference points. PMID:29872409

  14. Assessing the accuracy of TDR-based water leak detection system

    NASA Astrophysics Data System (ADS)

    Fatemi Aghda, S. M.; GanjaliPour, K.; Nabiollahi, K.

    2018-03-01

    The use of TDR systems to detect leakage locations in underground pipes has developed in recent years. In this approach, a bi-wire is installed in parallel with the underground pipes and serves as a TDR sensor, which largely overcomes the limitations of the traditional method of acoustic leak positioning. The TDR-based leak detection method is relatively accurate when the TDR sensor is in contact with water at just one point, and researchers have been working to improve its accuracy in recent years. In this study, the ability of the TDR method was evaluated when multiple leakage points appear simultaneously. For this purpose, several laboratory tests were conducted. In these tests, in order to simulate leakage points, the TDR sensor was put in contact with water at several points; the number and dimensions of the simulated leakage points were then gradually increased. The results showed that as the number and dimensions of the leakage points increase, the error rate of the TDR-based water leak detection system increases. Based on the results obtained from the laboratory tests, the authors developed a method to improve the accuracy of TDR-based leak detection systems. To do that, they defined a few reference points on the TDR sensor. These points were created by increasing the distance between the two conductors of the TDR sensor and are easily identifiable in the TDR waveform. In order to calculate the exact distance of the leakage point, the authors developed an equation based on the reference points. A comparison between the results obtained from both tests (with and without reference points) showed that using the method and equation developed by the authors can significantly improve the accuracy of positioning the leakage points.
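
    A minimal sketch of how reference points of known distance can correct the time-to-distance mapping by piecewise-linear interpolation; the reflection times and spacings are hypothetical, and the interpolation stands in for, rather than reproduces, the authors' equation.

    ```python
    import numpy as np

    def leak_distance(t_leak, t_refs, d_refs):
        # Estimate the cable distance of a leak reflection by piecewise-linear
        # interpolation between reference points of known distance that are
        # visible in the TDR waveform. t_refs must be monotonically increasing.
        return float(np.interp(t_leak, np.asarray(t_refs, float), np.asarray(d_refs, float)))

    # Hypothetical reference points every 10 m and a leak reflection at t = 173 ns;
    # apparent propagation slows where the sensor is wetted, hence the uneven times.
    t_refs = [0.0, 95.0, 191.0, 288.0, 386.0]      # ns
    d_refs = [0.0, 10.0, 20.0, 30.0, 40.0]         # m
    print(f"leak at {leak_distance(173.0, t_refs, d_refs):.1f} m")
    ```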

  15. Compact Integration of a GSM-19 Magnetic Sensor with High-Precision Positioning using VRS GNSS Technology

    PubMed Central

    Martín, Angel; Padín, Jorge; Anquela, Ana Belén; Sánchez, Juán; Belda, Santiago

    2009-01-01

    Magnetic data consist of a sequence of collected points with spatial coordinates and magnetic information. The spatial location of these points needs to be as exact as possible in order to develop a precise interpretation of magnetic anomalies. GPS is a valuable tool for accomplishing this objective, especially if the RTK approach is used. In this paper the VRS (Virtual Reference Station) technique is introduced as a new approach for real-time positioning of magnetic sensors. The main advantages of the VRS approach are, firstly, that only a single GPS receiver is needed (no base station is necessary), reducing field work and equipment costs. Secondly, VRS can operate at distances of 50–70 km from the reference stations without degrading accuracy. A compact integration of a GSM-19 magnetometer sensor with a geodetic GPS antenna is presented; this integration does not diminish the operational flexibility of the original magnetometer and can work with the VRS approach. The coupled devices were tested in marshlands around Gandia, a city located approximately 100 km south of Valencia (Spain), in an area thought to be the site of a Roman cemetery. The results obtained show adequate geometry and high-precision positioning for the structures to be studied (a comparison with the original low-precision GPS of the magnetometer is presented). Finally, the results of the magnetic survey are of great interest for archaeological purposes.

  16. Spatial Updating Strategy Affects the Reference Frame in Path Integration.

    PubMed

    He, Qiliang; McNamara, Timothy P

    2018-06-01

    This study investigated how spatial updating strategies affected the selection of reference frames in path integration. Participants walked an outbound path consisting of three successive waypoints in a featureless environment and then pointed to the first waypoint. We manipulated the alignment of participants' final heading at the end of the outbound path with their initial heading to examine the adopted reference frame. We assumed that the initial heading defined the principal reference direction in an allocentric reference frame. In Experiment 1, participants were instructed to use a configural updating strategy and to monitor the shape of the outbound path while they walked it. Pointing performance was best when the final heading was aligned with the initial heading, indicating the use of an allocentric reference frame. In Experiment 2, participants were instructed to use a continuous updating strategy and to keep track of the location of the first waypoint while walking the outbound path. Pointing performance was equivalent regardless of the alignment between the final and the initial headings, indicating the use of an egocentric reference frame. These results confirmed that people could employ different spatial updating strategies in path integration (Wiener, Berthoz, & Wolbers Experimental Brain Research 208(1) 61-71, 2011), and suggested that these strategies could affect the selection of the reference frame for path integration.

  17. Freezing Transition Studies Through Constrained Cell Model Simulation

    NASA Astrophysics Data System (ADS)

    Nayhouse, Michael; Kwon, Joseph Sang-Il; Heng, Vincent R.; Amlani, Ankur M.; Orkoulas, G.

    2014-10-01

    In the present work, a simulation method based on cell models is used to deduce the fluid-solid transition of a system of particles that interact via a pair potential. The simulations are implemented under constant-pressure conditions on a generalized version of the constrained cell model. The constrained cell model is constructed by dividing the volume into Wigner-Seitz cells and confining each particle in a single cell. This model is a special case of a more general cell model which is formed by introducing an additional field variable that controls the number of particles per cell and, thus, the relative stability of the solid against the fluid phase. High field values force configurations with one particle per cell and thus favor the solid phase. Fluid-solid coexistence on the isotherm that corresponds to a reduced temperature of 2 is determined from constant-pressure simulations of the generalized cell model using tempering and histogram reweighting techniques. The entire fluid-solid phase boundary is determined through a thermodynamic integration technique based on histogram reweighting, using the previous coexistence point as a reference point. The vapor-liquid phase diagram is obtained from constant-pressure simulations of the unconstrained system using tempering and histogram reweighting. The phase diagram of the system is found to contain a stable critical point and a triple point. The phase diagram of the corresponding constrained cell model is also found to contain both a stable critical point and a triple point.
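    As a minimal illustration of the histogram reweighting step mentioned above (the standard single-histogram technique, not the paper's specific constrained-cell implementation), the following sketch re-estimates the mean of an observable at a nearby inverse temperature from samples collected at another. The synthetic data and temperatures are illustrative.

    # Single-histogram reweighting sketch: re-estimate <A> at beta_new from
    # configurations sampled at beta_sim, valid only for small extrapolations.
    import numpy as np

    def reweight_mean(energies, observable, beta_sim, beta_new):
        d_beta = beta_new - beta_sim
        log_w = -d_beta * np.asarray(energies)
        log_w -= log_w.max()                # avoid overflow
        w = np.exp(log_w)
        return np.sum(w * np.asarray(observable)) / np.sum(w)

    rng = np.random.default_rng(0)
    E = rng.normal(100.0, 5.0, size=10000)            # synthetic sampled energies
    A = 0.1 * E + rng.normal(0.0, 0.2, size=10000)    # synthetic observable
    print(reweight_mean(E, A, beta_sim=0.5, beta_new=0.52))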

  18. A National Trial on Differences in Cerebral Perfusion Pressure Values by Measurement Location.

    PubMed

    McNett, Molly M; Bader, Mary Kay; Livesay, Sarah; Yeager, Susan; Moran, Cristina; Barnes, Arianna; Harrison, Kimberly R; Olson, DaiWai M

    2018-04-01

    Cerebral perfusion pressure (CPP) is a key parameter in management of brain injury with suspected impaired cerebral autoregulation. CPP is calculated by subtracting intracranial pressure (ICP) from mean arterial pressure (MAP). Despite consensus on importance of CPP monitoring, substantial variations exist on anatomical reference points used to measure arterial MAP when calculating CPP. This study aimed to identify differences in CPP values based on measurement location when using phlebostatic axis (PA) or tragus (Tg) as anatomical reference points. The secondary study aim was to determine impact of differences on patient outcomes at discharge. This was a prospective, repeated measures, multi-site national trial. Adult ICU patients with neurological injury necessitating ICP and CPP monitoring were consecutively enrolled from seven sites. Daily MAP/ICP/CPP values were gathered with the arterial transducer at the PA, followed by the Tg as anatomical reference points. A total of 136 subjects were enrolled, resulting in 324 paired observations. There were significant differences for CPP when comparing values obtained at PA and Tg reference points (p < 0.000). Differences remained significant in repeated measures model when controlling for clinical factors (mean CPP-PA = 80.77, mean CPP-Tg = 70.61, p < 0.000). When categorizing CPP as binary endpoint, 18.8% of values were identified as adequate with PA values, yet inadequate with CPP values measured at the Tg. Findings identify numerical differences for CPP based on anatomical reference location and highlight importance of a standard reference point for both clinical practice and future trials to limit practice variations and heterogeneity of findings.
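    The CPP = MAP - ICP relationship quoted above can be illustrated with a short sketch of how the arterial transducer reference level shifts the computed value. The 0.74 mmHg per cm factor is the usual cmH2O-to-mmHg conversion; the 10 cm offset between the phlebostatic axis and the tragus is purely illustrative, not a value from the study.

    # Minimal sketch of CPP = MAP - ICP and of the effect of the transducer level.
    MMHG_PER_CM_H2O = 0.74          # hydrostatic conversion factor

    def cpp(map_mmhg, icp_mmhg):
        return map_mmhg - icp_mmhg

    def map_at_other_level(map_measured, vertical_offset_cm):
        """MAP referenced to a point `vertical_offset_cm` ABOVE the original level."""
        return map_measured - vertical_offset_cm * MMHG_PER_CM_H2O

    map_pa, icp = 90.0, 12.0                      # transducer at the phlebostatic axis
    map_tg = map_at_other_level(map_pa, 10.0)     # tragus assumed ~10 cm higher (illustrative)
    print(cpp(map_pa, icp), cpp(map_tg, icp))     # 78.0 vs ~70.6 mmHg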

  19. Flatness metrology based on small-angle deflectometric procedures with electronic tiltmeters

    NASA Astrophysics Data System (ADS)

    Ehret, G.; Laubach, S.; Schulz, M.

    2017-06-01

    The measurement of optical flats, e.g. synchrotron or XFEL mirrors, with single-nanometer topography uncertainty is still challenging. At PTB, we apply small-angle deflectometry to this task, in which the angle between the direction of the beam sent to the surface and the beam detected is small. Conventional deflectometric systems measure the surface angle with autocollimators whose light beam also represents the straightness reference. An advanced flatness metrology system was recently implemented at PTB that separates the straightness reference task from the angle detection task. We call it 'Exact Autocollimation Deflectometric Scanning' (EADS) because the specimen is slightly tilted in such a way that at every scanning position the specimen is 'exactly' perpendicular to the reference light beam directed by a pentaprism to the surface under test. The tilt angle of the surface is then measured with an additional autocollimator. The advantage of the EADS method is that the two tasks (straightness reference and measurement of surface slope) are separated and each of these can be optimized independently. The idea presented in this paper is to replace this additional autocollimator by one or more electro-mechanical tiltmeters, which are typically faster and have a higher resolution than highly accurate commercially available autocollimators. We investigate the point stability and the linearity of a highly accurate electronic tiltmeter. The pros and cons of using tiltmeters in flatness metrology are discussed.

  20. The Absolute Stability Analysis in Fuzzy Control Systems with Parametric Uncertainties and Reference Inputs

    NASA Astrophysics Data System (ADS)

    Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei

    This study analyzes the absolute stability in P and PD type fuzzy logic control systems with both certain and uncertain linear plants. Stability analysis includes the reference input, actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibriums of error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. In our work, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant with the parametric robust Popov criterion unlike previous works. Moreover, a fuzzy current controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is specified with various equilibrium points of view in the simulation example. Finally, the comparisons are also given to show the effectiveness of the analysis method.

  1. A comparison of fisheries biological reference points estimated from temperature-specific multi-species and single-species climate-enhanced stock assessment models

    NASA Astrophysics Data System (ADS)

    Holsman, Kirstin K.; Ianelli, James; Aydin, Kerim; Punt, André E.; Moffitt, Elizabeth A.

    2016-12-01

    Multi-species statistical catch at age models (MSCAA) can quantify interacting effects of climate and fisheries harvest on species populations, and evaluate management trade-offs for fisheries that target several species in a food web. We modified an existing MSCAA model to include temperature-specific growth and predation rates and applied the modified model to three fish species, walleye pollock (Gadus chalcogrammus), Pacific cod (Gadus macrocephalus) and arrowtooth flounder (Atheresthes stomias), from the eastern Bering Sea (USA). We fit the model to data from 1979 through 2012, with and without trophic interactions and temperature effects, and use projections to derive single- and multi-species biological reference points (BRP and MBRP, respectively) for fisheries management. The multi-species model achieved a higher overall goodness of fit to the data (i.e. lower negative log-likelihood) for pollock and Pacific cod. Variability from water temperature typically resulted in 5-15% changes in spawning, survey, and total biomasses, but did not strongly impact recruitment estimates or mortality. Despite this, inclusion of temperature in projections did have a strong effect on BRPs, including recommended yield, which were higher in single-species models for Pacific cod and arrowtooth flounder that included temperature compared to the same models without temperature effects. While the temperature-driven multi-species model resulted in higher yield MBRPs for arrowtooth flounder than the same model without temperature, we did not observe the same patterns in multi-species models for pollock and Pacific cod, where variability between harvest scenarios and predation greatly exceeded temperature-driven variability in yield MBRPs. Annual predation on juvenile pollock (primarily cannibalism) in the multi-species model was 2-5 times the annual harvest of adult fish in the system, thus predation represents a strong control on population dynamics that exceeds temperature-driven changes to growth and is attenuated through harvest-driven reductions in predator populations. Additionally, although we observed differences in spawning biomasses at the accepted biological catch (ABC) proxy between harvest scenarios and single- and multi-species models, discrepancies in spawning stock biomass estimates did not translate to large differences in yield. We found that multi-species models produced higher estimates of combined yield for aggregate maximum sustainable yield (MSY) targets than single-species models, but were more conservative than single-species models when individual MSY targets were used, with the exception of scenarios where minimum biomass thresholds were imposed. Collectively our results suggest that climate and trophic drivers can interact to affect MBRPs, but for prey species with high predation rates, trophic- and management-driven changes may exceed direct effects of temperature on growth and predation. Additionally, MBRPs are not inherently more conservative than single-species BRPs. This framework provides a basis for the application of MSCAA models for tactical ecosystem-based fisheries management decisions under changing climate conditions.
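    The MSCAA framework itself is too large to sketch here; as a minimal, single-species illustration of how a yield-based reference point arises, the following uses the classical Schaefer surplus-production model, for which MSY = rK/4, B_MSY = K/2 and F_MSY = r/2. This is explicitly not the paper's multi-species, temperature-dependent model, and the parameter values are illustrative.

    # Schaefer surplus-production reference points for dB/dt = r*B*(1 - B/K) - F*B.
    def schaefer_reference_points(r, K):
        """Return (MSY, B_MSY, F_MSY) for the Schaefer model."""
        msy = r * K / 4.0
        b_msy = K / 2.0
        f_msy = r / 2.0
        return msy, b_msy, f_msy

    print(schaefer_reference_points(r=0.4, K=1.0e6))  # (100000.0, 500000.0, 0.2)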

  2. Ab initio based potential energy surface and kinetics study of the OH + NH3 hydrogen abstraction reaction.

    PubMed

    Monge-Palacios, M; Rangel, C; Espinosa-Garcia, J

    2013-02-28

    A full-dimensional analytical potential energy surface (PES) for the OH + NH3 → H2O + NH2 gas-phase reaction was developed based exclusively on high-level ab initio calculations. This reaction presents a very complicated shape with wells along the reaction path. Using a wide spectrum of properties of the reactive system (equilibrium geometries, vibrational frequencies, and relative energies of the stationary points, topology of the reaction path, and points on the reaction swath) as reference, the resulting analytical PES reproduces reasonably well the input ab initio information obtained at the coupled-cluster single double triple (CCSD(T)) = FULL/aug-cc-pVTZ//CCSD(T) = FC/cc-pVTZ single point level, which represents a severe test of the new surface. As a first application, on this analytical PES we perform an extensive kinetics study using variational transition-state theory with semiclassical transmission coefficients over a wide temperature range, 200-2000 K. The forward rate constants reproduce the experimental measurements, while the reverse ones are slightly underestimated. However, the detailed analysis of the experimental equilibrium constants (from which the reverse rate constants are obtained) permits us to conclude that the experimental reverse rate constants must be re-evaluated. Another severe test of the new surface is the analysis of the kinetic isotope effects (KIEs), which were not included in the fitting procedure. The KIEs reproduce the values obtained from ab initio calculations in the common temperature range, although unfortunately no experimental information is available for comparison.

  3. Assessment Study of Using Online (CSRS) GPS-PPP Service for Mapping Applications in Egypt

    NASA Astrophysics Data System (ADS)

    Abd-Elazeem, Mohamed; Farah, Ashraf; Farrag, Farrag

    2011-09-01

    Many applications in navigation, land surveying, land title definitions and mapping have been made simpler and more precise due to the accessibility of Global Positioning System (GPS) data, and thus the demand for using advanced GPS techniques in surveying applications has become essential. The differential technique was the only source of accurate positioning for many years, and remained in use despite its cost. The precise point positioning (PPP) technique is a viable alternative to the differential positioning method, in which a user with a single receiver can attain positioning accuracy at the centimeter or decimeter scale. In recent years, many organizations introduced online (GPS-PPP) processing services capable of determining accurate geocentric positions using GPS observations. These services provide the user with receiver coordinates in free and unlimited access formats via the internet. This paper investigates the accuracy of the Canadian Spatial Reference System (CSRS) Precise Point Positioning (PPP) (CSRS-PPP) service supervised by the Geodetic Survey Division (GSD), Canada. Single frequency static GPS observations have been collected at three points covering time spans of 60, 90 and 120 minutes. These three observed sites form baselines of 1.6, 7, and 10 km, respectively. In order to assess the CSRS-PPP accuracy, the discrepancies between the CSRS-PPP estimates and the regular differential GPS solutions were computed. The obtained results illustrate that the PPP produces a horizontal error at the scale of a few decimeters; this is accurate enough to serve many mapping applications in developing countries with savings in both cost and experienced labor.
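    The discrepancy computation described above can be illustrated by converting small latitude/longitude/height differences between a PPP solution and a differential solution into local north/east/up offsets in metres. The simple spherical-Earth approximation and the coordinates below are illustrative, not the paper's processing.

    # Express PPP-minus-DGPS coordinate differences as local NEU offsets (metres).
    import math

    R_EARTH = 6371000.0  # mean Earth radius (m), spherical approximation

    def discrepancy_neu(lat_ref, lon_ref, h_ref, lat_ppp, lon_ppp, h_ppp):
        lat0 = math.radians(lat_ref)
        d_north = math.radians(lat_ppp - lat_ref) * R_EARTH
        d_east = math.radians(lon_ppp - lon_ref) * R_EARTH * math.cos(lat0)
        d_up = h_ppp - h_ref
        return d_north, d_east, math.hypot(d_north, d_east), d_up

    # Example: a few-decimetre horizontal offset, comparable to the level reported above.
    print(discrepancy_neu(30.0000000, 31.0000000, 25.00,
                          30.0000020, 31.0000030, 25.18))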

  4. Nonimaging optical illumination system

    DOEpatents

    Winston, R.; Ries, H.

    1996-12-17

    A nonimaging illumination optical device for producing a selected far field illuminance over an angular range. The optical device includes a light source, a light reflecting surface, and a family of light edge rays defined along a reference line with the reflecting surface defined in terms of the reference line as a parametric function R(t) where t is a scalar parameter position and R(t)=k(t)+Du(t) where k(t) is a parameterization of the reference line, and D is a distance from a point on the reference line to the reflection surface along the desired edge ray through the point. 35 figs.

  5. Nonimaging optical illumination system

    DOEpatents

    Winston, R.; Ries, H.

    1998-10-06

    A nonimaging illumination optical device for producing a selected far field illuminance over an angular range. The optical device includes a light source, a light reflecting surface, and a family of light edge rays defined along a reference line, with the reflecting surface defined in terms of the reference line as a parametric function R(t), where t is a scalar parameter position and R(t)=k(t)+Du(t), where k(t) is a parameterization of the reference line, and D is a distance from a point on the reference line to the reflection surface along the desired edge ray through the point. 35 figs.
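    The parametric construction R(t) = k(t) + D u(t) quoted in the two patent records above can be evaluated numerically once a reference line, an edge-ray direction and a distance function are given. In the patents, D and u follow from the edge-ray design conditions; the stand-ins below are arbitrary illustrative choices only.

    # Heavily hedged sketch: evaluate R(t) = k(t) + D(t)*u(t) for illustrative k, u, D.
    import numpy as np

    def reflector_points(k, u, D, t_values):
        """Evaluate R(t) = k(t) + D(t) * u(t) for an array of parameter values."""
        return np.array([np.asarray(k(t)) + D(t) * np.asarray(u(t)) for t in t_values])

    # Illustrative stand-ins: a straight reference line along x, edge rays tilted by
    # 30 degrees, and a linearly growing distance.
    k = lambda t: (t, 0.0)
    u = lambda t: (np.sin(np.radians(30.0)), np.cos(np.radians(30.0)))
    D = lambda t: 1.0 + 0.1 * t

    print(reflector_points(k, u, D, np.linspace(0.0, 2.0, 5)))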

  6. Factors affecting delays in first trimester pregnancy termination services in New Zealand.

    PubMed

    Silva, Martha; McNeill, Rob; Ashton, Toni

    2011-04-01

    To identify the factors affecting the timeliness of services in first trimester abortion services in New Zealand. Primary data were collected from all patients attending nine abortion clinics between February and May 2009. The outcome measured was the delay between the first visit with a referring doctor and the date of the abortion procedure. Patient records (n=2,950) were audited to determine the timeline between the first point of entry to the health system and the date of abortion. Women were also invited to fill out a questionnaire identifying personal factors affecting access to services (n=1,086, response rate = 36.8%). Women who went to a private clinic had a significantly shorter delay than those who went to public clinics. Controlling for clinic type, women who went to clinics that offered medical abortions or clinics that offered single-day services experienced less delay. Also, women who had more than one visit with their referring doctor experienced a greater delay than those who had a single visit. The earlier in pregnancy women sought services, the longer the delay. Women's decision-making did not have a significant effect on delay. Several clinic-level and systemic factors are significantly associated with delay in first trimester abortion services. In order to ensure the best physical and emotional outcomes, timeliness of services must improve. © 2011 The Authors. ANZJPH © 2011 Public Health Association of Australia.

  7. Sharing reference data and including cows in the reference population improve genomic predictions in Danish Jersey.

    PubMed

    Su, G; Ma, P; Nielsen, U S; Aamand, G P; Wiggans, G; Guldbrandtsen, B; Lund, M S

    2016-06-01

    Small reference populations limit the accuracy of genomic prediction in numerically small breeds, such as the Danish Jersey. The objective of this study was to investigate two approaches to improve genomic prediction by increasing the size of the reference population in Danish Jersey. The first approach was to include North American Jersey bulls in the Danish Jersey reference population. The second was to genotype cows and use them as reference animals. The validation of genomic prediction was carried out on bulls and cows, respectively. In the validation on bulls, about 300 Danish bulls (depending on traits) born in 2005 and later were used as validation data, and the reference populations were: (1) about 1050 Danish bulls, (2) about 1050 Danish bulls and about 1150 US bulls. In the validation on cows, about 3000 Danish cows from 87 young half-sib families were used as validation data, and the reference populations were: (1) about 1250 Danish bulls, (2) about 1250 Danish bulls and about 1150 US bulls, (3) about 1250 Danish bulls and about 4800 cows, (4) about 1250 Danish bulls, 1150 US bulls and 4800 Danish cows. A genomic best linear unbiased prediction model was used to predict breeding values. De-regressed proofs were used as response variables. In the validation on bulls for eight traits, the joint DK-US bull reference population led to higher reliability of genomic prediction than the DK bull reference population for six traits, but not for fertility and longevity. Averaged over the eight traits, the gain was 3 percentage points. In the validation on cows for six traits (fertility and longevity were not available), the gain from including US bulls in the reference population was 6.6 percentage points on average over the six traits, and the gain from including cows was 8.2 percentage points. However, the gains from cows and US bulls were not additive; the total gain of including both US bulls and Danish cows was 10.5 percentage points. The results indicate that sharing reference data and including cows in the reference population are efficient approaches to increase the reliability of genomic prediction. Therefore, genomic selection is promising for numerically small populations.

  8. Nanoparticles and metrology: a comparison of methods for the determination of particle size distributions

    NASA Astrophysics Data System (ADS)

    Coleman, Victoria A.; Jämting, Åsa K.; Catchpoole, Heather J.; Roy, Maitreyee; Herrmann, Jan

    2011-10-01

    Nanoparticles and products incorporating nanoparticles are a growing branch of nanotechnology industry. They have found a broad market, including the cosmetic, health care and energy sectors. Accurate and representative determination of particle size distributions in such products is critical at all stages of the product lifecycle, extending from quality control at point of manufacture to environmental fate at the point of disposal. Determination of particle size distributions is non-trivial, and is complicated by the fact that different techniques measure different quantities, leading to differences in the measured size distributions. In this study we use both mono- and multi-modal dispersions of nanoparticle reference materials to compare and contrast traditional and novel methods for particle size distribution determination. The methods investigated include ensemble techniques such as dynamic light scattering (DLS) and differential centrifugal sedimentation (DCS), as well as single particle techniques such as transmission electron microscopy (TEM) and microchannel resonator (ultra high-resolution mass sensor).

  9. Exact Solution to Stationary Onset of Convection Due to Surface Tension Variation in a Multicomponent Fluid Layer With Interfacial Deformation

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond Lee; McCaughan, Frances E.

    1998-01-01

    Stationary onset of convection due to surface tension variation in an unbounded multicomponent fluid layer is considered. Surface deformation is included and general flux boundary conditions are imposed on the stratifying agencies (temperature/composition) disturbance equations. Exact solutions are obtained to the general N-component problem for both finite and infinitesimal wavenumbers. Long wavelength instability may coexist with a finite wavelength instability for certain sets of parameter values, often referred to as frontier points. For an impermeable/insulated upper boundary and a permeable/conductive lower boundary, frontier boundaries are computed in the space of Bond number, Bo, versus Crispation number, Cr, over the range 5 × 10⁻⁷ ≤ Bo ≤ 1. The loci of frontier points in (Bo, Cr) space for different values of N, diffusivity ratios, and Marangoni numbers collapsed to a single curve in (Bo, D*Cr) space, where D* is a Marangoni-number-weighted diffusivity ratio.

  10. Isoelectric points of viruses.

    PubMed

    Michen, B; Graule, T

    2010-08-01

    Viruses as well as other (bio-)colloids possess a pH-dependent surface charge in polar media such as water. This electrostatic charge determines the mobility of the soft particle in an electric field and thus governs its colloidal behaviour which plays a major role in virus sorption processes. The pH value at which the net surface charge switches its sign is referred to as the isoelectric point (abbreviations: pI or IEP) and is a characteristic parameter of the virion in equilibrium with its environmental water chemistry. Here, we review the IEP measurements of viruses that replicate in hosts of kingdom plantae, bacteria and animalia. IEPs of viruses are found in pH range from 1.9 to 8.4; most frequently, they are measured in a band of 3.5 < IEP < 7. However, the data appear to be scattered widely within single virus species. This discrepancy is discussed and should be considered when IEP values are used to account for virus sorption processes.

  11. Apparently abnormal Wechsler Memory Scale index score patterns in the normal population.

    PubMed

    Carrasco, Roman Marcus; Grups, Josefine; Evans, Brittney; Simco, Edward; Mittenberg, Wiley

    2015-01-01

    Interpretation of the Wechsler Memory Scale-Fourth Edition may involve examination of multiple memory index score contrasts and similar comparisons with Wechsler Adult Intelligence Scale-Fourth Edition ability indexes. Standardization sample data suggest that 15-point differences between any specific pair of index scores are relatively uncommon in normal individuals, but these base rates refer to a comparison between a single pair of indexes rather than multiple simultaneous comparisons among indexes. This study provides normative data for the occurrence of multiple index score differences calculated by using Monte Carlo simulations and validated against standardization data. Differences of 15 points between any two memory indexes or between memory and ability indexes occurred in 60% and 48% of the normative sample, respectively. Wechsler index score discrepancies are normally common and therefore not clinically meaningful when numerous such comparisons are made. Explicit prior interpretive hypotheses are necessary to reduce the number of index comparisons and associated false-positive conclusions. Monte Carlo simulation accurately predicts these false-positive rates.
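    The Monte Carlo approach described above can be sketched as follows: simulate correlated index scores (mean 100, SD 15) and count how often at least one pairwise difference reaches 15 points. The number of indexes (5) and the common intercorrelation (0.6) below are illustrative assumptions, not the study's values, so the resulting rate is not the study's base rate.

    # Monte Carlo estimate of the rate of "any 15-point gap" among correlated indexes.
    import numpy as np

    def rate_any_pair_diff(n_indexes=5, rho=0.6, threshold=15, n_sims=100000, seed=0):
        rng = np.random.default_rng(seed)
        cov = np.full((n_indexes, n_indexes), rho) + np.eye(n_indexes) * (1.0 - rho)
        cov *= 15.0 ** 2                                   # SD of 15 for each index
        scores = rng.multivariate_normal(np.full(n_indexes, 100.0), cov, size=n_sims)
        max_diff = scores.max(axis=1) - scores.min(axis=1)
        return np.mean(max_diff >= threshold)

    print(rate_any_pair_diff())  # proportion of simulated profiles with any 15-point gap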

  12. On the interplay between phonon-boundary scattering and phonon-point-defect scattering in SiGe thin films

    NASA Astrophysics Data System (ADS)

    Iskandar, A.; Abou-Khalil, A.; Kazan, M.; Kassem, W.; Volz, S.

    2015-03-01

    This paper provides theoretical understanding of the interplay between the scattering of phonons by boundaries and by point defects in SiGe thin films. It also provides a tool for the design of SiGe-based high-efficiency thermoelectric devices. The contributions of the alloy composition, grain size, and film thickness to the phonon scattering rate are described by a model for the thermal conductivity based on the single-mode relaxation time approximation. The exact Boltzmann equation, including the spatial dependence of the phonon distribution function, is solved to yield an expression for the rate at which phonons scatter from the thin film boundaries in the presence of the other phonon scattering mechanisms. The rates at which phonons scatter via normal and resistive three-phonon processes are calculated using perturbation theory, taking into account the dispersion of confined acoustic phonons in a two-dimensional structure. The vibrational parameters of the model are deduced from the dispersion of confined acoustic phonons as functions of temperature and crystallographic direction. The accuracy of the model is demonstrated with reference to recent experimental investigations of the thermal conductivity of single-crystal and polycrystalline SiGe films. The paper describes the strength of each of the phonon scattering mechanisms over the full temperature range. Furthermore, it predicts the alloy composition and film thickness that lead to minimum thermal conductivity in a single-crystal SiGe film, and the alloy composition and grain size that lead to minimum thermal conductivity in a polycrystalline SiGe film.

  13. Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction

    PubMed Central

    Berveglieri, Adilson; Liang, Xinlian; Honkavaara, Eija

    2017-01-01

    This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras. PMID:29207468

  14. Vertical Optical Scanning with Panoramic Vision for Tree Trunk Reconstruction.

    PubMed

    Berveglieri, Adilson; Tommaselli, Antonio M G; Liang, Xinlian; Honkavaara, Eija

    2017-12-02

    This paper presents a practical application of a technique that uses a vertical optical flow with a fisheye camera to generate dense point clouds from a single planimetric station. Accurate data can be extracted to enable the measurement of tree trunks or branches. The images that are collected with this technique can be oriented in photogrammetric software (using fisheye models) and used to generate dense point clouds, provided that some constraints on the camera positions are adopted. A set of images was captured in a forest plot in the experiments. Weighted geometric constraints were imposed in the photogrammetric software to calculate the image orientation, perform dense image matching, and accurately generate a 3D point cloud. The tree trunks in the scenes were reconstructed and mapped in a local reference system. The accuracy assessment was based on differences between measured and estimated trunk diameters at different heights. Trunk sections from an image-based point cloud were also compared to the corresponding sections that were extracted from a dense terrestrial laser scanning (TLS) point cloud. Cylindrical fitting of the trunk sections allowed the assessment of the accuracies of the trunk geometric shapes in both clouds. The average difference between the cylinders that were fitted to the photogrammetric cloud and those to the TLS cloud was less than 1 cm, which indicates the potential of the proposed technique. The point densities that were obtained with vertical optical scanning were 1/3 less than those that were obtained with TLS. However, the point density can be improved by using higher resolution cameras.
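    The two records above assess trunk geometry by fitting cylinders to point-cloud sections. As a simplified, hedged illustration, the sketch below performs an algebraic (Kasa) least-squares circle fit to the x-y coordinates of points in a thin height slice to estimate a diameter; this is a per-slice simplification of the cylindrical fitting described, using synthetic data.

    # Kasa circle fit to a synthetic trunk slice; returns centre and radius.
    import numpy as np

    def fit_circle(xy):
        """Return (cx, cy, radius) from an (N, 2) array of slice points."""
        x, y = xy[:, 0], xy[:, 1]
        A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
        b = x ** 2 + y ** 2
        (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        r = np.sqrt(c + cx ** 2 + cy ** 2)
        return cx, cy, r

    # Synthetic slice: points on a 15 cm radius trunk with 5 mm noise.
    rng = np.random.default_rng(1)
    theta = rng.uniform(0.0, 2.0 * np.pi, 200)
    pts = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta)])
    pts += rng.normal(0.0, 0.005, (200, 2))
    cx, cy, r = fit_circle(pts)
    print(f"estimated diameter ~ {2 * r:.3f} m")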

  15. "Keep your eyes on the prize": reference points and racial differences in assessing progress toward equality.

    PubMed

    Eibach, Richard P; Ehrlinger, Joyce

    2006-01-01

    White Americans tend to perceive greater progress toward racial equality than do ethnic minorities. Correlational evidence (Study 1) and two experimental manipulations of framing (Studies 2 and 3) supported the hypothesis that this perception gap is associated with different reference points the two groups spontaneously use to assess progress, with Whites anchoring on comparisons with the past and ethnic minorities anchoring on ideal standards. Consistent with the hypothesis that the groups anchor on different reference points, the gap in perceptions of progress was affected by the time participants spent deliberating about the topic (Study 4). Implications for survey methods and political conflict are discussed.

  16. Reference-guided de novo assembly approach improves genome reconstruction for related species.

    PubMed

    Lischer, Heidi E L; Shimizu, Kentaro K

    2017-11-10

    The development of next-generation sequencing has made it possible to sequence whole genomes at a relatively low cost. However, de novo genome assemblies remain challenging due to short read length, missing data, repetitive regions, polymorphisms and sequencing errors. As more and more genomes are sequenced, reference-guided assembly approaches can be used to assist the assembly process. However, previous methods mostly focused on the assembly of other genotypes within the same species. We adapted and extended a reference-guided de novo assembly approach, which enables the use of a related reference sequence to guide the genome assembly. In order to compare and evaluate de novo and our reference-guided de novo assembly approaches, we used a simulated data set of a repetitive and heterozygous plant genome. The extended reference-guided de novo assembly approach almost always outperforms the corresponding de novo assembly program, even when a reference of a different species is used. Similar improvements can be observed in high- and low-coverage situations. In addition, we show that a single evaluation metric, like the widely used N50 length, is not enough to properly rate assemblies, as it does not always point to the best assembly when evaluated with other criteria. Therefore, we used the summed z-scores of 36 different statistics to evaluate the assemblies. The combination of reference mapping and de novo assembly provides a powerful tool to improve genome reconstruction by integrating information from a related genome. Our extension of the reference-guided de novo assembly approach enables the application of this strategy not only within but also between related species. Finally, the evaluation of genome assemblies is often not straightforward, as the truth is not known. Thus one should always use a combination of evaluation metrics that try to assess not only the continuity but also the accuracy of an assembly.
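    The summed z-score ranking mentioned above can be sketched as follows: standardize each evaluation metric across the candidate assemblies, flip the sign of metrics where lower is better, and sum per assembly. The metric names and values below are illustrative only, not the 36 statistics used in the study.

    # Summed z-score ranking of assemblies over several evaluation metrics.
    import numpy as np

    def summed_z_scores(metrics, lower_is_better=()):
        """metrics: dict of metric name -> sequence of values (one per assembly)."""
        total = None
        for name, values in metrics.items():
            v = np.asarray(values, dtype=float)
            z = (v - v.mean()) / v.std(ddof=0)
            if name in lower_is_better:
                z = -z                      # penalize metrics where smaller is better
            total = z if total is None else total + z
        return total

    metrics = {
        "N50": [48000, 61000, 55000],
        "genome_fraction": [0.91, 0.95, 0.93],
        "misassemblies": [210, 180, 260],   # lower is better
    }
    print(summed_z_scores(metrics, lower_is_better={"misassemblies"}))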

  17. Proposed catalog of the neuroanatomy and the stratified anatomy for the 361 acupuncture points of 14 channels.

    PubMed

    Chapple, Will

    2013-10-01

    In spite of the extensive research on acupuncture mechanisms, no comprehensive and systematic peer-reviewed reference list of the stratified anatomical and the neuroanatomical features of all 361 acupuncture points exists. This study creates a reference list of the neuroanatomy and the stratified anatomy for each of the 361 acupuncture points on the 14 classical channels and for 34 extra points. Each acupuncture point was individually assessed to relate the point's location to anatomical and neuroanatomical features. The design of the catalogue is intended to be useful for any style of acupuncture or Oriental medicine treatment modality. The stratified anatomy was divided into shallow, intermediate and deep insertion. A separate stratified anatomy was presented for different needle angles and directions. The following are identified for each point: additional specifications for point location, the stratified anatomy, motor innervation, cutaneous nerve and sensory innervation, dermatomes, Langer's lines, and somatotopic organization in the primary sensory and motor cortices. Acupuncture points for each muscle, dermatome and myotome are also reported. This reference list can aid clinicians, practitioners and researchers in furthering the understanding and accurate practice of acupuncture. Additional research on the anatomical variability around acupuncture points, the frequency of needle contact with an anatomical structure in a clinical setting, and conformational imaging should be done to verify this catalogue. Copyright © 2013. Published by Elsevier B.V.

  18. Development of a computer program data base of a navigation aid environment for simulated IFR flight and landing studies

    NASA Technical Reports Server (NTRS)

    Bergeron, H. P.; Haynie, A. T.; Mcdede, J. B.

    1980-01-01

    A general aviation single pilot instrument flight rule simulation capability was developed. Problems experienced by single pilots flying in IFR conditions were investigated. The simulation required a three dimensional spatial navaid environment of a flight navigational area. A computer simulation of all the navigational aids plus 12 selected airports located in the Washington/Norfolk area was developed. All programmed locations in the list were referenced to a Cartesian coordinate system with the origin located at a specified airport's reference point. All navigational aids with their associated frequencies, call letters, locations, and orientations plus runways and true headings are included in the data base. The simulation included a TV displayed out-the-window visual scene of country and suburban terrain and a scaled model runway complex. Any of the programmed runways, with all its associated navaids, can be referenced to a runway on the airport in this visual scene. This allows a simulation of a full mission scenario including breakout and landing.

  19. Adding polarimetric imaging to depth map using improved light field camera 2.0 structure

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanzhe; Yang, Yi; Du, Shaojun; Cao, Yu

    2017-06-01

    Polarization imaging plays an important role in various fields, especially skylight navigation and target identification, where the imaging system is typically required to offer high resolution, a broad band, and a single-lens structure. This paper describes such an imaging system based on the light field camera 2.0 structure, which can calculate the polarization state and the depth (distance from a reference plane) for every object point within a single shot. This structure, comprising a modified main lens, a multi-quadrant Polaroid, a honeycomb-like micro-lens array, and a high-resolution CCD, is equivalent to an "eye array" with three or more polarization-imaging "glasses" in front of each "eye". Therefore, depth can be calculated by matching the relative offset of corresponding patches on neighboring "eyes", while the polarization state is obtained from their relative intensity differences, and the resolutions of the two measurements are approximately equal. An application to navigation under clear sky shows that this method has high accuracy and strong robustness.
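    The relative-intensity step can be illustrated with the standard linear Stokes formulas, assuming the polarizer quadrants provide intensities behind 0°, 45°, 90° and 135° analyzers for a matched object patch (the specific quadrant angles are an assumption, not taken from the record above).

    # Recover degree and angle of linear polarization from four analyzer intensities:
    # S0 = I0 + I90, S1 = I0 - I90, S2 = I45 - I135.
    import math

    def linear_polarization(i0, i45, i90, i135):
        s0 = i0 + i90
        s1 = i0 - i90
        s2 = i45 - i135
        dolp = math.hypot(s1, s2) / s0          # degree of linear polarization
        aolp = 0.5 * math.atan2(s2, s1)         # angle of linear polarization (rad)
        return dolp, math.degrees(aolp)

    # Example: partially polarized skylight patch (illustrative values).
    print(linear_polarization(i0=0.70, i45=0.55, i90=0.30, i135=0.45))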

  20. Gaussian process regression to accelerate geometry optimizations relying on numerical differentiation

    NASA Astrophysics Data System (ADS)

    Schmitz, Gunnar; Christiansen, Ove

    2018-06-01

    We study how geometry optimizations that rely on numerical gradients can be accelerated by means of Gaussian Process Regression (GPR). The GPR interpolates a local potential energy surface on which the structure is optimized. It is found to be efficient to combine results at a low computational level (HF or MP2) with the GPR-calculated gradient of the difference between the low-level method and the target method, which in this study is a variant of explicitly correlated Coupled Cluster Singles and Doubles with perturbative Triples correction, CCSD(F12*)(T). Overall convergence is achieved when both the potential and the geometry are converged. Compared to numerical gradient-based algorithms, the number of required single point calculations is reduced. Although the interpolation introduces an error, the optimized structures are sufficiently close to the minimum of the target level of theory, meaning that the reference and predicted minima differ energetically only in the μEh regime.
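    The delta-learning idea described above can be sketched in one dimension: fit a Gaussian process to the difference between a cheap and an expensive energy along a coordinate, then predict corrected energies as cheap plus the GPR correction. The two analytic functions below are illustrative stand-ins for the low-level and target electronic-structure methods, not actual quantum-chemistry calls.

    # GPR on the low-level/target energy difference, evaluated along one coordinate.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    cheap = lambda r: 0.5 * (r - 1.0) ** 2                   # stand-in for HF/MP2
    expensive = lambda r: 0.5 * (r - 1.05) ** 2 + 0.01 * r   # stand-in for CCSD(F12*)(T)

    r_train = np.linspace(0.8, 1.3, 6).reshape(-1, 1)
    delta = expensive(r_train).ravel() - cheap(r_train).ravel()

    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-10).fit(r_train, delta)

    r_test = np.linspace(0.85, 1.25, 9).reshape(-1, 1)
    corrected = cheap(r_test).ravel() + gpr.predict(r_test)
    print(np.abs(corrected - expensive(r_test).ravel()).max())  # small interpolation error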

  1. Experiment data for determination of uncertainty of two-phase mass flow rate in a Semiscale Mod-3 system spool piece at Karlsruhe Kernforschungzentrum. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephens, A.G.

    1979-06-01

    Steady state, steam-water testing of a Semiscale Mod-3 system instrumented spool piece was accomplished in the Gesellschaft fur Kernforschung (GfK) facility at Karlsruhe Kernforschungzentrum, West Germany. The testing was undertaken to determine the accuracy of spool piece, two-phase mass flow rate, inferential measurements by comparison with upstream single-phase reference measurements. Other two-phase measurements were also made to aid in understanding the flow conditions and to implement data reduction. A total of 132 single- and two-phase test points were acquired, covering pressures from 0.4 to 7.5 MPa, flow rates from 0.5 to 4.9 kg/s, and two-phase mixture qualities from 1.0 to 83% in the 66.7 mm inside diameter spool piece. The report includes a detailed description of the hardware and software and a tabulation of the data.

  2. Different equation-of-motion coupled cluster methods with different reference functions: The formyl radical

    NASA Astrophysics Data System (ADS)

    Kuś, Tomasz; Bartlett, Rodney J.

    2008-09-01

    The doublet and quartet excited states of the formyl radical have been studied by the equation-of-motion (EOM) coupled cluster (CC) method. The Sz spin-conserving singles and doubles (EOM-EE-CCSD) and singles, doubles, and triples (EOM-EE-CCSDT) approaches, as well as the spin-flipped singles and doubles (EOM-SF-CCSD) method have been applied, subject to unrestricted Hartree-Fock (HF), restricted open-shell HF, and quasirestricted HF references. The structural parameters, vertical and adiabatic excitation energies, and harmonic vibrational frequencies have been calculated. The issue of the reference function choice for the spin-flipped (SF) method and its impact on the results has been discussed using the experimental data and theoretical results available. The results show that if the appropriate reference function is chosen so that target states differ from the reference by only single excitations, then EOM-EE-CCSD and EOM-SF-CCSD methods give a very good description of the excited states. For the states that have a non-negligible contribution of the doubly excited configurations one is able to use the SF method with such a reference function, that in most cases the performance of the EOM-SF-CCSD method is better than that of the EOM-EE-CCSD approach.

  3. Basin stability measure of different steady states in coupled oscillators

    NASA Astrophysics Data System (ADS)

    Rakshit, Sarbendu; Bera, Bidesh K.; Majhi, Soumen; Hens, Chittaranjan; Ghosh, Dibakar

    2017-04-01

    In this report, we investigate the stabilization of saddle fixed points in coupled oscillators where the individual oscillators exhibit saddle fixed points. The coupled oscillators may have two structurally different types of suppressed states, namely amplitude death and oscillation death. The stabilization of the saddle equilibrium point refers to the amplitude death state, where oscillations cease and all the oscillators converge to the single stable steady state via an inverse pitchfork bifurcation. Due to the multistability of the oscillation death states, linear stability theory fails to analyze the stability of such states analytically, so we quantify all the states by basin stability measurement, which is a universal, nonlocal, nonlinear concept related to the volume of the basins of attraction. We also observe multi-clustered oscillation death states in a random network and measure them using the basin stability framework. To explore such phenomena we choose a network of coupled Duffing-Holmes and Lorenz oscillators which interact through mean-field coupling. We investigate how the basin stability of the different steady states depends on the mean-field density and the coupling strength. We also analytically derive stability conditions for the different steady states and confirm them by rigorous bifurcation analysis.
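    The basin stability measure referred to above is the fraction of random initial conditions, drawn from a reference region of state space, that converge to a given steady state. The sketch below estimates it for a simple bistable double-well system (dx/dt = x - x³), which stands in here for the coupled Duffing-Holmes / Lorenz network of the paper; sampling region and tolerances are illustrative.

    # Monte Carlo estimate of basin stability for the steady state x = +1 of dx/dt = x - x^3.
    import numpy as np

    def basin_stability(target=1.0, n_samples=2000, t_end=50.0, dt=0.01, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-2.0, 2.0, size=n_samples)     # reference volume of initial states
        for _ in range(int(t_end / dt)):               # explicit Euler integration
            x += dt * (x - x ** 3)
        return np.mean(np.abs(x - target) < 1e-3)      # fraction converged to the target

    print(basin_stability(target=1.0))    # ~0.5 for the symmetric double-well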

  4. Reproducibility of tract segmentation between sessions using an unsupervised modelling-based approach.

    PubMed

    Clayden, Jonathan D; Storkey, Amos J; Muñoz Maniega, Susana; Bastin, Mark E

    2009-04-01

    This work describes a reproducibility analysis of scalar water diffusion parameters, measured within white matter tracts segmented using a probabilistic shape modelling method. In common with previously reported neighbourhood tractography (NT) work, the technique optimises seed point placement for fibre tracking by matching the tracts generated using a number of candidate points against a reference tract, which is derived from a white matter atlas in the present study. No direct constraints are applied to the fibre tracking results. An Expectation-Maximisation algorithm is used to fully automate the procedure and to make dramatically more efficient use of data than earlier NT methods. Within-subject and between-subject variances for fractional anisotropy and mean diffusivity within the tracts are then separated using a random effects model. We find test-retest coefficients of variation (CVs) similar to those reported in another study using landmark-guided single seed points, and subject-to-subject CVs similar to those of a constraint-based multiple-ROI method. We conclude that our approach is at least as effective as other methods for tract segmentation using tractography, whilst also having some additional benefits, such as its provision of a goodness-of-match measure for each segmentation.

  5. Alternative Attitude Commanding and Control for Precise Spacecraft Landing

    NASA Technical Reports Server (NTRS)

    Singh, Gurkirpal

    2004-01-01

    A report proposes an alternative method of control for precision landing on a remote planet. In the traditional method, the attitude of a spacecraft is required to track a commanded translational acceleration vector, which is generated at each time step by solving a two-point boundary value problem. No requirement of continuity is imposed on the acceleration. The translational acceleration does not necessarily vary smoothly. Tracking of a non-smooth acceleration causes the vehicle attitude to exhibit undesirable transients and poor pointing stability behavior. In the alternative method, the two-point boundary value problem is not solved at each time step. A smooth reference position profile is computed. The profile is recomputed only when the control errors get sufficiently large. The nominal attitude is still required to track the smooth reference acceleration command. A steering logic is proposed that controls the position and velocity errors about the reference profile by perturbing the attitude slightly about the nominal attitude. The overall pointing behavior is therefore smooth, greatly reducing the degree of pointing instability.

  6. Performance of Mercury Triple-Point Cells Made in Brazil

    NASA Astrophysics Data System (ADS)

    Petkovic, S. G.; Santiago, J. F. N.; Filho, R. R.; Teixeira, R. N.; Santos, P. R. F.

    2003-09-01

    Fixed-point cells are primary standards in the ITS-90. They contain reference material with a purity of 99.999 % or more. The gallium in a melting-point cell, for example, can reach a purity of 99.99999 %. This level of purity is not easy to obtain. However, substances like water and mercury can be purified by means of distillation and chemical procedures. This paper presents the results of mercury triple-point cells made in Brazil that were directly compared to a mercury triple-point cell of 99.999 % purity. This reference cell, made by Isotech (England), was previously compared to cells from CENAM (Mexico) and NRC (Canada), and the maximum deviation found was approximately 0.4 mK. The purification stage started with a sample of mercury 99.3 % pure, and the repeated use of both mechanical and chemical processes led to a purification grade considered good enough for the calibration of standard platinum resistance thermometers. The purification procedures, the method of construction of the cells, the laboratory facilities, the comparison results and the uncertainty budget are described in this paper. All of the cells tested have a triple-point temperature within 0.25 mK of the triple-point temperature of the Inmetro reference cell.

  7. Archaeomagnetic studies in central Mexico—dating of Mesoamerican lime-plasters

    NASA Astrophysics Data System (ADS)

    Hueda-Tanabe, Y.; Soler-Arechalde, A. M.; Urrutia-Fucugauchi, J.; Barba, L.; Manzanilla, L.; Rebolledo-Vieyra, M.; Goguitchaichvili, A.

    2004-11-01

    For the first time, results of an archaeomagnetic study of unburned lime-plasters from Teotihuacan and Tenochtitlan in central Mesoamerica are presented. Plasters, made of lime, lithic clasts and water, appear during the Formative Period and were used for a variety of purposes in floors, sculptures, ceramics and supporting media for mural paintings in the Oaxaca and Maya area. In Central Mexico, ground volcanic scoria rich in iron minerals is incorporated into the lime-plaster mixture. Samples were selected from two archaeological excavation projects in the Teopancazco residential compound of Teotihuacan and the large multi-stage structure of Templo Mayor in Tenochtitlan, where chronological information is available. The intensity of remanent magnetization (natural remanent magnetization (NRM)) and low-field susceptibility are weak, reflecting a low relative content of magnetic minerals. NRM directions are well grouped and alternating field demagnetization shows single- or two-component magnetizations. Rock-magnetic experiments point to fine-grained titanomagnetites with pseudo-single domain behavior. Anisotropy of magnetic susceptibility (AMS) measurements document a depositional fabric, with minimum AMS axes normal to the free surface. Characteristic mean site directions were correlated to the paleosecular variation curve for Mesoamerica. Data from Templo Mayor reflect recent tilting of the structures. Teopancazco mean site declinations show good correspondence with the reference curve, in agreement with the radiocarbon dating. Dates for four stages of Teotihuacan occupancy based on the study of lime-plasters range from AD 350 to 550. A date for a possible Mazapa occupation around AD 850 or 950 is also suggested based on the archaeomagnetic correlation. The archaeomagnetic record of a plaster floor in Teopancazco differed from the other nearby sites, pointing to a thermoremanent magnetization; comparison with the reference curve suggests dates around AD 1375 or 1415. The burning of the stucco floor likely occurred during a late re-occupation of the site by the Aztecs. Our results suggest that archaeomagnetic dating can be applied to lime-plasters, which are materials widely employed in Mesoamerica.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besemer, A; Marsh, I; Bednarz, B

    Purpose: The calculation of 3D internal dose in targeted radionuclide therapy requires the acquisition and temporal coregistration of serial PET/CT or SPECT/CT images. This work investigates the dosimetric impact of different temporal coregistration methods commonly used for 3D internal dosimetry. Methods: PET/CT images of four mice were acquired at 1, 24, 48, 72, 96, and 144 hrs post-injection of ¹²⁴I-CLR1404. The therapeutic ¹³¹I-CLR1404 absorbed dose rate (ADR) was calculated at each time point using a Geant4-based MC dosimetry platform with three temporal image coregistration methods: (1) no coregistration (NC), (2) whole body sequential CT-CT affine coregistration (WBAC), and (3) individual sequential ROI-ROI affine coregistration (IRAC). For NC, only the ROI mean ADR was integrated to obtain ROI mean doses. For WBAC, the CT at each time point was coregistered to a single reference CT. The CT transformations were applied to the corresponding ADR images and the dose was calculated on a voxel basis within the whole CT volume. For IRAC, each individual ROI was isolated and sequentially coregistered to a single reference ROI. The ROI transformations were applied to the corresponding ADR images and the dose was calculated on a voxel basis within the ROI volumes. Results: The percent differences in the ROI mean doses were as large as 109%, 88%, and 32% when comparing the WBAC vs. IRAC, NC vs. IRAC, and NC vs. WBAC methods, respectively. The CoV in the mean dose between all three methods ranged from 2-36%. The pronounced curvature of the spinal cord was not adequately coregistered using WBAC, which resulted in large differences between the WBAC and IRAC methods. Conclusion: The method used for temporal image coregistration can result in large differences in 3D internal dosimetry calculations. Care must be taken to choose the most appropriate method depending on the imaging conditions, clinical site, and specific application. This work is partially funded by NIH Grant R21 CA198392-01.

  9. Streamflow loss quantification for groundwater flow modeling using a wading-rod-mounted acoustic Doppler current profiler in a headwater stream

    NASA Astrophysics Data System (ADS)

    Pflügl, Christian; Hoehn, Philipp; Hofmann, Thilo

    2017-04-01

    Irrespective of the availability of various field measurement and modeling approaches, the quantification of interactions between surface water and groundwater systems remains associated with high uncertainty. Such uncertainty in stream-aquifer interaction can lead to significant misinterpretation of the local water budget and water quality. Because stream discharge rates typically vary considerably over time, it is desirable to reduce both the duration and the uncertainty of streamflow measurements. Streamflow measurements according to the velocity-area method were performed along reaches of a losing-disconnected, subalpine headwater stream using a 2-dimensional, wading-rod-mounted acoustic Doppler current profiler (ADCP). The method was chosen, since the stream morphology does not allow boat-mounted setups, to reduce uncertainty compared to conventional, single-point streamflow measurements of similar duration. Reach-averaged stream loss rates were subsequently quantified between 12 cross sections. They enabled the delineation of strongly infiltrating stream reaches and their differentiation from insignificantly infiltrating reaches. Furthermore, a total of 10 near-stream observation wells were constructed and/or equipped with pressure and temperature loggers. The time series of near-stream groundwater temperature data were cross-correlated with stream temperature time series to yield supporting qualitative information on the delineation of infiltrating reaches. Subsequently, as a reference parameterization, the hydraulic conductivity and specific yield of a numerical, steady-state model of groundwater flow in the unconfined glaciofluvial aquifer adjacent to the stream were inversely determined by incorporating the inferred stream loss rates. The same inversion procedure was then run with synthetic sets of infiltration rates, resembling increasing levels of uncertainty associated with single-point streamflow measurements of comparable duration. The volume-weighted mean of the respective parameter distribution within 200 m of the stream periphery deviated increasingly from the reference parameterization as the deviation of the infiltration rates increased.
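    The velocity-area discharge computation underlying the measurements above can be sketched with the midsection rule, Q = Σ vᵢ·dᵢ·wᵢ over measurement verticals, where each vertical is assigned half the distance to each neighbour as its width. Station positions, depths and velocities below are illustrative, not from the study.

    # Midsection velocity-area discharge from depth-averaged velocities at verticals.
    import numpy as np

    def midsection_discharge(stations_m, depths_m, velocities_ms):
        x = np.asarray(stations_m, dtype=float)
        d = np.asarray(depths_m, dtype=float)
        v = np.asarray(velocities_ms, dtype=float)
        # width assigned to each vertical: half the distance to each neighbour
        edges = np.concatenate([[x[0]], 0.5 * (x[1:] + x[:-1]), [x[-1]]])
        widths = np.diff(edges)
        return float(np.sum(v * d * widths))

    stations = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]         # distance across the stream (m)
    depths = [0.0, 0.20, 0.35, 0.40, 0.32, 0.18, 0.0]      # m
    velocities = [0.0, 0.25, 0.40, 0.45, 0.35, 0.20, 0.0]  # depth-averaged (m/s)
    print(f"Q = {midsection_discharge(stations, depths, velocities):.3f} m^3/s")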

  10. Reference point indentation is insufficient for detecting alterations in traditional mechanical properties of bone under common experimental conditions.

    PubMed

    Krege, John B; Aref, Mohammad W; McNerny, Erin; Wallace, Joseph M; Organ, Jason M; Allen, Matthew R

    2016-06-01

    Reference point indentation (RPI) was developed as a novel method to assess mechanical properties of bone in vivo, yet it remains unclear what aspects of bone dictate changes/differences in RPI-based parameters. The main RPI parameter, indentation distance increase (IDI), has been proposed to be inversely related to the ability of bone to form/tolerate damage. The goal of this work was to explore the relationship between RPI parameters and traditional mechanical properties under varying experimental conditions (drying and ashing bones to increase brittleness, demineralizing bones and soaking in raloxifene to decrease brittleness). Beams were machined from cadaveric bone, pre-tested with RPI, subjected to experimental manipulation, post-tested with RPI, and then subjected to four-point bending to failure. Drying and ashing significantly reduced RPI's IDI, as well as ultimate load (UL), and energy absorption measured from bending tests. Demineralization increased IDI with minimal change to bending properties. Ex vivo soaking in raloxifene had no effect on IDI but tended to enhance post-yield behavior at the structural level. These data challenge the paradigm of an inverse relationship between IDI and bone toughness, both through correlation analyses and in the individual experiments where divergent patterns of altered IDI and mechanical properties were noted. Based on these results, we conclude that RPI measurements alone, as compared to bending tests, are insufficient to reach conclusions regarding mechanical properties of bone. This proves problematic for the potential clinical use of RPI measurements in determining fracture risk for a single patient, as it is not currently clear that there is an IDI, or even a trend of IDI, that can determine clinically relevant changes in tissue properties that may contribute to whole bone fracture resistance. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Accuracy assessment of the global TanDEM-X Digital Elevation Model with GPS data

    NASA Astrophysics Data System (ADS)

    Wessel, Birgit; Huber, Martin; Wohlfart, Christian; Marschalk, Ursula; Kosmann, Detlev; Roth, Achim

    2018-05-01

    The primary goal of the German TanDEM-X mission is the generation of a highly accurate and global Digital Elevation Model (DEM) with global accuracies of at least 10 m absolute height error (linear 90% error). The global TanDEM-X DEM acquired with single-pass SAR interferometry was finished in September 2016. This paper provides a unique accuracy assessment of the final TanDEM-X global DEM using two different GPS point reference data sets, which are distributed across all continents, to fully characterize the absolute height error. Firstly, the absolute vertical accuracy is examined by about three million globally distributed kinematic GPS (KGPS) points derived from 19 KGPS tracks covering a total length of about 66,000 km. Secondly, a comparison is performed with more than 23,000 "GPS on Bench Marks" (GPS-on-BM) points provided by the US National Geodetic Survey (NGS) scattered across 14 different land cover types of the US National Land Cover Database (NLCD). Both GPS comparisons prove an absolute vertical mean error of the TanDEM-X DEM smaller than ±0.20 m, a Root Mean Square Error (RMSE) smaller than 1.4 m and an excellent absolute 90% linear height error below 2 m. The RMSE values are sensitive to land cover types. For low vegetation the RMSE is ±1.1 m, whereas it is slightly higher for developed areas (±1.4 m) and for forests (±1.8 m). This validation confirms an outstanding absolute height error at the 90% confidence level of the global TanDEM-X DEM, outperforming the requirement by a factor of five. Due to its extensive and globally distributed reference data sets, this study is of considerable interest for scientific and commercial applications.
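
    For readers who want to reproduce the kind of error statistics quoted above (mean error, RMSE, and the 90% linear height error), the following minimal sketch computes them from a set of DEM-minus-GPS height differences; the differences here are synthetic, not TanDEM-X data.

    ```python
    import numpy as np

    # Minimal sketch of mean error, RMSE, and 90% linear error (LE90) computed
    # from DEM-minus-GPS height differences. The differences are synthetic.

    rng = np.random.default_rng(0)
    diff = rng.normal(loc=0.1, scale=1.2, size=10_000)   # synthetic height differences (m)

    mean_error = diff.mean()
    rmse = np.sqrt(np.mean(diff ** 2))
    le90 = np.percentile(np.abs(diff), 90)               # 90% linear (absolute) height error

    print(f"mean = {mean_error:+.2f} m, RMSE = {rmse:.2f} m, LE90 = {le90:.2f} m")
    ```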

  12. Reference point indentation is insufficient for detecting alterations in traditional mechanical properties of bone under common experimental conditions

    PubMed Central

    Krege, John B.; Aref, Mohammad W.; McNerny, Erin; Wallace, Joseph M.; Organ, Jason M.; Allen, Matthew R.

    2016-01-01

    Reference point indentation (RPI) was developed as a novel method to assess mechanical properties of bone in vivo, yet it remains unclear what aspects of bone dictate changes/differences in RPI-based parameters. The main RPI parameter, indentation distance increase (IDI), has been proposed to be inversely related to the ability of bone to form/tolerate damage. The goal of this work was to explore the relationship between RPI parameters and traditional mechanical properties under varying experimental conditions (drying and ashing bones to increase brittleness, demineralizing bones and soaking in raloxifene to decrease brittleness). Beams were machined from cadaveric bone, pre-tested with RPI, subjected to experimental manipulation, post-tested with RPI, and then subjected to four-point bending to failure. Drying and ashing significantly reduced RPI’s IDI, as well as ultimate load (UL), and energy absorption measured from bending tests. Demineralization increased IDI with minimal change to bending properties. Ex vivo soaking in raloxifene had no effect on IDI but tended to enhance post-yield behavior at the structural level. These data challenge the paradigm of an inverse relationship between IDI and bone toughness, both through correlation analyses and in the individual experiments where divergent patterns of altered IDI and mechanical properties were noted. Based on these results, we conclude that RPI measurements alone, as compared to bending tests, are insufficient to reach conclusions regarding mechanical properties of bone. This proves problematic for the potential clinical use of RPI measurements in determining fracture risk for a single patient, as it is not currently clear that there is an IDI, or even a trend of IDI, that can determine clinically relevant changes in tissue properties that may contribute to whole bone fracture resistance. PMID:27072518

  13. Universal Point of Care Testing for Lynch Syndrome in Patients with Upper Tract Urothelial Carcinoma.

    PubMed

    Metcalfe, Michael J; Petros, Firas G; Rao, Priya; Mork, Maureen E; Xiao, Lianchun; Broaddus, Russell R; Matin, Surena F

    2018-01-01

    Patients with Lynch syndrome are at risk for upper tract urothelial carcinoma. We sought to identify the incidence and most reliable means of point of care screening for Lynch syndrome in patients with upper tract urothelial carcinoma. A total of 115 consecutive patients with upper tract urothelial carcinoma without a history of Lynch syndrome were universally screened during followup from January 2013 through July 2016. We evaluated patient and family history using AMS (Amsterdam criteria) I and II, and tumor immunohistochemistry for mismatch repair proteins and microsatellite instability. Patients who were positive for AMS I/II, microsatellite instability or immunohistochemistry were classified as potentially having Lynch syndrome and referred for clinical genetic analysis and counseling. Patients with known Lynch syndrome served as positive controls. Of the 115 patients 16 (13.9%) screened positive for potential Lynch syndrome. Of these patients 7.0% met AMS II criteria, 11.3% had loss of at least 1 mismatch repair protein and 6.0% had high microsatellite instability. All 16 patients were referred for germline testing, 9 completed genetic analysis and counseling, and 6 were confirmed to have Lynch syndrome. All 7 patients with upper tract urothelial carcinoma who had a known history of Lynch syndrome were positive for AMS II criteria and at least a single mismatch repair protein loss while 5 of 6 had high microsatellite instability. We identified 13.9% of upper tract urothelial carcinoma cases as potential Lynch syndrome and 5.2% as confirmed Lynch syndrome at the point of care. These findings have important implications for universal screening of upper tract urothelial carcinoma, representing one of the highest rates of undiagnosed genetic disease in a urological cancer. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  14. Proprioceptive Interaction between the Two Arms in a Single-Arm Pointing Task.

    PubMed

    Kigawa, Kazuyoshi; Izumizaki, Masahiko; Tsukada, Setsuro; Hakuta, Naoyuki

    2015-01-01

    Proprioceptive signals coming from both arms are used to determine the perceived position of one arm in a two-arm matching task. Here, we examined whether the perceived position of one arm is affected by proprioceptive signals from the other arm in a one-arm pointing task in which participants specified the perceived position of an unseen reference arm with an indicator paddle. Both arms were hidden from the participant's view throughout the study. In Experiment 1, with both arms placed in front of the body, the participants received 70-80 Hz vibration to the elbow flexors of the reference arm (= right arm) to induce the illusion of elbow extension. This extension illusion was compared with that when the left arm elbow flexors were vibrated or not. The degree of the vibration-induced extension illusion of the right arm was reduced in the presence of left arm vibration. In Experiment 2, we found that this kinesthetic interaction between the two arms did not occur when the left arm was vibrated in an abducted position. In Experiment 3, the vibration-induced extension illusion of one arm was fully developed when this arm was placed at an abducted position, indicating that the brain receives increased proprioceptive input from a vibrated arm even if the arm was abducted. Our results suggest that proprioceptive interaction between the two arms occurs in a one-arm pointing task when the two arms are aligned with one another. The position sense of one arm measured using a pointer appears to include the influences of incoming information from the other arm when both arms were placed in front of the body and parallel to one another.

  15. Proprioceptive Interaction between the Two Arms in a Single-Arm Pointing Task

    PubMed Central

    Kigawa, Kazuyoshi; Izumizaki, Masahiko; Tsukada, Setsuro; Hakuta, Naoyuki

    2015-01-01

    Proprioceptive signals coming from both arms are used to determine the perceived position of one arm in a two-arm matching task. Here, we examined whether the perceived position of one arm is affected by proprioceptive signals from the other arm in a one-arm pointing task in which participants specified the perceived position of an unseen reference arm with an indicator paddle. Both arms were hidden from the participant’s view throughout the study. In Experiment 1, with both arms placed in front of the body, the participants received 70–80 Hz vibration to the elbow flexors of the reference arm (= right arm) to induce the illusion of elbow extension. This extension illusion was compared with that when the left arm elbow flexors were vibrated or not. The degree of the vibration-induced extension illusion of the right arm was reduced in the presence of left arm vibration. In Experiment 2, we found that this kinesthetic interaction between the two arms did not occur when the left arm was vibrated in an abducted position. In Experiment 3, the vibration-induced extension illusion of one arm was fully developed when this arm was placed at an abducted position, indicating that the brain receives increased proprioceptive input from a vibrated arm even if the arm was abducted. Our results suggest that proprioceptive interaction between the two arms occurs in a one-arm pointing task when the two arms are aligned with one another. The position sense of one arm measured using a pointer appears to include the influences of incoming information from the other arm when both arms were placed in front of the body and parallel to one another. PMID:26317518

  16. Numerical simulation of the effect of groundwater salinity on artificial freezing wall in coastal area

    NASA Astrophysics Data System (ADS)

    Hu, Rui; Liu, Quan

    2017-04-01

    During engineering projects using artificial ground freezing (AGF) techniques in coastal areas, the freezing effect is affected by groundwater salinity. Based on the theories of artificially frozen soil and heat transfer in porous media, and under the assumption that only variations in total dissolved solids (TDS) affect the freezing point and thermal conductivity, a numerical model of an AGF project in a saline aquifer was established and validated by comparing the simulated temperature field with the temperature calculated from the analytic solution of Trupak for the single-pipe freezing temperature field. The formation and development of the freezing wall were simulated for various TDS values. The results showed that varying TDS caused larger temperature differences near the frozen front. With increasing TDS in the saline aquifer (1-35 g/L), the average thickness of the freezing wall decreased linearly and the total formation time of the freezing wall increased linearly. Compared with the fresh-water scenario (<1 g/L), the average thickness of the frozen wall decreased by 6% and the total formation time of the freezing wall increased by 8% for each 7 g/L increase in TDS. Key words: total dissolved solids, freezing point, thermal conductivity, freezing wall, numerical simulation. References: D.J. Pringle, H. Eicken, H.J. Trodahl, et al. Thermal conductivity of landfast Antarctic and Arctic sea ice[J]. Journal of Geophysical Research, 2007, 112: 1-13. Lukas U. Arenson, Dave C. Sego. The effect of salinity on the freezing of coarse-grained sand[J]. Canadian Geotechnical Journal, 2006, 43: 325-337. Hui Bing, Wei Ma. Laboratory investigation of the freezing point of saline soil[J]. Cold Regions Science and Technology, 2011, 67: 79-88.

  17. Comparative cost-effectiveness of metformin-based dual therapies associated with risk of cardiovascular diseases among Chinese patients with type 2 diabetes: Evidence from a population-based national cohort in Taiwan.

    PubMed

    Ou, Huang-Tz; Chen, Yen-Ting; Liu, Ya-Ming; Wu, Jin-Shang

    2016-06-01

    To assess the cost-effectiveness of metformin-based dual therapies associated with cardiovascular disease (CVD) risk in a Chinese population with type 2 diabetes. We utilized Taiwan's National Health Insurance Research Database (NHIRD) 1997-2011, which is derived from the claims of National Health Insurance, a mandatory-enrollment single-payer system that covers over 99% of Taiwan's population. Four metformin-based dual therapy cohorts were used, namely a reference group of metformin plus sulfonylureas (Metformin-SU) and metformin plus acarbose, metformin plus thiazolidinediones (Metformin-TZD), and metformin plus glinides (Metformin-glinides). Using propensity scores, each subject in a comparison cohort was 1:1 matched to a referent. The effectiveness outcome was CVD risk. Only direct medical costs were included. The Markov chain model was applied to project lifetime outcomes, discounted at 3% per annum. The bootstrapping technique was performed to assess uncertainty in analysis. Metformin-glinides was most cost-effective in the base-case analysis; Metformin-glinides saved $194 USD for one percentage point of reduction in CVD risk, as compared to Metformin-SU. However, for the elderly or those with severe diabetic complications, Metformin-TZD, especially pioglitazone, was more suitable; as compared to Metformin-SU, Metformin-TZD saved $840.1 USD per percentage point of reduction in CVD risk. Among TZDs, Metformin-pioglitazone saved $1831.5 USD per percentage point of associated CVD risk reduction, as compared to Metformin-rosiglitazone. When CVD is considered an important clinical outcome, Metformin-pioglitazone is cost-effective, in particular for the elderly and those with severe diabetic complications. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. High frequency of discordance between antimüllerian hormone and follicle-stimulating hormone levels in serum from estradiol-confirmed days 2 to 4 of the menstrual cycle from 5,354 women in U.S. fertility centers.

    PubMed

    Leader, Benjamin; Hegde, Aparna; Baca, Quentin; Stone, Kimberly; Lannon, Benjamin; Seifer, David B; Broekmans, Frank; Baker, Valerie L

    2012-10-01

    To determine the frequency of clinical discordance between antimüllerian hormone (AMH, ng/mL) and follicle-stimulating hormone (FSH, IU/L) by use of cut points defined by response to controlled ovarian stimulation in the same serum samples drawn on estradiol-confirmed, menstrual cycle days 2 to 4. Retrospective analysis. Fertility centers in 30 U.S. states and a single reference laboratory with uniform testing protocols. 5,354 women, 20 to 45 years of age. None. Frequency of discordance between serum AMH and FSH values. Of the 5,354 women tested, 1 in 5 had discordant AMH and FSH values defined as AMH <0.8 (concerning) with FSH <10 (reassuring) or AMH ≥ 0.8 (reassuring) with FSH ≥ 10 (concerning). Of the women with reassuring FSH values (n = 4,469), the concerning AMH values were found in 1 in 5 women in a highly age-dependent fashion, ranging from 1 in 11 women under 35 years of age to 1 in 3 women above 40 years of age. On the other hand, of the women with reassuring AMH values (n = 3,742), 1 in 18 had concerning FSH values, a frequency that did not vary in a statistically significant fashion by age. Clinical discordance in serum AMH and FSH values was frequent and age dependent using common clinical cut points, a large patient population, one reference laboratory, and uniform testing methodology. This conclusion is generalizable to women undergoing fertility evaluation, although AMH testing has not been standardized among laboratories, and the cut points presented are specific to the laboratory in this study. Copyright © 2012. Published by Elsevier Inc.
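
    The discordance rule stated above (AMH < 0.8 ng/mL is concerning, FSH ≥ 10 IU/L is concerning, and a pair is discordant when exactly one marker is concerning) can be expressed as a short classification routine; the sketch below uses only the cut points given in the abstract, and the example values are illustrative.

    ```python
    # Minimal sketch of the AMH/FSH discordance rule described above:
    # AMH < 0.8 ng/mL is "concerning", FSH >= 10 IU/L is "concerning";
    # a pair is discordant when exactly one marker is concerning.

    def classify(amh_ng_ml: float, fsh_iu_l: float) -> str:
        amh_concerning = amh_ng_ml < 0.8
        fsh_concerning = fsh_iu_l >= 10.0
        if amh_concerning and not fsh_concerning:
            return "discordant: concerning AMH, reassuring FSH"
        if not amh_concerning and fsh_concerning:
            return "discordant: reassuring AMH, concerning FSH"
        return "concordant"

    # Illustrative values
    print(classify(0.5, 6.2))   # discordant: concerning AMH, reassuring FSH
    print(classify(2.1, 12.4))  # discordant: reassuring AMH, concerning FSH
    ```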

  19. 76 FR 66192 - Fisheries of the Northeastern United States; Monkfish; Framework Adjustment 7

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-26

    ... with the new annual catch target, and establishes revised biomass reference points for the Northern and... biomass reference points in the Monkfish FMP to be consistent with the results of SARC 50. Approved... SARC 50 report, the Southern Demersal Working Group recommended an approach that would set biomass...

  20. Effect of cow reference group on validation reliability of genomic evaluation.

    PubMed

    Koivula, M; Strandén, I; Aamand, G P; Mäntysaari, E A

    2016-06-01

    We studied the effect of including genomic data for cows in the reference population of single-step evaluations. Deregressed individual cow genetic evaluations (DRP) from milk production evaluations of Nordic Red Dairy cattle were used to estimate the single-step breeding values. Validation reliability and bias of the evaluations were calculated with four data sets including different amounts of DRP record information from genotyped cows in the reference population. The gain in reliability was from 2 to 4 percentage points for the production traits, depending on the DRP data used and the amount of genomic data. Moreover, inclusion of genotyped bull dams and their genotyped daughters seemed to create some bias in the single-step evaluation. Still, genotyping cows and including them in the reference population is advantageous and should be encouraged.

  1. 40 CFR 29.9 - How does the Administrator receive and respond to comments?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... State office or official is designated to act as a single point of contact between a State process and... program selected under § 29.6. (b) The single point of contact is not obligated to transmit comments from.... However, if a State process recommendation is transmitted by a single point of contact, all comments from...

  2. Visual navigation of the UAVs on the basis of 3D natural landmarks

    NASA Astrophysics Data System (ADS)

    Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry

    2015-12-01

    This work considers the tracking of a UAV (unmanned aerial vehicle) on the basis of onboard observations of natural landmarks, including azimuth and elevation angles. It is assumed that the UAV's cameras are able to capture the angular position of reference points and to measure the angles of the sight line. Such measurements involve the real position of the UAV in implicit form, and therefore some nonlinear filter, such as the Extended Kalman filter (EKF), must be used in order to exploit these measurements for UAV control. Recently it was shown that a modified pseudomeasurement method may be used to control a UAV on the basis of observations of reference points assigned along the UAV's path in advance. However, using such a set of points requires a cumbersome recognition procedure and a large volume of on-board memory. Natural landmarks that can be determined on-line and serve as such reference points can significantly reduce the on-board memory and the computational difficulties. The principal difference of this work is the use of 3D reference point coordinates, which permits determining the position of the UAV more precisely and thereby guiding it along the path with higher accuracy, which is extremely important for the successful performance of autonomous missions. The article suggests the new RANSAC for ISOMETRY algorithm and the use of recently developed estimation and control algorithms for tracking a given reference path under external perturbations and noisy angular measurements.

  3. An Approach for High-precision Stand-alone Positioning in a Dynamic Environment

    NASA Astrophysics Data System (ADS)

    Halis Saka, M.; Metin Alkan, Reha; Ozpercin, Alişir

    2015-04-01

    In this study, an algorithm is developed for precise positioning in a dynamic environment using carrier phase data from a single geodetic GNSS receiver. In this method, users start the measurement on a known point near the project area for a couple of seconds, making use of a single dual-frequency geodetic-grade receiver. The technique employs ionosphere-free carrier phase observations with precise products. The equation of the algorithm is Sm(t(i+1)) = SC(ti) + [ΦIF(t(i+1)) - ΦIF(ti)], where Sm(t(i+1)) is the phase-range between the satellite and the receiver, SC(ti) is the initial range computed from the coordinates of the known initial point and the satellite coordinates, and ΦIF is the ionosphere-free phase measurement (in meters). Tropospheric path delays are modelled using the standard tropospheric model. To carry out the processing, an in-house program was coded and some functions were adopted from Easy-Suite, available at http://kom.aau.dk/~borre/easy. In order to assess the performance of the introduced algorithm in a dynamic environment, a data set from a kinematic test measurement in Istanbul, Turkey, was used. In the test measurement, a geodetic dual-frequency GNSS receiver, an Ashtech Z-Xtreme, was set up on a known point on the shore and a couple of epochs were recorded for initialization. The receiver was then moved to a vessel, data were collected for approximately 2.5 hours, and the measurement was finalized on a known point on the shore. While the kinematic measurements on the vessel were being carried out, another GNSS receiver was set up on a geodetic point with known coordinates on the shore and collected data in static mode to compute the reference trajectory of the vessel using the differential technique. The coordinates of the vessel were calculated with the introduced method for each measurement epoch. To obtain more robust results, all coordinates were calculated once again inversely, i.e., from the last epoch to the first, which also provided a check on the estimated coordinates. The average of the two sets of computed coordinates was used as the vessel coordinates and then compared epoch by epoch with the reference trajectory. The results indicate that the coordinates calculated with the introduced method are consistent with the reference trajectory to an accuracy of about 1 decimeter. The height component showed lower accuracy, of about 2 decimeters. This accuracy level meets the requirements of many applications, including some marine applications, precise hydrographic surveying, dredging, attitude control of ships, buoys and floating platforms, marine geodesy, navigation and oceanography.
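
    A minimal sketch of the epoch-to-epoch phase-range propagation stated above, Sm(t(i+1)) = SC(ti) + [ΦIF(t(i+1)) - ΦIF(ti)]: the range at the next epoch is the previous range plus the change in the ionosphere-free carrier phase expressed in meters. The initial range and the phase series below are synthetic placeholders, not data from the Istanbul test.

    ```python
    # Minimal sketch of the phase-range propagation described above:
    #   Sm(t_{i+1}) = Sc(t_i) + [PhiIF(t_{i+1}) - PhiIF(t_i)]
    # The initial range and the ionosphere-free phase series are synthetic.

    def propagate_phase_range(initial_range_m, phi_if_m):
        """Carry an initial satellite-receiver range forward with epoch-to-epoch
        ionosphere-free phase differences (all quantities in meters)."""
        ranges = [initial_range_m]
        for k in range(1, len(phi_if_m)):
            ranges.append(ranges[-1] + (phi_if_m[k] - phi_if_m[k - 1]))
        return ranges

    initial_range_m = 21_456_789.123          # hypothetical range from known initial point
    phi_if_m = [1000.000, 1002.517, 1005.031, 1007.552]  # synthetic iono-free phase (m)
    print(propagate_phase_range(initial_range_m, phi_if_m))
    ```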

  4. Analysis of point-to-point lung motion with full inspiration and expiration CT data using non-linear optimization method: optimal geometric assumption model for the effective registration algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Namkug; Seo, Joon Beom; Heo, Jeong Nam; Kang, Suk-Ho

    2007-03-01

    This study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies and semi-quantitative assessment of lung perfusion. The purpose of this study is to find the most effective reference point and geometric model based on lung motion analysis from CT data sets obtained in full inspiration (In.) and expiration (Ex.). Ten pairs of CT data sets from normal subjects obtained in full In. and Ex. were used in this study. Two radiologists were requested to mark 20 points representing the subpleural point of the central axis in each segment. The apex, the hilar point, and the center of inertia (COI) of each unilateral lung were proposed as reference points. To evaluate the optimal expansion point, unconstrained non-linear optimization was employed. The objective function is the sum of distances from the optimal point x to the lines connecting corresponding points between In. and Ex. Using this non-linear optimization, the optimal point was evaluated and compared among the reference points. The average distance between the optimal point and each line segment revealed that the balloon model was more suitable for explaining lung expansion. This lung motion analysis, based on vector analysis and non-linear optimization, shows that a balloon model centered on the center of inertia of the lung is the most effective geometric model for explaining lung expansion during breathing.
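
    The unconstrained optimization described above can be sketched as follows: find the point x that minimizes the sum of distances to the lines through corresponding inspiration/expiration landmark pairs. The landmark pairs below are synthetic points generated around a hypothetical expansion centre, not the clinical data of this study.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Minimal sketch: find the point x minimizing the sum of distances to the
    # lines defined by corresponding inspiration/expiration landmark pairs.
    # The landmark pairs are synthetic.

    def point_to_line_distance(x, p, q):
        """Distance from point x to the infinite line through p and q."""
        d = q - p
        t = np.dot(x - p, d) / np.dot(d, d)
        return np.linalg.norm(x - (p + t * d))

    def objective(x, pairs):
        return sum(point_to_line_distance(x, p, q) for p, q in pairs)

    rng = np.random.default_rng(1)
    center = np.array([10.0, 20.0, 30.0])                 # hypothetical expansion centre
    pairs = []
    for _ in range(20):
        direction = rng.normal(size=3)
        p = center + 5.0 * direction                      # inspiration landmark
        q = center + 8.0 * direction + rng.normal(scale=0.1, size=3)  # expiration landmark
        pairs.append((p, q))

    x0 = np.mean([p for p, _ in pairs], axis=0)           # start near the landmark centroid
    result = minimize(objective, x0=x0, args=(pairs,), method="Nelder-Mead")
    print("estimated optimal point:", np.round(result.x, 2))
    ```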

  5. Precision Pointing Control System (PPCS) system design and analysis. [for gimbaled experiment platforms

    NASA Technical Reports Server (NTRS)

    Frew, A. M.; Eisenhut, D. F.; Farrenkopf, R. L.; Gates, R. F.; Iwens, R. P.; Kirby, D. K.; Mann, R. J.; Spencer, D. J.; Tsou, H. S.; Zaremba, J. G.

    1972-01-01

    The precision pointing control system (PPCS) is an integrated system for precision attitude determination and orientation of gimbaled experiment platforms. The PPCS concept configures the system to perform orientation of up to six independent gimbaled experiment platforms to design goal accuracy of 0.001 degrees, and to operate in conjunction with a three-axis stabilized earth-oriented spacecraft in orbits ranging from low altitude (200-2500 n.m., sun synchronous) to 24 hour geosynchronous, with a design goal life of 3 to 5 years. The system comprises two complementary functions: (1) attitude determination where the attitude of a defined set of body-fixed reference axes is determined relative to a known set of reference axes fixed in inertial space; and (2) pointing control where gimbal orientation is controlled, open-loop (without use of payload error/feedback) with respect to a defined set of body-fixed reference axes to produce pointing to a desired target.

  6. [Thoughts on the Witnessed Audit in Medical Device Single Audit Program].

    PubMed

    Wen, Jing; Xiao, Jiangyi; Wang, Aijun

    2018-02-08

    The Medical Device Single Audit Program, one of the key projects of the International Medical Device Regulators Forum, offers much experience that can be used for reference. This paper briefly describes the procedures and contents of the Witnessed Audit in the Medical Device Single Audit Program. Some insights into the Witnessed Audit work are discussed, for reference by the regulatory authorities and the auditing organizations.

  7. Nonimaging optical illumination system

    DOEpatents

    Winston, Roland; Ries, Harald

    2000-01-01

    A nonimaging illumination optical device for producing a selected far field illuminance over an angular range. The optical device includes a light source 102, a light reflecting surface 108, and a family of light edge rays defined along a reference line 104 with the reflecting surface 108 defined in terms of the reference line 104 as a parametric function R(t) where t is a scalar parameter position and R(t)=k(t)+Du(t) where k(t) is a parameterization of the reference line 104, and D is a distance from a point on the reference line 104 to the reflection surface 108 along the desired edge ray through the point.
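
    As a purely illustrative reading of the parametric description R(t) = k(t) + D·u(t) quoted above, the sketch below evaluates reflector points from a reference-line parameterization k(t), an edge-ray direction u(t), and a distance D(t); the particular choices of k, u, and D are hypothetical and not taken from the patent.

    ```python
    import numpy as np

    # Minimal sketch of evaluating the parametric reflector description
    # R(t) = k(t) + D(t) * u(t): a point on the reference line plus a distance
    # along the desired edge-ray direction. k, u, and D below are hypothetical.

    def reflector_points(t_values, k, u, D):
        return np.array([k(t) + D(t) * u(t) for t in t_values])

    k = lambda t: np.array([t, 0.0])                            # straight reference line
    u = lambda t: np.array([np.sin(0.3 * t), np.cos(0.3 * t)])  # edge-ray unit direction
    D = lambda t: 1.0 + 0.2 * t                                 # distance to the reflector

    t_values = np.linspace(0.0, 2.0, 5)
    print(np.round(reflector_points(t_values, k, u, D), 3))
    ```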

  8. Nonimaging optical illumination system

    DOEpatents

    Winston, Roland; Ries, Harald

    1998-01-01

    A nonimaging illumination optical device for producing a selected far field illuminance over an angular range. The optical device includes a light source 102, a light reflecting surface 108, and a family of light edge rays defined along a reference line 104 with the reflecting surface 108 defined in terms of the reference line 104 as a parametric function R(t) where t is a scalar parameter position and R(t)=k(t)+Du(t) where k(t) is a parameterization of the reference line 104, and D is a distance from a point on the reference line 104 to the reflection surface 108 along the desired edge ray through the point.

  9. Nonimaging optical illumination system

    DOEpatents

    Winston, Roland; Ries, Harald

    1996-01-01

    A nonimaging illumination optical device for producing a selected far field illuminance over an angular range. The optical device includes a light source 102, a light reflecting surface 108, and a family of light edge rays defined along a reference line 104 with the reflecting surface 108 defined in terms of the reference line 104 as a parametric function R(t) where t is a scalar parameter position and R(t)=k(t)+Du(t) where k(t) is a parameterization of the reference line 104, and D is a distance from a point on the reference line 104 to the reflection surface 108 along the desired edge ray through the point.

  10. Implementing system simulation of C3 systems using autonomous objects

    NASA Technical Reports Server (NTRS)

    Rogers, Ralph V.

    1987-01-01

    The basis of all conflict recognition in simulation is a common frame of reference. Synchronous discrete-event simulation relies on fixed points in time as the basic frame of reference. Asynchronous discrete-event simulation relies on fixed points in the model space as the basic frame of reference. Neither approach provides sufficient support for autonomous objects. The use of a spatial template as a frame of reference is proposed to address these insufficiencies. The concept of a spatial template is defined and an implementation approach is offered. The uses of this approach to analyze the integration of sensor data associated with Command, Control, and Communication systems are discussed.

  11. Towards System Calibration of Panoramic Laser Scanners from a Single Station

    PubMed Central

    Medić, Tomislav; Holst, Christoph; Kuhlmann, Heiner

    2017-01-01

    Terrestrial laser scanner measurements suffer from systematic errors due to internal misalignments. The magnitude of the resulting errors in the point cloud in many cases exceeds the magnitude of random errors. Hence, the task of calibrating a laser scanner is important for applications with high accuracy demands. This paper primarily addresses the case of panoramic terrestrial laser scanners. Herein, it is proven that most of the calibration parameters can be estimated from a single scanner station without a need for any reference information. This hypothesis is confirmed through an empirical experiment, which was conducted in a large machine hall using a Leica Scan Station P20 panoramic laser scanner. The calibration approach is based on the widely used target-based self-calibration approach, with small modifications. A new angular parameterization is used in order to implicitly introduce measurements in two faces of the instrument and for the implementation of calibration parameters describing genuine mechanical misalignments. Additionally, a computationally preferable calibration algorithm based on the two-face measurements is introduced. In the end, the calibration results are discussed, highlighting all necessary prerequisites for the scanner calibration from a single scanner station. PMID:28513548

  12. KEY COMPARISON: Final Report on CCT-K7: Key comparison of water triple point cells

    NASA Astrophysics Data System (ADS)

    Stock, M.; Solve, S.; del Campo, D.; Chimenti, V.; Méndez-Lango, E.; Liedberg, H.; Steur, P. P. M.; Marcarino, P.; Dematteis, R.; Filipe, E.; Lobo, I.; Kang, K. H.; Gam, K. S.; Kim, Y.-G.; Renaot, E.; Bonnier, G.; Valin, M.; White, R.; Dransfield, T. D.; Duan, Y.; Xiaoke, Y.; Strouse, G.; Ballico, M.; Sukkar, D.; Arai, M.; Mans, A.; de Groot, M.; Kerkhof, O.; Rusby, R.; Gray, J.; Head, D.; Hill, K.; Tegeler, E.; Noatsch, U.; Duris, S.; Kho, H. Y.; Ugur, S.; Pokhodun, A.; Gerasimov, S. F.

    2006-01-01

    The triple point of water serves to define the kelvin, the unit of thermodynamic temperature, in the International System of Units (SI). Furthermore, it is the most important fixed point of the International Temperature Scale of 1990 (ITS-90). Any uncertainty in the realization of the triple point of water contributes directly to the measurement uncertainty over the wide temperature range from 13.8033 K to 1234.93 K. The Consultative Committee for Thermometry (CCT) decided at its 21st meeting in 2001 to carry out a comparison of water triple point cells and charged the BIPM with its organization. Water triple point cells from 20 national metrology institutes were carried to the BIPM and compared with highest accuracy against two reference cells. The small day-to-day changes of the reference cells were determined by a least-squares technique. Prior to the measurements at the BIPM, the transfer cells were compared with the corresponding national references and therefore also allow comparison of the national references of the water triple point. This report presents the results of this comparison and gives detailed information about the measurements made at the BIPM and in the participating laboratories. It was found that the transfer cells show a standard deviation of 50 µK; the difference between the extremes is 160 µK. The same spread is observed between the national references. The most important result of this work is that a correlation between the isotopic composition of the cell water and the triple point temperature was observed. To reduce the spread between different realizations, it is therefore proposed that the definition of the kelvin should refer to water of a specified isotopic composition. The CCT recommended to the International Committee for Weights and Measures (CIPM) that the definition of the kelvin in the SI brochure be clarified by explicitly referring to water with the isotopic composition of Vienna Standard Mean Ocean Water (VSMOW). The CIPM accepted this recommendation and the next edition of the SI brochure will include this specification. Note that this text is that which appears in Appendix B of the BIPM key comparison database, kcdb.bipm.org. The final report has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (MRA).

  13. Electro-optic modulation of a laser at microwave frequencies for interferometric purposes

    NASA Astrophysics Data System (ADS)

    Specht, Paul E.; Jilek, Brook A.

    2017-02-01

    A multi-point microwave interferometer (MPMI) concept was previously proposed by the authors for spatially-resolved, non-invasive tracking of a shock, reaction, or detonation front in energetic media [P. Specht et al., AIP Conf. Proc. 1793, 160010 (2017).]. The advantage of the MPMI concept over current microwave interferometry techniques is its detection of Doppler shifted microwave signals through electro-optic (EO) modulation of a laser. Since EO modulation preserves spatial variations in the Doppler shift, collecting the EO modulated laser light into a fiber array for recording with an optical heterodyne interferometer yields spatially-resolved velocity information. This work demonstrates the underlying physical principle of the MPMI diagnostic: the monitoring of a microwave signal with nanosecond temporal resolution using an optical heterodyne interferometer. For this purpose, the MPMI concept was simplified to a single-point construction using two tunable 1550 nm lasers and a 35.2 GHz microwave source. A (110) ZnTe crystal imparted the microwave frequency onto a laser, which was combined with a reference laser for determination of the microwave frequency in an optical heterodyne interferometer. A single, characteristic frequency associated with the microwave source was identified in all experiments, providing a means to monitor a microwave signal on nanosecond time scales. Lastly, areas for improving the frequency resolution of this technique are discussed, focusing on increasing the phase-modulated signal strength.

  14. Electro-optic modulation of a laser at microwave frequencies for interferometric purposes.

    PubMed

    Specht, Paul E; Jilek, Brook A

    2017-02-01

    A multi-point microwave interferometer (MPMI) concept was previously proposed by the authors for spatially-resolved, non-invasive tracking of a shock, reaction, or detonation front in energetic media [P. Specht et al., AIP Conf. Proc. 1793, 160010 (2017).]. The advantage of the MPMI concept over current microwave interferometry techniques is its detection of Doppler shifted microwave signals through electro-optic (EO) modulation of a laser. Since EO modulation preserves spatial variations in the Doppler shift, collecting the EO modulated laser light into a fiber array for recording with an optical heterodyne interferometer yields spatially-resolved velocity information. This work demonstrates the underlying physical principle of the MPMI diagnostic: the monitoring of a microwave signal with nanosecond temporal resolution using an optical heterodyne interferometer. For this purpose, the MPMI concept was simplified to a single-point construction using two tunable 1550 nm lasers and a 35.2 GHz microwave source. A (110) ZnTe crystal imparted the microwave frequency onto a laser, which was combined with a reference laser for determination of the microwave frequency in an optical heterodyne interferometer. A single, characteristic frequency associated with the microwave source was identified in all experiments, providing a means to monitor a microwave signal on nanosecond time scales. Lastly, areas for improving the frequency resolution of this technique are discussed, focusing on increasing the phase-modulated signal strength.

  15. Normalization and extension of single-collector efficiency correlation equation

    NASA Astrophysics Data System (ADS)

    Messina, Francesca; Marchisio, Daniele; Sethi, Rajandrea

    2015-04-01

    Colloidal transport and deposition are important phenomena involved in many engineering problems. In the environmental engineering field, the use of micro- and nano-scale zerovalent iron (M-NZVI) is one of the most promising technologies for groundwater remediation. Colloid deposition is normally studied from a micro-scale point of view, and the results are then implemented in macro-scale models that are used to design field-scale applications. The single-collector efficiency concept predicts particle deposition onto a single grain of a complex porous medium in terms of the probability that an approaching particle will be retained on the solid grain. In the literature, many different approaches and equations exist to predict it, but most of them fail under specific conditions (e.g. very small or very large particle size and very low fluid velocity) because they predict efficiency values exceeding unity. By analysing particle fluxes and deposition mechanisms and performing a mass balance on the entire domain, the traditional definition of efficiency was reformulated, and a novel total-flux-normalized correlation equation is proposed for predicting single-collector efficiency under a broad range of parameters. It has been formulated starting from a combination of Eulerian and Lagrangian numerical simulations, performed under Smoluchowski-Levich conditions, in a geometry consisting of a sphere enveloped by a control volume. In order to guarantee the independence of each term, the correlation equation is derived through a rigorous hierarchical parameter estimation process, accounting for single and mutually interacting transport mechanisms. The correlation equation provides efficiency values lower than one over a wide range of parameters and is valid both for point and finite-size particles. A reduced form is also proposed by elimination of the less relevant terms. References: 1. Yao, K. M.; Habibian, M. M.; O'Melia, C. R., Water and Waste Water Filtration - Concepts and Applications. Environ Sci Technol 1971, 5(11), 1105-1112. 2. Tufenkji, N., and M. Elimelech, Correlation equation for predicting single-collector efficiency in physicochemical filtration in saturated porous media. Environmental Science & Technology 2004, 38(2): 529-536. 3. Boccardo, G.; Marchisio, D. L.; Sethi, R., Microscale simulation of particle deposition in porous media. J Colloid Interface Sci 2014, 417, 227-237.

  16. 47 CFR 68.105 - Minimum point of entry (MPOE) and demarcation point.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... be either the closest practicable point to where the wiring crosses a property line or the closest practicable point to where the wiring enters a multiunit building or buildings. The reasonable and... situations. (c) Single unit installations. For single unit installations existing as of August 13, 1990, and...

  17. Pan-genome multilocus sequence typing and outbreak-specific reference-based single nucleotide polymorphism analysis to resolve two concurrent Staphylococcus aureus outbreaks in neonatal services.

    PubMed

    Roisin, S; Gaudin, C; De Mendonça, R; Bellon, J; Van Vaerenbergh, K; De Bruyne, K; Byl, B; Pouseele, H; Denis, O; Supply, P

    2016-06-01

    We used a two-step whole genome sequencing analysis for resolving two concurrent outbreaks in two neonatal services in Belgium, caused by exfoliative toxin A-encoding-gene-positive (eta+) methicillin-susceptible Staphylococcus aureus with an otherwise sporadic spa-type t209 (ST-109). Outbreak A involved 19 neonates and one healthcare worker in a Brussels hospital from May 2011 to October 2013. After a first episode interrupted by decolonization procedures applied over 7 months, the outbreak resumed concomitantly with the onset of outbreak B in a hospital in Asse, comprising 11 neonates and one healthcare worker from mid-2012 to January 2013. Pan-genome multilocus sequence typing, defined on the basis of 42 core and accessory reference genomes, and single-nucleotide polymorphisms mapped on an outbreak-specific de novo assembly were used to compare 28 available outbreak isolates and 19 eta+/spa-type t209 isolates identified by routine or nationwide surveillance. Pan-genome multilocus sequence typing showed that the outbreaks were caused by independent clones not closely related to any of the surveillance isolates. Isolates from only ten cases with overlapping stays in outbreak A, including four pairs of twins, showed no or only a single nucleotide polymorphism variation, indicating limited sequential transmission. Detection of larger genomic variation, even from the start of the outbreak, pointed to sporadic seeding from a pre-existing exogenous source, which persisted throughout the whole course of outbreak A. Whole genome sequencing analysis can provide unique fine-tuned insights into transmission pathways of complex outbreaks even at their inception, which, with timely use, could valuably guide efforts for early source identification. Copyright © 2016 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  18. Volume 2 - Point Sources

    EPA Pesticide Factsheets

    Point source emission reference materials from the Emissions Inventory Improvement Program (EIIP). Provides point source guidance on planning, emissions estimation, data collection, inventory documentation and reporting, and quality assurance/quality control.

  19. The single-payer option: a reconsideration.

    PubMed

    Oliver, Adam

    2009-08-01

    This article discusses some of the merits and demerits of the single-payer model of health care financing, with particular reference to the English National Health Service (NHS). Specifically, it is argued that the main merits are that the model can directly provide universal health care coverage, thus eradicating or at least alleviating market failure and equity concerns, and that it can achieve this with relatively low total health care expenditure in general and -- as compared to the commercial multiple insurance model -- low administrative costs in particular. A perceived demerit of the single-payer model is that it can lead to excessive health care rationing, particularly in terms of waiting times, although it is argued here that long waits are probably caused by insufficient funding rather than by the single-payer model per se. Moreover, rationing of one form or another occurs in all health care systems, and single-payer models may be the best option if the aim is to incorporate structured rationing such that an entire population is subject to the same rules and is thus treated equitably. A further perceived disadvantage of the single-payer model is that it offers limited choice, which is necessarily true with respect to choice of insurer, but choice of provider can be, and increasingly is, a feature of centrally tax-financed health care systems. No model of health care funding is perfect; trade-offs are inevitable. Whether the merits of the single-payer model are judged to outweigh the demerits will typically vary both across countries at any point in time and within a country over time.

  20. Connecting Online Learners with Diverse Local Practices: The Design of Effective Common Reference Points for Conversation

    ERIC Educational Resources Information Center

    Friend Wise, Alyssa; Padmanabhan, Poornima; Duffy, Thomas M.

    2009-01-01

    This mixed-methods study probed the effectiveness of three kinds of objects (video, theory, metaphor) as common reference points for conversations between online learners (student teachers). Individuals' degree of detail-focus was examined as a potentially interacting covariate and the outcome measure was learners' level of tacit knowledge related…

  1. NGEE Arctic Plant Traits: Vegetation Plot Locations, Ecotypes, and Photos, Kougarok Road Mile Marker 64, Seward Peninsula, Alaska, 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colleen Iversen; Amy Breen; Verity Salmon

    Data includes GPS waypoints for intensive plots, reference points, vegetation plots, and soil temperature/moisture monitoring stations that were established in July 2016 at the Kougarok hill slope located at Kougarok Road, Mile Marker 64. Photographs of all intensive plots and reference points are also included.

  2. Tipping Points: Teachers' Reported Reasons for Referring Primary School Children for Excessive Anxiety

    ERIC Educational Resources Information Center

    Hinchliffe, Kaitlin J.; Campbell, Marilyn A.

    2016-01-01

    The current study explored the reasons that primary school teachers reported were tipping points for them in deciding whether or not and when to refer a child to the school student support team for excessive anxiety. Twenty teachers in two Queensland primary schools were interviewed. Content analysis of interview transcripts revealed six themes…

  3. A height-for-age growth reference for children with achondroplasia: Expanded applications and comparison with original reference data.

    PubMed

    Hoover-Fong, Julie; McGready, John; Schulze, Kerry; Alade, Adekemi Yewande; Scott, Charles I

    2017-05-01

    The height-for-age (HA) reference currently used for children with achondroplasia is not adaptable for electronic records or calculation of HA Z-scores. We report new HA curves and tables of mean and standard deviation (SD) HA, for calculating Z-scores, from birth to 16 years in achondroplasia. Mixed longitudinal data were abstracted from medical records of achondroplasia patients from a single clinical practice (CIS, 1967-2004). Gender-specific height percentiles (5, 25, 50, 75, 95th) were estimated across the age continuum, using a 2-month window per time point, smoothed by a quadratic smoothing algorithm. HA curves were constructed for 0-36 months and 2-16 years to optimize resolution for younger children. Mean monthly height (SD) was tabulated. These novel HA curves were compared to reference data currently in use for children with achondroplasia. 293 subjects (162 male/131 female) contributed 1,005 and 932 height measures, respectively, with increasing data paucity at older ages. Mean HA tracked with original achondroplasia norms, particularly through mid-childhood (2-9 years), but with no evidence of a pubertal growth spurt. Standard deviation of height at each month interval increased from birth through 16 years. Birth length was lower in achondroplasia than in average-stature children and, as expected, height deficits increased with age. A new HA reference is available for longitudinal growth assessment in achondroplasia, taking advantage of statistical modeling techniques and allowing for Z-score calculations. This is an important contribution to clinical care and research endeavors for the achondroplasia population. © 2017 Wiley Periodicals, Inc.
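
    The Z-score calculation that the new mean/SD tables enable is simply z = (height - mean) / SD for the child's age and sex; the sketch below illustrates this with synthetic placeholder table values, not the published reference data.

    ```python
    # Minimal sketch of a height-for-age Z-score lookup:
    #   z = (height - mean_for_age_and_sex) / sd_for_age_and_sex
    # The table values below are synthetic placeholders, not the published reference.

    reference = {
        # (sex, age_months): (mean_height_cm, sd_cm) -- hypothetical values
        ("M", 24): (78.0, 3.2),
        ("F", 24): (76.5, 3.1),
    }

    def height_for_age_z(sex: str, age_months: int, height_cm: float) -> float:
        mean, sd = reference[(sex, age_months)]
        return (height_cm - mean) / sd

    print(round(height_for_age_z("M", 24, 74.0), 2))  # e.g. -1.25
    ```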

  4. The multiple personalities of Watson and Crick strands

    PubMed Central

    2011-01-01

    Background In genetics it is customary to refer to double-stranded DNA as containing a "Watson strand" and a "Crick strand." However, there seems to be no consensus in the literature on the exact meaning of these two terms, and the many usages contradict one another as well as the original definition. Here, we review the history of the terminology and suggest retaining a single sense that is currently the most useful and consistent. Proposal The Saccharomyces Genome Database defines the Watson strand as the strand which has its 5'-end at the short-arm telomere and the Crick strand as its complement. The Watson strand is always used as the reference strand in their database. Using this as the basis of our standard, we recommend that Watson and Crick strand terminology only be used in the context of genomics. When possible, the centromere or other genomic feature should be used as a reference point, dividing the chromosome into two arms of unequal lengths. Under our proposal, the Watson strand is standardized as the strand whose 5'-end is on the short arm of the chromosome, and the Crick strand as the one whose 5'-end is on the long arm. Furthermore, the Watson strand should be retained as the reference (plus) strand in a genomic database. This usage not only makes the determination of Watson and Crick unambiguous, but also allows unambiguous selection of reference stands for genomics. Reviewers This article was reviewed by John M. Logsdon, Igor B. Rogozin (nominated by Andrey Rzhetsky), and William Martin. PMID:21303550

  5. Improvements of the Ray-Tracing Based Method Calculating Hypocentral Loci for Earthquake Location

    NASA Astrophysics Data System (ADS)

    Zhao, A. H.

    2014-12-01

    Hypocentral loci are very useful for reliable and visual earthquake location. However, they can hardly be expressed analytically when the velocity model is complex. One method for calculating them numerically is based on a minimum traveltime tree algorithm for tracing rays: a focal locus is represented in terms of ray paths in its residual field from the minimum point (namely the initial point) to low-residual points (referred to as reference points of the focal locus). The method has no restrictions on the complexity of the velocity model but still cannot correctly deal with multi-segment loci. Additionally, it is rather laborious to set calculation parameters that yield loci with satisfactory completeness and fineness. In this study, we improve the ray-tracing based numerical method to overcome these shortcomings. (1) Reference points of a hypocentral locus are selected from nodes of the model cells that it passes through, by means of a so-called peeling method. (2) The calculation domain of a hypocentral locus is defined as a low-residual area whose connected regions each include one segment of the locus; hence all focal locus segments are calculated in turn with the minimum traveltime tree algorithm for tracing rays by repeatedly assigning the minimum-residual reference point among those not yet traced as an initial point. (3) Short ray paths without branching are removed to make the calculated locus finer. Numerical tests show that the improved method is capable of efficiently calculating complete and fine hypocentral loci of earthquakes in a complex model.

  6. The Geodetic Monitoring of the Engineering Structure - A Practical Solution of the Problem in 3D Space

    NASA Astrophysics Data System (ADS)

    Filipiak-Kowszyk, Daria; Janowski, Artur; Kamiński, Waldemar; Makowska, Karolina; Szulwic, Jakub; Wilde, Krzysztof

    2016-12-01

    The study addresses issues concerning an automatic system designed for monitoring the movement of controlled points located on the roof covering of the Forest Opera in Sopot. It presents the calculation algorithm proposed by the authors, which takes into account the specific design and location of the test object. The high surrounding forest stand makes it difficult to use distant reference points. Hence, the reference points used to study the stability of the measuring position are located on the ground elements of the six-meter-deep concrete foundations, from which the steel arches supporting the roof covering (membrane) of the Forest Opera rise. The tacheometer used in the measurements is located in a glass body placed on a special platform attached to the steel arches. Measurements of horizontal directions, vertical angles and distances can additionally be subject to errors caused by the laser beam passing through the glass. Dynamic changes in weather conditions, including temperature and pressure, also have a significant impact on the measurement errors and thus on the accuracy of the final determinations, represented by the relevant covariance matrices. The estimated coordinates of the reference points, controlled points and tacheometer, along with the corresponding covariance matrices obtained from the calculations in the various epochs, are used to determine the significance of the detected movements. If the reference points are stable, the algorithm also makes it possible to study changes in the position of the tacheometer over time, on the basis of measurements performed on these points.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, R; Zhu, X; Li, S

    Purpose: High Dose Rate (HDR) brachytherapy forward planning is principally an iterative process; hence, plan quality is affected by planners' experiences and limited planning time. Thus, this may lead to sporadic errors and inconsistencies in planning. A statistical tool based on previously approved clinical treatment plans would help to maintain the consistency of planning quality and improve the efficiency of second checking. Methods: An independent dose calculation tool was developed from commercial software. Thirty-three previously approved cervical HDR plans with the same prescription dose (550 cGy), applicator type, and treatment protocol were examined, and ICRU-defined reference point doses (bladder, vaginal mucosa, rectum, and points A/B) along with dwell times were collected. The dose calculation tool then calculated an appropriate range with a 95% confidence interval for each parameter obtained, which would be used as the benchmark for evaluation of those parameters in future HDR treatment plans. Model quality was verified using five randomly selected approved plans from the same dataset. Results: Dose variations appear to be larger at the reference points of the bladder and mucosa as compared with the rectum. Most reference point doses from the verification plans fell within the predicted range, except the doses of two rectum points and two points at reference position A (owing to rectal anatomical variations and clinical adjustment of prescription points, respectively). Similar results were obtained for tandem and ring dwell times, despite relatively larger uncertainties. Conclusion: This statistical tool provides insight into the clinically acceptable range of cervical HDR plans, which could be useful in plan checking and identifying potential planning errors, thus improving the consistency of plan quality.
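
    A minimal sketch of the kind of statistical benchmark described above: build an approximate 95% range for one plan parameter (here, a hypothetical rectum reference-point dose) from previously approved plans and flag new plan values falling outside it. The dose values are synthetic, not clinical data.

    ```python
    import numpy as np

    # Minimal sketch: build an approximate 95% range for a plan parameter from
    # previously approved plans, then flag new plans falling outside it.
    # The dose values are synthetic, not clinical data.

    rng = np.random.default_rng(2)
    approved_rectum_doses_cgy = rng.normal(loc=350.0, scale=25.0, size=33)

    mean = approved_rectum_doses_cgy.mean()
    sd = approved_rectum_doses_cgy.std(ddof=1)
    low, high = mean - 1.96 * sd, mean + 1.96 * sd      # approximate 95% range

    def check_plan(value_cgy: float) -> str:
        return "within range" if low <= value_cgy <= high else "flag for review"

    print(f"95% range: [{low:.1f}, {high:.1f}] cGy")
    print(check_plan(360.0), check_plan(450.0))
    ```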

  8. Does the Earth's Magnetic Field Serve as a Reference for Alignment of the Honeybee Waggle Dance?

    PubMed Central

    Lambinet, Veronika; Hayden, Michael E.; Bieri, Marco; Gries, Gerhard

    2014-01-01

    The honeybee (Apis mellifera) waggle dance, which is performed inside the hive by forager bees, informs hive mates about a potent food source, and recruits them to its location. It consists of a repeated figure-8 pattern: two oppositely directed turns interspersed by a short straight segment, the “waggle run”. The waggle run consists of a single stride emphasized by lateral waggling motions of the abdomen. Directional information pointing to a food source relative to the sun's azimuth is encoded in the angle between the waggle run line and a reference line, which is generally thought to be established by gravity. Yet, there is tantalizing evidence that the local (ambient) geomagnetic field (LGMF) could play a role. We tested the effect of the LGMF on the recruitment success of forager bees by placing observation hives inside large Helmholtz coils, and then either reducing the LGMF to 2% or shifting its apparent declination. Neither of these treatments reduced the number of nest mates that waggle dancing forager bees recruited to a feeding station located 200 m north of the hive. These results indicate that the LGMF does not act as the reference for the alignment of waggle-dancing bees. PMID:25541731

  9. Vital signs in older patients: age-related changes.

    PubMed

    Chester, Jennifer Gonik; Rudolph, James L

    2011-06-01

    Vital signs are objective measures of physiological function that are used to monitor acute and chronic disease and thus serve as a basic communication tool about patient status. The purpose of this analysis was to review age-related changes of traditional vital signs (blood pressure, pulse, respiratory rate, and temperature) with a focus on age-related molecular changes, organ system changes, systemic changes, and altered compensation to stressors. The review found that numerous physiological and pathological changes may occur with age and alter vital signs. These changes tend to reduce the ability of organ systems to adapt to physiological stressors, particularly in frail older patients. Because of the diversity of age-related physiological changes and comorbidities in an individual, single-point measurements of vital signs have less sensitivity in detecting disease processes. However, serial vital sign assessments may have increased sensitivity, especially when viewed in the context of individualized reference ranges. Vital sign change with age may be subtle because of reduced physiological ranges. However, change from an individual reference range may indicate important warning signs and thus may require additional evaluation to understand potential underlying pathological processes. As a result, individualized reference ranges may provide improved sensitivity in frail, older patients. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. All rights reserved.

  10. A bioequivalence study of levothyroxine tablets versus an oral levothyroxine solution in healthy volunteers.

    PubMed

    Yannovits, N; Zintzaras, E; Pouli, A; Koukoulis, G; Lyberi, S; Savari, E; Potamianos, S; Triposkiadis, F; Stefanidis, I; Zartaloudis, E; Benakis, A

    2006-01-01

    Probably for genetic reasons a substantial part of the Greek population requires Levothyroxine treatment. Since commercially available Levothyroxine was first marketed, the manufacture and storage of the drug in tablet form has been complicated and difficult; and as cases of therapeutic failure have frequently been reported following treatment with this medicinal agent, quality control is an essential factor. Due to the unreliability of Levothyroxine-based commercial products, in the present study we decided to follow the Food and Drug Administration (FDA) guidelines*, and use a Levothyroxine solution as reference product. The bioavailability of the Levothyroxine sodium tablet formulation THYROHORMONE/Ni-The Ltd (0.2 mg/tab) and that of a reference oral solution (0.3 mg/100 ml) under fasting conditions were compared in an open, randomized, single-dose, two-way crossover study. Twenty-four healthy Caucasian volunteers (M/F = 15/9, mean age = 32.9 +/- 7.4 yr) participated in the study. Bioavailability was assessed by pharmacokinetic parameters such as the area under the plasma concentration-time curve from time zero up to the last measurable time point (AUC(last)) and the maximum plasma concentration (Cmax). Heparinized venous blood samples were collected pre-dose and up to 48 hours post-dose. Levothyroxine sodium in plasma samples was assayed by a validated electrochemiluminescent immunoassay technique. Statistical analysis showed that the post-dose thyroid-stimulating hormone (TSH) levels decreased significantly (p<0.05). Regarding Levothyroxine (T4), the point estimate of the test formulation to the reference formulation ratios (T/R) for AUC(last) and Cmax was 0.92 with 90% confidence limits (0.90, 0.94) and 0.93 with 90% confidence limits (0.91, 0.94), respectively. Regarding triiodo-L-thyronine (T3), the point estimate for the T/R ratios of AUC(last) and Cmax was 0.92 with 90% confidence limits (0.90, 0.95) and 0.94 with 90% confidence limits (0.92, 0.95), respectively. The 90% confidence limits for the pharmacokinetic parameters AUC(last) and Cmax lie within the acceptance limits for bioequivalence (0.80, 1.25), for both T3 and T4.
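
    Bioequivalence conclusions of this kind rest on the 90% confidence interval of the test-to-reference geometric mean ratio falling inside 0.80-1.25. The sketch below is a generic illustration, not the study's statistical code; the AUC values are invented and a simplified paired analysis on the log scale stands in for the full crossover ANOVA.

```python
import numpy as np
from scipy import stats

def ratio_90ci(test, reference):
    """90% CI for the geometric mean ratio (test/reference) from paired data."""
    d = np.log(np.asarray(test, float)) - np.log(np.asarray(reference, float))
    n = d.size
    mean, se = d.mean(), d.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(0.95, df=n - 1)           # two one-sided tests at alpha = 0.05
    lo, hi = np.exp(mean - t * se), np.exp(mean + t * se)
    return np.exp(mean), lo, hi

# Invented AUC(last) values for the same subjects under test and reference products.
auc_test = [105, 98, 112, 90, 101, 95, 108, 99]
auc_ref  = [110, 104, 118, 97, 109, 103, 116, 106]
point, lo, hi = ratio_90ci(auc_test, auc_ref)
print(f"T/R = {point:.2f}, 90% CI = ({lo:.2f}, {hi:.2f}), "
      f"bioequivalent: {0.80 <= lo and hi <= 1.25}")
```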

  11. TH-CD-BRA-03: Direct Measurement of Magnetic Field Correction Factors, KQB, for Application in Future Codes of Practice for Reference Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolthaus, J; Asselen, B van; Woodings, S

    2016-06-15

    Purpose: With an MR-linac, radiation is delivered in the presence of a magnetic field. Modifications to the codes of practice (CoPs) for reference dosimetry are required to incorporate the effect of the magnetic field. Methods: In most CoPs the absorbed dose is determined using the well-known kQ formalism as the product of the calibration coefficient, the corrected electrometer reading and kQ, which accounts for the difference in beam quality. To keep a similar formalism, a single correction factor kQ,B is introduced which replaces kQ and corrects for both beam quality and B-field. In this study we propose a method to determine kQ,B under reference conditions in the MR-linac without using a primary standard, as the product of: (i) the ratio between detector readings without and with B-field (kB); (ii) the ratio between doses at the point of measurement with and without B-field (rho); and (iii) kQ in the absence of the B-field in the MR-linac beam (kQmrl0,Q0). The ratio of the readings, which covers the change in detector reading due to the different electron trajectories in the detector, was measured with a waterproof ionization chamber (IBA-FC65g) in a water phantom in the MR-linac without and with B-field. The change in dose-to-water at the point of measurement due to the B-field was determined with a Monte Carlo based TPS. Results: For the presented approach, the measured ratio of readings is 0.956 and the calculated ratio of doses at the point of measurement is 0.995. Based on TPR20,10 measurements, kQ was calculated as 0.989 using NCS-18. This yields a value of 0.9408 for kQ,B. Conclusion: The presented approach to determine kQ,B agrees with a method based on primary standards within 0.4%, with an uncertainty of 1% (1 std. uncert.). It differs from a similar approach using a PMMA phantom and an NE2571 chamber by 1.3%.
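
    The reported value follows directly from the product defined in the abstract. A minimal check, using only the numbers quoted above, is shown below.

```python
# Product form of the magnetic field correction factor described in the abstract:
# k_QB = k_B (reading ratio) * rho (dose ratio) * k_Q (beam quality correction)
k_B = 0.956   # ratio of detector readings without / with B-field
rho = 0.995   # ratio of doses at the point of measurement with / without B-field
k_Q = 0.989   # beam quality correction without B-field (NCS-18, from TPR20,10)

k_QB = k_B * rho * k_Q
print(f"k_QB = {k_QB:.4f}")   # 0.9408, matching the value reported in the abstract
```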

  12. Near Real Time Structural Health Monitoring with Multiple Sensors in a Cloud Environment

    NASA Astrophysics Data System (ADS)

    Bock, Y.; Todd, M.; Kuester, F.; Goldberg, D.; Lo, E.; Maher, R.

    2017-12-01

    A repeated near real time 3-D digital surrogate representation of critical engineered structures can be used to provide actionable data on subtle time-varying displacements in support of disaster resiliency. We describe a damage monitoring system of optimally-integrated complementary sensors, including Global Navigation Satellite Systems (GNSS), Micro-Electro-Mechanical Systems (MEMS) accelerometers coupled with the GNSS (seismogeodesy), light multi-rotor Unmanned Aerial Vehicles (UAVs) equipped with high-resolution digital cameras and GNSS/IMU, and ground-based Light Detection and Ranging (LIDAR). The seismogeodetic system provides point measurements of static and dynamic displacements and seismic velocities of the structure. The GNSS ties the UAV and LIDAR imagery to an absolute reference frame with respect to survey stations in the vicinity of the structure to isolate the building response to ground motions. The GNSS/IMU can also estimate the trajectory of the UAV with respect to the absolute reference frame. With these constraints, multiple UAVs and LIDAR images can provide 4-D displacements of thousands of points on the structure. The UAV systematically circumnavigates the target structure, collecting high-resolution image data, while the ground LIDAR scans the structure from different perspectives to create a detailed baseline 3-D reference model. UAV- and LIDAR-based imaging can subsequently be repeated after extreme events, or after long time intervals, to assess before and after conditions. The unique challenge is that disaster environments are often highly dynamic, resulting in rapidly evolving, spatio-temporal data assets with the need for near real time access to the available data and the tools to translate these data into decisions. The seismogeodetic analysis has already been demonstrated in the NASA AIST Managed Cloud Environment (AMCE) designed to manage large NASA Earth Observation data projects on Amazon Web Services (AWS). The Cloud provides distinct advantages in terms of extensive storage and computing resources required for processing UAV and LIDAR imagery. Furthermore, it avoids single points of failure and allows for remote operations during emergencies, when near real time access to structures may be limited.

  13. Learning and memory performance in breast cancer survivors 2 to 6 years post-treatment: the role of encoding versus forgetting.

    PubMed

    Root, James C; Andreotti, Charissa; Tsu, Loretta; Ellmore, Timothy M; Ahles, Tim A

    2016-06-01

    Our previous retrospective analysis of clinically referred breast cancer survivors' performance on learning and memory measures found a primary weakness in initial encoding of information into working memory with intact retention and recall of this same information at a delay. This suggests that survivors may misinterpret cognitive lapses as being due to forgetting when, in actuality, they were not able to properly encode this information at the time of initial exposure. Our objective in this study was to replicate and extend this pattern of performance to a research sample, in which subjects were not clinically referred for cognitive issues, to increase the generalizability of this finding. We contrasted learning and memory performance between breast cancer survivors on endocrine therapy 2 to 6 years post-treatment and age- and education-matched healthy controls. We then stratified lower- and higher-performing breast cancer survivors to examine specific patterns of learning and memory performance. Contrasts were generated for four aggregate visual and verbal memory variables from the California Verbal Learning Test-2 (CVLT-2) and the Brown Location Test (BLT): Single-trial Learning: Trial 1 performance, Multiple-trial Learning: Trial 5 performance, Delayed Recall: Long-delay Recall performance, and Memory Errors: False-positive errors. As predicted, breast cancer survivors' performance as a whole was significantly lower on Single-trial Learning than the healthy control group but exhibited no significant difference in Delayed Recall. In the secondary analysis contrasting lower- and higher-performing survivors on cognitive measures, the same pattern of lower Single-trial Learning performance was exhibited in both groups, with the additional finding of significantly weaker Multiple-trial Learning performance in the lower-performing breast cancer group and intact Delayed Recall performance in both groups. As with our earlier finding of weaker initial encoding with intact recall in a cohort of clinically referred breast cancer survivors, our results indicate this same profile in a research sample of breast cancer survivors. Further, when the breast cancer group was stratified by lower and higher performance, both groups exhibited significantly lower performance on initial encoding, with more pronounced encoding weakness in the lower-performing group. As in our previous research, survivors did not lose successfully encoded information over longer delays, either in the lower- or higher-performing group, again arguing against memory decay in survivors. The finding of weaker initial encoding of information together with intact delayed recall in survivors points to specific treatment interventions in rehabilitation of cognitive dysfunction and is discussed.

  14. Effect of laser frequency noise on fiber-optic frequency reference distribution

    NASA Technical Reports Server (NTRS)

    Logan, R. T., Jr.; Lutes, G. F.; Maleki, L.

    1989-01-01

    The effect of the linewidth of a single longitudinal-mode laser on the frequency stability of a frequency reference transmitted over a single-mode optical fiber is analyzed. The interaction of the random laser frequency deviations with the dispersion of the optical fiber is considered to determine theoretically the effect on the Allan deviation (square root of the Allan variance) of the transmitted frequency reference. It is shown that the magnitude of this effect may determine the limit of the ultimate stability possible for frequency reference transmission on optical fiber, but is not a serious limitation to present system performance.
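
    As a reference for the stability metric mentioned above, the non-overlapping Allan deviation can be computed from fractional frequency samples as sketched below. This is a generic textbook construction, not the analysis from the cited paper, and the sample data are synthetic.

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional frequency samples y,
    for an averaging factor of m samples per bin."""
    y = np.asarray(y, float)
    n_bins = y.size // m
    means = y[:n_bins * m].reshape(n_bins, m).mean(axis=1)  # averages over tau = m * tau0
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))      # sqrt of the Allan variance

# Synthetic white-frequency-noise data sampled once per second (tau0 = 1 s).
rng = np.random.default_rng(0)
y = 1e-13 * rng.standard_normal(100_000)
for m in (1, 10, 100, 1000):
    print(f"tau = {m:5d} s  sigma_y(tau) = {allan_deviation(y, m):.2e}")
```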

  15. Multipoint vibrometry with dynamic and static holograms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haist, T.; Lingel, C.; Osten, W.

    2013-12-15

    We report on two multipoint vibrometers with user-adjustable positions of the measurement spots. Both systems use holograms for beam deflection. The measurement is based on heterodyne interferometry with a frequency difference of 5 MHz between the reference and object beams. One of the systems uses programmable positioning of the spots in the object volume but is limited in terms of light efficiency. The other system is based on static holograms in combination with mechanical adjustment of the measurement spots and does not have such a general efficiency restriction. Design considerations are given and we show measurement results for both systems. In addition, we analyze the sensitivity of the systems, which is a major limitation compared to single point scanning systems.

  16. Array microscopy technology and its application to digital detection of Mycobacterium tuberculosis

    NASA Astrophysics Data System (ADS)

    McCall, Brian P.

    Tuberculosis causes more deaths worldwide than any other curable infectious disease. This is the case despite tuberculosis appearing to be on the verge of eradication midway through the last century. Efforts at reversing the spread of tuberculosis have intensified since the early 1990s. Since then, microscopy has been the primary frontline diagnostic. In this dissertation, advances in clinical microscopy towards array microscopy for digital detection of Mycobacterium tuberculosis are presented. Digital array microscopy separates the tasks of microscope operation and pathogen detection and will reduce the specialization needed in order to operate the microscope. Distributing the work and reducing specialization will allow this technology to be deployed at the point of care, taking the front-line diagnostic for tuberculosis from the microscopy center to the community health center. By improving access to microscopy centers, hundreds of thousands of lives can be saved. For this dissertation, a lens was designed that can be manufactured as a 4x6 array of microscopes. This lens design is diffraction limited, having less than 0.071 waves of aberration (root mean square) over the entire field of view. The total area imaged onto a full-frame digital image sensor is expected to be 3.94 mm2, which according to tuberculosis microscopy guidelines is more than sufficient for a sensitive diagnosis. The design is tolerant to single point diamond turning manufacturing errors, as found by tolerance analysis and by fabricating a prototype. Diamond micro-milling, a fabrication technique for lens array molds, was applied to plastic plano-concave and plano-convex lens arrays, and found to produce high quality optical surfaces. The micro-milling technique did not prove robust enough to produce bi-convex and meniscus lens arrays in a variety of lens shapes, however, and it required lengthy fabrication times. In order to rapidly prototype new lenses, a new diamond machining technique was developed, called 4-axis single point diamond machining. This technique is 2-10x faster than micro-milling, depending on how advanced the micro-milling equipment is. With array microscope fabrication still in development, a single prototype of the lens designed for an array microscope was fabricated using single point diamond turning. The prototype microscope objective was validated in a pre-clinical trial. The prototype was compared with a standard clinical microscope objective in diagnostic tests. High concordance (Fleiss' kappa of 0.88) was found between diagnoses made using the prototype and standard microscope objectives and a reference test. With the lens designed and validated and an advanced fabrication process developed, array microscopy technology has advanced to the point where it is feasible to rapidly prototype an array microscope for detection of tuberculosis and translate the array microscope from an innovative concept to a device that can save lives.

  17. Modeling the Global Coronal Field with Simulated Synoptic Magnetograms from Earth and the Lagrange Points L3, L4, and L5

    NASA Astrophysics Data System (ADS)

    Petrie, Gordon; Pevtsov, Alexei; Schwarz, Andrew; DeRosa, Marc

    2018-06-01

    The solar photospheric magnetic flux distribution is key to structuring the global solar corona and heliosphere. Regular full-disk photospheric magnetogram data are therefore essential to our ability to model and forecast heliospheric phenomena such as space weather. However, our spatio-temporal coverage of the photospheric field is currently limited by our single vantage point at/near Earth. In particular, the polar fields play a leading role in structuring the large-scale corona and heliosphere, but each pole is unobservable for more than 6 months per year. Here we model the possible effect of full-disk magnetogram data from the Lagrange points L4 and L5, each extending longitude coverage by 60°. Adding data also from the more distant point L3 extends the longitudinal coverage much further. The additional vantage points also improve the visibility of the globally influential polar fields. Using a flux-transport model for the solar photospheric field, we model full-disk observations from Earth/L1, L3, L4, and L5 over a solar cycle, construct synoptic maps using a novel weighting scheme adapted for merging magnetogram data from multiple viewpoints, and compute potential-field models for the global coronal field. Each additional viewpoint brings the maps and models into closer agreement with the reference field from the flux-transport simulation, with particular improvement at polar latitudes, the main source of the fast solar wind.

  18. Structural re-alignment in an immunologic surface region of ricin A chain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemla, A T; Zhou, C E

    2007-07-24

    We compared structure alignments generated by several protein structure comparison programs to determine whether existing methods would satisfactorily align residues at a highly conserved position within an immunogenic loop in ribosome inactivating proteins (RIPs). Using default settings, structure alignments generated by several programs (CE, DaliLite, FATCAT, LGA, MAMMOTH, MATRAS, SHEBA, SSM) failed to align the respective conserved residues, although LGA reported correct residue-residue (R-R) correspondences when the beta-carbon (Cb) position was used as the point of reference in the alignment calculations. Further tests using variable points of reference indicated that points distal from the beta carbon along a vector connecting the alpha and beta carbons yielded rigid structural alignments in which residues known to be highly conserved in RIPs were reported as corresponding residues in structural comparisons between ricin A chain, abrin-A, and other RIPs. Results suggest that approaches to structure alignment employing alternate point representations corresponding to side chain position may yield structure alignments that are more consistent with observed conservation of functional surface residues than do standard alignment programs, which apply uniform criteria for alignment (i.e., alpha carbon (Ca) as point of reference) along the entirety of the peptide chain. We present the results of tests that suggest the utility of allowing user-specified points of reference in generating alternate structural alignments, and we present a web server for automatically generating such alignments.
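
    The alternate point representation described above can be generated directly from backbone coordinates. The sketch below is an illustrative geometric construction under the stated idea, not the authors' code; the extension distance and the coordinates are assumptions. It extends the Ca-to-Cb vector by a chosen distance beyond Cb to obtain the reference point used for alignment.

```python
import numpy as np

def side_chain_reference_point(ca, cb, extension=1.5):
    """Point located `extension` angstroms beyond Cb along the Ca->Cb direction."""
    ca, cb = np.asarray(ca, float), np.asarray(cb, float)
    direction = cb - ca
    direction /= np.linalg.norm(direction)
    return cb + extension * direction

# Illustrative coordinates (angstroms) for one residue's alpha and beta carbons.
ca = [12.1, 3.4, -7.8]
cb = [13.0, 4.6, -8.3]
print(side_chain_reference_point(ca, cb))
```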

  19. Features of the photometry of the superposition of coherent vector electromagnetic waves

    NASA Astrophysics Data System (ADS)

    Sakhnovskyj, Mykhajlo Yu.; Tymochko, Bogdan M.; Rudeichuk, Volodymyr M.

    2018-01-01

    In this paper we propose a general approach to calculating the intensity and polarization fields formed by the superposition of arbitrary coherent vector beams at points of a given reference plane. We also propose a method for measuring the photometric parameters of the field formed in the neighborhood of an arbitrary point of the analysis plane by minimizing the irradiance in the vicinity of that point (the method of zero amplitude at a given point), which is achieved by superimposing on it a reference wave with controlled intensity, polarization state, phase, and angle of incidence.

  20. Handheld tools assess medical necessity at the point of care.

    PubMed

    Pollard, Dan

    2002-01-01

    An emerging strategy to manage financial risk in clinical practice is to involve the physician at the point of care. Using handheld technology, encounter-specific information along with medical necessity policy can be presented to physicians allowing them to integrate it into their medical decision-making process. Three different strategies are discussed: reference books or paper encounter forms, electronic reference tools, and integrated process tools. The electronic reference tool strategy was evaluated and showed a return on investment exceeding 1200% due to reduced overhead costs associated with rework of claim errors.

  1. Comparison of dynamic FDG-microPET study in a rabbit turpentine-induced inflammatory model and in a rabbit VX2 tumor model.

    PubMed

    Hamazawa, Yoshimasa; Koyama, Koichi; Okamura, Terue; Wada, Yasuhiro; Wakasa, Tomoko; Okuma, Tomohisa; Watanabe, Yasuyoshi; Inoue, Yuichi

    2007-01-01

    We investigated the optimum time for differentiating tumor from inflammation using dynamic FDG-microPET scans obtained with a MicroPET P4 scanner in animal models. Forty-six rabbits with 92 inflammatory lesions, induced 2, 5, 7, 14, 30 and 60 days after injection of 0.2 ml (Group 1) or 1.0 ml (Group 2) of turpentine oil, were used as inflammatory models. Five rabbits with 10 VX2 tumors were used as the tumor model. Helical CT scans were performed before the PET studies. In the PET study, after 4 hours of fasting and following transmission scans, dynamic emission data were acquired until 2 hours after intravenous FDG injection. Images were reconstructed every 10 minutes using a filtered-back projection method. PET images were analyzed visually with reference to the CT images. For quantitative analysis, the inflammation-to-muscle (I/M) ratio and tumor-to-muscle (T/M) ratio were calculated after regions of interest were set in tumors and muscles with reference to the CT images, and time-I/M ratio and time-T/M ratio curves (TRCs) were prepared to show the change in these ratios over time. The histological appearance of both inflammatory lesions and tumor lesions was examined and compared with the CT and FDG-microPET images. In visual and quantitative analysis, all the I/M ratios and the T/M ratios increased over time, except that Day 60 of Group 1 showed an almost flat curve. The TRC of the T/M ratio showed a linear increase over time, while the TRCs of the I/M ratios showed at most a parabolic increase over time. FDG uptake in the inflammatory lesions reflected the histological findings. For differentiating tumors from inflammatory lesions with the early image acquired at 40 min in dual-time-point imaging, the delayed image must be acquired 30 min after the early image, whereas imaging at 90 min or later after intravenous FDG injection was necessary in single-time-point imaging. Our results suggest the possibility of shortening the overall testing time in clinical practice by adopting dual-time-point imaging rather than single-time-point imaging.

  2. Locally adapted NeQuick 2 model performance in European middle latitude ionosphere under different solar, geomagnetic and seasonal conditions

    NASA Astrophysics Data System (ADS)

    Vuković, Josip; Kos, Tomislav

    2017-10-01

    The ionosphere introduces positioning error in Global Navigation Satellite Systems (GNSS). There are several approaches for minimizing the error, with various levels of accuracy and different extents of coverage area. To model the state of the ionosphere in a region containing low number of reference GNSS stations, a locally adapted NeQuick 2 model can be used. Data ingestion updates the model with local level of ionization, enabling it to follow the observed changes of ionization levels. The NeQuick 2 model was adapted to local reference Total Electron Content (TEC) data using single station approach and evaluated using calibrated TEC data derived from 41 testing GNSS stations distributed around the data ingestion point. Its performance was observed in European middle latitudes in different ionospheric conditions of the period between 2011 and 2015. The modelling accuracy was evaluated in four azimuthal quadrants, with coverage radii calculated for three error thresholds: 12, 6 and 3 TEC Units (TECU). Diurnal radii change was observed for groups of days within periods of low and high solar activity and different seasons of the year. The statistical analysis was conducted on those groups of days, revealing trends in each of the groups, similarities between days within groups and the 95th percentile radii as a practically applicable measure of model performance. In almost all cases the modelling accuracy was better than 12 TECU, having the biggest radius from the data ingestion point. Modelling accuracy better than 6 TECU was achieved within reduced radius in all observed periods, while accuracy better than 3 TECU was reached only in summer. The calculated radii and interpolated error levels were presented on maps. That was especially useful in analyzing the model performance during the strongest geomagnetic storms of the observed period, with each of them having unique development and influence on model accuracy. Although some of the storms severely degraded the model accuracy, during most of the disturbed periods the model could be used, but with lower accuracy than in the quiet geomagnetic conditions. The comprehensive analysis of locally adapted NeQuick 2 model performance highlighted the challenges of using the single point data ingestion applied to a large region in middle latitudes and determined the achievable radii for different error thresholds in various ionospheric conditions.

  3. Predictors of early person reference development: maternal language input, attachment and neurodevelopmental markers.

    PubMed

    Lemche, Erwin; Joraschky, Peter; Klann-Delius, Gisela

    2013-12-01

    In a longitudinal natural language development study in Germany, the acquisition of verbal symbols for present persons, absent persons, inanimate things and the mother-toddler dyad was investigated. Following the notion that verbal referent use is more developed in ostensive contexts, symbolic play situations were coded for verbal person reference by means of noun and pronoun use. Depending on attachment classifications at twelve months of age, effects of attachment classification and maternal language input were studied at four time points up to 36 months. Hierarchical regression analyses revealed that, except for mother absence, maternal verbal referent input rates at 17 and 36 months were stronger predictors for all referent types than any of the attachment organizations, or any other social or biological predictor variable. Attachment effects accounted for up to 9.8% of unique variance proportions in the person reference variables. Perinatal and familial measures predicted person references dependent on reference type. The results of this investigation indicate that mother-reference, self-reference and thing-reference develop in similar quantities measured from the 17-month time point, but are dependent on attachment quality. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Automatic identification of the reference system based on the fourth ventricular landmarks in T1-weighted MR images.

    PubMed

    Fu, Yili; Gao, Wenpeng; Chen, Xiaoguang; Zhu, Minwei; Shen, Weigao; Wang, Shuguo

    2010-01-01

    The reference system based on the fourth ventricular landmarks (including the fastigial point and ventricular floor plane) is used in medical image analysis of the brain stem. The objective of this study was to develop a rapid, robust, and accurate method for the automatic identification of this reference system on T1-weighted magnetic resonance images. The fully automated method developed in this study consisted of four stages: preprocessing of the data set, expectation-maximization algorithm-based extraction of the fourth ventricle in the region of interest, a coarse-to-fine strategy for identifying the fastigial point, and localization of the base point. The method was evaluated qualitatively on 27 Brain Web data sets and quantitatively on 18 Internet Brain Segmentation Repository data sets and 30 clinical scans. The results of the qualitative evaluation indicated that the method was robust to rotation, landmark variation, noise, and inhomogeneity. The results of the quantitative evaluation indicated that the method was able to identify the reference system with an accuracy of 0.7 +/- 0.2 mm for the fastigial point and 1.1 +/- 0.3 mm for the base point. It took <6 seconds for the method to identify the related landmarks on a personal computer with an Intel Core 2 6300 processor and 2 GB of random-access memory. The proposed method for the automatic identification of the reference system based on the fourth ventricular landmarks was shown to be rapid, robust, and accurate. The method has potential utility in image registration and computer-aided surgery.

  5. Unitary cocycle representations of the Galilean line group: Quantum mechanical principle of equivalence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacGregor, B.R.; McCoy, A.E.; Wickramasekara, S., E-mail: wickrama@grinnell.edu

    2012-09-15

    We present a formalism of Galilean quantum mechanics in non-inertial reference frames and discuss its implications for the equivalence principle. This extension of quantum mechanics rests on the Galilean line group, the semidirect product of the real line and the group of analytic functions from the real line to the Euclidean group in three dimensions. This group provides transformations between all inertial and non-inertial reference frames and contains the Galilei group as a subgroup. We construct a certain class of unitary representations of the Galilean line group and show that these representations determine the structure of quantum mechanics in non-inertial reference frames. Our representations of the Galilean line group contain the usual unitary projective representations of the Galilei group, but have a more intricate cocycle structure. The transformation formula for the Hamiltonian under the Galilean line group shows that in a non-inertial reference frame it acquires a fictitious potential energy term that is proportional to the inertial mass, suggesting the equivalence of inertial mass and gravitational mass in quantum mechanics. Highlights: a formulation of Galilean quantum mechanics in non-inertial reference frames is given; the key concept is the Galilean line group, an infinite dimensional group; unitary, cocycle representations of the Galilean line group are constructed; a non-central extension of the group underlies these representations; the quantum equivalence principle and gravity emerge from these representations.

  6. Method of recertifying a loaded bearing member

    NASA Technical Reports Server (NTRS)

    Allison, Sidney G. (Inventor)

    1992-01-01

    A method is described of recertifying a loaded bearing member using ultrasound testing to compensate for different equipment configurations and temperature conditions. The standard frequency F1 of a reference block is determined via an ultrasonic tone burst generated by a first pulsed phase locked loop (P2L2) equipment configuration. Once a lock point number S is determined for F1, the reference frequency F1a of the reference block is determined at this lock point number via a second P2L2 equipment configuration to permit an equipment offset compensation factor Fo1 = ((F1-F1a)/F1)(1000000) to be determined. Next, a reference frequency F2 of the unloaded bearing member is determined using a second P2L2 equipment configuration and is then compensated for equipment offset errors via the relationship F2 + F2(Fo1)/1000000. A lock point number b is also determined for F2. A resonant frequency F3 is determined for the reference block using a third P2L2 equipment configuration to determine a second offset compensation factor Fo2 = ((F1-F3)/F1)(1000000). Next, the resonant frequency F4 of the loaded bearing member is measured at lock point number b via the third P2L2 equipment configuration and the bolt load is determined by the relationship (-1000000)CI(((F2-F4)/F2)-Fo2), wherein CI is a factor correlating measured frequency shift to the applied load. Temperature compensation is also performed at each point in the process.
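
    The sequence of compensations and the final load relationship quoted above can be condensed into a few lines of arithmetic. The sketch below is illustrative only: the frequency values and the correlation factor CI are invented, and the offset factors (quoted in parts per million) are converted back to fractions when combined, which is our interpretation of the quoted formula rather than a confirmed detail.

```python
def bolt_load(f1, f1a, f2, f3, f4, ci):
    """Bolt load from the frequency relationships quoted in the abstract.

    f1  : standard frequency of the reference block (first configuration)
    f1a : reference-block frequency re-measured with the second configuration
    f2  : unloaded bearing member frequency (second configuration)
    f3  : reference-block frequency with the third configuration
    f4  : loaded bearing member frequency (third configuration)
    ci  : factor correlating frequency shift to applied load
    """
    fo1 = ((f1 - f1a) / f1) * 1_000_000          # equipment offset, configuration 2 (ppm)
    f2_corr = f2 + f2 * fo1 / 1_000_000          # offset-compensated member frequency
    fo2 = ((f1 - f3) / f1) * 1_000_000           # equipment offset, configuration 3 (ppm)
    # Fo2 converted from ppm to a fraction here; this is an interpretation of the formula.
    return -1_000_000 * ci * (((f2_corr - f4) / f2_corr) - fo2 / 1_000_000)

# Invented example values (Hz) and correlation factor.
load = bolt_load(5_000_000, 4_999_950, 5_200_000, 4_999_900, 5_201_400, 40_000)
print(f"Estimated load (in CI's units): {load:.0f}")
```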

  7. The mass-action law based algorithms for quantitative econo-green bio-research.

    PubMed

    Chou, Ting-Chao

    2011-05-01

    The relationship between dose and effect is not random, but rather governed by the unified theory based on the median-effect equation (MEE) of the mass-action law. Rearrangement of MEE yields the mathematical form of the Michaelis-Menten, Hill, Henderson-Hasselbalch and Scatchard equations of biochemistry and biophysics, and the median-effect plot allows linearization of all dose-effect curves regardless of potency and shape. The "median" is the universal common-link and reference-point for the 1st-order to higher-order dynamics, and from single-entities to multiple-entities and thus, it allows the all for one and one for all unity theory to "integrate" simple and complex systems. Its applications include the construction of a dose-effect curve with a theoretical minimum of only two data points if they are accurately determined; quantification of synergism or antagonism at all dose and effect levels; the low-dose risk assessment for carcinogens, toxic substances or radiation; and the determination of competitiveness and exclusivity for receptor binding. Since the MEE algorithm allows the reduced requirement of the number of data points for small size experimentation, and yields quantitative bioinformatics, it points to the deterministic, efficient, low-cost biomedical research and drug discovery, and ethical planning for clinical trials. It is concluded that the contemporary biomedical sciences would greatly benefit from the mass-action law based "Green Revolution".
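
    The median-effect equation underlying this framework is fa/fu = (D/Dm)^m, which linearizes as log(fa/fu) = m·log(D) − m·log(Dm). The sketch below is a generic illustration of that linearization, not the author's software; the dose-effect pairs are invented.

```python
import numpy as np

# Median-effect equation: fa / fu = (D / Dm)^m, with fu = 1 - fa.
# Linear form: log(fa / fu) = m * log(D) - m * log(Dm)  (the median-effect plot).
doses = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # invented doses
fa = np.array([0.12, 0.28, 0.50, 0.73, 0.90])        # invented fractions affected

x = np.log10(doses)
y = np.log10(fa / (1.0 - fa))
m, intercept = np.polyfit(x, y, 1)                   # slope m, intercept = -m*log10(Dm)
dm = 10 ** (-intercept / m)                          # median-effect dose

print(f"m = {m:.2f}, Dm = {dm:.2f} (dose giving 50% effect)")
```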

  8. A New Approach for Inspection of Selected Geometric Parameters of a Railway Track Using Image-Based Point Clouds

    PubMed Central

    Sawicki, Piotr

    2018-01-01

    The paper presents the results of testing a proposed image-based point clouds measuring method for geometric parameters determination of a railway track. The study was performed based on a configuration of digital images and reference control network. A DSLR (digital Single-Lens-Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which have implemented different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. Final meshes of 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition of measurements and inspection of the rail tracks (error m < 1 mm), specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011. PMID:29509679

  9. A New Approach for Inspection of Selected Geometric Parameters of a Railway Track Using Image-Based Point Clouds.

    PubMed

    Gabara, Grzegorz; Sawicki, Piotr

    2018-03-06

    The paper presents the results of testing a proposed image-based point clouds measuring method for geometric parameters determination of a railway track. The study was performed based on a configuration of digital images and reference control network. A DSLR (digital Single-Lens-Reflex) Nikon D5100 camera was used to acquire six digital images of the tested section of railway tracks. The dense point clouds and the 3D mesh model were generated with the use of two software systems, RealityCapture and PhotoScan, which have implemented different matching and 3D object reconstruction techniques: Multi-View Stereo and Semi-Global Matching, respectively. The study found that both applications could generate appropriate 3D models. Final meshes of 3D models were filtered with the MeshLab software. The CloudCompare application was used to determine the track gauge and cant for defined cross-sections, and the results obtained from point clouds by dense image matching techniques were compared with results of direct geodetic measurements. The obtained RMS difference in the horizontal (gauge) and vertical (cant) plane was RMS∆ < 0.45 mm. The achieved accuracy meets the accuracy condition of measurements and inspection of the rail tracks (error m < 1 mm), specified in the Polish branch railway instruction Id-14 (D-75) and the European technical norm EN 13848-4:2011.

  10. Real-time and accurate rail wear measurement method and experimental analysis.

    PubMed

    Liu, Zhen; Li, Fengjiao; Huang, Bangkui; Zhang, Guangjun

    2014-08-01

    When a train is running on uneven or curved rails, it generates violent vibrations on the rails. As a result, the light plane of the single-line structured light vision sensor is not vertical, causing errors in rail wear measurements (referred to as vibration errors in this paper). To avoid vibration errors, a novel rail wear measurement method is introduced in this paper, which involves three main steps. First, a multi-line structured light vision sensor (which has at least two linear laser projectors) projects a stripe-shaped light onto the inside of the rail. Second, the central points of the light stripes in the image are extracted quickly, and the three-dimensional profile of the rail is obtained based on the mathematical model of the structured light vision sensor. Then, the obtained rail profile is transformed from the measurement coordinate frame (MCF) to the standard rail coordinate frame (RCF) by taking the three-dimensional profile of the measured rail waist as the datum. Finally, rail wear constraint points are adopted to simplify the location of the rail wear points, and the profile composed of the rail wear points is compared with the standard rail profile in RCF to determine the rail wear. Both real data experiments and simulation experiments show that the vibration errors can be eliminated when the proposed method is used.
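
    The wear computation described above boils down to expressing the measured profile in the standard rail coordinate frame and differencing it against the design profile at the wear constraint points. The sketch below is a simplified 2-D illustration under assumed transform parameters (which in the paper are derived from the rail-waist datum); all coordinates are invented.

```python
import numpy as np

def to_rail_frame(points, angle_rad, translation):
    """Rigid 2-D transform from the measurement frame (MCF) to the rail frame (RCF)."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s], [s, c]])
    return points @ rot.T + translation

# Measured cross-section points (MCF, mm) near a wear constraint point,
# and the corresponding standard-profile point in RCF (all values invented).
measured_mcf = np.array([[12.4, 48.9], [12.6, 50.1], [12.8, 51.2]])
standard_rcf = np.array([16.0, 50.0])

# Transform parameters assumed here; the paper estimates them from the rail waist.
measured_rcf = to_rail_frame(measured_mcf, angle_rad=np.deg2rad(2.0),
                             translation=np.array([1.5, -0.8]))

# Wear at the constraint point: lateral offset from the standard profile at that height.
idx = np.argmin(np.abs(measured_rcf[:, 1] - standard_rcf[1]))
wear = standard_rcf[0] - measured_rcf[idx, 0]
print(f"Side wear at constraint point: {wear:.2f} mm")
```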

  11. 47 CFR 73.208 - Reference points and distance computations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES RADIO BROADCAST SERVICES FM Broadcast Stations § 73.208 Reference points and distance computations... filed no later than: (i) The last day of a filing window if the application is for a new FM facility or...(d) and 73.3573(e) if the application is for a new FM facility or a major change in the reserved band...

  12. 47 CFR 73.208 - Reference points and distance computations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES RADIO BROADCAST SERVICES FM Broadcast Stations § 73.208 Reference points and distance computations... filed no later than: (i) The last day of a filing window if the application is for a new FM facility or...(d) and 73.3573(e) if the application is for a new FM facility or a major change in the reserved band...

  13. [A case of pure agraphia due to left parietal lobe infarction].

    PubMed

    Yaguchi, H; Bando, M; Kubo, T; Ohi, M; Suzuki, K

    1998-06-01

    We report a case of a 63-year-old right-handed man with pure agraphia due to a left parietal lobe infarction. The characteristics of agraphia in this patient were as follows. 1) The written letters were generally recognizable and well formed. 2) He succeeded in pointing to a single Kana letter named by the examiner from the Japanese syllabary, but failed in pointing to Kana words. 3) Further, it took more time for the patient to point to even a single Kana letter than for the control. 4) Most errors in Kana writing were substitutions. Errors in Kanji writing were partial omissions or no response, but his ability in Kanji writing was facilitated by visual cues. He was unable to describe the Hen (a left-hand radical) and Tsukuri (the body) of some Kanji letters and to name some Kanji letters when their Hen and Tsukuri were orally given. Based on the literature, we classified pure agraphia into two types. In one type (Type 1), letters in writing are poorly formed, but the ability to produce words with methods other than writing, for example spelling with anagrams or typing, is preserved. In the other type (Type 2), letters in writing are well formed, but spelling with anagrams or typing is abnormal. Type 1 agraphia could result from a deficit of the graphic motor engram alone, while Type 2 agraphia could be caused by deficits other than the graphic motor engram. The agraphia in this case belongs to Type 2. The features of agraphia in this case suggest that it was caused by a disorder in recalling the graphemes of letters, and in arranging at least Kana letters.

  14. Fracture mechanics life analytical methods verification testing

    NASA Technical Reports Server (NTRS)

    Favenesi, J. A.; Clemmons, T. G.; Lambert, T. J.

    1994-01-01

    Verification and validation of the basic information capabilities in NASCRAC have been completed. The basic information includes computation of K versus a, J versus a, and crack opening area versus a. These quantities represent building blocks which NASCRAC uses in its other computations, such as fatigue crack life and tearing instability. Several methods were used to verify and validate the basic information capabilities. Simple configurations such as the compact tension specimen and a crack in a finite plate were verified and validated against handbook solutions for simple loads. For general loads using weight functions, offline integration using standard FORTRAN routines was performed. For more complicated configurations such as corner cracks and semielliptical cracks, NASCRAC solutions were verified and validated against published results and finite element analyses. A few minor problems were identified in the basic information capabilities for the simple configurations. In the more complicated configurations, significant differences between NASCRAC and reference solutions were observed because NASCRAC calculates its solutions as values averaged across the entire crack front, whereas the reference solutions were computed for a single point.

  15. Laser vibrometry in the quality control of the break of tanned leather

    NASA Astrophysics Data System (ADS)

    Preciado, J. Sanchez; Lopez, C. Perez; Hernandez-Montes, M. del Socorro; Torre-Ibarra, M. de la; Moreno, J. M. Flores; Ruiz, C. Tavera; Mendoza Santoyo, F.; Galan, M.

    2018-05-01

    The tanning industry treats the hides and skins of animals for use in products such as clothing, furniture and car interiors. The worth of leather is highly affected by defects that may appear prior to or during the tanning process. Break, which refers to the wrinkling of the grain surface of leather, is one of the main issues because it affects not only the visual appearance of leather but also its mechanical properties. The standardized method to classify the break pattern is to bend the leather by hand and visually compare the resulting wrinkles with a reference pattern, which makes the classification subjective and limits the evaluation to small areas. Laser vibrometry is an optical technique that has been applied in vibrational and modal analysis, methodologies used to obtain the mechanical properties of materials. This work demonstrates the use of a single-point vibrometer as a noncontact and nondestructive optical method to discriminate among five break levels, which could increase the effectiveness of leather classification for quality control in the tanning industry.

  16. Evaluation of Scaling Methods for Rotorcraft Icing

    NASA Technical Reports Server (NTRS)

    Tsao, Jen-Ching; Kreeger, Richard E.

    2010-01-01

    This paper reports the results of an experimental study in the NASA Glenn Icing Research Tunnel (IRT) to evaluate how well the current recommended scaling methods developed for fixed-wing unprotected surface icing applications might apply to representative rotor blades at finite angle of attack. Unlike the fixed-wing case, there is no single scaling method that has been systematically developed and evaluated for rotorcraft icing applications. In the present study, scaling was based on the modified Ruff method with the scale velocity determined by maintaining constant Weber number. Models were unswept NACA 0012 wing sections. The reference model had a chord of 91.4 cm and the scale model had a chord of 35.6 cm. Reference tests were conducted with velocities of 76 and 100 kt (39 and 52 m/s), droplet MVDs of 150 and 195 µm, and with stagnation-point freezing fractions of 0.3 and 0.5 at angles of attack of 0deg and 5deg. It was shown that good ice shape scaling was achieved for NACA 0012 airfoils with angle of attack up to 5deg.
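
    For orientation, a constant-Weber-number scaling of velocity can be written down in a couple of lines. The sketch below is a generic illustration, not the modified Ruff procedure itself: it assumes the Weber number is formed from density, velocity squared, a length scale and surface tension, uses the model chord as that length scale, and treats the fluid properties as unchanged between reference and scale tests; all of these are assumptions for illustration.

```python
def scale_velocity_constant_weber(v_ref, length_ref, length_scale):
    """Scale-test velocity that keeps We = rho * V^2 * L / sigma constant.

    With density and surface tension unchanged between reference and scale tests,
    constant Weber number reduces to V_scale = V_ref * sqrt(L_ref / L_scale).
    """
    return v_ref * (length_ref / length_scale) ** 0.5

# Illustrative numbers: 91.4 cm reference chord scaled to 35.6 cm (as in the abstract),
# using chord as the Weber-number length scale (an assumption for illustration).
v_ref_kt = 100.0
print(f"Scale velocity: {scale_velocity_constant_weber(v_ref_kt, 0.914, 0.356):.0f} kt")
```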

  17. A Comparative MR Study of Hepatic Fat Quantification Using Single-voxel Proton Spectroscopy, Two-point Dixon and Three-point IDEAL

    PubMed Central

    Kim, Hyeonjin; Taksali, Sara E.; Dufour, Sylvie; Befroy, Douglas; Goodman, T. Robin; Petersen, Kitt Falk; Shulman, Gerald I.; Caprio, Sonia; Constable, R. Todd

    2009-01-01

    Hepatic fat fraction (HFF) was measured in 28 lean/obese humans by single-voxel proton spectroscopy (MRS), a two-point Dixon (2PD) method and a three-point iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL) method (3PI). For the lean, obese and total subject groups, the range of HFF measured by MRS was 0.3–3.5% (1.1±1.4%), 0.3–41.5% (11.7±12.1%), and 0.3–41.5% (10.1±11.6%), respectively. For the same groups, the HFF measured by 2PD was −6.3–2.2% (−2.0±3.7%), −2.4–42.9% (12.9±13.8%), and −6.3–42.9% (10.5±13.7%), respectively, and for 3PI they were 7.9–12.8% (10.1±2.0%), 11.1–49.3% (22.0±12.2%), and 7.9–49.3% (20.0±11.8%), respectively. The HFF measured by MRS was highly correlated with those measured by 2PD (r=0.954, p<0.001) and 3PI (r=0.973, p<0.001). With the MRS data as a reference, the percentages of correct diagnosis of fatty liver with the MRI methods ranged from 75–93% for 2PD and 79–89% for 3PI. Our study demonstrates that the apparent HFF measured by the MRI methods can vary significantly depending on the choice of water-fat separation methods and sequences. Such variability may limit the clinical application of the MRI methods, particularly when a diagnosis of early fatty liver needs to be performed. Therefore, protocol-specific establishment of cutoffs for liver fat content may be necessary. PMID:18306404
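
    For orientation, a two-point Dixon estimate of fat fraction is commonly computed from in-phase and opposed-phase signal magnitudes as sketched below. This generic formula is an assumption about the method class, not the exact processing used in the cited study, and the signal values are invented.

```python
import numpy as np

def dixon_fat_fraction(in_phase, opposed_phase):
    """Two-point Dixon fat fraction: FF = (IP - OP) / (2 * IP).

    in_phase, opposed_phase : signal magnitudes (arrays or scalars) from the
    in-phase and opposed-phase echoes of the same voxels.
    """
    ip = np.asarray(in_phase, float)
    op = np.asarray(opposed_phase, float)
    return (ip - op) / (2.0 * ip)

# Invented signal magnitudes for three liver voxels.
print(dixon_fat_fraction([1000, 1200, 900], [950, 840, 880]))  # ~0.025, 0.15, 0.011
```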

  18. Media Criticism Group Speech

    ERIC Educational Resources Information Center

    Ramsey, E. Michele

    2004-01-01

    Objective: To integrate speaking practice with rhetorical theory. Type of speech: Persuasive. Point value: 100 points (i.e., 30 points based on peer evaluations, 30 points based on individual performance, 40 points based on the group presentation), which is 25% of course grade. Requirements: (a) References: 7-10; (b) Length: 20-30 minutes; (c)…

  19. 3D Surface Reconstruction and Volume Calculation of Rills

    NASA Astrophysics Data System (ADS)

    Brings, Christine; Gronz, Oliver; Becker, Kerstin; Wirtz, Stefan; Seeger, Manuel; Ries, Johannes B.

    2015-04-01

    We use the low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique, implemented in the software VisualSfM, for 3D surface reconstruction and volume calculation of an 18-meter-long rill in Luxembourg. The images were taken with a Canon HD video camera 1) before a natural rainfall event, 2) after a natural rainfall event and before a rill experiment, and 3) after a rill experiment. Compared to a still camera, recording with a video camera not only provides a considerable time advantage, the method also guarantees more than adequately overlapping sharp images. For each model, approximately 8 minutes of video were taken. As SfM needs single images, we automatically selected the sharpest image from each interval of 15 frames. The sharpness was estimated using a derivative-based metric. VisualSfM then detects feature points in each image, searches for matching feature points in all image pairs, recovers the camera positions and, finally, by triangulation of camera positions and feature points, reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and via difference calculations between the pre and post models a visualization of the changes (erosion and accumulation areas) and a quantification of erosion volumes are possible. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained via reference measurements. The outputs are three models at three different points in time. The results show that, especially for images taken from suboptimal videos (bad lighting conditions, low contrast of the surface, too much motion blur), the sharpness algorithm leads to many more matching features. Hence the point densities of the 3D models are increased, thereby improving the calculations.
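
    Selecting the sharpest frame from each interval of video can be done with a simple derivative-based focus measure. The sketch below is a generic approach in the spirit described above, not the authors' exact metric; it assumes OpenCV is available and that frames are provided as grayscale arrays. Frames are scored by the variance of their Laplacian and the best one per 15-frame window is kept.

```python
import numpy as np
import cv2  # assumed available; used only for the Laplacian operator

def sharpness(frame_gray):
    """Derivative-based focus measure: variance of the Laplacian."""
    return cv2.Laplacian(frame_gray, cv2.CV_64F).var()

def pick_sharpest(frames, window=15):
    """Return the sharpest frame from each consecutive window of frames."""
    selected = []
    for start in range(0, len(frames), window):
        chunk = frames[start:start + window]
        selected.append(max(chunk, key=sharpness))
    return selected

# Usage sketch with synthetic frames (replace with frames decoded from the video).
frames = [np.random.randint(0, 256, (480, 640), dtype=np.uint8) for _ in range(45)]
keyframes = pick_sharpest(frames)
print(f"Kept {len(keyframes)} of {len(frames)} frames")
```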

  20. An Updated Perspective of Single Event Gate Rupture and Single Event Burnout in Power MOSFETs

    NASA Astrophysics Data System (ADS)

    Titus, Jeffrey L.

    2013-06-01

    Studies over the past 25 years have shown that heavy ions can trigger catastrophic failure modes in power MOSFETs [e.g., single-event gate rupture (SEGR) and single-event burnout (SEB)]. In 1996, two papers were published in a special issue of the IEEE Transaction on Nuclear Science [Johnson, Palau, Dachs, Galloway and Schrimpf, “A Review of the Techniques Used for Modeling Single-Event Effects in Power MOSFETs,” IEEE Trans. Nucl. Sci., vol. 43, no. 2, pp. 546-560, April. 1996], [Titus and Wheatley, “Experimental Studies of Single-Event Gate Rupture and Burnout in Vertical Power MOSFETs,” IEEE Trans. Nucl. Sci., vol. 43, no. 2, pp. 533-545, Apr. 1996]. Those two papers continue to provide excellent information and references with regard to SEB and SEGR in vertical planar MOSFETs. This paper provides updated references/information and provides an updated perspective of SEB and SEGR in vertical planar MOSFETs as well as provides references/information to other device types that exhibit SEB and SEGR effects.

  1. Imputation of single nucleotide polymorphism genotypes of Hereford cattle: reference panel size, family relationship and population structure

    USDA-ARS?s Scientific Manuscript database

    The objective of this study is to investigate single nucleotide polymorphism (SNP) genotypes imputation of Hereford cattle. Purebred Herefords were from two sources, Line 1 Hereford (N=240) and representatives of Industry Herefords (N=311). Using different reference panels of 62 and 494 males with 1...

  2. Effect of Chlorine Substitution on Sulfide Reactivity with OH Radicals

    DTIC Science & Technology

    2008-09-01

    Excerpt of computational details (recovered from slide bullets): single point energy at MP2/6-311+G(3df,2p) (LRG); zero point energy (ZPE) from a vibrational frequency analysis at MP2/6-31++G**; extrapolated energy: E(QCI) + E(LARG) - E(SML) + ZPE; characterize the TS; use a three-point fit methodology - fit a harmonic potential to three CCSD single point ...

  3. 49 CFR 172.315 - Packages containing limited quantities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... applicable, for the entry as shown in the § 172.101 Table, and placed within a square-on-point border in... to the package as to be readily visible. The width of line forming the square-on-point must be at... square-on-points bearing a single ID number, or a single square-on-point large enough to include each...

  4. Side Effects in Time Discounting Procedures: Fixed Alternatives Become the Reference Point

    PubMed Central

    2016-01-01

    Typical research on intertemporal choice utilizes a two-alternative forced choice (2AFC) paradigm requiring participants to choose between a smaller sooner and larger later payoff. In the adjusting-amount procedure (AAP) one of the alternatives is fixed and the other is adjusted according to particular choices made by the participant. Such a method makes the alternatives unequal in status and is speculated to make the fixed alternative a reference point for choices, thereby affecting the decision made. The current study shows that fixing different alternatives in the AAP influences discount rates in intertemporal choices. Specifically, individuals’ (N = 283) choices were affected to just the same extent by merely fixing an alternative as when choices were preceded by scenarios explicitly imposing reference points. PMID:27768759

  5. Maritime Claims Reference Manual

    DTIC Science & Technology

    1990-07-12

    [Garbled excerpt of baseline coordinate listings: a line connecting points off Corsica (Cap de la Morsetta, Pointe des Scoglietti, Ilot de Gargalo, Cap Rosso, Pointe d'Omignis, Cap de Feno / Golfe de ...); numbered points with coordinates near 9 S, 119 E (Tg. Rua, Tg. Mambo, Toro Doro); and Italian baseline segments from Civitavecchia to Capo Linaro and from the Civitavecchia beacon to Isola Giannutri (Punta del Capel Rosso).]

  6. Reconstruction method for fringe projection profilometry based on light beams.

    PubMed

    Li, Xuexing; Zhang, Zhijiang; Yang, Chen

    2016-12-01

    A novel reconstruction method for fringe projection profilometry, based on light beams, is proposed and verified by experiments. Commonly used calibration techniques require either projector calibration parameters or reference planes placed at many known positions. Introducing projector calibration can reduce the accuracy of the reconstruction result, and setting the reference planes at many known positions is a time-consuming process. Therefore, in this paper, a reconstruction method that does not require projector parameters is proposed, and only two reference planes are introduced. A series of light beams determined by the subpixel point-to-point map on the two reference planes, combined with their reflected light beams determined by the camera model, are used to calculate the 3D coordinates of the reconstruction points. Furthermore, a bundle adjustment strategy and the complementary gray-code phase-shifting method are utilized to ensure accuracy and stability. Qualitative and quantitative comparisons as well as experimental tests demonstrate the performance of the proposed approach; the measurement accuracy can reach about 0.0454 mm.
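
    The core geometric step implied above is intersecting a projected light beam, defined by its corresponding points on the two reference planes, with the camera's line of sight. The sketch below is a generic closest-point-between-two-lines construction under that interpretation, not the paper's implementation; all coordinates are illustrative.

```python
import numpy as np

def closest_point_between_lines(p1, d1, p2, d2):
    """Midpoint of the shortest segment between lines p1 + t*d1 and p2 + s*d2.

    Assumes the two lines are not parallel.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for t, s from the perpendicularity conditions of the connecting segment.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    t = (b * (d2 @ w) - c * (d1 @ w)) / (a * c - b * b)
    s = (a * (d2 @ w) - b * (d1 @ w)) / (a * c - b * b)
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

# Light beam defined by corresponding phase points on the two reference planes.
q_plane1 = np.array([10.0, 20.0, 0.0])
q_plane2 = np.array([12.0, 21.0, 100.0])
beam_dir = q_plane2 - q_plane1
# Camera ray through the optical centre and the back-projected image point.
cam_centre = np.array([0.0, 0.0, -500.0])
ray_dir = np.array([0.021, 0.041, 1.0])
print(closest_point_between_lines(q_plane1, beam_dir, cam_centre, ray_dir))
```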

  7. The Photogrammetry Cube

    NASA Technical Reports Server (NTRS)

    2008-01-01

    We can determine distances between objects and points of interest in 3-D space to a useful degree of accuracy from a set of camera images by using multiple camera views and reference targets in the camera's field of view (FOV). The core of the software processing is based on the previously developed foreign-object debris vision trajectory software (see KSC Research and Technology 2004 Annual Report, pp. 2-5). The current version of this photogrammetry software includes the ability to calculate distances between any specified point pairs, the ability to process any number of reference targets and any number of camera images, user-friendly editing features, including zoom in/out, translate, and load/unload, routines to help mark reference points with a Find function, while comparing them with the reference point database file, and a comprehensive output report in HTML format. In this system, scene reference targets are replaced by a photogrammetry cube whose exterior surface contains multiple predetermined precision 2-D targets. Precise measurement of the cube's 2-D targets during the fabrication phase eliminates the need for measuring 3-D coordinates of reference target positions in the camera's FOV, using for example a survey theodolite or a Faroarm. Placing the 2-D targets on the cube's surface required the development of precise machining methods. In response, 2-D targets were embedded into the surface of the cube and then painted black for high contrast. A 12-inch collapsible cube was developed for room-size scenes. A 3-inch, solid, stainless-steel photogrammetry cube was also fabricated for photogrammetry analysis of small objects.

  8. Selection and validation of suitable reference genes for miRNA expression normalization by quantitative RT-PCR in citrus somatic embryogenic and adult tissues.

    PubMed

    Kou, Shu-Jun; Wu, Xiao-Meng; Liu, Zheng; Liu, Yuan-Long; Xu, Qiang; Guo, Wen-Wu

    2012-12-01

    miRNAs have recently been reported to modulate somatic embryogenesis (SE), a key pathway of plant regeneration in vitro. For expression level detection and subsequent function dissection of miRNAs in certain biological processes, qRT-PCR is one of the most effective and sensitive techniques, for which suitable reference gene selection is a prerequisite. In this study, three miRNAs and eight non-coding RNAs (ncRNA) were selected as reference candidates, and their expression stability was inspected in developing citrus SE tissues cultured at 20, 25, and 30 °C. Stability of the eight non-miRNA ncRNAs was further validated in five adult tissues without temperature treatment. The best single reference gene for SE tissues was snoR14 or snoRD25, while for the adult tissues the best one was U4; although they were not as stable as the optimal multiple references snoR14 + U6 for SE tissues and snoR14 + U5 for adult tissues. For expression normalization of less abundant miRNAs in SE tissues, miR3954 was assessed as a viable reference. Single reference gene snoR14 outperformed multiple references for the overall SE and adult tissues. As one of the pioneer systematic studies on reference gene identification for plant miRNA normalization, this study benefits future exploration on miRNA function in citrus and provides valuable information for similar studies in other higher plants. Three miRNAs and eight non-coding RNAs were tested as reference candidates on developing citrus SE tissues. Best single references snoR14 or snoRD25 and optimal multiple references snoR14 + U6, snoR14 + U5 were identified.

  9. The one number you need to grow.

    PubMed

    Reichheld, Frederick F

    2003-12-01

    Companies spend lots of time and money on complex tools to assess customer satisfaction. But they're measuring the wrong thing. The best predictor of top-line growth can usually be captured in a single survey question: Would you recommend this company to a friend? This finding is based on two years of research in which a variety of survey questions were tested by linking the responses with actual customer behavior--purchasing patterns and referrals--and ultimately with company growth. Surprisingly, the most effective question wasn't about customer satisfaction or even loyalty per se. In most of the industries studied, the percentage of customers enthusiastic enough about a company to refer it to a friend or colleague directly correlated with growth rates among competitors. Willingness to talk up a company or product to friends, family, and colleagues is one of the best indicators of loyalty because of the customer's sacrifice in making the recommendation. When customers act as references, they do more than indicate they've received good economic value from a company; they put their own reputations on the line. And they will risk their reputations only if they feel intense loyalty. The findings point to a new, simpler approach to customer research, one directly linked to a company's results. By substituting a single question--blunt tool though it may appear to be--for the complex black box of the customer satisfaction survey, companies can actually put consumer survey results to use and focus employees on the task of stimulating growth.

  10. A systematic approach to determining the properties of an iodine absorption cell for high-precision radial velocity measurements

    NASA Astrophysics Data System (ADS)

    Perdelwitz, V.; Huke, P.

    2018-06-01

    Absorption cells filled with diatomic iodine are frequently employed as a wavelength reference for high-precision stellar radial velocity determination due to their long-term stability and low cost. Despite their widespread usage in the community, there is little documentation on how to determine the ideal operating temperature of an individual cell. We have developed a new approach to measuring the effective molecular temperature inside a gas absorption cell and searching for effects detrimental to a high-precision wavelength reference, utilizing the Boltzmann distribution of relative line depths within absorption bands of single vibrational transitions. With a high-resolution Fourier transform spectrometer, we took a series of 632 spectra at temperatures between 23 °C and 66 °C. These spectra provide a sufficient basis to test the algorithm and demonstrate the stability and repeatability of the temperature determination via molecular lines on a single iodine absorption cell. The achievable radial velocity precision σRV is found to be independent of the cell temperature, and a detailed analysis shows a wavelength dependency, which originates in the resolving power of the spectrometer in use and the signal-to-noise ratio. Two effects were found to cause apparent absolute shifts in radial velocity: a temperature-induced shift of the order of ~1 m s⁻¹ K⁻¹, and a more significant effect, resulting in abrupt jumps of ≥50 m s⁻¹, which is caused by the temperature crossing the dew point of the molecular iodine.
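
    A minimal sketch of the temperature-from-line-depths idea (an editorial illustration with assumed inputs, not the authors' code): within a single vibrational band, the relative depth of a rotational line is roughly proportional to its Boltzmann population, so a linear fit of ln(depth / (2J+1)) against the lower-state rotational energy gives the temperature from the slope.

```python
import numpy as np

K_B = 1.380649e-23      # Boltzmann constant, J/K
H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e10       # speed of light, cm/s

def boltzmann_temperature(J, depth, B_cm=0.03735):
    """Estimate the rotational temperature from relative line depths in one band.

    J     : lower-state rotational quantum numbers of the lines
    depth : relative depths of those lines
    B_cm  : rotational constant in cm^-1 (approximate value for I2, assumed here)
    """
    J = np.asarray(J, dtype=float)
    depth = np.asarray(depth, dtype=float)
    E_over_k = H * C * B_cm * J * (J + 1.0) / K_B      # rotational energy / k, in K
    y = np.log(depth / (2.0 * J + 1.0))                # Boltzmann-plot ordinate
    slope, _ = np.polyfit(E_over_k, y, 1)              # y = const - (E/k) / T
    return -1.0 / slope

# Hypothetical line list: depths generated for a 300 K population, then recovered.
J_lines = np.arange(10, 120, 10)
E_over_k = H * C * 0.03735 * J_lines * (J_lines + 1.0) / K_B
depths = (2 * J_lines + 1) * np.exp(-E_over_k / 300.0)
print(boltzmann_temperature(J_lines, depths))          # ~300.0
```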

  11. Investigation of scale effects in the TRF determined by VLBI

    NASA Astrophysics Data System (ADS)

    Wahl, Daniel; Heinkelmann, Robert; Schuh, Harald

    2017-04-01

    The improvement of the International Terrestrial Reference Frame (ITRF) is of great significance for Earth sciences and one of the major tasks in geodesy. The translation, rotation, and scale-factor, as well as their linear rates, are solved in a 14-parameter transformation between the individual frames of each space geodetic technique and the combined frame. In ITRF2008, as well as in the current release ITRF2014, the scale-factor is provided by Very Long Baseline Interferometry (VLBI) and Satellite Laser Ranging (SLR) in equal shares. Since VLBI measures extremely precise group delays that are transformed to baseline lengths using the velocity of light, a natural constant, VLBI is the most suitable method for providing the scale. The aim of the current work is to identify possible shortcomings in the VLBI scale contribution to ITRF2008. To develop recommendations for an enhanced estimation, scale effects in the Terrestrial Reference Frame (TRF) determined with VLBI are considered in detail and compared to ITRF2008. In contrast to station coordinates, where the scale is defined by a geocentric position vector pointing from the origin of the reference frame to the station, baselines are not related to the origin: they describe the absolute scale independently of the datum. The more accurately a baseline length, and consequently the scale, is estimated by VLBI, the better the scale contribution to the ITRF. In time series of baseline lengths between different stations, a non-linear periodic signal caused by seasonal effects at the observation sites can clearly be recognized. Modeling these seasonal effects and subtracting them from the original data enhances the repeatability of single baselines significantly. Other effects that strongly influence the scale are jumps in the baseline-length time series, mainly caused by major earthquakes. Co- and post-seismic effects, which are likewise non-linear, can be identified in the data. Modeling the non-linear motion or completely excluding affected stations is another important step toward an improved scale determination. In addition to the investigation of single-baseline repeatabilities, the spatial transformation performed to determine the parameters of ITRF2008 is also considered. Since the reliability of the resulting transformation parameters increases with the number of identical points used in the transformation, an approach in which all possible stations are used as control points is understandable. Experiments that examined the scale-factor and its spatial behavior between control points in ITRF2008 and coordinates determined by VLBI alone showed that the network geometry also has a large influence on the outcome. When an unequally distributed network is introduced for the datum configuration, the correlations between the translation parameters and the scale-factor can become remarkably high. Only a homogeneous spatial distribution of the participating stations yields a maximally uncorrelated scale-factor that can be interpreted independently of the other parameters. In the current release of the ITRF, ITRF2014, non-linear effects in the time series of station coordinates are taken into account for the first time. The present work confirms the importance of this modification of the ITRF calculation and identifies further improvements that lead to an enhanced scale determination.

  12. An elevated neutrophil-lymphocyte ratio is associated with adverse outcomes following single time-point paracetamol (acetaminophen) overdose: a time-course analysis.

    PubMed

    Craig, Darren G; Kitto, Laura; Zafar, Sara; Reid, Thomas W D J; Martin, Kirsty G; Davidson, Janice S; Hayes, Peter C; Simpson, Kenneth J

    2014-09-01

    The innate immune system is profoundly dysregulated in paracetamol (acetaminophen)-induced liver injury. The neutrophil-lymphocyte ratio (NLR) is a simple bedside index with prognostic value in a number of inflammatory conditions. To evaluate the prognostic accuracy of the NLR in patients with significant liver injury following single time-point and staggered paracetamol overdoses. Time-course analysis of 100 single time-point and 50 staggered paracetamol overdoses admitted to a tertiary liver centre. Timed laboratory samples were correlated with time elapsed after overdose or admission, respectively, and the NLR was calculated. A total of 49/100 single time-point patients developed hepatic encephalopathy (HE). Median NLRs were higher at both 72 (P=0.0047) and 96 h after overdose (P=0.0041) in single time-point patients who died or were transplanted. Maximum NLR values by 96 h were associated with increasing HE grade (P=0.0005). An NLR of more than 16.7 during the first 96 h following overdose was independently associated with the development of HE [odds ratio 5.65 (95% confidence interval 1.67-19.13), P=0.005]. Maximum NLR values by 96 h were strongly associated with the requirement for intracranial pressure monitoring (P<0.0001), renal replacement therapy (P=0.0002) and inotropic support (P=0.0005). In contrast, in the staggered overdose cohort, the NLR was not associated with adverse outcomes or death/transplantation either at admission or subsequently. The NLR is a simple test which is strongly associated with adverse outcomes following single time-point, but not staggered, paracetamol overdoses. Future studies should assess the value of incorporating the NLR into existing prognostic and triage indices of single time-point paracetamol overdose.

  13. USING REGIONAL EXPOSURE CRITERIA AND UPSTREAM REFERENCE DATA TO CHARACTERIZE SPATIAL AND TEMPORAL EXPOSURES TO CHEMICAL CONTAMINANTS

    EPA Science Inventory

    Analyses of biomarkers in fish were used to evaluate exposures among locations and across time. Two types of references were used for comparison, an upstream reference sample remote from known point sources and regional exposure criteria derived from a baseline of fish from refer...

  14. USING REGIONAL EXPOSURE CRITERIA AND UPSTREAM REFERENCE DATA TO CHARACTERIZE SPATIAL AND TEMPORAL EXPOSURES TO CHEMICAL CONTAMINANTS

    EPA Science Inventory

    Analyses of biomarkers in fish were used to evaluate exposures among locations and across time. Two types of references were used for comparison, an upstream reference sample remote from known point sources and regional exposure criteria derived from a baseline of fish from refere...

  15. The SPQR experiment: detecting damage to orbiting spacecraft with ground-based telescopes

    NASA Astrophysics Data System (ADS)

    Paolozzi, Antonio; Porfilio, Manfredi; Currie, Douglas G.; Dantowitz, Ronald F.

    2007-09-01

    The objective of the Specular Point-like Quick Reference (SPQR) experiment was to evaluate the possibility of improving the resolution of ground-based telescopic imaging of manned spacecraft in orbit. The concept was to reduce image distortions due to atmospheric turbulence by evaluating the Point Spread Function (PSF) of a point-like light reference and processing the spacecraft image accordingly. The target spacecraft was the International Space Station (ISS) and the point-like reference was provided by a laser beam emitted by the ground station and reflected back to the telescope by a Cube Corner Reflector (CCR) mounted on an ISS window. The ultimate objective of the experiment was to demonstrate that it is possible to image spacecraft in Low Earth Orbit (LEO) with a resolution of 20 cm, which would have probably been sufficient to detect the damage which caused the Columbia disaster. The experiment was successfully performed from March to May 2005. The paper provides an overview of the SPQR experiment.

  16. Alternative Methods for Estimating Plane Parameters Based on a Point Cloud

    NASA Astrophysics Data System (ADS)

    Stryczek, Roman

    2017-12-01

    Non-contact measurement techniques carried out using triangulation optical sensors are increasingly popular in measurements performed with industrial robots directly on production lines. The result of such measurements is often a cloud of measurement points characterized by considerable measurement noise, a number of points that differ from the reference model, and gross errors that must be eliminated from the analysis. To obtain vector information from the points in the cloud that describe the reference model, the data obtained during a measurement must be subjected to appropriate processing operations. The present paper analyzes the suitability of methods known as RANdom Sample Consensus (RANSAC), the Monte Carlo Method (MCM), and Particle Swarm Optimization (PSO) for the extraction of the reference model. The effectiveness of the tested methods is illustrated by examples of measuring the height of an object and the angle of a plane, based on experiments carried out under workshop conditions.
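
    As an illustration of one of the methods named in this record, here is a minimal RANSAC plane-fitting sketch (a generic implementation with assumed parameters, not the authors' code): random triples of points propose candidate planes, and the plane with the most inliers within a distance threshold is kept.

```python
import numpy as np

def ransac_plane(points, n_iters=500, threshold=0.01, seed=None):
    """Fit a plane to a noisy point cloud with RANSAC.

    Returns (normal, d) of the plane n.x + d = 0 and the boolean inlier mask.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                       # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)     # point-to-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Hypothetical cloud: a tilted plane plus noise and 20% gross outliers.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, (1000, 2))
z = 0.3 * xy[:, 0] - 0.2 * xy[:, 1] + 0.5 + rng.normal(0, 0.005, 1000)
cloud = np.column_stack([xy, z])
cloud[:200] += rng.uniform(-0.5, 0.5, (200, 3))
plane, inliers = ransac_plane(cloud, threshold=0.02, seed=1)
print(plane, inliers.sum())
```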

  17. First impressions and beyond: marketing your practice in touch points--Part I.

    PubMed

    Bisera, Cheryl

    2012-01-01

    Often medical administrators or providers call in a marketing expert when they feel the practice is lacking the growth they want. What's on their mind is usually how to bring in more patients, and they automatically look to external marketing strategies. However, one of the most important elements of successfully marketing a practice is making sure you haven't created a turnstile, where new patients are coming often but not returning or being converted into loyal, referring patients. When new patients are going as quickly as they are coming, you aren't building solid growth. Loyal, referring patients are powerful marketing assets: they are in the community speaking well of you and your practice from first-hand experience. You can create this atmosphere of loyal, referring patients by providing positive touch points that fulfill the needs of your patients. Touch points are the groundwork supporting other types of marketing. This article covers three important touch points that are crucial to a positive patient experience.

  18. Circular common-path point diffraction interferometer.

    PubMed

    Du, Yongzhao; Feng, Guoying; Li, Hongru; Vargas, J; Zhou, Shouhuan

    2012-10-01

    A simple and compact point-diffraction interferometer with a circular common-path geometry is developed. The interferometer is constructed from a beam-splitter, two reflection mirrors, and a telescope system composed of two lenses. The signal and reference waves travel along the same path. Furthermore, an opaque mask containing a reference pinhole and a test object holder or test window is positioned in the common focal plane of the telescope system. The object wave is divided into two beams that take opposite paths along the interferometer. The reference wave is filtered by the reference pinhole, while the signal wave is transmitted through the object holder. The reference and signal waves are combined again in the beam-splitter and their interference is imaged on the CCD. The new design is compact, vibration insensitive, and suitable for the measurement of moving objects or dynamic processes.

  19. Bioequivalence of fixed-dose combination RIN®-150 to each reference drug in loose combination.

    PubMed

    Wang, H F; Wang, R; O'Gorman, M; Crownover, P; Damle, B

    2015-03-01

    RIN(®)-150 is a fixed-dose combination (FDC) tablet containing rifampicin (RMP, 150 mg) and isoniazid (INH, 75 mg) developed for the treatment of tuberculosis. This study was conducted at a single center: the Pfizer Clinical Research Unit in Singapore. To demonstrate bioequivalence of each drug component between RIN-150 and individual products in a loose combination. This was a randomized, open-label, single-dose, two-way crossover study. Subjects received single doses of RIN-150 or two individual reference products under fasting conditions in a crossover fashion, with at least 7 days washout between doses. The primary measures for comparison were peak plasma concentration (Cmax) and the area under plasma concentration-time curve (AUC). Of 28 subjects enrolled, 26 completed the study. The adjusted geometric mean ratios of Cmax and AUClast between the FDC and single-drug references and 90% confidence intervals were respectively 91.63% (90%CI 83.13-101.01) and 95.45% (90%CI 92.07-98.94) for RMP, and 107.58% (90%CI 96.07-120.47) and 103.45% (90%CI 99.33-107.75) for INH. Both formulations were generally well tolerated in this study. The RIN-150 FDC tablet formulation is bioequivalent to the two single-drug references for RMP and INH at equivalent doses.

  20. Phase-shifting point diffraction interferometer grating designs

    DOEpatents

    Naulleau, Patrick; Goldberg, Kenneth Alan; Tejnil, Edita

    2001-01-01

    In a phase-shifting point diffraction interferometer, by sending the zeroth-order diffraction to the reference pinhole of the mask and the first-order diffraction to the test beam window of the mask, the test and reference beam intensities can be balanced and the fringe contrast improved. Additionally, using a duty cycle of the diffraction grating other than 50%, the fringe contrast can also be improved.

  1. 16 CFR Figure 5 to Subpart A of... - Zero Reference Point Related to Detecting Plane

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 2 2011-01-01 2011-01-01 false Zero Reference Point Related to Detecting Plane 5 Figure 5 to Subpart A of Part 1209 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION CONSUMER PRODUCT SAFETY ACT REGULATIONS INTERIM SAFETY STANDARD FOR CELLULOSE INSULATION The Standard Pt. 1209, Subpt. A, Fig. 5 Figure 5 to Subpart A o...

  2. Reference Values for the Pediatric Quality of Life Inventory and the Multidimensional Fatigue Scale in Adolescent Athletes by Sport and Sex.

    PubMed

    Snyder Valier, Alison R; Welch Bacon, Cailee E; Bay, R Curtis; Molzen, Eileen; Lam, Kenneth C; Valovich McLeod, Tamara C

    2017-10-01

    Effective use of patient-rated outcome measures to facilitate optimal patient care requires an understanding of the reference values of these measures within the population of interest. Little is known about reference values for commonly used patient-rated outcome measures in adolescent athletes. To determine reference values for the Pediatric Quality of Life Inventory (PedsQL) and the Multidimensional Fatigue Scale (MFS) in adolescent athletes by sport and sex. Cross-sectional study; Level of evidence, 3. A convenience sample of interscholastic adolescent athletes from 9 sports was used. Participants completed the PedsQL and MFS during one testing session at the start of their sport season. Data were stratified by sport and sex. Dependent variables included the total PedsQL score and the 5 PedsQL subscale scores: physical functioning, psychosocial functioning, emotional functioning, social functioning, and school functioning. Dependent variables for the MFS included 3 subscale scores: general functioning, sleep functioning, and cognitive functioning. Summary statistics were reported for total and subscale scores by sport and sex. Among 3574 male and 1329 female adolescent athletes, the PedsQL scores (100 possible points) generally indicated high levels of health regardless of sport played. Mean PedsQL total and subscale scores ranged from 82.6 to 95.7 for males and 83.9 to 95.2 for females. Mean MFS subscale scores (100 possible points) ranged from 74.2 to 90.9 for males and 72.8 to 87.4 for females. Healthy male and female adolescent athletes reported relatively high levels of health on the PedsQL subscales and total scores regardless of sport; no mean scores were lower than 82.6 points for males or 83.9 points for females. On the MFS, males and females tended to report little effect of general and cognitive fatigue regardless of sport; mean scores were higher than 83.5 points for males and 83.8 points for females. Clinically, athletes who score below the reference values for their sport have poorer health status than average adolescent athletes participating in that sport. Scores below reference values may warrant consideration of early intervention or treatment.

  3. Application of a single-objective, hybrid genetic algorithm approach to pharmacokinetic model building.

    PubMed

    Sherer, Eric A; Sale, Mark E; Pollock, Bruce G; Belani, Chandra P; Egorin, Merrill J; Ivy, Percy S; Lieberman, Jeffrey A; Manuck, Stephen B; Marder, Stephen R; Muldoon, Matthew F; Scher, Howard I; Solit, David B; Bies, Robert R

    2012-08-01

    A limitation in traditional stepwise population pharmacokinetic model building is the difficulty in handling interactions between model components. To address this issue, a method was previously introduced which couples NONMEM parameter estimation and model fitness evaluation to a single-objective, hybrid genetic algorithm for global optimization of the model structure. In this study, the generalizability of this approach for pharmacokinetic model building is evaluated by comparing (1) correct and spurious covariate relationships in a simulated dataset resulting from automated stepwise covariate modeling, Lasso methods, and single-objective hybrid genetic algorithm approaches to covariate identification and (2) information criteria values, model structures, convergence, and model parameter values resulting from manual stepwise versus single-objective, hybrid genetic algorithm approaches to model building for seven compounds. Both manual stepwise and single-objective, hybrid genetic algorithm approaches to model building were applied, blinded to the results of the other approach, for selection of the compartment structure as well as inclusion and model form of inter-individual and inter-occasion variability, residual error, and covariates from a common set of model options. For the simulated dataset, stepwise covariate modeling identified three of four true covariates and two spurious covariates; Lasso identified two of four true and 0 spurious covariates; and the single-objective, hybrid genetic algorithm identified three of four true covariates and one spurious covariate. For the clinical datasets, the Akaike information criterion was a median of 22.3 points lower (range of 470.5 point decrease to 0.1 point decrease) for the best single-objective hybrid genetic-algorithm candidate model versus the final manual stepwise model: the Akaike information criterion was lower by greater than 10 points for four compounds and differed by less than 10 points for three compounds. The root mean squared error and absolute mean prediction error of the best single-objective hybrid genetic algorithm candidates were a median of 0.2 points higher (range of 38.9 point decrease to 27.3 point increase) and 0.02 points lower (range of 0.98 point decrease to 0.74 point increase), respectively, than that of the final stepwise models. In addition, the best single-objective, hybrid genetic algorithm candidate models had successful convergence and covariance steps for each compound, used the same compartment structure as the manual stepwise approach for 6 of 7 (86 %) compounds, and identified 54 % (7 of 13) of covariates included by the manual stepwise approach and 16 covariate relationships not included by manual stepwise models. The model parameter values between the final manual stepwise and best single-objective, hybrid genetic algorithm models differed by a median of 26.7 % (q₁ = 4.9 % and q₃ = 57.1 %). Finally, the single-objective, hybrid genetic algorithm approach was able to identify models capable of estimating absorption rate parameters for four compounds that the manual stepwise approach did not identify. The single-objective, hybrid genetic algorithm represents a general pharmacokinetic model building methodology whose ability to rapidly search the feasible solution space leads to nearly equivalent or superior model fits to pharmacokinetic data.
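
    To make the search strategy in this record concrete, here is a minimal single-objective genetic-algorithm sketch for covariate selection (a generic illustration in which an ordinary least-squares AIC stands in for the NONMEM objective function and parsimony penalties; none of the names or settings below come from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
N_COVARIATES = 8          # candidate covariate relationships (assumed)
POP, GENS, P_MUT = 30, 40, 0.05

def fitness(mask, data):
    """Stand-in objective: AIC of a linear fit using the selected covariates.

    In the study this role is played by NONMEM parameter estimation and model
    fitness evaluation; any model-fitting call could be substituted here.
    """
    X, y = data
    cols = np.flatnonzero(mask)
    Xs = np.column_stack([np.ones(len(y)), X[:, cols]]) if cols.size else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * Xs.shape[1]   # lower is better

def evolve(data):
    pop = rng.integers(0, 2, (POP, N_COVARIATES))
    for _ in range(GENS):
        scores = np.array([fitness(ind, data) for ind in pop])
        # Tournament selection: keep the better of two randomly drawn individuals.
        idx = rng.integers(0, POP, (POP, 2))
        parents = pop[np.where(scores[idx[:, 0]] < scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Single-point crossover followed by bit-flip mutation.
        cut = rng.integers(1, N_COVARIATES, POP)
        children = np.array([np.concatenate([parents[i, :cut[i]],
                                             parents[(i + 1) % POP, cut[i]:]])
                             for i in range(POP)])
        children ^= (rng.random(children.shape) < P_MUT)
        pop = children
    scores = np.array([fitness(ind, data) for ind in pop])
    return pop[np.argmin(scores)], scores.min()

# Hypothetical data: 200 subjects, 8 candidate covariates, 3 truly informative.
X = rng.normal(size=(200, N_COVARIATES))
y = X[:, 0] - 2 * X[:, 3] + 0.5 * X[:, 6] + rng.normal(0, 1, 200)
best_mask, best_score = evolve((X, y))
print(best_mask, best_score)
```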

  4. Modelling of Vortex-Induced Loading on a Single-Blade Installation Setup

    NASA Astrophysics Data System (ADS)

    Skrzypiński, Witold; Gaunaa, Mac; Heinz, Joachim

    2016-09-01

    Vortex-induced integral loading fluctuations on a single suspended blade at various inflow angles were modeled in the present work by means of stochastic modelling methods. The reference time series were obtained by 3D DES CFD computations carried out on the DTU 10MW reference wind turbine blade. In the reference time series, the flapwise force component, Fx, showed both higher absolute values and greater variation than the chordwise force component, Fz, for every inflow angle considered. For this reason, the present paper focused on modelling Fx rather than Fz, although Fz could be modelled using exactly the same procedure. The reference time series were significantly different, depending on the inflow angle. This made the modelling of all the time series with a single and relatively simple engineering model challenging. In order to find model parameters, optimizations were carried out, based on the root-mean-square error between the Single-Sided Amplitude Spectra of the reference and modelled time series. In order to model well-defined frequency peaks present at certain inflow angles, optimized sine functions were superposed on the stochastically modelled time series. The results showed that the modelling accuracy varied depending on the inflow angle. Nonetheless, the modelled and reference time series showed a satisfactory general agreement in terms of their visual and frequency characteristics. This indicated that the proposed method is suitable for modelling loading fluctuations on suspended blades.

  5. Density functional theory study of the interaction of vinyl radical, ethyne, and ethene with benzene, aimed to define an affordable computational level to investigate stability trends in large van der Waals complexes

    NASA Astrophysics Data System (ADS)

    Maranzana, Andrea; Giordana, Anna; Indarto, Antonius; Tonachini, Glauco; Barone, Vincenzo; Causà, Mauro; Pavone, Michele

    2013-12-01

    Our purpose is to identify a computational level sufficiently dependable and affordable to assess trends in the interaction of a variety of radical or closed shell unsaturated hydro-carbons A adsorbed on soot platelet models B. These systems, of environmental interest, would unavoidably have rather large sizes, thus prompting to explore in this paper the performances of relatively low-level computational methods and compare them with higher-level reference results. To this end, the interaction of three complexes between non-polar species, vinyl radical, ethyne, or ethene (A) with benzene (B) is studied, since these species, involved themselves in growth processes of polycyclic aromatic hydrocarbons (PAHs) and soot particles, are small enough to allow high-level reference calculations of the interaction energy ΔEAB. Counterpoise-corrected interaction energies ΔEAB are used at all stages. (1) Density Functional Theory (DFT) unconstrained optimizations of the A-B complexes are carried out, using the B3LYP-D, ωB97X-D, and M06-2X functionals, with six basis sets: 6-31G(d), 6-311 (2d,p), and 6-311++G(3df,3pd); aug-cc-pVDZ and aug-cc-pVTZ; N07T. (2) Then, unconstrained optimizations by Møller-Plesset second order Perturbation Theory (MP2), with each basis set, allow subsequent single point Coupled Cluster Singles Doubles and perturbative estimate of the Triples energy computations with the same basis sets [CCSD(T)//MP2]. (3) Based on an additivity assumption of (i) the estimated MP2 energy at the complete basis set limit [EMP2/CBS] and (ii) the higher-order correlation energy effects in passing from MP2 to CCSD(T) at the aug-cc-pVTZ basis set, ΔECC-MP, a CCSD(T)/CBS estimate is obtained and taken as a computational energy reference. At DFT, variations in ΔEAB with basis set are not large for the title molecules, and the three functionals perform rather satisfactorily even with rather small basis sets [6-31G(d) and N07T], exhibiting deviation from the computational reference of less than 1 kcal mol-1. The zero-point vibrational energy corrected estimates Δ(EAB+ZPE), obtained with the three functionals and the 6-31G(d) and N07T basis sets, are compared with experimental D0 measures, when available. In particular, this comparison is finally extended to the naphthalene and coronene dimers and to three π-π associations of different PAHs (R, made by 10, 16, or 24 C atoms) and P (80 C atoms).
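
    The additivity assumption in step (3) can be written compactly (editorial notation, consistent with the quantities named in the abstract):

```latex
\Delta E_{\mathrm{CCSD(T)/CBS}} \;\approx\; \Delta E_{\mathrm{MP2/CBS}}
\;+\; \underbrace{\bigl[\Delta E_{\mathrm{CCSD(T)}} - \Delta E_{\mathrm{MP2}}\bigr]_{\text{aug-cc-pVTZ}}}_{\Delta E_{\mathrm{CC\text{-}MP}}}
```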

  6. Density functional theory study of the interaction of vinyl radical, ethyne, and ethene with benzene, aimed to define an affordable computational level to investigate stability trends in large van der Waals complexes.

    PubMed

    Maranzana, Andrea; Giordana, Anna; Indarto, Antonius; Tonachini, Glauco; Barone, Vincenzo; Causà, Mauro; Pavone, Michele

    2013-12-28

    Our purpose is to identify a computational level sufficiently dependable and affordable to assess trends in the interaction of a variety of radical or closed shell unsaturated hydro-carbons A adsorbed on soot platelet models B. These systems, of environmental interest, would unavoidably have rather large sizes, thus prompting to explore in this paper the performances of relatively low-level computational methods and compare them with higher-level reference results. To this end, the interaction of three complexes between non-polar species, vinyl radical, ethyne, or ethene (A) with benzene (B) is studied, since these species, involved themselves in growth processes of polycyclic aromatic hydrocarbons (PAHs) and soot particles, are small enough to allow high-level reference calculations of the interaction energy ΔEAB. Counterpoise-corrected interaction energies ΔEAB are used at all stages. (1) Density Functional Theory (DFT) unconstrained optimizations of the A-B complexes are carried out, using the B3LYP-D, ωB97X-D, and M06-2X functionals, with six basis sets: 6-31G(d), 6-311 (2d,p), and 6-311++G(3df,3pd); aug-cc-pVDZ and aug-cc-pVTZ; N07T. (2) Then, unconstrained optimizations by Møller-Plesset second order Perturbation Theory (MP2), with each basis set, allow subsequent single point Coupled Cluster Singles Doubles and perturbative estimate of the Triples energy computations with the same basis sets [CCSD(T)//MP2]. (3) Based on an additivity assumption of (i) the estimated MP2 energy at the complete basis set limit [EMP2/CBS] and (ii) the higher-order correlation energy effects in passing from MP2 to CCSD(T) at the aug-cc-pVTZ basis set, ΔECC-MP, a CCSD(T)/CBS estimate is obtained and taken as a computational energy reference. At DFT, variations in ΔEAB with basis set are not large for the title molecules, and the three functionals perform rather satisfactorily even with rather small basis sets [6-31G(d) and N07T], exhibiting deviation from the computational reference of less than 1 kcal mol(-1). The zero-point vibrational energy corrected estimates Δ(EAB+ZPE), obtained with the three functionals and the 6-31G(d) and N07T basis sets, are compared with experimental D0 measures, when available. In particular, this comparison is finally extended to the naphthalene and coronene dimers and to three π-π associations of different PAHs (R, made by 10, 16, or 24 C atoms) and P (80 C atoms).

  7. Multi-beam and single-chip LIDAR with discrete beam steering by digital micromirror device

    NASA Astrophysics Data System (ADS)

    Rodriguez, Joshua; Smith, Braden; Hellman, Brandon; Gin, Adley; Espinoza, Alonzo; Takashima, Yuzuru

    2018-02-01

    A novel Digital Micromirror Device (DMD)-based beam steering approach enables a single-chip Light Detection and Ranging (LIDAR) system with discrete scanning points. We present an increase in the number of scanning points obtained by using multiple laser diodes in a multi-beam, single-chip DMD-based LIDAR.

  8. Type T reference function suitability for low temperature applications

    NASA Astrophysics Data System (ADS)

    Dowell, D.

    2013-09-01

    Type T thermocouples are commonly used in industrial measurement applications due to their accuracy relative to other thermocouple types, low cost, and the ready availability of measurement equipment. Type T thermocouples are very effective when used in differential measurements, as there is no cold junction compensation necessary for the connections to the measurement equipment. Type T's published accuracy specifications result in its frequent use in low temperature applications. An examination of over 328 samples from a number of manufacturers has been completed for this investigation. Samples were compared to a Standard Platinum Resistance Thermometer (SPRT) at the LN2 boiling point along with four other standardized measurement points using a characterized ice point reference, low-thermal EMF scanner and an 8.5 digit multimeter, and the data compiled and analyzed. The test points were approximately -196 °C, -75 °C, 0 °C, +100 °C, and +200 °C. These data show an anomaly in the conformance to the reference functions where the reference functions meet at 0 °C. Additionally, in the temperature region between -100 °C to -200 °C, a positive offset of up to 5.4 °C exists between the reference function equations published in the ASTM E230-06 for the nitrogen point and the measured response of the actual wire. This paper will examine the historical and technological reasons for this anomaly in both the ASTM and IEC reference functions.

  9. Morphing of spatial objects in real time with interpolation by functions of radial and orthogonal basis

    NASA Astrophysics Data System (ADS)

    Kosnikov, Yu N.; Kuzmin, A. V.; Ho, Hoang Thai

    2018-05-01

    The article is devoted to the visualization of the morphing of spatial objects described by a set of unordered reference points. A two-stage model construction is proposed to change the object’s form in real time. The first (preliminary) stage is interpolation of the object’s surface by radial basis functions; the initial reference points are replaced by new, spatially ordered ones, and patterns describing how the reference points’ coordinates change during morphing are assigned. The second (real-time) stage is surface reconstruction by blending functions of an orthogonal basis. Finite-difference formulas are applied to increase the efficiency of the calculations.
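
    A minimal sketch of the first (preliminary) stage, surface interpolation with radial basis functions (a generic illustration with an assumed Gaussian kernel and synthetic data, not the authors' implementation; the second stage with orthogonal blending functions is not shown):

```python
import numpy as np

def rbf_fit(centers, values, eps=1.0):
    """Solve for RBF weights so the interpolant passes through the reference points."""
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    phi = np.exp(-(eps * r) ** 2)          # Gaussian kernel (assumed choice)
    return np.linalg.solve(phi, values)

def rbf_eval(x, centers, weights, eps=1.0):
    """Evaluate the RBF interpolant at query points x."""
    r = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(eps * r) ** 2) @ weights

# Hypothetical unordered reference points sampled from a surface z = f(x, y).
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (50, 2))
z = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])
w = rbf_fit(pts, z, eps=2.0)

# Resample on a regular grid of new, spatially ordered reference points.
gx, gy = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_grid = rbf_eval(grid, pts, w, eps=2.0)
print(z_grid.reshape(20, 20).shape)
```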

  10. Methods of practice and guidelines for using survey-grade global navigation satellite systems (GNSS) to establish vertical datum in the United States Geological Survey

    USGS Publications Warehouse

    Rydlund, Jr., Paul H.; Densmore, Brenda K.

    2012-01-01

    Geodetic surveys have evolved through the years to the use of survey-grade (centimeter level) global positioning to perpetuate and post-process vertical datum. The U.S. Geological Survey (USGS) uses Global Navigation Satellite Systems (GNSS) technology to monitor natural hazards, ensure geospatial control for climate and land use change, and gather data necessary for investigative studies related to water, the environment, energy, and ecosystems. Vertical datum is fundamental to a variety of these integrated earth sciences. Essentially GNSS surveys provide a three-dimensional position x, y, and z as a function of the North American Datum of 1983 ellipsoid and the most current hybrid geoid model. A GNSS survey may be approached with post-processed positioning for static observations related to a single point or network, or involve real-time corrections to provide positioning "on-the-fly." Field equipment required to facilitate GNSS surveys range from a single receiver, with a power source for static positioning, to an additional receiver or network communicated by radio or cellular for real-time positioning. A real-time approach in its most common form may be described as a roving receiver augmented by a single-base station receiver, known as a single-base real-time (RT) survey. More efficient real-time methods involving a Real-Time Network (RTN) permit the use of only one roving receiver that is augmented to a network of fixed receivers commonly known as Continually Operating Reference Stations (CORS). A post-processed approach in its most common form involves static data collection at a single point. Data are most commonly post-processed through a universally accepted utility maintained by the National Geodetic Survey (NGS), known as the Online Position User Service (OPUS). More complex post-processed methods involve static observations among a network of additional receivers collecting static data at known benchmarks. Both classifications provide users flexibility regarding efficiency and quality of data collection. Quality assurance of survey-grade global positioning is often overlooked or not understood and perceived uncertainties can be misleading. GNSS users can benefit from a blueprint of data collection standards used to ensure consistency among USGS mission areas. A classification of GNSS survey qualities provide the user with the ability to choose from the highest quality survey used to establish objective points with low uncertainties, identified as a Level I, to a GNSS survey for general topographic control without quality assurance, identified as a Level IV. A Level I survey is strictly limited to post-processed methods, whereas Level II, Level III, and Level IV surveys integrate variations of a RT approach. Among these classifications, techniques involving blunder checks and redundancy are important, and planning that involves the assessment of the overall satellite configuration, as well as terrestrial and space weather, are necessary to ensure an efficient and quality campaign. Although quality indicators and uncertainties are identified in post-processed methods using CORS, the accuracy of a GNSS survey is most effectively expressed as a comparison to a local benchmark that has a high degree of confidence. Real-time and post-processed methods should incorporate these "trusted" benchmarks as a check during any campaign. Global positioning surveys are expected to change rapidly in the future. 
The expansion of continuously operating reference stations, newly available satellite signals, and enhancements to the conterminous geoid all point toward substantial growth in real-time positioning and in its quality.

  11. Assessing Posttraumatic Stress Disorder with or without Reference to a Single, Worst Traumatic Event: Examining Differences in Factor Structure

    ERIC Educational Resources Information Center

    Elhai, Jon D.; Engdahl, Ryan M.; Palmieri, Patrick A.; Naifeh, James A.; Schweinle, Amy; Jacobs, Gerard A.

    2009-01-01

    The authors examined the effects of a methodological manipulation on the Posttraumatic Stress Disorder (PTSD) Checklist's factor structure: specifically, whether respondents were instructed to reference a single worst traumatic event when rating PTSD symptoms. Nonclinical, trauma-exposed participants were randomly assigned to 1 of 2 PTSD…

  12. Improved maize reference genome with single-molecule technologies.

    PubMed

    Jiao, Yinping; Peluso, Paul; Shi, Jinghua; Liang, Tiffany; Stitzer, Michelle C; Wang, Bo; Campbell, Michael S; Stein, Joshua C; Wei, Xuehong; Chin, Chen-Shan; Guill, Katherine; Regulski, Michael; Kumari, Sunita; Olson, Andrew; Gent, Jonathan; Schneider, Kevin L; Wolfgruber, Thomas K; May, Michael R; Springer, Nathan M; Antoniou, Eric; McCombie, W Richard; Presting, Gernot G; McMullen, Michael; Ross-Ibarra, Jeffrey; Dawe, R Kelly; Hastie, Alex; Rank, David R; Ware, Doreen

    2017-06-22

    Complete and accurate reference genomes and annotations provide fundamental tools for characterization of genetic and functional variation. These resources facilitate the determination of biological processes and support translation of research findings into improved and sustainable agricultural technologies. Many reference genomes for crop plants have been generated over the past decade, but these genomes are often fragmented and missing complex repeat regions. Here we report the assembly and annotation of a reference genome of maize, a genetic and agricultural model species, using single-molecule real-time sequencing and high-resolution optical mapping. Relative to the previous reference genome, our assembly features a 52-fold increase in contig length and notable improvements in the assembly of intergenic spaces and centromeres. Characterization of the repetitive portion of the genome revealed more than 130,000 intact transposable elements, allowing us to identify transposable element lineage expansions that are unique to maize. Gene annotations were updated using 111,000 full-length transcripts obtained by single-molecule real-time sequencing. In addition, comparative optical mapping of two other inbred maize lines revealed a prevalence of deletions in regions of low gene density and maize lineage-specific genes.

  13. Backscatter calibration of high-frequency multibeam echosounder using a reference single-beam system, on natural seafloor

    NASA Astrophysics Data System (ADS)

    Eleftherakis, Dimitrios; Berger, Laurent; Le Bouffant, Naig; Pacault, Anne; Augustin, Jean-Marie; Lurton, Xavier

    2018-06-01

    The calibration of multibeam echosounders for backscatter measurements can be conducted efficiently and accurately using data from surveys over a reference natural area, implying appropriate measurements of the local absolute values of backscatter. Such a shallow area (20-m mean depth) has been defined and qualified in the Bay of Brest (France), and chosen as a reference area for multibeam systems operating at 200 and 300 kHz. The absolute reflectivity over the area was measured using a calibrated single-beam fishery echosounder (Simrad EK60) tilted at incidence angles varying between 0° and 60° with a step of 3°. This reference backscatter level is then compared to the average backscatter values obtained by a multibeam echosounder (here a Kongsberg EM 2040-D) at a close frequency and measured as a function of angle; the difference gives the angular bias applicable to the multibeam system for recorded level calibration. The method is validated by checking the single- and multibeam data obtained on other areas with sediment types different from the reference area.
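
    A minimal sketch of the comparison step described in this record (an editorial illustration with assumed arrays; the actual processing chains for the EK60 and EM 2040 data are far more involved): multibeam backscatter is averaged per incidence-angle bin and differenced against the reference single-beam curve, giving the angular bias.

```python
import numpy as np

def angular_bias(mbes_angle, mbes_bs_db, ref_angle, ref_bs_db, bin_width=3.0):
    """Estimate the per-angle calibration bias of a multibeam echosounder.

    mbes_angle, mbes_bs_db : per-sounding incidence angles (deg) and backscatter (dB)
    ref_angle, ref_bs_db   : reference single-beam backscatter curve over the same area
    Returns bin centers and the bias (multibeam minus reference) in dB.
    """
    edges = np.arange(0.0, 60.0 + bin_width, bin_width)
    centers = 0.5 * (edges[:-1] + edges[1:])
    bias = np.full(centers.shape, np.nan)
    ref_on_centers = np.interp(centers, ref_angle, ref_bs_db)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        sel = (mbes_angle >= lo) & (mbes_angle < hi)
        if sel.any():
            # Average in the linear (intensity) domain, then convert back to dB.
            mean_lin = np.mean(10.0 ** (mbes_bs_db[sel] / 10.0))
            bias[i] = 10.0 * np.log10(mean_lin) - ref_on_centers[i]
    return centers, bias

# Hypothetical data: a reference curve every 3 degrees and noisy multibeam soundings.
ref_angle = np.arange(0, 63, 3)
ref_bs = -20.0 - 0.15 * ref_angle
rng = np.random.default_rng(0)
ang = rng.uniform(0, 60, 5000)
bs = -18.5 - 0.15 * ang + rng.normal(0, 1.0, 5000)    # ~1.5 dB offset to recover
print(angular_bias(ang, bs, ref_angle, ref_bs)[1])
```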

  14. Analysis of data from NASA B-57B gust gradient program

    NASA Technical Reports Server (NTRS)

    Frost, W.; Lin, M. C.; Chang, H. P.; Ringnes, E.

    1985-01-01

    Statistical analysis of the turbulence measured in flight 6 of the NASA B-57B over Denver, Colorado, from July 7 to July 23, 1982 included the calculation of average turbulence parameters, integral length scales, probability density functions, single point autocorrelation coefficients, two point autocorrelation coefficients, normalized autospectra, normalized two point autospectra, and two point cross spectra for gust velocities. The single point autocorrelation coefficients were compared with the theoretical model developed by von Karman. Theoretical analyses were developed which address the effects of spanwise gust distributions, using two point spatial turbulence correlations.

  15. Three questions you need to ask about your brand.

    PubMed

    Keller, Kevin Lane; Sternthal, Brian; Tybout, Alice

    2002-09-01

    Traditionally, the people responsible for positioning brands have concentrated on the differences that set each brand apart from the competition. But emphasizing differences isn't enough to sustain a brand against competitors. Managers should also consider the frame of reference within which the brand works and the features the brand shares with other products. Asking three questions about your brand can help: HAVE WE ESTABLISHED A FRAME?: A frame of reference--for Coke, it might be as narrow as other colas or as broad as all thirst-quenching drinks--signals to consumers the goal they can expect to achieve by using a brand. Brand managers need to pay close attention to this issue, in some cases expanding their focus in order to preempt the competition. ARE WE LEVERAGING OUR POINTS OF PARITY?: Certain points of parity must be met if consumers are to perceive your product as a legitimate player within its frame of reference. For instance, consumers might not consider a bank truly a bank unless it offers checking and savings plans. ARE THE POINTS OF DIFFERENCE COMPELLING?: A distinguishing characteristic that consumers find both relevant and believable can become a strong, favorable, unique brand association, capable of distinguishing the brand from others in the same frame of reference. Frames of reference, points of parity, and points of difference are moving targets. Maytag isn't the only dependable brand of appliance, Tide isn't the only detergent with whitening power, and BMWs aren't the only cars on the road with superior handling. The key questions you need to ask about your brand may not change, but their context certainly will. The savviest brand positioners are also the most vigilant.

  16. 7 CFR 1.626 - What regulations apply to a case referred for a hearing?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... referred for a hearing? (a) If NFS refers the case to OALJ, these regulations will continue to apply to the hearing process. (b) If NFS refers the case to the Department of the Interior's Office of Hearing and Appeals, the regulations at 43 CFR 45.1 et seq. will apply from that point. (c) If NFS refers the case to...

  17. 7 CFR 1.626 - What regulations apply to a case referred for a hearing?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... referred for a hearing? (a) If NFS refers the case to OALJ, these regulations will continue to apply to the hearing process. (b) If NFS refers the case to the Department of the Interior's Office of Hearing and Appeals, the regulations at 43 CFR 45.1 et seq. will apply from that point. (c) If NFS refers the case to...

  18. 7 CFR 1.626 - What regulations apply to a case referred for a hearing?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... referred for a hearing? (a) If NFS refers the case to OALJ, these regulations will continue to apply to the hearing process. (b) If NFS refers the case to the Department of the Interior's Office of Hearing and Appeals, the regulations at 43 CFR 45.1 et seq. will apply from that point. (c) If NFS refers the case to...

  19. 7 CFR 1.626 - What regulations apply to a case referred for a hearing?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... referred for a hearing? (a) If NFS refers the case to OALJ, these regulations will continue to apply to the hearing process. (b) If NFS refers the case to the Department of the Interior's Office of Hearing and Appeals, the regulations at 43 CFR 45.1 et seq. will apply from that point. (c) If NFS refers the case to...

  20. 7 CFR 1.626 - What regulations apply to a case referred for a hearing?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... referred for a hearing? (a) If NFS refers the case to OALJ, these regulations will continue to apply to the hearing process. (b) If NFS refers the case to the Department of the Interior's Office of Hearing and Appeals, the regulations at 43 CFR 45.1 et seq. will apply from that point. (c) If NFS refers the case to...

  1. Facial expression identification using 3D geometric features from Microsoft Kinect device

    NASA Astrophysics Data System (ADS)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions. More recently, the device has been increasingly deployed for supporting scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smile, and sadness, and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
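
    A minimal sketch of the classification idea in this record, per-frame distance features combined with a DTW-based nearest-neighbour vote (all names and parameters below are assumptions, not the authors' implementation):

```python
import numpy as np

def frame_features(mesh_points, reference_points):
    """Per-frame feature: distances from selected mesh points to the reference points."""
    return np.linalg.norm(mesh_points[:, None, :] - reference_points[None, :, :],
                          axis=-1).ravel()

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of feature vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_predict(query_seq, train_seqs, train_labels, k=3):
    """k-nearest-neighbour vote using DTW as the sequence similarity measure."""
    dists = np.array([dtw_distance(query_seq, s) for s in train_seqs])
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(np.asarray(train_labels)[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical usage: each expression is a sequence of per-frame feature vectors.
rng = np.random.default_rng(0)
train = [rng.normal(i % 2, 1.0, (20, 12)) for i in range(6)]   # 6 toy sequences
labels = ["smile", "surprise"] * 3
query = rng.normal(1.0, 1.0, (18, 12))
print(knn_predict(query, train, labels))
```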

  2. Principal component analysis of binding energies for single-point mutants of hT2R16 bound to an agonist correlate with experimental mutant cell response.

    PubMed

    Chen, Derek E; Willick, Darryl L; Ruckel, Joseph B; Floriano, Wely B

    2015-01-01

    Directed evolution is a technique that enables the identification of mutants of a particular protein that carry a desired property by successive rounds of random mutagenesis, screening, and selection. This technique has many applications, including the development of G protein-coupled receptor-based biosensors and designer drugs for personalized medicine. Although effective, directed evolution is not without challenges and can greatly benefit from the development of computational techniques to predict the functional outcome of single-point amino acid substitutions. In this article, we describe a molecular dynamics-based approach to predict the effects of single amino acid substitutions on agonist binding (salicin) to a human bitter taste receptor (hT2R16). An experimentally determined functional map of single-point amino acid substitutions was used to validate the whole-protein molecular dynamics-based predictive functions. Molecular docking was used to construct a wild-type agonist-receptor complex, providing a starting structure for single-point substitution simulations. The effects of each single amino acid substitution in the functional response of the receptor to its agonist were estimated using three binding energy schemes with increasing inclusion of solvation effects. We show that molecular docking combined with molecular mechanics simulations of single-point mutants of the agonist-receptor complex accurately predicts the functional outcome of single amino acid substitutions in a human bitter taste receptor.

  3. Detection of kinetic change points in piece-wise linear single molecule motion

    NASA Astrophysics Data System (ADS)

    Hill, Flynn R.; van Oijen, Antoine M.; Duderstadt, Karl E.

    2018-03-01

    Single-molecule approaches present a powerful way to obtain detailed kinetic information at the molecular level. However, the identification of small rate changes is often hindered by the considerable noise present in such single-molecule kinetic data. We present a general method to detect such kinetic change points in trajectories of motion of processive single molecules having Gaussian noise, with a minimum number of parameters and without the need of an assumed kinetic model beyond piece-wise linearity of motion. Kinetic change points are detected using a likelihood ratio test in which the probability of no change is compared to the probability of a change occurring, given the experimental noise. A predetermined confidence interval minimizes the occurrence of false detections. Applying the method recursively to all sub-regions of a single molecule trajectory ensures that all kinetic change points are located. The algorithm presented allows rigorous and quantitative determination of kinetic change points in noisy single molecule observations without the need for filtering or binning, which reduce temporal resolution and obscure dynamics. The statistical framework for the approach and implementation details are discussed. The detection power of the algorithm is assessed using simulations with both single kinetic changes and multiple kinetic changes that typically arise in observations of single-molecule DNA-replication reactions. Implementations of the algorithm are provided in ImageJ plugin format written in Java and in the Julia language for numeric computing, with accompanying Jupyter Notebooks to allow reproduction of the analysis presented here.
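
    A minimal sketch of the single change-point test described here (a generic likelihood-ratio comparison of one straight line versus two, assuming known Gaussian noise σ; the published algorithm applies such a test recursively with a preset confidence threshold, which is not reproduced below):

```python
import numpy as np

def rss_line(t, x):
    """Residual sum of squares of the best-fit straight line."""
    A = np.column_stack([t, np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return np.sum((x - A @ coef) ** 2)

def best_change_point(t, x, sigma, min_pts=3):
    """Return (index, log-likelihood ratio) of the most likely kinetic change point.

    The log-likelihood ratio compares a single line with two lines joined at the
    candidate index, for i.i.d. Gaussian noise of known standard deviation sigma.
    """
    rss_single = rss_line(t, x)
    best_i, best_llr = None, -np.inf
    for i in range(min_pts, len(t) - min_pts):
        rss_split = rss_line(t[:i], x[:i]) + rss_line(t[i:], x[i:])
        llr = (rss_single - rss_split) / (2.0 * sigma ** 2)
        if llr > best_llr:
            best_i, best_llr = i, llr
    return best_i, best_llr

# Hypothetical trajectory: rate changes from 1.0 to 3.0 at t = 5 s, sigma = 0.5.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
x = np.where(t < 5, 1.0 * t, 5.0 + 3.0 * (t - 5)) + rng.normal(0, 0.5, t.size)
idx, llr = best_change_point(t, x, sigma=0.5)
print(t[idx], llr)   # change point near t = 5; accept only if llr exceeds a threshold
```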

  4. Improving the discoverability, accessibility, and citability of omics datasets: a case report.

    PubMed

    Darlington, Yolanda F; Naumov, Alexey; McOwiti, Apollo; Kankanamge, Wasula H; Becnel, Lauren B; McKenna, Neil J

    2017-03-01

    Although omics datasets represent valuable assets for hypothesis generation, model testing, and data validation, the infrastructure supporting their reuse lacks organization and consistency. Using nuclear receptor signaling transcriptomic datasets as proof of principle, we developed a model to improve the discoverability, accessibility, and citability of published omics datasets. Primary datasets were retrieved from archives, processed to extract data points, then subjected to metadata enrichment and gap filling. The resulting secondary datasets were exposed on responsive web pages to support mining of gene lists, discovery of related datasets, and single-click citation integration with popular reference managers. Automated processes were established to embed digital object identifier-driven links to the secondary datasets in associated journal articles, small molecule and gene-centric databases, and a dataset search engine. Our model creates multiple points of access to reprocessed and reannotated derivative datasets across the digital biomedical research ecosystem, promoting their visibility and usability across disparate research communities.

  5. Visual EKF-SLAM from Heterogeneous Landmarks †

    PubMed Central

    Esparza-Jiménez, Jorge Othón; Devy, Michel; Gordillo, José L.

    2016-01-01

    Many applications require the localization of a moving object, e.g., a robot, using sensory data acquired from embedded devices. Simultaneous localization and mapping from vision performs both the spatial and temporal fusion of these data on a map when a camera moves in an unknown environment. Such a SLAM process executes two interleaved functions: the front-end detects and tracks features from images, while the back-end interprets features as landmark observations and estimates both the landmarks and the robot positions with respect to a selected reference frame. This paper describes a complete visual SLAM solution, combining both point and line landmarks on a single map. The proposed method has an impact on both the back-end and the front-end. The contributions comprehend the use of heterogeneous landmark-based EKF-SLAM (the management of a map composed of both point and line landmarks); from this perspective, the comparison between landmark parametrizations and the evaluation of how the heterogeneity improves the accuracy on the camera localization, the development of a front-end active-search process for linear landmarks integrated into SLAM and the experimentation methodology. PMID:27070602

  6. Gluon and Wilson loop TMDs for hadrons of spin ≤ 1

    NASA Astrophysics Data System (ADS)

    Boer, Daniël; Cotogno, Sabrina; van Daal, Tom; Mulders, Piet J.; Signori, Andrea; Zhou, Ya-Jin

    2016-10-01

    In this paper we consider the parametrizations of gluon transverse momentum dependent (TMD) correlators in terms of TMD parton distribution functions (PDFs). These functions, referred to as TMDs, are defined as the Fourier transforms of hadronic matrix elements of nonlocal combinations of gluon fields. The nonlocality is bridged by gauge links, which have characteristic paths (future or past pointing), giving rise to a process dependence that breaks universality. For gluons, the specific correlator with one future and one past pointing gauge link is, in the limit of small x, related to a correlator of a single Wilson loop. We present the parametrization of Wilson loop correlators in terms of Wilson loop TMDs and discuss the relation between these functions and the small- x `dipole' gluon TMDs. This analysis shows which gluon TMDs are leading or suppressed in the small- x limit. We discuss hadronic targets that are unpolarized, vector polarized (relevant for spin-1 /2 and spin-1 hadrons), and tensor polarized (relevant for spin-1 hadrons). The latter are of interest for studies with a future Electron-Ion Collider with polarized deuterons.

  7. Application of point-to-point matching algorithms for background correction in on-line liquid chromatography-Fourier transform infrared spectrometry (LC-FTIR).

    PubMed

    Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M

    2010-03-15

    A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user-interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
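
    A minimal sketch of the general idea (the published algorithm's exact similarity measure and window-selection step are not reproduced here; array shapes and names are assumptions): for each sample spectrum, pick the reference spectrum that matches best point-by-point over a chosen spectral window and subtract it.

        import numpy as np

        def correct_background(sample_spectra, reference_spectra, window):
            """Subtract from each sample spectrum its closest-matching reference.

            sample_spectra    : (n_sample, n_points) absorbance array
            reference_spectra : (n_ref, n_points) array recorded over the gradient
            window            : index slice used for the point-to-point comparison
            """
            corrected = np.empty_like(sample_spectra)
            for i, spectrum in enumerate(sample_spectra):
                # Euclidean point-to-point distance inside the comparison window
                d = np.linalg.norm(reference_spectra[:, window] - spectrum[window], axis=1)
                corrected[i] = spectrum - reference_spectra[np.argmin(d)]
            return corrected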

  8. Order of accuracy of QUICK and related convection-diffusion schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

    This report attempts to correct some misunderstandings that have appeared in the literature concerning the order of accuracy of the QUICK scheme for steady-state convective modeling. Other related convection-diffusion schemes are also considered. The original one-dimensional QUICK scheme written in terms of nodal-point values of the convected variable (with a 1/8-factor multiplying the 'curvature' term) is indeed a third-order representation of the finite volume formulation of the convection operator average across the control volume, written naturally in flux-difference form. An alternative single-point upwind difference scheme (SPUDS) using node values (with a 1/6-factor) is a third-order representation of the finite difference single-point formulation; this can be written in a pseudo-flux difference form. These are both third-order convection schemes; however, the QUICK finite volume convection operator is 33 percent more accurate than the single-point implementation of SPUDS. Another finite volume scheme, writing convective fluxes in terms of cell-average values, requires a 1/6-factor for third-order accuracy. For completeness, one can also write a single-point formulation of the convective derivative in terms of cell averages, and then express this in pseudo-flux difference form; for third-order accuracy, this requires a curvature factor of 5/24. Diffusion operators are also considered in both single-point and finite volume formulations. Finite volume formulations are found to be significantly more accurate. For example, classical second-order central differencing for the second derivative is exactly twice as accurate in a finite volume formulation as it is in single-point.
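
    As a small worked illustration of the nodal-point form discussed above (schematic only, for a uniform grid with flow in the positive direction; indexing conventions vary between texts), the QUICK face value and its 1/8 curvature factor can be written as:

        def quick_face_value(phi_im1, phi_i, phi_ip1):
            """QUICK estimate of the convected variable at the face between nodes
            i and i+1 on a uniform grid with flow from i toward i+1: linear
            interpolation minus 1/8 of the upstream curvature."""
            return 0.5 * (phi_i + phi_ip1) - 0.125 * (phi_im1 - 2.0 * phi_i + phi_ip1)

        # The single-point upwind scheme (SPUDS) mentioned in the abstract uses a
        # 1/6 curvature factor in place of the 1/8 factor above.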

  9. Post-surgical effects on language in patients with presumed low-grade glioma.

    PubMed

    Antonsson, M; Jakola, A; Longoni, F; Carstam, L; Hartelius, L; Thordstein, M; Tisell, M

    2018-05-01

    Low-grade glioma (LGG) is a slow-growing brain tumour often situated in or near areas involved in language and/or cognitive functions. Thus, language impairments due to tumour growth or surgical resection are obvious risks. We aimed to investigate language outcome following surgery in patients with presumed LGG, using a comprehensive and sensitive language assessment. Thirty-two consecutive patients with presumed LGG were assessed preoperatively, early post-operatively, and 3 months post-operatively using sensitive tests including lexical retrieval, language comprehension and high-level language. The patients' preoperative language ability was compared with that of a reference group, and also with their own performance at the post-operative assessments. Further, the association between tumour location and language performance pre- and post-operatively was explored. Before surgery, the patients with presumed LGG performed worse on tests of lexical retrieval when compared to a reference group (BNT: LGG-group median 52, Reference-group median 54, P = .002; Animals: LGG-group mean 21.0, Reference-group mean 25, P = .001; Verbs: LGG-group mean 17.3, Reference-group mean 21.4, P = .001). At the early post-operative assessment, we observed a decline in all language tests, whereas at 3 months there was only a decline on a single test of lexical retrieval (Animals: preoperative median 20, post-operative median 14, P = .001). The highest proportion of language impairment was found in the group with a tumour in language-eloquent areas at all time-points. Although many patients with a tumour in the left hemisphere deteriorated in their language function directly after surgery, their prognosis for recovery was good. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Trigger point-related sympathetic nerve activity in chronic sciatic leg pain: a case study.

    PubMed

    Skorupska, Elżbieta; Rychlik, Michał; Pawelec, Wiktoria; Bednarek, Agata; Samborski, Włodzimierz

    2014-10-01

    Sciatica has classically been associated with irritation of the sciatic nerve by the vertebral disc and consequent inflammation. Some authors suggest that active trigger points in the gluteus minimus muscle can refer pain in a similar way to sciatica. Trigger point diagnosis is based on the Travell and Simons criteria, with referred pain and twitch response being significant confirmatory signs of the diagnostic criteria. Although vasoconstriction in the area of a latent trigger point has been demonstrated, the vasomotor reaction of active trigger points has not been examined. We report the case of a 22-year-old Caucasian European man who presented with a 3-year history of chronic sciatic-type leg pain. In the third year of symptoms, coexistent myofascial pain syndrome was diagnosed. Acupuncture needle stimulation of active trigger points under an infrared thermovision camera showed a sudden short-term vasodilatation (an autonomic phenomenon) in the area of referred pain. The vasodilatation spread from 0.2 to 171.9 cm² and then gradually decreased. After needling, increases in average and maximum skin temperature were seen as follows: for the thigh, changes were +2.6°C (average) and +3.6°C (maximum); for the calf, changes were +0.9°C (average) and +1.4°C (maximum). It is not yet known whether the vasodilatation observed was evoked exclusively by dry needling of active trigger points. The complex condition of the patient suggests that other variables might have influenced the infrared thermovision camera results. We suggest that it is important to check whether vasodilatation in the area of referred pain occurs in all patients with active trigger points. Published by the BMJ Publishing Group Limited.

  11. Enhancement of MS2D Bartington point measurement of soil magnetic susceptibility

    NASA Astrophysics Data System (ADS)

    Fabijańczyk, Piotr; Zawadzki, Jarosław

    2015-04-01

    Field magnetometry is a fast method used to assess potential soil pollution. The most popular device used to measure soil magnetic susceptibility on the soil surface is the Bartington MS2D. A single MS2D reading of soil magnetic susceptibility takes little time but is often characterized by considerable errors related to the instrument or to environmental and lithogenic factors. Typically, in order to calculate a reliable average value of soil magnetic susceptibility, a series of MS2D readings is performed at the sample point. As analyzed previously, such a methodology makes it possible to significantly reduce the nugget effect of the variograms of soil magnetic susceptibility, which is related to micro-scale variance and measurement errors. The goal of this study was to optimize the process of taking a series of MS2D readings, whose average value constitutes a single measurement, in order to take into account micro-scale variations of soil magnetic susceptibility in the proper determination of this parameter. This was done using statistical and geostatistical analyses. The analyses were performed using field MS2D measurements that were carried out in a study area located in the direct vicinity of the Katowice agglomeration. At each of 150 sample points, 10 MS2D readings of soil magnetic susceptibility were taken. Using this data set, series of experimental variograms were calculated and modeled: first using a single random MS2D reading for each sample point, and then using data sets enlarged by one additional MS2D reading at a time, until their number reached 10. The variogram parameters (nugget effect, sill, and range of correlation) were used to determine the most suitable number of MS2D readings per sample point. The distributions of soil magnetic susceptibility at each sample point were also analyzed in order to determine the number of readings needed to calculate a reliable average soil magnetic susceptibility. The research leading to these results has received funding from the Polish-Norwegian Research Programme operated by the National Centre for Research and Development under the Norwegian Financial Mechanism 2009-2014 in the frame of Project IMPACT - Contract No Pol-Nor/199338/45/2013. References: Zawadzki J., Magiera T., Fabijańczyk P., 2007. The influence of forest stand and organic horizon development on soil surface measurement of magnetic susceptibility. Polish Journal of Soil Science, XL(2), 113-124; Zawadzki J., Fabijańczyk P., Magiera T., Strzyszcz Z., 2010. Study of litter influence on magnetic susceptibility measurements of urban forest topsoils using the MS2D sensor. Environmental Earth Sciences, 61(2), 223-230.
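
    As a hedged illustration of the geostatistical step described above (a generic Matheron semivariogram estimator, not the study's own software; the coordinate array, the per-point averages, and the lag bins are assumed inputs), the experimental variogram whose nugget effect is examined could be computed as:

        import numpy as np

        def semivariogram(coords, values, lag_edges):
            """Classical (Matheron) experimental semivariogram.

            coords    : (n, 2) array of sample-point coordinates
            values    : (n,) array, e.g. the mean of k MS2D readings per point
            lag_edges : 1-D array of lag-distance bin edges
            """
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            sq = 0.5 * (values[:, None] - values[None, :]) ** 2
            iu = np.triu_indices(len(values), k=1)      # each pair counted once
            gamma = []
            for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
                mask = (d[iu] >= lo) & (d[iu] < hi)
                gamma.append(sq[iu][mask].mean() if mask.any() else np.nan)
            return np.array(gamma)

    Recomputing the variogram with per-point averages of 1, 2, ..., 10 readings then shows how the nugget effect shrinks as more readings are averaged at each sample point.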

  12. Point to a Referent, and Say, "What Is This?" Gesture as a Potential Cue to Identify Referents in a Discourse

    ERIC Educational Resources Information Center

    So, Wing Chee; Lim, Jia Yi

    2012-01-01

    This study explored whether caregivers' gestures followed the discourse-pragmatic principle of information status of referents (given vs. new) and how their children responded to those gestures when identifying referents. Ten Chinese-speaking and eight English-speaking caregivers were videotaped while interacting spontaneously with their children.…

  13. Nursing Reference Center: a point-of-care resource.

    PubMed

    Vardell, Emily; Paulaitis, Gediminas Geddy

    2012-01-01

    Nursing Reference Center is a point-of-care resource designed for the practicing nurse, as well as nursing administrators, nursing faculty, and librarians. Users can search across multiple resources, including topical Quick Lessons, evidence-based care sheets, patient education materials, practice guidelines, and more. Additional features include continuing education modules, e-books, and a new iPhone application. A sample search and comparison with similar databases were conducted.

  14. Reliable landmarks for precise topographical analyses of pathological structural changes of the ovine tibial plateau in 2-D and 3-D subspaces.

    PubMed

    Oláh, Tamás; Reinhard, Jan; Gao, Liang; Goebel, Lars K H; Madry, Henning

    2018-01-08

    Selecting identical topographical locations to analyse pathological structural changes of the osteochondral unit in translational models remains difficult. The specific aim of the study was to provide objectively defined reference points on the ovine tibial plateau based on 2-D sections of micro-CT images useful for reproducible sample harvesting and as standardized landmarks for landmark-based 3-D image registration. We propose 5 reference points, 11 reference lines and 12 subregions that are detectable macroscopically and on 2-D micro-CT sections. Their value was confirmed applying landmark-based rigid and affine 3-D registration methods. Intra- and interobserver comparison showed high reliabilities, and constant positions (standard errors < 1%). Spatial patterns of the thicknesses of the articular cartilage and subchondral bone plate were revealed by measurements in 96 individual points of the tibial plateau. As a case study, pathological phenomena 6 months following OA induction in vivo such as osteophytes and areas of OA development were mapped to the individual subregions. These new reference points and subregions are directly identifiable on tibial plateau specimens or macroscopic images, enabling a precise topographical location of pathological structural changes of the osteochondral unit in both 2-D and 3-D subspaces in a region-appropriate fashion relevant for translational investigations.

  15. An aggregate method to calibrate the reference point of cumulative prospect theory-based route choice model for urban transit network

    NASA Astrophysics Data System (ADS)

    Zhang, Yufeng; Long, Man; Luo, Sida; Bao, Yu; Shen, Hanxia

    2015-12-01

    The transit route choice model is a key technology for public transit system planning and management. Traditional route choice models are mostly based on expected utility theory, which has an evident shortcoming: it cannot accurately portray travelers' subjective route choice behavior, because their risk preferences are not taken into consideration. Cumulative prospect theory (CPT), a more recent theory, can be used to describe travelers' decision-making process under uncertain transit supply and the risk preferences of different traveler types. The method used to calibrate the reference point, a key parameter of the CPT-based transit route choice model, determines the precision of the model to a great extent. In this paper, a new method that combines theoretical calculation and field investigation results is put forward to obtain the value of the reference point. A comparison of the proposed method with the traditional method, based on a transit trip investigation from Nanjing City, China, shows that the new method can improve the quality of the CPT-based model by increasing the accuracy with which travelers' route choice behaviors are simulated. The proposed method is of great significance for sound transit planning and management, and to some extent remedies the shortcoming that the reference point has previously been obtained solely from qualitative analysis.

  16. Development of robots and application to industrial processes

    NASA Technical Reports Server (NTRS)

    Palm, W. J.; Liscano, R.

    1984-01-01

    An algorithm is presented for using a robot system with a single camera to position in three-dimensional space a slender object for insertion into a hole; for example, an electrical pin-type termination into a connector hole. The algorithm relies on a control-configured end effector to achieve the required horizontal translations and rotational motion, and it does not require camera calibration. A force sensor in each fingertip is integrated with the vision system to allow the robot to teach itself new reference points when different connectors and pins are used. Variability in the grasped orientation and position of the pin can be accommodated with the sensor system. Performance tests show that the system is feasible. More work is needed to determine more precisely the effects of lighting levels and lighting direction.

  17. Zinc-coordinated MOFs complexes regulated by hydrogen bonds: Synthesis, structure and luminescence study toward broadband white-light emission

    NASA Astrophysics Data System (ADS)

    Duan, Hui; Dan, Wenyan; Fang, Xiangdong

    2018-04-01

    Two new compounds, namely {[Zn(apc)2]·H2O}n (1) and [Zn(apc)2(H2O)2] (2), have been designed and synthesized with a multi-functional ligand 2-aminopyrimidine-5-carboxylic acid (Hapc). Both compounds were characterized by single crystal X-ray diffraction analysis (SC-XRD), elemental analysis (EA), infrared spectroscopy (IR), and thermogravimetric analysis (TG). In solid-state structures, 1 features a two-fold interpenetrating pillared-layer 3D framework with point symbol {8³}₂{8⁶}, referring to the tfa topology, while 2 exhibits a 3D framework based on a super unit of Zn(apc)2(H2O)2 interconnected via hydrogen bonds. Furthermore, the luminescent properties of 1 and 2 were discussed.

  18. Syntenic block overlap multiplicities with a panel of reference genomes provide a signature of ancient polyploidization events.

    PubMed

    Zheng, Chunfang; Santos Muñoz, Daniella; Albert, Victor A; Sankoff, David

    2015-01-01

    Following whole genome duplication (WGD), there is a compact distribution of gene similarities within the genome reflecting duplicate pairs of all the genes in the genome. With time, the distribution broadens and loses volume due to variable decay of duplicate gene similarity and to the process of duplicate gene loss. If there are two WGD, the older one becomes so reduced and broad that it merges with the tail of the distributions resulting from more recent events, and it becomes difficult to distinguish them. The goal of this paper is to advance statistical methods of identifying, or at least counting, the WGD events in the lineage of a given genome. For a set of 15 angiosperm genomes, we analyze all 15 × 14 = 210 ordered pairs of target genome versus reference genome, using SynMap to find syntenic blocks. We consider all sets of B ≥ 2 syntenic blocks in the target genome that overlap in the reference genome as evidence of WGD activity in the target, whether it be one event or several. We hypothesize that in fitting an exponential function to the tail of the empirical distribution f (B) of block multiplicities, the size of the exponent will reflect the amount of WGD in the history of the target genome. By amalgamating the results from all reference genomes, a range of values of SynMap parameters, and alternative cutoff points for the tail, we find a clear pattern whereby multiple-WGD core eudicots have the smallest (negative) exponents, followed by core eudicots with only the single "γ" triplication in their history, followed by a non-core eudicot with a single WGD, followed by the monocots, with a basal angiosperm, the WGD-free Amborella having the largest exponent. The hypothesis that the exponent of the fit to the tail of the multiplicity distribution is a signature of the amount of WGD is verified, but there is also a clear complicating factor in the monocot clade, where a history of multiple WGD is not reflected in a small exponent.
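
    A minimal sketch of the kind of tail fit described (assumed details: a simple log-linear least-squares fit of the multiplicity counts beyond a cutoff; the authors' actual fitting procedure and cutoff choices may differ):

        import numpy as np

        def tail_exponent(block_multiplicities, cutoff):
            """Fit log f(B) = a + lam * B for B >= cutoff and return lam,
            the exponent of the fitted exponential tail.

            block_multiplicities : iterable of B values, one per set of target
                                   blocks that overlap in the reference genome
            cutoff               : smallest multiplicity included in the tail fit
            """
            values, counts = np.unique(np.asarray(block_multiplicities), return_counts=True)
            mask = values >= cutoff
            lam, _ = np.polyfit(values[mask], np.log(counts[mask]), 1)
            return lam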

  19. Random phase detection in multidimensional NMR.

    PubMed

    Maciejewski, Mark W; Fenwick, Matthew; Schuyler, Adam D; Stern, Alan S; Gorbatyuk, Vitaliy; Hoch, Jeffrey C

    2011-10-04

    Despite advances in resolution accompanying the development of high-field superconducting magnets, biomolecular applications of NMR require multiple dimensions in order to resolve individual resonances, and the achievable resolution is typically limited by practical constraints on measuring time. In addition to the need for measuring long evolution times to obtain high resolution, the need to distinguish the sign of the frequency constrains the ability to shorten measuring times. Sign discrimination is typically accomplished by sampling the signal with two different receiver phases or by selecting a reference frequency outside the range of frequencies spanned by the signal and then sampling at a higher rate. In the parametrically sampled (indirect) time dimensions of multidimensional NMR experiments, either method imposes an additional factor of 2 sampling burden for each dimension. We demonstrate that by using a single detector phase at each time sample point, but randomly altering the phase for different points, the sign ambiguity that attends fixed single-phase detection is resolved. Random phase detection enables a reduction in experiment time by a factor of 2 for each indirect dimension, amounting to a factor of 8 for a four-dimensional experiment, albeit at the cost of introducing sampling artifacts. Alternatively, for fixed measuring time, random phase detection can be used to double resolution in each indirect dimension. Random phase detection is complementary to nonuniform sampling methods, and their combination offers the potential for additional benefits. In addition to applications in biomolecular NMR, random phase detection could be useful in magnetic resonance imaging and other signal processing contexts.

  20. Attitude Control System Design for the Solar Dynamics Observatory

    NASA Technical Reports Server (NTRS)

    Starin, Scott R.; Bourkland, Kristin L.; Kuo-Chia, Liu; Mason, Paul A. C.; Vess, Melissa F.; Andrews, Stephen F.; Morgenstern, Wendy M.

    2005-01-01

    The Solar Dynamics Observatory mission, part of the Living With a Star program, will place a geosynchronous satellite in orbit to observe the Sun and relay data to a dedicated ground station at all times. SDO remains Sun-pointing throughout most of its mission for the instruments to take measurements of the Sun. The SDO attitude control system is a single-fault tolerant design. Its fully redundant attitude sensor complement includes 16 coarse Sun sensors, a digital Sun sensor, 3 two-axis inertial reference units, 2 star trackers, and 4 guide telescopes. Attitude actuation is performed using 4 reaction wheels and 8 thrusters, and a single main engine nominally provides velocity-change thrust. The attitude control software has five nominal control modes: 3 wheel-based modes and 2 thruster-based modes. A wheel-based Safehold running in the attitude control electronics box improves the robustness of the system as a whole. All six modes are designed on the same basic proportional-integral-derivative attitude error structure, with more robust modes setting their integral gains to zero. The paper details the mode designs and their uses.

  1. Neuroimaging in attention-deficit hyperactivity disorder: beyond the frontostriatal circuitry.

    PubMed

    Cherkasova, Mariya V; Hechtman, Lily

    2009-10-01

    To review the findings of structural and functional neuroimaging studies in attention-deficit hyperactivity disorder (ADHD), with a focus on abnormalities reported in brain regions that lie outside the frontostriatal circuitry, which is currently believed to play a central role in the pathophysiology of ADHD. Relevant publications were found primarily by searching the MEDLINE and PubMed databases using the keywords ADHD and the abbreviations of magnetic resonance imaging (MRI), functional MRI, positron emission tomography, and single photon emission computed tomography. The reference lists of the articles found through the databases were then reviewed for the purpose of finding additional articles. There is now substantial evidence of structural and functional alterations in regions outside the frontostriatal circuitry in ADHD, most notably in the cerebellum and the parietal lobes. Although there is compelling evidence suggesting that frontostriatal dysfunction may be central to the pathophysiology of ADHD, the neuroimaging findings point to distributed neural substrates rather than a single one. More research is needed to elucidate the nature of contributions of nonfrontostriatal regions to the pathophysiology of ADHD.

  2. The Effect of Response Format on the Psychometric Properties of the Narcissistic Personality Inventory: Consequences for Item Meaning and Factor Structure.

    PubMed

    Ackerman, Robert A; Donnellan, M Brent; Roberts, Brent W; Fraley, R Chris

    2016-04-01

    The Narcissistic Personality Inventory (NPI) is currently the most widely used measure of narcissism in social/personality psychology. It is also relatively unique because it uses a forced-choice response format. We investigate the consequences of changing the NPI's response format for item meaning and factor structure. Participants were randomly assigned to one of three conditions: 40 forced-choice items (n = 2,754), 80 single-stimulus dichotomous items (i.e., separate true/false responses for each item; n = 2,275), or 80 single-stimulus rating scale items (i.e., 5-point Likert-type response scales for each item; n = 2,156). Analyses suggested that the "narcissistic" and "nonnarcissistic" response options from the Entitlement and Superiority subscales refer to independent personality dimensions rather than high and low levels of the same attribute. In addition, factor analyses revealed that although the Leadership dimension was evident across formats, dimensions with entitlement and superiority were not as robust. Implications for continued use of the NPI are discussed. © The Author(s) 2015.

  3. Datum maintenance of the main Egyptian geodetic control networks by utilizing Precise Point Positioning "PPP" technique

    NASA Astrophysics Data System (ADS)

    Rabah, Mostafa; Elmewafey, Mahmoud; Farahan, Magda H.

    2016-06-01

    A geodetic control network is the wire-frame or skeleton on which continuous and consistent mapping, Geographic Information Systems (GIS), and surveys are based. Traditionally, geodetic control points are established as permanent physical monuments placed in the ground and precisely marked, located, and documented. With the development of satellite surveying methods and their availability and high degree of accuracy, a geodetic control network can be established using GNSS and referred to an international terrestrial reference frame used as a three-dimensional geocentric reference system for a country. Based on this concept, in 1992, the Egypt Survey Authority (ESA) established two networks, namely the High Accuracy Reference Network (HARN) and the National Agricultural Cadastral Network (NACN). To transfer the International Terrestrial Reference Frame to the HARN, the HARN was connected with four IGS stations. The processing results met the 1:10,000,000 (Order A) relative network accuracy standard between stations for HARN and 1:1,000,000 (Order B) for NACN, defined in ITRF1994 Epoch 1996. Since 1996, ESA has not performed any updating or maintenance work for these networks. To see how this lack of maintenance has degraded the value of the HARN and NACN, the available HARN and NACN stations in the Nile Delta were observed. The processing of the tested part was done with the CSRS-PPP service, based on Precise Point Positioning "PPP", and with Trimble Business Center "TBC". The study shows the feasibility of Precise Point Positioning in updating the absolute positioning of the HARN network and its role in updating the reference frame (ITRF). The study also confirms the necessity of datum maintenance for the Egyptian networks, a role that has so far been absent.

  4. Reference-Frame-Independent and Measurement-Device-Independent Quantum Key Distribution Using One Single Source

    NASA Astrophysics Data System (ADS)

    Li, Qian; Zhu, Changhua; Ma, Shuquan; Wei, Kejin; Pei, Changxing

    2018-04-01

    Measurement-device-independent quantum key distribution (MDI-QKD) is immune to all detector side-channel attacks. However, practical implementations of MDI-QKD, which require two-photon interferences from separated independent single-photon sources and a nontrivial reference alignment procedure, are still challenging with current technologies. Here, we propose a scheme that significantly reduces the experimental complexity of two-photon interferences and eliminates reference frame alignment by the combination of plug-and-play and reference frame independent MDI-QKD. Simulation results show that the secure communication distance can be up to 219 km in the finite-data case and the scheme has good potential for practical MDI-QKD systems.

  5. Comparing Theories of Reference-Dependent Choice

    ERIC Educational Resources Information Center

    Bhatia, Sudeep

    2017-01-01

    Preferences are influenced by the presence or absence of salient choice options, known as reference points. This behavioral tendency is traditionally attributed to the loss aversion and diminishing sensitivity assumptions of prospect theory. In contrast, some psychological research suggests that reference dependence is caused by attentional biases…

  6. Urban Mass Transportation.

    ERIC Educational Resources Information Center

    Mervine, K. E.

    This bibliography is part of a series of Environmental Resource Packets prepared under a grant from EXXON Education Foundation. The most authoritative and accessible references in the urban transportation field are reviewed. The authors, publisher, point of view, level, and summary are given for each reference. The references are categorized…

  7. [Comparative evaluation of the marginal accuracy of single crowns fabricated using computer aided design/computer aided manufacturing methods, self-curing resin and Luxatemp].

    PubMed

    Jianming, Yuan; Ying, Tang; Feng, Pan; Weixing, Xu

    2016-12-01

    This study aims to compare the marginal accuracy of single crowns fabricated using self-curing resin, Luxatemp, and computer aided design/computer aided manufacturing (CAD/CAM) methods in clinical application. A total of 30 working dies, which were obtained from 30 clinical teeth prepared with full crown as standard, were created and made into 30 self-curing resin, Luxatemp, and CAD/CAM single crowns. The restorations were seated on the working dies, and stereomicroscope was used to observe and measure the thickness of reference points. One-way analysis of variance, which was performed using SPSS 19.0 software package, compared the marginal gap widths of self-curing resin, Luxatemp, and CAD/CAM provisional crowns. The mean marginal gap widths of the fabricated self-curing resin, Luxatemp, and CAD/CAM were (179.06±33.24), (88.83±9.56), and (43.61±7.27) μm, respectively. A significant difference was observed among the three provisional crowns (P<0.05). The marginal gap width of CAD/CAM provisional crown was lower than that of the self-curing resin and Luxatemp. Thus, the CAD/CAM provisional crown offers a better remediation effect in clinical application.

  8. Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods

    NASA Astrophysics Data System (ADS)

    Koreň, Milan; Mokroš, Martin; Bucha, Tomáš

    2017-12-01

    This study compares the accuracies of diameter at breast height (DBH) estimations by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in the single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method only in single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.
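
    As an illustration only (a generic algebraic least-squares circle fit, not one of the five initial and refining methods compared in the study; input names are assumptions), a DBH estimate from a horizontal slice of stem points might be obtained as:

        import numpy as np

        def fit_circle_dbh(x, y):
            """Algebraic (Kasa) least-squares circle fit to stem cross-section points.

            Solves x^2 + y^2 = 2*a*x + 2*b*y + c for the centre (a, b) and returns
            the diameter 2 * sqrt(c + a^2 + b^2), in the units of the input coordinates.
            """
            A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
            rhs = x ** 2 + y ** 2
            (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
            return 2.0 * np.sqrt(c + a ** 2 + b ** 2)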

  9. Method of making macrocrystalline or single crystal semiconductor material

    NASA Technical Reports Server (NTRS)

    Shlichta, P. J. (Inventor); Holliday, R. J. (Inventor)

    1986-01-01

    A macrocrystalline or single crystal semiconductive material is formed from a primary substrate including a single crystal or several very large crystals of a relatively low melting material. This primary substrate is deposited on a base such as steel or ceramic, and it may be formed from such metals as zinc, cadmium, germanium, aluminum, tin, lead, copper, brass, magnesium silicide, or magnesium stannide. These materials generally have a melting point below about 1000 C and form on the base crystals the size of fingernails or greater. The primary substrate has an epitaxial relationship with a subsequently applied layer of material, and because of this epitaxial relationship, the material deposited on the primary substrate will have essentially the same crystal size as the crystals in the primary substrate. If required, successive layers are formed, each of a material which has an epitaxial relationship with the previously deposited layer, until a layer is formed which has an epitaxial relationship with the semiconductive material. This layer is referred to as the epitaxial substrate, and its crystals serve as sites for the growth of large crystals of semiconductive material. The primary substrate is passivated to remove or otherwise convert it into a stable or nonreactive state prior to deposition of the semiconductive material.

  10. Integration of imagery and cartographic data through a common map base

    NASA Technical Reports Server (NTRS)

    Clark, J.

    1983-01-01

    Several disparate data types are integrated by using control points as the basis for spatially registering the data to a map base. The data are reprojected to match the coordinates of the reference UTM (Universal Transverse Mercator) map projection, as expressed in lines and samples. Control point selection is the most critical aspect of integrating the Thematic Mapper Simulator MSS imagery with the cartographic data. It is noted that control points chosen from the imagery are subject to error from mislocated points, either points that did not correlate well to the reference map or points with minor pixel offsets because of interactive cursoring errors. Errors are also introduced in map control points when points are improperly located and digitized, leading to inaccurate latitude and longitude coordinates. Nonsystematic aircraft platform variations, such as yaw, pitch, and roll, affect the spatial fidelity of the imagery in comparison with the quadrangles. Features in adjacent flight paths do not always correspond properly owing to the systematic panorama effect and alteration of flightline direction, as well as platform variations.

  11. [Introduction of Quality Management System Audit in Medical Device Single Audit Program].

    PubMed

    Wen, Jing; Xiao, Jiangyi; Wang, Aijun

    2018-01-30

    The audit of the quality management system in the Medical Device Single Audit Program covers the requirements of several national regulatory authorities and therefore has very important reference value. This paper briefly describes the procedures and contents of this audit and discusses some implications for supervision and inspection in China, for reference by regulatory authorities and auditing organizations.

  12. Estimating Total Heliospheric Magnetic Flux from Single-Point in Situ Measurements

    NASA Technical Reports Server (NTRS)

    Owens, M. J.; Arge, C. N.; Crooker, N. U.; Schwardron, N. A.; Horbury, T. S.

    2008-01-01

    A fraction of the total photospheric magnetic flux opens to the heliosphere to form the interplanetary magnetic field carried by the solar wind. While this open flux is critical to our understanding of the generation and evolution of the solar magnetic field, direct measurements are generally limited to single-point measurements taken in situ by heliospheric spacecraft. An observed latitude invariance in the radial component of the magnetic field suggests that extrapolation from such single-point measurements to total heliospheric magnetic flux is possible. In this study we test this assumption using estimates of total heliospheric flux from well-separated heliospheric spacecraft and conclude that single-point measurements are indeed adequate proxies for the total heliospheric magnetic flux, though care must be taken when comparing flux estimates from data collected at different heliocentric distances.
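
    A minimal sketch of the extrapolation implied by the latitude invariance noted above (assuming the unsigned radial field |Br| measured at one point is representative of the whole sphere at that heliocentric distance; the example value is a placeholder):

        import numpy as np

        AU_M = 1.495978707e11   # astronomical unit in metres

        def open_flux_from_single_point(br_nT, r_au):
            """Total unsigned heliospheric open flux (Wb) from one in situ |Br| sample.

            If |Br| * r^2 is independent of latitude, the flux through the full
            sphere at heliocentric distance r is 4 * pi * r^2 * |Br|.
            """
            return 4.0 * np.pi * (r_au * AU_M) ** 2 * abs(br_nT) * 1e-9

        # Example: |Br| of about 3 nT at 1 AU gives roughly 8e14 Wb of open flux.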

  13. 3D Surface Reconstruction of Rills in a Spanish Olive Grove

    NASA Astrophysics Data System (ADS)

    Brings, Christine; Gronz, Oliver; Seeger, Manuel; Wirtz, Stefan; Taguas, Encarnación; Ries, Johannes B.

    2016-04-01

    The low-cost, user-friendly photogrammetric Structure from Motion (SfM) technique is used for 3D surface reconstruction and difference calculation of an 18-meter-long rill in South Spain (Andalusia, Puente Genil). The images were taken with a Canon HD video camera before and after a rill experiment in an olive grove. Recording with a video camera has a huge time advantage compared to a photo camera, and the method also guarantees sharp images with more than adequate overlap. For each model, approximately 20 minutes of video were taken. As SfM needs single images, the sharpest image was automatically selected from every interval of 8 frames. The sharpness was estimated using a derivative-based metric. Then, VisualSfM detects feature points in each image, searches for matching feature points in all image pairs, and recovers the camera and feature positions. Finally, by triangulation of camera positions and feature points, the software reconstructs a point cloud of the rill surface. From the point cloud, 3D surface models (meshes) are created, and via difference calculations between the pre- and post-experiment models, a visualization of the changes (erosion and accumulation areas) and a quantification of erosion volumes are possible. The calculated volumes are expressed in the spatial units of the models, so real-world values must be obtained via reference measurements. The results show that rills in olive groves are highly dynamic due to the lack of vegetation cover under the trees, so that the rill can incise down to the bedrock. Another reason for the high activity is the intensive use of machinery.
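
    As a hedged sketch of the frame-selection step (assumptions: OpenCV is available, the derivative-based metric is approximated here by the variance of the Laplacian, and the window of 8 frames follows the abstract; the authors' actual pipeline may differ):

        import cv2

        def sharpest_frames(video_path, window=8):
            """Keep the sharpest frame out of every `window` consecutive frames,
            scoring sharpness with the variance of the Laplacian, a simple
            derivative-based focus measure."""
            cap = cv2.VideoCapture(video_path)
            kept, buffer = [], []
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                buffer.append((cv2.Laplacian(gray, cv2.CV_64F).var(), frame))
                if len(buffer) == window:
                    kept.append(max(buffer, key=lambda item: item[0])[1])
                    buffer = []
            cap.release()
            return kept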

  14. Study of Individual Characteristic Abdominal Wall Thickness Based on Magnetic Anchored Surgical Instruments

    PubMed Central

    Dong, Ding-Hui; Liu, Wen-Yan; Feng, Hai-Bo; Fu, Yi-Li; Huang, Shi; Xiang, Jun-Xi; Lyu, Yi

    2015-01-01

    Background: Magnetic anchored surgical instruments (MASI), relying on magnetic force, can break through the limitations of the single port approach in dexterity. Individual characteristic abdominal wall thickness (ICAWT) deeply influences the magnetic force that determines the safety of MASI. The purpose of this study was to research the abdominal wall characteristics in the MASI application environment to find ICAWT, and then to construct a practical method to predict ICAWT, resulting in better safety and feasibility for MASI. Methods: For MASI, ICAWT refers to the thickness of the thickest point in the application environment. We determined ICAWT by finding the thickest point in computed tomography scans. We also investigated the traits of abdominal wall thickness to discover a factor that can be used to predict ICAWT. Results: The abdominal wall at point C in the third lumbar vertebra (L3) plane is the thickest among the chosen points. Fat layer thickness plays a more important role in abdominal wall thickness than muscle layer thickness. A "BMI-ICAWT" curve was obtained based on the abdominal wall thickness at point C in the L3 plane, with the expression f(x) = P1 × x² + P2 × x + P3, where P1 = 0.03916 (0.01776, 0.06056), P2 = 1.098 (0.03197, 2.164), P3 = −18.52 (−31.64, −5.412), R-square: 0.99. Conclusions: The abdominal wall thickness at point C in the L3 plane could be regarded as ICAWT. BMI could be a reliable predictor of ICAWT. In light of the "BMI-ICAWT" curve, ICAWT may be conveniently predicted from BMI, resulting in better safety and feasibility for MASI. PMID:26228215
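
    A small sketch of how the reported fit could be evaluated (coefficients are the central estimates quoted in the abstract; the output is in the thickness units used by the original study, which the abstract does not state):

        def predict_icawt(bmi):
            """Evaluate the reported quadratic BMI-ICAWT fit
            f(x) = P1*x^2 + P2*x + P3 using the published central estimates."""
            P1, P2, P3 = 0.03916, 1.098, -18.52
            return P1 * bmi ** 2 + P2 * bmi + P3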

  15. Time-resolved contrast-enhanced MR angiography of the thorax in adults with congenital heart disease.

    PubMed

    Mohrs, Oliver K; Petersen, Steffen E; Voigtlaender, Thomas; Peters, Jutta; Nowak, Bernd; Heinemann, Markus K; Kauczor, Hans-Ulrich

    2006-10-01

    The aim of this study was to evaluate the diagnostic value of time-resolved contrast-enhanced MR angiography in adults with congenital heart disease. Twenty patients with congenital heart disease (mean age, 38 +/- 14 years; range, 16-73 years) underwent contrast-enhanced turbo fast low-angle shot MR angiography. Thirty consecutive coronal 3D slabs with a frame rate of 1-second duration were acquired. The mask defined as the first data set was subtracted from subsequent images. Image quality was evaluated using a 5-point scale (from 1, not assessable, to 5, excellent image quality). Twelve diagnostic parameters yielded 1 point each in case of correct diagnosis (binary analysis into normal or abnormal) and were summarized into three categories: anatomy of the main thoracic vessels (maximum, 5 points), sequential cardiac anatomy (maximum, 5 points), and shunt detection (maximum, 2 points). The results were compared with a combined clinical reference comprising medical or surgical reports and other imaging studies. Diagnostic accuracies were calculated for each of the parameters as well as for the three categories. The mean image quality was 3.7 +/- 1.0. Using a binary approach, 220 (92%) of the 240 single diagnostic parameters could be analyzed. The percentage of maximum diagnostic points, the sensitivity, the specificity, and the positive and the negative predictive values were all 100% for the anatomy of the main thoracic vessels; 97%, 87%, 100%, 100%, and 96% for sequential cardiac anatomy; and 93%, 93%, 92%, 88%, and 96% for shunt detection. Time-resolved contrast-enhanced MR angiography provides, in one breath-hold, anatomic and qualitative functional information in adult patients with congenital heart disease. The high diagnostic accuracy allows the investigator to tailor subsequent specific MR sequences within the same session.

  16. The validity of multiphase DNS initialized on the basis of single-point statistics

    NASA Astrophysics Data System (ADS)

    Subramaniam, Shankar

    1999-11-01

    A study of the point-process statistical representation of a spray reveals that single-point statistical information contained in the droplet distribution function (ddf) is related to a sequence of single surrogate-droplet pdf's, which are in general different from the physical single-droplet pdf's. The results of this study have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the average number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also, the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets.

  17. Life satisfaction in women with epilepsy during and after pregnancy.

    PubMed

    Reiter, Simone Frizell; Bjørk, Marte Helene; Daltveit, Anne Kjersti; Veiby, Gyri; Kolstad, Eivind; Engelsen, Bernt A; Gilhus, Nils Erik

    2016-09-01

    The aim of this study was to investigate life satisfaction in women with epilepsy during and after pregnancy. The study was based on the Norwegian Mother and Child Cohort Study, including 102,265 women with and without epilepsy from the general population. Investigation took place at pregnancy weeks 15-19 and at 6 and 18 months postpartum. Women with epilepsy were compared with a reference group without epilepsy. The proportion of women with epilepsy was 0.6-0.7% at all three time points. Women with epilepsy reported lower life satisfaction and self-esteem both during and after pregnancy compared with the references. Single parenting correlated negatively with life satisfaction in epilepsy during the whole study period. Epilepsy was associated with lower levels of relationship satisfaction and higher levels of work strain during pregnancy and lower levels of self-efficacy and satisfactory somatic health 18 months postpartum. Adverse life events, such as divorce, were more common in women with epilepsy compared with the references, and fewer women with epilepsy had a paid job 18 months postpartum. Reduced life satisfaction associated with epilepsy during and after pregnancy showed that, even in a highly developed welfare society, women with epilepsy struggle. Mothers with epilepsy and their partners should be examined for emotional complaints and partnership satisfaction during and after pregnancy. Validated screening tools are available for such measures. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. A new scoring method for evaluating the performance of earthquake forecasts and predictions

    NASA Astrophysics Data System (ADS)

    Zhuang, J.

    2009-12-01

    This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. A fair scoring scheme should reward success in a way that is compatible with the risk taken. Suppose that we have a reference model, usually the Poisson model in ordinary cases or the Omori-Utsu formula when forecasting aftershocks, which gives a probability p0 that at least 1 event occurs in a given space-time-magnitude window. The forecaster, like a gambler, starts with a certain number of reputation points and bets 1 reputation point on "Yes" or "No" according to his forecast, or bets nothing if he makes an NA-prediction. If the forecaster bets 1 reputation point on "Yes" and loses, the number of his reputation points is reduced by 1; if his forecast is successful, he is rewarded (1-p0)/p0 reputation points. The quantity (1-p0)/p0 is the return (reward/bet) ratio for bets on "Yes". In this way, if the reference model is correct, the expected return that he gains from this bet is 0. This rule also applies to probability forecasts. Suppose that p is the occurrence probability of an earthquake given by the forecaster. We can regard the forecaster as splitting 1 reputation point by betting p on "Yes" and 1-p on "No". In this way, the forecaster's expected pay-off based on the reference model is still 0. From the viewpoints of both the reference model and the forecaster, the rule for reward and punishment is fair. This method is also extended to the continuous case of point process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
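
    A minimal sketch of the scoring rule for binary forecasts as described above (the continuous point-process extension is not shown; the return ratio for "No" bets follows the same zero-expected-return fairness argument and is an assumption here):

        def gambling_score(forecasts, outcomes, p0):
            """Cumulative change in reputation points over a sequence of bets.

            forecasts : list of 'yes', 'no', or None (NA-prediction, no bet)
            outcomes  : list of booleans, True if at least one event occurred
            p0        : list of reference-model probabilities for each window
            """
            score = 0.0
            for forecast, hit, p in zip(forecasts, outcomes, p0):
                if forecast is None:
                    continue
                bet_on_yes = forecast == 'yes'
                if bet_on_yes == hit:
                    # successful bet: reward is (1 - p)/p for "Yes"; by the same
                    # fairness rule a successful "No" bet returns p/(1 - p)
                    score += (1.0 - p) / p if bet_on_yes else p / (1.0 - p)
                else:
                    score -= 1.0        # a losing bet costs the 1 point staked
            return score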

  19. Priming us and them: automatic assimilation and contrast in group attitudes.

    PubMed

    Ledgerwood, Alison; Chaiken, Shelly

    2007-12-01

    Social judgment theory holds that a person's own attitudes function as reference points, influencing the perception of others' attitudes. The authors argue that attitudes themselves are influenced by reference points, namely, the presumed attitudes of others. Whereas exposure to a group that acts as a contextual reference should cause attitude assimilation, exposure to a group that acts as a comparative reference should cause attitude contrast. In Study 1, participants subliminally primed with their political ingroup or outgroup endorsed more extreme political positions than did controls. Study 2 demonstrated that prime types known to uniquely facilitate assimilation and contrast enhanced the polarization effect in the ingroup and outgroup conditions, respectively. Study 3 established an important boundary condition for whether group salience produces attitude assimilation or contrast by showing that perceived closeness to the elderly moderates the direction and strength of the group priming effect. The results suggest that the transition from assimilation to contrast occurs when a group ceases to function as a context and becomes a comparison point. Implications for social judgment theory, assimilation and contrast research, and conflict escalation are discussed. (c) 2007 APA, all rights reserved.

  20. Levels at gaging stations

    USGS Publications Warehouse

    Kenney, Terry A.

    2010-01-01

    Operational procedures at U.S. Geological Survey gaging stations include periodic leveling checks to ensure that gages are accurately set to the established gage datum. Differential leveling techniques are used to determine elevations for reference marks, reference points, all gages, and the water surface. The techniques presented in this manual provide guidance on instruments and methods that ensure gaging-station levels are run to both a high precision and accuracy. Levels are run at gaging stations whenever differences in gage readings are unresolved, stations may have been damaged, or according to a pre-determined frequency. Engineer's levels, both optical levels and electronic digital levels, are commonly used for gaging-station levels. Collimation tests should be run at least once a week for any week that levels are run, and the absolute value of the collimation error cannot exceed 0.003 foot/100 feet (ft). An acceptable set of gaging-station levels consists of a minimum of two foresights, each from a different instrument height, taken on at least two independent reference marks, all reference points, all gages, and the water surface. The initial instrument height is determined from another independent reference mark, known as the origin, or base reference mark. The absolute value of the closure error of a leveling circuit must be less than or equal to ft, where n is the total number of instrument setups, and may not exceed |0.015| ft regardless of the number of instrument setups. Closure error for a leveling circuit is distributed by instrument setup and adjusted elevations are determined. Side shots in a level circuit are assessed by examining the differences between the adjusted first and second elevations for each objective point in the circuit. The absolute value of these differences must be less than or equal to 0.005 ft. Final elevations for objective points are determined by averaging the valid adjusted first and second elevations. If final elevations indicate that the reference gage is off by |0.015| ft or more, it must be reset.
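
    As a hedged sketch of the closure-error bookkeeping described above (the manual prescribes the exact adjustment procedure; here the closure is simply distributed in proportion to instrument setup number, and first and second elevations are averaged when they agree within the 0.005 ft criterion; all names are illustrative):

        def adjust_circuit(raw_elevations, closure_error, setup_index, n_setups):
            """Distribute a level-circuit closure error by instrument setup.

            raw_elevations : dict of point name -> unadjusted elevation (ft)
            closure_error  : signed circuit misclosure (ft)
            setup_index    : dict of point name -> setup number at which observed
            n_setups       : total number of instrument setups in the circuit
            """
            per_setup = -closure_error / n_setups
            return {pt: z + per_setup * setup_index[pt]
                    for pt, z in raw_elevations.items()}

        def final_elevation(first, second, tol=0.005):
            """Average adjusted first and second elevations of an objective point,
            provided they satisfy the 0.005 ft agreement criterion."""
            if abs(first - second) > tol:
                raise ValueError("difference exceeds 0.005 ft; levels must be rerun")
            return (first + second) / 2.0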

Top