F-16 Task Analysis Criterion-Referenced Objective and Objectives Hierarchy Report. Volume 4
1981-03-01
[Fragment of a task listing] Initiation cues: engine flameout. Systems presenting cues: aircraft fuel, engine. STANDARD: Authority: TACR 60-2; performance precision: TD in first 1/3 of... Initiation cues: on short final. Systems presenting cues: N/A. STANDARD: Authority: 60-2; performance precision: +/- .5 AOA; TD zone 150-1000; computational accuracy: N/A. TASK NO.: 1.9.4. BEHAVIOR: Perform short field landing.
ERIC Educational Resources Information Center
Couch, Richard W.
Precision teaching (PT) is an approach to the science of human behavior that focuses on precise monitoring of carefully defined behaviors in an attempt to construct an environmental analysis of that behavior and its controlling variables. A variety of subjects have been used with PT, ranging in academic objectives from beginning reading to college…
Analysis and Test Support for Phillips Laboratory Precision Structures
1998-11-01
Air Force Research Laboratory (AFRL), Phillips Research Site. Task objectives centered around analysis and structural dynamic test support on experiments within the Space Vehicles Directorate at Kirtland Air Force Base. These efforts help... "Analysis and Test Support for Phillips Laboratory Precision Structures." Mr. James Goodding of CSA Engineering was the principal investigator for this task.
NASA Astrophysics Data System (ADS)
Li, Junye; Hu, Jinglei; Wang, Binyu; Sheng, Liang; Zhang, Xinming
2018-03-01
To investigate the effect of abrasive flow polishing on variable-diameter pipe parts, high-precision dispensing needles were taken as the research object and the polishing process was simulated numerically. The distributions of dynamic pressure and turbulence viscosity of the abrasive flow field inside the high-precision dispensing needle were analyzed under different volume fraction conditions. Comparative analysis showed that abrasive-grain polishing of high-precision dispensing needles is effective, and that controlling the volume fraction of silicon carbide changes the viscosity characteristics of the abrasive flow during polishing, so that the polishing quality of the abrasive grains can be controlled.
Direction information in multiple object tracking is limited by a graded resource.
Horowitz, Todd S; Cohen, Michael A
2010-10-01
Is multiple object tracking (MOT) limited by a fixed set of structures (slots), a limited but divisible resource, or both? Here, we answer this question by measuring the precision of the direction representation for tracked targets. The signature of a limited resource is a decrease in precision as the square root of the tracking load. The signature of fixed slots is a fixed precision. Hybrid models predict a rapid decrease to asymptotic precision. In two experiments, observers tracked moving disks and reported target motion direction by adjusting a probe arrow. We derived the precision of representation of correctly tracked targets using a mixture distribution analysis. Precision declined with target load according to the square-root law up to six targets. This finding is inconsistent with both pure and hybrid slot models. Instead, directional information in MOT appears to be limited by a continuously divisible resource.
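The square-root law invoked above has a compact form; as a schematic restatement (symbols assumed here, not taken from the paper), with sigma(n) the standard deviation of the reported direction at tracking load n:

```latex
% Divisible-resource model: report noise grows as the square root of load n
\sigma(n) = \sigma_{1}\,\sqrt{n}
\qquad\Longleftrightarrow\qquad
P(n) \;\equiv\; \frac{1}{\sigma(n)} \;=\; \frac{P_{1}}{\sqrt{n}}
% Fixed-slot model: \sigma(n) = \sigma_{1}, constant up to the slot limit;
% hybrid models: square-root decline that flattens at an asymptote.
```

Under this reading, the continued square-root decline out to six targets is what rules out a fixed asymptotic precision.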
The neural basis of precise visual short-term memory for complex recognisable objects.
Veldsman, Michele; Mitchell, Daniel J; Cusack, Rhodri
2017-10-01
Recent evidence suggests that visual short-term memory (VSTM) capacity estimated using simple objects, such as colours and oriented bars, may not generalise well to more naturalistic stimuli. More visual detail can be stored in VSTM when complex, recognisable objects are maintained compared to simple objects. It is not yet known if it is recognisability that enhances memory precision, nor whether maintenance of recognisable objects is achieved with the same network of brain regions supporting maintenance of simple objects. We used a novel stimulus generation method to parametrically warp photographic images along a continuum, allowing separate estimation of the precision of memory representations and the number of items retained. The stimulus generation method was also designed to create unrecognisable, though perceptually matched, stimuli, to investigate the impact of recognisability on VSTM. We adapted the widely-used change detection and continuous report paradigms for use with complex, photographic images. Across three functional magnetic resonance imaging (fMRI) experiments, we demonstrated greater precision for recognisable objects in VSTM compared to unrecognisable objects. This clear behavioural advantage was not the result of recruitment of additional brain regions, or of stronger mean activity within the core network. Representational similarity analysis revealed greater variability across item repetitions in the representations of recognisable, compared to unrecognisable complex objects. We therefore propose that a richer range of neural representations support VSTM for complex recognisable objects.
NASA Technical Reports Server (NTRS)
1976-01-01
Data analysis and supporting research in connection with the following objectives are discussed: (1) provide a precise and accurate geometric description of the earth's surface, (2) provide a precise and accurate mathematical description of the earth's gravitational field, and (3) determine time variations of the geometry of the ocean surface, the solid earth, the gravity field and other geophysical parameters.
THE APPLICATION OF CYBERNETICS IN PEDAGOGY.
ERIC Educational Resources Information Center
ATUTOV, P.R.
The application of cybernetics to pedagogy can create a precise science of instruction and education through the time-consuming but inevitable transition from identification of qualitative relationships among pedagogical objects to quantitative analysis of these objects. The theoretical utility of mathematical models and formulae for explanatory…
The economic case for precision medicine.
Gavan, Sean P; Thompson, Alexander J; Payne, Katherine
2018-01-01
Introduction: The advancement of precision medicine into routine clinical practice has been highlighted as an agenda for national and international health care policy. A principal barrier to this advancement is in meeting the requirements of the payer or reimbursement agency for health care. This special report aims to explain the economic case for precision medicine, by accounting for the explicit objectives defined by decision-makers responsible for the allocation of limited health care resources. Areas covered: The framework of cost-effectiveness analysis, a method of economic evaluation, is used to describe how precision medicine can, in theory, exploit identifiable patient-level heterogeneity to improve population health outcomes and the relative cost-effectiveness of health care. Four case studies are used to illustrate potential challenges when demonstrating the economic case for a precision medicine in practice. Expert commentary: The economic case for a precision medicine should be considered at an early stage during its research and development phase. Clinical and economic evidence can be generated iteratively and should be in alignment with the objectives and requirements of decision-makers. Programmes of further research, to demonstrate the economic case of a precision medicine, can be prioritized by the extent that they reduce the uncertainty expressed by decision-makers.
Multiple-objective optimization in precision laser cutting of different thermoplastics
NASA Astrophysics Data System (ADS)
Tamrin, K. F.; Nukman, Y.; Choudhury, I. A.; Shirley, S.
2015-04-01
Thermoplastics are increasingly being used in the biomedical, automotive and electronics industries due to their excellent physical and chemical properties. Because the process is localized and non-contact, laser cutting can produce a precise cut with a small heat-affected zone (HAZ). Precision laser cutting of various materials is important in high-volume manufacturing processes to minimize operational cost, reduce error and improve product quality. This study uses grey relational analysis to determine a single optimized set of cutting parameters for three different thermoplastics. The optimized set of processing parameters is determined from the highest relational grade and was found at low laser power (200 W), high cutting speed (0.4 m/min) and low compressed air pressure (2.5 bar). This result matches the objective set in the present study. Analysis of variance (ANOVA) is then carried out to ascertain the relative influence of process parameters on the cutting characteristics. It was found that laser power has the dominant effect on HAZ for all thermoplastics.
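Grey relational analysis itself follows a standard recipe: normalize the responses, compute grey relational coefficients, and average them into a grade per parameter setting. A minimal sketch for smaller-the-better responses such as HAZ width, with illustrative numbers rather than the study's data:

```python
import numpy as np

def grey_relational_grade(responses, zeta=0.5):
    """Grey relational analysis for a (settings x criteria) response matrix.

    responses: one row per cutting-parameter setting, one column per
    characteristic (e.g., HAZ width for each thermoplastic).
    zeta: distinguishing coefficient, conventionally 0.5.
    Smaller-the-better normalization is assumed, as appropriate for HAZ.
    """
    x = np.asarray(responses, dtype=float)
    # Normalize each criterion to [0, 1], 1 = best (smallest) response.
    norm = (x.max(axis=0) - x) / (x.max(axis=0) - x.min(axis=0))
    delta = 1.0 - norm  # deviation from the ideal sequence
    # Grey relational coefficients, then equal-weight grade per setting.
    gamma = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return gamma.mean(axis=1)

# Hypothetical HAZ widths (mm): 3 parameter settings x 3 thermoplastics.
haz = [[0.30, 0.25, 0.40],
       [0.18, 0.20, 0.22],
       [0.35, 0.33, 0.45]]
print(grey_relational_grade(haz))  # highest grade = best compromise setting
```

The setting with the highest grade is the single compromise optimum across all materials, which is how one set of parameters can serve three thermoplastics at once.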
Selective Attention to Auditory Memory Neurally Enhances Perceptual Precision.
Lim, Sung-Joo; Wöstmann, Malte; Obleser, Jonas
2015-12-09
Selective attention to a task-relevant stimulus facilitates encoding of that stimulus into a working memory representation. It is less clear whether selective attention also improves the precision of a stimulus already represented in memory. Here, we investigate the behavioral and neural dynamics of selective attention to representations in auditory working memory (i.e., auditory objects) using psychophysical modeling and model-based analysis of electroencephalographic signals. Human listeners performed a syllable pitch discrimination task where two syllables served as to-be-encoded auditory objects. Valid (vs neutral) retroactive cues were presented during retention to allow listeners to selectively attend to the to-be-probed auditory object in memory. Behaviorally, listeners represented auditory objects in memory more precisely (expressed by steeper slopes of a psychometric curve) and made faster perceptual decisions when valid compared to neutral retrocues were presented. Neurally, valid compared to neutral retrocues elicited a larger frontocentral sustained negativity in the evoked potential as well as enhanced parietal alpha/low-beta oscillatory power (9-18 Hz) during memory retention. Critically, individual magnitudes of alpha oscillatory power (7-11 Hz) modulation predicted the degree to which valid retrocues benefitted individuals' behavior. Our results indicate that selective attention to a specific object in auditory memory does benefit human performance not by simply reducing memory load, but by actively engaging complementary neural resources to sharpen the precision of the task-relevant object in memory. Can selective attention improve the representational precision with which objects are held in memory? And if so, what are the neural mechanisms that support such improvement? These issues have rarely been examined within the auditory modality, in which acoustic signals change and vanish on a millisecond time scale. Introducing a new auditory memory paradigm and using model-based electroencephalography analyses in humans, we thus bridge this gap and reveal behavioral and neural signatures of increased, attention-mediated working memory precision. We further show that the extent of alpha power modulation predicts the degree to which individuals' memory performance benefits from selective attention.
3D reconstruction optimization using imagery captured by unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Bassie, Abby L.; Meacham, Sean; Young, David; Turnage, Gray; Moorhead, Robert J.
2017-05-01
Because unmanned air vehicles (UAVs) are emerging as an indispensable image acquisition platform in precision agriculture, it is vitally important that researchers understand how to optimize UAV camera payloads for analysis of surveyed areas. In this study, imagery captured by a Nikon RGB camera attached to a Precision Hawk Lancaster was used to survey an agricultural field from six different altitudes ranging from 45.72 m (150 ft.) to 121.92 m (400 ft.). After collecting imagery, two different software packages (MeshLab and AgiSoft) were used to measure predetermined reference objects within six three-dimensional (3-D) point clouds (one per altitude scenario). In-silico measurements were then compared to actual reference object measurements, as recorded with a tape measure. Deviations of in-silico measurements from actual measurements were recorded as Δx, Δy, and Δz. The average measurement deviation in each coordinate direction was then calculated for each of the six flight scenarios. Results from MeshLab vs. AgiSoft offered insight into the effectiveness of GPS-defined point cloud scaling in comparison to user-defined point cloud scaling. In three of the six flight scenarios flown, MeshLab's 3D imaging software (user-defined scale) was able to measure object dimensions from 50.8 to 76.2 cm (20-30 inches) with greater than 93% accuracy. The largest average deviation in any flight scenario from actual measurements was 14.77 cm (5.82 in.). Analysis of the point clouds in AgiSoft (GPS-defined scale) yielded even smaller Δx, Δy, and Δz than the MeshLab measurements in over 75% of the flight scenarios. The precision of these results is satisfactory for a wide variety of precision agriculture applications focused on differentiating and identifying objects using remote imagery.
Object detection in cinematographic video sequences for automatic indexing
NASA Astrophysics Data System (ADS)
Stauder, Jurgen; Chupeau, Bertrand; Oisel, Lionel
2003-06-01
This paper presents an object detection framework applied to cinematographic post-processing of video sequences. Post-processing is done after production and before editing. At the beginning of each shot of a video, a slate (also called a clapperboard) is shown. The slate contains notably an electronic audio timecode that is necessary for audio-visual synchronization. This paper presents an object detection framework to detect slates in video sequences for automatic indexing and post-processing. It is based on five steps. The first two steps aim to reduce drastically the video data to be analyzed. They ensure a high recall rate but have low precision. The first step detects images at the beginning of a shot that possibly show a slate, while the second step searches these images for candidate regions with a color distribution similar to slates. The objective is to not miss any slate while eliminating long parts of video without slate appearance. The third and fourth steps use statistical classification and pattern matching to detect and precisely locate slates in candidate regions. These steps ensure a high recall rate and high precision. The objective is to detect slates with very few false alarms to minimize interactive corrections. In a last step, electronic timecodes are read from slates to automate audio-visual synchronization. The presented slate detector has a recall rate of 89% and a precision of 97.5%. By temporal integration, much more than 89% of shots in dailies are detected. By timecode coherence analysis, the precision can be raised too. Issues for future work are to accelerate the system to be faster than real time and to extend the framework to several slate types.
NASA Astrophysics Data System (ADS)
Ding, Peng; Zhang, Ye; Deng, Wei-Jian; Jia, Ping; Kuijper, Arjan
2018-07-01
Detection of objects from satellite optical remote sensing images is very important for many commercial and governmental applications. With the development of deep convolutional neural networks (deep CNNs), the field of object detection has seen tremendous advances. Currently, objects in satellite remote sensing images can be detected using deep CNNs. In general, optical remote sensing images contain many dense and small objects, and the original Faster R-CNN (regional CNN) framework does not yield suitably high precision on them. Therefore, after careful analysis we adopt dense convolutional networks, a multi-scale representation and various combinations of improvement schemes to enhance the structure of the base VGG16-Net and improve precision. We propose an approach to reduce the test time (detection time) and memory requirements. To validate the effectiveness of our approach, we perform experiments using satellite remote sensing image datasets of aircraft and automobiles. The results show that the improved network structure can detect objects in satellite optical remote sensing images more accurately and efficiently.
Variability Analysis: Detection and Classification
NASA Astrophysics Data System (ADS)
Eyer, L.
2005-01-01
The Gaia mission will offer an exceptional opportunity to perform variability studies. The data homogeneity, its optimised photometric systems, composed of 11 medium and 4-5 broad bands, the high photometric precision in G band of one milli-mag for V = 13-15, the radial velocity measurements and the exquisite astrometric precision for one billion stars will permit a detailed description of variable objects like stars, quasars and asteroids. However the time sampling and the total number of measurements change from one object to another because of the satellite scanning law. The data analysis is a challenge because of the huge amount of data, the complexity of the observed objects and the peculiarities of the satellite, and needs thorough preparation. Experience can be gained by the study of past and present survey analyses and results, and Gaia should be put in perspective with the future large scale surveys, like PanSTARRS or LSST. We present the activities of the Variable Star Working Group and a general plan to digest this unprecedented data set, focusing here on the photometry.
Spatial analysis of NDVI readings with difference sampling density
USDA-ARS?s Scientific Manuscript database
Advanced remote sensing technologies provide research an innovative way of collecting spatial data for use in precision agriculture. Sensor information and spatial analysis together allow for a complete understanding of the spatial complexity of a field and its crop. The objective of the study was...
2011-01-01
Background Orthopaedic research projects focusing on small displacements in a small measurement volume require a radiation-free, three-dimensional motion analysis system. A stereophotogrammetrical motion analysis system can track wireless, small, light-weight markers attached to the objects, keeping the disturbance of the measured objects through marker tracking at a minimum. The purpose of this study was to develop and evaluate a non-position-fixed compact motion analysis system configured for a small measurement volume and able to zoom while tracking small round flat markers with respect to a fiducial marker, which was used for camera pose estimation. Methods The system consisted of two web cameras and the fiducial marker placed in front of them. The markers to track were black circles on a white background. The algorithm to detect the centre of the projected circle on the image plane was described and applied. In order to evaluate the accuracy (mean measurement error) and precision (standard deviation of the measurement error) of the optical measurement system, two experiments were performed: 1) inter-marker distance measurement and 2) marker displacement measurement. Results The first experiment, measuring 10 mm distances, showed a total accuracy of 0.0086 mm and a precision of ± 0.1002 mm. In the second experiment, translations from 0.5 mm to 5 mm were measured with a total accuracy of 0.0038 mm and a precision of ± 0.0461 mm. Rotations of 2.25° were measured with an overall accuracy of 0.058° and a precision of ± 0.172°. Conclusions The description of the non-proprietary measurement device with very good levels of accuracy and precision may provide opportunities for new, cost-effective applications of stereophotogrammetrical analysis in musculoskeletal research projects focusing on kinematics of small displacements in a small measurement volume. PMID:21284867
Busse, Harald; Thomas, Michael; Seiwerts, Matthias; Moche, Michael; Busse, Martin W; von Salis-Soglio, Georg; Kahn, Thomas
2008-01-01
To implement a PC-based morphometric analysis platform and to evaluate the feasibility and precision of MRI measurements of glenohumeral translation. Using a vertically open 0.5T MRI scanner, the shoulders of 10 healthy subjects were scanned in apprehension (AP) and in neutral position (NP), respectively. Surface models of the humeral head (HH) and the glenoid cavity (GC) were created from segmented MR images by three readers. Glenohumeral translation was determined by the projection point of the manually fitted HH center on the GC plane defined by the two main principal axes of the GC model. Positional precision, given as mean (extreme value at 95% confidence level), was 0.9 (1.8) mm for the HH center and 0.7 (1.6) mm for the GC centroid; angular GC precision was 1.3 degrees (2.3 degrees) for the normal and about 4 degrees (7 degrees) for the anterior and superior coordinate axes. The two-dimensional (2D) precision of the HH projection point was 1.1 (2.2) mm. A significant HH translation between AP and NP was found. Despite the limited quality of the underlying model data, our PC-based analysis platform allows a precise morphometric analysis of the glenohumeral joint. The software is easily extendable and may potentially be used for an objective evaluation of therapeutic measures.
Precise and Efficient Static Array Bound Checking for Large Embedded C Programs
NASA Technical Reports Server (NTRS)
Venet, Arnaud
2004-01-01
In this paper we describe the design and implementation of a static array-bound checker for a family of embedded programs: the flight control software of recent Mars missions. These codes are large (up to 250 KLOC), pointer intensive, heavily multithreaded and written in an object-oriented style, which makes their analysis very challenging. We designed a tool called C Global Surveyor (CGS) that can analyze the largest code in a couple of hours with a precision of 80%. The scalability and precision of the analyzer are achieved by using an incremental framework in which a pointer analysis and a numerical analysis of array indices mutually refine each other. CGS has been designed so that it can distribute the analysis over several processors in a cluster of machines. To the best of our knowledge this is the first distributed implementation of static analysis algorithms. Throughout the paper we will discuss the scalability setbacks that we encountered during the construction of the tool and their impact on the initial design decisions.
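CGS's incremental pointer/numerical refinement is specific to the tool, but the underlying interval reasoning can be illustrated in miniature. A toy sketch (not CGS's algorithm) of how an abstract index interval is checked against array bounds, and how imprecision produces the "possible" warnings behind a precision figure like the 80% quoted above:

```python
# Toy interval-domain bound check in the spirit of abstract interpretation.
# Illustration of the general technique only, not the CGS implementation.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: int
    hi: int
    def __add__(self, other: "Interval") -> "Interval":
        # Abstract transfer function for i + j over intervals.
        return Interval(self.lo + other.lo, self.hi + other.hi)

def check_access(index: Interval, array_len: int) -> str:
    if index.lo >= 0 and index.hi < array_len:
        return "safe"
    if index.hi < 0 or index.lo >= array_len:
        return "definite out-of-bounds"
    # Cannot decide: the analyzer emits a warning; the share of accesses
    # resolved definitively is what a "precision of 80%" refers to.
    return "possible out-of-bounds"

# Index known to lie in [0, 9] after a loop, buffer of length 8:
print(check_access(Interval(0, 9), 8))  # -> possible out-of-bounds
```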
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holladay, S.K.; Anderson, H.M.; Benson, S.B.
Quality assurance (QA) objectives for Phase 2 were that (1) scientific data generated would withstand scientific and legal scrutiny; (2) data would be gathered using appropriate procedures for sample collection, sample handling and security, chain of custody, laboratory analyses, and data reporting; (3) data would be of known precision and accuracy; and (4) data would meet data quality objectives defined in the Phase 2 Sampling and Analysis Plan. A review of the QA systems and quality control (QC) data associated with the Phase 2 investigation is presented to evaluate whether the data were of sufficient quality to satisfy Phase 2 objectives. The data quality indicators of precision, accuracy, representativeness, comparability, completeness, and sensitivity were evaluated to determine any limitations associated with the data. Data were flagged with qualifiers that were associated with appropriate reason codes and documentation relating the qualifiers to the reviewer of the data. These qualifiers were then consolidated into an overall final qualifier to represent the quality of the data to the end user. In summary, reproducible, precise, and accurate measurements consistent with CRRI objectives and the limitations of the sampling and analytical procedures used were obtained for the data collected in support of the Phase 2 Remedial Investigation.
Optimization of deformation monitoring networks using finite element strain analysis
NASA Astrophysics Data System (ADS)
Alizadeh-Khameneh, M. Amin; Eshagh, Mehdi; Jensen, Anna B. O.
2018-04-01
An optimal design of a geodetic network can fulfill the requested precision and reliability of the network, and decrease the expenses of its execution by removing unnecessary observations. The role of an optimal design is highlighted in deformation monitoring networks due to the repeatability of these networks. The core design problem is how to define precision and reliability criteria. This paper proposes a solution, where the precision criterion is defined based on the precision of deformation parameters, i.e., the precision of strain and differential rotations. A strain analysis can be performed to obtain some information about the possible deformation of a deformable object. In this study, we split an area into a number of three-dimensional finite elements with the help of the Delaunay triangulation and performed the strain analysis on each element. According to the obtained precision of deformation parameters in each element, the precision criterion of displacement detection at each network point is then determined. The developed criterion is implemented to optimize the observations from the Global Positioning System (GPS) in the Skåne monitoring network in Sweden. The network was established in 1989 and straddles the Tornquist zone, which is one of the most active faults in southern Sweden. The numerical results show that 17 out of all 21 possible GPS baseline observations are sufficient to detect a minimum displacement of 3 mm at each network point.
Investigating the Conservation of Mechanical Energy Using Video Analysis: Four Cases
ERIC Educational Resources Information Center
Bryan, J. A.
2010-01-01
Inexpensive video analysis technology now enables students to make precise measurements of an object's position at incremental times during its motion. Such capability now allows users to "examine", rather than simply "assume", energy conservation in a variety of situations commonly discussed in introductory physics courses. This article describes…
NASA Astrophysics Data System (ADS)
Wang, Zhen-yu; Yu, Jian-cheng; Zhang, Ai-qun; Wang, Ya-xing; Zhao, Wen-tao
2017-12-01
Combining high-precision numerical analysis methods with optimization algorithms to make a systematic exploration of a design space has become an important topic in modern design methods. During the design process of an underwater glider's flying-wing structure, a surrogate model is introduced to decrease the computation time for a high-precision analysis. By these means, the contradiction between precision and efficiency is solved effectively. Based on parametric geometry modeling, mesh generation and computational fluid dynamics analysis, a surrogate model is constructed by adopting design of experiment (DOE) theory to solve the multi-objective design optimization problem of the underwater glider. The procedure of surrogate model construction is presented, and the Gaussian kernel function is specifically discussed. The Particle Swarm Optimization (PSO) algorithm is applied to the hydrodynamic design optimization. The hydrodynamic performance of the optimized flying-wing structure underwater glider increases by 9.1%.
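The surrogate step can be illustrated with a Gaussian (RBF) kernel fit over DOE samples; a minimal sketch with a toy analytic stand-in for the expensive CFD response (the design variables, kernel width gamma, and response function are all assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf_surrogate(X, y, gamma=1.0):
    """Fit a Gaussian-kernel surrogate y_hat(x) = sum_i w_i exp(-gamma*||x - x_i||^2).
    X: (n, d) DOE sample points; y: (n,) expensive responses (faked here)."""
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    w = np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)  # jitter for stability
    return lambda x: np.exp(-gamma * ((X - x) ** 2).sum(-1)) @ w

# Toy 2D design space (e.g., wing sweep and span, normalized to [0, 1]).
X = rng.random((30, 2))                     # DOE sample plan
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2      # stand-in for a CFD drag response
model = rbf_surrogate(X, y, gamma=5.0)

x_test = np.array([0.4, 0.6])
print(model(x_test), np.sin(3 * 0.4) + 0.36)  # surrogate vs. "true" value
```

An optimizer such as PSO then queries `model` thousands of times at negligible cost, which is the efficiency gain the abstract refers to.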
Liu, Shiau-Hua; Dosher, Barbara Anne; Lu, Zhong-Lin
2009-06-01
Multiple attributes of a single object are often processed more easily than attributes of different objects, a phenomenon associated with object attention. Here we investigate the influence of two factors, judgment frames and judgment precision, on dual-object report deficits as an index of object attention. [Han, S., Dosher, B., & Lu, Z.-L. (2003). Object attention revisited: Identifying mechanisms and boundary conditions. Psychological Science, 14, 598-604] predicted that consistency of the frame for judgments about two separate objects could reduce or eliminate the expression of object attention limitations. The current studies examine the effects of judgment frames and of task precision in orientation identification and find that dual-object report deficits within one feature are indeed affected modestly by the congruency of the judgments and more substantially by the required precision of judgments. The observed dual-object deficits affected contrast thresholds for incongruent frame conditions and for high-precision judgments and reduced psychometric asymptotes. These dual-object deficits reflect a combined effect of multiplicative noise and external noise exclusion in dual-object conditions, both related to the effects of attention on the tuning of perceptual templates. These results have implications for the modification of object attention theory and for understanding limitations on concurrent tasks.
Determining characteristics of artificial near-Earth objects using observability analysis
NASA Astrophysics Data System (ADS)
Friedman, Alex M.; Frueh, Carolin
2018-03-01
Observability analysis is a method for determining whether a chosen state of a system can be determined from the output or measurements. Knowledge of state information availability resulting from observability analysis leads to improved sensor tasking for observation of orbital debris and better control of active spacecraft. This research performs numerical observability analysis of artificial near-Earth objects. Analysis of linearization methods and state transition matrices is performed to determine the viability of applying linear observability methods to the nonlinear orbit problem. Furthermore, pre-whitening is implemented to reformulate classical observability analysis. In addition, the state in observability analysis is typically composed of position and velocity; however, including object characteristics beyond position and velocity can be crucial for precise orbit propagation. For example, solar radiation pressure has a significant impact on the orbit of high area-to-mass ratio objects in geosynchronous orbit. Therefore, determining the time required for solar radiation pressure parameters to become observable is important for understanding debris objects. In order to compare observability analysis results with and without measurement noise and an extended state, quantitative measures of observability are investigated and implemented.
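For a linearized system, one standard numerical check of this kind stacks the measurement partials through successive state transition matrices and inspects the rank and conditioning of the result. A toy sketch with assumed 1D dynamics that include a solar-radiation-pressure-like acceleration parameter (illustrative only, not the paper's formulation):

```python
import numpy as np

def observability_matrix(H, Phi_list):
    """Stack H, H*Phi_1, H*Phi_2*Phi_1, ... into the observability matrix."""
    blocks = [H]
    Phi_acc = np.eye(H.shape[1])
    for Phi in Phi_list:
        Phi_acc = Phi @ Phi_acc
        blocks.append(H @ Phi_acc)
    return np.vstack(blocks)

# 1D "orbit" toy: state = [position, velocity, SRP-like acceleration a];
# only position is measured at each epoch, dt seconds apart.
dt = 10.0
Phi = np.array([[1.0, dt, 0.5 * dt**2],
                [0.0, 1.0, dt],
                [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0]])

O = observability_matrix(H, [Phi] * 5)
s = np.linalg.svd(O, compute_uv=False)
print("rank:", np.linalg.matrix_rank(O), "condition:", s[0] / s[-1])
# Full rank but large condition number signals weakly observable states,
# e.g. an SRP parameter that needs a long observation arc to resolve.
```

The time-to-observability question in the abstract corresponds to watching how the conditioning improves as more epochs are stacked.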
An automated field phenotyping pipeline for application in grapevine research.
Kicherer, Anna; Herzog, Katja; Pflanz, Michael; Wieland, Markus; Rüger, Philipp; Kecke, Steffen; Kuhlmann, Heiner; Töpfer, Reinhard
2015-02-26
Due to its perennial nature and size, the acquisition of phenotypic data in grapevine research is almost exclusively restricted to the field and done by visual estimation. This kind of evaluation procedure is limited by time, cost and the subjectivity of records. As a consequence, objectivity, automation and more precision of phenotypic data evaluation are needed to increase the number of samples, manage grapevine repositories, enable genetic research of new phenotypic traits and, therefore, increase the efficiency in plant research. In the present study, an automated field phenotyping pipeline was set up and applied in a plot of genetic resources. The application of the PHENObot allows image acquisition from at least 250 individual grapevines per hour directly in the field without user interaction. Data management is handled by a database (IMAGEdata). The automatic image analysis tool BIVcolor (Berries in Vineyards-color) permitted the collection of precise phenotypic data of two important fruit traits, berry size and color, within a large set of plants. The application of the PHENObot represents an automated tool for high-throughput sampling of image data in the field. The automated analysis of these images facilitates the generation of objective and precise phenotypic data on a larger scale.
Automatic Topography Using High Precision Digital Moire Methods
NASA Astrophysics Data System (ADS)
Yatagai, T.; Idesawa, M.; Saito, S.
1983-07-01
Three types of moire topographic methods using digital techniques are proposed. Deformed gratings obtained by projecting a reference grating onto an object under test are subjected to digital analysis. The electronic analysis procedures of deformed gratings described here enable us to distinguish between depression and elevation of the object, so that automatic measurement of 3-D shapes and automatic moire fringe interpolation are performed. Based on the digital moire methods, we have developed a practical measurement system, with a linear photodiode array on a micro-stage as a scanning image sensor. Examples of fringe analysis in medical applications are presented.
D'Agostino, M F; Sanz, J; Martínez-Castro, I; Giuffrè, A M; Sicari, V; Soria, A C
2014-07-01
Statistical analysis has been used for the first time to evaluate the dispersion of quantitative data in the solid-phase microextraction (SPME) followed by gas chromatography-mass spectrometry (GC-MS) analysis of blackberry (Rubus ulmifolius Schott) volatiles, with the aim of improving their precision. Experimental and randomly simulated data were compared using different statistical parameters (correlation coefficients, Principal Component Analysis loadings and eigenvalues). Non-random factors were shown to contribute significantly to total dispersion; groups of volatile compounds could be associated with these factors. A significant improvement in precision was achieved when considering percent concentration ratios, rather than percent values, among those blackberry volatiles with a similar dispersion behavior. As a novelty over previous references, and to complement this main objective, non-random dispersion trends were also evidenced in data from simple blackberry model systems. Although the influence of the type of matrix on data precision was proved, the model systems did not allow a better understanding of the dispersion patterns in real samples. The approach used here was validated for the first time through the multicomponent characterization of Italian blackberries from different harvest years.
Pusic, Martin V.; LeBlanc, Vicki; Patel, Vimla L.
2001-01-01
Traditional task analysis for instructional design has emphasized the importance of precisely defining behavioral educational objectives and working back to select objective-appropriate instructional strategies. However, this approach may miss effective strategies. Cognitive task analysis, on the other hand, breaks a process down into its component knowledge representations. Selection of instructional strategies based on all such representations in a domain is likely to lead to optimal instructional design. In this demonstration, using the interpretation of cervical spine x-rays as an educational example, we show how a detailed cognitive task analysis can guide the development of computer-aided instruction.
NASA Technical Reports Server (NTRS)
1974-01-01
Accomplishments in the continuing programs are reported. The data were obtained in support of the following broad objectives: (1) to provide a precise and accurate geometric description of the earth's surface; (2) to provide a precise and accurate mathematical description of the earth's gravitational field; and (3) to determine time variations of the geometry of the ocean surface, the solid earth, the gravity field, and other geophysical parameters.
NASA Astrophysics Data System (ADS)
Echeverria, Alex; Silva, Jorge F.; Mendez, Rene A.; Orchard, Marcos
2016-10-01
Context. The best precision that can be achieved to estimate the location of a stellar-like object is a topic of permanent interest in the astrometric community. Aims: We analyze bounds for the best position estimation of a stellar-like object on a CCD detector array in a Bayesian setting where the position is unknown, but where we have access to a prior distribution. In contrast to a parametric setting where we estimate a parameter from observations, the Bayesian approach estimates a random object (i.e., the position is a random variable) from observations that are statistically dependent on the position. Methods: We characterize the Bayesian Cramér-Rao (CR) bound that bounds the minimum mean square error (MMSE) of the best estimator of the position of a point source on a linear CCD-like detector, as a function of the properties of the detector, the source, and the background. Results: We quantify and analyze the increase in astrometric performance from the use of a prior distribution of the object position, which is not available in the classical parametric setting. This gain is shown to be significant for various observational regimes, in particular in the case of faint objects or when the observations are taken under poor conditions. Furthermore, we present numerical evidence that the MMSE estimator of this problem tightly achieves the Bayesian CR bound. This is a remarkable result, demonstrating that all the performance gains presented in our analysis can be achieved with the MMSE estimator. Conclusions: The Bayesian CR bound can be used as a benchmark indicator of the expected maximum positional precision of a set of astrometric measurements in which prior information can be incorporated. This bound can be achieved through the conditional mean estimator, in contrast to the parametric case where no unbiased estimator precisely reaches the CR bound.
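In its scalar textbook (Van Trees) form, the Bayesian CR bound described above is the inverse of a sum of data and prior information terms; a schematic restatement with assumed notation (x the source position, I the pixel counts), not copied from the paper:

```latex
\mathrm{MMSE} \;\ge\;
\Big(
  \mathbb{E}_{x}\big[\,\mathcal{I}_{\mathrm{data}}(x)\,\big]
  \;+\;
  \mathbb{E}_{x}\!\Big[\big(\tfrac{\mathrm{d}}{\mathrm{d}x}\ln p(x)\big)^{2}\Big]
\Big)^{-1},
\qquad
\mathcal{I}_{\mathrm{data}}(x)
  = \mathbb{E}_{I \mid x}\!\Big[\big(\tfrac{\partial}{\partial x}\ln p(I \mid x)\big)^{2}\Big].
```

For faint sources or poor observing conditions the data term shrinks, so the prior term increasingly carries the bound, which is one way to read the performance gain over the parametric CR bound reported above.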
Viewing geometry determines the contribution of binocular vision to the online control of grasping.
Keefe, Bruce D; Watt, Simon J
2017-12-01
Binocular vision is often assumed to make a specific, critical contribution to online visual control of grasping by providing precise information about the separation between digits and object. This account overlooks the 'viewing geometry' typically encountered in grasping, however. Separation of hand and object is rarely aligned precisely with the line of sight (the visual depth dimension), and analysis of the raw signals suggests that, for most other viewing angles, binocular feedback is less precise than monocular feedback. Thus, online grasp control relying selectively on binocular feedback would not be robust to natural changes in viewing geometry. Alternatively, sensory integration theory suggests that different signals contribute according to their relative precision, in which case the role of binocular feedback should depend on viewing geometry, rather than being 'hard-wired'. We manipulated viewing geometry, and assessed the role of binocular feedback by measuring the effects on grasping of occluding one eye at movement onset. Loss of binocular feedback resulted in a significantly less extended final slow-movement phase when hand and object were separated primarily in the frontoparallel plane (where binocular information is relatively imprecise), compared to when they were separated primarily along the line of sight (where binocular information is relatively precise). Consistent with sensory integration theory, this suggests the role of binocular (and monocular) vision in online grasp control is not a fixed, 'architectural' property of the visuo-motor system, but arises instead from the interaction of viewer and situation, allowing robust online control across natural variations in viewing geometry.
Computer-aided target tracking in motion analysis studies
NASA Astrophysics Data System (ADS)
Burdick, Dominic C.; Marcuse, M. L.; Mislan, J. D.
1990-08-01
Motion analysis studies require the precise tracking of reference objects in sequential scenes. In a typical situation, events of interest are captured at high frame rates using special cameras, and selected objects or targets are tracked on a frame by frame basis to provide necessary data for motion reconstruction. Tracking is usually done using manual methods which are slow and prone to error. A computer based image analysis system has been developed that performs tracking automatically. The objective of this work was to eliminate the bottleneck due to manual methods in high volume tracking applications such as the analysis of crash test films for the automotive industry. The system has proven to be successful in tracking standard fiducial targets and other objects in crash test scenes. Over 95 percent of target positions which could be located using manual methods can be tracked by the system, with a significant improvement in throughput over manual methods. Future work will focus on the tracking of clusters of targets and on tracking deformable objects such as airbags.
Braian, Michael; Jönsson, David; Kevci, Mir; Wennerberg, Ann
2018-07-01
To evaluate the accuracy and precision of objects produced by additive manufacturing systems (AM) for use in dentistry and to compare with subtractive manufacturing systems (SM). Ten specimens of two geometrical objects were produced by five different AM machines and one SM machine. Object A mimics an inlay-shaped object, while object B imitates a four-unit bridge model. All the objects were sorted into different measurement dimensions (x, y, z), linear distances, angles and corner radius. None of the additive manufacturing or subtractive manufacturing groups presented a perfect match to the CAD file with regard to all parameters included in the present study. Considering linear measurements, the precision for the subtractive manufacturing group was consistent in all axes for object A, presenting results of <0.050 mm. The additive manufacturing groups had consistent precision in the x-axis and y-axis but not in the z-axis. With regard to corner radius measurements, the SM group had the best overall accuracy and precision for both objects A and B when compared to the AM groups. Within the limitations of this in vitro study, the conclusion can be made that subtractive manufacturing presented overall precision on all measurements below 0.050 mm. The AM machines also presented fairly good precision, <0.150 mm, on all axes except for the z-axis. Knowledge regarding accuracy and precision for different production techniques utilized in dentistry is of great clinical importance. The dental community has moved from casting to milling, and additive techniques are now being implemented. Thus all these production techniques need to be tested, compared and validated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alamaniotis, Miltiadis; Tsoukalas, Lefteri H.
2018-01-01
The analysis of obtained data and the inference of the presence or absence of special nuclear materials in them play a significant role in enhancing nuclear nonproliferation. Among various types of measurements, gamma-ray spectra are the most widely used type of data for analysis in nonproliferation. In this chapter, a method that employs the fireworks algorithm (FWA) for analyzing gamma-ray spectra, aiming at detecting gamma signatures, is presented. In particular, FWA is utilized to fit a set of known signatures to a measured spectrum by optimizing an objective function, with non-zero coefficients expressing the detected signatures. FWA is tested on a set of experimentally obtained measurements and various objective functions (MSE, RMSE, Theil-2, MAE, MAPE, MAP), with results exhibiting its potential in providing high accuracy and high precision of detected signatures. Furthermore, FWA is benchmarked against genetic algorithms and multiple linear regression, with results exhibiting its superiority over the other tested algorithms with respect to precision for the MAE, MAPE and MAP measures.
Alamaniotis, Miltiadis; Tsoukalas, Lefteri H.
2018-01-01
The analysis of measured data plays a significant role in enhancing nuclear nonproliferation, mainly by inferring the presence of patterns associated with special nuclear materials. Among various types of measurements, gamma-ray spectra are the most widely utilized type of data in nonproliferation applications. In this paper, a method that employs the fireworks algorithm (FWA) for analyzing gamma-ray spectra, aiming at detecting gamma signatures, is presented. In particular, FWA is utilized to fit a set of known signatures to a measured spectrum by optimizing an objective function, where non-zero coefficients express the detected signatures. FWA is tested on a set of experimentally obtained measurements optimizing various objective functions (MSE, RMSE, Theil-2, MAE, MAPE, MAP), with results exhibiting its potential in providing highly accurate and precise signature detection. Furthermore, FWA is benchmarked against genetic algorithms and multiple linear regression, showing its superiority over those algorithms regarding precision with respect to the MAE, MAPE, and MAP measures.
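The fitting problem both versions describe amounts to choosing non-negative mixing coefficients for a library of known signatures so that an error measure between the mixed and measured spectra is minimized. A minimal sketch with synthetic data and a simple perturb-and-keep search standing in for FWA (not a faithful FWA implementation; MAE is one of the objective choices listed above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Library S: columns are known gamma signatures; y: measured spectrum.
# Synthetic stand-ins; in the papers these come from experiment.
n_bins, n_sigs = 128, 5
S = rng.random((n_bins, n_sigs))
true_c = np.array([0.8, 0.0, 1.2, 0.0, 0.4])        # zeros = absent sources
y = S @ true_c + 0.01 * rng.standard_normal(n_bins)

def mae(c):
    """Mean absolute error between the fitted mix and the measurement."""
    return np.abs(y - S @ np.clip(c, 0, None)).mean()

# Tiny "spark" search in the spirit of FWA: perturb the incumbent
# solution and keep any improvement.
best = rng.random(n_sigs)
for _ in range(5000):
    spark = np.clip(best + 0.05 * rng.standard_normal(n_sigs), 0, None)
    if mae(spark) < mae(best):
        best = spark
print(np.round(best, 2))  # non-zero coefficients flag detected signatures
```

The recovered coefficients near zero correspond to signatures declared absent, which is exactly the detection readout the abstracts describe.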
Comparative abrasive wear resistance and surface analysis of dental resin-based materials
Nayyer, Maleeha; Zahid, Shahreen; Hassan, Syed Hammad; Mian, Salman Aziz; Mehmood, Sana; Khan, Haroon Ahmed; Kaleem, Muhammad; Zafar, Muhammad Sohail; Khan, Abdul Samad
2018-01-01
Objective: The objective of this study was to assess the surface properties (microhardness and wear resistance) of various composite and compomer materials. In addition, the methodologies used for assessing wear resistance were compared. Materials and Methods: This study was conducted using restorative materials (Filtek Z250, Filtek Z350, QuiXfil, SureFil SDR, and Dyract XP) to assess wear resistance. A custom-made toothbrush simulator was employed for wear testing. Before and after wear testing, structural, surface, and physical properties were assessed using various techniques. Results: Structural changes and mass loss were observed after treatment, whereas no significant difference in terms of microhardness was observed. The correlation between atomic force microscopy (AFM) and profilometer data and between wear resistance and filler volume was highly significant. The correlation between wear resistance and microhardness was insignificant. Conclusions: The AFM presented higher precision compared to optical profilometers at the nanoscale level, but both methods can be used in tandem for a more detailed and precise roughness analysis. PMID:29657526
Reduction procedures for accurate analysis of MSX surveillance experiment data
NASA Technical Reports Server (NTRS)
Gaposchkin, E. Mike; Lane, Mark T.; Abbot, Rick I.
1994-01-01
Technical challenges of the Midcourse Space Experiment (MSX) science instruments require careful characterization and calibration of these sensors for analysis of surveillance experiment data. Procedures for reduction of Resident Space Object (RSO) detections will be presented which include refinement and calibration of the metric and radiometric (and photometric) data and calculation of a precise MSX ephemeris. Examples will be given which support the reduction, and these are taken from ground-test data similar in characteristics to the MSX sensors and from the IRAS satellite RSO detections. Examples to demonstrate the calculation of a precise ephemeris will be provided from satellites in similar orbits which are equipped with S-band transponders.
Precision Higgs Boson Physics and Implications for Beyond the Standard Model Physics Theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wells, James
The discovery of the Higgs boson is one of science's most impressive recent achievements. We have taken a leap forward in understanding what is at the heart of elementary particle mass generation. We now have a significant opportunity to develop even deeper understanding of how the fundamental laws of nature are constructed. As such, we need intense focus from the scientific community to put this discovery in its proper context, to realign and narrow our understanding of viable theory based on this positive discovery, and to detail the implications the discovery has for theories that attempt to answer questions beyond what the Standard Model can explain. This project's first main objective is to develop a state-of-the-art analysis of precision Higgs boson physics. This is to be done in the tradition of the electroweak precision measurements of the LEP/SLC era. Indeed, the electroweak precision studies of the past are necessary inputs to the full precision Higgs program. Calculations will be presented to the community of Higgs boson observables that detail just how well various couplings of the Higgs boson can be measured, and more. These will be carried out using state-of-the-art theory computations coupled with the new experimental results coming in from the LHC. The project's second main objective is to utilize the results obtained from LHC Higgs boson experiments and the precision analysis, along with the direct search studies at LHC, and discern viable theories of physics beyond the Standard Model that unify physics to a deeper level. Studies will be performed on supersymmetric theories, theories of extra spatial dimensions (and related theories, such as compositeness), and theories that contain hidden sector states uniquely accessible to the Higgs boson. In addition, if data becomes incompatible with the Standard Model's low-energy effective lagrangian, new physics theories will be developed that explain the anomaly and put it into a more unified framework beyond the Standard Model.
Large-scale weakly supervised object localization via latent category learning.
Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve
2015-04-01
Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Class (VOC) 2007 and the large-scale ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision which outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
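The latent-category step rests on latent semantic analysis; as a minimal sketch, latent topics can be recovered from a region-by-visual-word count matrix with a truncated SVD (matrix sizes and data are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Rows: candidate image regions; columns: visual-word counts (toy data).
X = rng.poisson(1.0, size=(200, 50)).astype(float)

# Truncated SVD = LSA: U[:, :k] * s[:k] gives each region's loading on
# k latent categories (objects, object parts, or backgrounds like "sky").
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 5
region_topics = U[:, :k] * s[:k]

# Assign each region to its dominant latent category; a selection step
# would then score each category's discrimination for the target class.
labels = np.abs(region_topics).argmax(axis=1)
print(np.bincount(labels, minlength=k))  # region count per latent category
```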
Quantitative morphometrical characterization of human pronuclear zygotes.
Beuchat, A; Thévenaz, P; Unser, M; Ebner, T; Senn, A; Urner, F; Germond, M; Sorzano, C O S
2008-09-01
Identification of embryos with high implantation potential remains a challenge in in vitro fertilization (IVF). Subjective pronuclear (PN) zygote scoring systems have been developed for that purpose. The aim of this work was to provide a software tool that enables objective measurement of morphological characteristics of the human PN zygote. A computer program was created to analyse zygote images semi-automatically, providing precise morphological measurements. The accuracy of this approach was first validated by comparing zygotes from two different IVF centres with computer-assisted measurements or subjective scoring. Computer-assisted measurement and subjective scoring were then compared for their ability to classify zygotes with high and low implantation probability by using a linear discriminant analysis. Zygote images coming from the two IVF centres were analysed with the software, resulting in a series of precise measurements of 24 variables. Using subjective scoring, the cytoplasmic halo was the only feature which was significantly different between the two IVF centres. Computer-assisted measurements revealed significant differences between centres in PN centring, PN proximity, cytoplasmic halo and features related to nucleolar precursor body distribution. The zygote classification error achieved with the computer-assisted measurements (0.363) was slightly lower than that of the subjective ones (0.393). A precise and objective characterization of the morphology of human PN zygotes can be achieved by the use of an advanced image analysis tool. This computer-assisted analysis allows for a better morphological characterization of human zygotes and can be used for classification.
ERIC Educational Resources Information Center
Essid, Hedi; Ouellette, Pierre; Vigeant, Stephane
2010-01-01
The objective of this paper is to measure the efficiency of high schools in Tunisia. We use a statistical data envelopment analysis (DEA)-bootstrap approach with quasi-fixed inputs to estimate the precision of our measure. To do so, we developed a statistical model serving as the foundation of the data generation process (DGP). The DGP is…
Evaluating the Skill of Students to Determine Soil Morphology Characteristics
ERIC Educational Resources Information Center
Post, Donald F.; Parikh, Sanjai J.; Papp, Rae Ann; Ferriera, Laerta
2006-01-01
Precise and accurate pedon descriptions prepared by field scientists using standard techniques with defined terminology and methodology are essential in describing soil pedons. The accuracy of field measurements is generally defined in terms of how well they agree with objective criteria (e.g., laboratory analysis), such as mechanical analysis…
NASA Astrophysics Data System (ADS)
Chauhan, H.; Krishna Mohan, B.
2014-11-01
The present study was undertaken with the objective of checking the effectiveness of spectral similarity measures for developing precise crop spectra from collected hyperspectral field spectra. In multispectral and hyperspectral remote sensing, classification of pixels is obtained by statistical comparison (by means of spectral similarity) of known field or library spectra to unknown image spectra. Though these algorithms are readily used, little emphasis has been placed on the use of spectral similarity measures to select precise crop spectra from a set of field spectra. Conventionally, crop spectra are developed after rejecting outliers based only on broad-spectrum analysis. Here a successful attempt has been made to develop precise crop spectra based on spectral similarity. As the use of unevaluated data leads to uncertainty in image classification, it is crucial to evaluate the data; hence, notwithstanding the conventional method, the data were refined to serve the purpose of the present research. The effectiveness of the precise field spectra was evaluated by spectral discrimination measures, which yielded higher discrimination values than for spectra developed conventionally. Overall classification accuracy is 51.89% for the image classified by field spectra selected conventionally and 75.47% for the image classified by field spectra selected precisely on the basis of spectral similarity. KHAT values are 0.37 and 0.62, and Z values are 2.77 and 9.59, for the images classified using conventional and precise field spectra, respectively. The markedly higher classification accuracy, KHAT and Z values show the promise of a new approach to field spectra selection based on spectral similarity measures.
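The abstract does not name the specific similarity measure used; the spectral angle is one widely used choice, and a sketch of selecting "precise" field spectra with it might look like this (the data and the rejection threshold are invented):

```python
# Sketch: select "precise" crop spectra by spectral-angle similarity
# to the median spectrum; reject outliers. Synthetic data.
import numpy as np

rng = np.random.default_rng(2)
spectra = rng.random((40, 220)) + 1.0   # 40 field spectra, 220 bands

def spectral_angle(a, b):
    """Angle (radians) between two spectra viewed as vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

reference = np.median(spectra, axis=0)
angles = np.array([spectral_angle(s, reference) for s in spectra])
keep = angles < np.percentile(angles, 75)   # assumed rejection rule
precise_spectrum = spectra[keep].mean(axis=0)
print(f"kept {keep.sum()} of {len(spectra)} spectra")
```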
2010-01-01
Background: Cell motility is a critical parameter in many physiological as well as pathophysiological processes. In time-lapse video microscopy, manual cell tracking remains the most common method of analyzing the migratory behavior of cell populations. In addition to being labor-intensive, this method is susceptible to user-dependent errors regarding the selection of "representative" subsets of cells and the manual determination of precise cell positions. Results: We have quantitatively analyzed these error sources, demonstrating that manual cell tracking of pancreatic cancer cells leads to miscalculation of migration rates of up to 410%. In order to provide objective measurements of cell migration rates, we have employed multi-target tracking technologies commonly used in radar applications to develop a fully automated cell identification and tracking system suitable for high-throughput screening of video sequences of unstained living cells. Conclusion: We demonstrate that our automatic multi-target tracking system identifies cell objects, follows individual cells and computes migration rates with high precision, clearly outperforming manual procedures. PMID:20377897
Lunny, Carole; McKenzie, Joanne E; McDonald, Steve
2016-06-01
Locating overviews of systematic reviews is difficult because of an absence of appropriate indexing terms and inconsistent terminology used to describe overviews. Our objective was to develop a validated search strategy to retrieve overviews in MEDLINE. We derived a test set of overviews from the references of two method articles on overviews. Two population sets were used to identify discriminating terms, that is, terms that appear frequently in the test set but infrequently in two population sets of references found in MEDLINE. We used text mining to conduct a frequency analysis of terms appearing in the titles and abstracts. Candidate terms were combined and tested in MEDLINE in various permutations, and the performance of strategies measured using sensitivity and precision. Two search strategies were developed: a sensitivity-maximizing strategy, achieving 93% sensitivity (95% confidence interval [CI]: 87, 96) and 7% precision (95% CI: 6, 8), and a sensitivity-and-precision-maximizing strategy, achieving 66% sensitivity (95% CI: 58, 74) and 21% precision (95% CI: 17, 25). The developed search strategies enable users to more efficiently identify overviews of reviews compared to current strategies. Consistent language in describing overviews would aid in their identification, as would a specific MEDLINE Publication Type.
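The two performance measures quoted above are simple proportions over the test and retrieved sets; for concreteness, a sketch with invented counts and normal-approximation confidence intervals:

```python
# Sketch: sensitivity and precision of a search strategy, with
# normal-approximation confidence intervals. Counts are invented.
import math

def proportion_ci(k, n, z=1.96):
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

retrieved_relevant, relevant_total, retrieved_total = 93, 100, 1329
sens = proportion_ci(retrieved_relevant, relevant_total)
prec = proportion_ci(retrieved_relevant, retrieved_total)
print(f"sensitivity {sens[0]:.0%} (95% CI {sens[1]:.0%}-{sens[2]:.0%})")
print(f"precision   {prec[0]:.0%} (95% CI {prec[1]:.0%}-{prec[2]:.0%})")
```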
Agley, Chibeza C.; Velloso, Cristiana P.; Lazarus, Norman R.
2012-01-01
The accurate measurement of the morphological characteristics of cells with nonuniform conformations presents difficulties. We report here a straightforward method using immunofluorescent staining and the commercially available imaging program Adobe Photoshop, which allows objective and precise information to be gathered on irregularly shaped cells. We have applied this measurement technique to the analysis of human muscle cells and their immunologically marked intracellular constituents, as these cells are prone to adopting a highly branched phenotype in culture. This method can overcome many of the long-standing limitations of conventional approaches for quantifying muscle cell size in vitro. In addition, wider applications of Photoshop as a quantitative and semiquantitative tool in immunocytochemistry are explored. PMID:22511600
Design of measurement system of 3D surface profile based on chromatic confocal technology
NASA Astrophysics Data System (ADS)
Wang, An-su; Xie, Bin; Liu, Zi-wei
2018-01-01
The chromatic confocal 3D profilometer has been widely used in scientific investigation and industrial fields recently for its high precision, large measuring range and numerical surface characterization. It can provide an exact and omnidirectional solution for manufacturing and research through 3D non-contact surface analysis. This article analyzes the principle of surface measurement with chromatic confocal technology and provides the design indicators and requirements of the confocal system. As the key component, the dispersive objective, which produces a longitudinal focal shift with wavelength, was designed. The objective disperses the foci of wavelengths between 400 and 700 nm over a 15 mm longitudinal range. With the selected spectrometer, the resolution of the chromatic confocal 3D profilometer is no more than 5 μm, which meets the needs of high-precision non-contact surface profile measurement.
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. Visual quality and precision of the tracked target are highly desired in modern tracking systems. The fact that the tracked object does not always appear clearly makes the tracking result less precise; the reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step is super-resolution imaging applied to the frame sequence, done by cropping several frames or all of them. The second step is tracking on the super-resolved images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution images. In this research, a single-frame super-resolution technique is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation. The method used for tracking is CamShift, whose advantage is a simple calculation based on the HSV color histogram, which copes with conditions in which the color of the object varies. The computational complexity and large memory requirements of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and good lighting conditions.
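A minimal sketch of the tracking stage (OpenCV's CamShift on an HSV histogram back-projection), run on frames assumed to have been super-resolved beforehand; the file name and initial search window are placeholders:

```python
# Sketch of the tracking stage only: CamShift on an HSV histogram
# back-projection, applied after frames have been super-resolved.
import cv2
import numpy as np

cap = cv2.VideoCapture("upscaled.avi")   # assumed super-resolved video
ok, frame = cap.read()
assert ok, "could not read video"
x, y, w, h = 100, 100, 40, 40            # assumed initial target window
track_window = (x, y, w, h)

roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rot_box, track_window = cv2.CamShift(backproj, track_window, criteria)
    print(track_window)                  # tracked object position per frame
cap.release()
```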
Desired Precision in Multi-Objective Optimization: Epsilon Archiving or Rounding Objectives?
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Sahraei, S.
2016-12-01
Multi-objective optimization (MO) aids in supporting the decision making process in water resources engineering and design problems. One of the main goals of solving an MO problem is to archive a set of solutions that is well-distributed across a wide range of all the design objectives. Modern MO algorithms use the epsilon dominance concept to define a mesh with pre-defined grid-cell size (often called epsilon) in the objective space and archive at most one solution at each grid-cell. Epsilon can be set to the desired precision level of each objective function to make sure that the difference between each pair of archived solutions is meaningful. This epsilon archiving process is computationally expensive in problems that have quick-to-evaluate objective functions. This research explores the applicability of a similar but computationally more efficient approach to respect the desired precision level of all objectives in the solution archiving process. In this alternative approach each objective function is rounded to the desired precision level before comparing any new solution to the set of archived solutions that already have rounded objective function values. This alternative solution archiving approach is compared to the epsilon archiving approach in terms of efficiency and quality of archived solutions for solving mathematical test problems and hydrologic model calibration problems.
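The alternative archiving rule described above can be sketched in a few lines: round each objective vector to the desired precision, then apply the usual non-domination check to the rounded values (pure Python, minimization assumed, the decimal settings are invented):

```python
# Sketch: archive solutions after rounding objectives to a desired
# precision, keeping only non-dominated rounded vectors (minimization).
def rounded(objs, decimals):
    return tuple(round(f, d) for f, d in zip(objs, decimals))

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def try_archive(archive, objs, decimals=(2, 1)):
    """archive: dict mapping rounded objective vectors -> original objectives."""
    r = rounded(objs, decimals)
    if any(dominates(k, r) for k in archive):
        return False                    # dominated at the desired precision
    for k in [k for k in archive if dominates(r, k)]:
        del archive[k]                  # drop newly dominated entries
    archive[r] = objs                   # at most one solution per rounded cell
    return True

archive = {}
for objs in [(0.123, 4.56), (0.121, 4.58), (0.50, 1.20)]:
    try_archive(archive, objs)
print(archive)
```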
JASMINE: Data analysis and simulation
NASA Astrophysics Data System (ADS)
Yamada, Yoshiyuki; Gouda, Naoteru; Yano, Taihei; Kobayashi, Yukiyasu; Sako, Nobutada; Jasmine Working Group
JASMINE will study the structure and evolution of the Milky Way Galaxy. To accomplish these objectives JASMINE will measure trigonometric parallaxes, positions and proper motions of about 10 million stars with a precision of 10 μas at z = 14 mag. In this paper methods for data analysis and error budgets, on-board data handling such as sampling strategy and data compression, and simulation software for end-to-end simulation are presented.
EOS-AM precision pointing verification
NASA Technical Reports Server (NTRS)
Throckmorton, A.; Braknis, E.; Bolek, J.
1993-01-01
The Earth Observing System (EOS) AM mission requires tight pointing knowledge to meet scientific objectives, in a spacecraft with low frequency flexible appendage modes. As the spacecraft controller reacts to various disturbance sources and as the inherent appendage modes are excited by this control action, verification of precision pointing knowledge becomes particularly challenging for the EOS-AM mission. As presently conceived, this verification includes a complementary set of multi-disciplinary analyses, hardware tests and real-time computer in the loop simulations, followed by collection and analysis of hardware test and flight data and supported by a comprehensive data base repository for validated program values.
Taxonomy based analysis of force exchanges during object grasping and manipulation
Martin-Brevet, Sandra; Jarrassé, Nathanaël; Burdet, Etienne
2017-01-01
The flexibility of the human hand in object manipulation is essential for daily life activities, but remains relatively little explored with quantitative methods. On the one hand, recent taxonomies describe qualitatively the classes of hand postures for object grasping and manipulation. On the other hand, the quantitative analysis of hand function has generally been restricted to precision grip (with thumb and index opposition) during lifting tasks. The aim of the present study is to fill the gap between these two kinds of descriptions by investigating quantitatively the forces exerted by the hand on an instrumented object in a set of representative manipulation tasks. The object was an instrumented parallelepiped able to measure the force exerted on each of its six faces as well as its acceleration. The grasping force was estimated from the lateral forces and the unloading force from the bottom force. The protocol included eleven tasks with complementary constraints inspired by recent taxonomies: four tasks corresponding to lifting and holding the object with different grasp configurations, and seven to manipulating the object (rotation around each of its axes and translation). The grasping and unloading forces and object rotations were measured during the five phases of the actions: unloading, lifting, holding or manipulation, preparation for deposit, and deposit. The results confirm the tight regulation between grasping and unloading forces during lifting, and extend this to the deposit phase. In addition, they provide a precise description of the regulation of force exchanges during various manipulation tasks spanning representative actions of daily life. The timing of manipulation showed both sequential and overlapping organization of the different sub-actions, and micro-errors could be detected. This phenomenological study confirms the feasibility of using an instrumented object to investigate complex manipulative behavior in humans. This protocol will be used in the future to investigate upper-limb dexterity in patients with sensory-motor impairments. PMID:28562617
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shulman, Holly; Ross, Nicole
2015-10-30
An additive manufacturing technique known as laminated object manufacturing (LOM) was used to fabricate compact ceramic heat exchanger prototypes. LOM uses precision CO2 laser cutting of ceramic green tapes, which are then precision stacked to build a 3D object with fine internal features. Modeling was used to develop prototype designs and predict the thermal response, stress, and efficiency in the ceramic heat exchangers. Build testing and materials analyses were used to provide feedback for the design selection. During this development process, laminated object manufacturing protocols were established. This included laser optimization, strategies for fine feature integrity, lamination fluid control, green handling, and firing profile. Three full size prototypes were fabricated using two different designs. One prototype was selected for performance testing. During testing, cross talk leakage prevented the application of a high pressure differential; however, the prototype was successful at withstanding the high temperature operating conditions (1300 °F). In addition, analysis showed that the bulk of the part did not have cracks or leakage issues. This led to the development of a module method for next generation LOM heat exchangers. A scale-up cost analysis showed that, given a purpose built LOM system, these ceramic heat exchangers would be affordable for the applications.
The Length of a Pestle: A Class Exercise in Measurement and Statistical Analysis.
ERIC Educational Resources Information Center
O'Reilly, James E.
1986-01-01
Outlines the simple exercise of measuring the length of an object as a concrete paradigm of the entire process of making chemical measurements and treating the resulting data. Discusses the procedure, significant figures, measurement error, spurious data, rejection of results, precision and accuracy, and student responses. (TW)
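The same paradigm translates directly into a few lines of analysis; a sketch with invented length readings and a deliberately crude two-standard-deviation rejection rule:

```python
# Sketch: treating repeated length measurements like the class exercise:
# mean, standard deviation, 95% confidence interval, and a simple
# outlier screen. Readings are invented.
import statistics as st

readings = [20.14, 20.12, 20.15, 20.13, 20.41, 20.14]  # cm; one suspect value
mean = st.mean(readings)
sd = st.stdev(readings)

# Crude rejection rule for illustration: discard points > 2 sd from the mean.
kept = [x for x in readings if abs(x - mean) <= 2 * sd]
mean_k, sd_k, n = st.mean(kept), st.stdev(kept), len(kept)
half_width = 1.96 * sd_k / n ** 0.5      # normal-approximation 95% CI
print(f"{mean_k:.3f} +/- {half_width:.3f} cm (n={n})")
```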
ERIC Educational Resources Information Center
Faraone, Stephen V.
2012-01-01
Objective: An earlier meta-analysis of pediatric clinical trials indicated that lisdexamfetamine dimesylate (LDX) had a greater effect size than other stimulant medications. This work tested the hypothesis that the apparent increased efficacy was artifactual. Method: The authors assessed two potential artifacts: an unusually high precision of…
Fundamental differences between optimization code test problems in engineering applications
NASA Technical Reports Server (NTRS)
Eason, E. D.
1984-01-01
The purpose here is to suggest that there is at least one fundamental difference between the problems used for testing optimization codes and the problems that engineers often need to solve; in particular, the level of precision that can be practically achieved in the numerical evaluation of the objective function, derivatives, and constraints. This difference affects the performance of optimization codes, as illustrated by two examples. Two classes of optimization problem were defined. Class One functions and constraints can be evaluated to a high precision that depends primarily on the word length of the computer. Class Two functions and/or constraints can only be evaluated to a moderate or a low level of precision for economic or modeling reasons, regardless of the computer word length. Optimization codes have not been adequately tested on Class Two problems. There are very few Class Two test problems in the literature, while there are literally hundreds of Class One test problems. The relative performance of two codes may be markedly different for Class One and Class Two problems. Less sophisticated direct search type codes may be less likely to be confused or to waste many function evaluations on Class Two problems. The analysis accuracy and minimization performance are related in a complex way that probably varies from code to code. On a problem where the analysis precision was varied over a range, the simple Hooke and Jeeves code was more efficient at low precision while the Powell code was more efficient at high precision.
Salem, Ran; Matityahu, Shlomi; Melchior, Aviva; Nikolaevsky, Mark; Noked, Ori; Sterer, Eran
2015-09-01
The precision of melting curve measurements using laser-heated diamond anvil cell (LHDAC) is largely limited by the correct and reliable determination of the onset of melting. We present a novel image analysis of speckle interference patterns in the LHDAC as a way to define quantitative measures which enable an objective determination of the melting transition. Combined with our low-temperature customized IR pyrometer, designed for measurements down to 500 K, our setup allows studying the melting curve of materials with low melting temperatures, with relatively high precision. As an application, the melting curve of Te was measured up to 35 GPa. The results are found to be in good agreement with previous data obtained at pressures up to 10 GPa.
NASA Astrophysics Data System (ADS)
Zhang, Xueliang; Feng, Xuezhi; Xiao, Pengfeng; He, Guangjun; Zhu, Liujun
2015-04-01
Segmentation of remote sensing images is a critical step in geographic object-based image analysis. Evaluating the performance of segmentation algorithms is essential to identify effective segmentation methods and optimize their parameters. In this study, we propose region-based precision and recall measures and use them to compare two image partitions for the purpose of evaluating segmentation quality. The two measures are calculated based on region overlapping and presented as a point or a curve in a precision-recall space, which can indicate segmentation quality in both geometric and arithmetic respects. Furthermore, the precision and recall measures are combined by using four different methods. We examine and compare the effectiveness of the combined indicators through geometric illustration, in an effort to reveal segmentation quality clearly and capture the trade-off between the two measures. In the experiments, we adopted the multiresolution segmentation (MRS) method for evaluation. The proposed measures are compared with four existing discrepancy measures to further confirm their capabilities. Finally, we suggest using a combination of the region-based precision-recall curve and the F-measure for supervised segmentation evaluation.
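A toy version of region-overlap precision and recall, combined into the F-measure, can be computed over label images as below; the arrays and the simple best-overlap matching rule are simplified assumptions, not the paper's exact definitions:

```python
# Sketch: region-based precision/recall between a segmentation and a
# reference partition, computed from pixel overlaps, plus the F-measure.
import numpy as np

seg = np.array([[1, 1, 2, 2],
                [1, 1, 2, 2],
                [3, 3, 2, 2]])           # segmentation labels (toy)
ref = np.array([[1, 1, 1, 2],
                [1, 1, 1, 2],
                [3, 3, 2, 2]])           # reference labels (toy)

def overlap_score(a, b):
    """Mean, over regions of `a`, of the largest fraction of each region
    covered by a single region of `b`."""
    scores = []
    for lab in np.unique(a):
        mask = a == lab
        _, counts = np.unique(b[mask], return_counts=True)
        scores.append(counts.max() / mask.sum())
    return float(np.mean(scores))

precision = overlap_score(seg, ref)      # how pure each segment is
recall = overlap_score(ref, seg)         # how well reference regions are covered
f_measure = 2 * precision * recall / (precision + recall)
print(precision, recall, f_measure)
```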
Defining precision: The precision medicine initiative trials NCI-MPACT and NCI-MATCH.
Coyne, Geraldine O'Sullivan; Takebe, Naoko; Chen, Alice P
"Precision" trials, using rationally incorporated biomarker targets and molecularly selective anticancer agents, have become of great interest to both patients and their physicians. In the endeavor to test the cornerstone premise of precision oncotherapy, that is, determining if modulating a specific molecular aberration in a patient's tumor with a correspondingly specific therapeutic agent improves clinical outcomes, the design of clinical trials with embedded genomic characterization platforms which guide therapy are an increasing challenge. The National Cancer Institute Precision Medicine Initiative is an unprecedented large interdisciplinary collaborative effort to conceptualize and test the feasibility of trials incorporating sequencing platforms and large-scale bioinformatics processing that are not currently uniformly available to patients. National Cancer Institute-Molecular Profiling-based Assignment of Cancer Therapy and National Cancer Institute-Molecular Analysis for Therapy Choice are 2 genomic to phenotypic trials under this National Cancer Institute initiative, where treatment is selected according to predetermined genetic alterations detected using next-generation sequencing technology across a broad range of tumor types. In this article, we discuss the objectives and trial designs that have enabled the public-private partnerships required to complete the scale of both trials, as well as interim trial updates and strategic considerations that have driven data analysis and targeted therapy assignment, with the intent of elucidating further the benefits of this treatment approach for patients. Copyright © 2017. Published by Elsevier Inc.
Balancing feasibility and precision of wildlife habitat analysis in planning for natural resources
Anita T. Morzillo; Joshua S. Halofsky; Jennifer DiMiceli; Blair Csuti; Pamela Comeleo; Miles Hemstrom
2012-01-01
Wildlife conservation often is a central focus in planning for natural resource management. Evaluation of wildlife habitat involves balancing the desire for information about detailed habitat characteristics and the feasibility of completing analyses across large areas. Our objective is to describe tradeoffs made in assessments of wildlife habitat within a multiple-...
Pre-Test Assessment of the Upper Bound of the Drag Coefficient Repeatability of a Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Ulbrich, N.; L'Esperance, A.
2017-01-01
A new method is presented that computes a pre-test estimate of the upper bound of the drag coefficient repeatability of a wind tunnel model. This upper bound is a conservative estimate of the precision error of the drag coefficient. For clarity, precision error contributions associated with the measurement of the dynamic pressure are analyzed separately from those that are associated with the measurement of the aerodynamic loads. The upper bound is computed by using information about the model, the tunnel conditions, and the balance in combination with an estimate of the expected output variations as input. The model information consists of the reference area and an assumed angle of attack. The tunnel conditions are described by the Mach number and the total pressure or unit Reynolds number. The balance inputs are the partial derivatives of the axial and normal force with respect to all balance outputs. Finally, an empirical output variation of 1.0 microV/V is used to relate both random instrumentation and angle measurement errors to the precision error of the drag coefficient. Results of the analysis are reported by plotting the upper bound of the precision error versus the tunnel conditions. The analysis shows that the influence of the dynamic pressure measurement error on the precision error of the drag coefficient is often small when compared with the influence of errors that are associated with the load measurements. Consequently, the sensitivities of the axial and normal force gages of the balance have a significant influence on the overall magnitude of the drag coefficient's precision error. Therefore, results of the error analysis can be used for balance selection purposes, as the drag prediction characteristics of balances of similar size and capacities can objectively be compared. Data from two wind tunnel models and three balances are used to illustrate the assessment of the precision error of the drag coefficient.
Dearing, Chey G; Kilburn, Sally; Lindsay, Kevin S
2014-03-01
Sperm counts have been linked to several fertility outcomes, making them an essential parameter of semen analysis. It has become increasingly recognised that Computer-Assisted Semen Analysis (CASA) provides improved precision over manual methods, but systems are seldom validated robustly for use. The objective of this study was to gather the evidence to validate or reject the Sperm Class Analyser (SCA) as a tool for routine sperm counting in a busy laboratory setting. The criteria examined were comparison with the Improved Neubauer and Leja 20-μm chambers, within- and between-field precision, sperm concentration linearity from a stock diluted in semen and media, accuracy against internal and external quality material, assessment of uneven flow effects and a receiver operating characteristic (ROC) analysis to predict fertility in comparison with the Neubauer method. This work demonstrates that SCA CASA technology is not a standalone 'black box', but rather a tool for well-trained staff that allows rapid, high-number sperm counting, provided that errors are identified and corrected. The system produces accurate, linear, precise results, with less analytical variance than manual methods, that correlate well with the Improved Neubauer chamber. The system provides superior predictive potential for diagnosing fertility problems.
Study on evaluation methods for Rayleigh wave dispersion characteristic
Shi, L.; Tao, X.; Kayen, R.; Shi, H.; Yan, S.
2005-01-01
The evaluation of the Rayleigh wave dispersion characteristic is the key step in detecting S-wave velocity structure. By comparing dispersion curves directly with those from the spectral analysis of surface waves (SASW) method, rather than comparing the S-wave velocity structures, the validity and precision of the microtremor-array method (MAM) can be evaluated more objectively. The results from the China-US joint surface wave investigation at 26 sites in Tangshan, China, show that the MAM has the same precision as the SASW method at 83% of the 26 sites. The MAM is valid for testing the Rayleigh wave dispersion characteristic and has great application potential for detecting site S-wave velocity structure.
Objective monitoring of mTOR inhibitor therapy by three-dimensional facial analysis.
Baynam, Gareth S; Walters, Mark; Dawkins, Hugh; Bellgard, Matthew; Halbert, Anne R; Claes, Peter
2013-08-01
With advances in therapeutics for rare, genetic and syndromic diseases, there is an increasing need for objective assessments of phenotypic endpoints. These assessments will preferably be high-precision, non-invasive, non-irradiating, and relatively inexpensive and portable. We report a case of a child with an extensive lymphatic vascular malformation of the head and neck, treated with a mammalian target of rapamycin (mTOR) inhibitor, that was assessed using 3D facial analysis. This case illustrates that this technology is prospectively a cost-effective modality for treatment monitoring, and it suggests that the technology may also be used for novel explorations of disease biology in conditions associated with disturbances of the mTOR and interrelated pathways.
Memory for a single object has differently variable precisions for relevant and irrelevant features.
Swan, Garrett; Collins, John; Wyble, Brad
2016-01-01
Working memory is a limited resource. To further characterize its limitations, it is vital to understand exactly what is encoded about a visual object beyond the "relevant" features probed in a particular task. We measured the memory quality of a task-irrelevant feature of an attended object by coupling a delayed estimation task with a surprise test. Participants were presented with a single colored arrow and were asked to retrieve just its color for the first half of the experiment before unexpectedly being asked to report its direction. Mixture modeling of the data revealed that participants had highly variable precision on the surprise test, indicating a coarse-grained memory for the irrelevant feature. Following the surprise test, all participants could precisely recall the arrow's direction; however, this improvement in direction memory came at a cost in precision for color memory even though only a single object was being remembered. We attribute these findings to varying levels of attention to different features during memory encoding.
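Mixture-model analyses of this kind typically separate genuine reports from random guesses; a sketch fitting a Gaussian-plus-uniform mixture to report errors by maximum likelihood (the Gaussian stands in for the circular von Mises distribution usually used, and the data are simulated):

```python
# Sketch: fit a Gaussian + uniform mixture to direction-report errors
# (degrees, in [-180, 180]) by maximum likelihood. A Gaussian stands in
# for the von Mises distribution normally used for circular data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
errors = np.concatenate([rng.normal(0, 15, 160),        # remembered targets
                         rng.uniform(-180, 180, 40)])   # random guesses

def neg_log_lik(params):
    g, sd = params                  # g: guess rate, sd: report precision
    if not (0 < g < 1 and sd > 0):
        return np.inf
    lik = g / 360.0 + (1 - g) * norm.pdf(errors, 0.0, sd)
    return -np.log(lik).sum()

fit = minimize(neg_log_lik, x0=[0.3, 30.0], method="Nelder-Mead")
g_hat, sd_hat = fit.x
print(f"guess rate ~ {g_hat:.2f}, precision (sd) ~ {sd_hat:.1f} deg")
```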
Multi-objective optimization in quantum parameter estimation
NASA Astrophysics Data System (ADS)
Gong, BeiLi; Cui, Wei
2018-04-01
We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, improving the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of the Hamiltonian control in improving the precision of the quantum parameter estimation.
Qualitative computer aided evaluation of dental impressions in vivo.
Luthardt, Ralph G; Koch, Rainer; Rudolph, Heike; Walter, Michael H
2006-01-01
Clinical investigations dealing with the precision of different impression techniques are rare. The objective of the present study was to develop and evaluate a procedure for the qualitative analysis of three-dimensional impression precision based on an established in vitro procedure. The null hypothesis to be tested was that the precision of impressions does not differ depending on the impression technique used (single-step, monophase and two-step techniques) or on clinical variables. Digital surface data of patients' teeth prepared for crowns were gathered from standardized manufactured master casts after impressions with three different techniques were taken in a randomized order. Data sets were analyzed for each patient in comparison with the one-step impression chosen as the reference. The qualitative analysis was limited to data points within the 99.5% range. Based on the color-coded representation, areas with maximum deviations were determined (preparation margin and the mantle and occlusal surfaces). To qualitatively analyze the precision of the impression techniques, the hypothesis was tested in linear models for repeated-measures factors (p < 0.05). For the positive 99.5% deviations, no variables with significant influence were found in the statistical analysis. In contrast, the impression technique and the position of the preparation margin significantly influenced the negative 99.5% deviations. The influence of clinical parameters on the deviations between impression techniques can be determined reliably using the 99.5th percentile of the deviations. An analysis of the areas with maximum deviations showed high clinical relevance. The preparation margin was identified as the weak spot of impression taking.
DiFrancesco, Robin; Rosenkranz, Susan L; Taylor, Charlene R; Pande, Poonam G; Siminski, Suzanne M; Jenny, Richard W; Morse, Gene D
2013-10-01
Among National Institutes of Health HIV Research Networks conducting multicenter trials, samples from protocols that span several years are analyzed at multiple clinical pharmacology laboratories (CPLs) for multiple antiretrovirals. Drug assay data are, in turn, entered into study-specific data sets that are used for pharmacokinetic analyses, merged to conduct cross-protocol pharmacokinetic analysis, and integrated with pharmacogenomics research to investigate pharmacokinetic-pharmacogenetic associations. The CPLs participate in a semiannual proficiency testing (PT) program implemented by the Clinical Pharmacology Quality Assurance program. Using results from multiple PT rounds, longitudinal analyses of recovery are reflective of accuracy and precision within/across laboratories. The objectives of this longitudinal analysis of PT across multiple CPLs were to develop and test statistical models that longitudinally: (1) assess the precision and accuracy of concentrations reported by individual CPLs and (2) determine factors associated with round-specific and long-term assay accuracy, precision, and bias using a new regression model. A measure of absolute recovery is explored as a simultaneous measure of accuracy and precision. Overall, the analysis outcomes assured 97% accuracy (±20% of the final target concentration) of all 21 drug concentration results reported for clinical trial samples by multiple CPLs. Using the Clinical Laboratory Improvement Act acceptance of meeting criteria for ≥2/3 consecutive rounds, all 10 laboratories that participated in 3 or more rounds per analyte maintained Clinical Laboratory Improvement Act proficiency. Significant associations were present between magnitude of error and CPL (Kruskal-Wallis P < 0.001) and antiretroviral (Kruskal-Wallis P < 0.001).
Centering Objects in the Workspace
ERIC Educational Resources Information Center
Free, Cory
2005-01-01
Drafters must be detail-oriented people. The objects they draw are interpreted and then built with the extreme precision required by today's manufacturers. Now that computer-aided drafting (CAD) has taken over the drafting profession, anything less than exact precision is unacceptable. In her drafting classes, the author expects her students to…
Using normalization 3D model for automatic clinical brain quantitative analysis and evaluation
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping
2003-05-01
Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain, and has been broadly used in diagnosing brain disorders through quantitative clinical analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important in improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied to realign functional medical images to standard structural medical images. Then the standard 3D brain model, which shows well-defined brain regions, was used in place of the manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in a practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnosis evaluation scores with less than 3% error on average. In summary, the method obtains precise VOI information automatically from a well-defined standard 3D brain model, sparing the slice-by-slice manual drawing of ROIs from structural medical images required by the traditional procedure; it not only provides precise analysis results, but also improves the processing rate for large volumes of medical images in clinical use.
[Relations between biomedical variables: mathematical analysis or linear algebra?].
Hucher, M; Berlie, J; Brunet, M
1977-01-01
After a brief reminder of the structure of a model, the authors stress the two possible approaches to the relations linking the variables of such a model: the use of functions, which belongs to the sphere of mathematical analysis, and the use of linear algebra, which benefits from the development and automation of matrix computation. They specify the respective merits of these methods, their limits and the requirements for their use, according to the kind of variables and data and the objective of the work: understanding phenomena or supporting decision-making.
Effects of material properties and object orientation on precision grip kinematics.
Paulun, Vivian C; Gegenfurtner, Karl R; Goodale, Melvyn A; Fleming, Roland W
2016-08-01
Successfully picking up and handling objects requires taking into account their physical properties (e.g., material) and position relative to the body. Such features are often inferred by sight, but it remains unclear to what extent observers vary their actions depending on the perceived properties. To investigate this, we asked participants to grasp, lift and carry cylinders to a goal location with a precision grip. The cylinders were made of four different materials (Styrofoam, wood, brass and an additional brass cylinder covered with Vaseline) and were presented at six different orientations with respect to the participant (0°, 30°, 60°, 90°, 120°, 150°). Analysis of their grasping kinematics revealed differences in timing and spatial modulation at all stages of the movement that depended on both material and orientation. Object orientation affected the spatial configuration of index finger and thumb during the grasp, but also the timing of handling and transport duration. Material affected the choice of local grasp points and the duration of the movement from the first visual input until release of the object. We find that conditions that make grasping more difficult (orientation with the base pointing toward the participant, high weight and low surface friction) lead to longer durations of individual movement segments and a more careful placement of the fingers on the object.
Department of Defense Precise Time and Time Interval program improvement plan
NASA Technical Reports Server (NTRS)
Bowser, J. R.
1981-01-01
The United States Naval Observatory is responsible for ensuring uniformity in precise time and time interval operations including measurements, the establishment of overall DOD requirements for time and time interval, and the accomplishment of objectives requiring precise time and time interval with minimum cost. An overview of the objectives, the approach to the problem, the schedule, and a status report, including significant findings relative to organizational relationships, current directives, principal PTTI users, and future requirements as currently identified by the users are presented.
Mechanical stability of a microscope setup working at a few kelvins for single-molecule localization
NASA Astrophysics Data System (ADS)
Hinohara, Takuya; Hamada, Yuki I.; Nakamura, Ippei; Matsushita, Michio; Fujiyoshi, Satoru
2013-06-01
A great advantage of single-molecule fluorescence imaging is the localization of molecules with precision beyond the diffraction limit. Although longer signal acquisition yields higher precision, acquisition time at room temperature is normally limited by photobleaching, thermal diffusion, and so on. At a low temperature of a few kelvins, much longer acquisition is possible and will improve precision if the sample and the objective are held stably enough. The present work examined the holding stability of the sample and objective at 1.5 K in superfluid helium in the helium bath. The stability was evaluated by the localization precision of a point scattering source, a polymer bead. Scattered light was collected by the objective and imaged by a home-built rigid imaging unit. The standard deviation of the centroid position determined for 800 images taken continuously in 17 min was 0.5 nm in the horizontal and 0.9 nm in the vertical directions.
Gentilucci, Maurizio; Campione, Giovanna Cristina; Dalla Volta, Riccardo; Bernardis, Paolo
2009-12-01
Does the mirror system affect the control of speech? This issue was addressed in behavioral and Transcranial Magnetic Stimulation (TMS) experiments. In behavioral experiment 1, participants pronounced the syllable /da/ while observing (1) a hand grasping large and small objects with power and precision grasps, respectively, (2) a foot interacting with large and small objects and (3) differently sized objects presented alone. Voice formant 1 was higher when observing power as compared to precision grasp, whereas it remained unaffected by observation of the different types of foot interaction and objects alone. In TMS experiment 2, we stimulated hand motor cortex, while participants observed the two types of grasp. Motor Evoked Potentials (MEPs) of hand muscles active during the two types of grasp were greater when observing power than precision grasp. In experiments 3-5, TMS was applied to tongue motor cortex of participants silently pronouncing the syllable /da/ and simultaneously observing power and precision grasps, pantomimes of the two types of grasps, and differently sized objects presented alone. Tongue MEPs were greater when observing power than precision grasp either executed or pantomimed. Finally, in TMS experiment 6, the observation of foot interaction with large and small objects did not modulate tongue MEPs. We hypothesized that grasp observation activated motor commands to the mouth as well as to the hand that were congruent with the hand kinematics implemented in the observed type of grasp. The commands to the mouth selectively affected postures of phonation organs and consequently basic features of phonological units.
Magnetic Braking: A Video Analysis
NASA Astrophysics Data System (ADS)
Molina-Bolívar, J. A.; Abella-Palacios, A. J.
2012-10-01
This paper presents a laboratory exercise that introduces students to the use of video analysis software and a demonstration of Lenz's law. Digital techniques have proved to be very useful for the understanding of physical concepts. In particular, the availability of affordable digital video offers students the opportunity to actively engage in kinematics in introductory-level physics.1,2 By using a digital video's frame-advance feature and "marking" the position of a moving object in each frame, students are able to determine the position of an object at much smaller time increments, and more precisely, than would be possible with common timing devices. Once the student collects data consisting of positions and times, these values may be manipulated to determine velocity and acceleration. There are a variety of commercial and free applications that can be used for video analysis. Because the relevant technology has become inexpensive, video analysis has become a prevalent tool in introductory physics courses.
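The position-to-velocity step such exercises rely on is just a finite difference over the frame times; for instance (frame rate and positions invented):

```python
# Sketch: velocity and acceleration from frame-by-frame positions
# marked in video-analysis software. Data are invented.
import numpy as np

dt = 1 / 30.0                     # assumed frame interval (30 fps)
t = np.arange(0, 1, dt)
x = 0.5 * 2.0 * t**2              # positions of a uniformly accelerating object

v = np.gradient(x, dt)            # central differences
a = np.gradient(v, dt)
print(f"mean acceleration ~ {a[1:-1].mean():.2f} m/s^2")  # ~2.0
```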
Shape optimization using a NURBS-based interface-enriched generalized FEM
Najafi, Ahmad R.; Safdari, Masoud; Tortorelli, Daniel A.; ...
2016-11-26
This study presents a gradient-based shape optimization over a fixed mesh using a non-uniform rational B-splines-based interface-enriched generalized finite element method, applicable to multi-material structures. In the proposed method, non-uniform rational B-splines are used to parameterize the design geometry precisely and compactly by a small number of design variables. An analytical shape sensitivity analysis is developed to compute derivatives of the objective and constraint functions with respect to the design variables. Subtle but important new terms involve the sensitivity of the shape functions and their spatial derivatives. Verification and illustrative problems are solved to demonstrate the precision and capability of the method.
Results and lessons from the GMOS survey of transiting exoplanet atmospheres
NASA Astrophysics Data System (ADS)
Todorov, Kamen; Desert, Jean-Michel; Huitson, Catherine; Bean, Jacob; Fortney, Jonathan; Bergmann, Marcel; Stevenson, Kevin
2018-01-01
We present results from the first comprehensive survey program dedicated to probing transiting exoplanet atmospheres using transmission spectroscopy with a multi-object spectrograph (MOS). Our four-year survey focused on ten close-in giant planets for which the wavelength-dependent transit depths in the visible were measured with Gemini/GMOS. We present the complete analysis of all the targets observed (50 transits, 300 hours) and the challenges overcome to achieve the best spectrophotometric precision (200-500 ppm / 10 nm). We also present the main results and conclusions from this survey. We show that the precision achieved permits hazy atmospheres to be distinguished from cloud-free ones. We discuss the challenges faced by such an experiment and the lessons learnt for future MOS surveys. Finally, we lay out the challenges facing future ground-based MOS transit surveys aiming at the atmospheric characterization of habitable worlds and utilizing the next generation of multi-object spectrographs mounted on extremely large ground-based telescopes (ELT, TMT).
Automated metastatic brain lesion detection: a computer aided diagnostic and clinical research tool
NASA Astrophysics Data System (ADS)
Devine, Jeremy; Sahgal, Arjun; Karam, Irene; Martel, Anne L.
2016-03-01
The accurate localization of brain metastases in magnetic resonance (MR) images is crucial for patients undergoing stereotactic radiosurgery (SRS) to ensure that all neoplastic foci are targeted. Computer-automated tumor localization and analysis can improve both of these tasks by eliminating inter- and intra-observer variations during the MR image reading process. Lesion localization is accomplished using adaptive thresholding to extract enhancing objects. Each enhancing object is represented as a vector of features which includes information on object size, symmetry, position, shape, and context. These vectors are then used to train a random forest classifier. We trained and tested the image analysis pipeline on 3D axial contrast-enhanced MR images with the intention of localizing the brain metastases. In our cross-validation study, at the most effective algorithm operating point, we were able to identify 90% of the lesions at a precision rate of 60%.
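A bare-bones version of the two-stage pipeline (threshold enhancing objects, describe each with simple features, classify with a random forest) might look like this; the image, the feature set, and the training labels are synthetic placeholders, and plain Otsu thresholding stands in for the adaptive thresholding used in the paper:

```python
# Sketch: extract bright objects, describe them with simple features,
# and score them with a random forest. All data are synthetic.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
image = rng.normal(100, 10, (64, 64))
image[20:26, 30:36] += 80            # a bright "enhancing" blob

mask = image > threshold_otsu(image)   # stands in for adaptive thresholding
features = []
for region in regionprops(label(mask)):
    cy, cx = region.centroid
    features.append([region.area, region.eccentricity, cy, cx])

# Classifier trained on synthetic labeled examples (placeholder).
X_train = rng.normal(size=(200, 4))
y_train = rng.integers(0, 2, 200)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(clf.predict(np.asarray(features)))   # 1 = candidate metastasis
```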
Yue Xu, Selene; Nelson, Sandahl; Kerr, Jacqueline; Godbole, Suneeta; Patterson, Ruth; Merchant, Gina; Abramson, Ian; Staudenmayer, John; Natarajan, Loki
2018-04-01
Physical inactivity is a recognized risk factor for many chronic diseases. Accelerometers are increasingly used as an objective means to measure daily physical activity. One challenge in using these devices is missing data due to device nonwear. We used a well-characterized cohort of 333 overweight postmenopausal breast cancer survivors to examine missing data patterns of accelerometer outputs over the day. Based on these observed missingness patterns, we created pseudo-simulated datasets with realistic missing data patterns. We developed statistical methods to design imputation and variance-weighting algorithms to account for missing data effects when fitting regression models. Bias and precision of each method were evaluated and compared. Our results indicated that not accounting for missing data in the analysis yielded unstable estimates in the regression analysis. Incorporating variance weights and/or subject-level imputation improved precision by >50%, compared to ignoring missing data. We recommend that these simple easy-to-implement statistical tools be used to improve analysis of accelerometer data.
Inverse probability weighting for covariate adjustment in randomized studies.
Shen, Changyu; Li, Xiaochun; Li, Lingling
2014-02-20
Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented, along with an application of the proposed method to a real data example.
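A bare-bones version of the weighting idea (not the authors' two-stage procedure) is sketched below: the propensity model is fit on baseline covariates only, before outcomes are examined, and arm means are then inverse-probability weighted; all data are simulated.

```python
# Sketch: inverse probability weighting in a randomized study. The
# propensity model is fit on covariates alone, then treatment-arm means
# are weighted by 1/propensity. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 1000
X = rng.normal(size=(n, 3))       # baseline covariates
A = rng.integers(0, 2, n)         # randomized treatment assignment
Y = 1.0 * A + X[:, 0] + rng.normal(size=n)

ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
w = np.where(A == 1, 1 / ps, 1 / (1 - ps))
effect = (np.average(Y[A == 1], weights=w[A == 1])
          - np.average(Y[A == 0], weights=w[A == 0]))
print(f"IPW-adjusted treatment effect ~ {effect:.2f}")  # truth is 1.0
```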
Dos Santos, Wellington P; de Assis, Francisco M; de Souza, Ricardo E; Dos Santos Filho, Plinio B
2008-01-01
Alzheimer's disease is the most common cause of dementia, yet it is hard to diagnose precisely without invasive techniques, particularly at the onset of the disease. This work approaches the image analysis and classification of synthetic multispectral images composed of diffusion-weighted (DW) magnetic resonance (MR) cerebral images for evaluating the cerebrospinal fluid area and measuring the progression of Alzheimer's disease. A clinical 1.5 T MR imaging system was used to acquire all images presented. The classification methods are based on Objective Dialectical Classifiers, a new method grounded in Dialectics as defined in the Philosophy of Praxis. A 2-degree polynomial network with supervised training is used to generate the ground truth image. The classification results are used to improve the usual analysis of the apparent diffusion coefficient map.
A Mission Planning Approach for Precision Farming Systems Based on Multi-Objective Optimization.
Zhai, Zhaoyu; Martínez Ortega, José-Fernán; Lucas Martínez, Néstor; Rodríguez-Molina, Jesús
2018-06-02
As the demand for food grows continuously, intelligent agriculture has drawn much attention due to its capability of producing great quantities of food efficiently. The main purpose of intelligent agriculture is to plan agricultural missions properly and use limited resources reasonably with minor human intervention. This paper proposes a Precision Farming System (PFS) as a Multi-Agent System (MAS). Components of the PFS are treated as agents with different functionalities, and these agents can form coalitions to complete complex agricultural missions cooperatively. In a PFS, mission planning should consider several criteria, such as expected benefit, energy consumption or equipment loss. Hence, mission planning can be treated as a Multi-objective Optimization Problem (MOP). In order to solve the MOP, an improved algorithm, MP-PSOGA, is proposed, taking advantage of genetic algorithms and particle swarm optimization. A simulation of a precise pesticide-spraying mission is performed to verify the feasibility of the proposed approach. The simulation results illustrate that the proposed approach works properly and enables the PFS to plan missions and allocate scarce resources efficiently. The theoretical analysis and simulation are a good foundation for future study. Once the proposed approach is applied to a real scenario, it is expected to bring significant economic improvement.
Software for Automated Image-to-Image Co-registration
NASA Technical Reports Server (NTRS)
Benkelman, Cody A.; Hughes, Heidi
2007-01-01
The project objectives are: a) develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) create a reusable software development kit (SDK) to enable incorporation of these tools into other software; c) provide automated testing for quantitative analysis; and d) develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.
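For illustration, subpixel image-to-image co-registration can be performed by upsampled phase correlation; the sketch below uses scikit-image's implementation on a synthetically shifted pair and is not the project's SDK:

```python
# Sketch: subpixel co-registration of an image pair by phase correlation.
# The shift applied here is synthetic; upsample_factor=100 gives ~0.01 px.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(6)
reference = rng.random((256, 256))
moving = nd_shift(reference, (3.27, -1.84))   # known subpixel offset

offset, error, _ = phase_cross_correlation(reference, moving,
                                           upsample_factor=100)
print(offset)   # ~ [-3.27, 1.84]: correction to align `moving` to `reference`
```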
Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, G
2016-05-01
With the advent of miniaturized inertial sensors, many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman filters in order to estimate the orientation of the body parts under study. These filters traditionally include fixed parameters, such as the process and observation noise variances, whose values have a large influence on the overall performance. It has been demonstrated that the optimal value of these parameters differs considerably for different motion intensities. Therefore, in this work, we show that, by applying frequency analysis to determine motion intensity, and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, thereby providing physicians with reliable objective data they can use in their daily practice.
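The adaptation described above can be sketched as a scalar Kalman filter whose process-noise variance is switched according to high-frequency energy in a sliding FFT window; the thresholds, variances, and signal below are invented for illustration, not the paper's values:

```python
# Sketch: 1-D Kalman filter whose process-noise variance Q is adapted to
# motion intensity, estimated from high-frequency energy in a sliding FFT.
import numpy as np

def motion_intensity(window, fs=100.0):
    spec = np.abs(np.fft.rfft(window - window.mean()))
    freqs = np.fft.rfftfreq(len(window), 1 / fs)
    return spec[freqs > 2.0].sum() / (spec.sum() + 1e-12)

rng = np.random.default_rng(7)
z = np.concatenate([np.zeros(200), np.sin(np.linspace(0, 40, 200))])
z += rng.normal(0, 0.05, z.size)     # noisy orientation measurements

x, P, R = 0.0, 1.0, 0.05**2
estimates = []
for k in range(z.size):
    window = z[max(0, k - 64):k + 1]
    Q = 1e-3 if motion_intensity(window) > 0.2 else 1e-6  # adapt Q to intensity
    P += Q                           # predict (static state model)
    K = P / (P + R)                  # Kalman gain
    x += K * (z[k] - x)              # update state estimate
    P *= (1 - K)                     # update error covariance
    estimates.append(x)
print(f"final estimate {estimates[-1]:.3f}")
```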
NASA Astrophysics Data System (ADS)
Sarkar, Arnab; Karki, Vijay; Aggarwal, Suresh K.; Maurya, Gulab S.; Kumar, Rohit; Rai, Awadhesh K.; Mao, Xianglei; Russo, Richard E.
2015-06-01
Laser-induced breakdown spectroscopy (LIBS) was applied to the elemental characterization of high-alloy steel using partial least squares regression (PLSR), with the objective of evaluating the analytical performance of this multivariate approach. The optimization of the number of principal components for minimizing error in the PLSR algorithm was investigated. The effect of different pre-treatment procedures applied to the raw spectral data before PLSR analysis was evaluated based on several statistical parameters (standard error of prediction, percentage relative error of prediction, etc.). Pre-treatment with the "NORM" parameter gave the optimum statistical results. The analytical performance of the PLSR model improved with an increasing number of laser pulses accumulated per spectrum, as well as by truncating the spectrum to an appropriate wavelength region. It was found that the statistical benefit of truncating the spectrum can also be accomplished by increasing the number of laser pulses per accumulation without spectral truncation. The constituents (Co and Mo) present at hundreds of ppm were determined with a relative precision of 4-9% (2σ), whereas the major constituents Cr and Ni (present at a few percent levels) were determined with a relative precision of ~2% (2σ).
Precise orbit computation and sea surface modeling
NASA Technical Reports Server (NTRS)
Wakker, Karel F.; Ambrosius, B. A. C.; Rummel, R.; Vermaat, E.; Deruijter, W. P. M.; Vandermade, J. W.; Zimmerman, J. T. F.
1991-01-01
The research project described below is part of a long-term program at Delft University of Technology aiming at the application of European Remote Sensing satellite (ERS-1) and TOPEX/POSEIDON altimeter measurements for geophysical purposes. This program started in 1980 with the processing of Seasat laser range and altimeter height measurements and concentrates today on the analysis of Geosat altimeter data. The objectives of the TOPEX/POSEIDON research project are the tracking of the satellite by the Dutch mobile laser tracking system MTLRS-2, the computation of precise TOPEX/POSEIDON orbits, the analysis of the spatial and temporal distribution of the orbit errors, the improvement of ERS-1 orbits through the information obtained from the altimeter crossover difference residuals for crossing ERS-1 and TOPEX/POSEIDON tracks, the combination of ERS-1 and TOPEX/POSEIDON altimeter data into a single high-precision data set, and the application of this data set to model the sea surface. The latter application will focus on the determination of detailed regional mean sea surfaces, sea surface variability, ocean topography, and ocean currents in the North Atlantic, the North Sea, the seas around Indonesia, the West Pacific, and the oceans around South Africa.
NASA Astrophysics Data System (ADS)
Anai, T.; Kochi, N.; Yamada, M.; Sasaki, T.; Otani, H.; Sasaki, D.; Nishimura, S.; Kimoto, K.; Yasui, N.
2015-05-01
As 3D image measurement software has come into wide use with recent developments in computer-vision technology, 3D measurement from images has found applications ranging from desktop objects to topographic surveys of large geographical areas. In particular, orientation, formerly a complicated step in image measurement, can now be performed automatically simply by taking many pictures around the object. For fully textured objects, the 3D measurement of surface features is carried out entirely automatically from the oriented images, which has greatly facilitated the acquisition of dense, high-precision 3D point clouds from images. Against this background, we now offer all-around 3D measurement of small and medium-sized objects using a single commercially available digital camera, and we have also developed technology for topographic measurement using airborne images taken by a small UAV [1~5]. In the present study, for small objects, we examine the accuracy of surface measurement (matching) using experimental data. For topographic measurement, we examine the influence of GCP distribution on accuracy, again using experimental data. We also examined differences in the analytical results among several 3D image measurement software packages. This document reviews the processing flow of orientation and 3D measurement in each package and explains the features of each. To verify the precision of stereo matching, we measured a test plane and a test sphere of known form and assessed the results. For the topographic measurement, we used airborne image data photographed at the test field in Yadorigi, Matsuda City, Kanagawa Prefecture, Japan, with Ground Control Points measured by RTK-GPS and Total Station. We present the results of the analysis from each of the 3D image measurement software packages and examine further the influence of GCP distribution on precision.
NASA Technical Reports Server (NTRS)
2008-01-01
We can determine distances between objects and points of interest in 3-D space to a useful degree of accuracy from a set of camera images by using multiple camera views and reference targets in the camera's field of view (FOV). The core of the software processing is based on the previously developed foreign-object debris vision trajectory software (see KSC Research and Technology 2004 Annual Report, pp. 2-5). The current version of this photogrammetry software includes the ability to calculate distances between any specified point pairs; the ability to process any number of reference targets and any number of camera images; user-friendly editing features, including zoom in/out, translate, and load/unload; routines to help mark reference points with a Find function while comparing them with the reference point database file; and a comprehensive output report in HTML format. In this system, scene reference targets are replaced by a photogrammetry cube whose exterior surface contains multiple predetermined precision 2-D targets. Precise measurement of the cube's 2-D targets during the fabrication phase eliminates the need for measuring 3-D coordinates of reference target positions in the camera's FOV using, for example, a survey theodolite or a FaroArm. Placing the 2-D targets on the cube's surface required the development of precise machining methods. In response, 2-D targets were embedded into the surface of the cube and then painted black for high contrast. A 12-inch collapsible cube was developed for room-size scenes. A 3-inch, solid, stainless-steel photogrammetry cube was also fabricated for photogrammetry analysis of small objects.
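The KSC software itself is not described at the algorithmic level here; as background, the sketch below shows one standard way to recover a 3-D point from two camera views: the midpoint of the common perpendicular between the two viewing rays. Camera centers and ray directions are assumed known (e.g., from calibration against the cube's targets), and none of the names below come from the original software.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Midpoint of the common perpendicular between two camera rays
    (camera center c, direction d): a standard two-view triangulation,
    assumed here rather than taken from the KSC software."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                     # ~0 only for parallel rays
    s = (b * e - c * d) / denom               # parameter along ray 1
    t = (a * e - b * d) / denom               # parameter along ray 2
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

p_true = np.array([1.0, 2.0, 5.0])
c1, c2 = np.zeros(3), np.array([1.0, 0.0, 0.0])
p = triangulate(c1, p_true - c1, c2, p_true - c2)
print(p, np.linalg.norm(p - p_true))          # recovers [1, 2, 5]
```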
NASA Technical Reports Server (NTRS)
Hibbard, William L.; Dyer, Charles R.; Paul, Brian E.
1994-01-01
The VIS-AD data model integrates metadata about the precision of values, including missing data indicators and the way that arrays sample continuous functions, with the data objects of a scientific programming language. The data objects of this data model form a lattice, ordered by the precision with which they approximate mathematical objects. We define a similar lattice of displays and study visualization processes as functions from data lattices to display lattices. Such functions can be applied to visualize data objects of all data types and are thus polymorphic.
Enhanced online convolutional neural networks for object tracking
NASA Astrophysics Data System (ADS)
Zhang, Dengzhuo; Gao, Yun; Zhou, Hao; Li, Tianwen
2018-04-01
In recent years, object tracking based on convolutional neural networks has gained more and more attention. The initialization and update of the convolution filters directly affect the precision of object tracking. In this paper, a novel object tracker based on an enhanced online convolutional neural network without offline training is proposed, which initializes the convolution filters with a k-means++ algorithm and updates the filters by error back-propagation. Comparative experiments with 7 trackers on 15 challenging sequences showed that our tracker performs better than the other trackers in terms of AUC and precision.
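A minimal sketch of the k-means++ seeding step named above, applied to patches of the first frame to produce initial convolution filters; the patch size, filter count, and normalization are assumptions, and the back-propagation update is omitted.

```python
import numpy as np

def kmeans_pp_filters(image, k=16, size=5, n_patches=2000, seed=0):
    """Draw random patches from the first frame and pick k of them by
    k-means++ seeding; the chosen patches (zero-mean, unit-norm) serve
    as initial convolution filters. All sizes here are assumptions."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ys = rng.integers(0, h - size, n_patches)
    xs = rng.integers(0, w - size, n_patches)
    patches = np.stack([image[y:y + size, x:x + size].ravel() for y, x in zip(ys, xs)])
    patches -= patches.mean(axis=1, keepdims=True)
    patches /= np.linalg.norm(patches, axis=1, keepdims=True) + 1e-8

    centers = [patches[rng.integers(n_patches)]]
    for _ in range(k - 1):
        # k-means++: sample the next center with probability proportional
        # to the squared distance to the nearest center chosen so far.
        d2 = np.min([np.sum((patches - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(patches[rng.choice(n_patches, p=d2 / d2.sum())])
    return np.stack(centers).reshape(k, size, size)

frame = np.random.default_rng(2).random((120, 160))
filters = kmeans_pp_filters(frame)
print(filters.shape)  # (16, 5, 5)
```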
Highly accurate quantitative spectroscopy of massive stars in the Galaxy
NASA Astrophysics Data System (ADS)
Nieva, María-Fernanda; Przybilla, Norbert
2017-11-01
Achieving high accuracy and precision in stellar parameter and chemical composition determinations is challenging in massive star spectroscopy. On one hand, target selection for an unbiased sample build-up is complicated by several types of peculiarities that can occur in individual objects. On the other hand, composite spectra are often not recognized as such, even at medium-high spectral resolution and typical signal-to-noise ratios, although multiplicity among massive stars is widespread. In particular, surveys that produce large amounts of automatically reduced data are prone to overlook details that become hazardous for analysis techniques developed under a set of standard assumptions applicable to the spectrum of a single star. The result may be much larger systematic errors than anticipated, because the true nature of the investigated objects goes unrecognized, or much smaller sample sizes than initially planned, if it is recognized. Further factors to be taken care of range from the choice of instrument and the details of the data reduction chain to the choice of modelling code, input data, and analysis technique, and the selection of the spectral lines to be analyzed. Only when all these possible pitfalls are avoided can a precise and accurate characterization of the stars be achieved, in terms of fundamental parameters and chemical fingerprints, that forms the basis for further investigations regarding, e.g., stellar structure and evolution or the chemical evolution of the Galaxy. The scope of the present work is to provide the massive-star and other astrophysical communities with criteria to evaluate the quality of spectroscopic investigations of massive stars before interpreting them in a broader context. The discussion is guided by our experience from over a decade of studies in massive star spectroscopy, ranging from the simplest single objects to multiple systems.
Plasmonic micropillars for precision cell force measurement across a large field-of-view
NASA Astrophysics Data System (ADS)
Xiao, Fan; Wen, Ximiao; Tan, Xing Haw Marvin; Chiou, Pei-Yu
2018-01-01
A plasmonic micropillar platform with self-organized gold nanospheres is reported for precision cell traction force measurement across a large field of view (FOV). Gold nanospheres were implanted into the tips of polymer micropillars by annealing gold microdisks with nanosecond laser pulses. Each gold nanosphere is physically anchored in the center of a pillar tip and serves as a strong, point-source-like light scattering center for its micropillar. This allows a micropillar to be clearly observed and precisely tracked even under a low-magnification objective lens, enabling concurrent, precise measurement across a large FOV. A spatial resolution of 30 nm for the pillar deflection measurement has been accomplished on this platform with a 20× objective lens.
Accurate object tracking system by integrating texture and depth cues
NASA Astrophysics Data System (ADS)
Chen, Ju-Chin; Lin, Yu-Hang
2016-03-01
A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information that separates the object from the background. Additionally, depth information, which is important for distinguishing the object from a complicated background, is integrated. We propose two depth-based models that can complement texture information to cope with both appearance variations and background clutter. Moreover, to reduce the risk of drift, which increases for textureless depth templates, an update mechanism is proposed that selects more reliable tracking results and avoids incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system achieves the best success rate and more accurate tracking results than other well-known algorithms.
2014-08-26
Indium, Rhodium, Ruthenium, Tungsten, Titanium, Chromium, Palladium, Copper, Platinum and Magnesium. These have been chosen because all of them ... performance. ... Considering that the observed behaviors occur precisely where UV surface-enhanced Raman spectra indicated strong local field ... research objective was centered on the UV plasmonic properties of Rh NPs by means of surface-enhanced Raman spectroscopy, surface-enhanced ...
Fully automatic and precise data analysis developed for time-of-flight mass spectrometry.
Meyer, Stefan; Riedo, Andreas; Neuland, Maike B; Tulej, Marek; Wurz, Peter
2017-09-01
Scientific objectives of current and future space missions are focused on the investigation of the origin and evolution of the solar system, with particular emphasis on habitability and signatures of past and present life. For in situ measurements of the chemical composition of solid samples on planetary surfaces, of the neutral atmospheric gas, and of the thermal plasma of planetary atmospheres, mass spectrometers making use of time-of-flight mass analysers are a widely used technique. However, such investigations imply measurements with good statistics and, thus, a large amount of data to be analysed. Therefore, faster and especially robust automated data analysis with enhanced accuracy is required. In this contribution, an automatic data analysis software, which allows fast and precise quantitative analysis of time-of-flight mass spectrometric data, is presented and discussed in detail. A crucial part of this software is a robust and fast peak finding algorithm with a consecutive numerical integration method allowing precise data analysis. We tested our analysis software with data from different time-of-flight mass spectrometers and different measurement campaigns thereof. The quantitative analysis of isotopes, using automatic data analysis, yields isotope ratios with an accuracy of up to 100 ppm for a signal-to-noise ratio (SNR) of 10^4. We show that the accuracy of isotope ratios is in fact proportional to SNR^-1. Furthermore, we observe that the accuracy of isotope ratios is inversely proportional to the mass resolution, and that it depends on the sample width T_s as T_s^0.5. Copyright © 2017 John Wiley & Sons, Ltd.
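The paper's peak-finding and integration algorithm is not reproduced in the abstract; the sketch below shows the general shape of such a pipeline using off-the-shelf SciPy peak detection and trapezoidal integration on a synthetic two-isotope spectrum. The peak positions, widths, detection threshold, and baseline correction are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def isotope_ratio(spectrum, peak_a, peak_b, half_width=10):
    """Locate peaks near two expected flight-time bins and integrate
    each numerically; bin positions and widths are illustrative."""
    peaks, _ = find_peaks(spectrum, height=5 * np.median(spectrum))
    def area(center):
        p = peaks[np.argmin(np.abs(peaks - center))]   # nearest detected peak
        lo, hi = p - half_width, p + half_width
        baseline = 0.5 * (spectrum[lo] + spectrum[hi]) * (hi - lo)
        return np.trapz(spectrum[lo:hi + 1]) - baseline
    return area(peak_a) / area(peak_b)

# Synthetic two-isotope spectrum: Gaussian peaks of 3:1 abundance plus noise.
t = np.arange(2000)
spec = (3.0 * np.exp(-0.5 * ((t - 700) / 4) ** 2)
        + 1.0 * np.exp(-0.5 * ((t - 900) / 4) ** 2)
        + 0.01 * np.random.default_rng(3).random(t.size))
print(isotope_ratio(spec, 700, 900))  # ~3.0
```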
Renne, Walter; Ludlow, Mark; Fryml, John; Schurch, Zach; Mennito, Anthony; Kessler, Ray; Lauer, Abigail
2017-07-01
As digital impressions become more common and more digital impression systems are released onto the market, it is essential to systematically and objectively evaluate their accuracy. The purpose of this in vitro study was to evaluate and compare the trueness and precision of 6 intraoral scanners and 1 laboratory scanner in both sextant and complete-arch scenarios. Furthermore, scanning time was evaluated and correlated with trueness and precision. A custom complete-arch model was fabricated with a refractive index similar to that of tooth structure. Seven digital impression systems were used to scan the custom model in both posterior sextant and complete-arch scenarios. Analysis was performed using 3-dimensional metrology software to measure discrepancies between the master model and experimental casts. Of the intraoral scanners, the Planscan was found to have the best trueness and precision for sextant scanning, while the 3Shape Trios was found to have the poorest (P<.001). The order of trueness for complete-arch scanning was: 3Shape D800 >iTero >3Shape TRIOS 3 >Carestream 3500 >Planscan >CEREC Omnicam >CEREC Bluecam. The order of precision for complete-arch scanning was: CS3500 >iTero >3Shape D800 >3Shape TRIOS 3 >CEREC Omnicam >Planscan >CEREC Bluecam. For the secondary outcome evaluating the effect of time on trueness and precision, the complete-arch scan time was highly correlated with both trueness (r=0.771) and precision (r=0.771). For sextant scanning, the Planscan was found to be the most precise and true scanner. For complete-arch scanning, the 3Shape Trios was found to have the best balance of speed and accuracy. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Angular trapping of anisometric nano-objects in a fluid.
Celebrano, Michele; Rosman, Christina; Sönnichsen, Carsten; Krishnan, Madhavi
2012-11-14
We demonstrate the ability to trap, levitate, and orient single anisometric nanoscale objects with high angular precision in a fluid. An electrostatic fluidic trap confines a spherical object at a spatial location defined by the minimum of the electrostatic system free energy. For an anisometric object and a potential well lacking angular symmetry, the system free energy can further strongly depend on the object's orientation in the trap. Engineering the morphology of the trap thus enables precise spatial and angular confinement of a single levitating nano-object, and the process can be massively parallelized. Since the physics of the trap depends strongly on the surface charge of the object, the method is insensitive to the object's dielectric function. Furthermore, levitation of the assembled objects renders them amenable to individual manipulation using externally applied optical, electrical, or hydrodynamic fields, raising prospects for reconfigurable chip-based nano-object assemblies.
Cortical mechanisms for the segregation and representation of acoustic textures.
Overath, Tobias; Kumar, Sukhbinder; Stewart, Lauren; von Kriegstein, Katharina; Cusack, Rhodri; Rees, Adrian; Griffiths, Timothy D
2010-02-10
Auditory object analysis requires two fundamental perceptual processes: the definition of the boundaries between objects, and the abstraction and maintenance of an object's characteristic features. Although it is intuitive to assume that the detection of the discontinuities at an object's boundaries precedes the subsequent precise representation of the object, the specific underlying cortical mechanisms for segregating and representing auditory objects within the auditory scene are unknown. We investigated the cortical bases of these two processes for one type of auditory object, an "acoustic texture," composed of multiple frequency-modulated ramps. In these stimuli, we independently manipulated the statistical rules governing (1) the frequency-time space within individual textures (comprising ramps with a given spectrotemporal coherence) and (2) the boundaries between textures (adjacent textures with different spectrotemporal coherences). Using functional magnetic resonance imaging, we show mechanisms defining boundaries between textures with different coherences in primary and association auditory cortices, whereas texture coherence is represented only in association cortex. Furthermore, participants' superior detection of boundaries across which texture coherence increased (as opposed to decreased) was reflected in a greater neural response in auditory association cortex at these boundaries. The results suggest a hierarchical mechanism for processing acoustic textures that is relevant to auditory object analysis: boundaries between objects are first detected as a change in statistical rules over frequency-time space, before a representation that corresponds to the characteristics of the perceived object is formed.
Kaewkamnerd, Saowaluck; Uthaipibull, Chairat; Intarapanich, Apichart; Pannarut, Montri; Chaotheing, Sastra; Tongsima, Sissades
2012-01-01
Current malaria diagnosis relies primarily on microscopic examination of Giemsa-stained thick and thin blood films. This method requires rigorously trained technicians to efficiently detect and classify the malaria parasite species, such as Plasmodium falciparum (Pf) and Plasmodium vivax (Pv), for appropriate drug administration. However, accurate classification of parasite species is difficult to achieve because of inherent technical limitations and human inconsistency. To improve the performance of malaria parasite classification, many researchers have proposed automated malaria detection devices using digital image analysis. These image processing tools, however, focus on detection of parasites on thin blood films, which may miss parasites because of their scarcity on the thin blood film. The problem is aggravated under low-parasitemia conditions. Automated detection and classification of parasites on thick blood films, which contain more parasites per detection area, addresses this limitation. The prototype of an automatic malaria parasite identification system is equipped with mountable motorized units for controlling the movements of the objective lens and microscope stage. This unit was tested for its precision in moving the objective lens (vertical movement, z-axis) and the microscope stage (x- and y-horizontal movements). The average precisions of the x-, y-, and z-axis movements were 71.481 ± 7.266 μm, 40.009 ± 0.000 μm, and 7.540 ± 0.889 nm, respectively. Classification of parasites on 60 Giemsa-stained thick blood films (40 blood films containing infected red blood cells and 20 control blood films of normal red blood cells) was tested using the image analysis module. Comparing our results with those verified by trained malaria microscopists, the prototype detected parasite-positive and parasite-negative blood films with 95% and 68.5% accuracy, respectively. For classification performance, thick blood films with the Pv parasite were correctly classified with a success rate of 75%, while the accuracy of Pf classification was 90%. This work presents an automatic device for both detection and classification of malaria parasite species on thick blood films. The system is based on digital image analysis and featured with motorized stage units, designed to be easily mounted on most conventional light microscopes used in endemic areas. The constructed motorized module can control the movements of the objective lens and microscope stage at high precision for effective acquisition of quality images for analysis. The analysis program can accurately classify parasite species, into Pf or Pv, based on the distribution of chromatin size.
Demidenko, Natalia V; Penin, Aleksey A
2012-01-01
qRT-PCR is a generally acknowledged method for gene expression analysis due to its precision and reproducibility. However, it is well known that the accuracy of qRT-PCR data varies greatly depending on the experimental design and data analysis. Recently, a set of guidelines has been proposed that aims to improve the reliability of qRT-PCR. However, there are additional factors that have not been taken into consideration in these guidelines that can seriously affect the data obtained using this method. In this study, we report the influence that object morphology can have on qRT-PCR data. We have used a number of Arabidopsis thaliana mutants with altered floral morphology as models for this study. These mutants have been well characterised (including in terms of gene expression levels and patterns) by other techniques. This allows us to compare the results from the qRT-PCR with the results inferred from other methods. We demonstrate that the comparison of gene expression levels in objects that differ greatly in their morphology can lead to erroneous results.
Relevant Scatterers Characterization in SAR Images
NASA Astrophysics Data System (ADS)
Chaabouni, Houda; Datcu, Mihai
2006-11-01
Recognizing scenes in single-look, meter-resolution Synthetic Aperture Radar (SAR) images requires the capability to identify relevant signal signatures under conditions of variable image acquisition geometry and arbitrary object poses and configurations. Among the methods for detecting relevant scatterers in SAR images, we can mention internal coherence. The SAR spectrum, split in azimuth, generates a series of images that preserve high coherence only for particular object scattering. The detection of relevant scatterers can be done by correlation study or Independent Component Analysis (ICA) methods. The present article reviews the state of the art in SAR internal correlation analysis and proposes further extensions using elements of inference based on information theory applied to complex-valued signals. The set of azimuth-look images is analyzed using mutual information measures, and an equivalent channel capacity is derived. The localization of the "target" requires analysis in a small image window, which results in imprecise estimation of the second-order statistics of the signal. For better precision, a Hausdorff measure is introduced. The method is applied to detect and characterize relevant objects in urban areas.
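A minimal sketch of a histogram-based mutual-information measure between two co-registered azimuth-look amplitude images, in the spirit of the analysis described above; the bin count and synthetic speckle data are assumptions, and the abstract's channel-capacity derivation and Hausdorff refinement are not attempted here.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Histogram-based mutual information (in bits) between two
    co-registered azimuth-look amplitude images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(4)
look1 = rng.rayleigh(1.0, (128, 128))                       # speckle-like background
look2 = 0.7 * look1 + 0.3 * rng.rayleigh(1.0, (128, 128))   # partially coherent look
print(mutual_information(look1, look2))
```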
FY16 Safeguards Technology Cart-Portable Mass Spectrometer Project Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Cyril V.; Whitten, William B.
The Oak Ridge National Laboratory project for the Next Generation Safeguards Initiative Safeguards Technology Development Subprogram has been involved in the development of a cart-portable mass spectrometer based on a Thermo ITQ ion trap mass spectrometer (referred to simply as the ITQ) for the field analysis of 235U/238U ratios in UF6. A recent discovery of the project was that combining CO2 with UF6 and introducing the mixture to the mass spectrometer (MS) appeared to increase the ionization efficiency and, thus, reduce the amount of UF6 needed for an analysis while also reducing the corrosive effects of the sample. However, initial experimentation indicated that mixing parameters should be closely controlled to ensure reproducible results. To this end, a sample manifold (SM) that would ensure the precise mixing of UF6 and CO2 was designed and constructed. A number of experiments were outlined and conducted to determine the optimum MS and SM conditions providing the most stable isotope ratio analysis. The principal objective of the project was to provide a retrofit ITQ mass spectrometer operating with an SM capable of achieving a variation in precision of less than 1% over 1 hour of sampling. This goal was achieved by project end, with a variation in precision of 0.5 to 0.8% over 1 hour of sampling.
NASA Astrophysics Data System (ADS)
Wray, J. D.
2003-05-01
The robotic observatory telescope must point precisely at the target object and then track autonomously to a fraction of the FWHM of the system PSF for durations of ten to twenty minutes or more. It must retain this precision while continuing to function at rates approaching thousands of observations per night for all its years of useful life. These stringent requirements raise new challenges unique to robotic telescope systems design. Critical design considerations are driven by the applicability of the above requirements to all systems of the robotic observatory, including telescope and instrument systems, telescope-dome enclosure systems, combined electrical and electronics systems, environmental (e.g., seeing) control systems, and integrated computer control software systems. Traditional telescope design considerations include the effects of differential thermal strain, elastic flexure, plastic flexure, and slack or backlash with respect to focal stability, optical alignment, and angular pointing and tracking precision. Robotic observatory design must holistically encapsulate these traditional considerations within the overall objective of maximized long-term sustainable precision performance. This overall objective is accomplished by combining appropriate mechanical and dynamical system characteristics with a full-time, real-time telescope mount model feedback computer control system. Important design considerations include: identifying and reducing quasi-zero-backlash; increasing size to increase precision; directly encoding axis shaft rotation; pointing and tracking via real-time feedback between the precision mount model and axis-mounted encoders; using monolithic construction whenever appropriate for sustainable mechanical integrity; accelerating dome motion to eliminate repetitive shock; ducting internal telescope air outside the dome; and the principal design criterion: maximizing elastic repeatability while minimizing slack, plastic deformation, and hysteresis to facilitate long-term, repeatably precise pointing and tracking performance.
Analysis of video-recorded images to determine linear and angular dimensions in the growing horse.
Hunt, W F; Thomas, V G; Stiefel, W
1999-09-01
Studies of growth and conformation require statistical methods that are not applicable to subjective conformation standards used by breeders and trainers. A new system was developed to provide an objective approach for both science and industry, based on analysis of video images to measure aspects of conformation that were represented by angles or lengths. A studio crush was developed in which video images of horses of different sizes were taken after bone protuberances, located by palpation, were marked with white paper stickers. Screen pixel coordinates of calibration marks, bone markers and points on horse outlines were digitised from captured images and corrected for aspect ratio and 'fish-eye' lens effects. Calculations from the corrected coordinates produced linear dimensions and angular dimensions useful for comparison of horses for conformation and experimental purposes. The precision achieved by the method in determining linear and angular dimensions was examined through systematically determining variance for isolated steps of the procedure. Angles of the front limbs viewed from in front were determined with a standard deviation of 2-5 degrees and effects of viewing angle were detectable statistically. The height of the rump and wither were determined with precision closely related to the limitations encountered in locating a point on a screen, which was greater for markers applied to the skin than for points at the edge of the image. Parameters determined from markers applied to the skin were, however, more variable (because their relation to bone position was affected by movement), but still provided a means by which a number of aspects of size and conformation can be determined objectively for many horses during growth. Sufficient precision was achieved to detect statistically relatively small effects on calculated parameters of camera height position.
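As a small worked example of the coordinate-based calculations described above, the sketch below corrects digitized screen coordinates for an assumed pixel aspect ratio and computes a joint angle from three markers; the aspect value and marker coordinates are hypothetical, and the study's fish-eye correction is omitted.

```python
import numpy as np

def correct_aspect(points, aspect=1.09):
    """Scale screen y-coordinates by the pixel aspect ratio (value assumed)."""
    pts = np.asarray(points, dtype=float)
    pts[:, 1] *= aspect
    return pts

def joint_angle(a, b, c):
    """Angle at marker b (degrees) formed by markers a-b-c."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical digitized screen coordinates of three forelimb markers.
markers = correct_aspect([(412, 310), (405, 520), (398, 700)])
print(f"limb angle: {joint_angle(*markers):.1f} deg")
```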
NASA Astrophysics Data System (ADS)
Ibrahim, Dahi Ghareab Abdelsalam; Yasui, Takeshi
2018-04-01
Two-wavelength phase-shift interferometry guided by optical frequency combs is presented. We demonstrate the operation of the setup on a large-step sample simultaneously with a resolution test target bearing a negative pattern. The technique can investigate multiple objects simultaneously with high precision. Using this technique, several important applications in metrology that require high speed and precision are demonstrated.
Audio-based performance evaluation of squash players
Hajdú-Szücs, Katalin; Fenyvesi, Nóra; Vattay, Gábor
2018-01-01
In competitive sports it is often very hard to quantify performance. Whether a player scores or is overtaken may depend on mere milliseconds or millimeters. In racquet sports like tennis, table tennis, and squash, many events occur within a short time, and recording and analyzing them can help reveal differences in performance. In this paper we show that it is possible to build a framework that uses characteristic sound patterns to precisely classify the types and localize the positions of these events. From this basic information, the shot types and the ball speed along the trajectories can be estimated. By comparing these estimates with the optimal speed and target, the precision of a shot can be defined. The detailed shot statistics and precision information significantly enrich and improve the data available today. Feeding them back to players and coaches makes it possible to describe playing performance objectively and to improve strategic skills. The framework is implemented, and its hardware and software components are installed and tested in a squash court. PMID:29579067
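The paper's localization method is not detailed in the abstract; below is one common way to localize an impact sound from microphone arrival times, a brute-force grid search over time-difference-of-arrival residuals. The court dimensions, microphone layout, and noiseless timing are assumptions.

```python
import numpy as np

C = 343.0  # speed of sound, m/s

def localize(mics, arrival_times, grid_step=0.05):
    """Grid search over court floor positions: pick the point whose
    predicted pairwise arrival-time differences best match the measured
    ones. Court size and mic layout are assumptions."""
    xs = np.arange(0.0, 6.4, grid_step)     # squash court ~6.4 m wide
    ys = np.arange(0.0, 9.75, grid_step)    # ~9.75 m long
    best, best_err = None, np.inf
    tdoa_meas = arrival_times - arrival_times[0]
    for x in xs:
        for y in ys:
            d = np.hypot(mics[:, 0] - x, mics[:, 1] - y)
            tdoa_pred = (d - d[0]) / C
            err = np.sum((tdoa_pred - tdoa_meas) ** 2)
            if err < best_err:
                best, best_err = (x, y), err
    return best

mics = np.array([[0.0, 0.0], [6.4, 0.0], [0.0, 9.75], [6.4, 9.75]])
true = np.array([2.0, 3.0])
times = np.hypot(*(mics - true).T) / C   # noiseless arrival times
print(localize(mics, times))             # ~(2.0, 3.0)
```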
Memory color of natural familiar objects: effects of surface texture and 3-D shape.
Vurro, Milena; Ling, Yazhu; Hurlbert, Anya C
2013-06-28
Natural objects typically possess characteristic contours, chromatic surface textures, and three-dimensional shapes. These diagnostic features aid object recognition, as does memory color, the color most associated in memory with a particular object. Here we aim to determine whether polychromatic surface texture, 3-D shape, and contour diagnosticity improve memory color for familiar objects, separately and in combination. We use solid three-dimensional familiar objects rendered with their natural texture, which participants adjust in real time to match their memory color for the object. We analyze mean, accuracy, and precision of the memory color settings relative to the natural color of the objects under the same conditions. We find that in all conditions, memory colors deviate slightly but significantly in the same direction from the natural color. Surface polychromaticity, shape diagnosticity, and three dimensionality each improve memory color accuracy, relative to uniformly colored, generic, or two-dimensional shapes, respectively. Shape diagnosticity improves the precision of memory color also, and there is a trend for polychromaticity to do so as well. Differently from other studies, we find that the object contour alone also improves memory color. Thus, enhancing the naturalness of the stimulus, in terms of either surface or shape properties, enhances the accuracy and precision of memory color. The results support the hypothesis that memory color representations are polychromatic and are synergistically linked with diagnostic shape representations.
The Undiagnosed Diseases Network: Accelerating Discovery about Health and Disease.
Ramoni, Rachel B; Mulvihill, John J; Adams, David R; Allard, Patrick; Ashley, Euan A; Bernstein, Jonathan A; Gahl, William A; Hamid, Rizwan; Loscalzo, Joseph; McCray, Alexa T; Shashi, Vandana; Tifft, Cynthia J; Wise, Anastasia L
2017-02-02
Diagnosis at the edges of our knowledge calls upon clinicians to be data driven, cross-disciplinary, and collaborative in unprecedented ways. Exact disease recognition, an element of the concept of precision in medicine, requires new infrastructure that spans geography, institutional boundaries, and the divide between clinical care and research. The National Institutes of Health (NIH) Common Fund supports the Undiagnosed Diseases Network (UDN) as an exemplar of this model of precise diagnosis. Its goals are to forge a strategy to accelerate the diagnosis of rare or previously unrecognized diseases, to improve recommendations for clinical management, and to advance research, especially into disease mechanisms. The network will achieve these objectives by evaluating patients with undiagnosed diseases, fostering a breadth of expert collaborations, determining best practices for translating the strategy into medical centers nationwide, and sharing findings, data, specimens, and approaches with the scientific and medical communities. Building the UDN has already brought insights to human and medical geneticists. The initial focus has been on data sharing, establishing common protocols for institutional review boards and data sharing, creating protocols for referring and evaluating patients, and providing DNA sequencing, metabolomic analysis, and functional studies in model organisms. By extending this precision diagnostic model nationally, we strive to meld clinical and research objectives, improve patient outcomes, and contribute to medical science. Copyright © 2017 American Society of Human Genetics. All rights reserved.
Cosmological surveys with multi-object spectrographs
NASA Astrophysics Data System (ADS)
Colless, Matthew
2016-08-01
Multi-object spectroscopy has been a key technique contributing to the current era of 'precision cosmology'. From the first exploratory surveys of the large-scale structure and evolution of the universe to the current generation of superbly detailed maps spanning a wide range of redshifts, multi-object spectroscopy has been a fundamentally important tool for mapping the rich structure of the cosmic web and extracting cosmological information of increasing variety and precision. This will continue to be true for the foreseeable future, as we seek to map the evolving geometry and structure of the universe over the full extent of cosmic history in order to obtain the most precise and comprehensive measurements of cosmological parameters. Here I briefly summarize the contributions that multi-object spectroscopy has made to cosmology so far, then review the major surveys and instruments currently in play and their prospects for pushing back the cosmological frontier. Finally, I examine some of the next generation of instruments and surveys to explore how the field will develop in coming years, with a particular focus on specialised multi-object spectrographs for cosmology and the capabilities of multi-object spectrographs on the new generation of extremely large telescopes.
Multisensory Self-Motion Compensation During Object Trajectory Judgments
Dokka, Kalpana; MacNeilage, Paul R.; DeAngelis, Gregory C.; Angelaki, Dora E.
2015-01-01
Judging object trajectory during self-motion is a fundamental ability for mobile organisms interacting with their environment. This fundamental ability requires the nervous system to compensate for the visual consequences of self-motion in order to make accurate judgments, but the mechanisms of this compensation are poorly understood. We comprehensively examined both the accuracy and precision of observers' ability to judge object trajectory in the world when self-motion was defined by vestibular, visual, or combined visual–vestibular cues. Without decision feedback, subjects demonstrated no compensation for self-motion that was defined solely by vestibular cues, partial compensation (47%) for visually defined self-motion, and significantly greater compensation (58%) during combined visual–vestibular self-motion. With decision feedback, subjects learned to accurately judge object trajectory in the world, and this generalized to novel self-motion speeds. Across conditions, greater compensation for self-motion was associated with decreased precision of object trajectory judgments, indicating that self-motion compensation comes at the cost of reduced discriminability. Our findings suggest that the brain can flexibly represent object trajectory relative to either the observer or the world, but a world-centered representation comes at the cost of decreased precision due to the inclusion of noisy self-motion signals. PMID:24062317
High-precision Orbit Fitting and Uncertainty Analysis of (486958) 2014 MU69
NASA Astrophysics Data System (ADS)
Porter, Simon B.; Buie, Marc W.; Parker, Alex H.; Spencer, John R.; Benecchi, Susan; Tanga, Paolo; Verbiscer, Anne; Kavelaars, J. J.; Gwyn, Stephen D. J.; Young, Eliot F.; Weaver, H. A.; Olkin, Catherine B.; Parker, Joel W.; Stern, S. Alan
2018-07-01
NASA’s New Horizons spacecraft will conduct a close flyby of the cold-classical Kuiper Belt Object (KBO) designated (486958) 2014 MU69 on 2019 January 1. At a heliocentric distance of 44 au, “MU69” will be the most distant object ever visited by a spacecraft. To enable this flyby, we have developed an extremely high-precision orbit fitting and uncertainty processing pipeline, making maximal use of the Hubble Space Telescope’s Wide Field Camera 3 (WFC3) and pre-release versions of the ESA Gaia Data Release 2 (DR2) catalog. This pipeline also enabled successful predictions of a stellar occultation by MU69 in 2017 July. We describe how we process the WFC3 images to match the Gaia DR2 catalog, extract positional uncertainties for this extremely faint target (typically 140 photons per WFC3 exposure), and translate those uncertainties into probability distribution functions for MU69 at any given time. We also describe how we use these uncertainties to guide New Horizons, plan stellar occultations of MU69, and derive MU69's orbital evolution and long-term stability.
Metric freeness and projectivity for classical and quantum normed modules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helemskii, A Ya
2013-07-31
In functional analysis, there are several diverse approaches to the notion of projective module. We show that a certain general categorical scheme contains all basic versions as special cases. In this scheme, the notion of free object comes to the foreground, and, in the best categories, projective objects are precisely retracts of free ones. We are especially interested in the so-called metric version of projectivity and characterize the metrically free classical and quantum (= operator) normed modules. Informally speaking, so-called extremal projectivity, which was known earlier, is interpreted as a kind of 'asymptotical metric projectivity'. In addition, we answer the following specific question in the geometry of normed spaces: what is the structure of metrically projective modules in the simplest case of normed spaces? We prove that metrically projective normed spaces are precisely the subspaces of l_1(M) (where M is a set) that are denoted by l_1^0(M) and consist of finitely supported functions. Thus, in this case, projectivity coincides with freeness. Bibliography: 28 titles.
Effects of motor congruence on visual working memory.
Quak, Michel; Pecher, Diane; Zeelenberg, Rene
2014-10-01
Grounded-cognition theories suggest that memory shares processing resources with perception and action. The motor system could be used to help memorize visual objects. In two experiments, we tested the hypothesis that people use motor affordances to maintain object representations in working memory. Participants performed a working memory task on photographs of manipulable and nonmanipulable objects. The manipulable objects were objects that required either a precision grip (i.e., small items) or a power grip (i.e., large items) to use. A concurrent motor task that could be congruent or incongruent with the manipulable objects caused no difference in working memory performance relative to nonmanipulable objects. Moreover, the precision- or power-grip motor task did not affect memory performance on small and large items differently. These findings suggest that the motor system plays no part in visual working memory.
NASA Astrophysics Data System (ADS)
Talamonti, James Joseph
1995-01-01
Future NASA proposals include the placement of optical interferometer systems in space for a wide variety of astrophysical studies, including a vastly improved deflection test of general relativity, a precise and direct calibration of the Cepheid distance scale, and the determination of stellar masses (Reasenberg et al., 1988). There are also plans for placing large array telescopes on the moon with the ultimate objective of being able to measure angular separations of less than 10 micro-arcseconds (Burns, 1990). These and other future projects will require interferometric measurement of the (baseline) distance between the optical elements comprising the systems. Eventually, space-qualifiable interferometers capable of picometer (10^-12 m) relative precision and nanometer (10^-9 m) absolute precision will be required. A numerical model was developed to emulate the capabilities of systems performing interferometric noncontact absolute distance measurements. The model incorporates known methods to minimize signal processing and digital sampling errors and evaluates the accuracy limitations imposed by spectral peak isolation using Hanning, Blackman, and Gaussian windows in the fast Fourier transform technique. We applied this model to the specific case of measuring the relative lengths of a compound Michelson interferometer using a frequency-scanned laser. By processing computer-simulated data through our model, the ultimate precision is projected for ideal data and for data containing AM/FM noise; the precision is shown to be limited by nonlinearities in the laser scan. A laboratory system was developed by implementing ultra-stable external-cavity diode lasers into existing interferometric measuring techniques. The capabilities of the system were evaluated and increased by using the computer modeling results as guidelines for the data analysis. Experimental results measured 1-3 meter baselines with <20 micron precision. Comparison of the laboratory and modeling results showed that the laboratory precisions obtained were of the same order of magnitude as those predicted for computer-generated results under similar conditions. We believe that our model can be implemented as a tool in the design of new metrology systems capable of meeting the precisions required by space-based interferometers.
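To make the windowing comparison concrete, here is a small experiment in the same spirit: estimate an off-bin tone frequency from a windowed FFT with three-point parabolic interpolation under Hanning, Blackman, and Gaussian windows. It illustrates the trade-off only; it is not the dissertation's model, and the signal parameters are invented.

```python
import numpy as np

def peak_bin(signal, window):
    """Return the (parabolically interpolated) peak bin of |FFT| after
    applying the given window."""
    spec = np.abs(np.fft.rfft(signal * window))
    k = int(np.argmax(spec[1:-1])) + 1
    # Three-point parabolic interpolation (in log magnitude) around the peak.
    a, b, c = np.log(spec[k - 1:k + 2])
    return k + 0.5 * (a - c) / (a - 2 * b + c)

n = 4096
t = np.arange(n)
f_true = 123.4567 / n                     # cycles per sample, off-bin on purpose
sig = np.cos(2 * np.pi * f_true * t) + 1e-3 * np.random.default_rng(5).normal(size=n)

for name, win in [("hanning", np.hanning(n)),
                  ("blackman", np.blackman(n)),
                  ("gaussian", np.exp(-0.5 * ((t - n / 2) / (n / 8)) ** 2))]:
    est = peak_bin(sig, win) / n
    print(f"{name:8s} frequency error: {abs(est - f_true):.2e} cycles/sample")
```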
Meshing complex macro-scale objects into self-assembling bricks
Hacohen, Adar; Hanniel, Iddo; Nikulshin, Yasha; Wolfus, Shuki; Abu-Horowitz, Almogit; Bachelet, Ido
2015-01-01
Self-assembly provides an information-economical route to the fabrication of objects at virtually all scales. However, there is no known algorithm to program self-assembly in macro-scale, solid, complex 3D objects. Here such an algorithm is described, which is inspired by the molecular assembly of DNA, and based on bricks designed by tetrahedral meshing of arbitrary objects. Assembly rules are encoded by topographic cues imprinted on brick faces while attraction between bricks is provided by embedded magnets. The bricks can then be mixed in a container and agitated, leading to properly assembled objects at high yields and zero errors. The system and its assembly dynamics were characterized by video and audio analysis, enabling the precise time- and space-resolved characterization of its performance and accuracy. Improved designs inspired by our system could lead to successful implementation of self-assembly at the macro-scale, allowing rapid, on-demand fabrication of objects without the need for assembly lines. PMID:26226488
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-09-01
The Quality Assurance/Quality Control (QA/QC) Program for Phase 2 of the Clinch River Remedial Investigation (CRRI) was designed to comply with both Department of Energy (DOE) Order 5700.6C and Environmental Protection Agency (EPA) QAMS-005/80 (EPA 1980a) guidelines. QA requirements and the general QA objectives for Phase 2 data were defined in the Phase 2 Sampling and Analysis Plan (SAP)-Quality Assurance Project Plan, with scope changes noted in the Phase 2 Sampling and Analysis Plan Addendum. The QA objectives for Phase 2 data were the following: (1) scientific data generated will withstand scientific and legal scrutiny; (2) data will be gathered using appropriate procedures for sample collection, sample handling and security, chain of custody (COC), laboratory analyses, and data reporting; (3) data will be of known precision and accuracy; and (4) data will meet data quality objectives (DQOs) defined in the Phase 2 SAP.
USDA-ARS?s Scientific Manuscript database
Mineral concentration of plant biomass can affect its use in thermal conversion to energy. The objective of this study was to compare the precision and accuracy of university and private laboratories that conduct mineral analyses of plant biomass on a fee basis. Accuracy and precision of the laborat...
NASA Astrophysics Data System (ADS)
Chu, Zhongyi; Ma, Ye; Hou, Yueyang; Wang, Fengwen
2017-02-01
This paper presents a novel method for identifying the full set of inertial parameters of an unknown object captured by a manipulator in a space robotic system. With strong dynamic and kinematic coupling in the robotic system, identification of the unknown object's inertial parameters is essential for control strategies that account for changes in the attitude and trajectory of the space robot during capture operations. Conventional studies address only the principle and theory of identification; an error analysis of the identification process for practical scenarios has been lacking. To address this, the effect of errors on identification is analyzed first, demonstrating that accumulated measurement and estimation errors cause poor identification precision. A modified identification equation incorporating the contact force, as well as the force/torque of the end-effector, is then proposed to weaken the accumulation of errors and improve identification accuracy. Furthermore, to cope with severe disturbances caused by various measurement noises, a hybrid immune algorithm combining Recursive Least Squares and the Affine Projection Sign Algorithm (RLS-APSA) is employed to solve the modified identification equation and ensure stable identification. Finally, to verify the validity of the proposed method, an ADAMS-MATLAB co-simulation was implemented with multi-degree-of-freedom models of a space robotic system; the numerical results show precise and stable identification performance, which can guarantee the execution of aerospace operations and prevent failed control strategies.
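The RLS-APSA hybrid is specific to the paper, but its recursive least squares half is standard; the sketch below shows a plain exponentially weighted RLS update on a synthetic linear regression standing in for the identification equation. The regressor construction from spacecraft dynamics and the APSA step are omitted, and all numbers are illustrative.

```python
import numpy as np

def rls(phi_stream, y_stream, n_params, lam=0.99, delta=1e3):
    """Standard exponentially weighted recursive least squares: one
    ingredient of the RLS-APSA hybrid named above (the APSA half and the
    dynamics-specific regressor are omitted here)."""
    theta = np.zeros(n_params)            # parameter estimate
    P = delta * np.eye(n_params)          # inverse correlation matrix
    for phi, y in zip(phi_stream, y_stream):
        k = P @ phi / (lam + phi @ P @ phi)      # gain vector
        theta += k * (y - phi @ theta)           # innovation update
        P = (P - np.outer(k, phi @ P)) / lam     # covariance downdate
    return theta

rng = np.random.default_rng(6)
true_theta = np.array([2.0, -1.0, 0.5])          # e.g. mass and inertia terms
Phi = rng.normal(size=(500, 3))                  # regressors from measured motion
Y = Phi @ true_theta + 0.01 * rng.normal(size=500)
print(rls(Phi, Y, 3))                            # ~[2.0, -1.0, 0.5]
```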
Semler, Joerg; Wellmann, Katharina; Wirth, Felicitas; Stein, Gregor; Angelova, Srebrina; Ashrafi, Mahak; Schempf, Greta; Ankerne, Janina; Ozsoy, Ozlem; Ozsoy, Umut; Schönau, Eckhard; Angelov, Doychin N; Irintchev, Andrey
2011-07-01
Precise assessment of motor deficits after traumatic spinal cord injury (SCI) in rodents is crucial for understanding the mechanisms of functional recovery and testing therapeutic approaches. Here we analyzed the applicability to a rat SCI model of an objective approach, the single-frame motion analysis, created and used for functional analysis in mice. Adult female Wistar rats were subjected to graded compression of the spinal cord. Recovery of locomotion was analyzed using video recordings of beam walking and inclined ladder climbing. Three out of four parameters used in mice appeared suitable: the foot-stepping angle (FSA) and the rump-height index (RHI), measured during beam walking, and for estimating paw placement and body weight support, respectively, and the number of correct ladder steps (CLS), assessing skilled limb movements. These parameters, similar to the Basso, Beattie, and Bresnahan (BBB) locomotor rating scores, correlated with lesion volume and showed significant differences between moderately and severely injured rats at 1-9 weeks after SCI. The beam parameters, but not CLS, correlated well with the BBB scores within ranges of poor and good locomotor abilities. FSA co-varied with RHI only in the severely impaired rats, while RHI and CLS were barely correlated. Our findings suggest that the numerical parameters estimate, as intended by design, predominantly different aspects of locomotion. The use of these objective measures combined with BBB rating provides a time- and cost-efficient opportunity for versatile and reliable functional evaluations in both severely and moderately impaired rats, combining clinical assessment with precise numerical measures.
PRECISE ANGLE MONITOR BASED ON THE CONCEPT OF PENCIL-BEAM INTERFEROMETRY
DOE Office of Scientific and Technical Information (OSTI.GOV)
QIAN,S.; TAKACS,P.
2000-07-30
Precise angle monitoring is a very important metrology task for research, development, and industrial applications. The autocollimator, based on the principles of geometric optics, is one of the most powerful and widely applied instruments for small-angle monitoring. In this paper the authors introduce a new precise angle monitoring system, the Pencil-beam Angle Monitor (PAM), based on pencil-beam interferometry. Its principle of operation is a combination of physical and geometrical optics. The angle calculation method is similar to that of the autocollimator; however, whereas the autocollimator creates a cross image, the pencil-beam angle monitoring system produces an interference fringe on the focal plane. The advantages of the PAM are: high angular sensitivity; long-term stability, making angle monitoring over long time periods possible; high measurement accuracy, on the order of a sub-microradian; the ability to measure simultaneously in two perpendicular directions or on two different objects; the possibility of dynamic measurement; insensitivity to vibration and air turbulence; automatic display, storage, and analysis by computer; a small beam diameter, making alignment extremely easy; and a longer test distance. Some test examples are presented.
Kaszewska, Ewa A; Sylwestrzak, Marcin; Marczak, Jan; Skrzeczanowski, Wojciech; Iwanicka, Magdalena; Szmit-Naud, Elżbieta; Anglos, Demetrios; Targowski, Piotr
2013-08-01
A detailed feasibility study on the combined use of laser-induced breakdown spectroscopy with optical coherence tomography (LIBS/OCT), aiming at a realistic depth-resolved elemental analysis of multilayer stratigraphies in paintings, is presented. Merging a high spectral resolution LIBS system with a high spatial resolution spectral OCT instrument significantly enhances the quality and accuracy of stratigraphic analysis. First, OCT mapping is employed prior to LIBS analysis in order to assist the selection of specific areas of interest on the painting surface to be examined in detail. Then, intertwined with LIBS, the OCT instrument is used as a precise profilometer for the online determination of the depth of the ablation crater formed by individual laser pulses during LIBS depth-profile analysis. This approach is novel and enables (i) the precise in-depth scaling of elemental concentration profiles, and (ii) the recognition of layer boundaries by estimating the corresponding differences in material ablation rate. Additionally, the latter is supported, within the transparency of the object, by analysis of the OCT cross-sectional views. The potential of this method is illustrated by presenting results on the detailed analysis of the structure of an historic painting on canvas performed to aid planned restoration of the artwork.
Tone series and the nature of working memory capacity development.
Clark, Katherine M; Hardman, Kyle O; Schachtman, Todd R; Saults, J Scott; Glass, Bret A; Cowan, Nelson
2018-04-01
Recent advances in understanding visual working memory, the limited information held in mind for use in ongoing processing, are extended here to examine auditory working memory development. Research with arrays of visual objects has shown how to distinguish the capacity, in terms of the number of objects retained, from the precision of the object representations. We adapt the technique to sequences of nonmusical tones, in an investigation including children (6-13 years, N = 84) and adults (26-50 years, N = 31). For each series of 1 to 4 tones, the participant responded by using an 80-choice scale to try to reproduce the tone at a queried serial position. Despite the much longer-lasting usefulness of sensory memory for tones compared with visual objects, the observed tone capacity was similar to previous findings for visual capacity. The results also constrain theories of childhood working memory development, indicating increases with age in both the capacity and the precision of the tone representations, similar to the visual studies, rather than age differences in time-based memory decay. The findings, including patterns of correlations between capacity, precision, and some auxiliary tasks and questionnaires, establish capacity and precision as dissociable processes and place important constraints on various hypotheses of working memory development. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Quantization and training of object detection networks with low-precision weights and activations
NASA Astrophysics Data System (ADS)
Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie
2018-01-01
As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need for deploying these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of weights and activations of each layer are first directly estimated using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of weights and activations. As the most computationally expensive convolutions can be replaced by efficient fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. Applied to the tiny you-only-look-once (YOLO) and full YOLO architectures, the proposed method achieves accuracy comparable to that of their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
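A stripped-down version of the quantization step: the paper estimates each layer's weight distribution with piecewise Gaussian models and adapts the clipping interval and step size per layer; the sketch below substitutes a plain ±3σ interval for that estimate, which is an assumption, not the paper's exact rule:

```python
import numpy as np

def quantize_uniform(w, bits):
    """Symmetric uniform quantization of weights to `bits` bits.

    The clipping interval is set to +/-3*sigma of the weight distribution
    as a simple stand-in for a fitted distribution estimate."""
    n_levels = 2 ** (bits - 1) - 1        # signed symmetric levels
    step = 3.0 * w.std() / n_levels       # step size from the interval
    q = np.clip(np.round(w / step), -n_levels, n_levels)
    return q * step                       # values on a fixed-point grid

rng = np.random.default_rng(1)
w = rng.normal(0, 0.05, size=10_000)      # toy conv-layer weights
for b in (4, 8):
    mse = np.mean((w - quantize_uniform(w, b)) ** 2)
    print(b, mse)                         # error shrinks as bit-width grows
```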
Rivailler, P; Abernathy, E; Icenogle, J
2017-03-01
Recent studies have shown that the currently circulating rubella viruses are mostly members of two genotypes, 1E and 2B. Also, genetically distinct viruses of genotype 1G have been found in East and West Africa. This study used a Mantel test to objectively include both genetic diversity and geographic location in the definition of lineages, and identified statistically justified lineages (n=13) and sub-lineages (n=9) of viruses within genotypes 1G, 1E and 2B. Genotype 2B viruses were widely distributed, while viruses of genotype 1E as well as 1G and 1J were much more geographically restricted. This analysis showed that more precise groupings for rubella viruses are possible, which should improve the ability to track rubella viruses worldwide. A year-by-year analysis revealed gaps in surveillance that need to be resolved in order to support the surveillance needed for enhanced control and elimination goals for rubella.
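For readers unfamiliar with the Mantel test used here: it correlates the entries of two distance matrices (for example, genetic versus geographic distance) and assesses significance by jointly permuting the rows and columns of one matrix. A small illustrative sketch on toy data, not the rubella dataset:

```python
import numpy as np

def mantel(d_gen, d_geo, n_perm=999, seed=0):
    """Permutation Mantel test between two square distance matrices.

    Returns the Pearson correlation of the off-diagonal entries and a
    permutation p-value (rows/columns of one matrix shuffled jointly)."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d_gen, k=1)
    r_obs = np.corrcoef(d_gen[iu], d_geo[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(d_gen.shape[0])
        r = np.corrcoef(d_gen[p][:, p][iu], d_geo[iu])[0, 1]
        count += r >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)

# Toy example: genetic distance loosely tracks geographic distance.
rng = np.random.default_rng(2)
pts = rng.uniform(0, 10, (12, 2))
d_geo = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
d_gen = d_geo + rng.normal(0, 1.0, d_geo.shape)
d_gen = (d_gen + d_gen.T) / 2
np.fill_diagonal(d_gen, 0)
print(mantel(d_gen, d_geo))
```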
Bridging the gap between genome analysis and precision breeding in potato.
Gebhardt, Christiane
2013-04-01
Efficiency and precision in plant breeding can be enhanced by using diagnostic DNA-based markers for the selection of superior cultivars. This technique has been applied to many crops, including potatoes. The first generation of diagnostic DNA-based markers useful in potato breeding was enabled by several developments: genetic linkage maps based on DNA polymorphisms, linkage mapping of qualitative and quantitative agronomic traits, cloning and functional analysis of genes for pathogen resistance and genes controlling plant metabolism, and association genetics in collections of tetraploid varieties and advanced breeding clones. Although these developments have led to significant improvements in potato genetics, the prediction of most, if not all, natural variation in agronomic traits by diagnostic markers ultimately requires the identification of the causal genes and their allelic variants. This objective will be facilitated by new genomic tools, such as genomic resequencing and comparative profiling of the proteome, transcriptome, and metabolome in combination with phenotyping genetic materials relevant for variety development. Copyright © 2012 Elsevier Ltd. All rights reserved.
Area estimation of crops by digital analysis of Landsat data
NASA Technical Reports Server (NTRS)
Bauer, M. E.; Hixson, M. M.; Davis, B. J.
1978-01-01
The study for which the results are presented had these objectives: (1) to use Landsat data and computer-implemented pattern recognition to classify the major crops from regions encompassing different climates, soils, and crops; (2) to estimate crop areas for counties and states by using crop identification data obtained from the Landsat identifications; and (3) to evaluate the accuracy, precision, and timeliness of crop area estimates obtained from Landsat data. The paper describes the method of developing the training statistics and evaluating the classification accuracy. Landsat MSS data were adequate to accurately identify wheat in Kansas; corn and soybean estimates for Indiana were less accurate. Systematic sampling of entire counties made possible by computer classification methods resulted in very precise area estimates at county, district, and state levels.
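The county-level estimation step reduces to simple arithmetic once every pixel in a county is classified: multiply the crop pixel count by the nominal pixel area and aggregate upward. A sketch with hypothetical counts; the ~0.45 ha Landsat MSS pixel area used here is an approximation:

```python
# Hypothetical county summaries: classified wheat pixels and total pixels.
counties = {"A": (61_200, 210_000), "B": (18_450, 150_000)}
PIXEL_HA = 0.45  # approximate Landsat MSS pixel area in hectares

for name, (wheat_px, total_px) in counties.items():
    area_ha = wheat_px * PIXEL_HA
    share = wheat_px / total_px
    print(f"county {name}: {area_ha:,.0f} ha wheat ({share:.1%} of pixels)")

# State and district estimates are sums of county estimates; classifying
# entire counties rather than sampled segments is what yields the precise
# multi-level estimates described above.
state_ha = sum(w * PIXEL_HA for w, _ in counties.values())
print(f"state total: {state_ha:,.0f} ha")
```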
Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.
2009-01-01
The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance and quality-control data were stored in the laboratory's Lab Master data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality-control samples analyzed from July 2003 through June 2005. Results for the quality-control samples for 20 analytical procedures were evaluated for bias and precision. Control charts indicate that data for five of the analytical procedures were occasionally biased for either high-concentration or low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, total monomeric aluminum, pH, silicon, and sodium. Seven of the analytical procedures were biased throughout the analysis period for the high-concentration sample, but were within control limits; these procedures were: dissolved organic carbon, chloride, nitrate (ion chromatograph), nitrite, silicon, sodium, and sulfate. The calcium and magnesium procedures were biased throughout the analysis period for the low-concentration sample, but were within control limits. The total aluminum and specific conductance procedures were biased for the high-concentration and low-concentration samples, but were within control limits. Results from the filter-blank and analytical-blank analyses indicate that the procedures for 17 of 18 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. The data-quality objective was not met for dissolved organic carbon. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 18 of the 22 analytes. At least 85 percent of the samples met data-quality objectives for all analytes except total monomeric aluminum (82 percent of samples met objectives), total aluminum (77 percent of samples met objectives), chloride (80 percent of samples met objectives), fluoride (76 percent of samples met objectives), and nitrate (ion chromatograph) (79 percent of samples met objectives). The ammonium and total dissolved nitrogen procedures did not meet the data-quality objectives. Results of the USGS interlaboratory Standard Reference Sample (SRS) Project indicated good data quality over the time period, with ratings for each sample in the satisfactory, good, and excellent ranges or less than 10 percent error. The P-sample (low-ionic-strength constituents) analysis had one marginal and two unsatisfactory ratings for the chloride procedure. The T-sample (trace constituents) analysis had two unsatisfactory ratings and one high-range percent error for the aluminum procedure. The N-sample (nutrient constituents) analysis had one marginal rating for the nitrate procedure. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 84 percent of the samples met data-quality objectives for 11 of the 14 analytes; the exceptions were ammonium, total aluminum, and acid-neutralizing capacity.
The ammonium procedure did not meet data-quality objectives in all studies. Data-quality objectives were not met in 23 percent of samples analyzed for total aluminum and 45 percent of samples analyzed for acid-neutralizing capacity. Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 86 percent of the samples analyzed for calcium, chloride, fluoride, magnesium, pH, potassium, sodium, and sulfate. Data-quality objectives were not met by samples analyzed for fluoride.
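The triplicate-based precision evaluation used throughout these reports can be expressed compactly: compute the coefficient of variation for each triplicate and count the fraction of samples meeting the analyte's data-quality objective. A sketch with hypothetical triplicates and a hypothetical 10-percent objective:

```python
import numpy as np

def cv_percent(triplicate):
    """Coefficient of variation (percent) for one triplicate sample."""
    t = np.asarray(triplicate, dtype=float)
    return 100.0 * t.std(ddof=1) / t.mean()

# Hypothetical triplicates (mg/L) and a hypothetical 10% CV objective.
triplicates = [(1.02, 1.05, 0.99), (0.48, 0.55, 0.61), (2.10, 2.12, 2.09)]
OBJECTIVE_CV = 10.0
met = [cv_percent(t) <= OBJECTIVE_CV for t in triplicates]
print(f"{100 * sum(met) / len(met):.0f}% of samples met the objective")
```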
Search for Cross-Correlations of Ultrahigh-Energy Cosmic Rays with BL Lacertae Objects
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Belov, K.; Belz, J. W.; BenZvi, S.; Bergman, D. R.; Blake, S. A.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Connolly, B. M.; Deng, W.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Rodriguez, D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.; HIRES Collaboration
2006-01-01
Data taken in stereo mode by the High Resolution Fly's Eye (HiRes) air fluorescence experiment are analyzed to search for correlations between the arrival directions of ultrahigh-energy cosmic rays with the positions of BL Lacertae objects. Several previous claims of significant correlations between BL Lac objects and cosmic rays observed by other experiments are tested. These claims are not supported by the HiRes data. However, we verify a recent analysis of correlations between HiRes events and a subset of confirmed BL Lac objects from the 10th Veron Catalog, and we study this correlation in detail. Due to the a posteriori nature of the search, the significance level cannot be reliably estimated and the correlation must be tested independently before any claim can be made. We identify the precise hypotheses that will be tested with statistically independent data.
Xie, Weizhen; Zhang, Weiwei
2017-09-01
Negative emotion sometimes enhances memory (higher accuracy and/or vividness, e.g., flashbulb memories). The present study investigates whether it is the qualitative (precision) or quantitative (the probability of successful retrieval) aspect of memory that drives these effects. In a visual long-term memory task, observers memorized colors (Experiment 1a) or orientations (Experiment 1b) of sequentially presented everyday objects under negative, neutral, or positive emotions induced with International Affective Picture System images. In a subsequent test phase, observers reconstructed objects' colors or orientations using the method of adjustment. We found that mnemonic precision was enhanced under the negative condition relative to the neutral and positive conditions. In contrast, the probability of successful retrieval was comparable across the emotion conditions. Furthermore, the boost in memory precision was associated with elevated subjective feelings of remembering (vividness and confidence) and metacognitive sensitivity in Experiment 2. Altogether, these findings suggest a novel precision-based account for emotional memories. Copyright © 2017 Elsevier B.V. All rights reserved.
Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.
2006-01-01
The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance/quality-control data for the time period addressed in this report were stored in the laboratory's SAS data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality-control samples analyzed from July 1997 through June 1999. Results for the quality-control samples for 18 analytical procedures were evaluated for bias and precision. Control charts indicate that data for eight of the analytical procedures were occasionally biased for high-concentration and (or) low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, total monomeric aluminum, total aluminum, ammonium, calcium, chloride, specific conductance, and sulfate. The data from the potassium and sodium analytical procedures are insufficient for evaluation. Results from the filter-blank and analytical-blank analyses indicate that the procedures for 11 of 13 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. Blank analysis results for chloride showed that 22 percent of blanks did not meet data-quality objectives and results for dissolved organic carbon showed that 31 percent of the blanks did not meet data-quality objectives. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 14 of the 18 analytes. At least 90 percent of the samples met data-quality objectives for all analytes except total aluminum (70 percent of samples met objectives) and potassium (83 percent of samples met objectives). Results of the USGS interlaboratory Standard Reference Sample (SRS) Project indicated good data quality for most constituents over the time period. The P-sample (low-ionic-strength constituents) analysis had good ratings in two of these studies and a satisfactory rating in the third. The results of the T-sample (trace constituents) analysis indicated high data quality with good ratings in all three studies. The N-sample (nutrient constituents) studies had one each of excellent, good, and satisfactory ratings. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 80 percent of the samples met data-quality objectives for 9 of the 13 analytes; the exceptions were dissolved organic carbon, ammonium, chloride, and specific conductance. Data-quality objectives were not met for dissolved organic carbon in two NWRI studies, but all of the samples were within control limits for the last study. Data-quality objectives were not met in 41 percent of samples analyzed for ammonium, 25 percent of samples analyzed for chloride, and 30 percent of samples analyzed for specific conductance. Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 84 percent of the samples analyzed for calcium, chloride, magnesium, pH, and potassium. Data-quality objectives were met by 73 percent of those analyzed for sulfate.
The data-quality objective was not met for sodium. The data are insufficient for evaluation of the specific conductance results.
Lemieux, Genevieve; Carey, Jason P; Flores-Mir, Carlos; Secanell, Marc; Hart, Adam; Lagravère, Manuel O
2016-01-01
Our objective was to identify and evaluate the accuracy and precision (intrarater and interrater reliabilities) of various anatomic landmarks for use in 3-dimensional maxillary and mandibular regional superimpositions. We used cone-beam computed tomography reconstructions of 10 human dried skulls to locate 10 landmarks in the maxilla and the mandible. Precision and accuracy were assessed with intrarater and interrater readings. Three examiners located these landmarks in the cone-beam computed tomography images 3 times with readings scheduled at 1-week intervals. Three-dimensional coordinates were determined (x, y, and z coordinates), and the intraclass correlation coefficient was computed to determine intrarater and interrater reliabilities, as well as the mean error difference and confidence intervals for each measurement. Bilateral mental foramina, bilateral infraorbital foramina, anterior nasal spine, incisive canal, and nasion showed the highest precision and accuracy in both intrarater and interrater reliabilities. Subspinale and bilateral lingulae had the lowest precision and accuracy in both intrarater and interrater reliabilities. When choosing the most accurate and precise landmarks for 3-dimensional cephalometric analysis or plane-derived maxillary and mandibular superimpositions, bilateral mental and infraorbital foramina, landmarks in the anterior region of the maxilla, and nasion appeared to be the best options of the analyzed landmarks. Caution is needed when using subspinale and bilateral lingulae because of their higher mean errors in location. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
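Intrarater and interrater reliabilities of this kind are typically summarized with an intraclass correlation coefficient. A self-contained sketch of ICC(2,1), the two-way random-effects, absolute-agreement, single-measurement form, on hypothetical landmark coordinates; the study's exact ICC variant is not stated here, so treat this form as an assumption:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_subjects, n_raters) array, e.g. one coordinate of a
    landmark located by several examiners on the same skulls."""
    y = np.asarray(ratings, dtype=float)
    n, k = y.shape
    grand = y.mean()
    msr = k * np.sum((y.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    msc = n * np.sum((y.mean(axis=0) - grand) ** 2) / (k - 1)  # raters
    sse = np.sum((y - y.mean(axis=1, keepdims=True)
                    - y.mean(axis=0, keepdims=True) + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical x-coordinates (mm) of one landmark, 5 skulls x 3 examiners.
ratings = np.array([[12.1, 12.3, 12.0],
                    [15.4, 15.6, 15.5],
                    [ 9.8, 10.1,  9.9],
                    [13.0, 12.8, 13.1],
                    [11.2, 11.4, 11.1]])
print(icc_2_1(ratings))   # close to 1 -> high interrater reliability
```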
The use of robotics in otolaryngology-head and neck surgery: a systematic review.
Maan, Zeshaan N; Gibbins, Nick; Al-Jabri, Talal; D'Souza, Alwyn R
2012-01-01
Robotic surgery has become increasingly used due to its enhancement of visualization, precision, and articulation. It eliminates many of the problems encountered with conventional minimally invasive techniques and has been shown to result in reduced blood loss and complications. The rise in endoscopic procedures in otolaryngology-head and neck surgery, and associated difficulties, suggests that robotic surgery may have a role to play. To determine whether robotic surgery conveys any benefits compared to conventional minimally invasive approaches, specifically looking at precision, operative time, and visualization. A systematic review of the literature with a defined search strategy. Searches of MEDLINE, EMBASE and CENTRAL using strategy: ((robot* OR (robot* AND surgery)) AND (ent OR otolaryngology)) to November 2010. Articles reviewed by authors and data compiled in tables for analysis. There were 33 references included in the study. Access and visualization were regularly mentioned as key benefits, though no objective data have been recorded in any study. Once initial setup difficulties were overcome, operative time was shown to decrease with robotic surgery, except in one controlled series of thyroid surgeries. Precision was also highlighted as an advantage, particularly in otological and skull base surgery. Postoperative outcomes were considered equivalent to or better than conventional surgery. Cost was the biggest drawback. The evidence base to date suggests there are benefits to robotic surgery in OHNS, particularly with regard to access, precision, and operative time, but there is a lack of controlled, prospective studies with objective outcome measures. In addition, economic feasibility studies must be carried out before a robotic OHNS service is established. Copyright © 2012 Elsevier Inc. All rights reserved.
da Silva, Layzon Antonio Lemos; Pezzini, Bianca Ramos; Soares, Luciano
2015-01-01
Background: Chemical characterization is essential to validate the pharmaceutical use of vegetable raw materials. Ultraviolet spectroscopy is an important technique for determining flavonoids, key active compounds of Ocimum basilicum. Objective: The objective of this work was to optimize a spectrophotometric method, based on flavonoid-aluminum chloride (AlCl3) complexation, to determine the total flavonoid content (TFC) in leaves of O. basilicum (herbal material), using response surface methodology. Materials and Methods: The effects of (1) the herbal material:solvent ratio (0.02, 0.03, 0.05, 0.07, and 0.08 g/mL), (2) stock solution volume (0.8, 2.3, 4.4, 6.5, and 8.0 mL) and (3) AlCl3 volume (0.8, 1.0, 1.2, 1.4, and 1.6 mL) on the TFC were evaluated. The analytical performance parameters precision, linearity and robustness of the method were tested. Results: The herbal material:solvent ratio and stock solution volume showed an important influence on the method response. After choosing the optimized conditions, the method exhibited a repeatability (RSD%) lower than 6% and an intermediate precision (RSD%) lower than 8% (on the order of literature values for biotechnological methods), a correlation coefficient of 0.9984, and no important influence could be observed for variations of the time of complexation with AlCl3. However, the time and temperature of extraction were critical for the TFC method and must be carefully controlled during the analysis. Conclusion: Thus, this study allowed the optimization of a simple, fast and precise method for the determination of the TFC in leaves of O. basilicum, which can be used to support the quality assessment of this herbal material. PMID:25709217
Effects of grasp compatibility on long-term memory for objects.
Canits, Ivonne; Pecher, Diane; Zeelenberg, René
2018-01-01
Previous studies have shown action potentiation during conceptual processing of manipulable objects. In four experiments, we investigated whether these motor actions also play a role in long-term memory. Participants categorized objects that afforded either a power grasp or a precision grasp as natural or artifact by grasping cylinders with either a power grasp or a precision grasp. In all experiments, responses were faster when the affordance of the object was compatible with the type of grasp response. However, subsequent free recall and recognition memory tasks revealed no better memory for object pictures and object names for which the grasp affordance was compatible with the grasp response. The present results therefore do not support the hypothesis that motor actions play a role in long-term memory. Copyright © 2017 Elsevier B.V. All rights reserved.
Results of monitoring of the high orbits with ISON optical network
NASA Astrophysics Data System (ADS)
Molotov, Igor; Schildknecht, Thomas; Zalles, Rodolfo; Rumyantsev, Vasilij; Voropaev, Viktor; Zolotov, Vladimir; Kokina, Tatiana; Montojo, Francisco Javier; Namkhai, Tungalag
2016-07-01
The International Scientific Optical Network (ISON) is one of the largest systems specializing in the observation of space objects. ISON provides permanent monitoring of the whole GEO region, regular surveying of Molniya-type orbits, and tracking of objects at GEO, GTO, HEO, and LEO. The ISON project is continuously developing and now joins 37 observation facilities in 15 countries with 79 telescopes of different classes (apertures from 12.5 cm to 2.6 m). In 2015, KIAM collected 15.4 million measurements in 2.1 million tracklets for about 4100 objects; 339 new space objects were discovered and 307 previously lost objects were rediscovered. Telescopes of two European observatories (AIUB Zimmerwald and TFRM Barcelona) and two Latin American observatories (Bolivian Tarija and Mexican UAS Cosala) have joined the ISON survey subsystem. A new subsystem of 7 telescopes for extended GEO surveys has been created, which allows KIAM to determine more precise GEO orbits for conjunction analysis. Deployment of a Roscosmos subsystem of six dedicated EOP-1/EOP-2 mini-observatories has also been completed. ISON encompasses five groups of telescopes and three scheduling centers to better serve users' requests. The measurements obtained are processed at the KIAM ballistic center for scientific and applied goals, including collision-risk analysis and space-situation analysis. The achieved parameters of the above-mentioned telescopes and plans for ISON network development are presented and discussed.
Superior accuracy of model-based radiostereometric analysis for measurement of polyethylene wear
Stilling, M.; Kold, S.; de Raedt, S.; Andersen, N. T.; Rahbek, O.; Søballe, K.
2012-01-01
Objectives: The accuracy and precision of two new methods of model-based radiostereometric analysis (RSA) were hypothesised to be superior to a plain radiograph method in the assessment of polyethylene (PE) wear. Methods: A phantom device was constructed to simulate three-dimensional (3D) PE wear. Images were obtained consecutively for each simulated wear position for each modality. Three commercially available packages were evaluated: model-based RSA using laser-scanned cup models (MB-RSA), model-based RSA using computer-generated elementary geometrical shape models (EGS-RSA), and PolyWare. Precision (95% repeatability limits) and accuracy (root mean square errors) for two-dimensional (2D) and 3D wear measurements were assessed. Results: The precision for 2D wear measures was 0.078 mm, 0.102 mm, and 0.076 mm for EGS-RSA, MB-RSA, and PolyWare, respectively. For the 3D wear measures the precision was 0.185 mm, 0.189 mm, and 0.244 mm for EGS-RSA, MB-RSA, and PolyWare, respectively. Repeatability was similar for all methods within the same dimension, when compared between 2D and 3D (all p > 0.28). Accuracy was below 0.055 mm for the 2D RSA methods and at least 0.335 mm for PolyWare. For 3D measurements, accuracy was 0.1 mm, 0.2 mm, and 0.3 mm for EGS-RSA, MB-RSA, and PolyWare, respectively. PolyWare was less accurate compared with the RSA methods (p = 0.036). No difference was observed between the RSA methods (p = 0.10). Conclusions: For all methods, precision and accuracy were better in 2D, with the RSA methods being superior in accuracy. Although less accurate and precise, 3D RSA defines the clinically relevant wear pattern (multidirectional). PolyWare is a good and low-cost alternative to RSA, despite being less accurate and requiring a larger sample size. PMID:23610688
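As a concrete reading of the metrics in this abstract: accuracy is reported as root-mean-square error against the phantom's known wear, and precision as 95% repeatability limits. A sketch under the simplifying assumption that the repeatability limits are taken as 1.96 times the standard deviation of the measurement errors:

```python
import numpy as np

def precision_accuracy(measured, truth):
    """Accuracy as RMSE against known phantom wear; precision as 95%
    repeatability limits (assumed here to be 1.96 * SD of the errors)."""
    err = np.asarray(measured) - np.asarray(truth)
    rmse = np.sqrt(np.mean(err ** 2))
    repeatability = 1.96 * np.std(err, ddof=1)
    return rmse, repeatability

# Hypothetical repeated 2-D wear readings (mm) against phantom truth.
truth = np.repeat([0.1, 0.2, 0.4], 5)
measured = truth + np.random.default_rng(3).normal(0, 0.04, truth.size)
print(precision_accuracy(measured, truth))
```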
Visual long-term memory has the same limit on fidelity as visual working memory.
Brady, Timothy F; Konkle, Talia; Gill, Jonathan; Oliva, Aude; Alvarez, George A
2013-06-01
Visual long-term memory can store thousands of objects with surprising visual detail, but just how detailed are these representations, and how can one quantify this fidelity? Using the property of color as a case study, we estimated the precision of visual information in long-term memory, and compared this with the precision of the same information in working memory. Observers were shown real-world objects in random colors and were asked to recall the colors after a delay. We quantified two parameters of performance: the variability of internal representations of color (fidelity) and the probability of forgetting an object's color altogether. Surprisingly, the fidelity of color information in long-term memory was comparable to the asymptotic precision of working memory. These results suggest that long-term memory and working memory may be constrained by a common limit, such as a bound on the fidelity required to retrieve a memory representation.
Doublet Pulse Coherent Laser Radar for Tracking of Resident Space Objects
NASA Technical Reports Server (NTRS)
Prasad, Narasimha S.; Rudd, Van; Shald, Scott; Sandford, Stephen; Dimarcantonio, Albert
2014-01-01
In this paper, the development of a long-range ladar system known as ExoSPEAR at NASA Langley Research Center for tracking rapidly moving resident space objects is discussed. Based on a 100 W, nanosecond-class, near-IR laser, this ladar system with a coherent detection technique is currently being investigated for short-dwell-time measurements of resident space objects (RSOs) in LEO and beyond for space surveillance applications. This unique ladar architecture is configured using a continuously agile doublet-pulse waveform scheme coupled to a closed-loop tracking and control approach to simultaneously achieve mm-class range precision and mm/s-class velocity precision and hence obtain unprecedented track accuracies. Salient features of the design architecture, followed by performance modeling and engagement simulations illustrating the dependence of range and velocity precision in LEO orbits on ladar parameters, are presented. Estimated limits on detectable optical cross sections of RSOs in LEO orbits are discussed.
Anticipatory scaling of grip forces when lifting objects of everyday life.
Hermsdörfer, Joachim; Li, Yong; Randerath, Jennifer; Goldenberg, Georg; Eidenmüller, Sandra
2011-07-01
The ability to predict and anticipate the mechanical demands of the environment promotes smooth and skillful motor actions. Thus, the finger forces produced to grasp and lift an object are scaled to physical properties such as weight. While grip force scaling is well established for neutral objects, only a few studies have analyzed objects known from daily routine, and none has studied grip forces. In the present study, eleven healthy subjects each lifted twelve objects of everyday life that encompassed a wide range of weights. The finger pads were covered with force sensors that enabled the measurement of grip force. A scale registered load forces. In a control experiment, the objects were wrapped in paper to prevent recognition by the subjects. Data from the first lift of each object confirmed that object weight was anticipated by adequately scaled forces. The maximum grip force rate during the force increase phase emerged as the most reliable measure to verify that weight was actually predicted and to characterize the precision of this prediction, while other force measures were scaled to object weight even when object identity was not known. Variability and linearity of the grip force-weight relationship improved for time points reached after liftoff, suggesting that sensory information refined the force adjustment. The same mechanism seemed to be involved with unrecognizable objects, though a lower precision was reached. Repeated lifting of the same object within a second and third presentation block did not improve the precision of the grip force scaling. Either practice was too variable or the motor system does not prioritize the optimization of the internal representation when objects are highly familiar.
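The key measure, the maximum grip force rate, is simply the peak derivative of the grip-force trace during loading; because it peaks before lift-off, it cannot reflect feedback about the object's weight and therefore indexes prediction. A sketch on a synthetic force trace, with sampling rate and trace shape hypothetical:

```python
import numpy as np

def max_grip_force_rate(force_n, dt_s):
    """Peak of the grip-force derivative (N/s) during the loading phase."""
    rate = np.gradient(force_n, dt_s)
    return rate.max()

# Hypothetical 500 Hz grip-force trace rising sigmoidally to a ~8 N hold.
t = np.arange(0, 1.0, 0.002)
force = 8.0 / (1.0 + np.exp(-(t - 0.3) / 0.05))
print(max_grip_force_rate(force, 0.002))   # ~40 N/s at the inflection
```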
Virdis, Salvatore Gonario Pasquale
2014-01-01
Monitoring and mapping shrimp farms, including their impact on land cover and land use, is critical to the sustainable management and planning of coastal zones. In this work, a methodology was proposed to set up a cost-effective and reproducible procedure that made use of satellite remote sensing, an object-based classification approach, and open-source software for mapping aquaculture areas with high planimetric and thematic accuracy between 2005 and 2008. The analysis focused on two characteristic areas of interest of the Tam Giang-Cau Hai Lagoon (in central Vietnam), which have similar farming systems to other coastal aquaculture worldwide: the first was primarily characterised by what are locally referred to as "low tide" shrimp ponds, which are partially submerged areas; the second by earthed shrimp ponds, locally referred to as "high tide" ponds, which are non-submerged areas on the lagoon coast. The approach was based on the region-growing segmentation of high- and very high-resolution panchromatic images, SPOT5 and Worldview-1, and the unsupervised clustering classifier ISOSEG embedded in the non-commercial SPRING software. The results, the accuracy of which was tested with a field-based aquaculture inventory, showed that in favourable situations (high tide shrimp ponds), the classification results provided high rates of accuracy (>95 %) through a fully automatic object-based classification. In unfavourable situations (low tide shrimp ponds), the performance degraded due to the low contrast between the water and the pond embankments. In these situations, the automatic results were improved by manual delineation of the embankments. As expected, Worldview-1 showed better thematic accuracy, and precise maps have been realised at a scale of up to 1:2,000. However, SPOT5 provided comparable results in terms of the number of correctly classified ponds, but less accurate results in terms of the precision of mapped features. The procedure also demonstrated high degrees of reproducibility because it was applied to images with different spatial resolutions in an area that, during the investigated period, did not experience significant land cover changes.
Horvitz-Thompson survey sample methods for estimating large-scale animal abundance
Samuel, M.D.; Garton, E.O.
1994-01-01
Large-scale surveys to estimate animal abundance can be useful for monitoring population status and trends, for measuring responses to management or environmental alterations, and for testing ecological hypotheses about abundance. However, large-scale surveys may be expensive and logistically complex. To ensure resources are not wasted on unattainable targets, the goals and uses of each survey should be specified carefully and alternative methods for addressing these objectives always should be considered. During survey design, the importance of each survey error component (spatial design, proportion of detected animals, precision in detection) should be considered carefully to produce a complete statistically based survey. Failure to address these three survey components may produce population estimates that are inaccurate (biased low), have unrealistic precision (too precise) and do not satisfactorily meet the survey objectives. Optimum survey design requires trade-offs in these sources of error relative to the costs of sampling plots and detecting animals on plots, considerations that are specific to the spatial logistics and survey methods. The Horvitz-Thompson estimators provide a comprehensive framework for considering all three survey components during the design and analysis of large-scale wildlife surveys. Problems of spatial and temporal (especially survey-to-survey) heterogeneity in detection probabilities have received little consideration, but failure to account for heterogeneity produces biased population estimates. The goal of producing unbiased population estimates is in conflict with the increased variation from heterogeneous detection in the population estimate. One solution to this conflict is to use an MSE-based approach to achieve a balance between bias reduction and increased variation. Further research is needed to develop methods that address spatial heterogeneity in detection, evaluate the effects of temporal heterogeneity on survey objectives and optimize decisions related to survey bias and variance. Finally, managers and researchers involved in the survey design process must realize that obtaining the best survey results requires an interactive and recursive process of survey design, execution, analysis and redesign. Survey refinements will be possible as further knowledge is gained on the actual abundance and distribution of the population and on the most efficient techniques for detecting animals.
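The Horvitz-Thompson estimator referenced here weights each plot count by the inverse of its inclusion probability and, when imperfect detection is modeled, by the inverse of the detection probability. A sketch with hypothetical numbers:

```python
import numpy as np

def horvitz_thompson(counts, pi_plot, p_detect):
    """HT abundance estimate: N_hat = sum_i y_i / (pi_i * p_i), where
    pi_i is the plot inclusion probability and p_i the probability that
    an animal present on the plot is detected."""
    counts = np.asarray(counts, dtype=float)
    return np.sum(counts / (np.asarray(pi_plot) * np.asarray(p_detect)))

# Hypothetical survey: 4 sampled plots, each with a 5% inclusion
# probability; detection probability on each plot estimated at 0.8.
counts = [12, 7, 0, 21]
print(horvitz_thompson(counts, pi_plot=0.05, p_detect=0.8))  # ~1000 animals
```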
Novak, J.L.; Petterson, B.
1998-06-09
A sensing system locates an object by sensing the object's effect on electric fields. The object's effect on the mutual capacitance of electrode pairs varies according to the distance between the object and the electrodes. A single electrode pair can sense the distance from the object to the electrodes. Multiple electrode pairs can more precisely locate the object in one or more dimensions. 12 figs.
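In practice, turning a mutual-capacitance reading into a distance requires a calibration curve, since the abstract describes only a monotone dependence on distance. A sketch that inverts a hypothetical calibration by interpolation (all values invented for illustration):

```python
import numpy as np

# Hypothetical calibration: mutual capacitance (fF) of one electrode pair
# measured with the object at known distances (mm). Capacitance falls
# monotonically as the object moves away.
cal_distance_mm = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
cal_capacitance_ff = np.array([42.0, 31.0, 24.0, 20.5, 19.2])

def distance_from_capacitance(c_ff):
    """Invert the calibration curve (np.interp needs ascending x values)."""
    return np.interp(c_ff, cal_capacitance_ff[::-1], cal_distance_mm[::-1])

print(distance_from_capacitance(27.0))   # a reading between the cal points
```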
Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.
2009-01-01
The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance and quality-control data were stored in the laboratory's Lab Master data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality control samples analyzed from July 2005 through June 2007. Results for the quality-control samples for 19 analytical procedures were evaluated for bias and precision. Control charts indicate that data for eight of the analytical procedures were occasionally biased for either high-concentration or low-concentration samples but were within control limits; these procedures were: total aluminum, calcium, magnesium, nitrate (colorimetric method), potassium, silicon, sodium, and sulfate. Eight of the analytical procedures were biased throughout the analysis period for the high-concentration sample, but were within control limits; these procedures were: total aluminum, calcium, dissolved organic carbon, chloride, nitrate (ion chromatograph), potassium, silicon, and sulfate. The magnesium and pH procedures were biased throughout the analysis period for the low-concentration sample, but were within control limits. The acid-neutralizing capacity, total monomeric aluminum, nitrite, and specific conductance procedures were biased for the high-concentration and low-concentration samples, but were within control limits. Results from the filter-blank and analytical-blank analyses indicated that the procedures for 16 of 17 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. The data-quality objective was not met for dissolved organic carbon. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 18 of the 21 analytes. At least 93 percent of the samples met data-quality objectives for all analytes except acid-neutralizing capacity (85 percent of samples met objectives), total monomeric aluminum (83 percent of samples met objectives), total aluminum (85 percent of samples met objectives), and chloride (85 percent of samples met objectives). The ammonium and total dissolved nitrogen did not meet the data-quality objectives. Results of the USGS interlaboratory Standard Reference Sample (SRS) Project met the Troy Laboratory data-quality objectives for 87 percent of the samples analyzed. The P-sample (low-ionic-strength constituents) analysis had two outliers each in two studies. The T-sample (trace constituents) analysis and the N-sample (nutrient constituents) analysis had one outlier each in two studies. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 85 percent of the samples met data-quality objectives for 11 of the 14 analytes; the exceptions were acid-neutralizing capacity, total aluminum and ammonium. Data-quality objectives were not met in 41 percent of samples analyzed for acid-neutralizing capacity, 50 percent of samples analyzed for total aluminum, and 44 percent of samples analyzed for ammonium. 
Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 86 percent of the samples analyzed for calcium, magnesium, pH, potassium, and sodium. Data-quality objectives were met by 76 percent of the samples analyzed for chloride, 80 percent of the samples analyzed for specific conductance, and 77 percent of the samples analyzed for sulfate.
Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.
2006-01-01
The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance and quality-control data were stored in the laboratory's LabMaster data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality-control samples analyzed from July 1999 through June 2001. Results for the quality-control samples for 18 analytical procedures were evaluated for bias and precision. Control charts indicate that data for eight of the analytical procedures were occasionally biased for either high-concentration or low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, total monomeric aluminum, total aluminum, calcium, chloride, nitrate (ion chromatography and colorimetric methods), and sulfate. The total aluminum and dissolved organic carbon procedures were biased throughout the analysis period for the high-concentration sample, but were within control limits. The calcium and specific conductance procedures were biased throughout the analysis period for the low-concentration sample, but were within control limits. The magnesium procedure was biased for the high-concentration and low-concentration samples, but was within control limits. Results from the filter-blank and analytical-blank analyses indicate that the procedures for 14 of 15 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. The data-quality objective was not met for dissolved organic carbon. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 17 of the 18 analytes. At least 90 percent of the samples met data-quality objectives for all analytes except ammonium (81 percent of samples met objectives), chloride (75 percent of samples met objectives), and sodium (86 percent of samples met objectives). Results of the USGS interlaboratory Standard Reference Sample (SRS) Project indicated good data quality over the time period, with most ratings for each sample in the good to excellent range. The P-sample (low-ionic-strength constituents) analysis had one satisfactory rating for the specific conductance procedure in one study. The T-sample (trace constituents) analysis had one satisfactory rating for the aluminum procedure in one study and one unsatisfactory rating for the sodium procedure in another. The remainder of the samples had good or excellent ratings for each study. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 89 percent of the samples met data-quality objectives for 10 of the 14 analytes; the exceptions were ammonium, total aluminum, dissolved organic carbon, and sodium. Results indicate a positive bias for the ammonium procedure in all studies. Data-quality objectives were not met in 50 percent of samples analyzed for total aluminum, 38 percent of samples analyzed for dissolved organic carbon, and 27 percent of samples analyzed for sodium.
Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 91 percent of the samples analyzed for calcium, chloride, fluoride, magnesium, pH, potassium, and sulfate. Data-quality objectives were met by 75 percent of the samples analyzed for sodium and 58 percent of the samples analyzed for specific conductance.
Expertise for upright faces improves the precision but not the capacity of visual working memory.
Lorenc, Elizabeth S; Pratte, Michael S; Angeloni, Christopher F; Tong, Frank
2014-10-01
Considerable research has focused on how basic visual features are maintained in working memory, but little is currently known about the precision or capacity of visual working memory for complex objects. How precisely can an object be remembered, and to what extent might familiarity or perceptual expertise contribute to working memory performance? To address these questions, we developed a set of computer-generated face stimuli that varied continuously along the dimensions of age and gender, and we probed participants' memories using a method-of-adjustment reporting procedure. This paradigm allowed us to separately estimate the precision and capacity of working memory for individual faces, on the basis of the assumptions of a discrete capacity model, and to assess the impact of face inversion on memory performance. We found that observers could maintain up to four to five items on average, with equally good memory capacity for upright and upside-down faces. In contrast, memory precision was significantly impaired by face inversion at every set size tested. Our results demonstrate that the precision of visual working memory for a complex stimulus is not strictly fixed but, instead, can be modified by learning and experience. We find that perceptual expertise for upright faces leads to significant improvements in visual precision, without modifying the capacity of working memory.
Classification of LIDAR Data for Generating a High-Precision Roadway Map
NASA Astrophysics Data System (ADS)
Jeong, J.; Lee, I.
2016-06-01
Generating highly precise maps has grown in importance with the development of autonomous vehicles. A highly precise map has centimetre-level precision, unlike existing commercial maps with metre-level precision. Understanding road environments and making driving decisions depend on such maps, and robust localization is one of the critical challenges for the autonomous driving car. One key data source is a lidar, because it provides highly dense point cloud data with three-dimensional positions, intensities, and ranges from the sensor to targets. In this paper, we focus on how to segment point cloud data from a lidar on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination of a feature descriptor and a classification algorithm from machine learning. Objects can be distinguished by geometrical features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a support vector machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and the method will be utilized to generate a highly precise road map.
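A compressed version of the proposed pipeline, surface normals from a local PCA as the geometric feature and an SVM as the classifier, can be sketched on a toy scene. The feature encoding and kernel choice below are assumptions, not the paper's exact configuration:

```python
import numpy as np
from sklearn.svm import SVC

def normals(points, k=10):
    """Per-point surface normals via PCA of the k nearest neighbours."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    nbrs = np.argsort(d, axis=1)[:, :k]
    out = np.empty_like(points)
    for i, idx in enumerate(nbrs):
        q = points[idx] - points[idx].mean(axis=0)
        # Normal = eigenvector of the smallest eigenvalue of the local
        # covariance (np.linalg.eigh returns ascending eigenvalues).
        _, vecs = np.linalg.eigh(q.T @ q)
        out[i] = vecs[:, 0]
    return out

# Toy labelled scene: a horizontal "road" patch (normal ~ z) and a
# vertical "facade" patch (normal ~ y).
rng = np.random.default_rng(4)
road = np.c_[rng.uniform(0, 10, (200, 2)), rng.normal(0, 0.02, 200)]
wall = np.c_[rng.uniform(0, 10, 200), rng.normal(0, 0.02, 200),
             rng.uniform(0, 3, 200)]
pts = np.vstack([road, wall])
X = np.abs(normals(pts))                 # feature: |normal components|
y = np.r_[np.zeros(200), np.ones(200)]
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))                   # accuracy on the toy scene
```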
First ISON observations for satellite conjunction analysis in the Western Hemisphere
NASA Astrophysics Data System (ADS)
Zalles, R.; Molotov, I.; Kokina, T.; Zolotov, V.; Condori, R.
2018-01-01
In this paper we report on observations of a pair of approaching space objects at the beginning of June 2016, observed jointly by the Tarija Observatory in Bolivia and the Mexican observatory of Sinaloa University in Cosala in the context of the ISON collaboration. These objects were the active satellite STAR ONE C1 (2007-056A), at the GEO position 65 deg west, and the passive satellite LES 6 (1968-081D). The large number of measurements obtained in a few nights allowed a precise orbit reconstruction. The passive satellite LES 6 (with a brightness amplitude variation of 3 magnitudes) was too faint for the small aperture of the Cosala telescope.
COSMOS: Carnegie Observatories System for MultiObject Spectroscopy
NASA Astrophysics Data System (ADS)
Oemler, A.; Clardy, K.; Kelson, D.; Walth, G.; Villanueva, E.
2017-05-01
COSMOS (Carnegie Observatories System for MultiObject Spectroscopy) reduces multislit spectra obtained with the IMACS and LDSS3 spectrographs on the Magellan Telescopes. It can be used for the quick-look analysis of data at the telescope as well as for pipeline reduction of large data sets. COSMOS is based on a precise optical model of the spectrographs, which allows (after alignment and calibration) an accurate prediction of the location of spectral features. This eliminates the line-search procedure that is fundamental to many spectral reduction programs, and allows a robust data pipeline to be run in an almost fully automatic mode, allowing large amounts of data to be reduced with minimal intervention.
Sahota, Tarjinder; Danhof, Meindert; Della Pasqua, Oscar
2015-06-01
Current toxicity protocols relate measures of systemic exposure (i.e. AUC, Cmax) as obtained by non-compartmental analysis to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure. Moreover, it prevents the assessment of variability. The objective of the current investigation was therefore (a) to demonstrate the feasibility of applying nonlinear mixed effects modelling for the evaluation of toxicokinetics and (b) to assess the bias and accuracy in summary measures of systemic exposure for each method. Here, simulation scenarios were evaluated, which mimic toxicology protocols in rodents. To ensure differences in pharmacokinetic properties are accounted for, hypothetical drugs with varying disposition properties were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed effects modelling. Exposure levels were expressed as area under the concentration versus time curve (AUC), peak concentrations (Cmax) and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of parameter estimates. Higher accuracy and precision were observed for model-based estimates (i.e. AUC, Cmax and TAT), irrespective of group or treatment duration, as compared with non-compartmental analysis. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision and bias in parameter estimates. The use of nonlinear mixed effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.
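The non-compartmental summary measures under discussion are straightforward to compute from a sampled concentration-time profile; the nonlinear mixed-effects alternative would instead fit a population pharmacokinetic model. A sketch of the non-compartmental side with hypothetical data:

```python
import numpy as np

def nca_metrics(t_h, conc, threshold):
    """Non-compartmental exposure summaries for one profile: AUC by the
    linear trapezoidal rule, Cmax, and time above threshold (TAT, here
    crudely via a trapezoid over the indicator of exceedance)."""
    t_h, conc = np.asarray(t_h, float), np.asarray(conc, float)
    auc = np.sum((conc[1:] + conc[:-1]) / 2.0 * np.diff(t_h))
    cmax = conc.max()
    above = (conc >= threshold).astype(float)
    tat = np.sum((above[1:] + above[:-1]) / 2.0 * np.diff(t_h))
    return auc, cmax, tat

t = np.array([0, 0.5, 1, 2, 4, 8, 24.0])          # sampling times (h)
c = np.array([0, 3.2, 5.1, 4.4, 2.8, 1.1, 0.1])   # hypothetical conc (mg/L)
print(nca_metrics(t, c, threshold=2.0))
```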
Wireless sensor networks for heritage object deformation detection and tracking algorithm.
Xie, Zhijun; Huang, Guangyan; Zarei, Roozbeh; He, Jing; Zhang, Yanchun; Ye, Hongwu
2014-10-31
Deformation is the direct cause of heritage object collapse. It is therefore important to monitor the deformation of heritage objects and to signal early warnings. However, traditional heritage object monitoring methods only roughly monitor a simple-shaped heritage object as a whole, but cannot monitor complicated heritage objects, which may have a large number of surfaces inside and outside. Wireless sensor networks, comprising many small-sized, low-cost, low-power intelligent sensor nodes, are better suited to detecting the deformation of every small part of a heritage object. Wireless sensor networks need an effective mechanism to reduce both the communication costs and energy consumption in order to monitor the heritage objects in real time. In this paper, we provide an effective heritage object deformation detection and tracking method using wireless sensor networks (EffeHDDT). In EffeHDDT, we discover a connected core set of sensor nodes to reduce the communication cost for transmitting and collecting the data of the sensor networks. Particularly, we propose a heritage object boundary detecting and tracking mechanism. Both theoretical analysis and experimental results demonstrate that our EffeHDDT method outperforms the existing methods in terms of network traffic and the precision of the deformation detection.
Lew, Matthew D.; Lee, Steven F.; Badieirostami, Majid; Moerner, W. E.
2011-01-01
We describe the corkscrew point spread function (PSF), which can localize objects in three dimensions throughout a 3.2 µm depth of field with nanometer precision. The corkscrew PSF rotates as a function of the axial (z) position of an emitter. Fisher information calculations show that the corkscrew PSF can achieve nanometer localization precision with limited numbers of photons. We demonstrate three-dimensional super-resolution microscopy with the corkscrew PSF by imaging beads on the surface of a triangular polydimethylsiloxane (PDMS) grating. With 99,000 photons detected, the corkscrew PSF achieves a localization precision of 2.7 nm in x, 2.1 nm in y, and 5.7 nm in z. PMID:21263500
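For orientation, photon-limited localization precision in single-emitter imaging is commonly estimated with the Thompson-Larson-Webb expression; with ~99,000 photons and a typical PSF width, the photon-limited bound is sub-nanometre, so the few-nanometre values reported above also reflect background, pixelation, and the engineered PSF shape. A sketch with hypothetical parameter values:

```python
import numpy as np

def localization_precision_nm(psf_sigma_nm, n_photons,
                              background=0.0, pixel_nm=0.0):
    """Thompson et al. (2002) localization variance:
    sigma^2 = (s^2 + a^2/12)/N + 8*pi*s^4*b^2 / (a^2*N^2)."""
    var = (psf_sigma_nm ** 2 + pixel_nm ** 2 / 12.0) / n_photons
    if background and pixel_nm:
        var += (8 * np.pi * psf_sigma_nm ** 4 * background ** 2
                / (pixel_nm ** 2 * n_photons ** 2))
    return np.sqrt(var)

# ~99,000 photons with a ~250 nm PSF sigma: bound below 1 nm.
print(localization_precision_nm(250.0, 99_000))
```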
NASA Astrophysics Data System (ADS)
Yamaguchi, M. S.; Yano, T.; Gouda, N.
2018-03-01
We develop a method for identifying a compact object in binary systems with astrometric measurements and apply it to some binaries. The compact objects in some high-mass X-ray binaries and gamma-ray binaries are unknown, which is why the emission mechanisms in such systems have not yet been confirmed. An accurate estimate of the mass of the compact object allows us to identify it in such systems. Astrometric measurements are expected to enable us to estimate the masses of the compact objects in the binary systems via a determination of a binary orbit. We aim to evaluate the possibility of the identification of the compact objects for some binary systems. We then calculate probabilities that the compact object is correctly identified with astrometric observation (= confidence level) by taking into account a dependence of the orbital shape on orbital parameters and distributions of masses of white dwarfs, neutron stars and black holes. We find that astrometric measurements with a precision of 70 μas for γ Cas allow us to identify the compact object at the 99 per cent confidence level if the compact object is a white dwarf with 0.6 M⊙. In addition, we can identify the compact object with a precision of 10 μas at a 97 per cent or higher confidence level for LS I +61° 303 and at 99 per cent or higher for HESS J0632+057. These results imply that astrometric measurements at the 10 μas precision level can realize the identification of compact objects for γ Cas, LS I +61° 303, and HESS J0632+057.
Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets
NASA Astrophysics Data System (ADS)
Gold, P. O.; Cowgill, E.; Kreylos, O.
2009-12-01
Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaicked LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, a key constraint for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble RealWorks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its agreement with ground truth) will be measured in subsequent experiments. To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point cloud, we scan an object of known geometry (a cylinder mounted above a square box) from multiple locations. Preliminary results show that even in a controlled experimental scan of an object of known dimensions, there is significant variability in the precision of the registered point cloud. For example, when 3 scans of the central object are registered using 4 known points (maximum time, maximum equipment), the point clouds align to within ~1 cm (normal to the object surface). However, when the same point clouds are registered with only 1 known point (minimum time, minimum equipment), misalignment of the point clouds can range from 2.5 to 5 cm, depending on target type. The greater misalignment of the 3 point clouds when registered with fewer known points stems from the field method employed in acquiring the dataset and demonstrates the impact of field workflow on LiDAR dataset precision. By quantifying the degree of scan mismatch in results such as this, we can provide users with the information needed to maximize efficiency in remote field surveys.
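Scan-registration precision of the kind measured here is commonly scored as the RMS residual of matched control points after a best-fit rigid alignment. Below is a minimal numpy sketch using the standard Kabsch algorithm; it is an assumed stand-in for the internal computation of commercial registration software, not a description of it:

```python
import numpy as np

def register_rigid(src, dst):
    """Best-fit rotation R and translation t mapping src -> dst
    (Kabsch algorithm); src, dst are (N, 3) arrays of matched points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def registration_rmse(src, dst):
    """RMS misalignment of matched control points after best-fit rigid
    registration -- one way to score scan-to-scan precision."""
    R, t = register_rigid(src, dst)
    resid = dst - (src @ R.T + t)
    return np.sqrt((resid ** 2).sum(axis=1).mean())

# Example: simulated control points seen from two scanner set-ups
rng = np.random.default_rng(0)
src = rng.uniform(0, 10, (6, 3))
th = 0.1
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.01, (6, 3))
print(registration_rmse(src, dst))   # on the order of the simulated 0.01 noise
```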
Capacity and precision in an animal model of visual short-term memory.
Lara, Antonio H; Wallis, Jonathan D
2012-03-14
Temporary storage of information in visual short-term memory (VSTM) is a key component of many complex cognitive abilities. However, it is highly limited in capacity. Understanding the neurophysiological nature of this capacity limit will require a valid animal model of VSTM. We used a multiple-item color change detection task to measure macaque monkeys' VSTM capacity. Subjects' performance deteriorated and reaction times increased as a function of the number of items in memory. Additionally, we measured the precision of the memory representations by varying the distance between sample and test colors. In trials with similar sample and test colors, subjects made more errors compared to trials with highly discriminable colors. We modeled the error distribution as a Gaussian function and used this to estimate the precision of VSTM representations. We found that as the number of items in memory increases, the precision of the representations decreases dramatically. Additionally, we found that focusing attention on one of the objects increases the precision with which that object is stored and degrades the precision of the remaining items. These results are in line with recent findings in human psychophysics and provide a solid foundation for understanding the neurophysiological nature of the capacity limit of VSTM.
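The precision estimate described above can be reproduced in outline by maximum-likelihood fitting of a mixture of a Gaussian (memory-based responses) and a uniform (guessing) component to the report errors. This is a sketch under the simplifying assumption of a linear rather than circular error space, run on simulated data:

```python
import numpy as np
from scipy.optimize import minimize

def fit_precision(errors_deg, range_deg=360.0):
    """MLE of a two-component model of report errors: with probability
    (1 - g) the response is Gaussian around the true value (sd = sigma),
    with probability g it is a uniform guess. Precision is 1/sigma.
    A linear approximation of the usual circular (von Mises) analysis."""
    def nll(params):
        g, sigma = params
        gauss = np.exp(-0.5 * (errors_deg / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        lik = (1 - g) * gauss + g / range_deg
        return -np.log(lik).sum()
    res = minimize(nll, x0=[0.1, 20.0], bounds=[(1e-3, 0.999), (1.0, 120.0)])
    g_hat, sigma_hat = res.x
    return g_hat, 1.0 / sigma_hat   # guess rate, precision

# Example: simulated errors with a 15 % guess rate and sd = 25 degrees
rng = np.random.default_rng(0)
errs = np.where(rng.random(500) < 0.15,
                rng.uniform(-180, 180, 500),
                rng.normal(0, 25, 500))
print(fit_precision(errs))
```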
NASA Technical Reports Server (NTRS)
Cochran, William D.; Hatzes, Artie P.
1993-01-01
The McDonald Observatory Planetary Search program has surveyed a sample of 33 nearby F, G, and K stars since September 1987 to search for substellar companion objects. Measurements of stellar radial velocity variations to a precision of better than 10 m/s were performed as routine observations to detect Jovian planets in orbit around solar-type stars. Results confirm the detection of a companion object to HD114762.
Fazeli-Bakhtiyari, Rana; Panahi-Azar, Vahid; Sorouraddin, Mohammad Hossein; Jouyban, Abolghasem
2015-01-01
Objective(s): Dispersive liquid-liquid microextraction coupled with gas chromatography (GC) with flame ionization detection was developed for the determination of valproic acid (VPA) in human plasma. Materials and Methods: Using a syringe, a mixture of a suitable extraction solvent (40 µl chloroform) and disperser (1 ml acetone) was quickly added to 10 ml of diluted plasma sample containing VPA (pH, 1.0; concentration of NaCl, 4% (w/v)), resulting in a cloudy solution. After centrifugation (6000 rpm for 6 min), an aliquot (1 µl) of the sedimented organic phase was removed using a 1-µl GC microsyringe and injected into the GC system for analysis. A one-variable-at-a-time optimization method was used to study the various parameters affecting the extraction efficiency of the target analyte. The developed method was then fully validated for its accuracy, precision, recovery, stability, and robustness. Results: Under the optimum extraction conditions, a good linearity range was obtained for the calibration graph, with a correlation coefficient higher than 0.998. The limit of detection and lower limit of quantitation were 3.2 and 6 μg/ml, respectively. The relative standard deviations of intra- and inter-day analyses of the examined compound were less than 11.5%. The relative recoveries were found to be in the range of 97 to 107.5%. Finally, the validated method was successfully applied to the analysis of VPA in patient samples. Conclusion: The presented method has acceptable levels of precision, accuracy and relative recovery and could be used for therapeutic drug monitoring of VPA in human plasma. PMID:26730332
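Precision and recovery figures of the kind reported here reduce to two standard formulas. A minimal sketch follows; the replicate concentrations below are invented for illustration only:

```python
import numpy as np

def relative_standard_deviation(x):
    """Precision as %RSD (coefficient of variation) of replicate measurements."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

def relative_recovery(measured, spiked):
    """Relative recovery (%) of a spiked concentration."""
    return 100.0 * measured / spiked

# Hypothetical replicate VPA results (ug/ml) for a 50 ug/ml spike
reps = [48.6, 51.2, 47.9, 52.4, 50.1]
print(relative_standard_deviation(reps), relative_recovery(np.mean(reps), 50.0))
```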
Testing Models of Stellar Structure and Evolution I. Comparison with Detached Eclipsing Binaries
NASA Astrophysics Data System (ADS)
del Burgo, C.; Allende Prieto, C.
2018-05-01
We present the results of an analysis aimed at testing the accuracy and precision of the PARSEC v1.2S library of stellar evolution models, combined with a Bayesian approach, for inferring stellar parameters. We mainly employ the online DEBCat catalogue by Southworth, a compilation of detached eclipsing binary systems with published measurements of masses and radii to ˜ 2 per cent precision. We select a sample of 318 binary components, with masses between 0.10 and 14.5 solar units, and distances between 1.3 pc and ˜ 8 kpc for Galactic objects and ˜ 44-68 kpc for the extragalactic ones. The Bayesian analysis applied takes as input effective temperature, radius, and [Fe/H], and their uncertainties, returning theoretical predictions for other stellar parameters. From the comparison with dynamical masses, we conclude that inferred masses are precisely derived for stars on the main sequence and in the core-helium-burning phase, with respective uncertainties of 4 per cent and 7 per cent, on average. Subgiant and red giant masses are predicted to within 14 per cent, and early asymptotic giant branch stars to within 24 per cent. These results are helpful for further improving the models, in particular for advanced evolutionary stages for which our understanding is limited. We obtain distances and ages for the binary systems and compare them, whenever possible, with precise literature estimates, finding excellent agreement. We discuss evolutionary effects and the challenges associated with the inference of stellar ages from evolutionary models. We also provide useful polynomial fits to theoretical zero-age main-sequence relations.
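The Bayesian step described above amounts to weighting a model grid by a Gaussian likelihood in the observed quantities. The following is a schematic sketch with a hypothetical grid layout (a dict of equal-length arrays); the real PARSEC tables would need loading and interpolation, and the paper's treatment of priors and evolutionary timescales is not reproduced:

```python
import numpy as np

def posterior_mass(grid, teff, r, feh, s_teff, s_r, s_feh):
    """Posterior-mean mass from a stellar-model grid, weighting each model
    by a Gaussian likelihood in (Teff, R, [Fe/H]). `grid` maps the column
    names 'teff', 'radius', 'feh', 'mass' to numpy arrays (hypothetical
    layout, for illustration)."""
    chi2 = ((grid['teff'] - teff) / s_teff) ** 2 \
         + ((grid['radius'] - r) / s_r) ** 2 \
         + ((grid['feh'] - feh) / s_feh) ** 2
    w = np.exp(-0.5 * (chi2 - chi2.min()))   # subtract min for numerical stability
    w /= w.sum()
    m = (w * grid['mass']).sum()
    sd = np.sqrt((w * (grid['mass'] - m) ** 2).sum())
    return m, sd

# Toy three-model grid around a solar-like star
grid = {k: np.asarray(v) for k, v in {
    'teff': [5600, 5800, 6000], 'radius': [0.9, 1.0, 1.1],
    'feh': [0.0, 0.0, 0.0], 'mass': [0.95, 1.00, 1.06]}.items()}
print(posterior_mass(grid, 5820, 1.02, 0.0, 60, 0.02, 0.05))
```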
Berg, Wolfgang; Bechler, Robin; Laube, Norbert
2009-01-01
Since its first publication in 2000, the BONN-Risk-Index (BRI) has been successfully used to determine the calcium oxalate (CaOx) crystallization risk from urine samples. To date, a BRI-measuring device, the "Urolizer", has been developed, operating automatically and requiring only a minimum of preparation. Two major objectives were pursued: determination of Urolizer precision, and determination of the influence of 24-h urine storage at moderate temperatures on BRI. 24-h urine samples from 52 CaOx stone-formers were collected. A total of 37 urine samples were used for the investigation of Urolizer precision by performing six independent BRI determinations in series. In total, 30 samples were taken for additional investigation of urine storability. Each sample was measured thrice: directly after collection, after 24-h storage at T=21 degrees C, and after 24-h cooling at T=4 degrees C. Outcomes were statistically tested for identity with regard to the immediately obtained results. Repeat measurements for evaluation of Urolizer precision revealed statistical identity of the data (p > 0.05). 24-h storage of urine at both tested temperatures did not significantly affect BRI (p > 0.05). The pilot-run Urolizer shows high analytical reliability. The innovative analysis device may be especially suited for urologists specializing in urolithiasis treatment. The possibility of urine storage at moderate temperatures without loss of analysis quality further demonstrates the applicability of the BRI method.
NASA Astrophysics Data System (ADS)
Belbachir, A. N.; Hofstätter, M.; Litzenberger, M.; Schön, P.
2009-10-01
A synchronous communication interface for neuromorphic temporal contrast vision sensors is described and evaluated in this paper. This interface has been designed for ultra-high-speed synchronous arbitration of a temporal contrast image sensor's pixel data. Enabling high-precision timestamping, this system demonstrates its uniqueness in handling peak data rates while preserving the main advantage of neuromorphic electronic systems, namely high and accurate temporal resolution. Based on a synchronous arbitration concept, the timestamping has a resolution of 100 ns. Both synchronous and (state-of-the-art) asynchronous arbiters have been implemented in a neuromorphic dual-line vision sensor chip in a standard 0.35 µm CMOS process. The performance analysis of both arbiters and the advantages of synchronous arbitration over asynchronous arbitration in capturing high-speed objects are discussed in detail.
Towards accurate radial velocities from early type spectra in the framework of an ESO key programme
NASA Astrophysics Data System (ADS)
Verschueren, Werner; David, M.; Hensberge, Herman
In order to elucidate the internal kinematics of very young stellar groups, dedicated machinery was set up which made it possible to proceed from actual observations through reductions and correlation analysis to the ultimate derivation of early-type stellar radial velocities (RVs) with the requisite precision. The following ingredients are found to be essential to obtain RVs of early-type stars at the 1-km/s level of precision: high-resolution, high-S/N spectra covering a large wavelength range; maximal reduction of observational errors and the use of optimal reduction procedures; the intelligent use of a versatile cross-correlation package; and comparison of velocities derived from different regions of the spectrum in order to detect systematic mismatches between object and template spectrum in some of the lines.
[Marketing research in health service].
Ameri, Cinzia; Fiorini, Fulvio
2015-01-01
Marketing research is the systematic and objective search for, and analysis of, information relevant to the identification and solution of any problem in the field of marketing. The key words in this definition are: systematic, objective and analysis. Marketing research seeks to set about its task in a systematic and objective fashion. This means that a detailed and carefully designed research plan is developed in which each stage of the research is specified. Such a research plan is only considered adequate if it specifies: the research problem in concise and precise terms, the information necessary to address the problem, the methods to be employed in gathering the information and the analytical techniques to be used to interpret it. Maintaining objectivity in marketing research is essential if marketing management is to have sufficient confidence in its results to be prepared to take risky decisions based upon those results. To this end, as far as possible, marketing researchers employ the scientific method. The characteristics of the scientific method are that it translates personal prejudices, notions and opinions into explicit propositions (or hypotheses). These are tested empirically. At the same time alternative explanations of the event or phenomena of interest are given equal consideration.
2010-05-27
programming language, threads can only communicate through fields, and this assertion prohibits an alias to the object under construction from being written... 1.9. We call this type of reporting "compiler-like" in the sense that the descriptive message output by the tool has to communicate the semantics of... way to communicate a "need" for further annotation to the tool user, because a precise expression of both the location and content of the needed
[Automated procedure for volumetric measurement of metastases: estimation of tumor burden].
Fabel, M; Bolte, H
2008-09-01
Cancer is a common and increasing disease worldwide. Therapy monitoring in oncologic patient care requires accurate and reliable measurement methods for evaluation of the tumor burden. RECIST (response evaluation criteria in solid tumors) and WHO criteria are still the current standards for therapy response evaluation, with inherent disadvantages due to considerable interobserver variation in the manual diameter estimations. Volumetric analysis of, e.g., lung, liver and lymph node metastases promises to be a more accurate, precise and objective method for tumor burden estimation.
An approach for quantitative image quality analysis for CT
NASA Astrophysics Data System (ADS)
Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe
2016-03-01
An objective and standardized approach to assessing the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a MATLAB-based image analysis toolkit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates PCA components with sparse loadings, used in conjunction with the Hotelling T² statistical analysis method to compare, qualify, and detect faults in the tested systems.
A dynamical approach in exploring the unknown mass in the Solar system using pulsar timing arrays
NASA Astrophysics Data System (ADS)
Guo, Y. J.; Lee, K. J.; Caballero, R. N.
2018-04-01
The error in the Solar system ephemeris will lead to dipolar correlations in the residuals of a pulsar timing array for widely separated pulsars. In this paper, we utilize such correlated signals and construct a Bayesian data-analysis framework to detect unknown masses in the Solar system and to measure their orbital parameters. The algorithm is designed to calculate the waveform of the induced pulsar-timing residuals due to unmodelled objects following Keplerian orbits in the Solar system. The algorithm incorporates a Bayesian-analysis suite used to simultaneously analyse the pulsar-timing data of multiple pulsars to search for coherent waveforms, evaluate the detection significance of unknown objects, and measure their parameters. When the object is not detectable, our algorithm can be used to place upper limits on the mass. The algorithm is verified using simulated data sets and cross-checked with analytical calculations. We also investigate the capability of future pulsar-timing-array experiments to detect unknown objects. We expect that future pulsar-timing data can limit unknown massive objects in the Solar system to be lighter than 10^-11 to 10^-12 M⊙, or measure the mass of the Jovian system to a fractional precision of 10^-8 to 10^-9.
Chen, Qiang; Chen, Yunhao; Jiang, Weiguo
2016-07-30
In the field of multiple-feature Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features, and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple-feature OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO and apply the proposed algorithm to the object-based hybrid multivariate alteration detection model. Two experimental cases on WorldView-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid the problem of premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO achieves higher overall accuracy (84.17% and 83.59%) and Kappa coefficients (0.6771 and 0.6314) than the other algorithms. Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm do affect the algorithm. The comparison experiment results reveal that RMV is more suitable than other functions as the fitness function of a GPSO-based feature selection algorithm.
Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.
2009-01-01
The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance and quality-control data were stored in the laboratory's Lab Master data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality control samples analyzed from July 2001 through June 2003. Results for the quality-control samples for 19 analytical procedures were evaluated for bias and precision. Control charts indicate that data for six of the analytical procedures were occasionally biased for either high-concentration or low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, chloride, magnesium, nitrate (ion chromatography), potassium, and sodium. The calcium procedure was biased throughout the analysis period for the high-concentration sample, but was within control limits. The total monomeric aluminum and fluoride procedures were biased throughout the analysis period for the low-concentration sample, but were within control limits. The total aluminum, pH, specific conductance, and sulfate procedures were biased for the high-concentration and low-concentration samples, but were within control limits. Results from the filter-blank and analytical-blank analyses indicate that the procedures for 16 of 18 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. The data-quality objective was not met for the dissolved organic carbon or specific conductance procedures. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 18 of the 21 analytes. At least 90 percent of the samples met data-quality objectives for all procedures except total monomeric aluminum (83 percent of samples met objectives), total aluminum (76 percent of samples met objectives), ammonium (73 percent of samples met objectives), dissolved organic carbon (86 percent of samples met objectives), and nitrate (81 percent of samples met objectives). The data-quality objective was not met for the nitrite procedure. Results of the USGS interlaboratory Standard Reference Sample (SRS) Project indicated satisfactory or above data quality over the time period, with most performance ratings for each sample in the good-to-excellent range. The N-sample (nutrient constituents) analysis had one unsatisfactory rating for the ammonium procedure in one study. The T-sample (trace constituents) analysis had one unsatisfactory rating for the magnesium procedure and one marginal rating for the potassium procedure in one study and one unsatisfactory rating for the sodium procedure in another. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 90 percent of the samples met data-quality objectives for 10 of the 14 analytes; the exceptions were acid-neutralizing capacity, ammonium, dissolved organic carbon, and sodium. 
Data-quality objectives were not met in 37 percent of samples analyzed for acid-neutralizing capacity, 28 percent of samples analyzed for dissolved organic carbon, and 30 percent of samples analyzed for sodium. Results indicate a positive bias for the ammonium procedure in one study and a negative bias in another. Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 90 percent of the samples analyzed for calcium, chloride, magnesium, pH, potassium, and sodium. Data-quality objectives were met by 78 percent of
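The precision criterion applied to the triplicate samples above is the coefficient of variation. A minimal sketch follows; the 10 percent objective and the example concentrations are hypothetical, for illustration only:

```python
import numpy as np

def triplicate_cv(a, b, c):
    """Coefficient of variation (%) of a triplicate, the precision
    statistic used for data-quality objectives of this kind."""
    x = np.array([a, b, c], dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# Hypothetical objective: triplicates must agree to within 10 % CV
cv = triplicate_cv(2.41, 2.55, 2.38)   # e.g. nitrate in mg/L
print(cv, "meets objective" if cv <= 10.0 else "fails objective")
```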
Accuracy or precision: Implications of sample design and methodology on abundance estimation
Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.
2015-01-01
Sampling by spatially replicated counts (point counts) is an increasingly popular method of estimating the population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the roles of the number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than those derived from sample scenarios with many sample units of small area. It is important to consider the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; in practice, however, and with consequence, such consideration is often an afterthought that occurs during the data analysis process.
DeepID-Net: Deformable Deep Convolutional Neural Networks for Object Detection.
Ouyang, Wanli; Zeng, Xingyu; Wang, Xiaogang; Qiu, Shi; Luo, Ping; Tian, Yonglong; Li, Hongsheng; Yang, Shuo; Wang, Zhe; Li, Hongyang; Loy, Chen Change; Wang, Kun; Yan, Junjie; Tang, Xiaoou
2016-07-07
In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation-constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraints and penalties. A new pre-training strategy is proposed to learn feature representations that are more suitable for the object detection task and have good generalization capability. By changing the net structures and training strategies, and adding and removing some key components in the detection pipeline, a set of models with large diversity is obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean average precision obtained by RCNN [16], which was the state of the art, from 31% to 50.3% on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1%. Detailed component-wise analysis is also provided through extensive experimental evaluation, which provides a global view for people to understand the deep learning object detection pipeline.
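The mean average precision (mAP) figures quoted here are averages, over object classes, of the per-class average precision. Below is a minimal sketch of the underlying AP computation from ranked detections (the non-interpolated variant, for illustration; it assumes at least one true positive):

```python
import numpy as np

def average_precision(scores, labels):
    """Area under the precision-recall curve for one class, the building
    block of the mAP metric. `scores` are detector confidences and
    `labels` are 1 for true positives, 0 for false positives."""
    order = np.argsort(scores)[::-1]        # rank detections by confidence
    tp = np.asarray(labels)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    # average the precision values at the ranks of the true positives
    return (precision * tp).sum() / tp.sum()

print(average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 1]))  # ~0.806
```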
Analysis of neoplastic lesions in magnetic resonance imaging using self-organizing maps.
Mei, Paulo Afonso; de Carvalho Carneiro, Cleyton; Fraser, Stephen J; Min, Li Li; Reis, Fabiano
2015-12-15
To provide an improved method for the identification and analysis of brain tumors in MRI scans using a semi-automated computational approach that has the potential to provide a more objective, precise and quantitatively rigorous analysis than human visual analysis. The Self-Organizing Map (SOM) is an unsupervised, exploratory data analysis tool which can automatically partition an image into self-similar regions or clusters based on measures of similarity. It can be used to perform segmentation of brain tissue on MR images without prior knowledge. We used SOM to analyze T1, T2 and FLAIR acquisitions from two MRI machines in our service from 14 patients with brain tumors confirmed by biopsies: three lymphomas, six glioblastomas, one meningioma, one ganglioglioma, two oligoastrocytomas and one astrocytoma. The SOM software was used to analyze the data from the three image acquisitions from each patient and generated a self-organized map for each, containing 25 clusters. Damaged tissue was separated from normal tissue using the SOM technique. Furthermore, in some cases it allowed separation of different areas within the tumor, such as edema/peritumoral infiltration and necrosis. In lesions with less precise boundaries in FLAIR, the estimated damaged tissue area in the resulting map appears bigger. Our results showed that SOM has the potential to be a powerful MR imaging analysis technique for the assessment of brain tumors. Copyright © 2015. Published by Elsevier B.V.
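The clustering step can be sketched with an off-the-shelf SOM implementation. The snippet below uses the minisom package as a stand-in for the authors' SOM software, with hypothetical voxel features; in practice one would stack co-registered T1/T2/FLAIR intensities per voxel:

```python
import numpy as np
from minisom import MiniSom  # stand-in for the SOM tool used by the authors

def som_cluster(voxel_features, grid=(5, 5), iters=10_000):
    """Unsupervised clustering of multi-contrast MR voxels with a 5x5
    self-organising map -- 25 clusters, as in the abstract.
    voxel_features: (n_voxels, n_contrasts) array."""
    som = MiniSom(grid[0], grid[1], voxel_features.shape[1],
                  sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(voxel_features, iters)
    # label every voxel with the flat index of its best-matching unit
    return np.array([np.ravel_multi_index(som.winner(v), grid)
                     for v in voxel_features])

rng = np.random.default_rng(1)
labels = som_cluster(rng.random((1000, 3)))   # hypothetical T1/T2/FLAIR triplets
```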
Development of novel hybrid flexure-based microgrippers for precision micro-object manipulation
NASA Astrophysics Data System (ADS)
Mohd Zubir, Mohd Nashrul; Shirinzadeh, Bijan; Tian, Yanling
2009-06-01
This paper describes the process of developing a microgripper that is capable of high precision and fidelity manipulation of micro-objects. The design adopts the concept of flexure-based hinges on its joints to provide the rotational motion, thus eliminating the inherent nonlinearities associated with the application of conventional rigid hinges. A combination of two modeling techniques, namely, pseudorigid body model and finite element analysis was utilized to expedite the prototyping procedure, which leads to the establishment of a high performance mechanism. A new hybrid compliant structure integrating cantilever beam and flexural hinge configurations within microgripper mechanism mainframe has been developed. This concept provides a novel approach to harness the advantages within each individual configuration while mutually compensating the limitations inherent between them. A wire electrodischarge machining technique was utilized to fabricate the gripper out of high grade aluminum alloy (Al 7075T6). Experimental studies were conducted on the model to obtain various correlations governing the gripper performance as well as for model verification. The experimental results demonstrate high level of compliance in comparison to the computational results. A high amplification characteristic and maximum achievable stroke of 100 μm can be achieved.
Multi-criteria analysis of potential recovery facilities in a reverse supply chain
NASA Astrophysics Data System (ADS)
Nukala, Satish; Gupta, Surendra M.
2005-11-01
The Analytic Hierarchy Process (AHP) has been employed by researchers for solving multi-criteria analysis problems. However, AHP is often criticized for its unbalanced scale of judgments and its failure to precisely handle the inherent uncertainty and vagueness in carrying out the pair-wise comparisons. With the objective of addressing these drawbacks, in this paper we employ a fuzzy approach to selecting potential recovery facilities in the strategic planning of a reverse supply chain network, one that addresses the decision maker's level of confidence in the fuzzy assessments and his/her attitude towards risk. A numerical example is considered to illustrate the methodology.
Set size, individuation, and attention to shape.
Cantrell, Lisa; Smith, Linda B
2013-02-01
Much research has demonstrated a shape bias in categorizing and naming solid objects. This research has shown that when an entity is conceptualized as an individual object, adults and children attend to the object's shape. Separate research in the domain of numerical cognition suggests that there are distinct processes for quantifying small and large sets of discrete items. This research shows that small-set discrimination, comparison, and apprehension are often precise for 1-3 and sometimes 4 items; however, large numerosity representation is imprecise. Results from three experiments suggest a link between the processes for small and large number representation and the shape bias in a forced-choice categorization task using naming and non-naming procedures. Experiment 1 showed that adults generalized a newly learned name for an object to new instances of the same shape only when those instances were presented in sets of less than 3 or 4. Experiment 2 showed that preschool children who were monolingual speakers of three different languages were also influenced by set size when categorizing objects in sets. Experiment 3 extended these results and showed the same effect in a non-naming task and when the novel noun was presented in a count-noun syntax frame. The results are discussed in terms of a relation between the precision of object representation and the precision of small and large number representation. Copyright © 2012 Elsevier B.V. All rights reserved.
Mirsky, Simcha K; Barnea, Itay; Levi, Mattan; Greenspan, Hayit; Shaked, Natan T
2017-09-01
Currently, the delicate process of selecting sperm cells to be used for in vitro fertilization (IVF) is still based on the subjective, qualitative analysis of experienced clinicians using non-quantitative optical microscopy techniques. In this work, a method was developed for the automated analysis of sperm cells based on quantitative phase maps acquired through interferometric phase microscopy (IPM). Over 1,400 human sperm cells from 8 donors were imaged using IPM, and an algorithm was designed to digitally isolate sperm cell heads from the quantitative phase maps, taking into consideration both the cell 3D morphology and contents, and to acquire features describing sperm head morphology. A subset of these features was used to train a support vector machine (SVM) classifier to automatically classify sperm of good and bad morphology. The SVM achieves an area under the receiver operating characteristic curve of 88.59% and an area under the precision-recall curve of 88.67%, as well as precisions of 90% or higher. We believe that our automatic analysis can become the basis for objective and automatic sperm cell selection in IVF. © 2017 International Society for Advancement of Cytometry.
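The classification and evaluation pipeline maps directly onto standard tooling. Below is a sketch using scikit-learn with synthetic stand-in features; the real morphology features extracted from the phase maps are not reproduced here:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

# Hypothetical morphology features (e.g. head area, dry-mass moments);
# labels: 1 = good morphology. Both are synthetic stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(1400, 12))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=1400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]

print("ROC AUC:", roc_auc_score(y_te, p))            # cf. 88.59 % in the paper
print("PR  AUC:", average_precision_score(y_te, p))  # cf. 88.67 %
```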
An object-mediated updating account of insensitivity to transsaccadic change
Tas, A. Caglar; Moore, Cathleen M.; Hollingworth, Andrew
2012-01-01
Recent evidence has suggested that relatively precise information about the location and visual form of a saccade target object is retained across a saccade. However, this information appears to be available for report only when the target is removed briefly, so that the display is blank when the eyes land. We hypothesized that the availability of precise target information is dependent on whether a post-saccade object is mapped to the same object representation established for the presaccade target. If so, then the post-saccade features of the target overwrite the presaccade features, a process of object mediated updating in which visual masking is governed by object continuity. In two experiments, participants' sensitivity to the spatial displacement of a saccade target was improved when that object changed surface feature properties across the saccade, consistent with the prediction of the object-mediating updating account. Transsaccadic perception appears to depend on a mechanism of object-based masking that is observed across multiple domains of vision. In addition, the results demonstrate that surface-feature continuity contributes to visual stability across saccades. PMID:23092946
Automated analysis of art object surfaces using time-averaged digital speckle pattern interferometry
NASA Astrophysics Data System (ADS)
Lukomski, Michal; Krzemien, Leszek
2013-05-01
Technical development and practical evaluation of a laboratory built, out-of-plane digital speckle pattern interferometer (DSPI) are reported. The instrument was used for non-invasive, non-contact detection and characterization of early-stage damage, like fracturing and layer separation, of painted objects of art. A fully automated algorithm was developed for recording and analysis of vibrating objects utilizing continuous-wave laser light. The algorithm uses direct, numerical fitting or Hilbert transformation for an independent, quantitative evaluation of the Bessel function at every point of the investigated surface. The procedure does not require phase modulation and thus can be implemented within any, even the simplest, DSPI apparatus. The proposed deformation analysis is fast and computationally inexpensive. Diagnosis of physical state of the surface of a panel painting attributed to Nicolaus Haberschrack (a late-mediaeval painter active in Krakow) from the collection of the National Museum in Krakow is presented as an example of an in situ application of the developed methodology. It has allowed the effectiveness of the deformation analysis to be evaluated for the surface of a real painting (heterogeneous colour and texture) in a conservation studio where vibration level was considerably higher than in the laboratory. It has been established that the methodology, which offers automatic analysis of the interferometric fringe patterns, has a considerable potential to facilitate and render more precise the condition surveys of works of art.
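The quantitative evaluation rests on the time-averaged DSPI relation in which fringe brightness follows the square of the zeroth-order Bessel function of the vibration phase amplitude. Below is a minimal curve-fit sketch with simulated data; the optical constant k and the amplitude values are illustrative assumptions, not the paper's calibration:

```python
import numpy as np
from scipy.special import j0
from scipy.optimize import curve_fit

def fringe_model(amp_nm, scale, k):
    """Time-averaged DSPI brightness at a point vibrating with amplitude
    amp_nm: proportional to J0^2 of the phase amplitude (4*pi*a/lambda
    for out-of-plane sensitivity); k bundles the optical constants."""
    return scale * j0(k * amp_nm) ** 2

# Simulated calibration: brightness at points of known vibration amplitude
amps = np.linspace(0, 300, 40)                      # nm
data = fringe_model(amps, 1.0, 0.02) \
     + np.random.default_rng(2).normal(0, 0.01, 40)
(scale_hat, k_hat), _ = curve_fit(fringe_model, amps, data, p0=[1.0, 0.015])
print(scale_hat, k_hat)   # recovers ~1.0 and ~0.02
```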
2018-01-01
Objective To investigate the psychometric properties of the activities of daily living (ADL) instrument used in the analysis of Korean Longitudinal Study of Ageing (KLoSA) dataset. Methods A retrospective study was carried out involving 2006 KLoSA records of community-dwelling adults diagnosed with stroke. The ADL instrument used for the analysis of KLoSA included 17 items, which were analyzed using Rasch modeling to develop a robust outcome measure. The unidimensionality of the ADL instrument was examined based on confirmatory factor analysis with a one-factor model. Item-level psychometric analysis of the ADL instrument included fit statistics, internal consistency, precision, and the item difficulty hierarchy. Results The study sample included a total of 201 community-dwelling adults (1.5% of the Korean population with an age over 45 years; mean age=70.0 years, SD=9.7) having a history of stroke. The ADL instrument demonstrated unidimensional construct. Two misfit items, money management (mean square [MnSq]=1.56, standardized Z-statistics [ZSTD]=2.3) and phone use (MnSq=1.78, ZSTD=2.3) were removed from the analysis. The remaining 15 items demonstrated good item fit, high internal consistency (person reliability=0.91), and good precision (person strata=3.48). The instrument precisely estimated person measures within a wide range of theta (−4.75 logits < θ < 3.97 logits) and a reliability of 0.9, with a conceptual hierarchy of item difficulty. Conclusion The findings indicate that the 15 ADL items met Rasch expectations of unidimensionality and demonstrated good psychometric properties. It is proposed that the validated ADL instrument can be used as a primary outcome measure for assessing longitudinal disability trajectories in the Korean adult population and can be employed for comparative analysis of international disability across national aging studies. PMID:29765888
Dahab, Gamal M; Kheriza, Mohamed M; El-Beltagi, Hussien M; Fouda, Abdel-Motaal M; El-Din, Osama A Sharaf
2004-01-01
The precise quantification of fibrous tissue in liver biopsy sections is extremely important in the classification, diagnosis and grading of chronic liver disease, as well as in evaluating the response to antifibrotic therapy. Because the recently described methods of digital image analysis of fibrosis in liver biopsy sections have major flaws, including the use of outdated techniques in image processing, inadequate precision and inability to detect and quantify perisinusoidal fibrosis, we developed a new technique in computerized image analysis of liver biopsy sections based on Adobe Photoshop software. We prepared an experimental model of liver fibrosis involving treatment of rats with oral CCl4 for 6 weeks. After staining liver sections with Masson's trichrome, a series of computer operations were performed, including (i) reconstitution of seamless widefield images from a number of acquired fields of liver sections; (ii) image size and resolution adjustment; (iii) color correction; (iv) digital selection of a specified color range representing all fibrous tissue in the image; and (v) extraction and calculation. This technique is fully computerized with no manual interference at any step, and thus could be very reliable for objectively quantifying any pattern of fibrosis in liver biopsy sections and in assessing the response to antifibrotic therapy. It could also be a valuable tool in the precise assessment of antifibrotic therapy in other tissues, regardless of the pattern of tissue or fibrosis.
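The colour-range selection and area calculation can be approximated outside Photoshop with a simple per-pixel rule. This is a crude numpy sketch; the blue-dominance margin and background cutoff are invented for illustration, not the paper's calibrated settings:

```python
import numpy as np

def fibrosis_fraction(rgb, blue_margin=20):
    """Fraction of tissue pixels falling in a 'blue' colour range, a crude
    stand-in for the Photoshop colour-range selection step: in Masson's
    trichrome, collagen stains blue. `rgb` is an (H, W, 3) uint8 image."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    tissue = rgb.sum(axis=-1) < 720            # exclude near-white background
    fibrous = (b > r + blue_margin) & (b > g + blue_margin) & tissue
    return fibrous.sum() / max(tissue.sum(), 1)

# Example on a random image; real use would load a scanned section
rng = np.random.default_rng(3)
print(fibrosis_fraction(rng.integers(0, 256, (512, 512, 3), dtype=np.uint8)))
```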
NASA Astrophysics Data System (ADS)
Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.
2018-05-01
A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each one has different methodological bases, and advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE) for the single-camera analysis, and 0.07 to 0.19 mm for the dual-camera analysis. To the best of the authors' knowledge, this work is the first to address the topic of DF stability analysis.
Present situation and trend of precision guidance technology and its intelligence
NASA Astrophysics Data System (ADS)
Shang, Zhengguo; Liu, Tiandong
2017-11-01
This paper first introduces the basic concepts of precision guidance technology and artificial intelligence technology. It then gives a brief introduction to intelligent precision guidance technology and, drawing on foreign projects developing intelligent weapons based on deep learning (the LRASM missile project, the TRACE project and the BLADE project), gives an overview of current foreign precision guidance technology. Finally, the future development trend of intelligent precision guidance technology is summarized; it is mainly concentrated in multi-objective engagement, intelligent classification, weak-target detection and recognition, intelligent anti-jamming in complex environments, multi-source information fusion, multi-missile cooperative fighting and other aspects.
Airborne Visible Laser Optical Communications (AVLOC) experiment
NASA Technical Reports Server (NTRS)
1974-01-01
A series of optical communication experiments between a high-altitude aircraft at 18.3 km (60,000 ft) and a ground station were conducted by NASA from summer 1972 through winter 1973. The basic system was an optical tracker and transmitter located in each terminal. The aircraft transceiver consisted of a 5-mW HeNe laser transmitter with a 30-megabit modulator. The ground station beacon was an argon laser operating at 488 nm. A separate pulsed laser radar was used for initial acquisition. The objective of the experiment was to obtain engineering data on the precision tracking and communication system performance at both terminals. Atmospheric effects on the system performance were also an experiment objective. The system description, engineering analysis, testing, and flight results are discussed.
A real-time compliance mapping system using standard endoscopic surgical forceps.
Fakhry, Morkos; Bello, Fernando; Hanna, George B
2009-04-01
In endoscopic surgery, the use of long surgical instruments through access ports diminishes tactile feedback and degrades the surgeon's ability to identify hidden tissue abnormalities. To overcome this constraint, we developed a real-time compliance mapping system composed of: 1) a standard surgical instrument with a high-precision sensor configuration design; 2) real-time objective interpretation of the output signals for tissue identification; and 3) a novel human-computer interaction technique using interactive voice and handle-force monitoring techniques to suit the operating theater working environment. The system was calibrated and used in clinical practice in four routine endoscopic human procedures. In a laboratory-based experiment comparing the tissue discriminatory power of the system with that of surgeons' hands, the system's tissue discriminatory power was three times more sensitive and 10% less specific. The data acquisition precision was tested using principal component analysis (R²X = 0.975, Q²(cum) = 0.808) and partial least squares discriminant analysis (R²X = 0.903, R²Y = 0.729, Q²(cum) = 0.572).
X-Ray Emission from "Uranium" Stars
NASA Technical Reports Server (NTRS)
Schlegel, Eric; Mushotzky, Richard (Technical Monitor)
2005-01-01
The project aims to secure XMM observations of two targets with extremely low abundances of the majority of heavy elements (e.g., log[Fe/H] ~ -4) but that show absorption lines of uranium. The presence of an r-process element such as uranium requires a binary star system in which the companion underwent a supernova explosion. A binary star system raises the distinct possibility of the existence of a compact object, most likely a neutron star, in the binary, assuming it survived the supernova blast. The presence of a compact object then suggests X-ray emission if sufficient matter accretes onto the compact object. The observations were completed less than one year ago, following a series of reobservations to correct for significant flaring that occurred during the original observations. The ROSAT all-sky survey was used to report on the initial assessment of X-ray emission from these objects; only upper limits were reported. These upper limits were used to justify the XMM observing time, but with the expectation that the upper limits would merely be pushed lower. The data analysis hinges critically on the quality and degree of precision with which the background is handled. During the past year, I have spent some time learning the ins and outs of XMM data analysis. In the coming year, I can apply that learning to the analysis of the 'uranium' stars.
NASA Technical Reports Server (NTRS)
Rogers, Ralph V.
1993-01-01
The TATSS Project's goal was to develop a design for computer software that would support the attainment of the following objectives for the air traffic simulation model: (1) Full freedom of movement for each aircraft object in the simulation model. Each aircraft object may follow any designated flight plan or flight path necessary as required by the experiment under consideration. (2) Object position precision up to +/- 3 meters vertically and +/- 15 meters horizontally. (3) Aircraft maneuvering in three space with the object position precision identified above. (4) Air traffic control operations and procedures. (5) Radar, communication, navaid, and landing aid performance. (6) Weather. (7) Ground obstructions and terrain. (8) Detection and recording of separation violations. (9) Measures of performance including deviations from flight plans, air space violations, air traffic control messages per aircraft, and traditional temporal based measures.
Technical advances in proteomics: new developments in data-independent acquisition.
Hu, Alex; Noble, William S; Wolf-Yadlin, Alejandro
2016-01-01
The ultimate aim of proteomics is to fully identify and quantify the entire complement of proteins and post-translational modifications in biological samples of interest. For the last 15 years, liquid chromatography-tandem mass spectrometry (LC-MS/MS) in data-dependent acquisition (DDA) mode has been the standard for proteomics when sampling breadth and discovery were the main objectives; multiple reaction monitoring (MRM) LC-MS/MS has been the standard for targeted proteomics when precise quantification, reproducibility, and validation were the main objectives. Recently, improvements in mass spectrometer design and bioinformatics algorithms have resulted in the rediscovery and development of another sampling method: data-independent acquisition (DIA). DIA comprehensively and repeatedly samples every peptide in a protein digest, producing a complex set of mass spectra that is difficult to interpret without external spectral libraries. Currently, DIA approaches the identification breadth of DDA while achieving the reproducible quantification characteristic of MRM or its newest version, parallel reaction monitoring (PRM). In comparative de novo identification and quantification studies in human cell lysates, DIA identified up to 89% of the proteins detected in a comparable DDA experiment while providing reproducible quantification of over 85% of them. DIA analysis aided by spectral libraries derived from prior DIA experiments or auxiliary DDA data produces identification and quantification as reproducible and precise as that achieved by MRM/PRM, except on low‑abundance peptides that are obscured by stronger signals. DIA is still a work in progress toward the goal of sensitive, reproducible, and precise quantification without external spectral libraries. New software tools applied to DIA analysis have to deal with deconvolution of complex spectra as well as proper filtering of false positives and false negatives. However, the future outlook is positive, and various researchers are working on novel bioinformatics techniques to address these issues and increase the reproducibility, fidelity, and identification breadth of DIA.
Conscientious objection to referrals for abortion: pragmatic solution or threat to women’s rights?
2014-01-01
Background: Conscientious objection has spurred impassioned debate in many Western countries. Some Norwegian general practitioners (GPs) refuse to refer for abortion. Little is known about how the GPs carry out their refusals in practice, how they perceive their refusal to fit with their role as professionals, and how refusals impact patients. Empirical data can inform subsequent normative analysis. Methods: Qualitative research interviews were conducted with seven GPs, all Christians. Transcripts were analysed using systematic text condensation. Results: Informants displayed a marked ambivalence towards their own refusal practices. Five main topics emerged in the interviews: 1) carrying out conscientious objection in practice, 2) justification for conscientious objection, 3) challenges when relating to colleagues, 4) ambivalence and consistency, 5) effects on the doctor-patient relationship. Conclusions: Norwegian GP conscientious objectors tended to consider both pros and cons when evaluating their refusal practices. They had settled on a practical compromise, the precise form of which would vary, and which was deemed an acceptable middle way between competing interests. PMID:24571955
NASA Technical Reports Server (NTRS)
Crawford, Daniel J.; Burdette, Daniel W.; Capron, William R.
1993-01-01
The methodology and techniques used to collect and analyze look-point position data from a real-time ATC display-format comparison experiment are documented. That study compared the delivery precision and controller workload of three final approach spacing aid display formats. Using an oculometer, controller lookpoint position data were collected, associated with gaze objects (e.g., moving aircraft) on the ATC display, and analyzed to determine eye-scan behavior. The equipment involved and algorithms for saving, synchronizing with the ATC simulation output, and filtering the data are described. Target (gaze object) and cross-check scanning identification algorithms are also presented. Data tables are provided of total dwell times, average dwell times, and cross-check scans. Flow charts, block diagrams, file record descriptors, and source code are included. The techniques and data presented are intended to benefit researchers in other studies that incorporate non-stationary gaze objects and oculometer equipment.
Simplified Rotation In Acoustic Levitation
NASA Technical Reports Server (NTRS)
Barmatz, M. B.; Gaspar, M. S.; Trinh, E. H.
1989-01-01
New technique based on old discovery used to control orientation of object levitated acoustically in axisymmetric chamber. Method does not require expensive equipment like additional acoustic drivers of precisely adjustable amplitude, phase, and frequency. Reflecting object acts as second source of sound. If reflecting object large enough, close enough to levitated object, or focuses reflected sound sufficiently, Rayleigh torque exerted on levitated object by reflected sound controls orientation of object.
Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Eguía, Pablo; Collazo, Joaquín
2010-01-01
The objective of this study was to develop a methodology for the determination of the maximum sampling error and confidence intervals of thermal properties obtained from thermogravimetric analysis (TG), including moisture, volatile matter, fixed carbon and ash content. The sampling procedure of the TG analysis was of particular interest and was conducted with care. The results of the present study were compared to those of a proximate analysis, and a correlation between the mean values and maximum sampling errors of the two methods was not observed. In general, low and acceptable levels of uncertainty and error were obtained, demonstrating that the properties evaluated by TG analysis were representative of the overall fuel composition. The accurate determination of the thermal properties of biomass, with precise confidence intervals, is of particular interest in energetic biomass applications. PMID:20717532
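One conventional way to express a maximum sampling error is the half-width of a t-based confidence interval for the mean of replicates. The sketch below assumes that estimator (whether it matches the paper's exact definition is not established here), and the replicate ash contents are invented:

```python
import numpy as np
from scipy import stats

def max_sampling_error(samples, confidence=0.95):
    """Half-width of the t confidence interval for the mean of replicate
    TG determinations -- one common reading of 'maximum sampling error'."""
    x = np.asarray(samples, dtype=float)
    t = stats.t.ppf(0.5 + confidence / 2, df=len(x) - 1)
    return t * x.std(ddof=1) / np.sqrt(len(x))

# Hypothetical replicate ash contents (% dry basis)
print(max_sampling_error([1.92, 2.05, 1.98, 2.11, 1.89]))
```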
Magnetophoretic circuits for digital control of single particles and cells
NASA Astrophysics Data System (ADS)
Lim, Byeonghwa; Reddy, Venu; Hu, Xinghao; Kim, Kunwoo; Jadhav, Mital; Abedini-Nassab, Roozbeh; Noh, Young-Woock; Lim, Yong Taik; Yellen, Benjamin B.; Kim, Cheolgi
2014-05-01
The ability to manipulate small fluid droplets, colloidal particles and single cells with the precision and parallelization of modern-day computer hardware has profound applications for biochemical detection, gene sequencing, chemical synthesis and highly parallel analysis of single cells. Drawing inspiration from general circuit theory and magnetic bubble technology, here we demonstrate a class of integrated circuits for executing sequential and parallel, timed operations on an ensemble of single particles and cells. The integrated circuits are constructed from lithographically defined, overlaid patterns of magnetic film and current lines. The magnetic patterns passively control particles similar to electrical conductors, diodes and capacitors. The current lines actively switch particles between different tracks similar to gated electrical transistors. When combined into arrays and driven by a rotating magnetic field clock, these integrated circuits have general multiplexing properties and enable the precise control of magnetizable objects.
Femur Model Reconstruction Based on Reverse Engineering and Rapid Prototyping
NASA Astrophysics Data System (ADS)
Tang, Tongming; Zhang, Zheng; Ni, Hongjun; Deng, Jiawen; Huang, Mingyu
Precise reconstruction of 3D models is fundamental and crucial to research on the human femur. In this paper we present our approach towards tackling this problem. The surface of a human femur was scanned using a hand-held 3D laser scanner. The data obtained, in the form of a point cloud, was then processed using the reverse engineering software Geomagic and the CAD/CAM software CimatronE to reconstruct a digital 3D model. The digital model was then used by the rapid prototyping machine to build a physical model of the human femur using 3D printing. The geometric characteristics of the obtained physical model matched those of the original femur. The process of "physical object - 3D data - digital 3D model - physical model" presented in this paper provides a foundation for precise modeling for the digital manufacturing, virtual assembly, stress analysis, and simulated surgery of artificial bionic femurs.
El-Amrawy, Fatema
2015-01-01
Objectives The new wave of wireless technologies, fitness trackers, and body sensor devices can have great impact on healthcare systems and the quality of life. However, there have not been enough studies to prove the accuracy and precision of these trackers. The objective of this study was to evaluate the accuracy, precision, and overall performance of seventeen wearable devices currently available compared with direct observation of step counts and heart rate monitoring. Methods Each participant in this study used three accelerometers at a time, running the three corresponding applications of each tracker on an Android or iOS device simultaneously. Each participant was instructed to walk 200, 500, and 1,000 steps. Each set was repeated 40 times. Data were recorded after each trial, and the mean step count, standard deviation, accuracy, and precision were estimated for each tracker. Heart rate was measured by all trackers that support heart rate monitoring and compared to a positive control, the Onyx Vantage 9590 professional clinical pulse oximeter. Results The accuracy of the tested products ranged between 79.8% and 99.1%, while the coefficient of variation (precision) ranged between 4% and 17.5%. MisFit Shine showed the highest accuracy and precision (along with Qualcomm Toq), while Samsung Gear 2 showed the lowest accuracy, and Jawbone UP showed the lowest precision. However, Xiaomi Mi band offered the best package relative to its price. Conclusions The accuracy and precision of the selected fitness trackers are reasonable and can indicate the average level of activity and thus average energy expenditure. PMID:26618039
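The accuracy and precision metrics described above are simple to compute; the sketch below is a minimal illustration (not the study's code), taking accuracy as the mean recorded count relative to the true step count and precision as the coefficient of variation across repeated trials. The trial data and function name are ours.

```python
import numpy as np

def tracker_metrics(recorded_counts, true_count):
    recorded = np.asarray(recorded_counts, dtype=float)
    mean_count = recorded.mean()
    sd = recorded.std(ddof=1)                 # standard deviation across trials
    accuracy = 100.0 * (1.0 - abs(mean_count - true_count) / true_count)
    cv = 100.0 * sd / mean_count              # coefficient of variation (precision)
    return mean_count, sd, accuracy, cv

# Example: 40 repeated 500-step walks for one hypothetical tracker
rng = np.random.default_rng(0)
trials = rng.normal(490, 25, size=40)         # simulated recorded counts
print(tracker_metrics(trials, true_count=500))
```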
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-06-01
Following a planning period during which the Lawrence Livermore Laboratory and the Department of Defense managing sponsor, the USAF Materials Laboratory, agreed on work statements, the Department of Defense Tri-Service Precision Machine-Tool Program began in February 1978. Milestones scheduled for the first quarter have been met. Tasks and manpower requirements for two basic projects, precision-machining commercialization (PMC) and a machine-tool task force (MTTF), were defined. Progress by PMC includes: (1) documentation of existing precision machine-tool technology by initiation and compilation of a bibliography containing several hundred entries; (2) identification of the problems and needs of precision turning-machine builders and of precision turning-machine users interested in developing high-precision machining capability; and (3) organization of the schedule and content of the first seminar, to be held in October 1978, which will bring together representatives from the machine-tool and optics communities to address the problems and begin the process of high-precision machining commercialization. Progress by MTTF includes: (1) planning for the organization of a team effort of approximately 60 to 80 international experts to contribute in various ways to project objectives, namely, to summarize state-of-the-art cutting-machine-tool technology and to identify areas where future R and D should prove technically and economically profitable; (2) preparation of a comprehensive plan to achieve those objectives; and (3) preliminary arrangements for a plenary session, also in October, when the task force will meet to formalize the details for implementing the plan.
Wang, Yuezong; Zhao, Zhizhong; Wang, Junshuai
2016-04-01
We present a novel and high-precision microscopic vision modeling method, which can be used for 3D data reconstruction in a micro-gripping system with a stereo light microscope. This method consists of four parts: image distortion correction, disparity distortion correction, an initial vision model, and a residual compensation model. First, the method of image distortion correction is proposed. Image data required by image distortion correction come from stereo images of a calibration sample. The geometric features of image distortions can be predicted through the shape deformation of lines constructed by grid points in stereo images. Linear and polynomial fitting methods are applied to correct image distortions. Second, shape deformation features of the disparity distribution are discussed, and the method of disparity distortion correction is proposed. Polynomial fitting is applied to correct disparity distortion. Third, a microscopic vision model is derived, which consists of two parts, i.e., the initial vision model and the residual compensation model. We derive the initial vision model from the analysis of the direct mapping relationship between object and image points. The residual compensation model is derived from the residual analysis of the initial vision model. The results show that with a maximum reconstruction distance of 4.1 mm in the X direction, 2.9 mm in the Y direction, and 2.25 mm in the Z direction, our model achieves a precision of 0.01 mm in the X and Y directions and 0.015 mm in the Z direction. Comparison of our model with the traditional pinhole camera model shows that the two models have similar reconstruction precision for X coordinates; however, the traditional pinhole camera model has lower precision for Y and Z coordinates. The method proposed in this paper is very helpful for micro-gripping systems based on SLM microscopic vision.
Precision of MRI-based body composition measurements of postmenopausal women
Romu, Thobias; Thorell, Sofia; Lindblom, Hanna; Berin, Emilia; Holm, Anna-Clara Spetz; Åstrand, Lotta Lindh; Karlsson, Anette; Borga, Magnus; Hammar, Mats; Leinhard, Olof Dahlqvist
2018-01-01
Objectives To determine the precision of magnetic resonance imaging (MRI) based fat and muscle quantification in a group of postmenopausal women, and to extend the method to individual muscles relevant to upper-body exercise. Materials and methods This was a sub-study to a randomized control trial investigating effects of resistance training to decrease hot flushes in postmenopausal women. Thirty-six women were included, mean age 56 ± 6 years. Each subject was scanned twice with a 3.0T MR-scanner using a whole-body Dixon protocol. Water and fat images were calculated using a 6-peak lipid model including R2*-correction. Body composition analyses were performed to measure visceral and subcutaneous fat volumes, lean volumes, and muscle fat infiltration (MFI) of the thigh, lower leg, and abdominal muscle groups, as well as the three individual muscles pectoralis, latissimus, and rhomboideus. Analysis was performed using a multi-atlas, calibrated water-fat separated quantification method. Liver fat was measured as the average proton density fat-fraction (PDFF) of three regions-of-interest. Precision was determined with Bland-Altman analysis, repeatability, and coefficient of variation. Results All of the 36 included women were successfully scanned and analysed. The coefficient of variation was 1.1% to 1.5% for abdominal fat compartments (visceral and subcutaneous), 0.8% to 1.9% for volumes of muscle groups (thigh, lower leg, and abdomen), and 2.3% to 7.0% for individual muscle volumes (pectoralis, latissimus, and rhomboideus). Limits of agreement for MFI were within ± 2.06% for muscle groups and within ± 5.13% for individual muscles. The limits of agreement for liver PDFF were within ± 1.9%. Conclusion Whole-body Dixon MRI could characterize a range of different fat and muscle compartments with high precision, including individual muscles, in the study group of postmenopausal women. The inclusion of individual muscles, calculated from the same scan, enables analysis for specific intervention programs and studies. PMID:29415060
Causes of failure with Szabo technique – An analysis of nine cases
Jain, Rajendra Kumar; Padmanabhan, T.N.C.; Chitnis, Nishad
2013-01-01
Objective The objective of this case series is to identify and define causes of failure of the Szabo technique in a rapid-exchange monorail system for ostial lesions. Methods and results From March 2009 to March 2011, 42 patients with an ostial lesion were treated percutaneously at our institution using the Szabo technique in a monorail stent system. All patients received unfractionated heparin during intervention. A loading dose of clopidogrel, followed by clopidogrel and aspirin, was administered. Drug-eluting stents were used in 57% of patients and bare metal stents in 42.8%. The stent was advanced over both wires, the target wire and the anchor wire. The anchor wire, which was passed through the proximal trailing strut of the stent, helps to achieve precise stenting. The procedure was considered successful if the stent was placed precisely covering the lesion, without stent loss or anchor wire prolapse. Of the total 42 patients, the procedure was successful in 33 and failed in 9. The majority of failures were due to wire entanglement, which was fixed successfully in 3 cases by removing and reinserting the anchor wire. Of the other three failures, stent dislodgement occurred in one, the stent could not cross the lesion in another, and in the third the anchor wire looped and prolapsed into the target vessel. Conclusion This case series shows that the Szabo technique, in spite of some difficulties like wire entanglement, stent dislodgement, and resistance during stent advancement, is a simple and feasible method for treating a variety of ostial lesions precisely compared to conventional angioplasty. PMID:23809379
An object correlation and maneuver detection approach for space surveillance
NASA Astrophysics Data System (ADS)
Huang, Jian; Hu, Wei-Dong; Xin, Qin; Du, Xiao-Yong
2012-10-01
Object correlation and maneuver detection are persistent problems in space surveillance and the maintenance of a space object catalog. We integrate these two problems into one interrelated problem and consider them simultaneously under a scenario where space objects perform only a single in-track orbital maneuver during the time intervals between observations. We mathematically formulate this integrated scenario as a maximum a posteriori (MAP) estimation. In this work, we propose a novel approach to solve the MAP estimation. More precisely, the corresponding posterior probability of an orbital maneuver and a joint association event can be approximated by the Joint Probabilistic Data Association (JPDA) algorithm. Subsequently, the maneuvering parameters are estimated by optimally solving the constrained non-linear least squares iterative process based on the second-order cone programming (SOCP) algorithm. The desired solution is derived according to the MAP criteria. The performance and advantages of the proposed approach have been shown by both theoretical analysis and simulation results. We hope that our work will stimulate future work on space surveillance and the maintenance of a space object catalog.
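For orientation, the integrated estimation described above can be written schematically as below; the notation is ours rather than the paper's, with Z the observations across the interval, a the joint association event, and θ the in-track maneuver parameters.

```latex
(\hat{a}, \hat{\theta})
  = \arg\max_{a,\,\theta}\; p(a, \theta \mid Z)
  = \arg\max_{a,\,\theta}\; \frac{p(Z \mid a, \theta)\,p(a)\,p(\theta)}{p(Z)}
```

Under this reading, the posterior over a is approximated with JPDA, while the maximization over θ for a fixed association reduces to the constrained non-linear least-squares problem that the paper solves iteratively via SOCP.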
Analyzing Cyber Security Threats on Cyber-Physical Systems Using Model-Based Systems Engineering
NASA Technical Reports Server (NTRS)
Kerzhner, Aleksandr; Pomerantz, Marc; Tan, Kymie; Campuzano, Brian; Dinkel, Kevin; Pecharich, Jeremy; Nguyen, Viet; Steele, Robert; Johnson, Bryan
2015-01-01
The spectre of cyber attacks on aerospace systems can no longer be ignored given that many of the components and vulnerabilities that have been successfully exploited by the adversary on other infrastructures are the same as those deployed and used within the aerospace environment. An important consideration with respect to the mission/safety critical infrastructure supporting space operations is that an appropriate defensive response to an attack invariably involves the need for high precision and accuracy, because an incorrect response can trigger unacceptable losses involving lives and/or significant financial damage. A highly precise defensive response, considering the typical complexity of aerospace environments, requires a detailed and well-founded understanding of the underlying system where the goal of the defensive response is to preserve critical mission objectives in the presence of adversarial activity. In this paper, a structured approach for modeling aerospace systems is described. The approach includes physical elements, network topology, software applications, system functions, and usage scenarios. We leverage Model-Based Systems Engineering methodology by utilizing the Object Management Group's Systems Modeling Language to represent the system being analyzed and also utilize model transformations to change relevant aspects of the model into specialized analyses. A novel visualization approach is utilized to visualize the entire model as a three-dimensional graph, allowing easier interaction with subject matter experts. The model provides a unifying structure for analyzing the impact of a particular attack or a particular type of attack. Two different example analysis types are demonstrated in this paper: a graph-based propagation analysis based on edge labels, and a graph-based propagation analysis based on node labels.
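As a toy illustration of the kind of graph-based propagation analysis mentioned above (not the authors' tooling; the model, node names, and labels here are invented), the sketch below follows only edges whose labels permit propagation and collects every element reachable from a compromised node.

```python
from collections import deque

def propagate(adjacency, start, allowed_labels):
    """adjacency: {node: [(neighbor, edge_label), ...]} -- breadth-first reach."""
    reached, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor, label in adjacency.get(node, []):
            if label in allowed_labels and neighbor not in reached:
                reached.add(neighbor)
                queue.append(neighbor)
    return reached

# Hypothetical example: an attack entering via the radio reaches the bus and
# flight software over network edges, but not the physically coupled sensor.
model = {"radio": [("bus", "network"), ("sensor", "physical")],
         "bus": [("flight_sw", "network")]}
print(propagate(model, "radio", {"network"}))
```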
Marki, Alex; Ermilov, Eugeny; Zakrzewicz, Andreas; Koller, Akos; Secomb, Timothy W; Pries, Axel R
2014-04-01
The aim of the study was to establish a user-friendly approach for single fluorescence particle 3D localization and tracking with nanometre precision in a standard fluorescence microscope using a point spread function (PSF) approach, and to evaluate validity and precision for different analysis methods and optical conditions with particular application to microcirculatory flow dynamics and cell biology. Images of fluorescent particles were obtained with a standard fluorescence microscope equipped with a piezo positioner for the objective. Whole pattern (WP) comparison with a PSF recorded for the specific set-up and measurement of the outermost ring radius (ORR) were used for analysis. Images of fluorescent particles were recorded over a large range (about 7 μm) of vertical positions, with and without distortion by overlapping particles as well as in the presence of cultured endothelial cells. For a vertical range of 6.5 μm the standard deviation (SD) from the predicted value, indicating validity, was 9.3/8.7 nm (WP/ORR) in the vertical and 8.2/11.7 nm in the horizontal direction. The precision, determined by repeated measurements, was 5.1/3.8 nm in the vertical and 2.9/3.7 nm in the horizontal direction. WP was more robust with respect to underexposure or overlapping images. On the surface of cultured endothelial cells, a layer with 2.5 times increased viscosity and a thickness of about 0.8 μm was detected. With a validity in the range of 10 nm and a precision down to about 3-5 nm obtained by standard fluorescent microscopy, the PSF approach offers a valuable tool for a variety of experimental investigations of particle localizations, including the assessment of the endothelial cell microenvironment.
Active laser radar (lidar) for measurement of corresponding height and reflectance images
NASA Astrophysics Data System (ADS)
Froehlich, Christoph; Mettenleiter, M.; Haertl, F.
1997-08-01
For the survey and inspection of environmental objects, non-tactile, robust, and precise imaging of height and depth is the foundational sensor technology. For visual inspection, surface classification, and documentation purposes, however, additional information concerning the reflectance of measured objects is necessary. High-speed acquisition of both geometric and visual information is achieved by means of an active laser radar, supporting consistent 3D height and 2D reflectance images. The laser radar is an optical-wavelength system, comparable to devices built by ERIM, Odetics, and Perceptron, measuring the range between sensor and target surfaces as well as the reflectance of the target surface, which corresponds to the magnitude of the backscattered laser energy. In contrast to these range sensing devices, the laser radar under consideration is designed for high-speed and precise operation in both indoor and outdoor environments, emitting a minimum of near-IR laser energy. It integrates a laser range measurement system and a mechanical deflection system for 3D environmental measurements. This paper reports on design details of the laser radar for surface inspection tasks. It outlines the performance requirements and introduces the measurement principle. The hardware design, including the main modules such as the laser head, the high frequency unit, the laser beam deflection system, and the digital signal processing unit, is discussed. The signal processing unit consists of dedicated signal processors for real-time sensor data preprocessing as well as a sensor computer for high-level image analysis and feature extraction. The paper focuses on performance data of the system, including noise, drift over time, precision, and accuracy of measurements. It discusses the influences of ambient light, surface material of the target, and ambient temperature on range accuracy and range precision. Furthermore, experimental results from the inspection of buildings, monuments, and industrial environments are presented. The paper concludes by summarizing results achieved in industrial environments and gives a short outlook on future work.
3D Tracking Based Augmented Reality for Cultural Heritage Data Management
NASA Astrophysics Data System (ADS)
Battini, C.; Landi, G.
2015-02-01
The development of contactless documentation techniques is allowing researchers to collect high volumes of three-dimensional data in a short time and with high levels of accuracy. The digitalisation of cultural heritage opens up the possibility of using image processing and analysis, and computer graphics techniques, to preserve this heritage for future generations; augmenting it with additional information or with new possibilities for its enjoyment and use. The collection of precise datasets about cultural heritage status is crucial for its interpretation, its conservation and during the restoration processes. The application of digital-imaging solutions for various feature extraction, image data-analysis techniques, and three-dimensional reconstruction of ancient artworks, allows the creation of multidimensional models that can incorporate information coming from heterogeneous data sets, research results and historical sources. Real objects can be scanned and reconstructed virtually, with high levels of data accuracy and resolution. Real-time visualisation software and hardware are rapidly evolving, and complex three-dimensional models can be interactively visualised and explored on applications developed for mobile devices. This paper will show how a 3D reconstruction of an object, with multiple layers of information, can be stored and visualised through a mobile application that will allow interaction with a physical object for its study and analysis, using 3D Tracking based Augmented Reality techniques.
Is There Space for the Objective Force?
2003-04-07
force through the combination of precision weapons and knowledge-based warfare. Army forces will survive through information dominance, provided by a...Objective Forces. Space-based systems will be foundational building blocks for the Objective Force to achieve information dominance and satellite...communications required for information dominance across a distributed battlefield? Second, what exists to provide the Objective Force information
Set size, individuation, and attention to shape
Cantrell, Lisa; Smith, Linda B.
2013-01-01
Much research has demonstrated a shape bias in categorizing and naming solid objects. This research has shown that when an entity is conceptualized as an individual object, adults and children attend to the object’s shape. Separate research in the domain of numerical cognition suggests that there are distinct processes for quantifying small and large sets of discrete items. This research shows that small set discrimination, comparison, and apprehension are often precise for 1–3 and sometimes 4 items; however, large numerosity representation is imprecise. Results from three experiments suggest a link between the processes for small and large number representation and the shape bias in a forced choice categorization task using naming and non-naming procedures. Experiment 1 showed that adults generalized a newly learned name for an object to new instances of the same shape only when those instances were presented in sets of less than 3 or 4. Experiment 2 showed that preschool children who were monolingual speakers of three different languages were also influenced by set size when categorizing objects in sets. Experiment 3 extended these results and showed the same effect in a non-naming task and when the novel noun was presented in a count-noun syntax frame. The results are discussed in terms of a relation between the precision of object representation and the precision of small and large number representation. PMID:23167969
NASA Technical Reports Server (NTRS)
Belton, Michael J. S.; Mueller, Beatrice
1991-01-01
The scientific objectives were as follows: (1) to construct a well sampled photometric time series of comet Halley extending to large heliocentric distances both post and pre-perihelion passage and derive a precise ephemeris for the nuclear spin so that the physical and chemical characteristics of individual regions of activity on the nucleus can be determined; and (2) to extend the techniques in the study of Comet Halley to the study of other cometary nuclei and to obtain new observational data.
Chen, Qiang; Chen, Yunhao; Jiang, Weiguo
2016-01-01
In the field of multiple-feature Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features, and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple-feature OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experiment cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence and effectively avoid the problem of premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO achieves higher overall accuracy (84.17% and 83.59%) and Kappa coefficients (0.6771 and 0.6314) than other algorithms. Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm do affect the algorithm. The comparison experiment results reveal that RMV is more suitable than other functions as the fitness function of a GPSO-based feature selection algorithm. PMID:27483285
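The abstract names the Ratio of Mean to Variance (RMV) as the fitness function but does not spell it out; the sketch below is one plausible reading under our own assumptions (a binary mask over feature columns, fitness as the mean of an aggregate change score over its variance), not the paper's definition.

```python
import numpy as np

def rmv_fitness(mask, change_magnitude):
    """mask: boolean vector over features; change_magnitude: (n_objects, n_features)."""
    selected = change_magnitude[:, mask]
    if selected.size == 0:
        return -np.inf                        # empty feature subsets are invalid
    per_object = selected.mean(axis=1)        # aggregate selected features per object
    return per_object.mean() / (per_object.var() + 1e-12)

# Example on simulated |change| scores: evaluate a candidate feature subset
scores = np.abs(np.random.default_rng(3).normal(size=(100, 12)))
print(rmv_fitness(np.array([True] * 6 + [False] * 6), scores))
```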
Muro-de-la-Herran, Alvaro; Garcia-Zapirain, Begonya; Mendez-Zorrilla, Amaia
2014-01-01
This article presents a review of the methods used in recognition and analysis of the human gait from three different approaches: image processing, floor sensors, and sensors placed on the body. Progress in new technologies has led to the development of a series of devices and techniques which allow for objective evaluation, making measurements more efficient and effective and providing specialists with reliable information. Firstly, an introduction to the key gait parameters and semi-subjective methods is presented. Secondly, technologies and studies on the different objective methods are reviewed. Finally, based on the latest research, the characteristics of each method are discussed. 40% of the reviewed articles published in late 2012 and 2013 were related to non-wearable systems, 37.5% presented inertial sensor-based systems, and the remaining 22.5% corresponded to other wearable systems. An increasing number of research works on parameters such as precision, conformability, usability, and transportability indicate that portable systems based on body sensors are promising methods for gait analysis. PMID:24556672
An improved K-means clustering algorithm in agricultural image segmentation
NASA Astrophysics Data System (ADS)
Cheng, Huifeng; Peng, Hui; Liu, Shanmei
Image segmentation is the first important step in image analysis and image processing. In this paper, according to the characteristics of color crop images, we first transform the color space of the image from RGB to HSI, and then select proper initial clustering centers and the cluster number using a mean-variance approach and rough set theory, followed by clustering calculation, so as to automatically segment color components rapidly and extract target objects from the background accurately, which provides a reliable basis for identification, analysis, follow-up calculation, and processing of crop images. Experimental results demonstrate that the improved k-means clustering algorithm is able to reduce the computation amount and enhance the precision and accuracy of clustering.
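A minimal sketch of this pipeline follows, under stated simplifications: the RGB-to-HSI conversion is the standard geometric one, and the paper's mean-variance / rough-set center initialization is replaced here by centers drawn at evenly spaced intensity quantiles. This is an illustration, not the authors' implementation.

```python
import numpy as np

def rgb_to_hsi(img):                          # img: (h, w, 3) floats in [0, 1]
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    i = (r + g + b) / 3.0                     # intensity
    s = 1.0 - np.min(img, axis=-1) / (i + 1e-12)   # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    h = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b > g, 2.0 * np.pi - h, h)   # hue in [0, 2*pi)
    return np.stack([h, s, i], axis=-1)

def kmeans_segment(img_hsi, k=3, iters=20):
    # Note: treating hue as Euclidean ignores its circularity (a simplification).
    pixels = img_hsi.reshape(-1, 3)
    order = np.argsort(pixels[:, 2])          # init centers at intensity quantiles
    centers = pixels[order[np.linspace(0, len(order) - 1, k).astype(int)]]
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([pixels[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels.reshape(img_hsi.shape[:2])

labels = kmeans_segment(rgb_to_hsi(np.random.rand(64, 64, 3)))
```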
Intercomparison of analytical methods for arsenic speciation in human urine.
Crecelius, E; Yager, J
1997-06-01
An intercomparison exercise was conducted for the quantification of arsenic species in spiked human urine. The primary objective of the exercise was to determine the variance among laboratories in the analysis of arsenic species such as inorganic As (As+3 and As+5), monomethylarsonic acid (MMA), and dimethylarsinic acid (DMA). Laboratories that participated had previous experience with arsenic speciation analysis. The results of this interlaboratory comparison are encouraging. There is relatively good agreement on the concentrations of these arsenic species in urine at concentrations that are relevant to research on the metabolism of arsenic in humans and other mammals. Both the accuracy and precision are relatively poor for arsenic concentrations of less than about 5 micrograms/l.
Outline of a new approach to the analysis of complex systems and decision processes.
NASA Technical Reports Server (NTRS)
Zadeh, L. A.
1973-01-01
Development of a conceptual framework for dealing with systems which are too complex or too ill-defined to admit of precise quantitative analysis. The approach outlined is based on the premise that the key elements in human thinking are not numbers, but labels of fuzzy sets - i.e., classes of objects in which the transition from membership to nonmembership is gradual rather than abrupt. The approach in question has three main distinguishing features - namely, the use of so-called 'linguistic' variables in place of or in addition to numerical variables, the characterization of simple relations between variables by conditional fuzzy statements, and the characterization of complex relations by fuzzy algorithms.
Wee, Natalie; Asplund, Christopher L; Chee, Michael W L
2013-06-01
Visual short-term memory (VSTM) is an important measure of information processing capacity and supports many higher-order cognitive processes. We examined how sleep deprivation (SD) and maintenance duration interact to influence the number and precision of items in VSTM using an experimental design that limits the contribution of lapses at encoding. For each trial, participants attempted to maintain the location and color of three stimuli over a delay. After a retention interval of either 1 or 10 seconds, participants reported the color of the item at the cued location by selecting it on a color wheel. The probability of reporting the probed item, the precision of report, and the probability of reporting a nonprobed item were determined using a mixture-modeling analysis. Participants were studied twice in counterbalanced order, once after a night of normal sleep and once following a night of sleep deprivation. The study took place in a sleep laboratory with nineteen healthy college-age volunteers (seven females) with regular sleep patterns, who underwent approximately 24 hours of total SD. SD selectively reduced the number of integrated representations that can be retrieved after a delay, while leaving the precision of object information in the stored representations intact. Delay interacted with SD to lower the rate of successful recall. Visual short-term memory is compromised during sleep deprivation, an effect compounded by delay. However, when memories are retrieved, they tend to be intact.
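The mixture-modeling analysis mentioned above is commonly implemented as a three-component model: a von Mises distribution around the target color, a von Mises around non-target items, and a uniform guessing component. The sketch below fits such a model by maximum likelihood on synthetic data; it is our illustration of that family of models, not the study's code, and it handles the constraint p_t + p_nt ≤ 1 only crudely.

```python
import numpy as np
from scipy.special import i0
from scipy.optimize import minimize

def vonmises_pdf(x, kappa):
    return np.exp(kappa * np.cos(x)) / (2.0 * np.pi * i0(kappa))

def neg_log_lik(params, err_target, err_nontargets):
    """err_target: (n,) response-target errors (radians);
    err_nontargets: (n, m) response-nontarget errors (radians)."""
    p_t, p_nt, kappa = params                  # kappa indexes report precision
    p_guess = max(1.0 - p_t - p_nt, 0.0)       # crude handling of the constraint
    lik = (p_t * vonmises_pdf(err_target, kappa)
           + p_nt * vonmises_pdf(err_nontargets, kappa).mean(axis=1)
           + p_guess / (2.0 * np.pi))
    return -np.sum(np.log(lik + 1e-300))

# Synthetic demo data: mostly on-target reports plus uniform guesses
rng = np.random.default_rng(2)
err_t = rng.vonmises(0.0, 8.0, size=300)
err_nt = rng.uniform(-np.pi, np.pi, size=(300, 2))
fit = minimize(neg_log_lik, x0=[0.6, 0.1, 4.0],
               args=(err_t, err_nt),
               bounds=[(0.0, 1.0), (0.0, 1.0), (0.05, 100.0)])
print(fit.x)   # estimated [p_target, p_nontarget, kappa]
```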
The Use of Instructional Objectives: A Model for Second-Year Podiatric Surgical Residency.
ERIC Educational Resources Information Center
Lepow, Gary M.; Levy, Leonard A.
1980-01-01
The use of highly specific objectives can be the basis for a second-year podiatric surgical residency program. They show both residents and attending staff precisely the knowledge and skills to be achieved and aid evaluation of students. A series of objectives is provided. (MSE)
A list of some bright objects which S-052 can observe
NASA Technical Reports Server (NTRS)
Mcquire, J. P.
1972-01-01
In order to find out the precise orientation of the photographs obtained by the High Altitude Observatory's ATM white light coronagraph, celestial objects must appear on each roll of film. A list of such bright objects and the times during which they can be observed is presented.
TRoPICALS: A Computational Embodied Neuroscience Model of Compatibility Effects
ERIC Educational Resources Information Center
Caligiore, Daniele; Borghi, Anna M.; Parisi, Domenico; Baldassarre, Gianluca
2010-01-01
Perceiving objects activates the representation of their affordances. For example, experiments on compatibility effects showed that categorizing objects by producing certain handgrips (power or precision) is faster if the requested responses are compatible with the affordance elicited by the size of objects (e.g., small or large). The article…
van den Berg, Irene; Boichard, Didier; Lund, Mogens Sandø
2016-11-01
The objective of this study was to compare mapping precision and power of within-breed and multibreed genome-wide association studies (GWAS) and to compare the results obtained by the multibreed GWAS with 3 meta-analysis methods. The multibreed GWAS was expected to improve mapping precision compared with a within-breed GWAS because linkage disequilibrium is conserved over shorter distances across breeds than within breeds. The multibreed GWAS was also expected to increase detection power for quantitative trait loci (QTL) segregating across breeds. GWAS were performed for production traits in dairy cattle, using imputed full genome sequences of 16,031 bulls, originating from 6 French and Danish dairy cattle populations. Our results show that a multibreed GWAS can be a valuable tool for the detection and fine mapping of quantitative trait loci. The number of QTL detected with the multibreed GWAS was larger than the number detected by the within-breed GWAS, indicating an increase in power, especially when the 2 Holstein populations were combined. The largest number of QTL was detected when all populations were combined. The analysis combining all breeds was, however, dominated by Holstein, and QTL segregating in other breeds but not in Holstein were sometimes overshadowed by larger QTL segregating in Holstein. Therefore, the GWAS combining all breeds except Holstein was useful to detect such peaks. Combining all breeds except Holstein resulted in smaller QTL intervals on average, but this outcome was not the case when the Holstein populations were included in the analysis. Although no decrease in the average QTL size was observed, mapping precision did improve for several QTL. Out of 3 different multibreed meta-analysis methods, the weighted z-scores model produced results most similar to those of the full multibreed GWAS and can be useful as an alternative to a full multibreed GWAS. Differences between the multibreed GWAS and the meta-analyses were larger when different breeds were combined than when the 2 Holstein populations were combined.
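For reference, a weighted z-scores meta-analysis typically combines per-population z-scores at each variant in the Stouffer form below; the weights shown (square root of population size) are one common choice and may differ from the study's exact weighting.

```latex
z_{\mathrm{meta}} = \frac{\sum_i w_i\, z_i}{\sqrt{\sum_i w_i^{2}}},
\qquad w_i = \sqrt{N_i}
```

Here z_i is the z-score of the variant in population i and N_i that population's sample size; z_meta is then compared against the standard normal to obtain a combined significance.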
Single particle maximum likelihood reconstruction from superresolution microscopy images
Verdier, Timothée; Gunzenhauser, Julia; Manley, Suliana; Castelnovo, Martin
2017-01-01
Point localization superresolution microscopy enables fluorescently tagged molecules to be imaged beyond the optical diffraction limit, reaching single molecule localization precisions down to a few nanometers. For small objects whose sizes are few times this precision, localization uncertainty prevents the straightforward extraction of a structural model from the reconstructed images. We demonstrate in the present work that this limitation can be overcome at the single particle level, requiring no particle averaging, by using a maximum likelihood reconstruction (MLR) method perfectly suited to the stochastic nature of such superresolution imaging. We validate this method by extracting structural information from both simulated and experimental PALM data of immature virus-like particles of the Human Immunodeficiency Virus (HIV-1). MLR allows us to measure the radii of individual viruses with precision of a few nanometers and confirms the incomplete closure of the viral protein lattice. The quantitative results of our analysis are consistent with previous cryoelectron microscopy characterizations. Our study establishes the framework for a method that can be broadly applied to PALM data to determine the structural parameters for an existing structural model, and is particularly well suited to heterogeneous features due to its single particle implementation. PMID:28253349
Attimarad, Mahesh
2010-01-01
The objective of this study was to develop simple, precise, accurate and sensitive UV spectrophotometric methods for the simultaneous determination of ofloxacin (OFX) and flavoxate HCl (FLX) in pharmaceutical formulations. The first method is based on the absorbance ratio method, forming a Q-absorbance equation at 289 nm (λmax of OFX) and 322.4 nm (isoabsorptive point). The linearity range was found to be 1 to 30 μg/ml for FLX and OFX. In the second method, second-derivative absorption at 311.4 nm for OFX (zero crossing for FLX) and at 246.2 nm for FLX (zero crossing for OFX) was used for the determination of the drugs, and the linearity range was found to be 2 to 30 μg/ml for OFX and 2 to 75 μg/ml for FLX. The accuracy and precision of the methods were determined and validated statistically. Both methods showed good reproducibility and recovery with % RSD less than 1.5%. Both methods were found to be rapid, specific, precise and accurate and can be successfully applied for the routine analysis of OFX and FLX in combined dosage form. PMID:24826003
Sutapun, Boonsong; Somboonkaew, Armote; Amarit, Ratthasart; Chanhorm, Sataporn
2015-01-01
This work describes a new design of a fiber-optic confocal probe suitable for measuring the central thicknesses of small-radius optical lenses or similar objects. The proposed confocal probe utilizes an integrated camera that functions as a shape-encoded position-sensing device. The confocal signal for thickness measurement and beam-shape data for off-axis measurement can be simultaneously acquired using the proposed probe. Placing the probe’s focal point off-center relative to a sample’s vertex produces a non-circular image at the camera’s image plane that closely resembles an ellipse for small displacements. We were able to precisely position the confocal probe’s focal point relative to the vertex point of a ball lens with a radius of 2.5 mm, with a lateral resolution of 1.2 µm. The reflected beam shape based on partial blocking by an aperture was analyzed and verified experimentally. The proposed confocal probe offers a low-cost, high-precision technique, an alternative to a high-cost three-dimensional surface profiler, for tight quality control of small optical lenses during the manufacturing process. PMID:25871720
THE PRISM MULTI-OBJECT SURVEY (PRIMUS). II. DATA REDUCTION AND REDSHIFT FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cool, Richard J.; Moustakas, John; Blanton, Michael R.
2013-04-20
The PRIsm MUlti-object Survey (PRIMUS) is a spectroscopic galaxy redshift survey to z ≈ 1 completed with a low-dispersion prism and slitmasks allowing for simultaneous observations of ≈2500 objects over 0.18 deg². The final PRIMUS catalog includes ≈130,000 robust redshifts over 9.1 deg². In this paper, we summarize the PRIMUS observational strategy and present the data reduction details used to measure redshifts, redshift precision, and survey completeness. The survey motivation, observational techniques, fields, target selection, slitmask design, and observations are presented in Coil et al. Comparisons to existing higher-resolution spectroscopic measurements show a typical precision of σ_z/(1 + z) = 0.005. PRIMUS, both in area and number of redshifts, is the largest faint galaxy redshift survey completed to date and is allowing for precise measurements of the relationship between active galactic nuclei and their hosts, the effects of environment on galaxy evolution, and the build up of galactic systems over the latter half of cosmic history.
Quantitative optical metrology with CMOS cameras
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Kolenovic, Ervin; Ferguson, Curtis F.
2004-08-01
Recent advances in laser technology, optical sensing, and computer processing of data have led to the development of advanced quantitative optical metrology techniques for high accuracy measurements of absolute shapes and deformations of objects. These techniques provide noninvasive, remote, and full field of view information about the objects of interest. The information obtained relates to changes in shape and/or size of the objects, characterizes anomalies, and provides tools to enhance fabrication processes. Factors that influence selection and applicability of an optical technique include the required sensitivity, accuracy, and precision that are necessary for a particular application. In this paper, sensitivity, accuracy, and precision characteristics in quantitative optical metrology techniques, and specifically in optoelectronic holography (OEH) based on CMOS cameras, are discussed. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gauges, demonstrating the applicability of CMOS cameras in quantitative optical metrology techniques. It is shown that the advanced nature of CMOS technology can be applied to challenging engineering applications, including the study of rapidly evolving phenomena occurring in MEMS and micromechatronics.
Automated quantification of neurite outgrowth orientation distributions on patterned surfaces
NASA Astrophysics Data System (ADS)
Payne, Matthew; Wang, Dadong; Sinclair, Catriona M.; Kapsa, Robert M. I.; Quigley, Anita F.; Wallace, Gordon G.; Razal, Joselito M.; Baughman, Ray H.; Münch, Gerald; Vallotton, Pascal
2014-08-01
Objective. We have developed an image analysis methodology for quantifying the anisotropy of neuronal projections on patterned substrates. Approach. Our method is based on the fitting of smoothing splines to the digital traces produced using a non-maximum suppression technique. This enables precise estimates of the local tangents uniformly along the neurite length, and leads to unbiased orientation distributions suitable for objectively assessing the anisotropy induced by tailored surfaces. Main results. In our application, we demonstrate that carbon nanotubes arrayed in parallel bundles over gold surfaces induce a considerable neurite anisotropy; a result which is relevant for regenerative medicine. Significance. Our pipeline is generally applicable to the study of fibrous materials on 2D surfaces and should also find applications in the study of DNA, microtubules, and other polymeric materials.
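A minimal sketch of the core step named in the Approach, under our assumptions (SciPy's smoothing splines, sampling approximately uniformly in the spline parameter), is given below; the trace and parameter values are illustrative, not from the paper.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def tangent_angles(trace_xy, smoothing=1.0, n_samples=200):
    """trace_xy: (n_points, 2) ordered pixel coordinates of one neurite trace."""
    tck, _ = splprep(trace_xy.T, s=smoothing)  # smoothing spline through the trace
    u = np.linspace(0.0, 1.0, n_samples)
    dx, dy = splev(u, tck, der=1)              # first derivative = local tangent
    return np.arctan2(dy, dx)                  # orientation at each sample point

# Example: angles pooled over many traces give the orientation distribution
trace = np.column_stack([np.linspace(0, 100, 50),
                         5 * np.sin(np.linspace(0, 3, 50))])
angles = tangent_angles(trace)
```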
[Research on Identifying Spatial Objects Using Spectrum Analysis Techniques].
Song, Wei; Feng, Shi-qi; Shi, Jing; Xu, Rong; Wang, Gong-chang; Li, Bin-yu; Liu, Yu; Li, Shuang; Cao Rui; Cai, Hong-xing; Zhang, Xi-he; Tan, Yong
2015-06-01
The high-precision scattering spectrum of spatial fragments with a minimum brightness of 4.2 and a resolution of 0.5 nm has been observed using ground-based spectrum detection technology. Obvious differences between different types of objects are obtained by normalizing the spectral data and analysing its discrete rate. The normalized multi-frame scattering spectral line shapes for rocket debris are identical, whereas they differ for lapsed satellites. The discrete rate of the normalized single-frame spectrum for rocket debris ranges from 0.978% to 3.067%, and the difference between the oscillation and the average value is small. The discrete rate for lapsed satellites ranges from 3.1184% to 19.4727%, and the difference between the oscillation and the average value is relatively large. The reason is that the composition of rocket debris is uniform, while that of lapsed satellites is complex. Therefore, ground-based spectrum detection technology can be used for the classification of spatial fragments.
HeatWave: the next generation of thermography devices
NASA Astrophysics Data System (ADS)
Moghadam, Peyman; Vidas, Stephen
2014-05-01
Energy sustainability is a major challenge of the 21st century. To reduce environmental impact, changes are required not only on the supply side of the energy chain by introducing renewable energy sources, but also on the demand side by reducing energy usage and improving energy efficiency. Currently, 2D thermal imaging is used for energy auditing, which measures the thermal radiation from the surfaces of objects and represents it as a set of color-mapped images that can be analysed for the purpose of energy efficiency monitoring. A limitation of such a method for energy auditing is that it lacks information on the geometry and location of objects with reference to each other, particularly across separate images. Such a limitation prevents any quantitative analysis from being done, for example, detecting energy performance changes before and after retrofitting. To address these limitations, we have developed a next generation thermography device called HeatWave. HeatWave is a hand-held 3D thermography device that consists of a thermal camera, a range sensor, and a color camera, and can be used to generate a precise 3D model of objects with augmented temperature and visible information. As an operator holding the device smoothly waves it around the objects of interest, HeatWave can continuously track its own pose in space and integrate new information from the range, thermal, and color cameras into a single, precise 3D multi-modal model. Information from multiple viewpoints can be incorporated to improve the accuracy, reliability, and robustness of the global model. The approach also makes it possible to reduce any systematic errors associated with the estimation of surface temperature from the thermal images.
Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis
Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.
2015-01-01
Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. The high viscosity media tested showed very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505
Guna, Jože; Jakus, Grega; Pogačnik, Matevž; Tomažič, Sašo; Sodnik, Jaka
2014-01-01
We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed the inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot be used as a professional tracking system. PMID:24566635
NASA Astrophysics Data System (ADS)
Li, Nan; Zhu, Xiufang
2017-04-01
Cultivated land resources are key to ensuring food security. Timely and accurate access to cultivated land information is conducive to scientific planning of food production and management policies. The GaoFen 1 (GF-1) images have high spatial resolution and abundant texture information and thus can be used to identify fragmentized cultivated land. In this paper, an object-oriented artificial bee colony (ABC) algorithm was proposed for extracting cultivated land from GF-1 images. Firstly, the GF-1 image was segmented by eCognition software and some samples from the segments were manually identified into 2 types (cultivated land and non-cultivated land). Secondly, the artificial bee colony algorithm was used to search for classification rules based on the spectral and texture information extracted from the image objects. Finally, the extracted classification rules were used to identify the cultivated land area on the image. The experiment was carried out in the Hongze area, Jiangsu Province, using a wide field-of-view sensor image from the GF-1 satellite. The total precision of the classification result was 94.95%, and the precision of cultivated land was 92.85%. The results show that the object-oriented ABC algorithm can overcome the defect of insufficient spectral information in GF-1 images and obtain high precision in cultivated land identification.
[Estimation of body fat by DXA and the four compartment model in Mexican youth].
Ramírez, Erik; Valencia, Mauro E; Moya Camarena, Silvia Y; Alemán-Mateo, Heliodoro; Méndez, Rosa O
2010-09-01
The objective of this study was to validate the estimation of body fat (%BF) by DXA (dual-energy X-ray absorptiometry; DPX-MD) against the four compartment model (4C) of body composition in 32 Mexican pubertal girls and boys (aged 9-14 y; F=16). The mean of the difference between DXA and the 4C model was -3.5 %BF (p=0.171). The limits of agreement (95% = 2 SD) were +5% to -12 %BF. The confidence intervals of the estimated limits were -1.9% to -5.1 %BF (p = 0.050). The concordance correlation coefficient was ρ = 0.85. The test of accuracy for coincidence of slopes and intercepts between DXA and the 4C model showed no coincidence (p < 0.05). The precision by R2 explained 83% of the variance (SEE, 4.1%). The individual accuracy assessed by the total error was 5.6%. The group mean accuracy by two-way analysis of variance of body fat did not show an interaction between method (DXA-4C model) and gender or overweight in separate analyses. However, there was an effect of method (p = 0.043) in the presence of overweight (p < 0.001). In conclusion, the estimation of percent body fat by DXA was not precise and accurate in a group of Mexican children. However, the results do not limit the utility of DXA for measurements of body composition and its relation with health outcomes, especially in follow-up studies.
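The agreement statistics used above follow the standard Bland-Altman construction: the bias is the mean difference between methods and the limits of agreement are the bias ± 2 SD of the differences. The sketch below is a minimal illustration with simulated data, not the study's dataset.

```python
import numpy as np

def bland_altman(method_a, method_b):
    diff = np.asarray(method_a) - np.asarray(method_b)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 2 * sd, bias + 2 * sd)   # bias and limits of agreement

rng = np.random.default_rng(1)
bf_4c = rng.normal(30, 8, size=32)                # %BF by 4C model (simulated)
bf_dxa = bf_4c - 3.5 + rng.normal(0, 4, size=32)  # DXA with a bias (simulated)
print(bland_altman(bf_dxa, bf_4c))
```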
Knock-Outs, Stick-Outs, Cut-Outs: Clipping Paths Separate Objects from Background.
ERIC Educational Resources Information Center
Wilson, Bradley
1998-01-01
Outlines a six-step process that allows computer operators, using Photoshop software, to create "knock-outs" to precisely define the path that will serve to separate the object from the background. (SR)
Bruno, Nicola; Uccelli, Stefano; Viviani, Eva; de'Sperati, Claudio
2016-10-01
According to a previous report, the visual coding of size does not obey Weber's law when aimed at guiding a grasp (Ganel et al., 2008a). This result has been interpreted as evidence for a fundamental difference between sensory processing in vision-for-perception, which needs to compress a wide range of physical objects to a restricted range of percepts, and vision-for-action when applied to the much narrower range of graspable and reachable objects. We compared finger aperture in a motor task (precision grip) and perceptual task (cross modal matching or "manual estimation" of the object's size). Crucially, we tested the whole range of graspable objects. We report that both grips and estimations clearly violate Weber's law with medium-to-large objects, but are essentially consistent with Weber's law with smaller objects. These results differ from previous characterizations of perception-action dissociations in the precision of representations of object size. Implications for current functional interpretations of the dorsal and ventral processing streams in the human visual system are discussed.
Optimization of Exposure Time Division for Multi-object Photometry
NASA Astrophysics Data System (ADS)
Popowicz, Adam; Kurek, Aleksander R.
2017-09-01
Optical observations of wide fields of view entail the problem of selecting the best exposure time. As many objects are usually observed simultaneously, the quality of photometry of the brightest ones is always better than that of the dimmer ones, even though all of them are frequently equally interesting for astronomers. Thus, measuring all objects with the highest possible precision is desirable. In this paper, we present a new optimization algorithm, dedicated for the division of exposure time into sub-exposures, which enables photometry with a more balanced noise budget. The proposed technique increases the photometric precision of dimmer objects at the expense of the measurement fidelity of the brightest ones. We have tested the method on real observations using two telescope setups, demonstrating its usefulness and good consistency with theoretical expectations. The main application of our approach is a wide range of sky surveys, including ones performed by space telescopes. The method can be used to plan virtually any photometric observation of objects that show a wide range of magnitudes.
NASA Astrophysics Data System (ADS)
Lillo-Box, J.; Barrado, D.; Mancini, L.; Henning, Th.; Figueira, P.; Ciceri, S.; Santos, N.
2015-04-01
Context. The Kepler mission has searched for planetary transits in more than two hundred thousand stars by obtaining very accurate photometric data over a long period of time. Among the thousands of detected candidates, the planetary nature of around 15% has been established or validated by different techniques. But additional data are needed to characterize the rest of the candidates and reject other possible configurations. Aims: We started a follow-up program to validate, confirm, and characterize some of the planet candidates. In this paper we present the radial velocity analysis of those that present large variations, which are compatible with being eclipsing binaries. We also study those showing high rotational velocities, which prevents us from reaching the necessary precision to detect planetary-like objects. Methods: We present new radial velocity results for 13 Kepler objects of interest (KOIs) obtained with the CAFE spectrograph at the Calar Alto Observatory and analyze their high-spatial resolution (lucky) images obtained with AstraLux and the Kepler light curves of some interesting cases. Results: We have found five spectroscopic and eclipsing binaries (group A). Among them, the case of KOI-3853 is of particular interest. This system is a new example of the so-called heartbeat stars, showing dynamic tidal distortions in the Kepler light curve. We have also detected duration and depth variations of the eclipse. We suggest possible scenarios to explain such an effect, including the presence of a third substellar body possibly detected in our radial velocity analysis. We also provide upper mass limits to the transiting companions of six other KOIs with high rotational velocities (group B). This property prevents the radial velocity method from achieving the necessary precision to detect planetary-like masses. Finally, we analyze the large radial velocity variations of two other KOIs, which are incompatible with the presence of planetary-mass objects (group C). These objects are likely to be stellar binaries; however, a longer timespan is needed to complete their characterization. Based on observations collected at the German-Spanish Astronomical Center, Calar Alto, jointly operated by the Max-Planck-Institut für Astronomie (Heidelberg) and the Instituto de Astrofísica de Andalucía (IAA-CSIC, Granada).
USDA-ARS's Scientific Manuscript database
Accurate models to simulate the soil water balance in semiarid cropping systems are needed to evaluate management practices for soil and water conservation in both irrigated and dryland production systems. The objective of this study was to evaluate the application of the Precision Agricultural Land...
Santa Fe School Precision Teaching Program, Evaluation Report 1974-75.
ERIC Educational Resources Information Center
Spencer, Mary L.; Henderson, Joan C.
The Santa Fe Precision Teaching for Effective Learning (PTEL) program, an ESEA Title III program, was selected as a remedial instructional approach to the performance and motivational problems of Santa Fe students. It proposed the following six major program objectives: (1) planning and implementation of start-up activities; (2) staff training in the…
The Density of Mid-sized Kuiper Belt Objects from ALMA Thermal Observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Michael E.; Butler, Bryan J.
The densities of mid-sized Kuiper Belt objects (KBOs) are a key constraint in understanding the assembly of objects in the outer solar system. These objects are critical for understanding the currently unexplained transition from the smallest KBOs, with densities lower than that of water, to the largest objects, with significant rock content. Mapping this transition is made difficult by the uncertainties in the diameters of these objects, which map into an even larger uncertainty in volume and thus density. The substantial collecting area of the Atacama Large Millimeter Array allows significantly more precise measurements of thermal emission from outer solar system objects and could potentially greatly improve the density measurements. Here we use new thermal observations of four objects with satellites to explore the improvements possible with millimeter data. We find that effects due to effective emissivity at millimeter wavelengths make it difficult to use the millimeter data directly to find diameters and thus volumes for these bodies. In addition, we find that when including the effects of model uncertainty, the true uncertainties on the sizes of outer solar system objects measured with radiometry are likely larger than those previously published. Substantial improvement in object sizes will likely require precise occultation measurements.
Variational objective analyses for cyclone studies
NASA Technical Reports Server (NTRS)
Achtemeier, G. L.; Kidder, S. Q.; Ochs, H. T.
1985-01-01
The objectives were to: (1) develop an objective analysis technique that will maximize the information content of data available from diverse sources, with particular emphasis on the incorporation of observations from satellites with those from more traditional immersion techniques; and (2) develop a diagnosis of the state of the synoptic-scale atmosphere on a much finer scale over a much broader region than is presently possible, to permit studies of the interactions and energy transfers between global, synoptic, and regional scale atmospheric processes. The variational objective analysis model consists of the two horizontal momentum equations, the hydrostatic equation, and the integrated continuity equation for a dry hydrostatic atmosphere. Preliminary tests of the model with the SESAME I data set are underway for 12 GMT 10 April 1979. At this stage the purpose of the analysis is not the diagnosis of atmospheric structures but rather the validation of the model. Model runs for rawinsonde data, with the precision modulus weights set to force most of the adjustment of the wind field to the mass field, have produced 90 to 95 percent reductions in the imbalance of the initial data after only four cycles through the Euler-Lagrange equations. Sensitivity tests for linear stability of the 11 Euler-Lagrange equations that make up the VASP Model 1 indicate that there will be a lower limit to the scales of motion that can be resolved by this method. Linear stability criteria are violated where there is large horizontal wind shear near the upper tropospheric jet.
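The abstract does not spell out the functional, but Sasaki-type variational analyses of this kind minimize a weighted least-squares mismatch to the observations subject to dynamic constraints, with the precision modulus weights controlling which field absorbs the adjustment. A schematic form (an assumption for illustration, not the exact VASP Model 1 functional):

```latex
% Schematic Sasaki-type functional; \alpha, \beta, \gamma are precision modulus weights.
% Making \gamma large relative to \alpha, \beta keeps the analyzed mass field close to
% the observations, so most of the adjustment falls on the winds (as in the test runs).
\[
J(u,v,\phi)=\iint\Big[\alpha\,(u-\tilde u)^2+\beta\,(v-\tilde v)^2
+\gamma\,(\phi-\tilde\phi)^2\Big]\,dx\,dy
\;+\;\text{constraint terms (momentum, hydrostatic, continuity)},
\qquad \delta J = 0 .
\]
```

Setting the first variation to zero yields the Euler-Lagrange system that the model cycles through until the imbalance in the initial data is reduced.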
Advancing Lidar Sensors Technologies for Next Generation Landing Missions
NASA Technical Reports Server (NTRS)
Amzajerdian, Farzin; Hines, Glenn D.; Roback, Vincent E.; Petway, Larry B.; Barnes, Bruce W.; Brewster, Paul F.; Pierrottet, Diego F.; Bulyshev, Alexander
2015-01-01
Missions to solar system bodies must meet increasingly ambitious objectives requiring highly reliable "precision landing" and "hazard avoidance" capabilities. Robotic missions to the Moon and Mars demand landing at pre-designated sites of high scientific value near hazardous terrain features, such as escarpments, craters, slopes, and rocks. Missions aimed at paving the path for colonization of the Moon and human landing on Mars need to execute onboard hazard detection and precision maneuvering to ensure safe landing near previously deployed assets. Asteroid missions require precision rendezvous, identification of the landing or sampling site location, and navigation to a highly dynamic object that may be tumbling at a fast rate. To meet these needs, NASA Langley Research Center (LaRC) has developed a set of advanced lidar sensors under the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project. These lidar sensors can provide precision measurement of vehicle relative proximity, velocity, and orientation, and high resolution elevation maps of the surface during the descent to the targeted body. Recent flights onboard the Morpheus free-flyer vehicle have demonstrated the viability of ALHAT lidar sensors for future landing missions to solar system bodies.
An Integrative Object-Based Image Analysis Workflow for Uav Images
NASA Astrophysics Data System (ADS)
Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong
2016-06-01
In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking, and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm, applied after the geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of the proposed method.
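The initial partition step can be reproduced with off-the-shelf tools. The sketch below uses scikit-image (our choice, not named in the paper) to build the SLIC over-segmentation and the per-superpixel spectral feature a BPT merge criterion could start from:

```python
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import slic, mark_boundaries

# SLIC over-segmentation: the initial partition from which a Binary Partition
# Tree (BPT) would be built by iteratively merging similar neighboring regions.
image = astronaut()                       # stand-in for a mosaicked UAV panorama
labels = slic(image, n_segments=2000, compactness=10.0, start_label=1)

# Per-superpixel mean color: a simple spectral feature for a BPT merge criterion.
n = labels.max()
means = np.array([image[labels == k].mean(axis=0) for k in range(1, n + 1)])
print(f"{n} superpixels; first mean RGB: {means[0]}")

overlay = mark_boundaries(image, labels)  # visualization of the partition
```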
Debats, Nienke B.; Kingma, Idsart; Beek, Peter J.; Smeets, Jeroen B. J.
2012-01-01
How does the magnitude of the exploration force influence the precision of haptic perceptual estimates? To address this question, we examined the perceptual precision for moment of inertia (i.e., an object's “angular mass”) under different force conditions, using the Weber fraction to quantify perceptual precision. Participants rotated a rod around a fixed axis and judged its moment of inertia in a two-alternative forced-choice task. We instructed different levels of exploration force, thereby manipulating the magnitude of both the exploration force and the angular acceleration. These are the two signals that are needed by the nervous system to estimate moment of inertia. Importantly, one can assume that the absolute noise on both signals increases with an increase in the signals' magnitudes, while the relative noise (i.e., noise/signal) decreases with an increase in signal magnitude. We examined how the perceptual precision for moment of inertia was affected by this neural noise. In a first experiment we found that a low exploration force caused a higher Weber fraction (22%) than a high exploration force (13%), which suggested that the perceptual precision was constrained by the relative noise. This hypothesis was supported by the result of a second experiment, in which we found that the relationship between exploration force and Weber fraction had a similar shape as the theoretical relationship between signal magnitude and relative noise. The present study thus demonstrated that the amount of force used to explore an object can profoundly influence the precision by which its properties are perceived. PMID:23028437
Improvement of Stand Jig Sealer and Its Increased Production Capacity
NASA Astrophysics Data System (ADS)
Soebandrija, K. E. N.; Astuti, S. W. D.
2014-03-01
This paper aims to show that improvement of the Stand Jig Sealer can achieve the cycle-time target, as part of broader improvement and productivity efforts. Prior research is reviewed, from classic works such as Quesnay (1766) and Solow (1957) to more recent studies such as Reikard (2011). The scope is then narrowed to the automotive industry and to the supporting software, SPSS and Structural Equation Modeling (SEM). The analysis is conducted through working-time calculation, reinforced with hypothesis tests using SPSS Version 19 and involving parameters of production efficiency, productivity, and financial investment. The results show that the cycle-time target of ≤ 80 seconds was achieved after the improvement of the stand jig sealer. The SPSS-19 calculations comprise the following aspects: the one-sided hypothesis test rejects Ho: μ ≥ 80 seconds; correlation rs = 0.84; regression y = 0.159 + 0.642x; validity R table = 0.4438; reliability (Cronbach's alpha) = 0.885 > 0.70; independence (Chi Square) Asymp. Sig = 0.028 < 0.05; 95% efficiency; an 11% increase in productivity; and financial analysis (NPV 2,340,596 > 0, PI 2.04 > 1, IRR 45.56% > i = 12.68%, PP = 1.86). These results support the hypothesis and, in line with the objective of this paper, show that the improvement of the Stand Jig Sealer meets the cycle-time target and increases the production capacity of PT Astra Daihatsu Motor.
Fast EEG spike detection via eigenvalue analysis and clustering of spatial amplitude distribution
NASA Astrophysics Data System (ADS)
Fukami, Tadanori; Shimada, Takamasa; Ishikawa, Bunnoshin
2018-06-01
Objective. In the current study, we tested a proposed method for fast spike detection in electroencephalography (EEG). Approach. We performed eigenvalue analysis in two-dimensional space spanned by gradients calculated from two neighboring samples to detect high-amplitude negative peaks. We extracted the spike candidates by imposing restrictions on parameters regarding spike shape and eigenvalues reflecting detection characteristics of individual medical doctors. We subsequently performed clustering, classifying detected peaks by considering the amplitude distribution at 19 scalp electrodes. Clusters with a small number of candidates were excluded. We then defined a score for eliminating spike candidates for which the pattern of detected electrodes differed from the overall pattern in a cluster. Spikes were detected by setting the score threshold. Main results. Based on visual inspection by a psychiatrist experienced in EEG, we evaluated the proposed method using two statistical measures of precision and recall with respect to detection performance. We found that precision and recall exhibited a trade-off relationship. The average recall value was 0.708 in eight subjects with the score threshold that maximized the F-measure, with 58.6 ± 36.2 spikes per subject. Under this condition, the average precision was 0.390, corresponding to a false positive rate 2.09 times higher than the true positive rate. Analysis of the required processing time revealed that, using a general-purpose computer, our method could be used to perform spike detection in 12.1% of the recording time. The process of narrowing down spike candidates based on shape occupied most of the processing time. Significance. Although the average recall value was comparable with that of other studies, the proposed method significantly shortened the processing time.
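The final thresholding step lends itself to a compact illustration. This sketch is our construction, with synthetic scores and labels standing in for the candidate scores and the psychiatrist's visual inspection; it sweeps the score threshold and keeps the value that maximizes the F-measure, reproducing the precision-recall trade-off the authors describe:

```python
import numpy as np

# Hypothetical candidate scores and expert labels (1 = true spike).
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(2.0, 1.0, 60), rng.normal(0.0, 1.0, 300)])
labels = np.concatenate([np.ones(60, int), np.zeros(300, int)])

def prf(threshold):
    """Precision, recall and F-measure of detections at a given score threshold."""
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))
    fp = np.sum(pred & (labels == 0))
    fn = np.sum(~pred & (labels == 1))
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    f = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f

best = max(np.linspace(scores.min(), scores.max(), 200), key=lambda t: prf(t)[2])
p, r, f = prf(best)
print(f"threshold={best:.2f}  precision={p:.3f}  recall={r:.3f}  F={f:.3f}")
```

Raising the threshold trades recall for precision, which is exactly the relationship the study reports.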
StreakDet data processing and analysis pipeline for space debris optical observations
NASA Astrophysics Data System (ADS)
Virtanen, Jenni; Flohrer, Tim; Muinonen, Karri; Granvik, Mikael; Torppa, Johanna; Poikonen, Jonne; Lehti, Jussi; Santti, Tero; Komulainen, Tuomo; Naranen, Jyri
We describe a novel data processing and analysis pipeline for optical observations of space debris. The monitoring of space object populations requires reliable acquisition of observational data to support the development and validation of space debris environment models, and the build-up and maintenance of a catalogue of orbital elements. In addition, data are needed for the assessment of conjunction events and for the support of contingency situations or launches. The currently available, mature image processing algorithms for detection and astrometric reduction of optical data cover objects that cross the sensor field of view comparatively slowly, and within a rather narrow, predefined range of angular velocities. By applying specific tracking techniques, the objects appear point-like or as short trails in the exposures. However, the general survey scenario is always a "track before detect" problem, resulting in streaks, i.e., object trails of arbitrary lengths, in the images. The scope of the ESA-funded StreakDet (Streak detection and astrometric reduction) project is to investigate solutions for detecting and reducing streaks from optical images, particularly in the low signal-to-noise ratio (SNR) domain, where algorithms are not readily available yet. For long streaks, the challenge is to extract precise position information and related registered epochs with sufficient precision. Although some considerations for low-SNR processing of streak-like features are available in the current image processing and computer vision literature, there is a need to discuss and compare these approaches for space debris analysis, in order to develop and evaluate prototype implementations. In the StreakDet project, we develop algorithms applicable to single images (as opposed to consecutive frames of the same field) obtained with any observing scenario, including space-based surveys and both low- and high-altitude populations. The proposed processing pipeline starts from the segmentation of the acquired image (i.e., the extraction of all sources), followed by the astrometric and photometric characterization of the candidate streaks, and ends with orbital validation of the detected streaks. A central concept of the pipeline is streak classification, which guides the actual characterization process by aiming to identify the interesting sources and to filter out the uninteresting ones, as well as by allowing the tailoring of algorithms for specific streak classes (e.g., point-like vs. long, disintegrated streaks). To validate the single-image detections, the processing is finalized by orbital analysis, resulting in a preliminary orbital classification (Earth-bound vs. non-Earth-bound orbit) for the detected streaks.
Quantitative topographic differentiation of the neonatal EEG.
Paul, Karel; Krajca, Vladimír; Roth, Zdenek; Melichar, Jan; Petránek, Svojmil
2006-09-01
To test the discriminatory topographic potential of a new method of automatic EEG analysis in neonates. A quantitative description of the neonatal EEG can contribute to the objective assessment of the functional state of the brain, and may improve the precision of diagnosing cerebral dysfunctions manifested by 'disorganization', 'dysrhythmia' or 'dysmaturity'. 21 healthy, full-term newborns were examined polygraphically during sleep (EEG-8 referential derivations, respiration, ECG, EOG, EMG). From each EEG record, two 5-min samples (one from the middle of quiet sleep, the other from the middle of active sleep) were subjected to automatic analysis and were described by 13 variables: spectral features and features describing the shape and variability of the signal. The data from individual infants were averaged and the number of variables was reduced by factor analysis. All factors identified by factor analysis were statistically significantly influenced by the location of derivation. A large number of statistically significant differences were also established when comparing the effects of individual derivations on each of the 13 measured variables. Both spectral features and features describing the shape and variability of the signal are largely accountable for the topographic differentiation of the neonatal EEG. The presented method of automatic EEG analysis is capable of assessing the topographic characteristics of the neonatal EEG; it is adequately sensitive and describes the neonatal electroencephalogram with sufficient precision. The discriminatory capability of the method shows promise for application in clinical practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Rongyu; Zhao, Changyin; Zhang, Xiaoxiang, E-mail: cyzhao@pmo.ac.cn
The data reduction method for optical space debris observations has many similarities with the one adopted for surveying near-Earth objects; however, due to several specific issues, the image degradation is particularly critical, which makes it difficult to obtain precise astrometry. An automatic image reconstruction method was developed to improve the astrometry precision for space debris, based on the mathematical morphology operator. Variable structural elements along multiple directions are adopted for image transformation, and then all the resultant images are stacked to obtain a final result. To investigate its efficiency, trial observations were made with Global Positioning System satellites and the astrometry accuracy improvement was obtained by comparison with the reference positions. The results of our experiments indicate that the influence of degradation in astrometric CCD images is reduced, and the position accuracy of both objects and reference stars is improved distinctly. Our technique will contribute significantly to optical data reduction and high-precision astrometry for space debris.
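The description of "variable structural elements along multiple directions ... stacked" suggests a scheme along the following lines: grey-level openings with line-shaped structuring elements at several orientations, recombined pixel-wise by a maximum. This is a sketch of that standard construction, not the authors' exact operator:

```python
import numpy as np
from scipy.ndimage import grey_opening

def line_footprint(length, angle_deg):
    """Binary line-shaped structuring element of given length and orientation."""
    t = np.linspace(-1, 1, length)
    r = np.round(t * (length // 2) * np.sin(np.radians(angle_deg))).astype(int)
    c = np.round(t * (length // 2) * np.cos(np.radians(angle_deg))).astype(int)
    fp = np.zeros((length, length), bool)
    fp[r + length // 2, c + length // 2] = True
    return fp

def directional_opening_stack(image, length=9, angles=(0, 45, 90, 135)):
    """Pixel-wise max over openings with line SEs: preserves structures
    elongated in any sampled direction while suppressing isotropic noise."""
    opened = [grey_opening(image, footprint=line_footprint(length, a))
              for a in angles]
    return np.max(opened, axis=0)

frame = np.random.default_rng(1).poisson(10.0, (256, 256)).astype(float)
restored = directional_opening_stack(frame)
```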
NASA Astrophysics Data System (ADS)
Ilieva, Tamara; Gekov, Svetoslav
2017-04-01
The Precise Point Positioning (PPP) method gives users the opportunity to determine point locations using a single GNSS receiver. The accuracy of point locations determined by PPP is better than that of standard point positioning, due to the precise satellite orbit and clock corrections that are developed and maintained by the International GNSS Service (IGS). The aim of our current research is the accuracy assessment of the PPP method applied to surveys and tracking of moving objects in a GIS environment. The PPP data are collected using a software application that we developed beforehand, which allows different sets of attribute data to be recorded for the measurements and their accuracy. The results from the PPP measurements are directly compared within the geospatial database to other sets of terrestrial data - measurements obtained by total stations, real-time kinematic and static GNSS.
NASA Astrophysics Data System (ADS)
Andrae, Peter; Beeck, Manfred-Andreas; Jueptner, Werner P. O.; Nadeborn, Werner; Osten, Wolfgang
1996-09-01
Holographic interferometry makes it possible to measure high precision displacement data in the range of the wavelength of the laser light used. However, the determination of 3D-displacement vectors of objects with complex surfaces requires the measurement of 3D-object coordinates, not only to consider local sensitivities but also to distinguish between in-plane components, i.e. strains, and out-of-plane components, i.e. shears. To this end, both the surface displacements and coordinates have to be combined, and it is advantageous to make the data available for CAE systems. The object surface has to be approximated analytically from the measured point cloud to generate a surface mesh. The displacement vectors can be assigned to the nodes of this surface mesh for visualization of the deformation of the object under test. They can also be compared to the results of FEM-calculations or used as boundary conditions for further numerical investigations. Here the 3D-object coordinates are measured in a separate topometric set-up using a modified fringe projection technique to acquire absolute phase values and a sophisticated geometrical model to map these phase data onto coordinates precisely. The determination of 3D-displacement vectors requires the measurement of several interference phase distributions for at least three independent sensitivity directions, depending on the observation and illumination directions as well as the 3D-position of each measuring point. These geometric quantities have to be transformed into a reference coordinate system of the interferometric set-up in order to calculate the geometric matrix. The necessary transformation can be realized by means of a detection of object features in both data sets and a subsequent determination of the external camera orientation. This paper presents a consistent solution for the measurement and combination of shape and displacement data, including their transformation into simulation systems. The described procedure is demonstrated on an automotive component. Thus, more accurate and effective measurement techniques make it possible to bring experimental and numerical displacement analysis closer together.
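The inversion at the core of this procedure can be written compactly. Under the standard convention, each interference phase satisfies Δφ_i = (2π/λ)(b̂_i − ŝ_i) · d, where b̂_i and ŝ_i are the unit observation and illumination vectors of sensitivity direction i; with three or more directions, the displacement d follows from the geometric matrix by least squares. A minimal numeric sketch with made-up geometry (sign conventions vary between set-ups):

```python
import numpy as np

lam = 532e-9  # m, assumed laser wavelength

# Unit observation (b) and illumination (s) vectors for three sensitivity
# directions (hypothetical geometry; in practice derived from the measured
# 3D coordinates of each object point and the set-up calibration).
b = np.array([[0, 0, 1], [0.3, 0, 0.95], [0, 0.3, 0.95]], float)
s = np.array([[0.5, 0, -0.87], [-0.5, 0, -0.87], [0, -0.5, -0.87]], float)
b /= np.linalg.norm(b, axis=1, keepdims=True)
s /= np.linalg.norm(s, axis=1, keepdims=True)

S = (2 * np.pi / lam) * (b - s)               # geometric (sensitivity) matrix
d_true = np.array([50e-9, -120e-9, 300e-9])   # test displacement vector (m)
dphi = S @ d_true                             # simulated interference phases

d_est, *_ = np.linalg.lstsq(S, dphi, rcond=None)
print(np.allclose(d_est, d_true))             # True: displacement recovered
```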
NASA Astrophysics Data System (ADS)
Staier, Florian; Eipel, Heinz; Matula, Petr; Evsikov, Alexei V.; Kozubek, Michal; Cremer, Christoph; Hausmann, Michael
2011-09-01
With the development of novel fluorescence techniques, high resolution light microscopy has become a challenging technique for investigations of the three-dimensional (3D) micro-cosmos in cells and sub-cellular components. So far, all fluorescence microscopes applied for 3D imaging in biosciences show a spatially anisotropic point spread function resulting in an anisotropic optical resolution or point localization precision. To overcome this shortcoming, micro axial tomography was suggested which allows object tilting on the microscopic stage and leads to an improvement in localization precision and spatial resolution. Here, we present a miniaturized device which can be implemented in a motor driven microscope stage. The footprint of this device corresponds to a standard microscope slide. A special glass fiber can manually be adjusted in the object space of the microscope lens. A stepwise fiber rotation can be controlled by a miniaturized stepping motor incorporated into the device. By means of a special mounting device, test particles were fixed onto glass fibers, optically localized with high precision, and automatically rotated to obtain views from different perspective angles under which distances of corresponding pairs of objects were determined. From these angle dependent distance values, the real 3D distance was calculated with a precision in the ten nanometer range (corresponding here to an optical resolution of 10-30 nm) using standard microscopic equipment. As a proof of concept, the spindle apparatus of a mature mouse oocyte was imaged during metaphase II meiotic arrest under different perspectives. Only very few images registered under different rotation angles are sufficient for full 3D reconstruction. The results indicate the principal advantage of the micro axial tomography approach for many microscopic setups therein and also those of improved resolutions as obtained by high precision localization determination.
A flight test method for pilot/aircraft analysis
NASA Technical Reports Server (NTRS)
Koehler, R.; Buchacker, E.
1986-01-01
In high precision flight maneuvers a pilot is part of a closed-loop pilot/aircraft system. The assessment of flying qualities is highly dependent on the closed-loop characteristics related to precision maneuvers like approach, landing, air-to-air tracking, air-to-ground tracking, close formation flying and air-to-air refueling of the receiver. The object of a research program at DFVLR is the final flight phase of an air-to-ground mission. In this flight phase the pilot has to align the aircraft with the target, correct small deviations from the target direction and keep the target in his sights for a specific time period. To investigate the dynamic behavior of the pilot/aircraft system, a special ground attack flight test technique with prolonged tracking maneuvers was developed. By changing the targets during the attack the pilot is forced to react continuously to aiming errors in his sights. Thus the closed-loop pilot/aircraft system is excited over a wide frequency range of interest, the pilot gets more information about mission-oriented aircraft dynamics, and suitable flight test data for a pilot/aircraft analysis can be generated.
NASA Astrophysics Data System (ADS)
Chenari, A.; Erfanifard, Y.; Dehghani, M.; Pourghasemi, H. R.
2017-09-01
Remotely sensed datasets offer a reliable means to precisely estimate biophysical characteristics of individual species sparsely distributed in open woodlands. Moreover, object-oriented classification has exhibited significant advantages over other classification methods for delineation of tree crowns and recognition of species in various types of ecosystems. However, it is still unclear whether this widely used classification method retains its advantages on unmanned aerial vehicle (UAV) digital images for mapping vegetation cover at single-tree levels. In this study, UAV orthoimagery was classified using the object-oriented classification method for mapping part of a wild pistachio nature reserve in the Zagros open woodlands, Fars Province, Iran. This research focused on recognizing the two main species of the study area (i.e., wild pistachio and wild almond) and estimating their mean crown area. The orthoimage of the study area consisted of 1,076 images with a spatial resolution of 3.47 cm, georeferenced using 12 ground control points (RMSE = 8 cm) gathered by the real-time kinematic (RTK) method. The results showed that the UAV orthoimagery classified by the object-oriented method efficiently estimated the mean crown area of wild pistachios (52.09±24.67 m2) and wild almonds (3.97±1.69 m2), with no significant difference from their observed values (α=0.05). In addition, the results showed that wild pistachios (accuracy of 0.90 and precision of 0.92) and wild almonds (accuracy of 0.90 and precision of 0.89) were well recognized by image segmentation. In general, we conclude that UAV orthoimagery can efficiently produce precise biophysical data of vegetation stands at single-tree levels, and is therefore suitable for assessment and monitoring of open woodlands.
Precision medicine in myasthenia gravis: begin from the data precision
Hong, Yu; Xie, Yanchen; Hao, Hong-Jun; Sun, Ren-Cheng
2016-01-01
Myasthenia gravis (MG) is a prototypic autoimmune disease with overt clinical and immunological heterogeneity. MG data are currently far from individually precise, partially due to the rarity and heterogeneity of this disease. In this review, we provide basic insights into MG data precision, including onset age, presenting symptoms, generalization, thymus status, pathogenic autoantibodies, muscle involvement, severity and response to treatment, based on the literature and our previous studies. Subgroups and quantitative traits of MG are discussed in the sense of data precision. The role of disease registries and the scientific basis of precise analysis are also discussed, to ensure better collection and analysis of MG data. PMID:27127759
Qiao, Yu; Wang, Wei; Minematsu, Nobuaki; Liu, Jianzhuang; Takeda, Mitsuo; Tang, Xiaoou
2009-10-01
This paper studies phase singularities (PSs) for image representation. We show that PSs calculated with Laguerre-Gauss filters contain important information and provide a useful tool for image analysis. PSs are invariant to image translation and rotation. We introduce several invariant features to characterize the core structures around PSs and analyze the stability of PSs to noise addition and scale change. We also study the characteristics of PSs in a scale space, which lead to a method to select key scales along phase singularity curves. We demonstrate two applications of PSs: object tracking and image matching. In object tracking, we use the iterative closest point algorithm to determine the correspondences of PSs between two adjacent frames. The use of PSs allows us to precisely determine the motions of tracked objects. In image matching, we combine PSs and scale-invariant feature transform (SIFT) descriptor to deal with the variations between two images and examine the proposed method on a benchmark database. The results indicate that our method can find more correct matching pairs with higher repeatability rates than some well-known methods.
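One common way to realize this computation (a sketch under standard conventions; the paper's filter parameters and detection rules are not reproduced here): filter the image in the Fourier domain with a complex Laguerre-Gauss kernel, then mark pixels where the wrapped phase winds by ±2π around a 2×2 loop:

```python
import numpy as np

def laguerre_gauss_filter(shape, sigma=0.1):
    """Complex LG(1,1)-type filter in the Fourier domain: spiral phase times
    a Gaussian envelope (assumed form of the Laguerre-Gauss filter)."""
    u = np.fft.fftfreq(shape[1])
    v = np.fft.fftfreq(shape[0])
    U, V = np.meshgrid(u, v)
    R = np.hypot(U, V)
    return (U + 1j * V) * np.exp(-(R ** 2) / (2 * sigma ** 2))

def phase_singularities(image, sigma=0.1):
    """Boolean map of phase singularities of the LG-filtered complex field."""
    F = np.fft.fft2(image)
    field = np.fft.ifft2(F * laguerre_gauss_filter(image.shape, sigma))
    p = np.angle(field)
    wrap = lambda a: np.angle(np.exp(1j * a))      # wrap to (-pi, pi]
    # Sum of wrapped phase differences around each 2x2 plaquette:
    loop = (wrap(p[:-1, 1:] - p[:-1, :-1]) + wrap(p[1:, 1:] - p[:-1, 1:])
            + wrap(p[1:, :-1] - p[1:, 1:]) + wrap(p[:-1, :-1] - p[1:, :-1]))
    return np.abs(loop) > np.pi                    # +/-2pi winding -> singularity

img = np.random.default_rng(2).random((128, 128))
print(phase_singularities(img).sum(), "phase singularities detected")
```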
Comparative analysis of autofocus functions in digital in-line phase-shifting holography.
Fonseca, Elsa S R; Fiadeiro, Paulo T; Pereira, Manuela; Pinheiro, António
2016-09-20
Numerical reconstruction of digital holograms relies on a precise knowledge of the original object position. However, there are a number of relevant applications where this parameter is not known in advance and an efficient autofocusing method is required. This paper addresses the problem of finding optimal focusing methods for use in reconstruction of digital holograms of macroscopic amplitude and phase objects, using digital in-line phase-shifting holography in transmission mode. Fifteen autofocus measures, including spatial-, spectral-, and sparsity-based methods, were evaluated for both synthetic and experimental holograms. The Fresnel transform and the angular spectrum reconstruction methods were compared. Evaluation criteria included unimodality, accuracy, resolution, and computational cost. Autofocusing under angular spectrum propagation tends to perform better with respect to accuracy and unimodality criteria. Phase objects are, generally, more difficult to focus than amplitude objects. The normalized variance, the standard correlation, and the Tenenbaum gradient are the most reliable spatial-based metrics, combining computational efficiency with good accuracy and resolution. A good trade-off between focus performance and computational cost was found for the Fresnelet sparsity method.
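Two of the spatial metrics singled out as most reliable, normalized variance and the Tenenbaum gradient (Tenengrad), have standard definitions. The sketch below implements them together with the search over candidate reconstruction distances; the propagation step (Fresnel or angular spectrum) is assumed to be supplied by the caller:

```python
import numpy as np
from scipy.ndimage import sobel

def normalized_variance(img):
    """Focus metric: intensity variance normalized by the mean intensity."""
    m = img.mean()
    return ((img - m) ** 2).mean() / m

def tenengrad(img):
    """Tenenbaum gradient: mean squared Sobel gradient magnitude."""
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    return (gx ** 2 + gy ** 2).mean()

def autofocus(reconstruct, distances, metric=tenengrad):
    """Pick the reconstruction distance maximizing the focus metric.
    `reconstruct(z)` -> complex field at distance z (user-supplied propagation,
    e.g. angular spectrum); this sketch only handles the metric search."""
    scores = [metric(np.abs(reconstruct(z))) for z in distances]
    return distances[int(np.argmax(scores))], scores
```

For phase objects, the metric would be applied to the reconstructed phase rather than the amplitude, which is one reason they are harder to focus.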
The International GPS Service (IGS) as a Continuous Reference System for Precise GPS Positioning
NASA Technical Reports Server (NTRS)
Neilan, Ruth; Heflin, Michael; Watkins, Michael; Zumberge, James
1996-01-01
The International GPS Service for Geodynamics (IGS) is an organization which operates under the auspices of the International Association of Geodesy (IAG) and has been operational since January 1994. The primary objective of the IGS is to provide precise GPS data and data products to support geodetic and geophysical research activities.
Deficits in Coordinative Bimanual Timing Precision in Children with Specific Language Impairment
ERIC Educational Resources Information Center
Vuolo, Janet; Goffman, Lisa; Zelaznik, Howard N.
2017-01-01
Purpose: Our objective was to delineate components of motor performance in specific language impairment (SLI); specifically, whether deficits in timing precision in one effector (unimanual tapping) and in two effectors (bimanual clapping) are observed in young children with SLI. Method: Twenty-seven 4- to 5-year-old children with SLI and 21…
Telescope technology for space-borne submillimeter astronomy
NASA Technical Reports Server (NTRS)
Lehman, David H.; Helou, George
1990-01-01
The Precision Segmented Reflector (PSR) project, which is developing telescope technology needed for future spaceborne submillimeter astronomy missions, is described. Four major technical areas are under development: lightweight composite mirrors and associated materials, precision structures, and segmented reflector figure sensing and control. The objectives of the PSR project, its approaches, and the status of the project technology are reported.
Search Strategy to Identify Dental Survival Analysis Articles Indexed in MEDLINE.
Layton, Danielle M; Clarke, Michael
2016-01-01
Articles reporting survival outcomes (time-to-event outcomes) in patients over time are challenging to identify in the literature. Research shows the words authors use to describe their dental survival analyses vary, and that allocation of medical subject headings by MEDLINE indexers is inconsistent. Together, this undermines accurate article identification. The present study aims to develop and validate a search strategy to identify dental survival analyses indexed in MEDLINE (Ovid). A gold standard cohort of articles was identified to derive the search terms, and an independent gold standard cohort of articles was identified to test and validate the proposed search strategies. The first cohort included all 6,955 articles published in the 50 dental journals with the highest impact factors in 2008, of which 95 articles were dental survival articles. The second cohort included all 6,514 articles published in the 50 dental journals with the highest impact factors for 2012, of which 148 were dental survival articles. Each cohort was identified by a systematic hand search. Performance parameters of sensitivity, precision, and number needed to read (NNR) for the search strategies were calculated. Sensitive, precise, and optimized search strategies were developed and validated. The performances of the search strategy maximizing sensitivity were 92% sensitivity, 14% precision, and 7.11 NNR; the performances of the strategy maximizing precision were 93% precision, 10% sensitivity, and 1.07 NNR; and the performances of the strategy optimizing the balance between sensitivity and precision were 83% sensitivity, 24% precision, and 4.13 NNR. The methods used to identify search terms were objective, not subjective. The search strategies were validated in an independent group of articles that included different journals and different publication years. Across the three search strategies, dental survival articles can be identified with sensitivity up to 92%, precision up to 93%, and NNR of less than two articles to identify relevant records. This research has highlighted the impact that variation in reporting and indexing has on article identification and has improved researchers' ability to identify dental survival articles.
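A useful check on the reported figures: the number needed to read is simply the reciprocal of precision,

```latex
\[
\mathrm{NNR}=\frac{1}{\mathrm{precision}}, \qquad
\frac{1}{0.14}\approx 7.1,\quad \frac{1}{0.93}\approx 1.07,\quad \frac{1}{0.24}\approx 4.1,
\]
```

which matches the reported 7.11, 1.07 and 4.13 once the unrounded precision values are used.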
Intercomparison of analytical methods for arsenic speciation in human urine.
Crecelius, E; Yager, J
1997-01-01
An intercomparison exercise was conducted for the quantification of arsenic species in spiked human urine. The primary objective of the exercise was to determine the variance among laboratories in the analysis of arsenic species such as inorganic As (As+3 and As+5), monomethylarsonic acid (MMA), and dimethylarsinic acid (DMA). Laboratories that participated had previous experience with arsenic speciation analysis. The results of this interlaboratory comparison are encouraging. There is relatively good agreement on the concentrations of these arsenic species in urine at concentrations that are relevant to research on the metabolism of arsenic in humans and other mammals. Both the accuracy and precision are relatively poor for arsenic concentrations of less than about 5 micrograms/l. PMID:9288500
NASA Technical Reports Server (NTRS)
Mace, Gerald G.; Ackerman, Thomas P.
1996-01-01
A topic of current practical interest is the accurate characterization of the synoptic-scale atmospheric state from wind profiler and radiosonde network observations. We have examined several related and commonly applied objective analysis techniques for performing this characterization and considered their associated level of uncertainty both from a theoretical and a practical standpoint. A case study is presented where two wind profiler triangles with nearly identical centroids and no common vertices produced strikingly different results during a 43-h period. We conclude that the uncertainty in objectively analyzed quantities can easily be as large as the expected synoptic-scale signal. In order to quantify the statistical precision of the algorithms, we conducted a realistic observing system simulation experiment using output from a mesoscale model. A simple parameterization for estimating the uncertainty in horizontal gradient quantities in terms of known errors in the objectively analyzed wind components and temperature is developed from these results.
Object-Driven and Temporal Action Rules Mining
ERIC Educational Resources Information Center
Hajja, Ayman
2013-01-01
In this thesis, I present my complete research work in the field of action rules, more precisely object-driven and temporal action rules. The drive behind the introduction of object-driven and temporally based action rules is to bring forth an adapted approach to extract action rules from a subclass of systems that have a specific nature, in which…
Haiyang, Yu; Tian, Luo
2016-06-01
Target restoration space (TRS) is the precise space required for designing an optimal prosthesis. TRS consists of an internal or external tooth space that determines the esthetics and function of the final restoration. Therefore, assisted by quantitative analysis and transfer, TRS quantitative analysis is a significant improvement for minimum tooth preparation. This article presents TRS quantity-related measurement, analysis, and transfer, and the internal relevance of three TRS classifications. Results reveal the close bond between precision and minimally invasive treatment. This study can be used to improve the comprehension and execution of precise esthetic prosthodontics.
Extreme gravity tests with gravitational waves from compact binary coalescences: (II) ringdown
NASA Astrophysics Data System (ADS)
Berti, Emanuele; Yagi, Kent; Yang, Huan; Yunes, Nicolás
2018-05-01
The LIGO/Virgo detections of binary black hole mergers marked a watershed moment in astronomy, ushering in the era of precision tests of Kerr dynamics. We review theoretical and experimental challenges that must be overcome to carry out black hole spectroscopy with present and future gravitational wave detectors. Among other topics, we discuss quasinormal mode excitation in binary mergers, astrophysical event rates, tests of black hole dynamics in modified theories of gravity, parameterized "post-Kerr" ringdown tests, exotic compact objects, and proposed data analysis methods to improve spectroscopic tests of Kerr dynamics by stacking multiple events.
A preliminary design study for a cosmic X-ray spectrometer
NASA Technical Reports Server (NTRS)
1972-01-01
The results are described of theoretical and experimental investigations aimed at the development of a curved crystal cosmic X-ray spectrometer to be used at the focal plane of the large orbiting X-ray telescope on the third High Energy Astronomical Observatory. The effort was concentrated on the development of spectrometer concepts and their evaluation by theoretical analysis, computer simulation, and laboratory testing with breadboard arrangements of crystals and detectors. In addition, a computer-controlled facility for precision testing and evaluation of crystals in air and vacuum was constructed. A summary of research objectives and results is included.
Mind-body dualism and the compatibility of medical methods.
Burkhardt, Hans; Imaguire, Guido
2002-01-01
In this paper we analyse some misleading theses concerning the old controversy over the relation between mind and body presented in contemporary medical literature. We undertake an epistemological clarification of the axiomatic structure of medical methods. This clarification, in turn, requires a precise philosophical explanation of the presupposed concepts. This analysis will establish two results: (1) that the mind-body dualism cannot be understood as a kind of biological variation of the subject-object dichotomy in physics, and (2) that the thesis of the incompatibility between somatic and psychosomatic medicine held by naturalists and others lacks solid epistemological foundation.
Nano-ADEPT Aeroloads Wind Tunnel Test
NASA Technical Reports Server (NTRS)
Smith, Brandon; Cassell, A.; Yount, B.; Kruger, C.; Brivkalns, C.; Makino, A.; Zarchi, K.; McDaniel, R.; Venkatapathy, E.; Swanson, G.
2015-01-01
Analysis completed since the test suggests that all test objectives were met; this claim will be verified in the coming weeks as the data is examined further, and the final disposition of test objective success will be documented in a final report submitted to NASA stakeholders (early August 2015), with a conference paper expected in early 2016. Data products and observations made during testing will be used to refine computational models of Nano-ADEPT. Carbon fabric relaxed from its pre-test state during the test; the system-level tolerance for relaxation will be driven by destination-specific and mission-specific aerothermal and aerodynamic requirements. A bonus experiment with an asymmetric shape demonstrates that an asymmetric deployable blunt body can be used to generate measurable lift; with a strut actuation system and a robust GN&C algorithm, this effect could be used to steer a blunt body at hypersonic speeds to aid precision landing.
Time in Science: Reversibility vs. Irreversibility
NASA Astrophysics Data System (ADS)
Pomeau, Yves
To discuss properly the question of irreversibility one needs to make a careful distinction between the reversibility of the equations of motion and the choice of the initial conditions. This is also relevant for the rather confused philosophy of wave packet reduction in quantum mechanics. The explanation of this reduction also requires precise assumptions on what initial data are accessible in our world. Finally, I discuss how a given (and long) time record can be shown in an objective way to record an irreversible or reversible process. Or: can a direction of time be derived from its analysis? This leads quite naturally to examining whether there is a possible spontaneous breaking of time-reversal symmetry in many-body systems, a symmetry breaking that would be put in evidence objectively by looking at certain specific time correlations.
NASA Astrophysics Data System (ADS)
Afgani, M. W.; Suryadi, D.; Dahlan, J. A.
2017-09-01
The aim of this study was to determine the level of undergraduate students' mathematical understanding ability from the perspective of APOS theory. APOS theory provides an evaluation framework to describe the level of students' understanding and their mental structure regarding a mathematical concept. The levels of understanding in APOS theory are the action, process, object, and schema conceptions. The subjects were 59 mathematics education students who had attended a class on the limit of a function at a university in Palembang. The method was qualitative descriptive, with 4 test items. The results showed that most students were still at the level of the action conception. They could calculate and apply procedures precisely to the mathematical objects they were given, but could not yet reach the higher conceptions.
Hsu, Hsiu-Yun; Kuo, Li-Chieh; Chiu, Haw-Yen; Jou, I-Ming; Su, Fong-Chin
2009-11-01
Patients with median nerve compression at the carpal tunnel often have poor sensory afferents. Without adequate sensory modulation control, these patients frequently exhibit clumsy performance and excessive force output in the affected hand. We analyzed precision grip function after the sensory recovery of patients with carpal tunnel syndrome (CTS) who underwent carpal tunnel release (CTR). Thirteen CTS patients were evaluated using a custom-designed pinch device and conventional sensory tools before and after CTR to measure sensibility, maximum pinch strength, and anticipated pinch force adjustments to movement-induced load fluctuations in a pinch-holding-up activity. Based on these tests, five force-related parameters and sensory measurements were used to determine improvements in pinch performance after sensory recovery. The force ratio between the exerted pinch force and maximum load force of the lifting object was used to determine pinch force coordination and to prove that CTR enabled precision motor output. The magnitude of peak pinch force indicated an economic force output during manipulations following CTR. The peak pinch force, force ratio, and percentage of maximum pinch force also demonstrated a moderate correlation with the Semmes-Weinstein test. Analysis of these tests revealed that improved sensory function helped restore patients' performance in precise pinch force control evaluations. These results suggest that sensory information plays an important role in adjusting balanced force output in dexterous manipulation.
Whole vertebral bone segmentation method with a statistical intensity-shape model based approach
NASA Astrophysics Data System (ADS)
Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer
2011-03-01
An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing 4 different statistical intensity-shape combined models for the cervical, upper / lower thoracic and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as a pre-processing step to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, and 0.939 mm for the cervical, upper and lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed fair performance for cervical, thoracic and lumbar vertebrae.
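The parametric model described here is the standard statistical shape-intensity construction: concatenate each training vertebra's shape and intensity samples into one vector, apply PCA, and express a new instance as the mean plus a linear combination of the leading principal components. A minimal sketch with synthetic data (the MAP fitting loop and geodesic active contour are omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
# Each row: one training vertebra, concatenated shape + intensity samples.
X = rng.normal(size=(40, 600))                  # 40 training cases (synthetic)

mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 10                                          # retained principal modes
P = Vt[:k].T                                    # (600, k) eigenvector basis
var = (s[:k] ** 2) / (len(X) - 1)               # variance of each mode

def instance(b):
    """Model instance: mean + P @ b, with b the shape-intensity parameters
    (typically bounded by +/- 3 sqrt(var) inside a MAP fitting loop)."""
    return mean + P @ b

x = instance(rng.normal(scale=np.sqrt(var)))    # plausible synthetic vertebra
```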
Sub-cell turning to accomplish micron-level alignment of precision assemblies
NASA Astrophysics Data System (ADS)
Kumler, James J.; Buss, Christian
2017-08-01
Higher performance expectations for complex optical systems demand tighter alignment requirements for lens assembly alignment. In order to meet diffraction limited imaging performance over wide spectral bands across the UV and visible wavebands, new manufacturing approaches and tools must be developed if the optical systems will be produced consistently in volume production. This is especially applicable in the field of precision microscope objectives for life science, semiconductor inspection and laser material processing systems. We observe a rising need for the improvement in the optical imaging performance of objective lenses. The key challenge lies in the micron-level decentration and tilt of each lens element. One solution for the production of high quality lens systems is sub-cell assembly with alignment turning. This process relies on an automatic alignment chuck to align the optical axis of a mounted lens to the spindle axis of the machine. Subsequently, the mount is cut with diamond tools on a lathe with respect to the optical axis of the mount. Software controlled integrated measurement technology ensures highest precision. In addition to traditional production processes, further dimensions can be controlled in a very precise manner, e.g. the air gaps between the lenses. Using alignment turning simplifies further alignment steps and reduces the risk of errors. This paper describes new challenges in microscope objective design and manufacturing, and addresses difficulties with standard production processes. A new measurement and alignment technique is described, and strengths and limitations are outlined.
Use of Terrestrial Laser Scanning Technology for Long Term High Precision Deformation Monitoring
Vezočnik, Rok; Ambrožič, Tomaž; Sterle, Oskar; Bilban, Gregor; Pfeifer, Norbert; Stopar, Bojan
2009-01-01
The paper presents a new methodology for high precision monitoring of deformations with a long term perspective using terrestrial laser scanning technology. In order to solve the problem of a stable reference system and to assure the high quality of possible position changes of point clouds, scanning is integrated with two complementary surveying techniques, i.e., high quality static GNSS positioning and precise tacheometry. The case study object where the proposed methodology was tested is a high pressure underground pipeline situated in an area which is geologically unstable. PMID:22303152
Miniature injection-molded optics for fiber-optic, in vivo confocal microscopy
NASA Astrophysics Data System (ADS)
Chidley, Matthew D.; Liang, Chen; Descour, Michael R.; Sung, Kung-Bin; Richards-Kortum, Rebecca R.; Gillenwater, Ann
2002-12-01
In collaboration with the Department of Biomedical Engineering at the University of Texas at Austin and the UT MD Anderson Cancer Center, a laser scanning fiber confocal reflectance microscope (FCRM) system has been designed and tested for in vivo detection of cervical and oral pre-cancers. This system, along with specially developed diagnosis algorithms and techniques, can achieve an unprecedented specificity and sensitivity for the diagnosis of pre-cancers in epithelial tissue. The FCRM imaging system consists of an Nd:YAG laser (1064 nm), scanning mirrors/optics, a precision pinhole, a detector, and an endoscopic probe (the objective). The objective is connected to the rest of the imaging system via a fiber bundle. The fiber bundle allows the rest of the system to be remotely positioned in a convenient location. Only the objective comes into contact with the patient. It is our intent that inexpensive mass-produced disposable endoscopic probes would be produced for large clinical trials. This paper touches on the general design process of developing a miniature, high numerical aperture, injection-molded (IM) objective. These IM optical designs are evaluated and modified based on manufacturing and application constraints. Based on these driving criteria, one specific optical design was chosen and a detailed tolerance analysis was conducted. The tolerance analysis was custom built to create a realistic statistical analysis for integrated IM lens elements that can be stacked one on top of another using micro-spheres resting in tiny circular grooves. These configurations allow each lens element to be rotated and possibly help compensate for predicted manufacturing errors. This research was supported by a grant from the National Institutes of Health (RO1 CA82880). Special thanks go to Applied Image Group/Optics for the numerous fabrication meetings concerning the miniature IM objective.
Object-Based Dense Matching Method for Maintaining Structure Characteristics of Linear Buildings
Yan, Yiming; Qiu, Mingjie; Zhao, Chunhui; Wang, Liguo
2018-01-01
In this paper, we proposed a novel object-based dense matching method specially designed for high-precision disparity maps of building objects in urban areas, which can maintain accurate object structure characteristics. The proposed framework mainly includes three stages. Firstly, an improved edge line extraction method is proposed so that the edge segments fit closely to building outlines. Secondly, a fusion method is proposed for the outlines under the constraint of straight lines, which can maintain the building structural attribute of parallel or vertical edges, which is very useful for the dense matching method. Finally, we proposed an edge constraint and outline compensation (ECAOC) dense matching method to maintain building object structural characteristics in the disparity map. In the proposed method, the improved edge lines are used to optimize the matching search scope and matching template window, and the high-precision building outlines are used to compensate the shape features of building objects. Our method can greatly increase the matching accuracy of building objects in urban areas, especially at building edges. For the outline extraction experiments, our fusion method demonstrates its superiority and robustness on panchromatic images from different satellites and at different resolutions. For the dense matching experiments, our ECAOC method shows great advantages in matching accuracy for building objects in urban areas compared with three other methods. PMID:29596393
Poulsen, Ingrid; Kreiner, Svend; Engberg, Aase W
2018-02-13
The Early Functional Abilities scale assesses the restoration of brain function after brain injury, based on 4 dimensions. The primary objective of this study was to evaluate the validity, objectivity, reliability and measurement precision of the Early Functional Abilities scale by Rasch model item analysis. A secondary objective was to examine the relationship between the Early Functional Abilities scale and the Functional Independence Measurement™, in order to establish the criterion validity of the Early Functional Abilities scale and to compare the sensitivity of measurements using the 2 instruments. The Rasch analysis was based on the assessment of 408 adult patients at admission to sub-acute rehabilitation in Copenhagen, Denmark after traumatic brain injury. The Early Functional Abilities scale provides valid and objective measurement of vegetative (autonomic), facio-oral, sensorimotor and communicative/cognitive functions. Removal of one item from the sensorimotor scale confirmed unidimensionality for each of the 4 subscales, but not for the entire scale. The Early Functional Abilities subscales are sensitive to differences between patients in ranges in which the Functional Independence Measurement™ has a floor effect. The Early Functional Abilities scale assesses the early recovery of important aspects of brain function after traumatic brain injury, but is not unidimensional. We recommend removal of the "standing" item and calculation of summary subscales for the separate dimensions.
Névéol, Aurélie; Zeng, Kelly; Bodenreider, Olivier
2006-01-01
Objective: This paper explores alternative approaches for the evaluation of an automatic indexing tool for MEDLINE, complementing the traditional precision and recall method. Materials and methods: The performance of MTI, the Medical Text Indexer used at NLM to produce MeSH recommendations for biomedical journal articles, is evaluated on a random set of MEDLINE citations. The evaluation examines semantic similarity at the term level (indexing terms). In addition, the documents retrieved by queries resulting from MTI index terms for a given document are compared to the PubMed related citations for this document. Results: Semantic similarity scores between sets of index terms are higher than the corresponding Dice similarity scores. Overall, 75% of the original documents and 58% of the top ten related citations are retrieved by queries based on the automatic indexing. Conclusions: The alternative measures studied in this paper confirm previous findings and may be used to select particular documents from the test set for a more thorough analysis. PMID:17238409
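For context, the Dice similarity referred to above is the plain set-overlap measure

```latex
\[
\mathrm{Dice}(A,B)=\frac{2\,|A\cap B|}{|A|+|B|} .
\]
```

For hypothetical term sets A = {Humans, Liver, Neoplasms} and B = {Humans, Liver Neoplasms}, Dice scores only 2·1/(3+2) = 0.4 because it counts exact matches, while a semantic similarity measure can credit the near-match between "Liver"/"Neoplasms" and "Liver Neoplasms"; this is why the semantic scores run higher than the Dice scores.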
Precise Ages for the Benchmark Brown Dwarfs HD 19467 B and HD 4747 B
NASA Astrophysics Data System (ADS)
Wood, Charlotte; Boyajian, Tabetha; Crepp, Justin; von Braun, Kaspar; Brewer, John; Schaefer, Gail; Adams, Arthur; White, Tim
2018-01-01
Large uncertainty in the age of brown dwarfs, stemming from a mass-age degeneracy, makes it difficult to constrain substellar evolutionary models. To break the degeneracy, we need "benchmark" brown dwarfs (found in binary systems) whose ages can be determined independent of their masses. HD 19467 B and HD 4747 B are two benchmark brown dwarfs detected through the TRENDS (TaRgeting bENchmark objects with Doppler Spectroscopy) high-contrast imaging program for which we have dynamical mass measurements. To constrain their ages independently through isochronal analysis, we measured the radii of the host stars with interferometry using the Center for High Angular Resolution Astronomy (CHARA) Array. Assuming the brown dwarfs have the same ages as their host stars, we use these results to distinguish between several substellar evolutionary models. In this poster, we present new age estimates for HD 19467 and HD 4747 that are more accurate and precise and show our preliminary comparisons to cooling models.
Extracting contours of oval-shaped objects by Hough transform and minimal path algorithms
NASA Astrophysics Data System (ADS)
Tleis, Mohamed; Verbeek, Fons J.
2014-04-01
Circular and oval-like objects are very common in cell and microbiology. These objects need to be analyzed, and to that end, digitized images from the microscope are used to build an automated analysis pipeline. It is essential to detect all the objects in an image as well as to extract the exact contour of each individual object, so that measurements of shape and texture features can be performed on them. Our measurement objective is achieved by probing contour detection through dynamic programming. In this paper we describe a method that uses the Hough transform and two minimal path algorithms to detect contours of (ovoid-like) objects. These algorithms are based on an existing grey-weighted distance transform and a new algorithm that extracts the circular shortest path in an image. The methods are tested on an artificial dataset of 1,000 images, with an F1-score of 0.972. In a case study with yeast cells, contours from our methods were compared with another solution using Pratt's figure of merit. The results indicate that our methods were more precise, based on a comparison with a ground-truth dataset. As far as yeast cells are concerned, the segmentation and measurement results will enable, in future work, retrieval of information from different developmental stages of the cell using complex features.
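A minimal sketch of the candidate-generation stage only, using OpenCV's Hough circle transform (the file name and all detector parameters are assumptions; the paper's minimal-path refinement of each candidate contour is not reproduced here):

```python
import cv2
import numpy as np

# Hough transform proposes circular candidates; the paper then refines each
# candidate with a circular shortest-path (minimal path) step.
img = cv2.imread("cells.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
blurred = cv2.medianBlur(img, 5)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=60)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, 255, 1)  # draw each candidate contour
cv2.imwrite("candidates.png", img)
```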
Moving Object Detection Using Scanning Camera on a High-Precision Intelligent Holder.
Chen, Shuoyang; Xu, Tingfa; Li, Daqun; Zhang, Jizhou; Jiang, Shenwang
2016-10-21
During moving object detection in an intelligent visual surveillance system, scenes with complex backgrounds are sure to appear. Traditional methods, such as frame differencing and optical flow, may not be able to deal with this problem very well. In such scenarios, we use a modified algorithm to do the background modeling. In this paper, we use edge detection to obtain an edge difference image in order to enhance robustness against illumination variation. Then we use a multi-block temporal-analyzing LBP (Local Binary Pattern) algorithm to perform the segmentation. In the end, connected component analysis is used to locate the object. We also built a hardware platform, the core of which consists of DSP (Digital Signal Processor) and FPGA (Field Programmable Gate Array) boards together with the high-precision intelligent holder.
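For readers unfamiliar with LBP texture codes, a minimal Python sketch of a basic 8-neighbour LBP and per-block histograms (the paper's multi-block temporal-analyzing variant is more elaborate; block size and encoding here are assumptions):

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour LBP code per pixel of a grayscale image."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def block_histograms(code, block=16):
    """Per-block LBP histograms; comparing them across frames flags blocks
    whose texture changed, i.e. candidate foreground regions."""
    h, w = code.shape
    hists = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            hist, _ = np.histogram(code[y:y + block, x:x + block],
                                   bins=256, range=(0, 256))
            hists.append(hist / hist.sum())
    return np.array(hists)
```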
The Precision Efficacy Analysis for Regression Sample Size Method.
ERIC Educational Resources Information Center
Brooks, Gordon P.; Barcikowski, Robert S.
The general purpose of this study was to examine the efficiency of the Precision Efficacy Analysis for Regression (PEAR) method for choosing appropriate sample sizes in regression studies used for precision. The PEAR method, which is based on the algebraic manipulation of an accepted cross-validity formula, essentially uses an effect size to…
Precision and Accuracy of Analysis for Boron in ITP Samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tovo, L.L.
Inductively Coupled Plasma Emission Spectroscopy (ICPES) has been used by the Analytical Development Section (ADS) to measure boron in catalytic tetraphenylboron decomposition studies performed by the Waste Processing Technology (WPT) section. Analysis of these samples is complicated by the presence of high concentrations of sodium and organic compounds. Previously, we found signal suppression in samples analyzed "as received". We suspected that the suppression was due to the high organic concentration (up to 0.01 molar organic decomposition products) in the samples. When the samples were acid digested prior to analysis, the suppression was eliminated. The precision of the reported boron concentration was estimated as 10 percent based on the known precision of the inorganic boron standard used for calibration and quality control checks of the ICPES analysis. However, a precision better than 10 percent was needed to evaluate ITP process operating parameters. Therefore, the purpose of this work was (1) to measure, instead of estimating, the precision of the boron measurement on ITP samples and (2) to determine the optimum precision attainable with current instrumentation.
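Measured (rather than estimated) precision of the kind sought here is typically reported as percent relative standard deviation over replicates; a minimal Python sketch with hypothetical replicate values:

```python
import numpy as np

# Replicate boron measurements on one sample (hypothetical values, mg/L).
replicates = np.array([10.12, 10.05, 9.98, 10.21, 10.08])
mean = replicates.mean()
rsd = replicates.std(ddof=1) / mean * 100  # percent relative standard deviation
print(f"mean = {mean:.2f} mg/L, RSD = {rsd:.2f}%")
```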
López-Gil, Norberto; Fernández-Sánchez, Vicente; Thibos, Larry N.; Montés-Micó, Robert
2010-01-01
Purpose We studied the accuracy and precision of 32 objective wavefront methods for finding the amplitude of accommodation in 180 eyes. Methods Ocular accommodation was stimulated with 0.5 D steps in target vergence spanning the full range of accommodation for each subject. Subjective monocular amplitude of accommodation was measured using two clinical methods: with negative lenses and with a custom Badal optometer. Results Both subjective methods gave similar results. Results obtained from the Badal optometer were used to test the accuracy of the objective methods. All objective methods showed a lower amplitude of accommodation than the subjective ones, by an amount that varied from 0.2 to 1.1 D depending on the method. The precision of this prediction also varied between subjects, with an average standard error of the mean of 0.1 D that decreased with age. Conclusions Depth of field increases the subjective amplitude of accommodation, which therefore overestimates the objective amplitude obtained with all the metrics used. The change of spherical aberration in the negative direction during accommodation increases the amplitude of accommodation by an amount that varies with age.
Validity evidence for the Simulated Colonoscopy Objective Performance Evaluation scoring system.
Trinca, Kristen D; Cox, Tiffany C; Pearl, Jonathan P; Ritter, E Matthew
2014-02-01
Low-cost, objective systems to assess and train endoscopy skills are needed. The aim of this study was to evaluate the ability of Simulated Colonoscopy Objective Performance Evaluation to assess the skills required to perform endoscopy. Thirty-eight subjects were included in this study, all of whom performed 4 tasks. The scoring system measured performance by calculating precision and efficiency. Data analysis assessed the relationship between colonoscopy experience and performance on each task and the overall score. Endoscopic trainees' Simulated Colonoscopy Objective Performance Evaluation scores correlated significantly with total colonoscopy experience (r = .61, P = .003) and experience in the past 12 months (r = .63, P = .002). Significant differences were seen among practicing endoscopists, nonendoscopic surgeons, and trainees (P < .0001). When the 4 tasks were analyzed, each showed significant correlation with colonoscopy experience (scope manipulation, r = .44, P = .044; tool targeting, r = .45, P = .04; loop management, r = .47, P = .032; mucosal inspection, r = .65, P = .001) and significant differences in performance between the endoscopist groups, except for mucosal inspection (scope manipulation, P < .0001; tool targeting, P = .002; loop management, P = .0008; mucosal inspection, P = .27). Simulated Colonoscopy Objective Performance Evaluation objectively assesses the technical skills required to perform endoscopy and shows promise as a platform for proficiency-based skills training. Published by Elsevier Inc.
USDA-ARS?s Scientific Manuscript database
The objective of this study was to evaluate and compare amino acid digestibility of several feedstuffs using 2 commonly accepted methods: the precision-fed cecectomized rooster assay (PFR) and the standardized ileal amino acid assay (SIAAD). Six corn, 6 corn distillers dried grains with or without s...
USDA-ARS?s Scientific Manuscript database
The objective of this study was to determine amino acid digestibility of 4 feedstuffs [soybean meal (SBM), canola meal, fish meal, and meat and bone meal (MBM)] using the precision-fed cecectomized rooster assay (PFR), the standardized ileal assay (SIAAD), and a newly developed precision-fed ileal b...
A Flexile and High Precision Calibration Method for Binocular Structured Light Scanning System
Yuan, Jianying; Wang, Qiong; Li, Bailin
2014-01-01
3D (three-dimensional) structured light scanning system is widely used in the field of reverse engineering, quality inspection, and so forth. Camera calibration is the key for scanning precision. Currently, 2D (two-dimensional) or 3D fine processed calibration reference object is usually applied for high calibration precision, which is difficult to operate and the cost is high. In this paper, a novel calibration method is proposed with a scale bar and some artificial coded targets placed randomly in the measuring volume. The principle of the proposed method is based on hierarchical self-calibration and bundle adjustment. We get initial intrinsic parameters from images. Initial extrinsic parameters in projective space are estimated with the method of factorization and then upgraded to Euclidean space with orthogonality of rotation matrix and rank 3 of the absolute quadric as constraint. Last, all camera parameters are refined through bundle adjustment. Real experiments show that the proposed method is robust, and has the same precision level as the result using delicate artificial reference object, but the hardware cost is very low compared with the current calibration method used in 3D structured light scanning system. PMID:25202736
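The final bundle adjustment stage minimizes reprojection error over camera parameters; a toy single-camera Python sketch of the residual setup (axis-angle pose and a single focal length are simplifying assumptions; the paper refines all cameras and targets jointly):

```python
import numpy as np
from scipy.optimize import least_squares

def project(params, pts3d):
    """Pinhole projection with pose and focal length packed as
    params = [rx, ry, rz, tx, ty, tz, f] (axis-angle rotation)."""
    rvec, t, f = params[:3], params[3:6], params[6]
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        rotated = pts3d
    else:
        k = rvec / theta  # Rodrigues rotation formula
        rotated = (pts3d * np.cos(theta)
                   + np.cross(k, pts3d) * np.sin(theta)
                   + k * (pts3d @ k)[:, None] * (1 - np.cos(theta)))
    cam = rotated + t
    return f * cam[:, :2] / cam[:, 2:3]

def reprojection_residuals(params, pts3d, observed2d):
    return (project(params, pts3d) - observed2d).ravel()

# params0, pts3d, obs2d would come from the self-calibration initialisation:
# result = least_squares(reprojection_residuals, params0, args=(pts3d, obs2d))
```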
Microlensing Constraints on the Mass of Single Stars from HST Astrometric Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kains, N.; Calamida, A.; Sahu, K. C.
2017-07-14
Here, we report on the first results from a large-scale observing campaign aiming to use astrometric microlensing to detect and place limits on the mass of single objects, including stellar remnants. We used the Hubble Space Telescope to monitor stars near the Galactic Center for three years, and we measured the brightness and positions of ~2 million stars at each observing epoch. In addition to this, we monitored the same pointings using the VIMOS imager on the Very Large Telescope. The stars we monitored include several bright microlensing events observed from the ground by the OGLE collaboration. In this paper, we present the analysis of our photometric and astrometric measurements for six of these events, and derive mass constraints for the lens in each of them. Although these constraints are limited by the photometric precision of ground-based data, and our ability to determine the lens distance, we were able to constrain the size of the Einstein ring radius thanks to our precise astrometric measurements—the first routine measurements of this type from a large-scale observing program. In conclusion, this demonstrates the power of astrometric microlensing as a tool to constrain the masses of stars, stellar remnants, and, in the future, extrasolar planets, using precise ground- and space-based observations.
A general natural-language text processor for clinical radiology.
Friedman, C; Alderson, P O; Austin, J H; Cimino, J J; Johnson, S B
1994-01-01
OBJECTIVE: Development of a general natural-language processor that identifies clinical information in narrative reports and maps that information into a structured representation containing clinical terms. DESIGN: The natural-language processor provides three phases of processing, all of which are driven by different knowledge sources. The first phase performs the parsing. It identifies the structure of the text through use of a grammar that defines semantic patterns and a target form. The second phase, regularization, standardizes the terms in the initial target structure via a compositional mapping of multi-word phrases. The third phase, encoding, maps the terms to a controlled vocabulary. Radiology is the test domain for the processor and the target structure is a formal model for representing clinical information in that domain. MEASUREMENTS: The impression sections of 230 radiology reports were encoded by the processor. Results of an automated query of the resultant database for the occurrences of four diseases were compared with the analysis of a panel of three physicians to determine recall and precision. RESULTS: Without training specific to the four diseases, recall and precision of the system (combined effect of the processor and query generator) were 70% and 87%. Training of the query component increased recall to 85% without changing precision. PMID:7719797
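The recall and precision figures above follow the standard definitions; a minimal Python sketch with hypothetical (report, finding) pairs standing in for the physician reference standard:

```python
def recall_precision(system_findings, reference_findings):
    """Recall and precision of system-detected disease mentions against a
    reference standard (both encoded as sets of (report, finding) pairs)."""
    tp = len(system_findings & reference_findings)
    recall = tp / len(reference_findings) if reference_findings else 1.0
    precision = tp / len(system_findings) if system_findings else 1.0
    return recall, precision

reference = {("report12", "pneumonia"), ("report12", "effusion"), ("report31", "nodule")}
system = {("report12", "pneumonia"), ("report31", "nodule"), ("report44", "mass")}
print(recall_precision(system, reference))  # (0.667, 0.667)
```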
Applied 3D printing for microscopy in health science research
NASA Astrophysics Data System (ADS)
Brideau, Craig; Zareinia, Kourosh; Stys, Peter
2015-03-01
The rapid prototyping capability offered by 3D printing is considered advantageous for commercial applications. However, the ability to quickly produce precision custom devices is highly beneficial in the research laboratory setting as well. Biological laboratories require the manipulation and analysis of delicate living samples, so the ability to create custom holders, support equipment, and adapters allows the extension of existing laboratory machines. Applications include camera adapters and stage sample holders for microscopes, surgical guides for tissue preparation, and small precision tools customized to unique specifications. Where high precision is needed, especially for the reproduction of fine features, a high-resolution printer is required. However, cheaper, lower-resolution commercial printers have been shown to be more than adequate for less demanding projects. For direct manipulation of delicate samples, biocompatible raw materials are often required, complicating the printing process. This paper will examine some examples of 3D-printed objects for laboratory use, and provide an overview of the requirements for 3D printing for this application. Materials, printing resolution, production, and ease of use will all be reviewed with an eye to producing better printers and techniques for laboratory applications. Specific case studies will highlight applications for 3D-printed devices in live animal imaging for both microscopy and Magnetic Resonance Imaging.
Design and control of a multi-DOF micromanipulator dedicated to multiscale micromanipulation
NASA Astrophysics Data System (ADS)
Yang, Yi-Ling; Wei, Yan-Ding; Lou, Jun-Qiang; Fu, Lei; Fang, Sheng
2017-11-01
This paper presents the design, implementation and control of a new piezoelectrically actuated compliant micromanipulator dedicated to multiscale, precise and reliable operations. To begin with, the manipulator is devised to obtain multiple degrees of freedom and large workspace ranges. Two-stage amplification mechanisms (consisting of leverage and rocker mechanisms) and composite parallelogram mechanisms are combined to construct the lower microstage. Meanwhile, the structural design of the upper dual-driven microgripper is based on the bridge-type mechanism and the unilateral parallelogram mechanism. Through finite-element analysis, the structural parameters of the micromanipulator are optimized and the structural interaction performances are examined. Moreover, a cooperative control strategy is proposed to achieve synchronous control of the motion trajectory, the gripper position and the contact force. Precise motion control in the presence of the hysteresis phenomenon and system disturbances is ensured by using adaptive sliding mode control (SMC). In particular, an improved nonsymmetrical Bouc-Wen model and a fuzzy regulator are proposed within the SMC. Several experimental investigations are conducted to validate the effectiveness of the developed micromanipulator by performing transfer operations on a micro-object. Experimental results demonstrate that the micromanipulator presents good characteristics, and precise and robust operation can be achieved using the cooperative controller.
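For context on the hysteresis model mentioned above, a minimal Python sketch of the classical symmetric Bouc-Wen state update (the paper's nonsymmetrical variant adds asymmetry terms not shown here; all coefficients are illustrative):

```python
import numpy as np

def bouc_wen_step(z, dx, alpha=1.0, beta=0.5, gamma=0.3, n=1.0):
    """One Euler step of the classical Bouc-Wen hysteresis state:
    dz = alpha*dx - beta*|dx|*|z|^(n-1)*z - gamma*dx*|z|^n,
    where dx is the displacement increment over the step."""
    dz = alpha * dx - beta * abs(dx) * abs(z) ** (n - 1) * z - gamma * dx * abs(z) ** n
    return z + dz

# Drive with a sinusoidal displacement and trace the hysteretic state.
t = np.arange(0, 2, 1e-3)
x = np.sin(2 * np.pi * t)
z, history = 0.0, []
for k in range(1, len(t)):
    z = bouc_wen_step(z, x[k] - x[k - 1])
    history.append(z)
```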
Simple Perfusion Apparatus (SPA) for Manipulation, Tracking and Study of Oocytes and Embryos
Angione, Stephanie L.; Oulhen, Nathalie; Brayboy, Lynae M.; Tripathi, Anubhav; Wessel, Gary M.
2016-01-01
Objective: To develop and implement a device and protocol for oocyte analysis at the single-cell level. The device must be capable of high-resolution imaging, temperature control, and perfusion of media, drugs, sperm, and immunolabeling reagents, all at defined flow rates. Each oocyte and resultant embryo must remain spatially separated and defined. Design: Experimental laboratory study. Setting: University and academic center for reproductive medicine. Patients/Animals: Women with eggs retrieved for ICSI cycles, adult female FVBN and B6C3F1 mouse strains, sea stars. Intervention: Real-time, longitudinal imaging of oocytes following fluorescent labeling, insemination, and viability tests. Main outcome measure(s): Cell and embryo viability, immunolabeling efficiency, live cell endocytosis quantitation, precise metrics of fertilization and embryonic development. Results: Single oocytes were longitudinally imaged following significant changes in media, markers, endocytosis quantitation, and development, all under precise microfluidic control. Cells remained viable, enclosed, and separate, allowing precision measurements, repeatability, and imaging. Conclusions: We engineered a simple device to load, visualize, experiment on, and effectively record individual oocytes and embryos without loss of cells. Prolonged incubation capabilities permit longitudinal studies without the need for transfer and the associated potential loss of cells. This simple perfusion apparatus (SPA) provides careful, precise, and flexible handling of precious samples, facilitating clinical in vitro fertilization approaches. PMID:25450296
Portable Horizontal-Drilling And Positioning Device
NASA Technical Reports Server (NTRS)
Smigocki, Edmund; Johnson, Clarence
1988-01-01
A portable horizontal-drilling and positioning device, constructed mainly of off-the-shelf components, accurately drills small horizontal holes in irregularly shaped objects. Holes are precisely placed and drilled in objects that cannot be moved to a shop area. The new device provides three axes of movement while maintaining horizontal drilling.
Note: A manifold ranking based saliency detection method for camera.
Zhang, Libo; Sun, Yihan; Luo, Tiejian; Rahman, Mohammad Muntasir
2016-09-01
Research focused on salient object regions in natural scenes has attracted much attention in computer vision and has been widely used in many applications such as object detection and segmentation. However, accurately focusing on the salient region while taking photographs of real-world scenery is still a challenging task. In order to deal with this problem, this paper presents a novel approach based on the human visual system, which works better through the use of both a background prior and a compactness prior. In the proposed method, we eliminate unsuitable boundaries with a fixed threshold to optimize the image boundary selection, which provides more precise estimations. Then, object detection, optimized with the compactness prior, is obtained by ranking with background queries. Salient objects are generally grouped together into connected areas that have compact spatial distributions. Experimental results on three public datasets demonstrate that the precision and robustness of the proposed algorithm are clearly improved.
Evaluating structural pattern recognition for handwritten math via primitive label graphs
NASA Astrophysics Data System (ADS)
Zanibbi, Richard; Mouchère, Harold; Viard-Gaudin, Christian
2013-01-01
Currently, structural pattern recognizer evaluations compare graphs of detected structure to target structures (i.e. ground truth) using recognition rates, recall and precision for object segmentation, classification and relationships. In document recognition, these target objects (e.g. symbols) are frequently comprised of multiple primitives (e.g. connected components, or strokes for online handwritten data), but current metrics do not characterize errors at the primitive level, from which object-level structure is obtained. Primitive label graphs are directed graphs defined over primitives and primitive pairs. We define new metrics obtained by Hamming distances over label graphs, which allow classification, segmentation and parsing errors to be characterized separately, or using a single measure. Recall and precision for detected objects may also be computed directly from label graphs. We illustrate the new metrics by comparing a new primitive-level evaluation to the symbol-level evaluation performed for the CROHME 2012 handwritten math recognition competition. A Python-based set of utilities for evaluating, visualizing and translating label graphs is publicly available.
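A minimal Python sketch of a Hamming-style distance over label graphs as described above (the dict-based graph encoding and the example labels are assumptions for illustration, not the paper's data format):

```python
def label_graph_distance(graph_a, graph_b):
    """Count node (primitive) label disagreements and edge (primitive-pair)
    label disagreements separately. Each graph is a pair of dicts:
    node -> label and (node, node) -> relation label."""
    nodes_a, edges_a = graph_a
    nodes_b, edges_b = graph_b
    node_errors = sum(1 for p in nodes_a if nodes_a[p] != nodes_b.get(p))
    keys = set(edges_a) | set(edges_b)
    edge_errors = sum(1 for k in keys if edges_a.get(k) != edges_b.get(k))
    return node_errors, edge_errors

# Two strokes forming a "+" symbol vs. a recognition that split them.
truth = ({1: "plus", 2: "plus"}, {(1, 2): "merge"})
output = ({1: "minus", 2: "one"}, {(1, 2): None})
print(label_graph_distance(output, truth))  # (2, 1)
```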
Uncertainty Optimization Applied to the Monte Carlo Analysis of Planetary Entry Trajectories
NASA Technical Reports Server (NTRS)
Olds, John; Way, David
2001-01-01
Recently, strong evidence of liquid water under the surface of Mars and a meteorite that might contain ancient microbes have renewed interest in Mars exploration. With this renewed interest, NASA plans to send spacecraft to Mars approximately every 26 months. These future spacecraft will return higher-resolution images, make precision landings, engage in longer-ranging surface maneuvers, and even return Martian soil and rock samples to Earth. Future robotic missions and any human missions to Mars will require precise entries to ensure safe landings near science objectives and pre-deployed assets. Potential sources of water and other interesting geographic features are often located near hazards, such as within craters or along canyon walls. In order for more accurate landings to be made, spacecraft entering the Martian atmosphere need to use lift to actively control the entry. This active guidance results in much smaller landing footprints. Planning for these missions will depend heavily on Monte Carlo analysis. Monte Carlo trajectory simulations have been used with a high degree of success in recent planetary exploration missions. These analyses ascertain the impact of off-nominal conditions during a flight and account for uncertainty. Uncertainties generally stem from limitations in manufacturing tolerances, measurement capabilities, analysis accuracies, and environmental unknowns. Thousands of off-nominal trajectories are simulated by randomly dispersing uncertainty variables and collecting statistics on forecast variables. The dependability of Monte Carlo forecasts, however, is limited by the accuracy and completeness of the assumed uncertainties. This is because Monte Carlo analysis is a forward-driven problem: it begins with the input uncertainties and proceeds to the forecast outputs. It lacks a mechanism to affect or alter the uncertainties based on the forecast results. If the results are unacceptable, the current practice is to use an iterative, trial-and-error approach to reconcile discrepancies. Therefore, an improvement to the Monte Carlo analysis is needed that will allow the problem to be worked in reverse. In this way, the largest allowable dispersions that achieve the required mission objectives can be determined quantitatively.
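A minimal Python sketch of the forward Monte Carlo dispersion step described above (the flight model is a placeholder linear function and all nominal values and sigmas are assumptions; a real analysis integrates the entry equations of motion):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000  # number of dispersed trajectories

def landing_range(entry_angle_deg, density_scale, lift_to_drag):
    """Placeholder flight simulation returning downrange distance (km)."""
    return (800 + 40 * (entry_angle_deg + 12)
            + 120 * (density_scale - 1)
            + 300 * (lift_to_drag - 0.3))

# Disperse the uncertain inputs about their nominal values (assumed sigmas).
angles = rng.normal(-12.0, 0.25, N)    # entry flight-path angle, deg
densities = rng.normal(1.0, 0.05, N)   # atmospheric density multiplier
lods = rng.normal(0.30, 0.01, N)       # lift-to-drag ratio

ranges = landing_range(angles, densities, lods)
print(f"mean downrange {ranges.mean():.1f} km, "
      f"3-sigma footprint ±{3 * ranges.std():.1f} km")
```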
White, Robin R; Capper, Judith L
2014-03-01
The objective of this study was to use a precision nutrition model to simulate the relationship between diet formulation frequency and dairy cattle performance across various climates. Agricultural Modeling and Training Systems (AMTS) CattlePro diet-balancing software (Cornell Research Foundation, Ithaca, NY) was used to compare 3 diet formulation frequencies (weekly, monthly, or seasonal) and 3 levels of climate variability (hot, cold, or variable). Predicted daily milk yield (MY), metabolizable energy (ME) balance, and dry matter intake (DMI) were recorded for each frequency-variability combination. Economic analysis was conducted to calculate the predicted revenue over feed and labor costs. Diet formulation frequency affected ME balance and MY but did not affect DMI. Climate variability affected ME balance and DMI but not MY. The interaction between climate variability and formulation frequency did not affect ME balance, MY, or DMI. Formulating diets more frequently increased MY, DMI, and ME balance. Economic analysis showed that formulating diets weekly rather than seasonally could improve returns over variable costs by $25,000 per year for a moderate-sized (300-cow) operation. To achieve this increase in returns, an entire feeding system margin of error of <1% was required. Formulating monthly, rather than seasonally, may be a more feasible alternative as this requires a margin of error of only 2.5% for the entire feeding system. Feeding systems with a low margin of error must be developed to better take advantage of the benefits of precision nutrition. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Ashfaq, Maria; Sial, Ali Akber; Bushra, Rabia; Rehman, Atta-Ur; Baig, Mirza Tasawur; Huma, Ambreen; Ahmed, Maryam
2018-01-01
Spectrophotometric techniques are considered the simplest and most operator-friendly of the available analytical methods for pharmaceutical analysis. The objective of the study was to develop a precise, accurate and rapid UV-spectrophotometric method for the estimation of chlorpheniramine maleate (CPM) in pure form and in solid pharmaceutical formulations. Drug absorption was measured in various solvent systems including 0.1N HCl (pH 1.2), acetate buffer (pH 4.5), phosphate buffer (pH 6.8) and distilled water (pH 7.0). Method validation was performed as per the official ICH guidelines (2005). High drug absorption was observed in the 0.1N HCl medium, with a λmax of 261 nm. The drug showed good linearity over the 20 to 60 μg/mL concentration range, with the linear regression equation Y = 0.1853X + 0.1098 and a coefficient of determination (R²) of 0.9998. Method accuracy was evaluated by percent drug recovery, with more than 99% drug recovery at the three levels assessed. The %RSD values computed for inter- and intraday analysis were <1, indicating the high accuracy and precision of the developed technique. The developed method is robust, showing no significant variation under minor deliberate changes. The LOD and LOQ values were determined to be 2.2 μg/mL and 6.6 μg/mL, respectively. The investigated method proved its sensitivity, precision and accuracy and hence could be successfully used to estimate the CPM content in bulk and in pharmaceutical matrix tablets.
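Using the reported regression line, a minimal Python sketch of how such a calibration curve is fitted and inverted to estimate sample concentration (the absorbance values are idealised from the reported equation, not the study's raw data):

```python
import numpy as np

# Calibration standards (µg/mL) and absorbances consistent with the reported
# regression Y = 0.1853X + 0.1098 (noise-free for the sketch).
conc = np.array([20, 30, 40, 50, 60], dtype=float)
absorbance = 0.1853 * conc + 0.1098

slope, intercept = np.polyfit(conc, absorbance, 1)

def estimate_concentration(a_sample):
    """Invert the calibration line to estimate CPM concentration."""
    return (a_sample - intercept) / slope

print(estimate_concentration(7.52))  # ≈ 40 µg/mL
```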
ERIC Educational Resources Information Center
Shih, Ching-Hsiang
2013-01-01
Recent research uses software technology (OLDP, object location detection programs) to turn a commercial high-technology product, i.e. a battery-free wireless mouse, into a high-performance, precise object location detector that detects whether or not an object has been placed in the designated location. The preferred environmental stimulation is…
Bhatt, Nejal M.; Chavada, Vijay D.; Sanyal, Mallika; Shrivastav, Pranav S.
2014-01-01
Objective. Three sensitive, selective, and precise spectrophotometric methods based on manipulation of ratio spectra have been developed and validated for the determination of diclofenac sodium and pantoprazole sodium. Materials and Methods. The first method is based on ratio spectra peak-to-peak measurement using the amplitudes at 251 and 318 nm; the second method involves the first derivative of the ratio spectra (Δλ = 4 nm) using the peak amplitudes at 326.0 nm for diclofenac sodium and 337.0 nm for pantoprazole sodium. The third is the method of mean centering of ratio spectra using the values at 318.0 nm for both analytes. Results. All three methods were linear over the concentration range of 2.0–24.0 μg/mL for diclofenac sodium and 2.0–20.0 μg/mL for pantoprazole sodium. The methods were validated according to the ICH guidelines, and accuracy, precision, repeatability, and robustness were found to be within acceptable limits. The results of single-factor ANOVA analysis indicated that there is no significant difference among the developed methods. Conclusions. The developed methods provide simple resolution of this binary combination from laboratory mixtures and pharmaceutical preparations and can be conveniently adopted for routine quality control analysis. PMID:24701171
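A minimal Python sketch of the second technique (first derivative of the ratio spectrum with a Δλ = 4 nm finite-difference interval, as in the abstract; the array layout and 1 nm sampling are assumptions):

```python
import numpy as np

def ratio_first_derivative(mixture, divisor, wavelengths, delta=4.0):
    """First derivative of the ratio spectrum. `mixture` and `divisor` are
    absorbance arrays sampled on `wavelengths`; the divisor is the standard
    spectrum of the other component in the binary mixture."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    ratio = np.asarray(mixture, dtype=float) / np.asarray(divisor, dtype=float)
    step = int(round(delta / (wavelengths[1] - wavelengths[0])))
    deriv = (ratio[step:] - ratio[:-step]) / delta  # finite difference over Δλ
    centers = wavelengths[:-step] + delta / 2
    return centers, deriv
```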
[Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].
Krimmel, M; Kluba, S; Dietz, K; Reinert, S
2005-03-01
The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations.
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong; Hsiung, Pao-Ann; Wan, Chieh-Hao; Koong, Chorng-Shiuh; Liu, Tang-Kun; Yang, Yuanfan; Lin, Chu-Hsing; Chu, William Cheng-Chung
2009-02-01
A billiard ball tracking system is designed to combine with a visual guide interface to instruct users for a reliable strike. The integrated system runs on a PC platform. The system makes use of a vision system for cue ball, object ball and cue stick tracking. A least-squares error calibration process correlates the real-world and the virtual-world pool ball coordinates for a precise guidance line calculation. Users are able to adjust the cue stick on the pool table according to a visual guidance line instruction displayed on a PC monitor. The ideal visual guidance line extended from the cue ball is calculated based on a collision motion analysis. In addition to calculating the ideal visual guide, the factors influencing selection of the best shot among different object balls and pockets are explored. It is found that a tolerance angle around the ideal line for the object ball to roll into a pocket determines the difficulty of a strike. This angle depends in turn on the distance from the pocket to the object, the distance from the object to the cue ball, and the angle between these two vectors. Simulation results for tolerance angles as a function of these quantities are given. A selected object ball was tested extensively with respect to various geometrical parameters with and without using our integrated system. Players with different proficiency levels were selected for the experiment. The results indicate that all players benefit from our proposed visual guidance system in enhancing their skills, while low-skill players show the maximum enhancement in skill with the help of our system. All exhibit enhanced maximum and average hit-in rates. Experimental results on hit-in rates have shown a pattern consistent with that of the analysis. The hit-in rate is thus tightly connected with the analyzed tolerance angles for sinking object balls into a target pocket. These results prove the efficiency of our system, and the analysis results can be used to attain an efficient game-playing strategy.
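A minimal Python sketch of the ghost-ball geometry behind such a guidance line and the tolerance angle discussed above (pure 2-D geometry; ball radius, pocket half-width and the example coordinates are assumptions, not the paper's values, and spin, throw and cushions are ignored):

```python
import numpy as np

BALL_R = 0.0286  # ball radius in metres (assumed standard pool ball size)

def aim_and_tolerance(cue, obj, pocket, pocket_half_width=0.05):
    """Ghost-ball aim point: the cue ball must be driven to the position
    touching the object ball directly opposite the pocket. Also returns a
    rough tolerance angle at the object ball that still leaves it inside
    the pocket mouth."""
    cue, obj, pocket = (np.asarray(p, dtype=float) for p in (cue, obj, pocket))
    to_pocket = pocket - obj
    d_op = np.linalg.norm(to_pocket)              # object ball -> pocket distance
    ghost = obj - 2 * BALL_R * to_pocket / d_op   # aim point for the cue ball
    guidance = ghost - cue                        # guidance line shown to the user
    tol = np.degrees(np.arctan2(pocket_half_width, d_op))
    return ghost, guidance, tol

ghost, line, tol_deg = aim_and_tolerance(cue=(0.3, 0.2), obj=(0.8, 0.6),
                                         pocket=(1.27, 1.27))
print(f"aim at {ghost}, tolerance ±{tol_deg:.1f} deg")
```

Consistent with the abstract's analysis, the tolerance angle shrinks as the object-to-pocket distance grows, making longer shots harder.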
Replicas in Cultural Heritage: 3d Printing and the Museum Experience
NASA Astrophysics Data System (ADS)
Ballarin, M.; Balletti, C.; Vernier, P.
2018-05-01
3D printing has recently seen massive diffusion across several applications, not least in the field of Cultural Heritage. Because they are used for different purposes, such as study, analysis, conservation or access in museum exhibitions, 3D printed replicas need to undergo a validation process, also in terms of metric precision and accuracy. The Laboratory of Photogrammetry of Iuav University of Venice has started several collaborations with Italian museum institutions, first for the digital acquisition and then for the physical reproduction of objects of historical and artistic interest. The aim of the research is to analyse the metric characteristics of the printed model in relation to the original data, and to optimize the process that leads from the survey to the physical representation of an object. The digital model can be acquired through different methodologies with different precisions (multi-image photogrammetry, TOF laser scanner, triangulation-based laser scanner), and acquisition always involves a long processing phase. It should not be forgotten that the digital data must undergo a series of simplifications which, on one hand, eliminate the noise introduced by the acquisition process, but on the other can lead to discrepancies between the physical copy and the original geometry. In this paper we show the results obtained on a small archaeological find that was acquired and reproduced for a museum exhibition intended for blind and partially sighted people.
Becker, G
2001-01-01
Accreditation under ISO 9001 certification (ISO = International Organization for Standardization) is an external evaluation procedure carried out by independent experts, whose objective is the analysis of the operational methods and practices of a medical care facility (e.g. hospital, private clinic, general practitioner's or dentist's practice) that has decided to take charge of the design, implementation and control of its own quality policy. The whole accreditation procedure forms the basic structure of a continuous, dynamic improvement process within a practice eager to offer outstanding quality. Moreover, it requires the active and voluntary participation of every single member of the medical, administrative and technical team involved in the realization of this primary objective. In other words, it is a strong dynamic of innovation, leading to a change of views and improved communication while simultaneously enhancing the safety and quality of medical care. The continuous guarantee of high-quality medical care calls for precise planning and the systematization of actions. First of all, these actions are defined, analyzed and listed in precise work procedures. Because they are defined with the agreement of the whole team, they foster adherence and self-monitoring. This of course requires transparency of the treatment methods, whose individual steps and procedures are described in detail in a jointly drawn-up flowchart.
Wu, Xinhong; Luo, Bo; Wei, Shaozhong; Luo, Yan; Feng, Yaojun; Xu, Juan; Wei, Wei
2013-11-01
To investigate the treatment efficiency of whole brain irradiation combined with precise radiotherapy on triple-negative (TN) phenotype breast cancer patients with brain metastases and their survival times. A total of 112 metastatic breast cancer patients treated with whole brain irradiation and intensity modulated radiotherapy (IMRT) or 3D conformal radiotherapy (3DCRT) were analyzed. Thirty-seven patients were of TN phenotype. Objective response rates were compared. Survival times were estimated by using the Kaplan-Meier method. Log-rank test was used to compare the survival time difference between the TN and non-TN groups. Potential prognostic factors were determined by using a Cox proportional hazard regression model. The efficiency of radiotherapy treatment on TN and non-TN phenotypes was 96.2% and 97%, respectively. TN phenotype was associated with worse survival times than non-TN phenotype after radiotherapy (6.9 months vs. 17 months) (P < 0.01). On multivariate analysis, good prognosis was associated with non-TN status, lower graded prognosis assessment class, and nonexistence of active extracranial metastases. After whole brain irradiation followed by IMRT or 3DCRT treatment, TN phenotype breast cancer patients with intracranial metastasis had high objective response rates but shorter survival time. With respect to survival in breast cancer patients with intracranial metastasis, the TN phenotype represents a significant adverse prognostic factor.
NASA Technical Reports Server (NTRS)
Boyer, K. L.; Wuescher, D. M.; Sarkar, S.
1991-01-01
Dynamic edge warping (DEW), a technique for recovering reasonably accurate disparity maps from uncalibrated stereo image pairs, is presented. No precise knowledge of the epipolar camera geometry is assumed. The technique is embedded in a system including structural stereopsis on the front end and robust estimation in digital photogrammetry on the other for the purpose of self-calibrating stereo image pairs. Once the relative camera orientation is known, the epipolar geometry is computed and the system can use this information to refine its representation of the object space. Such a system will find application in the autonomous extraction of terrain maps from stereo aerial photographs, for which camera position and orientation are unknown a priori, and for online autonomous calibration maintenance for robotic vision applications, in which the cameras are subject to vibration and other physical disturbances after calibration. This work thus forms a component of an intelligent system that begins with a pair of images and, having only vague knowledge of the conditions under which they were acquired, produces an accurate, dense, relative depth map. The resulting disparity map can also be used directly in some high-level applications involving qualitative scene analysis, spatial reasoning, and perceptual organization of the object space. The system as a whole substitutes high-level information and constraints for precise geometric knowledge in driving and constraining the early correspondence process.
Soler, Zachary M; Pallanch, John F; Sansoni, Eugene Ritter; Jones, Cameron S; Lawrence, Lauren A; Schlosser, Rodney J; Mace, Jess C; Smith, Timothy L
2015-09-01
Commonly used computed tomography (CT) staging systems for chronic rhinosinusitis (CRS) focus on the sinuses and do not quantify disease in the olfactory cleft. The goal of the current study was to determine whether precise measurements of olfactory cleft opacification better correlate with olfaction in patients with CRS. Olfaction was assessed using the 40-item Smell Identification Test (SIT-40) before and after sinus surgery in adult patients. Olfactory cleft opacification was quantified precisely using three-dimensional (3D), computerized volumetric analysis, as well as via semiquantitative Likert scale estimations at predetermined anatomic sites. Sinus opacification was also quantified using the Lund-Mackay staging system. The overall cohort (n = 199) included 89 (44.7%) patients with CRS with nasal polyposis (CRSwNP) and 110 (55.3%) with CRS without nasal polyposis (CRSsNP). The olfactory cleft opacified volume correlated with objective olfaction as determined by the SIT-40 (Spearman's rank correlation coefficient [Rs ] = -0.461; p < 0.001). The correlation was significantly stronger in the CRSwNP subgroup (Rs = -0.573; p < 0.001), whereas no appreciable correlation was found in the CRSsNP group (Rs = -0.141; p = 0.141). Correlations between sinus-specific Lund-Mackay CT scoring and SIT-40 scores were weaker in the CRSwNP (Rs = -0.377; p < 0.001) subgroup but stronger in the CRSsNP (Rs = -0.225; p = 0.018) group when compared to olfactory cleft correlations. Greater intraclass correlations (ICCs) were found between quantitative volumetric measures of olfactory cleft opacification (ICC = 0.844; p < 0.001) as compared with semiquantitative Likert grading (ICC = 0.627; p < 0.001). Quantitative measures of olfactory cleft opacification correlate with objective olfaction, with the strongest correlations seen in patients with nasal polyps. © 2015 ARS-AAOA, LLC.
Quality Analysis of Chlorogenic Acid and Hyperoside in Crataegi fructus
Weon, Jin Bae; Jung, Youn Sik; Ma, Choong Je
2016-01-01
Background: Crataegi fructus is a herbal medicine for strong stomach, sterilization, and alcohol detoxification. Chlorogenic acid and hyperoside are the major compounds in Crataegi fructus. Objective: In this study, we established a novel high-performance liquid chromatography (HPLC)-diode array detection analysis method for chlorogenic acid and hyperoside for quality control of Crataegi fructus. Materials and Methods: HPLC analysis was achieved on a reverse-phase C18 column (5 μm, 4.6 mm × 250 mm) using water and acetonitrile as the mobile phase with a gradient system. The method was validated for linearity, precision, and accuracy. About 31 batches of Crataegi fructus samples collected from Korea and China were analyzed using the HPLC fingerprint of the developed HPLC method. Then, the contents of chlorogenic acid and hyperoside were compared for quality evaluation of Crataegi fructus. Results: The results showed that the average contents (w/w %) of chlorogenic acid and hyperoside in Crataegi fructus collected from Korea were 0.0438% and 0.0416%, respectively, while those collected from China averaged 0.0399% and 0.0325%, respectively. Conclusion: In conclusion, the established HPLC analysis method was stable and could provide efficient quality evaluation for monitoring of commercial Crataegi fructus. SUMMARY: A quantitative analysis method for chlorogenic acid and hyperoside in Crataegi fructus was developed by high-performance liquid chromatography (HPLC)-diode array detection. The established HPLC analysis method was validated for linearity, precision, and accuracy. The developed method was successfully applied to the quantitative analysis of Crataegi fructus samples collected from Korea and China. Abbreviations used: HPLC: High-performance liquid chromatography, GC: Gas chromatography, MS: Mass spectrometer, LOD: Limit of detection, LOQ: Limit of quantification, RSD: Relative standard deviation, RRT: Relative retention time, RPA: Relative peak area. PMID:27076744
Using the CoRE Requirements Method with ADARTS. Version 01.00.05
1994-03-01
…requirements; combining ADARTS processes and objects derived from CoRE requirements into an ADARTS software architecture design; and taking advantage of…CoRE's precision in the ADARTS process structuring, class structuring, and software architecture design activities. Object-oriented requirements and…
NASA Astrophysics Data System (ADS)
Shrivastava, Prashant Kumar; Pandey, Arun Kumar
2018-06-01
Inconel-718 is in high demand in various industries due to its superior mechanical properties. Traditional cutting methods face difficulties in cutting this alloy due to its low thermal conductivity, low elasticity and high chemical affinity at elevated temperatures. Traditional machining also struggles with the machining and/or finishing of unusual shapes and/or sizes in such materials. Laser beam cutting may be applied to miniaturization and to ultra-precision cutting and/or finishing through appropriate control of the different process parameters. This paper presents multi-objective optimization of the kerf deviation, kerf width and kerf taper in the laser cutting of Inconel-718 sheet. Second-order regression models have been developed for the different quality characteristics using the data obtained through experimentation. The regression models have been used as objective functions for multi-objective optimization based on a hybrid approach of multiple regression analysis and a genetic algorithm. A comparison of the optimization results with the experimental results shows improvements of 88%, 10.63% and 42.15% in kerf deviation, kerf width and kerf taper, respectively. Finally, the effects of the different process parameters on the quality characteristics are also discussed.
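A minimal Python sketch of the hybrid idea (regression surrogates optimized by an evolutionary global optimizer; the surrogate coefficients, parameter names, bounds and equal weights are all illustrative assumptions, not the paper's fitted models, and differential evolution stands in for the genetic algorithm):

```python
import numpy as np
from scipy.optimize import differential_evolution

def kerf_deviation(x):
    p, v, g = x  # power, cutting speed, gas pressure (hypothetical factors)
    return 0.08 + 0.01 * p - 0.02 * v + 0.005 * g + 0.001 * p * v + 0.002 * p ** 2

def kerf_width(x):
    p, v, g = x
    return 0.35 + 0.03 * p - 0.01 * v - 0.004 * g + 0.0015 * v ** 2

def kerf_taper(x):
    p, v, g = x
    return 0.60 + 0.02 * p - 0.03 * v + 0.01 * g + 0.002 * g ** 2

def weighted_objective(x, w=(1 / 3, 1 / 3, 1 / 3)):
    """Scalarised multi-objective: weighted sum of the three kerf qualities,
    all to be minimised."""
    return w[0] * kerf_deviation(x) + w[1] * kerf_width(x) + w[2] * kerf_taper(x)

bounds = [(1.0, 3.0), (0.5, 2.5), (0.5, 1.5)]  # assumed parameter ranges
result = differential_evolution(weighted_objective, bounds, seed=1)
print(result.x, result.fun)
```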
A quality quantitative method of silicon direct bonding based on wavelet image analysis
NASA Astrophysics Data System (ADS)
Tan, Xiao; Tao, Zhi; Li, Haiwang; Xu, Tiantong; Yu, Mingxing
2018-04-01
The rapid development of MEMS (micro-electro-mechanical systems) has received significant attention from researchers in various fields and subjects. In particular, the MEMS fabrication process is elaborate and, as such, has been the focus of extensive research inquiries. However, in MEMS fabrication, component bonding is difficult to achieve and requires a complex approach. Thus, improvements in bonding quality are relatively important objectives. A higher quality bond can only be achieved with improved measurement and testing capabilities. In particular, the traditional testing methods mainly include infrared testing, tensile testing, and strength testing, despite the fact that using these methods to measure bond quality often results in low efficiency or destructive analysis. Therefore, this paper focuses on the development of a precise, nondestructive visual testing method based on wavelet image analysis that is shown to be highly effective in practice. The process of wavelet image analysis includes wavelet image denoising, wavelet image enhancement, and contrast enhancement, and as an end result, can display an image with low background noise. In addition, because the wavelet analysis software was developed with MATLAB, it can reveal the bonding boundaries and bonding rates to precisely indicate the bond quality at all locations on the wafer. This work also presents a set of orthogonal experiments that consist of three prebonding factors, the prebonding temperature, the positive pressure value and the prebonding time, which are used to analyze the prebonding quality. This method was used to quantify the quality of silicon-to-silicon wafer bonding, yielding standard treatment quantities that could be practical for large-scale use.
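A minimal Python sketch of the wavelet denoising step using PyWavelets (soft thresholding with the universal threshold; a generic stand-in for the paper's MATLAB processing chain, with wavelet choice and level as assumptions):

```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db4", level=2):
    """Soft-threshold wavelet denoising of a grayscale bond-interface image."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    # Estimate the noise sigma from the finest diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(image.size))  # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)
```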
Methods of recording and analysing cough sounds.
Subburaj, S; Parvez, L; Rajagopalan, T G
1996-01-01
Efforts have been directed at evolving a computerized system for the acquisition and multi-dimensional analysis of the cough sound. The system consists of a PC-AT486 computer with an ADC board having 12-bit resolution. The audio cough sound is acquired using a sensitive miniature microphone at a sampling rate of 8 kHz in the computer and simultaneously recorded in real time using a digital audio tape recorder, which also serves as a backup. Analysis of the cough sound is done in the time and frequency domains using the digitized data, which provide numerical values for key parameters such as cough counts, bouts, their intensity and latency. In addition, the duration of each event and the cough patterns provide a unique tool that allows objective evaluation of antitussive and expectorant drugs. Both on-line and off-line checks ensure error-free performance over long periods of time. The entire system has been evaluated for sensitivity, accuracy, precision and reliability. Successful use of this system in clinical studies has established what is perhaps the first integrated approach for the objective evaluation of cough.
A double sealing technique for increasing the precision of headspace-gas chromatographic analysis.
Xie, Wei-Qi; Yu, Kong-Xian; Gong, Yi-Xian
2018-01-19
This paper investigates a new double sealing technique for increasing the precision of the headspace gas chromatographic method. Air leakage caused by the high pressure in the headspace vial during the headspace sampling process has a great impact on measurement precision in conventional headspace analysis (i.e., the single sealing technique). The results (using ethanol solution as the model sample) show that the present technique effectively minimizes this problem. The double sealing technique has excellent measurement precision (RSD < 0.15%) and accuracy (recovery = 99.1%-100.6%) for ethanol quantification. The detection precision of the present method was 10-20 times higher than that of earlier HS-GC work that used the conventional single sealing technique. The present double sealing technique may open up a new avenue, and may also serve as a general strategy, for improving the performance (i.e., accuracy and precision) of headspace analysis of various volatile compounds. Copyright © 2017 Elsevier B.V. All rights reserved.
Cleaning and Cleanliness Measurement of Additive Manufactured Parts
NASA Technical Reports Server (NTRS)
Mitchell, Mark A.; Raley, Randy
2016-01-01
The successful acquisition and utilization of piece parts and assemblies for contamination sensitive applications requires application of cleanliness acceptance criteria. Contamination can be classified using many different schemes. One common scheme is classification as organic, ionic and particulate contaminants. These may be present in and on the surface of solid components and assemblies or may be dispersed in various gaseous or liquid media. This discussion will focus on insoluble particle contamination on the surfaces of piece parts and assemblies. Cleanliness of parts can be controlled using two strategies, referred to as gross cleanliness and precision cleanliness. Under a gross cleanliness strategy acceptance is based on visual cleanliness. This approach introduces a number of concerns that render it unsuitable for controlling cleanliness of high technology products. Under the precision cleanliness strategy, subjective, visual assessment of cleanliness is replaced by objective measurement of cleanliness. When a precision cleanliness strategy is adopted there naturally arises the question: How clean is clean enough? The methods for establishing objective cleanliness acceptance limits will be discussed.
Precision pointing and control of flexible spacecraft
NASA Technical Reports Server (NTRS)
Bantell, M. H., Jr.
1987-01-01
The problem and long term objectives for the precision pointing and control of flexible spacecraft are given. The four basic objectives are stated in terms of two principle tasks. Under Task 1, robust low order controllers, improved structural modeling methods for control applications and identification methods for structural dynamics are being developed. Under Task 2, a lab test experiment for verification of control laws and system identification algorithms is being developed. For Task 1, work has focused on robust low order controller design and some initial considerations for structural modeling in control applications. For Task 2, work has focused on experiment design and fabrication, along with sensor selection and initial digital controller implementation. Conclusions are given.
Canyval-x: Cubesat Astronomy by NASA and Yonsei Using Virtual Telescope Alignment Experiment
NASA Technical Reports Server (NTRS)
Shah, Neerav
2016-01-01
CANYVAL-X is a technology demonstration CubeSat mission with a primary objective of validating technologies that allow two spacecraft to fly in formation along an inertial line-of-sight (i.e., align two spacecraft to an inertial source). Demonstration of precision dual-spacecraft alignment achieving fine angular precision enables a variety of cutting-edge heliophysics and astrophysics science.
ERIC Educational Resources Information Center
Stöckel, Tino; Hughes, Charmayne M. L.
2015-01-01
This experiment examined how multiple planning constraints affect grasp posture planning in 6- to 10-year-old children (n = 16 in each group) by manipulating the intended object end-orientation (left end-down, right end-down) and initial precision demands (standard, initial precision) of a bar transport task. Results indicated that grasp posture…
ERIC Educational Resources Information Center
Dispa, Delphine; Lejeune, Thierry; Thonnard, Jean-Louis
2013-01-01
Most chronic stroke patients present with difficulty in the manipulation of objects. The aim of this study was to test whether an intensive program of precision grip training could improve hand functioning of patients at more than 6 months after a stroke. This was a cross-over study; hence, at inclusion, the patients were randomly divided into two…
2011-01-01
…myopia often leads otherwise competent observers to significantly underestimate the new technology's potential. Two business examples stand out: in…direction. With precision of effect combined with precision of impact, bloodless war becomes a reality. To this point, we have tried to make the…against virtually all of the centers of gravity directly related to strategic objectives, regardless of their location. Because it can bring many…
Peters, Megan A. K.; Balzer, Jonathan; Shams, Ladan
2015-01-01
If one nondescript object’s volume is twice that of another, is it necessarily twice as heavy? As larger objects are typically heavier than smaller ones, one might assume humans use such heuristics in preparing to lift novel objects if other informative cues (e.g., material, previous lifts) are unavailable. However, it is also known that humans are sensitive to statistical properties of our environments, and that such sensitivity can bias perception. Here we asked whether statistical regularities in properties of liftable, everyday objects would bias human observers’ predictions about objects’ weight relationships. We developed state-of-the-art computer vision techniques to precisely measure the volume of everyday objects, and also measured their weight. We discovered that for liftable man-made objects, “twice as large” doesn’t mean “twice as heavy”: Smaller objects are typically denser, following a power function of volume. Interestingly, this “smaller is denser” relationship does not hold for natural or unliftable objects, suggesting some ideal density range for objects designed to be lifted. We then asked human observers to predict weight relationships between novel objects without lifting them; crucially, these weight predictions quantitatively match typical weight relationships shown by similarly-sized objects in everyday environments. These results indicate that the human brain represents the statistics of everyday objects and that this representation can be quantitatively abstracted and applied to novel objects. Finally, that the brain possesses and can use precise knowledge of the nonlinear association between size and weight carries important implications for implementation of forward models of motor control in artificial systems. PMID:25768977
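A minimal Python sketch of the "smaller is denser" power-law relationship described above; the coefficient and exponent are illustrative placeholders, not the fitted values from the paper:

```python
import numpy as np

def predicted_weight(volume_cm3, k=2.0, exponent=0.7):
    """Weight prediction under a power law: weight = k * volume^exponent,
    so density ∝ volume^(exponent - 1) and smaller objects are denser."""
    return k * volume_cm3 ** exponent

v = 500.0
# Doubling the volume does not double the predicted weight:
print(predicted_weight(2 * v) / predicted_weight(v))  # 2**0.7 ≈ 1.62, not 2
```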
Bonding by Hydroxide-Catalyzed Hydration and Dehydration
NASA Technical Reports Server (NTRS)
Gwo, Dz-Hung
2008-01-01
A simple, inexpensive method for bonding solid objects exploits hydroxide-catalyzed hydration and dehydration to form silicate-like networks in thin surface and interfacial layers between the objects. The method can be practiced at room temperature or over a wide range of temperatures. The method was developed especially to enable the formation of precise, reliable bonds between precise optical components. The bonds thus formed exhibit the precision and transparency of bonds formed by the conventional optical-contact method and the strength and reliability of high-temperature frit bonds. The method also lends itself to numerous non-optical applications in which there are requirements for precise bonds and/or requirements for bonds, whether precise or imprecise, that can reliably withstand severe environmental conditions. Categories of such non-optical applications include forming composite materials, coating substrates, forming laminate structures, and preparing objects of defined geometry and composition. The method is applicable to materials that either (1) can form silicate-like networks in the sense that they have silicate-like molecular structures that are extensible into silicate-like networks or (2) can be chemically linked to silicate-like networks by means of hydroxide-catalyzed hydration and dehydration. When hydrated, a material of either type features surface hydroxyl (-OH) groups. In this method, a silicate-like network that bonds two substrates can be formed either by a bonding material alone or by the bonding material together with material from either or both of the substrates. Typically, an aqueous hydroxide bonding solution is dispensed and allowed to flow between the mating surfaces by capillary action. If the surface figures of the substrates do not match precisely, bonding could be improved by including a filling material in the bonding solution. Preferably, the filling material should include at least one ingredient that can be hydrated to have exposed hydroxyl groups and that can be chemically linked, by hydroxide catalysis, to a silicate-like network. The silicate-like network could be generated in situ from the filling material and/or substrate material, or could be originally present in the bonding material.
Moire measuring technology for three-dimensional profile of the object
NASA Astrophysics Data System (ADS)
Fu, Yanjun; Yang, Kuntao
2006-02-01
An optical system is designed to project a transmission grating so that a deformed grating is formed on the surface of the object. The image of the deformed grating is formed by the lens, the reference grating is placed at the image plane, and the moire fringe is thereby obtained. The magnification principle of the moire fringe is used to measure the profile of the object. The optical principle of the projection is analyzed, and the relation between the phase and the height of the object is deduced. The optical system is analyzed from the viewpoints of both geometrical optics and physical optics, and the factors that influence image quality and the measurement result are identified, from which improvements to the measuring precision are proposed. In the subsequent information processing, the image quality is degraded by diffuse reflection, so a digital filter is first used to suppress noise and smooth the image. To improve the measurement precision, subdivision technology is then applied. Fourier transform profilometry and phase-shifting technology are used in the calculation, with detailed analyses in both the time domain and the frequency domain, and a method of improving the measuring precision is put forward. A good digital filtering algorithm is presented for Fourier transform profilometry. For the phase-shifting technique, detailed formulas for the three-step and four-step methods are given. Finally, the phase related to the height information of the object is obtained; because this phase is wrapped, an unwrapping algorithm converts the discontinuous phase into a continuous phase. Using the relation between phase and height, the height is obtained, and the three-dimensional profile of the measured object can be reconstructed. The system is very convenient for non-contact profile measurement of objects.
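The four-step phase-shifting calculation mentioned above has a standard closed form: with four fringe images shifted by π/2 each, the wrapped phase is atan2(I4 − I2, I1 − I3). A minimal sketch, not the authors' implementation, assuming idealized sinusoidal fringes on a 1D cut through the image:

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four fringe images shifted by pi/2 each.

    Standard four-step formula: phi = atan2(I4 - I2, I1 - I3).
    """
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic test: a tilted phase ramp observed under four shifted fringes.
x = np.linspace(0, 4 * np.pi, 512)
true_phase = 0.8 * x
frames = [1 + 0.5 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]

wrapped = four_step_phase(*frames)
unwrapped = np.unwrap(wrapped)  # remove 2*pi discontinuities along the row
```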
Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.
2001-01-01
A laboratory for analysis of low-ionic strength water has been developed at the U.S. Geological Survey (USGS) office in Troy, N.Y., to analyze samples collected by USGS projects in the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures developed to ensure proper sample collection, processing, and analysis. The quality-assurance/quality-control data are stored in the laboratory's SAS data-management system, which provides efficient review, compilation, and plotting of quality-assurance/quality-control data. This report presents and discusses samples analyzed from July 1993 through June 1995. Quality-control results for 18 analytical procedures were evaluated for bias and precision. Control charts show that data from seven of the analytical procedures were biased throughout the analysis period for either high-concentration or low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, dissolved inorganic carbon, dissolved organic carbon (soil expulsions), chloride, magnesium, nitrate (colorimetric method), and pH. Three of the analytical procedures were occasionally biased but were within control limits; they were: calcium (high for high-concentration samples for May 1995), dissolved organic carbon (high for high-concentration samples from January through September 1994), and fluoride (high in samples for April and June 1994). No quality-control sample has been developed for the organic monomeric aluminum procedure. Results from the filter-blank and analytical-blank analyses indicate that all analytical procedures in which blanks were run were within control limits, although values for a few blanks were outside the control limits. Blanks were not analyzed for acid-neutralizing capacity, dissolved inorganic carbon, fluoride, nitrate (colorimetric method), or pH. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in 14 of the 18 procedures. Data-quality objectives were met by more than 90 percent of the samples analyzed in all procedures except total monomeric aluminum (85 percent of samples met objectives), total aluminum (70 percent of samples met objectives), and dissolved organic carbon (85 percent of samples met objectives). Triplicate samples were not analyzed for ammonium, fluoride, dissolved inorganic carbon, or nitrate (colorimetric method). Results of the USGS interlaboratory Standard Reference Sample Program indicated high data quality with a median result of 3.6 of a possible 4.0. Environment Canada's LRTAP interlaboratory study results indicated that more than 85 percent of the samples met data-quality objectives in 6 of the 12 analyses; exceptions were calcium, dissolved organic carbon, chloride, pH, potassium, and sodium. Data-quality objectives were not met for calcium samples in one LRTAP study, but 94 percent of samples analyzed were within control limits for the remaining studies. Data-quality objectives were not met by 35 percent of samples analyzed for dissolved organic carbon, but 94 percent of sample values were within 20 percent of the most probable value. Data-quality objectives were not met for 30 percent of samples analyzed for chloride, but 90 percent of sample values were within 20 percent of the most probable value.
Measurements of samples with a pH above 6.0 were biased high in 54 percent of the samples, although 85 percent of the samples met data-quality objectives for pH measurements below 6.0. Data-quality objectives for potassium and sodium were not met in one study (only 33 percent of the samples analyzed met the objectives), although 85 percent of the sample values were within control limits for the other studies. Measured sodium values were above the upper control limit in all studies. Results from blind reference-sample analyses indicated that data
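The triplicate-based precision assessment described above reduces to a coefficient-of-variation calculation per analyte. A minimal sketch, with hypothetical triplicate values and an assumed 10% data-quality objective (the report's actual objectives vary by procedure):

```python
import numpy as np

def cv_percent(replicates):
    """Coefficient of variation (%) for a set of replicate measurements."""
    reps = np.asarray(replicates, dtype=float)
    return 100.0 * reps.std(ddof=1) / reps.mean()

# Hypothetical triplicate results for dissolved organic carbon (mg/L).
triplicate = [2.10, 2.18, 2.05]
cv = cv_percent(triplicate)
print(f"CV = {cv:.1f}%  ->  {'meets' if cv <= 10 else 'fails'} a 10% objective")
```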
Kobayashi, Yuto; Kamishima, Tamotsu; Sugimori, Hiroyuki; Ichikawa, Shota; Noguchi, Atsushi; Kono, Michihito; Iiyama, Toshitake; Sutherland, Kenneth; Atsumi, Tatsuya
2018-03-01
Synovitis, which is a hallmark of rheumatoid arthritis (RA), needs to be precisely quantified to determine the treatment plan. Time-intensity curve (TIC) shape analysis is an objective assessment method for characterizing the pixels as artery, inflamed synovium, or other tissues using dynamic contrast-enhanced MRI (DCE-MRI). To assess the feasibility of our original arterial mask subtraction method (AMSM) with mutual information (MI) for quantification of synovitis in RA. Prospective study. Ten RA patients (nine women and one man; mean age, 56.8 years; range, 38-67 years). 3T/DCE-MRI. After optimization of TIC shape analysis to the hand region, a combination of TIC shape analysis and AMSM was applied to synovial quantification. The MI between pre- and postcontrast images was utilized to determine the arterial mask phase objectively, which was compared with human subjective selection. The volume of objectively measured synovitis by software was compared with that of manual outlining by an experienced radiologist. Simple TIC shape analysis and TIC shape analysis combined with AMSM were compared in slices without synovitis according to subjective evaluation. Pearson's correlation coefficient, paired t-test and intraclass correlation coefficient (ICC). TIC shape analysis was successfully optimized in the hand region with a correlation coefficient of 0.725 (P < 0.01) with the results of manual assessment regarded as ground truth. Objective selection utilizing MI had substantial agreement (ICC = 0.734) with subjective selection. Correlation of synovial volumetry in combination with TIC shape analysis and AMSM with manual assessment was excellent (r = 0.922, P < 0.01). In addition, negative predictive ability in slices without synovitis pixels was significantly increased (P < 0.01). The combination of TIC shape analysis and image subtraction reinforced with MI can accurately quantify synovitis of RA in the hand by eliminating arterial pixels. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
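The mutual-information criterion used above to select the arterial mask phase can be computed from the joint histogram of the pre- and postcontrast images. A minimal sketch, not the authors' implementation, assuming two equally sized grayscale arrays and an arbitrary bin count:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """MI between two equally sized images from their joint histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = hist / hist.sum()                    # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)       # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)       # marginal of image B
    nz = p_ab > 0                               # avoid log(0)
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())
```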
Langemo, Diane; Spahn, James; Spahn, Thomas; Pinnamaneni, V. Chowdry
2015-01-01
ABSTRACT The study objective was to examine precision in wound measurement using the Scout (WoundVision, LLC, Indianapolis, Indiana), a recently Food and Drug Administration-approved device, to measure wound length (L) and width (W). Wound perimeter and a ruler measurement of L and W were also made. Images of 40 actual patient wounds were measured using the Scout device. All 3 techniques (length, width, perimeter) demonstrated acceptable within- and between-reader precision; however, the best precision was in wound perimeter measurement. PMID:25679463
Verification and validation of a Work Domain Analysis with turing machine task analysis.
Rechard, J; Bignon, A; Berruet, P; Morineau, T
2015-03-01
While the use of Work Domain Analysis as a methodological framework in cognitive engineering is increasing rapidly, verification and validation of work domain models produced by this method are becoming a significant issue. In this article, we propose the use of a method based on Turing machine formalism, named "Turing Machine Task Analysis", to verify and validate work domain models. The application of this method to two work domain analyses, one of car driving, which is an "intentional" domain, and the other of a ship water system, which is a "causal" domain, showed the possibility of highlighting improvements needed by these models. More precisely, the step-by-step analysis of a degraded task scenario in each work domain model pointed out unsatisfactory aspects of the first modelling, such as overspecification, underspecification, omission of work domain affordances, or unsuitable inclusion of objects in the work domain model. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Calibrating excitation light fluxes for quantitative light microscopy in cell biology
Grünwald, David; Shenoy, Shailesh M; Burke, Sean; Singer, Robert H
2011-01-01
Power output of light bulbs changes over time and the total energy delivered will depend on the optical beam path of the microscope, filter sets and objectives used, thus making comparison between experiments performed on different microscopes complicated. Using a thermocoupled power meter, it is possible to measure the exact amount of light applied to a specimen in fluorescence microscopy, regardless of the light source, as the light power measured can be translated into a power density at the sample. This widely used and simple tool forms the basis of a new degree of calibration precision and comparability of results among experiments and setups. Here we describe an easy-to-follow protocol that allows researchers to precisely estimate excitation intensities in the object plane, using commercially available opto-mechanical components. The total duration of this protocol for one objective and six filter cubes is 75 min including start-up time for the lamp. PMID:18974739
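The core of the protocol described above is converting a power-meter reading into a power density in the object plane. A minimal sketch under assumed values for the measured power and the illuminated field diameter (both hypothetical, not taken from the protocol):

```python
import numpy as np

power_w = 2.5e-3          # 2.5 mW read from the thermocoupled power meter (assumed)
field_diameter_m = 20e-6  # illuminated field diameter in the object plane (assumed)

area_m2 = np.pi * (field_diameter_m / 2) ** 2
power_density = power_w / area_m2            # W/m^2
print(f"{power_density / 1e4:.3g} W/cm^2")   # 1 W/m^2 = 1e-4 W/cm^2
```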
CCD Photometry of bright stars using objective wire mesh
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamiński, Krzysztof; Zgórz, Marika; Schwarzenberg-Czerny, Aleksander, E-mail: chrisk@amu.edu.pl
2014-06-01
Obtaining accurate photometry of bright stars from the ground remains problematic due to the danger of overexposing the target and/or the lack of suitable nearby comparison stars. The century-old method of using objective wire mesh to produce multiple stellar images seems promising for the precise CCD photometry of such stars. Furthermore, our tests on β Cep and its comparison star, differing by 5 mag, are very encouraging. Using a CCD camera and a 20 cm telescope with the objective covered by a plastic wire mesh, in poor weather conditions, we obtained differential photometry with a precision of 4.5 mmag per two-minute exposure. Our technique is flexible and may be tuned to cover a range as big as 6-8 mag. We discuss the possibility of installing a wire mesh directly in the filter wheel.
Moving Object Detection Using Scanning Camera on a High-Precision Intelligent Holder
Chen, Shuoyang; Xu, Tingfa; Li, Daqun; Zhang, Jizhou; Jiang, Shenwang
2016-01-01
During the process of moving object detection in an intelligent visual surveillance system, scenarios with complex backgrounds are sure to appear. Traditional methods, such as "frame difference" and "optical flow", may not be able to deal with the problem very well. In such scenarios, we use a modified algorithm to do the background modeling work. In this paper, we use edge detection to obtain an edge difference image in order to enhance robustness to illumination variation. A "multi-block temporal-analyzing LBP (Local Binary Pattern)" algorithm is then used to perform the segmentation, and finally connected-component analysis is used to locate the object. We also produce a hardware platform, the core of which consists of the DSP (Digital Signal Processor) and FPGA (Field Programmable Gate Array) platforms and the high-precision intelligent holder. PMID:27775671
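The edge-difference step that makes the method robust to illumination change can be illustrated with a simple gradient-based stand-in for the paper's edge detector; the actual pipeline (multi-block temporal-analyzing LBP) is more involved. A minimal sketch with an assumed threshold:

```python
import numpy as np

def edge_map(gray):
    """Crude gradient-magnitude edge map (stand-in for a real edge detector)."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def edge_difference(frame_t, frame_t1, thresh=10.0):
    """Foreground mask from the difference of edge maps of consecutive frames.

    Differencing edges rather than raw intensities reduces sensitivity to
    illumination changes, following the motivation given in the abstract.
    """
    diff = np.abs(edge_map(frame_t1) - edge_map(frame_t))
    return diff > thresh
```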
Search Filter Precision Can Be Improved By NOTing Out Irrelevant Content
Wilczynski, Nancy L.; McKibbon, K. Ann; Haynes, R. Brian
2011-01-01
Background: Most methodologic search filters developed for use in large electronic databases such as MEDLINE have low precision. One method that has been proposed but not tested for improving precision is NOTing out irrelevant content. Objective: To determine if search filter precision can be improved by NOTing out the text words and index terms assigned to those articles that are retrieved but are off-target. Design: Analytic survey. Methods: NOTing out unique terms in off-target articles and testing search filter performance in the Clinical Hedges Database. Main Outcome Measures: Sensitivity, specificity, precision and number needed to read (NNR). Results: For all purpose categories (diagnosis, prognosis and etiology) except treatment and for all databases (MEDLINE, EMBASE, CINAHL and PsycINFO), constructing search filters that NOTed out irrelevant content resulted in substantive improvements in NNR (over four-fold for some purpose categories and databases). Conclusion: Search filter precision can be improved by NOTing out irrelevant content. PMID:22195215
NASA Technical Reports Server (NTRS)
Lemoine, Frank G.; Zelensky, Nikita P.; Chinn, Douglas S.; Beckley, Brian D.; Lillibridge, John L.
2006-01-01
The US Navy's GEOSAT Follow-On spacecraft (GFO) primary mission objective is to map the oceans using a radar altimeter. Satellite laser ranging data, especially in combination with altimeter crossover data, offer the only means of determining high-quality precise orbits. Two tuned gravity models, PGS7727 and PGS7777b, were created at NASA GSFC for GFO that reduce the predicted radial orbit error through degree 70 to 13.7 and 10.0 mm, respectively. A macromodel was developed to model the nonconservative forces, and the SLR spacecraft measurement offset was adjusted to remove a mean bias. Using these improved models, satellite ranging data, altimeter crossover data, and Doppler data are used to compute daily medium-precision orbits with a latency of less than 24 hours. Final precise orbits are also computed using these tracking data and exported with a latency of three to four weeks to NOAA for use on the GFO Geophysical Data Records (GDRs). The estimated orbit precision of the daily orbits is between 10 and 20 cm, whereas the precise orbits have a precision of 5 cm.
Three-dimensional reconstruction of indoor whole elements based on mobile LiDAR point cloud data
NASA Astrophysics Data System (ADS)
Gong, Yuejian; Mao, Wenbo; Bi, Jiantao; Ji, Wei; He, Zhanjun
2014-11-01
Ground-based LiDAR is one of the most effective city-modeling tools at present and has been widely used for three-dimensional reconstruction of outdoor objects. For indoor objects, however, there are technical bottlenecks due to the lack of a GPS signal. In this paper, based on high-precision indoor point cloud data obtained by an internationally advanced indoor mobile LiDAR measuring system, high-precision models were built for all indoor ancillary facilities. The point cloud data we employed also contain color features extracted by fusion with CCD images; the data thus carry both spatial geometric features and spectral information, which can be used to construct object surfaces and restore the color and texture of the geometric model. Based on the Autodesk CAD platform with the help of the PointSense plug-in, three-dimensional reconstruction of all indoor elements was realized. Specifically, Pointools Edit Pro was adopted to edit the point cloud, and different types of indoor point cloud data were processed, including data format conversion, outline extraction, and texture mapping of the point cloud model. Finally, three-dimensional visualization of the real-world interior was completed. Experimental results showed that high-precision 3D point cloud data obtained by indoor mobile measuring equipment can be used for 3D reconstruction of all indoor elements, that the methods proposed in this paper can efficiently realize this reconstruction, and that the modeling precision can be controlled within 5 cm, which is a satisfactory result.
Egbewale, Bolaji E; Lewis, Martyn; Sim, Julius
2014-04-09
Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. 126 hypothetical trial scenarios were evaluated (126,000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when pretest-posttest correlation ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. Apparently greater power of ANOVA and CSA at certain imbalances is achieved in respect of a biased treatment effect. Across a range of correlations between pre- and post-treatment scores and at varying levels and direction of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power.
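The qualitative findings are easy to reproduce on a single simulated trial: ANOVA ignores baseline imbalance, CSA overcorrects for it, and ANCOVA adjusts for it through the pretest covariate. A minimal sketch, not the authors' simulation code, with assumed effect size, pretest-posttest correlation, and imbalance:

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_effect, rho = 100, 0.5, 0.6

# Simulate one trial with baseline imbalance favouring the treatment group.
group = np.r_[np.zeros(n), np.ones(n)]
pre = rng.normal(0, 1, 2 * n) + 0.3 * group   # +0.3 imbalance at baseline
post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(0, 1, 2 * n) + true_effect * group

anova = post[group == 1].mean() - post[group == 0].mean()
csa = (post - pre)[group == 1].mean() - (post - pre)[group == 0].mean()

# ANCOVA: regress post on intercept, group and pre; the group coefficient
# is the covariate-adjusted treatment effect.
X = np.column_stack([np.ones(2 * n), group, pre])
ancova = np.linalg.lstsq(X, post, rcond=None)[0][1]

print(f"ANOVA {anova:.2f}  CSA {csa:.2f}  ANCOVA {ancova:.2f}")
```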
Superconducting gravity gradiometer mission. Volume 1: Study team executive summary
NASA Technical Reports Server (NTRS)
Morgan, Samuel H. (Editor); Paik, Ho Jung (Editor)
1989-01-01
An executive summary is presented based upon the scientific and engineering studies and developments performed or directed by a Study Team composed of various Federal and University activities involved with the development of a three-axis Superconducting Gravity Gradiometer integrated with a six-axis superconducting accelerometer. This instrument is being developed for a future orbital mission to make precise global gravity measurements. The scientific justification and requirements for such a mission are discussed. This includes geophysics, the primary mission objective, as well as secondary objectives, such as navigation and tests of fundamental laws of physics, i.e., a null test of the inverse square law of gravitation and tests of general relativity. The instrument design and status along with mission analysis, engineering assessments, and preliminary spacecraft concepts are discussed. In addition, critical spacecraft systems and required technology advancements are examined. The mission requirements and an engineering assessment of a precursor flight test of the instrument are discussed.
Fringe image processing based on structured light series
NASA Astrophysics Data System (ADS)
Gai, Shaoyan; Da, Feipeng; Li, Hongyan
2009-11-01
The code analysis of the fringe image plays a vital role in the data acquisition of structured light systems, affecting the precision, computational speed, and reliability of the measurement process. Exploiting the self-normalizing characteristic, a fringe image processing method based on structured light is proposed in which a series of projected patterns is used to detect the fringe order of the image pixels. The structured light system geometry is presented; it consists of a white-light projector and a digital camera, the former projecting sinusoidal fringe patterns upon the object and the latter acquiring the fringe patterns deformed by the object's shape. Binary images with distinct white and black stripes can then be obtained, and the ability to resist image noise is improved greatly. The proposed method can be implemented easily and applied to profile measurement based on a special binary code in a wide field.
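Detecting fringe order from a pattern series is commonly done by binarizing each projected pattern and decoding the per-pixel bit sequence. A Gray-code decoding is sketched here as one plausible instance of the "special binary code" the abstract mentions, not necessarily the authors' code:

```python
import numpy as np

def decode_gray_stack(images, threshold):
    """Fringe order per pixel from a stack of binarized Gray-code patterns.

    images: list of 2D arrays, most significant pattern first.
    """
    bits = [(img > threshold).astype(np.uint32) for img in images]
    # Gray -> binary: b0 = g0, b_i = b_{i-1} XOR g_i
    binary = bits[0]
    order = bits[0].copy()
    for g in bits[1:]:
        binary = binary ^ g            # next binary bit
        order = (order << 1) | binary  # accumulate the fringe order
    return order
```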
Public and private sector interactions: an economic perspective.
Maynard, A
1986-01-01
The debate about the public-private mix for health care has been dominated by rhetoric and the failure to evaluate the characteristics of the outcomes of public and private health care systems and to relate these to policy targets. After a brief analysis of the competing liberal (conservative) and collectivist (socialist) objectives, the nature of the private health care sector in Britain is described, and it is shown that growth has faltered due to cost-containment problems. This outcome is the product of characteristics of the private health care system that are paralleled precisely in the NHS: information asymmetry, monopoly power, moral hazard, and third-party payment. The final section discusses briefly some remedies for the inefficient and inequitable outcomes which are seen in all health care markets, and it is argued that competition within public and private health care systems may enable each system type to achieve its own particular objectives more efficiently.
Superconducting gravity gradiometer mission. Volume 2: Study team technical report
NASA Technical Reports Server (NTRS)
Morgan, Samuel H. (Editor); Paik, Ho Jung (Editor)
1988-01-01
Scientific and engineering studies and developments performed or directed by a Study Team composed of various Federal and University activities involved with the development of a three-axis superconducting gravity gradiometer integrated with a six-axis superconducting accelerometer are examined. This instrument is being developed for a future orbital mission to make precise global gravity measurements. The scientific justification and requirements for such a mission are discussed. This includes geophysics, the primary mission objective, as well as secondary objectives, such as navigation and tests of fundamental laws of physics, i.e., a null test of the inverse square law of gravitation and tests of general relativity. The instrument design and status along with mission analysis, engineering assessments, and preliminary spacecraft concepts are discussed. In addition, critical spacecraft systems and required technology advancements are examined. The mission requirements and an engineering assessment of a precursor flight test of the instrument are discussed.
NASA Technical Reports Server (NTRS)
Goldstein, J. I.; Williams, D. B.
1992-01-01
This paper reviews and discusses future directions in analytical electron microscopy for microchemical analysis using X-ray and Electron Energy Loss Spectroscopy (EELS). The technique of X-ray microanalysis, using the ratio method and k_AB factors, is outlined. The X-ray absorption correction is the major barrier to the objective of obtaining 1% accuracy and precision in analysis. Spatial resolution and Minimum Detectability Limits (MDL) are considered with present limitations of spatial resolution in the 2 to 3 microns range and of MDL in the 0.1 to 0.2 wt. % range when a Field Emission Gun (FEG) system is used. Future directions of X-ray analysis include improvement in X-ray spatial resolution to the 1 to 2 microns range and MDL as low as 0.01 wt. %. With these improvements the detection of single atoms in the analysis volume will be possible. Other future improvements include the use of clean room techniques for thin specimen preparation, quantification available at the 1% accuracy and precision level with light element analysis quantification available at better than the 10% accuracy and precision level, the incorporation of a compact wavelength dispersive spectrometer to improve X-ray spectral resolution, light element analysis and MDL, and instrument improvements including source stability, on-line probe current measurements, stage stability, and computerized stage control. The paper reviews the EELS technique, recognizing that it has been slow to develop and still remains firmly in research laboratories rather than in applications laboratories. Consideration of microanalysis with core-loss edges is given along with a discussion of the limitations such as specimen thickness. Spatial resolution and MDL are considered, recognizing that single atom detection is already possible. Plasmon loss analysis is discussed as well as fine structure analysis. New techniques for energy-loss imaging are also summarized. Future directions in the EELS technique will be the development of new spectrometers and improvements in thin specimen preparation. The microanalysis technique needs to be simplified and software developed so that the EELS technique approaches the relative simplicity of the X-ray technique. Finally, one can expect major improvements in EELS imaging as data storage and processing improvements occur.
THE PRISM MULTI-OBJECT SURVEY (PRIMUS). I. SURVEY OVERVIEW AND CHARACTERISTICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coil, Alison L.; Moustakas, John; Aird, James
2011-11-01
We present the PRIsm MUlti-object Survey (PRIMUS), a spectroscopic faint galaxy redshift survey to z ~ 1. PRIMUS uses a low-dispersion prism and slitmasks to observe ~2500 objects at once in a 0.18 deg² field of view, using the Inamori Magellan Areal Camera and Spectrograph camera on the Magellan I Baade 6.5 m telescope at Las Campanas Observatory. PRIMUS covers a total of 9.1 deg² of sky to a depth of i_AB ~ 23.5 in seven different deep, multi-wavelength fields that have coverage from the Galaxy Evolution Explorer, Spitzer, and either XMM or Chandra, as well as multiple-band optical and near-IR coverage. PRIMUS includes ~130,000 robust redshifts of unique objects with a redshift precision of σ_z/(1 + z) ~ 0.005. The redshift distribution peaks at z ~ 0.6 and extends to z = 1.2 for galaxies and z = 5 for broad-line active galactic nuclei. The motivation, observational techniques, fields, target selection, slitmask design, and observations are presented here, with a brief summary of the redshift precision; a forthcoming paper presents the data reduction, redshift fitting, redshift confidence, and survey completeness. PRIMUS is the largest faint galaxy survey undertaken to date. The high targeting fraction (~80%) and large survey size will allow for precise measures of galaxy properties and large-scale structure to z ~ 1.
Detection and laser ranging of orbital objects using optical methods
NASA Astrophysics Data System (ADS)
Wagner, P.; Hampf, D.; Sproll, F.; Hasenohr, T.; Humbert, L.; Rodmann, J.; Riede, W.
2016-09-01
Laser ranging to satellites (SLR) in Earth orbit is an established technology used for geodesy, fundamental science, and precise orbit determination. A combined active and passive optical measurement system using a single telescope mount is presented which performs precise ranging measurements of retroreflector-equipped objects in low Earth orbit (LEO). The German Aerospace Center (DLR) runs an observatory in Stuttgart where a system has been assembled completely from commercial off-the-shelf (COTS) components. The visible light directed to the tracking camera is used to perform angular measurements of objects under investigation. This is done astrometrically by comparing the apparent target position with cataloged star positions. First successful satellite laser ranging was demonstrated recently using an optical fiber directing laser pulses onto the astronomical mount. The transmitter operates at a wavelength of 1064 nm with a repetition rate of 3 kHz and pulse energy of 25 μJ. A motorized tip/tilt mount allows beam steering of the collimated beam with μrad accuracy. The returning photons reflected from the object in space are captured with the tracking telescope. A special low-aberration beam splitter unit was designed to separate the infrared from the visible light. This allows passive optical closed-loop tracking and operation of a single-photon detector for time-of-flight measurements at a single telescope simultaneously. The presented innovative design yields a compact and cost-effective yet very precise ranging system which allows orbit determination.
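The single-photon time-of-flight measurement converts directly to range. A minimal sketch; the 5.3 ms round-trip time is an illustrative value for a LEO pass, not a reported measurement:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(round_trip_s):
    """One-way range from a measured photon round-trip time."""
    return C * round_trip_s / 2.0

# A LEO target at roughly 800 km slant range returns photons after ~5.3 ms.
print(f"{range_from_tof(5.336e-3) / 1e3:.1f} km")
```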
Garrido, P; Aldaz, A; Vera, R; Calleja, M A; de Álava, E; Martín, M; Matías-Guiu, X; Palacios, J
2018-04-01
Precision medicine is an emerging approach for disease treatment and prevention that takes into account individual variability in genes, environment, and lifestyle for each person. Precision medicine is transforming clinical and biomedical research, as well as health care itself from a conceptual, as well as a methodological viewpoint, providing extraordinary opportunities to improve public health and lower the costs of the healthcare system. However, the implementation of precision medicine poses ethical-legal, regulatory, organizational, and knowledge-related challenges. Without a national strategy, precision medicine, which will be implemented one way or another, could take place without the appropriate planning that can guarantee technical quality, equal access of all citizens to the best practices, violating the rights of patients and professionals, and jeopardizing the solvency of the healthcare system. With this paper from the Spanish Societies of Medical Oncology, Pathology, and Hospital Pharmacy, we highlight the need to institute a consensual national strategy for the development of precision medicine in our country, review the national and international context, comment on the opportunities and challenges for implementing precision medicine, and outline the objectives of a national strategy on precision medicine in cancer.
Özdemir, Vural; Kolker, Eugene
2016-02-01
Nutrition is central to sustenance of good health, not to mention its role as a cultural object that brings together or draws lines among societies. Undoubtedly, understanding the future paths of nutrition science in the current era of Big Data remains firmly on science, technology, and innovation strategy agendas around the world. Nutrigenomics, the confluence of nutrition science with genomics, brought about a new focus on and legitimacy for "variability science" (i.e., the study of mechanisms of person-to-person and population differences in response to food, and the ways in which food variably impacts the host, for example, nutrient-related disease outcomes). Societal expectations, both public and private, and claims over genomics-guided and individually-tailored precision diets continue to proliferate. While the prospects of nutrition science, and nutrigenomics in particular, are established, there is a need to integrate the efforts in four Big Data domains that are naturally allied--agrigenomics, nutrigenomics, nutriproteomics, and nutrimetabolomics--that address complementary variability questions pertaining to individual differences in response to food-related environmental exposures. The joint use of these four omics knowledge domains, coined as Precision Nutrition 4.0 here, has sadly not been realized to date, but the potentials for such integrated knowledge innovation are enormous. Future personalized nutrition practices would benefit from a seamless planning of life sciences funding, research, and practice agendas from "farm to clinic to supermarket to society," and from "genome to proteome to metabolome." Hence, this innovation foresight analysis explains the already existing potentials waiting to be realized, and suggests ways forward for innovation in both technology and ethics foresight frames on precision nutrition. We propose the creation of a new Precision Nutrition Evidence Barometer for periodic, independent, and ongoing retrieval, screening, and aggregation of the relevant life sciences data. For innovation in Big Data ethics oversight, we suggest "nested governance" wherein the processes of knowledge production are made transparent in the continuum from life sciences and social sciences to humanities, and where each innovation actor reports to another accountability and transparency layer: scientists to ethicists, and ethicists to scholars in the emerging field of ethics-of-ethics. Such nested innovation ecosystems offer safety against innovation blind spots, calibrate visible/invisible power differences in the cultures of science or ethics, and ultimately reduce the risk of a gap between "paper values"--what people say--and "real values"--what innovation actors actually do. We are optimistic that the convergence of nutrigenomics with nutriproteomics, nutrimetabolomics, and agrigenomics can build a robust, sustainable, and trustworthy precision nutrition 4.0 agenda, as articulated in this Big Data and ethics foresight analysis.
Tykot, Robert H
2016-01-01
Elemental analysis is a fundamental method for analyzing archaeological materials, whether to address their overall composition or to identify the source of their geological components, yet access to instrumentation, the often destructive nature of the methods, and the time and cost of analyses have limited the number and/or size of archaeological artifacts tested. The development of portable X-ray fluorescence (pXRF) instruments over the past decade, however, has allowed nondestructive analyses to be conducted in museums around the world, on virtually any size artifact, producing data for up to several hundred samples per day. Major issues have been raised, however, about the sensitivity, precision, and accuracy of these devices, and the limitation of performing surface analysis on potentially heterogeneous objects. The advantages and limitations of pXRF are discussed here regarding archaeological studies of obsidian, ceramics, metals, bone, and painted materials. © The Author(s) 2015.
A performance and failure analysis of SAPHIRE with a MEDLINE test collection.
Hersh, W R; Hickam, D H; Haynes, R B; McKibbon, K A
1994-01-01
OBJECTIVE: Assess the performance of the SAPHIRE automated information retrieval system. DESIGN: Comparative study of automated and human searching of a MEDLINE test collection. MEASUREMENTS: Recall and precision of SAPHIRE were compared with those attributes of novice physicians, expert physicians, and librarians for a test collection of 75 queries and 2,334 citations. Failure analysis assessed the efficacy of the Metathesaurus as a concept vocabulary; the reasons for retrieval of nonrelevant articles and nonretrieval of relevant articles; and the effect of changing the weighting formula for relevance ranking of retrieved articles. RESULTS: Recall and precision of SAPHIRE were comparable to those of both physician groups, but less than those of librarians. CONCLUSION: The current version of the Metathesaurus, as utilized by SAPHIRE, was unable to represent the conceptual content of one-fourth of physician-generated MEDLINE queries. The most likely cause for retrieval of nonrelevant articles was the presence of some or all of the search terms in the article, with frequencies high enough to lead to retrieval. The most likely cause for nonretrieval of relevant articles was the absence of the actual terms from the query, with synonyms or hierarchically related terms present instead. There were significant variations in performance when SAPHIRE's concept-weighting formulas were modified. PMID:7719787
Khoo, Benjamin C C; Beck, Thomas J; Qiao, Qi-Hong; Parakh, Pallav; Semanick, Lisa; Prince, Richard L; Singer, Kevin P; Price, Roger I
2005-07-01
Hip structural analysis (HSA) is a technique for extracting strength-related structural dimensions of bone cross-sections from two-dimensional hip scan images acquired by dual energy X-ray absorptiometry (DXA) scanners. Heretofore the precision of the method has not been thoroughly tested in the clinical setting. Using paired scans from two large clinical trials involving a range of different DXA machines, this study reports the first precision analysis of HSA variables, in comparison with that of conventional bone mineral density (BMD) on the same scans. A key HSA variable, section modulus (Z), biomechanically indicative of bone strength during bending, had a short-term precision percentage coefficient of variation (CV%) in the femoral neck of 3.4-10.1%, depending on the manufacturer or model of the DXA equipment. Cross-sectional area (CSA), a determinant of bone strength during axial loading and closely aligned with conventional DXA bone mineral content, had a range of CV% from 2.8% to 7.9%. Poorer precision was associated with inadequate inclusion of the femoral shaft or femoral head in the DXA-scanned hip region. Precision of HSA-derived BMD varied between 2.4% and 6.4%. Precision of DXA manufacturer-derived BMD varied between 1.9% and 3.4%, arising from the larger analysis region of interest (ROI). The precision of HSA variables was not generally dependent on magnitude, subject height, weight, or conventional femoral neck densitometric variables. The generally poorer precision of key HSA variables in comparison with conventional DXA-derived BMD highlights the critical roles played by correct limb repositioning and choice of an adequate and appropriately positioned ROI.
The role of transparency in da Vinci stereopsis.
Zannoli, Marina; Mamassian, Pascal
2011-10-15
The majority of natural scenes contains zones that are visible to one eye only. Past studies have shown that these monocular regions can be seen at a precise depth even though there are no binocular disparities that uniquely constrain their locations in depth. In the so-called da Vinci stereopsis configuration, the monocular region is a vertical line placed next to a binocular rectangular occluder. The opacity of the occluder has been mentioned to be a necessary condition to obtain da Vinci stereopsis. However, this opacity constraint has never been empirically tested. In the present study, we tested whether da Vinci stereopsis and perceptual transparency can interact using a classical da Vinci configuration in which the opacity of the occluder varied. We used two different monocular objects: a line and a disk. We found no effect of the opacity of the occluder on the perceived depth of the monocular object. A careful analysis of the distribution of perceived depth revealed that the monocular object was perceived at a depth that increased with the distance between the object and the occluder. The analysis of the skewness of the distributions was not consistent with a double fusion explanation, favoring an implication of occlusion geometry in da Vinci stereopsis. A simple model that includes the geometry of the scene could account for the results. In summary, the mechanism responsible to locate monocular regions in depth is not sensitive to the material properties of objects, suggesting that da Vinci stereopsis is solved at relatively early stages of disparity processing. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Al-Durgham, Kaleel; Lichti, Derek D.; Kuntze, Gregor; Ronsky, Janet
2017-06-01
High-speed biplanar videoradiography, clinically referred to as dual fluoroscopy (DF), imaging systems are being used increasingly for skeletal kinematics analysis. Typically, a DF system comprises two X-ray sources, two image intensifiers, and two high-speed video cameras. The combination of these elements provides time-series image pairs of the articulating bones of a joint, which permits the measurement of bony rotation and translation in 3D at high temporal resolution (e.g., 120-250 Hz). Assessment of the accuracy of 3D measurements derived from DF imaging has been the subject of recent research efforts by several groups, however with methodological limitations. This paper presents a novel and simple accuracy assessment procedure based on precise photogrammetric tools. We address the fundamental photogrammetric principles for the accuracy evaluation of an imaging system. Bundle adjustment with self-calibration is used for the estimation of the system parameters; the calibration uses an appropriate sensor model and applies free-network constraints and relative orientation stability constraints for a precise estimation of the system parameters. A photogrammetric intersection of time-series image pairs is used for the 3D reconstruction of a rotating planar object. A point-based registration method is used to combine the 3D coordinates from the intersection with independently surveyed coordinates. The final DF accuracy measure is reported as the distance between the 3D coordinates from image intersection and the independently surveyed coordinates. The accuracy assessment procedure is designed to evaluate the accuracy over the full DF image format and a wide range of object rotation. An experiment on the reconstruction of a rotating planar object yielded an average positional error of 0.44 +/- 0.2 mm in the derived 3D coordinates (minimum 0.05 mm, maximum 1.2 mm).
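The point-based registration step that aligns the intersected coordinates with the surveyed coordinates can be realized with a least-squares rigid-body fit (the Kabsch/Horn solution); the authors' exact method is not specified, so this is a sketch of one standard choice:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

def rms_error(src, dst):
    """RMS residual distance after the best rigid alignment."""
    R, t = rigid_register(src, dst)
    residual = dst - (src @ R.T + t)
    return np.sqrt((residual ** 2).sum(axis=1).mean())
```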
NASA Astrophysics Data System (ADS)
Jing, Chao; Liu, Zhongling; Zhou, Ge; Zhang, Yimo
2011-11-01
A nanometer-level precision phase-shift system is designed to realize phase-shift interferometry in electronic speckle-shearography pattern interferometry. A PZT is used as the driving component of the phase-shift system, and a flexure-hinge translation component is developed to realize friction-free, clearance-free micro-displacement. A closed-loop control system is designed for high-precision micro-displacement, in which an embedded digital control system executes the control algorithm and a capacitive sensor serves as the feedback element, measuring the micro-displacement in real time. The dynamic model and control model of the nanometer-level precision phase-shift system are analyzed, and on this basis high-precision micro-displacement is realized with a digital PID control algorithm. Experiments show that the positioning precision of the phase-shift system is less than 2 nm for step displacement signals and less than 5 nm for continuous displacement signals, which satisfies the requirements of electronic speckle shearography and phase-shift pattern interferometry. Fringe images from four-step phase-shift interferometry and the final phase-distribution image correlated with object deformation are presented in this paper to demonstrate the validity of the nanometer-level precision phase-shift system.
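The digital PID algorithm at the heart of the closed loop is standard; a minimal discrete-time sketch follows, with gains and units (nanometers) chosen purely for illustration:

```python
class PID:
    """Minimal discrete PID position controller (illustrative gains only)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint_nm, measured_nm):
        """One control update; returns the PZT drive correction."""
        error = setpoint_nm - measured_nm       # from the capacitive sensor
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```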
Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly
2016-01-01
Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, emergent behavior manifested by complex, adapting, and nonlinear organizations responsible for monitoring the success of ecorestoration projects tend to unconsciously minimize disorder, QA/QC being an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although the concepts for assessing precision and accuracy of ecorestoration field data are conceptually the same as laboratory data, the manner in which these data quality attributes are assessed is different. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.
Testing the effectiveness of simplified search strategies for updating systematic reviews.
Rice, Maureen; Ali, Muhammad Usman; Fitzpatrick-Lewis, Donna; Kenny, Meghan; Raina, Parminder; Sherifali, Diana
2017-08-01
The objective of the study was to test the overall effectiveness of a simplified search strategy (SSS) for updating systematic reviews. We identified nine systematic reviews undertaken by our research group for which both comprehensive and SSS updates were performed. Three relevant performance measures were estimated, that is, sensitivity, precision, and number needed to read (NNR). The update reference searches for all nine included systematic reviews identified a total of 55,099 citations that were screened resulting in final inclusion of 163 randomized controlled trials. As compared with reference search, the SSS resulted in 8,239 hits and had a median sensitivity of 83.3%, while precision and NNR were 4.5 times better. During analysis, we found that the SSS performed better for clinically focused topics, with a median sensitivity of 100% and precision and NNR 6 times better than for the reference searches. For broader topics, the sensitivity of the SSS was 80% while precision and NNR were 5.4 times better compared with reference search. SSS performed well for clinically focused topics and, with a median sensitivity of 100%, could be a viable alternative to a conventional comprehensive search strategy for updating this type of systematic reviews particularly considering the budget constraints and the volume of new literature being published. For broader topics, 80% sensitivity is likely to be considered too low for a systematic review update in most cases, although it might be acceptable if updating a scoping or rapid review. Copyright © 2017 Elsevier Inc. All rights reserved.
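The three performance measures follow directly from the retrieval counts, with NNR the reciprocal of precision. A minimal sketch using round numbers suggested by, but not taken from, the abstract:

```python
def retrieval_metrics(relevant_found, relevant_total, retrieved_total):
    """Sensitivity, precision and number needed to read for one search."""
    sensitivity = relevant_found / relevant_total
    precision = relevant_found / retrieved_total
    return sensitivity, precision, 1.0 / precision  # NNR = 1 / precision

# Hypothetical update search: 135 of 163 trials found among 8,239 hits.
sens, prec, nnr = retrieval_metrics(135, 163, 8239)
print(f"sensitivity {sens:.1%}, precision {prec:.2%}, NNR {nnr:.0f}")
```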
Performance of Stratified and Subgrouped Disproportionality Analyses in Spontaneous Databases.
Seabroke, Suzie; Candore, Gianmario; Juhlin, Kristina; Quarcoo, Naashika; Wisniewski, Antoni; Arani, Ramin; Painter, Jeffery; Tregunno, Philip; Norén, G Niklas; Slattery, Jim
2016-04-01
Disproportionality analyses are used in many organisations to identify adverse drug reactions (ADRs) from spontaneous report data. Reporting patterns vary over time, with patient demographics, and between different geographical regions, and therefore subgroup analyses or adjustment by stratification may be beneficial. The objective of this study was to evaluate the performance of subgroup and stratified disproportionality analyses for a number of key covariates within spontaneous report databases of differing sizes and characteristics. Using a reference set of established ADRs, signal detection performance (sensitivity and precision) was compared for stratified, subgroup and crude (unadjusted) analyses within five spontaneous report databases (two company, one national and two international databases). Analyses were repeated for a range of covariates: age, sex, country/region of origin, calendar time period, event seriousness, vaccine/non-vaccine, reporter qualification and report source. Subgroup analyses consistently performed better than stratified analyses in all databases. Subgroup analyses also showed benefits in both sensitivity and precision over crude analyses for the larger international databases, whilst for the smaller databases a gain in precision tended to result in some loss of sensitivity. Additionally, stratified analyses did not increase sensitivity or precision beyond that associated with analytical artefacts of the analysis. The most promising subgroup covariates were age and region/country of origin, although this varied between databases. Subgroup analyses perform better than stratified analyses and should be considered over the latter in routine first-pass signal detection. Subgroup analyses are also clearly beneficial over crude analyses for larger databases, but further validation is required for smaller databases.
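A subgroup disproportionality analysis amounts to computing a measure such as the proportional reporting ratio (PRR) within each covariate stratum rather than on the pooled table. A minimal sketch with hypothetical report counts; the study's databases and exact measures differ:

```python
import numpy as np

def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 drug/event table.

    a: reports with drug and event      b: drug, other events
    c: other drugs with event           d: other drugs, other events
    """
    return (a / (a + b)) / (c / (c + d))

# Crude vs. subgroup analysis: PRR within each age stratum compared
# with the PRR of the pooled table (all counts hypothetical).
strata = [(12, 300, 40, 9000), (30, 250, 500, 20000)]
pooled = prr(*np.sum(strata, axis=0))
by_subgroup = [prr(*s) for s in strata]
print(f"pooled PRR {pooled:.2f}, subgroup PRRs {[f'{x:.2f}' for x in by_subgroup]}")
```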
Two-phase strategy of controlling motor coordination determined by task performance optimality.
Shimansky, Yury P; Rand, Miya K
2013-02-01
A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.
NASA Technical Reports Server (NTRS)
Guglielmi, G.; Selby, K.; Blunt, B. A.; Jergas, M.; Newitt, D. C.; Genant, H. K.; Majumdar, S.
1996-01-01
RATIONALE AND OBJECTIVES: Marrow transverse relaxation time (T2*) in magnetic resonance (MR) imaging may be related to the density and structure of the surrounding trabecular network. We investigated regional variations of T2* in the human calcaneus and compared the findings with bone mineral density (BMD), as measured by dual X-ray absorptiometry (DXA). Short- and long-term precisions were evaluated first to determine whether MR imaging would be useful for the clinical assessment of disease status and progression in osteoporosis. METHODS: Gradient-recalled echo MR images of the calcaneus were acquired at 1.5 T from six volunteers. Measurements of T2* were compared with BMD and (for one volunteer) conventional radiography. RESULTS: T2* values showed significant regional variation; they typically were shortest in the superior region of the calcaneus. There was a linear correlation between MR and DXA measurements (r = .66 for 1/T2* versus BMD). Differences in T2* attributable to variations in analysis region-of-interest placement were not significant for five of the six volunteers. Sagittal MR images had short- and long-term precision errors of 4.2% and 3.3%, respectively. For DXA, the precision was 1.3% (coefficient of variation). CONCLUSION: MR imaging may be useful for trabecular bone assessment in the calcaneus. However, given the large regional variations in bone density and structure, the choice of an ROI is likely to play a major role in the accuracy, precision, and overall clinical efficacy of T2* measurements.
Investigation of Space Interferometer Control Using Imaging Sensor Output Feedback
NASA Technical Reports Server (NTRS)
Leitner, Jesse A.; Cheng, Victor H. L.
2003-01-01
Numerous space interferometry missions are planned for the next decade to verify different enabling technologies towards very-long-baseline interferometry to achieve high-resolution imaging and high-precision measurements. These objectives will require coordinated formations of spacecraft separately carrying optical elements comprising the interferometer. High-precision sensing and control of the spacecraft and the interferometer-component payloads are necessary to deliver sub-wavelength accuracy to achieve the scientific objectives. For these missions, the primary scientific product of interferometer measurements may be the only source of data available at the precision required to maintain the spacecraft and interferometer-component formation. A concept is studied for detecting the interferometer's optical configuration errors based on information extracted from the interferometer sensor output. It enables precision control of the optical components, and, in cases of space interferometers requiring formation flight of spacecraft that comprise the elements of a distributed instrument, it enables the control of the formation-flying vehicles because independent navigation or ranging sensors cannot deliver the high-precision metrology over the entire required geometry. Since the concept can act on the quality of the interferometer output directly, it can detect errors outside the capability of traditional metrology instruments, and provide the means needed to augment the traditional instrumentation to enable enhanced performance. Specific analyses performed in this study include the application of signal-processing and image-processing techniques to solve the problems of interferometer aperture baseline control, interferometer pointing, and orientation of multiple interferometer aperture pairs.
Digital holographic microscopy combined with optical tweezers
NASA Astrophysics Data System (ADS)
Cardenas, Nelson; Yu, Lingfeng; Mohanty, Samarendra K.
2011-02-01
While optical tweezers have been widely used for the manipulation and organization of microscopic objects in three dimensions, observing the manipulated objects along the axial direction has been quite challenging. In order to visualize the organization and orientation of objects along the axial direction, we report the development of digital holographic microscopy combined with optical tweezers (DHOT). Digital holography is achieved by use of a modified Mach-Zehnder interferometer with digital recording of the interference pattern of the reference and sample laser beams by a single CCD camera. In this method, quantitative phase information is retrieved dynamically with high temporal resolution, limited only by the frame rate of the CCD. Digital focusing, phase unwrapping, and online analysis and display of the quantitative phase images were performed with software developed on the LabVIEW platform. Since the phase changes observed in DHOT are very sensitive to the optical thickness of the trapped volume, the number of particles trapped along the axial direction as well as the orientation of non-spherical objects could be estimated with high precision. Since diseases such as malaria and diabetes change the refractive index of red blood cells, this system can be employed to map such disease-specific changes in biological samples upon immobilization with optical tweezers.
Color constancy in a scene with bright colors that do not have a fully natural surface appearance.
Fukuda, Kazuho; Uchikawa, Keiji
2014-04-01
Theoretical and experimental approaches have proposed that color constancy involves a correction related to some average of stimulation over the scene, and some of the studies showed that the average gives greater weight to surrounding bright colors. However, in a natural scene, high-luminance elements do not necessarily carry information about the scene illuminant when the luminance is too high for it to appear as a natural object color. The question is how a surrounding color's appearance mode influences its contribution to the degree of color constancy. Here the stimuli were simple geometric patterns, and the luminance of surrounding colors was tested over the range beyond the luminosity threshold. Observers performed perceptual achromatic setting on the test patch in order to measure the degree of color constancy and evaluated the surrounding bright colors' appearance mode. Broadly, our results support the assumption that the visual system counts only the colors in the object-color appearance for color constancy. However, detailed analysis indicated that surrounding colors without a fully natural object-color appearance had some sort of influence on color constancy. Consideration of this contribution of unnatural object color might be important for precise modeling of human color constancy.
NASA Technical Reports Server (NTRS)
Clayton, K. M. (Principal Investigator)
1975-01-01
The author has identified the following significant results. An objective system for regionalization is described, using ERTS-1 (or LANDSAT) computer-compatible tapes. A range of computer programs for analysis of these tapes was developed. Emphasis is on a level of generalization appropriate to a satellite system with repetitive global coverage. The main variables are land/water ratios and vegetation cover. The scale or texture of the pattern of change in these variables varies a good deal across the earth's surface, and it seems best if the unit of generalization adopted varies in sympathy with the surface being analyzed.
Tunable far infrared studies of molecular parameters in support of stratospheric measurements
NASA Technical Reports Server (NTRS)
Chance, K. V.; Nolt, Ira G.; Radostitz, J. V.; Park, K.
1990-01-01
The purpose of this research is to make precise, fully line-resolved measurements of molecular parameters that are necessary for the analysis of spectra obtained in far infrared field measurement programs. These measurements make it possible to accurately analyze the data from field measurements to obtain atmospheric concentration profiles of key trace gases involved in the ozone chemistry. The research objectives include: measurements of pressure broadening of molecular lines of OH, O2, O3, HCl, and H2O, their temperature dependence, and, when possible, the pressure-induced frequency shifts of the lines; measurements of line positions of radical species, such as HO2.
Time-of-flight range imaging for underwater applications
NASA Astrophysics Data System (ADS)
Merbold, Hannes; Catregn, Gion-Pol; Leutenegger, Tobias
2018-02-01
Precise and low-cost range imaging in underwater settings with object distances on the meter level is demonstrated. This is addressed through silicon-based time-of-flight (TOF) cameras operated with light emitting diodes (LEDs) at visible, rather than near-IR wavelengths. We find that the attainable performance depends on a variety of parameters, such as the wavelength dependent absorption of water, the emitted optical power and response times of the LEDs, or the spectral sensitivity of the TOF chip. An in-depth analysis of the interplay between the different parameters is given and the performance of underwater TOF imaging using different visible illumination wavelengths is analyzed.
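For context, the distance estimate in a continuous-wave TOF camera follows from the demodulated phase of the modulated illumination; underwater, the propagation speed must be divided by the refractive index of water. A minimal sketch of that relation (the 4-tap demodulation and n ≈ 1.33 are standard textbook values assumed here, not parameters from the paper):

```python
import numpy as np

C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33             # refractive index of water in the visible band (assumed)

def tof_distance(a0, a1, a2, a3, f_mod):
    """Distance from the standard 4-tap CW time-of-flight demodulation.

    a0..a3: correlation samples at 0/90/180/270 degree shifts.
    f_mod:  modulation frequency in Hz.
    """
    phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)  # wrapped phase delay
    c_water = C_VACUUM / N_WATER                        # light is slower in water
    return c_water * phase / (4 * np.pi * f_mod)        # round trip -> one way

# Example: 20 MHz modulation gives an unambiguous range of about 5.6 m in water,
# consistent with object distances on the meter level.
print(tof_distance(1.0, 0.2, 0.1, 0.9, 20e6))
```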
Pérez, Adriana; Gabriel, Kelley; Nehme, Eileen K; Mandell, Dorothy J; Hoelscher, Deanna M
2015-07-27
Evidence regarding bias, precision, and accuracy in adolescent self-reported height and weight across demographic subpopulations is lacking. The bias, precision, and accuracy of adolescent self-reported height and weight across subpopulations were examined using a large, diverse and representative sample of adolescents. A second objective was to develop correction equations for self-reported height and weight to provide more accurate estimates of body mass index (BMI) and weight status. A total of 24,221 students from 8th and 11th grade in Texas participated in the School Physical Activity and Nutrition (SPAN) surveillance system in years 2000-2002 and 2004-2005. To assess bias, the differences between the self-reported and objective measures for height and weight were estimated. To assess precision and accuracy, Lin's concordance correlation coefficient was used. BMI was estimated from self-reported and objective measures. The prevalence of students' weight status was estimated using self-reported and objective measures; absolute (bias) and relative error (relative bias) were assessed subsequently. Correction equations for sex and race/ethnicity subpopulations were developed to estimate objective measures of height, weight and BMI from self-reported measures using weighted linear regression. Sensitivity, specificity and positive predictive values of weight-status classification using self-reported measures and correction equations were assessed by sex and grade. Students in 8th and 11th grade overestimated their height by 0.68 cm (White girls) to 2.02 cm (African-American boys), and underestimated their weight by 0.4 kg (Hispanic girls) to 0.98 kg (African-American girls). The differences in self-reported versus objectively measured height and weight resulted in underestimation of BMI ranging from -0.23 kg/m² (White boys) to -0.7 kg/m² (African-American girls). The sensitivity of self-reported measures to classify weight status as obese was 70.8% and 81.9% for 8th- and 11th-graders, respectively. These estimates increased when using the correction equations to 77.4% and 84.4% for 8th- and 11th-graders, respectively. When direct measurement is not practical, self-reported measurements provide a reliable proxy measure across grade, sex and race/ethnicity subpopulations of adolescents. Correction equations increase the sensitivity of self-report measures to identify the prevalence of overall overweight/obesity status.
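The correction equations described are weighted linear regressions fitted per subgroup. A minimal sketch of that step (variable names, the use of NumPy, and the application to BMI are illustrative assumptions, not the study's code):

```python
import numpy as np

def fit_correction(self_reported, measured, sampling_weights):
    """Weighted least-squares fit of: measured = b0 + b1 * self_reported.

    np.polyfit minimizes sum((w_i * r_i)^2), so passing sqrt of the survey
    sampling weights weights each squared residual by the weight itself.
    """
    b1, b0 = np.polyfit(self_reported, measured,
                        deg=1, w=np.sqrt(sampling_weights))
    return b0, b1

# Hypothetical use, fitted separately within each sex-by-race/ethnicity
# stratum, then applied to correct self-reported values:
# b0, b1 = fit_correction(self_height_cm, measured_height_cm, w)
# corrected_height_cm = b0 + b1 * self_height_cm
# corrected_bmi = corrected_weight_kg / (corrected_height_cm / 100) ** 2
```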
Numerical study on 3D composite morphing actuators
NASA Astrophysics Data System (ADS)
Oishi, Kazuma; Saito, Makoto; Anandan, Nishita; Kadooka, Kevin; Taya, Minoru
2015-04-01
There are a number of actuators using the deformation of electroactive polymer (EAP), but fewer papers have focused on the performance of 3D morphing actuators based on an analytical approach, due mainly to their complexity. The present paper introduces a numerical analysis approach to the large-scale deformation and motion of a 3D half-dome-shaped actuator composed of a thin soft membrane (passive material) and EAP strip actuators (EAP active coupons with electrodes on both surfaces), where the locations of the active EAP strips are a key parameter. The Simulia/Abaqus Static and Implicit analysis codes, whose main feature is a high-precision contact analysis capability among structures, are used, focusing on the whole process of the membrane touching and wrapping around the object. The unidirectional properties of the EAP coupon actuator are used as the input material-property data set for the simulation and for the verification of our numerical model, where the verification is made by comparison with an existing 2D solution. The numerical results demonstrate the whole deformation process of the membrane wrapping around not only smoothly shaped objects like a sphere or an egg, but also irregularly shaped objects. A parametric study reveals the proper placement of the EAP coupon actuators, with modification of the dome shape to induce the relevant large-scale deformation. The numerical simulation for the 3D soft actuators shown in this paper could be applied to a wider range of soft 3D morphing actuators.
Chen, Yixi; Guzauskas, Gregory F; Gu, Chengming; Wang, Bruce C M; Furnback, Wesley E; Xie, Guotong; Dong, Peng; Garrison, Louis P
2016-11-02
The "big data" era represents an exciting opportunity to utilize powerful new sources of information to reduce clinical and health economic uncertainty on an individual patient level. In turn, health economic outcomes research (HEOR) practices will need to evolve to accommodate individual patient-level HEOR analyses. We propose the concept of "precision HEOR", which utilizes a combination of costs and outcomes derived from big data to inform healthcare decision-making that is tailored to highly specific patient clusters or individuals. To explore this concept, we discuss the current and future roles of HEOR in health sector decision-making, big data and predictive analytics, and several key HEOR contexts in which big data and predictive analytics might transform traditional HEOR into precision HEOR. The guidance document addresses issues related to the transition from traditional to precision HEOR practices, the evaluation of patient similarity analysis and its appropriateness for precision HEOR analysis, and future challenges to precision HEOR adoption. Precision HEOR should make precision medicine more realizable by aiding and adapting healthcare resource allocation. The combined hopes for precision medicine and precision HEOR are that individual patients receive the best possible medical care while overall healthcare costs remain manageable or become more cost-efficient.
Li, Yong; Randerath, Jennifer; Bauer, Hans; Marquardt, Christian; Goldenberg, Georg; Hermsdörfer, Joachim
2009-01-03
When we manipulate familiar objects in our daily life, our grip force anticipates the physical demands right from the moment of contact with the object, indicating the existence of a memory for relevant object properties. This study explores the formation and consolidation of the memory processes that associate either familiar (size) or arbitrary object features (color) with object weight. In the general task, participants repetitively lifted two differently weighted objects (580 and 280 g) in a pseudo-random order. Forty young healthy adults participated in this study and were randomly distributed into four groups: Color Cue Single task (CCS, blue and red, 9.8³ cm³), Color Cue Dual task (CCD), No Cue (NC) and Size Cue (SC, 9.8³ and 6³ cm³) group. All groups performed a repetitive precision grasp-lift task and were retested with the same protocol after a 5-min pause. The CCD group was also required to simultaneously perform a memory task during each lift of differently weighted objects coded by color. The results show that groups lifting objects with arbitrary or familiar features successfully formed the association between object weight and manipulated object features and incorporated this into grip force programming, as observed in the different scaling of grip force and grip force rate for different object weights. An arbitrary feature, i.e., color, can be sufficiently associated with object weight, however with less strength than the familiar feature of size. The simultaneous memory task impaired anticipatory force scaling during repetitive object lifting but did not jeopardize the learning process and the consolidation of the associative memory.
Spotorno, Sara; Malcolm, George L; Tatler, Benjamin W
2015-02-10
Previous research has suggested that correctly placed objects facilitate eye guidance, but also that objects violating spatial associations within scenes may be prioritized for selection and subsequent inspection. We analyzed the respective eye guidance of spatial expectations and target template (precise picture or verbal label) in visual search, while taking into account any impact of object spatial inconsistency on extrafoveal or foveal processing. Moreover, we isolated search disruption due to misleading spatial expectations about the target from the influence of spatial inconsistency within the scene upon search behavior. Reliable spatial expectations and precise target template improved oculomotor efficiency across all search phases. Spatial inconsistency resulted in preferential saccadic selection when guidance by template was insufficient to ensure effective search from the outset and the misplaced object was bigger than the objects consistently placed in the same scene region. This prioritization emerged principally during early inspection of the region, but the inconsistent object also tended to be preferentially fixated overall across region viewing. These results suggest that objects are first selected covertly on the basis of their relative size and that subsequent overt selection is made considering object-context associations processed in extrafoveal vision. Once the object was fixated, inconsistency resulted in longer first fixation duration and longer total dwell time. As a whole, our findings indicate that observed impairment of oculomotor behavior when searching for an implausibly placed target is the combined product of disruption due to unreliable spatial expectations and prioritization of inconsistent objects before and during object fixation. © 2015 ARVO.
Precise Orbit Determination for ALOS
NASA Technical Reports Server (NTRS)
Nakamura, Ryo; Nakamura, Shinichi; Kudo, Nobuo; Katagiri, Seiji
2007-01-01
The Advanced Land Observing Satellite (ALOS) has been developed to contribute to the fields of mapping, precise regional land-coverage observation, disaster monitoring, and resource surveying. Because the mounted sensors need high geometrical accuracy, precise orbit determination for ALOS is essential for satisfying the mission objectives. ALOS therefore carries a GPS receiver and a Laser Reflector (LR) for Satellite Laser Ranging (SLR). This paper deals with precise orbit determination experiments for ALOS using the Global and High Accuracy Trajectory determination System (GUTS) and the evaluation of the orbit determination accuracy by SLR data. The results show that, even though the GPS receiver loses lock on GPS signals more frequently than expected, the GPS-based orbit is consistent with the SLR-based orbit. Considering the 1-sigma error, an orbit determination accuracy of a few decimeters (peak-to-peak) was achieved.
Error analysis of high-rate GNSS precise point positioning for seismic wave measurement
NASA Astrophysics Data System (ADS)
Shu, Yuanming; Shi, Yun; Xu, Peiliang; Niu, Xiaoji; Liu, Jingnan
2017-06-01
High-rate GNSS precise point positioning (PPP) has been playing an increasingly important role in providing precise positioning information in fast time-varying environments. Although kinematic PPP is commonly known to have a precision of a few centimeters, the precision of high-rate PPP within a short period of time has recently been reported, on the basis of experiments, to reach a few millimeters in the horizontal components and sub-centimeter level in the vertical component when measuring seismic motion, which is several times better than conventional kinematic PPP practice. To fully understand the mechanism behind this seemingly mysterious short-term performance of high-rate PPP, we have carried out a theoretical error analysis of PPP and conducted the corresponding simulations over a short period of time. The theoretical analysis clearly indicates that high-rate PPP errors consist of two types: the residual systematic errors at the starting epoch, which affect high-rate PPP through the change of satellite geometry, and the time-varying systematic errors between the starting epoch and the current epoch. Both the theoretical error analysis and the simulated results are fully consistent with, and thus unambiguously confirm, the reported high precision of high-rate PPP, which is further affirmed here by real-data experiments, indicating that high-rate PPP can indeed achieve millimeter-level precision in the horizontal components and sub-centimeter-level precision in the vertical component when measuring motion over a short period of time. The simulation results clearly show that the random noise of carrier phases and higher-order ionospheric errors are the two major factors affecting the precision of high-rate PPP within a short period of time. The experiments with real data also indicate that the precision of PPP solutions can degrade to the centimeter level in both the horizontal and vertical components if the satellite geometry is rather poor, with a large DOP value.
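The abstract ties part of the error budget to satellite geometry, summarized by the DOP value. As an illustration of that dependence (standard GNSS geometry, not the authors' software), a minimal GDOP computation:

```python
import numpy as np

def gdop(sat_positions, receiver_position):
    """Geometric dilution of precision from satellite/receiver geometry.

    Each row of the design matrix is [unit line-of-sight vector, 1 (clock)].
    A large GDOP means the same ranging errors map to larger position errors.
    """
    los = np.asarray(sat_positions, float) - np.asarray(receiver_position, float)
    unit = los / np.linalg.norm(los, axis=1, keepdims=True)   # (n, 3)
    G = np.hstack([unit, np.ones((len(unit), 1))])            # (n, 4)
    Q = np.linalg.inv(G.T @ G)                                # cofactor matrix
    return np.sqrt(np.trace(Q))

# Toy example: four satellites in ECEF-like coordinates (meters), receiver at origin.
sats = [[2.0e7, 0, 1.0e7], [-2.0e7, 0, 1.0e7], [0, 2.0e7, 1.0e7], [0, -1.0e7, 2.2e7]]
print(gdop(sats, [0.0, 0.0, 0.0]))
```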
Gravitational mass attraction measurement for drag-free references
NASA Astrophysics Data System (ADS)
Swank, Aaron J.
Exciting new experiments in gravitational physics are among the proposed future space science missions around the world. Such future space science experiments include gravitational wave observatories, which require extraordinarily precise instruments for gravitational wave detection. In fact, future space-based gravitational wave observatories require the use of a drag-free reference sensor, which is several orders of magnitude more precise than any drag-free satellite launched to date. With the analysis methods and measurement techniques described in this work, there is one less challenge associated with achieving the high-precision drag-free satellite performance levels required by gravitational wave observatories. One disturbance critical to the drag-free performance is the acceleration from the mass attraction between the spacecraft and the drag-free reference mass. A direct measurement of the gravitational mass attraction force is not easily performed. Historically for drag-free satellite design, the gravitational attraction properties were estimated by using idealized equations between a point mass and objects of regular geometric shape with homogeneous density. Stringent requirements are then placed on the density distribution and fabrication tolerances for the drag-free reference mass and satellite components in order to ensure that the allocated gravitational mass attraction disturbance budget is not exceeded due to the associated uncertainty in geometry and mass properties. Yet the uncertainty associated with mass properties and geometry generates an unacceptable uncertainty in the mass attraction calculation, which makes it difficult to meet the demanding drag-free performance requirements of future gravitational wave observatories. The density homogeneity and geometrical tolerances required to meet the overall drag-free performance can easily force the use of special materials or manufacturing processes, which are impractical or not feasible. The focus of this research is therefore to develop the necessary equations for the gravitational mass attraction force and gradients between two general distributed bodies. Assuming the drag-free reference mass to be a single point-mass object is no longer necessary for the gravitational attraction calculations. Furthermore, the developed equations are coupled with physical measurements in order to eliminate the mass attraction uncertainty associated with mass properties. The mass attraction formula through a second-order expansion consists of the measurable quantities of mass, mass center, and moment of inertia about the mass center. Thus, the gravitational self-attraction force on the drag-free reference due to the satellite can be indirectly measured. By incorporating physical measurements into the mass attraction calculation, the uncertainty in the density distribution as well as geometrical variations due to the manufacturing process are included in the analysis. For indirect gravitational mass attraction measurements, the corresponding properties of mass, mass center, and moment of inertia must be precisely determined for the proof mass and satellite components. This work focuses on the precision measurement of the moment of inertia for the drag-free test mass. Presented here is the design of a new moment-of-inertia measurement apparatus utilizing a five-wire torsion pendulum design. The torsion pendulum is utilized to measure the moment-of-inertia tensor for a prospective drag-free test mass geometry.
The measurement results presented indicate that the prototype five-wire torsion pendulum has matched current state-of-the-art precision. With only minimal work to reduce laboratory environmental disturbances, the apparatus has the prospect of exceeding state-of-the-art precision by almost an order of magnitude. In addition, the apparatus is shown to be capable of measuring the mass-center offset from the geometric center to a level better than typical measurement devices. Although the pendulum was not originally designed for mass-center measurements, preliminary results indicate that an apparatus with a similar design may have the potential of achieving state-of-the-art precision.
NASA Astrophysics Data System (ADS)
Frank, D. R.; Huss, G. R.; Nagashima, K.; Zolensky, M. E.; Le, L.
2017-07-01
The only whole CAI preserved in the aqueously altered CI chondrites is 16O-rich and has no resolvable radiogenic Mg. Accretion of CAIs by the CI parent object(s) may limit the precision of cosmochemical models that require a CI starting composition.
NASA Astrophysics Data System (ADS)
Matt, Kyle; Stephens, Denise C.; Gaillard, Clement; KELT-North
2016-01-01
We use a 16" telescope on the Brigham Young University (BYU) campus to follow-up on the Kilodegree Extremely Little Telescope (KELT) survey to identify possible transiting planets. KELT is an all sky survey that monitors the same areas of the sky throughout the year to identify stars that exhibit a change in brightness. Objects found to exhibit a variation in brightness similar to predicted models of transiting planets are sent to the ground-based follow-up team where we get high precision differential photometry to determine whether or not a transit is occurring and if the transiting object is a planet or companion star. If a planetary transit is found, the object is forwarded for radial velocity follow-up and could eventually be published as a KELT planet. In this poster we present light curves from possible planets we have identified as well as eclipsing binary systems and non-detections. We will highlight features of our telescope and camera and the basic steps for data reduction and analysis.
Design of apochromatic lens with large field and high definition for machine vision.
Yang, Ao; Gao, Xingyu; Li, Mingfeng
2016-08-01
Precise machine-vision detection of a large object at a finite working distance (WD) requires a lens with high resolution over a large field of view (FOV). In this case, the effect of the secondary spectrum on image quality is not negligible. According to the detection requirements, a high-resolution apochromatic objective is designed and analyzed. The initial optical structure (IOS) is built from three segments. Next, the secondary spectrum of the IOS is corrected by replacing glasses using the dispersion-vector analysis method based on the Buchdahl dispersion equation. Other aberrations are optimized with the commercial optical design software ZEMAX by properly choosing the optimization-function operands. The optimized optical structure (OOS) has an f-number (F/#) of 3.08, a FOV of φ60 mm, a WD of 240 mm, and a modulation transfer function (MTF) above 0.1 at 320 cycles/mm for all fields. The design requirements for a non-fluorite-material apochromatic objective lens with a large field and high definition for machine-vision detection have been achieved.
NASA Astrophysics Data System (ADS)
Ren, B.; Wen, Q.; Zhou, H.; Guan, F.; Li, L.; Yu, H.; Wang, Z.
2018-04-01
The purpose of this paper is to provide decision support for the adjustment and optimization of crop planting structure in Jingxian County. An object-oriented information extraction method is used to extract corn and cotton in Jingxian County of Hengshui City, Hebei Province, based on multi-period GF-1 16-meter images. The best time for data extraction was screened by analyzing the spectral characteristics of corn and cotton at different growth stages, based on multi-period GF-1 16-meter images, phenological data, and field survey data. The results showed that the total classification accuracy of corn and cotton was up to 95.7%, the producer accuracies were 96% and 94% respectively, and the user accuracies were 95.05% and 95.9% respectively, which satisfies the demands of crop-monitoring applications. Therefore, combining multi-period high-resolution images with object-oriented classification can effectively extract the large-scale distribution of crops, providing a convenient and effective technical means for crop monitoring.
Multiresolution saliency map based object segmentation
NASA Astrophysics Data System (ADS)
Yang, Jian; Wang, Xin; Dai, ZhenYou
2015-11-01
Salient object detection and segmentation have gained increasing research interest in recent years. A saliency map can be obtained from the different models presented in previous studies. Based on this saliency map, the most salient region (MSR) in an image can be extracted. This MSR, generally a rectangle, can be used as the initial parameters for object segmentation algorithms. However, to our knowledge, all of those saliency maps are represented at a single resolution, although some models have introduced multiscale principles in the calculation process. Furthermore, some segmentation methods, such as the well-known GrabCut algorithm, need more iteration time or additional interactions to get more precise results without predefined pixel types. Here, the concept of a multiresolution saliency map is introduced. This saliency map is provided in a multiresolution format, which naturally follows the principle of the human visual mechanism. Moreover, the points in this map can be utilized to initialize parameters for GrabCut segmentation by labeling the feature pixels automatically. Both the computing speed and segmentation precision are evaluated. The results imply that this multiresolution saliency-map-based object segmentation method is simple and efficient.
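A sketch of how a saliency map can seed GrabCut labels automatically, in the spirit the abstract describes (using OpenCV's grabCut; the fixed thresholds and the single-resolution map here are simplifications of the paper's multiresolution scheme):

```python
import cv2
import numpy as np

def grabcut_from_saliency(img_bgr, saliency, fg_thresh=0.8, bg_thresh=0.2):
    """Initialize GrabCut labels from a saliency map normalized to [0, 1].

    Pixels with very high saliency seed probable foreground, very low
    saliency seed certain background; GrabCut then refines the boundary
    without any user interaction.
    """
    mask = np.full(saliency.shape, cv2.GC_PR_BGD, np.uint8)
    mask[saliency >= fg_thresh] = cv2.GC_PR_FGD
    mask[saliency <= bg_thresh] = cv2.GC_BGD
    bgd = np.zeros((1, 65), np.float64)   # internal GMM state
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
```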
High-accuracy 3D measurement system based on multi-view and structured light
NASA Astrophysics Data System (ADS)
Li, Mingyue; Weng, Dongdong; Li, Yufeng; Zhang, Longbin; Zhou, Haiyun
2013-12-01
3D surface reconstruction is one of the most important topics in Spatial Augmented Reality (SAR). Structured light is a simple and rapid method to reconstruct objects. In order to improve the precision of 3D reconstruction, we present a high-accuracy multi-view 3D measurement system based on Gray-code and phase-shift patterns. We use a camera and a light projector that casts structured light patterns on the objects. In this system, we use only one camera to take photos on the left and right sides of the object respectively. In addition, we use VisualSFM to recover the relationships between the perspectives, so camera calibration can be omitted and the positions where the camera is placed are no longer limited. We also set an appropriate exposure time to make the scenes covered by Gray-code patterns more recognizable. All of the points above make the reconstruction more precise. We ran experiments on different kinds of objects, and a large number of experimental results verify the feasibility and high accuracy of the system.
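The Gray-code plus phase-shift combination decodes each pixel's projector coordinate: the phase-shifted patterns give a precise but wrapped phase, and the Gray code resolves the fringe order. A minimal sketch of the standard four-step decoding (a common formulation; the paper does not state its exact pattern count):

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Four-step phase shifting: I_k = A + B*cos(phi - k*pi/2),
    so I1 - I3 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi)."""
    return np.arctan2(i1 - i3, i0 - i2) % (2 * np.pi)

def unwrapped_phase(phi, fringe_order):
    """Gray-code patterns supply the integer fringe order per pixel,
    which removes the 2*pi ambiguity of the phase-shift result."""
    return phi + 2 * np.pi * fringe_order
```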
Precision Attitude Determination System (PADS) design and analysis. Two-axis gimbal star tracker
NASA Technical Reports Server (NTRS)
1973-01-01
Development of the Precision Attitude Determination System (PADS) focused chiefly on the two-axis gimballed star tracker and electronics design improved from that of Precision Pointing Control System (PPCS), and application of the improved tracker for PADS at geosynchronous altitude. System design, system analysis, software design, and hardware design activities are reported. The system design encompasses the PADS configuration, system performance characteristics, component design summaries, and interface considerations. The PADS design and performance analysis includes error analysis, performance analysis via attitude determination simulation, and star tracker servo design analysis. The design of the star tracker and electronics are discussed. Sensor electronics schematics are included. A detailed characterization of the application software algorithms and computer requirements is provided.
NASA Technical Reports Server (NTRS)
Spector, E.; LeBlanc, A.; Shackelford, L.
1995-01-01
This study reports on the short-term in vivo precision and absolute measurements of three combinations of whole-body scan modes and analysis software using a Hologic QDR 2000 dual-energy X-ray densitometer. A group of 21 normal, healthy volunteers (11 male and 10 female) were scanned six times, receiving one pencil-beam and one array whole-body scan on three occasions approximately 1 week apart. The following combinations of scan modes and analysis software were used: pencil-beam scans analyzed with Hologic's standard whole-body software (PB scans); the same pencil-beam analyzed with Hologic's newer "enhanced" software (EPB scans); and array scans analyzed with the enhanced software (EA scans). Precision values (% coefficient of variation, %CV) were calculated for whole-body and regional bone mineral content (BMC), bone mineral density (BMD), fat mass, lean mass, %fat and total mass. In general, there was no significant difference among the three scan types with respect to short-term precision of BMD and only slight differences in the precision of BMC. Precision of BMC and BMD for all three scan types was excellent: < 1% CV for whole-body values, with most regional values in the 1%-2% range. Pencil-beam scans demonstrated significantly better soft tissue precision than did array scans. Precision errors for whole-body lean mass were: 0.9% (PB), 1.1% (EPB) and 1.9% (EA). Precision errors for whole-body fat mass were: 1.7% (PB), 2.4% (EPB) and 5.6% (EA). EPB precision errors were slightly higher than PB precision errors for lean, fat and %fat measurements of all regions except the head, although these differences were significant only for the fat and % fat of the arms and legs. In addition EPB precision values exhibited greater individual variability than PB precision values. Finally, absolute values of bone and soft tissue were compared among the three combinations of scan and analysis modes. BMC, BMD, fat mass, %fat and lean mass were significantly different between PB scans and either of the EPB or EA scans. Differences were as large as 20%-25% for certain regional fat and BMD measurements. Additional work may be needed to examine the relative accuracy of the scan mode/software combinations and to identify reasons for the differences in soft tissue precision with the array whole-body scan mode.
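The short-term precision values quoted (%CV) are typically pooled across subjects; the abstract does not state the pooling convention, but the root-mean-square form below is the common choice in densitometry and serves as a sketch of the calculation:

```python
import numpy as np

def rms_cv_percent(repeats):
    """Short-term precision as root-mean-square CV% across subjects.

    `repeats` has shape (n_subjects, n_scans): repeated measurements of
    the same quantity (e.g., whole-body BMD) for each subject.
    """
    repeats = np.asarray(repeats, float)
    cv = repeats.std(axis=1, ddof=1) / repeats.mean(axis=1)  # per-subject CV
    return 100 * np.sqrt(np.mean(cv ** 2))                   # RMS pooling

# Toy example: three subjects, three repeated scans each.
print(rms_cv_percent([[1.20, 1.21, 1.19], [0.98, 0.99, 0.97], [1.10, 1.09, 1.11]]))
```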
Kamal, Abid; Khan, Washim; Ahmad, Sayeed; Ahmad, F. J.; Saleem, Kishwar
2015-01-01
Objective: The present study was designed to develop simple, accurate, and sensitive reversed-phase high-performance liquid chromatography (RP-HPLC) and high-performance thin-layer chromatography (HPTLC) methods for the quantification of khellin present in the seeds of Ammi visnaga. Materials and Methods: RP-HPLC analysis was performed on a C18 column with methanol:water (75:25, v/v) as the mobile phase. The HPTLC method involved densitometric evaluation of khellin after resolving it on a silica gel plate using ethyl acetate:toluene:formic acid (5.5:4.0:0.5, v/v/v) as the mobile phase. Results: The developed HPLC and HPTLC methods were validated for precision (interday, intraday and intersystem), robustness, accuracy, limit of detection, and limit of quantification. The relationship between the concentration of standard solutions and the peak response was linear in both methods, over the range of 10–80 μg/mL in HPLC and 25–1,000 ng/spot in HPTLC for khellin. The relative standard deviation values for method precision were 0.63–1.97% in HPLC and 0.62–2.05% in HPTLC for khellin. Accuracy was checked by recovery studies conducted at three different concentration levels; the average percentage recovery was 100.53% in HPLC and 100.08% in HPTLC for khellin. Conclusions: The developed HPLC and HPTLC methods for the quantification of khellin were found to be simple, precise, specific, sensitive, and accurate, and can be used for routine analysis and quality control of A. visnaga and the several formulations containing it as an ingredient. PMID:26681890
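The abstract reports linearity, LOD, and LOQ without giving the estimation route; one common ICH-style approach derives them from the calibration line, as in this sketch (the example concentrations match the stated HPLC range, but the responses are made up for illustration):

```python
import numpy as np

def calibration_lod_loq(conc, response):
    """Linear calibration with ICH-style LOD/LOQ estimates:
    LOD = 3.3*sigma/S and LOQ = 10*sigma/S, where S is the slope and
    sigma the residual standard deviation of the regression."""
    conc, response = np.asarray(conc, float), np.asarray(response, float)
    slope, intercept = np.polyfit(conc, response, 1)
    sigma = (response - (slope * conc + intercept)).std(ddof=2)  # 2 fitted params
    return 3.3 * sigma / slope, 10 * sigma / slope

# Example with a 10-80 ug/mL calibration series (hypothetical peak areas):
print(calibration_lod_loq([10, 20, 40, 60, 80], [102, 198, 405, 595, 803]))
```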
A Neurological Enigma: The Inborn Numerical Competence of Humans and Animals
NASA Astrophysics Data System (ADS)
Gross, Hans J.
2012-03-01
"Subitizing" means our ability to recognize and memorize object numbers precisely under conditions where counting is impossible. This is an inborn archaic process which was named after the Latin "subito" = suddenly, immediately, indicating that the objects in question are presented to test persons only for the fraction of a second in order to prevent counting. Sequential counting, however, is an outstanding cultural achievement of mankind and means to count "1, 2, 3, 4, 5, 6, 7, 8 ..." without a limit. In contrast to inborn "subitizing", counting has to be trained, beginning in our early childhood with the help of our fingers. For humans we know since 140 years that we can "subitize" only up to 4 objects correctly and that mistakes occur from 5 objects on. Similar results have been obtained for a number of non-human vertebrates from salamanders to pigeons and dolphins. To our surprise, we have detected this inborn numerical competence for the first time in case of an invertebrate, the honeybee which recognizes and memorizes 3 to 4 objects under rigorous test conditions. This common ability of humans and honeybees to "subitize" up to 4 objects correctly and the miraculous but rare ability of persons with Savant syndrome to "subitize" more than hundred objects precisely raises a number of intriguing questions concerning the evolution and the significance of this biological enigma.
Grating exchange system of independent mirror supported by floating rotary stage
NASA Astrophysics Data System (ADS)
Zhang, Jianhuan; Tao, Jin; Liu, Yan; Nan, Yan
2015-10-01
The Grating Exchange System satisfies the Thirty Meter Telescope (TMT) WFOS requirements for astronomical observation and meets the accuracy requirements of grating exchange. It is installed in the MOBIE instrument and is one of its key devices. The Wide Field Optical Spectrograph (WFOS) is one of the three first-light observing capabilities selected by the TMT Science Advisory Committee. The Multi-Object Broadband Imaging Echellette (MOBIE) instrument design concept has been developed to address the WFOS requirements as described in the TMT Science-Based Requirements Document (SRD). The Grating Exchange System uses a new arrangement in which three grating devices and a mirror device move separately: the three grating devices and the mirror can each achieve independent movement. This kind of grating exchange system effectively solves the problems that the volume of a conventional grating exchange system is too large and that the installation space in the MOBIE instrument is too limited. The system adopts a rotary stage with good stability and high precision to support the grating devices, using an air bearing (famous for its ultra-high precision, and able to meet the optical accuracy requirement) together with a rotary positioning feedback gauge. A bracket capable of carrying greater weight is fixed on the MOBIE instrument, and two sets of servo motors control the rotary stage and the mirror device respectively. A control program realizes the required motion of the grating devices and the mirror device. The stress and strain of the structure were analyzed with the SolidWorks software to check the rationality and feasibility of the structure. Calculation and optical performance analysis prove that the system can achieve the required positioning precision under different working conditions, meeting the requirements on grating diffraction efficiency and imaging error.
Questel, E; Durbise, E; Bardy, A-L; Schmitt, A-M; Josse, G
2015-05-01
To assess an objective method for evaluating the effects of a retinaldehyde-based cream (RA-cream) on solar lentigines, 29 women randomly applied RA-cream to lentigines on one hand and a control cream to the other, once daily for 3 months. A specific method enabling reliable visualisation of the lesions was proposed, using high-magnification colour-calibrated camera imaging. Assessment was performed using clinical evaluation by Physician Global Assessment score and image analysis. Luminance determination on the numeric images was performed either on the basis of five independent experts' consensus borders or by probability-map analysis via an algorithm automatically detecting the pigmented area. Both image analysis methods showed a similar lightening of ΔL* = 2 after the 3-month treatment with RA-cream, in agreement with single-blind clinical evaluation. High-magnification colour-calibrated camera imaging combined with probability-map analysis is a fast and precise method to follow lentigo depigmentation. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
From LIDAR Scanning to 3d FEM Analysis for Complex Surface and Underground Excavations
NASA Astrophysics Data System (ADS)
Chun, K.; Kemeny, J.
2017-12-01
Light detection and ranging (LIDAR) has become a prevalent remote-sensing technology in the geological fields due to its high precision and ease of use. One of its major applications is to use the detailed geometrical information of underground structures as the basis for generating three-dimensional numerical models for FEM analysis. To date, however, straightforward techniques for reconstructing numerical models from scanned data of underground structures have not been well established or tested. In this paper, we propose a comprehensive approach integrating LIDAR scanning with finite element numerical analysis, specifically converting LIDAR 3D point clouds of objects containing complex surface geometry into finite element models. This methodology has been applied to the Kartchner Caverns in Arizona for stability analysis. Numerical simulations were performed using the finite element code ABAQUS. The results indicate that the proposed LIDAR-based workflow is effective and provides a reference for similar engineering projects in practice.
GaiaGrid : Its Implications and Implementation
NASA Astrophysics Data System (ADS)
Ansari, S. G.; Lammers, U.; Ter Linden, M.
2005-12-01
Gaia is an ESA space mission to determine the positions of 1 billion objects in the Galaxy at micro-arcsecond precision. The data analysis and processing requirements of the mission involve about 20 institutes across Europe, each providing algorithms for specific tasks, ranging from relativistic effects on positional determination to classification, astrometric binary-star detection, photometric analysis, and spectroscopic analysis. In an initial phase, a study has been ongoing over the past three years to determine the complexity of Gaia's data processing. Two processing categories have materialised: core and shell. While core tasks deal with routine data processing, shell tasks are algorithms to carry out data analysis, which involves the Gaia community at large. For this latter category, we are currently experimenting with the use of Grid paradigms to allow access to the core data and to augment processing power to simulate and analyse the data in preparation for the actual mission. We present preliminary results and discuss the sociological impact of distributing the tasks amongst the community.
A portable system for foot biomechanical analysis during gait.
Samson, William; Sanchez, Stéphane; Salvia, Patrick; Jan, Serge Van Sint; Feipel, Véronique
2014-07-01
Modeling the foot is challenging due to its complex structure compared to most other body segments. To analyze the biomechanics of the foot, portable devices have been designed to allow measurement of temporal, spatial, and pedobarographic parameters. The goal of this study was to design and evaluate a portable system for kinematic and dynamic analysis of the foot during gait. This device consisted of a force plate synchronized with four cameras and integrated into a walkway. The complete system can be packaged for transportation. First, the measurement system was assessed using reference objects to evaluate accuracy and precision. Second, nine healthy participants were assessed during gait trials using both the portable and Vicon systems (coupled with a force plate). The ankle and metatarsophalangeal (MP) joint angles and moments were computed, as well as the ground reaction force (GRF). The intra- and inter-subject variability was analyzed for both systems, as well as the inter-system variation. The accuracy and precision were, respectively, 0.4 mm and 0.4 mm for linear values and 0.5° and 0.6° for angular values. The variability of the portable and Vicon systems was similar (i.e., the inter-system variability never exceeded 2.1°, 0.081 N m kg⁻¹ and 0.267 N kg⁻¹ for the angles, moments and GRF, respectively). The inter-system differences were less than the inter-subject variability and similar to the intra-subject variability. Consequently, the portable system was considered satisfactory for biomechanical analysis of the foot, outside of a motion analysis laboratory. Copyright © 2014 Elsevier B.V. All rights reserved.
Radiographic absorptiometry method in measurement of localized alveolar bone density changes.
Kuhl, E D; Nummikoski, P V
2000-03-01
The objective of this study was to measure the accuracy and precision of a radiographic absorptiometry method by using an occlusal density reference wedge in quantification of localized alveolar bone density changes. Twenty-two volunteer subjects had baseline and follow-up radiographs taken of mandibular premolar-molar regions with an occlusal density reference wedge in both films and added bone chips in the baseline films. The absolute bone equivalent densities were calculated in the areas that contained bone chips from the baseline and follow-up radiographs. The differences in densities described the masses of the added bone chips that were then compared with the true masses by using regression analysis. The correlation between the estimated and true bone-chip masses ranged from R = 0.82 to 0.94, depending on the background bone density. There was an average 22% overestimation of the mass of the bone chips when they were in low-density background, and up to 69% overestimation when in high-density background. The precision error of the method, which was calculated from duplicate bone density measurements of non-changing areas in both films, was 4.5%. The accuracy of the intraoral radiographic absorptiometry method is low when used for absolute quantification of bone density. However, the precision of the method is good and the correlation is linear, indicating that the method can be used for serial assessment of bone density changes at individual sites.
Automated optical testing of LWIR objective lenses using focal plane array sensors
NASA Astrophysics Data System (ADS)
Winters, Daniel; Erichsen, Patrik; Domagalski, Christian; Peter, Frank; Heinisch, Josef; Dumitrescu, Eugen
2012-10-01
The image quality of today's state-of-the-art IR objective lenses is constantly improving, while at the same time the market for thermography and vision grows strongly. Because of increasing demands on the quality of IR optics and increasing production volumes, the standards for image-quality testing rise and tests need to be performed in shorter time. Most high-precision MTF testing equipment for the IR spectral bands in use today relies on the scanning-slit method, which scans a 1D detector over a pattern in the image generated by the lens under test, followed by image analysis to extract performance parameters. The disadvantages of this approach are that it is relatively slow, it requires highly trained operators for aligning the sample, and the number of parameters that can be extracted is limited. In this paper we present lessons learned from the R&D process of using focal plane array (FPA) sensors for testing long-wave IR (LWIR, 8–12 μm) optics. Factors that need to be taken into account when switching from scanning slit to FPAs include the thermal background from the environment, the low scene contrast in the LWIR, the need for advanced image-processing algorithms to pre-process camera images for analysis, and camera artifacts. Finally, we discuss two measurement systems for LWIR lens characterization that we recently developed, with different target applications: 1) a fully automated system suitable for production testing and metrology that uses uncooled microbolometer cameras to automatically measure MTF (on-axis and at several off-axis positions) and parameters like EFL, FFL, autofocus curves, image-plane tilt, etc., for LWIR objectives with an EFL between 1 and 12 mm; the measurement cycle time for one sample is typically between 6 and 8 s. 2) A high-precision research-grade system, again using an uncooled LWIR camera as detector, that is very simple to align and operate. A wide range of lens parameters (MTF, EFL, astigmatism, distortion, etc.) can be easily and accurately measured with this system.
Zheng, Xianlin; Lu, Yiqing; Zhao, Jiangbo; Zhang, Yuhai; Ren, Wei; Liu, Deming; Lu, Jie; Piper, James A; Leif, Robert C; Liu, Xiaogang; Jin, Dayong
2016-01-19
Compared with routine microscopy imaging of a few analytes at a time, rapid scanning through the whole sample area of a microscope slide to locate every single target object offers many advantages in terms of simplicity, speed, throughput, and potential for robust quantitative analysis. Existing techniques that accommodate solid-phase samples incorporating individual micrometer-sized targets generally rely on digital microscopy and image analysis, with intrinsically low throughput and reliability. Here, we report an advanced on-the-fly stage scanning method to achieve high-precision target location across the whole slide. By integrating X- and Y-axis linear encoders to a motorized stage as the virtual "grids" that provide real-time positional references, we demonstrate an orthogonal scanning automated microscopy (OSAM) technique which can search a coverslip area of 50 × 24 mm² in just 5.3 min and locate individual 15 μm lanthanide luminescent microspheres with standard deviations of 1.38 and 1.75 μm in X and Y directions. Alongside implementation of an autofocus unit that compensates the tilt of a slide in the Z-axis in real time, we increase the luminescence detection efficiency by 35% with an improved coefficient of variation. We demonstrate the capability of advanced OSAM for robust quantification of luminescence intensities and lifetimes for a variety of micrometer-scale luminescent targets, specifically single down-shifting and upconversion microspheres, crystalline microplates, and color-barcoded microrods, as well as quantitative suspension array assays of biotinylated-DNA functionalized upconversion nanoparticles.
Using statistical text classification to identify health information technology incidents
Chai, Kevin E K; Anthony, Stephen; Coiera, Enrico; Magrabi, Farah
2013-01-01
Objective: To examine the feasibility of using statistical text classification to automatically identify health information technology (HIT) incidents in the USA Food and Drug Administration (FDA) Manufacturer and User Facility Device Experience (MAUDE) database. Design: We used a subset of 570,272 incidents including 1534 HIT incidents reported to MAUDE between 1 January 2008 and 1 July 2010. Text classifiers using regularized logistic regression were evaluated with both ‘balanced’ (50% HIT) and ‘stratified’ (0.297% HIT) datasets for training, validation, and testing. Dataset preparation, feature extraction, feature selection, cross-validation, classification, performance evaluation, and error analysis were performed iteratively to further improve the classifiers. Feature-selection techniques such as removing short words and stop words, stemming, lemmatization, and principal component analysis were examined. Measurements: κ statistic, F1 score, precision and recall. Results: Classification performance was similar on both the stratified (0.954 F1 score) and balanced (0.995 F1 score) datasets. Stemming was the most effective technique, reducing the feature set size to 79% while maintaining comparable performance. Training with balanced datasets improved recall (0.989) but reduced precision (0.165). Conclusions: Statistical text classification appears to be a feasible method for identifying HIT reports within large databases of incidents. Automated identification should enable more HIT problems to be detected, analyzed, and addressed in a timely manner. Semi-supervised learning may be necessary when applying machine learning to big data analysis of patient safety incidents and requires further investigation. PMID:23666777
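A minimal sketch of the described pipeline: regularized logistic regression over word features, evaluated with precision, recall, and F1 (using scikit-learn; the toy narratives and labels below are invented placeholders, and simple stop-word removal stands in for the fuller feature selection the study examined):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.pipeline import make_pipeline

# Toy stand-ins for MAUDE narratives (1 = HIT incident, 0 = other device issue).
train_texts = ["software froze during order entry", "catheter tip fractured",
               "interface showed the wrong patient record", "pump battery failed"]
train_labels = [1, 0, 1, 0]
test_texts = ["screen displayed another patient record", "lead wire broke"]
test_labels = [1, 0]

# L2-regularized logistic regression over TF-IDF word features.
clf = make_pipeline(TfidfVectorizer(stop_words="english"),
                    LogisticRegression(C=1.0, max_iter=1000))
clf.fit(train_texts, train_labels)
p, r, f1, _ = precision_recall_fscore_support(
    test_labels, clf.predict(test_texts), average="binary")
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```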
Mezouari, S; Liu, W Yun; Pace, G; Hartman, T G
2015-01-01
The objective of this study was to develop an improved analytical method for the determination of 3-chloro-1,2-propanediol (3-MCPD) and 1,3-dichloropropanol (1,3-DCP) in paper-type food packaging. The established method includes aqueous extraction, matrix spiking with a deuterated surrogate internal standard (3-MCPD-d₅), clean-up using Extrelut solid-phase extraction, derivatisation using a silylation reagent, and GC-MS analysis of the chloropropanols as their corresponding trimethylsilyl ethers. The new method is applicable to food-grade packaging samples using European Commission standard aqueous extraction and aqueous food simulant migration tests. In this improved method, the derivatisation procedure was optimised; the cost and time of the analysis were reduced by using 10 times less sample, solvent and reagent than in previously described methods. Overall the validation data demonstrate that the method is precise and reliable. The limit of detection (LOD) in the aqueous extract was 0.010 mg kg⁻¹ (w/w) for both 3-MCPD and 1,3-DCP. Analytical precision had a relative standard deviation (RSD) of 3.36% for 3-MCPD and an RSD of 7.65% for 1,3-DCP. The new method was satisfactorily applied to the analysis of over 100 commercial paperboard packaging samples. The data are being used to guide the product development of a next generation of wet-strength resins with reduced chloropropanol content, and also for risk assessments to calculate the virtual safe dose (VSD).
Semi-automatic computerized approach to radiological quantification in rheumatoid arthritis
NASA Astrophysics Data System (ADS)
Steiner, Wolfgang; Schoeffmann, Sylvia; Prommegger, Andrea; Boegl, Karl; Klinger, Thomas; Peloschek, Philipp; Kainberger, Franz
2004-04-01
Rheumatoid arthritis (RA) is a common systemic disease predominantly involving the joints. Precise diagnosis and follow-up of therapy require objective quantification. For this purpose, radiological analyses using standardized scoring systems are considered to be the most appropriate method. The aim of our study is to develop a semi-automatic image analysis software especially applicable to the scoring of joints in rheumatic disorders. The X-Ray RheumaCoach software delivers various scoring systems (Larsen score and Ratingen-Rau score) which can be applied by the scorer. In addition to the qualitative assessment of joints performed by the radiologist, a semi-automatic image analysis for joint detection and measurement of bone diameters and swollen tissue supports the image assessment process. More than 3000 radiographs of hands and feet from more than 200 RA patients were collected, analyzed, and statistically evaluated. Radiographs were quantified using the conventional paper-based Larsen score and the X-Ray RheumaCoach software. The use of the software shortened the scoring time by about 25 percent and reduced the rate of erroneous scorings in all our studies. Compared to paper-based scoring methods, the X-Ray RheumaCoach software offers several advantages: (i) structured data analysis and input that minimizes variance by standardization, (ii) faster and more precise calculation of sum scores and indices, (iii) permanent data storage and fast access to the software's database, (iv) the possibility of cross-calculation to other scores, (v) semi-automatic assessment of images, and (vi) reliable documentation of results in the form of graphical printouts.
Quantitative analysis of benzodiazepines in vitreous humor by high-performance liquid chromatography
Bazmi, Elham; Behnoush, Behnam; Akhgari, Maryam; Bahmanabadi, Leila
2016-01-01
Objective: Benzodiazepines are frequently screened drugs in emergency toxicology, drugs-of-abuse testing, and forensic cases. Because variations in benzodiazepine concentrations in biological samples due to bleeding, postmortem changes, and redistribution can bias forensic examinations, selecting a suitable sample and a validated, accurate method is essential for the quantitative analysis of this major drug category. The aim of this study was to develop a valid method for the determination of four benzodiazepines (flurazepam, lorazepam, alprazolam, and diazepam) in vitreous humor using liquid-liquid extraction and high-performance liquid chromatography. Methods: Sample preparation was carried out using liquid-liquid extraction with n-hexane:ethyl acetate and subsequent detection by high-performance liquid chromatography coupled to a diode array detector. This method was applied to quantify benzodiazepines in 21 authentic vitreous humor samples. A linear calibration curve was obtained for each drug within the range of 30–3000 ng/mL, with a correlation coefficient higher than 0.99. Results: The limits of detection and quantitation were 30 and 100 ng/mL, respectively, for all four drugs. The method showed appropriate intra- and inter-day precision (coefficient of variation < 10%). Benzodiazepine recoveries were estimated to be over 80%. The method showed high selectivity; no additional peak due to interfering substances in the samples was observed. Conclusion: The present method was selective, sensitive, accurate, and precise for the quantitative analysis of benzodiazepines in vitreous humor samples in a forensic toxicology laboratory. PMID:27635251
[Comparison of 2 lacrimal punctal occlusion methods].
Shalaby, O; Rivas, L; Rivas, A I; Oroza, M A; Murube, J
2001-09-01
To study and compare two methods for canalicular occlusion: cautery and punctal patch. The study included forty patients divided into two groups of 20. The end point was four occluded puncta. The first group underwent deep cauterization, occluding the full vertical aspect of the canaliculus. The second group underwent the punctal patch technique for canalicular occlusion. The differential parameters were time of intervention, ease of use, risks, and precision. Postoperatively, discomfort, subjective and objective improvement of the ocular surface, and the long-term result of each technique were analysed. Time of intervention was longer for the punctal patch than for cautery. Both methods exhibited similar ease of use and improvement of the ocular surface. Precision was high with the punctal patch technique, showing complete and final occlusion with no punctum needing reopening, whereas the cautery technique presented a 20% rate of reopening intervention. Postoperative discomfort and irritation were markedly evident with the punctal patch technique, while minimal with cautery. Survival analysis after one year of follow-up showed an overall advantage of the punctal patch technique over cautery.
Practical End-to-End Performance Testing Tool for High Speed 3G-Based Networks
NASA Astrophysics Data System (ADS)
Shinbo, Hiroyuki; Tagami, Atsushi; Ano, Shigehiro; Hasegawa, Toru; Suzuki, Kenji
High speed IP communication is a killer application for 3rd generation (3G) mobile systems. Thus 3G network operators should perform extensive tests to check whether the expected end-to-end performance is provided to customers under various environments. An important objective of such tests is to check whether network nodes fulfill requirements on packet-processing durations, because a long processing duration causes performance degradation. This requires testers (the persons who run the tests) to know precisely how long a packet is held by various network nodes. Without any tool's help, this task is time-consuming and error prone. Thus we propose a multi-point packet header analysis tool which extracts and records packet headers with synchronized timestamps at multiple observation points. The recorded packet headers enable testers to calculate such holding durations. The notable feature of this tool is that it is implemented on off-the-shelf hardware platforms, i.e., laptop personal computers. The key challenges of the implementation are precise clock synchronization without any special hardware and a sophisticated header extraction algorithm without any drops.
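As a concrete picture of the holding-duration calculation the tool supports, consider headers recorded at two observation points with synchronized timestamps and matched by a packet identifier; the difference of the timestamps is the node's holding time. The identifiers and timestamps below are invented for illustration.

```python
# Hypothetical multi-point capture: packet_id -> timestamp (s) recorded at the
# observation points before and after a network node under test.
ingress = {"pkt-001": 10.000120, "pkt-002": 10.000480, "pkt-003": 10.000910}
egress  = {"pkt-001": 10.003150, "pkt-002": 10.003610, "pkt-003": 10.009870}

for pkt_id, t_in in ingress.items():
    t_out = egress.get(pkt_id)
    if t_out is None:
        print(f"{pkt_id}: dropped or not observed at the second point")
        continue
    print(f"{pkt_id}: held for {(t_out - t_in) * 1e3:.3f} ms")  # holding duration
```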
Kim, Dong-Woo; Cho, Myeong-Woo; Seo, Tae-Il; Shin, Young-Jae
2008-01-01
Recently, the magnetorheological (MR) polishing process has been examined as a new ultra-precision polishing technology for micro parts in MEMS applications. In the MR polishing process, the magnetic force plays a dominant role. The method uses MR fluids containing micro-abrasives as the polishing medium. The objective of the present research is to shed light on the material removal mechanism under various slurry conditions and to investigate the surface characteristics, including shape analysis and surface roughness measurement, of spots obtained from the MR polishing process using alumina abrasives. A series of basic experiments was first performed to determine the optimum polishing conditions for BK7 glass using the prepared slurries, by changing process parameters such as wheel rotation speed and electric current. Using the obtained results, groove polishing was then performed and the results investigated. An outstanding surface roughness of Ra = 3.8 nm was obtained on the BK7 glass specimen. The present results highlight the possibility of applying this polishing method to ultra-precision micro-parts production, especially in MEMS applications. PMID:27879705
Risk factors for bladder cancer: challenges of conducting a literature search using PubMed.
Joshi, Ashish; Preslan, Elicia
2011-04-01
The objective of this study was to assess the risk factors for bladder cancer using PubMed articles from January 2000 to December 2009. The study also aimed to describe the challenges encountered in the methodology of a literature search for bladder cancer risk factors using PubMed. Twenty-six categories of risk factors for bladder cancer were identified using the National Cancer Institute Web site and the Medical Subject Headings (MeSH) Web site. A total of 1,338 PubMed searches were run using the term "urinary bladder cancer" and a risk factor term (e.g., "cigarette smoking") and were screened to identify 260 articles for final analysis. The search strategy had an overall precision of 3.42 percent, relative recall of 12.64 percent, and an F-measure of 5.39 percent. Although search terms derived from MeSH had the highest overall precision and recall, the differences did not reach significance, which indicates that for generalized, free-text searches of the PubMed database, the searchers' own terms are generally as effective as MeSH terms.
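The reported figures are internally consistent, since the F-measure is the harmonic mean of precision and recall; the quick check below (plain arithmetic, not from the paper) reproduces the stated value to rounding.

```python
# F-measure as the harmonic mean of precision and recall: F = 2PR / (P + R).
precision = 3.42   # percent, as reported
recall = 12.64     # percent, as reported
print(f"{2 * precision * recall / (precision + recall):.2f}%")  # 5.38%, vs. the reported 5.39%
```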
Platform Precision Autopilot Overview and Mission Performance
NASA Technical Reports Server (NTRS)
Strovers, Brian K.; Lee, James A.
2009-01-01
The Platform Precision Autopilot is an instrument landing system-interfaced autopilot system, developed to enable an aircraft to repeatedly fly nearly the same trajectory hours, days, or weeks later. The Platform Precision Autopilot uses a novel design to interface with a NASA Gulfstream III jet by imitating the output of an instrument landing system approach. This technique minimizes, as much as possible, modifications to the baseline Gulfstream III jet and retains the safety features of the aircraft autopilot. The Platform Precision Autopilot requirement is to fly within a 5-m (16.4-ft) radius tube for distances to 200 km (108 nmi) in the presence of light turbulence for at least 90 percent of the time. This capability allows precise repeat-pass interferometry for the Unmanned Aerial Vehicle Synthetic Aperture Radar program, whose primary objective is to develop a miniaturized, polarimetric, L-band synthetic aperture radar. Precise navigation is achieved using an accurate differential global positioning system developed by the Jet Propulsion Laboratory. Flight-testing has demonstrated the ability of the Platform Precision Autopilot to control the aircraft within the specified tolerance greater than 90 percent of the time in the presence of aircraft system noise and nonlinearities, constant pilot throttle adjustments, and light turbulence.
Platform Precision Autopilot Overview and Flight Test Results
NASA Technical Reports Server (NTRS)
Lin, V.; Strovers, B.; Lee, J.; Beck, R.
2008-01-01
The Platform Precision Autopilot is an instrument landing system interfaced autopilot system, developed to enable an aircraft to repeatedly fly nearly the same trajectory hours, days, or weeks later. The Platform Precision Autopilot uses a novel design to interface with a NASA Gulfstream III jet by imitating the output of an instrument landing system approach. This technique minimizes, as much as possible, modifications to the baseline Gulfstream III jet and retains the safety features of the aircraft autopilot. The Platform Precision Autopilot requirement is to fly within a 5-m (16.4-ft) radius tube for distances to 200 km (108 nmi) in the presence of light turbulence for at least 90 percent of the time. This capability allows precise repeat-pass interferometry for the Uninhabited Aerial Vehicle Synthetic Aperture Radar program, whose primary objective is to develop a miniaturized, polarimetric, L-band synthetic aperture radar. Precise navigation is achieved using an accurate differential global positioning system developed by the Jet Propulsion Laboratory. Flight-testing has demonstrated the ability of the Platform Precision Autopilot to control the aircraft within the specified tolerance greater than 90 percent of the time in the presence of aircraft system noise and nonlinearities, constant pilot throttle adjustments, and light turbulence.
Design and algorithm research of high precision airborne infrared touch screen
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Bing; Wang, Shuang-Jie; Fu, Yan; Chen, Zhao-Quan
2016-10-01
Infrared touch screens suffer from low precision, touch jitter, and a sharp decrease in touch precision when emitting or receiving tubes fail. A high-precision positioning algorithm based on an extended axis is proposed to solve these problems. First, the unimpeded state of the beam between an emitting and a receiving tube is recorded as 0, while the impeded state is recorded as 1. Then an oblique scan is used, in which the light of one emitting tube is received by five receiving tubes, and the impeded information of all emitting and receiving tubes is collected as a matrix. Finally, the position of the touch object is calculated as an arithmetic average. The extended-axis positioning algorithm retains high precision even when individual infrared tubes fail, with only a slight effect on precision. Experiments show that over 90% of the display area the touch error is less than 0.25D, where D is the distance between adjacent emitting tubes. We conclude that the extended-axis algorithm offers high precision, little degradation when an individual infrared tube fails, and ease of use.
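The arithmetic-average step can be pictured with a much-simplified model (the oblique extended-axis scan that gives the method its robustness is omitted here): beams blocked along each axis are flagged as 1, and each touch coordinate is the mean position of the blocked beams. The beam states and spacing below are invented.

```python
# Simplified arithmetic-average positioning: 1 marks a blocked beam, and the
# touch coordinate is the mean of the blocked beams' positions on that axis.
D = 5.0  # spacing between adjacent emitting tubes (mm), the paper's error unit

blocked_x = [0, 0, 1, 1, 1, 0, 0]  # beams perpendicular to the x axis
blocked_y = [0, 1, 1, 0, 0]        # beams perpendicular to the y axis

def mean_position(blocked, spacing):
    hits = [i * spacing for i, state in enumerate(blocked) if state]
    return sum(hits) / len(hits) if hits else None

print(mean_position(blocked_x, D), mean_position(blocked_y, D))  # (15.0, 7.5) mm
```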
Wide-field Precision Kinematics of the M87 Globular Cluster System
NASA Astrophysics Data System (ADS)
Strader, Jay; Romanowsky, Aaron J.; Brodie, Jean P.; Spitler, Lee R.; Beasley, Michael A.; Arnold, Jacob A.; Tamura, Naoyuki; Sharples, Ray M.; Arimoto, Nobuo
2011-12-01
We present the most extensive combined photometric and spectroscopic study to date of the enormous globular cluster (GC) system around M87, the central giant elliptical galaxy in the nearby Virgo Cluster. Using observations from DEIMOS and the Low Resolution Imaging Spectrometer at Keck, and Hectospec on the Multiple Mirror Telescope, we derive new, precise radial velocities for 451 GCs around M87, with projected radii from ~5 to 185 kpc. We combine these measurements with literature data for a total sample of 737 objects, which we use for a re-examination of the kinematics of the GC system of M87. The velocities are analyzed in the context of archival wide-field photometry and a novel Hubble Space Telescope catalog of half-light radii, which includes sizes for 344 spectroscopically confirmed clusters. We use this unique catalog to identify 18 new candidate ultracompact dwarfs and to help clarify the relationship between these objects and true GCs. We find much lower values for the outer velocity dispersion and rotation of the GC system than in earlier papers and also differ from previous work in seeing no evidence for a transition in the inner halo to a potential dominated by the Virgo Cluster, nor for a truncation of the stellar halo. We find little kinematical evidence for an intergalactic GC population. Aided by the precision of the new velocity measurements, we see significant evidence for kinematical substructure over a wide range of radii, indicating that M87 is in active assembly. A simple, scale-free analysis finds less dark matter within ~85 kpc than in other recent work, reducing the tension between X-ray and optical results. In general, out to a projected radius of ~150 kpc, our data are consistent with the notion that M87 is not dynamically coupled to the Virgo Cluster; the core of Virgo may be in the earliest stages of assembly.
Effect of Correlated Precision Errors on Uncertainty of a Subsonic Venturi Calibration
NASA Technical Reports Server (NTRS)
Hudson, S. T.; Bordelon, W. J., Jr.; Coleman, H. W.
1996-01-01
An uncertainty analysis performed in conjunction with the calibration of a subsonic venturi for use in a turbine test facility produced some unanticipated results that may have a significant impact in a variety of test situations. Precision uncertainty estimates using the preferred propagation techniques in the applicable American National Standards Institute/American Society of Mechanical Engineers standards were an order of magnitude larger than precision uncertainty estimates calculated directly from a sample of results (discharge coefficient) obtained at the same experimental set point. The differences were attributable to the effect of correlated precision errors, which previously have been considered negligible. An analysis explaining this phenomenon is presented. The article is not meant to document the venturi calibration, but rather to give a real example of results where correlated precision terms are important. The significance of the correlated precision terms could apply to many test situations.
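For context, the effect arises in the standard propagation expression once the correlated cross terms are retained. In the common Coleman-Steele notation for a result r = r(X_1, ..., X_J), the precision limit of the result is (a textbook form, not quoted from the article):

```latex
P_r^2 = \sum_{i=1}^{J} \left( \frac{\partial r}{\partial X_i} \right)^2 P_{X_i}^2
      + 2 \sum_{i=1}^{J-1} \sum_{j=i+1}^{J}
          \frac{\partial r}{\partial X_i} \frac{\partial r}{\partial X_j} P_{X_i X_j}
```

The covariance terms P_{X_i X_j} are exactly the contributions that had previously been treated as negligible; the venturi calibration showed they can be far from it.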
Modeling and query the uncertainty of network constrained moving objects based on RFID data
NASA Astrophysics Data System (ADS)
Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie
2007-06-01
The management of network-constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly and raises problems of frequent updates and privacy. RFID (Radio Frequency IDentification) devices are now used more and more widely to collect location information: they are cheaper, require fewer updates, and intrude less on privacy. They detect the id of an object and the time at which it passes a node of the network, but they do not observe the object's exact movement inside an edge, which leads to uncertainty. How to model and query the uncertainty of network-constrained moving objects based on RFID data has therefore become a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied; it comprises four steps: spatial filter, spatial refinement, temporal filter, and probability calculation. Finally, experiments are performed on simulated data, in which the performance of the index is studied, precision and recall of the result set are defined, and the effect of the query arguments on precision and recall is discussed.
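One way to picture the uncertainty inside an edge is a feasible-interval model: the object is seen at the start node at t1 and at the end node at t2, and with only a speed bound assumed, its possible positions at an intermediate time form an interval. This is a hypothetical sketch of the general idea, not the paper's model; the edge length, times, and speed bound are invented.

```python
# Feasible positions (distance from the start node) of an object at time t,
# given RFID detections at the edge's endpoints and a maximum speed v_max.
def feasible_interval(edge_length, t1, t2, t, v_max):
    if not (t1 <= t <= t2):
        raise ValueError("query time outside the traversal window")
    lo = max(0.0, edge_length - v_max * (t2 - t))  # must still reach the end node
    hi = min(edge_length, v_max * (t - t1))        # cannot have travelled further yet
    return lo, hi

# A 1000 m edge entered at t=0 s and left at t=60 s, with v_max = 25 m/s:
print(feasible_interval(1000.0, 0.0, 60.0, 30.0, 25.0))  # (250.0, 750.0)
```

Under a uniform position assumption, the probability step of a range query then reduces to the overlap of the query window with this interval divided by the interval's length.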
NASA Astrophysics Data System (ADS)
Ordoñez, Paulina; Gallego, David; Ribera, Pedro; Peña-Ortiz, Cristina; Garcia-Herrera, Ricardo; Vega, Inmaculada; Gómez, Francisco de Paula
2016-04-01
The Indian Summer Monsoon onset is one of the most anticipated meteorological events in the world. Because of its relevance for the population, the India Meteorological Department has dated the onset over the southern tip of the Indian Peninsula (Kerala) since 1901. The traditional method of dating the onset was based on the judgment of skilled meteorologists and was therefore considered subjective and not adequate for studying long-term changes in the onset. A new method for determining the monsoon onset based solely on objective criteria has been in use since 2006. Unfortunately, the new method relies, among other variables, on OLR measurements. This requirement impedes the construction of an objective onset series before the satellite era. An alternative approach to establishing the onset by objective methods is the use of the wind field. During the last decade, some works have demonstrated that changes in wind direction in some areas of the Indian Ocean can be used to determine the monsoon onset rather precisely. However, this method requires precise wind observations over a large oceanic area, which has limited such indices to the periods covered by reanalysis products. In this work we present a new approach to track the Indian monsoon onset based solely on historical wind direction measurements taken onboard ships. Our new series provides an objective record of the onset since the last decade of the 19th century and, perhaps more importantly, can incorporate any newly recovered historical wind record to extend the series length. The new series captures quite precisely the rapid precipitation increase associated with the monsoon onset, correlates well with previous approaches, and is robust against anomalous (bogus) onsets. Although no significant trends in the onset date were detected, a tendency to later-than-average onsets during the 1900-1925 and 1970-1990 periods and earlier-than-average onsets between 1940 and 1965 has been found. Our results show a relatively stable link between the ENSO cycle and the onset date; however, this relationship is weaker in decades characterized by prevalent La Niña conditions. Furthermore, it was found that the link between the Pacific Decadal Oscillation (PDO) and the onset date is limited to phases characterized by a shift from negative to positive PDO. This research was funded by the Spanish Ministerio de Economía y Competitividad through the projects CGL2013-44530-P and CGL2014-51721-REDT
On-line 3-dimensional confocal imaging in vivo.
Li, J; Jester, J V; Cavanagh, H D; Black, T D; Petroll, W M
2000-09-01
In vivo confocal microscopy through focusing (CMTF) can provide a 3-D stack of high-resolution corneal images and allows objective measurements of corneal sublayer thickness and backscattering. However, current systems require time-consuming off-line image processing and analysis on multiple software platforms. Furthermore, there is a trade-off between CMTF speed and measurement precision. The purpose of this study was to develop a novel on-line system for in vivo corneal imaging and analysis that overcomes these limitations. A tandem scanning confocal microscope (TSCM) was used for corneal imaging. The TSCM video camera was interfaced directly to a PC image acquisition board to implement real-time digitization. Software was developed to allow in vivo 2-D imaging, CMTF image acquisition, interactive 3-D reconstruction, and analysis of CMTF data to be performed on-line in a single user-friendly environment. A procedure was also incorporated to separate the odd/even video fields, thereby doubling the CMTF sampling rate and theoretically improving the precision of CMTF thickness measurements by a factor of two. In vivo corneal examinations of a normal human and a photorefractive keratectomy patient are presented to demonstrate the capabilities of the new system. Improvements in the convenience, speed, and functionality of in vivo CMTF image acquisition, display, and analysis are demonstrated. This is the first full-featured software package designed for in vivo TSCM imaging of the cornea that performs both 2-D and 3-D image acquisition, display, and processing as well as CMTF analysis. The use of a PC platform and the incorporation of easy-to-use, on-line, interactive features should help improve the clinical utility of this technology.
Age-related decline of precision and binding in visual working memory.
Peich, Muy-Cheng; Husain, Masud; Bays, Paul M
2013-09-01
Working memory declines with normal aging, but the nature of this impairment is debated. Studies based on detecting changes to arrays of visual objects have identified two possible components to age-related decline: a reduction in the number of items that can be stored, or a deficit in maintaining the associations (bindings) between individual object features. However, some investigations have reported intact binding with aging, and specific deficits arising only in Alzheimer's disease. Here, using a recently developed continuous measure of recall fidelity, we tested the precision with which adults of different ages could reproduce from memory the orientation and color of a probed array item. The results reveal a further component of cognitive decline: an age-related decrease in the resolution with which visual information can be maintained in working memory. This increase in recall variability with age was strongest under conditions of greater memory load. Moreover, analysis of the distribution of errors revealed that older participants were more likely to incorrectly report one of the unprobed items in memory, consistent with an age-related increase in misbinding. These results indicate a systematic decline with age in working memory resources that can be recruited to store visual information. The paradigm presented here provides a sensitive index of both memory resolution and feature binding, with the potential for assessing their modulation by interventions. The findings have implications for understanding the mechanisms underpinning working memory deficits in both health and disease.
A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera.
Ci, Wenyan; Huang, Yingping
2016-10-17
Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating the 6-DoF ego-motion, using a stereo rig with optical flow analysis. An objective function fitted with a set of feature points is created by establishing the mathematical relationship between optical flow, depth, and camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg-Marquardt method. One of the key points for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected using the Kanade-Lucas-Tomasi (KLT) algorithm. Circle matching is then applied to remove the outliers caused by mismatches of the KLT algorithm. A space position constraint is imposed to filter out moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos and the results prove the robustness and precision of the method.
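The minimization step can be condensed as follows. The small-motion optical-flow model below is one common textbook convention (sign conventions vary) and stands in for the paper's full stereo imaging model; the focal length, feature points, depths, and flows are all synthetic.

```python
# Sketch: recover 6-DoF motion (3 rotations, 3 translations) by fitting a
# predicted flow field to observed flow with Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

f = 700.0  # focal length in pixels (assumed)

def predicted_flow(p, x, y, Z):
    wx, wy, wz, tx, ty, tz = p
    u = (-f * tx + x * tz) / Z + (x * y * wx / f - (f + x**2 / f) * wy + y * wz)
    v = (-f * ty + y * tz) / Z + ((f + y**2 / f) * wx - x * y * wy - x * wz)
    return u, v

def residuals(p, x, y, Z, u_obs, v_obs):
    u, v = predicted_flow(p, x, y, Z)
    return np.concatenate([u - u_obs, v - v_obs])

rng = np.random.default_rng(0)
x, y = rng.uniform(-200, 200, 50), rng.uniform(-150, 150, 50)
Z = rng.uniform(5, 40, 50)                        # depths, e.g. from the stereo rig
true = np.array([0.002, -0.001, 0.0005, 0.1, 0.0, 0.8])
u_obs, v_obs = predicted_flow(true, x, y, Z)      # noiseless synthetic observations

fit = least_squares(residuals, np.zeros(6), args=(x, y, Z, u_obs, v_obs), method="lm")
print(fit.x)  # recovers the six motion parameters
```

In the paper, the points entering such a fit have already been filtered by circle matching, the space-position constraint, and RANSAC, so the residuals are dominated by inliers.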
Electron/proton separation and analysis techniques used in the AMS-02 (e+ + e-) flux measurement
NASA Astrophysics Data System (ADS)
Graziani, Maura; AMS-02 Collaboration
2016-04-01
AMS-02 is a large acceptance cosmic ray detector which was installed on the International Space Station (ISS) in May 2011, where it is collecting cosmic rays up to TeV energies. The search for Dark Matter indirect signatures in the rare components of the cosmic ray fluxes is among the main objectives of the experiment. AMS-02 is providing cosmic electron and positron data with unprecedented precision. This is achieved by means of the excellent hadron/electron separation power obtained by combining the independent measurements from the Transition Radiation Detector, the electromagnetic Calorimeter, and the Tracker. In this contribution we detail the analysis techniques used to distinguish electrons from the hadronic background and show the in-flight performances of these detectors relevant for the electron/positron measurements.
High Energy Follow-up Study of Gravitational Wave Transients
NASA Astrophysics Data System (ADS)
Barker, Brandon L.; Patricelli, Barbara
2018-01-01
As second-generation gravitational wave interferometers, such as Advanced Virgo and Advanced LIGO, reach their design sensitivities, a new lens into our universe will become available. Many of the most violent and energetic events in the cosmos, in particular the merger of compact objects and core collapse supernovae, are sources of gravitational waves and are also believed to be connected with Gamma Ray Bursts. Joint observations of electromagnetic and gravitational wave signals will provide an ideal opportunity to study the physics of these transient events and their progenitors. In particular, gamma ray observatories such as Fermi, coupled with precise sky localization, will be crucial to observe the high energy electromagnetic counterparts to gravitational wave signals. We constructed joint binary neutron star and gamma ray burst detection rate estimates using an analysis pipeline and report on the results of this analysis.
AmO2 Analysis for Analytical Method Testing and Assessment: Analysis Support for AmO2 Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, Kevin John; Bland, Galey Jean; Fulwyler, James Brent
Americium oxide samples will be measured for various analytes to support AmO2 production. The key analytes currently requested by the Am production customer at LANL include total Am content, Am isotopics, Pu assay, Pu isotopics, and trace element content including 237Np content. Multiple analytical methods will be utilized depending on the sensitivity, accuracy, and precision needs of the Am matrix. Traceability to the National Institute of Standards and Technology (NIST) will be achieved, where applicable, by running NIST-traceable quality control materials, given that there are no suitable AmO2 reference materials currently available for the requested analytes. The primary objective is to demonstrate the suitability of actinide analytical chemistry methods to support AmO2 production operations.
Towards Precision Spectroscopy of Baryonic Resonances
NASA Astrophysics Data System (ADS)
Döring, Michael; Mai, Maxim; Rönchen, Deborah
2017-01-01
Recent progress in baryon spectroscopy is reviewed. In a common effort, various groups have analyzed a set of new high-precision polarization observables from ELSA. The Jülich-Bonn group has finalized the analysis of pion-induced meson-baryon production, the photoproduction of pions and eta mesons, and (almost) the KΛ final state. As data become more precise, statistical aspects in the analysis of excited baryons become increasingly relevant, and several advances in this direction are proposed.
Towards precision spectroscopy of baryonic resonances
Doring, Michael; Mai, Maxim; Ronchen, Deborah
2017-01-26
Recent progress in baryon spectroscopy is reviewed. In a common effort, various groups have analyzed a set of new high-precision polarization observables from ELSA. The Julich-Bonn group has finalized the analysis of pion-induced meson-baryon production, the photoproduction of pions and eta mesons, and (almost) the KΛ final state. Lastly, as data become more precise, statistical aspects in the analysis of excited baryons become increasingly relevant, and several advances in this direction are proposed.
Grid-based precision aim system and method for disrupting suspect objects
Gladwell, Thomas Scott; Garretson, Justin; Hobart, Clinton G.; Monda, Mark J.
2014-06-10
A system and method for disrupting at least one component of a suspect object is provided. The system has a source for passing radiation through the suspect object, a grid board positionable adjacent the suspect object (the grid board having a plurality of grid areas, the radiation from the source passing through the grid board), a screen for receiving the radiation passing through the suspect object and generating at least one image, a weapon for deploying a discharge, and a targeting unit for displaying the image of the suspect object and aiming the weapon according to a disruption point on the displayed image and deploying the discharge into the suspect object to disable the suspect object.
Makuuchi, Michiru; Someya, Yoshiaki; Ogawa, Seiji; Takayama, Yoshihiro
2011-01-01
In visually guided grasping, possible hand shapes are computed from the geometrical features of the object, while prior knowledge about the object and the goal of the action influence both the computation and the selection of the hand shape. We investigated the system dynamics of the human brain for the pantomiming of grasping, with two aspects accentuated. One is object recognition, using objects of daily use. The subjects mimed grasping movements appropriate for an object presented in a photograph, using either a precision or a power grip. The other is the selection of grip hand shape. We manipulated the selection demands for the grip hand shape by having the subjects use the same or a different grip type on the second presentation of the identical object. Effective connectivity analysis revealed that increased selection demands enhance the interaction between the anterior intraparietal sulcus (AIP) and posterior inferior temporal gyrus (pITG), and drive converging causal influences from the AIP, pITG, and dorsolateral prefrontal cortex to the ventral premotor area (PMv). These results suggest that the dorsal and ventral visual areas interact in the pantomiming of grasping, while the PMv integrates the neural information of different regions to select the hand posture. The present study proposes a system dynamics of visually guided movement toward meaningful objects, but further research is needed to examine whether the same dynamics also holds in real grasping. PMID:21739528
Lansdale, Mark W; Oliff, Lynda; Baguley, Thom S
2005-06-01
The authors investigated whether memory for object locations in pictures could be exploited to address known difficulties of designing query languages for picture databases. M. W. Lansdale's (1998) model of location memory was adapted to 4 experiments observing memory for everyday pictures. These experiments showed that location memory is quantified by 2 parameters: a probability that memory is available and a measure of its precision. Availability is determined by controlled attentional processes, whereas precision is mostly governed by picture composition beyond the viewer's control. Additionally, participants' confidence judgments were good predictors of availability but were insensitive to precision. This research suggests that databases using location memory are feasible. The implications of these findings for database design and for further research and development are discussed.
Georeferencing natural disaster impact footprints : lessons learned from the EM-DAT experience
NASA Astrophysics Data System (ADS)
Wallemacq, Pascaline; Guha Sapir, Debarati
2014-05-01
The Emergency Events Database (EM-DAT) contains data about the occurrence and consequences of all the disasters that have taken place since 1900. The main objectives of the database are to serve the purposes of humanitarian action at national and international levels and to aid decision making for disaster preparedness, as well as providing an objective base for vulnerability assessments and priority setting. EM-DAT records data on the human and economic impacts of each event as well as its location. The location is recorded as text data, namely the province, department, county, district, or village. The first purpose of geocoding (or georeferencing) the EM-DAT database is to transform the location data from text into coded data. The GAUL (Global Administrative Unit Layers) database (FAO) is used as a basis to identify the geographic footprint of the disaster, ideally to the second administrative level, and to add a unique code for each affected unit. Our first step involved georeferencing earthquakes, since their locations are precise. The second purpose is to detail the degree of precision of the georeferencing. The applications and benefits of georeferencing are manifold. Geographic information on the footprint of past (after 2000) and future natural disasters permits locating vulnerable areas in a GIS and cross-referencing data from different sources. It allows the study of elements such as the extent of a disaster and its human and economic consequences; the exposure and vulnerability of the population in space and time; and the efficiency of mitigation measures. In addition, associations between events and external factors can be identified (e.g., does famine occur in the same places as drought?), and the precision of the information in the disaster report can be evaluated. Such maps also provide valuable communication support, since they are easily understood by the wider public and policy makers. Some results from the application of georeferencing will be presented during the session, such as a study of the population potentially exposed to and affected by natural disasters in Europe, a flood vulnerability analysis in Vietnam, and the potential merging of watershed analysis and flood footprint data.
High-precision radius automatic measurement using laser differential confocal technology
NASA Astrophysics Data System (ADS)
Jiang, Hongwei; Zhao, Weiqian; Yang, Jiamiao; Guo, Yongkui; Xiao, Yang
2015-02-01
A high-precision automatic radius measurement method using laser differential confocal technology is proposed. Based on the bipolar axial intensity response of the differential confocal system, whose null point precisely corresponds to the focus of the objective, the method uses composite PID (proportional-integral-derivative) control to ensure steady motor movement during quick-trigger scanning, uses least-squares linear fitting to obtain the cat-eye and confocal positions, and then calculates the radius of curvature of the lens. By setting the number of measurement repetitions, automatic repeated precision measurement of the radius of curvature is achieved. Experiments indicate that the method has a measurement accuracy better than 2 ppm and a repeatability better than 0.05 μm. In comparison with existing manual single-shot measurement, this method has higher measurement precision, stronger immunity to environmental interference, and a repeatability roughly a tenth of the former's.
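The least-squares step can be sketched simply: near focus the differential confocal axial response is bipolar and nearly linear, so a straight-line fit I(z) = az + b locates the null at z0 = -b/a, and the radius follows as the distance between the null positions found at the cat-eye and confocal points. The scan data and stage readings below are invented.

```python
# Locate the null point of the bipolar axial response by linear least squares,
# then take the radius as the distance between the cat-eye and confocal nulls.
import numpy as np

def null_point(z, intensity):
    a, b = np.polyfit(z, intensity, 1)  # least-squares linear fit I(z) = a*z + b
    return -b / a

z = np.linspace(-5e-3, 5e-3, 11)       # local axial scan (mm), synthetic
stage_cat, stage_con = 0.0, 100.0      # coarse stage readings at the two points (mm)
z_cat = stage_cat + null_point(z, 0.9 * (z - 0.8e-3))   # synthetic responses
z_con = stage_con + null_point(z, 0.9 * (z + 1.1e-3))

print(f"radius of curvature ~ {abs(z_con - z_cat):.6f} mm")
```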
Astrometry with A-Track Using Gaia DR1 Catalogue
NASA Astrophysics Data System (ADS)
Kılıç, Yücel; Erece, Orhan; Kaplan, Murat
2018-04-01
In this work, we built all-sky index files from the Gaia DR1 catalogue for high-precision astrometric field solutions and precise WCS coordinates of moving objects. For this, we used the build-astrometry-index program, part of the astrometry.net code suite. Additionally, we added astrometry.net's WCS solution tool to our previously developed software, a fast and robust pipeline for detecting moving objects such as asteroids and comets in sequential FITS images, called A-Track. Moreover, an MPC module was added to A-Track. This module is linked to an asteroid database to identify the detected objects and prepare the MPC file to report the results. After these additions, we tested the new version of the A-Track code on photometric data taken by the SI-1100 CCD with the 1-meter telescope at TÜBİTAK National Observatory, Antalya. The pipeline can be used to analyse large data archives or daily sequential data. The code is hosted on GitHub under the GNU GPL v3 license.
Archetypal TRMM Radar Profiles Identified Through Cluster Analysis
NASA Technical Reports Server (NTRS)
Boccippio, Dennis J.
2003-01-01
It is widely held that identifiable 'convective regimes' exist in nature, although precise definitions of these are elusive. Examples include land/ocean distinctions, break/monsoon behavior, seasonal differences in the Amazon (SON vs DJF), etc. These regimes are often described by differences in the realized local convective spectra, and measured by various metrics of convective intensity, depth, areal coverage, and rainfall amount. Objective regime identification may be valuable in several ways: regimes may serve as natural 'branch points' in satellite retrieval algorithms or data assimilation efforts; one example might be objective identification of regions that 'should' share a similar Z-R relationship. Similarly, objectively defined regimes may provide guidance on optimal siting of ground validation efforts. Objectively defined regimes could also serve as natural (rather than arbitrary geographic) domain 'controls' in studies of convective response to environmental forcing. Quantification of convective vertical structure has traditionally involved parametric study of prescribed quantities thought to be important to convective dynamics: maximum radar reflectivity, cloud top height, 30-35 dBZ echo top height, rain rate, etc. Individually, these parameters are somewhat deficient, as their interpretation is often nonunique (the same metric value may signify different physics in different storm realizations). Individual metrics also fail to capture the coherence and interrelationships between vertical levels available in full 3-D radar datasets. An alternative approach is discovery of natural partitions of vertical structure in a globally representative dataset, or 'archetypal' reflectivity profiles. In this study, this is accomplished through cluster analysis of a very large sample (~10^7) of TRMM-PR reflectivity columns. Once achieved, the rain-conditional and unconditional 'mix' of archetypal profile types in a given location and/or season provides a description of the local convective spectrum which retains vertical structure information. A further cluster analysis of these 'mixes' can identify recurrent convective spectra. These are a first step towards objective identification of convective regimes, and towards answering the question: 'What are the most convectively similar locations in the world?'
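The clustering step itself is conceptually simple: each radar column is a vector of reflectivity versus height, and a partitioning algorithm groups a large sample of columns into archetypal profiles (the cluster centroids). The sketch below uses k-means on synthetic two-regime data; the study's actual algorithm settings, preprocessing, and cluster count are not reproduced here.

```python
# Toy profile clustering: columns of reflectivity vs. height are grouped into
# archetypes; the centroids are the "archetypal profiles", and the label
# counts give the local mix of profile types.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
heights = np.arange(0, 18, 1.5)        # km; a 12-level vertical grid (assumed)

shallow = 30 * np.exp(-((heights - 2) / 2.5) ** 2)   # invented shallow regime
deep = 45 * np.exp(-((heights - 6) / 5.0) ** 2)      # invented deep regime
columns = np.vstack([shallow + rng.normal(0, 2, (500, heights.size)),
                     deep + rng.normal(0, 2, (500, heights.size))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(columns)
print(np.round(km.cluster_centers_, 1))   # recovered archetypal profiles
print(np.bincount(km.labels_))            # unconditional mix of profile types
```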
Navigation integrity monitoring and obstacle detection for enhanced-vision systems
NASA Astrophysics Data System (ADS)
Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter
2001-08-01
Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision highly depends on both the accuracy of the database used and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear; the outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar-image-based navigation is performed, without using either precision navigation or detailed database information, to determine the aircraft's position relative to the runway. The performance of our approach is demonstrated with real data acquired during extensive flight tests to several airports in Northern Germany.
Measurement Model and Precision Analysis of Accelerometers for Maglev Vibration Isolation Platforms.
Wu, Qianqian; Yue, Honghao; Liu, Rongqiang; Zhang, Xiaoyou; Ding, Liang; Liang, Tian; Deng, Zongquan
2015-08-14
High precision measurement of acceleration levels is required to allow active control for vibration isolation platforms. It is necessary to propose an accelerometer configuration measurement model that yields such a high measuring precision. In this paper, an accelerometer configuration to improve measurement accuracy is proposed. The corresponding calculation formulas of the angular acceleration were derived through theoretical analysis. A method is presented to minimize angular acceleration noise based on analysis of the root mean square noise of the angular acceleration. Moreover, the influence of installation position errors and accelerometer orientation errors on the calculation precision of the angular acceleration is studied. Comparisons of the output differences between the proposed configuration and the previous planar triangle configuration under the same installation errors are conducted by simulation. The simulation results show that installation errors have a relatively small impact on the calculation accuracy of the proposed configuration. To further verify the high calculation precision of the proposed configuration, experiments are carried out for both the proposed configuration and the planar triangle configuration. On the basis of the results of simulations and experiments, it can be concluded that the proposed configuration has higher angular acceleration calculation precision and can be applied to different platforms.
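The rigid-body principle underlying such configuration formulas can be stated in one line: two accelerometers a known baseline apart measure tangential accelerations whose difference, divided by the baseline, is the angular acceleration; with independent sensor noise the resulting angular-acceleration noise RMS scales as sqrt(2)·sigma/d, which is why the configuration geometry matters. The numbers below are invented, and this is the generic principle, not the paper's derived configuration.

```python
# Generic two-accelerometer illustration (not the paper's configuration):
# alpha = (a2 - a1) / d, with noise RMS sqrt(2) * sigma_a / d for independent sensors.
d = 0.40                  # baseline between the accelerometers (m), assumed
a1, a2 = 0.512, 0.548     # measured tangential accelerations (m/s^2), invented
sigma_a = 0.01            # per-sensor noise RMS (m/s^2), assumed

alpha = (a2 - a1) / d
sigma_alpha = (2 ** 0.5) * sigma_a / d
print(f"alpha = {alpha:.3f} rad/s^2, noise RMS ~ {sigma_alpha:.4f} rad/s^2")
```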
NASA Astrophysics Data System (ADS)
Chi, Sheng; Lee, Shu-Sheng; Huang, Jen-Yu; Lai, Ti-Yu; Jan, Chia-Ming; Hu, Po-Chi
2016-04-01
With the progress of optical technologies, various commercial 3D surface contour scanners are on the market nowadays. Most of them are used for reconstructing the surface profile of molds or mechanical objects larger than 50 mm×50 mm×50 mm, with scanning systems of about 300 mm×300 mm×100 mm. Few optical systems are commercialized for fast surface-profile scanning of objects smaller than 10 mm×10 mm×10 mm. Therefore, a miniature optical system has been designed and developed in this work for this purpose. While the most common scanning method in such systems is line-scan technology, we have developed a pseudo-phase-shifting digital projection technique based on projected fringes and phase reconstruction. A projector was used to project digital fringe patterns onto the object, and the fringe intensity images of the reference plane and of the sample object were recorded by a CMOS camera. The phase difference between the plane and the object can be calculated from the fringe images, and the surface profile of the object is reconstructed using these phase differences. The traditional phase-shifting method relies on a PZT actuator or a precisely controlled motor to shift the light source or grating, which is one of the limitations for high-speed scanning. In contrast to the traditional optical setup, we utilized a micro projector to project the digital fringe patterns onto the sample. This reduced the phase-shifting processing time, and the controlled phase differences between the shifted patterns became more precise. Besides, the optical path was designed for a portable scanning system, minimizing the size and the number of system components. A screwdriver section of about 7 mm×5 mm×5 mm was scanned and its surface profile successfully restored. The experimental results showed that the measurement area of our system can be smaller than 10 mm×10 mm, the precision reaches ±10 μm, and the scanning time for each surface of an object is less than 15 seconds. This proves that our system has the potential to be a fast scanner for small-object surface profiles.
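The phase recovery behind such fringe-projection systems is standard: with four projected patterns shifted by 90 degrees, the wrapped phase is atan2(I4 - I2, I1 - I3), and subtracting the reference plane's phase from the object's phase gives the map from which height is reconstructed. The sketch below uses this textbook four-step formula on synthetic 1-D fringes; it stands in for, and is not identical to, the authors' pseudo-phase-shifting implementation.

```python
# Four-step phase-shifting on synthetic fringes: recover the object-induced
# phase as the difference between object and reference wrapped phases.
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    return np.arctan2(i4 - i2, i1 - i3)

x = np.linspace(0, 4 * np.pi, 256)
bump = 0.8 * np.exp(-((x - 6.0) ** 2))        # invented object-induced phase
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
ref = [1 + np.cos(x + s) for s in shifts]
obj = [1 + np.cos(x + bump + s) for s in shifts]

dphi = wrapped_phase(*obj) - wrapped_phase(*ref)
dphi = np.angle(np.exp(1j * dphi))            # rewrap the difference into (-pi, pi]
print(f"recovered peak phase: {dphi.max():.3f} rad")  # ~0.8, matching the bump
```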
Elevation data fitting and precision analysis of Google Earth in road survey
NASA Astrophysics Data System (ADS)
Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei
2018-05-01
Objective: In order to improve the efficiency of road surveys and save manpower and material resources, this paper applies Google Earth to the feasibility study stage of road survey and design. Because Google Earth elevation data lack precision, this paper focuses on finding fitting or interpolation methods to improve the data precision, in an effort to meet the accuracy requirements of road survey and design specifications. Method: On the basis of the elevation differences at a limited number of public control points, the elevation difference at any other point can be fitted or interpolated. Thus, the precise elevation can be obtained by subtracting the fitted elevation difference from the Google Earth data. A quadratic polynomial surface fitting method, a cubic polynomial surface fitting method, the V4 interpolation method in MATLAB, and a neural network method are used in this paper to process Google Earth elevation data. Internal conformity, external conformity, and the cross-correlation coefficient are used as evaluation indexes of the data processing effect. Results: The V4 interpolation method has no fitting residual at the fitting points, while its external conformity is the largest and its accuracy improvement the worst, so the V4 interpolation method is ruled out. The internal and external conformity of the cubic polynomial surface fitting method are both better than those of the quadratic polynomial surface fitting method. The neural network method has fitting performance similar to the cubic polynomial surface fitting method, but it fits better in cases of larger elevation differences. Because the neural network is a less controllable fitting model, the cubic polynomial surface fitting method should be the primary method, with the neural network method as an auxiliary for larger elevation differences. Conclusions: The cubic polynomial surface fitting method can obviously improve the precision of Google Earth elevation data. After the precision improvement, the error of the data in hilly terrain meets the requirements of the specifications, and the data can be used in the feasibility study stage of road survey and design.
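The recommended cubic surface fit amounts to ordinary linear least squares on the monomials x^i y^j with i + j <= 3. The sketch below (invented control points and elevations) fits the elevation-difference surface and applies the correction; it illustrates the method class, not the paper's exact procedure.

```python
# Fit dz(x, y) = sum a_ij x^i y^j (i + j <= 3) to control-point elevation
# differences, then subtract the fitted difference from a raw elevation.
import numpy as np

def design_matrix(x, y, degree=3):
    cols = [x**i * y**j for i in range(degree + 1)
            for j in range(degree + 1 - i)]   # 10 monomials for degree 3
    return np.stack(cols, axis=1)

rng = np.random.default_rng(2)
x, y = rng.uniform(0, 1, 40), rng.uniform(0, 1, 40)   # control points (normalized)
dz = 1.5 + 0.4 * x - 0.7 * y + 0.2 * x * y + rng.normal(0, 0.01, 40)

coef, *_ = np.linalg.lstsq(design_matrix(x, y), dz, rcond=None)

ge_elev = 812.6                                       # raw elevation (m), invented
fit_dz = design_matrix(np.array([0.3]), np.array([0.6])) @ coef
print(f"corrected elevation: {ge_elev - fit_dz[0]:.2f} m")
```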
Precision Medicine: Functional Advancements.
Caskey, Thomas
2018-01-29
Precision medicine was conceptualized on the strength of genomic sequence analysis. High-throughput functional metrics have enhanced sequence interpretation and clinical precision. These technologies include metabolomics, magnetic resonance imaging, and iRhythm (cardiac monitoring), among others. These technologies are discussed and placed in clinical context for the medical specialties of internal medicine, pediatrics, obstetrics, and gynecology. Publications in these fields support the concept of a higher level of precision in identifying disease risk. Precise identification of disease risk has the potential to enable intervention with greater specificity, resulting in disease prevention, an important goal of precision medicine.
System and method for high precision isotope ratio destructive analysis
Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R
2013-07-02
A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).
Spacetime and orbits of bumpy black holes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vigeland, Sarah J.; Hughes, Scott A.
2010-01-15
Our Universe contains a great number of extremely compact and massive objects which are generally accepted to be black holes. Precise observations of orbital motion near candidate black holes have the potential to determine if they have the spacetime structure that general relativity demands. As a means of formulating measurements to test the black hole nature of these objects, Collins and Hughes introduced 'bumpy black holes': objects that are almost, but not quite, general relativity's black holes. The spacetimes of these objects have multipoles that deviate slightly from the black hole solution, reducing to black holes when the deviation is zero. In this paper, we extend this work in two ways. First, we show how to introduce bumps which are smoother and lead to better behaved orbits than those in the original presentation. Second, we show how to make bumpy Kerr black holes: objects which reduce to the Kerr solution when the deviation goes to zero. This greatly extends the astrophysical applicability of bumpy black holes. Using Hamilton-Jacobi techniques, we show how a spacetime's bumps are imprinted on orbital frequencies, and thus can be determined by measurements which coherently track the orbital phase of a small orbiting body. We find that in the weak field, orbits of bumpy black holes are modified exactly as expected from a Newtonian analysis of a body with a prescribed multipolar structure, reproducing well-known results from the celestial mechanics literature. The impact of bumps on strong-field orbits is many times greater than would be predicted from a Newtonian analysis, suggesting that this framework will allow observations to set robust limits on the extent to which a spacetime's multipoles deviate from the black hole expectation.
Experimental research of digital holographic microscopic measuring
NASA Astrophysics Data System (ADS)
Zhu, Xueliang; Chen, Feifei; Li, Jicheng
2013-06-01
Digital holography is a new imaging technique developed on the basis of optical holography, digital processing, and computer techniques. It uses a CCD instead of conventional silver-halide film to record the hologram, and then reconstructs the 3D contour of the object by numerical simulation. Compared with traditional optical holography, the whole process offers simpler measurement, lower cost, faster imaging, and the advantages of non-contact real-time measurement. At present it can be used for morphology detection of tiny objects, micro-deformation analysis, and measurement of the shape of biological cells, and it is an active research topic worldwide. This paper introduces the basic principles and relevant theories of optical and digital holography, and investigates the basic factors that influence the reconstructed images during the recording and reconstruction of digital holographic microscopy. To obtain a clear digital hologram, we analyzed the structure of the optical system and discussed the recording distance of the hologram. On the basis of these theoretical studies, we established a measurement setup, analyzed the experimental conditions, and adjusted the system accordingly. To achieve precise three-dimensional measurement of tiny objects, we measured a MEMS micro-device as an example, obtained its reconstructed three-dimensional contour, and thus realized three-dimensional profile measurement of a tiny object. Based on the experimental results, we analyzed the factors affecting the measurement, including the separation of the zero-order term and the twin images through the choice of object and reference beams, the recording and reconstruction distances, and the characteristics of the reconstruction light, and the measurement errors were analyzed. The results show that the device has a good degree of reliability.
Object-oriented millisecond timers for the PC.
Hamm, J P
2001-11-01
Object-oriented programming provides a useful structure for designing reusable code. Accurate millisecond timing is essential for many areas of research. With this in mind, this paper provides a Turbo Pascal unit containing an object-oriented millisecond timer. This approach allows for multiple timers to be running independently. The timers may also be set at different levels of temporal precision, such as 10^-3 s (milliseconds) or 10^-5 s. The object also is able to store the time of a flagged event for later examination without interrupting the ongoing timing operation.
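The Turbo Pascal unit itself is not reproduced here; a Python analogue of the same object-oriented design, with several timers running independently, each with its own temporal resolution and a stored flagged-event time, might look like the following sketch (all names are illustrative):

```python
import time

class MillisecondTimer:
    """Object-oriented timer: many instances can run independently,
    each reporting elapsed time quantized to a chosen resolution."""

    def __init__(self, resolution=1e-3):
        self.resolution = resolution          # e.g. 1e-3 s (ms) or 1e-5 s
        self._start = time.perf_counter()
        self.flagged = None                   # stored time of a flagged event

    def elapsed(self):
        """Elapsed time since construction, quantized to the resolution."""
        dt = time.perf_counter() - self._start
        return round(dt / self.resolution) * self.resolution

    def flag_event(self):
        """Store the current time without interrupting the ongoing timing."""
        self.flagged = self.elapsed()

# Two timers running independently at different precisions:
t1 = MillisecondTimer(resolution=1e-3)
t2 = MillisecondTimer(resolution=1e-5)
t1.flag_event()
print(t1.flagged, t2.elapsed())
```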
Autonomous Space Object Catalogue Construction and Upkeep Using Sensor Control Theory
NASA Astrophysics Data System (ADS)
Moretti, N.; Rutten, M.; Bessell, T.; Morreale, B.
The capability to track objects in space is critical to safeguard domestic and international space assets. Infrequent measurement opportunities, complex dynamics, and partial observability of the orbital state make the tracking of resident space objects nontrivial. It is not uncommon for human operators to intervene with space tracking systems, particularly in scheduling sensors. This paper details the development of a system that maintains a catalogue of geostationary objects by dynamically tasking sensors in real time to manage the uncertainty of object states. As the number of objects in space grows, the potential for collision grows exponentially. Being able to provide accurate assessments to operators regarding costly collision avoidance manoeuvres is paramount, and their accuracy is highly dependent on how object states are estimated. The system represents object state and uncertainty using particles and utilises a particle filter for state estimation. Particle filters capture the model and measurement uncertainty accurately, allowing for a more comprehensive representation of the state's probability density function. Additionally, the number of objects in space is growing disproportionately to the number of sensors used to track them. Maintaining precise positions for all objects places large loads on sensors, limiting the time available to search for new objects or track high priority objects. Rather than precisely tracking all objects, our system manages the uncertainty in orbital state for each object independently. The uncertainty is allowed to grow, and sensor data are only requested when the uncertainty must be reduced: for example, when object uncertainties overlap, leading to data association issues, or when the uncertainty grows beyond a field of view. These control laws are formulated into a cost function, which is optimised in real time to task sensors. By controlling an optical telescope, the system has been able to construct and maintain a catalogue of approximately 100 geostationary objects.
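A minimal sketch of the uncertainty-management idea, assuming a simple linear growth of along-track uncertainty and a field-of-view limit (both invented for illustration; the actual system optimizes a cost function over particle-filter states):

```python
# Illustrative sketch of uncertainty-managed tasking: each catalogue
# object carries a 1-sigma along-track uncertainty that grows between
# observations; a sensor look is requested only when the uncertainty
# approaches a limit (here, half the telescope field of view).
FIELD_OF_VIEW_DEG = 0.5
LIMIT = 0.5 * FIELD_OF_VIEW_DEG          # request data before leaving the FOV

class CatalogueObject:
    def __init__(self, name, sigma0, growth_rate):
        self.name = name
        self.sigma = sigma0              # current 1-sigma uncertainty (deg)
        self.growth_rate = growth_rate   # deg per hour, from orbit dynamics

    def propagate(self, hours):
        self.sigma += self.growth_rate * hours

    def needs_observation(self):
        return self.sigma >= LIMIT

def task_sensor(catalogue, hours):
    """Return the objects whose uncertainty must be reduced this cycle."""
    for obj in catalogue:
        obj.propagate(hours)
    return [obj for obj in catalogue if obj.needs_observation()]

catalogue = [CatalogueObject(f"GEO-{i}", 0.05, 0.01 * (1 + i % 3))
             for i in range(5)]
print([o.name for o in task_sensor(catalogue, hours=10.0)])
```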
Knops, André; Piazza, Manuela; Sengupta, Rakesh; Eger, Evelyn; Melcher, David
2014-07-23
Human cognition is characterized by severe capacity limits: we can accurately track, enumerate, or hold in mind only a small number of items at a time. It remains debated whether capacity limitations across tasks are determined by a common system. Here we measure brain activation of adult subjects performing either a visual short-term memory (vSTM) task consisting of holding in mind precise information about the orientation and position of a variable number of items, or an enumeration task consisting of assessing the number of items in those sets. We show that task-specific capacity limits (three to four items in enumeration and two to three in vSTM) are neurally reflected in the activity of the posterior parietal cortex (PPC): an identical set of voxels in this region, commonly activated during the two tasks, changed its overall response profile reflecting task-specific capacity limitations. These results, replicated in a second experiment, were further supported by multivariate pattern analysis in which we could decode the number of items presented over a larger range during enumeration than during vSTM. Finally, we simulated our results with a computational model of PPC using a saliency map architecture in which the level of mutual inhibition between nodes gives rise to capacity limitations and reflects the task-dependent precision with which objects need to be encoded (high precision for vSTM, lower precision for enumeration). Together, our work supports the existence of a common, flexible system underlying capacity limits across tasks in PPC that may take the form of a saliency map. Copyright © 2014 the authors 0270-6474/14/349857-10$15.00/0.
Non-invasive, investigative methods in skin aging.
Longo, C; Ciardo, S; Pellacani, G
2015-12-01
A precise and noninvasive quantification of aging is of utmost importance for in vivo assessment of the skin aging "stage", and thus for efforts to minimize it. Several bioengineering methods have been proposed to objectively, precisely, and non-invasively measure skin aging, and to detect early skin damage that is observable only sub-clinically. In this review we describe the most relevant methods that have emerged from recently introduced technologies, aiming at quantitatively assessing the effects of aging on the skin.
Eastern Space and Missile Center (ESMC) Capability.
1983-09-16
Fig. 4: ETR tracking telescopes. The Contraves Model 151 includes a TV camera and a wideband transmitter mounted with the main objective lens; the transmitter sends video signals from either the main-objective TV or the DAGE wide-angle TV system. The Contraves computer system records the time of day to 0.1 second and can be modified to use the ESMC precise 2400 b/s acquisition data system.
NASA Technical Reports Server (NTRS)
Cho, Hyung J.; Sukhatme, Kalyani G.; Mahoney, John C.; Penanen, Konstantin; Vargas, Rudolph, Jr.
2010-01-01
A method allows combining the functions of a heater and a thermometer in a single device, a thermistor, with minimal temperature read errors. Because thermistors typically have a much smaller thermal mass than the objects they monitor, the thermal time to equilibrate the thermometer to the temperature of the object is typically much shorter than the thermal time of the object to change its temperature in response to an external perturbation.
Numerosity underestimation with item similarity in dynamic visual display.
Au, Ricky K C; Watanabe, Katsumi
2013-01-01
The estimation of numerosity of a large number of objects in a static visual display is possible even at short durations. Such coarse approximations of numerosity are distinct from subitizing, in which the number of objects can be reported with high precision when a small number of objects are presented simultaneously. The present study examined numerosity estimation of visual objects in dynamic displays and the effect of object similarity on numerosity estimation. In the basic paradigm (Experiment 1), two streams of dots were presented and observers were asked to indicate which of the two streams contained more dots. Streams consisting of dots that were identical in color were judged as containing fewer dots than streams where the dots were different colors. This underestimation effect for identical visual items disappeared when the presentation rate was slower (Experiment 1) or the visual display was static (Experiment 2). In Experiments 3 and 4, in addition to the numerosity judgment task, observers performed an attention-demanding task at fixation. Task difficulty influenced observers' precision in the numerosity judgment task, but the underestimation effect remained evident irrespective of task difficulty. These results suggest that identical or similar visual objects presented in succession might induce substitution among themselves, leading to an illusion that there are fewer items overall; taxing attentional resources does not eliminate this underestimation effect.
Optical versus tactile geometry measurement: alternatives or counterparts
NASA Astrophysics Data System (ADS)
Lehmann, Peter
2003-05-01
This contribution deals with measuring strategies and methods for determining several geometrical features, covering the surface micro-topography and the form of mechanical objects. The measuring principles used in optical surface metrology include optical focusing profilers, confocal point-measuring and areal-measuring sensors, and interferometric principles such as white-light interferometry and speckle techniques. In comparison with stylus instruments, optical techniques provide certain advantages such as fast data acquisition, in-process applicability, and contactless measurement. However, the frequency response characteristics of optical and tactile measurements differ significantly. In addition, optical sensors are commonly more influenced by critical geometrical conditions and by the optical properties of an object. For precise form measurement, mechanical instruments have dominated until now. One reason may be that the complete 360-degree geometry of the measuring object commonly has to be analyzed. Another is that optical principles such as form-measuring interferometry fail for complex object geometries or rougher object surfaces, while other methods, e.g. fringe projection or digital holography, do not yet meet the accuracy demands of precision-engineered workpieces. Hence, a combination of mechanical concepts and optical sensors represents an interesting potential for current and future measuring tasks which require high accuracy and maximum flexibility.
NASA Astrophysics Data System (ADS)
Pfister, T.; Günther, P.; Nöthen, M.; Czarske, J.
2010-02-01
Both in production engineering and process control, multidirectional displacements, deformations and vibrations of moving or rotating components have to be measured dynamically, contactlessly and with high precision. Optical sensors would be predestined for this task, but their measurement rate is often fundamentally limited. Furthermore, almost all conventional sensors measure only one measurand, i.e. either out-of-plane or in-plane distance or velocity. To solve this problem, we present a novel phase coded heterodyne laser Doppler distance sensor (PH-LDDS), which is able to determine out-of-plane (axial) position and in-plane (lateral) velocity of rough solid-state objects simultaneously and independently with a single sensor. Due to the applied heterodyne technique, stationary or purely axially moving objects can also be measured. In addition, it is shown theoretically as well as experimentally that this sensor offers concurrently high temporal resolution and high position resolution since its position uncertainty is in principle independent of the lateral object velocity in contrast to conventional distance sensors. This is a unique feature of the PH-LDDS enabling precise and dynamic position and shape measurements also of fast moving objects. With an optimized sensor setup, an average position resolution of 240 nm was obtained.
Reliability of patient specific instrumentation in total knee arthroplasty.
Jennart, Harold; Ngo Yamben, Marie-Ange; Kyriakidis, Theofylaktos; Zorman, David
2015-12-01
The aim of this study was to compare the precision between Patient Specific Instrumentation (PSI) and Conventional Instrumentation (CI) as determined intra-operatively by a pinless navigation system. Eighty patients were included in this prospective comparative study and they were divided into two homogeneous groups. We defined an original score from 6 to 30 points to evaluate the accuracy of the position of the cutting guides. This score is based on 6 objective criteria. The analysis indicated that PSI was not superior to conventional instrumentation in the overall score (p = 0.949). Moreover, no statistically significant difference was observed for any individual criteria of our score. Level of evidence II.
Even illumination in total internal reflection fluorescence microscopy using laser light.
Fiolka, R; Belyaev, Y; Ewers, H; Stemmer, A
2008-01-01
In modern fluorescence microscopy, lasers are a widely used source of light, both for imaging in total internal reflection and epi-illumination modes. In wide-field imaging, scattering of highly coherent laser light due to imperfections in the light path typically leads to nonuniform illumination of the specimen, compromising image analysis. We report the design and construction of an objective-launch total internal reflection fluorescence microscopy system with excellent evenness of specimen illumination achieved by azimuthal rotation of the incoming illuminating laser beam. The system allows quick and precise changes of the incidence angle of the laser beam and thus can also be used in an epifluorescence mode. 2007 Wiley-Liss, Inc
NASA Astrophysics Data System (ADS)
Luo, Jia; Zhang, Min; Zhou, Xiaoling; Chen, Jianhua; Tian, Yuxin
2018-01-01
Taking four main tree species in the Wuling mountain small watershed as research objects, 57 typical sample plots were set up according to stand type, site conditions, and community structure. A total of 311 target diameter-class sample trees were selected according to diameter-class groups of different tree-height grades, and the optimal fitting models of tree height and DBH growth of the main tree species were obtained by stem analysis using the Richards, Logistic, Korf, Mitscherlich, Schumacher, and Weibull theoretical growth equations; the correlation coefficients of all optimal fitting models were above 0.9. Evaluation and testing showed that the optimal fitting models possessed good fitting precision and forecast reliability.
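As a sketch of how one of the named equations, the Richards equation, can be fitted to stem-analysis data (ages, heights, and starting values below are synthetic and illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(t, a, b, c):
    """Richards growth equation: asymptote a, rate b, shape c."""
    return a * (1.0 - np.exp(-b * t)) ** c

# Synthetic stem-analysis data: age (years) vs. tree height (m).
age = np.array([5, 10, 15, 20, 25, 30, 35, 40], dtype=float)
height = np.array([2.1, 5.0, 8.2, 10.9, 13.0, 14.5, 15.4, 16.0])

popt, _ = curve_fit(richards, age, height, p0=[20.0, 0.05, 1.5], maxfev=10000)
pred = richards(age, *popt)
r = np.corrcoef(height, pred)[0, 1]   # correlation coefficient, cf. >0.9
print(popt, r)
```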
NecroQuant: quantitative assessment of radiological necrosis
NASA Astrophysics Data System (ADS)
Hwang, Darryl H.; Mohamed, Passant; Varghese, Bino A.; Cen, Steven Y.; Duddalwar, Vinay
2017-11-01
Clinicians can now objectively quantify tumor necrosis by Hounsfield units and enhancement characteristics from multiphase contrast-enhanced CT imaging. NecroQuant has been designed to work as part of a radiomics pipeline. The software is a departure from the conventional qualitative assessment of tumor necrosis, as it provides the user (radiologists and researchers) a simple interface to precisely and interactively define and measure necrosis in contrast-enhanced CT images. Although the software is tested here on renal masses, it can be re-configured to assess tumor necrosis across a variety of tumors from different body sites, providing a generalized, open, portable, and extensible quantitative analysis platform that is widely applicable across cancer types to quantify tumor necrosis.
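NecroQuant's exact criterion is not given in the abstract; a hedged sketch of Hounsfield-unit-based necrosis quantification from two contrast phases (the enhancement cutoff and data below are invented for illustration) could look like this:

```python
import numpy as np

def necrosis_fraction(pre_hu, post_hu, roi_mask, enhancement_cutoff=20.0):
    """Fraction of tumor-ROI voxels that fail to enhance between the
    pre- and post-contrast phases (illustrative criterion, in HU)."""
    enhancement = post_hu - pre_hu
    necrotic = (enhancement < enhancement_cutoff) & roi_mask
    return necrotic.sum() / roi_mask.sum()

rng = np.random.default_rng(1)
pre = rng.normal(30.0, 5.0, (64, 64, 32))          # pre-contrast HU volume
post = pre + rng.normal(35.0, 15.0, pre.shape)     # most voxels enhance
mask = np.zeros(pre.shape, dtype=bool)
mask[16:48, 16:48, 8:24] = True                    # tumor region of interest
print(f"necrotic fraction: {necrosis_fraction(pre, post, mask):.2%}")
```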
Leung, Janet T Y; Shek, Daniel T L
2011-01-01
This paper examines the use of quantitative and qualitative approaches to study the impact of economic disadvantage on family processes and adolescent development. Quantitative research has the merits of objectivity, good predictive and explanatory power, parsimony, precision and sophistication of analysis. Qualitative research, in contrast, provides a detailed, holistic, in-depth understanding of social reality and allows illumination of new insights. With the pragmatic considerations of methodological appropriateness, design flexibility, and situational responsiveness in responding to the research inquiry, a mixed methods approach could be a possibility of integrating quantitative and qualitative approaches and offers an alternative strategy to study the impact of economic disadvantage on family processes and adolescent development.
NASA Technical Reports Server (NTRS)
Lewandowski, W.
1994-01-01
The introduction of the GPS common-view method at the beginning of the 1980's led to an immediate and dramatic improvement of international time comparisons. Since then, further progress brought the precision and accuracy of GPS common-view intercontinental time transfer from tens of nanoseconds to a few nanoseconds, even with SA activated. This achievement was made possible by the use of the following: ultra-precise ground antenna coordinates, post-processed precise ephemerides, double-frequency measurements of ionosphere, and appropriate international coordination and standardization. This paper reviews developments and applications of the GPS common-view method during the last decade and comments on possible future improvements whose objective is to attain sub-nanosecond uncertainty.
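The cancellation at the heart of the common-view method can be stated in one line: both stations observe the same satellite clock at the same epoch, and differencing removes it (and, with SA active, most of the dither):

```latex
% Stations A and B each measure their clock against the same GPS
% satellite clock S at the same epoch; the difference cancels S:
(t_A - t_S) - (t_B - t_S) = t_A - t_B
```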
Kudella, Patrick Wolfgang; Moll, Kirsten; Wahlgren, Mats; Wixforth, Achim; Westerhausen, Christoph
2016-04-18
Rosetting is associated with severe malaria and a primary cause of death in Plasmodium falciparum infections. Detailed understanding of this adhesive phenomenon may enable the development of new therapies interfering with rosette formation. For this, it is crucial to determine parameters such as rosetting and parasitaemia of laboratory strains or patient isolates, a bottleneck in malaria research due to the time consuming and error prone manual analysis of specimens. Here, the automated, free, stand-alone analysis software automated rosetting analyzer for micrographs (ARAM) to determine rosetting rate, rosette size distribution as well as parasitaemia with a convenient graphical user interface is presented. Automated rosetting analyzer for micrographs is an executable with two operation modes for automated identification of objects on images. The default mode detects red blood cells and fluorescently labelled parasitized red blood cells by combining an intensity-gradient with a threshold filter. The second mode determines object location and size distribution from a single contrast method. The obtained results are compared with standardized manual analysis. Automated rosetting analyzer for micrographs calculates statistical confidence probabilities for rosetting rate and parasitaemia. Automated rosetting analyzer for micrographs analyses 25 cell objects per second, reliably delivering results identical to manual analysis. For the first time, rosette size distribution is determined in a precise and quantitative manner employing ARAM in combination with established inhibition tests. Additionally, ARAM measures the essential observables parasitaemia, rosetting rate, and size as well as location of all detected objects, and provides confidence intervals for the determined observables. No other existing software solution offers this range of function. The second, non-malaria specific, analysis mode of ARAM offers the functionality to detect arbitrary objects. Automated rosetting analyzer for micrographs has the capability to push malaria research to a more quantitative and statistically significant level, with increased reliability due to operator independence. As an installation file for Windows 7, 8.1 and 10 is available for free, ARAM offers a novel, open and easy-to-use platform for the malaria community to elucidate rosetting.
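ARAM's own implementation is not shown in the abstract; the following sketch illustrates the second, single-contrast detection mode in generic terms (the image and threshold are synthetic; ARAM's default mode additionally applies an intensity-gradient filter):

```python
import numpy as np
from scipy import ndimage

def detect_objects(image, threshold):
    """Threshold a micrograph and return the count, size, and location
    of each connected object (a generic intensity-threshold sketch)."""
    binary = image > threshold
    labels, n = ndimage.label(binary)
    idx = range(1, n + 1)
    sizes = ndimage.sum(binary, labels, index=idx)        # pixels per object
    centers = ndimage.center_of_mass(binary, labels, idx) # (row, col) pairs
    return n, sizes, centers

rng = np.random.default_rng(2)
img = rng.normal(0.1, 0.02, (256, 256))   # background noise
img[100:110, 50:60] += 0.5                # two synthetic "cells"
img[200:212, 180:190] += 0.5
n, sizes, centers = detect_objects(img, threshold=0.3)
print(n, sizes, centers)
```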
Hirayama, Ryuichi; Fujimoto, Yasunori; Umegaki, Masao; Kagawa, Naoki; Kinoshita, Manabu; Hashimoto, Naoya; Yoshimine, Toshiki
2013-05-01
Existing training methods for neuroendoscopic surgery have mainly emphasized the acquisition of anatomical knowledge and procedures for operating an endoscope and instruments. For laparoscopic surgery, various training systems have been developed to teach handling of an endoscope as well as the manipulation of instruments for speedy and precise endoscopic performance using both hands. In endoscopic endonasal surgery (EES), especially using a binostril approach to the skull base and intradural lesions, the learning of more meticulous manipulation of instruments is mandatory, and it may be necessary to develop another type of training method for acquiring psychomotor skills for EES. Authors of the present study developed an inexpensive, portable personal trainer using a webcam and objectively evaluated its utility. Twenty-five neurosurgeons volunteered for this study and were divided into 2 groups, a novice group (19 neurosurgeons) and an experienced group (6 neurosurgeons). Before and after the exercises of set tasks with a webcam box trainer, the basic endoscopic skills of each participant were objectively assessed using the virtual reality simulator (LapSim) while executing 2 virtual tasks: grasping and instrument navigation. Scores for the following 11 performance variables were recorded: instrument time, instrument misses, instrument path length, and instrument angular path (all of which were measured in both hands), as well as tissue damage, max damage, and finally overall score. Instrument time was indicated as movement speed; instrument path length and instrument angular path as movement efficiency; and instrument misses, tissue damage, and max damage as movement precision. In the novice group, movement speed and efficiency were significantly improved after the training. In the experienced group, significant improvement was not shown in the majority of virtual tasks. Before the training, significantly greater movement speed and efficiency were demonstrated in the experienced group, but no difference in movement precision was shown between the 2 groups. After the training, no significant differences were shown between the 2 groups in the majority of the virtual tasks. Analysis revealed that the webcam trainer improved the basic skills of the novices, increasing movement speed and efficiency without sacrificing movement precision. Novices using this unique webcam trainer showed improvement in psychomotor skills for EES. The authors believe that training in terms of basic endoscopic skills is meaningful and that the webcam training system can play a role in daily off-the-job training for EES.
Lee, Jiwoo; Weon, Jin Bae; Yun, Bo-Ra; Eom, Min Rye; Ma, Choong Je
2015-01-01
Background: Artemisia apiacea is a traditional herbal medicine using treatment of eczema and jaundice in Eastern Asia, including China, Korea, and Japan. Objective: An accurate and sensitive analysis method using high performance liquid chromatography-diode array ultraviolet/visible detector and liquid chromatography–mass spectrometry for the simultaneous determination of three phytosterol compounds, campesterol, stigmasterol and daucosterol in A. apiacea was established. Materials and Methods: The analytes were separated on a Shiseido C18 column (5 μm, 4.6 mm I.D. ×250 mm) with gradient elution of 0.1% trifluoroacetic acid and acetonitrile. The flow rate was 1 mL/min and detection wavelengths were set at 205 and 254 nm. Results: Validation of the method was performed to demonstrate its linearity, precision and accuracy. The calibration curves showed good linearity (R2 > 0.9994). The limits of detection and limits of quantification were within the ranges 0.55–7.07 μg/mL and 1.67–21.44 μg/mL, respectively. And, the relative standard deviations of intra- and inter-day precision were <2.93%. The recoveries were found to be in the range of 90.03–104.91%. Conclusion: The developed method has been successfully applied to the analysis for quality control of campesterol, stigmasterol and daucosterol in A. apiacea. PMID:25829768
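The validation quantities reported above follow standard formulas; a brief sketch with synthetic calibration data (all values illustrative) shows how the linearity, LOD, and LOQ figures are typically derived:

```python
import numpy as np

# Calibration line for one analyte: concentration (ug/mL) vs. peak area.
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
area = np.array([51.0, 103.0, 252.0, 498.0, 1004.0])

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)          # standard error of the regression

lod = 3.3 * sigma / slope              # limit of detection
loq = 10.0 * sigma / slope             # limit of quantification
r2 = np.corrcoef(conc, area)[0, 1] ** 2
print(f"R^2={r2:.4f}  LOD={lod:.2f}  LOQ={loq:.2f} ug/mL")
```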
Microanalysis of dental caries using laser-scanned fluorescence
NASA Astrophysics Data System (ADS)
Barron, Joseph R.; Paton, Barry E.; Zakariasen, Kenneth L.
1992-06-01
It is well known that enamel and dentin fluoresce when illuminated by short-wavelength optical radiation. Fluorescence emission from carious and non-carious regions of teeth has been studied using a new experimental scanning technique for fluorescence analysis of dental sections. Scanning in two dimensions allows surface maps of dental caries to be created. These surface images are then enhanced using conventional and newer image-processing techniques. Carious regions can be readily identified, and contour maps can be used to graphically display the degree of damage on both surfaces and transverse sections. Numerous studies have shown that fluorescence from carious regions differs significantly from that of non-carious regions. The scanning laser fluorescence spectrometer focuses light from a 25 mW He-Cd laser at 442 nm through an objective lens onto a cross-sectional area as small as 3 micrometers in diameter. Microtome-prepared dental samples 100 micrometers thick are laid flat onto an optical bench perpendicular to the incident beam. The sample is moved under computer control in X and Y with an absolute precision of 0.1 micrometers. The backscattered light is both spatially and wavelength filtered before being measured on a long-wavelength-sensitized photomultiplier tube. High-precision analysis of dental samples allows detailed maps of carious regions to be determined. Successive images allow time studies of caries growth and even the potential for remineralization studies of decalcified regions.
Analysis of Differences Between VLBI, GNSS and SLR Earth Orientation Series
NASA Astrophysics Data System (ADS)
MacMillan, D. S.; Pavlis, E. C.; Griffiths, J.
2016-12-01
We have compared polar motion series from VLBI, GNSS, and SLR where the reference frames were aligned to ITRF2008. Three objectives of the comparisons are 1) to determine biases between the techniques, 2) to determine the precision of each technique via a 3-corner hat analysis after removing the relative biases, and 3) to evaluate the long-term stability of the EOP series. Between VLBI and GPS or SLR, there are clear annual variations ranging from 25 to 100 µas in peak-to-peak amplitude. We investigate the possible causes of these variations. In addition, there are other apparent systematic bias and rate differences. From the point of view of VLBI, it is evident that there are VLBI network dependent effects, specifically between the operational R1 and R4 weekly 24-hour sessions. We investigate the origins of these differences including network station changes in these networks over the period from 2002 to the present. The EOP biases and precisions of the five IVS VLBI CONT campaigns (since 2002) are also analyzed, since these sessions were each designed to provide the highest quality results that could be produced at the time. A possible source of biases between the geodetic techniques is the underlying reference frame used by each technique. We also consider the technique differences when ITRF2014 is applied instead of ITRF2008.
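The 3-corner hat step can be made explicit. Assuming uncorrelated errors between the techniques, per-technique variances follow from the variances of the pairwise difference series (a generic sketch with synthetic noise, not the authors' code):

```python
import numpy as np

def three_corner_hat(x_ab, x_ac, x_bc):
    """Classical 3-corner hat: recover per-technique variances from the
    variances of the pairwise difference series, assuming uncorrelated
    errors (var(a-b) = va + vb, etc.)."""
    v_ab, v_ac, v_bc = np.var(x_ab), np.var(x_ac), np.var(x_bc)
    va = 0.5 * (v_ab + v_ac - v_bc)
    vb = 0.5 * (v_ab + v_bc - v_ac)
    vc = 0.5 * (v_ac + v_bc - v_ab)
    return va, vb, vc

rng = np.random.default_rng(3)
n = 1000
a, b, c = (rng.normal(0.0, s, n) for s in (30.0, 10.0, 50.0))  # µas-level noise
va, vb, vc = three_corner_hat(a - b, a - c, b - c)
print(np.sqrt([va, vb, vc]))   # approximately [30, 10, 50]
```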
Analysis of Polar Motion Series Differences Between VLBI, GNSS, and SLR
NASA Astrophysics Data System (ADS)
MacMillan, Daniel; Pavlis, Erricos
2017-04-01
We have compared polar motion series from VLBI, GNSS, and SLR generated with a reference frame aligned to ITRF2008. Three objectives of the comparisons are 1) to determine biases between the techniques, 2) to determine the precision of each technique via a 3-corner hat analysis after removing the relative biases, and 3) to evaluate the long-term stability of polar motion series. Between VLBI, GNSS, and SLR, there are clear variations ranging from 20 to 60 µas in peak-to-peak amplitude. We investigate the possible causes of these variations. In addition, there are other apparent systematic biases and rate differences. There are VLBI network dependent effects that appear in the VLBI-GNSS and VLBI-SLR differences, specifically between the operational R1 and R4 weekly 24-hour sessions. We investigate the origins of these differences including network station changes in these networks over the period from 2002 to the present. The polar motion biases and precisions of the five IVS VLBI continuous observing CONT campaigns (since 2002) are also analyzed, since these 2-week campaigns were each designed to provide the highest quality results that could be produced at the time. A possible source of bias between the three techniques is the underlying chosen sub-network used by each technique to realize the adopted reference frame. We also consider the technique differences when ITRF2014 is used instead of ITRF2008.
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Liu, Yuan; Liang, Fuxun; Wang, Yongjun
2017-04-01
In recent years, updating the inventory of road infrastructures based on field work has been labor intensive, time consuming, and costly. Fortunately, vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. However, robust recognition of road facilities from huge volumes of 3D point clouds is still a challenging issue because of complicated and incomplete structures, occlusions and varied point densities. Most existing methods utilize point or object based features to recognize object candidates, and can only extract limited types of objects with a relatively low recognition rate, especially for incomplete and small objects. To overcome these drawbacks, this paper proposes a semantic labeling framework by combining multiple aggregation levels (point-segment-object) of features and contextual features to recognize road facilities, such as road surfaces, road boundaries, buildings, guardrails, street lamps, traffic signs, roadside-trees, power lines, and cars, for highway infrastructure inventory. The proposed method first identifies ground and non-ground points, and extracts road surface facilities from ground points. Non-ground points are segmented into individual candidate objects based on the proposed multi-rule region growing method. Then, the multiple aggregation levels of features and the contextual features (relative positions, relative directions, and spatial patterns) associated with each candidate object are calculated and fed into a SVM classifier to label the corresponding candidate object. The recognition performance of combining multiple aggregation levels and contextual features was compared with single level (point, segment, or object) based features using large-scale highway scene point clouds. Comparative studies demonstrated that the proposed semantic labeling framework significantly improves road facilities recognition precision (90.6%) and recall (91.2%), particularly for incomplete and small objects.
An overview of meta-analysis for clinicians.
Lee, Young Ho
2018-03-01
The number of medical studies being published is increasing exponentially, and clinicians must routinely process large amounts of new information. Moreover, the results of individual studies are often insufficient to provide confident answers, as their results are not consistently reproducible. A meta-analysis is a statistical method for combining the results of different studies on the same topic and it may resolve conflicts among studies. Meta-analysis is being used increasingly and plays an important role in medical research. This review introduces the basic concepts, steps, advantages, and caveats of meta-analysis, to help clinicians understand it in clinical practice and research. A major advantage of a meta-analysis is that it produces a precise estimate of the effect size, with considerably increased statistical power, which is important when the power of the primary study is limited because of a small sample size. A meta-analysis may yield conclusive results when individual studies are inconclusive. Furthermore, meta-analyses investigate the source of variation and different effects among subgroups. In summary, a meta-analysis is an objective, quantitative method that provides less biased estimates on a specific topic. Understanding how to conduct a meta-analysis aids clinicians in the process of making clinical decisions.
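The "precise estimate of the effect size" rests on inverse-variance pooling; in the fixed-effect case the weighting is:

```latex
% Fixed-effect (inverse-variance) pooling: each study i with effect
% estimate theta_i and variance v_i receives weight w_i = 1/v_i, so
\hat{\theta} = \frac{\sum_i w_i\,\theta_i}{\sum_i w_i},
\qquad
\operatorname{Var}(\hat{\theta}) = \frac{1}{\sum_i w_i},
\qquad
w_i = \frac{1}{v_i}
```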
Droplet-counting Microtitration System for Precise On-site Analysis.
Kawakubo, Susumu; Omori, Taichi; Suzuki, Yasutada; Ueta, Ikuo
2018-01-01
A new microtitration system based on the counting of titrant droplets has been developed for precise on-site analysis. The dropping rate was controlled by inserting a capillary tube as a flow resistance in a laboratory-made micropipette. The error of titration was 3% in a simulated titration with 20 droplets. The pre-addition of a titrant was proposed for precise titration within an error of 0.5%. The analytical performances were evaluated for chelate titration, redox titration and acid-base titration.
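The role of pre-addition can be seen from the droplet-quantization error: counting resolves the delivered volume only to about one droplet, so (with v_d the droplet volume and V_0 the pre-added volume; notation ours, consistent with the reported 3% and 0.5% figures):

```latex
% Counting N droplets resolves the titrant volume to about one droplet:
\frac{\Delta V}{V} \approx \frac{v_d}{N\,v_d} = \frac{1}{N}
% (1/20 = 5% resolution, of the same order as the observed 3% error).
% Pre-adding a titrant volume V_0 shrinks the counted fraction:
\frac{\Delta V}{V} \approx \frac{v_d}{V_0 + N\,v_d} \ll \frac{1}{N}
```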
Short-term adaptation of saccades does not affect smooth pursuit eye movement initiation.
Sun, Zongpeng; Smilgin, Aleksandra; Junker, Marc; Dicke, Peter W; Thier, Peter
2017-08-01
Scrutiny of the visual environment requires saccades that shift gaze to objects of interest. If the object is moving, smooth pursuit eye movements (SPEM) try to keep the image of the object within the confines of the fovea in order to ensure sufficient time for its analysis. Both saccades and SPEM can be adaptively changed by the experience of insufficiencies compromising the precision of saccades or, in the case of SPEM, the minimization of object image slip. As both forms of adaptation rely on the cerebellar oculomotor vermis (OMV), most probably deploying a shared neuronal machinery, one might expect that the adaptation of one type of eye movement should affect the kinematics of the other. In order to test this expectation, we subjected two monkeys to a standard saccadic adaptation paradigm with SPEM test trials at the end and, alternatively, the same two monkeys plus a third one to a random saccadic adaptation paradigm with interleaved trials of SPEM. In contrast to our expectation, we observed at best marginal transfer which, moreover, had little consistency across experiments and subjects. The lack of consistent transfer of saccadic adaptation decisively constrains models of the implementation of oculomotor learning in the OMV, suggesting an extensive separation of saccade- and SPEM-related synapses on P-cell dendritic trees.
NASA Astrophysics Data System (ADS)
Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad
2018-06-01
Unwanted contrast in high resolution satellite images, such as shadow areas, directly affects the results of further processing in urban remote sensing. Detecting and finding the precise position of shadows is critical in different remote sensing processing chains such as change detection, image classification and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies, and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most existing methods work at the pixel level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method, a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index, which resolves the ambiguity between shadows and water bodies. The pixel-based results are then further processed in an object-based majority analysis to detect the final shadow objects. Four different high resolution satellite images are used to validate this new approach. The results show the superiority of the proposed method over state-of-the-art shadow detection methods, with an average F-measure of 96%.
Van Lierde, Kristiane M; De Bodt, Marc; Dhaeseleer, Evelien; Wuyts, Floris; Claeys, Sofie
2010-05-01
The purpose of the present study is to measure the effectiveness of two treatment techniques--vocalization with abdominal breath support and manual circumlaryngeal therapy (MCT)--in patients with muscle tension dysphonia (MTD). The vocal quality before and after the two treatment techniques was measured by means of the dysphonia severity index (DSI), which is designed to establish an objective and quantitative correlate of the perceived vocal quality. The DSI is based on the weighted combination of the following set of voice measurements: maximum phonation time (MPT), highest frequency, lowest intensity, and jitter. The repeated-measures analysis of variance (ANOVA) revealed a significant difference between the objective overall vocal quality before and after MCT. No significant differences were measured between the objective overall vocal quality before and after vocalization with abdominal breath support. This study showed evidence that MCT is an effective treatment technique for patients with elevated laryngeal position, increased laryngeal muscle tension, and MTD. The precise way in which MCT has an effect on vocal quality has not been addressed in this experiment, but merits study. Further research into this topic could focus on electromyography (EMG) recordings in relation to vocal improvements with larger sample of subjects. (c) 2010 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
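The weighted combination is not spelled out in this abstract; the published DSI weighting (Wuyts et al., 2000), quoted here only for concreteness, is:

```latex
% DSI from MPT (s), highest frequency F0-high (Hz), lowest intensity
% I-low (dB), and jitter (%); weighting as published by Wuyts et al.:
\mathrm{DSI} = 0.13\,\mathrm{MPT} + 0.0053\,F_{0,\mathrm{high}}
             - 0.26\,I_{\mathrm{low}} - 1.18\,\mathrm{Jitter}(\%) + 12.4
```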
ERIC Educational Resources Information Center
Raghuveer, V. R.; Tripathy, B. K.
2012-01-01
With the advancements in the WWW and ICT, the e-learning domain has developed very fast. Even many educational institutions these days have shifted their focus towards the e-learning and mobile learning environments. However, from the quality of learning point of view, which is measured in terms of "active learning" taking place, the…
Enhanced Graphics for Extended Scale Range
NASA Technical Reports Server (NTRS)
Hanson, Andrew J.; Chi-Wing Fu, Philip
2012-01-01
Enhanced Graphics for Extended Scale Range is a computer program for rendering fly-through views of scene models that include visible objects differing in size by large orders of magnitude. An example would be a scene showing a person in a park at night with the moon, stars, and galaxies in the background sky. Prior graphical computer programs exhibit arithmetic and other anomalies when rendering scenes containing objects that differ enormously in scale and distance from the viewer. The present program dynamically repartitions distance scales of objects in a scene during rendering to eliminate almost all such anomalies in a way compatible with implementation in other software and in hardware accelerators. By assigning depth ranges corresponding to rendering precision requirements, either automatically or under program control, this program spaces out object scales to match the precision requirements of the rendering arithmetic. This action includes an intelligent partition of the depth buffer ranges to avoid known anomalies from this source. The program is written in C++, using OpenGL, GLUT, and GLUI standard libraries, and nVidia GEForce Vertex Shader extensions. The program has been shown to work on several computers running UNIX and Windows operating systems.
Garrido, Pilar; Aldaz, Azucena; Calleja, Miguel Ángel; De Álava, Enrique; Lamas, María Jesús; Martín, Miguel; Matías-Guiu, Xavier; Palacios, José; Vera, Ruth
2017-11-01
Precision medicine is an emerging approach for disease treatment and prevention that takes into account individual variability in genes, environment, and lifestyle for each person. Precision medicine is transforming clinical and biomedical research, as well as health care itself from a conceptual, as well as a methodological viewpoint, providing extraordinary opportunities to improve public health and lower the costs of the healthcare system. However, the implementation of precision medicine poses ethical-legal, regulatory, organizational and knowledge-related challenges. Without a national strategy, precision medicine, which will be implemented one way or another, could take place without the appropriate planning that can guarantee technical quality, equal access of all citizens to the best practices, violating the rights of patients and professionals and jeopardizing the solvency of the healthcare system. With this paper from the Spanish Societies of Medical Oncology (SEOM), Pathology (SEAP), and Hospital Pharmacy (SEFH) we highlight the need to institute a consensual national strategy for the development of precision medicine in our country, review the national and international context, comment on the opportunities and challenges for implementing precision medicine, and outline the objectives of a national strategy on precision medicine in cancer. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Sensor-based precision fertilization for field crops
USDA-ARS?s Scientific Manuscript database
From the development of the first viable variable-rate fertilizer systems in the upper Midwest USA, precision agriculture is now approaching three decades old. Early precision fertilization practice relied on laboratory analysis of soil samples collected on a spatial pattern to define the nutrient-s...
Code of Federal Regulations, 2010 CFR
2010-10-01
... program and must provide an environment in which the work can be pursued with reasonable flexibility and... are directed toward objectives for which the work or methods cannot be precisely described in advance...
Characterization and photometric performance of the Hyper Suprime-Cam Software Pipeline
NASA Astrophysics Data System (ADS)
Huang, Song; Leauthaud, Alexie; Murata, Ryoma; Bosch, James; Price, Paul; Lupton, Robert; Mandelbaum, Rachel; Lackner, Claire; Bickerton, Steven; Miyazaki, Satoshi; Coupon, Jean; Tanaka, Masayuki
2018-01-01
The Subaru Strategic Program (SSP) is an ambitious multi-band survey using the Hyper Suprime-Cam (HSC) on the Subaru telescope. The Wide layer of the SSP is both wide and deep, reaching a detection limit of i ˜ 26.0 mag. At these depths, it is challenging to achieve accurate, unbiased, and consistent photometry across all five bands. The HSC data are reduced using a pipeline that builds on the prototype pipeline for the Large Synoptic Survey Telescope. We have developed a Python-based, flexible framework to inject synthetic galaxies into real HSC images, called SynPipe. Here we explain the design and implementation of SynPipe and generate a sample of synthetic galaxies to examine the photometric performance of the HSC pipeline. For stars, we achieve 1% photometric precision at i ˜ 19.0 mag and 6% precision at i ˜ 25.0 in the i band (corresponding to statistical scatters of ˜0.01 and ˜0.06 mag respectively). For synthetic galaxies with single-Sérsic profiles, forced CModel photometry achieves 13% photometric precision at i ˜ 20.0 mag and 18% precision at i ˜ 25.0 in the i band (corresponding to statistical scatters of ˜0.15 and ˜0.22 mag respectively). We show that both forced point spread function and CModel photometry yield unbiased color estimates that are robust to seeing conditions. We identify several caveats that apply to the version of the HSC pipeline used for the first public HSC data release (DR1) that need to be taken into consideration. First, the degree to which an object is blended with other objects impacts the overall photometric performance. This is especially true for point sources: highly blended objects tend to have larger photometric uncertainties, systematically underestimated fluxes, and slightly biased colors. Second, >20% of stars at 22.5 < i < 25.0 mag can be misclassified as extended objects. Third, the current CModel algorithm tends to strongly underestimate the half-light radius and ellipticity of galaxies with i > 21.5 mag.
[Implementation of precision control to achieve the goal of schistosomiasis elimination in China].
Zhou, Xiao-nong
2016-02-01
The integrated strategy for schistosomiasis control with focus on infectious source control, which has been implemented since 2004, accelerated the progress towards schistosomiasis control in China, and achieved transmission control of the disease across the country by the end of 2015, which achieved the overall objective of the Mid- and Long-term National Plan for Prevention and Control of Schistosomiasis (2004-2015) on schedule. Then, the goal of schistosomiasis elimination by 2025 was proposed in China in 2014. To achieve this new goal on schedule, we have to address the key issues, and implement precision control measures with more precise identification of control targets, so that we are able to completely eradicate the potential factors leading to resurgence of schistosomiasis transmission and enable the achievement of schistosomiasis elimination on schedule. Precision schistosomiasis control, a theoretical innovation of precision medicine in schistosomiasis control, will provide new insights into schistosomiasis control based on the conception of precision medicine. This paper describes the definition, interventions and the role of precision schistosomiasis control in the elimination of schistosomiasis in China, and demonstrates that sustainable improvement of professionals and integrated control capability at grass-root level is a prerequisite to the implementation of schistosomiasis control, precision schistosomiasis control is a key to the further implementation of the integrated strategy for schistosomiasis control with focus on infectious source control, and precision schistosomiasis control is a guarantee of curing schistosomiasis patients and implementing schistosomiasis control program and interventions.
Benschop, R; Draaisma, D
2000-01-01
A prominent feature of late nineteenth-century psychology was its intense preoccupation with precision. Precision was at once an ideal and an argument: the quest for precision helped psychology to establish its status as a mature science, sharing a characteristic concern with the natural sciences. We will analyse how psychologists set out to produce precision in 'mental chronometry', the measurement of the duration of psychological processes. In his Leipzig laboratory, Wundt inaugurated an elaborate research programme on mental chronometry. We will look at the problem of calibration of experimental apparatus and will describe the intricate material, literary, and social technologies involved in the manufacture of precision. First, we shall discuss some of the technical problems involved in the measurement of ever shorter time-spans. Next, the Cattell-Berger experiments will help us to argue against the received view that all the precision went into the hardware, and practically none into the social organization of experimentation. Experimenters made deliberate efforts to bring themselves and their subjects under a regime of control and calibration similar to that which reigned over the experimental machinery. In Leipzig psychology, the particular blend of material and social technology resulted in a specific object of study: the generalized mind. We will then show that the distribution of precision in experimental psychology outside Leipzig demanded a concerted effort of instruments, texts, and people. It will appear that the forceful attempts to produce precision and uniformity had some rather paradoxical consequences.
Bragança, Sara; Arezes, Pedro; Carvalho, Miguel; Ashdown, Susan P; Castellucci, Ignacio; Leão, Celina
2018-01-01
Collecting anthropometric data for real-life applications demands a high degree of precision and reliability, so it is important to test new equipment that will be used for data collection. Objective: To compare two anthropometric data gathering techniques - manual methods and a Kinect-based 3D body scanner - to understand which of them gives more precise and reliable results. The data were collected using a measuring tape and a Kinect-based 3D body scanner, and were evaluated in terms of precision by considering the regular and relative Technical Error of Measurement, and in terms of reliability by using the Intraclass Correlation Coefficient, Reliability Coefficient, Standard Error of Measurement and Coefficient of Variation. The results obtained showed that both methods presented better results for reliability than for precision. Both methods showed relatively good results for these two variables; however, manual methods had better results for some body measurements. Despite being considered sufficiently precise and reliable for certain applications (e.g. the apparel industry), the 3D scanner tested showed, for almost every anthropometric measurement, a different result than the manual technique. Many companies design their products based on data obtained from 3D scanners; hence, understanding the precision and reliability of the equipment used is essential to obtain feasible results.
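The precision index used here, the Technical Error of Measurement, is straightforward to compute; a generic sketch with synthetic repeated measurements (not the study's data):

```python
import numpy as np

def technical_error_of_measurement(trial1, trial2):
    """Absolute TEM = sqrt(sum(d^2) / 2n) for two repeated measurement
    series, plus the relative TEM as a percentage of the overall mean."""
    t1, t2 = np.asarray(trial1), np.asarray(trial2)
    d = t1 - t2
    tem = np.sqrt(np.sum(d ** 2) / (2 * d.size))
    mean = np.mean(np.concatenate([t1, t2]))
    return tem, 100.0 * tem / mean

# Repeated waist-circumference measurements (cm) on the same subjects.
t1 = np.array([71.2, 84.5, 92.1, 78.3, 66.9])
t2 = np.array([70.8, 84.9, 91.5, 78.9, 67.4])
tem, rel = technical_error_of_measurement(t1, t2)
print(f"TEM={tem:.2f} cm  relative TEM={rel:.2f}%")
```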
Hout, Michael C; Goldinger, Stephen D
2015-01-01
When people look for things in the environment, they use target templates-mental representations of the objects they are attempting to locate-to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers' templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search.
Precision Mass Property Measurements Using a Five-Wire Torsion Pendulum
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
A method for measuring the moment of inertia of an object using a five-wire torsion pendulum design is described here. Typical moment of inertia measurement devices are capable of 1 part in 10^3 accuracy, and current state of the art techniques have capabilities of about one part in 10^4. The five-wire apparatus design shows the prospect of improving on the current state of the art. Current measurements using a laboratory prototype indicate a moment of inertia measurement precision better than a part in 10^4. In addition, the apparatus is shown to be capable of measuring the mass center offset from the geometric center. Typical mass center measurement devices exhibit a measurement precision up to approximately 1 micrometer. Although the five-wire pendulum was not originally designed for mass center measurements, preliminary results indicate an apparatus with a similar design may have the potential of achieving state of the art precision.
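The underlying relation for any torsion-pendulum moment-of-inertia measurement (generic, not the five-wire specifics) ties the precision budget to stiffness calibration and period timing:

```latex
% With torsional stiffness kappa and measured oscillation period T:
I = \frac{\kappa\,T^{2}}{4\pi^{2}}
% so, to first order, the relative measurement error is
\frac{\Delta I}{I} \approx \frac{\Delta\kappa}{\kappa} + 2\,\frac{\Delta T}{T}
```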
-Omic and Electronic Health Records Big Data Analytics for Precision Medicine
Wu, Po-Yen; Cheng, Chih-Wen; Kaddi, Chanchala D.; Venugopalan, Janani; Hoffman, Ryan; Wang, May D.
2017-01-01
Objective: Rapid advances of high-throughput technologies and wide adoption of electronic health records (EHRs) have led to fast accumulation of -omic and EHR data. These voluminous complex data contain abundant information for precision medicine, and big data analytics can extract such knowledge to improve the quality of health care. Methods: In this article, we present -omic and EHR data characteristics, associated challenges, and data analytics including data pre-processing, mining, and modeling. Results: To demonstrate how big data analytics enables precision medicine, we provide two case studies, including identifying disease biomarkers from multi-omic data and incorporating -omic information into EHR. Conclusion: Big data analytics is able to address -omic and EHR data challenges for a paradigm shift towards precision medicine. Significance: Big data analytics makes sense of -omic and EHR data to improve healthcare outcomes. It has long lasting societal impact. PMID:27740470
Search guidance is proportional to the categorical specificity of a target cue.
Schmidt, Joseph; Zelinsky, Gregory J
2009-10-01
Visual search studies typically assume the availability of precise target information to guide search, often a picture of the exact target. However, search targets in the real world are often defined categorically and with varying degrees of visual specificity. In five target preview conditions we manipulated the availability of target visual information in a search task for common real-world objects. Previews were: a picture of the target, an abstract textual description of the target, a precise textual description, an abstract + colour textual description, or a precise + colour textual description. Guidance generally increased as information was added to the target preview. We conclude that the information used for search guidance need not be limited to a picture of the target. Although generally less precise, to the extent that visual information can be extracted from a target label and loaded into working memory, this information too can be used to guide search.
[Medical imaging in tumor precision medicine: opportunities and challenges].
Xu, Jingjing; Tan, Yanbin; Zhang, Minming
2017-05-25
Tumor precision medicine is an emerging approach for tumor diagnosis, treatment and prevention, which takes account of individual variability of environment, lifestyle and genetic information. Tumor precision medicine is built up on the medical imaging innovations developed during the past decades, including the new hardware, new imaging agents, standardized protocols, image analysis and multimodal imaging fusion technology. Also the development of automated and reproducible analysis algorithm has extracted large amount of information from image-based features. With the continuous development and mining of tumor clinical and imaging databases, the radiogenomics, radiomics and artificial intelligence have been flourishing. Therefore, these new technological advances bring new opportunities and challenges to the application of imaging in tumor precision medicine.
The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images
NASA Astrophysics Data System (ADS)
Wang, Y.; Hu, C.; Xia, G.; Xue, H.
2018-04-01
The registration of terrestrial laser point clouds with close-range images is central to high-precision 3D reconstruction of cultural relics. Because cultural relic applications currently demand high texture resolution, registering the point cloud to the image data during object reconstruction leads to a one-point-cloud-to-multiple-images problem. In current commercial software, the pairwise registration of the two data types is carried out by manually partitioning the point cloud, manually matching point cloud and image data, and manually selecting corresponding 2D points between the image and the point cloud; this process not only greatly reduces working efficiency, but also limits registration precision and causes texture seams in the colored point cloud. To address these problems, this paper takes the whole-object image as intermediate data and uses matching techniques to establish an automatic one-to-one correspondence between the point cloud and multiple images. Matching the central-projection reflectance-intensity image of the point cloud against the optical image yields automatic correspondences between feature points of the same name, and a Rodrigues-matrix spatial similarity transformation model with weight-selection iteration achieves automatic high-accuracy registration of the two data types. This method is expected to serve high-precision, high-efficiency automatic 3D reconstruction of cultural relic objects, and has both scientific and practical value.
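To make the registration model concrete, the sketch below estimates the 3D spatial similarity transformation X' = sRX + t from matched point pairs. The abstract's Rodrigues-matrix solution with weight-selection iteration is replaced here by a closed-form SVD (Umeyama-style) estimate, an assumption for illustration rather than the paper's exact algorithm; the self-check builds a rotation from the Rodrigues formula.

```python
# Minimal sketch of a 7-parameter spatial similarity transformation
# (scale s, rotation R, translation t) estimated from matched 3D points.
import numpy as np

def estimate_similarity(src, dst):
    """Estimate s, R, t with dst ~ s * R * src + t.
    src, dst: (N, 3) arrays of matched points of the same name."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))   # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                  # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t

# Self-check with a synthetic Rodrigues rotation (angle theta about axis k).
theta, k = 0.3, np.array([0.0, 0.0, 1.0])
K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
R_true = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
src = np.random.default_rng(1).normal(size=(50, 3))
dst = 1.2 * src @ R_true.T + np.array([5.0, -2.0, 0.5])
print(estimate_similarity(src, dst))   # recovers s=1.2, R_true, t
```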
Detecting Inspection Objects of Power Line from Cable Inspection Robot LiDAR Data.
Qin, Xinyan; Wu, Gongping; Lei, Jin; Fan, Fei; Ye, Xuhui
2018-04-22
Power lines are extending into complex environments (e.g., lakes and forests), and the distribution of power lines on a tower is becoming more complicated (e.g., multi-loop and multi-bundle), so the workload and difficulty of power line inspection are increasing. Advanced LiDAR technology is increasingly being used to address these difficulties. Based on precise cable inspection robot (CIR) LiDAR data and the distinctive position and orientation system (POS) data, we propose a novel methodology to detect inspection objects surrounding power lines. The proposed method comprises four steps: firstly, the original point cloud is divided into single-span data as the processing unit; secondly, an optimal elevation threshold is constructed to remove ground points without resorting to existing filtering algorithms, improving data processing efficiency and extraction accuracy; thirdly, each power line and its surrounding data are extracted by a structured partition based on POS data (SPPD) algorithm, from "layer" to "block", according to the power line distribution; finally, a partition recognition method based on the distribution characteristics of inspection objects highlights the feature information and improves the recognition effect. Local neighborhood statistics and 3D region growing are used to recognize the different inspection objects surrounding power lines within a partition. Three datasets were collected by two CIR LiDAR systems in our study. The experimental results demonstrate an average accuracy of 90.6% and an average precision of 98.2% at the point cloud level. The successful extraction indicates that the proposed method is feasible and promising. Our study can be used to obtain precise dimensions of fittings for modeling, as well as for automatic detection and location of security risks, thereby improving the intelligence level of power line inspection. PMID:29690560
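Two generic ingredients named in this pipeline, ground removal by elevation threshold and 3D region growing over a point neighborhood, are sketched below. The threshold and radius values are illustrative assumptions; this is not the paper's SPPD algorithm.

```python
# Minimal sketch: elevation-threshold ground removal and Euclidean
# 3D region growing over a k-d tree neighborhood.
import numpy as np
from scipy.spatial import cKDTree

def remove_ground(points, z_thresh):
    """Keep points above an elevation threshold (points: (N, 3))."""
    return points[points[:, 2] > z_thresh]

def region_grow(points, radius=0.5):
    """Label points by connected component under Euclidean proximity."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            idx = stack.pop()
            for nb in tree.query_ball_point(points[idx], radius):
                if labels[nb] == -1:
                    labels[nb] = current
                    stack.append(nb)
        current += 1
    return labels
```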
The emerging potential for network analysis to inform precision cancer medicine.
Ozturk, Kivilcim; Dow, Michelle; Carlin, Daniel E; Bejar, Rafael; Carter, Hannah
2018-06-14
Precision cancer medicine promises to tailor clinical decisions to individual patients using genomic information. Indeed, the successes of drugs targeting genetic alterations in tumors, such as imatinib targeting BCR-ABL in chronic myelogenous leukemia, have demonstrated the power of this approach. However, biological systems are complex, and patients may differ not only in the specific genetic alterations in their tumors but also in more subtle interactions among such alterations. Systems biology, and more specifically network analysis, provides a framework for advancing precision medicine beyond the clinical actionability of individual mutations. Here we discuss applications of network analysis to the study of tumor biology, early methods for N-of-1 tumor genome analysis, and the path for such tools to reach the clinic.
Collaborative real-time motion video analysis by human observer and image exploitation algorithms
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2015-05-01
Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. In recent years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be suitably integrated into current video exploitation systems. In this paper, a system design is introduced which strives to combine the qualities of the human observer's perception with those of the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work in which we showed the benefits for the human observer of a user interface that utilizes the human visual focus of attention, revealed by eye gaze direction, for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving-target acquisition in video images than traditional computer mouse selection. The system design also builds on our prior work on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented in which we investigated how participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique for performing selection operations on moving targets in videos in order to initialize an object tracking function.
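The core of such a gaze + key press selection step can be stated very compactly: on key press, pick the tracked object nearest the current gaze point. The sketch below is an illustrative assumption about this kind of system, not the authors' implementation; all names and the pixel cutoff are hypothetical.

```python
# Minimal sketch of gaze-plus-key-press target selection.
import numpy as np

def select_target(gaze_xy, tracks, max_dist_px=60.0):
    """gaze_xy: (2,) gaze position; tracks: dict id -> (2,) object position.
    Returns the id of the nearest object within the cutoff, else None."""
    best_id, best_d = None, max_dist_px
    for obj_id, pos in tracks.items():
        d = float(np.hypot(*(np.asarray(pos) - np.asarray(gaze_xy))))
        if d < best_d:
            best_id, best_d = obj_id, d
    return best_id

# on a key-press event:
print(select_target((320.0, 240.0), {1: (330.0, 250.0), 2: (600.0, 90.0)}))
```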
The Necessity of Functional Analysis for Space Exploration Programs
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Breidenthal, Julian C.
2011-01-01
As NASA moves toward expanded commercial spaceflight within its human exploration capability, there is increased emphasis on how to allocate responsibilities between government and commercial organizations to achieve coordinated program objectives. The practice of program-level functional analysis offers an opportunity for improved understanding of collaborative functions among heterogeneous partners. Functional analysis is contrasted with the physical analysis more commonly done at the program level, and is shown to provide theoretical performance, risk, and safety advantages beneficial to a government-commercial partnership. Performance advantages include faster convergence to acceptable system solutions; discovery of superior solutions with higher commonality, greater simplicity and greater parallelism by substituting functional for physical redundancy to achieve robustness and safety goals; and greater organizational cohesion around program objectives. Risk advantages include avoidance of rework by revelation of some kinds of architectural and contractual mismatches before systems are specified, designed, constructed, or integrated; avoidance of cost and schedule growth by more complete and precise specifications of cost and schedule estimates; and higher likelihood of successful integration on the first try. Safety advantages include effective delineation of must-work and must-not-work functions for integrated hazard analysis, the ability to formally demonstrate completeness of safety analyses, and provably correct logic for certification of flight readiness. The key mechanism for realizing these benefits is the development of an inter-functional architecture at the program level, which reveals relationships between top-level system requirements that would otherwise be invisible using only a physical architecture. This paper describes the advantages and pitfalls of functional analysis as a means of coordinating the actions of large heterogeneous organizations for space exploration programs.
Precision Machining Technologies. Occupational Competency Analysis Profile.
ERIC Educational Resources Information Center
Ohio State Univ., Columbus. Vocational Instructional Materials Lab.
This Occupational Competency Analysis Profile (OCAP), which is one of a series of OCAPs developed to identify the skills that Ohio employers deem necessary to entering a given occupation/occupational area, lists the occupational, academic, and employability skills required of individuals entering the occupation of precision machinist. The…
Evidence for two attentional components in visual working memory.
Allen, Richard J; Baddeley, Alan D; Hitch, Graham J
2014-11-01
How does executive attentional control contribute to memory for sequences of visual objects, and what does this reveal about storage and processing in working memory? Three experiments examined the impact of a concurrent executive load (backward counting) on memory for sequences of individually presented visual objects. Experiments 1 and 2 found disruptive concurrent load effects of equivalent magnitude on memory for shapes, colors, and colored shape conjunctions (as measured by single-probe recognition). These effects were present only for Items 1 and 2 in a 3-item sequence; the final item was always impervious to this disruption. This pattern of findings was precisely replicated in Experiment 3 when using a cued verbal recall measure of shape-color binding, with error analysis providing additional insights concerning attention-related loss of early-sequence items. These findings indicate an important role for executive processes in maintaining representations of earlier encountered stimuli in an active form alongside privileged storage of the most recent stimulus.
Applying formal methods and object-oriented analysis to existing flight software
NASA Technical Reports Server (NTRS)
Cheng, Betty H. C.; Auernheimer, Brent
1993-01-01
Correctness is paramount for safety-critical software control systems. Critical software failures in medical radiation treatment, communications, and defense are familiar to the public. The significant quantity of software malfunctions regularly reported to the software engineering community, the laws concerning liability, and a recent NRC Aeronautics and Space Engineering Board report additionally motivate the use of error-reducing and defect-detection software development techniques. The benefits of formal methods in requirements-driven software development ('forward engineering') are well documented. One advantage of rigorously engineered software is that formal notations are precise, verifiable, and amenable to automated processing. This paper describes the application of formal methods to reverse engineering, in which formal specifications are developed for a portion of the shuttle on-orbit digital autopilot (DAP). The three objectives of the project were to demonstrate the use of formal methods on a shuttle application, facilitate the incorporation and validation of new requirements for the system, and verify the safety-critical properties to be exhibited by the software.
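To make "precise, verifiable notation" concrete, here is a generic pre/postcondition specification in Hoare-triple style; it is a textbook illustration, not drawn from the DAP specifications described above.

```latex
% A generic Hoare-style specification: if precondition P holds before
% command C executes, postcondition Q holds afterwards.
\[
  \{\, P \,\}\; C \;\{\, Q \,\}
\]
% e.g., a bounded increment of a counter n with limit N:
\[
  \{\, 0 \le n < N \,\}\; n := n + 1 \;\{\, 0 < n \le N \,\}
\]
```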
2003-07-10
NASA's Hubble Space Telescope (HST) precisely measured the mass of the oldest known planet in our Milky Way Galaxy, bringing closure to a decade of speculation. Scientists weren't sure whether the object was a planet or a brown dwarf; Hubble's analysis shows that the object is 2.5 times the mass of Jupiter, confirming that it is indeed a planet. At an estimated age of 13 billion years, the planet is more than twice as old as the 4.5-billion-year-old Earth. It formed around a young, sun-like star barely 1 billion years after our universe's birth in the Big Bang. The ancient planet resides in an unlikely, rough neighborhood: it orbits a peculiar pair of burned-out stars in the crowded core of a cluster of more than 100,000 stars. Its very existence provides evidence that the first planets formed rapidly, within a billion years of the Big Bang, and leads astronomers to conclude that planets may be very abundant in our galaxy. This artist's concept depicts the planet with a view of a rich, star-filled sky.
Geometric model of pseudo-distance measurement in satellite location systems
NASA Astrophysics Data System (ADS)
Panchuk, K. L.; Lyashkov, A. A.; Lyubchinov, E. V.
2018-04-01
The existing mathematical model of pseudo-distance measurement in satellite location systems does not provide an exact solution of the problem, only an approximate one. This inaccuracy, together with the bias in the measured distance from satellite to receiver, produces errors on the level of several meters, which makes refinement of the current mathematical model clearly relevant. The solution of the system of quadratic equations used in the current mathematical model is based on linearization. The objective of this paper is to refine the current mathematical model and to derive an analytical solution of the system of equations on its basis. To this end, a geometric analysis is performed and a geometric interpretation of the equations is given. As a result, an equivalent system of equations admitting an analytical solution is derived. An example implementation of the analytical solution is presented. Applying the analytical solution algorithm to the problem of pseudo-distance measurement in satellite location systems improves the accuracy of such measurements.
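For context, the quadratic system in question has the standard pseudorange form (a textbook formulation, not quoted from the paper): the receiver position and clock bias are solved from ranges to satellites at known positions.

```latex
% Standard pseudorange system: receiver position (x, y, z) and clock-bias
% term b = c\,\delta t are solved from measurements \rho_i to satellites
% at known positions (x_i, y_i, z_i), i = 1, \dots, n \ (n \ge 4):
\[
  \rho_i = \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2} + b .
\]
```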
The York Gospels: a 1000-year biological palimpsest
Fiddyment, Sarah; Vnouček, Jiří; Mattiangeli, Valeria; Speller, Camilla; Binois, Annelise; Carver, Martin; Dand, Catherine; Newfield, Timothy P.; Webb, Christopher C.; Bradley, Daniel G.; Collins, Matthew J.
2017-01-01
Medieval manuscripts, carefully curated and conserved, represent not only an irreplaceable documentary record but also a remarkable reservoir of biological information. Palaeographic and codicological investigation can often locate and date these documents with remarkable precision. The York Gospels (York Minster Ms. Add. 1) is one such codex, one of only a small collection of pre-conquest Gospel books to have survived the Reformation. By extending the non-invasive triboelectric (eraser-based) sampling technique eZooMS to include the analysis of DNA, we report a cost-effective and simple-to-use biomolecular sampling technique for parchment. We apply this combined methodology to document for the first time a rich palimpsest of biological information contained within the York Gospels, which has accumulated over the 1000-year lifespan of this cherished object that remains an active participant in the life of York Minster. These biological data provide insights into the decisions made in the selection of materials, the construction of the codex and the use history of the object. PMID:29134095
NASA Technical Reports Server (NTRS)
Turso, James; Lawrence, Charles; Litt, Jonathan
2004-01-01
The development of a wavelet-based feature extraction technique specifically targeting vibration signal changes induced by foreign-object-damage (FOD) events in gas turbine engines is described. The technique performs wavelet analysis of accelerometer signals from specified locations on the engine and is shown to be robust in the presence of significant process and sensor noise. It is envisioned that the technique will be combined with Kalman filter thermal/health parameter estimation for FOD-event detection via information fusion from these (and perhaps other) sources. Due to the lack of high-frequency FOD-event test data in the open literature, a reduced-order turbofan structural model (ROM) was synthesized from a finite element model modal analysis to support the investigation. In addition to providing test data for algorithm development, the ROM is used to determine the optimal sensor location for FOD-event detection. In the presence of significant noise, precise location of the FOD event in time was obtained using the developed wavelet-based feature.
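The sketch below illustrates the general idea of wavelet-based transient localization in an accelerometer signal: a discrete wavelet decomposition whose detail-coefficient energy spikes at the time of an injected transient. Wavelet choice, level, and the synthetic signal are illustrative assumptions, not the authors' design.

```python
# Minimal sketch: localize a transient (FOD-like) event in time from
# detail-coefficient energy of a discrete wavelet decomposition.
import numpy as np
import pywt

rng = np.random.default_rng(0)
fs = 10_000                              # sample rate, Hz (assumed)
t = np.arange(2 * fs) / fs
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.normal(size=t.size)
signal[12_000:12_050] += 4.0 * rng.normal(size=50)   # injected transient

coeffs = pywt.wavedec(signal, "db4", level=4)
d1 = coeffs[-1]                          # finest-scale detail coefficients
event_idx = int(np.argmax(d1 ** 2)) * 2  # map level-1 coeff index to samples
print(f"event detected near t = {event_idx / fs:.3f} s")
```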
NASA Technical Reports Server (NTRS)
Wiegman, E. J.; Evans, W. E.; Hadfield, R.
1975-01-01
Measurements are examined of snow coverage during the snow-melt seasons of 1973 and 1974 from LANDSAT imagery for three Columbia River subbasins. Satellite-derived snow cover inventories for the three test basins were obtained as an alternative to inventories performed with the then-current operational practice of flying small aircraft over selected snow fields. The accuracy and precision versus cost of several different interactive image analysis procedures were investigated using a display device, the Electronic Satellite Image Analysis Console. Single-band radiance thresholding was the principal technique employed for snow detection, supplemented by an editing procedure involving reference to hand-generated elevation contours. For each date and view measured, a binary thematic map or "mask" depicting the snow cover was generated by a combination of objective and subjective procedures. Photographs of the data analysis equipment (displays) are shown.
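Single-band radiance thresholding, the principal technique named here, reduces to a one-line mask. The band array and threshold below are illustrative assumptions; the elevation-contour editing step is omitted.

```python
# Minimal sketch: binary snow mask from single-band radiance thresholding.
import numpy as np

def snow_mask(band, threshold):
    """Binary thematic map: True where radiance exceeds the threshold."""
    return band > threshold

band = np.random.default_rng(0).uniform(0, 255, size=(512, 512))
mask = snow_mask(band, 180.0)
print(f"snow cover fraction: {mask.mean():.1%}")
```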
Control/structure interaction conceptual design tool
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
1990-01-01
The JPL Control/Structure Interaction Program is developing new analytical methods for designing micro-precision spacecraft with controlled structures. One of these, the Conceptual Design Tool, will illustrate innovative new approaches to the integration of multi-disciplinary analysis and design methods. The tool will be used to demonstrate homogeneity of presentation, uniform data representation across analytical methods, and integrated systems modeling. The tool differs from current 'integrated systems' that support design teams most notably in its support for the new CSI multi-disciplinary engineer. The design tool will utilize a three-dimensional solid model of the spacecraft under design as the central data organization metaphor. Various analytical methods, such as finite element structural analysis, control system analysis, and mechanical configuration layout, will store and retrieve data from a hierarchical, object-oriented data structure that supports assemblies of components with associated data and algorithms. In addition to managing numerical model data, the tool will assist the designer in organizing, stating, and tracking system requirements.
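A hierarchical, object-oriented structure for assemblies of components with associated data and algorithms can be sketched as a simple tree with property roll-up. Class and field names below are illustrative assumptions, not the tool's actual data model.

```python
# Minimal sketch: an assembly tree of components with attached data and a
# recursive roll-up algorithm (e.g., total mass of a spacecraft assembly).
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    data: dict = field(default_factory=dict)          # e.g., mass, stiffness
    children: list["Component"] = field(default_factory=list)

    def add(self, child: "Component") -> None:
        self.children.append(child)

    def total(self, key: str) -> float:
        """Roll a numeric property up the assembly tree."""
        return self.data.get(key, 0.0) + sum(c.total(key) for c in self.children)

spacecraft = Component("spacecraft")
spacecraft.add(Component("bus", {"mass": 150.0}))
spacecraft.add(Component("optics", {"mass": 40.0}))
print(spacecraft.total("mass"))   # 190.0
```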
Pazira, Parvin; Rostami Haji-Abadi, Mahdi; Zolaktaf, Vahid; Sabahi, Mohammadfarzan; Pazira, Toomaj
2016-06-08
In statistical analyses, studies to determine the validity, reliability, objectivity, and precision of new measuring devices are usually incomplete, due in part to their reliance on the correlation coefficient alone, ignoring the dispersion of the data. The aim of this study was to demonstrate the best way to determine the validity, reliability, objectivity, and accuracy of an electro-inclinometer or other measuring devices. Another purpose of this study was to answer the question of whether reliability and objectivity represent the accuracy of measuring devices. The validity of an electro-inclinometer was examined by mechanical and geometric methods. The objectivity and reliability of the device were assessed by calculating Cronbach's alpha for repeated measurements by three raters and for measurements on the same person by a mechanical goniometer and the electro-inclinometer. Measurements were performed on "hip flexion with the extended knee" and "shoulder abduction with the extended elbow." The raters measured every angle three times within an interval of two hours. A three-way ANOVA was used to determine accuracy. The results of the mechanical and geometric analysis showed that the validity of the electro-inclinometer was 1.00 and the level of error was less than one degree. The objectivity and reliability of the electro-inclinometer were 0.999, while the objectivity of the mechanical goniometer was in the range of 0.802 to 0.966 and its reliability was 0.760 to 0.961. For hip flexion, the differences between raters in joint angle measurement by electro-inclinometer and mechanical goniometer were 1.74 and 16.33 degrees (P<0.05), respectively. The differences for shoulder abduction measurement by electro-inclinometer and goniometer were 0.35 and 4.40 degrees (P<0.05). Although both the objectivity and the reliability were acceptable, the results showed that measurement error was very high for the mechanical goniometer. Therefore, it can be concluded that objectivity and reliability alone cannot determine the accuracy of a device, and it is preferable to use other statistical methods to compare and evaluate the accuracy of these two devices.
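Cronbach's alpha, the consistency statistic used above, is straightforward to compute from a subjects-by-raters matrix. The input layout and the sample values below are illustrative assumptions.

```python
# Minimal sketch: Cronbach's alpha for repeated measurements
# (raters/trials as "items", subjects as rows).
import numpy as np

def cronbach_alpha(X):
    """X: (subjects x items) matrix; alpha = k/(k-1) * (1 - sum(var_i)/var_total)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# three raters measuring hip flexion on five subjects (degrees, made up)
X = np.array([[120, 121, 119], [95, 96, 97], [110, 111, 110],
              [130, 129, 131], [101, 100, 102]])
print(f"alpha = {cronbach_alpha(X):.3f}")
```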
Wind adaptive modeling of transmission lines using minimum description length
NASA Astrophysics Data System (ADS)
Jaw, Yoonseok; Sohn, Gunho
2017-03-01
Transmission lines are moving objects whose positions are dynamically affected by wind-induced conductor motion while they are being acquired by airborne laser scanners. This wind effect produces a noisy distribution of laser points, which often hinders accurate representation of the transmission lines and thus leads to various types of modeling errors. This paper presents a new method for complete 3D transmission line model reconstruction within a framework of inner-span and across-span analysis. Notably, the proposed method can indirectly estimate, through a linear regression analysis, the noise scale that corrupts the quality of laser observations under different wind speeds. In the inner-span analysis, the individual transmission line models of each span are evaluated using Minimum Description Length theory, and erroneous transmission line segments are subsequently replaced by precise transmission line models with a wind-adaptively estimated noise scale. In the subsequent across-span analysis, detecting the precise start and end positions of the transmission line models, known as the Points of Attachment, is the key to correcting partial modeling errors as well as refining the transmission line models. Finally, geometric and topological completion of the transmission line models is achieved over the entire network. A performance evaluation was conducted over 138.5 km of corridor data. Under modest wind conditions, the results demonstrate that the proposed method improves non-wind-adaptive initial models from an average success rate of 48% to complete transmission line models in the range of 85% to 99.5%, with a positional accuracy of 9.55 cm for the transmission line models and 28 cm for the Points of Attachment (root-mean-square error).
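The flavor of MDL-based model evaluation can be shown on a noisy 1-D curve standing in for a conductor span: pick the model order minimizing a two-part code length. The polynomial model family and the code-length form below are illustrative assumptions, not the paper's exact criterion.

```python
# Minimal sketch: MDL model-order selection for a noisy curve,
# L(k) = (n/2) ln(RSS/n) + (k/2) ln n (two-part code, Gaussian residuals).
import numpy as np

def mdl_order(x, y, max_order=6):
    n = len(x)
    best = None
    for k in range(1, max_order + 1):
        coeffs = np.polyfit(x, y, k)
        rss = float(((np.polyval(coeffs, x) - y) ** 2).sum())
        L = 0.5 * n * np.log(rss / n) + 0.5 * (k + 1) * np.log(n)
        if best is None or L < best[0]:
            best = (L, k)
    return best[1]

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 200)
y = 0.8 * x ** 2 + 0.05 * rng.normal(size=x.size)   # sag-like curve + noise
print("selected order:", mdl_order(x, y))           # expect 2
```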
Precise Determination of the Baseline Between the TerraSAR-X and TanDEM-X Satellites
NASA Astrophysics Data System (ADS)
Koenig, Rolf; Rothacher, Markus; Michalak, Grzegorz; Moon, Yongjin
TerraSAR-X, launched on June 15, 2007, and TanDEM-X, to be launched in September 2009, both carry the Tracking, Occultation and Ranging (TOR) category A payload instrument package. The TOR consists of a high-precision dual-frequency GPS receiver, called the Integrated GPS Occultation Receiver (IGOR), for precise orbit determination and atmospheric sounding, and a laser retro-reflector (LRR) serving as a target for the global Satellite Laser Ranging (SLR) ground station network. The TOR is supplied by the GeoForschungsZentrum Potsdam (GFZ), Germany, and the Center for Space Research (CSR), Austin, Texas. The objective of the German/US collaboration is twofold: provision of atmospheric profiles from the occultation data for use in numerical weather prediction and climate studies, and precision SAR data processing based on precise orbits and atmospheric products. For the scientific objectives of the TanDEM-X mission, i.e., bi-static SAR together with TerraSAR-X, the dual-frequency GPS receiver is of vital importance for the millimeter-level determination of the baseline, or distance, between the two spacecraft. The paper discusses the feasibility of generating millimeter baselines using the example of GRACE, where, for validation, the distance between the two GRACE satellites is directly available from the micrometer-level intersatellite link measurements. The GRACE satellites are separated by some 200 km, whereas the TerraSAR-X/TanDEM-X formation will fly some 200 meters apart; the proposed approach is therefore also subjected to a simulation of the foreseen TerraSAR-X/TanDEM-X formation. The effects of varying space environmental conditions, possible phase-center variations, multipath, and varying spacecraft centers of mass are evaluated and discussed.
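For context, precise relative positioning of such formations conventionally rests on double-differenced carrier-phase observables, which cancel both satellite and receiver clock errors. This standard formulation is background, not quoted from the paper.

```latex
% Double-differenced carrier phase between receivers A, B and satellites
% j, k: single differences across receivers, differenced across satellites.
\[
  \nabla\Delta\phi_{AB}^{jk}
    = (\phi_A^{j} - \phi_B^{j}) - (\phi_A^{k} - \phi_B^{k})
    = \tfrac{1}{\lambda}\,\nabla\Delta\rho_{AB}^{jk}
      + \nabla\Delta N_{AB}^{jk} + \varepsilon ,
\]
% where \rho is the geometric range, N the integer ambiguity, and
% \varepsilon residual noise.
```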
Peppler, W T; Kim, W J; Ethans, K; Cowley, K C
2017-05-01
Methodological validation of dual-energy X-ray absorptiometry (DXA)-based measures of leg bone mineral density (BMD), based on the guidelines of the International Society for Clinical Densitometry (ISCD). The primary objective of this study was to determine the precision of BMD estimates at the knee and heel using the manufacturer-provided DXA acquisition algorithm. The secondary objective was to determine the smallest change in DXA-based measurement of BMD that should be surpassed (the least significant change (LSC)) before suggesting that a biological change has occurred in the distal femur, proximal tibia, and calcaneus. Academic Research Centre, Canada. Ten people with motor-complete spinal cord injury (SCI) of at least 2 years' duration and 10 people from the general population volunteered to have four DXA-based measurements taken of their femur, tibia, and calcaneus. BMDs for seven regions of interest (RIs) were calculated, as were short-term precision (root-mean-square standard deviation (RMS-SD, g cm^-2) and RMS coefficient of variation (RMS-CV, %)) and LSC. Overall, RMS-CV values were similar between the SCI (3.63-10.20%, mean 5.3%) and able-bodied (1.85-5.73%, mean 4%) cohorts, despite lower absolute BMD values at each RI in those with SCI (35% at the heel to 54% at the knee; P<0.0001). Precision was highest at the calcaneus and lowest at the femur. Except at the femur, RMS-CV values were under 6%. For DXA-based estimates of BMD at the distal femur, proximal tibia, and calcaneus, these precision values suggest that LSC values >10% are needed to detect differences between treated and untreated groups in studies aimed at reducing bone mineral loss after SCI.
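The short-term precision quantities used above follow the standard ISCD definitions (background formulas, not quoted from the paper): for m subjects each measured repeatedly,

```latex
\[
  \mathrm{RMS\text{-}SD}
    = \sqrt{\frac{1}{m}\sum_{i=1}^{m} \mathrm{SD}_i^{2}},
  \qquad
  \mathrm{RMS\text{-}CV}
    = \sqrt{\frac{1}{m}\sum_{i=1}^{m}
        \left(\frac{\mathrm{SD}_i}{\bar{x}_i}\right)^{2}},
\]
% and the least significant change at 95% confidence is
\[
  \mathrm{LSC} = 2.77 \times \text{precision error}
  \qquad (2.77 = 1.96\sqrt{2}),
\]
% with the precision error taken as RMS-SD (absolute) or RMS-CV (relative).
```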