Integrative methods for analyzing big data in precision medicine.
Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša
2016-03-01
We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With advances in technologies for capturing molecular and medical data, we have entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarker discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Fu, Chao; Ren, Xingmin; Yang, Yongfeng; Xia, Yebao; Deng, Wangqun
2018-07-01
A non-intrusive interval precise integration method (IPIM) is proposed in this paper to analyze the transient unbalance response of uncertain rotor systems. The transfer matrix method (TMM) is used to derive the deterministic equations of motion of a hollow-shaft overhung rotor. The uncertain transient dynamic problem is solved by combining Chebyshev approximation theory with the modified precise integration method (PIM). Transient response bounds are calculated by interval arithmetic on the expansion coefficients. A brief theoretical error analysis of the proposed method is provided, and its accuracy is further validated by comparison with the scanning method in simulations. Numerical results show that the IPIM maintains good accuracy in vibration prediction for the start-up transient process. Furthermore, the proposed method can also provide theoretical guidance for other transient dynamic mechanical systems with uncertainties.
NASA Technical Reports Server (NTRS)
Lo, Ching F.
1999-01-01
The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the method has been demonstrated by applying it to a set of wind tunnel test data to construct a response surface and estimate its precision intervals.
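The regression component of such precision-interval estimation can be illustrated with ordinary least squares. This sketch uses a large-sample normal approximation (z = 1.96) rather than the Student-t quantile, and is not the integrated neural-network method of the abstract:

```python
import math

def linear_fit_prediction_interval(xs, ys, x_new, z=1.96):
    """Least-squares line fit plus an approximate 95% prediction interval.

    Uses the normal approximation (z = 1.96); for small samples the
    Student-t quantile should replace z.
    """
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    resid = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
    s2 = sum(r * r for r in resid) / (n - 2)   # residual variance
    y_hat = intercept + slope * x_new
    # the prediction interval widens with distance from the mean of x
    se = math.sqrt(s2 * (1.0 + 1.0 / n + (x_new - xbar) ** 2 / sxx))
    return y_hat, y_hat - z * se, y_hat + z * se
```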
Boundary regularized integral equation formulation of the Helmholtz equation in acoustics.
Sun, Qiang; Klaseboer, Evert; Khoo, Boo-Cheong; Chan, Derek Y C
2015-01-01
A boundary integral formulation for the solution of the Helmholtz equation is developed in which all traditional singular behaviour in the boundary integrals is removed analytically. The numerical precision of this approach is illustrated with calculation of the pressure field owing to radiating bodies in acoustic wave problems. This method facilitates the use of higher order surface elements to represent boundaries, resulting in a significant reduction in the problem size with improved precision. Problems with extreme geometric aspect ratios can also be handled without diminished precision. When combined with the CHIEF method, uniqueness of the solution of the exterior acoustic problem is assured without the need to solve hypersingular integrals. PMID:26064591
Solving Ordinary Differential Equations
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1987-01-01
Initial-value ordinary differential equation solution via variable-order Adams method (SIVA/DIVA) package is collection of subroutines for solution of nonstiff ordinary differential equations. There are versions for single-precision and double-precision arithmetic. Requires fewer evaluations of derivatives than other variable-order Adams predictor/corrector methods. Option for direct integration of second-order equations makes integration of trajectory problems significantly more efficient. Written in FORTRAN 77.
Adhikari, Badri; Hou, Jie; Cheng, Jianlin
2018-03-01
In this study, we report the evaluation of the residue-residue contacts predicted by our three different methods in the CASP12 experiment, focusing on the impact of multiple sequence alignment, residue coevolution, and machine learning on contact prediction. The first method (MULTICOM-NOVEL) uses only traditional features (sequence profile, secondary structure, and solvent accessibility) with deep learning to predict contacts and serves as a baseline. The second method (MULTICOM-CONSTRUCT) uses our new alignment algorithm to generate deep multiple sequence alignments, from which coevolution-based features are derived and integrated by a neural network to predict contacts. The third method (MULTICOM-CLUSTER) is a consensus combination of the predictions of the first two methods. We evaluated our methods on 94 CASP12 domains. On a subset of 38 free-modeling domains, our methods achieved an average precision of up to 41.7% for top L/5 long-range contact predictions. The comparison of the three methods shows that the quality and effective depth of multiple sequence alignments, coevolution-based features, and the machine learning integration of coevolution-based and traditional features drive the quality of predicted protein contacts. On the full CASP12 dataset, the coevolution-based features alone improve the average precision from 28.4% to 41.6%, and the machine learning integration of all the features further raises the precision to 56.3%, when top L/5 predicted long-range contacts are evaluated. The correlation between the precision of contact prediction and the logarithm of the number of effective sequences in the alignments is 0.66. © 2017 Wiley Periodicals, Inc.
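The top-L/5 long-range precision metric used above is straightforward to compute. A sketch under the usual CASP convention (sequence separation of at least 24 residues), with hypothetical data structures:

```python
def top_l5_long_range_precision(scores, true_contacts, L, min_sep=24):
    """Precision of the top-L/5 predicted long-range contacts.

    scores: dict {(i, j): confidence}; true_contacts: set of (i, j) pairs;
    L: sequence length.  Long-range means |i - j| >= 24 (CASP convention).
    """
    long_range = [(p, s) for p, s in scores.items() if abs(p[0] - p[1]) >= min_sep]
    long_range.sort(key=lambda ps: ps[1], reverse=True)
    top = [p for p, _ in long_range[: max(1, L // 5)]]
    hits = sum(1 for p in top if p in true_contacts)
    return hits / len(top)
```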
NASA Astrophysics Data System (ADS)
Plakhov, Iu. V.; Mytsenko, A. V.; Shel'Pov, V. A.
A numerical integration method is developed that is more accurate than Everhart's (1974) implicit single-sequence approach for integrating orbits. This method can be used to solve problems of space geodesy based on the use of highly precise laser observations.
NASA Astrophysics Data System (ADS)
Ding, Zhe; Li, Li; Hu, Yujin
2018-01-01
Sophisticated engineering systems are usually assembled from subcomponents with significantly different levels of energy dissipation. Such systems therefore often contain multiple damping models, which makes them difficult to analyze. This paper aims at developing a time integration method for structural systems with multiple damping models. The dynamical system is first represented by a generally damped model. Based on this, a new extended state-space method for the damped system is derived. A modified precise integration method with Gauss-Legendre quadrature is then proposed. The numerical stability and accuracy of the proposed integration method are discussed in detail. It is verified that the method is conditionally stable and has inherent algorithmic damping, period error and amplitude decay. Numerical examples are provided to assess the performance of the proposed method compared with other methods. It is demonstrated that the method is more accurate than other methods with rather good efficiency, and that the stability condition is easily satisfied in practice.
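Gauss-Legendre quadrature, used here inside the modified precise integration method, can be illustrated by the standard 3-point rule (exact for polynomials up to degree 5):

```python
import math

def gauss_legendre_3(f, a, b):
    """3-point Gauss-Legendre quadrature on [a, b]; exact for degree <= 5."""
    nodes = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
    weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * sum(w * f(mid + half * x) for x, w in zip(nodes, weights))
```

With only three function evaluations the rule integrates x^5 on [0, 1] exactly and smooth integrands such as cos(x) to roughly six digits, which is why a few Gauss points per time step suffice inside an integrator.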
Park, Chan Woo; Moon, Yu Gyeong; Seong, Hyejeong; Jung, Soon Won; Oh, Ji-Young; Na, Bock Soon; Park, Nae-Man; Lee, Sang Seok; Im, Sung Gap; Koo, Jae Bon
2016-06-22
We demonstrate a new patterning technique for gallium-based liquid metals on flat substrates, which can provide both high pattern resolution (∼20 μm) and alignment precision as required for highly integrated circuits. In a very similar manner as in the patterning of solid metal films by photolithography and lift-off processes, the liquid metal layer painted over the whole substrate area can be selectively removed by dissolving the underlying photoresist layer, leaving behind robust liquid patterns as defined by the photolithography. This quick and simple method makes it possible to integrate fine-scale interconnects with preformed devices precisely, which is indispensable for realizing monolithically integrated stretchable circuits. As a way for constructing stretchable integrated circuits, we propose a hybrid configuration composed of rigid device regions and liquid interconnects, which is constructed on a rigid substrate first but highly stretchable after being transferred onto an elastomeric substrate. This new method can be useful in various applications requiring both high-resolution and precisely aligned patterning of gallium-based liquid metals.
A comparative study of integrators for constructing ephemerides with high precision.
NASA Astrophysics Data System (ADS)
Huang, Tian-Yi
1990-09-01
There are four criteria for evaluating integrators: local truncation error, numerical stability, computational complexity, and quality of adaptation. A review and comparative study of several numerical integration methods popular for constructing high-precision ephemerides (Adams, Cowell, Runge-Kutta-Fehlberg, Gragg-Bulirsch-Stoer extrapolation, Everhart, Taylor series, and Krogh) is presented.
Precision Learning Assessment: An Alternative to Traditional Assessment Techniques.
ERIC Educational Resources Information Center
Caltagirone, Paul J.; Glover, Christopher E.
1985-01-01
A continuous and curriculum-based assessment method, Precision Learning Assessment (PLA), which integrates precision teaching and norm-referenced techniques, was applied to a math computation curriculum for 214 third graders. The resulting districtwide learning curves defining average annual progress through the computation curriculum provided…
USDA-ARS?s Scientific Manuscript database
Plant Transformation Technologies is a comprehensive, authoritative book focusing on cutting-edge plant biotechnologies, offering in-depth, forward-looking information on methods for controlled and accurate genetic engineering. In response to ever-increasing pressure for precise and efficient integr...
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as a likely vehicle for 3D television because it avoids the psychological effects associated with stereoscopic viewing. To create truly engaging three-dimensional television programmes, a virtual studio is required that generates, edits and integrates 3D content involving both virtual and real scenes. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Particular attention is given to depth extraction from captured integral 3D images. The depth calculation method based on disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD, and its use to further improve precision, are proposed and verified.
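Depth-from-disparity with SSD matching can be sketched in one dimension. This toy version uses plain grayscale SSD and a single baseline, not the colour SSD or multiple-baseline refinements of the paper:

```python
def ssd_disparity(left, right, x, half_win, max_d):
    """1-D sum-of-squared-differences block match.

    Returns the disparity d minimising the SSD between a window centred
    at x in `left` and at x - d in `right`.
    """
    best_d, best_ssd = 0, float("inf")
    for d in range(max_d + 1):
        ssd = 0.0
        for k in range(-half_win, half_win + 1):
            ssd += (left[x + k] - right[x + k - d]) ** 2
        if ssd < best_ssd:
            best_d, best_ssd = d, ssd
    return best_d

def depth_from_disparity(f, baseline, d):
    """Pinhole stereo depth: Z = f * b / d (f and d in pixels)."""
    return f * baseline / d
```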
NASA Astrophysics Data System (ADS)
Hong, Haibo; Yin, Yuehong; Chen, Xing
2016-11-01
Despite the rapid development of computer science and information technology, an efficient human-machine integrated enterprise information system for designing complex mechatronic products has still not been fully achieved, partly because of inharmonious communication among collaborators. One challenge in human-machine integration is therefore how to establish an appropriate knowledge management (KM) model to support the integration and sharing of heterogeneous product knowledge. To address the diversity of design knowledge, this article proposes an ontology-based model that achieves an unambiguous and normative representation of knowledge. First, an ontology-based human-machine integrated design framework is described, and the corresponding ontologies and sub-ontologies are established according to different purposes and scopes. Second, a similarity-calculation-based ontology integration method composed of ontology mapping and ontology merging is introduced. An ontology-searching-based knowledge sharing method is then developed. Finally, a case of human-machine integrated design of a large ultra-precision grinding machine is used to demonstrate the effectiveness of the method.
Accurate computation of gravitational field of a tesseroid
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
2018-02-01
We developed an accurate method to compute the gravitational field of a tesseroid. The method numerically integrates a surface integral representation of the gravitational potential of the tesseroid by conditionally splitting its line integration intervals and by using the double exponential quadrature rule. It then evaluates the gravitational acceleration vector and the gravity gradient tensor by numerically differentiating the numerically integrated potential. The numerical differentiation is conducted by appropriately switching between the central and the single-sided second-order difference formulas with a suitable choice of the test argument displacement. If necessary, the new method is extended to the case of a general tesseroid with a variable density profile, variable surface height functions, and/or variable intervals in longitude or in latitude. The new method is capable of computing the gravitational field of the tesseroid independently of the location of the evaluation point, whether outside, near the surface of, on the surface of, or inside the tesseroid. The achievable precision is 14-15 digits for the potential, 9-11 digits for the acceleration vector, and 6-8 digits for the gradient tensor in the double precision environment. The number of correct digits is roughly doubled when quadruple precision is employed. The new method provides a reliable procedure to compute the topographic gravitational field, especially near, on, and below the surface. It could also serve as a reference to complement and elaborate the existing approaches using Gauss-Legendre quadrature or other standard methods of numerical integration.
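The double exponential (tanh-sinh) quadrature rule used in the integration step can be sketched as follows. This is the generic rule on (-1, 1), not the authors' tesseroid-specific interval splitting:

```python
import math

def tanh_sinh(f, n=40, h=0.1):
    """Double exponential (tanh-sinh) quadrature of f on (-1, 1).

    The substitution x = tanh((pi/2) sinh(t)) makes the integrand decay
    doubly exponentially in t, so the plain trapezoidal sum in t converges
    extremely fast and endpoint behaviour is handled gracefully.
    """
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        x = math.tanh(0.5 * math.pi * math.sinh(t))
        # weight = dx/dt of the substitution
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(0.5 * math.pi * math.sinh(t)) ** 2
        total += w * f(x)
    return h * total
```

With the default 81 nodes the rule reproduces the integral of exp(x) over (-1, 1) essentially to double precision.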
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W. S.
Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
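The principle of cheap low-precision work plus full-precision residuals can be illustrated by classic mixed-precision iterative refinement for a linear system. This is an analogy to the SDC setting, not S3D's scheme; float32 is simulated with struct:

```python
import struct

def to_f32(x):
    """Round a Python float (binary64) to binary32, the 'reduced precision'."""
    return struct.unpack("f", struct.pack("f", x))[0]

def solve2_f32(A, b):
    """Cramer's rule for a 2x2 system, carried out in simulated float32."""
    a, c = (to_f32(v) for v in A[0])
    d, e = (to_f32(v) for v in A[1])
    b0, b1 = to_f32(b[0]), to_f32(b[1])
    det = to_f32(to_f32(a * e) - to_f32(c * d))
    x0 = to_f32(to_f32(b0 * e) - to_f32(c * b1)) / det
    x1 = to_f32(to_f32(a * b1) - to_f32(b0 * d)) / det
    return [to_f32(x0), to_f32(x1)]

def mixed_precision_refine(A, b, sweeps=3):
    """Low-precision solves, double-precision residuals.

    Each sweep solves only for a correction, so the iterate reaches
    double-precision accuracy although every solve ran in float32 --
    the same economy mixed-precision SDC sweeps exploit.
    """
    x = [0.0, 0.0]
    for _ in range(sweeps):
        # residual evaluated in full (double) precision
        r = [b[i] - (A[i][0] * x[0] + A[i][1] * x[1]) for i in range(2)]
        dx = solve2_f32(A, r)          # correction in reduced precision
        x = [x[0] + dx[0], x[1] + dx[1]]
    return x
```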
Klaseboer, Evert; Sepehrirahnama, Shahrokh; Chan, Derek Y C
2017-08-01
The general space-time evolution of the scattering of an incident acoustic plane wave pulse by an arbitrary configuration of targets is treated by employing a recently developed non-singular boundary integral method to solve the Helmholtz equation in the frequency domain from which the space-time solution of the wave equation is obtained using the fast Fourier transform. The non-singular boundary integral solution can enforce the radiation boundary condition at infinity exactly and can account for multiple scattering effects at all spacings between scatterers without adverse effects on the numerical precision. More generally, the absence of singular kernels in the non-singular integral equation confers high numerical stability and precision for smaller numbers of degrees of freedom. The use of fast Fourier transform to obtain the time dependence is not constrained to discrete time steps and is particularly efficient for studying the response to different incident pulses by the same configuration of scatterers. The precision that can be attained using a smaller number of Fourier components is also quantified.
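The frequency-to-time pipeline can be sketched with a plain DFT (the FFT computes the same transform, only faster). The delay transfer function in the test below is a stand-in for a frequency-domain Helmholtz solution, used here because its effect on a pulse is easy to verify:

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse discrete Fourier transform."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def propagate(pulse, transfer):
    """Apply a frequency-domain transfer function to a time-domain pulse.

    transfer(j, n) returns the complex response at frequency bin j of n.
    """
    X = dft(pulse)
    Y = [transfer(j, len(pulse)) * Xj for j, Xj in enumerate(X)]
    return [y.real for y in idft(Y)]
```

Because the transfer function is applied bin by bin, the same pulse spectrum can be reused for different scatterer responses, mirroring the efficiency noted in the abstract.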
A systematic and efficient method to compute multi-loop master integrals
NASA Astrophysics Data System (ADS)
Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu
2018-04-01
We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but also be much faster than the only existing systematic method sector decomposition. As a by product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
Design and control of the precise tracking bed based on complex electromechanical design theory
NASA Astrophysics Data System (ADS)
Ren, Changzhi; Liu, Zhao; Wu, Liao; Chen, Ken
2010-05-01
Precise tracking technology is widely used in astronomical instruments, satellite tracking and aeronautic test beds. The precise ultra-low-speed tracking drive system is a highly integrated electromechanical system, for which a complex electromechanical design method is adopted to improve the efficiency, reliability and quality of the system over its design and manufacturing cycle. The precise tracking bed is an ultra-exact, ultra-low-speed, high-precision instrument of huge inertia, whose mechanisms and operating environment at ultra low speed differ from those of conventional drive technology. This paper explores the design process based on complex electromechanical optimal design theory; a non-PID control method with CMAC feedforward compensation is used in the servo system of the precise tracking bed, and simulation results are discussed.
A precision analogue integrator system for heavy current measurement in MFDC resistance spot welding
NASA Astrophysics Data System (ADS)
Xia, Yu-Jun; Zhang, Zhong-Dian; Xia, Zhen-Xin; Zhu, Shi-Liang; Zhang, Rui
2016-02-01
In order to control and monitor the quality of middle frequency direct current (MFDC) resistance spot welding (RSW), precision measurement of the welding current up to 100 kA is required, for which Rogowski coils are at present the only viable current transducers. A highly accurate analogue integrator is thus the key to restoring the converted signals collected from the Rogowski coils. Previous studies emphasised that integration drift is a major factor influencing the performance of analogue integrators, but capacitive leakage error also has a significant impact on the result, especially in long-time pulse integration. In this article, new methods of measuring and compensating capacitive leakage error are proposed to fabricate a precision analogue integrator system for MFDC RSW. A voltage holding test is carried out to measure the integration error caused by capacitive leakage, and an original integrator with a feedback adder is designed to compensate capacitive leakage error in real time. The experimental results and statistical analysis show that the new analogue integrator system constrains both drift and capacitive leakage error, and that its effect is robust to different voltage levels of the output signals. The total integration error is limited within ±0.09 mV s⁻¹ (0.005% s⁻¹ of full scale) at a 95% confidence level, which makes it possible to achieve precision measurement of the welding current of MFDC RSW with Rogowski coils of the 0.1% accuracy class.
Alania, M; Lobato, I; Van Aert, S
2018-01-01
In this paper, both the frozen lattice (FL) and the absorptive potential (AP) approximation models are compared in terms of the integrated intensity and the precision with which atomic columns can be located from an image acquired using high angle annular dark field (HAADF) scanning transmission electron microscopy (STEM). The comparison is made for atoms of Cu, Ag, and Au. The integrated intensity is computed both for an isolated atomic column and for an atomic column inside an FCC structure. The precision has been computed using the so-called Cramér-Rao Lower Bound (CRLB), which provides a theoretical lower bound on the variance with which parameters can be estimated. It is shown that the AP model results in accurate measurements of the integrated intensity only for small detector ranges at relatively low angles and for small thicknesses. In terms of attainable precision, both methods show similar results, indicating picometer-range precision under realistic experimental conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
GTCBio's Precision Medicine Conference (July 7-8, 2016 - Boston, Massachusetts, USA).
Cole, P
2016-09-01
GTCBio's Precision Medicine Conference met this year to outline the many steps forward that precision medicine and individualized genomics has made and the challenges it still faces in technological, modeling, and standards development, interoperability and compatibility advancements, and methods of economic and societal adoption. The conference was split into four sections, 'Overcoming Challenges in the Commercialization of Precision Medicine', 'Implementation of Precision Medicine: Strategies & Technologies', 'Integrating & Interpreting Personal Genomics, Big Data, & Bioinformatics' and 'Incentivizing Precision Medicine: Regulation & Reimbursement', with this report focusing on the final two subjects. Copyright 2016 Prous Science, S.A.U. or its licensors. All rights reserved.
High Precision Edge Detection Algorithm for Mechanical Parts
NASA Astrophysics Data System (ADS)
Duan, Zhenyun; Wang, Ning; Fu, Jingshun; Zhao, Wenhui; Duan, Boqiang; Zhao, Jungui
2018-04-01
High-precision, high-efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on a Gaussian integral model is proposed. For this purpose, the Gaussian integral model of the step-edge normal section line of the backlight image is constructed, combining the point spread function with the single-step model. The gray values of discrete points on the normal section line of the pixel edge are then calculated by surface interpolation, and the coordinate and gray information affected by noise are fitted to the Gaussian integral model. A precise subpixel edge location is then determined by searching for the mean point. Finally, a gear tooth was measured with an M&M3525 gear measurement center to verify the proposed algorithm. Theoretical analysis and experimental results show that local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms, and that both subpixel edge location accuracy and computation speed are improved. The maximum error of the gear tooth profile total deviation is 1.9 μm compared with the measurement result of the gear measurement center, indicating that the method is sufficiently reliable for high-precision measurement.
Precise positioning of an ion in an integrated Paul trap-cavity system using radiofrequency signals
NASA Astrophysics Data System (ADS)
Kassa, Ezra; Takahashi, Hiroki; Christoforou, Costas; Keller, Matthias
2018-03-01
We report a novel miniature Paul ion trap design with an integrated optical fibre cavity which can serve as a building block for a fibre-linked quantum network. In such cavity quantum electrodynamic set-ups, the optimal coupling of the ions to the cavity mode is of vital importance and this is achieved by moving the ion relative to the cavity mode. The trap presented herein features an endcap-style design complemented with extra electrodes on which additional radiofrequency voltages are applied to fully control the pseudopotential minimum in three dimensions. This method lifts the need to use three-dimensional translation stages for moving the fibre cavity with respect to the ion and achieves high integrability, mechanical rigidity and scalability. Not based on modifying the capacitive load of the trap, this method leads to precise control of the pseudopotential minimum allowing the ion to be moved with precisions limited only by the ion's position spread. We demonstrate this by coupling the ion to the fibre cavity and probing the cavity mode profile.
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2013-07-01
We propose, describe, and demonstrate a new numerically stable implementation of the extended boundary-condition method (EBCM) to compute the T-matrix for electromagnetic scattering by spheroidal particles. Our approach relies on the fact that for many of the EBCM integrals in the special case of spheroids, a leading part of the integrand integrates exactly to zero, which causes catastrophic loss of precision in numerical computations. This feature was in fact first pointed out by Waterman in the context of acoustic scattering and electromagnetic scattering by infinite cylinders. We have recently studied it in detail in the case of electromagnetic scattering by particles. Based on this study, the principle of our new implementation is therefore to compute all the integrands without the problematic part to avoid the primary cause of loss of precision. Particular attention is also given to choosing the algorithms that minimise loss of precision in every step of the method, without compromising on speed. We show that the resulting implementation can efficiently compute in double precision arithmetic the T-matrix and therefore optical properties of spheroidal particles to a high precision, often down to a remarkable accuracy (10⁻¹⁰ relative error), over a wide range of parameters that are typically considered problematic. We discuss examples such as high-aspect-ratio metallic nanorods and large size parameter (≈35) dielectric particles, which had been previously modelled only using quadruple-precision arithmetic codes.
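The catastrophic cancellation underlying such loss of precision is easy to demonstrate: subtracting two nearly equal quantities destroys significant digits, which is why reorganising the computation (here with the library function expm1) matters:

```python
import math

def naive_expm1(x):
    """exp(x) - 1 computed directly; for small x the subtraction cancels
    almost all significant digits of the result."""
    return math.exp(x) - 1.0

def relative_error(approx, exact):
    return abs(approx - exact) / abs(exact)

# math.expm1 evaluates the same quantity from a rearranged series and
# keeps full double precision even for tiny arguments -- analogous to
# removing the exactly-cancelling part of the EBCM integrands.
```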
[Precision nutrition in the era of precision medicine].
Chen, P Z; Wang, H
2016-12-06
Precision medicine has been increasingly incorporated into clinical practice and is enabling a new era for disease prevention and treatment. As an important constituent of precision medicine, precision nutrition has also been drawing more attention during physical examinations. The main aim of precision nutrition is to provide safe and efficient intervention methods for disease treatment and management, through fully considering the genetics, lifestyle (dietary, exercise and lifestyle choices), metabolic status, gut microbiota and physiological status (nutrient level and disease status) of individuals. Three major components should be considered in precision nutrition, including individual criteria for sufficient nutritional status, biomarker monitoring or techniques for nutrient detection and the applicable therapeutic or intervention methods. It was suggested that, in clinical practice, many inherited and chronic metabolic diseases might be prevented or managed through precision nutritional intervention. For generally healthy populations, because lifestyles, dietary factors, genetic factors and environmental exposures vary among individuals, precision nutrition is warranted to improve their physical activity and reduce disease risks. In summary, research and practice is leading toward precision nutrition becoming an integral constituent of clinical nutrition and disease prevention in the era of precision medicine.
High-precision radius automatic measurement using laser differential confocal technology
NASA Astrophysics Data System (ADS)
Jiang, Hongwei; Zhao, Weiqian; Yang, Jiamiao; Guo, Yongkui; Xiao, Yang
2015-02-01
A high-precision automatic radius measurement method using laser differential confocal technology is proposed. Based on the property of the axial intensity curve that its null point precisely corresponds to the focus of the objective, together with its bipolar property, the method uses composite PID (proportional-integral-derivative) control to ensure steady motor movement during quick-trigger scanning, uses least-squares linear fitting to obtain the cat's-eye and confocal positions, and then calculates the radius of curvature of the lens. By setting the number of measurement repetitions, automatic repeated precision measurement of the radius of curvature is achieved. Experiments indicate that the method has a measurement accuracy of better than 2 ppm and a repeatability of better than 0.05 μm. In comparison with the existing manual single measurement, this method has higher measurement precision, stronger immunity to environmental interference, and a repeatability that is only one tenth of the former's.
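The least-squares step for locating the null point of the differential confocal axial response can be sketched as a straight-line fit followed by a zero crossing; variable names are our own:

```python
def zero_crossing_position(zs, signal):
    """Fit a least-squares line through samples of a (locally linear)
    differential confocal axial response and return the axial position
    where the fitted line crosses zero."""
    n = len(zs)
    zbar = sum(zs) / n
    sbar = sum(signal) / n
    szz = sum((z - zbar) ** 2 for z in zs)
    szs = sum((z - zbar) * (s - sbar) for z, s in zip(zs, signal))
    slope = szs / szz
    intercept = sbar - slope * zbar
    return -intercept / slope
```

Fitting the whole near-linear segment rather than interpolating between two samples averages out noise, which is what gives the null-point localisation its precision.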
A Dynamic Precision Evaluation Method for the Star Sensor in the Stellar-Inertial Navigation System.
Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang
2017-06-28
Integrating the advantages of an INS (inertial navigation system) and a star sensor, the stellar-inertial navigation system has been used for a wide variety of applications. The star sensor is a high-precision attitude measurement instrument; determining how to validate its accuracy is therefore critical in guaranteeing its practical precision. Dynamic precision evaluation of the star sensor is more difficult than static evaluation because of dynamic reference values and other impacts. This paper proposes a dynamic precision verification method for the star sensor that uses an inertial navigation device to realize real-time attitude accuracy measurement. Based on the gold-standard reference generated by the star simulator, the altitude and azimuth angle errors of the star sensor are calculated as evaluation criteria. To diminish the impact of factors such as sensor drift, the innovative aspect of this method is to employ the static accuracy for comparison. If the dynamic results are as good as the static results, which have accuracy comparable to the single star sensor's precision, the practical precision of the star sensor is sufficiently high to meet the requirements of the system specification. The experiments demonstrate the feasibility and effectiveness of the proposed method.
Solar Power Tower Integrated Layout and Optimization Tool
methods to reduce the overall computational burden while generating accurate and precise results. These methods have been developed as part of the U.S. Department of Energy (DOE) SunShot Initiative research
NASA Astrophysics Data System (ADS)
Caffo, Michele; Czyż, Henryk; Gunia, Michał; Remiddi, Ettore
2009-03-01
We present the program BOKASUN for fast and precise evaluation of the Master Integrals of the two-loop self-mass sunrise diagram for arbitrary values of the internal masses and the external four-momentum. We use a combination of two methods: a Bernoulli accelerated series expansion and a Runge-Kutta numerical solution of a system of linear differential equations.
Program summary
Program title: BOKASUN
Catalogue identifier: AECG_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECG_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 9404
No. of bytes in distributed program, including test data, etc.: 104 123
Distribution format: tar.gz
Programming language: FORTRAN77
Computer: Any computer with a Fortran compiler accepting the FORTRAN77 standard. Tested on various PCs with LINUX
Operating system: LINUX
RAM: 120 kbytes
Classification: 4.4
Nature of problem: Any integral arising in the evaluation of the two-loop sunrise Feynman diagram can be expressed in terms of a given set of Master Integrals, which should be calculated numerically. The program provides a fast and precise evaluation method for the Master Integrals for arbitrary (but non-vanishing) masses and an arbitrary value of the external momentum.
Solution method: The integrals depend on three internal masses and the external momentum squared p. The method is a combination of an accelerated expansion in 1/p in its (pretty large!) region of fast convergence and of a Runge-Kutta numerical solution of a system of linear differential equations.
Running time: To obtain 4 Master Integrals on PC with 2 GHz processor it takes 3 μs for series expansion with pre-calculated coefficients, 80 μs for series expansion without pre-calculated coefficients, from a few seconds up to a few minutes for Runge-Kutta method (depending on the required accuracy and the values of the physical parameters).
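The Runge-Kutta half of the method can be sketched generically. Below is a classic fourth-order Runge-Kutta propagator for a linear ODE system y' = A(t) y, written in Python rather than the program's FORTRAN77; it illustrates the numerical scheme only, not BOKASUN's actual differential equations.

```python
import numpy as np

def rk4_linear(A, y0, t0, t1, steps):
    # classic 4th-order Runge-Kutta for the linear system y' = A(t) @ y
    y = np.array(y0, dtype=float)
    t, h = t0, (t1 - t0) / steps
    for _ in range(steps):
        k1 = A(t) @ y
        k2 = A(t + h / 2) @ (y + h / 2 * k1)
        k3 = A(t + h / 2) @ (y + h / 2 * k2)
        k4 = A(t + h) @ (y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y
```

For example, integrating the rotation system A = [[0, 1], [-1, 0]] from (1, 0) over a quarter period reproduces (cos t, -sin t) to fourth-order accuracy.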
Combined Feature Based and Shape Based Visual Tracker for Robot Navigation
NASA Technical Reports Server (NTRS)
Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.
2005-01-01
We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.
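A minimal sketch of the drift-correction idea above, assuming a 4x4 homogeneous-transform representation of pose (the step sizes and corrected value are illustrative assumptions): incremental motions from the feature tracker are chained, and a shape-based registration result periodically replaces the drifted estimate.

```python
import numpy as np

def translation(dx, dy=0.0, dz=0.0):
    # 4x4 homogeneous transform with identity rotation
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

# feature-based tracker: chain incremental 6-DOF motions each update;
# small per-step errors accumulate as drift
pose = np.eye(4)
for _ in range(10):
    pose = pose @ translation(0.101)   # true step is 0.1, so 1% drift
drifted_x = pose[0, 3]                 # 1.01 after ten steps

# shape-based tracker: model registration yields an absolute pose that
# replaces the drifted estimate (hypothetical registration result)
pose = translation(1.0)
corrected_x = pose[0, 3]
```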
Optogenetic Modulation and Multi-Electrode Analysis of Cerebellar Networks In Vivo
Kruse, Wolfgang; Krause, Martin; Aarse, Janna; Mark, Melanie D.; Manahan-Vaughan, Denise; Herlitze, Stefan
2014-01-01
The firing patterns of cerebellar Purkinje cells (PCs), the sole output of the cerebellar cortex, determine and tune motor behavior. PC firing is modulated by various inputs from different brain regions and cell types, including granule cells (GCs), climbing fibers and inhibitory interneurons. To understand how signal integration in PCs occurs and how subtle changes in the modulation of PC firing lead to adjustment of motor behaviors, it is important to precisely record PC firing in vivo and to control modulatory pathways in a spatio-temporal manner. Combining optogenetic and multi-electrode approaches, we established a new method to integrate light guides into a multi-electrode system. With this method we are able to variably position the light guide in defined regions relative to the recording electrode with micrometer precision. We show that PC firing can be precisely monitored and modulated by light activation of channelrhodopsin-2 (ChR2) expressed in PCs, GCs and interneurons. Thus, this method is ideally suited to investigate the spatio-temporal modulation of PC firing in anesthetized and behaving mice. PMID:25144735
An integrated bioanalytical method development and validation approach: case studies.
Xue, Y-J; Melo, Brian; Vallejo, Martha; Zhao, Yuwen; Tang, Lina; Chen, Yuan-Shek; Keller, Karin M
2012-10-01
We proposed an integrated bioanalytical method development and validation approach: (1) method screening based on the analyte's physicochemical properties and metabolism information to determine the most appropriate extraction/analysis conditions; (2) preliminary stability evaluation using both quality control and incurred samples to establish sample collection, storage and processing conditions; (3) mock validation to examine method accuracy and precision and incurred sample reproducibility; and (4) method validation to confirm the results obtained during method development. This integrated approach was applied to the determination of compound I in rat plasma and compound II in rat and dog plasma. The effectiveness of the approach was demonstrated by the superior quality of three method validations: (1) a zero run-failure rate; (2) >93% of quality control results within 10% of nominal values; and (3) 99% of incurred samples within 9.2% of the original values. In addition, the rat and dog plasma methods for compound II were successfully applied to analyze more than 900 plasma samples obtained from Investigational New Drug (IND) toxicology studies in rats and dogs with near-perfect results: (1) a zero run-failure rate; (2) excellent accuracy and precision for standards and quality controls; and (3) 98% of incurred samples within 15% of the original values. Copyright © 2011 John Wiley & Sons, Ltd.
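Acceptance criteria of the kind quoted above (e.g. the fraction of results within 10% of nominal) reduce to a few lines of arithmetic; the helper and the QC values below are hypothetical, not from the study.

```python
import numpy as np

def fraction_within(values, nominal, pct):
    # fraction of results falling within +/- pct percent of nominal
    v = np.asarray(values, dtype=float)
    return float(np.mean(np.abs(v - nominal) / nominal * 100.0 <= pct))

qc = [98.0, 101.5, 95.0, 104.0, 112.0]   # hypothetical QC results, nominal 100
frac = fraction_within(qc, 100.0, 10.0)  # 4 of 5 within 10%
```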
Zhang, Chi; Zhang, Ge; Chen, Ke-ji; Lu, Ai-ping
2016-04-01
The development of an effective method for classifying human health conditions is essential for precise diagnosis and the delivery of tailored therapy to individuals. Contemporary disease classification systems have properties that limit their information content and usability. Chinese medicine pattern classification has been incorporated into disease classification, and this integrated classification method has become more precise because of increased understanding of the underlying molecular mechanisms. However, we still face the complexity of diseases and patterns in the classification of health conditions. With continuing advances in omics methodologies and instrumentation, we propose a new classification approach, molecular module classification, which applies molecular modules to classifying human health status. The aim would be to precisely define health status, provide accurate diagnoses, optimize therapeutics and improve new drug discovery strategies. Under such a scheme there would be no current disease diagnosis and no disease pattern classification; in the future, a new medicine based on this classification, molecular module medicine, could redefine health statuses and reshape clinical practice.
One- and two-photon states for quantum information
NASA Astrophysics Data System (ADS)
Peters, Nicholas A.
To assess expression stability among transgenic lines, the Recombinase Mediated Transgene Integration (RMTI) technology, using the Cre/lox-mediated site-specific gene integration system, was used. The objectives were to develop an efficient method of site-specific transgene integration and to test the effectiveness of this method by assaying transgene expression in the RMTI lines. The RMTI technology allows the precise integration of a transgene into a previously placed target genomic location containing a lox site. The efficiency of CRE-mediated site-specific integration in rice by particle bombardment was found to vary from 3 to 28% in nine different experiments. Some hemizygous site-specific integration plants derived from a homozygous target locus were found to undergo CRE-mediated reversion of the integration locus. No reversion was observed in callus; however, reverting cells may have been excluded by selection pressure. The expression of the transgene gus was studied in all 40 callus lines, 12 regenerated T0 plants, and the T1 and T2 progenies of 5 lines. The isogenic SC lines had an average expression level, based on beta-glucuronidase activity, of 158 +/- 9 units/mg protein (mean +/- SEM; n=3; variance within SC lines is expressed as the standard error of the mean, SEM), indicating a significantly higher level of expression compared to the MC lines, which had a much lower expression level of 44 +/- 8 units/mg protein (mean +/- SEM; n=3), and the imprecise lines, which had 22 +/- 8 units/mg protein (mean +/- SEM; n=3). Transgene expression in the callus cells of precise single-copy lines varied by ~3 fold, whereas that in multi-copy lines varied by ~30 fold. Furthermore, precise single-copy lines on average showed ~3.5-fold higher expression than multi-copy lines. Transgene expression in the plants of precise single-copy lines was highly variable, which was found to be due to loss of the integration caused by CRE-mediated reversion at the locus.
(Abstract shortened by UMI.)
Rasmussen, Sebastian R; Konge, Lars; Mikkelsen, Peter T; Sørensen, Mads S; Andersen, Steven A W
2016-03-01
Cognitive load (CL) theory suggests that working memory can be overloaded in complex learning tasks such as surgical technical skills training, which can impair learning. Valid and feasible methods for estimating the CL in specific learning contexts are necessary before the efficacy of CL-lowering instructional interventions can be established. This study aims to explore secondary task precision for the estimation of CL in virtual reality (VR) surgical simulation and also investigate the effects of CL-modifying factors such as simulator-integrated tutoring and repeated practice. Twenty-four participants were randomized for visual assistance by a simulator-integrated tutor function during the first 5 of 12 repeated mastoidectomy procedures on a VR temporal bone simulator. Secondary task precision was found to be significantly lower during simulation compared with nonsimulation baseline, p < .001. Contrary to expectations, simulator-integrated tutoring and repeated practice did not have an impact on secondary task precision. This finding suggests that even though considerable changes in CL are reflected in secondary task precision, it lacks sensitivity. In contrast, secondary task reaction time could be more sensitive, but requires substantial postprocessing of data. Therefore, future studies on the effect of CL modifying interventions should weigh the pros and cons of the various secondary task measurements. © The Author(s) 2015.
Network-based machine learning and graph theory algorithms for precision oncology.
Zhang, Wei; Chien, Jeremy; Yong, Jeongsik; Kuang, Rui
2017-01-01
Network-based analytics plays an increasingly important role in precision oncology. Growing evidence in recent studies suggests that cancer can be better understood through mutated or dysregulated pathways or networks rather than individual mutations and that the efficacy of repositioned drugs can be inferred from disease modules in molecular networks. This article reviews network-based machine learning and graph theory algorithms for integrative analysis of personal genomic data and biomedical knowledge bases to identify tumor-specific molecular mechanisms, candidate targets and repositioned drugs for personalized treatment. The review focuses on the algorithmic design and mathematical formulation of these methods to facilitate applications and implementations of network-based analysis in the practice of precision oncology. We review the methods applied in three scenarios to integrate genomic data and network models in different analysis pipelines, and we examine three categories of network-based approaches for repositioning drugs in drug-disease-gene networks. In addition, we perform a comprehensive subnetwork/pathway analysis of mutations in 31 cancer genome projects in the Cancer Genome Atlas and present a detailed case study on ovarian cancer. Finally, we discuss interesting observations, potential pitfalls and future directions in network-based precision oncology.
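As one concrete instance of the network propagation family reviewed here, the sketch below implements random walk with restart over a toy gene network; the adjacency matrix, restart probability and seed set are illustrative assumptions, not taken from the article.

```python
import numpy as np

def random_walk_with_restart(W, seeds, restart=0.5, tol=1e-10):
    # propagate seed-gene scores over a network: column-normalize the
    # adjacency matrix W, then iterate p = (1-r) * W_norm @ p + r * p0
    Wn = W / W.sum(axis=0, keepdims=True)
    p0 = seeds / seeds.sum()
    p = p0.copy()
    while True:
        p_next = (1 - restart) * Wn @ p + restart * p0
        if np.abs(p_next - p).max() < tol:
            return p_next
        p = p_next

# toy 4-gene network: chain 0-1-2 with node 3 attached to node 2
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = random_walk_with_restart(W, seeds=np.array([1.0, 0.0, 0.0, 0.0]))
```

The converged scores rank genes by network proximity to the mutated seed gene, the basic operation behind disease-module and drug-repositioning pipelines.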
Development of a 0.5m clear aperture Cassegrain type collimator telescope
NASA Astrophysics Data System (ADS)
Ekinci, Mustafa; Selimoǧlu, Özgür
2016-07-01
A collimator is an optical instrument used to evaluate the performance of high-precision instruments, especially space-borne high-resolution telescopes. The optical quality of the collimator telescope needs to be better than that of the instrument to be measured. This requirement makes the collimator telescope a very precise instrument, with high-quality mirrors and a structure stable enough to keep it operational under the specified conditions. In order to achieve the precision requirements and to ensure repeatability of the mounts for polishing and metrology, opto-mechanical principles are applied to the mirror mounts. The Finite Element Method is used to simulate gravity effects, integration errors and temperature variations. Finite element analysis results for the deformed optical surfaces are imported into the optical domain using Zernike polynomials to evaluate the design against the specified WFE requirements. Both mirrors are aspheric and made from Zerodur for its stability and near-zero CTE; M1 is further light-weighted. Optical quality measurements of the mirrors are achieved using custom-made CGHs on an interferometric test setup. The spider of the Cassegrain collimator telescope has a flexural adjustment mechanism driven by precise micrometers to overcome tilt errors originating from the finite stiffness of the structure and from integration errors. The collimator telescope is assembled and alignment methods are proposed.
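The Zernike step above amounts to an ordinary least-squares fit of polynomial terms to the FEA-deformed surface. The snippet below fits only the lowest terms (piston, tip/tilt, defocus) to synthetic sag data; a real WFE evaluation would use many more Zernike terms, and all names and coefficients here are assumptions.

```python
import numpy as np

def fit_zernike_low_order(x, y, sag):
    # least-squares fit of the lowest Zernike-like terms to
    # deformed-surface sag data sampled on the unit disk
    basis = np.column_stack([
        np.ones_like(x),           # piston
        x,                         # tilt about y
        y,                         # tilt about x
        2 * (x**2 + y**2) - 1,     # defocus
    ])
    coeffs, *_ = np.linalg.lstsq(basis, sag, rcond=None)
    residual = sag - basis @ coeffs   # residual counts against the WFE budget
    return coeffs, residual

# synthetic FEA-like sag over the unit disk (coefficients are assumptions)
xs = np.linspace(-1.0, 1.0, 21)
X, Y = np.meshgrid(xs, xs)
m = X**2 + Y**2 <= 1.0
x, y = X[m], Y[m]
sag = 0.5 + 0.2 * x - 0.3 * y + 0.1 * (2 * (x**2 + y**2) - 1)
coeffs, residual = fit_zernike_low_order(x, y, sag)
```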
Gram-Schmidt Orthogonalization by Gauss Elimination.
ERIC Educational Resources Information Center
Pursell, Lyle; Trimble, S. Y.
1991-01-01
Described is a hand-calculation method for the orthogonalization of a given set of vectors through the integration of Gaussian elimination with existing algorithms. Although not numerically preferable, this method adds precision as well as organization to the solution process. (JJK)
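In a standard formulation of this approach, Gaussian elimination is applied to the augmented matrix [G | V], where G = V V^T is the Gram matrix of the row vectors of V; after forward elimination the right-hand block holds mutually orthogonal vectors. A minimal sketch, assuming linearly independent inputs (so the pivots are nonzero):

```python
import numpy as np

def gram_schmidt_gauss(V):
    # row-reduce [G | V] with G = V V^T; forward elimination leaves
    # orthogonal row vectors in the right-hand block
    V = np.array(V, dtype=float)
    k = len(V)
    M = np.hstack([V @ V.T, V])
    for i in range(k):
        for j in range(i + 1, k):
            M[j] -= (M[j, i] / M[i, i]) * M[i]
    return M[:, k:]
```

For example, the rows (1, 0) and (1, 1) are orthogonalized to (1, 0) and (0, 1), matching classical Gram-Schmidt up to scaling.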
Ciaffoni, Luca; O'Neill, David P; Couper, John H; Ritchie, Grant A D; Hancock, Gus; Robbins, Peter A
2016-08-01
There are no satisfactory methods for monitoring oxygen consumption in critical care. To address this, we adapted laser absorption spectroscopy to provide measurements of O2, CO2, and water vapor within the airway every 10 ms. The analyzer is integrated within a novel respiratory flow meter that is an order of magnitude more precise than other flow meters. Such precision, coupled with the accurate alignment of gas concentrations with respiratory flow, makes possible the determination of O2 consumption by direct integration over time of the product of O2 concentration and flow. The precision is illustrated by integrating the balance gas (N2 plus Ar) flow and showing that this exchange was near zero. Measured O2 consumption changed by <5% between air and O2 breathing. Clinical capability was illustrated by recording O2 consumption during an aortic aneurysm repair. This device now makes easy, accurate, and noninvasive measurement of O2 consumption for intubated patients in critical care possible.
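The direct integration described above amounts to a running integral of flow times gas fraction. A minimal sketch with trapezoidal integration and hypothetical constant-flow data (a real record would use the breath-by-breath difference between inspired and expired O2):

```python
import numpy as np

def net_gas_uptake(t, flow, fraction):
    # trapezoidal integration of flow x gas fraction over time; with
    # inspiratory flow positive, the integral is net gas volume (litres)
    y = np.asarray(flow) * np.asarray(fraction)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

# hypothetical 2 s record: constant 0.5 L/s inspiratory flow of 20% O2
t = np.linspace(0.0, 2.0, 201)
vo2 = net_gas_uptake(t, np.full_like(t, 0.5), np.full_like(t, 0.20))
```

Applying the same integral to the balance gas (N2 plus Ar) provides the near-zero-exchange check used in the paper to demonstrate precision.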
Fabrication of a wide-field NIR integral field unit for SWIMS using ultra-precision cutting
NASA Astrophysics Data System (ADS)
Kitagawa, Yutaro; Yamagata, Yutaka; Morita, Shin-ya; Motohara, Kentaro; Ozaki, Shinobu; Takahashi, Hidenori; Konishi, Masahiro; Kato, Natsuko M.; Kobayakawa, Yutaka; Terao, Yasunori; Ohashi, Hirofumi
2016-07-01
We give an overview of the fabrication methods and measurement results from test fabrications of optical surfaces for an integral field unit (IFU) for the Simultaneous-color Wide-field Infrared Multi-object Spectrograph (SWIMS), a first-generation instrument for the University of Tokyo Atacama Observatory 6.5-m telescope. The SWIMS-IFU provides the entire near-infrared spectrum from 0.9 to 2.5 μm simultaneously, covering a field of view of 17" × 13", wider than current near-infrared IFUs. We investigate an ultra-precision cutting technique to monolithically fabricate the optical surfaces of the IFU optics, such as the image slicer. Using a 4- or 5-axis ultra-precision machine, we compare a milling process and a shaper cutting process to find the best fabrication method for image slicers. The measurement results show that the surface roughness almost satisfies our requirement for both methods. Moreover, we also obtain an ideal surface form with the shaper cutting process. This method will be adopted for the other mirror arrays (i.e., the pupil mirror and slit mirror), and such monolithic fabrication will also help us to considerably reduce the alignment procedure for each optical element.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, T; Ding, H; Torabzadeh, M
2015-06-15
Purpose: To investigate the feasibility of quantifying the cross-sectional area (CSA) of coronary arteries using integrated density in a physics-based model with a phantom study. Methods: In this technique the total integrated density of the object relative to its local background is measured, making it possible to account for the partial volume effect. The proposed method was compared to manual segmentation using CT scans of a 10 cm diameter Lucite cylinder placed inside a chest phantom. Holes with cross-sectional areas from 1.4 to 12.3 mm² were drilled into the Lucite and filled with iodine solution, producing a contrast-to-noise ratio of approximately 26. Lucite rods 1.6 mm in diameter were used to simulate plaques. The phantom was imaged with and without the Lucite rods placed in the holes to simulate diseased and normal arteries, respectively. Linear regression analysis was used, and the root-mean-square deviations (RMSD) and errors (RMSE) were computed to assess the precision and accuracy of the measurements. In the case of manual segmentation, two readers independently delineated the lumen in order to quantify the inter-reader variability. Results: The precision and accuracy for the normal vessels using the integrated density technique were 0.32 mm² and 0.32 mm², respectively. The corresponding results for manual segmentation were 0.51 mm² and 0.56 mm². In the case of diseased vessels, the precision and accuracy of the integrated density technique were 0.46 mm² and 0.55 mm², respectively. The corresponding results for manual segmentation were 0.75 mm² and 0.98 mm². The mean percent difference between the two readers was found to be 8.4%. Conclusion: The CSA based on integrated density had improved precision and accuracy compared with manual segmentation in a Lucite phantom. The results indicate the potential for using integrated density to improve CSA measurements in CT angiography.
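The integrated-density idea can be sketched in a few lines: summing intensity above the local background and normalizing by the full lumen-to-background contrast lets boundary pixels contribute fractionally, which is how the partial volume effect is absorbed. The patch below is synthetic, and the function is an illustrative assumption, not the study's implementation.

```python
import numpy as np

def csa_integrated_density(roi, bg, lumen, pixel_area):
    # total density above local background, normalized by the
    # background-to-lumen contrast; partially filled boundary pixels
    # contribute fractionally, accounting for the partial volume effect
    return float((roi - bg).sum() / (lumen - bg) * pixel_area)

# hypothetical 4x4 patch: 4 full-intensity lumen pixels plus 4
# half-intensity boundary pixels over zero background, 0.25 mm^2 pixels
roi = np.zeros((4, 4))
roi[1:3, 1:3] = 100.0
roi[0, 1:3] = 50.0
roi[3, 1:3] = 50.0
area = csa_integrated_density(roi, bg=0.0, lumen=100.0, pixel_area=0.25)
```

The four half-intensity pixels count as two full pixels, giving a CSA of 1.5 mm² rather than the 1.0 mm² a hard threshold would report.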
NASA Astrophysics Data System (ADS)
Zhang, J.; Gao, Q.; Tan, S. J.; Zhong, W. X.
2012-10-01
A new method is proposed as a solution for the large-scale coupled vehicle-track dynamic model with nonlinear wheel-rail contact. The vehicle is simplified as a multi-rigid-body model, and the track is treated as a three-layer beam model. In the track model, the rail is assumed to be an Euler-Bernoulli beam supported by discrete sleepers. The vehicle model and the track model are coupled using Hertzian nonlinear contact theory, and the contact forces of the vehicle subsystem and the track subsystem are approximated by the Lagrange interpolation polynomial. The response of the large-scale coupled vehicle-track model is calculated using the precise integration method. A more efficient algorithm based on the periodic property of the track is applied to calculate the exponential matrix and certain matrices related to the solution of the track subsystem. Numerical examples demonstrate the computational accuracy and efficiency of the proposed method.
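The core trick of the precise integration method, computing the exponential matrix by repeated squaring of a small Taylor increment while keeping the increment separate from the identity, can be sketched as follows. This is a generic illustration of Zhong's scheme, not the paper's vehicle-track implementation.

```python
import numpy as np

def pim_expm(H, tau, N=20, order=4):
    # precise integration method: exp(H*tau) via tau/2^N subdivision.
    # The increment Ta = exp(H*dt) - I is kept separate from the
    # identity so no significant digits are lost when doubling.
    H = np.asarray(H, dtype=float)
    Hd = H * (tau / 2.0**N)
    Ta = np.zeros_like(Hd)
    term = np.eye(len(H))
    for k in range(1, order + 1):   # truncated Taylor series of exp(H*dt) - I
        term = term @ Hd / k
        Ta = Ta + term
    for _ in range(N):              # (I + Ta)^2 = I + (2*Ta + Ta @ Ta)
        Ta = 2.0 * Ta + Ta @ Ta
    return np.eye(len(H)) + Ta
```

For the rotation generator [[0, 1], [-1, 0]] and tau = pi/2, the result matches the exact matrix exponential to near machine precision.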
NASA Astrophysics Data System (ADS)
Yang, Xiaojun; Lu, Dun; Liu, Hui; Zhao, Wanhua
2018-06-01
The complicated electromechanical coupling phenomena arising from different causes have significant influence on the dynamic precision of the direct driven feed system in machine tools. In this paper, a novel integrated modeling and analysis method for the multiple electromechanical couplings of the direct driven feed system in machine tools is presented. First, four different kinds of electromechanical coupling phenomena in the direct driven feed system are analyzed systematically. Then a novel integrated method for modeling and analyzing the electromechanical coupling, which is influenced by multiple factors, is put forward. In addition, the effects of the multiple electromechanical couplings on the dynamic precision of the feed system and their main influencing factors are compared and discussed. Finally, the results of the modeling and analysis are verified by experiments. We find that multiple electromechanical coupling loops, which overlap and influence each other, are the main cause of the displacement fluctuations in the direct driven feed system.
Rigorous high-precision enclosures of fixed points and their invariant manifolds
NASA Astrophysics Data System (ADS)
Wittig, Alexander N.
The well established concept of Taylor Models is introduced, which offer highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the High Precision Interval datatype are developed and described in detail. The application of these operations in the implementation of High Precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period 15 fixed point in a near standard Henon map. Verification is performed using different verified methods such as double precision Taylor Models, High Precision intervals and High Precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. 
Previous work done by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfitted with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three-dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures using the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting enclosures. To the best of our knowledge, these are the largest verified enclosures of manifolds in the Lorenz system in existence.
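The "clever combinations of elementary floating point operations yielding exact values for round-off errors" mentioned above are error-free transformations. A standard example is Knuth's TwoSum, sketched here as a generic illustration rather than the thesis's actual implementation:

```python
def two_sum(a, b):
    # Knuth's TwoSum error-free transformation: returns (s, e) with
    # s = fl(a + b) and s + e == a + b exactly, using only ordinary
    # floating-point additions and subtractions
    s = a + b
    bv = s - a          # the part of b actually absorbed into s
    av = s - bv         # the part of a actually absorbed into s
    return s, (a - av) + (b - bv)

# the round-off lost when adding 1.0 to 1e16 is recovered exactly
s, e = two_sum(1e16, 1.0)
```

Chaining such transformations is what allows high-precision coefficients to be built from hardware double-precision arithmetic.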
NASA Astrophysics Data System (ADS)
Wang, Yubing; Yin, Weihong; Han, Qin; Yang, Xiaohong; Ye, Han; Lü, Qianqian; Yin, Dongdong
2017-04-01
Graphene field-effect transistors have been intensively studied. However, in order to fabricate devices with more complicated structures, such as integration with waveguides and other two-dimensional materials, the exfoliated graphene samples must be transferred to a target position. Because of the small area of exfoliated graphene and its random distribution, the transfer method requires rather high precision. In this paper, we systematically study a method to selectively transfer mechanically exfoliated graphene samples to a target position with sub-micrometer precision. To characterize the doping level of this method, we transfer graphene flakes onto pre-patterned metal electrodes, forming graphene field-effect transistors. The hole doping of the graphene is calculated to be 2.16 × 10^12 cm^-2. In addition, we fabricate a waveguide-integrated multilayer graphene photodetector to demonstrate the viability and accuracy of this method. A photocurrent as high as 0.4 μA is obtained, corresponding to a photoresponsivity of 0.48 mA/W. The device performs uniformly over nine illumination cycles. Project supported by the National Key Research and Development Program of China (No. 2016YFB0402404), the High-Tech Research and Development Program of China (Nos. 2013AA031401, 2015AA016902, 2015AA016904), and the National Natural Foundation of China (Nos. 61674136, 61176053, 61274069, 61435002).
Liu, Bao-Cheng; Ji, Guang
2017-07-01
Incorporating "-omics" studies with environmental interactions could help elucidate the biological mechanisms responsible for Traditional Chinese Medicine (TCM) patterns. Based on the authors' own experiences, this review outlines a model of an ideal combination of "-omics" biomarkers, environmental factors, and TCM pattern classifications; provides a narrative review of the relevant genetic and TCM studies; and lists several successful integrative examples. Two integration tools are briefly introduced. The first is the integration of modern devices into objective diagnostic methods of TCM patterning, which would improve current clinical decision-making and practice. The second is the use of biobanks and data platforms, which could broadly support biological and medical research. Such efforts will transform current medical management and accelerate the progression of precision medicine.
ERIC Educational Resources Information Center
Earl, Boyd L.
2008-01-01
A general result for the integrals of the Gaussian function over the harmonic oscillator wavefunctions is derived using generating functions. Using this result, an example problem of a harmonic oscillator with various Gaussian perturbations is explored in order to compare the results of precise numerical solution, the variational method, and…
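For a flavor of the comparison the abstract describes, the ground-state matrix element of a Gaussian perturbation has a simple closed form that direct quadrature reproduces. The snippet below is an illustrative check under assumed parameters, not the article's generating-function derivation.

```python
import numpy as np

# ground-state matrix element <0| exp(-c x^2) |0> of a harmonic
# oscillator with psi_0(x) = (alpha/pi)**0.25 * exp(-alpha x^2 / 2);
# the Gaussian integral gives the closed form sqrt(alpha / (alpha + c))
alpha, c = 1.0, 1.0
x = np.linspace(-10.0, 10.0, 20001)
psi0 = (alpha / np.pi) ** 0.25 * np.exp(-alpha * x**2 / 2)
integrand = psi0 * np.exp(-c * x**2) * psi0
numeric = float(np.sum(0.5 * (integrand[1:] + integrand[:-1])) * (x[1] - x[0]))
exact = float(np.sqrt(alpha / (alpha + c)))
```

Matrix elements between excited states can be checked the same way against the variational and numerical treatments the article compares.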
Long-time predictions in nonlinear dynamics
NASA Technical Reports Server (NTRS)
Szebehely, V.
1980-01-01
It is known that nonintegrable dynamical systems do not allow precise predictions of their behavior for arbitrarily long times. The available series solutions are not uniformly convergent, according to Poincaré's theorem, and numerical integrations lose their meaningfulness after arbitrarily long times. Two approaches are the use of existing global integrals and statistical methods. This paper presents a generalized method along the lines of the first approach. As examples, long-time predictions in the classical gravitational satellite and planetary problems are treated.
Automatic computational labeling of glomerular textural boundaries
NASA Astrophysics Data System (ADS)
Ginley, Brandon; Tomaszewski, John E.; Sarder, Pinaki
2017-03-01
The glomerulus, a specialized bundle of capillaries, is the blood-filtering unit of the kidney. Each human kidney contains about 1 million glomeruli. Structural damage in the glomerular micro-compartments gives rise to several renal conditions, the most severe of which is proteinuria, where excessive blood proteins flow freely into the urine. The sole way to confirm glomerular structural damage in renal pathology is by examining histopathological or immunofluorescence-stained needle biopsies under a light microscope. However, this method is extremely tedious and time consuming, and requires manual scoring of the number and volume of structures. Computational quantification of equivalent features promises to greatly ease this manual burden. The largest obstacle to computational quantification of renal tissue is the ability to recognize complex glomerular textural boundaries automatically. Here we present a computational pipeline to identify glomerular boundaries with high precision and accuracy. The pipeline employs an integrated approach composed of Gabor filtering, Gaussian blurring, statistical F-testing, and distance transform, and performs significantly better than a standard Gabor-based textural segmentation method. Our integrated approach provides mean accuracy/precision of 0.89/0.97 on n = 200 Hematoxylin and Eosin (HE) glomerulus images, and mean accuracy/precision of 0.88/0.94 on n = 200 Periodic Acid-Schiff (PAS) glomerulus images. The respective accuracy/precision of the Gabor filter bank method is 0.83/0.84 for HE and 0.78/0.80 for PAS. Our method will simplify computational partitioning of glomerular micro-compartments hidden within dense textural boundaries. Automatic quantification of glomeruli will streamline structural analysis in the clinic, and can help realize real-time diagnoses and interventions.
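The Gabor filtering stage can be sketched by constructing a single real Gabor kernel, a Gaussian envelope multiplied by an oriented cosine grating, which emphasizes texture at one scale and direction; the parameter values below are illustrative, not the paper's.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    # real-valued Gabor filter: isotropic Gaussian envelope times a
    # cosine grating of the given wavelength at orientation theta
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

k = gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0)
```

A filter bank sweeps the orientation and wavelength, and the pipeline then combines the responses with blurring, F-testing and a distance transform.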
FORMED: Bringing Formal Methods to the Engineering Desktop
2016-02-01
Integrates formal verification into software design and development by precisely defining semantics for a restricted subset of the Unified Modeling... input-output contract satisfaction and absence of null pointer dereferences. Subject terms: Formal Methods, Software Verification, Model-Based... Domain-specific languages (DSLs) drive both implementation and formal verification
Spike timing precision of neuronal circuits.
Kilinc, Deniz; Demir, Alper
2018-06-01
Spike timing is believed to be a key factor in sensory information encoding and computations performed by the neurons and neuronal circuits. However, the considerable noise and variability, arising from the inherently stochastic mechanisms that exist in the neurons and the synapses, degrade spike timing precision. Computational modeling can help decipher the mechanisms utilized by the neuronal circuits in order to regulate timing precision. In this paper, we utilize semi-analytical techniques, which were adapted from previously developed methods for electronic circuits, for the stochastic characterization of neuronal circuits. These techniques, which are orders of magnitude faster than traditional Monte Carlo type simulations, can be used to directly compute the spike timing jitter variance, power spectral densities, correlation functions, and other stochastic characterizations of neuronal circuit operation. We consider three distinct neuronal circuit motifs: Feedback inhibition, synaptic integration, and synaptic coupling. First, we show that both the spike timing precision and the energy efficiency of a spiking neuron are improved with feedback inhibition. We unveil the underlying mechanism through which this is achieved. Then, we demonstrate that a neuron can improve on the timing precision of its synaptic inputs, coming from multiple sources, via synaptic integration: The phase of the output spikes of the integrator neuron has the same variance as that of the sample average of the phases of its inputs. Finally, we reveal that weak synaptic coupling among neurons, in a fully connected network, enables them to behave like a single neuron with a larger membrane area, resulting in an improvement in the timing precision through cooperation.
Integrated multi-ISE arrays with improved sensitivity, accuracy and precision
NASA Astrophysics Data System (ADS)
Wang, Chunling; Yuan, Hongyan; Duan, Zhijuan; Xiao, Dan
2017-03-01
Increasing use of ion-selective electrodes (ISEs) in the biological and environmental fields has generated demand for high-sensitivity ISEs. However, improving the sensitivities of ISEs remains a challenge because of the limit of the Nernstian slope (59.2/n mV). Here, we present a universal ion detection method using an electronic integrated multi-electrode system (EIMES) that bypasses the Nernstian slope limit of 59.2/n mV, thereby enabling substantial enhancement of the sensitivity of ISEs. The results reveal that the response slope is greatly increased from 57.2 to 1711.3 mV, from 57.3 to 564.7 mV, and from 57.7 to 576.2 mV by electronically integrating 30 Cl- electrodes, 10 F- electrodes, and 10 glass pH electrodes, respectively. Thus, a tiny change in the ion concentration can be monitored, and correspondingly, the accuracy and precision are substantially improved. The EIMES is suited to all types of potentiometric sensors and may pave the way for monitoring of various ions with high accuracy and precision because of its high sensitivity.
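The core idea, reading n identical ISEs in electrical series so that their Nernstian EMFs add, reduces to simple arithmetic. A minimal sketch using the ideal 59.2 mV/decade slope (the paper's measured single-electrode slopes are slightly lower):

```python
import numpy as np

def nernst_emf(conc, slope_mV=59.2, c_ref=1.0):
    """Single-electrode Nernstian response (mV) relative to a reference activity."""
    return slope_mV * np.log10(conc / c_ref)

def integrated_emf(conc, n_electrodes, slope_mV=59.2):
    """n identical ISEs read in electrical series: their EMFs add, so the
    effective response slope is n times the single-electrode slope."""
    return n_electrodes * nernst_emf(conc, slope_mV)

# Response to a tenfold concentration change:
single = nernst_emf(1e-3) - nernst_emf(1e-4)                    # ~59.2 mV per decade
stacked = integrated_emf(1e-3, 30) - integrated_emf(1e-4, 30)   # ~1776 mV per decade
```

With 30 electrodes the ideal series slope is 30 × 59.2 = 1776 mV/decade, consistent in magnitude with the 1711.3 mV reported from the measured 57.2 mV electrodes.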
NASA Astrophysics Data System (ADS)
Tsalamengas, John L.
2018-07-01
We study plane-wave electromagnetic scattering by radially and strongly inhomogeneous dielectric cylinders at oblique incidence. The method of analysis relies on an exact reformulation of the underlying field equations as a first-order 4 × 4 system of differential equations and on the ability to restate the associated initial-value problem in the form of a system of coupled linear Volterra integral equations of the second kind. The integral equations so derived are discretized via a sophisticated variant of the Nyström method. The proposed method yields results accurate up to machine precision without relying on approximations. Numerical results and case studies ably demonstrate the efficiency and high accuracy of the algorithms.
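The reformulation as second-kind Volterra integral equations matters because such equations can be marched forward in the independent variable. A basic trapezoidal Nyström scheme, far simpler than the authors' sophisticated variant, looks like this:

```python
import numpy as np

def volterra2_nystrom(f, K, T, n):
    r"""Solve u(t) = f(t) + \int_0^t K(t,s) u(s) ds on [0, T] with a
    trapezoidal Nystrom rule and forward substitution."""
    t = np.linspace(0.0, T, n + 1)
    h = T / n
    u = np.empty(n + 1)
    u[0] = f(t[0])
    for i in range(1, n + 1):
        # trapezoidal weights: h/2 at the endpoints, h at interior nodes
        s = 0.5 * K(t[i], t[0]) * u[0]
        s += sum(K(t[i], t[j]) * u[j] for j in range(1, i))
        u[i] = (f(t[i]) + h * s) / (1.0 - 0.5 * h * K(t[i], t[i]))
    return t, u

# Known-solution check: u(t) = 1 + \int_0^t u(s) ds  has solution u(t) = e^t
t, u = volterra2_nystrom(lambda t: 1.0, lambda t, s: 1.0, 1.0, 200)
```

The division by (1 - h/2 · K(t_i, t_i)) comes from moving the implicit endpoint term of the trapezoidal rule to the left-hand side.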
Jig For Stereoscopic Photography
NASA Technical Reports Server (NTRS)
Nielsen, David J.
1990-01-01
Separations between views adjusted precisely for best results. Simple jig adjusted to set precisely the distance between right and left positions of camera used to make stereoscopic photographs. Camera slides in slot between extreme positions, where it takes stereoscopic pictures. Distance between extreme positions set reproducibly with micrometer. In view of trend toward very-large-scale integration of electronic circuits, training method and jig used to make training photographs useful to many companies to reduce cost of training manufacturing personnel.
Technique of laser chromosome welding for chromosome repair and artificial chromosome creation.
Huang, Yao-Xiong; Li, Lin; Yang, Liu; Zhang, Yi
2018-04-01
Here we report a technique of laser chromosome welding that uses a violet pulsed laser micro-beam for welding. The technique can integrate a desired chromosome fragment of any size into recipient chromosomes when combined with other laser chromosome manipulation techniques such as chromosome cutting, moving, and stretching. We demonstrated that our method can perform chromosomal modifications with high precision, speed, and ease of use in the absence of restriction enzymes, DNA ligases, and DNA polymerases. Unlike conventional methods such as de novo artificial chromosome synthesis, our method places no limitation on the size of the inserted chromosome fragment. The inserted DNA size can be precisely defined, and the processed chromosome retains its intrinsic structure and integrity. Therefore, our technique provides a high-quality alternative approach to directed genetic recombination, and can be used for chromosomal repair, removal of defects, and artificial chromosome creation. The technique may also be applicable to the manipulation and extension of large pieces of synthetic DNA.
Jäckel, David; Bakkum, Douglas J; Russell, Thomas L; Müller, Jan; Radivojevic, Milos; Frey, Urs; Franke, Felix; Hierlemann, Andreas
2017-04-20
We present a novel, all-electric approach to record and precisely control the activity of tens of individual presynaptic neurons. The method allows for parallel mapping of the efficacy of multiple synapses and of the resulting dynamics of postsynaptic neurons in a cortical culture. For the measurements, we combine an extracellular high-density microelectrode array, featuring 11,000 electrodes for extracellular recording and stimulation, with intracellular patch-clamp recording. We are able to identify the contributions of individual presynaptic neurons - including inhibitory and excitatory synaptic inputs - to postsynaptic potentials, which enables us to study dendritic integration. Since the electrical stimuli can be controlled at microsecond resolution, our method can evoke action potentials at tens of presynaptic cells in precisely orchestrated sequences of high reliability and minimal jitter. We demonstrate the potential of this method by evoking short- and long-term synaptic plasticity through manipulation of multiple synaptic inputs to a specific neuron.
Precise calculation of the local pressure tensor in Cartesian and spherical coordinates in LAMMPS
NASA Astrophysics Data System (ADS)
Nakamura, Takenobu; Kawamoto, Shuhei; Shinoda, Wataru
2015-05-01
An accurate and efficient algorithm for calculating the 3D pressure field has been developed and implemented in the open-source molecular dynamics package, LAMMPS. Additionally, an algorithm to compute the pressure profile along the radial direction in spherical coordinates has also been implemented. The latter is particularly useful for systems showing a spherical symmetry such as micelles and vesicles. These methods yield precise pressure fields based on the Irving-Kirkwood contour integration and are particularly useful for biomolecular force fields. The present methods are applied to several systems including a buckled membrane and a vesicle.
A Method for Precision Closed-Loop Irrigation Using a Modified PID Control Algorithm
NASA Astrophysics Data System (ADS)
Goodchild, Martin; Kühn, Karl; Jenkins, Malcolm; Burek, Kazimierz; Dutton, Andrew
2016-04-01
The benefits of closed-loop irrigation control have been demonstrated in grower trials, which show the potential for improved crop yields and resource usage. Managing water use by controlling irrigation in response to soil moisture changes to meet crop water demands is a popular approach, but it requires knowledge of closed-loop control practice. In theory, to obtain precise closed-loop control of a system it is necessary to characterise every component in the control loop to derive the appropriate controller parameters, i.e. the proportional, integral & derivative (PID) parameters of a classic PID controller. In practice this is often difficult to achieve, so empirical methods are employed to estimate the PID parameters by observing how the system performs under open-loop conditions. In this paper we present a modified PID controller, with a constrained integral function, that delivers excellent regulation of soil moisture by supplying the appropriate amount of water to meet the needs of the plant during the diurnal cycle. Furthermore, the modified PID controller responds quickly to changes in environmental conditions, including rainfall events, which can otherwise result in controller windup, under-watering and plant stress. The experimental work successfully demonstrates the functionality of a constrained-integral PID controller that delivers robust and precise irrigation control. Data from a coir-substrate strawberry growing trial are also presented, illustrating soil moisture control and the ability to match water delivery to solar radiation.
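A minimal sketch of the constrained-integral (anti-windup) idea described above: the integral term is clamped to a fixed range so a long disturbance such as rainfall cannot wind it up. The gains, limits, and toy soil-moisture model are illustrative assumptions, and the derivative term is omitted for brevity:

```python
class ConstrainedPI:
    """PI controller whose integral term is clamped to [i_min, i_max],
    a sketch of the anti-windup constraint; gains/limits are illustrative."""

    def __init__(self, kp, ki, i_min=0.0, i_max=1.0):
        self.kp, self.ki = kp, ki
        self.i_min, self.i_max = i_min, i_max
        self.integral = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        # constrained integral: accumulate, then clamp into [i_min, i_max]
        self.integral = min(self.i_max, max(self.i_min, self.integral + error * dt))
        return self.kp * error + self.ki * self.integral

# Toy soil-moisture loop: irrigation raises moisture, drainage lowers it.
ctrl = ConstrainedPI(kp=2.0, ki=0.5, i_min=0.0, i_max=5.0)
moisture = 0.20
for _ in range(200):
    flow = max(0.0, ctrl.update(0.35, moisture, dt=1.0))  # valve cannot go negative
    moisture += 0.01 * flow - 0.002 * moisture            # crude substrate model
```

Because the clamp bounds the integral directly, the controller recovers quickly once the error changes sign, instead of first having to "unwind" a large accumulated term.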
Multisignal detecting system of pile integrity testing
NASA Astrophysics Data System (ADS)
Liu, Zuting; Luo, Ying; Yu, Shihai
2002-05-01
The low-strain reflected-wave method plays a principal role in the integrity testing of foundation piles. However, the method has some deficiencies. For example, there is a blind zone of detection at the top of the tested pile; it is difficult to recognize defects in the deep-seated parts of the pile; and the planar (one-dimensional) analysis cannot capture three-dimensional effects. These problems are very difficult to solve with a single-transducer pile integrity testing system. A new multi-signal pile integrity testing system is proposed in this paper, which is able to apply impulses and collect signals at multiple points on top of the pile. By using a multiple-superposition data processing method, the detecting system can effectively suppress interference and improve the precision and SNR of pile integrity testing. The system can also be applied to the evaluation of engineering structure health.
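The multiple-superposition step rests on a standard fact: averaging N records with independent noise reduces the noise standard deviation by a factor of √N. A small sketch with a synthetic reflection pulse (signal shape and noise level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
clean = np.exp(-((t - 0.4) / 0.02) ** 2)   # idealised defect-reflection pulse

# N impacts recorded at different points on the pile head, each with
# independent sensor noise; superposing (averaging) them suppresses the noise.
N = 16
records = [clean + rng.normal(0.0, 0.3, t.size) for _ in range(N)]
stacked = np.mean(records, axis=0)

residual_single = np.std(records[0] - clean)   # noise level of one record (~0.3)
residual_stacked = np.std(stacked - clean)     # ~0.3 / sqrt(16)
```

With 16 superposed records the residual noise drops by about a factor of four, which is the SNR gain the multi-point system exploits.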
Analysis of a novel device-level SINS/ACFSS deeply integrated navigation method
NASA Astrophysics Data System (ADS)
Zhang, Hao; Qin, Shiqiao; Wang, Xingshu; Jiang, Guangwen; Tan, Wenfeng; Wu, Wei
2017-02-01
The combination of the strap-down inertial navigation system (SINS) and the celestial navigation system (CNS) is a popular way to constitute an integrated navigation system. A star sensor (SS) is used as a precise attitude determination device in the CNS. To solve the problem that the star image obtained by the SS is motion-blurred under dynamic conditions, the attitude-correlated frames (ACF) approach is presented, and a star sensor that works on the ACF approach is named an ACFSS. Building on the ACF approach, a novel device-level SINS/ACFSS deeply integrated navigation method is proposed in this paper. Feedback of the gyro error into the ACF process is one of the characteristic features of this SINS/CNS deeply integrated navigation method. Simulation results verify the validity and efficiency of the method in improving gyro accuracy and demonstrate that the method is feasible.
An application framework for computer-aided patient positioning in radiation therapy.
Liebler, T; Hub, M; Sanner, C; Schlegel, W
2003-09-01
The importance of exact patient positioning in radiation therapy increases with the ongoing improvements in irradiation planning and treatment. Therefore, new ways to overcome precision limitations of current positioning methods in fractionated treatment have to be found. The Department of Medical Physics at the German Cancer Research Centre (DKFZ) follows different video-based approaches to increase repositioning precision. In this context, the modular software framework FIVE (Fast Integrated Video-based Environment) has been designed and implemented. It is both hardware- and platform-independent and supports merging position data by integrating various computer-aided patient positioning methods. A highly precise optical tracking system and several subtraction imaging techniques have been realized as modules to supply basic video-based repositioning techniques. This paper describes the common framework architecture, the main software modules and their interfaces. An object-oriented software engineering process has been applied using the UML, C++ and the Qt library. The significance of the current framework prototype for the application in patient positioning as well as the extension to further application areas will be discussed. Particularly in experimental research, where special system adjustments are often necessary, the open design of the software allows problem-oriented extensions and adaptations.
Precision medicine for cancer with next-generation functional diagnostics.
Friedman, Adam A; Letai, Anthony; Fisher, David E; Flaherty, Keith T
2015-12-01
Precision medicine is about matching the right drugs to the right patients. Although this approach is technology agnostic, in cancer there is a tendency to make precision medicine synonymous with genomics. However, genome-based cancer therapeutic matching is limited by incomplete biological understanding of the relationship between phenotype and cancer genotype. This limitation can be addressed by functional testing of live patient tumour cells exposed to potential therapies. Recently, several 'next-generation' functional diagnostic technologies have been reported, including novel methods for tumour manipulation, molecularly precise assays of tumour responses and device-based in situ approaches; these address the limitations of the older generation of chemosensitivity tests. The promise of these new technologies suggests a future diagnostic strategy that integrates functional testing with next-generation sequencing and immunoprofiling to precisely match combination therapies to individual cancer patients.
Application of Numerical Integration and Data Fusion in Unit Vector Method
NASA Astrophysics Data System (ADS)
Zhang, J.
2012-01-01
The Unit Vector Method (UVM) is a family of orbit determination methods designed by Purple Mountain Observatory (PMO) that has been applied extensively. It obtains the conditional equations for different kinds of data by projecting the basic equation onto different unit vectors, and it is well suited to weighting different kinds of data. High-precision data can then play a major role in orbit determination, and the accuracy of orbit determination is improved markedly. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement and unified the two dynamically, further improving precision and efficiency. In this thesis, further research has been done based on the UVM. First, with the improvement of observational methods and techniques, the types and accuracy of observational data have improved substantially, demanding a corresponding improvement in orbit determination; analytical perturbation theory can no longer meet this requirement. Therefore, numerical integration of the perturbations has been introduced into the UVM, so that the accuracy of the dynamical model matches the accuracy of the real data, and the condition equations of the UVM are modified accordingly. The accuracy of orbit determination is improved further. Second, a data fusion method has been introduced into the UVM. The convergence mechanism and the defects of the weighting strategy in the original UVM have been clarified. The new method solves these problems: the calculation of the approximate state transition matrix is simplified, and the weighting strategy is improved for data of different dimensions and different precision.
Results of orbit determination with simulated and real data show that the work of this thesis is effective: (1) After numerical integration is introduced into the UVM, the accuracy of orbit determination improves markedly, and the method suits the high-accuracy data of available observation apparatus. Compared with classical differential improvement with numerical integration, its calculation speed is also clearly improved. (2) After the data fusion method is introduced into the UVM, the weighting distribution accords rationally with the accuracy of the different kinds of data, all data are fully used, and the new method also exhibits good numerical stability and a rational weighting distribution.
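The weighting of data with different precision described above follows the generic inverse-variance (precision-weighting) principle. A sketch of that principle only — the numbers are made-up ranges and sigmas, and this is not PMO's actual implementation:

```python
import numpy as np

def fuse(estimates, sigmas):
    """Inverse-variance (precision) weighting of several estimates of the
    same quantity coming from instruments of different accuracy."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    x = np.asarray(estimates, dtype=float)
    fused = np.sum(w * x) / np.sum(w)
    fused_sigma = np.sqrt(1.0 / np.sum(w))      # combined 1-sigma uncertainty
    return fused, fused_sigma

# Hypothetical range to a satellite (km) from a 2 cm laser measurement
# and a 50 cm radar measurement: the fused value hugs the precise one.
fused, sigma = fuse([1000.013, 1000.350], [0.02, 0.50])   # fused ~ 1000.0135
```

The fused uncertainty is always at least as small as the best individual one, which is why adding even coarse data never degrades a properly weighted solution.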
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in a video is a method used to detect and analyze changes in an object being observed. Good visual quality and precise localization of the tracked target are highly desired in modern tracking systems. The fact that the tracked object does not always appear clearly makes the tracking result less precise; the reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of tracking, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step is super-resolution imaging applied to the frame sequence, done by cropping several frames or all of the frames. The second step is tracking on the super-resolved images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution images. In this research, single-frame super-resolution is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation time. The method used for tracking is Camshift. The advantage of Camshift is its simple calculation based on the HSV color histogram, which remains robust when the color of the object varies. The computational complexity and large memory requirements of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely against various backgrounds, under shape changes of the object, and in good light conditions.
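The translation step of Camshift is a mean-shift iteration over back-projection weights, applied here after upscaling. A numpy-only sketch with a nearest-neighbour stand-in for super-resolution (a real single-frame SR method would use a learned or edge-directed interpolator, and real Camshift also adapts the window size):

```python
import numpy as np

def upscale(frame, k):
    """Naive super-resolution stand-in: nearest-neighbour upsampling by k."""
    return np.kron(frame, np.ones((k, k)))

def mean_shift(weights, window, iters=10):
    """Shift a (row, col, h, w) window to the centroid of the back-projection
    weights inside it, as Camshift does for its translation step."""
    r, c, h, w = window
    for _ in range(iters):
        patch = weights[r:r + h, c:c + w]
        if patch.sum() == 0:
            break                      # no target mass inside the window
        ys, xs = np.mgrid[0:h, 0:w]
        r = int(round(r + (ys * patch).sum() / patch.sum() - (h - 1) / 2))
        c = int(round(c + (xs * patch).sum() / patch.sum() - (w - 1) / 2))
        r = max(0, min(r, weights.shape[0] - h))   # keep window inside the image
        c = max(0, min(c, weights.shape[1] - w))
    return r, c, h, w
```

In the two-step scheme, `weights` would be the HSV-histogram back-projection of an upscaled frame, so small targets occupy more pixels and the centroid estimate steadies.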
A hybrid method for transient wave propagation in a multilayered solid
NASA Astrophysics Data System (ADS)
Tian, Jiayong; Xie, Zhoumin
2009-08-01
We present a hybrid method for the evaluation of transient elastic-wave propagation in a multilayered solid, integrating the reverberation matrix method with the theory of generalized rays. Adopting the reverberation matrix formulation, Laplace-Fourier domain solutions for elastic waves in the multilayered solid are expanded into the sum of a series of generalized-ray group integrals. Each generalized-ray group integral, containing the Kth power of the reverberation matrix R, represents the set of K-times reflections and refractions of source waves arriving at receivers in the multilayered solid, and is conventionally computed by fast inverse Laplace transform (FILT) and fast Fourier transform (FFT) algorithms. However, the computational burden and low precision of the FILT-FFT algorithm limit the application of the reverberation matrix method. In this paper, we expand each generalized-ray group integral into the sum of a series of generalized-ray integrals, each of which is accurately evaluated by the Cagniard-De Hoop method from the theory of generalized rays. Numerical examples demonstrate that the proposed method makes it possible to calculate the early-time transient response of complex multilayered-solid configurations efficiently.
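Structurally, the generalized-ray group expansion is a power series in the reverberation matrix R: each R**k term collects all waves reflected and refracted k times, and when the spectral radius of R is below 1 the series sums to the resolvent (I - R)^(-1). A toy numerical check of that identity (the 4×4 matrix is random, not a physical reverberation matrix):

```python
import numpy as np

# Toy 4x4 "reverberation" matrix with entries in [0, 0.2); its spectral
# radius is at most the maximum row sum (< 0.8), so the series converges.
rng = np.random.default_rng(1)
R = 0.2 * rng.random((4, 4))

# Summing the first 50 ray groups R**k ...
partial = sum(np.linalg.matrix_power(R, k) for k in range(50))
# ... approaches the closed-form resolvent (I - R)^{-1}.
closed = np.linalg.inv(np.eye(4) - R)
```

Truncating the series is exactly what keeping only early-arriving ray groups does: the neglected tail is R**50 (I - R)^(-1), which is negligible here.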
NASA Astrophysics Data System (ADS)
Artem'ev, V. A.; Nezvanov, A. Yu.; Nesvizhevsky, V. V.
2016-01-01
We discuss properties of the interaction of slow neutrons with nano-dispersed media and their application to neutron reflectors. In order to increase the accuracy of model simulations of the interaction of neutrons with nanopowders, we perform a precise quantum-mechanical calculation of the potential scattering of neutrons on single nanoparticles using the method of phase functions. We compare the results of the precise calculations with those performed within the first Born approximation for nanodiamonds with radii of 2-5 nm and for neutron energies of 3 × 10^-7 to 10^-3 eV. The Born approximation overestimates the probability of scattering to large angles, while the accuracy of evaluation of integral characteristics (cross sections, albedo) is acceptable. Using a Monte Carlo method, we calculate the albedo of neutrons from different layers of piled-up diamond nanopowder.
Calculation of precise firing statistics in a neural network model
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2017-08-01
A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network, which works depending on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by the summation over all effects from past firing states. A neural network model based on the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.
A pruning algorithm for Meta-blocking based on cumulative weight
NASA Astrophysics Data System (ADS)
Zhang, Fulin; Gao, Zhipeng; Niu, Kun
2017-08-01
Entity Resolution is an important process in data cleaning and data integration. It usually employs a blocking method to avoid quadratic-complexity work when scaling to large data sets. Meta-blocking performs better in the context of highly heterogeneous information spaces, yet its precision and efficiency still have room for improvement. In this paper, we present a new pruning algorithm for Meta-blocking. It achieves higher precision than the existing WEP algorithm at a small cost in recall. In addition, it reduces the runtime of the blocking process. We evaluate our proposed method on five real-world data sets.
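Pruning in Meta-blocking discards low-weight comparison edges of the blocking graph. The sketch below retains the highest-weight edges until their cumulative weight reaches a set fraction of the total; this illustrates the cumulative-weight idea only, and the paper's exact criterion may differ (WEP, by contrast, keeps every edge above the mean weight):

```python
def prune_by_cumulative_weight(edges, fraction=0.8):
    """Keep the highest-weight comparison edges whose cumulative weight
    reaches `fraction` of the total weight. Edges are (u, v, weight)."""
    ordered = sorted(edges, key=lambda e: e[2], reverse=True)
    total = sum(w for _, _, w in ordered)
    kept, acc = [], 0.0
    for u, v, w in ordered:
        if acc >= fraction * total:
            break                      # enough cumulative weight retained
        kept.append((u, v, w))
        acc += w
    return kept

# Toy blocking graph over four entity profiles:
edges = [("a", "b", 5), ("a", "c", 3), ("b", "c", 1), ("c", "d", 1)]
kept = prune_by_cumulative_weight(edges, fraction=0.8)
```

Keeping 80% of the cumulative weight here drops both weight-1 edges, i.e. the comparisons least likely to be true matches, which is where the precision gain over unpruned blocking comes from.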
Zeng, Irene Sui Lan; Lumley, Thomas
2018-01-01
Integrated omics is becoming a new channel for investigating the complex molecular system in modern biological science and sets a foundation for systematic learning for precision medicine. The statistical/machine learning methods that have emerged in the past decade for integrated omics are not only innovative but also multidisciplinary, with integrated knowledge in biology, medicine, statistics, machine learning, and artificial intelligence. Here, we review the nontrivial classes of learning methods from the statistical aspects and streamline these learning methods within the statistical learning framework. The intriguing findings from the review are that the methods used are generalizable to other disciplines with complex systematic structure, and that integrated omics is part of an integrated information science that collates and integrates different types of information for inference and decision making. We review the statistical learning methods of exploratory and supervised learning from 42 publications. We also discuss the strengths and limitations of extended principal component analysis, cluster analysis, network analysis, and regression methods. Statistical techniques such as penalization for sparsity induction when there are fewer observations than features, and Bayesian approaches when there is prior knowledge to be integrated, are also included in the commentary. For completeness of the review, a table of currently available software and packages from 23 publications for omics is summarized in the appendix.
Yan, Liping; Chen, Benyong; Zhang, Enzheng; Zhang, Shihua; Yang, Ye
2015-08-01
A novel method for the precision measurement of the refractive index of air (n_air), based on combining laser synthetic-wavelength interferometry with an Edlén-equation estimate, is proposed. First, an estimate n_air_e is calculated from the modified Edlén equation according to environmental parameters measured by low-precision sensors, with an uncertainty of 10^-6. Second, a unique integral fringe number N corresponding to n_air is determined based on the calculated n_air_e. Then, a fractional fringe ε corresponding to n_air is obtained with high accuracy according to the principle of fringe subdivision in laser synthetic-wavelength interferometry. Finally, a highly accurate measurement of n_air is achieved from the determined fringes N and ε. The merit of the proposed method is that it not only overcomes the limit that the measurement accuracy of n_air is set by the accuracies of the environmental sensors, but also avoids the complicated vacuum pumping needed to measure the integral fringe N in conventional laser interferometry. To verify the feasibility of the proposed method, comparison experiments with the Edlén equations over short and long time scales were performed. Experimental results show that the measurement accuracy of n_air is better than 2.5 × 10^-8 in short-term tests and 6.2 × 10^-8 in long-term tests.
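The two-step fringe determination reduces to simple arithmetic once the coarse Edlén estimate pins down the integer fringe order N. A numerical sketch in which the path length, synthetic wavelength, and index values are illustrative assumptions, not the paper's parameters:

```python
# Symbols: L is the measurement path length, lam_s the synthetic wavelength,
# and the fringe count over the double pass 2L is (n - 1) * 2L / lam_s.
L = 0.5          # m, measurement path (illustrative)
lam_s = 1.0e-6   # m, synthetic wavelength (illustrative)

n_true = 1.000271234   # "actual" air index, used to simulate the measurement
n_edlen = 1.000271     # coarse Edlen-equation estimate (~1e-6 uncertainty)

phi_true = (n_true - 1) * 2 * L / lam_s    # total fringe count to recover
phi_edlen = (n_edlen - 1) * 2 * L / lam_s  # coarse count from the sensors

N = round(phi_edlen)            # step 1: integer fringe order from the estimate
eps = phi_true - N              # step 2: fractional fringe, measured by subdivision
n_recovered = 1 + (N + eps) * lam_s / (2 * L)
```

The Edlén estimate only needs to be good enough to select N unambiguously (better than half a fringe); the final accuracy then comes from the interferometric fraction ε, not from the environmental sensors.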
Tild-CRISPR Allows for Efficient and Precise Gene Knockin in Mouse and Human Cells.
Yao, Xuan; Zhang, Meiling; Wang, Xing; Ying, Wenqin; Hu, Xinde; Dai, Pengfei; Meng, Feilong; Shi, Linyu; Sun, Yun; Yao, Ning; Zhong, Wanxia; Li, Yun; Wu, Keliang; Li, Weiping; Chen, Zi-Jiang; Yang, Hui
2018-05-21
The targeting efficiency of knockin sequences via homologous recombination (HR) is generally low. Here we describe a method we call Tild-CRISPR (targeted integration with linearized dsDNA-CRISPR), a targeting strategy in which a PCR-amplified or precisely enzyme-cut transgene donor with 800-bp homology arms is injected with Cas9 mRNA and single guide RNA into mouse zygotes. Compared with existing targeting strategies, this method achieved much higher knockin efficiency in mouse embryos, as well as brain tissue. Importantly, the Tild-CRISPR method also yielded up to 12-fold higher knockin efficiency than HR-based methods in human embryos, making it suitable for studying gene functions in vivo and developing potential gene therapies. Copyright © 2018 Elsevier Inc. All rights reserved.
Zhu, Haixin; Zhou, Xianfeng; Su, Fengyu; Tian, Yanqing; Ashili, Shashanka; Holl, Mark R; Meldrum, Deirdre R
2012-10-01
We report a novel method for wafer-level, high-throughput optical chemical sensor patterning, with precise control of the sensor volume and the capability of producing arbitrary microscale patterns. Monomeric oxygen (O2) and pH optical probes were polymerized with 2-hydroxyethyl methacrylate (HEMA) and acrylamide (AM) to form spin-coatable and further crosslinkable polymers. A micro-patterning method based on micro-fabrication techniques (photolithography, wet chemical processing and reactive ion etching) was developed to miniaturize the sensor film onto glass substrates in arbitrary sizes and shapes. The sensitivity of the fabricated micro-patterns was characterized under various oxygen concentrations and pH values. A process for the spatial integration of two sensors (oxygen and pH) on the same substrate surface was also developed, and preliminary fabrication and characterization results are presented. To the best of our knowledge, this is the first time that poly(2-hydroxyethyl methacrylate)-co-poly(acrylamide) (PHEMA-co-PAM)-based sensors have been patterned and integrated at the wafer level with micron-scale precision using microfabrication techniques. The developed methods provide a feasible way to miniaturize and integrate optical chemical sensor systems and can be applied to any lab-on-a-chip system, especially biological micro-systems requiring optical sensing of single or multiple analytes.
An intelligent control scheme for precise tip-motion control in atomic force microscopy.
Wang, Yanyan; Hu, Xiaodong; Xu, Linyan
2016-01-01
The paper proposes a new intelligent control method to precisely control the tip motion of the atomic force microscope (AFM). The tip moves up and down at a high rate along the z direction during scanning, requiring a rapid feedback controller. The standard proportional-integral (PI) feedback controller is commonly used in commercial AFMs to enable topography measurements. The controller's response performance is determined by the setting of the proportional (P) and integral (I) parameters. However, the two parameters cannot be automatically altered simultaneously according to the scanning speed and the surface topography during continuous scanning, leading to inaccurate measurement. Thus a new intelligent controller combining a fuzzy controller and a PI controller is put forward in this paper. The new controller automatically selects the most appropriate PI parameters to achieve a fast response rate on the basis of the tracking errors. In the experimental setup, the new controller is realized with a digital signal processor (DSP) system implemented in a conventional AFM system. Experiments are carried out comparing the new method with the standard PI controller. The results demonstrate that the new method is more robust and effective for precise tip-motion control, achieving a highly qualified image by shortening the response time of the controller. © Wiley Periodicals, Inc.
Relative receiver autonomous integrity monitoring for future GNSS-based aircraft navigation
NASA Astrophysics Data System (ADS)
Gratton, Livio Rafael
The Global Positioning System (GPS) has enabled reliable, safe, and practical aircraft positioning for en-route and non-precision phases of flight for more than a decade. Intense research is currently devoted to extending the use of Global Navigation Satellite Systems (GNSS), including GPS, to precision approach and landing operations. In this context, this work is focused on the development, analysis, and verification of the concept of Relative Receiver Autonomous Integrity Monitoring (RRAIM) and its potential applications to precision approach navigation. RRAIM fault detection algorithms are developed, and associated mathematical bounds on position error are derived. These are investigated as possible solutions to some current key challenges in precision approach navigation, discussed below. Augmentation systems serving continent-size areas (like the Wide Area Augmentation System or WAAS) allow certain precision approach operations within the covered region. More and better satellites, with dual frequency capabilities, are expected to be in orbit in the mid-term future, which will potentially allow WAAS-like capabilities worldwide with a sparse ground station network. Two main challenges in achieving this goal are (1) ensuring that navigation fault detection functions are fast enough to alert worldwide users of hazardously misleading information, and (2) minimizing situations in which navigation is unavailable because the user's local satellite geometry is insufficient for safe position estimation. Local augmentation systems (implemented at individual airports, like the Local Area Augmentation System or LAAS) have the potential to allow precision approach and landing operations by providing precise corrections to user-satellite range measurements. An exception to these capabilities arises during ionospheric storms (caused by solar activity), when hazardous situations can exist with residual range errors several orders of magnitudes higher than nominal. 
Until dual frequency civil GPS signals are available, the ability to provide integrity during ionospheric storms, without excessive loss of availability, is a major challenge. For all users, with or without augmentation, some situations cause short-duration losses of satellites in view. Two examples are aircraft banking during turns and ionospheric scintillation. The loss of range signals can translate into gaps in good satellite geometry, and the resulting challenge is to ensure navigation continuity by bridging these gaps while simultaneously maintaining high integrity. It is shown that the RRAIM methods developed in this research can be applied to mitigate each of these obstacles to safe and reliable precision aircraft navigation.
Wong, Kin-Yiu; Xu, Yuqing; Xu, Liang
2015-11-01
Enzymatic reactions are integral components in many biological functions and malfunctions. The iconic structure along each reaction path for elucidating the reaction mechanism in detail is the molecular structure of the rate-limiting transition state (RLTS). But the RLTS is very hard for experimentalists to capture or visualize. In spite of the lack of an explicit molecular structure of the RLTS in experiment, we can still trace out the unique "fingerprints" of the RLTS by measuring the isotope effects on the reaction rate. This set of "fingerprints" is considered the most direct probe of the RLTS. By contrast, in computer simulations, the molecular structures of a number of TS candidates can often be precisely visualized on screen; however, theoreticians are not sure which TS is the actual rate-limiting one. As a result, this sets the stage for a perfect "marriage" between experiment and theory for determining the structure of the RLTS, along with the reaction mechanism: experimentalists are responsible for "fingerprinting", whereas theoreticians are responsible for providing candidates that match the "fingerprints". In this Review, the origin of isotope effects on a chemical reaction is discussed from the perspectives of the classical and quantum worlds, respectively (e.g., the inverse kinetic isotope effects and all the equilibrium isotope effects are purely quantum in origin). The conventional Bigeleisen equation for isotope effect calculations, as well as its refined version in the framework of Feynman's path integral and Kleinert's variational perturbation (KP) theory for systematically incorporating anharmonicity and (non-parabolic) quantum tunneling, is also presented. 
In addition, the outstanding interplay between theory and experiment for successfully deducing the RLTS structures and the reaction mechanisms is demonstrated by applications to biochemical reactions, namely models of bacterial squalene-to-hopene polycyclization and RNA 2'-O-transphosphorylation. For all these applications, we used our recently developed path-integral method based on the KP theory, called the automated integration-free path-integral (AIF-PI) method, to perform ab initio path-integral calculations of isotope effects. As opposed to conventional path-integral molecular dynamics (PIMD) and Monte Carlo (PIMC) simulations, values calculated from our AIF-PI path-integral method can be as precise as (though not as accurate as) the numerical precision of the computing machine. Lastly, comments are made on the general challenges in theoretical modeling of candidates matching the experimental "fingerprints" of RLTS. This article is part of a Special Issue entitled: Enzyme Transition States from Theory and Experiment. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukushima, Toshio, E-mail: Toshio.Fukushima@nao.ac.jp
In order to obtain the gravitational field of a general finite body inside its Brillouin sphere, we developed a new method to compute the field accurately. First, the body is assumed to consist of some layers in a certain spherical polar coordinate system and the volume mass density of each layer is expanded as a Maclaurin series of the radial coordinate. Second, the line integral with respect to the radial coordinate is analytically evaluated in a closed form. Third, the resulting surface integrals are numerically integrated by the split quadrature method using the double exponential rule. Finally, the associated gravitational acceleration vector is obtained by numerically differentiating the numerically integrated potential. Numerical experiments confirmed that the new method is capable of computing the gravitational field independently of the location of the evaluation point, namely whether inside, on the surface of, or outside the body. It can also provide sufficiently precise field values, say of 14–15 digits for the potential and of 9–10 digits for the acceleration. Furthermore, its computational efficiency is better than that of the polyhedron approximation. This is because the computational error of the new method decreases much faster than that of the polyhedron models when the number of required transcendental function calls increases. As an application, we obtained the gravitational field of 433 Eros from its shape model expressed as the 24 × 24 spherical harmonic expansion by assuming homogeneity of the object.
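The double exponential rule used for the surface integrals above can be illustrated with a minimal, generic sketch. This is a standard tanh-sinh quadrature for a 1-D integral on [-1, 1] (not the author's code; the step size and truncation range are illustrative assumptions), shown on an integrand with endpoint singularities, exactly the situation where the double exponential substitution excels:

```python
import math

def tanh_sinh(f, h=0.05, t_max=3.0):
    """Double exponential (tanh-sinh) rule for ∫_{-1}^{1} f(x) dx.

    Substitution x = tanh((π/2) sinh t) pushes the endpoints to ±∞,
    so the trapezoidal sum in t converges extremely fast.
    """
    total = 0.0
    n = int(t_max / h)
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2  # dx/dt
        total += w * f(x)
    return h * total

# Endpoint-singular integrand: ∫_{-1}^{1} dx / sqrt(1 - x²) = π
approx = tanh_sinh(lambda x: 1.0 / math.sqrt(1.0 - x * x))
```

Despite the integrable singularities at x = ±1, the rule recovers π to several digits with only ~120 nodes, which is why it suits the near-singular surface integrals that arise inside the Brillouin sphere.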
NASA Astrophysics Data System (ADS)
Li, Xingxing
2014-05-01
Earthquake monitoring and early warning systems for hazard assessment and mitigation have traditionally been based on seismic instruments. However, for large seismic events, it is difficult for traditional seismic instruments to produce accurate and reliable displacements because of the saturation of broadband seismometers and the problematic integration of strong-motion data. Compared with traditional seismic instruments, GPS can measure arbitrarily large dynamic displacements without saturation, making it particularly valuable in case of large earthquakes and tsunamis. The GPS relative positioning approach is usually adopted to estimate seismic displacements, since centimeter-level accuracy can be achieved in real-time by processing double-differenced carrier-phase observables. However, the relative positioning method requires a local reference station, which might itself be displaced during a large seismic event, resulting in misleading GPS analysis results. Meanwhile, the relative/network approach is time-consuming and particularly ill-suited to the simultaneous, real-time analysis of GPS data from hundreds or thousands of ground stations. In recent years, several single-receiver approaches for real-time GPS seismology, which can overcome the reference station problem of the relative positioning approach, have been successfully developed and applied to GPS seismology. One available method is real-time precise point positioning (PPP), which relies on precise satellite orbit and clock products. However, real-time PPP needs a long (re)convergence period, of about thirty minutes, to resolve integer phase ambiguities and achieve centimeter-level accuracy. In comparison with PPP, Colosimo et al. (2011) proposed a variometric approach to determine the change of position between two adjacent epochs, and then displacements are obtained by a single integration of the delta positions. 
This approach does not suffer from a convergence process, but the single integration from delta positions to displacements is accompanied by a drift due to potential uncompensated errors. Li et al. (2013) presented a temporal point positioning (TPP) method to quickly capture coseismic displacements with a single GPS receiver in real-time. The TPP approach can overcome the convergence problem of precise point positioning (PPP), and also avoids the integration and de-trending process of the variometric approach. The performance of TPP is demonstrated to be at the few-centimeter level of displacement accuracy even over a twenty-minute interval with real-time precise orbit and clock products. In this study, we first present and compare the observation models and processing strategies of the existing single-receiver methods for real-time GPS seismology. Furthermore, we propose several refinements to the variometric approach in order to eliminate the drift trend in the integrated coseismic displacements. The mathematical relationship between these methods is discussed in detail and their equivalence is also proved. The impact of error components such as satellite ephemeris, ionospheric delay, tropospheric delay, and geometry change on the retrieved displacements is carefully analyzed and investigated. Finally, the performance of these single-receiver approaches for real-time GPS seismology is validated using 1 Hz GPS data collected during the Tohoku-Oki earthquake (Mw 9.0, March 11, 2011) in Japan. It is shown that coseismic displacement accuracy of a few centimeters is achievable. Keywords: High-rate GPS; real-time GPS seismology; a single receiver; PPP; variometric approach; temporal point positioning; error analysis; coseismic displacement; fault slip inversion
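The drift that a single integration of epoch-differenced positions can accumulate is easy to reproduce in a toy simulation (all numbers below are illustrative assumptions, not values from the study): a small uncompensated per-epoch bias in the estimated delta positions integrates into a linear drift superimposed on the recovered coseismic step.

```python
import random

random.seed(0)
epochs = 600           # 10 minutes of 1 Hz data (illustrative)
bias = 0.0005          # 0.5 mm/epoch uncompensated error in each delta
noise = 0.002          # 2 mm white noise per delta

# True coseismic displacement: a 0.3 m step at epoch 300
true_disp = [0.0 if k < 300 else 0.3 for k in range(epochs)]
true_delta = [true_disp[k] - true_disp[k - 1] if k else 0.0
              for k in range(epochs)]

# Variometric-style deltas carry the small uncompensated bias plus noise
meas_delta = [d + bias + random.gauss(0.0, noise) for d in true_delta]

# Single integration (cumulative sum) recovers the displacement but drifts
est, s = [], 0.0
for d in meas_delta:
    s += d
    est.append(s)

drift = est[-1] - true_disp[-1]   # ≈ epochs * bias, i.e. ≈ 0.3 m here
```

The step itself is captured well over short windows, which is why de-trending (or the refinements proposed in the study) is needed only to suppress the slowly accumulating drift.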
Liu, Dan; Wang, Qisong; Liu, Xin; Niu, Ruixin; Zhang, Yan; Sun, Jinwei
2016-01-01
Accurately measuring the oil content and salt content of crude oil is very important for both estimating oil reserves and predicting the lifetime of an oil well. The current methods suffer from problems such as high cost, low precision, and difficulty of operation. To solve these problems, we present a multifunctional sensor, which applies a conductivity method and an ultrasonic method, respectively, to measure the contents of oil, water, and salt. Based on cross-sensitivity theory, these two transducers are ideally integrated to simplify the structure. A concentration test of ternary solutions is carried out to verify the sensor's effectiveness, and Canonical Correlation Analysis is then applied to evaluate the data. From the perspective of statistics, the sensor inputs, for instance, oil concentration, salt concentration, and temperature, are closely related to its outputs, including output voltage and time of flight of the ultrasound wave, which further confirms the correctness of the sensing theory and the feasibility of the integrated design. Combined with reconstruction algorithms, the sensor can precisely measure the contents of the solution. The proposed sensor and method hold important reference and practical value for the online testing of crude oil. PMID:27775640
Assessing Backwards Integration as a Method of KBO Family Finding
NASA Astrophysics Data System (ADS)
Benfell, Nathan; Ragozzine, Darin
2018-04-01
The age of young asteroid collisional families can sometimes be determined by using backwards n-body integrations of the solar system. This method is not used for discovering young asteroid families and is limited by the unpredictable influence of the Yarkovsky effect on individual asteroids over time. Since these limitations are not as important for objects in the Kuiper belt, Marcus et al. 2011 suggested that backwards integration could be used to discover and characterize collisional families in the outer solar system. But various challenges present themselves when running precise and accurate 4+ Gyr integrations of Kuiper Belt objects. We have created simulated families of Kuiper Belt Objects with identical starting locations and velocity distributions, based on the Haumea family. We then ran several long-term test integrations to observe the effect of various simulation parameters on integration results. These integrations were used to investigate which parameters are significant enough to require inclusion in the integration. We thereby determined how to construct long-term integrations that both yield significant results and require manageable processing power. Additionally, we have tested the use of backwards integration as a method of discovering potential young families in the Kuiper Belt.
Beste, A; Harrison, R J; Yanai, T
2006-08-21
Chemists are mainly interested in energy differences. In contrast, most quantum chemical methods yield the total energy, which is a large number compared to the difference and must therefore be computed to a higher relative precision than would be necessary for the difference alone. Hence, it is desirable to compute energy differences directly, thereby avoiding the precision problem. Whenever it is possible to find a parameter which transforms smoothly from an initial to a final state, the energy difference can be obtained by integrating the energy derivative with respect to that parameter (cf. thermodynamic integration or adiabatic connection methods). If the dependence on the parameter is predominantly linear, accurate results can be obtained by single-point integration. In density functional theory and Hartree-Fock, we applied the formalism to ionization potentials, excitation energies, and chemical bond breaking. Example calculations for ionization potentials and excitation energies showed that accurate results could be obtained with a linear estimate. For breaking bonds, we introduce a nongeometrical parameter which gradually turns on the interaction between two fragments of a molecule. The interaction changes the potentials used to determine the orbitals as well as the constraint on the orbitals to be orthogonal.
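The single-point integration idea can be sketched numerically. Assuming a toy energy profile E(λ) = a + bλ + cλ³ whose dependence on the coupling parameter λ is predominantly linear (small c, values purely illustrative), a single midpoint evaluation of the derivative already estimates ΔE = ∫₀¹ E′(λ) dλ well, while a 3-point Gauss rule recovers it exactly:

```python
import math

def dE_dlambda(lam, b=-0.52, c=0.03):
    """Toy derivative of E(λ) = a + bλ + cλ³: predominantly linear in λ."""
    return b + 3.0 * c * lam ** 2

exact = -0.52 + 0.03                # ΔE = ∫₀¹ E'(λ) dλ = b + c
single_point = dE_dlambda(0.5)      # midpoint rule: one derivative evaluation

# 3-point Gauss-Legendre on [0, 1]; exact for polynomials up to degree 5
nodes = [0.5 - 0.5 * math.sqrt(0.6), 0.5, 0.5 + 0.5 * math.sqrt(0.6)]
weights = [5.0 / 18.0, 8.0 / 18.0, 5.0 / 18.0]
gauss = sum(w * dE_dlambda(x) for w, x in zip(weights, nodes))
```

The midpoint estimate is off only by the small cubic contribution (c/4 here), which is the sense in which a "linear estimate" suffices when the parameter dependence is nearly linear.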
QED contributions to electron g-2
NASA Astrophysics Data System (ADS)
Laporta, Stefano
2018-05-01
In this paper I briefly describe the results of the numerical evaluation of the mass-independent 4-loop contribution to the electron g-2 in QED with 1100 digits of precision. In particular, I also show the semi-analytical fit to the numerical value, which contains harmonic polylogarithms of e^(iπ/3), e^(2iπ/3), and e^(iπ/2), one-dimensional integrals of products of complete elliptic integrals, and six finite parts of master integrals, evaluated up to 4800 digits. I also give some information about the methods and the program used.
The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network
NASA Astrophysics Data System (ADS)
Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.
2017-05-01
The continuous improvement of the prediction accuracy of Satellite Clock Bias (SCB) is a key problem of precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on the Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are pre-treated based on their characteristics. Then, an accurate Takagi-Sugeno fuzzy neural network model is established based on the preprocessed data to predict SCB. This paper uses precise SCB data with different sampling intervals provided by the IGS (International GNSS Service) to carry out short-term prediction experiments, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, the GM(1,1) model, and the quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for short-term SCB prediction and performs well for different types of clocks. The prediction results of the proposed method are clearly better than those of the conventional methods.
High-speed extended-term time-domain simulation for online cascading analysis of power system
NASA Astrophysics Data System (ADS)
Fu, Chuan
A high-speed extended-term (HSET) time domain simulator (TDS), intended to become a part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events, (ii) ability to simulate both fast and slow dynamics for 1-3 hours in advance, (iii) inclusion of rigorous protection-system modeling, (iv) intelligence for corrective action ID, storage, and fast retrieval, and (v) high-speed execution. Very fast online computational capability is the most desired attribute of this simulator. Based on the process of solving the algebraic differential equations describing the dynamics of a power system, HSET-TDS seeks to develop computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses greater high-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable step size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time domain simulation for online purposes, this thesis presents principles for designing numerical solvers of the differential algebraic systems associated with power system time-domain simulation, including DAE construction strategies (direct solution method), integration methods (HH4), nonlinear solvers (very dishonest Newton), and linear solvers (SuperLU). 
We have implemented a design appropriate for HSET-TDS, and we compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the 13029-bus PJM system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time domain simulation software for supercomputers. The stiffness-decoupling method is able to combine the advantages of implicit methods (A-stability) and explicit methods (less computation). With the new stiffness detection method proposed herein, the stiffness can be captured. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task via scale with the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task via the time axis using a highly precise integration method, the Kuntzmann-Butcher method of order 8 (KB8). The strategy of partitioning events is designed to partition the whole simulation via the time axis through a simulated sequence of cascading events. Of all the strategies proposed, the strategy of partitioning cascading events is recommended, since the sub-tasks for each processor are totally independent, and therefore minimum communication time is needed.
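The stability argument for an A-stable high-order method such as HH4 can be seen on a scalar stiff test equation. The sketch below applies the standard stability functions of forward Euler, the trapezoidal rule, and the 2-stage Gauss method of order 4 (the HH4 family) to y′ = λy with hλ = −5; this is only a textbook toy problem, not the power system DAEs of the thesis:

```python
import math

# Stiff scalar test y' = λy on [0, 1], λ = -50, fixed step h = 0.1 (hλ = -5)
lam, h, steps = -50.0, 0.1, 10
z = h * lam

def propagate(amp_factor):
    """Apply a one-step amplification factor repeatedly from y(0) = 1."""
    y = 1.0
    for _ in range(steps):
        y *= amp_factor
    return y

explicit_euler = propagate(1.0 + z)                      # |1+z| = 4: diverges
trapezoidal = propagate((1.0 + z / 2) / (1.0 - z / 2))   # A-stable, order 2
hh4 = propagate((1.0 + z / 2 + z * z / 12) /
                (1.0 - z / 2 + z * z / 12))              # 2-stage Gauss, order 4
exact = math.exp(lam)                                    # e^{-50} ≈ 1.9e-22
```

With this large step, forward Euler blows up, both implicit schemes decay, and the order-4 stability function tracks the true decay far more closely than the trapezoidal rule, which is the practical case for larger steps with HH4.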
Calculation of precision satellite orbits with nonsingular elements /VOP formulation/
NASA Technical Reports Server (NTRS)
Velez, C. E.; Cefola, P. J.; Long, A. C.; Nimitz, K. S.
1974-01-01
Review of some results obtained in an effort to develop efficient, high-precision trajectory computation processes for artificial satellites by optimum selection of the form of the equations of motion of the satellite and the numerical integration method. In particular, consideration is given to matching a Gaussian variation-of-parameters (VOP) formulation, expressed in terms of equinoctial orbital elements, which partially decouples the motion of the orbital frame from motion within the orbital frame. The performance of the resulting orbit generators is then compared with the popular classical Cowell/Gauss-Jackson formulation/integrator pair for two distinctly different orbit types - namely, the orbit of the ATS satellite at near-geosynchronous conditions and the near-circular orbit of the GEOS-C satellite at 1000 km.
Performance analysis of device-level SINS/ACFSS deeply integrated navigation method
NASA Astrophysics Data System (ADS)
Zhang, Hao; Qin, Shiqiao; Wang, Xingshu; Jiang, Guangwen; Tan, Wenfeng
2016-10-01
The Strap-Down Inertial Navigation System (SINS) is a widely used navigation system. The combination of SINS and the Celestial Navigation System (CNS) is a popular way to constitute an integrated navigation system. A Star Sensor (SS) is used as a precise attitude determination device in CNS. To solve the problem that the star image obtained by an SS under dynamic conditions is motion-blurred, the Attitude Correlated Frames (ACF) approach is presented, and a star sensor that works based on the ACF approach is named an ACFSS. Building on the ACF approach, a novel device-level SINS/ACFSS deeply integrated navigation method is proposed in this paper. Feedback to the ACF process from the gyro error is one of the typical characteristics of the SINS/CNS deeply integrated navigation method. Simulation results have verified its validity and efficiency in improving the accuracy of the gyro, and it can be shown that this method is feasible in theory.
Ferroelectric Zinc Oxide Nanowire Embedded Flexible Sensor for Motion and Temperature Sensing.
Shin, Sung-Ho; Park, Dae Hoon; Jung, Joo-Yun; Lee, Min Hyung; Nah, Junghyo
2017-03-22
We report a simple method to realize a multifunctional flexible motion sensor using ferroelectric lithium-doped ZnO-PDMS. The ferroelectric layer enables piezoelectric dynamic sensing and provides additional motion information to more precisely discriminate different motions. The PEDOT:PSS-functionalized AgNWs, working as electrode layers for the piezoelectric sensing layer, resistively detect changes in both movement and temperature. Thus, through the optimal integration of both elements, the sensing limit, accuracy, and functionality can be further expanded. The method introduced here is a simple and effective route to realize a high-performance flexible motion sensor with integrated multifunctionalities.
Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique
2011-05-01
In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
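For reference, the exact event-driven computation that is possible for the leaky integrate-and-fire model (but not, in general, for the quadratic model, which is why approximations like voltage stepping are needed) reduces to a closed-form spike time. A minimal sketch with illustrative parameters:

```python
import math

def lif_spike_time(v0, tau=20.0, i_ext=1.2, theta=1.0):
    """Exact next spike time of a leaky integrate-and-fire neuron.

    Dimensionless sketch of dV/dt = (-V + i_ext) / tau with threshold theta:
    V(t) = i_ext + (v0 - i_ext) e^{-t/tau}, so the threshold crossing is
    t* = tau * ln((i_ext - v0) / (i_ext - theta)), provided i_ext > theta.
    """
    if i_ext <= theta:
        return None  # subthreshold drive: the neuron never fires
    return tau * math.log((i_ext - v0) / (i_ext - theta))

t_spike = lif_spike_time(v0=0.0)   # τ ln(1.2/0.2) = 20 ln 6 ≈ 35.8
```

An event-driven simulator schedules this analytically computed crossing directly instead of stepping the voltage, which is exactly the capability that breaks down for nonlinear models and motivates voltage stepping.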
Picosecond Resolution Time-to-Digital Converter Using Gm-C Integrator and SAR-ADC
NASA Astrophysics Data System (ADS)
Xu, Zule; Miyahara, Masaya; Matsuzawa, Akira
2014-04-01
A picosecond resolution time-to-digital converter (TDC) is presented. The resolution of a conventional delay chain TDC is limited by the delay of a logic buffer. Various types of recent TDCs are successful in breaking this limitation, but they require a significant calibration effort to achieve picosecond resolution with a sufficient linear range. To address these issues, we propose a simple method to break the resolution limitation without any calibration: a Gm-C integrator followed by a successive approximation register analog-to-digital converter (SAR-ADC). This translates the time interval into charge, and then the charge is quantized. A prototype chip was fabricated in 90 nm CMOS. The measurement results reveal a 1 ps resolution, a -0.6/0.7 LSB differential nonlinearity (DNL), a -1.1/2.3 LSB integral nonlinearity (INL), and a 9-bit range. The measured 11.74 ps single-shot precision is caused by the noise of the integrator. We analyze the noise of the integrator and propose an improved front-end circuit to reduce this noise. The proposal is verified by simulations showing the maximum single-shot precision is less than 1 ps. The proposed front-end circuit can also diminish the mismatch effects.
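The time-to-charge-to-code chain can be modelled in a few lines. This ideal behavioural sketch (the component values are assumptions chosen to give roughly a 1 ps LSB, not the fabricated design) shows how the Gm-C integrator converts an interval into a voltage that the SAR-ADC then quantizes:

```python
def tdc_code(dt_s, gm=2e-3, v_in=1.0, c=1e-12, bits=9, v_ref=1.0):
    """Ideal behavioural model: Δt → integrated charge → SAR quantization.

    gm, v_in, c are illustrative values giving a slope of 2e9 V/s,
    i.e. about 1 ps per LSB of a 9-bit, 1 V full-scale SAR-ADC.
    """
    v_out = gm * v_in * dt_s / c          # Gm-C integrator output voltage
    lsb = v_ref / (1 << bits)             # ADC quantization step
    code = int(v_out / lsb)
    return max(0, min(code, (1 << bits) - 1))
```

In this idealized model a 10 ps interval maps to code 10 and intervals beyond full scale clip at code 511; the measured single-shot precision of the real chip is set by integrator noise, which this sketch deliberately omits.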
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, T; Ding, H; Lipinski, J
2015-06-15
Purpose: To develop a physics-based model for accurate quantification of the cross-sectional area (CSA) of coronary arteries in CT angiography by measuring the integrated density to account for the partial volume effect. Methods: In this technique the integrated density of the object as compared with its local background is measured to account for the partial volume effect. Normal vessels were simulated as circles with diameters in the range of 0.1–3 mm. Diseased vessels were simulated as 2, 3, and 4 mm diameter vessels with 10–90% area stenosis, created by inserting circular plaques. A simplified two-material model was used with the lumen as 8 mg/ml iodine and the background as lipid. The contrast-to-noise ratio between lumen and background was approximately 26. Linear fits to the known CSA were calculated. The precision and accuracy of the measurement were quantified using the root-mean-square fit deviations (RMSD) and errors to the known CSA (RMSE). Results were compared to manual segmentation of the vessel lumen. To assess the impact of random variations, coefficients of variation (CV) from 10 simulations for each vessel were computed to determine reliability. Measurements with CVs less than 10% were considered reliable. Results: For normal vessels, the precision and accuracy of the integrated density technique were 0.12 mm² and 0.28 mm², respectively. The corresponding results for manual segmentation were 0.27 mm² and 0.43 mm². For diseased vessels, the precision and accuracy of the integrated density technique were 0.14 mm² and 0.19 mm². Corresponding results for manual segmentation were 0.42 mm² and 0.71 mm². Reliable CSAs were obtained for normal vessels with diameters larger than 1 mm and for diseased vessels with area as low as 1.26 mm². Conclusion: The CSA based on integrated density showed improved precision and accuracy as compared with manual segmentation in simulation. These results indicate the potential of using integrated density to quantify CSA of coronary arteries in CT angiography.
Electron Beam "Writes" Silicon On Sapphire
NASA Technical Reports Server (NTRS)
Heinemann, Klaus
1988-01-01
Method of growing silicon on sapphire substrate uses beam of electrons to aid growth of semiconductor material. Silicon forms as epitaxial film in precisely localized areas in micron-wide lines. Promising fabrication method for fast, densely-packed integrated circuits. Silicon deposited preferentially in contaminated substrate zones and in clean zone irradiated by electron beam. Electron beam, like surface contamination, appears to stimulate decomposition of silane atmosphere.
Application of Neural Networks to Wind tunnel Data Response Surface Methods
NASA Technical Reports Server (NTRS)
Lo, Ching F.; Zhao, J. L.; DeLoach, Richard
2000-01-01
The integration of nonlinear neural network methods with conventional linear regression techniques is demonstrated for representative wind tunnel force balance data modeling. This work was motivated by a desire to formulate precision intervals for response surfaces produced by neural networks. Applications are demonstrated for representative wind tunnel data acquired at NASA Langley Research Center and the Arnold Engineering Development Center in Tullahoma, TN.
Hybrid perturbation methods based on statistical time series models
NASA Astrophysics Data System (ADS)
San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario
2016-04-01
In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical, or semianalytical theory, generates an initial approximation that contains some inaccuracies: in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered; moreover, mathematical models of perturbations do not always reproduce the physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical, and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by combining three different orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to capture the effect produced by the flattening of the Earth. The three analytical components considered are the integration of the Kepler problem, a first-order analytical theory, and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
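A stripped-down version of the residual-modelling step can be sketched as follows, using Holt's linear exponential smoothing rather than the full additive Holt-Winters of the paper, a purely illustrative secular residual, and assumed smoothing weights:

```python
alpha, beta = 0.5, 0.3  # smoothing weights (assumed, not from the paper)

def holt_fit(series):
    """Fit Holt's linear exponential smoothing; return final level and trend."""
    level, trend = series[0], series[1] - series[0]
    for y in series[2:]:
        prev = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level, trend

def holt_forecast(level, trend, k):
    """k-step-ahead forecast of the residual of the analytical theory."""
    return level + k * trend

# Residual of the analytical propagator grows secularly, e.g. an unmodelled
# drift of 0.02 units per epoch (purely illustrative)
resid = [0.02 * t for t in range(50)]
level, trend = holt_fit(resid)
pred = holt_forecast(level, trend, 10)   # forecast residual at epoch 59
# hybrid state = analytical propagation + pred
```

Because the time series model learns whatever systematic error the analytical component leaves behind, the hybrid propagator's precision is set by how predictable that residual is, here a linear trend that Holt's method tracks essentially exactly.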
Advanced optical manufacturing digital integrated system
NASA Astrophysics Data System (ADS)
Tao, Yizheng; Li, Xinglan; Li, Wei; Tang, Dingyong
2012-10-01
The development of advanced optical manufacturing technology must keep pace with the development of modern science and technology. This work addresses the problems of low efficiency, low yield of finished products, and poor repeatability and consistency in the manufacturing of large-size, high-precision advanced optical components. Applying a business-driven approach and the Rational Unified Process method, this paper studies the advanced optical manufacturing process flow and the requirements of an Advanced Optical Manufacturing Integrated System, and puts forward its architecture and key technologies. The optical-component core and the manufacturing-process-driven design of the Advanced Optical Manufacturing Digital Integrated System are presented. The system proved effective in practice, realizing dynamic planning of the manufacturing process, while information integration improved the production yield of the manufactory.
Wu, Fan; Stark, Eran; Ku, Pei-Cheng; Wise, Kensall D.; Buzsáki, György; Yoon, Euisik
2015-01-01
We report a scalable method to monolithically integrate microscopic light emitting diodes (μLEDs) and recording sites onto silicon neural probes for optogenetic applications in neuroscience. Each μLED and recording site has dimensions similar to a pyramidal neuron soma, providing confined emission and electrophysiological recording of action potentials and local field activity. We fabricated and implanted the four-shank probes, each integrated with 12 μLEDs and 32 recording sites, into the CA1 pyramidal layer of anesthetized and freely moving mice. Spikes were robustly induced by 60 nW light power, and fast population oscillations were induced at the microwatt range. To demonstrate the spatiotemporal precision of parallel stimulation and recording, we achieved independent control of distinct cells ~50 μm apart and of differential somatodendritic compartments of single neurons. The scalability and spatiotemporal resolution of this monolithic optogenetic tool provides versatility and precision for cellular-level circuit analysis in deep structures of intact, freely moving animals. PMID:26627311
A novel approach to evaluation of pest insect abundance in the presence of noise.
Embleton, Nina; Petrovskaya, Natalia
2014-03-01
Evaluation of pest abundance is an important task of integrated pest management. It has recently been shown that evaluation of pest population size from discrete sampling data can be done by using the ideas of numerical integration. Numerical integration of the pest population density function is a computational technique that readily gives us an estimate of the pest population size, where the accuracy of the estimate depends on the number of traps installed in the agricultural field to collect the data. However, in a standard mathematical problem of numerical integration, it is assumed that the data are precise, so that the random error is zero when the data are collected. This assumption does not hold in ecological applications. An inherent random error is often present in field measurements, and therefore it may strongly affect the accuracy of evaluation. In our paper, we offer a novel approach to evaluate the pest insect population size under the assumption that the data about the pest population include a random error. The evaluation is not based on statistical methods but is done using a spatially discrete method of numerical integration where the data obtained by trapping as in pest insect monitoring are converted to values of the population density. It will be discussed in the paper how the accuracy of evaluation differs from the case where the same evaluation method is employed to handle precise data. We also consider how the accuracy of the pest insect abundance evaluation can be affected by noise when the data available from trapping are sparse. In particular, we show that, contrary to intuitive expectations, noise does not have any considerable impact on the accuracy of evaluation when the number of traps is small as is conventional in ecological applications.
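The quadrature idea the abstract describes can be sketched in a few lines: trap counts converted to density values are integrated with the composite trapezoidal rule, and a multiplicative random error models the field-measurement noise. This is a minimal illustration with an invented density function, noise level, and trap layout, not the authors' code:

```python
import numpy as np

def estimate_population(trap_x, density):
    """Composite trapezoidal rule over the trap locations: a discrete
    estimate of the integral of the pest density over the field."""
    trap_x, density = np.asarray(trap_x, float), np.asarray(density, float)
    return float(np.sum((density[1:] + density[:-1]) * np.diff(trap_x)) / 2.0)

# Hypothetical "true" pest density over a unit-length field (invented)
def true_density(x):
    return 100.0 * np.exp(-((x - 0.5) ** 2) / 0.02)

# Near-exact population size from a very fine reference grid
x_fine = np.linspace(0.0, 1.0, 10001)
exact = estimate_population(x_fine, true_density(x_fine))

# Sparse trap grid (9 traps) with 10% multiplicative measurement noise,
# mimicking the random error inherent in field measurements
rng = np.random.default_rng(1)
traps = np.linspace(0.0, 1.0, 9)
measured = true_density(traps) * (1.0 + 0.1 * rng.standard_normal(traps.size))
estimate = estimate_population(traps, measured)
print(f"exact = {exact:.2f}, sparse noisy estimate = {estimate:.2f}")
```

With only nine traps the quadrature error is already appreciable, so moderate noise changes the estimate comparatively little, which is the behavior the abstract reports for sparse data.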
Attalla, R; Ling, C; Selvaganapathy, P
2016-02-01
The lack of a simple and effective method to integrate a vascular network with engineered scaffolds and tissue constructs remains one of the biggest challenges in true 3D tissue engineering. Here, we detail the use of a commercially available, low-cost, open-source 3D printer modified with a microfluidic print-head to develop a method for generating an instantly perfusable vascular network integrated with gel scaffolds seeded with cells. The print-head features an integrated coaxial nozzle that allows the fabrication of hollow, calcium-polymerized alginate tubes that can be easily patterned using 3D printing techniques. The diameter of the hollow channel can be precisely controlled and varied between 500 μm and 2 mm by changing the applied flow rates or print-head speed. These channels are integrated into gel layers with a thickness of 800 μm - 2.5 mm. The structural rigidity of these constructs allows the fabrication of multi-layered structures without causing the collapse of hollow channels in lower layers. The 3D printing method was fully characterized over a range of operating speeds (0-40 m/min), and corresponding flow rates (1-30 mL/min) were identified that produce precise channel definition. This microfluidic design also allows the incorporation of a wide range of scaffold materials as well as biological constituents such as cells, growth factors, and ECM material. Media perfusion of the channels causes a significant long-term viability increase in the bulk of cell-laden structures. With this setup, gel constructs with embedded arrays of hollow channels can be created and used as a potential substitute for blood vessel networks.
Zhu, Haixin; Zhou, Xianfeng; Su, Fengyu; Tian, Yanqing; Ashili, Shashanka; Holl, Mark R.; Meldrum, Deirdre R.
2012-01-01
We report a novel method for wafer-level, high-throughput optical chemical sensor patterning, with precise control of the sensor volume and the capability of producing arbitrary microscale patterns. Monomeric oxygen (O2) and pH optical probes were polymerized with 2-hydroxyethyl methacrylate (HEMA) and acrylamide (AM) to form spin-coatable and further crosslinkable polymers. A micro-patterning method based on micro-fabrication techniques (photolithography, wet chemical processing, and reactive ion etching) was developed to miniaturize the sensor film onto glass substrates in arbitrary sizes and shapes. The sensitivity of the fabricated micro-patterns was characterized under various oxygen concentrations and pH values. A process for the spatial integration of two sensors (oxygen and pH) on the same substrate surface was also developed, and preliminary fabrication and characterization results are presented. To the best of our knowledge, this is the first time that poly(2-hydroxyethyl methacrylate)-co-poly(acrylamide) (PHEMA-co-PAM)-based sensors have been patterned and integrated at the wafer level with micron-scale precision using microfabrication techniques. The developed methods provide a feasible way to miniaturize and integrate optical chemical sensor systems and can be applied to any lab-on-a-chip system, especially biological micro-systems requiring optical sensing of single or multiple analytes. PMID:23175599
Enzymatic Kinetic Isotope Effects from Path-Integral Free Energy Perturbation Theory.
Gao, J
2016-01-01
Path-integral free energy perturbation (PI-FEP) theory is presented to directly determine the ratio of quantum mechanical partition functions of different isotopologs in a single simulation. Furthermore, a double averaging strategy is used to carry out the practical simulation, separating the quantum mechanical path integral exactly into two calculations: one corresponding to a classical molecular dynamics simulation of the centroid coordinates, and another involving free-particle path-integral sampling over the classical, centroid positions. An integrated centroid path-integral free energy perturbation and umbrella sampling (PI-FEP/UM, or simply PI-FEP) method along with bisection sampling is summarized, which provides an accurate and rapidly convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. The PI-FEP method is illustrated by a number of applications that highlight the computational precision and accuracy, the rule of the geometric mean in kinetic isotope effects, enhanced nuclear quantum effects in enzyme catalysis, and the influence of protein dynamics on the temperature dependence of kinetic isotope effects. © 2016 Elsevier Inc. All rights reserved.
Integrated titer plate-injector head for microdrop array preparation, storage and transfer
Swierkowski, Stefan P.
2000-01-01
An integrated titer plate-injector head for preparing and storing two-dimensional (2-D) arrays of microdrops and for ejecting part or all of the microdrops and inserting same precisely into 2-D arrays of deposition sites with micrometer precision. The titer plate-injector head includes integrated precision formed nozzles with appropriate hydrophobic surface features and evaporative constraints. A reusable pressure head with a pressure equalizing feature is added to the titer plate to perform simultaneous precision sample ejection. The titer plate-injector head may be utilized in various applications including capillary electrophoresis, chemical flow injection analysis, microsample array preparation, etc.
Kinematic Alignment and Bonding of Silicon Mirrors for High-Resolution Astronomical X-Ray Optics
NASA Technical Reports Server (NTRS)
Chan, Kai-Wing; Mazzarella, James R.; Saha, Timo T.; Zhang, William W.; Mcclelland, Ryan S.; Biskack, Michael P.; Riveros, Raul E.; Allgood, Kim D.; Kearney, John D.; Sharpe, Marton V.;
2017-01-01
Optics for the next generation's high-resolution, high throughput x-ray telescope requires fabrication of well-formed lightweight mirror segments and their integration at arc-second precision. Recent advances in the fabrication of silicon mirrors developed at NASA/Goddard prompted us to develop a new method of mirror alignment and integration. In this method, stiff silicon mirrors are aligned quasi-kinematically and are bonded in an interlocking fashion to produce a "meta-shell" with large collective area. We address issues of aligning and bonding mirrors with this method and show a recent result of 4 seconds-of-arc for a single pair of mirrors tested at soft x-rays.
NASA Technical Reports Server (NTRS)
Stanley, A. G.; Gauthier, M. K.
1977-01-01
A successful diagnostic technique was developed using a scanning electron microscope (SEM) as a precision tool to determine ionization effects in integrated circuits. Previous SEM methods irradiated the entire semiconductor chip or major areas of it. Such large-area exposure methods do not reveal exactly which components are sensitive to radiation. To locate these sensitive components, a new method was developed that consists of successively irradiating selected components on the device chip with equal doses of electrons (10^6 rad(Si)) while the whole device is subjected to representative bias conditions. A suitable device parameter was measured in situ after each successive irradiation with the beam off.
Integrating Terrain Maps Into a Reactive Navigation Strategy
NASA Technical Reports Server (NTRS)
Howard, Ayanna; Werger, Barry; Seraji, Homayoun
2006-01-01
An improved method of processing information for autonomous navigation of a robotic vehicle across rough terrain involves the integration of terrain maps into a reactive navigation strategy. Somewhat more precisely, the method involves the incorporation, into navigation logic, of data equivalent to regional traversability maps. The terrain characteristic is mapped using a fuzzy-logic representation of the difficulty of traversing the terrain. The method is robust in that it integrates a global path-planning strategy with sensor-based regional and local navigation strategies to ensure a high probability of success in reaching a destination and avoiding obstacles along the way. The sensor-based strategies use cameras aboard the vehicle to observe the regional terrain, defined as the area of the terrain that covers the immediate vicinity near the vehicle to a specified distance a few meters away.
On the inversion of geodetic integrals defined over the sphere using 1-D FFT
NASA Astrophysics Data System (ADS)
García, R. V.; Alejo, C. A.
2005-08-01
An iterative method is presented which performs inversion of integrals defined over the sphere. The method is based on one-dimensional fast Fourier transform (1-D FFT) inversion and is implemented with the projected Landweber technique, which is used to solve constrained least-squares problems reducing the associated 1-D cyclic-convolution error. The results obtained are as precise as the direct matrix inversion approach, but with better computational efficiency. A case study uses the inversion of Hotine’s integral to obtain gravity disturbances from geoid undulations. Numerical convergence is also analyzed and comparisons with respect to the direct matrix inversion method using conjugate gradient (CG) iteration are presented. Like the CG method, the number of iterations needed to get the optimum (i.e., small) error decreases as the measurement noise increases. Nevertheless, for discrete data given over a whole parallel band, the method can be applied directly without implementing the projected Landweber method, since no cyclic convolution error exists.
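The projected Landweber scheme the abstract refers to can be sketched on a generic 1-D cyclic-convolution problem, with the gradient step applied via 1-D FFT and a projection after each update. This is a toy deconvolution with an invented blur kernel and a nonnegativity constraint, not the geodetic Hotine kernel or the authors' implementation:

```python
import numpy as np

def projected_landweber_fft(kernel, b, n_iter=300, tau=None):
    """Invert the cyclic convolution b = kernel * x by Landweber
    iteration; each gradient step is evaluated with 1-D FFTs and the
    iterate is projected onto the nonnegative orthant (the constraint)."""
    K = np.fft.fft(kernel)
    if tau is None:
        tau = 1.0 / np.max(np.abs(K)) ** 2   # ensures tau < 2 / ||A||^2
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        residual = b - np.real(np.fft.ifft(K * np.fft.fft(x)))
        gradient = np.real(np.fft.ifft(np.conj(K) * np.fft.fft(residual)))
        x = np.maximum(x + tau * gradient, 0.0)   # projection step
    return x

n = 64
kernel = np.zeros(n)
kernel[[0, 1, n - 1]] = [0.6, 0.2, 0.2]      # well-conditioned cyclic blur

x_true = np.zeros(n)
x_true[[10, 30, 31, 50]] = [1.0, 2.0, 1.5, 0.5]
b = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(x_true)))

x_rec = projected_landweber_fft(kernel, b)
print("max reconstruction error:", np.max(np.abs(x_rec - x_true)))
```

Because each iteration costs only a few FFTs, the scheme scales like the 1-D FFT approach described in the abstract rather than like a direct matrix inversion.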
USDA-ARS?s Scientific Manuscript database
Rangeland environments are particularly susceptible to erosion due to extreme rainfall events and low vegetation cover. Landowners and managers need access to reliable erosion evaluation methods in order to protect productivity and hydrologic integrity of their rangelands and make resource allocati...
Integration of Molecular Pathology, Epidemiology, and Social Science for Global Precision Medicine
Nishi, Akihiro; Milner, Danny A; Giovannucci, Edward L.; Nishihara, Reiko; Tan, Andy S.; Kawachi, Ichiro; Ogino, Shuji
2015-01-01
Summary The precision medicine concept and the unique disease principle imply that each patient has unique pathogenic processes resulting from heterogeneous cellular genetic and epigenetic alterations, and interactions between cells (including immune cells) and exposures, including dietary, environmental, microbial, and lifestyle factors. As a core method field in population health science and medicine, epidemiology is a growing scientific discipline that can analyze disease risk factors, and develop statistical methodologies to maximize utilization of big data on populations and disease pathology. The evolving transdisciplinary field of molecular pathological epidemiology (MPE) can advance biomedical and health research by linking exposures to molecular pathologic signatures, enhancing causal inference, and identifying potential biomarkers for clinical impact. The MPE approach can be applied to any diseases, although it has been most commonly used in neoplastic diseases (including breast, lung and colorectal cancers) because of availability of various molecular diagnostic tests. However, use of state-of-the-art genomic, epigenomic and other omic technologies and expensive drugs in modern healthcare systems increases racial, ethnic and socioeconomic disparities. To address this, we propose to integrate molecular pathology, epidemiology, and social science. Social epidemiology integrates the latter two fields. The integrative social MPE model can embrace sociology, economics and precision medicine, address global health disparities and inequalities, and elucidate biological effects of social environments, behaviors, and networks. We foresee advancements of molecular medicine, including molecular diagnostics, biomedical imaging, and targeted therapeutics, which should benefit individuals in a global population, by means of an interdisciplinary approach of integrative MPE and social health science. PMID:26636627
Advances in molecular dynamics simulation of ultra-precision machining of hard and brittle materials
NASA Astrophysics Data System (ADS)
Guo, Xiaoguang; Li, Qiang; Liu, Tao; Kang, Renke; Jin, Zhuji; Guo, Dongming
2017-03-01
Hard and brittle materials, such as silicon, SiC, and optical glasses, are widely used in aerospace, military, integrated circuit, and other fields because of their excellent physical and chemical properties. However, these materials display poor machinability because of their hard and brittle properties. Damages such as surface micro-crack and subsurface damage often occur during machining of hard and brittle materials. Ultra-precision machining is widely used in processing hard and brittle materials to obtain nanoscale machining quality. However, the theoretical mechanism underlying this method remains unclear. This paper provides a review of present research on the molecular dynamics simulation of ultra-precision machining of hard and brittle materials. The future trends in this field are also discussed.
Precision Medicine in Head and Neck Cancer: Myth or Reality?
Malone, Eoghan; Siu, Lillian L
2018-01-01
Standard treatment in head and neck squamous cell carcinoma (HNSCC) is currently limited, with decisions made primarily on the basis of tumor location, histology, and stage. The role of the human papillomavirus in risk stratification is actively under clinical trial evaluation. The molecular complexity and intratumoral heterogeneity of the disease are not actively integrated into management decisions for HNSCC, despite a growing body of knowledge in these areas. The advent of the genomic era has delivered vast amounts of information regarding different cancer subtypes and is providing new therapeutic targets, which can potentially be elucidated using next-generation sequencing and other modern technologies. The task ahead is to expand the existing armamentarium by exploring beyond the genome and performing integrative analyses using innovative systems biology methods, with the goal of delivering effective precision medicine-based theragnostic options in HNSCC.
Reliability and precision of stress sonography of the ulnar collateral ligament.
Bica, David; Armen, Joseph; Kulas, Anthony S; Youngs, Kevin; Womack, Zachary
2015-03-01
Musculoskeletal sonography has emerged as an additional diagnostic tool that can be used to assess medial elbow pain and laxity in overhead throwers. It provides a dynamic, rapid, and noninvasive modality for evaluating ligamentous structural integrity. Many studies have demonstrated the utility of dynamic sonography for assessing medial elbow and ulnar collateral ligament (UCL) integrity. However, evaluating the reliability and precision of these measurements is critical if sonography is ultimately to be used as a clinical diagnostic tool. The purpose of this study was to evaluate the reliability and precision of stress sonography applied to the medial elbow. We conducted a cross-sectional study during the 2011 baseball off-season. Eighteen National Collegiate Athletic Association Division I pitchers were enrolled, and 36 elbows were studied. Using sonography, the medial elbow was assessed, and measurements of the UCL length and ulnohumeral joint gapping were performed twice under two conditions (unloaded and loaded) and bilaterally. Intraclass correlation coefficients (0.72-0.94) and standard errors of measurement (0.3-0.9 mm) for UCL length and ulnohumeral joint gapping were good to excellent. Mean differences between unloaded and loaded conditions for the dominant arms were 1.3 mm (gapping; P < .001) and 1.4 mm (UCL length; P < .001). Medial elbow stress sonography is a reliable and precise method for detecting changes in ulnohumeral joint gapping and UCL lengthening. Ultimately, this method may provide clinicians valuable information regarding the medial elbow's response to valgus loading and may help guide treatment options. © 2015 by the American Institute of Ultrasound in Medicine.
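The reliability statistics quoted above (intraclass correlation coefficients and standard errors of measurement) can be computed from a subjects × repeated-measurements table. A minimal sketch using the two-way-ANOVA form of ICC(3,1), with the SEM taken as the square root of the residual mean square; the abstract does not state which ICC model the authors used, and the measurement values below are invented:

```python
import numpy as np

def icc_and_sem(data):
    """ICC(3,1) and SEM from an (n subjects) x (k measurements) array,
    via a two-way ANOVA decomposition without interaction."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_total = ((data - grand) ** 2).sum()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()   # trials
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
    sem = float(np.sqrt(ms_err))          # absolute measurement error
    return icc, sem

# Two repeated UCL-length measurements (mm) per elbow -- invented numbers
measurements = [[4.8, 4.9], [5.2, 5.1], [6.0, 6.2], [5.5, 5.4], [4.6, 4.7]]
icc, sem = icc_and_sem(measurements)
print(f"ICC = {icc:.2f}, SEM = {sem:.2f} mm")
```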
Huang, Hu; Zhao, Hongwei; Yang, Zhaojun; Fan, Zunqiang; Wan, Shunguang; Shi, Chengli; Ma, Zhichao
2012-01-01
Miniaturized precision positioning platforms are needed for in situ nanomechanical test applications. This paper proposes a compact precision positioning platform integrating strain gauges and a piezoactuator. The effects of the geometric parameters of two parallel plates on the Von Mises stress distribution, as well as on the static and dynamic characteristics of the platform, were studied by the finite element method. Results of the calibration experiment indicate that the strain gauge sensor has good linearity and a sensitivity of about 0.0468 mV/μm. A closed-loop control system was established to address the nonlinearity of the platform. Experimental results demonstrate that in the displacement control process both the increasing and the decreasing portions of the displacement have good linearity, verifying that the control system works as intended. The developed platform has a compact structure yet can realize displacement measurement with the embedded strain gauges, which is useful for closed-loop control and structure miniaturization of piezo devices. It has potential applications in nanoindentation and nanoscratch tests, especially in situ nanomechanical testing that requires compact structures. PMID:23012566
Kamalzadeh, Zahra; Babanezhad, Esmaeil; Ghaffari, Solmaz; Mohseni Ezhiyeh, Alireza; Mohammadnejad, Mahdieh; Naghibfar, Mehdi; Bararjanian, Morteza; Attar, Hossein
2017-08-01
A new normal-phase high-performance liquid chromatography (NP-HPLC) method was developed for separation of Bortezomib (BZB) enantiomers and quantitative determination of the (1S,2R)-enantiomer of BZB in active pharmaceutical ingredient (API) samples. The developed method was validated based on International Conference on Harmonisation (ICH) guidelines and proved to be accurate, precise, and robust. The resolution (RS) obtained between the enantiomers was more than 2. The calibration curve for the (1S,2R)-enantiomer was found to be linear in the concentration range of 0.24-5.36 mg/L with a regression coefficient (R2) of 0.9998. Additionally, the limit of detection (LOD) and limit of quantification (LOQ) were 0.052 and 0.16 mg/L, respectively. In this study, a precise, sensitive, and robust gradient reversed-phase HPLC (RP-HPLC) method was also developed and validated for determination of BZB in API samples. The detector response was linear over the concentration range of 0.26-1110.5 mg/L. The values of R2, LOD, and LOQ were 0.9999, 0.084, and 0.25 mg/L, respectively. For both the NP-HPLC and RP-HPLC methods, all RSD (%) values obtained in the precision study were <1.0%. System suitability parameters in terms of tailing factor (TF), number of theoretical plates (N), and RS were TF < 2.0, N > 2,000, and RS > 2.0. The performance of two common integration methods, valley to valley and drop perpendicular, for drawing the baseline between two adjacent peaks was investigated for the determination of the diastereomeric impurity (Imp-D) in the BZB-API samples. The results showed that the valley to valley method outperforms the drop perpendicular method for calculating Imp-D peak areas; therefore, the valley to valley method was chosen for peak integration. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Patient similarity for precision medicine: a systematic review.
Parimbelli, E; Marini, S; Sacchi, L; Bellazzi, R
2018-06-01
Evidence-based medicine is the most prevalent paradigm adopted by physicians. Clinical practice guidelines typically define a set of recommendations together with eligibility criteria that restrict their applicability to a specific group of patients. The ever-growing size and availability of health-related data are currently challenging the broad definitions of guideline-defined patient groups. Precision medicine leverages genetic, phenotypic, or psychosocial characteristics to provide precise identification of patient subsets for treatment targeting. Defining a patient similarity measure is thus an essential step to allow stratification of patients into clinically meaningful subgroups. The present review investigates the use of patient similarity as a tool to enable precision medicine. 279 articles were analyzed along four dimensions: data types considered, clinical domains of application, data analysis methods, and translational stage of findings. Cancer-related research employing molecular profiling and standard data analysis techniques such as clustering constitutes the majority of the retrieved studies. Chronic and psychiatric diseases follow as the second most represented clinical domains. Interestingly, almost one quarter of the studies analyzed presented a novel methodology, with the most advanced employing data integration strategies and being portable to different clinical domains. Integration of such techniques into decision support systems constitutes an interesting trend for future research. Copyright © 2018. Published by Elsevier Inc.
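A patient similarity measure of the kind the review surveys can be as simple as cosine similarity over numeric feature vectors, followed by nearest-neighbor retrieval of the most similar patient. This is a schematic sketch; the patient identifiers, features, and values are all invented:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1 = identical direction)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query, cohort):
    """Return the id of the cohort patient most similar to the query."""
    return max(cohort, key=lambda pid: cosine_similarity(query, cohort[pid]))

# Hypothetical normalized features: [age, biomarker A, biomarker B, risk score]
cohort = {
    "p01": [0.61, 0.20, 0.85, 0.10],
    "p02": [0.35, 0.90, 0.15, 0.70],
    "p03": [0.60, 0.70, 0.30, 0.50],
}
query = [0.62, 0.22, 0.83, 0.12]
print(most_similar(query, cohort))
```

In practice such a measure would sit in front of a clustering or stratification step; the review notes that molecular-profiling studies typically use exactly this kind of vector-space similarity as input to clustering.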
Du, Han; Zhang, Xingwang; Chen, Guoqiang; Deng, Jie; Chau, Fook Siong; Zhou, Guangya
2016-01-01
Photonic molecules have a range of promising applications including quantum information processing, where precise control of coupling strength is critical. Here, by laterally shifting the center-to-center offset of coupled photonic crystal nanobeam cavities, we demonstrate a method to precisely and dynamically control the coupling strength of photonic molecules through integrated nanoelectromechanical systems with a precision of a few GHz over a range of several THz without modifying the nature of their constituent resonators. Furthermore, the coupling strength can be tuned continuously from negative (strong coupling regime) to zero (weak coupling regime) and further to positive (strong coupling regime) and vice versa. Our work opens a door to the optimization of the coupling strength of photonic molecules in situ for the study of cavity quantum electrodynamics and the development of efficient quantum information devices. PMID:27097883
Discrete mathematics, formal methods, the Z schema and the software life cycle
NASA Technical Reports Server (NTRS)
Bown, Rodney L.
1991-01-01
The proper role and scope for the use of discrete mathematics and formal methods in support of engineering the security and integrity of components within deployed computer systems are discussed. It is proposed that the Z schema can be used as the specification language to capture the precise definition of system and component interfaces. This can be accomplished with an object oriented development paradigm.
NASA Astrophysics Data System (ADS)
Pino, Abdiel O.; Pladellorens, Josep
2014-07-01
A means of facilitating the transfer of optical inspection methods, knowledge, and skills from academic institutions and their research partners into Panamanian optics and optical research groups is described. The process involves the creation of an Integrated Knowledge Group Research (IKGR) partnership led by the Polytechnic University of Panama with the support of SENACYT and the Optics and Optometry Department of the Polytechnic University of Catalonia. This paper describes the development of the knowledge transfer project "Implementation of a method of optical inspection of low cost for improving the surface quality of rolled material of metallic and nonmetallic industrial use", which will develop a method for measuring surface quality using texture analysis of the speckle pattern formed on the surface to be characterized. The project is designed to address the shortage of key skills in the field of precision engineering for optical applications. The main issues encountered during the development of knowledge transfer teaching and learning are discussed, and the outcomes from the first four months of knowledge transfer activities are described. In summary, the results demonstrate how the Integrated Knowledge Group Research and the new approach to knowledge transfer have been effective in addressing the engineering skills gap in precision optics for the manufacturing industrial sector.
NASA Astrophysics Data System (ADS)
Zhou, Yi; Tang, Yan; Deng, Qinyuan; Zhao, Lixin; Hu, Song
2017-08-01
Three-dimensional measurement and inspection is an area of growing needs and interest in many domains, such as integrated circuits (ICs), medicine, and chemistry. Among the available methods, broadband light interferometry is widely utilized because of its large measurement range, noncontact operation, and high precision. In this paper, we propose a spatial-modulation-depth-based method to retrieve surface topography by analyzing the characteristics of both the frequency and spatial domains of the interferogram. Owing to the characteristics of spatial modulation depth, the technique effectively suppresses the negative influences caused by light fluctuations and external disturbances. Both theory and experiments confirm that the proposed method can greatly improve measurement stability and sensitivity with high precision. The technique achieves superior robustness and has the potential to be applied in online topography measurement.
Narayanamoorthy, S; Sathiyapriya, S P
2016-01-01
In this article, we focus on linear and nonlinear fuzzy Volterra integral equations of the second kind and we propose a numerical scheme using homotopy perturbation method (HPM) to obtain fuzzy approximate solutions to them. To facilitate the benefits of this proposal, an algorithmic form of the HPM is also designed to handle the same. In order to illustrate the potentiality of the approach, two test problems are offered and the obtained numerical results are compared with the existing exact solutions and are depicted in terms of plots to reveal its precision and reliability.
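For the crisp linear special case of a Volterra equation of the second kind, the homotopy perturbation series reduces to successive substitution (a Neumann-type series), which can be checked numerically against a known exact solution. This sketch is that special case only, not the authors' fuzzy HPM scheme; the test equation and grid are invented for illustration:

```python
import numpy as np

def series_volterra(f_vals, kernel, x, n_terms=20):
    """Series solution of the linear Volterra equation of the second kind
        u(x) = f(x) + int_0^x K(x, t) u(t) dt
    by successive substitution; for linear problems the homotopy
    perturbation series reduces to this form. Each new series term
        T_{k+1}(x) = int_0^x K(x, t) T_k(t) dt
    is evaluated with the composite trapezoidal rule."""
    u = f_vals.copy()
    term = f_vals.copy()
    h = x[1] - x[0]
    for _ in range(n_terms):
        new = np.empty_like(term)
        for i in range(len(x)):
            g = kernel(x[i], x[: i + 1]) * term[: i + 1]
            new[i] = np.sum(g[1:] + g[:-1]) * h / 2.0
        term = new
        u = u + term
    return u

# Test problem: u(x) = x + int_0^x u(t) dt has exact solution e^x - 1
x = np.linspace(0.0, 1.0, 201)
u = series_volterra(x.copy(), lambda xi, t: np.ones_like(t), x)
exact = np.exp(x) - 1.0
print("max abs error:", np.max(np.abs(u - exact)))
```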
Image Retrieval using Integrated Features of Binary Wavelet Transform
NASA Astrophysics Data System (ADS)
Agarwal, Megha; Maheshwari, R. P.
2011-12-01
In this paper a new approach for image retrieval is proposed based on the binary wavelet transform. The approach computes features by integrating histogram and correlogram features extracted from binary wavelet subbands. Experiments are performed to evaluate and compare the performance of the proposed method with the published literature. The average precision and average recall of the proposed method (69.19%, 41.78%) are significantly improved compared with the optimal quantized wavelet correlogram (OQWC) [6] (64.3%, 38.00%) and the Gabor wavelet correlogram (GWC) [10] (64.1%, 40.6%). All experiments are performed on the Corel 1000 natural image database [20].
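The averaged precision and recall figures quoted above follow the standard retrieval definitions. A minimal sketch of those definitions; the image names, category size, and cutoff of 10 retrieved images are illustrative, not the paper's exact evaluation protocol:

```python
def precision_recall(retrieved, relevant):
    """Precision = fraction of retrieved images that are relevant;
    recall = fraction of all relevant images that were retrieved."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# Toy query: 10 images retrieved, 4 of them from the query's category,
# which contains 100 images in total (as Corel-1000 categories do).
retrieved = [f"img{i}" for i in range(10)]
relevant = [f"img{i}" for i in range(0, 8, 2)] + [f"other{i}" for i in range(96)]
p, r = precision_recall(retrieved, relevant)
print(p, r)
```

Averaging these two numbers over all queries gives the average precision and average recall figures reported in the abstract.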
Testing Backwards Integration As A Method Of Age-Determination for KBO Families
NASA Astrophysics Data System (ADS)
Benfell, Nathan; Ragozzine, Darin
2017-10-01
The age of young asteroid collisional families is often determined by backwards n-body integration of the solar system. This method is not used for discovering young asteroid families and is limited by the unpredictable influence of the Yarkovsky effect on individual asteroids over time. Since these limitations are less important for objects in the Kuiper belt, Marcus et al. (2011) suggested that backwards integration could be used to discover and characterize collisional families in the outer solar system. However, some minor effects may need to be included in the integration to ensure a faithful reproduction of the actual solar system. We have created simulated families of Kuiper Belt objects through a forwards integration of various objects with identical starting locations and velocity distributions, based on the Haumea family. After carrying this integration forwards through ~4 Gyr, backwards integrations are used (1) to investigate which factors are significant enough to require inclusion in the integration (e.g., terrestrial planets, KBO self-gravity, a putative Planet 9, etc.), (2) to test orbital element clustering statistics and identify methods for assessing false alarm probabilities, and (3) to compare the age estimates with the known age of the simulated family to explore the viability of backwards integration for precise age estimates.
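The clustering statistics mentioned in step (2) can be illustrated with a toy version of the underlying idea: family members' orbital angles drift apart at different rates after the formation event, so "integrating" backwards and finding the epoch of minimum angular dispersion recovers the age. This sketch replaces the n-body integration with a hypothetical linear drift model; the age, drift rates, and member count are invented:

```python
import numpy as np

def circular_dispersion(angles):
    """Dispersion of angles on a circle: 1 - |mean resultant vector|
    (0 when all angles coincide, near 1 when they are spread out)."""
    return 1.0 - np.abs(np.mean(np.exp(1j * angles)))

def estimate_family_age(drift_rates, angles_now, t_grid):
    """Scan candidate ages, winding each member's angle back linearly,
    and return the epoch of tightest clustering."""
    disp = [circular_dispersion(angles_now - drift_rates * t) for t in t_grid]
    return t_grid[int(np.argmin(disp))]

rng = np.random.default_rng(7)
true_age = 3.5e9                          # years (invented)
rates = rng.uniform(1e-9, 5e-9, 40)       # rad/yr nodal drift (invented)
angles_now = (0.3 + rates * true_age) % (2 * np.pi)

t_grid = np.linspace(0.0, 4.5e9, 4501)    # 1 Myr steps
print(estimate_family_age(rates, angles_now, t_grid))
```

A real application would replace the linear drift with the output of a backwards n-body integration and attach a false-alarm probability to the dispersion minimum, as the abstract describes.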
NASA Astrophysics Data System (ADS)
Wang, Qianren; Chen, Xing; Yin, Yuehong; Lu, Jian
2017-08-01
With the increasing complexity of mechatronic products, traditional empirical or step-by-step design methods face great challenges, as various factors and different stages inevitably become coupled during the design process. The management of massive information, or big data, as well as the efficient operation of information flow, is deeply involved in coupled design, and designers must handle increasingly sophisticated situations when coupled optimisation is engaged. To overcome these difficulties in the design of the spindle box system of an ultra-precision optical grinding machine, this paper proposes a coupled optimisation design method based on state-space analysis, with the design knowledge represented by ontologies and their semantic networks. An electromechanical coupled model integrating the mechanical structure, control system and motor driving system is established, mainly concerning the stiffness matrices of the hydrostatic bearings, ball screw nut and rolling guide sliders. The effectiveness and precision of the method are validated by simulation results for the natural frequency and deformation of the spindle box when an impact force is applied to the grinding wheel.
Real-Time PCR for the Detection of Precise Transgene Copy Number in Wheat.
Giancaspro, Angelica; Gadaleta, Agata; Blanco, Antonio
2017-01-01
Despite unceasing advances in genetic transformation techniques, the success of common delivery methods still depends on the behavior of the integrated transgenes in the host genome. Stability and expression of the introduced genes are influenced by several factors such as chromosomal location, transgene copy number and interaction with the host genotype. Such factors are traditionally characterized by Southern blot analysis, which can be time-consuming, laborious, and often unable to detect the exact copy number of rearranged transgenes. Recent crop research suggests real-time PCR as an effective and reliable tool for the precise quantification and characterization of transgene loci. This technique overcomes most problems linked to phenotypic segregation analysis and can analyze hundreds of samples in a day, making it an efficient method for estimating the copy number of a gene integrated in a transgenic line. This protocol describes the use of real-time PCR for the detection of transgene copy number in durum wheat transgenic lines by means of two different chemistries (SYBR® Green I dye and TaqMan® probes).
Aligning, Bonding, and Testing Mirrors for Lightweight X-ray Telescopes
NASA Technical Reports Server (NTRS)
Chan, Kai-Wing; Zhang, William W.; Saha, Timo T.; McClelland, Ryan S.; Biskach, Michael P.; Niemeyer, Jason; Schofield, Mark J.; Mazzarella, James R.; Kolos, Linette D.; Hong, Melinda M.;
2015-01-01
High-resolution, high-throughput optics for x-ray astronomy entails fabrication of well-formed mirror segments and their integration with arc-second precision. In this paper, we address issues of aligning and bonding thin glass mirrors with negligible additional distortion. The stability of the bonded mirrors and the curing of the epoxy used in bonding them were tested extensively. We present results from tests of bonding mirrors onto experimental modules, and on the stability of the bonded mirrors tested with x-rays. These results demonstrate the fundamental validity of the methods used in integrating mirrors into a telescope module, and reveal areas for further investigation. The alignment and integration methods are applicable to astronomical mission concepts such as STAR-X, the Survey and Time-domain Astronomical Research Explorer.
NASA Technical Reports Server (NTRS)
Chan, Kai-Wing; Zhang, William W.; Schofield, Mark J.; Numata, Ai; Mazzarella, James R.; Saha, Timo T.; Biskach, Michael P.; McCelland, Ryan S.; Niemeyer, Jason; Sharpe, Marton V.;
2016-01-01
High-resolution, high-throughput optics for x-ray astronomy requires fabrication of well-formed mirror segments and their integration with arc-second level precision. Recently, advances in the fabrication of silicon mirrors developed at NASA/Goddard prompted us to develop a new method of mirror integration. The new integration scheme takes advantage of silicon, which is stiffer, more thermally conductive, and lower in CTE than glass, to build a telescope of much lighter weight. In this paper, we address issues of aligning and bonding mirrors with this method. In this preliminary work, we demonstrated the basic viability of such a scheme: using glass mirrors, we showed that an alignment error of 1" and a bonding error of 2" can be achieved for mirrors in a single shell. We will address the immediate plan to demonstrate bonding reliability and to develop the technology to build up a mirror stack and a whole "meta-shell".
NASA Astrophysics Data System (ADS)
Wang, C.; Rubin, Y.
2014-12-01
The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to understanding the underlying geological processes and to adequately assessing the effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived with an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To this end, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method, which rigorously and efficiently integrates geotechnical investigations of different precision and multiple sources of uncertainty. Individual CPT soundings were modeled as probability density curves using maximum entropy theory. A spatial prior multivariate probability density function (PDF) and the likelihood PDF at the CPT positions were built from the borehole experiments and the potential value at the prediction point; then, after numerical integration over the CPT probability density curves, the posterior probability density curve at the prediction point was calculated within a Bayesian inverse interpolation framework. The results were compared with those of Gaussian sequential stochastic simulation, and the differences between normally distributed single CPT soundings and the maximum-entropy-based simulated probability density curves were also discussed. It is shown that the estimation of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by accounting for CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization in a stratum and identify the limitations of inadequate geostatistical interpolation techniques. These characterization results provide a multi-precision information assimilation method for other geotechnical parameters.
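At the core of this kind of multi-precision assimilation is precision-weighted Bayesian fusion of estimates from instruments of different accuracy. A minimal Gaussian sketch (the Es values and variances below are hypothetical, not from the study, and the maximum-entropy machinery is omitted):

```python
def fuse(mu1, var1, mu2, var2):
    """Precision-weighted Bayesian fusion of two independent Gaussian estimates
    of the same quantity, e.g. Es from a borehole test and a CPT sounding.
    The posterior mean weights each estimate by its precision (1/variance)."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mu, var

# Hypothetical values in MPa: a precise borehole estimate and a noisier CPT one.
mu, var = fuse(12.0, 0.5, 14.0, 2.0)
```

The posterior mean lands closer to the more precise source, and the posterior variance is tighter than either input, which is the sense in which combining multi-precision investigations is "more informative".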
Lane-Level Vehicle Positioning : Integrating Diverse Systems for Precision and Reliability
DOT National Transportation Integrated Search
2013-05-13
Integrated global positioning system/inertial navigation system (GPS/INS) technology, the backbone of vehicle positioning systems, cannot provide the precision and reliability needed for vehicle-based, lane-level positioning in all driving environmen...
Efficient numerical evaluation of Feynman integrals
NASA Astrophysics Data System (ADS)
Li, Zhao; Wang, Jian; Yan, Qi-Shu; Zhao, Xiaoran
2016-03-01
Feynman loop integrals are a key ingredient in the calculation of higher-order radiative effects and are essential for reliable and accurate theoretical predictions. We improve the efficiency of numerical integration in sector decomposition by implementing a quasi-Monte Carlo method combined with the CUDA/GPU technique. For demonstration we present the results of several Feynman integrals up to two loops, in both Euclidean and physical kinematic regions, in comparison with those obtained from FIESTA3. It is shown that both planar and non-planar two-loop master integrals in the physical kinematic region can be evaluated accurately in less than half a minute, which makes the direct numerical approach viable for precise investigation of higher-order effects in multi-loop processes, e.g. the next-to-leading order QCD effect in Higgs pair production via gluon fusion with a finite top quark mass. Supported by the Natural Science Foundation of China (11305179, 11475180), Youth Innovation Promotion Association, CAS, IHEP Innovation (Y4545170Y2), State Key Lab for Electronics and Particle Detectors, Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y4KF061CJ1), Cluster of Excellence Precision Physics, Fundamental Interactions and Structure of Matter (PRISMA-EXC 1098)
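The quasi-Monte Carlo idea can be illustrated on a toy integral. This sketch uses a hand-rolled 2-D Halton low-discrepancy sequence and a simple smooth integrand, not the paper's sector-decomposed Feynman integrands or its CUDA implementation:

```python
def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_integrate(f, n):
    """Quasi-Monte Carlo estimate of the integral of f over the unit square,
    using a 2-D Halton point set (bases 2 and 3).  Deterministic low-discrepancy
    points converge faster than plain pseudo-random Monte Carlo for smooth f."""
    total = 0.0
    for i in range(1, n + 1):
        total += f(halton(i, 2), halton(i, 3))
    return total / n

# Toy integrand: the integral of x*y over [0,1]^2 is exactly 1/4.
estimate = qmc_integrate(lambda x, y: x * y, 4096)
```

In the paper the same idea is applied per sector after sector decomposition, with the point evaluations parallelized on the GPU.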
A Differential GPS Aided Ins for Aircraft Landings
1995-12-01
For precision approaches, areas associated with accuracy, coverage, integrity, availability, and aircraft integration must be studied. A review of publications [13,20,27,30,57,59] suggests very few studies have been performed which use an integrated INS/GPS for precision approaches.
Signal processor for processing ultrasonic receiver signals
Fasching, George E.
1980-01-01
A signal processor is provided which uses an analog integrating circuit in conjunction with a set of digital counters controlled by a precision clock for sampling timing to provide an improved presentation of an ultrasonic transmitter/receiver signal. The signal is sampled relative to the transmitter trigger signal timing at precise times, the selected number of samples are integrated and the integrated samples are transferred and held for recording on a strip chart recorder or converted to digital form for storage. By integrating multiple samples taken at precisely the same time with respect to the trigger for the ultrasonic transmitter, random noise, which is contained in the ultrasonic receiver signal, is reduced relative to the desired useful signal.
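The noise-reduction principle behind this processor, integrating many samples taken at the same delay after the transmitter trigger, can be sketched with hypothetical signal and noise values; uncorrelated noise shrinks roughly as 1/sqrt(N) while the repeatable signal is preserved:

```python
import random
import statistics

random.seed(1)

def noisy_sample(signal=1.0, noise_sd=0.5):
    """One receiver sample at a fixed delay after the trigger: a repeatable
    signal value plus zero-mean random noise (values hypothetical)."""
    return signal + random.gauss(0.0, noise_sd)

def averaged_sample(n):
    """Integrate (average) n samples taken at the same delay on n successive pulses."""
    return sum(noisy_sample() for _ in range(n)) / n

# Spread of single samples vs. 64-pulse averages, over many trials.
single = [averaged_sample(1) for _ in range(500)]
averaged = [averaged_sample(64) for _ in range(500)]
sd_single = statistics.stdev(single)
sd_avg = statistics.stdev(averaged)
```

With 64 integrated samples the residual noise is about one eighth of the single-sample noise, which is the improvement the analog integrator and precision-clocked sampling exploit.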
Precision Fluid Management in Continuous Renal Replacement Therapy.
Murugan, Raghavan; Hoste, Eric; Mehta, Ravindra L; Samoni, Sara; Ding, Xiaoqiang; Rosner, Mitchell H; Kellum, John A; Ronco, Claudio
2016-01-01
Fluid management during continuous renal replacement therapy (CRRT) in critically ill patients is a dynamic process that encompasses 3 inter-related goals: maintenance of the patency of the CRRT circuit, maintenance of plasma electrolyte and acid-base homeostasis and regulation of patient fluid balance. In this article, we report the consensus recommendations of the 2016 Acute Disease Quality Initiative XVII conference on 'Precision Fluid Management in CRRT'. We discuss the principles of fluid management, describe various prescription methods to achieve circuit integrity and introduce the concept of integrated fluid balance for tailoring fluid balance to the needs of the individual patient. We suggest that these recommendations could serve to develop the best clinical practice and standards of care for fluid management in patients undergoing CRRT. Finally, we identify and highlight areas of uncertainty in fluid management and set an agenda for future research. © 2016 S. Karger AG, Basel.
Yang, Seul Ki; Lee, J; Kim, Sug-Whan; Lee, Hye-Young; Jeon, Jin-A; Park, I H; Yoon, Jae-Ryong; Baek, Yang-Sik
2014-01-13
We report a new and improved photon counting method for the precision PDE measurement of SiPM detectors, utilizing two serially connected integrating spheres and calibrated reference detectors. First, using a ray tracing simulation and irradiance measurements with a reference photodiode, we investigated the irradiance characteristics of the measurement instrument and analyzed the dominant systematic uncertainties in PDE measurement. Two SiPM detectors were then used for PDE measurements at wavelengths between 368 and 850 nm and for bias voltages around 70 V. The resulting PDEs of the SiPMs show good agreement with those from other studies, yet with an improved accuracy of 1.57% (1σ). This was achieved by simultaneous measurement with the NIST-calibrated reference detectors, which suppressed the time-dependent variation of the source light. The technical details of the instrumentation, the measurement results and the uncertainty analysis are reported together with their implications.
Zhou, Miaolei; Zhang, Qi; Wang, Jingyuan
2014-01-01
As a new type of smart material, magnetic shape memory alloy has the advantages of a fast response frequency and outstanding strain capability in the field of microdrive and microposition actuators. The hysteresis nonlinearity in magnetic shape memory alloy actuators, however, limits system performance and further application. Here we propose a feedforward-feedback hybrid control method to improve control precision and mitigate the effects of the hysteresis nonlinearity of magnetic shape memory alloy actuators. First, hysteresis nonlinearity compensation for the magnetic shape memory alloy actuator is implemented by establishing a feedforward controller, an inverse hysteresis model based on the Krasnosel'skii-Pokrovskii operator. Secondly, classical proportional-integral-derivative (PID) feedback control is combined with the feedforward control to form the hybrid control system; to further enhance the adaptive performance of the system and improve control accuracy, a Radial Basis Function (RBF) neural network self-tuning PID feedback controller replaces the classical PID feedback controller. The self-learning ability of the RBF neural network is used to obtain Jacobian information of the magnetic shape memory alloy actuator for on-line adjustment of the PID controller parameters. Finally, simulation results show that the proposed hybrid control method can greatly improve the control precision of the magnetic shape memory alloy actuator: the maximum tracking error is reduced from 1.1% in the open-loop system to 0.43% in the hybrid control system. PMID:24828010
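The feedforward-feedback structure can be sketched with a toy static nonlinearity standing in for hysteresis (not the stateful Krasnosel'skii-Pokrovskii model) and a plain PI loop in place of the RBF self-tuning PID. The feedforward inverse is deliberately inexact, compensating only the linear gain, so the feedback visibly tightens tracking:

```python
def plant(u):
    """Toy actuator response: a mild static nonlinearity standing in for the
    (stateful) hysteresis of a real MSMA actuator."""
    return 0.9 * u + 0.1 * u ** 2

def feedforward(r):
    """Deliberately inexact inverse model: compensates only the linear gain."""
    return r / 0.9

def track(ref, kp=0.3, ki=2.0, dt=0.01, steps=1000):
    """Hybrid control: feedforward inverse plus PI feedback on the residual error."""
    integ, y = 0.0, 0.0
    for _ in range(steps):
        e = ref - y
        integ += e * dt
        u = feedforward(ref) + kp * e + ki * integ
        y = plant(u)
    return y

y_hybrid = track(1.0)
y_ff_only = plant(feedforward(1.0))   # feedforward alone, no feedback correction
```

Feedforward alone leaves a residual error from the uncompensated nonlinearity; the integral action drives it essentially to zero, mirroring the open-loop vs. hybrid comparison in the abstract.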
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Guangye; Chacon, Luis; Barnes, Daniel C
2012-01-01
Recently, a fully implicit, energy- and charge-conserving particle-in-cell (PIC) method has been developed for multi-scale, full-f kinetic simulations [G. Chen, et al., J. Comput. Phys. 230, 18 (2011)]. The method employs a Jacobian-free Newton-Krylov (JFNK) solver and is capable of using very large timesteps without loss of numerical stability or accuracy. A fundamental feature of the method is the segregation of particle orbit integrations from the field solver, while remaining fully self-consistent. This provides great flexibility and dramatically improves solver efficiency by reducing the degrees of freedom of the associated nonlinear system. However, it requires a particle push per nonlinear residual evaluation, which makes the particle push the most time-consuming operation in the algorithm. This paper describes a very efficient mixed-precision, hybrid CPU-GPU implementation of the implicit PIC algorithm. The JFNK solver is kept on the CPU (in double precision), while the inherent data parallelism of the particle mover is exploited by implementing it in single precision on a graphics processing unit (GPU) using CUDA. Performance-oriented optimizations are employed with the aid of an analytical performance model, the roofline model. Despite being highly dynamic, the adaptive, charge-conserving particle mover algorithm achieves up to 300-400 GOp/s (including single-precision floating-point, integer, and logic operations) on an Nvidia GeForce GTX 580, corresponding to 20-25% absolute GPU efficiency (against the peak theoretical performance) and 50-70% intrinsic efficiency (against the algorithm's maximum operational throughput, which neglects all latencies). This is about 200-300 times faster than an equivalent serial CPU implementation. When the single-precision GPU particle mover is combined with the double-precision CPU JFNK field solver, overall performance gains of about 100x over the double-precision CPU-only serial version are obtained, with no apparent loss of robustness or accuracy when applied to a challenging long-time-scale ion acoustic wave simulation.
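The mixed-precision split, a cheap single-precision particle push checked against a double-precision reference, can be sketched with a toy linear field; this is only an illustration of the precision trade-off, not the JFNK solver or the CUDA mover of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def field(x):
    """Toy electric field: a linear restoring force, playing the role of the
    double-precision 'field solver' output."""
    return -x.astype(np.float64)

def push_particles(x, v, E, dt, dtype):
    """Simple particle push in the requested floating-point precision
    (the cheap, data-parallel part that the paper offloads to the GPU)."""
    x, v = x.astype(dtype), v.astype(dtype)
    v = v + dtype(dt) * E(x).astype(dtype)
    x = x + dtype(dt) * v
    return x, v

x0 = rng.standard_normal(1000)
v0 = rng.standard_normal(1000)
x32, v32 = x0, v0
x64, v64 = x0, v0
for _ in range(100):
    x32, v32 = push_particles(x32, v32, field, 0.01, np.float32)
    x64, v64 = push_particles(x64, v64, field, 0.01, np.float64)
max_diff = float(np.max(np.abs(x32.astype(np.float64) - x64)))
```

Over a moderate number of steps the single-precision trajectories stay very close to the double-precision ones, which is why relegating only the push to single precision costs little accuracy while the stiff nonlinear solve stays in double precision.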
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakatsuji, H.; Nakashima, H.; Department of Synthetic Chemistry and Biological Chemistry, Graduate School of Engineering, Kyoto University, Nishikyo-ku, Kyoto 615-8510
2007-12-14
A local Schrödinger equation (LSE) method is proposed for solving the Schrödinger equation (SE) of general atoms and molecules without performing analytic integrations over the complement functions of the free ICI (iterative-complement-interaction) wave functions. Since the free ICI wave function is potentially exact, we can assume flatness of its local energy. The variational principle is not applicable because the analytic integrations over the free ICI complement functions are very difficult for general atoms and molecules. The LSE method is applied to several 2- to 5-electron atoms and molecules, giving an accuracy of 10^-5 Hartree in total energy. The potential energy curves of the H2 and LiH molecules are calculated precisely with the free ICI LSE method. The results show the high potential of the free ICI LSE method for developing accurate predictive quantum chemistry with solutions of the SE.
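The "flatness of the local energy" that the LSE method exploits can be checked directly for an exactly solvable case. For the hydrogen ground state ψ(r) = e^(-r) in atomic units (a stand-in here for the free ICI wave functions), the local energy E_L = (Hψ)/ψ is the same constant at every point:

```python
def local_energy(r):
    """Local energy E_L(r) = (H psi)/psi for the hydrogen ground state
    psi(r) = exp(-r) in atomic units:
        E_L = -(1/2) * (psi''/psi + (2/r) * psi'/psi) - 1/r
    For psi = exp(-r):  psi'/psi = -1  and  psi''/psi = 1."""
    d1_ratio = -1.0
    d2_ratio = 1.0
    return -0.5 * (d2_ratio + (2.0 / r) * d1_ratio) - 1.0 / r

# Sample the local energy at several radii: for an exact wave function it is
# perfectly flat, equal to the exact energy -1/2 Hartree.
energies = [local_energy(r) for r in (0.1, 0.5, 1.0, 2.0, 5.0)]
spread = max(energies) - min(energies)
```

For an approximate wave function the spread would be nonzero, and driving it toward zero at sampled points is the working principle the LSE method uses in place of analytic integrals.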
Control/structure interaction conceptual design tool
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
1990-01-01
The JPL Control/Structure Interaction Program is developing new analytical methods for designing micro-precision spacecraft with controlled structures. One of these, the Conceptual Design Tool, will illustrate innovative new approaches to the integration of multi-disciplinary analysis and design methods. The tool will be used to demonstrate homogeneity of presentation, uniform data representation across analytical methods, and integrated systems modeling. The tool differs from current 'integrated systems' that support design teams most notably in its support for the new CSI multi-disciplinary engineer. The design tool will utilize a three dimensional solid model of the spacecraft under design as the central data organization metaphor. Various analytical methods, such as finite element structural analysis, control system analysis, and mechanical configuration layout, will store and retrieve data from a hierarchical, object oriented data structure that supports assemblies of components with associated data and algorithms. In addition to managing numerical model data, the tool will assist the designer in organizing, stating, and tracking system requirements.
Fan, Qigao; Wu, Yaheng; Hui, Jing; Wu, Lei; Yu, Zhenzhong; Zhou, Lijuan
2014-01-01
Under some GPS failure conditions, positioning a mobile target is difficult. This paper proposes a new INS/UWB-based method for synchronously tracking the attitude angle and position of an indoor carrier. Firstly, an error model of the INS/UWB integrated system is built, including the error equations of INS and UWB, and a combined filtering model of INS/UWB is investigated. Simulation results show that the two subsystems are complementary. Secondly, an integrated navigation data fusion strategy for INS/UWB based on Kalman filtering theory is proposed. Simulation results show that the FAKF method outperforms conventional Kalman filtering. Finally, an indoor experiment platform geared to the needs of the coal mine working environment is established to verify the INS/UWB integrated navigation theory. Static and dynamic positioning results show that the INS/UWB integrated navigation system is stable and real-time, and that its positioning precision meets the requirements of the working conditions and is better than that of either independent subsystem.
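The complementary character of the two subsystems can be sketched with a 1-D scalar Kalman filter, where a biased dead-reckoned velocity stands in for INS drift and noisy position fixes stand in for UWB ranging. All bias and noise values below are hypothetical, and this is a minimal scalar filter, not the paper's FAKF:

```python
import random

random.seed(7)

def simulate_fusion(steps=200, dt=0.1):
    """1-D toy of INS/UWB fusion: propagate with a biased velocity (INS drift),
    then correct with noisy position fixes (UWB) via a scalar Kalman filter."""
    true_x, vel = 0.0, 1.0
    ins_bias, uwb_sd = 0.05, 0.3          # assumed error characteristics
    x_est, p = 0.0, 1.0                   # state estimate and its variance
    q, r = (ins_bias * dt) ** 2, uwb_sd ** 2
    ins_only = 0.0                        # uncorrected dead-reckoned track
    for _ in range(steps):
        true_x += vel * dt
        # INS propagation (for both the filter and the uncorrected track)
        ins_only += (vel + ins_bias) * dt
        x_est += (vel + ins_bias) * dt
        p += q
        # UWB measurement update
        z = true_x + random.gauss(0.0, uwb_sd)
        k = p / (p + r)
        x_est += k * (z - x_est)
        p *= (1.0 - k)
    return abs(x_est - true_x), abs(ins_only - true_x)

fused_err, ins_err = simulate_fusion()
```

The INS-only track drifts without bound while the fused estimate stays bounded, and the UWB fixes alone would be too noisy for smooth tracking: each subsystem covers the other's weakness.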
NASA Astrophysics Data System (ADS)
Jing, Xiaoli; Cheng, Haobo; Wen, Yongfu
2018-04-01
A new local integration algorithm called quality map path integration (QMPI) is reported for shape reconstruction in the fringe reflection technique. A quality map is proposed to evaluate the quality of the gradient data locally and serves as a guideline for the integration path. The presented method can be employed in wavefront estimation from slopes over generally shaped surfaces, with slope noise equivalent to that in practical measurements. Moreover, QMPI is much better at handling slope data with local noise, which may be caused by irregular shapes of the surface under test. The performance of QMPI is assessed by simulations and experiment. It is shown that QMPI not only improves the accuracy of local integration, but can also be easily implemented with no iteration, in contrast to Southwell zonal reconstruction (SZR). From an engineering point of view, the proposed method may also provide an efficient and stable approach for different shapes with high-precision demands.
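The underlying operation, recovering heights from measured slopes by integrating along a path, can be sketched on a toy grid. This sketch uses a fixed integration path with the trapezoid rule, not the quality-map-guided path selection that distinguishes QMPI:

```python
def integrate_slopes(sx, sy, h):
    """Reconstruct heights z on a grid of spacing h from x/y slope maps by
    path integration: down the first column via the y-slopes, then across
    each row via the x-slopes (trapezoid rule on each step)."""
    ny, nx = len(sx), len(sx[0])
    z = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny):                 # first column
        z[j][0] = z[j - 1][0] + 0.5 * h * (sy[j - 1][0] + sy[j][0])
    for j in range(ny):                    # each row
        for i in range(1, nx):
            z[j][i] = z[j][i - 1] + 0.5 * h * (sx[j][i - 1] + sx[j][i])
    return z

# Test surface z = x^2 + y^2 with analytic slopes (2x, 2y) on a coarse grid.
h, n = 0.1, 21
sx = [[2 * (i * h) for i in range(n)] for j in range(n)]
sy = [[2 * (j * h) for i in range(n)] for j in range(n)]
z = integrate_slopes(sx, sy, h)
err = max(abs(z[j][i] - ((i * h) ** 2 + (j * h) ** 2))
          for j in range(n) for i in range(n))
```

With noisy slopes the choice of path matters, since errors accumulate along it; QMPI's contribution is to route the path through high-quality gradient regions first.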
Yoon, Jai-Woong; Park, Young-Guk; Park, Chun-Joo; Kim, Do-Il; Lee, Jin-Ho; Chung, Nag-Kun; Choe, Bo-Young; Suh, Tae-Suk; Lee, Hyoung-Koo
2007-11-01
The stationary grid commonly used with a digital x-ray detector causes a moiré interference pattern due to inadequate sampling of the grid shadows by the detector pixels. Previous methods used to remove the moiré have limitations, such as imperfect electromagnetic interference shielding and the loss of image information. A new method is proposed for removing the moiré pattern by integrating a carbon-interspaced, high-precision x-ray grid with high grid-line uniformity into the detector for frequency matching. The grid was aligned to the detector by translating and rotating it with respect to the detector using a microcontrolled alignment mechanism. The gap between the grid and the detector surface was adjusted with micrometer precision to precisely match the projected grid-line pitch to the detector pixel pitch. Considering the magnification of the grid shadows on the detector plane, the grids were manufactured such that the grid-line frequency was slightly higher than the detector sampling frequency. This study examined the factors that affect the moiré pattern, particularly the line frequency and displacement. The frequency of the moiré pattern was found to be sensitive to the angular displacement of the grid with respect to the detector, while horizontal translation alters the phase but not the moiré frequency. The frequency of the moiré pattern also decreased with decreasing difference in frequency between the grid and the detector, and a moiré-free image was produced after complete matching for a given source-to-detector distance. The image quality factors, including contrast, signal-to-noise ratio and uniformity, in images with and without the moiré pattern were investigated.
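The reported sensitivity of the moiré frequency to frequency mismatch and to angular displacement follows from the first-order beat between the grid lines and the pixel lattice. A small sketch with hypothetical line frequencies (lp/mm):

```python
import math

def moire_frequency(f_grid, f_pixel, theta=0.0):
    """First-order moire (beat) frequency when a line grid of spatial frequency
    f_grid, rotated by theta radians, is sampled at pixel frequency f_pixel:
    the magnitude of the difference of the two spatial-frequency vectors."""
    fx = f_grid * math.cos(theta) - f_pixel
    fy = f_grid * math.sin(theta)
    return math.hypot(fx, fy)

# Frequency mismatch alone: 5.05 vs 5.00 lp/mm -> 0.05 lp/mm beat (20 mm fringes).
mismatch = moire_frequency(5.05, 5.00)
# Perfect frequency match, but a 1-degree rotation still produces fringes.
rotated = moire_frequency(5.00, 5.00, math.radians(1.0))
# Complete matching (frequency and angle) gives a moire-free image.
matched = moire_frequency(5.00, 5.00)
```

Even with the pitches matched, a one-degree rotation produces a stronger beat than a 1% frequency mismatch, consistent with the observation that the moiré frequency is most sensitive to angular displacement.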
Nieć, Dawid; Kunicki, Paweł K
2015-10-01
Measurements of plasma concentrations of free normetanephrine (NMN), metanephrine (MN) and methoxytyramine (MTY) constitute the most diagnostically accurate screening test for pheochromocytomas and paragangliomas. The aim of this article is to present the results of a validation of an analytical method utilizing high performance liquid chromatography with coulometric detection (HPLC-CD) for quantifying plasma free NMN, MN and MTY. Additionally, peak integration by height and by area, and the use of one calibration curve for all batches versus an individual calibration curve for each batch of samples, were explored to determine the optimal approach with regard to accuracy and precision. The method was validated using charcoal-stripped plasma spiked with solutions of NMN, MN, MTY and internal standard (4-hydroxy-3-methoxybenzylamine), with the exception of selectivity, which was evaluated by analysis of real plasma samples. Calibration curve performance, accuracy, precision and recovery were determined following both peak-area and peak-height measurements, and the results were compared. The most accurate and precise method of calibration was evaluated by analyzing quality control samples at three concentration levels in 30 analytical runs. The detector response was linear over the entire tested concentration range from 10 to 2000 pg/mL with R² ≥ 0.9988. The LLOQ was 10 pg/mL for each analyte of interest. To improve accuracy for measurements at low concentrations, a weighted (1/amount) linear regression model was employed, which resulted in inaccuracies of -2.48 to 9.78% and 0.22 to 7.81% following peak-area and peak-height integration, respectively. The imprecisions ranged from 1.07 to 15.45% and from 0.70 to 11.65% for peak-area and peak-height measurements, respectively. The optimal approach to calibration used an individual calibration curve for each batch of samples and peak-height measurements; it was characterized by inaccuracies ranging from -3.39 to +3.27% and imprecisions from 2.17 to 13.57%. The established HPLC-CD method enables accurate and precise measurement of plasma free NMN, MN and MTY with reasonable selectivity. Preparing the calibration curve from peak-height measurements for each batch of samples yields optimal accuracy and precision. Copyright © 2015. Published by Elsevier B.V.
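The weighted (1/amount) calibration can be sketched with hypothetical peak-height data; weighting each point by the reciprocal of its concentration keeps the low-concentration points, which matter most near the LLOQ, from being swamped by the high end of the curve:

```python
def weighted_linfit(x, y):
    """Weighted least-squares line y = a*x + b with weights 1/x, the
    '1/amount' weighting used to improve accuracy at low concentrations."""
    w = [1.0 / xi for xi in x]
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    a = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)
    b = (swy - a * swx) / sw
    return a, b

# Hypothetical calibrators: concentration (pg/mL) vs. peak height, with
# roughly proportional scatter so low-level points deserve the extra weight.
conc = [10, 50, 100, 500, 1000, 2000]
height = [0.052, 0.249, 0.51, 2.48, 5.1, 9.95]
slope, intercept = weighted_linfit(conc, height)
back_calc = [(h - intercept) / slope for h in height]
accuracy = [100.0 * (bc - c) / c for bc, c in zip(back_calc, conc)]
```

Back-calculating the calibrators through the fitted line gives the percent inaccuracy at each level, the same figure of merit the validation reports.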
Wavelength metrology with a color sensor integrated chip
NASA Astrophysics Data System (ADS)
Jackson, Jarom; Jones, Tyler; Otterstrom, Nils; Archibald, James; Durfee, Dallin
2016-03-01
We have developed a method of wavelength sensing using the TCS3414 from AMS, a color sensor developed for use in cell phones and consumer electronics. The sensor datasheet specifies 16 bits of precision and 200 ppm/°C temperature dependence, which preliminary calculations showed might be sufficient for picometer-level wavelength discrimination of narrow-linewidth sources. We have shown that this is possible by using internal etalon effects in addition to the filters' wavelength responses, and recently published our findings in Optics Express. Our device demonstrates sub-picometer precision over short time periods, with about 10 pm drift over a one-month period. This method requires no moving or delicate optics, and has the potential to produce inexpensive and mechanically robust devices. Funded by Brigham Young University and NSF Grant Number PHY-1205736.
Experimental research on a modular miniaturization nanoindentation device
NASA Astrophysics Data System (ADS)
Huang, Hu; Zhao, Hongwei; Mi, Jie; Yang, Jie; Wan, Shunguang; Yang, Zhaojun; Yan, Jiwang; Ma, Zhichao; Geng, Chunyang
2011-09-01
Nanoindentation technology is developing toward in situ testing, which requires miniaturization of indentation instruments. This paper presents a miniaturized nanoindentation device based on a modular design. It mainly consists of a macro-adjusting mechanism, an x-y precise positioning platform, a z-axis precise driving unit, and a load-depth measuring unit. The device can be assembled in different configurations and has minimum dimensions of 200 mm × 135 mm × 200 mm. The load resolution is about 0.1 mN and the displacement resolution is about 10 nm. A new calibration method, named the reference-mapping method, is proposed to calibrate the developed device. Output performance tests and indentation experiments indicate the feasibility of the developed device and calibration method. This paper gives an example of combining piezoelectric actuators with flexure hinges to realize nanoindentation tests. By integrating a smaller displacement sensor, a more compact nanoindentation device can be designed in the future.
BDS/GPS Dual Systems Positioning Based on the Modified SR-UKF Algorithm
Kong, JaeHyok; Mao, Xuchu; Li, Shaoyuan
2016-01-01
The Global Navigation Satellite System can provide all-day three-dimensional position and speed information. Currently, a single navigation system alone cannot satisfy the requirements for reliability and integrity. In order to improve the reliability and stability of satellite navigation, a positioning method using both the BDS and GPS navigation systems is presented, and the measurement model and the state model are described. Furthermore, a modified square-root Unscented Kalman Filter (SR-UKF) algorithm is employed under BDS and GPS conditions, and analyses of single-system and multi-system positioning are carried out, respectively. The experimental results are compared with traditional estimation results, showing that the proposed method can perform highly precise positioning. In particular, when the number of visible satellites is not adequate, the proposed method combines the BDS and GPS systems to achieve higher positioning precision. PMID:27153068
Taillefumier, Thibaud; Touboul, Jonathan; Magnasco, Marcelo
2012-12-01
In vivo cortical recording reveals that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models delineate a natural theoretical framework to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their networks' dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we address these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, our proposed method is highly efficient numerically. We envision that our algorithm is especially well suited for studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.
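The event-driven idea, computing threshold crossings exactly instead of time-stepping, can be sketched for the deterministic (noise-free, purely excitatory) limit of perfect integrate-and-fire neurons with delayed Dirac interactions. The exact sampling of stochastic first-passage times, the heart of the paper's algorithm, is omitted here:

```python
import heapq

def simulate(mu, w, delay, t_end):
    """Event-driven simulation of perfect (non-leaky) integrate-and-fire
    neurons with constant drifts mu[i], all-to-all delayed Dirac synapses of
    weight w >= 0, threshold 1, reset 0.  Between events the membrane path is
    linear, so crossing times are exact; no time-stepping is involved."""
    n = len(mu)
    v = [0.0] * n                 # potentials, valid at times t_last[i]
    t_last = [0.0] * n
    spikes = []                   # recorded (time, neuron) pairs
    events = []                   # heap of (time, kind, neuron); kind 0 = delivery

    def next_cross(i, now):
        """Exact next threshold-crossing time for neuron i from time `now`."""
        if mu[i] <= 0:
            return float("inf")
        return now + max(0.0, 1.0 - v[i]) / mu[i]

    for i in range(n):
        heapq.heappush(events, (next_cross(i, 0.0), 1, i))
    while events:
        t, kind, i = heapq.heappop(events)
        if t > t_end:
            break
        v[i] += mu[i] * (t - t_last[i])       # advance the linear path to t
        t_last[i] = t
        if kind == 1:                         # predicted threshold crossing
            if v[i] < 1.0 - 1e-12:
                continue                      # stale prediction, superseded
            spikes.append((t, i))
            v[i] = 0.0
            for j in range(n):                # schedule delayed deliveries
                if j != i:
                    heapq.heappush(events, (t + delay, 0, j))
            heapq.heappush(events, (next_cross(i, t), 1, i))
        else:                                 # delivery of a delayed spike
            v[i] += w
            heapq.heappush(events, (next_cross(i, t), 1, i))
    return spikes

spikes = simulate(mu=[1.0, 0.5], w=0.2, delay=0.1, t_end=5.0)
```

Because excitatory deliveries can only move a crossing earlier, stale predictions left in the queue are safely skipped when they pop; the priority queue guarantees the temporal ordering of interactions that well-posedness requires.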
Zhang, Yan; Lee, Dong-Weon
2010-05-01
An integrated system made up of a double-hot-arm electro-thermal microactuator and a piezoresistor embedded at the base of the 'cold arm' is proposed. Electro-thermo-mechanical modeling and optimization are developed to elaborate the operation mechanism of the hybrid system through numerical simulations. For given materials, the geometric design largely determines the performance of the sensor and the actuator, which can be considered separately, because the heating energy induced by thermal expansion has little influence on the base area of the 'cold arm', where the stress is maximal. The piezoresistor is positioned there for high sensitivity, to monitor the in-plane movement of the system and characterize the actuator response precisely in real time. The force method is used to analyze the thermally induced mechanical expansion in the redundant structure. The integrated actuating mechanism, in turn, is designed for high-speed imaging. Based on the simulation results, the actuator appears very reliable when operated below 5 mA, and the stress sensitivity is about 40 MPa per micron.
Big Data and machine learning in radiation oncology: State of the art and future prospects.
Bibault, Jean-Emmanuel; Giraud, Philippe; Burgun, Anita
2016-11-01
Precision medicine relies on an increasing amount of heterogeneous data. Advances in radiation oncology, through the use of CT scans, dosimetry and imaging performed before each fraction, have generated a considerable flow of data that needs to be integrated. At the same time, Electronic Health Records now provide phenotypic profiles of large cohorts of patients that could be correlated to this information. In this review, we describe methods that could be used to create integrative predictive models in radiation oncology. Potential uses of machine learning methods such as support vector machines, artificial neural networks, and deep learning are also discussed. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Jiang, Weiping; Wang, Li; Niu, Xiaoji; Zhang, Quan; Zhang, Hui; Tang, Min; Hu, Xiangyun
2014-01-01
A high-precision image-aided inertial navigation system (INS) is proposed as an alternative to the carrier-phase-based differential Global Navigation Satellite Systems (CDGNSSs) when satellite-based navigation systems are unavailable. In this paper, the image/INS integrated algorithm is modeled by a tightly-coupled iterative extended Kalman filter (IEKF). Tightly-coupled integration ensures that the integrated system is reliable, even if few known feature points (i.e., less than three) are observed in the images. A new global observability analysis of this tightly-coupled integration is presented to guarantee that the system is observable under the necessary conditions. The analysis conclusions were verified by simulations and field tests. The field tests also indicate that high-precision position (centimeter-level) and attitude (half-degree-level)-integrated solutions can be achieved in a global reference. PMID:25330046
Lu, Jiaxi; Wang, Pengli; Wang, Qiuying; Wang, Yanan; Jiang, Miaomiao
2018-05-15
In the current study, we employed high-resolution proton and carbon nuclear magnetic resonance spectroscopy (¹H and ¹³C NMR) for quantitative analysis of glycerol in drug injections without any complex pre-treatment or derivatization of samples. The established methods were validated with good specificity, linearity, accuracy, precision, stability, and repeatability. Our results revealed that the content of glycerol was convenient to calculate directly via the integration ratios of peak areas with an internal standard in ¹H NMR spectra, while integration of peak heights was appropriate for ¹³C NMR in combination with an external calibration of glycerol. The developed methods were both successfully applied to drug injections. Quantitative NMR methods show an extensive prospect for glycerol determination in various liquid samples.
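With an internal standard, the analyte content follows from the standard qNMR area-ratio relation; the sketch below uses generic symbols and a hypothetical internal standard, not values from the paper:

```python
def qnmr_mass(area_a, area_is, n_a, n_is, mw_a, mw_is, mass_is):
    """Mass of analyte from the 1H NMR peak-area ratio against an internal
    standard: m_a = (A_a/A_is) * (N_is/N_a) * (M_a/M_is) * m_is,
    where A = integrated area, N = number of contributing protons,
    M = molar mass (standard qNMR relation; symbols are generic)."""
    return (area_a / area_is) * (n_is / n_a) * (mw_a / mw_is) * mass_is

# glycerol (C3H8O3, MW 92.09 g/mol) against a hypothetical internal standard
m = qnmr_mass(area_a=2.0, area_is=1.0, n_a=5, n_is=9,
              mw_a=92.09, mw_is=178.27, mass_is=10.0)
```

The proton counts, standard molar mass and areas above are illustrative numbers only.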
On the complete and partial integrability of non-Hamiltonian systems
NASA Astrophysics Data System (ADS)
Bountis, T. C.; Ramani, A.; Grammaticos, B.; Dorizzi, B.
1984-11-01
The methods of singularity analysis are applied to several third-order non-Hamiltonian systems of physical significance, including the Lotka-Volterra equations, the three-wave interaction and the Rikitake dynamo model. Complete integrability is defined and new completely integrable systems are discovered by means of the Painlevé property. In all these cases we obtain integrals, which reduce the equations either to a final quadrature or to an irreducible second-order ordinary differential equation (ODE) solved by Painlevé transcendents. Relaxing the Painlevé property, we find many partially integrable cases whose movable singularities are poles at leading order, with ln(t - t0) terms entering at higher orders. In an Nth-order generalized Rössler model, a precise relation is established between the partial fulfillment of the Painlevé conditions and the existence of N - 2 integrals of the motion.
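The singularity test referred to above expands each dependent variable about a movable singularity t0; in generic notation (not specific to any of the paper's systems), the Painlevé property requires a pure Laurent expansion of the form

```latex
x_i(t) = (t - t_0)^{-p_i} \sum_{j=0}^{\infty} a_{ij}\,(t - t_0)^{j},
\qquad p_i \in \mathbb{Z}^{+},
```

with enough free coefficients a_{ij} (arising at the resonances) to match the order of the system. The partially integrable cases mentioned in the abstract instead admit logarithmic corrections, with terms proportional to ln(t - t0) entering at orders above the leading pole.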
Xiao, Qing; Min, Taishan; Ma, Shuangping; Hu, Lingna; Chen, Hongyan; Lu, Daru
2018-04-18
Targeted integration of transgenes facilitates functional genomic research and holds promise for gene therapy. The established microhomology-mediated end-joining (MMEJ)-based strategy leads to precise gene knock-in with an easily constructed donor, yet its limited efficiency remains to be improved. Here, we show that a single-strand DNA (ssDNA) donor efficiently increases knock-in efficiency, and we establish a method to achieve intracellular linearization of a long ssDNA donor. We identified that the CRISPR/Cas9 system is responsible for breaking double-strand DNA (dsDNA) of the palindromic structure in the inverted terminal repeat (ITR) region of recombinant adeno-associated virus (AAV), leading to inhibition of viral second-strand DNA synthesis. Combining Cas9 plasmids targeting the genome and the ITR with AAV donor delivery, precise knock-in of a gene cassette was achieved, with 13-14% of the donor insertion events being mediated by MMEJ in HEK 293T cells. This study describes a novel method to integrate large single-strand transgene cassettes into genomes, increasing knock-in efficiency by 13.6-19.5-fold relative to the conventional AAV-mediated method. It also provides a comprehensive solution to the challenges of complicated production and difficult delivery of large exogenous fragments.
NASA Astrophysics Data System (ADS)
Liu, Yahui; Fan, Xiaoqian; Lv, Chen; Wu, Jian; Li, Liang; Ding, Dawei
2018-02-01
Information fusion for INS/GPS navigation systems based on filtering technology is a current research focus. In order to improve the precision of navigation information, a navigation technology based on an Adaptive Kalman Filter with an attenuation factor is proposed in this paper to restrain noise. The algorithm continuously updates the measurement noise variance and process noise variance of the system by collecting the estimated and measured values, and this method can suppress white noise. Because a measured value closer to the current time reflects the characteristics of the noise more accurately, an attenuation factor is introduced to increase the weight of the current value, in order to deal with the noise variance caused by environmental disturbance. To validate the effectiveness of the proposed algorithm, a series of road tests was carried out in an urban environment. The GPS and IMU data of the experiments were collected and processed by dSPACE and MATLAB/Simulink. Based on the test results, the accuracy of the proposed algorithm is 20% higher than that of a traditional Adaptive Kalman Filter. It also shows that the precision of the integrated navigation can be improved due to the reduced influence of environmental noise.
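A scalar sketch of the idea, assuming a random-walk state and an innovation-based measurement-noise update weighted by an attenuation (forgetting) factor; the recursion below is illustrative, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)

def adaptive_kf(z, q, r0, b=0.98):
    """Scalar Kalman filter whose measurement-noise variance R is re-estimated
    from the innovations, with attenuation factor b < 1 weighting recent
    innovations more heavily (illustrative Sage-Husa-style sketch)."""
    x, p, r = z[0], 1.0, r0
    xs = []
    for k, zk in enumerate(z):
        p = p + q                            # predict (random-walk model)
        e = zk - x                           # innovation
        d = (1.0 - b) / (1.0 - b ** (k + 1)) # attenuation-factor weight
        r = (1.0 - d) * r + d * (e * e - p)  # innovation-based R update
        r = max(r, 1e-9)                     # keep the variance positive
        g = p / (p + r)                      # Kalman gain
        x = x + g * e
        p = (1.0 - g) * p
        xs.append(x)
    return np.array(xs)

truth = np.ones(200)
z = truth + rng.normal(0.0, 0.5, size=200)   # noisy synthetic measurements
est = adaptive_kf(z, q=1e-4, r0=1.0)
```

Because the weight d shrinks toward 1 - b, old innovations decay geometrically, so the estimated R tracks slowly changing environmental noise.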
On computing Laplace's coefficients and their derivatives.
NASA Astrophysics Data System (ADS)
Gerasimov, I. A.; Vinnikov, E. L.
An algorithm for computing Laplace's coefficients and their derivatives using recurrence relations is proposed. The A.G.M. method is used to calculate the values L0(0), L0(1). The FORTRAN program corresponding to the algorithm is given. Precision control was provided by numerical integration with Simpson's method. The behavior of Laplace's coefficients and their third derivatives with varying indices K, n for fixed values of the α-parameter is presented graphically.
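The quadrature used for precision control can be reproduced directly from the standard integral form of the Laplace coefficients; the sketch below assumes the common definition b_s^(k)(α) = (2/π) ∫₀^π cos(kφ) (1 - 2α cos φ + α²)^(-s) dφ and composite Simpson's rule:

```python
import math

def laplace_coeff(s, k, alpha, n=2000):
    """Laplace coefficient b_s^(k)(alpha) by composite Simpson's rule
    (n must be even), the same quadrature the note uses as a check."""
    h = math.pi / n
    def f(phi):
        return math.cos(k * phi) * (1.0 - 2.0 * alpha * math.cos(phi)
                                    + alpha * alpha) ** (-s)
    total = f(0.0) + f(math.pi)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(i * h)
    return (2.0 / math.pi) * (h / 3.0) * total

# sanity check: at alpha = 0 the integrand reduces to cos(k*phi),
# so b_s^(0)(0) = 2 and b_s^(k)(0) = 0 for k >= 1
```

A recurrence-based evaluation, as in the note, is far cheaper; the quadrature serves only as an independent precision check.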
Round-off error in long-term orbital integrations using multistep methods
NASA Technical Reports Server (NTRS)
Quinlan, Gerald D.
1994-01-01
Techniques for reducing round-off error are compared by testing them on high-order Störmer and symmetric multistep methods. The best technique for most applications is to write the equation in summed, function-evaluation form and to store the coefficients as rational numbers. A larger error reduction can be achieved by writing the equation in backward-difference form and performing some of the additions in extended precision, but this entails a larger central processing unit (CPU) cost.
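A portable software analogue of performing some additions in extended precision is compensated (Kahan) summation, shown here purely as an illustration of round-off reduction, not as the paper's exact scheme:

```python
import math

def kahan_sum(xs):
    """Compensated summation: carries the rounding error of each addition
    in a correction term, emulating extra-precision accumulation."""
    s, c = 0.0, 0.0
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y      # rounding error of the addition s + y
        s = t
    return s

# naive summation loses every unit increment against the large first term
xs = [1e16] + [1.0] * 1000
```

In a long orbital integration the analogous trick is applied to the accumulated sums of the multistep formula, where the increments are tiny relative to the running total.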
Common-path low-coherence interferometry fiber-optic sensor guided microincision
NASA Astrophysics Data System (ADS)
Zhang, Kang; Kang, Jin U.
2011-09-01
We propose and demonstrate common-path low-coherence interferometry (CP-LCI) fiber-optic sensor guided precise microincision. The method tracks the target surface and compensates the tool-to-surface relative motion with better than ±5 μm resolution using a precision micromotor connected to the tool tip. A single-fiber distance probe integrated microdissector was used to perform an accurate 100 μm incision into the surface of an Intralipid phantom. The CP-LCI guided incision quality in terms of depth was evaluated afterwards using three-dimensional Fourier-domain optical coherence tomography imaging, which showed significant improvement of incision accuracy compared to free-hand-only operations.
Efficient Genome Editing in Induced Pluripotent Stem Cells with Engineered Nucleases In Vitro.
Termglinchan, Vittavat; Seeger, Timon; Chen, Caressa; Wu, Joseph C; Karakikes, Ioannis
2017-01-01
Precision genome engineering is rapidly advancing the application of the induced pluripotent stem cells (iPSCs) technology for in vitro disease modeling of cardiovascular diseases. Targeted genome editing using engineered nucleases is a powerful tool that allows for reverse genetics, genome engineering, and targeted transgene integration experiments to be performed in a precise and predictable manner. However, nuclease-mediated homologous recombination is an inefficient process. Herein, we describe the development of an optimized method combining site-specific nucleases and the piggyBac transposon system for "seamless" genome editing in pluripotent stem cells with high efficiency and fidelity in vitro.
Optimization of processing parameters of UAV integral structural components based on yield response
NASA Astrophysics Data System (ADS)
Chen, Yunsheng
2018-05-01
In order to improve the overall strength of an unmanned aerial vehicle (UAV), the processing parameters of its integral structural components must be optimized, since machining is affected by the initial residual stress and machining errors occur easily. An optimization model for the machining parameters of UAV integral structural components based on yield response is therefore proposed. The finite element method is used to simulate the machining of UAV integral structural components. A prediction model for the machining error of the workpiece surface is established, and the influence of the tool path on the residual stress of the integral structure is studied according to the stress state of the component. The yield response of the time-varying stiffness and the stress evolution mechanism of the UAV integral structure are analyzed. The simulation results show that the method optimizes the machining parameters, improves the precision of UAV milling, reduces the machining error, and realizes deformation prediction and error compensation for UAV integral structural parts, thus improving machining quality.
SIVA/DIVA- INITIAL VALUE ORDINARY DIFFERENTIAL EQUATION SOLUTION VIA A VARIABLE ORDER ADAMS METHOD
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1994-01-01
The SIVA/DIVA package is a collection of subroutines for the solution of ordinary differential equations. There are versions for single precision and double precision arithmetic. These solutions are applicable to stiff or nonstiff differential equations of first or second order. SIVA/DIVA requires fewer evaluations of derivatives than other variable order Adams predictor-corrector methods. There is an option for the direct integration of second order equations which can make integration of trajectory problems significantly more efficient. Other capabilities of SIVA/DIVA include: monitoring a user supplied function which can be separate from the derivative; dynamically controlling the step size; displaying or not displaying output at initial, final, and step size change points; saving the estimated local error; and reverse communication where subroutines return to the user for output or computation of derivatives instead of automatically performing calculations. The user must supply SIVA/DIVA with: 1) the number of equations; 2) initial values for the dependent and independent variables, integration stepsize, error tolerance, etc.; and 3) the driver program and operational parameters necessary for subroutine execution. SIVA/DIVA contains an extensive diagnostic message library should errors occur during execution. SIVA/DIVA is written in FORTRAN 77 for batch execution and is machine independent. It has a central memory requirement of approximately 120K of 8 bit bytes. This program was developed in 1983 and last updated in 1987.
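A minimal fixed-order cousin of the Adams predictor-corrector family that SIVA/DIVA implements, in PECE form; this two-step sketch is illustrative only, since the package itself is variable-order FORTRAN:

```python
import math

def abm2(f, t0, y0, h, steps):
    """Two-step Adams-Bashforth predictor with Adams-Moulton (trapezoidal)
    corrector in PECE mode, a minimal cousin of variable-order Adams codes."""
    t, y = t0, y0
    f_prev = f(t, y)
    # bootstrap the multistep history with one Heun (RK2) step
    y = y + h / 2.0 * (f_prev + f(t + h, y + h * f_prev))
    t, f_old = t + h, f_prev
    for _ in range(steps - 1):
        fn = f(t, y)
        yp = y + h / 2.0 * (3.0 * fn - f_old)      # predict (AB2)
        y = y + h / 2.0 * (fn + f(t + h, yp))      # correct (AM2)
        t, f_old = t + h, fn
    return y

# y' = -y, y(0) = 1  ->  y(1) = e^{-1}
approx = abm2(lambda t, y: -y, 0.0, 1.0, h=0.01, steps=100)
```

Note the economy that motivates Adams methods: each step needs only two derivative evaluations here, regardless of order, whereas a Runge-Kutta method of comparable order needs several.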
NASA Astrophysics Data System (ADS)
Chen, Guohai; Meng, Zeng; Yang, Dixiong
2018-01-01
This paper develops an efficient method, termed PE-PIM, to compute the exact nonstationary responses of a pavement structure, modeled as a rectangular thin plate resting on a bi-parametric Pasternak elastic foundation subjected to stochastic moving loads with constant acceleration. Firstly, analytical power spectral density (PSD) functions of the random responses of the thin plate are derived by integrating the pseudo excitation method (PEM) with Duhamel's integral. Based on the PEM, a new equivalent von Mises stress (NEVMS) is proposed, whose PSD function contains all cross-PSD functions between stress components. Then, the PE-PIM, which combines the PEM with the precise integration method (PIM), is presented to efficiently obtain the stochastic responses of the plate by replacing Duhamel's integral with the PIM. Moreover, semi-analytical Monte Carlo simulation is employed to verify the computational results of the developed PE-PIM. Finally, numerical examples demonstrate the high accuracy and efficiency of the PE-PIM for nonstationary random vibration analysis. The effects of the velocity and acceleration of the moving load, the boundary conditions of the plate, and the foundation stiffness on the deflection and NEVMS responses are scrutinized.
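The PIM ingredient computes the state-transition matrix exp(HΔt) by 2^N scaling while keeping the incremental part separate from the identity, so that adding 1 to tiny entries never destroys precision. A minimal sketch with a generic matrix A (not the plate equations):

```python
import numpy as np

def pim_expm(A, t, N=20):
    """Precise integration method: exp(A t) via 2^N scaling, storing the
    incremental part Ta separately so round-off in I + Ta is avoided."""
    tau = t / 2.0 ** N
    B = A * tau
    # truncated Taylor series of exp(B) - I on the tiny interval tau
    Ta = B + B @ B / 2.0 + B @ B @ B / 6.0 + B @ B @ B @ B / 24.0
    for _ in range(N):
        Ta = 2.0 * Ta + Ta @ Ta       # (I + Ta)^2 = I + (2 Ta + Ta^2)
    return np.eye(A.shape[0]) + Ta

# harmonic oscillator: exp(A t) is a rotation matrix
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
T = pim_expm(A, np.pi / 2)            # should approach [[0, 1], [-1, 0]]
```

The doubling identity (I + Ta)² = I + (2 Ta + Ta²) is the core of the method: the identity matrix is added only once, at the very end.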
Approximate analytic method for high-apogee twelve-hour orbits of artificial Earth's satellites
NASA Astrophysics Data System (ADS)
Vashkovyaka, M. A.; Zaslavskii, G. S.
2016-09-01
We propose an approach to studying the evolution of high-apogee twelve-hour orbits of artificial Earth satellites. We describe the parameters of the motion model used for the satellite, in which the principal gravitational perturbations of the Moon and Sun, the nonsphericity of the Earth, and perturbations from the light-pressure force are approximately taken into account. To solve the system of averaged equations describing the evolution of the orbit parameters of the satellite, we use both numerical and analytic methods. To select the initial parameters of the twelve-hour orbit, we assume that the ground track of the satellite (its path over the surface of the Earth) is stable. Results obtained by the analytic method and by numerical integration of the evolution system are compared. For intervals of several years, we obtain estimates of the oscillation periods and amplitudes of the orbital elements. To verify the results and estimate the precision of the method, we use numerical integration of the rigorous (not averaged) equations of motion of the satellite, which take the forces acting on the satellite into account substantially more completely and precisely. The described method can be applied not only to the investigation of the orbit evolution of artificial Earth satellites; it can also be applied to satellites of other planets of the Solar system, should the corresponding research problem arise and this special class of resonance orbits be used for that purpose.
NASA Astrophysics Data System (ADS)
Tian, Qijie; Chang, Songtao; Li, Zhou; He, Fengyun; Qiao, Yanfeng
2017-03-01
The suppression level of internal stray radiation is a key criterion for infrared imaging systems, especially for high-precision cryogenic infrared imaging systems. To achieve accurate measurement of the internal stray radiation of cryogenic infrared imaging systems under various ambient temperatures, a measurement method based on radiometric calibration is presented in this paper. First, the calibration formula is deduced taking the integration time into account, and the effect of ambient temperature on internal stray radiation is analyzed in detail. Then, an approach is proposed to measure the internal stray radiation of cryogenic infrared imaging systems under various ambient temperatures. By calibrating the system at two ambient temperatures, the quantitative relation between the internal stray radiation and the ambient temperature can be acquired, and then the internal stray radiation of the cryogenic infrared imaging system under various ambient temperatures can be calculated. Finally, several experiments are performed in a chamber with controllable inside temperature to evaluate the effectiveness of the proposed method. Experimental results indicate that the proposed method can measure internal stray radiation with high accuracy at various ambient temperatures and integration times. The proposed method has the advantages of simple implementation and high-precision measurement. The measurement results can be used to guide stray radiation suppression and to test whether the internal stray radiation suppression performance meets the requirement.
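The two-temperature calibration step can be sketched as a two-point fit. The linear dependence assumed below is an illustrative simplification, not the quantitative relation the paper derives, and all numbers are hypothetical:

```python
def stray_vs_ambient(T1, S1, T2, S2):
    """Fit the internal stray radiation signal S against ambient temperature T
    from two calibration points, assuming a linear dependence (illustrative
    simplification; the actual relation follows from the calibration formula)."""
    slope = (S2 - S1) / (T2 - T1)
    return lambda T: S1 + slope * (T - T1)

# hypothetical calibration at 283 K and 303 K (signal in digital counts)
model = stray_vs_ambient(283.0, 120.0, 303.0, 180.0)
s_293 = model(293.0)   # interpolated stray signal at an intermediate 293 K
```

Once the relation is fixed by the two calibration runs, the stray contribution at any other ambient temperature in range can be predicted rather than re-measured.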
NASA Astrophysics Data System (ADS)
Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan
2017-11-01
Precipitation plays an important role in determining the climate of a region. Precise estimation of precipitation is required to manage and plan water resources, as well as other related applications such as hydrology, climatology, meteorology and agriculture. Time series of hydrologic variables such as precipitation are composed of deterministic and stochastic parts. Despite this fact, the stochastic part of the precipitation data is not usually considered in modeling of the precipitation process. As an innovation, the present study introduces three new hybrid models by integrating soft computing methods, including multivariate adaptive regression splines (MARS), Bayesian networks (BN) and gene expression programming (GEP), with a time series model, namely generalized autoregressive conditional heteroscedasticity (GARCH), for modeling of the monthly precipitation. For this purpose, the deterministic (obtained by soft computing methods) and stochastic (obtained by the GARCH time series model) parts are combined with each other. To carry out this research, monthly precipitation data of the Babolsar, Bandar Anzali, Gorgan, Ramsar, Tehran and Urmia stations, with different climates in Iran, were used during the period of 1965-2014. Root mean square error (RMSE), relative root mean square error (RRMSE), mean absolute error (MAE) and the determination coefficient (R²) were employed to evaluate the performance of the conventional/single MARS, BN and GEP models, as well as the proposed MARS-GARCH, BN-GARCH and GEP-GARCH hybrid models. It was found that the proposed novel models are more precise than the single MARS, BN and GEP models. Overall, the MARS-GARCH and BN-GARCH models yielded better accuracy than GEP-GARCH. The results of the present study confirmed the suitability of the proposed methodology for precise modeling of precipitation.
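The four evaluation criteria can be computed as below; the R² here is the common coefficient-of-determination form, which may differ in detail from the paper's definition:

```python
import numpy as np

def scores(obs, pred):
    """RMSE, RRMSE, MAE and determination coefficient R^2, the four
    criteria used to rank the single and hybrid precipitation models."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    rmse = np.sqrt(np.mean(err ** 2))
    rrmse = rmse / np.mean(obs)                    # relative to the mean observation
    mae = np.mean(np.abs(err))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, rrmse, mae, r2

# toy observed vs. predicted monthly precipitation values
rmse, rrmse, mae, r2 = scores([10.0, 20.0, 30.0], [12.0, 18.0, 30.0])
```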
[Implementation of precision control to achieve the goal of schistosomiasis elimination in China].
Zhou, Xiao-nong
2016-02-01
The integrated strategy for schistosomiasis control with a focus on infectious source control, implemented since 2004, accelerated the progress towards schistosomiasis control in China and achieved transmission control of the disease across the country by the end of 2015, thereby meeting, on schedule, the overall objective of the Mid- and Long-term National Plan for Prevention and Control of Schistosomiasis (2004-2015). The goal of schistosomiasis elimination by 2025 was then proposed in China in 2014. To achieve this new goal on schedule, we have to address the key issues and implement precision control measures with more precise identification of control targets, so that the potential factors leading to a resurgence of schistosomiasis transmission can be completely eradicated and elimination achieved on schedule. Precision schistosomiasis control, a theoretical innovation of precision medicine in schistosomiasis control, will provide new insights into schistosomiasis control based on the conception of precision medicine. This paper describes the definition, interventions and role of precision schistosomiasis control in the elimination of schistosomiasis in China, and demonstrates that sustainable improvement of professionals and integrated control capability at the grass-roots level is a prerequisite to implementation, that precision schistosomiasis control is a key to further implementation of the integrated strategy with its focus on infectious source control, and that it is a guarantee of curing schistosomiasis patients and implementing the schistosomiasis control program and interventions.
T.Z. Ye; K.J.S. Jayawickrama; G.R. Johnson
2004-01-01
The BLUP (best linear unbiased prediction) method has been widely used in forest tree improvement programs. Since one of the properties of BLUP is that related individuals contribute to the predictions of each other, it seems logical that integrating data from all generations and from all populations would improve both the precision and accuracy in predicting genetic...
2017-10-01
The goal of this research is to characterize the mechanisms leading to hypermutated prostate cancer and to integrate tumor hypermutation status with clinical decision making and therapy to improve the care of men with advanced prostate cancer. Using Next-Gen... preparation of presentations that integrate prostate cancer patient clinical histories with genomic findings.
Design of c-band telecontrol transmitter local oscillator for UAV data link
NASA Astrophysics Data System (ADS)
Cao, Hui; Qu, Yu; Song, Zuxun
2018-01-01
A high-stability, high-precision, lightweight C-band local oscillator for the radio frequency (RF) transmitter unit of an Unmanned Aerial Vehicle (UAV) data link was designed in this paper. Based on the highly integrated broadband phase-locked loop (PLL) chip HMC834LP6GE, the system performs fractional-N control through internal module programming to achieve low phase noise and fine frequency resolution. Simulation and testing methods were combined to optimize and select the loop filter parameters to ensure the high precision and stability of the frequency synthesis output. The theoretical analysis and engineering prototype measurement results showed that the local oscillator has a stable output frequency, accurate frequency steps, high spurious suppression and low phase noise, and meets the design requirements. The proposed design idea and research method have theoretical guiding significance for engineering practice.
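The fractional-N relation behind the fine frequency resolution can be sketched as follows; the 24-bit modulus and the example numbers are illustrative assumptions, not taken from the HMC834LP6GE datasheet:

```python
def frac_n_output(f_ref, r_div, n_int, n_frac, modulus=2 ** 24):
    """Output frequency of a fractional-N PLL:
    f_out = f_pfd * (Nint + Nfrac / M), with f_pfd = f_ref / R.
    The 24-bit modulus M is a typical value, assumed here for illustration."""
    f_pfd = f_ref / r_div
    return f_pfd * (n_int + n_frac / modulus)

# e.g. a 50 MHz reference with R = 1: the step size is f_pfd / M ~ 3 Hz,
# which is what "fine frequency resolution" means in practice
f = frac_n_output(50e6, 1, 100, 2 ** 23)   # Nfrac = M/2 -> f = f_pfd * 100.5
```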
VaDiR: an integrated approach to Variant Detection in RNA.
Neums, Lisa; Suenaga, Seiji; Beyerlein, Peter; Anders, Sara; Koestler, Devin; Mariani, Andrea; Chien, Jeremy
2018-02-01
Advances in next-generation DNA sequencing technologies are now enabling detailed characterization of sequence variations in cancer genomes. With whole-genome sequencing, variations in coding and non-coding sequences can be discovered, but the associated cost currently limits its general use in research. Whole-exome sequencing is used to characterize sequence variations in coding regions, but the cost of capture reagents and biases in capture rate limit its full use in research. Additional limitations include uncertainty in assigning the functional significance of mutations observed in non-coding regions or in genes that are not expressed in cancer tissue. We investigated the feasibility of uncovering mutations from expressed genes using RNA sequencing datasets with a method called Variant Detection in RNA (VaDiR) that integrates three variant callers, namely SNPiR, RVBoost, and MuTect2. The combination of all three methods, which we called Tier 1 variants, produced the highest precision, with true positive mutations from RNA-seq that could be validated at the DNA level. We also found that the integration of Tier 1 variants with those called by MuTect2 and SNPiR produced the highest recall with acceptable precision. Finally, we observed a higher rate of mutation discovery in genes that are expressed at higher levels. Our method, VaDiR, provides a possibility of uncovering mutations from RNA sequencing datasets that could be useful in further functional analysis. In addition, our approach allows orthogonal validation of DNA-based mutation discovery by providing complementary sequence variation analysis from paired RNA/DNA sequencing datasets.
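The Tier 1 set is simply the intersection of the three callers' outputs, and the higher-recall set adds the overlap of MuTect2 and SNPiR. A toy sketch with hypothetical variant keys (chromosome, position, ref, alt):

```python
def tier1(snpir, rvboost, mutect2):
    """Tier 1 variants: calls common to all three callers (highest precision).
    The higher-recall set unions Tier 1 with the MuTect2/SNPiR overlap."""
    t1 = snpir & rvboost & mutect2
    high_recall = t1 | (mutect2 & snpir)
    return t1, high_recall

# hypothetical call sets keyed by (chrom, pos, ref, alt)
snpir   = {("chr1", 100, "A", "G"), ("chr2", 200, "C", "T"), ("chr3", 300, "G", "A")}
rvboost = {("chr1", 100, "A", "G"), ("chr3", 300, "G", "A")}
mutect2 = {("chr1", 100, "A", "G"), ("chr2", 200, "C", "T")}
t1, hr = tier1(snpir, rvboost, mutect2)
```

In a real pipeline the keys would come from normalized VCF records, but the precision/recall trade-off between the two sets is exactly this set algebra.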
Engineering and agronomy aspects of a long-term precision agriculture field experiment
USDA-ARS?s Scientific Manuscript database
Much research has been conducted on specific precision agriculture tools and implementation strategies, but little has been reported on long-term evaluation of integrated precision agriculture field experiments. In 2004 our research team developed and initiated a multi-faceted “precision agriculture...
Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.
2017-01-01
Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from ¹⁸F-Fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883
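The patient-sampling uncertainty step can be sketched as a percentile bootstrap over lesions; the figure-of-merit values below are synthetic, not from the study:

```python
import numpy as np

rng = np.random.default_rng(7)

def bootstrap_ci(values, stat=np.mean, n_boot=2000, level=0.95):
    """Percentile bootstrap over patients/lesions: resample with replacement
    and read a confidence interval for the figure of merit off the quantiles."""
    values = np.asarray(values, float)
    boots = np.array([stat(rng.choice(values, size=len(values), replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(boots, [(1.0 - level) / 2.0, (1.0 + level) / 2.0])
    return lo, hi

fom = rng.normal(0.8, 0.05, size=120)    # synthetic per-lesion figure of merit
lo, hi = bootstrap_ci(fom)
```

The width of the interval shrinks with the number of lesions, mirroring the paper's observation that results stabilize once more than about 80 lesions are available.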
DESIGN NOTE: New apparatus for haze measurement for transparent media
NASA Astrophysics Data System (ADS)
Yu, H. L.; Hsiao, C. C.; Liu, W. C.
2006-08-01
Precise measurement of the luminous transmittance and haze of transparent media is increasingly important to the LCD industry. Currently there are at least three documentary standards for measuring transmission haze. Unfortunately, none of those standard methods by itself can obtain precise values for the diffuse transmittance (DT), total transmittance (TT) and haze. This note presents a new apparatus capable of precisely measuring all three variables simultaneously. Compared with current structures, the proposed design contains one additional compensatory port. In the optimal design, the light trap absorbs the beam completely, the light scattered by the instrument is zero, and the interior surface of the integrating sphere, the baffle and the reflectance standard have identical characteristics. Accurate values of the TT, DT and haze can be obtained using the new apparatus. Even if the design is not optimal, the measurement errors of the new apparatus are smaller than those of other methods, especially for high sphere reflectance. Therefore, the sphere can be made of a high-reflectance material to increase the signal-to-noise ratio.
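All three documentary standards reduce haze to the ratio of diffuse to total transmittance; a one-line sketch of that definition with made-up sample values:

```python
def haze_percent(total_t, diffuse_t):
    """Transmission haze: the percentage of transmitted light scattered
    away from the regular (specular) beam, i.e. 100 * DT / TT."""
    return 100.0 * diffuse_t / total_t

# a hypothetical film transmitting 92% in total, of which 3% is diffuse
h = haze_percent(0.92, 0.03)
```

The apparatus in the note is designed so that DT and TT come from the same integrating-sphere geometry, which is what keeps this ratio, and hence the haze value, consistent.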
van Ditmarsch, Dave; Xavier, João B
2011-06-17
Online spectrophotometric measurements allow monitoring of dynamic biological processes with high time resolution. In contrast, numerous other methods require laborious treatment of samples and can only be carried out offline. Integrating both types of measurement would allow analyzing biological processes more comprehensively. A typical example of this problem is acquiring quantitative data on rhamnolipid secretion by the opportunistic pathogen Pseudomonas aeruginosa. P. aeruginosa cell growth can be measured by optical density (OD600) and gene expression can be measured using reporter fusions with a fluorescent protein, allowing high-time-resolution monitoring. However, measuring the secreted rhamnolipid biosurfactants requires laborious sample processing, which makes this an offline measurement. Here, we propose a method, inspired by a model of exponential cell growth, to integrate growth curve data with endpoint measurements of secreted metabolites. If serially diluting an inoculum gives reproducible time series shifted in time, then time series of endpoint measurements can be reconstructed using the calculated time shifts between dilutions. We illustrate the method using measured rhamnolipid secretion by P. aeruginosa as endpoint measurements, and we integrate these measurements with high-resolution growth curves measured by OD600 and expression of rhamnolipid synthesis genes monitored using a reporter fusion. Two-fold serial dilution allowed integrating rhamnolipid measurements at a ~0.4 h⁻¹ frequency with high-time-resolved data measured at a 6 h⁻¹ frequency. We show how this simple method can be used in combination with mutants lacking specific genes in rhamnolipid synthesis or quorum sensing regulation to acquire rich dynamic data on P. aeruginosa virulence regulation.
Additionally, the linear relation between the ratio of inocula and the time shift between curves produces high-precision measurements of maximum specific growth rates, which were determined with a precision of ~5.4%. Growth curve synchronization allows integration of rich time-resolved data with endpoint measurements to produce time-resolved quantitative measurements. Such data can be valuable to unveil the dynamic regulation of virulence in P. aeruginosa. More generally, growth curve synchronization can be applied to many biological systems, thus helping to overcome a key obstacle in studying dynamic regulation: the scarceness of quantitative time-resolved data.
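The growth-rate estimate follows from the time-shift relation: under exponential growth, an r-fold more dilute inoculum reaches the same optical density ln(r)/μ later, so μ = ln(r)/Δt. A sketch with illustrative numbers:

```python
import math

def growth_rate(dilution_ratio, time_shift):
    """Maximum specific growth rate from the shift between two growth curves:
    an r-fold more dilute inoculum reaches the same OD ln(r)/mu later,
    so mu = ln(r) / time_shift."""
    return math.log(dilution_ratio) / time_shift

# two-fold serial dilutions whose curves are shifted by 0.77 h each
# (the 0.77 h shift is an illustrative number, not the paper's data)
mu = growth_rate(2.0, 0.77)
```

Fitting the shift across many dilutions, rather than one pair of curves, is what yields the ~5.4% precision reported in the abstract.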
[Precision medicine: a required approach for the general internist].
Waeber, Gérard; Cornuz, Jacques; Gaspoz, Jean-Michel; Guessous, Idris; Mooser, Vincent; Perrier, Arnaud; Simonet, Martine Louis
2017-01-18
The general internist cannot be a passive bystander of the anticipated medical revolution induced by precision medicine. The latter aims to improve the prediction and/or clinical course of an individual's disease by integrating all biological, genetic, environmental, phenotypic and psychosocial knowledge about a person. In this article, national and international initiatives in the field of precision medicine are discussed, as well as the potential financial and ethical limitations of personalized medicine. The question is not whether precision medicine will become part of everyday practice, but rather how to involve the general internist early in multidisciplinary teams to ensure optimal information and a shared decision-making process with patients and individuals.
A revised version of the transfer matrix method to analyze one-dimensional structures
NASA Technical Reports Server (NTRS)
Nitzsche, F.
1983-01-01
A new and general method to analyze both free and forced vibration characteristics of one-dimensional structures is discussed in this paper. This scheme links for the first time the classical transfer matrix method with the recently developed integrating matrix technique for integrating systems of differential equations. Two alternative approaches to the problem are presented. The first is based upon a lumped-parameter model to account for the inertia properties of the structure. The second relaxes that constraint, allowing a more precise description of the physical system. The free vibration of a straight uniform beam under different support conditions is analyzed to test the accuracy of the two models. Finally, results for the free vibration of a 12th-order system representing a curved, rotating beam show that the present method extends conveniently to more complicated structural dynamics problems.
Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output
NASA Astrophysics Data System (ADS)
Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu
2015-07-01
Low-latency high-rate (1 Hz) precise real-time kinematic (RTK) positioning can be applied in high-speed scenarios such as aircraft automatic landing, precision agriculture and intelligent vehicles. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, is not able to deliver a low-latency high-rate output for the rover receiver because of long data link transmission time delays (DLTTD) from the reference receiver. To overcome the long DLTTD, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from two receivers. The asynchronous observation model (AOM) is developed based on undifferenced carrier phase observation equations of the two receivers at different epochs over a short baseline. The ephemeris error and atmospheric delay are the main potential error sources affecting positioning accuracy in this model, and they are analyzed theoretically. For a short DLTTD during a period of quiet ionospheric activity, the main error sources degrading positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integral of satellite velocity error, which increase linearly with DLTTD. Cycle slips in the asynchronous double-differenced carrier phase are detected by the TurboEdit method and repaired by the additional-ambiguity-parameter method. The AOM can also handle the synchronous observation model (SOM) and achieve a precise positioning solution with synchronous observations, since the SOM is simply a special case of the AOM. The proposed method not only reduces the cost of data collection and transmission, but also supports a mobile phone network data link for transferring the reference receiver data. The method avoids the data synchronization process apart from the ambiguity initialization step, which is very convenient for real-time vehicle navigation.
The static and kinematic experiment results show that this method achieves a 20 Hz or even higher output rate in real time. The ARTK positioning accuracy is better and more robust than that of the combination of phase difference over time (PDOT) and the SRTK method at a high rate. The ARTK positioning accuracy is equivalent to the SRTK solution when the DLTTD is 0.5 s, and centimeter-level accuracy can be achieved even when the DLTTD is 15 s.
The Automatic Integration of Folksonomies with Taxonomies Using Non-axiomatic Logic
NASA Astrophysics Data System (ADS)
Geldart, Joe; Cummins, Stephen
Cooperative tagging systems such as folksonomies are powerful tools when used to annotate information resources. The inherent power of folksonomies lies in their ability to allow casual users to easily contribute ad hoc, yet meaningful, resource metadata without any specialist training. Older folksonomies have begun to degrade due to their lack of internal structure and the use of many low-quality tags. This chapter describes a remedy for some of the problems associated with folksonomies. We introduce a method for automatic integration and inference of the relationships between tags and resources in a folksonomy using non-axiomatic logic. We test this method on the CiteULike corpus of tags by comparing precision and recall between it and standard keyword search. Our results show that non-axiomatic reasoning is a promising technique for integrating tagging systems with more structured knowledge representations.
Deco, Gustavo; Tagliazucchi, Enzo; Laufs, Helmut; Sanjuán, Ana; Kringelbach, Morten L
2017-01-01
A precise definition of a brain state has proven elusive. Here, we introduce the novel local-global concept of intrinsic ignition characterizing the dynamical complexity of different brain states. Naturally occurring intrinsic ignition events reflect the capability of a given brain area to propagate neuronal activity to other regions, giving rise to different levels of integration. The ignitory capability of brain regions is computed by the elicited level of integration for each intrinsic ignition event in each brain region, averaged over all events. This intrinsic ignition method is shown to clearly distinguish human neuroimaging data of two fundamental brain states (wakefulness and deep sleep). Importantly, whole-brain computational modelling of these data shows that the optimal working point is found where there is maximal variability of the intrinsic ignition across brain regions. Thus, combining whole-brain models with intrinsic ignition can provide novel insights into the underlying mechanisms of brain states.
Inferring the gravitational potential of the Milky Way with a few precisely measured stars
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price-Whelan, Adrian M.; Johnston, Kathryn V.; Hendel, David
2014-10-10
The dark matter halo of the Milky Way is expected to be triaxial and filled with substructure. It is hoped that streams or shells of stars produced by tidal disruption of stellar systems will provide precise measures of the gravitational potential to test these predictions. We develop a method for inferring the Galactic potential with tidal streams based on the idea that the stream stars were once close in phase space. Our method can flexibly adapt to any form for the Galactic potential: it works in phase-space rather than action-space and hence relies neither on our ability to derive actions nor on the integrability of the potential. Our model is probabilistic, with a likelihood function and priors on the parameters. The method can properly account for finite observational uncertainties and missing data dimensions. We test our method on synthetic data sets generated from N-body simulations of satellite disruption in a static, multi-component Milky Way, including a triaxial dark matter halo with observational uncertainties chosen to mimic current and near-future surveys of various stars. We find that with just eight well-measured stream stars, we can infer properties of a triaxial potential with precisions of the order of 5%-7%. Without proper motions, we obtain 10% constraints on most potential parameters and precisions around 5%-10% for recovering missing phase-space coordinates. These results are encouraging for the goal of using flexible, time-dependent potential models combined with larger data sets to unravel the detailed shape of the dark matter distribution around the Milky Way.
Development of a method for personal, spatiotemporal exposure assessment.
Adams, Colby; Riggs, Philip; Volckens, John
2009-07-01
This work describes the development and evaluation of a high-resolution, space- and time-referenced sampling method for personal exposure assessment to airborne particulate matter (PM). This method integrates continuous measures of personal PM levels with the corresponding location-activity (i.e. work/school, home, transit) of the subject. Monitoring equipment includes a small, portable global positioning system (GPS) receiver, a miniature aerosol nephelometer, and an ambient temperature monitor to estimate the location, time, and magnitude of personal exposure to particulate matter air pollution. Precision and accuracy of each component, as well as the integrated method performance, were tested in a combination of laboratory and field tests. Spatial data were apportioned into pre-determined location-activity categories (i.e. work/school, home, transit) with a simple, temporospatially based algorithm. The apportioning algorithm was extremely effective, with an overall accuracy of 99.6%. This method allows examination of an individual's estimated exposure through space and time, which may provide new insights into exposure-activity relationships not possible with traditional exposure assessment techniques (i.e., time-integrated, filter-based measurements). Furthermore, the method is applicable to any contaminant or stressor that can be measured on an individual with a direct-reading sensor.
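A temporospatial apportioning step of the kind described above can be sketched as a proximity rule on GPS fixes. This is a minimal sketch, not the authors' algorithm: the category names follow the abstract, but the haversine distance test and the 100 m radius are assumptions for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def apportion(point, home, work, radius_m=100.0):
    """Assign one GPS fix to a location-activity category by proximity:
    within radius_m of home -> 'home'; within radius_m of the
    work/school location -> 'work'; otherwise -> 'transit'."""
    if haversine_m(*point, *home) <= radius_m:
        return "home"
    if haversine_m(*point, *work) <= radius_m:
        return "work"
    return "transit"
```

Applying this rule to each time-stamped fix, then pairing the resulting category sequence with the nephelometer readings, yields exposure apportioned by location-activity.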
Low speed airfoil design and analysis
NASA Technical Reports Server (NTRS)
Eppler, R.; Somers, D. M.
1979-01-01
A low speed airfoil design and analysis program was developed which contains several unique features. In the design mode, the velocity distribution is specified not for one but for many different angles of attack. Several iteration options are included which allow the trailing edge angle to be specified while other parameters are iterated. For airfoil analysis, a panel method is available which uses third-order panels having parabolic vorticity distributions. The flow condition is satisfied at the end points of the panels. Both sharp and blunt trailing edges can be analyzed. The integral boundary layer method with its laminar separation bubble analog, empirical transition criterion, and precise turbulent boundary layer equations compares very favorably with other methods, both integral and finite difference. Comparisons with experiment for several airfoils over a very wide Reynolds number range are discussed. Applications to high lift airfoil design are also demonstrated.
Parallel high-precision orbit propagation using the modified Picard-Chebyshev method
NASA Astrophysics Data System (ADS)
Koblick, Darin C.
2012-03-01
The modified Picard-Chebyshev method, when run in parallel, is thought to be more accurate and faster than the most efficient sequential numerical integration techniques when applied to orbit propagation problems. Previous experiments have shown that the modified Picard-Chebyshev method can have up to a one-order-of-magnitude speedup over the 12
Noninvasive vacuum integrity tests on fast warm-up traveling-wave tubes
NASA Astrophysics Data System (ADS)
Dallos, A.; Carignan, R. G.
1989-04-01
A method of tube vacuum monitoring that uses the tube's existing internal electrodes as an ion gage is discussed. This method has been refined using present-day instrumentation and has proved to be a precise, simple, and fast method of tube vacuum measurement. The method is noninvasive due to operation of the cathode at low temperature, which minimizes pumping or outgassing. Because of the low current levels to be measured, anode insulator leakage must be low, and the leads must be properly shielded to minimize charging effects. A description of the method, instrumentation used, limitations, and data showing results over a period of 600 days are presented.
Precise pooling and dispensing of microfluidic droplets towards micro- to macro-world interfacing
Brouzes, Eric; Carniol, April; Bakowski, Tomasz; Strey, Helmut H.
2014-01-01
Droplet microfluidics possesses unique properties such as the ability to carry out multiple independent reactions without dispersion of samples in microchannels. We seek to extend the use of droplet microfluidics to a new range of applications by enabling its integration into workflows based on traditional technologies, such as microtiter plates. Our strategy consists of developing a novel method to manipulate, pool and deliver a precise number of microfluidic droplets. To this aim, we present a basic module that combines droplet trapping with an on-chip valve. We quantitatively analyzed the trapping efficiency of the basic module in order to optimize its design. We also demonstrate the integration of the basic module into a multiplex device that can deliver 8 droplets per cycle. This device will have a great impact on low-throughput droplet applications that necessitate interfacing with macroscale technologies. The micro-to-macro interface is particularly critical in microfluidic applications aimed at sample preparation and has not been rigorously addressed in this context. PMID:25485102
Multiplexed precision genome editing with trackable genomic barcodes in yeast.
Roy, Kevin R; Smith, Justin D; Vonesch, Sibylle C; Lin, Gen; Tu, Chelsea Szu; Lederer, Alex R; Chu, Angela; Suresh, Sundari; Nguyen, Michelle; Horecka, Joe; Tripathi, Ashutosh; Burnett, Wallace T; Morgan, Maddison A; Schulz, Julia; Orsley, Kevin M; Wei, Wu; Aiyar, Raeka S; Davis, Ronald W; Bankaitis, Vytas A; Haber, James E; Salit, Marc L; St Onge, Robert P; Steinmetz, Lars M
2018-07-01
Our understanding of how genotype controls phenotype is limited by the scale at which we can precisely alter the genome and assess the phenotypic consequences of each perturbation. Here we describe a CRISPR-Cas9-based method for multiplexed accurate genome editing with short, trackable, integrated cellular barcodes (MAGESTIC) in Saccharomyces cerevisiae. MAGESTIC uses array-synthesized guide-donor oligos for plasmid-based high-throughput editing and features genomic barcode integration to prevent plasmid barcode loss and to enable robust phenotyping. We demonstrate that editing efficiency can be increased more than fivefold by recruiting donor DNA to the site of breaks using the LexA-Fkh1p fusion protein. We performed saturation editing of the essential gene SEC14 and identified amino acids critical for chemical inhibition of lipid signaling. We also constructed thousands of natural genetic variants, characterized guide mismatch tolerance at the genome scale, and ascertained that cryptic Pol III termination elements substantially reduce guide efficacy. MAGESTIC will be broadly useful to uncover the genetic basis of phenotypes in yeast.
Zhang, Kui; Wiener, Howard; Beasley, Mark; George, Varghese; Amos, Christopher I; Allison, David B
2006-08-01
Individual genome scans for quantitative trait loci (QTL) mapping often suffer from low statistical power and imprecise estimates of QTL location and effect. This lack of precision yields large confidence intervals for QTL location, which are problematic for subsequent fine mapping and positional cloning. In prioritizing areas for follow-up after an initial genome scan and in evaluating the credibility of apparent linkage signals, investigators typically examine the results of other genome scans of the same phenotype and informally update their beliefs about which linkage signals in their scan most merit confidence and follow-up via a subjective, intuitive integration approach. A method that acknowledges the wisdom of this general paradigm but formally borrows information from other scans to increase objectivity would be beneficial. We developed an empirical Bayes analytic method to integrate information from multiple genome scans. The linkage statistic obtained from a single genome scan study is updated by incorporating statistics from other genome scans as prior information. This technique does not require that all studies have an identical marker map or a common estimated QTL effect. The updated linkage statistic can then be used for the estimation of QTL location and effect. We evaluate the performance of our method using extensive simulations based on actual marker spacing and allele frequencies from available data. Results indicate that the empirical Bayes method can account for between-study heterogeneity, estimate the QTL location and effect more precisely, and provide narrower confidence intervals than results from any single individual study. We also compared the empirical Bayes method with a method originally developed for meta-analysis (a closely related but distinct purpose). In the face of marked heterogeneity among studies, the empirical Bayes method outperforms the comparator.
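The core idea of borrowing strength across scans can be illustrated with a simple inverse-variance weighted update. This is only an illustrative sketch of an empirical-Bayes-style combination, not the authors' published model; the function name and the toy standard errors are assumptions.

```python
def eb_update(stat_local, se_local, stats_prior, ses_prior):
    """Precision-weighted update of a linkage statistic at one locus:
    combine the local scan's statistic with statistics from other
    scans at (approximately) the same position, weighting each study
    by its inverse variance. Returns the updated statistic and its
    standard error."""
    stats = [stat_local] + list(stats_prior)
    ses = [se_local] + list(ses_prior)
    weights = [1.0 / s ** 2 for s in ses]  # inverse-variance weights
    wsum = sum(weights)
    post_mean = sum(w * z for w, z in zip(weights, stats)) / wsum
    post_se = (1.0 / wsum) ** 0.5  # shrinks as more scans are added
    return post_mean, post_se
```

Note that each additional concordant scan narrows the standard error, which is the mechanism by which the updated statistic supports tighter confidence intervals for QTL location.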
A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.
Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa
2016-05-17
Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are correct), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then extrapolated to the entire set of record pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to the actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using Fleiss' kappa statistic was 0.601). This method presents a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large linkage projects where the number of record pairs produced may be very large, often running into the millions.
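The extrapolation step described above can be sketched directly: review a sample of pairs in each threshold bin, scale the sampled true-match fraction up to the bin's full pair count, and compute precision and recall against the acceptance cut-off. This is a minimal sketch under assumed inputs; the bin representation and function name are illustrative, not from the paper.

```python
def estimate_quality(bins, cutoff):
    """bins: list of (threshold, n_pairs, n_sampled, n_true_in_sample)
    tuples, one per threshold bin, including bins below the cut-off.
    Pairs with threshold >= cutoff are accepted links. The clerically
    reviewed sample in each bin is extrapolated to estimate true-match
    counts, from which precision and recall are computed."""
    tp = fp = fn = 0.0
    for threshold, n_pairs, n_sampled, n_true in bins:
        frac_true = n_true / n_sampled if n_sampled else 0.0
        est_true = frac_true * n_pairs  # extrapolate sample to bin
        if threshold >= cutoff:
            tp += est_true
            fp += n_pairs - est_true
        else:
            fn += est_true  # true matches missed below the cut-off
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Sampling below the cut-off is what makes the recall estimate possible, since false negatives are otherwise invisible to the linkage system.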
NASA Astrophysics Data System (ADS)
Schoene, B.; Samperton, K. M.; Crowley, J. L.; Cottle, J. M.
2012-12-01
It is increasingly common that hand samples of plutonic and volcanic rocks contain zircon with dates that span from zero to >100 ka. This recognition comes from the increased application of U-series geochronology to young volcanic rocks and the increased precision, to better than 0.1%, on single zircons by the U-Pb ID-TIMS method. It has thus become more difficult to interpret such complicated datasets in terms of ash-bed eruption or magma emplacement, which are critical constraints for geochronologic applications ranging from biotic evolution and the stratigraphic record to magmatic and metamorphic processes in orogenic belts. It is important, therefore, to develop methods that aid in interpreting which minerals, if any, date the targeted process. One promising tactic is to better integrate accessory mineral geochemistry with high-precision ID-TIMS U-Pb geochronology. These dual constraints can 1) identify cogenetic populations of minerals, and 2) record magmatic or metamorphic fluid evolution through time. Goal (1) has been widely sought with in situ geochronology and geochemical analysis but is limited by low-precision dates. Recent work has attempted to bridge this gap by retrieving the typically discarded elution from the ion exchange chemistry that precedes ID-TIMS U-Pb geochronology and analyzing it by ICP-MS (U-Pb TIMS-TEA). The result integrates geochemistry and high-precision geochronology from the exact same volume of material. The limitation of this method is its relatively coarse spatial resolution compared to in situ techniques; it thus averages potentially complicated trace element profiles through single minerals or mineral fragments. In continued work, we test the effect of this on zircon by beginning with CL imaging to reveal internal zonation and growth histories. This is followed by in situ LA-ICPMS trace element transects of the imaged grains to reveal internal geochemical zonation.
The same grains are then removed from the grain mount, fragmented, and analyzed by U-Pb TIMS-TEA. In situ trace element transects are used to model predicted TIMS-TEA trace element concentrations to test whether complicated trace element profiles undermine U-Pb TIMS-TEA data. We find good agreement between predicted and measured TIMS-TEA data, and can argue that the measured ID-TIMS U-Pb date corresponds to the time at which the geochemical signature measured by TIMS-TEA was acquired. Thus, in a hypothetical magma that is differentiating through AFC processes on timescales resolvable by geochronology, U-Pb TIMS-TEA should usually be a robust indicator of magma evolution through time. We present data from two ca. 40-30 Ma alpine intrusions from northern Italy: the southern Adamello batholith and the Bergell intrusion. The relatively young age of these intrusions permits uncertainties on individual zircons or zircon fragments as good as 10 ka, while zircon populations from individual hand samples often record zircon growth over >200 ka. Using the methodologies described above, we explore whether these zircons record in situ magmatic differentiation or the introduction of antecrystic zircon to magma batches, and integrate these data to gain a better understanding of magma storage, differentiation and emplacement as a function of pressure, temperature, and time. These methods are a promising step towards interpreting complicated geochronologic data in ash-bed samples as well, through a better understanding of the magmatic processes that precede eruption.
Mathematical model of bone drilling for virtual surgery system
NASA Astrophysics Data System (ADS)
Alaytsev, Innokentiy K.; Danilova, Tatyana V.; Manturov, Alexey O.; Mareev, Gleb O.; Mareev, Oleg V.
2018-04-01
Bone drilling is an essential part of surgeries in ENT and dentistry. Proper training of drilling-machine handling skills is impossible without proper modelling of the drilling process. The use of high-precision methods such as FEM is limited by the requirement of a 1000 Hz update rate for haptic feedback. This study presents a mathematical model of the drilling process that accounts for the properties of the materials and the geometry and rotation rate of the burr to compute the removed material volume. The simplicity of the model allows it to be integrated into the high-frequency haptic thread. The precision of the model is sufficient for a virtual surgery system targeted at training basic surgery skills.
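A removal model compatible with a 1 kHz haptic loop can be sketched as follows. The spherical-cap volume formula is exact geometry; the scaling by rotation rate and a per-material coefficient is an assumed first-order model for illustration, not the authors' published equations.

```python
import math

def removed_volume(depth, burr_radius, rpm, dt, material_rate):
    """Material volume removed in one haptic timestep (dt ~ 1 ms):
    the volume of the spherical cap of the burr immersed in bone,
    scaled by rotation rate and a per-material removal coefficient
    (higher for softer bone). Units are left abstract here."""
    d = max(0.0, min(depth, burr_radius))  # clamp immersion depth
    cap = math.pi * d * d * (3.0 * burr_radius - d) / 3.0  # spherical cap
    return cap * material_rate * rpm * dt
```

Because the update is a handful of arithmetic operations per contact point, it fits comfortably inside the 1 ms budget of the haptic thread, unlike an FEM solve.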
Common-path low-coherence interferometry fiber-optic sensor guided microincision
Zhang, Kang; Kang, Jin U.
2011-01-01
We propose and demonstrate a common-path low-coherence interferometry (CP-LCI) fiber-optic sensor guided precise microincision. The method tracks the target surface and compensates for the tool-to-surface relative motion with better than ±5 μm resolution using a precision micromotor connected to the tool tip. A single-fiber distance probe integrated into a microdissector was used to perform an accurate 100 μm incision into the surface of an Intralipid phantom. The quality of the CP-LCI guided incision, in terms of depth, was evaluated afterwards using three-dimensional Fourier-domain optical coherence tomography imaging, which showed significant improvement of incision accuracy compared to free-hand-only operations. PMID:21950912
2015-12-01
Over the past two decades the United States has revolutionized the war fighting industry with advancements in Low Observable (LO) technology and...precision strike capability. Advanced LO weapon systems such as the F-35, F-22 and the B-2 have been developed and portrayed as stealth aircraft. In...Because LO technology is in a physical form, methods can and have been developed to exploit the inherent weaknesses. Additionally, the technological
NASA Astrophysics Data System (ADS)
Hou, Ligang; Luo, Rengui; Wu, Wuchen
2006-11-01
This paper presents a low-power grating detection chip (EYAS) for precision length and angle measurement. Traditional grating detection methods, such as resistor-chain divider or phase-locked divider circuits, are difficult to design and tune, and the need for an additional CPU for control and display makes them more complex and costly to implement. Traditional methods also suffer from low sampling speed because of the complex divider circuitry and CPU software compensation. EYAS is an application-specific integrated circuit (ASIC). It integrates a microcontroller unit (MCU), a power management unit (PMU), an LCD controller, a keyboard interface, a grating detection unit and other peripherals. Working at 10 MHz, EYAS provides a 5 MHz internal sampling rate and can handle a 1.25 MHz orthogonal signal from the grating sensor. Through a simple keyboard-based control interface, the sensor parameters, data processing and system working mode can be configured. Two LCD controllers, which form the output interface, can drive either dot-array or segment-bit LCDs. The PMU switches the system between working and standby modes by clock gating to save power. EYAS consumes 0.9 mW in test mode (where system actions are more frequent than in real-world use) and 0.2 mW in real-world use. EYAS thus achieves the complete grating detection system function, including high-speed orthogonal signal handling, in a single chip with very low power consumption.
A high precision dual feedback discrete control system designed for satellite trajectory simulator
NASA Astrophysics Data System (ADS)
Liu, Ximin; Liu, Liren; Sun, Jianfeng; Xu, Nan
2005-08-01
Cooperating with free-space laser communication terminals, the satellite trajectory simulator is used to test the acquisition, pointing, tracking and communication performance of the terminals, so it plays an important role in terminal ground testing and verification. Using a double prism, Sun et al. in our group designed a satellite trajectory simulator. In this paper, a high-precision dual-feedback discrete control system designed for the simulator is presented, and a digital simulation of the simulator is carried out correspondingly. In the dual-feedback discrete control system, a proportional-integral (PI) controller is used in the velocity feedback loop and a proportional-integral-derivative (PID) controller is used in the position feedback loop. In the controller design, the simplex method is introduced and an improvement to the method is made. Based on the transfer function of the control system in the Z domain, the digital simulation of the simulator is performed under mechanism error and moment disturbance. Typically, when the mechanism error is 100 μrad, the residual standard errors of the pitching angle, azimuth angle, x-coordinate position and y-coordinate position are 0.49 μrad, 6.12 μrad, 4.56 μrad and 4.09 μrad, respectively. When the moment disturbance is 0.1 rad, the residual standard errors of the pitching angle, azimuth angle, x-coordinate position and y-coordinate position are 0.26 μrad, 0.22 μrad, 0.16 μrad and 0.15 μrad, respectively. The simulation results demonstrate that the dual-feedback discrete control system designed for the simulator can achieve the anticipated high-precision performance.
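The PI/PID loop structure described above can be sketched as a standard position-form discrete controller. This is a generic textbook sketch, not the simulator's actual controller or tuning; the class name, gains, and sample time are illustrative.

```python
class DiscretePID:
    """Minimal position-form discrete PID. Set kd = 0 to obtain the
    PI controller used in a velocity loop; the full PID form serves
    a position loop. dt is the fixed sample period in seconds."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        """Advance one sample: accumulate the integral, difference
        the error for the derivative term, and return the command."""
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In a dual-feedback arrangement, the position loop's PID output would serve as the setpoint for the inner PI velocity loop, each stepped at its own sample rate.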
NASA Astrophysics Data System (ADS)
Whiles, Matt R.; Brock, Brent L.; Franzen, Annette C.; Dinsmore, Steven C., II
2000-11-01
We used invertebrate bioassessment, habitat analysis, geographic information system analysis of land use, and water chemistry monitoring to evaluate tributaries of a degraded northeast Nebraska, USA, reservoir. Bimonthly invertebrate collections and monthly water chemistry samples were collected for two years on six stream reaches to identify sources contributing to reservoir degradation and test suitability of standard rapid bioassessment methods in this region. A composite biotic index composed of seven commonly used metrics was effective for distinguishing between differentially impacted sites and responded to a variety of disturbances. Individual metrics varied greatly in precision and ability to discriminate between relatively impacted and unimpacted stream reaches. A modified Hilsenhoff index showed the highest precision (reference site CV = 0.08) but was least effective at discriminating among sites. Percent dominance and the EPT (number of Ephemeroptera, Plecoptera, and Trichoptera taxa) metrics were most effective at discriminating between sites and exhibited intermediate precision. A trend of higher biotic integrity during summer was evident, indicating seasonal corrections should differ from other regions. Poor correlations were evident between water chemistry variables and bioassessment results. However, land-use factors, particularly within 18-m riparian zones, were correlated with bioassessment scores. For example, there was a strong negative correlation between percentage of rangeland in 18-m riparian zones and percentage of dominance in streams (r² = 0.90, P < 0.01). Results demonstrate that standard rapid bioassessment methods, with some modifications, are effective for use in this agricultural region of the Great Plains and that riparian land use may be the best predictor of stream biotic integrity.
Herbort, Carl P; Tugal-Tutkun, Ilknur; Neri, Piergiorgio; Pavésio, Carlos; Onal, Sumru; LeHoang, Phuc
2017-05-01
Uveitis is one of the fields in ophthalmology where a tremendous evolution took place in the past 25 years. Not only did we gain access to more efficient, more targeted, and better tolerated therapies, but in parallel precise and quantitative measurement methods were developed, allowing the clinician to evaluate these therapies and adjust therapeutic intervention with a high degree of precision. Objective and quantitative measurement of the global level of intraocular inflammation became possible for most inflammatory diseases with direct or spill-over anterior chamber inflammation, thanks to laser flare photometry. The amount of retinal inflammation could be quantified by using fluorescein angiography to score retinal angiographic signs. Indocyanine green angiography gave imaging insight into the hitherto inaccessible choroidal compartment, rendering possible the quantification of choroiditis by scoring indocyanine green angiographic signs. Optical coherence tomography has enabled measurement and objective monitoring of retinal and choroidal thickness. This multimodal quantitative appraisal of intraocular inflammation provides exquisite security in monitoring uveitis. What is enigmatic, however, is the slow pace with which these improvements are being integrated in some areas. What is even more difficult to understand is the fact that clinical trials to assess new therapeutic agents still mostly rely on subjective parameters, such as clinical evaluation of vitreous haze, as a main endpoint, whereas a whole array of precise, quantitative, and objective modalities is available for the design of clinical studies. The scope of this work was to review the quantitative investigations that improved the management of uveitis in the past 2-3 decades.
Compact self-aligning assemblies with refractive microlens arrays made by contactless embossing
NASA Astrophysics Data System (ADS)
Schulze, Jens; Ehrfeld, Wolfgang; Mueller, Holger; Picard, Antoni
1998-04-01
The hybrid integration of microlenses and arrays of microlenses in micro-optical systems is simplified using contactless embossing of microlenses (CEM) in combination with LIGA microfabrication. CEM is a new fabrication technique for the production of precise refractive microlens arrays. A high-precision matrix of holes made by the LIGA technique is used as a compression molding tool to form the microlenses. The tool is pressed onto a thermoplastic sample that is heated close to the glass transition temperature of the material. The material bulges into the openings of the molding tool due to the applied pressure and forms lens-like spherical structures. The name refers to the fact that the surface of the microlens does not come into contact with the compression molding tool during the shaping process, so the optical quality of the surface is maintained. Microlenses and arrays of microlenses with lens diameters from 30 micrometers up to 700 micrometers and numerical aperture values of up to 0.25 have been fabricated in different materials. Cost-effectiveness in the production process, excellent optical performance, and easy replication are the main advantages of this technique. The most promising feature of this method is the possibility of obtaining self-aligned assemblies that can then be further integrated into a micro-optical bench setup. The CEM fabrication method in combination with LIGA microfabrication considerably enhances hybrid integration in micro-optical devices, which results in a more cost-effective production of compact micro-opto-electro-mechanical systems.
Highly Controlled Codeposition Rate of Organolead Halide Perovskite by Laser Evaporation Method.
Miyadera, Tetsuhiko; Sugita, Takeshi; Tampo, Hitoshi; Matsubara, Koji; Chikamatsu, Masayuki
2016-10-05
Organolead halide perovskites are promising materials for next-generation solar cells because of their high power conversion efficiency. Precise fabrication methods are required because both solution-process and vacuum-process fabrication of the perovskite suffer from problems of controllability and reproducibility. The vacuum deposition process was expected to achieve precise control; however, vaporization of the amine compound significantly degrades the controllability of the deposition rate. Here we reduced this vaporization by implementing a laser evaporation system for the codeposition of the perovskite. Continuous-wave lasers irradiated locally onto the source materials reduced the vaporization of CH3NH3I. The deposition rate was stabilized for several hours by adjusting the duty ratio of the modulated laser based on proportional-integral control. Organic-photovoltaic-type perovskite solar cells were fabricated by codeposition of PbI2 and CH3NH3I. A power-conversion efficiency of 16.0% with reduced hysteresis was achieved.
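The proportional-integral regulation of the laser duty ratio described above can be sketched as follows. The first-order source model, the gains, and the setpoint are illustrative assumptions, not values from the paper; a real system would feed back measured rates from a deposition monitor.

```python
# Sketch of proportional-integral (PI) control of a laser duty ratio to hold a
# target deposition rate. The toy plant and gains below are assumptions.

class DepositionSource:
    """Toy first-order response of deposition rate to laser duty ratio."""
    def __init__(self, gain=2.0, lag=0.8):
        self.rate, self.gain, self.lag = 0.0, gain, lag

    def step(self, duty):
        self.rate = self.lag * self.rate + (1.0 - self.lag) * self.gain * duty
        return self.rate

def regulate(setpoint, steps=400, kp=0.2, ki=0.05):
    """Drive the source's deposition rate to `setpoint` with a PI loop."""
    source, integral = DepositionSource(), 0.0
    for _ in range(steps):
        error = setpoint - source.rate
        integral += error
        duty = min(1.0, max(0.0, kp * error + ki * integral))  # clamp to [0, 1]
        source.step(duty)
    return source.rate

print(regulate(0.6))  # settles near the 0.6 setpoint
```

The integral term removes the steady-state error that a purely proportional controller would leave, which is why it suits holding a deposition rate constant over hours.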
NASA Astrophysics Data System (ADS)
Asfour, Jean-Michel; Weidner, Frank; Bodendorf, Christof; Bode, Andreas; Poleshchuk, Alexander G.; Nasyrov, Ruslan K.; Grupp, Frank; Bender, Ralf
2017-09-01
We present a method for the precise alignment of lens elements using a specific computer-generated hologram (CGH) with an integrated Fizeau reference flat surface and a Fizeau interferometer. The method is used for aligning the so-called Camera Lens Assembly for ESA's Euclid telescope. Each lens has a corresponding annular area on the diffractive optics, which is used to control the position of that lens; the lenses are subsequently positioned using the individual annular rings of the CGH. The overall alignment accuracy is below 1 µm, and the alignment sensitivity is in the range of 0.1 µm. The achieved alignment accuracy of the lenses relative to each other depends mainly on the temporal stability of the alignment tower. Error budgets for the use of computer-generated holograms and the physical limitations are explained. Calibration measurements of the alignment system and the typically reached alignment accuracies are shown and discussed.
NASA Astrophysics Data System (ADS)
Hu, Mengsu; Wang, Yuan; Rutqvist, Jonny
2015-06-01
One major challenge in modeling groundwater flow within heterogeneous geological media is that of modeling arbitrarily oriented or intersected boundaries and inner material interfaces. The Numerical Manifold Method (NMM) has recently emerged as a promising method for such modeling, owing to its ability to handle boundaries, its flexibility in constructing physical cover functions (continuous or with gradient jump), its meshing efficiency with a fixed mathematical mesh (covers), its convenience for enhancing approximation precision, and its integration precision, achieved by simplex integration. In this paper, we report on developing and comparing two new approaches for boundary constraints using the NMM, namely a continuous approach with jump functions and a discontinuous approach with Lagrange multipliers. In the discontinuous Lagrange multiplier method (LMM), the material interfaces are regarded as discontinuities which divide mathematical covers into different physical covers. We define and derive stringent forms of Lagrange multipliers to link the divided physical covers, thus satisfying the continuity requirement of the refraction law. In the continuous Jump Function Method (JFM), the material interfaces are regarded as inner interfaces contained within physical covers. We define jump terms to represent the discontinuity of the head gradient across an interface to satisfy the refraction law. We then make a theoretical comparison between the two approaches in terms of global degrees of freedom, treatment of multiple material interfaces, treatment of small areas, treatment of moving interfaces, the feasibility of coupling with mechanical analysis, and applicability to other numerical methods. The newly derived boundary-constraint approaches are coded into a NMM model for groundwater flow analysis, and tested for precision and efficiency on different simulation examples.
We first test the LMM for a Dirichlet boundary and then test both LMM and JFM for an idealized heterogeneous model, comparing the numerical results with analytical solutions. Then we test both approaches for a heterogeneous model and compare the results of hydraulic head and specific discharge. We show that both approaches are suitable for modeling material boundaries, considering high accuracy for the boundary constraints, the capability to deal with arbitrarily oriented or complexly intersected boundaries, and their efficiency using a fixed mathematical mesh.
Fluid transport properties by equilibrium molecular dynamics. I. Methodology at extreme fluid states
NASA Astrophysics Data System (ADS)
Dysthe, D. K.; Fuchs, A. H.; Rousseau, B.
1999-02-01
The Green-Kubo formalism for evaluating transport coefficients by molecular dynamics has been applied to flexible, multicenter models of linear and branched alkanes in the gas phase and in the liquid phase from ambient conditions to close to the triple point. The effects of integration time step, potential cutoff, and system size have been studied and shown to be small compared to the computational precision, except for diffusion in gaseous n-butane. The RATTLE algorithm is shown to give accurate transport coefficients for time steps up to a limit of 8 fs. The different relaxation mechanisms in the fluids have been studied, and it is shown that the longest relaxation time of the system governs the statistical precision of the results. By measuring the longest relaxation time of a system one can obtain a reliable error estimate from a single trajectory. The accuracy of the Green-Kubo method is shown to be as good as the precision for all states and models used in this study, even when the system relaxation time becomes very long. The efficiency of the method is shown to be comparable to nonequilibrium methods. The transport coefficients for two recently proposed potential models are presented, showing deviations from experiment of 0%-66%.
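The Green-Kubo route to a transport coefficient integrates an equilibrium time-correlation function over time. A minimal sketch for the self-diffusion coefficient, D = (1/3) ∫ ⟨v(0)·v(t)⟩ dt, is shown below; the synthetic exponentially decaying velocity autocorrelation function is my assumption (in practice the correlation function comes from the MD trajectory):

```python
import math

def green_kubo_diffusion(vacf, dt):
    """Self-diffusion coefficient D = (1/3) * integral of <v(0).v(t)> dt,
    with the sampled correlation function integrated by the trapezoidal rule."""
    integral = 0.5 * (vacf[0] + vacf[-1]) + sum(vacf[1:-1])
    return integral * dt / 3.0

# Synthetic VACF C(t) = C0 * exp(-t / tau): the exact integral is C0 * tau,
# so D should approach C0 * tau / 3.
dt, tau, c0 = 0.01, 1.0, 1.0
vacf = [c0 * math.exp(-i * dt / tau) for i in range(2001)]  # out to t = 20*tau
print(green_kubo_diffusion(vacf, dt))  # close to 1/3
```

The same structure applies to viscosity or thermal conductivity by swapping in the appropriate flux autocorrelation function and prefactor.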
Numerical methods of solving a system of multi-dimensional nonlinear equations of the diffusion type
NASA Technical Reports Server (NTRS)
Agapov, A. V.; Kolosov, B. I.
1979-01-01
The principles of conservation and stability of difference schemes achieved using the iteration control method were examined. For the resulting predictor-corrector schemes, convergence of the controlled sequences of approximate solutions to the exact solutions was proved in Sobolev metrics. Algorithms were developed for reducing the differential problem to integral relationships whose solution methods are known. The algorithms for solving the problem are classified according to the nonlinearity of the diffusion coefficients, and practical recommendations for their effective use are given.
Cobalt: Development and Maturation of GN&C Technologies for Precision Landing
NASA Technical Reports Server (NTRS)
Carson, John M.; Restrepo, Carolina; Seubert, Carl; Amzajerdian, Farzin
2016-01-01
The CoOperative Blending of Autonomous Landing Technologies (COBALT) instrument is a terrestrial test platform for development and maturation of guidance, navigation and control (GN&C) technologies for precision landing. The project is developing a third-generation Langley Research Center (LaRC) Navigation Doppler Lidar (NDL) for ultra-precise velocity and range measurements, which will be integrated and tested with the Jet Propulsion Laboratory (JPL) Lander Vision System (LVS) for terrain relative navigation (TRN) position estimates. These technologies together provide the precise navigation knowledge that is critical for a controlled and precise touchdown. The COBALT hardware will be integrated in 2017 into the GN&C subsystem of the Xodiac rocket-propulsive vertical test bed (VTB) developed by Masten Space Systems, and two terrestrial flight campaigns will be conducted: one open-loop (i.e., passive) and one closed-loop (i.e., active).
Deep data fusion method for missile-borne inertial/celestial system
NASA Astrophysics Data System (ADS)
Zhang, Chunxi; Chen, Xiaofei; Lu, Jiazhen; Zhang, Hao
2018-05-01
A strap-down inertial-celestial integrated navigation system has the advantages of autonomy and high precision and is very useful for ballistic missiles. The star sensor installation error and the inertial measurement error have a great influence on system performance. Based on deep data fusion, this paper establishes measurement equations that include the star sensor installation error and proposes a deep fusion filter method. Simulations including misalignment errors, star sensor installation errors, and IMU errors are analyzed. Simulation results indicate that the deep fusion method can estimate the star sensor installation error and the IMU error. Meanwhile, the method can restrain the misalignment errors caused by instrument errors.
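Estimating a constant sensor bias (such as a star sensor installation error) by treating it as part of the filter state is the core idea behind such fusion filters. A scalar Kalman-filter sketch is given below; the noise level, measurement count, and constant-bias model are illustrative assumptions, not the paper's full filter.

```python
import random

def estimate_bias(true_bias=0.5, n=300, r=0.01):
    """Scalar Kalman filter for a constant bias observed in noise:
    state x = bias, model x_k = x_{k-1}, measurement z_k = x + v, v ~ N(0, r)."""
    random.seed(1)            # fixed seed so the sketch is reproducible
    x, p = 0.0, 1.0           # initial estimate and its variance
    for _ in range(n):
        z = true_bias + random.gauss(0.0, r ** 0.5)
        k = p / (p + r)       # Kalman gain (no process noise for a constant)
        x += k * (z - x)
        p *= (1.0 - k)
    return x

print(estimate_bias())  # converges near the true bias 0.5
```

The same mechanism generalizes to a vector state that stacks attitude misalignment, installation angles, and IMU biases, which is how an integrated filter can separate the error sources.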
NASA Technical Reports Server (NTRS)
Mcneill, Walter, E.; Chung, William W.; Stortz, Michael W.
1995-01-01
A piloted motion simulator evaluation, using the NASA Ames Vertical Motion Simulator, was conducted in support of a NASA Lewis contractual study of the integration of flight and propulsion systems of a STOVL aircraft. Objectives of the study were to validate the Design Methods for Integrated Control Systems (DMICS) concept, to evaluate the handling qualities, and to assess control power usage. The E-7D ejector-augmentor STOVL fighter design served as the basis for the simulation. Handling-qualities ratings were obtained during precision hover and shipboard landing tasks and ranged from satisfactory to adequate. Further improvement of the design process to fully validate the DMICS concept appears to be warranted.
A two-step, fourth-order method with energy preserving properties
NASA Astrophysics Data System (ADS)
Brugnano, Luigi; Iavernaro, Felice; Trigiante, Donato
2012-09-01
We introduce a family of fourth-order two-step methods that preserve the energy function of canonical polynomial Hamiltonian systems. As is the case with linear multistep and one-leg methods, a prerogative of the new formulae is that the associated nonlinear systems to be solved at each step of the integration procedure have the very same dimension as the underlying continuous problem. The key tools in the new methods are the line integral associated with a conservative vector field (such as the one defined by a Hamiltonian dynamical system) and its discretization obtained with the aid of a quadrature formula. Energy conservation is equivalent to the requirement that the quadrature is exact, which turns out to be always the case when the Hamiltonian function is a polynomial and the degree of precision of the quadrature formula is high enough. The non-polynomial case is also discussed, and a number of test problems are finally presented in order to compare the behavior of the new methods to the theoretical results.
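The energy-conservation argument sketched above can be written out explicitly; the notation below is assumed, following standard presentations of line-integral (Hamiltonian boundary value) methods rather than taken verbatim from the paper:

```latex
% For a Hamiltonian system y' = J\nabla H(y) and a path \sigma(c) with
% \sigma(0) = y_0, \sigma(1) = y_1, the energy change along one step is
\[
H(y_1) - H(y_0)
  = \int_0^1 \nabla H(\sigma(c))^{\top}\,\sigma'(c)\,\mathrm{d}c
  \approx \sum_{i=1}^{k} b_i\,\nabla H(\sigma(c_i))^{\top}\,\sigma'(c_i).
\]
% Energy is conserved exactly whenever the quadrature (b_i, c_i) is exact for
% the integrand, which holds when H is a polynomial and the degree of
% precision of the quadrature formula is high enough.
```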
Position measurement of the direct drive motor of Large Aperture Telescope
NASA Astrophysics Data System (ADS)
Li, Ying; Wang, Daxing
2010-07-01
With the development of space science and astronomy, large and very large aperture telescopes will become the trend. Direct drive technology, with a unified electromagnetic and structural design, is one method of achieving the precise drive of a large aperture telescope. A direct-drive precision rotary table with a diameter of 2.5 meters, researched and produced by us, is a typical mechanical and electrical integration design. This paper mainly introduces the position measurement control system of the direct drive motor. In the design of this motor, the position measurement control system requires high resolution and must precisely align and measure the position of the rotor shaft, while converting the position information into commutation information corresponding to the required number of motor poles. This system uses a high-precision metal-band encoder and an absolute encoder; their outputs are processed in software by a 32-bit RISC CPU, yielding a high-resolution composite encoder. The paper gives relevant laboratory test results at the end, indicating that the position measurement can be applied to a large aperture telescope control system. This project is subsidized by the Chinese National Natural Science Funds (10833004).
NASA Astrophysics Data System (ADS)
Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen
2018-05-01
To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on a fuzzy neural network of regression (FR), called DCFRM, is proposed by integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design idea behind DCFRM is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure, and strain failure) was investigated by considering fluid-structure interaction with the proposed method. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode of the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. The comparison of methods shows that the DCFRM makes probabilistic analysis of multi-failure structures feasible and improves the computing efficiency while keeping acceptable computational precision. Moreover, the proposed method offers useful insight for reliability-based design optimization of multi-failure structures and thereby also enriches the theory and methods of mechanical reliability design.
Network-induced chaos in integrate-and-fire neuronal ensembles.
Zhou, Douglas; Rangan, Aaditya V; Sun, Yi; Cai, David
2009-09-01
It has been shown that a single standard linear integrate-and-fire (IF) neuron under a general time-dependent stimulus cannot possess chaotic dynamics despite the firing-reset discontinuity. Here we address the issue of whether conductance-based, pulse-coupled network interactions can induce chaos in an IF neuronal ensemble. Using numerical methods, we demonstrate that all-to-all, homogeneously pulse-coupled IF neuronal networks can indeed give rise to chaotic dynamics under an external periodic current drive. We also provide a precise characterization of the largest Lyapunov exponent for these high-dimensional nonsmooth dynamical systems. In addition, we present a stable and accurate numerical algorithm for evaluating the largest Lyapunov exponent, which can overcome difficulties encountered by traditional methods for these nonsmooth dynamical systems with degeneracy induced by, e.g., refractoriness of neurons.
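The paper develops a specialized algorithm for high-dimensional nonsmooth systems; the underlying idea of the largest Lyapunov exponent as an average local expansion rate can be illustrated on a one-dimensional map. This toy example is mine, not the paper's method:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.1234, n=100000, transient=1000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| with f'(x) = r*(1-2x)."""
    x = x0
    for _ in range(transient):          # discard the initial transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

print(lyapunov_logistic())  # close to ln 2 ~ 0.693, the known value for r = 4
```

A positive value certifies exponential divergence of nearby trajectories, i.e., chaos; for networks, the scalar derivative is replaced by the Jacobian along the trajectory with repeated renormalization of a tangent vector.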
Numerical Algorithms for Precise and Efficient Orbit Propagation and Positioning
NASA Astrophysics Data System (ADS)
Bradley, Ben K.
Motivated by the growing space catalog and the demands for precise orbit determination with shorter latency for science and reconnaissance missions, this research improves the computational performance of orbit propagation through more efficient and precise numerical integration and frame transformation implementations. Propagation of satellite orbits is required for astrodynamics applications including mission design, orbit determination in support of operations and payload data analysis, and conjunction assessment. Each of these applications has somewhat different requirements in terms of accuracy, precision, latency, and computational load. This dissertation develops procedures to achieve various levels of accuracy while minimizing computational cost for diverse orbit determination applications. This is done by addressing two aspects of orbit determination: (1) numerical integration used for orbit propagation and (2) precise frame transformations necessary for force model evaluation and station coordinate rotations. This dissertation describes a recently developed method for numerical integration, dubbed Bandlimited Collocation Implicit Runge-Kutta (BLC-IRK), and compare its efficiency in propagating orbits to existing techniques commonly used in astrodynamics. The BLC-IRK scheme uses generalized Gaussian quadratures for bandlimited functions. It requires significantly fewer force function evaluations than explicit Runge-Kutta schemes and approaches the efficiency of the 8th-order Gauss-Jackson multistep method. Converting between the Geocentric Celestial Reference System (GCRS) and International Terrestrial Reference System (ITRS) is necessary for many applications in astrodynamics, such as orbit propagation, orbit determination, and analyzing geoscience data from satellite missions. 
This dissertation provides simplifications to the Celestial Intermediate Origin (CIO) transformation scheme and Earth orientation parameter (EOP) storage for use in positioning and orbit propagation, yielding savings in computation time and memory. Orbit propagation and position transformation simulations are analyzed to generate a complete set of recommendations for performing the ITRS/GCRS transformation for a wide range of needs, encompassing real-time on-board satellite operations and precise post-processing applications. In addition, a complete derivation of the ITRS/GCRS frame transformation time-derivative is detailed for use in velocity transformations between the GCRS and ITRS and is applied to orbit propagation in the rotating ITRS. EOP interpolation methods and ocean tide corrections are shown to impact the ITRS/GCRS transformation accuracy at the level of 5 cm and 20 cm on the surface of the Earth and at the Global Positioning System (GPS) altitude, respectively. The precession-nutation and EOP simplifications yield maximum propagation errors of approximately 2 cm and 1 m after 15 minutes and 6 hours in low-Earth orbit (LEO), respectively, while reducing computation time and memory usage. Finally, for orbit propagation in the ITRS, a simplified scheme is demonstrated that yields propagation errors under 5 cm after 15 minutes in LEO. This approach is beneficial for orbit determination based on GPS measurements. We conclude with a summary of recommendations on EOP usage and bias-precession-nutation implementations for achieving a wide range of transformation and propagation accuracies at several altitudes. This comprehensive set of recommendations allows satellite operators, astrodynamicists, and scientists to make informed decisions when choosing the best implementation for their application, balancing accuracy and computational complexity.
Moon, Myungjin; Nakai, Kenta
2018-04-01
Currently, cancer biomarker discovery is one of the important research topics worldwide. In particular, detecting significant genes related to cancer is an important task for early diagnosis and treatment of cancer. Conventional studies mostly focus on genes that are differentially expressed in different states of cancer; however, noise in gene expression datasets and insufficient information in limited datasets impede precise analysis of novel candidate biomarkers. In this study, we propose an integrative analysis of gene expression and DNA methylation using normalization and unsupervised feature extractions to identify candidate biomarkers of cancer using renal cell carcinoma RNA-seq datasets. Gene expression and DNA methylation datasets are normalized by Box-Cox transformation and integrated into a one-dimensional dataset that retains the major characteristics of the original datasets by unsupervised feature extraction methods, and differentially expressed genes are selected from the integrated dataset. Use of the integrated dataset demonstrated improved performance as compared with conventional approaches that utilize gene expression or DNA methylation datasets alone. Validation based on the literature showed that a considerable number of top-ranked genes from the integrated dataset have known relationships with cancer, implying that novel candidate biomarkers can also be acquired from the proposed analysis method. Furthermore, we expect that the proposed method can be expanded for applications involving various types of multi-omics datasets.
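A minimal sketch of the normalize-then-integrate idea: Box-Cox transform each data type, standardize it, and combine per-gene values into one integrated score. The combination by simple averaging and the fixed lambda are my assumptions; the paper integrates via unsupervised feature extraction.

```python
import math

def box_cox(values, lam):
    """Box-Cox transform: (x^lam - 1)/lam for lam != 0, log(x) for lam == 0."""
    if lam == 0:
        return [math.log(x) for x in values]
    return [(x ** lam - 1.0) / lam for x in values]

def z_score(values):
    """Standardize to zero mean and unit (population) standard deviation."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / sd for v in values]

def integrate_omics(expression, methylation, lam=0.5):
    """Combine two per-gene measurements into one integrated score by
    averaging the normalized values (an illustrative choice)."""
    e = z_score(box_cox(expression, lam))
    m = z_score(box_cox(methylation, lam))
    return [(a + b) / 2.0 for a, b in zip(e, m)]

expr = [1.0, 4.0, 9.0, 16.0]   # toy per-gene expression values
meth = [2.0, 3.0, 5.0, 8.0]    # toy per-gene methylation values
print(integrate_omics(expr, meth))
```

Differential analysis then runs once on the integrated scores instead of separately on each omics layer, which is where the noise-reduction benefit comes from.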
Bonding thermoplastic polymers
Wallow, Thomas I [Fremont, CA; Hunter, Marion C [Livermore, CA; Krafcik, Karen Lee [Livermore, CA; Morales, Alfredo M [Livermore, CA; Simmons, Blake A [San Francisco, CA; Domeier, Linda A [Danville, CA
2008-06-24
We demonstrate a new method for joining patterned thermoplastic parts into layered structures. The method takes advantage of case-II permeant diffusion to generate dimensionally controlled, activated bonding layers at the surfaces being joined. It is capable of producing bonds characterized by cohesive failure while preserving the fidelity of patterned features in the bonding surfaces. This approach is uniquely suited to production of microfluidic multilayer structures, as it allows the bond-forming interface between plastic parts to be precisely manipulated at micrometer length scales. The bond enhancing procedure is easily integrated in standard process flows and requires no specialized equipment.
Manufacturing Methods and Technology Project Summary Reports
1982-12-01
aluminide was used to eliminate adhesive failures. A doctor blade and expandable ring segment were selected as the tooling to apply the 0.010 inch...contractual effort is to develop manufacturing technology for the production of integrally bladed impellers using titanium pre-alloyed powder and...
ERIC Educational Resources Information Center
Moraes, Edgar P.; Confessor, Mario R.; Gasparotto, Luiz H. S.
2015-01-01
This article proposes an indirect method to evaluate the corrosion rate of an iron nail in simulated seawater. The official procedure is based on the direct measurement of the specimen's weight loss over time; however, a highly precise scale is required and such equipment may not be easily available. On the other hand, mobile phones equipped with…
On the enhanced detectability of GPS anomalous behavior with relative entropy
NASA Astrophysics Data System (ADS)
Cho, Jeongho
2016-10-01
A standard receiver autonomous integrity monitoring (RAIM) technique for the Global Positioning System (GPS) provides an integrity monitoring capability for safety-critical GPS applications, such as civil aviation from the en-route (ER) phase through non-precision approach (NPA) or lateral navigation (LNAV). The performance of the existing RAIM method, however, may not meet the more stringent aviation requirements for availability and integrity during the precision approach and landing phases of flight, owing to insufficient observables and/or untimely warning to the user beyond a specified time-to-alert in the event of a significant GPS failure. This has led to an enhanced RAIM architecture that ensures a stricter integrity requirement by greatly decreasing the detection time when a satellite failure or a measurement error has occurred. We thus attempted to devise a user integrity monitor capable of identifying a GPS failure more rapidly than a standard RAIM scheme by combining RAIM with the relative entropy, a likelihood-ratio approach to assessing the inconsistency between two data streams that is quite different from a Euclidean distance. In addition, the delay-coordinate embedding technique is considered as a preprocessing step to associate the discriminant measure obtained from the RAIM with the relative entropy in the new RAIM design. In simulation results, we demonstrate that the proposed user integrity monitor outperforms the standard RAIM, with a higher detection rate of anomalies that could be hazardous to users in the approach or landing phase, and is a very promising alternative for the detection of deviations in the GPS signal. The comparison also shows that it can catch even small anomalous gradients more rapidly than a typical user integrity monitor.
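The relative entropy (Kullback-Leibler divergence) used as the discriminant measure compares two probability distributions. A minimal sketch over discretized (histogram) data streams follows; the example histograms and the smoothing constant are my assumptions:

```python
import math

def relative_entropy(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) = sum_i p_i * log(p_i / q_i).
    Unlike a Euclidean distance, it is asymmetric in its two arguments."""
    return sum(pi * math.log(pi / max(qi, eps))
               for pi, qi in zip(p, q) if pi > 0)

# Two histograms of residuals: a nominal stream vs. one with a shifted
# error mode, as might appear after an undetected satellite fault.
nominal = [0.70, 0.20, 0.08, 0.02]
faulty = [0.40, 0.30, 0.20, 0.10]
print(relative_entropy(nominal, nominal))  # 0.0 for identical streams
print(relative_entropy(nominal, faulty))   # > 0 flags the inconsistency
```

Thresholding this divergence, rather than a residual norm, is what lets the monitor react to a change in the shape of the error distribution rather than only its magnitude.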
A robust, efficient equidistribution 2D grid generation method
NASA Astrophysics Data System (ADS)
Chacon, Luis; Delzanno, Gian Luca; Finn, John; Chung, Jeojin; Lapenta, Giovanni
2007-11-01
We present a new cell-area equidistribution method for two-dimensional grid adaptation [1]. The method is able to satisfy the equidistribution constraint to arbitrary precision while optimizing desired grid properties (such as isotropy and smoothness). The method is based on the minimization of the grid smoothness integral, constrained to producing a given positive-definite cell volume distribution. The procedure gives rise to a single, non-linear scalar equation with no free parameters. We solve this equation numerically with the Newton-Krylov technique. The ellipticity property of the linearized scalar equation allows multigrid preconditioning techniques to be effectively used. We demonstrate that a solution exists and is unique. Therefore, once the solution is found, the adapted grid cannot be folded, due to the positivity of the constraint on the cell volumes. We present several challenging tests to show that our new method produces optimal grids in which the constraint is satisfied numerically to arbitrary precision. We also compare the new method to the deformation method [2] and show that our new method produces better quality grids. [1] G.L. Delzanno, L. Chacón, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, in preparation. [2] G. Liao and D. Anderson, A new approach to grid generation, Appl. Anal. 44, 285-297 (1992).
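The equidistribution constraint — each cell holding an equal share of a monitor-function integral — is easiest to see in one dimension. The 1-D sketch below (my own illustration; the 2-D method in the abstract solves a constrained minimization instead) builds a grid by inverting the cumulative integral of a weight function:

```python
def equidistribute(w, a, b, n_cells, n_fine=2000):
    """Build grid points x_0..x_n on [a, b] so that each cell holds an equal
    share of the integral of the monitor function w (1-D equidistribution)."""
    h = (b - a) / n_fine
    xs = [a + i * h for i in range(n_fine + 1)]
    cum = [0.0]                              # cumulative trapezoid integral of w
    for i in range(n_fine):
        cum.append(cum[-1] + 0.5 * (w(xs[i]) + w(xs[i + 1])) * h)
    total = cum[-1]
    grid, j = [a], 0
    for k in range(1, n_cells):
        target = total * k / n_cells         # equal share per cell
        while cum[j + 1] < target:
            j += 1
        t = (target - cum[j]) / (cum[j + 1] - cum[j])
        grid.append(xs[j] + t * h)           # invert by linear interpolation
    grid.append(b)
    return grid

print(equidistribute(lambda x: 1.0, 0.0, 1.0, 4))  # uniform monitor, uniform grid
```

A non-constant monitor, e.g. w(x) = 1 + x, clusters points where w is large while keeping every cell's integral of w equal, which is exactly the property the 2-D cell-area method enforces as a constraint.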
Flight Testing ALHAT Precision Landing Technologies Integrated Onboard the Morpheus Rocket Vehicle
NASA Technical Reports Server (NTRS)
Carson, John M. III; Robertson, Edward A.; Trawny, Nikolas; Amzajerdian, Farzin
2015-01-01
A suite of prototype sensors, software, and avionics developed within the NASA Autonomous precision Landing and Hazard Avoidance Technology (ALHAT) project were terrestrially demonstrated onboard the NASA Morpheus rocket-propelled Vertical Testbed (VTB) in 2014. The sensors included a LIDAR-based Hazard Detection System (HDS), a Navigation Doppler LIDAR (NDL) velocimeter, and a long-range Laser Altimeter (LAlt) that enable autonomous and safe precision landing of robotic or human vehicles on solid solar system bodies under varying terrain lighting conditions. The flight test campaign with the Morpheus vehicle involved a detailed integration and functional verification process, followed by tether testing and six successful free flights, including one night flight. The ALHAT sensor measurements were integrated into a common navigation solution through a specialized ALHAT Navigation filter that was employed in closed-loop flight testing within the Morpheus Guidance, Navigation and Control (GN&C) subsystem. Flight testing on Morpheus utilized ALHAT for safe landing site identification and ranking, followed by precise surface-relative navigation to the selected landing site. The successful autonomous, closed-loop flight demonstrations of the prototype ALHAT system have laid the foundation for the infusion of safe, precision landing capabilities into future planetary exploration missions.
Wang, Lixin; Caylor, Kelly K; Dragoni, Danilo
2009-02-01
The ¹⁸O and ²H of water vapor serve as powerful tracers of hydrological processes. The typical method for determining water vapor δ¹⁸O and δ²H involves cryogenic trapping and isotope ratio mass spectrometry. Even with recent technical advances, these methods cannot resolve vapor composition at high temporal resolutions. In recent years, a few groups have developed continuous laser absorption spectroscopy (LAS) approaches for measuring δ¹⁸O and δ²H which achieve accuracy levels similar to those of lab-based mass spectrometry methods. Unfortunately, most LAS systems need cryogenic cooling and constant calibration to a reference gas, and have substantial power requirements, making them unsuitable for long-term field deployment at remote field sites. A new method called Off-Axis Integrated Cavity Output Spectroscopy (OA-ICOS) has been developed which requires extremely low-energy consumption and neither reference gas nor cryogenic cooling. In this report, we develop a relatively simple pumping system coupled to a dew point generator to calibrate an ICOS-based instrument (Los Gatos Research Water Vapor Isotope Analyzer (WVIA) DLT-100) under various pressures using liquid water with known isotopic signatures. Results show that the WVIA can be successfully calibrated using this customized system for different pressure settings, which ensures that this instrument can be combined with other gas-sampling systems. The precisions of this instrument and the associated calibration method can reach approximately 0.08‰ for δ¹⁸O and approximately 0.4‰ for δ²H. Compared with conventional mass spectrometry and other LAS-based methods, the OA-ICOS technique provides a promising alternative tool for continuous water vapor isotopic measurements in field deployments. Copyright 2009 John Wiley & Sons, Ltd.
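Delta values express an isotope ratio as a per-mil deviation from a reference standard. A quick sketch of the convention; the VSMOW ratios below are the commonly tabulated reference values, and the sample ratio is invented for illustration:

```python
# Delta notation for water isotopes: per-mil deviation of the sample isotope
# ratio from a reference standard (Vienna Standard Mean Ocean Water, VSMOW).
R_VSMOW_18O = 2005.20e-6   # commonly tabulated 18O/16O ratio of VSMOW
R_VSMOW_2H = 155.76e-6     # commonly tabulated 2H/1H ratio of VSMOW

def delta_per_mil(r_sample, r_standard):
    """delta = (R_sample / R_standard - 1) * 1000, in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A vapor sample depleted in the heavy isotope gives a negative delta value.
print(delta_per_mil(1985.0e-6, R_VSMOW_18O))  # about -10 per mil
```

The quoted instrument precisions (≈0.08‰ for δ¹⁸O, ≈0.4‰ for δ²H) are deviations on exactly this scale.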
A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures
NASA Technical Reports Server (NTRS)
Moore, Ashley
2005-01-01
The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target using camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using Photomodeler software. The accuracy of the Photomodeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with results from the Australis photogrammetry software, which simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect the system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.
Integrating Depth and Image Sequences for Planetary Rover Mapping Using Rgb-D Sensor
NASA Astrophysics Data System (ADS)
Peng, M.; Wan, W.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Zhao, Q.; Teng, B.; Mao, X.
2018-04-01
An RGB-D camera allows the capture of depth and color information at high data rates, which makes it possible and beneficial to integrate depth and image sequences for planetary rover mapping. The proposed mapping method consists of three steps. First, the strict projection relationship among 3D space, depth data, and visual texture data is established based on the imaging principle of the RGB-D camera, and an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, with the interior and exterior orientation elements of the RGB image sequence available, dense matching is completed with the CMPMVS tool. Finally, using the registration parameters from ICP, the 3D scene from the RGB images is registered to the 3D scene from the depth images, and the fused point cloud is obtained. An experiment was performed in an outdoor field to simulate the lunar surface. The experimental results demonstrate the feasibility of the proposed method.
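The final registration step aligns the RGB-derived scene with the depth-derived scene. The closed-form rigid alignment solved inside each ICP iteration can be sketched as follows (a generic Kabsch/SVD solver on synthetic points, not the authors' pipeline):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t
    (the closed-form step inside each ICP iteration, via SVD/Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # cross-covariance of the clouds
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Synthetic check: rotate a small cloud by 30 degrees about z and shift it.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(50, 3))
th = np.deg2rad(30.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
moved = cloud @ R_true.T + t_true

R_est, t_est = rigid_transform(cloud, moved)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

Full ICP alternates this closed-form solve with nearest-neighbor correspondence search; here the correspondences are known, so one solve recovers the transform exactly.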
Podor, Renaud; Pailhon, Damien; Ravaux, Johann; Brau, Henri-Pierre
2015-04-01
We have developed two integrated thermocouple (TC) crucible systems that allow precise measurement of sample temperature when using a furnace associated with an environmental scanning electron microscope (ESEM). Sample temperatures measured with these systems are precise (±5°C) and reliable. The TC crucible systems allow working with solids and liquids (silicate melts or ionic liquids), independent of the gas composition and pressure. These sample holder designs will allow end users to perform experiments at high temperature in the ESEM chamber with high precision control of the sample temperature.
NASA Astrophysics Data System (ADS)
Portegies Zwart, Simon; Boekholt, Tjarda
2014-04-01
The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (from round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute-force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
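The role of energy conservation as a quality criterion can be illustrated on a much simpler setup than the paper's arbitrary-precision runs: a symplectic (leapfrog) integrator keeps the relative energy error of a two-body orbit bounded rather than secularly growing. A toy sketch, not the authors' code:

```python
import numpy as np

def accel(pos, mass, G=1.0):
    """Pairwise Newtonian accelerations for a small N-body system."""
    a = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                r = pos[j] - pos[i]
                a[i] += G * mass[j] * r / np.linalg.norm(r) ** 3
    return a

def energy(pos, vel, mass, G=1.0):
    kin = 0.5 * np.sum(mass * np.sum(vel**2, axis=1))
    pot = 0.0
    for i in range(len(mass)):
        for j in range(i + 1, len(mass)):
            pot -= G * mass[i] * mass[j] / np.linalg.norm(pos[j] - pos[i])
    return kin + pot

# Near-circular two-body orbit (G = M = 1): leapfrog (kick-drift-kick)
# keeps the relative energy error bounded over many steps.
mass = np.array([1.0, 1e-3])
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
vel = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
dt, e0 = 1e-3, energy(pos, vel, mass)
for _ in range(20000):                       # roughly three orbits
    vel += 0.5 * dt * accel(pos, mass)
    pos += dt * vel
    vel += 0.5 * dt * accel(pos, mass)
rel_err = abs((energy(pos, vel, mass) - e0) / e0)
print(rel_err < 1e-5)
```

The paper's point is stronger: even when individual trajectories are wrong, ensemble statistics survive, provided the energy error stays within the quoted bound.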
Precise computer controlled positioning of robot end effectors using force sensors
NASA Technical Reports Server (NTRS)
Shieh, L. S.; Mcinnis, B. C.; Wang, J. C.
1988-01-01
A thorough study of combined position/force control using sensory feedback was performed for a one-dimensional manipulator model, which may account for the spacecraft docking problem or be extended to the multi-joint robot manipulator problem. The additional degree of freedom introduced by the compliant force sensor is included in the system dynamics in the design of precise position control. State feedback based on the pole placement method, with integral control, is used to design the position controller. A simple constant-gain force controller is used as an example to illustrate the dependence of the stability and steady-state accuracy of the overall position/force control on the design of the inner position controller. Supportive simulation results are also provided.
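The inner position loop described above can be sketched on the simplest possible plant, a double integrator, where pole placement with an added integral state gives the gains in closed form (plant, pole locations, and disturbance are chosen for illustration, not taken from the paper):

```python
# Position control of a unit-mass axis (x1 = position, x2 = velocity) with an
# added integral state x3 = integral of (r - x1). For the double integrator
# the closed-loop polynomial is s^3 + k2 s^2 + k1 s + ki, so placing a triple
# pole at s = -2, i.e. (s + 2)^3 = s^3 + 6 s^2 + 12 s + 8, gives the gains:
k1, k2, ki = 12.0, 6.0, 8.0
r = 1.0                    # position setpoint
d = 0.5                    # constant force disturbance (e.g. contact bias)
x1 = x2 = x3 = 0.0
dt = 1e-3
for _ in range(int(10.0 / dt)):            # simulate 10 s with Euler steps
    u = -k1 * x1 - k2 * x2 + ki * x3       # state feedback + integral action
    a = u + d                              # plant: x1'' = u + d
    x1 += dt * x2
    x2 += dt * a
    x3 += dt * (r - x1)
print(abs(x1 - r) < 1e-4)                  # integral action removes the offset
```

Without the integral state, the constant disturbance would leave a steady-state position offset of d/k1; the integrator drives it to zero, which is the property the paper exploits in the position controller.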
Delre, Antonio; Mønster, Jacob; Samuelsson, Jerker; Fredenslund, Anders M; Scheutz, Charlotte
2018-09-01
The tracer gas dispersion method (TDM) is a remote sensing method for quantifying fugitive emissions that relies on the controlled release of a tracer gas at the source, combined with concentration measurements of the tracer and target gas plumes. The TDM was tested at a wastewater treatment plant for plant-integrated methane emission quantification, using four analytical instruments simultaneously and four different tracer gases. Measurements performed using a combination of an analytical instrument and a tracer gas with a high ratio between the tracer gas release rate and instrument precision (a high release-precision ratio) resulted in well-defined plumes with a high signal-to-noise ratio and a high methane-to-tracer gas correlation factor. Measured methane emission rates differed by up to 18% from the mean value when measurements were performed using seven different instrument and tracer gas combinations. Analytical instruments with a high detection frequency and good precision were established as the most suitable for successful TDM application. Poor instrument precision could be overcome only to some extent by applying a higher tracer gas release rate. A sideward misplacement of the tracer gas release point of about 250 m resulted in an emission rate comparable to those obtained using a tracer gas correctly simulating the methane emission. Conversely, an upwind misplacement of about 150 m resulted in an emission rate overestimation of almost 50%, showing the importance of proper emission source simulation when applying the TDM. Copyright © 2018 Elsevier B.V. All rights reserved.
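The TDM emission estimate itself is a ratio calculation: the known tracer release rate is scaled by the ratio of the plume-integrated, background-corrected mixing ratios and by the molar-mass ratio of the two gases. A sketch with invented numbers (acetylene as the tracer is an assumption for illustration; the study used four different tracer gases):

```python
import numpy as np

# Plume-integrated TDM estimate with illustrative numbers, not study data.
q_tracer_kg_h = 2.0          # known tracer (acetylene) release rate, kg/h
mw_ch4, mw_c2h2 = 16.04, 26.04

x = np.linspace(0.0, 400.0, 401)                    # plume transect, m
ch4_ppb = 40.0 * np.exp(-((x - 200) / 60.0) ** 2)   # above-background CH4
c2h2_ppb = 10.0 * np.exp(-((x - 200) / 60.0) ** 2)  # above-background tracer

# Uniform sampling along the transect, so the grid spacing cancels in the
# ratio of the two plume integrals.
ratio = np.sum(ch4_ppb) / np.sum(c2h2_ppb)
e_ch4_kg_h = q_tracer_kg_h * ratio * (mw_ch4 / mw_c2h2)
print(round(e_ch4_kg_h, 2))
```

The molar-mass factor converts the mole-based mixing-ratio ratio into a mass emission rate; a misplaced release point biases the plume-integral ratio, which is the error mode quantified in the abstract.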
Aquatic ecosystem protection and restoration: Advances in methods for assessment and evaluation
Bain, M.B.; Harig, A.L.; Loucks, D.P.; Goforth, R.R.; Mills, K.E.
2000-01-01
Many methods and criteria are available to assess aquatic ecosystems, and this review focuses on a set that demonstrates advancements from community analyses to methods spanning large spatial and temporal scales. Basic methods have been extended by incorporating taxa sensitivity to different forms of stress, adding measures linked to system function, synthesizing multiple faunal groups, integrating biological and physical attributes, spanning large spatial scales, and enabling simulations through time. These tools can be customized to meet the needs of a particular assessment and ecosystem. Two case studies are presented to show how new methods were applied at the ecosystem scale for achieving practical management goals. One case used an assessment of biotic structure to demonstrate how enhanced river flows can improve habitat conditions and restore a diverse fish fauna reflective of a healthy riverine ecosystem. In the second case, multitaxonomic integrity indicators were successful in distinguishing lake ecosystems that were disturbed, healthy, and in the process of restoration. Most methods strive to address the concept of biological integrity, and assessment effectiveness can often be impeded by the lack of more specific ecosystem management objectives. Scientific and policy explorations are needed to define new ways of designating a healthy system, so as to allow specification of precise quality criteria that will promote further development of ecosystem analysis tools.
Seismic displacements monitoring for 2015 Mw 7.8 Nepal earthquake with GNSS data
NASA Astrophysics Data System (ADS)
Geng, T.; Su, X.; Xie, X.
2017-12-01
High-rate Global Navigation Satellite System (GNSS) positioning has been recognized as one of the powerful tools for monitoring ground motions generated by seismic events. The high-rate GPS and BDS data collected during the 2015 Mw 7.8 Nepal earthquake were analyzed using two methods: the variometric approach and precise point positioning (PPP). The variometric approach is based on a time-differencing technique that uses only GNSS broadcast products to estimate velocity time series from tracking observations in real time, followed by an integration of the velocities to derive the displacements induced by the seismic event. PPP is a positioning method that calculates precise positions at centimeter- or even millimeter-level accuracy with a single GNSS receiver using precise satellite orbit and clock products. Displacements were recovered with an accuracy of 2 cm at far-field stations and 5 cm at near-field stations with strong ground motions and static offsets of up to 1-2 m. Multi-GNSS (GPS + BDS) processing provides higher-accuracy displacements thanks to the increased number of satellites and the improved Position Dilution of Precision (PDOP) values. Considering the time consumption of clock estimation and the precision of PPP solutions, a 5 s GNSS satellite clock interval is suggested. In addition, the GNSS-derived displacements are in good agreement with those from strong-motion data. These studies demonstrate the feasibility of capturing seismic waves in real time with multi-GNSS observations, which is of great promise for earthquake early warning and rapid hazard assessment.
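The integration step of the variometric approach can be sketched with synthetic data: epoch-by-epoch velocities (standing in for time-differenced carrier-phase estimates at 1 Hz) are integrated once to recover a coseismic displacement offset. All numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1.0                                     # 1 Hz GNSS sampling
t = np.arange(0.0, 120.0, dt)
true_disp = 0.8 * (1 - np.exp(-t / 15.0))    # static offset building to 0.8 m
true_vel = np.gradient(true_disp, dt)
vel_meas = true_vel + rng.normal(0.0, 0.001, t.size)   # ~1 mm/s velocity noise

# Trapezoidal integration of the epoch velocities, as in the variometric chain.
disp = np.concatenate(
    ([0.0], np.cumsum(0.5 * (vel_meas[1:] + vel_meas[:-1]) * dt))
)
err = abs(disp[-1] - true_disp[-1])
print(err < 0.05)                            # cm-level recovery of the offset
```

The integration also accumulates velocity noise as a random walk, which is why the variometric approach degrades over long windows and why PPP serves as the complementary absolute-positioning check.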
Importance sampling studies of helium using the Feynman-Kac path integral method
NASA Astrophysics Data System (ADS)
Datta, S.; Rejcek, J. M.
2018-05-01
In the Feynman-Kac path integral approach, the eigenvalues of a quantum system can be computed using the Wiener measure, which is based on Brownian particle motion. In our previous work on such systems we observed that the Wiener process converges slowly in dimensions greater than two, because almost all trajectories escape to infinity. One can speed up this process by using a generalized Feynman-Kac (GFK) method, in which the new measure associated with the trial function is stationary, so that the convergence rate becomes much faster. We thus achieve an example of "importance sampling" and, in the present work, we apply it to the Feynman-Kac (FK) path integrals for the ground- and first few excited-state energies of He to speed up the convergence rate. We calculate the path integrals using space averaging rather than the time averaging used in the past. The best previous variational computations report precisions of 10^-16 Hartree, whereas in most cases our path integral results obtained for the ground and first excited states of He are lower than these results by about 10^-6 Hartree or more.
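The underlying Feynman-Kac estimate can be illustrated on a toy problem where the answer is known: for H = -(1/2) d^2/dx^2 + V with V(x) = x^2/2, the Brownian-path expectation Z(t) = E[exp(-integral of V(X_s) ds)] decays as exp(-E0 t) with E0 = 0.5. A plain sketch without importance sampling (so not the paper's GFK scheme, and 1D rather than helium):

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, dt = 40000, 0.01
x = np.zeros(n_paths)                         # Brownian paths started at 0
s = np.zeros(n_paths)                         # accumulated integral of V
log_z = {}
for k in range(1, 501):                       # paths run to t = 5
    x += rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increments
    s += 0.5 * x**2 * dt                         # V(x) = x^2 / 2
    if k in (200, 500):                          # snapshots at t = 2 and t = 5
        log_z[k] = np.log(np.mean(np.exp(-s)))

e0 = (log_z[200] - log_z[500]) / 3.0          # log-slope between t = 2 and 5
print(abs(e0 - 0.5) < 0.05)
```

In higher dimensions the weights exp(-s) degenerate as paths wander off, which is exactly the slow convergence the GFK importance-sampling measure is designed to cure.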
NASA Astrophysics Data System (ADS)
Hellmich, S.; Mottola, S.; Hahn, G.; Kührt, E.; Hlawitschka, M.
2014-07-01
Simulations of dynamical processes in planetary systems represent an important tool for studying the orbital evolution of the systems [1--3]. Using modern numerical integration methods, it is possible to model systems containing many thousands of objects over timescales of several hundred million years. However, in general, supercomputers are needed to get reasonable simulation results in acceptable execution times [3]. To exploit the ever-growing computation power of Graphics Processing Units (GPUs) in modern desktop computers, we implemented cuSwift, a library of numerical integration methods for studying long-term dynamical processes in planetary systems. cuSwift can be seen as a re-implementation of the famous SWIFT integrator package written by Hal Levison and Martin Duncan. cuSwift is written in C/CUDA and contains different integration methods for various purposes. So far, we have implemented three algorithms: a 15th-order Radau integrator [4], the Wisdom-Holman Mapping (WHM) integrator [5], and the Regularized Mixed Variable Symplectic (RMVS) Method [6]. These algorithms treat only the planets as mutually gravitationally interacting bodies whereas asteroids and comets (or other minor bodies of interest) are treated as massless test particles which are gravitationally influenced by the massive bodies but do not affect each other or the massive bodies. The main focus of this work is on the symplectic methods (WHM and RMVS) which use a larger time step and thus are capable of integrating many particles over a large time span. As an additional feature, we implemented the non-gravitational Yarkovsky effect as described by M. Brož [7]. With cuSwift, we show that the use of modern GPUs makes it possible to speed up these methods by more than one order of magnitude compared to the single-core CPU implementation, thereby enabling modest workstation computers to perform long-term dynamical simulations. 
We use these methods to study the influence of the Yarkovsky effect on resonant asteroids. We present first results and compare them with integrations done with the original algorithms implemented in SWIFT in order to assess the numerical precision of cuSwift and to demonstrate the speed-up we achieved using the GPU.
NASA Astrophysics Data System (ADS)
Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang
The paper presents the concept of the lever arm and boresight angle, the design requirements of calibration sites, and an integrated method for calibrating the boresight angles of a digital camera or laser scanner. Taking test data collected by Applanix's LandMark system as an example, the camera boresight calibration is carried out both by piling up three consecutive stereo images and by an on-the-fly (OTF) calibration method using ground control points. The laser boresight calibration uses both manual and automatic methods with ground control points. Integrated calibration between the digital camera and laser scanner is introduced to improve the systematic precision of the two sensors. Analysis of the measurements between ground control points and their corresponding image points in the sequence images shows that object positions derived from the camera images are within about 15 cm in relative error and 20 cm in absolute error. Comparison of ground control points with their corresponding laser point clouds shows errors of less than 20 cm. These experimental results indicate that the mobile mapping system is an efficient and reliable system for rapidly generating high-accuracy, high-density road spatial data.
Online Wavelet Complementary velocity Estimator.
Righettini, Paolo; Strada, Roberto; KhademOlama, Ehsan; Valilou, Shirin
2018-02-01
In this paper, we propose a new online Wavelet Complementary velocity Estimator (WCE) operating on position and acceleration data gathered from an electro-hydraulic servo shaking table. This is a batch-type estimator based on wavelet filter banks, which extract the high- and low-resolution content of the data. The proposed complementary estimator combines the two velocity resolutions, acquired from numerical differentiation and integration of the position and acceleration sensors respectively, by feeding a fixed moving-horizon window into the wavelet filter. Because wavelet filters are used, the scheme can be implemented in parallel. With this method, velocity is estimated numerically without the high noise of differentiators or the drifting bias of integration, and with less delay, which is suitable for active vibration control in high-precision mechatronic systems by Direct Velocity Feedback (DVF) methods. This method allows velocity sensors to be built with fewer mechanically moving parts, making them suitable for fast miniature structures. We compare this method with Kalman and Butterworth filters in terms of stability and delay, and benchmark them by long-time integration of the velocity to recover the initial position data. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
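A first-order complementary blend conveys the core idea (the paper's estimator uses wavelet filter banks instead of the single-pole filter sketched here, and all signals below are synthetic): differentiated position supplies the unbiased low-frequency velocity, integrated acceleration supplies the low-noise high-frequency part.

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
true_vel = np.sin(2 * np.pi * 3 * t)                   # 3 Hz velocity signal
pos = np.cumsum(true_vel) * dt + rng.normal(0, 1e-5, t.size)  # noisy encoder
acc = np.gradient(true_vel, dt) + 0.2                  # biased accelerometer

tau = 0.05                      # crossover time constant of the blend
alpha = tau / (tau + dt)
v_est = np.zeros(t.size)
for k in range(1, t.size):
    v_diff = (pos[k] - pos[k - 1]) / dt    # noisy but unbiased velocity
    # High-pass the integrated acceleration, low-pass the differentiated
    # position: the two channels sum to an (approximately) all-pass estimate.
    v_est[k] = alpha * (v_est[k - 1] + acc[k] * dt) + (1 - alpha) * v_diff

rms = np.sqrt(np.mean((v_est[200:] - true_vel[200:]) ** 2))
print(rms < 0.05)
```

Pure integration of the biased accelerometer would drift by 0.2 m/s per second; pure differentiation of the encoder would be an order of magnitude noisier. The complementary blend suppresses both failure modes, which is the behavior the wavelet version refines band by band.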
Systems and methods for knowledge discovery in spatial data
Obradovic, Zoran; Fiez, Timothy E.; Vucetic, Slobodan; Lazarevic, Aleksandar; Pokrajac, Dragoljub; Hoskinson, Reed L.
2005-03-08
Systems and methods are provided for knowledge discovery in spatial data, as well as for optimizing recipes used in spatial environments such as those found in precision agriculture. A spatial data analysis and modeling module is provided that allows users to interactively and flexibly analyze and mine spatial data. The spatial data analysis and modeling module applies spatial data mining algorithms through a number of steps. The data loading and generation module obtains or generates spatial data and allows for basic partitioning. The inspection module provides basic statistical analysis. The preprocessing module smooths and cleans the data and allows for basic manipulation of the data. The partitioning module provides for more advanced data partitioning. The prediction module applies regression and classification algorithms to the spatial data. The integration module enhances prediction methods by combining and integrating models. The recommendation module provides the user with site-specific recommendations on how to optimize a recipe for a spatial environment, such as a fertilizer recipe for an agricultural field.
NASA Astrophysics Data System (ADS)
Schröder, Markus; Meyer, Hans-Dieter
2017-08-01
We propose a Monte Carlo method, "Monte Carlo Potfit," for transforming high-dimensional potential energy surfaces evaluated on discrete grid points into a sum-of-products form, more precisely into a Tucker form. To this end we use a variational ansatz in which we replace numerically exact integrals with Monte Carlo integrals. This largely reduces the numerical cost by avoiding the evaluation of the potential on all grid points and allows a treatment of surfaces up to 15-18 degrees of freedom. We furthermore show that the error made with this ansatz can be controlled and vanishes in certain limits. We present calculations on the potential of HFCO to demonstrate the features of the algorithm. To demonstrate the power of the method, we transformed a 15D potential of the protonated water dimer (Zundel cation) in a sum-of-products form and calculated the ground and lowest 26 vibrationally excited states of the Zundel cation with the multi-configuration time-dependent Hartree method.
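The key cost-saving idea, replacing full-grid integrals by Monte Carlo estimates over randomly sampled grid points, can be shown in miniature (potential and basis function invented for illustration; the real method fits a full Tucker expansion variationally):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 201
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
V = np.exp(-(X**2 + Y**2))          # 2D "potential" evaluated on the full grid
f1 = np.cos(np.pi * x / 2)          # a 1D basis function (same in x and y)

# Expansion-coefficient-type quantity as a full-grid average: n^2 terms.
exact = np.mean(V * np.outer(f1, f1))

# Monte Carlo estimate of the same average from 20000 random grid points,
# avoiding evaluation of V on the entire product grid.
i = rng.integers(0, n, 20000)
j = rng.integers(0, n, 20000)
mc = np.mean(V[i, j] * f1[i] * f1[j])
print(abs(mc - exact) < 0.01)
```

In two dimensions the savings are trivial, but the number of product-grid points grows exponentially with the number of degrees of freedom, which is why sampling makes 15-18 dimensional surfaces tractable.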
Frisenda, Riccardo; Navarro-Moratalla, Efrén; Gant, Patricia; Pérez De Lara, David; Jarillo-Herrero, Pablo; Gorbachev, Roman V; Castellanos-Gomez, Andres
2018-01-02
Designer heterostructures can now be assembled layer-by-layer with unmatched precision thanks to the recently developed deterministic placement methods to transfer two-dimensional (2D) materials. This possibility constitutes the birth of a very active research field on the so-called van der Waals heterostructures. Moreover, these deterministic placement methods also open the door to fabricate complex devices, which would be otherwise very difficult to achieve by conventional bottom-up nanofabrication approaches, and to fabricate fully-encapsulated devices with exquisite electronic properties. The integration of 2D materials with existing technologies such as photonic and superconducting waveguides and fiber optics is another exciting possibility. Here, we review the state-of-the-art of the deterministic placement methods, describing and comparing the different alternative methods available in the literature, and we illustrate their potential to fabricate van der Waals heterostructures, to integrate 2D materials into complex devices and to fabricate artificial bilayer structures where the layers present a user-defined rotational twisting angle.
Kühn, Simone; Fernyhough, Charles; Alderson-Day, Benjamin; Hurlburt, Russell T.
2014-01-01
To provide full accounts of human experience and behavior, research in cognitive neuroscience must be linked to inner experience, but introspective reports of inner experience have often been found to be unreliable. The present case study aimed at providing proof of principle that introspection using one method, descriptive experience sampling (DES), can be reliably integrated with fMRI. A participant was trained in the DES method, followed by nine sessions of sampling within an MRI scanner. During moments where the DES interview revealed ongoing inner speaking, fMRI data reliably showed activation in classic speech processing areas including left inferior frontal gyrus. Further, the fMRI data validated the participant’s DES observations of the experiential distinction between inner speaking and innerly hearing her own voice. These results highlight the precision and validity of the DES method as a technique of exploring inner experience and the utility of combining such methods with fMRI. PMID:25538649
An accurate real-time model of maglev planar motor based on compound Simpson numerical integration
NASA Astrophysics Data System (ADS)
Kou, Baoquan; Xing, Feng; Zhang, Lu; Zhou, Yiheng; Liu, Jiaqi
2017-05-01
To realize high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems, for the stator, the mover, and the corner coil, are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of the Halbach magnet array is taken into account, the integration can be carried out over the two segments according to the Lorentz force law. The force and torque formulas for the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, a compound Simpson numerical integration method is proposed in this paper to handle the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can easily be applied in practice.
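The compound (composite) Simpson rule used for the corner segment is a standard quadrature; a generic sketch, not the authors' implementation:

```python
import numpy as np

def compound_simpson(f, a, b, n):
    """Composite (compound) Simpson rule on [a, b] with n subintervals.

    n must be even; the rule is exact for cubics and fourth-order accurate
    in general, which is why a modest n suffices for smooth integrands.
    """
    if n % 2:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

# Check against a known integral: the integral of sin on [0, pi] is exactly 2.
approx = compound_simpson(np.sin, 0.0, np.pi, 128)
print(abs(approx - 2.0) < 1e-7)
```

In the motor model the integrand would be the first-harmonic flux density crossed with the current direction along the curved corner path; the quadrature itself is unchanged.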
Understanding the many-body expansion for large systems. I. Precision considerations
NASA Astrophysics Data System (ADS)
Richard, Ryan M.; Lao, Ka Un; Herbert, John M.
2014-07-01
Electronic structure methods based on low-order "n-body" expansions are an increasingly popular means to defeat the highly nonlinear scaling of ab initio quantum chemistry calculations, taking advantage of the inherently distributable nature of the numerous subsystem calculations. Here, we examine how the finite precision of these subsystem calculations manifests in applications to large systems, in this case, a sequence of water clusters ranging in size up to (H_2O)_{47}. Using two different computer implementations of the n-body expansion, one fully integrated into a quantum chemistry program and the other written as a separate driver routine for the same program, we examine the reproducibility of total binding energies as a function of cluster size. The combinatorial nature of the n-body expansion amplifies subtle differences between the two implementations, especially for n ⩾ 4, leading to total energies that differ by as much as several kcal/mol between two implementations of what is ostensibly the same method. This behavior can be understood based on a propagation-of-errors analysis applied to a closed-form expression for the n-body expansion, which is derived here for the first time. Discrepancies between the two implementations arise primarily from the Coulomb self-energy correction that is required when electrostatic embedding charges are implemented by means of an external driver program. For reliable results in large systems, our analysis suggests that script- or driver-based implementations should read binary output files from an electronic structure program, in full double precision, or better yet be fully integrated in a way that avoids the need to compute the aforementioned self-energy. Moreover, four-body and higher-order expansions may be too sensitive to numerical thresholds to be of practical use in large systems.
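The combinatorial amplification of finite precision can be mimicked in a few lines: a signed sum of many large, mutually canceling "subsystem" terms accumulated sequentially in single precision already disagrees measurably with the double-precision result (synthetic data, not an electronic-structure calculation):

```python
import numpy as np

rng = np.random.default_rng(5)
terms = rng.normal(0.0, 75.0, 20000)          # large, mutually canceling terms
total64 = np.sum(terms)                        # double-precision reference
# Force a strictly sequential single-precision accumulation, as a naive
# driver script reading truncated per-subsystem outputs effectively does.
total32 = float(np.cumsum(terms.astype(np.float32))[-1])
diff = abs(total32 - total64)
print(diff > 1e-5)
```

This is the paper's recommendation in miniature: carry the per-subsystem energies at full double precision into the final combination, since the n-body combination magnifies whatever precision is lost upstream.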
The effects of selective and divided attention on sensory precision and integration.
Odegaard, Brian; Wozny, David R; Shams, Ladan
2016-02-12
In our daily lives, our capacity to selectively attend to stimuli within or across sensory modalities enables enhanced perception of the surrounding world. While previous research on selective attention has studied this phenomenon extensively, two important questions still remain unanswered: (1) how selective attention to a single modality impacts sensory integration processes, and (2) the mechanism by which selective attention improves perception. We explored how selective attention impacts performance in both a spatial task and a temporal numerosity judgment task, and employed a Bayesian Causal Inference model to investigate the computational mechanism(s) impacted by selective attention. We report three findings: (1) in the spatial domain, selective attention improves precision of the visual sensory representations (which were relatively precise), but not the auditory sensory representations (which were fairly noisy); (2) in the temporal domain, selective attention improves the sensory precision in both modalities (both of which were fairly reliable to begin with); (3) in both tasks, selective attention did not exert a significant influence over the tendency to integrate sensory stimuli. Therefore, it may be postulated that a sensory modality must possess a certain inherent degree of encoding precision in order to benefit from selective attention. It also appears that in certain basic perceptual tasks, the tendency to integrate crossmodal signals does not depend significantly on selective attention. We conclude with a discussion of how these results relate to recent theoretical considerations of selective attention. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
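The precision-weighting at the heart of such models is easy to state: under forced fusion, each cue is weighted by its inverse variance, so sharpening one cue pulls the fused estimate toward it. A sketch with invented numbers (the full causal-inference model also weighs the possibility of separate causes, which is omitted here):

```python
# Forced-fusion limb of a Bayesian cue-combination model: the fused estimate
# weights each modality by its precision (1 / variance).
def fuse(x_v, var_v, x_a, var_a):
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)
    x_hat = w_v * x_v + (1 - w_v) * x_a
    var_hat = 1 / (1 / var_v + 1 / var_a)    # fused variance shrinks
    return x_hat, var_hat

# Visual cue at 0 deg (sigma = 2), auditory cue at 10 deg (sigma = 8).
x1, v1 = fuse(0.0, 2.0**2, 10.0, 8.0**2)
# If attention halves the visual sigma, the fused estimate moves toward vision.
x2, v2 = fuse(0.0, 1.0**2, 10.0, 8.0**2)
print(round(x1, 2), round(x2, 2))   # prints: 0.59 0.15
```

This is consistent with the paper's finding: attention that improves a modality's sensory precision changes the fused percept, while the tendency to fuse at all is governed separately.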
High precision calcium isotope analysis using 42Ca-48Ca double-spike TIMS technique
NASA Astrophysics Data System (ADS)
Feng, L.; Zhou, L.; Gao, S.; Tong, S. Y.; Zhou, M. L.
2014-12-01
Double-spike techniques are widely used for determining the calcium isotopic compositions of natural samples. The most important factors controlling the precision of the double-spike technique are the choice of an appropriate spike isotope pair, the composition of the double spike, and the ratio of spike to sample (CSp/CN). We propose an optimal 42Ca-48Ca double-spike protocol that yields the best internal precision for calcium isotopic composition determinations among all spike pairs and various spike compositions and spike-to-sample ratios, as predicted by the linear error propagation method. A spike composition of 42Ca/(42Ca+48Ca) = 0.44 mol/mol and CSp/(CN+CSp) = 0.12 mol/mol is suggested because it takes advantage of both the largest mass dispersion between 42Ca and 48Ca (14%) and the lowest spike cost. Spiked samples were purified by passing them through a homemade micro-column filled with Ca-specific resin. K, Ti, and other interfering elements were completely separated, while 100% of the calcium was recovered with negligible blank. Data collection parameters, including integration time, idle time, and focus and peak-center frequency, were all carefully designed for the highest internal precision and lowest analysis time. All beams were measured automatically in a sequence by the Triton TIMS so as to eliminate differences in analytical conditions between samples and standards, and also to increase the analytical throughput. The typical internal precision of 100 duty cycles for one beam is 0.012-0.015 ‰ (2SEM), which agrees well with the predicted internal precision of 0.0124 ‰ (2SEM). Our method improves internal precision by a factor of 2-10 compared to previous methods for determining calcium isotopic compositions by double-spike TIMS. We analyzed NIST SRM 915a, NIST SRM 915b, and Pacific seawater, as well as interspersed geological samples, over two months.
The obtained average δ44/40Ca (all relative to NIST SRM 915a) is 0.02 ± 0.02 ‰ (n=28), 0.72 ± 0.04 ‰ (n=10), and 1.93 ± 0.03 ‰ (n=21) for NIST SRM 915a, NIST SRM 915b, and Pacific seawater, respectively. The long-term reproducibility is 0.10 ‰ (2SD), which is comparable to the best external precision of 0.04 ‰ (2SD) of previous methods, but our sample throughput is doubled, with a significant reduction in the amount of spike used per sample.
Ferrández-Pastor, Francisco Javier; García-Chamizo, Juan Manuel; Nieto-Hidalgo, Mario; Mora-Pascual, Jerónimo; Mora-Martínez, José
2016-07-22
The application of Information Technologies to Precision Agriculture methods has clear benefits. Precision Agriculture optimises production efficiency, increases quality, minimises environmental impact and reduces the use of resources (energy, water); however, different barriers have delayed its wide adoption. Some of the main barriers are expensive equipment, difficulty of operation and maintenance, and the fact that standards for sensor networks are still under development. Nowadays, new technological developments in embedded devices (hardware and communication protocols), the evolution of Internet technologies (Internet of Things) and ubiquitous computing (Ubiquitous Sensor Networks) allow the development of less expensive systems that are easier to control, install and maintain, using standard protocols with low power consumption. This work develops and tests a low-cost sensor/actuator network platform, based on the Internet of Things, integrating machine-to-machine and human-machine-interface protocols. Edge computing uses this multi-protocol approach to develop control processes in Precision Agriculture scenarios. A greenhouse with hydroponic crop production was developed and tested using Ubiquitous Sensor Network monitoring and edge control under the Internet of Things paradigm. The experimental results showed that Internet technologies and Smart Object Communication Patterns can be combined to encourage the development of Precision Agriculture. They demonstrated added benefits (cost, energy, smart development, acceptance by agricultural specialists) when a project is launched.
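An edge-control rule of the kind such a platform runs next to the sensors can be as simple as a hysteresis band, so that actuation does not depend on cloud connectivity (thresholds and readings invented for illustration, not taken from the greenhouse deployment):

```python
# Local edge-control sketch: start irrigation below a low threshold, stop
# above a high one, and hold state inside the band to avoid pump chatter.
def irrigation_command(soil_moisture_pct, pump_on, low=35.0, high=45.0):
    """Hysteresis rule: pump on below `low`, off above `high`."""
    if soil_moisture_pct < low:
        return True
    if soil_moisture_pct > high:
        return False
    return pump_on                      # inside the band, keep current state

state = False
log = []
for reading in [50, 40, 34, 38, 46, 44]:    # simulated sensor readings, %
    state = irrigation_command(reading, state)
    log.append(state)
print(log)
```

In the platform described above, a rule like this would run on the embedded gateway, with the sensor readings arriving over the machine-to-machine protocol and the decisions reported upward for monitoring.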
Visual control of robots using range images.
Pomares, Jorge; Gil, Pablo; Torres, Fernando
2010-01-01
In recent years, 3D vision systems based on the time-of-flight (ToF) principle have gained importance as a means of obtaining 3D information about the workspace. In this paper, the use of 3D ToF cameras to guide a robot arm is analyzed. To this end, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method determines the integration time the range camera should use in order to precisely determine depth information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAILEY, DAVID H.; BORWEIN, JONATHAN M.
A recent paper by the present authors, together with mathematical physicists David Broadhurst and M. Larry Glasser, explored Bessel moment integrals, namely definite integrals of the general form ∫₀^∞ t^m f^n(t) dt, where the function f(t) is one of the classical Bessel functions. In that paper, numerous previously unknown analytic evaluations were obtained, using a combination of analytic methods together with some fairly high-powered numerical computations, often performed on highly parallel computers. In several instances, while we were able to numerically discover what appears to be a solid analytic identity, based on extremely high-precision numerical computations, we were unable to find a rigorous proof. Thus we present here a brief list of some of these unproven but numerically confirmed identities.
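For the simplest such moment, ∫₀^∞ K₀(t) dt = π/2: substituting the integral representation K₀(t) = ∫₀^∞ e^(−t cosh u) du and swapping the order of integration reduces the moment to ∫₀^∞ sech u du, which is easy to confirm numerically. A toy double-precision check with a composite Simpson rule, not the extreme-precision machinery the paper refers to:

```python
import math

# Toy confirmation of the Bessel-moment identity ∫_0^∞ K_0(t) dt = π/2.
# Using K_0(t) = ∫_0^∞ exp(-t cosh u) du and swapping the order of
# integration reduces the moment to ∫_0^∞ sech(u) du, evaluated here by
# composite Simpson quadrature on a truncated interval.
def simpson(f, a, b, n):                  # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*k * h) for k in range(1, n // 2))
    return s * h / 3

sech = lambda u: 1.0 / math.cosh(u)
val = simpson(sech, 0.0, 40.0, 20000)     # sech decays like 2e^{-u}, so [0,40] suffices
print(abs(val - math.pi / 2) < 1e-9)      # → True
```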
Microfabricated ion trap array
Blain, Matthew G [Albuquerque, NM; Fleming, James G [Albuquerque, NM
2006-12-26
A microfabricated ion trap array, comprising a plurality of ion traps having an inner radius of order one micron, can be fabricated using surface micromachining techniques and materials known to the integrated circuits manufacturing and microelectromechanical systems industries. Micromachining methods enable batch fabrication, reduced manufacturing costs, dimensional and positional precision, and monolithic integration of massive arrays of ion traps with microscale ion generation and detection devices. Massive arraying enables the microscale ion traps to retain the resolution, sensitivity, and mass range advantages necessary for high chemical selectivity. The reduced electrode voltage enables integration of the microfabricated ion trap array with on-chip circuit-based rf operation and detection electronics (i.e., cell phone electronics). Therefore, the full performance advantages of the microfabricated ion trap array can be realized in truly field portable, handheld microanalysis systems.
Integrated and differential accuracy in resummed cross sections
Bertolini, Daniele; Solon, Mikhail P.; Walsh, Jonathan R.
2017-03-30
Standard QCD resummation techniques provide precise predictions for the spectrum and the cumulant of a given observable. The integrated spectrum and the cumulant differ by higher-order terms which, however, can be numerically significant. In this paper we propose a method, which we call the σ-improved scheme, to resolve this issue. It consists of two steps: (i) include higher-order terms in the spectrum to improve the agreement with the cumulant central value, and (ii) employ profile scales that encode correlations between different points to give robust uncertainty estimates for the integrated spectrum. We provide a generic algorithm for determining such profile scales, and show the application to the thrust distribution in e+e− collisions at NLL′+NLO and NNLL′+NNLO.
Campana, Lorenzo; Breitbeck, Robert; Bauer-Kreuz, Regula; Buck, Ursula
2016-05-01
This study evaluated the feasibility of documenting patterned injury using three dimensions and true colour photography without complex 3D surface documentation methods. This method is based on a generated 3D surface model using radiologic slice images (CT) while the colour information is derived from photographs taken with commercially available cameras. The external patterned injuries were documented in 16 cases using digital photography as well as highly precise photogrammetry-supported 3D structured light scanning. The internal findings of these deceased were recorded using CT and MRI. For registration of the internal with the external data, two different types of radiographic markers were used and compared. The 3D surface model generated from CT slice images was linked with the photographs, and thereby digital true-colour 3D models of the patterned injuries could be created (Image projection onto CT/IprojeCT). In addition, these external models were merged with the models of the somatic interior. We demonstrated that 3D documentation and visualization of external injury findings by integration of digital photography in CT/MRI data sets is suitable for the 3D documentation of individual patterned injuries to a body. Nevertheless, this documentation method is not a substitution for photogrammetry and surface scanning, especially when the entire bodily surface is to be recorded in three dimensions including all external findings, and when precise data is required for comparing highly detailed injury features with the injury-inflicting tool.
Chen Peng; Ao Li
2017-01-01
The emergence of multi-dimensional data offers opportunities for more comprehensive analysis of the molecular characteristics of human diseases, and therefore for improving diagnosis, treatment, and prevention. In this study, we proposed a heterogeneous network based method by integrating multi-dimensional data (HNMD) to identify GBM-related genes. The novelty of the method lies in combining multi-dimensional GBM data from the TCGA dataset, which provide comprehensive information about genes, with protein-protein interactions to construct a weighted heterogeneous network that reflects both the general and disease-specific relationships between genes. In addition, a propagation algorithm with resistance is introduced to precisely score and rank GBM-related genes. The results of a comprehensive performance evaluation show that the proposed method significantly outperforms network based methods with single-dimensional data and other existing approaches. Subsequent analysis of the top ranked genes suggests they may be functionally implicated in GBM, which further corroborates the superiority of the proposed method. The source code and the results of HNMD can be downloaded from the following URL: http://bioinformatics.ustc.edu.cn/hnmd/ .
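The propagation step can be sketched as a restart-style iteration on a toy weighted network; the restart parameter standing in for the paper's "resistance", and the example graph and seed, are assumptions for illustration, not the published algorithm:

```python
# Sketch of network propagation in the spirit of such methods: iterate
# p <- (1 - r) * W_norm p + r * p0 on a small weighted graph until
# convergence, where r plays the role of a restart ("resistance")
# parameter and p0 seeds known disease genes. Graph and r are illustrative.
def propagate(W, p0, r=0.3, tol=1e-12, max_iter=10000):
    n = len(p0)
    # column-normalize W so each node distributes its score to neighbors
    col = [sum(W[i][j] for i in range(n)) or 1.0 for j in range(n)]
    p = p0[:]
    for _ in range(max_iter):
        q = [(1 - r) * sum(W[i][j] * p[j] / col[j] for j in range(n)) + r * p0[i]
             for i in range(n)]
        if max(abs(a - b) for a, b in zip(q, p)) < tol:
            return q
        p = q
    return p

W = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]            # toy undirected interaction network
p0 = [1.0, 0.0, 0.0, 0.0]     # seed gene at node 0
scores = propagate(W, p0)
print(scores.index(max(scores)))  # the seed keeps the top score → 0
```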
Design, Integration and Flight Test of a Pair of Autonomous Spacecraft Flying in Formation
2013-05-01
representatives from the Air Force Research Laboratory, NASA's Goddard Space Flight Center, the Jet Propulsion Laboratory, Boeing, Lockheed Martin, as ... categories: elliptical, hyperbolic and parabolic (known as "Keplerian orbits"), each with their own characteristics and applications. These equations ... of M-SAT's operation is that of an elliptical nature, or more precisely a near-circular orbit. The primary method of determining the orbital elements
Survey of optimization techniques for nonlinear spacecraft trajectory searches
NASA Technical Reports Server (NTRS)
Wang, Tseng-Chan; Stanford, Richard H.; Sunseri, Richard F.; Breckheimer, Peter J.
1988-01-01
Mathematical analysis of the optimal search of a nonlinear spacecraft trajectory to arrive at a set of desired targets is presented. A high precision integrated trajectory program and several optimization software libraries are used to search for a converged nonlinear spacecraft trajectory. Several examples for the Galileo Jupiter Orbiter and the Ocean Topography Experiment (TOPEX) are presented that illustrate a variety of the optimization methods used in nonlinear spacecraft trajectory searches.
Read-In Integrated Circuits for Large-Format Multi-Chip Emitter Arrays
2015-03-31
chip has been designed and fabricated using the ONSEMI C5N process to verify our approach. Keywords: Large scale arrays; Tiling; Mosaic; Abutment ... required. X and Y addressing is not a sustainable and easily expanded addressing architecture, nor will it work well with abutted RIICs. Abutment Method ... Abutting RIICs into an array is challenging because of the precise positioning required to achieve a uniform image. This problem is a new design
Astrophysical masers - Inverse methods, precision, resolution and uniqueness
NASA Astrophysics Data System (ADS)
Lerche, I.
1986-07-01
The paper provides exact analytic solutions to the two-level, steady-state, maser problem in parametric form, with the emergent intensities expressed in terms of the incident intensities and with the maser length also given in terms of an integral over the intensities. It is shown that some assumption must be made on the emergent intensity on the nonobservable side of the astrophysical maser in order to obtain any inversion of the equations. The incident intensities can then be expressed in terms of the emergent, observable, flux. It is also shown that the inversion is nonunique unless a homogeneous linear integral equation has only a null solution. Constraints imposed by knowledge of the physical length of the maser are felt in a nonlinear manner by the parametric variable and do not appear to provide any substantive additional information to reduce the degree of nonuniqueness of the inverse solutions. It is concluded that the questions of precision, resolution and uniqueness for solutions to astrophysical maser problems will remain more of an emotional art than a logical science for some time to come.
Pinto, Rita; Hansen, Lars; Hintze, John; Almeida, Raquel; Larsen, Sylvester; Coskun, Mehmet; Davidsen, Johanne; Mitchelmore, Cathy; David, Leonor; Troelsen, Jesper Thorvald; Bennett, Eric Paul
2017-07-27
Tetracycline-based inducible systems provide powerful methods for functional studies where gene expression can be controlled. However, the lack of tight control of the inducible system, leading to leakiness and adverse effects caused by undesirable tetracycline dosage requirements, has proven to be a limitation. Here, we report that the combined use of genome editing tools and latest-generation Tet-On systems can resolve these issues. Our principle is based on precise integration of inducible transcriptional elements (coined PrIITE) targeted to: (i) exons of an endogenous gene of interest (GOI) and (ii) a safe harbor locus. Using PrIITE cells harboring a GFP reporter or CDX2 transcription factor, we demonstrate discrete inducibility of gene expression with complete abrogation of leakiness. CDX2 PrIITE cells generated by this approach uncovered novel CDX2 downstream effector genes. Our results provide a strategy for characterization of dose-dependent effector functions of essential genes that require absence of endogenous gene expression. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Assembly and Multiplex Genome Integration of Metabolic Pathways in Yeast Using CasEMBLR.
Jakočiūnas, Tadas; Jensen, Emil D; Jensen, Michael K; Keasling, Jay D
2018-01-01
Genome integration is a vital step for implementing large biochemical pathways to build a stable microbial cell factory. Although traditional strain construction strategies are well established for the model organism Saccharomyces cerevisiae, recent advances in CRISPR/Cas9-mediated genome engineering allow much higher throughput and robustness in terms of strain construction. In this chapter, we describe CasEMBLR, a highly efficient and marker-free genome engineering method for one-step integration of in vivo assembled expression cassettes in multiple genomic sites simultaneously. CasEMBLR capitalizes on the CRISPR/Cas9 technology to generate double-strand breaks in genomic loci, thus prompting native homologous recombination (HR) machinery to integrate exogenously derived homology templates. As proof-of-principle for microbial cell factory development, CasEMBLR was used for one-step assembly and marker-free integration of the carotenoid pathway from 15 exogenously supplied DNA parts into three targeted genomic loci. As a second proof-of-principle, a total of ten DNA parts were assembled and integrated in two genomic loci to construct a tyrosine production strain, and at the same time knocking out two genes. This new method complements and improves the field of genome engineering in S. cerevisiae by providing a more flexible platform for rapid and precise strain building.
Why farming with high tech methods should integrate elements of organic agriculture.
Ammann, Klaus
2009-09-01
In the previous article [Ammann, K. (2008) Feature: integrated farming: why organic farmers should use transgenic crops. New Biotechnol. 25, 101-107], a plea for the introduction of transgenic crops into organic and integrated farming, it was announced that the complementary topic, namely that high-tech farmers should integrate elements of organic agriculture, would be taken up in a follow-up. Some selected arguments for this view are summarised here. Essentially, they comprise a differentiated view of agro-biodiversity outside the field of production, together with landscape-management methods to enhance biodiversity levels; both elements are compatible with basic ideas of organic farming. First, Precision Farming is given as one example of the many ways to support agricultural production through high technology, with the aim of reducing energy input, maintaining excellent soil conditions and enhancing yield. It is clear from this analysis that modern agriculture and certain elements of organic-integrated agriculture are compatible. There are aspects of organic-integrated farming, such as better recycling schemes and a stronger focus on socio-economic issues, that high-tech farming needs to take up seriously, since organic-integrated systems put considerable emphasis on these elements and important research data on them are available. In the final part a new concept of dynamic sustainability is presented.
Brautigam, Chad A; Zhao, Huaying; Vargas, Carolyn; Keller, Sandro; Schuck, Peter
2016-05-01
Isothermal titration calorimetry (ITC) is a powerful and widely used method to measure the energetics of macromolecular interactions by recording a thermogram of differential heating power during a titration. However, traditional ITC analysis is limited by stochastic thermogram noise and by the limited information content of a single titration experiment. Here we present a protocol for bias-free thermogram integration based on automated shape analysis of the injection peaks, followed by combination of isotherms from different calorimetric titration experiments into a global analysis, statistical analysis of binding parameters and graphical presentation of the results. This is performed using the integrated public-domain software packages NITPIC, SEDPHAT and GUSSI. The recently developed low-noise thermogram integration approach and global analysis allow for more precise parameter estimates and more reliable quantification of multisite and multicomponent cooperative and competitive interactions. Titration experiments typically take 1-2.5 h each, and global analysis usually takes 10-20 min.
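The core arithmetic of thermogram integration, baseline subtraction followed by numerical integration of each injection peak, can be sketched in a few lines; the synthetic trace is illustrative, and NITPIC's automated shape analysis is not reproduced:

```python
# Sketch of the basic operation behind thermogram integration: subtract a
# baseline estimated from the flat regions flanking an injection peak,
# then integrate the residual power (µW) over time (s) to get the heat (µJ).
def integrate_peak(t, power, pre, post):
    # linear baseline through the mean power of the pre- and post-peak windows
    b0 = sum(power[i] for i in pre) / len(pre)
    b1 = sum(power[i] for i in post) / len(post)
    t0 = sum(t[i] for i in pre) / len(pre)
    t1 = sum(t[i] for i in post) / len(post)
    slope = (b1 - b0) / (t1 - t0)
    resid = [p - (b0 + slope * (ti - t0)) for ti, p in zip(t, power)]
    # trapezoidal rule over the whole window
    return sum((resid[i] + resid[i+1]) * (t[i+1] - t[i]) / 2
               for i in range(len(t) - 1))

t = [float(i) for i in range(21)]
power = [1.0] * 5 + [1 + x for x in (2, 5, 8, 5, 2)] + [1.0] * 11  # flat + peak
heat = integrate_peak(t, power, pre=range(5), post=range(15, 21))
print(round(heat, 6))  # → 22.0
```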
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, Aftab; Khan, S. N.; Wilson, Brian G.
2011-07-06
A numerically efficient, accurate, and easily implemented integration scheme over convex Voronoi polyhedra (VP) is presented for use in ab initio electronic-structure calculations. We combine a weighted Voronoi tessellation with isoparametric integration via Gauss-Legendre quadratures to provide rapidly convergent VP integrals for a variety of integrands, including those with a Coulomb singularity. We showcase the capability of our approach by first applying it to an analytic charge-density model, achieving machine-precision accuracy with expected convergence properties in milliseconds. For contrast, we compare our results to those using shape-functions and show our approach is more than 10^5 times faster and 10^7 times more accurate. Furthermore, a weighted Voronoi tessellation also allows for a physics-based partitioning of space that guarantees convex, space-filling VP while reflecting accurate atomic size and site charges, as we show within KKR methods applied to Fe-Pd alloys.
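The Gauss-Legendre building block that such schemes compose over mapped subcells can be sketched in one dimension; the Voronoi tessellation and isoparametric mapping of the paper are not reproduced here:

```python
import math

# One-dimensional sketch of the Gauss-Legendre quadrature building block:
# a 3-point rule on [-1, 1], affinely mapped to [a, b], integrates
# polynomials up to degree 5 exactly (2n - 1 with n = 3 nodes).
GL3 = [(-math.sqrt(3/5), 5/9), (0.0, 8/9), (math.sqrt(3/5), 5/9)]

def gauss_legendre_3(f, a, b):
    mid, half = (a + b) / 2, (b - a) / 2
    return half * sum(w * f(mid + half * x) for x, w in GL3)

# degree-4 integrand: exact answer is 1/5
val = gauss_legendre_3(lambda x: x**4, 0.0, 1.0)
print(abs(val - 0.2) < 1e-12)  # exact up to rounding → True
```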
Happel, Max F K; Jeschke, Marcus; Ohl, Frank W
2010-08-18
Primary sensory cortex integrates sensory information from afferent feedforward thalamocortical projection systems and convergent intracortical microcircuits. Both input systems have been demonstrated to provide different aspects of sensory information. Here we have used high-density recordings of laminar current source density (CSD) distributions in primary auditory cortex of Mongolian gerbils in combination with pharmacological silencing of cortical activity and analysis of the residual CSD, to dissociate the feedforward thalamocortical contribution and the intracortical contribution to spectral integration. We found a temporally highly precise integration of both types of inputs when the stimulation frequency was in close spectral neighborhood of the best frequency of the measurement site, in which the overlap between both inputs is maximal. Local intracortical connections provide both directly feedforward excitatory and modulatory input from adjacent cortical sites, which determine how concurrent afferent inputs are integrated. Through separate excitatory horizontal projections, terminating in cortical layers II/III, information about stimulus energy in greater spectral distance is provided even over long cortical distances. These projections effectively broaden spectral tuning width. Based on these data, we suggest a mechanism of spectral integration in primary auditory cortex that is based on temporally precise interactions of afferent thalamocortical inputs and different short- and long-range intracortical networks. The proposed conceptual framework allows integration of different and partly controversial anatomical and physiological models of spectral integration in the literature.
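The standard one-dimensional CSD estimate underlying laminar analyses of this kind is the second spatial derivative of the field potential across depth; a minimal sketch with illustrative numbers, not recorded data:

```python
# One-dimensional CSD estimate: CSD(z) ≈ -σ · [φ(z+h) - 2φ(z) + φ(z-h)] / h²,
# the discrete second spatial derivative of the LFP across recording depth.
# Conductivity σ (S/m) and electrode spacing h (mm) are illustrative.
def csd_profile(phi, h, sigma=0.3):
    return [-sigma * (phi[i+1] - 2*phi[i] + phi[i-1]) / h**2
            for i in range(1, len(phi) - 1)]

phi = [0.0, 0.1, 0.4, 0.1, 0.0]        # LFP depth profile with a central peak
csd = csd_profile(phi, h=0.1)
print([round(v, 6) for v in csd])      # → [-6.0, 18.0, -6.0]
```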
HALO--a Java framework for precise transcript half-life determination.
Friedel, Caroline C; Kaufmann, Stefanie; Dölken, Lars; Zimmer, Ralf
2010-05-01
Recent improvements in experimental technologies now allow measurements of de novo transcription and/or RNA decay at whole transcriptome level and determination of precise transcript half-lives. Such transcript half-lives provide important insights into the regulation of biological processes and the relative contributions of RNA decay and de novo transcription to differential gene expression. In this article, we present HALO (Half-life Organizer), the first software for the precise determination of transcript half-lives from measurements of RNA de novo transcription or decay determined with microarrays or RNA-seq. In addition, methods for quality control, filtering and normalization are supplied. HALO provides a graphical user interface, command-line tools and a well-documented Java application programming interface (API). Thus, it can be used both by biologists to determine transcript half-lives fast and reliably with the provided user interfaces as well as software developers integrating transcript half-life analysis into other gene expression profiling pipelines. Source code, executables and documentation are available at http://www.bio.ifi.lmu.de/software/halo.
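The arithmetic at the heart of half-life determination, assuming simple first-order decay, can be sketched as follows; HALO's normalization, filtering and quality control are not reproduced, and the numbers are illustrative:

```python
import math

# Core half-life arithmetic under first-order decay N(t) = N(0)·e^{-λt}:
# two abundance measurements give λ = ln(N1/N2)/(t2 - t1), and the
# half-life is t_half = ln 2 / λ.
def half_life(n1, t1, n2, t2):
    lam = math.log(n1 / n2) / (t2 - t1)
    return math.log(2) / lam

# transcript level drops from 100 to 25 units in 4 h, i.e. two halvings
print(half_life(100.0, 0.0, 25.0, 4.0))  # → 2.0
```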
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Zhang, Wei; Luo, Yi; Yang, Weimin; Chen, Liang
2013-01-01
In the assembly of miniature devices, the position and orientation of the parts to be assembled must be guaranteed during or after assembly. In some cases, the relative position or orientation errors among the parts cannot be measured from only one direction using a visual method, because of visual occlusion or because the features of the parts are distributed three-dimensionally. An automatic assembly system for precise miniature devices is introduced. In this modular assembly system, two machine vision systems were employed for measurement of the three-dimensionally distributed assembly errors. High-resolution CCD cameras and high-repeatability precision stages were integrated to realize high-precision measurement in a large workspace. The two cameras worked in collaboration during the measurement procedure to eliminate the influence of movement errors of the rotational and translational stages. A set of templates was designed for calibration of the vision systems and evaluation of the system's measurement accuracy.
2017-01-01
To improve point-of-care quantification using microchip capillary electrophoresis (MCE), the chip-to-chip variabilities inherent in disposable, single-use devices must be addressed. This work proposes to integrate an internal standard (ISTD) into the microchip by adding it to the background electrolyte (BGE) instead of the sample—thus eliminating the need for additional sample manipulation, microchip redesigns, and/or system expansions required for traditional ISTD usage. Cs and Li ions were added as integrated ISTDs to the BGE, and their effects on the reproducibility of Na quantification were explored. Results were then compared to the conclusions of our previous publication which used Cs and Li as traditional ISTDs. The in-house fabricated microchips, electrophoretic protocols, and solution matrixes were kept constant, allowing the proposed method to be reliably compared to the traditional method. Using the integrated ISTDs, both Cs and Li improved the Na peak area reproducibility approximately 2-fold, to final RSD values of 2.2–4.7% (n = 900). In contrast (to previous work), Cs as a traditional ISTD resulted in final RSDs of 2.5–8.8%, while the traditional Li ISTD performed poorly with RSDs of 6.3–14.2%. These findings suggest integrated ISTDs are a viable method to improve the precision of disposable MCE devices—giving matched or superior results to the traditional method in this study while neither increasing system cost nor complexity. PMID:28192985
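Why a well-behaved ISTD improves precision can be seen in a few lines: run-to-run variability (injection volume, field strength) scales analyte and ISTD peak areas together, so their ratio cancels it. A synthetic sketch with assumed noise levels, not the study's data:

```python
import random, statistics

# Synthetic demonstration of ISTD ratio normalization: a shared per-run
# factor f (8% spread) scales both peak areas, plus 1% independent
# detection noise on each. The Na/Cs area ratio cancels the shared factor.
random.seed(1)
true_na, true_cs = 100.0, 50.0
runs = [(true_na * f * random.gauss(1, 0.01),   # Na peak area
         true_cs * f * random.gauss(1, 0.01))   # Cs (ISTD) peak area
        for f in (random.gauss(1, 0.08) for _ in range(500))]

def rsd(xs):
    return 100 * statistics.stdev(xs) / statistics.mean(xs)

raw = rsd([na for na, cs in runs])
normed = rsd([na / cs for na, cs in runs])
print(raw > 5 and normed < 3)  # ratio removes the shared variability → True
```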
Melanson, Edward L; Swibas, Tracy; Kohrt, Wendy M; Catenacci, Vicki A; Creasy, Seth A; Plasqui, Guy; Wouters, Loek; Speakman, John R; Berman, Elena S F
2018-02-01
When the doubly labeled water (DLW) method is used to measure total daily energy expenditure (TDEE), isotope measurements are typically performed using isotope ratio mass spectrometry (IRMS). New technologies, such as off-axis integrated cavity output spectroscopy (OA-ICOS), provide comparable isotopic measurements of standard waters and human urine samples, but the accuracy of carbon dioxide production (V̇CO₂) determined with OA-ICOS has not been demonstrated. We compared simultaneous measurements of V̇CO₂ obtained using whole-room indirect calorimetry (IC) with DLW-based measurements from IRMS and OA-ICOS. Seventeen subjects (10 female; 22 to 63 yr) were studied for 7 consecutive days in the IC. Subjects consumed a dose of 0.25 g H₂¹⁸O (98% APE) and 0.14 g ²H₂O (99.8% APE) per kilogram of total body water, and urine samples were obtained on days 1 and 8 to measure average daily V̇CO₂ using OA-ICOS and IRMS. V̇CO₂ was calculated using both the plateau and intercept methods. There were no differences in V̇CO₂ measured by OA-ICOS or IRMS compared with IC when the plateau method was used. When the intercept method was used, V̇CO₂ measured using OA-ICOS did not differ from IC, but V̇CO₂ measured using IRMS was significantly lower than IC. Accuracy (~1-5%), precision (~8%), intraclass correlation coefficients (R = 0.87-0.90), and root mean squared error (30-40 liters/day) of V̇CO₂ measured by OA-ICOS and IRMS were similar. Both OA-ICOS and IRMS produced measurements of V̇CO₂ with comparable accuracy and precision compared with IC.
Odoardi, Sara; Anzillotti, Luca; Strano-Rossi, Sabina
2014-10-01
The complexity of biological matrices, such as blood, requires the development of suitably selective and reliable sample pretreatment procedures prior to instrumental analysis. A method has been developed for the analysis of drugs of abuse and their metabolites from different chemical classes (opiates, methadone, fentanyl and analogues, cocaine, amphetamines and amphetamine-like substances, ketamine, LSD) in human blood using dried blood spots (DBS) and subsequent UHPLC-MS/MS analysis. DBS extraction required only 100 μL of sample, to which the internal standards were added; three droplets (30 μL each) of this solution were then spotted on the card, left to dry for 1 h, punched and extracted with methanol with 0.1% formic acid. The supernatant was evaporated and the residue reconstituted in 100 μL of water with 0.1% formic acid and injected into the UHPLC-MS/MS system. The method was validated considering the following parameters: LOD and LOQ, linearity, precision, accuracy, matrix effect and dilution integrity. LODs were 0.05-1 ng/mL and LOQs were 0.2-2 ng/mL. The method showed satisfactory linearity for all substances, with determination coefficients always higher than 0.99. Intra- and inter-day precision, accuracy, matrix effect and dilution integrity were acceptable for all the studied substances. The addition of internal standards before DBS extraction and the deposition of a fixed volume of blood on the filter cards ensured accurate quantification of the analytes. The validated method was then applied to authentic postmortem blood samples. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Woollands, Robyn M.; Read, Julie L.; Probe, Austin B.; Junkins, John L.
2017-12-01
We present a new method for solving the multiple-revolution perturbed Lambert problem using the method of particular solutions and modified Chebyshev-Picard iteration. The method of particular solutions differs from the well-known Newton shooting method in that integration of the state transition matrix (36 additional differential equations) is not required; instead it makes use of a reference trajectory and a set of n particular solutions. Any numerical integrator can be used for solving two-point boundary value problems with the method of particular solutions; however, we show that using modified Chebyshev-Picard iteration affords an avenue for increased efficiency that is not available with other step-by-step integrators. We take advantage of the path-approximation nature of modified Chebyshev-Picard iteration (nodes iteratively converge to fixed points in space) and utilize a variable-fidelity force model for propagating the reference trajectory. Remarkably, we demonstrate that computing the particular solutions with only low-fidelity function evaluations greatly increases the efficiency of the algorithm while maintaining machine-precision accuracy. Our study reveals that solving the perturbed Lambert problem using the method of particular solutions with modified Chebyshev-Picard iteration is about an order of magnitude faster than the classical shooting method with a tenth- to twelfth-order Runge-Kutta integrator. It is well known that the solution to Lambert's problem over multiple revolutions is not unique, and to ensure that all possible solutions are considered we make use of a reliable preexisting Keplerian Lambert solver to warm-start our perturbed algorithm.
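A bare-bones Picard iteration of the kind that Chebyshev-Picard methods accelerate can be sketched on a fixed grid of nodes; here trapezoidal quadrature stands in for the Chebyshev polynomial machinery, and the test problem x′ = x is illustrative:

```python
import math

# Picard iteration on a fixed node grid: iterate the integral equation
# x_{k+1}(t) = x0 + ∫_0^t f(x_k(τ)) dτ, approximating the whole solution
# path at once (the "path approximation" property referenced above).
# Trapezoidal quadrature replaces Chebyshev polynomials in this sketch.
def picard(f, x0, t_grid, iters=60):
    x = [x0] * len(t_grid)                # initial guess: constant path
    for _ in range(iters):
        fx = [f(v) for v in x]
        new = [x0]
        for i in range(1, len(t_grid)):
            dt = t_grid[i] - t_grid[i-1]
            new.append(new[-1] + (fx[i-1] + fx[i]) * dt / 2)
        x = new
    return x

t = [i / 100 for i in range(101)]         # nodes on [0, 1]
x = picard(lambda v: v, 1.0, t)           # x' = x, x(0) = 1  →  x(t) = e^t
print(abs(x[-1] - math.e) < 1e-3)         # → True
```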
Precision linear ramp function generator
Jatko, W.B.; McNeilly, D.R.; Thacker, L.H.
1984-08-01
A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.
Precision linear ramp function generator
Jatko, W. Bruce; McNeilly, David R.; Thacker, Louis H.
1986-01-01
A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.
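The principle behind both records above can be sketched in discrete time: an integrator driven by a constant input produces a linear ramp from a chosen baseline, with the ramp rate set by that constant. The derivative feedback loop that stabilizes the analog circuit is not modeled here:

```python
# Discrete-time sketch of the ramp generator's principle: an integrator
# accumulating a constant input yields a linear ramp from the selected
# baseline; the constant sets the ramp rate. Units are arbitrary.
def ramp(baseline, rate, dt, steps):
    v, out = baseline, []
    for _ in range(steps):
        out.append(v)
        v += rate * dt        # integrator step with constant input
    return out

samples = ramp(baseline=0.0, rate=0.5, dt=1.0, steps=5)
print(samples)  # → [0.0, 0.5, 1.0, 1.5, 2.0]
```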
Kovač, Marko; Bauer, Arthur; Ståhl, Göran
2014-01-01
Background, Materials and Methods: To meet the demands of sustainable forest management and international commitments, European nations have designed a variety of forest-monitoring systems for specific needs. While the majority of countries are committed to independent, single-purpose inventorying, a minority have merged their single-purpose forest inventory systems into integrated forest resource inventories. The statistical efficiencies of the Bavarian, Slovene and Swedish integrated forest resource inventory designs are investigated via statistical parameters of the variables growing stock volume, share of damaged trees, and deadwood volume. The parameters are derived using the estimators for the given inventory designs. The required sample sizes are derived via the general formula for non-stratified independent samples and via statistical power analyses. The cost effectiveness of the designs is compared via two simple cost-effectiveness ratios. Results: In terms of precision, the most illustrative parameters of the variables are relative standard errors; their values range between 1% and 3% when the variables' variation is low (s% < 80%) and are higher in the case of higher variation. A comparison of the actual and required sample sizes shows that the actual sample sizes were deliberately set high to provide precise estimates for the majority of variables and strata. In turn, the successive inventories are statistically efficient, because they allow detecting mean changes of variables with powers higher than 90%; the highest precision is attained for changes in growing stock volume and the lowest for changes in the share of damaged trees. Two indicators of cost effectiveness also show that the time spent measuring one variable decreases with the complexity of the inventories.
Conclusion: There is an increasing need for credible information on forest resources to be used for decision making and national and international policy making. Such information can be cost-efficiently provided through integrated forest resource inventories. PMID:24941120
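The "general formula for non-stratified independent samples" is not spelled out in the abstract; a common textbook form is n = (t · s% / e%)², relating the coefficient of variation s%, the target relative standard error e%, and a confidence multiplier t. A minimal sketch under that assumption (the function name and default t = 1.96 are illustrative):

```python
import math

def required_sample_size(cv_pct, target_err_pct, t=1.96):
    """Required n for a non-stratified independent sample,
    n = (t * CV% / e%)**2, rounded up (textbook form, assumed here)."""
    return math.ceil((t * cv_pct / target_err_pct) ** 2)

# A variable with an 80% coefficient of variation, targeting a 2% relative
# standard error at roughly the 95% confidence level:
n = required_sample_size(80.0, 2.0)
```

Halving the precision requirement (e% from 2 to 4) cuts the required sample size by a factor of four, which is why the tolerated error dominates inventory cost.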
G-DOC Plus - an integrative bioinformatics platform for precision medicine.
Bhuvaneshwar, Krithika; Belouali, Anas; Singh, Varun; Johnson, Robert M; Song, Lei; Alaoui, Adil; Harris, Michael A; Clarke, Robert; Weiner, Louis M; Gusev, Yuriy; Madhavan, Subha
2016-04-30
G-DOC Plus is a data integration and bioinformatics platform that uses cloud computing and other advanced computational tools to handle a variety of biomedical big data, including gene expression arrays, NGS and medical images, so that they can be analyzed in the full context of other omics and clinical information. G-DOC Plus currently holds data from over 10,000 patients selected from private and public resources including Gene Expression Omnibus (GEO), The Cancer Genome Atlas (TCGA) and the recently added datasets from the REpository for Molecular BRAin Neoplasia DaTa (REMBRANDT), caArray studies of lung and colon cancer, ImmPort and the 1000 Genomes data sets. The system allows researchers to explore clinical-omic data one sample at a time, as a cohort of samples, or at the level of a population, providing the user with a comprehensive view of the data. G-DOC Plus tools have been leveraged in cancer and non-cancer studies for hypothesis generation and validation, biomarker discovery and multi-omics analysis, to explore somatic mutations and cancer MRI images, as well as for training and graduate education in bioinformatics, data and computational sciences. Several of these use cases are described in this paper to demonstrate its multifaceted usability. G-DOC Plus can be used to support a variety of user groups in multiple domains to enable hypothesis generation for precision medicine research. The long-term vision of G-DOC Plus is to extend this translational bioinformatics platform to stay current with emerging omics technologies and analysis methods to continue supporting novel hypothesis generation, analysis and validation for integrative biomedical research. By integrating several aspects of the disease and exposing various data elements, such as outpatient lab workup, pathology, radiology, current treatments, molecular signatures and expected outcomes over a web interface, G-DOC Plus will continue to strengthen precision medicine research.
G-DOC Plus is available at: https://gdoc.georgetown.edu .
NASA Astrophysics Data System (ADS)
Regnier, D.; Dubray, N.; Verrière, M.; Schunck, N.
2018-04-01
The time-dependent generator coordinate method (TDGCM) is a powerful method to study the large amplitude collective motion of quantum many-body systems such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the TDGCM leads to a local, time-dependent Schrödinger equation in a multi-dimensional collective space. In this paper, we present version 2.0 of the code FELIX that solves the collective Schrödinger equation in a finite element basis. This new version features: (i) the ability to solve a generalized TDGCM+GOA equation with a metric term in the collective Hamiltonian, (ii) support for new kinds of finite elements and different types of quadrature to compute the discretized Hamiltonian and overlap matrices, (iii) the possibility to leverage the spectral element scheme, (iv) an explicit Krylov approximation of the time propagator for time integration instead of the implicit Crank-Nicolson method implemented in the first version, (v) an entirely redesigned workflow. We benchmark this release on an analytic problem as well as on realistic two-dimensional calculations of the low-energy fission of 240Pu and 256Fm. Low to moderate numerical precision calculations are most efficiently performed with simplex elements with a degree 2 polynomial basis. Higher precision calculations should instead use the spectral element method with a degree 4 polynomial basis. We emphasize that in a realistic calculation of fission mass distributions of 240Pu, FELIX-2.0 is about 20 times faster than its previous release (within a numerical precision of a few percent).
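The explicit Krylov approximation of the time propagator mentioned in item (iv) can be illustrated in a few lines. The sketch below is not FELIX code: it propagates a wave packet under a toy tridiagonal "collective Hamiltonian" (an assumed discrete Laplacian) using the Lanczos recursion, then cross-checks against exact dense exponentiation.

```python
import numpy as np

def krylov_propagate(H, psi, dt, m=20):
    """Approximate exp(-1j*dt*H) @ psi in an m-dimensional Krylov space."""
    n = len(psi)
    V = np.zeros((n, m), dtype=complex)
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = psi / np.linalg.norm(psi)
    w = H @ V[:, 0]
    alpha[0] = np.vdot(V[:, 0], w).real
    w = w - alpha[0] * V[:, 0]
    for j in range(1, m):  # Lanczos recursion builds an orthonormal basis
        beta[j - 1] = np.linalg.norm(w)
        V[:, j] = w / beta[j - 1]
        w = H @ V[:, j]
        alpha[j] = np.vdot(V[:, j], w).real
        w = w - alpha[j] * V[:, j] - beta[j - 1] * V[:, j - 1]
    # exponentiate the small tridiagonal projection of H exactly
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)
    coef = evecs @ (np.exp(-1j * dt * evals) * evecs[0, :])
    return np.linalg.norm(psi) * (V @ coef)

# Toy 1D collective Hamiltonian: discrete Laplacian (an assumption)
n = 64
H = np.zeros((n, n))
for i in range(n):
    H[i, i] = 2.0
    if i + 1 < n:
        H[i, i + 1] = H[i + 1, i] = -1.0

psi = np.zeros(n, dtype=complex)
psi[n // 2] = 1.0            # localized initial wave packet
dt = 0.05
psi_next = krylov_propagate(H, psi, dt)

# Cross-check against exact dense exponentiation
ev, U = np.linalg.eigh(H)
psi_exact = U @ (np.exp(-1j * dt * ev) * (U.T @ psi))
err = np.linalg.norm(psi_next - psi_exact)
norm = np.linalg.norm(psi_next)
```

The appeal of the explicit scheme is visible here: each step needs only matrix-vector products with H, whereas Crank-Nicolson requires a linear solve per step.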
Open-Loop Flight Testing of COBALT Navigation and Sensor Technologies for Precise Soft Landing
NASA Technical Reports Server (NTRS)
Carson, John M., III; Restrepo, Caroline I.; Seubert, Carl R.; Amzajerdian, Farzin; Pierrottet, Diego F.; Collins, Steven M.; O'Neal, Travis V.; Stelling, Richard
2017-01-01
An open-loop flight test campaign of the NASA COBALT (CoOperative Blending of Autonomous Landing Technologies) payload was conducted onboard the Masten Xodiac suborbital rocket testbed. The payload integrates two complementary sensor technologies that together provide a spacecraft with knowledge during planetary descent and landing to precisely navigate and softly touch down in close proximity to targeted surface locations. The two technologies are the Navigation Doppler Lidar (NDL), for high-precision velocity and range measurements, and the Lander Vision System (LVS), for map-relative state estimates. A specialized navigation filter running onboard COBALT fuses the NDL and LVS data in real time to produce a very precise Terrain Relative Navigation (TRN) solution that is suitable for future, autonomous planetary landing systems that require precise and soft landing capabilities. During the open-loop flight campaign, the COBALT payload acquired measurements and generated a precise navigation solution, but the Xodiac vehicle planned and executed its maneuvers based on an independent, GPS-based navigation solution. This minimized the risk to the vehicle during the integration and testing of the new navigation sensing technologies within the COBALT payload.
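As a hedged illustration of what a navigation filter that fuses a precise ranging sensor does at its core, here is a scalar Kalman predict/update cycle. The altitude state, noise variances and measurements are invented for the demo and are not COBALT parameters.

```python
def kalman_step(x, P, z, R, Q=0.01):
    """One predict/update cycle of a scalar Kalman filter."""
    # predict: constant-altitude model with process-noise variance Q
    x_pred, P_pred = x, P + Q
    # update with a measurement z of variance R
    K = P_pred / (P_pred + R)
    return x_pred + K * (z - x_pred), (1.0 - K) * P_pred

# Poor initial altitude knowledge, then three precise range measurements:
x, P = 100.0, 25.0
for z in (98.0, 97.9, 98.1):
    x, P = kalman_step(x, P, z, R=0.04)
```

After a few precise updates the estimate locks onto the measurements and the state variance collapses well below its initial value, which is the effect a precise lidar brings to a TRN solution.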
The iso-response method: measuring neuronal stimulus integration with closed-loop experiments
Gollisch, Tim; Herz, Andreas V. M.
2012-01-01
Throughout the nervous system, neurons integrate high-dimensional input streams and transform them into an output of their own. This integration of incoming signals involves filtering processes and complex non-linear operations. The shapes of these filters and non-linearities determine the computational features of single neurons and their functional roles within larger networks. A detailed characterization of signal integration is thus a central ingredient to understanding information processing in neural circuits. Conventional methods for measuring single-neuron response properties, such as reverse correlation, however, are often limited by the implicit assumption that stimulus integration occurs in a linear fashion. Here, we review a conceptual and experimental alternative that is based on exploring the space of those sensory stimuli that result in the same neural output. As demonstrated by recent results in the auditory and visual system, such iso-response stimuli can be used to identify the non-linearities relevant for stimulus integration, disentangle consecutive neural processing steps, and determine their characteristics with unprecedented precision. Automated closed-loop experiments are crucial for this advance, allowing rapid search strategies for identifying iso-response stimuli during experiments. Prime targets for the method are feed-forward neural signaling chains in sensory systems, but the method has also been successfully applied to feedback systems. Depending on the specific question, “iso-response” may refer to a predefined firing rate, single-spike probability, first-spike latency, or other output measures. Examples from different studies show that substantial progress in understanding neural dynamics and coding can be achieved once rapid online data analysis and stimulus generation, adaptive sampling, and computational modeling are tightly integrated into experiments. PMID:23267315
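The closed-loop search for iso-response stimuli can be sketched with a toy model: fix a direction in a two-component stimulus space and bisect the overall intensity until a model neuron's output reaches a predefined target. The quadratic integration rule below is purely an assumption for illustration, not a claim about any recorded neuron.

```python
def response(a, b):
    """Toy stimulus-integration rule: linear summation then squaring."""
    return (a + 0.5 * b) ** 2

def find_iso_intensity(direction, target=4.0, lo=0.0, hi=10.0, tol=1e-6):
    """Bisect the stimulus intensity along a fixed direction until the
    model output matches the target (the closed-loop search step)."""
    da, db = direction
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if response(mid * da, mid * db) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Two directions in stimulus space that evoke the same output:
s1 = find_iso_intensity((1.0, 0.0))   # pure component-a stimulus
s2 = find_iso_intensity((0.0, 1.0))   # pure component-b stimulus
```

Repeating the search over many directions traces out one iso-response contour, whose shape reveals the nonlinearity; here the contour endpoints (2, 0) and (0, 4) already expose the 0.5 weighting of the second input.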
Computation of type curves for flow to partially penetrating wells in water-table aquifers
Moench, Allen F.
1993-01-01
Evaluation of Neuman's analytical solution for flow to a well in a homogeneous, anisotropic, water-table aquifer commonly requires large amounts of computation time and can produce inaccurate results for selected combinations of parameters. Large computation times occur because the integrand of a semi-infinite integral involves the summation of an infinite series. Each term of the series requires evaluation of the roots of equations, and the series itself is sometimes slowly convergent. Inaccuracies can result from lack of computer precision or from the use of improper methods of numerical integration. In this paper it is proposed to use a method of numerical inversion of the Laplace transform solution, provided by Neuman, to overcome these difficulties. The solution in Laplace space is simpler in form than the real-time solution; that is, the integrand of the semi-infinite integral does not involve an infinite series or the need to evaluate roots of equations. Because the integrand is evaluated rapidly, advanced methods of numerical integration can be used to improve accuracy with an overall reduction in computation time. The proposed method of computing type curves, for which a partially documented computer program (WTAQ1) was written, was found to reduce computation time by factors of 2 to 20 over the time needed to evaluate the closed-form, real-time solution.
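The abstract does not say which inversion algorithm WTAQ1 employs, but the Gaver-Stehfest scheme is a common choice for well-hydraulics Laplace-space solutions and shows how cheap the inversion step can be. This generic sketch inverts F(p) = 1/(p+1), whose exact inverse is exp(-t).

```python
import math

def stehfest_coefficients(N):
    """Gaver-Stehfest weights V_1..V_N (N must be even)."""
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * math.factorial(2 * k)) / (
                math.factorial(N // 2 - k) * math.factorial(k)
                * math.factorial(k - 1) * math.factorial(i - k)
                * math.factorial(2 * k - i))
        V.append((-1) ** (i + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """f(t) ~ (ln2/t) * sum_i V_i * F(i*ln2/t), evaluated on the real axis."""
    a = math.log(2.0) / t
    V = stehfest_coefficients(N)
    return a * sum(V[i - 1] * F(i * a) for i in range(1, N + 1))

f1 = invert_laplace(lambda p: 1.0 / (p + 1.0), 1.0)  # ~ exp(-1)
```

Only N evaluations of the transform are needed per time point, which is exactly why working with Neuman's simpler Laplace-space integrand pays off.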
Low temperature heat capacities and thermodynamic functions described by Debye-Einstein integrals.
Gamsjäger, Ernst; Wiessner, Manfred
2018-01-01
Thermodynamic data of various crystalline solids are assessed from low temperature heat capacity measurements, i.e., from almost absolute zero to 300 K, by means of semi-empirical models. Previous studies frequently present fit functions with a large number of coefficients resulting in almost perfect agreement with experimental data. It is, however, pointed out in this work that special care is required to avoid overfitting. Apart from anomalies like phase transformations, it is likely that data from calorimetric measurements can be fitted by a relatively simple Debye-Einstein integral with sufficient precision. Thereby, reliable values for the heat capacities, standard enthalpies, and standard entropies at T = 298.15 K are obtained. Standard thermodynamic functions of various compounds strongly differing in the number of atoms in the formula unit can be derived from this fitting procedure and are compared to the results of previous fitting procedures. The residuals are of course larger when the Debye-Einstein integral is applied instead of using a high number of fit coefficients or connected splines, but the semi-empirical fit coefficients keep their meaning with respect to physics. It is suggested to use the Debye-Einstein integral fit as a standard method to describe heat capacities in the range between 0 and 300 K so that the derived thermodynamic functions are obtained on the same theory-related semi-empirical basis. Additional fitting is recommended when a precise description of data at ultra-low temperatures (0-20 K) is required.
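The Debye term of such a fit is a single integral that must be evaluated numerically at each temperature. A minimal sketch of a one-term Debye heat capacity follows; the Debye temperature of 300 K and the quadrature resolution are illustrative choices, not values from the paper.

```python
import math

def debye_cv(T, theta_D, n=2000):
    """Molar C_V = 9R (T/theta_D)^3 * int_0^{theta_D/T} x^4 e^x/(e^x-1)^2 dx,
    with the integral done by the midpoint rule."""
    R = 8.314462618  # J/(mol K)
    if T <= 0:
        return 0.0
    upper = theta_D / T
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        ex = math.exp(x)
        total += x ** 4 * ex / (ex - 1.0) ** 2
    return 9.0 * R * (T / theta_D) ** 3 * total * h

cv_high = debye_cv(3000.0, 300.0)  # approaches the Dulong-Petit limit 3R
cv_low = debye_cv(10.0, 300.0)     # deep in the T^3 regime
```

A least-squares fitter then adjusts theta_D (plus Einstein terms and their weights) so this curve tracks the measured heat capacities; the fitted parameters retain their physical meaning, which is the point made above.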
Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F
2018-01-01
Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
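The MLE model of optimal cue combination referenced above reduces to a few lines: the bimodal estimate weights each cue by its inverse variance, and the predicted bimodal variance lies below either unimodal variance. The example variances are invented for illustration.

```python
def mle_integration(x_v, var_v, x_a, var_a):
    """Optimal (maximum-likelihood) combination of visual and auditory
    location estimates with independent Gaussian noise."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    w_a = 1.0 - w_v
    x_av = w_v * x_v + w_a * x_a
    var_av = (var_v * var_a) / (var_v + var_a)  # < min(var_v, var_a)
    return x_av, var_av

# Visual localization more precise than auditory (illustrative numbers):
x_av, var_av = mle_integration(x_v=1.0, var_v=1.0, x_a=3.0, var_a=4.0)
```

Testing optimality then amounts to comparing the observed bimodal bias and precision against these predictions; the study found no significant deviation in either group, even though all the input variances were larger in amblyopia.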
The ensemble switch method for computing interfacial tensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitz, Fabian; Virnau, Peter
2015-04-14
We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension.
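Plain thermodynamic integration, the baseline the ensemble switch method builds on, can be demonstrated on a solvable test case (this is a generic sketch, not the ensemble switch method itself). A 1D harmonic spring is switched from stiffness k0 to k1 via U(lam) = 0.5*((1-lam)*k0 + lam*k1)*x^2; since <dU/dlam> = 0.5*(k1-k0)*kT/k(lam) is known analytically here, the lambda integral can be checked against the exact free energy difference 0.5*kT*ln(k1/k0).

```python
import math

k0, k1, kT = 1.0, 4.0, 1.0
n = 10000  # trapezoid subintervals over lambda in [0, 1]

def mean_dU_dlam(lam):
    """Analytic ensemble average <dU/dlam> for the harmonic test case
    (in a real simulation this number comes from Monte Carlo sampling)."""
    k = (1.0 - lam) * k0 + lam * k1
    return 0.5 * (k1 - k0) * kT / k

h = 1.0 / n
dF = sum(0.5 * h * (mean_dU_dlam(i * h) + mean_dU_dlam((i + 1) * h))
         for i in range(n))

exact = 0.5 * kT * math.log(k1 / k0)
```

In production work the averages are noisy Monte Carlo estimates and the interface geometry makes the path far more delicate, which is what motivates the ensemble switch construction and the finite-size analysis stressed above.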
Provably secure time distribution for the electric grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith IV, Amos M; Evans, Philip G; Williams, Brian P
We demonstrate a quantum time distribution (QTD) method that combines the precision of optical timing techniques with the integrity of quantum key distribution (QKD). Critical infrastructure is dependent on microprocessor- and programmable logic-based monitoring and control systems. The distribution of timing information across the electric grid is accomplished by GPS signals, which are known to be vulnerable to spoofing. We demonstrate a method for synchronizing remote clocks based on the arrival time of photons in a modified QKD system. This has the advantage that the signal can be verified by examining the quantum states of the photons, similar to QKD.
Precise Documentation: The Key to Better Software
NASA Astrophysics Data System (ADS)
Parnas, David Lorge
The prime cause of the sorry “state of the art” in software development is our failure to produce good design documentation. Poor documentation is the cause of many errors and reduces efficiency in every phase of a software product's development and use. Most software developers believe that “documentation” refers to a collection of wordy, unstructured, introductory descriptions, thousands of pages that nobody wanted to write and nobody trusts. In contrast, engineers in more traditional disciplines think of precise blueprints, circuit diagrams, and mathematical specifications of component properties. Software developers do not know how to produce precise documents for software. Software developers also think that documentation is something written after the software has been developed. In other fields of engineering much of the documentation is written before and during the development. It represents forethought, not afterthought. Among the benefits of better documentation would be: easier reuse of old designs, better communication about requirements, more useful design reviews, easier integration of separately written modules, more effective code inspection, more effective testing, and more efficient corrections and improvements. This paper explains how to produce and use precise software documentation and illustrates the methods with several examples.
Transgene manipulation in zebrafish by using recombinases.
Dong, Jie; Stuart, Gary W
2004-01-01
Although much remains to be done, our results to date suggest that efficient and precise genome engineering in zebrafish will be possible in the future by using Cre recombinase and SB transposase in combination with their respective target sites. In this study, we provide the first evidence that Cre recombinase can mediate effective site-specific deletion of transgenes in zebrafish. We found that the efficiency of target site utilization could approach 100%, independent of whether the target site was provided transiently by injection or stably within an integrated transgene. Microinjection of Cre mRNA appeared to be slightly more effective for this purpose than microinjection of Cre-expressing plasmid DNA. Our work has not yet progressed to the point where SB-mediated mobilization of our transgene constructs would be observed. However, a recent report has demonstrated that SB can enhance transgenesis rates sixfold over conventional methods by efficiently mediating multiple single-copy insertion of transgenes into the zebrafish genome (Davidson et al., 2003). Therefore, it seems likely that a combined system should eventually allow both SB-mediated transgene mobilization and Cre-mediated transgene modification. Our goal is to validate methods for the precise reengineering of the zebrafish genome by using a combination of Cre-loxP and SB transposon systems. These methods can be used to delete, replace, or mobilize large pieces of DNA or to modify the genome only when and where required by the investigator. For example, it should be possible to deliver particular RNAi genes to well-expressed chromosomal loci and then exchange them easily with alternative RNAi genes for the specific suppression of alternative targets. As a nonviral vector for gene therapy, the transposon component allows for the possibility of highly efficient integration, whereas the Cre-loxP component can target the integration and/or exchange of foreign DNA into specific sites within the genome. 
The specificity and efficiency of this system also make it ideal for applications in which precise genome modifications are required (e.g., stock improvement). Future work should establish whether alternative recombination systems (e.g., phiC31 integrase) can improve the utility of this system. After the fish system is fully established, it would be interesting to explore its application to genome engineering in other organisms.
Knowledge-guided fuzzy logic modeling to infer cellular signaling networks from proteomic data
Liu, Hui; Zhang, Fan; Mishra, Shital Kumar; Zhou, Shuigeng; Zheng, Jie
2016-01-01
Modeling of signaling pathways is crucial for understanding and predicting cellular responses to drug treatments. However, canonical signaling pathways curated from literature are seldom context-specific and thus can hardly predict cell type-specific response to external perturbations; purely data-driven methods also have drawbacks such as limited biological interpretability. Therefore, hybrid methods that can integrate prior knowledge and real data for network inference are highly desirable. In this paper, we propose a knowledge-guided fuzzy logic network model to infer signaling pathways by exploiting both prior knowledge and time-series data. In particular, the dynamic time warping algorithm is employed to measure the goodness of fit between experimental and predicted data, so that our method can model temporally-ordered experimental observations. We evaluated the proposed method on a synthetic dataset and two real phosphoproteomic datasets. The experimental results demonstrate that our model can uncover drug-induced alterations in signaling pathways in cancer cells. Compared with existing hybrid models, our method can model feedback loops so that the dynamical mechanisms of signaling networks can be uncovered from time-series data. By calibrating generic models of signaling pathways against real data, our method supports precise predictions of context-specific anticancer drug effects, which is an important step towards precision medicine. PMID:27774993
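The dynamic time warping algorithm used above to score the fit between predicted and experimental time courses can be written in its textbook O(n·m) form (a generic sketch, not the authors' implementation):

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two sequences,
    with absolute difference as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# A time-shifted copy of a pulse aligns perfectly under DTW,
# while a flat profile does not:
d_shifted = dtw_distance([0, 0, 1, 2, 1, 0], [0, 1, 2, 1, 0, 0])
d_different = dtw_distance([0, 0, 1, 2, 1, 0], [2, 2, 2, 2, 2, 2])
```

This tolerance to temporal misalignment is what lets the fuzzy logic model be scored against temporally-ordered phosphoproteomic measurements rather than requiring exact time registration.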
Precise detection of chromosomal translocation or inversion breakpoints by whole-genome sequencing.
Suzuki, Toshifumi; Tsurusaki, Yoshinori; Nakashima, Mitsuko; Miyake, Noriko; Saitsu, Hirotomo; Takeda, Satoru; Matsumoto, Naomichi
2014-12-01
Structural variations (SVs), including translocations, inversions, deletions and duplications, are potentially associated with Mendelian diseases and contiguous gene syndromes. Determination of SV-related breakpoints at the nucleotide level is important to reveal the genetic causes for diseases. Whole-genome sequencing (WGS) by next-generation sequencers is expected to determine structural abnormalities more directly and efficiently than conventional methods. In this study, 14 SVs (9 balanced translocations, 1 inversion and 4 microdeletions) in 9 patients were analyzed by WGS with a shallow (5×) to moderate (20×) read coverage. Among 28 breakpoints (as each SV has two breakpoints), 19 SV breakpoints had been determined previously at the nucleotide level by other methods and 9 were uncharacterized. BreakDancer and Integrative Genomics Viewer determined 20 breakpoints (16 translocation, 2 inversion and 2 deletion breakpoints), but did not detect 8 breakpoints (2 translocation and 6 deletion breakpoints). These data indicate the efficacy of WGS for the precise determination of translocation and inversion breakpoints.
Precise on-machine extraction of the surface normal vector using an eddy current sensor array
NASA Astrophysics Data System (ADS)
Wang, Yongqing; Lian, Meng; Liu, Haibo; Ying, Yangwei; Sheng, Xianjun
2016-11-01
To satisfy the requirements of on-machine measurement of the surface normal during complex surface manufacturing, a highly robust normal vector extraction method using an Eddy current (EC) displacement sensor array is developed, the output of which is almost unaffected by surface brightness, machining coolant and environmental noise. A precise normal vector extraction model based on a triangular-distributed EC sensor array is first established. Calibration of the effects of object surface inclination and coupling interference on measurement results, and the relative position of EC sensors, is involved. A novel apparatus employing three EC sensors and a force transducer was designed, which can be easily integrated into the computer numerical control (CNC) machine tool spindle and/or robot terminal execution. Finally, to test the validity and practicability of the proposed method, typical experiments were conducted with specified testing pieces using the developed approach and system, such as an inclined plane and cylindrical and spherical surfaces.
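The geometric core of a triangular-distributed sensor array is simple: three non-collinear probe points determine a plane, and the surface normal is the normalized cross product of two in-plane vectors. The sketch below shows only that core; the inclination and coupling-interference calibration described above is omitted, and the coordinates are invented.

```python
def surface_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D points (cross product
    of two in-plane edge vectors, then normalization)."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    mag = sum(c * c for c in n) ** 0.5
    return [c / mag for c in n]

# Three probe points on a plane tilted 45 degrees about the x-axis:
n_vec = surface_normal((0, 0, 0), (1, 0, 0), (0, 1, 1))
```

In the on-machine setting each point comes from one EC displacement reading combined with the known sensor position, so the calibration of sensor geometry directly limits the achievable normal-vector accuracy.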
Turan, Soeren; Farruggio, Alfonso P; Srifa, Waracharee; Day, John W; Calos, Michele P
2016-04-01
Limb girdle muscular dystrophies types 2B (LGMD2B) and 2D (LGMD2D) are degenerative muscle diseases caused by mutations in the dysferlin and alpha-sarcoglycan genes, respectively. Using patient-derived induced pluripotent stem cells (iPSC), we corrected the dysferlin nonsense mutation c.5713C>T; p.R1905X and the most common alpha-sarcoglycan mutation, missense c.229C>T; p.R77C, by single-stranded oligonucleotide-mediated gene editing, using the CRISPR/Cas9 gene-editing system to enhance the frequency of homology-directed repair. We demonstrated seamless, allele-specific correction at efficiencies of 0.7-1.5%. As an alternative, we also carried out precise gene addition strategies for correction of the LGMD2B iPSC by integration of wild-type dysferlin cDNA into the H11 safe harbor locus on chromosome 22, using dual integrase cassette exchange (DICE) or TALEN-assisted homologous recombination for insertion precise (THRIP). These methods employed TALENs and homologous recombination, and DICE also utilized site-specific recombinases. With DICE and THRIP, we obtained targeting efficiencies after selection of ~20%. We purified iPSC corrected by all methods and verified rescue of appropriate levels of dysferlin and alpha-sarcoglycan protein expression and correct localization, as shown by immunoblot and immunocytochemistry. In summary, we demonstrate for the first time precise correction of LGMD iPSC and validation of expression, opening the possibility of cell therapy utilizing these corrected iPSC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abazov, V. M.; Abbott, B.; Acharya, B. S.
2014-04-18
We present a measurement of the W boson production charge asymmetry in pp̄→W+X→eν+X events at a center of mass energy of 1.96 TeV, using 9.7 fb⁻¹ of integrated luminosity collected with the D0 detector at the Fermilab Tevatron Collider. The neutrino longitudinal momentum is determined by using a neutrino weighting method, and the asymmetry is measured as a function of the W boson rapidity. The measurement extends over a wider electron pseudorapidity region than previous results and is the most precise to date, allowing for precise determination of proton parton distribution functions in global fits.
Stansfield, Claire; O'Mara-Eves, Alison; Thomas, James
2017-09-01
Using text mining to aid the development of database search strings for topics described by diverse terminology has potential benefits for systematic reviews; however, methods and tools for accomplishing this are poorly covered in the research methods literature. We briefly review the literature on applications of text mining for search term development for systematic reviewing. We found that the tools can be used in 5 overarching ways: improving the precision of searches; identifying search terms to improve search sensitivity; aiding the translation of search strategies across databases; searching and screening within an integrated system; and developing objectively derived search strategies. Using a case study and selected examples, we then reflect on the utility of certain technologies (term frequency-inverse document frequency and Termine, term frequency, and clustering) in improving the precision and sensitivity of searches. Challenges in using these tools are discussed. The utility of these tools is influenced by the different capabilities of the tools, the way the tools are used, and the text that is analysed. Increased awareness of how the tools perform facilitates the further development of methods for their use in systematic reviews. Copyright © 2017 John Wiley & Sons, Ltd.
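A small term frequency-inverse document frequency calculation illustrates how such tools surface candidate search terms (a generic sketch; TerMine and the reviewed tools are not reproduced here, and the mini-corpus is invented):

```python
import math

docs = [
    "amblyopia impairs audiovisual integration",
    "audiovisual integration in the cortex",
    "forest inventory sampling design",
]

def tf_idf(term, doc, corpus):
    """Term frequency times log inverse document frequency."""
    tf = doc.split().count(term)
    df = sum(1 for d in corpus if term in d.split())
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

# "amblyopia" is distinctive for the first document; "integration" is not:
score_rare = tf_idf("amblyopia", docs[0], docs)
score_common = tf_idf("integration", docs[0], docs)
```

Ranking terms from a seed set of relevant records by such scores suggests vocabulary that improves sensitivity, while terms frequent across the whole corpus are down-weighted, which helps precision.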
Gao, Zhouzheng; Zhang, Hongping; Ge, Maorong; Niu, Xiaoji; Shen, Wenbin; Wickert, Jens; Schuh, Harald
2015-03-10
The continuity and reliability of precise GNSS positioning can be seriously limited by severe user observation environments. The Inertial Navigation System (INS) can overcome such drawbacks, but its performance is clearly restricted by the accumulation of INS sensor errors over time. Accordingly, the tightly coupled integration of GPS and INS can overcome the disadvantages of each individual system, and together they form a new navigation system with higher accuracy, reliability and availability. Recently, ionosphere-constrained (IC) precise point positioning (PPP) utilizing raw GPS observations was proven able to improve both the convergence and positioning accuracy of the conventional PPP using ionosphere-free combined observations (LC-PPP). In this paper, a new mode of tightly coupled integration, in which IC-PPP instead of LC-PPP is employed, is implemented to further improve the performance of the coupled system. We present the detailed mathematical model and the related algorithm of the new integration of IC-PPP and INS. To evaluate the performance of the new tightly coupled integration, data from both airborne and vehicle experiments with a geodetic GPS receiver and a tactical grade inertial measurement unit are processed and the results are analyzed. The statistics show that the new approach can further improve the positioning accuracy compared with both IC-PPP and the tightly coupled integration of the conventional PPP and INS.
NASA Astrophysics Data System (ADS)
Tang, H.; Sun, W.
2016-12-01
The theoretical computation of dislocation theory in a given earth model is necessary to explain observations of the co- and post-seismic deformation of earthquakes. For this purpose, computation theories based on a layered or pure half space [Okada, 1985; Okubo, 1992; Wang et al., 2006] and on a spherically symmetric earth [Piersanti et al., 1995; Pollitz, 1997; Sabadini & Vermeersen, 1997; Wang, 1999] have been proposed. It is indicated that the compressibility, curvature and the continuous variation of the radial structure of the Earth should be simultaneously taken into account for modern high precision displacement-based observations like GPS. Therefore, Tanaka et al. [2006; 2007] computed global displacement and gravity variation by combining the reciprocity theorem (RPT) [Okubo, 1993] with numerical inverse Laplace integration (NIL) instead of the normal mode method [Peltier, 1974]. Without using the RPT, we follow the straightforward numerical integration of co-seismic deformation given by Sun et al. [1996] to present a straightforward numerical inverse Laplace integration method (SNIL). This method is used to compute the co- and post-seismic displacement of point dislocations buried in a spherically symmetric, self-gravitating, viscoelastic and multilayered earth model, and it is easy to extend to geoid and gravity applications. Compared with pre-existing methods, this method is more straightforward and time-saving, mainly because we sum the associated Legendre polynomials and dislocation Love numbers before using the Riemann-Mellin formula to implement the SNIL.
NASA Technical Reports Server (NTRS)
Tetervin, Neal; Lin, Chia Chiao
1951-01-01
A general integral form of the boundary-layer equation, valid for either laminar or turbulent incompressible boundary-layer flow, is derived. By using the experimental finding that all velocity profiles of the turbulent boundary layer form essentially a single-parameter family, the general equation is changed to an equation for the space rate of change of the velocity-profile shape parameter. The lack of precise knowledge concerning the surface shear and the distribution of the shearing stress across turbulent boundary layers prevented the attainment of a reliable method for calculating the behavior of turbulent boundary layers.
NASA Technical Reports Server (NTRS)
Haro, Helida C.
2010-01-01
The objective of this research effort is to determine the most appropriate, cost efficient, and effective method to utilize for finding moments of inertia for the Uninhabited Aerial Vehicle (UAV) Dryden Remotely Operated Integrated Drone (DROID). A moment is a measure of the body's tendency to turn about its center of gravity (CG) and inertia is the resistance of a body to changes in its momentum. Therefore, the moment of inertia (MOI) is a body's resistance to change in rotation about its CG. The inertial characteristics of an UAV have direct consequences on aerodynamics, propulsion, structures, and control. Therefore, it is imperative to determine the precise inertial characteristics of the DROID.
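One standard experimental option for measuring an airframe's moment of inertia, offered here only as a hedged example and not as the method ultimately chosen for the DROID, is to swing the vehicle as a compound pendulum and infer I about the CG from the oscillation period via the parallel-axis theorem. All numbers below are illustrative.

```python
import math

def moi_from_pendulum(mass, pivot_to_cg, period, g=9.80665):
    """Compound-pendulum estimate of the moment of inertia about the CG:
    I_pivot = m*g*d*T**2 / (4*pi**2), then I_cg = I_pivot - m*d**2."""
    I_pivot = mass * g * pivot_to_cg * period ** 2 / (4.0 * math.pi ** 2)
    return I_pivot - mass * pivot_to_cg ** 2  # parallel-axis transfer

# Illustrative numbers for a small UAV swung 0.5 m below a knife-edge pivot:
I_cg = moi_from_pendulum(mass=10.0, pivot_to_cg=0.5, period=1.6)
```

Because the CG term is subtracted from a quantity of similar size, the estimate is sensitive to errors in the pivot-to-CG distance and the period, which is one reason method selection and cost trade-offs matter in such a study.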
Leung, Janet T Y; Shek, Daniel T L
2011-01-01
This paper examines the use of quantitative and qualitative approaches to study the impact of economic disadvantage on family processes and adolescent development. Quantitative research has the merits of objectivity, good predictive and explanatory power, parsimony, precision and sophistication of analysis. Qualitative research, in contrast, provides a detailed, holistic, in-depth understanding of social reality and allows illumination of new insights. With the pragmatic considerations of methodological appropriateness, design flexibility, and situational responsiveness in responding to the research inquiry, a mixed methods approach offers a way of integrating quantitative and qualitative approaches and an alternative strategy for studying the impact of economic disadvantage on family processes and adolescent development.
Automatically finding relevant citations for clinical guideline development.
Bui, Duy Duc An; Jonnalagadda, Siddhartha; Del Fiol, Guilherme
2015-10-01
Literature database search is a crucial step in the development of clinical practice guidelines and systematic reviews. Even in the age of information technology, literature search is still conducted manually; it is therefore costly, slow and subject to human error. In this research, we sought to improve the traditional search approach using innovative query expansion and citation ranking approaches. We developed a citation retrieval system composed of query expansion and citation ranking methods. The methods are unsupervised and easily integrated over the PubMed search engine. To validate the system, we developed a gold standard consisting of citations that were systematically searched and screened to support the development of cardiovascular clinical practice guidelines. The expansion and ranking methods were evaluated separately and compared with baseline approaches. Compared with the baseline PubMed expansion, the query expansion algorithm improved recall (80.2% vs. 51.5%) with a small loss in precision (0.4% vs. 0.6%). The algorithm could find all citations used to support a larger number of guideline recommendations than the baseline approach (64.5% vs. 37.2%, p<0.001). In addition, the citation ranking approach performed better than PubMed's "most recent" ranking (average precision +6.5%, recall@k +21.1%, p<0.001), PubMed's rank by "relevance" (average precision +6.1%, recall@k +14.8%, p<0.001), and the machine learning classifier that identifies scientifically sound studies from MEDLINE citations (average precision +4.9%, recall@k +4.2%, p<0.001). Our unsupervised query expansion and ranking techniques are more flexible and effective than PubMed's default search engine behavior and the machine learning classifier. Automated citation finding shows promise for augmenting the traditional literature search. Copyright © 2015 Elsevier Inc. All rights reserved.
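The average-precision and recall@k metrics reported above can be illustrated with a short sketch, assuming the standard information-retrieval definitions; the ranked PMIDs and gold-standard set below are invented for illustration and are not from the study.

```python
def average_precision(ranked, relevant):
    """Mean of the precision values at each rank where a relevant item appears,
    divided by the total number of relevant items."""
    hits, precisions = 0, []
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(relevant) if relevant else 0.0

def recall_at_k(ranked, relevant, k):
    """Fraction of gold-standard citations found in the top-k results."""
    return len(set(ranked[:k]) & relevant) / len(relevant)

# hypothetical ranked retrieval against a hypothetical gold standard
ranked = ["pmid3", "pmid1", "pmid9", "pmid2", "pmid7"]
gold = {"pmid1", "pmid2", "pmid4"}
print(round(average_precision(ranked, gold), 3))  # → 0.333
print(round(recall_at_k(ranked, gold, 4), 3))     # → 0.667
```

Evaluating a ranking system against a manually screened gold standard, as done in the study, reduces to exactly these two computations per guideline topic.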
Achille, Cristiana; Adami, Andrea; Chiarini, Silvia; Cremonesi, Stefano; Fassi, Francesco; Fregonese, Luigi; Taffurelli, Laura
2015-01-01
This paper examines the survey of tall buildings in an emergency context, such as after a seismic event. The post-earthquake survey has to guarantee time savings, high precision and safety during the operational stages. The main goal is to optimize the application of methodologies based on acquisition and automatic elaboration of photogrammetric data, including the use of Unmanned Aerial Vehicle (UAV) systems, in order to provide fast and low-cost operations. The suggested methods integrate new technologies with commonly used technologies such as TLS and topographic acquisition. The value of the photogrammetric application is demonstrated by a test case based on the comparison of acquisition, calibration and 3D modeling results when using a laser scanner, a metric camera and an amateur reflex camera. The test helps demonstrate the efficiency of image-based methods in the acquisition of complex architecture. The case study is the Santa Barbara bell tower in Mantua. The applied survey solution allows a complete 3D database of the complex architectural structure to be obtained for the extraction of all the information needed for significant intervention. This demonstrates the applicability of photogrammetry using UAVs for the survey of vertical structures, complex buildings and difficult-to-access architectural parts, providing high-precision results. PMID:26134108
Tian, Hui; Sun, Yuanyuan; Liu, Chenghui; Duan, Xinrui; Tang, Wei; Li, Zhengping
2016-12-06
MicroRNA (miRNA) analysis in a single cell is extremely important because it allows deep understanding of the exact correlation between the miRNAs and cell functions. Herein, we wish to report a highly sensitive and precisely quantitative assay for miRNA detection based on ligation-based droplet digital polymerase chain reaction (ddPCR), which permits the quantitation of miRNA in a single cell. In this ligation-based ddPCR assay, two target-specific oligonucleotide probes can be simply designed to be complementary to the half-sequence of the target miRNA, respectively, which avoids the sophisticated design of reverse transcription and provides high specificity to discriminate a single-base difference among miRNAs with simple operations. After the miRNA-templated ligation, the ddPCR partitions individual ligated products into a water-in-oil droplet and digitally counts the fluorescence-positive and negative droplets after PCR amplification for quantification of the target molecules, which possesses the power of precise quantitation and robustness to variation in PCR efficiency. By integrating the advantages of the precise quantification of ddPCR and the simplicity of the ligation-based PCR, the proposed method can sensitively measure let-7a miRNA with a detection limit of 20 aM (12 copies per microliter), and even a single-base difference can be discriminated in let-7 family members. More importantly, due to its high selectivity and sensitivity, the proposed method can achieve precise quantitation of miRNAs in single-cell lysate. Therefore, the ligation-based ddPCR assay may serve as a useful tool to exactly reveal the miRNAs' actions in a single cell, which is of great importance for the study of miRNAs' biofunction as well as for the related biomedical studies.
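The "digital counting" step described above follows standard Poisson statistics: because each droplet may capture more than one ligated template, the mean copies per droplet is estimated from the fraction of negative droplets. A minimal sketch, assuming the usual Poisson correction and a nominal droplet volume (the 0.85 nL default and the example counts are illustrative assumptions, not values from the paper):

```python
import math

def ddpcr_copies_per_ul(positive, total, droplet_volume_nl=0.85):
    """Poisson-corrected target concentration from a ddPCR droplet count.

    lambda = -ln(fraction of negative droplets) estimates the mean number
    of target copies per droplet; dividing by the droplet volume converts
    this to copies per microliter of the partitioned reaction.
    """
    negative = total - positive
    lam = -math.log(negative / total)          # mean copies per droplet
    return lam / (droplet_volume_nl * 1e-3)    # copies per microliter

# e.g. 1200 fluorescence-positive droplets out of 15000 accepted droplets
print(round(ddpcr_copies_per_ul(1200, 15000), 1))
```

This correction is what makes ddPCR quantitation robust to variation in PCR efficiency: only the positive/negative classification of each droplet matters, not the amplification curve.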
Collaborative Genomics Study Advances Precision Oncology
A collaborative study conducted by two Office of Cancer Genomics (OCG) initiatives highlights the importance of integrating structural and functional genomics programs to improve cancer therapies, and more specifically, contribute to precision oncology treatments for children.
Composite adaptive control of belt polishing force for aero-engine blade
NASA Astrophysics Data System (ADS)
Zhao, Pengbing; Shi, Yaoyao
2013-09-01
The existing methods for blade polishing mainly focus on robot polishing and manual grinding. Due to the difficulty of high-precision control of the polishing force, the blade surface precision is very low in robot polishing; in particular, the quality of the inlet and exhaust edges cannot satisfy the processing requirements. Manual grinding has low efficiency, high labor intensity and unstable processing quality; moreover, the polished surface is vulnerable to burning, and the surface precision and integrity are difficult to ensure. In order to further improve the profile accuracy and surface quality, a pneumatic flexible polishing force-exerting mechanism is designed and a dual-mode switching composite adaptive control (DSCAC) strategy is proposed, which combines Bang-Bang control and model reference adaptive control based on a fuzzy neural network (MRACFNN). Through the mode decision-making mechanism, Bang-Bang control is used to track the command signal quickly when the actual polishing force is far from the target value, and MRACFNN is utilized in smaller error ranges to improve the system robustness and control precision. Based on the mathematical model of the force-exerting mechanism, simulation analysis of DSCAC is carried out. Simulation results show that the output polishing force tracks the given signal well. Finally, blade polishing experiments are carried out on the designed polishing equipment. Experimental results show that DSCAC can effectively mitigate the influence of gas compressibility, valve dead-time effect, nonlinear valve flow, cylinder friction, measurement noise and other interference on the control precision of the polishing force, offering higher control precision, stronger robustness and better disturbance rejection than MRACFNN alone.
The proposed research achieves high-precision control of the polishing force, effectively improves the blade machining precision and surface consistency, and significantly reduces the surface roughness.
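The dual-mode switching idea can be sketched as follows: far from the setpoint a bang-bang law drives the force quickly; inside a small error band a smoother law takes over. Everything below is an illustrative assumption, not the paper's controller: the thresholds, gains and first-order actuator model are invented, and a plain integral-adaptive gain stands in for the fuzzy-neural MRAC.

```python
def dual_mode_step(error, u_max, band, theta, gamma, dt):
    """One sample of a dual-mode controller: returns (output, updated gain)."""
    if abs(error) > band:                  # mode 1: bang-bang for large errors
        return (u_max if error > 0 else -u_max), theta
    theta = theta + gamma * error * dt     # mode 2: simple adaptive gain update
    return theta * error, theta

# track a 10 N polishing-force setpoint through a crude first-order actuator
force, theta, dt = 0.0, 2.0, 0.001
for _ in range(5000):
    u, theta = dual_mode_step(10.0 - force, u_max=50.0, band=2.0,
                              theta=theta, gamma=10.0, dt=dt)
    force += dt * (u - force)              # toy actuator dynamics
print(round(force, 2))                     # settles near the 10 N setpoint
```

The switching structure is what matters: the saturating mode gives fast large-signal response, while the adaptive mode removes the chattering that pure bang-bang control would exhibit near the target.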
Haslem, Derrick S.; Van Norman, S. Burke; Fulde, Gail; Knighton, Andrew J.; Belnap, Tom; Butler, Allison M.; Rhagunath, Sharanya; Newman, David; Gilbert, Heather; Tudor, Brian P.; Lin, Karen; Stone, Gary R.; Loughmiller, David L.; Mishra, Pravin J.; Srivastava, Rajendu; Ford, James M.; Nadauld, Lincoln D.
2017-01-01
Purpose: The advent of genomic diagnostic technologies such as next-generation sequencing has recently enabled the use of genomic information to guide targeted treatment in patients with cancer, an approach known as precision medicine. However, clinical outcomes, including survival and the cost of health care associated with precision cancer medicine, have been challenging to measure and remain largely unreported. Patients and Methods: We conducted a matched cohort study of 72 patients with metastatic cancer of diverse subtypes in the setting of a large, integrated health care delivery system. We analyzed the outcomes of 36 patients who received genomic testing and targeted therapy (precision cancer medicine) between July 1, 2013, and January 31, 2015, compared with 36 historical control patients who received standard chemotherapy (n = 29) or best supportive care (n = 7). Results: The average progression-free survival was 22.9 weeks for the precision medicine group and 12.0 weeks for the control group (P = .002) with a hazard ratio of 0.47 (95% CI, 0.29 to 0.75) when matching on age, sex, histologic diagnosis, and previous lines of treatment. In a subset analysis of patients who received all care within the Intermountain Healthcare system (n = 44), per patient charges per week were $4,665 in the precision treatment group and $5,000 in the control group (P = .126). Conclusion: These findings suggest that precision cancer medicine may improve survival for patients with refractory cancer without increasing health care costs. Although the results of this study warrant further validation, this precision medicine approach may be a viable option for patients with advanced cancer. PMID:27601506
NASA Astrophysics Data System (ADS)
Liu, Ying; Xiong, Wei; Jiang, Li Jia; Zhou, Yunshen; Li, Dawei; Jiang, Lan; Silvain, Jean-Francois; Lu, Yongfeng
2017-02-01
Precise assembly of carbon nanotubes (CNTs) in arbitrary 3D space with proper alignment is critically important and desirable for CNT applications but still remains as a long-standing challenge. Using the two-photon polymerization (TPP) technique, it is possible to fabricate 3D micro/nanoscale CNT/polymer architectures with proper CNT alignments in desired directions, which is expected to enable a broad range of applications of CNTs in functional devices. To unleash the full potential of CNTs, it is strategically important to develop TPP-compatible resins with high CNT concentrations for precise assembly of CNTs into 3D micro/nanostructures for functional device applications. We investigated a thiol grafting method in functionalizing multiwalled carbon nanotubes (MWNTs) to develop TPP-compatible MWNT-thiol-acrylate (MTA) composite resins. The composite resins developed had high MWNT concentrations up to 0.2 wt%, over one order of magnitude higher than previously published work. Significantly enhanced electrical and mechanical properties of the 3D micro/nanostructures were achieved. Precisely controlled MWNT assembly and strong anisotropic effects were confirmed. Microelectronic devices made of the MTA composite polymer were demonstrated. The nanofabrication method can achieve controlled assembly of MWNTs in 3D micro/nanostructures, enabling a broad range of CNT applications, including 3D electronics, integrated photonics, and micro/nanoelectromechanical systems (MEMS/NEMS).
NASA Astrophysics Data System (ADS)
Chen, Yuan-Liu; Cai, Yindi; Shimizu, Yuki; Ito, So; Gao, Wei; Ju, Bing-Feng
2016-02-01
This paper presents a measurement and compensation method of surface inclination for ductile cutting of silicon microstructures by using a diamond tool with a force sensor based on a four-axis ultra-precision lathe. The X- and Y-directional inclinations of a single crystal silicon workpiece with respect to the X- and Y-motion axes of the lathe slides were measured respectively by employing the diamond tool as a touch-trigger probe, in which the tool-workpiece contact is sensitively detected by monitoring the force sensor output. Based on the measurement results, fabrication of silicon microstructures can be thus carried out directly along the tilted silicon workpiece by compensating the cutting motion axis to be parallel to the silicon surface without time-consuming pre-adjustment of the surface inclination or turning of a flat surface. A diamond tool with a negative rake angle was used in the experiment for superior ductile cutting performance. The measurement precision by using the diamond tool as a touch-trigger probe was investigated. Experiments of surface inclination measurement and ultra-precision ductile cutting of a micro-pillar array and a micro-pyramid array with inclination compensation were carried out respectively to demonstrate the feasibility of the proposed method.
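The inclination measurement and compensation reduce to simple geometry, which the following sketch illustrates (not the paper's exact procedure): the tool, acting as a touch-trigger probe, records surface heights z at two known x positions; the tilt angle follows from the slope, and each commanded cutting point is rotated so the motion axis runs parallel to the tilted surface. All numeric values are invented.

```python
import math

def inclination(x1, z1, x2, z2):
    """Surface tilt angle (rad) about the Y axis from two probed points."""
    return math.atan2(z2 - z1, x2 - x1)

def compensate(points, theta):
    """Rotate commanded (x, z) tool positions by the measured tilt angle."""
    c, s = math.cos(theta), math.sin(theta)
    return [(x * c - z * s, x * s + z * c) for x, z in points]

theta = inclination(0.0, 0.000, 10.0, 0.050)   # 50 um rise over 10 mm
path = compensate([(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)], theta)
print(round(math.degrees(theta), 4))   # tilt angle in degrees
print(round(path[2][1], 4))            # path end now follows the tilted surface
```

The same idea extends to the Y direction with a second pair of probed points, giving the two-axis compensation described above.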
Spectrum syntheses of high-resolution integrated light spectra of Galactic globular clusters
NASA Astrophysics Data System (ADS)
Sakari, Charli M.; Shetrone, Matthew; Venn, Kim; McWilliam, Andrew; Dotter, Aaron
2013-09-01
Spectrum syntheses for three elements (Mg, Na and Eu) in high-resolution integrated light spectra of the Galactic globular clusters 47 Tuc, M3, M13, NGC 7006 and M15 are presented, along with calibration syntheses of the solar and Arcturus spectra. Iron abundances in the target clusters are also derived from integrated light equivalent width analyses. Line profiles in the spectra of these five globular clusters are well fitted after careful consideration of the atomic and molecular spectral features, providing levels of precision that are better than equivalent width analyses of the same integrated light spectra, and that are comparable to the precision in individual stellar analyses. The integrated light abundances from the 5528 and 5711 Å Mg I lines, the 6154 and 6160 Å Na I lines, and the 6645 Å Eu II line fall within the observed ranges from individual stars; however, these integrated light abundances do not always agree with the average literature abundances. Tests with the second parameter clusters M3, M13 and NGC 7006 show that assuming an incorrect horizontal branch morphology is likely to have only a small ( ≲ 0.06 dex) effect on these Mg, Na and Eu abundances. These tests therefore show that integrated light spectrum syntheses can be applied to unresolved globular clusters over a wide range of metallicities and horizontal branch morphologies. Such high precision in integrated light spectrum syntheses is valuable for interpreting the chemical abundances of globular cluster systems around other galaxies.
Zhao, Y J; Liu, Y; Sun, Y C; Wang, Y
2017-08-18
To explore a three-dimensional (3D) data fusion and integration method for optically scanned tooth crowns and cone beam CT (CBCT)-reconstructed tooth roots that yields a natural transition in the 3D profile. One mild dental crowding case with full dentition was chosen from the orthodontics clinic. The CBCT data were acquired to reconstruct the dental model with tooth roots in Mimics 17.0 medical imaging software, and an optical impression was taken to obtain the dentition model with high-precision physiological contours of the crowns using a Smart Optics dental scanner. The two models were registered in 3D based on the common part of the crowns' shape in Geomagic Studio 2012 reverse engineering software. The model coordinate system was established by defining the occlusal plane. The crown-gingiva boundary was extracted from the optical scanning model manually, and the crown-root boundary was then generated by offsetting and projecting the crown-gingiva boundary onto the root model. After trimming the crown and root models, the 3D fusion model, with physiologically contoured crowns and natural roots, was finally formed by a curvature-continuous filling algorithm. In the study, 10 patients with mild dental crowding from the oral clinics were followed up with this method to obtain 3D crown-root fusion models, and 10 highly qualified doctors were invited to evaluate these fusion models subjectively. This study, based on a commercial software platform, preliminarily realized the 3D data fusion and integration of optically scanned tooth crowns and CBCT tooth roots with a curvature-continuous shape transition. The 10 patients' 3D crown-root fusion models were constructed successfully by the method, and the average score of the doctors' subjective evaluation for these 10 models was 8.6 points (0-10 points), which meant that all the fusion models could basically meet the needs of the oral clinics, and also showed that the method in our study is feasible and efficient in orthodontic research and clinics.
The method of this study for 3D crown and root data fusion can produce an integrated tooth or dental model closer to the natural shape. CBCT model calibration may further improve the precision of the fusion model. The adaptation of this method to severe dental crowding and micromaxillary deformity needs further research.
NASA Technical Reports Server (NTRS)
Geddes, K. O.
1977-01-01
If a linear ordinary differential equation with polynomial coefficients is converted into integrated form then the formal substitution of a Chebyshev series leads to recurrence equations defining the Chebyshev coefficients of the solution function. An explicit formula is presented for the polynomial coefficients of the integrated form in terms of the polynomial coefficients of the differential form. The symmetries arising from multiplication and integration of Chebyshev polynomials are exploited in deriving a general recurrence equation from which can be derived all of the linear equations defining the Chebyshev coefficients. Procedures for deriving the general recurrence equation are specified in a precise algorithmic notation suitable for translation into any of the languages for symbolic computation. The method is algebraic and it can therefore be applied to differential equations containing indeterminates.
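The integration symmetry the method exploits is the classical identity, for n >= 2, that the antiderivative of T_n(x) is T_{n+1}(x)/(2(n+1)) - T_{n-1}(x)/(2(n-1)) plus a constant; this is what couples each Chebyshev coefficient of the integrated form to its two neighbours and yields the recurrence. A quick numerical check of the identity using NumPy's chebyshev module (this verifies the underlying relation, not the paper's symbolic algorithm):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 5
Tn = np.zeros(n + 1)
Tn[n] = 1.0                                # T_5 in the Chebyshev basis
integral = C.chebint(Tn)                   # antiderivative coefficients

expected = np.zeros(n + 2)
expected[n + 1] = 1.0 / (2 * (n + 1))      #  T_6 / 12
expected[n - 1] = -1.0 / (2 * (n - 1))     # -T_4 / 8

# the two agree apart from the arbitrary constant of integration (index 0)
print(np.allclose(integral[1:], expected[1:]))  # → True
```

Applying this relation term by term to a Chebyshev series of unknown coefficients is exactly what turns the integrated form of the differential equation into linear recurrence equations for those coefficients.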
The application of an atomistic J-integral to a ductile crack.
Zimmerman, Jonathan A; Jones, Reese E
2013-04-17
In this work we apply a Lagrangian kernel-based estimator of continuum fields to atomic data to estimate the J-integral for the emission of dislocations from a crack tip. Face-centered cubic (fcc) gold and body-centered cubic (bcc) iron modeled with embedded atom method (EAM) potentials are used as example systems. The results for a single crack with a K-loading compare well to an analytical solution from anisotropic linear elastic fracture mechanics. We also discovered that after the emission of dislocations from the crack tip there is a loop size-dependent contribution to the J-integral. For a system with a finite-width crack loaded in simple tension, the finite-size effects for the systems that were feasible to compute prevented precise agreement with theory. However, our results indicate a trend towards convergence.
A novel feature extraction scheme with ensemble coding for protein-protein interaction prediction.
Du, Xiuquan; Cheng, Jiaxing; Zheng, Tingting; Duan, Zheng; Qian, Fulan
2014-07-18
Protein-protein interactions (PPIs) play key roles in most cellular processes, such as cell metabolism, immune response, endocrine function, DNA replication, and transcription regulation. PPI prediction is one of the most challenging problems in functional genomics. Although PPI data have been increasing because of the development of high-throughput technologies and computational methods, many problems are still far from being solved. In this study, a novel predictor was designed by using the Random Forest (RF) algorithm with the ensemble coding (EC) method. To reduce computational time, a feature selection method (DX) was adopted to rank the features and search for the optimal feature combination. The DXEC method integrates many features and physicochemical/biochemical properties to predict PPIs. On the Gold Yeast dataset, the DXEC method achieves 67.2% overall precision, 80.74% recall, and 70.67% accuracy. On the Silver Yeast dataset, the DXEC method achieves 76.93% precision, 77.98% recall, and 77.27% accuracy. On the human dataset, the prediction accuracy reaches 80% for the DXEC-RF method. We extended the experiment to a bigger and more realistic dataset that maintains 50% recall on the Yeast All dataset and 80% recall on the Human All dataset. These results show that the DXEC method is suitable for performing PPI prediction. The prediction service of the DXEC-RF classifier is available at http://ailab.ahu.edu.cn:8087/DXECPPI/index.jsp.
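The overall pipeline shape (rank features, keep a top subset, train a Random Forest, report precision and recall) can be sketched as below. Everything here is a stand-in: the data are synthetic, and an ANOVA F-test (SelectKBest) substitutes for the paper's DX ranking statistic and ensemble-coded protein features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# synthetic stand-in for ensemble-coded protein-pair features
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 40))
y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=600) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
selector = SelectKBest(f_classif, k=10).fit(Xtr, ytr)  # stand-in for DX ranking
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(selector.transform(Xtr), ytr)

pred = clf.predict(selector.transform(Xte))
p, r = precision_score(yte, pred), recall_score(yte, pred)
print(round(p, 2), round(r, 2))
```

Ranking features before training, as the DX step does, is what keeps the forest's cost manageable when the encoded feature space is large.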
NASA Astrophysics Data System (ADS)
Wang, Juan; Wang, Jian; Li, Lijuan; Zhou, Kun
2014-08-01
In order to solve the information fusion, process integration, collaborative design and manufacturing for ultra-precision optical elements within life-cycle management, this paper presents a digital management platform which is based on product data and business processes by adopting the modern manufacturing technique, information technique and modern management technique. The architecture and system integration of the digital management platform are discussed in this paper. The digital management platform can realize information sharing and interaction for information-flow, control-flow and value-stream from user's needs to offline in life-cycle, and it can also enhance process control, collaborative research and service ability of ultra-precision optical elements.
NASA Astrophysics Data System (ADS)
Kiris, Tugba; Akbulut, Saadet; Kiris, Aysenur; Gucin, Zuhal; Karatepe, Oguzhan; Bölükbasi Ates, Gamze; Tabakoǧlu, Haşim Özgür
2015-03-01
In order to develop minimally invasive, fast and precise diagnostic and therapeutic methods in medicine using optical techniques, the first step is to examine how light propagates, scatters and is transmitted through the medium. To identify appropriate wavelengths, the optical properties of tissues must be determined accurately. The aim of this study is to measure the optical properties of both cancerous and normal ex-vivo pancreatic tissues. Results will be compared to detect how cancerous and normal tissues respond to different wavelengths. A double-integrating-sphere system and the inverse adding-doubling (IAD) computational technique were used in the study. Absorption and reduced scattering coefficients of normal and cancerous pancreatic tissues were measured within the range of 500-650 nm. Statistically significant differences between cancerous and normal tissues were obtained at 550 nm and 630 nm for the absorption coefficients. On the other hand, no statistically significant difference was found for the scattering coefficients at any wavelength.
When integration fails: Prokaryote phylogeny and the tree of life.
O'Malley, Maureen A
2013-12-01
Much is being written these days about integration, its desirability and even its necessity when complex research problems are to be addressed. Seldom, however, do we hear much about the failure of such efforts. Because integration is an ongoing activity rather than a final achievement, and because today's literature about integration consists mostly of manifesto statements rather than precise descriptions, an examination of unsuccessful integration could be illuminating to understand better how it works. This paper will examine the case of prokaryote phylogeny and its apparent failure to achieve integration within broader tree-of-life accounts of evolutionary history (often called 'universal phylogeny'). Despite the fact that integrated databases exist of molecules pertinent to the phylogenetic reconstruction of all lineages of life, and even though the same methods can be used to construct phylogenies wherever the organisms fall on the tree of life, prokaryote phylogeny remains at best only partly integrated within tree-of-life efforts. I will examine why integration does not occur, compare it with integrative practices in animal and other eukaryote phylogeny, and reflect on whether there might be different expectations of what integration should achieve. Finally, I will draw some general conclusions about integration and its function as a 'meta-heuristic' in the normative commitments guiding scientific practice. Copyright © 2012 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Portegies Zwart, Simon; Boekholt, Tjarda
2014-04-10
The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (by round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute-force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
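The energy-conservation check central to the argument above can be illustrated with a minimal three-body integration: the classic figure-eight orbit (Chenciner-Montgomery initial conditions, G = m = 1) advanced with a fixed-step leapfrog scheme while the relative energy error is monitored. This is a sketch of the diagnostic, not the arbitrary-precision integrator used in the paper; the step size and step count are illustrative choices.

```python
import numpy as np

def accelerations(pos):
    """Pairwise Newtonian accelerations for three unit-mass bodies (G = 1)."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += d / np.linalg.norm(d) ** 3
    return acc

def energy(pos, vel):
    """Total energy: kinetic plus pairwise gravitational potential."""
    kin = 0.5 * np.sum(vel ** 2)
    pot = sum(-1.0 / np.linalg.norm(pos[j] - pos[i])
              for i in range(3) for j in range(i + 1, 3))
    return kin + pot

# figure-eight initial conditions
pos = np.array([[-0.97000436, 0.24308753],
                [0.0, 0.0],
                [0.97000436, -0.24308753]])
v = np.array([0.4662036850, 0.4323657300])
vel = np.array([v, -2 * v, v])

E0, dt = energy(pos, vel), 1e-3
for _ in range(6000):                      # leapfrog: kick-drift-kick
    vel += 0.5 * dt * accelerations(pos)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos)
print(abs((energy(pos, vel) - E0) / E0) < 1e-5)
```

A symplectic integrator like leapfrog keeps the energy error bounded rather than secularly growing, which is why relative energy conservation is a usable (if not sufficient) quality criterion for ensembles of chaotic N-body solutions.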
An address geocoding method for improving rural spatial information infrastructure
NASA Astrophysics Data System (ADS)
Pan, Yuchun; Chen, Baisong; Lu, Zhou; Li, Shuhua; Zhang, Jingbo; Zhou, YanBing
2010-11-01
The transition of rural and agricultural management from a divisional to an integrated mode has highlighted the importance of data integration and sharing. Current data are mostly collected by specific departments to satisfy their own needs, with a lack of consideration of wider potential uses. This has led to great differences in data format, semantics, and precision even within the same area, which is a significant barrier to constructing an integrated rural spatial information system to support integrated management and decision-making. Considering the rural cadastral management system and postal zones, this paper designs a rural address geocoding method based on rural cadastral parcels. It puts forward a geocoding standard which consists of an absolute position code, a relative position code and an extended code. It designs a rural geocoding database model, and an address collection and update model. Then, based on the rural address geocoding model, it proposes a data model for rural agricultural resources management. The results show that address coding based on the postal code is stable and easy to memorize, two-dimensional coding based on direction and distance is easy to locate and memorize, and the extended code enhances the extensibility and flexibility of the address geocoding.
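The three-part code structure described above (absolute postal-zone part, relative direction-and-distance part, optional extended part) can be sketched as a toy encoder. The field widths, the four-sector direction encoding and the example values are invented for illustration; the paper's actual standard may differ.

```python
def make_address_code(postal, bearing_deg, distance_m, extension=""):
    """Compose a toy three-part rural address code.

    postal      -- absolute position code (e.g. a postal-zone code)
    bearing_deg -- direction from the zone reference point, degrees from north
    distance_m  -- distance from the reference point, meters
    extension   -- optional extended code for extensibility
    """
    sector = "NESW"[int(((bearing_deg + 45) % 360) // 90)]  # coarse direction
    relative = f"{sector}{int(distance_m):04d}"             # e.g. E0350
    return f"{postal}-{relative}" + (f"-{extension}" if extension else "")

print(make_address_code("100094", 80.0, 350))          # → 100094-E0350
print(make_address_code("100094", 200.0, 1200, "A2"))  # → 100094-S1200-A2
```

Keeping the absolute part identical to the postal code is what makes such codes stable and memorable, while the relative part makes each parcel locatable without a full coordinate lookup.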
Nguyen, Huu-Tho; Md Dawal, Siti Zawiah; Nukman, Yusoff; Aoyama, Hideki; Case, Keith
2015-01-01
Globalization of business and competitiveness in manufacturing has forced companies to improve their manufacturing facilities to respond to market requirements. Machine tool evaluation involves an essential decision using imprecise and vague information, and plays a major role in improving productivity and flexibility in manufacturing. The aim of this study is to present an integrated approach for decision-making in machine tool selection. This paper is focused on the integration of a consistent fuzzy AHP (Analytic Hierarchy Process) and a fuzzy COmplex PRoportional ASsessment (COPRAS) for multi-attribute decision-making in selecting the most suitable machine tool. In this method, the fuzzy linguistic reference relation is integrated into AHP to handle the imprecise and vague information, and to simplify the data collection for the pair-wise comparison matrix of the AHP, which determines the weights of the attributes. The output of the fuzzy AHP is imported into the fuzzy COPRAS method for ranking alternatives through the closeness coefficient. The application of the proposed model is illustrated by a numerical example based on data collected by questionnaire and from the literature. The results highlight the integration of the improved fuzzy AHP and the fuzzy COPRAS as a precise tool that provides effective multi-attribute decision-making for evaluating machine tools in an uncertain environment.
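The COPRAS ranking step can be sketched with crisp numbers (the paper works with fuzzy numbers, so this is a simplified illustration): the decision matrix is sum-normalized and weighted, benefit and cost criteria are accumulated separately, and each alternative's relative significance Q combines the two. The 3-machine, 3-criterion matrix and the weights below are invented; in the paper the weights would come from the fuzzy AHP.

```python
import numpy as np

X = np.array([[36.0, 0.12, 85.0],      # machine A: speed, price, accuracy
              [30.0, 0.10, 90.0],      # machine B
              [42.0, 0.15, 80.0]])     # machine C
w = np.array([0.4, 0.35, 0.25])        # criterion weights (e.g. from AHP)
cost = np.array([False, True, False])  # price is a cost criterion

D = w * X / X.sum(axis=0)              # weighted sum-normalized matrix
S_plus = D[:, ~cost].sum(axis=1)       # benefit-criteria sums
S_minus = D[:, cost].sum(axis=1)       # cost-criteria sums

# COPRAS relative significance of each alternative
Q = S_plus + S_minus.sum() / (S_minus * (1.0 / S_minus).sum())
utility = 100 * Q / Q.max()            # utility degree, best machine = 100
print(np.argmax(Q))                    # index of the most suitable machine
```

Cheaper-is-better criteria enter through the reciprocal cost term, so an alternative can win on low price even with middling benefit scores, which is the behaviour the COPRAS closeness coefficient is designed to capture.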
A multi-landing pad DNA integration platform for mammalian cell engineering
Gaidukov, Leonid; Wroblewska, Liliana; Teague, Brian; Nelson, Tom; Zhang, Xin; Liu, Yan; Jagtap, Kalpana; Mamo, Selamawit; Tseng, Wen Allen; Lowe, Alexis; Das, Jishnu; Bandara, Kalpanie; Baijuraj, Swetha; Summers, Nevin M; Zhang, Lin; Weiss, Ron
2018-01-01
Engineering mammalian cell lines that stably express many transgenes requires the precise insertion of large amounts of heterologous DNA into well-characterized genomic loci, but current methods are limited. To facilitate reliable large-scale engineering of CHO cells, we identified 21 novel genomic sites that supported stable long-term expression of transgenes, and then constructed cell lines containing one, two or three ‘landing pad’ recombination sites at selected loci. By using a highly efficient BxB1 recombinase along with different selection markers at each site, we directed recombinase-mediated insertion of heterologous DNA to selected sites, including targeting all three with a single transfection. We used this method to controllably integrate up to nine copies of a monoclonal antibody, representing about 100 kb of heterologous DNA in 21 transcriptional units. Because the integration was targeted to pre-validated loci, recombinant protein expression remained stable for weeks and additional copies of the antibody cassette in the integrated payload resulted in a linear increase in antibody expression. Overall, this multi-copy site-specific integration platform allows for controllable and reproducible insertion of large amounts of DNA into stable genomic sites, which has broad applications for mammalian synthetic biology, recombinant protein production and biomanufacturing. PMID:29617873
Inactivation of Pol θ and C-NHEJ eliminates off-target integration of exogenous DNA.
Zelensky, Alex N; Schimmel, Joost; Kool, Hanneke; Kanaar, Roland; Tijsterman, Marcel
2017-07-07
Off-target or random integration of exogenous DNA hampers precise genomic engineering and presents a safety risk in clinical gene therapy strategies. Genetic definition of random integration has been lacking for decades. Here, we show that the A-family DNA polymerase θ (Pol θ) promotes random integration, while canonical non-homologous DNA end joining plays a secondary role; cells doubly deficient for polymerase θ and canonical non-homologous DNA end joining are devoid of any integration events, demonstrating that these two mechanisms define random integration. In contrast, homologous recombination is not reduced in these cells and gene targeting is improved to 100% efficiency. Such complete reversal of integration outcome, from predominantly random integration to exclusively gene targeting, provides a rational way forward to improve the efficacy and safety of DNA delivery and gene correction approaches. Random off-target integration events can impair precise gene targeting and pose a safety risk for gene therapy; here the authors show that repression of polymerase θ and classical non-homologous end joining eliminates random integration.
Printable semiconductor structures and related methods of making and assembling
Nuzzo, Ralph G.; Rogers, John A.; Menard, Etienne; Lee, Keon Jae; Khang, Dahl-Young; Sun, Yugang; Meitl, Matthew; Zhu, Zhengtao; Ko, Heung Cho; Mack, Shawn
2013-03-12
The present invention provides a high yield pathway for the fabrication, transfer and assembly of high quality printable semiconductor elements having selected physical dimensions, shapes, compositions and spatial orientations. The compositions and methods of the present invention provide high precision registered transfer and integration of arrays of microsized and/or nanosized semiconductor structures onto substrates, including large area substrates and/or flexible substrates. In addition, the present invention provides methods of making printable semiconductor elements from low cost bulk materials, such as bulk silicon wafers, and smart-materials processing strategies that enable a versatile and commercially attractive printing-based fabrication platform for making a broad range of functional semiconductor devices.
Printable semiconductor structures and related methods of making and assembling
Nuzzo, Ralph G [Champaign, IL]; Rogers, John A [Champaign, IL]; Menard, Etienne [Durham, NC]; Lee, Keon Jae [Tokyo, JP]; Khang, Dahl-Young [Urbana, IL]; Sun, Yugang [Westmont, IL]; Meitl, Matthew [Raleigh, NC]; Zhu, Zhengtao [Rapid City, SD]; Ko, Heung Cho [Urbana, IL]; Mack, Shawn [Goleta, CA]
2011-10-18
The present invention provides a high yield pathway for the fabrication, transfer and assembly of high quality printable semiconductor elements having selected physical dimensions, shapes, compositions and spatial orientations. The compositions and methods of the present invention provide high precision registered transfer and integration of arrays of microsized and/or nanosized semiconductor structures onto substrates, including large area substrates and/or flexible substrates. In addition, the present invention provides methods of making printable semiconductor elements from low cost bulk materials, such as bulk silicon wafers, and smart-materials processing strategies that enable a versatile and commercially attractive printing-based fabrication platform for making a broad range of functional semiconductor devices.
Printable semiconductor structures and related methods of making and assembling
Nuzzo, Ralph G.; Rogers, John A.; Menard, Etienne; Lee, Keon Jae; Khang, Dahl-Young; Sun, Yugang; Meitl, Matthew; Zhu, Zhengtao; Ko, Heung Cho; Mack, Shawn
2010-09-21
The present invention provides a high yield pathway for the fabrication, transfer and assembly of high quality printable semiconductor elements having selected physical dimensions, shapes, compositions and spatial orientations. The compositions and methods of the present invention provide high precision registered transfer and integration of arrays of microsized and/or nanosized semiconductor structures onto substrates, including large area substrates and/or flexible substrates. In addition, the present invention provides methods of making printable semiconductor elements from low cost bulk materials, such as bulk silicon wafers, and smart-materials processing strategies that enable a versatile and commercially attractive printing-based fabrication platform for making a broad range of functional semiconductor devices.
Genomic clocks and evolutionary timescales
NASA Technical Reports Server (NTRS)
Blair Hedges, S.; Kumar, Sudhir
2003-01-01
For decades, molecular clocks have helped to illuminate the evolutionary timescale of life, but now genomic data pose a challenge for time estimation methods. It is unclear how to integrate data from many genes, each potentially evolving under a different model of substitution and at a different rate. Current methods can be grouped by the way the data are handled (genes considered separately or combined into a 'supergene') and the way gene-specific rate models are applied (global versus local clock). There are advantages and disadvantages to each of these approaches, and the optimal method has not yet emerged. Fortunately, time estimates inferred using many genes or proteins have greater precision and appear to be robust to different approaches.
A new method to calculate the beam charge for an integrating current transformer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu Yuchi; Han Dan; Zhu Bin
2012-09-15
The integrating current transformer (ICT) is a magnetic sensor widely used to precisely measure the charge of an ultra-short-pulse charged particle beam generated by traditional accelerators and new laser-plasma particle accelerators. In this paper, we present a new method to calculate the beam charge in an ICT based on circuit analysis. The output transfer function shows an invariable signal profile for an ultra-short electron bunch, so the function can be used to evaluate the signal quality and calculate the beam charge through signal fitting. We obtain a set of parameters in the output function from a standard signal generated by an ultra-short electron bunch (about 1 ps in duration) at a radio frequency linear electron accelerator at Tsinghua University. These parameters can be used to obtain the beam charge by signal fitting with excellent accuracy.
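The fitting idea can be sketched as follows: because the ICT response to an ultra-short bunch has an invariable profile, the charge only scales the amplitude, so a linear least-squares fit of a calibrated template recovers it. The template shape and noise level below are invented for illustration, not taken from the paper.

```python
import numpy as np

def charge_from_fit(signal, template, q_template):
    """Estimate bunch charge by fitting a fixed-shape ICT template.
    The template is the recorded response to a reference bunch of
    known charge q_template; only the amplitude is free."""
    signal = np.asarray(signal, float)
    template = np.asarray(template, float)
    # Linear least squares for the scale a in: signal ≈ a * template
    a = np.dot(signal, template) / np.dot(template, template)
    return a * q_template

# Synthetic demonstration with an assumed damped-oscillation profile
t = np.linspace(0, 1e-6, 500)
profile = np.exp(-t / 2e-7) * np.sin(2 * np.pi * 5e6 * t)
rng = np.random.default_rng(0)
noisy = 2.5 * profile + 0.01 * rng.standard_normal(t.size)
print(charge_from_fit(noisy, profile, q_template=1.0))  # ≈ 2.5
```

Fitting the whole waveform rather than integrating the raw signal averages out noise over every sample, which is why the abstract's signal-fitting approach attains high accuracy.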
X-ray simulations method for the large field of view
NASA Astrophysics Data System (ADS)
Schelokov, I. A.; Grigoriev, M. V.; Chukalina, M. V.; Asadchikov, V. E.
2018-03-01
In the standard approach, X-ray simulation is usually limited by the spatial sampling step used to calculate convolution integrals of the Fresnel type. Explicitly, the sampling step is determined by the size of the last Fresnel zone in the beam aperture; in other words, the spatial sampling is determined by the precision of the convolution calculations and is not connected with the spatial resolution of the optical scheme. In the developed approach, the convolution in normal space is replaced by computation of the shear strain of the ambiguity function in phase space. The spatial sampling is then determined by the spatial resolution of the optical scheme, and the sampling step can differ between directions because of source anisotropy. The approach was used to simulate images in X-ray Talbot interferometry and showed that the simulation can be applied to optimize postprocessing methods.
Computer Generated Diffraction Patterns Of Rough Surfaces
NASA Astrophysics Data System (ADS)
Rakels, Jan H.
1989-03-01
It is generally accepted that optical methods are the most promising for in-process measurement of surface finish, with the advantages of being non-contacting and offering fast data acquisition. In the Micro-Engineering Centre at the University of Warwick, an optical sensor has been devised which can measure the rms roughness, slope and wavelength of turned and precision-ground surfaces. The operation of this device is based upon the Kirchhoff-Fresnel diffraction integral. Application of this theory to ideal turned surfaces is straightforward, and indeed the theoretically calculated diffraction patterns are in close agreement with patterns produced by an actual optical instrument. Since it is mathematically difficult to introduce real surface profiles into the diffraction integral, a computer program has been devised which simulates the operation of the optical sensor and produces a diffraction pattern as graphical output. Comparisons between computer-generated and actual diffraction patterns of the same surfaces show a high correlation.
Public data and open source tools for multi-assay genomic investigation of disease.
Kannan, Lavanya; Ramos, Marcel; Re, Angela; El-Hachem, Nehme; Safikhani, Zhaleh; Gendoo, Deena M A; Davis, Sean; Gomez-Cabrero, David; Castelo, Robert; Hansen, Kasper D; Carey, Vincent J; Morgan, Martin; Culhane, Aedín C; Haibe-Kains, Benjamin; Waldron, Levi
2016-07-01
Molecular interrogation of a biological sample through DNA sequencing, RNA and microRNA profiling, proteomics and other assays, has the potential to provide a systems level approach to predicting treatment response and disease progression, and to developing precision therapies. Large publicly funded projects have generated extensive and freely available multi-assay data resources; however, bioinformatic and statistical methods for the analysis of such experiments are still nascent. We review multi-assay genomic data resources in the areas of clinical oncology, pharmacogenomics and other perturbation experiments, population genomics and regulatory genomics and other areas, and tools for data acquisition. Finally, we review bioinformatic tools that are explicitly geared toward integrative genomic data visualization and analysis. This review provides starting points for accessing publicly available data and tools to support development of needed integrative methods. © The Author 2015. Published by Oxford University Press.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aaltonen, T.; Brucken, E.; Devoto, F.
A precision measurement of the top quark mass m{sub t} is obtained using a sample of tt events from pp collisions at the Fermilab Tevatron with the CDF II detector. Selected events require an electron or muon, large missing transverse energy, and exactly four high-energy jets, at least one of which is tagged as coming from a b quark. A likelihood is calculated using a matrix element method with quasi-Monte Carlo integration taking into account finite detector resolution and jet mass effects. The event likelihood is a function of m{sub t} and a parameter {Delta}{sub JES} used to calibrate the jet energy scale in situ. Using a total of 1087 events in 5.6 fb{sup -1} of integrated luminosity, a value of m{sub t}=173.0{+-}1.2 GeV/c{sup 2} is measured.
High precision triangular waveform generator
Mueller, Theodore R.
1983-01-01
An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching of the current at current source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.
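A minimal numerical sketch of the ramp principle, assuming idealized components: each constant current source sets one slope of the integrator output (dV = I·dt/C), and switching at programmed turnaround voltages produces the triangle. All component values below are hypothetical.

```python
def triangle_ramp(i_up, i_down, c, v_low, v_high, dt, n_steps):
    """Sketch of the dual-current-source ramp: a constant current
    charges or discharges the integrator capacitor, and switching
    happens at the programmed turnaround voltages (assumed values)."""
    v, rising, out = v_low, True, []
    for _ in range(n_steps):
        dv = (i_up if rising else -i_down) * dt / c  # dV = I*dt/C
        v += dv
        if v >= v_high: v, rising = v_high, False    # top turnaround
        if v <= v_low:  v, rising = v_low, True      # bottom turnaround
        out.append(v)
    return out

# 1 uA up / 2 uA down into 1 nF gives independently selectable slopes
# of 1000 V/s and 2000 V/s between 0 V and 1 V turnaround points.
wave = triangle_ramp(1e-6, 2e-6, 1e-9, 0.0, 1.0, 1e-5, 3000)
print(max(wave), min(wave))
```

The asymmetric currents reproduce the independently selectable ascending and descending slopes described in the patent; in the hardware, switching at the current source inputs (rather than at the integrator input) is what suppresses switching transients.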
High-precision triangular-waveform generator
Mueller, T.R.
1981-11-14
An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching of the current at current source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.
Precision mechatronics based on high-precision measuring and positioning systems and machines
NASA Astrophysics Data System (ADS)
Jäger, Gerd; Manske, Eberhard; Hausotte, Tino; Mastylo, Rostyslav; Dorozhovets, Natalja; Hofmann, Norbert
2007-06-01
Precision mechatronics is defined in this paper as the science and engineering of a new generation of high-precision systems and machines. Nanomeasuring and nanopositioning engineering are important fields of precision mechatronics, and nanometrology is described as today's limit of precision engineering. The problem of how to design nanopositioning machines with the smallest possible uncertainties is discussed. The integration of several optical and tactile nanoprobes makes the 3D nanopositioning machine suitable for various tasks, such as long-range scanning probe microscopy, mask and wafer inspection, nanotribology, nanoindentation, free-form surface measurement, as well as measurement of microoptics, precision molds, microgears, ring gauges and small holes.
Integrated GNSS Attitude Determination and Positioning for Direct Geo-Referencing
Nadarajah, Nandakumaran; Paffenholz, Jens-André; Teunissen, Peter J. G.
2014-01-01
Direct geo-referencing is an efficient methodology for the fast acquisition of 3D spatial data. It requires the fusion of spatial data acquisition sensors with navigation sensors, such as Global Navigation Satellite System (GNSS) receivers. In this contribution, we consider an integrated GNSS navigation system to provide estimates of the position and attitude (orientation) of a 3D laser scanner. The proposed multi-sensor system (MSS) consists of multiple GNSS antennas rigidly mounted on the frame of a rotating laser scanner and a reference GNSS station with known coordinates. Precise GNSS navigation requires the resolution of the carrier phase ambiguities. The proposed method uses the multivariate constrained integer least-squares (MC-LAMBDA) method for the estimation of rotating frame ambiguities and attitude angles. MC-LAMBDA makes use of the known antenna geometry to strengthen the underlying attitude model and, hence, to enhance the reliability of rotating frame ambiguity resolution and attitude determination. The reliable estimation of rotating frame ambiguities is consequently utilized to enhance the relative positioning of the rotating frame with respect to the reference station. This integrated (array-aided) method improves ambiguity resolution, as well as positioning accuracy between the rotating frame and the reference station. Numerical analyses of GNSS data from a real-data campaign confirm the improved performance of the proposed method over the existing method. In particular, the integrated method yields reliable ambiguity resolution and reduces position standard deviation by a factor of about 0.8, matching the theoretical gain of √(3/4) for two antennas on the rotating frame and a single antenna at the reference station. PMID:25036330
Integrated GNSS attitude determination and positioning for direct geo-referencing.
Nadarajah, Nandakumaran; Paffenholz, Jens-André; Teunissen, Peter J G
2014-07-17
Direct geo-referencing is an efficient methodology for the fast acquisition of 3D spatial data. It requires the fusion of spatial data acquisition sensors with navigation sensors, such as Global Navigation Satellite System (GNSS) receivers. In this contribution, we consider an integrated GNSS navigation system to provide estimates of the position and attitude (orientation) of a 3D laser scanner. The proposed multi-sensor system (MSS) consists of multiple GNSS antennas rigidly mounted on the frame of a rotating laser scanner and a reference GNSS station with known coordinates. Precise GNSS navigation requires the resolution of the carrier phase ambiguities. The proposed method uses the multivariate constrained integer least-squares (MC-LAMBDA) method for the estimation of rotating frame ambiguities and attitude angles. MC-LAMBDA makes use of the known antenna geometry to strengthen the underlying attitude model and, hence, to enhance the reliability of rotating frame ambiguity resolution and attitude determination. The reliable estimation of rotating frame ambiguities is consequently utilized to enhance the relative positioning of the rotating frame with respect to the reference station. This integrated (array-aided) method improves ambiguity resolution, as well as positioning accuracy between the rotating frame and the reference station. Numerical analyses of GNSS data from a real-data campaign confirm the improved performance of the proposed method over the existing method. In particular, the integrated method yields reliable ambiguity resolution and reduces position standard deviation by a factor of about 0.8, matching the theoretical gain of √(3/4) for two antennas on the rotating frame and a single antenna at the reference station.
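The quoted gain can be checked with a toy Monte Carlo, under the simplifying assumption that each antenna contributes an independent, equally noisy position estimate: averaging the two rotating-frame antennas before differencing with the reference reduces the baseline standard deviation by √(1.5/2) = √(3/4) ≈ 0.866, i.e. "about 0.8".

```python
import numpy as np

rng = np.random.default_rng(42)
sigma, n = 1.0, 200_000
ref = rng.normal(0, sigma, n)      # reference-station noise
a1 = rng.normal(0, sigma, n)       # rotating-frame antenna 1
a2 = rng.normal(0, sigma, n)       # rotating-frame antenna 2

single = a1 - ref                  # one antenna vs reference
aided = 0.5 * (a1 + a2) - ref      # two-antenna average vs reference
ratio = aided.std() / single.std()
print(ratio)                       # ≈ sqrt(3)/2 ≈ 0.866
```

This is only the equal-and-independent-noise idealization; the real array-aided estimator also exploits the known antenna geometry and resolved ambiguities, which is why the paper reports the factor empirically.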
Problems, challenges and promises: perspectives on precision medicine.
Duffy, David J
2016-05-01
The 'precision medicine (systems medicine)' concept promises to achieve a shift to future healthcare systems with a more proactive and predictive approach to medicine, where the emphasis is on disease prevention rather than the treatment of symptoms. The individualization of treatment for each patient will be at the centre of this approach, with all of a patient's medical data being computationally integrated and accessible. Precision medicine is being rapidly embraced by biomedical researchers, pioneering clinicians and scientific funding programmes in both the European Union (EU) and USA. Precision medicine is a key component of both Horizon 2020 (the EU Framework Programme for Research and Innovation) and the White House's Precision Medicine Initiative. Precision medicine promises to revolutionize patient care and treatment decisions. However, the participants in precision medicine are faced with a considerable central challenge. Greater volumes of data from a wider variety of sources are being generated and analysed than ever before; yet, this heterogeneous information must be integrated and incorporated into personalized predictive models, the output of which must be intelligible to non-computationally trained clinicians. Drawing primarily from the field of 'oncology', this article will introduce key concepts and challenges of precision medicine and some of the approaches currently being implemented to overcome these challenges. Finally, this article also covers the criticisms of precision medicine overpromising on its potential to transform patient care. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Application of Geodetic Techniques for Antenna Positioning in a Ground Penetrating Radar Method
NASA Astrophysics Data System (ADS)
Mazurkiewicz, Ewelina; Ortyl, Łukasz; Karczewski, Jerzy
2018-03-01
The accuracy of determining the location of detectable subsurface objects is related to the accuracy of the position of georadar traces in a given profile, which in turn depends on precise assessment of the distance covered by the antenna. During georadar measurements, the distance covered by the antenna can be determined with a variety of methods. Recording traces at fixed time intervals is the simplest of them. A method which allows more precise location of georadar traces is recording them at fixed distance intervals, which can be performed with distance triggers (such as a measuring wheel or a hip chain). The search for methods eliminating these discrepancies can be based on measuring the spatial coordinates of georadar traces with modern geodetic techniques for 3D location, above all GNSS and electronic tachymeters. Application of these methods increases the accuracy of the spatial location of georadar traces. The article presents the results of georadar measurements performed with geodetic techniques in the test area of Mydlniki in Krakow, where a Leica System 1200 satellite receiver and a Leica TCRA 1102 electronic tachymeter were integrated with the georadar equipment. The accuracy of locating chosen subsurface structures was compared.
Wang, Yujue; Lian, Ziyang; Yao, Mingge; Wang, Ji; Hu, Hongping
2013-10-01
A power harvester with adjustable frequency, which consists of a hinged-hinged piezoelectric bimorph and a concentrated mass, is studied by the precise electric field method (PEFM), which accounts for the distribution of the electric field over the thickness. In the equivalent electric field method (EEFM), by contrast, the electric field is usually approximated as constant within the piezoelectric layer. The charge on the upper electrode (UEC) of the bimorph is often taken as the output charge; however, different output charges are obtained by integrating the electric displacement over the electrode at different thickness coordinates, so an average charge (AC) over the thickness is often used as the output value instead. These variants are denoted EEFM UEC and EEFM AC. The flexural vibration of the bimorph is calculated by the three methods and their results are compared. Numerical results illustrate that EEFM UEC overestimates the resonant frequency, output power, and efficiency, while EEFM AC calculates the output power and efficiency accurately but underestimates the resonant frequency. The performance of the harvester, which depends on the concentrated mass weight, position, and circuit load, is analyzed using the PEFM. The resonant frequency can be tuned over a 924 Hz range by moving the concentrated mass along the bimorph. This feature suggests that the natural frequency of the harvester can be adjusted conveniently to adapt to frequency fluctuations of the ambient vibration.
NASA Astrophysics Data System (ADS)
Neumann, Jay; Parlato, Russell; Tracy, Gregory; Randolph, Max
2015-09-01
Focal plane alignment for large-format arrays and faster optical systems requires enhanced precision methodology and stability over temperature. The increase in focal plane array size continues to drive alignment capability. Depending on the optical system, a focal plane flatness of less than 25 μm (.001") is required over transition temperatures from ambient to cooled operating temperatures, and this flatness must also be maintained in airborne or launch vibration environments. This paper addresses the challenge of detector integration into the focal plane module and housing assemblies, the methodology to reduce error terms during integration, and the evaluation of thermal effects. The driving factors influencing alignment accuracy include: datum transfers, material effects over temperature, alignment stability over test, adjustment precision, and traceability to NIST standards. The FPA module design and alignment methodology reduces error terms by minimizing measurement transfers to the housing. In the design, selecting materials with matched coefficients of thermal expansion minimizes both the physical shift over temperature and the stress induced into the detector. When required, co-registration of focal planes and filters can achieve submicron relative positioning by applying precision equipment, interferometry, and piezoelectric positioning stages. All measurements and characterizations maintain traceability to NIST standards, and the metrology characterizes the equipment's accuracy, repeatability, and measurement precision.
3D Printed Programmable Release Capsules.
Gupta, Maneesh K; Meng, Fanben; Johnson, Blake N; Kong, Yong Lin; Tian, Limei; Yeh, Yao-Wen; Masters, Nina; Singamaneni, Srikanth; McAlpine, Michael C
2015-08-12
The development of methods for achieving precise spatiotemporal control over chemical and biomolecular gradients could enable significant advances in areas such as synthetic tissue engineering, biotic-abiotic interfaces, and bionanotechnology. Living organisms guide tissue development through highly orchestrated gradients of biomolecules that direct cell growth, migration, and differentiation. While numerous methods have been developed to manipulate and implement biomolecular gradients, integrating gradients into multiplexed, three-dimensional (3D) matrices remains a critical challenge. Here we present a method to 3D print stimuli-responsive core/shell capsules for programmable release of multiplexed gradients within hydrogel matrices. These capsules are composed of an aqueous core, which can be formulated to maintain the activity of payload biomolecules, and a poly(lactic-co-glycolic) acid (PLGA, an FDA approved polymer) shell. Importantly, the shell can be loaded with plasmonic gold nanorods (AuNRs), which permits selective rupturing of the capsule when irradiated with a laser wavelength specifically determined by the lengths of the nanorods. This precise control over space, time, and selectivity allows for the ability to pattern 2D and 3D multiplexed arrays of enzyme-loaded capsules along with tunable laser-triggered rupture and release of active enzymes into a hydrogel ambient. The advantages of this 3D printing-based method include (1) highly monodisperse capsules, (2) efficient encapsulation of biomolecular payloads, (3) precise spatial patterning of capsule arrays, (4) "on the fly" programmable reconfiguration of gradients, and (5) versatility for incorporation in hierarchical architectures. Indeed, 3D printing of programmable release capsules may represent a powerful new tool to enable spatiotemporal control over biomolecular gradients.
Determination of γ -ray widths in 15N using nuclear resonance fluorescence
NASA Astrophysics Data System (ADS)
Szücs, T.; Bemmerer, D.; Caciolli, A.; Fülöp, Zs.; Massarczyk, R.; Michelagnoli, C.; Reinhardt, T. P.; Schwengner, R.; Takács, M. P.; Ur, C. A.; Wagner, A.; Wagner, L.
2015-07-01
Background: The stable nucleus 15N is the mirror of 15O, the bottleneck in the hydrogen burning CNO cycle. Most of the 15N level widths below the proton emission threshold are known from just one nuclear resonance fluorescence (NRF) measurement, with limited precision in some cases. A recent experiment with the AGATA demonstrator array determined level lifetimes using the Doppler shift attenuation method in 15O. As a reference and for testing the method, level lifetimes in 15N have also been determined in the same experiment. Purpose: The latest compilation of 15N level properties dates back to 1991. The limited precision in some cases in the compilation calls for a new measurement to enable a comparison to the AGATA demonstrator data. The widths of several 15N levels have been studied with the NRF method. Method: The solid nitrogen compounds enriched in 15N have been irradiated with bremsstrahlung. The γ rays following the deexcitation of the excited nuclear levels were detected with four high-purity germanium detectors. Results: Integrated photon-scattering cross sections of 10 levels below the proton emission threshold have been measured. Partial γ -ray widths of ground-state transitions were deduced and compared to the literature. The photon-scattering cross sections of two levels above the proton emission threshold, but still below other particle emission energies have also been measured, and proton resonance strengths and proton widths were deduced. Conclusions: Gamma and proton widths consistent with the literature values were obtained, but with greatly improved precision.
NASA Astrophysics Data System (ADS)
He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu
2014-11-01
Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and no single sensor can handle complex inspection tasks in an accurate and effective way; the prevailing solution is to integrate multiple sensors and take advantage of their strengths. To obtain a holistic 3D profile, the data from different sensors must be registered into a coherent coordinate system. However, for complex-shaped objects with thin-wall features such as blades, the ICP registration method becomes unstable, so it is very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrating different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position at the object's surface. To simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation roughly aligns the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the Generalized Gauss-Markoff model is then used to estimate the optimal transformation parameters. The experiments show the measurement result of a blade, in which several sampled patches are merged into one point cloud, verifying the performance of the proposed method.
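The final estimation step can be illustrated with a simpler stand-in: a least-squares rigid transform between corresponding points (the Kabsch/SVD solution) rather than the full Generalized Gauss-Markoff adjustment used in the paper. The rotation, translation, and point set below are synthetic.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q
    via SVD (Kabsch); a simplified stand-in for the Gauss-Markoff
    estimation used for sensor-to-sensor extrinsic calibration."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical check: recover a known rotation/translation
ang = np.deg2rad(30)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1]])
t_true = np.array([0.1, -0.2, 0.05])
P = np.random.default_rng(1).random((20, 3))
Q = P @ R_true.T + t_true                   # q_i = R p_i + t
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

With real sensor data the correspondences would come from the correlation-and-segmentation rough alignment of the calibration artifact, and a full adjustment would additionally weight observations by their uncertainties.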
Three-Dimensional Printing Based Hybrid Manufacturing of Microfluidic Devices.
Alapan, Yunus; Hasan, Muhammad Noman; Shen, Richang; Gurkan, Umut A
2015-05-01
Microfluidic platforms offer revolutionary and practical solutions to challenging problems in biology and medicine. Even though traditional micro/nanofabrication technologies expedited the emergence of the microfluidics field, recent advances in additive manufacturing hold significant potential for single-step, stand-alone microfluidic device fabrication. One such technology, which holds significant promise for next-generation microsystem fabrication, is three-dimensional (3D) printing. Presently, building 3D printed stand-alone microfluidic devices with fully embedded microchannels for applications in biology and medicine has the following challenges: (i) limitations in achievable design complexity, (ii) need for a wider variety of transparent materials, (iii) limited z-resolution, (iv) absence of extremely smooth surface finish, and (v) limitations in precision fabrication of hollow and void sections with extremely high surface area to volume ratio. We developed a new way to fabricate stand-alone microfluidic devices with integrated manifolds and embedded microchannels by utilizing a 3D printing and laser micromachined lamination based hybrid manufacturing approach. In this new fabrication method, we exploit the minimized fabrication steps enabled by 3D printing, and reduced assembly complexities facilitated by laser micromachined lamination method. The new hybrid fabrication method enables key features for advanced microfluidic system architecture: (i) increased design complexity in 3D, (ii) improved control over microflow behavior in all three directions and in multiple layers, (iii) transverse multilayer flow and precisely integrated flow distribution, and (iv) enhanced transparency for high resolution imaging and analysis. Hybrid manufacturing approaches hold great potential in advancing microfluidic device fabrication in terms of standardization, fast production, and user-independent manufacturing.
Hanuschkin, Alexander; Kunkel, Susanne; Helias, Moritz; Morrison, Abigail; Diesmann, Markus
2010-01-01
Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision. PMID:21031031
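The retrospective threshold detection described above is simple to implement. A minimal sketch of a globally time-driven leaky integrate-and-fire simulation that checks for a crossing at the end of each grid step (parameter values are illustrative, not those of the paper or of NEST):

```python
import math

def simulate_lif(spike_inputs, t_end, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0, w=0.2):
    """Time-driven leaky integrate-and-fire neuron (exact exponential update).

    spike_inputs: dict mapping time-grid step index -> summed input spike count.
    Threshold crossings are detected retrospectively at the end of each step,
    as in a globally time-driven scheme.
    """
    decay = math.exp(-dt / tau)      # exact subthreshold propagator over one step
    v = 0.0
    spikes = []
    n_steps = int(round(t_end / dt))
    for step in range(1, n_steps + 1):
        v *= decay                               # propagate membrane potential
        v += spike_inputs.get(step, 0.0) * w     # add spikes arriving at this grid point
        if v >= v_th:                            # retrospective detection: crossing lies in the past step
            spikes.append(step * dt)
            v = v_reset
    return spikes

# drive the neuron with a regular input train, one spike every 0.5 ms
inputs = {k: 1.0 for k in range(1, 1001, 5)}
print(len(simulate_lif(inputs, t_end=100.0)))
```

The loop body only needs one decay multiplication, one addition, and one comparison per step, which is the cost argument made above against spike-time prediction.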
Three-Dimensional Printing Based Hybrid Manufacturing of Microfluidic Devices
Shen, Richang; Gurkan, Umut A.
2016-01-01
Microfluidic platforms offer revolutionary and practical solutions to challenging problems in biology and medicine. Even though traditional micro/nanofabrication technologies expedited the emergence of the microfluidics field, recent advances in additive manufacturing hold significant potential for single-step, stand-alone microfluidic device fabrication. One such technology that holds significant promise for next-generation microsystem fabrication is three-dimensional (3D) printing. Presently, building 3D printed stand-alone microfluidic devices with fully embedded microchannels for applications in biology and medicine faces the following challenges: (i) limitations in achievable design complexity, (ii) need for a wider variety of transparent materials, (iii) limited z-resolution, (iv) absence of extremely smooth surface finish, and (v) limitations in precision fabrication of hollow and void sections with extremely high surface-area-to-volume ratio. We developed a new way to fabricate stand-alone microfluidic devices with integrated manifolds and embedded microchannels by utilizing a hybrid manufacturing approach based on 3D printing and laser micromachined lamination. In this new fabrication method, we exploit the minimized fabrication steps enabled by 3D printing and the reduced assembly complexities facilitated by the laser micromachined lamination method. The new hybrid fabrication method enables key features for advanced microfluidic system architecture: (i) increased design complexity in 3D, (ii) improved control over microflow behavior in all three directions and in multiple layers, (iii) transverse multilayer flow and precisely integrated flow distribution, and (iv) enhanced transparency for high resolution imaging and analysis. Hybrid manufacturing approaches hold great potential in advancing microfluidic device fabrication in terms of standardization, fast production, and user-independent manufacturing. PMID:27512530
Design and Verification of a Digital Controller for a 2-Piece Hemispherical Resonator Gyroscope.
Lee, Jungshin; Yun, Sung Wook; Rhim, Jaewook
2016-04-20
A Hemispherical Resonator Gyro (HRG) is a Coriolis Vibratory Gyro (CVG) that measures rotation angle or angular velocity using the Coriolis force acting on the vibrating mass. An HRG can be used as a rate gyro or an integrating gyro without structural modification by simply changing the control scheme. In this paper, differential control algorithms are designed for a 2-piece HRG. To design a precision controller, the electromechanical modeling and signal processing must first be performed accurately. Therefore, the equations of motion for the HRG resonator with switched harmonic excitations are derived with the Duhamel integral method. Electromechanical modeling of the resonator, electric module, and charge amplifier is performed by considering the mode shape of a thin hemispherical shell. Further, signal processing and control algorithms are designed. The multi-flexing scheme of sensing and driving cycles and x-, y-axis switching cycles is appropriate for high-precision, low-maneuverability systems. On the basis of these studies, the differential control scheme readily rejects the common-mode errors of the x-, y-axis signals and switches to the rate integrating mode. In the rate gyro mode, the controller is composed of Phase-Locked Loop (PLL), amplitude, quadrature, and rate control loops. All controllers are designed on the basis of a digital PI controller. The signal processing and control algorithms are verified through Matlab/Simulink simulations. Finally, an FPGA and DSP board with these algorithms is verified through experiments.
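Since every loop above is built on a digital PI controller, the core update is the same in each. A minimal sketch of such a discrete PI regulator with output clamping and simple anti-windup, driving an illustrative first-order plant (the gains and plant model are assumptions for demonstration, not the paper's values):

```python
class DigitalPI:
    """Discrete PI controller (backward-Euler integral term), as used in the
    amplitude/quadrature/rate loops of a CVG controller."""
    def __init__(self, kp, ki, dt, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                 # accumulate integral term
        u = self.kp * error + self.ki * self.integral
        # clamp the output; anti-windup: freeze the integral while saturated
        if u > self.out_max:
            self.integral -= error * self.dt
            u = self.out_max
        elif u < self.out_min:
            self.integral -= error * self.dt
            u = self.out_min
        return u

# regulate a toy first-order plant (amp' = u - 0.1*amp) to amplitude 1.0
pi = DigitalPI(kp=0.5, ki=2.0, dt=0.001)
amp = 0.0
for _ in range(30000):
    amp += (pi.update(1.0, amp) - 0.1 * amp) * 0.001
print(round(amp, 3))
```

The integral term holds the steady-state control effort, so the loop settles with zero residual error, which is the property the amplitude and quadrature loops rely on.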
NASA Astrophysics Data System (ADS)
Marques, Haroldo Antonio; Marques, Heloísa Alves Silva; Aquino, Marcio; Veettil, Sreeja Vadakke; Monico, João Francisco Galera
2018-02-01
GPS and GLONASS are currently the Global Navigation Satellite Systems (GNSS) with full operational capability. The integration of GPS, GLONASS and future GNSS constellations can provide better accuracy and more reliability in geodetic positioning, in particular for kinematic Precise Point Positioning (PPP), where the satellite geometry is considered a limiting factor in achieving centimeter accuracy. The satellite geometry can change suddenly in kinematic positioning in urban areas or under conditions of strong atmospheric effects, such as ionospheric scintillation, that may degrade satellite signal quality, causing cycle slips and even loss of lock. Scintillation is caused by small-scale irregularities in the ionosphere and is characterized by rapid changes in the amplitude and phase of the signal, which are more severe in equatorial and high-latitude geomagnetic regions. In this work, geodetic positioning through the PPP method was evaluated with integrated GPS and GLONASS data collected in the equatorial region under varied scintillation conditions. The GNSS data were processed in kinematic PPP mode and the analyses show accuracy improvements of up to 60% under conditions of strong scintillation when using multi-constellation data instead of GPS data alone. The concepts and analyses related to ionospheric scintillation effects, the mathematical model involved in PPP with GPS and GLONASS data integration, as well as the accuracy assessment with data collected under ionospheric scintillation effects are presented.
NASA Astrophysics Data System (ADS)
Lösel, P.
2017-06-01
Large area Micromegas detectors will be employed for the first time in high-energy physics experiments. To cope with the increasing background rates associated with the rise of the LHC luminosity to 10 times its design value, the present detector technology in the current innermost stations of the muon endcap system of the ATLAS experiment (the Small Wheel) will be replaced in 2019/2020 by resistive strip Micromegas and small strip TGC detectors. Both technologies will provide tracking and trigger information. In the "New Small Wheel" the Micromegas will be arranged in eight detection layers built of trapezoidally shaped quadruplets of four different sizes, covering in total about 1200 m2 of detection plane. In order to achieve 15% transverse momentum resolution for 1 TeV muons, a challenging mechanical precision is required in the construction of each active plane, with an alignment of the readout strips at the level of 30 μm RMS along the precision coordinate and 80 μm RMS perpendicular to the plane. Each individual Micromegas plane must achieve a spatial resolution better than 100 μm at background rates up to 15 kHz/cm2 while being operated in an inhomogeneous magnetic field (B ≤ 0.3 T). The required mechanical precision for the production of the components and their assembly, on such large area detectors, is a key point and must be controlled during construction and integration. Particularly the alignment of the readout strips within a quadruplet appears to be demanding. The readout strips are etched on PCB boards using photolithographic processes. Depending on the type of module, 3 or 5 PCB boards need to be joined and precisely aligned to form a full readout plane. The precision in the alignment is reached either by use of precision mechanical holes or by optical masks, both referenced to the strip patterns.
Assembly procedures have been developed to build the single panels with the required mechanical precision and to assemble them in a module including the four metallic micro-meshes. Methods to confirm the precision of components and assembly are based on precise optical devices and X-ray or cosmic muon investigations. We will report on the construction procedures for the Micromegas quadruplets, on the quality control procedures and results, and on the assembly and calibration methods.
Cheng, Lijun; Schneider, Bryan P
2016-01-01
Background: Cancer has been extensively characterized on the basis of genomics. Integrating genetic information about cancers with data on how the cancers respond to target-based therapy can help optimize cancer treatment. Objective: The increasing use of sequencing technology in cancer research and clinical practice has enormously advanced our understanding of cancer mechanisms. Cancer precision medicine is becoming a reality. Although off-label drug usage is a common practice in treating cancer, it suffers from the lack of a knowledge base for proper cancer drug selection. This need has become even more apparent considering the upcoming genomics data. Methods: In this paper, a personalized medicine knowledge base is constructed by integrating various cancer drugs, drug-target databases, and knowledge sources for proper cancer drug and target selection. Based on the knowledge base, a bioinformatics approach for cancer drug selection in precision medicine is developed. It integrates personal molecular profile data, including copy number variation, mutation, and gene expression. Results: By analyzing data from 85 triple-negative breast cancer (TNBC) patients in The Cancer Genome Atlas, we have shown that 71.7% of the TNBC patients have FDA-approved drug targets, and 51.7% of the patients have more than one drug target. Sixty-five drug targets are identified as TNBC treatment targets and 85 candidate drugs are recommended. Many existing TNBC candidate targets, such as Poly(ADP-Ribose) Polymerase 1 (PARP1), Cell division protein kinase 6 (CDK6), epidermal growth factor receptor, etc., were identified. On the other hand, we found some additional targets that are not yet fully investigated in TNBC, such as Gamma-Glutamyl Hydrolase (GGH), Thymidylate Synthetase (TYMS), Protein Tyrosine Kinase 6 (PTK6), Topoisomerase (DNA) I, Mitochondrial (TOP1MT), Smoothened, Frizzled Class Receptor (SMO), etc.
Our additional analysis of the target and drug selection strategy is also fully supported by the drug screening data on TNBC cell lines in the Cancer Cell Line Encyclopedia. Conclusions: The proposed bioinformatics approach lays a foundation for cancer precision medicine. It supplies a much-needed knowledge base for off-label cancer drug usage in clinics. PMID:27107440
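The core drug-selection step, intersecting a patient's altered genes with a drug-target knowledge base, can be sketched in a few lines. The drug-target pairs below are illustrative examples consistent with the targets named above, not the paper's full knowledge base:

```python
# toy knowledge base: drug -> set of target genes (illustrative pairs only)
DRUG_TARGETS = {
    "olaparib":    {"PARP1"},
    "palbociclib": {"CDK4", "CDK6"},
    "erlotinib":   {"EGFR"},
    "vismodegib":  {"SMO"},
}

def recommend_drugs(patient_altered_genes):
    """Return drugs whose target set intersects the patient's altered genes,
    mapped to the shared (actionable) genes."""
    hits = {}
    for drug, targets in DRUG_TARGETS.items():
        shared = targets & patient_altered_genes
        if shared:
            hits[drug] = sorted(shared)
    return hits

# altered genes would come from merged CNV, mutation, and expression calls
profile = {"PARP1", "CDK6", "TP53"}
print(recommend_drugs(profile))
```

In the real pipeline the gene set is derived from the patient's molecular profile and the table from curated drug-target databases; the set intersection shown here is the matching logic.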
Research on polarization imaging information parsing method
NASA Astrophysics Data System (ADS)
Yuan, Hongwu; Zhou, Pucheng; Wang, Xiaolong
2016-11-01
Polarization information parsing plays an important role in polarization imaging detection. This paper focuses on polarization information parsing methods. Firstly, the general process of polarization information parsing is given, mainly including polarization image preprocessing, calculation of multiple polarization parameters, polarization image fusion, and polarization image tracking. Then the research achievements in polarization information parsing are presented. In terms of polarization image preprocessing, a polarization image registration method based on maximum mutual information is designed; experiments show that this method improves registration precision and satisfies the needs of polarization information parsing. In terms of calculating multiple polarization parameters, an omnidirectional polarization inversion model is built, from which a variety of polarization parameter images are obtained with markedly improved inversion precision. In terms of polarization image fusion, an adaptive optimal fusion method for multiple polarization parameters is given, using fuzzy integrals and sparse representation, and target detection in complex scenes is accomplished using a clustering image segmentation algorithm based on fractal characteristics. In terms of polarization image tracking, a fusion tracking algorithm combining mean-shift polarization image features with auxiliary particle filtering is put forward to achieve smooth tracking of moving targets. Finally, the polarization information parsing method is applied to the polarization imaging detection of typical targets such as camouflaged targets, fog, and latent fingerprints.
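The multiple polarization parameters mentioned above are typically derived from intensity images taken behind a polarizer at four orientations. A minimal per-pixel sketch using the standard linear Stokes formalism (the paper's specific omnidirectional inversion model is not reproduced here):

```python
import math

def polarization_parameters(i0, i45, i90, i135):
    """Linear Stokes parameters, degree of linear polarization (DoLP), and
    angle of polarization (AoP, degrees) from intensities measured behind
    a polarizer at 0, 45, 90, and 135 degrees."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical preference
    s2 = i45 - i135                      # +45 vs -45 preference
    dolp = math.hypot(s1, s2) / s0
    aop = 0.5 * math.degrees(math.atan2(s2, s1))
    return s0, s1, s2, dolp, aop

# fully linearly polarized light at 0 degrees: I0=1, I90=0, I45=I135=0.5
print(polarization_parameters(1.0, 0.5, 0.0, 0.5))
```

Applied pixel-wise to the four registered images, these formulas yield the DoLP and AoP parameter images that the fusion and segmentation stages consume.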
Kim, Hyo Seon; Chun, Jin Mi; Kwon, Bo-In; Lee, A-Reum; Kim, Ho Kyoung; Lee, A Yeong
2016-10-01
Ultra-performance convergence chromatography, which integrates the advantages of supercritical fluid chromatography and ultra high performance liquid chromatography technologies, is an environmentally friendly analytical method that uses dramatically reduced amounts of organic solvents. An ultra-performance convergence chromatography method was developed and validated for the quantification of decursinol angelate and decursin in Angelica gigas using a CSH Fluoro-Phenyl column (2.1 mm × 150 mm, 1.7 μm) with a run time of 4 min. The method had an improved resolution and a shorter analysis time in comparison to the conventional high-performance liquid chromatography method. This method was validated in terms of linearity, precision, and accuracy. The limits of detection were 0.005 and 0.004 μg/mL for decursinol angelate and decursin, respectively, while the limits of quantitation were 0.014 and 0.012 μg/mL, respectively. The two components showed good regression (correlation coefficient (r²) > 0.999), excellent precision (RSD < 2.28%), and acceptable recoveries (99.75-102.62%). The proposed method can be used to efficiently separate, characterize, and quantify decursinol angelate and decursin in Angelica gigas and its related medicinal materials or preparations, with the advantages of a shorter analysis time, greater sensitivity, and better environmental compatibility. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Noninvasive Body Setup Method for Radiotherapy by Using a Multimodal Image Fusion Technique
Zhang, Jie; Chen, Yunxia; Wang, Chenchen; Chu, Kaiyue; Jin, Jianhua; Huang, Xiaolin; Guan, Yue; Li, Weifeng
2017-01-01
Purpose: To minimize the mismatch error between the patient surface and the immobilization system for tumor location by a noninvasive patient setup method. Materials and Methods: The method, based on point set registration, proposes a shift for patient positioning by integrating information from the computed tomography scans and from optical surface landmarks. The evaluation of the method covered 3 areas: (1) validation on a phantom by estimating 100 known mismatch errors between the patient surface and the immobilization system; (2) five patients with pelvic tumors were considered, and the tumor location errors of the method were measured as the difference between the shift proposed by cone-beam computed tomography and that proposed by our method; (3) the setup data collected in the patient evaluation were compared with the published performance data of 2 other similar systems. Results: The phantom verification showed that the method was capable of estimating the mismatch error between the patient surface and the immobilization system with a precision better than 0.22 mm. For the pelvic tumors, the method had an average tumor location error of 1.303, 2.602, and 1.684 mm in the left–right, anterior–posterior, and superior–inferior directions, respectively. The performance comparison with the 2 other similar systems suggested that the method had better positioning accuracy for pelvic tumor location. Conclusion: By effectively decreasing an interfraction uncertainty source (the mismatch error between patient surface and immobilization system) in radiotherapy, the method can improve patient positioning precision for pelvic tumors. PMID:29333959
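Rigid point set registration of the kind used to align surface landmarks with CT-derived points has a closed-form least-squares solution. A minimal sketch using the generic Kabsch SVD algorithm (the paper's registration variant may differ; numpy is assumed available):

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid registration (Kabsch algorithm): rotation R and
    translation t minimizing ||R @ p + t - q|| over paired points p, q."""
    src_c = source - source.mean(axis=0)          # center both clouds
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                           # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    D = np.diag([1.0] * (source.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# recover a known couch shift applied to four surface landmarks
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R, t = rigid_register(pts, pts + np.array([2.0, -1.0, 0.5]))
print(np.round(t, 3))
```

For a pure translation the recovered rotation is the identity and t is exactly the applied shift, which is how the phantom validation of known mismatch errors can be understood.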
Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.
Khoromskaia, Venera; Khoromskij, Boris N
2015-12-21
We summarize the recent successes of grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on low-rank representations of multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using operations of 1D complexity, and have evolved into an entirely grid-based, tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and the two-electron integrals (TEI) in O(n log n) complexity using rank-structured approximation of the basis functions, electron densities, and convolution integral operators, all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. Tensor approaches to post-Hartree-Fock calculations, for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on low-rank factorizations and the reduced basis method, were recently introduced. Another direction is the tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of the electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculating a potential sum on an L × L × L lattice exhibits computational work linear in L, O(L), instead of the usual O(L³ log L) scaling of Ewald-type approaches.
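The gain from separated (low-rank) representations is easiest to see for a separable function: a rank-1 Gaussian on an n × n × n grid needs only 3n numbers instead of n³. A toy sketch (grid size and exponent are arbitrary; this illustrates only the storage idea, not the full tensor calculus):

```python
import math

def gaussian_rank1(n, L=10.0, alpha=1.0):
    """Represent exp(-alpha*(x^2 + y^2 + z^2)) on an n x n x n grid by a single
    rank-1 term: three 1D factor vectors of length n instead of n**3 values."""
    xs = [-L / 2 + L * i / (n - 1) for i in range(n)]
    f = [math.exp(-alpha * x * x) for x in xs]
    return f, f, f                     # separable: g(x, y, z) = f(x)*f(y)*f(z)

def eval_rank1(factors, i, j, k):
    """Evaluate the rank-1 tensor at grid point (i, j, k)."""
    fx, fy, fz = factors
    return fx[i] * fy[j] * fz[k]

n = 129
factors = gaussian_rank1(n)
print(n ** 3, "grid values compressed to", 3 * n, "factor entries")
```

General functions need a sum of several such rank-1 terms (a canonical/Tucker decomposition), but the storage and work still scale with n rather than n³, which is the source of the 1D-complexity operations cited above.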
Package-X 2.0: A Mathematica package for the analytic calculation of one-loop integrals
NASA Astrophysics Data System (ADS)
Patel, Hiren H.
2017-09-01
This article summarizes new features and enhancements of the first major update of Package-X. Package-X 2.0 can now generate analytic expressions for arbitrarily high rank dimensionally regulated tensor integrals with up to four distinct propagators, each with arbitrary integer weight, near an arbitrary even number of spacetime dimensions, giving UV divergent, IR divergent, and finite parts at (almost) any real-valued kinematic point. Additionally, it can generate multivariable Taylor series expansions of these integrals around any non-singular kinematic point to arbitrary order. All special functions and abbreviations output by Package-X 2.0 support Mathematica's arbitrary precision evaluation capabilities to deal with issues of numerical stability. Finally, tensor algebraic routines of Package-X have been polished and extended to support open fermion chains both on and off shell. The documentation (equivalent to over 100 printed pages) is accessed through Mathematica's Wolfram Documentation Center and contains information on all Package-X symbols, with over 300 basic usage examples, 3 project-scale tutorials, and instructions on linking to FEYNCALC and LOOPTOOLS. Program files doi:http://dx.doi.org/10.17632/yfkwrd4d5t.1 Licensing provisions: CC by 4.0 Programming language: Mathematica (Wolfram Language) Journal reference of previous version: H. H. Patel, Comput. Phys. Commun 197, 276 (2015) Does the new version supersede the previous version?: Yes Summary of revisions: Extension to four point one-loop integrals with higher powers of denominator factors, separate extraction of UV and IR divergent parts, testing for power IR divergences, construction of Taylor series expansions of one-loop integrals, numerical evaluation with arbitrary precision arithmetic, manipulation of fermion chains, improved tensor algebraic routines, and much expanded documentation. Nature of problem: Analytic calculation of one-loop integrals in relativistic quantum field theory. 
Solution method: Passarino-Veltman reduction formula, Denner-Dittmaier reduction formulae, and additional algorithms described in the manuscript. Restrictions: One-loop integrals are limited to those involving no more than four denominator factors.
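Package-X itself is a Mathematica package, but the scalar two-point function it handles analytically is easy to cross-check numerically. A language-neutral sketch of the finite part of B0 below threshold via its Feynman-parameter integral (the quadrature settings are arbitrary, and this is an independent check, not Package-X's own algorithm):

```python
import math

def b0_finite(p2, m1sq, m2sq, musq, n=2000):
    """Finite part of the scalar one-loop two-point function B0 below
    threshold, via midpoint quadrature of the Feynman-parameter integral
      B0_fin = Int_0^1 dx ln(mu^2 / Delta(x)),
      Delta(x) = x*m1^2 + (1-x)*m2^2 - x*(1-x)*p2  (must stay positive)."""
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        delta = x * m1sq + (1.0 - x) * m2sq - x * (1.0 - x) * p2
        total += math.log(musq / delta)
    return total / n

# equal masses at p^2 = 0 and mu = m: the finite part vanishes identically
print(round(b0_finite(0.0, 1.0, 1.0, 1.0), 6))
```

Closed-form checks exist for simple kinematic points, e.g. B0_fin(0, m², 4m²) at μ = m equals 1 − (4/3)ln 4, which the quadrature reproduces to high accuracy.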
Barton, Zachary J; Rodríguez-López, Joaquín
2017-03-07
We report a method of precisely positioning a Hg-based ultramicroelectrode (UME) for scanning electrochemical microscopy (SECM) investigations of any substrate. Hg-based probes are capable of performing amalgamation reactions with metal cations, which avoid unwanted side reactions and positive feedback mechanisms that can prove problematic for traditional probe positioning methods. However, prolonged collection of ions eventually leads to saturation of the amalgam accompanied by irreversible loss of Hg. In order to obtain negative feedback positioning control without risking damage to the SECM probe, we implement cyclic voltammetry probe approach surfaces (CV-PASs), consisting of CVs performed between incremental motor movements. The amalgamation current, peak stripping current, and integrated stripping charge extracted from a shared CV-PAS give three distinct probe approach curves (CV-PACs), which can be used to determine the tip-substrate gap to within 1% of the probe radius. Using finite element simulations, we establish a new protocol for fitting any CV-PAC and demonstrate its validity with experimental results for sodium and potassium ions in propylene carbonate by obtaining over 3 orders of magnitude greater accuracy and more than 20-fold greater precision than existing methods. Considering the timescales of diffusion and amalgam saturation, we also present limiting conditions for obtaining and fitting CV-PAC data. The ion-specific signals isolated in CV-PACs allow precise and accurate positioning of Hg-based SECM probes over any sample and enable the deployment of CV-PAS SECM as an analytical tool for traditionally challenging conditions.
2011-10-11
developed a method for determining the structure (component logs and their 3D placement) of a LINCOLN LOG assembly from a single image from an uncalibrated... small a class of components. Moreover, we focus on determining the precise pose and structure of an assembly, including the 3D pose of each... medial axes are parallel to the work surface. Thus valid structures have logs on... [Fig. 1 caption: The 3D geometric shape parameters of LINCOLN LOGS.]
MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.
2016-01-01
MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.
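The guaranteed-precision adaptivity at the heart of MADNESS can be illustrated in one dimension: refine only where a local error estimate exceeds the requested tolerance. A minimal adaptive Simpson sketch (a standard textbook scheme standing in for MADNESS's multiwavelet machinery, which it does not reproduce):

```python
import math

def adaptive_simpson(f, a, b, tol=1e-10):
    """Adaptively integrate f on [a, b]: subdivide only where the local
    Richardson error estimate exceeds the tolerance, so work concentrates
    where the integrand is hardest -- the 1D essence of multiresolution
    adaptivity with guaranteed precision."""
    def simpson(lo, hi):
        mid = 0.5 * (lo + hi)
        return (hi - lo) / 6.0 * (f(lo) + 4.0 * f(mid) + f(hi))

    def recurse(lo, hi, whole, eps):
        mid = 0.5 * (lo + hi)
        left, right = simpson(lo, mid), simpson(mid, hi)
        err = left + right - whole
        if abs(err) <= 15.0 * eps:           # standard Richardson error bound
            return left + right + err / 15.0
        return recurse(lo, mid, left, eps / 2.0) + recurse(mid, hi, right, eps / 2.0)

    return recurse(a, b, simpson(a, b), tol)

print(round(adaptive_simpson(math.exp, 0.0, 1.0), 8))   # integral of exp on [0,1], i.e. e - 1
```

The user states a tolerance and the refinement depth follows from it automatically, which is the "guaranteed precision" contract MADNESS extends to many dimensions.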
High Energy 2-Micron Solid-State Laser Transmitter for NASA's Airborne CO2 Measurements
NASA Technical Reports Server (NTRS)
Singh, Upendra N.; Yu, Jirong; Petros, Mulugeta; Bai, Yingxin
2012-01-01
A 2-micron pulsed, Integrated Path Differential Absorption (IPDA) lidar instrument for ground and airborne atmospheric CO2 concentration measurements via direct detection method is being developed at NASA Langley Research Center. This instrument will provide an alternate approach to measure atmospheric CO2 concentrations with significant advantages. A high energy pulsed approach provides high-precision measurement capability by having high signal-to-noise level and unambiguously eliminates the contamination from aerosols and clouds that can bias the IPDA measurement.
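In its simplest form the IPDA principle reduces to a ratio of on-line and off-line returns normalized by the transmitted pulse energies. A sketch under the simplifying assumption of a constant differential absorption cross-section along the path (symbols and numerical values are illustrative, not instrument specifications):

```python
import math

def ipda_daod(p_on, p_off, e_on, e_off):
    """Differential absorption optical depth from on-line/off-line return
    powers (p_*) normalized by transmitted pulse energies (e_*); the factor
    0.5 accounts for the round trip to the scattering surface and back."""
    return 0.5 * math.log((p_off * e_on) / (p_on * e_off))

def column_density(daod, delta_sigma):
    """CO2 column number density (molecules/cm^2), assuming a constant
    differential absorption cross-section delta_sigma (cm^2) along the path."""
    return daod / delta_sigma

# illustrative: on-line return half the off-line return at equal pulse energy
daod = ipda_daod(p_on=0.5, p_off=1.0, e_on=1.0, e_off=1.0)
print(daod, column_density(daod, delta_sigma=5e-23))
```

Because both wavelengths traverse the same path, aerosol and surface reflectance effects largely cancel in the ratio, which is the bias rejection highlighted in the abstract.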
DOE Office of Scientific and Technical Information (OSTI.GOV)
Regnier, D.; Dubray, N.; Verriere, M.
2017-12-20
The time-dependent generator coordinate method (TDGCM) is a powerful method to study the large amplitude collective motion of quantum many-body systems such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the TDGCM leads to a local, time-dependent Schrödinger equation in a multi-dimensional collective space. In this study, we present version 2.0 of the code FELIX, which solves the collective Schrödinger equation in a finite element basis. This new version features: (i) the ability to solve a generalized TDGCM+GOA equation with a metric term in the collective Hamiltonian, (ii) support for new kinds of finite elements and different types of quadrature to compute the discretized Hamiltonian and overlap matrices, (iii) the possibility to leverage the spectral element scheme, (iv) an explicit Krylov approximation of the time propagator for time integration instead of the implicit Crank–Nicolson method implemented in the first version, and (v) an entirely redesigned workflow. We benchmark this release on an analytic problem as well as on realistic two-dimensional calculations of the low-energy fission of 240Pu and 256Fm. Low to moderate numerical precision calculations are most efficiently performed with simplex elements with a degree 2 polynomial basis. Higher precision calculations should instead use the spectral element method with a degree 4 polynomial basis. Finally, we emphasize that in a realistic calculation of fission mass distributions of 240Pu, FELIX-2.0 is about 20 times faster than its previous release (within a numerical precision of a few percent).
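The explicit Krylov propagator of item (iv) can be illustrated on a dense Hermitian matrix: project H onto a small Lanczos subspace, exponentiate there, and lift back. A minimal sketch (FELIX applies this inside a finite element basis; here numpy dense linear algebra stands in, with no breakdown handling):

```python
import numpy as np

def krylov_propagate(H, psi, dt, m):
    """Approximate exp(-1j*H*dt) @ psi in an m-dimensional Krylov (Lanczos)
    subspace. H must be Hermitian; m must not exceed the dimension of the
    Krylov subspace generated by psi (no breakdown handling in this sketch)."""
    n = len(psi)
    V = np.zeros((n, m), dtype=complex)     # orthonormal Lanczos vectors
    T = np.zeros((m, m), dtype=complex)     # tridiagonal projection of H
    beta = np.linalg.norm(psi)
    V[:, 0] = psi / beta
    for j in range(m):
        w = H @ V[:, j]
        for k in range(max(0, j - 1), j + 1):   # three-term recurrence
            T[k, j] = np.vdot(V[:, k], w)
            w = w - T[k, j] * V[:, k]
        if j + 1 < m:
            T[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / T[j + 1, j]
    # exponentiate the small tridiagonal matrix exactly and lift back
    evals, U = np.linalg.eigh(T)
    coef = U @ (np.exp(-1j * dt * evals) * np.conj(U[0, :]))
    return beta * (V @ coef)

# two-level check: exp(-1j*sigma_x*t) @ [1, 0] = [cos t, -1j*sin t]
H = np.array([[0.0, 1.0], [1.0, 0.0]])
psi = krylov_propagate(H, np.array([1.0, 0.0], dtype=complex), dt=0.7, m=2)
print(np.round(psi, 4))
```

Being explicit, each step costs only m matrix-vector products and a tiny m × m eigenproblem, in contrast to the linear solve per step demanded by implicit Crank–Nicolson.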
Combining clinical and genomics queries using i2b2 – Three methods
Murphy, Shawn N.; Avillach, Paul; Bellazzi, Riccardo; Phillips, Lori; Gabetta, Matteo; Eran, Alal; McDuffie, Michael T.; Kohane, Isaac S.
2017-01-01
We are fortunate to be living in an era of twin biomedical data surges: a burgeoning representation of human phenotypes in the medical records of our healthcare systems, and high-throughput sequencing making rapid technological advances. The difficulty representing genomic data and its annotations has almost by itself led to the recognition of a biomedical “Big Data” challenge, and the complexity of healthcare data only compounds the problem to the point that coherent representation of both systems on the same platform seems insuperably difficult. We investigated the capability for complex, integrative genomic and clinical queries to be supported in the Informatics for Integrating Biology and the Bedside (i2b2) translational software package. Three different data integration approaches were developed: The first is based on Sequence Ontology, the second is based on the tranSMART engine, and the third on CouchDB. These novel methods for representing and querying complex genomic and clinical data on the i2b2 platform are available today for advancing precision medicine. PMID:28388645
Gao, Zhouzheng; Zhang, Hongping; Ge, Maorong; Niu, Xiaoji; Shen, Wenbin; Wickert, Jens; Schuh, Harald
2015-01-01
The continuity and reliability of precise GNSS positioning can be seriously limited by severe user observation environments. The Inertial Navigation System (INS) can overcome such drawbacks, but its performance is clearly restricted by INS sensor errors over time. Accordingly, the tightly coupled integration of GPS and INS can overcome the disadvantages of each individual system and together form a new navigation system with a higher accuracy, reliability and availability. Recently, ionosphere-constrained (IC) precise point positioning (PPP) utilizing raw GPS observations was proven able to improve both the convergence and positioning accuracy of the conventional PPP using ionosphere-free combined observations (LC-PPP). In this paper, a new mode of tightly coupled integration, in which the IC-PPP instead of LC-PPP is employed, is implemented to further improve the performance of the coupled system. We present the detailed mathematical model and the related algorithm of the new integration of IC-PPP and INS. To evaluate the performance of the new tightly coupled integration, data of both airborne and vehicle experiments with a geodetic GPS receiver and tactical grade inertial measurement unit are processed and the results are analyzed. The statistics show that the new approach can further improve the positioning accuracy compared with both IC-PPP and the tightly coupled integration of the conventional PPP and INS. PMID:25763647
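The benefit of coupling INS with GNSS shows up even in the simplest fusion filter. A deliberately reduced sketch: a 1D Kalman filter blending INS-propagated increments (smooth but drifting) with GNSS fixes (noisy but unbiased). The actual tightly coupled integration works in the raw-observation domain with full state vectors and is far richer; this is only the fusion idea:

```python
def fuse_gnss_ins(ins_increments, gnss_fixes, q=0.01, r=0.25):
    """1D Kalman filter: predict with INS position increments (process noise
    variance q per step), update with GNSS position fixes (variance r)."""
    x, p = gnss_fixes[0], r
    fused = [x]
    for dx, z in zip(ins_increments, gnss_fixes[1:]):
        x, p = x + dx, p + q                     # INS prediction, uncertainty grows
        k = p / (p + r)                          # Kalman gain
        x, p = x + k * (z - x), (1.0 - k) * p    # GNSS update reins in the drift
        fused.append(x)
    return fused

# truth moves 1.0 per epoch; INS increments carry a +0.05 bias,
# GNSS fixes carry a few decimeters of noise (values invented)
ins = [1.05] * 9
gnss = [0.0, 1.1, 1.9, 3.05, 3.95, 5.1, 5.9, 7.05, 7.95, 9.1]
fused = fuse_gnss_ins(ins, gnss)
print([round(v, 2) for v in fused])
```

The fused track is smoother than the raw GNSS fixes yet does not accumulate the INS bias, which is the complementarity the tightly coupled GPS/INS system exploits during scintillation-induced signal degradation.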
A novel laser ranging system for measurement of ground-to-satellite distances
NASA Technical Reports Server (NTRS)
Golden, K. E.; Kind, D. E.; Leonard, S. L.; Ward, R. C.
1973-01-01
A technique was developed for improving the precision of laser ranging measurements of ground-to-satellite distances. The method employs a mode-locked laser transmitter and utilizes an image converter tube equipped with deflection plates in measuring the time of flight of the laser pulse to a distant retroreflector and back. Samples of the outgoing and returning light pulses are focussed on the photocathode of the image converter tube, whose deflection plates are driven by a high-voltage 120 MHz sine wave derived from a very stable oscillator. From the relative positions of the images produced at the output phosphor by the two light pulses, it is possible to make a precise determination of the fractional amount by which the time of flight exceeds some large integral multiple of the period of the deflection sinusoid.
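The fractional-period measurement translates into range as follows; a small numeric sketch (the satellite distance and phase fraction are invented for illustration, while the 120 MHz deflection frequency is from the text):

```python
FREQ = 120e6          # deflection sinusoid frequency, Hz (from the text)
PERIOD = 1.0 / FREQ   # one period, about 8.33 ns

def time_of_flight(n_periods, phase_fraction):
    """Round-trip time when the flight time exceeds n_periods full periods of
    the deflection sinusoid by phase_fraction (read from the relative image
    positions of the outgoing and returning pulses)."""
    return (n_periods + phase_fraction) * PERIOD

def one_way_range(tof, c=299792458.0):
    """One-way distance: half the round-trip path length."""
    return 0.5 * c * tof

# hypothetical satellite pass: ~800,460 full periods plus a quarter period
tof = time_of_flight(800_460, 0.25)
print(round(one_way_range(tof) / 1000.0, 3), "km")
```

Each full period corresponds to c/(2 × 120 MHz) ≈ 1.25 m of one-way range, so resolving the phase to a hundredth of a period yields roughly centimeter-level precision, the improvement the technique targets; the integer period count must come from a coarse a priori orbit.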
Mapping experiment with space station
NASA Technical Reports Server (NTRS)
Wu, S. S. C.
1986-01-01
Mapping of the Earth from space stations can be approached in two ways. One is to collect gravity data for defining a topographic datum, using Earth's gravity field in terms of spherical harmonics. The other is to explore techniques for mapping topography using either optical or radar images, with or without reference to ground control points. Without ground control points, an integrated camera system can be designed. With ground control points, the position of the space station (camera station) can be precisely determined at any instant. Therefore, terrestrial topography can be precisely mapped either by conventional photogrammetric methods or by current digital technology of image correlation. For the mapping experiment, it is proposed to establish four ground control points either in North America or in Africa (including the Sahara desert). If this experiment is successfully accomplished, it may also be applied to defense charting systems.
Pietsch, Torsten; Haberler, Christine
2016-01-01
The revised WHO classification of tumors of the CNS 2016 has introduced the concept of the integrated diagnosis. The definition of medulloblastoma entities now requires a combination of the traditional histological information with additional molecular/genetic features. For definition of the histopathological component of the medulloblastoma diagnosis, the tumors should be assigned to one of the four entities classic, desmoplastic/nodular (DNMB), extensive nodular (MBEN), or large cell/anaplastic (LC/A) medulloblastoma. The genetically defined component comprises the four entities WNT-activated, SHH-activated and TP53 wildtype, SHH-activated and TP53 mutant, or non-WNT/non-SHH medulloblastoma. Robust and validated methods are available to allow a precise diagnosis of these medulloblastoma entities according to the updated WHO classification, and for differential diagnostic purposes. A combination of immunohistochemical markers including β-catenin, Yap1, p75-NGFR, Otx2, and p53, in combination with targeted sequencing and copy number assessment such as FISH analysis for MYC genes allows a precise assignment of patients for risk-adapted stratification. It also allows comparison to results of study cohorts in the past and provides a robust basis for further treatment refinement. PMID:27781424
MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control
NASA Astrophysics Data System (ADS)
Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming
2017-09-01
The increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches under dynamic production conditions. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, the large volume and variety of quality-related data generated during the manufacturing process can be handled. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce process. Relying on the ability of Bayesian networks to deal with dynamic and uncertain problems and on the parallel computing power of MapReduce, Bayesian networks of the factors affecting quality are built from prior probability distributions and modified with posterior probability distributions. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost in direct proportion to the number of computing nodes. It is also shown that the proposed model is feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and intelligent decision-making for precision problem solving. The integration of big data analytics and the Bayesian network method offers a whole new perspective on manufacturing quality control.
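The Map/Reduce split described above can be sketched in a few lines. This single-process toy stands in for Hadoop: the Map stage emits (factor state, quality label) key-value pairs per inspection record, and the Reduce stage accumulates counts from which a Bayesian-network conditional probability table could be estimated. The record format and all names are assumptions for illustration, not the paper's implementation.

```python
from collections import defaultdict


def map_phase(records):
    """Map: emit ((factor_state, quality_label), 1) for each record."""
    for factor_state, quality in records:
        yield (factor_state, quality), 1


def reduce_phase(pairs):
    """Reduce: sum the counts per (factor_state, quality_label) key."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return counts


def conditional_prob(counts, factor_state, quality):
    """Estimate P(quality | factor_state) from the reduced counts --
    one entry of a Bayesian-network conditional probability table."""
    total = sum(v for (f, _), v in counts.items() if f == factor_state)
    return counts[(factor_state, quality)] / total if total else 0.0
```

In a real Hadoop deployment the reduce step runs in parallel per key; the counting logic itself is unchanged.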
COBALT CoOperative Blending of Autonomous Landing Technology
NASA Technical Reports Server (NTRS)
Carson, John M. III; Restrepo, Carolina I.; Robertson, Edward A.; Seubert, Carl R.; Amzajerdian, Farzin
2016-01-01
COBALT is a terrestrial test platform for development and maturation of GN&C (Guidance, Navigation and Control) technologies for PL&HA (Precision Landing and Hazard Avoidance). The project is developing a third generation, Langley Navigation Doppler Lidar (NDL) for ultra-precise velocity and range measurements, which will be integrated and tested with the JPL Lander Vision System (LVS) for Terrain Relative Navigation (TRN) position estimates. These technologies together provide navigation that enables controlled precision landing. The COBALT hardware will be integrated in 2017 into the GN&C subsystem of the Xodiac rocket-propulsive Vertical Test Bed (VTB) developed by Masten Space Systems (MSS), and two terrestrial flight campaigns will be conducted: one open-loop (i.e., passive) and one closed-loop (i.e., active).
Ion traps for precision experiments at rare-isotope-beam facilities
NASA Astrophysics Data System (ADS)
Kwiatkowski, Anna
2016-09-01
Ion traps first entered experimental nuclear physics when the ISOLTRAP team demonstrated Penning trap mass spectrometry of radionuclides. From then on, the demand for ion traps has grown at radioactive-ion-beam (RIB) facilities since beams can be tailored for the desired experiment. Ion traps have been deployed for beam preparation, from bunching (thereby allowing time coincidences) to beam purification. Isomerically pure beams needed for nuclear-structure investigations can be prepared for trap-assisted or in-trap decay spectroscopy. The latter permits studies of highly charged ions for stellar evolution, which would be impossible with traditional experimental nuclear-physics methods. Moreover, the textbook-like conditions and advanced ion manipulation - even of a single ion - permit high-precision experiments. Consequently, the most accurate and precise mass measurements are now performed in Penning traps. After a brief introduction to ion trapping, I will focus on examples which showcase the versatility and utility of the technique at RIB facilities. I will demonstrate how this atomic-physics technique has been integrated into nuclear science, accelerator physics, and chemistry. DOE.
Microfabricated cylindrical ion trap
Blain, Matthew G.
2005-03-22
A microscale cylindrical ion trap, having an inner radius of order one micron, can be fabricated using surface micromachining techniques and materials known to the integrated circuits manufacturing and microelectromechanical systems industries. Micromachining methods enable batch fabrication, reduced manufacturing costs, dimensional and positional precision, and monolithic integration of massive arrays of ion traps with microscale ion generation and detection devices. Massive arraying enables the microscale cylindrical ion trap to retain the resolution, sensitivity, and mass range advantages necessary for high chemical selectivity. The microscale CIT has a reduced ion mean free path, allowing operation at higher pressures with a less expensive and less bulky vacuum pumping system, and with lower battery power than conventional- and miniature-sized ion traps. The reduced electrode voltage enables integration of the microscale cylindrical ion trap with on-chip integrated circuit-based rf operation and detection electronics (i.e., cell phone electronics). Therefore, the full performance advantages of microscale cylindrical ion traps can be realized in truly field-portable, handheld microanalysis systems.
Integrated light and scanning electron microscopy of GFP-expressing cells.
Peddie, Christopher J; Liv, Nalan; Hoogenboom, Jacob P; Collinson, Lucy M
2014-01-01
Integration of light and electron microscopes provides imaging tools in which fluorescent proteins can be localized to cellular structures with a high level of precision. However, until recently, there were few methods that could deliver specimens with sufficient fluorescent signal and electron contrast for dual imaging without intermediate staining steps. Here, we report protocols that preserve green fluorescent protein (GFP) in whole cells and in ultrathin sections of resin-embedded cells, with membrane contrast for integrated imaging. Critically, GFP is maintained in a stable and active state within the vacuum of an integrated light and scanning electron microscope. For light microscopists, additional structural information gives context to fluorescent protein expression in whole cells, illustrated here by analysis of filopodia and focal adhesions in Madin Darby canine kidney cells expressing GFP-Paxillin. For electron microscopists, GFP highlights the proteins of interest within the architectural space of the cell, illustrated here by localization of the conical lipid diacylglycerol to cellular membranes. © 2014 Elsevier Inc. All rights reserved.
Kong, Biao; Selomulya, Cordelia; Zheng, Gengfeng; Zhao, Dongyuan
2015-11-21
Prussian blue (PB), the oldest synthetic coordination compound, is a classic and fascinating transition metal coordination material. Prussian blue is based on a three-dimensional (3-D) cubic polymeric porous network consisting of alternating ferric and ferrous ions, which provides facile assembly as well as precise interaction with active sites at functional interfaces. A fundamental understanding of the assembly mechanism of PB hetero-interfaces is essential to enable the full potential applications of PB crystals, including chemical sensing, catalysis, gas storage, drug delivery and electronic displays. Developing controlled assembly methods towards functionally integrated hetero-interfaces with adjustable sizes and morphology of PB crystals is necessary. A key point in the functional interface and device integration of PB nanocrystals is the fabrication of hetero-interfaces in a well-defined and oriented fashion on given substrates. This review will bring together these key aspects of the hetero-interfaces of PB nanocrystals, ranging from structure and properties, interfacial assembly strategies, to integrated hetero-structures for diverse sensing.
NASA Astrophysics Data System (ADS)
Sun, Yong; Ma, Zilin; Tang, Gongyou; Chen, Zheng; Zhang, Nong
2016-07-01
Since the main power source of a hybrid electric vehicle (HEV) is the power battery, predicting battery performance, and especially state-of-charge (SOC) estimation, has attracted great attention in the HEV field. However, imprecise SOC estimates significantly degrade the running performance of the HEV. A variable structure extended Kalman filter (VSEKF)-based estimation method, which can be used to analyze the SOC of a lithium-ion battery under a fixed driving condition, is presented. First, a general lower-order battery equivalent circuit model (GLM), which includes a coulomb accumulation model, an open-circuit voltage model and the SOC output model, is established, and the off-line and online model parameters are calculated with hybrid pulse power characterization (HPPC) test data. Next, a VSEKF estimation method for SOC, which integrates the ampere-hour (Ah) integration method and the extended Kalman filter (EKF) method, is executed with different adaptive weighting coefficients, determined according to the values of the open-circuit voltage obtained in the corresponding charging or discharging processes. According to the experimental analysis, faster convergence and more accurate simulation results are obtained with the VSEKF method for the running performance of the HEV. The SOC estimation error with the VSEKF method falls in the range of 5% to 10%, compared with 20% to 30% for the EKF method and the Ah integration method. In summary, the accuracy of SOC estimation for the lithium-ion battery cell and pack obtained with the VSEKF method is significantly improved compared with the Ah integration method and the EKF method, and the VSEKF method can be widely used for SOC estimation in the lithium-ion packs of HEVs under practical driving conditions.
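The two ingredients the abstract combines, ampere-hour integration and a voltage-based correction with an adaptive weight, can be sketched as follows. This is an illustrative stand-in, not the paper's VSEKF: the voltage-based estimate is taken as given, and the adaptive weighting is reduced to a simple convex blend.

```python
def soc_ah_update(soc, current_a, dt_s, capacity_ah, eta=1.0):
    """Coulomb-counting (ampere-hour integration) SOC update.

    Discharge current is taken as positive; eta is the coulombic
    efficiency. SOC decreases by the charge removed over dt_s divided
    by the total capacity (converted to ampere-seconds)."""
    return soc - eta * current_a * dt_s / (capacity_ah * 3600.0)


def blended_update(soc_ah, soc_voltage, w):
    """Convex blend of the Ah estimate with a voltage-model (OCV-based)
    estimate -- standing in for the VSEKF's adaptive weighting, which
    the paper ties to the measured open-circuit voltage."""
    return (1.0 - w) * soc_ah + w * soc_voltage
```

Pure coulomb counting drifts with current-sensor bias; the voltage-based term anchors the estimate, which is why the blend (and, in the paper, the VSEKF) outperforms either method alone.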
Analog-to-digital conversion techniques for precision photometry
NASA Technical Reports Server (NTRS)
Opal, Chet B.
1988-01-01
Three types of analog-to-digital converters are described: parallel, successive-approximation, and integrating. The functioning of comparators and sample-and-hold amplifiers is explained. Differential and integral linearity are defined, and good and bad examples are illustrated. The applicability and relative advantages of the three types of converters for precision astronomical photometric measurements are discussed. For most measurements, integral linearity is more important than differential linearity. Successive-approximation converters should be used with multielement solid state detectors because of their high speed, but dual-slope integrating converters may be superior for use with single-element solid state detectors where speed of digitization is not a factor. In all cases, the input signal should be tailored so that it occupies the upper part of the converter's dynamic range; this can be achieved by providing adjustable gain or, better, by varying the integration time of the observation if possible.
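The dual-slope principle mentioned above reduces to a ratio of two times: the unknown input is integrated for a fixed interval, then a reference of opposite polarity ramps the integrator back to zero, and the ratio of the two times gives the input voltage independently of the integrator's R, C, and clock drift. A minimal idealized sketch (no noise or offset), with illustrative names:

```python
def dual_slope_estimate(t_deintegrate, t_integrate, v_ref):
    """Dual-slope integrating ADC readout.

    During t_integrate the integrator accumulates Vin * t_integrate
    (up to a constant); during de-integration it discharges at a rate
    set by v_ref, reaching zero after t_deintegrate. Equating the two
    areas gives Vin = v_ref * t_deintegrate / t_integrate."""
    return v_ref * t_deintegrate / t_integrate
```

In hardware both times are measured by counting the same clock, so the clock period cancels: the counts can be substituted directly for the times, which is the source of the architecture's excellent integral linearity.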
Integrated Positioning for Coal Mining Machinery in Enclosed Underground Mine Based on SINS/WSN
Hui, Jing; Wu, Lei; Yan, Wenxu; Zhou, Lijuan
2014-01-01
To realize dynamic positioning of the shearer, a new method based on SINS/WSN is studied in this paper. Firstly, the shearer movement model is built and the running regularity of the shearer in the coal mining face is established. Secondly, as external calibration of SINS using GPS is infeasible in an enclosed underground mine, a WSN positioning strategy is proposed to eliminate the accumulative error produced by SINS; the corresponding coupling model is then established. Finally, positioning performance is analyzed by simulation and experiment. Results show that the attitude angle and position of the shearer can be tracked in real time by the integrated positioning strategy based on SINS/WSN, and the positioning precision meets the demands of actual working conditions. PMID:24574891
High-precision numerical integration of equations in dynamics
NASA Astrophysics Data System (ADS)
Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.
2018-05-01
An important requirement in solving differential equations in dynamics, such as the equations of motion of celestial bodies and, in particular, of space robotic systems, is high accuracy over large time intervals. One of the most effective tools for obtaining such solutions is the Taylor series method. In this connection, we note that it is very advantageous to reduce the given equations of dynamics to systems with polynomial (in the unknowns) right-hand sides. This allows us to obtain effective algorithms for finding the Taylor coefficients, a priori error estimates at each step of integration, and an optimal choice of the order of the approximation used. In this paper, these questions are discussed and appropriate algorithms are considered.
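For a polynomial right-hand side, the Taylor coefficients follow from a simple recursion rather than symbolic differentiation. A sketch for the scalar example x' = x^2 (chosen purely for illustration; its exact solution is x0/(1 - x0*t)): writing x(t) = sum_k a_k t^k, the Cauchy product of the series with itself gives (k+1)*a_{k+1} = sum_{j=0..k} a_j * a_{k-j}.

```python
def taylor_step(x0, h, order=12):
    """One Taylor-series integration step for the polynomial ODE
    x' = x**2, starting from x(0) = x0.

    The coefficient recursion comes from matching series on both
    sides: (k+1)*a[k+1] = sum_{j=0..k} a[j]*a[k-j] (Cauchy product).
    The truncated series is then evaluated at t = h by Horner's rule."""
    a = [x0]
    for k in range(order):
        a.append(sum(a[j] * a[k - j] for j in range(k + 1)) / (k + 1))
    x = 0.0
    for c in reversed(a):  # Horner evaluation of the polynomial at h
        x = x * h + c
    return x
```

Because each extra order costs only one convolution sum, the step order can be chosen adaptively from the decay of the coefficients, which is exactly the kind of a priori error control the abstract refers to.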
THERMOS. 30-Group ENDF/B Scattered Kernels
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCrosson, F.J.; Finch, D.R.
1973-12-01
These data are 30-group THERMOS thermal scattering kernels for P0 to P5 Legendre orders for every temperature of every material from s(alpha,beta) data stored in the ENDF/B library. These scattering kernels were generated using the FLANGE2 computer code. To test the kernels, the integral properties of each set of kernels were determined by a precision integration of the diffusion length equation and compared to experimental measurements of these properties. In general, the agreement was very good. Details of the methods used and results obtained are contained in the reference. The scattering kernels are organized into a two-volume magnetic tape library from which they may be retrieved easily for use in any 30-group THERMOS library.
The integration of laser communication and ranging
NASA Astrophysics Data System (ADS)
Xu, Mengmeng; Sun, Jianfeng; Zhou, Yu; Zhang, Bo; Zhang, Guo; Li, Guangyuan; He, Hongyu; Lao, Chenzhe
2017-08-01
A method to realize the integration of laser communication and ranging is proposed in this paper. At the transmitter of each of the two terminals, ranging codes with uniqueness and good autocorrelation and cross-correlation properties are embedded in and encoded with the communication data to realize serial communication. The encoded data are then modulated and sent to the other terminal, realizing high-speed laser communication over two one-way links. At the receiver, the received ranging code is recovered after demodulation, decoding and clock recovery. The received ranging codes are correlated with the local ranging codes to obtain a coarse range, while the phase difference between the local clock and the recovered clock provides the precise part of the distance.
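The coarse-ranging step, correlating the received ranging code against the local replica, can be sketched with a circular cross-correlation: the lag of the correlation peak gives the delay in whole chips, which converts to a distance. The chip duration and the random code below are illustrative assumptions, not the paper's actual code design.

```python
import numpy as np


def code_delay_chips(local_code, received):
    """Coarse delay estimate: index of the peak of the circular
    cross-correlation between the local and received ranging codes."""
    corr = [np.dot(local_code, np.roll(received, -k))
            for k in range(len(local_code))]
    return int(np.argmax(corr))


def coarse_range_m(delay_chips, chip_time_s, c=299792458.0):
    """One-way range implied by an integer-chip delay."""
    return delay_chips * chip_time_s * c
```

The integer-chip estimate is coarse by construction; the abstract's clock-phase comparison then resolves the remaining sub-chip fraction of the distance.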
Practice innovation: the need for nimble data platforms to implement precision oncology care.
Elfiky, Aymen; Zhang, Dongyang; Krishnan Nair, Hari K
2015-01-01
Given the drive toward personalized, value-based, and coordinated cancer care delivery, modern knowledge-based practice is being shaped within the context of an increasingly technology-driven healthcare landscape. The ultimate promise of 'precision medicine' is predicated on taking advantage of the range of new capabilities for integrating disease- and individual-specific data to define new taxonomies as part of a systems-based knowledge network. Specifically, with cancer being a constantly evolving complex disease process, proper care of an individual will require the ability to seamlessly integrate multi-dimensional 'omic' and clinical data. Importantly, however, the challenges of curating knowledge from multiple dynamic data sources and translating to practice at the point-of-care highlight parallel needs. As patients, caregivers, and their environments become more proactive in clinical care and management, practical success of precision medicine is equally dependent on the development of proper infrastructures for evolving data integration, platforms for knowledge representation in a clinically-relevant context, and implementation within a provider's work-life and workflow.
Realizing drug repositioning by adapting a recommendation system to handle the process.
Ozsoy, Makbule Guclin; Özyer, Tansel; Polat, Faruk; Alhajj, Reda
2018-04-12
Drug repositioning is the process of identifying new targets for known drugs. It can be used to overcome problems associated with traditional drug discovery by adapting existing drugs to treat newly discovered diseases. Thus, it may reduce the risk, cost and time required to identify and verify new drugs. Drug repositioning has recently received more attention from industry and academia. To tackle this problem, researchers have applied many different computational methods and have used various features of drugs and diseases. In this study, we contribute to the ongoing research efforts by combining multiple features, namely chemical structures, protein interactions and side-effects, to predict new indications for target drugs. To achieve our goal, we cast drug repositioning as a recommendation process, which leads to a new perspective on the problem. The recommendation method used is based on Pareto dominance and collaborative filtering, and it can integrate multiple data sources and multiple features. For the computational part, we applied several settings and compared their performance. Evaluation results show that the proposed method achieves concentrated predictions with high precision, where nearly half of the predictions are true. Compared to other state-of-the-art methods described in the literature, the proposed method is better at making correct predictions, having higher precision. The reported results demonstrate the applicability and effectiveness of recommendation methods for drug repositioning.
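The Pareto-dominance step of such a recommender can be sketched as follows: each candidate neighbour drug is scored on several similarity features against the target drug (e.g., chemical, protein-interaction, side-effect similarity), and only non-dominated candidates are kept as neighbours whose known indications are then recommended. The 2-feature tuples below are assumed data for illustration; the full method in the paper adds collaborative filtering on top.

```python
def dominates(a, b):
    """a Pareto-dominates b if a is at least as good on every feature
    and strictly better on at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))


def pareto_front(candidates):
    """Keep only the non-dominated candidates: drugs for which no
    other candidate is better on all similarity features at once."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]
```

Using the Pareto front instead of a single weighted score avoids committing to fixed feature weights, which is the main appeal of dominance-based neighbour selection.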
NASA Astrophysics Data System (ADS)
Li, Kai; Zhou, Xuhua; Guo, Nannan; Zhao, Gang; Xu, Kexin; Lei, Weiwei
2017-09-01
Zero-difference kinematic, dynamic and reduced-dynamic precise orbit determination (POD) are three methods for obtaining the precise orbits of Low Earth Orbit satellites (LEOs) from on-board GPS observations. Comparing the differences between these methods is of great significance for establishing the mathematical model and helps in selecting a suitable method for determining a satellite's orbit. Based on zero-difference GPS carrier-phase measurements, the Shanghai Astronomical Observatory (SHAO) has improved the early version of SHORDE and developed it into an integrated software system that can perform POD of LEOs using the above three methods. To illustrate the functionality of the software, we take the Gravity Recovery And Climate Experiment (GRACE) on-board GPS observations from January 2008 as an example and compute the corresponding GRACE orbits using the SHORDE software. To evaluate the accuracy, we compare the orbits with the precise orbits provided by the Jet Propulsion Laboratory (JPL). The results show that: (1) With the dynamic POD method, when force models are used to represent the non-conservative forces, the average accuracy of the GRACE orbit is 2.40 cm, 3.91 cm, 2.34 cm and 5.17 cm in the radial (R), along-track (T), cross-track (N) and 3D directions, respectively; when accelerometer observations are used instead of a non-conservative perturbation model, the average accuracy is 1.82 cm, 2.51 cm, 3.48 cm and 4.68 cm in the R, T, N and 3D directions, respectively. This shows that using accelerometer observations instead of the non-conservative perturbation model yields better orbit accuracy. (2) With the reduced-dynamic POD method, the average accuracy of the orbit is 0.80 cm, 1.36 cm, 2.38 cm and 2.87 cm in the R, T, N and 3D directions, respectively.
This method is carried out by setting up pseudo-stochastic pulses to absorb the errors of atmospheric drag and other perturbations. (3) With the kinematic POD method, the accuracy of the GRACE orbit is 2.92 cm, 2.48 cm, 2.76 cm and 4.75 cm in the R, T, N and 3D directions, respectively. In conclusion, POD of the GRACE satellite is practicable using different strategies and methods. The orbit solutions are good and stable, and all methods can obtain GRACE orbits with centimeter-level precision.
Huang, Yongyang; Badar, Mudabbir; Nitkowski, Arthur; Weinroth, Aaron; Tansu, Nelson; Zhou, Chao
2017-01-01
Space-division multiplexing optical coherence tomography (SDM-OCT) is a recently developed parallel OCT imaging method that achieves a multi-fold speed improvement. However, the assembly of the fiber optic components used in the first prototype system was labor-intensive and susceptible to errors. Here, we demonstrate a high-speed SDM-OCT system using an integrated photonic chip that can be reliably manufactured with high precision and low per-unit cost. A three-layer cascade of 1 × 2 splitters was integrated in the photonic chip to split the incident light into 8 parallel imaging channels with ~3.7 mm optical delay in air between channels. High-speed imaging (~1 s/volume) of porcine eyes ex vivo and wide-field imaging (~18.0 × 14.3 mm2) of human fingers in vivo were demonstrated with the chip-based SDM-OCT system. PMID:28856055
Public health and precision medicine share a goal.
Vaithinathan, Asokan G; Asokan, Vanitha
2017-05-01
The advances made in genomics and molecular tools aid public health programs in the investigation of outbreaks and the control of diseases by taking advantage of precision medicine. Precision medicine means "segregating individuals into subpopulations who vary in their disease susceptibility and response to a precise treatment," not merely designing drugs or creating medical devices. By 2017, the United Kingdom 100,000 Genomes Project is expected to sequence 100,000 genomes from 70,000 patients. Similarly, the Precision Medicine Initiative of the United States plans to increase population-based genome sequencing and link it with clinical data. A national cohort of around 1 million people is to be established in the long term to investigate the genetic and environmental determinants of health and disease, optionally integrated with participants' electronic health records. Precision public health can be seen as administering the right intervention to the needy population at an appropriate time. Precision medicine originates in the wet lab, while evidence-based medicine is nurtured in the clinic. Linking quintessential basic science research with clinical practice is necessary. In addition, new technologies to employ and analyze data in an integrated and dynamic way are essential for public health and precision medicine. The transition from an evidence-based approach in public health to a genomic approach to individuals, with a paradigm shift from "reactive" medicine to more "proactive" and personalized health care, may sound exceptional. However, a population perspective is needed for precision medicine to succeed. © 2016 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.
Flight evaluation of differential GPS aided inertial navigation systems
NASA Technical Reports Server (NTRS)
Mcnally, B. David; Paielli, Russell A.; Bach, Ralph E., Jr.; Warner, David N., Jr.
1992-01-01
Algorithms are described for integration of Differential Global Positioning System (DGPS) data with Inertial Navigation System (INS) data to provide an integrated DGPS/INS navigation system. The objective is to establish the benefits that can be achieved through various levels of integration of DGPS with INS for precision navigation. An eight state Kalman filter integration was implemented in real-time on a twin turbo-prop transport aircraft to evaluate system performance during terminal approach and landing operations. A fully integrated DGPS/INS system is also presented which models accelerometer and rate-gyro measurement errors plus position, velocity, and attitude errors. The fully integrated system was implemented off-line using range-domain (seventeen-state) and position domain (fifteen-state) Kalman filters. Both filter integration approaches were evaluated using data collected during the flight test. Flight-test data consisted of measurements from a 5 channel Precision Code GPS receiver, a strap-down Inertial Navigation Unit (INU), and GPS satellite differential range corrections from a ground reference station. The aircraft was laser tracked to determine its true position. Results indicate that there is no significant improvement in positioning accuracy with the higher levels of DGPS/INS integration. All three systems provided high-frequency (e.g., 20 Hz) estimates of position and velocity. The fully integrated system provided estimates of inertial sensor errors which may be used to improve INS navigation accuracy should GPS become unavailable, and improved estimates of acceleration, attitude, and body rates which can be used for guidance and control. Precision Code DGPS/INS positioning accuracy (root-mean-square) was 1.0 m cross-track and 3.0 m vertical. (This AGARDograph was sponsored by the Guidance and Control Panel.)
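The filter integrations described above can be illustrated with a deliberately reduced example: a two-state (position, velocity) Kalman filter in which an INS-style constant-velocity prediction is corrected by a DGPS position fix. This is far simpler than the paper's eight-, fifteen- and seventeen-state filters, and the noise parameters q and r are arbitrary illustrative values.

```python
import numpy as np


def kalman_update(x, P, z, dt, q=0.01, r=1.0):
    """One predict/update cycle of a toy 2-state filter.

    x: 2x1 state [position; velocity], P: 2x2 covariance,
    z: GPS position measurement, dt: time step.
    Prediction propagates the INS-style constant-velocity model;
    the update corrects it with the position fix."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])     # process noise (white accel.)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the GPS position fix
    S = H @ P @ H.T + r                     # innovation covariance
    K = P @ H.T / S                         # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

The same predict/correct skeleton scales to the full DGPS/INS case by enlarging the state with attitude and sensor-error terms and replacing H with the range-domain or position-domain measurement model.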
Integration of phytochemicals and phytotherapy into cancer precision medicine.
Efferth, Thomas; Saeed, Mohamed E M; Mirghani, Elhaj; Alim, Awadh; Yassin, Zahir; Saeed, Elfatih; Khalid, Hassan E; Daak, Salah
2017-07-25
Concepts of individualized therapy in the 1970s and 1980s attempted to develop predictive in vitro tests for individual drug responsiveness without reaching clinical routine. Precision medicine attempts to devise novel individual cancer therapy strategies. Using bioinformatics, relevant knowledge is extracted from huge amounts of data. However, tumor heterogeneity challenges chemotherapy due to genetically and phenotypically different cell subpopulations, which may lead to refractory tumors. Natural products have always served as vital resources for cancer therapy (e.g., Vinca alkaloids, camptothecin, paclitaxel, etc.) and are also sources of novel drugs. Targeted drugs developed to specifically address tumor-related proteins represent the basis of precision medicine. Natural products from plants represent an excellent resource for targeted therapies. Phytochemicals and herbal mixtures act multi-specifically, i.e., they attack multiple targets at the same time. Network pharmacology facilitates the identification of the complexity of pharmacogenomic networks and of new signaling networks that are distorted in tumors. In the present review, we give a conceptual overview of how the problem of drug resistance may be approached by integrating phytochemicals and phytotherapy into academic Western medicine. Modern technology platforms (e.g., "-omics" technologies, DNA/RNA sequencing, and network pharmacology) can be applied to diverse treatment modalities such as cytotoxic and targeted chemotherapy as well as phytochemicals and phytotherapy. Thereby, these technologies represent an integrative momentum to merge the best of two worlds: clinical oncology and traditional medicine. In conclusion, the integration of phytochemicals and phytotherapy into cancer precision medicine represents a valuable complement to chemically synthesized drugs and therapeutic antibodies.
Hyperspectral imagery for mapping crop yield for precision agriculture
USDA-ARS?s Scientific Manuscript database
Crop yield is perhaps the most important piece of information for crop management in precision agriculture. It integrates the effects of various spatial variables such as soil properties, topographic attributes, tillage, plant population, fertilization, irrigation, and pest infestations. A yield map...
NASA Astrophysics Data System (ADS)
Piro, Salvatore; Papale, Enrico; Zamuner, Daniela
2016-04-01
Geophysical methods are frequently used in archaeological prospection to provide detailed information about the presence of structures in the subsurface, as well as their position and geometrical reconstruction, by measuring variations of some physical properties. Often, due to the limited size and depth of an archaeological structure, it may be rather difficult to single out its position and extent because of the generally low signal-to-noise ratio. This problem can be overcome by improving data acquisition and processing techniques and by integrating different geophysical methods. In this work, two sites of archaeological interest were investigated employing several methods (Ground Penetrating Radar (GPR), Electrical Resistivity Tomography (ERT), Fluxgate Differential Magnetic) to obtain precise and detailed maps of subsurface bodies. The first site, situated in a suburban area between Itri and Fondi, in the Aurunci Natural Regional Park (Central Italy), is characterized by the presence of remains of past human activity dating from the third century B.C. The second site is instead situated in an urban area in the city of Rome (Basilica di Santa Balbina), where historical evidence is also present. The methods employed allowed us to determine the position and geometry of some structures in the subsurface related to this past human activity. To gain a better understanding of the subsurface, we then performed a qualitative and quantitative integration of these data, fusing the data from all the methods used to obtain a complete visualization of the investigated area. Qualitative integration consists in graphically overlaying the maps obtained by the single methods; this method yields only images, not new data that may be subsequently analyzed.
Quantitative integration is instead performed by mathematical and statistical methods, which allow a more accurate reconstruction of the subsurface and generate new data with high information content.
A Long-Term Performance Enhancement Method for FOG-Based Measurement While Drilling
Zhang, Chunxi; Lin, Tie
2016-01-01
In the oil industry, measurement-while-drilling (MWD) systems are typically used to provide the real-time position and orientation of the bottom hole assembly (BHA) during drilling. However, present MWD systems based on magnetic surveying technology can barely ensure good performance because of magnetic interference phenomena. In this paper, an MWD surveying system based on a fiber optic gyroscope (FOG) was developed to replace the magnetic surveying system. To accommodate the size constraints of downhole drilling, a new design method is adopted. To achieve long-term, high-precision position and orientation surveying, an integrated surveying algorithm is proposed based on the inertial navigation system (INS) and drilling features. In addition, the FOG-based MWD error model is built and the drilling features are analyzed. The state-space system model and the observation update model of the Kalman filter are built. To validate the availability and utility of the algorithm, a semi-physical simulation is conducted under laboratory conditions. Comparison with traditional algorithms shows that errors are suppressed and the measurement precision of the proposed algorithm is better than that of the traditional ones. In addition, the proposed method requires far less time than the zero velocity update (ZUPT) method. PMID:27483270
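The state-space and observation-update models mentioned above follow the standard Kalman predict/update cycle. As a minimal illustration only (the paper's actual filter uses a multi-dimensional INS error state; the scalar random-walk model and noise values below are invented for demonstration):

```python
# Minimal scalar Kalman filter illustrating the predict/update cycle.
# The state model, q and r are assumptions, not the paper's INS error model.

def kalman_step(x, p, z, q=0.01, r=0.25):
    """One predict/update cycle for a scalar random-walk state.
    x, p : prior state estimate and its variance
    z    : new observation
    q, r : process and measurement noise variances (assumed)
    """
    # Predict: random-walk model, variance grows by the process noise
    x_pred, p_pred = x, p + q
    # Update: blend prediction and observation via the Kalman gain
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Filtering a constant true value of 1.0 observed with noise
x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0, 0.95]:
    x, p = kalman_step(x, p, z)
```

With each observation the gain shrinks and the estimate converges toward the true value while its variance decreases, which is the mechanism the integrated surveying algorithm exploits to suppress INS error growth.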
Streamline integration as a method for two-dimensional elliptic grid generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiesenberger, M., E-mail: Matthias.Wiesenberger@uibk.ac.at; Held, M.; Einkemmer, L.
We propose a new numerical algorithm to construct a structured numerical elliptic grid of a doubly connected domain. Our method is applicable to domains with boundaries defined by two contour lines of a two-dimensional function. Furthermore, we can adapt any analytically given boundary aligned structured grid, which specifically includes polar and Cartesian grids. The resulting coordinate lines are orthogonal to the boundary. Grid points as well as the elements of the Jacobian matrix can be computed efficiently and up to machine precision. In the simplest case we construct conformal grids, yet with the help of weight functions and monitor metrics we can control the distribution of cells across the domain. Our algorithm is parallelizable and easy to implement with elementary numerical methods. We assess the quality of grids by considering both the distribution of cell sizes and the accuracy of the solution to elliptic problems. Among the tested grids these key properties are best fulfilled by the grid constructed with the monitor metric approach. Highlights: • Construct structured, elliptic numerical grids with elementary numerical methods. • Align coordinate lines with or make them orthogonal to the domain boundary. • Compute grid points and metric elements up to machine precision. • Control cell distribution by adaption functions or monitor metrics.
Biotemplated Morpho Butterfly Wings for Tunable Structurally Colored Photocatalysts.
Rodríguez, Robin E; Agarwal, Sneha P; An, Shun; Kazyak, Eric; Das, Debashree; Shang, Wen; Skye, Rachael; Deng, Tao; Dasgupta, Neil P
2018-02-07
Morpho sulkowskyi butterfly wings contain naturally occurring hierarchical nanostructures that produce structural coloration. The high aspect ratio and surface area of these wings make them attractive nanostructured templates for applications in solar energy and photocatalysis. However, biomimetic approaches to replicate their complex structural features and integrate functional materials into their three-dimensional framework are highly limited in precision and scalability. Herein, a biotemplating approach is presented that precisely replicates Morpho nanostructures by depositing nanocrystalline ZnO coatings onto wings via low-temperature atomic layer deposition (ALD). This study demonstrates the ability to precisely tune the natural structural coloration while also integrating multifunctionality by imparting photocatalytic activity onto fully intact Morpho wings. Optical spectroscopy and finite-difference time-domain numerical modeling demonstrate that ALD ZnO coatings can rationally tune the structural coloration across the visible spectrum. These structurally colored photocatalysts exhibit an optimal coating thickness to maximize photocatalytic activity, which is attributed to trade-offs between light absorption and catalytic quantum yield with increasing coating thickness. These multifunctional photocatalysts present a new approach to integrating solar energy harvesting into visually attractive surfaces that can be integrated into building facades or other macroscopic structures to impart aesthetic appeal.
COBALT: A GN&C Payload for Testing ALHAT Capabilities in Closed-Loop Terrestrial Rocket Flights
NASA Technical Reports Server (NTRS)
Carson, John M., III; Amzajerdian, Farzin; Hines, Glenn D.; O'Neal, Travis V.; Robertson, Edward A.; Seubert, Carl; Trawny, Nikolas
2016-01-01
The COBALT (CoOperative Blending of Autonomous Landing Technology) payload is being developed within NASA as a risk reduction activity to mature, integrate and test ALHAT (Autonomous precision Landing and Hazard Avoidance Technology) systems targeted for infusion into near-term robotic and future human space flight missions. The initial COBALT payload instantiation is integrating the third-generation ALHAT Navigation Doppler Lidar (NDL) sensor, for ultra-high-precision velocity plus range measurements, with the passive-optical Lander Vision System (LVS) that provides Terrain Relative Navigation (TRN) global-position estimates. The COBALT payload will be integrated onboard a rocket-propulsive terrestrial testbed and will provide precise navigation estimates and guidance planning during two flight test campaigns in 2017 (one open-loop and one closed-loop). The NDL is targeting performance capabilities desired for future Mars and Moon Entry, Descent and Landing (EDL). The LVS is already baselined for TRN on the Mars 2020 robotic lander mission. The COBALT platform will provide NASA with a new risk-reduction capability to test integrated EDL Guidance, Navigation and Control (GN&C) components in closed-loop flight demonstrations prior to the actual mission EDL.
Flexible integration of free-standing nanowires into silicon photonics.
Chen, Bigeng; Wu, Hao; Xin, Chenguang; Dai, Daoxin; Tong, Limin
2017-06-14
Silicon photonics has been developed successfully with a top-down fabrication technique to enable large-scale photonic integrated circuits with high reproducibility, but is limited intrinsically by the material capability for active or nonlinear applications. On the other hand, free-standing nanowires synthesized via a bottom-up growth present great material diversity and structural uniformity, but precisely assembling free-standing nanowires for on-demand photonic functionality remains a great challenge. Here we report hybrid integration of free-standing nanowires into silicon photonics with high flexibility by coupling free-standing nanowires onto target silicon waveguides that are simultaneously used for precise positioning. Coupling efficiency between a free-standing nanowire and a silicon waveguide is up to ~97% in the telecommunication band. A hybrid nonlinear-free-standing nanowires-silicon waveguides Mach-Zehnder interferometer and a racetrack resonator for significantly enhanced optical modulation are experimentally demonstrated, as well as hybrid active-free-standing nanowires-silicon waveguides circuits for light generation. These results suggest an alternative approach to flexible multifunctional on-chip nanophotonic devices. Precisely assembling free-standing nanowires for on-demand photonic functionality remains a challenge. Here, Chen et al. integrate free-standing nanowires into silicon waveguides and show all-optical modulation and light generation on silicon photonic chips.
NASA Astrophysics Data System (ADS)
Malekmohammadi, Bahram; Ramezani Mehrian, Majid; Jafari, Hamid Reza
2012-11-01
One of the most important water-resources management strategies for arid lands is managed aquifer recharge (MAR). In establishing a MAR scheme, site selection is the prime prerequisite that can be assisted by geographic information system (GIS) tools. One of the most important uncertainties in the site-selection process using GIS is finite ranges or intervals resulting from data classification. In order to reduce these uncertainties, a novel method has been developed involving the integration of multi-criteria decision making (MCDM), GIS, and a fuzzy inference system (FIS). The Shemil-Ashkara plain in the Hormozgan Province of Iran was selected as the case study; slope, geology, groundwater depth, potential for runoff, land use, and groundwater electrical conductivity have been considered as site-selection factors. By defining fuzzy membership functions for the input layers and the output layer, and by constructing fuzzy rules, a FIS has been developed. Comparison of the results produced by the proposed method and the traditional simple additive weighted (SAW) method shows that the proposed method yields more precise results. In conclusion, fuzzy-set theory can be an effective method to overcome associated uncertainties in classification of geographic information data.
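The FIS described above combines fuzzy membership functions on the input layers with fuzzy rules to produce a graded suitability score. A minimal Mamdani-style sketch follows; the triangular membership parameters, the single rule, and the input values are all hypothetical, not those of the Shemil-Ashkara study:

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical rule: IF slope is low AND groundwater depth is suitable
# THEN site suitability is high (all parameters below are invented).
slope_low = trimf(3.0, 0.0, 2.0, 10.0)     # membership degree for a 3% slope
depth_ok = trimf(25.0, 10.0, 30.0, 50.0)   # degree for a 25 m water depth
fire = float(min(slope_low, depth_ok))     # Mamdani min for the AND

xs = np.linspace(0.0, 1.0, 201)            # normalized suitability axis
out = np.minimum(trimf(xs, 0.5, 0.9, 1.0), fire)   # clipped consequent set
suitability = float((out * xs).sum() / out.sum())  # discrete centroid defuzzification
```

Unlike classifying each input into crisp intervals, the membership functions vary continuously, which is how the fuzzy approach avoids the boundary artifacts of the traditional SAW classification.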
Research on the phase adjustment method for dispersion interferometer on HL-2A tokamak
NASA Astrophysics Data System (ADS)
Tongyu, WU; Wei, ZHANG; Haoxi, WANG; Yan, ZHOU; Zejie, YIN
2018-06-01
A synchronous demodulation system is proposed and deployed for the CO2 dispersion interferometer on HL-2A, which aims at high plasma density measurements and real-time feedback control. To ensure that the demodulator and the interferometer signal are synchronous in phase, a phase adjustment (PA) method has been developed for the demodulation system. The method takes advantage of the parallel and pipelined processing capabilities of the field programmable gate array to carry out high-performance, low-latency PA. The experimental results show that the PA method is crucial to the synchronous demodulation system and can reliably follow fast changes in the electron density. The system can measure the line-integrated density with a high precision of 2.0 × 10^18 m^-2.
Xie, Wei-Qi; Chai, Xin-Sheng
2016-04-22
This paper describes a new method for the rapid determination of the moisture content in paper materials. The method is based on multiple headspace extraction gas chromatography (MHE-GC) at a temperature above the boiling point of water, from which the integrated water loss from the tested sample due to evaporation can be measured and the moisture content in the sample determined. The results show that the new method has good precision (relative standard deviation < 0.96%), high sensitivity (limit of quantitation = 0.005%) and good accuracy (relative differences < 1.4%). Therefore, the method is well suited to many research and industrial applications. Copyright © 2016 Elsevier B.V. All rights reserved.
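The integrated water loss in MHE rests on successive extraction peak areas decaying geometrically, so the total can be extrapolated from the first few peaks. A minimal sketch of that standard MHE extrapolation (not necessarily the paper's exact calibration procedure):

```python
import math

def mhe_total_area(areas):
    """Extrapolate the total analyte signal from the first few headspace
    extractions, assuming the peak areas decay geometrically:
        A_i = A_1 * q**(i-1),  total = A_1 / (1 - q).
    The decay ratio q is estimated by a log-linear least-squares fit,
    which is more robust than using only the first two peaks."""
    n = len(areas)
    logs = [math.log(a) for a in areas]
    xm = (n - 1) / 2.0
    ym = sum(logs) / n
    slope = sum((i - xm) * (y - ym) for i, y in enumerate(logs)) / \
            sum((i - xm) ** 2 for i in range(n))
    q = math.exp(slope)
    return areas[0] / (1.0 - q)

# Exact geometric series: A_1 = 100, q = 0.5 -> total = 200
total = mhe_total_area([100.0, 50.0, 25.0, 12.5])
```

The extrapolated total area, combined with a water calibration, yields the integrated water loss and hence the moisture content.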
Back-illuminate fiber system research for multi-object fiber spectroscopic telescope
NASA Astrophysics Data System (ADS)
Zhou, Zengxiang; Liu, Zhigang; Hu, Hongzhuan; Wang, Jianping; Zhai, Chao; Chu, Jiaru
2016-07-01
Using parallel-controlled fiber positioners as the spectroscopic receiver is an efficient observation scheme for spectral surveys; it has been used in LAMOST and has been proposed for CFHT and the rebuilt Mayall telescope. During telescope observation, the position of each fiber strongly influences how efficiently the spectrum is fed into the fiber toward the spectrograph. When the fibers are back-illuminated at the spectrograph end, they emit light at the positioner end, so CCD cameras can capture images of the fiber tips covering the focal plane, calculate precise position information with the light-centroid method, and feed it back to the control system. After many years of research, back-illuminated fiber measurement has proven the best method for acquiring precise fiber positions. A back-illuminated fiber system was therefore developed and combined with the low-resolution spectrographs in LAMOST. It provides uniform light output to the fibers, meets the requirements of the CCD camera measurement, and is controlled by the high-level observation system, which can shut it down during telescope observation. This paper introduces the design of the back-illuminated system and tests of different light sources. After optimization, the illumination system is comparable to an integrating sphere and meets the conditions for fiber position measurement.
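The light-centroid step mentioned above reduces each fiber-tip image to an intensity-weighted mean position. A minimal sketch (ignoring the dark-frame subtraction and thresholding a real measurement pipeline would need):

```python
import numpy as np

def light_centroid(img):
    """Intensity-weighted centroid of a fiber-tip image, as (row, col).
    A background-free image is assumed; real pipelines subtract dark
    frames and threshold before computing the centroid."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

# A symmetric 5x5 spot centered on pixel (2, 2)
spot = np.zeros((5, 5))
spot[1:4, 1:4] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
r, c = light_centroid(spot)
```

Because the centroid is a weighted average over many pixels, it locates the fiber tip to a small fraction of a pixel, which is what makes sub-pixel position feedback to the positioner control system possible.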
Liu, Jiangang; Wang, Guangyao; Chu, Qingquan; Chen, Fu
2017-07-01
Nitrogen (N) application significantly increases maize yield; however, unreasonable use of N fertilizer is common in China. The analysis of crop yield gaps can reveal the limiting factors for yield improvement, but there is a lack of practical strategies for narrowing yield gaps of household farms. The objectives of this study were to assess the yield gap of summer maize using an integrative method and to develop strategies for narrowing the maize yield gap through precise N fertilization. The results indicated that there was a significant difference in maize yield among fields, with a low level of variation. Additionally, significant differences in N application rate were observed among fields, with high variability. Based on long-term simulation results, the optimal N application rate was 193 kg ha^-1, with a corresponding maximum attainable yield (AY_max) of 10 318 kg ha^-1. A considerable difference between farmers' yields and AY_max was observed, and the agronomic efficiency of applied N fertilizer (AE_N) in farmers' fields was low. The integrative method lays a foundation for exploring the specific factors constraining crop yield gaps at the field scale and for developing strategies for rapid site-specific N management. Optimization strategies to narrow the maize yield gap include increasing N application rates and adjusting the N application schedule. © 2016 Society of Chemical Industry.
NASA Astrophysics Data System (ADS)
Tonkin, T. N.; Midgley, N. G.; Graham, D. J.; Labadz, J. C.
2014-12-01
Novel topographic survey methods that integrate both structure-from-motion (SfM) photogrammetry and small unmanned aircraft systems (sUAS) are a rapidly evolving investigative technique. Due to the diverse range of survey configurations available and the infancy of these new methods, further research is required. Here, the accuracy, precision and potential applications of this approach are investigated. A total of 543 images of the Cwm Idwal moraine-mound complex were captured from a light (< 5 kg) semi-autonomous multi-rotor unmanned aircraft system using a consumer-grade 18 MP compact digital camera. The images were used to produce a DSM (digital surface model) of the moraines. The DSM is in good agreement with 7761 total station survey points providing a total vertical RMSE value of 0.517 m and vertical RMSE values as low as 0.200 m for less densely vegetated areas of the DSM. High-precision topographic data can be acquired rapidly using this technique with the resulting DSMs and orthorectified aerial imagery at sub-decimetre resolutions. Positional errors on the total station dataset, vegetation and steep terrain are identified as the causes of vertical disagreement. Whilst this aerial survey approach is advocated for use in a range of geomorphological settings, care must be taken to ensure that adequate ground control is applied to give a high degree of accuracy.
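The vertical RMSE figures quoted above come from differencing DSM elevations against the total-station check points. The computation itself is simple (the values below are illustrative, not the survey data):

```python
import numpy as np

def vertical_rmse(dsm_z, survey_z):
    """Vertical root-mean-square error between DSM elevations sampled at
    the check points and the total-station elevations (both in metres)."""
    diff = np.asarray(dsm_z, float) - np.asarray(survey_z, float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Three hypothetical check points
rmse = vertical_rmse([10.1, 12.4, 9.8], [10.0, 12.0, 10.0])
```

Because squaring weights large residuals heavily, RMSE is sensitive to the vegetation and steep-terrain outliers the abstract identifies, which is why the densely vegetated areas inflate the total figure relative to the open-ground subsets.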
Development of an integrated BEM approach for hot fluid structure interaction
NASA Technical Reports Server (NTRS)
Dargush, Gary F.; Banerjee, Prasanta K.; Honkala, Keith A.
1988-01-01
In the present work, the boundary element method (BEM) is chosen as the basic analysis tool, principally because the definition of temperature, flux, displacement and traction are very precise on a boundary-based discretization scheme. One fundamental difficulty is, of course, that a BEM formulation requires a considerable amount of analytical work, which is not needed in the other numerical methods. Progress made toward the development of a boundary element formulation for the study of hot fluid-structure interaction in Earth-to-Orbit engine hot section components is reported. The primary thrust of the program to date has been directed quite naturally toward the examination of fluid flow, since boundary element methods for fluids are at a much less developed state.
Neural control and precision of flight muscle activation in Drosophila.
Lehmann, Fritz-Olaf; Bartussek, Jan
2017-01-01
Precision of motor commands is highly relevant in a large context of various locomotor behaviors, including stabilization of body posture, heading control and directed escape responses. While posture stability and heading control in walking and swimming animals benefit from high friction via ground reaction forces and elevated viscosity of water, respectively, flying animals have to cope with comparatively little aerodynamic friction on body and wings. Although low frictional damping in flight is the key to the extraordinary aerial performance and agility of flying birds, bats and insects, it challenges these animals with extraordinary demands on sensory integration and motor precision. Our review focuses on the dynamic precision with which Drosophila activates its flight muscular system during maneuvering flight, considering relevant studies on neural and muscular mechanisms of thoracic propulsion. In particular, we tackle the precision with which flies adjust power output of asynchronous power muscles and synchronous flight control muscles by monitoring muscle calcium and spike timing within the stroke cycle. A substantial proportion of the review is engaged in the significance of visual and proprioceptive feedback loops for wing motion control including sensory integration at the cellular level. We highlight that sensory feedback is the basis for precise heading control and body stability in flies.
Long-range open-path greenhouse gas monitoring using mid-infrared laser dispersion spectroscopy
NASA Astrophysics Data System (ADS)
Daghestani, Nart; Brownsword, Richard; Weidmann, Damien
2015-04-01
Accurate and sensitive methods of monitoring greenhouse gas (GHG) emission over large areas have become a pressing need to deliver improved estimates of both human-made and natural GHG budgets. These needs relate to a variety of sectors including environmental monitoring, energy, the oil and gas industry, waste management, biogenic emission characterization, and leak detection. To address these needs, long-distance open-path laser spectroscopy methods offer significant advantages in terms of temporal resolution, sensitivity, compactness and cost effectiveness. Path-integrated mixing ratio measurements stemming from long open-path laser spectrometers can provide emission mapping when combined with meteorological data and/or through tomographic approaches. Laser absorption spectroscopy is the predominant method of detecting gases over long integrated path lengths. The development of dispersion spectrometers measuring tiny refractive index changes, rather than optical power transmission, may offer a set of specific advantages [1]. These include greater immunity to laser power fluctuations, greater dynamic range due to the linearity of dispersion, and ideally a zero baseline signal easing quantitative retrievals of path-integrated mixing ratios. Chirped laser dispersion spectrometers (CLaDS) developed for the monitoring of atmospheric methane and carbon dioxide will be presented. Using a quantum cascade laser as the source, a minimalistic and compact system operating at 7.8 μm has been developed and demonstrated for the monitoring of atmospheric methane over a 90 meter open path [2]. Through full instrument modelling and error propagation analysis, a precision of 3 ppm·m·Hz^-0.5 has been established (one-sigma precision for atmospheric methane normalized over a 1 m path and 1 s measurement duration). The system was fully functional in rain, sleet, and moderate fog. The physical model and system concept of CLaDS can be adapted to any greenhouse gas species.
Currently we are developing an in-lab instrument that can measure carbon dioxide using a quantum cascade laser operating in the 4 μm range. In this case, the dynamic-range benefit of CLaDS is used to provide high precision even when peak absorbance in the CO2 spectrum exceeds 2. Development of this deployable CO2 measurement system is still at an early stage. So far, laboratory gas cell experiments have demonstrated 9.3 ppm·m·Hz^-0.5 for CO2 monitoring. This corresponds to about 0.02% relative precision in measuring the CO2 atmospheric background over a 100 m open path in one second. [1] G. Wysocki and D. Weidmann, "Molecular dispersion spectroscopy for chemical sensing using chirped mid-infrared quantum cascade laser," Opt. Express 18(25), 26123-26140 (2010). [2] N. S. Daghestani, R. Brownsword, and D. Weidmann, "Analysis and demonstration of atmospheric methane monitoring by mid-infrared open-path chirped dispersion spectroscopy," Opt. Express 22(25), A1731-A1743 (2014).
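The normalized precision figures above (in ppm·m·Hz^-0.5) convert to a mixing-ratio precision for a given path length and averaging time, assuming white-noise 1/sqrt(t) averaging. A small sketch reproducing the abstract's 0.02% CO2 example (the ~400 ppm background is an assumed round number):

```python
import math

def mixing_ratio_precision(noise_ppm_m_sqrthz, path_m, time_s):
    """Convert a normalized noise figure (ppm·m·Hz^-0.5) into a
    mixing-ratio precision (ppm) for a given open-path length and
    averaging time, assuming white-noise 1/sqrt(t) averaging."""
    return noise_ppm_m_sqrthz / (path_m * math.sqrt(time_s))

# The abstract's CO2 figure: 9.3 ppm·m·Hz^-0.5 over 100 m in 1 s
sigma = mixing_ratio_precision(9.3, 100.0, 1.0)   # mixing-ratio precision, ppm
relative = sigma / 400.0                          # vs ~400 ppm CO2 background
```

The 0.093 ppm result against a ~400 ppm background gives roughly 0.02% relative precision, matching the figure stated in the abstract.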
Acquisition and processing of data for isotope-ratio-monitoring mass spectrometry
NASA Technical Reports Server (NTRS)
Ricci, M. P.; Merritt, D. A.; Freeman, K. H.; Hayes, J. M.
1994-01-01
Methods are described for continuous monitoring of signals required for precise analyses of 13C, 18O, and 15N in gas streams containing varying quantities of CO2 and N2. The quantitative resolution (i.e. maximum performance in the absence of random errors) of these methods is adequate for determination of isotope ratios with an uncertainty of one part in 10^5; the precision actually obtained is often better than one part in 10^4. This report describes data-processing operations including definition of beginning and ending points of chromatographic peaks and quantitation of background levels, allowance for effects of chromatographic separation of isotopically substituted species, integration of signals related to specific masses, correction for effects of mass discrimination, recognition of drifts in mass spectrometer performance, and calculation of isotopic delta values. Characteristics of a system allowing off-line revision of parameters used in data reduction are described, and an algorithm for identification of background levels in complex chromatograms is outlined. Effects of imperfect chromatographic resolution are demonstrated and discussed, and an approach to the deconvolution of signals from coeluting substances is described.
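Among the operations listed above, background quantitation and peak integration can be sketched minimally as straight-line background subtraction between the peak limits followed by trapezoidal integration (the paper's background-identification algorithm is considerably more elaborate):

```python
import numpy as np

def peak_area(t, y, start, end):
    """Area of a chromatographic peak between the start/end times after
    subtracting a straight-line background drawn between the signal
    values at the two limits (a minimal scheme for illustration)."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    m = (t >= start) & (t <= end)
    tc, yc = t[m], y[m]
    base = np.interp(tc, [tc[0], tc[-1]], [yc[0], yc[-1]])
    d = yc - base
    # trapezoidal integration of the background-corrected signal
    return float(np.sum((d[1:] + d[:-1]) * np.diff(tc)) / 2.0)

# Gaussian peak (area = height * sigma * sqrt(2*pi)) on a flat baseline
t = np.linspace(0.0, 10.0, 1001)
y = 1.0 + 5.0 * np.exp(-(t - 5.0) ** 2 / (2 * 0.5 ** 2))
area = peak_area(t, y, 2.0, 8.0)
```

In isotope-ratio monitoring this integration is performed per mass trace, and the delta value is then computed from the ratio of the integrated areas after the mass-discrimination correction.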
NASA Astrophysics Data System (ADS)
Wang, Y. P.; Lu, Z. P.; Sun, D. S.; Wang, N.
2016-01-01
In order to better express the characteristics of satellite clock bias (SCB) and improve SCB prediction precision, this paper proposes a new SCB prediction model which takes the physical characteristics of the space-borne atomic clock, the cyclic variation, and the random part of SCB into consideration. First, the new model employs a quadratic polynomial model with periodic items to fit and extract the trend term and cyclic term of SCB; then, based on the characteristics of the fitting residuals, a time-series ARIMA (Auto-Regressive Integrated Moving Average) model is used to model the residuals; eventually, the results from the two models are combined to obtain the final SCB prediction values. Finally, this paper uses precise SCB data from the IGS (International GNSS Service) to conduct prediction tests, and the results show that the proposed model is effective and has better prediction performance compared with the quadratic polynomial model, grey model, and ARIMA model. In addition, the new method can also overcome the insufficiency of the ARIMA model in model recognition and order determination.
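The two-stage scheme above, a quadratic-plus-periodic trend fit followed by a time-series model on the residuals, can be sketched as follows. To keep the example self-contained, a simple AR(1) stands in for the paper's ARIMA stage, and the clock data are synthetic:

```python
import numpy as np

def fit_predict_scb(t, scb, period, horizon):
    """Two-stage SCB predictor: a quadratic-plus-periodic trend fitted by
    least squares, plus an AR(1) model on the fit residuals (a simple
    stand-in for the paper's ARIMA stage)."""
    t = np.asarray(t, float)
    w = 2.0 * np.pi / period
    def design(tt):
        return np.column_stack([np.ones_like(tt), tt, tt**2,
                                np.sin(w * tt), np.cos(w * tt)])
    coef, *_ = np.linalg.lstsq(design(t), scb, rcond=None)
    resid = scb - design(t) @ coef
    # AR(1) coefficient from lag-1 covariance of the residuals
    denom = resid[:-1] @ resid[:-1]
    phi = (resid[:-1] @ resid[1:]) / denom if denom > 0 else 0.0
    k = np.arange(1, horizon + 1)
    tf = t[-1] + k * (t[1] - t[0])
    # combine the extrapolated trend with the decaying AR(1) forecast
    return design(tf) @ coef + resid[-1] * phi**k

# Synthetic clock bias: trend plus a 20-epoch periodic term (seconds)
trend = lambda tt: (1e-6 + 2e-9 * tt + 1e-12 * tt**2
                    + 5e-10 * np.sin(2 * np.pi * tt / 20.0))
t = np.arange(100.0)
pred = fit_predict_scb(t, trend(t), 20.0, 5)
err = float(np.max(np.abs(pred - trend(np.arange(100.0, 105.0)))))
```

On this noise-free series the trend stage alone recovers the signal and the residual stage contributes almost nothing; on real IGS data the residual model carries the stochastic clock behaviour the polynomial cannot capture.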
[Mathematical model of micturition allowing a detailed analysis of free urine flowmetry].
Valentini, F; Besson, G; Nelson, P
1999-04-01
A mathematical model of micturition allowing precise analysis of uroflowmetry curves (VBN method) is described together with some of its applications. The physiology of micturition and possible diagnostic hypotheses able to explain the shape of the uroflowmetry curve can be expressed by a series of differential equations. Integration of the system allows the validity of these hypotheses to be tested by simulation. A theoretical uroflowmetry is calculated in less than 1 second and analysis of a dysuric uroflowmetry takes about 5 minutes. The efficacy of the model is due to its rapidity and the precision of the comparisons between measured and predicted values. The method has been applied to almost one thousand curves. The uroflowmetries of normal subjects are restored without adjustment with a quadratic error of less than 1%, while those of dysuric patients require identification of one or two adaptive parameters characteristic of the underlying disease. These parameters remain constant during the same session, but vary with the disease and/or the treatment. This model could become a tool for noninvasive urodynamic studies.
Fabrication and correction of freeform surface based on Zernike polynomials by slow tool servo
NASA Astrophysics Data System (ADS)
Cheng, Yuan-Chieh; Hsu, Ming-Ying; Peng, Wei-Jei; Hsu, Wei-Yao
2017-10-01
Recently, freeform surfaces have been widely used in optical systems, because they benefit optical imaging and offer additional degrees of freedom to improve optical performance. Freeform optical fabrication integrates freeform optical design, precision freeform manufacture, freeform metrology, and a freeform compensation method that corrects the form deviation of the surface introduced during the production process, providing more flexibility and better performance. This paper focuses on the fabrication and correction of the freeform surface. In this study, multi-axis ultra-precision machining is used to improve the quality of the freeform surface, on a machine equipped with a positioning C-axis and a CXZ machining function, also called the slow tool servo (STS) function. The freeform compensation method based on Zernike polynomials is successfully verified; it corrects the form deviation of the freeform surface. Finally, the freeform surface is measured experimentally with an Ultrahigh Accurate 3D Profilometer (UA3P), and the form error is compensated with Zernike polynomial fitting to improve the form accuracy of the freeform surface.
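The Zernike-based compensation above amounts to least-squares fitting of low-order Zernike terms to the measured form-error map and subtracting the fit. A minimal sketch using only the first four terms (piston, x/y tilt, defocus) on synthetic data; real compensation uses many more terms and proper unit-disk normalization:

```python
import numpy as np

def zernike_fit(x, y, z):
    """Least-squares fit of the first four Zernike terms (piston, x-tilt,
    y-tilt, defocus) to a form-error map sampled on the unit disk;
    returns the coefficients and the residual after removing the fit."""
    A = np.column_stack([np.ones_like(x), x, y, 2.0 * (x**2 + y**2) - 1.0])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef, z - A @ coef

# Synthetic form-error map: pure defocus with coefficient 0.5
rng = np.random.default_rng(1)
x = rng.uniform(-0.7, 0.7, 500)
y = rng.uniform(-0.7, 0.7, 500)
z = 0.5 * (2.0 * (x**2 + y**2) - 1.0)
coef, resid = zernike_fit(x, y, z)
```

The fitted Zernike surface is what gets fed back as a tool-path correction in the STS machining pass, while the residual indicates the remaining form error.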
Unexpected arousal modulates the influence of sensory noise on confidence
Allen, Micah; Frank, Darya; Schwarzkopf, D Samuel; Fardo, Francesca; Winston, Joel S; Hauser, Tobias U; Rees, Geraint
2016-01-01
Human perception is invariably accompanied by a graded feeling of confidence that guides metacognitive awareness and decision-making. It is often assumed that this arises solely from the feed-forward encoding of the strength or precision of sensory inputs. In contrast, interoceptive inference models suggest that confidence reflects a weighted integration of sensory precision and expectations about internal states, such as arousal. Here we test this hypothesis using a novel psychophysical paradigm, in which unseen disgust-cues induced unexpected, unconscious arousal just before participants discriminated motion signals of variable precision. Across measures of perceptual bias, uncertainty, and physiological arousal we found that arousing disgust cues modulated the encoding of sensory noise. Furthermore, the degree to which trial-by-trial pupil fluctuations encoded this nonlinear interaction correlated with trial level confidence. Our results suggest that unexpected arousal regulates perceptual precision, such that subjective confidence reflects the integration of both external sensory and internal, embodied states. DOI: http://dx.doi.org/10.7554/eLife.18103.001 PMID:27776633
A primer on precision medicine informatics.
Sboner, Andrea; Elemento, Olivier
2016-01-01
In this review, we describe key components of a computational infrastructure for a precision medicine program that is based on clinical-grade genomic sequencing. Specific aspects covered in this review include software components and hardware infrastructure, reporting, integration into Electronic Health Records for routine clinical use and regulatory aspects. We emphasize informatics components related to reproducibility and reliability in genomic testing, regulatory compliance, traceability and documentation of processes, integration into clinical workflows, privacy requirements, prioritization and interpretation of results to report based on clinical needs, rapidly evolving knowledge base of genomic alterations and clinical treatments and return of results in a timely and predictable fashion. We also seek to differentiate between the use of precision medicine in germline and cancer. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Nguyen, Huu-Tho; Md Dawal, Siti Zawiah; Nukman, Yusoff; Aoyama, Hideki; Case, Keith
2015-01-01
Globalization of business and competitiveness in manufacturing has forced companies to improve their manufacturing facilities to respond to market requirements. Machine tool evaluation involves an essential decision based on imprecise and vague information, and plays a major role in improving productivity and flexibility in manufacturing. The aim of this study is to present an integrated approach for decision-making in machine tool selection. This paper focuses on the integration of a consistent fuzzy AHP (Analytic Hierarchy Process) and a fuzzy COmplex PRoportional ASsessment (COPRAS) for multi-attribute decision-making in selecting the most suitable machine tool. In this method, the fuzzy linguistic reference relation is integrated into AHP to handle the imprecise and vague information and to simplify data collection for the pair-wise comparison matrix of the AHP, which determines the weights of the attributes. The output of the fuzzy AHP is imported into the fuzzy COPRAS method for ranking alternatives through the closeness coefficient. The application of the proposed model is illustrated with a numerical example based on data collected by questionnaire and from the literature. The results highlight the integration of the improved fuzzy AHP and the fuzzy COPRAS as a precise tool providing effective multi-attribute decision-making for evaluating machine tools in an uncertain environment. PMID:26368541
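The COPRAS ranking stage described above can be sketched as follows. The decision matrix, weights, and centroid defuzzification rule are illustrative assumptions; the paper's consistent fuzzy linguistic preference relations and its exact aggregation are not reproduced here.

```python
def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + m + u) / 3.0

def copras_rank(matrix, weights, benefit):
    """COPRAS ranking of alternatives from a crisp decision matrix.

    matrix[i][j]: score of alternative i on criterion j (already defuzzified),
    weights[j]:  criterion weight (e.g. from a fuzzy AHP stage),
    benefit[j]:  True if criterion j is maximized, False if minimized.
    Returns utility degrees (closeness coefficients; 100 = best alternative).
    """
    n_alt = len(matrix)
    col = [sum(row[j] for row in matrix) for j in range(len(weights))]
    s_plus, s_minus = [0.0] * n_alt, [0.0] * n_alt
    for i, row in enumerate(matrix):
        for j, x in enumerate(row):
            v = weights[j] * x / col[j]          # weighted normalized score
            if benefit[j]:
                s_plus[i] += v
            else:
                s_minus[i] += v
    # Relative significance: Q_i = S+_i + (sum S-) / (S-_i * sum(1 / S-_j))
    tot_minus = sum(s_minus)
    if tot_minus == 0.0:                         # all criteria are benefits
        q = list(s_plus)
    else:
        inv_sum = sum(1.0 / s for s in s_minus)
        q = [sp + tot_minus / (sm * inv_sum)
             for sp, sm in zip(s_plus, s_minus)]
    q_max = max(q)
    return [100.0 * qi / q_max for qi in q]
```

The best alternative scores 100; lower utility degrees measure how far the other alternatives fall short of it.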
Spatially controlled doping of two-dimensional SnS2 through intercalation for electronics
Gong, Yongji; Yuan, Hongtao; Wu, Chun-Lan; ...
2018-02-26
Doped semiconductors are the most important building elements for modern electronic devices. In silicon-based integrated circuits, facile and controllable fabrication and integration of these materials can be realized without introducing a high-resistance interface. Besides, the emergence of two-dimensional (2D) materials enables the realization of atomically thin integrated circuits. However, the 2D nature of these materials precludes the use of traditional ion implantation techniques for carrier doping and further hinders device development. Here, we demonstrate a solvent-based intercalation method to achieve p-type, n-type and degenerately doped semiconductors in the same parent material at the atomically thin limit. In contrast to naturally grown n-type S-vacancy SnS2, Cu intercalated bilayer SnS2 obtained by this technique displays a hole field-effect mobility of ~40 cm2 V-1 s-1, and the obtained Co-SnS2 exhibits a metal-like behaviour with sheet resistance comparable to that of few-layer graphene. Combining this intercalation technique with lithography, an atomically seamless p–n–metal junction could be further realized with precise size and spatial control, which makes in-plane heterostructures practically applicable for integrated devices and other 2D materials. Therefore, the presented intercalation method can open a new avenue connecting the previously disparate worlds of integrated circuits and atomically thin materials.
Spatially controlled doping of two-dimensional SnS2 through intercalation for electronics
NASA Astrophysics Data System (ADS)
Gong, Yongji; Yuan, Hongtao; Wu, Chun-Lan; Tang, Peizhe; Yang, Shi-Ze; Yang, Ankun; Li, Guodong; Liu, Bofei; van de Groep, Jorik; Brongersma, Mark L.; Chisholm, Matthew F.; Zhang, Shou-Cheng; Zhou, Wu; Cui, Yi
2018-04-01
Doped semiconductors are the most important building elements for modern electronic devices1. In silicon-based integrated circuits, facile and controllable fabrication and integration of these materials can be realized without introducing a high-resistance interface2,3. Besides, the emergence of two-dimensional (2D) materials enables the realization of atomically thin integrated circuits4-9. However, the 2D nature of these materials precludes the use of traditional ion implantation techniques for carrier doping and further hinders device development10. Here, we demonstrate a solvent-based intercalation method to achieve p-type, n-type and degenerately doped semiconductors in the same parent material at the atomically thin limit. In contrast to naturally grown n-type S-vacancy SnS2, Cu intercalated bilayer SnS2 obtained by this technique displays a hole field-effect mobility of 40 cm2 V-1 s-1, and the obtained Co-SnS2 exhibits a metal-like behaviour with sheet resistance comparable to that of few-layer graphene5. Combining this intercalation technique with lithography, an atomically seamless p-n-metal junction could be further realized with precise size and spatial control, which makes in-plane heterostructures practically applicable for integrated devices and other 2D materials. Therefore, the presented intercalation method can open a new avenue connecting the previously disparate worlds of integrated circuits and atomically thin materials.
McNulty, Jason D; Klann, Tyler; Sha, Jin; Salick, Max; Knight, Gavin T; Turng, Lih-Sheng; Ashton, Randolph S
2014-06-07
Increased realization of the spatial heterogeneity found within in vivo tissue microenvironments has prompted the desire to engineer similar complexities into in vitro culture substrates. Microcontact printing (μCP) is a versatile technique for engineering such complexities onto cell culture substrates because it permits microscale control of the relative positioning of molecules and cells over large surface areas. However, challenges associated with precisely aligning and superimposing multiple μCP steps severely limit the extent of substrate modification that can be achieved using this method. Thus, we investigated the feasibility of using a vision-guided selectively compliant articulated robotic arm (SCARA) for μCP applications. SCARAs are routinely used to perform high-precision, repetitive tasks in manufacturing, and even low-end models are capable of achieving microscale precision. Here, we present customization of a SCARA to execute robotic-μCP (R-μCP) onto gold-coated microscope coverslips. The system not only possesses the ability to align multiple polydimethylsiloxane (PDMS) stamps but also has the capability to do so even after the substrates have been removed, reacted to graft polymer brushes, and replaced back into the system. Moreover, unbiased computerized analysis shows that the system performs such sequential patterning with <10 μm precision and accuracy, which is equivalent to the repeatability specifications of the employed SCARA model. R-μCP should facilitate the engineering of in vivo-like complexities onto culture substrates and their integration with microfluidic devices.
Rauber, Markus; Alber, Ina; Müller, Sven; Neumann, Reinhard; Picht, Oliver; Roth, Christina; Schökel, Alexander; Toimil-Molares, Maria Eugenia; Ensinger, Wolfgang
2011-06-08
The fabrication of three-dimensional assemblies consisting of large quantities of nanowires is of great technological importance for various applications including (electro-)catalysis, sensitive sensing, and improvement of electronic devices. Because the spatial distribution of the nanostructured material can strongly influence the properties, architectural design is required in order to use assembled nanowires to their full potential. In addition, special effort has to be dedicated to the development of efficient methods that allow precise control over structural parameters of the nanoscale building blocks as a means of tuning their characteristics. This paper reports the direct synthesis of highly ordered large-area nanowire networks by a method based on hard templates using electrodeposition within nanochannels of ion track-etched polymer membranes. Control over the complexity of the networks and the dimensions of the integrated nanostructures are achieved by a modified template fabrication. The networks possess high surface area and excellent transport properties, turning them into a promising electrocatalyst material as demonstrated by cyclic voltammetry studies on platinum nanowire networks catalyzing methanol oxidation. Our method opens up a new general route for interconnecting nanowires to stable macroscopic network structures of very high integration level that allow easy handling of nanowires while maintaining their connectivity.
Numerically stable formulas for a particle-based explicit exponential integrator
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth
2015-05-01
Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
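The central numerical idea, switching from the analytical formula to a series expansion in a prescribed neighbourhood of a removable singularity, can be illustrated with the first divided difference of the exponential function. The threshold and truncation order below are illustrative choices, not the paper's optimal series approximation.

```python
import math

def dd_exp(a, b, tol=1e-3):
    """First divided difference exp[a, b] = (e^b - e^a) / (b - a).

    For |b - a| above the threshold the analytical formula is safe; below it
    the subtraction cancels catastrophically, so a short series about the
    midpoint is used instead:
        exp[a, b] = e^((a+b)/2) * sinh(h/2) / (h/2),   h = b - a,
        sinh(x)/x = 1 + x^2/6 + x^4/120 + ...
    """
    h = b - a
    if abs(h) > tol:
        return (math.exp(b) - math.exp(a)) / h
    x = 0.5 * h
    return math.exp(0.5 * (a + b)) * (1.0 + x * x / 6.0 + x ** 4 / 120.0)
```

The series branch is continuous with the analytical branch and remains accurate at b = a, where the direct formula would return 0/0.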
Sykes, J R; Lindsay, R; Dean, C J; Brettle, D S; Magee, D R; Thwaites, D I
2008-10-07
For image-guided radiotherapy (IGRT) systems based on cone beam CT (CBCT) integrated into a linear accelerator, the reproducible alignment of the imager to the x-ray source is critical both to the registration of the x-ray volumetric image with the megavoltage (MV) beam isocentre and to image sharpness. An enhanced method of determining the CBCT to MV isocentre alignment using the QUASAR Penta-Guide phantom was developed which improved both precision and accuracy. This was benchmarked against our existing method, which used software and a ball-bearing (BB) phantom provided by Elekta. Additionally, a method of measuring an image sharpness metric (MTF(50)) from the edge response function of a spherical air cavity within the Penta-Guide phantom was developed, and its sensitivity was tested by simulating misalignments of the kV imager. Reproducibility testing of the enhanced Penta-Guide method demonstrated a systematic error of <0.2 mm when compared to the BB method, with near-equivalent random error (s=0.15 mm). The mean MTF(50) for five measurements was 0.278+/-0.004 lp mm(-1) with no applied misalignment. Simulated misalignments exhibited a clear peak in the MTF(50), enabling misalignments greater than 0.4 mm to be detected. The Penta-Guide phantom can be used to precisely measure CBCT-MV coincidence and image sharpness on CBCT-IGRT systems.
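Extracting MTF(50) from a measured edge response can be sketched as below: differentiate the edge spread function to get the line spread function, Fourier-transform it, and interpolate the 0.5 crossing. The windowing and interpolation choices are illustrative assumptions, not the paper's exact air-cavity procedure.

```python
import numpy as np

def mtf50_from_esf(esf, dx):
    """MTF50 (lp per unit length) from a sampled edge spread function.

    esf: 1D array of edge profile samples; dx: sample spacing.
    """
    lsf = np.diff(esf)                     # line spread function
    lsf = lsf * np.hanning(lsf.size)       # taper to limit spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                     # normalize to unity at DC
    freqs = np.fft.rfftfreq(lsf.size, d=dx)
    idx = int(np.argmax(mtf < 0.5))        # first frequency bin below 0.5
    f0, f1 = freqs[idx - 1], freqs[idx]
    m0, m1 = mtf[idx - 1], mtf[idx]
    # linear interpolation of the 0.5 crossing between adjacent bins
    return f0 + (0.5 - m0) * (f1 - f0) / (m1 - m0)
```

A sharper edge (narrower LSF) should yield a higher MTF50, which is the behaviour the paper exploits to detect kV-imager misalignments.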
An automatic segmentation method of a parameter-adaptive PCNN for medical images.
Lian, Jing; Shi, Bin; Li, Mingcong; Nan, Ziwei; Ma, Yide
2017-09-01
Since the pre-processing and initial segmentation steps in medical images directly affect the final segmentation results for the regions of interest, an automatic segmentation method based on a parameter-adaptive pulse-coupled neural network is proposed to integrate these two segmentation steps into one. This method has low computational complexity for different kinds of medical images and high segmentation precision. The method comprises four steps. Firstly, an optimal histogram threshold is used to determine the parameter [Formula: see text] for different kinds of images. Secondly, we acquire the parameter [Formula: see text] according to a simplified pulse-coupled neural network (SPCNN). Thirdly, we redefine the parameter V of the SPCNN model by the sub-intensity distribution range of firing pixels. Fourthly, we add an offset [Formula: see text] to improve the initial segmentation precision. Compared with state-of-the-art algorithms, the new method achieves comparable performance in experiments on ultrasound images of the gallbladder and gallstones, magnetic resonance images of the left ventricle, and mammogram images of the left and the right breast, with overall metrics of UM = 0.9845, CM = 0.8142, and TM = 0.0726. The algorithm has great potential to achieve the pre-processing and initial segmentation steps in various medical images. This is a premise for assisting physicians to detect and diagnose clinical cases.
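One standard "optimal histogram threshold" is Otsu's between-class-variance criterion, sketched below; whether the paper uses exactly this criterion to set its first parameter is an assumption made here for illustration.

```python
def otsu_threshold(hist):
    """Otsu's threshold: the gray level t maximizing the between-class
    variance w0*w1*(m0 - m1)^2 of the two classes split at t.

    hist: list of pixel counts per gray level (e.g. 256 bins).
    """
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = 0.0          # cumulative weight of the "background" class
    sum0 = 0.0        # cumulative intensity sum of the background class
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w0 += h
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * h
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

For a cleanly bimodal histogram the returned threshold separates the two modes, which is the property the adaptive-parameter step relies on.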
Water vapour retrieval using the Precision Solar Spectroradiometer
NASA Astrophysics Data System (ADS)
Raptis, Panagiotis-Ioannis; Kazadzis, Stelios; Gröbner, Julian; Kouremeti, Natalia; Doppler, Lionel; Becker, Ralf; Helmis, Constantinos
2018-02-01
The Precision Solar Spectroradiometer (PSR) is a new spectroradiometer developed at Physikalisch-Meteorologisches Observatorium Davos - World Radiation Center (PMOD-WRC), Davos, measuring direct solar irradiance at the surface in the 300-1020 nm spectral range and at high temporal resolution. The purpose of this work is to investigate the instrument's potential to retrieve integrated water vapour (IWV) from its spectral measurements. Two different approaches were developed to retrieve IWV: the first uses single-channel, single-wavelength measurements at a wavelength of theoretically high water vapour absorption, and the second uses direct sun irradiance integrated over a certain spectral region. IWV results have been validated using a 2-year data set consisting of AERONET sun-photometer Cimel CE318, Global Positioning System (GPS), microwave radiometer profiler (MWP) and radiosonde retrievals recorded at the Meteorological Observatorium Lindenberg, Germany. For the monochromatic approach, better agreement with retrievals from other methods and instruments was achieved using the 946 nm channel, while for the spectral approach the 934-948 nm window was used. Compared to other instruments' retrievals, the monochromatic approach leads to mean relative differences up to 3.3 % with the coefficient of determination (R2) in the region of 0.87-0.95, while for the spectral approach mean relative differences up to 0.7 % were recorded with R2 in the region of 0.96-0.98. Uncertainties related to the IWV retrieval methods were investigated and found to be less than 0.28 cm for both methods. Absolute deviations between PSR and other instruments' IWV retrievals were in the range of 0.08-0.30 cm, and only in extreme cases did they reach up to 15 %.
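The agreement statistics quoted above (mean relative difference and R2) can be computed as below. Note the R2 here is taken against the 1:1 line rather than a fitted regression, which is an assumption, since the abstract does not state the exact definition.

```python
def compare_iwv(psr, ref):
    """Mean relative difference (%) and coefficient of determination R^2
    between an instrument's IWV series and a reference series.

    Illustrative validation metrics; the paper's exact definitions may
    differ (e.g. in sign convention and normalization).
    """
    n = len(psr)
    mrd = 100.0 * sum((p - r) / r for p, r in zip(psr, ref)) / n
    mean_r = sum(ref) / n
    ss_res = sum((p - r) ** 2 for p, r in zip(psr, ref))
    ss_tot = sum((r - mean_r) ** 2 for r in ref)
    r2 = 1.0 - ss_res / ss_tot
    return mrd, r2
```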
NASA Astrophysics Data System (ADS)
Ren, Changzhi; Li, Xiaoyan; Song, Xiaoli; Niu, Yong; Li, Aihua; Zhang, Zhenchao
2012-09-01
Direct drive technology is key to solving the motion-system requirements of future 30-m and larger telescopes, guaranteeing very high tracking accuracy in spite of unbalanced and sudden loads such as wind gusts, and in spite of a structure that, because of its size, cannot be infinitely stiff. However, this requires the design and realization of an unusually large torque motor whose torque slew rate must also be extremely steep; a conventional torque motor design appears inadequate. This paper explores a redundant-unit permanent magnet synchronous motor and its simulation bed for a 30-m-class telescope. Because the drive system is a highly integrated electromechanical system, an integrated electromechanical design method is adopted to improve the efficiency, reliability and quality of the system over the design and manufacturing cycle. This paper discusses the design and control of the precise tracking simulation bed in detail.
Performance evaluation and clinical applications of 3D plenoptic cameras
NASA Astrophysics Data System (ADS)
Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel
2015-06-01
The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, assesses plenoptic imaging in a clinically relevant context, and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, precision and accuracy results in an ideal and simulated surgical setting. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.
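One common convention for the reported precision and accuracy figures is sketched below: precision as the mean scatter of repeated 3D measurements about their centroid, accuracy as the mean distance to a known ground-truth point. The abstract does not define its metrics, so these definitions are assumptions.

```python
import math

def precision_accuracy(points, truth):
    """Precision and accuracy of repeated 3D point measurements.

    points: list of (x, y, z) tuples of repeated measurements of one target;
    truth:  known (x, y, z) ground-truth position of that target.
    """
    n = len(points)
    centroid = tuple(sum(p[k] for p in points) / n for k in range(3))

    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    precision = sum(dist(p, centroid) for p in points) / n  # scatter
    accuracy = sum(dist(p, truth) for p in points) / n      # bias + scatter
    return precision, accuracy
```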
NASA Astrophysics Data System (ADS)
Lesiuk, Michał; Moszynski, Robert
2014-12-01
In this paper we consider the calculation of two-center exchange integrals over Slater-type orbitals (STOs). We apply the Neumann expansion of the Coulomb interaction potential and consider calculation of all basic quantities which appear in the resulting expression. Analytical closed-form equations for all auxiliary quantities have already been known but they suffer from large digital erosion when some of the parameters are large or small. We derive two differential equations which are obeyed by the most difficult basic integrals. Taking them as a starting point, useful series expansions for small parameter values or asymptotic expansions for large parameter values are systematically derived. The resulting expansions replace the corresponding analytical expressions when the latter introduce significant cancellations. Additionally, we reconsider numerical integration of some necessary quantities and present a new way to calculate the integrand with a controlled precision. All proposed methods are combined to lead to a general, stable algorithm. We perform extensive numerical tests of the introduced expressions to verify their validity and usefulness. Advances reported here provide methodology to compute two-electron exchange integrals over STOs for a broad range of the nonlinear parameters and large angular momenta.
Deep Coupled Integration of CSAC and GNSS for Robust PNT.
Ma, Lin; You, Zheng; Li, Bin; Zhou, Bin; Han, Runqi
2015-09-11
Global navigation satellite systems (GNSS) are the most widely used positioning, navigation, and timing (PNT) technology. However, a GNSS cannot provide effective PNT services in physical blocks, such as in a natural canyon, canyon city, underground, underwater, and indoors. With the development of micro-electromechanical system (MEMS) technology, the chip scale atomic clock (CSAC) gradually matures, and performance is constantly improved. A deep coupled integration of CSAC and GNSS is explored in this thesis to enhance PNT robustness. "Clock coasting" of CSAC provides time synchronized with GNSS and optimizes navigation equations. However, errors of clock coasting increase over time and can be corrected by GNSS time, which is stable but noisy. In this paper, weighted linear optimal estimation algorithm is used for CSAC-aided GNSS, while Kalman filter is used for GNSS-corrected CSAC. Simulations of the model are conducted, and field tests are carried out. Dilution of precision can be improved by integration. Integration is more accurate than traditional GNSS. When only three satellites are visible, the integration still works, whereas the traditional method fails. The deep coupled integration of CSAC and GNSS can improve the accuracy, reliability, and availability of PNT.
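The GNSS-corrected-CSAC loop can be sketched as a scalar Kalman filter: "clock coasting" drift grows the state uncertainty in the predict step, and the stable-but-noisy GNSS time corrects it in the update step. The process and measurement noise values below are illustrative assumptions, not the paper's tuning.

```python
def fuse_clock(gnss_offsets, q=1e-4, r=1.0):
    """Scalar Kalman filter correcting CSAC clock drift with GNSS time.

    gnss_offsets: measured CSAC-minus-GNSS time offsets at each epoch;
    q: process noise of the coasting drift; r: GNSS measurement noise.
    Returns the filtered offset estimate at each epoch.
    """
    x, p = 0.0, 1.0            # offset estimate and its variance
    estimates = []
    for z in gnss_offsets:
        p += q                 # predict: coasting drift grows uncertainty
        k = p / (p + r)        # Kalman gain
        x += k * (z - x)       # update with the noisy GNSS measurement
        p *= (1.0 - k)
        estimates.append(x)
    return estimates
```

With small q the filter behaves like a slowly forgetting average, smoothing GNSS noise while still tracking slow CSAC drift.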
Habib, Neven M; Abdelrahman, Maha M; Abdelwhab, Nada S; Ali, Nourudin W
2017-03-01
Accurate and precise TLC-densitometric and HPLC-diode-array detector (DAD) methods have been developed and validated to resolve two binary mixtures containing pyridoxine hydrochloride (PYH) with either cyclizine hydrochloride (CYH) or meclizine hydrochloride (MEH). In the developed TLC-densitometric method, chromatographic separation of the three studied drugs was carried out on silica gel 60 F254 plates using a developing system of methylene chloride + acetone + methanol (7 + 1 + 0.5, v/v/v), with scanning of the separated bands at 220 nm. The Beer-Lambert law was obeyed in the ranges of 0.2-5, 0.2-4, and 0.2-4 µg/band for PYH, CYH, and MEH, respectively. On the other hand, the developed HPLC-DAD method depended on chromatographic separation on a Zorbax Eclipse C18 column using methanol-KH2PO4 (0.05 M; 90 + 10, v/v; pH 5, adjusted with H3PO4 and KOH) as the mobile phase, a flow rate of 1 mL/min, and UV detection at 220 nm. A linear relationship was obtained between the integrated peak area and the concentration in the ranges of 10-50, 10-50, and 7-50 µg/mL for PYH, CYH, and MEH, respectively. The proposed methods were successfully applied for the determination of the cited drugs in their pharmaceutical formulations. Statistical comparison with the reported methods using Student's t- and F-tests showed no significant differences between the proposed and reported methods in accuracy and precision.
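The linear area-versus-concentration calibration and the back-calculation of an unknown work as sketched below, with made-up numbers rather than the paper's calibration data.

```python
def calibration(concs, areas):
    """Least-squares line area = slope*conc + intercept, returning a
    function that maps a measured integrated peak area back to a
    concentration (Beer-Lambert-type linearity assumed over the range)."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(areas) / n
    sxx = sum((x - mx) ** 2 for x in concs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concs, areas))
    slope = sxy / sxx
    intercept = my - slope * mx
    return lambda area: (area - intercept) / slope
```

For example, fitting standards at 10-50 µg/mL and inverting the line recovers an unknown sample's concentration from its peak area.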
Hu, Qinglei
2007-10-01
This paper presents a dual-stage control system design method for the flexible spacecraft attitude maneuvering control by use of on-off thrusters and active vibration control by input shaper. In this design approach, attitude control system and vibration suppression were designed separately using lower order model. As a stepping stone, an integral variable structure controller with the assumption of knowing the upper bounds of the mismatched lumped perturbation has been designed which ensures exponential convergence of attitude angle and angular velocity in the presence of bounded uncertainty/disturbances. To reconstruct estimates of the system states for use in a full information variable structure control law, an asymptotic variable structure observer is also employed. In addition, the thruster output is modulated in pulse-width pulse-frequency so that the output profile is similar to the continuous control histories. For actively suppressing the induced vibration, the input shaping technique is used to modify the existing command so that less vibration will be caused by the command itself, which only requires information about the vibration frequency and damping of the closed-loop system. The rationale behind this hybrid control scheme is that the integral variable structure controller can achieve good precision pointing, even in the presence of uncertainties/disturbances, whereas the shaped input attenuator is applied to actively suppress the undesirable vibrations excited by the rapid maneuvers. Simulation results for the spacecraft model show precise attitude control and vibration suppression.
Progress in Integrative Biomaterial Systems to Approach Three-Dimensional Cell Mechanotransduction
Zhang, Ying; Liao, Kin; Li, Chuan; Lai, Alvin C.K.; Foo, Ji-Jinn
2017-01-01
Mechanotransduction between cells and the extracellular matrix regulates major cellular functions in physiological and pathological situations. The effect of mechanical cues on biochemical signaling triggered by cell–matrix and cell–cell interactions on model biomimetic surfaces has been extensively investigated by a combination of fabrication, biophysical, and biological methods. To simulate the in vivo physiological microenvironment in vitro, three dimensional (3D) microstructures with tailored bio-functionality have been fabricated on substrates of various materials. However, less attention has been paid to the design of 3D biomaterial systems with geometric variances, such as the possession of precise micro-features and/or bio-sensing elements for probing the mechanical responses of cells to the external microenvironment. Such precisely engineered 3D model experimental platforms pave the way for studying the mechanotransduction of multicellular aggregates under controlled geometric and mechanical parameters. Concurrently with the progress in 3D biomaterial fabrication, cell traction force microscopy (CTFM) developed in the field of cell biophysics has emerged as a highly sensitive technique for probing the mechanical stresses exerted by cells onto the opposing deformable surface. In the current work, we first review the recent advances in the fabrication of 3D micropatterned biomaterials which enable the seamless integration with experimental cell mechanics in a controlled 3D microenvironment. Then, we discuss the role of collective cell–cell interactions in the mechanotransduction of engineered tissue equivalents determined by such integrative biomaterial systems under simulated physiological conditions. PMID:28952551
Design and Verification of a Digital Controller for a 2-Piece Hemispherical Resonator Gyroscope
Lee, Jungshin; Yun, Sung Wook; Rhim, Jaewook
2016-01-01
A Hemispherical Resonator Gyro (HRG) is a Coriolis Vibratory Gyro (CVG) that measures rotation angle or angular velocity using the Coriolis force acting on the vibrating mass. An HRG can be used as a rate gyro or an integrating gyro without structural modification by simply changing the control scheme. In this paper, differential control algorithms are designed for a 2-piece HRG. To design a precision controller, the electromechanical modelling and signal processing must first be performed accurately. Therefore, the equations of motion for the HRG resonator with switched harmonic excitations are derived with the Duhamel integral method. Electromechanical modeling of the resonator, electric module and charge amplifier is performed by considering the mode shape of a thin hemispherical shell. Further, signal processing and control algorithms are designed. The multi-flexing scheme of sensing and driving cycles and x, y-axis switching cycles is appropriate for high-precision, low-maneuverability systems. On the basis of these studies, the differential control scheme readily rejects the common-mode errors of the x, y-axis signals and switches to the rate-integrating mode. In the rate gyro mode the controller is composed of Phase-Locked Loop (PLL), amplitude, quadrature and rate control loops. All controllers are designed on the basis of a digital PI controller. The signal processing and control algorithms are verified through Matlab/Simulink simulations. Finally, an FPGA and DSP board with these algorithms is verified through experiments. PMID:27104539
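Since every loop is built on a digital PI controller, a minimal sketch of that common building block follows. The gains, sample time, and rectangular integration rule are illustrative assumptions, not the paper's loop shaping.

```python
def make_pi(kp, ki, dt):
    """Digital PI controller: u = kp*e + ki * integral(e dt).

    Uses rectangular (forward Euler) integration with sample time dt;
    returns a stateful step function mapping error -> control output.
    """
    acc = [0.0]                      # integrator state held in a closure

    def step(error):
        acc[0] += error * dt         # accumulate the error integral
        return kp * error + ki * acc[0]

    return step
```

In the HRG loops, separate instances of such a controller would regulate amplitude, quadrature, rate, and the PLL phase error.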
Yao, Shun-chun; Chen, Jian-chao; Lu, Ji-dong; Shen, Yue-liang; Pan, Gang
2015-06-01
In coal-fired plants, unburned carbon (UC) in fly ash is the major determinant of combustion efficiency in the boiler. The balance between unburned carbon and NO(x) emissions stresses the need for rapid and accurate methods for the measurement of unburned carbon. Laser-induced breakdown spectroscopy (LIBS) is employed to measure the unburned carbon content in fly ash. In this case, it is found that the C line interferes with an Fe line at about 248 nm, so that C cannot be quantified independently of Fe. A correction approach for extracting the C integrated intensity from the overlapping peak is proposed. The Fe 248.33 nm, Fe 254.60 nm and Fe 272.36 nm lines are each used to correct the Fe 247.98 nm line, which interferes with the C 247.86 nm line. Then, the corrected C integrated intensity is compared with the uncorrected C integrated intensity for constructing calibration curves of unburned carbon, and also for the precision and accuracy of repeat measurements. The analysis results show that the regression coefficients of the calibration curves and the precision and accuracy of repeat measurements are improved by correcting the C-Fe interference, especially for fly ash samples with a low unburned carbon content. However, the choice of Fe line needs to avoid over-correction of the C line. Obviously, Fe 254.60 nm is the best
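The correction amounts to estimating the Fe 247.98 nm contribution from an interference-free Fe reference line and subtracting it from the overlapping ~248 nm peak. In the sketch below, the ratio k_fe, the carbon-free calibration samples, and the through-origin fit are assumptions used for illustration, not the paper's exact correction model.

```python
def fit_k_fe(i248_cfree, iref_cfree):
    """Calibrate k_fe = I_Fe(247.98 nm) / I_Fe(reference line) by a
    least-squares fit through the origin on carbon-free standards."""
    num = sum(a * b for a, b in zip(i248_cfree, iref_cfree))
    den = sum(b * b for b in iref_cfree)
    return num / den

def corrected_c_intensity(i_overlap_248, i_fe_ref, k_fe):
    """I_C(247.86 nm) ~ I_total(~248 nm) - k_fe * I_Fe(reference line),
    e.g. with Fe 254.60 nm as the reference."""
    return i_overlap_248 - k_fe * i_fe_ref
```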
Shin, Jeong Hong; Jung, Soobin; Ramakrishna, Suresh; Kim, Hyongbum Henry; Lee, Junwon
2018-07-07
Genome editing technology using programmable nucleases has rapidly evolved in recent years. The primary mechanism to achieve precise integration of a transgene is mainly based on homology-directed repair (HDR). However, an HDR-based genome-editing approach is less efficient than non-homologous end-joining (NHEJ). Recently, a microhomology-mediated end-joining (MMEJ)-based transgene integration approach was developed, showing feasibility both in vitro and in vivo. We expanded this method to achieve targeted sequence substitution (TSS) of mutated sequences with normal sequences using double-guide RNAs (gRNAs), and a donor template flanking the microhomologies and target sequence of the gRNAs in vitro and in vivo. Our method could realize more efficient sequence substitution than the HDR-based method in vitro using a reporter cell line, and led to the survival of a hereditary tyrosinemia mouse model in vivo. The proposed MMEJ-based TSS approach could provide a novel therapeutic strategy, in addition to HDR, to achieve gene correction from a mutated sequence to a normal sequence. Copyright © 2018 Elsevier Inc. All rights reserved.
A fast and accurate algorithm for QTAIM integration in solids.
Otero-de-la-Roza, A; Luaña, Víctor
2011-01-30
A new algorithm is presented for the calculation of atomic properties, in the sense of the quantum theory of atoms in molecules. This new method, named QTREE, applies to solid-state densities and allows the computation of the atomic properties of all the atoms in the crystal in seconds to minutes. The basis of the method is the recursive subdivision of a symmetry-reduced wedge of the Wigner-Seitz cell, which in turn is expressed as a union of tetrahedra, plus the use of β-spheres to improve the performance. A considerable speedup is thus achieved compared with traditional quadrature-based schemes, justified by the poor performance of the latter because of the particular features of atomic basins in solids. QTREE can use either analytical or interpolated densities, calculates all the atomic properties available, and converges to the correct values in the limit of infinite precision. Several gradient path tracing and integration techniques are tested. Basin volumes and charges for a selected set of 11 crystals are determined as a test of the new method. Copyright © 2010 Wiley Periodicals, Inc.
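The recursive-subdivision idea can be illustrated with a toy adaptive quadrature over a tetrahedron (a centroid rule refined where coarse and fine estimates disagree); QTREE's actual scheme, with β-spheres and gradient-path tracing on a symmetry-reduced wedge, is far more elaborate:

```python
# Toy recursive tetrahedral quadrature in the spirit of the method above
# (illustrative only). A tetrahedron is split at its centroid into four
# sub-tetrahedra whenever the one-point centroid estimates disagree.
import numpy as np

def tet_volume(v):
    a, b, c, d = v
    return abs(np.linalg.det(np.array([b - a, c - a, d - a]))) / 6.0

def integrate_tet(f, verts, tol=1e-6, depth=0, max_depth=12):
    verts = [np.asarray(p, float) for p in verts]
    vol = tet_volume(verts)
    centroid = sum(verts) / 4.0
    coarse = f(centroid) * vol
    if depth >= max_depth:
        return coarse
    # split at the centroid: one sub-tetrahedron per face
    fine = 0.0
    for i in range(4):
        sub = verts[:i] + verts[i + 1:] + [centroid]
        fine += f(sum(sub) / 4.0) * tet_volume(sub)
    if abs(fine - coarse) < tol * max(abs(coarse), 1.0):
        return fine
    return sum(integrate_tet(f, verts[:i] + verts[i + 1:] + [centroid],
                             tol / 4, depth + 1, max_depth) for i in range(4))

unit = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(integrate_tet(lambda p: 1.0, unit))  # -> ~1/6, the unit tetra volume
```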
Klann, Jeffrey G; Phillips, Lori C; Turchin, Alexander; Weiler, Sarah; Mandl, Kenneth D; Murphy, Shawn N
2015-12-11
Interoperable phenotyping algorithms, needed to identify patient cohorts meeting eligibility criteria for observational studies or clinical trials, require medical data in a consistent, structured, coded format. Data heterogeneity limits such algorithms' applicability. Existing approaches are often not widely interoperable, or have low sensitivity due to reliance on the lowest common denominator (ICD-9 diagnoses). In the Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS) we endeavor to use the widely available Current Procedural Terminology (CPT) procedure codes alongside ICD-9. Unfortunately, CPT changes drastically year-to-year: codes are retired and replaced, so longitudinal analysis requires grouping retired and current codes. BioPortal provides a navigable CPT hierarchy, which we imported into the Informatics for Integrating Biology and the Bedside (i2b2) data warehouse and analytics platform. However, this hierarchy does not include retired codes. We compared BioPortal's 2014AA CPT hierarchy with Partners Healthcare's SCILHS datamart, comprising three million patients' data over 15 years. 573 CPT codes were not present in 2014AA (6.5 million occurrences). No existing terminology provided hierarchical linkages for these missing codes, so we developed a method that automatically places each missing code in the most specific "grouper" category, using the numerical similarity of CPT codes. Two informaticians reviewed the results. We incorporated the final table into our i2b2 SCILHS/PCORnet ontology, deployed it at seven sites, and performed a gap analysis and an evaluation against several phenotyping algorithms. The reviewers found the method placed the code correctly with 97% precision when considering only miscategorizations ("correctness precision") and 52% precision against a gold standard of optimal placement ("optimality precision").
High correctness precision meant that codes were placed in a reasonable hierarchical position that a reviewer can quickly validate. Lower optimality precision meant that codes were often not placed in the optimal hierarchical subfolder. The seven sites encountered few occurrences of codes outside our ontology, 93% of which comprised just four codes. Our hierarchical approach correctly grouped retired and non-retired codes in most cases and extended the temporal reach of several important phenotyping algorithms. We developed a simple, easily validated, automated method to place retired CPT codes into the BioPortal CPT hierarchy. This complements existing hierarchical terminologies, which do not include retired codes. The approach's utility is confirmed by the high correctness precision and by the successful grouping of retired with non-retired codes.
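A minimal sketch of the numerical-similarity placement idea follows; the hierarchy, codes, and nearest-code rule here are illustrative assumptions (real CPT hierarchies are far larger, and some CPT codes are alphanumeric):

```python
# Hedged sketch of placing a retired CPT code by numerical similarity
# (assumed rule): attach it to the grouper of the active code whose numeric
# value is closest.

def place_retired_code(retired, hierarchy):
    """retired: retired CPT code as a 5-digit numeric string.
    hierarchy: dict mapping active CPT code -> grouper/folder path."""
    nearest = min(hierarchy, key=lambda c: abs(int(c) - int(retired)))
    return hierarchy[nearest]

# Toy hierarchy for illustration:
hierarchy = {
    "27130": "Surgery/Musculoskeletal/Hip",
    "33533": "Surgery/Cardiovascular/CABG",
    "99213": "E&M/Office Visit",
}
print(place_retired_code("27299", hierarchy))  # -> Surgery/Musculoskeletal/Hip
```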
[High Precision Identification of Igneous Rock Lithology by Laser Induced Breakdown Spectroscopy].
Wang, Chao; Zhang, Wei-gang; Yan, Zhi-quan
2015-09-01
In the field of petroleum exploration, lithology identification of fine cuttings samples, especially high-precision identification of igneous rocks with similar properties, has become a difficult geological problem. To solve it, a new method is proposed based on elemental analysis by Laser-Induced Breakdown Spectroscopy (LIBS) and the Total Alkali versus Silica (TAS) diagram. Using an independent LIBS system, factors influencing the spectral signal, such as pulse energy, acquisition time delay, spectrum acquisition method, and pre-ablation, are systematically investigated through comparative experiments. The best analysis conditions for igneous rock are determined: pulse energy of 50 mJ, acquisition time delay of 2 μs, and averaging of spectra integrated at 20 different points on the sample's surface; pre-ablation proved unsuitable for igneous rock samples. The repeatability of the spectral data is thereby effectively improved. Characteristic lines of 7 elements (Na, Mg, Al, Si, K, Ca, Fe) commonly used for lithology identification of igneous rock are determined, and igneous rock samples of different lithologies are analyzed and compared. Calibration curves of Na, K, and Si are generated using a national standard series of rock samples, and all linear correlation coefficients are greater than 0.9. The accuracy of the quantitative analysis is verified against national standard samples. The elemental content of igneous rock is analyzed quantitatively by calibration curve, and its lithology is identified by the TAS diagram method with an accuracy rate of 90.7%. The study indicates that LIBS can effectively achieve high-precision identification of igneous rock lithology.
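For illustration, a much-simplified TAS-style classifier is sketched below; real TAS classification uses the polygonal fields of the standard diagram, and the silica cut-offs here are only the conventional broad divisions:

```python
# Hedged sketch of TAS (Total Alkali vs Silica) classification. The real
# diagram partitions the SiO2 vs (Na2O + K2O) plane into named polygonal
# fields; this toy version returns only the broad silica class.

def tas_broad_class(sio2_wt, na2o_wt, k2o_wt):
    total_alkali = na2o_wt + k2o_wt  # plotted on the TAS y-axis, wt%
    if sio2_wt < 45:
        base = "ultrabasic"
    elif sio2_wt < 52:
        base = "basic"
    elif sio2_wt < 63:
        base = "intermediate"
    else:
        base = "acid"
    return base, total_alkali

print(tas_broad_class(49.0, 2.5, 0.8)[0])  # -> basic
```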
A flux calibration device for the SuperNova Integral Field Spectrograph (SNIFS)
NASA Astrophysics Data System (ADS)
Lombardo, Simona; Aldering, Greg; Hoffmann, Akos; Kowalski, Marek; Kuesters, Daniel; Reif, Klaus; Rigault, Michael
2014-07-01
Observational cosmology employing optical surveys often requires precise flux calibration. In this context we present the SNIFS Calibration Apparatus (SCALA), a flux calibration system developed for the SuperNova Integral Field Spectrograph (SNIFS), operating at the University of Hawaii 2.2 m telescope. SCALA consists of a hexagonal array of 18 small parabolic mirrors distributed over the face of, and feeding parallel light to, the telescope entrance pupil. The mirrors are illuminated by integrating spheres and a wavelength-tunable (UV to IR) light source, generating light beams with opening angles of 1°. These nearly parallel beams are flat and flux-calibrated at the subpercent level, enabling us to calibrate our "telescope + SNIFS system" at the required precision.
Integrating DNA strand-displacement circuitry with DNA tile self-assembly
Zhang, David Yu; Hariadi, Rizal F.; Choi, Harry M.T.; Winfree, Erik
2013-01-01
DNA nanotechnology has emerged as a reliable and programmable way of controlling matter at the nanoscale through the specificity of Watson–Crick base pairing, allowing both complex self-assembled structures with nanometer precision and complex reaction networks implementing digital and analog behaviors. Here we show how two well-developed frameworks, DNA tile self-assembly and DNA strand-displacement circuits, can be systematically integrated to provide programmable kinetic control of self-assembly. We demonstrate the triggered and catalytic isothermal self-assembly of DNA nanotubes over 10 μm long from precursor DNA double-crossover tiles activated by an upstream DNA catalyst network. Integrating more sophisticated control circuits and tile systems could enable precise spatial and temporal organization of dynamic molecular structures. PMID:23756381
MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation
Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.; ...
2016-01-01
We present MADNESS (multiresolution adaptive numerical environment for scientific simulation), a high-level software environment for solving integral and differential equations in many dimensions. It uses adaptive, fast harmonic-analysis methods with guaranteed precision, based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.
Oort's cloud evolution under the influence of the galactic field.
NASA Astrophysics Data System (ADS)
Kiryushenkova, N. V.; Chepurova, V. M.; Shershkina, S. L.
By numerical integration (Everhart's method) of the differential equations of cometary motion in Oort's cloud, an attempt was made to observe how the galactic gravitational field changes the orbital elements of these comets during three solar revolutions around the Galaxy. It is shown that the cometary orbits become more elongated (even initially circular orbits become strongly elliptical), and in the outer layers of Oort's cloud comets can pass onto hyperbolic orbits and leave the solar system. The boundaries of the solar system have been determined more precisely.
Interfacing and Verifying ALHAT Safe Precision Landing Systems with the Morpheus Vehicle
NASA Technical Reports Server (NTRS)
Carson, John M., III; Hirsh, Robert L.; Roback, Vincent E.; Villalpando, Carlos; Busa, Joseph L.; Pierrottet, Diego F.; Trawny, Nikolas; Martin, Keith E.; Hines, Glenn D.
2015-01-01
The NASA Autonomous precision Landing and Hazard Avoidance Technology (ALHAT) project developed a suite of prototype sensors to enable autonomous and safe precision landing of robotic or crewed vehicles under any terrain lighting conditions. Development of the ALHAT sensor suite was a cross-NASA effort, culminating in integration and testing on-board a variety of terrestrial vehicles toward infusion into future spaceflight applications. Terrestrial tests were conducted on specialized test gantries, moving trucks, helicopter flights, and a flight test onboard the NASA Morpheus free-flying, rocket-propulsive flight-test vehicle. To accomplish these tests, a tedious integration process was developed and followed, which included both command and telemetry interfacing, as well as sensor alignment and calibration verification to ensure valid test data to analyze ALHAT and Guidance, Navigation and Control (GNC) performance. This was especially true for the flight test campaign of ALHAT onboard Morpheus. For interfacing of ALHAT sensors to the Morpheus flight system, an adaptable command and telemetry architecture was developed to allow for the evolution of per-sensor Interface Control Design/Documents (ICDs). Additionally, individual-sensor and on-vehicle verification testing was developed to ensure functional operation of the ALHAT sensors onboard the vehicle, as well as precision-measurement validity for each ALHAT sensor when integrated within the Morpheus GNC system. This paper provides some insight into the interface development and the integrated-systems verification that were a part of the build-up toward success of the ALHAT and Morpheus flight test campaigns in 2014. These campaigns provided valuable performance data that is refining the path toward spaceflight infusion of the ALHAT sensor suite.
Five critical elements to ensure the precision medicine.
Chen, Chengshui; He, Mingyan; Zhu, Yichun; Shi, Lin; Wang, Xiangdong
2015-06-01
Precision medicine, as an emerging area and therapeutic strategy, has been practiced on individuals, has brought unexpected successes, and has gained high attention from both professional and public spheres as a new path to improving the treatment and prognosis of patients. A number of new components will appear or be discovered, among which clinical bioinformatics integrates clinical phenotypes and informatics with bioinformatics, computational science, mathematics, and systems biology. In addition to such tools, precision medicine calls for more accurate and repeatable methodologies for the identification and validation of gene discoveries. Precision medicine will bring new therapeutic strategies, drug discovery and development, and gene-oriented treatment. There is an urgent need to identify and validate disease-specific, mechanism-based, or epigenetics-dependent biomarkers to monitor precision medicine, and to develop "precision" regulations to guide its application.
NASA Technical Reports Server (NTRS)
Cake, J. E.; Regetz, J. D., Jr.
1975-01-01
A method is presented for open loop guidance of a solar electric propulsion spacecraft to geosynchronous orbit. The method consists of determining the thrust vector profiles on the ground with an optimization computer program, and performing updates based on the difference between the actual trajectory and that predicted with a precision simulation computer program. The motivation for performing the guidance analysis during the mission planning phase is discussed, and a spacecraft design option that employs attitude orientation constraints is presented. The improvements required in both the optimization program and simulation program are set forth, together with the efforts to integrate the programs into the ground support software for the guidance system.
NASA Technical Reports Server (NTRS)
Hayati, Samad; Tso, Kam; Roston, Gerald
1988-01-01
Autonomous robot task execution requires that the end effector of the robot be positioned accurately relative to a reference world-coordinate frame. The authors present a complete formulation to identify the actual robot geometric parameters. The method applies to any serial-link manipulator with an arbitrary order and combination of revolute and prismatic joints. A method is also presented to solve the inverse kinematics of the actual robot model, which usually is not a so-called simple robot. Experimental results obtained with a PUMA 560 and simple measurement hardware are presented. As a result of this calibration, a precision move command was designed, integrated into the robot language RCCL, and used in the NASA Telerobot Testbed.
Three-dimensional microstructure simulation of Ni-based superalloy investment castings
NASA Astrophysics Data System (ADS)
Pan, Dong; Xu, Qingyan; Liu, Baicheng
2011-05-01
An integrated macro-micro multi-scale model for the three-dimensional microstructure simulation of Ni-based superalloy investment castings was developed and applied to industrial castings to investigate grain evolution during solidification. A ray tracing method was used to handle the complex radiative heat transfer. The microstructure evolution was simulated with the Modified Cellular Automaton method, coupled with three-dimensional nested macro and micro grids. Experiments on a Ni-based superalloy turbine wheel investment casting were carried out and showed good agreement with the simulated results, indicating that the proposed model can predict the microstructure of the casting precisely and provides a tool for process optimization.
Development and evaluation of a hybrid averaged orbit generator
NASA Technical Reports Server (NTRS)
Mcclain, W. D.; Long, A. C.; Early, L. W.
1978-01-01
A rapid orbit generator based on a first-order application of the Generalized Method of Averaging has been developed for the Research and Development (R&D) version of the Goddard Trajectory Determination System (GTDS). The evaluation of the averaged equations of motion can use both numerically averaged and recursively evaluated, analytically averaged perturbation models. These equations are numerically integrated to obtain the secular and long-period motion. Factors affecting efficient orbit prediction are discussed and guidelines are presented for treatment of each major perturbation. Guidelines for obtaining initial mean elements compatible with the theory are presented. An overview of the orbit generator is presented and comparisons with high precision methods are given.
Miniature vibration isolation system for space applications
NASA Astrophysics Data System (ADS)
Quenon, Dan; Boyd, Jim; Buchele, Paul; Self, Rick; Davis, Torey; Hintz, Timothy L.; Jacobs, Jack H.
2001-06-01
In recent years, there has been a significant interest in, and move towards using highly sensitive, precision payloads on space vehicles. In order to perform tasks such as communicating at extremely high data rates between satellites using laser cross-links, or searching for new planets in distant solar systems using sparse aperture optical elements, a satellite bus and its payload must remain relatively motionless. The ability to hold a precision payload steady is complicated by disturbances from reaction wheels, control moment gyroscopes, solar array drives, stepper motors, and other devices. Because every satellite is essentially unique in its construction, isolating or damping unwanted vibrations usually requires a robust system over a wide bandwidth. The disadvantage of these systems is that they typically are not retrofittable and not tunable to changes in payload size or inertias. Previous work, funded by AFRL, DARPA, BMDO and others, developed technology building blocks that provide new methods to control vibrations of spacecraft. The technology of smart materials enables an unprecedented level of integration of sensors, actuators, and structures; this integration provides the opportunity for new structural designs that can adaptively influence their surrounding environment. To date, several demonstrations have been conducted to mature these technologies. Making use of recent advances in smart materials, microelectronics, Micro-Electro Mechanical Systems (MEMS) sensors, and Multi-Functional Structures (MFS), the Air Force Research Laboratory along with its partner DARPA, have initiated an aggressive program to develop a Miniature Vibration Isolation System (MVIS) (patent pending) for space applications. The MVIS program is a systems-level demonstration of the application of advanced smart materials and structures technology that will enable programmable and retrofittable vibration control of spacecraft precision payloads. 
The current effort has been awarded to Honeywell Space Systems Operation. AFRL is providing in-house research and testing in support of the program as well. The MVIS program will culminate in a flight demonstration that shows the benefits of applying smart materials for vibration isolation in space and precision payload control.
Development of a laser-guided embedded-computer-controlled air-assisted precision sprayer
USDA-ARS?s Scientific Manuscript database
An embedded computer-controlled, laser-guided, air-assisted, variable-rate precision sprayer was developed to automatically adjust spray outputs on both sides of the sprayer to match presence, size, shape, and foliage density of tree crops. The sprayer was the integration of an embedded computer, a ...
Chuong, Kim H.; Mack, David R.; Stintzi, Alain
2018-01-01
Abstract Healthcare institutions face widespread challenges of delivering high-quality and cost-effective care, while keeping up with rapid advances in biomedical knowledge and technologies. Moreover, there is increased emphasis on developing personalized or precision medicine targeted to individuals or groups of patients who share a certain biomarker signature. Learning healthcare systems (LHS) have been proposed for integration of research and clinical practice to fill major knowledge gaps, improve care, reduce healthcare costs, and provide precision care. To date, much discussion in this context has focused on the potential of human genomic data, and not yet on human microbiome data. Rapid advances in human microbiome research suggest that profiling of, and interventions on, the human microbiome can provide substantial opportunity for improved diagnosis, therapeutics, risk management, and risk stratification. In this study, we discuss a potential role for microbiome science in LHSs. We first review the key elements of LHSs, and discuss possibilities of Big Data and patient engagement. We then consider potentials and challenges of integrating human microbiome research into clinical practice as part of an LHS. With rapid growth in human microbiome research, patient-specific microbial data will begin to contribute in important ways to precision medicine. Hence, we discuss how patient-specific microbial data can help guide therapeutic decisions and identify novel effective approaches for precision care of inflammatory bowel disease. To the best of our knowledge, this expert analysis makes an original contribution with new insights poised at the emerging intersection of LHSs, microbiome science, and postgenomics medicine. PMID:28282257
Winzer, Eva; Luger, Maria; Schindler, Karin
2018-06-01
Regular monitoring of food intake is hardly integrated into clinical routine. Therefore, the aim was to examine the validity, accuracy, and applicability of an appropriate, quick, and easy-to-use tool for recording food intake in a clinical setting. Two digital photography methods, the postMeal method (a picture after the meal) and the pre-postMeal method (a picture before and after the meal), and the visual estimation method (plate diagram; PD) were compared against the reference method (weighed food records; WFR). A total of 420 dishes from lunch (7 weeks) were estimated with both photography methods and the visual method. Validity, applicability, accuracy, and precision of the estimation methods, as well as food waste, macronutrient composition, and energy content, were examined. Tests of validity revealed stronger correlations for the photography methods (postMeal: r = 0.971, p < 0.001; pre-postMeal: r = 0.995, p < 0.001) than for the visual estimation method (r = 0.810; p < 0.001). The pre-postMeal method showed smaller variability (bias < 1 g) and smaller over- and underestimation, and it accurately and precisely estimated portion sizes for all food items. Furthermore, total food waste was 22% for lunch over the study period; the highest food waste was observed in salads and the lowest in desserts. The pre-postMeal digital photography method is valid, accurate, and applicable for monitoring food intake in a clinical setting, enabling quantitative and qualitative dietary assessment; thus, nutritional care might be initiated earlier. The method might also be advantageous for quantitative and qualitative evaluation of food waste, with a resultant reduction in costs.
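Once portion weights have been estimated from the two photographs, the pre-postMeal bookkeeping reduces to simple arithmetic; the function below is an illustrative sketch, not the study's software:

```python
# Hedged sketch of pre-postMeal bookkeeping (assumed form): consumed amount
# is the difference between the portions estimated from the "before" and
# "after" photos; waste is the leftover fraction of the served portion.

def meal_stats(est_before_g, est_after_g):
    consumed = est_before_g - est_after_g
    waste_pct = 100.0 * est_after_g / est_before_g if est_before_g else 0.0
    return consumed, waste_pct

consumed, waste = meal_stats(450.0, 99.0)  # made-up portion estimates
print(consumed, waste)  # -> 351.0 22.0
```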
Integrating nanosphere lithography in device fabrication
NASA Astrophysics Data System (ADS)
Laurvick, Tod V.; Coutu, Ronald A.; Lake, Robert A.
2016-03-01
This paper discusses the integration of nanosphere lithography (NSL) with other fabrication techniques, allowing for nano-scaled features to be realized within larger microelectromechanical system (MEMS) based devices. Nanosphere self-patterning methods have been researched for over three decades, but typically not for use as a lithography process. Only recently has progress been made towards integrating many of the best practices from these publications and determining a process that yields large areas of coverage with repeatability, enabling precise placement of nanospheres relative to other features. Discussed are two of the more common self-patterning methods used in NSL (i.e. spin coating and dip coating) as well as a more recently conceived variation of dip coating. Recent work has suggested the repeatability of any method depends on a number of variables, so to better understand how these variables affect the process a series of test vessels were developed and fabricated. Commercially available 3-D printing technology was used to incrementally alter the test vessels, allowing each variable to be investigated individually. With these deposition vessels, NSL can now be used in conjunction with other fabrication steps to integrate features otherwise unattainable through current methods within the overall fabrication process of larger MEMS devices. Patterned regions in 1800 series photoresist with a thickness of ~700 nm are used to capture regions of self-assembled nanospheres. These regions are roughly 2-5 microns in width and are able to control the placement of 500 nm polystyrene spheres by controlling where monolayer self-assembly occurs. The resulting combination of photoresist and nanospheres can then be used with traditional deposition or etch methods to utilize these fine-scale features in the overall design.
Robust Functionalization of Large Microelectrode Arrays by Using Pulsed Potentiostatic Deposition
Rothe, Joerg; Frey, Olivier; Madangopal, Rajtarun; Rickus, Jenna; Hierlemann, Andreas
2016-01-01
Surface modification of microelectrodes is a central step in the development of microsensors and microsensor arrays. Here, we present an electrodeposition scheme based on voltage pulses. Key features of this method are uniformity in the deposited electrode coatings, flexibility in the overall deposition area, i.e., the sizes and number of the electrodes to be coated, and precise control of the surface texture. Deposition and characterization of four different materials are demonstrated, including layers of high-surface-area platinum, gold, conducting polymer poly(ethylenedioxythiophene), also known as PEDOT, and the non-conducting polymer poly(phenylenediamine), also known as PPD. The depositions were conducted using a fully integrated complementary metal-oxide-semiconductor (CMOS) chip with an array of 1024 microelectrodes. The pulsed potentiostatic deposition scheme is particularly suitable for functionalization of individual electrodes or electrode subsets of large integrated microelectrode arrays: the required deposition waveforms are readily available in an integrated system, the same deposition parameters can be used to functionalize the surface of either single electrodes or large arrays of thousands of electrodes, and the deposition method proved to be robust and reproducible for all materials tested. PMID:28025569
Yang, Yanqiang; Zhang, Chunxi; Lu, Jiazhen
2017-01-16
Strapdown inertial navigation system/celestial navigation system (SINS/CNS) integrated navigation is a fully autonomous, high-precision method that has been widely used to improve the hitting accuracy and quick-reaction capability of near-Earth flight vehicles. The installation errors between the SINS and the star sensors have been one of the main factors restricting the actual accuracy of SINS/CNS. In this paper, an integration algorithm based on star vector observations is derived that accounts for the star sensor installation error, and this error is then accurately estimated by Kalman filtering (KF). Meanwhile, a local observability analysis is performed on the rank of the observability matrix obtained from the linearized observation equation, and the observability conditions are presented and validated: the number of star vectors should be at least 2, and the number of attitude adjustments should also be at least 2. Simulations indicate that the star sensor installation error is readily observable under the maneuvering condition; moreover, the attitude errors of the SINS are less than 7 arc-seconds. This analysis method and its conclusions are useful in the ballistic trajectory design of near-Earth flight vehicles.
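The rank test mentioned above can be illustrated on a toy linear system; the matrices below are placeholders for illustration, not the paper's SINS/CNS model:

```python
# Hedged sketch of a linear observability check: for x' = F x with
# measurement z = H x, stack H, HF, HF^2, ... and require full column rank.
import numpy as np

def observability_rank(F, H, n_terms=None):
    n = F.shape[0]
    n_terms = n_terms or n
    blocks = [H @ np.linalg.matrix_power(F, k) for k in range(n_terms)]
    O = np.vstack(blocks)  # stacked observability matrix
    return np.linalg.matrix_rank(O)

F = np.array([[0.0, 1.0], [0.0, 0.0]])   # toy double-integrator dynamics
H = np.array([[1.0, 0.0]])               # position-only measurement
print(observability_rank(F, H))  # -> 2 (full rank: the state is observable)
```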
Zhang, Ding-kun; Wang, Jia-bo; Yang, Ming; Peng, Cheng; Xiao, Xiao-he
2015-07-01
Good medicinal herbs, good drugs: good evaluation methods and indices are the prerequisite of good medicinal herbs. Numerous indices exist for quality evaluation and control of Chinese medicinal materials, but most are unrelated to one another and bear little relationship to efficacy and safety, and the results of different evaluation methods may be inconsistent, even contradictory. Considering the complex material properties of Chinese medicinal materials, a single method or index can hardly reflect their quality objectively and comprehensively; it is therefore essential to explore integrated evaluation methods. In this paper, oriented by the integrated evaluation strategies for traditional Chinese medicine quality, a new method called the integrated quality index (IQI), combining empirical, chemical, and biological evaluation, is proposed. A study case of the highly toxic herb Aconitum carmichaelii Debx. is provided to explain the method in detail. The results suggest that, in terms of specifications, the average weight of Jiangyou aconite was the greatest, followed by Weishan, Butuo, Hanzhong, and Anxian aconite; in terms of chemical components, Jiangyou aconite was characterized by strong efficacy and weak toxicity, followed by Hanzhong, Butuo, Weishan, and Anxian aconite; taking toxic potency as the index, Hanzhong and Jiangyou aconite had lower toxicity, while Butuo, Weishan, and Anxian aconite had relatively higher toxicity. After normalization and integration of the evaluation results, the calculated IQI values of Jiangyou, Hanzhong, Butuo, Weishan, and Anxian aconite were 0.842 +/- 0.091, 0.597 +/- 0.047, 0.442 +/- 0.033, 0.454 +/- 0.038, and 0.170 +/- 0.021, respectively.
The quality of Jiangyou aconite is significantly better than that of the others (P < 0.05), followed by Hanzhong aconite, which is consistent with the traditional understanding of genuineness. It can be concluded that the IQI achieves integrated control and evaluation of the quality of Chinese medicinal materials and is an exploration toward building standards for good medicinal herbs. In addition, the IQI provides technical support for geoherbalism evaluation, selective breeding, the development of precision decoction pieces, high quality at favourable prices in market circulation, and rational drug use.
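A minimal sketch of how such an integrated index can be computed follows, assuming min-max normalization of each criterion and equal weights (the study's actual normalization and weighting scheme is not specified here):

```python
# Hedged sketch of an "integrated quality index": per-criterion raw scores
# are min-max normalized across samples, then combined as a weighted sum.

def integrated_quality_index(scores_by_criterion, weights=None):
    """scores_by_criterion: dict criterion -> {sample: raw score}."""
    samples = next(iter(scores_by_criterion.values())).keys()
    criteria = list(scores_by_criterion)
    weights = weights or {c: 1 / len(criteria) for c in criteria}
    iqi = {s: 0.0 for s in samples}
    for c, raw in scores_by_criterion.items():
        lo, hi = min(raw.values()), max(raw.values())
        for s in samples:
            norm = (raw[s] - lo) / (hi - lo) if hi > lo else 1.0
            iqi[s] += weights[c] * norm
    return iqi

# Toy scores for three hypothetical samples on three evaluation criteria:
scores = {
    "empirical":  {"A": 8.0, "B": 5.0, "C": 2.0},
    "chemical":   {"A": 0.9, "B": 0.6, "C": 0.3},
    "biological": {"A": 0.8, "B": 0.5, "C": 0.2},
}
print(integrated_quality_index(scores))  # A scores highest, C lowest
```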
Ye, Yusen; Gao, Lin; Zhang, Shihua
2017-01-01
Transcription factors play a key role in the transcriptional regulation of genes and in the determination of cellular identity through combinatorial interactions. However, current studies of combinatorial regulation are limited by the lack of experimental data from the same cellular environment and by extensive data noise. Here, we adopt a Bayesian CANDECOMP/PARAFAC (CP) factorization approach (BCPF) to integrate multiple datasets in a network paradigm for determining precise TF interaction landscapes. In our first application, we apply BCPF to integrate three networks, built from diverse ENCODE datasets of multiple cell lines, to predict a global and precise TF interaction network. This network yields 38 novel TF interactions with distinct biological functions. In our second application, we apply BCPF to seven cell-type TF regulatory networks and predict seven cell-lineage TF interaction networks. By further exploring their dynamics and modularity, we find that cell lineage-specific hub TFs participate in cell type- or lineage-specific regulation by interacting with non-specific TFs. Furthermore, we illustrate the biological function of hub TFs, taking those of the cancer and blood lineages as examples. Taken together, our integrative analysis reveals a more precise and extensive description of human TF combinatorial interactions. PMID:29033978
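BCPF places priors over the factor matrices; as a hedged illustration of the underlying decomposition only, here is a plain (non-Bayesian) rank-R CP factorization of a 3-way tensor by alternating least squares in NumPy:

```python
# Plain CP-ALS for a 3-way tensor (illustrative; the Bayesian variant used
# by BCPF extends this with priors and posterior inference).
import numpy as np

def cp_als(T, rank, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    dims = T.shape
    A = [rng.standard_normal((d, rank)) for d in dims]
    for _ in range(n_iter):
        for mode in range(3):
            others = [A[m] for m in range(3) if m != mode]
            # Khatri-Rao product of the other two factor matrices
            kr = np.einsum("ir,jr->ijr", others[0], others[1]).reshape(-1, rank)
            unfold = np.moveaxis(T, mode, 0).reshape(dims[mode], -1)
            A[mode] = unfold @ kr @ np.linalg.pinv((others[0].T @ others[0]) *
                                                  (others[1].T @ others[1]))
    return A

# Recover a random rank-2 tensor and check the reconstruction error:
rng = np.random.default_rng(1)
U, V, W = (rng.standard_normal((5, 2)) for _ in range(3))
T = np.einsum("ir,jr,kr->ijk", U, V, W)
A = cp_als(T, rank=2)
T_hat = np.einsum("ir,jr,kr->ijk", *A)
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))  # small relative error
```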
Integrating three-dimensional digital technologies for comprehensive implant dentistry.
Patel, Neal
2010-06-01
The increase in the popularity of and the demand for the use of dental implants to replace teeth has encouraged advancement in clinical technology and materials to improve patients' acceptance and clinical outcomes. Recent advances such as three-dimensional dental radiography with cone-beam computed tomography (CBCT), precision dental implant planning software and clinical execution with guided surgery all play a role in the success of implant dentistry. The author illustrates the technique of comprehensive implant dentistry planning through integration of computer-aided design/computer-aided manufacturing (CAD/CAM) and CBCT data. The technique includes clinical treatment with guided surgery, including the creation of a final restoration with a high-strength ceramic (IPS e.max CAD, Ivoclar Vivadent, Amherst, N.Y.). The author also introduces a technique involving CAD/CAM for fabricating custom implant abutments. The release of software integrating CEREC Acquisition Center with Bluecam (Sirona Dental Systems, Charlotte, N.C.) chairside CAD/CAM and Galileos CBCT imaging (Sirona Dental Systems) allows dentists to plan implant placement, perform implant dentistry with increased precision and provide predictable restorative results by using chairside IPS e.max CAD. The precision of clinical treatment provided by the integration of CAD/CAM and CBCT allows dentists to plan for ideal surgical placement and the appropriate thickness of restorative modalities before placing implants.
Precise Truss Assembly Using Commodity Parts and Low Precision Welding
NASA Technical Reports Server (NTRS)
Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, W. R.; Correll, Nikolaus
2014-01-01
Hardware and software design and system integration for an intelligent precision jigging robot (IPJR), which allows high precision assembly using commodity parts and low-precision bonding, is described. Preliminary 2D experiments that are motivated by the problem of assembling space telescope optical benches and very large manipulators on orbit using inexpensive, stock hardware and low-precision welding are also described. An IPJR is a robot that acts as the precise "jigging", holding parts of a local structure assembly site in place, while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (for this prototype, a human using only low precision tools) to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. The analysis of the assembly error and the results of building a square structure and a ring structure are discussed. Options for future work to extend the IPJR paradigm to building 3D structures at micron precision are also summarized.
Spatial-spectral blood cell classification with microscopic hyperspectral imagery
NASA Astrophysics Data System (ADS)
Ran, Qiong; Chang, Lan; Li, Wei; Xu, Xiaofeng
2017-10-01
Microscopic hyperspectral images provide a new way to examine blood cells. Hyperspectral imagery can greatly facilitate the classification of different blood cells. In this paper, microscopic hyperspectral images are acquired by connecting a microscope to a hyperspectral imager and are then used to test blood cell classification. For combined use of the spectral and spatial information provided by hyperspectral images, a spatial-spectral classification method is derived from the classical extreme learning machine (ELM) by integrating spatial context into the image classification task with a Markov random field (MRF) model. Comparisons are made among the ELM, ELM-MRF, support vector machine (SVM), and SVM-MRF methods. Results show that the spatial-spectral classification methods (ELM-MRF, SVM-MRF) perform better than the pixel-based methods (ELM, SVM), and the proposed ELM-MRF achieves higher precision and locates cells more accurately.
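The pixel-wise ELM baseline referred to above admits a very small implementation: a fixed random hidden layer with a least-squares readout. A sketch on synthetic two-class data (no hyperspectral I/O or MRF smoothing; all names are illustrative):

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer + least-squares readout."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # random, untrained input weights -- the defining trait of an ELM
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        # one-hot targets, output weights solved in closed form via the pseudoinverse
        T = (y[:, None] == self.classes_[None, :]).astype(float)
        self.beta = np.linalg.pinv(self._hidden(X)) @ T
        return self

    def predict(self, X):
        return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]

# demo on two well-separated synthetic "pixel spectra" clusters
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.repeat([0, 1], 50)
acc = (ELM(n_hidden=50).fit(X, y).predict(X) == y).mean()
```

The MRF step of the paper would then smooth these per-pixel labels using spatial neighborhoods.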
Vitali, Francesca; Li, Qike; Schissler, A Grant; Berghout, Joanne; Kenost, Colleen; Lussier, Yves A
2017-12-18
The development of computational methods capable of analyzing -omics data at the individual level is critical for the success of precision medicine. Although unprecedented opportunities now exist to gather data on an individual's -omics profile ('personalome'), interpreting and extracting meaningful information from single-subject -omics remain underdeveloped, particularly for quantitative non-sequence measurements, including complete transcriptome or proteome expression and metabolite abundance. Conventional bioinformatics approaches have largely been designed for making population-level inferences about 'average' disease processes; thus, they may not adequately capture and describe individual variability. Novel approaches intended to exploit a variety of -omics data are required for identifying individualized signals for meaningful interpretation. In this review, intended for biomedical researchers, computational biologists and bioinformaticians, we survey emerging computational and translational informatics methods capable of constructing a single subject's 'personalome' for predicting clinical outcomes or therapeutic responses, with an emphasis on methods that provide interpretable readouts. We find that (i) single-subject analytics of the transcriptome shows the greatest development to date, and (ii) the methods have all been validated in simulations, cross-validations or independent retrospective data sets. This survey uncovers a growing field that offers numerous opportunities for the development of novel validation methods and opens the door for future studies focusing on the interpretation of comprehensive 'personalomes' through the integration of multiple -omics, providing valuable insights into individual patient outcomes and treatments. © The Author 2017. Published by Oxford University Press.
NASA Astrophysics Data System (ADS)
Elitez, İrem; Yaltırak, Cenk; Zabcı, Cengiz; Şahin, Murat
2015-04-01
Precise geological mapping is one of the most important issues in geological studies. Documenting the spatial distribution of geological bodies and their contacts plays a crucial role in interpreting the tectonic evolution of any region. Although traditional field techniques are still accepted as the most fundamental tools in the construction of geological maps, we suggest that integrating digital technologies with the classical methods significantly increases the resolution and the quality of such products. We follow these steps in integrating digital data with traditional field observations. First, we create the digital elevation model (DEM) of the region of interest by interpolating the digital contours of 1:25000-scale topographic maps to a 10 m ground pixel resolution. The non-commercial Google Earth satellite imagery and the geological maps of previous studies are draped over the interpolated DEMs in the second stage. The integration of all spatial data is done using the market-leading GIS software, ESRI ArcGIS. We make a preliminary interpretation of major structures as tectonic lineaments and stratigraphic contacts. These preliminary maps are checked and precisely coordinated during the field studies by using mobile tablets and/or phablets with GPS receivers. The same devices are also used to measure and record the geologic structures of the study region. Finally, all digitally collected measurements and observations are added to the GIS database, and we finalise our geological map with all available information. We applied this integrated method to map the Burdur-Fethiye Shear Zone (BFSZ) in southwest Turkey. The BFSZ is an active sinistral 60-to-90 km-wide shear zone, which extends for about 300 km on land between Suhut-Cay in the northeast and Köyceğiz Lake-Kalkan in the southwest.
Numerous studies suggest contradictory models not only for the evolution but also for the fault geometry of this wide deformation zone. In our study, we have mapped this complicated region since 2008 by using the data and the steps described briefly above. After our joint analyses, we show that there is no single continuous narrow fault, the Burdur-Fethiye Fault, as was previously suggested by many researchers. Instead, the whole region is deformed under oblique-sinistral shearing with a considerable amount of extension, which causes a counterclockwise rotation within the zone.
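The first step above, gridding digitized contour points into a DEM, can be sketched with SciPy. The 10 m cell size follows the abstract; the planar test surface and point set are synthetic placeholders:

```python
import numpy as np
from scipy.interpolate import griddata

def contours_to_dem(points_xy, elevations, cell=10.0):
    """Interpolate scattered contour points to a regular grid (a simple DEM)."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    gx = np.arange(x.min(), x.max() + cell, cell)
    gy = np.arange(y.min(), y.max() + cell, cell)
    GX, GY = np.meshgrid(gx, gy)
    # linear (Delaunay-based) interpolation; NaN outside the convex hull
    dem = griddata(points_xy, elevations, (GX, GY), method='linear')
    return GX, GY, dem

# demo: a synthetic planar "terrain" z = 2x + 3y is recovered exactly
rng = np.random.default_rng(0)
pts = np.vstack([rng.uniform(0, 100, (200, 2)),
                 [[0, 0], [0, 100], [100, 0], [100, 100]]])
z = 2 * pts[:, 0] + 3 * pts[:, 1]
GX, GY, dem = contours_to_dem(pts, z, cell=10.0)
max_err = np.nanmax(np.abs(dem - (2 * GX + 3 * GY)))
```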
Preparation and Integration of ALHAT Precision Landing Technology for Morpheus Flight Testing
NASA Technical Reports Server (NTRS)
Carson, John M., III; Robertson, Edward A.; Pierrottet, Diego F.; Roback, Vincent E.; Trawny, Nikolas; Devolites, Jennifer L.; Hart, Jeremy J.; Estes, Jay N.; Gaddis, Gregory S.
2014-01-01
The Autonomous precision Landing and Hazard Avoidance Technology (ALHAT) project has developed a suite of prototype sensors for enabling autonomous and safe precision landing of robotic or crewed vehicles on solid solar bodies under varying terrain lighting conditions. The sensors include a Lidar-based Hazard Detection System (HDS), a multipurpose Navigation Doppler Lidar (NDL), and a long-range Laser Altimeter (LAlt). Preparation for terrestrial flight testing of ALHAT onboard the Morpheus free-flying, rocket-propelled flight test vehicle has been in progress since 2012, with flight tests over a lunar-like terrain field occurring in Spring 2014. Significant work efforts within both the ALHAT and Morpheus projects have been required in the preparation of the sensors, vehicle, and test facilities for interfacing, integrating and verifying overall system performance to ensure readiness for flight testing. The ALHAT sensors have undergone numerous stand-alone sensor tests, simulations, and calibrations, along with integrated-system tests in specialized gantries, trucks, helicopters and fixed-wing aircraft. A lunar-like terrain environment was constructed for ALHAT system testing during Morpheus flights, and vibration and thermal testing of the ALHAT sensors was performed based on Morpheus flights prior to ALHAT integration. High-fidelity simulations were implemented to gain insight into integrated ALHAT sensors and Morpheus GN&C system performance, and command and telemetry interfacing and functional testing was conducted once the ALHAT sensors and electronics were integrated onto Morpheus. This paper captures some of the details and lessons learned in the planning, preparation and integration of the individual ALHAT sensors, the vehicle, and the test environment that led up to the joint flight tests.
NASA Astrophysics Data System (ADS)
Chen, Syuan-Yi; Gong, Sheng-Sian
2017-09-01
This study aims to develop an adaptive high-precision control system for controlling the speed of a vane-type air motor (VAM) pneumatic servo system. In practice, the rotor speed of a VAM depends on the input mass air flow, which can be controlled by the effective orifice area (EOA) of an electronic throttle valve (ETV). As the control variable of a second-order pneumatic system is the integral of the EOA, an observation-based adaptive dynamic sliding-mode control (ADSMC) system is proposed to derive the differential of the control variable, namely, the EOA control signal. In the ADSMC system, a proportional-integral-derivative fuzzy neural network (PIDFNN) observer is used to achieve an ideal dynamic sliding-mode control (DSMC), and a supervisor compensator is designed to eliminate the approximation error. As a result, the ADSMC incorporates the robustness of a DSMC and the online learning ability of a PIDFNN. To ensure the convergence of the tracking error, a Lyapunov-based analytical method is employed to obtain the adaptive algorithms required to tune the control parameters of the online ADSMC system. Finally, our experimental results demonstrate the precision and robustness of the ADSMC system for highly nonlinear and time-varying VAM pneumatic servo systems.
Boukabache, Hamza; Escriba, Christophe; Fourniols, Jean-Yves
2014-10-31
Structural health monitoring using noninvasive methods is one of the major challenges that aerospace manufacturers face in this decade. Our work in this field focuses on the development and the system integration of millimetric piezoelectric sensors/actuators to generate and measure specific guided waves. The aim of the application is to detect mechanical flaws on complex composite and alloy structures in order to quantify efficiently the reliability of the global structures. The study begins with a physical and analytical analysis of a piezoelectric patch. To preserve the structure's integrity, the transducers are pasted directly onto the surface, which leads to a critical issue concerning the interfacing layer. In order to improve reliability and mitigate the influence of the interfacing layer, the global equations of piezoelectricity are coupled with a load transfer model. Thus we can determine precisely the shear strain developed on the surface of the structure. To exploit the generated signal, a high-precision analog charge amplifier coupled to a double-T notch filter were designed and scaled. A novel joint time-frequency analysis based on a wavelet decomposition algorithm is then used to extract relevant structural signatures. Finally, this paper provides examples of application on aircraft structure specimens, and the feasibility of the system is thus demonstrated.
NASA Astrophysics Data System (ADS)
Wang, Zhiguo; Liang, Yingchun; Chen, Mingjun; Tong, Zhen; Chen, Jiaxuan
2010-10-01
Tool wear not only degrades the tool's geometric accuracy and integrity, but also decreases the machining precision and surface integrity of the workpiece, which affect the workpiece's performance and service life in ultra-precision machining. Scholars have carried out many experimental studies and simulation analyses, but there is still substantial disagreement about the wear mechanism, especially at the nano scale. In this paper, a three-dimensional model is built to simulate nano-metric cutting of single crystal silicon with a non-rigid right-angle diamond tool with 0° rake angle and 0° clearance angle by the molecular dynamics (MD) simulation approach, which is used to investigate diamond tool wear during the nano-metric cutting process. A Tersoff potential is employed for the interactions between carbon-carbon, silicon-silicon and carbon-silicon atoms. The tool is subjected to high alternating shear stress, and tool wear first appears at the cutting edge, where strength is low. At the corner, the tool is split along the {1 1 1} crystal plane, which forms tipping. The wear at the flank face is a structural transformation of diamond, in which the diamond structure transforms into a sheet graphite structure. Owing to tool wear, the cutting force increases.
Model-Based Localization and Tracking Using Bluetooth Low-Energy Beacons.
Daniş, F Serhan; Cemgil, Ali Taylan
2017-10-29
We introduce a high precision localization and tracking method that makes use of cheap Bluetooth low-energy (BLE) beacons only. We track the position of a moving sensor by integrating highly unreliable and noisy BLE observations streaming from multiple locations. A novel aspect of our approach is the development of an observation model, specifically tailored for received signal strength indicator (RSSI) fingerprints: a combination based on the optimal transport model of Wasserstein distance. The tracking results of the entire system are compared with alternative baseline estimation methods, such as nearest neighboring fingerprints and an artificial neural network. Our results show that highly accurate estimation from noisy Bluetooth data is practically feasible with an observation model based on Wasserstein distance interpolation combined with the sequential Monte Carlo (SMC) method for tracking.
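The observation model above compares RSSI fingerprints using the Wasserstein (earth mover's) distance; a minimal nearest-fingerprint matcher built on SciPy's 1-D implementation is sketched below. The database contents are invented, and the paper's full method embeds this distance in a sequential Monte Carlo tracker rather than a simple nearest-neighbor lookup:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def nearest_fingerprint(observed_rssi, fingerprint_db):
    """Match an observed set of RSSI samples (dBm) to the closest stored
    fingerprint by 1-D Wasserstein (earth mover's) distance."""
    best, best_d = None, np.inf
    for location, samples in fingerprint_db.items():
        d = wasserstein_distance(observed_rssi, samples)
        if d < best_d:
            best, best_d = location, d
    return best, best_d

# demo with two invented fingerprint locations
db = {'hall': [-60.0, -62.0, -61.0], 'office': [-80.0, -82.0, -81.0]}
loc, dist = nearest_fingerprint([-61.0, -63.0, -60.0], db)
```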
NASA Astrophysics Data System (ADS)
Gui, Luying; He, Jian; Qiu, Yudong; Yang, Xiaoping
2017-01-01
This paper presents a variational level set approach to segmenting lesions with compact shapes on medical images. In this study, we address the problem of segmenting hepatocellular carcinomas, which usually have various shapes, variable intensities, and weak boundaries. An efficient constraint, called the isoperimetric constraint, is applied in this method to describe the compactness of shapes. In addition, in order to ensure precise segmentation and stable movement of the level set, a distance regularization is also implemented in the proposed variational framework. Our method is applied to segment various hepatocellular carcinoma regions on computed tomography images with promising results. Comparison results also show that the proposed method is more accurate than two other approaches.
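The isoperimetric constraint penalizes non-compact shapes via the isoperimetric quotient 4πA/P², which equals 1 for a disk and is smaller for elongated contours. A discrete sketch for polygonal contours (an illustration of the quantity only, not the authors' level set implementation):

```python
import numpy as np

def compactness(xy):
    """Isoperimetric quotient 4*pi*A / P**2 of a closed polygon (1.0 for a disk)."""
    x, y = xy[:, 0], xy[:, 1]
    # shoelace formula for the enclosed area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # perimeter: sum of edge lengths around the closed contour
    per = np.sum(np.hypot(np.roll(x, -1) - x, np.roll(y, -1) - y))
    return 4 * np.pi * area / per**2

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])   # compact shape
bar = np.array([[0, 0], [10, 0], [10, 1], [0, 1]], float)  # elongated shape
```

In a variational framework this quotient (or its inverse) enters the energy as a penalty term, driving the evolving contour toward compact shapes.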
Martín, Angel; Padín, Jorge; Anquela, Ana Belén; Sánchez, Juán; Belda, Santiago
2009-01-01
Magnetic data consist of a sequence of collected points with spatial coordinates and magnetic information. The spatial location of these points needs to be as exact as possible in order to develop a precise interpretation of magnetic anomalies. GPS is a valuable tool for accomplishing this objective, especially if the RTK approach is used. In this paper the VRS (Virtual Reference Station) technique is introduced as a new approach for real-time positioning of magnetic sensors. The main advantages of the VRS approach are, firstly, that only a single GPS receiver is needed (no base station is necessary), reducing field work and equipment costs. Secondly, VRS can operate at distances of 50–70 km from the reference stations without degrading accuracy. A compact integration of a GSM-19 magnetometer sensor with a geodetic GPS antenna is presented; this integration does not diminish the operational flexibility of the original magnetometer and can work with the VRS approach. The coupled devices were tested in marshlands around Gandia, a city located approximately 100 km south of Valencia (Spain), thought to be the site of a Roman cemetery. The results obtained show adequate geometry and high-precision positioning for the structures to be studied (a comparison with the original low-precision GPS of the magnetometer is presented). Finally, the results of the magnetic survey are of great interest for archaeological purposes. PMID:22574055
System Performance of an Integrated Airborne Spacing Algorithm with Ground Automation
NASA Technical Reports Server (NTRS)
Swieringa, Kurt A.; Wilson, Sara R.; Baxley, Brian T.
2016-01-01
The National Aeronautics and Space Administration's (NASA's) first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools to enable precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise spacing behind another aircraft. Recent simulations and IM algorithm development at NASA have focused on trajectory-based IM operations where aircraft equipped with IM avionics are expected to achieve a spacing goal, assigned by air traffic controllers, at the final approach fix. The recently published IM Minimum Operational Performance Standards describe five types of IM operations. This paper discusses the results and conclusions of a human-in-the-loop simulation that investigated three of those IM operations. The results presented in this paper focus on system performance and integration metrics. Overall, the IM operations conducted in this simulation integrated well with ground-based decision support tools, and certain types of IM operations were able to provide improved spacing precision at the final approach fix; however, some issues were identified that should be addressed before implementing IM procedures in real-world operations.
Fong, Erika J.; Huang, Chao; Hamilton, Julie; ...
2015-11-23
Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15–1.5 ml samples using microfluidic devices. Important aspects of this system include a modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization and performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.
NASA Technical Reports Server (NTRS)
Hibbard, William L.; Dyer, Charles R.; Paul, Brian E.
1994-01-01
The VIS-AD data model integrates metadata about the precision of values, including missing data indicators and the way that arrays sample continuous functions, with the data objects of a scientific programming language. The data objects of this data model form a lattice, ordered by the precision with which they approximate mathematical objects. We define a similar lattice of displays and study visualization processes as functions from data lattices to display lattices. Such functions can be applied to visualize data objects of all data types and are thus polymorphic.
Wisdom of crowds for robust gene network inference
Marbach, Daniel; Costello, James C.; Küffner, Robert; Vega, Nicci; Prill, Robert J.; Camacho, Diogo M.; Allison, Kyle R.; Kellis, Manolis; Collins, James J.; Stolovitzky, Gustavo
2012-01-01
Reconstructing gene regulatory networks from high-throughput data is a long-standing problem. Through the DREAM project (Dialogue on Reverse Engineering Assessment and Methods), we performed a comprehensive blind assessment of over thirty network inference methods on Escherichia coli, Staphylococcus aureus, Saccharomyces cerevisiae, and in silico microarray data. We characterize performance, data requirements, and inherent biases of different inference approaches offering guidelines for both algorithm application and development. We observe that no single inference method performs optimally across all datasets. In contrast, integration of predictions from multiple inference methods shows robust and high performance across diverse datasets. Thereby, we construct high-confidence networks for E. coli and S. aureus, each comprising ~1700 transcriptional interactions at an estimated precision of 50%. We experimentally test 53 novel interactions in E. coli, of which 23 were supported (43%). Our results establish community-based methods as a powerful and robust tool for the inference of transcriptional gene regulatory networks. PMID:22796662
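The integration step above, combining predictions from many inference methods, is commonly implemented as rank averaging of edge scores across methods. A minimal sketch with invented scores (not the DREAM project's exact scheme):

```python
import numpy as np

def rank_average(score_lists):
    """Integrate several methods' edge scores by averaging per-method ranks
    (higher score -> higher rank), a simple community-based consensus."""
    scores = np.asarray(score_lists, dtype=float)   # shape: (methods, edges)
    # rank within each method: 0 = lowest score (distinct scores assumed)
    ranks = scores.argsort(axis=1).argsort(axis=1)
    return ranks.mean(axis=0)

# three hypothetical inference methods scoring five candidate edges
m1 = [0.9, 0.2, 0.8, 0.1, 0.3]
m2 = [0.7, 0.1, 0.9, 0.2, 0.4]
m3 = [0.8, 0.3, 0.6, 0.2, 0.1]
consensus = rank_average([m1, m2, m3])
```

Edges that are ranked highly by all methods dominate the consensus, which is why the integrated prediction is robust to the biases of any single method.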
Névéol, Aurélie; Shooshan, Sonya E.; Mork, James G.; Aronson, Alan R.
2007-01-01
Objective: This paper reports on the latest results of an Indexing Initiative effort addressing the automatic attachment of subheadings to MeSH main headings recommended by the NLM's Medical Text Indexer. Material and Methods: Several linguistic and statistical approaches are used to retrieve and attach the subheadings. Continuing collaboration with NLM indexers also provided insight into how automatic methods can better enhance indexing practice. Results: The methods were evaluated on a corpus of 50,000 MEDLINE citations. For main heading/subheading pair recommendations, the best precision is obtained with a post-processing rule method (58%), while the best recall is obtained by pooling all methods (64%). For stand-alone subheading recommendations, the best performance is obtained with the PubMed Related Citations algorithm. Conclusion: Significant progress has been made in terms of subheading coverage. After further evaluation, some of this work may be integrated into the MEDLINE indexing workflow. PMID:18693897
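The precision and recall figures quoted above follow the standard set-based definitions; a small sketch with hypothetical main-heading/subheading pairs:

```python
def precision_recall(recommended, gold):
    """Set-based precision and recall for recommended heading/subheading pairs."""
    recommended, gold = set(recommended), set(gold)
    tp = len(recommended & gold)  # true positives
    precision = tp / len(recommended) if recommended else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# demo with invented indexing pairs
p, r = precision_recall(
    [('Neoplasms', 'genetics'), ('Neoplasms', 'therapy'),
     ('Humans', 'physiology'), ('Mice', 'genetics')],
    [('Neoplasms', 'genetics'), ('Humans', 'physiology'),
     ('Rats', 'metabolism')])
```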
Rainfall Measurement with a Ground Based Dual Frequency Radar
NASA Technical Reports Server (NTRS)
Takahashi, Nobuhiro; Horie, Hiroaki; Meneghini, Robert
1997-01-01
Dual-frequency methods are one of the most useful ways to estimate precise rainfall rates. However, there are some difficulties in applying this method to ground-based radars because of the existence of a blind zone and possible error in the radar calibration. Because of these problems, supplemental observations such as rain gauges or satellite-link estimates of path-integrated attenuation (PIA) are needed. This study shows how to estimate rainfall rate with a ground-based dual-frequency radar using rain gauge and satellite-link data. Application of this method to stratiform rainfall is also shown, and the method is compared with a single-wavelength method. Data were obtained from a dual-frequency (10 GHz and 35 GHz) multiparameter radar-radiometer built by the Communications Research Laboratory (CRL), Japan, and located at NASA/GSFC during the spring of 1997. Optical rain gauge (ORG) data and broadcasting satellite signal data near the radar location were also utilized for the calculation.
Active Manual Movement Improves Directional Perception of Illusory Force.
Amemiya, Tomohiro; Gomi, Hiroaki
2016-01-01
Active touch sensing is known to facilitate the discrimination or recognition of the spatial properties of an object from the movement of tactile sensors on the skin and by integrating proprioceptive feedback about hand positions or motor commands related to ongoing hand movements. On the other hand, several studies have reported that tactile processing is suppressed by hand movement. Thus, it is unclear whether or not the active exploration of force direction by using hand or arm movement improves the perception of the force direction. Here, we show that active manual movement in both the rotational and translational directions enhances the precise perception of the force direction. To make it possible to move a hand in space without any physical constraints, we have adopted a method of inducing the sensation of illusory force by asymmetric vibration. We found that the precision of the perceived force direction was significantly better when the shoulder is rotated medially and laterally. We also found that directional errors supplied by the motor response of the perceived force were smaller than those resulting from perceptual judgments between visual and haptic directional stimuli. These results demonstrate that active manual movement boosts the precision of the perceived direction of an illusory force.
IoT for Real-Time Measurement of High-Throughput Liquid Dispensing in Laboratory Environments.
Shumate, Justin; Baillargeon, Pierre; Spicer, Timothy P; Scampavia, Louis
2018-04-01
Critical to maintaining quality control in high-throughput screening is the need for constant monitoring of liquid-dispensing fidelity. Traditional methods involve operator intervention with gravimetric analysis to monitor the gross accuracy of full plate dispenses, visual verification of contents, or dedicated weigh stations on screening platforms that introduce potential bottlenecks and increase the plate-processing cycle time. We present a unique solution using open-source hardware, software, and 3D printing to automate dispenser accuracy determination by providing real-time dispense weight measurements via a network-connected precision balance. This system uses an Arduino microcontroller to connect a precision balance to a local network. By integrating the precision balance as an Internet of Things (IoT) device, it gains the ability to provide real-time gravimetric summaries of dispensing, generate timely alerts when problems are detected, and capture historical dispensing data for future analysis. All collected data can then be accessed via a web interface for reviewing alerts and dispensing information in real time or remotely for timely intervention of dispense errors. The development of this system also leveraged 3D printing to rapidly prototype sensor brackets, mounting solutions, and component enclosures.
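The alerting rule at the heart of such a system reduces to comparing each measured dispense weight against its expected value within a tolerance. A sketch of that check in isolation (the threshold, units, and function name are illustrative, not from the paper; the Arduino/balance I/O and web interface are omitted):

```python
def check_dispense(measured_mg, expected_mg, tol_pct=5.0):
    """Flag a dispense whose measured weight deviates from the expected
    weight by more than tol_pct percent (illustrative alerting rule)."""
    deviation = abs(measured_mg - expected_mg) / expected_mg * 100.0
    return {'deviation_pct': deviation, 'alert': deviation > tol_pct}

# demo: a 4% deviation passes, a 10% deviation raises an alert
ok = check_dispense(104.0, 100.0)
bad = check_dispense(110.0, 100.0)
```

In the deployed system this comparison would run on each streamed balance reading, with alerts and history pushed to the web interface.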
NASA Technical Reports Server (NTRS)
Chapman, G. M. (Principal Investigator); Carnes, J. G.
1981-01-01
Several techniques which use clusters generated by a new clustering algorithm, CLASSY, are proposed as alternatives to random sampling to obtain greater precision in crop proportion estimation: (1) Proportional Allocation/Relative Count Estimator (PA/RCE) uses proportional allocation of dots to clusters on the basis of cluster size and a relative-count cluster-level estimate; (2) Proportional Allocation/Bayes Estimator (PA/BE) uses proportional allocation of dots to clusters and a Bayesian cluster-level estimate; and (3) Bayes Sequential Allocation/Bayesian Estimator (BSA/BE) uses sequential allocation of dots to clusters and a Bayesian cluster-level estimate. Clustering is an effective method for making proportion estimates. It is estimated that, to obtain the same precision with random sampling as obtained by the proportional sampling of 50 dots with an unbiased estimator, samples of 85 or 166 would need to be taken if dot sets with AI labels (integrated procedure) or ground truth labels, respectively, were input. Dot reallocation provides dot sets that are unbiased. It is recommended that these proportion estimation techniques be maintained, particularly the PA/BE, because it provides the greatest precision.
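The proportional allocation and relative-count steps of the PA/RCE technique can be illustrated with a small sketch. The cluster sizes, dot labels, and the largest-remainder rounding rule below are illustrative assumptions, not details from the study.

```python
def allocate_dots(cluster_sizes, n_dots):
    """Allocate n_dots to clusters proportionally to cluster size (largest-remainder rounding)."""
    total = sum(cluster_sizes)
    quotas = [n_dots * s / total for s in cluster_sizes]
    alloc = [int(q) for q in quotas]
    # hand out the leftover dots to the clusters with the largest fractional quotas
    remainders = sorted(range(len(quotas)),
                        key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in remainders[: n_dots - sum(alloc)]:
        alloc[i] += 1
    return alloc

def relative_count_estimate(cluster_sizes, crop_dots, dots_per_cluster):
    """Size-weighted crop proportion: each cluster contributes its labeled-dot proportion."""
    total = sum(cluster_sizes)
    return sum((s / total) * (c / d)
               for s, c, d in zip(cluster_sizes, crop_dots, dots_per_cluster))
```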
Mohapatra, Shyam S; Batra, Surinder K; Bharadwaj, Srinivas; Bouvet, Michael; Cosman, Bard; Goel, Ajay; Jogunoori, Wilma; Kelley, Michael J; Mishra, Lopa; Mishra, Bibhuti; Mohapatra, Subhra; Patel, Bhaumik; Pisegna, Joseph R; Raufman, Jean-Pierre; Rao, Shuyun; Roy, Hemant; Scheuner, Maren; Singh, Satish; Vidyarthi, Gitanjali; White, Jon
2018-05-01
Colorectal cancer (CRC) accounts for ~9% of all cancers in the Veteran population, a fact that has focused much of the attention of the VA's research and development efforts. A field-based meeting of CRC experts was convened to discuss both challenges and opportunities in precision medicine for CRC. This group, designated as the VA Colorectal Cancer Cell-genomics Consortium (VA4C), discussed advances in CRC biology, biomarkers, and imaging for early detection and prevention. There was also a discussion of precision treatment involving fluorescence-guided surgery, targeted chemotherapies and immunotherapies, and personalized cancer treatment approaches. The overarching goal was to identify modalities that might ultimately lead to personalized cancer diagnosis and treatment. This review summarizes the findings of this VA field-based meeting, in which much of the current knowledge on CRC prescreening and treatment was discussed. It was concluded that there is a need and an opportunity to identify new targets for both the prevention of CRC and the development of effective therapies for advanced disease. Also, developing methods that integrate genomic testing with tumoroid-based clinical drug response might lead to more accurate diagnosis and prognostication and more effective personalized treatment of CRC.
De Backer, A; van den Bos, K H W; Van den Broek, W; Sijbers, J; Van Aert, S
2016-12-01
An efficient model-based estimation algorithm is introduced to quantify the atomic column positions and intensities from atomic resolution (scanning) transmission electron microscopy ((S)TEM) images. This algorithm uses the least squares estimator on image segments containing individual columns, fully accounting for overlap between neighbouring columns and enabling the analysis of a large field of view. For this algorithm, the accuracy and precision with which the atomic column positions and scattering cross-sections can be estimated from annular dark field (ADF) STEM images have been investigated. The highest attainable precision is reached even for low-dose images. Furthermore, the advantages of the model-based approach, which takes overlap between neighbouring columns into account, are highlighted. This is done for the estimation of the distance between two neighbouring columns as a function of their separation, and for the estimation of the scattering cross-section, which is compared to the integrated intensity from a Voronoi cell. To provide end users with this well-established quantification method, a user-friendly program, StatSTEM, has been developed and is freely available under a GNU public license. Copyright © 2016 Elsevier B.V. All rights reserved.
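As a simplified, one-dimensional stand-in for the segmented least-squares fitting described above (a toy sketch, not the StatSTEM implementation), a Gaussian column profile can be fit by linear least squares on its log-intensity, and the scattering cross-section obtained as the analytic integral of the fitted peak:

```python
import numpy as np

def fit_column_1d(x, intensity):
    """Fit a 1D Gaussian peak by least squares on log-intensity (a parabola)."""
    c2, c1, c0 = np.polyfit(x, np.log(intensity), 2)  # log I = c2*x^2 + c1*x + c0
    sigma = np.sqrt(-1.0 / (2.0 * c2))                # Gaussian width
    mu = c1 * sigma**2                                # column position
    amp = np.exp(c0 + mu**2 / (2 * sigma**2))         # peak amplitude
    cross_section = amp * sigma * np.sqrt(2 * np.pi)  # integrated intensity
    return mu, sigma, cross_section

# synthetic noiseless column profile at position 0.3 with width 0.4
x = np.linspace(-2.0, 2.0, 81)
profile = 5.0 * np.exp(-(x - 0.3) ** 2 / (2 * 0.4 ** 2))
mu, sigma, cs = fit_column_1d(x, profile)
```

The real algorithm fits 2D peaked models per image segment and accounts for neighbouring-column overlap, which this sketch omits.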
Application of Template Matching for Improving Classification of Urban Railroad Point Clouds
Arastounia, Mostafa; Oude Elberink, Sander
2016-01-01
This study develops an integrated data-driven and model-driven approach (template matching) that clusters the urban railroad point clouds into three classes of rail track, contact cable, and catenary cable. The employed dataset covers 630 m of the Dutch urban railroad corridors in which there are four rail tracks, two contact cables, and two catenary cables. The dataset includes only geometrical information (three dimensional (3D) coordinates of the points) with no intensity data and no RGB data. The obtained results indicate that all objects of interest are successfully classified at the object level with no false positives and no false negatives. The results also show that an average 97.3% precision and an average 97.7% accuracy at the point cloud level are achieved. The high precision and high accuracy of the rail track classification (both greater than 96%) at the point cloud level stems from the great impact of the employed template matching method on excluding the false positives. The cables also achieve quite high average precision (96.8%) and accuracy (98.4%) due to their high sampling and isolated position in the railroad corridor. PMID:27973452
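The point-cloud-level scores quoted above reduce to confusion-matrix arithmetic; the following sketch (with invented class labels) computes per-class precision and overall accuracy:

```python
def precision_accuracy(true_labels, pred_labels, positive):
    """Per-class precision for `positive` and overall accuracy from label pairs."""
    tp = sum(1 for t, p in zip(true_labels, pred_labels) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(true_labels, pred_labels) if p == positive and t != positive)
    correct = sum(1 for t, p in zip(true_labels, pred_labels) if t == p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    accuracy = correct / len(true_labels)
    return precision, accuracy
```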
NASA Astrophysics Data System (ADS)
Chen, Xi; Walker, John T.; Geron, Chris
2017-10-01
Evaluation of the semi-continuous Monitor for AeRosols and GAses in ambient air (MARGA, Metrohm Applikon B.V.) was conducted with an emphasis on examination of accuracy and precision associated with processing of chromatograms. Using laboratory standards and atmospheric measurements, analytical accuracy, precision and method detection limits derived using the commercial MARGA software were compared to an alternative chromatography procedure consisting of a custom Java script to reformat raw MARGA conductivity data and Chromeleon (Thermo Scientific Dionex) software for peak integration. Our analysis revealed issues with accuracy and precision resulting from misidentification and misintegration of chromatograph peaks by the MARGA automated software as well as a systematic bias at low concentrations for anions. Reprocessing and calibration of raw MARGA data using the alternative chromatography method lowered method detection limits and reduced variability (precision) between parallel sampler boxes. Instrument performance was further evaluated during a 1-month intensive field campaign in the fall of 2014, including analysis of diurnal patterns of gaseous and particulate water-soluble species (NH3, SO2, HNO3, NH4+, SO42- and NO3-), gas-to-particle partitioning and particle neutralization state. At ambient concentrations below ~1 µg m-3, concentrations determined using the MARGA software are biased by +30% and +10% for NO3- and SO42-, respectively, compared to concentrations determined using the alternative chromatography procedure. Differences between the two methods increase at lower concentrations. We demonstrate that positively biased NO3- and SO42- measurements result in overestimation of aerosol acidity and introduce nontrivial errors to ion balances of inorganic aerosol. Though the source of the bias is uncertain, it is not corrected by the MARGA online single-point internal LiBr standard.
Our results show that calibration and verification of instrument accuracy by multilevel external standards is required to adequately control analytical accuracy. During the field intensive, the MARGA was able to capture rapid compositional changes in PM2.5 due to changes in meteorology and air mass history relative to known source regions of PM precursors, including a fine NO3- aerosol event associated with intrusion of Arctic air into the southeastern US.
Liu, Jen-Pei; Lu, Li-Tien; Liao, C T
2009-09-01
Intermediate precision is one of the most important characteristics in the evaluation of precision for assay validation. The current methods for evaluation of within-device precision recommended by the Clinical and Laboratory Standards Institute (CLSI) guideline EP5-A2 are based on the point estimator. On the other hand, in addition to point estimators, confidence intervals can provide a range for the within-device precision with a probability statement. Therefore, we suggest a confidence interval approach for assessment of the within-device precision. Furthermore, under the two-stage nested random-effects model recommended by the approved CLSI guideline EP5-A2, in addition to the current Satterthwaite's approximation and the modified large sample (MLS) methods, we apply the technique of generalized pivotal quantities (GPQ) to derive the confidence interval for the within-device precision. The data from the approved CLSI guideline EP5-A2 illustrate the applications of the confidence interval approach and a comparison of results among the three methods. Results of a simulation study on the coverage probability and expected length of the three methods are reported. The proposed method of GPQ-based confidence intervals is also extended to consider the between-laboratories variation for precision assessment.
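The point estimator underlying within-device precision can be sketched for a balanced two-stage nested layout (runs as groups, replicates within runs). The replicate data below are invented, and the interval methods discussed above (Satterthwaite, MLS, GPQ) would build on these same mean squares:

```python
def within_device_variance(runs):
    """Variance components from a balanced one-way random-effects layout.

    runs: list of equal-length lists of replicate results, one list per run.
    Returns (repeatability MSE, between-run component, within-device variance).
    """
    k = len(runs)        # number of runs
    n = len(runs[0])     # replicates per run
    run_means = [sum(r) / n for r in runs]
    grand = sum(run_means) / k
    # within-run mean square (repeatability variance)
    mse = sum((y - m) ** 2 for r, m in zip(runs, run_means) for y in r) / (k * (n - 1))
    # between-run mean square
    msb = n * sum((m - grand) ** 2 for m in run_means) / (k - 1)
    s_run2 = max(0.0, (msb - mse) / n)  # between-run variance component
    s_wd2 = mse + s_run2                # within-device (intermediate) variance
    return mse, s_run2, s_wd2
```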
Monitoring of laser material processing using machine integrated low-coherence interferometry
NASA Astrophysics Data System (ADS)
Kunze, Rouwen; König, Niels; Schmitt, Robert
2017-06-01
Laser material processing has become an indispensable tool in modern production. With the availability of high-power pico- and femtosecond laser sources, laser material processing is advancing into applications that demand the highest accuracies, such as laser micro milling or laser drilling. In order to enable narrow tolerance windows, closed-loop monitoring of the geometrical properties of the processed workpiece is essential for achieving a robust manufacturing process. Low-coherence interferometry (LCI) is a high-precision measuring principle well known from surface metrology. In recent years, we demonstrated successful integrations of LCI into several different laser material processing methods. Within this paper, we give an overview of the different machine integration strategies, which always aim at a complete and ideally telecentric integration of the measurement device into the existing beam path of the processing laser. Thus, highly accurate depth measurements within machine coordinates and subsequent process control and quality assurance are possible. First products using this principle have already found their way to the market, which underlines the potential of this technology for the monitoring of laser material processing.
NASA Astrophysics Data System (ADS)
Wray, J. D.
2003-05-01
The robotic observatory telescope must point precisely on the target object, and then track autonomously to a fraction of the FWHM of the system PSF for durations of ten to twenty minutes or more. It must retain this precision while continuing to function at rates approaching thousands of observations per night for all its years of useful life. These stringent requirements raise new challenges unique to robotic telescope systems design. Critical design considerations are driven by the applicability of the above requirements to all systems of the robotic observatory, including telescope and instrument systems, telescope-dome enclosure systems, combined electrical and electronics systems, environmental (e.g. seeing) control systems and integrated computer control software systems. Traditional telescope design considerations include the effects of differential thermal strain, elastic flexure, plastic flexure and slack or backlash with respect to focal stability, optical alignment and angular pointing and tracking precision. Robotic observatory design must holistically encapsulate these traditional considerations within the overall objective of maximized long-term sustainable precision performance. This overall objective is accomplished through combining appropriate mechanical and dynamical system characteristics with a full-time real-time telescope mount model feedback computer control system. 
Important design considerations include: identifying and reducing quasi-zero-backlash; increasing size to increase precision; directly encoding axis shaft rotation; pointing and tracking operation via real-time feedback between precision mount model and axis mounted encoders; use of monolithic construction whenever appropriate for sustainable mechanical integrity; accelerating dome motion to eliminate repetitive shock; ducting internal telescope air to outside dome; and the principal design criteria: maximizing elastic repeatability while minimizing slack, plastic deformation and hysteresis to facilitate long-term repeatably precise pointing and tracking performance.
Repeatability precision of the falling number procedure under standard and modified methodologies
USDA-ARS?s Scientific Manuscript database
The falling number (FN) procedure is used worldwide to assess the integrity of the starch stored within wheat seed. As an indirect measurement of the activity level of alpha-amylase, FN relies on a dedicated viscometer that measures the amount of time needed for a metal stirring rod of precise geome...
NASA Astrophysics Data System (ADS)
Kabiri, K.
2017-09-01
The capabilities of Sentinel-2A imagery to determine bathymetric information in shallow coastal waters were examined. In this regard, two Sentinel-2A images (acquired in February and March 2016 in calm weather and relatively low turbidity) were selected from Nayband Bay, located in the northern Persian Gulf. In addition, a precise and accurate bathymetric map of the study area was obtained and used both for calibrating the models and for validating the results. Traditional linear and ratio transform techniques, as well as a novel integrated method, were employed to determine depth values. All possible combinations of the three bands (Band 2: blue (458-523 nm), Band 3: green (543-578 nm), and Band 4: red (650-680 nm); spatial resolution: 10 m) were considered (11 options) using the traditional linear and ratio transform techniques, together with 10 model options for the integrated method. The accuracy of each model was assessed by comparing the determined bathymetric information with field-measured values. The correlation coefficients (R2) and root mean square errors (RMSE) for validation points were calculated for all models and for both satellite images. When compared with the linear transform method, the method employing ratio transformation with a combination of all three bands yielded more accurate results (R2Mar = 0.795, R2Feb = 0.777, RMSEMar = 1.889 m, and RMSEFeb = 2.039 m). Although most of the integrated transform methods (specifically the method including all bands and band ratios) yielded the highest accuracy, the improvements were not significant; hence, the ratio transformation was selected as the optimum method.
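The ratio transformation selected as the optimum method is commonly implemented as a linear model on the ratio of log-transformed band reflectances (the Stumpf approach). The sketch below uses synthetic reflectances and depths for illustration, not the Nayband Bay data:

```python
import numpy as np

def fit_ratio_model(blue, green, depth):
    """Calibrate depth ≈ m1 * (ln(blue)/ln(green)) + m0 by least squares."""
    ratio = np.log(blue) / np.log(green)
    m1, m0 = np.polyfit(ratio, depth, 1)
    return m1, m0

def predict_depth(blue, green, m1, m0):
    return m1 * (np.log(blue) / np.log(green)) + m0

# synthetic calibration points constructed so depth = 3*ratio - 2 exactly
blue = np.array([0.1, 0.2, 0.3, 0.4])
green = np.array([0.3, 0.35, 0.4, 0.5])
depth = 3.0 * (np.log(blue) / np.log(green)) - 2.0
m1, m0 = fit_ratio_model(blue, green, depth)
```

In practice the coefficients are calibrated against field bathymetry and validated with R2 and RMSE, as in the study.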
NASA Technical Reports Server (NTRS)
Zelenka, Richard E.
1992-01-01
Avionic systems that depend on digitized terrain elevation data for guidance generation or navigational reference require accurate absolute and relative distance measurements to the terrain, especially as they approach lower altitudes. This is particularly exacting in low-altitude helicopter missions, where aggressive terrain hugging maneuvers create minimal horizontal and vertical clearances and demand precise terrain positioning. Sole reliance on airborne precision navigation and stored terrain elevation data for above-ground-level (AGL) positioning severely limits the operational altitude of such systems. A Kalman filter is presented which blends radar altimeter returns, precision navigation, and stored terrain elevation data for AGL positioning. The filter is evaluated using low-altitude helicopter flight test data acquired over moderately rugged terrain. The proposed Kalman filter is found to remove large disparities in predicted AGL altitude (i.e., from airborne navigation and terrain elevation data) in the presence of measurement anomalies and dropouts. Previous work suggested a minimum clearance altitude of 220 ft AGL for a near-terrain guidance system; integration of a radar altimeter allows for operation of that system below 50 ft, subject to obstacle-avoidance limitations.
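The blending idea can be illustrated with a scalar Kalman filter that propagates the navigation/terrain-predicted AGL altitude and updates it with radar-altimeter returns, skipping dropouts. This is a toy sketch; the noise variances are illustrative, not values from the flight tests:

```python
def kalman_agl(nav_agl, radar_agl, q=4.0, r=25.0):
    """Fuse nav/terrain-predicted AGL with radar returns.

    nav_agl: AGL predicted from navigation + stored terrain elevation (ft).
    radar_agl: radar altimeter returns (ft), None marks a dropout.
    q, r: process and measurement noise variances (illustrative).
    """
    x, p = nav_agl[0], r
    out = []
    for i in range(len(nav_agl)):
        if i > 0:
            x += nav_agl[i] - nav_agl[i - 1]  # propagate using nav-predicted change
            p += q                            # inflate uncertainty
        z = radar_agl[i]
        if z is not None:                     # measurement update; dropouts skipped
            k = p / (p + r)
            x += k * (z - x)
            p *= (1 - k)
        out.append(x)
    return out
```

With all radar returns missing, the filter coasts on the nav/terrain prediction, which mirrors the dropout handling described above.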
Duplicate document detection in DocBrowse
NASA Astrophysics Data System (ADS)
Chalana, Vikram; Bruce, Andrew G.; Nguyen, Thien
1998-04-01
Duplicate documents are frequently found in large databases of digital documents, such as those found in digital libraries or in the government declassification effort. Efficient duplicate document detection is important not only to allow querying for similar documents, but also to filter out redundant information in large document databases. We have designed three different algorithms to identify duplicate documents. The first algorithm is based on features extracted from the textual content of a document, the second algorithm is based on wavelet features extracted from the document image itself, and the third algorithm is a combination of the first two. These algorithms are integrated within the DocBrowse system for information retrieval from document images, which is currently under development at MathSoft. DocBrowse supports duplicate document detection by allowing (1) automatic filtering to hide duplicate documents, and (2) ad hoc querying for similar or duplicate documents. We have tested the duplicate document detection algorithms on 171 documents and found that the text-based method has an average 11-point precision of 97.7 percent while the image-based method has an average 11-point precision of 98.9 percent. However, in general, the text-based method performs better when the document contains enough high-quality machine-printed text while the image-based method performs better when the document contains little or no quality machine-readable text.
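A text-based duplicate score in the spirit of the first algorithm (a sketch, not the DocBrowse implementation) can be built from word shingles and Jaccard similarity; the shingle length and threshold are assumptions:

```python
def shingles(text, k=3):
    """Reduce a document to its set of word k-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between the shingle sets of two documents."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def is_duplicate(a, b, threshold=0.8):
    return jaccard(a, b) >= threshold
```

Near-duplicates score between 0 and 1, so the same machinery supports both duplicate filtering and ad hoc similarity queries.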
El-Kommos, Michael E; El-Gizawy, Samia M; Atia, Noha N; Hosny, Noha M
2014-03-01
The combination of certain non-sedating antihistamines (NSA) such as fexofenadine (FXD), ketotifen (KET) and loratadine (LOR) with pseudoephedrine (PSE) or acetaminophen (ACE) is widely used in the treatment of allergic rhinitis, conjunctivitis and chronic urticaria. A rapid, simple, selective and precise densitometric method was developed and validated for simultaneous estimation of six synthetic binary mixtures and their pharmaceutical dosage forms. The method employed thin layer chromatography aluminum plates precoated with silica gel G 60 F254 as the stationary phase. The mobile phases chosen for development gave compact bands for the mixtures FXD-PSE (I), KET-PSE (II), LOR-PSE (III), FXD-ACE (IV), KET-ACE (V) and LOR-ACE (VI) [Retardation factor (Rf ) values were (0.20, 0.32), (0.69, 0.34), (0.79, 0.13), (0.36, 0.70), (0.51, 0.30) and (0.76, 0.26), respectively]. Spectrodensitometric scanning integration was performed at 217, 218, 218, 233, 272 and 251 nm for the mixtures I-VI, respectively. The linear regression data for the calibration plots showed an excellent linear relationship. The method was validated for precision, accuracy, robustness and recovery. Limits of detection and quantitation were calculated. Statistical analysis proved that the method is reproducible and selective for the simultaneous estimation of these binary mixtures. Copyright © 2013 John Wiley & Sons, Ltd.
Oulas, Anastasis; Minadakis, George; Zachariou, Margarita; Sokratous, Kleitos; Bourdakou, Marilena M; Spyrou, George M
2017-11-27
Systems Bioinformatics is a relatively new approach that lies at the intersection of systems biology and classical bioinformatics. It focuses on integrating information across different levels by combining a bottom-up approach, as in systems biology, with a data-driven top-down approach, as in bioinformatics. The advent of omics technologies has provided the stepping-stone for the emergence of Systems Bioinformatics. These technologies provide a spectrum of information ranging from genomics, transcriptomics and proteomics to epigenomics, pharmacogenomics, metagenomics and metabolomics. Systems Bioinformatics is the framework in which systems approaches are applied to such data, setting the level of resolution as well as the boundary of the system of interest and studying the emerging properties of the system as a whole rather than the sum of the properties derived from the system's individual components. A key approach in Systems Bioinformatics is the construction of multiple networks representing each level of the omics spectrum and their integration in a layered network that exchanges information within and between layers. Here, we provide evidence on how Systems Bioinformatics enhances computational therapeutics and diagnostics, hence paving the way to precision medicine. The aim of this review is to familiarize the reader with the emerging field of Systems Bioinformatics and to provide a comprehensive overview of its current state-of-the-art methods and technologies. Moreover, we provide examples of success stories and case studies that utilize such methods and tools to significantly advance research in the fields of systems biology and systems medicine. © The Author 2017. Published by Oxford University Press.
Superior Intraparietal Sulcus Controls the Variability of Visual Working Memory Precision.
Galeano Weber, Elena M; Peters, Benjamin; Hahn, Tim; Bledowski, Christoph; Fiebach, Christian J
2016-05-18
Limitations of working memory (WM) capacity depend strongly on the cognitive resources that are available for maintaining WM contents in an activated state. Increasing the number of items to be maintained in WM was shown to reduce the precision of WM and to increase the variability of WM precision over time. Although WM precision has recently been associated with neural codes, particularly in early sensory cortex, we so far have no understanding of the neural bases underlying the variability of WM precision, and how WM precision is preserved under high load. To fill this gap, we combined human fMRI with computational modeling of behavioral performance in a delayed color-estimation WM task. Behavioral results replicate a reduction of WM precision and an increase of precision variability under high loads (5 > 3 > 1 colors). Load-dependent BOLD signals in primary visual cortex (V1) and superior intraparietal sulcus (IPS), measured during the WM task at 2-4 s after sample onset, were modulated by individual differences in load-related changes in the variability of WM precision. Whereas a stronger load-related BOLD increase in superior IPS was related to smaller increases in precision variability, thus stabilizing WM performance, the reverse was observed for V1. Finally, the detrimental effect of load on behavioral precision and precision variability was accompanied by a load-related decline in the accuracy of decoding the memory stimuli (colors) from left superior IPS. We suggest that the superior IPS may contribute to stabilizing visual WM performance by reducing the variability of memory precision in the face of higher load. This study investigates the neural bases of capacity limitations in visual working memory by combining fMRI with cognitive modeling of behavioral performance, in human participants.
It provides evidence that the superior intraparietal sulcus (IPS) is a critical brain region that influences the variability of visual working memory precision between and within individuals (Fougnie et al., 2012; van den Berg et al., 2012) under increased memory load, possibly in cooperation with perceptual systems of the occipital cortex. These findings substantially extend our understanding of the nature of capacity limitations in visual working memory and their neural bases. Our work underlines the importance of integrating cognitive modeling with univariate and multivariate methods in fMRI research, thus improving our knowledge of brain-behavior relationships. Copyright © 2016 the authors 0270-6474/16/365623-13$15.00/0.
Design and evaluation of precise current integrator for scanning probe microscopy
NASA Astrophysics Data System (ADS)
Raczkowski, Kamil; Piasecki, Tomasz; Rudek, Maciej; Gotszalk, Teodor
2017-03-01
Several of the scanning probe microscopy (SPM) techniques, such as scanning tunnelling microscopy (STM) or conductive atomic force microscopy (C-AFM), rely on precise measurements of the current flowing between the investigated sample and the conductive nanoprobe. The parameters of the current-to-voltage converter (CVC), which should detect currents in the picoampere range, are of utmost importance to those systems, as they determine the microscopes' measuring capabilities. That was the motivation for research on the precise current integrator (PCI), described in this paper, which could be used as the CVC in C-AFM systems. The main design goal of the PCI was to provide a small and versatile device with sub-picoampere resolution and a high dynamic range on the order of nanoamperes. The PCI was based on an integrating amplifier (Texas Instruments DDC112) paired with an STM32F4 microcontroller unit (MCU). The gain and bandwidth of the PCI can easily be changed by varying the integration time and the feedback capacitance. Depending on these parameters it was possible to obtain, for example, 2.15 pA resolution at 688 nA range with 1 kHz bandwidth or 7.4 fA resolution at 0.98 nA range with 10 Hz bandwidth. The measurement of a sinusoidal current with 28 fA amplitude was also presented. The PCI was integrated with the C-AFM system and used in imaging highly ordered pyrolytic graphite (HOPG) and graphene samples.
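The gain/bandwidth trade-off described above follows from simple charge arithmetic: for an integrating CVC, both the full-scale current range and the current resolution scale inversely with the integration time. The sketch below illustrates the relation; the charge values are chosen for illustration and are not taken from the DDC112 datasheet:

```python
def integrator_ranges(q_fullscale_C, q_resolution_C, t_int_s):
    """Current range and resolution of a charge integrator.

    q_fullscale_C: full-scale integrated charge (coulombs, set by feedback C)
    q_resolution_C: smallest resolvable charge step (coulombs)
    t_int_s: integration time (seconds)
    """
    i_range = q_fullscale_C / t_int_s   # full-scale current (A)
    i_res = q_resolution_C / t_int_s    # current resolution (A)
    return i_range, i_res
```

Lengthening the integration time (lower bandwidth) refines the resolution while shrinking the range, matching the trade-off between the 1 kHz and 10 Hz operating points quoted above.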
Progress of targeted genome modification approaches in higher plants.
Cardi, Teodoro; Neal Stewart, C
2016-07-01
Transgene integration in plants is based on illegitimate recombination between non-homologous sequences. The low control over integration site and number of (trans/cis)gene copies might have negative consequences for the expression of transferred genes and their insertion within endogenous coding sequences. The first experiments conducted to use precise homologous recombination for gene integration commenced soon after the first demonstration that transgenic plants could be produced. Modern transgene targeting categories used in plant biology are: (a) homologous recombination-dependent gene targeting; (b) recombinase-mediated site-specific gene integration; (c) oligonucleotide-directed mutagenesis; (d) nuclease-mediated site-specific genome modifications. New tools enable precise gene replacement or stacking with exogenous sequences and targeted mutagenesis of endogenous sequences. The possibility of engineering chimeric designer nucleases, which are able to target virtually any genomic site, and using them to induce double-strand breaks in host DNA creates new opportunities for both applied plant breeding and functional genomics. CRISPR is the most recent technology available for precise genome editing. Its rapid adoption in biological research is based on its inherent simplicity and efficacy. Its utilization, however, depends on available sequence information, especially for genome-wide analysis. We will review the approaches used for genome modification, specifically those for affecting gene integration and modification in higher plants. For each approach, the advantages and limitations will be noted. We also will speculate on how their actual commercial development and implementation in plant breeding will be affected by governmental regulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, David R.; Bershady, Matthew A., E-mail: david.andersen@nrc-cnrc.gc.ca, E-mail: mab@astro.wisc.edu
2013-05-01
Using the integral field unit DensePak on the WIYN 3.5 m telescope we have obtained Hα velocity fields of 39 nearly face-on disks at echelle resolutions. High-quality, uniform kinematic data and a new modeling technique enabled us to derive accurate and precise kinematic inclinations with mean i_kin = 23° for 90% of these galaxies. Modeling the kinematic data as single, inclined disks in circular rotation improves upon the traditional tilted-ring method. We measure kinematic inclinations with a precision in sin i of 25% at 20° and 6% at 30°. Kinematic inclinations are consistent with photometric and inverse Tully-Fisher inclinations when the sample is culled of galaxies with kinematic asymmetries, for which we give two specific prescriptions. Kinematic inclinations can therefore be used in statistical "face-on" Tully-Fisher studies. A weighted combination of multiple, independent inclination measurements yields the most precise and accurate inclination. Combining inverse Tully-Fisher inclinations with kinematic inclinations yields joint probability inclinations with a precision in sin i of 10% at 15° and 5% at 30°. This level of precision makes accurate mass decompositions of galaxies possible even at low inclination. We find scaling relations between rotation speed and disk-scale length identical to results from more inclined samples. We also observe the trend of more steeply rising rotation curves with increased rotation speed and light concentration. This trend appears to be uncorrelated with disk surface brightness.
Dimensional measurement of micro parts with high aspect ratio in HIT-UOI
NASA Astrophysics Data System (ADS)
Dang, Hong; Cui, Jiwen; Feng, Kunpeng; Li, Junying; Zhao, Shiyuan; Zhang, Haoran; Tan, Jiubin
2016-11-01
Micro parts with high aspect ratios have been widely used in different fields, including the aerospace and defense industries, while the dimensional measurement of these micro parts remains a challenge in the field of precision measurement and instrumentation. To address this challenge, several probes for the precision measurement of micro parts have been proposed by researchers at the Center of Ultra-precision Optoelectronic Instrument (UOI), Harbin Institute of Technology (HIT). In this paper, optical fiber probes with structures of spherical coupling (SC) with double optical fibers, micro focal-length collimation (MFL-collimation) and fiber Bragg grating (FBG) are described in detail. After introducing the sensing principles, both advantages and disadvantages of these probes are analyzed. In order to improve the performance of these probes, several approaches are proposed. A two-dimensional orthogonal path arrangement is proposed to enhance the dimensional measurement ability of MFL-collimation probes, while a high-resolution, fast interrogation method based on a differential technique is used to improve the accuracy and dynamic characteristics of the FBG probes. Experiments for these special structural fiber probes are presented with a focus on the characteristics of the probes, and engineering applications are also presented to prove their availability. In order to improve the accuracy and the instantaneity of the engineering applications, several techniques are used in probe integration. The effectiveness of these fiber probes was therefore verified through both analysis and experiments.
Integrated high pressure manifold for thermoplastic microfluidic devices
NASA Astrophysics Data System (ADS)
Aghvami, S. Ali; Fraden, Seth
2017-11-01
We introduce an integrated tubing manifold for thermoplastic microfluidic chips that tolerates high pressure. In contrast to the simple tubing connections of PDMS microfluidic devices, tube connection has been challenging for plastic microfluidics. Our integrated manifold connection tolerates 360 psi, while conventional PDMS connections fail at 50 psi. Important design considerations are the incorporation of a quick-connect, leak-free, high-pressure manifold for the inlets and outlets on the lid, and registration marks that allow the precise alignment of the inlets and outlets. In our method, devices are comprised of two molded pieces joined together to create a sealed device. The first piece contains the microfluidic features and the second contains the inlet and outlet manifold, a frame for rigidity, and a viewing window. The mold for the lid with integrated manifold is CNC milled from aluminium. A cone-shaped PDMS component, which acts as an O-ring, seals the connection between the molded manifold and the tubing. The lid piece with integrated inlets and outlets is a standard piece and can be used for different chips and designs. Sealing the thermoplastic device is accomplished by timed immersion of the lid in a mixture of volatile and non-volatile solvents, followed by application of heat and pressure.
Lubow, Bruce C; Ransom, Jason I
2016-01-01
Reliably estimating wildlife abundance is fundamental to effective management. Aerial surveys are one of the only spatially robust tools for estimating large mammal populations, but statistical sampling methods are required to address detection biases that affect the accuracy and precision of the estimates. Although various methods for correcting aerial survey bias are employed on large mammal species around the world, these have rarely been rigorously validated. Several populations of feral horses (Equus caballus) in the western United States have been intensively studied, resulting in identification of all unique individuals. This provided a rare opportunity to test aerial survey bias correction on populations of known abundance. We hypothesized that a hybrid method combining simultaneous double-observer and sightability bias correction techniques would accurately estimate abundance. We validated this integrated technique on populations of known size and also on a pair of surveys before and after a known number of animals was removed. Our analysis identified several covariates across the surveys that explained and corrected biases in the estimates. All six tests on known populations produced estimates with deviations from the known value ranging from -8.5% to +13.7% and <0.7 standard errors. Precision varied widely, from 6.1% CV to 25.0% CV. In contrast, the pair of surveys conducted around a known management removal produced an estimated change in population between the surveys that was significantly larger than the known reduction. Although the deviation between these estimates was only 9.1%, the precision estimate (CV = 1.6%) may have been artificially low. It was apparent that use of a helicopter in those surveys perturbed the horses, introducing detection error and heterogeneity in a manner that could not be corrected by our statistical models. Our results validate the hybrid method, highlight its potentially broad applicability, identify some limitations, and provide insight and guidance for improving survey designs. PMID:27139732
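The core double-observer logic in the hybrid correction above can be sketched with a minimal example (hypothetical counts; the authors' actual models also include sightability covariates, which are omitted here):

```python
# Minimal double-observer abundance sketch (hypothetical counts).
# Two observers survey simultaneously; the overlap in their detections
# yields per-observer detection probabilities, which correct the raw count.

def double_observer_estimate(n1_only, n2_only, n_both):
    """Estimate abundance from simultaneous double-observer counts.

    n1_only: groups seen by observer 1 but not observer 2
    n2_only: groups seen by observer 2 but not observer 1
    n_both:  groups seen by both observers
    """
    # Conditional detection probabilities (Lincoln-Petersen logic)
    p1 = n_both / (n2_only + n_both)   # P(obs1 detects | obs2 detected)
    p2 = n_both / (n1_only + n_both)   # P(obs2 detects | obs1 detected)
    seen = n1_only + n2_only + n_both
    # Probability that at least one observer detects a group
    p_any = 1.0 - (1.0 - p1) * (1.0 - p2)
    return seen / p_any

est = double_observer_estimate(n1_only=15, n2_only=10, n_both=60)
```

With these counts the 85 detected groups are inflated to an estimate of 87.5, since the joint detection probability is about 0.97.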
Limits to the precision of gradient sensing with spatial communication and temporal integration.
Mugler, Andrew; Levchenko, Andre; Nemenman, Ilya
2016-02-09
Gradient sensing requires at least two measurements at different points in space. These measurements must then be communicated to a common location to be compared, a process that is unavoidably noisy. Although much is known about the limits of measurement precision by cells, the limits placed by communication are not understood. Motivated by recent experiments, we derive the fundamental limits to the precision of gradient sensing in a multicellular system, accounting for communication and temporal integration. The gradient is estimated by comparing a "local" and a "global" molecular reporter of the external concentration, where the global reporter is exchanged between neighboring cells. Using the fluctuation-dissipation framework, we find, in contrast to the case when communication is ignored, that precision saturates with the number of cells independently of the measurement duration, because communication establishes a maximum length scale over which sensory information can be reliably conveyed. Surprisingly, we also find that precision is improved if the local reporter is exchanged between cells as well, albeit more slowly than the global reporter: although exchange of the local reporter weakens the comparison, it decreases the measurement noise. We term such a model "regional excitation-global inhibition." Our results demonstrate that fundamental sensing limits are necessarily sharpened when the need to communicate information is taken into account.
GPS/GLONASS RAIM augmentation to WAAS for CAT 1 precision approach
DOT National Transportation Integrated Search
1997-06-30
This paper deals with the potential use of Receiver Autonomous Integrity Monitoring (RAIM) to supplement the FAA's Wide Area Augmentation System (WAAS). Integrity refers to the capability of a navigation or landing system to provide a timely warning...
HARNESSING BIG DATA FOR PRECISION MEDICINE: INFRASTRUCTURES AND APPLICATIONS.
Yu, Kun-Hsing; Hart, Steven N; Goldfeder, Rachel; Zhang, Qiangfeng Cliff; Parker, Stephen C J; Snyder, Michael
2017-01-01
Precision medicine is a health management approach that accounts for individual differences in genetic background and environmental exposure. With the recent advancement of high-throughput omics profiling technologies, the collection of large study cohorts, and the development of data mining algorithms, big data in biomedicine is expected to provide novel insights into health and disease states, which can be translated into personalized disease prevention and treatment plans. However, the petabytes of biomedical data generated by multiple measurement modalities pose significant challenges for data analysis, integration, storage, and result interpretation. In addition, patient privacy preservation, coordination between participating medical centers and data analysis working groups, and discrepancies in data sharing policies remain important topics of discussion. In this workshop, we invite experts in omics integration, biobank research, and data management to share their perspectives on leveraging big data to enable precision medicine. Workshop website: http://tinyurl.com/PSB17BigData; HashTag: #PSB17BigData.
High spatial and temporal resolution cell manipulation techniques in microchannels.
Novo, Pedro; Dell'Aica, Margherita; Janasek, Dirk; Zahedi, René P
2016-03-21
The advent of microfluidics has enabled thorough control of cell manipulation experiments in so-called labs on chips. Labs on chips foster the integration of actuation and detection systems and require only minute sample and reagent amounts. Typically employed microfluidic structures have dimensions similar to those of cells, enabling precise spatial and temporal control of individual cells and their local environments. Several strategies for high spatio-temporal control of cells in microfluidics have been reported in recent years, namely methods relying on careful design of the microfluidic structures (e.g. pinched flow), on integration of actuators (e.g. electrodes or magnets for dielectro-, acousto-, and magneto-phoresis), or on combinations thereof. This review presents recent developments in cell experiments in microfluidics in two parts: an introduction to spatial control of cells in microchannels, followed by special emphasis on high temporal control of cell-stimulus reaction and quenching. Finally, the present state of the art is discussed in light of future perspectives and challenges for translating these devices into routine applications.
Global Ocean Integrals and Means, with Trend Implications.
Wunsch, Carl
2016-01-01
Understanding the ocean requires determining and explaining global integrals and equivalent average values of temperature (heat), salinity (freshwater and salt content), sea level, energy, and other properties. Attempts to determine means, integrals, and climatologies have been hindered by thinly and poorly distributed historical observations in a system in which both signals and background noise are spatially very inhomogeneous, leading to potentially large temporal bias errors that must be corrected at the 1% level or better. With the exception of the upper ocean in the current altimetric-Argo era, no clear documentation exists on the best methods for estimating means and their changes for quantities such as heat and freshwater at the levels required for anthropogenic signals. Underestimates of trends are as likely as overestimates; for example, recent inferences that multidecadal oceanic heat uptake has been greatly underestimated are plausible. For new or augmented observing systems, calculating the accuracies and precisions of global, multidecadal sampling densities for the full water column is necessary to avoid the irrecoverable loss of scientifically essential information.
Inertial aided cycle slip detection and identification for integrated PPP GPS and INS.
Du, Shuang; Gao, Yang
2012-10-25
The recently developed integrated Precise Point Positioning (PPP) GPS/INS system can be useful in many applications, such as UAV navigation, land vehicle/machine automation, and mobile mapping systems. Since carrier phase measurements are the primary observables in PPP GPS, cycle slips, which often occur due to high dynamics, signal obstructions, and low satellite elevation, must be detected and repaired to ensure navigation performance. In this research, a new algorithm for cycle slip detection and identification has been developed. With aiding from the INS, the proposed method jointly uses widelane (WL) and extra-widelane (EWL) phase combinations to uniquely determine cycle slips on the L1 and L2 frequencies. To verify the efficiency of the algorithm, both tactical-grade and consumer-grade IMUs were tested using a real dataset collected in two field tests. The results indicate that the proposed algorithm can efficiently detect and identify cycle slips and subsequently improve the navigation performance of the integrated system.
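As a rough illustration of how a widelane combination exposes cycle slips, consider the standard Melbourne-Wuebbena (MW) combination, which is insensitive to geometry, clocks, and (to first order) the ionosphere, so a jump between epochs flags a slip. This is a generic sketch, not the authors' INS-aided algorithm; the frequencies are standard GPS constants and the epoch series is synthetic:

```python
# Sketch: cycle-slip flagging via the Melbourne-Wuebbena widelane
# combination, evaluated on synthetic data.
C = 299792458.0                  # speed of light, m/s
F1, F2 = 1575.42e6, 1227.60e6    # GPS L1/L2 carrier frequencies, Hz
LAM_WL = C / (F1 - F2)           # widelane wavelength, ~0.86 m

def mw_cycles(L1_m, L2_m, P1_m, P2_m):
    """Melbourne-Wuebbena combination in widelane cycles.
    L1_m/L2_m: carrier phases in meters; P1_m/P2_m: pseudoranges in meters."""
    phase_wl = (F1 * L1_m - F2 * L2_m) / (F1 - F2)   # widelane phase
    code_nl = (F1 * P1_m + F2 * P2_m) / (F1 + F2)    # narrowlane code
    return (phase_wl - code_nl) / LAM_WL

def flag_slips(mw_series, threshold=0.5):
    """Flag epochs where the MW ambiguity jumps by > threshold cycles."""
    return [i for i in range(1, len(mw_series))
            if abs(mw_series[i] - mw_series[i - 1]) > threshold]

# Synthetic MW ambiguity series: noise of a few hundredths of a cycle,
# with a 3-cycle slip introduced at epoch 3.
slips = flag_slips([10.01, 9.99, 10.02, 13.01, 12.98])
```

In practice the threshold must account for code noise and multipath; the WL/EWL pair used by the authors resolves the slip sizes on L1 and L2 individually, which this single combination cannot do.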
Development of a Pulsed 2-Micron Integrated Path Differential Absorption Lidar for CO2 Measurement
NASA Technical Reports Server (NTRS)
Singh, Upendra N.; Yu, Jirong; Petros, Mulugeta; Refaat, Tamer
2013-01-01
Atmospheric carbon dioxide (CO2) is an important greenhouse gas that significantly contributes to the carbon cycle and the global radiation budget on Earth. Active remote sensing of CO2 is important for addressing several limitations of passive sensors. A 2-micron double-pulsed Integrated Path Differential Absorption (IPDA) lidar instrument for ground-based and airborne atmospheric CO2 concentration measurements via the direct detection method is being developed at NASA Langley Research Center. This active remote sensing instrument will provide an alternative approach to measuring atmospheric CO2 concentrations with significant advantages. A high-energy pulsed approach provides high-precision measurement capability through a high signal-to-noise ratio and unambiguously eliminates contamination from aerosols and clouds that can bias the IPDA measurement. Commercial off-the-shelf components are implemented for the detection system. Instrument integration is presented in this paper, along with background on CO2 measurement at NASA Langley Research Center.
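The IPDA retrieval principle can be sketched numerically: the energy-normalized ratio of off-line to on-line returns gives the one-way differential absorption optical depth (DAOD), which, divided by the differential absorption cross section and the path length, yields the path-averaged CO2 number density. The function names and numbers below are illustrative, not instrument values:

```python
import math

def daod(p_on, p_off, e_on, e_off):
    """One-way differential absorption optical depth from the on-line and
    off-line return powers, each normalized by its transmitted pulse
    energy (the factor 0.5 accounts for the two-way path)."""
    return 0.5 * math.log((p_off / e_off) / (p_on / e_on))

def mean_number_density(p_on, p_off, e_on, e_off, delta_sigma, path_m):
    """Path-averaged CO2 number density (molecules/m^3), given the
    differential absorption cross section delta_sigma in m^2."""
    return daod(p_on, p_off, e_on, e_off) / (delta_sigma * path_m)

# Example: an on-line return attenuated by exp(-1) relative to off-line
# corresponds to a one-way DAOD of 0.5.
tau = daod(math.exp(-1.0), 1.0, 1.0, 1.0)
```

The high-energy pulsed approach described above improves this retrieval mainly by raising the signal-to-noise ratio of `p_on` and `p_off`, which propagates directly into the DAOD precision.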
Toward precision medicine in Alzheimer's disease.
Reitz, Christiane
2016-03-01
In Western societies, Alzheimer's disease (AD) is the most common form of dementia and the sixth leading cause of death. In recent years, the concept of precision medicine, an approach to disease prevention and treatment that is personalized to an individual's specific pattern of genetic variability, environment, and lifestyle factors, has emerged. While for some diseases, in particular select cancers and a few monogenic disorders such as cystic fibrosis, significant advances in precision medicine have been made over the past years, for most other diseases precision medicine is only in its beginnings. To advance the application of precision medicine to a wider spectrum of disorders, governments around the world are starting to launch Precision Medicine Initiatives, major efforts to generate the extensive scientific knowledge needed to integrate the model of precision medicine into everyday clinical practice. In this article we summarize the state of precision medicine in AD, review major obstacles to its development, and discuss its benefits in this highly prevalent, clinically and pathologically complex disease.
Integrity monitoring of IGS products
NASA Technical Reports Server (NTRS)
Zumberge, James F.; Plag, H. -P.
2005-01-01
The IGS has successfully produced precise GPS and GLONASS transmitter parameters, coordinates of IGS tracking stations, Earth rotation parameters, and atmospheric parameters. In this paper we discuss the concepts of integrity monitoring, system monitoring, and performance assessment, all in the context of IGS products. We report on a recent survey of IGS product users, and propose an integrity strategy for the IGS.
NASA Technical Reports Server (NTRS)
Sellers, Piers
2012-01-01
Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx. 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx. 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid-area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method (use of a grid-averaged soil wetness value) can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially integrated wetness stress term for the whole grid area, which then permits calculation of grid-area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly, and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
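The binning-and-integration step described above can be sketched as follows. The nonlinear stress function here is an illustrative placeholder, not the paper's parameterization; the point is that averaging the stress over the binned pdf differs from applying the function once to the grid-mean wetness:

```python
# Sketch of binned-pdf upscaling: a nonlinear stress function is
# evaluated per wetness bin and probability-weighted, rather than
# applied once to the grid-mean soil wetness.

def stress(w):
    """Illustrative saturating wetness-stress function on [0, 1]."""
    return w ** 2 / (0.04 + w ** 2)

def grid_stress_binned(bin_centers, bin_probs):
    """Probability-weighted (upscaled) stress over the binned pdf."""
    assert abs(sum(bin_probs) - 1.0) < 1e-9   # pdf must integrate to 1
    return sum(p * stress(w) for w, p in zip(bin_centers, bin_probs))

# 10 equal-probability bins spanning dry to wet sub-areas of the grid cell
centers = [(i + 0.5) / 10 for i in range(10)]
probs = [0.1] * 10

binned = grid_stress_binned(centers, probs)
# Naive alternative: evaluate the function at the grid-mean wetness
naive = stress(sum(w * p for w, p in zip(centers, probs)))
```

Because the function is nonlinear, `binned` and `naive` disagree (a Jensen-type error); this gap is exactly the inaccuracy the binning method removes, and the binned sum still costs only one pass per time step.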
NASA Astrophysics Data System (ADS)
Oliver, Karen D.; Cousett, Tamira A.; Whitaker, Donald A.; Smith, Luther A.; Mukerjee, Shaibal; Stallings, Casson; Thoma, Eben D.; Alston, Lillian; Colon, Maribel; Wu, Tai; Henkle, Stacy
2017-08-01
A sample integrity evaluation and an interlaboratory comparison were conducted for U.S. Environmental Protection Agency (EPA) Methods 325A and 325B for diffusively monitoring benzene and other selected volatile organic compounds (VOCs) using Carbopack X sorbent tubes. To evaluate sample integrity, VOC samples were refrigerated for up to 240 days and analyzed using thermal desorption/gas chromatography-mass spectrometry at the EPA Office of Research and Development laboratory in Research Triangle Park, NC, USA. For the interlaboratory comparison, three commercial analytical laboratories were asked to follow Method 325B when analyzing samples of VOCs that were collected in field and laboratory settings for EPA studies. Overall results indicate that the selected VOCs collected diffusively on sorbent tubes were generally stable for 6 months or longer when samples were refrigerated, suggesting that the specified maximum 30-day storage time for VOCs collected diffusively on Carbopack X passive samplers and analyzed using Method 325B could be relaxed. Interlaboratory comparison results were in agreement for the challenge samples collected diffusively in an exposure chamber in the laboratory, with most measurements within ±25% of the theoretical concentration. Statistically significant differences among laboratories for ambient challenge samples were small, less than 1 part per billion by volume (ppbv). Results from all laboratories exhibited good precision and generally agreed well with each other.
Mapping snow depth within a tundra ecosystem using multiscale observations and Bayesian methods
Wainwright, Haruko M.; Liljedahl, Anna K.; Dafflon, Baptiste; ...
2017-04-03
This paper compares and integrates different strategies to characterize the variability of end-of-winter snow depth and its relationship to topography in ice-wedge polygon tundra of Arctic Alaska. Snow depth was measured using in situ snow depth probes and estimated using ground-penetrating radar (GPR) surveys and the photogrammetric detection and ranging (phodar) technique with an unmanned aerial system (UAS). We found that GPR data provided high-precision estimates of snow depth (RMSE = 2.9 cm), with a spatial sampling of 10 cm along transects. Phodar-based approaches provided snow depth estimates in a less laborious manner compared to GPR and probing, while yielding high precision (RMSE = 6.0 cm) and fine spatial sampling (4 cm × 4 cm). We then investigated the spatial variability of snow depth and its correlation to micro- and macrotopography using the snow-free lidar digital elevation map (DEM) and the wavelet approach. We found that the end-of-winter snow depth was highly variable over short (several meter) distances, and the variability was correlated with microtopography. Microtopographic lows (i.e., troughs and centers of low-centered polygons) were filled in with snow, which resulted in a smooth and even snow surface following macrotopography. We developed and implemented a Bayesian approach to integrate the snow-free lidar DEM and multiscale measurements (probe and GPR), as well as the topographic correlation, for estimating snow depth over the landscape. Our approach led to high-precision estimates of snow depth (RMSE = 6.0 cm) at 0.5 m resolution over the lidar domain (750 m × 700 m).
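At its core, the Bayesian integration described above amounts to fusing a topography-based prior for snow depth with precision-weighted measurements. A minimal one-pixel Gaussian-fusion sketch follows; the regression coefficients and variances are assumptions for illustration, not values from the paper:

```python
# Minimal Gaussian-fusion sketch: a topography-based prior for snow
# depth at one pixel is updated with a noisy GPR measurement by
# inverse-variance (precision) weighting. Coefficients are made up.

def prior_from_dem(elev_m, a=-0.05, b=0.6):
    """Hypothetical linear topography-to-snow-depth prior (meters):
    microtopographic lows accumulate more snow."""
    return a * elev_m + b

def fuse(prior_mean, prior_var, obs, obs_var):
    """Posterior mean/variance of depth after one Gaussian observation."""
    gain = prior_var / (prior_var + obs_var)
    post_mean = prior_mean + gain * (obs - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# Prior at a 4 m elevation pixel, updated with a GPR reading of 0.55 m
# whose noise level mimics the reported GPR RMSE of ~2.9 cm.
mean, var = fuse(prior_from_dem(4.0), prior_var=0.04,
                 obs=0.55, obs_var=0.029 ** 2)
```

The posterior variance is smaller than both the prior and the measurement variance, which is how sparse but precise GPR transects tighten the DEM-based estimate across the landscape; the paper's full approach additionally propagates the topographic correlation to unmeasured pixels.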
[Holistic integrative medicine: the road to the future of the development of burn medicine].
Fan, D M
2017-01-20
Holistic integrative medicine is the road to the future of the development of burn medicine. Not only burn medicine but medicine as a whole is gradually entering the era of holistic integrative medicine. Holistic integrative medicine differs from translational medicine, evidence-based medicine, and precision medicine: it integrates the most advanced knowledge and theories from across medical fields with the most effective practices and experiences of the clinical specialties to form a new medical system.