Sample records for "method requiring minimal"

  1. Superiorization with level control

    NASA Astrophysics Data System (ADS)

    Cegielski, Andrzej; Al-Musallam, Fadhel

    2017-04-01

    The convex feasibility problem is to find a common point of a finite family of closed convex subsets. In many applications one requires something more, namely finding a common point of closed convex subsets which minimizes a continuous convex function. The latter requirement leads to an application of the superiorization methodology, which sits between methods for the convex feasibility problem and methods for convex constrained minimization. Inspired by the superiorization idea, we introduce a method which sequentially applies a long-step algorithm to a sequence of convex feasibility problems; the method employs quasi-nonexpansive operators as well as subgradient projections with level control and does not require evaluation of the metric projection. We replace a perturbation of the iterations (applied in the superiorization methodology) by a perturbation of the current level in minimizing the objective function. We consider the method in Euclidean space in order to guarantee strong convergence, although the method is well defined in a Hilbert space.
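
    A minimal sketch of the level-control idea, not the authors' algorithm: feasibility is pursued by subgradient projections onto the constraint sets, the objective is handled as one more set {f <= level}, and the level is lowered whenever it is met. All names and constants below are illustrative.

      import numpy as np

      def subgrad_step(x, g_val, g_sub):
          """One subgradient projection onto {g <= 0}; no-op if already inside."""
          if g_val <= 0.0:
              return x
          return x - (g_val / np.dot(g_sub, g_sub)) * g_sub

      def level_controlled_feasibility(x, constraints, f, f_sub, level,
                                       shrink=0.5, iters=200):
          for _ in range(iters):
              for c_val, c_sub in constraints:       # feasibility sweep
                  x = subgrad_step(x, c_val(x), c_sub(x))
              fx = f(x)
              if fx <= level:                        # level reached: tighten it
                  level = fx - shrink * abs(fx)
              else:                                  # objective as one more set
                  x = subgrad_step(x, fx - level, f_sub(x))
          return x, level

      # Toy usage: minimize ||x||^2 over the halfspace x0 + x1 >= 1.
      c = (lambda x: 1.0 - x[0] - x[1], lambda x: np.array([-1.0, -1.0]))
      x, lev = level_controlled_feasibility(
          np.array([3.0, -1.0]), [c],
          f=lambda x: float(x @ x), f_sub=lambda x: 2 * x, level=10.0)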

  2. Minimization search method for data inversion

    NASA Technical Reports Server (NTRS)

    Fymat, A. L.

    1975-01-01

    Technique has been developed for determining values of selected subsets of independent variables in mathematical formulations. Required computation time increases with first power of the number of variables. This is in contrast with classical minimization methods for which computational time increases with third power of the number of variables.

  3. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches.

    PubMed

    Almutairy, Meznah; Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method.
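
    The two schemes are easy to contrast in a few lines. The sketch below is illustrative, not the paper's code: fixed sampling keeps every w-th k-mer start, while minimizer sampling keeps the lexicographically smallest k-mer in every window of w consecutive k-mers.

      def fixed_sampling(seq, k, w):
          """Start positions of every w-th k-mer."""
          return sorted({i for i in range(0, len(seq) - k + 1, w)})

      def minimizer_sampling(seq, k, w):
          """Smallest k-mer in each window of w consecutive k-mers."""
          kept = set()
          for start in range(len(seq) - k - w + 2):
              window = range(start, start + w)
              kept.add(min(window, key=lambda i: seq[i:i + k]))
          return sorted(kept)

      seq = "ACGTACGTTGCAACGT"
      print(fixed_sampling(seq, k=4, w=3))      # fewer index entries
      print(minimizer_sampling(seq, k=4, w=3))  # query k-mers can be sampled too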

  4. Comparing fixed sampling with minimizer sampling when using k-mer indexes to find maximal exact matches

    PubMed Central

    Torng, Eric

    2018-01-01

    Bioinformatics applications and pipelines increasingly use k-mer indexes to search for similar sequences. The major problem with k-mer indexes is that they require lots of memory. Sampling is often used to reduce index size and query time. Most applications use one of two major types of sampling: fixed sampling and minimizer sampling. It is well known that fixed sampling will produce a smaller index, typically by roughly a factor of two, whereas it is generally assumed that minimizer sampling will produce faster query times since query k-mers can also be sampled. However, no direct comparison of fixed and minimizer sampling has been performed to verify these assumptions. We systematically compare fixed and minimizer sampling using the human genome as our database. We use the resulting k-mer indexes for fixed sampling and minimizer sampling to find all maximal exact matches between our database, the human genome, and three separate query sets, the mouse genome, the chimp genome, and an NGS data set. We reach the following conclusions. First, using larger k-mers reduces query time for both fixed sampling and minimizer sampling at a cost of requiring more space. If we use the same k-mer size for both methods, fixed sampling requires typically half as much space whereas minimizer sampling processes queries only slightly faster. If we are allowed to use any k-mer size for each method, then we can choose a k-mer size such that fixed sampling both uses less space and processes queries faster than minimizer sampling. The reason is that although minimizer sampling is able to sample query k-mers, the number of shared k-mer occurrences that must be processed is much larger for minimizer sampling than fixed sampling. In conclusion, we argue that for any application where each shared k-mer occurrence must be processed, fixed sampling is the right sampling method. PMID:29389989

  5. An efficient graph theory based method to identify every minimal reaction set in a metabolic network

    PubMed Central

    2014-01-01

    Background Development of cells with minimal metabolic functionality is gaining importance due to their efficiency in producing chemicals and fuels. Existing computational methods to identify minimal reaction sets in metabolic networks are computationally expensive. Further, they identify only one of the several possible minimal reaction sets. Results In this paper, we propose an efficient graph theory based recursive optimization approach to identify all minimal reaction sets. Graph theoretical insights offer systematic methods to not only reduce the number of variables in math programming and increase its computational efficiency, but also provide efficient ways to find multiple optimal solutions. The efficacy of the proposed approach is demonstrated using case studies from Escherichia coli and Saccharomyces cerevisiae. In case study 1, the proposed method identified three minimal reaction sets, each containing 38 reactions, in the Escherichia coli central metabolic network with 77 reactions. Analysis of these three minimal reaction sets revealed that one of them is more suitable for developing a minimal metabolism cell compared to the other two due to a practically achievable internal flux distribution. In case study 2, the proposed method identified 256 minimal reaction sets from the Saccharomyces cerevisiae genome scale metabolic network with 620 reactions. The proposed method required only 4.5 hours to identify all the 256 minimal reaction sets and has shown a significant reduction (approximately 80%) in the solution time when compared to the existing methods for finding minimal reaction sets. Conclusions Identification of all minimal reaction sets in metabolic networks is essential since different minimal reaction sets have different properties that affect bioprocess development. The proposed method correctly identified all minimal reaction sets in both case studies. The proposed method is computationally efficient compared to other methods for finding minimal reaction sets and useful to employ with genome-scale metabolic networks. PMID:24594118

  6. Cell-free protein synthesis in micro compartments: building a minimal cell from biobricks.

    PubMed

    Jia, Haiyang; Heymann, Michael; Bernhard, Frank; Schwille, Petra; Kai, Lei

    2017-10-25

    The construction of a minimal cell that exhibits the essential characteristics of life is a great challenge in the field of synthetic biology. Assembling a minimal cell requires multidisciplinary expertise from physics, chemistry and biology. Scientists from different backgrounds tend to define the essence of 'life' differently and have thus proposed different artificial cell models possessing one or several essential features of living cells. Using the tools and methods of molecular biology, the bottom-up engineering of a minimal cell appears within reach. However, several challenges still remain. In particular, the integration of individual sub-systems that is required to achieve a self-reproducing cell model presents a complex optimization challenge. For example, multiple self-organisation and self-assembly processes have to be carefully tuned. We review advances in new methods and techniques for cell-free protein synthesis and micro-fabrication, with regard to their potential to resolve these challenges and to accelerate the development of minimal cells. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Optimization Methods in Sherpa

    NASA Astrophysics Data System (ADS)

    Siemiginowska, Aneta; Nguyen, Dan T.; Doe, Stephen M.; Refsdal, Brian L.

    2009-09-01

    Forward fitting is a standard technique used to model X-ray data. A statistic, usually weighted chi^2 or Poisson likelihood (e.g. Cash), is minimized in the fitting process to obtain a set of the best model parameters. Astronomical models often have complex forms with many parameters that can be correlated (e.g. an absorbed power law). Minimization is not trivial in such settings, as the statistical parameter space becomes multimodal and finding the global minimum is hard. Standard minimization algorithms can be found in many libraries of scientific functions, but they are usually focused on specific functions. However, Sherpa, designed as a general fitting and modeling application, requires very robust optimization methods that can be applied to a variety of astronomical data (X-ray spectra, images, timing, optical data, etc.). We developed several optimization algorithms in Sherpa targeting a wide range of minimization problems. Two local minimization methods were built: the Levenberg-Marquardt algorithm was obtained from the MINPACK subroutine LMDIF and modified to achieve the required robustness, and the Nelder-Mead simplex method was implemented in-house based on variations of the algorithm described in the literature. A global-search Monte Carlo method was implemented following the differential evolution algorithm presented by Storn and Price (1997). We will present the methods in Sherpa and discuss their use cases. We will focus on the application to Chandra data, showing both 1D and 2D examples. This work is supported by NASA contract NAS8-03060 (CXC).
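
    Sherpa's own interfaces differ, but the three strategies named in the abstract have close SciPy analogues; the hedged sketch below fits a toy power law with Levenberg-Marquardt, Nelder-Mead, and Storn-Price differential evolution.

      import numpy as np
      from scipy.optimize import (differential_evolution, least_squares,
                                  minimize)

      rng = np.random.default_rng(0)
      x = np.linspace(1.0, 10.0, 50)
      y = 5.0 * x**-1.5 + rng.normal(0.0, 0.05, x.size)  # toy power-law data

      def resid(p):                         # residuals for a chi^2-style fit
          amp, gamma = p
          return amp * x**-gamma - y

      lm = least_squares(resid, x0=[1.0, 1.0], method="lm")      # local
      nm = minimize(lambda p: np.sum(resid(p)**2), x0=[1.0, 1.0],
                    method="Nelder-Mead")                        # local
      de = differential_evolution(lambda p: np.sum(resid(p)**2),
                                  bounds=[(0.1, 20.0), (0.1, 5.0)],
                                  seed=1)                        # global
      print(lm.x, nm.x, de.x)               # all should land near (5, 1.5)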

  8. Steady state method to determine unsaturated hydraulic conductivity at the ambient water potential

    DOEpatents

    Hubbell, Joel M.

    2014-08-19

    The present invention relates to a new laboratory apparatus for measuring the unsaturated hydraulic conductivity at a single water potential. One or more embodiments of the invented apparatus can be used over a wide range of water potential values within the tensiometric range, requires minimal laboratory preparation, and operates unattended for extended periods with minimal supervision.

  9. Method of grid generation

    DOEpatents

    Barnette, Daniel W.

    2002-01-01

    The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.

  10. Application of augmented-Lagrangian methods in meteorology: Comparison of different conjugate-gradient codes for large-scale minimization

    NASA Technical Reports Server (NTRS)

    Navon, I. M.

    1984-01-01

    A Lagrange multiplier method using techniques developed by Bertsekas (1982) was applied to solving the problem of enforcing simultaneous conservation of the nonlinear integral invariants of the shallow water equations on a limited area domain. This application of nonlinear constrained optimization is of the large dimensional type and the conjugate gradient method was found to be the only computationally viable method for the unconstrained minimization. Several conjugate-gradient codes were tested and compared for increasing accuracy requirements. Robustness and computational efficiency were the principal criteria.

  11. 40 CFR 230.5 - General procedures to be followed.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ....5 Section 230.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) OCEAN DUMPING... discharge in § 230.10(a) through (d), the measures to minimize adverse impact of subpart H, and the required... minimize the environmental impact of the discharge, based upon the specialized methods of minimization of...

  12. Optimum runway orientation relative to crosswinds

    NASA Technical Reports Server (NTRS)

    Falls, L. W.; Brown, S. C.

    1972-01-01

    Specific magnitudes of crosswinds may exist that could be constraints to the success of an aircraft mission such as the landing of the proposed space shuttle. A method is required to determine the orientation or azimuth of the proposed runway which will minimize the probability of certain critical crosswinds. Two procedures for obtaining the optimum runway orientation relative to minimizing a specified crosswind speed are described and illustrated with examples. The empirical procedure requires only hand calculations on an ordinary wind rose. The theoretical method utilizes wind statistics computed after the bivariate normal elliptical distribution is applied to a data sample of component winds. This method requires only the assumption that the wind components are bivariate normally distributed. This assumption seems to be reasonable. Studies are currently in progress for testing wind components for bivariate normality for various stations. The close agreement between the theoretical and empirical results for the example chosen substantiates the bivariate normal assumption.
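
    The theoretical method reduces to one-dimensional normal probabilities once the wind components are projected onto the axis normal to a candidate runway. A sketch under that bivariate-normal assumption (all numbers below are made up):

      import numpy as np
      from scipy.stats import norm

      mu = np.array([2.0, 5.0])                  # mean wind components (u east, v north), kt
      cov = np.array([[9.0, 2.0], [2.0, 16.0]])  # component covariance

      def p_crosswind_exceeds(azimuth_deg, limit):
          th = np.radians(azimuth_deg)
          a = np.array([np.cos(th), -np.sin(th)])  # unit normal to the runway
          m, s = a @ mu, np.sqrt(a @ cov @ a)      # crosswind ~ N(m, s^2)
          return 1.0 - (norm.cdf((limit - m) / s) - norm.cdf((-limit - m) / s))

      azimuths = np.arange(0.0, 180.0, 1.0)      # a runway serves both headings
      p = [p_crosswind_exceeds(az, limit=15.0) for az in azimuths]
      print("best azimuth:", azimuths[int(np.argmin(p))], "deg")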

  13. Finite Element Analysis in Concurrent Processing: Computational Issues

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett

    2004-01-01

    The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.

  14. Efficient linear algebra routines for symmetric matrices stored in packed form.

    PubMed

    Ahlrichs, Reinhart; Tsereteli, Kakha

    2002-01-30

    Quantum chemistry methods require various linear algebra routines for symmetric matrices, for example, diagonalization or Cholesky decomposition for positive definite matrices. We present a small set of these basic routines that are efficient and minimize memory requirements.
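
    The paper's routines live at the Fortran level, but the packed layout itself is simple: a symmetric n x n matrix needs only n(n+1)/2 stored entries. A sketch (assuming SciPy's LAPACK bindings) packs the lower triangle column-major and diagonalizes it with the packed-storage eigensolver dspev:

      import numpy as np
      from scipy.linalg.lapack import dspev

      A = np.array([[4., 1., 0., 2.],
                    [1., 3., 1., 0.],
                    [0., 1., 2., 1.],
                    [2., 0., 1., 5.]])

      def pack_lower(A):
          """Column-major lower-triangle packing, as LAPACK *sp* expects."""
          return np.concatenate([A[j:, j] for j in range(A.shape[0])])

      ap = pack_lower(A)                    # n(n+1)/2 = 10 doubles, not 16
      w, z, info = dspev(ap, compute_v=1, lower=1)
      assert info == 0
      assert np.allclose(w, np.linalg.eigvalsh(A))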

  15. Adaptive Implicit Non-Equilibrium Radiation Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philip, Bobby; Wang, Zhen; Berrill, Mark A

    2013-01-01

    We describe methods for accurate and efficient long term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long term time integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.

  16. A two-dimensionally coincident second difference cosmic ray spike removal method for the fully automated processing of Raman spectra.

    PubMed

    Schulze, H Georg; Turner, Robin F B

    2014-01-01

    Charge-coupled device detectors are vulnerable to cosmic rays that can contaminate Raman spectra with positive going spikes. Because spikes can adversely affect spectral processing and data analyses, they must be removed. Although both hardware-based and software-based spike removal methods exist, they typically require parameter and threshold specification dependent on well-considered user input. Here, we present a fully automated spike removal algorithm that proceeds without requiring user input. It is minimally dependent on sample attributes, and those that are required (e.g., standard deviation of spectral noise) can be determined with other fully automated procedures. At the core of the method is the identification and location of spikes with coincident second derivatives along both the spectral and spatiotemporal dimensions of two-dimensional datasets. The method can be applied to spectra that are relatively inhomogeneous because it provides fairly effective and selective targeting of spikes resulting in minimal distortion of spectra. Relatively effective spike removal obtained with full automation could provide substantial benefits to users where large numbers of spectra must be processed.
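
    A compact sketch of the core test, with illustrative parameters rather than the authors' defaults: a spike is one pixel wide in both directions, so flag points whose second differences are coincidentally large along the spectral and spatiotemporal axes, then patch them with a local median.

      import numpy as np

      def remove_spikes(spectra, k=5.0):
          """spectra: rows = successive spectra, cols = spectral channel."""
          d2s = -np.diff(spectra, n=2, axis=1)   # 2nd diff along each spectrum
          d2t = -np.diff(spectra, n=2, axis=0)   # 2nd diff across spectra
          spikes = np.zeros(spectra.shape, dtype=bool)
          spikes[1:-1, 1:-1] = ((d2s[1:-1, :] > k * np.std(d2s)) &
                                (d2t[:, 1:-1] > k * np.std(d2t)))
          cleaned = spectra.copy()
          for r, c in zip(*np.where(spikes)):    # patch with a local median
              lo, hi = max(c - 3, 0), min(c + 4, spectra.shape[1])
              cleaned[r, c] = np.median(spectra[r, lo:hi])
          return cleaned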

  17. An extension of command shaping methods for controlling residual vibration using frequency sampling

    NASA Technical Reports Server (NTRS)

    Singer, Neil C.; Seering, Warren P.

    1992-01-01

    The authors present an extension to the impulse shaping technique for commanding machines to move with reduced residual vibration. The extension, called frequency sampling, is a method for generating constraints that are used to obtain shaping sequences which minimize residual vibration in systems, such as robots, whose resonant frequencies change during motion. The authors present a review of impulse shaping methods, a development of the proposed extension, and a comparison of results of tests conducted on a simple model of the space shuttle robot arm. Frequency sampling provides a method for minimizing the impulse sequence duration required to give the desired insensitivity.
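
    For reference, the building block being extended is the classic two-impulse zero-vibration (ZV) shaper for a single mode; frequency sampling generalizes this by imposing the residual-vibration constraint at many sampled frequencies. The formulas below are the textbook ones, not code from the paper.

      import numpy as np

      def zv_shaper(freq_hz, zeta):
          """Two-impulse ZV shaper for one damped mode."""
          wd = 2 * np.pi * freq_hz * np.sqrt(1 - zeta**2)  # damped frequency
          K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
          amps = np.array([1.0, K]) / (1.0 + K)
          times = np.array([0.0, np.pi / wd])
          return amps, times

      amps, times = zv_shaper(freq_hz=0.5, zeta=0.05)
      # Convolving any command with these impulses suppresses residual
      # vibration at 0.5 Hz, at the cost of times[-1] s of added duration.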

  18. Real time selective harmonic minimization for multilevel inverters using genetic algorithm and artificial neural network angle generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filho, Faete J; Tolbert, Leon M; Ozpineci, Burak

    2012-01-01

    The work developed here proposes a methodology for calculating switching angles for varying DC sources in a multilevel cascaded H-bridges converter. In this approach the required fundamental is achieved, the lower harmonics are minimized, and the system can be implemented in real time with low memory requirements. Genetic algorithm (GA) is the stochastic search method used to find the solution for the set of equations where the input voltages are the known variables and the switching angles are the unknown variables. With the dataset generated by GA, an artificial neural network (ANN) is trained to store the solutions without excessive memory storage requirements. This trained ANN then senses the voltage of each cell and produces the switching angles in order to regulate the fundamental at 120 V and eliminate or minimize the low order harmonics while operating in real time.
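
    A hedged sketch of the optimization core only: for per-cell DC voltages V_i, the staircase output's harmonic amplitudes follow from the Fourier series (4/(n*pi)) * sum_i V_i cos(n*theta_i), and a GA would minimize a fitness built from them. Weights and voltages below are illustrative.

      import numpy as np

      def harmonics(theta, V, orders):
          """Peak amplitudes of staircase-waveform harmonics."""
          return np.array([(4 / (np.pi * n)) * np.sum(V * np.cos(n * theta))
                           for n in orders])

      def fitness(theta, V, target=120 * np.sqrt(2)):
          h = harmonics(np.sort(theta), V, orders=[1, 5, 7, 11, 13])
          return (h[0] - target)**2 + 100.0 * np.sum(h[1:]**2)

      V = np.array([62.0, 59.0, 61.5, 60.2, 58.8])   # unequal DC links
      theta0 = np.radians([10, 25, 40, 55, 70])      # candidate angles
      print(fitness(theta0, V))                      # a GA would minimize this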

  19. Generally astigmatic Gaussian beam representation and optimization using skew rays

    NASA Astrophysics Data System (ADS)

    Colbourne, Paul D.

    2014-12-01

    Methods are presented of using skew rays to optimize a generally astigmatic optical system to obtain the desired Gaussian beam focus and minimize aberrations, and to calculate the propagating generally astigmatic Gaussian beam parameters at any point. The optimization method requires very little computation beyond that of a conventional ray optimization, and requires no explicit calculation of the properties of the propagating Gaussian beam. Unlike previous methods, the calculation of beam parameters does not require matrix calculations or the introduction of non-physical concepts such as imaginary rays.

  20. Minimizing Input-to-Output Latency in Virtual Environment

    NASA Technical Reports Server (NTRS)

    Adelstein, Bernard D.; Ellis, Stephen R.; Hill, Michael I.

    2009-01-01

    A method and apparatus were developed to minimize latency (time delay) in virtual environment (VE) and other discrete-time computer-based systems that require real-time display in response to sensor inputs. Latency in such systems is due to the sum of the finite time required for information processing and communication within and between sensors, software, and displays.

  1. Controlled sound field with a dual layer loudspeaker array

    NASA Astrophysics Data System (ADS)

    Shin, Mincheol; Fazi, Filippo M.; Nelson, Philip A.; Hirono, Fabio C.

    2014-08-01

    Controlled sound interference has been extensively investigated using a prototype dual layer loudspeaker array comprised of 16 loudspeakers. Results are presented for measures of array performance such as input signal power, directivity of sound radiation and accuracy of sound reproduction resulting from the application of conventional control methods such as minimization of error in mean squared pressure, maximization of energy difference and minimization of weighted pressure error and energy. Procedures for selecting the tuning parameters have also been introduced. With these conventional concepts aimed at the production of acoustically bright and dark zones, all the control methods used require a trade-off between radiation directivity and reproduction accuracy in the bright zone. An alternative solution is proposed which can achieve better performance on the presented measures simultaneously by inserting a low-priority zone, named the "gray" zone. This involves the weighted minimization of mean-squared errors in both bright and dark zones together with the gray zone, in which the minimization error is given less importance. This results in the production of a directional bright zone in which the accuracy of sound reproduction is maintained with less required input power. The results of simulations and experiments are shown to be in excellent agreement.
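
    Sketch of the weighted-error idea (not the authors' implementation): with plant matrices Gb, Gd, Gg mapping loudspeaker inputs to bright-, dark-, and gray-zone pressures, the drive signals come from a weighted, regularized least-squares solve in which the gray-zone weight is deliberately small.

      import numpy as np

      def zone_filter(Gb, Gd, Gg, pb, wb=1.0, wd=1.0, wg=0.1, beta=1e-3):
          """Weighted, regularized least squares for loudspeaker drives q."""
          A = (wb * Gb.conj().T @ Gb + wd * Gd.conj().T @ Gd +
               wg * Gg.conj().T @ Gg + beta * np.eye(Gb.shape[1]))
          return np.linalg.solve(A, wb * Gb.conj().T @ pb)

      rng = np.random.default_rng(3)        # stand-in plant matrices
      Gb, Gd, Gg = (rng.normal(size=(8, 16)) + 1j * rng.normal(size=(8, 16))
                    for _ in range(3))
      q = zone_filter(Gb, Gd, Gg, pb=np.ones(8, dtype=complex))
      contrast = np.mean(abs(Gb @ q)**2) / np.mean(abs(Gd @ q)**2)
      print(f"bright/dark contrast: {10 * np.log10(contrast):.1f} dB")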

  2. Minimal residual method provides optimal regularization parameter for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.

    2012-10-01

    The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  3. Minimal residual method provides optimal regularization parameter for diffuse optical tomography.

    PubMed

    Jagannath, Ravi Prasad K; Yalavarthy, Phaneendra K

    2012-10-01

    The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  4. Statistical Characterization of Environmental Error Sources Affecting Electronically Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Green, Del L.; Walker, Eric L.; Everhart, Joel L.

    2006-01-01

    Minimization of uncertainty is essential to extend the usable range of the 15-psid Electronically Scanned Pressure (ESP) transducer measurements to the low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources inducing much of this uncertainty requires a well defined and controlled calibration method. Employing such a controlled calibration system, several studies were conducted that provide quantitative information detailing the required controls needed to minimize environmental and human induced error sources. Results of temperature, environmental pressure, over-pressurization, and set point randomization studies for the 15-psid transducers are presented along with a comparison of two regression methods using data acquired with both 0.36-psid and 15-psid transducers. Together these results provide insight into procedural and environmental controls required for long term high-accuracy pressure measurements near 0.01 psia in the hypersonic testing environment using 15-psid ESP transducers.

  5. Statistical Characterization of Environmental Error Sources Affecting Electronically Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Green, Del L.; Walker, Eric L.; Everhart, Joel L.

    2006-01-01

    Minimization of uncertainty is essential to extend the usable range of the 15-psid Electronically Scanned Pressure (ESP) transducer measurements to the low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources inducing much of this uncertainty requires a well defined and controlled calibration method. Employing such a controlled calibration system, several studies were conducted that provide quantitative information detailing the required controls needed to minimize environmental and human induced error sources. Results of temperature, environmental pressure, over-pressurization, and set point randomization studies for the 15-psid transducers are presented along with a comparison of two regression methods using data acquired with both 0.36-psid and 15-psid transducers. Together these results provide insight into procedural and environmental controls required for long term high-accuracy pressure measurements near 0.01 psia in the hypersonic testing environment using 15-psid ESP transducers.

  6. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
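
    A compact illustration of the two ingredients the abstract combines, not the Lewis-Torczon code: an augmented Lagrangian outer loop whose inner subproblem is solved by a derivative-free compass/pattern search, with the pattern size as the only stopping test.

      import numpy as np

      def pattern_search(f, x, step=0.5, tol=1e-4):
          """Compass search; the pattern size `step` is the only stop test."""
          dirs = np.vstack([np.eye(len(x)), -np.eye(len(x))])
          while step > tol:
              best = f(x)
              for d in dirs:
                  cand = x + step * d
                  fc = f(cand)
                  if fc < best:
                      x, best = cand, fc
                      break
              else:
                  step *= 0.5               # no poll point improved: shrink
          return x

      def auglag(f, c, x, lam=0.0, mu=10.0, outer=20):
          """min f(x) s.t. c(x) = 0; inner solves done by pattern search."""
          for _ in range(outer):
              L = lambda z, l=lam: f(z) + l * c(z) + 0.5 * mu * c(z)**2
              x = pattern_search(L, x)
              lam += mu * c(x)              # multiplier update
          return x

      sol = auglag(f=lambda z: z[0]**2 + z[1]**2,
                   c=lambda z: z[0] + z[1] - 1.0,
                   x=np.array([2.0, -3.0]))
      print(sol)                            # -> near (0.5, 0.5)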

  7. Manufacturing Methods and Technology Project Summary Reports

    DTIC Science & Technology

    1984-12-01

    are used. The instrument chosen provides a convenient method of artificially aging a propellant sample while automatically analyzing for evolved oxides... and aging. Shortly after the engineering sample run, a change in REMBASS requirements eliminated the crystal high shock requirements. This resulted... material with minimum outgassing in a precision vacuum QXFF. Minimal outgassing reduces aging in the finished unit. A fixture was also developed to

  8. Mixed-Methods Design in Biology Education Research: Approach and Uses

    ERIC Educational Resources Information Center

    Warfa, Abdi-Rizak M.

    2016-01-01

    Educational research often requires mixing different research methodologies to strengthen findings, better contextualize or explain results, or minimize the weaknesses of a single method. This article provides practical guidelines on how to conduct such research in biology education, with a focus on mixed-methods research (MMR) that uses both…

  9. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating direction method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.

  10. [Diagnosis of primary hyperlipoproteinemia in umbilical cord blood (author's transl)].

    PubMed

    Parwaresch, M R; Radzun, H J; Mäder, C

    1977-10-01

    The aim of the present investigation was to assay the frequency of primary dyslipoproteinemia in a random sample of one hundred newborns and to describe the minimal methodical requirements for sound diagnosis. After comparison of different methods, total lipids were determined by gravimetry, cholesterol and triglycerides by enzymatic methods, nonesterified fatty acids by direct colorimetry; phospholipids were estimated indirectly. All measurements were applied to umbilical cord sera and to lipoprotein fractions separated by selective precipitation. The diagnosis of hyperlipoproteinemia type IV, which is the most frequent one in adults, is highly afflicted with pitfalls in the postnatal period. A primary hyper-alpha-lipoproteinemia occurred in one case and type II-hyperlipoproteinemia in two cases, one of the parents being involved in each case. For mass screening, triglycerides should be assayed in serum and cholesterol in the precipitated and resolubilized LDL-fraction, for which the minimal requirements are described.

  11. A minimal multiconfigurational technique.

    PubMed

    Fernández Rico, J; Paniagua, M; García de la Vega, J M; Fernández-Alonso, J I; Fantucci, P

    1986-04-01

    A direct minimization method previously presented by the authors is applied here to biconfigurational wave functions. A very moderate increase in the time per iteration with respect to the one-determinant calculation and good convergence properties were found. Qualitatively correct studies on singlet systems with strong biradical character can thus be performed at a cost similar to that required by Hartree-Fock calculations. Copyright © 1986 John Wiley & Sons, Inc.

  12. Optimal mistuning for enhanced aeroelastic stability of transonic fans

    NASA Technical Reports Server (NTRS)

    Hall, K. C.; Crawley, E. F.

    1983-01-01

    An inverse design procedure was developed for the design of a mistuned rotor. The design requirements are that the stability margin of the eigenvalues of the aeroelastic system be greater than or equal to some minimum stability margin, and that the mass added to each blade be positive. The objective was to achieve these requirements with a minimal amount of mistuning. Hence, the problem was posed as a constrained optimization problem. The constrained minimization problem was solved by the technique of mathematical programming via augmented Lagrangians. The unconstrained minimization phase of this technique was solved by the variable metric method. The bladed disk was modelled as being composed of a rigid disk mounted on a rigid shaft. Each blade was modelled with a single torsional degree of freedom.

  13. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization.

    PubMed

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate.
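
    A toy algebraic stand-in (no cone-beam geometry, and the power-factor schedule here is only a guess at the flavor of the paper's scheme): ordered-subset passes over row blocks of Ax = b with a relaxation shaped by a power factor, followed by a few steepest-descent steps on total variation.

      import numpy as np

      def tv_grad(img, eps=1e-8):
          """Approximate gradient of TV(img); periodic edges via roll."""
          gx = np.diff(img, axis=0, append=img[-1:, :])
          gy = np.diff(img, axis=1, append=img[:, -1:])
          mag = np.sqrt(gx**2 + gy**2 + eps)
          div = (gx / mag - np.roll(gx / mag, 1, axis=0) +
                 gy / mag - np.roll(gy / mag, 1, axis=1))
          return -div

      def os_tv(A, b, shape, subsets=4, iters=10, p=2.0, tv_steps=3,
                alpha=0.02):
          x = np.zeros(A.shape[1])
          blocks = np.array_split(np.arange(A.shape[0]), subsets)
          for it in range(iters):
              lam = min(1.0, 0.2 * (it + 1)**p / iters)  # power-factor ramp
              for rows in blocks:                        # one ordered subset
                  As, bs = A[rows], b[rows]
                  x += lam * As.T @ (bs - As @ x) / np.linalg.norm(As)**2
              img = x.reshape(shape)
              for _ in range(tv_steps):                  # TV minimization
                  img -= alpha * tv_grad(img)
              x = img.ravel()
          return x.reshape(shape)

      # e.g.: rec = os_tv(A, b, shape=(16, 16)) for A of shape (m, 256)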

  14. Quasi-Newton parallel geometry optimization methods

    NASA Astrophysics Data System (ADS)

    Burger, Steven K.; Ayers, Paul W.

    2010-07-01

    Algorithms for parallel unconstrained minimization of molecular systems are examined. The overall framework of minimization is the same except for the choice of directions for updating the quasi-Newton Hessian. Ideally these directions are chosen so that the updated Hessian gives steps that are the same as those of the Newton method. Three approaches to determine the directions for updating are presented: the straightforward approach of simply cycling through the Cartesian unit vectors (finite difference), a concurrent set of minimizations, and the Lanczos method. We show the importance of using preconditioning and a multiple secant update in these approaches. For the Lanczos algorithm, an initial set of directions is required to start the method, and a number of possibilities are explored. To test the methods we used the standard 50-dimensional analytic Rosenbrock function. Results are also reported for the histidine dipeptide, the isoleucine tripeptide, and cyclic adenosine monophosphate. All of these systems show a significant speed-up with the number of processors up to about eight processors.

  15. Accelerating an Ordered-Subset Low-Dose X-Ray Cone Beam Computed Tomography Image Reconstruction with a Power Factor and Total Variation Minimization

    PubMed Central

    Huang, Hsuan-Ming; Hsiao, Ing-Tsung

    2016-01-01

    In recent years, there has been increased interest in low-dose X-ray cone beam computed tomography (CBCT) in many fields, including dentistry, guided radiotherapy and small animal imaging. Despite reducing the radiation dose, low-dose CBCT has not gained widespread acceptance in routine clinical practice. In addition to performing more evaluation studies, developing a fast and high-quality reconstruction algorithm is required. In this work, we propose an iterative reconstruction method that accelerates ordered-subsets (OS) reconstruction using a power factor. Furthermore, we combine it with the total-variation (TV) minimization method. Both simulation and phantom studies were conducted to evaluate the performance of the proposed method. Results show that the proposed method can accelerate conventional OS methods, greatly increasing the convergence speed in early iterations. Moreover, applying the TV minimization to the power acceleration scheme can further improve the image quality while preserving the fast convergence rate. PMID:27073853

  16. Optimal design method to minimize users' thinking mapping load in human-machine interactions.

    PubMed

    Huang, Yanqun; Li, Xu; Zhang, Jie

    2015-01-01

    The discrepancy between human cognition and machine requirements/behaviors usually results in serious mental thinking mapping loads, or even disasters, in product operation. It is important to help people avoid human-machine interaction confusions and difficulties in today's mentally demanding work environments. This study aims to improve the usability of a product and to minimize the user's thinking mapping and interpreting load in human-machine interactions. An optimal human-machine interface design method is introduced, which is based on the purpose of minimizing the mental load in the thinking mapping process between users' intentions and the affordance of product interface states. By analyzing the users' thinking mapping problem, an operating action model is constructed. According to human natural instincts and acquired knowledge, an expected ideal design with minimized thinking loads is uniquely determined at first. Then, creative alternatives, in terms of the way humans obtain operational information, are provided as digital interface states datasets. Finally, using the cluster analysis method, an optimum solution is picked out from the alternatives by calculating the distances between two datasets. Considering multiple factors to minimize users' thinking mapping loads, a solution nearest to the ideal value is found in the human-car interaction design case. The clustering results show its effectiveness in finding an optimum solution to the mental load minimizing problems in human-machine interaction design.

  17. A method of hidden Markov model optimization for use with geophysical data sets

    NASA Technical Reports Server (NTRS)

    Granat, R. A.

    2003-01-01

    Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues.

  18. Applying Sigma Metrics to Reduce Outliers.

    PubMed

    Litten, Joseph

    2017-03-01

    Sigma metrics can be used to predict assay quality, allowing easy comparison of instrument quality and predicting which tests will require minimal quality control (QC) rules to monitor the performance of the method. A Six Sigma QC program can result in fewer controls and fewer QC failures for methods with a sigma metric of 5 or better. The higher the number of methods with a sigma metric of 5 or better, the lower the costs for reagents, supplies, and control material required to monitor the performance of the methods. Copyright © 2016 Elsevier Inc. All rights reserved.
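
    The record states the decision rule but not the formula; the sigma metric it refers to is the standard (TEa - |bias|) / CV, all in percent. A small worked example with made-up assay figures:

      def sigma_metric(tea_pct, bias_pct, cv_pct):
          """Standard sigma metric: (allowable total error - |bias|) / CV."""
          return (tea_pct - abs(bias_pct)) / cv_pct

      assays = {"sodium": (5.0, 0.8, 0.7), "glucose": (10.0, 2.0, 2.5)}
      for name, (tea, bias, cv) in assays.items():
          s = sigma_metric(tea, bias, cv)
          rule = "minimal QC rules" if s >= 5 else "full multirule QC"
          print(f"{name}: sigma = {s:.1f} -> {rule}")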

  19. Method for materials deposition by ablation transfer processing

    DOEpatents

    Weiner, Kurt H.

    1996-01-01

    A method in which a thin layer of semiconducting, insulating, or metallic material is transferred by ablation from a source substrate, coated uniformly with a thin layer of said material, to a target substrate, where said material is desired, with a pulsed, high intensity, patternable beam of energy. The use of a patternable beam allows area-selective ablation from the source substrate, resulting in additive deposition of the material onto the target substrate, which may require a very low percentage of the area to be covered. Since material is placed only where it is required, material waste can be minimized by reusing the source substrate for depositions on multiple target substrates. Due to the use of a pulsed, high intensity energy source, the target substrate remains at low temperature during the process, and thus low-temperature, low cost transparent glass or plastic can be used as the target substrate. The method can be carried out at atmospheric pressure and at room temperature, thus eliminating the vacuum systems normally required in materials deposition processes. This invention has particular application in the flat panel display industry, as well as in minimizing materials waste and associated costs.

  20. Live minimal path for interactive segmentation of medical images

    NASA Astrophysics Data System (ADS)

    Chartrand, Gabriel; Tang, An; Chav, Ramnada; Cresson, Thierry; Chantrel, Steeve; De Guise, Jacques A.

    2015-03-01

    Medical image segmentation is nowadays required for medical device development and in a growing number of clinical and research applications. Since dedicated automatic segmentation methods are not always available, generic and efficient interactive tools can alleviate the burden of manual segmentation. In this paper we propose an interactive segmentation tool based on image warping and minimal path segmentation that is efficient for a wide variety of segmentation tasks. While the user roughly delineates the desired organ's boundary, a narrow band along the cursor's path is straightened, providing an ideal subspace for feature-aligned filtering and the minimal path algorithm. Once the segmentation is performed on the narrow band, the path is warped back onto the original image, precisely delineating the desired structure. This tool was found to have a highly intuitive dynamic behavior. It is especially efficient against misleading edges and required only coarse interaction from the user to achieve good precision. The proposed segmentation method was tested for 10 difficult liver segmentations on CT and MRI images, and the resulting 2D overlap Dice coefficient was 99% on average.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fetterly, K; Mathew, V

    Purpose: Transcatheter aortic valve replacement (TAVR) procedures provide a method to implant a prosthetic aortic valve via a minimally invasive, catheter-based procedure. TAVR procedures require use of interventional fluoroscopy c-arm projection angles which are aligned with the aortic valve plane to minimize prosthetic valve positioning error due to x-ray imaging parallax. The purpose of this work is to calculate the continuous range of interventional fluoroscopy c-arm projection angles which are aligned with the aortic valve plane from a single planar image of a valvuloplasty balloon inflated across the aortic valve. Methods: Computational methods to measure the 3D angular orientation of the aortic valve were developed. Required inputs include a planar x-ray image of a known valvuloplasty balloon inflated across the aortic valve and specifications of x-ray imaging geometry from the DICOM header of the image. A-priori knowledge of the species-specific typical range of aortic orientation is required to specify the sign of the angle of the long axis of the balloon with respect to the x-ray beam. The methods were validated ex-vivo and in a live pig. Results: Ex-vivo experiments demonstrated that the angular orientation of a stationary inflated valvuloplasty balloon can be measured with precision less than 1 degree. In-vivo pig experiments demonstrated that cardiac motion contributed to measurement variability, with precision less than 3 degrees. Error in specification of x-ray geometry directly influences measurement accuracy. Conclusion: This work demonstrates that the 3D angular orientation of the aortic valve can be calculated precisely from a planar image of a valvuloplasty balloon inflated across the aortic valve and known x-ray geometry. This method could be used to determine appropriate c-arm angular projections during TAVR procedures to minimize x-ray imaging parallax and thereby minimize prosthetic valve positioning errors.

  2. Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.

    PubMed

    O'Connor, William; Runquist, Elizabeth A

    2008-07-01

    Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to by-pass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
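
    A simplified sketch of the ingredients on synthetic curves (Q-Anal proper picks the window and threshold by iterative regression-error minimization; this stand-in just takes the steepest well-fitting log-linear window): efficiency from the log-linear slope, Ct at a shared threshold, then the efficiency-corrected ratio.

      import numpy as np

      def exp_phase(fluor, width=6):
          """Steepest log-linear window (slope, intercept); Q-Anal instead
          refines the window by minimizing the regression error."""
          logf, cyc = np.log10(fluor), np.arange(fluor.size, dtype=float)
          fits = [np.polyfit(cyc[s:s + width], logf[s:s + width], 1)
                  for s in range(fluor.size - width)]
          return max(fits, key=lambda sf: sf[0])

      def ct_at(slope, intercept, thr):
          return (np.log10(thr) - intercept) / slope

      cyc = np.arange(40, dtype=float)   # synthetic logistic-like curves
      target = 1e-6 * 1.90**cyc / (1 + 1e-6 * 1.90**cyc) + 1e-4
      ref    = 1e-5 * 1.95**cyc / (1 + 1e-5 * 1.95**cyc) + 1e-4
      (st, bt), (sr, br) = exp_phase(target), exp_phase(ref)
      thr = 0.01                         # threshold where both fits apply
      ratio = (10**sr)**ct_at(sr, br, thr) / (10**st)**ct_at(st, bt, thr)
      print(f"E_t={10**st - 1:.2f}  E_r={10**sr - 1:.2f}  ratio={ratio:.2f}")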

  3. Quality assurance of multiport image-guided minimally invasive surgery at the lateral skull base.

    PubMed

    Nau-Hermes, Maria; Schmitt, Robert; Becker, Meike; El-Hakimi, Wissam; Hansen, Stefan; Klenzner, Thomas; Schipper, Jörg

    2014-01-01

    For multiport image-guided minimally invasive surgery at the lateral skull base a quality management is necessary to avoid the damage of closely spaced critical neurovascular structures. So far there is no standardized method applicable independently from the surgery. Therefore, we adapt a quality management method, the quality gates (QG), which is well established in, for example, the automotive industry and apply it to multiport image-guided minimally invasive surgery. QG divide a process into different sections. Passing between sections can only be achieved if previously defined requirements are fulfilled which secures the process chain. An interdisciplinary team of otosurgeons, computer scientists, and engineers has worked together to define the quality gates and the corresponding criteria that need to be fulfilled before passing each quality gate. In order to evaluate the defined QG and their criteria, the new surgery method was applied with a first prototype at a human skull cadaver model. We show that the QG method can ensure a safe multiport minimally invasive surgical process at the lateral skull base. Therewith, we present an approach towards the standardization of quality assurance of surgical processes.

  4. Quality Assurance of Multiport Image-Guided Minimally Invasive Surgery at the Lateral Skull Base

    PubMed Central

    Nau-Hermes, Maria; Schmitt, Robert; Becker, Meike; El-Hakimi, Wissam; Hansen, Stefan; Klenzner, Thomas; Schipper, Jörg

    2014-01-01

    For multiport image-guided minimally invasive surgery at the lateral skull base a quality management is necessary to avoid the damage of closely spaced critical neurovascular structures. So far there is no standardized method applicable independently from the surgery. Therefore, we adapt a quality management method, the quality gates (QG), which is well established in, for example, the automotive industry and apply it to multiport image-guided minimally invasive surgery. QG divide a process into different sections. Passing between sections can only be achieved if previously defined requirements are fulfilled which secures the process chain. An interdisciplinary team of otosurgeons, computer scientists, and engineers has worked together to define the quality gates and the corresponding criteria that need to be fulfilled before passing each quality gate. In order to evaluate the defined QG and their criteria, the new surgery method was applied with a first prototype at a human skull cadaver model. We show that the QG method can ensure a safe multiport minimally invasive surgical process at the lateral skull base. Therewith, we present an approach towards the standardization of quality assurance of surgical processes. PMID:25105146

  5. Minimally verbal school-aged children with autism spectrum disorder: the neglected end of the spectrum.

    PubMed

    Tager-Flusberg, Helen; Kasari, Connie

    2013-12-01

    It is currently estimated that about 30% of children with autism spectrum disorder remain minimally verbal, even after receiving years of interventions and a range of educational opportunities. Very little is known about the individuals at this end of the autism spectrum, in part because this is a highly variable population with no single set of defining characteristics or patterns of skills or deficits, and in part because it is extremely challenging to provide reliable or valid assessments of their developmental functioning. In this paper, we summarize current knowledge based on research including minimally verbal children. We review promising new novel methods for assessing the verbal and nonverbal abilities of minimally verbal school-aged children, including eye-tracking and brain-imaging methods that do not require overt responses. We then review what is known about interventions that may be effective in improving language and communication skills, including discussion of both nonaugmentative and augmentative methods. In the final section of the paper, we discuss the gaps in the literature and needs for future research. © 2013 International Society for Autism Research, Wiley Periodicals, Inc.

  6. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Technical Reports Server (NTRS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-01-01

    Reduction of the total computing time required by an iterative algorithm for solving Navier-Stokes equations is an important aspect of making the existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity Based DMR or SBMR method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.

  7. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Astrophysics Data System (ADS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-11-01

    Reduction of the total computing time required by an iterative algorithm for solving Navier-Stokes equations is an important aspect of making the existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity Based DMR or SBMR method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.

  8. Intelligent Sampling of Hazardous Particle Populations in Resource-Constrained Environments

    NASA Astrophysics Data System (ADS)

    McCollough, J. P.; Quinn, J. M.; Starks, M. J.; Johnston, W. R.

    2017-10-01

    Sampling of anomaly-causing space environment drivers is necessary for both real-time operations and satellite design efforts, and optimizing measurement sampling helps minimize resource demands. Relating these measurements to spacecraft anomalies requires the ability to resolve spatial and temporal variability in the energetic charged particle hazard of interest. Here we describe a method for sampling particle fluxes informed by magnetospheric phenomenology so that, along a given trajectory, the variations from both temporal dynamics and spatial structure are adequately captured while minimizing oversampling. We describe the coordinates, sampling method, and specific regions and parameters employed. We compare the resulting sampling cadences with data from spacecraft spanning the regions of interest during a geomagnetically active period, showing that the algorithm retains the gross features necessary to characterize environmental impacts on space systems in diverse orbital regimes while greatly reducing the amount of sampling required. This enables sufficient environmental specification within a resource-constrained context, such as limited telemetry bandwidth, processing capability, and timeliness.

  9. Closed Loop System Identification with Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.

    2004-01-01

    High-performance control design for a flexible space structure is challenging since high-fidelity plant models are difficult to obtain a priori. Uncertainty in the control design models typically requires a very robust, low-performance control design, which must be tuned on-orbit to achieve the required performance. Closed-loop system identification is often required to obtain a multivariable open-loop plant model based on closed-loop response data. In order to provide an accurate initial plant model to guarantee convergence for standard local optimization methods, this paper presents a global parameter optimization method using genetic algorithms. A minimal representation of the state space dynamics is employed to mitigate the non-uniqueness and over-parameterization of general state space realizations. This control-relevant system identification procedure stresses the joint nature of the system identification and control design problem by seeking to obtain a model that minimizes the difference between the predicted and actual closed-loop performance.
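
    As an illustration of global parameter search for system identification (a generic genetic algorithm fitting a toy second-order step response, not the paper's specific formulation), the sketch below evolves a population of (wn, zeta) candidates to minimize the mismatch with measured closed-loop data:

```python
import numpy as np

rng = np.random.default_rng(1)

def step_response(theta, t):
    """Toy underdamped second-order step response, parameterized by
    natural frequency wn and damping ratio zeta (both hypothetical)."""
    wn, zeta = theta
    wd = wn * np.sqrt(max(1.0 - zeta ** 2, 1e-9))
    return 1.0 - np.exp(-zeta * wn * t) * np.cos(wd * t)

t = np.linspace(0.0, 10.0, 200)
y_meas = step_response((2.0, 0.5), t)            # "measured" closed-loop data

lo, hi = np.array([0.1, 0.05]), np.array([10.0, 0.95])
pop = rng.uniform(lo, hi, size=(50, 2))          # initial random population
for gen in range(100):
    cost = np.array([np.sum((step_response(p, t) - y_meas) ** 2) for p in pop])
    parents = pop[np.argsort(cost)][:25]         # truncation selection
    children = parents[rng.integers(25, size=25)] \
        + rng.normal(0.0, 0.05, size=(25, 2))    # mutation
    pop = np.vstack([parents, np.clip(children, lo, hi)])
best = min(pop, key=lambda p: np.sum((step_response(p, t) - y_meas) ** 2))
print(best)                                      # should approach (2.0, 0.5)
```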

  10. Minimizing Higgs potentials via numerical polynomial homotopy continuation

    NASA Astrophysics Data System (ADS)

    Maniatis, M.; Mehta, D.

    2012-08-01

    The study of models with extended Higgs sectors requires minimizing the corresponding Higgs potentials, which is in general very difficult. Here, we apply a recently developed method, called numerical polynomial homotopy continuation (NPHC), which is guaranteed to find all the stationary points of Higgs potentials with polynomial-like non-linearity. The detection of all stationary points reveals the structure of the potential, with maxima, metastable minima, and saddle points besides the global minimum. We apply the NPHC method to the most general Higgs potential having two complex Higgs-boson doublets and up to five real Higgs-boson singlets. Moreover, the method is applicable to even more involved potentials. Hence the NPHC method allows one to go far beyond the limits of the Gröbner basis approach.
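
    The key point - locating all stationary points rather than a single local minimum - can be illustrated in one dimension, where exact polynomial root-finding plays the role that homotopy continuation plays in the multivariate case. The potential below is hypothetical:

```python
import numpy as np

# Toy potential V(phi) = -2 phi^2 + 0.1 phi^3 + phi^4;
# coefficients are listed in low-to-high order.
V = np.polynomial.Polynomial([0.0, 0.0, -2.0, 0.1, 1.0])
dV, d2V = V.deriv(), V.deriv(2)

# All stationary points are the real roots of dV/dphi = 0
for phi in sorted(r.real for r in dV.roots() if abs(r.imag) < 1e-9):
    kind = ("minimum" if d2V(phi) > 0 else
            "maximum" if d2V(phi) < 0 else "degenerate")
    print(f"phi = {phi:+.4f}   V(phi) = {V(phi):+.4f}   ({kind})")
```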

  11. Spectral embedding finds meaningful (relevant) structure in image and microarray data

    PubMed Central

    Higgs, Brandon W; Weller, Jennifer; Solka, Jeffrey L

    2006-01-01

    Background Accurate methods for extraction of meaningful patterns in high-dimensional data have become increasingly important with the recent generation of data types containing measurements across thousands of variables. Principal components analysis (PCA) is a linear dimensionality reduction (DR) method that is unsupervised in that it relies only on the data; projections are calculated in Euclidean or a similar linear space and do not use tuning parameters for optimizing the fit to the data. However, relationships within sets of nonlinear data types, such as biological networks or images, are frequently mis-rendered into a low-dimensional space by linear methods. Nonlinear methods, in contrast, attempt to model important aspects of the underlying data structure, often requiring parameter(s) fitting to the data type of interest. In many cases, the optimal parameter values vary when different classification algorithms are applied on the same rendered subspace, making the results of such methods highly dependent upon the type of classifier implemented. Results We present the results of applying the spectral method of Lafon, a nonlinear DR method based on the weighted graph Laplacian, which minimizes the requirements for such parameter optimization, for two biological data types. We demonstrate that it is successful in determining implicit ordering of brain slice image data and in classifying separate species in microarray data, as compared to two conventional linear methods and three nonlinear methods (one of which is an alternative spectral method). This spectral implementation is shown to provide more meaningful information, by preserving important relationships, than the methods of DR presented for comparison. Tuning-parameter fitting is simple and is a general, rather than data-type- or experiment-specific, approach for the two datasets analyzed here. Tuning-parameter optimization is minimized in the DR step for each subsequent classification method, enabling valid cross-experiment comparisons. Conclusion Results from the spectral method presented here exhibit the desirable properties of preserving meaningful nonlinear relationships in lower-dimensional space and requiring minimal parameter fitting, providing a useful algorithm for purposes of visualization and classification across diverse datasets, a common challenge in systems biology. PMID:16483359
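
    A minimal sketch of a graph-Laplacian embedding in this spirit (a generic diffusion-map-style computation with a single kernel bandwidth eps, not the exact implementation benchmarked in the paper):

```python
import numpy as np
from scipy.linalg import eigh

def spectral_embed(X, eps=1.0, dims=2):
    """Embed rows of X via the weighted graph Laplacian: Gaussian
    affinities, then the smoothest nontrivial generalized eigenvectors."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    W = np.exp(-D2 / eps)                                # affinity matrix
    d = W.sum(axis=1)
    L = np.diag(d) - W                                   # graph Laplacian
    vals, vecs = eigh(L, np.diag(d))                     # L v = lambda D v
    return vecs[:, 1:dims + 1]                           # skip trivial v_0
```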

  12. Reliability-Based Electronics Shielding Design Tools

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; O'Neill, P. J.; Zang, T. A.; Pandolf, J. E.; Tripathi, R. K.; Koontz, Steven L.; Boeder, P.; Reddell, B.; Pankop, C.

    2007-01-01

    Shielding design on large human-rated systems allows minimization of radiation impact on electronic systems. Shielding design tools require adequate methods for evaluating design layouts, guiding qualification testing, and following up on the final design evaluation.

  13. Coaxial cable stripping device facilitates RF cabling fabrication

    NASA Technical Reports Server (NTRS)

    Hughes, R. S.; Tobias, R. A.

    1967-01-01

    Coaxial cable stripping device assures a clean, right-angled shoulder for RF cable connector fabrication. This method requires minimal skill and achieves a low voltage standing wave ratio and mechanical stability in the interconnecting RF cables.

  14. Supportability Technologies for Future Exploration Missions

    NASA Technical Reports Server (NTRS)

    Watson, Kevin; Thompson, Karen

    2007-01-01

    Future long-duration human exploration missions will be challenged by resupply limitations and mass and volume constraints. Consequently, it will be essential that the logistics footprint required to support these missions be minimized and that capabilities be provided to make them highly autonomous from a logistics perspective. Strategies to achieve these objectives include broad implementation of commonality and standardization at all hardware levels and across all systems, repair of failed hardware at the lowest possible hardware level, and manufacture of structural and mechanical replacement components as needed. Repair at the lowest hardware levels will require the availability of compact, portable systems for diagnosis of failures in electronic systems and verification of system functionality following repair. Rework systems will be required that enable the removal and replacement of microelectronic components with minimal human intervention to minimize skill requirements and training demand for crews. Materials used in the assembly of electronic systems (e.g. solders, fluxes, conformal coatings) must be compatible with the available repair methods and the spacecraft environment. Manufacturing of replacement parts for structural and mechanical applications will require additive manufacturing systems that can generate near-net-shape parts from the range of engineering alloys employed in the spacecraft structure and in the parts utilized in other surface systems. These additive manufacturing processes will need to be supported by real-time non-destructive evaluation during layer-additive processing for on-the-fly quality control. This will provide capabilities for quality control and may serve as an input for closed-loop process control. Additionally, non-destructive methods should be available for material property determination. These nondestructive evaluation processes should be incorporated with the additive manufacturing process - providing an in-process capability to ensure that material deposited during layer-additive processing meets required material property criteria.

  15. Control of Materials Flammability Hazards

    NASA Technical Reports Server (NTRS)

    Griffin, Dennis E.

    2003-01-01

    This viewgraph presentation provides information on selecting, using, and configuring spacecraft materials in such a way as to minimize the ability of fire to spread onboard a spacecraft. The presentation gives an overview of the flammability requirements of NASA-STD-6001, listing specific tests and evaluation criteria it requires. The presentation then gives flammability reduction methods for specific spacecraft items and materials.

  16. Performance of device-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Cao, Zhu; Zhao, Qi; Ma, Xiongfeng

    2016-07-01

    Quantum key distribution provides information-theoretically secure communication. In practice, device imperfections may jeopardise the system security. Device-independent quantum key distribution solves this problem by providing secure keys even when the quantum devices are untrusted and uncharacterized. Following a recent security proof of the device-independent quantum key distribution, we improve the key rate by tightening the parameter choice in the security proof. In practice where the system is lossy, we further improve the key rate by taking into account the loss position information. From our numerical simulation, our method can outperform existing results. Meanwhile, we outline clear experimental requirements for implementing device-independent quantum key distribution. The maximal tolerable error rate is 1.6%, the minimal required transmittance is 97.3%, and the minimal required visibility is 96.8%.

  17. On the lower bound of monitor solutions of maximally permissive supervisors for a subclass α-S3PR of flexible manufacturing systems

    NASA Astrophysics Data System (ADS)

    Chao, Daniel Yuh

    2015-01-01

    Recently, a novel and computationally efficient method - based on a vector covering approach - to design optimal control places, and an iteration approach that computes the reachability graph to obtain a maximally permissive liveness-enforcing supervisor for FMS (flexible manufacturing systems), have been reported. However, the relationship between the structure of the net and the minimal number of monitors required remains unclear. This paper develops a theory to show that the minimal number of monitors required cannot be less than the number of basic siphons in α-S3PR (systems of simple sequential processes with resources). This confirms that two of the three systems controlled by Chen et al. are of a minimal monitor configuration, since they belong to α-S3PR and the number of monitors in each example equals that of basic siphons.

  18. Implementing the measurement interval midpoint method for change estimation

    Treesearch

    James A. Westfall; Thomas Frieswyk; Douglas M. Griffith

    2009-01-01

    The adoption of nationally consistent estimation procedures for the Forest Inventory and Analysis (FIA) program mandates changes in the methods used to develop resource trend information. Particularly, it is prescribed that changes in tree status occur at the midpoint of the measurement interval to minimize potential bias. The individual-tree characteristics requiring...

  19. A new numerical method for calculating extrema of received power for polarimetric SAR

    USGS Publications Warehouse

    Zhang, Y.; Zhang, Jiahua; Lu, Z.; Gong, W.

    2009-01-01

    A numerical method called cross-step iteration is proposed to calculate the maximal/minimal received power for polarized imagery based on a target's Kennaugh matrix. This method is much more efficient than the systematic method, which searches for the extrema of received power by varying the polarization ellipse angles of receiving and transmitting polarizations. It is also more advantageous than the Schuler method, which has been adopted by the PolSARPro package, because the cross-step iteration method requires less computation time and can derive both the maximal and minimal received powers, whereas the Schuler method is designed to work out only the maximal received power. The analytical model of received-power optimization indicates that the first eigenvalue of the Kennaugh matrix is the supremum of the maximal received power. The difference between these two parameters reflects the depolarization effect of the target's backscattering, which might be useful for target discrimination. © 2009 IEEE.
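
    The alternating structure of such an iteration can be sketched as below (a toy interpretation assuming fully polarized transmit and receive states; aligning anti-parallel in the update steps would target the minimal power instead). The function and its details are illustrative, not the authors' exact algorithm:

```python
import numpy as np

def cross_step_max_power(K, iters=50):
    """Alternately optimize receive (r) and transmit (t) Stokes vectors
    for the received power P = 0.5 * r^T K t, K a 4x4 Kennaugh matrix."""
    t = np.array([1.0, 0.0, 0.0, 1.0])     # initial transmit state
    r = t.copy()
    for _ in range(iters):
        g = (K @ t)[1:]                    # best receive aligns with K t
        r = np.concatenate(([1.0], g / np.linalg.norm(g)))
        h = (K.T @ r)[1:]                  # best transmit aligns with K^T r
        t = np.concatenate(([1.0], h / np.linalg.norm(h)))
    return 0.5 * r @ K @ t, r, t
```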

  20. Method for materials deposition by ablation transfer processing

    DOEpatents

    Weiner, K.H.

    1996-04-16

    A method in which a thin layer of semiconducting, insulating, or metallic material is transferred by ablation from a source substrate, coated uniformly with a thin layer of said material, to a target substrate, where said material is desired, with a pulsed, high-intensity, patternable beam of energy. The use of a patternable beam allows area-selective ablation from the source substrate, resulting in additive deposition of the material onto the target substrate, which may require a very low percentage of the area to be covered. Since material is placed only where it is required, material waste can be minimized by reusing the source substrate for depositions on multiple target substrates. Due to the use of a pulsed, high-intensity energy source, the target substrate remains at low temperature during the process, and thus low-temperature, low-cost transparent glass or plastic can be used as the target substrate. The method can be carried out at atmospheric pressure and room temperature, thus eliminating the vacuum systems normally required in materials deposition processes. This invention has particular application in the flat panel display industry, as well as in minimizing materials waste and associated costs. 1 fig.

  1. Online learning in optical tomography: a stochastic approach

    NASA Astrophysics Data System (ADS)

    Chen, Ke; Li, Qin; Liu, Jian-Guo

    2018-07-01

    We study the inverse problem of the radiative transfer equation (RTE) using the stochastic gradient descent (SGD) method in this paper. Mathematically, optical tomography amounts to recovering the optical parameters in the RTE using incoming-outgoing pairs of light intensity. We formulate it as a PDE-constrained optimization problem, where the mismatch between computed and measured outgoing data is minimized subject to the same initial data and the RTE constraint. The memory and computation costs this requires, however, are typically prohibitive, especially in high-dimensional spaces. Smart iterative solvers that use only partial information in each step are therefore called for. Stochastic gradient descent is an online learning algorithm that randomly selects data for minimizing the mismatch. It requires minimal memory and computation and advances quickly, and therefore serves the purpose perfectly. In this paper we formulate the problem, in both the nonlinear and the linearized setting, apply the SGD algorithm, and analyze the convergence performance.
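
    A minimal SGD loop on a linear least-squares stand-in for the RTE-constrained mismatch (the forward matrix A below is a hypothetical placeholder for the computed incoming-to-outgoing map):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 10))     # toy forward model (stand-in for the RTE)
x_true = rng.normal(size=10)       # "optical parameters" to recover
b = A @ x_true                     # measured outgoing data

x, lr = np.zeros(10), 0.01
for it in range(5000):
    i = rng.integers(200)          # randomly select one data pair
    resid = A[i] @ x - b[i]        # mismatch for that single measurement
    x -= lr * resid * A[i]         # stochastic gradient step
print(np.linalg.norm(x - x_true))  # should be small
```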

  2. Unconventional minimal subtraction and Bogoliubov-Parasyuk-Hepp-Zimmermann method: Massive scalar theory and critical exponents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carvalho, Paulo R. S.; Leite, Marcelo M.

    2013-09-15

    We introduce a simpler although unconventional minimal subtraction renormalization procedure in the case of a massive scalar λφ^4 theory in Euclidean space using dimensional regularization. We show that this method is very similar to its counterpart in massless field theory. In particular, the choice of using the bare mass at higher perturbative order instead of employing its tree-level counterpart eliminates all tadpole insertions at that order. As an application, we compute diagrammatically the critical exponents η and ν at least up to two loops. We perform an explicit comparison with the Bogoliubov-Parasyuk-Hepp-Zimmermann (BPHZ) method at the same loop order, show that the proposed method requires fewer diagrams, and establish a connection between the two approaches.

  3. Methods for minimizing plastic flow of oil shale during in situ retorting

    DOEpatents

    Lewis, Arthur E.; Mallon, Richard G.

    1978-01-01

    In an in situ oil shale retorting process, plastic flow of hot rubblized oil shale is minimized by injecting carbon dioxide and water into spent shale above the retorting zone. These gases react chemically with the mineral constituents of the spent shale to form a cement-like material which binds the individual shale particles together and bonds the consolidated mass to the wall of the retort. This relieves the weight burden borne by the hot shale below the retorting zone and thereby minimizes plastic flow in the hot shale. At least a portion of the required carbon dioxide and water can be supplied by recycled product gases.

  4. A Numerical Optimization Approach for Tuning Fuzzy Logic Controllers

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Garg, Devendra P.

    1998-01-01

    This paper develops a method to tune fuzzy controllers using numerical optimization. The main attribute of this approach is that it allows fuzzy logic controllers to be tuned to achieve global performance requirements. Furthermore, this approach allows design constraints to be implemented during the tuning process. The method tunes the controller by parameterizing the membership functions for error, change-in-error, and control output. The resulting parameters form a design vector which is iteratively changed to minimize an objective function; the minimized objective function corresponds to optimal performance of the system. A spacecraft-mounted science instrument line-of-sight pointing control problem is used to demonstrate results.
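
    The tuning loop can be sketched as a standard numerical optimization over membership-function parameters. Everything below - the two-rule Gaussian controller, the first-order plant, and the ITAE objective - is a hypothetical stand-in for the paper's spacecraft pointing problem:

```python
import numpy as np
from scipy.optimize import minimize

def itae_cost(params, dt=0.02, T=5.0):
    """Simulate a unit-step response of a toy first-order plant driven by a
    two-rule fuzzy controller; params = (membership center c, gains g_n, g_p)."""
    c, g_n, g_p = params
    x, cost = 0.0, 0.0
    for k in range(int(T / dt)):
        e = 1.0 - x                          # tracking error vs. unit setpoint
        mu_n = np.exp(-(e + c) ** 2)         # "error is negative" membership
        mu_p = np.exp(-(e - c) ** 2)         # "error is positive" membership
        u = (mu_n * g_n + mu_p * g_p) / (mu_n + mu_p)  # centroid defuzzification
        x += dt * (-x + u)                   # first-order plant dynamics
        cost += dt * (k * dt) * abs(e)       # ITAE objective
    return cost

res = minimize(itae_cost, x0=[0.5, -1.0, 2.0], method="Nelder-Mead")
print(res.x, res.fun)                        # tuned design vector and cost
```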

  5. A study on the theoretical and practical accuracy of conoscopic holography-based surface measurements: toward image registration in minimally invasive surgery†

    PubMed Central

    Burgner, J.; Simpson, A. L.; Fitzpatrick, J. M.; Lathrop, R. A.; Herrell, S. D.; Miga, M. I.; Webster, R. J.

    2013-01-01

    Background Registered medical images can assist with surgical navigation and enable image-guided therapy delivery. In soft tissues, surface-based registration is often used and can be facilitated by laser surface scanning. Tracked conoscopic holography (which provides distance measurements) has been recently proposed as a minimally invasive way to obtain surface scans. Moving this technique from concept to clinical use requires a rigorous accuracy evaluation, which is the purpose of our paper. Methods We adapt recent non-homogeneous and anisotropic point-based registration results to provide a theoretical framework for predicting the accuracy of tracked distance measurement systems. Experiments are conducted on complex objects of defined geometry, an anthropomorphic kidney phantom, and a human cadaver kidney. Results Experiments agree with model predictions, producing point RMS errors consistently < 1 mm, surface-based registration with mean closest point error < 1 mm in the phantom, and an RMS target registration error of 0.8 mm in the human cadaver kidney. Conclusions Tracked conoscopic holography is clinically viable; it enables minimally invasive surface scan accuracy comparable to current clinical methods that require open surgery. PMID:22761086

  6. H∞ memory feedback control with input limitation minimization for offshore jacket platform stabilization

    NASA Astrophysics Data System (ADS)

    Yang, Jia Sheng

    2018-06-01

    In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platform stabilization. The main objective of this study is to reduce control consumption as well as to protect the actuator while satisfying the system performance requirements. First, we introduce a dynamic model of an offshore platform with low-order main modes, based on a mode-reduction method from numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since it is difficult to solve this non-convex optimization model directly, we use a relaxation method with matrix operations to transform it into a convex optimization model, which can then be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.

  7. On eco-efficient technologies to minimize industrial water consumption

    NASA Astrophysics Data System (ADS)

    Amiri, Mohammad C.; Mohammadifard, Hossein; Ghaffari, Ghasem

    2016-07-01

    Purpose - Water scarcity will place further stress on available water systems and decrease the security of water in many areas. Therefore, innovative methods to minimize industrial water usage and waste production are of paramount importance in extending fresh water resources and are among the main life-support systems in many arid regions of the world. This paper demonstrates that there are good opportunities for many industries to save water and decrease waste water in the softening process by substituting traditional methods with eco-friendly ones. The patented puffing method is an eco-efficient and viable technology for water saving and waste reduction in the lime softening process. Design/methodology/approach - The lime softening process (LSP) is highly sensitive to chemical reactions. In addition, optimal monitoring not only minimizes the sludge that must be disposed of but also reduces the operating costs of water conditioning. The weakness of the current (regular) control of LSP based on chemical analysis has been demonstrated experimentally and compared with the eco-efficient puffing method. Findings - This paper demonstrates that there is a good opportunity for many industries to save water and decrease waste water in the softening process by substituting the traditional method with the puffing method, a patented eco-efficient technology. Originality/value - Details of the innovative work required to minimize industrial water usage and waste production are outlined in this paper. Employing the novel puffing method for monitoring the lime softening process saves a considerable amount of water while reducing chemical sludge.

  8. Optimal Superpositioning of Flexible Molecule Ensembles

    PubMed Central

    Gapsys, Vytautas; de Groot, Bert L.

    2013-01-01

    Analysis of the internal dynamics of a biological molecule requires the successful removal of overall translation and rotation. Particularly for flexible or intrinsically disordered peptides, this is a challenging task due to the absence of a well-defined reference structure that could be used for superpositioning. In this work, we start the analysis with a widely known formulation of an objective for the problem of superimposing a set of multiple molecules: variance minimization over an ensemble. A negative effect of this superpositioning method is the introduction of ambiguous rotations, where different rotation matrices may be applied to structurally similar molecules. We developed two algorithms to resolve the suboptimal rotations. The first approach minimizes the variance together with the distance of a structure to the preceding molecule in the ensemble. The second algorithm seeks minimal variance together with the distance to the nearest neighbors of each structure. The newly developed methods were applied to molecular-dynamics trajectories and normal-mode ensembles of the Aβ peptide, RS peptide, and lysozyme. These new (to our knowledge) superpositioning methods combine the benefits of variance and nearest-neighbor distance minimization, providing a solution for the analysis of intrinsic motions of flexible molecules and resolving ambiguous rotations. PMID:23332072
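
    The baseline variance-minimization step - iteratively superposing each frame onto the current ensemble mean with the Kabsch algorithm - can be sketched as follows (the nearest-neighbor extensions described above are omitted):

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation mapping centered coordinates P (N x 3) onto Q."""
    U, _, Vt = np.linalg.svd(P.T @ Q)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    return Vt.T @ D @ U.T

def superpose_ensemble(X, iters=20):
    """X: (frames, atoms, 3). Rotate every frame onto the ensemble mean,
    re-estimate the mean, and repeat -- decreasing the ensemble variance."""
    X = X - X.mean(axis=1, keepdims=True)     # remove translation per frame
    for _ in range(iters):
        mean = X.mean(axis=0)
        X = np.array([frame @ kabsch(frame, mean).T for frame in X])
    return X
```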

  9. Preparation of positive blood cultures for direct MALDI-ToF MS identification.

    PubMed

    Robinson, Andrew M; Ussher, James E

    2016-08-01

    MALDI-ToF MS can be used to identify microorganisms directly from blood cultures. This study compared two methods of sample preparation. Similar levels of genus-level (91% vs 90%) and species-level (79% vs 74%) identification were obtained with the differential centrifugation and SDS methods. The SDS method is faster and requires minimal handling. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. The Use of Bioluminescence in Detecting Biohazardous Substances in Water.

    ERIC Educational Resources Information Center

    Thomulka, Kenneth William; And Others

    1993-01-01

    Describes an inexpensive, reproducible alternative assay that requires minimal preparation and equipment for water testing. It provides students with a direct method of detecting potentially biohazardous material in water by observing the reduction in bacterial luminescence. (PR)

  11. 40 CFR 63.694 - Testing methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... determine treatment process required HAP biodegradation efficiency (Rbio) for compliance with standards... procedures to minimize the loss of compounds due to volatilization, biodegradation, reaction, or sorption... compounds due to volatilization, biodegradation, reaction, or sorption during the sample collection, storage...

  12. 40 CFR 63.694 - Testing methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... determine treatment process required HAP biodegradation efficiency (Rbio) for compliance with standards... procedures to minimize the loss of compounds due to volatilization, biodegradation, reaction, or sorption... compounds due to volatilization, biodegradation, reaction, or sorption during the sample collection, storage...

  13. 40 CFR 63.694 - Testing methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... determine treatment process required HAP biodegradation efficiency (Rbio) for compliance with standards... procedures to minimize the loss of compounds due to volatilization, biodegradation, reaction, or sorption... compounds due to volatilization, biodegradation, reaction, or sorption during the sample collection, storage...

  14. Early identification of microorganisms in blood culture prior to the detection of a positive signal in the BACTEC FX system using matrix-assisted laser desorption/ionization-time of flight mass spectrometry.

    PubMed

    Wang, Ming-Cheng; Lin, Wei-Hung; Yan, Jing-Jou; Fang, Hsin-Yi; Kuo, Te-Hui; Tseng, Chin-Chung; Wu, Jiunn-Jong

    2015-08-01

    Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) is a valuable method for rapid identification of blood stream infection (BSI) pathogens. Integration of MALDI-TOF MS with a blood culture system can speed the identification of causative BSI microorganisms. We investigated the minimal microorganism concentrations of common BSI pathogens required for positive blood culture using BACTEC FX and for positive identification using MALDI-TOF MS. The time to detection with positive BACTEC FX and the minimal incubation time with positive MALDI-TOF MS identification were determined for earlier identification of common BSI pathogens. The minimal microorganism concentrations required for positive blood culture using BACTEC FX were >10^7-10^8 colony-forming units/mL for most of the BSI pathogens. The minimal microorganism concentrations required for identification using MALDI-TOF MS were >10^7 colony-forming units/mL. Using simulated BSI models, one can obtain a sufficient bacterial concentration from blood culture bottles for successful identification of five common Gram-positive and Gram-negative bacteria using MALDI-TOF MS 1.7-2.3 hours earlier than the usual time to detection in blood culture systems. This study provides an approach to earlier identification of BSI pathogens, prior to the detection of a positive signal in the blood culture system, using MALDI-TOF MS, compared to current methods. It can speed the identification of BSI pathogens and may benefit earlier therapy choice and patient outcome. Copyright © 2013. Published by Elsevier B.V.

  15. A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.

    PubMed

    Röhl, Annika; Bockmayr, Alexander

    2017-01-03

    Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. The minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks in case several exist. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but even all minimum subnetworks satisfying the required properties.
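
    The core MILP - binary on/off variables coupled to fluxes by big-M constraints, a demand requirement, and a minimum-cardinality objective - can be sketched with PuLP on a hypothetical four-reaction toy network (this is a generic illustration, not the authors' implementation):

```python
import pulp

# Toy stoichiometric matrix (hypothetical): metabolite A made by R0 and
# consumed by R1; metabolite B made by R1 and consumed by R2 or R3.
S = [[1, -1, 0, 0],
     [0, 1, -1, -1]]
M = 100.0                                            # big-M flux bound
prob = pulp.LpProblem("min_subnetwork", pulp.LpMinimize)
v = [pulp.LpVariable(f"v{j}", 0, M) for j in range(4)]           # fluxes
y = [pulp.LpVariable(f"y{j}", cat="Binary") for j in range(4)]   # active?
prob += pulp.lpSum(y)                                # fewest active reactions
for row in S:                                        # steady state: S v = 0
    prob += pulp.lpSum(row[j] * v[j] for j in range(4)) == 0
for j in range(4):
    prob += v[j] <= M * y[j]                         # inactive => zero flux
prob += v[2] >= 1.0                # biological requirement: demand through R2
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([int(pulp.value(yj)) for yj in y])             # expect [1, 1, 1, 0]
```

    Enumerating all minimum subnetworks then amounts to re-solving with integer cuts that exclude each previously found set of active reactions.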

  16. Damage diagnosis algorithm using a sequential change point detection method with an unknown distribution for damage

    NASA Astrophysics Data System (ADS)

    Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.

    2012-04-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
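
    The flavor of the approach - a sequential likelihood-ratio statistic in which the unknown post-damage mean is replaced by a running estimate - can be sketched as an adaptive CUSUM (the sliding window and Gaussian assumption here are illustrative, not the paper's exact estimators):

```python
import numpy as np

def adaptive_cusum(x, mu0, sigma, threshold=10.0, window=20):
    """Sequential change detection when the post-change mean is unknown:
    estimate it online from recent samples, then accumulate the
    log-likelihood ratio of N(mu1, sigma) vs. N(mu0, sigma)."""
    s = 0.0
    for t in range(1, len(x)):
        mu1 = np.mean(x[max(0, t - window):t + 1])  # running post-change estimate
        llr = ((x[t] - mu0) ** 2 - (x[t] - mu1) ** 2) / (2.0 * sigma ** 2)
        s = max(0.0, s + llr)                       # CUSUM recursion
        if s > threshold:
            return t                                # declare damage at time t
    return None                                     # no change detected
```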

  17. Protective immunity of Nile tilapia against Ichthyophthirius

    USDA-ARS?s Scientific Manuscript database

    Tilapia are currently cultured in different types of production systems ranging from pond, tank, cage, flowing water and intensive water reuse culture systems. Intensification of tilapia culture requires methods to prevent and control diseases to minimize the loss. Ichthyophthirius multifiliis (I...

  18. 29 CFR 95.22 - Payment.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... and Human Services, Payment Management System, P.O. Box 6021, Rockville, MD 20852. Interest amounts up... FOREIGN GOVERNMENTS, AND INTERNATIONAL ORGANIZATIONS Post-Award Requirements Financial and Program Management § 95.22 Payment. (a) Payment methods shall minimize the time elapsing between the transfer of...

  19. Joint Geophysical Inversion With Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelievre, P. G.; Bijani, R.; Farquharson, C. G.

    2015-12-01

    Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class comprises standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems is also mesh-based, but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods, including the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used, but these can be ameliorated using parallelization and problem-dimension-reduction strategies.
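
    The Pareto-optimal suite itself comes from non-dominated sorting of candidate models; a minimal two-objective version (data misfit vs. regularization, both minimized) looks like this:

```python
import numpy as np

def pareto_front(objectives):
    """Indices of non-dominated candidates; each row of `objectives` holds
    (data misfit, regularization term) values, both to be minimized."""
    obj = np.asarray(objectives, dtype=float)
    front = []
    for i, oi in enumerate(obj):
        # i is dominated if some row is <= oi everywhere and < oi somewhere
        dominated = np.any(np.all(obj <= oi, axis=1) & np.any(obj < oi, axis=1))
        if not dominated:
            front.append(i)
    return front

# Example: model 0 is dominated by model 1; models 1 and 2 trade off.
print(pareto_front([[3.0, 2.0], [1.0, 1.0], [0.5, 4.0]]))   # -> [1, 2]
```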

  20. Minimizing EVA Airlock Time and Depress Gas Losses

    NASA Technical Reports Server (NTRS)

    Trevino, Luis A.; Lafuse, Sharon A.

    2008-01-01

    This paper describes the need and solution for minimizing EVA airlock time and depress gas losses using a new method that minimizes EVA out-the-door time for a suited astronaut and reclaims most of the airlock depress gas. This method consists of one or more related concepts that use an evacuated reservoir tank to store and reclaim the airlock depress gas. The evacuated tank can be an inflatable tank, a spent fuel tank from a lunar lander descent stage, or a backup airlock. During EVA airlock operations, the airlock and reservoir would be equalized at some low pressure, and through proper selection of reservoir size, most of the depress gas would be stored in the reservoir for later reclamation. The benefit of this method is directly applicable to long-duration lunar and Mars missions that require multiple EVA missions (up to 100 two-person lunar EVAs) and conservation of consumables, including depress pump power and depress gas. The current ISS airlock gas reclamation method requires approximately 45 minutes of the astronaut's time in the airlock and 1 kW of electrical power. The proposed method would decrease the astronaut's time in the airlock because the depress gas is being temporarily stored in a reservoir tank for later recovery. Once the EVA crew is conducting the EVA, the gas in the reservoir would be pumped back to the cabin at a slow rate. Various trades were conducted to optimize this method, which include time to equalize the airlock with the evacuated reservoir versus reservoir size, pump power to reclaim depress gas versus time allotted, inflatable reservoir pros and cons (weight, volume, complexity), and feasibility of spent lunar nitrogen and oxygen tanks as reservoirs.

  1. Power Generation by Harvesting Ambient Energy with a Micro-Electromagnetic Generator

    DTIC Science & Technology

    2009-03-01

    more applicable at the micro scale are also being investigated including piezoelectric and electrostatics. Solar energy harvesting is a proven method. It...with IC circuitry. 6.2.7 Piezoelectric Research. In Chapter 2, energy harvesting through the use of piezoelectric materials was briefly discussed. A... piezoelectric harvesters require minimal movement for power generation, whereas an electromagnetic generator generally requires significant mechanical motion in

  2. High definition in minimally invasive surgery: a review of methods for recording, editing, and distributing video.

    PubMed

    Kelly, Christopher R; Hogle, Nancy J; Landman, Jaime; Fowler, Dennis L

    2008-09-01

    The use of high-definition cameras and monitors during minimally invasive procedures can provide the surgeon and operating team with more than twice the resolution of standard definition systems. Although this dramatic improvement in visualization offers numerous advantages, the adoption of high definition cameras in the operating room can be challenging because new recording equipment must be purchased, and several new technologies are required to edit and distribute video. The purpose of this review article is to provide an overview of the popular methods for recording, editing, and distributing high-definition video. This article discusses the essential technical concepts of high-definition video, reviews the different kinds of equipment and methods most often used for recording, and describes several options for video distribution.

  3. Plant genotyping using fluorescently tagged inter-simple sequence repeats (ISSRs): basic principles and methodology.

    PubMed

    Prince, Linda M

    2015-01-01

    Inter-simple sequence repeat PCR (ISSR-PCR) is a fast, inexpensive genotyping technique based on length variation in the regions between microsatellites. The method requires no species-specific prior knowledge of microsatellite location or composition. Very small amounts of DNA are required, making this method ideal for organisms of conservation concern, or where the quantity of DNA is extremely limited due to organism size. ISSR-PCR can be highly reproducible but requires careful attention to detail. Optimization of DNA extraction, fragment amplification, and normalization of fragment peak heights during fluorescent detection are critical steps to minimizing the downstream time spent verifying and scoring the data.

  4. Advancements in the delivery of epigenetic drugs

    PubMed Central

    Cramer, Samantha A.; Adjei, Isaac M.; Labhasetwar, Vinod

    2015-01-01

    Introduction Advancements in epigenetic treatments are coming not only from new drugs but also from modifications or encapsulation of existing drugs into different formulations, leading to greater stability and enhanced delivery to the target site. The epigenome is highly regulated and complex; therefore it is important that off-target effects of epigenetic drugs be minimized. The step from in vitro to in vivo treatment with these drugs often requires development of a method of effective delivery for clinical translation. Areas covered This review covers epigenetic mechanisms such as DNA methylation, chromatin remodeling and small-RNA-mediated gene regulation. There is a section in the review with examples of diseases where epigenetic alterations lead to impaired pathways, with an emphasis on cancer. Epigenetic drugs, their targets and clinical status are presented. Advantages of using a delivery method for epigenetic drugs as well as examples of current advancements and challenges are also discussed. Expert opinion Epigenetic drugs have the potential to be very effective therapy against a number of diseases, especially cancers and neurological disorders. As with many chemotherapeutics, undesired side effects need to be minimized. Finding a suitable delivery method means reducing side effects and achieving a higher therapeutic index. Each drug may require a unique delivery method exploiting the drug's chemistry or other physical characteristics, requiring interdisciplinary participation, and would benefit from a better understanding of the mechanisms of action. PMID:25739728

  5. 45 CFR 2543.22 - Payment.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... and disbursement by the recipient, and (2) Financial management systems that meet the standards for... remitted annually to Department of Health and Human Services, Payment Management System, Rockville, MD... Requirements Financial and Program Management § 2543.22 Payment. (a) Payment methods shall minimize the time...

  6. 45 CFR 2543.22 - Payment.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... and disbursement by the recipient, and (2) Financial management systems that meet the standards for... remitted annually to Department of Health and Human Services, Payment Management System, Rockville, MD... Requirements Financial and Program Management § 2543.22 Payment. (a) Payment methods shall minimize the time...

  7. 45 CFR 2543.22 - Payment.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... and disbursement by the recipient, and (2) Financial management systems that meet the standards for... remitted annually to Department of Health and Human Services, Payment Management System, Rockville, MD... Requirements Financial and Program Management § 2543.22 Payment. (a) Payment methods shall minimize the time...

  8. Real-time combustion monitoring of PCDD/F indicators by REMPI-TOFMS

    EPA Science Inventory

    Analyses for polychlorinated dibenzodioxin and dibenzofuran (PCDD/F) emissions typically require a 4 h extractive sample taken on an annual or less frequent basis. This results in a potentially minimally representative monitoring scheme. More recently, methods for continual sampl...

  9. Panama Canal Fog Navigation Study : System Requirements Statement

    DOT National Transportation Integrated Search

    1984-03-01

    Efforts to minimize the adverse impact of fog on Panama Canal operations have focused in the past on obtaining methods of predicting fog, of dispersing fog and of providing navigation during fog. This report describes the result of the most recent fo...

  10. Ultrasonic-based membrane aided sample preparation of urine proteomes.

    PubMed

    Jesus, Jemmyson Romário; Santos, Hugo M; López-Fernández, H; Lodeiro, Carlos; Arruda, Marco Aurélio Zezzi; Capelo, J L

    2018-02-01

    A new ultrafast ultrasonic-based method for shotgun proteomics as well as label-free protein quantification in urine samples is developed. The method first separates the urine proteins using nitrocellulose-based membranes, and the proteins are then digested in-membrane using trypsin. The enzymatic digestion process is accelerated from overnight to four minutes using a sonoreactor ultrasonic device. Overall, the sample treatment pipeline comprising protein separation, digestion and identification is done in just 3 h. The process is assessed using urine from healthy volunteers. The method shows that males can be differentiated from females using the protein content of urine in a fast, easy and straightforward way. 232 and 226 proteins are identified in male and female urine, respectively. Of these, 162 are common to both sexes, whilst 70 are unique to males and 64 to females. Of the 162 common proteins, 13 are present at statistically different levels (p < 0.05). The method matches the analytical minimalism concept as outlined by Halls, as each stage of this analysis is evaluated to minimize the time, cost, sample requirement, reagent consumption, energy requirements and production of waste products. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Flagellar Synchronization Is a Simple Alternative to Cell Cycle Synchronization for Ciliary and Flagellar Studies

    PubMed Central

    Dutta, Soumita

    2017-01-01

    ABSTRACT The unicellular green alga Chlamydomonas reinhardtii is an ideal model organism for studies of ciliary function and assembly. In assays for biological and biochemical effects of various factors on flagellar structure and function, synchronous culture is advantageous for minimizing variability. Here, we have characterized a method in which 100% synchronization is achieved with respect to flagellar length but not with respect to the cell cycle. The method requires inducing flagellar regeneration by amputation of the entire cell population and limiting regeneration time. This results in a maximally homogeneous distribution of flagellar lengths at 3 h postamputation. We found that time-limiting new protein synthesis during flagellar synchronization limits variability in the unassembled pool of limiting flagellar protein and variability in flagellar length without affecting the range of cell volumes. We also found that long- and short-flagella mutants that regenerate normally require longer and shorter synchronization times, respectively. By minimizing flagellar length variability using a simple method requiring only hours and no changes in media, flagellar synchronization facilitates the detection of small changes in flagellar length resulting from both chemical and genetic perturbations in Chlamydomonas. This method increases our ability to probe the basic biology of ciliary size regulation and related disease etiologies. IMPORTANCE Cilia and flagella are highly conserved antenna-like organelles that are found in nearly all mammalian cell types. They perform sensory and motile functions contributing to numerous physiological and developmental processes. Defects in their assembly and function are implicated in a wide range of human diseases ranging from retinal degeneration to cancer. Chlamydomonas reinhardtii is an algal model system for studying mammalian cilium formation and function. Here, we report a simple synchronization method that allows detection of small changes in ciliary length by minimizing variability in the population. We find that this method alters the key relationship between cell size and the amount of protein accumulated for flagellar growth. This provides a rapid alternative to traditional methods of cell synchronization for uncovering novel regulators of cilia. PMID:28289724

  12. Flagellar Synchronization Is a Simple Alternative to Cell Cycle Synchronization for Ciliary and Flagellar Studies.

    PubMed

    Dutta, Soumita; Avasthi, Prachee

    2017-01-01

    The unicellular green alga Chlamydomonas reinhardtii is an ideal model organism for studies of ciliary function and assembly. In assays for biological and biochemical effects of various factors on flagellar structure and function, synchronous culture is advantageous for minimizing variability. Here, we have characterized a method in which 100% synchronization is achieved with respect to flagellar length but not with respect to the cell cycle. The method requires inducing flagellar regeneration by amputation of the entire cell population and limiting regeneration time. This results in a maximally homogeneous distribution of flagellar lengths at 3 h postamputation. We found that time-limiting new protein synthesis during flagellar synchronization limits variability in the unassembled pool of limiting flagellar protein and variability in flagellar length without affecting the range of cell volumes. We also found that long- and short-flagella mutants that regenerate normally require longer and shorter synchronization times, respectively. By minimizing flagellar length variability using a simple method requiring only hours and no changes in media, flagellar synchronization facilitates the detection of small changes in flagellar length resulting from both chemical and genetic perturbations in Chlamydomonas. This method increases our ability to probe the basic biology of ciliary size regulation and related disease etiologies. IMPORTANCE Cilia and flagella are highly conserved antenna-like organelles that are found in nearly all mammalian cell types. They perform sensory and motile functions contributing to numerous physiological and developmental processes. Defects in their assembly and function are implicated in a wide range of human diseases ranging from retinal degeneration to cancer. Chlamydomonas reinhardtii is an algal model system for studying mammalian cilium formation and function. Here, we report a simple synchronization method that allows detection of small changes in ciliary length by minimizing variability in the population. We find that this method alters the key relationship between cell size and the amount of protein accumulated for flagellar growth. This provides a rapid alternative to traditional methods of cell synchronization for uncovering novel regulators of cilia.

  13. Means and method of balancing multi-cylinder reciprocating machines

    DOEpatents

    Corey, John A.; Walsh, Michael M.

    1985-01-01

    A virtual balancing axis arrangement is described for multi-cylinder reciprocating piston machines for effectively balancing out imbalanced forces and minimizing residual imbalance moments acting on the crankshaft of such machines without requiring the use of additional parallel-arrayed balancing shafts or complex and expensive gear arrangements. The novel virtual balancing axis arrangement is capable of being designed into multi-cylinder reciprocating piston and crankshaft machines for substantially reducing vibrations induced during operation of such machines with only a minimal number of additional component parts. Some of the required component parts may be available from parts already required for operation of auxiliary equipment, such as the oil and water pumps used in certain types of reciprocating piston and crankshaft machines, so that by appropriate location and dimensioning in accordance with the teachings of the invention, the virtual balancing axis arrangement can be built into the machine at little or no additional cost.

  14. OxMaR: open source free software for online minimization and randomization for clinical trials.

    PubMed

    O'Callaghan, Christopher A

    2014-01-01

    Minimization is a valuable method for allocating participants between the control and experimental arms of clinical studies. The use of minimization reduces differences that might arise by chance between the study arms in the distribution of patient characteristics such as gender, ethnicity and age. However, unlike randomization, minimization requires real-time assessment of each new participant with respect to the preceding distribution of relevant participant characteristics within the different arms of the study. For multi-site studies, this necessitates centralized computational analysis that is shared between all study locations. Unfortunately, there is no suitable freely available open source or free software that can be used for this purpose. OxMaR was developed to enable researchers in any location to use minimization for patient allocation and to access the minimization algorithm using any device that can connect to the internet, such as a desktop computer, tablet or mobile phone. The software is complete in itself and requires no special packages or libraries to be installed. It is simple to set up and run over the internet using online facilities which are very low cost or even free to the user. Importantly, it provides real-time information on allocation to the study lead or administrator and generates real-time distributed backups with each allocation. OxMaR can readily be modified and customised and can also be used for standard randomization. It has been extensively tested and has been used successfully in a low-budget multi-centre study. Hitherto, the logistical difficulties involved in minimization have precluded its use in many small studies, and this software should allow more widespread use of minimization, which should lead to studies with better-matched control and experimental arms. OxMaR should be particularly valuable in low-resource settings.
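
    For readers unfamiliar with minimization, a single allocation step looks roughly like the following (a generic Taves/Pocock-Simon-style sketch with a biased coin; OxMaR's actual algorithm and options may differ, and the factor names are hypothetical):

```python
import random
from collections import defaultdict

counts = defaultdict(lambda: [0, 0])     # (factor, level) -> count per arm

def allocate(participant, p_follow=0.8):
    """participant: e.g. {"sex": "F", "age_band": "60+"}. Choose the arm
    that keeps the marginal totals most balanced across all factors."""
    totals = [0, 0]
    for factor, level in participant.items():
        c = counts[(factor, level)]
        totals[0] += c[0]                # marginal load if assigned to arm 0
        totals[1] += c[1]                # marginal load if assigned to arm 1
    if totals[0] == totals[1]:
        arm = random.randint(0, 1)       # tie: pure randomization
    else:
        best = 0 if totals[0] < totals[1] else 1
        arm = best if random.random() < p_follow else 1 - best  # biased coin
    for factor, level in participant.items():
        counts[(factor, level)][arm] += 1
    return arm

print(allocate({"sex": "F", "age_band": "60+"}))
```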

  15. Comprehensive Training Curricula for Minimally Invasive Surgery

    PubMed Central

    Palter, Vanessa N

    2011-01-01

    Background The unique skill set required for minimally invasive surgery has in part contributed to a certain portion of surgical residency training transitioning from the operating room to the surgical skills laboratory. Simulation lends itself well as a method to shorten the learning curve for minimally invasive surgery by allowing trainees to practice the unique motor skills required for this type of surgery in a safe, structured environment. Although a significant amount of important work has been done to validate simulators as viable systems for teaching technical skills outside the operating room, the next step is to integrate simulation training into a comprehensive curriculum. Objectives This narrative review aims to synthesize the evidence and educational theories underlying curriculum development for technical skills, both in a broad context and specifically as it pertains to minimally invasive surgery. Findings The review highlights the critical aspects of simulation training, such as the effective provision of feedback, deliberate practice, training to proficiency, the opportunity to practice at varying levels of difficulty, and the inclusion of both cognitive teaching and hands-on training. In addition, frameworks for integrating simulation training into a comprehensive curriculum are described. Finally, existing curricula on both laparoscopic box trainers and virtual reality simulators are critically evaluated. PMID:22942951

  16. Costs and benefits of different methods of esophagectomy for esophageal cancer.

    PubMed

    Yanasoot, Alongkorn; Yolsuriyanwong, Kamtorn; Ruangsin, Sakchai; Laohawiriyakamol, Supparerk; Sunpaweravong, Somkiat

    2017-01-01

    Background A minimally invasive approach to esophagectomy is being used increasingly, but concerns remain regarding the feasibility, safety, cost, and outcomes. We performed an analysis of the costs and benefits of minimally invasive, hybrid, and open esophagectomy approaches for esophageal cancer surgery. Methods The data of 83 consecutive patients who underwent a McKeown's esophagectomy at Prince of Songkla University Hospital between January 2008 and December 2014 were analyzed. Open esophagectomy was performed in 54 patients, minimally invasive esophagectomy in 13, and hybrid esophagectomy in 16. There were no differences in patient characteristics among the 3 groups. Minimally invasive esophagectomy was undertaken via a thoracoscopic-laparoscopic approach, hybrid esophagectomy via a thoracoscopic-laparotomy approach, and open esophagectomy via a thoracotomy-laparotomy approach. Results Minimally invasive esophagectomy required a longer operative time than hybrid or open esophagectomy (p = 0.02), but these patients reported less postoperative pain (p = 0.01). There were no significant differences in blood loss, intensive care unit stay, hospital stay, or postoperative complications among the 3 groups. Minimally invasive esophagectomy incurred higher operative and surgical material costs than hybrid or open esophagectomy (p = 0.01), but there were no significant differences in inpatient care and total hospital costs. Conclusion Minimally invasive esophagectomy resulted in the least postoperative pain but the greatest operative cost and longest operative time. Open esophagectomy was associated with the lowest operative cost and shortest operative time but the most postoperative pain. Hybrid esophagectomy had a shorter learning curve while sharing the advantages of minimally invasive esophagectomy.

  17. A Manually Operated, Advance Off-Stylet Insertion Tool for Minimally Invasive Cochlear Implantation Surgery

    PubMed Central

    Kratchman, Louis B.; Schurzig, Daniel; McRackan, Theodore R.; Balachandran, Ramya; Noble, Jack H.; Webster, Robert J.; Labadie, Robert F.

    2014-01-01

    The current technique for cochlear implantation (CI) surgery requires a mastoidectomy to gain access to the cochlea for electrode array insertion. It has been shown that microstereotactic frames can enable an image-guided, minimally invasive approach to CI surgery called percutaneous cochlear implantation (PCI) that uses a single drill hole for electrode array insertion, avoiding a more invasive mastoidectomy. Current clinical methods for electrode array insertion are not compatible with PCI surgery because they require a mastoidectomy to access the cochlea; thus, we have developed a manually operated electrode array insertion tool that can be deployed through a PCI drill hole. The tool can be adjusted using a preoperative CT scan for accurate execution of the advance off-stylet (AOS) insertion technique and requires less skill to operate than is currently required to implant electrode arrays. We performed three cadaver insertion experiments using the AOS technique and verified by CT and microdissection that all insertions were successful. PMID:22851233

  18. The Minimal Preprocessing Pipelines for the Human Connectome Project

    PubMed Central

    Glasser, Matthew F.; Sotiropoulos, Stamatios N; Wilson, J Anthony; Coalson, Timothy S; Fischl, Bruce; Andersson, Jesper L; Xu, Junqian; Jbabdi, Saad; Webster, Matthew; Polimeni, Jonathan R; Van Essen, David C; Jenkinson, Mark

    2013-01-01

    The Human Connectome Project (HCP) faces the challenging task of bringing multiple magnetic resonance imaging (MRI) modalities together in a common automated preprocessing framework across a large cohort of subjects. The MRI data acquired by the HCP differ in many ways from data acquired on conventional 3 Tesla scanners and often require newly developed preprocessing methods. We describe the minimal preprocessing pipelines for structural, functional, and diffusion MRI that were developed by the HCP to accomplish many low-level tasks, including spatial artifact/distortion removal, surface generation, cross-modal registration, and alignment to standard space. These pipelines are specially designed to capitalize on the high quality data offered by the HCP. The final standard space makes use of a recently introduced CIFTI file format and the associated grayordinates spatial coordinate system. This allows for combined cortical surface and subcortical volume analyses while reducing the storage and processing requirements for high spatial and temporal resolution data. Here, we provide the minimum image acquisition requirements for the HCP minimal preprocessing pipelines and additional advice for investigators interested in replicating the HCP’s acquisition protocols or using these pipelines. Finally, we discuss some potential future improvements for the pipelines. PMID:23668970

  19. Data identification for improving gene network inference using computational algebra.

    PubMed

    Dimitrova, Elena; Stigler, Brandilyn

    2014-11-01

    Identification of models of gene regulatory networks is sensitive to the amount of data used as input. Considering the substantial costs in conducting experiments, it is of value to have an estimate of the amount of data required to infer the network structure. To minimize wasted resources, it is also beneficial to know which data are necessary to identify the network. Knowledge of the data and knowledge of the terms in polynomial models are often required a priori in model identification. In applications, it is unlikely that the structure of a polynomial model will be known, which may force data sets to be unnecessarily large in order to identify a model. Furthermore, none of the known results provides any strategy for constructing data sets to uniquely identify a model. We provide a specialization of an existing criterion for deciding when a set of data points identifies a minimal polynomial model when its monomial terms have been specified. Then, we relax the requirement of the knowledge of the monomials and present results for model identification given only the data. Finally, we present a method for constructing data sets that identify minimal polynomial models.
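    The linear-algebra core of the first criterion can be illustrated outside the authors' computational-algebra setting: assuming real-valued data for simplicity, a data set identifies a unique polynomial model with prescribed monomial terms exactly when the monomial-evaluation matrix has full column rank. The monomial set and data points below are invented for illustration.

```python
# Hedged sketch: rank test for identifiability of a polynomial model whose
# monomial support is known (hypothetical monomials and data points).
import numpy as np

def evaluation_matrix(points, exponents):
    """Rows index data points, columns index monomials x1^a1 * x2^a2 * ..."""
    return np.array([[np.prod(np.power(p, e)) for e in exponents]
                     for p in points], dtype=float)

exponents = [(0, 0), (1, 0), (0, 1), (1, 1)]   # monomials 1, x, y, xy
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
M = evaluation_matrix(points, exponents)
print(np.linalg.matrix_rank(M) == len(exponents))  # True: model identified
```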

  20. A comparison of several methods of solving nonlinear regression groundwater flow problems

    USGS Publications Warehouse

    Cooley, Richard L.

    1985-01-01

    Computational efficiency and computer memory requirements for four methods of minimizing functions were compared for four test nonlinear-regression steady state groundwater flow problems. The fastest methods were the Marquardt and quasi-linearization methods, which required almost identical computer times and numbers of iterations; the next fastest was the quasi-Newton method, and last was the Fletcher-Reeves method, which did not converge in 100 iterations for two of the problems. The fastest method per iteration was the Fletcher-Reeves method, and this was followed closely by the quasi-Newton method. The Marquardt and quasi-linearization methods were slower. For all four methods the speed per iteration was directly related to the number of parameters in the model. However, this effect was much more pronounced for the Marquardt and quasi-linearization methods than for the other two. Hence the quasi-Newton (and perhaps Fletcher-Reeves) method might be more efficient than either the Marquardt or quasi-linearization methods if the number of parameters in a particular model were large, although this remains to be proven. The Marquardt method required somewhat less central memory than the quasi-linearization method for three of the four problems. For all four problems the quasi-Newton method required roughly two thirds to three quarters of the memory required by the Marquardt method, and the Fletcher-Reeves method required slightly less memory than the quasi-Newton method. Memory requirements were not excessive for any of the four methods.
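    A rough modern analogue of this comparison can be sketched with scipy on a toy problem (the exponential-decay regression below is invented, not one of the groundwater test problems, and scipy's "CG" is a Polak-Ribière-type conjugate gradient rather than Fletcher-Reeves):

```python
# Compare a Marquardt-type least-squares solver with quasi-Newton (BFGS)
# and nonlinear conjugate-gradient minimizers on one sum-of-squares objective.
import numpy as np
from scipy.optimize import least_squares, minimize

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 40)
y = 2.5 * np.exp(-1.3 * t) + 0.05 * rng.standard_normal(t.size)

residuals = lambda p: p[0] * np.exp(-p[1] * t) - y
sse = lambda p: np.sum(residuals(p) ** 2)
x0 = np.array([1.0, 1.0])

lm = least_squares(residuals, x0, method="lm")   # Levenberg-Marquardt
print("LM", lm.nfev, lm.x)
for name in ("BFGS", "CG"):                      # quasi-Newton, conjugate gradient
    r = minimize(sse, x0, method=name)
    print(name, r.nit, r.x)
```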

  1. The Costs of Legislated Minimal Competency Requirements. A background paper prepared for the Minimal Competency Workshops sponsored by the Education Commission of the States and the National Institute of Education.

    ERIC Educational Resources Information Center

    Anderson, Barry D.

    Little is known about the costs of setting up and implementing legislated minimal competency testing (MCT). To estimate the financial obstacles which lie between the idea and its implementation, MCT requirements are viewed from two perspectives. The first, government regulation, views legislated minimal competency requirements as an attempt by the…

  2. A Tool for the Automated Design and Evaluation of Habitat Interior Layouts

    NASA Technical Reports Server (NTRS)

    Simon, Matthew A.; Wilhite, Alan W.

    2013-01-01

    The objective of space habitat design is to minimize mass and system size while providing adequate space for all necessary equipment and a functional layout that supports crew health and productivity. Unfortunately, development and evaluation of interior layouts is often ignored during conceptual design because of the subjectivity and long times required using current evaluation methods (e.g., human-in-the-loop mockup tests and in-depth CAD evaluations). Early, more objective assessment could prevent expensive design changes that may increase vehicle mass and compromise functionality. This paper describes a new interior design evaluation method to enable early, structured consideration of habitat interior layouts. This interior layout evaluation method features a comprehensive list of quantifiable habitat layout evaluation criteria, automatic methods to measure these criteria from a geometry model, and application of systems engineering tools and numerical methods to construct a multi-objective value function measuring the overall habitat layout performance. In addition to a detailed description of this method, a C++/OpenGL software tool which has been developed to implement this method is also discussed. This tool leverages geometry modeling coupled with collision detection techniques to identify favorable layouts subject to multiple constraints and objectives (e.g., minimize mass, maximize contiguous habitable volume, maximize task performance, and minimize crew safety risks). Finally, a few habitat layout evaluation examples are described to demonstrate the effectiveness of this method and tool to influence habitat design.
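    The multi-objective value function at the heart of the method can be sketched as a weighted sum of normalized criteria. The criteria, weights, bounds, and scores below are invented placeholders, not values from the paper.

```python
# Minimal sketch of a weighted multi-objective layout value function.
from dataclasses import dataclass

@dataclass
class LayoutScores:
    mass_kg: float              # lower is better
    habitable_volume_m3: float  # higher is better
    task_performance: float     # 0..1, higher is better
    safety_risk: float          # 0..1, lower is better

def layout_value(s: LayoutScores, bounds) -> float:
    """Normalize each criterion to 0..1 and combine with a weighted sum."""
    norm = lambda x, lo, hi: (x - lo) / (hi - lo)
    terms = {
        "mass":   1.0 - norm(s.mass_kg, *bounds["mass"]),
        "volume": norm(s.habitable_volume_m3, *bounds["volume"]),
        "tasks":  s.task_performance,
        "safety": 1.0 - s.safety_risk,
    }
    weights = {"mass": 0.3, "volume": 0.3, "tasks": 0.2, "safety": 0.2}
    return sum(weights[k] * terms[k] for k in weights)

bounds = {"mass": (4000.0, 9000.0), "volume": (10.0, 60.0)}
print(layout_value(LayoutScores(6500.0, 42.0, 0.8, 0.1), bounds))
```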

  3. Minimally invasive surgical method to detect sound processing in the cochlear apex by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Ramamoorthy, Sripriya; Zhang, Yuan; Petrie, Tracy; Fridberger, Anders; Ren, Tianying; Wang, Ruikang; Jacques, Steven L.; Nuttall, Alfred L.

    2016-02-01

    Sound processing in the inner ear involves separation of the constituent frequencies along the length of the cochlea. Frequencies relevant to human speech (100 to 500 Hz) are processed in the apex region. Among mammals, the guinea pig cochlear apex processes similar frequencies and is thus relevant for the study of speech processing in the cochlea. However, the extensive surgery previously required has limited optical access to this area for investigating cochlear signal processing without significant intrusion. A simple method is developed to provide optical access to the guinea pig cochlear apex in two directions with minimal surgery. Furthermore, all prior vibration measurements in the guinea pig apex involved opening an observation hole in the otic capsule, an approach that has been questioned because of the resulting changes to cochlear hydrodynamics. Here, this limitation is overcome by measuring the vibrations through the unopened otic capsule using phase-sensitive Fourier domain optical coherence tomography. The optically and surgically advanced method described here lays the foundation for minimally invasive investigation of speech-related signal processing in the cochlea.

  4. Nanoliter microfluidic hybrid method for simultaneous screening and optimization validated with crystallization of membrane proteins

    PubMed Central

    Li, Liang; Mustafi, Debarshi; Fu, Qiang; Tereshko, Valentina; Chen, Delai L.; Tice, Joshua D.; Ismagilov, Rustem F.

    2006-01-01

    High-throughput screening and optimization experiments are critical to a number of fields, including chemistry and structural and molecular biology. The separation of these two steps may introduce false negatives and a time delay between initial screening and subsequent optimization. Although a hybrid method combining both steps may address these problems, miniaturization is required to minimize sample consumption. This article reports a “hybrid” droplet-based microfluidic approach that combines the steps of screening and optimization into one simple experiment and uses nanoliter-sized plugs to minimize sample consumption. Many distinct reagents were sequentially introduced as ≈140-nl plugs into a microfluidic device and combined with a substrate and a diluting buffer. Tests were conducted in ≈10-nl plugs containing different concentrations of a reagent. Methods were developed to form plugs of controlled concentrations, to index concentrations, and to incubate thousands of plugs inexpensively and without evaporation. To validate the hybrid method and demonstrate its applicability to challenging problems, crystallization of model membrane proteins and handling of solutions of detergents and viscous precipitants were demonstrated. By using 10 μl of protein solution, ≈1,300 crystallization trials were set up within 20 min by one researcher. This method was compatible with growth, manipulation, and extraction of high-quality crystals of membrane proteins, demonstrated by obtaining high-resolution diffraction images and solving a crystal structure. This robust method requires inexpensive equipment and supplies, should be especially suitable for use in individual laboratories, and could find applications in a number of areas that require chemical, biochemical, and biological screening and optimization. PMID:17159147

  5. A wave superposition method formulated in digital acoustic space

    NASA Astrophysics Data System (ADS)

    Hwang, Yong-Sin

    In this thesis, a new formulation of the Wave Superposition method is proposed wherein the conventional mesh approach is replaced by a simple 3-D digital work space that easily accommodates shape optimization for minimizing or maximizing radiation efficiency. As sound quality is in demand in almost all product designs, and because of fierce competition between product manufacturers, faster and more accurate computational methods for shape optimization are always desired. Because the conventional Wave Superposition method relies solely on mesh geometry, it cannot accommodate fast shape changes in the design stage of a consumer product or machinery, where many iterations of shape changes are required. Since the use of a mesh hinders easy shape changes, a new approach for representing geometry is introduced by constructing a uniform lattice in a 3-D digital work space. A voxel (a portmanteau of volumetric and pixel) is essentially a volume element defined by the uniform lattice, and it does not require the separate connectivity information that a mesh element does. In the presented method, geometry is represented with voxels that can easily adapt to shape changes, and it is therefore more suitable for shape optimization. The new method was validated by computing the radiated sound power of structures of simple and complex geometries and complex mode shapes. It was shown that matching volume velocity is a key component of an accurate analysis. A sensitivity study showed that the method required at least 6 elements per acoustic wavelength, and a complexity study showed a minimal reduction in computational time.
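    A minimal sketch of the occupancy-grid idea, assuming a uniform lattice and an analytic inside/outside test in place of whatever geometry kernel the thesis actually uses:

```python
# Voxelize a sphere on a uniform lattice: occupancy alone is the geometry,
# so a shape change is just a re-test of cell centers -- no re-meshing step.
import numpy as np

def voxelize_sphere(center, radius, pitch, extent):
    """Mark lattice cells whose centers fall inside the sphere."""
    axis = np.arange(-extent, extent, pitch) + pitch / 2
    x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
    return ((x - center[0])**2 + (y - center[1])**2
            + (z - center[2])**2) <= radius**2

occ = voxelize_sphere((0.0, 0.0, 0.0), 0.5, pitch=0.05, extent=1.0)
print(occ.shape, occ.sum())  # lattice dimensions and solid-voxel count
```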

  6. On the formulation of a minimal uncertainty model for robust control with structured uncertainty

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert

    1991-01-01

    In the design and analysis of robust control systems for uncertain plants, representing the system transfer matrix in the form of what has come to be termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents a transfer function matrix M(s) of the nominal closed loop system, and the delta represents an uncertainty matrix acting on M(s). The nominal closed loop system M(s) results from closing the feedback control system, K(s), around a nominal plant interconnection structure P(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, but for real parameter variations delta is a diagonal matrix of real elements. Conceptually, the M-delta structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the currently available literature addresses computational methods for obtaining this structure, and none of this literature addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty, where the term minimal refers to the dimension of the delta matrix. Since having a minimally dimensioned delta matrix would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta would be useful. Hence, a method of obtaining the interconnection system P(s) is required. A generalized procedure for obtaining a minimal P-delta structure for systems with real parameter variations is presented. Using this model, the minimal M-delta model can then be easily obtained by closing the feedback loop. The procedure involves representing the system in a cascade-form state-space realization, determining the minimal uncertainty matrix, delta, and constructing the state-space representation of P(s). Three examples are presented to illustrate the procedure.
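    For reference, the perturbed closed loop in this framework is the standard upper linear fractional transformation of M(s) and the block-diagonal delta (textbook robust-control notation, not an excerpt from the paper):

```latex
F_u(M,\Delta) \;=\; M_{22} + M_{21}\,\Delta\,\bigl(I - M_{11}\Delta\bigr)^{-1} M_{12},
\qquad
\Delta \;=\; \operatorname{diag}\bigl(\delta_1 I_{r_1},\,\dots,\,\delta_n I_{r_n}\bigr).
```

    Minimality of the M-delta model then means the smallest total dimension \(\sum_i r_i\) of the delta block, which is exactly what reduces the cost of the structured singular value computations mentioned above.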

  7. Method of Minimizing Size of Heat Rejection Systems for Thermoelectric Coolers to Cool Detectors in Space

    NASA Technical Reports Server (NTRS)

    Choi, Michael K.

    2014-01-01

    A thermal design concept of attaching the thermoelectric cooler (TEC) hot side directly to the radiator and maximizing the number of TECs to cool multiple detectors in space is presented. It minimizes the temperature drop between the TECs and radiator. An ethane constant conductance heat pipe transfers heat from the detectors to a TEC cold plate which the cold side of the TECs is attached to. This thermal design concept minimizes the size of TEC heat rejection systems. Hence it reduces the problem of accommodating the radiator within a required envelope. It also reduces the mass of the TEC heat rejection system. Thermal testing of a demonstration unit in vacuum verified the thermal performance of the thermal design concept.

  8. Subject-specific cardiovascular system model-based identification and diagnosis of septic shock with a minimally invasive data set: animal experiments and proof of concept.

    PubMed

    Chase, J Geoffrey; Lambermont, Bernard; Starfinger, Christina; Hann, Christopher E; Shaw, Geoffrey M; Ghuysen, Alexandre; Kolh, Philippe; Dauby, Pierre C; Desaive, Thomas

    2011-01-01

    A cardiovascular system (CVS) model and parameter identification method have previously been validated for identifying different cardiac and circulatory dysfunctions in simulation and using porcine models of pulmonary embolism, hypovolemia with PEEP titrations and induced endotoxic shock. However, these studies required both left and right heart catheters to collect the data required for subject-specific monitoring and diagnosis - a maximally invasive data set in a critical care setting, although it does occur in practice. Hence, use of this model-based diagnostic would require significant additional invasive sensors for some subjects, which is unacceptable in some, if not all, cases. The main goal of this study is to prove the concept of using only measurements from one side of the heart (right) in a 'minimal' data set to identify an effective patient-specific model that can capture key clinical trends in endotoxic shock. This research extends existing methods to a reduced and minimal data set requiring only a single catheter, reducing the risk of infection and other complications - a very common, typical situation in critical care patients, particularly after cardiac surgery. The extended methods, and the assumptions on which they are founded, are developed and presented in a case study of the identification of pig-specific parameters in an animal model of induced endotoxic shock. This case study is used to define the impact of this minimal data set on the quality and accuracy of the model application for monitoring, detecting and diagnosing septic shock. Six anesthetized healthy pigs weighing 20-30 kg received a 0.5 mg kg(-1) endotoxin infusion over a period of 30 min from T0 to T30. For this research, only right heart measurements were obtained. Errors for the identified model are within 8% when the model is identified from data, re-simulated and then compared to the experimentally measured data, including measurements not used in the identification process for validation. Importantly, all identified parameter trends match physiologically, clinically and experimentally expected changes, indicating that no diagnostic power is lost. This work represents a further step toward validation with human subjects of this model-based approach to cardiovascular diagnosis and therapy guidance in monitoring endotoxic disease states. The results and methods obtained can be readily extended from this case study to the other animal model results presented previously. Overall, these results provide further support for prospective, proof-of-concept clinical testing in humans.

  9. Suprapubic cystostomy using optical urethrotome in female patients.

    PubMed

    Sawant, Ajit Somaji; Patwardhan, Sujata K; Attar, Mohammad Ismail; Varma, Radheshyam; Bansal, Ujjwal

    2009-08-01

    In many female patients for lower urinary tract reconstructive procedures, a suprapubic cystostomy along with perurethral catheter is required for urinary diversion. We describe a new and simple method of intraoperative suprapubic catheter placement using optical urethrotome wherein distension of bladder is not required. A total of 26 patients underwent suprapubic catheter placement intraoperatively with the aid of Sachse' optical urethrotome and its outer sheath from January 2005 to May 2008. A 16F Foley catheter could be successfully placed suprapubically in all patients with this method. There were no complications like injury to intraabdominal viscera, retropubic hematoma, hematuria, or catheter dislodgement. We describe a new method of intraoperative suprapubic catheter placement in female patients that is minimally invasive, technically safe, simple, and effective, and does not require bladder distension.

  10. Contour Detection and Completion for Inpainting and Segmentation Based on Topological Gradient and Fast Marching Algorithms

    PubMed Central

    Auroux, Didier; Cohen, Laurent D.; Masmoudi, Mohamed

    2011-01-01

    We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We present then two numerical applications, to image inpainting and segmentation, of this hybrid algorithm. PMID:22194734
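    The minimal-path step can be imitated with an off-the-shelf graph search over a cost image. This is a stand-in for the authors' fast marching implementation, and the cost map below is synthetic:

```python
# Find a cheapest path through a cost image; low cost marks detected edges.
import numpy as np
from skimage.graph import route_through_array

cost = np.ones((64, 64))
idx = np.arange(64)
cost[idx, idx] = 0.01                     # cheap diagonal ridge = an "edge"
path, total_cost = route_through_array(
    cost, start=(0, 0), end=(63, 63), fully_connected=True, geometric=True)
print(len(path), total_cost)              # the path follows the cheap ridge
```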

  11. Experimental Validation of the Dynamic Inertia Measurement Method to Find the Mass Properties of an Iron Bird Test Article

    NASA Technical Reports Server (NTRS)

    Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David

    2015-01-01

    The mass properties of an aerospace vehicle are required by multiple disciplines in the analysis and prediction of flight behavior. Pendulum oscillation methods have been developed and employed for almost a century as a means to measure mass properties. However, these oscillation methods are costly, time consuming, and risky. The NASA Armstrong Flight Research Center has been investigating the Dynamic Inertia Measurement, or DIM method as a possible alternative to oscillation methods. The DIM method uses ground test techniques that are already applied to aerospace vehicles when conducting modal surveys. Ground vibration tests would require minimal additional instrumentation and time to apply the DIM method. The DIM method has been validated on smaller test articles, but has not yet been fully proven on large aerospace vehicles.

  12. Alignment of angular velocity sensors for a vestibular prosthesis.

    PubMed

    Digiovanna, Jack; Carpaneto, Jacopo; Micera, Silvestro; Merfeld, Daniel M

    2012-02-13

    Vestibular prosthetics transmit angular velocities to the nervous system via electrical stimulation. Head-fixed gyroscopes measure angular motion, but the gyroscope coordinate system will not be coincident with the sensory organs the prosthetic replaces. Here we show a simple calibration method to align gyroscope measurements with the anatomical coordinate system. We benchmarked the method with simulated movements and obtained proof of concept with one healthy subject. The method was robust to misalignment and required little data and minimal processing.
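    One standard way to realize such an alignment, assuming paired gyroscope/anatomical measurements are available, is a least-squares rotation fit (the Kabsch/SVD solution of Wahba's problem). The paper's exact calibration procedure may differ, and the calibration data here are simulated:

```python
# Fit the rotation aligning gyroscope-frame vectors to anatomical axes.
import numpy as np

def best_rotation(gyro_vecs, anat_vecs):
    """Least-squares rotation R with R @ gyro_i ~ anat_i (rows are vectors)."""
    H = gyro_vecs.T @ anat_vecs
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

anat = np.eye(3)                               # anatomical axes
true_R = np.array([[0.0, -1.0, 0.0],           # hypothetical mounting rotation
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
gyro = anat @ true_R.T                         # what the gyros would read
R = best_rotation(gyro, anat)
print(np.allclose(R @ gyro.T, anat.T))         # True: frames aligned
```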

  13. Nonlinear gamma correction via normed bicoherence minimization in optical fringe projection metrology

    NASA Astrophysics Data System (ADS)

    Kamagara, Abel; Wang, Xiangzhao; Li, Sikun

    2018-03-01

    We propose a method to compensate for the projector intensity nonlinearity induced by the gamma effect in three-dimensional (3-D) fringe projection metrology by extending high-order spectra analysis and bispectral norm minimization to digital sinusoidal fringe pattern analysis. The bispectrum estimate allows extraction of vital signal information features such as spectral component correlation relationships in fringe pattern images. Our approach exploits the fact that gamma introduces high-order harmonic correlations in the affected fringe pattern image. Estimation and compensation of projector nonlinearity are realized by detecting and minimizing the normed bispectral coherence of these correlations. The proposed technique does not require calibration information, technical knowledge, or specifications of the fringe projection unit. This is promising for developing a modular and calibration-invariant model for nonlinear gamma compensation in digital fringe pattern projection profilometry. Experimental and numerical simulation results demonstrate this method to be efficient and effective in improving phase measurement accuracy with phase-shifting fringe pattern projection profilometry.
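    A heavily simplified stand-in for the idea: since gamma pushes energy into higher harmonics of the fringe, an unknown gamma can be estimated by applying a candidate inverse correction and minimizing the residual harmonic energy. This substitutes a plain harmonic-energy objective for the paper's normed-bicoherence one and assumes the fringe frequency is known:

```python
# Recover gamma by minimizing 2nd/3rd-harmonic energy after inverse correction.
import numpy as np
from scipy.optimize import minimize_scalar

n = np.arange(1024)
fund = 8                                                   # fringe frequency (bins)
ideal = 0.5 + 0.5 * np.cos(2 * np.pi * fund * n / n.size)  # pure sinusoidal fringe
captured = ideal ** 2.2                                    # gamma-distorted capture

def harmonic_energy(g):
    corrected = np.clip(captured, 1e-9, 1.0) ** (1.0 / g)  # candidate inverse gamma
    spec = np.abs(np.fft.rfft(corrected - corrected.mean()))
    return spec[2 * fund] ** 2 + spec[3 * fund] ** 2       # residual harmonics

res = minimize_scalar(harmonic_energy, bounds=(1.0, 4.0), method="bounded")
print(res.x)  # ~2.2, the simulated gamma
```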

  14. Long-term seafloor monitoring at an open ocean aquaculture site in the western Gulf of Maine, USA: development of an adaptive protocol.

    PubMed

    Grizzle, R E; Ward, L G; Fredriksson, D W; Irish, J D; Langan, R; Heinig, C S; Greene, J K; Abeels, H A; Peter, C R; Eberhardt, A L

    2014-11-15

    The seafloor at an open ocean finfish aquaculture facility in the western Gulf of Maine, USA was monitored from 1999 to 2008 by sampling sites inside a predicted impact area modeled by oceanographic conditions and fecal and food settling characteristics, and nearby reference sites. Univariate and multivariate analyses of benthic community measures from box core samples indicated minimal or no significant differences between impact and reference areas. These findings resulted in development of an adaptive monitoring protocol involving initial low-cost methods that required more intensive and costly efforts only when negative impacts were initially indicated. The continued growth of marine aquaculture is dependent on further development of farming methods that minimize negative environmental impacts, as well as effective monitoring protocols. Adaptive monitoring protocols, such as the one described herein, coupled with mathematical modeling approaches, have the potential to provide effective protection of the environment while minimizing monitoring effort and costs.

  15. A minimally invasive method for extraction of sturgeon oocytes

    USGS Publications Warehouse

    Candrl, James S.; Papoulias, Diana M.; Tillitt, Donald E.

    2010-01-01

    Fishery biologists, hatchery personnel, and caviar fishers routinely extract oocytes from sturgeon (Acipenseridae) to determine the stage of maturation by checking egg quality. Typically, oocytes are removed either by inserting a catheter into the oviduct or by making an incision in the body cavity. Both methods can be time-consuming and stressful to the fish. We describe a device to collect mature oocytes from sturgeons quickly and effectively with minimal stress on the fish. The device is made by creating a needle from stainless steel tubing and connecting it to a syringe with polyvinyl chloride tubing. The device is filled with saline solution or water, the needle is inserted into the abdominal wall, and eggs are extracted from the fish. Using this device, an oocyte sample can be collected in less than 30 s. Such sampling leaves a minute wound that heals quickly and does not require suturing. The extractor device can easily be used in the field or hatchery, reduces fish handling time, and minimizes stress.

  16. Automated and unsupervised detection of malarial parasites in microscopic images.

    PubMed

    Purwar, Yashasvi; Shah, Sirish L; Clarke, Gwen; Almugairi, Areej; Muehlenbachs, Atis

    2011-12-13

    Malaria is a serious infectious disease. According to the World Health Organization, it is responsible for nearly one million deaths each year. There are various techniques to diagnose malaria, of which manual microscopy is considered to be the gold standard. However, due to the number of steps required in manual assessment, this diagnostic method is time consuming (leading to late diagnosis) and prone to human error (leading to erroneous diagnosis), even in experienced hands. The focus of this study is to develop a robust, unsupervised and sensitive malaria screening technique with low material cost and one that has an advantage over other techniques in that it minimizes human reliance and is, therefore, more consistent in applying diagnostic criteria. A method based on digital image processing of Giemsa-stained thin smear images is developed to facilitate the diagnostic process. The diagnosis procedure is divided into two parts: enumeration and identification. The image-based method presented here is designed to automate the process of enumeration and identification, with the main advantage being its ability to carry out the diagnosis in an unsupervised manner and yet have high sensitivity, thus reducing cases of false negatives. The image-based method is tested on more than 500 images from two independent laboratories. The aim is to distinguish between positive and negative cases of malaria using thin smear blood slide images. Due to the unsupervised nature of the method, it requires minimal human intervention, thus speeding up the whole process of diagnosis. Overall sensitivity in capturing cases of malaria is 100%, and specificity ranges from 50% to 88% across all species of malaria parasites. This image-based screening method will speed up the whole process of diagnosis and is advantageous over laboratory procedures that are prone to errors, particularly where pathological expertise is minimal. Further, this method provides a consistent and robust way of generating parasite clearance curves.
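    The enumeration half of such a pipeline can be sketched with standard thresholding and labeling. This is a simplification of the method described above; the filename, threshold choice, and size filter are placeholders:

```python
# Count stained objects in a (hypothetical) Giemsa thin-smear image.
from skimage import io, color, filters, measure, morphology

img = io.imread("thin_smear.png")             # placeholder file name
gray = color.rgb2gray(img)
mask = gray < filters.threshold_otsu(gray)    # stained objects are darker
mask = morphology.remove_small_objects(mask, min_size=50)  # drop debris
labels = measure.label(mask)                  # connected components
print("objects counted:", labels.max())
```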

  17. Histogram-driven cupping correction (HDCC) in CT

    NASA Astrophysics Data System (ADS)

    Kyriakou, Y.; Meyer, M.; Lapp, R.; Kalender, W. A.

    2010-04-01

    Typical cupping correction methods are pre-processing methods which require either pre-calibration measurements or simulations of standard objects to approximate and correct for beam hardening and scatter. Some of them require knowledge of spectra, detector characteristics, etc. The aim of this work was to develop a practical histogram-driven cupping correction (HDCC) method to post-process the reconstructed images. We use a polynomial representation of the raw data generated by forward projection of the reconstructed images; forward and backprojection are performed on graphics processing units (GPU). The coefficients of the polynomial are optimized using a simplex minimization of the joint entropy of the CT image and its gradient. The algorithm was evaluated using simulations and measurements of homogeneous and inhomogeneous phantoms. For the measurements, a C-arm flat-detector CT (FD-CT) system with a 30 × 40 cm² detector, a kilovoltage on-board imager (radiation therapy simulator) and a micro-CT system were used. The algorithm reduced cupping artifacts both in simulations and measurements using a fourth-order polynomial and was in good agreement with the reference. The minimization algorithm required fewer than 70 iterations to adjust the coefficients, performing only a linear combination of basis images and thus executing without time-consuming operations. HDCC reduced cupping artifacts without the necessity of pre-calibration or other scan information, enabling a retrospective improvement of CT image homogeneity. However, the method can work with other cupping correction algorithms or in a calibration manner as well.
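    A toy one-dimensional version of the idea, assuming a low-order polynomial correction and a simple histogram entropy in place of the GPU-based reprojection and the exact joint image/gradient entropy used in the paper:

```python
# Flatten a synthetic cupped profile by entropy minimization over
# polynomial coefficients (Nelder-Mead simplex, in the spirit of the paper).
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-1, 1, 512)
disc = (np.abs(x) < 0.8).astype(float)          # homogeneous object
cupped = disc * (1.0 - 0.3 * (1 - x**2))        # cupping-like dip inside

def entropy(v):
    hist, _ = np.histogram(v, bins=64)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def objective(c):
    corrected = cupped + c[0] * cupped**2 + c[1] * cupped**3
    return entropy(corrected) + entropy(np.diff(corrected))

res = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x)  # coefficients that tighten the histogram, flattening the dip
```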

  18. 75 FR 36444 - Proposed Extension of the Approval of Information Collection Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-25

    ... be provided in the desired format, reporting burden (time and financial resources) is minimized... of the following methods: E-mail: [email protected] ; Mail, Hand Delivery, Courier: Regulatory... collection. Because we continue to experience delays in receiving mail in the Washington, DC area, commenters...

  19. A Simplified Experimental Scheme for the Study of Mitosis.

    ERIC Educational Resources Information Center

    Gill, John

    1980-01-01

    A procedure is described for providing preparations of dividing cells from root apical meristems, requiring only inexpensive equipment and minimal experimental skill, and using 8-Hydroxyquinoline and Toluidine blue as a chromosome stain. The method has been successfully tested in schools and yields permanent preparations of adequate quality for…

  20. Teaching the Concept of Gibbs Energy Minimization through Its Application to Phase-Equilibrium Calculation

    ERIC Educational Resources Information Center

    Privat, Romain; Jaubert, Jean-Noe¨l; Berger, Etienne; Coniglio, Lucie; Lemaitre, Ce´cile; Meimaroglou, Dimitrios; Warth, Vale´rie

    2016-01-01

    Robust and fast methods for chemical or multiphase equilibrium calculation are routinely needed by chemical-process engineers working on sizing or simulation aspects. Yet, while industrial applications essentially require calculation tools capable of discriminating between stable and nonstable states and converging to nontrivial solutions,…

  1. A String Search Marketing Application Using Visual Programming

    ERIC Educational Resources Information Center

    Chin, Jerry M.; Chin, Mary H.; Van Landuyt, Cathryn

    2013-01-01

    This paper demonstrates the use of programming software that provides the student programmer visual cues to construct the code for a student programming assignment. This method does not disregard or minimize the syntax or required logical constructs. The student can concentrate more on the logic and less on the language itself.

  2. The LaueUtil toolkit for Laue photocrystallography. II. Spot finding and integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinowski, Jaroslaw A.; Fournier, Bertrand; Makal, Anna

    2015-10-15

    A spot-integration method is described which does not require prior indexing of the reflections. It is based on statistical analysis of the values from each of the pixels on successive frames, followed for each frame by morphological analysis to identify clusters of high value pixels which form an appropriate mask corresponding to a reflection peak. The method does not require prior assumptions such as fitting of a profile or definition of an integration box. The results are compared with those of the seed-skewness method which is based on minimizing the skewness of the intensity distribution within a peak's integration box. Applications in Laue photocrystallography are presented.

  3. Deployment Analysis of a Simple Tape-Spring Hinge Using Probabilistic Methods

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Horta, Lucas G.

    2012-01-01

    Acceptance of new deployable structures architectures and concepts requires validated design methods to minimize the expense involved with technology validation flight testing. Deployable concepts for large lightweight spacecraft include booms, antennae, and masts. This paper explores the implementation of probabilistic methods in the design process for the deployment of a strain-energy mechanism, specifically a simple tape-spring hinge. Strain-energy mechanisms are attractive for deployment in very lightweight systems because they do not require the added mass and complexity associated with motors and controllers. However, designers are hesitant to include free-deployment, strain-energy mechanisms because of the potential for uncontrolled behavior. In the example presented here, the tape-spring cross-sectional dimensions have been varied and a target displacement during deployment has been selected as the design metric. Specifically, the tape-spring should reach the final position in the shortest time with the minimal amount of overshoot and oscillation. Surrogate models have been used to reduce computational expense. Parameter values to achieve the target response have been computed and used to demonstrate the approach. Based on these results, the application of probabilistic methods for design of a tape-spring hinge has shown promise as a means of designing strain-energy components for more complex space concepts.

  4. Multidimensional Normalization to Minimize Plate Effects of Suspension Bead Array Data.

    PubMed

    Hong, Mun-Gwan; Lee, Woojoo; Nilsson, Peter; Pawitan, Yudi; Schwenk, Jochen M

    2016-10-07

    Enhanced by the growing number of biobanks, biomarker studies can now be performed with reasonable statistical power by using large sets of samples. Antibody-based proteomics by means of suspension bead arrays offers one attractive approach to analyze serum, plasma, or CSF samples for such studies in microtiter plates. To expand measurements beyond single batches, with either 96 or 384 samples per plate, suitable normalization methods are required to minimize the variation between plates. Here we propose two normalization approaches utilizing MA coordinates. The multidimensional MA (multi-MA) and MA-loess both consider all samples of a microtiter plate per suspension bead array assay and thus do not require any external reference samples. We demonstrate the performance of the two MA normalization methods with data obtained from the analysis of 384 samples including both serum and plasma. Samples were randomized across 96-well sample plates, processed, and analyzed in assay plates, respectively. Using principal component analysis (PCA), we could show that plate-wise clusters found in the first two components were eliminated by multi-MA normalization as compared with other normalization methods. Furthermore, we studied the correlation profiles between random pairs of antibodies and found that both MA normalization methods substantially reduced the inflated correlation introduced by plate effects. Normalization approaches using multi-MA and MA-loess minimized batch effects arising from the analysis of several assay plates with antibody suspension bead arrays. In a simulated biomarker study, multi-MA restored associations lost due to plate effects. Our normalization approaches, which are available as R package MDimNormn, could also be useful in studies using other types of high-throughput assay data.
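    Under simplifying assumptions (one plate, the plate-wide median profile as reference, a single MA dimension), the MA-loess step can be sketched as below; the published multidimensional generalization is available in the R package MDimNormn:

```python
# Per-sample MA-loess: fit and remove the M-vs-A trend against a reference.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def ma_loess_normalize(log_mfi):
    """log_mfi: samples x antibodies matrix of log-intensities."""
    ref = np.median(log_mfi, axis=0)              # plate reference profile
    out = np.empty_like(log_mfi)
    for i, row in enumerate(log_mfi):
        M, A = row - ref, 0.5 * (row + ref)       # MA coordinates
        trend = lowess(M, A, frac=0.5, return_sorted=False)
        out[i] = ref + (M - trend)                # subtract fitted trend
    return out

rng = np.random.default_rng(1)
base = rng.normal(8.0, 1.0, size=300)             # "true" antibody levels
offsets = rng.normal(0.0, 0.5, size=(96, 1))      # per-sample batch shifts
data = base + offsets + rng.normal(0.0, 0.05, (96, 300))
norm = ma_loess_normalize(data)
print(data.std(axis=0).mean(), norm.std(axis=0).mean())  # spread shrinks
```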

  5. Conducted Transients on Spacecraft Primary Power Lines

    NASA Technical Reports Server (NTRS)

    Mc Closkey, John; Dimov, Jen

    2017-01-01

    One of the sources of potential interference on spacecraft primary power lines is that of conducted transients resulting from equipment being switched on and off of the bus. Susceptibility to such transients is addressed by some version of the CS06 requirement of MIL-STD-461462. This presentation provides a summary of the history of the CS06 requirement and test method, a basis for understanding of the sources of these transients, analysis techniques for determining their worst-case characteristics, and guidelines for minimizing their magnitudes and applying the requirement appropriately.

  6. Design optimization of transmitting antennas for weakly coupled magnetic induction communication systems

    PubMed Central

    2017-01-01

    This work focuses on the design of transmitting coils in weakly coupled magnetic induction communication systems. We propose several optimization methods that reduce the active, reactive and apparent power consumption of the coil. These problems are formulated as minimization problems, in which the power consumed by the transmitting coil is minimized, under the constraint of providing a required magnetic field at the receiver location. We develop efficient numeric and analytic methods to solve the resulting problems, which are of high dimension, and in certain cases non-convex. For the objective of minimal reactive power an analytic solution for the optimal current distribution in flat disc transmitting coils is provided. This problem is extended to general three-dimensional coils, for which we develop an expression for the optimal current distribution. Considering the objective of minimal apparent power, a method is developed to reduce the computational complexity of the problem by transforming it to an equivalent problem of lower dimension, allowing a quick and accurate numeric solution. These results are verified experimentally by testing a number of coil geometries. The results obtained allow reduced power consumption and increased performances in magnetic induction communication systems. Specifically, for wideband systems, an optimal design of the transmitter coil reduces the peak instantaneous power provided by the transmitter circuitry, and thus reduces its size, complexity and cost. PMID:28192463
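    The structure of these constrained minimizations has a convenient closed form in a discretized linear setting. The sketch below, with an invented coupling matrix and loop resistances rather than the paper's coil model, minimizes ohmic power I^T R I subject to producing a required field B = A I at the receiver:

```python
# Closed-form minimum-power currents under an equality field constraint:
#   minimize I^T R I  s.t.  A I = b   =>   I = R^-1 A^T (A R^-1 A^T)^-1 b
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 10))                  # field coupling of 10 loops
R = np.diag(rng.uniform(0.5, 2.0, 10))        # loop resistances (ohm)
b = np.array([0.0, 0.0, 1e-6])                # required B-field (tesla)

Rinv = np.linalg.inv(R)
I = Rinv @ A.T @ np.linalg.solve(A @ Rinv @ A.T, b)
print(np.allclose(A @ I, b), I @ R @ I)       # constraint met; minimal power
```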

  7. The Immediate Aesthetic and Functional Restoration of Maxillary Incisors Compromised by Periodontitis Using Short Implants with Single Crown Restorations: A Minimally Invasive Approach and Five-Year Follow-Up

    PubMed Central

    Marincola, Mauro; Lombardo, Giorgio; Pighi, Jacopo; Corrocher, Giovanni; Mascellaro, Anna; Lehrberg, Jeffrey; Nocini, Pier Francesco

    2015-01-01

    The functional and aesthetic restoration of teeth compromised due to aggressive periodontitis presents numerous challenges for the clinician. Horizontal bone loss and soft tissue destruction resulting from periodontitis can impede implant placement and the regeneration of an aesthetically pleasing gingival smile line, often requiring bone augmentation and mucogingival surgery, respectively. Conservative approaches to the treatment of aggressive periodontitis (i.e., treatments that use minimally invasive tools and techniques) have been purported to yield positive outcomes. Here, we report on the treatment and five-year follow-up of a patient suffering from aggressive periodontitis using a minimally invasive surgical technique and implant system. By using the methods described herein, we were able to achieve the immediate aesthetic and functional restoration of the maxillary incisors in a case that would otherwise require bone augmentation and extensive mucogingival surgery. This technique represents a conservative and efficacious alternative for the aesthetic and functional replacement of teeth compromised due to aggressive periodontitis. PMID:26649207

  8. Minimally invasive brow suspension for facial paralysis.

    PubMed

    Costantino, Peter D; Hiltzik, David H; Moche, Jason; Preminger, Aviva

    2003-01-01

    To report a new technique for unilateral brow suspension for facial paralysis that is minimally invasive, limits supraciliary scar formation, does not require specialized endoscopic equipment or expertise, and has proved to be equal to direct brow suspension in durability and symmetry. Retrospective survey of a case series of 23 patients between January 1997 and December 2000. Metropolitan tertiary care center. Patients with head and neck tumors and brow ptosis caused by facial nerve paralysis. The results of the procedure were determined using the following 3-tier rating system: outstanding (excellent elevation and symmetry); acceptable (good elevation and fair symmetry); and unacceptable (loss of elevation). The results were considered outstanding in 12 patients, acceptable in 9 patients, and unacceptable in only 1 patient. One patient developed a hematoma, and 1 patient required a secondary adjustment. The technique has proved to be superior to standard brow suspension procedures with regard to scar formation and equal with respect to facial symmetry and suspension. These results have caused us to abandon direct brow suspension and to use this minimally invasive method in all cases of brow ptosis due to facial paralysis.

  9. Beyond Hosting Capacity: Using Shortest Path Methods to Minimize Upgrade Cost Pathways: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gensollen, Nicolas; Horowitz, Kelsey A; Palmintier, Bryan S

    We present in this paper a graph-based, forward-looking algorithm applied to distribution planning in the context of distributed PV penetration. We study the target hosting capacity (THC) problem, where the objective is to find the cheapest sequence of system upgrades to reach a predefined hosting capacity target value. We show that commonly used short-term cost minimization approaches lead most of the time to suboptimal solutions. By comparing our method against such myopic techniques on real distribution systems, we show that our algorithm is able to reduce the overall integration costs by looking at future decisions. Because hosting capacity is hard to compute, this problem requires efficient methods to search the space. We demonstrate that heuristics using domain-specific knowledge can be used to improve the algorithm's performance such that real distribution systems can be studied.
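    The forward-looking idea can be made concrete with a tiny invented state graph: nodes are system states, edge weights are upgrade costs, and Dijkstra's algorithm returns the cheapest sequence to the hosting-capacity target that a myopic cheapest-next-step rule would miss:

```python
# Cheapest upgrade path to a hosting-capacity target (all numbers invented).
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([            # (state, state, cost in k$)
    ("2MW", "3MW_regulator", 40),
    ("2MW", "3MW_reconductor", 120),
    ("3MW_regulator", "5MW", 200),     # regulator work must be redone later
    ("3MW_reconductor", "5MW", 30),    # reconductoring carries forward
])
path = nx.dijkstra_path(G, "2MW", "5MW")
print(path, nx.dijkstra_path_length(G, "2MW", "5MW"))
# Myopic: regulator first (40) then 200 = 240; forward-looking: 120 + 30 = 150.
```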

  10. Human error mitigation initiative (HEMI) : summary report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Susan M.; Ramos, M. Victoria; Wenner, Caren A.

    2004-11-01

    Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent in numerous dimensions, and difficult to characterize as thorough. The proposed alternative method begins by leveraging historical data to understand what the systemic issues are and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations, indicating systemic issues associated with operating procedures that undergo notably less development rigor relative to other task elements such as tooling and process flow. Recommended future steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation are delineated.

  11. Multi-terminal pipe routing by Steiner minimal tree and particle swarm optimisation

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Wang, Chengen

    2012-08-01

    Computer-aided design of pipe routing is of fundamental importance in the development of complex equipment. In this article, non-rectilinear branch pipe routing with multiple terminals, which can be formulated as a Euclidean Steiner Minimal Tree with Obstacles (ESMTO) problem, is studied in the context of integrated aeroengine design engineering. Unlike traditional methods that connect pipe terminals sequentially, this article presents a new branch pipe routing algorithm based on Steiner tree theory. The article begins with a new algorithm for solving the ESMTO problem using particle swarm optimisation (PSO), and then extends the method to surface cases by using geodesics to meet the requirements of routing non-rectilinear pipes on the surfaces of aeroengines. Subsequently, the adaptive region strategy and the basic visibility graph method are adopted to increase the computational efficiency. Numerical computations show that the proposed routing algorithm can find satisfactory routing layouts while running in polynomial time.
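    The geometric core, stripped of obstacles and surface constraints, can be sketched with a bare-bones PSO that places a single Steiner point so as to minimize total Euclidean length to fixed terminals (all parameters below are illustrative):

```python
# PSO for one Steiner point: minimize summed distance to three terminals.
import numpy as np

terminals = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
cost = lambda p: np.sum(np.linalg.norm(terminals - p, axis=1))

rng = np.random.default_rng(3)
pos = rng.uniform(0.0, 4.0, size=(30, 2))       # 30 particles
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, 30, 1))             # stochastic accelerations
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([cost(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest, cost(gbest))   # converges near the Fermat point of the triangle
```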

  12. Bayesian-based estimation of acoustic surface impedance: Finite difference frequency domain approach.

    PubMed

    Bockman, Alexander; Fackler, Cameron; Xiang, Ning

    2015-04-01

    Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the discrepancy between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to noise inherent to the experiment, model, and numerics. A geometry-agnostic method is developed here and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone impedance-tube method and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess the sensitivity of the method to nuisance parameters.

  13. A Robust Cooperated Control Method with Reinforcement Learning and Adaptive H∞ Control

    NASA Astrophysics Data System (ADS)

    Obayashi, Masanao; Uchiyama, Shogo; Kuremoto, Takashi; Kobayashi, Kunikazu

    This study proposes a robust cooperative control method that combines reinforcement learning with robust control. A notable characteristic of reinforcement learning is that it does not require a model of the system; however, it does not guarantee stability. Robust control, on the other hand, guarantees stability and robustness but requires a system model. We employ the actor-critic method, a reinforcement learning technique that controls continuous-valued actions with a minimal amount of computation, together with traditional robust control, namely H∞ control. The proposed method was compared with the conventional control method (the actor-critic method alone) in computer simulations of controlling the angle and position of a crane system, and the results showed the effectiveness of the proposed method.

  14. A comparison of 2 cesarean section methods, modified Misgav-Ladach and Pfannenstiel-Kerr: A randomized controlled study.

    PubMed

    Şahin, Nur; Genc, Mine; Turan, Gülüzar Arzu; Kasap, Esin; Güçlü, Serkan

    2018-03-13

    The modified Misgav-Ladach method (MML) is a minimally invasive cesarean section procedure compared with the classic Pfannenstiel-Kerr (PK) method. The aim of the study was to compare the MML method and the PK method in terms of intraoperative and short-term postoperative outcomes. This prospective, randomized controlled trial involved 252 pregnant women scheduled for primary emergency or elective cesarean section between October, 2014 and July, 2015. The primary outcome measures were the duration of surgery, extraction time, Apgar score, blood loss, wound complications, and number of sutures used. Secondary outcome measures were the wound infection, time of bowel restitution, visual analogue scale (VAS) scores at 6 h and 24 h after the operation, limitations in movement, and analgesic requirements. At 6 weeks after surgery, the patients were evaluated regarding late complications. There was a significant reduction in total operating and extraction time in the MML group (p < 0.001). Limitations in movement were lower at 24 h after the MML operation, and less analgesic was required in the MML group. There was no difference between the 2 groups in terms of febrile morbidity or the duration of hospitalization. At 6 weeks after the operation, no complaints and no additional complications from the surgery were noted. The MML method is a minimally invasive cesarean section. In the future, as surgeons' experience increases, MML will likely be chosen more often than the classic PK method.

  15. Prediction and generation of binary Markov processes: Can a finite-state fox catch a Markov mouse?

    NASA Astrophysics Data System (ADS)

    Ruebeck, Joshua B.; James, Ryan G.; Mahoney, John R.; Crutchfield, James P.

    2018-01-01

    Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independent and identically distributed.

  16. Linking the Tinnitus Questionnaire and the subjective Clinical Global Impression: Which differences are clinically important?

    PubMed Central

    2012-01-01

    Background Development of new tinnitus treatments requires prospective placebo-controlled randomized trials to prove their efficacy. The Tinnitus Questionnaire (TQ) is a validated and commonly used instrument for assessment of tinnitus severity and has been used in many clinical studies. Defining the Minimal Clinically Important Difference (MCID) for TQ changes is an important step to a better interpretation of the clinical relevance of changes observed in clinical trials. In this study we aimed to estimate the minimum change of the TQ score that could be considered clinically relevant. Methods 757 patients with chronic tinnitus were pooled from the TRI database and the RESET study. An anchor-based approach using the Clinical Global Impression (CGI) scale and distributional approaches were used to estimate MCID. Receiver Operating Characteristic (ROC) curves were calculated to define optimal TQ change cutoffs discriminating between minimally changed and unchanged subjects. Results The relationship between TQ change scores and CGI ratings of change was good (r = 0.52, p < 0.05). Mean change scores associated with minimally better and minimally worse CGI categories were −6.65 and +2.72 respectively. According to the ROC method MCID for improvement was −5 points and for deterioration +1 points. Conclusion Distribution and anchor-based methods yielded comparable results in identifying MCIDs. ΔTQ scores of −5 and +1 points were identified as the minimal clinically relevant change for improvement and worsening respectively. The asymmetry of the MCIDs for improvement and worsening may be related to expectation effects. PMID:22781703

  17. [Flat rate reimbursement system for minimally invasive management of unstable vertebral fractures. An analysis of costs and benefits].

    PubMed

    Hartwig, E; Schultheiss, M; Bischoff, M

    2002-08-01

    Some 30% of unstable vertebral fractures of the thoracic and lumbar spine involve destruction of the ventral column and thus of the supporting structures of the spine. This requires extensive surgical reconstruction procedures, which are carried out using minimally invasive techniques. The disadvantages of the minimally invasive methods are the high cost, the technical equipment, and the expenditure of time required in the initial phase for the performance of the surgical procedure. With the structural reform of the health care system in the year 2000, the private-sector regulatory bodies were called upon to introduce a flat-rate compensation system for hospital services according to section 17b of the Hospital Law (KHG). The previous financing system, which involved per-diem operating cost rates, has thus been abolished. Cost calculations for individual cases are now required. Considering the case values to date, a contribution margin deficit of EUR 4628.45 has been calculated for our patients with fractures of the thoracic and lumbar spine without neurological deficits. Economically efficient medical care is thus no longer possible. Consequently, an adjustment of the German relative weights must urgently be demanded in order to guarantee high-quality medical care for patients.

  18. Minimizing marker mass and handling time when attaching radio-transmitters and geolocators to small songbirds

    USGS Publications Warehouse

    Streby, Henry M.; McAllister, Tara L.; Peterson, Sean M.; Kramer, Gunnar R.; Lehman, Justin A.; Andersen, David E.

    2015-01-01

    Radio-transmitters and light-level geolocators are currently small enough for use on songbirds weighing <15 g. Various methods are used to attach these markers to larger songbirds, but with small birds it becomes especially important to minimize marker mass and bird handling time. Here, we offer modifications to harness materials and marker preparation for transmitters and geolocators, and we describe deployment methods that can be safely completed in 20–60 s per bird. We describe a 0.5-mm elastic sewing thread harness for radio-transmitters that allows nestlings, fledglings, and adults to be marked with the same harness size and reliably falls off to avoid poststudy effects. We also describe a 0.5-mm jewelry cord harness for geolocators that provides a firm fit for >1 yr. Neither harness type requires plastic or metal tubes, rings, or other attachment fixtures on the marker, nor do they require crimping beads, epoxy, scissors, or tying knots while handling birds. Both harnesses add 0.03 g to the mass of markers for small wood-warblers (Parulidae). This minimal additional mass is offset by trimming transmitter antennas or geolocator connection nodes, resulting in no net mass gain for transmitters and 0.02 g added for geolocators compared with conventional harness methods that add >0.40 g. We and others have used this transmitter attachment method with several small songbird species, with no effects on adult and fledgling behavior and survival. We have used this geolocator attachment method on 9-g wood-warblers with no effects on return rates, return dates, territory fidelity, and body mass. We hope that these improvements to the design and deployment of the leg-loop harness method will enable the safe and successful use of these markers, and eventually GPS and other tags, on similarly small songbirds.

  19. Iodine addition using triiodide solutions

    NASA Technical Reports Server (NTRS)

    Rutz, Jeffrey A.; Muckle, Susan V.; Sauer, Richard L.

    1992-01-01

    The study develops: a triiodide solution for use in preparing ground service equipment (GSE) water for Shuttle support, an iodine dissolution method that is reliable and requires minimal time and effort to prepare, and an iodine dissolution agent with a minimal concentration of sodium salt. Sodium iodide and hydriodic acid were both found to dissolve iodine to attain the desired GSE iodine concentrations of 7.5 +/- 2.5 mg/L and 25 +/- 5 mg/L. The 1.75:1 and 2:1 sodium iodide solutions produced higher iodine recoveries than the 1.2:1 hydriodic acid solution. A two-hour preparation time is required for the three sodium iodide solutions. The 1.2:1 hydriodic acid solution can be prepared in less than 5 min. Two sodium iodide stock solutions (2.5:1 and 2:1) were found to dissolve iodine without undergoing precipitation.

  20. FEM Modeling of a Magnetoelectric Transducer for Autonomous Micro Sensors in Medical Application

    NASA Astrophysics Data System (ADS)

    Yang, Gang; Talleb, Hakeim; Gensbittel, Aurélie; Ren, Zhuoxiang

    2015-11-01

    In the context of wireless and autonomous sensors, this paper presents the multiphysics modeling of an energy transducer based on a magnetoelectric (ME) composite for biomedical applications. The study considers the power requirement of an implanted sensor, the communication distance, the size limit of the device for minimally invasive insertion, as well as the electromagnetic exposure restrictions of the human body. To minimize the electromagnetic absorption by the human body, the energy source is provided by an external reader emitting a low-frequency magnetic field. The modeling is carried out with the finite element method by simultaneously solving the coupled physics problems, including the electric load of the conditioning circuit. The simulation results show that with the T-L mode of a trilayer laminated ME composite, the transducer can deliver the required energy while respecting the various constraints.

  1. Progress in the development of paper-based diagnostics for low-resource point-of-care settings

    PubMed Central

    Byrnes, Samantha; Thiessen, Gregory; Fu, Elain

    2014-01-01

    This Review focuses on recent work in the field of paper microfluidics that specifically addresses the goal of translating the multistep processes that are characteristic of gold-standard laboratory tests to low-resource point-of-care settings. A major challenge is to implement multistep processes with the robust fluid control required to achieve the necessary sensitivity and specificity of a given application in a user-friendly package that minimizes equipment. We review key work in the areas of fluidic controls for automation in paper-based devices, readout methods that minimize dedicated equipment, and power and heating methods that are compatible with low-resource point-of-care settings. We also highlight a focused set of recent applications and discuss future challenges. PMID:24256361

  2. Systems identification using a modified Newton-Raphson method: A FORTRAN program

    NASA Technical Reports Server (NTRS)

    Taylor, L. W., Jr.; Iliff, K. W.

    1972-01-01

    A FORTRAN program is offered which computes a maximum likelihood estimate of the parameters of any linear, constant coefficient, state space model. For the case considered, the maximum likelihood estimate can be identical to that which minimizes simultaneously the weighted mean square difference between the computed and measured response of a system and the weighted square of the difference between the estimated and a priori parameter values. A modified Newton-Raphson or quasilinearization method is used to perform the minimization which typically requires several iterations. A starting technique is used which insures convergence for any initial values of the unknown parameters. The program and its operation are described in sufficient detail to enable the user to apply the program to his particular problem with a minimum of difficulty.
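
    The cost function described, a weighted output-error term plus a weighted prior-parameter term, lends itself to a Gauss-Newton (modified Newton-Raphson) sketch. The Python below is our illustration on a toy exponential-decay response, not the FORTRAN program itself; every name and constant is ours.

    ```python
    import numpy as np

    def gauss_newton(f, theta0, y, W, theta_prior=None, P=None, iters=10):
        """Minimize (y - f(th))' W (y - f(th)) + (th - th0)' P (th - th0)
        by Gauss-Newton steps with a finite-difference Jacobian."""
        theta = np.asarray(theta0, dtype=float)
        theta_prior = theta.copy() if theta_prior is None else theta_prior
        P = np.zeros((theta.size, theta.size)) if P is None else P
        for _ in range(iters):
            r = y - f(theta)
            J = np.empty((y.size, theta.size))
            for j in range(theta.size):          # finite-difference Jacobian
                d = np.zeros_like(theta)
                d[j] = 1e-6
                J[:, j] = (f(theta + d) - f(theta)) / 1e-6
            A = J.T @ W @ J + P                  # approximate Hessian
            g = J.T @ W @ r - P @ (theta - theta_prior)
            theta = theta + np.linalg.solve(A, g)
        return theta

    # Toy "system response": y(t) = a * exp(-b t), plus measurement noise.
    t = np.linspace(0.0, 5.0, 50)
    y = 2.0 * np.exp(-0.7 * t) + np.random.default_rng(2).normal(0, 0.01, 50)
    theta = gauss_newton(lambda th: th[0] * np.exp(-th[1] * t),
                         [1.0, 1.0], y, np.eye(50))
    print(theta)  # approaches [2.0, 0.7]
    ```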

  3. Global 21 cm Signal Extraction from Foreground and Instrumental Effects. I. Pattern Recognition Framework for Separation Using Training Sets

    NASA Astrophysics Data System (ADS)

    Tauscher, Keith; Rapetti, David; Burns, Jack O.; Switzer, Eric

    2018-02-01

    The sky-averaged (global) highly redshifted 21 cm spectrum from neutral hydrogen is expected to appear in the VHF range of ∼20–200 MHz and its spectral shape and strength are determined by the heating properties of the first stars and black holes, by the nature and duration of reionization, and by the presence or absence of exotic physics. Measurements of the global signal would therefore provide us with a wealth of astrophysical and cosmological knowledge. However, the signal has not yet been detected because it must be seen through strong foregrounds weighted by a large beam, instrumental calibration errors, and ionospheric, ground, and radio-frequency-interference effects, which we collectively refer to as “systematics.” Here, we present a signal extraction method for global signal experiments which uses Singular Value Decomposition of “training sets” to produce systematics basis functions specifically suited to each observation. Instead of requiring precise absolute knowledge of the systematics, our method effectively requires precise knowledge of how the systematics can vary. After calculating eigenmodes for the signal and systematics, we perform a weighted least square fit of the corresponding coefficients and select the number of modes to include by minimizing an information criterion. We compare the performance of the signal extraction when minimizing various information criteria and find that minimizing the Deviance Information Criterion most consistently yields unbiased fits. The methods used here are built into our widely applicable, publicly available Python package, pylinex, which analytically calculates constraints on signals and systematics from given data, errors, and training sets.
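
    The core SVD-plus-weighted-least-squares step can be sketched compactly. The power-law "systematics" training curves below are fabricated for illustration and are far simpler than real beam-weighted foregrounds, and pylinex's actual interface differs from this sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    freq = np.linspace(50.0, 100.0, 128)        # MHz, illustrative band

    # Fabricated training set: each row is one possible systematics spectrum.
    training = np.array([(1e3 + 30 * rng.normal())
                         * (freq / 75.0) ** (-2.5 + 0.05 * rng.normal())
                         for _ in range(500)])

    # SVD of the training set yields basis functions describing how the
    # systematics can vary; keep the leading modes.
    _, _, Vt = np.linalg.svd(training, full_matrices=False)
    modes = Vt[:5].T                             # (n_freq, n_modes)

    # Weighted least-squares fit of the mode coefficients to one observation.
    data = training[0] + rng.normal(0.0, 1.0, freq.size)
    W = np.eye(freq.size)                        # inverse noise covariance
    coeff = np.linalg.solve(modes.T @ W @ modes, modes.T @ W @ data)
    residual = data - modes @ coeff
    # In the full method, the number of modes kept is chosen by minimizing
    # an information criterion such as the DIC.
    ```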

  4. Uncertainty Quantification Techniques for Population Density Estimates Derived from Sparse Open Source Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Robert N; White, Devin A; Urban, Marie L

    2013-01-01

    The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.

  5. Optimal boarding method for airline passengers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steffen, Jason H.; /Fermilab

    2008-02-01

    Using a Markov Chain Monte Carlo optimization algorithm and a computer simulation, I find the passenger ordering which minimizes the time required to board the passengers onto an airplane. The model that I employ assumes that the time that a passenger requires to load his or her luggage is the dominant contribution to the time needed to completely fill the aircraft. The optimal boarding strategy may reduce the time required to board an airplane by over a factor of four, and possibly more depending upon the dimensions of the aircraft. I explore some features of the optimal boarding method and discuss practical modifications to the optimal strategy. Finally, I mention some of the benefits that could come from implementing an improved passenger boarding scheme.
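
    A heavily simplified version of the optimization loop is sketched below: Metropolis-style swaps of a boarding order over a toy one-aisle, one-seat-per-row model in which stowing luggage blocks the aisle for one time step. Steffen's actual simulation and cost model are richer; everything here, including all constants, is illustrative.

    ```python
    import numpy as np

    def boarding_time(order, rows):
        """Toy aisle model: passengers enter in 'order' (a permutation of row
        numbers), cannot pass one another, and block the aisle for one time
        step while stowing luggage at their row."""
        n = len(order)
        pos, stow, seated = [0] * n, [1] * n, [False] * n
        t = 0
        while not all(seated):
            front = rows + 1               # nearest standing passenger ahead
            for i in range(n):
                if seated[i]:
                    continue
                if pos[i] == order[i]:     # at own row: stow, then sit
                    stow[i] -= 1
                    if stow[i] == 0:
                        seated[i] = True
                elif pos[i] + 1 < front:   # aisle clear: step forward
                    pos[i] += 1
                front = pos[i]
            t += 1
        return t

    def anneal_boarding(rows=30, steps=3000, seed=4):
        """Metropolis search over boarding orders (simulated annealing)."""
        rng = np.random.default_rng(seed)
        order = list(rng.permutation(np.arange(1, rows + 1)))
        cost, T = boarding_time(order, rows), 5.0
        for _ in range(steps):
            i, j = rng.integers(rows, size=2)
            cand = order.copy()
            cand[i], cand[j] = cand[j], cand[i]
            c = boarding_time(cand, rows)
            if c < cost or rng.random() < np.exp((cost - c) / T):
                order, cost = cand, c
            T *= 0.999
        return order, cost

    print(anneal_boarding())
    ```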

  6. IATA for skin sensitization potential – 1 out of 2 or 2 out of 3? (ESCD meeting)

    EPA Science Inventory

    To meet EU regulatory requirements and to avoid or minimize animal testing, there is a need for non-animal methods to assess skin sensitization potential. Given the complexity of the skin sensitization endpoint, there is an expectation that integrated testing and assessment appro...

  7. Earned and Unearned Degrees, Earned and Unearned Teaching Certificates: Implications for Education.

    ERIC Educational Resources Information Center

    Shaughnessy, Michael F.; Gaedke, Billy

    This article discusses the impact of instructional television, directed study courses, and other alternative teacher certification methods. Colleges and universities are becoming aware of nontraditional programs that require minimal, if any, time on campus or direct contact with instructors. Soon, there will be a proliferation of Internet courses.…

  8. Grading Homework to Emphasize Problem-Solving Process Skills

    ERIC Educational Resources Information Center

    Harper, Kathleen A.

    2012-01-01

    This article describes a grading approach that encourages students to employ particular problem-solving skills. Some strengths of this method, called "process-based grading," are that it is easy to implement, requires minimal time to grade, and can be used in conjunction with either an online homework delivery system or paper-based homework.

  9. A Conflict of Cultures: Planning vs. Tradition in Public Libraries.

    ERIC Educational Resources Information Center

    Raber, Douglas

    1995-01-01

    Strategic planning for public libraries as advocated by the PLA (Public Library Association) in the Public Library Development Program is likely to be met with resistance due to changes it requires in traditional public library planning and services. Conflicts that may arise are discussed, as well as methods for preventing, minimizing, and…

  10. IATA for skin sensitization potential – 1 out of 2 or 2 out of 3? (ASCCT meeting)

    EPA Science Inventory

    To meet EU regulatory requirements and to avoid or minimize animal testing, there is a need for non-animal methods to assess skin sensitization potential. Given the complexity of the skin sensitization endpoint, there is an expectation that integrated testing and assessment appro...

  11. Honors Selection Study 1966-67.

    ERIC Educational Resources Information Center

    Neidich, Alan

    Because many of the students selected for participation in the University of South Carolina's College of Arts and Science Honors Program failed to attain the minimal grade point level required to remain in the program, the Counseling Bureau undertook an evaluative study to improve selection methods. The project aimed to find answers to 3…

  12. Low solvent, low temperature method for extracting biodiesel lipids from concentrated microalgal biomass.

    PubMed

    Olmstead, Ian L D; Kentish, Sandra E; Scales, Peter J; Martin, Gregory J O

    2013-11-01

    An industrially relevant method for disrupting microalgal cells and preferentially extracting neutral lipids for large-scale biodiesel production was demonstrated on pastes (20-25% solids) of Nannochloropsis sp. The highly resistant Nannochloropsis sp. cells were disrupted by incubation for 15 h at 37°C followed by high pressure homogenization at 1200 ± 100 bar. Lipid extraction was performed by twice contacting concentrated algal paste with minimal hexane (solvent:biomass ratios (w/w) of <2:1 and <1.3:1) in a stirred vessel at 35°C. Cell disruption prior to extraction increased lipid recovery 100-fold, with yields of 30-50% w/w obtained in the first hexane contact, and a further 6.5-20% in the second contact. The hexane preferentially extracted neutral lipids over glyco- and phospholipids, with up to 86% w/w of the neutral lipids recovered. The process was effective on wet concentrated paste, required minimal solvent and moderate temperature, and did not require difficult-to-recover polar solvents. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Method and apparatus for improving the quality and efficiency of ultrashort-pulse laser machining

    DOEpatents

    Stuart, Brent C.; Nguyen, Hoang T.; Perry, Michael D.

    2001-01-01

    A method and apparatus for improving the quality and efficiency of machining of materials with laser pulse durations shorter than 100 picoseconds by orienting and maintaining the polarization of the laser light such that the electric field vector is perpendicular relative to the edges of the material being processed. It can be used in any machining operation requiring remote delivery and/or high precision with minimal collateral damage.

  14. Pre- and post-treatment techniques for spacecraft water recovery

    NASA Technical Reports Server (NTRS)

    Putnam, David F.; Colombo, Gerald V.; Chullen, Cinda

    1986-01-01

    Distillation-based waste water pretreatment and recovered water posttreatment methods are proposed for the NASA Space Station. Laboratory investigation results are reported for two nonoxidizing urine pretreatment formulas (hexadecyl trimethyl ammonium bromide and Cu/Cr) which minimize the generation of volatile organics, thereby significantly reducing posttreatment requirements. Three posttreatment methods (multifiltration, reverse osmosis, and UV-assisted ozone oxidation) have been identified which appear promising for the removal of organic contaminants from recovered water.

  15. Two-dimensional frequency-domain acoustic full-waveform inversion with rugged topography

    NASA Astrophysics Data System (ADS)

    Zhang, Qian-Jiang; Dai, Shi-Kun; Chen, Long-Wei; Li, Kun; Zhao, Dong-Dong; Huang, Xing-Xing

    2015-09-01

    We studied finite-element-method-based two-dimensional frequency-domain acoustic FWI under rugged topography conditions. The exponential attenuation boundary condition suitable for rugged topography is proposed to solve the cutoff boundary problem as well as to consider the requirement of using the same subdivision grid in joint multifrequency inversion. The proposed method introduces the attenuation factor, and by adjusting it, acoustic waves are sufficiently attenuated in the attenuation layer to minimize the cutoff boundary effect. Based on the law of exponential attenuation, expressions for computing the attenuation factor and the thickness of attenuation layers are derived for different frequencies. In multifrequency-domain FWI, the conjugate gradient method is used to solve equations in the Gauss-Newton algorithm and thus minimize the computation cost in calculating the Hessian matrix. In addition, the effect of initial model selection and frequency combination on FWI is analyzed. Examples using numerical simulations and FWI calculations are used to verify the efficiency of the proposed method.

  16. Energy-efficient ECG compression on wireless biosensors via minimal coherence sensing and weighted ℓ₁ minimization reconstruction.

    PubMed

    Zhang, Jun; Gu, Zhenghui; Yu, Zhu Liang; Li, Yuanqing

    2015-03-01

    Low energy consumption is crucial for body area networks (BANs). In BAN-enabled ECG monitoring, continuous monitoring requires the sensor nodes to transmit a large amount of data to the sink node, which leads to excessive energy consumption. To reduce airtime over energy-hungry wireless links, this paper presents an energy-efficient compressed sensing (CS)-based approach for on-node ECG compression. At first, an algorithm called minimal mutual coherence pursuit is proposed to construct sparse binary measurement matrices, which can be used to encode the ECG signals with superior performance and extremely low complexity. Second, in order to minimize the data rate required for faithful reconstruction, a weighted ℓ1 minimization model is derived by exploiting multisource prior knowledge in the wavelet domain. Experimental results on the MIT-BIH arrhythmia database reveal that the proposed approach can obtain a higher compression ratio than state-of-the-art CS-based methods. Together with its low encoding complexity, our approach can achieve significant energy savings in both the encoding process and wireless transmission.
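
    The reconstruction side can be sketched with a weighted iterative soft-thresholding loop (ISTA). This is a generic stand-in solver: the paper derives the weights from wavelet-domain prior knowledge, whereas here they are uniform, and the random binary sensing matrix below is only loosely analogous to the paper's construction.

    ```python
    import numpy as np

    def weighted_ista(Phi, y, w, lam=0.05, iters=500):
        """Solve min_x 0.5||Phi x - y||^2 + lam * sum_i w_i |x_i|
        by iterative soft thresholding."""
        L = np.linalg.norm(Phi, 2) ** 2       # Lipschitz constant of gradient
        x = np.zeros(Phi.shape[1])
        for _ in range(iters):
            z = x - Phi.T @ (Phi @ x - y) / L            # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
        return x

    # Toy demo: approximate a sparse coefficient vector from binary measurements.
    rng = np.random.default_rng(5)
    n, m = 256, 96
    x_true = np.zeros(n)
    x_true[rng.choice(n, 8, replace=False)] = rng.normal(0.0, 1.0, 8)
    Phi = (rng.random((m, n)) < 0.05).astype(float)      # sparse binary matrix
    y = Phi @ x_true
    w = np.ones(n)            # the paper instead derives w from prior knowledge
    x_hat = weighted_ista(Phi, y, w)
    print(np.linalg.norm(x_hat - x_true))                # reconstruction error
    ```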

  17. Integrating risk minimization planning throughout the clinical development and commercialization lifecycle: an opinion on how drug development could be improved

    PubMed Central

    Morrato, Elaine H; Smith, Meredith Y

    2015-01-01

    Pharmaceutical risk minimization programs are now an established requirement in the regulatory landscape. However, pharmaceutical companies have been slow to recognize and embrace the significant potential these programs offer in terms of enhancing trust with health care professionals and patients, and for providing a mechanism for bringing products to the market that might not otherwise have been approved. Pitfalls of the current drug development process include risk minimization programs that are not data driven; missed opportunities to incorporate pragmatic methods and market-based insights; outmoded tools and data sources; lack of rapid evaluative learning to support timely adaptation; lack of systematic approaches for patient engagement; and questions on staffing and organizational infrastructure. We propose better integration of risk minimization with clinical drug development and commercialization work streams throughout the product lifecycle. We articulate a vision and propose broad adoption of organizational models for incorporating risk minimization expertise into the drug development process. Three organizational models are discussed and compared: outsource/external vendor, embedded risk management specialist model, and Center of Excellence. PMID:25750537

  18. Comparing two methods of thoracoscopic sympathectomy for palmar hyperhidrosis.

    PubMed

    Ibrahim, Magdi; Allam, Abdulla

    2014-09-01

    Hyperhidrosis can cause significant professional and social handicaps. Thoracic endoscopic sympathectomy has become the surgical technique of choice for treating intractable palmar hyperhidrosis and can be performed through multiple ports or a single port. This prospective study compares outcomes between the two methods. The study followed 71 consecutive patients who underwent video-assisted sympathectomy for palmar hyperhidrosis between January 2008 and June 2012. In all patients, the procedure was bilateral and performed in one stage. The multiple-port method was used in 35 patients (group A) and the single-port method in 36 patients (group B). Preoperative, intraoperative, and postoperative variables, as well as morbidity, recurrence, and survival, were compared between the two groups. The procedure was successful in 100% of the patients; none experienced a recurrence of palmar hyperhidrosis, Horner syndrome (oculosympathetic palsy), or serious postoperative complications, and none died. No patients required conversion to an open procedure. Residual minimal pneumothorax occurred in two patients (5.7%) in group A and in one patient (2.8%) in group B. Minimal hemothorax occurred in one patient (2.9%) in group A and in three patients (8.3%) in group B. Compensatory hyperhidrosis occurred in seven patients (20%) in group A and in eight patients (22.2%) in group B. No difference was found between the multiple- and single-port methods. Both are effective, safe minimally invasive procedures that permanently improve quality of life in patients with palmar hyperhidrosis.

  19. Standardised Benchmarking in the Quest for Orthologs

    PubMed Central

    Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe

    2016-01-01

    The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  20. Spin coating apparatus

    DOEpatents

    Torczynski, John R.

    2000-01-01

    A spin coating apparatus requires less cleanroom air flow than prior spin coating apparatus to minimize cleanroom contamination. A shaped exhaust duct from the spin coater maintains process quality while requiring reduced cleanroom air flow. The exhaust duct can decrease in cross section as it extends from the wafer, minimizing eddy formation. The exhaust duct can conform to entrainment streamlines to minimize eddy formation and reduce interprocess contamination at minimal cleanroom air flow rates.

  1. Intrauterine devices and other forms of contraception: thinking outside the pack.

    PubMed

    Allen, Caitlin; Kolehmainen, Christine

    2015-05-01

    A variety of contraception options are available in addition to traditional combined oral contraceptive pills. Newer long-acting reversible contraceptive (LARC) methods such as intrauterine devices and subcutaneous implants are preferred because they do not depend on patient compliance. They are highly effective and appropriate for most women. Female and male sterilization are also effective, but both are irreversible and require counseling to minimize regret. The contraceptive injection, patch, and ring do not require daily administration, but their typical efficacy rates are lower than those of LARC methods and similar to those of combined oral contraceptive pills. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Assessment of land use impact on water-related ecosystem services capturing the integrated terrestrial-aquatic system.

    PubMed

    Maes, Wouter H; Heuvelmans, Griet; Muys, Bart

    2009-10-01

    Although the importance of green (evaporative) water flows in delivering ecosystem services has been recognized, most operational impact assessment methods still focus only on blue water flows. In this paper, we present a new model to evaluate the effect of land use occupation and transformation on water quantity. Conceptually based on the supply of ecosystem services by terrestrial and aquatic ecosystems, the model is developed for, but not limited to, land use impact assessment in life cycle assessment (LCA) and requires a minimum amount of input data. Impact is minimal when evapotranspiration is equal to that of the potential natural vegetation, and maximal when evapotranspiration is zero or when it exceeds a threshold value derived from the concept of environmental water requirement. Three refinements to the model, requiring more input data, are proposed. The first refinement considers a minimal impact over a certain range based on the boundary evapotranspiration of the potential natural vegetation. In the second refinement the effects of evaporation and transpiration are accounted for separately, and in the third refinement a more correct estimate of evaporation from a fully sealed surface is incorporated. The simplicity and user friendliness of the proposed impact assessment method are illustrated with two examples.
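
    Assuming linear interpolation between the stated endpoints (the paper's exact functional form may differ), the impact score described can be written as a small function; all names and the example numbers are ours.

    ```python
    def water_impact(et, et_pnv, et_max):
        """Illustrative impact score in [0, 1] for green-water use.

        Impact is 0 when actual evapotranspiration `et` equals that of the
        potential natural vegetation `et_pnv`, and 1 when `et` is zero or
        reaches `et_max`, the environmental-water-requirement threshold.
        Linear interpolation between these endpoints is our assumption.
        """
        if et <= et_pnv:
            return (et_pnv - et) / et_pnv
        return min((et - et_pnv) / (et_max - et_pnv), 1.0)

    print(water_impact(et=450.0, et_pnv=600.0, et_max=900.0))  # mm/yr -> 0.25
    ```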

  3. Nonparametric Methods in Astronomy: Think, Regress, Observe—Pick Any Three

    NASA Astrophysics Data System (ADS)

    Steinhardt, Charles L.; Jermyn, Adam S.

    2018-02-01

    Telescopes are much more expensive than astronomers, so it is essential to minimize required sample sizes by using the most data-efficient statistical methods possible. However, the most commonly used model-independent techniques for finding the relationship between two variables in astronomy are flawed. In the worst case they can lead without warning to subtly yet catastrophically wrong results, and even in the best case they require more data than necessary. Unfortunately, there is no single best technique for nonparametric regression. Instead, we provide a guide for how astronomers can choose the best method for their specific problem and provide a python library with both wrappers for the most useful existing algorithms and implementations of two new algorithms developed here.

  4. The elastic ratio: introducing curvature into ratio-based image segmentation.

    PubMed

    Schoenemann, Thomas; Masnou, Simon; Cremers, Daniel

    2011-09-01

    We present the first ratio-based image segmentation method that allows imposing curvature regularity of the region boundary. Our approach is a generalization of the ratio framework pioneered by Jermyn and Ishikawa so as to allow penalty functions that take into account the local curvature of the curve. The key idea is to cast the segmentation problem as one of finding cyclic paths of minimal ratio in a graph where each graph node represents a line segment. Among ratios whose discrete counterparts can be globally minimized with our approach, we focus in particular on the elastic ratio, which depends, given an image I, on the oriented boundary C of the segmented region candidate. Minimizing this ratio amounts to finding a curve, neither small nor too curvy, through which the brightness flux is maximal. We prove the existence of minimizers for this criterion among continuous curves with mild regularity assumptions. We also prove that the discrete minimizers provided by our graph-based algorithm converge, as the resolution increases, to continuous minimizers. In contrast to most existing segmentation methods with computable and meaningful, i.e., nondegenerate, global optima, the proposed approach is fully unsupervised in the sense that it does not require any kind of user input such as seed nodes. Numerical experiments demonstrate that curvature regularity allows substantial improvement of the quality of segmentations. Furthermore, our results allow drawing conclusions about global optima of a parameterization-independent version of the snakes functional: the proposed algorithm allows determining parameter values where the functional has a meaningful solution and simultaneously provides the corresponding global solution.

  5. [Research on fuzzy proportional-integral-derivative control of master-slave minimally invasive operation robot driver].

    PubMed

    Zhao, Ximei; Ren, Chengyi; Liu, Hao; Li, Haogyi

    2014-12-01

    Robotic catheter minimally invasive operation requires that the driver control system respond quickly, reject disturbances strongly, and track the target trajectory in real time. Because the parameters of the catheter itself, its movement environment, and other factors change continuously, a driver controlled by traditional proportional-integral-derivative (PID) control has fixed controller gains once the PID parameters are set. The gains cannot adapt to changes in the plant parameters or to environmental disturbances, which degrades position-tracking accuracy and may produce a large overshoot that endangers the patient's vessel. Therefore, this paper adopts a fuzzy PID control method that adjusts the PID gain parameters during tracking in order to improve the system's disturbance rejection, dynamic performance, and tracking accuracy. The simulation results showed that the fuzzy PID control method had fast tracking performance and strong robustness. Compared with traditional PID control, the feasibility and practicability of fuzzy PID control are verified for a robotic catheter minimally invasive operation.
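
    The idea of adjusting PID gains online from fuzzy memberships of the error and its rate can be sketched as below. The rule base here is drastically reduced (one rule per gain), the plant is a generic first-order stand-in, and every constant is invented for illustration; the paper's controller and catheter driver are more elaborate.

    ```python
    def ramp(x, lo, hi):
        """Saturating fuzzy membership: 0 below lo, 1 above hi, linear between."""
        return min(max((x - lo) / (hi - lo), 0.0), 1.0)

    def fuzzy_gains(e, de, Kp0=2.0, Ki0=0.5, Kd0=0.05):
        """Tiny rule base standing in for a full fuzzy inference table:
        large error raises Kp, fast error change raises Kd, and a small
        error lets the integral term act more."""
        big_e = ramp(abs(e), 0.0, 1.0)
        big_de = ramp(abs(de), 0.0, 2.0)
        return Kp0 + 0.5 * big_e, Ki0 + 0.1 * (1.0 - big_e), Kd0 + 0.3 * big_de

    def track(setpoint=1.0, steps=1000, dt=0.01):
        y, integ, e_prev = 0.0, 0.0, setpoint
        for _ in range(steps):
            e = setpoint - y
            de = (e - e_prev) / dt
            Kp, Ki, Kd = fuzzy_gains(e, de)
            u = Kp * e + Ki * integ + Kd * de
            integ += e * dt
            e_prev = e
            y += dt * (u - y)    # first-order plant standing in for the driver
        return y

    print(track())  # approaches the 1.0 setpoint
    ```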

  6. Source of electrical power for an electric vehicle and other purposes, and related methods

    DOEpatents

    LaFollette, Rodney M.

    2000-05-16

    Microthin sheet technology is disclosed by which superior batteries are constructed which, among other things, accommodate the requirements for high load rapid discharge and recharge, mandated by electric vehicle criteria. The microthin sheet technology has process and article overtones and can be used to form thin electrodes used in batteries of various kinds and types, such as spirally-wound batteries, bipolar batteries, lead acid batteries, silver/zinc batteries, and others. Superior high performance battery features include: (a) minimal ionic resistance; (b) minimal electronic resistance; (c) minimal polarization resistance to both charging and discharging; (d) improved current accessibility to active material of the electrodes; (e) a high surface area to volume ratio; (f) high electrode porosity (microporosity); (g) longer life cycle; (h) superior discharge/recharge characteristics; (i) higher capacities (A·hr); and (j) high specific capacitance.

  7. Source of electrical power for an electric vehicle and other purposes, and related methods

    DOEpatents

    LaFollette, Rodney M.

    2002-11-12

    Microthin sheet technology is disclosed by which superior batteries are constructed which, among other things, accommodate the requirements for high load rapid discharge and recharge, mandated by electric vehicle criteria. The microthin sheet technology has process and article overtones and can be used to form corrugated thin electrodes used in batteries of various kinds and types, such as spirally-wound batteries, bipolar batteries, lead acid batteries, silver/zinc batteries, and others. Superior high performance battery features include: (a) minimal ionic resistance; (b) minimal electronic resistance; (c) minimal polarization resistance to both charging and discharging; (d) improved current accessibility to active material of the electrodes; (e) a high surface area to volume ratio; (f) high electrode porosity (microporosity); (g) longer life cycle; (h) superior discharge/recharge characteristics; (i) higher capacities (A·hr); and (j) high specific capacitance.

  8. [Minimally invasive interventional techniques involving the urogenital tract in dogs and cats].

    PubMed

    Heilmann, R M

    2016-01-01

    Minimally invasive interventional techniques are advancing fast in small animal medicine. These techniques utilize state-of-the-art diagnostic methods, including fluoroscopy, ultrasonography, endoscopy, and laparoscopy. Minimally invasive procedures are particularly attractive in the field of small animal urology because, in the past, treatment options for diseases of the urogenital tract were rather limited or associated with a high rate of complications. Most endourological interventions have a steep learning curve. With the appropriate equipment and practical training some of these procedures can be performed in most veterinary practices. However, most interventions require referral to a specialty clinic. This article summarizes the standard endourological equipment and materials as well as the different endourological interventions performed in dogs and cats with diseases of the kidneys/renal pelves, ureters, or lower urinary tract (urinary bladder and urethra).

  9. An automated method for identifying artifact in independent component analysis of resting-state FMRI.

    PubMed

    Bhaganagarapu, Kaushik; Jackson, Graeme D; Abbott, David F

    2013-01-01

    An enduring issue with data-driven analysis and filtering methods is the interpretation of results. To assist, we present an automatic method for identification of artifact in independent components (ICs) derived from functional MRI (fMRI). The method was designed with the following features: does not require temporal information about an fMRI paradigm; does not require the user to train the algorithm; requires only the fMRI images (additional acquisition of anatomical imaging not required); is able to identify a high proportion of artifact-related ICs without removing components that are likely to be of neuronal origin; can be applied to resting-state fMRI; is automated, requiring minimal or no human intervention. We applied the method to a MELODIC probabilistic ICA of resting-state functional connectivity data acquired in 50 healthy control subjects, and compared the results to a blinded expert manual classification. The method identified between 26 and 72% of the components as artifact (mean 55%). About 0.3% of components identified as artifact were discordant with the manual classification; retrospective examination of these ICs suggested the automated method had correctly identified these as artifact. We have developed an effective automated method which removes a substantial number of unwanted noisy components in ICA analyses of resting-state fMRI data. Source code of our implementation of the method is available.

  10. An Automated Method for Identifying Artifact in Independent Component Analysis of Resting-State fMRI

    PubMed Central

    Bhaganagarapu, Kaushik; Jackson, Graeme D.; Abbott, David F.

    2013-01-01

    An enduring issue with data-driven analysis and filtering methods is the interpretation of results. To assist, we present an automatic method for identification of artifact in independent components (ICs) derived from functional MRI (fMRI). The method was designed with the following features: does not require temporal information about an fMRI paradigm; does not require the user to train the algorithm; requires only the fMRI images (additional acquisition of anatomical imaging not required); is able to identify a high proportion of artifact-related ICs without removing components that are likely to be of neuronal origin; can be applied to resting-state fMRI; is automated, requiring minimal or no human intervention. We applied the method to a MELODIC probabilistic ICA of resting-state functional connectivity data acquired in 50 healthy control subjects, and compared the results to a blinded expert manual classification. The method identified between 26 and 72% of the components as artifact (mean 55%). About 0.3% of components identified as artifact were discordant with the manual classification; retrospective examination of these ICs suggested the automated method had correctly identified these as artifact. We have developed an effective automated method which removes a substantial number of unwanted noisy components in ICA analyses of resting-state fMRI data. Source code of our implementation of the method is available. PMID:23847511

  11. En Route Spacing System and Method

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz (Inventor); Green, Steven M. (Inventor)

    2002-01-01

    A method of and computer software for minimizing aircraft deviations needed to comply with an en route miles-in-trail spacing requirement imposed during air traffic control operations via establishing a spacing reference geometry, predicting spatial locations of a plurality of aircraft at a predicted time of intersection of a path of a first of said plurality of aircraft with the spacing reference geometry, and determining spacing of each of the plurality of aircraft based on the predicted spatial locations.

  12. En route spacing system and method

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz (Inventor); Green, Steven M. (Inventor)

    2002-01-01

    A method of and computer software for minimizing aircraft deviations needed to comply with an en route miles-in-trail spacing requirement imposed during air traffic control operations via establishing a spacing reference geometry, predicting spatial locations of a plurality of aircraft at a predicted time of intersection of a path of a first of said plurality of aircraft with the spacing reference geometry, and determining spacing of each of the plurality of aircraft based on the predicted spatial locations.

  13. Optimization of Operations Resources via Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solving such optimization problems with integer-valued decision variables are pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
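
    A minimal genetic algorithm over integer resource levels with a noisy, simulation-like objective might look as follows; the cost function is a fabricated stand-in for the discrete-event model, and all parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def support_cost(levels):
        """Stand-in for the discrete-event simulation: noisy cost that rises
        with resource levels but penalizes queueing when levels are too low."""
        levels = np.asarray(levels)
        queue_penalty = np.sum(np.maximum(5 - levels, 0) ** 2) * 10
        return levels.sum() * 3 + queue_penalty + rng.normal(0, 1)

    def genetic_minimize(n_vars=6, pop_size=40, gens=60, lo=1, hi=15):
        pop = rng.integers(lo, hi + 1, (pop_size, n_vars))
        for _ in range(gens):
            fitness = np.array([support_cost(ind) for ind in pop])
            parents = pop[np.argsort(fitness)[: pop_size // 2]]  # truncation
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, n_vars)          # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                if rng.random() < 0.2:                 # integer mutation
                    child[rng.integers(n_vars)] = rng.integers(lo, hi + 1)
                children.append(child)
            pop = np.vstack([parents] + children)
        return pop[np.argmin([support_cost(ind) for ind in pop])]

    print(genetic_minimize())  # levels near 5 balance cost against queueing
    ```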

  14. A decomposition approach to the design of a multiferroic memory bit

    NASA Astrophysics Data System (ADS)

    Acevedo, Ruben; Liang, Cheng-Yen; Carman, Gregory P.; Sepulveda, Abdon E.

    2017-06-01

    The objective of this paper is to present a methodology for the design of a memory bit to minimize the energy required to write data at the bit level. By straining a ferromagnetic nickel nano-dot by means of a piezoelectric substrate, its magnetization vector rotates between two stable states defined as a 1 and 0 for digital memory. The memory bit geometry, actuation mechanism and voltage control law were used as design variables. The approach used was to decompose the overall design process into simpler sub-problems whose structure can be exploited for a more efficient solution. This method minimizes the number of fully dynamic coupled finite element analyses required to converge to a near optimal design, thus decreasing the computational time for the design process. An in-plane sample design problem is presented to illustrate the advantages and flexibility of the procedure.

  15. Principles of light energy management

    NASA Astrophysics Data System (ADS)

    Davis, N.

    1994-03-01

    Six methods used to minimize excess energy effects associated with lighting systems for plant growth chambers are reviewed in this report. The energy associated with wall transmission and chamber operating equipment and the experimental requirements, such as fresh air and internal equipment, are not considered here. Only the energy associated with providing and removing the energy for lighting is considered.

  16. Optimal Partitioning of a Data Set Based on the "p"-Median Model

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Kohn, Hans-Friedrich

    2008-01-01

    Although the "K"-means algorithm for minimizing the within-cluster sums of squared deviations from cluster centroids is perhaps the most common method for applied cluster analyses, a variety of other criteria are available. The "p"-median model is an especially well-studied clustering problem that requires the selection of "p" objects to serve as…

  17. Principles of light energy management

    NASA Technical Reports Server (NTRS)

    Davis, N.

    1994-01-01

    Six methods used to minimize excess energy effects associated with lighting systems for plant growth chambers are reviewed in this report. The energy associated with wall transmission and chamber operating equipment and the experimental requirements, such as fresh air and internal equipment, are not considered here. Only the energy associated with providing and removing the energy for lighting is considered.

  18. A general solution to the hidden-line problem. [to graphically represent aerodynamic stability derivatives

    NASA Technical Reports Server (NTRS)

    Hedgley, D. R., Jr.

    1982-01-01

    The requirements for computer-generated perspective projections of three-dimensional objects have escalated. A general solution was developed. The theoretical solution to this problem is presented. The method is very efficient as it minimizes the selection of points and the comparison of line segments, and hence avoids the devastation of square-law growth.

  19. An expeditious synthesis of imatinib and analogues utilising flow chemistry methods.

    PubMed

    Hopkin, Mark D; Baxendale, Ian R; Ley, Steven V

    2013-03-21

    A flow-based route to imatinib, the API of Gleevec, was developed and the general procedure then used to generate a number of analogues which were screened for biological activity against Abl1. The flow synthesis required minimal manual intervention and was achieved despite the poor solubility of many of the reaction components.

  20. Assessment of Forest Insect Conditions at Opax Mountain Silviculture Trail

    Treesearch

    Dan Miller; Lorraine Maclauchlan

    1998-01-01

    Forest management in British Columbia requires that all resource values are considered along with a variety of appropriate management practices. For the past 100 years, partial-cutting practices were the method of choice when harvesting in Interior Douglas-fir (IDF) zone ecosystems. Along with a highly effective fire suppression program and minimal stand tending,...

  1. Assessment of Forest Insect Conditions at Opax Mountain Silviculture Trail

    Treesearch

    Dan Miller; Lorraine Maclauchlan

    1998-01-01

    Forest management in British Columbia requires that all resource values are considered along with a variety of appropriate management practices. For the past 100 years, partial-cutting practices were the method of choice when harvesting in Interior Douglas-fir (IDF) zone ecosystems. Along with a highly effective fire suppression program and minimal stand tending, these...

  2. GASOLINE: Smoothed Particle Hydrodynamics (SPH) code

    NASA Astrophysics Data System (ADS)

    N-Body Shop

    2017-10-01

    Gasoline solves the equations of gravity and hydrodynamics in astrophysical problems, including simulations of planets, stars, and galaxies. It uses an SPH method that features correct mixing behavior in multiphase fluids and minimal artificial viscosity. This method is identical to the SPH method used in the ChaNGa code (ascl:1105.005), allowing users to extend results to problems requiring >100,000 cores. Gasoline uses a fast, memory-efficient O(N log N) KD-Tree to solve Poisson's Equation for gravity and avoids artificial viscosity in non-shocking compressive flows.

  3. Temporal bone borehole accuracy for cochlear implantation influenced by drilling strategy: an in vitro study.

    PubMed

    Kobler, Jan-Philipp; Schoppe, Michael; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lüder A; Ortmaier, Tobias

    2014-11-01

    Minimally invasive cochlear implantation is a surgical technique which requires drilling a canal from the mastoid surface toward the basal turn of the cochlea. The choice of an appropriate drilling strategy is hypothesized to have significant influence on the achievable targeting accuracy. Therefore, a method is presented to analyze the contribution of the drilling process and drilling tool to the targeting error isolated from other error sources. The experimental setup to evaluate the borehole accuracy comprises a drill handpiece attached to a linear slide as well as a highly accurate coordinate measuring machine (CMM). Based on the specific requirements of the minimally invasive cochlear access, three drilling strategies, mainly characterized by different drill tools, are derived. The strategies are evaluated by drilling into synthetic temporal bone substitutes containing air-filled cavities to simulate mastoid cells. Deviations from the desired drill trajectories are determined based on measurements using the CMM. Using the experimental setup, a total of 144 holes were drilled for accuracy evaluation. Errors resulting from the drilling process depend on the specific geometry of the tool as well as the angle at which the drill contacts the bone surface. Furthermore, there is a risk of the drill bit deflecting due to synthetic mastoid cells. A single-flute gun drill combined with a pilot drill of the same diameter provided the best results for simulated minimally invasive cochlear implantation, based on an experimental method that may be used for testing further drilling process improvements.

  4. Two-port robotic hysterectomy: a novel approach.

    PubMed

    Moawad, Gaby N; Tyan, Paul; Khalil, Elias D Abi

    2018-03-24

    The objective of the study was to demonstrate a novel technique for two-port robotic hysterectomy with a particular focus on the challenging portions of the procedure. The study is designed as a technical video, showing step-by-step a two-port robotic hysterectomy approach (Canadian Task Force classification level III). IRB approval was not required for this study. The benefits of minimally invasive surgery for gynecological pathology have been clearly documented in multiple studies. Patients had fewer medical and surgical complications postoperatively, better cosmesis and quality of life. Most gynecological surgeons require 3-5 ports for the standard gynecological procedure. Even though the minimally invasive multiport system provides an excellent safety profile, multiple incisions are associated with a greater risk for morbidity including infection, pain, and hernia. In the past decade, various new methods have emerged to minimize the number of ports used in gynecological surgery. The interventions employed were a two-port robotic hysterectomy, using a camera port plus one robotic arm, with a focus on salpingectomy and cuff closure. We describe a transvaginal and a transabdominal approach for salpingectomy and a novel method for cuff closure. The transvaginal and transabdominal techniques for salpingectomy for two-port robotic-assisted hysterectomy provide excellent tension and exposure for a safe procedure without the need for an extra port. We also describe a transvaginal technique to place the vaginal cuff on tension during closure. With the necessary set of skills on a carefully chosen patient, two-port robotic-assisted total laparoscopic hysterectomy is a feasible procedure.

  5. Storage and computationally efficient permutations of factorized covariance and square-root information arrays

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method is presented for permuting vector-stored upper-triangular-diagonal (UD) factorized covariance arrays and vector-stored upper-triangular square-root information arrays. The method involves cyclic permutation of the rows and columns of the arrays and retriangularization with fast (slow) Givens rotations (reflections). Minimal computation is performed, and only a one-dimensional scratch array is required. To make the method efficient for large arrays on a virtual memory machine, computations are arranged so as to avoid expensive paging faults. This method is potentially important for processing large volumes of radio metric data in the Deep Space Network.

  6. Extended polarization in 3rd order SCC-DFTB from chemical potential equalization

    PubMed Central

    Kaminski, Steve; Giese, Timothy J.; Gaus, Michael; York, Darrin M.; Elstner, Marcus

    2012-01-01

    In this work we augment the approximate density functional method SCC-DFTB (DFTB3) with the chemical potential equalization (CPE) approach in order to improve its performance for molecular electronic polarizabilities. The CPE method, originally implemented for NDDO-type methods by Giese and York, has been shown to significantly improve the response properties of minimal-basis methods, and has been applied to SCC-DFTB recently. CPE overcomes this inherent limitation of minimal-basis methods by supplying an additional response density. The systematic underestimation is thereby corrected quantitatively without the need to extend the atomic orbital basis, i.e., without increasing the overall computational cost significantly. In particular, the dependence of the polarizability on the molecular charge state was significantly improved by the CPE extension of DFTB3. The empirical parameters introduced by the CPE approach were optimized for 172 organic molecules in order to match results from density functional theory (DFT) calculations using large basis sets. However, the first-order derivatives of molecular polarizabilities, as required, for example, to compute Raman activities, are not improved by the current CPE implementation, i.e., Raman spectra are not improved. PMID:22894819

  7. Method and apparatus for measuring irradiated fuel profiles

    DOEpatents

    Lee, David M.

    1982-01-01

    A new apparatus is used to substantially instantaneously obtain a profile of an object, for example a spent fuel assembly, which profile (when normalized) has unexpectedly been found to be substantially identical to the normalized profile of the burnup monitor Cs-137 obtained with a germanium detector. That profile can be used without normalization in a new method of identifying and monitoring in order to determine for example whether any of the fuel has been removed. Alternatively, two other new methods involve calibrating that profile so as to obtain a determination of fuel burnup (which is important for complying with safeguards requirements, for utilizing fuel to an optimal extent, and for storing spent fuel in a minimal amount of space). Using either of these two methods of determining burnup, one can reduce the required measurement time significantly (by more than an order of magnitude) over existing methods, yet retain equal or only slightly reduced accuracy.

  8. Optimal RTP Based Power Scheduling for Residential Load in Smart Grid

    NASA Astrophysics Data System (ADS)

    Joshi, Hemant I.; Pandya, Vivek J.

    2015-12-01

    To match supply and demand, shifting load from peak periods to off-peak periods is one of the effective solutions. Presently, a flat-rate tariff is used in large parts of the world. This type of tariff gives customers no incentive to use electrical energy during off-peak periods. If a real time pricing (RTP) tariff is used, consumers can be encouraged to use energy during off-peak periods. Due to advancements in information and communication technology, two-way communication is possible between consumers and the utility. To implement this technique in a smart grid, a home energy controller (HEC), smart meters, a home area network (HAN), and a communication link between consumers and the utility are required. The HEC interacts automatically by running an algorithm to find the optimal energy consumption schedule for each consumer. However, not all consumers are allowed to shift their load to the off-peak period simultaneously, to avoid a rebound peak condition. The peak to average ratio (PAR) is considered while carrying out the minimization problem. The linear programming problem (LPP) method is used for the minimization. The simulation results of this work show the effectiveness of the minimization method adopted. The hardware work is in progress, and a program based on the method described here will be applied to solve real problems.
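
    A toy version of the LPP step: schedule a shiftable load against hypothetical real-time prices, with the rebound-peak concern handled crudely as a per-hour cap. All numbers below are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical RTP prices over 24 hours (cents/kWh) and a shiftable
    # appliance needing 10 kWh in total, at most 2 kWh in any hour.
    price = np.array([5, 5, 4, 4, 4, 5, 7, 9, 12, 14, 15, 15,
                      14, 13, 12, 12, 13, 15, 18, 16, 12, 9, 7, 6], dtype=float)
    T, total_kwh, per_hour_cap = 24, 10.0, 2.0

    # The per-hour cap plays the role of a crude PAR limit on peaks.
    res = linprog(c=price,                      # minimize total energy cost
                  A_eq=np.ones((1, T)), b_eq=[total_kwh],
                  bounds=[(0, per_hour_cap)] * T,
                  method="highs")
    print(res.x.round(2))   # energy lands in the cheapest off-peak hours
    ```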

  9. Biomagnetic separation of Salmonella Typhimurium with high affine and specific ligand peptides isolated by phage display technique

    NASA Astrophysics Data System (ADS)

    Steingroewer, Juliane; Bley, Thomas; Bergemann, Christian; Boschke, Elke

    2007-04-01

    Analyses of food-borne pathogens are of great importance in order to minimize the health risk for consumers. Thus, very sensitive and rapid detection methods are required. Current conventional culture techniques are very time-consuming. Modern immunoassays and biochemical analyses also require pre-enrichment steps, resulting in a turnaround time of at least 24 h. Biomagnetic separation (BMS) is a promising, more rapid method. In this study we describe the isolation of high-affinity and specific peptides from a phage-peptide library, which, combined with BMS, allows the detection of Salmonella spp. with a sensitivity similar to that of immunomagnetic separation using antibodies.

  10. Study of flutter related computational procedures for minimum weight structural sizing of advanced aircraft, supplemental data

    NASA Technical Reports Server (NTRS)

    Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.

    1975-01-01

    Computational aspects of (1) flutter optimization (minimization of structural mass subject to specified flutter requirements), (2) methods for solving the flutter equation, and (3) efficient methods for computing generalized aerodynamic force coefficients in the repetitive analysis environment of computer-aided structural design are discussed. Specific areas included: a two-dimensional Regula Falsi approach to solving the generalized flutter equation; method of incremented flutter analysis and its applications; the use of velocity potential influence coefficients in a five-matrix product formulation of the generalized aerodynamic force coefficients; options for computational operations required to generate generalized aerodynamic force coefficients; theoretical considerations related to optimization with one or more flutter constraints; and expressions for derivatives of flutter-related quantities with respect to design variables.

  11. Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.

    PubMed

    Temel, Burcin; Mills, Greg; Metiu, Horia

    2008-03-27

    We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than the matrix inversion required in the KVP. Also, we avoid errors due to scattering off the boundaries, which present substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the use of Chebyshev polynomials allows this boundary condition to be implemented accurately. Chebyshev polynomials also allow a rapid and accurate evaluation of the kinetic energy; the basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.
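
    A generic sketch of the least-squares idea (not the authors' Chebyshev implementation; the Hamiltonian matrix H and the inhomogeneous boundary term b below are placeholders) is to minimize ||(H - E)psi - b||^2 by preconditioned conjugate gradients on the normal equations:

      import numpy as np

      def mem_solve(H, E, b, tol=1e-10, maxiter=5000):
          A = H - E * np.eye(H.shape[0])      # residual operator (H - E)
          AtA, Atb = A.T @ A, A.T @ b         # normal equations: AtA psi = Atb
          M_inv = 1.0 / np.diag(AtA)          # simple Jacobi preconditioner
          psi = np.zeros_like(b)
          r = Atb - AtA @ psi
          z = M_inv * r
          p = z.copy()
          for _ in range(maxiter):
              Ap = AtA @ p
              alpha = (r @ z) / (p @ Ap)
              psi += alpha * p
              r_new = r - alpha * Ap
              if np.linalg.norm(r_new) < tol:
                  break
              z_new = M_inv * r_new
              beta = (r_new @ z_new) / (r @ z)
              p = z_new + beta * p
              r, z = r_new, z_new
          return psi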

  12. Mixed-Methods Design in Biology Education Research: Approach and Uses

    PubMed Central

    Warfa, Abdi-Rizak M.

    2016-01-01

    Educational research often requires mixing different research methodologies to strengthen findings, better contextualize or explain results, or minimize the weaknesses of a single method. This article provides practical guidelines on how to conduct such research in biology education, with a focus on mixed-methods research (MMR) that uses both quantitative and qualitative inquiries. Specifically, the paper provides an overview of mixed-methods design typologies most relevant in biology education research. It also discusses common methodological issues that may arise in mixed-methods studies and ways to address them. The paper concludes with recommendations on how to report and write about MMR. PMID:27856556

  13. Survey and analysis of research on supersonic drag-due-to-lift minimization with recommendations for wing design

    NASA Technical Reports Server (NTRS)

    Carlson, Harry W.; Mann, Michael J.

    1992-01-01

    A survey of research on drag-due-to-lift minimization at supersonic speeds, including a study of the effectiveness of current design and analysis methods was conducted. The results show that a linearized theory analysis with estimated attainable thrust and vortex force effects can predict with reasonable accuracy the lifting efficiency of flat wings. Significantly better wing performance can be achieved through the use of twist and camber. Although linearized theory methods tend to overestimate the amount of twist and camber required for a given application and provide an overly optimistic performance prediction, these deficiencies can be overcome by implementation of recently developed empirical corrections. Numerous examples of the correlation of experiment and theory are presented to demonstrate the applicability and limitations of linearized theory methods with and without empirical corrections. The use of an Euler code for the estimation of aerodynamic characteristics of a twisted and cambered wing and its application to design by iteration are discussed.

  14. Vibration suppression for large scale adaptive truss structures using direct output feedback control

    NASA Technical Reports Server (NTRS)

    Lu, Lyan-Ywan; Utku, Senol; Wada, Ben K.

    1993-01-01

    In this article, the vibration control of adaptive truss structures, where the control actuation is provided by length-adjustable active members, is formulated as a direct output feedback control problem. A control method named Model Truncated Output Feedback (MTOF) is presented. The method allows the control feedback gain to be determined in a decoupled and truncated modal space in which only the critical vibration modes are retained. The on-board computation required by MTOF is minimal; thus, the method is favorable for applications involving vibration control of large-scale structures. The truncation of the modal space inevitably introduces a spillover effect during the control process. In this article, the effect is quantified in terms of active member locations, and it is shown that the optimal placement of active members, which minimizes the spillover effect (and thus maximizes the control performance), can be sought. The problem of optimally selecting the locations of active members is also treated.

  15. Monitoring crack extension in fracture toughness tests by ultrasonics

    NASA Technical Reports Server (NTRS)

    Klima, S. J.; Fisher, D. M.; Buzzard, R. J.

    1975-01-01

    An ultrasonic method was used to observe the onset of crack extension and to monitor continued crack growth in fracture toughness specimens during three point bend tests. A 20 MHz transducer was used with commercially available equipment to detect average crack extension less than 0.09 mm. The material tested was a 300-grade maraging steel in the annealed condition. A crack extension resistance curve was developed to demonstrate the usefulness of the ultrasonic method for minimizing the number of tests required to generate such curves.

  16. Subband Coding Methods for Seismic Data Compression

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
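
    To make the progressive-transmission idea concrete, here is a minimal one-level subband split (a Haar filter pair for illustration; the paper's codec and rate-distortion machinery are more elaborate). Sending the low band alone yields a coarse waveform; sending the high band afterwards refines it:

      import numpy as np

      def haar_split(x):
          lo = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # coarse approximation band
          hi = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (refinement) band
          return lo, hi

      def haar_merge(lo, hi):
          x = np.empty(2 * lo.size)
          x[0::2] = (lo + hi) / np.sqrt(2.0)
          x[1::2] = (lo - hi) / np.sqrt(2.0)
          return x

      seis = np.sin(np.linspace(0, 20, 1024)) + 0.1 * np.random.randn(1024)
      lo, hi = haar_split(seis)
      coarse = haar_merge(lo, np.zeros_like(hi))    # first, cheap transmission
      exact = haar_merge(lo, hi)                    # after requesting refinement
      print(np.allclose(exact, seis))               # True: perfect reconstruction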

  17. System analysis for technology transfer readiness assessment of horticultural postharvest

    NASA Astrophysics Data System (ADS)

    Hayuningtyas, M.; Djatna, T.

    2018-04-01

    Postharvest technologies are becoming abundant, but only a few are applicable and useful to the wider community, which calls for a systematic assessment of technology-transfer readiness. The system described here assesses the readiness of a technology on levels 1-9 and minimizes the technology-transfer time at each level, so that the time required from the selection process onward is kept to a minimum. The problem was solved using the Relief method, which ranks postharvest technologies at each level by weighting feasible criteria, and PERT (Program Evaluation and Review Technique) for scheduling. The ranking results show that the horticultural postharvest technology considered is able to pass level 7; the technology can then be developed to pilot scale, with PERT giving an optimistic technology-readiness time of 7.9 years. Readiness level 9 indicates that the technology has been tested under actual conditions and that its estimated production price has been benchmarked against competitors. The system can be used to determine the readiness of technology innovations derived from agricultural raw materials as they pass through defined stages.
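
    The PERT estimate underlying the schedule is a small calculation worth making explicit (the level durations below are invented for illustration): each activity's expected time is te = (o + 4m + p) / 6 from its optimistic, most-likely, and pessimistic times.

      def pert_te(o, m, p):
          # expected duration from optimistic, most-likely, pessimistic estimates
          return (o + 4.0 * m + p) / 6.0

      # hypothetical (optimistic, most likely, pessimistic) years per readiness level
      levels = [(0.5, 1.0, 2.0), (0.8, 1.2, 2.5), (1.0, 1.5, 3.0)]
      total = sum(pert_te(o, m, p) for o, m, p in levels)
      print("expected readiness time: %.1f years" % total)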

  18. Validation of missed space-group symmetry in X-ray powder diffraction structures with dispersion-corrected density functional theory.

    PubMed

    Hempler, Daniela; Schmidt, Martin U; van de Streek, Jacco

    2017-08-01

    More than 600 molecular crystal structures with correct, incorrect and uncertain space-group symmetry were energy-minimized with dispersion-corrected density functional theory (DFT-D, PBE-D3). For the purpose of determining the correct space-group symmetry the required tolerance on the atomic coordinates of all non-H atoms is established to be 0.2 Å. For 98.5% of 200 molecular crystal structures published with missed symmetry, the correct space group is identified; there are no false positives. Very small, very symmetrical molecules can end up in artificially high space groups upon energy minimization, although this is easily detected through visual inspection. If the space group of a crystal structure determined from powder diffraction data is ambiguous, energy minimization with DFT-D provides a fast and reliable method to select the correct space group.
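
    The acceptance test implied by the 0.2 Å tolerance reduces to a short check (a sketch; the coordinates are assumed to be aligned Cartesian positions of the non-H atoms, in Å):

      import numpy as np

      def symmetry_consistent(coords_experimental, coords_minimized, tol=0.2):
          # accept the higher-symmetry space group only if no non-H atom
          # moved farther than the tolerance during energy minimization
          d = np.linalg.norm(coords_experimental - coords_minimized, axis=1)
          return d.max() <= tol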

  19. Safety and Equality at Odds: OSHA and Title VII Clash over Health Hazards in the Workplace.

    ERIC Educational Resources Information Center

    Crowell, Donald R.; Copus, David A.

    1978-01-01

    Discusses the legal problems presented by job health hazards which have a different effect on men and women. Where methods of eliminating or minimizing exposure, as required by the Occupational Safety and Health Act, affect only one sex, the provisions of Title VII of the Civil Rights Act may be violated. (MF)

  20. Cooling devices and methods for use with electric submersible pumps

    DOEpatents

    Jankowski, Todd A; Hill, Dallas D

    2014-12-02

    Cooling devices for use with electric submersible pump motors include a refrigerator attached to the end of the electric submersible pump motor with the evaporator heat exchanger accepting all or a portion of the heat load from the motor. The cooling device can be a self-contained bolt-on unit, so that minimal design changes to existing motors are required.

  1. Cooling devices and methods for use with electric submersible pumps

    DOEpatents

    Jankowski, Todd A.; Hill, Dallas D.

    2016-07-19

    Cooling devices for use with electric submersible pump motors include a refrigerator attached to the end of the electric submersible pump motor with the evaporator heat exchanger accepting all or a portion of the heat load from the motor. The cooling device can be a self-contained bolt-on unit, so that minimal design changes to existing motors are required.

  2. Soil Stabilization for Roadways and Airfields

    DTIC Science & Technology

    1987-07-01

    [Only fragments of this report survive extraction: safety text requiring that all Occupational Safety and Health Act requirements be observed to reduce the possibility of accidents and minimize health hazards, plus table-of-contents entries covering selection of asphalt type and asphalt content; safety precautions and limitations of use; specifications for lime fly ash-aggregate base/subbase courses; and a typical specification for road-mixed asphalt base courses.]

  3. Resolution of an Orbital Issue: A Designed Experiment

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.

    2011-01-01

    Design of Experiments (DOE) is a systematic approach to investigation of a system or process. A series of structured tests are designed in which planned changes are made to the input variables of a process or system. The effects of these changes on a pre-defined output are then assessed. DOE is a formal method of maximizing information gained while minimizing resources required.

  4. Minimal-Approximation-Based Decentralized Backstepping Control of Interconnected Time-Delay Systems.

    PubMed

    Choi, Yun Ho; Yoo, Sung Jin

    2016-12-01

    A decentralized adaptive backstepping control design using minimal function approximators is proposed for nonlinear large-scale systems with unknown unmatched time-varying delayed interactions and unknown backlash-like hysteresis nonlinearities. Compared with existing decentralized backstepping methods, the contribution of this paper is to design a simple local control law for each subsystem, consisting of an actual control with one adaptive function approximator, without requiring the use of multiple function approximators and regardless of the order of each subsystem. The virtual controllers for each subsystem are used as intermediate signals for designing a local actual control at the last step. For each subsystem, a lumped unknown function including the unknown nonlinear terms and the hysteresis nonlinearities is derived at the last step and is estimated by one function approximator. Thus, the proposed approach only uses one function approximator to implement each local controller, while existing decentralized backstepping control methods require the number of function approximators equal to the order of each subsystem and a calculation of virtual controllers to implement each local actual controller. The stability of the total controlled closed-loop system is analyzed using the Lyapunov stability theorem.

  5. Aortic Wave Dynamics and Its Influence on Left Ventricular Workload

    NASA Astrophysics Data System (ADS)

    Pahlevan, Niema; Gharib, Morteza

    2010-11-01

    Clinical and epidemiologic studies have shown that hypertension plays a key role in the development of left ventricular (LV) hypertrophy and ultimately heart failure, mostly due to increased LV workload. It is therefore crucial to diagnose and treat abnormally high LV workload at an early stage. The pumping mechanism of the heart is pulsatile, so it sends pressure and flow waves into the compliant aorta. The wave dynamics in the aorta are dominated by the interplay of heart rate (HR), aortic rigidity, and the locations of reflection sites. We hypothesized that for a fixed cardiac output (CO) and peripheral resistance (PR), the interplay of HR and aortic compliance can create conditions that minimize the LV power requirement. We used a computational approach to test this hypothesis: a finite element method with direct fluid-structure interaction (FSI) coupling. Blood was assumed to be an incompressible Newtonian fluid, and the aortic wall was considered elastic and isotropic. Simulations were performed for various heart rates and aortic rigidities while the inflow wave, CO, and PR were kept constant. For any aortic compliance, the LV power requirement becomes minimal at a specific heart rate, and the minimum shifts to higher heart rates as aortic rigidity increases.

  6. An Overview on Prenatal Screening for Chromosomal Aberrations.

    PubMed

    Hixson, Lucas; Goel, Srishti; Schuber, Paul; Faltas, Vanessa; Lee, Jessica; Narayakkadan, Anjali; Leung, Ho; Osborne, Jim

    2015-10-01

    This article is a review of current and emerging methods used for prenatal detection of chromosomal aneuploidies. Chromosomal anomalies in the developing fetus can occur in any pregnancy and lead to death prior to or shortly after birth or to costly lifelong disabilities. Early detection of fetal chromosomal aneuploidies, an atypical number of certain chromosomes, can help parents evaluate their pregnancy options. Current diagnostic methods include maternal serum sampling or nuchal translucency testing, which are minimally invasive diagnostics, but lack sensitivity and specificity. The gold standard, karyotyping, requires amniocentesis or chorionic villus sampling, which are highly invasive and can cause abortions. In addition, many of these methods have long turnaround times, which can cause anxiety in mothers. Next-generation sequencing of fetal DNA in maternal blood enables minimally invasive, sensitive, and reasonably rapid analysis of fetal chromosomal anomalies and can be of clinical utility to parents. This review covers traditional methods and next-generation sequencing techniques for diagnosing aneuploidies in terms of clinical utility, technological characteristics, and market potential. © 2015 Society for Laboratory Automation and Screening.

  7. Comparison of three techniques for evaluating skin erythemal response for determination of sun protection factors of sunscreens: high resolution laser Doppler imaging, colorimetry and visual scoring.

    PubMed

    Wilhelm, K P; Kaspar, K; Funkel, O

    2001-04-01

    Sun protection factor (SPF) measurement is based on the determination of the minimal erythema dose (MED). The ratio of doses required to induce a minimal erythema between product-treated and untreated skin is defined as SPF. The aim of this study was to validate the conventionally used visual scoring with two non-invasive methods: high resolution laser Doppler imaging (HR-LDI) and colorimetry. Another goal was to check whether suberythemal reactions could be detected by means of HR-LDI measurements. Four sunscreens were selected. The measurements were made on the back of 10 subjects. A solar simulator SU 5000 (m.u.t., Wedel, Germany) served as radiation source. For the visual assessment, the erythema was defined according to COLIPA as the first perceptible, clearly defined unambiguous redness of the skin. For the colorimetric determination of the erythema, a Chromameter CR 300 (Minolta, Osaka, Japan) was used. The threshold for the colorimetry was chosen according to the COLIPA recommendation as an increase of the redness parameter delta a* = 2.5. For the non-contact perfusion measurements of skin blood flow, a two-dimensional high resolution laser Doppler imager (HR-LDI) (Lisca, Linköping, Sweden) was used. For the HR-LDI measurements, an optimal threshold perfusion needed to be established. For the HR-LDI measurements basal perfusion +1 standard deviation of all basal measurements was found to be a reliable threshold perfusion corresponding to the minimal erythema. Smaller thresholds, which would be necessary for detection of suberythemal responses, did not provide unambiguous data. All three methods, visual scoring, colorimetry and HR-LDI, produced similar SPFs for the test products with a variability of < 5% between methods. The HR-LDI method showed the lowest variation of the mean SPF. Neither of the instrumental methods, however, resulted in an increase of the sensitivity of SPF determination as compared with visual scoring. Both HR-LDI and colorimetry are suitable, reliable and observer-independent methods for MED determination. However, they do not provide greater sensitivity and thus do not result in lower UV dose requirements for testing.
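
    The SPF definition used throughout the study amounts to a one-line calculation (the dose values below are illustrative):

      def spf(med_protected, med_unprotected):
          # ratio of UV doses producing a minimal erythema with and without product
          return med_protected / med_unprotected

      print(spf(med_protected=300.0, med_unprotected=20.0))   # SPF 15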

  8. Advances in Green Organic Sonochemistry.

    PubMed

    Draye, Micheline; Kardos, Nathalie

    2016-10-01

    Over the past 15 years, sustainable chemistry has emerged as a new paradigm in the development of chemistry. In the field of organic synthesis, green chemistry rhymes with relevant choice of starting materials, atom economy, methodologies that minimize the number of chemical steps, appropriate use of benign solvents and reagents, efficient strategies for product isolation and purification and energy minimization. In that context, unconventional methods, and especially ultrasound, can be a fine addition towards achieving these green requirements. Undoubtedly, sonochemistry is considered as being one of the most promising green chemical methods (Cravotto et al. Catal Commun 63: 2-9, 2015). This review is devoted to the most striking results obtained in green organic sonochemistry between 2006 and 2016. Furthermore, among catalytic transformations, oxidation reactions are the most polluting reactions in the chemical industry; thus, we have focused a part of our review on the very promising catalytic activity of ultrasound for oxidative purposes.

  9. Systems and technologies for objective evaluation of technical skills in laparoscopic surgery.

    PubMed

    Sánchez-Margallo, Juan A; Sánchez-Margallo, Francisco M; Oropesa, Ignacio; Gómez, Enrique J

    2014-01-01

    Minimally invasive surgery is a highly demanding surgical approach regarding technical requirements for the surgeon, who must be trained in order to perform a safe surgical intervention. Traditional surgical education in minimally invasive surgery is commonly based on subjective criteria to quantify and evaluate surgical abilities, which could be potentially unsafe for the patient. Authors, surgeons and associations are increasingly demanding the development of more objective assessment tools that can accredit surgeons as technically competent. This paper describes the state of the art in objective assessment methods of surgical skills. It gives an overview on assessment systems based on structured checklists and rating scales, surgical simulators, and instrument motion analysis. As a future work, an objective and automatic assessment method of surgical skills should be standardized as a means towards proficiency-based curricula for training in laparoscopic surgery and its certification.

  10. The application of nonlinear programming and collocation to optimal aeroassisted orbital transfers

    NASA Astrophysics Data System (ADS)

    Shi, Y. Y.; Nelson, R. L.; Young, D. H.; Gill, P. E.; Murray, W.; Saunders, M. A.

    1992-01-01

    Sequential quadratic programming (SQP) and collocation of the differential equations of motion were applied to optimal aeroassisted orbital transfers. The Optimal Trajectory by Implicit Simulation (OTIS) computer program codes with updated nonlinear programming code (NZSOL) were used as a testbed for the SQP nonlinear programming (NLP) algorithms. The state-of-the-art sparse SQP method is considered to be effective for solving large problems with a sparse matrix. Sparse optimizers are characterized in terms of memory requirements and computational efficiency. For the OTIS problems, less than 10 percent of the Jacobian matrix elements are nonzero. The SQP method encompasses two phases: finding an initial feasible point by minimizing the sum of infeasibilities and minimizing the quadratic objective function within the feasible region. The orbital transfer problem under consideration involves the transfer from a high energy orbit to a low energy orbit.

  11. Obtaining Soluble Folded Proteins from Inclusion Bodies Using Sarkosyl, Triton X-100, and CHAPS: Application to LB and M9 Minimal Media.

    PubMed

    Massiah, Michael A; Wright, Katharine M; Du, Haijuan

    2016-04-01

    This unit describes a straightforward and efficient method of using sarkosyl to solubilize and recover difficult recombinant proteins, such as GST- and His6-tagged fusion proteins, that are overexpressed in E. coli. This protocol is especially useful for rescuing recombinant proteins overexpressed in M9 minimal medium. Sarkosyl added to lysis buffers helps with both protein solubility and cell lysis. Higher-percentage sarkosyl (up to 10%) can extract >95% of soluble protein from inclusion bodies. In the case of sarkosyl-solubilized GST-fusion proteins, batch-mode affinity purification requires addition of a specific ratio of Triton X-100 and CHAPS, while sarkosyl-solubilized His6-tagged fusion proteins can be directly purified on Ni(2+) resin columns. Proteins purified by this method can be widely used in biological assays, structural analysis, and mass spectrometry assays. Copyright © 2016 John Wiley & Sons, Inc.

  12. The applications of deep neural networks to sdBV classification

    NASA Astrophysics Data System (ADS)

    Boudreaux, Thomas M.

    2017-12-01

    With several new large-scale surveys on the horizon, including LSST, TESS, ZTF, and Evryscope, faster and more accurate analysis methods will be required to adequately process the enormous amount of data produced. Deep learning, used in industry for years now, allows for advanced feature detection in minimally prepared datasets at very high speeds; however, despite the advantages of this method, its application to astrophysics has not yet been extensively explored. This dearth may be due to a lack of training data available to researchers. Here we generate synthetic data loosely mimicking the properties of acoustic-mode pulsating stars and show that two separate paradigms of deep learning - the artificial neural network and the convolutional neural network - can both be used to classify these synthetic data effectively. Additionally, this classification can be performed at relatively high accuracy with minimal time spent adjusting network hyperparameters.
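
    As a sketch of the convolutional paradigm described (written in PyTorch, which the paper does not specify; the layer sizes are arbitrary illustrations, not the author's architecture), a 1-D CNN classifying synthetic light curves might look like:

      import torch
      import torch.nn as nn

      class SdBVClassifier(nn.Module):
          def __init__(self, n_classes=2):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
                  nn.MaxPool1d(4),
                  nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                  nn.AdaptiveAvgPool1d(8))
              self.classifier = nn.Linear(32 * 8, n_classes)

          def forward(self, x):                     # x: (batch, 1, n_samples)
              return self.classifier(self.features(x).flatten(1))

      model = SdBVClassifier()
      logits = model(torch.randn(4, 1, 2048))       # four synthetic light curves
      print(logits.shape)                           # torch.Size([4, 2])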

  13. Defined presentation of carbohydrates on a duplex DNA scaffold.

    PubMed

    Schlegel, Mark K; Hütter, Julia; Eriksson, Magdalena; Lepenies, Bernd; Seeberger, Peter H

    2011-12-16

    A new method for the spatially defined alignment of carbohydrates on a duplex DNA scaffold is presented. The use of an N-hydroxysuccinimide (NHS)-ester phosphoramidite along with carbohydrates containing an alkylamine linker allows for on-column labeling during solid-phase oligonucleotide synthesis. This modification method during solid-phase synthesis only requires the use of minimal amounts of complex carbohydrates. The covalently attached carbohydrates are presented in the major groove of the B-form duplex DNA as potential substrates for murine type II C-type lectin receptors mMGL1 and mMGL2. CD spectroscopy and thermal melting revealed only minimal disturbance of the overall helical structure. Surface plasmon resonance and cellular uptake studies with bone-marrow-derived dendritic cells were used to assess the capability of these carbohydrate-modified duplexes to bind to mMGL receptors. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Plasma cortisol and 11-ketotestosterone enzyme immunoassay (EIA) kit validation for three fish species: the orange clownfish Amphiprion percula, the orangefin anemonefish Amphiprion chrysopterus and the blacktip reef shark Carcharhinus melanopterus.

    PubMed

    Mills, S C; Mourier, J; Galzin, R

    2010-08-01

    Commercially available enzyme immunoassay (EIA) kits were validated for measuring steroid hormone concentrations in blood plasma from three fish species: the orange clownfish Amphiprion percula, the orangefin anemonefish Amphiprion chrysopterus and the blacktip reef shark Carcharhinus melanopterus. A minimum of 5 microl plasma was required to estimate hormone concentrations with both kits. These EIA kits are a simple method requiring minimal equipment, for measuring hormone profiles under field conditions.

  15. Extrapolation of rotating sound fields.

    PubMed

    Carley, Michael

    2018-03-01

    A method is presented for the computation of the acoustic field around a tonal circular source, such as a rotor or propeller, based on an exact formulation which is valid in the near and far fields. The only input data required are the pressure field sampled on a cylindrical surface surrounding the source, with no requirement for acoustic velocity or pressure gradient information. The formulation is approximated with exponentially small errors and appears to require input data at a theoretically minimal number of points. The approach is tested numerically, with and without added noise, and demonstrates excellent performance, especially when compared to extrapolation using a far-field assumption.

  16. A variational data assimilation system for the range dependent acoustic model using the representer method: Theoretical derivations.

    PubMed

    Ngodock, Hans; Carrier, Matthew; Fabre, Josette; Zingarelli, Robert; Souopgui, Innocent

    2017-07-01

    This study presents the theoretical framework for variational data assimilation of acoustic pressure observations into an acoustic propagation model, namely, the range dependent acoustic model (RAM). RAM uses the split-step Padé algorithm to solve the parabolic equation. The assimilation consists of minimizing a weighted least squares cost function that includes discrepancies between the model solution and the observations. The minimization process, which uses the principle of variations, requires the derivation of the tangent linear and adjoint models of the RAM. The mathematical derivations are presented here, and, for the sake of brevity, a companion study presents the numerical implementation and results from the assimilation simulated acoustic pressure observations.
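
    The generic form of the weighted least-squares cost function minimized in such variational assimilation (notation illustrative; the paper defines its own weights and operators) is

      \[
        J(u) \;=\; \tfrac{1}{2}\,(u - u_b)^{\mathsf T} B^{-1} (u - u_b)
        \;+\; \tfrac{1}{2} \sum_{j} \bigl[ H_j(u) - y_j \bigr]^{\mathsf T}
              R_j^{-1} \bigl[ H_j(u) - y_j \bigr],
      \]

    where u_b is a background estimate with error covariance B, the y_j are the acoustic pressure observations with error covariances R_j, and H_j maps the model solution to observation space; the tangent linear and adjoint models supply the gradient of J for the minimization.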

  17. A cubic extended interior penalty function for structural optimization

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Haftka, R. T.

    1979-01-01

    This paper describes an optimization procedure for the minimum weight design of complex structures. The procedure is based on a new cubic extended interior penalty function (CEIPF) used with the sequence of unconstrained minimization technique (SUMT) and Newton's method. The Hessian matrix of the penalty function is approximated using only constraints and their derivatives. The CEIPF is designed to minimize the error in the approximation of the Hessian matrix, and as a result the number of structural analyses required is small and independent of the number of design variables. Three example problems are reported. The number of structural analyses is reduced by as much as 50 per cent below previously reported results.
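
    One common way to build such a cubic extended interior penalty (a sketch; the paper's exact coefficients and transition rule may differ) is to use the reciprocal barrier 1/g where a constraint g(x) >= 0 is comfortably satisfied, and its cubic Taylor expansion about the transition value eps where it is not, which keeps the penalty finite and smooth even for violated constraints:

      def ceipf(g, eps=1e-2):
          # reciprocal interior penalty with a cubic extension below g = eps
          if g >= eps:
              return 1.0 / g
          d = g - eps   # cubic Taylor expansion of 1/g about eps
          return 1.0 / eps - d / eps**2 + d**2 / eps**3 - d**3 / eps**4

    Within SUMT, the composite function f(x) + r * sum_i ceipf(g_i(x)) is then minimized for a decreasing sequence of penalty multipliers r.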

  18. Methods for computing color anaglyphs

    NASA Astrophysics Data System (ADS)

    McAllister, David F.; Zhou, Ya; Sullivan, Sophia

    2010-02-01

    A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIEL*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
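
    A per-pixel sketch of this computation (simplified; M_left and M_right below are assumed 3x3 matrices, derived offline from the display spectra and filter transmissions, that map anaglyph RGB to the RGB seen through each filter):

      import numpy as np
      from scipy.optimize import least_squares
      from skimage.color import rgb2lab

      def lab(rgb):
          return rgb2lab(np.clip(rgb, 0, 1).reshape(1, 1, 3)).ravel()

      def anaglyph_pixel(left_rgb, right_rgb, M_left, M_right):
          target = np.concatenate([lab(left_rgb), lab(right_rgb)])
          def residual(c):
              # CIELAB mismatch between what each eye sees and the stereo pair
              seen = np.concatenate([lab(M_left @ c), lab(M_right @ c)])
              return seen - target
          res = least_squares(residual, x0=np.full(3, 0.5), bounds=(0.0, 1.0))
          return res.x

      M_left = np.diag([0.9, 0.05, 0.05])    # illustrative red-pass filter
      M_right = np.diag([0.05, 0.8, 0.8])    # illustrative cyan-pass filter
      print(anaglyph_pixel(np.array([0.2, 0.6, 0.3]),
                           np.array([0.25, 0.55, 0.35]), M_left, M_right))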

  19. Interface Design and Human Factors Considerations for Model-Based Tight Glycemic Control in Critical Care

    PubMed Central

    Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey

    2012-01-01

    Introduction Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design to maximize compliance, minimize real and perceived clinical effort, and minimize error based on simple human factors and end user input. Method The graphical user interface (GUI) design is presented by construction based on a series of simple, short design criteria based on fundamental human factors engineering and includes the use of user feedback and focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction and use. It is coupled to a protocol that allows nurse staff to select measurement intervals and thus self-manage workload. Results The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users who are the predominant users, while additional detailed and longitudinal data, which are of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests based on the end user's immediate focus and goals shows how interfaces must adapt to offer different information to multiple types of users. Conclusions The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error and are readily generalizable. PMID:22401330

  20. Preliminary Structural Sensitivity Study of Hypersonic Inflatable Aerodynamic Decelerator Using Probabilistic Methods

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.

    2014-01-01

    Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology validation via flight testing. This paper explores the implementation of probabilistic methods in the sensitivity analysis of the structural response of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD). HIAD architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during re-entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. In the example presented here, the structural parameters of an existing HIAD model have been varied to illustrate the design approach utilizing uncertainty-based methods. Surrogate models have been used to reduce computational expense by several orders of magnitude. The suitability of the design is based on assessing variation in the resulting cone angle; the acceptable cone-angle variation would rely on the aerodynamic requirements.

  1. Local-in-Time Adjoint-Based Method for Optimal Control/Design Optimization of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2009-01-01

    We study local-in-time adjoint-based methods for minimization of flow matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time method is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based optimization methods which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time method solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time method is much lower than that of the global adjoint formulation, thus making time-dependent optimization feasible for practical applications. The paper presents a detailed comparison of the local- and global-in-time adjoint-based methods for minimization of a tracking functional governed by the Euler equations describing the flow around a circular bump. Our numerical results show that the local-in-time method converges to the same optimal solution obtained with the global counterpart, while drastically reducing the memory cost as compared to the global-in-time adjoint formulation.
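
    A toy demonstration of the local-in-time idea on a scalar ODE (not the paper's Euler-equation setting; the model, target trajectory, and parameters are invented): for u' = -a*u and a tracking functional J = integral of (u - u_ref)^2 dt, the adjoint lam' = a*lam - 2*(u - u_ref) is integrated backward over each short subinterval with a local terminal condition lam = 0, and the sensitivity dJ/da = integral of lam*(-u) dt is accumulated as the forward solve proceeds, so only one subinterval of states is ever stored:

      import numpy as np

      def dJ_da_local(a, u0=1.0, T=4.0, n_sub=8, steps=50):
          dt = T / (n_sub * steps)
          u, t, grad = u0, 0.0, 0.0
          for _ in range(n_sub):
              us, ts = [u], [t]                       # states stored locally only
              for _ in range(steps):                  # forward Euler on u' = -a*u
                  u += dt * (-a * u); t += dt
                  us.append(u); ts.append(t)
              lam = 0.0                               # local terminal condition
              for k in range(steps, 0, -1):           # backward adjoint sweep
                  u_ref = np.exp(-ts[k])              # hypothetical target
                  lam -= dt * (a * lam - 2.0 * (us[k] - u_ref))
                  grad += dt * lam * (-us[k - 1])     # sensitivity accumulation
          return grad

      print(dJ_da_local(a=1.2))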

  2. Multi-Objective Online Initialization of Spacecraft Formations

    NASA Technical Reports Server (NTRS)

    Jeffrey, Matthew; Breger, Louis; How, Jonathan P.

    2007-01-01

    This paper extends a previously developed method for finding spacecraft initial conditions (ICs) that minimize the drift resulting from J2 disturbances while also minimizing the fuel required to attain those ICs. It generalizes the single-spacecraft optimization to a formation-wide optimization valid for an arbitrary number of vehicles. Additionally, the desired locations of the spacecraft, separate from the starting locations, can be specified either with respect to a reference orbit or relative to the other spacecraft in the formation. The three objectives (minimize drift, minimize fuel, and maintain a geometric template) are expressed as competing costs in a linear optimization and are traded against one another through the use of scalar weights. By carefully selecting these weights and re-initializing the formation at regular intervals, a closed-loop, formation-wide control system is created. This control system can be used to reconfigure the formations on the fly, and creates fuel-efficient plans by placing the spacecraft in semi-invariant orbits. The overall approach is demonstrated through nonlinear simulations for two formations: a GEO orbit and an elliptical orbit.

  3. Identification of residual leukemic cells by flow cytometry in childhood B-cell precursor acute lymphoblastic leukemia: verification of leukemic state by flow-sorting and molecular/cytogenetic methods.

    PubMed

    Øbro, Nina F; Ryder, Lars P; Madsen, Hans O; Andersen, Mette K; Lausen, Birgitte; Hasle, Henrik; Schmiegelow, Kjeld; Marquart, Hanne V

    2012-01-01

    Reduction in minimal residual disease, measured by real-time quantitative PCR or flow cytometry, predicts prognosis in childhood B-cell precursor acute lymphoblastic leukemia. We explored whether cells reported as minimal residual disease by flow cytometry represent the malignant clone harboring clone-specific genomic markers (53 follow-up bone marrow samples from 28 children with B-cell precursor acute lymphoblastic leukemia). Cell populations (presumed leukemic and non-leukemic) were flow-sorted during standard flow cytometry-based minimal residual disease monitoring and explored by PCR and/or fluorescence in situ hybridization. We found good concordance between flow cytometry and genomic analyses in the individual flow-sorted leukemic (93% true positive) and normal (93% true negative) cell populations. Four cases with discrepant results had plausible explanations (e.g. partly informative immunophenotype and antigen modulation) that highlight important methodological pitfalls. These findings demonstrate that with sufficient experience, flow cytometry is reliable for minimal residual disease monitoring in B-cell precursor acute lymphoblastic leukemia, although rare cases require supplementary PCR-based monitoring.

  4. Upwind relaxation methods for the Navier-Stokes equations using inner iterations

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Ng, Wing-Fai; Walters, Robert W.

    1992-01-01

    A subsonic and a supersonic problem are treated by an upwind line-relaxation algorithm for the Navier-Stokes equations, using inner iterations to accelerate convergence to the steady-state solution and thereby minimize CPU time. While the inner iterative procedure mimics the quadratic convergence of the direct-solver method in both test problems, some of the nonquadratic inner-iteration results proved more efficient than the quadratic ones. In the more successful, supersonic test case, inner iteration required only about 65 percent of the CPU time of the line-relaxation method.

  5. Toxicity Minimized Cryoprotectant Addition and Removal Procedures for Adherent Endothelial Cells

    PubMed Central

    Davidson, Allyson Fry; Glasscock, Cameron; McClanahan, Danielle R.; Benson, James D.; Higgins, Adam Z.

    2015-01-01

    Ice-free cryopreservation, known as vitrification, is an appealing approach for banking of adherent cells and tissues because it prevents dissociation and morphological damage that may result from ice crystal formation. However, current vitrification methods are often limited by the cytotoxicity of the concentrated cryoprotective agent (CPA) solutions that are required to suppress ice formation. Recently, we described a mathematical strategy for identifying minimally toxic CPA equilibration procedures based on the minimization of a toxicity cost function. Here we provide direct experimental support for the feasibility of these methods when applied to adherent endothelial cells. We first developed a concentration- and temperature-dependent toxicity cost function by exposing the cells to a range of glycerol concentrations at 21°C and 37°C, and fitting the resulting viability data to a first order cell death model. This cost function was then numerically minimized in our state constrained optimization routine to determine addition and removal procedures for 17 molal (mol/kg water) glycerol solutions. Using these predicted optimal procedures, we obtained 81% recovery after exposure to vitrification solutions, as well as successful vitrification with the relatively slow cooling and warming rates of 50°C/min and 130°C/min. In comparison, conventional multistep CPA equilibration procedures resulted in much lower cell yields of about 10%. Our results demonstrate the potential for rational design of minimally toxic vitrification procedures and pave the way for extension of our optimization approach to other adherent cell types as well as more complex systems such as tissues and organs. PMID:26605546
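
    The structure of the toxicity cost function can be sketched compactly (the rate law's parameters below are invented; the paper fits its own concentration- and temperature-dependent rates from viability data). With first-order cell death dN/dt = -k(C)*N, a stepwise protocol's cost is J = sum of k(C_step) * duration, and predicted survival is exp(-J):

      import numpy as np

      def k_glycerol(C, alpha=1.2e-4, beta=1.6):
          # hypothetical toxicity rate (1/min) at glycerol concentration C (mol/kg)
          return alpha * C**beta

      def protocol_cost(steps):
          # steps: list of (concentration, duration in minutes)
          return sum(k_glycerol(C) * t for C, t in steps)

      gentle = [(3.0, 10), (8.0, 10), (17.0, 5)]   # gradual equilibration
      abrupt = [(17.0, 25)]                        # single-step exposure
      for name, p in [("gentle", gentle), ("abrupt", abrupt)]:
          J = protocol_cost(p)
          print(name, "J = %.3f, survival = %.0f%%" % (J, 100 * np.exp(-J)))

    Minimizing J over admissible concentration paths, subject to osmotic constraints on cell volume, is what yields the optimized addition and removal procedures.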

  6. Design of Aspirated Compressor Blades Using Three-dimensional Inverse Method

    NASA Technical Reports Server (NTRS)

    Dang, T. Q.; Rooij, M. Van; Larosiliere, L. M.

    2003-01-01

    A three-dimensional viscous inverse method is extended to allow blading design with full interaction between the prescribed pressure-loading distribution and a specified transpiration scheme. Transpiration on blade surfaces and endwalls is implemented as inflow/outflow boundary conditions, and the basic modifications to the method are outlined. This paper focuses on a discussion concerning an application of the method to the design and analysis of a supersonic rotor with aspiration. Results show that an optimum combination of pressure-loading tailoring with surface aspiration can lead to a minimization of the amount of sucked flow required for a net performance improvement at design and off-design operations.

  7. Planning nonlinear access paths for temporal bone surgery.

    PubMed

    Fauser, Johannes; Sakas, Georgios; Mukhopadhyay, Anirban

    2018-05-01

    Interventions at the otobasis operate in the narrow region of the temporal bone where several highly sensitive organs define obstacles with minimal clearance for surgical instruments. Nonlinear trajectories for potential minimally invasive interventions can provide larger distances to risk structures and optimized orientations of surgical instruments, thus improving clinical outcomes when compared to existing linear approaches. In this paper, we present fast and accurate planning methods for such nonlinear access paths. We define a specific motion planning problem in [Formula: see text] with notable constraints in computation time and goal pose that reflect the requirements of temporal bone surgery. We then present [Formula: see text]-RRT-Connect: two suitable motion planners based on bidirectional Rapidly exploring Random Tree (RRT) to solve this problem efficiently. The benefits of [Formula: see text]-RRT-Connect are demonstrated on real CT data of patients. Their general performance is shown on a large set of realistic synthetic anatomies. We also show that these new algorithms outperform state-of-the-art methods based on circular arcs or Bézier-Splines when applied to this specific problem. With this work, we demonstrate that preoperative and intra-operative planning of nonlinear access paths is possible for minimally invasive surgeries at the otobasis.

  8. The Naturoptic Method for Safe Recovery of Vision: Mentored Tutoring, Earnings, Academic Entity Financial Resources Tool

    NASA Astrophysics Data System (ADS)

    Sambursky, Nicole D.; McLeod, Roger David; Silva, Sandra Helena

    2009-05-01

    This is a novel method for safely and naturally improving vision, with applications for minority, female, and academic-entity financial advantage. The patented Naturoptic Method is a simple system designed to work quickly, requiring only a minimal number of sessions for improvement. Our mentored and unique activities investigated these claims by implementing the Naturoptic Method on ourselves over a period of time. Research was conducted at off-campus locations with the inventor of the Naturoptic Method. Initial visual acuity and subsequent progress are self-assessed using standard Snellen eye charts. The research is designed to document improvements in vision with successive uses of the Naturoptic Method, as mentored teachers or awardees of ``The Kaan Balam Matagamon Memorial Award,'' with net earnings shared by the designees, academic entities, the American Indians in Science and Engineering Society (AISES), or charity. The Board requires awardees, its students, or affiliates to sign non-disclosure agreements.

  9. Detection of influenza antigenic variants directly from clinical samples using polyclonal antibody based proximity ligation assays

    PubMed Central

    Martin, Brigitte E.; Jia, Kun; Sun, Hailiang; Ye, Jianqiang; Hall, Crystal; Ware, Daphne; Wan, Xiu-Feng

    2016-01-01

    Identification of antigenic variants is the key to a successful influenza vaccination program. The empirical serological methods for determining influenza antigenic properties require viral propagation. Here a novel quantitative PCR-based antigenic characterization method using polyclonal antibodies and proximity ligation assays, so-called polyPLA, was developed and validated. This method can detect a viral titer of less than 1000 TCID50/mL. Not only can this method differentiate between different HA subtypes of influenza viruses, but it can also effectively identify antigenic drift events within the same HA subtype. Applications to H3N2 seasonal influenza data showed that the results from this novel method are consistent with those from conventional serological assays. The method is not limited to the detection of antigenic variants in influenza but extends to other pathogens, and it has the potential to be applied in a large-scale disease-surveillance platform, requiring minimal biosafety containment and using clinical samples directly. PMID:25546251

  10. Simultaneous determination of eight water-soluble vitamins in supplemented foods by liquid chromatography.

    PubMed

    Zafra-Gómez, Alberto; Garballo, Antonio; Morales, Juan C; García-Ayuso, Luis E

    2006-06-28

    A fast, simple, and reliable method for the isolation and determination of the vitamins thiamin, riboflavin, niacin, pantothenic acid, pyridoxine, folic acid, cyanocobalamin, and ascorbic acid in food samples is proposed. The most relevant advantages of the proposed method are the simultaneous determination of the eight more common vitamins in enriched food products and a reduction of the time required for quantitative extraction, because the method consists merely of the addition of a precipitation solution and centrifugation of the sample. Furthermore, this method saves a substantial amount of reagents as compared with official methods, and minimal sample manipulation is achieved due to the few steps required. The chromatographic separation is carried out on a reverse phase C18 column, and the vitamins are detected at different wavelengths by either fluorescence or UV-visible detection. The proposed method was applied to the determination of water-soluble vitamins in supplemented milk, infant nutrition products, and milk powder certified reference material (CRM 421, BCR) with recoveries ranging from 90 to 100%.

  11. Applicability of PM3 to transphosphorylation reaction path: Toward designing a minimal ribozyme

    NASA Technical Reports Server (NTRS)

    Manchester, John I.; Shibata, Masayuki; Setlik, Robert F.; Ornstein, Rick L.; Rein, Robert

    1993-01-01

    A growing body of evidence shows that RNA can catalyze many of the reactions necessary both for replication of genetic material and for the possible transition into the modern protein-based world. However, contemporary ribozymes are too large to have self-assembled from a prebiotic oligonucleotide pool. Still, it is likely that the major features of the earliest ribozymes have been preserved as molecular fossils in the catalytic RNA of today. Therefore, the search for a minimal ribozyme has been aimed at finding the necessary structural features of a modern ribozyme (Beaudry and Joyce, 1990). Both a three-dimensional model and quantum chemical calculations are required to quantitatively determine the effects of structural features of the ribozyme on the reaction it catalyzes. Previous studies of the reaction path have been conducted at the ab initio level, but these methods are limited to small models due to enormous computational requirements. Semiempirical methods have been applied to large systems in the past; however, their accuracy must first be established. We therefore evaluate the semiempirical PM3 method on a simple model of the ribozyme-catalyzed reaction, the hydrolysis of phosphoric acid, and find that the results are qualitatively similar to ab initio results using large basis sets. Therefore, PM3 is suitable for studying the reaction path of the ribozyme-catalyzed reaction.

  12. Mesh refinement in finite element analysis by minimization of the stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1989-01-01

    Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the generated mesh is appropriate for the problem at hand. Since the accuracy of finite element results is mesh dependent, mesh selection is a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value whose error is below a predetermined tolerance. A posteriori methods use error indicators, developed from interpolation and approximation theory, to drive mesh refinement; others use criteria such as strain energy density variation or stress contours to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. The a priori methods available until now, by contrast, use geometrical parameters such as element aspect ratio, and are therefore not adaptive by nature. Here an adaptive a priori method is developed. The criterion is that minimizing the trace of the stiffness matrix with respect to the nodal coordinates leads to a minimization of the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to implementation in computer algorithms. When the procedure is used in conjunction with a posteriori methods of grid refinement, fewer refinement iterations and fewer degrees of freedom are required for convergence than when the procedure is not used. The resulting mesh has a uniform distribution of stiffness among the nodes and elements, which leads to a uniform error distribution; the mesh thus meets the optimality criterion of uniform error distribution.
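
    The criterion is easy to demonstrate on a 1-D bar (a sketch, not the paper's general formulation; the varying cross-section is invented). Each two-node bar element contributes 2*E*A_e/L_e to the stiffness-matrix trace, so the trace is minimized over the interior node positions:

      import numpy as np
      from scipy.optimize import minimize

      E, L = 1.0, 1.0
      A = lambda x: 1.0 + 4.0 * x**2          # hypothetical cross-section profile

      def stiffness_trace(interior):
          nodes = np.concatenate([[0.0], np.sort(interior), [L]])
          lens = np.diff(nodes)
          if np.any(lens <= 1e-9):
              return np.inf                   # reject degenerate meshes
          mids = 0.5 * (nodes[1:] + nodes[:-1])
          return np.sum(2.0 * E * A(mids) / lens)

      x0 = np.linspace(0.0, L, 9)[1:-1]       # 7 interior nodes, uniform start
      res = minimize(stiffness_trace, x0, method="Nelder-Mead")
      print(np.round(np.sort(res.x), 3))

    At the optimum the element lengths scale as the square root of the local section stiffness, so elements shorten where the bar is compliant, spreading stiffness evenly over the mesh.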

  13. Methods to Prescribe Particle Motion to Minimize Quadrature Error in Meshfree Methods

    NASA Astrophysics Data System (ADS)

    Templeton, Jeremy; Erickson, Lindsay; Morris, Karla; Poliakoff, David

    2015-11-01

    Meshfree methods are an attractive approach for simulating material systems undergoing large-scale deformation, such as spray break up, free surface flows, and droplets. Particles, which can be easily moved, are used as nodes and/or quadrature points rather than a relying on a fixed mesh. Most methods move particles according to the local fluid velocity that allows for the convection terms in the Navier-Stokes equations to be easily accounted for. However, this is a trade-off against numerical accuracy as the flow can often move particles to configurations with high quadrature error, and artificial compressibility is often required to prevent particles from forming undesirable regions of high and low concentrations. In this work, we consider the other side of the trade-off: moving particles based on reducing numerical error. Methods derived from molecular dynamics show that particles can be moved to minimize a surrogate for the solution error, resulting in substantially more accurate simulations at a fixed cost. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  14. Isothermal Amplification Methods for the Detection of Nucleic Acids in Microfluidic Devices

    PubMed Central

    Zanoli, Laura Maria; Spoto, Giuseppe

    2012-01-01

    Diagnostic tools for biomolecular detection need to fulfill specific requirements in terms of sensitivity, selectivity and high-throughput in order to widen their applicability and to minimize the cost of the assay. The nucleic acid amplification is a key step in DNA detection assays. It contributes to improving the assay sensitivity by enabling the detection of a limited number of target molecules. The use of microfluidic devices to miniaturize amplification protocols reduces the required sample volume and the analysis times and offers new possibilities for the process automation and integration in one single device. The vast majority of miniaturized systems for nucleic acid analysis exploit the polymerase chain reaction (PCR) amplification method, which requires repeated cycles of three or two temperature-dependent steps during the amplification of the nucleic acid target sequence. In contrast, low temperature isothermal amplification methods have no need for thermal cycling thus requiring simplified microfluidic device features. Here, the use of miniaturized analysis systems using isothermal amplification reactions for the nucleic acid amplification will be discussed. PMID:25587397

  15. Reference Proteome Extracts for Mass Spec Instrument Performance Validation and Method Development

    PubMed Central

    Rosenblatt, Mike; Urh, Marjeta; Saveliev, Sergei

    2014-01-01

    Biological samples of high complexity are required to test protein mass spec sample preparation procedures and validate mass spec instrument performance. Total cell protein extracts provide the needed sample complexity. However, to be compatible with mass spec applications, such extracts should meet a number of design requirements: compatibility with LC/MS (free of detergents, etc.); high protein integrity (minimal level of protein degradation and non-biological PTMs); compatibility with common sample preparation methods such as proteolysis, PTM enrichment and mass-tag labeling; and lot-to-lot reproducibility. Here we describe total protein extracts from yeast and human cells that meet the above criteria. Two extract formats have been developed: intact protein extracts, with primary use for sample preparation method development and optimization, and pre-digested extracts (peptides), with primary use for instrument validation and performance monitoring.

  16. 12 strategies for managing capital projects.

    PubMed

    Stoudt, Richard L

    2013-05-01

    To reduce the amount of time and cost associated with capital projects, healthcare leaders should: Begin the project with a clear objective and a concise master facilities plan. Select qualified team members who share the vision of the owner. Base the size of the project on a conservative business plan. Minimize incremental program requirements. Evaluate the cost impact of the building footprint. Consider alternative delivery methods.

  17. A Perceptual Repetition Blindness Effect

    NASA Technical Reports Server (NTRS)

    Hochhaus, Larry; Johnston, James C.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    Before concluding Repetition Blindness is a perceptual phenomenon, alternative explanations based on memory retrieval problems and report bias must be rejected. Memory problems were minimized by requiring a judgment about only a single briefly displayed field. Bias and sensitivity effects were empirically measured with an ROC-curve analysis method based on confidence ratings. Results from five experiments support the hypothesis that Repetition Blindness can be a perceptual phenomenon.

  18. A Minimally Invasive Method for Sampling Nest and Roost Cavities for Fungi: a Novel Approach to Identify the Fungi Associated with Cavity-Nesting Birds

    Treesearch

    Michelle A. Jusino; Daniel Lindner; John K. Cianchetti; Adam T. Grisé; Nicholas J. Brazee; Jeffrey R. Walters

    2014-01-01

    Relationships among cavity-nesting birds, trees, and wood decay fungi pose interesting management challenges and research questions in many systems. Ornithologists need to understand the relationships between cavity-nesting birds and fungi in order to understand the habitat requirements of these birds. Typically, researchers rely on fruiting body surveys to identify...

  19. The feasibility of recharge rate determinations using the steady- state centrifuge method

    USGS Publications Warehouse

    Nimmo, J.R.; Stonestrom, David A.; Akstin, K.C.

    1994-01-01

    The establishment of steady unsaturated flow in a centrifuge permits accurate measurement of small values of hydraulic conductivity (K). This method can provide a recharge determination if it is applied to an unsaturated core sample from a depth at which gravity alone drives the flow. A K value determined at the in situ water content indicates the long-term average recharge rate at a point. Tests of this approach have been made at two sites. For sandy core samples a better knowledge of the matric pressure profiles is required before a recharge rate can be determined. Fine-textured cores required new developments of apparatus and procedures, especially for making centrifuge measurements with minimal compaction of the samples. -from Authors

  20. OpenMM 7: Rapid development of high performance algorithms for molecular dynamics

    PubMed Central

    Swails, Jason; Zhao, Yutong; Beauchamp, Kyle A.; Wang, Lee-Ping; Stern, Chaya D.; Brooks, Bernard R.; Pande, Vijay S.

    2017-01-01

    OpenMM is a molecular dynamics simulation toolkit with a unique focus on extensibility. It allows users to easily add new features, including forces with novel functional forms, new integration algorithms, and new simulation protocols. Those features automatically work on all supported hardware types (including both CPUs and GPUs) and perform well on all of them. In many cases they require minimal coding, just a mathematical description of the desired function. They also require no modification to OpenMM itself and can be distributed independently of OpenMM. This makes it an ideal tool for researchers developing new simulation methods, and also allows those new methods to be immediately available to the larger community. PMID:28746339
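
    As a flavor of that "mathematical description only" extensibility, the sketch below defines a harmonic bond as a plain algebraic expression via OpenMM's CustomBondForce. In recent OpenMM releases the module is imported as openmm (older releases used simtk.openmm); the masses and force parameters are illustrative values, not from the paper.

      # Hedged sketch: a custom force specified purely as a string expression.
      import openmm as mm
      import openmm.unit as unit

      system = mm.System()
      system.addParticle(1.0)   # particle masses in amu
      system.addParticle(1.0)

      force = mm.CustomBondForce("0.5*k*(r-r0)^2")   # harmonic bond energy
      force.addPerBondParameter("k")                 # stiffness, kJ/mol/nm^2
      force.addPerBondParameter("r0")                # rest length, nm
      force.addBond(0, 1, [100.0, 0.15])
      system.addForce(force)

      integrator = mm.VerletIntegrator(0.002 * unit.picoseconds)
      context = mm.Context(system, integrator)
      context.setPositions([[0, 0, 0], [0.2, 0, 0]] * unit.nanometers)
      integrator.step(100)   # the same expression runs on CPUs and GPUs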

  1. A generic minimization random allocation and blinding system on web.

    PubMed

    Cai, Hongwei; Xia, Jielai; Xu, Dezhong; Gao, Donghuai; Yan, Yongping

    2006-12-01

    Minimization is a dynamic randomization method for clinical trials. Although recommended by many researchers, the utilization of minimization has seldom been reported in randomized trials, mainly because of the controversy surrounding the validity of conventional analyses and its complexity in implementation. However, both the statistical and clinical validity of minimization were demonstrated in recent studies. A minimization random allocation system integrated with a blinding function, which could facilitate the implementation of this method in general clinical trials, has not previously been reported. SYSTEM OVERVIEW: The system is a web-based random allocation system using the Pocock and Simon minimization method. It also supports multiple treatment arms within a trial, multiple simultaneous trials, and blinding without further programming. The system was constructed with a generic database schema design method, the Pocock and Simon minimization method, and a blinding method. It was coded in the Microsoft Visual Basic and Active Server Pages (ASP) programming languages, and all datasets were managed with a Microsoft SQL Server database. Some critical programming codes are also provided. SIMULATIONS AND RESULTS: Two clinical trials were simulated simultaneously to test the system's applicability. Not only balanced groups but also blinded allocation results were achieved in both trials. Practical considerations for the minimization method, and the benefits, general applicability and drawbacks of the technique implemented in this system, are discussed. Promising features of the proposed system are also summarized.
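
    The core allocation step of Pocock-Simon minimization is compact enough to sketch. The code below is our illustration, not the system's Visual Basic/ASP code: each new patient is assigned to the arm that minimizes marginal imbalance summed over prognostic factors, with a biased coin to preserve unpredictability. Arm and factor names are made up.

      # Hedged sketch of Pocock-Simon minimization.
      import random

      arms = ["A", "B"]
      factors = ["sex", "age_group", "site"]
      # counts[arm][factor][level] = patients already assigned
      counts = {a: {f: {} for f in factors} for a in arms}

      def assign(patient, p_best=0.8):
          """patient maps factor -> level, e.g. {'sex': 'F', 'age_group': '<50', 'site': '1'}"""
          scores = {}
          for arm in arms:
              score = 0
              for f in factors:
                  level = patient[f]
                  # hypothetical counts at this level if `arm` received the patient
                  hypo = [counts[a][f].get(level, 0) + (a == arm) for a in arms]
                  score += max(hypo) - min(hypo)   # range as the imbalance measure
              scores[arm] = score
          best = min(scores, key=scores.get)
          chosen = best if random.random() < p_best else random.choice(arms)
          for f in factors:
              level = patient[f]
              counts[chosen][f][level] = counts[chosen][f].get(level, 0) + 1
          return chosen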

  2. Determining minimal display element requirements for surface map displays

    DOT National Transportation Integrated Search

    2003-04-14

    There is a great deal of interest in developing electronic surface map displays to enhance safety and reduce incidents and incursions on or near the airport surface. There is a lack of research, however, detailing the minimal display elements require...

  3. What Does It Take to Change an Editor's Mind? Identifying Minimally Important Difference Thresholds for Peer Reviewer Rating Scores of Scientific Articles.

    PubMed

    Callaham, Michael; John, Leslie K

    2018-01-05

    We define a minimally important difference for the Likert-type scores frequently used in scientific peer review (similar to existing minimally important differences for scores in clinical medicine). The magnitude of score change required to change editorial decisions has not been studied, to our knowledge. Experienced editors at a journal in the top 6% by impact factor were asked how large a change of rating in "overall desirability for publication" was required to trigger a change in their initial decision on an article. Minimally important differences were assessed twice for each editor: once assessing the rating change required to shift the editor away from an initial decision to accept, and the other assessing the magnitude required to shift away from an initial rejection decision. Forty-one editors completed the survey (89% response rate). In the acceptance frame, the median minimally important difference was 0.4 points on a scale of 1 to 5. Editors required a greater rating change to shift from an initial rejection decision; in the rejection frame, the median minimally important difference was 1.2 points. Within each frame, there was considerable heterogeneity: in the acceptance frame, 38% of editors did not change their decision within the maximum available range; in the rejection frame, 51% did not. To our knowledge, this is the first study to determine the minimally important difference for Likert-type ratings of research article quality, or in fact any nonclinical scientific assessment variable. Our findings may be useful for future research assessing whether changes to the peer review process produce clinically meaningful differences in editorial decisionmaking. Copyright © 2017 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.

  4. Occult traumatic hemothorax: when can sleeping dogs lie?

    PubMed

    Bilello, John F; Davis, James W; Lemaster, Deborah M

    2005-12-01

    The size at which a traumatic occult hemothorax identified on admission requires drainage has not been defined. Computed axial tomography (CAT) may guide drainage criteria. A retrospective review of patients with hemothoraces on CAT was performed. Extrapolating previously described methods of pleural fluid measurement, hemothoraces were quantified using the fluid stripe in the dependent pleural "gutter." Data included patient age, injury severity, and intervention (thoracentesis or tube thoracostomy). Seventy-eight patients with 99 occult hemothoraces met the criteria for study inclusion: 52 hemothoraces qualified as "minimal" and 47 as "moderate/large." Eight patients (15%) in the minimal group and 31 patients (66%) in the moderate/large group underwent intervention (P < .001). There was no difference in patient age, injury severity, ventilator requirement, or presence of pulmonary contusion. CAT in stable blunt-trauma patients can predict which patients with occult hemothorax are likely to undergo intervention. Patients with hemothorax ≥ 1.5 cm on CAT were 4 times more likely to undergo drainage intervention compared with those having hemothorax < 1.5 cm.

  5. Renormalization of minimally doubled fermions

    NASA Astrophysics Data System (ADS)

    Capitani, Stefano; Creutz, Michael; Weber, Johannes; Wittig, Hartmut

    2010-09-01

    We investigate the renormalization properties of minimally doubled fermions, at one loop in perturbation theory. Our study is based on the two particular realizations of Boriçi-Creutz and Karsten-Wilczek. A common feature of both formulations is the breaking of hyper-cubic symmetry, which requires that the lattice actions are supplemented by suitable counterterms. We show that three counterterms are required in each case and determine their coefficients to one loop in perturbation theory. For both actions we compute the vacuum polarization of the gluon. It is shown that no power divergences appear and that all contributions which arise from the breaking of Lorentz symmetry are cancelled by the counterterms. We also derive the conserved vector and axial-vector currents for Karsten-Wilczek fermions. Like in the case of the previously studied Boriçi-Creutz action, one obtains simple expressions, involving only nearest-neighbour sites. We suggest methods how to fix the coefficients of the counterterms non-perturbatively and discuss the implications of our findings for practical simulations.

  6. Multi-disciplinary optimization of aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1990-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  7. Multidisciplinary optimization of aeroservoelastic systems using reduced-size models

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1992-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  8. Robust Control Design for Systems With Probabilistic Uncertainty

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a reliability- and robustness-based formulation for robust control synthesis for systems with probabilistic uncertainty. In a reliability-based formulation, the probability of violating design requirements prescribed by inequality constraints is minimized. In a robustness-based formulation, a metric which measures the tendency of a random variable/process to cluster close to a target scalar/function is minimized. A multi-objective optimization procedure, which combines stability and performance requirements in time and frequency domains, is used to search for robustly optimal compensators. Some of the fundamental differences between the proposed strategy and conventional robust control methods are: (i) unnecessary conservatism is eliminated since there is no need for convex supports, (ii) the most likely plants are favored during synthesis allowing for probabilistic robust optimality, (iii) the tradeoff between robust stability and robust performance can be explored numerically, (iv) the uncertainty set is closely related to parameters with clear physical meaning, and (v) compensators with improved robust characteristics for a given control structure can be synthesized.
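
    A toy version of the reliability-based half of the formulation can be written as a Monte Carlo estimate of the violation probability swept over a design variable. Everything below (the scalar gain, the Gaussian plant uncertainty, and the requirement itself) is an invented illustration, not the paper's algorithm.

      # Illustrative only: minimize the estimated probability of violating a
      # requirement over a scalar design gain k.
      import numpy as np

      rng = np.random.default_rng(1)
      theta = rng.normal(1.0, 0.2, size=5000)   # samples of an uncertain plant parameter

      def violation_prob(k):
          return np.mean(theta / k >= 1.0)      # toy requirement: theta/k must stay below 1

      ks = np.linspace(0.5, 3.0, 101)           # candidate designs
      probs = np.array([violation_prob(k) for k in ks])
      k_star = ks[int(np.argmin(probs))]        # reliability-optimal design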

  9. Chemical Detection and Identification Techniques for Exobiology Flight Experiments

    NASA Technical Reports Server (NTRS)

    Kojiro, Daniel R.; Sheverev, Valery A.; Khromov, Nikolai A.

    2002-01-01

    Exobiology flight experiments require highly sensitive instrumentation for in situ analysis of the volatile chemical species that occur in the atmospheres and surfaces of various bodies within the solar system. The complex mixtures encountered place a heavy burden on the analytical Instrumentation to detect and identify all species present. The minimal resources available onboard for such missions mandate that the instruments provide maximum analytical capabilities with minimal requirements of volume, weight and consumables. Advances in technology may be achieved by increasing the amount of information acquired by a given technique with greater analytical capabilities and miniaturization of proven terrestrial technology. We describe here methods to develop analytical instruments for the detection and identification of a wide range of chemical species using Gas Chromatography. These efforts to expand the analytical capabilities of GC technology are focused on the development of detectors for the GC which provide sample identification independent of the GC retention time data. A novel new approach employs Penning Ionization Electron Spectroscopy (PIES).

  10. Novel Data Reduction Based on Statistical Similarity

    DOE PAGES

    Lee, Dongeun; Sim, Alex; Choi, Jaesik; ...

    2016-07-18

    Applications such as scientific simulations and power grid monitoring are generating so much data so quickly that compression is essential to reduce storage requirement or transmission capacity. To achieve better compression, one is often willing to discard some repeated information. These lossy compression methods are primarily designed to minimize the Euclidean distance between the original data and the compressed data. But this measure of distance severely limits either reconstruction quality or compression performance. In this paper, we propose a new class of compression method by redefining the distance measure with a statistical concept known as exchangeability. This approach captures essential features of the data while reducing the storage requirement. We report our design and implementation of such a compression method named IDEALEM. To demonstrate its effectiveness, we apply it on a set of power grid monitoring data, and show that it can reduce the volume of data much more than the best known compression method while maintaining the quality of the compressed data. Finally, in these tests, IDEALEM captures extraordinary events in the data, while its compression ratios can far exceed 100.
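
    The exchangeability idea can be illustrated with a hedged sketch (IDEALEM's actual design is more sophisticated): keep a dictionary of representative blocks and, for each new block, store only an index when a two-sample test cannot distinguish it from a stored block. Block size, test, and threshold below are our choices.

      # Hedged sketch of similarity-based lossy compression; parameters invented.
      import numpy as np
      from scipy.stats import ks_2samp

      def compress(signal, block=64, alpha=0.05):
          dictionary, indices = [], []
          for start in range(0, len(signal) - block + 1, block):
              b = signal[start:start + block]
              for i, ref in enumerate(dictionary):
                  if ks_2samp(b, ref).pvalue > alpha:   # statistically similar
                      indices.append(i)                 # reuse the stored block
                      break
              else:
                  indices.append(len(dictionary))       # store a new representative
                  dictionary.append(b)
          return dictionary, indices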

  11. The OPTIMIST-A trial: evaluation of minimally-invasive surfactant therapy in preterm infants 25–28 weeks gestation

    PubMed Central

    2014-01-01

    Background It is now recognized that preterm infants ≤28 weeks gestation can be effectively supported from the outset with nasal continuous positive airway pressure. However, this form of respiratory therapy may fail to adequately support those infants with significant surfactant deficiency, with the result that intubation and delayed surfactant therapy are then required. Infants following this path are known to have a higher risk of adverse outcomes, including death, bronchopulmonary dysplasia and other morbidities. In an effort to circumvent this problem, techniques of minimally-invasive surfactant therapy have been developed, in which exogenous surfactant is administered to a spontaneously breathing infant who can then remain on continuous positive airway pressure. A method of surfactant delivery using a semi-rigid surfactant instillation catheter briefly passed into the trachea (the “Hobart method”) has been shown to be feasible and potentially effective, and now requires evaluation in a randomised controlled trial. Methods/design This is a multicentre, randomised, masked, controlled trial in preterm infants 25–28 weeks gestation. Infants are eligible if managed on continuous positive airway pressure without prior intubation, and requiring FiO2 ≥ 0.30 at an age ≤6 hours. Randomisation will be to receive exogenous surfactant (200 mg/kg poractant alfa) via the Hobart method, or sham treatment. Infants in both groups will thereafter remain on continuous positive airway pressure unless intubation criteria are reached (FiO2 ≥ 0.45, unremitting apnoea or persistent acidosis). Primary outcome is the composite of death or physiological bronchopulmonary dysplasia, with secondary outcomes including incidence of death; major neonatal morbidities; durations of all modes of respiratory support and hospitalisation; safety of the Hobart method; and outcome at 2 years. A total of 606 infants will be enrolled. The trial will be conducted in >30 centres worldwide, and is expected to be completed by end-2017. Discussion Minimally-invasive surfactant therapy has the potential to ease the burden of respiratory morbidity in preterm infants. The trial will provide definitive evidence on the effectiveness of this approach in the care of preterm infants born at 25–28 weeks gestation. Trial registration Australia and New Zealand Clinical Trial Registry: ACTRN12611000916943; ClinicalTrials.gov: NCT02140580. PMID:25164872

  12. A global parallel model based design of experiments method to minimize model output uncertainty.

    PubMed

    Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E

    2012-03-01

    Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.

  13. Rapid fabrication of microneedles using magnetorheological drawing lithography.

    PubMed

    Chen, Zhipeng; Ren, Lei; Li, Jiyu; Yao, Lebin; Chen, Yan; Liu, Bin; Jiang, Lelun

    2018-01-01

    Microneedles are micron-sized needles that are widely applied in biomedical fields owing to their painless, minimally invasive, and convenient operation. However, most microneedle fabrication approaches are costly, time consuming, involve multiple steps, and require expensive equipment; moreover, most researchers have focused on the biomedical applications of microneedles while giving little attention to optimizing the fabrication process. In this study, we present a novel magnetorheological drawing lithography (MRDL) method to efficiently fabricate microneedles, bio-inspired microneedles, and molding-free microneedle arrays. With the assistance of an external magnetic field, the 3D structure of a microneedle can be drawn directly from a droplet of curable magnetorheological fluid (CMRF) on almost any substrate. The formation process of a microneedle consists of two key stages, elasto-capillary self-thinning and magneto-capillary self-shrinking, which greatly affect the microneedle height and tip radius. Penetration and fracture tests demonstrated that the microneedle had sufficient strength and toughness for skin penetration. Microneedle arrays and a bio-inspired microneedle were also fabricated, further demonstrating the versatility and flexibility of the MRDL method. This method not only inherits the advantages of the thermal drawing approach, requiring neither a mask nor light irradiation, but also eliminates the need for drawing temperature adjustment. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  14. Divergence Free High Order Filter Methods for Multiscale Non-ideal MHD Flows

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern

    2003-01-01

    Low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous MHD flows have been constructed. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way to minimize the numerical error in the divergence of the magnetic field (∇ · B), in the sense that no standard divergence cleaning is required. For certain 2-D MHD test problems, divergence free preservation of the magnetic fields of these filter schemes has been achieved.

  15. High Order Filter Methods for the Non-ideal Compressible MHD Equations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern

    2003-01-01

    The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain non-ideal MHD test cases, divergence free preservation of the magnetic fields has been achieved.

  16. Cryogenic fluid management for low-g transfer

    NASA Technical Reports Server (NTRS)

    Frank, D. J.; Jaekle, D. E., Jr.

    1986-01-01

    An account is given of design and operations criteria pertaining to low-g environment systems for the collection and delivery of liquid cryogens to a supply tank drain inlet in orbit. Analyses must assess the draining efficiencies of such devices, because the minimization of supply tank residual contents is of the essence. Settling accelerations, passive expulsion, and positive expulsion methods of fluid control have all been successfully demonstrated in orbit. Attention is given to the unique advantages and disadvantages of each method in view of different sets of requirements.

  17. Powder Metallurgy Reconditioning of Food and Processing Equipment Components

    NASA Astrophysics Data System (ADS)

    Nafikov, M. Z.; Aipov, R. S.; Konnov, A. Yu.

    2017-12-01

    A powder metallurgy method is developed to recondition the worn surfaces of food and processing equipment components. A combined additive is composed to minimize the powder losses in sintering. A technique is constructed to determine the powder consumption as a function of the required metallic coating thickness. A rapid method is developed to determine the porosity of the coating. The proposed technology is used to fabricate a wear-resistant defectless metallic coating with favorable residual stresses, and the adhesive strength of this coating is equal to the strength of the base metal.
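
    The powder-consumption relation itself is not given in this record; a plausible mass-balance form, with all symbols being our assumptions (coating area A, required thickness h, solid-metal density ρ, coating porosity φ, and powder utilization factor η in sintering), would be:

      % Hedged mass balance; symbols are assumptions, not the authors' notation.
      m_{\text{powder}} = \frac{\rho \, A \, h \, (1 - \varphi)}{\eta}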

  18. Divergence Free High Order Filter Methods for the Compressible MHD Equations

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern

    2003-01-01

    The generalization of a class of low-dissipative high order filter finite difference methods for long time wave propagation of shock/turbulence/combustion compressible viscous gas dynamic flows to compressible MHD equations for structured curvilinear grids has been achieved. The new scheme is shown to provide a natural and efficient way for the minimization of the divergence of the magnetic field numerical error. Standard divergence cleaning is not required by the present filter approach. For certain MHD test cases, divergence free preservation of the magnetic fields has been achieved.

  19. 5 CFR 582.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... accompany legal process. 582.203 Section 582.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS COMMERCIAL GARNISHMENT OF FEDERAL EMPLOYEES' PAY Service of Legal Process § 582.203 Information minimally required to accompany legal process. (a) Sufficient identifying...

  20. 5 CFR 582.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... accompany legal process. 582.203 Section 582.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS COMMERCIAL GARNISHMENT OF FEDERAL EMPLOYEES' PAY Service of Legal Process § 582.203 Information minimally required to accompany legal process. (a) Sufficient identifying...

  1. 5 CFR 582.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... accompany legal process. 582.203 Section 582.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS COMMERCIAL GARNISHMENT OF FEDERAL EMPLOYEES' PAY Service of Legal Process § 582.203 Information minimally required to accompany legal process. (a) Sufficient identifying...

  2. 5 CFR 582.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... accompany legal process. 582.203 Section 582.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS COMMERCIAL GARNISHMENT OF FEDERAL EMPLOYEES' PAY Service of Legal Process § 582.203 Information minimally required to accompany legal process. (a) Sufficient identifying...

  3. 5 CFR 582.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... accompany legal process. 582.203 Section 582.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS COMMERCIAL GARNISHMENT OF FEDERAL EMPLOYEES' PAY Service of Legal Process § 582.203 Information minimally required to accompany legal process. (a) Sufficient identifying...

  4. Determining the Minimal Required Radioactivity of 18F-FDG for Reliable Semiquantification in PET/CT Imaging: A Phantom Study.

    PubMed

    Chen, Ming-Kai; Menard, David H; Cheng, David W

    2016-03-01

    In pursuit of as-low-as-reasonably-achievable (ALARA) doses, this study investigated the minimal required radioactivity and corresponding imaging time for reliable semiquantification in PET/CT imaging. Using a phantom containing spheres of various diameters (3.4, 2.1, 1.5, 1.2, and 1.0 cm) filled with a fixed (18)F-FDG concentration of 165 kBq/mL and a background concentration of 23.3 kBq/mL, we performed PET/CT at multiple time points over 20 h of radioactive decay. The images were acquired for 10 min at a single bed position for each of 10 half-lives of decay using 3-dimensional list mode and were reconstructed into 1-, 2-, 3-, 4-, 5-, and 10-min acquisitions per bed position using an ordered-subsets expectation maximization algorithm with 24 subsets and 2 iterations and a 2-mm Gaussian filter. SUVmax and SUVavg were measured for each sphere. The minimal required activity (±10%) for precise SUVmax semiquantification in the spheres was 1.8 kBq/mL for an acquisition of 10 min, 3.7 kBq/mL for 3-5 min, 7.9 kBq/mL for 2 min, and 17.4 kBq/mL for 1 min. The minimal required activity concentration-acquisition time product per bed position was 10-15 kBq/mL⋅min for reproducible SUV measurements within the spheres without overestimation. Using the total radioactivity and counting rate from the entire phantom, we found that the minimal required total activity-time product was 17 MBq⋅min and the minimal required counting rate-time product was 100 kcps⋅min. Our phantom study determined a threshold for minimal radioactivity and acquisition time for precise semiquantification in (18)F-FDG PET imaging that can serve as a guide in pursuit of achieving ALARA doses. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
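
    The reported threshold translates into a one-line calculator. The helper below (our naming and interface, not from the paper) uses the upper end of the reported 10-15 kBq/mL⋅min activity concentration-acquisition time product:

      # Simple arithmetic helper based on the reported threshold product.
      def min_activity_concentration(acq_minutes, product_kbq_ml_min=15.0):
          """Approximate minimal activity concentration (kBq/mL) for a given
          per-bed acquisition time, from concentration x time >= ~10-15 kBq/mL*min."""
          return product_kbq_ml_min / acq_minutes

      for t in (1, 2, 3, 5, 10):
          print(t, "min ->", min_activity_concentration(t), "kBq/mL")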

  5. Aerobic conditioning for team sport athletes.

    PubMed

    Stone, Nicholas M; Kilding, Andrew E

    2009-01-01

    Team sport athletes require a high level of aerobic fitness in order to generate and maintain power output during repeated high-intensity efforts and to recover. Research to date suggests that these components can be increased by regularly performing aerobic conditioning. Traditional aerobic conditioning, with minimal changes of direction and no skill component, has been demonstrated to effectively increase aerobic function within a 4- to 10-week period in team sport players. More importantly, traditional aerobic conditioning methods have been shown to increase team sport performance substantially. Many team sports require the upkeep of both aerobic fitness and sport-specific skills during a lengthy competitive season. Classic team sport training, however, has been shown to evoke only marginal increases or decreases in aerobic fitness. In recent years, aerobic conditioning methods have been designed to allow adequate intensities to be achieved to induce improvements in aerobic fitness whilst incorporating movement-specific and skill-specific tasks, e.g. small-sided games and dribbling circuits. Such 'sport-specific' conditioning methods have been demonstrated to promote increases in aerobic fitness, though careful consideration of player skill levels, current fitness, player numbers, field dimensions, game rules and availability of player encouragement is required. Whilst different conditioning methods appear equivalent in their ability to improve fitness, whether sport-specific conditioning is superior to other methods at improving actual game performance statistics requires further research.

  6. Resource Balancing Control Allocation

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Bodson, Marc

    2010-01-01

    Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the control effort. The paper discusses the alternative choice of using the l1 norm for minimization of the tracking error and a normalized l(infinity) norm, or sup norm, for minimization of the control effort. The algorithm computes the norm of the actuator deflections scaled by the actuator limits. Minimization of the control effort then translates into the minimization of the maximum actuator deflection as a percentage of its range of motion. The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are investigated through examples. In particular, the min-max criterion results in a type of resource balancing, where the resources are the control surfaces and the algorithm balances these resources to achieve the desired command. A study of the sensitivity of the algorithms to the data is presented, which shows that the normalized l(infinity) algorithm has the lowest sensitivity, although high sensitivities are observed whenever the limits of performance are reached.
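
    The min-max allocation converts to a linear program by introducing a slack variable t bounding every scaled deflection. The sketch below shows the conversion with a toy effectiveness matrix and limits, solved with scipy rather than the authors' simplex code; all numeric values are invented.

      # Sketch: minimize t s.t. B u = d and |u_i| <= t * limit_i.
      import numpy as np
      from scipy.optimize import linprog

      B = np.array([[1.0, 0.5, -0.5, 1.0],     # actuator effectiveness on axis 1
                    [0.2, -1.0, 1.0, 0.3]])    # and on axis 2
      d = np.array([0.8, -0.3])                # commanded moments
      limits = np.array([1.0, 0.5, 0.5, 1.0])  # actuator deflection limits

      n = B.shape[1]
      c = np.zeros(n + 1); c[-1] = 1.0                    # objective: minimize t
      A_eq = np.hstack([B, np.zeros((2, 1))]); b_eq = d   # exact tracking: B u = d
      A_ub = np.vstack([np.hstack([ np.eye(n), -limits[:, None]]),
                        np.hstack([-np.eye(n), -limits[:, None]])])
      b_ub = np.zeros(2 * n)                              # encodes |u_i| <= t*limit_i
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                    bounds=[(None, None)] * n + [(0, None)])
      u = res.x[:n]   # balanced commands; res.x[-1] is the max used fraction of any limit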

  7. Determination of real machine-tool settings and minimization of real surface deviation by computerized inspection

    NASA Technical Reports Server (NTRS)

    Litvin, Faydor L.; Kuan, Chihping; Zhang, YI

    1991-01-01

    A numerical method is developed for the minimization of deviations of real tooth surfaces from the theoretical ones. The deviations are caused by errors of manufacturing, errors of installment of machine-tool settings and distortion of surfaces by heat-treatment. The deviations are determined by coordinate measurements of gear tooth surfaces. The minimization of deviations is based on the proper correction of initially applied machine-tool settings. The contents of accomplished research project cover the following topics: (1) Descriptions of the principle of coordinate measurements of gear tooth surfaces; (2) Deviation of theoretical tooth surfaces (with examples of surfaces of hypoid gears and references for spiral bevel gears); (3) Determination of the reference point and the grid; (4) Determination of the deviations of real tooth surfaces at the points of the grid; and (5) Determination of required corrections of machine-tool settings for minimization of deviations. The procedure for minimization of deviations is based on numerical solution of an overdetermined system of n linear equations in m unknowns (m much less than n ), where n is the number of points of measurements and m is the number of parameters of applied machine-tool settings to be corrected. The developed approach is illustrated with numerical examples.
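
    The correction step reduces to an ordinary least-squares solve of an overdetermined linear system. In the sketch below, J and r are generic stand-ins for the sensitivity matrix and the measured deviations (the paper's actual matrices come from the gear geometry model):

      # Generic sketch: solve J * delta = -r in the least-squares sense, where r
      # holds measured surface deviations at n grid points and J the sensitivities
      # to the m machine-tool settings (m << n).
      import numpy as np

      rng = np.random.default_rng(2)
      n, m = 45, 6                        # e.g. 45 grid points, 6 settings
      J = rng.normal(size=(n, m))         # d(deviation)/d(setting), stand-in values
      r = rng.normal(scale=0.01, size=n)  # measured deviations, stand-in values

      delta, *_ = np.linalg.lstsq(J, -r, rcond=None)  # corrections to settings
      residual = r + J @ delta                        # deviations after correction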

  8. Migration without migraines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lines, L.; Burton, A.; Lu, H.X.

    Accurate velocity models are a necessity for reliable migration results. Velocity analysis generally involves the use of methods such as normal moveout analysis (NMO), seismic traveltime tomography, or iterative prestack migration. These techniques can be effective, and each has its own advantage or disadvantage. Conventional NMO methods are relatively inexpensive but basically require simplifying assumptions about geology. Tomography is a more general method but requires traveltime interpretation of prestack data. Iterative prestack depth migration is very general but is computationally expensive. In some cases, there is the opportunity to estimate vertical velocities by use of well information. The well information can be used to optimize poststack migrations, thereby eliminating some of the time and expense of iterative prestack migration. The optimized poststack migration procedure defined here computes the velocity model which minimizes the depth differences between seismic images and formation depths at the well by using a least squares inversion method. The optimization methods described in this paper will hopefully produce "migrations without migraines."

  9. Space Radiation Transport Methods Development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design practice enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 milliseconds and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing and could be utilized in the final design as verification of the deterministic method optimized design.

  10. Fermion-to-qubit mappings with varying resource requirements for quantum simulation

    NASA Astrophysics Data System (ADS)

    Steudtner, Mark; Wehner, Stephanie

    2018-06-01

    The mapping of fermionic states onto qubit states, as well as the mapping of fermionic Hamiltonians into quantum gates, enables us to simulate electronic systems with a quantum computer. Quantum simulation, which benefits the understanding of many-body systems in chemistry and physics, is one of the great promises of the coming age of quantum computers. Interestingly, the minimal number of qubits required for simulating fermions seems to be agnostic of the actual number of particles as well as other symmetries. This leads to qubit requirements that are well above the minimal requirements suggested by combinatorial considerations. In this work, we develop methods that allow us to trade off qubit requirements against the complexity of the resulting quantum circuit. We first show that any classical code used to map the state of a fermionic Fock space to qubits gives rise to a mapping of fermionic models to quantum gates. As an illustrative example, we present a mapping based on a nonlinear classical error correcting code, which leads to significant qubit savings albeit at the expense of additional quantum gates. We proceed to use this framework to present a number of simpler mappings that lead to qubit savings with a more modest increase in gate difficulty. We discuss the role of symmetries such as particle conservation, and savings that could be obtained if an experimental platform could easily realize multi-controlled gates.
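
    For orientation, the baseline such mappings are measured against is the standard Jordan-Wigner transform, shown below in one common sign convention (the paper's code-based constructions generalize beyond it):

      % Standard Jordan-Wigner mapping (one common convention), for reference:
      a_j \;\mapsto\; \Big(\prod_{k<j} Z_k\Big) \frac{X_j + i\,Y_j}{2},
      \qquad
      a_j^{\dagger} \;\mapsto\; \Big(\prod_{k<j} Z_k\Big) \frac{X_j - i\,Y_j}{2}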

  11. Evaluation of a High Intensity Focused Ultrasound-Immobilized Trypsin Digestion and 18O-Labeling Method for Quantitative Proteomics

    PubMed Central

    López-Ferrer, Daniel; Hixson, Kim K.; Smallwood, Heather; Squier, Thomas C.; Petritis, Konstantinos; Smith, Richard D.

    2009-01-01

    A new method that uses immobilized trypsin concomitant with ultrasonic irradiation results in ultra-rapid digestion and thorough 18O labeling for quantitative protein comparisons. The reproducible and highly efficient method provided effective digestions in <1 min with a minimized amount of enzyme required compared to traditional methods. This method was demonstrated for digestion of both simple and complex protein mixtures, including bovine serum albumin, a global proteome extract from the bacteria Shewanella oneidensis, and mouse plasma, as well as 18O labeling of such complex protein mixtures, which validated the application of this method for differential proteomic measurements. This approach is simple, reproducible, cost effective, rapid, and thus well-suited for automation. PMID:19555078

  12. Split-shot sinker facilitates seton treatment of anal fistulae.

    PubMed

    Awad, M L; Sell, H W; Stahlfeld, K R

    2009-06-01

    The cutting seton is an inexpensive and effective method of treating high complex perianal fistulae. Following placement of the seton, advancement through the external sphincter muscles requires progressive tightening of the seton. The requirement for maintaining the appropriate tension and onset of perianal pressure necrosis are problems frequently encountered using this technique. Using a 3-0 polypropylene suture, a red-rubber catheter, and a nontoxic tin split-shot sinker, we minimized or eliminated these problems. We initially used this technique in one patient with satisfactory results. This technique is technically easy, safe, inexpensive, and efficient, and we are using it in all patients with high perianal fistulae who require a seton.

  13. Chronic Morel-Lavallée Lesion: A Novel Minimally Invasive Method of Treatment.

    PubMed

    Mettu, Ramireddy; Surath, Harsha Vardhan; Chayam, Hanumantha Rao; Surath, Amaranth

    2016-11-01

    A Morel-Lavallée lesion is a closed internal degloving injury resulting from a shearing force applied to the skin. The etiology of this condition may be motor vehicle accidents, falls, contact sports (ie, football, wrestling),1 and iatrogenic after mammoplasty or abdominal liposuction.2 Common sites of the lesions include the pelvis and/or thigh.3 Isolated Morel-Lavallée lesions without underlying fracture are likely to be missed, which result in chronicity. Management of this condition often requires extensive surgical procedures such as debridement, sclerotherapy, serial percutaneous drainage, negative pressure wound therapy (NPWT), and skin grafting.4,5 The authors wish to highlight a minimally invasive technique for the treatment of chronic Morel-Lavallée lesions.

  14. Review of manual control methods for handheld maneuverable instruments.

    PubMed

    Fan, Chunman; Dodou, Dimitra; Breedveld, Paul

    2013-06-01

    By the introduction of new technologies, surgical procedures have been varying from free access in open surgery towards limited access in minimal access surgery. Improving access to difficult-to-reach anatomic sites, e.g. in neurosurgery or percutaneous interventions, needs advanced maneuverable instrumentation. Advances in maneuverable technology require the development of dedicated methods enabling surgeons to stay in direct, manual control of these complex instruments. This article gives an overview of the state-of-the-art in the development of manual control methods for handheld maneuverable instruments. It categorizes the manual control methods in three levels: a) number of steerable segments, b) number of Degrees Of Freedom (DOF), and c) coupling between control motion of the handle and steering motion of the tip. The literature research was completed by using Web of Science, Scopus and PubMed. The study shows that in controlling single steerable segments, direct as well as indirect control methods have been developed, whereas in controlling multiple steerable segments, a gradual shift can be noticed from parallel and serial control to integrated control. The development of multi-segmented maneuverable instruments is still at an early stage, and an intuitive and effective method to control them has to become a primary focus in the domain of minimal access surgery.

  15. Medical versus surgical abortion methods for pregnancy in China: a cost-minimization analysis.

    PubMed

    Xia, Wei; She, Shouzhang; Lam, Tai Hing

    2011-01-01

    Both medical and surgical abortions are popular in developing countries. However, the monetary costs of these two methods have not been compared. 430 women seeking abortions were recruited in 2008. Either a medical or surgical method was used for the abortion. We adopted the perspective of a third-party payer. Cost-minimization analysis was used based on all charges for the overall procedures in an out-patient clinic in Guangzhou, China. 219 subjects (51%) chose a medical method (mifepristone and misoprostol), whereas 211 subjects (49%) chose a surgical method. The efficacy in the surgical group was significantly higher than in the medical group (100 vs. 90%, p < 0.001). Surgical abortion incurred much more costs than medical abortion on average after initial treatment. When the subsequent costs were accumulated within the 2-week follow-up, the mean total cost in the medical group increased significantly due to failure of abortion and persistent bleeding. Patients undergoing medical abortion eventually incurred equivalent expenses compared to patients undergoing surgical abortion (p = 0.42). There was no difference in the mean final costs between the two abortion methods. Complications of persistent bleeding and failure to abort (requiring surgical intervention) in the medical treatment group increased the final mean total cost substantially. Copyright © 2011 S. Karger AG, Basel.

  16. FFTF disposable solid waste cask

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomson, J. D.; Goetsch, S. D.

    1983-01-01

    Disposal of radioactive waste from the Fast Flux Test Facility (FFTF) will utilize a Disposable Solid Waste Cask (DSWC) for the transport and burial of irradiated stainless steel and inconel materials. Retrievability, coupled with the desire for minimal facilities and labor costs at the disposal site, identified the need for the DSWC. Design requirements for this system were patterned after Type B packages as outlined in 10 CFR 71, with a few exceptions based on site and payload requirements. A summary of the design basis, supporting analytical methods and fabrication practices developed to deploy the DSWC is provided in this paper.

  17. A minimalist biosensor: Quantitation of cyclic di-GMP using the conformational change of a riboswitch aptamer.

    PubMed

    Kellenberger, Colleen A; Sales-Lee, Jade; Pan, Yuchen; Gassaway, Madalee M; Herr, Amy E; Hammond, Ming C

    2015-01-01

    Cyclic di-GMP (c-di-GMP) is a second messenger that is important in regulating bacterial physiology and behavior, including motility and virulence. Many questions remain about the role and regulation of this signaling molecule, but current methods of detection are limited by either modest sensitivity or requirements for extensive sample purification. We have taken advantage of a natural, high affinity receptor of c-di-GMP, the Vc2 riboswitch aptamer, to develop a sensitive and rapid electrophoretic mobility shift assay (EMSA) for c-di-GMP quantitation that required minimal engineering of the RNA.

  18. Tilt-tuned etalon locking for tunable laser stabilization.

    PubMed

    Gibson, Bradley M; McCall, Benjamin J

    2015-06-15

    Locking to a fringe of a tilt-tuned etalon provides a simple, inexpensive method for stabilizing tunable lasers. Here, we describe the use of such a system to stabilize an external-cavity quantum cascade laser; the locked laser has an Allan deviation of approximately 1 MHz over a one-second integration period, and has a single-scan tuning range of approximately 0.4  cm(-1). The system is robust, with minimal alignment requirements and automated lock acquisition, and can be easily adapted to different wavelength regions or more stringent stability requirements with minor alterations.
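
    The tuning mechanism rests on the textbook etalon resonance condition (standard optics, not specific to this work): for refractive index n, etalon thickness L, and internal angle θ, a transmission fringe of order m satisfies

      % Etalon fringe condition; tilting increases \theta and shifts the fringe.
      m\,\lambda = 2\,n\,L\,\cos\theta

    so a small tilt shifts the fringe in wavelength, moving the lock point without any change to the laser cavity itself.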

  19. A Protective Eye Shield for Prevention of Media Opacities during Small Animal Ocular Imaging

    PubMed Central

    Bell, Brent A.; Kaul, Charles; Hollyfield, Joe G.

    2014-01-01

    Optical coherence tomography (OCT), scanning laser ophthalmoscopy (SLO) and other non-invasive imaging techniques are increasingly used in eye research to document disease-related changes in rodent eyes. Corneal dehydration is a major contributor to the formation of ocular opacities that can limit the repeated application of these techniques to individual animals. General anesthesia is usually required for imaging, which is accompanied by the loss of the blink reflex. As a consequence, the tear film cannot be maintained, drying occurs and the cornea becomes dehydrated. Without supplemental hydration, structural damage to the cornea quickly follows. Soon thereafter, anterior lens opacities can also develop. Collectively these changes ultimately compromise image quality, especially for studies involving repeated use of the same animal over several weeks or months. To minimize these changes, a protective shield was designed for mice and rats that prevent ocular dehydration during anesthesia. The eye shield, along with a semi-viscous ophthalmic solution, is placed over the corneas as soon as the anesthesia immobilizes the animal. Eye shields are removed for only the brief periods required for imaging and then reapplied before the fellow eye is examined. As a result, the corneal surface of each eye is exposed only for the time required for imaging. The device and detailed methods described here minimize the corneal and lens changes associated with ocular surface desiccation. When these methods are used consistently, high quality images can be obtained repeatedly from individual animals. PMID:25245081

  20. A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.

    PubMed

    Rodrigo, Marianito R

    2016-01-01

    The problem of time of death (TOD) estimation by body cooling is revisited by proposing a nonlinear least squares approach that takes as input a series of temperature readings only. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods that are used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. Then, an explicit formula for the TOD is given. Results of numerical simulations using both theoretical and experimental data are presented, both yielding reasonable estimates. The proposed procedure does not require knowledge of the temperature at death nor the body mass. In fact, the method allows the estimation of the temperature at death once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more measured readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use. © 2015 American Academy of Forensic Sciences.
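
    A hedged sketch of the fitting step follows, using a generic double-exponential cooling law in place of the exact Marshall-Hoare reformulation; the readings are synthetic and all parameter values are invented.

      # Illustrative nonlinear least-squares fit of a double-exponential cooling
      # model; this parameterization is a stand-in, not the paper's exact formula.
      import numpy as np
      from scipy.optimize import curve_fit

      T_ambient = 20.0
      def cooling(t, A, p, B, Z):
          return T_ambient + A * np.exp(-p * t) + B * np.exp(-Z * t)

      rng = np.random.default_rng(3)
      t_obs = np.arange(0.0, 2.25, 0.25)                 # hours since first reading
      T_obs = cooling(t_obs, 7.0, 0.08, 6.0, 0.9) \
              + rng.normal(0.0, 0.05, t_obs.size)        # synthetic temperature readings

      params, _ = curve_fit(cooling, t_obs, T_obs, p0=[7.0, 0.1, 6.0, 1.0])
      # Extrapolate backwards to where the fitted curve reaches 37 C to estimate TOD.
      t_grid = np.linspace(-12.0, 0.0, 2001)
      tod = t_grid[int(np.argmin(np.abs(cooling(t_grid, *params) - 37.0)))]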

  1. Process for laser machining and surface treatment

    DOEpatents

    Neil, George R.; Shinn, Michelle D.

    2004-10-26

    An improved method and apparatus increasing the accuracy and reducing the time required to machine materials, surface treat materials, and allow better control of defects such as particulates in pulsed laser deposition. The speed and quality of machining is improved by combining an ultrashort pulsed laser at high average power with a continuous wave laser. The ultrashort pulsed laser provides an initial ultrashort pulse, on the order of several hundred femtoseconds, to stimulate an electron avalanche in the target material. Coincident with the ultrashort pulse or shortly after it, a pulse from a continuous wave laser is applied to the target. The micromachining method and apparatus creates an initial ultrashort laser pulse to ignite the ablation followed by a longer laser pulse to sustain and enlarge on the ablation effect launched in the initial pulse. The pulse pairs are repeated at a high pulse repetition frequency and as often as desired to produce the desired micromachining effect. The micromachining method enables a lower threshold for ablation, provides more deterministic damage, minimizes the heat affected zone, minimizes cracking or melting, and reduces the time involved to create the desired machining effect.

  2. Optimal network modification for spectral radius dependent phase transitions

    NASA Astrophysics Data System (ADS)

    Rosen, Yonatan; Kirsch, Lior; Louzoun, Yoram

    2016-09-01

    The dynamics of contact processes on networks is often determined by the spectral radius of the networks' adjacency matrices. A decrease of the spectral radius can prevent the outbreak of an epidemic, or impact the synchronization among systems of coupled oscillators. The spectral radius is thus tightly linked to network dynamics and function. As such, finding the minimal change in network structure necessary to reach the intended spectral radius is important theoretically and practically. Given contemporary big data resources such as large scale communication or social networks, this problem should be solved with a low runtime complexity. We introduce a novel method for finding the minimal decrease in edge weights required to reach a given spectral radius. The problem is formulated as a convex optimization problem, where a global optimum is guaranteed. The method can be easily adjusted to an efficient discrete removal of edges. We introduce a variant of the method which finds the optimal decrease with a focus on weights of vertices. The proposed algorithm is exceptionally scalable, solving the problem for real networks of tens of millions of edges in a short time.
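
    As a minimal companion (ours, not the paper's convex program), the spectral radius can be tracked with power iteration while candidate edge weights are decreased:

      # Helper sketch: spectral radius via power iteration, used to check the
      # effect of down-weighting an edge (toy triangle graph).
      import numpy as np

      def spectral_radius(A, iters=500):
          v = np.ones(A.shape[0])
          for _ in range(iters):
              w = A @ v
              v = w / np.linalg.norm(w)
          return float(v @ (A @ v))        # Rayleigh quotient at convergence

      A = np.array([[0, 1, 1],
                    [1, 0, 1],
                    [1, 1, 0]], dtype=float)
      print(spectral_radius(A))            # 2.0 for the unweighted triangle
      A[0, 1] = A[1, 0] = 0.5              # decrease one edge's weight symmetrically
      print(spectral_radius(A))            # radius drops below 2.0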

  3. Electrically tunable soft solid lens inspired by reptile and bird accommodation.

    PubMed

    Pieroni, Michael; Lagomarsini, Clara; De Rossi, Danilo; Carpi, Federico

    2016-10-26

    Electrically tunable lenses are conceived as deformable adaptive optical components able to change focus without motor-controlled translations of stiff lenses. In order to achieve large tuning ranges, large deformations are needed. This requires new technologies for the actuation of highly stretchable lenses. This paper presents a configuration to obtain compact tunable lenses entirely made of soft solid matter (elastomers). This was achieved by combining the advantages of dielectric elastomer actuation (DEA) with a design inspired by the accommodation of reptiles and birds. An annular DEA was used to radially deform a central solid-body lens. Using an acrylic elastomer membrane, a silicone lens and a simple fabrication method, we assembled a tunable lens capable of focal length variations up to 55%, driven by an actuator four times larger than the lens. As compared to DEA-based liquid lenses, the novel architecture halves the required driving voltages, simplifies the fabrication process and allows for a higher versatility in design. These new lenses might find application in systems requiring large variations of focus with low power consumption, silent operation, low weight, shock tolerance, minimized axial encumbrance and minimized changes of performance against vibrations and variations in temperature.

  4. Local wavelet transform: a cost-efficient custom processor for space image compression

    NASA Astrophysics Data System (ADS)

    Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier

    2002-11-01

    Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that results in the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.
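
    The latency argument is easy to demonstrate with a toy analogue (the LWT itself handles general overlapping wavelet filters; the Haar wavelet is used here only because its length-2, non-overlapping filters make stripe-wise processing trivially equal to the global transform). Each even-height stripe can be transformed and handed to the entropy coder as soon as its rows arrive, push-broom style:

```python
import numpy as np

def haar_level_rows(stripe):
    """One Haar analysis level along the rows: (approximation, detail)."""
    a = (stripe[0::2] + stripe[1::2]) / np.sqrt(2.0)
    d = (stripe[0::2] - stripe[1::2]) / np.sqrt(2.0)
    return a, d

def pushbroom_transform(row_source, stripe_rows=8):
    """Consume rows from a (potentially endless) source, emitting transformed
    stripes while buffering only `stripe_rows` rows at a time."""
    assert stripe_rows % 2 == 0
    buf = []
    for row in row_source:
        buf.append(row)
        if len(buf) == stripe_rows:
            yield haar_level_rows(np.asarray(buf))
            buf = []

rows = (np.sin(np.linspace(0, 3, 64) + 0.1 * r) for r in range(4096))
for a, d in pushbroom_transform(rows):
    pass   # entropy-code each stripe immediately; memory stays bounded
```

    The coefficients equal those of a global one-level Haar transform up to row ordering, which is the sense in which block-based processing need not change the transformed image.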

  5. Electromyographic monitoring and its anatomical implications in minimally invasive spine surgery.

    PubMed

    Uribe, Juan S; Vale, Fernando L; Dakwar, Elias

    2010-12-15

    Literature review. The objective of this article is to examine current intraoperative electromyography (EMG) neurophysiologic monitoring methods and their application in minimally invasive techniques. We also discuss the recent application of EMG, and its anatomic implications, in the minimally invasive lateral transpsoas approach to the spine. Minimally invasive techniques require that the same goals of surgery be achieved, with the hope of decreased morbidity to the patient. Unlike standard open procedures, direct visualization of the anatomy is decreased. To increase the safety of minimally invasive spine surgery, neurophysiological monitoring techniques have been developed. Review of the literature was performed using the National Center for Biotechnology Information databases via PUBMED/MEDLINE. All articles in the English language discussing the use of intraoperative EMG monitoring and minimally invasive spine surgery were reviewed. The role of EMG monitoring, with special reference to the minimally invasive lateral transpsoas approach, is also described. In total, 76 articles were identified that discussed the role of neuromonitoring in spine surgery. The majority of articles on EMG and spine surgery discuss the use of intraoperative neurophysiological monitoring (IOM) for safe and accurate pedicle screw placement. In general, there is a paucity of literature that pertains to intraoperative EMG neuromonitoring and minimally invasive spine surgery. Recently, EMG has been used during the minimally invasive lateral transpsoas approach to the lumbar spine for interbody fusion. The addition of EMG to the lateral approach has contributed to decreasing the complication rate from 30% to less than 1%. In minimally invasive approaches to the spine, the use of EMG IOM might provide additional safety in procedures such as percutaneous pedicle screw placement, where visualization is limited compared with conventional open procedures. In addition to knowledge of the anatomy and image guidance, directional EMG IOM is crucial for safe passage through the psoas muscle during the minimally invasive lateral retroperitoneal approach.

  6. Spot-shadowing optimization to mitigate damage growth in a high-energy-laser amplifier chain.

    PubMed

    Bahk, Seung-Whan; Zuegel, Jonathan D; Fienup, James R; Widmayer, C Clay; Heebner, John

    2008-12-10

    A spot-shadowing technique to mitigate damage growth in a high-energy laser is studied. Its goal is to minimize the energy loss and undesirable hot spots in intermediate planes of the laser. A nonlinear optimization algorithm solves for the complex fields required to mitigate damage growth in the National Ignition Facility amplifier chain. The method is generally applicable to any large fusion laser.

  7. Hydrogen generation through static-feed water electrolysis

    NASA Technical Reports Server (NTRS)

    Jensen, F. C.; Schubert, F. H.

    1975-01-01

    A static-feed water electrolysis system (SFWES), developed under NASA sponsorship, is presented for potential applicability to terrestrial hydrogen production. The SFWES concept uses (1) an alkaline electrolyte to minimize power requirements and materials-compatibility problems, (2) a design in which the electrolyte is retained in a thin porous matrix, eliminating bulk electrolyte, and (3) a static water-feed mechanism to prevent electrode and electrolyte contamination and to promote system simplicity.

  8. An Investigation of a Photographic Technique of Measuring High Surface Temperatures

    NASA Technical Reports Server (NTRS)

    Siviter, James H., Jr.; Strass, H. Kurt

    1960-01-01

    A photographic method of temperature determination has been developed to measure elevated temperatures of surfaces. The technique presented herein minimizes calibration procedures and permits wide variation in emulsion developing techniques. The present work indicates that the lower limit of applicability is approximately 1,400 F when conventional cameras, emulsions, and moderate exposures are used. The upper limit is determined by the calibration technique and the accuracy required.

  9. Power recovery system for coal liquefaction process

    DOEpatents

    Horton, Joel R.

    1985-01-01

    Method and apparatus for minimizing the energy required to inject a reactant such as coal-oil slurry into a reaction vessel, using high-pressure effluent from the latter to displace the reactant from a containment vessel into the reaction vessel with the assistance of a low-pressure pump. Effluent is degassed in the containment vessel, and a heel of the degassed effluent is maintained between incoming effluent and reactant in the containment vessel.

  10. A technique for accelerating the convergence of restarted GMRES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, A H; Jessup, E R; Manteuffel, T

    2004-03-09

    We have observed that the residual vectors at the end of each restart cycle of restarted GMRES often alternate direction in a cyclic fashion, thereby slowing convergence. We present a new technique for accelerating the convergence of restarted GMRES by disrupting this alternating pattern. The new algorithm resembles a full conjugate gradient method with polynomial preconditioning, and its implementation requires minimal changes to the standard restarted GMRES algorithm.
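
    This technique is available off the shelf: SciPy's lgmres solver is based on the augmentation idea described in this report (the SciPy documentation cites Baker, Jessup, and Manteuffel), so a quick comparison against plain restarted GMRES takes only a few lines. The matrix below is an arbitrary sparse test system, not from the report.

```python
import numpy as np
from scipy.sparse import eye, random as sprandom
from scipy.sparse.linalg import gmres, lgmres

rng = np.random.default_rng(0)
n = 500
A = eye(n) + 0.5 * sprandom(n, n, density=0.01, random_state=0)
b = rng.standard_normal(n)

x1, info1 = gmres(A, b, restart=30)     # plain restarted GMRES(30)
x2, info2 = lgmres(A, b, inner_m=30)    # GMRES(30) augmented with error approximations
print(info1, info2)                     # 0 means converged
print(np.linalg.norm(A @ x1 - b), np.linalg.norm(A @ x2 - b))
```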

  11. Mass Transfer from Entrapped DNAPL Sources Undergoing Remediation: Characterization Methods and Prediction Tools

    DTIC Science & Technology

    2006-08-31

    volumetric depletion efficiency (VDE) considers how much DNAPL is depleted from the system, relative to the total volume of solution flushed through the...aqueous phase contaminant. VDE is important to consider, as conditions that result in the fastest mass transfer, highest enhancement, or best MTE, may...volumes of flushing fluid, maximizing DNAPL depletion while minimizing flushing volume requirements may be desirable from a remediation standpoint. VDE

  12. The Border Star 85 Survey: Toward an Archeology of Landscapes

    DTIC Science & Technology

    1988-12-12

    historic properties on that highly active military tire TRU method as implemented) were inadequate for installation. rendering determinations of National...Dofia Ana phase settlement, such required only minimal reporting sufficient to render Na- that one could speculate as to how and why variation among...this dependent upon precipitation. In normal or high rainfall sort are complicated, however, by factors that render them years there would be many

  13. Can cover data be used as a surrogate for seedling counts in regeneration stocking evaluations in northern hardwood forests?

    Treesearch

    Todd E. Ristau; Susan L. Stout

    2014-01-01

    Assessment of regeneration can be time-consuming and costly. Often, foresters look for ways to minimize the cost of doing inventories. One potential method to reduce time required on a plot is use of percent cover data rather than seedling count data to determine stocking. Robust linear regression analysis was used in this report to predict seedling count data from...

  14. Multiple video sequences synchronization during minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Belhaoua, Abdelkrim; Moreau, Johan; Krebs, Alexandre; Waechter, Julien; Radoux, Jean-Pierre; Marescaux, Jacques

    2016-03-01

    Hybrid operating rooms are an important development in the medical ecosystem. They allow integrating, in the same procedure, the advantages of radiological imaging and surgical tools. However, one of the challenges faced by clinical engineers is to support the connectivity and interoperability of medical-electrical point-of-care devices. A system that could enable plug-and-play connectivity and interoperability for medical devices would improve patient safety, save hospitals time and money, and provide data for electronic medical records. In this paper, we propose a hardware platform dedicated to collecting and synchronizing, in real time, multiple videos captured from medical equipment. The final objective is to integrate augmented reality technology into an operating room (OR) in order to assist the surgeon during a minimally invasive operation. To the best of our knowledge, there is no prior work dealing with hardware-based video synchronization for augmented reality applications in the OR. While hardware synchronization methods can embed a temporal value, a so-called timestamp, into each sequence on-the-fly and require no post-processing, they require specialized hardware. However, the design of our hardware is simple and generic. This approach was adopted and implemented in this work, and its performance is evaluated by comparison to state-of-the-art methods.
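
    As a sketch of what a consumer of such a platform might do with the embedded timestamps (the grouping logic, names, and tolerance below are illustrative and not from the paper): frames from the captured streams can be bundled into synchronized sets by nearest-timestamp matching.

```python
from bisect import bisect_left

def nearest(ts_list, t):
    """Index of the timestamp in sorted ts_list closest to t."""
    i = bisect_left(ts_list, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(ts_list)]
    return min(candidates, key=lambda j: abs(ts_list[j] - t))

def synchronize(streams, tol_ms=10):
    """streams: dict name -> sorted list of frame timestamps (ms).
    Yields dicts of per-stream frame indices whose timestamps agree within
    tol_ms, using the first stream as the reference clock."""
    ref_name, *others = streams
    for k, t in enumerate(streams[ref_name]):
        group = {ref_name: k}
        for name in others:
            j = nearest(streams[name], t)
            if abs(streams[name][j] - t) <= tol_ms:
                group[name] = j
        if len(group) == len(streams):
            yield group
```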

  15. Food Safety Impacts from Post-Harvest Processing Procedures of Molluscan Shellfish.

    PubMed

    Baker, George L

    2016-04-18

    Post-harvest processing (PHP) methods are viable food processing methods employed to reduce human pathogens in molluscan shellfish that would normally be consumed raw, such as raw oysters on the half-shell. The efficacy of human pathogen reduction associated with PHP varies with respect to time, temperature, salinity, pressure, and process exposure. Regulatory requirements and PHP molluscan shellfish quality implications are major considerations for PHP usage. Food safety impacts associated with PHP of molluscan shellfish vary in their efficacy and may have synergistic outcomes when combined. Further research on many PHP methods is necessary, and emerging PHP methods that result in minimal quality loss and effective human pathogen reduction should be explored.

  16. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  17. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  18. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  19. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  20. 5 CFR 581.203 - Information minimally required to accompany legal process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... accompany legal process. 581.203 Section 581.203 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT... Process § 581.203 Information minimally required to accompany legal process. (a) Sufficient identifying information must accompany the legal process in order to enable processing by the governmental entity named...

  1. Two methods for proteomic analysis of formalin-fixed, paraffin embedded tissue result in differential protein identification, data quality, and cost.

    PubMed

    Luebker, Stephen A; Wojtkiewicz, Melinda; Koepsell, Scott A

    2015-11-01

    Formalin-fixed paraffin-embedded (FFPE) tissue is a rich source of clinically relevant material that can yield important translational biomarker discovery using proteomic analysis. Protocols for analyzing FFPE tissue by LC-MS/MS exist, but standardization of procedures and critical analysis of data quality is limited. This study compared and characterized data obtained from FFPE tissue using two methods: a urea in-solution digestion method (UISD) versus a commercially available Qproteome FFPE Tissue Kit method (Qkit). Each method was performed independently three times on serial sections of homogenous FFPE tissue to minimize pre-analytical variations and analyzed with three technical replicates by LC-MS/MS. Data were evaluated for reproducibility and physiochemical distribution, which highlighted differences in the ability of each method to identify proteins of different molecular weights and isoelectric points. Each method replicate resulted in a significant number of new protein identifications, and both methods identified significantly more proteins using three technical replicates as compared to only two. UISD was cheaper, required less time, and introduced significant protein modifications as compared to the Qkit method, which provided more precise and higher protein yields. These data highlight significant variability among method replicates and type of method used, despite minimizing pre-analytical variability. Utilization of only one method or too few replicates (both method and technical) may limit the subset of proteomic information obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. The Bravyi-Kitaev transformation for quantum computation of electronic structure

    NASA Astrophysics Data System (ADS)

    Seeley, Jacob T.; Richard, Martin J.; Love, Peter J.

    2012-12-01

    Quantum simulation is an important application of future quantum computers with applications in quantum chemistry, condensed matter, and beyond. Quantum simulation of fermionic systems presents a specific challenge. The Jordan-Wigner transformation allows for representation of a fermionic operator by O(n) qubit operations. Here, we develop an alternative method of simulating fermions with qubits, first proposed by Bravyi and Kitaev [Ann. Phys. 298, 210 (2002), 10.1006/aphy.2002.6254; e-print arXiv:quant-ph/0003137v2], that reduces the simulation cost to O(log n) qubit operations for one fermionic operation. We apply this new Bravyi-Kitaev transformation to the task of simulating quantum chemical Hamiltonians, and give a detailed example for the simplest possible case of molecular hydrogen in a minimal basis. We show that the quantum circuit for simulating a single Trotter time step of the Bravyi-Kitaev derived Hamiltonian for H2 requires fewer gate applications than the equivalent circuit derived from the Jordan-Wigner transformation. Since the scaling of the Bravyi-Kitaev method is asymptotically better than the Jordan-Wigner method, this result for molecular hydrogen in a minimal basis demonstrates the superior efficiency of the Bravyi-Kitaev method for all quantum computations of electronic structure.
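
    Assuming the OpenFermion library is available, both encodings can be inspected directly; the operator below is an arbitrary single fermionic excitation, chosen to show that the Jordan-Wigner image carries a Z chain between the modes while the Bravyi-Kitaev image acts only on logarithmically sized update and parity sets.

```python
from openfermion import FermionOperator, bravyi_kitaev, jordan_wigner

op = FermionOperator('3^ 1')      # the excitation a†_3 a_1 on a small register

print(jordan_wigner(op))          # Pauli strings with a Z chain between modes 1 and 3
print(bravyi_kitaev(op))          # Pauli strings on the BK update/parity sets
```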

  3. Chapter 11: Sample Design Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Khawaja, M. Sami; Rushton, Josh

    Evaluating an energy efficiency program requires assessing the total energy and demand saved through all of the energy efficiency measures provided by the program. For large programs, the direct assessment of savings for each participant would be cost-prohibitive. Even if a program is small enough that a full census could be managed, such an undertaking would almost always be an inefficient use of evaluation resources. The bulk of this chapter describes methods for minimizing and quantifying sampling error. Measurement error and regression error are discussed in various contexts in other chapters.

  4. Control Allocation with Load Balancing

    NASA Technical Reports Server (NTRS)

    Bodson, Marc; Frost, Susan A.

    2009-01-01

    Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the actuator deflections. This paper discusses the alternative choice of the l(infinity) norm, or sup norm. Minimization of the control effort translates into the minimization of the maximum actuator deflection (min-max optimization). The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are also investigated through examples. In particular, the min-max criterion results in a type of load balancing, where the load is the desired command and the algorithm balances this load among the various actuators. The solution using the l(infinity) norm also results in better robustness to failures and lower sensitivity to nonlinearities in illustrative examples.
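
    The conversion to a linear program is the standard epigraph trick: minimize t subject to -t <= u_i <= t and Bu = d. A sketch with SciPy's linprog, where the effectiveness matrix B and desired command d are made-up numbers, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

B = np.array([[1.0, 1.0, 0.5],      # control effectiveness (moments per deflection)
              [0.2, -0.8, 1.0]])
d = np.array([0.7, 0.3])            # desired moments
m, n = B.shape

# Variables z = [u_1..u_n, t]; minimize t subject to |u_i| <= t and B u = d.
c = np.zeros(n + 1); c[-1] = 1.0
A_ub = np.block([[np.eye(n), -np.ones((n, 1))],
                 [-np.eye(n), -np.ones((n, 1))]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([B, np.zeros((m, 1))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=d,
              bounds=[(None, None)] * n + [(0, None)])
u, t = res.x[:n], res.x[-1]
print(u, t)   # t equals the largest |u_i|: the load is balanced across actuators
```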

  5. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

    This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10^-12 is reasonable.
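
    The reinversion step can be illustrated with one standard technique for improving an approximate inverse, the Newton-Schulz iteration X <- X(2I - AX), which roughly doubles the number of correct digits per step when X is already close to A^-1. This illustrates the idea and is not necessarily the report's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
X = np.linalg.inv(A) + 1e-6 * rng.standard_normal((50, 50))  # perturbed inverse

I = np.eye(50)
print(np.linalg.norm(I - A @ X))   # residual before refinement
X = X @ (2 * I - A @ X)            # one Newton-Schulz refinement step
print(np.linalg.norm(I - A @ X))   # residual after: markedly smaller
```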

  6. A Single-Lap Joint Adhesive Bonding Optimization Method Using Gradient and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Smeltzer, Stanley S., III; Finckenor, Jeffrey L.

    1999-01-01

    A natural process for any engineer, scientist, educator, etc., is to seek the most efficient method for accomplishing a given task. In the case of structural design, an area that has a significant impact on structural efficiency is joint design. Unless the structure is machined from a solid block of material, the individual components which compose the overall structure must be joined together. The method for joining a structure varies depending on the applied loads, material, assembly and disassembly requirements, service life, environment, etc. Using both metallic and fiber reinforced plastic materials limits the user to two methods, or a combination of these methods, for joining the components into one structure. The first is mechanical fastening and the second is adhesive bonding. Mechanical fastening is by far the most popular joining technique; however, in terms of structural efficiency, adhesive bonding provides a superior joint since the load is distributed uniformly across the joint. The purpose of this paper is to develop a method for optimizing single-lap joint adhesive bonded structures using both gradient and genetic algorithms and comparing the solution process for each method. The goal of the single-lap joint optimization is to find the most efficient structure that meets the imposed requirements while still remaining as lightweight, economical, and reliable as possible. For the single-lap joint, an optimum joint is determined by minimizing the weight of the overall joint based on constraints from adhesive strengths as well as empirically derived rules. The analytical solution of the single-lap joint is determined using the classical Goland-Reissner technique for case 2 type adhesive joints. Joint weight minimization is achieved using a commercially available routine, Design Optimization Tool (DOT), for the gradient solution, while an author-developed method is used for the genetic algorithm solution. Results illustrate the critical design variables as a function of adhesive properties and the convergence of different joints based on the two optimization methods.

  7. Single-implant overdentures retained by the Novaloc attachment system: study protocol for a mixed-methods randomized cross-over trial.

    PubMed

    de Souza, Raphael F; Bedos, Christophe; Esfandiari, Shahrokh; Makhoul, Nicholas M; Dagdeviren, Didem; Abi Nader, Samer; Jabbar, Areej A; Feine, Jocelyne S

    2018-04-23

    Overdentures retained by a single implant in the midline have arisen as a minimal implant treatment for edentulous mandibles. The success of this treatment depends on the performance of a single stud attachment that is susceptible to wear-related retention loss. Recently developed biomaterials used in attachments may result in better performance of the overdentures, offering minimal retention loss and greater patient satisfaction. These biomaterials include resistant polymeric matrixes and amorphous diamond-like carbon applied on metallic components. The objective of this explanatory mixed-methods study is to compare Novaloc, a novel attachment system with such characteristics, to a traditional alternative for single implants in the mandible of edentate elderly patients. We will carry out a randomized cross-over clinical trial comparing Novaloc attachments to Locators for single-implant mandibular overdentures in edentate elderly individuals. Participants will be followed for three months with each attachment type; patient-based, clinical, and economic outcomes will be gathered. A sample of 26 participants is estimated to be required to detect clinically relevant differences in terms of the primary outcome (patient ratings of general satisfaction). Participants will choose which attachment they wish to keep, then be interviewed about their experiences and preferences with a single implant prosthesis and with the two attachments. Data from the quantitative and qualitative assessments will be integrated through a mixed-methods explanatory strategy. A last quantitative assessment will take place after 12 months with the preferred attachment; this latter assessment will enable measurement of the attachments' long-term wear and maintenance requirements. Our results will lead to evidence-based recommendations regarding these systems, guiding providers and patients when making decisions on which attachment systems and implant numbers will be most appropriate for individual cases. The recommendation of a specific attachment for elderly edentulous patients may combine positive outcomes from patient perspectives with low cost, good maintenance, and minimal invasiveness. ClinicalTrials.gov, NCT03126942 . Registered on 13 April 2017.

  8. An algorithm for designing minimal microbial communities with desired metabolic capacities

    PubMed Central

    Eng, Alexander; Borenstein, Elhanan

    2016-01-01

    Motivation: Recent efforts to manipulate various microbial communities, such as fecal microbiota transplant and bioreactor systems’ optimization, suggest a promising route for microbial community engineering with numerous medical, environmental and industrial applications. However, such applications are currently restricted in scale and often rely on mimicking or enhancing natural communities, calling for the development of tools for designing synthetic communities with specific, tailored, desired metabolic capacities. Results: Here, we present a first step toward this goal, introducing a novel algorithm for identifying minimal sets of microbial species that collectively provide the enzymatic capacity required to synthesize a set of desired target product metabolites from a predefined set of available substrates. Our method integrates a graph theoretic representation of network flow with the set cover problem in an integer linear programming (ILP) framework to simultaneously identify possible metabolic paths from substrates to products while minimizing the number of species required to catalyze these metabolic reactions. We apply our algorithm to successfully identify minimal communities both in a set of simple toy problems and in more complex, realistic settings, and to investigate metabolic capacities in the gut microbiome. Our framework adds to the growing toolset for supporting informed microbial community engineering and for ultimately realizing the full potential of such engineering efforts. Availability and implementation: The algorithm source code, compilation, usage instructions and examples are available under a non-commercial research use only license at https://github.com/borenstein-lab/CoMiDA. Contact: elbo@uw.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153571
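
    Stripped of the network-flow routing (which the actual CoMiDA formulation couples with the species selection), the core selection step is a set-cover ILP. A minimal sketch with the PuLP modeling library, using invented species and reaction names:

```python
import pulp

# Which reactions each candidate species can catalyze (illustrative data)
species_reactions = {
    'sp1': {'r1', 'r2'},
    'sp2': {'r2', 'r3'},
    'sp3': {'r3', 'r4'},
    'sp4': {'r1', 'r4'},
}
required = {'r1', 'r2', 'r3', 'r4'}   # reactions on the substrate-to-product path

prob = pulp.LpProblem('min_community', pulp.LpMinimize)
use = pulp.LpVariable.dicts('use', species_reactions, cat='Binary')

prob += pulp.lpSum(use.values())                       # minimize species count
for r in required:                                     # every reaction covered
    prob += pulp.lpSum(use[s] for s in species_reactions
                       if r in species_reactions[s]) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([s for s in species_reactions if use[s].value() == 1])
```

    The optimum here is a pair such as sp1 and sp3, which jointly cover all four required reactions.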

  9. [Precise application of Traditional Chinese Medicine in minimally-invasive techniques].

    PubMed

    Dong, Fu-Hui

    2018-06-25

    The minimally-invasive techniques of traditional Chinese medicine (TCM) use different types of acupuncture needles to treat diseased locations with special techniques. These techniques include different methods of insertion and closed incision (press cutting, sliding cutting, scrape cutting, etc.). This needling technique is based on the traditional Chinese medicine theories of Pi Bu (cutaneous), Jing Jin (sinew), Jing Luo (meridian), Wu Ti (five body structure components) and Zang Fu (organ system). Commonly used needles include: needle Dao, needle with edge, Pi needle, Shui needle, Ren needle, Gou needle, Chang Yuan needle, Bo needle and so on. The principle of this minimally-invasive technique of TCM is to achieve the greatest healing benefit with the least amount of anatomical and physiological intervention. This results in the highest standard of health care with the lowest rehabilitative need and burden of care. In the past 20 years, through the collaborative research of several hundred hospitals across China, we systematically reviewed the best minimally invasive techniques of TCM and the first-line treatments for selected conditions. In 2013, the Department of Medical Affairs of the State Administration of Traditional Chinese Medicine created the "Traditional Chinese Medicine Technical Manual" (General Version) and released it nationwide. Its contents include: (1) Minimally invasive scar tissue release. ¹Suitable for Bi and pain syndromes of the neck, shoulder, waist, buttocks and extremities. ²Degeneration causes local hypertrophy and inflammation, creating local tissue adhesion. ³There are two kinds of incision methods: press cutting and slide cutting. (2) Minimally invasive fascial tension release. ¹Suitable for localized fascial tension caused by trauma, overuse, or wind-cold-dampness, leading to compensatory hyperplasia. ²Long-term high-stress stimulation of local fascia creates compensatory hyperplasia, Ashi points, and tissue texture changes (cords, nodules, masses). ³According to the different structural features of the needles, there are two incision methods: penetrating from the outside to the inside, and pulling from the inside to the outside. (3) Minimally invasive decompression technique. ¹Suitable for internal pressure changes within organ cavities caused by trauma, degeneration, or inflammation, such as compartment syndrome, bone marrow edema, and increased intraluminal pressure in the bone marrow. ²According to the tissue involved, it is categorized into soft tissue decompression and bone decompression. (4) Minimally invasive orthopedic surgery. Applicable to the correction of some postural and developmental deformities, mainly through the dynamic balance method and/or the static balance method. (5) Minimally invasive dissection. Suitable for fractures and tendon injuries caused by deep soft tissue adhesion. (6) Minimally invasive separation. ¹Suitable for cutaneous and sinew regions, superficial adhesions due to lesions, and local post-operative incision adhesions. ²According to the structure of the needle tip, the methods are divided into sharp separation and blunt dissection. (7) Minimally invasive sustained pressure technique. ¹Suitable for neuromuscular dysfunction which causes JING (spasm) syndrome and WEI (atrophy) syndrome. ²The needle is applied with sustained pressure, without penetrating the selected tissue surface.
This includes: the nerve root sustained pressure technique; the peripheral nerve sustained pressure technique; the muscle sustained pressure technique; the fascial contact procedure; and the cutaneous sustained pressure technique. (8) Minimally invasive insertion technique. ¹Suitable for systemic regulation to treat disease. ²Different organs are connected to different layers of tissue; therefore, to treat specific conditions, specific tissues must be targeted. ³For example, back Shu points are used to treat vertigo arising from cervical spine issues, and digestive issues associated with spinal degeneration. ⁴The internal organs can be regulated through the pathways that run along the different layers of tissue. The types of stimulation include: meridian acupoint stimulation; cutaneous stimulation; fascia stimulation; muscle stimulation; and periosteum stimulation. The clinical application of these techniques has enriched the drug-free therapies of traditional Chinese medicine and achieved excellent outcomes, but at the same time it raises an important question: how can we apply these minimally invasive techniques in clinical practice so that they are safe and effective? In addition, how can practitioners, individually and collectively, develop their understanding of these minimally invasive techniques in a progressive manner? We make the following recommendations. (1) Clear diagnosis and precise application. Every approach has specific indications, and choosing the correct technique requires a comprehensive understanding of its advantages and disadvantages. Moreover, the accurate application of a technique depends on the expertise of the practitioner. Through systematic review and clinical observation, we formulated first-line, second-line, and third-line treatments for specific conditions. Using the information gathered from this research, practitioners can decide which treatment point is appropriate based on the stage of disease progression. For example, a common condition such as nerve ending tension pain (i.e., cutaneous nerve entrapment syndrome) is caused by stress concentration. There are two types of treatment for this condition: ¹changing the response to the stress state (non-invasive approaches such as manual therapy and physiotherapy); ²changing the state of the surrounding environment (invasive approaches such as the Pi needle). Before the tissue texture changes into pain points, cords, or nodules, the former approach is effective; once the tissue texture has changed, the latter approach is the first-line treatment. (2) Systematic training and disease progression training. The minimally invasive techniques of traditional Chinese medicine can treat many kinds of disease. To ensure their safety, organization, and progressive development, practitioners are trained systematically and manage their treatment approach through a disease hierarchy. Moreover, each technique should be applied according to its technical difficulty, the operating conditions, and the expertise of the practitioner. The application of minimally invasive techniques of traditional Chinese medicine does not depend on the hospital's administrative system or the regulatory college of medical professionals. The minimally invasive techniques of TCM should be taught from easy to difficult, from simple to complicated, with gradual progression by the practitioners. Eventually, a diagnostic and treatment protocol for the minimally invasive techniques of TCM can be created.
These protocols are currently available for reference: ¹Forming the diagnosis and differential diagnosis for the conditions below requires expert diagnostic and application skills: cerebral palsy; cervical vertigo; cervical headache; cervical precordial pain; other spine-related diseases. ²The requirements for the diagnosis and differential diagnosis of the following techniques are relatively high, and special training is required for the practitioner who performs them. The conditions below use minimally invasive orthopedic surgery and dissection: scar contracture deformity; congenital developmental malformations; cervical Bi-syndrome; shoulder pain syndrome; knee Bi-syndrome; low back pain; cervical spondylosis; lumbar disc herniation; avascular necrosis of the femoral head; ankylosing spondylitis. ³There are no special requirements for the diagnosis and differential diagnosis of the following techniques, but special training is required for the practitioner who performs them. The technical content is mainly decompression and scar tissue release. a) Muscle strain diseases: levator scapulae, splenius capitis, splenius cervicis, supraspinatus, infraspinatus, teres minor, teres major, serratus posterior superior, serratus posterior inferior, piriformis, gluteus maximus, gluteus medius, gluteus minimus, and erector spinae. b) Joint degenerative disorders: frozen shoulder, tennis elbow, tenosynovitis, knee osteoarthritis, and plantar fasciitis. c) JING-JIN PI-BU pain syndrome (cutaneous nerve entrapment syndrome): greater occipital nerve entrapment syndrome, lesser occipital nerve entrapment syndrome, great auricular nerve entrapment syndrome, suprascapular nerve entrapment syndrome, and transverse cutaneous nerve of the neck entrapment syndrome. (3) People-centred practice. The most attractive feature of the minimally invasive techniques of TCM is that they do not rely on expensive medical equipment or demanding operating conditions. The key to applying these techniques is the practitioner's technique, skill, and expertise. The necessary conditions for successfully applying them are ¹the practitioner's understanding of disease progression and diagnosis, and ²the practitioner's skill in applying the technique. We require a patient-centered approach with evidence-based practice as the focus. We aim to seek truth from facts, to understand the comprehensive picture, to include pertinent details, to be observant, and to be goal oriented; to work from one side to the other, from outside to inside, from top to bottom, comparing right with left; and, through active movements, passive movements, weight-bearing movements, and assisted passive movements, to determine the instantaneous centre and thereby diagnose stress concentration points. The operating technique is adjusted according to the response of the patient's tissues. We must pay attention to diagnosis through palpation: layers, structure, texture, deformity, dislocation, movement characteristics, and rhythmic changes, so as to achieve SHOU MO XIN HUI WU WEI: position, quantify, quantity, timing, and pattern; to grasp accurately the timeliness and dose efficiency of treatment; and to distinguish between the local and systemic effects of treatment.
Through comprehensive judgment of the feeling in the hands, the feeling of the acupuncture needle, and inspiration, the precise treatment requirements indicated by the "Huangdi Neijing·Suwen" can be achieved: "Puncture the bone without damaging the tendons, puncture the tendons without damaging the muscles, puncture the muscle without damaging the pulse, puncture the pulse without damaging the skin, puncture the skin without damaging the muscle, puncture the muscle without damaging the tendons, puncture the tendons without damaging the bone... Puncturing the bone without damaging the tendons means the needle passes through the tendons, arrives at the bone, and works on the bone. Puncturing the tendons without damaging the muscles means the needle passes through the muscles and arrives close to the tendon. Puncturing the muscle without damaging the pulse means the needle passes the pulse and does not touch the muscle. Puncturing the pulse without damaging the skin means the needle passes through the skin without penetrating the pulse. Puncturing the skin without damaging the muscle means the disease is in the skin and the needle is inserted into the skin but does not damage the muscle. Puncturing the muscle without damaging the tendons means the needle passes through the muscle and arrives at the tendon. Puncture the tendons without damaging the bone." Copyright© 2018 by the China Journal of Orthopaedics and Traumatology Press.

  10. Efficiency of unconstrained minimization techniques in nonlinear analysis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.; Knight, N. F., Jr.

    1978-01-01

    Unconstrained minimization algorithms have been critically evaluated for their effectiveness in solving structural problems involving geometric and material nonlinearities. The algorithms have been categorized as being zeroth, first, or second order depending upon the highest derivative of the function required by the algorithm. The sensitivity of these algorithms to the accuracy of derivatives clearly suggests using analytically derived gradients instead of finite difference approximations. The use of analytic gradients results in better control of the number of minimizations required for convergence to the exact solution.
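
    The derivative-accuracy point is easy to reproduce on a standard test function (SciPy's Rosenbrock utilities stand in for the structural problems of the paper): BFGS with the analytic gradient versus the same run with finite-difference gradients.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.full(10, -1.0)
with_grad = minimize(rosen, x0, jac=rosen_der, method='BFGS')
fd_grad = minimize(rosen, x0, method='BFGS')   # gradient by finite differences

# Analytic gradients typically reach the optimum with fewer, cleaner
# function evaluations and a tighter final objective value.
print(with_grad.nfev, with_grad.fun)
print(fd_grad.nfev, fd_grad.fun)
```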

  11. Novel operation and control of an electric vehicle aluminum/air battery system

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Yang, Shao Hua; Knickle, Harold

    The objective of this paper is to create a method to size battery subsystems for an electric vehicle so as to optimize battery performance. Optimization of performance includes minimizing corrosion by operating at a constant current density. These subsystems will allow for easy mechanical recharging. A proper choice of battery subsystem will allow for longer battery life, greater range and better performance. For longer life, the current density and reaction rate should be nearly constant. The control method requires control of power by controlling electrolyte flow in the battery sub-modules. As power demand increases, more sub-modules come online and more electrolyte is needed. Solenoid valves open in sequence to provide the required power. Corrosion is limited because there is no electrolyte in the modules not being used.
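
    A schematic of the described control law (the module power rating and names are illustrative, not from the paper): to hold current density, and thus corrosion, roughly constant, whole sub-modules are switched in or out by their electrolyte solenoid valves as the power demand changes.

```python
import math

P_MODULE = 2.0  # kW deliverable per sub-module at the design current density

def valve_states(power_demand_kw, n_modules):
    """Return a list of booleans: True = solenoid valve open (module wetted)."""
    active = min(n_modules, math.ceil(power_demand_kw / P_MODULE))
    return [i < active for i in range(n_modules)]

print(valve_states(5.0, 6))   # [True, True, True, False, False, False]
```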

  12. A method to improve the range resolution in stepped frequency continuous wave radar

    NASA Astrophysics Data System (ADS)

    Kaczmarek, Paweł

    2018-04-01

    In this paper, one of the high range resolution methods, Aperture Sampling (AS), is analysed. Unlike MUSIC-based techniques, it proved to be very efficient in achieving an unambiguous synthetic range profile for ultra-wideband stepped frequency continuous wave radar. Assuming that the minimal distance required to separate two targets in depth (distance) corresponds to the -3 dB width of the received echo, AS provided a 30.8% improvement in range resolution in the analysed scenario when compared to the results of applying the IFFT. The output data are far superior, in terms of both improved range resolution and reduced side lobe level, to the Inverse Fourier Transform typically used in this area. Furthermore, the method does not require prior knowledge or an estimate of the number of targets to be detected in a given scan.
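
    For context, the IFFT baseline that Aperture Sampling is compared against is easily sketched: a stepped-frequency radar samples the target response at N frequencies f0 + k·df, and an inverse FFT of those samples yields the synthetic range profile. All numbers below are illustrative, not from the paper.

```python
import numpy as np

c = 3e8
N, df, f0 = 64, 5e6, 9e9            # 64 steps of 5 MHz => 320 MHz bandwidth
targets = [10.0, 12.0]              # two point targets, 2 m apart
k = np.arange(N)
f = f0 + k * df

# Received complex samples: one round-trip phase term per target and frequency
s = sum(np.exp(-1j * 4 * np.pi * f * R / c) for R in targets)

profile = np.abs(np.fft.ifft(s))            # synthetic range profile
r_axis = np.arange(N) * c / (2 * N * df)    # bin spacing c/(2*N*df), about 0.47 m
print(sorted(r_axis[np.argsort(profile)[-2:]]))   # peaks near 10 m and 12 m
```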

  13. Identification of dynamic systems, theory and formulation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1985-01-01

    The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas of estimation; estimation in dynamic systems is treated as a direct outgrowth of static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.

  14. 36 CFR 228.8 - Requirements for environmental protection.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... shall be conducted so as, where feasible, to minimize adverse environmental impacts on National Forest..., shall either be removed from National Forest lands or disposed of or treated so as to minimize, so far... 36 Parks, Forests, and Public Property 2 2012-07-01 2012-07-01 false Requirements for...

  15. 36 CFR 228.8 - Requirements for environmental protection.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... shall be conducted so as, where feasible, to minimize adverse environmental impacts on National Forest..., shall either be removed from National Forest lands or disposed of or treated so as to minimize, so far... 36 Parks, Forests, and Public Property 2 2014-07-01 2014-07-01 false Requirements for...

  16. 36 CFR 228.8 - Requirements for environmental protection.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... shall be conducted so as, where feasible, to minimize adverse environmental impacts on National Forest..., shall either be removed from National Forest lands or disposed of or treated so as to minimize, so far... 36 Parks, Forests, and Public Property 2 2013-07-01 2013-07-01 false Requirements for...

  17. 40 CFR 63.6605 - What are my general requirements for complying with this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... maintain any affected source, including associated air pollution control equipment and monitoring equipment, in a manner consistent with safety and good air pollution control practices for minimizing emissions. The general duty to minimize emissions does not require you to make any further efforts to reduce...

  18. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.

    PubMed

    Baranwal, Vipul K; Pandey, Ram K; Singh, Om P

    2014-01-01

    We propose an optimal variational asymptotic method to solve time fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ₀, γ₁, γ₂, … and auxiliary functions H₀(x), H₁(x), H₂(x), … are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both problems.
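
    The parameter-selection step can be shown in miniature (an ordinary ODE and a one-parameter ansatz stand in for the paper's fractional PDEs and multi-parameter correction functionals): pick γ in the trial solution u(t) = exp(γt) for u' + u = 0 by minimizing the squared residual integrated over [0, 1].

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def square_residual(gamma):
    # residual R(t) = u' + u = (gamma + 1) * exp(gamma * t)
    integrand = lambda t: ((gamma + 1) * np.exp(gamma * t)) ** 2
    return quad(integrand, 0.0, 1.0)[0]

res = minimize_scalar(square_residual, bounds=(-5, 5), method='bounded')
print(res.x)   # ~ -1.0, recovering the exact solution u(t) = exp(-t)
```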

  19. Computing global minimizers to a constrained B-spline image registration problem from optimal l1 perturbations to block match data

    PubMed Central

    Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas

    2014-01-01

    Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problems associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of image similarity block match metric and physical modeling combinations. PMID:24694135
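
    The basis pursuit reformulation in miniature, assuming the CVXPY modeling package: find the minimal l1 perturbation p to block-match data y such that a linear parameterization Ac fits y + p within tolerance. Here A, y, and the injected outliers are synthetic stand-ins for the B-spline basis and block-match estimates.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, k = 200, 12                      # block matches, spline coefficients
A = rng.standard_normal((m, k))     # stand-in for the B-spline basis matrix
y = A @ rng.standard_normal(k)
y[::17] += 3.0                      # a few grossly wrong matches (outliers)

p = cp.Variable(m)
c = cp.Variable(k)
tol = 1e-3
prob = cp.Problem(cp.Minimize(cp.norm1(p)),
                  [cp.norm(A @ c - (y + p), 2) <= tol])
prob.solve()

# The sparsity pattern of p flags exactly the unreliable block matches.
print(np.count_nonzero(np.abs(p.value) > 1e-4))
```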

  20. Undertreatment of people with major depressive disorder in 21 countries*

    PubMed Central

    Thornicroft, Graham; Chatterji, Somnath; Evans-Lacko, Sara; Gruber, Michael; Sampson, Nancy; Aguilar-Gaxiola, Sergio; Al-Hamzawi, Ali; Alonso, Jordi; Andrade, Laura; Borges, Guilherme; Bruffaerts, Ronny; Bunting, Brendan; de Almeida, Jose Miguel Caldas; Florescu, Silvia; de Girolamo, Giovanni; Gureje, Oye; Haro, Josep Maria; He, Yanling; Hinkov, Hristo; Karam, Elie; Kawakami, Norito; Lee, Sing; Navarro-Mateu, Fernando; Piazza, Marina; Posada-Villa, Jose; de Galvis, Yolanda Torres; Kessler, Ronald C.

    2017-01-01

    Background Major depressive disorder (MDD) is a leading cause of disability worldwide. Aims To examine the: (a) 12-month prevalence of DSM-IV MDD; (b) proportion aware that they have a problem needing treatment and who want care; (c) proportion of the latter receiving treatment; and (d) proportion of such treatment meeting minimal standards. Method Representative community household surveys from 21 countries as part of the World Health Organization World Mental Health Surveys. Results Of 51 547 respondents, 4.6% met 12-month criteria for DSM-IV MDD and of these 56.7% reported needing treatment. Among those who recognised their need for treatment, most (71.1%) made at least one visit to a service provider. Among those who received treatment, only 41.0% received treatment that met minimal standards. This resulted in only 16.5% of all individuals with 12-month MDD receiving minimally adequate treatment. Conclusions Only a minority of participants with MDD received minimally adequate treatment: 1 in 5 people in high-income and 1 in 27 in low-/lower-middle-income countries. Scaling up care for MDD requires fundamental transformations in community education and outreach, supply of treatment and quality of services. PMID:27908899

  1. Responsible gambling: general principles and minimal requirements.

    PubMed

    Blaszczynski, Alex; Collins, Peter; Fong, Davis; Ladouceur, Robert; Nower, Lia; Shaffer, Howard J; Tavares, Hermano; Venisse, Jean-Luc

    2011-12-01

    Many international jurisdictions have introduced responsible gambling programs. These programs intend to minimize negative consequences of excessive gambling, but vary considerably in their aims, focus, and content. Many responsible gambling programs lack a conceptual framework and, in the absence of empirical data, their components are based only on general considerations and impressions. This paper outlines the consensus viewpoint of an international group of researchers suggesting fundamental responsible gambling principles, roles of key stakeholders, and minimal requirements that stakeholders can use to frame and inform responsible gambling programs across jurisdictions. Such a framework does not purport to offer value statements regarding the legal status of gambling or its expansion. Rather, it proposes gambling-related initiatives aimed at government, industry, and individuals to promote responsible gambling and consumer protection. This paper argues that there is a set of basic principles and minimal requirements that should form the basis for every responsible gambling program.

  2. PLA realizations for VLSI state machines

    NASA Technical Reports Server (NTRS)

    Gopalakrishnan, S.; Whitaker, S.; Maki, G.; Liu, K.

    1990-01-01

    A major problem associated with state assignment procedures for VLSI controllers is obtaining an assignment that produces minimal or near minimal logic. The key item in Programmable Logic Array (PLA) area minimization is the number of unique product terms required by the design equations. This paper presents a state assignment algorithm for minimizing the number of product terms required to implement a finite state machine using a PLA. Partition algebra with predecessor state information is used to derive a near optimal state assignment. A maximum bound on the number of product terms required can be obtained by inspecting the predecessor state information. The state assignment algorithm presented is much simpler than existing procedures and leads to the same number of product terms or less. An area-efficient PLA structure implemented in a 1.0 micron CMOS process is presented along with a summary of the performance for a controller implemented using this design procedure.
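
    The quantity being minimized is concrete enough to count. With SymPy's two-level minimizer one can compare how many product terms a next-state equation needs under two candidate encodings; the toy minterm tables below are invented for illustration.

```python
from sympy import Or, symbols
from sympy.logic import SOPform

y1, y0, x = symbols('y1 y0 x')   # state bits and input

def n_product_terms(minterms):
    """Product terms in the minimized sum-of-products over (y1, y0, x)."""
    expr = SOPform([y1, y0, x], minterms)
    return len(expr.args) if isinstance(expr, Or) else 1

# The same next-state bit under two candidate state assignments:
assignment_a = [[0, 0, 1], [1, 1, 0]]   # minterms far apart: no merging
assignment_b = [[0, 0, 1], [0, 1, 1]]   # adjacent minterms merge
print(n_product_terms(assignment_a))    # 2 product terms
print(n_product_terms(assignment_b))    # 1 product term, hence a smaller PLA
```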

  3. Signaling on the continuous spectrum of nonlinear optical fiber.

    PubMed

    Tavakkolnia, Iman; Safari, Majid

    2017-08-07

    This paper studies different signaling techniques on the continuous spectrum (CS) of nonlinear optical fiber defined by the nonlinear Fourier transform. Three different signaling techniques are proposed and analyzed based on the statistics of the noise added to the CS after propagation along the nonlinear optical fiber. The proposed methods are compared in terms of error performance, distance reach, and complexity. Furthermore, the effect of chromatic dispersion on the data rate and noise in the nonlinear spectral domain is investigated. It is demonstrated that, for a given sequence of CS symbols, an optimal bandwidth (or symbol rate) can be determined so that the temporal duration of the propagated signal at the end of the fiber is minimized. In effect, the required guard interval between subsequently transmitted data packets in time is minimized and the effective data rate is significantly enhanced. Moreover, by selecting the proper signaling method and design criteria, a distance reach of 7100 km is reported by signaling only on the CS at a rate of 9.6 Gbps.

  4. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    NASA Astrophysics Data System (ADS)

    Sekhar, S. Chandra; Sreenivas, TV

    2004-12-01

    We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).
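
    The zero-crossing principle itself fits in a few lines (the paper's contribution, adaptively choosing the window via intersection of confidence intervals, is not reproduced here): within a window of duration T, a sinusoid of frequency f produces about 2fT sign changes, so counting them estimates the local IF.

```python
import numpy as np

def zc_if_estimate(sig, fs, win):
    """Estimate instantaneous frequency on consecutive windows of `win` samples."""
    est = []
    for i in range(0, len(sig) - win, win):
        seg = sig[i:i + win]
        crossings = np.count_nonzero(np.diff(np.signbit(seg)))
        est.append(crossings * fs / (2.0 * win))   # f ~ crossings / (2 * T)
    return np.array(est)

fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)
chirp = np.cos(2 * np.pi * (100 * t + 200 * t**2))   # IF = 100 + 400 t Hz
print(zc_if_estimate(chirp, fs, win=400))            # ramps from ~100 to ~500 Hz
```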

  5. A Technical Survey on Optimization of Processing Geo Distributed Data

    NASA Astrophysics Data System (ADS)

    Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.

    2018-04-01

    With the growth of cloud services and technology, a number of geographically distributed data centers have emerged to store large amounts of data. Analysis of geo-distributed data is required in various services for data processing, storage of essential information, etc.; processing this geo-distributed data and performing analytics on it is a challenging task. The distributed data processing is accompanied by issues in storage, computation and communication. The key issues to be dealt with are time efficiency, cost minimization, and utility maximization. This paper describes various optimization methods like end-to-end multiphase, G-MR, etc., using techniques like Map-Reduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE, and AMP (Ant Colony Optimization) to handle these issues. The various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization concentrates on achieving Quality of Service and reducing computation and communication cost; and SAGE achieves performance improvement in processing geo-distributed data sets.

  6. Do Circulating Tumor Cells, Exosomes, and Circulating Tumor Nucleic Acids Have Clinical Utility?

    PubMed Central

    Gold, Bert; Cankovic, Milena; Furtado, Larissa V.; Meier, Frederick; Gocke, Christopher D.

    2016-01-01

    Diagnosing and screening for tumors through noninvasive means represent an important paradigm shift in precision medicine. In contrast to tissue biopsy, detection of circulating tumor cells (CTCs) and circulating tumor nucleic acids provides a minimally invasive method for predictive and prognostic marker detection. This allows early and serial assessment of metastatic disease, including follow-up during remission, characterization of treatment effects, and clonal evolution. Isolation and characterization of CTCs and circulating tumor DNA (ctDNA) are likely to improve cancer diagnosis, treatment, and minimal residual disease monitoring. However, more trials are required to validate the clinical utility of precise molecular markers for a variety of tumor types. This review focuses on the clinical utility of CTCs and ctDNA testing in patients with solid tumors, including somatic and epigenetic alterations that can be detected. A comparison of methods used to isolate and detect CTCs and some of the intricacies of the characterization of the ctDNA are also provided. PMID:25908243

  7. Intricacies of Using Kevlar and Thermal Knives in a Deployable Release System: Issues and Solutions

    NASA Technical Reports Server (NTRS)

    Stewart, Alphonso C.; Hair, Jason H.; Broduer, Steve (Technical Monitor)

    2002-01-01

    The utilization of Kevlar cord and thermal knives in a deployable release system produces a number of issues that must be addressed in the design of the system. This paper proposes design considerations that minimize the major issues: thermal knife failure, Kevlar cord relaxation, and the measurement of cord tension. Design practices can minimize the potential for thermal knife laminate and element damage that results in failure of the knife. In-situ inspection of the knife with resistance checks (rather than continuity checks) and 10x zoom optical imaging can detect damaged knives. Tests allow the characterization of the behavior of the particular Kevlar cord in use and the development of specific pre-stretching techniques and initial tension values needed to meet requirements. A new method can accurately measure the tension of the Kevlar cord using a guitar tuner, because more conventional methods do not apply to aramid cords such as Kevlar.

  8. Intricacies of Using Kevlar Cord and Thermal Knives in a Deployable Release System: Issues and Solutions

    NASA Technical Reports Server (NTRS)

    Stewart, Alphonso; Hair, Jason H.

    2002-01-01

    The utilization of Kevlar cord and thermal knives in a deployable release system produces a number of issues that must be addressed in the design of the system. This paper proposes design considerations that minimize the major issues: thermal knife failure, Kevlar cord relaxation, and the measurement of cord tension. Design practices can minimize the potential for thermal knife laminate and element damage that results in failure of the knife. In-situ inspection of the knife with resistance checks (rather than continuity checks) and 10x zoom optical imaging can detect damaged knives. Tests allow the characterization of the behavior of the particular Kevlar cord in use and the development of specific prestretching techniques and initial tension values needed to meet requirements. A new method can accurately measure the tension of the Kevlar cord using a guitar tuner, because more conventional methods do not apply to aramid cords such as Kevlar.

  9. Intricacies of Using Kevlar Cord and Thermal Knives in a Deployable Release System: Issues and Solutions

    NASA Astrophysics Data System (ADS)

    Stewart, Alphonso; Hair, Jason H.

    2002-04-01

    The utilization of Kevlar cord and thermal knives in a deployable release system produces a number of issues that must be addressed in the design of the system. This paper proposes design considerations that minimize the major issues: thermal knife failure, Kevlar cord relaxation, and the measurement of cord tension. Design practices can minimize the potential for thermal knife laminate and element damage that results in failure of the knife. In-situ inspection of the knife with resistance checks (rather than continuity checks) and 10x zoom optical imaging can detect damaged knives. Tests allow the characterization of the behavior of the particular Kevlar cord in use and the development of specific prestretching techniques and initial tension values needed to meet requirements. A new method can accurately measure the tension of the Kevlar cord using a guitar tuner, because more conventional methods do not apply to aramid cords such as Kevlar.

  10. Medical Data Architecture (MDA) Project Status

    NASA Technical Reports Server (NTRS)

    Krihak, M.; Middour, C.; Gurram, M.; Wolfe, S.; Marker, N.; Winther, S.; Ronzano, K.; Bolles, D.; Toscano, W.; Shaw, T.

    2018-01-01

    The Medical Data Architecture (MDA) project supports the Exploration Medical Capability (ExMC) element in reducing the risk of adverse health outcomes and decrements in performance due to the limitations of in-flight medical capabilities on human exploration missions. To mitigate this risk, the ExMC MDA project addresses the technical limitations identified in ExMC Gap Med 07: We do not have the capability to comprehensively process medically relevant information to support medical operations during exploration missions. This gap identifies that current in-flight medical data management comprises a combination of data collection and distribution methods that are minimally integrated with on-board medical devices and systems, drawing on a variety of data sources and collection methods. For an exploration mission, the seamless management of such data will enable a more medically autonomous crew than the current paradigm. The medical system requirements are being developed in parallel with the exploration mission architecture and vehicle design. ExMC has recognized that, in order to make informed decisions about a medical data architecture framework, current methods for medical data management must not only be understood, but an architecture must also be identified that provides the crew with actionable insight into medical conditions. This medical data architecture will provide the functionality needed to meet the challenges of a self-contained medical system that delivers crew health care without assistance from ground support. Hence, the products of current prototype development will directly inform exploration medical system requirements.

  11. Medical Data Architecture Project Status

    NASA Technical Reports Server (NTRS)

    Krihak, M.; Middour, C.; Gurram, M.; Wolfe, S.; Marker, N.; Winther, S.; Ronzano, K.; Bolles, D.; Toscano, W.; Shaw, T.

    2018-01-01

    The Medical Data Architecture (MDA) project supports the Exploration Medical Capability (ExMC) element in reducing the risk of adverse health outcomes and decrements in performance due to the limitations of in-flight medical capabilities on human exploration missions. To mitigate this risk, the ExMC MDA project addresses the technical limitations identified in ExMC Gap Med 07: We do not have the capability to comprehensively process medically relevant information to support medical operations during exploration missions. This gap identifies that current in-flight medical data management comprises a combination of data collection and distribution methods that are minimally integrated with on-board medical devices and systems, drawing on a variety of data sources and collection methods. For an exploration mission, the seamless management of such data will enable a more medically autonomous crew than the current paradigm. The medical system requirements are being developed in parallel with the exploration mission architecture and vehicle design. ExMC has recognized that, in order to make informed decisions about a medical data architecture framework, current methods for medical data management must not only be understood, but an architecture must also be identified that provides the crew with actionable insight into medical conditions. This medical data architecture will provide the functionality needed to meet the challenges of a self-contained medical system that delivers crew health care without assistance from ground support. Hence, the products of current prototype development will directly inform exploration medical system requirements.

  12. 76 FR 30550 - Federal Management Regulation; Change in Consumer Price Index Minimal Value

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-26

    ... Minimal Value AGENCY: Office of Governmentwide Policy, GSA. ACTION: Final rule. SUMMARY: Pursuant to 5 U.S.C. 7342, at three-year intervals following January 1, 1981, the minimal value for foreign gifts must... required consultation has been completed and the minimal value has been increased to $350 or less as of...

  13. Leachate management design in Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lange, D.A.; Broscious, J.C.; Zullo, E.G.

    1996-02-01

    As part of a project to provide solid waste master plans for 25 cities in Mexico, an American engineering firm, Paul C. Rizzo Associates (Monroeville, Pa.), was contracted to design a comprehensive leachate management system for landfills in the chosen cities. The solid waste master plan project was administered by the Mexican federal government's Secretaria de Desarrollo Social (SEDESOL) with funding from the World Bank. While Paul C. Rizzo Associates was the prime contractor for the project, which was completed in 1994, work was also subcontracted to a local Mexican engineering firm. The lack of specific design criteria for leachate management in current Mexican regulations enabled the use of a creative design for the system based on experience and technical judgment. Important design considerations included the current, primitive open-dump/burning/scavenging method of disposal and recycling of wastes, and the need for a minimal-cost solution in this developing country. The economic situation made the need for minimal expenditures to upgrade infrastructure equally important. The purpose of the design effort was to use evaporation and recirculation methods of landfill leachate management to minimize the amount of leachate that required treatment, with the ultimate goal of achieving zero excess leachate at the landfill sites.

  14. A look-ahead variant of the Lanczos algorithm and its application to the quasi-minimal residual method for non-Hermitian linear systems. Ph.D. Thesis - Massachusetts Inst. of Technology, Aug. 1991

    NASA Technical Reports Server (NTRS)

    Nachtigal, Noel M.

    1991-01-01

    The Lanczos algorithm can be used both for eigenvalue problems and to solve linear systems. However, when applied to non-Hermitian matrices, the classical Lanczos algorithm is susceptible to breakdowns and potential instabilities. In addition, the biconjugate gradient (BCG) algorithm, which is the natural generalization of the conjugate gradient algorithm to non-Hermitian linear systems, has a second source of breakdowns, independent of the Lanczos breakdowns. Here, we present two new results. We propose an implementation of a look-ahead variant of the Lanczos algorithm which overcomes the breakdowns by skipping over those steps where a breakdown or a near-breakdown would occur. The new algorithm can handle look-ahead steps of any length and requires the same number of matrix-vector products and inner products per step as the classical Lanczos algorithm without look-ahead. Based on the proposed look-ahead Lanczos algorithm, we then present a novel BCG-like approach, the quasi-minimal residual (QMR) method, which avoids the second source of breakdowns in the BCG algorithm. We present details of the new method and discuss some of its properties. In particular, we discuss the relationship between QMR and BCG, showing how one can recover the BCG iterates, when they exist, from the QMR iterates. We also present convergence results for QMR, showing the connection between QMR and the generalized minimal residual (GMRES) algorithm, the optimal method in this class of methods. Finally, we give some numerical examples, both for eigenvalue computations and for non-Hermitian linear systems.
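
    The QMR method introduced in this thesis has since become a library staple, so its use is easy to demonstrate. A minimal usage sketch with SciPy's implementation on an illustrative non-Hermitian system (the test matrix is an assumption, not from the thesis):

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import qmr

      # An illustrative non-Hermitian system: a 1D convection-diffusion
      # stencil, unsymmetric because of the convection term.
      n = 200
      A = diags([-1.3, 2.0, -0.7], offsets=[-1, 0, 1], shape=(n, n),
                format="csr")
      b = np.ones(n)

      x, info = qmr(A, b)          # info == 0 signals convergence
      print(info, np.linalg.norm(A @ x - b))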

  15. ASPIC: a novel method to predict the exon-intron structure of a gene that is optimally compatible to a set of transcript sequences.

    PubMed

    Bonizzoni, Paola; Rizzi, Raffaella; Pesole, Graziano

    2005-10-05

    Currently available methods to predict splice sites are mainly based on the independent and progressive alignment of transcript data (mostly ESTs) to the genomic sequence. Apart from often being computationally expensive, this approach is vulnerable to several problems, hence the need to develop novel strategies. We propose a method, based on a novel multiple genome-EST alignment algorithm, for the detection of splice sites. To avoid limitations of splice site prediction (mainly, over-predictions) due to independent single EST alignments to the genomic sequence, our approach performs a multiple alignment of transcript data to the genomic sequence based on the combined analysis of all available data. We recast the problem of predicting constitutive and alternative splicing as an optimization problem, where the optimal multiple transcript alignment minimizes the number of exons and hence of splice site observations. We have implemented a splice site predictor based on this algorithm in the software tool ASPIC (Alternative Splicing PredICtion). It is distinguished from other methods based on BLAST-like tools by the incorporation of entirely new ad hoc procedures for accurate and computationally efficient transcript alignment, and adopts dynamic programming for the refinement of intron boundaries. ASPIC also provides the minimal set of non-mergeable transcript isoforms compatible with the detected splicing events. The ASPIC web resource is dynamically interconnected with the Ensembl and Unigene databases and also implements an upload facility. Extensive benchmarking shows that ASPIC outperforms other existing methods in the detection of novel splicing isoforms and in the minimization of over-predictions. ASPIC also requires less computation time for processing a single gene and an EST cluster. The ASPIC web resource is available at http://aspic.algo.disco.unimib.it/aspic-devel/.

  16. Spatial distribution of cosmetic-procedure businesses in two U.S. cities: a pilot mapping and validation study.

    PubMed

    Austin, S Bryn; Gordon, Allegra R; Kennedy, Grace A; Sonneville, Kendrin R; Blossom, Jeffrey; Blood, Emily A

    2013-12-06

    Cosmetic procedures have proliferated rapidly over the past few decades, with over $11 billion spent on cosmetic surgeries and other minimally invasive procedures and another $2.9 billion spent on UV indoor tanning in 2012 in the United States alone. While research interest is increasing in tandem with the growth of the industry, methods have yet to be developed to identify and geographically locate the myriad types of businesses purveying cosmetic procedures. The geographic location of cosmetic-procedure businesses is a critical element in understanding the public health impact of this industry; however, no studies we are aware of have developed valid and feasible methods for spatial analyses of these types of businesses. The aim of this pilot validation study was to establish the feasibility of identifying businesses offering surgical and minimally invasive cosmetic procedures and to characterize the spatial distribution of these businesses. We developed and tested three methods for creating a geocoded list of cosmetic-procedure businesses in Boston (MA) and Seattle (WA), USA, comparing each method on sensitivity and staff time required per confirmed cosmetic-procedure business. Methods varied substantially. Our findings represent an important step toward enabling rigorous health-linked spatial analyses of the health implications of this little-understood industry.

  17. Semi-implicit finite difference methods for three-dimensional shallow water flow

    USGS Publications Warehouse

    Casulli, Vincenzo; Cheng, Ralph T.

    1992-01-01

    A semi-implicit finite difference method for the numerical solution of three-dimensional shallow water flows is presented and discussed. The governing equations are the primitive three-dimensional turbulent mean flow equations, where the pressure distribution in the vertical has been assumed to be hydrostatic. In the method of solution, a minimal degree of implicitness has been adopted in such a fashion that the resulting algorithm is stable and gives maximal computational efficiency at minimal computational cost. At each time step the numerical method requires the solution of one large linear system, which can be formally decomposed into a set of small tridiagonal systems coupled with one pentadiagonal system. All these linear systems are symmetric and positive definite, so the existence and uniqueness of the numerical solution are assured. When only one vertical layer is specified, this method reduces as a special case to a semi-implicit scheme for solving the corresponding two-dimensional shallow water equations. The resulting two- and three-dimensional algorithm has been shown to be fast, accurate and mass-conservative, and can also be applied to simulate flooding and drying of tidal mud-flats in conjunction with three-dimensional flows. Furthermore, the resulting algorithm is fully vectorizable for efficient implementation on modern vector computers.
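
    The small tridiagonal systems produced by the decomposition are solvable in linear time. As an illustration (a generic textbook routine, not the paper's code), the classic Thomas algorithm:

      import numpy as np

      def thomas(a, b, c, d):
          """Solve a tridiagonal system with sub-, main- and super-diagonals
          a, b, c (a[0] and c[-1] unused) and right-hand side d in O(n)."""
          n = len(d)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      # quick check against a dense solve
      n = 6
      a = np.full(n, -1.0); b = np.full(n, 4.0); c = np.full(n, -1.0)
      d = np.arange(1.0, n + 1)
      M = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
      print(np.allclose(thomas(a, b, c, d), np.linalg.solve(M, d)))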

  18. Lightning Charge Retrievals: Dimensional Reduction, LDAR Constraints, and a First Comparison w/ LIS Satellite Data

    NASA Technical Reports Server (NTRS)

    Koshak, William; Krider, E. Philip; Murray, Natalie; Boccippio, Dennis

    2007-01-01

    A "dimensional reduction" (DR) method is introduced for analyzing lightning field changes whereby the number of unknowns in a discrete two-charge model is reduced from the standard eight to just four. The four unknowns are found by performing a numerical minimization of a chi-squared goodness-of-fit function. At each step of the minimization, an Overdetermined Fixed Matrix (OFM) method is used to immediately retrieve the best "residual source". In this way, all 8 parameters are found, yet a numerical search of only 4 parameters is required. The inversion method is applied to the understanding of lightning charge retrievals. The accuracy of the DR method has been assessed by comparing retrievals with data provided by the Lightning Detection And Ranging (LDAR) instrument. Because lightning effectively deposits charge within thundercloud charge centers and because LDAR traces the geometrical development of the lightning channel with high precision, the LDAR data provides an ideal constraint for finding the best model charge solutions. In particular, LDAR data can be used to help determine both the horizontal and vertical positions of the model charges, thereby eliminating dipole ambiguities. The results of the LDAR-constrained charge retrieval method have been compared to the locations of optical pulses/flash locations detected by the Lightning Imaging Sensor (LIS).

  19. Spatial Distribution of Cosmetic-Procedure Businesses in Two U.S. Cities: A Pilot Mapping and Validation Study

    PubMed Central

    Austin, S. Bryn; Gordon, Allegra R.; Kennedy, Grace A.; Sonneville, Kendrin R.; Blossom, Jeffrey; Blood, Emily A.

    2013-01-01

    Cosmetic procedures have proliferated rapidly over the past few decades, with over $11 billion spent on cosmetic surgeries and other minimally invasive procedures and another $2.9 billion spent on UV indoor tanning in 2012 in the United States alone. While research interest is increasing in tandem with the growth of the industry, methods have yet to be developed to identify and geographically locate the myriad types of businesses purveying cosmetic procedures. The geographic location of cosmetic-procedure businesses is a critical element in understanding the public health impact of this industry; however, no studies we are aware of have developed valid and feasible methods for spatial analyses of these types of businesses. The aim of this pilot validation study was to establish the feasibility of identifying businesses offering surgical and minimally invasive cosmetic procedures and to characterize the spatial distribution of these businesses. We developed and tested three methods for creating a geocoded list of cosmetic-procedure businesses in Boston (MA) and Seattle (WA), USA, comparing each method on sensitivity and staff time required per confirmed cosmetic-procedure business. Methods varied substantially. Our findings represent an important step toward enabling rigorous health-linked spatial analyses of the health implications of this little-understood industry. PMID:24322394

  20. New, Patented Technique for Naturally Restoring Healthy Vision

    NASA Astrophysics Data System (ADS)

    Anganes, Andrew A.; McLeod, Roger David; Machado, Milena

    2009-05-01

    The patented NATUROPTIC METHOD FOR RESTORING HEALTHY VISION claims to be a novel teaching method for safely and naturally improving vision. It is a simple tutoring process designed to work quickly, requiring only a minimal number of sessions for improvement. We investigated these claims by applying the Naturoptic Method to our own vision over a period of time. Research was conducted at off-campus locations, mentored by the creator of the Naturoptic Method. We assessed our initial visual acuity and subsequent progress using standard Snellen eye charts. Our research is designed to document successive improvements in vision and to assess our potential for teaching the method. The Naturoptic Board encourages work-study memorial awards for students: "The David Matthew McLeod Memorial Award" and "The Kaan Balam Matagamon Memorial Award," with net earnings shared by the designees, academic entities, the American Indians in Science and Engineering Society (AISES), or charity. The Board requires awardees, students, and associated entities to sign non-disclosure agreements.

  1. Analysis of automatic repeat request methods for deep-space downlinks

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Ekroot, L.

    1995-01-01

    Automatic repeat request (ARQ) methods cannot increase the capacity of a memoryless channel. However, they can be used to decrease the complexity of the channel-coding system to achieve essentially error-free transmission and to reduce link margins when the channel characteristics are poorly predictable. This article considers ARQ methods on a power-limited channel (e.g., the deep-space channel), where it is important to minimize the total power needed to transmit the data, as opposed to a bandwidth-limited channel (e.g., terrestrial data links), where the spectral efficiency or the total required transmission time is the most relevant performance measure. In the analysis, we compare the performance of three reference concatenated coded systems used in actual deep-space missions to that obtainable by ARQ methods using the same codes, in terms of required power, time to transmit with a given number of retransmissions, and achievable probability of word error. The ultimate limits of ARQ with an arbitrary number of retransmissions are also derived.

  2. Multidimensional NMR inversion without Kronecker products: Multilinear inversion

    NASA Astrophysics Data System (ADS)

    Medellín, David; Ravi, Vivek R.; Torres-Verdín, Carlos

    2016-08-01

    Multidimensional NMR inversion using Kronecker products poses several challenges. First, kernel compression is only possible when the kernel matrices are separable, and in recent years, there has been an increasing interest in NMR sequences with non-separable kernels. Second, in three or more dimensions, the singular value decomposition is not unique; therefore kernel compression is not well-defined for higher dimensions. Without kernel compression, the Kronecker product yields matrices that require large amounts of memory, making the inversion intractable for personal computers. Finally, incorporating arbitrary regularization terms is not possible using the Lawson-Hanson (LH) or the Butler-Reeds-Dawson (BRD) algorithms. We develop a minimization-based inversion method that circumvents the above problems by using multilinear forms to perform multidimensional NMR inversion without using kernel compression or Kronecker products. The new method is memory efficient, requiring less than 0.1% of the memory required by the LH or BRD methods. It can also be extended to arbitrary dimensions and adapted to include non-separable kernels, linear constraints, and arbitrary regularization terms. Additionally, it is easy to implement because only a cost function and its first derivative are required to perform the inversion.
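
    The memory saving comes from applying the kernels as a multilinear form instead of materializing their Kronecker product. Below is a minimal sketch for the separable 2D case (kernels, sizes, and the regularization weight are illustrative; the paper's method also covers non-separable kernels, which this sketch does not): only a cost function and its first derivative are handed to a standard bound-constrained minimizer.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      t1, t2 = np.linspace(0.01, 1, 40), np.linspace(0.01, 1, 50)
      T1, T2 = np.logspace(-2, 0, 25), np.logspace(-2, 0, 30)
      K1 = 1 - np.exp(-t1[:, None] / T1[None, :])   # inversion-recovery kernel
      K2 = np.exp(-t2[:, None] / T2[None, :])       # CPMG decay kernel
      F_true = np.zeros((25, 30)); F_true[10, 15] = 1.0
      M = np.einsum('ia,ab,jb->ij', K1, F_true, K2)
      M += 1e-4 * rng.standard_normal(M.shape)      # noisy "measurement"

      alpha = 1e-3                                  # Tikhonov weight

      def cost_grad(f):
          F = f.reshape(25, 30)
          R = np.einsum('ia,ab,jb->ij', K1, F, K2) - M       # residual
          g = 2 * np.einsum('ia,ij,jb->ab', K1, R, K2) + 2 * alpha * F
          return np.sum(R * R) + alpha * np.sum(F * F), g.ravel()

      res = minimize(cost_grad, np.zeros(25 * 30), jac=True, method='L-BFGS-B',
                     bounds=[(0.0, None)] * (25 * 30))
      print(res.fun)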

  3. Assessing the convergence of LHS Monte Carlo simulations of wastewater treatment models.

    PubMed

    Benedetti, Lorenzo; Claeys, Filip; Nopens, Ingmar; Vanrolleghem, Peter A

    2011-01-01

    Monte Carlo (MC) simulation appears to be the only currently adopted tool to estimate global sensitivities and uncertainties in wastewater treatment modelling. Such models are highly complex, dynamic and non-linear, requiring long computation times, especially in the scope of MC simulation, due to the large number of simulations usually required. However, no stopping rule to decide on the number of simulations required to achieve a given confidence in the MC simulation results has been adopted so far in the field. In this work, a pragmatic method is proposed to minimize the computation time by using a combination of several criteria. It makes no use of prior knowledge about the model, is very simple, intuitive and can be automated: all convenient features in engineering applications. A case study is used to show an application of the method, and the results indicate that the required number of simulations strongly depends on the model output(s) selected, and on the type and desired accuracy of the analysis conducted. Hence, no prior indication is available regarding the necessary number of MC simulations, but the proposed method is capable of dealing with these variations and stopping the calculations after convergence is reached.
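
    One way such a stopping rule can look in practice (a simple stand-in for the paper's combination of criteria; the batch size, tolerance, and toy model output are assumptions) is to run batches until the confidence interval on the output statistic stabilizes:

      import numpy as np

      def run_until_converged(simulate, batch=50, rel_tol=0.02, max_runs=5000):
          """Add Monte Carlo batches until the 95% CI half-width of the
          output mean drops below rel_tol * |mean| (a stand-in for a
          combination of convergence criteria)."""
          rng = np.random.default_rng(2)
          out = []
          while len(out) < max_runs:
              out.extend(simulate(rng) for _ in range(batch))
              y = np.asarray(out)
              half = 1.96 * y.std(ddof=1) / np.sqrt(len(y))
              if half < rel_tol * abs(y.mean()):
                  break
          return np.asarray(out)

      # Hypothetical stand-in for one model output (e.g. an effluent metric):
      y = run_until_converged(lambda rng: 10.0 + rng.normal(0.0, 2.0))
      print(len(y), y.mean())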

  4. Assimilating concentration observations for transport and dispersion modeling in a meandering wind field

    NASA Astrophysics Data System (ADS)

    Haupt, Sue Ellen; Beyer-Lout, Anke; Long, Kerrie J.; Young, George S.

    Assimilating concentration data into an atmospheric transport and dispersion model can provide information to improve downwind concentration forecasts. The forecast model is typically a one-way coupled set of equations: the meteorological equations impact the concentration, but the concentration does not generally affect the meteorological field. Thus, indirect methods of using concentration data to influence the meteorological variables are required. The problem studied here involves a simple wind field forcing Gaussian dispersion. Two methods of assimilating concentration data to infer the wind direction are demonstrated. The first method is Lagrangian in nature and treats the puff as an entity using feature extraction coupled with nudging. The second method is an Eulerian field approach akin to traditional variational approaches, but minimizes the error by using a genetic algorithm (GA) to directly optimize the match between observations and predictions. Both methods show success at inferring the wind field. The GA-variational method, however, is more accurate but requires more computational time. Dynamic assimilation of a continuous release modeled by a Gaussian plume is also demonstrated using the genetic algorithm approach.
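
    The GA approach can be sketched compactly. In the following hypothetical example (receptor layout, dispersion model, and GA settings are all invented for illustration), a population of candidate wind directions evolves to minimize the mismatch between observed and predicted Gaussian-plume concentrations:

      import numpy as np

      rng = np.random.default_rng(3)
      receptors = rng.uniform(-500, 500, size=(40, 2))   # sensor positions (m)

      def plume(theta, xy, q=1.0):
          """Ground-level Gaussian plume from a source at the origin with
          the wind blowing toward angle theta (rad); the dispersion model
          is a crude illustration, not a calibrated one."""
          c, s = np.cos(theta), np.sin(theta)
          x = xy[:, 0] * c + xy[:, 1] * s        # downwind distance
          y = -xy[:, 0] * s + xy[:, 1] * c       # crosswind distance
          x = np.where(x > 1.0, x, np.nan)       # no plume upwind
          sig = 0.1 * x                          # linearly growing spread
          conc = q / (2 * np.pi * sig**2) * np.exp(-0.5 * (y / sig)**2)
          return np.nan_to_num(conc)

      obs = plume(np.deg2rad(40.0), receptors)   # "observations", true dir 40

      def cost(thetas):
          return np.array([np.sum((plume(t, receptors) - obs)**2)
                           for t in thetas])

      pop = rng.uniform(0, 2 * np.pi, 60)        # candidate wind directions
      for _ in range(80):
          f = cost(pop)
          winners = np.array([min(rng.integers(0, 60, 2), key=lambda i: f[i])
                              for _ in range(60)])       # binary tournament
          parents = pop[winners]
          pop = 0.5 * (parents + rng.permutation(parents))  # blend crossover
          pop += rng.normal(0, 0.05, 60)                    # Gaussian mutation
      print(np.rad2deg(pop[np.argmin(cost(pop))]) % 360)    # ~40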

  5. Iterative wave-front reconstruction in the Fourier domain.

    PubMed

    Bond, Charlotte Z; Correia, Carlos M; Sauvage, Jean-François; Neichel, Benoit; Fusco, Thierry

    2017-05-15

    The use of Fourier methods in wave-front reconstruction can significantly reduce the computation time for large telescopes with a high number of degrees of freedom. However, Fourier algorithms for discrete data require a rectangular data set that conforms to specific boundary requirements, whereas wave-front sensor data are typically defined over a circular domain (the telescope pupil). Here we present an iterative Gerchberg routine, modified for the purposes of discrete wave-front reconstruction, which adapts the measurement data (wave-front sensor slopes) for Fourier analysis, fulfilling the requirements of the fast Fourier transform (FFT) and providing accurate reconstruction. The routine is used in the adaptation step only and can be coupled to any other Wiener-like or least-squares method. We compare simulations using this method with previous Fourier methods and show an increase in performance in terms of Strehl ratio and a reduction in noise propagation for a 40×40 SPHERE-like adaptive optics system. For closed-loop operation with minimal iterations, the Gerchberg method improves the K-band Strehl ratio from 95.4% to 96.9%. This corresponds to an rms improvement of ~40 nm and avoids the high spatial frequency errors present in other methods, providing an increase in contrast towards the edge of the correctable band.
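
    A Gerchberg-style adaptation step is itself only a few lines. The sketch below is a simplified stand-in for the routine described here (grid size, band limit, and the synthetic slope data are assumptions): data defined on the circular pupil are extended to the full rectangular grid by alternating a Fourier-domain band limit with re-imposition of the measured values.

      import numpy as np

      def gerchberg_extend(data, pupil, band, n_iter=10):
          """Extend `data` (valid where `pupil` is True) to the full grid by
          alternating an FFT band limit (`band`: True for retained spatial
          frequencies) with re-imposing the measurements inside the pupil."""
          est = np.where(pupil, data, 0.0)
          for _ in range(n_iter):
              est = np.fft.ifft2(np.fft.fft2(est) * band).real
              est[pupil] = data[pupil]
          return est

      n = 64
      yy, xx = np.mgrid[-1:1:n*1j, -1:1:n*1j]
      pupil = xx**2 + yy**2 <= 1.0
      slopes = np.sin(3 * xx) + 0.5 * yy            # stand-in for WFS slopes
      fx = np.abs(np.fft.fftfreq(n))
      band = (fx[None, :] < 0.2) & (fx[:, None] < 0.2)
      extended = gerchberg_extend(slopes, pupil, band)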

  6. Investigation of Cleanliness Verification Techniques for Rocket Engine Hardware

    NASA Technical Reports Server (NTRS)

    Fritzemeier, Marilyn L.; Skowronski, Raymund P.

    1994-01-01

    Oxidizer propellant systems for liquid-fueled rocket engines must meet stringent cleanliness requirements for particulate and nonvolatile residue. These requirements were established to limit residual contaminants which could block small orifices or ignite in the oxidizer system during engine operation. Limiting organic residues in high pressure oxygen systems, such as in the Space Shuttle Main Engine (SSME), is particularly important. The current method of cleanliness verification for the SSME uses an organic solvent flush of the critical hardware surfaces. The solvent is filtered and analyzed for particulate matter followed by gravimetric determination of the nonvolatile residue (NVR) content of the filtered solvent. The organic solvents currently specified for use (1, 1, 1-trichloroethane and CFC-113) are ozone-depleting chemicals slated for elimination by December 1995. A test program is in progress to evaluate alternative methods for cleanliness verification that do not require the use of ozone-depleting chemicals and that minimize or eliminate the use of solvents regulated as hazardous air pollutants or smog precursors. Initial results from the laboratory test program to evaluate aqueous-based methods and organic solvent flush methods for NVR verification are provided and compared with results obtained using the current method. Evaluation of the alternative methods was conducted using a range of contaminants encountered in the manufacture of rocket engine hardware.

  7. Computational methods for reactive transport modeling: An extended law of mass-action, xLMA, method for multiphase equilibrium calculations

    NASA Astrophysics Data System (ADS)

    Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg; Saar, Martin O.

    2016-10-01

    We present an extended law of mass-action (xLMA) method for multiphase equilibrium calculations and apply it in the context of reactive transport modeling. This extended LMA formulation differs from its conventional counterpart in that (i) it is directly derived from the Gibbs energy minimization (GEM) problem (i.e., the fundamental problem that describes the state of equilibrium of a chemical system under constant temperature and pressure); and (ii) it extends the conventional mass-action equations with Lagrange multipliers from the Gibbs energy minimization problem, which can be interpreted as stability indices of the chemical species. Accounting for these multipliers enables the method to determine all stable phases without presuming their types (e.g., aqueous, gaseous) or their presence in the equilibrium state. Therefore, the here proposed xLMA method inherits traits of Gibbs energy minimization algorithms that allow it to naturally detect the phases present in equilibrium, which can be single-component phases (e.g., pure solids or liquids) or non-ideal multi-component phases (e.g., aqueous, melts, gaseous, solid solutions, adsorption, or ion exchange). Moreover, our xLMA method requires no technique that tentatively adds or removes reactions based on phase stability indices (e.g., saturation indices for minerals), since the extended mass-action equations are valid even when their corresponding reactions involve unstable species. We successfully apply the proposed method to a reactive transport modeling problem in which we use PHREEQC and GEMS as alternative backends for the calculation of thermodynamic properties such as equilibrium constants of reactions, standard chemical potentials of species, and activity coefficients. Our tests show that our algorithm is efficient and robust for demanding applications, such as reactive transport modeling, where it converges within 1-3 iterations in most cases. The proposed xLMA method is implemented in Reaktoro, a unified open-source framework for modeling chemically reactive systems.
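
    The GEM problem from which the xLMA equations are derived can be illustrated with a toy ideal-gas system (the standard chemical potentials below are invented, not thermodynamic data): minimize the total Gibbs energy subject to element-balance constraints, whose Lagrange multipliers are the quantities the xLMA formulation carries into the extended mass-action equations.

      import numpy as np
      from scipy.optimize import minimize

      # Toy GEM for the ideal-gas system N2 + 3 H2 <-> 2 NH3 at fixed T, P.
      species = ["N2", "H2", "NH3"]
      mu0 = np.array([0.0, 0.0, -3.0])     # invented mu0/RT values
      A = np.array([[2, 0, 1],             # N atoms per molecule
                    [0, 2, 3]])            # H atoms per molecule
      n0 = np.array([1.0, 3.0, 0.0])       # initial mixture (mol)
      b = A @ n0                           # conserved element amounts

      def gibbs(n):
          n = np.maximum(n, 1e-12)         # keep the logarithms finite
          return float(n @ (mu0 + np.log(n / n.sum())))   # G/RT, ideal gas

      res = minimize(gibbs, x0=n0 + 0.1, method="SLSQP",
                     bounds=[(0.0, None)] * 3,
                     constraints=[{"type": "eq", "fun": lambda n: A @ n - b}])
      print(dict(zip(species, res.x.round(4))))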

  8. Controlled Visual Sensing and Exploration

    DTIC Science & Technology

    2015-09-16

    point cloud. This is clearly insufficient for most other tasks that require at least the topology of the scene to determine what surfaces or “objects...or whether it is empty space, as in the latter case it is traversable, in the former it is not. To this end, we have developed methods for topology ...Specific achievements include: • We have shown that surface topology and geometry can be computed without minimal surface bias, yielding water-tight

  9. Thermal Desorption Capability Development for Enhanced On-site Health Risk Assessment: HAPSITE (registered trademark) ER Passive Sampling in the Field

    DTIC Science & Technology

    2015-06-07

    Field-Portable Gas Chromatograph-Mass Spectrometer.” Forensic Toxicol, 2006, 24, 17-22. Smith, P. “Person-Portable Gas Chromatography : Rapid Temperature...bench-top Gas Chromatograph-Mass Spectrometer (GC-MS) system (ISQ). Nine sites were sampled and analyzed for compounds using Environmental Protection...extraction methods for Liquid Chromatography -MS (LC- MS). Additionally, TD is approximately 1000X more sensitive, requires minimal sample preparation

  10. New trends in Taylor series based applications

    NASA Astrophysics Data System (ADS)

    Kocina, Filip; Šátek, Václav; Veigend, Petr; Nečasová, Gabriela; Valenta, Václav; Kunovský, Jiří

    2016-06-01

    The paper deals with the solution of large systems of linear ODEs when minimal communication among parallel processors is required. The Modern Taylor Series Method (MTSM) is used. The MTSM allows the use of a higher order during the computation, which means a larger integration step size while keeping the desired accuracy. As an example of a complex system we take the Telegraph Equation Model. Symbolic and numeric solutions are compared when a harmonic input signal is used.
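
    The core recurrence of a Taylor-series integrator for a linear system y' = Ay fits in a few lines. This is a sketch under simplifying assumptions, not the MTSM implementation itself: terms are generated recursively and summed until they drop below a tolerance, so the order adapts to the step size and large steps remain accurate.

      import numpy as np

      def taylor_step(A, y, h, tol=1e-12, max_order=60):
          """One Taylor step for y' = A y: term_k = (h/k) A term_{k-1},
          summed until terms fall below tol, so the order adapts to h."""
          term, y_new = y.copy(), y.copy()
          for k in range(1, max_order + 1):
              term = (h / k) * (A @ term)
              y_new = y_new + term
              if np.linalg.norm(term) < tol:
                  break
          return y_new

      A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator
      y = np.array([1.0, 0.0])
      for _ in range(10):                       # ten large steps, h = 1
          y = taylor_step(A, y, 1.0)
      print(y, [np.cos(10.0), -np.sin(10.0)])   # numerical vs. exact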

  11. Overlay improvement methods with diffraction based overlay and integrated metrology

    NASA Astrophysics Data System (ADS)

    Nam, Young-Sun; Kim, Sunny; Shin, Ju Hee; Choi, Young Sin; Yun, Sang Ho; Kim, Young Hoon; Shin, Si Woo; Kong, Jeong Heung; Kang, Young Seog; Ha, Hun Hwan

    2015-03-01

    To meet the new requirement of securing more overlay margin, optical overlay measurement not only faces technical limitations in representing cell-pattern behavior, but larger measurement samples also become inevitable for minimizing statistical errors and better estimating conditions within a lot. For these reasons, diffraction based overlay (DBO) and integrated metrology (IM) are proposed in this paper as the main new approaches for overlay enhancement.

  12. Error, Power, and Blind Sentinels: The Statistics of Seagrass Monitoring

    PubMed Central

    Schultz, Stewart T.; Kruschel, Claudia; Bakran-Petricioli, Tatjana; Petricioli, Donat

    2015-01-01

    We derive statistical properties of standard methods for monitoring of habitat cover worldwide, and criticize them in the context of mandated seagrass monitoring programs, as exemplified by Posidonia oceanica in the Mediterranean Sea. We report the novel result that cartographic methods with non-trivial classification errors are generally incapable of reliably detecting habitat cover losses less than about 30 to 50%, and the field labor required to increase their precision can be orders of magnitude higher than that required to estimate habitat loss directly in a field campaign. We derive a universal utility threshold of classification error in habitat maps that represents the minimum habitat map accuracy above which direct methods are superior. Widespread government reliance on blind-sentinel methods for monitoring seafloor can obscure the gradual and currently ongoing losses of benthic resources until the time has long passed for meaningful management intervention. We find two classes of methods with very high statistical power for detecting small habitat cover losses: 1) fixed-plot direct methods, which are over 100 times as efficient as direct random-plot methods in a variable habitat mosaic; and 2) remote methods with very low classification error such as geospatial underwater videography, which is an emerging, low-cost, non-destructive method for documenting small changes at millimeter visual resolution. General adoption of these methods and their further development will require a fundamental cultural change in conservation and management bodies towards the recognition and promotion of requirements of minimal statistical power and precision in the development of international goals for monitoring these valuable resources and the ecological services they provide. PMID:26367863

  13. NMR, MRI, and spectroscopic MRI in inhomogeneous fields

    DOEpatents

    Demas, Vasiliki; Pines, Alexander; Martin, Rachel W; Franck, John; Reimer, Jeffrey A

    2013-12-24

    A method for locally creating effectively homogeneous or "clean" magnetic field gradients (of high uniformity) for imaging (with NMR, MRI, or spectroscopic MRI) in both in-situ and ex-situ systems with highly inhomogeneous field strength. The method of imaging comprises: a) providing a functional approximation of the inhomogeneous static magnetic field strength B0(r) at a spatial position r; b) providing a temporal functional approximation of G_shim(t) with i basis functions and j variables for each basis function, resulting in v_ij variables; c) providing a measured value Ω, which is the temporally accumulated dephasing due to the inhomogeneities of B0(r); and d) minimizing the difference φ(r, t) − Ω, where the local dephasing angle is φ(r, t) = γ ∫₀ᵗ √( |B1(r, t′)|² + ( r·G_shim(t′) + ‖B0(r)‖ + Δω(r, t′)/γ )² ) dt′, by varying the v_ij variables to form a set of minimized v_ij variables. The method requires calibration of the static fields prior to minimization, but may thereafter be implemented without such calibration, may be used in open or closed systems, and is potentially portable.

  14. Effects of tools inserted through snake-like surgical manipulators.

    PubMed

    Murphy, Ryan J; Otake, Yoshito; Wolfe, Kevin C; Taylor, Russell H; Armand, Mehran

    2014-01-01

    Snake-like manipulators with a large, open lumen can offer improved treatment alternatives for minimally- and less-invasive surgeries. In these procedures, surgeons use the manipulator to introduce and control flexible tools in the surgical environment. This paper describes a predictive algorithm, based on energy minimization, for estimating the configuration of nonconstant-curvature, cable-driven manipulators given the tip position. During experimental bending of the manipulator with and without a tool inserted in its lumen, images were recorded from an overhead camera in conjunction with actuation cable tension and length. To investigate the accuracy, the estimated manipulator configuration from the model and the ground-truth configuration measured from the image were compared. Additional analysis focused on the response differences for the manipulator with and without a tool inserted through the lumen. Results indicate that the energy minimization model predicts manipulator configuration with an error of 0.24 ± 0.22 mm without tools in the lumen and 0.24 ± 0.19 mm with tools in the lumen (no significant difference, p = 0.81). Moreover, tools did not introduce noticeable perturbations in the manipulator trajectory; however, there was an increase in the force required to reach a given configuration. These results support the use of the proposed estimation method for calculating the shape of the manipulator with a tool inserted in its lumen when an accuracy of at least 1 mm is required.

  15. Space Operations Center orbit altitude selection strategy

    NASA Technical Reports Server (NTRS)

    Indrikis, J.; Myers, H. L.

    1982-01-01

    The strategy for operational altitude selection has to respond to the Space Operations Center's (SOC) maintenance requirements and the logistics demands of the missions to be supported by the SOC. Three orbit strategies are developed: two constant-altitude and one variable-altitude. In order to minimize the effect of atmospheric uncertainty, the dynamic altitude method is recommended. In this approach the SOC will operate at the optimum altitude for the prevailing atmospheric conditions and logistics model, provided that mission safety constraints are not violated. Over a typical solar activity cycle this method produces significant savings in the overall logistics cost.

  16. Information categorization approach to literary authorship disputes

    NASA Astrophysics Data System (ADS)

    Yang, Albert C.-C.; Peng, C.-K.; Yien, H.-W.; Goldberger, Ary L.

    2003-11-01

    Scientific analysis of the linguistic styles of different authors has generated considerable interest. We present a generic approach to measuring the similarity of two symbolic sequences that requires minimal background knowledge about a given human language. Our analysis is based on word rank order-frequency statistics and phylogenetic tree construction. We demonstrate the applicability of this method to historic authorship questions related to the classic Chinese novel “The Dream of the Red Chamber,” to the plays of William Shakespeare, and to the Federalist papers. This method may also provide a simple approach to other large databases based on their information content.
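
    A crude stand-in for a word rank-order similarity measure (not the authors' exact metric) can be sketched as follows; pairwise distances computed this way over a corpus can then feed a phylogenetic-tree construction that groups texts by author.

      from collections import Counter

      def rank_order_distance(text_a, text_b):
          """Mean rank difference of words shared by two texts."""
          def ranks(text):
              freq = Counter(text.lower().split())
              return {w: r for r, (w, _) in enumerate(freq.most_common())}
          ra, rb = ranks(text_a), ranks(text_b)
          shared = set(ra) & set(rb)
          if not shared:
              return float("inf")
          return sum(abs(ra[w] - rb[w]) for w in shared) / len(shared)

      print(rank_order_distance("the cat sat on the mat",
                                "the dog sat on the log"))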

  17. Adaptive Low Dissipative High Order Filter Methods for Multiscale MHD Flows

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, Bjoern

    2004-01-01

    Adaptive low-dissipative high order filter finite difference methods for long-time wave propagation of shock/turbulence/combustion compressible viscous MHD flows have been constructed. Several variants of the filter approach that cater to different flow types are proposed. These filters provide a natural and efficient way to minimize the numerical error in the divergence of the magnetic field (∇·B), in the sense that no standard divergence cleaning is required. For certain 2-D MHD test problems, these filter schemes have been shown to preserve the divergence-free property of the magnetic field.

  18. Computer numeric control generation of toric surfaces

    NASA Astrophysics Data System (ADS)

    Bradley, Norman D.; Ball, Gary A.; Keller, John R.

    1994-05-01

    Until recently, the manufacture of toric ophthalmic lenses relied largely upon expensive, manual techniques for generation and polishing. Recent gains in computer numeric control (CNC) technology and tooling enable lens designers to employ single- point diamond, fly-cutting methods in the production of torics. Fly-cutting methods continue to improve, significantly expanding lens design possibilities while lowering production costs. Advantages of CNC fly cutting include precise control of surface geometry, rapid production with high throughput, and high-quality lens surface finishes requiring minimal polishing. As accessibility and affordability increase within the ophthalmic market, torics promise to dramatically expand lens design choices available to consumers.

  19. A shifted hyperbolic augmented Lagrangian-based artificial fish two-swarm algorithm with guaranteed convergence for constrained global optimization

    NASA Astrophysics Data System (ADS)

    Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.

    2016-12-01

    This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the ε_k-global minimization of a bound constrained optimization subproblem, where ε_k → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.

  20. JSC Metal Finishing Waste Minimization Methods

    NASA Technical Reports Server (NTRS)

    Sullivan, Erica

    2003-01-01

    The paper discusses the following: Johnson Space Center (JSC) has achieved VPP Star status and is ISO 9001 compliant. The Structural Engineering Division in the Engineering Directorate is responsible for operating the metal finishing facility at JSC. The Engineering Directorate is responsible for $71.4 million of space flight hardware design, fabrication and testing. The JSC Metal Finishing Facility processes flight hardware in support of the programs, in particular schedule- and mission-critical flight hardware, and is operated by Rothe Joint Venture. The Facility provides the following processes: anodizing, alodining, passivation, and pickling. The JSC Metal Finishing Facility was completely rebuilt in 1998 at a total cost of $366,000, with all new tanks, electrical, plumbing, and ventilation installed. It was designed to meet modern safety, environmental, and quality requirements, to minimize contamination, and to provide the highest quality finishes.

  1. Change detection technique for muscle tone during static stretching by continuous muscle viscoelasticity monitoring using wearable indentation tester.

    PubMed

    Okamura, Naomi; Kobayashi, Yo; Sugano, Shigeki; Fujie, Masakatsu G

    2017-07-01

    Static stretching is widely performed to decrease muscle tone as part of rehabilitation protocols. Determining the optimal duration of static stretching is important for minimizing the time required for rehabilitation therapy and would be helpful for maintaining the patient's motivation towards daily rehabilitation tasks. Several studies have evaluated static stretching; however, the recommended duration of static stretching varies widely, generally between 15-30 s, because traditional methods for the assessment of muscle tone do not monitor the continuous change in the target muscle's state. We have developed a method to monitor the viscoelasticity of a muscle continuously during static stretching, using a wearable indentation tester. In this study, we investigated a suitable signal processing method to detect the time required to change the muscle tone, utilizing data collected with the wearable indentation tester. By calculating a viscoelastic index over a certain time window, we confirmed that the stretching duration required to bring about a decrease in muscle tone could be obtained with an accuracy on the order of 1 s.

  2. Gamma radiation in the reduction of Salmonella spp. inoculated on minimally processed watercress (Nasturtium officinalis)

    NASA Astrophysics Data System (ADS)

    Martins, C. G.; Behrens, J. H.; Destro, M. T.; Franco, B. D. G. M.; Vizeu, D. M.; Hutzler, B.; Landgraf, M.

    2004-09-01

    Consumer attitudes towards foods have changed in the last two decades, increasing the demand for fresh-like products; consequently, less extreme treatments and fewer additives are required. Minimally processed foods have fresh-like characteristics and satisfy this new consumer demand. Besides freshness, minimal processing also provides the convenience required by the market. Salad vegetables can be a source of pathogens such as Salmonella, Escherichia coli O157:H7 and Shigella spp., and minimal processing does not reduce pathogenic microorganisms to safe levels. Therefore, this study was carried out to improve the microbiological safety and shelf-life of minimally processed vegetables using gamma radiation. Minimally processed watercress inoculated with a cocktail of Salmonella spp. was exposed to 0.0, 0.2, 0.5, 0.7, 1.0, 1.2 and 1.5 kGy. Irradiated samples were diluted 1:10 in saline peptone water and plated onto tryptic soy agar, which was incubated at 37°C for 24 h. D10 values for Salmonella spp. inoculated in watercress varied from 0.29 to 0.43 kGy; therefore, a dose of 1.7 kGy will reduce the Salmonella population in watercress by 4 log10. The shelf-life was increased by 1½ days when the product was exposed to 1 kGy.
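
    For context, a D10 value is the dose that reduces the surviving population tenfold, obtained from the slope of log10 survivors versus dose. A minimal sketch with hypothetical survivor counts (not the study's data):

      import numpy as np

      # Hypothetical survivor counts after irradiating inoculated produce;
      # D10 = -1 / slope of log10(N) versus dose.
      dose = np.array([0.0, 0.2, 0.5, 0.7, 1.0, 1.2, 1.5])     # kGy
      log_n = np.array([7.0, 6.4, 5.6, 5.0, 4.2, 3.6, 2.8])    # log10 CFU/g

      slope, _ = np.polyfit(dose, log_n, 1)
      d10 = -1.0 / slope
      print(f"D10 = {d10:.2f} kGy")                    # ~0.36 kGy here
      print(f"4-log reduction needs {4 * d10:.2f} kGy")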

  3. Enhancing requirements engineering for patient registry software systems with evidence-based components.

    PubMed

    Lindoerfer, Doris; Mansmann, Ulrich

    2017-07-01

    Patient registries are instrumental for medical research. Often their structures are complex and their implementations use composite software systems to meet a wide spectrum of challenges. Commercial and open-source systems are available for registry implementation, but many research groups develop their own systems. Methodological approaches are needed both for the selection of software and for the construction of proprietary systems. We propose an evidence-based checklist summarizing essential items for patient registry software systems (CIPROS) to accelerate the requirements engineering process. Requirements engineering activities for software systems follow traditional software requirements elicitation methods, general software requirements specification (SRS) templates, and standards. We performed a multistep procedure to develop a specific evidence-based CIPROS checklist: (1) a systematic literature review to build a comprehensive collection of technical concepts, (2) a qualitative content analysis to define a catalogue of relevant criteria, and (3) a checklist to construct a minimal appraisal standard. CIPROS is based on 64 publications and covers twelve sections with a total of 72 items. CIPROS also defines software requirements. Comparing CIPROS with traditional software requirements elicitation methods, SRS templates and standards shows broad consensus but differences regarding registry-specific aspects. Using an evidence-based approach to requirements engineering for registry software adds aspects to the traditional methods and accelerates the software engineering process for registry software. The method we used to construct CIPROS serves as a potential template for creating evidence-based checklists in other fields. The CIPROS list supports developers in assessing requirements for existing systems and formulating requirements for their own systems, while strengthening the reporting of patient registry software system descriptions. It may be a first step toward creating standards for patient registry software system assessments. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Method optimization for non-equilibrium solid phase microextraction sampling of HAPs for GC/MS analysis

    NASA Astrophysics Data System (ADS)

    Zawadowicz, M. A.; Del Negro, L. A.

    2010-12-01

    Hazardous air pollutants (HAPs) are usually present in the atmosphere at the pptv level, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require an overhead in space, money and time that is often prohibitive for primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of the ambient gaseous matrix, a cost-effective technique for selective VOC extraction that is accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature and laminar air flow velocity around the fiber were optimized to give the highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.

  5. Linear Matrix Inequality Method for a Quadratic Performance Index Minimization Problem with a class of Bilinear Matrix Inequality Conditions

    NASA Astrophysics Data System (ADS)

    Tanemura, M.; Chida, Y.

    2016-09-01

    Many control system design problems are expressed as a performance index minimization under BMI conditions. A minimization problem expressed as LMIs, however, can be solved easily because of the convexity of LMIs. Therefore, many researchers have studied transforming a variety of control design problems into convex minimization problems expressed as LMIs. This paper proposes an LMI method for a quadratic performance index minimization problem with a class of BMI conditions. The minimization problem treated in this paper includes design problems of state-feedback gains for switched systems, among others. The effectiveness of the proposed method is verified through a state-feedback gain design for switched systems and a numerical simulation using the designed feedback gains.
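
    The practical appeal of the LMI form is that convexity lets off-the-shelf solvers handle it. A standard example (the Lyapunov stability LMI, not this paper's specific transformation), sketched with the cvxpy package:

      import cvxpy as cp
      import numpy as np

      # Lyapunov LMI: find P > 0 with A^T P + P A < 0, certifying that
      # x' = A x is stable; convexity makes this an easy SDP.
      A = np.array([[0.0, 1.0], [-2.0, -3.0]])
      P = cp.Variable((2, 2), symmetric=True)
      eps = 1e-6
      constraints = [P >> eps * np.eye(2),
                     A.T @ P + P @ A << -eps * np.eye(2)]
      prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
      prob.solve()
      print(prob.status, P.value)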

  6. Detailed requirements document for common software of shuttle program information management system

    NASA Technical Reports Server (NTRS)

    Everette, J. M.; Bradfield, L. D.; Horton, C. L.

    1975-01-01

    Common software was investigated as a method for minimizing the development and maintenance cost of the shuttle program information management system (SPIMS) applications while reducing their development time-frame. The requirements satisfying these criteria are presented, along with the stand-alone modules which may be used directly by applications. The SPIMS applications, operating on the CYBER 74 computer, are specialized information management systems which use System 2000 as a data base manager. Common software provides the features to support user interactions on a CRT terminal using form input and command response capabilities. These features are available as subroutines to the applications.

  7. BetaSCPWeb: side-chain prediction for protein structures using Voronoi diagrams and geometry prioritization

    PubMed Central

    Ryu, Joonghyun; Lee, Mokwon; Cha, Jehyun; Laskowski, Roman A.; Ryu, Seong Eon; Kim, Deok-Soo

    2016-01-01

    Many applications, such as protein design, homology modeling, flexible docking, etc., require the prediction of a protein's optimal side-chain conformations from just its amino acid sequence and backbone structure. Side-chain prediction (SCP) is an NP-hard energy minimization problem. Here, we present BetaSCPWeb, which efficiently computes a conformation close to optimal using a geometry-prioritization method based on the Voronoi diagram of spherical atoms. Its outputs are visual, textual and in PDB file format. The web server is free and open to all users at http://voronoi.hanyang.ac.kr/betascpweb, with no login requirement. PMID:27151195

  8. Light intensity distribution optimization for tunnel lamps in different zones of a long tunnel.

    PubMed

    Lai, Wei; Liu, Xianming; Chen, Weimin; Lei, Xiaohua; Cheng, Xingfu

    2014-09-22

    The light distributions in different tunnel zones have different requirements in order to meet the needs of the driver's visual system. In this paper, the light intensity distributions of tunnel lamps in different zones of a long tunnel are optimized separately. A common nonlinear optimization approach is proposed to minimize power consumption while satisfying the luminance and glare requirements, both on the road surface and on the wall, set by the International Commission on Illumination (CIE). Compared with the reported linear optimization method, the optimization model can save from 11% to 57.6% of the energy under the same installation conditions.

  9. A new method for real-time co-registration of 3D coronary angiography and intravascular ultrasound or optical coherence tomography.

    PubMed

    Carlier, Stéphane; Didday, Rich; Slots, Tristan; Kayaert, Peter; Sonck, Jeroen; El-Mourad, Mike; Preumont, Nicolas; Schoors, Dany; Van Camp, Guy

    2014-06-01

    We present a new, clinically practical method for online co-registration of 3D quantitative coronary angiography (QCA) and intravascular ultrasound (IVUS) or optical coherence tomography (OCT). The workflow is based on two modified commercially available software packages. Reconstruction steps are explained and compared to previously available methods. The feasibility for different clinical scenarios is illustrated. The co-registration appears accurate and robust, and it introduced minimal delay to normal cath lab activities. The new method is based on the 3D angiographic reconstruction of the catheter path and does not require the operator to identify landmarks to establish image synchronization. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. An X-ray diffraction method for semiquantitative mineralogical analysis of Chilean nitrate ore

    USGS Publications Warehouse

    Jackson, J.C.; Ericksen, G.E.

    1997-01-01

    Computer analysis of X-ray diffraction (XRD) data provides a simple method for determining the semiquantitative mineralogical composition of naturally occurring mixtures of saline minerals. The method herein described was adapted from a computer program for the study of mixtures of naturally occurring clay minerals. The program evaluates the relative intensities of selected diagnostic peaks for the minerals in a given mixture, and then calculates the relative concentrations of these minerals. The method requires precise calibration of XRD data for the minerals to be studied and selection of diffraction peaks that minimize inter-compound interferences. The calculated relative abundances are sufficiently accurate for direct comparison with bulk chemical analyses of naturally occurring saline mineral assemblages.
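    In outline, the calculation resembles the sketch below: calibrated per-mineral factors convert measured diagnostic-peak intensities into relative abundances. The minerals and all numbers are hypothetical stand-ins for the calibration the method actually requires.

        # Semiquantitative abundance estimate from diagnostic XRD peaks (toy values).
        import numpy as np

        minerals = ["halite", "nitratine", "darapskite"]
        measured_peak = np.array([1200.0, 800.0, 300.0])  # diagnostic-peak intensities
        calib_factor = np.array([1.0, 0.7, 0.5])          # per-mineral calibration factors

        raw = measured_peak / calib_factor  # convert intensity to an abundance scale
        rel_abundance = raw / raw.sum()     # normalize so the mixture sums to 100%
        for name, frac in zip(minerals, rel_abundance):
            print(f"{name}: {100 * frac:.1f} %")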

  11. An X-ray diffraction method for semiquantitative mineralogical analysis of Chilean nitrate ore

    USGS Publications Warehouse

    Jackson, J.C.; Ericksen, G.E.

    1997-01-01

    Computer analysis of X-ray diffraction (XRD) data provides a simple method for determining the semiquantitative mineralogical composition of naturally occurring mixtures of saline minerals. The method herein described was adapted from a computer program for the study of mixtures of naturally occurring clay minerals. The program evaluates the relative intensities of selected diagnostic peaks for the minerals in a given mixture, and then calculates the relative concentrations of these minerals. The method requires precise calibration of XRD data for the minerals to be studied and selection of diffraction peaks that minimize inter-compound interferences. The calculated relative abundances are sufficiently accurate for direct comparison with bulk chemical analyses of naturally occurring saline mineral assemblages.

  12. Bound-preserving Legendre-WENO finite volume schemes using nonlinear mapping

    NASA Astrophysics Data System (ADS)

    Smith, Timothy; Pantano, Carlos

    2017-11-01

    We present a new method to enforce field bounds in high-order Legendre-WENO finite volume schemes. The strategy consists of reconstructing each field through an intermediate mapping, which by design satisfies realizability constraints. Determination of the coefficients of the polynomial reconstruction involves nonlinear equations that are solved using Newton's method. The selection between the original or mapped reconstruction is implemented dynamically to minimize computational cost. The method has also been generalized to fields that exhibit interdependencies, requiring multi-dimensional mappings. Further, the method does not depend on the existence of a numerical flux function. We will discuss details of the proposed scheme and show results for systems in conservation and non-conservation form. This work was funded by the NSF under Grant DMS 1318161.

  13. Infrared coagulation: a new treatment for hemorrhoids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leicester, R.J.; Nicholls, R.J.; Mann, C.V.

    Many methods, which have effectively reduced the number of patients requiring hospital admission, have been described for the outpatient treatment of hemorrhoids. However, complications have been reported, and the methods are often associated with unpleasant side effects. In 1977, Neiger et al. described a new method using infrared coagulation that produced minimal side effects. The authors have conducted a prospective, randomized trial to evaluate infrared coagulation against more traditional methods of treatment. Their results show that it may be more effective than injection sclerotherapy in treating non-prolapsing hemorrhoids and that it compares favorably with rubber band ligation in most prolapsing hemorrhoids. No complications occurred, and significantly fewer patients experienced pain after infrared coagulation (P < 0.001).

  14. Evolutionary Bi-objective Optimization for Bulldozer and Its Blade in Soil Cutting

    NASA Astrophysics Data System (ADS)

    Sharma, Deepak; Barakat, Nada

    2018-02-01

    An evolutionary optimization approach is adopted in this paper to achieve economical and productive soil cutting simultaneously. The economic aspect is defined by minimizing the power required from the bulldozer, and the soil cutting is made productive by minimizing the cutting time. To determine the power requirement, two force models are adopted from the literature to quantify the cutting force on the blade. Three domain-specific constraints are also proposed: limiting the power drawn from the bulldozer, limiting the maximum force on the bulldozer blade, and achieving the desired production rate. The bi-objective optimization problem is solved using five benchmark multi-objective evolutionary algorithms and one classical optimization technique using the ɛ-constraint method. The Pareto-optimal solutions, including the knee region, are obtained. A post-optimal analysis is then performed on the obtained solutions to decipher relationships among the objectives and decision variables. These relationships are later used to formulate guidelines for selecting the optimal set of input parameters. The obtained results show close agreement with experimental results from the literature.

  15. NLINEAR - NONLINEAR CURVE FITTING PROGRAM

    NASA Technical Reports Server (NTRS)

    Everhart, J. L.

    1994-01-01

    A common method for fitting data is a least-squares fit. In the least-squares method, a user-specified fitting function is utilized in such a way as to minimize the sum of the squares of the distances between the data points and the fitting curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve fitting routine based on a description of the quadratic expansion of the chi-squared statistic. NLINEAR utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function and the chi-square that is to be minimized. The inputs to the program are the mathematical form of the fitting function and the initial values of the parameters to be estimated. This approach provides the user with statistical information such as goodness of fit and estimated values of parameters that produce the highest degree of correlation between the experimental data and the mathematical model. In the mathematical formulation of the algorithm, the Taylor expansion of chi-square is first introduced, and justification for retaining only the first term is presented. From the expansion, a set of n simultaneous linear equations is derived and solved by matrix algebra. To achieve convergence, the algorithm requires meaningful initial estimates for the parameters of the fitting function. NLINEAR is written in Fortran 77 for execution on a CDC Cyber 750 under NOS 2.3. It has a central memory requirement of 5K 60-bit words. Optionally, graphical output of the fitting function can be plotted; Tektronix PLOT-10 routines are required for graphics. NLINEAR was developed in 1987.
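    The core computation can be reproduced in outline with SciPy's nonlinear least-squares fitter, which likewise minimizes the statistically weighted chi-square given a model and meaningful initial estimates. The model and data below are synthetic; this is a sketch of the technique, not the NLINEAR code.

        # Weighted nonlinear least-squares (chi-square) fit with synthetic data.
        import numpy as np
        from scipy.optimize import curve_fit

        def model(x, a, b, c):
            return a * np.exp(-b * x) + c

        rng = np.random.default_rng(1)
        x = np.linspace(0.0, 4.0, 40)
        sigma = np.full_like(x, 0.05)                        # measurement uncertainties
        y = model(x, 2.0, 1.3, 0.5) + rng.normal(0.0, sigma)

        p0 = [1.0, 1.0, 0.0]  # meaningful initial estimates are required for convergence
        popt, pcov = curve_fit(model, x, y, p0=p0, sigma=sigma, absolute_sigma=True)

        chi2 = np.sum(((y - model(x, *popt)) / sigma) ** 2)
        print("fitted parameters:", popt)
        print("reduced chi-square:", chi2 / (len(x) - len(popt)))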

  16. Next Day Building Load Predictions based on Limited Input Features Using an On-Line Laterally Primed Adaptive Resonance Theory Artificial Neural Network.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Christian Birk; Robinson, Matt; Yasaei, Yasser

    Optimal integration of thermal energy storage within commercial building applications requires accurate load predictions. Several methods exist that provide an estimate of a building's future needs, including component-based models and data-driven algorithms. This work implemented a previously untested algorithm for this application, called a Laterally Primed Adaptive Resonance Theory (LAPART) artificial neural network (ANN). The LAPART algorithm provided accurate results over a two-month period in which minimal historical data and a small number of input types were available. These results are significant because common practice has often overlooked the implementation of an ANN; ANNs have often been perceived as too complex and as requiring large amounts of data to provide accurate results. The LAPART neural network was implemented in an on-line learning manner, where on-line learning refers to the continuous updating of training data as time progresses. For this experiment, training began with a single day and grew to two months of data. This approach provides a platform for immediate implementation that requires minimal time and effort. The results from the LAPART algorithm were compared with statistical regression and a component-based model. The comparison was based on the predictions' linear relationship with the measured data, mean squared error, mean bias error, and the cost savings achieved by the respective prediction techniques. The results show that the LAPART algorithm provided a reliable and cost-effective means to predict the building load for the next day.

  17. Powered Descent Guidance with General Thrust-Pointing Constraints

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Acikmese, Behcet; Blackmore, Lars

    2013-01-01

    The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
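    A heavily simplified sketch of the relaxed (convexified) problem, written with cvxpy, is shown below. The discretization, thrust bounds, and pointing cone use placeholder values, and the real PDG formulation adds mass dynamics, glide-slope and speed constraints, and a change of variables that makes the relaxation provably lossless.

        # Simplified convexified powered-descent sketch (placeholder values).
        import cvxpy as cp
        import numpy as np

        N, dt = 40, 1.0
        g = np.array([0.0, 0.0, -3.71])     # Mars gravity
        rho1, rho2 = 2.0, 10.0              # thrust-acceleration bounds
        theta = np.deg2rad(45)              # pointing half-angle about vertical

        r = cp.Variable((3, N + 1))         # position
        v = cp.Variable((3, N + 1))         # velocity
        u = cp.Variable((3, N))             # thrust acceleration
        Gam = cp.Variable(N)                # slack for relaxed thrust magnitude

        cons = [r[:, 0] == np.array([500.0, 200.0, 1500.0]),
                v[:, 0] == np.array([-10.0, 0.0, -60.0]),
                r[:, N] == 0, v[:, N] == 0]
        for k in range(N):
            cons += [r[:, k + 1] == r[:, k] + dt * v[:, k],
                     v[:, k + 1] == v[:, k] + dt * (u[:, k] + g),
                     cp.norm(u[:, k]) <= Gam[k],         # relaxation of ||u|| == Gam
                     Gam[k] >= rho1, Gam[k] <= rho2,     # nonconvex annulus made convex
                     u[2, k] >= np.cos(theta) * Gam[k]]  # thrust-pointing cone
        # Total Gam is a proxy for fuel; the relaxation is tight at the optimum.
        prob = cp.Problem(cp.Minimize(dt * cp.sum(Gam)), cons)
        prob.solve()
        print("solver status:", prob.status)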

  18. Concave 1-norm group selection

    PubMed Central

    Jiang, Dingfeng; Huang, Jian

    2015-01-01

    Grouping structures arise naturally in many high-dimensional problems. Incorporation of such information can improve model fitting and variable selection. Existing group selection methods, such as the group Lasso, require correct membership. However, in practice it can be difficult to correctly specify group membership of all variables. Thus, it is important to develop group selection methods that are robust against group mis-specification. Also, it is desirable to select groups as well as individual variables in many applications. We propose a class of concave 1-norm group penalties that is robust to grouping structure and can perform bi-level selection. A coordinate descent algorithm is developed to calculate solutions of the proposed group selection method. Theoretical convergence of the algorithm is proved under certain regularity conditions. Comparison with other methods suggests the proposed method is the most robust approach under membership mis-specification. Simulation studies and real data application indicate that the 1-norm concave group selection approach achieves better control of false discovery rates. An R package grppenalty implementing the proposed method is available at CRAN. PMID:25417206

  19. Safety policy and requirements for payloads using the Space Transportation System (STS)

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The Space Transportation Operations (STO) safety policy is to minimize STO involvement in the payload and its GSE (ground support equipment) design process while maintaining the assurance of a safe operation. Requirements for assuring payload mission success are the responsibility of the payload organization and are beyond the scope of this document. The intent is to provide the overall safety policies and requirements while allowing for negotiation between the payload organization and the STO operator on the method of implementing payload safety. This revision relaxes the monitoring requirements for inhibits, allows the payload organization to pursue design options, and additionally reflects some new requirements. As of the issue date of this NHB, payloads that have completed the formal safety assessment reviews of their preliminary design on the basis of the May 1979 issue will be reassessed for compliance with the above changes.

  20. Interface design and human factors considerations for model-based tight glycemic control in critical care.

    PubMed

    Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey

    2012-01-01

    Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design intended to maximize compliance, minimize real and perceived clinical effort, and minimize error, based on simple human factors principles and end-user input. The graphical user interface (GUI) design is presented by construction from a series of simple, short design criteria grounded in fundamental human factors engineering, together with user feedback and focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction. It is coupled to a protocol that allows nursing staff to select measurement intervals and thus self-manage workload. The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users, who are the predominant users, while additional detailed and longitudinal data, of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests, based on the end user's immediate focus and goals, shows how interfaces must adapt to offer different information to multiple types of users. The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error, and they are readily generalizable. © 2012 Diabetes Technology Society.

  1. Evaluation of the carotid artery stenosis based on minimization of mechanical energy loss of the blood flow.

    PubMed

    Sia, Sheau Fung; Zhao, Xihai; Li, Rui; Zhang, Yu; Chong, Winston; He, Le; Chen, Yu

    2016-11-01

    Internal carotid artery stenosis requires an accurate risk assessment for the prevention of stroke. Although the internal carotid artery area stenosis ratio at the common carotid artery bifurcation can be used as one of the diagnostic measures of internal carotid artery stenosis, the accuracy of the results still depends on the measurement technique. The purpose of this study is to propose a novel method for estimating the effect of internal carotid artery stenosis on the blood flow, based on the concept of minimization of energy loss. Eight internal carotid arteries from different medical centers were diagnosed as stenosed, with plaques found at different locations on the vessel. A computational fluid dynamics solver was developed based on an open-source code (OpenFOAM) to compute the flow ratio and energy loss of those stenosed internal carotid arteries. For comparison, a healthy internal carotid artery and an idealized internal carotid artery model were also tested and compared with the stenosed arteries in terms of flow ratio and energy loss. We found that, at a given common carotid artery bifurcation, there must be a certain flow distribution between the internal and external carotid arteries for which the total energy loss at the bifurcation is a minimum, and that, for a given common carotid artery flow rate, an irregularly shaped plaque at the bifurcation consistently resulted in a large value of this minimized energy loss. Thus, the minimized energy loss can be used as an indicator for the estimation of internal carotid artery stenosis.
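    The bookkeeping behind the indicator can be sketched as follows: the mechanical energy flux is evaluated at the inlet and both outlets, and the deficit is the loss at the bifurcation. All pressures, velocities, and flow rates below are invented toy values, not patient data.

        # Mechanical energy loss at a carotid bifurcation (toy values).
        rho = 1060.0  # blood density, kg/m^3

        def energy_flux(p, v, q):
            """Energy flux (W) through a cross-section:
            (static pressure + dynamic pressure) * volumetric flow rate."""
            return (p + 0.5 * rho * v ** 2) * q

        # Common carotid inlet and the two outlets (ICA, ECA).
        inflow = energy_flux(p=13000.0, v=0.5, q=6e-6)
        out_ica = energy_flux(p=12500.0, v=0.6, q=4e-6)
        out_eca = energy_flux(p=12600.0, v=0.4, q=2e-6)

        loss = inflow - (out_ica + out_eca)
        print(f"energy loss at bifurcation: {loss * 1e3:.3f} mW")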

  2. Improved productivity through interactive communication

    NASA Technical Reports Server (NTRS)

    Marino, P. P.

    1985-01-01

    New methods and approaches are being tried and evaluated with the goal of increasing productivity and quality. The underlying concept in all of these approaches, methods or processes is that people require interactive communication to maximize the organization's strengths and minimize impediments to productivity improvement. This paper examines Bendix Field Engineering Corporation's organizational structure and experiences with employee involvement programs. The paper focuses on methods Bendix developed and implemented to open lines of communication throughout the organization. The Bendix approach to productivity and quality enhancement shows that interactive communication is critical to the successful implementation of any productivity improvement program. The paper concludes with an examination of the Bendix methodologies which can be adopted by any corporation in any industry.

  3. δ13C and δ18O isotopic composition of CaCO3 measured by continuous flow isotope ratio mass spectrometry: statistical evaluation and verification by application to Devils Hole core DH-11 calcite.

    PubMed

    Révész, Kinga M; Landwehr, Jurate M

    2002-01-01

    A new method was developed to analyze the stable carbon and oxygen isotope ratios of small samples (400 +/- 20 micro g) of calcium carbonate. This new method streamlines the classical phosphoric acid/calcium carbonate (H(3)PO(4)/CaCO(3)) reaction method by making use of a recently available Thermoquest-Finnigan GasBench II preparation device and a Delta Plus XL continuous flow isotope ratio mass spectrometer. Conditions for which the H(3)PO(4)/CaCO(3) reaction produced reproducible and accurate results with minimal error had to be determined. When the acid/carbonate reaction temperature was kept at 26 degrees C and the reaction time was between 24 and 54 h, the precision of the carbon and oxygen isotope ratios for pooled samples from three reference standard materials was

  4. Spacelab Mission Implementation Cost Assessment (SMICA)

    NASA Technical Reports Server (NTRS)

    Guynes, B. V.

    1984-01-01

    A total savings of approximately 20 percent is attainable if: (1) mission management and ground processing schedules are compressed; (2) the equipping, staffing, and operating of the Payload Operations Control Center is revised; and (3) methods of working with experiment developers are changed. The development of a new mission implementation technique, which includes mission definition, experiment development, and mission integration/operations, is examined. The Payload Operations Control Center is to be relocated and to utilize new computer equipment to produce cost savings. Methods of reducing costs by minimizing the Spacelab and payload processing time during pre- and post-mission operations at KSC are analyzed. The changes required to reduce costs in the analytical integration process are studied. The influence of time, requirements accountability, and risk on costs is discussed. Recommendations for cost reductions developed by the Spacelab Mission Implementation Cost Assessment study are listed.

  5. Data-driven discovery of Koopman eigenfunctions using deep learning

    NASA Astrophysics Data System (ADS)

    Lusch, Bethany; Brunton, Steven L.; Kutz, J. Nathan

    2017-11-01

    Koopman operator theory transforms any autonomous non-linear dynamical system into an infinite-dimensional linear system. Since linear systems are well-understood, a mapping of non-linear dynamics to linear dynamics provides a powerful approach to understanding and controlling fluid flows. However, finding the correct change of variables remains an open challenge. We present a strategy to discover an approximate mapping using deep learning. Our neural networks find this change of variables, its inverse, and a finite-dimensional linear dynamical system defined on the new variables. Our method is completely data-driven and only requires measurements of the system, i.e. it does not require derivatives or knowledge of the governing equations. We find a minimal set of approximate Koopman eigenfunctions that are sufficient to reconstruct and advance the system to future states. We demonstrate the method on several dynamical systems.
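    A toy version of such an architecture might look like the PyTorch sketch below: an encoder, a decoder, and a linear map K that advances the latent state, trained from snapshot pairs alone. The layer sizes and the example system are assumptions for illustration, not the authors' network.

        # Minimal Koopman-style autoencoder sketch (assumed architecture).
        import torch
        import torch.nn as nn

        class KoopmanNet(nn.Module):
            def __init__(self, n_state=2, n_latent=3):
                super().__init__()
                self.enc = nn.Sequential(nn.Linear(n_state, 32), nn.Tanh(),
                                         nn.Linear(32, n_latent))
                self.dec = nn.Sequential(nn.Linear(n_latent, 32), nn.Tanh(),
                                         nn.Linear(32, n_state))
                self.K = nn.Linear(n_latent, n_latent, bias=False)  # linear dynamics

            def forward(self, x):
                z = self.enc(x)
                return self.dec(z), self.dec(self.K(z))  # reconstruction, prediction

        def step(x, dt=0.05):
            # A simple nonlinear system, advanced by one Euler step.
            x1, x2 = x[:, 0], x[:, 1]
            return torch.stack([x1 - dt * 0.5 * x1,
                                x2 - dt * 2.0 * (x2 - x1 ** 2)], dim=1)

        x_t = 2 * torch.rand(1024, 2) - 1   # training uses snapshot pairs only
        x_next = step(x_t)

        net = KoopmanNet()
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(2000):
            recon, pred = net(x_t)
            loss = (nn.functional.mse_loss(recon, x_t)
                    + nn.functional.mse_loss(pred, x_next))
            opt.zero_grad()
            loss.backward()
            opt.step()
        print("final training loss:", float(loss))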

  6. Percutaneous Portal Vein Access and Transhepatic Tract Hemostasis

    PubMed Central

    Saad, Wael E. A.; Madoff, David C.

    2012-01-01

    Percutaneous portal vein interventions require minimally invasive access to the portal venous system. Common approaches to the portal vein include transjugular hepatic vein to portal vein access and direct transhepatic portal vein access. A major concern of the transhepatic route is the risk of postprocedural bleeding, which is increased when patients are anticoagulated or receiving pharmaceutical thrombolytic therapy. Thus percutaneous portal vein access and subsequent closure are important technical parts of percutaneous portal vein procedures. At present, various techniques have been used for either portal access or subsequent transhepatic tract closure and hemostasis. Regardless of the method used, meticulous technique is required to achieve the overall safety and effectiveness of portal venous procedures. This article reviews the various techniques of percutaneous transhepatic portal vein access and the various closure and hemostatic methods used to reduce the risk of postprocedural bleeding. PMID:23729976

  7. Load emphasizes muscle effort minimization during selection of arm movement direction

    PubMed Central

    2012-01-01

    Background: Directional preferences during center-out horizontal shoulder-elbow movements were previously established for both the dominant and non-dominant arm with the use of a free-stroke drawing task that required random selection of movement directions. While the preferred directions were mirror-symmetrical in both arms, they were attributed to a tendency, specific to the dominant arm, to simplify control of interaction torque by actively accelerating one joint and producing largely passive motion at the other joint. No conclusive evidence had been obtained in support of muscle effort minimization as a contributing factor to the directional preferences. Here, we tested whether a distal load changes directional preferences, making the influence of muscle effort minimization on the selection of movement direction more apparent. Methods: The free-stroke drawing task was performed by the dominant and non-dominant arm with no load and with a 0.454 kg load at the wrist. Motion of each arm was limited to rotation of the shoulder and elbow in the horizontal plane. Directional histograms of strokes produced by the fingertip were calculated to assess directional preferences in each arm and load condition. Possible causes for the directional preferences were further investigated by studying optimization of a number of cost functions across directions. Results: Preferences in both arms to move in the diagonal directions were revealed. The previously suggested tendency to actively accelerate one joint and produce passive motion at the other joint was supported in both arms and load conditions. However, the load increased the tendency to produce strokes in the transverse diagonal directions (perpendicular to the forearm orientation) in both arms. Increases in required muscle effort caused by the load suggest that the higher frequency of movements in the transverse directions represents an increased influence of muscle effort minimization on the selection of movement direction. This interpretation was supported by the cost function optimization results. Conclusions: Without load, the contribution of muscle effort minimization was minor and therefore not apparent; the load revealed this contribution by enhancing it. Unlike control of interaction torque, the revealed tendency to minimize muscle effort was independent of arm dominance. PMID:23035925

  8. 43 CFR 3272.12 - What environmental protection measures must I include in my utilization plan?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... resources; (6) Minimize air and noise pollution; and (7) Minimize hazards to public health and safety during... operations to ensure that they comply with the requirements of § 3200.4, and applicable noise, air, and water... may require you to collect data concerning existing air and water quality, noise, seismicity...

  9. 43 CFR 3272.12 - What environmental protection measures must I include in my utilization plan?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... resources; (6) Minimize air and noise pollution; and (7) Minimize hazards to public health and safety during... operations to ensure that they comply with the requirements of § 3200.4, and applicable noise, air, and water... may require you to collect data concerning existing air and water quality, noise, seismicity...

  10. 43 CFR 3272.12 - What environmental protection measures must I include in my utilization plan?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... resources; (6) Minimize air and noise pollution; and (7) Minimize hazards to public health and safety during... operations to ensure that they comply with the requirements of § 3200.4, and applicable noise, air, and water... may require you to collect data concerning existing air and water quality, noise, seismicity...

  11. Energy minimization in medical image analysis: Methodologies and applications.

    PubMed

    Zhao, Feng; Xie, Xianghua

    2016-02-01

    Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous method and discrete method. The former includes Newton-Raphson method, gradient descent method, conjugate gradient method, proximal gradient method, coordinate descent method, and genetic algorithm-based method, while the latter covers graph cuts method, belief propagation method, tree-reweighted message passing method, linear programming method, maximum margin learning method, simulated annealing method, and iterated conditional modes method. We also discuss the minimal surface method, primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview on those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.
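    As a minimal instance of the continuous-method family, the sketch below runs plain gradient descent on a quadratic "data term plus smoothness term" energy for 1D signal denoising; real medical-image energies are, of course, far more elaborate.

        # Gradient descent on E(x) = ||x - y||^2 + lam * ||Dx||^2 (1D denoising toy).
        import numpy as np

        rng = np.random.default_rng(2)
        y = np.sign(np.sin(np.linspace(0, 3 * np.pi, 200))) + 0.3 * rng.standard_normal(200)

        lam, step, n_iter = 5.0, 0.02, 500
        x = y.copy()
        for _ in range(n_iter):
            grad_data = 2.0 * (x - y)
            dx = np.diff(x)                      # forward differences D x
            grad_smooth = np.zeros_like(x)
            grad_smooth[:-1] -= 2.0 * lam * dx   # d/dx_i of (x_{i+1} - x_i)^2 terms
            grad_smooth[1:] += 2.0 * lam * dx
            x -= step * (grad_data + grad_smooth)

        energy = np.sum((x - y) ** 2) + lam * np.sum(np.diff(x) ** 2)
        print("final energy:", round(float(energy), 2))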

  12. A space radiation transport method development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2004-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures that allow shield analysis from the preliminary design concepts to the final design. In particular, we discuss progress towards a fully three-dimensional and computationally efficient deterministic code, for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard finite element method (FEM) geometry common to engineering design practice, enabling the development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 ms, which severely limits the application of Monte Carlo methods to such engineering models. A potential means of improving Monte Carlo efficiency in coupling to spacecraft geometry, based on re-configurable computing, is also given; it could be utilized in the final design to verify the deterministic, optimized design. Published by Elsevier Ltd on behalf of COSPAR.

  13. Change Point Detection in Correlation Networks

    NASA Astrophysics Data System (ADS)

    Barnett, Ian; Onnela, Jukka-Pekka

    2016-01-01

    Many systems of interacting elements can be conceptualized as networks, where network nodes represent the elements and network ties represent interactions between the elements. In systems where the underlying network evolves, it is useful to determine the points in time where the network structure changes significantly as these may correspond to functional change points. We propose a method for detecting change points in correlation networks that, unlike previous change point detection methods designed for time series data, requires minimal distributional assumptions. We investigate the difficulty of change point detection near the boundaries of the time series in correlation networks and study the power of our method and competing methods through simulation. We also show the generalizable nature of the method by applying it to stock price data as well as fMRI data.
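    The flavor of the approach can be conveyed by a generic scan statistic (not the authors' statistic): each candidate time splits the series in two, and the split is scored by how much the two sample correlation matrices differ. The min_seg guard reflects the boundary difficulty noted in the abstract.

        # Correlation-network change point scan (illustrative, not the paper's method).
        import numpy as np

        def correlation_change_scores(X, min_seg=30):
            """X: (T, p) multivariate series; score each admissible split point."""
            T = X.shape[0]
            scores = np.full(T, np.nan)
            for t in range(min_seg, T - min_seg):
                C1 = np.corrcoef(X[:t], rowvar=False)
                C2 = np.corrcoef(X[t:], rowvar=False)
                scores[t] = np.linalg.norm(C1 - C2, ord="fro")
            return scores

        # Synthetic test: the correlation structure switches at t = 150.
        rng = np.random.default_rng(3)
        A = rng.standard_normal((150, 5))                 # uncorrelated segment
        shared = rng.standard_normal((150, 1))
        B = shared + 0.3 * rng.standard_normal((150, 5))  # strongly correlated segment
        X = np.vstack([A, B])
        print("estimated change point:", np.nanargmax(correlation_change_scores(X)))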

  14. Storage and computationally efficient permutations of factorized covariance and square-root information matrices

    NASA Technical Reports Server (NTRS)

    Muellerschoen, R. J.

    1988-01-01

    A unified method to permute vector-stored, upper-triangular, diagonal factorized covariance (UD) and vector-stored, upper-triangular square-root information filter (SRIF) arrays is presented. The method involves cyclical permutation of the rows and columns of the arrays and retriangularization with appropriate square-root-free fast Givens rotations or elementary slow Givens reflections. A minimal amount of computation is performed, and only one scratch vector of size N is required, where N is the column dimension of the arrays. To make the method efficient for large SRIF arrays on a virtual memory machine, three additional scratch vectors, each of size N, are used to avoid expensive paging faults. The method discussed is compared with the methods and routines of Bierman's Estimation Subroutine Library (ESL).
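    The core operation can be illustrated as below, with a dense QR factorization standing in for the paper's square-root-free fast Givens sweeps: a cyclic permutation of the columns of an upper-triangular factor followed by one re-triangularization leaves the represented matrix unchanged up to the same permutation.

        # Cyclic column permutation of a square-root factor, then re-triangularize.
        import numpy as np

        def permute_sqrt_factor(R, src, dst):
            """Move column `src` of upper-triangular R to position `dst`
            (cyclically shifting the columns in between), then restore
            upper-triangular form with one orthogonal sweep."""
            n = R.shape[1]
            order = list(range(n))
            order.insert(dst, order.pop(src))
            R_new = np.linalg.qr(R[:, order], mode="r")
            return R_new, order

        R = np.triu(np.random.default_rng(4).standard_normal((5, 5)))
        R2, order = permute_sqrt_factor(R, src=1, dst=3)
        # The factored matrix R^T R is permuted consistently with the reordering.
        assert np.allclose((R.T @ R)[np.ix_(order, order)], R2.T @ R2)
        print("re-triangularized factor:\n", np.round(R2, 3))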

  15. Minimally invasive percutaneous pericardial ICD placement in an infant piglet model: Head-to-head comparison with an open surgical thoracotomy approach.

    PubMed

    Clark, Bradley C; Davis, Tanya D; El-Sayed Ahmed, Magdy M; McCarter, Robert; Ishibashi, Nobuyuki; Jordan, Christopher P; Kane, Timothy D; Kim, Peter C W; Krieger, Axel; Nath, Dilip S; Opfermann, Justin D; Berul, Charles I

    2016-05-01

    Epicardial implantable cardioverter-defibrillator (ICD) placement in infants, children, and patients with complex cardiac anatomy requires an open surgical thoracotomy and is associated with increased pain, longer length of stay, and higher cost. The purpose of this study was to compare an open surgical epicardial placement approach with percutaneous pericardial placement of an ICD lead system in an infant piglet model. Animals underwent either epicardial placement by direct suture fixation through a left thoracotomy or minimally invasive pericardial placement with thoracoscopic visualization. Initial lead testing and defibrillation threshold testing (DFT) were performed. After the 2-week survival period, repeat lead testing and DFT were performed before euthanasia. Minimally invasive placement was performed in 8 piglets and open surgical placement in 7 piglets without procedural morbidity or mortality. The mean initial DFT value was 10.5 J (range 3-28 J) in the minimally invasive group and 10.0 J (range 5-35 J) in the open surgical group (P = .90). After the survival period, the mean DFT value was 12.0 J (range 3-20 J) in the minimally invasive group and 12.3 J (range 3-35 J) in the open surgical group (P = .95). All lead and shock impedances, R-wave amplitudes, and ventricular pacing thresholds remained stable throughout the survival period. Compared with open surgical epicardial ICD lead placement, minimally invasive pericardial placement demonstrates an equivalent ability to effectively defibrillate the heart and has demonstrated similar lead stability. With continued technical development and operator experience, the minimally invasive method may provide a viable alternative to epicardial ICD lead placement in infants, children, and adults at risk of sudden cardiac death. Copyright © 2016 Heart Rhythm Society. All rights reserved.

  16. A Method for Constructing a New Extensible Nomenclature for Clinical Coding Practices in Sub-Saharan Africa.

    PubMed

    Van Laere, Sven; Nyssen, Marc; Verbeke, Frank

    2017-01-01

    Clinical coding is a requirement for providing valuable data for billing, epidemiology and health care resource allocation. In sub-Saharan Africa, we observe a growing awareness of the need for coding of clinical data, not only in health insurances, but also in governments and hospitals. At present, coding systems in sub-Saharan Africa are mostly used for billing purposes; in this paper we consider the use of a nomenclature that also has a clinical impact. Coding systems are often assumed to be too complex and too extensive to be used in daily practice. Here, we present a method for constructing a new nomenclature, based on existing coding systems, by considering a minimal subset for the sub-Saharan region. Completeness will be evaluated nationally using the requirements of national registries. The nomenclature requires an extension character to deal with codes that have to be used in multiple registries. Hospitals will benefit most by using this extension character.

  17. Effectiveness and efficacy of minimally invasive lung volume reduction surgery for emphysema

    PubMed Central

    Pertl, Daniela; Eisenmann, Alexander; Holzer, Ulrike; Renner, Anna-Theresa; Valipour, A.

    2014-01-01

    Lung emphysema is a chronic, progressive and irreversible destruction of the lung tissue. Besides non-medical therapies and the well-established medical treatment, there are surgical and minimally invasive methods for lung volume reduction (LVR) to treat severe emphysema. This report deals with the effectiveness and cost-effectiveness of minimally invasive methods compared with other treatments for LVR in patients with lung emphysema. Furthermore, legal and ethical aspects are discussed. No clear benefit of minimally invasive methods compared with surgical methods can be demonstrated based on the identified and included evidence. In order to assess the different methods for LVR with regard to their relative effectiveness and safety in patients with lung emphysema, direct comparative studies are necessary. PMID:25295123

  18. Effectiveness and efficacy of minimally invasive lung volume reduction surgery for emphysema.

    PubMed

    Pertl, Daniela; Eisenmann, Alexander; Holzer, Ulrike; Renner, Anna-Theresa; Valipour, A

    2014-01-01

    Lung emphysema is a chronic, progressive and irreversible destruction of the lung tissue. Besides non-medical therapies and the well-established medical treatment, there are surgical and minimally invasive methods for lung volume reduction (LVR) to treat severe emphysema. This report deals with the effectiveness and cost-effectiveness of minimally invasive methods compared with other treatments for LVR in patients with lung emphysema. Furthermore, legal and ethical aspects are discussed. No clear benefit of minimally invasive methods compared with surgical methods can be demonstrated based on the identified and included evidence. In order to assess the different methods for LVR with regard to their relative effectiveness and safety in patients with lung emphysema, direct comparative studies are necessary.

  19. Hyperpolarization of Nitrogen-15 Schiff Bases by Reversible Exchange Catalysis with para-Hydrogen.

    PubMed

    Logan, Angus W J; Theis, Thomas; Colell, Johannes F P; Warren, Warren S; Malcolmson, Steven J

    2016-07-25

    NMR with thermal polarization requires relatively concentrated samples, particularly for nuclei with low abundance and low gyromagnetic ratios, such as 15N. We expand the substrate scope of SABRE, a recently introduced hyperpolarization method, to allow access to 15N-enriched Schiff bases. These substrates show fractional 15N polarization levels of up to 2% while having only minimal 1H enhancements. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Methods of fabricating applique circuits

    DOEpatents

    Dimos, Duane B.; Garino, Terry J.

    1999-09-14

    Applique circuits suitable for advanced packaging applications are introduced. These structures are particularly suited for the simple integration of large amounts (many nanoFarads) of capacitance into conventional integrated circuit and multichip packaging technology. In operation, applique circuits are bonded to the integrated circuit or other appropriate structure at the point where the capacitance is required, thereby minimizing the effects of parasitic coupling. An immediate application is to problems of noise reduction and control in modern high-frequency circuitry.

  1. OCEAN THERMAL ENERGY CONVERSION (OTEC) PROGRAMMATIC ENVIRONMENTAL ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sands, M. D.

    1980-01-01

    This programmatic environmental analysis is an initial assessment of OTEC technology considering development, demonstration and commercialization; it concludes that the OTEC development program should continue, because development, demonstration, and commercialization on a single-plant deployment basis should not present significant environmental impacts. However, several areas within the OTEC program require further investigation in order to assess the potential for environmental impacts from OTEC operation, particularly in large-scale deployments, and in defining alternatives to closed-cycle biofouling control: (1) Larger-scale deployments of OTEC clusters or parks require further investigation in order to assess the optimal platform siting distances necessary to minimize adverse environmental impacts. (2) The deployment and operation of the preoperational platform (OTEC-1) and future demonstration platforms must be carefully monitored to refine environmental assessment predictions and to provide design modifications which may mitigate or reduce environmental impacts for larger-scale operations. These platforms will provide a valuable opportunity to fully evaluate the intake and discharge configurations, biofouling control methods, and both short-term and long-term environmental effects associated with platform operations. (3) Successful development of OTEC technology to use the maximal resource capabilities and to minimize environmental effects will require a concerted environmental management program, encompassing many different disciplines and environmental specialties.

  2. A Kernel-based Lagrangian method for imperfectly-mixed chemical reactions

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael J.; Pankavich, Stephen; Benson, David A.

    2017-05-01

    Current Lagrangian (particle-tracking) algorithms used to simulate diffusion-reaction equations must employ a certain number of particles to properly emulate the system dynamics, particularly for imperfectly-mixed systems. The number of particles is tied to the statistics of the initial concentration fields of the system at hand. Systems with shorter-range correlation and/or smaller concentration variance require more particles, potentially limiting the computational feasibility of the method. For the well-known problem of bimolecular reaction, we show that using kernel-based, rather than Dirac delta, particles can significantly reduce the required number of particles. We derive the fixed width of a Gaussian kernel for a given reduced number of particles that analytically eliminates the error between kernel and Dirac solutions at any specified time. We also show how to solve for the fixed kernel size by minimizing the squared differences between solutions over any given time interval. Numerical results show that the width of the kernel should be kept below about 12% of the domain size, and that the analytic equations used to derive kernel width suffer significantly from the neglect of higher-order moments. The simulations with a kernel width given by least-squares minimization perform better than those made to match at one specific time. A heuristic time-variable kernel size, based on the previous results, performs on par with the least-squares fixed kernel size.

  3. An autonomous payload controller for the Space Shuttle

    NASA Technical Reports Server (NTRS)

    Hudgins, J. I.

    1979-01-01

    The Autonomous Payload Control (APC) system discussed in the present paper was designed on the basis of such criteria as minimal cost of implementation, minimal space required in the flight-deck area, simple operation with verification of the results, minimal additional weight, minimal impact on Orbiter design, and minimal impact on Orbiter payload integration. In its present configuration, the APC provides a means for the Orbiter crew to control as many as 31 autonomous payloads. The avionics and human engineering aspects of the system are discussed.

  4. Medication administration through enteral feeding tubes.

    PubMed

    Williams, Nancy Toedter

    2008-12-15

    An overview of enteral feeding tubes, drug administration techniques, considerations for dosage form selection, common drug interactions with enteral formulas, and methods to minimize tube occlusion is given. Enteral nutrition through a feeding tube is the preferred method of nutrition support in patients who have a functioning gastrointestinal tract but who are unable to be fed orally. This method of delivering nutrition is also commonly used for administering medications when patients cannot swallow safely. However, several issues must be considered with concurrent administration of oral medications and enteral formulas. Incorrect administration methods may result in clogged feeding tubes, decreased drug efficacy, increased adverse effects, or drug-formula incompatibilities. Various enteral feeding tubes are available and are typically classified by site of insertion and location of the distal tip of the feeding tube. Liquid medications, particularly elixirs and suspensions, are preferred for enteral administration; however, these formulations may be hypertonic or contain large amounts of sorbitol, and these properties increase the potential for adverse effects. Before solid dosage forms are administered through the feeding tube, it should be determined if the medications are suitable for manipulation, such as crushing a tablet or opening a capsule. Medications should not be added directly to the enteral formula, and feeding tubes should be properly flushed with water before and after each medication is administered. To minimize drug-nutrient interactions, special considerations should be taken when administering phenytoin, carbamazepine, warfarin, fluoroquinolones, and proton pump inhibitors via feeding tubes. Precautions should be implemented to prevent tube occlusions, and immediate intervention is required when blockages occur. Successful drug delivery through enteral feeding tubes requires consideration of the tube size and placement as well as careful selection and appropriate administration of drug dosage forms.

  5. Numerical investigation of a modified family of centered schemes applied to multiphase equations with nonconservative sources

    NASA Astrophysics Data System (ADS)

    Crochet, M. W.; Gonthier, K. A.

    2013-12-01

    Systems of hyperbolic partial differential equations are frequently used to model the flow of multiphase mixtures. These equations often contain sources, referred to as nozzling terms, that cannot be posed in divergence form, and have proven to be particularly challenging in the development of finite-volume methods. Upwind schemes have recently shown promise in properly resolving the steady wave solution of the associated multiphase Riemann problem. However, these methods require a full characteristic decomposition of the system eigenstructure, which may be either unavailable or computationally expensive. Central schemes, such as the Kurganov-Tadmor (KT) family of methods, require minimal characteristic information, which makes them easily applicable to systems with an arbitrary number of phases. However, the proper implementation of nozzling terms in these schemes has been mathematically ambiguous. The primary objectives of this work are twofold: first, an extension of the KT family of schemes is proposed that formally accounts for the nonconservative nozzling sources. This modification results in a semidiscrete form that retains the simplicity of its predecessor and introduces little additional computational expense. Second, this modified method is applied to multiple, but equivalent, forms of the multiphase equations to perform a numerical study by solving several one-dimensional test problems. Both ideal and Mie-Grüneisen equations of state are used, with the results compared to an analytical solution. This study demonstrates that the magnitudes of the resulting numerical errors are sensitive to the form of the equations considered, and suggests an optimal form to minimize these errors. Finally, a separate modification of the wave propagation speeds used in the KT family is also suggested that can reduce the extent of numerical diffusion in multiphase flows.
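    For reference, a bare-bones semi-discrete scheme from the KT family is sketched below for the scalar Burgers equation; it illustrates how little characteristic information the family needs (only local wave speeds). The nonconservative nozzling sources that are the subject of the paper are omitted entirely.

        # Minimal Kurganov-Tadmor-type central scheme for u_t + (u^2/2)_x = 0.
        import numpy as np

        def minmod(a, b):
            return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        def kt_rhs(u, dx):
            f = lambda q: 0.5 * q * q
            s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slopes
            uL = u + 0.5 * s                        # left state at interface j+1/2
            uR = np.roll(u - 0.5 * s, -1)           # right state at interface j+1/2
            a = np.maximum(np.abs(uL), np.abs(uR))  # local wave speed |f'(u)| = |u|
            H = 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL)
            return -(H - np.roll(H, 1)) / dx

        # Forward-Euler stepping of a periodic sine wave steepening into a shock.
        N = 200
        dx = 2 * np.pi / N
        u = np.sin(np.arange(N) * dx)
        dt = 0.4 * dx
        for _ in range(int(1.5 / dt)):
            u = u + dt * kt_rhs(u, dx)
        print("min/max after shock formation:", u.min().round(3), u.max().round(3))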

  6. Method and apparatus for extracting water from air using a desiccant

    DOEpatents

    Spletzer, Barry L.; Callow, Diane Schafer

    2003-01-01

    The present invention provides a method and apparatus for extracting liquid water from moist air using minimal energy input. The method can be considered as four phases: (1) adsorbing water from air into a desiccant, (2) isolating the water-laden desiccant from the air source, (3) desorbing water as vapor from the desiccant into a chamber, and (4) isolating the desiccant from the chamber, and compressing the vapor in the chamber to form liquid condensate. The liquid condensate can be removed for use. Careful design of the dead volumes and pressure balances can minimize the energy required. The dried air can be exchanged for fresh moist air and the process repeated. An apparatus comprises a first chamber in fluid communication with a desiccant, and having ports to intake moist air and exhaust dried air. The apparatus also comprises a second chamber in fluid communication with the desiccant. The second chamber allows variable internal pressure, and has a port for removal of liquid condensate. Each chamber can be configured to be isolated or in communication with the desiccant. The first chamber can be configured to be isolated or in communication with a course of moist air. Various arrangements of valves, pistons, and chambers are described.

  7. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
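    The majorize-minimize-plus-momentum idea can be sketched on a toy l1-regularized least-squares problem, with a diagonal majorizer of A^T A playing the role that the B1-based majorizing matrices play for SENSE reconstruction. This is a generic FISTA-style sketch, not the BARISTA code.

        # Majorize-minimize with momentum for min_x 0.5*||Ax-y||^2 + lam*||x||_1.
        import numpy as np

        rng = np.random.default_rng(5)
        A = rng.standard_normal((80, 120))
        x_true = np.zeros(120)
        x_true[rng.choice(120, size=10, replace=False)] = 1.0
        y = A @ x_true
        lam = 0.1

        # Diagonal majorizer: diag(d) >= A^T A by Gershgorin row sums of |A|^T |A|.
        d = np.abs(A).T @ (np.abs(A) @ np.ones(120))

        def soft(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        x, z, t = np.zeros(120), np.zeros(120), 1.0
        for _ in range(300):
            grad = A.T @ (A @ z - y)
            x_new = soft(z - grad / d, lam / d)          # majorized proximal step
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + (t - 1.0) / t_new * (x_new - x)  # momentum acceleration
            x, t = x_new, t_new
        print("recovered support:", np.flatnonzero(np.abs(x) > 0.5))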

  8. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Muckley, Matthew J.; Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484

  9. FDD Massive MIMO Channel Estimation With Arbitrary 2D-Array Geometry

    NASA Astrophysics Data System (ADS)

    Dai, Jisheng; Liu, An; Lau, Vincent K. N.

    2018-05-01

    This paper addresses the problem of downlink channel estimation in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. Existing methods usually exploit hidden sparsity under a discrete Fourier transform (DFT) basis to estimate the downlink channel. However, these DFT-based methods have at least two shortcomings: 1) they are applicable only to uniform linear arrays (ULAs), since the DFT basis requires the special structure of a ULA, and 2) they always suffer a performance loss due to the leakage of energy over some DFT bins. To deal with these shortcomings, we introduce an off-grid model for downlink channel sparse representation with arbitrary 2D-array antenna geometry, and propose an efficient sparse Bayesian learning (SBL) approach for the sparse channel recovery and off-grid refinement. The main idea of the proposed off-grid method is to treat the sampled grid points as adjustable parameters. Utilizing an inexact block majorization-minimization (MM) algorithm, the grid points are refined iteratively to minimize the off-grid gap. Finally, we further extend the solution to uplink-aided channel estimation by exploiting the angular reciprocity between downlink and uplink channels, which brings enhanced recovery performance.

  10. Discovery of Boolean metabolic networks: integer linear programming based approach.

    PubMed

    Qiu, Yushan; Jiang, Hao; Ching, Wai-Ki; Cheng, Xiaoqing

    2018-04-11

    Traditional drug discovery methods have focused on the efficacy of drugs rather than their toxicity. However, toxicity and/or lack of efficacy result when unintended targets are affected in metabolic networks. Thus, the identification of biological targets that can be manipulated to produce the desired effect with minimal side-effects has become an important and challenging topic. Efficient computational methods are required to identify drug targets while incurring minimal side-effects. In this paper, we propose a graph-based computational damage model that summarizes the impact of enzymes on compounds in metabolic networks. An efficient method based on the Integer Linear Programming formalism is then developed to identify the optimal enzyme combination that minimizes the side-effects. The identified target enzymes for known successful drugs are then verified by comparing the results with those in the existing literature. Side-effect reduction plays a crucial role in drug development; a graph-based computational damage model is proposed, and the theoretical analysis shows that the captured problem is NP-complete. The proposed approaches can therefore contribute to the discovery of drug targets. Our software is available at " http://hkumath.hku.hk/~wkc/APBC2018-metabolic-network.zip ".
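    A toy version of the target-selection problem can be posed as an integer linear program with SciPy. The damage matrix below is invented and far simpler than the paper's graph-based damage model: choose enzymes to inhibit so that the target compound is eliminated while the number of damaged off-target compounds is minimized.

        # Toy enzyme-selection ILP (invented damage matrix, not the paper's model).
        import numpy as np
        from scipy.optimize import Bounds, LinearConstraint, milp

        # damage[i, j] = 1 if inhibiting enzyme j eliminates compound i.
        damage = np.array([
            [1, 0, 1, 0],   # compound 0: the intended target
            [1, 0, 0, 0],   # compounds 1..3: off-target side effects
            [1, 1, 0, 0],
            [0, 0, 0, 1],
        ])
        side_cost = damage[1:].sum(axis=0)  # off-target compounds hit per enzyme

        # Require at least one inhibited enzyme to eliminate the target compound.
        target_cover = LinearConstraint(damage[:1], lb=1)
        res = milp(c=side_cost, constraints=[target_cover],
                   integrality=np.ones(4), bounds=Bounds(0, 1))
        print("inhibit enzymes:", np.flatnonzero(res.x > 0.5))  # expected: enzyme 2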

  11. Estimating the Inertia Matrix of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Keim, Jason; Shields, Joel

    2007-01-01

    A paper presents a method of utilizing flight data, aboard a spacecraft that includes reaction wheels for attitude control, to estimate the inertia matrix of the spacecraft. The required data are digitized samples of (1) the spacecraft attitude in an inertial reference frame as measured, for example, by use of a star tracker and (2) speeds of rotation of the reaction wheels, the moments of inertia of which are deemed to be known. Starting from the classical equations for conservation of angular momentum of a rigid body, the inertia-matrix-estimation problem is formulated as a constrained least-squares minimization problem with explicit bounds on the inertia matrix incorporated as linear matrix inequalities. The explicit bounds reflect physical bounds on the inertia matrix and reduce the volume of data that must be processed to obtain a solution. The resulting minimization problem is a semidefinite optimization problem that can be solved efficiently, with guaranteed convergence to the global optimum, by use of readily available algorithms. In a test case involving a model attitude platform rotating on an air bearing, it is shown that, relative to a prior method, the present method produces better estimates from fewer data.
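
    The constrained least-squares formulation with LMI bounds can be sketched as a small semidefinite program, for example with cvxpy. The data model below (noisy linear angular-momentum samples) and the numeric bounds are invented placeholders rather than the paper's formulation.

        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(0)
        omegas = rng.standard_normal((50, 3))            # sampled body rates (stand-in data)
        J_true = np.diag([10.0, 12.0, 15.0])
        h = omegas @ J_true + 0.01 * rng.standard_normal((50, 3))  # noisy momenta L = J*omega

        J = cp.Variable((3, 3), symmetric=True)          # inertia matrix to estimate
        objective = cp.Minimize(cp.sum_squares(omegas @ J - h))
        # Physical bounds expressed as linear matrix inequalities (values assumed):
        constraints = [J >> 5 * np.eye(3), 30 * np.eye(3) >> J]
        cp.Problem(objective, constraints).solve()
        print(np.round(J.value, 2))                      # close to J_true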

  12. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model Uncertainty Quantification remains one of the central challenges of effective Data Assimilation (DA) in complex partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A novel feature is that these realisations are binned conditional on the previous model state during the minimization process, allowing for the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.

  13. Reconnaissance and Autonomy for Small Robots (RASR) team: MAGIC 2010 challenge

    NASA Astrophysics Data System (ADS)

    Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark; Corley, Katrina

    2012-06-01

    The Reconnaissance and Autonomy for Small Robots (RASR) team developed a system for the coordination of groups of unmanned ground vehicles (UGVs) that can execute a variety of militarily relevant missions in dynamic urban environments. Historically, UGV operations have been primarily performed via tele-operation, requiring at least one dedicated operator per robot, and requiring substantial real-time bandwidth to accomplish those missions. Our team goal was to develop a system that can provide long-term value to the war-fighter, utilizing MAGIC-2010 as a stepping stone. To that end, we self-imposed a set of constraints that would force us to develop technology that could readily be used by the military in the near term:
    • Use a relevant (deployed) platform
    • Use low-cost, reliable sensors
    • Develop an expandable and modular control system with innovative software algorithms to minimize the computing footprint required
    • Minimize required communications bandwidth and handle communication losses
    • Minimize additional power requirements to maximize battery life and mission duration

  14. LST and instrument considerations. [modular design

    NASA Technical Reports Server (NTRS)

    Levin, G. M.

    1974-01-01

    In order that the LST meet its scientific objectives and also be a National Astronomical Space Facility during the 1980's and 1990's, broad requirements have been levied by the scientific community. These scientific requirements can be directly translated into design requirements and specifications for the scientific instruments. The instrument ensemble design must be consistent with a 15-year operational lifetime. Downtime for major repair/refurbishment or instrument updating must be minimized. The overall efficiency and performance of the instruments should be maximized. Modularization of instruments and instrument subsystems, some degree of on-orbit servicing (both repair and replacement), on-axis location, minimizing the number of reflections within instruments, minimizing polarization effects, and simultaneous operation of the F/24 camera with other instruments, are just a few of the design guidelines and specifications which can and will be met in order that these broader scientific requirements be satisfied.

  15. Two methods for parameter estimation using multiple-trait models and beef cattle field data.

    PubMed

    Bertrand, J K; Kriese, L A

    1990-08-01

    Two methods are presented for estimating variances and covariances from beef cattle field data using multiple-trait sire models. Both methods require that the first trait have no missing records and that the contemporary groups for the second trait be subsets of the contemporary groups for the first trait; however, the second trait may have missing records. One method uses pseudo expectations involving quadratics composed of the solutions and the right-hand sides of the mixed model equations. The other method is an extension of Henderson's Simple Method to the multiple trait case. Neither of these methods requires any inversions of large matrices in the computation of the parameters; therefore, both methods can handle very large sets of data. Four simulated data sets were generated to evaluate the methods. In general, both methods estimated genetic correlations and heritabilities that were close to the Restricted Maximum Likelihood estimates and the true data set values, even when selection within contemporary groups was practiced. The estimates of residual correlations by both methods, however, were biased by selection. These two methods can be useful in estimating variances and covariances from multiple-trait models in large populations that have undergone a minimal amount of selection within contemporary groups.

  16. A Two-Dimensional Variational Analysis Method for NSCAT Ambiguity Removal: Methodology, Sensitivity, and Tuning

    NASA Technical Reports Server (NTRS)

    Hoffman, R. N.; Leidner, S. M.; Henderson, J. M.; Atlas, R.; Ardizzone, J. V.; Bloom, S. C.; Atlas, Robert (Technical Monitor)

    2001-01-01

    In this study, we apply a two-dimensional variational analysis method (2d-VAR) to select a wind solution from NASA Scatterometer (NSCAT) ambiguous winds. 2d-VAR determines a "best" gridded surface wind analysis by minimizing a cost function. The cost function measures the misfit to the observations, the background, and the filtering and dynamical constraints. The ambiguity closest in direction to the minimizing analysis is selected. The 2d-VAR method, its sensitivity, and its numerical behavior are described. 2d-VAR is compared to statistical interpolation (OI) by examining the response of both systems to a single ship observation and to a swath of unique scatterometer winds. 2d-VAR is used with both NSCAT ambiguities and NSCAT backscatter values. Results are roughly comparable. When the background field is poor, 2d-VAR ambiguity removal often selects low-probability ambiguities. To avoid this behavior, an initial 2d-VAR analysis, using only the two most likely ambiguities, provides the first guess for an analysis using all the ambiguities or the backscatter data. 2d-VAR and median-filter-selected ambiguities usually agree. Both methods require horizontal consistency, so disagreements occur in clumps or as linear features. In these cases, 2d-VAR ambiguities are often more meteorologically reasonable and more consistent with satellite imagery.
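
    A stripped-down analogue of the variational analysis (one spatial dimension, invented numbers, no dynamical constraint) shows how the cost terms combine and how a gradient-based minimizer yields the analysis:

        import numpy as np
        from scipy.optimize import minimize

        n = 32
        xb = np.zeros(n)                          # background wind field (toy)
        obs_idx = np.array([5, 12, 20, 27])       # observation locations
        y = np.array([1.0, 2.0, 1.5, 0.5])        # selected wind observations
        sigma_b, sigma_o = 1.0, 0.2

        def cost(x):
            jb = np.sum((x - xb) ** 2) / sigma_b ** 2          # misfit to background
            jo = np.sum((x[obs_idx] - y) ** 2) / sigma_o ** 2  # misfit to observations
            js = 10.0 * np.sum(np.diff(x) ** 2)                # smoothness (filtering) term
            return jb + jo + js

        xa = minimize(cost, xb, method="L-BFGS-B").x           # gridded analysis
        # The ambiguity closest in direction to xa at each point would then be kept.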

  17. A new serotyping method for Klebsiella species: development of the technique.

    PubMed Central

    Riser, E; Noone, P; Poulton, T A

    1976-01-01

    A new serotyping method for Klebsiella species using indirect immunofluorescence is described. Nonspecific fluorescence has been minimized by carrying out the capsular antigen-antibody reaction at pH 9.0. Commercial antisera have been tested with the 72 antigenic types of Klebsiella, and appropriate dilutions of each pool and specific antisera have been proposed for use in routine typing. Dilutions were chosen to allow strong fluorescence with each type and its specific antiserum and minimal fluorescence with cross reacting antisera. Where the pool antisera gave a weak reaction for one or more of the component types, it is recommended that the specific antisera for these types be added to the pool dilution. The few remaining cross reactions, with the pool and specific antisera in test dilution, are listed in a table. The unique cross reacting patterns of particular types have been found to be useful in identification. Typing Klebsiella by the fluorescent antibody technique is easy to perform and interpret; the results are reproducible, and it is less expensive than the existing capsular swelling method as it is more sensitive and requires less concentrated antisera. This new method of typing should facilitate detailed epidemiological studies of the mode of transmission of Klebsiella species in hospitals and thus allow more effective infection control measures to be instituted. PMID:777042

  18. Bond breaking in epoxy systems: A combined QM/MM approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barr, Stephen A.; Ecker, Allison M.; Berry, Rajiv J., E-mail: Rajiv.Berry@us.af.mil

    2016-06-28

    A novel method to combine quantum mechanics (QM) and molecular mechanics has been developed to accurately and efficiently account for covalent bond breaking in polymer systems under high strain without the use of predetermined break locations. Use of this method will provide a better fundamental understanding of the mechano-chemical origins of fracture in thermosets. Since classical force fields cannot accurately account for bond breaking, and QM is too demanding to simulate large systems, a hybrid approach is required. In the method presented here, strain is applied to the system using a classical force field, and all bond lengths are monitored. When a bond is stretched past a threshold value, a zone surrounding the bond is used in a QM energy minimization to determine which, if any, bonds break. The QM results are then used to reconstitute the system to continue the classical simulation at progressively larger strain until another QM calculation is triggered. In this way, a QM calculation is only computed when and where needed, allowing for efficient simulations. A robust QM method for energy minimization has been determined, as well as appropriate values for the QM zone size and the threshold bond length. Compute times do not differ dramatically from classical molecular mechanical simulations.
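
    The classical-side trigger logic can be sketched in a few lines: monitor all bond lengths at each strain step and, when one exceeds a threshold, extract the surrounding zone for a QM energy minimization. The threshold and zone radius below are placeholders; the paper determines appropriate values, which are not reproduced here.

        import numpy as np

        BOND_THRESHOLD = 1.8    # assumed trigger length (angstroms, system-dependent)
        QM_ZONE_RADIUS = 4.0    # assumed radius of the zone handed to the QM code

        def stretched_bonds(coords, bonds):
            """Return the bonds whose current length exceeds the trigger threshold."""
            lengths = np.linalg.norm(coords[bonds[:, 0]] - coords[bonds[:, 1]], axis=1)
            return bonds[lengths > BOND_THRESHOLD]

        def qm_zone(coords, bond):
            """Atom indices within QM_ZONE_RADIUS of a stretched bond's midpoint."""
            mid = coords[bond].mean(axis=0)
            return np.where(np.linalg.norm(coords - mid, axis=1) < QM_ZONE_RADIUS)[0]

        # After each classical strain step:
        coords = np.random.rand(20, 3) * 10.0     # stand-in atomic coordinates
        bonds = np.array([[0, 1], [2, 3], [4, 5]])
        for bond in stretched_bonds(coords, bonds):
            atoms = qm_zone(coords, bond)         # hand this zone to the QM minimizer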

  19. Rapid detection of potyviruses from crude plant extracts.

    PubMed

    Silva, Gonçalo; Oyekanmi, Joshua; Nkere, Chukwuemeka K; Bömer, Moritz; Kumar, P Lava; Seal, Susan E

    2018-04-01

    Potyviruses (genus Potyvirus; family Potyviridae) are widely distributed and represent one of the most economically important genera of plant viruses. Therefore, their accurate detection is a key factor in developing efficient control strategies. However, this can sometimes be problematic particularly in plant species containing high amounts of polysaccharides and polyphenols such as yam (Dioscorea spp.). Here, we report the development of a reliable, rapid and cost-effective detection method for the two most important potyviruses infecting yam based on reverse transcription-recombinase polymerase amplification (RT-RPA). The developed method, named 'Direct RT-RPA', detects each target virus directly from plant leaf extracts prepared with a simple and inexpensive extraction method avoiding laborious extraction of high-quality RNA. Direct RT-RPA enables the detection of virus-positive samples in under 30 min at a single low operation temperature (37 °C) without the need for any expensive instrumentation. The Direct RT-RPA tests constitute robust, accurate, sensitive and quick methods for detection of potyviruses from recalcitrant plant species. The minimal sample preparation requirements and the possibility of storing RPA reagents without cold chain storage, allow Direct RT-RPA to be adopted in minimally equipped laboratories and with potential use in plant clinic laboratories and seed certification facilities worldwide. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Evaluating the Risks of Clinical Research: Direct Comparative Analysis

    PubMed Central

    Abdoler, Emily; Roberson-Nay, Roxann; Pine, Daniel S.; Wendler, David

    2014-01-01

    Objectives: Many guidelines and regulations allow children and adolescents to be enrolled in research without the prospect of clinical benefit when it poses minimal risk. However, few systematic methods exist to determine when research risks are minimal. This situation has led to significant variation in minimal risk judgments, raising concern that some children are not being adequately protected. To address this concern, we describe a new method for implementing the widely endorsed “risks of daily life” standard for minimal risk. This standard defines research risks as minimal when they do not exceed the risks posed by daily life activities or routine examinations. Methods: This study employed a conceptual and normative analysis, and use of an illustrative example. Results: Different risks are composed of the same basic elements: Type, likelihood, and magnitude of harm. Hence, one can compare the risks of research and the risks of daily life by comparing the respective basic elements with each other. We use this insight to develop a systematic method, direct comparative analysis, for implementing the “risks of daily life” standard for minimal risk. The method offers a way of evaluating research procedures that pose the same types of risk as daily life activities, such as the risk of experiencing anxiety, stress, or other psychological harm. We thus illustrate how direct comparative analysis can be applied in practice by using it to evaluate whether the anxiety induced by a respiratory CO2 challenge poses minimal or greater than minimal risks in children and adolescents. Conclusions: Direct comparative analysis is a systematic method for applying the “risks of daily life” standard for minimal risk to research procedures that pose the same types of risk as daily life activities. It thereby offers a method to protect children and adolescents in research, while ensuring that important studies are not blocked because of unwarranted concerns about research risks. PMID:25210944

  1. Concurrent optimization of material spatial distribution and material anisotropy repartition for two-dimensional structures

    NASA Astrophysics Data System (ADS)

    Ranaivomiarana, Narindra; Irisarri, François-Xavier; Bettebghor, Dimitri; Desmorat, Boris

    2018-04-01

    An optimization methodology for concurrently finding the material spatial distribution and the material anisotropy repartition is proposed for orthotropic, linear and elastic two-dimensional membrane structures. The shape of the structure is parameterized by a density variable that determines the presence or absence of material. The polar method is used to parameterize a general orthotropic material by the invariants of its elasticity tensor under change of frame. A global structural stiffness maximization problem, written as a compliance minimization problem, is treated, and a volume constraint is applied. The compliance minimization can be recast as a double minimization of the complementary energy. An extension of the alternate directions algorithm is proposed to solve the double minimization problem. The algorithm iterates between local minimizations in each element of the structure and global minimizations. Thanks to the polar method, the local minimizations are solved explicitly, providing analytical solutions. The global minimizations are performed with finite element calculations. The method is shown to be straightforward and efficient. Concurrent optimization of the density and anisotropy distribution of a cantilever beam and a bridge is presented.

  2. Toward a preoperative planning tool for brain tumor resection therapies.

    PubMed

    Coffey, Aaron M; Miga, Michael I; Chen, Ishita; Thompson, Reid C

    2013-01-01

    Neurosurgical procedures involving tumor resection require surgical planning such that the surgical path to the tumor is determined to minimize the impact on healthy tissue and brain function. This work demonstrates a predictive tool to aid neurosurgeons in planning tumor resection therapies by finding an optimal model-selected patient orientation that minimizes lateral brain shift in the field of view. Such orientations may facilitate tumor access and removal, possibly reduce the need for retraction, and could minimize the impact of brain shift on image-guided procedures. In this study, preoperative magnetic resonance images were utilized in conjunction with pre- and post-resection laser range scans of the craniotomy and cortical surface to produce patient-specific finite element models of intraoperative shift for 6 cases. These cases were used to calibrate a model (i.e., provide general rules for the application of patient positioning parameters) as well as determine the current model-based framework predictive capabilities. Finally, an objective function is proposed that minimizes shift subject to patient position parameters. Patient positioning parameters were then optimized and compared to our neurosurgeon as a preliminary study. The proposed model-driven brain shift minimization objective function suggests an overall reduction of brain shift by 23 % over experiential methods. This work recasts surgical simulation from a trial-and-error process to one where options are presented to the surgeon arising from an optimization of surgical goals. To our knowledge, this is the first realization of an evaluative tool for surgical planning that attempts to optimize surgical approach by means of shift minimization in this manner.

  3. Designing single- and multiple-shell sampling schemes for diffusion MRI using spherical code.

    PubMed

    Cheng, Jian; Shen, Dinggang; Yap, Pew-Thian

    2014-01-01

    In diffusion MRI (dMRI), determining an appropriate sampling scheme is crucial for acquiring the maximal amount of information for data reconstruction and analysis using the minimal amount of time. For single-shell acquisition, uniform sampling without directional preference is usually favored. To achieve this, a commonly used approach is the Electrostatic Energy Minimization (EEM) method introduced in dMRI by Jones et al. However, the electrostatic energy formulation in EEM is not directly related to the goal of optimal sampling-scheme design, i.e., achieving large angular separation between sampling points. A mathematically more natural approach is to consider the Spherical Code (SC) formulation, which aims to achieve uniform sampling by maximizing the minimal angular difference between sampling points on the unit sphere. Although SC is well studied in the mathematical literature, its current formulation is limited to a single shell and is not applicable to multiple shells. Moreover, SC, or more precisely continuous SC (CSC), currently can only be applied on the continuous unit sphere and hence cannot be used in situations where one or several subsets of sampling points need to be determined from an existing sampling scheme. In this case, discrete SC (DSC) is required. In this paper, we propose novel DSC and CSC methods for designing uniform single-/multi-shell sampling schemes. The DSC and CSC formulations are solved respectively by Mixed Integer Linear Programming (MILP) and a gradient descent approach. A fast greedy incremental solution is also provided for both DSC and CSC. To our knowledge, this is the first work to use SC formulation for designing sampling schemes in dMRI. Experimental results indicate that our methods obtain larger angular separation and better rotational invariance than the generalized EEM (gEEM) method currently used in the Human Connectome Project (HCP).
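
    The fast greedy incremental idea can be illustrated for the single-shell discrete case: from a dense candidate pool, repeatedly add the direction that maximizes the minimal angular separation to the set chosen so far, identifying antipodes since diffusion directions are axial. This is a sketch of the greedy heuristic only, not the authors' MILP formulation.

        import numpy as np

        def greedy_spherical_code(n_points, n_candidates=5000, seed=0):
            """Greedily build a discrete spherical code from random candidates."""
            rng = np.random.default_rng(seed)
            c = rng.standard_normal((n_candidates, 3))
            c /= np.linalg.norm(c, axis=1, keepdims=True)
            chosen = [c[0]]
            for _ in range(n_points - 1):
                # |dot| treats u and -u as the same direction (axial symmetry).
                worst = np.abs(c @ np.array(chosen).T).max(axis=1)
                chosen.append(c[np.argmin(worst)])    # farthest from all chosen points
            return np.array(chosen)

        dirs = greedy_spherical_code(30)              # 30 gradient directions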

  4. Optimization of Operating Parameters for Minimum Mechanical Specific Energy in Drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamrick, Todd

    2011-01-01

    Efficiency in drilling is measured by Mechanical Specific Energy (MSE). MSE is the measure of the amount of energy input required to remove a unit volume of rock, expressed in units of energy input divided by volume removed. It can be expressed mathematically in terms of controllable parameters: Weight on Bit, Torque, Rate of Penetration, and RPM. It is well documented that minimizing MSE by optimizing controllable factors results in maximum Rate of Penetration. Current methods for computing MSE make it possible to minimize MSE in the field only through a trial-and-error process. This work makes it possible to compute the optimum drilling parameters that result in minimum MSE. The parameters that have been traditionally used to compute MSE are interdependent. Mathematical relationships between the parameters were established, and the conventional MSE equation was rewritten in terms of a single parameter, Weight on Bit, establishing a form that can be minimized mathematically. Once the optimum Weight on Bit was determined, the interdependent relationship that Weight on Bit has with Torque and Penetration per Revolution was used to determine optimum values for those parameters for a given drilling situation. The improved method was validated through laboratory experimentation and analysis of published data. Two rock types were subjected to four treatments each, and drilled in a controlled laboratory environment. The method was applied in each case, and the optimum parameters for minimum MSE were computed. The method demonstrated an accurate means to determine optimum drilling parameters of Weight on Bit, Torque, and Penetration per Revolution. A unique application of micro-cracking is also presented, which demonstrates that rock failure ahead of the bit is related to axial force more than to rotation speed.
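
    For concreteness, the commonly used form of MSE due to Teale is WOB/A + 120*pi*N*T/(A*ROP) in field units. The sketch below evaluates it and minimizes over Weight on Bit, using invented single-parameter response curves for torque and penetration rate as stand-ins for the interdependence relations established in this work.

        import numpy as np
        from scipy.optimize import minimize_scalar

        AREA = np.pi * (8.5 / 2) ** 2          # bit area for an 8.5-inch bit, in^2
        RPM = 120.0

        def mse(wob, torque, rop):
            """Teale's MSE (psi): WOB/A + 120*pi*N*T / (A*ROP)."""
            return wob / AREA + 120.0 * np.pi * RPM * torque / (AREA * rop)

        # Invented response curves: torque grows with WOB; ROP saturates (bit founder).
        torque_of = lambda wob: 0.02 * wob                          # ft-lb
        rop_of = lambda wob: 60.0 * (1.0 - np.exp(-wob / 15000.0))  # ft/hr

        res = minimize_scalar(lambda w: mse(w, torque_of(w), rop_of(w)),
                              bounds=(1e3, 5e4), method="bounded")
        print(f"optimum WOB ~ {res.x:.0f} lbf")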

  5. A Concept for Power Cycling the Electronics of CALICE-AHCAL with the Train Structure of ILC

    NASA Astrophysics Data System (ADS)

    Göttlicher, Peter; The Calice-Collaboration

    Particle flow algorithm calorimetry requires high-granularity three-dimensional readout. The tight power requirement of 40 μW/channel is reached by enabling readout ASIC currents only during beam delivery, corresponding to a 1% duty cycle. EMI noise caused by current switching needs to be minimized by the power system, and this paper presents ideas, simulations and first measurements for minimizing disturbances. A careful design of circuits, printed circuit boards and grounding scheme, together with the use of floating supplies, allows current loops to be closed locally, voltages to be stabilized, and currents in the metal structures to be minimized.

  6. A method for generating reliable atomistic models of amorphous polymers based on a random search of energy minima

    NASA Astrophysics Data System (ADS)

    Curcó, David; Casanovas, Jordi; Roca, Marc; Alemán, Carlos

    2005-07-01

    A method for generating atomistic models of dense amorphous polymers is presented. The method is organized as a two-step procedure. First, structures are generated using an algorithm that minimizes the torsional strain. After this, a relaxation algorithm is applied to minimize the non-bonding interactions. Two alternative relaxation methods, based on simple minimization and Concerted Rotation techniques, have been implemented. The performance of the method has been checked by simulating polyethylene, polypropylene, nylon 6, poly(L,D-lactic acid) and polyglycolic acid.

  7. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
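
    For a fixed parameter sample and trial subspace, the weighted-norm minimization at the heart of the method reduces to an ordinary weighted least-squares problem; the sketch below shows that reduction on an invented toy system and omits the stochastic, parameter-dependent machinery of the paper.

        import numpy as np

        def lspg_solve(A, b, Phi, w):
            """Minimize || diag(w) (b - A Phi y) ||_2 over the subspace spanned by Phi."""
            W = np.diag(w)
            y, *_ = np.linalg.lstsq(W @ A @ Phi, W @ b, rcond=None)
            return Phi @ y                       # approximate solution in the full space

        rng = np.random.default_rng(1)
        A = rng.standard_normal((20, 20)) + 20.0 * np.eye(20)   # toy system matrix
        b = rng.standard_normal(20)
        Phi = rng.standard_normal((20, 3))                      # 3-dimensional trial subspace
        x_hat = lspg_solve(A, b, Phi, w=np.ones(20))            # plain residual norm
        # A different weighting w targets a different weighted norm of the error.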

  8. Comparison between different direct search optimization algorithms in the calibration of a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca

    2010-05-01

    The modern distributed hydrological models allow the representation of the different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm), which minimize a cost function representing some distance between the model's output and the available measures, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that carry out an assimilation of the observations. In this work the first approach was followed in order to compare the performances of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box) and comprises a large number of possible approaches. The main benefit of this class of methods is that they don't require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and utilized here as a reference. The second algorithm is a GSS (Generating Set Search) algorithm, built to guarantee the conditions of global convergence and suitable for the parallel, multi-start implementation presented here. The third one is the EGO algorithm (Efficient Global Optimization), which is particularly suitable for calibrating black-box cost functions that require expensive computational resources (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem against the need to improve the approximation by sampling where the prediction error may be high. The hydrological model to be calibrated was MOBIDIC, a complete distributed balance model developed at the Department of Civil and Environmental Engineering of the University of Florence. A discussion of comparisons of the effectiveness of the different algorithms on several case studies of central Italy basins is provided.
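
    As a minimal sketch of the first, derivative-free approach: wrap the black-box model in a cost function measuring the distance between simulated and observed series, and hand it to a simplex search such as SciPy's Nelder-Mead. The stand-in model and observations below are invented; in the study the black box is the MOBIDIC simulation.

        import numpy as np
        from scipy.optimize import minimize

        observed = np.array([1.2, 0.9, 1.7, 2.1])
        model_run = lambda p: p[0] * np.array([1.0, 0.8, 1.5, 2.0]) + p[1]  # stand-in model

        def cost(params):
            """RMSE between the black-box model output and the observations."""
            return np.sqrt(np.mean((model_run(params) - observed) ** 2))

        res = minimize(cost, x0=[1.0, 0.0], method="Nelder-Mead",
                       options={"xatol": 1e-4, "fatol": 1e-4})
        print(res.x)   # calibrated parameters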

  9. An Automated and Minimally Invasive Tool for Generating Autologous Viable Epidermal Micrografts

    PubMed Central

    Osborne, Sandra N.; Schmidt, Marisa A.; Harper, John R.

    2016-01-01

    ABSTRACT OBJECTIVE: A new epidermal harvesting tool (CelluTome; Kinetic Concepts, Inc, San Antonio, Texas) created epidermal micrografts with minimal donor site damage, increased expansion ratios, and did not require the use of an operating room. The tool, which applies both heat and suction concurrently to normal skin, was used to produce epidermal micrografts that were assessed for uniform viability, donor-site healing, and discomfort during and after the epidermal harvesting procedure. DESIGN: This study was a prospective, noncomparative institutional review board–approved healthy human study to assess epidermal graft viability, donor-site morbidity, and patient experience. SETTING: These studies were conducted at the multispecialty research facility, Clinical Trials of Texas, Inc, San Antonio. PATIENTS: The participants were 15 healthy human volunteers. RESULTS: The average viability of epidermal micrografts was 99.5%. Skin assessment determined that 76% to 100% of the area of all donor sites was the same in appearance as the surrounding skin within 14 days after epidermal harvest. A mean pain of 1.3 (on a scale of 1 to 5) was reported throughout the harvesting process. CONCLUSIONS: Use of this automated, minimally invasive harvesting system provided a simple, low-cost method of producing uniformly viable autologous epidermal micrografts with minimal patient discomfort and superficial donor-site wound healing within 2 weeks. PMID:26765157

  10. An Automated and Minimally Invasive Tool for Generating Autologous Viable Epidermal Micrografts.

    PubMed

    Osborne, Sandra N; Schmidt, Marisa A; Harper, John R

    2016-02-01

    A new epidermal harvesting tool (CelluTome; Kinetic Concepts, Inc, San Antonio, Texas) created epidermal micrografts with minimal donor site damage, increased expansion ratios, and did not require the use of an operating room. The tool, which applies both heat and suction concurrently to normal skin, was used to produce epidermal micrografts that were assessed for uniform viability, donor-site healing, and discomfort during and after the epidermal harvesting procedure. This study was a prospective, noncomparative institutional review board-approved healthy human study to assess epidermal graft viability, donor-site morbidity, and patient experience. These studies were conducted at the multispecialty research facility, Clinical Trials of Texas, Inc, San Antonio. The participants were 15 healthy human volunteers. The average viability of epidermal micrografts was 99.5%. Skin assessment determined that 76% to 100% of the area of all donor sites was the same in appearance as the surrounding skin within 14 days after epidermal harvest. A mean pain of 1.3 (on a scale of 1 to 5) was reported throughout the harvesting process. Use of this automated, minimally invasive harvesting system provided a simple, low-cost method of producing uniformly viable autologous epidermal micrografts with minimal patient discomfort and superficial donor-site wound healing within 2 weeks.

  11. Reverse engineering time discrete finite dynamical systems: a feasible undertaking?

    PubMed

    Delgado-Eckert, Edgar

    2009-01-01

    With the advent of high-throughput profiling methods, interest in reverse engineering the structure and dynamics of biochemical networks is high. Recently an algorithm for reverse engineering of biochemical networks was developed by Laubenbacher and Stigler. It is a top-down approach using time discrete dynamical systems. One of its key steps includes the choice of a term order, a technicality imposed by the use of Gröbner-bases calculations. The aim of this paper is to identify minimal requirements on data sets to be used with this algorithm and to characterize optimal data sets. We found minimal requirements on a data set based on how many terms the functions to be reverse engineered display. Furthermore, we identified optimal data sets, which we characterized using a geometric property called "general position". Moreover, we developed a constructive method to generate optimal data sets, provided a codimensional condition is fulfilled. In addition, we present a generalization of their algorithm that does not depend on the choice of a term order. For this method we derived a formula for the probability of finding the correct model, provided the data set used is optimal. We analyzed the asymptotic behavior of the probability formula for a growing number of variables n (i.e., interacting chemicals). Unfortunately, this formula converges to zero rapidly as n grows. Therefore, even if an optimal data set is used and the restrictions in using term orders are overcome, the reverse engineering problem remains unfeasible, unless prodigious amounts of data are available. Such large data sets are experimentally impossible to generate with today's technologies.

  12. A method for calculating minimum biodiversity offset multipliers accounting for time discounting, additionality and permanence

    PubMed Central

    Laitila, Jussi; Moilanen, Atte; Pouzols, Federico M

    2014-01-01

    Biodiversity offsetting, which means compensation for ecological and environmental damage caused by development activity, has recently been gaining strong political support around the world. One common criticism levelled at offsets is that they exchange certain and almost immediate losses for uncertain future gains. In the case of restoration offsets, gains may be realized after a time delay of decades, and with considerable uncertainty. Here we focus on offset multipliers, which are ratios between damaged and compensated amounts (areas) of biodiversity. Multipliers have the attraction of being an easily understandable way of deciding the amount of offsetting needed. On the other hand, exact values of multipliers are very difficult to compute in practice if at all possible. We introduce a mathematical method for deriving minimum levels for offset multipliers under the assumption that offsetting gains must compensate for the losses (no net loss offsetting). We calculate absolute minimum multipliers that arise from time discounting and delayed emergence of offsetting gains for a one-dimensional measure of biodiversity. Despite the highly simplified model, we show that even the absolute minimum multipliers may easily be quite large, in the order of dozens, and theoretically arbitrarily large, contradicting the relatively low multipliers found in literature and in practice. While our results inform policy makers about realistic minimal offsetting requirements, they also challenge many current policies and show the importance of rigorous models for computing (minimum) offset multipliers. The strength of the presented method is that it requires minimal underlying information. We include a supplementary spreadsheet tool for calculating multipliers to facilitate application. PMID:25821578
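
    A deliberately simplified version of the no-net-loss condition already shows how large the minimum multipliers can get. If the losses are immediate and certain while the gains emerge fully after T years, are discounted at annual rate r, and are realized with probability p (a crude stand-in for additionality and permanence), then m * p * (1+r)^(-T) >= 1 gives the minimum multiplier computed below; the parameter values are invented for illustration.

        def min_multiplier(r, T, p):
            """Minimum offset multiplier from the condition m * p * (1+r)**(-T) >= 1."""
            return (1.0 + r) ** T / p

        # 30-year delay, 3% annual discounting, 70% restoration success probability:
        print(round(min_multiplier(0.03, 30, 0.7), 1))   # ~3.5, and it grows quickly with T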

  13. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ²), and GCV (which does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
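
    For a denoising-type reconstruction f(y) under i.i.d. Gaussian noise of variance σ², SURE takes the form ||y - f(y)||²/n - σ² + (2σ²/n)·div f(y), and the Jacobian trace can be estimated with a single random probe instead of being formed explicitly. The sketch below illustrates that Monte Carlo estimate; it is a simplification of the weighted Predicted-/Projected-SURE measures developed in the paper, and the names in the commented usage line are placeholders.

        import numpy as np

        def mc_divergence(f, y, eps=1e-3, seed=0):
            """Monte Carlo estimate of div_y f(y) via a single random probe vector."""
            rng = np.random.default_rng(seed)
            b = rng.standard_normal(y.shape)
            return b @ (f(y + eps * b) - f(y)) / eps

        def sure(f, y, sigma):
            """SURE estimate of the mean-squared error of the estimator f at data y."""
            n = y.size
            return (np.sum((y - f(y)) ** 2) / n - sigma ** 2
                    + 2.0 * sigma ** 2 * mc_divergence(f, y) / n)

        # Parameter tuning: sweep lambda and keep the SURE minimizer, e.g.
        # best = min(lams, key=lambda lam: sure(lambda y: reconstruct(y, lam), y, sigma))
        # (reconstruct is a hypothetical reconstruction routine, not a real API.)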

  14. Fastener Capture Plate Technology to Contain On-Orbit Debris

    NASA Technical Reports Server (NTRS)

    Eisenhower, Kevin

    2010-01-01

    The Fastener Capture Plate technology was developed to solve the problem of capturing loose hardware and small fasteners, items that were not originally intended to be disengaged in microgravity, thus preventing them from becoming space debris. This technology was incorporated into astronaut tools designed and successfully used on NASA's Hubble Space Telescope Servicing Mission #4. The technology's ultimate benefit is that it allows a very time-efficient method for disengaging fasteners and removing hardware while minimizing the chances of losing parts or generating debris. The technology aims to simplify the manual labor required of the operator. It does so by optimizing visibility and access to the work site and minimizing the operator's need to be concerned with debris while performing the operations. It has a range of unique features that were developed to minimize task time, as well as maximize the ease and confidence of the astronaut operator. This paper describes the technology and the astronaut tools developed specifically for a complicated on-orbit repair, and it includes photographs of the hardware being used in outer space.

  15. Current methods of diagnosis and management of ureteral injuries.

    PubMed

    Armenakas, N A

    1999-04-01

    A delay in diagnosis is the most important contributory factor in morbidity related to ureteral injury. The difficulty in making the diagnosis can be minimized by maintenance of a high index of suspicion and the timely performance of the appropriate radiographic and intraoperative evaluations. A decision on the timing of repair of the ureteral injury is based on the patient's overall condition, promptness of injury recognition, and proper injury staging. Ideally, when identified promptly, ureteral injuries should be repaired immediately. However, once there has been a delay in diagnosis or in the case of an unstable patient, temporizing measures can be used for urinary diversion. With the availability of simple, minimally invasive techniques to manage urinary extravasation and the absence of any risk of ureteral hemorrhage, ureteral reconstruction can be safely deferred until an opportune time during the recovery period. Successful surgical management requires familiarity with the broad reconstructive armamentarium and meticulous attention to the specific details of each procedure. Through adherence to the diagnostic and therapeutic principles outlined, complications can be minimized and renal preservation can be maximized in patients sustaining ureteral injuries.

  16. Minimization of the energy loss of nuclear power plants in case of partial in-core monitoring system failure

    NASA Astrophysics Data System (ADS)

    Zagrebaev, A. M.; Ramazanov, R. N.; Lunegova, E. A.

    2017-01-01

    In this paper we consider the optimization problem of minimizing the energy loss of nuclear power plants in the case of partial in-core monitoring system failure. Two options exist: continuing reactor operation at reduced power, or totally replacing the failed neutron-measurement channels, which requires shutting down the reactor and maintaining a stock of detectors. This article examines the reconstruction of the energy release in the core of a nuclear reactor on the basis of the readings of the height sensors. The missing measurement information can be reconstructed by mathematical methods, and replacement of the failed sensors can be avoided. It is suggested that a set of ‘natural’ functions determined by means of statistical estimates obtained from archival data be constructed. The procedure proposed makes it possible to reconstruct the field even with a significant loss of measurement information. Improving the accuracy of the reconstruction of the neutron flux density under partial loss of measurement information minimizes the stock of necessary components and the associated losses.

  17. Choosing colors for map display icons using models of visual search.

    PubMed

    Shive, Joshua; Francis, Gregory

    2013-04-01

    We show how to choose colors for icons on maps to minimize search time using predictions of a model of visual search. The model analyzes digital images of a search target (an icon on a map) and a search display (the map containing the icon) and predicts search time as a function of target-distractor color distinctiveness and target eccentricity. We parameterized the model using data from a visual search task and performed a series of optimization tasks to test the model's ability to choose colors for icons to minimize search time across icons. Map display designs made by this procedure were tested experimentally. In a follow-up experiment, we examined the model's flexibility to assign colors in novel search situations. The model fits human performance, performs well on the optimization tasks, and can choose colors for icons on maps with novel stimuli to minimize search time without requiring additional model parameter fitting. Models of visual search can suggest color choices that produce search time reductions for display icons. Designers should consider constructing visual search models as a low-cost method of evaluating color assignments.
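
    A much-reduced stand-in for the model-based procedure: if search time shrinks as target-distractor color distance grows, then greedily assigning each new icon the palette color farthest from those already assigned is a reasonable baseline. Euclidean distance over CIELAB-like coordinates is used below as a crude proxy for the model's distinctiveness term; the palette values are invented.

        import numpy as np

        def pick_icon_colors(palette, n_icons):
            """Greedy max-min-distance assignment of palette colors to icons."""
            palette = np.asarray(palette, dtype=float)
            chosen = [0]                              # start from an arbitrary color
            while len(chosen) < n_icons:
                d = np.linalg.norm(palette[:, None] - palette[chosen][None], axis=2)
                chosen.append(int(np.argmax(d.min(axis=1))))   # farthest from chosen set
            return palette[chosen]

        # Invented palette in CIELAB-like (L*, a*, b*) coordinates:
        palette = [[50, 0, 0], [60, 40, 30], [70, -40, 30], [40, 10, -50], [80, 0, 60]]
        print(pick_icon_colors(palette, 3))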

  18. SPITZER SECONDARY ECLIPSE DEPTHS WITH MULTIPLE INTRAPIXEL SENSITIVITY CORRECTION METHODS OBSERVATIONS OF WASP-13b, WASP-15b, WASP-16b, WASP-62b, AND HAT-P-22b

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kilpatrick, Brian M.; Tucker, Gregory S.; Lewis, Nikole K.

    2017-01-01

    We measure the 4.5 μm thermal emission of five transiting hot Jupiters, WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b using channel 2 of the Infrared Array Camera (IRAC) on the Spitzer Space Telescope. Significant intrapixel sensitivity variations in Spitzer IRAC data require careful correction in order to achieve precision on the order of several hundred parts per million (ppm) for the measurement of exoplanet secondary eclipses. We determine eclipse depths by first correcting the raw data using three independent data reduction methods. The Pixel Gain Map (PMAP), Nearest Neighbors (NNBR), and Pixel Level Decorrelation (PLD) each correct for the intrapixel sensitivity effect in Spitzer photometric time-series observations. The results from each methodology are compared against each other to establish if they reach a statistically equivalent result in every case and to evaluate their ability to minimize uncertainty in the measurement. We find that all three methods produce reliable results. For every planet examined here NNBR and PLD produce results that are in statistical agreement. However, the PMAP method appears to produce results in slight disagreement in cases where the stellar centroid is not kept consistently on the most well characterized area of the detector. We evaluate the ability of each method to reduce the scatter in the residuals as well as in the correlated noise in the corrected data. The NNBR and PLD methods consistently minimize both white and red noise levels and should be considered reliable and consistent. The planets in this study span equilibrium temperatures from 1100 to 2000 K and have brightness temperatures that require either high albedo or efficient recirculation. However, it is possible that other processes such as clouds or disequilibrium chemistry may also be responsible for producing these brightness temperatures.

  19. Spitzer Secondary Eclipse Depths with Multiple Intrapixel Sensitivity Correction Methods Observations of WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b

    NASA Astrophysics Data System (ADS)

    Kilpatrick, Brian M.; Lewis, Nikole K.; Kataria, Tiffany; Deming, Drake; Ingalls, James G.; Krick, Jessica E.; Tucker, Gregory S.

    2017-01-01

    We measure the 4.5 μm thermal emission of five transiting hot Jupiters, WASP-13b, WASP-15b, WASP-16b, WASP-62b, and HAT-P-22b using channel 2 of the Infrared Array Camera (IRAC) on the Spitzer Space Telescope. Significant intrapixel sensitivity variations in Spitzer IRAC data require careful correction in order to achieve precision on the order of several hundred parts per million (ppm) for the measurement of exoplanet secondary eclipses. We determine eclipse depths by first correcting the raw data using three independent data reduction methods. The Pixel Gain Map (PMAP), Nearest Neighbors (NNBR), and Pixel Level Decorrelation (PLD) each correct for the intrapixel sensitivity effect in Spitzer photometric time-series observations. The results from each methodology are compared against each other to establish if they reach a statistically equivalent result in every case and to evaluate their ability to minimize uncertainty in the measurement. We find that all three methods produce reliable results. For every planet examined here NNBR and PLD produce results that are in statistical agreement. However, the PMAP method appears to produce results in slight disagreement in cases where the stellar centroid is not kept consistently on the most well characterized area of the detector. We evaluate the ability of each method to reduce the scatter in the residuals as well as in the correlated noise in the corrected data. The NNBR and PLD methods consistently minimize both white and red noise levels and should be considered reliable and consistent. The planets in this study span equilibrium temperatures from 1100 to 2000 K and have brightness temperatures that require either high albedo or efficient recirculation. However, it is possible that other processes such as clouds or disequilibrium chemistry may also be responsible for producing these brightness temperatures.

  20. Scleral fixation of one piece intraocular lens by injector implantation

    PubMed Central

    Can, Ertuğrul; Başaran, Reşat; Gül, Adem; Birinci, Hakkı

    2014-01-01

    Aim of Study: With an ab-interno technique of transscleral suturing of current one-piece posterior chamber intraocular lenses (PC IOLs) by injector implantation in the absence of capsular support, we aimed to demonstrate the possibility of implanting one-piece acrylic PC IOLs that might be produced in the future for scleral fixation only, through a small clear corneal incision. Materials and Methods: Case report and literature review. Results: This procedure has been performed in eight aphakic eyes with four different types of IOLs. Good centration was achieved with minimal technical effort. All patients had well-centered and stable lenses postoperatively during 9-18 months of follow-up. Conclusion: We managed to decrease the risks of surgical trauma and the need for intricate surgical maneuvers. With this technique, excessive fluid leakage and consequent hypotony can be minimized. PMID:25230961

  1. System design and animal experiment study of a novel minimally invasive surgical robot.

    PubMed

    Wang, Wei; Li, Jianmin; Wang, Shuxin; Su, He; Jiang, Xueming

    2016-03-01

    Robot-assisted minimally invasive surgery has shown tremendous advances over the traditional technique. However, currently commercialized systems are large and complicated, which vastly raises the system cost and operation room requirements. A MIS robot named 'MicroHand' was developed over the past few years. The basic principle and the key technologies are analyzed in this paper. Comparison between the proposed robot and the da Vinci system is also presented. Finally, animal experiments were carried out to test the performance of MicroHand. Fifteen animal experiments were carried out from July 2013 to December 2013. All animal experiments were finished successfully. The proposed design method is an effective way to resolve the drawbacks of previous generations of the da Vinci surgical system. The animal experiment results confirmed the feasibility of the design. Copyright © 2015 John Wiley & Sons, Ltd.

  2. New minimal access hydrocelectomy.

    PubMed

    Saber, Aly

    2011-02-01

    To ascertain the acceptability of minimal access hydrocelectomy through a 2-cm incision and the outcome in terms of morbidity reduction and recurrence rate. Although controversy exists regarding the treatment of hydrocele, hydrocelectomy remains the treatment of choice for hydroceles. However, the standard surgical procedures for hydrocele can cause postoperative discomfort and complications. A total of 42 adult patients, aged 18-56 years, underwent hydrocelectomy as an outpatient procedure using a 2-cm scrotal skin incision and excision of only a small disk of the parietal tunica vaginalis. The operative time was 12-18 minutes (mean 15). The outcome measures included patient satisfaction and postoperative complications. This procedure requires minor dissection and minimal manipulation during treatment. It also resulted in no recurrence and minimal complications and required a short operative time. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Tracking-by-detection of surgical instruments in minimally invasive surgery via the convolutional neural network deep learning-based method.

    PubMed

    Zhao, Zijian; Voros, Sandrine; Weng, Ying; Chang, Faliang; Li, Ruijian

    2017-12-01

    Worldwide propagation of minimally invasive surgeries (MIS) is hindered by their drawback of indirect observation and manipulation, and the monitoring of surgical instruments moving in the operated body, which surgeons require, is a challenging problem. Tracking of surgical instruments by vision-based methods is quite attractive, due to its flexible implementation via software-based control with no need to modify instruments or the surgical workflow. A MIS instrument is conventionally split into shaft and end-effector portions, and a 2D/3D tracking-by-detection framework is proposed which performs the shaft tracking followed by the end-effector tracking. The former portion is described by line features via the RANSAC scheme, while the latter is depicted by special image features based on deep learning through a well-trained convolutional neural network. The method is verified in 2D and 3D formulations through experiments on ex-vivo video sequences, while qualitative validation on in-vivo video sequences is obtained. The proposed method provides robust and accurate tracking, which is confirmed by the experimental results: its 3D performance on ex-vivo video sequences exceeds that of the available state-of-the-art methods. Moreover, the experiments on in-vivo sequences demonstrate that the proposed method can tackle the difficult condition of tracking with unknown camera parameters. Further refinements of the method will address occlusion and multi-instrument MIS applications.

  4. Quadratic polynomial interpolation on triangular domain

    NASA Astrophysics Data System (ADS)

    Li, Ying; Zhang, Congcong; Yu, Qian

    2018-04-01

    In the simulation of natural terrain, the continuity of the sample points is not always consistent, and traditional interpolation methods often cannot faithfully reflect the shape information contained in the data points. Therefore, a new method for constructing a polynomial interpolation surface on a triangular domain is proposed. First, the scattered spatial data points are projected onto a plane and then triangulated. Second, a C1-continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to be as close as possible to the linear interpolant. Last, the unknown quantities are obtained by minimizing the objective functions, and the boundary points are treated specially. The resulting surfaces preserve as many properties of the data points as possible while satisfying certain accuracy and continuity requirements, without being overly convex. The new method is simple to compute, has good locality, and is applicable to shape fitting of mines, exploratory wells, and the like. The resulting surface is given in experiments.
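
    The first two steps can be sketched directly: project the scattered points onto the plane, triangulate them, and fit a quadratic patch by least squares. The neighborhood-based fit below is a stand-in that ignores the C1 continuity and closeness-to-linear-interpolation constraints of the actual construction.

        import numpy as np
        from scipy.spatial import Delaunay

        rng = np.random.default_rng(0)
        pts = rng.random((40, 3))                 # scattered terrain samples (x, y, z)
        tri = Delaunay(pts[:, :2])                # step 1: project to the plane, triangulate

        def quadratic_patch(xy, z):
            """Least-squares fit of z = c0 + c1 x + c2 y + c3 x^2 + c4 xy + c5 y^2."""
            x, y = xy[:, 0], xy[:, 1]
            V = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
            c, *_ = np.linalg.lstsq(V, z, rcond=None)
            return c

        # Step 2 (simplified): fit one patch around the first vertex's neighbors.
        d = np.linalg.norm(pts[:, :2] - pts[0, :2], axis=1)
        nbrs = np.argsort(d)[:10]
        coeffs = quadratic_patch(pts[nbrs, :2], pts[nbrs, 2])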

  5. A method to assess the population-level consequences of wind energy facilities on bird and bat species: Chapter

    USGS Publications Warehouse

    Diffendorfer, James E.; Beston, Julie A.; Merrill, Matthew; Stanton, Jessica C.; Corum, Margo D.; Loss, Scott R.; Thogmartin, Wayne E.; Johnson, Douglas H.; Erickson, Richard A.; Heist, Kevin W.

    2016-01-01

    For this study, a methodology was developed for assessing impacts of wind energy generation on populations of birds and bats at regional to national scales. The approach combines existing methods in applied ecology for prioritizing species in terms of their potential risk from wind energy facilities and estimating impacts of fatalities on population status and trend caused by collisions with wind energy infrastructure. Methods include a qualitative prioritization approach, demographic models, and potential biological removal. The approach can be used to prioritize species in need of more thorough study as well as to identify species with minimal risk. However, the components of this methodology require simplifying assumptions and the data required may be unavailable or of poor quality for some species. These issues should be carefully considered before using the methodology. The approach will increase in value as more data become available and will broaden the understanding of anthropogenic sources of mortality on bird and bat populations.

  6. USER'S GUIDE: Strategic Waste Minimization Initiative (SWAMI) Version 2.0 - A Software Tool to Aid in Process Analysis for Pollution Prevention

    EPA Science Inventory

    The Strategic WAste Minimization Initiative (SWAMI) Software, Version 2.0 is a tool for using process analysis for identifying waste minimization opportunities within an industrial setting. The software requires user-supplied information for process definition, as well as materia...

  7. Low-Friction, High-Stiffness Joint for Uniaxial Load Cell

    NASA Technical Reports Server (NTRS)

    Lewis, James L.; Le, Thang; Carroll, Monty B.

    2007-01-01

    A universal-joint assembly has been devised for transferring axial tension or compression to a load cell. To maximize measurement accuracy, the assembly is required to minimize any moments and non-axial forces on the load cell and to exhibit little or no hysteresis. The requirement to minimize hysteresis translates to a requirement to maximize axial stiffness (including minimizing backlash) and a simultaneous requirement to minimize friction. In practice, these are competing requirements, encountered repeatedly in efforts to design universal joints. Often, universal-joint designs represent compromises between these requirements. The improved universal-joint assembly contains two universal joints, each containing two adjustable pairs of angular-contact ball bearings. One might be tempted to ask why one could not use simple ball-and-socket joints rather than something as complex as universal joints containing adjustable pairs of angularcontact ball bearings. The answer is that ball-and-socket joints do not offer sufficient latitude to trade stiffness versus friction: the inevitable result of an attempt to make such a trade in a ball-and-socket joint is either too much backlash or too much friction. The universal joints are located at opposite ends of an axial subassembly that contains the load cell. The axial subassembly includes an axial shaft, an axial housing, and a fifth adjustable pair of angular-contact ball bearings that allows rotation of the axial housing relative to the shaft. The preload on each pair of angular-contact ball bearings can be adjusted to obtain the required stiffness with minimal friction, tailored for a specific application. The universal joint at each end affords two degrees of freedom, allowing only axial force to reach the load cell regardless of application of moments and non-axial forces. The rotational joint on the axial subassembly affords a fifth degree of freedom, preventing application of a torsion load to the load cell.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    March-Leuba, J.A.

    Nuclear plants of the 21st century will employ higher levels of automation and fault tolerance to increase availability, reduce accident risk, and lower operating costs. Key developments in control algorithms, fault diagnostics, fault tolerance, and communication in a distributed system are needed to implement the fully automated plant. Equally challenging will be integrating developments in separate information and control fields into a cohesive system, which collectively achieves the overall goals of improved performance, safety, reliability, maintainability, and cost-effectiveness. Under the Nuclear Energy Research Initiative (NERI), the U.S. Department of Energy is sponsoring a project to address some of the technical issues involved in meeting the long-range goal of 21st century reactor control systems. This project, "A New Paradigm for Automated Development of Highly Reliable Control Architectures for Future Nuclear Plants," involves researchers from Oak Ridge National Laboratory, the University of Tennessee, and North Carolina State University. This paper documents a research effort to develop methods for automated generation of control systems that can be traced directly to the design requirements. Our final goal is to allow the designer to specify only high-level requirements and the stress factors that the control system must survive (e.g., a list of transients, or a requirement to withstand a single failure). To this end, the "control engine" automatically selects and validates control algorithms and parameters that are optimized to the current state of the plant, and that have been tested under the prescribed stress factors. The control engine then automatically generates the control software from the validated algorithms. Examples of stress factors that the control system must "survive" are transient events (e.g., set-point changes, or expected occurrences such as a load rejection) and postulated component failures. These stress factors are specified by the designer and become a database of prescribed transients and component failures. The candidate control systems are tested, and their parameters optimized, for each of these stresses. Examples of high-level requirements are: response time less than xx seconds, or overshoot less than xx%, etc. In mathematical terms, these types of requirements are defined as "constraints," and there are standard mathematical methods to minimize an objective function subject to constraints. Since, in principle, any control design that satisfies all the above constraints is acceptable, the designer must also select an objective function that describes the "goodness" of the control design. Examples of objective functions are: minimize the number or amount of control motions, minimize an energy balance, etc.
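
    As an illustration of the constrained-minimization framing described above, the sketch below uses scipy.optimize.minimize to minimize a control-effort objective subject to response-time and overshoot constraints. The surrogate metrics, the gains kp and ki, and all numeric limits are this sketch's assumptions, not the project's models.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a plant simulation: given controller gains, return
# performance metrics. In the project described above this role is played
# by a validated plant model exercised by the prescribed stress factors.
def simulate(gains):
    kp, ki = gains
    settling_time = 10.0 / (1.0 + kp)        # seconds (hypothetical surrogate)
    overshoot = 5.0 * ki / (1.0 + kp)        # percent (hypothetical surrogate)
    control_effort = kp**2 + 4.0 * ki**2     # "amount of control motions"
    return settling_time, overshoot, control_effort

def objective(gains):
    return simulate(gains)[2]                # minimize control effort

constraints = [
    {"type": "ineq", "fun": lambda g: 5.0 - simulate(g)[0]},   # settle < 5 s
    {"type": "ineq", "fun": lambda g: 10.0 - simulate(g)[1]},  # overshoot < 10%
]

result = minimize(objective, x0=[1.0, 1.0], constraints=constraints)
print(result.x, result.fun)
```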

  9. Continuous Shape Estimation of Continuum Robots Using X-ray Images

    PubMed Central

    Lobaton, Edgar J.; Fu, Jinghua; Torres, Luis G.; Alterovitz, Ron

    2015-01-01

    We present a new method for continuously and accurately estimating the shape of a continuum robot during a medical procedure using a small number of X-ray projection images (e.g., radiographs or fluoroscopy images). Continuum robots have curvilinear structure, enabling them to maneuver through constrained spaces by bending around obstacles. Accurately estimating the robot’s shape continuously over time is crucial for the success of procedures that require avoidance of anatomical obstacles and sensitive tissues. Online shape estimation of a continuum robot is complicated by uncertainty in its kinematic model, movement of the robot during the procedure, noise in X-ray images, and the clinical need to minimize the number of X-ray images acquired. Our new method integrates kinematics models of the robot with data extracted from an optimally selected set of X-ray projection images. Our method represents the shape of the continuum robot over time as a deformable surface which can be described as a linear combination of time and space basis functions. We take advantage of probabilistic priors and numeric optimization to select optimal camera configurations, thus minimizing the expected shape estimation error. We evaluate our method using simulated concentric tube robot procedures and demonstrate that obtaining between 3 and 10 images from viewpoints selected by our method enables online shape estimation with errors significantly lower than using the kinematic model alone or using randomly spaced viewpoints. PMID:26279960
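
    The deformable-surface representation described above can be sketched as a separable space-time expansion. The example below uses polynomial bases purely for illustration; the paper's basis choice and fitting procedure are not reproduced here.

```python
import numpy as np

# Separable space-time sketch: the backbone point at arc length u and time t
# is sum_ij C[i, j] * phi_i(u) * psi_j(t), with one coefficient block per
# spatial coordinate (x, y, z). Polynomial bases are an assumption of this
# sketch, not necessarily the paper's choice.
def shape(coeffs, u, t):
    B_u = np.vander(u, N=coeffs.shape[1], increasing=True)   # spatial basis
    b_t = np.array([t**j for j in range(coeffs.shape[2])])   # temporal basis
    return np.stack([B_u @ C @ b_t for C in coeffs], axis=-1)

rng = np.random.default_rng(0)
coeffs = rng.normal(size=(3, 4, 3))     # hypothetical fitted coefficients
u = np.linspace(0.0, 1.0, 50)           # arc-length samples along the robot
print(shape(coeffs, u, 0.5).shape)      # (50, 3): backbone points at t = 0.5
```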

  10. Continuous Shape Estimation of Continuum Robots Using X-ray Images.

    PubMed

    Lobaton, Edgar J; Fu, Jinghua; Torres, Luis G; Alterovitz, Ron

    2013-05-06

    We present a new method for continuously and accurately estimating the shape of a continuum robot during a medical procedure using a small number of X-ray projection images (e.g., radiographs or fluoroscopy images). Continuum robots have curvilinear structure, enabling them to maneuver through constrained spaces by bending around obstacles. Accurately estimating the robot's shape continuously over time is crucial for the success of procedures that require avoidance of anatomical obstacles and sensitive tissues. Online shape estimation of a continuum robot is complicated by uncertainty in its kinematic model, movement of the robot during the procedure, noise in X-ray images, and the clinical need to minimize the number of X-ray images acquired. Our new method integrates kinematics models of the robot with data extracted from an optimally selected set of X-ray projection images. Our method represents the shape of the continuum robot over time as a deformable surface which can be described as a linear combination of time and space basis functions. We take advantage of probabilistic priors and numeric optimization to select optimal camera configurations, thus minimizing the expected shape estimation error. We evaluate our method using simulated concentric tube robot procedures and demonstrate that obtaining between 3 and 10 images from viewpoints selected by our method enables online shape estimation with errors significantly lower than using the kinematic model alone or using randomly spaced viewpoints.

  11. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    NASA Astrophysics Data System (ADS)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.
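
    The essential loop, measure deviations, linearize the settings-to-deviation map, and solve for a correction, can be sketched generically as below. The finite-difference sensitivity matrix and the toy deviation function are illustrative stand-ins, not the paper's derived mapping for hypoid gears.

```python
import numpy as np

def correct_settings(settings, measure_deviation, n_iter=5, eps=1e-6):
    """Generic iterative reverse-correction sketch (not the paper's exact
    algorithm): linearize the settings-to-deviation map with a finite-
    difference sensitivity matrix S and solve S dx = -d in the
    least-squares sense at each iteration."""
    x = np.asarray(settings, dtype=float)
    for _ in range(n_iter):
        d = measure_deviation(x)              # tooth-surface deviations
        S = np.empty((d.size, x.size))        # sensitivity matrix
        for j in range(x.size):
            xp = x.copy()
            xp[j] += eps
            S[:, j] = (measure_deviation(xp) - d) / eps
        dx, *_ = np.linalg.lstsq(S, -d, rcond=None)
        x += dx
    return x

# Hypothetical toy deviation model for demonstration only.
target = np.array([0.2, -0.1, 0.05])
dev = lambda x: np.tanh(x - target)
print(correct_settings(np.zeros(3), dev))     # converges toward `target`
```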

  12. Thin plate spline feature point matching for organ surfaces in minimally invasive surgery imaging

    NASA Astrophysics Data System (ADS)

    Lin, Bingxiong; Sun, Yu; Qian, Xiaoning

    2013-03-01

    Robust feature point matching for images with large view angle changes in Minimally Invasive Surgery (MIS) is a challenging task due to low texture and specular reflections in these images. This paper presents a new approach that can improve feature matching performance by exploiting the inherent geometric property of the organ surfaces. Recently, intensity based template image tracking using a Thin Plate Spline (TPS) model has been extended for 3D surface tracking with stereo cameras. The intensity based tracking is also used here for 3D reconstruction of internal organ surfaces. To overcome the small displacement requirement of intensity based tracking, feature point correspondences are used for proper initialization of the nonlinear optimization in the intensity based method. Second, we generate simulated images from the reconstructed 3D surfaces under all potential view positions and orientations, and then extract feature points from these simulated images. The obtained feature points are then filtered and re-projected to the common reference image. The descriptors of the feature points under different view angles are stored to ensure that the proposed method can tolerate a large range of view angles. We evaluate the proposed method with silicon phantoms and in vivo images. The experimental results show that our method is much more robust with respect to the view angle changes than other state-of-the-art methods.
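
    For reference, the 2D thin plate spline interpolant underlying the TPS model can be sketched as follows, assuming the standard radial kernel U(r) = r^2 log r plus an affine part; the control points here are hypothetical.

```python
import numpy as np

def tps_kernel(r):
    # 2D thin plate spline radial basis U(r) = r^2 * log(r), with U(0) = 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        u = (r ** 2) * np.log(r)
    return np.where(r > 0, u, 0.0)

def fit_tps(src, dst):
    """Fit a TPS mapping from src to dst control points (n x 2 arrays).
    Solves the standard augmented linear system for the radial weights
    and the affine coefficients."""
    n = len(src)
    K = tps_kernel(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])     # affine block [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]                   # radial weights, affine coeffs

src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
dst = src + 0.05                              # hypothetical matched points
weights, affine = fit_tps(src, dst)
```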

  13. A novel automatic quantification method for high-content screening analysis of DNA double strand-break response.

    PubMed

    Feng, Jingwen; Lin, Jie; Zhang, Pengquan; Yang, Songnan; Sa, Yu; Feng, Yuanming

    2017-08-29

    High-content screening is commonly used in studies of the DNA damage response. The double-strand break (DSB) is one of the most harmful types of DNA damage lesions. The conventional method used to quantify DSBs is γH2AX foci counting, which requires manual adjustment and preset parameters and is usually regarded as imprecise, time-consuming, poorly reproducible, and inaccurate. Therefore, a robust automatic alternative method is highly desired. In this manuscript, we present a new method for quantifying DSBs which involves automatic image cropping, automatic foci-segmentation and fluorescent intensity measurement. Furthermore, an additional function was added for standardizing the measurement of DSB response inhibition based on co-localization analysis. We tested the method with a well-known inhibitor of DSB response. The new method requires only one preset parameter, which effectively minimizes operator-dependent variations. Compared with conventional methods, the new method detected a higher percentage difference of foci formation between different cells, which can improve measurement accuracy. The effects of the inhibitor on DSB response were successfully quantified with the new method (p = 0.000). The advantages of this method in terms of reliability, automation and simplicity show its potential in quantitative fluorescence imaging studies and high-content screening for compounds and factors involved in DSB response.
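
    A minimal sketch of the single-parameter idea, thresholding the image, labeling connected foci, and measuring fluorescent intensity, is shown below using scipy.ndimage. This is an illustrative pipeline under simplifying assumptions, not the authors' published implementation.

```python
import numpy as np
from scipy import ndimage

def quantify_foci(image, threshold):
    """Single-parameter foci quantification sketch: threshold the gammaH2AX
    channel, label connected foci, and report the focus count and the
    total fluorescent intensity within foci."""
    mask = image > threshold                  # the one preset parameter
    labels, n_foci = ndimage.label(mask)
    total_intensity = float(image[mask].sum())
    return n_foci, total_intensity

rng = np.random.default_rng(1)
img = rng.poisson(5, size=(64, 64)).astype(float)   # synthetic background
img[20:24, 30:34] += 40.0                           # synthetic focus
print(quantify_foci(img, threshold=25.0))
```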

  14. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurence, T; Chromy, B

    2009-11-10

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the nonlinear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm; it is simple to implement, quick, and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Nonlinear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, since it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to nonlinear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the use of the ubiquitous, fast Levenberg-Marquardt (L-M) fitting procedure for fitting nonlinear models using least squares (simple searches obtain approximately 10,000 references, not counting those who use it without knowing they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence; the downward gradient methods, on the other hand, have a much wider domain of convergence but converge extremely slowly near the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on. Only those who are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple, analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. Ways have been found to use successive nonlinear least squares fits to obtain similarly unbiased results, but this procedure is justified by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE for Poisson deviates with convergence domains and rates comparable to nonlinear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure to minimize not the least squares measure, but the MLE for Poisson deviates.
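
    One standard way to drive an off-the-shelf L-M least-squares routine with the Poisson MLE measure, in the spirit of the approach described above though not necessarily the authors' exact implementation, is to hand it the square roots of the per-bin Poisson deviance contributions 2*(m_i - d_i + d_i*ln(d_i/m_i)), each of which is non-negative. The sketch below does this with scipy.optimize.least_squares(method="lm") on a hypothetical single-exponential decay histogram.

```python
import numpy as np
from scipy.optimize import least_squares

def poisson_deviance_residuals(params, t, counts, model):
    """Square roots of the per-bin Poisson deviance 2*(m - d + d*ln(d/m)).
    Each contribution is >= 0, so an ordinary Levenberg-Marquardt routine
    applied to these residuals minimizes the Poisson MLE measure instead
    of chi-square."""
    m = np.maximum(model(params, t), 1e-12)   # guard against m <= 0 during search
    d = counts
    dev = 2.0 * (m - d)
    nz = d > 0
    dev[nz] += 2.0 * d[nz] * np.log(d[nz] / m[nz])
    return np.sqrt(np.maximum(dev, 0.0))

# Illustrative single-exponential decay histogram; names are hypothetical.
model = lambda p, t: p[0] * np.exp(-t / p[1])
rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 100)
counts = rng.poisson(model([200.0, 2.5], t)).astype(float)

fit = least_squares(poisson_deviance_residuals, x0=[100.0, 1.0],
                    args=(t, counts, model), method="lm")
print(fit.x)   # estimates of amplitude and lifetime
```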

  15. Minimization of organic and metallic industrial waste via lemna minor concentration. Final report, 1 September 1991-1 December 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers-Irons, G.L.

    1992-12-30

    In recent years, strict new environmental laws have required Air Force complexes to adopt improved, cost-effective water purification methods. Naturally assisted primary units (microbiological) and secondary units (macrophyte) could bring waste treatment systems into tighter compliance. Aquatic macrophytes which have rapid growth rates and absorb large quantities of nutrients could provide a practical and economic method for more complete wastewater maintenance, hazardous waste clean-up, or river, lake and ground water purification. This work has shown that Lemna minor, or Common Duckweed, can successfully and thoroughly accumulate organics and metals from Air Force wastewaters.

  16. Evaluation of random errors in Williams’ series coefficients obtained with digital image correlation

    NASA Astrophysics Data System (ADS)

    Lychak, Oleh V.; Holyns'kiy, Ivan S.

    2016-03-01

    The use of the Williams' series parameters for fracture analysis requires valid information about their error values. The aim of this investigation is to develop a method for estimating the standard deviation of random errors in the Williams' series parameters obtained from measured components of the stress field. A criterion for choosing the optimal number of terms in the truncated Williams' series, so that the parameters are derived with minimal errors, is also proposed. The method was used to evaluate the Williams' parameters obtained from stress-field data measured by the digital image correlation technique on a three-point bending specimen.
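
    The random-error estimate for least-squares-fitted series coefficients can be sketched generically as follows, assuming the usual linear error propagation through the normal equations; the paper's derivation may differ in detail, and the design matrix below is a random placeholder.

```python
import numpy as np

def coefficient_std(A, y):
    """Generic error-propagation sketch for linear least squares: fit
    y = A c, estimate the noise variance from the residuals, and
    propagate it through cov(c) = sigma^2 * (A^T A)^{-1}."""
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    m, n = A.shape
    sigma2 = np.sum((y - A @ c) ** 2) / (m - n)   # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)
    return c, np.sqrt(np.diag(cov))               # estimates and their stds

# Hypothetical design matrix: measured stress components vs. the first
# five Williams-series terms (entries here are random placeholders).
rng = np.random.default_rng(3)
A = rng.normal(size=(200, 5))
y = A @ np.array([1.0, 0.5, -0.2, 0.1, 0.0]) + rng.normal(scale=0.05, size=200)
coeffs, stds = coefficient_std(A, y)
print(coeffs, stds)
```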

  17. Computer aided flexible envelope designs

    NASA Technical Reports Server (NTRS)

    Resch, R. D.

    1975-01-01

    Computer-aided design methods are presented for the design and construction of strong, lightweight structures that require complex and precise geometric definition. The first, flexible structures, is a unique system for modeling folded plate structures and space frames; it is possible to continuously vary the geometry of a space frame to produce large, clear spans with curvature. The second method deals with developable surfaces, where both folding and bending are explored within the constraints of available building materials, seeking the minimal distortion that yields maximum design capability. Alternative inexpensive fabrication techniques are being developed to achieve computer-defined enclosures which are extremely lightweight and mathematically highly precise.

  18. Control of integrated micro-resonator wavelength via balanced homodyne locking.

    PubMed

    Cox, Jonathan A; Lentine, Anthony L; Trotter, Douglas C; Starbuck, Andrew L

    2014-05-05

    We describe and experimentally demonstrate a method for active control of resonant modulators and filters in an integrated photonics platform. Variations in resonance frequency due to manufacturing processes and thermal fluctuations are corrected by way of balanced homodyne locking. The method is compact, insensitive to intensity fluctuations, minimally disturbs the micro-resonator, and does not require an arbitrary reference to lock. We demonstrate long-term stable locking of an integrated filter to a laser swept over 1.25 THz. In addition, we show locking of a modulator with low bit error rate while the chip temperature is varied from 5 to 60 °C.

  19. Fabrication of implantable microelectrode arrays by laser cutting of silicone rubber and platinum foil.

    PubMed

    Schuettler, M; Stiess, S; King, B V; Suaning, G J

    2005-03-01

    A new method for fabrication of microelectrode arrays comprised of traditional implant materials is presented. The main construction principle is the use of spun-on medical grade silicone rubber as insulating substrate material and platinum foil as conductor (tracks, pads and electrodes). The silicone rubber and the platinum foil are patterned by laser cutting using an Nd:YAG laser and a microcontroller-driven, stepper-motor operated x-y table. The method does not require expensive clean room facilities and offers an extremely short design-to-prototype time of below 1 day. First prototypes demonstrate a minimal achievable feature size of about 30 μm.

  20. Genetic Interaction Score (S-Score) Calculation, Clustering, and Visualization of Genetic Interaction Profiles for Yeast.

    PubMed

    Roguev, Assen; Ryan, Colm J; Xu, Jiewei; Colson, Isabelle; Hartsuiker, Edgar; Krogan, Nevan

    2018-02-01

    This protocol describes computational analysis of genetic interaction screens, ranging from data capture (plate imaging) to downstream analyses. Plate imaging approaches using both digital camera and office flatbed scanners are included, along with a protocol for the extraction of colony size measurements from the resulting images. A commonly used genetic interaction scoring method, calculation of the S-score, is discussed. These methods require minimal computer skills, but some familiarity with MATLAB and Linux/Unix is a plus. Finally, an outline for using clustering and visualization software for analysis of resulting data sets is provided. © 2018 Cold Spring Harbor Laboratory Press.
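
    A minimal sketch of the S-score idea, a modified t-statistic comparing experimental colony sizes to a control distribution, is shown below; the published method adds variance smoothing and bounds that are omitted in this sketch, and the colony sizes are hypothetical.

```python
import numpy as np

def s_score(exp_sizes, ctrl_sizes):
    """Modified t-statistic sketch of the S-score: compare the mean colony
    size of a double mutant to the control distribution, scaled by the
    pooled standard error of the two samples."""
    exp = np.asarray(exp_sizes, dtype=float)
    ctrl = np.asarray(ctrl_sizes, dtype=float)
    se = np.sqrt(exp.var(ddof=1) / exp.size + ctrl.var(ddof=1) / ctrl.size)
    return (exp.mean() - ctrl.mean()) / se

# Negative score: colonies smaller than control (aggravating interaction).
print(s_score([80, 85, 78, 82], [100, 98, 103, 99]))
```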

  1. Weight-matrix structured regularization provides optimal generalized least-squares estimate in diffuse optical tomography.

    PubMed

    Yalavarthy, Phaneendra K; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2007-06-01

    Diffuse optical tomography (DOT) involves estimation of tissue optical properties using noninvasive boundary measurements. The image reconstruction procedure is a nonlinear, ill-posed, and ill-determined problem, so overcoming these difficulties requires regularization of the solution. While the methods developed for solving the DOT image reconstruction problem have a long history, there is little direct evidence on optimal regularization methods, or on a common theoretical framework for techniques that use least-squares (LS) minimization. A generalized least-squares (GLS) method is discussed here, which incorporates the variances and covariances among the individual data points and the optical properties in the image into a structured weight matrix. It is shown that most of the least-squares techniques applied in DOT can be considered special cases of this more general LS approach. The performance of three minimization techniques using the same implementation scheme is compared using test problems with increasing noise level and increasing complexity within the imaging field. Techniques that use spatial-prior information as constraints can also be incorporated into the GLS formalism. It is also illustrated that inclusion of spatial priors reduces the image error by at least a factor of 2. The improvement of GLS minimization is even more apparent when the noise level in the data is high (as high as 10%), indicating that the benefits of this approach are important for reconstruction of data in a routine setting where the data variance can be known based upon the signal-to-noise properties of the instruments.
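
    In symbols, a GLS objective with structured weights and its Gauss-Newton update can be sketched as follows; the notation (data weight W_y, parameter weight W_mu, Jacobian J) is an assumption of this sketch and not necessarily the paper's.

```latex
\Omega(\mu) = \left(y - F(\mu)\right)^{T} W_{y} \left(y - F(\mu)\right)
            + \left(\mu - \mu_{0}\right)^{T} W_{\mu} \left(\mu - \mu_{0}\right),
\qquad
\Delta\mu = \left(J^{T} W_{y} J + W_{\mu}\right)^{-1}
            \left[ J^{T} W_{y} \left(y - F(\mu)\right)
                 - W_{\mu} \left(\mu - \mu_{0}\right) \right].
```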

  2. Minimal invasive right ventricular and atrial pacemaker lead repositioning as a first alternative is superior in avoiding pocket complications with passive fixation leads.

    PubMed

    Osztheimer, István; Szilágyi, Szabolcs; Pongor, Zsuzsanna; Zima, Endre; Molnár, Levente; Tahin, Tamás; Özcan, Emin Evren; Széplaki, Gábor; Merkely, Béla; Gellér, László

    2017-06-01

    Lead dislocations of pacemaker systems are reported in all centers, even high-volume ones. Repeated procedures necessitated by lead dislocations are associated with an increased risk of complications. We investigated a minimally invasive method for right atrial and ventricular lead repositioning. The minimally invasive method was applied only when passive fixation leads were implanted. During the minimally invasive procedure, a steerable catheter was advanced through the femoral vein to move the distal end of the lead to the appropriate position. Retrospective data were collected at a single center between September 2006 and December 2012 for all patients treated with either the minimally invasive or the conventional method. Forty-five minimally invasive lead repositionings were performed, of which eight were acutely unsuccessful and nine electrodes re-dislocated after the procedure. One hundred two leads were repositioned with opening of the pocket during the same period, including the ones with unsuccessful minimally invasive repositionings. One procedure was acutely unsuccessful in this group and four re-dislocations occurred. A significant difference in success rates was noted (66.6% vs. 95.1%, p = 0.001). One complication was observed during the minimally invasive lead repositionings (left ventricular lead microdislodgement). Open-pocket procedures showed different types of complications (pneumothorax, subclavian artery puncture, pericardial effusion, hematoma, fever, device-associated infection necessitating explantation, atrial lead dislodgement while repositioning the ventricular one, and deterioration of renal function). The minimally invasive method as a first alternative is safe and feasible. In those cases where it cannot be carried out successfully, the conventional method is applicable.

  3. Effectiveness of Rapid Cooling as a Method of Euthanasia for Young Zebrafish (Danio rerio).

    PubMed

    Wallace, Chelsea K; Bright, Lauren A; Marx, James O; Andersen, Robert P; Mullins, Mary C; Carty, Anthony J

    2018-01-01

    Despite increased use of zebrafish (Danio rerio) in biomedical research, consistent information regarding appropriate euthanasia methods, particularly for embryos, is sparse. Current literature indicates that rapid cooling is an effective method of euthanasia for adult zebrafish, yet consistent guidelines regarding zebrafish younger than 6 mo are unavailable. This study was performed to distinguish the age at which rapid cooling is an effective method of euthanasia for zebrafish and the exposure times necessary to reliably euthanize zebrafish using this method. Zebrafish at 3, 4, 7, 14, 16, 19, 21, 28, 60, and 90 d postfertilization (dpf) were placed into an ice water bath for 5, 10, 30, 45, or 60 min (n = 12 to 40 per group). In addition, zebrafish were placed in ice water for 12 h (age ≤14 dpf) or 30 s (age ≥14 dpf). After rapid cooling, fish were transferred to a recovery tank and the number of fish alive at 1, 4, and 12-24 h after removal from ice water was documented. Euthanasia was defined as a failure when evidence of recovery was observed at any point after removal from ice water. Results showed that younger fish required prolonged exposure to rapid cooling for effective euthanasia, with the required exposure time decreasing as fish age. Although younger fish required long exposure times, animals became immobilized immediately upon exposure to the cold water, and behavioral indicators of pain or distress rarely occurred. We conclude that zebrafish 14 dpf and younger require as long as 12 h, those 16 to 28 dpf of age require 5 min, and those older than 28 dpf require 30 s minimal exposure to rapid cooling for reliable euthanasia.

  4. On the convergence of nonconvex minimization methods for image recovery.

    PubMed

    Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei

    2015-05-01

    Nonconvex nonsmooth regularization methods have been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme converge to a critical point of this nonconvex nonsmooth objective function. We also extend the analysis to a nonconvex nonsmooth regularization model with box constraints, and obtain similar convergence results for the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.
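
    For a generic objective of the form F(u, v), the alternating scheme whose convergence is analyzed can be sketched as (schematic notation, not the paper's exact model):

```latex
u^{k+1} \in \operatorname*{arg\,min}_{u} F\!\left(u, v^{k}\right), \qquad
v^{k+1} \in \operatorname*{arg\,min}_{v} F\!\left(u^{k+1}, v\right),
```

    with the Kurdyka-Łojasiewicz property of F being the key ingredient that guarantees the sequence of iterates converges to a critical point rather than merely accumulating at one.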

  5. 40 CFR 230.73 - Actions affecting the method of dispersion.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Actions To Minimize Adverse Effects § 230.73 Actions affecting the method of dispersion. The effects of a... obstruction to the water current or circulation pattern, and utilizing natural bottom contours to minimize the... patterns to mix, disperse and dilute the discharge; (e) Minimizing water column turbidity by using a...

  6. 40 CFR 230.73 - Actions affecting the method of dispersion.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Actions To Minimize Adverse Effects § 230.73 Actions affecting the method of dispersion. The effects of a... obstruction to the water current or circulation pattern, and utilizing natural bottom contours to minimize the... patterns to mix, disperse and dilute the discharge; (e) Minimizing water column turbidity by using a...

  7. 40 CFR 230.73 - Actions affecting the method of dispersion.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Actions To Minimize Adverse Effects § 230.73 Actions affecting the method of dispersion. The effects of a... obstruction to the water current or circulation pattern, and utilizing natural bottom contours to minimize the... patterns to mix, disperse and dilute the discharge; (e) Minimizing water column turbidity by using a...

  8. 40 CFR 230.73 - Actions affecting the method of dispersion.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Actions To Minimize Adverse Effects § 230.73 Actions affecting the method of dispersion. The effects of a... obstruction to the water current or circulation pattern, and utilizing natural bottom contours to minimize the... patterns to mix, disperse and dilute the discharge; (e) Minimizing water column turbidity by using a...

  9. An evaluation on CT image acquisition method for medical VR applications

    NASA Astrophysics Data System (ADS)

    Jang, Seong-wook; Ko, Junho; Yoo, Yon-sik; Kim, Yoonsang

    2017-02-01

    Recent medical virtual reality (VR) applications intended to minimize re-operations are being studied to improve surgical efficiency and reduce operative error. The CT image acquisition method that supports three-dimensional (3D) modeling is important for medical VR applications, because a realistic model of the actual human organ is required. However, research on medical VR applications has focused on 3D modeling techniques and the use of 3D models, and a CT image acquisition method that takes 3D modeling into account has not been reported. The conventional CT image acquisition method scans a limited area around the lesion, once or twice, for the doctor's diagnosis. Medical VR applications, however, require CT images that cover a wider area than the lesion and capture patients' various postures. A wider area is required because dyskinesia diagnosis of the shoulder, pelvis, and leg involves comparing bilateral sides, and various postures are required because they affect the musculoskeletal system differently. Therefore, in this paper, we perform a comparative experiment on CT images acquired with different image areas (unilateral/bilateral) and patient postures (neutral/abducted). CT images were acquired from 10 patients, and the acquired images were evaluated based on the length per pixel and the morphological deviation. Finally, by comparing the experimental results, we evaluate the CT image acquisition method for medical VR applications.

  10. Behavioural cues of reproductive status in seahorses Hippocampus abdominalis.

    PubMed

    Whittington, C M; Musolf, K; Sommer, S; Wilson, A B

    2013-07-01

    A method is described to assess the reproductive status of male Hippocampus abdominalis on the basis of behavioural traits. The non-invasive nature of this technique minimizes handling stress and reduces sampling requirements for experimental work. It represents a useful tool to assist researchers in sample collection for studies of reproduction and development in viviparous syngnathids, which are emerging as important model species. © 2013 The Authors. Journal of Fish Biology © 2013 The Fisheries Society of the British Isles.

  11. A computerized traffic control algorithm to determine optimal traffic signal settings. Ph.D. Thesis - Toledo Univ.

    NASA Technical Reports Server (NTRS)

    Seldner, K.

    1977-01-01

    An algorithm was developed to optimally control the traffic signals at each intersection using a discrete-time traffic model applicable to heavy or peak traffic. Off-line optimization procedures were applied to compute the cycle splits required to minimize the lengths of the vehicle queues and the delay at each intersection. The method was applied to an extensive traffic network in Toledo, Ohio. Results obtained with the derived optimal settings are compared with the control settings presently in use.
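
    A discrete-time queue recurrence of the kind suggested above can be sketched as follows; the specific update rule and all numbers are illustrative assumptions, not the thesis's model.

```python
import numpy as np

def queue_evolution(arrivals, green_fractions, saturation_flow, q0=0.0):
    """Discrete-time queue sketch: q[k+1] = max(0, q[k] + a[k] - s*g[k]),
    where a[k] is arrivals in cycle k, g[k] the green fraction allotted to
    the approach, and s the saturation departure rate. Summing the history
    gives a delay-like quantity to minimize over the cycle splits g."""
    q, history = q0, []
    for a, g in zip(arrivals, green_fractions):
        q = max(0.0, q + a - saturation_flow * g)
        history.append(q)
    return np.array(history)

# Hypothetical peak-period demand over 10 cycles with a fixed 50% split.
arrivals = [12, 15, 18, 20, 20, 18, 15, 12, 10, 8]
greens = [0.5] * 10
print(queue_evolution(arrivals, greens, saturation_flow=30.0))
```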

  12. Implementing Model-Check for Employee and Management Satisfaction

    NASA Technical Reports Server (NTRS)

    Jones, Corey; LaPha, Steven

    2013-01-01

    This presentation will discuss methods by which ModelCheck can be implemented to not only improve model quality but also satisfy both employees and management through different sets of quality checks. This approach allows a standard set of modeling practices to be upheld throughout a company, with minimal interaction required of the end user. The presenter will demonstrate how to create multiple ModelCheck standards, preventing users from evading the system, and how doing so can improve the quality of drawings and models.

  13. A fast Cauchy-Riemann solver. [differential equation solution for boundary conditions by finite difference approximation

    NASA Technical Reports Server (NTRS)

    Ghil, M.; Balgovind, R.

    1979-01-01

    The inhomogeneous Cauchy-Riemann equations in a rectangle are discretized by a finite difference approximation. Several different boundary conditions are treated explicitly, leading to algorithms which have overall second-order accuracy. All boundary conditions with either u or v prescribed along a side of the rectangle can be treated by similar methods. The algorithms presented here have nearly minimal time and storage requirements and seem suitable for development into a general-purpose direct Cauchy-Riemann solver for arbitrary boundary conditions.
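
    For reference, the inhomogeneous Cauchy-Riemann system being discretized has the standard form (the boundary-condition variants treated in the paper prescribe u or v along sides of the rectangle):

```latex
\frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} = f, \qquad
\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} = g .
```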

  14. Novel Method for Exchange of Impella Circulatory Assist Catheter: The "Trojan Horse" Technique.

    PubMed

    Phillips, Colin T; Tamez, Hector; Tu, Thomas M; Yeh, Robert W; Pinto, Duane S

    2017-07-01

    Patients with an indwelling Impella may require escalation of hemodynamic support or exchange to another circulatory assistance platform. As such, preservation of vascular access is preferable in cases where anticoagulation cannot be discontinued or to facilitate exchange to an alternative catheter or closure device. Challenges exist in avoiding bleeding and loss of wire access in these situations. We describe a single-access "Trojan Horse" technique that minimizes bleeding while maintaining arterial access for rapid exchange of this percutaneous ventricular assist device.

  15. Development of a turbomachinery design optimization procedure using a multiple-parameter nonlinear perturbation method

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.

    1984-01-01

    An investigation was carried out to complete the preliminary development of a combined perturbation/optimization procedure and associated computational code for designing optimized blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to provide demonstration of a rapid nonlinear perturbation method for minimizing the computational requirements associated with parametric design studies of turbomachinery flows. The method combines the multiple parameter nonlinear perturbation method, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver, and the COPES-CONMIN optimization procedure into a user's code for designing optimized blade-to-blade surface profiles of turbomachinery blades. Results of several design applications and a documented version of the code together with a user's manual are provided.

  16. Optimal sensors placement and spillover suppression

    NASA Astrophysics Data System (ADS)

    Hanis, Tomas; Hromcik, Martin

    2012-04-01

    A new approach to optimal placement of sensors (OSP) in mechanical structures is presented. In contrast to existing methods, the presented procedure enables a designer to seek a trade-off between the presence of desirable modes in captured measurements and the elimination of the influence of those mode shapes that are not of interest in a given situation. An efficient numerical algorithm is presented, developed from an existing routine based on analysis of the Fisher information matrix. We consider two requirements in the optimal sensor placement procedure: on top of the classical EFI approach, the sensor configuration should also minimize spillover of unwanted higher modes. We use the information approach to OSP, based on the effective independence method (EFI), and modify the underlying criterion to meet both requirements, maximizing useful signals while minimizing spillover of unwanted modes. Performance of our approach is demonstrated by means of examples, and a flexible Blended Wing Body (BWB) aircraft case study related to a running European-level FP7 research project 'ACFA 2020—Active Control for Flexible Aircraft'.
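
    The EFI core, iteratively deleting the candidate location with the smallest effective-independence value diag(Phi (Phi^T Phi)^{-1} Phi^T), can be sketched as below; the spillover-penalized criterion introduced in the paper is not reproduced, and the mode shapes are random placeholders.

```python
import numpy as np

def efi_select(phi, n_sensors):
    """Effective independence (EFI) sketch: repeatedly delete the candidate
    location contributing least to the Fisher information of the retained
    target mode shapes, until n_sensors locations remain."""
    idx = np.arange(phi.shape[0])
    while len(idx) > n_sensors:
        P = phi @ np.linalg.inv(phi.T @ phi) @ phi.T
        ed = np.diag(P)                 # effective independence values
        drop = np.argmin(ed)            # least informative location
        idx = np.delete(idx, drop)
        phi = np.delete(phi, drop, axis=0)
    return idx

rng = np.random.default_rng(4)
mode_shapes = rng.normal(size=(30, 4))  # 30 candidate DOFs, 4 target modes
print(efi_select(mode_shapes, n_sensors=8))
```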

  17. [Radiation protection in interventional cardiology].

    PubMed

    Durán, Ariel

    2015-01-01

    Interventional cardiology performs a growing number of increasingly complex procedures each year, with a very good success rate. The problem is that this progress brings greater radiation doses, not only to the patient but to occupationally exposed workers as well. Simple methods for reducing or minimizing occupational radiation dose include: minimizing fluoroscopy time and the number of acquired images; using available patient dose reduction technologies; using good imaging-chain geometry; collimating; avoiding high-scatter areas; using protective shielding; using imaging equipment whose performance is controlled through a quality assurance programme; and wearing personal dosimeters so that you know your dose. Effective use of these methods requires both appropriate education and training in radiation protection for all interventional cardiology personnel, and the availability and use of appropriate protective tools and equipment. Regular review and investigation of personnel monitoring results, accompanied as appropriate by changes in how procedures are performed and equipment used, will ensure continual improvement in the practice of radiation protection in the interventional suite. Copyright © 2014 Instituto Nacional de Cardiología Ignacio Chávez. Published by Masson Doyma México S.A. All rights reserved.

  18. Performance Test Data Analysis of Scintillation Cameras

    NASA Astrophysics Data System (ADS)

    Demirkaya, Omer; Mazrou, Refaat Al

    2007-10-01

    In this paper, we present a set of image analysis tools to calculate the performance parameters of gamma camera systems from test data acquired according to the National Electrical Manufacturers Association (NEMA) NU 1-2001 guidelines. The calculation methods are either completely automated or require minimal user interaction, minimizing potential human errors. The developed methods are robust with respect to the varying conditions under which these tests may be performed. The core algorithms have been validated for accuracy and extensively tested on images acquired by gamma cameras from different vendors. All the algorithms are incorporated into a graphical user interface that provides a convenient way to process the data and report the results. The entire application has been developed in the MATLAB programming environment and is compiled to run as a stand-alone program. The developed image analysis tools provide an automated, convenient, and accurate means to calculate the performance parameters of gamma cameras and SPECT systems. The developed application is available upon request for personal or non-commercial use. The results of this study were partially presented at the Society of Nuclear Medicine Annual Meeting as an InfoSNM presentation.

  19. Probabilistic sparse matching for robust 3D/3D fusion in minimally invasive surgery.

    PubMed

    Neumann, Dominik; Grbic, Sasa; John, Matthias; Navab, Nassir; Hornegger, Joachim; Ionasec, Razvan

    2015-01-01

    Classical surgery is being overtaken by minimally invasive and transcatheter procedures. As there is no direct view or access to the affected anatomy, advanced imaging techniques such as 3D C-arm computed tomography (CT) and C-arm fluoroscopy are routinely used in clinical practice for intraoperative guidance. However, due to constraints regarding acquisition time and device configuration, intraoperative modalities have limited soft tissue image quality and reliable assessment of the cardiac anatomy typically requires contrast agent, which is harmful to the patient and requires complex acquisition protocols. We propose a probabilistic sparse matching approach to fuse high-quality preoperative CT images and nongated, noncontrast intraoperative C-arm CT images by utilizing robust machine learning and numerical optimization techniques. Thus, high-quality patient-specific models can be extracted from the preoperative CT and mapped to the intraoperative imaging environment to guide minimally invasive procedures. Extensive quantitative experiments on 95 clinical datasets demonstrate that our model-based fusion approach has an average execution time of 1.56 s, while the accuracy of 5.48 mm between the anchor anatomy in both images lies within expert user confidence intervals. In direct comparison with image-to-image registration based on an open-source state-of-the-art medical imaging library and a recently proposed quasi-global, knowledge-driven multi-modal fusion approach for thoracic-abdominal images, our model-based method exhibits superior performance in terms of registration accuracy and robustness with respect to both target anatomy and anchor anatomy alignment errors.

  20. Finite-element grid improvement by minimization of stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.; Oswald, Fred B.

    1989-01-01

    A new and simple method of finite-element grid improvement is presented. The objective is to improve the accuracy of the analysis. The procedure is based on a minimization of the trace of the stiffness matrix. For a broad class of problems this minimization is seen to be equivalent to minimizing the potential energy. The method is illustrated with the classical tapered bar problem examined earlier by Prager and Masur. Identical results are obtained.
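
    A minimal sketch of the idea, treating the stiffness-matrix trace as a function of an interior node location and minimizing it, is shown below for a two-element tapered bar with illustrative properties (not Prager and Masur's exact data).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Tapered bar of unit length with linearly varying area A(x) = A0*(1 - 0.5*x),
# modeled with two linear elements sharing one movable interior node at x.
# Each element contributes k_e * [[1, -1], [-1, 1]] to the global stiffness,
# with k_e = E * A(midpoint) / length, so the trace is 2*(k1 + k2).
E, A0 = 1.0, 1.0
area = lambda x: A0 * (1.0 - 0.5 * x)

def stiffness_trace(x):
    k1 = E * area(0.5 * x) / x                      # element on [0, x]
    k2 = E * area(0.5 * (x + 1.0)) / (1.0 - x)      # element on [x, 1]
    return 2.0 * (k1 + k2)

res = minimize_scalar(stiffness_trace, bounds=(0.05, 0.95), method="bounded")
print(res.x)   # interior-node location minimizing the trace
```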

  1. Finite-element grid improvement by minimization of stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.; Oswald, Fred B.

    1987-01-01

    A new and simple method of finite-element grid improvement is presented. The objective is to improve the accuracy of the analysis. The procedure is based on a minimization of the trace of the stiffness matrix. For a broad class of problems this minimization is seen to be equivalent to minimizing the potential energy. The method is illustrated with the classical tapered bar problem examined earlier by Prager and Masur. Identical results are obtained.

  2. Occupational health and safety assessment of exposure to jet fuel combustion products in air medical transport.

    PubMed

    MacDonald, Russell D; Thomas, Laura; Rusk, Frederick C; Marques, Shauna D; McGuire, Dan

    2010-01-01

    Transport medicine personnel are potentially exposed to jet fuel combustion products. Setting-specific data are required to determine whether this poses a risk. This study assessed exposure to jet fuel combustion products, compared various engine ignition scenarios, and determined methods to minimize exposure. The Beechcraft King Air B200 turboprop aircraft equipped with twin turbine engines, using a kerosene-based jet fuel (Jet A-1), was used to measure products of combustion during boarding, engine startup, and flight in three separate engine start scenarios ("shielded": internal engine start, door closed; "exposed": ground power unit start, door open; and "minimized": ground power unit right engine start, door open). Real-time continuous monitoring equipment was used for oxygen, carbon dioxide, carbon monoxide, nitrogen dioxide, hydrogen sulfide, sulfur dioxide, volatile organic compounds, and particulate matter. Integrated methods were used for aldehydes, polycyclic aromatic hydrocarbons, volatile organic compounds, and aliphatic hydrocarbons. Samples were taken in the paramedic breathing zone for approximately 60 minutes, starting just before the paramedics boarded the aircraft. Data were compared against regulated time-weighted exposure thresholds to determine the presence of potentially harmful products of combustion. Polycyclic aromatic hydrocarbons, aldehydes, volatile organic compounds, and aliphatic hydrocarbons were found at very low concentrations or beneath the limits of detection. There were significant differences in exposures to particulates, carbon monoxide, and total volatile organic compound between the "exposed" and "minimized" scenarios. Elevated concentrations of carbon monoxide and total volatile organic compounds were present during the ground power unit-assisted dual-engine start. There were no appreciable exposures during the "minimized" or "shielded" scenarios. Air medical personnel exposures to jet fuel combustion products were generally low and did not exceed established U.S. or Canadian health and safety exposure limits. Avoidance of ground power unit-assisted dual-engine starts and closing the hangar door prior to start minimize or eliminate the occupational exposure.

  3. Cash Management in the United States Marine Corps.

    DTIC Science & Technology

    1984-12-01

    procedures and requires that such departments and agencies conduct financial activities in a manner that will make cash holding requirements...balances so as to minimize the overall cost of holding cash" [Ref. 3: 2]. Simply stated, effective cash management implies the minimization of cash balances held, as opposed to invested, as well as timely receipt and disbursement of government funds. B. ORGANIZATION RESPONSIBILITY FOR FINANCIAL

  4. Examining the Minimal Required Elements of a Computer-Tailored Intervention Aimed at Dietary Fat Reduction: Results of a Randomized Controlled Dismantling Study

    ERIC Educational Resources Information Center

    Kroeze, Willemieke; Oenema, Anke; Dagnelie, Pieter C.; Brug, Johannes

    2008-01-01

    This study investigated which feedback elements of a computer-tailored dietary fat reduction intervention are minimally required for it to be effective in improving fat intake. In all, 588 healthy Dutch adults were randomly allocated to one of four conditions in a randomized controlled trial: (i) feedback on dietary fat intake [personal feedback (P feedback)],…

  5. Work intensity in sacroiliac joint fusion and lumbar microdiscectomy

    PubMed Central

    Frank, Clay; Kondrashov, Dimitriy; Meyer, S Craig; Dix, Gary; Lorio, Morgan; Kovalsky, Don; Cher, Daniel

    2016-01-01

    Background The evidence base supporting minimally invasive sacroiliac (SI) joint fusion (SIJF) surgery is increasing. The work relative value units (RVUs) associated with minimally invasive SIJF are seemingly low. To date, only one published study describes the relative work intensity associated with minimally invasive SIJF. No study has compared work intensity vs other commonly performed spine surgery procedures. Methods Charts of 192 patients at five sites who underwent either minimally invasive SIJF (American Medical Association [AMA] CPT® code 27279) or lumbar microdiscectomy (AMA CPT® code 63030) were reviewed. Abstracted were preoperative times associated with diagnosis and patient care, intraoperative parameters including operating room (OR) in/out times and procedure start/stop times, and postoperative care requirements. Additionally, using a visual analog scale, surgeons estimated the intensity of intraoperative care, including mental, temporal, and physical demands and effort and frustration. Work was defined as operative time multiplied by task intensity. Results Patients who underwent minimally invasive SIJF were more likely female. Mean procedure times were lower in SIJF by about 27.8 minutes (P<0.0001) and mean total OR times were lower by 27.9 minutes (P<0.0001), but there was substantial overlap across procedures. Mean preservice and post-service total labor times were longer in minimally invasive SIJF (preservice times longer by 63.5 minutes [P<0.0001] and post-service labor times longer by 20.2 minutes [P<0.0001]). The number of postoperative visits was higher in minimally invasive SIJF. Mean total service time (preoperative + OR time + postoperative) was higher in the minimally invasive SIJF group (261.5 vs 211.9 minutes, P<0.0001). Intraoperative intensity levels were higher for mental, physical, effort, and frustration domains (P<0.0001 each). After taking into account intensity, intraoperative workloads showed substantial overlap. Conclusion Compared to a commonly performed lumbar spine surgical procedure, lumbar microdiscectomy, that currently has a higher work RVU, preoperative, intraoperative, and postoperative workload for minimally invasive SIJF is higher. The work RVU for minimally invasive SIJF should be adjusted upward as the relative amount of work is comparable. PMID:27555790

  6. BetaSCPWeb: side-chain prediction for protein structures using Voronoi diagrams and geometry prioritization.

    PubMed

    Ryu, Joonghyun; Lee, Mokwon; Cha, Jehyun; Laskowski, Roman A; Ryu, Seong Eon; Kim, Deok-Soo

    2016-07-08

    Many applications, such as protein design, homology modeling, and flexible docking, require the prediction of a protein's optimal side-chain conformations from just its amino acid sequence and backbone structure. Side-chain prediction (SCP) is an NP-hard energy minimization problem. Here, we present BetaSCPWeb, which efficiently computes a conformation close to optimal using a geometry-prioritization method based on the Voronoi diagram of spherical atoms. Its output is provided in visual, textual, and PDB file formats. The web server is free and open to all users at http://voronoi.hanyang.ac.kr/betascpweb with no login requirement. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Cleaning and disinfection of patient care items, in relation to small animals.

    PubMed

    Weese, J Scott

    2015-03-01

    Patient care involves several medical and surgical items, including those that come into contact with sterile or other high-risk body sites and items that have been used on other patients. These situations create a risk for infection if items are contaminated, and the implications can range from single infections to large outbreaks. To minimize the risk, proper equipment cleaning, disinfection/sterilization, storage, and monitoring practices are required. Risks posed by different items; the required level of cleaning, disinfection, or sterilization; the methods that are available and appropriate; and how to ensure efficacy, must be considered when designing and implementing an infection control program. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. 3D hyperpolarized C-13 EPI with calibrationless parallel imaging

    NASA Astrophysics Data System (ADS)

    Gordon, Jeremy W.; Hansen, Rie B.; Shin, Peter J.; Feng, Yesu; Vigneron, Daniel B.; Larson, Peder E. Z.

    2018-04-01

    With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and temporal resolution. Calibrationless parallel imaging approaches are well-suited for this application because they eliminate the need to acquire coil profile maps or auto-calibration data. In this work, we explored the utility of a calibrationless parallel imaging method (SAKE) and corresponding sampling strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated its application in a human study of [1-13C]pyruvate metabolism.

  9. 3D silicon breast surface mapping via structured light profilometry

    NASA Astrophysics Data System (ADS)

    Vairavan, R.; Ong, N. R.; Sauli, Z.; Kirtsaeng, S.; Sakuntasathien, S.; Shahimin, M. M.; Alcain, J. B.; Lai, S. L.; Paitong, P.; Retnasamy, V.

    2017-09-01

    The digital fringe projection technique is a promising optical method for 3D surface imaging, as it is non-contact and non-invasive. These characteristics match the requirements of human body evaluation, which is vital for disease diagnosis and treatment selection, and the technique has accordingly seen wide clinically related application and study. However, its application to 3D surface mapping of the breast has been minimal. Hence, in this work, the application of digital fringe projection to 3D breast surface mapping is reported. The phase-shift fringe projection technique was utilized to perform the 3D breast surface mapping. Initial results confirm the feasibility of using the digital fringe projection method for 3D surface mapping of the breast, and the approach can be extended to breast cancer detection.
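
    A common concrete instance of the phase-shift technique is four-step phase shifting: with fringe images I_n = A + B*cos(phi + n*pi/2), the wrapped phase is phi = atan2(I3 - I1, I0 - I2). The sketch below demonstrates this on synthetic fringes with hypothetical parameters; phase unwrapping and the phase-to-height calibration needed for actual surface mapping are omitted.

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Four-step phase-shifting: recover the wrapped phase from four fringe
    images shifted by pi/2, via phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic fringes over a small hypothetical surface profile.
x = np.linspace(0, 4 * np.pi, 256)
phi_true = 0.5 * np.sin(x)                       # toy surface-induced phase
frames = [128 + 100 * np.cos(phi_true + n * np.pi / 2) for n in range(4)]
phi = wrapped_phase(*frames)
print(np.allclose(phi, phi_true, atol=1e-9))     # True
```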

  10. Requirements for diagnosis of malaria at different levels of the laboratory network in Africa.

    PubMed

    Long, Earl G

    2009-06-01

    The rapid increase of resistance to cheap, reliable antimalarials, the increasing cost of effective drugs, and the low specificity of clinical diagnosis have increased the need for more reliable diagnostic methods for malaria. The most commonly used and most reliable remains microscopic examination of stained blood smears, but this technique requires skilled personnel, precision instruments, and ideally a source of electricity. Microscopy has the advantage of enabling the examiner to identify the species, stage, and density of an infection. An alternative to microscopy is the rapid diagnostic test (RDT), which uses a labeled monoclonal antibody to detect circulating parasitic antigens. This test is most commonly used to detect Plasmodium falciparum infections and is available in a plastic cassette format. Both microscopy and RDTs should be available at all levels of laboratory service in endemic areas, but in peripheral laboratories with minimally trained staff, the RDT may be a more practical diagnostic method.

  11. Power Maximization Control of Variable Speed Wind Generation System Using Permanent Magnet Synchronous Generator

    NASA Astrophysics Data System (ADS)

    Morimoto, Shigeo; Nakamura, Tomohiko; Takeda, Yoji

    This paper proposes sensorless output power maximization control of a wind generation system. A permanent magnet synchronous generator (PMSG) is used as a variable speed generator in the proposed system. The generator torque is suitably controlled according to the generator speed, and thus the power from the wind turbine settles at the maximum power point under the proposed maximum power point tracking (MPPT) control method, which does not require wind velocity information. Moreover, the maximum available generated power is obtained by optimum current vector control. The current vector of the PMSG is optimally controlled according to the generator speed and the required torque in order to minimize the losses of the PMSG, considering the voltage and current constraints. The proposed wind power generation system operates without mechanical sensors such as a wind velocity detector or a position sensor. Several experimental results show the effectiveness of the proposed control method.
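
    The classic sensorless MPPT torque law consistent with this description commands T* = K_opt * omega^2, since at the optimal tip-speed ratio the turbine power is P = K_opt * omega^3. The sketch below uses hypothetical turbine constants and may differ from the paper's exact control law.

```python
import numpy as np

# At the optimal tip-speed ratio lambda_opt, substituting v = omega*R/lambda_opt
# into P = 0.5*rho*pi*R^2*Cp_max*v^3 gives P = K_opt*omega^3, so commanding
# T* = K_opt*omega^2 drives the operating point toward maximum power without
# measuring wind speed.
rho, R = 1.225, 1.5             # air density [kg/m^3], blade radius [m] (hypothetical)
cp_max, lam_opt = 0.45, 7.0     # hypothetical turbine characteristics

K_opt = 0.5 * rho * np.pi * R**5 * cp_max / lam_opt**3

def torque_command(omega):
    """Reference generator torque [N*m] for rotor speed omega [rad/s]."""
    return K_opt * omega**2

print(torque_command(50.0))
```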

  12. Techniques for optimal crop selection in a controlled ecological life support system

    NASA Technical Reports Server (NTRS)

    Mccormack, Ann; Finn, Cory; Dunsky, Betsy

    1993-01-01

    A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
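
    As a sketch of the spreadsheet-style decision analysis described here, the following fragment ranks candidate crops by a weighted score that rewards life-support outputs and penalizes volume and power demands. All crop data, weights, and names are hypothetical, not taken from the study.

    ```python
    # Hypothetical per-crop data: regenerative outputs and system costs
    crops = {
        "wheat":   {"o2": 25.0, "food": 10.0, "volume": 4.0, "power": 2.5},
        "potato":  {"o2": 18.0, "food": 12.0, "volume": 3.0, "power": 1.8},
        "lettuce": {"o2":  8.0, "food":  2.0, "volume": 1.5, "power": 0.9},
    }

    # Spreadsheet-style weighted scoring: positive weights reward life
    # support supplied, negative weights penalize volume and power.
    weights = {"o2": 1.0, "food": 1.0, "volume": -0.8, "power": -0.6}

    def score(crop):
        return sum(weights[k] * crop[k] for k in weights)

    ranked = sorted(crops, key=lambda name: score(crops[name]), reverse=True)
    print(ranked)  # crops ordered from best to worst trade-off
    ```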

  14. Parallelization of implicit finite difference schemes in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Decker, Naomi H.; Naik, Vijay K.; Nicoules, Michel

    1990-01-01

    Implicit finite difference schemes are often the preferred numerical schemes in computational fluid dynamics because they are subject to less stringent stability bounds than explicit schemes. Each iteration of an implicit scheme, however, involves global data dependencies in the form of second- and higher-order recurrences, which make efficient parallel implementations considerably more difficult and less intuitive than their explicit counterparts. The parallelization of the implicit schemes used for solving the Euler and thin-layer Navier-Stokes equations, which require inversions of large linear systems in the form of block tri-diagonal and/or block penta-diagonal matrices, is discussed. Three-dimensional cases are emphasized, and schemes that minimize the total execution time are presented. Partitioning and scheduling schemes for alleviating the effects of the global data dependencies are described, together with an analysis of the communication and computation aspects of these methods. The effect of the boundary conditions on the parallel schemes is also discussed.
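
    The sequential recurrences that frustrate parallelization can be seen in the scalar Thomas algorithm for a tridiagonal system, sketched below: both the forward sweep and the back substitution carry a dependency from one unknown to the next, so neither loop can be naively distributed across processors. This is an illustration of the underlying difficulty, not the paper's block tri-diagonal variant.

    ```python
    import numpy as np

    def thomas_solve(a, b, c, d):
        """Solve a tridiagonal system with sub-, main-, and super-diagonals
        a, b, c (length-n arrays; a[0] and c[-1] unused) and right-hand
        side d. Both sweeps are recurrences: step i needs step i-1."""
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):                    # forward sweep (sequential)
            m = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / m
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):           # back substitution (sequential)
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x
    ```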

  15. CAVE3: A general transient heat transfer computer code utilizing eigenvectors and eigenvalues

    NASA Technical Reports Server (NTRS)

    Palmieri, J. V.; Rathjen, K. A.

    1978-01-01

    The method of solution is a hybrid analytical-numerical technique based on eigenvalues and eigenvectors. The method is inherently stable, permitting large time steps even for highly conductive materials on very fine meshes; for structures with small time constants analyzed over long time periods, this can reduce machine time by a factor of five compared with conventional explicit finite difference methods. The code will find utility in analyzing hypersonic missile and aircraft structures, which fall naturally into this class. The code is completely general: problems involving any geometry, boundary conditions, and materials can be analyzed, which is made possible by requiring the user to establish the thermal network conductances between nodes. Dynamic storage allocation is used to minimize core storage requirements. This report is primarily a user's manual for the CAVE3 code. Input and output formats are presented and explained, and sample problems are included that illustrate the usage of the code and establish the validity and accuracy of the method.
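
    The eigenvalue/eigenvector idea can be sketched for a linear conduction network dT/dt = A T: diagonalizing A gives the matrix exponential exactly, so the update is stable for any step size, however stiff the network. The fragment below is a generic illustration under that assumption, not CAVE3's actual formulation; the two-node network values are invented.

    ```python
    import numpy as np

    def step_exact(A, T0, dt):
        """Advance dT/dt = A T by one step of size dt via the
        eigendecomposition of A. Because exp(A*dt) is evaluated exactly,
        the step is stable for any dt, unlike explicit finite differences.
        For a physical conductance network A is symmetric, so the
        eigenvalues are real."""
        lam, V = np.linalg.eig(A)
        return (V @ np.diag(np.exp(lam * dt)) @ np.linalg.inv(V)) @ T0

    # Tiny two-node network with illustrative conductances and temperatures
    A = np.array([[-2.0,  1.0],
                  [ 1.0, -1.0]])
    T = np.array([300.0, 400.0])
    T_next = step_exact(A, T, dt=10.0).real   # large dt is still stable
    ```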

  16. Evaluating the risks of clinical research: direct comparative analysis.

    PubMed

    Rid, Annette; Abdoler, Emily; Roberson-Nay, Roxann; Pine, Daniel S; Wendler, David

    2014-09-01

    Many guidelines and regulations allow children and adolescents to be enrolled in research without the prospect of clinical benefit when it poses minimal risk. However, few systematic methods exist to determine when research risks are minimal. This situation has led to significant variation in minimal risk judgments, raising concern that some children are not being adequately protected. To address this concern, we describe a new method for implementing the widely endorsed "risks of daily life" standard for minimal risk. This standard defines research risks as minimal when they do not exceed the risks posed by daily life activities or routine examinations. This study employed conceptual and normative analysis together with an illustrative example. Different risks are composed of the same basic elements: type, likelihood, and magnitude of harm. Hence, one can compare the risks of research and the risks of daily life by comparing the respective basic elements with each other. We use this insight to develop a systematic method, direct comparative analysis, for implementing the "risks of daily life" standard for minimal risk. The method offers a way of evaluating research procedures that pose the same types of risk as daily life activities, such as the risk of experiencing anxiety, stress, or other psychological harm. We illustrate how direct comparative analysis can be applied in practice by using it to evaluate whether the anxiety induced by a respiratory CO2 challenge poses minimal or greater than minimal risks in children and adolescents. Direct comparative analysis is a systematic method for applying the "risks of daily life" standard for minimal risk to research procedures that pose the same types of risk as daily life activities. It thereby offers a way to protect children and adolescents in research while ensuring that important studies are not blocked because of unwarranted concerns about research risks.

  17. Method and apparatus for measuring irradiated fuel profiles

    DOEpatents

    Lee, D.M.

    1980-03-27

    A new apparatus substantially instantaneously obtains a profile of an object, for example a spent fuel assembly. When normalized, this profile has unexpectedly been found to be substantially identical to the normalized profile of the burnup monitor Cs-137 obtained with a germanium detector. The profile can be used without normalization in a new method of identifying and monitoring fuel, for example to determine whether any of the fuel has been removed. Alternatively, two other new methods calibrate the profile to obtain a determination of fuel burnup, which is important for complying with safeguards requirements, for utilizing fuel to an optimal extent, and for storing spent fuel in a minimal amount of space.
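
    A hedged sketch of how such a profile comparison might look in code: normalize both the measured profile and a Cs-137 reference to unit mean, then flag possible fuel removal when the pointwise deviation exceeds a threshold. The normalization choice and the tolerance are illustrative assumptions, not taken from the patent.

    ```python
    import numpy as np

    def normalize(profile):
        """Scale an axial profile to unit mean so that measurements from
        detectors with different absolute efficiencies are comparable."""
        p = np.asarray(profile, dtype=float)
        return p / p.mean()

    def profiles_match(measured, reference, tol=0.05):
        """Return False (possible fuel removal) if the normalized measured
        profile deviates from the normalized Cs-137 reference by more than
        tol at any axial position. tol is an illustrative threshold."""
        return np.max(np.abs(normalize(measured) - normalize(reference))) <= tol
    ```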

  18. Current advances on polynomial resultant formulations

    NASA Astrophysics Data System (ADS)

    Sulaiman, Surajo; Aris, Nor'aini; Ahmad, Shamsatun Nahar

    2017-08-01

    The availability of computer algebra systems (CAS) has led to a resurrection of the resultant method for eliminating one or more variables from a polynomial system. The resultant matrix method has advantages over the Groebner basis and Ritt-Wu methods, whose complexity and storage requirements are high. This paper focuses on current resultant matrix formulations and investigates their ability, or otherwise, to produce optimal resultant matrices. A determinantal formula that gives the exact resultant, or a formulation that minimizes the presence of extraneous factors, is often sought when the conditions for its existence can be determined. We present some applications of elimination theory via resultant formulations, with examples given to explain each of the presented settings.
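
    As a small concrete example of elimination via resultants, the SymPy snippet below eliminates y from two polynomials: the resultant is a polynomial in x alone whose vanishing is the condition for f and g to share a common root in y. The polynomials are chosen purely for illustration.

    ```python
    import sympy as sp

    x, y = sp.symbols("x y")

    # Two polynomials sharing the variable y to be eliminated
    f = x**2 + y**2 - 1
    g = x - y

    # Sylvester resultant with respect to y: here substituting the root
    # y = x of g into f gives the eliminant directly.
    res = sp.resultant(f, g, y)
    print(res)   # 2*x**2 - 1
    ```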

  19. A simple method for in situ monitoring of water temperature in substrates used by spawning salmonids

    USGS Publications Warehouse

    Zimmerman, Christian E.; Finn, James E.

    2012-01-01

    Interstitial water temperature within spawning habitats of salmonids may differ from surface-water temperature depending on intragravel flow paths, geomorphic setting, or presence of groundwater. Because survival and developmental timing of salmon are partly controlled by temperature, monitoring temperature within gravels used by spawning salmonids is required to adequately describe the environment experienced by incubating eggs and embryos. Here we describe a simple method of deploying electronic data loggers within gravel substrates with minimal alteration of the natural gravel structure and composition. Using data collected in spawning sites used by summer and fall chum salmon Oncorhynchus keta from two streams within the Yukon River watershed, we compare contrasting thermal regimes to demonstrate the utility of this method.

  20. A rapid method for measuring intracellular pH using BCECF-AM.

    PubMed

    Ozkan, Pinar; Mutharasan, Raj

    2002-08-15

    A rapid intracellular pH (pH(i)) measurement method is reported, based on the initial rate of increase of the fluorescence ratio of 2',7'-bis(2-carboxyethyl)-5,6-carboxyfluorescein upon dye addition to a cell suspension in growth medium. A dye transport model that describes dye concentration and fluorescence values in the intracellular and extracellular spaces provides the mathematical basis for the approach. Experimental results for the ammonium chloride challenge response of two suspension cell lines, Spodoptera frugiperda and Chinese hamster ovary (CHO) cells, compared successfully with results obtained using the traditional perfusion method. Since the cell suspension does not require any preparation, measurement of pH(i) can be completed in about 1 min, minimizing potential errors due to dye leakage.
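
    A minimal sketch of the initial-rate idea, assuming the rate is estimated by a least-squares slope over the first few seconds after dye addition and mapped to pH through a calibration line fitted on cells clamped at known pH; the fitting window and the linear calibration are assumptions, since the record's dye transport model is not reproduced here.

    ```python
    import numpy as np

    def initial_rate(t, ratio, window=10.0):
        """Least-squares slope of the BCECF fluorescence ratio over the
        first `window` seconds after dye addition (window is assumed)."""
        t = np.asarray(t, dtype=float)
        ratio = np.asarray(ratio, dtype=float)
        mask = t <= window
        slope, _ = np.polyfit(t[mask], ratio[mask], 1)
        return slope

    def ph_from_rate(rate, slope_cal, intercept_cal):
        """Map the initial rate to intracellular pH via a calibration line
        obtained from cells clamped at known pH values; slope_cal and
        intercept_cal are fitted calibration constants (hypothetical)."""
        return slope_cal * rate + intercept_cal
    ```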
