Sample records for l1 minimization problem

  1. Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares.

    PubMed

    Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian

    2016-06-18

    In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic since it provides the possibility of a high quality recovery from sparsely sampled data. Recently, algorithms based on DL (dictionary learning) were developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which causes reconstruction quality to deteriorate as the sampling rate declines further. It is therefore essential to improve the DL method to meet the demand for further dose reduction. In this paper, we replaced the L2-norm regularization term with an L1-norm one. It is expected that the proposed L1-DL method could alleviate the over-smoothing effect of the L2-minimization and preserve more image details. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving the resulting weighted L2-minimization problem based on IRLS (iteratively reweighted least squares). Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. The results reveal that the proposed algorithm is more accurate than the other algorithms, especially when further reducing the sampling rate or increasing the noise. The proposed L1-DL algorithm can utilize more prior information about image sparsity than ADSIR. By replacing the L2-norm regularization term of ADSIR with an L1-norm one and solving the L1-minimization problem by the IRLS strategy, L1-DL can reconstruct the image more exactly.
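
    The IRLS weighting strategy described in this abstract can be illustrated on the generic problem min ||x||_1 subject to Ax = b (an illustrative Python sketch with made-up data, not the authors' implementation; the matrix A, measurements b, iteration count, and damping eps are all assumptions):

```python
import numpy as np

def irls_l1(A, b, n_iter=100, eps=1e-8):
    """Approximately minimize ||x||_1 subject to Ax = b by iteratively
    reweighted least squares: each weighted L2 step has a closed form."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # least-squares start
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + eps)              # weights from current iterate
        # minimizer of x^T W^{-1} x subject to A x = b
        x = W @ A.T @ np.linalg.solve(A @ W @ A.T, b)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 24))                 # underdetermined system
x0 = np.zeros(24)
x0[[2, 7, 15]] = [1.5, -2.0, 1.0]                 # sparse ground truth
b = A @ x0
x = irls_l1(A, b)                                 # sparse solution estimate
```

    Each iteration solves a weighted L2 problem in closed form, so the l1 problem reduces to a short sequence of linear solves, which is the essence of the reweighting strategy.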

  2. Quadratic Optimization in the Problems of Active Control of Sound

    NASA Technical Reports Server (NTRS)

    Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources, called controls, that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L(sub 1). By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L(sub 2) norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L(sub 2) minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L(sub 2) differ drastically from those obtained in the sense of L(sub 1).

  3. Graph cuts via l1 norm minimization.

    PubMed

    Bhusnurmath, Arvind; Taylor, Camillo J

    2008-10-01

    Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
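
    The reduction of a cut problem to sparse Laplacian solves can be sketched on a toy s-t graph (an illustrative Python sketch, not the authors' interior point code: it uses a simple IRLS-style reweighting in place of their method, and the graph, weights, and tolerances are all made up):

```python
import numpy as np

# Undirected graph with terminals: node 0 = source s, node 3 = sink t.
edges = [(0, 1, 3.0), (0, 2, 1.0), (1, 2, 1.0), (1, 3, 1.0), (2, 3, 3.0)]
n, s, t = 4, 0, 3
free = [1, 2]
idx = {v: k for k, v in enumerate(free)}
x = np.zeros(n)
x[s] = 1.0                              # boundary conditions x_s = 1, x_t = 0

def dirichlet_solve(w_eff):
    """Solve the weighted Laplacian system for the free nodes."""
    L = np.zeros((len(free), len(free)))
    rhs = np.zeros(len(free))
    for (i, j, _), we in zip(edges, w_eff):
        for a, bnd in ((i, j), (j, i)):
            if a in idx:
                L[idx[a], idx[a]] += we
                if bnd in idx:
                    L[idx[a], idx[bnd]] -= we
                else:
                    rhs[idx[a]] += we * x[bnd]
    return np.linalg.solve(L, rhs)

# L2 (harmonic) start, then IRLS: reweighting each edge by 1/|x_i - x_j|
# turns the repeated L2 Laplacian solves into a solver for the l1 cut
# objective  sum_edges w_ij |x_i - x_j|.
x[free] = dirichlet_solve([w for _, _, w in edges])
for _ in range(100):
    w_eff = [w / (abs(x[i] - x[j]) + 1e-6) for i, j, w in edges]
    x[free] = dirichlet_solve(w_eff)

side = x > 0.5                          # threshold the relaxation to get a cut
cut_value = sum(w for i, j, w in edges if side[i] != side[j])
```

    On this toy graph the relaxation converges to the indicator of the minimum cut {s, 1} with cut value 3, which one can confirm by enumerating the four possible s-t partitions.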

  4. Fast Algorithms for Earth Mover’s Distance Based on Optimal Transport and L1 Type Regularization I

    DTIC Science & Technology

    2016-09-01

    ...which EMD can be reformulated as a familiar homogeneous degree-1 regularized minimization. The new minimization problem is very similar to problems which... ...which is also named the Monge problem or the Wasserstein metric, plays a central role in many applications, including image processing and computer vision...

  5. L{sup {infinity}} Variational Problems with Running Costs and Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aronsson, G., E-mail: gunnar.aronsson@liu.se; Barron, E. N., E-mail: enbarron@math.luc.edu

    2012-02-15

    Various approaches are used to derive the Aronsson-Euler equations for L{sup {infinity}} calculus of variations problems with constraints. The problems considered involve holonomic, nonholonomic, isoperimetric, and isosupremic constraints on the minimizer. In addition, we derive the Aronsson-Euler equation for the basic L{sup {infinity}} problem with a running cost and then consider properties of an absolute minimizer. Many open problems are introduced for further study.

  6. Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.

    PubMed

    Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng

    2015-02-01

    This paper presents a general framework for solving low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver that smooths the objective function and minimizes it by alternately updating the variables and their weights. However, the traditional IRLS can only solve sparse-only or low-rank-only minimization problems with a squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p,q ≥ 1). Our convergence proof of IRLS is more general than previous ones, which depend on the special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient.

  7. A Fast and Accurate Algorithm for l1 Minimization Problems in Compressive Sampling (Preprint)

    DTIC Science & Technology

    2013-01-22

    However, updating u^(k+1) via the formulation of Step 2 in Algorithm 1 can be implemented through the use of the component-wise Gauss-Seidel iteration which... ...may accelerate the rate of convergence of the algorithm and therefore reduce the total CPU time consumed. The efficiency of component-wise Gauss-Seidel... ...Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse Problems, 28 (2012), p...

  8. L^1 -optimality conditions for the circular restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Chen, Zheng

    2016-11-01

    In this paper, the L^1 -minimization for the translational motion of a spacecraft in the circular restricted three-body problem (CRTBP) is considered. Necessary conditions are derived by using the Pontryagin Maximum Principle (PMP), revealing the existence of bang-bang and singular controls. Singular extremals are analyzed, recalling the existence of the Fuller phenomenon according to the theories developed in (Marchal in J Optim Theory Appl 11(5):441-486, 1973; Zelikin and Borisov in Theory of Chattering Control with Applications to Astronautics, Robotics, Economics, and Engineering. Birkhäuser, Basel 1994; in J Math Sci 114(3):1227-1344, 2003). The sufficient optimality conditions for the L^1 -minimization problem with fixed endpoints have been developed in (Chen et al. in SIAM J Control Optim 54(3):1245-1265, 2016). In the current paper, we establish second-order conditions for optimal control problems with more general final conditions defined by a smooth submanifold target. In addition, the numerical implementation to check these optimality conditions is given. Finally, approximating the Earth-Moon-Spacecraft system by the CRTBP, an L^1 -minimization trajectory for the translational motion of a spacecraft is computed by combining a shooting method with a continuation method in (Caillau et al. in Celest Mech Dyn Astron 114:137-150, 2012; Caillau and Daoud in SIAM J Control Optim 50(6):3178-3202, 2012). The local optimality of the computed trajectory is asserted thanks to the second-order optimality conditions developed.

  9. Improved dynamic MRI reconstruction by exploiting sparsity and rank-deficiency.

    PubMed

    Majumdar, Angshul

    2013-06-01

    In this paper we address the problem of dynamic MRI reconstruction from partially sampled K-space data. Our work is motivated by previous studies in this area that proposed exploiting the spatiotemporal correlation of the dynamic MRI sequence by posing the reconstruction problem as a least squares minimization regularized by sparsity and low-rank penalties. Ideally the sparsity and low-rank penalties should be represented by the l(0)-norm and the rank of a matrix; however, both are NP-hard penalties. The previous studies used the convex l(1)-norm as a surrogate for the l(0)-norm and the non-convex Schatten-q norm (0...

  10. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    PubMed

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low dose and few-view datasets in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the proposed algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into the SIR framework for low dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
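
    The sparse coding subproblem that appears in such alternating schemes reduces, for an orthonormal dictionary, to a closed-form soft-thresholding step (an illustrative Python sketch; the dictionary D, signal x, and regularization weight lam are made up, and dictionaries learned from real data are generally overcomplete rather than orthonormal):

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1: shrinks each entry toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# For an orthonormal dictionary D, the l1 sparse-coding subproblem
#   min_z 0.5 * ||x - D z||^2 + lam * ||z||_1
# has the closed-form solution z = soft_threshold(D.T @ x, lam).
D = np.eye(4)                          # trivial orthonormal dictionary
x = np.array([3.0, -0.5, 1.2, 0.0])
z = soft_threshold(D.T @ x, lam=1.0)
```

    Small coefficients are set exactly to zero while large ones are shrunk by lam, which is what makes the l1 penalty produce sparse codes.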

  11. $L^1$ penalization of volumetric dose objectives in optimal control of PDEs

    DOE PAGES

    Barnard, Richard C.; Clason, Christian

    2017-02-11

    This work is concerned with a class of PDE-constrained optimization problems that are motivated by an application in radiotherapy treatment planning. Here the primary design objective is to minimize the volume where a functional of the state violates a prescribed level, but prescribing these levels in the form of pointwise state constraints leads to infeasible problems. We therefore propose an alternative approach based on L1 penalization of the violation that is also applicable when state constraints are infeasible. We establish well-posedness of the corresponding optimal control problem, derive first-order optimality conditions, discuss convergence of minimizers as the penalty parameter tends to infinity, and present a semismooth Newton method for their efficient numerical solution. Finally, the performance of this method for a model problem is illustrated and contrasted with an alternative approach based on (regularized) state constraints.

  12. Fast Algorithms for Earth Mover Distance Based on Optimal Transport and L1 Regularization II

    DTIC Science & Technology

    2016-09-01

    ...of optimal transport, the EMD problem can be reformulated as a familiar L1 minimization. We use a regularization which gives us a unique solution for... ...plays a central role in many applications, including image processing, computer vision, and statistics [13, 17, 20, 24]. The EMD is a metric defined...

  13. Resource Balancing Control Allocation

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Bodson, Marc

    2010-01-01

    Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the control effort. The paper discusses the alternative choice of using the l1 norm for minimization of the tracking error and a normalized l(infinity) norm, or sup norm, for minimization of the control effort. The algorithm computes the norm of the actuator deflections scaled by the actuator limits. Minimization of the control effort then translates into the minimization of the maximum actuator deflection as a percentage of its range of motion. The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are investigated through examples. In particular, the min-max criterion results in a type of resource balancing, where the resources are the control surfaces and the algorithm balances these resources to achieve the desired command. A study of the sensitivity of the algorithms to the data is presented, which shows that the normalized l(infinity) algorithm has the lowest sensitivity, although high sensitivities are observed whenever the limits of performance are reached.
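
    The conversion of the normalized l(infinity) criterion into a linear program can be sketched on a toy allocation problem (an illustrative Python sketch using scipy.optimize.linprog rather than the paper's simplex implementation; the effectiveness matrix B, command d, and actuator limits are made up):

```python
import numpy as np
from scipy.optimize import linprog

# Control effectiveness B, desired command d, actuator limits (all made up).
B = np.array([[1.0, 1.0, 1.0]])       # one controlled axis, three actuators
d = np.array([1.0])
limits = np.array([1.0, 2.0, 1.0])

# Variables (u_1..u_n, t): minimize t subject to B u = d and
# |u_i| <= limits_i * t, i.e. t bounds each deflection as a fraction
# of that actuator's range of motion.
n = B.shape[1]
c = np.r_[np.zeros(n), 1.0]
A_eq = np.c_[B, np.zeros((B.shape[0], 1))]
A_ub = np.r_[np.c_[np.eye(n), -limits[:, None]],
             np.c_[-np.eye(n), -limits[:, None]]]
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
              A_eq=A_eq, b_eq=d, bounds=[(None, None)] * (n + 1))
u, t_opt = res.x[:n], res.x[n]
```

    On this example the optimum uses every actuator at the same fraction t = 0.25 of its range, which is exactly the resource-balancing behavior the abstract describes.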

  14. Sparsest representations and approximations of an underdetermined linear system

    NASA Astrophysics Data System (ADS)

    Tardivel, Patrick J. C.; Servien, Rémi; Concordet, Didier

    2018-05-01

    In an underdetermined linear system of equations, constrained l1 minimization methods such as the basis pursuit or the lasso are often used to recover one of the sparsest representations or approximations of the system. The null space property is a sufficient and ‘almost’ necessary condition to recover a sparsest representation with the basis pursuit. Unfortunately, this property cannot be easily checked. On the other hand, the mutual coherence is an easily checkable sufficient condition ensuring that the basis pursuit recovers one of the sparsest representations. Because the mutual coherence condition is too strong, it is hardly met in practice. Even if one of these conditions holds, to our knowledge, there is no theoretical result ensuring that the lasso solution is one of the sparsest approximations. In this article, we study a novel constrained problem that gives, without any condition, one of the sparsest representations or approximations. To solve this problem, we provide a numerical method and we prove its convergence. Numerical experiments show that this approach gives better results than both the basis pursuit problem and the reweighted l1 minimization problem.
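
    The basis pursuit problem discussed above, min ||x||_1 subject to Ax = b, reduces to a linear program by splitting x into nonnegative parts (an illustrative Python sketch with made-up data, using scipy.optimize.linprog; none of this is the authors' code):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
A = rng.standard_normal((15, 30))     # underdetermined: 15 equations, 30 unknowns
x0 = np.zeros(30)
x0[[3, 11, 22]] = [2.0, -1.0, 1.5]    # a sparse representation
b = A @ x0

# Basis pursuit  min ||x||_1  s.t.  A x = b  as a linear program:
# write x = p - q with p, q >= 0 and minimize 1^T (p + q).
n = A.shape[1]
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n))
x = res.x[:n] - res.x[n:]
```

    Because the true x0 is itself feasible, the LP optimum can never have larger l1 norm than x0; whether it coincides with x0 is exactly the recovery question that the null space property answers.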

  15. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction, and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small- or medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate, and robust in both single and consecutive impact force reconstruction.

  16. Low thrust spacecraft transfers optimization method with the stepwise control structure in the Earth-Moon system in terms of the L1-L2 transfer

    NASA Astrophysics Data System (ADS)

    Fain, M. K.; Starinova, O. L.

    2016-04-01

    The paper outlines a method for determining the locally optimal stepwise control structure in the problem of low thrust spacecraft transfer optimization in the Earth-Moon system, including the L1-L2 transfer. The total flight time is considered as the optimization criterion. The optimal control programs were obtained using Pontryagin's maximum principle. As a result of the optimization, the optimal control programs, corresponding trajectories, and minimal total flight times were determined.

  17. Robust Group Sparse Beamforming for Multicast Green Cloud-RAN With Imperfect CSI

    NASA Astrophysics Data System (ADS)

    Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.

    2015-09-01

    In this paper, we investigate the network power minimization problem for the multicast cloud radio access network (Cloud-RAN) with imperfect channel state information (CSI). The key observation is that network power minimization can be achieved by adaptively selecting active remote radio heads (RRHs) via controlling the group-sparsity structure of the beamforming vector. However, this yields a non-convex combinatorial optimization problem, for which we propose a three-stage robust group sparse beamforming algorithm. In the first stage, a quadratic variational formulation of the weighted mixed l1/l2-norm is proposed to induce the group-sparsity structure in the aggregated beamforming vector, which indicates those RRHs that can be switched off. A perturbed alternating optimization algorithm is then proposed to solve the resultant non-convex group-sparsity inducing optimization problem by exploiting its convex substructures. In the second stage, we propose a PhaseLift technique based algorithm to solve the feasibility problem with a given active RRH set, which helps determine the active RRHs. Finally, the semidefinite relaxation (SDR) technique is adopted to determine the robust multicast beamformers. Simulation results demonstrate the convergence of the perturbed alternating optimization algorithm, as well as the effectiveness of the proposed algorithm in minimizing the network power consumption of multicast Cloud-RAN.

  18. Laboratory Evaluation of Expedient Low-Temperature Concrete Admixtures for Repairing Blast Holes in Cold Weather

    DTIC Science & Technology

    2013-01-08

    This research ignores effects on long-term durability, trafficability, temperature rebar corrosion, and other concerns that are of minimal... ...concrete because it can cause corrosion of steel reinforcement. However, the corrosion problem develops slowly with time; therefore, this problem has a... ...ERDC/CRREL TR-13-1, Laboratory Evaluation of Expedient Low-Temperature Concrete Admixtures for Repairing Blast Holes in Cold...

  19. Control Allocation with Load Balancing

    NASA Technical Reports Server (NTRS)

    Bodson, Marc; Frost, Susan A.

    2009-01-01

    Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the actuator deflections. The paper discusses the alternative choice of the l(infinity) norm, or sup norm. Minimization of the control effort translates into the minimization of the maximum actuator deflection (min-max optimization). The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are also investigated through examples. In particular, the min-max criterion results in a type of load balancing, where the load is th desired command and the algorithm balances this load among various actuators. The solution using the l(infinity) norm also results in better robustness to failures and to lower sensitivity to nonlinearities in illustrative examples.

  20. Opposed Jet Turbulent Diffusion Flames

    DTIC Science & Technology

    1990-09-05

    ...not well known, and the effect of turbulence on the mixing process in a stagnation flame is still an important issue. Our purpose is to address this... ...is obtained by the relationship L(...) = N(c)c. To minimize noise problems, statistics over 30 samples have been processed. Figs. 10 a,b show a comparison...

  1. RNAslider: a faster engine for consecutive windows folding and its application to the analysis of genomic folding asymmetry.

    PubMed

    Horesh, Yair; Wexler, Ydo; Lebenthal, Ilana; Ziv-Ukelson, Michal; Unger, Ron

    2009-03-04

    Scanning large genomes with a sliding window in search of locally stable RNA structures is a well-motivated problem in bioinformatics. Given a predefined window size L and an RNA sequence S of size N (L < N), the consecutive windows folding problem is to compute the minimal free energy (MFE) for the folding of each of the L-sized substrings of S. The consecutive windows folding problem can be naively solved in O(NL^3) by applying any of the classical cubic-time RNA folding algorithms to each of the N-L windows of size L. Recently an O(NL^2) solution for this problem has been described. Here, we describe and implement an O(NL·psi(L)) engine for the consecutive windows folding problem, where psi(L) is shown to converge to O(1) under the assumption of a standard probabilistic polymer folding model, yielding an O(L) speedup which is experimentally confirmed. Using this tool, we note an intriguing directionality (5'-3' vs. 3'-5') folding bias, i.e. that the minimal free energy (MFE) of folding is higher in the native direction of the DNA than in the reverse direction of various genomic regions in several organisms, including regions of the genomes that do not encode proteins or ncRNA. This bias largely emerges from the genomic dinucleotide bias which affects the MFE, however we see some variations in the folding bias in the different genomic regions when normalized to the dinucleotide bias. We also present results from calculating the MFE landscape of mouse chromosome 1, characterizing the MFE of the long ncRNA molecules that reside in this chromosome. The efficient consecutive windows folding engine described in this paper allows for genome-wide scans for ncRNA molecules as well as large-scale statistics. This is implemented here as a software tool, called RNAslider, and applied to the scanning of long chromosomes, leading to the observation of features that are visible only on a large scale.

  2. Branch and bound algorithm for accurate estimation of analytical isotropic bidirectional reflectance distribution function models.

    PubMed

    Yu, Chanki; Lee, Sang Wook

    2016-05-20

    We present a reliable and accurate global optimization framework for estimating the parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way, and interval arithmetic to derive the feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model with several normal distribution functions, such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
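
    The robustness gap between L1-norm and L2-norm error minimization reported here can be reproduced on a one-dimensional fitting toy (an illustrative Python sketch, unrelated to the BRDF data: a line fit with one gross outlier, with the L1 fit computed by simple iteratively reweighted least squares):

```python
import numpy as np

def fit_l2(X, y):
    """Ordinary least squares fit."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def fit_l1(X, y, n_iter=50, eps=1e-8):
    """l1-norm regression by iteratively reweighted least squares:
    each step is a weighted L2 fit with weights 1/|residual|."""
    beta = fit_l2(X, y)
    for _ in range(n_iter):
        w = 1.0 / (np.abs(y - X @ beta) + eps)
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return beta

t = np.arange(10, dtype=float)
X = np.c_[t, np.ones_like(t)]          # fit y = slope * t + intercept
y = 2.0 * t + 1.0
y[9] += 50.0                           # a single gross outlier

b_l2 = fit_l2(X, y)
b_l1 = fit_l1(X, y)
```

    The least-squares slope is dragged far from the true value 2 by the single outlier, while the l1 fit pins down the nine uncorrupted points.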

  3. Computing global minimizers to a constrained B-spline image registration problem from optimal l1 perturbations to block match data

    PubMed Central

    Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas

    2014-01-01

    Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. 
Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of image similarity block match metric and physical modeling combinations. PMID:24694135

  4. Developing cross entropy genetic algorithm for solving Two-Dimensional Loading Heterogeneous Fleet Vehicle Routing Problem (2L-HFVRP)

    NASA Astrophysics Data System (ADS)

    Paramestha, D. L.; Santosa, B.

    2018-04-01

    Two-dimensional Loading Heterogeneous Fleet Vehicle Routing Problem (2L-HFVRP) is a combination of the Heterogeneous Fleet VRP and a packing problem well known as the Two-Dimensional Bin Packing Problem (BPP). 2L-HFVRP is a Heterogeneous Fleet VRP in which the customer demands are formed by sets of two-dimensional rectangular weighted items. These demands must be served by a heterogeneous fleet of vehicles with fixed and variable costs from the depot. The objective of 2L-HFVRP is to minimize the total transportation cost. All formed routes must be consistent with the capacity and loading process of the vehicle. Sequential and unrestricted scenarios are considered in this paper. We propose a metaheuristic which is a combination of the Genetic Algorithm (GA) and the Cross Entropy (CE) method, named Cross Entropy Genetic Algorithm (CEGA), to solve the 2L-HFVRP. The mutation concept from GA is used to speed up the CE algorithm in finding the optimal solution. The mutation mechanism is based on local improvement (2-opt, 1-1 Exchange, and 1-0 Exchange). The probability transition matrix mechanism of CE is used to avoid getting stuck in a local optimum. The effectiveness of CEGA was tested on benchmark 2L-HFVRP instances. The experimental results are competitive with those of other algorithms.

  5. J.-L. Lions' problem concerning maximal regularity of equations governed by non-autonomous forms

    NASA Astrophysics Data System (ADS)

    Fackler, Stephan

    2017-05-01

    An old problem due to J.-L. Lions going back to the 1960s asks whether the abstract Cauchy problem associated to non-autonomous forms has maximal regularity if the time dependence is merely assumed to be continuous or even measurable. We give a negative answer to this question and discuss the minimal regularity needed for positive results.

  6. Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms

    NASA Astrophysics Data System (ADS)

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-04-01

    Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated using a regularized scheme in which l2 norms are preferred for both the data misfit and image prior terms for computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with the minimization problem. l1 norms on the data and regularization terms in EIT image reconstruction address both the reconstruction of sharp edges and robustness to measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause the failure of reconstructions with the l2 norm. Results demonstrate the applicability of PDIPM algorithms, especially l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting, but also provides high contrast resolution on organ boundaries.

  7. Traveling salesman problem with a center.

    PubMed

    Lipowski, Adam; Lipowska, Dorota

    2005-06-01

    We study a traveling salesman problem where the path is optimized with a cost function that includes its length L as well as a certain measure C of its distance from the geometrical center of the graph. Using simulated annealing (SA) we show that such a problem has a transition point that separates two phases differing in the scaling behavior of L and C, in efficiency of SA, and in the shape of minimal paths.
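A toy version of such an optimization can be run with simulated annealing over 2-opt moves. Here the center term C is taken, purely for illustration, as the mean distance of the tour's edge midpoints from the centroid; the paper's precise measure C and annealing schedule may differ.

```python
import numpy as np

def cost(pts, order, lam, center):
    p = pts[order]
    q = np.roll(p, -1, axis=0)
    L = np.sum(np.linalg.norm(q - p, axis=1))                     # tour length
    C = np.mean(np.linalg.norm(0.5 * (p + q) - center, axis=1))   # assumed center measure
    return L + lam * C

def anneal(pts, lam, steps=5000, seed=1):
    rng = np.random.default_rng(seed)
    center = pts.mean(axis=0)
    order = np.arange(len(pts))
    cur = best = cost(pts, order, lam, center)
    best_order = order.copy()
    T = 1.0
    for _ in range(steps):
        i, j = np.sort(rng.integers(0, len(pts), size=2))
        if i == j:
            continue
        cand = order.copy()
        cand[i:j + 1] = cand[i:j + 1][::-1]          # 2-opt: reverse a segment
        c = cost(pts, cand, lam, center)
        if c < cur or rng.random() < np.exp((cur - c) / T):
            order, cur = cand, c
            if c < best:
                best, best_order = c, cand.copy()
        T *= 0.999                                   # geometric cooling
    return best_order, best

rng = np.random.default_rng(0)
pts = rng.random((12, 2))
order0_cost = cost(pts, np.arange(12), 0.5, pts.mean(axis=0))
tour, tour_cost_val = anneal(pts, lam=0.5)
```

Varying lam trades tour length L against the center term C, which is the axis along which the paper's phase transition is observed.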

  8. Convergence Speed of a Dynamical System for Sparse Recovery

    NASA Astrophysics Data System (ADS)

    Balavoine, Aurele; Rozell, Christopher J.; Romberg, Justin

    2013-09-01

    This paper studies the convergence rate of a continuous-time dynamical system for L1-minimization, known as the Locally Competitive Algorithm (LCA). Solving L1-minimization problems efficiently and rapidly is of great interest to the signal processing community, as these programs have been shown to recover sparse solutions to underdetermined systems of linear equations and come with strong performance guarantees. The LCA under study differs from the typical L1 solver in that it operates in continuous time: instead of being specified by discrete iterations, it evolves according to a system of nonlinear ordinary differential equations. The LCA is constructed from simple components, giving it the potential to be implemented as a large-scale analog circuit. The goal of this paper is to give guarantees on the convergence time of the LCA system. To do so, we analyze how the LCA evolves as it is recovering a sparse signal from underdetermined measurements. We show that under appropriate conditions on the measurement matrix and the problem parameters, the path the LCA follows can be described as a sequence of linear differential equations, each with a small number of active variables. This allows us to relate the convergence time of the system to the restricted isometry constant of the matrix. Interesting parallels to sparse-recovery digital solvers emerge from this study. Our analysis covers both the noisy and noiseless settings and is supported by simulation results.
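A minimal forward-Euler simulation of LCA-style dynamics is sketched below. The soft-threshold activation, step size, and problem sizes are illustrative choices, not taken from the paper, and a discretized simulation is of course only an approximation of the continuous-time system analyzed there.

```python
import numpy as np

def soft(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)    # thresholded activation

def lca(Phi, y, lam=0.05, dt=0.05, steps=5000):
    """Euler integration of  u' = Phi^T y - u - (Phi^T Phi - I) a,  a = soft(u, lam)."""
    n = Phi.shape[1]
    u = np.zeros(n)
    G = Phi.T @ Phi - np.eye(n)      # lateral inhibition between nodes
    drive = Phi.T @ y
    for _ in range(steps):
        u += dt * (drive - u - G @ soft(u, lam))
    return soft(u, lam)

rng = np.random.default_rng(0)
m, n = 20, 40
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)   # unit-norm dictionary columns
x_true = np.zeros(n)
x_true[[3, 17, 30]] = [1.0, -1.5, 0.8]
a = lca(Phi, Phi @ x_true)
```

At equilibrium the outputs coincide with the LASSO solution, so the recovered coefficients carry the usual small l1 bias proportional to the threshold lam.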

  9. Sparse decomposition of seismic data and migration using Gaussian beams with nonzero initial curvature

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Wang, Yanfei

    2018-04-01

    We study problems associated with seismic data decomposition and migration imaging. We first represent the seismic data utilizing Gaussian beam basis functions, which have nonzero curvature, and then consider the sparse decomposition technique. The sparse decomposition problem is an l0-norm constrained minimization problem. In solving the l0-norm minimization, a polynomial Radon transform is performed to achieve sparsity, and a fast gradient descent method is used to calculate the waveform functions. The waveform functions can subsequently be used for sparse Gaussian beam migration. Compared with traditional sparse Gaussian beam methods, the seismic data can be properly reconstructed employing fewer Gaussian beams with nonzero initial curvature. The migration approach described in this paper is more efficient than the traditional sparse Gaussian beam migration.

  10. Oral Health Related Quality of Life among Tamil Speaking Adults Attending a Dental Institution in Chennai, Southern India.

    PubMed

    Appukuttan, Deva Priya; Tadepalli, Anupama; Victor, Dhayanand John; Dharuman, Smriti

    2016-10-01

    Oral Health-Related Quality of Life (OHRQoL) indicates an individual's perception of how their well-being and quality of life are influenced by oral health. It facilitates treatment planning and the assessment of patient-centred treatment outcomes and satisfaction. The study aimed to identify the factors influencing OHRQoL among a Tamil-speaking South Indian adult population. Non-probability sampling was done and 199 subjects aged 20-70 years were recruited for this observational study. The subjects were requested to fill in a survey form along with the validated Tamil General Oral Health Assessment Index (GOHAI-Tml) questionnaire in the waiting area, following which clinical examination was done by a single experienced Periodontist. The mean score with standard deviation on GOHAI was 4.34±0.96 for the physical dimension, 4.03±1.13 for the psychological dimension and 4.05±1.09 for pain. Greater impacts were seen for psychosocial dimensions such as being pleased with the appearance of teeth/denture, Q7 (3.7±1.2), worry about problems with teeth/denture, Q9 (3.7±1), and pain or discomfort in teeth, Q12 (3.8±1). Functions such as swallowing, Q3 (4.5±0.8), and speaking, Q4 (4.6±0.7), were minimally affected. As age increased, subjects perceived more negative impacts, as indicated by lower ADD-GOHAI and higher SC-GOHAI scores (p<0.01). Subjects complaining of bad breath, bleeding gums and Temporomandibular Joint (TMJ) problems reported poor OHRQoL (p<0.05). It was observed that as self-perceived oral and general health status deteriorated, OHRQoL also worsened (p<0.01). Subjects with missing teeth, cervical abrasion, restorations, gingival recession and mobility showed more impacts on OHRQoL (p<0.05). Subjects diagnosed with periodontitis had lower OHRQoL as reported on the scale than gingivitis subjects (p<0.01). In this study minimal impact was seen in all three dimensions assessed with GOHAI.
    Factors such as age, education, employment status, income, self-reported oral health, self-perceived general health, satisfaction with oral health, perceived need for treatment and denture-wearing status influenced perceived OHRQoL. Bad breath, bleeding gums, TMJ problems, a greater number of missing teeth, decayed teeth, cervical abrasion, gingival recession and mobility were associated with poor OHRQoL.

  11. Infrared and visible image fusion based on total variation and augmented Lagrangian.

    PubMed

    Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi

    2017-11-01

    This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer, which achieves fusion by preserving the intensity of the infrared image and then transferring gradients from the corresponding visible image to the result. Gradient transfer suffers from low dynamic range and detail loss because it ignores the intensity of the visible image. The new algorithm solves these problems by providing additive intensity from the visible image to balance the intensity between the infrared and visible images. It formulates the fusion task as an l1-l1-TV minimization problem and then employs variable splitting and the augmented Lagrangian to convert the unconstrained problem into a constrained one that can be solved in the framework of the alternating direction method of multipliers. Experiments demonstrate that, in both qualitative and quantitative tests, the new algorithm achieves better fusion results with high computational efficiency than gradient transfer and most state-of-the-art methods.
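The variable-splitting/augmented-Lagrangian machinery can be sketched on the simplest related problem, 1D total-variation denoising min_x 0.5||x - b||^2 + lam*||Dx||_1. This is an illustrative ADMM sketch of the splitting technique, not the paper's l1-l1-TV fusion model; all sizes and parameters are invented.

```python
import numpy as np

def tv_denoise_admm(b, lam=0.5, rho=1.0, iters=300):
    """ADMM for min_x 0.5||x - b||^2 + lam*||Dx||_1 via the split z = Dx."""
    n = len(b)
    D = np.diff(np.eye(n), axis=0)        # forward-difference operator
    M = np.eye(n) + rho * D.T @ D         # x-update system matrix
    x = b.copy()
    z = D @ x
    u = np.zeros(n - 1)                   # scaled dual variable
    for _ in range(iters):
        x = np.linalg.solve(M, b + rho * D.T @ (z - u))
        w = D @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # prox of (lam/rho)||.||_1
        u += D @ x - z                    # dual ascent on the constraint z = Dx
    return x

rng = np.random.default_rng(3)
clean = np.concatenate([np.zeros(40), np.ones(40)])
b = clean + 0.05 * rng.standard_normal(80)
x = tv_denoise_admm(b)
```

The split z = Dx is what turns the unconstrained problem into a constrained one, exactly the conversion described in the abstract; each subproblem (a linear solve and a soft threshold) is then closed-form.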

  12. $L_{0}$ Gradient Projection.

    PubMed

    Ono, Shunsuke

    2017-04-01

    Minimizing the L0 gradient, i.e., the number of non-zero gradients of an image, together with a quadratic data-fidelity to an input image has been recognized as a powerful edge-preserving filtering method. However, L0 gradient minimization has an inherent difficulty: the user-given parameter controlling the degree of flatness does not have a physical meaning, since it merely balances the relative importance of the L0 gradient term against the quadratic data-fidelity term. As a result, setting the parameter is troublesome in L0 gradient minimization. To circumvent this difficulty, we propose a new edge-preserving filtering method with a novel use of the L0 gradient. Our method is formulated as the minimization of the quadratic data-fidelity subject to the hard constraint that the L0 gradient is less than a user-given parameter α. This strategy is much more intuitive than L0 gradient minimization because the parameter α has a clear meaning: the L0 gradient value of the output image itself, so that one can directly impose a desired degree of flatness by α. We also provide an efficient algorithm based on the so-called alternating direction method of multipliers for computing an approximate solution of the nonconvex problem, in which we decompose it into two subproblems and derive closed-form solutions to them. The advantages of our method are demonstrated through extensive experiments.
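The constrained viewpoint can be illustrated in 1D: to force the output's L0 gradient to be at most α, keep only the α largest jumps of the input and integrate them back. This is a crude hypothetical sketch of the hard constraint, not the paper's ADMM algorithm.

```python
import numpy as np

def flatten_l0(b, alpha):
    """Return a signal whose L0 gradient (number of nonzero jumps) is at most alpha."""
    g = np.diff(b)
    z = np.zeros_like(g)
    if alpha > 0:
        keep = np.argsort(np.abs(g))[-alpha:]   # projection onto {||z||_0 <= alpha}
        z[keep] = g[keep]
    x = np.concatenate([[0.0], np.cumsum(z)])   # integrate the kept jumps
    return x + (b.mean() - x.mean())            # re-anchor the intensity level

sig = np.array([0.0, 0.1, 0.0, 2.0, 2.1, 2.0, 0.0, -0.1, 0.0])
flat = flatten_l0(sig, alpha=2)
```

By construction the result has at most α nonzero jumps, which is the direct, interpretable meaning of the parameter that the abstract argues for.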

  13. Second-Order Two-Sided Estimates in Nonlinear Elliptic Problems

    NASA Astrophysics Data System (ADS)

    Cianchi, Andrea; Maz'ya, Vladimir G.

    2018-05-01

    Best possible second-order regularity is established for solutions to p-Laplacian type equations with p ∈ (1, ∞) and a square-integrable right-hand side. Our results provide a nonlinear counterpart of the classical L2-coercivity theory for linear problems, which is missing in the existing literature. Both local and global estimates are obtained. The latter apply to solutions to either Dirichlet or Neumann boundary value problems. Minimal regularity on the boundary of the domain is required, although our conclusions are new even for smooth domains. If the domain is convex, no regularity of its boundary is needed at all.

  14. Minimal entropy probability paths between genome families.

    PubMed

    Ahlbrandt, Calvin; Benson, Gary; Casey, William

    2004-05-01

    We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths, where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in the transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0) = a and p(1) = b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method: Newton's method is iterated on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem, together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided, which works only for "rich" optimal probability vectors.
    These methods motivate the definition of an elementary distance function which is easier and faster to calculate, works on non-rich vectors, involves neither variational theory nor differential equations, and is a better approximation of the minimal entropy path distance than the distance ||b − a||2. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.

  15. Hamiltonian stability for weighted measure and generalized Lagrangian mean curvature flow

    NASA Astrophysics Data System (ADS)

    Kajigaya, Toru; Kunikawa, Keita

    2018-06-01

    In this paper, we generalize several results for the Hamiltonian stability and the mean curvature flow of Lagrangian submanifolds in a Kähler-Einstein manifold to more general Kähler manifolds including a Fano manifold equipped with a Kähler form ω ∈ 2 πc1(M) by using the method proposed by Behrndt (2011). Namely, we first consider a weighted measure on a Lagrangian submanifold L in a Kähler manifold M and investigate the variational problem of L for the weighted volume functional. We call a stationary point of the weighted volume functional f-minimal, and define the notion of Hamiltonian f-stability as a local minimizer under Hamiltonian deformations. We show such examples naturally appear in a toric Fano manifold. Moreover, we consider the generalized Lagrangian mean curvature flow in a Fano manifold which is introduced by Behrndt and Smoczyk-Wang. We generalize the result of H. Li, and show that if the initial Lagrangian submanifold is a small Hamiltonian deformation of an f-minimal and Hamiltonian f-stable Lagrangian submanifold, then the generalized MCF converges exponentially fast to an f-minimal Lagrangian submanifold.

  16. Determining biosonar images using sparse representations.

    PubMed

    Fontaine, Bertrand; Peremans, Herbert

    2009-05-01

    Echolocating bats are thought to be able to create an image of their environment by emitting pulses and analyzing the reflected echoes. In this paper, the theory of sparse representations and its more recent further development into compressed sensing are applied to this biosonar image formation task. Considering the target image representation as sparse allows formulation of this inverse problem as a convex optimization problem for which well defined and efficient solution methods have been established. The resulting technique, referred to as L1-minimization, is applied to simulated data to analyze its performance relative to delay accuracy and delay resolution experiments. This method performs comparably to the coherent receiver for the delay accuracy experiments, is quite robust to noise, and can reconstruct complex target impulse responses as generated by many closely spaced reflectors with different reflection strengths. This same technique, in addition to reconstructing biosonar target images, can be used to simultaneously localize these complex targets by interpreting location cues induced by the bat's head related transfer function. Finally, a tentative explanation is proposed for specific bat behavioral experiments in terms of the properties of target images as reconstructed by the L1-minimization method.
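A generic L1-minimization solver of the kind referred to above can be sketched with ISTA (iterative soft thresholding). The random measurement operator and reflector positions below are invented for illustration; they stand in for the echo model, not for the authors' actual formulation.

```python
import numpy as np

def ista(A, y, lam=0.02, iters=3000):
    """ISTA for  min 0.5||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

# toy "target impulse response": a few reflectors at unknown delays,
# including two closely spaced ones with different strengths
rng = np.random.default_rng(4)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
h_true = np.zeros(100)
h_true[[10, 11, 60]] = [1.0, 0.6, -0.8]
h_hat = ista(A, A @ h_true)
```

Note that the two adjacent reflectors (indices 10 and 11) are resolved individually, the kind of delay-resolution behavior the abstract attributes to the L1 approach.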

  17. Quality of Life after Open or Minimally Invasive Esophagectomy in Patients With Esophageal Cancer-A Systematic Review.

    PubMed

    Taioli, Emanuela; Schwartz, Rebecca M; Lieberman-Cribbin, Wil; Moskowitz, Gil; van Gerwen, Maaike; Flores, Raja

    2017-01-01

    Although esophageal cancer is rare in the United States, 5-year survival and quality of life (QoL) are poor following esophageal cancer surgery. Esophageal cancer has traditionally been treated surgically with esophagectomy through thoracotomy, an open procedure; minimally invasive surgical procedures have recently been introduced to decrease the risk of complications and improve QoL after surgery. The current study is a systematic review of the published literature to assess differences in QoL after traditional (open) or minimally invasive esophagectomy. We hypothesized that QoL is consistently better in patients treated with minimally invasive surgery than in those treated with a more traditional and invasive approach. Although global health, social function, and emotional function improved more commonly after minimally invasive surgery compared with open surgery, physical function and role function, as well as symptoms including choking, dysphagia, eating problems, and trouble swallowing saliva, declined for both surgery types. Cognitive function was equivocal across both groups. The potential small benefits in global and mental health status among those who undergo minimally invasive surgery should be considered with caution given the possibility of publication and selection bias. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. NP-hardness of the cluster minimization problem revisited

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.

    2005-10-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  19. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be quickly computed through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Reconstructions from simulated and real data are evaluated qualitatively and quantitatively to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
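The generalized p-shrinkage mapping mentioned above has a simple closed form (following Chartrand's p-shrinkage; for p = 1 it reduces to ordinary soft thresholding). A minimal sketch:

```python
import numpy as np

def p_shrink(t, lam, p):
    """Generalized p-shrinkage mapping; p = 1 gives ordinary soft thresholding."""
    a = np.abs(t)
    safe = np.where(a > 0, a, 1.0)               # avoid 0**(p-1) warnings for p < 1
    thresh = lam ** (2 - p) * safe ** (p - 1)    # magnitude-dependent threshold
    return np.sign(t) * np.maximum(a - thresh, 0.0)

t = np.array([-2.0, -0.3, 0.0, 0.3, 2.0])
```

For p < 1 the threshold decays with |t|, so large coefficients are shrunk less than under soft thresholding while small ones are still zeroed, which is how the mapping promotes sparsity more aggressively than the l1 prox.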

  20. Rotordynamic Instability Problems in High-Performance Turbomachinery - 1982, Proceedings of a Workshop (2nd) Held at College Station, Texas on 10-12 May 1982.

    DTIC Science & Technology

    1982-01-01

    [OCR-garbled excerpt] ... selected from a horsepower-speed aspect. This method is adequate in most cases because the spacer behaves as a rigid body with minimal influence on ... The theoretical model of this rotor does not include the measured rigid-body eigenmode at 2400 rev/min (figs. 16 and 18). The resonance frequency at ... The equations of motion for the rigid rotor with elastic supports are given.

  2. Time-optimal control of the spacecraft trajectories in the Earth-Moon system

    NASA Astrophysics Data System (ADS)

    Starinova, O. L.; Fain, M. K.; Materova, I. L.

    2017-01-01

    This paper outlines the multiparametric optimization of L1-L2 and L2-L1 missions in the Earth-Moon system using electric propulsion. The optimal control laws are obtained using Fedorenko's successive linearization method to estimate the derivatives and the gradient method to optimize the control laws. The study of the transfers is based on the circular restricted three-body problem. The mathematical model of the missions is described within the barycentric system of coordinates. The optimization criterion is the total flight time. The perturbations from the Earth, the Moon and the Sun are taken into account. The impact of the shadow regions induced by the Earth and the Moon is also accounted for. As the result of the optimization we obtain the optimal control laws, the corresponding trajectories and the minimal total flight times.

  3. A Mathematical Theory of Command and Control Structures.

    DTIC Science & Technology

    1984-08-30

    [OCR-garbled excerpt] ... if (H, K, L) minimize the functional J over the space of all linear maps, then the corresponding variations of J vanish for all i = 1, ..., N ... Cites D. Castanon, G. C. Verghese, A. S. Willsky, "A Scattering Framework for Decentralized Estimation Problems," MIT/LIDS paper 1075, March 1981.

  5. Pathophysiology of Peptide Toxins of Microcystis aeruginosa and Amanita phalloides

    DTIC Science & Technology

    1986-06-30

    [OCR-garbled excerpt] ... minimal for dexamethasone. The chronic toxicity of repeated sublethal doses of toxin-LR has received limited study, but a definite lesion has been ... A fractional-mortality table for dexamethasone at 0.02 and 0.20 mg/gbw appears in the original.

  6. Sinc-Galerkin estimation of diffusivity in parabolic problems

    NASA Technical Reports Server (NTRS)

    Smith, Ralph C.; Bowers, Kenneth L.

    1991-01-01

    A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.
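The Tikhonov-regularized least-squares step, and the residual/solution-norm trade-off that the L-curve plots, can be sketched on a toy ill-conditioned problem. This is a generic illustration on a Hilbert matrix, not the paper's Sinc-Galerkin discretization or its quasi-Newton/trust-region solver.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Tikhonov-regularized least squares: argmin ||Ax - b||^2 + alpha*||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def l_curve(A, b, alphas):
    """Residual and solution norms traced over a grid of regularization parameters."""
    res, sol = [], []
    for al in alphas:
        x = tikhonov(A, b, al)
        res.append(np.linalg.norm(A @ x - b))
        sol.append(np.linalg.norm(x))
    return np.array(res), np.array(sol)

# mildly ill-posed toy problem: a Hilbert matrix with noisy data
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(5)
b = A @ x_true + 1e-4 * rng.standard_normal(n)
alphas = np.logspace(-8, 0, 17)
res, sol = l_curve(A, b, alphas)
```

Plotting log(sol) against log(res) yields the L-curve; the corner of the curve is the usual heuristic choice of the regularization parameter mentioned in the abstract.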

  7. Least-squares finite element methods for compressible Euler equations

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Carey, G. F.

    1990-01-01

    A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2 method may exhibit oscillations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.

  8. Accuracy in the estimation of quantitative minimal area from the diversity/area curve.

    PubMed

    Vives, Sergi; Salicrú, Miquel

    2005-05-01

    The problem of representativity is fundamental in ecological studies. A qualitative minimal area that gives a good representation of the species pool [C.M. Bouderesque, Methodes d'etude qualitative et quantitative du benthos (en particulier du phytobenthos), Tethys 3(1) (1971) 79] can be discerned from a quantitative minimal area which reflects the structural complexity of the community [F.X. Niell, Sobre la biologia de Ascophyllum nosodum (L.) Le Jolis en Galicia, Invest. Pesq. 43 (1979) 501]. This suggests that the populational diversity can be considered as the value of the horizontal asymptote of the sample diversity/biomass curve [F.X. Niell, Les applications de l'index de Shannon a l'etude de la vegetation interdidale, Soc. Phycol. Fr. Bull. 19 (1974) 238]. In this study we develop an expression to determine minimal areas and use it to obtain certain information about the community structure based on diversity/area curve graphs. This expression is based on the functional relationship between the expected value of the diversity and the sample size used to estimate it. In order to establish the quality of the estimation process, we obtained confidence intervals as a particularization of the (h,φ)-entropies proposed in [M. Salicru, M.L. Menendez, D. Morales, L. Pardo, Asymptotic distribution of (h,phi)-entropies, Commun. Stat. (Theory Methods) 22 (7) (1993) 2015]. As an example demonstrating the possibilities of this method, and only for illustrative purposes, data from a study of the rocky intertidal seaweed populations in the Ria of Vigo (N.W. Spain) are analyzed [F.X. Niell, Estudios sobre la estructura, dinamica y produccion del Fitobentos intermareal (Facies rocosa) de la Ria de Vigo. Ph.D. Mem. University of Barcelona, Barcelona, 1979].

  9. Electroweak vacuum stability in classically conformal B - L extension of the standard model

    DOE PAGES

    Das, Arindam; Okada, Nobuchika; Papapietro, Nathan

    2017-02-23

    Here, we consider the minimal U(1) B - L extension of the standard model (SM) with classically conformal invariance, where an anomaly-free U(1) B - L gauge symmetry is introduced along with three generations of right-handed neutrinos and a U(1) B - L Higgs field. Because of the classically conformal symmetry, all dimensional parameters are forbidden. The B - L gauge symmetry is radiatively broken through the Coleman–Weinberg mechanism, generating the mass for the U(1) B - L gauge boson (Z' boson) and the right-handed neutrinos. Through a small negative coupling between the SM Higgs doublet and the B - L Higgs field, the negative mass term for the SM Higgs doublet is generated and the electroweak symmetry is broken. We investigate the electroweak vacuum instability problem of the SM in this model context. It is well known that in the classically conformal U(1) B - L extension of the SM, the electroweak vacuum remains unstable in the renormalization group analysis at the one-loop level. In this paper, we extend the analysis to the two-loop level and perform parameter scans. We also identify a parameter region which not only solves the vacuum instability problem but also satisfies the recent ATLAS and CMS bounds from the search for a Z' boson resonance at the LHC Run-2. Considering self-energy corrections to the SM Higgs doublet through the right-handed neutrinos and the Z' boson, we derive the naturalness bound on the model parameters to realize the electroweak scale without fine-tuning.

  10. Electroweak vacuum stability in classically conformal B - L extension of the standard model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Arindam; Okada, Nobuchika; Papapietro, Nathan

Here, we consider the minimal U(1) B - L extension of the standard model (SM) with classically conformal invariance, where an anomaly-free U(1) B - L gauge symmetry is introduced along with three generations of right-handed neutrinos and a U(1) B - L Higgs field. Because of the classically conformal symmetry, all dimensional parameters are forbidden. The B - L gauge symmetry is radiatively broken through the Coleman–Weinberg mechanism, generating the masses of the U(1) B - L gauge boson (Z' boson) and the right-handed neutrinos. Through a small negative coupling between the SM Higgs doublet and the B - L Higgs field, the negative mass term for the SM Higgs doublet is generated and the electroweak symmetry is broken. We investigate the electroweak vacuum instability problem of the SM in this model context. It is well known that in the classically conformal U(1) B - L extension of the SM, the electroweak vacuum remains unstable in the renormalization group analysis at the one-loop level. In this paper, we extend the analysis to the two-loop level and perform parameter scans. We identify a parameter region which not only solves the vacuum instability problem but also satisfies the recent ATLAS and CMS bounds from searches for a Z' boson resonance at LHC Run-2. Considering self-energy corrections to the SM Higgs doublet through the right-handed neutrinos and the Z' boson, we derive a naturalness bound on the model parameters required to realize the electroweak scale without fine-tuning.

  11. Fade Analysis of ORCA Data Beam at NTTR and Pax River

    DTIC Science & Technology

    2010-08-01

Solving the minimization problem yielded the path-averaged atmospheric parameters Cn², l₀, and L₀ encountered by the ORCA beacon. [Remainder of the excerpt — spot-size-at-fiber and power-in-fiber (PIF) expressions, with and without tip/tilt at the Tx and Rx, plus tabulated humidity coefficients — is OCR-garbled and omitted.]

  12. An iterative algorithm for L1-TV constrained regularization in image restoration

    NASA Astrophysics Data System (ADS)

    Chen, K.; Loli Piccolomini, E.; Zama, F.

    2015-11-01

    We consider the problem of restoring blurred images affected by impulsive noise. The adopted method restores the images by solving a sequence of constrained minimization problems where the data fidelity function is the ℓ1 norm of the residual and the constraint, chosen as the image Total Variation, is automatically adapted to improve the quality of the restored images. Although this approach is general, we report here the case of vectorial images where the blurring model involves contributions from the different image channels (cross channel blur). A computationally convenient extension of the Total Variation function to vectorial images is used and the results reported show that this approach is efficient for recovering nearly optimal images.
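The robustness of an ℓ1 data-fidelity term to impulsive noise can be illustrated on a toy linear model. This sketch uses generic iteratively reweighted least squares (IRLS) on a made-up dense system — not the authors' constrained TV algorithm — with all sizes, seeds and parameters chosen only for illustration:

```python
import numpy as np

# Toy system with a few impulsive-noise outliers in the data b.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
b[:5] += 10.0                              # corrupt 5 of 60 measurements

x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]  # ordinary L2 fit, skewed by outliers
x = x_l2.copy()
eps = 1e-6
for _ in range(100):
    r = A @ x - b
    w = 1.0 / np.maximum(np.abs(r), eps)   # IRLS weights ~ 1/|residual|
    AtW = A.T * w                          # A^T W with W = diag(w)
    x = np.linalg.solve(AtW @ A, AtW @ b)  # weighted least-squares subproblem

err_l2 = np.max(np.abs(x_l2 - x_true))
err_l1 = np.max(np.abs(x - x_true))
```

Because the ℓ1 objective down-weights the corrupted rows, the IRLS iterate essentially ignores the outliers, while the plain L2 fit does not.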

  13. Circulating cell-derived microparticles in patients with minimally symptomatic obstructive sleep apnoea.

    PubMed

    Ayers, L; Ferry, B; Craig, S; Nicoll, D; Stradling, J R; Kohler, M

    2009-03-01

    Moderate-severe obstructive sleep apnoea (OSA) has been associated with several pro-atherogenic mechanisms and increased cardiovascular risk, but it is not known if minimally symptomatic OSA has similar effects. Circulating cell-derived microparticles have been shown to have pro-inflammatory, pro-coagulant and endothelial function-impairing effects, as well as to predict subclinical atherosclerosis and cardiovascular risk. In 57 patients with minimally symptomatic OSA, and 15 closely matched control subjects without OSA, AnnexinV-positive, platelet-, leukocyte- and endothelial cell-derived microparticles were measured by flow cytometry. In patients with OSA, median (interquartile range) levels of AnnexinV-positive microparticles were significantly elevated compared with control subjects: 2,586 (1,566-3,964) microL(-1) versus 1,206 (474-2,501) microL(-1), respectively. Levels of platelet-derived and leukocyte-derived microparticles were also significantly higher in patients with OSA (2,267 (1,102-3,592) microL(-1) and 20 (14-31) microL(-1), respectively) compared with control subjects (925 (328-2,068) microL(-1) and 15 (5-23) microL(-1), respectively). Endothelial cell-derived microparticle levels were similar in patients with OSA compared with control subjects (13 (8-25) microL(-1) versus 11 (6-17) microL(-1)). In patients with minimally symptomatic obstructive sleep apnoea, levels of AnnexinV-positive, platelet- and leukocyte-derived microparticles are elevated when compared with closely matched control subjects without obstructive sleep apnoea. These findings suggest that these patients may be at increased cardiovascular risk, despite being minimally symptomatic.

  14. Polyphenol oxidase activity from three sicilian artichoke [ Cynara cardunculus L. Var. scolymus L. (Fiori)] cultivars: studies and technological application on minimally processed production.

    PubMed

    Todaro, Aldo; Peluso, Orazio; Catalano, Anna Eghle; Mauromicale, Giovanni; Spagna, Giovanni

    2010-02-10

Several papers have developed methods to control browning or have studied thermal polyphenol oxidase (PPO) inactivation, but they did not address technological process problems or food process improvement. Artichokes [Cynara cardunculus L. var. scolymus L. (Fiori)] are susceptible to browning; this alteration can reduce their suitability for use, fresh or processed. Within this study, the catecholase and cresolase activities of PPO from three different Sicilian artichoke cultivars were characterized with regard to substrate specificity and enzyme kinetics, optimum pH and temperature, temperature and pH stability, and inhibitor tests; all of the results were used for technological purposes, particularly to optimize minimally processed productions (ready-to-eat and cook-chilled artichokes).

  15. WE-G-207-04: Non-Local Total-Variation (NLTV) Combined with Reweighted L1-Norm for Compressed Sensing Based CT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J; Pouliot, J

    2015-06-15

Purpose: Compressed sensing (CS) has been used for CT (4DCT/CBCT) reconstruction from few projections to reduce radiation dose. Total variation (TV) in L1-minimization (min.) with local information is the prevalent technique in CS, but it can be prone to noise. To address this problem, this work proposes to apply a new image processing technique, called non-local TV (NLTV), to CS based CT reconstruction, and to incorporate a reweighted L1-norm into it for more precise reconstruction. Methods: TV minimizes intensity variations by considering two local neighboring voxels, which can be prone to noise, possibly damaging the reconstructed CT image. NLTV, by contrast, utilizes more global information by computing a weight function of the current voxel relative to a surrounding search area. In practice, it can be challenging to obtain an optimal solution because of the difficulty of defining the weight function with appropriate parameters. Introducing reweighted L1-min., designed as an approximation to the ideal L0-min., can reduce the dependence on the definition of the weight function, therefore improving the accuracy of the solution. This work implemented NLTV combined with reweighted L1-min. using the Split Bregman iterative method. For evaluation, a noisy digital phantom and a pelvic CT image are employed to compare the quality of images reconstructed by TV, NLTV and reweighted NLTV. Results: In both cases, conventional and reweighted NLTV outperform TV min. in signal-to-noise ratio (SNR) and root-mean-squared error of the reconstructed images. Relative to conventional NLTV, NLTV with the reweighted L1-norm slightly improved SNR while greatly increasing the contrast between tissues, owing to the additional iterative reweighting process. Conclusion: NLTV min. can provide more precise compressed sensing based CT image reconstruction by incorporating the reweighted L1-norm, while maintaining greater robustness to noise than TV min.
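The reweighted L1 idea referred to above — iteratively down-weighting large coefficients so the penalty better approximates L0 — can be sketched on a generic sparse recovery problem. Sizes, seed and parameters are made up, and plain ISTA stands in for the Split Bregman solver used by the authors:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 42, 60]] = [2.0, -1.5, 1.0, 3.0]
b = A @ x_true

lam, eps = 0.05, 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
w = np.ones(80)                            # initial weights: plain L1
x = np.zeros(80)
for _ in range(4):                         # reweighting rounds
    for _ in range(2000):                  # ISTA on the weighted L1 problem
        z = x - step * (A.T @ (A @ x - b))
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
    w = 1.0 / (np.abs(x) + eps)            # penalize small entries more strongly

support = np.flatnonzero(np.abs(x) > 0.5)
```

After reweighting, the penalty on the identified support becomes nearly negligible while off-support entries are strongly suppressed, which is the same mechanism the abstract credits for the increased tissue contrast.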

  16. A restricted Steiner tree problem is solved by Geometric Method II

    NASA Astrophysics Data System (ADS)

    Lin, Dazhi; Zhang, Youlin; Lu, Xiaoxu

    2013-03-01

The minimum Steiner tree problem has a wide application background, in areas such as transportation systems, communication networks, pipeline design and VLSI. Unfortunately, the computational complexity of the problem is NP-hard, so it is common to consider restricted special cases. In this paper, we first put forward a restricted Steiner tree problem in which the fixed vertices lie on the same side of a line L, and we seek a vertex on L such that the length of the tree is minimal. By the definition and the complexity of the Steiner tree problem, this restricted problem is also NP-complete. In part one, we considered the restricted Steiner tree problem with two fixed vertices. Naturally, we now consider the problem with three fixed vertices, and we again use the geometric method to solve it.

  17. Exploring L1 model space in search of conductivity bounds for the MT problem

    NASA Astrophysics Data System (ADS)

    Wheelock, B. D.; Parker, R. L.

    2013-12-01

    Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions), results in greater uncertainty (wider bounds). 
Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the most narrow and discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
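The splitting that turns the 1-norm problems above into non-negative programs can be sketched in a few lines. This toy basis-pursuit example uses synthetic data and scipy's generic LP solver rather than the authors' NNLS formulation; the key step is the standard split x = u − v with u, v ≥ 0, so that ||x||₁ = 1ᵀu + 1ᵀv:

```python
import numpy as np
from scipy.optimize import linprog

# min ||x||_1  s.t.  A x = b   becomes
# min 1'u + 1'v  s.t.  [A, -A][u; v] = b,  u, v >= 0.
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[5, 30]] = [1.0, -2.0]
b = A @ x_true

n = A.shape[1]
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
```

With enough random measurements, the LP recovers the sparse model exactly, which is why the same non-negative machinery can be reused for both the blocky-model step and the bounding step described above.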

  18. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    PubMed

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

We develop an R package, fastclime, for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as a stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
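For illustration only, the CLIME column subproblem that fastclime solves — min ||β||₁ subject to ||Σ̂β − eⱼ||∞ ≤ λ — can be posed as a small LP. This sketch uses the exact covariance of a made-up 4×4 model and scipy's generic solver, not the parametric simplex of the package:

```python
import numpy as np
from scipy.optimize import linprog

# Toy tridiagonal precision matrix; Sigma is its exact inverse, and a tiny
# lam makes the recovered column essentially exact.
Omega = np.eye(4) + 0.3 * (np.eye(4, k=1) + np.eye(4, k=-1))
Sigma = np.linalg.inv(Omega)
lam = 1e-6

def clime_column(S, j, lam):
    # Split beta = u - v, u, v >= 0; the sup-norm constraint becomes
    # two stacked inequality blocks.
    p = S.shape[0]
    e = np.zeros(p); e[j] = 1.0
    A_ub = np.vstack([np.hstack([S, -S]), np.hstack([-S, S])])
    b_ub = np.concatenate([lam + e, lam - e])
    res = linprog(np.ones(2 * p), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * p))
    return res.x[:p] - res.x[p:]

beta = clime_column(Sigma, 0, lam)
```

Solving one such LP per column (and symmetrizing) yields the full precision matrix estimate; fastclime's advantage is solving all λ values along the path at once.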

  19. Nonconvex Nonsmooth Low Rank Minimization via Iteratively Reweighted Nuclear Norm.

    PubMed

    Lu, Canyi; Tang, Jinhui; Yan, Shuicheng; Lin, Zhouchen

    2016-02-01

    The nuclear norm is widely used as a convex surrogate of the rank function in compressive sensing for low rank matrix recovery with its applications in image recovery and signal processing. However, solving the nuclear norm-based relaxed convex problem usually leads to a suboptimal solution of the original rank minimization problem. In this paper, we propose to use a family of nonconvex surrogates of L0-norm on the singular values of a matrix to approximate the rank function. This leads to a nonconvex nonsmooth minimization problem. Then, we propose to solve the problem by an iteratively re-weighted nuclear norm (IRNN) algorithm. IRNN iteratively solves a weighted singular value thresholding problem, which has a closed form solution due to the special properties of the nonconvex surrogate functions. We also extend IRNN to solve the nonconvex problem with two or more blocks of variables. In theory, we prove that the IRNN decreases the objective function value monotonically, and any limit point is a stationary point. Extensive experiments on both synthesized data and real images demonstrate that IRNN enhances the low rank matrix recovery compared with the state-of-the-art convex algorithms.
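One IRNN-style iteration has the closed-form weighted singular value thresholding mentioned above. A toy sketch on a noisy rank-2 matrix, with the log surrogate g(σ) = log(σ + ε) standing in for the paper's family of nonconvex surrogates (sizes, noise level and parameters are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((30, 2))
C = rng.standard_normal((2, 30))
M0 = B @ C                                  # exact rank-2 signal
M = M0 + 0.01 * rng.standard_normal((30, 30))

mu, eps = 0.5, 1e-2
Um, sm, Vmt = np.linalg.svd(M, full_matrices=False)
X = M.copy()
for _ in range(10):
    s_x = np.linalg.svd(X, compute_uv=False)
    w = 1.0 / (s_x + eps)                   # weights g'(sigma): small sigma -> large weight
    s_new = np.maximum(sm - mu * w, 0.0)    # weighted singular value thresholding
    X = (Um * s_new) @ Vmt
```

Large singular values are barely shrunk (weight ≈ 1/σ is small) while small, noise-level singular values are thresholded to zero — the reduced-bias behavior that distinguishes the nonconvex surrogates from the plain nuclear norm.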

  20. Application of Modern Control Design Methodologies to a Multi-Segmented Deformable Mirror System

    DTIC Science & Technology

    1991-05-23

state matrices, and the state equations are ẋ = Ax + Bu (2.3), y = Cx + Du (2.4). The only dynamics modeled are associated with the six segment phasing… relationship between the L2 and H2 spaces, the vector H2 norm can be found from the application of Parseval's Theorem to Equation 3.1, yielding… The solution of this minimization problem can be found using Riccati equations [1]. ¹With a slight abuse of notation, time domain functions and frequency domain…

  1. Non-localization of eigenfunctions for Sturm-Liouville operators and applications

    NASA Astrophysics Data System (ADS)

    Liard, Thibault; Lissy, Pierre; Privat, Yannick

    2018-02-01

In this article, we investigate a non-localization property of the eigenfunctions of Sturm-Liouville operators A_a = -∂xx + a(·) Id with Dirichlet boundary conditions, where a(·) runs over the bounded nonnegative potential functions on the interval (0, L) with L > 0. More precisely, we address the extremal spectral problem of minimizing the L2-norm of a function e(·) on a measurable subset ω of (0, L), where e(·) runs over all eigenfunctions of A_a, at the same time with respect to all subsets ω having a prescribed measure and all L∞ potential functions a(·) having a prescribed essential upper bound. We provide some existence and qualitative properties of the minimizers, as well as precise lower and upper estimates on the optimal value. Several consequences in control and stabilization theory are then highlighted.

  2. Doughnut-shaped soap bubbles.

    PubMed

    Préve, Deison; Saa, Alberto

    2015-10-01

Soap bubbles are thin liquid films enclosing a fixed volume of air. Since the surface tension is typically assumed to be the only factor responsible for conforming the soap bubble shape, the realized bubble surfaces are always minimal area ones. Here, we consider the problem of finding the axisymmetric minimal area surface enclosing a fixed volume V and with a fixed equatorial perimeter L. It is well known that the sphere is the solution for V = L³/(6π²), and this is indeed the case of a free soap bubble, for instance. Surprisingly, we show that for V < αL³/(6π²), with α ≈ 0.21, such a surface cannot be the usual lens-shaped surface formed by the juxtaposition of two spherical caps, but is rather a toroidal surface. Practically, a doughnut-shaped bubble is known to be ultimately unstable and, hence, it will eventually lose its axisymmetry by breaking apart in smaller bubbles. Indisputably, however, the topological transition from spherical to toroidal surfaces is mandatory here for obtaining the global solution for this axisymmetric isoperimetric problem. Our result suggests that deformed bubbles with V < αL³/(6π²) cannot be stable and should not exist in foams, for instance.
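The quoted sphere value can be checked directly: a sphere whose equatorial perimeter is L has radius r = L/(2π), so

```latex
V = \frac{4}{3}\pi r^{3}
  = \frac{4}{3}\pi\left(\frac{L}{2\pi}\right)^{3}
  = \frac{L^{3}}{6\pi^{2}},
```

which is the critical ratio against which the coefficient α ≈ 0.21 is measured.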

  3. Doughnut-shaped soap bubbles

    NASA Astrophysics Data System (ADS)

    Préve, Deison; Saa, Alberto

    2015-10-01

Soap bubbles are thin liquid films enclosing a fixed volume of air. Since the surface tension is typically assumed to be the only factor responsible for conforming the soap bubble shape, the realized bubble surfaces are always minimal area ones. Here, we consider the problem of finding the axisymmetric minimal area surface enclosing a fixed volume V and with a fixed equatorial perimeter L. It is well known that the sphere is the solution for V = L³/(6π²), and this is indeed the case of a free soap bubble, for instance. Surprisingly, we show that for V < αL³/(6π²), with α ≈ 0.21, such a surface cannot be the usual lens-shaped surface formed by the juxtaposition of two spherical caps, but is rather a toroidal surface. Practically, a doughnut-shaped bubble is known to be ultimately unstable and, hence, it will eventually lose its axisymmetry by breaking apart in smaller bubbles. Indisputably, however, the topological transition from spherical to toroidal surfaces is mandatory here for obtaining the global solution for this axisymmetric isoperimetric problem. Our result suggests that deformed bubbles with V < αL³/(6π²) cannot be stable and should not exist in foams, for instance.

  4. Supersymmetric B – L inflation near the conformal coupling

    DOE PAGES

    Arai, Masato; Kawai, Shinsuke; Okada, Nobuchika

    2014-06-01

We investigate a novel scenario of cosmological inflation in a gauged B-L extended minimal supersymmetric Standard Model with R-symmetry. We use a noncanonical Kähler potential and a superpotential, both preserving the R-symmetry, to construct a model of slow-roll inflation. The model is controlled by two real parameters: the nonminimal coupling ξ that originates from the Kähler potential, and the breaking scale v of the U(1)B-L symmetry. We compute the spectrum of the cosmic microwave background radiation and show that the prediction of the model fits the recent Planck satellite observations well for a wide range of the parameter space. We also find that the typical reheating temperature of the model is low enough to avoid the gravitino problem but nevertheless allows sufficient production of the baryon asymmetry if we take into account the effect of resonance enhancement. The model is free from the cosmic strings that impose stringent constraints on generic U(1)B-L based scenarios, since in our scenario the U(1)B-L symmetry is broken from the onset.

  5. Joint Smoothed l₀-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar.

    PubMed

    Liu, Jing; Zhou, Weidong; Juwono, Filbert H

    2017-05-08

Direction-of-arrival (DOA) estimation is usually confronted with a multiple measurement vector (MMV) case. In this paper, a novel fast sparse DOA estimation algorithm, named the joint smoothed l0-norm algorithm, is proposed for multiple measurement vectors in multiple-input multiple-output (MIMO) radar. To suppress white or colored Gaussian noise, the new method first obtains a low-complexity data matrix based on high-order cumulants. Then, the proposed algorithm designs a joint smoothed function tailored to the MMV case, from which a joint smoothed l0-norm sparse representation framework is constructed. Finally, for the MMV-based joint smoothed function, the corresponding gradient-based sparse signal reconstruction is designed, so that the DOA estimation can be achieved. The proposed method is a fast sparse representation algorithm which can solve the MMV problem and performs well for both white and colored Gaussian noise. The proposed joint algorithm is about two orders of magnitude faster than l1-norm minimization based methods, such as l1-SVD (singular value decomposition), RV (real-valued) l1-SVD and RV l1-SRACV (sparse representation array covariance vectors), and achieves better DOA estimation performance.
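The single-vector analogue of the smoothed l0 idea can be sketched briefly: replace the l0 count with a Gaussian-smoothed surrogate and anneal its width. This toy example (made-up sizes and schedule, not the MMV/cumulant pipeline of the paper) follows the standard SL0 recipe of gradient steps followed by projection back onto the data constraint:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((25, 60))
x_true = np.zeros(60)
x_true[[7, 21, 50]] = [1.5, -2.0, 1.0]
b = A @ x_true

A_pinv = np.linalg.pinv(A)
x = A_pinv @ b                              # minimum-l2 feasible start
sigma = 2.0 * np.max(np.abs(x))
for _ in range(20):                         # geometric sigma schedule
    for _ in range(30):
        d = x * np.exp(-x**2 / (2.0 * sigma**2))  # gradient of the smoothed-l0 surrogate
        x = x - d                           # shrink entries that are small relative to sigma
        x = x - A_pinv @ (A @ x - b)        # project back onto {A x = b}
    sigma *= 0.7
```

Entries much larger than sigma are untouched while small entries are driven to zero, so as sigma shrinks the iterate approaches the sparsest feasible vector.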

  6. Determining the Minimal Required Radioactivity of 18F-FDG for Reliable Semiquantification in PET/CT Imaging: A Phantom Study.

    PubMed

    Chen, Ming-Kai; Menard, David H; Cheng, David W

    2016-03-01

In pursuit of as-low-as-reasonably-achievable (ALARA) doses, this study investigated the minimal required radioactivity and corresponding imaging time for reliable semiquantification in PET/CT imaging. Using a phantom containing spheres of various diameters (3.4, 2.1, 1.5, 1.2, and 1.0 cm) filled with a fixed (18)F-FDG concentration of 165 kBq/mL and a background concentration of 23.3 kBq/mL, we performed PET/CT at multiple time points over 20 h of radioactive decay. The images were acquired for 10 min at a single bed position for each of 10 half-lives of decay using 3-dimensional list mode and were reconstructed into 1-, 2-, 3-, 4-, 5-, and 10-min acquisitions per bed position using an ordered-subsets expectation maximization algorithm with 24 subsets and 2 iterations and a Gaussian 2-mm filter. SUVmax and SUVavg were measured for each sphere. The minimal required activity (±10%) for precise SUVmax semiquantification in the spheres was 1.8 kBq/mL for an acquisition of 10 min, 3.7 kBq/mL for 3-5 min, 7.9 kBq/mL for 2 min, and 17.4 kBq/mL for 1 min. The minimal required activity concentration-acquisition time product per bed position was 10-15 kBq/mL⋅min for reproducible SUV measurements within the spheres without overestimation. Using the total radioactivity and counting rate from the entire phantom, we found that the minimal required total activity-time product was 17 MBq⋅min and the minimal required counting rate-time product was 100 kcps⋅min. Our phantom study determined a threshold for minimal radioactivity and acquisition time for precise semiquantification in (18)F-FDG PET imaging that can serve as a guide in pursuit of achieving ALARA doses. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
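The quoted thresholds pair shorter acquisitions with proportionally higher activity, so their products stay roughly constant; a quick check using the values from the abstract:

```python
# (activity in kBq/mL, acquisition time in minutes) pairs from the abstract
pairs = [(1.8, 10), (3.7, 3), (3.7, 5), (7.9, 2), (17.4, 1)]
products = [round(a * t, 1) for a, t in pairs]   # kBq/mL·min
```

The products fall between about 11 and 18.5 kBq/mL·min, of the same order as the authors' 10-15 kBq/mL·min summary figure for the product per bed position.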

  7. TR34/L98H Mutation in CYP51A Gene in Aspergillus fumigatus Clinical Isolates During Posaconazole Prophylaxis: First Case in Korea.

    PubMed

    Lee, Hyeon-Jeong; Cho, Sung-Yeon; Lee, Dong-Gun; Park, Chulmin; Chun, Hye-Sun; Park, Yeon-Joon

    2018-06-01

    Azole resistance in Aspergillus fumigatus is an emerging problem, especially in immunocompromised patients. It has been reported worldwide, including in Asia, but has not yet been reported in Korea. Here, we report a case of invasive pulmonary aspergillosis (IPA) caused by azole-resistant A. fumigatus that developed in a hematopoietic stem cell transplantation recipient during posaconazole prophylaxis for immunosuppressive therapy of graft-versus-host diseases. We identified TR34/L98H/S297T/F495L mutation in the CYP51A gene of A. fumigatus clinical isolate obtained from bronchial washing fluid. Minimal inhibitory concentrations for itraconazole, voriconazole, and posaconazole were > 16, 1, and 4 μg/mL, respectively. While IPA improved partially under voriconazole treatment, the patient died from carbapenemase-producing Klebsiella pneumoniae bacteremia. Further epidemiological surveillance studies are warranted.

  8. Blow-up behavior of ground states for a nonlinear Schrödinger system with attractive and repulsive interactions

    NASA Astrophysics Data System (ADS)

    Guo, Yujin; Zeng, Xiaoyu; Zhou, Huan-Song

    2018-01-01

We consider a nonlinear Schrödinger system arising in a two-component Bose-Einstein condensate (BEC) with attractive intraspecies interactions and repulsive interspecies interactions in R2. We obtain ground states of this system by solving a constrained minimization problem. For some kinds of trapping potentials, we prove that the minimization problem has a minimizer if and only if the attractive interaction strength ai (i = 1, 2) of each component of the BEC system is strictly less than a threshold a*. Furthermore, as (a1, a2) ↗ (a*, a*), the asymptotic behavior of the minimizers of the minimization problem is discussed. Our results show that each component of the BEC system concentrates at a global minimum of the associated trapping potential.

  9. A novel cleaner production process of citric acid by recycling its treated wastewater.

    PubMed

    Xu, Jian; Su, Xian-Feng; Bao, Jia-Wei; Zhang, Hong-Jian; Zeng, Xin; Tang, Lei; Wang, Ke; Zhang, Jian-Hua; Chen, Xu-Sheng; Mao, Zhong-Gui

    2016-07-01

In this study, a novel cleaner production process of citric acid was proposed to completely solve the problem of wastewater management in the citric acid industry. In the process, wastewater from citric acid fermentation was used to produce methane through anaerobic digestion, and the anaerobic digestion effluent was then further treated with air stripping and electrodialysis before being recycled as process water for later citric acid fermentation. This proposed process was performed for 10 batches, and the average citric acid production in the recycling batches was 142.4 ± 2.1 g/L, comparable to that with tap water (141.6 g/L). Anaerobic digestion was also efficient and stable in operation. The average chemical oxygen demand (COD) removal rate was 95.1 ± 1.2% and the methane yield approached 297.7 ± 19.8 mL/g TCOD removed. In conclusion, this novel process minimized wastewater discharge and achieved cleaner production in the citric acid industry. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Alternative sanitization methods for minimally processed lettuce in comparison to sodium hypochlorite.

    PubMed

    Bachelli, Mara Lígia Biazotto; Amaral, Rívia Darla Álvares; Benedetti, Benedito Carlos

    2013-01-01

Lettuce is a leafy vegetable widely used by industry for minimally processed products, in which the sanitization step is crucial for ensuring a food that is safe for consumption. Chlorinated compounds, mainly sodium hypochlorite, are the most used in Brazil, but the formation of trihalomethanes from this sanitizer is a drawback. The search for alternatives to sodium hypochlorite has therefore emerged as a matter of great interest. The suitability of chlorine dioxide (60 mg L(-1)/10 min), peracetic acid (100 mg L(-1)/15 min) and ozonated water (1.2 mg L(-1)/1 min) as alternative sanitizers to sodium hypochlorite (150 mg L(-1) free chlorine/15 min) was evaluated. Minimally processed lettuce washed with tap water for 1 min was used as a control. Microbiological analyses were performed in triplicate, before and after sanitization, and at 3, 6, 9 and 12 days of storage at 2 ± 1 °C, with the product packaged in LDPE bags of 60 μm. Total coliforms, Escherichia coli, Salmonella spp., psychrotrophic and mesophilic bacteria, and yeasts and molds were evaluated. All samples of minimally processed lettuce showed absence of E. coli and Salmonella spp. The chlorine dioxide, peracetic acid and ozonated water treatments reduced the microbial load of the minimally processed product by 2.5, 1.1 and 0.7 log cycles, respectively, and can be used as substitutes for sodium hypochlorite. These alternative compounds gave minimally processed lettuce a shelf-life of six days, while the shelf-life with sodium hypochlorite was 12 days.

  11. Rendezvous missions with minimoons from L1

    NASA Astrophysics Data System (ADS)

    Chyba, M.; Haberkorn, T.; Patterson, G.

    2014-07-01

We propose to present asteroid capture missions to the so-called minimoons. Minimoons are small asteroids that are temporarily captured objects on orbits in the Earth-Moon system. It has been suggested that, despite their small capture probability, at any time there are one or two minimoons of about one meter in diameter, and progressively greater numbers at smaller diameters. Minimoon orbits differ significantly from elliptical orbits, which makes a rendezvous mission more challenging; however, they offer many advantages that overcome this fact. First, they are already on geocentric orbits, which results in short-duration missions with low Delta-v; this translates into cost efficiency and low-risk targets. Second, besides their close proximity to Earth, their small size is an advantage, since it provides us with the luxury of retrieving the entire asteroid and not only a sample of material. Accessing the interior structure of a near-Earth satellite in its morphological context is crucial to an in-depth analysis of the structure of the asteroid. Historically, 2006 RH120 is the only minimoon that has been detected, but work is ongoing to determine which modifications to current observation facilities are necessary to provide detection capabilities. In the event that detection is successful, an efficient algorithm to produce a space mission to rendezvous with the detected minimoon is highly desirable to take advantage of this opportunity. This is the main focus of our work. For the design of the mission we propose the following. The spacecraft is first placed in hibernation on a Lissajous orbit around the libration point L1 of the Earth-Moon system. We focus on eight-shaped Lissajous orbits to take advantage of the stability properties of their invariant manifolds for our transfers, since the cost to minimize is the spacecraft fuel consumption.
Once a minimoon has been detected, we must choose a point on its orbit at which to rendezvous (in position and velocity) with the spacecraft. This point is determined using a combination of the distance from the minimoon's orbit to L1 and its energy level with respect to the Lissajous orbit on which the spacecraft is hibernating. Once the spacecraft rendezvous with the minimoon, it will escort the temporarily captured object, analyzing it until the withdrawal time, when the spacecraft leaves the orbit to return to its hibernation location and await the detection of another minimoon. The entire mission, including the return portion, can be stated as an optimal control problem; however, we choose to break it into smaller sub-problems as a first step, to be refined later. To model our control system, we use the circular restricted three-body problem, since it provides a good approximation of the Earth-Moon dynamics in this vicinity. Expansion to more refined models will be considered once the problem has been solved for this first approximation. The problem is solved in several steps. First, we consider the time-minimal problem, since we will use a multiple of its value for the fixed-time minimal fuel consumption problem. The techniques used to produce the transfers involve an indirect method based on the necessary optimality conditions of the Pontryagin maximum principle, coupled with a continuation method to address the sensitivity of the numerical algorithm to initial values. Local time-optimality is verified by computing the Jacobi fields of the Hamiltonian system associated with our optimal control problem, in order to check the second-order conditions of optimality and determine the non-existence of conjugate points.
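As a concrete illustration of the circular three-body model mentioned above, the L1 hibernation point itself can be located by a one-dimensional root find on the rotating-frame equilibrium condition. Units here are normalized (Earth-Moon distance = 1, Earth at (-mu, 0), Moon at (1-mu, 0)), and the mass ratio mu is an assumed approximate value:

```python
# Moon / (Earth + Moon) mass ratio (approximate value assumed here)
mu = 0.0121505856

def f(x):
    # x-component of the effective-potential gradient on the segment between
    # the primaries, where x + mu > 0 and x - (1 - mu) < 0.
    return x - (1 - mu) / (x + mu) ** 2 + mu / (1 - mu - x) ** 2

lo, hi = 0.5, 1 - mu - 1e-9          # bracket between the two primaries
for _ in range(200):                 # plain bisection on the sign change
    mid = 0.5 * (lo + hi)
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
x_L1 = 0.5 * (lo + hi)
```

The root lies at roughly 0.837 Earth-Moon distances from the barycenter, i.e. about 15% of the Earth-Moon separation inside the Moon's orbit, which is the neighborhood where the eight-shaped Lissajous orbits of the mission design live.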

  12. Running non-minimal inflation with stabilized inflaton potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okada, Nobuchika; Raut, Digesh

    In the context of the Higgs model involving gauge and Yukawa interactions with spontaneous gauge symmetry breaking, we consider λφ4 inflation with non-minimal gravitational coupling, where the Higgs field is identified as the inflaton. Since the inflaton quartic coupling is very small, once quantum corrections through the gauge and Yukawa interactions are taken into account, the inflaton effective potential most likely becomes unstable. To avoid this problem, we need to impose stability conditions on the effective inflaton potential, which lead not only to non-trivial relations amongst the particle mass spectrum of the model, but also to correlations between the inflationary predictions and the mass spectrum. For a concrete discussion, we investigate the minimal B - L extension of the standard model, identifying the B - L Higgs field as the inflaton. The stability conditions for the inflaton effective potential fix the mass ratio amongst the B - L gauge boson, the right-handed neutrinos and the inflaton. This mass ratio also correlates with the inflationary predictions. Thus, if the B - L gauge boson and the right-handed neutrinos are discovered in the future, their observed mass ratio will provide constraints on the inflationary predictions.

  13. Running non-minimal inflation with stabilized inflaton potential

    DOE PAGES

    Okada, Nobuchika; Raut, Digesh

    2017-04-18

    In the context of the Higgs model involving gauge and Yukawa interactions with spontaneous gauge symmetry breaking, we consider λφ4 inflation with non-minimal gravitational coupling, where the Higgs field is identified as the inflaton. Since the inflaton quartic coupling is very small, once quantum corrections through the gauge and Yukawa interactions are taken into account, the inflaton effective potential most likely becomes unstable. To avoid this problem, we need to impose stability conditions on the effective inflaton potential, which lead not only to non-trivial relations amongst the particle mass spectrum of the model, but also to correlations between the inflationary predictions and the mass spectrum. For a concrete discussion, we investigate the minimal B - L extension of the standard model, identifying the B - L Higgs field as the inflaton. The stability conditions for the inflaton effective potential fix the mass ratio amongst the B - L gauge boson, the right-handed neutrinos and the inflaton. This mass ratio also correlates with the inflationary predictions. Thus, if the B - L gauge boson and the right-handed neutrinos are discovered in the future, their observed mass ratio will provide constraints on the inflationary predictions.

  14. Evaluation of the effect of non-surgical periodontal treatment on oral health-related quality of life: estimation of minimal important differences 1 year after treatment.

    PubMed

    Jönsson, Birgitta; Öhrn, Kerstin

    2014-03-01

    To evaluate an individually tailored oral health educational programme against a standard oral health educational programme on patient-reported outcomes, assess change over time, and determine minimal important differences (MID) in change scores for two different oral health-related quality of life (OHRQoL) instruments after non-surgical periodontal treatment (NSPT). In a randomized controlled trial evaluating two educational programmes, patients (n = 87) with chronic periodontitis completed a questionnaire at baseline and after 12 months. OHRQoL was assessed with the General Oral Health Assessment Index (GOHAI) and the UK oral health-related quality-of-life measure (OHQoL-UK). In addition, patients' global rating of oral health and socio-demographic variables were recorded. The MID was estimated with anchor-based and distribution-based methods. There were no differences between the two educational groups. OHRQoL was significantly improved after treatment. The MID was approximately five for OHQoL-UK, with a moderate effect size (ES), and three for GOHAI, with a small ES; 46-50% of the patients showed improvements beyond the MID. Both oral health educational groups reported higher OHRQoL scores after NSPT, reflecting more positive well-being (OHQoL-UK) and less frequent oral problems (GOHAI). OHQoL-UK gave a greater effect size and mean change scores, but both instruments were associated with the participants' self-rated change in oral health. The changes were meaningful for the patients, as supported by the estimated MID. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a convex optimization model enforcing sparsity in the force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is applied to solve the sparse regularization problem of force identification. Finally, experiments involving the identification of impact and harmonic forces are conducted on a cantilever thin-plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. In contrast, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
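As a minimal sketch of the sparse model used above, min_x 0.5*||Ax - y||^2 + lam*||x||_1, the fixed-step proximal-gradient (ISTA) iteration below shows how soft-thresholding produces exact zeros in the coefficient vector; SpaRSA follows the same proximal-gradient template but adds Barzilai-Borwein step sizes and continuation. The matrix, data and lam are invented toy values, not quantities from the paper.

```python
# ISTA: the simplest member of the SpaRSA family of prox-gradient solvers
# for  min_x 0.5*||A x - y||^2 + lam*||x||_1  (toy 2x2 data, invented).

def soft(z, t):
    """Soft-thresholding: the prox operator of t*||.||_1 (sets exact zeros)."""
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in z]

def ista(A, y, lam, step=0.5, iters=500):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(len(y))]
        g = [sum(A[i][j] * r[i] for i in range(len(y))) for j in range(n)]  # A^T r
        x = soft([x[j] - step * g[j] for j in range(n)], step * lam)
    return x

# Example whose true coefficient vector is the 1-sparse (1.5, 0)
A = [[1.0, 0.2], [0.2, 1.0]]
y = [1.5, 0.3]          # = A @ (1.5, 0)
x = ista(A, y, lam=0.1)
print(x)                # second entry is exactly zero; first is biased below 1.5
```

The exact zero in the second coordinate is the point of the l1 term: the threshold in `soft` zeroes out small gradient steps, which is what lets the method pick the number of active basis functions automatically.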

  16. No-load sirolimus with tacrolimus and steroids is safe and effective in renal transplantation.

    PubMed

    Tellis, V; Schechner, R; Mallis, M; Rosegreen, S; Glicklich, D; Moore, N; Greenstein, S

    2005-03-01

    Basiliximab (BX) induction, tacrolimus (TAC), and steroids have sharply reduced acute cellular rejection at our institution. However, late graft loss has continued, for which sirolimus (SL) was introduced into the protocol. From July 1, 2001 to December 31, 2003, 152 live donor (LD) renal transplant recipients received TAC (level 15 to 20 ng/mL) and steroids, with BX induction. One hundred twenty-two patients (Group 1) received SL (3 mg/d for African Americans; 2 mg/d for others) starting on days 2 and 3. The SL level was adjusted to 8 to 10 ng/mL, usually by weeks 3 to 4 posttransplant. The TAC doses were then progressively reduced. Records were reviewed for demographics, immunosuppressive drug levels, serum cholesterol and blood pressure, and complications. Graft and patient survival rates were calculated. Comparison was made to 53 LD recipients transplanted from July 1, 1998, to June 30, 2001 (Group 2), who received BX, steroids, and TAC without SL. Recipients of deceased donor kidneys were excluded because of variability in kidney quality, ischemic time, and patient management. Demographics were similar between groups: African Americans, 25% to 35%; mean age 36 years; mean HLA mismatch 3.7. Wound problems and infection were minimal in both groups. Mean serum creatinine and cholesterol and systolic and diastolic blood pressure, measured periodically up to 1 year, were similar, as was the incidence of rejection. In 25% of patients, SL was discontinued. Gradual introduction of SL appears to be associated with minimal wound problems. With more aggressive reduction in TAC, better renal function and better long-term graft survival may be attainable. We currently lower TAC levels to 5 ng/mL by 3 months.

  17. Partitioning Algorithms for Simultaneously Balancing Iterative and Direct Methods

    DTIC Science & Technology

    2004-03-03

    The load imbalance is defined as the ratio of the highest partition weight over the average partition weight. ... The load imbalance is the constraint we have to satisfy, and the edge-cut is the objective to be minimized. ... that the initial partitioning can be improved [16, 19, 20]. 3 Problem Definition and Challenges. Consider a graph with a given set of vertices

  18. Proceedings of the College Reading Association, Volume 6, Fall 1965.

    ERIC Educational Resources Information Center

    Ketcham, Clay A., Ed.

    The proceedings of the eighth annual meeting of the College Reading Association consisted of the following papers: "President's Report" (M. J. Weiss); "What Constitutes a Minimal Program for Clinical Diagnosis of Reading Disabilities?" (G. L. Bond); "Problems in Establishing Developmental Reading Programs in Junior…

  19. In vitro Antibacterial and Morphological Effects of the Urushiol Component of the Sap of the Korean lacquer tree (Rhus vernicifera Stokes) on Helicobacter pylori

    PubMed Central

    Suk, Ki Tae; Kim, Hyun Soo; Kim, Moon Young; Kim, Jae Woo; Uh, Young; Jang, In Ho; Kim, Soo Ki; Choi, Eung Ho; Kim, Myong Jo; Joo, Jung Soo

    2010-01-01

    Eradication regimens for Helicobacter pylori infection have side effects, compliance problems, relapses, and antibiotic resistance. Therefore, alternative anti-H. pylori or supportive antimicrobial agents with fewer disadvantages are necessary for the treatment of H. pylori. We investigated the pH-dependent (pH 5.0, 6.0, 7.0, 8.0, 9.0, and 10.0) and concentration-dependent (0.032, 0.064, 0.128, 0.256, 0.514, and 1.024 mg/mL) antibacterial activity of crude urushiol extract from the sap of the Korean lacquer tree (Rhus vernicifera Stokes) against 3 strains (NCTC11637, 69, and 219) of H. pylori by the agar dilution method. In addition, the serial morphological effects of urushiol on H. pylori (before incubation and 3, 6, and 10 min after incubation) were examined by electron microscopy. All strains survived only within pH 6.0-9.0. The minimal inhibitory concentrations of the extract against the strains ranged from 0.064 mg/mL to 0.256 mg/mL. Urushiol caused mainly separation of the membrane, vacuolization, and lysis of H. pylori. Interestingly, these changes were observed within 10 min of incubation with 1× the minimal inhibitory concentration of urushiol. The results of this work suggest that urushiol has potential as a rapid therapeutic against H. pylori infection, acting by disrupting the bacterial cell membrane. PMID:20191039

  20. Associations between social vulnerabilities and psychosocial problems in European children. Results from the IDEFICS study.

    PubMed

    Iguacel, Isabel; Michels, Nathalie; Fernández-Alvira, Juan M; Bammann, Karin; De Henauw, Stefaan; Felső, Regina; Gwozdz, Wencke; Hunsberger, Monica; Reisch, Lucia; Russo, Paola; Tornaritis, Michael; Thumann, Barbara Franziska; Veidebaum, Toomas; Börnhorst, Claudia; Moreno, Luis A

    2017-09-01

    The effect of socioeconomic inequalities on children's mental health remains unclear. This study aims to explore the cross-sectional and longitudinal associations between social vulnerabilities and psychosocial problems, and the association between accumulation of vulnerabilities and psychosocial problems. 5987 children aged 2-9 years from eight European countries were assessed at baseline and 2-year follow-up. Two different instruments were employed to assess children's psychosocial problems: the KINDL (Questionnaire for Measuring Health-Related Quality of Life in Children and Adolescents) was used to evaluate children's well-being and the Strengths and Difficulties Questionnaire (SDQ) was used to evaluate children's internalising problems. Vulnerable groups were defined as follows: children whose parents had minimal social networks, children from non-traditional families, children of migrant origin or children with unemployed parents. Logistic mixed-effects models were used to assess the associations between social vulnerabilities and psychosocial problems. After adjusting for classical socioeconomic and lifestyle indicators, children whose parents had minimal social networks were at greater risk of presenting internalising problems at baseline and follow-up (OR 1.53, 99% CI 1.11-2.11). The highest risk for psychosocial problems was found in children whose status changed from traditional families at T0 to non-traditional families at T1 (OR 1.60, 99% CI 1.07-2.39) and whose parents had minimal social networks at both time points (OR 1.97, 99% CI 1.26-3.08). Children with one or more vulnerabilities accumulated were at a higher risk of developing psychosocial problems at baseline and follow-up. Therefore, policy makers should implement measures to strengthen the social support for parents with a minimal social network.

  1. Multicategory nets of single-layer perceptrons: complexity and sample-size issues.

    PubMed

    Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras

    2010-05-01

    The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) abandon the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of an inexact criterion can become harmful. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable to, and as often as not outperform, linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for mitigating finite-sample problems in moderate-dimensional PR tasks.

  2. Banach spaces that realize minimal fillings

    NASA Astrophysics Data System (ADS)

    Bednov, B. B.; Borodin, P. A.

    2014-04-01

    It is proved that a real Banach space realizes minimal fillings for all its finite subsets (a shortest network spanning a fixed finite subset always exists and has the minimum possible length) if and only if it is a predual of L_1. The spaces L_1 are characterized in terms of Steiner points (medians). Bibliography: 25 titles.

  3. Synthesis and surface immobilization of antibacterial hybrid silver-poly(l-lactide) nanoparticles

    NASA Astrophysics Data System (ADS)

    Taheri, Shima; Baier, Grit; Majewski, Peter; Barton, Mary; Förch, Renate; Landfester, Katharina; Vasilev, Krasimir

    2014-08-01

    Infections associated with medical devices are a substantial healthcare problem. Consequently, there have been increasing research and technological efforts directed toward the development of coatings that are capable of preventing bacterial colonization of the device surface. Herein, we report on novel hybrid silver-loaded poly(L-lactic acid) nanoparticles (PLLA-AgNPs) with narrowly distributed sizes (17 ± 3 nm) prepared using a combination of solvent evaporation and mini-emulsion technology. These particles were then immobilized onto solid surfaces premodified with a thin layer of allylamine plasma polymer (AApp). The antibacterial efficacy of the PLLA-AgNPs was studied in vitro against both gram-positive (Staphylococcus epidermidis) and gram-negative (Escherichia coli) bacteria. The minimal inhibitory concentration values against Staphylococcus epidermidis and Escherichia coli were 0.610 and 1.156 μg·mL-1, respectively. The capacity of the prepared coatings to prevent bacterial surface colonization was assessed in the presence of Staphylococcus epidermidis, which is a strong biofilm former that causes substantial problems with medical-device-associated infections. The level of inhibition of bacterial growth was 98%. The substrate-independent nature and the high antibacterial efficacy of the coatings presented in this study may offer new alternatives for antibacterial coatings for medical devices.

  4. Fractional-order TV-L2 model for image denoising

    NASA Astrophysics Data System (ADS)

    Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu

    2013-10-01

    This paper proposes a new fractional-order total variation (TV) denoising method, which provides a more elegant and effective way of treating the problems of algorithm implementation, ill-posed inversion, regularization-parameter selection and the blocky effect. Two fractional-order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems, which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.
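To make the MM decomposition concrete, here is a minimal 1-D, integer-order sketch (our own simplification: the paper works with fractional-order differences on images and solves the inner systems by conjugate gradients; we use a tiny signal and naive Gaussian elimination). Each |d| term of the TV penalty is majorized by d^2/(2(|d_k| + eps)) plus a constant, so every MM step reduces to one symmetric positive-definite linear solve.

```python
# MM for 1-D TV-L2 denoising: min_x 0.5*||x - y||^2 + lam * sum_i |x[i+1]-x[i]|.
# Using |d| <= d^2/(2c) + c/2 with c = |d_k| + eps, each iteration solves the
# weighted linear system (I + lam * D^T W D) x = y for the difference matrix D.

def gauss_solve(M, b):
    """Naive Gaussian elimination (no pivoting; fine for this diagonally
    dominant SPD system)."""
    n = len(b)
    for c in range(n):
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
            b[r] -= f * b[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def mm_tv_denoise(y, lam=0.5, iters=30, eps=1e-6):
    n = len(y)
    x = list(y)
    for _ in range(iters):
        w = [1.0 / (abs(x[i + 1] - x[i]) + eps) for i in range(n - 1)]
        M = [[0.0] * n for _ in range(n)]
        for i in range(n):
            M[i][i] = 1.0
        for i in range(n - 1):                      # add lam * D^T W D
            M[i][i] += lam * w[i]; M[i + 1][i + 1] += lam * w[i]
            M[i][i + 1] -= lam * w[i]; M[i + 1][i] -= lam * w[i]
        x = gauss_solve(M, list(y))
    return x

y = [0.0, 0.1, 1.0, 1.1, 1.0]                       # noisy two-level signal
x = mm_tv_denoise(y)
```

The MM property guarantees that the (smoothed) objective never increases, which is why each cheap linear solve makes monotone progress on the non-smooth TV problem.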

  5. L1-2 minimization for exact and stable seismic attenuation compensation

    NASA Astrophysics Data System (ADS)

    Wang, Yufeng; Ma, Xiong; Zhou, Hui; Chen, Yangkang

    2018-06-01

    Frequency-dependent amplitude absorption and phase velocity dispersion are typically linked by the causality-imposed Kramers-Kronig relations, and they inevitably degrade the quality of seismic data. Seismic attenuation compensation is an important processing approach for enhancing signal resolution and fidelity, which can be performed on either pre-stack or post-stack data so as to mitigate the amplitude absorption and phase dispersion effects resulting from the intrinsic anelasticity of subsurface media. Inversion-based compensation with an L1-norm constraint, motivated by the sparsity of the reflectivity series, enjoys better stability than traditional inverse Q filtering. However, constrained L1 minimization, serving as the convex relaxation of the literal L0 sparsity count, may not give the sparsest solution when the kernel matrix is severely ill-conditioned. Recently, non-convex metrics for compressed sensing have attracted considerable research interest. In this paper, we propose a nearly unbiased approximation of the vector sparsity, denoted L1-2 minimization, for exact and stable seismic attenuation compensation. The non-convex penalty function of the L1-2 norm can be decomposed into two convex subproblems via the difference-of-convex algorithm, and each subproblem can be solved efficiently by the alternating direction method of multipliers. The superior performance of the proposed compensation scheme based on the L1-2 metric over the conventional L1 penalty is further demonstrated by both synthetic and field examples.
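A minimal sketch of the difference-of-convex (DCA) splitting for the L1-2 metric, on invented 2x2 toy data (the paper solves the convex subproblems with ADMM; we use plain proximal gradient to keep the sketch self-contained): the concave term -lam*||x||_2 is linearized at the current iterate, leaving one convex weighted l1 subproblem per outer step.

```python
# DCA for  min_x 0.5*||Ax - y||^2 + lam*(||x||_1 - ||x||_2):
# linearize -lam*||x||_2 at x_k via v = x_k/||x_k||_2, then solve the convex
# subproblem  0.5*||Ax - y||^2 + lam*||x||_1 - lam*<v, x>  (here by ISTA).
import math

def soft(v, t):
    return max(abs(v) - t, 0.0) * (1 if v > 0 else -1)

def l1_subproblem(A, y, lam, v, x, step=0.5, iters=500):
    n = len(x)
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(len(y))]
        g = [sum(A[i][j] * r[i] for i in range(len(y))) - lam * v[j]
             for j in range(n)]                       # A^T r - lam*v
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

def l1_minus_l2(A, y, lam, outer=5):
    x = l1_subproblem(A, y, lam, v=[0.0, 0.0], x=[0.0, 0.0])  # plain l1 warm start
    for _ in range(outer):
        nrm = math.sqrt(sum(xi * xi for xi in x)) or 1.0
        v = [xi / nrm for xi in x]                    # subgradient of ||.||_2 at x
        x = l1_subproblem(A, y, lam, v, x)
    return x

A = [[1.0, 0.2], [0.2, 1.0]]
y = [1.5, 0.3]                                        # = A @ (1.5, 0)
x = l1_minus_l2(A, y, lam=0.1)
print(x)    # the L1-2 penalty vanishes on 1-sparse vectors, removing the l1 bias
```

On this example plain l1 stops short of the true amplitude 1.5 (shrinkage bias), while the L1-2 iteration recovers it exactly, which is the "nearly unbiased" behavior the abstract refers to.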

  6. Improving IMRT delivery efficiency with reweighted L1-minimization for inverse planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hojin; Becker, Stephen; Lee, Rena

    2013-07-15

    Purpose: This study presents an improved technique to further simplify the fluence map in intensity modulated radiation therapy (IMRT) inverse planning, thereby reducing plan complexity and improving delivery efficiency while maintaining plan quality. Methods: First-order total-variation (TV) minimization (min.) based on the L1-norm has been proposed to reduce the complexity of the fluence map in IMRT by generating sparse fluence-map variations. However, with stronger dose sparing to the critical structures, the inevitable increase in fluence-map complexity can lead to inefficient dose delivery. Theoretically, L0-min. is the ideal solution for the sparse signal recovery problem, yet it is practically intractable due to the nonconvexity of the objective function. As an alternative, the authors use the iteratively reweighted L1-min. technique to incorporate the benefits of the L0-norm into the tractability of L1-min. The weight multiplied with each element is inversely related to the magnitude of the corresponding element and is iteratively updated by the reweighting process. The proposed penalizing process combined with TV min. further improves sparsity in the fluence-map variations, hence ultimately enhancing delivery efficiency. To validate the proposed method, this work compares three treatment plans obtained from quadratic min. (generally used in clinical IMRT), conventional TV min., and the proposed reweighted TV min. techniques, implemented with a large-scale L1 solver (templates for first-order conic solvers), for five patients' clinical data. Criteria such as conformation number (CN), modulation index (MI), and estimated treatment time are employed to assess the relationship between plan quality and delivery efficiency. Results: The proposed method yields simpler fluence maps than the quadratic and conventional TV based techniques. To attain a given CN and dose sparing to the critical organs for the 5 clinical cases, the proposed method reduces the number of segments by 10-15 and 30-35 relative to the TV min. and quadratic min. based plans, while the MIs decrease by about 20%-30% and 40%-60% over the plans from the two existing techniques, respectively. Under these conditions, the total treatment time of the plans obtained from the proposed method is reduced by 12-30 s and 30-80 s, mainly due to greatly shorter multileaf collimator (MLC) traveling time in IMRT step-and-shoot delivery. Conclusions: The reweighted L1-minimization technique provides a promising solution for simplifying the fluence-map variations in IMRT inverse planning. It improves delivery efficiency by reducing the number of segments and the treatment time, while maintaining plan quality in terms of target conformity and critical structure sparing.
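The reweighting loop itself is compact. Below is a hedged sketch on generic 2x2 toy data (not fluence maps, and with plain l1 rather than the TV term the paper uses): after each weighted l1 solve, the weights are reset to w_i = 1/(|x_i| + eps), so small entries are penalized more strongly, pushing the solution toward the l0 ideal.

```python
# Iteratively reweighted l1 (Candes-Wakin-Boyd style), the device used above:
# alternate a weighted l1 solve with the weight update w_i = 1/(|x_i| + eps).

def soft(v, t):
    return max(abs(v) - t, 0.0) * (1 if v > 0 else -1)

def weighted_l1(A, y, lam, w, x, step=0.5, iters=500):
    """min_x 0.5*||Ax - y||^2 + lam * sum_j w[j]*|x[j]|  via proximal gradient."""
    n = len(x)
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(len(y))]
        g = [sum(A[i][j] * r[i] for i in range(len(y))) for j in range(n)]
        x = [soft(x[j] - step * g[j], step * lam * w[j]) for j in range(n)]
    return x

def reweighted_l1(A, y, lam, eps=0.1, rounds=6):
    n = len(A[0])
    x, w = [0.0] * n, [1.0] * n
    for _ in range(rounds):
        x = weighted_l1(A, y, lam, w, x)
        w = [1.0 / (abs(xj) + eps) for xj in x]   # small entries -> large weights
    return x

A = [[1.0, 0.2], [0.2, 1.0]]
y = [1.5, 0.3]                # = A @ (1.5, 0): the true support is one entry
x = reweighted_l1(A, y, lam=0.1)
print(x)                      # zero entry stays exactly zero; bias on x[0] shrinks
```

Each round is no harder than the original convex problem, which is what makes the scheme practical inside a large-scale planning solver.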

  7. Action-minimizing solutions of the one-dimensional N-body problem

    NASA Astrophysics Data System (ADS)

    Yu, Xiang; Zhang, Shiqing

    2018-05-01

    We supplement the following result of C. Marchal on the Newtonian N-body problem: a path minimizing the Lagrangian action functional between two given configurations is always a true (collision-free) solution when the dimension d of the physical space R^d satisfies d ≥ 2. The focus of this paper is the fixed-ends problem for the one-dimensional Newtonian N-body problem. We prove that a path minimizing the action functional in the set of paths that join two given configurations and keep the same order for all time is always a true (collision-free) solution. Considering the one-dimensional N-body problem with equal masses, we prove that (i) collision instants are isolated for a path minimizing the action functional between two given configurations, (ii) if the particles at the two endpoints have the same order, then the path minimizing the action functional is always a true (collision-free) solution, and (iii) when the particles at the two endpoints have different orders, although every path must contain collisions, there are at most N! - 1 collisions for any action-minimizing path.

  8. The analytic solution of the firm's cost-minimization problem with box constraints and the Cobb-Douglas model

    NASA Astrophysics Data System (ADS)

    Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.

    2012-12-01

    One of the best-known problems in the field of microeconomics is the Firm's Cost-Minimization Problem. In this paper we establish the analytical expression for the cost function using the Cobb-Douglas model and considering maximum constraints on the inputs. Moreover, we prove that it belongs to the class C^1.
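A two-input numeric sketch of the problem (the paper derives the general closed-form, C^1 cost function; the helper name and example values below are ours): minimize w1*x1 + w2*x2 subject to the Cobb-Douglas isoquant x1^a * x2^b = q, with a box constraint (cap) on the first input.

```python
# Cobb-Douglas cost minimization with a cap on input 1.  The interior optimum
# follows from the Lagrange conditions x2/x1 = (b*w1)/(a*w2); if it violates
# the cap, the constraint binds and x2 is recomputed from the isoquant.

def cobb_douglas_cost(a, b, w1, w2, q, m1=float("inf")):
    k = (b * w1) / (a * w2)               # optimal input ratio x2/x1
    x1 = (q / k ** b) ** (1.0 / (a + b))  # interior solution from x1^a*(k*x1)^b = q
    x2 = k * x1
    if x1 > m1:                           # box constraint binds
        x1 = m1
        x2 = (q / m1 ** a) ** (1.0 / b)   # stay on the isoquant
    return x1, x2, w1 * x1 + w2 * x2

# Symmetric example: a = b = 0.5, unit prices, output q = 2
print(cobb_douglas_cost(0.5, 0.5, 1.0, 1.0, 2.0))         # (2.0, 2.0, 4.0)
print(cobb_douglas_cost(0.5, 0.5, 1.0, 1.0, 2.0, m1=1.0)) # cap binds: (1.0, 4.0, 5.0)
```

The kink where the cap starts to bind is exactly where differentiability is at stake, which is why the paper's C^1 result for the full cost function is non-trivial.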

  9. What is the optimal architecture for visual information routing?

    PubMed

    Wolfrum, Philipp; von der Malsburg, Christoph

    2007-12-01

    Analyzing the design of networks for visual information routing is an underconstrained problem due to insufficient anatomical and physiological data. We propose here optimality criteria for the design of routing networks. For a very general architecture, we derive the number of routing layers and the fanout that minimize the required neural circuitry. The optimal fanout l is independent of network size, while the number k of layers scales logarithmically (with a prefactor below 1) with the number n of visual resolution units to be routed independently. The results are found to agree with data from the primate visual system.
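One back-of-envelope way to see the size-independence of the optimal fanout (a simplification of the paper's cost model, assuming total circuitry scales like n*l*k with l^k = n): the n-independent factor to minimize is l/ln(l), whose real minimum is at l = e, so the best integer fanout is 3 regardless of n.

```python
# Under the assumed cost model  circuitry ~ n * l * k  with  l^k = n,
# i.e. k = ln(n)/ln(l), the factor to minimize over the fanout l is l/ln(l).
import math

def circuitry_factor(l):
    return l / math.log(l)

best = min(range(2, 10), key=circuitry_factor)
print(best)                                   # -> 3 (integer neighbor of e)

# Layers then scale logarithmically with prefactor 1/ln(3) < 1:
n = 10 ** 6
print(math.log(n) / math.log(3))              # ≈ 12.6 layers for a million units
```

This is only a sketch of the scaling argument, not the paper's full derivation, but it shows why the optimal fanout drops out of the size dependence entirely.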

  10. Complications and Morbidities of Mini-open Anterior Retroperitoneal Lumbar Interbody Fusion: Oblique Lumbar Interbody Fusion in 179 Patients.

    PubMed

    Silvestre, Clément; Mac-Thiong, Jean-Marc; Hilmi, Radwan; Roussouly, Pierre

    2012-06-01

    A retrospective study including 179 patients who underwent oblique lumbar interbody fusion (OLIF) at one institution. To report the complications associated with a minimally invasive technique using a retroperitoneal anterolateral approach to the lumbar spine. Different approaches to the lumbar spine have been proposed, but they are associated with an increased risk of complications and longer operations. A total of 179 patients with previous posterior instrumented fusion undergoing OLIF were included. The technique is described in terms of the number of levels fused, operative time and blood loss. Perioperative and postoperative complications were noted. Patients had a mean age of 54.1 ± 10.6 years and a BMI of 24.8 ± 4.1 kg/m². The procedure was performed in the lumbar spine at L1-L2 in 4, L2-L3 in 54, L3-L4 in 120, L4-L5 in 134, and L5-S1 in 6 patients. It was done at 1 level in 56, 2 levels in 107, and 3 levels in 16 patients. Surgery time and blood loss were, respectively, 32.5 ± 13.2 minutes and 57 ± 131 ml per level fused. There were 19 patients with a single complication and one with two complications, including two patients with postoperative radiculopathy after L3-5 OLIF. There was no abdominal weakness or herniation. Minimally invasive OLIF can be performed easily and safely in the lumbar spine from L2 to L5, and at L1-2 for selected cases. Up to 3 levels can be addressed through a 'sliding window'. The procedure is associated with minimal blood loss and short operative times, and with a decreased risk of abdominal wall weakness or herniation.

  11. Arbitrary norm support vector machines.

    PubMed

    Huang, Kaizhu; Zheng, Danian; King, Irwin; Lyu, Michael R

    2009-02-01

    Support vector machines (SVM) are state-of-the-art classifiers. Typically the L2-norm or the L1-norm is adopted as the regularization term in SVMs, while other norm-based SVMs, for example the L0-norm SVM or even the L∞-norm SVM, are rarely seen in the literature. The major reason is that the L0-norm describes a discontinuous and nonconvex term, leading to a combinatorially NP-hard optimization problem. In this letter, motivated by Bayesian learning, we propose a novel framework that can implement arbitrary norm-based SVMs in polynomial time. One significant feature of this framework is that only a sequence of sequential minimal optimization problems needs to be solved, making it practical in many real applications. The proposed framework is important in the sense that Bayesian priors can be efficiently plugged into most learning methods without knowing their explicit form. Hence, this builds a connection between Bayesian learning and kernel machines. We derive the theoretical framework, demonstrate how our approach works on the L0-norm SVM as a typical example, and perform a series of experiments to validate its advantages. Experimental results on nine benchmark data sets are very encouraging. The implemented L0-norm is competitive with or even better than the standard L2-norm SVM in terms of accuracy, but with a reduced number of support vectors (9.46% fewer on average). When compared with another sparse model, the relevance vector machine, our proposed algorithm also demonstrates better sparsity properties with a training speed over seven times faster.

  12. How many invariant polynomials are needed to decide local unitary equivalence of qubit states?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maciążek, Tomasz (Faculty of Physics, University of Warsaw, ul. Hoża 69, 00-681 Warszawa); Oszmaniec, Michał

    2013-09-15

    Given L-qubit states with fixed spectra of the reduced one-qubit density matrices, we find a formula for the minimal number of invariant polynomials needed to solve the local unitary (LU) equivalence problem, that is, the problem of deciding whether two states can be connected by local unitary operations. Interestingly, this number is not the same for every collection of spectra. Some spectra require fewer polynomials to solve the LU equivalence problem than others. The result is obtained using geometric methods, i.e., by calculating the dimensions of the reduced spaces stemming from the symplectic reduction procedure.

  13. Highly sensitive MYD88L265P mutation detection by droplet digital PCR in Waldenström Macroglobulinemia.

    PubMed

    Drandi, Daniela; Genuardi, Elisa; Dogliotti, Irene; Ferrante, Martina; Jiménez, Cristina; Guerrini, Francesca; Lo Schirico, Mariella; Mantoan, Barbara; Muccio, Vittorio; Lia, Giuseppe; Zaccaria, Gian Maria; Omedè, Paola; Passera, Roberto; Orsucci, Lorella; Benevolo, Giulia; Cavallo, Federica; Galimberti, Sara; García-Sanz, Ramón; Boccadoro, Mario; Ladetto, Marco; Ferrero, Simone

    2018-03-22

    We here describe a novel method for MYD88 L265P mutation detection and minimal residual disease monitoring in Waldenström Macroglobulinemia by droplet digital PCR, in bone marrow and peripheral blood cells as well as in circulating cell-free DNA. Our method shows a sensitivity of 5.00E-05, by far superior to the widely used allele-specific polymerase chain reaction (1.00E-03). Overall, 291 unsorted samples from 148 patients (133 Waldenström Macroglobulinemia, 11 IgG lymphoplasmacytic lymphoma and 4 IgM monoclonal gammopathy of undetermined significance), 194 baseline and 97 follow-up, were analyzed. 122/128 (95.3%) bone marrow and 47/66 (71.2%) baseline peripheral blood samples scored positive for MYD88 L265P. Moreover, to investigate whether MYD88 L265P detection by droplet digital PCR could be used for minimal residual disease monitoring, mutation levels were compared with IGH-based minimal residual disease analysis in 10 patients, proving as informative as the classical IGH-based minimal residual disease assay, which is standardized but not yet validated in Waldenström Macroglobulinemia (r^2 = 0.64). Finally, MYD88 L265P detection performed by droplet digital PCR on plasmatic circulating tumor DNA from 60 patients showed a good correlation with bone marrow (median mutational values: bone marrow 1.92E-02, plasmatic circulating tumor DNA 1.4E-02, peripheral blood 1.03E-03). This study indicates that the droplet digital PCR MYD88 L265P assay is a feasible and sensitive tool for mutational screening and minimal residual disease monitoring in Waldenström Macroglobulinemia. Both unsorted bone marrow and peripheral blood samples can be reliably tested, as can circulating tumor DNA, which represents an attractive, less invasive alternative to bone marrow for MYD88 L265P detection. Copyright © 2018, Ferrata Storti Foundation.

  14. Characterization of the chitinase gene family and the effect on A. flavus and aflatoxin resistance in maize.

    USDA-ARS?s Scientific Manuscript database

    Maize (Zea mays L.) is a crop of global importance, but is prone to contamination by aflatoxins produced by fungi in the genus Aspergillus. The development of resistant germplasm and the identification of genes contributing to resistance would aid in the reduction of the problem with a minimal need ...

  15. Hospital-acquired listeriosis associated with sandwiches in the UK: a cause for concern.

    PubMed

    Little, C L; Amar, C F L; Awofisayo, A; Grant, K A

    2012-09-01

Hospital-acquired outbreaks of listeriosis are not commonly reported but remain a significant public health problem. The aim was to raise awareness of listeriosis outbreaks that have occurred in hospitals and to describe actions that can be taken to minimize the risk of foodborne listeriosis to vulnerable patients. Foodborne outbreaks and incidents of Listeria monocytogenes reported to the Health Protection Agency national surveillance systems were investigated and those linked to hospitals were extracted. The data were analysed to identify the outbreak/incident setting, the food vehicle, outbreak contributory factors and the origin of the problem. Most (8/11, 73%) foodborne outbreaks of listeriosis that occurred in the UK between 1999 and 2011 were associated with sandwiches purchased from or provided in hospitals. The infecting subtype of L. monocytogenes was repeatedly detected in supplied prepacked sandwiches and in sandwich manufacturing environments. In five of the outbreaks, breaches in cold chain controls of food also occurred at hospital level. The outbreaks highlight the potential for sandwiches contaminated with L. monocytogenes to cause severe infection in vulnerable people. Control of L. monocytogenes in sandwich manufacturing and within hospitals is essential to minimize the potential for consumption of this bacterium at levels hazardous to health. Manufacturers supplying sandwiches to hospitals should aim to ensure the absence of L. monocytogenes in sandwiches at the point of production, and hospital-documented food safety management systems should ensure the integrity of the food cold chain. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  16. Nutritional considerations in triathlon.

    PubMed

    Jeukendrup, Asker E; Jentjens, Roy L P G; Moseley, Luke

    2005-01-01

Triathlon combines three disciplines (swimming, cycling and running) and competitions last between 1 hour 50 minutes (Olympic distance) and 14 hours (Ironman distance). Independent of the distance, dehydration and carbohydrate (CHO) depletion are the most likely causes of fatigue in triathlon, whereas gastrointestinal (GI) problems, hyperthermia and hyponatraemia are potentially health threatening, especially in longer events. Although glycogen supercompensation may be beneficial for triathlon performance (even Olympic distance), this does not necessarily have to be achieved by the traditional supercompensation protocol. More recently, studies have revealed ways to increase muscle glycogen concentrations to very high levels with minimal modifications in diet and training. During competition, cycling provides the best opportunity to ingest fluids. The optimum CHO concentration seems to be in the range of 5-8% and triathletes should aim to achieve a CHO intake of 60-70 g/hour. Triathletes should attempt to limit body mass losses to 1% of body mass. In all cases, a drink should contain sodium (30-50 mmol/L) for optimal absorption and prevention of hyponatraemia. Post-exercise rehydration is best achieved by consuming beverages that have a high sodium content (>60 mmol/L) in a volume equivalent to 150% of body mass loss. GI problems occur frequently, especially in long-distance triathlon. Problems seem related to the intake of highly concentrated carbohydrate solutions, or hyperosmotic drinks, and the intake of fibre, fat and protein. Endotoxaemia has been suggested as an explanation for some of the GI problems, but this has not been confirmed by recent research. Although mild endotoxaemia may occur after an Ironman-distance triathlon, this does not seem to be related to the incidence of GI problems. Hyponatraemia has occasionally been reported, especially among slow competitors in triathlons, and probably arises due to loss of sodium in sweat coupled with very high intakes (8-10 L) of water or other low-sodium drinks.

  17. Estimating Optimal Transformations for Multiple Regression and Correlation.

    DTIC Science & Technology

    1982-07-01

algorithm; we minimize (2.4) e²(θ, φ₁, …, φₚ) = E[θ(Y) − Σⱼ₌₁ᵖ φⱼ(Xⱼ)]², holding Eθ² = 1 and Eθ = Eφ₁ = ⋯ = Eφₚ = 0, through a series of single-function minimizations. (5.16) THEOREM: if θ*, φ* is an optimal transformation for regression, then... Stanford University, Tech. Report ORION006. Gasser, T. and Rosenblatt, M. (eds.) (1979). Smoothing Techniques for Curve Estimation, Lecture Notes in

  18. L1-norm kernel discriminant analysis via Bayes error bound optimization for robust feature extraction.

    PubMed

    Zheng, Wenming; Lin, Zhouchen; Wang, Haixian

    2014-04-01

A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm in which a novel surrogate convex function is introduced, such that the optimization problem in each iteration reduces to a convex programming problem with a guaranteed closed-form solution. Moreover, we generalize the L1-LDA method to nonlinear robust feature extraction problems via the kernel trick, yielding the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.

  19. Isolation, Cloning and Expression of the Genes for Microbial Polyurethane Degradation

    DTIC Science & Technology

    1991-02-20

Aspergillus niger--fungi ATCC 12668; Trichoderma sp.--fungi. The minimal salts medium is as follows: (NH4)2SO4 1.000 gm/L, KH2PO4 5.000 gm/L, MgSO4 ... culture; ATCC 35698 Arthrobacter globiformis--bacteria; ATCC 11172 Pseudomonas putida--bacteria; ATCC 10196 Aspergillus oryzae--fungi; ATCC 9642

  20. Improved HDRG decoders for qudit and non-Abelian quantum error correction

    NASA Astrophysics Data System (ADS)

    Hutter, Adrian; Loss, Daniel; Wootton, James R.

    2015-03-01

Hard-decision renormalization group (HDRG) decoders are an important class of decoding algorithms for topological quantum error correction. Due to their versatility, they have been used to decode systems with fractal logical operators, color codes, qudit topological codes, and non-Abelian systems. In this work, we develop a method of performing HDRG decoding which combines strengths of existing decoders and further improves upon them. In particular, we increase the minimal number of errors necessary for a logical error in a system of linear size L from Θ(L^(2/3)) to Ω(L^(1−ε)) for any ε > 0. We apply our algorithm to decoding D(Z_d) quantum double models and a non-Abelian anyon model with Fibonacci-like fusion rules, and show that it indeed significantly outperforms previous HDRG decoders. Furthermore, we provide the first study of continuous error correction with imperfect syndrome measurements for the D(Z_d) quantum double models. The parallelized runtime of our algorithm is poly(log L) for the perfect measurement case. In the continuous case with imperfect syndrome measurements, the averaged runtime is O(1) for Abelian systems, while continuous error correction for non-Abelian anyons remains an open problem.

  1. A Multilevel Approach to the Algebraic Image Reconstruction Problem

    DTIC Science & Technology

    1994-06-01

and later use this fact to show that the Gauss-Seidel method, when applied to the problem, cannot diverge and in fact must converge. Theorem 4.2: B is... First, we show that the Gauss-Seidel method cannot diverge for this problem. We introduce the following definitions: Definition 5.1: The energy... Seidel cannot diverge. Recall that (5.4) is x_i^(k+1) = (1/q_ii)(b_i − Σ_{j=1}^{i−1} q_ij x_j^(k+1) − Σ_{j=i+1}^{n} q_ij x_j^(k)), 1 ≤ i

  2. Control of browning of minimally processed mangoes subjected to ultraviolet radiation pulses.

    PubMed

    de Sousa, Aline Ellen Duarte; Fonseca, Kelem Silva; da Silva Gomes, Wilny Karen; Monteiro da Silva, Ana Priscila; de Oliveira Silva, Ebenézer; Puschmann, Rolf

    2017-01-01

Pulsed ultraviolet radiation (UVP) has been used as an alternative strategy for the control of microorganisms in food. However, its application causes browning of minimally processed fruits and vegetables. To control the browning of minimally processed 'Tommy Atkins' mango treated with UVP (5.7 J cm⁻²), 1-methylcyclopropene (1-MCP) (0.5 μL L⁻¹), an ethylene action blocker, was applied in separate stages, comprising five treatments: control, UVP (U), 1-MCP + UVP (M + U), UVP + 1-MCP (U + M) and 1-MCP + UVP + 1-MCP (M + U + M). At the 1st, 7th and 14th days of storage at 12 °C, we evaluated color (L* and b*), electrolyte leakage, polyphenol oxidase, total extractable polyphenols, vitamin C and total antioxidant activity. 1-MCP, when applied before UVP, prevented the loss of vitamin C and, when applied in a double dose, retained the yellow color (b*) of the cubes. However, 1-MCP reduced the lightness (L*) of the mango cubes whether applied before and/or after UVP. Thus, the application of 1-MCP did not control, but rather intensified, the browning of minimally processed mangoes irradiated with UVP.

  3. Discrete-time neural network for fast solving large linear L1 estimation problems and its application to image restoration.

    PubMed

    Xia, Youshen; Sun, Changyin; Zheng, Wei Xing

    2012-05-01

There is growing interest in solving linear L1 estimation problems, for the sparsity of the solution and robustness against non-Gaussian noise. This paper proposes a discrete-time neural network that can solve large linear L1 estimation problems quickly. The proposed neural network has a fixed computational step length and is proved to be globally convergent to an optimal solution. The proposed neural network is then efficiently applied to image restoration. Numerical results show that the proposed neural network is not only efficient in solving degenerate problems resulting from the non-unique solutions of linear L1 estimation problems, but also needs much less computational time than related algorithms in solving both linear L1 estimation and image restoration problems.
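The record does not reproduce the network's update rule. Purely as an illustrative sketch of how linear L1 estimation min ‖Ax − b‖₁ can be attacked iteratively, the code below uses iteratively reweighted least squares (the IRLS strategy described in record 1 of this collection) rather than the authors' neural network; the function name and parameter values are assumptions, not the paper's.

```python
import numpy as np

def irls_l1(A, b, iters=100, eps=1e-6):
    """Approximate argmin_x ||Ax - b||_1 by iteratively reweighted least squares.

    Each pass solves the weighted normal equations A^T W A x = A^T W b with
    weights w_i = 1/max(|r_i|, eps), so rows with large residuals (outliers)
    are progressively downweighted.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # plain L2 fit as the start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)  # reweight by 1/|residual|
        x = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * b))
    return x
```

For a line 2t + 1 sampled at ten points with one observation corrupted by +100, the L1 fit recovers the slope and intercept almost exactly, where an ordinary L2 fit would tilt toward the outlier.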

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolda, Christopher

In this talk, I review recent work on using a generalization of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), called the Singlet-extended Minimal Supersymmetric Standard Model (SMSSM), to raise the mass of the Standard Model-like Higgs boson without requiring extremely heavy top squarks or large stop mixing. In so doing, this model solves the little hierarchy problem of the minimal model (MSSM), at the expense of leaving the μ-problem of the MSSM unresolved. This talk is based on work published in Refs. [1, 2, 3].

  5. Complications and Morbidities of Mini-open Anterior Retroperitoneal Lumbar Interbody Fusion: Oblique Lumbar Interbody Fusion in 179 Patients

    PubMed Central

    Mac-Thiong, Jean-Marc; Hilmi, Radwan; Roussouly, Pierre

    2012-01-01

Study Design: A retrospective study including 179 patients who underwent oblique lumbar interbody fusion (OLIF) at one institution. Purpose: To report the complications associated with a minimally invasive technique of a retroperitoneal anterolateral approach to the lumbar spine. Overview of Literature: Different approaches to the lumbar spine have been proposed, but they are associated with an increased risk of complications and a longer operation. Methods: A total of 179 patients with previous posterior instrumented fusion undergoing OLIF were included. The technique is described in terms of the number of levels fused, operative time and blood loss. Intraoperative and postoperative complications were noted. Results: Patients were aged 54.1 ± 10.6 years, with a BMI of 24.8 ± 4.1 kg/m². The procedure was performed in the lumbar spine at L1-L2 in 4, L2-L3 in 54, L3-L4 in 120, L4-L5 in 134, and L5-S1 in 6 patients. It was done at 1 level in 56, 2 levels in 107, and 3 levels in 16 patients. Surgery time and blood loss were, respectively, 32.5 ± 13.2 minutes and 57 ± 131 ml per level fused. There were 19 patients with a single complication and one with two complications, including two patients with postoperative radiculopathy after L3-L5 OLIF. There was no abdominal weakness or herniation. Conclusions: Minimally invasive OLIF can be performed easily and safely in the lumbar spine from L2 to L5, and at L1-L2 for selected cases. Up to 3 levels can be addressed through a 'sliding window'. It is associated with minimal blood loss and short operations, and with decreased risk of abdominal wall weakness or herniation. PMID:22708012

  6. Multi-objective based spectral unmixing for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Xu, Xia; Shi, Zhenwei

    2017-02-01

Sparse hyperspectral unmixing assumes that each observed pixel can be expressed as a linear combination of several pure spectra from an a priori library. Sparse unmixing is challenging, since it is usually transformed into an NP-hard l0-norm based optimization problem. Existing methods usually relax the original l0 norm. However, the relaxation may bring in sensitive weighting parameters and additional calculation error. In this paper, we propose a novel multi-objective based algorithm to solve the sparse unmixing problem without any relaxation. We transform sparse unmixing into a multi-objective optimization problem with two correlated objectives: minimizing the reconstruction error and controlling the endmember sparsity. To improve the efficiency of the multi-objective optimization, a population-based random flipping strategy is designed. Moreover, we theoretically prove that the proposed method is able to recover a guaranteed approximate solution from the spectral library within limited iterations. The proposed method can directly deal with the l0 norm via binary coding of the spectral signatures in the library. Experiments on both synthetic and real hyperspectral datasets demonstrate the effectiveness of the proposed method.
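The abstract does not specify the population-based optimizer in enough detail to reproduce. As an assumed, single-solution toy version of "binary coding plus random flipping", the sketch below searches over column supports of a library A, accepting any flip that lowers the least-squares reconstruction error, with a hard cap standing in for the sparsity objective; all names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def recon_error(A, y, support):
    """Least-squares reconstruction error of y using only the selected columns."""
    if not support.any():
        return np.linalg.norm(y)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return np.linalg.norm(A[:, support] @ coef - y)

def flip_search(A, y, max_atoms=3, iters=2000):
    """Toy single-solution support search by random membership-bit flips."""
    n = A.shape[1]
    support = np.zeros(n, dtype=bool)
    best = recon_error(A, y, support)
    for _ in range(iters):
        cand = support.copy()
        cand[rng.integers(n)] ^= True        # flip one binary membership bit
        if cand.sum() <= max_atoms:          # sparsity objective as a hard cap
            err = recon_error(A, y, cand)
            if err < best:                   # keep strictly improving flips
                support, best = cand, err
    return support, best
```

On a trivial library (the identity), the search recovers exactly the columns where the signal is nonzero; the paper's actual method maintains a population of such binary solutions and balances both objectives rather than capping sparsity.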

  7. Kinetic magnetic resonance imaging analysis of lumbar segmental mobility in patients without significant spondylosis.

    PubMed

    Tan, Yanlin; Aghdasi, Bayan G; Montgomery, Scott R; Inoue, Hirokazu; Lu, Chang; Wang, Jeffrey C

    2012-12-01

    The purpose of this study was to examine lumbar segmental mobility using kinetic magnetic resonance imaging (MRI) in patients with minimal lumbar spondylosis. Mid-sagittal images of patients who underwent weight-bearing, multi-position kinetic MRI for symptomatic low back pain or radiculopathy were reviewed. Only patients with a Pfirrmann grade of I or II, indicating minimal disc disease, in all lumbar discs from L1-2 to L5-S1 were included for further analysis. Translational and angular motion was measured at each motion segment. The mean translational motion of the lumbar spine at each level was 1.38 mm at L1-L2, 1.41 mm at L2-L3, 1.14 mm at L3-L4, 1.10 mm at L4-L5 and 1.01 mm at L5-S1. Translational motion at L1-L2 and L2-L3 was significantly greater than L3-4, L4-L5 and L5-S1 levels (P < 0.007). The mean angular motion at each level was 7.34° at L1-L2, 8.56° at L2-L3, 8.34° at L3-L4, 8.87° at L4-L5, and 5.87° at L5-S1. The L5-S1 segment had significantly less angular motion when compared to all other levels (P < 0.006). The mean percentage contribution of each level to the total angular mobility of the lumbar spine was highest at L2-L3 (22.45 %) and least at L5/S1 (14.71 %) (P < 0.001). In the current study, we evaluated lumbar segmental mobility in patients without significant degenerative disc disease and found that translational motion was greatest in the proximal lumbar levels whereas angular motion was similar in the mid-lumbar levels but decreased at L1-L2 and L5-S1.

  8. Robust L1-norm two-dimensional linear discriminant analysis.

    PubMed

    Li, Chun-Na; Shao, Yuan-Hai; Deng, Nai-Yang

    2015-05-01

In this paper, we propose an L1-norm two-dimensional linear discriminant analysis (L1-2DLDA) with robust performance. Different from the conventional two-dimensional linear discriminant analysis with L2-norm (L2-2DLDA), where the optimization problem is transferred to a generalized eigenvalue problem, the optimization problem in our L1-2DLDA is solved by a simple, justifiable iterative technique, and its convergence is guaranteed. Compared with L2-2DLDA, our L1-2DLDA is more robust to outliers and noise since the L1-norm is used. This is supported by our preliminary experiments on a toy example and face datasets, which show the improvement of our L1-2DLDA over L2-2DLDA. Copyright © 2015 Elsevier Ltd. All rights reserved.
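None of the paper's algorithmic details are reproduced here; the tiny example below only illustrates the core intuition the abstract appeals to. In one dimension, the L2-optimal center (the mean, which minimizes the sum of squared deviations) chases an outlier, while the L1-optimal center (the median, which minimizes the sum of absolute deviations) ignores its magnitude; the data values are made up.

```python
import numpy as np

# argmin_c sum_i (x_i - c)^2 is the mean (L2 criterion),
# argmin_c sum_i |x_i - c|  is the median (L1 criterion).
data = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # one gross outlier

l2_center = data.mean()      # dragged far toward the outlier
l1_center = np.median(data)  # unaffected by the outlier's magnitude

print(l2_center, l1_center)  # 22.0 3.0
```

The same effect, applied to within-class and between-class scatter, is what makes an L1-norm discriminant criterion less sensitive to outlying training images than its L2 counterpart.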

  9. INCA Modelling of the Lee System: strategies for the reduction of nitrogen loads

    NASA Astrophysics Data System (ADS)

    Flynn, N. J.; Paddison, T.; Whitehead, P. G.

The Integrated Nitrogen Catchment model (INCA) was applied successfully to simulate nitrogen concentrations in the River Lee, a northern tributary of the River Thames, for 1995-1999. Leaching from urban and agricultural areas was found to control nitrogen dynamics in reaches unaffected by effluent discharges and abstractions; the occurrence of minimal flows resulted in an upward trend in nitrate concentration. Sewage treatment works (STW) discharging into the River Lee raised nitrate concentrations substantially, a problem which was compounded by abstractions in the Lower Lee. The average concentration of nitrate (NO3) for the simulation period 1995-96 was 7.87 mg N l-1. Ammonium (NH4) concentrations were simulated less successfully. However, concentrations of ammonium rarely rose to levels which would be of environmental concern. Scenarios were run through INCA to assess strategies for the reduction of nitrate concentrations in the catchment. The conversion of arable land to ungrazed vegetation or to woodland would reduce nitrate concentrations substantially, whilst inclusion of riparian buffer strips would be unsuccessful in reducing nitrate loading. A 50% reduction in nitrate loading from Luton STW would result in a fall of up to 5 mg N l-1 in the reach directly affected (concentrations fell from maxima of 13 to 8 mg N l-1, nearly a 40% reduction), whilst a 20% reduction in abstractions would reduce maximum peaks in concentration in the lower Lee by up to 4 mg l-1 (from 17 to 13 mg N l-1, nearly a 25% reduction).

  10. Inferring the Minimal Genome of Mesoplasma florum by Comparative Genomics and Transposon Mutagenesis.

    PubMed

    Baby, Vincent; Lachance, Jean-Christophe; Gagnon, Jules; Lucier, Jean-François; Matteau, Dominick; Knight, Tom; Rodrigue, Sébastien

    2018-01-01

    The creation and comparison of minimal genomes will help better define the most fundamental mechanisms supporting life. Mesoplasma florum is a near-minimal, fast-growing, nonpathogenic bacterium potentially amenable to genome reduction efforts. In a comparative genomic study of 13 M. florum strains, including 11 newly sequenced genomes, we have identified the core genome and open pangenome of this species. Our results show that all of the strains have approximately 80% of their gene content in common. Of the remaining 20%, 17% of the genes were found in multiple strains and 3% were unique to any given strain. On the basis of random transposon mutagenesis, we also estimated that ~290 out of 720 genes are essential for M. florum L1 in rich medium. We next evaluated different genome reduction scenarios for M. florum L1 by using gene conservation and essentiality data, as well as comparisons with the first working approximation of a minimal organism, Mycoplasma mycoides JCVI-syn3.0. Our results suggest that 409 of the 473 M. mycoides JCVI-syn3.0 genes have orthologs in M. florum L1. Conversely, 57 putatively essential M. florum L1 genes have no homolog in M. mycoides JCVI-syn3.0. This suggests differences in minimal genome compositions, even for these evolutionarily closely related bacteria. IMPORTANCE The last years have witnessed the development of whole-genome cloning and transplantation methods and the complete synthesis of entire chromosomes. Recently, the first minimal cell, Mycoplasma mycoides JCVI-syn3.0, was created. Despite these milestone achievements, several questions remain to be answered. For example, is the composition of minimal genomes virtually identical in phylogenetically related species? On the basis of comparative genomics and transposon mutagenesis, we investigated this question by using an alternative model, Mesoplasma florum, that is also amenable to genome reduction efforts. 
Our results suggest that the creation of additional minimal genomes could help reveal different gene compositions and strategies that can support life, even within closely related species.

  11. Brain abnormality segmentation based on l1-norm minimization

    NASA Astrophysics Data System (ADS)

    Zeng, Ke; Erus, Guray; Tanwar, Manoj; Davatzikos, Christos

    2014-03-01

We present a method that uses sparse representations to model the inter-individual variability of healthy anatomy from a limited number of normal medical images. Abnormalities in MR images are then defined as deviations from the normal variation. More precisely, we model an abnormal (pathological) signal y as the superposition of a normal part ỹ that can be sparsely represented under an example-based dictionary, and an abnormal part r. Motivated by a dense error correction scheme recently proposed for sparse signal recovery, we use l1-norm minimization to separate ỹ and r. We extend the existing framework, which was mainly used for robust face recognition in a discriminative setting, to address challenges of brain image analysis, particularly the high-dimensionality, low-sample-size problem. The dictionary is constructed from local image patches extracted from training images aligned using smooth transformations, together with minor perturbations of those patches. A multi-scale sliding-window scheme is applied to capture anatomical variations ranging from fine and localized to coarser and more global. The statistical significance of the abnormality term r is obtained by comparison to its empirical distribution through cross-validation, and is used to assign an abnormality score to each voxel. In our validation experiments the method is applied to segmenting abnormalities on 2-D slices of FLAIR images, and we obtain segmentation results consistent with the expert-defined masks.
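The dictionary construction and multi-scale scheme above are not reproduced here. As a hedged sketch of the underlying l1-minimization step only, the ISTA proximal-gradient iteration below solves the standard lasso surrogate min_x ½‖Dx − y‖² + λ‖x‖₁, after which r = y − Dx plays the role of the abnormal part; the function names and parameter values are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage toward zero)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam=0.1, iters=500):
    """Minimize 0.5*||D x - y||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        # gradient step on the quadratic term, then shrinkage on the l1 term
        x = soft_threshold(x - step * (D.T @ (D @ x - y)), lam * step)
    return x
```

With x recovered, r = y − D @ x is the part of the signal the dictionary cannot explain sparsely, which is the quantity the method scores for abnormality.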

  12. Postprandial blood glucose control in type 1 diabetes for carbohydrates with varying glycemic index foods.

    PubMed

    Hashimoto, Shogo; Noguchi, Claudia Cecilia Yamamoto; Furutani, Eiko

    2014-01-01

    Treatment of type 1 diabetes consists of maintaining postprandial normoglycemia using the correct prandial insulin dose according to food intake. Nonetheless, it is hardly achieved in practice, which results in several diabetes-related complications. In this study we present a feedforward plus feedback blood glucose control system that considers the glycemic index of foods. It consists of a preprandial insulin bolus whose optimal bolus dose and timing are stated as a minimization problem, which is followed by a postprandial closed-loop control based on model predictive control. Simulation results show that, for a representative carbohydrate intake of 50 g, the present control system is able to maintain postprandial glycemia below 140 mg/dL while preventing postprandial hypoglycemia as well.

  13. Limit behavior of mass critical Hartree minimization problems with steep potential wells

    NASA Astrophysics Data System (ADS)

    Guo, Yujin; Luo, Yong; Wang, Zhi-Qiang

    2018-06-01

We consider minimizers of the following mass critical Hartree minimization problem: e_λ(N) ≔ inf{E_λ(u) : u ∈ H¹(ℝᵈ), ‖u‖₂² = N}, where d ≥ 3, λ > 0, and the Hartree energy functional E_λ(u) is defined by E_λ(u) ≔ ∫_{ℝᵈ} |∇u(x)|² dx + λ ∫_{ℝᵈ} g(x)u²(x) dx − (1/2) ∫_{ℝᵈ}∫_{ℝᵈ} u²(x)u²(y)/|x − y|² dx dy. Here the steep potential g(x) satisfies 0 = g(0) = inf_{ℝᵈ} g(x) ≤ g(x) ≤ 1 and 1 − g(x) ∈ L^{d/2}(ℝᵈ). We prove that there exists a constant N* > 0, independent of λ and g(x), such that if N ≥ N*, then e_λ(N) does not admit minimizers for any λ > 0; if 0 < N < N*, then there exists a constant λ*(N) > 0 such that e_λ(N) admits minimizers for any λ > λ*(N) and does not admit minimizers for 0 < λ < λ*(N). For any given 0 < N < N*, the limit behavior of positive minimizers of e_λ(N) is also studied as λ → ∞, where the mass concentrates at the bottom of g(x).

  14. Linear Matrix Inequality Method for a Quadratic Performance Index Minimization Problem with a class of Bilinear Matrix Inequality Conditions

    NASA Astrophysics Data System (ADS)

    Tanemura, M.; Chida, Y.

    2016-09-01

Many control system design problems are expressed as a performance index minimization under BMI conditions. A minimization problem expressed with LMIs, by contrast, can be solved easily because of the convexity of LMIs. Therefore, many researchers have studied transforming a variety of control design problems into convex minimization problems expressed with LMIs. This paper proposes an LMI method for a quadratic performance index minimization problem with a class of BMI conditions. The minimization problem treated in this paper includes state-feedback gain design problems for switched systems, among others. The effectiveness of the proposed method is verified through a state-feedback gain design for switched systems and a numerical simulation using the designed feedback gains.

  15. Evaluation of Brief Group-Administered Instruction for Parents to Prevent or Minimize Sleep Problems in Young Children with Down Syndrome

    ERIC Educational Resources Information Center

    Stores, Rebecca; Stores, Gregory

    2004-01-01

    Background: The study concerns the unknown value of group instruction for mothers of young children with Down syndrome (DS) in preventing or minimizing sleep problems. Method: (1) Children with DS were randomly allocated to an Instruction group (given basic information about children's sleep) and a Control group for later comparison including…

  16. Section 17. Laparoscopic and minimal incisional donor hepatectomy.

    PubMed

    Choi, YoungRok; Yi, Nam-Joon; Lee, Kwang-Woong; Suh, Kyung-Suk

    2014-04-27

    Living donor hepatectomy is now a well-established surgical procedure. However, a large abdominal incision is still required, which results in a large permanent scar, especially for a right liver graft. This report reviews our techniques of minimally invasive or minimal incisional donor hepatectomy using a transverse incision. Twenty-five living donors underwent right hepatectomy with a transverse incision and 484 donors with a conventional incision between April 2007 and December 2012. Among the donors with a transverse incision, two cases were totally laparoscopic procedures using a hand-port device; 11 cases were laparoscopic-assisted hepatectomy (hybrid technique), and 14 cases were open procedures using a transverse incision without the aid of the laparoscopic technique. Currently, a hybrid method has been used exclusively because of the long operation time and surgical difficulty in totally laparoscopic hepatectomy and the exposure problems for the liver cephalic portion during the open technique using a transverse incision. All donors with a transverse incision were women except for one. Twenty-four of the grafts were right livers without the middle hepatic vein (MHV) and one with the MHV. The donors' mean BMI was 21.1 kg/m². The median operation time was 355 minutes, and the mean estimated blood loss was 346.1±247.3 mL (range, 70-1200). There was no intraoperative transfusion. By the Clavien-Dindo classification, these donors had 29 cases of grade I complications [14 pleural effusions (56%), 11 abdominal fluid collections (44%), 3 atelectasis (12%), 1 bile leak (4%)], 1 case of grade II (1 pneumothorax) and two cases of grade III; two interventions were needed because of abdominal fluid collections. Meanwhile, donors with a conventional big incision, which included the Mercedes-Benz incision or an inverted L-shaped incision, had 433 cases of grade I, 19 cases of grade II and 18 cases of grade III complications. However, the liver enzymes and total bilirubin of all donors normalized within 1 month, and they recovered fully. Additionally, in a survey inquiring about cosmetic outcomes on a numeric scale of 1 through 10 (1, Not confident; 10, Very confident), the transverse incision received more satisfactory scores than the conventional big incision (9.80 vs. 6.17, P=0.001). In conclusion, the hybrid technique can be safely performed in donor right hepatectomy, with a minimal transverse skin incision, resulting in a good cosmetic outcome.

  17. Multi Objective Controller Design for Linear System via Optimal Interpolation

    NASA Technical Reports Server (NTRS)

    Ozbay, Hitay

    1996-01-01

We propose a methodology for the design of a controller which satisfies a set of closed-loop objectives simultaneously. The set of objectives consists of: (1) pole placement, (2) decoupled command tracking of step inputs at steady-state, and (3) minimization of step response transients with respect to envelope specifications. We first obtain a characterization of all controllers placing the closed-loop poles in a prescribed region of the complex plane. In this characterization, the free parameter matrix Q(s) is to be determined to attain objectives (2) and (3). Objective (2) is expressed as determining a Pareto optimal solution to a vector valued optimization problem. The solution of this problem is obtained by transforming it to a scalar convex optimization problem. This solution determines Q(0), and the remaining freedom in choosing Q(s) is used to satisfy objective (3). We write Q(s) = (1/v(s))Q̄(s) for a prescribed polynomial v(s). Q̄(s) is a polynomial matrix which is arbitrary except that Q(0) and the order of Q̄(s) are fixed. Obeying these constraints, Q̄(s) is now to be 'shaped' to minimize the step response characteristics of specific input/output pairs according to the maximum envelope violations. This problem is expressed as a vector valued optimization problem using the concept of Pareto optimality. We then investigate a scalar optimization problem associated with this vector valued problem and show that it is convex. The organization of the report is as follows. The next section includes some definitions and preliminary lemmas. We then give the problem statement, which is followed by a section including a detailed development of the design procedure. We then consider an aircraft control example. The last section gives some concluding remarks. The Appendix includes the proofs of technical lemmas, printouts of computer programs, and figures.

  18. Optimal UAS Assignments and Trajectories for Persistent Surveillance and Data Collection from a Wireless Sensor Network

    DTIC Science & Technology

    2015-12-24

    minimizing a weighted sum of the time and control effort needed to collect sensor data. This problem formulation is a modified traveling salesman problem; the report also treats the shortest path problem and uses a traveling salesman problem solution as an initial guess.
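    A common way to produce the kind of traveling-salesman initial guess mentioned here is a nearest-neighbour heuristic. The sketch below is illustrative only; the sensor coordinates are invented and a real instance would come from the wireless sensor network layout:

```python
import numpy as np

def nearest_neighbor_tour(points, start=0):
    """Greedy TSP tour: from the current node, visit the closest unvisited one."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda j: float(np.linalg.norm(points[j] - last)))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Made-up sensor coordinates for the example.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [0.0, 1.5]])
tour = nearest_neighbor_tour(sensors)   # visits sensors in order [0, 1, 2, 3]
```

    Such a tour is cheap to compute and typically serves only as a starting point for a more careful trajectory optimization, as in the report.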

  19. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    NASA Astrophysics Data System (ADS)

    Gato-Rivera, B.; Semikhatov, A. M.

    1992-08-01

    A direct relation between the conformal formalism for 2D quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the W(l)-constrained KP hierarchy to the (p′, p) minimal model, with the tau function being given by the correlator of a product of (dressed) (l, 1) [or (1, l)] operators, provided the Miwa parameter n_i and the free parameter (an abstract bc spin) present in the constraint are expressed through the ratio p′/p and the level l.

  20. A novel formulation of L-thyroxine (L-T4) reduces the problem of L-T4 malabsorption by coffee observed with traditional tablet formulations.

    PubMed

    Vita, Roberto; Saraceno, Giovanna; Trimarchi, Francesco; Benvenga, Salvatore

    2013-02-01

    The purpose of this work is to evaluate if the coffee-associated malabsorption of tablet levothyroxine (L-T4) is reduced by soft gel capsule. We recruited 8 patients with coffee-associated L-T4 malabsorption including one hypothyroid patient. For 6 months, the patients were switched to the capsule maintaining the L-T4 daily dose. Patients took the capsule with water, having coffee 1 h later (proper habit, PH) on days 1-90, or with coffee ≤ 5 min later (improper habit, IH) on days 91-180. After 6 months, 2 patients volunteered for an acute loading test of 600 μg L-T4 (capsule) ingested with water (PH) or with coffee (IH). In the single hypothyroid patient, the post-switch TSH ranged 0.06-0.16 mU/L (PH) versus 5.8-22.4 mU/L pre-switch (PH) and 0.025-0.29 mU/L (IH) versus 26-34 mU/L pre-switch (IH). In the other 7 patients, post-switch TSH was 0.41 ± 0.46 (PH) versus 0.28 ± 0.20 pre-switch (PH) (P = 0.61) and 0.34 ± 0.30 (IH) versus 1.23 ± 1.47 pre-switch (IH) (P < 0.001). Importantly, TSH levels in PH versus IH habit did not differ post-switch (P = 0.90), but they did pre-switch (P < 0.0001). The proportions of post-switch TSH levels <0.10 mU/L with PH (33.3 %) or with IH (33.3 %) were borderline significantly greater than the corresponding pre-switch levels with PH (10.3 %) (P = 0.088) or with IH (0 %) (P = 0.0096). In the two volunteers, the L-T4 loading test showed that coffee influenced L-T4 pharmacokinetics minimally. Soft gel capsules can be used in patients who are unable/unwilling to change their IH of taking L-T4.

  1. [Role of vitamin deficiency in pancytopenia in Djibouti. Findings in a series of 81 consecutive patients].

    PubMed

    Lavigne, C; Lavigne, E; Massenet, D; Binet, C; Brémond, J L; Prigent, D

    2005-01-01

    The purpose of this study of patients with pancytopenia in the Republic of Djibouti was to identify etiologic factors and attempt to define diagnostic and therapeutic strategies adapted to local conditions. Clinical, biological and radiological assessment was performed in 81 patients hospitalized for pancytopenia. There were 56 men and 25 women. Mean hemoglobin, leukocyte and platelet levels were 56.5 +/- 22.7 g/L, 2.1 +/- 0.7 G/L and 56.2 +/- 24.7 G/L respectively. Vitamin deficiency was the most common cause of pancytopenia (49%), followed by hypersplenism (9%), HIV infection (6%) and leishmaniasis (6%). Vitamin-deficient patients had significantly more severe anemia and thrombopenia and significantly higher mean corpuscular volume than patients with pancytopenia related to other causes. A hemoglobin level lower than 40 g/L and a platelet level lower than 35 G/L showed positive predictive values of 90% and 100% respectively for vitamin-deficient pancytopenia. Vitamin deficiency is the most frequent etiology of pancytopenia and causes the most severe cytopenia in Djibouti. Rapid vitamin supplementation after minimal etiologic assessment including a myelogram is an effective treatment strategy for this public health problem.

  2. Domain Wall Fermion Inverter on Pentium 4

    NASA Astrophysics Data System (ADS)

    Pochinsky, Andrew

    2005-03-01

    A highly optimized domain wall fermion inverter has been developed as part of the SciDAC lattice initiative. By designing the code to minimize memory bus traffic, it achieves high cache reuse and performance in excess of 2 GFlops for out-of-L2-cache problem sizes on a GigE cluster with 2.66 GHz Xeon processors. The code uses the SciDAC QMP communication library.

  3. A microwave tomography strategy for structural monitoring

    NASA Astrophysics Data System (ADS)

    Catapano, I.; Crocco, L.; Isernia, T.

    2009-04-01

    The capability of electromagnetic waves to penetrate optically dense regions can be conveniently exploited to provide highly informative images of the internal status of man-made structures in a non-destructive and minimally invasive way. In this framework, as an alternative to the widely adopted radar techniques, microwave tomography approaches are worth considering. As a matter of fact, they may accurately reconstruct the permittivity and conductivity distributions of a given region from the knowledge of a set of incident fields and measurements of the corresponding scattered fields. As far as cultural heritage conservation is concerned, this allows not only detecting the anomalies which can possibly damage the integrity and the stability of the structure, but also characterizing their morphology and electric features, which are useful information to properly address the repair actions. However, since a nonlinear and ill-posed inverse scattering problem has to be solved, proper regularization strategies and sophisticated data processing tools have to be adopted to ensure the reliability of the results. To pursue this aim, in recent years much attention has been focused on the advantages introduced by diversity in data acquisition (multi-frequency/static/view data) [1,2] as well as on the analysis of the factors affecting the solution of an inverse scattering problem [3]. Moreover, it has been shown in [4] how the degree of nonlinearity of the relationship between the scattered field and the electromagnetic parameters of the targets can be changed by properly choosing the mathematical model adopted to formulate the scattering problem.
Exploiting the above results, in this work we propose an imaging procedure in which the inverse scattering problem is formulated as an optimization problem: the mathematical relationship between data and unknowns is expressed by means of a convenient integral equation model, and the sought solution is defined as the global minimum of a cost functional. In particular, a local minimization scheme is exploited, preceded by a pre-processing step devoted to preliminarily assessing the location and the shape of the anomalies. The effectiveness of the proposed strategy has been preliminarily assessed by means of numerical examples concerning the diagnostics of masonry structures, which will be shown in the conference. [1] O. M. Bucci, L. Crocco, T. Isernia, and V. Pascazio, Subsurface inverse scattering problems: Quantifying, qualifying and achieving the available information, IEEE Trans. Geosci. Remote Sens., 39(5), 2527-2538, 2001. [2] R. Persico, R. Bernini, and F. Soldovieri, "The role of the measurement configuration in inverse scattering from buried objects under the distorted Born approximation," IEEE Trans. Antennas Propag., vol. 53, no. 6, pp. 1875-1887, Jun. 2005. [3] I. Catapano, L. Crocco, M. D'Urso, T. Isernia, "On the Effect of Support Estimation and of a New Model in 2-D Inverse Scattering Problems," IEEE Trans. Antennas Propagat., vol. 55, no. 6, pp. 1895-1899, 2007. [4] M. D'Urso, I. Catapano, L. Crocco and T. Isernia, Effective solution of 3D scattering problems via series expansions: applicability and a new hybrid scheme, IEEE Trans. Geosci. Remote Sens., vol. 45, no. 3, pp. 639-648, 2007.
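    The cost-functional idea can be sketched for the much simpler *linearized* case, where the scattering model reduces to y = Ax + noise and a regularized solution minimizes ||Ax - y||² + α||x||². The operator and data below are synthetic stand-ins; the paper's actual model is nonlinear and solved by local minimization after a shape pre-processing step:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))       # stand-in linear forward operator
x_true = np.zeros(20)
x_true[3], x_true[12] = 1.0, -0.5       # illustrative "contrast" profile
y = A @ x_true + 0.01 * rng.standard_normal(40)   # noisy synthetic data

alpha = 1e-2                            # regularization weight
# Normal equations of the Tikhonov functional ||Ax - y||^2 + alpha*||x||^2:
# (A^T A + alpha*I) x = A^T y
x_hat = np.linalg.solve(A.T @ A + alpha * np.eye(20), A.T @ y)
```

    The regularization term is what restores stability to the ill-posed inversion; choosing α trades data fidelity against noise amplification.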

  4. New techniques to control salinity-wastewater reuse interactions in golf courses of the Mediterranean regions

    NASA Astrophysics Data System (ADS)

    Beltrao, J.; Costa, M.; Rosado, V.; Gamito, P.; Santos, R.; Khaydarova, V.

    2003-04-01

    Due to the lack of water around the Mediterranean regions, luxury uses of potable water - as in golf courses - are increasingly contested. In order to solve this problem, non-conventional water resources (effluent, gray, recycled, reclaimed, brackish), like treated wastewater, have gained an increasing role in the planning and development of additional water supplies in golf courses. In most cases, the intense use of effluent for irrigation attracted public awareness with respect to contaminating pathogens and heavy metals. The contaminating effect of salinity on soil and underground water is very often neglected. The objective of this work is to present the conventional techniques to control the salinity of treated wastewater and to present some results on new clean techniques to solve this problem, in the framework of the INCO-COPERNICUS project (no. IC-15CT98-0105) "Adaptation of Efficient Water Use Criteria in Marginal Regions of Europe and Middle Asia with Scarce Sources Subject to Environmental Control, Climate Change and Socio-Economic Development" and of the INCO-DC project (no. IC18-CT98-0266) "Control of Salination and Combating Desertification Effects in the Mediterranean Region. Phase II". Saline water is the most common irrigation water in arid climates. Moreover, for each region treated wastewater is always more saline than tap water; therefore, when treated wastewater is reused in golf courses, more salinity problems occur.
Conventional techniques to combat the salination process in golf courses can be characterized by four generations: 1) Control of root zone salination by soil leaching - two situations can occur: when there is an impermeable layer, salts will be concentrated above this layer; when there is no impermeable layer, aquifer contamination can be observed; 2) Use of subsurface trickle irrigation - economy of water, and therefore less additional salt; however, the problem of groundwater contamination due to natural rain or artificial leaching remains; 3) Enhanced fertilization - increases turfgrass tolerance to salinity, but contamination will be increased by other hazardous chemicals such as nitrate; 4) Use of salt-tolerant turfgrass species - this technique will be very useful to the plants, but does not solve the problem of soil or groundwater contamination. When reusing treated wastewater in the Mediterranean areas, the only way to control the salination process and to maintain the sustainability of golf courses is to combat the salination problems with environmentally safe and clean techniques. These new clean techniques include: 1) Use of salt-removing turfgrass species; 2) Use of drought-tolerant turfgrass species - reduction of salt application by deficit irrigation; 3) Reuse of minimal levels of wastewater, enough to obtain a good visual appearance (GVA) of the turfgrass. Regarding these new clean techniques, experiments were carried out in golf courses of the Algarve, Portugal, the most southwestern part of Europe. It was shown: 1) Use of salt-removing turfgrass species - 3 sprinkle-irrigated cultivars were studied (Agrostis stolonifera L., Cynodon dactylon L. and Pennisetum clandestinum Hochst. ex Chiov.); 2) Use of drought-tolerant turfgrass species - responses to several levels of sprinkle-irrigated wastewater and potable water (with and without fertilization).
An experimental design known as the sprinkle point source was specially used to simulate the several levels of water application, expressed by the crop coefficient kc and by the crop evapotranspiration rate ETc. Turfgrass yield was enhanced linearly with the increased application of treated wastewater. 3) Reuse of minimal levels of wastewater, enough to obtain a good visual appearance (GVA) of the turfgrass - the minimal crop coefficient kc for a good visual appearance of the turfgrass was around 1.0 for potable-water-irrigated mixed cultivars (with 30 kg nitrogen ha-1 month-1) and 1.2 for wastewater-irrigated Bermuda grass (without any mineral fertilization). As concluding remarks, our results show that these new clean techniques are a strong and powerful tool to control salinity, avoid soil salination and maintain the sustainability of golf courses.

  5. Removing uranium (VI) from aqueous solution with insoluble humic acid derived from leonardite.

    PubMed

    Meng, Fande; Yuan, Guodong; Larson, Steven L; Ballard, John H; Waggoner, Charles A; Arslan, Zikri; Han, Fengxiang X

    2017-12-01

    The occurrence of uranium (U) and depleted uranium (DU)-contaminated wastes from anthropogenic activities is an important environmental problem. Insoluble humic acid derived from leonardite (L-HA) was investigated as a potential adsorbent for immobilizing U in the environment. The effect of initial pH, contact time, U concentration, and temperature on U(VI) adsorption onto L-HA was assessed. The U(VI) adsorption was pH-dependent and achieved equilibrium in 2 h. It could be well described with a pseudo-second-order model, indicating that U(VI) adsorption onto L-HA involved chemisorption. The adsorbed U(VI) mass increased with increasing temperature, with maximum adsorption capacities of 91, 112 and 120 mg g-1 at 298, 308 and 318 K, respectively. The adsorption reaction was spontaneous and endothermic. We explored the processes of U(VI) desorption from the L-HA-U complex through batch desorption experiments in 1 mM NaNO3 and in artificial seawater. The desorption process could be well described by a pseudo-first-order model and reached equilibrium in 3 h. L-HA possessed a high propensity to adsorb U(VI). Once adsorbed, the release of U(VI) from the L-HA-U complex was minimal in both 1 mM NaNO3 and artificial seawater (0.06% and 0.40%, respectively). Being abundant, inexpensive, and safe, L-HA has good potential for use as a U adsorbent from aqueous solution or for immobilizing U in soils.
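    The pseudo-second-order model the authors fit, q_t = qe²·k·t / (1 + qe·k·t), is usually estimated from its linear form t/q_t = 1/(k·qe²) + t/qe. A minimal sketch with invented parameters and sampling times (not the paper's data):

```python
import numpy as np

# Hypothetical "true" parameters and sampling times for the sketch.
qe_true, k_true = 120.0, 0.02           # mg/g and g/(mg h), illustrative
t = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0])   # contact time, h
q = qe_true**2 * k_true * t / (1.0 + qe_true * k_true * t)  # uptake q_t

# Linearized pseudo-second-order form: t/q_t = 1/(k*qe^2) + t/qe,
# so a straight-line fit of t/q against t recovers both parameters.
slope, intercept = np.polyfit(t, t / q, 1)
qe_fit = 1.0 / slope                    # equilibrium capacity from the slope
k_fit = slope**2 / intercept            # rate constant, since intercept = 1/(k*qe^2)
```

    With noiseless synthetic data the fit returns the generating parameters exactly; on real batch data the linearization quality is itself a diagnostic for chemisorption-type kinetics.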

  6. On optimal control problem for conservation law modelling one class of highly re-entrant production systems

    NASA Astrophysics Data System (ADS)

    D'Apice, Ciro; Kogut, Peter I.

    2017-07-01

    We discuss the optimal control problem stated as the minimization in the L2-sense of the mismatch between the actual out-flux and a demand forecast for a hyperbolic conservation law that models a highly re-entrant production system. The output of the factory is described as a function of the work in progress and the position of the so-called push-pull point (PPP) where we separate the beginning of the factory employing a push policy from the end of the factory, which uses a pull policy.

  7. End-of-life disposal of high elliptical orbit missions: The case of INTEGRAL

    NASA Astrophysics Data System (ADS)

    Armellin, Roberto; San-Juan, Juan F.; Lara, Martin

    2015-08-01

    Nowadays there is international consensus that space activities must be managed to minimize debris generation and risk. This paper presents a method for the end-of-life (EoL) disposal of spacecraft in highly elliptical orbits (HEO). The time evolution of HEO is strongly affected by Earth's oblateness and luni-solar perturbations, which can lead in the long term to extended interference with the low Earth orbit (LEO) protected region and to uncontrolled Earth re-entry. An EoL disposal concept that exploits the effect of orbital perturbations to reduce the disposal cost is presented. The problem is formulated as a multiobjective optimization problem, which is solved with an evolutionary algorithm. To best explore the search space, a semi-analytical orbit propagator, which allows the propagation of the orbital motion for 100 years in a few seconds, is adopted. The EoL disposal of the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) mission is used as a practical test case to show the effectiveness of the proposed methodology.

  8. Bilevel Model-Based Discriminative Dictionary Learning for Recognition.

    PubMed

    Zhou, Pan; Zhang, Chao; Lin, Zhouchen

    2017-03-01

    Most supervised dictionary learning methods optimize the combinations of reconstruction error, sparsity prior, and discriminative terms. Thus, the learnt dictionaries may not be optimal for recognition tasks. Also, the sparse codes learning models in the training and the testing phases are inconsistent. Besides, without utilizing the intrinsic data structure, many dictionary learning methods only employ the l0 or l1 norm to encode each datum independently, limiting the performance of the learnt dictionaries. We present a novel bilevel model-based discriminative dictionary learning method for recognition tasks. The upper level directly minimizes the classification error, while the lower level uses the sparsity term and the Laplacian term to characterize the intrinsic data structure. The lower level is subordinate to the upper level. Therefore, our model achieves an overall optimality for recognition in that the learnt dictionary is directly tailored for recognition. Moreover, the sparse codes learning models in the training and the testing phases can be the same. We further propose a novel method to solve our bilevel optimization problem. It first replaces the lower level with its Karush-Kuhn-Tucker conditions and then applies the alternating direction method of multipliers to solve the equivalent problem. Extensive experiments demonstrate the effectiveness and robustness of our method.
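    The ADMM machinery mentioned here is easiest to see on the plain l1-regularized least-squares subproblem min 0.5||Ax - y||² + λ||x||₁, rather than the authors' full bilevel model. The following is a hedged sketch with synthetic data; the splitting x = z and the soft-thresholding update are the standard ADMM ingredients:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, y, lam=0.05, rho=1.0, iters=200):
    """ADMM for min 0.5*||Ax - y||^2 + lam*||x||_1 via the splitting x = z."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                     # scaled dual variable
    AtA, Aty = A.T @ A, A.T @ y
    L = AtA + rho * np.eye(n)           # factor once in production code
    for _ in range(iters):
        x = np.linalg.solve(L, Aty + rho * (z - u))   # x-update (ridge step)
        z = soft_threshold(x + u, lam / rho)          # z-update (shrinkage)
        u = u + x - z                                 # dual update
    return z

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[2] = 1.5                         # 1-sparse ground truth, made up
y = A @ x_true                          # noiseless synthetic data
x_hat = admm_lasso(A, y)
```

    Each iteration alternates a cheap ridge solve with an elementwise shrinkage, which is what makes ADMM attractive once the l1 term is embedded in a larger (here, bilevel) problem.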

  9. Rank preserving sparse learning for Kinect based scene classification.

    PubMed

    Tao, Dapeng; Jin, Lianwen; Yang, Zhao; Li, Xuelong

    2013-10-01

    With the rapid development of RGB-D sensors and the rapidly growing population of the low-cost Microsoft Kinect sensor, scene classification, which is a hard, yet important, problem in computer vision, has gained a resurgence of interest recently. That is because the depth information provided by the Kinect sensor opens an effective and innovative way for scene classification. In this paper, we propose a new scheme for scene classification, which applies locality-constrained linear coding (LLC) to local SIFT features for representing the RGB-D samples and classifies scenes through the cooperation between a new rank preserving sparse learning (RPSL) based dimension reduction and a simple classification method. RPSL considers four aspects: 1) it preserves the rank order information of the within-class samples in a local patch; 2) it maximizes the margin between the between-class samples on the local patch; 3) the L1-norm penalty is introduced to obtain the parsimony property; and 4) it models the classification error minimization by utilizing the least-squares error minimization. Experiments are conducted on the NYU Depth V1 dataset and demonstrate the robustness and effectiveness of RPSL for scene classification.

  10. Geometric artifacts reduction for cone-beam CT via L0-norm minimization without dedicated phantoms.

    PubMed

    Gong, Changcheng; Cai, Yufang; Zeng, Li

    2018-01-01

    For cone-beam computed tomography (CBCT), transversal shifts of the rotation center exist inevitably, which results in geometric artifacts in CT images. In this work, we propose a novel geometric calibration method for CBCT, which can also be used in micro-CT. The symmetry property of the sinogram is used for the first calibration, and then the L0-norm of the gradient of the reconstructed image is used as the cost function to be minimized for the second calibration. An iterative search method is adopted to pursue the local minimum of the L0-norm minimization problem. The transversal shift value is updated with a prescribed step size within a search range determined by the first calibration. In addition, a graphics processing unit (GPU)-based FDK algorithm and acceleration techniques are designed to accelerate the calibration process of the presented method. In simulation experiments, the mean absolute difference (MAD) and the standard deviation (SD) of the transversal shift value were less than 0.2 pixels between the noise-free and noisy projection images, which indicates highly accurate calibration with the new method. In real data experiments, the smaller entropies of the corrected images also indicated that higher-resolution images were acquired using the corrected projection data and that textures were well preserved. The study results also support the feasibility of applying the proposed method to other imaging modalities.
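    The calibration criterion - pick the candidate shift whose reconstruction has the smallest L0 norm of the gradient, i.e. the fewest nonzero gradient entries - can be mimicked in one dimension. The "reconstruction" model below (averaging two copies misaligned by the residual shift, so that a wrong shift smears each edge into two) is a toy stand-in for FDK reconstruction, not the paper's pipeline:

```python
import numpy as np

img = np.zeros(64)
img[20:40] = 1.0                        # piecewise-constant toy phantom

def toy_recon(shift):
    """Stand-in for reconstruction under a residual center shift: a wrong
    shift smears each edge into two, raising the gradient's L0 norm."""
    return 0.5 * (np.roll(img, shift) + np.roll(img, -shift))

def grad_l0(x):
    """Number of nonzero entries in the discrete gradient."""
    return int(np.count_nonzero(np.diff(x)))

# Search candidate shifts for the smallest gradient-L0 cost.
best = min(range(-5, 6), key=lambda s: grad_l0(toy_recon(s)))  # -> 0
```

    The zero shift is the unique minimizer here (2 edges versus 4 for any misalignment), which is the sharpness intuition behind using gradient sparsity as a calibration cost.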

  11. Face recognition based on two-dimensional discriminant sparse preserving projection

    NASA Astrophysics Data System (ADS)

    Zhang, Dawei; Zhu, Shanan

    2018-04-01

    In this paper, a supervised dimensionality reduction algorithm named two-dimensional discriminant sparse preserving projection (2DDSPP) is proposed for face recognition. In order to accurately model the manifold structure of data, 2DDSPP constructs within-class and between-class affinity graphs by constrained least squares (LS) and an l1-norm minimization problem, respectively. Operating directly on image matrices, 2DDSPP integrates graph embedding (GE) with the Fisher criterion. The obtained projection subspace preserves the within-class neighborhood geometry of samples while keeping samples from different classes apart. Experimental results on the PIE and AR face databases show that 2DDSPP achieves better recognition performance.

  12. Clinical performance of three bolus calculators in subjects with type 1 diabetes mellitus: a head-to-head-to-head comparison.

    PubMed

    Zisser, Howard; Wagner, Robin; Pleus, Stefan; Haug, Cornelia; Jendrike, Nina; Parkin, Chris; Schweitzer, Matthias; Freckmann, Guido

    2010-12-01

    Insulin pump systems now provide automated bolus calculators (ABCs) that electronically calculate insulin boluses to address carbohydrate intake and out-of-range blood glucose (bG) levels. We compared the efficacy of three ABCs (Accu-Chek(®) Combo [Roche Insulin Delivery Systems (IDS), Inc., Fishers, IN, a member of the Roche Group], Animas(®) 2020 [Animas Corp., West Chester, PA, a Johnson and Johnson company], and MiniMed Paradigm Bolus Wizard(®) [Medtronic MiniMed, Northridge, CA]) to safely reduce postprandial hyperglycemia in type 1 diabetes mellitus (T1DM). T1DM subjects (n = 24) were recruited at a single center for a prospective, triple crossover study. ABCs with the programmed target range (80-140 mg/dL) were used in random order. Postprandial hyperglycemia was induced by reducing the calculated bolus by 25%. Two hours after test meals, the ABCs were allowed to determine whether a correction bolus was needed. Differences between 6-h bG values after test meals that achieved 2-h postprandial hyperglycemia and the mean of the target range (110 mg/dL) were determined. The mean difference between 6-h bG levels following test meals and the 110 mg/dL bG target with the MiniMed device (47.4 ± 31.8 mg/dL) was significantly higher than the Animas (17.3 ± 30.9 mg/dL) and Roche IDS (18.8 ± 33.8 mg/dL) devices (P = 0.0022 and P = 0.0049, respectively). The number of meals with 2-h postprandial hyperglycemia and bG levels at 2 h was similar. Roche IDS and Animas devices recommended correction boluses significantly (P = 0.0001 and P = 0.0002, respectively) more frequently than the MiniMed device. ABC use was not associated with severe hypoglycemia. There was no significant difference in the rate of mild hypoglycemia (bG <60 mg/dL not requiring assistance) among the three groups (Roche IDS and Animas, n = 2; MiniMed, n = 0). In this study, the Roche IDS and Animas devices were more efficacious in controlling postprandial hyperglycemia than the MiniMed device. 
This may be due, in part, to differences in ABC setup protocols and algorithms. Use of ABCs can assist in controlling postprandial glycemia without significant hypoglycemia.

  13. Minimally invasive versus conventional extracorporeal circulation in minimally invasive cardiac valve surgery.

    PubMed

    Baumbach, Hardy; Rustenbach, Christian; Michaelsen, Jens; Hipp, Gernot; Pressmar, Markus; Leinweber, Marco; Franke, Ulrich Friedrich Wilhelm

    2014-02-01

    Minimally invasive extracorporeal circulation (MECC) technology has been applied predominantly in coronary surgery. Data regarding the application of MECC in minimally invasive valve surgery are largely missing. Patients undergoing isolated minimally invasive mitral or aortic valve procedures were allocated either to the conventional extracorporeal circulation (CECC) group (n = 63) or the MECC group (n = 105), and their prospectively generated data were analyzed. Demographic data were comparable between the groups regarding age (CECC vs. MECC: 71.0 ± 7.5 vs. 66.2 ± 10.1 years, p = 0.091) and logistic EuroSCORE I (6.2 ± 2.5 vs. 5.4 ± 3.0, p = 0.707). Hospital mortality was one patient in each group (1.6 vs. 1.0%, p = 0.688). The levels of leukocytes were lower in the MECC group (11.6 ± 3.2 vs. 9.4 ± 4.3 × 10^9/L, p = 0.040). Levels of platelets (137.2 ± 45.5 vs. 152.4 ± 50.3 × 10^9/L, p = 0.015) and hemoglobin (103.3 ± 11.3 vs. 107.3 ± 14.7 g/L, p = 0.017) were higher in the MECC group. Renal function was better preserved (creatinine: 1.1 ± 0.4 vs. 0.9 ± 0.2 mg/dL, p = 0.019). We were able to validate a shorter time of postoperative ventilation (9.5 ± 15.1 vs. 6.3 ± 3.4 h, p = 0.054) as well as a significantly shorter intensive care unit (ICU) stay (1.8 ± 1.3 vs. 1.2 ± 1.0 d, p = 0.005) for MECC patients. The course of C-reactive protein did not differ between the groups. We were able to prove the feasibility of MECC even in minimally invasive mitral and aortic valve procedures. In addition, the use of MECC provides decreased platelet consumption and less hemodilution. The use of MECC in these selected patients led to a shorter ventilation time and ICU stay.

  14. Retroperitoneal oblique corridor to the L2-S1 intervertebral discs in the lateral position: an anatomic study.

    PubMed

    Davis, Timothy T; Hynes, Richard A; Fung, Daniel A; Spann, Scott W; MacMillan, Michael; Kwon, Brian; Liu, John; Acosta, Frank; Drochner, Thomas E

    2014-11-01

    Access to the intervertebral discs from L2-S1 in one surgical position can be challenging. The transpsoas minimally invasive surgical (MIS) approach is preferred by many surgeons, but this approach poses potential risk to neural structures of the lumbar plexus as they course through the psoas. The lumbar plexus and iliac crest often restrict the L4-5 disc access, and the L5-S1 level has not been a viable option from a direct lateral approach. The purpose of the present study was to investigate an MIS oblique corridor to the L2-S1 intervertebral disc space in cadaveric specimens while keeping the specimens in a lateral decubitus position with minimal disruption of the psoas and lumbar plexus. Twenty fresh-frozen full-torso cadaveric specimens were dissected, and an oblique anatomical corridor to access the L2-S1 discs was examined. Measurements were taken in a static state and with mild retraction of the psoas. The access corridor was defined at L2-5 as the left lateral border of the aorta (or iliac artery) and the anterior medial border of the psoas. The L5-S1 corridor of access was defined transversely from the midsagittal line of the inferior endplate of L-5 to the medial border of the left common iliac vessel and vertically to the first vascular structure that crosses midline. The mean access corridor diameters in the static state and with mild psoas retraction, respectively, were as follows: at L2-3, 18.60 mm and 25.50 mm; at L3-4, 19.25 mm and 27.05 mm; and at L4-5, 15.00 mm and 24.45 mm. The L5-S1 corridor mean values were 14.75 mm transversely, from midline to the left common iliac vessel and 23.85 mm from the inferior endplate of L-5 cephalad to the first midline vessel. The oblique corridor allows access to the L2-S1 discs while keeping the patient in a lateral decubitus position without a break in the table. Minimal psoas retraction without significant tendon disruption allowed for a generous corridor to the disc space. 
The L5-S1 disc space can be accessed from an oblique angle consistently with gentle retraction of the iliac vessels. This study supports the potential of an MIS oblique retroperitoneal approach to the L2-S1 discs.

  15. Sensitivity computation of the ℓ1 minimization problem and its application to dictionary design of ill-posed problems

    NASA Astrophysics Data System (ADS)

    Horesh, L.; Haber, E.

    2009-09-01

    The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties which are associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
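    For the ℓ1 minimization problem itself (min ||x||₁ s.t. Ax = y), a classic solver is iteratively reweighted least squares, where each step solves a weighted minimum-norm problem. This sketch is illustrative only; the problem sizes, sparsity pattern, and data are invented:

```python
import numpy as np

def irls_l1(A, y, iters=50, eps=1e-8):
    """Approximate min ||x||_1 s.t. Ax = y by iteratively reweighted least
    squares: each step solves a weighted minimum-norm problem whose closed
    form is x = W A^T (A W A^T)^{-1} y with W = diag(|x| + eps)."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]          # least-norm start
    for _ in range(iters):
        W = np.diag(np.abs(x) + eps)                  # reweighting step
        x = W @ A.T @ np.linalg.solve(A @ W @ A.T, y)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40))                     # underdetermined system
x_true = np.zeros(40)
x_true[[5, 17, 30]] = [1.0, -2.0, 0.5]                # sparse ground truth
y = A @ x_true
x_rec = irls_l1(A, y)
```

    Each iterate stays exactly feasible (Ax = y by construction), while the reweighting progressively concentrates the solution on few coordinates; on easy random instances like this one, IRLS typically recovers the sparse ground truth.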

  16. Approximate solution of the p-median minimization problem

    NASA Astrophysics Data System (ADS)

    Il'ev, V. P.; Il'eva, S. D.; Navrotskaya, A. A.

    2016-09-01

    A version of the facility location problem (the well-known p-median minimization problem) and its generalization—the problem of minimizing a supermodular set function—is studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem in terms of the production and transportation cost matrix is obtained.
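    The flavor of such a discrete "gradient" (greedy) method can be shown on the p-median objective: at each step, open the facility that lowers the total client-to-nearest-facility cost the most. The distance matrix below is a made-up example, not an instance from the paper:

```python
import numpy as np

def greedy_p_median(D, p):
    """Greedy heuristic for the p-median problem.
    D[i, j] is the cost of serving client i from facility j."""
    n_clients, n_fac = D.shape
    chosen = []
    best_cost = np.full(n_clients, np.inf)   # cost to each client's nearest open facility
    for _ in range(p):
        # Total cost if facility j were opened next, for every candidate j.
        totals = [np.minimum(best_cost, D[:, j]).sum() for j in range(n_fac)]
        j_star = int(np.argmin(totals))
        chosen.append(j_star)
        best_cost = np.minimum(best_cost, D[:, j_star])
    return chosen, best_cost.sum()

D = np.array([[0., 4., 9.],
              [4., 0., 5.],
              [9., 5., 0.],
              [1., 3., 8.]])
sites, cost = greedy_p_median(D, 2)     # opens facilities [1, 0], total cost 6.0
```

    Each greedy step is the discrete analog of a steepest-descent move, which is exactly the kind of algorithm whose worst-case a priori bounds the paper analyzes.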

  17. Geometric versus numerical optimal control of a dissipative spin-(1/2) particle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapert, M.; Sugny, D.; Zhang, Y.

    2010-12-15

    We analyze the saturation of a nuclear magnetic resonance (NMR) signal using optimal magnetic fields. We consider both the problems of minimizing the duration of the control and its energy for a fixed duration. We solve the optimal control problems by using geometric methods and a purely numerical approach, the grape algorithm, the two methods being based on the application of the Pontryagin maximum principle. A very good agreement is obtained between the two results. The optimal solutions for the energy-minimization problem are finally implemented experimentally with available NMR techniques.

  18. Predicting the safe load on backpacker's arm using Lagrange multipliers method

    NASA Astrophysics Data System (ADS)

    Abdalla, Faisal Saleh; Rambely, Azmin Sham

    2014-09-01

    In this study, a technique is suggested to reduce the backpack load by transferring part of the load to the child's arm. The purpose of this paper is to estimate the arm muscle forces of school children during load carriage and to determine the safe load that can be carried at the wrist while walking with a backpack. A mathematical model with three DOFs was investigated in the sagittal plane, and the Lagrange multipliers method (LMM) was utilized to minimize a quadratic objective function of muscle forces. The muscle forces were minimized under three load conditions, termed 0-L = 0 N, 1-L = 21.95 N, and 2-L = 43.9 N. The investigated muscle forces were estimated and compared to their maximum forces across the load conditions. Both flexor and extensor muscles were considered, and the results showed that the flexor muscles were active while the extensor muscles were inactive. The estimated muscle forces did not exceed their maximum forces under the 0-L and 1-L conditions, whereas the biceps and FCR muscles exceeded their maximum forces under the 2-L condition. Consequently, the 1-L condition is safe to carry by hand, whereas the 2-L condition is not. Thus, to reduce the load in the backpack, the transferred load should not exceed the 1-L condition.

  19. On l1: Optimal decentralized performance

    NASA Technical Reports Server (NTRS)

    Sourlas, Dennis; Manousiouthakis, Vasilios

    1993-01-01

    In this paper, the Manousiouthakis parametrization of all decentralized stabilizing controllers is employed in mathematically formulating the l1 optimal decentralized controller synthesis problem. The resulting optimization problem is infinite dimensional and therefore not directly amenable to computations. It is shown that finite dimensional optimization problems that have value arbitrarily close to the infinite dimensional one can be constructed. Based on this result, an algorithm that solves the l1 decentralized performance problem is presented. A global optimization approach to the solution of the infinite dimensional approximating problems is also discussed.

  20. Familial confounding of the association between maternal smoking during pregnancy and offspring substance use and problems.

    PubMed

    D'Onofrio, Brian M; Rickert, Martin E; Langström, Niklas; Donahue, Kelly L; Coyne, Claire A; Larsson, Henrik; Ellingson, Jarrod M; Van Hulle, Carol A; Iliadou, Anastasia N; Rathouz, Paul J; Lahey, Benjamin B; Lichtenstein, Paul

    2012-11-01

    Previous epidemiological, animal, and human cognitive neuroscience research suggests that maternal smoking during pregnancy (SDP) causes increased risk of substance use/problems in offspring. To determine the extent to which the association between SDP and offspring substance use/problems depends on confounded familial background factors by using a quasi-experimental design. We used 2 separate samples from the United States and Sweden. The analyses prospectively predicted multiple indices of substance use and problems while controlling for statistical covariates and comparing differentially exposed siblings to minimize confounding. Offspring of a representative sample of women in the United States (sample 1) and the total Swedish population born during the period from January 1, 1983, to December 31, 1995 (sample 2). Adolescent offspring of the women in the National Longitudinal Survey of Youth 1979 (n = 6904) and all offspring born in Sweden during the 13-year period (n = 1,187,360). Self-reported adolescent alcohol, cigarette, and marijuana use and early onset (before 14 years of age) of each substance (sample 1) and substance-related convictions and hospitalizations for an alcohol- or other drug-related problem (sample 2). The same pattern emerged for each index of substance use/problems across the 2 samples. At the population level, maternal SDP predicted every measure of offspring substance use/problems in both samples, ranging from adolescent alcohol use (hazard ratio [HR](moderate), 1.32 [95% CI, 1.22-1.43]; HR(high), 1.33 [1.17-1.53]) to a narcotics-related conviction (HR(moderate), 2.23 [2.14-2.31]; HR(high), 2.97 [2.86-3.09]). When comparing differentially exposed siblings to minimize genetic and environmental confounds, however, the association between SDP and each measure of substance use/problems was minimal and not statistically significant. 
The association between maternal SDP and offspring substance use/problems is likely due to familial background factors, not a causal influence, because siblings have similar rates of substance use and problems regardless of their specific exposure to SDP.

  1. A Multiobjective Approach Applied to the Protein Structure Prediction Problem

    DTIC Science & Technology

    2002-03-07

    like a low energy search landscape. 2.1.1 Symbolic/Formalized Problem Domain Description. Every computer representable problem can also be embodied... method [60]. 3.4 Energy Minimization Methods. The energy landscape algorithms are based on the idea that a protein's final resting conformation is... in our GA used to search the PSP problem energy landscape). 3.5.1 Simple GA. The main routine in a sGA, after encoding the problem, builds a...

  2. Target-controlled total intravenous anesthesia associated with femoral nerve block for arthroscopic knee meniscectomy.

    PubMed

    Nora, Fernando Squeff

    2009-01-01

    The increased popularity of minimally invasive surgical techniques reduced recovery time of procedures that were usually associated with prolonged hospitalization. This study reports the technique of total intravenous anesthesia with propofol and remifentanil associated with femoral nerve block using the inguinal perivascular approach. Ninety patients undergoing knee arthroscopy for meniscectomy were included in this study. Target-controlled infusion (TCI) of propofol (target = 4 microg.mL(-1)) and remifentanil (target = 3 ng.mL(-1)) was used for induction of anesthesia. The concentrations of propofol and remifentanil were changed according to the bispectral index (BIS) and mean arterial pressure (MAP). Volume-controlled mechanical ventilation with a laryngeal mask was used. The concentrations of propofol and remifentanil at the effector site, corresponding to the predictive concentrations, were obtained using the pharmacokinetic models of the drugs inserted in the TCI pumps. Time for hospital discharge encompassed the period between the moment the patient arrived at the recovery room and hospital discharge. Maximal and minimal mean concentrations at the effector site (ng.mL(-1)) of remifentanil were 3.5 and 2.4, respectively. Maximal and minimal mean concentrations of propofol at the effector site (microg.mL(-1)) were 3.1 and 2.6, respectively. The mean flow of infusion of propofol and remifentanil was 8.54 mg.kg(-1).h(-1) and 0.12 microg.kg(-1).min(-1), respectively. Mean hospital discharge time was 180 min. All patients were maintained within established parameters.

  3. The Role of Channel Distribution Information for Interference Management and Network Performance Enhancement

    DTIC Science & Technology

    2010-11-03

    min_{p≥0,U,V} wᵀp s.t. Γ_l ≥ γ_l, l = 1, ..., L (2). For a fixed set of channel matrices H_li, this problem can be solved and will give a set of power...outage. The main optimization problem using CDI can then be formulated: min_{p≥0,U,V} wᵀp s.t. Pr(Γ_l ≤ γ_l) ≤ α_l, l = 1, ..., L (3). In this formulation...can be written as min_{p≥0,U,V} wᵀp s.t. 1 − e^{−γ_l σ²N_l/(c_ll p_l)} ∏_{i≠l} (1 + γ_l c_li p_i/(c_ll p_l))^{−1} ≤ α_l, l = 1, ..., L (11). To get the constraints in (11)...

  4. Minimizing the Diameter of a Network Using Shortcut Edges

    NASA Astrophysics Data System (ADS)

    Demaine, Erik D.; Zadimoghaddam, Morteza

    We study the problem of minimizing the diameter of a graph by adding k shortcut edges, for speeding up communication in an existing network design. We develop constant-factor approximation algorithms for different variations of this problem. We also show how to improve the approximation ratios using resource augmentation to allow more than k shortcut edges. We observe a close relation between the single-source version of the problem, where we want to minimize the largest distance from a given source vertex, and the well-known k-median problem. First we show that our constant-factor approximation algorithms for the general case solve the single-source problem within a constant factor. Then, using a linear-programming formulation for the single-source version, we find a (1 + ɛ)-approximation using O(k log n) shortcut edges. To show the tightness of our result, we prove that any (3/2 − ɛ)-approximation for the single-source version must use Ω(k log n) shortcut edges assuming P ≠ NP.
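For intuition, a toy version of the problem can be sketched as follows (a naive greedy heuristic of our own, not the paper's constant-factor approximation algorithms): compute the diameter by BFS from every vertex, then repeatedly add a shortcut between the endpoints of a current longest shortest path.

```python
from collections import deque

def bfs_dist(adj, s):
    """Hop distances from s by breadth-first search."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameter(adj):
    """Diameter of a connected graph, with a pair (s, t) realizing it."""
    best = (0, None, None)
    for s in adj:
        d = bfs_dist(adj, s)
        t = max(d, key=d.get)
        if d[t] > best[0]:
            best = (d[t], s, t)
    return best

def add_shortcuts(adj, k):
    """Naive greedy heuristic: k times, join the endpoints of a current
    longest shortest path with a shortcut edge."""
    for _ in range(k):
        _, s, t = diameter(adj)
        adj[s].add(t)
        adj[t].add(s)
    return adj

# A path 0-1-...-7 has diameter 7; one shortcut joining its endpoints
# turns it into an 8-cycle, whose diameter is 4.
path = {i: set() for i in range(8)}
for i in range(7):
    path[i].add(i + 1)
    path[i + 1].add(i)
d_before = diameter(path)[0]
add_shortcuts(path, 1)
d_after = diameter(path)[0]
```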

  5. Optimal control problem for linear fractional-order systems, described by equations with Hadamard-type derivative

    NASA Astrophysics Data System (ADS)

    Postnov, Sergey

    2017-11-01

    Two kinds of optimal control problems are investigated for linear time-invariant fractional-order systems with lumped parameters whose dynamics are described by equations with a Hadamard-type derivative: the problem of control with minimal norm, and the problem of control with minimal time under a given restriction on the control norm. A problem setting with nonlocal initial conditions is studied. Admissible controls are allowed to be p-integrable functions (p > 1) on a half-interval. The optimal control problems are studied by the moment method. The correctness and solvability conditions for the corresponding moment problem are derived. For several special cases, the optimal control problems stated are solved analytically. Some analogies are pointed out between the results obtained and known results for integer-order systems and for fractional-order systems described by equations with Caputo- and Riemann-Liouville-type derivatives.

  6. Plasma chitinase 3-like 1 is persistently elevated during first month after minimally invasive colorectal cancer resection.

    PubMed

    Shantha Kumara, H M C; Gaita, David; Miyagaki, Hiromichi; Yan, Xiaohong; Hearth, Sonali Ac; Njoh, Linda; Cekic, Vesna; Whelan, Richard L

    2016-08-15

    To assess blood chitinase 3-like 1 (CHi3L1) levels for 2 mo after minimally invasive colorectal resection (MICR) for colorectal cancer (CRC). CRC patients in an Institutional Review Board approved data/plasma bank who underwent elective MICR and for whom preoperative (PreOp), early postoperative (PostOp), and 1 or more late PostOp samples [postoperative day (POD) 7-27] were available were included. Plasma CHi3L1 levels (ng/mL) were determined in duplicate by enzyme linked immunosorbent assay. PreOp and PostOp plasma samples were available for 80 MICR cancer patients. The median PreOp CHi3L1 level was 56.8 CI: 41.9-78.6 ng/mL (n = 80). Significantly elevated (P < 0.001) median plasma levels (ng/mL) over PreOp levels were detected on POD1 (667.7 CI: 495.7, 771.7; n = 79), POD 3 (132.6 CI: 95.5, 173.7; n = 76), POD7-13 (96.4 CI: 67.7, 136.9; n = 62), POD14-20 (101.4 CI: 80.7, 287.4; n = 22), and POD 21-27 (98.1 CI: 66.8, 137.4; n = 20, P = 0.001). No significant difference in plasma levels was noted on POD27-41. Plasma CHi3L1 levels were significantly elevated for one month after MICR. Persistently elevated plasma CHi3L1 may support the growth of residual tumor and metastasis.

  7. One-dimensional Gromov minimal filling problem

    NASA Astrophysics Data System (ADS)

    Ivanov, Alexandr O.; Tuzhilin, Alexey A.

    2012-05-01

    The paper is devoted to a new branch in the theory of one-dimensional variational problems with branching extremals, the investigation of one-dimensional minimal fillings introduced by the authors. On the one hand, this problem is a one-dimensional version of a generalization of Gromov's minimal fillings problem to the case of stratified manifolds. On the other hand, this problem is interesting in itself and also can be considered as a generalization of another classical problem, the Steiner problem on the construction of a shortest network connecting a given set of terminals. Besides the statement of the problem, we discuss several properties of the minimal fillings and state several conjectures. Bibliography: 38 titles.

  8. Waveform Design for Multimedia Airborne Networks: Robust Multimedia Data Transmission in Cognitive Radio Networks

    DTIC Science & Technology

    2011-03-01

    at the sensor. According to Candes, Tao and Romberg [1], a small number of random projections of a signal that is compressible is all the... [Block-diagram residue; the recoverable pipeline: random projection of the (noisy or original) signal; transform (DWT, FFT, or DCT); solve the minimization problem; reconstruct the signal; channel (AWGN or noiseless); de-noise the signal.]

  9. Sparseness- and continuity-constrained seismic imaging

    NASA Astrophysics Data System (ADS)

    Herrmann, Felix J.

    2005-04-01

    Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR < 0 dB); (ii) use the sparseness and locality (both in position and angle) of directional basis functions (such as curvelets and contourlets) on the model: the reflectivity; and (iii) exploit the near invariance of these basis functions under the normal operator, i.e., the scattering-followed-by-imaging operator. Signal-to-noise ratio and the continuity along the imaged reflectors are significantly enhanced by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam was carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from the NSERC Discovery Grants 22R81254.]
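Minimizing an l1 norm on transform coefficients, as above, is commonly handled with proximal gradient methods. A minimal sketch of iterative shrinkage-thresholding (ISTA) — our own illustrative code, not the authors' curvelet-based solver:

```python
import numpy as np

def soft(x, tau):
    """Soft-thresholding, the proximal operator of tau*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Sketch of ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient step on the quadratic term followed by shrinkage."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - step * (A.T @ (A @ x - b)), step * lam)
    return x

# Sanity check: with A = I, the exact minimizer is soft(b, lam).
x = ista(np.eye(4), np.array([3.0, -0.5, 0.0, 2.0]), lam=1.0)
# x ≈ [2, 0, 0, 1]
```

Joint objectives such as l1 plus total variation are typically handled by splitting methods that apply a proximal step like `soft` to each term.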

  10. European Science Notes Information Bulletin Reports on Current European/Middle Eastern Science

    DTIC Science & Technology

    1989-04-01

    provided by J. Kreuter (Institute for Pharmaceutical Chemistry, Johann Wolfgang...) ...skin. The purpose was to minimize one of the problems associated with... robotics, and database issues. ...papers by general category. The organizer of the meeting was Professor Dr. Wolfgang Strasser of the Wilhelm Schickard... "...dural Models" (N. Yaramanoglu, coauthors F.-L. Krause, M. Bienert) ...problems because the active points on the boundary are more difficult to find. A...

  11. JPRS Report, Science & Technology, USSR: Computers

    DTIC Science & Technology

    1987-08-11

    Based on Game Models (V.O. Groppen, AVTOMATIKA I TELEMEKHANIKA, No 8, Aug 86); Problems of Documenting Activity of Data Bank Administration... DOKUMENTY, No 12, Dec 86); Standardization of Management Documents - One of Methods of Qualitative Increase of Their Effectiveness (V.I. Kokorev)... minimal effect or does not produce anything at all. Machines must be used for the entire manufacturing process cycle, at its most "critical

  12. Geometric Variational Methods for Controlled Active Vision

    DTIC Science & Technology

    2006-08-01

    Haker, L. Zhu, and A. Tannenbaum, ``Optimal mass transport for registration and warping’’ Int. Journal Computer Vision, volume 60, 2004, pp. 225-240. G...pp. 119-142. A. Angenent, S. Haker, and A. Tannenbaum, ``Minimizing flows for the Monge-Kantorovich problem,’’ SIAM J. Math. Analysis, volume 35...Shape analysis of structures using spherical wavelets’’ (with S. Haker and D. Nain), Proceedings of MICCAI, 2005. ``Affine surface evolution for 3D

  13. A hybrid process of biofiltration of secondary effluent followed by ozonation and short soil aquifer treatment for water reuse.

    PubMed

    Zucker, I; Mamane, H; Cikurel, H; Jekel, M; Hübner, U; Avisar, D

    2015-11-01

    The Shafdan reclamation project facility (Tel Aviv, Israel) practices soil aquifer treatment (SAT) of secondary effluent with hydraulic retention times (HRTs) of a few months to a year for unrestricted agricultural irrigation. During the SAT, the high oxygen demand (>40 mg L(-1)) of the infiltrated effluent causes anoxic conditions and mobilization of dissolved manganese from the soil. An additional emerging problem is the occurrence of persistent trace organic compounds (TrOCs) in reclaimed water that should be removed prior to reuse. An innovative hybrid process based on biofiltration, ozonation and short SAT with ∼22 d HRT is proposed for treatment of the Shafdan secondary effluent to overcome limitations of the existing system and to reduce the SAT's physical footprint. Besides efficient removal of particulate matter to minimize clogging, coagulation/flocculation and filtration (5-6 m h(-1)) operated with the addition of hydrogen peroxide as an oxygen source efficiently removed dissolved organic carbon (DOC, to 17-22%), ammonium and nitrite. This resulted in reduced effluent oxygen demand during infiltration and oxidant (ozone) demand during ozonation by 23 mg L(-1) and 1.5 mg L(-1), respectively. Ozonation (1.0-1.2 mg O3 mg DOC(-1)) efficiently reduced concentrations of persistent TrOCs and supplied sufficient dissolved oxygen (>30 mg L(-1)) for fully oxic operation of the short SAT with negligible Mn(2+) mobilization (<50 μg L(-1)). Overall, the examined hybrid process provided DOC reduction of 88% to a value of 1.2 mg L(-1), similar to conventional SAT, while improving the removal of TrOCs and efficiently preventing manganese dissolution. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Phase retrieval from intensity-only data by relative entropy minimization.

    PubMed

    Deming, Ross W

    2007-11-01

    A recursive algorithm, which appears to be new, is presented for estimating the amplitude and phase of a wave field from intensity-only measurements on two or more scan planes at different axial positions. The problem is framed as a nonlinear optimization, in which the angular spectrum of the complex field model is adjusted in order to minimize the relative entropy, or Kullback-Leibler divergence, between the measured and reconstructed intensities. The most common approach to this so-called phase retrieval problem is a variation of the well-known Gerchberg-Saxton algorithm devised by Misell (J. Phys. D6, L6, 1973), which is efficient and extremely simple to implement. The new algorithm has a computational structure that is very similar to Misell's approach, despite the fundamental difference in the optimization criteria used for each. Based upon results from noisy simulated data, the new algorithm appears to be more robust than Misell's approach and to produce better results from low signal-to-noise ratio data. The convergence of the new algorithm is examined.
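The Gerchberg-Saxton family that the new method is compared against can be sketched as alternating modulus projections between two measurement planes. In this hedged, self-contained sketch the FFT stands in for the plane-to-plane propagator (Misell's variant uses defocused-plane propagators instead), and all names are ours:

```python
import numpy as np

def gerchberg_saxton(I_near, I_far, n_iter=200, seed=0):
    """Alternate between the two planes: keep the current phase, impose
    the measured modulus on each plane in turn (Gerchberg-Saxton sketch)."""
    rng = np.random.default_rng(seed)
    a_near, a_far = np.sqrt(I_near), np.sqrt(I_far)
    u = a_near * np.exp(1j * rng.uniform(0, 2 * np.pi, I_near.shape))
    for _ in range(n_iter):
        U = a_far * np.exp(1j * np.angle(np.fft.fft(u)))    # far-plane modulus
        u = a_near * np.exp(1j * np.angle(np.fft.ifft(U)))  # near-plane modulus
    return u

def far_residual(u, I_far):
    """Mismatch between reconstructed and measured far-plane moduli."""
    return np.linalg.norm(np.abs(np.fft.fft(u)) - np.sqrt(I_far))

# Two-plane intensity data simulated from a hidden complex field.
rng = np.random.default_rng(1)
u_true = rng.normal(size=16) + 1j * rng.normal(size=16)
I_near = np.abs(u_true) ** 2
I_far = np.abs(np.fft.fft(u_true)) ** 2
u_rec = gerchberg_saxton(I_near, I_far, n_iter=200)
```

A classical property of this iteration is that the far-plane modulus error is non-increasing, which is the sense in which the abstract's convergence comparison is made.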

  15. Synthesis of positively charged hybrid PHMB-stabilized silver nanoparticles: the search for a new type of active substances used in plant protection products

    NASA Astrophysics Data System (ADS)

    Krutyakov, Yurii A.; Kudrinsky, Alexey A.; Gusev, Alexander A.; Zakharova, Olga V.; Klimov, Alexey I.; Yapryntsev, Alexey D.; Zherebin, Pavel M.; Shapoval, Olga A.; Lisichkin, Georgii V.

    2017-07-01

    Modern agriculture calls for a decrease in pesticide application, particularly in order to decrease the negative impact on the environment. Therefore, the development of new active substances and plant protection products (PPP) that minimize the chemical load on ecosystems is a very important problem. Substances based on silver nanoparticles are a promising solution to this problem because, in correct doses, such products significantly increase yields and decrease crop diseases while displaying low toxicity to humans and animals. In this paper we propose, for the first time, the application of polymeric guanidine compounds with varying chain lengths (from 10 to 130 elementary links) for the design and synthesis of modified silver nanoparticles to be used as the basis of a new generation of PPP. Colloidal solutions of nanocrystalline silver containing 0.5 g l-1 of silver and 0.01-0.4 g l-1 of polyhexamethylene biguanide hydrochloride (PHMB) were obtained by reduction of silver nitrate with sodium borohydride in the presence of PHMB. The field experiment showed that silver-containing solutions have a positive effect on the agronomic properties of potato, wheat and apple. An increase in the activity of antioxidant enzymes, such as peroxidase and catalase, in the tissues of plants treated with nanosilver was also registered.

  16. SU-G-IeP1-13: Sub-Nyquist Dynamic MRI Via Prior Rank, Intensity and Sparsity Model (PRISM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, B; Gao, H

    Purpose: Accelerated dynamic MRI is important for MRI guided radiotherapy. Inspired by compressive sensing (CS), sub-Nyquist dynamic MRI has been an active research area, i.e., sparse sampling in k-t space for accelerated dynamic MRI. This work is to investigate sub-Nyquist dynamic MRI via a previously developed CS model, namely the Prior Rank, Intensity and Sparsity Model (PRISM). Methods: The proposed method utilizes PRISM with rank minimization and incoherent sampling patterns for sub-Nyquist reconstruction. In PRISM, the low-rank background image, which is automatically calculated by rank minimization, is excluded from the L1 minimization step of the CS reconstruction to further sparsify the residual image, thus allowing for higher acceleration rates. Furthermore, the sampling pattern in k-t space is made more incoherent by sampling a different set of k-space points at different temporal frames. Results: Reconstruction results from the L1-sparsity method and the PRISM method with 30% undersampled data and 15% undersampled data are compared to demonstrate the power of PRISM for dynamic MRI. Conclusion: A sub-Nyquist MRI reconstruction method based on PRISM is developed with improved image quality over the L1-sparsity method.
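The low-rank background plus sparse residual split underlying PRISM can be illustrated with an alternating proximal sketch: singular-value thresholding for the low-rank part and soft-thresholding for the sparse part. This is a generic illustration with made-up thresholds, not the PRISM reconstruction algorithm itself:

```python
import numpy as np

def soft(x, tau):
    """Entrywise soft-thresholding (prox of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft(s, tau)) @ Vt

def low_rank_plus_sparse(M, tau_l=1.0, tau_s=0.1, n_iter=50):
    """Alternate proximal steps so that M ≈ L + S, with L low-rank
    (the 'background') and S sparse (the 'residual')."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, tau_l)   # low-rank update
        S = soft(M - L, tau_s)  # sparse update
    return L, S

# A rank-1 'background' plus one large spike.
M = np.outer(np.ones(4), np.arange(1.0, 5.0))
M[0, 0] += 8.0
L, S = low_rank_plus_sparse(M)
```

By construction of the final soft-threshold step, the unmodeled remainder M − L − S is entrywise no larger than `tau_s`.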

  17. Feasibility of minimally invasive radical prostatectomy in prostate cancer patients with high prostate-specific antigen: feasibility and 1-year outcomes.

    PubMed

    Do, Minh; Ragavan, Narasimhan; Dietel, Anja; Liatsikos, Evangelos; Anderson, Chris; McNeill, Alan; Stolzenburg, Jens-Uwe

    2012-10-01

    Urologists are cautious to offer minimally invasive radical prostatectomy in prostate cancer patients with high prostate-specific antigen (and therefore anticipated to have locally advanced or metastatic disease) because of concerns regarding lack of complete cure after minimally invasive radical prostatectomy and of worsening of continence if adjuvant radiotherapy is used. A retrospective review of our institutional database was carried out to identify patients with PSA ≥20 ng/mL who underwent minimally invasive radical prostatectomy between January 2002 and October 2010. Intraoperative, pathological, functional and short-term oncological outcomes were assessed. Overall, 233 patients met study criteria and were included in the analysis. The median prostate-specific antigen and prostate size were 28.5 ng/mL and 47 mL, respectively. Intraoperative complications were the following: rectal injury (0.86%) and blood transfusion (1.7%). Early postoperative complications included prolonged (>6 days) catheterization (9.4%), hematoma (4.7%), deep venous thrombosis (0.86%) and lymphocele (5.1%). Late postoperative complications included cerebrovascular accident (0.4%) and anastomotic stricture (0.8%). Pathology revealed poorly differentiated cancer in 48.9%, pT3/pT4 disease in 55.8%, positive margins in 28.3% and lymph node disease in 20.2% of the cases. Adverse pathological findings were more frequent in patients with prostate-specific antigen >40 ng/mL and (or) in those with locally advanced disease (pT3/pT4). In 62.2% of the cases, adjuvant radiotherapy was used. At 1-year follow up, 80% of patients did not show evidence of biochemical recurrence and 98.8% of them had good recovery of continence. Minimally invasive radical prostatectomy might represent a reasonable option in prostate cancer patients with high prostate-specific antigen as a part of a multimodality treatment approach. © 2012 The Japanese Urological Association.

  18. In vitro activity of origanum vulgare essential oil against candida species

    PubMed Central

    Cleff, Marlete Brum; Meinerz, Ana Raquel; Xavier, Melissa; Schuch, Luiz Filipe; Araújo Meireles, Mário Carlos; Alves Rodrigues, Maria Regina; de Mello, João Roberto Braga

    2010-01-01

    The aim of this study was to evaluate the in vitro activity of the essential oil extracted from Origanum vulgare against sixteen Candida species isolates. Standard strains tested comprised C. albicans (ATCC strains 44858, 4053, 18804 and 3691), C. parapsilosis (ATCC 22019), C. krusei (ATCC 34135), C. lusitaniae (ATCC 34449) and C. dubliniensis (ATCC MY646). Six Candida albicans isolates from the vaginal mucous membrane of female dogs, one isolate from the cutaneous tegument of a dog and one isolate of a capuchin monkey were tested in parallel. A broth microdilution technique (CLSI) was used, and the inoculum concentration was adjusted to 5 x 10(6) CFU mL-1. The essential oil was obtained by hydrodistillation in a Clevenger apparatus and analyzed by gas chromatography. Susceptibility was expressed as Minimal Inhibitory Concentration (MIC) and Minimal Fungicidal Concentration (MFC). All isolates tested in vitro were sensitive to O. vulgare essential oil. The chromatographic analysis revealed that the main compounds present in the essential oil were 4-terpineol (47.95%), carvacrol (9.42%), thymol (8.42%) and □-terpineol (7.57%). C. albicans isolates obtained from animal mucous membranes exhibited MIC and MFC values of 2.72 μL mL-1 and 5 μL mL-1, respectively. MIC and MFC values for C. albicans standard strains were 2.97 μL mL-1 and 3.54 μL mL-1, respectively. The MIC and MFC for non-albicans species were 2.10 μL mL-1 and 2.97 μL mL-1, respectively. The antifungal activity of O. vulgare essential oil against Candida spp. observed in vitro suggests its administration may represent an alternative treatment for candidiasis. PMID:24031471

  19. Adoption of waste minimization technology to benefit electroplaters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ching, E.M.K.; Li, C.P.H.; Yu, C.M.K.

    Because of increasingly stringent environmental legislation and enhanced environmental awareness, electroplaters in Hong Kong are paying more heed to protecting the environment. To comply with the array of environmental controls, electroplaters can no longer rely solely on the end-of-pipe approach as a means for abating their pollution problems under the particular local industrial environment. The preferred approach is to adopt waste minimization measures that yield both economic and environmental benefits. This paper gives an overview of electroplating activities in Hong Kong, highlights their characteristics, and describes the pollution problems associated with conventional electroplating operations. The constraints of using pollution control measures to achieve regulatory compliance are also discussed. Examples and case studies are given on some low-cost waste minimization techniques readily available to electroplaters, including dragout minimization and water conservation techniques. Recommendations are given as to how electroplaters can adopt and exercise waste minimization techniques in their operations. 1 tab.

  20. Three New Abietane-Type Diterpenoids from Plectranthus africanus and Their Antibacterial Activities.

    PubMed

    Nzogong, Raïssa T; Nganou, Blaise K; Tedonkeu, Alex T; Awouafack, Maurice D; Tene, Mathieu; Ito, Takuya; Tane, Pierre; Morita, Hiroyuki

    2018-01-01

    Three new abietane-type diterpenoids, plectranthroyleanones A-C (1-3), together with five known compounds (4-8), were isolated from the methanol extract of the whole plant of Plectranthus africanus using column chromatography techniques. The structures of the new compounds were elucidated using a combination of 1D and 2D NMR and HRESIMS analyses. Compound 1 exhibited weak activities, with minimal inhibitory concentration values of 75 µg/mL against the gram-positive bacteria Bacillus subtilis and Staphylococcus aureus, and 150 µg/mL against two gram-negative bacteria, Pseudomonas aeruginosa and Klebsiella pneumoniae, while 2 and 3 had moderate antibacterial activity against K. pneumoniae with a minimal inhibitory concentration value of 37.5 µg/mL. Georg Thieme Verlag KG Stuttgart · New York.

  1. Designing small universal k-mer hitting sets for improved analysis of high-throughput sequencing

    PubMed Central

    Kingsford, Carl

    2017-01-01

    With the rapidly increasing volume of deep sequencing data, more efficient algorithms and data structures are needed. Minimizers are a central recent paradigm that has improved various sequence analysis tasks, including hashing for faster read overlap detection, sparse suffix arrays for creating smaller indexes, and Bloom filters for speeding up sequence search. Here, we propose an alternative paradigm that can lead to substantial further improvement in these and other tasks. For integers k and L > k, we say that a set of k-mers is a universal hitting set (UHS) if every possible L-long sequence must contain a k-mer from the set. We develop a heuristic called DOCKS to find a compact UHS, which works in two phases: The first phase is solved optimally, and for the second we propose several efficient heuristics, trading set size for speed and memory. The use of heuristics is motivated by showing the NP-hardness of a closely related problem. We show that DOCKS works well in practice and produces UHSs that are very close to a theoretical lower bound. We present results for various values of k and L and by applying them to real genomes show that UHSs indeed improve over minimizers. In particular, DOCKS uses less than 30% of the 10-mers needed to span the human genome compared to minimizers. The software and computed UHSs are freely available at github.com/Shamir-Lab/DOCKS/ and acgt.cs.tau.ac.il/docks/, respectively. PMID:28968408
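The minimizer scheme that universal hitting sets are compared against can be sketched in a few lines (a standard (k, w)-minimizer selection; the function name is ours): for every window of w consecutive k-mers, keep the lexicographically smallest one.

```python
def minimizers(seq, k, w):
    """All (k, w)-minimizers of seq: for each window of w consecutive
    k-mers, record (position, k-mer) of the lexicographically smallest;
    ties go to the leftmost occurrence."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    chosen = set()
    for start in range(len(kmers) - w + 1):
        window = range(start, start + w)
        best = min(window, key=lambda i: kmers[i])
        chosen.add((best, kmers[best]))
    return sorted(chosen)

seq = "ACGTACGTGGT"
mins = minimizers(seq, k=3, w=4)
```

By construction, every stretch of w + k − 1 bases contains at least one selected k-mer, which is the hitting property that a UHS provides globally with a fixed k-mer set.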

  2. Anti-Streptococcal activity of Brazilian Amazon Rain Forest plant extracts presents potential for preventive strategies against dental caries

    PubMed Central

    da SILVA, Juliana Paola Corrêa; de CASTILHO, Adriana Lígia; SARACENI, Cíntia Helena Couri; DÍAZ, Ingrit Elida Collantes; PACIÊNCIA, Mateus Luís Barradas; SUFFREDINI, Ivana Barbosa

    2014-01-01

    Caries is a global public health problem, whose control requires the introduction of low-cost treatments, such as strong prevention strategies, minimally invasive techniques and chemical prevention agents. Nature plays an important role as a source of new antibacterial substances that can be used in the prevention of caries, and Brazil is the richest country in terms of biodiversity. Objective In this study, the disk diffusion method (DDM) was used to screen over 2,000 Brazilian Amazon plant extracts against Streptococcus mutans. Material and Methods Seventeen active plant extracts were identified and fractionated. Extracts and their fractions, obtained by liquid-liquid partition, were tested in the DDM assay and in the microdilution broth assay (MBA) to determine their minimal inhibitory concentrations (MICs) and minimal bactericidal concentrations (MBCs). The extracts were also subjected to antioxidant analysis by thin layer chromatography. Results EB271, obtained from Casearia spruceana, showed significant activity against the bacterium in the DDM assay (20.67±0.52 mm), as did EB1129, obtained from Psychotria sp. (Rubiaceae) (15.04±2.29 mm). EB1493, obtained from Ipomoea alba, was the only extract to show strong activity against Streptococcus mutans (0.08 mg/mL

  3. Optimal trajectories for an aerospace plane. Part 2: Data, tables, and graphs

    NASA Technical Reports Server (NTRS)

    Miele, Angelo; Lee, W. Y.; Wu, G. D.

    1990-01-01

    Data, tables, and graphs relative to the optimal trajectories for an aerospace plane are presented. A single-stage-to-orbit (SSTO) configuration is considered, and the transition from low supersonic speeds to orbital speeds is studied for a single aerodynamic model (GHAME) and three engine models. Four optimization problems are solved using the sequential gradient-restoration algorithm for optimal control problems: (1) minimization of the weight of fuel consumed; (2) minimization of the peak dynamic pressure; (3) minimization of the peak heating rate; and (4) minimization of the peak tangential acceleration. The above optimization studies are carried out for different combinations of constraints, specifically: initial path inclination that is either free or given; dynamic pressure that is either free or bounded; and tangential acceleration that is either free or bounded.

  4. Interoperation transfer in Chinese-English bilinguals' arithmetic.

    PubMed

    Campbell, Jamie I D; Dowd, Roxanne R

    2012-10-01

    We examined interoperation transfer of practice in adult Chinese-English bilinguals' memory for simple multiplication (6 × 8 = 48) and addition (6 + 8 = 14) facts. The purpose was to determine whether they possessed distinct number-fact representations in both Chinese (L1) and English (L2). Participants repeatedly practiced multiplication problems (e.g., 4 × 5 = ?), answering a subset in L1 and another subset in L2. Then separate groups answered corresponding addition problems (4 + 5 = ?) and control addition problems in either L1 (N = 24) or L2 (N = 24). The results demonstrated language-specific negative transfer of multiplication practice to corresponding addition problems. Specifically, large simple addition problems (sum > 10) presented a significant response time cost (i.e., retrieval-induced forgetting) after their multiplication counterparts were practiced in the same language, relative to practice in the other language. The results indicate that our Chinese-English bilinguals had multiplication and addition facts represented in distinct language-specific memory stores.

  5. l0 regularization based on a prior image incorporated non-local means for limited-angle X-ray CT reconstruction.

    PubMed

    Zhang, Lingli; Zeng, Li; Guo, Yumeng

    2018-01-01

    Restricted by the scanning environment in some CT imaging modalities, the acquired projection data are usually incomplete, which may lead to a limited-angle reconstruction problem. Thus, image quality usually suffers from slope artifacts. The objective of this study is first to investigate the distorted regions of reconstructed images that exhibit slope artifacts, and then to present a new iterative reconstruction method to address the limited-angle X-ray CT reconstruction problem. The framework of the new method exploits the structural similarity between the prior image and the reconstructed image, aiming to compensate for the distorted edges. Specifically, the new method utilizes l0 regularization and wavelet tight framelets to suppress the slope artifacts and pursue sparsity. The new method includes the following four steps: (1) address the data fidelity using SART; (2) compensate for the slope artifacts due to the missing projection data using the prior image and modified non-local means (PNLM); (3) utilize l0 regularization to suppress the slope artifacts and pursue the sparsity of the wavelet coefficients of the transformed image by iterative hard thresholding (l0W); and (4) apply an inverse wavelet transform to reconstruct the image. In summary, this method is referred to as "l0W-PNLM". Numerical implementations showed that the presented l0W-PNLM was superior in suppressing the slope artifacts while preserving the edges of features, as compared to commercial and other popular investigative algorithms. When the image to be reconstructed is inconsistent with the prior image, the new method can avoid or minimize the distorted edges in the reconstructed images. Quantitative assessments also showed that the new method obtained the highest image quality compared to the existing algorithms.
This study demonstrated that the presented l0W-PNLM yielded higher image quality due to a number of unique characteristics: (1) it utilizes the structural similarity between the reconstructed image and the prior image to correct the edges distorted by slope artifacts; (2) it adopts wavelet tight frames to obtain the first and higher derivatives in several directions and levels; and (3) it takes advantage of l0 regularization to promote the sparsity of wavelet coefficients, which is effective for inhibiting the slope artifacts. Therefore, the new method can address the limited-angle CT reconstruction problem effectively and has practical significance.
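    Step (3) above hinges on hard thresholding, the proximal operation associated with the l0 penalty. The following minimal sketch (NumPy only; the threshold value is an illustrative assumption, and the wavelet transform itself is omitted) shows the operation applied to a coefficient vector:

    ```python
    import numpy as np

    def hard_threshold(coeffs, lam):
        """l0-style hard thresholding: keep coefficients with |c| > lam, zero the rest.

        This is the proximal operator of the l0 penalty, used inside iterative
        hard thresholding schemes on wavelet coefficients."""
        out = coeffs.copy()
        out[np.abs(out) <= lam] = 0.0
        return out

    c = np.array([0.05, -0.8, 0.3, -0.02, 1.2])
    print(hard_threshold(c, 0.1))  # small coefficients are zeroed, large ones kept exactly
    ```

    Unlike the soft thresholding of l1 methods, hard thresholding does not shrink the surviving coefficients, which is why it can better preserve edge magnitudes.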

  6. Minimal modification of tri-bimaximal neutrino mixing and leptonic CP violation

    NASA Astrophysics Data System (ADS)

    Kang, Sin Kyu

    2017-12-01

    We confront possible forms of the minimal modification of the tri-bimaximal (TBM) neutrino mixing matrix proposed by Kang and Kim (Phys. Rev. D 90, 077301 (2014)) with the latest global fit to neutrino data. One form among them is singled out by the current experimental results at the 1σ confidence level (C.L.). The minimal modification of the TBM mixing matrix makes it possible to predict the Dirac-type CP phase in the Pontecorvo-Maki-Nakagawa-Sakata neutrino mixing matrix in terms of two neutrino mixing angles. By carrying out a numerical analysis based on the latest experimental results for the neutrino mixing angles, we present new results on the prediction of the Dirac-type CP phase. We also compare our results on CP violation with those from the current global fit at the 1σ C.L.

  7. Bactericidal activity of antibiotics against Legionella micdadei (Pittsburgh pneumonia agent).

    PubMed Central

    Dowling, J N; Weyant, R S; Pasculle, A W

    1982-01-01

    The bactericidal activity of five antibiotics for Legionella micdadei was determined by the construction of time-kill curves. Erythromycin, rifampin, penicillin G, cephalothin, and gentamicin were bactericidal for L. micdadei at readily achievable concentrations. The minimal bactericidal concentrations, defined as those producing 99.9% killing within 24 h, were: erythromycin, 4.6; rifampin, 0.13; penicillin G, 0.25; cephalothin, 2.5; and gentamicin, 0.25 micrograms/ml. The ratios of the minimal bactericidal to minimal inhibitory concentrations for these antibiotics ranged from 1 to 8. Thus, the poor in vivo activity of beta-lactam and aminoglycoside antibiotics against L. micdadei cannot be ascribed to a lack of killing by these agents. PMID:6927637

  8. One-per-mil tumescent technique for upper extremity surgeries: broadening the indication.

    PubMed

    Prasetyono, Theddeus O H; Biben, Johannes A

    2014-01-01

    We studied the effect of a 1:1,000,000 (one-per-mil) epinephrine concentration in attaining a bloodless operative field in hand and upper extremity surgery, and explored its effectiveness and safety profile. This retrospective observational study enrolled 45 consecutive patients with 63 operative fields covering various hand and upper extremity problems. One-per-mil solution was injected into the operative field with a tumescent technique to create a bloodless operating field without a tourniquet. The solution was formulated by adding epinephrine at a 1:1,000,000 concentration and 100 mg of lidocaine to saline to form 50 mL of tumescent solution. The clarity of the operative field was graded as totally bloodless, minimal bleeding, acceptable bleeding, or bloody. The volume of tumescent solution injected, duration of surgery, and surgical outcome were also reviewed. The tumescent technique with one-per-mil solution achieved 29% totally bloodless, 48% minimal bleeding, 22% acceptable bleeding, and 2% bloody operative fields in cases that included burn contracture and congenital hand and upper extremity surgeries. One-per-mil tumescent solution created a clear operative field in hand and upper extremity surgery and proved safe and effective for a wide range of indications. Therapeutic IV. Copyright © 2014 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  9. Does core mobility of lumbar total disc arthroplasty influence sagittal and frontal intervertebral displacement? Radiologic comparison with fixed-core prosthesis

    PubMed Central

    Delécrin, Joël; Allain, Jérôme; Beaurain, Jacques; Steib, Jean-Paul; Chataigner, Hervé; Aubourg, Lucie; Huppert, Jean; Ameil, Marc; Nguyen, Jean-Michel

    2009-01-01

    Background: An artificial disc prosthesis is intended to restore segmental motion in the lumbar spine. However, it has been reported that disc prostheses can increase vertebral translation (VT). The concept of the mobile-core prosthesis is to mimic the kinematic effect of migration of the natural nucleus, so core mobility should minimize VT. This study explored the hypotheses that core translation influences VT and that a mobile-core prosthesis may facilitate physiological motion. Methods: Vertebral translation (measured with a new method presented here), core translation, range of motion (ROM), and distribution of flexion-extension were measured on flexion-extension, neutral standing, and lateral bending films in 89 patients (63 mobile-core [M]; 33 fixed-core [F]). Results: At L4-5 levels the VT with M was lower than with F and similar to the VT of untreated levels. At L5-S1 levels the VT with M was lower than with F but significantly different from that of untreated levels. At M levels a strong correlation was found between VT and core translation: the VT decreases as the core translation increases. At F levels the VT increases as the ROM increases. No significant difference was found between the ROM of untreated levels and levels implanted with either M or F. Regarding the mobility distribution with M and F, we observed a deficit in extension at L5-S1 levels and a distribution similar to untreated levels at L4-5 levels. Conclusion: The intervertebral mobility was different between M and F. The M at L4-5 levels succeeded in replicating mobility similar to untreated L4-5 levels. The M at L5-S1 succeeded in ROM but failed regarding VT and mobility distribution; nevertheless, M minimized VT at L5-S1 levels. The F increased VT at both L4-5 and L5-S1. Clinical Relevance: This study validates the concept that the core translation of an artificial lumbar disc prosthesis minimizes VT. PMID:25802632

  10. Effect of Causal Stories in Solving Mathematical Story Problems

    ERIC Educational Resources Information Center

    Smith, Glenn Gordon; Gerretson, Helen; Olkun, Sinan; Joutsenlahti, Jorma

    2010-01-01

    This study investigated whether infusing "causal" story elements into mathematical word problems improves student performance. In one experiment in the USA and a second in the USA, Finland, and Turkey, undergraduate elementary education majors worked word problems in three formats: 1) standard (minimal verbiage), 2) potential causation…

  11. Plasma chitinase 3-like 1 is persistently elevated during first month after minimally invasive colorectal cancer resection

    PubMed Central

    Shantha Kumara, H M C; Gaita, David; Miyagaki, Hiromichi; Yan, Xiaohong; Hearth, Sonali AC; Njoh, Linda; Cekic, Vesna; Whelan, Richard L

    2016-01-01

    AIM: To assess blood chitinase 3-like 1 (CHi3L1) levels for 2 mo after minimally invasive colorectal resection (MICR) for colorectal cancer (CRC). METHODS: CRC patients in an Institutional Review Board-approved data/plasma bank who underwent elective MICR and for whom preoperative (PreOp), early postoperative (PostOp), and 1 or more late PostOp samples [postoperative day (POD) 7-27] were available were included. Plasma CHi3L1 levels (ng/mL) were determined in duplicate by enzyme-linked immunosorbent assay. RESULTS: PreOp and PostOp plasma samples were available for 80 MICR cancer patients. The median PreOp CHi3L1 level was 56.8 CI: 41.9-78.6 ng/mL (n = 80). Significantly elevated (P < 0.001) median plasma levels (ng/mL) over PreOp levels were detected on POD 1 (667.7 CI: 495.7, 771.7; n = 79), POD 3 (132.6 CI: 95.5, 173.7; n = 76), POD 7-13 (96.4 CI: 67.7, 136.9; n = 62), POD 14-20 (101.4 CI: 80.7, 287.4; n = 22), and POD 21-27 (98.1 CI: 66.8, 137.4; n = 20, P = 0.001). No significant difference in plasma levels was noted on POD 27-41. CONCLUSION: Plasma CHi3L1 levels were significantly elevated for one month after MICR. Persistently elevated plasma CHi3L1 may support the growth of residual tumor and metastases. PMID:27574553

  12. The HST/STIS Next Generation Spectral Library

    NASA Technical Reports Server (NTRS)

    Gregg, M. D.; Silva, D.; Rayner, J.; Worthey, G.; Valdes, F.; Pickles, A.; Rose, J.; Carney, B.; Vacca, W.

    2006-01-01

    During Cycles 10, 12, and 13, we obtained STIS G230LB, G430L, and G750L spectra of 378 bright stars covering a wide range in abundance, effective temperature, and luminosity. This HST/STIS Next Generation Spectral Library was scheduled to reach its goal of 600 targets by the end of Cycle 13 when STIS came to an untimely end. Even at 2/3 complete, the library significantly improves the sampling of stellar atmosphere parameter space compared to most other spectral libraries by including the near-UV and significant numbers of metal poor and super-solar abundance stars. Numerous calibration challenges have been encountered, some expected, some not; these arise from the use of the E1 aperture location, non-standard wavelength calibration, and, most significantly, the serious contamination of the near-UV spectra by red light. Maximizing the utility of the library depends directly on overcoming or at least minimizing these problems, especially correcting the UV spectra.

  13. Identification of unknown spatial load distributions in a vibrating Euler-Bernoulli beam from limited measured data

    NASA Astrophysics Data System (ADS)

    Hasanov, Alemdar; Kawano, Alexandre

    2016-05-01

    Two types of inverse source problems of identifying asynchronously distributed spatial loads governed by the Euler-Bernoulli beam equation $\rho(x)w_{tt}+\mu(x)w_t+(EI(x)w_{xx})_{xx}-T_r u_{xx}=\sum_{m=1}^{M}g_m(t)f_m(x)$, $(x,t)\in\Omega_T:=(0,l)\times(0,T)$, with hinged-clamped ends ($w(0,t)=w_{xx}(0,t)=0$, $w(l,t)=w_x(l,t)=0$, $t\in(0,T)$), are studied. Here $g_m(t)$ are linearly independent functions describing an asynchronous temporal loading, and $f_m(x)$ are the spatial load distributions. In the first identification problem the values $\nu_k(t)$, $k=1,\dots,K$, of the deflection $w(x,t)$ are assumed to be known, as measured output data, in a neighbourhood of the finite set of points $P:=\{x_k\in(0,l),\,k=1,\dots,K\}\subset(0,l)$, corresponding to internal points of a continuous beam, for all $t\in(0,T)$. In the second identification problem the values $\theta_k(t)$, $k=1,\dots,K$, of the slope $w_x(x,t)$ are assumed to be known, as measured output data, in a neighbourhood of the same set of points $P$ for all $t\in(0,T)$. These inverse source problems are subsequently defined as problems ISP1 and ISP2. The general purpose of this study is to develop mathematical concepts and tools capable of providing effective numerical algorithms for the solution of the considered class of inverse problems. Note that both measured output data $\nu_k(t)$ and $\theta_k(t)$ contain random noise. In the first part of the study we prove that each measured output $\nu_k(t)$ and $\theta_k(t)$, $k=1,\dots,K$, can uniquely determine the unknown functions $f_m\in H^{-1}((0,l))$, $m=1,\dots,M$.
In the second part of the study we introduce the input-output operators $\mathcal{K}_d: L^2(0,T)\mapsto L^2(0,T)$, $(\mathcal{K}_d f)(t):=w(x,t;f)$, $x\in P$, with $f(x):=(f_1(x),\dots,f_M(x))$, and $\mathcal{K}_s: L^2(0,T)\mapsto L^2(0,T)$, $(\mathcal{K}_s f)(t):=w_x(x,t;f)$, $x\in P$, corresponding to the problems ISP1 and ISP2, and then reformulate these problems as the operator equations $\mathcal{K}_d f=\nu$ and $\mathcal{K}_s f=\theta$, where $\nu(t):=(\nu_1(t),\dots,\nu_K(t))$ and $\theta(t):=(\theta_1(t),\dots,\theta_K(t))$. Since both measured outputs contain random noise, we use the most prominent regularisation method, Tikhonov regularisation, introducing the regularised cost functionals $J_1^{\alpha}(f):=\tfrac{1}{2}\|\mathcal{K}_d f-\nu\|^2_{L^2(0,T)}+\tfrac{\alpha}{2}\|f\|^2_{L^2(0,T)}$ and $J_2^{\alpha}(f):=\tfrac{1}{2}\|\mathcal{K}_s f-\theta\|^2_{L^2(0,T)}+\tfrac{\alpha}{2}\|f\|^2_{L^2(0,T)}$. Using a priori estimates for the weak solution of the direct problem and the Tikhonov regularisation method combined with the adjoint problem approach, we prove that the Fréchet gradients $J_1'(f)$ and $J_2'(f)$ of both cost functionals can be derived explicitly via the corresponding weak solutions of adjoint problems and the known temporal loads $g_m(t)$. Moreover, we show that these gradients are Lipschitz continuous, which allows the use of convergent gradient-type iteration algorithms. Two applications of the proposed theory are presented. It is shown that solvability results for inverse source problems related to the synchronous loading case, with a single interior measurement, are special cases of the results obtained for asynchronously distributed spatial load cases.

  14. Minimal investment risk of a portfolio optimization problem with budget and investment concentration constraints

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2017-02-01

    In the present paper, the minimal investment risk for a portfolio optimization problem with imposed budget and investment concentration constraints is considered using replica analysis. Since the minimal investment risk is influenced by the investment concentration constraint (as well as the budget constraint), it is intuitive that the minimal investment risk for the problem with an investment concentration constraint can be larger than that without it (that is, with only the budget constraint). Moreover, a numerical experiment shows the effectiveness of the proposed analysis. In contrast, the standard operations research approach fails to accurately identify the minimal investment risk of the portfolio optimization problem.

  15. Semi-analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    NASA Astrophysics Data System (ADS)

    Lee, Gibbeum; Cho, Yeunwoo

    2018-01-01

    A new semi-analytical approach is presented for solving the matrix eigenvalue problem, or the integral equation, in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of a direct numerical approach to this matrix eigenvalue problem, which may suffer from computational inaccuracy for big data, a pair of integral and differential equations is considered, related to the so-called prolate spheroidal wave functions (PSWF). First, the PSWF is expressed as a summation of a small number of analytical Legendre functions. Substituting this into the PSWF differential equation yields a much smaller matrix eigenvalue problem than the direct numerical K-L matrix eigenvalue problem. By solving this with minimal numerical effort, the PSWF and the associated eigenvalue of the PSWF differential equation are obtained. Then, the eigenvalue of the PSWF integral equation is analytically expressed through the functional values of the PSWF and the eigenvalues obtained from the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues of the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data such as ordinary irregular waves. It is found that, for the same accuracy, the required memory size of the present method is smaller than that of the direct numerical K-L representation, and the computation time of the present method is shorter than that of the semi-analytical method based on sinusoidal functions.
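    For contrast, the direct numerical K-L representation that the semi-analytical method is compared against amounts to an eigendecomposition of the sample covariance matrix. A minimal NumPy sketch, with synthetic Gaussian data standing in for wave records (the data size and truncation rank are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 8))  # 500 realizations of an 8-point random signal

    # direct numerical K-L: eigen-decompose the sample covariance matrix
    C = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(C)          # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # K-L coefficients; truncating to the leading modes gives the compressed representation
    Z = X @ eigvecs
    X_rec = Z[:, :4] @ eigvecs[:, :4].T           # rank-4 reconstruction

    print(np.allclose(Z @ eigvecs.T, X))          # True: the full expansion is exact
    ```

    The cost of forming and decomposing this covariance matrix is what the PSWF-based route avoids for long records.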

  16. This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms--theory and practice.

    PubMed

    Harmany, Zachary T; Marcia, Roummel F; Willett, Rebecca M

    2012-03-01

    Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
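    The SPIRAL formulation (a penalized Poisson negative log-likelihood with a nonnegativity constraint) can be illustrated with a much simpler first-order method than the paper's separable-quadratic-approximation scheme. The sketch below is a plain projected-gradient variant for the l1-penalized case; the operator A, step size, and iteration count are illustrative assumptions, not the authors' algorithm:

    ```python
    import numpy as np

    def poisson_l1_pg(A, y, lam=0.1, step=1e-3, iters=500, eps=1e-6):
        """Projected-gradient sketch for
             min_f  1^T A f - y^T log(A f) + lam * ||f||_1,   subject to f >= 0.
        With f >= 0 the l1 term is just lam * sum(f), so each step is a gradient
        step followed by projection onto the nonnegative orthant."""
        f = np.ones(A.shape[1])
        for _ in range(iters):
            Af = A @ f + eps                      # guard against division by zero
            grad = A.T @ (1.0 - y / Af)           # gradient of the Poisson NLL
            f = np.maximum(f - step * (grad + lam), 0.0)
        return f

    rng = np.random.default_rng(1)
    A = rng.uniform(0.0, 1.0, (40, 10))
    f_true = np.zeros(10); f_true[[2, 7]] = 5.0   # sparse nonnegative intensity
    y = rng.poisson(A @ f_true)                   # Poisson counts
    f_hat = poisson_l1_pg(A, y)
    ```

    The key point the abstract makes survives even in this toy version: the objective is the Poisson log-likelihood, not a least-squares term, and the nonnegativity constraint is enforced at every iteration.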

  17. Algorithms for bioluminescence tomography incorporating anatomical information and reconstruction of tissue optical properties

    PubMed Central

    Naser, Mohamed A.; Patterson, Michael S.

    2010-01-01

    Reconstruction algorithms are presented for a two-step solution of the bioluminescence tomography (BLT) problem. In the first step, a priori anatomical information provided by x-ray computed tomography or by other methods is used to solve the continuous wave (cw) diffuse optical tomography (DOT) problem. A Taylor series expansion approximates the light fluence rate dependence on the optical properties of each region where first and second order direct derivatives of the light fluence rate with respect to scattering and absorption coefficients are obtained and used for the reconstruction. In the second step, the reconstructed optical properties at different wavelengths are used to calculate the Green’s function of the system. Then an iterative minimization solution based on the L1 norm shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. This provides an efficient BLT reconstruction algorithm with the ability to determine relative source magnitudes and positions in the presence of noise. PMID:21258486

  18. The L1 finite element method for pure convection problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1991-01-01

    The least-squares (L2) finite element method is introduced for 2-D steady-state pure convection problems with smooth solutions. It is proven that the L2 method has the same stability estimate as the original equation, i.e., the L2 method has better control of the streamline derivative. Numerical convergence rates are given to show that the L2 method is almost optimal. This L2 method was then used as a framework to develop an iteratively reweighted L2 finite element method to obtain a least-absolute-residual (L1) solution for problems with discontinuous solutions. This L1 finite element method produces a nonoscillatory, nondiffusive, and highly accurate numerical solution that has a sharp discontinuity contained in one element on both coarse and fine meshes. A robust reweighting strategy was also devised to obtain the L1 solution in a few iterations. A number of examples solved using triangle and bilinear elements are presented.
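    The reweighting idea described above, obtaining an L1 (least-absolute-residual) solution through a sequence of weighted L2 solves, can be sketched for an ordinary linear system (this is generic IRLS, not the paper's finite element formulation; the damping constant `eps` is an illustrative choice):

    ```python
    import numpy as np

    def irls_l1(A, b, iters=50, eps=1e-8):
        """Least-absolute-residual (L1) fit via iteratively reweighted least squares:
        each step solves a weighted L2 problem with weights 1/|residual|."""
        x = np.linalg.lstsq(A, b, rcond=None)[0]   # plain L2 start
        for _ in range(iters):
            r = np.abs(A @ x - b)
            w = 1.0 / np.maximum(r, eps)           # reweighting; eps guards zero residuals
            sw = np.sqrt(w)
            x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
        return x

    # L1 fitting resists outliers: fitting a constant gives median-like behavior
    A = np.ones((7, 1))
    b = np.array([1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 100.0])  # one gross outlier
    x = irls_l1(A, b)
    print(x)  # near 1, unlike the outlier-dragged L2 mean (~15)
    ```

    The same mechanism, wrapped around an L2 finite element solve, is what lets the paper capture sharp discontinuities without oscillation.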

  19. Application of L1-norm regularization to epicardial potential reconstruction based on gradient projection.

    PubMed

    Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann

    2011-10-07

    The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated as it is demonstrated that EPs reflect underlying myocardial activity. It is a well-known ill-posed problem as small noises in input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem. But the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method in order to reduce the computational complexity and make rapid convergence possible. Variable splitting is employed to make the L1-norm penalty function differentiable based on the observation that both positive and negative potentials exist on the epicardial surface. Then, the inverse problem of ECG is further formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic data and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtain more accurate results than previous L2- and L1-norm regularization methods, especially when the noises are large.
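    The variable-splitting device described above can be sketched in a few lines: writing x = u - v with u, v >= 0 makes ||x||_1 = sum(u + v) differentiable in (u, v), leaving a bound-constrained quadratic that projected gradient handles. This is a generic illustration of the idea (matrix sizes, step size, and iteration count are assumptions), not the authors' ECG-specific solver:

    ```python
    import numpy as np

    def l1_ls_gradient_projection(A, b, lam=0.1, iters=2000):
        """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by splitting x = u - v with
        u, v >= 0, then running projected gradient on the resulting
        bound-constrained quadratic."""
        m, n = A.shape
        step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)  # 1/L for the split quadratic
        u = np.zeros(n); v = np.zeros(n)
        for _ in range(iters):
            g = A.T @ (A @ (u - v) - b)              # gradient of the quadratic in x
            u = np.maximum(u - step * (g + lam), 0)  # project onto u >= 0
            v = np.maximum(v - step * (-g + lam), 0) # project onto v >= 0
        return u - v

    rng = np.random.default_rng(2)
    A = rng.standard_normal((30, 10))
    x_true = np.zeros(10); x_true[[1, 6]] = [2.0, -3.0]  # sparse signed signal
    b = A @ x_true
    x_hat = l1_ls_gradient_projection(A, b, lam=0.05)
    ```

    Allowing both positive and negative entries through the two nonnegative halves mirrors the paper's observation that both positive and negative potentials exist on the epicardial surface.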

  20. Iodine addition using triiodide solutions

    NASA Technical Reports Server (NTRS)

    Rutz, Jeffrey A.; Muckle, Susan V.; Sauer, Richard L.

    1992-01-01

    The study develops a triiodide solution for use in preparing ground service equipment (GSE) water for Shuttle support; an iodine dissolution method that is reliable and requires minimal time and effort to prepare; and an iodine dissolution agent with a minimal concentration of sodium salt. Sodium iodide and hydriodic acid were both found to dissolve iodine to attain the desired GSE iodine concentrations of 7.5 +/- 2.5 mg/L and 25 +/- 5 mg/L. The 1.75:1 and 2:1 sodium iodide solutions produced higher iodine recoveries than the 1.2:1 hydriodic acid solution. A two-hour preparation time is required for the three sodium iodide solutions, whereas the 1.2:1 hydriodic acid solution can be prepared in less than 5 min. Two sodium iodide stock solutions (2.5:1 and 2:1) were found to dissolve iodine without undergoing precipitation.

  1. (L)-Valine production with minimization of by-products' synthesis in Corynebacterium glutamicum and Brevibacterium flavum.

    PubMed

    Hou, Xiaohu; Chen, Xinde; Zhang, Yue; Qian, He; Zhang, Weiguo

    2012-12-01

    Corynebacterium glutamicum ATCC13032 and Brevibacterium flavum JV16 were engineered for L-valine production by over-expressing ilvEBN(r)C genes at 31 °C in 72 h fermentation. Different strategies were carried out to reduce by-product accumulation in L-valine fermentation and to increase the availability of precursor for L-valine biosynthesis. The native promoter of ilvA of C. glutamicum was replaced with a weak promoter MPilvA (P-ilvAM1CG) to reduce the biosynthetic rate of L-isoleucine. The effect of different relative dissolved oxygen levels on L-valine production and by-product formation was recorded, indicating that 15% saturation may be the most appropriate relative dissolved oxygen for L-valine fermentation, with almost no L-lactic acid and L-glutamate formed. To minimize L-alanine accumulation, alaT and/or avtA was inactivated in C. glutamicum and B. flavum, respectively. Compared to the high concentration of L-alanine accumulated by alaT-inactivated strains harboring ilvEBN(r)C genes, the L-alanine concentration was reduced to 0.18 g/L by C. glutamicum ATCC13032MPilvAΔavtA pDXW-8-ilvEBN(r)C, and 0.22 g/L by B. flavum JV16avtA::Cm pDXW-8-ilvEBN(r)C. Meanwhile, L-valine production and conversion efficiency were enhanced to 31.15 g/L and 0.173 g/g by C. glutamicum ATCC13032MPilvAΔavtA pDXW-8-ilvEBN(r)C, and to 38.82 g/L and 0.252 g/g by B. flavum JV16avtA::Cm pDXW-8-ilvEBN(r)C. This study provides combined strategies to improve L-valine yield by minimizing by-product formation.

  2. Passive shimming of a superconducting magnet using the L1-norm regularized least square algorithm.

    PubMed

    Kong, Xia; Zhu, Minhua; Xia, Ling; Wang, Qiuliang; Li, Yi; Zhu, Xuchen; Liu, Feng; Crozier, Stuart

    2016-02-01

    The uniformity of the static magnetic field B0 is of prime importance for an MRI system. The passive shimming technique is usually applied to improve the uniformity of the static field by optimizing the layout of a series of steel shims. The steel pieces are fixed in the drawers in the inner bore of the superconducting magnet, and produce a magnetizing field in the imaging region to compensate for the inhomogeneity of the B0 field. In practice, the total mass of steel used for shimming should be minimized, in addition to meeting the field uniformity requirement, because the presence of steel shims may introduce a thermal stability problem. The passive shimming procedure is typically realized using the linear programming (LP) method. The LP approach, however, is generally slow and also has difficulty balancing the field quality and the total amount of steel for shimming. In this paper, we have developed a new algorithm that is better able to balance the dual constraints of field uniformity and the total mass of the shims. The least square method is used to minimize the magnetic field inhomogeneity over the imaging surface with the total mass of steel being controlled by an L1-norm based constraint. The proposed algorithm has been tested with practical field data, and the results show that, with similar computational cost and mass of shim material, the new algorithm achieves superior field uniformity (43% better for the test case) compared with the conventional linear programming approach. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Radiation processing of minimally processed vegetables and aromatic plants

    NASA Astrophysics Data System (ADS)

    Trigo, M. J.; Sousa, M. B.; Sapata, M. M.; Ferreira, A.; Curado, T.; Andrada, L.; Botelho, M. L.; Veloso, M. G.

    2009-07-01

    Vegetables are an essential part of people's diets all around the world. Due to cultivation techniques and post-harvest handling, these products may carry a high microbial load that can cause foodborne outbreaks. The irradiation of minimally processed vegetables is an efficient way to reduce the level of microorganisms and to inhibit parasites, supporting safe global trade. The effects of irradiation were evaluated in minimally processed vegetables: coriander (Coriandrum sativum L.), mint (Mentha spicata L.), parsley (Petroselinum crispum Mill. (A.W. Hill)), lettuce (Lactuca sativa L.) and watercress (Nasturtium officinale L.). The inactivation level of the natural microbiota and the D10 values of Escherichia coli O157:H7 and Listeria innocua in these products were determined. The physical-chemical and sensorial characteristics before and after irradiation at applied doses from 0.5 up to 2.0 kGy were also evaluated. No differences were observed in the overall sensorial and physical properties after irradiation up to 1 kGy, while a decrease of the natural microbiota was noticed (⩾2 log). Based on the determined D10 values, the radiation dose necessary to kill 10^5 E. coli and L. innocua was between 0.70 and 1.55 kGy. The shelf life of coriander, mint and lettuce irradiated at 0.5 kGy increased by 2, 3 and 4 days, respectively, compared with non-irradiated samples.

  4. Comparison of antimicrobial activities of naphthoquinones from Impatiens balsamina.

    PubMed

    Sakunphueak, Athip; Panichayupakaranant, Pharkphoom

    2012-01-01

    Lawsone (1), lawsone methyl ether (2), and methylene-3,3'-bilawsone (3) are the main naphthoquinones in the leaf extracts of Impatiens balsamina L. (Balsaminaceae). Antimicrobial activities of these three naphthoquinones against dermatophyte fungi, yeast, aerobic bacteria and facultative anaerobic and anaerobic bacteria were evaluated by determination of minimal inhibitory concentrations (MICs) and minimal bactericidal or fungicidal concentrations (MBCs or MFCs) using a modified agar dilution method. Compound 2 showed the highest antimicrobial activity. It showed antifungal activity against dermatophyte fungi and Candida albicans with MICs and MFCs in the ranges of 3.9-23.4 and 7.8-23.4 µg mL(-1), respectively, and also had some antibacterial activity against aerobic, facultative anaerobic and anaerobic bacteria with MICs in the ranges of 23.4-93.8, 31.2-62.5 and 125 µg mL(-1), respectively. Compound 1 showed only moderate antimicrobial activity against dermatophytes (MICs and MFCs in the ranges of 62.5-250 and 125-250 µg mL(-1), respectively), but had low potency against aerobic bacteria, and was not active against C. albicans and facultative anaerobic bacteria. In contrast, 3 showed significant antimicrobial activity only against Staphylococcus epidermidis and Bacillus subtilis (MIC and MBC of 46.9 and 93.8 µg mL(-1), respectively).

  5. Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications.

    PubMed

    Shang, Fanhua; Cheng, James; Liu, Yuanyuan; Luo, Zhi-Quan; Lin, Zhouchen

    2017-09-04

    The heavy-tailed distributions of corrupted outliers and of the singular values of all channels in low-level vision have proven to be effective priors for many applications, such as background modeling, photometric stereo and image alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Building on the analytic solutions to Lp-norm minimization for two specific values of p, i.e., p=1/2 and p=2/3, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-1/2 and 2/3 quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both our methods yield more accurate solutions than original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g. moving object detection, image alignment and inpainting, and show that our methods usually outperform the state-of-the-art methods.
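
    The two proximal operators underlying this line of work, singular-value thresholding for the low-rank part and soft-thresholding for the sparse outliers, can be sketched in a few lines. This shows only the convex nuclear-norm + L1 baseline that the paper's Schatten-1/2 and 2/3 quasi-norm penalties improve upon; the naive alternation and all parameters are illustrative, not the authors' algorithm.

```python
import numpy as np

def svd_shrink(X, tau):
    """Proximal operator of tau*||.||_* : shrink singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, lam):
    """Proximal operator of lam*||.||_1 : elementwise shrinkage."""
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def rpca_alternate(M, tau=1.0, lam=0.1, iters=50):
    """Naive alternating shrinkage for M ~ L (low-rank) + S (sparse).
    A heuristic sketch of the baseline, not the paper's method."""
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svd_shrink(M - S, tau)   # low-rank update
        S = soft(M - L, lam)         # sparse-outlier update
    return L, S
```

    Both operators produce exact zeros (of singular values and of entries, respectively), which is what makes the low-rank and sparse structure emerge.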

  6. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

    The estimation of area source pollutant strength is a relevant issue for the atmospheric environment, and it characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved by using a supervised artificial neural network: a multi-layer perceptron. The connection weights of the neural network are computed with the delta rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization approach, whose objective function is given by the square difference between the measured pollutant concentration and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
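
    The delta-rule update mentioned in the abstract can be sketched for a single linear layer that learns to map receptor concentrations back to source strengths. Everything here (the forward transition matrix `G`, the sizes, the learning rate) is a synthetic illustration, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1.0, size=(6, 4))    # source-receptor transition matrix
S = rng.uniform(0.0, 2.0, size=(200, 4))  # training source strengths
C = S @ G.T                               # forward model: receptor readings

W = np.zeros((4, 6))                      # network weights: readings -> sources
eta = 0.01                                # learning rate
for epoch in range(300):
    for c, s in zip(C, S):
        err = s - W @ c                   # output error
        W += eta * np.outer(err, c)       # delta rule: dw_ij = eta * err_i * c_j

# Inverting a new (synthetic) measurement:
s_new = np.array([1.0, 0.5, 0.0, 1.5])
s_hat = W @ (G @ s_new)
```

    Because the toy problem is noise-free and linear, the trained weights act as a left inverse of the forward model on the span of the training data.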

  7. The Effects of Free-Stream Turbulence on the Turbulence Structure and Heat Transfer in Zero Pressure Gradient Boundary Layers.

    DTIC Science & Technology

    1982-11-01

    direction of the gradients) of the wires should be minimized. (2) To reduce end effects (nonuniform temperature along the active length) and to...

  8. Concentration of the L{sub 1}-norm of trigonometric polynomials and entire functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malykhin, Yu V; Ryutin, K S

    2014-11-30

    For any sufficiently large n, the minimal measure of a subset of [−π,π] on which some nonzero trigonometric polynomial of order ≤n gains half of the L{sub 1}-norm is shown to be π/(n+1). A similar result for entire functions of exponential type is established. Bibliography: 13 titles.

  9. Standardization and performance evaluation of "modified" and "ultrasensitive" versions of the Abbott RealTime HIV-1 assay, adapted to quantify minimal residual viremia.

    PubMed

    Amendola, Alessandra; Bloisi, Maria; Marsella, Patrizia; Sabatini, Rosella; Bibbò, Angela; Angeletti, Claudio; Capobianchi, Maria Rosaria

    2011-09-01

    Numerous studies investigating the clinical significance of HIV-1 minimal residual viremia (MRV) suggest the potential utility of assays more sensitive than those routinely used to monitor viral suppression. However, currently available methods, based on different technologies, show great variation in detection limit and input plasma volume, and generally suffer from lack of standardization. In order to establish new tools suitable for routine quantification of minimal residual viremia in patients under virological suppression, some modifications were introduced into the standard procedure of the Abbott RealTime HIV-1 assay, leading to a "modified" and an "ultrasensitive" protocol. The following modifications were introduced: calibration curve extended towards low HIV-1 RNA concentrations; 4-fold increased sample volume by concentrating starting material; reduced volume of internal control; adoption of "open-mode" software for quantification. Analytical performances were evaluated using the HIV-1 RNA Working Reagent 1 for NAT assays (NIBSC). Both tests were applied to clinical samples from virologically suppressed patients. The "modified" and the "ultrasensitive" configurations of the assay reached limits of detection of 18.8 cp/mL (95% CI: 11.1-51.0 cp/mL) and 4.8 cp/mL (95% CI: 2.6-9.1 cp/mL), respectively, with high precision and accuracy. In clinical samples from virologically suppressed patients, the "modified" and "ultrasensitive" protocols allowed detection and quantification of HIV RNA in 12.7% and 46.6%, respectively, of samples reported "not detectable", and in 70.0% and 69.5%, respectively, of samples reported "detected <40 cp/mL" by the standard assay. The "modified" and "ultrasensitive" assays are precise and accurate, and easily adoptable in routine diagnostic laboratories for measuring MRV. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Higgs mass corrections in the SUSY B - L model with inverse seesaw

    NASA Astrophysics Data System (ADS)

    Elsayed, A.; Khalil, S.; Moretti, S.

    2012-08-01

    In the context of the Supersymmetric (SUSY) B - L (Baryon minus Lepton number) model with inverse seesaw mechanism, we calculate the one-loop radiative corrections due to right-handed (s)neutrinos to the mass of the lightest Higgs boson when the latter is Standard Model (SM)-like. We show that such effects can be as large as O(100) GeV, thereby giving an absolute upper limit on such a mass around 180 GeV. The importance of this result from a phenomenological point of view is twofold. On the one hand, this enhancement greatly reconciles theory and experiment, by alleviating the so-called 'little hierarchy problem' of the minimal SUSY realization, whereby the current experimental limit on the SM-like Higgs mass is very near its absolute upper limit predicted theoretically, of 130 GeV. On the other hand, a SM-like Higgs boson with mass below 180 GeV is still well within the reach of the Large Hadron Collider (LHC), so that the SUSY realization discussed here is just as testable as the minimal version.

  11. Complex Problem Solving in L1 Education: Senior High School Students' Knowledge of the Language Problem-Solving Process

    ERIC Educational Resources Information Center

    van Velzen, Joke H.

    2017-01-01

    The solving of reasoning problems in first language (L1) education can produce an understanding of language, and student autonomy in language problem solving, both of which are contemporary goals in senior high school education. The purpose of this study was to obtain a better understanding of senior high school students' knowledge of the language…

  12. Transfers between libration-point orbits in the elliptic restricted problem

    NASA Astrophysics Data System (ADS)

    Hiday-Johnston, L. A.; Howell, K. C.

    1994-04-01

    A strategy is formulated to design optimal time-fixed impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L1 libration point of the Sun-Earth/Moon barycenter system. The adjoint equation in terms of rotating coordinates in the elliptic restricted three-body problem is shown to be of a distinctly different form from that obtained in the analysis of trajectories in the two-body problem. Also, the necessary conditions for a time-fixed two-impulse transfer to be optimal are stated in terms of the primer vector. Primer vector theory is then extended to nonoptimal impulsive trajectories in order to establish a criterion whereby the addition of an interior impulse reduces total fuel expenditure. The necessary conditions for the local optimality of a transfer containing additional impulses are satisfied by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses. Determination of location, orientation, and magnitude of each additional impulse is accomplished by the unconstrained minimization of the cost function using a multivariable search method. Results indicate that substantial savings in fuel can be achieved by the addition of interior impulsive maneuvers on transfers between libration-point orbits.

  13. Minimally processed vegetable salads: microbial quality evaluation.

    PubMed

    Fröder, Hans; Martins, Cecília Geraldes; De Souza, Katia Leani Oliveira; Landgraf, Mariza; Franco, Bernadette D G M; Destro, Maria Teresa

    2007-05-01

    The increasing demand for fresh fruits and vegetables and for convenience foods is causing an expansion of the market share for minimally processed vegetables. Among the more common pathogenic microorganisms that can be transmitted to humans by these products are Listeria monocytogenes, Escherichia coli O157:H7, and Salmonella. The aim of this study was to evaluate the microbial quality of a selection of minimally processed vegetables. A total of 181 samples of minimally processed leafy salads were collected from retailers in the city of Sao Paulo, Brazil. Counts of total coliforms, fecal coliforms, Enterobacteriaceae, psychrotrophic microorganisms, and Salmonella were conducted for 133 samples. L. monocytogenes was assessed in 181 samples using the BAX System and by plating the enrichment broth onto Palcam and Oxford agars. Suspected Listeria colonies were submitted to classical biochemical tests. Populations of psychrotrophic microorganisms >10(6) CFU/g were found in 51% of the 133 samples, and Enterobacteriaceae populations between 10(5) and 10(6) CFU/g were found in 42% of the samples. Fecal coliform concentrations higher than 10(2) CFU/g (Brazilian standard) were found in 97 (73%) of the samples, and Salmonella was detected in 4 (3%) of the samples. Two of the Salmonella-positive samples had <10(2) CFU/g concentrations of fecal coliforms. L. monocytogenes was detected in only 1 (0.6%) of the 181 samples examined. This positive sample was simultaneously detected by both methods. The other Listeria species identified by plating were L. welshimeri (one sample of curly lettuce) and L. innocua (2 samples of watercress). The results indicate that minimally processed vegetables had poor microbiological quality, and these products could be a vehicle for pathogens such as Salmonella and L. monocytogenes.

  14. Reference intervals for plasma free metanephrines with an age adjustment for normetanephrine for optimized laboratory testing of phaeochromocytoma.

    PubMed

    Eisenhofer, Graeme; Lattke, Peter; Herberg, Maria; Siegert, Gabriele; Qin, Nan; Därr, Roland; Hoyer, Jana; Villringer, Arno; Prejbisz, Aleksander; Januszewicz, Andrzej; Remaley, Alan; Martucci, Victoria; Pacak, Karel; Ross, H Alec; Sweep, Fred C G J; Lenders, Jacques W M

    2013-01-01

    Measurements of plasma normetanephrine and metanephrine provide a useful diagnostic test for phaeochromocytoma, but this depends on appropriate reference intervals. Upper cut-offs set too high compromise diagnostic sensitivity, whereas set too low, false-positives are a problem. This study aimed to establish optimal reference intervals for plasma normetanephrine and metanephrine. Blood samples were collected in the supine position from 1226 subjects, aged 5-84 y, including 116 children, 575 normotensive and hypertensive adults and 535 patients in whom phaeochromocytoma was ruled out. Reference intervals were examined according to age and gender. Various models were examined to optimize upper cut-offs according to estimates of diagnostic sensitivity and specificity in a separate validation group of 3888 patients tested for phaeochromocytoma, including 558 with confirmed disease. Plasma metanephrine, but not normetanephrine, was higher (P < 0.001) in men than in women, but reference intervals did not differ. Age showed a positive relationship (P < 0.0001) with plasma normetanephrine and a weaker relationship (P = 0.021) with metanephrine. Upper cut-offs of reference intervals for normetanephrine increased from 0.47 nmol/L in children to 1.05 nmol/L in subjects over 60 y. A curvilinear model for age-adjusted compared with fixed upper cut-offs for normetanephrine, together with a higher cut-off for metanephrine (0.45 versus 0.32 nmol/L), resulted in a substantial gain in diagnostic specificity from 88.3% to 96.0% with minimal loss in diagnostic sensitivity from 93.9% to 93.6%. These data establish age-adjusted cut-offs of reference intervals for plasma normetanephrine and optimized cut-offs for metanephrine useful for minimizing false-positive results.

  15. Reference intervals for plasma free metanephrines with an age adjustment for normetanephrine for optimized laboratory testing of phaeochromocytoma

    PubMed Central

    Eisenhofer, Graeme; Lattke, Peter; Herberg, Maria; Siegert, Gabriele; Qin, Nan; Därr, Roland; Hoyer, Jana; Villringer, Arno; Prejbisz, Aleksander; Januszewicz, Andrzej; Remaley, Alan; Martucci, Victoria; Pacak, Karel; Ross, H Alec; Sweep, Fred C G J; Lenders, Jacques W M

    2016-01-01

    Background Measurements of plasma normetanephrine and metanephrine provide a useful diagnostic test for phaeochromocytoma, but this depends on appropriate reference intervals. Upper cut-offs set too high compromise diagnostic sensitivity, whereas set too low, false-positives are a problem. This study aimed to establish optimal reference intervals for plasma normetanephrine and metanephrine. Methods Blood samples were collected in the supine position from 1226 subjects, aged 5–84 y, including 116 children, 575 normotensive and hypertensive adults and 535 patients in whom phaeochromocytoma was ruled out. Reference intervals were examined according to age and gender. Various models were examined to optimize upper cut-offs according to estimates of diagnostic sensitivity and specificity in a separate validation group of 3888 patients tested for phaeochromocytoma, including 558 with confirmed disease. Results Plasma metanephrine, but not normetanephrine, was higher (P < 0.001) in men than in women, but reference intervals did not differ. Age showed a positive relationship (P < 0.0001) with plasma normetanephrine and a weaker relationship (P = 0.021) with metanephrine. Upper cut-offs of reference intervals for normetanephrine increased from 0.47 nmol/L in children to 1.05 nmol/L in subjects over 60 y. A curvilinear model for age-adjusted compared with fixed upper cut-offs for normetanephrine, together with a higher cut-off for metanephrine (0.45 versus 0.32 nmol/L), resulted in a substantial gain in diagnostic specificity from 88.3% to 96.0% with minimal loss in diagnostic sensitivity from 93.9% to 93.6%. Conclusions These data establish age-adjusted cut-offs of reference intervals for plasma normetanephrine and optimized cut-offs for metanephrine useful for minimizing false-positive results. PMID:23065528

  16. Minimal realization of right-handed gauge symmetry

    NASA Astrophysics Data System (ADS)

    Nomura, Takaaki; Okada, Hiroshi

    2018-01-01

    We propose a minimally extended gauge symmetry model with U(1)R, where only the right-handed fermions have nonzero charges in the fermion sector. To achieve both anomaly cancellation and minimality, three right-handed neutrinos are naturally required, and the standard model Higgs has to have a nonzero charge under this symmetry. We then find that the breaking scale (Λ) is restricted by precise measurements of the neutral gauge boson in the standard model; therefore, O(10) TeV ≲ Λ. We also discuss the testability of the new gauge boson and the discrimination of the U(1)R model from the U(1)B-L one at colliders such as the LHC and ILC.

  17. Extraction, separation and isolation of volatiles from Vitex agnus-castus L. (Verbenaceae) wild species of Sardinia, Italy, by supercritical CO2.

    PubMed

    Marongiu, Bruno; Piras, Alessandra; Porcedda, Silvia; Falconieri, Danilo; Goncalves, Maria J; Salgueiro, Ligia; Maxia, Andrea; Lai, Roberta

    2010-04-01

    Volatile concentrates from leaves, flowers and fruits of Vitex agnus-castus L. have been obtained by supercritical extraction with carbon dioxide. The composition of the volatile concentrates has been analysed by GC/MS. In all plant organs, the extracts are composed chiefly of alpha-pinene, sabinene, 1,8-cineole, alpha-terpinyl acetate, (E)-caryophyllene, (E)-beta-farnesene, bicyclogermacrene, spathulenol and manool. The main difference observed was in the content of sclarene, which was not present in the samples from flowers or fruits. To complete the investigation, a comparison with the hydrodistilled oil has been carried out. The minimal inhibitory concentration (MIC) and the minimal lethal concentration were used to evaluate the antifungal activity of the oils against dermatophyte strains (Trichophyton mentagrophytes, Microsporum canis, T. rubrum, M. gypseum and Epidermophyton floccosum). Antifungal activity of the leaf essential oil was the highest, with MIC values of 0.64 microL mL(-1) for most of the strains.

  18. Regularized minimum I-divergence methods for the inverse blackbody radiation problem

    NASA Astrophysics Data System (ADS)

    Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin

    2006-08-01

    This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
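
    A minimal sketch of the kind of unregularized minimum I-divergence iteration the paper starts from is the Richardson-Lucy-type multiplicative update for p ≈ Ka with nonnegative K, a and p. The discretized kernel below is a generic nonnegative matrix rather than Planck's law, and no penalty term is included; both are assumptions for illustration.

```python
import numpy as np

def min_idiv(K, p, iters=2000):
    """Multiplicative update decreasing Csiszar's I-divergence
    D(p || K a) over nonnegative a. Nonnegativity of the estimate
    is preserved automatically because the update is multiplicative."""
    a = np.ones(K.shape[1])
    col_sums = K.sum(axis=0)                   # K^T 1
    for _ in range(iters):
        ratio = p / np.maximum(K @ a, 1e-300)  # guard against divide-by-zero
        a *= (K.T @ ratio) / col_sums
    return a

# With exact nonnegative data the iteration recovers the source term.
K = np.array([[0.8, 0.2],
              [0.2, 0.8]])
a_true = np.array([1.0, 3.0])
a_hat = min_idiv(K, K @ a_true)
```

    The regularized variants in the paper add a penalty (entropy, L1, or Good's roughness) to this objective, which changes the per-iteration update, handled there via Green's one-step-late method.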

  19. Inverse problems with nonnegative and sparse solutions: algorithms and application to the phase retrieval problem

    NASA Astrophysics Data System (ADS)

    Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong

    2018-05-01

    In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems arising in the regularization of inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of the regularized minimization problems to be nonnegative and sparse, and then apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. We then use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all the noise from the experiment is present.

  20. Newton Methods for Large Scale Problems in Machine Learning

    ERIC Educational Resources Information Center

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  1. An awake, minimally-invasive, fully-endoscopic surgical technique for treating lumbar radiculopathy secondary to heterotopic foraminal bone formation after a minimally invasive transforaminal lumbar interbody fusion with BMP: technical note

    PubMed Central

    2018-01-01

    One complication associated with recombinant human bone morphogenetic protein (rhBMP-2) use in minimally invasive transforaminal lumbar interbody fusion (MIS-TLIF) is heterotopic bone growth at the neural foramen, which results in compression of neural structures. Here we present an awake, minimally invasive surgical approach for treating the radiculopathy that results from this excessive bone growth in the foramen. A 42-year-old male underwent a lumbar 4–sacral 1 MIS-TLIF by another surgeon. He did well in the initial postoperative period, but began to note right leg pain and numbness in an L5 dermatomal pattern. The pain continued for 2 years despite interventional pain management, and he began to note left foot dorsiflexion weakness. Electromyography (EMG) showed a left L5 radiculopathy, and CT of the lumbar spine demonstrated excessive bone growth in the right L4–5 neural foramen. The patient underwent an awake, endoscopic foraminotomy procedure utilizing a blunt-tipped manual shaver drill system. The patient's radicular symptoms improved immediately, and he remained asymptomatic at the 1-year follow-up. Heterotopic foraminal bone growth is one potential complication of rhBMP-2 use in the MIS-TLIF procedure. The endoscopic procedure described here is a minimally invasive surgical option that can be performed in an awake patient and is suggested as a unique salvage or rescue procedure to be considered for the treatment of this potential rhBMP-2 complication. PMID:29732437

  2. Requirement for Dot1l in murine postnatal hematopoiesis and leukemogenesis by MLL translocation

    PubMed Central

    Jo, Stephanie Y.; Granowicz, Eric M.; Maillard, Ivan; Thomas, Dafydd

    2011-01-01

    Disruptor of telomeric silencing 1-like (Dot1l) is a histone 3 lysine 79 methyltransferase. Studies of constitutive Dot1l knockout mice show that Dot1l is essential for embryonic development and prenatal hematopoiesis. DOT1L also interacts with translocation partners of Mixed Lineage Leukemia (MLL) gene, which is commonly translocated in human leukemia. However, the requirement of Dot1l in postnatal hematopoiesis and leukemogenesis of MLL translocation proteins has not been conclusively shown. With a conditional Dot1l knockout mouse model, we examined the consequences of Dot1l loss in postnatal hematopoiesis and MLL translocation leukemia. Deletion of Dot1l led to pancytopenia and failure of hematopoietic homeostasis, and Dot1l-deficient cells minimally reconstituted recipient bone marrow in competitive transplantation experiments. In addition, MLL-AF9 cells required Dot1l for oncogenic transformation, whereas cells with other leukemic oncogenes, such as Hoxa9/Meis1 and E2A-HLF, did not. These findings illustrate a crucial role of Dot1l in normal hematopoiesis and leukemogenesis of specific oncogenes. PMID:21398221

  3. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows

    PubMed Central

    Wang, Di; Kleinberg, Robert D.

    2009-01-01

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596

  4. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows.

    PubMed

    Wang, Di; Kleinberg, Robert D

    2009-11-28

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C(2), C(3), C(4),…. It is known that C(2) can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing C(k) (k > 2) require solving a linear program. In this paper we prove that C(3) can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}(n), this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.
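
    For orientation, the object being bounded can be stated in a few lines: the exact minimum of a quadratic polynomial over {0, 1}^n, computable by enumeration only for tiny n, which is why the polynomial-time lower bounds C(2), C(3), ... matter. A sketch of the brute-force reference point (not the flow-based algorithms of the paper):

```python
import itertools
import numpy as np

def qubo_min(Q):
    """Exact minimum of x^T Q x over x in {0,1}^n by enumeration.
    Exponential in n; serves only as a ground-truth reference for
    the polynomial-time lower bounds discussed above."""
    n = Q.shape[0]
    best_val, best_x = np.inf, None
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        v = float(x @ Q @ x)       # diagonal of Q carries the linear terms
        if v < best_val:
            best_val, best_x = v, x
    return best_val, best_x

# Tiny example: minimum is -1, attained by setting exactly one variable.
val, x = qubo_min(np.array([[-1.0, 2.0],
                            [0.0, -1.0]]))
```

    Any valid lower bound C(k) computed for this instance must therefore be at most -1.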

  5. Outpatient safety assessment of an in-home predictive low-glucose suspend system with type 1 diabetes subjects at elevated risk of nocturnal hypoglycemia.

    PubMed

    Buckingham, Bruce A; Cameron, Fraser; Calhoun, Peter; Maahs, David M; Wilson, Darrell M; Chase, H Peter; Bequette, B Wayne; Lum, John; Sibayan, Judy; Beck, Roy W; Kollman, Craig

    2013-08-01

    Nocturnal hypoglycemia is a common problem with type 1 diabetes. In the home setting, we conducted a pilot study to evaluate the safety of a system consisting of an insulin pump and continuous glucose monitor communicating wirelessly with a bedside computer running an algorithm that temporarily suspends insulin delivery when hypoglycemia is predicted. After the run-in phase, a 21-night randomized trial was conducted in which each night was randomly assigned 2:1 to have either the predictive low-glucose suspend (PLGS) system active (intervention night) or inactive (control night). Three predictive algorithm versions were studied sequentially during the study for a total of 252 intervention and 123 control nights. The trial included 19 participants 18-56 years old with type 1 diabetes (hemoglobin A1c level of 6.0-7.7%) who were current users of the MiniMed Paradigm® REAL-Time Revel™ System and Sof-sensor® glucose sensor (Medtronic Diabetes, Northridge, CA). With the final algorithm, pump suspension occurred on 53% of 77 intervention nights. Mean morning glucose level was 144±48 mg/dL on the 77 intervention nights versus 133±57 mg/dL on the 37 control nights, with morning blood ketones >0.6 mmol/L following one intervention night. Overnight hypoglycemia was lower on intervention than control nights, with at least one value ≤70 mg/dL occurring on 16% versus 30% of nights, respectively, with the final algorithm. This study demonstrated that the PLGS system in the home setting is safe and feasible. The preliminary efficacy data appear promising with the final algorithm reducing nocturnal hypoglycemia by almost 50%.

  6. Joint L1 and Total Variation Regularization for Fluorescence Molecular Tomography

    PubMed Central

    Dutta, Joyita; Ahn, Sangtae; Li, Changqing; Cherry, Simon R.; Leahy, Richard M.

    2012-01-01

    Fluorescence molecular tomography (FMT) is an imaging modality that exploits the specificity of fluorescent biomarkers to enable 3D visualization of molecular targets and pathways in vivo in small animals. Owing to the high degree of absorption and scattering of light through tissue, the FMT inverse problem is inherently ill-conditioned, making image reconstruction highly susceptible to the effects of noise and numerical errors. Appropriate priors or penalties are needed to facilitate reconstruction and to restrict the search space to a specific solution set. Typically, fluorescent probes are locally concentrated within specific areas of interest (e.g., inside tumors). The commonly used L2 norm penalty generates the minimum energy solution, which tends to be spread out in space. Instead, we present here an approach involving a combination of the L1 and total variation norm penalties, the former to suppress spurious background signals and enforce sparsity and the latter to preserve local smoothness and piecewise constancy in the reconstructed images. We have developed a surrogate-based optimization method for minimizing the joint penalties. The method was validated using both simulated and experimental data obtained from a mouse-shaped phantom mimicking tissue optical properties and containing two embedded fluorescent sources. Fluorescence data were collected using a 3D FMT setup that uses an EMCCD camera for image acquisition and a conical mirror for full-surface viewing. A range of performance metrics were utilized to evaluate our simulation results and to compare our method with the L1, L2, and total variation norm penalty based approaches. The experimental results were assessed using Dice similarity coefficients computed after co-registration with a CT image of the phantom. PMID:22390906
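
As a rough illustration of the joint penalty above, the following sketch minimizes a data-fidelity term plus smoothed L1 and total-variation penalties by plain gradient descent on a 1D toy signal. The smoothing, penalty weights, and toy problem are illustrative assumptions; the paper's surrogate-based optimizer and 3D FMT forward model are not reproduced here.

```python
import numpy as np

def objective(x, A, b, lam1, lam2, eps=1e-3):
    """Data fidelity + smoothed L1 sparsity + smoothed 1D total variation."""
    fid = 0.5 * np.sum((A @ x - b) ** 2)
    l1 = lam1 * np.sum(np.sqrt(x ** 2 + eps))           # |x| ~ sqrt(x^2 + eps)
    tv = lam2 * np.sum(np.sqrt(np.diff(x) ** 2 + eps))  # smoothed TV penalty
    return fid + l1 + tv

def gradient(x, A, b, lam1, lam2, eps=1e-3):
    g = A.T @ (A @ x - b)
    g += lam1 * x / np.sqrt(x ** 2 + eps)
    d = np.diff(x)
    w = lam2 * d / np.sqrt(d ** 2 + eps)
    g[:-1] -= w   # d(TV)/dx[i] terms for the forward difference x[i+1]-x[i]
    g[1:] += w
    return g

# Toy underdetermined reconstruction of a sparse, piecewise-constant signal
rng = np.random.default_rng(0)
n, m = 40, 25
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[10:15] = 1.0
b = A @ x_true

lam1, lam2, eps = 0.05, 0.05, 1e-3
# Gradient Lipschitz bound: step 1/L guarantees monotone descent
L = np.linalg.norm(A, 2) ** 2 + (lam1 + 4 * lam2) / np.sqrt(eps)
x = np.zeros(n)
for _ in range(300):
    x -= gradient(x, A, b, lam1, lam2, eps) / L
```

The L1 term pulls background entries toward zero while the TV term discourages oscillation inside the recovered block, mirroring the roles the abstract assigns to the two penalties.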

  7. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    NASA Astrophysics Data System (ADS)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation variant (PFSP with time lags) has received concentrated attention, while the non-permutation variant (non-PFSP with time lags) appears to have been neglected. With the aim of minimizing the makespan while satisfying time lag constraints, efficient algorithms for the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within about 11% of the computational time of a traditional GA approach. The non-permutation results indicate that the proposed IG can reach a nearly identical solution in less than 1% of the computational time of a traditional GA approach. The proposed research combines the PFSP and non-PFSP with minimal and maximal time lag consideration, which provides an interesting viewpoint for industrial implementation.
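
The destruction-construction loop of an iterated greedy algorithm can be sketched for the plain permutation flow shop with a makespan objective. Note that this sketch omits the time-lag constraints that are the point of the paper; the NEH-style start, destruction size `d`, and acceptance rule are common defaults, not necessarily the authors' settings.

```python
import random

def makespan(perm, p):
    """Completion time of the last job on the last machine.
    p[j][k] = processing time of job j on machine k."""
    c = [0] * len(p[0])
    for j in perm:
        c[0] += p[j][0]
        for k in range(1, len(c)):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def best_insert(partial, job, p):
    """Insert `job` at the position minimizing makespan (NEH insertion step)."""
    best_ms, best_pos = None, 0
    for pos in range(len(partial) + 1):
        ms = makespan(partial[:pos] + [job] + partial[pos:], p)
        if best_ms is None or ms < best_ms:
            best_ms, best_pos = ms, pos
    return partial[:best_pos] + [job] + partial[best_pos:]

def iterated_greedy(p, d=2, iters=100, seed=0):
    rng = random.Random(seed)
    # NEH construction: insert jobs in decreasing order of total processing time
    seq = []
    for j in sorted(range(len(p)), key=lambda j: -sum(p[j])):
        seq = best_insert(seq, j, p)
    best, cur = seq[:], seq[:]
    for _ in range(iters):
        removed = rng.sample(cur, min(d, len(cur)))     # destruction
        partial = [j for j in cur if j not in removed]
        for j in removed:                               # greedy reconstruction
            partial = best_insert(partial, j, p)
        if makespan(partial, p) <= makespan(cur, p):    # accept non-worsening moves
            cur = partial
            if makespan(cur, p) < makespan(best, p):
                best = cur[:]
    return best, makespan(best, p)
```

On the tiny two-job, two-machine instance `p = [[3, 2], [1, 4]]`, the sequence `[1, 0]` has makespan 7 and `[0, 1]` has makespan 9, so the heuristic should settle on the former.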

  8. The intrinsic antimicrobial activity of citric acid-coated manganese ferrite nanoparticles is enhanced after conjugation with the antifungal peptide Cm-p5

    PubMed Central

    Lopez-Abarrategui, Carlos; Figueroa-Espi, Viviana; Lugo-Alvarez, Maria B; Pereira, Caroline D; Garay, Hilda; Barbosa, João ARG; Falcão, Rosana; Jiménez-Hernández, Linnavel; Estévez-Hernández, Osvaldo; Reguera, Edilso; Franco, Octavio L; Dias, Simoni C; Otero-Gonzalez, Anselmo J

    2016-01-01

    Diseases caused by bacterial and fungal pathogens are among the major health problems in the world. Newer antimicrobial therapies based on novel molecules urgently need to be developed, and this includes the antimicrobial peptides. In spite of their potential, very few antimicrobial peptides have been successfully developed into therapeutics. The major problems they present are molecule stability, toxicity in host cells, and production costs. A novel strategy to overcome these obstacles is conjugation to nanomaterial preparations. The antimicrobial activity of different types of nanoparticles has been previously demonstrated. Specifically, magnetic nanoparticles have been widely studied in biomedicine due to their physicochemical properties. The citric acid-modified manganese ferrite nanoparticles used in this study were characterized by high-resolution transmission electron microscopy, which confirmed the formation of nanocrystals of approximately 5 nm diameter. These nanoparticles were able to inhibit Candida albicans growth in vitro. The minimal inhibitory concentration was 250 µg/mL. However, the nanoparticles were not capable of inhibiting Gram-negative bacteria (Escherichia coli) or Gram-positive bacteria (Staphylococcus aureus). Finally, an antifungal peptide (Cm-p5) from the sea animal Cenchritis muricatus (Gastropoda: Littorinidae) was conjugated to the modified manganese ferrite nanoparticles. The antifungal activity of the conjugated nanoparticles was higher than that of their bulk counterparts, showing a minimal inhibitory concentration of 100 µg/mL. This conjugate proved to be nontoxic to a macrophage cell line at concentrations that showed antimicrobial activity. PMID:27563243

  9. Second Language Reading Research: Problems and Possibilities.

    ERIC Educational Resources Information Center

    Koda, Keiko

    1994-01-01

    First-language (L1) reading theories are examined from second- language (L2) perspectives to identify significant research voids related to L2 problems. Unique aspects of L2 reading are considered and three distinct areas are discussed: consequences of prior reading experience, effects of cross-linguistic processing, and compensatory devices for…

  10. Characterizing L1-norm best-fit subspaces

    NASA Astrophysics Data System (ADS)

    Brooks, J. Paul; Dulá, José H.

    2017-05-01

    Fitting affine objects to data is the basis of many tools and methodologies in statistics, machine learning, and signal processing. The L1 norm is often employed to produce subspaces exhibiting a robustness to outliers and faulty observations. The L1-norm best-fit subspace problem is directly formulated as a nonlinear, nonconvex, and nondifferentiable optimization problem. The case when the subspace is a hyperplane can be solved to global optimality efficiently by solving a series of linear programs. The problem of finding the best-fit line has recently been shown to be NP-hard. We present necessary conditions for optimality for the best-fit subspace problem, and use them to characterize properties of optimal solutions.
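
The L1 criterion behind such best-fit problems can be illustrated with a least-absolute-deviations line fit. The paper solves its hyperplane case by a series of linear programs; the sketch below instead uses the standard IRLS iteration (weights 1/|residual|), and the toy data set is made up to show the characteristic robustness to an outlier.

```python
import numpy as np

def lad_fit_irls(X, y, iters=50, delta=1e-8):
    """Least-absolute-deviations fit: minimize sum_i |X[i] @ beta - y[i]|
    via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # ordinary L2 warm start
    for _ in range(iters):
        r = X @ beta - y
        w = 1.0 / np.maximum(np.abs(r), delta)        # IRLS weights, guarded near 0
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

# Line y = 2x with one gross outlier: the L1 fit passes through the inliers,
# whereas an L2 fit would be pulled toward the outlier (toy data, illustrative)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 100.0])
X = np.column_stack([np.ones_like(x), x])
beta = lad_fit_irls(X, y)
```

This robustness to faulty observations is exactly the property the abstract attributes to the L1 norm.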

  11. Computer-guided percutaneous interbody fixation and fusion of the L5-S1 disc: a 2-year prospective study.

    PubMed

    Mac Millan, Michael

    2005-02-01

    The clinical outcomes of lumbar fusion are diminished by the complications associated with the surgical approach. Posterior approaches cause segmental muscular necrosis, and anterior approaches risk visceral and vascular injury. This report details a two-year prospective study of a percutaneous method that avoids the major problems associated with existing approaches. Seventeen patients underwent percutaneous, trans-sacral fusion and fixation of L5-S1 with the assistance of computer guidance. Each patient was followed for a minimum of two years post surgery. SF-36 questionnaires and radiographs were obtained preoperatively and at two years postoperatively. Fusion was assessed with postoperative radiographs and/or CT scan. Ninety-three percent of the patients fused as judged by plain AP films, Ferguson's view radiographs, and/or CT scans at the two-year follow-up. Prospective health and functional SF-36 scores showed significant improvement from the preoperative to the postoperative period. There were no significant complications related to the approach or to the placement of the implants. Percutaneous fusion of the lumbosacral spine appears safe and provides excellent clinical results with a minimal amount of associated tissue trauma.

  12. Adaptation of the osteoarthritis-specific quality of life scale (the OAQoL) for use in Germany, Hungary, Italy, Spain and Turkey.

    PubMed

    Wilburn, Jeanette; McKenna, Stephen P; Kutlay, Şehim; Bender, Tamas; Braun, Jürgen; Castillo-Gallego, Concepcion; Favero, Marta; Geher, Pal; Kiltz, Uta; Martin-Mola, Emilio; Ramonda, Roberta; Rouse, Matthew; Tennant, Alan; Küçükdeveci, Ayşe A

    2017-05-01

    The Osteoarthritis Quality of Life scale (OAQoL) is specific to individuals with osteoarthritis. The present study describes the adaptation of the OAQoL for use in the following five European languages: German, Hungarian, Italian, Spanish and Turkish. The study involved three stages in each language: translation, cognitive debriefing (face and content validity) and validation. The validation stage assessed internal consistency (Cronbach's alpha), reproducibility (test-retest reliability using Spearman's rank correlations), convergent and divergent validity (correlations with the Health Assessment Questionnaire, the Western Ontario and McMaster Universities Index of osteoarthritis and the Nottingham Health Profile) and known-group validity. The OAQoL was successfully translated into the target languages with minimal problems. Cognitive debriefing interviewees found the measures easy to complete and identified few problems with content. Internal consistency ranged from 0.94 to 0.97 and test-retest reliability (reproducibility) from 0.87 to 0.98. These values indicate that the new language versions produce very low levels of measurement error. Median OAQoL scores were higher for patients reporting a current flare of osteoarthritis in all countries. Scores were also related, as expected, to perceived severity of osteoarthritis. The OAQoL was successfully adapted for use in Germany, Hungary, Italy, Spain and Turkey. The addition of these new language versions will prove valuable to multinational clinical trials and to clinical practice in the respective countries.

  13. Interpreting a CMS excess in lljj + missing transverse momentum with the golden cascade of the minimal supersymmetric standard model

    NASA Astrophysics Data System (ADS)

    Allanach, Ben; Kvellestad, Anders; Raklev, Are

    2015-06-01

    The CMS experiment recently reported an excess consistent with an invariant mass edge in opposite-sign same-flavor leptons, when produced in conjunction with at least two jets and missing transverse momentum. We provide an interpretation of the edge in terms of (anti)squark pair production followed by the "golden cascade" decay for one of the squarks, q̃ → χ̃₂⁰q → l̃lq → χ̃₁⁰qll, in the minimal supersymmetric standard model. A simplified model involving binos, winos, an on-shell slepton, and the first two generations of squarks fits the event rate and the invariant mass edge. We check consistency with a recent ATLAS search in a similar region, finding that much of the good-fit parameter space is still allowed at the 95% confidence level (C.L.). However, a combination of other LHC searches, notably two-lepton stop pair searches and jets plus missing transverse momentum searches, rules out all of the remaining parameter space at the 95% C.L.

  14. Neutrino masses in the minimal gauged (B-L) supersymmetry

    NASA Astrophysics Data System (ADS)

    Yan, Yu-Li; Feng, Tai-Fu; Yang, Jin-Lei; Zhang, Hai-Bin; Zhao, Shu-Min; Zhu, Rong-Fei

    2018-03-01

    We present the radiative corrections to neutrino masses in a minimal supersymmetric extension of the standard model with local U(1)B-L symmetry. At tree level, three tiny active neutrinos and two nearly massless sterile neutrinos can be obtained through the seesaw mechanism. Considering the one-loop corrections to the neutrino masses, the numerical results indicate that the two sterile neutrinos acquire keV-scale masses with small active-sterile mixing angles. The lighter sterile neutrino is a very interesting dark matter candidate in cosmology. Meanwhile, the active neutrino mixing angles and mass-squared differences agree with present experimental data.

  15. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue

    NASA Astrophysics Data System (ADS)

    Jezernik, Sašo; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  16. Charge and energy minimization in electrical/magnetic stimulation of nervous tissue.

    PubMed

    Jezernik, Saso; Sinkjaer, Thomas; Morari, Manfred

    2010-08-01

    In this work we address the problem of stimulating nervous tissue with the minimal necessary energy at reduced/minimal charge. Charge minimization is related to a valid safety concern (avoidance and reduction of stimulation-induced tissue and electrode damage). Energy minimization plays a role in battery-driven electrical or magnetic stimulation systems (increased lifetime, repetition rates, reduction of power requirements, thermal management). Extensive new theoretical results are derived by employing an optimal control theory framework. These results include derivation of the optimal electrical stimulation waveform for a mixed energy/charge minimization problem, derivation of the charge-balanced energy-minimal electrical stimulation waveform, solutions of a pure charge minimization problem with and without a constraint on the stimulation amplitude, and derivation of the energy-minimal magnetic stimulation waveform. Depending on the set stimulus pulse duration, energy and charge reductions of up to 80% are deemed possible. Results are verified in simulations with an active, mammalian-like nerve fiber model.

  17. Accurately determining direction of arrival by seismic array based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Hu, J.; Zhang, H.; Yu, H.

    2016-12-01

    Seismic array analysis plays an important role in detecting weak signals and determining their locations and rupture processes. In these applications, reliably estimating the direction of arrival (DOA) of the seismic wave is very important. DOA is generally determined by the conventional beamforming method (CBM) [Rost et al., 2000]. However, for a fixed seismic array the resolution of the CBM is generally poor for low-frequency seismic signals, and for high-frequency signals the CBM may produce many local peaks, making it difficult to pick the one corresponding to the true DOA. In this study, we develop a new seismic array method based on compressive sensing (CS) to determine the DOA with high resolution for both low- and high-frequency seismic signals. The new method takes advantage of the spatial sparsity of the incoming wavefronts. The CS method has been successfully used to determine spatial and temporal earthquake rupture distributions with seismic arrays [Yao et al., 2011; Yao et al., 2013; Yin, 2016]. In this method, we first cast DOA estimation as an L1-norm minimization problem. The measurement matrix for CS is constructed by dividing the slowness-angle domain into many grid nodes, and needs to satisfy the restricted isometry property (RIP) for optimized reconstruction of the image. The L1-norm minimization is solved by the interior point method. We first test the CS-based DOA determination method on synthetic data constructed for the Shanghai seismic array. Compared with the CBM, synthetic tests on noise-free data show that the new method can determine the true DOA with super-high resolution. In the case of multiple sources, the new method can easily separate multiple DOAs. When data are contaminated by noise at various levels, the CS method is stable as long as the noise amplitude is lower than the signal amplitude. We also test the CS method on the Wenchuan earthquake: for different arrays with different apertures, we are able to obtain reliable DOAs with uncertainties lower than 10 degrees.
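
The L1-norm formulation sketched in this abstract can be illustrated on a toy uniform linear array with a single on-grid source. The abstract's interior point solver is replaced here by ISTA (proximal gradient with soft-thresholding), and the array size, angle grid, and regularization weight are illustrative assumptions.

```python
import numpy as np

def steering_matrix(n_sensors, angles_deg, spacing=0.5):
    """Far-field plane-wave steering vectors for a uniform linear array
    (sensor spacing in wavelengths) -- assumed geometry, for illustration."""
    k = np.arange(n_sensors)[:, None]
    s = np.sin(np.deg2rad(np.asarray(angles_deg)))[None, :]
    return np.exp(-2j * np.pi * spacing * k * s)

def ista_l1(y, A, lam, iters=500):
    """Solve min_s 0.5*||A s - y||^2 + lam*||s||_1 by ISTA
    (gradient step on the data term, complex soft-thresholding on s)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data term
    s = np.zeros(A.shape[1], dtype=complex)
    for _ in range(iters):
        g = s - A.conj().T @ (A @ s - y) / L
        mag = np.abs(g)
        shrink = np.maximum(mag - lam / L, 0.0)
        s = shrink * g / np.maximum(mag, 1e-12)
    return s

# One noise-free on-grid source at 20 degrees (illustrative setup)
grid = np.arange(-90, 91, 2)
A = steering_matrix(16, grid)
y = A[:, list(grid).index(20)]
s = ista_l1(y, A, lam=0.05 * np.max(np.abs(A.conj().T @ y)))
doa = grid[np.argmax(np.abs(s))]
```

The recovered amplitude vector `s` is sparse on the angle grid, and its peak marks the estimated DOA, which is the sparsity-in-slowness-angle idea the abstract exploits.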

  18. IFSM fractal image compression with entropy and sparsity constraints: A sequential quadratic programming approach

    NASA Astrophysics Data System (ADS)

    Kunze, Herb; La Torre, Davide; Lin, Jianyi

    2017-01-01

    We consider the inverse problem associated with IFSM: given a target function f, find an IFSM such that its fixed point f̄ is sufficiently close to f in the Lp distance. Forte and Vrscay [1] showed how to reduce this problem to a quadratic optimization model. In this paper, we extend the collage-based method developed by Kunze, La Torre and Vrscay ([2][3][4]) by proposing the minimization of the L1 norm instead of the L0 norm. In fact, optimization problems involving the L0 norm are combinatorial in nature, and hence in general NP-hard. To overcome these difficulties, we introduce the L1 norm and propose a Sequential Quadratic Programming algorithm to solve the corresponding inverse problem. As in Kunze, La Torre and Vrscay [3], in our formulation the minimization of collage error is treated as a multi-criteria problem that includes three different and conflicting criteria: collage error, entropy and sparsity. This multi-criteria program is solved by means of a scalarization technique, which reduces the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented.

  19. On the Miller-Tucker-Zemlin Based Formulations for the Distance Constrained Vehicle Routing Problems

    NASA Astrophysics Data System (ADS)

    Kara, Imdat

    2010-11-01

    The Vehicle Routing Problem (VRP) is an extension of the well-known Traveling Salesman Problem (TSP) and has many practical applications in the fields of distribution and logistics. When the VRP includes distance-based constraints, it is called the Distance Constrained Vehicle Routing Problem (DVRP). However, the literature addressing the DVRP is scarce. In this paper, existing two-index integer programming formulations with Miller-Tucker-Zemlin based subtour elimination constraints are reviewed. The existing formulations are simplified, and the resulting formulation is presented as formulation F1. It is shown that the distance bounding constraints of formulation F1 may not generate the distance traveled up to the related node. To remedy this, we redefine the auxiliary variables of the formulation and propose a second formulation, F2, with new and easy-to-use distance bounding constraints. Adaptation of the second formulation to cases with new restrictions, such as a minimal distance traveled by each vehicle, or other objectives, such as minimizing the longest distance traveled, is discussed.

  20. 17α-ethinyl estradiol attenuates depressive-like behavior through GABAA receptor activation/nitrergic pathway blockade in ovariectomized mice.

    PubMed

    Saeedi Saravi, Seyed Soheil; Arefidoust, Alireza; Yaftian, Rahele; Saeedi Saravi, Seyed Sobhan; Dehpour, Ahmad Reza

    2016-04-01

    This study was performed to investigate the antidepressant-like effect of 17α-ethinyl estradiol (EE2) in ovariectomized (OVX) mice and the possible role of nitrergic and gamma-aminobutyric acid (GABA)ergic pathways in this paradigm. Bilateral ovariectomy was performed in female mice, and different doses of EE2 were intraperitoneally injected either alone or combined with the GABAA agonist diazepam, the GABAA antagonist flumazenil, the non-specific nitric oxide synthase (NOS) inhibitor N(ω)-nitro-L-arginine methyl ester (L-NAME), the specific nNOS inhibitor 7-nitroindazole (7-NI), a nitric oxide (NO) precursor, L-arginine, and the selective PDE5 inhibitor sildenafil. After locomotion assessment, immobility times were recorded in the forced swimming test (FST) and tail suspension test (TST). Moreover, hippocampal nitrite concentrations were measured in the examined groups. Ten days after ovariectomy, significantly prolonged immobility times were observed. EE2 (0.3 and 1 μg/kg and 0.03, 0.1, and 1 mg/kg) caused antidepressant-like activity in OVX mice in the FST and TST. Diazepam (1 and 5 mg/kg), L-NAME (30 mg/kg), and 7-NI (100 mg/kg) significantly reduced the immobility times. Co-administration of minimal and sub-effective doses of EE2 and diazepam (0.3 μg/kg and 0.5 mg/kg, respectively) exerted a significant antidepressant-like effect. The same effect was observed with the combination of minimal and sub-effective doses of EE2 and either L-NAME or 7-NI. Moreover, the combination of minimal and sub-effective doses of EE2, diazepam, and either L-NAME or 7-NI produced a robust antidepressant-like effect. The study demonstrated that the lowest dose of EE2 exerts a significant antidepressant-like effect. It is suggested that suppression of the NO system, as well as GABAA activation, may be responsible for the antidepressant-like activity of EE2 in OVX mice. Moreover, GABAA activation may inhibit the nitrergic pathway.

  1. Simultaneous serum nicotine, cotinine, and trans-3'-hydroxycotinine quantitation with minimal sample volume for tobacco exposure status of solid organ transplant patients.

    PubMed

    Shu, Irene; Wang, Ping

    2013-06-01

    Concentrations of nicotine and its metabolites in blood are indicative of patients' current tobacco exposure, and their quantification has been clinically applied to multiple assessments, including demonstration of abstinence prior to heart-lung transplantation. For the purpose of transplant evaluation, the laboratory work-up is extensive; thereby an assay with minimal sample volume is preferred. We developed and validated a rapid LC-MS/MS assay to simultaneously quantitate nicotine and its major metabolites, cotinine and trans-3'-OH-cotinine (3-OH-Cot), in serum. 100 μL of serum was spiked with deuterated internal standards and extracted by Oasis HLB solid phase extraction cartridge. Nicotine and metabolites in the reconstituted serum extract were separated by an Agilent Eclipse XDB-C8 3.5 μm 2.1 mm×50 mm HPLC column within 4.7 min, and quantified by MS/MS with positive-mode electrospray ionization and multiple reaction monitoring. Ion suppression was insignificant, and extraction efficiency was 79-110% at 50 ng/mL for all compounds. The limit of detection was 1.0 ng/mL for nicotine and 3-OH-Cot, and <0.5 ng/mL for cotinine. Linearity ranges for nicotine, cotinine and 3-OH-Cot were 2-100, 2-1000, and 5-1000 ng/mL with recoveries of 86-115%. Within-day and twenty-day imprecision at nicotine/cotinine/3-OH-Cot levels of 22/150/90, 37/250/150, and 50/800/500 ng/mL were all 1.1-6.5%. The reconstituted serum extracts were stable for at least 7 days stored in the HPLC autosampler at 5°C. Our method correlates well with alternative LC-MS/MS methods. We successfully developed and validated an LC-MS/MS assay to quantitate concentrations of nicotine and its metabolites in serum with minimal sample volume to assess tobacco exposure of heart-lung transplant patients. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Intraventricular vector flow mapping—a Doppler-based regularized problem with automatic model selection

    NASA Astrophysics Data System (ADS)

    Assi, Kondo Claude; Gay, Etienne; Chnafa, Christophe; Mendez, Simon; Nicoud, Franck; Abascal, Juan F. P. J.; Lantelme, Pierre; Tournoux, François; Garcia, Damien

    2017-09-01

    We propose a regularized least-squares method for reconstructing 2D velocity vector fields within the left ventricular cavity from single-view color Doppler echocardiographic images. Vector flow mapping is formulated as a quadratic optimization problem based on an ℓ2-norm minimization of a cost function composed of a Doppler data-fidelity term and a regularizer. The latter contains three physically interpretable expressions related to 2D mass conservation, Dirichlet boundary conditions, and smoothness. A finite difference discretization of the continuous problem was adopted in a polar coordinate system, leading to a sparse symmetric positive-definite system. The three regularization parameters were determined automatically by analyzing the L-hypersurface, a generalization of the L-curve. The performance of the proposed method was numerically evaluated using (1) a synthetic flow composed of a mixture of divergence-free and curl-free flow fields and (2) simulated flow data from a patient-specific CFD (computational fluid dynamics) model of a human left heart. The numerical evaluations showed that the vector flow fields reconstructed from the Doppler components were in good agreement with the original velocities, with a relative error less than 20%. It was also demonstrated that a perturbation of the domain contour has little effect on the rebuilt velocity fields. The capability of our intraventricular vector flow mapping (iVFM) algorithm was finally illustrated on in vivo echocardiographic color Doppler data acquired in patients. The vortex that forms during the rapid filling was clearly deciphered. This improved iVFM algorithm is expected to have a significant clinical impact in the assessment of diastolic function.
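
The generic structure of such a regularized least-squares problem (a data-fidelity term plus a smoothness regularizer, leading to a symmetric positive-definite linear system) can be sketched in one dimension. This uses Cartesian finite differences and a fixed λ rather than the paper's polar discretization and automatic L-hypersurface parameter selection.

```python
import numpy as np

def regularized_ls(A, b, lam):
    """Minimize ||A x - b||^2 + lam * ||D x||^2, where D is a first-difference
    (smoothness) operator, via the normal equations. The system matrix
    A^T A + lam * D^T D is symmetric positive (semi)definite."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n finite-difference operator
    lhs = A.T @ A + lam * D.T @ D
    return np.linalg.solve(lhs, A.T @ b)

# Denoising toy: with A = I the solution trades fidelity against smoothness
rng = np.random.default_rng(1)
b = np.sin(np.linspace(0, np.pi, 30)) + 0.1 * rng.standard_normal(30)
x_smooth = regularized_ls(np.eye(30), b, lam=1e6)   # heavy smoothing -> near constant
```

As λ grows, the null space of D (constant vectors) dominates and the solution collapses toward the mean of the data; at λ = 0 the problem reduces to ordinary least squares.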

  3. An information geometric approach to least squares minimization

    NASA Astrophysics Data System (ADS)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
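
A minimal version of the Levenberg-Marquardt iteration described above, with a simple accept/reject damping schedule; the schedule and the Rosenbrock test problem are illustrative choices, not the authors' geodesic-motion variant.

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, iters=200, lam=1e-3):
    """Minimize 0.5*||r(x)||^2. The damping lam interpolates between
    Gauss-Newton (small lam) and short gradient-descent steps (large lam)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jac(x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.5      # accept: trust the model more
        else:
            lam *= 10.0                       # reject: shrink the step
    return x

# Rosenbrock function written as a least-squares problem (classic test case)
res = lambda x: np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])
jac = lambda x: np.array([[-20.0 * x[0], 10.0], [-1.0, 0.0]])
x_min = levenberg_marquardt(res, jac, [-1.2, 1.0])
```

The damping update is the geometric fact the abstract alludes to: large lam steps are short moves along the steepest-descent direction on the model manifold, while small lam recovers the full Gauss-Newton step.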

  4. Semismooth Newton method for gradient constrained minimization problem

    NASA Astrophysics Data System (ADS)

    Anyyeva, Serbiniyaz; Kunisch, Karl

    2012-08-01

    In this paper we treat a gradient constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. In order to obtain a numerical approximation to the solution, we developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. Regularization was applied in order to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method was developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.
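
The generalized (Newton) differentiation idea can be shown on a deliberately tiny 1D semismooth equation; the equation below is an illustrative example of the mechanism, not the paper's gradient-constrained problem in function space.

```python
def semismooth_newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Newton iteration where dF returns an element of the generalized
    (Clarke) derivative at points where F is not differentiable."""
    x = x0
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        x = x - fx / dF(x)
    return x

# Example: F(x) = 2x + |x| - 1 is nonsmooth at 0 and has the unique root 1/3.
F = lambda x: 2.0 * x + abs(x) - 1.0
dF = lambda x: 2.0 + (1.0 if x >= 0 else -1.0)  # generalized derivative of |x|
root = semismooth_newton(F, dF, x0=-1.0)
```

Because F is piecewise linear, the iteration lands on the exact root after the iterate crosses the kink, which mirrors the local superlinear convergence that motivates semismooth Newton methods.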

  5. Nonstandard Lumbar Region in Predicting Fracture Risk.

    PubMed

    Alajlouni, Dima; Bliuc, Dana; Tran, Thach; Pocock, Nicholas; Nguyen, Tuan V; Eisman, John A; Center, Jacqueline R

    Femoral neck (FN) bone mineral density (BMD) is the most commonly used skeletal site to estimate fracture risk. The role of lumbar spine (LS) BMD in fracture risk prediction is less clear due to osteophytes that spuriously increase LS BMD, particularly at lower levels. The aim of this study was to compare the fracture predictive ability of upper L1-L2 BMD with standard L2-L4 BMD and assess whether the addition of either LS site could improve fracture prediction over FN BMD. This study comprised a prospective cohort of 3016 women and men over 60 yr from the Dubbo Osteoporosis Epidemiology Study followed up for occurrence of minimal trauma fractures from 1989 to 2014. Dual-energy X-ray absorptiometry was used to measure BMD at L1-L2, L2-L4, and FN at baseline. Fracture risks were estimated using Cox proportional hazards models separately for each site. Predictive performances were compared using receiver operating characteristic curve analyses. There were 565 women and 179 men with a minimal trauma fracture during a mean of 11 ± 7 yr. The L1-L2 BMD T-score was significantly lower than the L2-L4 T-score in both genders (p < 0.0001). L1-L2 and L2-L4 BMD models had a similar fracture predictive ability. LS BMD was better than FN BMD in predicting vertebral fracture risk in women [area under the curve 0.73 (95% confidence interval, 0.68-0.79) vs 0.68 (95% confidence interval, 0.62-0.74)], but FN was superior for hip fracture prediction in both women and men. The addition of L1-L2 or L2-L4 to FN BMD in women increased overall and vertebral predictive power compared with FN BMD alone by 1% and 4%, respectively (p < 0.05). In an elderly population, L1-L2 is as good as, but not better than, L2-L4 in predicting fracture risk. The addition of LS BMD to FN BMD provided a modest additional benefit in overall fracture risk. Further studies in individuals with spinal degenerative disease are needed. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  6. In Vitro Study of the Antifungal Activity of Essential Oils Obtained from Mentha spicata, Thymus vulgaris, and Laurus nobilis.

    PubMed

    Houicher, Abderrahmane; Hechachna, Hind; Teldji, Hanifa; Ozogul, Fatih

    2016-01-01

    The aim of this study was to determine the antifungal activity of the essential oils isolated from three aromatic plants against 13 filamentous fungal strains. The major constituents of Mentha spicata, Thymus vulgaris, and Laurus nobilis essential oils were carvone (52.2%), linalool (78.1%), and 1,8-cineole (45.6%), respectively. There are also some patents suggesting the use of essential oils as natural and safe alternatives to fungicides for plant protection. In the present work, M. spicata essential oil exhibited the strongest activity against all tested fungi; Fusarium graminearum, F. moniliforme, and Penicillium expansum were the most sensitive to mint oil, with the lowest minimal inhibitory concentration (MIC) and minimal fungicidal concentration (MFC) values of 2.5 μL mL-1 (v/v). Thymus vulgaris essential oil was less active than the oil of M. spicata. Aspergillus ochraceus was the most sensitive strain to thyme oil, with MIC and MFC values of 2.5 and 5 μL mL-1, respectively. Thymus vulgaris essential oil also exhibited a moderate fungicidal effect against the tested fungi, except for A. niger (MFC >20 μL mL-1). L. nobilis essential oil showed antifungal activity similar to that of thyme oil, with A. parasiticus being the most resistant strain to this oil (MFC >20 μL mL-1). Our findings suggest the use of these essential oils as alternatives to synthetic fungicides in order to prevent pre- and post-harvest infections and ensure product safety. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  7. Causal Attribution: A New Scale Developed to Minimize Existing Methodological Problems.

    ERIC Educational Resources Information Center

    Bull, Kay Sather; Feuquay, Jeffrey P.

    In order to facilitate research on the construct of causal attribution, this paper details developmental procedures used to minimize previous deficiencies and proposes a new scale. The first version of the scale was in ipsative form and provided two basic sets of indices: (1) ability, effort, luck, and task difficulty indices in success and…

  8. The Camassa-Holm equation as an incompressible Euler equation: A geometric point of view

    NASA Astrophysics Data System (ADS)

    Gallouët, Thomas; Vialard, François-Xavier

    2018-04-01

    The group of diffeomorphisms of a compact manifold endowed with the L2 metric, acting on the space of probability densities, gives a unifying framework for the incompressible Euler equation and the theory of optimal mass transport. Recently, several authors have extended optimal transport to the space of positive Radon measures, where the Wasserstein-Fisher-Rao distance is a natural extension of the classical L2-Wasserstein distance. In this paper, we show a similar relation between this unbalanced optimal transport problem and the Hdiv right-invariant metric on the group of diffeomorphisms, which corresponds to the Camassa-Holm (CH) equation in one dimension. Geometrically, we present an isometric embedding of the group of diffeomorphisms endowed with this right-invariant metric into the automorphism group of the fiber bundle of half densities endowed with an L2-type cone metric. This leads to a new formulation of the (generalized) CH equation as a geodesic equation on an isotropy subgroup of this automorphism group; on S1, solutions to the standard CH equation thus give radially 1-homogeneous solutions of the incompressible Euler equation on R2 that preserve a radial density with a singularity at 0. Another application is a proof that smooth solutions of the Euler-Arnold equation for the Hdiv right-invariant metric are length-minimizing geodesics for sufficiently short times.

  9. In vitro antibacterial and morphological effects of the urushiol component of the sap of the Korean lacquer tree (Rhus vernicifera Stokes) on Helicobacter pylori.

    PubMed

    Suk, Ki Tae; Kim, Hyun Soo; Kim, Moon Young; Kim, Jae Woo; Uh, Young; Jang, In Ho; Kim, Soo Ki; Choi, Eung Ho; Kim, Myong Jo; Joo, Jung Soo; Baik, Soon Koo

    2010-03-01

    Eradication regimens for Helicobacter pylori infection have side effects, compliance problems, relapses, and antibiotic resistance. Therefore, alternative anti-H. pylori or supportive antimicrobial agents with fewer disadvantages are needed for the treatment of H. pylori. We investigated the pH-dependent (5.0, 6.0, 7.0, 8.0, 9.0, and 10.0) and concentration-dependent (0.032, 0.064, 0.128, 0.256, 0.514, and 1.024 mg/mL) antibacterial activity of crude urushiol extract from the sap of the Korean lacquer tree (Rhus vernicifera Stokes) against 3 strains (NCTC11637, 69, and 219) of H. pylori by the agar dilution method. In addition, the serial morphological effects of urushiol on H. pylori (before incubation and 3, 6, and 10 min after incubation) were examined by electron microscopy. All strains survived only within pH 6.0-9.0. The minimal inhibitory concentrations (MICs) of the extract against the strains ranged from 0.064 mg/mL to 0.256 mg/mL. Urushiol caused mainly separation of the membrane, vacuolization, and lysis of H. pylori. Interestingly, these changes were observed within 10 min of incubation with 1× MIC of urushiol. These results suggest that urushiol has potential as a rapid therapeutic against H. pylori infection by disrupting the bacterial cell membrane.

  10. Replica analysis for the duality of the portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint, which we analyzed in a previous study, we here consider a primal-dual pair in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. For these optimization problems, we analyze a quenched disordered system involving both problems using the approach developed in statistical mechanical informatics, and we confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
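    The primal problem summarized above — minimize investment risk subject to a budget constraint and an expected-return constraint — has, for an invertible covariance matrix, a closed form via Lagrange multipliers. The following is a minimal numerical sketch of that classical mean-variance solution (our illustration with made-up data, not the paper's replica-analysis method):

    ```python
    import numpy as np

    # Primal problem: minimize w' C w / 2 subject to sum(w) = N (budget)
    # and r' w = R*N (expected return). Stationarity gives C w = a*1 + b*r,
    # so w = C^{-1}(a*1 + b*r), with multipliers (a, b) fixed by the
    # two linear constraints. All numbers below are illustrative.
    rng = np.random.default_rng(0)
    N = 5                                   # number of assets
    A = rng.normal(size=(N, N))
    C = A @ A.T + N * np.eye(N)             # positive-definite covariance
    r = rng.normal(loc=1.0, scale=0.2, size=N)   # expected returns
    R = 1.1                                 # target return per unit budget

    ones = np.ones(N)
    Ci = np.linalg.inv(C)
    # 2x2 linear system for the Lagrange multipliers (a, b).
    M = np.array([[ones @ Ci @ ones, ones @ Ci @ r],
                  [r    @ Ci @ ones, r    @ Ci @ r]])
    a, b = np.linalg.solve(M, np.array([N, R * N]))
    w = Ci @ (a * ones + b * r)             # optimal portfolio; both constraints hold
    ```

    The dual problem discussed in the abstract (maximize return subject to budget and risk constraints) leads to the same stationarity structure with the roles of objective and constraint exchanged.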

  11. Replica analysis for the duality of the portfolio optimization problem.

    PubMed

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint, which we analyzed in a previous study, we here consider a primal-dual pair in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. For these optimization problems, we analyze a quenched disordered system involving both problems using the approach developed in statistical mechanical informatics, and we confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  12. Minimal models of compact symplectic semitoric manifolds

    NASA Astrophysics Data System (ADS)

    Kane, D. M.; Palmer, J.; Pelayo, Á.

    2018-02-01

    A symplectic semitoric manifold is a symplectic 4-manifold endowed with a Hamiltonian (S1 × R)-action satisfying certain conditions. The goal of this paper is to construct a new symplectic invariant of symplectic semitoric manifolds, the helix, and give applications. The helix is a symplectic analogue of the fan of a nonsingular complete toric variety in algebraic geometry that takes into account the effects of the monodromy near focus-focus singularities. We give two applications of the helix: first, we use it to give a classification of the minimal models of symplectic semitoric manifolds, where "minimal" means not admitting any blowdowns. The second application is an extension to the compact case of a well-known result of Vũ Ngọc about the constraints posed on a symplectic semitoric manifold by the existence of focus-focus singularities. The helix makes it possible to translate a symplectic geometric problem into an algebraic one, and the paper describes a method for solving this type of algebraic problem.

  13. Round-off errors in cutting plane algorithms based on the revised simplex procedure

    NASA Technical Reports Server (NTRS)

    Moore, J. E.

    1973-01-01

    This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10⁻¹² is reasonable.
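    The two safeguards described above can be sketched in a few lines. This is a hedged illustration, not the report's code: the tolerance value is the one quoted in the abstract, and the "reinversion" step is realized here as one standard Newton-Schulz refinement X ← X(2I − AX), a common technique for improving an approximate inverse:

    ```python
    import numpy as np

    TOL = 0.1e-12   # tolerance quoted in the abstract for 18-digit arithmetic

    def clean(X, tol=TOL):
        """Safeguard 1: round values that are pure round-off noise to zero."""
        X = X.copy()
        X[np.abs(X) < tol] = 0.0
        return X

    def refine_inverse(A, X):
        """Safeguard 2: one Newton-Schulz step X <- X (2I - A X).
        If R = I - A X is the residual, the new residual is R @ R,
        so a small inversion error is (roughly) squared."""
        return X @ (2.0 * np.eye(A.shape[0]) - A @ X)

    rng = np.random.default_rng(1)
    A = rng.normal(size=(4, 4)) + 4 * np.eye(4)      # well-conditioned matrix
    X = np.linalg.inv(A) + 1e-6 * rng.normal(size=(4, 4))  # noisy inverse
    X1 = clean(refine_inverse(A, X))
    err0 = np.linalg.norm(A @ X - np.eye(4))         # before refinement
    err1 = np.linalg.norm(A @ X1 - np.eye(4))        # after: much smaller
    ```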

  14. Correlation between the norm and the geometry of minimal networks

    NASA Astrophysics Data System (ADS)

    Laut, I. L.

    2017-05-01

    The paper is concerned with the inverse problem of the minimal Steiner network problem in a normed linear space. Namely, given a normed space in which all minimal networks are known for any finite point set, the problem is to describe all the norms on this space for which the minimal networks are the same as for the original norm. We survey the available results and prove that in the plane a rotund differentiable norm determines a distinctive set of minimal Steiner networks. In a two-dimensional space with rotund differentiable norm the coordinates of interior vertices of a nondegenerate minimal parametric network are shown to vary continuously under small deformations of the boundary set, and the turn direction of the network is determined. Bibliography: 15 titles.

  15. Adaptation of the QoL-AGHDA scale for adults with growth hormone deficiency in four Slavic languages.

    PubMed

    McKenna, Stephen P; Wilburn, Jeanette; Twiss, James; Crawford, Sigrid R; Hána, Václav; Karbownik-Lewinska, Malgorzata; Popovic, Vera; Pura, Mikulas; Koltowska-Häggström, Maria

    2011-08-02

    The Quality of Life in Adult Growth Hormone Deficiency Assessment (QoL-AGHDA) is a disease-specific quality of life measure specific to individuals who are growth hormone deficient. The present study describes the adaptation of the QoL-AGHDA for use in the following four Slavic languages; Czech, Polish, Serbian and Slovakian. The study involved three stages in each language; translation, cognitive debriefing and validation. The validation stage assessed internal consistency (Cronbach's alpha), reproducibility (test-retest reliability using Spearman's rank correlations), convergent and divergent validity (Correlations with the NHP) and known group validity. The QoL-AGHDA was successfully translated into the target languages with minimal problems. Cognitive debriefing interviewees (n = 15-18) found the measures easy to complete and identified few problems with the content. Internal consistency (Czech Republic = 0.91, Poland = 0.91, Serbia = 0.91 and Slovakia = 0.89) and reproducibility (Czech Republic = 0.91, Poland = 0.91, Serbia = 0.88 and Slovakia = 0.93) were good in all adaptations. Convergent and divergent validity and known group validity data were not available for Slovakia. The QoL-AGHDA correlated as expected with the NHP scales most relevant to GHD. The QoL-AGHDA was able to distinguish between participants based on a range of variables. The QoL-AGHDA was successfully adapted for use in the Czech Republic, Poland, Serbia and Slovakia. Further validation of the Slovakian version would be beneficial. The addition of these new language versions will prove valuable to multinational clinical trials and to clinical practice in the respective countries.

  16. Production of High-Viscosity Whey Broths by a Lactose-Utilizing Xanthomonas campestris Strain.

    PubMed

    Schwartz, R D; Bodie, E A

    1985-12-01

    Xanthomonas campestris BB-1L was isolated by enrichment and selection by serial passage in a lactose-minimal medium. When BB-1L was subsequently grown in medium containing only 4% whey and 0.05% yeast extract, the lactose was consumed and broth viscosities greater than 500 cps at a 12 s−1 shear rate were produced. Prolonged maintenance in whey resulted in the loss of the ability of BB-1L to produce viscous broths in whey, indicating a reversion to preferential growth on whey protein, like the parent strain.

  17. Improved L-BFGS diagonal preconditioners for a large-scale 4D-Var inversion system: application to CO2 flux constraints and analysis error calculation

    NASA Astrophysics Data System (ADS)

    Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng

    2013-04-01

    This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. 
Here we assess the performance of different preconditioners for estimating the inverse Hessian of a large-scale 4D-Var system. The impact of using the diagonal preconditioners proposed by Gilbert and Lemaréchal (1989) instead of the usual Oren-Spedicato scalar will be presented first. We will also introduce new hybrid methods that combine randomization estimates of the analysis error variance with L-BFGS diagonal updates to improve the inverse Hessian approximation. Results from these new algorithms will be evaluated against standard large-ensemble Monte Carlo simulations. The methods explored here are applied to the problem of inferring global atmospheric CO2 fluxes using remote sensing observations, and are intended to be integrated with the future NASA Carbon Monitoring System.
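The role of the preconditioner can be illustrated with the standard L-BFGS two-loop recursion: the initial inverse-Hessian guess H0 (here a diagonal) is exactly the slot that the diagonal preconditioners discussed above occupy. A minimal sketch under these assumptions (a textbook recursion on a toy quadratic, not the authors' 4D-Var system):

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list, h0_diag):
    """Two-loop recursion: return -H @ grad, where H is the L-BFGS
    inverse-Hessian estimate built from the stored (s, y) pairs on top
    of the diagonal initial guess h0_diag (the 'preconditioner')."""
    q = grad.astype(float).copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q = q - a * y
    r = h0_diag * q                      # apply the diagonal preconditioner
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * (y @ r)
        r = r + (a - beta) * s
    return -r

# Toy quadratic f(x) = x' diag(D) x / 2: feeding the recursion the three
# coordinate pairs s_i = e_i, y_i = D_i e_i makes H exact, so the result
# is the Newton direction -g / D.
D = np.array([1.0, 10.0, 100.0])
s_list = [np.eye(3)[i] for i in range(3)]
y_list = [D * s for s in s_list]
g = np.array([2.0, -30.0, 400.0])
d = lbfgs_direction(g, s_list, y_list, np.ones(3))
# d == -g / D == [-2.0, 3.0, -4.0]
```

The better H0 approximates the true inverse Hessian, the fewer (s, y) pairs are needed to correct it — which is why the choice of diagonal preconditioner matters for both minimization speed and the quality of the implied analysis-error estimate.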

  18. FEM Modeling of a Magnetoelectric Transducer for Autonomous Micro Sensors in Medical Application

    NASA Astrophysics Data System (ADS)

    Yang, Gang; Talleb, Hakeim; Gensbittel, Aurélie; Ren, Zhuoxiang

    2015-11-01

    In the context of wireless and autonomous sensors, this paper presents the multiphysics modeling of an energy transducer based on a magnetoelectric (ME) composite for biomedical applications. The study considers the power requirement of an implanted sensor, the communication distance, the size limit of the device for minimally invasive insertion, and the electromagnetic exposure restrictions for the human body. To minimize electromagnetic absorption by the human body, the energy source is an external reader emitting a low-frequency magnetic field. The modeling is carried out with the finite element method by simultaneously solving the multiple physics problems, including the electric load of the conditioning circuit. The simulation results show that with the T-L mode of a trilayer laminated ME composite, the transducer can deliver the required energy while respecting the different constraints.

  19. Stationkeeping of Lissajous Trajectories in the Earth-Moon System with Applications to ARTEMIS

    NASA Technical Reports Server (NTRS)

    Folta, D. C.; Pavlak, T. A.; Howell, K. C.; Woodard, M. A.; Woodfork, D. W.

    2010-01-01

    In the last few decades, several missions have successfully exploited trajectories near the Sun-Earth L1 and L2 libration points. Recently, the collinear libration points in the Earth-Moon system have emerged as locations with immediate application. Most libration point orbits, in any system, are inherently unstable and must be controlled. To this end, several stationkeeping strategies are considered for application to ARTEMIS. Two approaches are examined to investigate the stationkeeping problem in this regime and the specific options available for ARTEMIS given the mission and vehicle constraints: (1) a baseline orbit-targeting approach controls the vehicle to remain near a nominal trajectory, and a related global optimum search method searches all possible maneuver angles to determine an optimal angle and magnitude; and (2) an orbit continuation method, with various formulations, determines maneuver locations and minimizes costs. Initial results indicate that consistent and reasonable stationkeeping costs can be achieved with both approaches. These methods are then applied to Lissajous trajectories representing a baseline ARTEMIS libration orbit trajectory.

  20. Efficacy of pink guava pulp as an antioxidant in raw pork emulsion.

    PubMed

    Joseph, Serlene; Chatli, Manish K; Biswas, Ashim K; Sahoo, Jhari

    2014-08-01

    Lipid oxidation-induced quality problems can be minimized with the use of natural antioxidants. The antioxidant potential of pink guava pulp (PGP) was evaluated at different levels (0%, C; 5.0%, T-1; 7.5%, T-2; and 10.0%, T-3) in raw pork emulsion during 9 days of refrigerated storage under aerobic packaging. Lycopene and β-carotene contents increased (P < 0.05) with PGP level. Redness (a*) increased (P < 0.05), whereas lightness (L*) decreased (P < 0.05) with the incorporation of PGP. Visual colour and odour scores were greater (P < 0.05) in PGP-treated products than in the control. Percent metmyoglobin formation was greater (P < 0.05) in the control than in PGP-treated products, and increased (P < 0.05) during storage in all treatments. Overall, peroxide value, thiobarbituric acid reactive substances, and free fatty acid values were lower (P < 0.05) in PGP-treated raw emulsion than in the control throughout the storage period. Our results indicate that pink guava pulp can be utilized as an antioxidant in raw pork products to minimize lipid oxidation, off-odour development, and surface discolouration.

  1. Shaped-Based Recognition of 3D Objects From 2D Projections

    DTIC Science & Technology

    2006-12-01

    functions for a typical minimization by the graduated assignment algorithm. (The solid line is E, which uses the Euclidean distances to the nearest... of E and E0 generally decrease during the optimization process, but they can also rise because of changes in the assignment variables Mjk... (m+1) × (n+1) match matrix M that minimizes the objective function E = Σ_{j=1}^{m} Σ_{k=1}^{n} M_{jk} (d(T(l_j), l′_k)² − δ²). (7) M defines the
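    The match objective quoted in this snippet can be evaluated directly: it sums, over all model/image point pairs, the assignment weight times the squared distance between the transformed model point and the image point, offset by δ². A hedged sketch reconstructed from the quoted formula, with made-up points (not the report's code or data):

    ```python
    import numpy as np

    def match_objective(M, model_pts, image_pts, delta):
        """E = sum_jk M_jk * (d(T(l_j), l'_k)^2 - delta^2), where model_pts
        holds the already-transformed model points T(l_j) and image_pts the
        image points l'_k. M is a soft (or hard) assignment matrix."""
        diff = model_pts[:, None, :] - image_pts[None, :, :]
        d2 = np.sum(diff ** 2, axis=-1)          # pairwise squared distances
        return np.sum(M * (d2 - delta ** 2))

    model = np.array([[0.0, 0.0], [1.0, 0.0]])   # T(l_j), illustrative
    image = np.array([[0.0, 0.1], [1.0, -0.1]])  # l'_k, illustrative
    M = np.eye(2)                                # hard one-to-one assignment
    E = match_objective(M, model, image, delta=1.0)
    # E = (0.01 - 1) + (0.01 - 1) = -1.98
    ```

    The δ² offset makes matches closer than δ contribute negatively, so minimizing E rewards pairing points within that radius.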

  2. Production of High-Viscosity Whey Broths by a Lactose-Utilizing Xanthomonas campestris Strain

    PubMed Central

    Schwartz, Robert D.; Bodie, Elizabeth A.

    1985-01-01

    Xanthomonas campestris BB-1L was isolated by enrichment and selection by serial passage in a lactose-minimal medium. When BB-1L was subsequently grown in medium containing only 4% whey and 0.05% yeast extract, the lactose was consumed and broth viscosities greater than 500 cps at a 12 s−1 shear rate were produced. Prolonged maintenance in whey resulted in the loss of the ability of BB-1L to produce viscous broths in whey, indicating a reversion to preferential growth on whey protein, like the parent strain. PMID:16346946

  3. Risk and protective factors of health-related quality of life in children and adolescents: Results of the longitudinal BELLA study.

    PubMed

    Otto, Christiane; Haller, Anne-Catherine; Klasen, Fionna; Hölling, Heike; Bullinger, Monika; Ravens-Sieberer, Ulrike

    2017-01-01

    Cross-sectional studies demonstrated associations of several sociodemographic and psychosocial factors with generic health-related quality of life (HRQoL) in children and adolescents. However, little is known about factors affecting the change in child and adolescent HRQoL over time. This study investigates potential psychosocial risk and protective factors of child and adolescent HRQoL based on longitudinal data of a German population-based study. Data from the BELLA study gathered at three measurement points (baseline, 1-year and 2-year follow-ups) were investigated in n = 1,554 children and adolescents aged 11 to 17 years at baseline. Self-reported HRQoL was assessed by the KIDSCREEN-10 Index. We examined effects of sociodemographic factors, mental health problems, parental mental health problems, as well as potential personal, familial, and social protective factors on child and adolescent HRQoL at baseline as well as over time using longitudinal growth modeling. At baseline, girls reported lower HRQoL than boys, especially in older participants; low socioeconomic status and migration background were both associated with low HRQoL. Mental health problems as well as parental mental health problems were negatively, self-efficacy, family climate, and social support were positively associated with initial HRQoL. Longitudinal analyses revealed less increase of HRQoL in girls than boys, especially in younger participants. Changes in mental health problems were negatively, changes in self-efficacy and social support were positively associated with the change in HRQoL over time. No effects were found for changes in parental mental health problems or in family climate on changes in HRQoL. Moderating effects for self-efficacy, family climate or social support on the relationships between the investigated risk factors and HRQoL were not found. 
Mental health problems, as a risk factor, negatively affect the development of HRQoL in young people, while the resource factors self-efficacy and social support affect it positively; these factors should be considered in prevention programs.

  4. A minimal dissipation type-based classification in irreversible thermodynamics and microeconomics

    NASA Astrophysics Data System (ADS)

    Tsirlin, A. M.; Kazakov, V.; Kolinko, N. A.

    2003-10-01

    We formulate the problem of finding classes of kinetic dependencies in irreversible thermodynamic and microeconomic systems for which minimal dissipation processes belong to the same type. We show that this problem is an inverse optimal control problem and solve it. The commonality of this problem in irreversible thermodynamics and microeconomics is emphasized.

  5. Absolute quantitation of low abundance plasma APL1β peptides at sub-fmol/mL Level by SRM/MRM without immunoaffinity enrichment.

    PubMed

    Sano, Shozo; Tagami, Shinji; Hashimoto, Yuuki; Yoshizawa-Kumagaye, Kumiko; Tsunemi, Masahiko; Okochi, Masayasu; Tomonaga, Takeshi

    2014-02-07

    Selected/multiple reaction monitoring (SRM/MRM) has been widely used for the quantification of specific proteins/peptides, although it remains challenging to quantify low-abundance proteins/peptides in complex samples such as plasma/serum. To overcome this problem, enrichment of target proteins/peptides, such as immunoprecipitation, is needed; however, this is labor-intensive, and generation of antibodies is highly expensive. In this study, we attempted to quantify low-abundance plasma APLP1-derived Aβ-like peptides (APL1β), a surrogate marker for Alzheimer's disease, by SRM/MRM using stable isotope-labeled reference peptides without immunoaffinity enrichment. A combination of Cibacron Blue dye-mediated albumin removal and acetonitrile extraction, followed by C18-strong cation exchange multi-StageTip purification, was used to deplete plasma proteins and unnecessary peptides. Optimal, validated precursor-to-fragment ion transitions for APL1β were developed on a triple quadrupole mass spectrometer, and the nano-liquid chromatography gradient for peptide separation was optimized to minimize biological interference from plasma. Using the stable isotope-labeled (SI) peptide as an internal control, absolute concentrations of plasma APL1β peptide could be quantified at several hundred amol/mL. To our knowledge, this is the lowest detection level of an endogenous plasma peptide quantified by SRM/MRM.
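    The stable-isotope-dilution arithmetic behind this kind of absolute quantitation reduces to a peak-area ratio times the spiked amount of the heavy-labeled internal standard, normalized by sample volume. A schematic sketch with illustrative numbers (not the paper's data; the function name and inputs are our own):

    ```python
    # The endogenous (light) and SI (heavy) peptides co-elute and ionize
    # nearly identically, so their peak-area ratio equals their molar ratio.
    def quantify(area_endogenous, area_heavy, spiked_amol, plasma_ml):
        """Return endogenous peptide concentration in amol/mL plasma."""
        return (area_endogenous / area_heavy) * spiked_amol / plasma_ml

    conc = quantify(area_endogenous=1.2e4, area_heavy=6.0e4,
                    spiked_amol=1000.0, plasma_ml=0.5)
    # -> 400.0 amol/mL, i.e. the "several hundred amol/mL" range the
    #    abstract reports
    ```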

  6. Minimization In Digital Design As A Meta-Planning Problem

    NASA Astrophysics Data System (ADS)

    Ho, William P. C.; Wu, Jung-Gen

    1987-05-01

    In our model-based expert system for automatic digital system design, we formalize the design process into three subprocesses: compiling high-level behavioral specifications into primitive behavioral operations, grouping primitive operations into behavioral functions, and grouping functions into modules. Consideration of design minimization explicitly controls decision-making in the last two subprocesses. Design minimization, a key task in the automatic design of digital systems, is complicated by the high degree of interaction among the time sequence and content of design decisions. In this paper, we present an AI approach which directly addresses these interactions and their consequences by modeling the minimization problem as a planning problem, and the management of design decision-making as a meta-planning problem.

  7. Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties

    NASA Astrophysics Data System (ADS)

    Lazzaro, D.; Loli Piccolomini, E.; Zama, F.

    2016-10-01

    This work addresses the problem of magnetic resonance image reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non-Convex Reweighted (FNCR), in which the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with adaptive adjustment of the penalization parameter. We propose a fast iterative algorithm and prove that it converges to a local minimum because the constrained problem satisfies the Kurdyka-Łojasiewicz property. Moreover, adapting the non-convex l0 approximation and penalization parameters by means of a continuation technique allows us to obtain good-quality solutions, avoiding getting stuck in unwanted local minima. Numerical experiments performed on sub-sampled MRI data show the efficiency of the algorithm and the accuracy of the solution.
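    In its generic form, the reweighting idea alternates a convex weighted-l1 solve with a weight update driven by the previous iterate, so that the effective penalty approaches a non-convex sparsity measure. The following is a compressed-sensing toy sketch of that generic scheme (a reweighted-l1/ISTA illustration of the idea, not the FNCR algorithm or its total-variation penalty):

    ```python
    import numpy as np

    def ista_weighted_l1(A, y, lam, w, n_iter=500):
        """Solve min ||Ax - y||^2/2 + lam * sum(w_i |x_i|) by ISTA."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - (A.T @ (A @ x - y)) / L    # gradient step
            t = lam * w / L
            x = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(2)
    A = rng.normal(size=(40, 80)) / np.sqrt(40)   # 2x under-determined system
    x_true = np.zeros(80)
    x_true[[3, 17, 42]] = [5.0, -4.0, 3.0]        # sparse ground truth
    y = A @ x_true                                # noiseless measurements

    x, w = np.zeros(80), np.ones(80)
    for _ in range(5):                            # outer reweighting loop
        x = ista_weighted_l1(A, y, lam=0.05, w=w)
        w = 1.0 / (np.abs(x) + 1e-3)              # sharpen toward non-convex penalty
    ```

    Each outer pass shrinks coefficients the previous solve left near zero ever harder, mimicking a non-convex penalty while only ever solving convex subproblems — the same "reweight, then re-solve" structure the abstract describes for gradients of the image.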

  8. Selecting a restoration technique to minimize OCR error.

    PubMed

    Cannon, M; Fugate, M; Hush, D R; Scovel, C

    2003-01-01

    This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that, on average, the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization, for which we prove a distribution-independent finite sample bound on estimation error. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with a general loss function, and we prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm, we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to the mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report the experimental results.
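    The nearest-neighbor rule motivated above can be sketched in a few lines: represent each training document by features, record which restoration technique minimized its OCR error, and route a new document to the technique of its closest neighbor. Everything here — features, technique names, data — is hypothetical:

    ```python
    import math

    # (features, best restoration technique) pairs -- entirely made-up
    # training data standing in for documents whose per-technique OCR
    # error was measured offline.
    train = [
        ((0.9, 0.1), "deblur"),
        ((0.2, 0.8), "despeckle"),
        ((0.5, 0.5), "none"),
    ]

    def pick_restoration(features):
        """1-nearest-neighbor: choose the technique that worked best on
        the closest training document."""
        _, technique = min(train, key=lambda fx: math.dist(fx[0], features))
        return technique

    choice = pick_restoration((0.85, 0.2))
    # -> "deblur": (0.9, 0.1) is the nearest training point
    ```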

  9. Extent of weight reduction necessary for minimization of diabetes risk in Japanese men with visceral fat accumulation and glycated hemoglobin of 5.6-6.4.

    PubMed

    Iwahashi, Hiromi; Noguchi, Midori; Okauchi, Yukiyoshi; Morita, Sachiko; Imagawa, Akihisa; Shimomura, Iichiro

    2015-09-01

    Weight reduction improves glycemic control in obese men with glycated hemoglobin (HbA1c) of 5.6-6.4%, suggesting that it can prevent the development of diabetes in these patients. The aim of the present study was to quantify the amount of weight reduction necessary to minimize diabetes risk in Japanese men with visceral fat accumulation. The study participants were 482 men with an estimated visceral fat area of ≥100 cm(2), HbA1c of 5.6-6.4%, and fasting plasma glucose (FPG) of <126 mg/dL or casual plasma glucose of <200 mg/dL. They were divided into two groups based on weight change at the end of the 3-year follow-up period (weight gain and weight loss groups). The weight loss group was classified into quartile subgroups (lowest group, 0 to <1.2%; second lowest group, ≥1.2 to <2.5%; second highest group, ≥2.5 to <4.3%; highest group, ≥4.3% weight loss). Development of diabetes at the end-point was defined as a rise in HbA1c to ≥6.5%, FPG ≥126 mg/dL, or casual plasma glucose ≥200 mg/dL. The cumulative incidence of diabetes at the end of the 3-year follow-up period was 16.2% in the weight gain group and 10.1% in the weight loss group (P not significant). The incidence of diabetes was significantly lower in the highest weight loss group (3.1%), but not in the second highest, second lowest, or lowest weight loss groups (9.7, 10.1, and 18.3%), compared with the weight gain group. Minimization of the risk of diabetes in Japanese men with visceral fat accumulation requires a minimum of 4-5% weight loss in those with HbA1c of 5.6-6.4%.

  10. A fast algorithm for identifying friends-of-friends halos

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Modi, C.

    2017-07-01

    We describe a simple and fast algorithm for identifying friends-of-friends features and prove its correctness. The algorithm avoids unnecessary, expensive neighbor queries, uses minimal memory overhead, and rejects slowdown in high over-density regions. We define our algorithm formally based on pair enumeration, a problem that has been heavily studied in fast 2-point correlation codes, and our reference implementation employs a dual KD-tree correlation function code. We construct features in a hierarchical tree structure and use a splay operation to reduce the average cost of identifying the root of a feature from O[log L] to O[1] (L is the size of a feature) without additional memory costs. This reduces the overall time complexity of merging trees from O[L log L] to O[L], reducing the number of operations per splay by orders of magnitude. We next introduce a pruning operation that skips merge operations between two fully self-connected KD-tree nodes. This improves the robustness of the algorithm, reducing the number of merge operations in high-density peaks from O[δ²] to O[δ]. We show that for a cosmological data set the algorithm eliminates more than half of the merge operations for typically used linking lengths b ∼ 0.2 (relative to the mean separation). Furthermore, our algorithm is extremely simple and easy to implement on top of an existing pair enumeration code, reusing the optimization effort that has been invested in fast correlation function codes.
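    The root-finding-and-merging core of a friends-of-friends finder can be illustrated with union-find plus path compression, which plays the same amortized role as the splay operation described above. In this hedged sketch (our illustration, not the reference implementation), a naive O(n²) pair loop stands in for the paper's dual KD-tree enumeration and pruning:

    ```python
    import itertools
    import math

    def fof_groups(points, b):
        """Group points into friends-of-friends features: any two points
        closer than the linking length b end up in the same feature."""
        parent = list(range(len(points)))

        def find(i):                        # root lookup with path compression
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # halve the path to the root
                i = parent[i]
            return i

        def union(i, j):                    # merge the two feature trees
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj

        # Naive pair enumeration; the paper replaces this with a dual
        # KD-tree walk and skips fully self-connected node pairs.
        for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
            if math.dist(p, q) < b:
                union(i, j)

        groups = {}
        for i in range(len(points)):
            groups.setdefault(find(i), []).append(i)
        return list(groups.values())

    pts = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (5.0, 5.0), (5.05, 5.0)]
    groups = fof_groups(pts, b=0.2)
    # two features: {0, 1, 2} (linked transitively through point 1) and {3, 4}
    ```

    Note that points 0 and 2 are 0.2 apart — not directly linked — yet land in the same feature through point 1, which is exactly the transitive "friends of friends" semantics.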

  11. Short-term and long-term effects of a minimally invasive transilial vertebral blocking procedure on the lumbosacral morphometry in dogs measured by computed tomography.

    PubMed

    Müller, Friedrich; Schenk, Henning C; Forterre, Franck

    2017-04-01

    To determine the effects of a minimally invasive transilial vertebral (MTV) blocking procedure on the computed tomographic (CT) appearance of the lumbosacral (L7/S1) junction of dogs with degenerative lumbosacral stenosis (DLSS). Prospective study. 59 client-owned dogs with DLSS. Lumbosacral CT images were acquired with hyperextended pelvic limbs before and after MTV in all dogs. Clinical follow-up was obtained after 1 year, including neurologic status classified into 4 grades and, if possible, CT. Morphometric measurements (mean ± SEM), including foraminal area, endplate distance at L7/S1, and lumbosacral (LS) angle, were obtained on sets of reformatted parasagittal and sagittal CT images. The mean foraminal area increased from 32.5 ± 1.7 mm² to 59.7 ± 1.9 mm² on the left (ForL) and from 31.1 ± 1.4 mm² to 59.1 ± 2.0 mm² on the right (ForR) side after MTV. The mean endplate distance (EDmd) between L7/S1 increased from 3.7 ± 0.1 mm to 6.0 ± 0.1 mm, and the mean lumbosacral angle (LSa) from 148.0 ± 1.1° to 170.0 ± 1.1° after MTV. CT measurements were available 1 year postoperatively in 12 cases: ForL, 41.2 ± 3.1 mm²; ForR, 37.9 ± 3.1 mm²; EDmd, 4.3 ± 0.4 mm; and LSa, 157.6 ± 2.1°. All 39 dogs with long-term follow-up improved by at least 1 neurologic grade, 9/39 improving by 3 grades, 15/39 by 2 grades, and 15/39 by 1 grade. MTV results in clinical improvement and morphometric enlargement of the foraminal area in dogs with variable degrees of foraminal stenosis. MTV may be a valuable minimally invasive option for treatment of dogs with DLSS. © 2017 The American College of Veterinary Surgeons.

  12. Graph cuts for curvature based image denoising.

    PubMed

    Bae, Egil; Shi, Juan; Tai, Xue-Cheng

    2011-05-01

    Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been much explored. This has resulted in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations, such as staircasing effects, of the relatively simple TV model, variational models based upon higher order derivatives have been proposed. The Euler's elastica model is one such higher order model of central importance, which minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher order models are complicated and computationally complex. In this paper, we will present an efficient minimization algorithm based upon graph cuts for minimizing the energy in the Euler's elastica model, by simplifying the problem to that of solving a sequence of easy graph representable problems. This sequence has connections to the gradient flow of the energy function, and converges to a minimum point. The numerical experiments show that our new approach is more effective in maintaining smooth visual results while preserving sharp features better than TV models.
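
    For contrast with the combinatorial machinery, the TV baseline that the abstract starts from can be minimized by very elementary continuous methods. The sketch below is illustrative only (plain gradient descent on a smoothed 1-D discrete TV objective, not the graph-cut algorithm of the paper; the signal and parameters are invented):

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, eps=1e-2, step=0.02, iters=3000):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i sqrt(d_i^2 + eps),
    where d_i = x[i+1] - x[i] (a smoothed total-variation objective)."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)   # derivative of the smoothed |d_i|
        grad_tv = np.zeros_like(x)
        grad_tv[:-1] -= w              # each difference touches two samples
        grad_tv[1:] += w
        x -= step * ((x - y) + lam * grad_tv)
    return x

# Noisy step edge: TV smooths the flat regions but keeps the jump sharp.
rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * rng.standard_normal(40)
x = tv_denoise_1d(y)
```

    The staircasing limitation mentioned above is exactly what this simple model exhibits on smooth ramps, which is what motivates the curvature-based elastica model of the paper.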

  13. Discrete diffraction managed solitons: Threshold phenomena and rapid decay for general nonlinearities

    NASA Astrophysics Data System (ADS)

    Choi, Mi-Ran; Hundertmark, Dirk; Lee, Young-Ran

    2017-10-01

    We prove a threshold phenomenon for the existence/non-existence of energy minimizing solitary solutions of the diffraction management equation for strictly positive and zero average diffraction. Our methods allow for a large class of nonlinearities; they are, for example, allowed to change sign, and the weakest possible condition, it only has to be locally integrable, on the local diffraction profile. The solutions are found as minimizers of a nonlinear and nonlocal variational problem which is translation invariant. There exists a critical threshold λcr such that minimizers for this variational problem exist if their power is bigger than λcr and no minimizers exist with power less than the critical threshold. We also give simple criteria for the finiteness and strict positivity of the critical threshold. Our proof of existence of minimizers is rather direct and avoids the use of Lions' concentration compactness argument. Furthermore, we give precise quantitative lower bounds on the exponential decay rate of the diffraction management solitons, which confirm the physical heuristic prediction for the asymptotic decay rate. Moreover, for ground state solutions, these bounds give a quantitative lower bound for the divergence of the exponential decay rate in the limit of vanishing average diffraction. For zero average diffraction, we prove quantitative bounds which show that the solitons decay much faster than exponentially. Our results considerably extend and strengthen the results of Hundertmark and Lee [J. Nonlinear Sci. 22, 1-38 (2012) and Commun. Math. Phys. 309(1), 1-21 (2012)].

  14. Triplet Tuning - a New ``BLACK-BOX'' Computational Scheme for Photochemically Active Molecules

    NASA Astrophysics Data System (ADS)

    Lin, Zhou; Van Voorhis, Troy

    2017-06-01

    Density functional theory (DFT) is an efficient computational tool that plays an indispensable role in the design and screening of π-conjugated organic molecules with photochemical significance. However, due to intrinsic problems in DFT such as self-interaction error, the accurate prediction of energy levels is still a challenging task. Functionals can be parameterized to correct these problems, but the parameters that make a well-behaved functional are system-dependent rather than universal in most cases. To alleviate both problems, optimally tuned range-separated hybrid functionals were introduced, in which the range-separation parameter, ω, can be adjusted to impose Koopmans' theorem, ɛ_{HOMO} = -I. These functionals turned out to be good estimators for asymptotic properties like ɛ_{HOMO} and ɛ_{LUMO}. In the present study, we propose a ``black-box'' procedure that allows an automatic construction of molecule-specific range-separated hybrid functionals following the idea of such optimal tuning. However, instead of focusing on ɛ_{HOMO} and ɛ_{LUMO}, we target more local, photochemistry-relevant energy levels such as the lowest triplet state, T_1. In practice, we minimize the difference between two E_{{T}_1}'s that are obtained from two DFT-based approaches, Δ-SCF and linear-response TDDFT. We achieve this minimization using a non-empirical adjustment of two parameters in the range-separated hybrid functional: ω, and the percentage of Hartree-Fock contribution in the short-range exchange, c_{HF}. We apply this triplet tuning scheme to a variety of organic molecules with important photochemical applications, including laser dyes, photovoltaics, and light-emitting diodes, and achieve good agreement with the spectroscopic measurements for E_{{T}_1}'s and related local properties. A. Dreuw and M. Head-Gordon, Chem. Rev. 105, 4009 (2005). O. A. Vydrov and G. E. Scuseria, J. Chem. Phys. 125, 234109 (2006). L. Kronik, T. Stein, S. Refaely-Abramson, and R. Baer, J. Chem. Theory Comput. 8, 1515 (2012). Z. Lin and T. A. Van Voorhis, in preparation for submission to J. Chem. Theory Comput.

  15. Determination of minimal steady-state plasma level of diazepam causing seizure threshold elevation in rats.

    PubMed

    Dhir, Ashish; Rogawski, Michael A

    2018-05-01

    Diazepam, administered by the intravenous, oral, or rectal routes, is widely used for the management of acute seizures. Dosage forms for delivery of diazepam by other routes of administration, including intranasal, intramuscular, and transbuccal, are under investigation. In predicting what dosages are necessary to terminate seizures, the minimal exposure required to confer seizure protection must be known. Here we administered diazepam by continuous intravenous infusion to obtain near-steady-state levels, which allowed an assessment of the minimal levels that elevate seizure threshold. The thresholds for various behavioral seizure signs (myoclonic jerk, clonus, and tonus) were determined with the timed intravenous pentylenetetrazol seizure threshold test in rats. Diazepam was administered to freely moving animals by continuous intravenous infusion via an indwelling jugular vein cannula. Blood samples for assay of plasma levels of diazepam and metabolites were recovered via an indwelling cannula in the contralateral jugular vein. The pharmacokinetic parameters of diazepam following a single 80-μg/kg intravenous bolus injection were determined using a noncompartmental pharmacokinetic approach. The derived parameters Vd, CL, t1/2α (distribution half-life), and t1/2β (terminal half-life) for diazepam were 608 mL, 22.1 mL/min, 13.7 minutes, and 76.8 minutes, respectively. Various doses of diazepam were continuously infused without or with an initial loading dose. At the end of the infusions, the thresholds for various behavioral seizure signs were determined. The minimal plasma diazepam concentration associated with threshold elevations was estimated at approximately 70 ng/mL. The active metabolites nordiazepam, oxazepam, and temazepam achieved levels that are expected to make only minor contributions to the threshold elevations. Diazepam elevates seizure threshold at steady-state plasma concentrations lower than previously recognized. 
The minimally effective plasma concentration provides a reference that may be considered when estimating the diazepam exposure required for acute seizure treatment. Wiley Periodicals, Inc. © 2018 International League Against Epilepsy.

  16. In vitro effects on biofilm viability and antibacterial and antiadherent activities of silymarin.

    PubMed

    Evren, Ebru; Yurtcu, Erkan

    2015-07-01

    Limited treatment options for infectious diseases caused by resistant microorganisms have created the need to search for new approaches. Several herbal extracts are being studied for their enormous therapeutic potential. Silymarin extract, from Silybum marianum (milk thistle), is both an old and a new remedy for this purpose. The purpose of this study was to evaluate the antibacterial and antiadherent effects of silymarin, in addition to its activity against biofilm viability, on standard bacterial strains. Minimal inhibitory concentration (MIC), minimal bactericidal concentration (MBC), antiadherent/antibiofilm activity, and effects on biofilm viability of silymarin were evaluated against standard bacterial strains. MIC values were observed between 60 and >241 μg/mL (0.25 to >1 mmol/L). Gram-positive bacteria were inhibited at concentrations between 60 and 120 μg/mL. Gram-negative bacteria were not inhibited by the silymarin concentrations included in this study. MBC values for Gram-positive bacteria were greater than 241 μg/mL. Adherence/biofilm formation was decreased at silymarin concentrations as low as 15 μg/mL compared with the silymarin-untreated group. Silymarin reduced biofilm viability to 13% and 46% at 1 and 0.5 mmol/L concentrations, respectively. We demonstrated that silymarin shows antibacterial and antiadherent/antibiofilm activity against certain standard bacterial strains, which may be beneficial when it is used as a dietary supplement or a drug.

  17. Minimally Invasive Tubular Resection of Lumbar Synovial Cysts: Report of 40 Consecutive Cases.

    PubMed

    Birch, Barry D; Aoun, Rami James N; Elbert, Gregg A; Patel, Naresh P; Krishna, Chandan; Lyons, Mark K

    2016-10-01

    Lumbar synovial cysts are a relatively common clinical finding. Surgical treatment of symptomatic synovial cysts includes computed tomography-guided aspiration, open resection, and minimally invasive tubular resection. We report our series of 40 consecutive minimally invasive microscopic tubular lumbar synovial cyst resections. Following Institutional Review Board approval, a retrospective analysis of 40 cases of minimally invasive microscopic tubular retractor synovial cyst resections at a single institution by a single surgeon (B.D.B.) was conducted. Gross total resection was performed in all cases. Patient characteristics, surgical operating time, complications, and outcomes were analyzed. Lumbar radiculopathy was the presenting symptom in all but 1 patient, who presented with neurogenic claudication. The mean duration of symptoms was 6.5 months (range, 1-25 months), mean operating time was 58 minutes (range, 25-110 minutes), and mean blood loss was 20 mL (range, 5-50 mL). Seven patients required overnight observation. The median length of stay in the remaining 33 patients was 4 hours. Two cerebrospinal fluid leaks occurred and were repaired directly without sequelae. The mean follow-up duration was 80.7 months. Outcomes were good or excellent in 37 of the 40 patients, fair in 1 patient, and poor in 2 patients. Minimally invasive microscopic tubular retractor resection of lumbar synovial cysts can be done safely, with outcomes and complication rates comparable to open procedures and with potentially reduced operative time, length of stay, and healthcare costs. Patient selection for microscopic tubular synovial cyst resection is based in part on the anatomy of the spine and synovial cyst and is critical when recommending minimally invasive vs. open resection to patients. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Intragastric pressure during food intake: a physiological and minimally invasive method to assess gastric accommodation.

    PubMed

    Janssen, P; Verschueren, S; Ly, H Giao; Vos, R; Van Oudenhove, L; Tack, J

    2011-04-01

    The stomach relaxes upon food intake, thereby providing a reservoir while keeping the intragastric pressure (IGP) low. We set out to determine whether we could use IGP as a measurement of stomach accommodation during food intake. In fasted healthy volunteers (n = 7-17), a manometer and an infusion catheter were positioned in the proximal stomach. After a stabilization period, a nutrient drink was intragastrically infused at 15, 30, and 60 mL min(-1). To investigate the effect of impaired accommodation, the effect of N(G)-monomethyl-L-arginine (L-NMMA) was examined. The volunteers scored satiation until maximum, when the experiment ended. The IGP was presented as a change from baseline (mean ± SEM) and compared with repeated-measures ANOVA. Independent of the infusion speed, the IGP decreased initially and gradually increased thereafter. Volunteers scored maximal satiation after 699 ± 62, 809 ± 90, and 997 ± 120 mL of nutrient drink infused (15, 30, and 60 mL min(-1), respectively; P < 0.01). Maximum IGP decrease was 3.4 ± 0.5 mmHg after 205 ± 28 mL, 5.1 ± 0.7 mmHg after 212 ± 46 mL, and 5.2 ± 0.7 mmHg after 296 ± 28 mL of infused volume [15, 30, and 60 mL min(-1), respectively; not significant (ns)]. Post hoc analysis showed significant correlations between IGP increase and satiation score increase. During L-NMMA infusion, IGP was significantly increased while subjects drank significantly less (816 ± 91 vs 1032 ± 71 mL; P < 0.005). Interestingly, the correlation between IGP increase and satiation score increase did not differ after L-NMMA treatment. The IGP during nutrient drink ingestion provides a minimally invasive alternative to the barostat for the assessment of gastric accommodation. These findings furthermore indicate that IGP is a major determinant of satiation. © 2011 Blackwell Publishing Ltd.

  19. Selection of regularization parameter for l1-regularized damage detection

    NASA Astrophysics Data System (ADS)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
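
    The discrepancy principle in the second strategy is easy to illustrate in the special case of an identity forward operator, where the l1-regularized least-squares solution reduces to soft thresholding. The sketch below is illustrative only; the data are synthetic, and the actual damage-detection problem has a structural sensitivity matrix in place of the identity:

```python
import numpy as np

def soft_threshold(y, lam):
    # Closed-form minimizer of 0.5*||x - y||^2 + lam*||x||_1
    # (valid when the forward operator is the identity).
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def pick_lambda_discrepancy(y, sigma, lams):
    """Discrepancy principle: choose the largest lambda whose residual
    still matches the noise level, ||x(lam) - y||^2 <= n*sigma^2."""
    n = y.size
    feasible = [lam for lam in lams
                if np.sum((soft_threshold(y, lam) - y) ** 2) <= n * sigma ** 2]
    return max(feasible) if feasible else min(lams)

rng = np.random.default_rng(1)
x_true = np.zeros(100)
x_true[::10] = 5.0                      # sparse "damage" vector
sigma = 0.5
y = x_true + sigma * rng.standard_normal(100)
lam = pick_lambda_discrepancy(y, sigma, np.linspace(0.05, 3.0, 60))
x_hat = soft_threshold(y, lam)
```

    Because the residual norm grows monotonically with lambda here, the largest feasible lambda yields the sparsest estimate whose misfit is still explained by measurement noise, mirroring the range-of-parameters observation in the abstract.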

  20. Effect of laser beam conditioning on fabrication of clean micro-channel on stainless steel 316L using second harmonic of Q-switched Nd:YAG laser

    NASA Astrophysics Data System (ADS)

    Singh, Sanasam Sunderlal; Baruah, Prahlad Kr; Khare, Alika; Joshi, Shrikrishna N.

    2018-02-01

    Laser micromachining of metals for the fabrication of micro-channels generates ridges along the edges, accompanied by ripples along the channel bed. The ridges are due to an interference pattern formed by back reflections from the beam splitter and other optical components before the beam is focused on the workpiece. This problem can be curtailed by using a suitable aperture or iris diaphragm to cut off the unwanted portion of the laser beam before illuminating the sample. This paper reports an experimental investigation on minimizing this problem by conditioning the laser beam using an iris diaphragm and using optimum process parameters. In this work, systematic experiments have been carried out using the second harmonic of a Q-switched Nd:YAG laser to fabricate micro-channels. Initial experiments revealed that the formation of ridges along the sides of a micro-channel can easily be minimized with the help of an iris diaphragm. Further, it is noted that a clean micro-channel of depth 43.39 μm, width up to 64.49 μm, and good surface quality, with an average surface roughness (Ra) value of 370 nm, can be machined on stainless steel (SS) 316L by employing the optimum process condition: laser beam energy of 30 mJ/pulse, 11 laser scans, and a scan speed of 169.54 μm/s, with a 4 mm diameter opening of the iris diaphragm in the path of the laser beam.

  1. A unified framework for penalized statistical muon tomography reconstruction with edge preservation priors of lp norm type

    NASA Astrophysics Data System (ADS)

    Yu, Baihui; Zhao, Ziran; Wang, Xuewu; Wu, Dufan; Zeng, Zhi; Zeng, Ming; Wang, Yi; Cheng, Jianping

    2016-01-01

    The Tsinghua University MUon Tomography facilitY (TUMUTY) has been built and is utilized to reconstruct special objects with complex structure. Since fine images are required, the conventional Maximum Likelihood Scattering and Displacement (MLSD) algorithm is employed. However, due to the statistical characteristics of muon tomography and the incompleteness of the data, the reconstruction is often unstable and accompanied by severe noise. In this paper, we propose a Maximum a Posteriori (MAP) algorithm for muon tomography regularization, in which an edge-preserving prior on the scattering density image is introduced into the objective function. The prior takes the lp norm (p>0) of the image gradient magnitude, where p=1 and p=2 correspond to the well-known total-variation (TV) and Gaussian priors, respectively. The optimization transfer principle is utilized to minimize the objective function in a unified framework. At each iteration the problem is transferred to solving a cubic equation through paraboloidal surrogating. To validate the method, the French Test Object (FTO) is imaged by both numerical simulation and TUMUTY. The proposed algorithm is used for the reconstruction, where different norms are studied in detail, including l2, l1, l0.5, and an l2-0.5 mixture norm. Compared with the MLSD method, MAP achieves better image quality in both structure preservation and noise reduction. Furthermore, compared with our previous work, in which only a one-dimensional image was acquired, we achieve relatively clear three-dimensional images of FTO, in which the inner air hole and the tungsten shell are visible.

  2. Impact of fertilization on chestnut growth, N and P concentrations in runoff water on degraded slope land in South China.

    PubMed

    Zeng, Shu-Cai; Chen, Bei-Guang; Jiang, Cheng-Ai; Wu, Qi-Tang

    2007-01-01

    Growing fruit trees on the slopes of rolling hills in South China has caused serious environmental problems because of heavy application of chemical fertilizers and soil erosion. Suitable fertilizer sources and proper application rates are of key importance to both crop yields and environmental protection. In this article, the impact of four fertilizers, i.e., inorganic compound fertilizer, organic compound fertilizer, pig manure compost, and peanut cake (peanut oil pressing residue), on the growth of chestnut (Castanea mollissima Blume) on a slope in South China, and on the total N and total P concentrations in runoff waters, was investigated during two years of study with an orthogonal experimental design. Results show that the organic compound fertilizer and peanut cake increased the heights of young chestnut trees compared to the control. In addition, peanut cake increased single-fruit weights and organic compound fertilizer raised single-seed weights. All the fertilizers increased the concentrations of total N and total P in runoff waters, except for organic compound fertilizer in the first year of the experiment. The observed mean concentrations of total N varied from 1.6 mg/L to 3.2 mg/L and of total P from 0.12 mg/L to 0.22 mg/L; both increased with the amount of fertilizer applied, though not in direct proportion. On the basis of these experimental results, organic compound fertilizer at 2 kg/tree and peanut cake at 1 kg/tree are recommended to maximize chestnut growth and minimize water pollution.

  3. Antimicrobial activities of gaseous essential oils against Listeria monocytogenes on a laboratory medium and radish sprouts.

    PubMed

    Lee, Gyeongmin; Kim, Yoonbin; Kim, Hoikyung; Beuchat, Larry R; Ryu, Jee-Hoon

    2018-01-16

    The aim of this study was to evaluate the antimicrobial activities of gaseous essential oils (EO gases) against Listeria monocytogenes on the surfaces of a laboratory medium and radish sprouts. We determined the minimal inhibitory concentration (MIC) and minimal lethal concentration (MLC) values of EO gases from eight EOs extracted from basil leaves, carrot seed, cinnamon bark, cinnamon leaves, clove flower buds, oregano leaves, thyme flowers (linalool), and thyme leaves (thymol) against L. monocytogenes on a nutrient agar supplemented with 1% glucose and 0.025% bromocresol purple (NGBA). Oregano, thyme thymol, and cinnamon bark EO gases showed the strongest antilisterial activities (MIC and MLC, 78.1 μL/L). We also investigated the inhibitory and lethal activities of these gases against L. monocytogenes on the surface of radish sprouts. The number of L. monocytogenes after exposure to EO gases at ≥156 μL/L was significantly (P≤0.05) lower than that of untreated L. monocytogenes. For example, the initial number of L. monocytogenes on the surface of radish sprouts (ca. 6.3 log CFU/g) decreased by 1.4 log CFU/g within 24 h at 30°C and 43% relative humidity (RH) without EO gas treatment, whereas the number of L. monocytogenes after exposure to oregano, thyme thymol, and cinnamon bark EO gases at 156 μL/L decreased by 2.1, 2.1, and 1.8 log CFU/g, respectively, after 24 h. Although EO gases exerted greater lethal activities at higher concentrations (312 and 625 μL/L), L. monocytogenes on the surface of radish sprouts was not completely inactivated. The number of L. monocytogenes on sprouts treated with oregano, thyme thymol, and cinnamon bark EO gases at 625 μL/L decreased by 2.7-3.0 log CFU/g after 24 h at 30°C and 43% RH. Results indicate that EO gases that showed antilisterial activities on a laboratory medium also exhibited reduced lethal activity on the surface of radish sprouts. These findings will be useful when developing strategies to inactivate L. monocytogenes and possibly other foodborne pathogens on sprouts and perhaps other foods using EO gases. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Vector-valued Lizorkin-Triebel spaces and sharp trace theory for functions in Sobolev spaces with mixed L_p-norm for parabolic problems

    NASA Astrophysics Data System (ADS)

    Weidemaier, P.

    2005-06-01

    The trace problem on the hypersurface y_n=0 is investigated for a function u=u(y,t) \in L_q(0,T;W_{\underline p}^{\underline m}(\mathbb R_+^n)) with \partial_t u \in L_q(0,T; L_{\underline p}(\mathbb R_+^n)), that is, Sobolev spaces with mixed Lebesgue norm L_{\underline p,q}(\mathbb R^n_+\times(0,T))=L_q(0,T;L_{\underline p}(\mathbb R_+^n)) are considered; here \underline p=(p_1,\dots,p_n) is a vector and \mathbb R^n_+=\mathbb R^{n-1} \times (0,\infty). Such function spaces are useful in the context of parabolic equations. They allow, in particular, different exponents of summability in space and time. It is shown that the sharp regularity of the trace in the time variable is characterized by the Lizorkin-Triebel space F_{q,p_n}^{1-1/(p_nm_n)}(0,T;L_{\widetilde{\underline p}}(\mathbb R^{n-1})), \underline p=(\widetilde{\underline p},p_n). A similar result is established for first order spatial derivatives of u. These results allow one to determine the exact spaces for the data in the inhomogeneous Dirichlet and Neumann problems for parabolic equations of the second order if the solution is in the space L_q(0,T; W_p^2(\Omega)) \cap W_q^1(0,T;L_p(\Omega)) with p \le q.

  5. Robust penalty method for structural synthesis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1983-01-01

    The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two different problems of structural optimization of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.
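
    The ill-conditioning referred to above appears even in a one-variable toy problem. The following sketch uses the classic exterior quadratic penalty (not the paper's singular-perturbation reformulation) on "minimize x^2 subject to x >= 1", whose penalized minimizer mu/(1 + mu) approaches the true solution x = 1 only as the penalty weight mu grows, while the Hessian 2(1 + mu) blows up:

```python
def sumt_quadratic_penalty(mus, iters=200):
    """SUMT-style loop: for an increasing penalty weight mu, minimize
    the unconstrained function x^2 + mu*max(0, 1 - x)^2 by gradient
    descent, warm-starting each stage from the previous solution."""
    x = 0.0
    path = []
    for mu in mus:
        step = 0.5 / (1.0 + mu)   # ~1/Hessian: the step must shrink as mu grows
        for _ in range(iters):
            grad = 2.0 * x - 2.0 * mu * max(0.0, 1.0 - x)
            x -= step * grad
        path.append(x)
    return path

path = sumt_quadratic_penalty([1.0, 10.0, 100.0, 1000.0])
# path climbs 0.5, 10/11, 100/101, 1000/1001 toward the constrained optimum x = 1
```

    The shrinking stable step size is the toy version of the ill-conditioning that the singular-perturbation formulation is designed to sidestep.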

  6. Piecewise linear approximation for hereditary control problems

    NASA Technical Reports Server (NTRS)

    Propst, Georg

    1987-01-01

    Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems when a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in case the cost integral ranges over a finite time interval as well as in the case it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.
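
    After discretization, the optimal feedback in such linear-quadratic problems is obtained from a Riccati equation. As an illustrative sketch of that final ingredient only (a scalar discrete-time Riccati iteration with invented numbers, not the continuous-time operator equation of the retarded system):

```python
def riccati_fixed_point(a, b, q, r, iters=200):
    """Iterate the scalar discrete-time Riccati map
        P <- q + a*P*a - (a*P*b)^2 / (r + b*P*b)
    to its fixed point; the optimal feedback gain is then
        K = a*P*b / (r + b*P*b), with closed loop x+ = (a - b*K)*x."""
    p = q
    for _ in range(iters):
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    k = a * p * b / (r + b * p * b)
    return p, k

p, k = riccati_fixed_point(a=0.9, b=1.0, q=1.0, r=1.0)
# the closed-loop factor a - b*k lies well inside the unit interval, i.e. stable
```

    Convergence of the approximate feedback operators in the abstract is the infinite-dimensional analogue of this iteration converging as the approximating systems converge.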

  7. Concurrent optimization of material spatial distribution and material anisotropy repartition for two-dimensional structures

    NASA Astrophysics Data System (ADS)

    Ranaivomiarana, Narindra; Irisarri, François-Xavier; Bettebghor, Dimitri; Desmorat, Boris

    2018-04-01

    An optimization methodology for concurrently finding the material spatial distribution and the material anisotropy repartition is proposed for orthotropic, linear, and elastic two-dimensional membrane structures. The shape of the structure is parameterized by a density variable that determines the presence or absence of material. The polar method is used to parameterize a general orthotropic material by its elasticity tensor invariants under change of frame. A global structural stiffness maximization problem, written as a compliance minimization problem, is treated, and a volume constraint is applied. The compliance minimization can be recast as a double minimization of complementary energy. An extension of the alternate directions algorithm is proposed to solve the double minimization problem. The algorithm iterates between local minimizations in each element of the structure and global minimizations. Thanks to the polar method, the local minimizations are solved explicitly, providing analytical solutions. The global minimizations are performed with finite element calculations. The method is shown to be straightforward and efficient. Concurrent optimization of the density and anisotropy distributions of a cantilever beam and a bridge is presented.

  8. Minimizing capture-related stress on white-tailed deer with a capture collar

    USGS Publications Warehouse

    DelGiudice, G.D.; Kunkel, K.E.; Mech, L.D.; Seal, U.S.

    1990-01-01

    We compared the effect of 3 capture methods for white-tailed deer (Odocoileus virginianus) on blood indicators of acute excitement and stress from 1 February to 20 April 1989. Eleven adult females were captured by Clover trap or cannon net between 1 February and 9 April 1989 in northeastern Minnesota [USA]. These deer were fitted with radio-controlled capture collars, and 9 deer were recaptured 7-33 days later. Trapping method affected serum cortisol (P < 0.0001), hemoglobin (Hb) (P < 0.06), and packed cell volume (PCV) (P < 0.07). Cortisol concentrations were lower (P < 0.0001) in capture-collared deer (0.54 ± 0.07 [SE] μg/dL) compared to Clover-trapped (4.37 ± 0.69 μg/dL) and cannon-netted (3.88 ± 0.82 μg/dL) deer. Capture-collared deer were minimally stressed compared to deer captured by traditional methods. Use of the capture collar should permit more accurate interpretation of blood profiles of deer for assessment of condition and general health.

  9. Single-shot full resolution region-of-interest (ROI) reconstruction in image plane digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Singh, Mandeep; Khare, Kedar

    2018-05-01

    We describe a numerical processing technique that allows single-shot region-of-interest (ROI) reconstruction in image plane digital holographic microscopy with full pixel resolution. The ROI reconstruction is modelled as an optimization problem whose cost function consists of an L2-norm squared data fitting term and a modified Huber penalty term, which are minimized alternately in an adaptive fashion. The technique can provide full pixel resolution complex-valued images of the selected ROI, which is not possible with the commonly used Fourier transform method. The technique can facilitate holographic reconstruction of individual cells of interest from large field-of-view digital holographic microscopy data. The phase information, complementary to the absorption information already available from bright-field microscopy, can make the methodology attractive to the biomedical user community.
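
    The cost structure described (a squared-L2 data term plus a Huber-type penalty) can be sketched generically. The snippet below is an illustration only: it assumes a Huber penalty on finite differences of a 1-D signal and minimizes by plain gradient descent, rather than the paper's adaptive alternating scheme on complex-valued holographic data:

```python
import numpy as np

def huber_grad(d, delta):
    # Derivative of the Huber function: quadratic near zero, linear beyond delta.
    return np.where(np.abs(d) <= delta, d, delta * np.sign(d))

def smooth_l2_huber(y, lam=2.0, delta=0.1, step=0.1, iters=1000):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i huber(x[i+1] - x[i]).
    Small differences (noise) are penalized quadratically and smoothed away;
    large differences (edges) are penalized only linearly and survive."""
    x = y.copy()
    for _ in range(iters):
        g = huber_grad(np.diff(x), delta)
        grad_pen = np.zeros_like(x)
        grad_pen[:-1] -= g     # each difference touches two samples
        grad_pen[1:] += g
        x -= step * ((x - y) + lam * grad_pen)
    return x

# Noisy step edge: the Huber penalty denoises the flats, preserves the jump.
rng = np.random.default_rng(2)
y = np.concatenate([np.zeros(30), np.ones(30)]) + 0.05 * rng.standard_normal(60)
x = smooth_l2_huber(y)
```

    The Huber threshold delta plays the role of separating noise-scale variation (smoothed) from feature-scale variation (kept), which is the qualitative behavior the abstract attributes to its modified Huber penalty.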

  10. Preliminary Investigation of Longitudinal Differences in TEC and Scintillation at Transition Latitudes

    DTIC Science & Technology

    1991-04-04

    [OCR fragment of a data-quality legend from the report: codes flag tape stops during L1/L2 scintillation, GPS outages, intervals with no absolute TEC, newly inserted satellite windows, tape-drive and floppy-disk problems, unrecorded or broken-up data, tape changes, wrong satellite windows, and power interruptions; total hours at activity level 1: 300.]

  11. Nanoliter microfluidic hybrid method for simultaneous screening and optimization validated with crystallization of membrane proteins

    PubMed Central

    Li, Liang; Mustafi, Debarshi; Fu, Qiang; Tereshko, Valentina; Chen, Delai L.; Tice, Joshua D.; Ismagilov, Rustem F.

    2006-01-01

    High-throughput screening and optimization experiments are critical to a number of fields, including chemistry and structural and molecular biology. The separation of these two steps may introduce false negatives and a time delay between initial screening and subsequent optimization. Although a hybrid method combining both steps may address these problems, miniaturization is required to minimize sample consumption. This article reports a “hybrid” droplet-based microfluidic approach that combines the steps of screening and optimization into one simple experiment and uses nanoliter-sized plugs to minimize sample consumption. Many distinct reagents were sequentially introduced as ≈140-nl plugs into a microfluidic device and combined with a substrate and a diluting buffer. Tests were conducted in ≈10-nl plugs containing different concentrations of a reagent. Methods were developed to form plugs of controlled concentrations, index concentrations, and incubate thousands of plugs inexpensively and without evaporation. To validate the hybrid method and demonstrate its applicability to challenging problems, crystallization of model membrane proteins and handling of solutions of detergents and viscous precipitants were demonstrated. By using 10 μl of protein solution, ≈1,300 crystallization trials were set up within 20 min by one researcher. This method was compatible with growth, manipulation, and extraction of high-quality crystals of membrane proteins, demonstrated by obtaining high-resolution diffraction images and solving a crystal structure. This robust method requires inexpensive equipment and supplies, should be especially suitable for use in individual laboratories, and could find applications in a number of areas that require chemical, biochemical, and biological screening and optimization. PMID:17159147

  12. Improved bioluminescence and fluorescence reconstruction algorithms using diffuse optical tomography, normalized data, and optimized selection of the permissible source region

    PubMed Central

    Naser, Mohamed A.; Patterson, Michael S.

    2011-01-01

    Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and the fluorescence tomography (FT) problems. In the first step, a continuous wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1 norm objective functions, where normalized values for the light fluence rates and the corresponding Green’s functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT and the total number of fluorophore molecules for the FT. For non-uniformly distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647
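
    The shrinking of the permissible region can be sketched as follows. This is an illustrative skeleton only: a plain nonnegative least-squares fit stands in for the authors' normalized L1-norm objective, and `G`, `shrink`, and `min_pts` are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def shrinking_region_fit(G, y, shrink=0.5, min_pts=3):
    """Sketch of a shrinking-permissible-region source search: repeatedly fit
    nonnegative source powers on the current region, keep only the points that
    contribute most, and return the best fit seen over all region sizes."""
    region = np.arange(G.shape[1])          # start: the whole object is permissible
    best_resid, best_region, best_q = np.inf, region, np.zeros(0)
    while True:
        q, resid = nnls(G[:, region], y)    # nonnegative least-squares source fit
        if resid < best_resid:
            best_resid, best_region, best_q = resid, region.copy(), q.copy()
        if len(region) <= min_pts:
            break
        keep = np.argsort(q)[-max(min_pts, int(len(region) * shrink)):]
        region = region[np.sort(keep)]
    x = np.zeros(G.shape[1])
    x[best_region] = best_q
    return x

# Noiseless toy: two point sources seen through a random sensitivity matrix.
rng = np.random.default_rng(1)
G = rng.uniform(0.1, 1.0, size=(40, 20))
truth = np.zeros(20); truth[5] = 2.0; truth[12] = 1.0
x = shrinking_region_fit(G, G @ truth)
```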

  13. Finding minimum-quotient cuts in planar graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, J.K.; Phillips, C.A.

    Given a graph G = (V, E) where each vertex v ∈ V is assigned a weight w(v) and each edge e ∈ E is assigned a cost c(e), the quotient of a cut partitioning the vertices of V into sets S and S̄ is c(S, S̄)/min{w(S), w(S̄)}, where c(S, S̄) is the sum of the costs of the edges crossing the cut and w(S) and w(S̄) are the sums of the weights of the vertices in S and S̄, respectively. The problem of finding a cut whose quotient is minimum for a graph has in recent years attracted considerable attention, due in large part to the work of Rao and of Leighton and Rao. They have shown that an algorithm (exact or approximation) for the minimum-quotient-cut problem can be used to obtain an approximation algorithm for the more famous minimum-b-balanced-cut problem, which requires finding a cut (S, S̄) minimizing c(S, S̄) subject to the constraint bW ≤ w(S) ≤ (1 − b)W, where W is the total vertex weight and b is some fixed balance in the range 0 < b ≤ 1/2. Unfortunately, the minimum-quotient-cut problem is strongly NP-hard for general graphs, and the best polynomial-time approximation algorithm known for the general problem guarantees only a cut whose quotient is at most O(lg n) times optimal, where n is the size of the graph. However, for planar graphs, the minimum-quotient-cut problem appears more tractable, as Rao has developed several efficient approximation algorithms for the planar version of the problem capable of finding a cut whose quotient is at most some constant times optimal. In this paper, we improve Rao's algorithms, both in terms of accuracy and speed. As our first result, we present two pseudopolynomial-time exact algorithms for the planar minimum-quotient-cut problem. As Rao's most accurate approximation algorithm for the problem -- also a pseudopolynomial-time algorithm -- guarantees only a 1.5-times-optimal cut, our algorithms represent a significant advance.
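
    As a concrete check of the definition, the quotient of a given cut can be computed directly, and tiny instances can be solved by exhaustive search (exponential time, which is precisely what the algorithms discussed above avoid; the example graph is illustrative).

```python
import itertools

def cut_quotient(n, weights, edges, S):
    """Quotient c(S, S-bar) / min(w(S), w(S-bar)) of the cut (S, S-bar)."""
    cut_cost = sum(c for u, v, c in edges if (u in S) != (v in S))
    w_S = sum(weights[v] for v in S)
    w_Sbar = sum(weights[v] for v in range(n) if v not in S)
    return cut_cost / min(w_S, w_Sbar)

def min_quotient_cut_bruteforce(n, weights, edges):
    """Exhaustive search over nontrivial bipartitions; fine only for tiny graphs."""
    best_q, best_S = float("inf"), None
    for r in range(1, n // 2 + 1):
        for S in itertools.combinations(range(n), r):
            q = cut_quotient(n, weights, edges, set(S))
            if q < best_q:
                best_q, best_S = q, set(S)
    return best_q, best_S

# Unit-weight path 0-1-2-3: cutting the middle edge costs 1 and balances the
# weights, giving quotient 1 / min(2, 2) = 0.5.
q, S = min_quotient_cut_bruteforce(4, [1, 1, 1, 1],
                                   [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)])
```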

  15. Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration.

    PubMed

    Doss, Hani; Tan, Aixin

    2014-09-01

    In the classical biased sampling problem, we have k densities π_1(·), …, π_k(·), each known up to a normalizing constant, i.e. for l = 1, …, k, π_l(·) = ν_l(·)/m_l, where ν_l(·) is a known function and m_l is an unknown constant. For each l, we have an iid sample from π_l(·), and the problem is to estimate the ratios m_l/m_s for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the π_l's are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case.
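
    The basic identity underlying such estimates can be illustrated in the simplest iid setting. The paper's contribution concerns Markov-chain samples and regeneration-based standard errors, neither of which is sketched here, and the two densities below are illustrative choices.

```python
import numpy as np

# Illustrative densities (not the paper's): nu_1 is an unnormalized N(0, 1),
# so m_1 = sqrt(2*pi); nu_2 is an unnormalized N(0, 4), so m_2 = 2*sqrt(2*pi),
# and the true ratio m_1/m_2 is exactly 0.5.
nu1 = lambda x: np.exp(-0.5 * x ** 2)
nu2 = lambda x: np.exp(-0.5 * (x / 2.0) ** 2)

# The identity behind such estimates: if X ~ pi_2 = nu_2 / m_2, then
#     E[ nu_1(X) / nu_2(X) ] = (1/m_2) * integral of nu_1 = m_1 / m_2.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, size=200_000)     # iid draws from pi_2
ratio_hat = np.mean(nu1(x) / nu2(x))       # consistent estimate of m_1/m_2
```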

  16. Effects of Instruction on L2 Pronunciation Development: A Synthesis of 15 Quasi-Experimental Intervention Studies

    ERIC Educational Resources Information Center

    Saito, Kazuya

    2012-01-01

    In recent years, several researchers have made strong calls for research on teaching for "intelligible" (rather than "native-like") pronunciation. Their reasoning is that, while maintaining their first language (L1)-related accents to a certain degree, students need to fulfill the minimal phonological requirements to be comprehensible in order to…

  17. Multiobjective Collaborative Optimization of Systems of Systems

    DTIC Science & Technology

    2005-06-01

    [OCR fragment of the report's front matter: Appendix K (HSC model and optimization description), Appendix L (HSC optimization code), Table 6 (system variables of the FPF data set showing minimal HSC impact), and a flowchart tying the ITS and HSC models (Appendices E, F, I-N) to the data analysis, conclusions, and future work.]

  18. The longitudinal development of fine phonetic detail in late learners of Spanish

    NASA Astrophysics Data System (ADS)

    Casillas, Joseph Vincent

    The present investigation analyzed early second language (L2) learning in adults. A common finding regarding L2 acquisition is that early learning appears to be necessary in order to perform on the same level as a native speaker. Surprisingly, many current theoretical models posit that the human ability to learn novel speech sounds remains active throughout the lifespan. In light of this fact, this project examines L2 acquisition in late learners with a special focus on L1/L2 use, input, and context of learning. Research regarding L1/L2 use has tended to be observational, and throughout the previous six decades of L2 research the role of input has been minimized and left largely unexplained. This study includes two production experiments and two perception experiments and focuses on the role of L1/L2 use and input in L2 acquisition in late learners in order to add to current research regarding their role in accurately and efficiently acquiring a novel speech sound. Moreover, this research is concerned with shedding light on when, if at all, during the acquisition process late learners begin to acquire a new, language-specific phonetic system, and the amount of exposure necessary in order to acquire L2 fine-phonetic detail. The experimental design presented in the present study also aims to shed light on the temporal relationship between production and perception with regard to category formation. To begin to fully understand these issues, the present study proposes a battery of tasks which were administered throughout the course of a domestic immersion program. Domestic immersion provides an understudied linguistic context in which L1 use is minimized, target language use is maximized, and L2 input is abundant. The results suggest that L2 phonetic category formation occurs at an early stage of development, and is perceptually driven. Moreover, early L2 representations are fragile, and especially susceptible to cross-language interference. 
Together, the studies undertaken for this work add to our understanding of the initial stages of the acquisition of L2 phonology in adult learners.

  19. Prospective clinical study on long-term swallowing function and voice quality in advanced head and neck cancer patients treated with concurrent chemoradiotherapy and preventive swallowing exercises.

    PubMed

    Kraaijenga, Sophie A C; van der Molen, Lisette; Jacobi, Irene; Hamming-Vrieze, Olga; Hilgers, Frans J M; van den Brekel, Michiel W M

    2015-11-01

    Concurrent chemoradiotherapy (CCRT) for advanced head and neck cancer (HNC) is associated with substantial early and late side effects, most notably regarding swallowing function, but also regarding voice quality and quality of life (QoL). Despite increased awareness/knowledge on acute dysphagia in HNC survivors, long-term (i.e., beyond 5 years) prospectively collected data on objective and subjective treatment-induced functional outcomes (and their impact on QoL) still are scarce. The objective of this study was the assessment of long-term CCRT-induced results on swallowing function and voice quality in advanced HNC patients. The study was conducted as a randomized controlled trial on preventive swallowing rehabilitation (2006-2008) in a tertiary comprehensive HNC center with twenty-two disease-free and evaluable HNC patients as participants. Multidimensional assessment of functional sequelae was performed with videofluoroscopy, mouth opening measurements, Functional Oral Intake Scale, acoustic voice parameters, and (study specific, SWAL-QoL, and VHI) questionnaires. Outcome measures at 6 years post-treatment were compared with results at baseline and at 2 years post-treatment. At a mean follow-up of 6.1 years most initial tumor- and treatment-related problems remained similarly low to those observed after 2 years follow-up, except increased xerostomia (68%) and increased (mild) pain (32%). Acoustic voice analysis showed less voicedness, increased fundamental frequency, and more vocal effort for the tumors located below the hyoid bone (n = 12), without recovery to baseline values. Patients' subjective vocal function (VHI score) was good. Functional swallowing and voice problems at 6 years post-treatment are minimal in this patient cohort, originating from preventive and continued post-treatment rehabilitation programs.

  20. Direct and inverse theorems on approximation by root functions of a regular boundary-value problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radzievskii, G V

    2006-08-31

    One considers the spectral problem x^(n) + Fx = λx with boundary conditions U_j(x) = 0, j = 1, …, n, for functions x on [0,1]. It is assumed that F is a bounded linear operator from the Hölder space C^γ, γ ∈ [0, n−1), into L_1, and that the U_j are bounded linear functionals on C^{k_j} with k_j ∈ {0, …, n−1}. Let P_ζ be the linear span of the root functions of the problem x^(n) + Fx = λx, U_j(x) = 0, j = 1, …, n, corresponding to the eigenvalues λ_k with |λ_k| < ζ^n, and let E_ζ(f)_{W_p^l} := inf{‖f − g‖_{W_p^l} : g ∈ P_ζ}. An estimate of E_ζ(f)_{W_p^l} is obtained in terms of the K-functional K(ζ^{−m}, f; W_p^l, W_{p,U}^{l+m}) := inf{‖f − x‖_{W_p^l} + ζ^{−m}‖x‖_{W_p^{l+m}} : x ∈ W_p^{l+m}, U_j(x) = 0 for k_j

  1. Finite-element grid improvement by minimization of stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.; Oswald, Fred B.

    1989-01-01

    A new and simple method of finite-element grid improvement is presented. The objective is to improve the accuracy of the analysis. The procedure is based on a minimization of the trace of the stiffness matrix. For a broad class of problems this minimization is seen to be equivalent to minimizing the potential energy. The method is illustrated with the classical tapered bar problem examined earlier by Prager and Masur. Identical results are obtained.
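
    A minimal sketch of trace minimization, under an assumed two-element discretization of a tapered bar rather than the paper's exact Prager-Masur formulation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed setup for illustration: a two-element linear bar on [0, 1] with a
# tapered cross-section A(x) = 1 - 0.5 x. A linear bar element contributes
# (E*A/L) [[1, -1], [-1, 1]] to the stiffness matrix, so its trace contribution
# is 2*E*A/L, with A evaluated at the element midpoint.
E = 1.0
area = lambda x: 1.0 - 0.5 * x

def stiffness_trace(s):
    """Trace of the assembled 2-element stiffness matrix, interior node at s."""
    tr = 0.0
    for a, b in [(0.0, s), (s, 1.0)]:
        length = b - a
        tr += 2.0 * E * area(0.5 * (a + b)) / length
    return tr

# Grid improvement: place the interior node to minimize the trace.
res = minimize_scalar(stiffness_trace, bounds=(0.05, 0.95), method="bounded")
```

    On this toy bar the minimizer shifts the node toward the thin end, so the thicker (stiffer) region gets the longer element.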

  2. Finite-element grid improvement by minimization of stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.; Oswald, Fred B.

    1987-01-01

    A new and simple method of finite-element grid improvement is presented. The objective is to improve the accuracy of the analysis. The procedure is based on a minimization of the trace of the stiffness matrix. For a broad class of problems this minimization is seen to be equivalent to minimizing the potential energy. The method is illustrated with the classical tapered bar problem examined earlier by Prager and Masur. Identical results are obtained.

  3. Exact solution for the optimal neuronal layout problem.

    PubMed

    Chklovskii, Dmitri B

    2004-10-01

    Evolution perfected brain design by maximizing its functionality while minimizing costs associated with building and maintaining it. Assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem. For a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.
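
    The reduction to a quadratic form can be illustrated with a graph Laplacian (an illustrative connectivity matrix and simplified constraints, not the paper's biological ones):

```python
import numpy as np

# Sketch of the quadratic-cost reduction: minimize the wiring cost
#     sum_ij C_ij (x_i - x_j)^2 = 2 x^T L x,   with L = diag(C 1) - C,
# over one-dimensional layouts x constrained to zero mean and unit norm.
# The minimizer is the eigenvector of the second-smallest Laplacian eigenvalue.
C = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)    # symmetric connectivity matrix
Lap = np.diag(C.sum(axis=1)) - C
eigvals, eigvecs = np.linalg.eigh(Lap)       # eigenvalues in ascending order
layout = eigvecs[:, 1]                       # optimal constrained 1-D layout
```

    The zero-mean constraint discards the trivial constant layout (eigenvalue 0); the unit-norm constraint fixes the scale, since otherwise shrinking all positions shrinks the cost.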

  4. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    PubMed Central

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
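
    The IG mechanics (destruction followed by greedy best-position reconstruction) can be sketched on a toy single-berth instance; the paper's discrete DBAP is multi-berth, so everything below is a simplified stand-in with illustrative data.

```python
import random

def total_service_time(seq, arrival, handling):
    """Toy single-berth objective: sum over ships of (finish time - arrival)."""
    t, total = 0.0, 0.0
    for ship in seq:
        t = max(t, arrival[ship]) + handling[ship]
        total += t - arrival[ship]
    return total

def iterated_greedy(arrival, handling, d=2, iters=200, seed=0):
    """Iterated greedy: remove d random ships from the incumbent sequence,
    greedily re-insert each at its best position, accept improvements."""
    rng = random.Random(seed)
    seq = sorted(range(len(arrival)), key=arrival.__getitem__)   # FCFS start
    best_cost, best_seq = total_service_time(seq, arrival, handling), seq[:]
    for _ in range(iters):
        partial = best_seq[:]
        removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
        for ship in removed:   # greedy best-position re-insertion
            cands = [partial[:i] + [ship] + partial[i:]
                     for i in range(len(partial) + 1)]
            partial = min(cands, key=lambda c: total_service_time(c, arrival, handling))
        cost = total_service_time(partial, arrival, handling)
        if cost < best_cost:
            best_cost, best_seq = cost, partial
    return best_cost, best_seq

# Three ships, all available at t = 0: the optimum serves them shortest-first,
# for a total service time of 1 + 3 + 8 = 12.
cost, seq = iterated_greedy(arrival=[0.0, 0.0, 0.0], handling=[5.0, 1.0, 2.0])
```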

  5. Determination of plutonium in nitric acid solutions using energy dispersive L X-ray fluorescence with a low power X-ray generator

    NASA Astrophysics Data System (ADS)

    Py, J.; Groetz, J.-E.; Hubinois, J.-C.; Cardona, D.

    2015-04-01

    This work presents the development of an in-line energy dispersive L X-ray fluorescence spectrometer set-up, with a low power X-ray generator and a secondary target, for the determination of plutonium concentration in nitric acid solutions. The intensity of the L X-rays from the internal conversion and gamma rays emitted by the daughter nuclei from plutonium is minimized and corrected, in order to eliminate the interferences with the L X-ray fluorescence spectrum. The matrix effects are then corrected by the Compton peak method. A calibration plot for plutonium solutions within the range 0.1-20 g L-1 is given.
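
    The Compton-peak matrix correction amounts to normalizing the analyte line by the Compton scatter peak before fitting the calibration. A sketch with invented numbers, not the paper's data:

```python
import numpy as np

# Illustrative calibration: the analyte L-line intensity is divided by the
# Compton scatter peak intensity, which compensates for matrix effects, and
# the normalized signal is fit linearly against the known standards.
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 20.0])      # standards, g/L (assumed)
i_line = np.array([0.8, 4.1, 8.0, 41.0, 79.0, 161.0])  # L X-ray counts (assumed)
i_compton = np.array([100.0, 99.0, 101.0, 98.0, 100.0, 102.0])
y = i_line / i_compton                                  # matrix-corrected signal
slope, intercept = np.polyfit(conc, y, 1)               # linear calibration plot
unknown = (0.40 - intercept) / slope                    # invert for an unknown
```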

  6. [(Modic) signal alterations of vertebral endplates and their correlation to a minimally invasive treatment of lumbar disc herniation using epidural injections].

    PubMed

    Liphofer, J P; Theodoridis, T; Becker, G T; Koester, O; Schmid, G

    2006-11-01

    To study the influence of (Modic) signal alterations (SA) of the cartilage endplate (CEP) of vertebrae L3-S1 on the outcome of an in-patient minimally invasive treatment (MIT) using epidural injections on patients with lumbar disc herniation (LDH). The MR images of 59 consecutive patients with LDH within segments L3/L4 - L5/S1 undergoing in-patient minimally invasive treatment with epidural injections were evaluated in a clinical study. The (Modic) signal alterations of the CEP were recorded using T1- and T2-weighted sagittal images. On the basis of the T2-weighted sagittal images, the extension and distribution of the SA were measured by dividing each CEP into 9 areas. The outcome of the MIT was recorded using the Oswestry Disability Index (ODI) before and after therapy and in a 3-month follow-up. Within a subgroup of patients (n = 35), the distribution and extension of the signal alterations were correlated with the development of the ODI. Segments with LDH showed significantly more (p < 0.001) SA of the CEP than segments without LDH. Although the extension of the SA was not dependent on sex, it did increase significantly with age (p = 0.017). The outcome after MIT did not depend on the sex and age of the patients nor on the type of LDH. The SA extension tended to have a negative correlation with the outcome after MIT after 3 months (p = 0.071). A significant negative correlation could be established between the SA extension in the central section of the upper endplate and the outcome after 3 months (p = 0.019). 1. Lumbar disc herniation is clearly associated with the prevalence of (Modic) signal alterations. 2. Extensive signal alterations tend to correlate with a negative outcome of an MIT using epidural injections. 3. Such SA in the central portion of the upper CEP correlate significantly with a negative treatment result. 4. 
The central portion of the upper CEP being extensively affected by (Modic) SA is a negative predictor for the success of a minimally invasive pain therapy.

  7. Bilevel formulation of a policy design problem considering multiple objectives and incomplete preferences

    NASA Astrophysics Data System (ADS)

    Hawthorne, Bryant; Panchal, Jitesh H.

    2014-07-01

    A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.
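
    The expected value formulation mentioned above can be sketched for a tiny stochastic linear complementarity problem: average the random data, then solve one deterministic LCP. The projected Gauss-Seidel solver and all numbers are illustrative, not the article's FIT model.

```python
import numpy as np

def solve_lcp_pgs(M, q, iters=500):
    """Projected Gauss-Seidel for the LCP: find z >= 0 with w = M z + q >= 0
    and z^T w = 0. Converges for symmetric positive-definite M."""
    z = np.zeros(len(q))
    for _ in range(iters):
        for i in range(len(q)):
            z[i] = max(0.0, z[i] - (M[i] @ z + q[i]) / M[i, i])
    return z

# Expected-value formulation: average the random matrix samples first, then
# solve the single deterministic complementarity problem.
rng = np.random.default_rng(2)
samples = [np.array([[2.0, 0.5], [0.5, 2.0]]) + 0.1 * rng.standard_normal((2, 2))
           for _ in range(1000)]
M_bar = sum(samples) / len(samples)
M_bar = 0.5 * (M_bar + M_bar.T)          # symmetrize the averaged matrix
q = np.array([-1.0, 1.0])
z = solve_lcp_pgs(M_bar, q)
w = M_bar @ z + q                        # complementary slackness: z[i]*w[i] = 0
```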

  8. Obtaining sparse distributions in 2D inverse problems.

    PubMed

    Reci, A; Sederman, A J; Gladden, L F

    2017-08-01

    The mathematics of inverse problems has relevance across numerous estimation problems in science and engineering. L1 regularization has attracted recent attention in reconstructing the system properties in the case of sparse inverse problems; i.e., when the true property sought is not adequately described by a continuous distribution, in particular in Compressed Sensing image reconstruction. In this work, we focus on the application of L1 regularization to a class of inverse problems: relaxation-relaxation, T1-T2, and diffusion-relaxation, D-T2, correlation experiments in NMR, which have found widespread applications in a number of areas including probing surface interactions in catalysis and characterizing fluid composition and pore structures in rocks. We introduce a robust algorithm for solving the L1 regularization problem and provide a guide to implementing it, including the choice of the amount of regularization used and the assignment of error estimates. We then show experimentally that L1 regularization has significant advantages over both the Non-Negative Least Squares (NNLS) algorithm and Tikhonov regularization. It is shown that the L1 regularization algorithm stably recovers a distribution at a signal-to-noise ratio < 20 and that it resolves relaxation time constants and diffusion coefficients differing by as little as 10%. The enhanced resolving capability is used to measure the inter- and intra-particle concentrations of a mixture of hexane and dodecane present within porous silica beads immersed within a bulk liquid phase; neither NNLS nor Tikhonov regularization is able to provide this resolution. This experimental study shows that the approach enables discrimination between different chemical species when direct spectroscopic discrimination is impossible, and hence measurement of chemical composition within porous media, such as catalysts or rocks, is possible while still being stable to high levels of noise.
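
    One standard way to attack such an L1 regularization problem is iteratively reweighted least squares (IRLS). The sketch below is illustrative only: the exponential kernel, grid, λ, and the absence of a non-negativity constraint are all assumptions, and the paper's algorithm additionally prescribes the regularization amount and error estimates.

```python
import numpy as np

def l1_irls(K, s, lam=0.01, iters=50, eps=1e-8):
    """Minimize ||K f - s||^2 + lam * ||f||_1 by iteratively reweighted least
    squares: each pass solves (K^T K + lam * diag(1/(|f| + eps))) f = K^T s,
    the weighted-L2 surrogate of the L1 penalty at the previous iterate."""
    n = K.shape[1]
    f = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ s)   # ridge start
    for _ in range(iters):
        W = np.diag(lam / (np.abs(f) + eps))
        f = np.linalg.solve(K.T @ K + W, K.T @ s)
    return f

# Toy T2-style problem: two discrete relaxation components under mild noise.
rng = np.random.default_rng(3)
T2 = np.logspace(-2, 1, 40)                    # candidate relaxation times
t = np.linspace(0.01, 5.0, 100)                # acquisition times
K = np.exp(-t[:, None] / T2[None, :])          # exponential kernel
truth = np.zeros(40); truth[[10, 25]] = 1.0    # sparse two-peak distribution
s = K @ truth + 0.01 * rng.standard_normal(100)
f = l1_irls(K, s)
```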

  10. The research of the coupled orbital-attitude controlled motion of celestial body in the neighborhood of the collinear libration point L1

    NASA Astrophysics Data System (ADS)

    Shmyrov, A.; Shmyrov, V.; Shymanchuk, D.

    2017-10-01

    This article considers the motion of a celestial body within the restricted three-body problem of the Sun-Earth system. The equations of controlled coupled attitude-orbit motion in the neighborhood of the collinear libration point L1 are investigated. The translational orbital motion is described using Hill's equations of the circular restricted three-body problem of the Sun-Earth system; the rotational motion is described using Euler's dynamic equations and the quaternion kinematic equation. We investigate the stability of the body's rotational orbital motion at relative equilibrium positions, and its stabilization by the proposed control laws, in the neighborhood of the collinear libration point L1. For the stabilization problem, a Lyapunov function is constructed as the sum of the kinetic energy and a special "kinematic function" of the Rodrigues-Hamilton parameters. Numerical modeling of the controlled rotational motion of a celestial body at libration point L1 is carried out, and the numerical characteristics of the control parameters and rotational motion are given.
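
    Hill-type dynamics near L1 can be sketched numerically. The normalization below is one standard form of the planar Hill problem (an assumption; the article's controlled, coupled attitude-orbit equations are not reproduced), and it exhibits the equilibrium and the instability that motivate stabilizing control laws.

```python
import numpy as np

def hill_rhs(state):
    """Planar Hill's problem in rotating coordinates (a standard normalization):
        x'' - 2 y' = 3 x - x / r^3,    y'' + 2 x' = -y / r^3,
    whose collinear libration points sit at (±3^(-1/3), 0)."""
    x, y, vx, vy = state
    r3 = (x * x + y * y) ** 1.5
    return np.array([vx, vy, 2.0 * vy + 3.0 * x - x / r3, -2.0 * vx - y / r3])

def rk4_step(f, s, h):
    """One classical Runge-Kutta 4 step of size h."""
    k1 = f(s); k2 = f(s + 0.5 * h * k1)
    k3 = f(s + 0.5 * h * k2); k4 = f(s + h * k3)
    return s + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

L1_state = np.array([3.0 ** (-1.0 / 3.0), 0.0, 0.0, 0.0])   # equilibrium point

# Uncontrolled motion started slightly off L1 drifts away exponentially --
# the instability that makes station-keeping control necessary.
s = L1_state + np.array([1e-6, 0.0, 0.0, 0.0])
for _ in range(1000):
    s = rk4_step(hill_rhs, s, 0.005)
drift = abs(s[0] - L1_state[0])
```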

  11. Muscle gap approach under a minimally invasive channel technique for treating long segmental lumbar spinal stenosis

    PubMed Central

    Bin, Yang; De cheng, Wang; wei, Wang Zong; Hui, Li

    2017-01-01

    Abstract This study aimed to compare the efficacy of the muscle gap approach under a minimally invasive channel surgical technique with the traditional median approach. In the Orthopedics Department of the Traditional Chinese and Western Medicine Hospital, Tongzhou District, Beijing, 68 cases of lumbar spinal canal stenosis underwent surgery using either the muscle gap approach under a minimally invasive channel technique or a median approach between September 2013 and February 2016. Both approaches adopted lumbar spinal canal decompression, intervertebral disk removal, cage implantation, and pedicle screw fixation. The operation time, bleeding volume, postoperative drainage volume, and preoperative and postoperative visual analog scale (VAS) and Japanese Orthopaedic Association (JOA) scores were compared between the 2 groups. All patients were followed up for more than 1 year. No significant difference between the 2 groups was found with respect to age, gender, or surgical segments. Nor were differences noted in operation time, intraoperative bleeding volume, preoperative and 1-month postoperative VAS scores, or preoperative, 1-month, and 6-month postoperative JOA scores (P > .05). The amount of postoperative wound drainage (260.90 ± 160 mL vs 447.80 ± 183.60 mL, P < .001) and the VAS score 6 months after the operation (1.71 ± 0.64 vs 2.19 ± 0.87, P = .01) were significantly lower in the muscle gap approach group than in the median approach group (P < .05). In the muscle gap approach under a minimally invasive channel group, the average drainage volume was reduced by 187 mL, and the average VAS score 6 months after the operation was reduced by 0.48. The muscle gap approach under a minimally invasive channel technique is a feasible method to treat long segmental lumbar spinal canal stenosis.
It retains the integrity of the posterior spine complex to the greatest extent, so as to reduce the adjacent spinal segmental degeneration and soft tissue trauma. Satisfactory short-term and long-term clinical results were obtained. PMID:28796075

  12. New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times

    NASA Astrophysics Data System (ADS)

    Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid

    2017-09-01

    In the literature, multi-objective dynamic scheduling problems and simple priority rules are widely studied. While simple rules are not efficient enough, owing to their simplicity and lack of general insight, composite dispatching rules perform well because they are derived from experiments. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied. The objective is to minimize mean flow time and mean tardiness. A 0-1 mixed integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed to solve it, generated within a genetic programming framework with suitably chosen operators. Furthermore, a discrete-event simulation model is built to examine the performance of the scheduling rules, comparing the four new heuristic rules with six heuristic rules adapted from the literature. The experimental results make clear that the composite dispatching rules produced by genetic programming outperform the others in minimizing mean flow time and mean tardiness.
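
    The composite-rule idea above can be illustrated with a small sketch. The priority expression, its weights, and the job attributes below are hypothetical stand-ins for the kind of closed-form rule a genetic programming framework might evolve; they are not the four rules proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class Job:
    proc: float    # processing time at the current stage
    due: float     # due date
    setup: float   # sequence-dependent setup time after the current job

def composite_priority(job, now, w=(1.0, 1.0, 1.0)):
    """Hypothetical composite dispatching rule (lower is better):
    a weighted blend of shortest processing time, minimum slack and
    minimum setup -- the kind of expression genetic programming
    could evolve from simpler priority rules."""
    slack = max(job.due - now - job.proc, 0.0)
    return w[0] * job.proc + w[1] * slack + w[2] * job.setup

def dispatch(queue, now):
    # pick the queued job with the best (smallest) composite priority
    return min(queue, key=lambda j: composite_priority(j, now))

queue = [Job(3.0, 10.0, 1.0), Job(5.0, 6.0, 0.0), Job(2.0, 20.0, 4.0)]
chosen = dispatch(queue, now=0.0)
```

    At time 0 the rule selects the job with the tight due date, since its small slack term dominates the comparison.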

  13. Minimally invasive removal of a recurrent lumbar herniated nucleus pulposus by the small incised microendoscopic discectomy interlaminar approach.

    PubMed

    Koga, S; Sairyo, K; Shibuya, I; Kanamori, Y; Kosugi, T; Matsumoto, H; Kitagawa, Y; Sumita, T; Dezawa, A

    2012-02-01

    In this report, we introduce two cases of recurrent herniated nucleus pulposus (HNP) at L5-S1 that were successfully removed using the small incised microendoscopic discectomy (sMED) technique, proposed by Dezawa and Sairyo in 2011. sMED was performed via the interlaminar approach with a percutaneous endoscope. The patients had previously undergone microendoscopic discectomy for HNP. For the recurrent HNP, the sMED interlaminar approach was selected because the HNP occurred at the level of L5-S1; the percutaneous endoscopic transforaminal approach was not possible for anatomical reasons. To perform sMED via the interlaminar approach, we employed new, specially made devices. In conclusion, sMED is the most minimally invasive approach available for HNP, and its limitations have been gradually eliminated with the introduction of specially made devices. In the near future, percutaneous endoscopic surgery could become the gold standard for minimally invasive disc surgery. © 2012 Japan Society for Endoscopic Surgery, Asia Endosurgery Task Force and Blackwell Publishing Asia Pty Ltd.

  14. Surface Tension Driven Instability in the Regime of Stokes Flow

    NASA Astrophysics Data System (ADS)

    Yao, Zhenwei; Bowick, Mark; Xing, Xiangjun

    2010-03-01

    A cylinder of liquid inside another liquid is unstable towards droplet formation. This instability is driven by minimization of surface tension energy and was analyzed first by Rayleigh [1,2] and then by Tomotika [3]. We revisit this problem in the limit of small Laplace number, where the inertia of the liquids can be completely ignored. The stream function is found to obey the biharmonic equation, and its analytic solutions are found. We rederive Tomotika's main results and also obtain many new analytic results for the velocity fields. We also apply our formalism to study a recent experiment on toroidal liquid droplets [4]. Our framework should have many applications in microfluidics. [1] L. Rayleigh, On the Instability of a Cylinder of Viscous Liquid Under Capillary Force, Scientific Papers, Cambridge, Vol. III, 1902. [2] L. Rayleigh, On the Instability of Cylindrical Fluid Surfaces, Scientific Papers, Cambridge, Vol. III, 1902. [3] S. Tomotika, On the Instability of a Cylindrical Thread of a Viscous Liquid Surrounded by Another Viscous Fluid, Proceedings of the Royal Society of London, Series A, Vol. 150, Issue 870, pp. 322-337. [4] E. Pairam and A. Fernández-Nieves, Generation and Stability of Toroidal Droplets in a Viscous Liquid, Physical Review Letters 102, 234501 (2009).
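
    The governing equation mentioned above can be stated explicitly. In the small-Laplace-number (Stokes) limit, inertia drops out of the momentum balance; taking the curl of the creeping-flow equation eliminates the pressure and leaves the standard biharmonic equation for the stream function:

```latex
% creeping flow: \nabla p = \mu \nabla^{2}\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0
% taking the curl eliminates p; in a stream-function representation this yields
\nabla^{4}\psi \;=\; \nabla^{2}\!\left(\nabla^{2}\psi\right) \;=\; 0 .
```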

  15. Laboratory calibration and field testing of the Chemcatcher-Metal for trace levels of rare earth elements in estuarine waters.

    PubMed

    Petersen, Jördis; Pröfrock, Daniel; Paschke, Albrecht; Broekaert, Jose A C; Prange, Andreas

    2015-10-01

    Little is known about water concentrations of rare earth elements (REEs) in the marine environment. The direct measurement of REEs in coastal waters is a challenging task due to their ultra-low concentrations as well as the high salt content of the water samples. To quantify these elements at environmental concentrations (pg L(-1) to low ng L(-1)) in coastal waters, current analytical techniques are generally expensive and time consuming, and require complex chemical preconcentration procedures. Therefore, an integrative passive sampler was tested as a more economical alternative sampling approach for REE analysis. We used a Chemcatcher-Metal passive sampler consisting of a 3M Empore Chelating Disk as the receiving phase and a cellulose acetate membrane as the diffusion-limiting layer. The effects of water turbulence and temperature on the uptake rates of REEs were analyzed during 14-day calibration experiments in a flow-through exposure tank system. The sampling rates were in the range of 0.42 mL h(-1) (13 °C; 0.25 m s(-1)) to 4.01 mL h(-1) (13 °C; 1 m s(-1)). Similar results were obtained for the different REEs under investigation. Water turbulence was the most important influence on uptake. The uptake rates were adequate to ascertain time-weighted average concentrations of REEs during a field experiment in the Elbe Estuary near Cuxhaven Harbor (exposure time 4 weeks). REE concentrations were determined to be in the range 0.2 to 13.8 ng L(-1), with the highest concentrations found for neodymium and samarium. In comparison, most of the spot samples measured alongside the Chemcatcher samplers had REE concentrations below the limit of detection, largely because of the dilution needed to minimize the analytical problems that arise from the high salt content of marine water samples. This study was among the first efforts to measure REE levels in the field using a passive sampling approach. Our results suggest that passive samplers could be an effective tool to monitor ultra-trace concentrations of REEs in coastal waters with high salt content.

  16. Taxonomic structure of the yeasts and lactic acid bacteria microbiota of pineapple (Ananas comosus L. Merr.) and use of autochthonous starters for minimally processing.

    PubMed

    Di Cagno, Raffaella; Cardinali, Gianluigi; Minervini, Giovanna; Antonielli, Livio; Rizzello, Carlo Giuseppe; Ricciuti, Patrizia; Gobbetti, Marco

    2010-05-01

    Pichia guilliermondii was the only yeast identified in pineapple fruits. Lactobacillus plantarum and Lactobacillus rossiae were the main identified species of lactic acid bacteria. Typing of lactic acid bacteria differentiated isolates depending on the layers. L. plantarum 1OR12 and L. rossiae 2MR10 were selected from among the lactic acid bacteria isolates based on their kinetics of growth and acidification. Five technological options, including minimal processing, were considered for pineapple: heating at 72 degrees C for 15 s (HP); spontaneous fermentation without (FP) or followed by heating (FHP); and fermentation by the selected autochthonous L. plantarum 1OR12 and L. rossiae 2MR10 without (SP) or preceded by heating (HSP). After 30 days of storage at 4 degrees C, HSP and SP had numbers of lactic acid bacteria 1000 to 1,000,000 times higher than the other processed pineapples. The number of yeasts was lowest in HSP and SP. The Community Level Catabolic Profiles of processed pineapples indirectly confirmed the capacity of the autochthonous starters to dominate during fermentation. HSP and SP also showed the highest antioxidant activity and firmness and the better preservation of the natural colours, and were preferred for odour and overall acceptability. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  17. Anterior Longitudinal Ligament Release From the Minimally Invasive Lateral Retroperitoneal Transpsoas Approach: Technical Note.

    PubMed

    Beckman, Joshua M; Marengo, Nicola; Murray, Gisela; Bach, Konrad; Uribe, Juan S

    2016-09-01

    The technique for minimally invasive anterior longitudinal ligament release is a major advancement in lateral access surgery. This method provides hypermobility of lumbar segments to allow for aggressive lordosis restoration while maintaining the benefits of indirect decompression and minimally invasive access. To provide video demonstration of the lateral retroperitoneal transpsoas approach with anterior longitudinal ligament sectioning. A detailed surgical technique of the minimally invasive anterior column release is described and illustrated in an elderly patient with adult spinal deformity and low back pain (visual analog scale, 8 of 10) refractory to conservative measures. The 3-foot standing radiographs demonstrated a lumbar lordosis of 54.4°, pelvic incidence of 63.7°, and pelvic tilt of 17.5°. Computed tomography and magnetic resonance imaging showed generalized lumbar spondylosis and degenerative disc changes from L2 to L5. The patient underwent a multilevel minimally invasive deformity correction with an anterior longitudinal ligament release at the L3/L4 level through the lateral retroperitoneal transpsoas approach. Lumbar lordosis increased from 54.4° to 77° with a global improvement in sagittal vertical axis from 4.37 cm to 0 cm. Total blood loss was less than 25 mL, and there were no major neurological or vascular complications. The anterior longitudinal ligament release using the minimally invasive lateral approach allows for deformity correction without the morbidity and blood loss encountered by traditional open posterior approaches. However, the risk of major vascular/visceral complications means that only experts in minimally invasive lateral surgery should attempt this technique.

  18. Foundational Aero Research for Development of Efficient Power Turbines With 50% Variable-speed Capability

    DTIC Science & Technology

    2011-02-01

    [Indexing excerpt; only report fragments and figure captions survive] ... expected, with increased loading (or reduced axial-chord to pitch ratio for a given turning). In addition to minimizing design-point loss due to ... Figure 2. Computed loading diagrams and Reynolds lapse rates for aft- (L1A) and mid-loaded (L1M) LPT blading (Clark et al., 2009) ... (reference 22 in Welch, 2010) accomplishing the same 95° flow turning at high aerodynamic loading (Z = 1.34). Figure 3. Computed 2-D ...

  19. A review of global outlook on fluoride contamination in groundwater with prominence on the Pakistan current situation.

    PubMed

    Rasool, Atta; Farooqi, Abida; Xiao, Tangfu; Ali, Waqar; Noor, Sifat; Abiola, Oyebamiji; Ali, Salar; Nasim, Wajid

    2017-12-19

    Several million people worldwide are exposed to fluoride (F-) via drinking water. This review emphasizes elevated fluoride concentrations in groundwater and the associated potential health risks globally, with a special focus on Pakistan. Millions of people in many countries depend on groundwater containing elevated levels of fluoride. The latest estimates suggest that around 200 million people, in 25 nations the world over, face the threat of fluorosis. India and China, the two most populous countries of the world, are the worst affected. In Pakistan, fluoride data of 29 major cities are reviewed; 34% of the cities show fluoride levels with a mean value greater than 1.5 mg/L, with Lahore, Quetta and Tehsil Mailsi having the maximum values of 23.60, 24.48 and >5.5 mg/L, respectively. In recent years, however, other countries have minimized, or even eliminated, fluoride use due to health issues. High concentrations of fluoride over extended time periods cause adverse health effects such as skin lesions, discoloration, cardiovascular disorders, dental fluorosis and crippling skeletal fluorosis. This review presents a comprehensive picture of drinking water quality in the global scenario of fluoride contamination, especially in Pakistan, with prominence on major pollutants, sources of pollution, mitigation technologies and ensuing health problems. Considering these realities, health authorities urgently need to establish alternative means of water decontamination in order to prevent the associated health problems.

  20. Humidity of anaesthetic gases with respect to low flow anaesthesia.

    PubMed

    Kleemann, P P

    1994-08-01

    It has been demonstrated in an experimental study in swine using the scanning electron microscope that a rebreathing technique utilising minimal fresh gas flowrates significantly improves climatization of anaesthetic gases. Consequently, the effects of various anaesthetic techniques on airway climate must be assessed, which requires suitable measuring devices. Basic principles and methods of humidity measurement in flowing anaesthetic gases include gravimetric hygrometry, dew point hygrometry, wet-dry bulb psychrometry, mass spectrometry, spectroscopic hygrometry and electrical hygrometry. A custom-made apparatus for continuous measurement of humidity and temperature in the inspired and expired gas mixtures of a breathing circuit (separated by a valve system, integrated between the endotracheal tube and the Y-piece) is described. Comparative evaluation of this apparatus and the psychrometer was carried out. It could be demonstrated that the apparatus, measuring with capacitive humidity sensors, is more suitable for prolonged use under clinical conditions than the psychrometer. In the second part of the study, climatization of anaesthetic gases under clinical conditions was investigated using fresh gas flowrates of 0.6, 1.5, 3.0 and 6.0 l/min. In the inspiratory limb of the circuit an absolute humidity of 21.3 mg H2O/l and a temperature of 31.5 degrees C were obtained after 120 minutes of minimal flow. Humidity and temperature of inspired air obtained with fresh gas flowrates of 6.0 and 3.0 l/min were found to be inadequate for prolonged anaesthesia. Reducing the fresh gas flow to 1.5 l/min increases heat and moisture content in the respired gases, but conditions are still inadequate for prolonged anaesthesia. Sufficient moisture (≥20 mg H2O/l) and temperature are obtained under minimal flow conditions after one hour.

  1. Efficient L1 regularization-based reconstruction for fluorescent molecular tomography using restarted nonlinear conjugate gradient.

    PubMed

    Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-09-15

    For the ill-posed fluorescent molecular tomography (FMT) inverse problem, L1 regularization can preserve high-frequency information such as edges while effectively reducing image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with a restart strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
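
    As a concrete toy illustration of the approach described above (L1-regularized reconstruction via nonlinear conjugate gradient with restarts), the sketch below minimizes an L1-regularized least-squares objective with Polak-Ribiere+ directions, Armijo backtracking, and periodic steepest-descent restarts. The system matrix, regularization weight and smoothing constant are illustrative assumptions, not the FMT forward model of the paper; the l1 term is smoothed so the objective stays differentiable.

```python
import numpy as np

def grad(x, A, b, lam, eps):
    # gradient of f(x) = 0.5||Ax - b||^2 + lam * sum_i sqrt(x_i^2 + eps)
    return A.T @ (A @ x - b) + lam * x / np.sqrt(x * x + eps)

def ncg_l1(A, b, lam=1e-3, eps=1e-8, n_iter=200, restart=20):
    """Restarted nonlinear conjugate gradient for a smoothed
    L1-regularized least-squares objective (illustrative sketch)."""
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.sqrt(x * x + eps))
    x = np.zeros(A.shape[1])
    g = grad(x, A, b, lam, eps)
    d = -g
    for k in range(n_iter):
        if g @ d >= 0:                # safeguard: keep d a descent direction
            d = -g
        t, fx, slope = 1.0, f(x), float(g @ d)
        for _ in range(60):           # Armijo backtracking line search
            if f(x + t * d) <= fx + 1e-4 * t * slope:
                break
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new, A, b, lam, eps)
        if (k + 1) % restart == 0:
            beta = 0.0                # periodic restart: drop conjugacy history
        else:                         # Polak-Ribiere+ update
            beta = max(g_new @ (g_new - g) / max(g @ g, 1e-30), 0.0)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# toy demo: recover a 2-sparse signal from noiseless measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
x_hat = ncg_l1(A, A @ x_true)
```

    On this noiseless toy problem the recovered x_hat matches the 2-sparse ground truth up to a small bias of order lam.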

  2. A Self-Calibrating Radar Sensor System for Measuring Vital Signs.

    PubMed

    Huang, Ming-Chun; Liu, Jason J; Xu, Wenyao; Gu, Changzhan; Li, Changzhi; Sarrafzadeh, Majid

    2016-04-01

    Vital signs (i.e., heartbeat and respiration) are crucial physiological signals that are useful in numerous medical applications. The process of measuring these signals should be simple, reliable, and comfortable for patients. In this paper, a noncontact self-calibrating vital signs monitoring system based on the Doppler radar is presented. The system hardware and software were designed with a four-tiered layer structure. To enable accurate vital signs measurement, baseband signals in the radar sensor were modeled and a framework for signal demodulation was proposed. Specifically, a signal model identification method was formulated into a quadratically constrained l1 minimization problem and solved using the upper bound and linear matrix inequality (LMI) relaxations. The performance of the proposed system was comprehensively evaluated using three experimental sets, and the results indicated that this system can be used to effectively measure human vital signs.
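
    The abstract does not spell out its l1 minimization step, so the sketch below uses a generic stand-in: proximal-gradient (ISTA) iterations for the penalized form min 0.5||Ax - b||^2 + lam||x||_1, which for a suitable lam is equivalent to a quadratically constrained l1 problem min ||x||_1 subject to ||Ax - b||_2 <= delta. The matrix and parameters are illustrative, not the paper's baseband signal model or its upper-bound/LMI solver.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Proximal-gradient (ISTA) sketch for the penalized l1 problem:
    gradient step on 0.5||Ax - b||^2, then soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

# toy demo: recover a sparse "signal model" from linear measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 12))
x_true = np.zeros(12)
x_true[[0, 5, 9]] = [1.0, -1.0, 0.5]
x_hat = ista(A, A @ x_true, lam=0.02)
```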

  3. A general algorithm using finite element method for aerodynamic configurations at low speeds

    NASA Technical Reports Server (NTRS)

    Balasubramanian, R.

    1975-01-01

    A finite element algorithm for numerical simulation of two-dimensional, incompressible, viscous flows was developed. The Navier-Stokes equations are suitably modelled to facilitate direct solution for the essential flow parameters. Leap-frog time differencing and Galerkin minimization of these model equations yield the finite element algorithm. The finite elements are triangular with bicubic shape functions approximating the solution space. The finite element matrices are unsymmetrically banded to facilitate savings in storage. An unsymmetric LU decomposition is performed on the finite element matrices to obtain the solution of the boundary value problem.

  4. 48 CFR 209.104-1 - General standards.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... International Contracting, OUSD(AT&L)DPAP(CPIC)), 3060 Defense Pentagon, Washington, DC 20301-3060. (ii....104-1 General standards. (e) For cost-reimbursement or incentive type contracts, or contracts which...; (iii) Risk of misallocations and mischarges are minimized; and (iv) Contract allocations and charges...

  5. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
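
    The decomposition scheme described above (separate component solves plus Lagrange-multiplier updates that steer them toward a common model) can be sketched in its simplest form, consensus minimization of a sum of quadratic misfits. The splitting, penalty parameter rho and direct solves below are illustrative assumptions, not the authors' tomography implementation.

```python
import numpy as np

def consensus_solve(blocks, n, rho=1.0, n_iter=300):
    """Augmented-Lagrangian (consensus) sketch: each data subset i has
    its own misfit f_i(x) = 0.5||A_i x - b_i||^2 and its own model x_i;
    the constraints x_i = z are enforced through scaled multipliers u_i
    that steer the component solutions toward the common model z."""
    k = len(blocks)
    xs = [np.zeros(n) for _ in range(k)]
    us = [np.zeros(n) for _ in range(k)]       # scaled Lagrange multipliers
    z = np.zeros(n)
    for _ in range(n_iter):
        for i, (A, b) in enumerate(blocks):
            # component solve: argmin f_i(x) + (rho/2)||x - z + u_i||^2
            xs[i] = np.linalg.solve(A.T @ A + rho * np.eye(n),
                                    A.T @ b + rho * (z - us[i]))
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)   # merge step
        for i in range(k):
            us[i] += xs[i] - z                 # multiplier update
    return z

# toy demo: two "data subsets" of one overdetermined least-squares problem
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
z = consensus_solve([(A[:20], b[:20]), (A[20:], b[20:])], n=5)
```

    Splitting the rows of one noiseless least-squares problem into two subsets and running the consensus iteration recovers the same model the undecomposed problem would give.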

  6. Problem of quality assurance during metal constructions welding via robotic technological complexes

    NASA Astrophysics Data System (ADS)

    Fominykh, D. S.; Rezchikov, A. F.; Kushnikov, V. A.; Ivashchenko, V. A.; Bogomolov, A. S.; Filimonyuk, L. Yu; Dolinina, O. N.; Kushnikov, O. V.; Shulga, T. E.; Tverdokhlebov, V. A.

    2018-05-01

    The problem of minimizing the probability of critical combinations of events that lead to a loss of welding quality in robotic process automation is examined. The problem is formulated, and models and algorithms for its solution are developed. The problem is solved by minimizing a criterion characterizing the losses caused by defective products. Solving the problem may enhance the quality and accuracy of the operations performed and reduce the losses caused by defective products.

  7. Antimycotoxigenic characteristics of Rosmarinus officinalis and Trachyspermum copticum L. essential oils.

    PubMed

    Rasooli, Iraj; Fakoor, Mohammad Hadi; Yadegarinia, Davod; Gachkar, Latif; Allameh, Abdolamir; Rezaei, Mohammad Bagher

    2008-02-29

    Aflatoxin B1 (AFB1) is a highly toxic and carcinogenic metabolite produced by Aspergillus species on food and agricultural commodities. Natural products may regulate the cellular effects of aflatoxins, and evidence suggests that aromatic organic compounds of spices can control the production of aflatoxins. With a view to controlling aflatoxin production, the essential oils of Rosmarinus officinalis and Trachyspermum copticum L. were obtained by hydrodistillation. Antifungal activities of the oils were studied with special reference to the inhibition of Aspergillus parasiticus growth and aflatoxin production. Minimal inhibitory (MIC) and minimal fungicidal (MFC) concentrations of the oils were determined. T. copticum L. oil showed a stronger inhibitory effect than R. officinalis oil on the growth of A. parasiticus. Aflatoxin production was inhibited at 450 ppm of both oils, with R. officinalis oil being the stronger inhibitor. The oils were analyzed by GC and GC/MS. The major components of R. officinalis oil were piperitone (23.65%), alpha-pinene (14.94%), limonene (14.89%) and 1,8-cineole (7.43%); those of T. copticum L. oil were thymol (37.2%), p-cymene (32.3%) and gamma-terpinene (27.3%). It is concluded that the essential oils could safely be used as preservative materials on some kinds of foods to protect them from toxigenic fungal infections.

  8. Efficient l1-norm-based low-rank matrix approximations for large-scale problems using alternating rectified gradient method.

    PubMed

    Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai

    2015-02-01

    Low-rank matrix approximation plays an important role in the area of computer vision and image processing. Most of the conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm), with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of the methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance unlike other state-of-the-art methods.
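
    A minimal sketch of the l1 low-rank idea, assuming an IRLS-style weighted least-squares update for each factor rather than the paper's alternating rectified gradient method (which is not reproduced here). The truncated-SVD warm start and the weighting floor eps are illustrative choices.

```python
import numpy as np

def l1_lowrank(X, r, n_iter=50, eps=1e-4):
    """Alternating sketch for entrywise-l1 low-rank approximation,
    min_{U,V} ||X - U V^T||_1. Each factor update is an IRLS weighted
    least-squares step with weights ~ 1/|residual|, so large residuals
    (outliers) are progressively downweighted. Warm-started from the
    truncated SVD."""
    U0, s, Vt = np.linalg.svd(X, full_matrices=False)
    U = U0[:, :r] * s[:r]                  # truncated-SVD initialization
    V = Vt[:r].T
    for _ in range(n_iter):
        W = 1.0 / np.maximum(np.abs(X - U @ V.T), eps)   # IRLS weights
        for i in range(X.shape[0]):        # weighted LS for each row of U
            G = V * W[i][:, None]
            U[i] = np.linalg.solve(V.T @ G, G.T @ X[i])
        W = 1.0 / np.maximum(np.abs(X - U @ V.T), eps)
        for j in range(X.shape[1]):        # weighted LS for each row of V
            G = U * W[:, j][:, None]
            V[j] = np.linalg.solve(U.T @ G, G.T @ X[:, j])
    return U, V

# toy demo: an exactly rank-2 matrix is reproduced by the factorization
rng = np.random.default_rng(3)
X = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
U, V = l1_lowrank(X, r=2)
err = np.abs(X - U @ V.T).max()
```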

  9. Influence of the field humiture environment on the mechanical properties of 316L stainless steel repaired with Fe314

    NASA Astrophysics Data System (ADS)

    Zhang, Lianzhong; Li, Dichen; Yan, Shenping; Xie, Ruidong; Qu, Hongliang

    2018-04-01

    The mechanical properties of 316L stainless steel repaired with Fe314 under different temperatures and humidities without inert gas protection were studied. Results indicated favorable compatibility between Fe314 and 316L stainless steel. The average yield strength, tensile strength, and sectional contraction percentage were higher in repaired samples than in 316L stainless steel, whereas the elongation rate was slightly lower. The different conditions of humiture environment on the repair sample exerted minimal influence on tensile and yield strengths. The Fe314 cladding layer was mainly composed of equiaxed grains and mixed with randomly oriented columnar crystal and tiny pores or impurities in the tissue. Results indicated that the hardness value of Fe314 cladding layer under different humiture environments ranged within 419-451.1 HV0.2. The field humiture environment also showed minimal impact on the average hardness of Fe314 cladding layers. Furthermore, 316L stainless steel can be repaired through laser cladding by using Fe314 powder without inert gas protection under different temperatures and humidity environments.

  10. Existence of solutions of a two-dimensional boundary value problem for a system of nonlinear equations arising in growing cell populations.

    PubMed

    Jeribi, Aref; Krichen, Bilel; Mefteh, Bilel

    2013-01-01

    In the paper [A. Ben Amar, A. Jeribi, and B. Krichen, Fixed point theorems for block operator matrix and an application to a structured problem under boundary conditions of Rotenberg's model type, to appear in Math. Slovaca. (2014)], the existence of solutions of the two-dimensional boundary value problem (1) and (2) was discussed in the product Banach space L(p)×L(p) for p∈(1, ∞). Due to the lack of compactness on L1 spaces, the analysis did not cover the case p=1. The purpose of this work is to extend the results of Ben Amar et al. to the case p=1 by establishing new variants of fixed-point theorems for a 2×2 operator matrix, involving weakly compact operators.

  11. Cytotoxicity and anti-Sporothrix brasiliensis activity of the Origanum majorana Linn. oil.

    PubMed

    Waller, Stefanie Bressan; Madrid, Isabel Martins; Ferraz, Vanny; Picoli, Tony; Cleff, Marlete Brum; de Faria, Renata Osório; Meireles, Mário Carlos Araújo; de Mello, João Roberto Braga

    The study aimed to evaluate the anti-Sporothrix sp. activity of the essential oil of Origanum majorana Linn. (marjoram), its chemical composition, and its cytotoxic activity. A total of 18 fungal isolates, comprising Sporothrix brasiliensis (n: 17) from humans, dogs and cats and a standard strain of Sporothrix schenckii (n: 1), were tested using the broth microdilution technique (Clinical and Laboratory Standard Institute - CLSI M27-A3), and the results were expressed as minimal inhibitory concentrations (MIC) and minimal fungicidal concentrations (MFC). The MIC 50 and MIC 90 of itraconazole against S. brasiliensis were 2μg/mL and 8μg/mL, respectively, and the MFC 50 and MFC 90 were 2μg/mL and >16μg/mL, respectively, with three S. brasiliensis isolates resistant to the antifungal. S. schenckii was sensitive at a MIC of 1μg/mL and a MFC of 8μg/mL. For the oil of O. majorana L., all isolates were susceptible, with MIC of ≤2.25-9mg/mL and MFC of ≤2.25-18mg/mL. The MIC 50 and MIC 90 were ≤2.25mg/mL and 4.5mg/mL, respectively, and the MFC 50/90 values were twice the MIC values. Twenty-two compounds were identified by gas chromatography with a flame ionization detector (GC-FID), with 1,8-cineole and 4-terpineol as the major ones. In the colorimetric (MTT) assay, toxicity was observed in 70-80% of Vero cells at concentrations between 0.078 and 5mg/mL. For the first time, the study demonstrated satisfactory in vitro anti-Sporothrix sp. activity of marjoram oil; further studies are needed to ensure its safe and effective use. Copyright © 2016 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.

  12. Axial presacral lumbar interbody fusion and percutaneous posterior fixation for stabilization of lumbosacral isthmic spondylolisthesis.

    PubMed

    Gerszten, Peter C; Tobler, William; Raley, Thomas J; Miller, Larry E; Block, Jon E; Nasca, Richard J

    2012-04-01

    Case series. To describe a minimally invasive surgical technique for treatment of lumbosacral spondylolisthesis. Traditional surgical management of lumbosacral spondylolisthesis is technically challenging and associated with significant complications. Minimally invasive surgical techniques offer patients treatment alternatives with lower operative morbidity risk. The combination of percutaneous pedicle screw reduction and an axial presacral approach for lumbosacral discectomy and fusion is an option for the surgical management of low-grade lumbosacral spondylolisthesis. Twenty-six consecutive patients with symptomatic L5-S1 level isthmic spondylolisthesis (grade 1 or grade 2) underwent axial presacral lumbar interbody fusion and percutaneous posterior fixation. Study outcomes included the visual analogue scale score for axial pain severity, Odom criteria, and radiographic fusion. The procedure was successfully completed in all patients with no intraoperative complications reported. Intraoperative blood loss was minimal (range, 20-150 mL). Median hospital stay was 1 day (range, <1-2 d). Spondylolisthesis grade was improved after axial lumbar interbody fusion (P<0.001), with 50% (13 of 26) of patients showing a reduction of at least 1 grade. Axial pain severity improved from 8.1±1.4 at baseline to 2.8±2.3 after axial lumbar interbody fusion, representing a 66% reduction from baseline (95% confidence interval, 54.3%-77.9%). At 2 years posttreatment, all patients showed solid fusion. Using Odom criteria, 81% of patients were judged as excellent or good (16 excellent, 5 good, 3 fair, and 2 poor). There were no perioperative procedure-related complications including infection or bowel perforation. During postoperative follow-up, 4 patients required reintervention due to recurrent radicular (n=2) or screw-related (n=2) pain. 
The minimally invasive presacral axial interbody fusion and posterior instrumentation technique is a safe and effective treatment for low-grade isthmic spondylolisthesis.

  13. Assessing the effect of sodium dichloroisocyanurate concentration on transfer of Salmonella enterica serotype Typhimurium in wash water for production of minimally processed iceberg lettuce (Lactuca sativa L.).

    PubMed

    Maffei, D F; Sant'Ana, A S; Monteiro, G; Schaffner, D W; Franco, B D G M

    2016-06-01

    This study evaluated the impact of sodium dichloroisocyanurate (5, 10, 20, 30, 40, 50 and 250 mg l⁻¹) in wash water on transfer of Salmonella Typhimurium from contaminated lettuce to wash water and then to other noncontaminated lettuces washed sequentially in the same water. Experiments were designed mimicking the conditions commonly seen in minimally processed vegetable (MPV) processing plants in Brazil. The scenarios were as follows: (1) Washing one inoculated lettuce portion in nonchlorinated water, followed by washing 10 noninoculated portions sequentially. (2) Washing one inoculated lettuce portion in chlorinated water followed by washing five noninoculated portions sequentially. (3) Washing five inoculated lettuce portions in chlorinated water sequentially, followed by washing five noninoculated portions sequentially. (4) Washing five noninoculated lettuce portions in chlorinated water sequentially, followed by washing five inoculated portions sequentially and then by washing five noninoculated portions sequentially in the same water. Salm. Typhimurium transfer from inoculated lettuce to wash water and further dissemination to noninoculated lettuces occurred when nonchlorinated water was used (scenario 1). When chlorinated water was used (scenarios 2, 3 and 4), no measurable Salm. Typhimurium transfer occurred if the sanitizer was ≥10 mg l⁻¹. Use of sanitizers in correct concentrations is important to minimize the risk of microbial transfer during MPV washing. In this study, the impact of sodium dichloroisocyanurate in the wash water on transfer of Salmonella Typhimurium from inoculated lettuce to wash water and then to other noninoculated lettuces washed sequentially in the same water was evaluated. The use of chlorinated water, at concentrations above 10 mg l⁻¹, effectively prevented Salm. Typhimurium transfer under several different washing scenarios. Conversely, when nonchlorinated water was used, Salm. Typhimurium transfer occurred in at least 10 noninoculated batches of lettuce washed sequentially in the same water. © 2016 The Society for Applied Microbiology.

  14. Energy minimization on manifolds for docking flexible molecules

    PubMed Central

    Mirzaei, Hanieh; Zarbafian, Shahrooz; Villar, Elizabeth; Mottarella, Scott; Beglov, Dmitri; Vajda, Sandor; Paschalidis, Ioannis Ch.; Vakili, Pirooz; Kozakov, Dima

    2015-01-01

    In this paper we extend a recently introduced rigid body minimization algorithm, defined on manifolds, to the problem of minimizing the energy of interacting flexible molecules. The goal is to integrate moving the ligand in six dimensional rotational/translational space with internal rotations around rotatable bonds within the two molecules. We show that adding rotational degrees of freedom to the rigid moves of the ligand results in an overall optimization search space that is a manifold to which our manifold optimization approach can be extended. The effectiveness of the method is shown for three different docking problems of increasing complexity. First we minimize the energy of fragment-size ligands with a single rotatable bond as part of a protein mapping method developed for the identification of binding hot spots. Second, we consider energy minimization for docking a flexible ligand to a rigid protein receptor, an approach frequently used in existing methods. In the third problem we account for flexibility in both the ligand and the receptor. Results show that minimization using the manifold optimization algorithm is substantially more efficient than minimization using a traditional all-atom optimization algorithm while producing solutions of comparable quality. In addition to the specific problems considered, the method is general enough to be used in a large class of applications such as docking multidomain proteins with flexible hinges. The code is available under open source license (at http://cluspro.bu.edu/Code/Code_Rigtree.tar), and with minimal effort can be incorporated into any molecular modeling package. PMID:26478722
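
    The manifold idea above can be illustrated with a generic sketch of Riemannian gradient descent on the rotation group SO(3) (an illustrative toy, not the authors' code): the Euclidean gradient is projected onto the Lie algebra of skew-symmetric matrices, and the update is applied through the matrix exponential, so every iterate remains a valid rotation.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)

# Toy objective: rotate unit vector a onto unit vector b, i.e. minimize the
# rigid-body "energy" E(R) = 0.5 * ||R a - b||^2 over rotations R in SO(3).
a = np.array([1.0, 0.0, 0.0])
b = rng.normal(size=3)
b /= np.linalg.norm(b)

def energy(R):
    return 0.5 * np.sum((R @ a - b) ** 2)

R = np.eye(3)
eta = 0.5
for _ in range(200):
    G = np.outer(R @ a - b, a)     # Euclidean gradient dE/dR
    M = R.T @ G
    Omega = 0.5 * (M - M.T)        # project onto the Lie algebra so(3)
    R = R @ expm(-eta * Omega)     # retraction: the iterate stays in SO(3)

print("final energy:", energy(R))
print("orthogonality error:", np.linalg.norm(R @ R.T - np.eye(3)))
```

    Because the update is a rotation composed with a rotation, no re-orthogonalization step is ever needed; the same pattern extends to products of manifolds such as the rotation/translation/torsion space described above.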

  15. Evanescent field: A potential light-tool for theranostics application

    NASA Astrophysics Data System (ADS)

    Polley, Nabarun; Singh, Soumendra; Giri, Anupam; Pal, Samir Kumar

    2014-03-01

    A noninvasive or minimally invasive optical approach for theranostics, which would reinforce diagnosis, treatment, and preferably guidance simultaneously, is considered to be a major challenge in biomedical instrument design. In the present work, we have developed an evanescent field-based fiber optic strategy for potential theranostics application in hyperbilirubinemia, an increased concentration of bilirubin in the blood that is a potential cause of permanent brain damage or even death in newborn babies. The potential problem of bilirubin deposition on the hydroxylated fiber surface at physiological pH (7.4), which masks the sensing efficacy and the extraction of information on the pigment level, has also been addressed. Removal of bilirubin in a blood-phantom (hemoglobin and human serum albumin) solution from an enhanced level of 77 μM/l (human jaundice >50 μM/l) to ~30 μM/l (normal level ~25 μM/l in humans) using our strategy has been successfully demonstrated. In a model experiment using chromatography paper as a mimic of a biological membrane, we have shown efficient degradation of the bilirubin under continuous monitoring for guidance of immediate/future course of action.

  16. The conceptual imperfection of aquatic risk assessment tests: highlighting the need for tests designed to detect therapeutic effects of pharmaceutical contaminants

    NASA Astrophysics Data System (ADS)

    Klaminder, J.; Jonsson, M.; Fick, J.; Sundelin, A.; Brodin, T.

    2014-08-01

    Standardized ecotoxicological tests still constitute the fundamental tools for risk assessment of aquatic contaminants. These protocols are managed towards minimal mortality in the controls, which is not representative of natural systems, where mortality is often high. This methodological bias, arising from assays in which mortality in the control group is systematically disregarded, makes it difficult to measure therapeutic effects of pharmaceutical contaminants that lead to lower mortality. This is of concern considering that such effects on exposed organisms may still have substantial ecological consequences. In this paper, we illustrate this conceptual problem by presenting empirical data on how the therapeutic effect of Oxazepam, a common contaminant of surface waters, lowers mortality rates among exposed Eurasian perch (Perca fluviatilis) from wild populations at two different life stages. We found that fry hatched from roe that had been exposed to dilute concentrations (1.1 ± 0.3 μg l-1) of Oxazepam for 24 h 3-6 days prior to hatching showed lower mortality rates and increased activity 30 days after hatching. Similar effects, i.e. increased activity and lower mortality rates, were also observed for 2-year-old perch exposed to dilute Oxazepam concentrations (1.2 ± 0.4 μg l-1). We conclude that therapeutic effects of pharmaceutical contaminants need to be considered in risk assessment assays to avoid systematically missing important ecological effects of aquatic contaminants.

  17. Statistical learning from nonrecurrent experience with discrete input variables and recursive-error-minimization equations

    NASA Astrophysics Data System (ADS)

    Carter, Jeffrey R.; Simon, Wayne E.

    1990-08-01

    Neural networks are trained using Recursive Error Minimization (REM) equations to perform statistical classification. Using REM equations with continuous input variables reduces the required number of training experiences by one to two orders of magnitude over standard back propagation. Replacing the continuous input variables with discrete binary representations reduces the number of connections by a factor proportional to the number of variables, reducing the required number of experiences by another order of magnitude. Undesirable effects of using recurrent experience to train neural networks for statistical classification problems are demonstrated, and nonrecurrent experience is used to avoid these undesirable effects. 1. THE I-4I PROBLEM The statistical classification problem which we address is that of assigning points in d-dimensional space to one of two classes. The first class has a covariance matrix of I (the identity matrix); the covariance matrix of the second class is 4I. For this reason the problem is known as the I-4I problem. Both classes have equal probability of occurrence, and samples from both classes may appear anywhere throughout the d-dimensional space. Most samples near the origin of the coordinate system will be from the first class, while most samples away from the origin will be from the second class. Since the two classes completely overlap, it is impossible to have a classifier with zero error. The minimum possible error is known as the Bayes error and
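
    The two-class problem above (zero-mean Gaussians with covariances I and 4I, equal priors) has a closed-form Bayes rule: equating the two densities shows a point belongs to class 1 exactly when ||x||² < (8/3) d ln 2. A small Monte Carlo sketch (my own illustration, not from the paper) estimates the resulting Bayes error:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 100_000          # dimension and Monte Carlo samples per class

x1 = rng.normal(0.0, 1.0, size=(n, d))   # class 1: covariance I
x2 = rng.normal(0.0, 2.0, size=(n, d))   # class 2: covariance 4I (std = 2)

# Equating the two Gaussian densities gives the Bayes rule: choose class 1
# iff ||x||^2 < r2, with r2 = (8/3) * d * ln 2.
r2 = (8.0 / 3.0) * d * np.log(2.0)

err1 = np.mean(np.sum(x1**2, axis=1) >= r2)   # class-1 points misclassified
err2 = np.mean(np.sum(x2**2, axis=1) < r2)    # class-2 points misclassified
bayes_err = 0.5 * (err1 + err2)               # equal priors
print(f"estimated Bayes error for d={d}: {bayes_err:.3f}")
```

    For d = 2 the estimate is roughly 0.26, a useful baseline: no classifier, neural or otherwise, can do better on this problem.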

  18. A weighted ℓ1-minimization approach for sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Ji; Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2014-06-15

    This work proposes a method for sparse polynomial chaos (PC) approximation of high-dimensional stochastic functions based on non-adapted random sampling. We modify the standard ℓ1-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refer to the resulting algorithm as weighted ℓ1-minimization. We provide conditions under which we may guarantee recovery using this weighted scheme. Numerical tests are used to compare the weighted and non-weighted methods for the recovery of solutions to two differential equations with high-dimensional random inputs: a boundary value problem with a random elliptic operator and a 2-D thermally driven cavity flow with random boundary condition.
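
    The weighting idea can be sketched in a few lines: minimize a weighted ℓ1 norm subject to the measurement constraints, with weights encoding an assumed decay of the coefficients. The toy below (all sizes hypothetical; not the authors' solver) casts the problem as a linear program via the standard split x = u − v with u, v ≥ 0:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, s = 40, 20, 3     # signal length, measurements, sparsity (toy sizes)

# Sparse "coefficient" vector supported on the low-order entries,
# mimicking an assumed decay of PC coefficients.
x_true = np.zeros(n)
x_true[:s] = rng.normal(size=s)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
b = A @ x_true

def weighted_l1(A, b, w):
    """min sum_i w_i |x_i|  s.t.  A x = b, as an LP in (u, v), x = u - v."""
    n = A.shape[1]
    res = linprog(np.concatenate([w, w]),
                  A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

x_plain = weighted_l1(A, b, np.ones(n))              # standard l1-minimization
x_wtd = weighted_l1(A, b, 1.0 + 0.5 * np.arange(n))  # penalize high indices more

print("unweighted recovery error:", np.linalg.norm(x_plain - x_true))
print("weighted recovery error:  ", np.linalg.norm(x_wtd - x_true))
```

    With enough measurements both variants recover the sparse vector; when measurements are scarce, weights that match the true decay bias the solution toward the correct support.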

  19. Exploiting Sparsity in Hyperspectral Image Classification via Graphical Models

    DTIC Science & Technology

    2013-05-01

    distribution p by minimizing the Kullback–Leibler (KL) distance D(p‖p̂) = Ep[log(p/p̂)] using first- and second-order statistics, via a maximum-weight...Obtain sparse representations αl, l = 1, . . . , T, in RN from test image. 6: Inference: Classify based on the output of the resulting classifier using...

  20. Optimal ECG Electrode Sites and Criteria for Detection of Asymptomatic Coronary Artery Disease at Rest and with Exercise.

    DTIC Science & Technology

    1985-12-01

    dilating during peak workloads. By preventing dilation, circumferential atherosclerosis, even if minimal on angiography, can profoundly reduce myocardial...Depolarization on Refractoriness of Ischemic Canine Myocardium, J. Electrocardiol. 15(4):335, 1982. 127. Burgess, M.J., Green, L.S., Millar, K., Wyatt, R...Sympathetic Stimulation on Refractory Periods of Ischemic Canine Ventricular Myocardium, J. Electrocardiol. 15(1):1, 1982. 129. Burgess, M.J., Lux, R.L

  1. Predictive Hyperglycemia and Hypoglycemia Minimization: In-Home Evaluation of Safety, Feasibility, and Efficacy in Overnight Glucose Control in Type 1 Diabetes.

    PubMed

    Spaic, Tamara; Driscoll, Marsha; Raghinaru, Dan; Buckingham, Bruce A; Wilson, Darrell M; Clinton, Paula; Chase, H Peter; Maahs, David M; Forlenza, Gregory P; Jost, Emily; Hramiak, Irene; Paul, Terri; Bequette, B Wayne; Cameron, Faye; Beck, Roy W; Kollman, Craig; Lum, John W; Ly, Trang T

    2017-03-01

    The objective of this study was to determine the safety, feasibility, and efficacy of a predictive hyperglycemia and hypoglycemia minimization (PHHM) system compared with predictive low-glucose insulin suspension (PLGS) alone in overnight glucose control. A 42-night trial was conducted in 30 individuals with type 1 diabetes in the age range 15-45 years. Participants were randomly assigned each night to either PHHM or PLGS and were blinded to the assignment. The system suspended the insulin pump on both the PHHM and PLGS nights for predicted hypoglycemia but delivered correction boluses for predicted hyperglycemia on PHHM nights only. The primary outcome was the percentage of time spent in a sensor glucose range of 70-180 mg/dL during the overnight period. The addition of automated insulin delivery with PHHM increased the time spent in the target range (70-180 mg/dL) from 71 ± 10% during PLGS nights to 78 ± 10% during PHHM nights (P < 0.001). The average morning blood glucose concentration improved from 163 ± 23 mg/dL after PLGS nights to 142 ± 18 mg/dL after PHHM nights (P < 0.001). Various sensor-measured hypoglycemic outcomes were similar on PLGS and PHHM nights. All participants completed 42 nights with no episodes of severe hypoglycemia, diabetic ketoacidosis, or other study- or device-related adverse events. The addition of a predictive hyperglycemia minimization component to our existing PLGS system was shown to be safe, feasible, and effective in overnight glucose control. © 2017 by the American Diabetes Association.

  2. The Management of Dynamic Epistemic Relationships Regarding Second Language Knowledge in Second Language Education: Epistemic Discrepancies and Epistemic (Im)Balance

    ERIC Educational Resources Information Center

    Rusk, Fredrik; Pörn, Michaela; Sahlström, Fritjof

    2016-01-01

    Using the first language (L1) to solve problems in understanding the second language (L2) may be beneficial for L2 learning. However, the overuse of L1 may deprive L2 learners of exposure to the L2. It appears that the question is not whether to use L1 or L2; it is when and how each language can be used to support L2 learning. This study focuses…

  3. An Implicit Enumeration Algorithm with Binary-Valued Constraints.

    DTIC Science & Technology

    1986-03-01

    problems is the National Basketball Association (NBA) scheduling problems developed by Bean (1980), as discussed in detail in the Appendix. These...APPENDIX The NBA Scheduling Problem §A.1 Formulation The National Basketball Association...16 2.2 4.9 40.2 15.14 §6.2.3 NBA Scheduling Problem The last set of testing problems involves the NBA scheduling problem. A detailed description of

  4. Development of a Test Facility for Air Revitalization Technology Evaluation

    NASA Technical Reports Server (NTRS)

    Lu, Sao-Dung; Lin, Amy; Campbell, Melissa; Smith, Frederick

    2006-01-01


  5. Gain-Scheduled Fault Tolerance Control Under False Identification

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Belcastro, Christine (Technical Monitor)

    2006-01-01

    An active fault tolerant control (FTC) law is generally sensitive to false identification since the control gain is reconfigured for fault occurrence. In the conventional FTC law design procedure, dynamic variations due to false identification are not considered. In this paper, an FTC synthesis method is developed in order to consider possible variations of closed-loop dynamics under false identification into the control design procedure. An active FTC synthesis problem is formulated into an LMI optimization problem to minimize the upper bound of the induced-L2 norm which can represent the worst-case performance degradation due to false identification. The developed synthesis method is applied for control of the longitudinal motions of FASER (Free-flying Airplane for Subscale Experimental Research). The designed FTC law of the airplane is simulated for pitch angle command tracking under a false identification case.

  6. Orthography-Induced Length Contrasts in the Second Language Phonological Systems of L2 Speakers of English: Evidence from Minimal Pairs.

    PubMed

    Bassetti, Bene; Sokolović-Perović, Mirjana; Mairano, Paolo; Cerni, Tania

    2018-06-01

    Research shows that the orthographic forms ("spellings") of second language (L2) words affect speech production in L2 speakers. This study investigated whether English orthographic forms lead L2 speakers to produce English homophonic word pairs as phonological minimal pairs. Targets were 33 orthographic minimal pairs, that is to say homophonic words that would be pronounced as phonological minimal pairs if orthography affects pronunciation. Word pairs contained the same target sound spelled with one letter or two, such as the /n/ in finish and Finnish (both /'fɪnɪʃ/ in Standard British English). To test for effects of length and type of L2 exposure, we compared Italian instructed learners of English, Italian-English late bilinguals with lengthy naturalistic exposure, and English natives. A reading-aloud task revealed that Italian speakers of English L2 produce two English homophonic words as a minimal pair distinguished by different consonant or vowel length, for instance producing the target /'fɪnɪʃ/ with a short [n] or a long [nː] to reflect the number of consonant letters in the spelling of the words finish and Finnish. Similar effects were found on the pronunciation of vowels, for instance in the orthographic pair scene-seen (both /siːn/). Naturalistic exposure did not reduce orthographic effects, as effects were found both in learners and in late bilinguals living in an English-speaking environment. It appears that the orthographic form of L2 words can result in the establishment of a phonological contrast that does not exist in the target language. Results have implications for models of L2 phonological development.

  7. Minimizing Significant Figure Fuzziness.

    ERIC Educational Resources Information Center

    Fields, Lawrence D.; Hawkes, Stephen J.

    1986-01-01

    Addresses the principles and problems associated with the use of significant figures. Explains uncertainty, the meaning of significant figures, the Simple Rule, the Three Rule, and the 1-5 Rule. Also provides examples of the Rules. (ML)

  8. Staying Healthy

    MedlinePlus

    ... paramount to minimizing further lung damage. In some cases, oxygen may be prescribed as well, to help with breathing problems. Alpha-1 patients will also want to receive annual flu shots and should be vaccinated against pneumonia. Eating properly ...

  9. [Quality of life of primary care patients in Rio de Janeiro and São Paulo, Brasil: associations with stressful life events and mental health].

    PubMed

    Portugal, Flávia Batista; Campos, Mônica Rodrigues; Gonçalves, Daniel Almeida; Mari, Jair de Jesus; Fortes, Sandra Lúcia Correia Lima

    2016-02-01

    Quality of life (QoL) is a subjective construct, which can be negatively associated with factors such as mental disorders and stressful life events (SLEs). This article seeks to identify the association of socioeconomic and demographic variables, common mental disorders, symptoms suggestive of depression and anxiety, and SLEs with QoL in patients attended in Primary Care (PC). It is a cross-sectional study, conducted with 1,466 patients attended in PC centers in the cities of São Paulo and Rio de Janeiro in 2009 and 2010. Bivariate analysis was performed using the T-test and four multiple linear regressions for each QoL domain. The scores for the physical, psychological, social relations and environment domains were, respectively, 64.7; 64.2; 68.5 and 49.1. By means of multivariate analysis, associations of the physical domain were found with health problems and discrimination; of the psychological domain with discrimination; of social relations with financial/structural problems, external causes and health problems; and of the environment with financial/structural problems, external causes and discrimination. Mental health variables, health problems and financial/structural problems were the factors negatively associated with QoL.

  10. Extent of weight reduction necessary for minimization of diabetes risk in Japanese men with visceral fat accumulation and glycated hemoglobin of 5.6–6.4%

    PubMed Central

    Iwahashi, Hiromi; Noguchi, Midori; Okauchi, Yukiyoshi; Morita, Sachiko; Imagawa, Akihisa; Shimomura, Iichiro

    2015-01-01

    Aims/Introduction: Weight reduction improves glycemic control in obese men with glycated hemoglobin (HbA1c) of 5.6–6.4%, suggesting that it can prevent the development of diabetes in these patients. The aim of the present study was to quantify the amount of weight reduction necessary for minimization of diabetes risk in Japanese men with visceral fat accumulation. Materials and Methods: The study participants were 482 men with an estimated visceral fat area of ≥100 cm2, HbA1c of 5.6–6.4%, fasting plasma glucose (FPG) of <126 mg/dL or casual plasma glucose <200 mg/dL. They were divided into two groups based on weight change at the end of the 3-year follow-up period (weight gain and weight loss groups). The weight loss group was classified into quartile subgroups (lowest group, 0 to <1.2%; second lowest group, ≥1.2 to <2.5%; second highest group, ≥2.5 to <4.3%; highest group, ≥4.3% weight loss). The development of diabetes at the end-point represented a rise in HbA1c to ≥6.5% or FPG ≥126 mg/dL, or casual plasma glucose ≥200 mg/dL. Results: The cumulative incidence of diabetes at the end of the 3-year follow-up period was 16.2% in the weight gain group and 10.1% in the weight loss group (P not significant). The incidence of diabetes was significantly lower in the highest weight loss group (3.1%), but not in the second highest, the second lowest and the lowest weight loss groups (9.7, 10.1 and 18.3%), compared with the weight gain group. Conclusions: Minimization of the risk of diabetes in Japanese men with visceral fat accumulation requires a minimum of 4–5% weight loss in those with HbA1c of 5.6–6.4%. PMID:26417413

  11. The U(1)-Kepler Problems

    NASA Astrophysics Data System (ADS)

    Meng, Guowu

    2010-12-01

    Let n ⩾ 2 be a positive integer. To each irreducible representation σ of U(1), a U(1)-Kepler problem in dimension (2n − 1) is constructed and analyzed. This system is superintegrable and when n = 2 it is equivalent to a MICZ-Kepler problem. The dynamical symmetry group of this system is Ũ(n, n), and the Hilbert space of bound states H(σ) is the unitary highest weight representation of Ũ(n, n) with the minimal positive Gelfand-Kirillov dimension. Furthermore, it is shown that the correspondence between σ* (the dual of σ) and H(σ) is the theta-correspondence for the dual pair (U(1), U(n, n)) ⊆ Sp_{4n}(ℝ).

  12. TU-C-17A-05: Dose Domain Optimization of MLC Leaf Patterns for Highly Complicated 4π IMRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, D; Yu, V; Ruan, D

    Purpose: Highly conformal non-coplanar 4π radiotherapy plans typically require more than 20 intensity-modulated fields to deliver. A novel method to calculate multileaf collimator (MLC) leaf patterns is introduced to maximize delivery efficiency, accuracy and plan quality. Methods: 4 GBM patients, with a prescription dose of 59.4 Gy or 60 Gy, were evaluated using the 4π algorithm with 20 beams. The MLC calculation utilized a least square minimization of the dose distribution, with an anisotropic total variation regularization term to encourage piecewise continuity in the fluence maps. Transforming the fluence to the dose domain required multiplying the fluence with a sparse matrix. Exploiting this property made it feasible to solve the problem using CVX, a MATLAB-based convex modeling framework. The fluence was stratified into even step sizes, and the MLC segments, limited to 300, were calculated. The patients studied were replanned using Eclipse with the same beam angles. Results: Compared to the original 4π plan, the stratified 4π plan increased the maximum/mean dose, in Gy, by 1.0/0.0 (brainstem), 0.5/0.2 (chiasm), 0.0/0.0 (spinal cord), 1.9/0.3 (L eye), 0.7/0.2 (R eye), 0.4/0.4 (L lens), 0.3/0.3 (R lens), 1.0/0.8 (L Optical Nerve), 0.5/0.3 (R Optical Nerve), 0.3/0.2 (L Cochlea), 0.1/0.1 (R Cochlea), 4.6/0.2 (brain), 2.4/0.1 (brain-PTV), 5.1/0.9 (PTV). Compared to Eclipse, which generated an average of 607 segments, the stratified plan reduced (−) or increased (+) the maximum/mean dose, in Gy, by −10.2/−4.1 (brainstem), −10.5/−8.9 (chiasm), +0.0/−0.1 (spinal cord), −4.9/−3.4 (L eye), −4.1/−2.5 (R eye), −2.8/−2.7 (L lens), −2.1/−1.9 (R lens), −7.6/−6.5 (L Optical Nerve), −8.9/−6.1 (R Optical Nerve), −1.3/−1.9 (L Cochlea), −1.8/−1.8 (R Cochlea), +1.7/−2.1 (brain), +3.2/−2.6 (brain-PTV), +1.8/+0.3 (PTV). The stratified plan was also more homogeneous in the PTV. Conclusion: This novel solver can transform complicated fluence maps into significantly fewer deliverable MLC segments than the commercial system while achieving superior dosimetry. Funding support partially contributed by Varian.
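
    The dose-domain formulation (least squares plus an anisotropic total-variation penalty, followed by stratifying the fluence into even step sizes) can be mimicked on a 1-D toy problem. Everything below is an illustrative stand-in, not the authors' CVX/MATLAB implementation; the dose matrix D and all sizes are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 30

# Piecewise-constant "fluence" and a hypothetical dense stand-in for the
# sparse fluence-to-dose matrix D.
f_true = np.repeat([0.0, 1.0, 0.4], n // 3)
D = rng.random((50, n))
d_target = D @ f_true

lam, eps = 0.01, 1e-6   # TV weight and smoothing for |f_{i+1} - f_i|

def objective(f):
    resid = D @ f - d_target
    tv = np.sum(np.sqrt(np.diff(f) ** 2 + eps))   # smoothed anisotropic TV
    return 0.5 * resid @ resid + lam * tv

res = minimize(objective, np.zeros(n), method="L-BFGS-B")
f_fit = res.x

# Stratify the fitted fluence into even step sizes prior to MLC segmentation.
levels = 5
f_strat = np.round(f_fit * levels) / levels
print("distinct fluence levels after stratification:", np.unique(f_strat).size)
```

    The TV term keeps the fitted fluence nearly piecewise constant, so stratification collapses it onto very few levels, which is what keeps the deliverable segment count low.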

  13. New minimally invasive discectomy technique through the interlaminar space using a percutaneous endoscope.

    PubMed

    Dezawa, A; Sairyo, K

    2011-05-01

    The serial dilating technique used to access herniated discs at the L5-S1 space using percutaneous endoscopic discectomy (PED) via an 8 mm skin incision can possibly injure the S1 nerve root. In this paper, we describe in detail a new surgical procedure to safely access the disc and to avoid nerve root damage. This small-incision endoscopic technique, small-incision microendoscopic discectomy (sMED), mimics microendoscopic discectomy and applies PED. The sMED approach is similar to the well-established microendoscopic discectomy technique. To secure the surgical field, a duckbill-type PED cannula is used. Following laminotomy of L5 using a high-speed drill, the ligamentum flavum is partially removed using the Kerrison rongeur. Using the curved nerve root retractor, the S1 nerve root is gradually and gently moved caudally. Following the complete retraction of the S1 nerve root to the caudal side of the herniated nucleus pulposus (HNP), the nerve root is retracted safely medially and caudally using the bill side of the duckbill PED cannula. Next, using the HNP rongeur for PED, the HNP is removed piece by piece until the nerve root is decompressed. A total of 30 patients with HNP at the L5-S1 level underwent sMED. In all cases, HNP was successfully removed and patients showed improvement following surgery. Only one patient complained of moderate radiculopathy at the final visit. No complications were encountered. We introduced a minimally invasive technique to safely remove HNP at the L5-S1 level. sMED is possibly the least invasive technique for HNP removal at the L5-S1 level. © 2011 Japan Society for Endoscopic Surgery, Asia Endosurgery Task Force and Blackwell Publishing Asia Pty Ltd.

  14. Technical note: an R package for fitting sparse neural networks with application in animal breeding.

    PubMed

    Wang, Yangfan; Mi, Xue; Rosa, Guilherme J M; Chen, Zhihui; Lin, Ping; Wang, Shi; Bao, Zhenmin

    2018-05-04

    Neural networks (NNs) have emerged as a new tool for genomic selection (GS) in animal breeding. However, the properties of NN used in GS for the prediction of phenotypic outcomes are not well characterized due to the problem of over-parameterization of NN and difficulties in using whole-genome marker sets as high-dimensional NN input. In this note, we have developed an R package called snnR that finds an optimal sparse structure of a NN by minimizing the square error subject to a penalty on the L1-norm of the parameters (weights and biases), therefore solving the problem of over-parameterization in NN. We have also tested some models fitted in the snnR package to demonstrate their feasibility and effectiveness to be used in several cases as examples. In comparison of snnR to the R package brnn (the Bayesian regularized single layer NNs), with both using the entries of a genotype matrix or a genomic relationship matrix as inputs, snnR has greatly improved the computational efficiency and the prediction ability for the GS in animal breeding because snnR implements a sparse NN with many hidden layers.
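
    The core of the snnR objective, squared error plus an L1-norm penalty that drives most parameters to exactly zero, can be illustrated in its simplest (no-hidden-layer) form with accelerated proximal gradient descent. This is a hand-rolled sketch on simulated genotype-like data, not the snnR package itself; all sizes and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data: 200 markers, 100 animals, 5 causal markers.
n_markers, n_animals = 200, 100
X = rng.normal(size=(n_animals, n_markers))
w_true = np.zeros(n_markers)
w_true[:5] = [2.0, -1.5, 1.0, -1.0, 0.5]
y = X @ w_true + 0.1 * rng.normal(size=n_animals)

# Minimize 0.5*||y - Xw||^2 + lam*||w||_1 by FISTA (accelerated proximal
# gradient); the soft-threshold step zeroes out most weights.
lam = 5.0
L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
w = np.zeros(n_markers)
v, t = w.copy(), 1.0
for _ in range(1000):
    z = v - X.T @ (X @ v - y) / L      # gradient step at the momentum point
    w_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    v = w_new + ((t - 1.0) / t_new) * (w_new - w)
    w, t = w_new, t_new

print("nonzero weights:", np.count_nonzero(np.abs(w) > 1e-3))
```

    The fitted weight vector is sparse, with the surviving weights concentrated on the causal markers; in snnR the same L1 penalty is applied to the weights and biases of a multi-layer network rather than a linear model.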

  15. Clinical Profiles of Children with Disruptive Behaviors Based on the Severity of Their Conduct Problems, Callous-Unemotional Traits and Emotional Difficulties.

    PubMed

    Andrade, Brendan F; Sorge, Geoff B; Na, Jennifer Jiwon; Wharton-Shukster, Erika

    2015-08-01

    This study identified clinical profiles of referred children based on the severity of callous-unemotional (CU) traits, emotional difficulties, and conduct problems. Parents of 166 children (132 males) aged 6-12 years referred to a hospital clinic because of disruptive behavior completed measures to assess these key indicators, and person-centered analysis was used to identify profiles. Four distinct profiles were identified that include: (1) Children low in severity on the three domains, (2) Children high in severity on the three domains, (3) Children high in severity in conduct problems and CU traits with minimal emotional difficulties, and (4) Children high in severity in conduct problems and emotional difficulties with minimal CU traits. Profiles differed in degree of aggression and behavioral impairment. Findings show that clinic-referred children with disruptive behaviors can be grouped based on these important indicators into profiles that have important implications for assessment and treatment selection.

  16. AF-Shell 1.0 User Guide

    NASA Technical Reports Server (NTRS)

    McElroy, Mark W.

    2017-01-01

    This document serves as a user guide for the AF-Shell 1.0 software, an efficient tool for progressive damage simulation in composite laminates. This guide contains minimal technical material and is meant solely as a guide for a new user to apply AF-Shell 1.0 to laminate damage simulation problems.

  17. Lobb's Generalization of Catalan's Parenthesization Problem

    ERIC Educational Resources Information Center

    Koshy, Thomas

    2009-01-01

    A. Lobb discovered an interesting generalization of Catalan's parenthesization problem, namely: Find the number L(n, m) of arrangements of n + m positive ones and n - m negative ones such that every partial sum is nonnegative, where 0 ≤ m ≤ n. This article uses Lobb's formula, L(n, m) = (2m + 1)/(n + m + 1) C(2n, n + m), where C is the usual…
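
    Lobb's formula is straightforward to verify numerically; the snippet below (illustrative, not from the article) computes L(n, m) exactly and cross-checks it against brute-force enumeration of the arrangements, recovering the Catalan numbers at m = 0:

```python
from itertools import accumulate, permutations
from math import comb

def lobb(n, m):
    """Lobb number L(n, m) = (2m + 1)/(n + m + 1) * C(2n, n + m)."""
    return (2 * m + 1) * comb(2 * n, n + m) // (n + m + 1)

def brute(n, m):
    """Count arrangements of n+m ones and n-m negative ones whose partial
    sums are all nonnegative, by direct enumeration (small n only)."""
    seqs = set(permutations([1] * (n + m) + [-1] * (n - m)))
    return sum(all(s >= 0 for s in accumulate(seq)) for seq in seqs)

print([lobb(n, 0) for n in range(6)])   # -> [1, 1, 2, 5, 14, 42], the Catalan numbers
print(all(lobb(n, m) == brute(n, m) for n in range(1, 5) for m in range(n + 1)))
```

    The integer division is exact because L(n, m) is always an integer, so no floating-point arithmetic is needed.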

  18. Mutations M287L and Q266I in the Glycine Receptor α1 Subunit Change Sensitivity to Volatile Anesthetics in Oocytes and Neurons, but Not the Minimal Alveolar Concentration in Knockin Mice

    PubMed Central

    Borghese, Cecilia M.; Xiong, Wei; Oh, S. Irene; Ho, Angel; Mihic, S. John; Zhang, Li; Lovinger, David M.; Homanics, Gregg E.; Eger, Edmond I; Harris, R. Adron

    2012-01-01

    Background: Volatile anesthetics (VAs) alter the function of key central nervous system proteins, but it is not clear which, if any, of these targets mediates the immobility produced by VAs in the face of noxious stimulation. A leading candidate is the glycine receptor, a ligand-gated ion channel important for spinal physiology. VAs variously enhance such function, and blockade of spinal GlyRs with strychnine affects the minimal alveolar concentration (an anesthetic EC50) in proportion to the degree of enhancement. Methods: We introduced single amino acid mutations into the glycine receptor α1 subunit that increased (M287L, third transmembrane region) or decreased (Q266I, second transmembrane region) sensitivity to isoflurane in recombinant receptors, and introduced such receptors into mice. The resulting knockin mice presented impaired glycinergic transmission, but heterozygous animals survived to adulthood, and we determined the effect of isoflurane on glycine-evoked responses of brain stem neurons from the knockin mice, and the minimal alveolar concentration for isoflurane and other VAs in the immature and mature knockin mice. Results: Studies of glycine-evoked currents in brain stem neurons from knockin mice confirmed the changes seen with recombinant receptors. No increases in the minimal alveolar concentration were found in knockin mice, but the minimal alveolar concentration for isoflurane and enflurane (but not halothane) decreased in 2-week-old Q266I mice. This change is opposite to the one expected for a mutation that decreases the sensitivity to volatile anesthetics. Conclusion: Taken together, these results indicate that glycine receptors containing the α1 subunit are not likely to be crucial for the action of isoflurane and other VAs. PMID:22885675

  19. Transfer and Semantic Universals in the L2 Acquisition of the English Article System by Child L2 Learners

    ERIC Educational Resources Information Center

    Morales-Reyes, Alexandra; Soler, Inmaculada Gómez

    2016-01-01

    L2 learners' problems with English articles have been linked to learners' L1 and their access to universal semantic features (e.g., definiteness and specificity). Studies suggest that L2 adults rely on their L1 knowledge, while child L2 learners rely more on their access to semantic universals. The present study investigates whether child L2…

  20. 45 CFR 262.5 - Under what general circumstances will we determine that a State has reasonable cause?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... State if we determine that the State had reasonable cause for its failure. The general factors a State...., hurricanes, earthquakes, fire) whose disruptive impact was so significant as to cause the State's failure; (2... (3) Isolated problems of minimal impact that are not indicative of a systemic problem. (b)(1) We will...

  1. Increasing Second Language Learners' Production and Comprehension of Developmentally Advanced Syntactic Forms

    ERIC Educational Resources Information Center

    Gámez, Perla B.; Vasilyeva, Marina

    2015-01-01

    This investigation extended the use of the priming methodology to 5- and 6-year-olds at the beginning stages of learning English as a second language (L2). In Study 1, 14 L2 children described transitive scenes without an experimenter's input. They produced no passives and minimal actives; most of their utterances were incomplete. In Study 2, 56…

  2. An optimization-based framework for anisotropic simplex mesh adaptation

    NASA Astrophysics Data System (ADS)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

    We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes the error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.
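
    The affine-invariant tensor manipulation mentioned above can be illustrated with a log-Euclidean average of symmetric positive-definite (SPD) metric tensors, a common way to combine element metrics without leaving the SPD cone. This is a generic sketch of the idea, not the authors' implementation; the matrices and the averaging rule are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import expm, logm

    def log_euclidean_mean(metrics):
        """Average SPD metric tensors in log space, so the result stays SPD."""
        logs = [logm(M) for M in metrics]
        return expm(sum(logs) / len(logs))

    M1 = np.diag([1.0, 1.0])   # isotropic element metric
    M2 = np.diag([4.0, 1.0])   # anisotropic metric: finer resolution in x
    Mbar = log_euclidean_mean([M1, M2])
    # entries interpolate geometrically: sqrt(1*4) = 2 in x, 1 in y
    ```

    Averaging in log space (rather than entrywise) is what keeps interpolated metrics positive definite, which is essential when the metric field encodes element sizes.
    
    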

  3. Flights between neighborhoods of unstable libration points of the Sun-Earth system

    NASA Astrophysics Data System (ADS)

    Surkova, Valerya; Shmyrov, Vasily

    2018-05-01

    In this paper we study the problem of constructing impulse flights between neighborhoods of unstable collinear libration points of the Sun-Earth system [1]. Such maneuvering in near-Earth space may prove to be in demand in modern space navigation; for example, such a maneuver was performed by the space vehicle GENESIS. Three test points are chosen for the implementation of the impulse control in order to move to a neighborhood of the libration point L2. It is shown that the earlier the impulse control is applied on exit from the vicinity of the libration point L1, the sooner the neighborhood of L2 is reached. Separately from this problem, the problem of optimal control in the neighborhood of L2 is considered and a form of stabilizing control is presented.

  4. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
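
    The decoupling described above, a smooth quadratic step plus an L1 total-variation step handled by split-Bregman, can be sketched in one dimension. This is an illustrative TV-denoising sketch using the standard split-Bregman updates, not the authors' tomography code; the values of λ, μ and the iteration count are arbitrary choices.

    ```python
    import numpy as np

    def shrink(x, t):
        """Soft-threshold operator used in the split-Bregman d-update."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def tv_denoise_1d(f, lam, mu=1.0, iters=200):
        """Split-Bregman for min_z 0.5||z - f||^2 + lam*||Dz||_1, D = forward difference."""
        n = len(f)
        D = np.diff(np.eye(n), axis=0)        # (n-1) x n difference matrix
        A = np.eye(n) + mu * D.T @ D          # system matrix of the smooth z-step
        d = np.zeros(n - 1)
        b = np.zeros(n - 1)
        z = f.copy()
        for _ in range(iters):
            z = np.linalg.solve(A, f + mu * D.T @ (d - b))  # quadratic subproblem
            Dz = D @ z
            d = shrink(Dz + b, lam / mu)                    # L1 subproblem (closed form)
            b = b + Dz - d                                  # Bregman variable update
        return z

    noisy = np.array([0., 0.1, -0.1, 0., 1., 0.9, 1.1, 1., 1., 0.95])
    clean = tv_denoise_1d(noisy, lam=0.3)
    ```

    The alternation mirrors the paper's decoupling: the z-step is a well-conditioned least-squares solve (the Tikhonov-like part), while the nonsmooth TV term is handled entirely by the cheap shrinkage step.
    
    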

  5. Percutaneous transhepatic embolization of gastroesophageal varices combined with partial splenic embolization for the treatment of variceal bleeding and hypersplenism

    PubMed Central

    Gong, Wei-Dong; Xue, Ke; Chu, Yuan-Kui; Wang, Qing; Yang, Wei; Quan, Hui; Yang, Peng; Wang, Zhi-Min; Wu, Zhi-Qun

    2015-01-01

    This study aims to evaluate the therapeutic results of percutaneous transhepatic embolization of gastroesophageal varices combined with partial splenic embolization in patients with liver cirrhosis, and to explore the role of this minimally invasive treatment as an alternative to surgery. 25 patients with liver cirrhosis received percutaneous transhepatic embolization of gastroesophageal varices combined with partial splenic embolization. Another 25 patients with liver cirrhosis underwent Hassab's operation. All patients were followed up and received endoscopy, B ultrasound, liver function and hematologic examination at 24 months after therapy. In the minimally invasive group, improvements in varices, portal hypertension and hypersplenism at the 24-month follow-up were comparable with those in the surgery group, as measured by endoscopic visualization, ultrasound and blood counts. The white blood cell and platelet counts were 2.33±0.65 (10^9/L) and 3.63±1.05 (10^10/L) before treatment, and 7.98±3.0 (10^9/L) and 16.3±9.10 (10^10/L) at the 24-month follow-up (P<0.05); the diameter of the portal vein was 1.47±0.25 cm before and 1.31±0.23 cm after treatment (P<0.05). Esophageal varices improved from grade III to grade II or lower in 11 patients, and from grade II to grade I or lower in 6 patients at the 24-month follow-up. In the surgical group, the white blood cell and platelet counts were 2.2±0.60 (10^9/L) and 4.1±1.25 (10^10/L) before treatment, and 9.3±2.56 (10^9/L) and 32.1±12.47 (10^10/L) at the 24-month follow-up (P<0.05). The diameter of the portal vein was 1.43±0.22 cm before and 1.28±0.18 cm after treatment (P<0.05). Esophageal varices improved from grade III to grade II or lower in 13 patients, and from grade II to grade I or lower in 7 patients. The combination of PGEV and PSE can be considered as an option for the treatment of variceal bleeding with hypersplenism. PMID:26770628

  6. Combination of minimal processing and irradiation to improve the microbiological safety of lettuce ( Lactuca sativa, L.)

    NASA Astrophysics Data System (ADS)

    Goularte, L.; Martins, C. G.; Morales-Aizpurúa, I. C.; Destro, M. T.; Franco, B. D. G. M.; Vizeu, D. M.; Hutzler, B. W.; Landgraf, M.

    2004-09-01

    The feasibility of gamma radiation in combination with minimal processing (MP) to reduce the number of Salmonella spp. and Escherichia coli O157:H7 in iceberg lettuce ( Lactuca sativa, L.) (shredded) was studied in order to increase the safety of the product. The reduction of the microbial population during processing, the D10-values for Salmonella spp. and E. coli O157:H7 inoculated on shredded iceberg lettuce, as well as the sensory evaluation of the irradiated product were evaluated. Immersion in chlorine (200 ppm) reduced coliform and aerobic mesophilic microorganisms by 0.9 and 2.7 log, respectively. D-values varied from 0.16 to 0.23 kGy for Salmonella spp. and from 0.11 to 0.12 kGy for E. coli O157:H7. Minimally processed iceberg lettuce exposed to 0.9 kGy did not show any change in sensory attributes. However, the texture of the vegetable was affected by exposure to 1.1 kGy. Exposure of MP iceberg lettuce to 0.7 kGy reduced the population of Salmonella spp. by 4.0 log and E. coli by 6.8 log without impairing the sensory attributes. The combination of minimal processing and gamma radiation to improve the safety of iceberg lettuce is feasible if good hygiene practices begin at the farm stage.

  7. Initial multicenter technical experience with the Apollo device for minimally invasive intracerebral hematoma evacuation.

    PubMed

    Spiotta, Alejandro M; Fiorella, David; Vargas, Jan; Khalessi, Alexander; Hoit, Dan; Arthur, Adam; Lena, Jonathan; Turk, Aquilla S; Chaudry, M Imran; Gutman, Frederick; Davis, Raphael; Chesler, David A; Turner, Raymond D

    2015-06-01

    No conventional surgical intervention has been shown to improve outcomes for patients with spontaneous intracerebral hemorrhage (ICH) compared with medical management. We report the initial multicenter experience with a novel technique for the minimally invasive evacuation of ICH using the Penumbra Apollo system (Penumbra Inc, Alameda, California). Institutional databases were queried to perform a retrospective analysis of all patients who underwent ICH evacuation with the Apollo system from May 2014 to September 2014 at 4 centers (Medical University of South Carolina, Stony Brook University, University of California at San Diego, and Semmes-Murphy Clinic). Cases were performed either in the neurointerventional suite, operating room, or in a hybrid operating room/angiography suite. Twenty-nine patients (15 female; mean age, 62 ± 12.6 years) underwent the minimally invasive evacuation of ICH. Six of these parenchymal hemorrhages had an additional intraventricular hemorrhage component. The mean volume of ICH was 45.4 ± 30.8 mL, which decreased to 21.8 ± 23.6 mL after evacuation (mean, 54.1 ± 39.1% reduction; P < .001). Two complications directly attributed to the evacuation attempt were encountered (6.9%). The mortality rate was 13.8% (n = 4). Minimally invasive evacuation of ICH and intraventricular hemorrhage can be achieved with the Apollo system. Future work will be required to determine which subset of patients are most likely to benefit from this promising technology.

  8. Screening for fractions of Oxytropis falcata Bunge with antibacterial activity.

    PubMed

    Jiang, H; Hu, J R; Zhan, W Q; Liu, X

    2009-01-01

    Preliminary studies with the four extracts of Oxytropis falcata Bunge showed that the chloroform and ethyl acetate extracts had the strongest antibacterial activities against the nine tested Gram-positive and Gram-negative bacteria. HPLC-scanned and bioassay-guided fractionation led to the isolation and identification of the main flavonoid compounds, i.e. rhamnocitrin, kaempferol, rhamnetin, 2',4'-dihydroxychalcone and 2',4',beta-trihydroxy-dihydrochalcone. Except for 2',4',beta-trihydroxy-dihydrochalcone, the four other compounds had good antibacterial activities. The minimal inhibitory concentrations (MICs) and minimal bactericidal concentrations (MBCs) of the four compounds ranged between 125 and 515 microg mL(-1). Staphylococcus aureus was the most susceptible to these compounds, with MIC and MBC values from 125 to 130 microg mL(-1). This is the first report of antibacterial activity in O. falcata Bunge. In this study, evidence to evaluate the biological functions of O. falcata Bunge is provided, promoting the rational use of this herb.

  9. Anesthetic efficacy and heart rate effects of the intraosseous injection of 3% mepivacaine after an inferior alveolar nerve block.

    PubMed

    Gallatin, E; Stabile, P; Reader, A; Nist, R; Beck, M

    2000-01-01

    The purpose of this study was to determine the anesthetic efficacy and heart rate effects of an intraosseous injection of 3% mepivacaine after an inferior alveolar nerve block. Through use of a repeated-measures design, each of 48 subjects randomly received 2 combinations of injections at 2 separate appointments. The combinations were (1) an inferior alveolar nerve block (with 1.8 mL of 3% mepivacaine) + intraosseous injection with 1.8 mL of 3% mepivacaine and (2) an inferior alveolar nerve block (with 1.8 mL of 3% mepivacaine) + mock intraosseous injection. The first molar was blindly pulp tested at 2-minute cycles for 60 minutes postinjection. Anesthesia was considered successful with 2 consecutive 80 readings. Heart rate (pulse rate) was measured with a pulse oximeter. All subjects had lip numbness with both of the inferior alveolar nerve + intraosseous techniques. Anesthetic success for the first molar was significantly increased for 30 minutes with intraosseous injection of mepivacaine in comparison with the inferior alveolar nerve block alone (mock intraosseous injection). Subjects receiving the intraosseous injection of mepivacaine experienced minimal increases in heart rate. The intraosseous injection of 1.8 mL of 3% mepivacaine, when used to augment an inferior alveolar nerve block, significantly increased anesthetic success for 30 minutes in the first molar. The 3% mepivacaine had a minimal effect on heart rate and would be useful in patients with contraindications to epinephrine use.

  10. BOUNDARY VALUE PROBLEM INVOLVING THE p-LAPLACIAN ON THE SIERPIŃSKI GASKET

    NASA Astrophysics Data System (ADS)

    Priyadarshi, Amit; Sahu, Abhilash

    In this paper, we study the following boundary value problem involving the weak p-Laplacian: -Δ_p u = λ a(x)|u|^{q-1}u + b(x)|u|^{l-1}u in 𝒮 ∖ 𝒮_0; u = 0 on 𝒮_0, where 𝒮 is the Sierpiński gasket in ℝ², 𝒮_0 is its boundary, λ > 0, p > 1, 0 < q < p - 1 < l, and a, b : 𝒮 → ℝ are bounded nonnegative functions. We will show the existence of at least two nontrivial weak solutions to the above problem for a certain range of λ using the analysis of fibering maps on suitable subsets.

  11. Scheduling Capacitated One-Way Vehicles on Paths with Deadlines

    NASA Astrophysics Data System (ADS)

    Uchida, Jun; Karuno, Yoshiyuki; Nagamochi, Hiroshi

    In this paper, we deal with a scheduling problem of minimizing the number of employed vehicles on paths. Let G=(V,E) be a path with a set V={v_i | i=1,2,...,n} of vertices and a set E={{v_i,v_{i+1}} | i=1,2,...,n-1} of edges. Vehicles with capacity b are initially situated at v_1. There is a job i at each vertex v_i ∈ V, which has its own handling time h_i and deadline d_i. With each edge {v_i,v_{i+1}} ∈ E, a travel time w_{i,i+1} is associated. Each job is processed by exactly one vehicle, and the number of jobs processed by a vehicle does not exceed the capacity b. A routing of a vehicle is called one-way if the vehicle visits every edge {v_i,v_{i+1}} exactly once (i.e., it simply moves from v_1 to v_n on G). Every vehicle is assumed to follow the one-way routing constraint. The problem asks to find a schedule that minimizes the number of one-way vehicles, meeting the deadline and capacity constraints. A greedy heuristic is proposed, which repeats a dynamic programming procedure for a single one-way vehicle problem of maximizing the number of non-tardy jobs. We show that the greedy heuristic runs in O(n^3) time, and that its approximation ratio is at most ln b + 1.
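
    The greedy scheme can be sketched as follows: a knapsack-style dynamic program selects a maximum-cardinality set of non-tardy jobs for one vehicle, and an outer loop repeats this until all jobs are assigned. This is an illustrative reconstruction from the abstract, not the authors' code; the dense DP table and the backtracking rule are choices of this sketch.

    ```python
    def min_one_way_vehicles(h, d, w, b):
        """Greedy: repeatedly serve a max-cardinality set of non-tardy jobs with one vehicle.

        h[i], d[i]: handling time and deadline of job i; w[i]: travel time of the
        edge between consecutive vertices; b: vehicle capacity.
        """
        n = len(h)
        T = [0] * n                    # arrival offset at each vertex from travel alone
        for i in range(1, n):
            T[i] = T[i - 1] + w[i - 1]
        remaining, vehicles = set(range(n)), 0
        while remaining:
            idx = sorted(remaining)
            m, INF = len(idx), float("inf")
            # best[t][k] = min total handling time using k jobs among the first t of idx
            best = [[INF] * (b + 1) for _ in range(m + 1)]
            best[0][0] = 0.0
            for t, j in enumerate(idx):
                best[t + 1] = best[t][:]
                for k in range(1, b + 1):
                    if best[t][k - 1] < INF:
                        cand = best[t][k - 1] + h[j]
                        if T[j] + cand <= d[j] and cand < best[t + 1][k]:
                            best[t + 1][k] = cand   # job j still meets its deadline
            kbest = max(k for k in range(b + 1) if best[m][k] < INF)
            if kbest == 0:
                raise ValueError("some remaining job can never meet its deadline")
            chosen, k = [], kbest
            for t in range(m, 0, -1):              # backtrack the DP choices
                j = idx[t - 1]
                if k > 0 and best[t - 1][k - 1] < INF and \
                   best[t - 1][k - 1] + h[j] == best[t][k] and T[j] + best[t][k] <= d[j]:
                    chosen.append(j)
                    k -= 1
            remaining -= set(chosen)
            vehicles += 1
        return vehicles
    ```

    Each DP pass is O(nb); repeating it at most n times yields the cubic-style bound, consistent with the O(n^3) running time stated in the abstract.
    
    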

  12. Quality of life effects of androgen deprivation therapy in a prostate cancer cohort in New Zealand: can we minimize effects using a stratification based on the aldo-keto reductase family 1, member C3 rs12529 gene polymorphism?

    PubMed

    Karunasinghe, Nishi; Zhu, Yifei; Han, Dug Yeo; Lange, Katja; Zhu, Shuotun; Wang, Alice; Ellett, Stephanie; Masters, Jonathan; Goudie, Megan; Keogh, Justin; Benjamin, Benji; Holmes, Michael; Ferguson, Lynnette R

    2016-08-02

    Androgen deprivation therapy (ADT) is an effective palliation treatment in men with advanced prostate cancer (PC). However, ADT has well documented side effects that could alter the patient's health-related quality of life (HRQoL). The current study aims to test whether a genetic stratification could provide better knowledge for optimising ADT options to minimize HRQoL effects. A cohort of 206 PC survivors (75 treated with and 131 without ADT) was recruited with written consent to collect patient characteristics, clinical data and HRQoL data related to PC management. The primary outcomes were the percentage scores under each HRQoL subscale assessed using the European Organisation for Research and Treatment of Cancer Quality of Life questionnaires (QLQ-C30 and PR25) and the Depression Anxiety Stress Scales developed by the University of Melbourne, Australia. Genotyping of these men was carried out for the aldo-keto reductase family 1, member C3 (AKR1C3) rs12529 single nucleotide polymorphism (SNP). Analyses of HRQoL scores were carried out against ADT duration and in association with the AKR1C3 rs12529 SNP using the generalised linear model. P-values <0.05 were considered significant and were further tested for restriction with Bonferroni correction. An increase in hormone treatment-related effects was recorded with long-term ADT compared to no ADT. The C and G allele frequencies of the AKR1C3 rs12529 SNP were 53.4% and 46.6%, respectively. Hormone treatment-related symptoms showed an increase with ADT when associated with the AKR1C3 rs12529 G allele. Meanwhile, decreasing trends in cancer-specific symptoms and increased sexual interest were recorded with no ADT when associated with the AKR1C3 rs12529 G allele, and reverse trends with the C allele. As a higher incidence of cancer-specific symptoms relates to cancer retention, it is possible that, associated with the C allele, there could be a higher incidence of unresolved cancers under no-ADT options. 
If these findings can be reproduced in larger homogeneous cohorts, a genetic stratification based on the AKR1C3 rs12529 SNP can minimize ADT-related HRQoL effects in PC patients. Our data additionally show that with this stratification it could also be possible to identify men needing ADT for better oncological advantage.

  13. Periodic Inclusion—Matrix Microstructures with Constant Field Inclusions

    NASA Astrophysics Data System (ADS)

    Liu, Liping; James, Richard D.; Leo, Perry H.

    2007-04-01

    We find a class of special microstructures consisting of a periodic array of inclusions, with the special property that constant magnetization (or eigenstrain) of the inclusion implies constant magnetic field (or strain) in the inclusion. The resulting inclusions, which we term E-inclusions, have the same property in a finite periodic domain as ellipsoids have in infinite space. The E-inclusions are found by mapping the magnetostatic or elasticity equations to a constrained minimization problem known as a free-boundary obstacle problem. By solving this minimization problem, we can construct families of E-inclusions with any prescribed volume fraction between zero and one. In two dimensions, our results coincide with the microstructures first introduced by Vigdergauz,[1,2] while in three dimensions, we introduce a numerical method to calculate E-inclusions. E-inclusions extend the important role of ellipsoids in calculations concerning phase transformations and composite materials.
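
    The free-boundary obstacle problem that the construction maps to can be illustrated in its standard discrete form: minimize a quadratic energy subject to u ≥ ψ, solved here by projected Gauss-Seidel in one dimension. This is a generic sketch of an obstacle-problem solver, unrelated to the specific magnetostatic functional of the paper; the load, obstacle and grid size are illustrative.

    ```python
    import numpy as np

    def obstacle_1d(f, psi, iters=2000):
        """Projected Gauss-Seidel for -u'' = f on (0,1), u >= psi, u(0) = u(1) = 0.

        Discretized as (1/h^2) * tridiag(-1, 2, -1) u = f, with each sweep
        projecting the unconstrained update back onto the obstacle u >= psi.
        """
        n = len(f)
        h = 1.0 / (n + 1)
        u = np.maximum(np.zeros(n), psi)
        for _ in range(iters):
            for i in range(n):
                left = u[i - 1] if i > 0 else 0.0
                right = u[i + 1] if i < n - 1 else 0.0
                # unconstrained Gauss-Seidel update, then project onto the obstacle
                u[i] = max((h * h * f[i] + left + right) / 2.0, psi[i])
        return u

    n = 19
    psi = np.zeros(n)
    u_contact = obstacle_1d(-np.ones(n), psi)  # load pushes into the obstacle: full contact
    u_free = obstacle_1d(np.ones(n), psi)      # obstacle inactive: plain Poisson solution
    ```

    The contact set where u = ψ plays the role of the free boundary; in the paper's construction its shape is what determines the inclusion geometry.
    
    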

  14. Single machine total completion time minimization scheduling with a time-dependent learning effect and deteriorating jobs

    NASA Astrophysics Data System (ADS)

    Wang, Ji-Bo; Wang, Ming-Zheng; Ji, Ping

    2012-05-01

    In this article, we consider a single machine scheduling problem with a time-dependent learning effect and deteriorating jobs. By the effects of time-dependent learning and deterioration, we mean that the job processing time is defined by a function of its starting time and the total normal processing time of the jobs in front of it in the sequence. The objective is to determine an optimal schedule so as to minimize the total completion time. For the open case of -1 < a < 0, where a denotes the learning index, we show that an optimal schedule of the problem is V-shaped with respect to job normal processing times. Three heuristic algorithms utilising the V-shaped property are proposed, and computational experiments show that the last heuristic algorithm performs effectively and efficiently in obtaining near-optimal solutions.
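
    The V-shaped property can be exploited heuristically by placing jobs with large normal processing times at the two ends of the sequence and the smallest in the middle. The sketch below only constructs a V-shaped order from normal processing times; the paper's actual completion-time model (learning and deterioration effects) and its three heuristics are not reproduced here.

    ```python
    def v_shaped_order(p):
        """Arrange processing times so they are non-increasing, then non-decreasing."""
        front, back = [], []
        # distribute jobs alternately to the two arms of the V, largest first
        for i, x in enumerate(sorted(p, reverse=True)):
            (front if i % 2 == 0 else back).append(x)
        return front + back[::-1]

    seq = v_shaped_order([3, 1, 4, 1, 5, 9, 2, 6])
    # seq decreases to the smallest job, then increases again
    ```

    Any such V-shaped permutation satisfies the structural property the paper proves for optimal schedules; a full heuristic would additionally evaluate candidate V-shapes under the time-dependent processing-time function.
    
    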

  15. Allyl isothiocyanate enhances shelf life of minimally processed shredded cabbage.

    PubMed

    Banerjee, Aparajita; Penna, Suprasanna; Variyar, Prasad S

    2015-09-15

    The effect of allyl isothiocyanate (AITC), in combination with low-temperature (10°C) storage, on postharvest quality of minimally processed shredded cabbage was investigated. An optimum concentration of 0.05 μL/mL AITC was found to be effective in maintaining the microbial and sensory quality of the product for a period of 12 days. Inhibition of browning was shown to result from a down-regulation (1.4-fold) of phenylalanine ammonia lyase (PAL) gene expression and a consequent decrease in PAL enzyme activity and o-quinone content. In the untreated control samples, PAL activity increased following up-regulation of PAL gene expression, which could be linearly correlated with enhanced o-quinone formation and browning. The efficacy of AITC in extending the shelf life of minimally processed shredded cabbage, and its role in down-regulating PAL gene expression and thereby inhibiting browning in the product, is reported here for the first time. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. POLLUTION PREVENTION MULTI-YEAR PLAN

    EPA Science Inventory

    Over the last decade, the Agency has increasingly focused on pollution prevention when addressing high-risk human health and environmental problems. A preventive approach requires: (1) innovative design and production techniques that minimize or eliminate adverse environmental im...

  17. Prospective evaluation of the impact of covert hepatic encephalopathy on quality of life and sleep in cirrhotic patients.

    PubMed

    Labenz, C; Baron, J S; Toenges, G; Schattenberg, J M; Nagel, M; Sprinzl, M F; Nguyen-Tat, M; Zimmermann, T; Huber, Y; Marquardt, J U; Galle, P R; Wörns, M-A

    2018-06-04

    Minimal hepatic encephalopathy (HE) and HE grade 1 (HE1) according to the West Haven criteria have recently been grouped as one entity named 'covert HE' (CHE). Data regarding the impact of CHE on health-related quality of life (HRQoL) and sleep quality are controversial. The aims were, first, to determine whether CHE affects the HRQoL and sleep quality of cirrhotic patients and, second, whether minimal HE (MHE) and HE1 affect HRQoL and sleep quality to a comparable extent. A total of 145 consecutive cirrhotic patients were enrolled. HE1 was diagnosed clinically according to the West Haven criteria. Critical flicker frequency and the Psychometric Hepatic Encephalopathy Score were used to detect MHE. The Chronic Liver Disease Questionnaire (CLDQ) was used to assess HRQoL, and the Pittsburgh Sleep Quality Index (PSQI) was applied to assess sleep quality. Covert HE was detected in 59 (40.7%) patients (MHE: n = 40; HE1: n = 19). Multivariate analysis identified CHE (P < 0.001) and female gender (P = 0.006) as independent predictors of reduced HRQoL (CLDQ total score). CHE (P = 0.021), low haemoglobin (P = 0.024) and female gender (P = 0.003) were identified as independent predictors of poor sleep quality (PSQI total score). Results of the CLDQ and PSQI were comparable in patients with HE1 and MHE (CLDQ: 4.6 ± 0.9 vs 4.5 ± 1.2, P = 0.907; PSQI: 11.3 ± 3.8 vs 9.9 ± 5.0, P = 0.3). Covert HE was associated with impaired HRQoL and sleep quality. MHE and HE1 affected both outcomes to a comparable extent, supporting the use of CHE as a clinically useful term for patients with both entities of HE in clinical practice. © 2018 John Wiley & Sons Ltd.

  18. Minimization of the root of a quadratic functional under a system of affine equality constraints with application to portfolio management

    NASA Astrophysics Data System (ADS)

    Landsman, Zinoviy

    2008-10-01

    We present an explicit closed form solution of the problem of minimizing the root of a quadratic functional subject to a system of affine constraints. The result generalizes Z. Landsman, Minimization of the root of a quadratic functional under an affine equality constraint, J. Comput. Appl. Math. (2007, in press), where the optimization problem was solved under only one linear constraint. This is of interest for solving significant problems pertaining to financial economics as well as some classes of feasibility and optimization problems which frequently occur in tomography and other fields. The results are illustrated in the problem of optimal portfolio selection, and the particular case when the expected return of the finance portfolio is certain is discussed.
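
    When the objective is the pure root of a positive-definite quadratic form, its minimizer under affine equality constraints coincides with that of the quadratic itself (the square root is monotone on [0, ∞)), so a KKT system suffices. This sketch illustrates only that special case; Landsman's closed form also covers objectives with an added linear term, which this snippet does not handle.

    ```python
    import numpy as np

    def min_root_quadratic(Q, A, c):
        """Minimize sqrt(x^T Q x) subject to A x = c, with Q positive definite.

        Since sqrt is monotone, it suffices to solve the KKT system of min x^T Q x.
        """
        n, m = Q.shape[0], A.shape[0]
        K = np.block([[2 * Q, A.T], [A, np.zeros((m, m))]])  # stationarity + feasibility
        rhs = np.concatenate([np.zeros(n), c])
        sol = np.linalg.solve(K, rhs)
        return sol[:n]

    # minimum-"risk" portfolio with a unit-budget constraint: Q = I, weights sum to 1
    x = min_root_quadratic(np.eye(3), np.ones((1, 3)), np.array([1.0]))
    # equal weights x = [1/3, 1/3, 1/3] minimize the root of the quadratic
    ```

    With several affine constraints the same block system applies, one multiplier per constraint row, which is the setting the paper's closed form generalizes to.
    
    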

  19. Extension of Storage Stability in Energy-Dense Encapsulated Systems by Minimization of Lipid Oxidation

    DTIC Science & Technology

    1988-01-01

    of compounds like gallic and chlorogenic acids. There are, of course, modifying circumstances. BHA and BHT are quite volatile and may be partially... [OCR-garbled index terms; recoverable entries include FLUORESCENCE, AUTOXIDATION, ASCORBIC ACID, MAILLARD REACTION, ANTIOXIDANTS, SOLID SAMPLE FLUORESCENCE] By contrast, ascorbic acid produced little to no effect in the compressed system. ...DG, a lipophile, was more than twice as effective as propyl...

  20. Inverse associations between cord vitamin D and attention deficit hyperactivity disorder symptoms: A child cohort study.

    PubMed

    Mossin, Mats H; Aaby, Jens B; Dalgård, Christine; Lykkedegn, Sine; Christesen, Henrik T; Bilenberg, Niels

    2017-07-01

    To examine the association between cord 25-hydroxyvitamin D2+3 (25(OH)D) and attention deficit hyperactivity disorder symptoms in toddlers, using the Child Behaviour Checklist for ages 1.5-5. In a population-based birth cohort, a Child Behaviour Checklist for ages 1.5-5 questionnaire was returned by parents of 1233 infants with mean age 2.7 (standard deviation 0.6) years. Adjusted associations between cord 25(OH)D and Child Behaviour Checklist-based attention deficit hyperactivity disorder problems were analysed by multiple regression. Results The median cord 25(OH)D was 44.1 (range: 1.5-127.1) nmol/L. The mean attention deficit hyperactivity disorder problem score was 2.7 (standard deviation 2.1). In adjusted analyses, cord 25(OH)D levels >25 nmol/L and >30 nmol/L were associated with lower attention deficit hyperactivity disorder scores compared to levels ⩽25 nmol/L (p = 0.035) and ⩽30 nmol/L (p = 0.043), respectively. The adjusted odds of scoring above the 90th percentile on the Child Behaviour Checklist-based attention deficit hyperactivity disorder problem scale decreased by 11% per 10 nmol/L increase in cord 25(OH)D. An inverse association between cord 25(OH)D and attention deficit hyperactivity disorder symptoms in toddlers was found, suggesting a protective effect of prenatal vitamin D.

  1. Vitamin D supplement consumption is required to achieve a minimal target 25-hydroxyvitamin D concentration of ≥75 nmol/L in older people.

    PubMed

    Baraké, Roula; Weiler, Hope; Payette, Hélène; Gray-Donald, Katherine

    2010-03-01

    Population-level data on how older individuals living at high latitudes achieve optimal vitamin D status are not fully explored. Our objective was to examine the intake of vitamin D among healthy older individuals with 25-hydroxyvitamin D [25(OH)D] concentrations ≥75 nmol/L and to describe current sources of dietary vitamin D. We conducted a population-based, cross-sectional study of 404 healthy men and women aged 69 to 83 y randomly selected from the NuAge longitudinal study in Québec, Canada. Dietary intakes were assessed by six 24-h recalls. We examined the contribution of foods and vitamin/mineral supplements to vitamin D intake. Serum 25(OH)D was assessed by RIA. We assessed smoking status, season of 25(OH)D measurement, physical activity, and anthropometric and sociodemographic variables. Vitamin D status was distributed as follows: 7% (<37.5 nmol/L), 48% (37.5-74.9 nmol/L), and 45% (≥75 nmol/L). Vitamin D intake from supplements varied across the 3 vitamin D status groups: 0.5, 4.1, and 8.9 μg/d, respectively (P < 0.0001). Adding food sources, the total intakes were 4.6, 8.7, and 14.1 μg/d, respectively. In multivariate analysis, vitamin D from foods and supplements, as well as season, was associated with vitamin D status. These healthy, community-dwelling older men and women with 25(OH)D concentrations ≥75 nmol/L had mean intakes of 14.1 μg/d from food and supplements. Supplement use is an important contributor to achieving a minimal target 25(OH)D concentration of ≥75 nmol/L.

  2. Convergence of neural networks for programming problems via a nonsmooth Lojasiewicz inequality.

    PubMed

    Forti, Mauro; Nistri, Paolo; Quincampoix, Marc

    2006-11-01

    This paper considers a class of neural networks (NNs) for solving linear programming (LP) problems, convex quadratic programming (QP) problems, and nonconvex QP problems where an indefinite quadratic objective function is subject to a set of affine constraints. The NNs are characterized by constraint neurons modeled by ideal diodes with vertical segments in their characteristic, which enable the implementation of an exact penalty method. A new method is exploited to address convergence of trajectories, based on a nonsmooth Lojasiewicz inequality for the generalized gradient vector field describing the NN dynamics. The method makes it possible to prove that each forward trajectory of the NN has finite length, and as a consequence converges toward a singleton. Furthermore, by means of a quantitative evaluation of the Lojasiewicz exponent at the equilibrium points, the following results on the convergence rate of trajectories are established: (1) for nonconvex QP problems, each trajectory is either exponentially convergent, or convergent in finite time, toward a singleton belonging to the set of constrained critical points; (2) for convex QP problems, the same result as in (1) holds; moreover, the singleton belongs to the set of global minimizers; and (3) for LP problems, each trajectory converges in finite time to a singleton belonging to the set of global minimizers. These results, which improve previous results obtained via the Lyapunov approach, hold independently of the nature of the set of equilibrium points, and in particular even when the NN possesses infinitely many nonisolated equilibrium points.
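
    The exact-penalty idea behind the constraint neurons can be sketched numerically: replace each constraint by a nonsmooth penalty σ·max(0, violation) and follow the (sub)gradient downhill. With σ large enough, minimizers of the penalized problem are feasible minimizers of the LP. The tiny LP, step size and penalty weight below are illustrative, and plain Euler steps only approximate the continuous-time NN dynamics the paper analyzes.

    ```python
    def penalized_lp_descent(steps=4000, eta=0.001, sigma=10.0):
        """Subgradient descent on c^T x + sigma * sum(max(0, violation)) for the LP:
        minimize x1 + x2  subject to  x1 + x2 >= 1, x1 >= 0, x2 >= 0.
        """
        x1, x2 = 2.0, 2.0
        for _ in range(steps):
            g1, g2 = 1.0, 1.0              # gradient of the linear objective
            if x1 + x2 < 1.0:              # penalty subgradient of max(0, 1 - x1 - x2)
                g1 -= sigma
                g2 -= sigma
            if x1 < 0.0:                   # penalty subgradients of the sign constraints
                g1 -= sigma
            if x2 < 0.0:
                g2 -= sigma
            x1 -= eta * g1
            x2 -= eta * g2
        return x1, x2

    x1, x2 = penalized_lp_descent()
    # the trajectory settles near the optimal face x1 + x2 = 1 (optimal value 1)
    ```

    The discontinuous jump in the subgradient at the constraint boundary is exactly the "vertical segment" role the diode-like constraint neurons play in the NN model.
    
    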

  3. Artemisia herba-alba essential oil from Buseirah (South Jordan): Chemical characterization and assessment of safe antifungal and anti-inflammatory doses.

    PubMed

    Abu-Darwish, M S; Cabral, C; Gonçalves, M J; Cavaleiro, C; Cruz, M T; Efferth, T; Salgueiro, L

    2015-11-04

    Artemisia herba-alba Asso ("desert wormwood" in English; "armoise blanche" in French; "shaih" in Arabic), is a medicinal and strongly aromatic plant widely used in traditional medicine by many cultures since ancient times. It is used to treat inflammatory disorders (colds, coughing, bronchitis, diarrhea), infectious diseases (skin diseases, scabies, syphilis) and others (diabetes, neuralgias). In Jordanian traditional medicine, this plant is used as antiseptic and against skin diseases, scabies, syphilis, fever as well as menstrual and nervous disorders. Considering the traditional medicinal uses and the lack of scientific studies addressing the cellular and molecular players involved in these biological activities, the present study was designed to unveil the antifungal and anti-inflammatory activities of A. herba-alba Asso essential oil at doses devoid of toxicity to mammalian cells. Chemical analysis of A. herba-alba essential oil isolated by hydrodistillation from aerial parts was carried out by gas chromatography (GC) and gas chromatography-mass spectrometry (GC-MS). The antifungal activity (minimal inhibitory concentrations and minimal lethal concentrations) was evaluated against yeasts, dermatophyte and Aspergillus strains. In order to explore the mechanisms behind the anti-fungal effect of the essential oil, the germ tube inhibition assay was evaluated using Candida albicans. The assessment of cell viability was accomplished using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay and the in vitro anti-inflammatory potential of A. herba-alba oil at the periphery and central nervous system was evaluated by measuring nitric oxide (NO) production using lipopolysaccharide (LPS)-stimulated mouse macrophages and microglia, respectively. Oxygen-containing monoterpenes are the main compounds of the oil, namely 1,8-cineole (20.1%), β-thujone (25.1%), α-thujone (22.9%) and camphor (10.5%). 
Among the fungal strains tested, the oil demonstrated potential against Trichophyton rubrum and Epidermophyton floccosum, with minimal inhibitory concentration (MIC) and minimal lethal concentration (MLC) values of 0.32 mg/mL, and against Cryptococcus neoformans, with a MIC of 0.64 mg/mL. The oil revealed a strong inhibitory effect on germ tube formation in C. albicans, with inhibition of filamentation of around 90% at a concentration of 0.16 mg/mL. Importantly, the essential oil significantly inhibited NO production evoked by LPS without cytotoxicity at concentrations up to 1.25 µL/mL in macrophages and up to 0.32 µL/mL in microglia. Furthermore, evaluation of cell viability in RAW 264.7 macrophages, BW2 microglial cells and HaCaT keratinocytes showed no cytotoxicity at concentrations up to 0.32 μL/mL. It was possible to find appropriate doses of A. herba-alba oil with both antifungal and anti-inflammatory activities and without detrimental effects towards several mammalian cell types. These findings add significant information to the pharmacological activity of A. herba-alba essential oil, specifically to its antifungal and anti-inflammatory therapeutic value, thus justifying and reinforcing the use of this plant in traditional medicine. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  4. L-Serine overproduction with minimization of by-product synthesis by engineered Corynebacterium glutamicum.

    PubMed

    Zhu, Qinjian; Zhang, Xiaomei; Luo, Yuchang; Guo, Wen; Xu, Guoqiang; Shi, Jinsong; Xu, Zhenghong

    2015-02-01

    The direct fermentative production of L-serine by Corynebacterium glutamicum from sugars is attractive. However, superfluous by-product accumulation and low L-serine productivity limit its industrial production on a large scale. This study aimed to investigate metabolic and bioprocess engineering strategies towards eliminating by-products as well as increasing L-serine productivity. Deletion of alaT and avtA encoding the transaminases and introduction of an attenuated mutant of acetohydroxyacid synthase (AHAS) increased both the L-serine production level (26.23 g/L) and its productivity (0.27 g/L/h). Compared to the parent strain, accumulation of the by-products L-alanine and L-valine in the resulting strain was reduced by 87% (from 9.80 to 1.23 g/L) and 60% (from 6.54 to 2.63 g/L), respectively. The modification decreased the metabolic flow towards the branched-chain amino acids (BCAAs) and shifted it towards L-serine production. Meanwhile, it was found that corn steep liquor (CSL) could stimulate cell growth and increase the sucrose consumption rate as well as L-serine productivity. With the addition of 2 g/L CSL, the resulting strain showed a significant improvement in the sucrose consumption rate (72%) and the L-serine productivity (67%). In fed-batch fermentation, 42.62 g/L of L-serine accumulation was achieved with a productivity of 0.44 g/L/h and a yield of 0.21 g/g sucrose, which is the highest production of L-serine from sugars reported to date. The results demonstrated that combined metabolic and bioprocess engineering strategies could minimize by-product accumulation and improve L-serine productivity.

  5. In vitro effects of Salvia officinalis L. essential oil on Candida albicans

    PubMed Central

    Sookto, Tularat; Srithavaj, Theerathavaj; Thaweboon, Sroisiri; Thaweboon, Boonyanit; Shrestha, Binit

    2013-01-01

    Objective To determine the anticandidal activities of Salvia officinalis L. (S. officinalis) essential oil against Candida albicans (C. albicans) and the inhibitory effects on the adhesion of C. albicans to polymethyl methacrylate (PMMA) resin surfaces. Methods The disc diffusion method was first used to test the anticandidal activities of the S. officinalis L. essential oil against the reference strain (ATCC 90028) and 2 clinical strains of C. albicans. Then the minimal inhibitory concentration (MIC) and minimal lethal concentration (MLC) were determined by a modified membrane method. The adhesion of C. albicans to the PMMA resin surface was assessed after immersion in S. officinalis L. essential oil at concentrations of 1×MIC, 0.5×MIC and 0.25×MIC at room temperature for 30 min. One-way ANOVA was used to compare Candida cell adhesion across the pretreatment agents, and Tukey's test was used for multiple comparisons. Results S. officinalis L. essential oil exhibited anticandidal activity against all strains of C. albicans, with inhibition zones ranging from 19.5 mm to 40.5 mm. The MIC and MLC of the oil were determined as 2.780 g/L against all test strains. Regarding the effects on C. albicans adhesion to the PMMA resin surface, it was found that immersion in the essential oil at concentrations of 1×MIC (2.780 g/L), 0.5×MIC (1.390 g/L) and 0.25×MIC (0.695 g/L) for 30 min significantly reduced the adhesion of all 3 test strains to the PMMA resin surface in a dose-dependent manner (P<0.05). Conclusions S. officinalis L. essential oil exhibited anticandidal activities against C. albicans and had inhibitory effects on the adhesion of the cells to the PMMA resin surface. With further testing and development, S. officinalis essential oil may be used as an antifungal denture cleanser to prevent candidal adhesion and thus reduce the risk of candida-associated denture stomatitis. PMID:23646301

  6. Is the neutrophil to lymphocyte ratio associated with liver fibrosis in patients with chronic hepatitis B?

    PubMed

    Kekilli, Murat; Tanoglu, Alpaslan; Sakin, Yusuf Serdar; Kurt, Mevlut; Ocal, Serkan; Bagci, Sait

    2015-05-14

    To determine the association between the neutrophil to lymphocyte (N/L) ratio and the degree of liver fibrosis in patients with chronic hepatitis B (CHB) infection. Between December 2011 and February 2013, 129 consecutive CHB patients who were admitted to the study hospitals for histological evaluation of chronic hepatitis B-related liver fibrosis were included in this retrospective study. The patients were divided into two groups based on the fibrosis score: individuals with a fibrosis score of F0 or F1 were included in the "no/minimal liver fibrosis" group, whereas patients with a fibrosis score of F2, F3, or F4 were included in the "advanced liver fibrosis" group. The Statistical Package for Social Sciences 18.0 for Windows was used to analyze the data. A P value of < 0.05 was accepted as statistically significant. Three experienced and blinded pathologists evaluated the fibrotic status and inflammatory activity of 129 liver biopsy samples from the CHB patients. Following histopathological examination, the "no/minimal fibrosis" group included 79 individuals, while the "advanced fibrosis" group included 50 individuals. Mean (N/L) ratio levels were notably lower in patients with advanced fibrosis when compared with patients with no/minimal fibrosis. The mean value of the aspartate aminotransferase-platelet ratio index was markedly higher in cases with advanced fibrosis compared to those with no/minimal fibrosis. Reduced levels of the peripheral blood N/L ratio were found to give high sensitivity, specificity and predictive values in CHB patients with significant fibrosis. The prominent finding of our research suggests that the N/L ratio can be used as a novel noninvasive marker of fibrosis in patients with CHB.

  7. Case Study: Unfavorable But Transient Physiological Changes During Contest Preparation in a Drug-Free Male Bodybuilder.

    PubMed

    Pardue, Andrew; Trexler, Eric T; Sprod, Lisa K

    2017-12-01

    Extreme body composition demands of competitive bodybuilding have been associated with unfavorable physiological changes, including alterations in metabolic rate and endocrine profile. The current case study evaluated the effects of contest preparation (8 months), followed by recovery (5 months), on a competitive drug-free male bodybuilder over 13 months (M1-M13). Serum testosterone, triiodothyronine (T3), thyroxine (T4), cortisol, leptin, and ghrelin were measured throughout the study. Body composition (BodPod, dual-energy x-ray absorptiometry [DXA]), anaerobic power (Wingate test), and resting metabolic rate (RMR) were assessed monthly. Sleep was assessed monthly via the Pittsburgh Sleep Quality Index (PSQI) and actigraphy. From M1 to M8, testosterone (623-173 ng∙dL⁻¹), T3 (123-40 ng∙dL⁻¹), and T4 (5.8-4.1 mg∙dL⁻¹) decreased, while cortisol (25.2-26.5 mg∙dL⁻¹) and ghrelin (383-822 pg∙mL⁻¹) increased. The participant lost 9.1 kg before competition as typical energy intake dropped from 3,860 to 1,724 kcal∙day⁻¹; BodPod estimates of body fat percentage were 13.4% at M1, 9.6% at M8, and 14.9% at M13; DXA estimates were 13.8%, 5.1%, and 13.8%, respectively. Peak anaerobic power (753.0 to 536.5 Watts) and RMR (107.2% of predicted to 81.2% of predicted) also decreased throughout preparation. Subjective sleep quality decreased from M1 to M8, but objective measures indicated minimal change. By M13, physiological changes were largely, but not entirely, reversed. Contest preparation may yield transient, unfavorable changes in endocrine profile, power output, RMR, and subjective sleep outcomes. Research with larger samples must identify strategies that minimize unfavorable adaptations and facilitate recovery following competition.

  8. Minimizing transient influence in WHPA delineation: An optimization approach for optimal pumping rate schemes

    NASA Astrophysics Data System (ADS)

    Rodriguez-Pretelin, A.; Nowak, W.

    2017-12-01

    For most groundwater protection management programs, Wellhead Protection Areas (WHPAs) have served as the primary protection measure. In their delineation, the influence of time-varying groundwater flow conditions is often underestimated because steady-state assumptions are commonly made. However, it has been demonstrated that temporal variations lead to significant changes in the required size and shape of WHPAs. Apart from natural transient groundwater drivers (e.g., changes in the regional angle of flow direction and seasonal natural groundwater recharge), anthropogenic causes such as transient pumping rates are among the most influential factors requiring larger WHPAs. We hypothesize that WHPA programs that integrate adaptive and optimized pumping-injection management schemes can counter transient effects and thus reduce the additional areal demand in well protection under transient conditions. The main goal of this study is to present a novel management framework that optimizes pumping schemes dynamically, in order to minimize the impact triggered by transient conditions in WHPA delineation. For optimizing pumping schemes, we consider three objectives: 1) to minimize the risk of pumping water from outside a given WHPA, 2) to maximize the groundwater supply, and 3) to minimize the involved operating costs. We solve transient groundwater flow with an available groundwater model coupled to Lagrangian particle tracking. The optimization problem is formulated as a dynamic programming problem. Two different optimization approaches are explored: I) single-objective optimization under objective (1) only, and II) multiobjective optimization under all three objectives, where compromise pumping rates are selected from the current Pareto front. Finally, we look for WHPA outlines that are as small as possible, yet allow the optimization problem to find the most suitable solutions.

  9. Antimicrobial effect of blueberry (Vaccinium corymbosum L.) extracts against the growth of Listeria monocytogenes and Salmonella Enteritidis

    USDA-ARS?s Scientific Manuscript database

    We studied the antimicrobial effects of berry extracts obtained from four cultivars (Elliott, Darrow, Bluecrop and Duke) of blueberry (Vaccinium corymbosum L.) on the growth of Listeria monocytogenes and Salmonella Enteritidis. The minimal inhibitory concentration (MIC) and minimal bactericidal conc...

  10. Effects of exercise with or without blueberries in the diet on cardio-metabolic risk factors: An exploratory pilot study in healthy subjects

    PubMed Central

    Nyberg, Sofia; Gerring, Edvard; Gjellan, Solveig; Vergara, Marta; Lindström, Torbjörn

    2013-01-01

    Background. The improvement of insulin sensitivity by exercise has been shown to be inhibited by supplementation of vitamins acting as antioxidants. Objective. To examine effects of exercise with or without blueberries, containing natural antioxidants, on cardio-metabolic risk factors. Methods. Fifteen healthy men and 17 women, 27.6 ± 6.5 years old, were recruited, and 26 completed a randomized cross-over trial with 4 weeks of exercise by running/jogging 5 km five times/week and 4 weeks of minimal physical activity. Participants were also randomized to consume 150 g of blueberries, or not, on exercise days. Laboratory variables were measured before and after a 5 km running-race at maximal speed at the beginning and end of each period, i.e. there were four maximal running-races and eight samplings in total for each participant. Results. Insulin and triglyceride levels were reduced while HDL-cholesterol increased by exercise compared with minimal physical activity. Participants randomized to consume blueberries showed an increase in fasting glucose levels compared with controls, during the exercise period (blueberries: from 5.12 ± 0.49 mmol/l to 5.32 ± 0.29 mmol/l; controls: from 5.24 ± 0.27 mmol/l to 5.17 ± 0.23 mmol/l, P = 0.04 for difference in change). Triglyceride levels fell in the control group (from 1.1 ± 0.49 mmol/l to 0.93 ± 0.31 mmol/l, P = 0.02), while HDL-cholesterol increased in the blueberry group (from 1.51 ± 0.29 mmol/l to 1.64 ± 0.33 mmol/l, P = 0.006). Conclusions. Ingestion of blueberries induced differential effects on cardio-metabolic risk factors, including increased levels of both fasting glucose and HDL-cholesterol. However, since it is possible that indirect effects on food intake were induced, other than consumption of blueberries, further studies are needed to confirm the findings. PMID:23977864

  11. Effects of exercise with or without blueberries in the diet on cardio-metabolic risk factors: an exploratory pilot study in healthy subjects.

    PubMed

    Nyberg, Sofia; Gerring, Edvard; Gjellan, Solveig; Vergara, Marta; Lindström, Torbjörn; Nystrom, Fredrik H

    2013-11-01

    The improvement of insulin sensitivity by exercise has been shown to be inhibited by supplementation of vitamins acting as antioxidants. To examine effects of exercise with or without blueberries, containing natural antioxidants, on cardio-metabolic risk factors. Fifteen healthy men and 17 women, 27.6 ± 6.5 years old, were recruited, and 26 completed a randomized cross-over trial with 4 weeks of exercise by running/jogging 5 km five times/week and 4 weeks of minimal physical activity. Participants were also randomized to consume 150 g of blueberries, or not, on exercise days. Laboratory variables were measured before and after a 5 km running-race at maximal speed at the beginning and end of each period, i.e. there were four maximal running-races and eight samplings in total for each participant. Insulin and triglyceride levels were reduced while HDL-cholesterol increased by exercise compared with minimal physical activity. Participants randomized to consume blueberries showed an increase in fasting glucose levels compared with controls, during the exercise period (blueberries: from 5.12 ± 0.49 mmol/l to 5.32 ± 0.29 mmol/l; controls: from 5.24 ± 0.27 mmol/l to 5.17 ± 0.23 mmol/l, P = 0.04 for difference in change). Triglyceride levels fell in the control group (from 1.1 ± 0.49 mmol/l to 0.93 ± 0.31 mmol/l, P = 0.02), while HDL-cholesterol increased in the blueberry group (from 1.51 ± 0.29 mmol/l to 1.64 ± 0.33 mmol/l, P = 0.006). Ingestion of blueberries induced differential effects on cardio-metabolic risk factors, including increased levels of both fasting glucose and HDL-cholesterol. However, since it is possible that indirect effects on food intake were induced, other than consumption of blueberries, further studies are needed to confirm the findings.

  12. Quantifying Patterns of Smooth Muscle Motility in the Gut and Other Organs With New Techniques of Video Spatiotemporal Mapping

    PubMed Central

    Lentle, Roger G.; Hulls, Corrin M.

    2018-01-01

    The uses and limitations of the various techniques of video spatiotemporal mapping based on change in diameter (D-type ST maps), change in longitudinal strain rate (L-type ST maps), change in area strain rate (A-type ST maps), and change in luminous intensity of reflected light (I-maps) are described, along with their use in quantifying motility of the wall of hollow smooth-muscle structures such as the gut. Hence, ST methods for determining the size, speed of propagation and frequency of contraction in the wall of gut compartments of differing geometric configurations are discussed. We also discuss the shortcomings and problems that are inherent in the various methods and the use of techniques to avoid or minimize them. This discussion includes the inability of D-type ST maps to indicate the site of a contraction that does not reduce the diameter of a gut segment, the manipulation of the axis [the line of interest (LOI)] of L-maps to determine the true axis of propagation of a contraction, problems with anterior curvature of gut segments, and the use of adjunct image analysis techniques that enhance particular features of the maps. PMID:29686624
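
    The D-type mapping described above can be sketched concretely (a minimal illustration assuming a pre-segmented binary video, not the authors' software): the diameter at each position along the long axis is simply the count of occupied pixels in the corresponding image column, and stacking one such profile per frame yields the spatiotemporal map, where a local contraction appears as a band of reduced diameter.

    ```python
    import numpy as np

    def d_type_st_map(masks: np.ndarray) -> np.ndarray:
        """Build a diameter-based (D-type) spatiotemporal map.

        masks: boolean array of shape (frames, rows, cols), True where the
        gut segment appears; the long axis is assumed to run along `cols`.
        Returns shape (frames, cols): diameter in pixels at each
        longitudinal position, one row per video frame.
        """
        # Diameter at each longitudinal position = occupied pixels per column.
        return masks.sum(axis=1)

    # Hypothetical 2-frame, 10x4-pixel video: uniform diameter 5 in frame 0,
    # then a local contraction to diameter 3 at column 2 in frame 1.
    video = np.zeros((2, 10, 4), dtype=bool)
    video[0, 2:7, :] = True
    video[1, 2:7, :] = True
    video[1, 2:7, 2] = False
    video[1, 3:6, 2] = True

    st = d_type_st_map(video)   # shape (2, 4)
    ```

    Note that, exactly as the abstract warns, a contraction that thickens the wall without narrowing the column count would be invisible in this map.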

  13. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating direction method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits a linear convergence rate to the optimal objective, but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.
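
    The core trade-off can be sketched in a deliberately centralized, scalar toy (the objective and constants below are assumptions for illustration, not the authors' decentralized implementation): the exact ADMM primal update minimizes f(x) + (c/2)(x - v)^2 by an inner iteration, while a DQM-style update takes a single step on the quadratic approximation of f at the current iterate.

    ```python
    import math

    def exact_update(grad, hess, x, v, c, iters=50):
        # Exact ADMM-style x-update: minimize f(x) + (c/2)(x - v)^2
        # by inner Newton iterations (the expensive subproblem).
        for _ in range(iters):
            x -= (grad(x) + c * (x - v)) / (hess(x) + c)
        return x

    def dqm_update(grad, hess, x, v, c):
        # DQM-style update: one step on the quadratic approximation
        # of f at the current iterate (one gradient/Hessian evaluation).
        return x - (grad(x) + c * (x - v)) / (hess(x) + c)

    # Toy smooth objective f(x) = log(1 + e^x) - 0.3 x.
    grad = lambda x: 1.0 / (1.0 + math.exp(-x)) - 0.3
    hess = lambda x: math.exp(-x) / (1.0 + math.exp(-x)) ** 2

    x_exact = exact_update(grad, hess, 0.0, 0.1, c=1.0)
    x_cheap = dqm_update(grad, hess, 0.0, 0.1, c=1.0)
    # The single cheap step lands very close to the exact minimizer,
    # mirroring DQM's "minimal effect on convergence" claim.
    ```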

  14. Geometric chaos indicators and computations of the spherical hypertube manifolds of the spatial circular restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Guzzo, Massimiliano; Lega, Elena

    2018-06-01

    The circular restricted three-body problem has five relative equilibria L1, L2, ..., L5. The invariant stable-unstable manifolds of the center manifolds originating at the partially hyperbolic equilibria L1, L2 have been identified as the separatrices for the motions which transit between the regions of the phase-space which are internal or external with respect to the two massive bodies. While the stable and unstable manifolds of the planar problem have been extensively studied both theoretically and numerically, the spatial case has not been as deeply investigated. This paper is devoted to the global computation of these manifolds in the spatial case with a suitable finite time chaos indicator. The definition of the chaos indicator is not trivial, since the mandatory use of the regularizing Kustaanheimo-Stiefel variables may introduce discontinuities in the finite time chaos indicators. From the study of such discontinuities, we define geometric chaos indicators which are globally defined and smooth, and whose ridges sharply approximate the stable and unstable manifolds of the center manifolds of L1, L2. We illustrate the method for the Sun-Jupiter mass ratio, and represent the topology of the asymptotic manifolds using sections and three-dimensional representations.

  15. Technical Note: A novel leaf sequencing optimization algorithm which considers previous underdose and overdose events for MLC tracking radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wisotzky, Eric, E-mail: eric.wisotzky@charite.de, E-mail: eric.wisotzky@ipk.fraunhofer.de; O’Brien, Ricky; Keall, Paul J., E-mail: paul.keall@sydney.edu.au

    2016-01-15

    Purpose: Multileaf collimator (MLC) tracking radiotherapy is complex as the beam pattern needs to be modified due to the planned intensity modulation as well as the real-time target motion. The target motion cannot be planned; therefore, the modified beam pattern differs from the original plan and the MLC sequence needs to be recomputed online. Current MLC tracking algorithms use a greedy heuristic in that they optimize for a given time, but ignore past errors. To overcome this problem, the authors have developed and improved an algorithm that minimizes large underdose and overdose regions. Additionally, previous underdose and overdose events are taken into account to avoid regions with a high quantity of dose events. Methods: The authors improved the existing MLC motion control algorithm by introducing a cumulative underdose/overdose map. This map represents the actual projection of the planned tumor shape and logs occurring dose events at each specific region. These events have an impact on the dose cost calculation and reduce recurrence of dose events at each region. The authors studied the improvement of the new temporal optimization algorithm in terms of the L1-norm minimization of the sum of overdose and underdose compared to not accounting for previous dose events. For evaluation, the authors simulated the delivery of 5 conformal and 14 intensity-modulated radiotherapy (IMRT) plans with 7 patient-measured 3D tumor motion traces. Results: Simulations with conformal shapes showed an improvement of the L1-norm of up to 8.5% after 100 MLC modification steps. Experiments showed comparable improvements with the same type of treatment plans. Conclusions: A novel leaf sequencing optimization algorithm which considers previous dose events for MLC tracking radiotherapy has been developed and investigated. Reductions in underdose/overdose are observed for conformal and IMRT delivery.
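
    The bookkeeping idea can be sketched as follows (an illustrative toy with an assumed history-penalty weight, not the published algorithm): each candidate aperture's underdose/overdose regions are scored with an L1-type count, regions that already suffered dose events carry an extra penalty from the cumulative map, and the chosen aperture's errors are logged back into that map.

    ```python
    import numpy as np

    def step_cost(planned, delivered, history, weight=0.5):
        """L1-type cost of one MLC step, penalizing regions with prior dose events.

        planned, delivered: boolean aperture masks (True = beam open).
        history: accumulated count of past under/overdose events per region.
        """
        under = planned & ~delivered      # planned open, actually blocked
        over = ~planned & delivered       # planned blocked, actually open
        errors = under | over
        cost = errors.sum() + weight * (errors * history).sum()
        new_history = history + errors    # log this step's dose events
        return cost, new_history

    planned = np.array([[1, 1, 0], [1, 1, 0]], dtype=bool)
    option_a = np.array([[1, 0, 0], [1, 1, 0]], dtype=bool)  # error in a fresh region
    option_b = np.array([[1, 1, 0], [1, 1, 1]], dtype=bool)  # error where history is high
    history = np.array([[0, 0, 0], [0, 0, 3]])

    cost_a, _ = step_cost(planned, option_a, history)
    cost_b, _ = step_cost(planned, option_b, history)
    # Both options make one raw L1 error, but option_b repeats a region with
    # prior dose events, so it is costlier and the optimizer avoids it.
    ```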

  16. L2-norm multiple kernel learning and its application to biomedical data fusion

    PubMed Central

    2010-01-01

    Background This paper introduces the notion of optimizing different norms in the dual problem of support vector machines with multiple kernels. The selection of norms yields different extensions of multiple kernel learning (MKL) such as L∞, L1, and L2 MKL. In particular, L2 MKL is a novel method that leads to non-sparse optimal kernel coefficients, which is different from the sparse kernel coefficients optimized by the existing L∞ MKL method. In real biomedical applications, L2 MKL may have more advantages over sparse integration method for thoroughly combining complementary information in heterogeneous data sources. Results We provide a theoretical analysis of the relationship between the L2 optimization of kernels in the dual problem with the L2 coefficient regularization in the primal problem. Understanding the dual L2 problem grants a unified view on MKL and enables us to extend the L2 method to a wide range of machine learning problems. We implement L2 MKL for ranking and classification problems and compare its performance with the sparse L∞ and the averaging L1 MKL methods. The experiments are carried out on six real biomedical data sets and two large scale UCI data sets. L2 MKL yields better performance on most of the benchmark data sets. In particular, we propose a novel L2 MKL least squares support vector machine (LSSVM) algorithm, which is shown to be an efficient and promising classifier for large scale data sets processing. Conclusions This paper extends the statistical framework of genomic data fusion based on MKL. Allowing non-sparse weights on the data sources is an attractive option in settings where we believe most data sources to be relevant to the problem at hand and want to avoid a "winner-takes-all" effect seen in L∞ MKL, which can be detrimental to the performance in prospective studies. The notion of optimizing L2 kernels can be straightforwardly extended to ranking, classification, regression, and clustering algorithms. 
To tackle the computational burden of MKL, this paper proposes several novel LSSVM based MKL algorithms. Systematic comparison on real data sets shows that LSSVM MKL has comparable performance as the conventional SVM MKL algorithms. Moreover, large scale numerical experiments indicate that when cast as semi-infinite programming, LSSVM MKL can be solved more efficiently than SVM MKL. Availability The MATLAB code of algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/l2lssvm.html. PMID:20529363
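
    The sparsity contrast at the heart of this abstract can be sketched with hypothetical per-kernel alignment scores and toy rank-1 kernels (made-up numbers, not the paper's LSSVM formulation): a winner-takes-all weighting, as tends to arise with L∞ MKL, concentrates all weight on the single best source, while L2-norm weighting keeps every informative source in the combination.

    ```python
    import numpy as np

    def combine(kernels, weights):
        # Weighted sum of kernel Gram matrices.
        return sum(w * K for w, K in zip(weights, kernels))

    # Hypothetical alignment scores of three data sources with the target.
    scores = np.array([0.9, 0.8, 0.1])

    # Sparse, winner-takes-all weighting: only the best kernel survives.
    sparse_w = (scores == scores.max()).astype(float)

    # L2-normalized weighting: non-sparse, every source keeps a share.
    l2_w = scores / np.linalg.norm(scores)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 3))
    kernels = [np.outer(X[:, j], X[:, j]) for j in range(3)]  # toy rank-1 kernels
    K_sparse = combine(kernels, sparse_w)
    K_l2 = combine(kernels, l2_w)
    ```

    The second, strongly informative source (score 0.8) contributes nothing under the sparse weighting but retains substantial weight under L2, which is the "avoid winner-takes-all" argument made above.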

  17. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system of equations. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.
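
    Why Newton's method is attractive for the resulting nonlinear system can be illustrated with a generic two-equation instance (an arbitrary example system chosen for illustration, not the dissertation's interpolation equations): each iteration solves one linear system with the Jacobian, and convergence near the root is quadratic.

    ```python
    import numpy as np

    def newton(F, J, x, tol=1e-12, max_iter=50):
        """Solve F(x) = 0 by Newton's method: x <- x - J(x)^{-1} F(x)."""
        for _ in range(max_iter):
            step = np.linalg.solve(J(x), F(x))  # one linear solve per iteration
            x = x - step
            if np.linalg.norm(step) < tol:
                break
        return x

    # Example system: intersect the circle x^2 + y^2 = 4 with the parabola y = x^2.
    F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[1] - x[0]**2])
    J = lambda x: np.array([[2 * x[0], 2 * x[1]],
                            [-2 * x[0], 1.0]])

    root = newton(F, J, np.array([1.0, 1.0]))
    ```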

  18. Teaching L2 Interactional Competence: Problems and Possibilities

    ERIC Educational Resources Information Center

    Waring, Hansun Zhang

    2018-01-01

    This contribution outlines the problems and possibilities of three issues with regard to the teaching of L2 interactional competence (IC): (1) specifying IC, (2) standardising IC, and (3) translating conversation analytic (CA) insights into classroom practices. In particular, I argue for a shift of discussion from the conceptually treacherous…

  19. Anti-Proliferative and Anti-Inflammatory Lanostane Triterpenoids from the Polish Edible Mushroom Macrolepiota procera.

    PubMed

    Chen, He-Ping; Zhao, Zhen-Zhu; Li, Zheng-Hui; Huang, Ying; Zhang, Shuai-Bing; Tang, Yang; Yao, Jian-Neng; Chen, Lin; Isaka, Masahiko; Feng, Tao; Liu, Ji-Kai

    2018-03-28

    This study features the isolation and identification of 12 lanostane-type triterpenoids, namely lepiotaprocerins A-L (1-12), from the fruiting bodies of the edible mushroom Macrolepiota procera collected in Poland. The structures and absolute configurations of the new compounds were unambiguously established by extensive spectroscopic analyses, ECD calculation, and single-crystal X-ray diffraction analyses. Structurally, lepiotaprocerins A-F (1-6) are distinguished by the presence of a rare "1-en-1,11-epoxy" moiety which has not been previously described in the lanostane class. Biologically, lepiotaprocerins A-F (1-6) displayed more potent inhibition of nitric oxide (NO) production than the positive control L-NG-monomethyl arginine (L-NMMA) (IC50 47.1 μM), and lepiotaprocerins G-L (7-12) showed various cytotoxic potencies against a panel of human cancer cell lines. Compound 9 also displayed antitubercular activity against Mycobacterium tuberculosis H37Ra with a minimal inhibitory concentration (MIC) of 50 μg/mL.

  20. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    The following problems are considered: (1) methods for development of logic design together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and developing algorithms and heuristics for the purpose of minimizing the computation for tests; and (2) a method of design of logic for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to render it possible: (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures, in the mechanism, using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.

  1. Complications with axial presacral lumbar interbody fusion: A 5-year postmarketing surveillance experience

    PubMed Central

    Gundanna, Mukund I.; Miller, Larry E.; Block, Jon E.

    2011-01-01

    Background Open and minimally invasive lumbar fusion procedures have inherent procedural risks, with posterior and transforaminal approaches resulting in significant soft-tissue injury and the anterior approach endangering organs and major blood vessels. An alternative lumbar fusion technique uses a small paracoccygeal incision and a presacral approach to the L5-S1 intervertebral space, which avoids critical structures and may result in a favorable safety profile versus open and other minimally invasive fusion techniques. The purpose of this study was to evaluate complications associated with axial interbody lumbar fusion procedures using the Axial Lumbar Interbody Fusion (AxiaLIF) System (TranS1, Wilmington, North Carolina) in the postmarketing period. Methods Between March 2005 and March 2010, 9,152 patients underwent interbody fusion with the AxiaLIF System through an axial presacral approach. A single-level L5-S1 fusion was performed in 8,034 patients (88%), and a 2-level (L4-S1) fusion was used in 1,118 (12%). A predefined database was designed to record device- or procedure-related complaints via spontaneous reporting. The complications that were recorded included bowel injury, superficial wound and systemic infections, transient intraoperative hypotension, migration, subsidence, presacral hematoma, sacral fracture, vascular injury, nerve injury, and ureter injury. Results Complications were reported in 120 of 9,152 patients (1.3%). The most commonly reported complications were bowel injury (n = 59, 0.6%) and transient intraoperative hypotension (n = 20, 0.2%). The overall complication rate was similar between single-level (n = 102, 1.3%) and 2-level (n = 18, 1.6%) fusion procedures, with no significant differences noted for any single complication. 
Conclusions The 5-year postmarketing surveillance experience with the AxiaLIF System suggests that axial interbody lumbar fusion through the presacral approach is associated with a low incidence of complications. The overall complication rates observed in our evaluation compare favorably with those reported in trials of open and minimally invasive lumbar fusion surgery. PMID:25802673

  2. Antifungal activity of phenolic-rich Lavandula multifida L. essential oil.

    PubMed

    Zuzarte, M; Vale-Silva, L; Gonçalves, M J; Cavaleiro, C; Vaz, S; Canhoto, J; Pinto, E; Salgueiro, L

    2012-07-01

    This study evaluates the antifungal activity and mechanism of action of a new chemotype of Lavandula multifida from Portugal. The essential oil was analyzed by gas chromatography (GC) and gas chromatography/mass spectrometry (GC/MS), and the minimal inhibitory concentration (MIC) and minimal lethal concentration (MLC) of the oil and its major compounds were determined against several pathogenic fungi responsible for candidosis, meningitis, dermatophytosis, and aspergillosis. The influence of the oil on the dimorphic transition in Candida albicans was also studied, as well as propidium iodide (PI) and FUN-1 staining of C. albicans cells by flow cytometry. The essential oil was characterized by high contents of monoterpenes, with carvacrol and cis-β-ocimene being the main constituents. The oil was more effective against dermatophytes and Cryptococcus neoformans, with MIC and MLC values of 0.16 μL/mL and 0.32 μL/mL, respectively. The oil was further shown to completely inhibit filamentation in C. albicans at concentrations below the respective MIC (0.08 μL/mL), with cis-β-ocimene being the main compound responsible for this inhibition (0.02 μL/mL). The flow cytometry results suggest a mechanism of action ultimately leading to cytoplasmic membrane disruption and cell death. L. multifida essential oil may be useful in complementary therapy to treat disseminated candidosis, since the inhibition of filamentation alone appears to be sufficient to treat this type of infection.

  3. Non-Convex Sparse and Low-Rank Based Robust Subspace Segmentation for Data Mining.

    PubMed

    Cheng, Wenlong; Zhao, Mingbo; Xiong, Naixue; Chui, Kwok Tai

    2017-07-15

    Parsimony, including sparsity and low rank, has shown great importance for data mining in social networks, particularly in tasks such as segmentation and recognition. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with convex l1-norm or nuclear-norm constraints. However, the results obtained by convex optimization are usually suboptimal relative to the solutions of the original sparse or low-rank problems. In this paper, a novel robust subspace segmentation algorithm is proposed by integrating lp-norm and Schatten p-norm constraints. The resulting affinity graph better captures the local geometrical structure and the global information of the data. As a consequence, the algorithm is more generative, discriminative, and robust. An efficient linearized alternating direction method is derived to realize the model. Extensive segmentation experiments are conducted on public datasets. The proposed algorithm proves more effective and robust than five existing algorithms.
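    The reweighting idea behind such non-convex lp penalties (and behind the IRLS-based L1-DL method above) can be sketched in a few lines. Everything below — the function name, the ridge-style update, and the parameter values — is an illustrative assumption, not the paper's linearized alternating direction method.

```python
import numpy as np

def irls_lp(A, b, lam=0.1, p=0.5, n_iter=50, eps=1e-8):
    """IRLS sketch for min_x 0.5*||Ax - b||^2 + lam * sum_i |x_i|^p, 0 < p <= 1.
    Each sweep solves a weighted ridge system whose diagonal weights
    majorize the lp penalty at the current iterate."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        w = p * (x ** 2 + eps) ** (p / 2 - 1)  # w_i ~ p*|x_i|^(p-2), eps avoids division by zero
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x
```

    Because the weight w_i blows up as x_i approaches zero, small coefficients are driven hard toward zero, which is exactly the sparsity-promoting behavior the lp penalty is chosen for.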

  4. Optical solitons in nematic liquid crystals: model with saturation effects

    NASA Astrophysics Data System (ADS)

    Borgna, Juan Pablo; Panayotaros, Panayotis; Rial, Diego; de la Vega, Constanza Sánchez F.

    2018-04-01

    We study a 2D system that couples a Schrödinger evolution equation to a nonlinear elliptic equation and models the propagation of a laser beam in a nematic liquid crystal. The nonlinear elliptic equation describes the response of the director angle to the laser beam electric field. We obtain results on well-posedness and solitary wave solutions of this system, generalizing results for a well-studied simpler system with a linear elliptic equation for the director field. The analysis of the nonlinear elliptic problem shows the existence of an isolated global branch of solutions with director angles that remain bounded for arbitrary electric field. The results on the director equation are also used to show local and global existence, as well as decay for initial conditions with sufficiently small L2-norm. For sufficiently large L2-norm we show the existence of energy minimizing optical solitons with radial, positive and monotone profiles.

  5. A lightweight sensor network management system design

    USGS Publications Warehouse

    Yuan, F.; Song, W.-Z.; Peterson, N.; Peng, Y.; Wang, L.; Shirazi, B.; LaHusen, R.

    2008-01-01

    In this paper, we propose a lightweight and transparent management framework for TinyOS sensor networks, called L-SNMS, which minimizes the overhead of management functions, including memory usage overhead, network traffic overhead, and integration overhead. We accomplish this by making L-SNMS virtually transparent to other applications, hence requiring minimal integration. The proposed L-SNMS framework has been successfully tested on various sensor node platforms, including TelosB, MICAz and IMote2. © 2008 IEEE.

  6. Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration

    PubMed Central

    Doss, Hani; Tan, Aixin

    2017-01-01

    In the classical biased sampling problem, we have k densities π1(·), …, πk(·), each known up to a normalizing constant, i.e. for l = 1, …, k, πl(·) = νl(·)/ml, where νl(·) is a known function and ml is an unknown constant. For each l, we have an iid sample from πl, and the problem is to estimate the ratios ml/ms for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the πl’s are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case. PMID:28706463
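    In the simplest iid special case with two densities, the ratio ml/ms can be estimated by a plain importance-sampling identity. The sketch below assumes that special case with invented function names; it does not attempt the paper's multi-chain, regeneration-based standard errors.

```python
import math
import random

def ratio_estimate(nu_l, nu_s, sample_pi_s, n=100000, seed=0):
    """Estimate m_l/m_s from an iid sample X_1..X_n ~ pi_s, using the identity
    E_{pi_s}[nu_l(X)/nu_s(X)] = m_l/m_s
    (valid when the support of pi_l is contained in that of pi_s)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample_pi_s(rng)
        total += nu_l(x) / nu_s(x)
    return total / n
```

    For a finite-variance estimator one wants pi_s to have heavier tails than pi_l, e.g. estimating the ratio of Gaussian normalizing constants by sampling from the wider Gaussian.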

  7. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
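    One way to read the claim is: run conjugate-gradient iterations on a least-squares objective, but evaluate the error (and the line-search minimum along each conjugate direction) using only a sampled subset of rays. The sketch below is that reading, with invented names and a Fletcher-Reeves update; it is not the patented algorithm.

```python
import numpy as np

def cg_subset_rays(A, b, n_rays, n_iter=20, seed=0):
    """Nonlinear CG on 0.5*||Ax - b||^2 where, at each iteration, the
    approximate error/gradient and the exact line-search step are computed
    from a random subset of `n_rays` rows ("rays") of the system."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    d, g_prev = None, None
    for _ in range(n_iter):
        idx = rng.choice(A.shape[0], size=n_rays, replace=False)
        As, bs = A[idx], b[idx]
        g = As.T @ (As @ x - bs)          # approximate gradient from the ray subset
        if g @ g < 1e-20:                 # converged on this subset
            break
        if d is None:
            d = -g
        else:
            d = -g + (g @ g) / (g_prev @ g_prev) * d  # Fletcher-Reeves direction
        Ad = As @ d
        x = x + (-(g @ d) / (Ad @ Ad)) * d  # exact minimizer along d for the subset objective
        g_prev = g
    return x
```

    With `n_rays` equal to the full number of rays this reduces to ordinary CG on the quadratic; smaller subsets trade accuracy of the error estimate for cheaper iterations, which is the point of the approximate-error approach.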

  8. Synergies of carvacrol and 1,8-cineole to inhibit bacteria associated with minimally processed vegetables.

    PubMed

    de Sousa, Jossana Pereira; de Azerêdo, Geíza Alves; de Araújo Torres, Rayanne; da Silva Vasconcelos, Margarida Angélica; da Conceição, Maria Lúcia; de Souza, Evandro Leite

    2012-03-15

    This study assessed the occurrence of an enhancing inhibitory effect of the combined application of carvacrol and 1,8-cineole against bacteria associated with minimally processed vegetables using the determination of Fractional Inhibitory Concentration (FIC) index, time-kill assay in vegetable broth and application in vegetable matrices. Their effects, individually and in combination, on the sensory characteristics of the vegetables were also determined. Carvacrol and 1,8-cineole displayed Minimum Inhibitory Concentration (MIC) in a range of 0.6-2.5 and 5-20 μL/mL, respectively, against the organisms studied. FIC indices of the combined application of the compounds were 0.25 against Listeria monocytogenes, Aeromonas hydrophila and Pseudomonas fluorescens, suggesting a synergic interaction. Application of carvacrol and 1,8-cineole alone (MIC) or in a mixture (1/8 MIC+1/8 MIC or 1/4 MIC+1/4 MIC) in vegetable broth caused a significant decrease (p<0.05) in bacterial count over 24h. Mixtures of carvacrol and 1,8-cineole reduced (p<0.05) the inocula of all bacteria in vegetable broth and in experimentally inoculated fresh-cut vegetables. A similar efficacy was observed in the reduction of naturally occurring microorganisms in vegetables. Sensory evaluation revealed that the scores of the most-evaluated attributes fell between "like slightly" and "neither like nor dislike." The combination of carvacrol and 1,8-cineole at sub-inhibitory concentrations could constitute an interesting approach to sanitizing minimally processed vegetables. Copyright © 2011 Elsevier B.V. All rights reserved.
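    The FIC index used above has a standard form for a two-agent checkerboard assay. The function below is a generic calculation with illustrative numbers; the interpretive cut-offs vary between authors.

```python
def fic_index(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """Fractional Inhibitory Concentration index for agents A and B:
    FIC = MIC_A(in combination)/MIC_A(alone) + MIC_B(in combination)/MIC_B(alone).
    A common reading: <= 0.5 synergy, > 4.0 antagonism, otherwise no interaction."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone
```

    Two agents that inhibit together at 1/8 of their individual MICs give an index of 0.125 + 0.125 = 0.25, matching the synergic FIC of 0.25 reported in the abstract.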

  9. Static respiratory muscle work during immersion with positive and negative respiratory loading.

    PubMed

    Taylor, N A; Morrison, J B

    1999-10-01

    Upright immersion imposes a pressure imbalance across the thorax. This study examined the effects of air-delivery pressure on inspiratory muscle work during upright immersion. Eight subjects performed respiratory pressure-volume relaxation maneuvers while seated in air (control) and during immersion. Hydrostatic, respiratory elastic (lung and chest wall), and resultant static respiratory muscle work components were computed. During immersion, the effects of four air-delivery pressures were evaluated: mouth pressure (uncompensated); the pressure at the lung centroid (PL,c); and PL,c ± 0.98 kPa. When breathing at pressures less than PL,c, subjects generally defended an expiratory reserve volume (ERV) greater than the immersed relaxation volume, minus residual volume, resulting in additional inspiratory muscle work. The resultant static inspiratory muscle work, computed over a 1-liter tidal volume above the ERV, increased from 0.23 J·l⁻¹ when subjects were breathing at PL,c, to 0.83 J·l⁻¹ at PL,c − 0.98 kPa (P < 0.05), and to 1.79 J·l⁻¹ at mouth pressure (P < 0.05). Under the control state, and during the above experimental conditions, static expiratory work was minimal. When breathing at PL,c + 0.98 kPa, subjects adopted an ERV less than the immersed relaxation volume, minus residual volume, resulting in 0.36 J·l⁻¹ of expiratory muscle work. Thus static inspiratory muscle work varied with respiratory loading, whereas a PL,c air supply minimized this work during upright immersion, restoring lung-tissue, chest-wall, and static muscle work to levels obtained in the control state.

  10. Topological Constraints on Identifying Additive Link Metrics via End-to-end Paths Measurements

    DTIC Science & Technology

    2012-09-20

    can be applied to each Gi separately when H is disconnected. ... We investigate the problem of identifying individual link metrics in a communication network through measuring ... the number of links in the network. There is, however, no fundamental theory relating the number of linearly independent paths (and thus link

  11. The minimal GUT with inflaton and dark matter unification

    NASA Astrophysics Data System (ADS)

    Chen, Heng-Yu; Gogoladze, Ilia; Hu, Shan; Li, Tianjun; Wu, Lina

    2018-01-01

    Giving up the solutions to the fine-tuning problems, we propose the non-supersymmetric flipped SU(5)×U(1)_X model based on the minimal particle content principle, which can be constructed from the four-dimensional SO(10) models, five-dimensional orbifold SO(10) models, and local F-theory SO(10) models. To achieve gauge coupling unification, we introduce one pair of vector-like fermions, which form a complete SU(5)×U(1)_X representation. The proton lifetime is around 5×10^{35} years, neutrino masses and mixing can be explained via the seesaw mechanism, baryon asymmetry can be generated via leptogenesis, and the vacuum stability problem can be solved as well. In particular, we propose that the inflaton and the dark matter particle can be unified into a real scalar field with Z_2 symmetry, which is not an axion and does not have a non-minimal coupling to gravity. Such scenarios can be applied to generic scalar dark matter models. Also, we find that the vector-like particle corrections to the B_s^0 mass might be about 6.6%, while their corrections to the K^0 and B_d^0 masses are negligible.

  12. All Prime Contract Awards by State or Country, Place and Contractor. Part 10. (Abington, Massachusetts-Zeeland, Michigan)

    DTIC Science & Technology

    1989-01-01


  13. The Thermal Equilibrium Solution of a Generic Bipolar Quantum Hydrodynamic Model

    NASA Astrophysics Data System (ADS)

    Unterreiter, Andreas

    The thermal equilibrium state of a bipolar, isothermic quantum fluid confined to a bounded domain of dimension d = 1, 2, or 3 is entirely described by the particle densities n, p minimizing an energy functional in which G1,2 are strictly convex real-valued functions. It is shown that this variational problem has a unique minimizer, and some regularity results are proven. The semi-classical limit is carried out, recovering the minimizer of the limiting functional. The subsequent zero-space-charge limit leads to extensions of the classical boundary conditions. Due to the lack of regularity, the asymptotics cannot be settled by Sobolev embedding arguments; the limit is carried out by means of a compactness-by-convexity principle.

  14. On multiple crack identification by ultrasonic scanning

    NASA Astrophysics Data System (ADS)

    Brigante, M.; Sumbatyan, M. A.

    2018-04-01

    The present work develops an approach that reduces operator equations arising in engineering problems to the problem of minimizing a discrepancy functional. For this minimization, an algorithm of random global search is proposed, which is akin to genetic algorithms. The efficiency of the method is demonstrated by solving the problem of simultaneous identification of several linear cracks forming an array in an elastic medium by circular ultrasonic scanning.
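    A minimal version of discrepancy-functional minimization by random global search (without the genetic-algorithm-like refinements the authors describe) might look like the following; the function names and box-constrained sampling are assumptions of this sketch.

```python
import random

def random_global_search(f, bounds, n_iter=2000, seed=0):
    """Plain random global search for min f(x) over a box.
    `bounds` is a list of (lo, hi) pairs, one per coordinate; the best
    sample seen so far is kept as the incumbent solution."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_iter):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

    In an identification setting, f would be the discrepancy between measured and simulated ultrasonic responses, and the coordinates of x would parameterize the crack positions and lengths.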

  15. Fully implicit moving mesh adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Serazio, C.; Chacon, L.; Lapenta, G.

    2006-10-01

    In many problems of interest, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former are best dealt with by fully implicit methods, which are able to step over fast frequencies to resolve the dynamical time scale of interest. The latter require grid adaptivity for efficiency. Moving-mesh grid adaptive methods are attractive because they can be designed to minimize the numerical error for a given resolution. However, the required grid governing equations are typically very nonlinear and stiff, and considerably difficult to treat numerically. Not surprisingly, fully coupled, implicit approaches where the grid and the physics equations are solved simultaneously are rare in the literature, and circumscribed to 1D geometries. In this study, we present a fully implicit algorithm for moving mesh methods that is feasible for multidimensional geometries. Crucial elements are the development of an effective multilevel treatment of the grid equation, and a robust, rigorous error estimator. For the latter, we explore the effectiveness of a coarse grid correction error estimator, which faithfully reproduces spatial truncation errors for conservative equations. We will show that the moving mesh approach is competitive with uniform grids both in accuracy (due to adaptivity) and efficiency. Results for a variety of models in 1D and 2D geometries will be presented. L. Chacón, G. Lapenta, J. Comput. Phys. 212 (2), 703 (2006); G. Lapenta, L. Chacón, J. Comput. Phys., accepted (2006).

  16. L'effet de p53 sur la radiosensibilité des cellules humaines normales et cancéreuses

    NASA Astrophysics Data System (ADS)

    Little, J. B.; Li, C. Y.; Nagasawa, H.; Huang, H.

    1998-04-01

    The radiosensitivity of normal human fibroblasts is p53 dependent and is associated with the loss of cells from the cycling population as the result of an irreversible G1 arrest; cells lacking normal p53 function show no arrest and are more radioresistant. Under conditions in which the repair of potentially lethal radiation damage is facilitated, the fraction of cells arrested in G1 is reduced and survival is enhanced. The response of human tumor cells differs significantly: the radiation-induced G1 arrest is minimal or absent in p53+ tumor cells, and loss of normal p53 function has no consistent effect on their radiosensitivity. These results suggest that p53 status may not be a useful predictive marker for the response of human solid tumors to radiation therapy.

  17. Constraining composite Higgs models using LHC data

    NASA Astrophysics Data System (ADS)

    Banerjee, Avik; Bhattacharyya, Gautam; Kumar, Nilanjana; Ray, Tirtha Sankar

    2018-03-01

    We systematically study the modifications in the couplings of the Higgs boson, when identified as a pseudo Nambu-Goldstone boson of a strong sector, in the light of LHC Run 1 and Run 2 data. For the minimal coset SO(5)/SO(4) of the strong sector, we focus on scenarios where the standard model left- and right-handed fermions (specifically, the top and bottom quarks) are either in the 5 or in the symmetric 14 representation of SO(5). Going beyond the minimal 5L-5R representation, to what we call here the `extended' models, we observe that it is possible to construct more than one invariant in the Yukawa sector. In such models, the Yukawa couplings of the 125 GeV Higgs boson undergo nontrivial modifications. The pattern of such modifications can be encoded in a generic phenomenological Lagrangian which applies to a wide class of such models. We show that the presence of more than one Yukawa invariant allows the gauge and Yukawa coupling modifiers to be decorrelated in the `extended' models, and this decorrelation leads to a relaxation of the bound on the compositeness scale (f ≥ 640 GeV at 95% CL, as compared to f ≥ 1 TeV for the minimal 5L-5R representation model). We also study the Yukawa coupling modifications in the context of the next-to-minimal strong sector coset SO(6)/SO(5) for fermion embeddings up to representations of dimension 20. While quantifying our observations, we have performed a detailed χ² fit using the ATLAS and CMS combined Run 1 and available Run 2 data.

  18. Hybrid light transport model based bioluminescence tomography reconstruction for early gastric cancer detection

    NASA Astrophysics Data System (ADS)

    Chen, Xueli; Liang, Jimin; Hu, Hao; Qu, Xiaochao; Yang, Defu; Chen, Duofang; Zhu, Shouping; Tian, Jie

    2012-03-01

    Gastric cancer is the second leading cause of cancer-related death in the world, and it remains difficult to cure because it is usually at a late stage by the time it is found. Early gastric cancer detection is therefore an effective approach to decreasing gastric cancer mortality. Bioluminescence tomography (BLT) has been applied to detect early liver cancer and prostate cancer metastasis. However, gastric cancer commonly originates from the gastric mucosa and grows outwards, so the bioluminescent light passes through a non-scattering region formed by the gastric pouch as it propagates through tissue. Thus, current BLT reconstruction algorithms based on approximations to the radiative transfer equation are not optimal for this problem. To address this gastric cancer specific problem, this paper presents a novel reconstruction algorithm that uses a hybrid light transport model to describe bioluminescent light propagation in tissues. Radiosity theory, integrated with the diffusion equation to form the hybrid light transport model, is used to describe light propagation in the non-scattering region. After finite element discretization, the hybrid light transport model is converted into a minimization problem that incorporates an l1-norm-based regularization term to capture the sparsity of the bioluminescent source distribution. The performance of the reconstruction algorithm is first demonstrated in a digital-mouse-based simulation, with reconstruction error less than 1 mm. An in situ gastric cancer-bearing nude mouse experiment is then conducted. The primary result demonstrates the potential of the novel BLT reconstruction algorithm for early gastric cancer detection.
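    The l1-regularized minimization step can be illustrated with a generic iterative soft-thresholding (ISTA) solver. The abstract does not name the paper's actual solver, so treat this as a stand-in with invented names, where the matrix A plays the role of the discretized hybrid light transport operator.

```python
import numpy as np

def ista(A, b, lam=0.1, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    a standard solver template for l1-regularized source reconstruction."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)              # gradient of the data-fit term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

    The soft-thresholding step zeroes out small coefficients each iteration, which is what makes the recovered source distribution sparse.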

  19. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    NASA Astrophysics Data System (ADS)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT), ultrasonic and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum squared L2 norm without any physical justification. When it is known a priori that objects are compact, as with cracks and voids, choosing a "minimum support" functional instead of the squared L2 norm yields an image that is equally in agreement with the available data while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase the speed and efficiency of the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without knowledge of the zero-of-time. The main drawback of this type of inversion is its computational intensity.
In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of the theoretical formulation of the scattering process for better computational efficiency, and (3) development of better methods for guiding the non-linear inversion. (Abstract shortened by UMI.)

  20. Development of sinkholes resulting from man's activities in the Eastern United States

    USGS Publications Warehouse

    Newton, John G.

    1987-01-01

    Alternatives that allow avoiding or minimizing sinkhole hazards are most numerous when a problem or potential problem is recognized during site evaluation. The number of alternatives declines after the beginning of site development. Where sinkhole development is predictable, zoning of land use can minimize hazards.

  1. Lq-Lp optimization for multigrid fluorescence tomography of small animals using simplified spherical harmonics

    NASA Astrophysics Data System (ADS)

    Edjlali, Ehsan; Bérubé-Lauzière, Yves

    2018-01-01

    We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging. This is then applied to small animal imaging. Fluorescence tomography is an ill-posed, and in full generality, a nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function, Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed, which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney being the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics including QR, RMSE, CNR, and TVE under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
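    A generic smoothed Lq-Lp objective and its gradient (the quantities a quasi-Newton method such as lm-BFGS needs) can be written as follows. The plain matrix A stands in for the actual simplified-spherical-harmonics forward operator, and the eps smoothing at zero is an assumption of this sketch, not a detail of the paper.

```python
import numpy as np

def lq_lp_cost_grad(x, A, b, q=1.5, p=1.0, lam=0.01, eps=1e-9):
    """Cost and gradient of the generic Lq-Lp scheme
        F(x) = (1/q) * sum_i |(Ax - b)_i|^q + (lam/p) * sum_j |x_j|^p,
    with a small eps smoothing so the gradient is defined at zero."""
    r = A @ x - b
    cost = (np.abs(r) ** q).sum() / q + lam * (np.abs(x) ** p).sum() / p
    # d/dr (1/q)|r|^q = r*|r|^(q-2), smoothed near r = 0; same pattern for x
    gr = r * (r * r + eps) ** ((q - 2) / 2)
    gx = lam * (x * (x * x + eps) ** ((p - 2) / 2))
    return cost, A.T @ gr + gx
```

    With q = p = 2 this reduces to ordinary Tikhonov-regularized least squares; q = 1.5, p = 1 corresponds to the L1.5-L1 combination the abstract reports as best.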

  2. Water reuse in the l-lysine fermentation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsiao, T.Y.; Glatz, C.E.

    1996-02-05

    L-Lysine is produced commercially by fermentation. As is typical for fermentation processes, a large amount of liquid waste is generated. To minimize the waste, which is mostly the broth effluent from the cation exchange column used for l-lysine recovery, the authors investigated a strategy of recycling a large fraction of this broth effluent to the subsequent fermentation. This was done on a lab-scale process with Corynebacterium glutamicum ATCC 21253 as the l-lysine-producing organism. Broth effluent from a fermentation in a defined medium was able to replace 75% of the water for the subsequent batch; this recycle ratio was maintained for 3 sequential batches without affecting cell mass and l-lysine production. Broth effluent was recycled at a 50% recycle ratio in a fermentation in a complex medium containing beet molasses. The first recycle batch had an 8% lower final l-lysine level, but 8% higher maximum cell mass. In addition to reducing the volume of liquid waste, this recycle strategy has the additional advantage of utilizing the ammonium desorbed from the ion-exchange column as a nitrogen source in the recycle fermentation. The major problem of recycling the effluent from the complex medium was in the cation-exchange operation, where column capacity was 17% lower for the recycle batch. The loss of column capacity probably results from the buildup of cations competing with l-lysine for binding.

  3. A mechanism for intergenomic integration: abundance of ribulose bisphosphate carboxylase small-subunit protein influences the translation of the large-subunit mRNA.

    PubMed Central

    Rodermel, S; Haley, J; Jiang, C Z; Tsai, C H; Bogorad, L

    1996-01-01

    Multimeric protein complexes in chloroplasts and mitochondria are generally composed of products of both nuclear and organelle genes of the cell. A central problem of eukaryotic cell biology is to identify and understand the molecular mechanisms for integrating the production and accumulation of the products of the two separate genomes. Ribulose bisphosphate carboxylase (Rubisco) is localized in the chloroplasts of photosynthetic eukaryotic cells and is composed of small subunits (SS) and large subunits (LS) coded for by nuclear rbcS and chloroplast rbcL genes, respectively. Transgenic tobacco plants containing antisense rbcS DNA have reduced levels of rbcS mRNA, normal levels of rbcL mRNA, and coordinately reduced LS and SS proteins. Our previous experiments indicated that the rate of translation of rbcL mRNA might be reduced in some antisense plants; direct evidence is presented here. After a short-term pulse there is less labeled LS protein in the transgenic plants than in wild-type plants, indicating that LS accumulation is controlled in the mutants at the translational and/or posttranslational levels. Consistent with a primary restriction at translation, fewer rbcL mRNAs are associated with polysomes of normal size and more are free or are associated with only a few ribosomes in the antisense plants. Effects of the rbcS antisense mutation on mRNA and protein accumulation, as well as on the distribution of mRNAs on polysomes, appear to be minimal for other chloroplast and nuclear photosynthetic genes. Our results suggest that SS protein abundance specifically contributes to the regulation of LS protein accumulation at the level of rbcL translation initiation. PMID:8632983

  4. Minimization principles for the coupled problem of Darcy-Biot-type fluid transport in porous media linked to phase field modeling of fracture

    NASA Astrophysics Data System (ADS)

    Miehe, Christian; Mauthe, Steffen; Teichtmeister, Stephan

    2015-09-01

    This work develops new minimization and saddle point principles for the coupled problem of Darcy-Biot-type fluid transport in porous media at fracture. It shows that the quasi-static problem of elastically deforming, fluid-saturated porous media is related to a minimization principle for the evolution problem. This two-field principle determines the rate of deformation and the fluid mass flux vector. It provides a canonically compact model structure, where the stress equilibrium and the inverse Darcy's law appear as the Euler equations of a variational statement. A Legendre transformation of the dissipation potential relates the minimization principle to a characteristic three field saddle point principle, whose Euler equations determine the evolutions of deformation and fluid content as well as Darcy's law. A further geometric assumption results in modified variational principles for a simplified theory, where the fluid content is linked to the volumetric deformation. The existence of these variational principles underlines inherent symmetries of Darcy-Biot theories of porous media. This can be exploited in the numerical implementation by the construction of time- and space-discrete variational principles, which fully determine the update problems of typical time stepping schemes. Here, the proposed minimization principle for the coupled problem is advantageous with regard to a new unconstrained stable finite element design, while space discretizations of the saddle point principles are constrained by the LBB condition. The variational principles developed provide the most fundamental approach to the discretization of nonlinear fluid-structure interactions, showing symmetric systems in algebraic update procedures. They also provide an excellent starting point for extensions towards more complex problems. This is demonstrated by developing a minimization principle for a phase field description of fracture in fluid-saturated porous media. 
It is designed for an incorporation of alternative crack driving forces, such as a convenient criterion in terms of the effective stress. The proposed setting provides a modeling framework for the analysis of complex problems such as hydraulic fracture. This is demonstrated by a spectrum of model simulations.

  5. Brain and Surface Warping via Minimizing Lipschitz Extensions (PREPRINT)

    DTIC Science & Technology

    2006-01-01

Angenent, S. Haker, A. Tannenbaum, and R. Kikinis, “Conformal geometry and brain flattening,” Proc. MICCAI, pp. 271-278, 1999. [2] G. Aronsson, M...surface mapping,” IEEE Transactions on Medical Imaging, 23:7, 2004. [17] S. Haker, L. Zhu, A. Tannenbaum, and S. Angenent, “Optimal mass transport for

  6. Prediction of Metabolic Flux Distribution from Gene Expression Data Based on the Flux Minimization Principle

    DTIC Science & Technology

    2014-11-14

problem. Modification of the FBA algorithm to incorporate additional biological information from gene expression profiles is...We set the maximization of biomass production as the objective of FBA and implemented it in two different forms: without flux minimization (or [PLOS ONE, 9(11):e112524, November 2014]

  7. Minimizing distortion and internal forces in truss structures by simulated annealing

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.; Padula, Sharon L.

    1990-01-01

    Inaccuracies in the length of members and the diameters of joints of large space structures may produce unacceptable levels of surface distortion and internal forces. Here, two discrete optimization problems are formulated, one to minimize surface distortion (DSQRMS) and the other to minimize internal forces (FSQRMS). Both of these problems are based on the influence matrices generated by a small-deformation linear analysis. Good solutions are obtained for DSQRMS and FSQRMS through the use of a simulated annealing heuristic.
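
The Metropolis accept/reject step over discrete configurations described above can be sketched in a few lines. The influence matrix, the discrete member-error choices, and the cooling schedule below are invented placeholders, not the paper's DSQRMS data; the sketch only illustrates the annealing heuristic itself.

```python
import math
import random

random.seed(1)
n_members, n_points = 12, 8
# Hypothetical influence matrix: surface error per unit member-length error.
G = [[random.uniform(-1.0, 1.0) for _ in range(n_points)] for _ in range(n_members)]
choices = (-0.1, 0.0, 0.1)  # discrete member-length errors available to the optimizer

def rms_distortion(x):
    """RMS surface distortion under a small-deformation linear model d = G^T x."""
    d = [sum(G[m][p] * x[m] for m in range(n_members)) for p in range(n_points)]
    return math.sqrt(sum(v * v for v in d) / n_points)

x = [random.choice(choices) for _ in range(n_members)]
best_x, best_cost = x[:], rms_distortion(x)
init_cost = best_cost
cost, T = best_cost, 1.0
for _ in range(2000):
    trial = x[:]
    trial[random.randrange(n_members)] = random.choice(choices)  # perturb one member
    c = rms_distortion(trial)
    # Metropolis rule: always accept improvements, sometimes accept uphill moves.
    if c < cost or random.random() < math.exp((cost - c) / T):
        x, cost = trial, c
        if c < best_cost:
            best_x, best_cost = trial[:], c
    T *= 0.995  # geometric cooling schedule

print(round(init_cost, 4), round(best_cost, 4))
```

The uphill-acceptance probability shrinks as the temperature cools, which is what lets the heuristic escape the local minima a pure descent over discrete choices would get stuck in.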

  8. Estimation of Microbial Contamination of Food from Prevalence and Concentration Data: Application to Listeria monocytogenes in Fresh Vegetables▿

    PubMed Central

    Crépet, Amélie; Albert, Isabelle; Dervin, Catherine; Carlin, Frédéric

    2007-01-01

A normal distribution and a mixture model of two normal distributions in a Bayesian approach using prevalence and concentration data were used to establish the distribution of contamination of the food-borne pathogenic bacterium Listeria monocytogenes in unprocessed and minimally processed fresh vegetables. A total of 165 prevalence studies, including 15 studies with concentration data, were taken from the scientific literature and from technical reports and used for statistical analysis. The predicted mean of the normal distribution of the logarithms of viable L. monocytogenes per gram of fresh vegetables was −2.63 log viable L. monocytogenes organisms/g, and its standard deviation was 1.48 log viable L. monocytogenes organisms/g. These values were determined by counting one contaminated sample in prevalence studies in which all samples were in fact negative; this deliberate overestimation is necessary to complete the calculations. With the mixture model, the predicted mean of the distribution of the logarithm of viable L. monocytogenes per gram of fresh vegetables was −3.38 log viable L. monocytogenes organisms/g and its standard deviation was 1.46 log viable L. monocytogenes organisms/g. The probabilities of fresh unprocessed and minimally processed vegetables being contaminated at concentrations higher than 1, 2, and 3 log viable L. monocytogenes organisms/g were 1.44, 0.63, and 0.17%, respectively. Introducing a sensitivity rate of 80 or 95% in the mixture model had a small effect on the estimation of the contamination. In contrast, introducing a low sensitivity rate (40%) resulted in marked differences, especially for high percentiles. Estimated contamination was significantly lower in the papers and reports from 2000 to 2005 than in those from 1988 to 1999, and lower for leafy salads than for sprouts and other vegetables. The value of the mixture model for the estimation of microbial contamination is discussed.
PMID:17098926
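
As a rough illustration of how exceedance probabilities follow from such a fitted distribution, the sketch below evaluates P(log10 concentration > t) for the single-normal parameters quoted above (mean −2.63, SD 1.48). The paper's mixture model additionally weights by the contaminated fraction, so these numbers are not expected to reproduce the reported percentages.

```python
import math

def exceedance(threshold_log, mu, sigma):
    """P(log10 concentration > threshold) for a normal contamination model."""
    z = (threshold_log - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * math.erfc(z)  # upper tail of the normal CDF via erfc

# Normal-model parameters reported above (log10 organisms/g).
mu, sigma = -2.63, 1.48
probs = [exceedance(t, mu, sigma) for t in (1.0, 2.0, 3.0)]
print(probs)
```

Higher thresholds necessarily give smaller exceedance probabilities, mirroring the decreasing 1.44/0.63/0.17% sequence reported for the mixture model.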

  9. Does finite-temperature decoding deliver better optima for noisy Hamiltonians?

    NASA Astrophysics Data System (ADS)

    Ochoa, Andrew J.; Nishimura, Kohji; Nishimori, Hidetoshi; Katzgraber, Helmut G.

    The minimization of an Ising spin-glass Hamiltonian is an NP-hard problem. Because many problems across disciplines can be mapped onto this class of Hamiltonian, novel efficient computing techniques are highly sought after. The recent development of quantum annealing machines promises to minimize these difficult problems more efficiently. However, the inherent noise found in these analog devices makes the minimization procedure difficult. While the machine might be working correctly, it might be minimizing a different Hamiltonian due to the inherent noise. This means that, in general, the ground-state configuration that correctly minimizes a noisy Hamiltonian might not minimize the noise-less Hamiltonian. Inspired by rigorous results that the energy of the noise-less ground-state configuration is equal to the expectation value of the energy of the noisy Hamiltonian at the (nonzero) Nishimori temperature [J. Phys. Soc. Jpn., 62, 40132930 (1993)], we numerically study the decoding probability of the original noise-less ground state with noisy Hamiltonians in two space dimensions, as well as the D-Wave Inc. Chimera topology. Our results suggest that thermal fluctuations might be beneficial during the optimization process in analog quantum annealing machines.
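
The central point, that the ground state of a noisy Hamiltonian need not minimize the noise-less one, can be checked by brute force on a toy instance. The ring couplings and noise level below are invented for illustration; real devices have far larger, non-enumerable state spaces.

```python
import itertools
import random

random.seed(7)
n = 8
J = [random.choice((-1.0, 1.0)) for _ in range(n)]   # ring couplings J_{i,i+1}
Jn = [j + random.gauss(0.0, 0.5) for j in J]         # noisy couplings

def energy(spins, coup):
    """E = -sum_i J_{i,i+1} s_i s_{i+1} on a ring of n spins."""
    return -sum(coup[i] * spins[i] * spins[(i + 1) % n] for i in range(n))

# Exhaustive search over all 2^n spin configurations.
states = list(itertools.product((-1, 1), repeat=n))
gs_clean = min(states, key=lambda s: energy(s, J))
gs_noisy = min(states, key=lambda s: energy(s, Jn))

# Scoring the noisy ground state on the clean Hamiltonian shows how much
# (if at all) it sits above the true noise-less minimum.
print(energy(gs_clean, J), energy(gs_noisy, J))
```

When the two ground states differ, the excess clean-Hamiltonian energy of the noisy minimizer is exactly the decoding gap the finite-temperature argument above is about.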

  10. Fluorides in groundwater and its impact on health.

    PubMed

    Shailaja, K; Johnson, Mary Esther Cynthia

    2007-04-01

Fluoride is a naturally occurring toxic mineral present in drinking water that causes yellowing of teeth and other dental problems. Fluorspar, cryolite and fluorapatite are the naturally occurring minerals from which fluoride finds its path to groundwater through infiltration. In the present study, two groundwater samples, Station I and Station II in Hyderabad megacity, the capital of Andhra Pradesh, were investigated for one year, from January 2001 to December 2001. The average fluoride values were 1.37 mg/l at Station I and 0.91 mg/l at Station II. The permissible limits for fluoride in drinking water are 0.6-1.2 mg/l (BIS, 1983) and 1.5 mg/l (WHO, 1984). The groundwater at Station I exceeded the limit, while that at Station II was within it. The study indicated that a fluoride content of 0.5 mg/l is sufficient to cause yellowing of teeth and dental problems.

  11. A Study of Hand Back Skin Texture Patterns for Personal Identification and Gender Classification

    PubMed Central

    Xie, Jin; Zhang, Lei; You, Jane; Zhang, David; Qu, Xiaofeng

    2012-01-01

Human hand back skin texture (HBST) is often consistent for a person and distinctive from person to person. In this paper, we study the HBST pattern recognition problem with applications to personal identification and gender classification. A specially designed system was developed to capture HBST images, and an HBST image database was established, consisting of 1,920 images from 80 persons (160 hands). An efficient texton learning based method is then presented to classify the HBST patterns. First, textons are learned in the space of filter bank responses from a set of training images using the l1-minimization based sparse representation (SR) technique. Then, under the SR framework, we represent the feature vector at each pixel over the learned dictionary to construct a representation coefficient histogram. Finally, the coefficient histogram is used as the skin texture feature for classification. Experiments on personal identification and gender classification are performed using the established HBST database. The results show that HBST can be used to assist human identification and gender classification. PMID:23012512
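
The l1-minimization step of sparse representation can be sketched with the standard iterative soft-thresholding algorithm (ISTA); this is a generic stand-in, not the authors' solver, and the tiny dictionary and feature vector below are invented for illustration.

```python
import math

# min_x 0.5*||y - D x||^2 + lam*||x||_1  (sparse coding over a dictionary D)
D = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5],
     [0.0, 0.0, 0.5]]          # 3 filter-response dims x 3 atoms (toy data)
y = [1.0, 1.0, 0.0]            # feature vector to encode

def residual(x):
    return [sum(D[i][j] * x[j] for j in range(3)) - y[i] for i in range(3)]

def objective(x, lam):
    r = residual(x)
    return 0.5 * sum(v * v for v in r) + lam * sum(abs(v) for v in x)

def soft(v, t):
    """Soft-thresholding: the proximal operator of t*|.|"""
    return math.copysign(max(abs(v) - t, 0.0), v)

lam, step = 0.1, 0.4           # step < 1/||D^T D|| keeps ISTA monotone
x = [0.0, 0.0, 0.0]
for _ in range(300):
    r = residual(x)
    grad = [sum(D[i][j] * r[i] for i in range(3)) for j in range(3)]
    x = [soft(x[j] - step * grad[j], step * lam) for j in range(3)]

print([round(v, 3) for v in x])  # converges to about (0.9, 0.9, 0)
```

The soft-thresholding step is what zeroes out small coefficients, so the resulting code over the learned dictionary is sparse, which is the property the coefficient histogram above relies on.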

  12. Dynamic Network Formation Using Ant Colony Optimization

    DTIC Science & Technology

    2009-03-01

backhauls, VRP with pick-up and delivery, VRP with satellite facilities, and VRP with time windows (Murata & Itai, 2005). The general vehicle...given route is only visited once. The objective of the basic problem is to minimize the total cost, min Σ_{m=1}^{M} c_m (Murata & Itai, 2005)...Problem based on Ant Colony System. Second International Workshop on Freight Transportation and Logistics. Palermo, Italy. Murata, T., & Itai, R. (2005

  13. Optimization of an innovative hollow-fiber process to produce lactose-reduced skim milk.

    PubMed

    Neuhaus, Winfried; Novalin, Senad; Klimacek, Mario; Splechtna, Barbara; Petzelbauer, Inge; Szivak, Alexander; Kulbe, Klaus D

    2006-07-01

Applications of lactose hydrolysis have been investigated for several decades. Lactose intolerance, improvement of technical processing of solutions containing lactose, and utilization of lactose in whey are the main topics for development of biotechnological processes. We report here the optimization of a hollow-fiber membrane reactor process for enzymatic lactose hydrolysis. Lactase was circulated abluminally during luminal flow of skim milk. The main problem, the growth of microorganisms in the enzyme solution, was minimized by sterile filtration, ultraviolet irradiation, and temperature adjustment. Based on previous experiments at 23 +/- 2 degrees C, further characterization was carried out at 8 +/- 2 degrees C and 15 +/- 2 degrees C (beta-galactosidase), and at 58 +/- 2 degrees C (thermostable beta-glycosidase), varying enzyme activity and flow rates. For a cost-effective process, the parameters 15 +/- 2 degrees C, 240 U/mL of beta-galactosidase, an enzyme solution flow rate of 25 L/h, and a skim milk flow rate of about 9 L/h should be used in order to achieve a target productivity of 360 g/(L x h) and to run at conditions for the highest long-term process stability.

  14. [MINIMALLY INVASIVE PROCEDURE FOR CORRECTION OF PECTUS CARINATUM].

    PubMed

    Xu, Bing; Liu, Wenying

    2015-04-01

To explore the method of and experience with correction of pectus carinatum by a minimally invasive procedure. Between June 2010 and January 2014, 30 patients with pectus carinatum were corrected by a minimally invasive procedure. There were 21 boys and 9 girls with an average age of 13 years and 2 months (range, 8 years and 10 months to 18 years and 9 months), including 24 first operations, 2 recurrences after traditional pectus carinatum correction, and 4 cases secondary to median thoracotomy. The patients had symmetric or asymmetric mild pectus carinatum. The operation was performed successfully in all patients, and no severe complication occurred. The operation time was 42-95 minutes (mean, 70 minutes). The bleeding volume during operation was 4-30 mL (mean, 10 mL). The time from operation to discharge was 6-10 days (mean, 7 days). The average time of follow-up was 25 months (range, 9-54 months). All surgical wounds healed primarily with no infection. The X-ray films showed slight pneumothorax in 7 cases, which was absorbed after 1 month without treatment. Loosening of internal fixation was found in 1 patient because of trauma at 6 months, and reoperation was performed. The bar was removed at 2 years in 21 patients. The patients had good thoracic contour and normal activity. The minimally invasive procedure for correction of pectus carinatum is safe and gives satisfactory results in maintaining thoracic contour, with less trauma and a shorter operation time.

  15. Evaluation of bioactivity of linalool-rich essential oils from Ocimum basilicum and Coriandrum sativum varieties.

    PubMed

    Duman, Ahmet D; Telci, Isa; Dayisoylu, Kenan S; Digrak, Metin; Demirtas, Ibrahim; Alma, Mehmet H

    2010-06-01

Essential oils from Ocimum basilicum L. and Coriandrum sativum L. varieties originating from Turkey were investigated for their antimicrobial properties. The antimicrobial effects of the oil varieties were evaluated by the disc diffusion and minimum inhibitory concentration (MIC) methods against eight bacteria and three fungi. The compositions of the essential oils were analyzed and identified by GC and GC-MS. O. basilicum, C. sativum var. macrocarpum and var. microcarpum oils revealed the presence of linalool (54.4%), eugenol (9.6%), and methyl eugenol (7.6%); linalool (78.8%), gamma-terpinene (6.0%), and nerol acetate (3.5%); and linalool (90.6%) and nerol acetate (3.3%) as the major components, respectively. The oils exhibited antibacterial activity at 1.25 to 10 microL disc(-1) against the test organisms, with inhibition zones of 9.5-39.0 mm and MIC values in the range 0.5 to ≥1 microL/L. Linalool, eugenol, and methyl eugenol at 1.25 microL disc(-1) had antimicrobial effects on all microorganisms, giving inhibition zones ranging from 7 to 19 mm.

  16. A reflectance flow-through thionine sol-gel sensor for the determination of Se(IV).

    PubMed

    Carvalhido, Joana A E; Almeida, Agostinho A; Araújo, Alberto N; Montenegro, Maria C B S M

    2010-01-01

    In this work, a reversible sensor to assess the total Se(IV) content in samples is described. Pre-activated glass slides were spin-coated with 100 microL of a 20-h aged sol-gel mixture of 1 mL of tetramethoxysilane, 305 microL of 50 mmol L(-1) HCl and 2.0 mg of thionine. The flow-cell consisted of one of those slides as a window, and was filled with beads of a polystyrene anionic exchange resin to retain Se(IV) in the form of selenite ions. A reflectance transduction scheme at a wavelength of 596 nm was adopted. The cell was coupled to a multicommutation flow system where a programmed volume of a sample solution and 373 microL of 0.4 mmol L(-1) iodide in a 1.6 mol L(-1) HCl solution were sequentially inserted into the cell. The iodine produced from the reaction of retained Se(IV) with iodide bleached the blue color of thionine. Considering a sample volume of 2.30 mL, with which the preconcentration step was minimized, a linear dynamic working range between 1.5 to 20 microg mL(-1) and a detection limit of 0.29 microg mL(-1) were obtained. The sensor enabled us to perform approximately 200 assays, and provided results similar to those of electrothermal atomic absorption spectrometry.

  17. Foraminoplastic transfacet epidural endoscopic approach for removal of intraforaminal disc herniation at the L5-S1 level

    PubMed Central

    Kaczmarczyk, Jacek; Nowakowski, Andrzej; Sulewski, Adam

    2014-01-01

    Transforaminal endoscopic disc removal in the L5-S1 motion segment of the lumbar spine creates a technical challenge due to anatomical reasons and individual variability. The majority of surgeons prefer a posterior classical or minimally invasive approach. There is only one foraminoplastic modification of the technique in the literature so far. In this paper we present a new technique with a foraminoplastic transfacet approach that may be suitable in older patients with advanced degenerative disease of the spine. PMID:24729817

  18. Correlation of endothelin-1 concentration and angiotensin-converting enzyme activity with the staging of liver fibrosis.

    PubMed

    Kardum, Dusko; Fabijanić, Damir; Lukić, Anita; Romić, Zeljko; Petrovecki, Mladen; Bogdanović, Zoran; Jurić, Klara; Urek-Crncević, Marija; Banić, Marko

    2012-06-01

Increased serum angiotensin-converting enzyme (SACE) activity and serum concentration of endothelin-1 (ET-1) were found in liver cirrhosis. We investigated a correlation between the different stages of liver fibrosis and SACE activity and serum ET-1 concentration. Seventy patients with pathohistologically established chronic liver disease were divided into three groups according to the Ishak criteria for liver fibrosis: minimal fibrosis (Ishak score 0-1, n = 20), medium fibrosis (Ishak score 2-5, n = 20) and cirrhosis (Ishak score 6, n = 30). SACE activity and ET-1 concentration were determined using commercial ELISA kits. SACE activity and ET-1 concentrations were proportional to the severity of disease, the highest being in patients with liver cirrhosis. Maximal increase in SACE activity was found between minimal and medium fibrosis, while maximal increase in ET-1 concentration was revealed between medium fibrosis and cirrhosis. The analysis of the receiver operating characteristic (ROC) curve for SACE activity suggested a cut-off value to separate minimal from medium fibrosis at 59.00 U/L (sensitivity 100%, specificity 64.7%). The cut-off value for serum ET-1 concentration to separate medium fibrosis from cirrhosis was 12.4 pg/mL (sensitivity 96.8%, specificity 94.4%). A positive correlation between SACE activity and ET-1 concentration was registered (Spearman's ρ = 0.438, p = 0.004). Both SACE activity and ET-1 concentration were increased in all stages of liver fibrosis. Cut-off points for SACE activity and ET-1 concentration could serve as a biochemical marker for the progression of fibrosis. The positive correlation between SACE activity and ET-1 concentration might indicate their interaction in the development of liver cirrhosis.

  19. Il fattore "eta'" nell'acquisizione linguistica (L1 e L2): dimensioni di un "meta-problema" (The "Age" Factor in Language Acquisition [L1 and L2]: The Dimensions of a "Meta-Problem").

    ERIC Educational Resources Information Center

    Titone, Renzo

    1991-01-01

    Summarizes and comments on two recent books, one by Birgit Harley and the other by David Singleton, that review the language research carried out to determine the importance of age in learning a second language. (CFM)

  20. Software Aid for Optimizing 0-1 Matrices.

    DTIC Science & Technology

    1982-11-01

||b - Ax||, where x = (A^T A)^(-1) A^T b, is minimized, subject to the constraint that the entries of A must be 0's or 1's. This problem has arisen in a...

  1. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
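
A minimal sketch of the idea: majorizing each squared distance dist(x, C_i)^2 by ||x - P_i(x_k)||^2 turns the penalized problem into a closed-form update, and growing the penalty parameter recovers the constrained minimizer. The two convex sets and the anchor point below are invented examples, not from the paper.

```python
def proj_box(x):
    """Project onto C1, the unit box [0, 1]^2."""
    return [min(max(v, 0.0), 1.0) for v in x]

def proj_half(x):
    """Project onto C2, the half-space x1 + x2 >= 1.5."""
    s = x[0] + x[1]
    if s >= 1.5:
        return list(x)
    shift = (1.5 - s) / 2.0
    return [x[0] + shift, x[1] + shift]

a = [2.0, -1.0]            # minimize f(x) = 0.5*||x - a||^2 over C1 and C2 jointly
x = list(a)
rho = 1.0
for _ in range(30):                    # classical penalty method: keep growing rho
    for _ in range(200):               # MM iterations at fixed rho
        p1, p2 = proj_box(x), proj_half(x)
        # Closed-form argmin of 0.5||x-a||^2 + (rho/2)(||x-p1||^2 + ||x-p2||^2).
        x = [(a[k] + rho * (p1[k] + p2[k])) / (1.0 + 2.0 * rho) for k in range(2)]
    rho *= 2.0

print([round(v, 3) for v in x])   # approaches the constrained minimizer (1, 0.5)
```

Only projections onto the individual sets are ever needed, which is exactly the regime the abstract describes: each projection is easy while projecting onto the intersection directly is not.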

  2. Pure state `really' informationally complete with rank-1 POVM

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Shang, Yun

    2018-03-01

What is the minimal number of elements in a rank-1 positive operator-valued measure (POVM) which can uniquely determine any pure state in d-dimensional Hilbert space H_d? The known result is that the number is no less than 3d-2. We show that this lower bound is not tight except for d=2 or 4. Then we give an upper bound 4d-3. For d=2, many rank-1 POVMs with four elements can determine any pure states in H_2. For d=3, we show eight is the minimal number by construction. For d=4, the minimal number is in the set {10, 11, 12, 13}. We show that if this number is greater than 10, an unsettled open problem can be solved: that three orthonormal bases cannot distinguish all pure states in H_4. For any dimension d, we construct d+2k-2 adaptive rank-1 positive operators for the reconstruction of any unknown pure state in H_d, where 1 ≤ k ≤ d.

  3. Cognitive radio adaptation for power consumption minimization using biogeography-based optimization

    NASA Astrophysics Data System (ADS)

    Qi, Pei-Han; Zheng, Shi-Lian; Yang, Xiao-Niu; Zhao, Zhi-Jin

    2016-12-01

    Adaptation is one of the key capabilities of cognitive radio, which focuses on how to adjust the radio parameters to optimize the system performance based on the knowledge of the radio environment and its capability and characteristics. In this paper, we consider the cognitive radio adaptation problem for power consumption minimization. The problem is formulated as a constrained power consumption minimization problem, and the biogeography-based optimization (BBO) is introduced to solve this optimization problem. A novel habitat suitability index (HSI) evaluation mechanism is proposed, in which both the power consumption minimization objective and the quality of services (QoS) constraints are taken into account. The results show that under different QoS requirement settings corresponding to different types of services, the algorithm can minimize power consumption while still maintaining the QoS requirements. Comparison with particle swarm optimization (PSO) and cat swarm optimization (CSO) reveals that BBO works better, especially at the early stage of the search, which means that the BBO is a better choice for real-time applications. Project supported by the National Natural Science Foundation of China (Grant No. 61501356), the Fundamental Research Funds of the Ministry of Education, China (Grant No. JB160101), and the Postdoctoral Fund of Shaanxi Province, China.
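
A generic BBO sketch under stated assumptions: habitats encode candidate radio parameters, ranked fitness sets the immigration/emigration rates, and the fitness function here is an invented power surrogate with a penalty standing in for a QoS constraint; it is not the paper's HSI evaluation mechanism.

```python
import random

random.seed(3)
DIM, POP, GENS = 4, 20, 60

def fitness(h):
    power = sum(v * v for v in h)            # surrogate power consumption
    qos_violation = max(0.0, 0.5 - h[0])     # pretend h[0] must be >= 0.5 (QoS)
    return power + 10.0 * qos_violation      # lower is better

pop = [[random.uniform(-1.0, 2.0) for _ in range(DIM)] for _ in range(POP)]
init_best = min(fitness(h) for h in pop)
for _ in range(GENS):
    pop.sort(key=fitness)                    # rank habitats by suitability
    lam = [(i + 1) / POP for i in range(POP)]  # immigration: worse -> higher
    mu = [1.0 - l for l in lam]                # emigration: better -> higher
    new_pop = [pop[0][:]]                    # elitism: keep the best habitat
    for i in range(1, POP):
        h = pop[i][:]
        for d in range(DIM):
            if random.random() < lam[i]:     # immigrate this feature
                src = random.choices(range(POP), weights=mu)[0]
                h[d] = pop[src][d]
            if random.random() < 0.02:       # small mutation rate
                h[d] = random.uniform(-1.0, 2.0)
        new_pop.append(h)
    pop = new_pop

best = min(pop, key=fitness)
print(round(fitness(best), 4))
```

Migration copies whole features from fit habitats rather than blending them, which is the mechanism the abstract credits for BBO's fast early-stage progress relative to PSO and CSO.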

  4. A Prospective Randomized Study on Operative Treatment for Simple Distal Tibial Fractures-Minimally Invasive Plate Osteosynthesis Versus Minimal Open Reduction and Internal Fixation.

    PubMed

    Kim, Ji Wan; Kim, Hyun Uk; Oh, Chang-Wug; Kim, Joon-Woo; Park, Ki Chul

    2018-01-01

To compare the radiologic and clinical results of minimally invasive plate osteosynthesis (MIPO) and minimal open reduction and internal fixation (ORIF) for simple distal tibial fractures. Randomized prospective study. Three level 1 trauma centers. Fifty-eight patients with simple distal tibial fractures were randomized into a MIPO group (treatment with MIPO; n = 29) or a minimal group (treatment with minimal ORIF; n = 29). The sample size was chosen to estimate the rate of soft tissue complications; the study was therefore underpowered to validate superiority in union time or to detect differences in rates of delayed union. Simple distal tibial fractures treated with MIPO or minimal ORIF. The clinical outcome measurements included operative time, radiation exposure time, and soft tissue complications. To evaluate patient function, the American Orthopedic Foot and Ankle Society (AOFAS) ankle score was used. Radiologic measurements included fracture alignment, delayed union, and union time. All patients achieved bone union without any secondary intervention. The mean union time was 17.4 weeks and 16.3 weeks in the MIPO and minimal groups, respectively. There was 1 case of delayed union and 1 case of superficial infection in each group. The radiation exposure time was shorter in the minimal group than in the MIPO group. Coronal angulation differed between the groups. The AOFAS ankle scores were 86.0 and 86.7 in the MIPO and minimal groups, respectively. Minimal ORIF resulted in similar outcomes, with no increased rate of soft tissue problems compared to MIPO. Both MIPO and minimal ORIF have high union rates and good functional outcomes for simple distal tibial fractures. Minimal ORIF did not result in increased rates of infection and wound dehiscence. Therapeutic Level II. See Instructions for Authors for a complete description of levels of evidence.

  5. Biventricular pacing preserves left ventricular performance in patients with high-grade atrio-ventricular block: a randomized comparison with DDD(R) pacing in 50 consecutive patients.

    PubMed

    Albertsen, Andi E; Nielsen, Jens C; Poulsen, Steen H; Mortensen, Peter T; Pedersen, Anders K; Hansen, Peter S; Jensen, Henrik K; Egeblad, Henrik

    2008-03-01

    We aimed to investigate whether biventricular (BiV) pacing minimizes left ventricular (LV) dyssynchrony and preserves LV ejection fraction (LVEF) as compared with standard dual-chamber DDD(R) pacing in consecutive patients with high-grade atrio-ventricular (AV) block. Fifty patients were randomized to DDD(R) pacing or BiV pacing. LVEF was measured using three-dimensional echocardiography. Tissue-Doppler imaging was used to quantify LV dyssynchrony in terms of number of segments with delayed longitudinal contraction (DLC). LVEF was not different between groups after 12 months (P = 0.18). In the DDD(R) group LVEF decreased significantly from 59.7(57.4-61.4)% at baseline to 57.2(52.1-60.6)% at 12 months of follow-up (P = 0.03), whereas LVEF remained unchanged in the BiV group [58.9(47.1-61.7)% at baseline vs. 60.1(55.2-63.3)% after 12 months (P = 0.15)]. Dyssynchrony was more prominent in the DDD(R) group than in the BiV group at baseline (2.2 +/- 2.2 vs. 1.4 +/- 1.3 segments with DLC per patient, P = 0.10); and at 12 month follow-up (1.8 +/- 1.9 vs. 0.8 +/- 0.9 segments with DLC per patient, P = 0.02). NT-proBNP was unchanged in the DDD(R) group during follow-up (122 +/- 178 pmol/L vs. 91 +/- 166 pmol/L, NS) but decreased significantly in the BiV-group (from 198 +/- 505 pmol/L to 86 +/- 95 pmol/L after 12 months, P = 0.02). BiV pacing minimizes LV dyssynchrony, preserves LV function, and reduces NT-proBNP in contrast to DDD(R) pacing in patients with high-grade AV block.

  6. Hemothorax caused by the trocar tip of the rod inserter after minimally invasive transforaminal lumbar interbody fusion: case report.

    PubMed

    Maruo, Keishi; Tachibana, Toshiya; Inoue, Shinichi; Arizumi, Fumihiro; Yoshiya, Shinichi

    2016-03-01

Minimally invasive surgery (MIS) for transforaminal lumbar interbody fusion (MIS-TLIF) is widely used for lumbar degenerative diseases. In this paper the authors report a unique case of a hemothorax caused by the trocar tip of the rod inserter after MIS-TLIF. A 61-year-old woman presented with thigh pain and gait disturbance due to weakness in her lower right extremity. She was diagnosed with a lumbar disc herniation at L1-2, and the MIS-TLIF procedure was performed. Immediately after surgery, the patient's thigh pain resolved and she remained stable with normal vital signs. The day after surgery, she developed severe anemia, and her hemoglobin level decreased to 7.6 g/dl, which required blood transfusions. A chest radiograph revealed a hemothorax. A CT scan confirmed a hematoma of the left paravertebral muscle. A chest tube was placed to treat the hemothorax. After 3 days of drainage, there was no active bleeding. The patient was discharged 14 days after surgery without leg pain or any respiratory problems. This complication may have occurred due to injury of an intercostal artery by the trocar tip of the rod inserter. A hemothorax after spine surgery is a rare complication, especially in the posterior approach. The rod should be inserted caudally in the setting of the thoracolumbar spine.

  7. Common origin of 3.55 keV x-ray line and gauge coupling unification with left-right dark matter

    NASA Astrophysics Data System (ADS)

    Borah, Debasish; Dasgupta, Arnab; Patra, Sudhanwa

    2017-12-01

We present a minimal left-right dark matter framework that can simultaneously explain the recently observed 3.55 keV x-ray line from several galaxy clusters and gauge coupling unification at a high energy scale. Adopting a minimal dark matter strategy, we consider both left and right handed triplet fermionic dark matter candidates which are stable by virtue of a remnant Z_2 ≃ (-1)^(B-L) symmetry arising after the spontaneous symmetry breaking of left-right gauge symmetry to that of the standard model. A scalar bitriplet field is incorporated whose first role is to allow radiative decay of the right handed triplet dark matter into the left handed one and a photon with energy 3.55 keV. The other role this bitriplet field plays at the TeV scale is to assist in achieving gauge coupling unification at a high energy scale within a nonsupersymmetric SO(10) model while keeping the scale of left-right gauge symmetry around the TeV corner. Apart from solving the neutrino mass problem and giving verifiable new contributions to neutrinoless double beta decay and charged lepton flavor violation, the model with TeV scale gauge bosons can also give rise to interesting collider signatures like the diboson excess and the dilepton plus two jets excess reported recently in Large Hadron Collider data.

  8. Engineering Escherichia coli for selective geraniol production with minimized endogenous dehydrogenation.

    PubMed

    Zhou, Jia; Wang, Chonglong; Yoon, Sang-Hwal; Jang, Hui-Jeong; Choi, Eui-Sung; Kim, Seon-Won

    2014-01-01

Geraniol, a monoterpene alcohol, has versatile applications in the fragrance industry, pharmacy and agrochemistry. Moreover, geraniol could be an ideal gasoline alternative. In this study, recombinant overexpression of geranyl diphosphate synthase and the bottom portion of a foreign mevalonate pathway in Escherichia coli MG1655 produced 13.3 mg/L of geraniol. Introduction of Ocimum basilicum geraniol synthase increased geraniol production to 105.2 mg/L. However, geraniol production suffered a loss from its endogenous dehydrogenation and isomerization into other geranoids (nerol, neral and geranial). Three E. coli enzymes (YjgB, YahK and YddN) were identified with high sequence identity to plant geraniol dehydrogenases. YjgB was demonstrated to be the major one responsible for geraniol dehydrogenation. Deletion of yjgB increased geraniol production to 129.7 mg/L. Introduction of the whole mevalonate pathway for enhanced building block synthesis from endogenously synthesized mevalonate improved geraniol production up to 182.5 mg/L in the yjgB mutant after 48 h of culture, double that obtained in the wild type control (96.5 mg/L). Our strategy for improving geraniol production in engineered E. coli should be generalizable for addressing similar problems in metabolic engineering. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. [Cryotherapy is useful and safe in the prevention of oral mucositis after high-dose melphalan (L-PAM)].

    PubMed

    Inagaki, Noriko; Ohue, Yukiko; Shigeta, Hiroe; Tasaka, Taizo

    2006-11-01

We prospectively assessed the effectiveness of cryotherapy after high-dose L-PAM in preventing oral mucositis. Cryotherapy with ice chips was commenced 15 minutes before L-PAM administration and continued until the end of administration. Twenty-six patients were enrolled in this study. Thirteen patients with myeloma were treated with 200 mg/m2 L-PAM followed by autologous peripheral blood stem cell transplantation, and 13 patients (4 AML, 4 MDS, 2 ALL, 2 lymphoma and 1 CML) were treated with 140 mg/m2 L-PAM followed by allogeneic stem cell transplantation. Grade 1 mucositis occurred in 4 of 13 patients (31%) given 200 mg/m2 L-PAM and in 2 of 13 patients (16%) given 140 mg/m2 L-PAM. Only one patient had grade 2 mucositis, and no grade 3 mucositis was observed. The procedure was well tolerated by all patients. These data suggest that cryotherapy is effective in minimizing L-PAM-induced oral mucositis.

  10. Optimal trajectories of aircraft and spacecraft

    NASA Technical Reports Server (NTRS)

    Miele, A.

    1990-01-01

    Work done on algorithms for the numerical solution of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type. Both the primal formulation and the dual formulation are discussed. Aircraft trajectories, in particular the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear, are described. Both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant pitch trajectories and maximum angle of attack trajectories. Spacecraft trajectories, in particular the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer, are examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called the nearly-grazing solution, and its merits are pointed out as a useful engineering compromise between energy requirements and aerodynamic heating requirements.

  11. Sparse Recovery via l1 and L1 Optimization

    DTIC Science & Technology

    2014-11-01

    problem, with t being the descent direction, obtaining u_t = u_xx + f − (1/µ)p(u) (6) as an evolution equation. We can hope that these L1 regularized (or...implementation. He considered a wide class of second-order elliptic equations and, with Friedman [14], an extension to parabolic equations. In [15, 16...obtaining an elliptic PDE, or by gradient descent to obtain a parabolic PDE. Additionally, some PDEs can be rewritten using the L1 subgradient such as the
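The fragmented excerpt above revolves around L1-regularized functionals and their gradient-descent (parabolic) evolutions. As a minimal, self-contained illustration of the L1 machinery involved (my own example, not code from the report), the soft-thresholding operator solves the prototypical L1-regularized problem min_u (1/2)||u - f||^2 + mu*||u||_1 in closed form:

```python
import numpy as np

# Soft-thresholding: the exact minimizer of (1/2)||u - f||^2 + mu*||u||_1,
# applied componentwise. It shrinks each entry of f toward zero by mu and
# zeroes out entries smaller than mu in magnitude.
def soft_threshold(f, mu):
    return np.sign(f) * np.maximum(np.abs(f) - mu, 0.0)

u = soft_threshold(np.array([3.0, -0.5, 1.2, 0.1]), 1.0)
# u is approximately [2.0, 0.0, 0.2, 0.0]
```

Schemes of the kind quoted above apply such a shrinkage step repeatedly, alternating with the smooth (e.g. diffusion) part of the evolution.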

  12. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    NASA Astrophysics Data System (ADS)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.

  13. Antibacterial activity of Zuccagnia punctata Cav. ethanolic extracts.

    PubMed

    Zampini, Iris C; Vattuone, Marta A; Isla, Maria I

    2005-12-01

    The present study was conducted to investigate antibacterial activity of Zuccagnia punctata ethanolic extract against 47 strains of antibiotic-resistant Gram-negative bacteria and to identify bioactive compounds. Inhibition of bacterial growth was investigated using agar diffusion, agar macrodilution, broth microdilution and bioautographic methods. Zuccagnia punctata extract was active against all assayed bacteria (Escherichia coli, Klebsiella pneumoniae, Proteus mirabilis, Enterobacter cloacae, Serratia marcescens, Morganella morganii, Acinetobacter baumannii, Pseudomonas aeruginosa, Stenotrophomonas maltophilia) with minimal inhibitory concentration (MIC) values ranging from 25 to 200 microg/mL. Minimal bactericidal concentration (MBC) values were identical or two-fold higher than the corresponding MIC values. Contact bioautography indicated that Zuccagnia punctata extracts possess one major antibacterial component against Pseudomonas aeruginosa and at least three components against Klebsiella pneumoniae and Escherichia coli. Activity-guided fractionation of the ethanol extract on a silica gel column yielded a compound (2',4'-dihydroxychalcone), which exhibited strong antibacterial activity with MIC values between 0.10 and 1.00 microg/mL for Proteus mirabilis, Enterobacter cloacae, Serratia marcescens, Morganella morganii, Acinetobacter baumannii, Pseudomonas aeruginosa and Stenotrophomonas maltophilia. These values are lower than those of imipenem (0.25-16 microg/mL). Zuccagnia punctata might provide promising therapeutic agents against infections with multi-resistant Gram-negative bacteria.

  14. Combined cannabinoid therapy via an oromucosal spray.

    PubMed

    Perez, Jordi

    2006-08-01

    Extensive basic science research has identified the potential therapeutic benefits of active compounds extracted from the Cannabis sativa L. plant (the cannabinoids). It is recognized that a significant proportion of patients suffering with the debilitating symptoms of pain and spasticity in multiple sclerosis or other conditions smoke cannabis despite the legal implications and stigma associated with this controlled substance. GW Pharmaceuticals have developed Sativex (GW-1000-02), a combined cannabinoid medicine that delivers and maintains therapeutic levels of two principal cannabinoids, delta-9-tetrahydrocannabinol (THC) and cannabidiol (CBD), via an oromucosal pump spray, and that aims to minimize psychotropic side effects. Sativex has proved to be well tolerated and successfully self-administered and self-titrated in both healthy volunteers and patient cohorts. Clinical assessment of this combined cannabinoid medicine has demonstrated efficacy in patients with intractable pain (chronic neuropathic pain, pain due to brachial plexus nerve injury, allodynic peripheral neuropathic pain and advanced cancer pain), rheumatoid arthritis and multiple sclerosis (bladder problems, spasticity and central pain), with no significant intoxication-like symptoms, tolerance or withdrawal syndrome.

  15. Number Partitioning via Quantum Adiabatic Computation

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Toussaint, Udo

    2002-01-01

    We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.

  16. Collocational Differences between L1 and L2: Implications for EFL Learners and Teachers

    ERIC Educational Resources Information Center

    Sadeghi, Karim

    2009-01-01

    Collocations are one of the areas that produce problems for learners of English as a foreign language. Iranian learners of English are by no means an exception. Teaching experience at schools, private language centers, and universities in Iran suggests that a significant part of EFL learners' problems with producing the language, especially at…

  17. Problemes en enseignement fonctionnel des langues (Problems in the Functional Teaching of Languages). Publication B-103.

    ERIC Educational Resources Information Center

    Alvarez, Gerardo, Ed.; Huot, Diane, Ed.

    Articles include: (1) "L'elaboration du materiel pedagogique pour des publics adultes" (The Elaboration of Teaching Materials for the Adult Public) by G. Painchaud-Leblanc, (2) "L'elaboration d'un programme d'etudes en francais langue seconde a partir des donnees recentes en didactique des langues" (The Elaboration of a Program…

  18. Exact recovery of sparse multiple measurement vectors by [Formula: see text]-minimization.

    PubMed

    Wang, Changlong; Peng, Jigen

    2018-01-01

    The joint sparse recovery problem is a generalization of the single measurement vector problem widely studied in compressed sensing. It aims to recover a set of jointly sparse vectors, i.e., those that have nonzero entries concentrated at a common location. Meanwhile [Formula: see text]-minimization subject to matrices is widely used in a large number of algorithms designed for this problem, i.e., [Formula: see text]-minimization [Formula: see text] The main contribution of this paper is two theoretical results about this technique. The first proves that in every multiple system of linear equations there exists a constant [Formula: see text] such that the original unique sparse solution also can be recovered from a minimization in [Formula: see text] quasi-norm subject to matrices whenever [Formula: see text]. The second gives an analytic expression for such [Formula: see text]. Finally, we present the results of one example to confirm the validity of our conclusions, and we use some numerical experiments to show that our results increase the efficiency of algorithms designed for [Formula: see text]-minimization.

  19. Comparative Study of the Difference of Perioperative Complication and Radiologic Results: MIS-DLIF (Minimally Invasive Direct Lateral Lumbar Interbody Fusion) Versus MIS-OLIF (Minimally Invasive Oblique Lateral Lumbar Interbody Fusion).

    PubMed

    Jin, Jie; Ryu, Kyeong-Sik; Hur, Jung-Woo; Seong, Ji-Hoon; Kim, Jin-Sung; Cho, Hyun-Jin

    2018-02-01

    Retrospective observational analysis. The purpose of this study was to compare the incidence of perioperative complications, difference of cage location, and sagittal alignment between minimally invasive oblique lateral lumbar interbody fusion (MIS-OLIF) and MIS-direct lateral lumbar interbody fusion (DLIF) in cases of single-level surgery at L4-L5. MIS-DLIF using a tubular retractor has been used for the treatment of lumbar degenerative diseases; however, blunt transpsoas dissection poses a risk of injury to the lumbar plexus. As an alternative, MIS-OLIF uses a window between the prevertebral venous structures and the psoas muscle. A total of 43 consecutive patients who underwent MIS-DLIF or MIS-OLIF for various L4/L5 level pathologies between November 2011 and April 2014 by a single surgeon were retrospectively reviewed. A complication classification based on the relation to the surgical procedure and effect duration was used. Perioperative complications until 3 months postoperatively were reviewed for all patients. Radiologic results including cage location and sagittal alignment were also assessed with plain radiography. There were no significant statistical differences in perioperative parameters and early clinical outcome between the 2 groups. Overall, there were 13 (59.1%) approach-related complications in the DLIF group and 3 (14.3%) in the OLIF group. In the DLIF group, 3 (45.6%) were classified as persistent; however, there was no persistent complication in the OLIF group. In the OLIF group, the cage is located mostly in the middle 1/3 of the vertebral body, significantly increasing posterior disk space height and foraminal height compared with the DLIF group. Global and segmental lumbar lordosis was greater in the DLIF group due to anterior cage position, without statistical significance. In our report of L4/L5 level diseases, the OLIF technique may decrease approach-related perioperative morbidities by eliminating the risk of unwanted muscle and nerve manipulations. Using an orthogonal maneuver, the cage could be safely placed more posteriorly, resulting in better disk and foraminal height restoration.

  20. Disruption of the Globular Cluster Pal 5

    NASA Technical Reports Server (NTRS)

    Miller, R. H.; Smith, B. F.; Cuzzi, Jeffrey N. (Technical Monitor)

    1995-01-01

    Orbit calculations suggest that the sparse globular cluster, Pal 5, will pass within 7 kpc of the Galactic center the next time it crosses the plane, where it might be destroyed by tidal stresses. We study this problem, treating Pal 5 as a self-consistent dynamical system orbiting through an external potential that represents the Galaxy. The first part of the problem is to find suitable analytic approximations to the Galactic potential. They must be valid in all regions the cluster is likely to explore. Observed velocity and positional data for Pal 5 are used as initial conditions to determine the orbit. Methods we used for a different problem some 12 years ago have been adapted to this problem. Three experiments have been run, with M/L = 1, 3, and 10, for the cluster model. The cluster blew up shortly after passing through the Galactic plane (about 130 Myrs after the beginning of the run) with M/L=1. At M/L = 3 and 10 the cluster survived, although it got quite a kick in the fundamental mode on passing through the plane. But the fundamental mode oscillation died out in a couple of oscillation cycles at M/L=10. Pal 5 will probably be destroyed on its next crossing of the Galactic plane if M/L=1, but it can survive (albeit with fairly heavy damage) if M/L=3. We haven't tried to trap the mass limits more closely than that. Pal 5 comes through pretty well unscathed at M/L=10. An interesting follow-up experiment would be to back the cluster up along its orbit to look at its previous passage through the Galactic plane, to see what kind of object it might have been at earlier times.

  1. Numerical Optimization Using Computer Experiments

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.; Torczon, Virginia

    1997-01-01

    Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
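The loop described above (fit a surrogate to known function values, grid-search the surrogate, then spend an expensive evaluation only at the surrogate's minimizer) can be sketched as follows. This is a hypothetical 1-D illustration: a Gaussian RBF interpolant stands in for the kriging model, and the test function is my own choice, not the paper's standard test problem.

```python
import numpy as np

def rbf_surrogate(X, y, eps=1.0):
    """Fit a Gaussian RBF interpolant through the evaluated points (X, y)."""
    Phi = np.exp(-(eps * np.abs(X[:, None] - X[None, :])) ** 2)
    w = np.linalg.lstsq(Phi, y, rcond=None)[0]  # lstsq tolerates repeated points
    return lambda x: np.exp(-(eps * np.abs(np.atleast_1d(x)[:, None]
                                           - X[None, :])) ** 2) @ w

def surrogate_grid_search(f, lo, hi, n_iter=5, grid_size=201):
    """Minimize expensive f by grid-searching a sequence of cheap surrogates."""
    X = np.linspace(lo, hi, 5)               # small initial design
    y = np.array([f(v) for v in X])
    grid = np.linspace(lo, hi, grid_size)
    for _ in range(n_iter):
        s = rbf_surrogate(X, y)
        x_new = grid[np.argmin(s(grid))]     # dense grid search on the surrogate
        X = np.append(X, x_new)              # evaluate f only at the candidate
        y = np.append(y, f(x_new))
    i = np.argmin(y)
    return X[i], float(y[i])

x_best, f_best = surrogate_grid_search(lambda v: (v - 0.7) ** 2 + 1.0, 0.0, 2.0)
```

Only a handful of expensive objective evaluations are spent; the dense search happens entirely on the cheap surrogate.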

  2. Emergency EPR and OSL dosimetry with table vitamins and minerals.

    PubMed

    Sholom, S; McKeever, S W S

    2016-12-01

    Several table vitamins, minerals and L-lysine amino acid have been preliminarily tested as potential emergency dosemeters using electron paramagnetic resonance (EPR) and optically stimulated luminescence (OSL) techniques. Radiation-induced EPR signals were detected in samples of vitamin B2 and L-lysine while samples of multivitamins of different brands as well as mineral Mg demonstrated prominent OSL signals after exposure to ionizing radiation doses. Basic dosimetric properties of the radiation-sensitive substances were studied, namely dose response, fading of the EPR or OSL signals and values of minimum measurable doses (MMDs). For EPR-sensitive samples, the EPR signal is converted into units of dose using a linear dose response and correcting for fading using the measured fading dependence. For OSL-sensitive materials, a multi-aliquot, enhanced-temperature protocol was developed to avoid the problem of sample sensitization and to minimize the influence of signal fading. The sample dose in this case is also evaluated using the dose response and fading curves. MMDs of the EPR-sensitive samples were below 2 Gy while those of the OSL-sensitive materials were below 500 mGy as long as the samples are analyzed within 1 week after exposure. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. Laurel leaf extracts for honeybee pest and disease management: antimicrobial, microsporicidal, and acaricidal activity.

    PubMed

    Damiani, Natalia; Fernández, Natalia J; Porrini, Martín P; Gende, Liesel B; Álvarez, Estefanía; Buffa, Franco; Brasesco, Constanza; Maggi, Matías D; Marcangeli, Jorge A; Eguaras, Martín J

    2014-02-01

    A diverse set of parasites and pathogens affects productivity and survival of Apis mellifera honeybees. In beekeeping, traditional control by antibiotics and molecules of synthesis has caused problems with contamination and resistant pathogens. In this research, different Laurus nobilis extracts are tested against the main honeybee pests from an integrated point of view. In vivo effects on bee survival are also evaluated. The ethanol extract showed minimal inhibitory concentration (MIC) values of 208 to 416 μg/mL, having the best antimicrobial effect on Paenibacillus larvae among all substances tested. Similarly, this leaf extract showed a significant antiparasitic activity on Varroa destructor, killing 50 % of mites 24 h after a 30-s exposure, and on Nosema ceranae, inhibiting the spore development in the midgut of adult bees ingesting 1 × 10(4) μg/mL of extract solution. Both ethanol extract and volatile extracts (essential oil, hydrolate, and its main component) did not cause lethal effects on adult honeybees. Thus, the absence of topical and oral toxicity of the ethanol extract on bees and the strong antimicrobial, microsporicidal, and miticidal effects registered in this study place this laurel extract as a promising integrated treatment of bee diseases and stimulate the search for other bioactive phytochemicals from plants.

  4. Scheduling with non-decreasing deterioration jobs and variable maintenance activities on a single machine

    NASA Astrophysics Data System (ADS)

    Zhang, Xingong; Yin, Yunqiang; Wu, Chin-Chia

    2017-01-01

    There is a situation found in many manufacturing and service systems, such as steel rolling mills, fire fighting, or single-server cycle-queues, where a job that is processed later consumes more time than that same job when processed earlier. Machine maintenance can mitigate this worsening of processing conditions: after a maintenance activity, the machine is restored. The maintenance duration is a positive and non-decreasing differentiable convex function of the total processing times of the jobs between maintenance activities. Motivated by this observation, the makespan and the total completion time minimization problems in the scheduling of jobs with non-decreasing rates of job processing time on a single machine are considered in this article. It is shown that both the makespan and the total completion time minimization problems are NP-hard in the strong sense when the number of maintenance activities is arbitrary, while the makespan minimization problem is NP-hard in the ordinary sense when the number of maintenance activities is fixed. If the deterioration rates of the jobs are identical and the maintenance duration is a linear function of the total processing times of the jobs between maintenance activities, then this article shows that the group balance principle is satisfied for the makespan minimization problem. Furthermore, two polynomial-time algorithms are presented for solving the makespan problem and the total completion time problem under identical deterioration rates, respectively.

  5. Design optimization of transmitting antennas for weakly coupled magnetic induction communication systems

    PubMed Central

    2017-01-01

    This work focuses on the design of transmitting coils in weakly coupled magnetic induction communication systems. We propose several optimization methods that reduce the active, reactive and apparent power consumption of the coil. These problems are formulated as minimization problems, in which the power consumed by the transmitting coil is minimized, under the constraint of providing a required magnetic field at the receiver location. We develop efficient numeric and analytic methods to solve the resulting problems, which are of high dimension and in certain cases non-convex. For the objective of minimal reactive power, an analytic solution for the optimal current distribution in flat disc transmitting coils is provided. This problem is extended to general three-dimensional coils, for which we develop an expression for the optimal current distribution. Considering the objective of minimal apparent power, a method is developed to reduce the computational complexity of the problem by transforming it to an equivalent problem of lower dimension, allowing a quick and accurate numeric solution. These results are verified experimentally by testing a number of coil geometries. The results obtained allow reduced power consumption and increased performance in magnetic induction communication systems. Specifically, for wideband systems, an optimal design of the transmitter coil reduces the peak instantaneous power provided by the transmitter circuitry, and thus reduces its size, complexity and cost. PMID:28192463

  6. Solution to a gene divergence problem under arbitrary stable nucleotide transition probabilities

    NASA Technical Reports Server (NTRS)

    Holmquist, R.

    1976-01-01

    A nucleic acid chain, L nucleotides in length, with the specific base sequence B(1)B(2) ... B(L) is defined by the L-dimensional vector B = (B(1), B(2), ..., B(L)). For twelve given constant non-negative transition probabilities that, in a specified position, the base B is replaced by the base B' in a single step, an exact analytical expression is derived for the probability that the position goes from base B to B' in X steps. Assuming that each base mutates independently of the others, an exact expression is derived for the probability that the initial gene sequence B goes to a sequence B' = (B'(1), B'(2), ..., B'(L)) after X = (X(1), X(2), ..., X(L)) base replacements. The resulting equations allow a more precise accounting for the effects of Darwinian natural selection in molecular evolution than does the idealized (biologically less accurate) assumption that each of the four nucleotides is equally likely to mutate to and be fixed as one of the other three. Illustrative applications of the theory to some problems of biological evolution are given.
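The core quantity described above, the probability that a position goes from base B to B' in X steps under fixed single-step transition probabilities, is a Markov-chain matrix power, and the sequence-level probability is the product over independent sites. A toy numeric sketch (the transition matrix below is hypothetical, not taken from the paper):

```python
import numpy as np

bases = {"A": 0, "C": 1, "G": 2, "T": 3}

# Hypothetical single-step substitution probabilities (each row sums to 1);
# transitions (A<->G, C<->T) are made more likely than transversions.
P = np.array([
    [0.94, 0.01, 0.04, 0.01],
    [0.01, 0.94, 0.01, 0.04],
    [0.04, 0.01, 0.94, 0.01],
    [0.01, 0.04, 0.01, 0.94],
])

def p_replace(b_from, b_to, steps):
    """Probability that a site goes from b_from to b_to in `steps` replacements."""
    return np.linalg.matrix_power(P, steps)[bases[b_from], bases[b_to]]

def p_sequence(seq_from, seq_to, steps_per_site):
    """Independent sites: multiply the per-position probabilities."""
    return float(np.prod([p_replace(a, b, x)
                          for a, b, x in zip(seq_from, seq_to, steps_per_site)]))
```

Because each row of P sums to 1, every matrix power is again a stochastic matrix, mirroring the exact analytical expressions the abstract describes.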

  7. On polynomial preconditioning for indefinite Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1989-01-01

    The minimal residual method is studied combined with polynomial preconditioning for solving large linear systems (Ax = b) with indefinite Hermitian coefficient matrices (A). The standard approach for choosing the polynomial preconditioners leads to preconditioned systems which are positive definite. Here, a different strategy is studied which leaves the preconditioned coefficient matrix indefinite. More precisely, the polynomial preconditioner is designed to cluster the positive, resp. negative, eigenvalues of A around 1, resp. around some negative constant. In particular, it is shown that such indefinite polynomial preconditioners can be obtained as the optimal solutions of a certain two-parameter family of Chebyshev approximation problems. Some basic results are established for these approximation problems, and a Remez-type algorithm is sketched for their numerical solution. The problem of selecting the parameters such that the resulting indefinite polynomial preconditioner speeds up the convergence of the minimal residual method optimally is also addressed. An approach is proposed based on the concept of asymptotic convergence factors. Finally, some numerical examples of indefinite polynomial preconditioners are given.

  8. Relationship between low handgrip strength and quality of life in Korean men and women.

    PubMed

    Kang, Seo Young; Lim, Jisun; Park, Hye Soon

    2018-06-19

    Handgrip strength is strongly related to muscle power in the extremities and is an important index for diagnosing sarcopenia. We evaluated the relationship between handgrip strength and quality of life (QoL) in Korean men and women. We analyzed 4620 participants (2070 men and 2550 women) using data from the Korea National Health and Nutrition Examination Survey VI-3 (2015). Low handgrip strength was defined as the lower quartile of handgrip strength in the study population. QoL was evaluated according to the European Quality of Life Scale-Five Dimensions (EQ-5D). The relationship between handgrip strength and QoL was evaluated by multivariate logistic regression analyses. The odds ratios (ORs) for low handgrip strength significantly increased as age increased for both men and women. The ORs for low handgrip strength increased as body mass index decreased in men. In men with low handgrip strength, the OR for having problems in mobility (OR 1.93, 95% confidence interval (CI) 1.25-2.98) and having pain or discomfort (1.53, 1.04-2.24) significantly increased. In women with low handgrip strength, the OR for having problems in mobility (2.12, 1.02-2.87), problems in usual activities (2.04, 1.46-2.85), and having pain or discomfort (1.48, 1.15-1.90) significantly increased. Men with low handgrip strength had poor QoL on the mobility and pain/discomfort dimensions of EQ-5D, whereas women with low handgrip strength had poor QoL on mobility, usual activities, and pain/discomfort dimensions. Management to improve handgrip strength is necessary for achieving better QoL.

  9. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L(sub 1) or L(sub infinity) norm is used to represent distance, the problem becomes a linear programming problem. Then the stochastic problem is formulated, in which the uncertainty is induced by sensing and by the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
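The remark that the problem becomes a linear program under the L1 or L-infinity norm can be sketched concretely. In the sketch below (my own formulation, not the authors' code), each object is the convex hull of its vertices, candidate points are convex combinations, and the L-infinity distance is the minimized bound t on every coordinate difference:

```python
import numpy as np
from scipy.optimize import linprog

def linf_distance(VA, VB):
    """Minimum L-infinity distance between conv(VA) and conv(VB); rows are vertices."""
    na, dim = VA.shape
    nb = VB.shape[0]
    n = na + nb + 1                       # variables: lambda, mu, t
    c = np.zeros(n)
    c[-1] = 1.0                           # minimize t
    A_ub, b_ub = [], []
    for d in range(dim):                  # +/-(x_d - y_d) <= t for each coordinate
        plus = np.concatenate([VA[:, d], -VB[:, d], [-1.0]])
        minus = np.concatenate([-VA[:, d], VB[:, d], [-1.0]])
        A_ub += [plus, minus]
        b_ub += [0.0, 0.0]
    A_eq = np.zeros((2, n))
    A_eq[0, :na] = 1.0                    # lambda sums to 1
    A_eq[1, na:na + nb] = 1.0             # mu sums to 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=A_eq, b_eq=[1.0, 1.0], bounds=[(0, None)] * n)
    return res.fun

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
other = square + np.array([3.0, 2.0])     # shifted copy of the square
d = linf_distance(square, other)          # closest points (1,1) and (3,2): d = 2
```

The L1 case is analogous, with one slack variable per coordinate instead of a single shared bound t.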

  10. Effect of Essential Oils on Germination and Growth of Some Pathogenic and Spoilage Spore-Forming Bacteria.

    PubMed

    Voundi, Stève Olugu; Nyegue, Maximilienne; Lazar, Iuliana; Raducanu, Dumitra; Ndoye, Florentine Foe; Marius, Stamate; Etoa, François-Xavier

    2015-06-01

    The use of essential oils as food preservatives has increased due to their capacity to inhibit vegetative growth of some bacteria. However, only limited data are available on their effect on bacterial spores. The aim of the present study was to evaluate the effect of some essential oils on the growth and germination of three Bacillus species and Geobacillus stearothermophilus. Essential oils were chemically analyzed using gas chromatography and gas chromatography coupled to mass spectrometry. The minimal inhibitory and bactericidal concentrations for vegetative growth and spore germination were assessed using the macrodilution method. The germination inhibitory effect of essential oils on treated spores was evaluated on solid medium, while growth kinetics were followed using spectrophotometry in the presence of essential oils. Essential oil from Drypetes gossweileri, mainly composed of benzyl isothiocyanate (86.7%), was the most potent, with minimal inhibitory concentrations ranging from 0.0048 to 0.0097 mg/mL on vegetative cells and 0.001 to 0.002 mg/mL on spore germination. Furthermore, essential oil from D. gossweileri reduced 50% of spore germination after treatment at 1.25 mg/mL, and its combination with other oils improved both bacteriostatic and bactericidal activities with additive or synergistic effects. Concerning the other essential oils, the minimal inhibitory concentration ranged from 5 to 0.63 mg/mL on vegetative growth and from 0.75 to 0.09 mg/mL on the germination of spores. Spectrophotometric evaluation showed an inhibitory effect of essential oils on both germination and outgrowth. From these results, it is concluded that some of the essential oils tested might be a valuable tool for bacteriological control in food industries. Therefore, further research regarding their use as food preservatives should be carried out.

  11. NEWSUMT: A FORTRAN program for inequality constrained function minimization, users guide

    NASA Technical Reports Server (NTRS)

    Miura, H.; Schmit, L. A., Jr.

    1979-01-01

    A computer program written in FORTRAN subroutine form for the solution of linear and nonlinear constrained and unconstrained function minimization problems is presented. The algorithm is the sequence of unconstrained minimizations technique, using Newton's method for the unconstrained function minimizations. The use of NEWSUMT and the definition of all parameters are described.
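The SUMT idea behind NEWSUMT (a sequence of unconstrained minimizations, each solved with Newton's method) can be illustrated on a small problem. Everything below is an illustrative assumption: the test problem, the exterior quadratic penalty, and the finite-difference Newton solver are mine, not NEWSUMT's actual FORTRAN internals:

```python
import numpy as np

def newton(F, x, iters=20, h=1e-5):
    """Newton's method with finite-difference gradient and Hessian."""
    for _ in range(iters):
        n = len(x)
        g = np.zeros(n)
        H = np.zeros((n, n))
        for i in range(n):
            ei = np.zeros(n); ei[i] = h
            g[i] = (F(x + ei) - F(x - ei)) / (2 * h)
            for j in range(n):
                ej = np.zeros(n); ej[j] = h
                H[i, j] = (F(x + ei + ej) - F(x + ei - ej)
                           - F(x - ei + ej) + F(x - ei - ej)) / (4 * h * h)
        x = x - np.linalg.solve(H, g)
    return x

f = lambda v: (v[0] - 2) ** 2 + (v[1] - 2) ** 2   # objective
g_con = lambda v: v[0] + v[1] - 2                 # constraint g_con(v) <= 0

x = np.array([0.0, 0.0])
for r in [1.0, 10.0, 100.0, 1000.0]:              # growing penalty weight
    F = lambda v, r=r: f(v) + r * max(0.0, g_con(v)) ** 2
    x = newton(F, x)
# x approaches the constrained minimizer (1, 1) as r grows
```

Each unconstrained subproblem warm-starts from the previous solution, so the sequence tracks the constrained minimizer as the penalty weight increases.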

  12. TH-AB-BRA-02: Automated Triplet Beam Orientation Optimization for MRI-Guided Co-60 Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, D; Thomas, D; Cao, M

    2016-06-15

    Purpose: MRI guided Co-60 provides daily and intrafractional MRI soft tissue imaging for improved target tracking and adaptive radiotherapy. To remedy the low output limitation, the system uses three Co-60 sources 120° apart, but using all three sources in planning is considerably unintuitive. We automate the beam orientation optimization using column generation, and then solve a novel fluence map optimization (FMO) problem while regularizing the number of MLC segments. Methods: Three patients were evaluated: 1 prostate (PRT), 1 lung (LNG), and 1 head-and-neck boost plan (H&NBoost). The beamlet dose for 180 equally spaced coplanar beams under a 0.35 T magnetic field was calculated using Monte Carlo. The 60 triplets were selected using the column generation algorithm. The FMO problem was formulated as an L2-norm minimization with an anisotropic total variation (TV) regularization term, which allows for control over the number of MLC segments. Our Fluence Regularized and Optimized Selection of Triplets (FROST) plans were compared against the clinical treatment plans (CLN) produced by an experienced dosimetrist. Results: The mean PTV D95, D98, and D99 differ by −0.02%, +0.12%, and +0.44% of the prescription dose between planning methods, showing the same PTV dose coverage. The mean PTV homogeneity (D95/D5) was 0.9360 (FROST) and 0.9356 (CLN). R50 decreased by 0.07 with FROST. On average, FROST reduced the Dmax and Dmean of OARs by 6.56% and 5.86% of the prescription dose. Manual CLN planning required iterative trial-and-error runs and was very time-consuming, while FROST required minimal human intervention. Conclusions: MRI guided Co-60 therapy needs the output of all sources yet suffers from an unintuitive and laborious manual beam selection process. Automated triplet orientation optimization is shown to be essential to overcome this difficulty and improves the dosimetry. A novel FMO with regularization provides additional control over the number of MLC segments and treatment time. Varian Medical Systems; NIH grant R01CA188300; NIH grant R43CA183390.
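    The L2-plus-anisotropic-TV formulation can be made concrete with a small numerical sketch. The snippet below is a toy illustration under stated assumptions, not the FROST implementation: `A` stands in for a beamlet dose matrix, `x` for a 1-D fluence vector, and the TV term is smoothed by a small epsilon so plain projected gradient descent applies (the function and parameter names are hypothetical).

```python
import numpy as np

def fmo_tv(A, d, lam=0.1, eps=1e-2, iters=2000):
    """Toy solver for min_x ||A x - d||^2 + lam * TV(x), x >= 0, where the
    anisotropic TV term sum_i |x_{i+1} - x_i| is smoothed by eps so plain
    projected gradient descent applies. Illustrative only, not FROST."""
    m, n = A.shape
    x = np.zeros(n)
    # conservative step size: 1 / (Lipschitz constant of the full gradient)
    lr = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2 + 4.0 * lam / eps)
    for _ in range(iters):
        grad_fid = 2.0 * A.T @ (A @ x - d)          # data-fidelity gradient
        diff = np.diff(x)
        w = diff / np.sqrt(diff ** 2 + eps ** 2)    # smoothed sign of differences
        grad_tv = np.zeros(n)
        grad_tv[:-1] -= w
        grad_tv[1:] += w
        x = np.maximum(x - lr * (grad_fid + lam * grad_tv), 0.0)  # fluence >= 0
    return x
```

    Larger `lam` yields a fluence with fewer jumps, which is the lever the abstract describes for controlling the number of MLC segments.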

  13. Structured Kernel Subspace Learning for Autonomous Robot Navigation.

    PubMed

    Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai

    2018-02-14

    This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to navigate safely in a dynamic environment due to challenges such as the varying quality and complexity of training data with unwanted noise. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on recent advances in nuclear-norm and l1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (symmetric and positive semi-definite) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning on various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.
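    The low-rank, symmetric positive semi-definite kernel approximation at the heart of this approach can be illustrated with eigenvalue soft-thresholding, which is the proximal operator behind nuclear-norm minimization restricted to symmetric matrices. This is a minimal sketch of that one ingredient (the function name and threshold are hypothetical), not the authors' full l1/nuclear-norm algorithm:

```python
import numpy as np

def lowrank_psd_approx(K, tau):
    """Eigenvalue soft-thresholding of a symmetric matrix: the proximal
    operator of the nuclear norm, restricted to the PSD cone. Shrinking
    eigenvalues by tau zeroes out small (noise/outlier) directions and
    returns a low-rank PSD matrix. Sketch only, not the paper's solver."""
    K = 0.5 * (K + K.T)                 # symmetrize against numerical noise
    vals, vecs = np.linalg.eigh(K)
    vals = np.maximum(vals - tau, 0.0)  # shrink; small and negative eigenvalues vanish
    return (vecs * vals) @ vecs.T
```

    In Gaussian process regression, replacing a corrupted kernel matrix with such a low-rank PSD surrogate is what lets the orthogonal basis suppress erroneous data.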

  14. Inhibitory effect of 1,2,4-triazole-ciprofloxacin hybrids on Haemophilus parainfluenzae and Haemophilus influenzae biofilm formation in vitro under stationary conditions.

    PubMed

    Kosikowska, Urszula; Andrzejczuk, Sylwia; Plech, Tomasz; Malm, Anna

    2016-10-01

    Haemophilus parainfluenzae and Haemophilus influenzae, representatives of the upper respiratory tract microbiota, are able to colonize natural and artificial surfaces as biofilm. The aim of the present study was to assay the effect of ten 1,2,4-triazole-ciprofloxacin hybrids on planktonic or biofilm-forming haemophili cells in vitro under stationary conditions on the basis of MICs (minimal inhibitory concentrations) and MBICs (minimal biofilm inhibitory concentrations). In addition, the anti-adhesive properties of these compounds were examined. The reference strains of H. parainfluenzae and H. influenzae were included. The broth microdilution microtiter plate (MTP) method, with twofold dilutions of the compounds or of ciprofloxacin (reference agent) in 96-well polystyrene microplates, was used. The optical density (OD) reading was made spectrophotometrically at a wavelength of 570 nm (OD570), both to measure bacterial growth and to detect biofilm-forming cells under the same conditions with 0.1% crystal violet. The following parameter values were estimated for the 1,2,4-triazole-ciprofloxacin hybrids - MIC = 0.03-15.63 mg/L, MBIC = 0.03-15.63 mg/L, MBIC/MIC = 0.125-8, depending on the compound - and for ciprofloxacin - MIC = 0.03-0.06 mg/L, MBIC = 0.03-0.12 mg/L, MBIC/MIC = 1-2. The observed strong anti-adhesive properties (95-100% inhibition) of the tested compounds were reversible during long-term incubation at subinhibitory concentrations. Thus, 1,2,4-triazole-ciprofloxacin hybrids may be considered as starting compounds for designing improved agents active not only against planktonic but also against biofilm-forming Haemophilus spp. cells. Copyright © 2016 Institut Pasteur. Published by Elsevier Masson SAS. All rights reserved.

  15. The Mother Tongue in the Foreign Language: An Account of Russian L2 Learners' Error Incidence on Output

    ERIC Educational Resources Information Center

    Forteza Fernandez, Rafael Filiberto; Korneeva, Larisa I.

    2017-01-01

    Based on Selinker's hypothesis of five psycholinguistic processes shaping interlanguage (1972), the paper focuses attention on the Russian L2-learners' overreliance on the L1 as the main factor hindering their development. The research problem is, therefore, the high incidence of L1 transfer in the spoken and written English language output of…

  16. Speculum lumbar extraforaminal microdiscectomy.

    PubMed

    Obenchain, T G

    2001-01-01

    Public interest, monetary pressures and improving diagnostic techniques have placed an increasing emphasis on minimalism in lumbar disc excision. Current techniques include microlumbar discectomy and minimally invasive spinal surgery. Both are good techniques but may be painful, require a hospital stay and/or are not widely used because of the difficulty of acquiring the necessary skills. The author therefore developed a less invasive microscopic technique that may be performed on a consistent outpatient basis with easily acquired skills. The purpose of this study was to describe a variant of minimally invasive lumbar disc excision, while assessing the effects on a small group of patients. The treatment protocol was a prospective community hospital-based case study designed to evaluate a less invasive method of excising herniated lumbar discs residing in the canal, foraminal or far lateral space. This study comprises 50 patients with all anatomic forms of lumbar disc herniations, inside or outside the canal, at all levels except the lumbosacral joint. Clinical results were measured by return to work time, the criteria of MacNab and by Prolo et al.'s economic and functional criteria. Selection criteria included adult patients with intractable low back and leg pain, plus an imaging study revealing a lumbar disc herniation consistent with the patient's clinical presentation. Mean patient age was 48 years. The male:female ratio was approximately 2:1. All patients failed at least 3 weeks of conservative therapy. Herniations occurred from the L2-3 space through L4-5, with 30 herniations being within and 20 outside the spinal canal. Both contained and extruded/sequestered herniations were treated. Excluded from the study were patients with herniations inside the spinal canal at the L5-S1 level. Surgical approach was by microscopic speculum transforaminal route for discs residing both within and outside the lumbar canal. 
The initial 50 consecutive patients had successful technical operations performed on an outpatient basis by this less invasive technique. By the criteria of MacNab (Table 3), 84% (42 of 50) had an excellent or good result, returning to work at a mean time of 3.5 weeks. Per Prolo et al.'s economic scale, 72% were disabled at levels I and II before surgery. Postoperatively, 92% had improved to levels IV and V. Similarly, on his functional scale, 94% functioned at levels I and II before surgery, whereas 88% achieved levels IV and V after surgery. Eighty percent required no pain medications 1 week after surgery. The only complication was an L3 minor nerve root injury as it exited the L3-4 foramen. The author has described a minimally invasive technique for excising herniated discs that is applicable to all types of lumbar herniations, except for those residing in the canal at L5-S1. Clinical outcomes are comparable to those of other forms of discectomy.

  17. Carbon nanotubes for voltammetric determination of sulphite in some beverages.

    PubMed

    Silva, Erika M; Takeuchi, Regina M; Santos, André L

    2015-04-15

    In this work, a square-wave voltammetric method based on sulphite electrochemical reduction was developed for quantification of this preservative in commercial beverages. A carbon-paste electrode chemically modified with multiwalled carbon nanotubes was used as the working electrode. Under the optimised experimental conditions, a linear response to sulphite concentrations from 1.6 to 32 mg SO2 L(-1) (25-500 μmol L(-1) of sulphite), with a limit of detection of 1.0 mg SO2 L(-1) (16 μmol L(-1) of sulphite), was obtained. This method does not suffer interference from other common beverage additives such as ascorbic acid, fructose, and sucrose, and it enables fast and reliable sulphite determination in beverages, with minimal sample pretreatment. Despite its selectivity, the method is not applicable to red grape juice or red wine samples, because some of their components produce a cathodic peak at almost the same potential as that of sulphite reduction. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Sparse reconstruction of breast MRI using homotopic L0 minimization in a regional sparsified domain.

    PubMed

    Wong, Alexander; Mishra, Akshaya; Fieguth, Paul; Clausi, David A

    2013-03-01

    The use of MRI for early breast examination and screening of asymptomatic women has become increasingly popular, given its ability to provide detailed tissue characteristics that cannot be obtained using other imaging modalities such as mammography and ultrasound. Recent application-oriented developments in compressed sensing theory have shown that certain types of magnetic resonance images are inherently sparse in particular transform domains, and as such can be reconstructed with a high level of accuracy from highly undersampled k-space data below Nyquist sampling rates using homotopic L0 minimization schemes, which holds great potential for significantly reducing acquisition time. An important consideration in the use of such homotopic L0 minimization schemes is the choice of sparsifying transform. In this paper, a regional differential sparsifying transform is investigated for use within a homotopic L0 minimization framework for reconstructing breast MRI. By taking local regional characteristics into account, the regional differential sparsifying transform can better account for signal variations and fine details that are characteristic of breast MRI than the popular finite differential transform, while still maintaining strong structure fidelity. Experimental results show that good breast MRI reconstruction accuracy can be achieved compared to existing methods.
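    Homotopic L0 schemes replace the combinatorial L0 penalty with a smooth surrogate whose sharpness is gradually increased. The toy sketch below follows the well-known smoothed-L0 (SL0-style) recipe for recovering a sparse vector from undersampled linear measurements; it illustrates the homotopy idea only, using an identity sparsifying transform rather than the paper's regional differential transform (all names and parameters are hypothetical).

```python
import numpy as np

def smoothed_l0(A, y, sigma_decay=0.7, mu=1.0, inner=30, sigma_min=1e-4):
    """Toy homotopic (smoothed) L0 recovery of a sparse x from y = A x.
    The L0 'norm' is replaced by sum(1 - exp(-x^2 / (2 sigma^2))); sigma is
    annealed toward 0 while each gradient step is projected back onto the
    data-consistency set {x : A x = y}. SL0-style sketch, not the paper's
    regional-transform pipeline."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                          # minimum-L2 starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            x = x - mu * x * np.exp(-x ** 2 / (2.0 * sigma ** 2))
            x = x - A_pinv @ (A @ x - y)    # project onto A x = y
        sigma *= sigma_decay
    return x
```

    Decreasing sigma makes the surrogate behave ever more like a true L0 count, which is what "homotopic" refers to in the abstract.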

  19. Matrix Interdiction Problem

    NASA Astrophysics Data System (ADS)

    Kasiviswanathan, Shiva Prasad; Pan, Feng

    In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove a set of k matrix columns so as to minimize the sum of the row values in the residual matrix, where the value of a row is defined to be the largest entry in that row. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to minimize the probability that an adversary can successfully smuggle weapons. After introducing the matrix interdiction problem, we study its computational complexity. We show that the matrix interdiction problem is NP-hard and that there exists a constant γ such that it is even NP-hard to approximate this problem within an n^γ additive factor. We also present an algorithm for this problem that achieves an (n - k) multiplicative approximation ratio.
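    The objective is easy to state in code. The following brute-force sketch (a hypothetical helper, not code from the paper) evaluates every subset of k columns and returns the removal minimizing the sum of row maxima; it is exact but exponential in k, which is consistent with the NP-hardness result:

```python
import numpy as np
from itertools import combinations

def interdict_bruteforce(M, k):
    """Matrix interdiction by exhaustive search: try every set of k columns
    to remove and return the one minimizing the sum over rows of the largest
    remaining entry. Exact but exponential in k; a hypothetical helper for
    toy instances, not code from the paper."""
    M = np.asarray(M, dtype=float)
    n = M.shape[1]
    best_cols, best_val = None, np.inf
    for cols in combinations(range(n), k):
        keep = [j for j in range(n) if j not in cols]
        val = M[:, keep].max(axis=1).sum()   # sum of row maxima after removal
        if val < best_val:
            best_cols, best_val = cols, val
    return best_cols, best_val
```

    On the 3x3 example below, removing the first column is optimal because it holds two of the three row maxima.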

  20. Improvement in ethanol productivity of engineered E. coli strain SSY13 in defined medium via adaptive evolution.

    PubMed

    Jilani, Syed Bilal; Venigalla, Siva Sai Krishna; Mattam, Anu Jose; Dev, Chandra; Yazdani, Syed Shams

    2017-09-01

    E. coli has the ability to ferment both C5 and C6 sugars and produce a mixture of acids along with a small amount of ethanol. In our previous study, we reported the construction of an ethanologenic E. coli strain by modulating flux through the endogenous pathways. In the current study, we made further changes in the strain to make the overall process industry friendly; the changes being (1) removal of the plasmid, (2) use of a low-cost defined medium, and (3) improvement in the consumption rate of both C5 and C6 sugars. We first constructed a plasmid-free strain SSY13 and passaged it on AM1-xylose minimal medium plates for 150 days. Further passaging was done for 56 days in liquid AM1 medium containing either glucose or xylose on alternate days. We observed an increase in specific growth rate and carbon utilization rate with increasing passage number until day 42 for both glucose and xylose. The 42nd-day passaged strain SSK42 fermented 113 g/L xylose in AM1 minimal medium and produced 51.1 g/L ethanol in 72 h at 89% of the maximum theoretical yield, with an ethanol productivity of 1.4 g/L/h during 24-48 h of fermentation. The ethanol titer, yield and productivity were 49, 40 and 36% higher, respectively, for SSK42 compared to the unevolved SSY13 strain.

  1. The B - L/electroweak Hierarchy in Smooth Heterotic Compactifications

    NASA Astrophysics Data System (ADS)

    Ambroso, Michael; Ovrut, Burt A.

    E8 × E8 heterotic string and M-theory, when appropriately compactified, can give rise to realistic, N = 1 supersymmetric particle physics. In particular, the exact matter spectrum of the MSSM, including three right-handed neutrino supermultiplets, one per family, and one pair of Higgs-Higgs conjugate superfields is obtained by compactifying on Calabi-Yau manifolds admitting specific SU(4) vector bundles. These "heterotic standard models" have the SU(3)C × SU(2)L × U(1)Y gauge group of the standard model augmented by an additional gauged U(1)B-L. Their minimal content requires that the B - L gauge symmetry be spontaneously broken by a vacuum expectation value of at least one right-handed sneutrino. In a previous paper, we presented the results of a renormalization group analysis showing that B - L gauge symmetry is indeed radiatively broken with a B - L/electroweak hierarchy of O(10) to O(10^2). In this paper, we present the details of that analysis, extending the results to include higher order terms in tan β^(-1) and the explicit spectrum of all squarks and sleptons.

  2. On the nullspace of TLS multi-station adjustment

    NASA Astrophysics Data System (ADS)

    Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen

    2018-07-01

    In this article we present an analytic treatment of TLS multi-station least-squares adjustment with the main focus on the datum problem. In contrast to previously published research, the datum problem is theoretically analyzed and solved, with the solution based on the nullspace derivation of the mathematical model. The importance of the datum problem solution lies in a complete description of TLS multi-station adjustment solutions from the set of all minimally constrained least-squares solutions. On the basis of the known nullspace, the estimable parameters are described and the geometric interpretation of all minimally constrained least-squares solutions is presented. Finally, a simulated example is used to analyze the results of TLS multi-station minimally constrained and inner-constrained least-squares adjustment solutions.

  3. Method of grid generation

    DOEpatents

    Barnette, Daniel W.

    2002-01-01

    The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.

  4. Minimum Bayes risk image correlation

    NASA Technical Reports Server (NTRS)

    Minter, T. C., Jr.

    1980-01-01

    In this paper, the problem of designing a matched filter for image correlation will be treated as a statistical pattern recognition problem. It is shown that, by minimizing a suitable criterion, a matched filter can be estimated which approximates the optimum Bayes discriminant function in a least-squares sense. It is well known that the use of the Bayes discriminant function in target classification minimizes the Bayes risk, which in turn directly minimizes the probability of a false fix. A fast Fourier implementation of the minimum Bayes risk correlation procedure is described.
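    The fast Fourier implementation mentioned above rests on the correlation theorem: spatial cross-correlation becomes pointwise multiplication by the conjugate spectrum in the frequency domain. Below is a generic sketch of such an FFT matched filter (not the paper's Bayes-risk-trained filter; names are illustrative):

```python
import numpy as np

def fft_correlate(image, template):
    """FFT matched-filter correlation: multiply the image spectrum by the
    conjugate template spectrum and invert, which equals circular spatial
    cross-correlation. Returns the (row, col) shift of the correlation peak.
    Generic sketch, not the paper's Bayes-risk-trained filter."""
    F = np.fft.fft2(image)
    T = np.fft.fft2(template, s=image.shape)      # zero-pad template to image size
    corr = np.real(np.fft.ifft2(F * np.conj(T)))  # circular cross-correlation
    return np.unravel_index(np.argmax(corr), corr.shape)
```

    In the paper's setting the template would be replaced by the filter estimated to approximate the Bayes discriminant function, but the FFT machinery is the same.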

  5. Distance majorization and its applications

    PubMed Central

    Chi, Eric C.; Zhou, Hua; Lange, Kenneth

    2014-01-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications. PMID:25392563
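    The three ingredients named in the abstract (majorization-minimization, the classical penalty method, and easy per-set projections) fit together in a few lines. The sketch below is a toy rendition under stated assumptions, not the authors' code: the smooth objective is the squared distance to a point z, each dist(x, C_i)^2 is majorized by the squared distance to the current projection P_i(x_k), and the penalty rho grows geometrically.

```python
import numpy as np

def project_intersection(z, projections, rho=1.0, iters=500, growth=1.02):
    """Distance-majorization sketch: minimize ||x - z||^2 / 2 over an
    intersection of convex sets, given only each set's projection operator.
    dist(x, C_i)^2 is majorized by ||x - P_i(x_k)||^2, giving a closed-form
    surrogate minimum; rho grows as in the classical penalty method.
    Toy illustration, not the authors' code."""
    x = np.asarray(z, dtype=float).copy()
    m = len(projections)
    for _ in range(iters):
        anchors = sum(P(x) for P in projections)   # majorization anchor points
        x = (z + rho * anchors) / (1.0 + rho * m)  # exact minimum of the surrogate
        rho *= growth
    return x

# Example sets (hypothetical): the unit ball and the halfspace {x : x[0] >= 0.5}
def proj_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def proj_halfspace(x):
    y = x.copy()
    y[0] = max(y[0], 0.5)
    return y
```

    Only projections onto the individual sets are ever computed, which is exactly the regime the paper targets: easy per-set projections, hard joint projection.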

  6. Effectiveness of disinfectants used in cooling towers against Legionella pneumophila.

    PubMed

    García, M T; Pelaz, C

    2008-01-01

    Legionella persists in man-made aquatic installations despite preventive treatments. More information about disinfectants could improve the effectiveness of treatments. This study tests the susceptibility of Legionella pneumophila serogroup (sg) 1 against 8 disinfectants used in cooling tower treatments. We determined the minimal inhibitory concentration (MIC), minimal bactericidal concentration (MBC) and bactericidal effect of sodium hypochlorite (A), hydrogen peroxide with silver nitrate (B), didecyldimethylammonium chloride (C), benzalkonium chloride (D), tributyltetradecylphosphonium chloride (E), tetrahydroxymethylphosphonium sulfide (F), 2,2-dibromonitropropionamide (G) and chloromethylisothiazolone (H) against 28 L. pneumophila sg 1 isolates. MIC and MBC values were equivalent. Bacteria are less susceptible to disinfectants F, B, D and A than to H, E, C and G. All disinfectants induced a bactericidal effect. The effect rate is dose dependent for G, H, F and B; the effect is fast for the rest of disinfectants at any concentration. The bactericidal activity of disinfectants A, G and F depends on the susceptibility test used. All disinfectants have bactericidal activity against L. pneumophila sg 1 at concentrations used in cooling tower treatments. Results depend on the assay for some products.

  7. Characterization for stability in planar conductivities

    NASA Astrophysics Data System (ADS)

    Faraco, Daniel; Prats, Martí

    2018-05-01

    We find a complete characterization for sets of uniformly strongly elliptic and isotropic conductivities with stable recovery in the L2 norm when the data of the Calderón Inverse Conductivity Problem is obtained in the boundary of a disk and the conductivities are constant in a neighborhood of its boundary. To obtain this result, we present minimal a priori assumptions which turn out to be sufficient for sets of conductivities to have stable recovery in a bounded and rough domain. The condition is presented in terms of the integral moduli of continuity of the coefficients involved and their ellipticity bound as conjectured by Alessandrini in his 2007 paper, giving explicit quantitative control for every pair of conductivities.

  8. Proceedings: The Annual Executive Seminar on International Security Affairs (7th). The Changing Scene Foreign Military Sales and Technology Transfer, Washington, DC, 16-17 March 1983.

    DTIC Science & Technology

    1983-03-17

    reconstituted Lebanese Army. A substantial part of our request for funds in supplemental and fiscal ... will involve the necessary support for the... Lebanese Army. One thing we forget, though -- that it might not be possible ...days, denies the facts of history. What we want to do is minimize it. The Lebanese have an interesting way of solving their own problems. They are

  9. Effects of different fresh gas flows with or without a heat and moisture exchanger on inhaled gas humidity in adults undergoing general anaesthesia: A systematic review and meta-analysis of randomised controlled trials.

    PubMed

    Braz, José R C; Braz, Mariana G; Hayashi, Yoko; Martins, Regina H G; Betini, Marluci; Braz, Leandro G; El Dib, Regina

    2017-08-01

    The minimum inhaled gas absolute humidity level is 20 mgH2O l(-1) for short-duration use in general anaesthesia and 30 mgH2O l(-1) for long-duration use in intensive care to avoid respiratory tract dehydration. The aim is to compare the effects of different fresh gas flows (FGFs) through a circle rebreathing system, with or without a heat and moisture exchanger (HME), on inhaled gas absolute humidity in adults undergoing general anaesthesia. Systematic review and meta-analyses of randomised controlled trials. We defined FGF (l min(-1)) as minimal (0.25 to 0.5), low (0.6 to 1.0) or high (≥2). We extracted the inhaled gas absolute humidity data at 60 and 120 min after connection of the patient to the breathing circuit. The effect size is expressed as mean differences and corresponding 95% confidence intervals (CI). PubMed, EMBASE, SciELO, LILACS and CENTRAL until January 2017. We included 10 studies. The inhaled gas absolute humidity was higher with minimal flow compared with low flow at 120 min [mean difference 2.51 (95% CI: 0.32 to 4.70); P = 0.02] but not at 60 min [mean difference 2.95 (95% CI: -0.95 to 6.84); P = 0.14], and higher with low flow compared with high flow at 120 min [mean difference 7.19 (95% CI: 4.53 to 9.86); P < 0.001]. An inhaled gas absolute humidity minimum of 20 mgH2O l(-1) was attained with minimal flow at all times but not with low or high flows. An HME increased the inhaled gas absolute humidity: with minimal flow at 120 min [mean difference 8.49 (95% CI: 1.15 to 15.84); P = 0.02]; with low flow at 60 min [mean difference 9.87 (95% CI: 3.18 to 16.57); P = 0.04] and 120 min [mean difference 7.19 (95% CI: 3.29 to 11.10); P = 0.003]; and with high flow of 2 l min(-1) at 60 min [mean difference 6.46 (95% CI: 4.05 to 8.86); P < 0.001] and of 3 l min(-1) at 120 min [mean difference 12.18 (95% CI: 6.89 to 17.47); P < 0.001]. The inhaled gas absolute humidity attained or approached 30 mgH2O l(-1) when an HME was used, at all FGFs and times. 
    All intubated patients should receive an HME with low or high flows. With minimal flow, an HME adds cost and is not needed to achieve an appropriate inhaled gas absolute humidity.

  10. Robust 2DPCA with non-greedy l1 -norm maximization for image analysis.

    PubMed

    Wang, Rong; Nie, Feiping; Yang, Xiaojun; Gao, Feifei; Yao, Minli

    2015-05-01

    2-D principal component analysis based on the l1-norm (2DPCA-L1) is a recently developed approach for robust dimensionality reduction and feature extraction in the image domain. Normally, a greedy strategy is applied owing to the difficulty of directly solving the l1-norm maximization problem; such a strategy, however, easily gets stuck in a local solution. In this paper, we propose a robust 2DPCA with non-greedy l1-norm maximization in which all projection directions are optimized simultaneously. Experimental results on face and other datasets confirm the effectiveness of the proposed approach.
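    The non-greedy update can be written compactly: with all directions W optimized jointly, each iteration sets W to the polar factor of sum_i X_i^T sign(X_i W), which keeps W orthonormal and never decreases the objective sum_i ||X_i W||_1. The following is a compact sketch of that general recipe (a reconstruction for illustration, not the authors' released code; names are hypothetical):

```python
import numpy as np

def l1_2dpca_nongreedy(images, r, iters=50, seed=0):
    """Sketch of non-greedy l1-norm 2DPCA: find an orthonormal d x r
    projection W maximizing sum_i ||X_i W||_1 with all r directions updated
    jointly. Each step takes the polar factor of sum_i X_i^T sign(X_i W),
    which never decreases the objective. A reconstruction of the general
    recipe, not the authors' released code."""
    d = images[0].shape[1]
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((d, r)))   # random orthonormal start
    for _ in range(iters):
        M = sum(X.T @ np.sign(X @ W) for X in images)
        U, _, Vt = np.linalg.svd(M, full_matrices=False)
        W = U @ Vt                                     # polar factor keeps W'W = I
    return W
```

    The polar-factor step is what distinguishes this from the greedy variant, which extracts one direction at a time and can stall in a local solution.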

  11. PD-L1 expression on neoplastic or stromal cells is respectively a poor or good prognostic factor for adult T-cell leukemia/lymphoma.

    PubMed

    Miyoshi, Hiroaki; Kiyasu, Junichi; Kato, Takeharu; Yoshida, Noriaki; Shimono, Joji; Yokoyama, Shintaro; Taniguchi, Hiroaki; Sasaki, Yuya; Kurita, Daisuke; Kawamoto, Keisuke; Kato, Koji; Imaizumi, Yoshitaka; Seto, Masao; Ohshima, Koichi

    2016-09-08

    Programmed cell death ligand 1 (PD-L1) is expressed on both tumor and tumor-infiltrating nonmalignant cells in lymphoid malignancies. The programmed cell death 1 (PD-1)/PD-L1 pathway suppresses host antitumor responses, although little is known about the significance of PD-1/PD-L1 expression in the tumor microenvironment. To investigate the clinicopathological impact of PD-L1 expression in adult T-cell leukemia/lymphoma (ATLL), we performed PD-L1 immunostaining in 135 ATLL biopsy samples. We observed 2 main groups: 1 had clear PD-L1 expression in lymphoma cells (nPD-L1(+), 7.4% of patients), and the other showed minimal expression in lymphoma cells (nPD-L1(-), 92.6%). Within the nPD-L1(-) group, 2 subsets emerged: the first displayed abundant PD-L1 expression in nonmalignant stromal cells of the tumor microenvironment (miPD-L1(+), 58.5%) and the second group did not express PD-L1 in any cell (PD-L1(-), 34.1%). nPD-L1(+) ATLL (median survival time [MST] 7.5 months, 95% CI [0.4-22.3]) had inferior overall survival (OS) compared with nPD-L1(-) ATLL (MST 14.5 months, 95% CI [10.1-20.0]) (P = .0085). Among nPD-L1(-) ATLL, miPD-L1(+) ATLL (MST 18.6 months, 95% CI [11.0-38.5]) showed superior OS compared with PD-L1(-) ATLL (MST 10.2 months, 95% CI [8.0-14.7]) (P = .0029). The expression of nPD-L1 and miPD-L1 maintained prognostic value for OS in multivariate analysis (P = .0322 and P = .0014, respectively). This is the first report describing the clinicopathological features and outcomes of PD-L1 expression in ATLL. More detailed studies will disclose clinical and biological significance of PD-L1 expression in ATLL. © 2016 by The American Society of Hematology.

  12. Permanence of diced cartilage, bone dust and diced cartilage/bone dust mixture in experimental design in twelve weeks.

    PubMed

    Islamoglu, Kemal; Dikici, Mustafa Bahadir; Ozgentas, Halil Ege

    2006-09-01

    Bone dust and diced cartilage are used for contour restoration because of their minimal donor site morbidity. The purpose of this study was to investigate the permanence of bone dust, diced cartilage and a bone dust/diced cartilage mixture in rabbits over 12 weeks. New Zealand white rabbits were used for this study. There were three groups in the study: Group I: 1 mL bone dust. Group II: 1 mL diced cartilage. Group III: 0.5 mL bone dust + 0.5 mL diced cartilage mixture. The grafts were placed into the subcutaneous tissue of rabbits and removed 12 weeks later. The mean volumes were 0.23 +/- 0.08 mL in group I, 0.60 +/- 0.12 mL in group II and 0.36 +/- 0.10 mL in group III. The differences between groups were found to be statistically significant. In conclusion, diced cartilage was found to be more reliable than bone dust in terms of preserving its volume over a long period in this study.

  13. Total antioxidant activity and antimicrobial potency of the essential oil and oleoresin of Zingiber officinale Roscoe

    PubMed Central

    Bellik, Yuva

    2014-01-01

    Objective To compare the in vitro antioxidant and antimicrobial activities of the essential oil and oleoresin of Zingiber officinale Roscoe. Methods The antioxidant activity was evaluated based on the ability of the ginger extracts to scavenge the ABTS°+ free radical. The antimicrobial activity was studied by the disc diffusion method, and the minimal inhibitory concentration was determined using the agar incorporation method. Results Ginger extracts exerted significant antioxidant activity in a dose-dependent manner. In general, the oleoresin showed higher antioxidant activity [IC50=(1.820±0.034) mg/mL] when compared to the essential oil [IC50=(110.14±8.44) mg/mL]. In terms of antimicrobial activity, the ginger compounds were more effective against Escherichia coli, Bacillus subtilis and Staphylococcus aureus, and less effective against Bacillus cereus. Aspergillus niger was the least sensitive and Penicillium spp. the most sensitive to the ginger extracts; the minimal inhibitory concentrations of the oleoresin and essential oil were 2 mg/mL and 869.2 mg/mL, respectively. Moreover, the studied extracts showed important antifungal activity against Candida albicans. Conclusions The study confirms the wide application of ginger oleoresin and essential oil in the treatment of many bacterial and fungal diseases.

  14. Bioactive compounds isolated from submerged fermentations of the Chilean fungus Stereum rameale.

    PubMed

    Aqueveque, Pedro; Céspedes, Carlos Leonardo; Becerra, José; Dávila, Marcelo; Sterner, Olov

    2015-01-01

    Liquid fermentations of the fungus Stereum rameale (N° 2511) yielded extracts with antibacterial activity. The antibacterial activity reached its peak after 216 h of stirring. Bioassay-guided fractionation methods were employed for the isolation of the bioactive metabolites. Three known compounds were identified: MS-3 (1), vibralactone (2) and vibralactone B (3). The three compounds showed antibacterial activity as a function of their concentration. Minimal bactericidal concentrations (MBC) of compound 1 against Gram-positive bacteria were as follows: Bacillus cereus (50 μg/mL), Bacillus subtilis (10 μg/mL) and Staphylococcus aureus (100 μg/mL). Compounds 2 and 3 were active only against Gram-negative bacteria. The MBC of compound 2 against Escherichia coli was 200 μg/mL. Compound 3 inhibited significantly the growth of E. coli and Pseudomonas aeruginosa, with MBC values of 50 and 100 μg/mL, respectively.

  15. Dynamics and Control of a Minimally Actuated Biomimetic Vehicle: Part 1 - Aerodynamic Model (Postprint)

    DTIC Science & Technology

    2009-08-01

    The downstroke centre-of-pressure vector for the left wing, r_cp,B^LWD, is expressed in terms of x_cp^WP, y_cp^WP, the stroke angle φ_LW, the angle of attack α, the width w, and the offsets Δx_B^L and Δz_B^L. The moments associated with each wing and stroke are given by M_B^RWU = r_cp,B^RWU × F_B^RWU, M_B^RWD = r_cp,B^RWD × F_B^RWD, M_B^LWU = r_cp,B^LWU × F_B^LWU, M_B^LWD = r_cp,B^LWD × F_B^LWD (14).

  16. Statistical Physics for Adaptive Distributed Control

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2005-01-01

    A viewgraph presentation on statistical physics for distributed adaptive control is shown. The topics include: 1) The Golden Rule; 2) Advantages; 3) Roadmap; 4) What is Distributed Control? 5) Review of Information Theory; 6) Iterative Distributed Control; 7) Minimizing L(q) Via Gradient Descent; and 8) Adaptive Distributed Control.

  17. Exploring local regularities for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Tian, Huaiwen; Qin, Shengfeng

    2016-11-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when two local regularities, L-MSDA and L-MSDSM, are combined together, they can produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.

  18. Minimal Left-Right Symmetric Dark Matter.

    PubMed

    Heeck, Julian; Patra, Sudhanwa

    2015-09-18

    We show that left-right symmetric models can easily accommodate stable TeV-scale dark matter particles without the need for an ad hoc stabilizing symmetry. The stability of a newly introduced multiplet either arises accidentally as in the minimal dark matter framework or comes courtesy of the remaining unbroken Z_{2} subgroup of B-L. Only one new parameter is introduced: the mass of the new multiplet. As minimal examples, we study left-right fermion triplets and quintuplets and show that they can form viable two-component dark matter. This approach is, in particular, valid for SU(2)×SU(2)×U(1) models that explain the recent diboson excess at ATLAS in terms of a new charged gauge boson of mass 2 TeV.

  19. Clinical characteristics of occult macular dystrophy in a family with a mutation of the RP1L1 gene.

    PubMed

    Tsunoda, Kazushige; Usui, Tomoaki; Hatase, Tetsuhisa; Yamai, Satoshi; Fujinami, Kaoru; Hanazono, Gen; Shinoda, Kei; Ohde, Hisao; Akahori, Masakazu; Iwata, Takeshi; Miyake, Yozo

    2012-06-01

    To report the clinical characteristics of occult macular dystrophy (OMD) in members of one family with a mutation of the RP1L1 gene. Fourteen members with a p.Arg45Trp mutation in the RP1L1 gene were examined. The visual acuity, visual fields, fundus photographs, fluorescein angiograms, full-field electroretinograms, multifocal electroretinograms, and optical coherence tomographic images were examined. The clinical symptoms and signs and course of the disease were documented. All the members with the RP1L1 mutation except one woman had ocular symptoms and signs of OMD. The fundus was normal in all the patients during the entire follow-up period except in one patient with diabetic retinopathy. Optical coherence tomography detected the early morphologic abnormalities both in the photoreceptor inner/outer segment line and cone outer segment tip line. However, the multifocal electroretinograms were more reliable in detecting minimal macular dysfunction at an early stage of OMD. The abnormalities in the multifocal electroretinograms and optical coherence tomography observed in the OMD patients of different durations strongly support the contribution of RP1L1 mutation to the presence of this disease.

  20. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    NASA Astrophysics Data System (ADS)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
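
    The primal problem (minimal investment risk under a budget constraint) has a closed-form Lagrange-multiplier solution, which the following sketch illustrates for the paper's special case of identical variances; the covariance matrix and budget below are assumed for illustration.

```python
import numpy as np

def min_risk_weights(C, budget):
    # Minimize (1/2) w^T C w subject to sum(w) = budget. The Lagrange
    # condition C w = lam * 1 gives w = lam * C^{-1} 1, with lam fixed
    # by the budget constraint.
    ones = np.ones(C.shape[0])
    x = np.linalg.solve(C, ones)   # C^{-1} 1
    lam = budget / (ones @ x)
    return lam * x

C = np.eye(4)                      # identical unit variances, uncorrelated
w = min_risk_weights(C, budget=4.0)
print(w)  # → [1. 1. 1. 1.]  (identical variances give equal weights)
```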

  1. Modeling of tool path for the CNC sheet cutting machines

    NASA Astrophysics Data System (ADS)

    Petunin, Aleksandr A.

    2015-11-01

    In this paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that the optimization tasks can be interpreted as a discrete optimization problem (a generalized traveling salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. To solve the GTSP we propose using the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.
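
    The traveling-salesman interpretation can be illustrated on a tiny instance: visiting a set of pierce points with minimal tool travel is a closed tour. The brute-force sketch below, with an assumed Manhattan travel metric and made-up points, is only an illustration; the paper's approach (Chentsov's megalopolis model with dynamic programming) targets much larger constrained instances.

```python
from itertools import permutations

def tour_length(tour):
    # Total Manhattan travel of the tool along a closed tour of pierce points.
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
               for a, b in zip(tour, tour[1:] + tour[:1]))

def shortest_tour(points):
    # Brute force over visiting orders; feasible only for a handful of contours.
    start, rest = points[0], list(points[1:])
    return min(([start] + list(p) for p in permutations(rest)),
               key=tour_length)

pts = [(0, 0), (0, 2), (2, 2), (2, 0)]
print(tour_length(shortest_tour(pts)))  # → 8 (the perimeter of the square)
```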

  2. Working with the Disadvantaged Student in Vocational Education.

    ERIC Educational Resources Information Center

    DeKalb SERVE Satellite Center, Stone Mountain, GA.

    This handbook provides vocational educators at the secondary and postsecondary levels with approaches for working with minimally disadvantaged students enrolled in their regular programs. Chapter 1 focuses on the disadvantaged student and considers such problems as perceptual difficulties, resistance to authority, parental influence, insecurity…

  3. Greedy algorithms in disordered systems

    NASA Astrophysics Data System (ADS)

    Duxbury, P. M.; Dobrin, R.

    1999-08-01

    We discuss search, minimal path and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly, and are related to extremal dynamics in physics. Minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and invasion percolation (without trapping) respectively.
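
    Dijkstra's minimal-cost path algorithm mentioned above can be sketched with a binary heap; the toy graph is an assumed example.

```python
import heapq

def dijkstra(graph, src):
    # graph: {node: [(neighbor, edge_cost), ...]}; returns minimal path
    # costs from src, greedily settling the cheapest unsettled node first.
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, node already settled more cheaply
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 2.0)], "c": []}
print(dijkstra(g, "a"))  # → {'a': 0.0, 'b': 1.0, 'c': 3.0}
```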

  4. What's the Problem? L2 Learners' Use of the L1 during Consciousness-Raising, Form-Focused Tasks

    ERIC Educational Resources Information Center

    Scott, Virginia M.; de la Fuente, Maria Jose

    2008-01-01

    This qualitative study provides preliminary insight into the role of the first language (L1) when pairs of intermediate-level college learners of French and Spanish are engaged in consciousness-raising, form-focused grammar tasks. Using conversation analysis of audiotaped interactions and stimulated recall sessions, we explored the ways students…

  5. Direct Density Functional Energy Minimization using a Tetrahedral Finite Element Grid

    NASA Astrophysics Data System (ADS)

    Vaught, A.; Schmidt, K. E.; Chizmeshya, A. V. G.

    1998-03-01

    We describe an O(N) (N proportional to volume) technique for solving electronic structure problems using the finite element method (FEM). A real-space tetrahedral grid is used as a basis to represent the electronic density of a free or periodic system, and Poisson's equation is solved as a boundary value problem. Nuclear cusps are treated using a local grid consisting of radial elements. These features facilitate the implementation of complicated energy functionals and permit a direct (constrained) energy minimization with respect to the density. We demonstrate the usefulness of the scheme by calculating the binding trends and polarizabilities of a number of atoms and molecules using a number of recently proposed non-local, orbital-free kinetic energy functionals^1,2. Scaling behavior, computational efficiency and the generalization to band structure will also be discussed. ^1 P. Garcia-Gonzalez, J. E. Alvarellos and E. Chacon, Phys. Rev. B 54, 1897 (1996). ^2 A. J. Thakkar, Phys. Rev. B 46, 6920 (1992).

  6. Minimal clinically important improvement (MCII) and patient-acceptable symptom state (PASS) in total hip arthroplasty (THA) patients 1 year postoperatively

    PubMed Central

    Paulsen, Aksel

    2014-01-01

    Background and purpose The increased use of patient-reported outcomes (PROs) in orthopedics requires data on estimated minimal clinically important improvements (MCIIs) and patient-acceptable symptom states (PASSs). We wanted to find cut-points corresponding to minimal clinically important PRO change score and the acceptable postoperative PRO score, by estimating MCII and PASS 1 year after total hip arthroplasty (THA) for the Hip Dysfunction and Osteoarthritis Outcome Score (HOOS) and the EQ-5D. Patients and methods THA patients from 16 different departments received 2 PROs and additional questions preoperatively and 1 year postoperatively. The PROs included were the HOOS subscales pain (HOOS Pain), physical function short form (HOOS-PS), and hip-related quality of life (HOOS QoL), and the EQ-5D. MCII and PASS were estimated using multiple anchor-based approaches. Results Of 1,837 patients available, 1,335 answered the preoperative PROs, and 1,288 of them answered the 1-year follow-up. The MCIIs and PASSs were estimated to be: 24 and 91 (HOOS Pain), 23 and 88 (HOOS-PS), 17 and 83 (HOOS QoL), 0.31 and 0.92 (EQ-5D Index), and 23 and 85 (EQ-VAS), respectively. MCIIs corresponded to a 38–55% improvement from mean baseline PRO score and PASSs corresponded to absolute follow-up scores of 57–91% of the maximum score in THA patients 1 year after surgery. Interpretation This study improves the interpretability of PRO scores. The different estimation approaches presented may serve as a guide for future MCII and PASS estimations in other contexts. The cutoff points may serve as reference values in registry settings. PMID:24286564

  7. The design of L1-norm visco-acoustic wavefield extrapolators

    NASA Astrophysics Data System (ADS)

    Salam, Syed Abdul; Mousa, Wail A.

    2018-04-01

    Explicit depth frequency-space (f - x) prestack imaging is an attractive mechanism for seismic imaging. To date, the main focus of this method has been migration assuming an acoustic medium, and very little work has assumed visco-acoustic media. Real seismic data usually suffer from attenuation and dispersion effects. To compensate for attenuation in a visco-acoustic medium, new operators are required. We propose using the L1-norm minimization technique to design visco-acoustic f - x extrapolators. To show the accuracy and compensation properties of the operators, prestack depth migration is performed on the challenging Marmousi model for both acoustic and visco-acoustic datasets. The final migrated images show that the proposed L1-norm extrapolation is practically stable and improves the resolution of the images.
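
    L1-norm design problems of this kind are commonly attacked with iteratively reweighted least squares (IRLS), replacing the L1 objective by a sequence of weighted L2 problems. A minimal sketch on assumed toy data (a constant model with one outlier), not the authors' extrapolator-design code:

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-6):
    # Approximate argmin_x ||A x - b||_1 by repeatedly solving the weighted
    # least-squares problem (A^T W A) x = A^T W b with W = diag(1 / (|r| + eps)),
    # where r = A x - b is the residual of the previous iterate.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / (np.abs(r) + eps)
        Aw = A * w[:, None]          # rows of A scaled by the weights
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x

A = np.ones((4, 1))                  # fit a single constant to the data
b = np.array([1.0, 1.0, 1.0, 10.0])  # one gross outlier
x = irls_l1(A, b)
print(round(float(x[0]), 2))  # → 1.0 (the L1 fit; an L2 fit gives the mean 3.25)
```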

  8. Solar radiation pressure application for orbital motion stabilization near the Sun-Earth collinear libration point

    NASA Astrophysics Data System (ADS)

    Polyakhova, Elena; Shmyrov, Alexander; Shmyrov, Vasily

    2018-05-01

    Orbital maneuvering in a neighborhood of the collinear libration point L1 of the Sun-Earth system has specific properties, primarily associated with the instability of L1. For a long stay in this region of space, the problem of stabilizing the orbital motion must be solved. Numerical experiments have shown that stabilization requires a very small control influence in comparison with the gravitational forces. On the other hand, the stabilization time is quite long (months, and possibly years). This makes it highly desirable to use solar pressure forces. In this paper we illustrate, with a model example, the possibilities of a solar sail for solving the stabilization problem in a neighborhood of L1.

  9. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems.

    PubMed

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-12-20

    Despite the abundant research on energy-efficient rate scheduling polices in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices at transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamical data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm.

  10. Optimal Rate Schedules with Data Sharing in Energy Harvesting Communication Systems

    PubMed Central

    Wu, Weiwei; Li, Huafan; Shan, Feng; Zhao, Yingchao

    2017-01-01

    Despite the abundant research on energy-efficient rate scheduling polices in energy harvesting communication systems, few works have exploited data sharing among multiple applications to further enhance the energy utilization efficiency, considering that the harvested energy from environments is limited and unstable. In this paper, to overcome the energy shortage of wireless devices at transmitting data to a platform running multiple applications/requesters, we design rate scheduling policies to respond to data requests as soon as possible by encouraging data sharing among data requests and reducing the redundancy. We formulate the problem as a transmission completion time minimization problem under constraints of dynamical data requests and energy arrivals. We develop offline and online algorithms to solve this problem. For the offline setting, we discover the relationship between two problems: the completion time minimization problem and the energy consumption minimization problem with a given completion time. We first derive the optimal algorithm for the min-energy problem and then adopt it as a building block to compute the optimal solution for the min-completion-time problem. For the online setting without future information, we develop an event-driven online algorithm to complete the transmission as soon as possible. Simulation results validate the efficiency of the proposed algorithm. PMID:29261135

  11. Chemical composition, toxicity and larvicidal and antifungal activities of Persea americana (avocado) seed extracts.

    PubMed

    Leite, João Jaime Giffoni; Brito, Erika Helena Salles; Cordeiro, Rossana Aguiar; Brilhante, Raimunda Sâmia Nogueira; Sidrim, José Júlio Costa; Bertini, Luciana Medeiros; Morais, Selene Maia de; Rocha, Marcos Fábio Gadelha

    2009-01-01

    The present study had the aim of testing the hexane and methanol extracts of avocado seeds, in order to determine their toxicity towards Artemia salina, evaluate their larvicidal activity towards Aedes aegypti and investigate their in vitro antifungal potential against strains of Candida spp., Cryptococcus neoformans and Malassezia pachydermatis through the microdilution technique. In toxicity tests on Artemia salina, the hexane and methanol extracts from avocado seeds showed LC50 values of 2.37 and 24.13 mg mL-1 respectively. Against Aedes aegypti larvae, the LC50 results obtained were 16.7 mg mL-1 for hexane extract and 8.87 mg mL-1 for methanol extract from avocado seeds. The extracts tested were also active against all the yeast strains tested in vitro, with differing results such that the minimum inhibitory concentration of the hexane extract ranged from 0.625 to 1.25 mg mL-1, from 0.312 to 0.625 mg mL-1 and from 0.031 to 0.625 mg mL-1, for the strains of Candida spp., Cryptococcus neoformans and Malassezia pachydermatis, respectively. The minimal inhibitory concentration for the methanol extract ranged from 0.125 to 0.625 mg mL-1, from 0.08 to 0.156 mg mL-1 and from 0.312 to 0.625 mg mL-1, for the strains of Candida spp., Cryptococcus neoformans and Malassezia pachydermatis, respectively.

  12. Immunological evaluation of colonic delivered Hepatitis B surface antigen loaded TLR-4 agonist modified solid fat nanoparticles.

    PubMed

    Sahu, Kantrol Kumar; Pandey, Ravi Shankar

    2016-10-01

    Hepatitis B is one of the leading liver diseases and remains a major global health problem. Currently available vaccines provide protection but often result in weak/minimal mucosal immunity. Thus the present study is devoted to the development and in-vivo exploration of colonically delivered biomimetic nanoparticles capable of enhancing both humoral and cellular immune responses. In the present work, Hepatitis B surface antigen (HBsAg)-entrapped nanoparticles containing Monophosphoryl lipid A (MPLA) (HB+L-NP) were prepared by the solvent evaporation method and characterized for particle size (~210 nm), shape, zeta potential (-24 mV±0.68), entrapment efficiency (58.45±1.68%), in-vitro release and antigen integrity. A dose escalation study was done to confirm the prophylactic immune response following defined doses of the prepared nanoparticulate formulations with or without MPLA. Intramuscularly administered alum-based marketed HBsAg (Genevac B) was used as the standard (10μg) and was able to induce a significant systemic (IgG) but remarkably low mucosal immune (IgA) response. Notably, HB+L-NP (0.5ml-10μg) induced strong systemic and robust mucosal immunity (510 and 470 mIU/ml respectively, p<0.001), of which the mucosal response was the more significant due to the involvement of the Common Mucosal Immune System (CMIS). Likewise, a significant cellular immune response was elicited by HB+L-NP through T-cell activation (mixed Th1 and Th2), as confirmed by significantly increased cytokine levels (IL-2 and Interferon-γ) in spleen homogenates. This study supports that delivery of HBsAg to the colon may open a new vista in designing oral vaccines, the latter being one of the most accepted routes for potential vaccines in the future. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Tris-EDTA significantly enhances antibiotic efficacy against multidrug-resistant Pseudomonas aeruginosa in vitro.

    PubMed

    Buckley, Laura M; McEwan, Neil A; Nuttall, Tim

    2013-10-01

    Multidrug-resistant Pseudomonas aeruginosa commonly complicates chronic bacterial otitis in dogs. The aim of this in vitro study was to determine the effect of ethylenediaminetetraacetic acid-tromethamine (Tris-EDTA) on the minimal bactericidal concentrations (MBCs) and minimal inhibitory concentrations (MICs) of marbofloxacin and gentamicin for multidrug-resistant P. aeruginosa isolates from cases of canine otitis. Eleven isolates were identified as multidrug resistant on disc diffusion; 10 were resistant to marbofloxacin and two were resistant to gentamicin. Isolates were incubated for 90 min with each antibiotic alone and in combination with Tris-EDTA at concentrations of 0.075 μg/mL to 5 mg/mL for marbofloxacin, 0.001 μg/mL to 10 mg/mL for gentamicin and 17.8:4.7 to 0.14:0.04 mg/mL for Tris-EDTA. Positive and negative controls were included. Aliquots of each antibiotic and/or Tris-EDTA concentration were subsequently transferred to sheep blood agar to determine the MBCs, and tryptone soy broth was added to the remaining suspensions to determine the MICs. Tris-EDTA alone was bacteriostatic but not bactericidal at any concentration. The addition of Tris-EDTA significantly reduced the median MBC (from 625 to 468.8 μg/mL; P < 0.001) and MIC (from 29.3 to 2.4 μg/mL; P = 0.008) of marbofloxacin, and the median MBC (from 625 to 39.1 μg/mL) and MIC (from 19.5 to 1.2 μg/mL) of gentamicin (both P < 0.001). Tris-EDTA significantly reduced the MBCs and MICs of marbofloxacin and gentamicin for multidrug-resistant P. aeruginosa in vitro. This may be of use to clinicians managing these infections in dogs. © 2013 ESVD and ACVD.

  14. Prediction of minimal residual viremia in HCV type 1 infected patients receiving interferon-based therapy.

    PubMed

    Knop, Viola; Teuber, Gerlinde; Klinker, Hartwig; Möller, Bernd; Rasenack, Jens; Hinrichsen, Holger; Gerlach, Tilman; Spengler, Ulrich; Buggisch, Peter; Neumann, Konrad; Sarrazin, Christoph; Zeuzem, Stefan; Berg, Thomas

    2013-01-01

    Complete suppression of viral replication is crucial in chronic HCV treatment in order to prevent relapse and resistance development. We wanted to find out which factors influence the period from being already HCV RNA negative by bDNA assay (< 615 IU/mL) to become undetectable by the more sensitive TMA test (< 5.3 IU/mL). Evaluated were 433 HCV type 1-infected patients. All of them received 1.5 ug/kg Peg-IFNα-2b plus ribavirin for 18-48 weeks. bDNA was performed weekly during the first 8 weeks and thereafter at weeks 12, 24, and 48. Patients who became bDNA undetectable were additionally analysed by TMA. Of the 309 patients with on-treatment response (< 615 IU/mL), 289 also reached undetectable HCV RNA levels by TMA. Multivariate analysis revealed that viremia ≤ 400,000 IU/mL (p = 0.001), fast initial virologic decline (p = 0.004) and absence of fibrosis (p = 0.035) were independent predictors of an accelerated on-treatment response by TMA assay in already bDNA negative patients. bDNA negative patients becoming HCV RNA undetectable by TMA within the following 3 weeks had a frequency of relapse of 21%, whereas those showing TMA negativity after 3 weeks relapsed in 38% (p = 0.001). In RVR patients (bDNA < 615 IU/mL at week 4) the corresponding relapse rates were 15.3% vs. 37.5%, respectively (p = 0.003). Early viral kinetics, baseline viremia and fibrosis stage are important tools to predict persistent minimal viremia during interferon-based therapy. The data have implications for designing a more refined treatment strategy in HCV infection, even in the setting of protease inhibitor-based triple treatment.

  15. Optimal transfers between libration-point orbits in the elliptic restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Hiday, Lisa Ann

    1992-09-01

    A strategy is formulated to design optimal impulsive transfers between three-dimensional libration-point orbits in the vicinity of the interior L(1) libration point of the Sun-Earth/Moon barycenter system. Two methods of constructing nominal transfers, for which the fuel cost is to be minimized, are developed; both inferior and superior transfers between two halo orbits are considered. The necessary conditions for an optimal transfer trajectory are stated in terms of the primer vector. The adjoint equation relating reference and perturbed trajectories in this formulation of the elliptic restricted three-body problem is shown to be distinctly different from that obtained in the analysis of trajectories in the two-body problem. Criteria are established whereby the cost on a nominal transfer can be improved by the addition of an interior impulse or by the implementation of coastal arcs in the initial and final orbits. The necessary conditions for the local optimality of a time-fixed transfer trajectory possessing additional impulses are satisfied by requiring continuity of the Hamiltonian and the derivative of the primer vector at all interior impulses. The optimality of a time-free transfer containing coastal arcs is surmised by examination of the slopes at the endpoints of a plot of the magnitude of the primer vector over the duration of the transfer path. If the initial and final slopes of the primer magnitude are zero, the transfer trajectory is optimal; otherwise, the execution of coasts is warranted. The position and timing of each interior impulse applied to a time-fixed transfer as well as the direction and length of coastal periods implemented on a time-free transfer are specified by the unconstrained minimization of the appropriate variation in cost utilizing a multivariable search technique. 
Although optimal solutions in some instances are elusive, the time-fixed and time-free optimization algorithms prove to be very successful in diminishing costs on nominal transfer trajectories. The inclusion of coastal arcs on time-free superior and inferior transfers results in significant modification of the transfer time of flight caused by shifts in departure and arrival locations on the halo orbits.

  16. Intracellular multiplication of Legionnaires' disease bacteria (Legionella pneumophila) in human monocytes is reversibly inhibited by erythromycin and rifampin.

    PubMed Central

    Horwitz, M A; Silverstein, S C

    1983-01-01

    We have previously reported that virulent egg yolk-grown Legionella pneumophila, Philadelphia 1 strain, multiplies intracellularly in human blood monocytes and only intracellularly under tissue culture conditions. In this paper, we have investigated the effect of erythromycin and rifampin on L. pneumophila-monocyte interaction in vitro; erythromycin and rifampin are currently the drugs of choice for the treatment of Legionnaires' disease. The intracellular multiplication of L. pneumophila is inhibited by erythromycin and rifampin, as measured by colony-forming units, whether the antibiotics are added just before or just after infection of monocytes with L. pneumophila, or 2 d after infection when L. pneumophila is in the logarithmic phase of growth in monocytes. Intracellular multiplication of L. pneumophila is inhibited by 1.25 microgram/ml but not less than or equal to 0.125 microgram/ml erythromycin and 0.01 microgram/ml but not less than or equal to 0.001 microgram/ml rifampin. These concentrations of antibiotics are comparable to those that inhibit extracellular multiplication of L. pneumophila under cell-free conditions in artificial medium; the minimal inhibitory concentration is 0.37 microgram/ml for erythromycin and 0.002 microgram/ml for rifampin. Multiplication of L. pneumophila in the logarithmic phase of growth in monocytes is inhibited within 1 h of the addition of antibiotics. Intracellular bacteria inhibited from multiplying by antibiotics are not killed. By electron microscopy, the bacteria appear intact within membrane-bound vacuoles, studded with ribosomelike structures. L. pneumophila multiplying extracellularly on artificial medium is killed readily by relatively low concentrations of erythromycin and rifampin; the minimal bactericidal concentration is 1 microgram/ml for erythromycin and 0.009 microgram/ml for rifampin. In contrast, L. 
pneumophila multiplying intracellularly is resistant to killing by these concentrations of erythromycin and rifampin or by concentrations equal to or greater than peak serum levels in humans. Extracellular L. pneumophila in stationary phase is also resistant to killing by erythromycin and rifampin. These findings, taken together with our previous work, indicate that, in vivo, L. pneumophila is resistant to killing by erythromycin and rifampin. Inhibition of L. pneumophila multiplication in monocytes by antibiotics is reversible; when the antibiotics are removed from infected monocyte cultures after 2 d, L. pneumophila resumes multiplication. This study indicates that patients with Legionnaires' disease under treatment with erythromycin and rifampin require host defenses to eliminate L. pneumophila, and that inadequate host defenses may result in relapse after cessation of therapy. PMID:6848556

  17. Gold nanoparticles functionalized with a fragment of the neural cell adhesion molecule L1 stimulate L1-mediated functions

    NASA Astrophysics Data System (ADS)

    Schulz, Florian; Lutz, David; Rusche, Norman; Bastús, Neus G.; Stieben, Martin; Höltig, Michael; Grüner, Florian; Weller, Horst; Schachner, Melitta; Vossmeyer, Tobias; Loers, Gabriele

    2013-10-01

    The neural cell adhesion molecule L1 is involved in nervous system development and promotes regeneration in animal models of acute and chronic injury of the adult nervous system. To translate these conducive functions into therapeutic approaches, a 22-mer peptide that encompasses a minimal and functional L1 sequence of the third fibronectin type III domain of murine L1 was identified and conjugated to gold nanoparticles (AuNPs) to obtain constructs that interact homophilically with the extracellular domain of L1 and trigger the cognate beneficial L1-mediated functions. Covalent conjugation was achieved by reacting mixtures of two cysteine-terminated forms of this L1 peptide and thiolated poly(ethylene) glycol (PEG) ligands (~2.1 kDa) with citrate stabilized AuNPs of two different sizes (~14 and 40 nm in diameter). By varying the ratio of the L1 peptide-PEG mixtures, an optimized layer composition was achieved that resulted in the expected homophilic interaction of the AuNPs. These AuNPs were stable as tested over a time period of 30 days in artificial cerebrospinal fluid and interacted with the extracellular domain of L1 on neurons and Schwann cells, as could be shown by using cells from wild-type and L1-deficient mice. In vitro, the L1-derivatized particles promoted neurite outgrowth and survival of neurons from the central and peripheral nervous system and stimulated Schwann cell process formation and proliferation. These observations raise the hope that, in combination with other therapeutic approaches, L1 peptide-functionalized AuNPs may become a useful tool to ameliorate the deficits resulting from acute and chronic injuries of the mammalian nervous system.

  18. Does Self-Help Increase Rates of Help Seeking for Student Mental Health Problems by Minimizing Stigma as a Barrier?

    ERIC Educational Resources Information Center

    Levin, Michael E.; Krafft, Jennifer; Levin, Crissa

    2018-01-01

    Objective: This study examined whether self-help (books, websites, mobile apps) increases help seeking for mental health problems among college students by minimizing stigma as a barrier. Participants and Methods: A survey was conducted with 200 college students reporting elevated distress from February to April 2017. Results: Intentions to use…

  19. Fast Algorithms for Designing Unimodular Waveform(s) With Good Correlation Properties

    NASA Astrophysics Data System (ADS)

    Li, Yongzhe; Vorobyov, Sergiy A.

    2018-03-01

In this paper, we develop new fast and efficient algorithms for designing single/multiple unimodular waveforms/codes with good auto- and cross-correlation or weighted correlation properties, which are highly desired in radar and communication systems. The waveform design is based on minimizing the integrated sidelobe level (ISL) and weighted ISL (WISL) of the waveforms. Because the corresponding optimization problems can quickly grow to large scale as the code length and the number of waveforms increase, the main issue becomes the development of fast large-scale optimization techniques. A further difficulty is that these optimization problems are non-convex, while the required accuracy is high. Therefore, we formulate the ISL and WISL minimization problems as non-convex quartic optimization problems in the frequency domain, and then simplify them into quadratic problems by applying the majorization-minimization technique, one of the basic techniques for addressing large-scale and/or non-convex optimization problems. In designing our fast algorithms, we identify and exploit inherent algebraic structures in the objective functions to rewrite them into quartic forms and, in the case of WISL minimization, to derive an additional alternative quartic form that admits the quartic-quadratic transformation. Our algorithms are applicable to large-scale unimodular waveform design problems, as they are shown to have a lower or comparable computational burden (analyzed theoretically) and faster convergence (confirmed by comprehensive simulations) than the state-of-the-art algorithms. In addition, the waveforms designed by our algorithms exhibit better correlation properties than their counterparts.
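The ISL criterion minimized in this record can be computed directly from a sequence's aperiodic autocorrelation. The following is a minimal sketch of that computation only (not the paper's MM algorithm), using a hypothetical random-phase unimodular sequence:

```python
import numpy as np

def isl(x):
    """Integrated sidelobe level: sum of squared magnitudes of the
    aperiodic autocorrelation of x at all nonzero lags."""
    n = len(x)
    r = np.correlate(x, x, mode="full")  # lags -(n-1) .. (n-1)
    sidelobes = np.delete(r, n - 1)      # drop the zero-lag mainlobe
    return float(np.sum(np.abs(sidelobes) ** 2))

# A unimodular (constant-modulus) sequence: random phases on the unit circle
rng = np.random.default_rng(0)
x = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, 64))
print(isl(x))
```

MM-type designs such as the ones described above iteratively replace this quartic objective with a simpler surrogate whose minimizer keeps each |x[k]| = 1, driving the printed ISL value down across iterations.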

  20. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

A distributed query processing strategy, a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is to generate efficient distributed query plans that involve fewer sites for processing a query. In distributed relational databases, the number of possible query plans increases exponentially with the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has previously been addressed using a single-objective genetic algorithm, where the objective is to minimize the total query processing cost, comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, the DQPG problem is formulated and solved as a bi-objective optimization problem with two objectives: minimize total LPC and minimize total CC. These objectives are simultaneously optimized using the multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single-objective genetic algorithm shows that the former performs better and converges quickly towards optimal solutions for the observed crossover and mutation probabilities.
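The core of any NSGA-II-style bi-objective formulation is Pareto dominance over the two cost components. As a minimal sketch (not the paper's algorithm; the (LPC, CC) values below are invented for illustration), the rank-1 front that NSGA-II's non-dominated sort would select can be computed as:

```python
def dominates(a, b):
    """Plan a dominates plan b if a is no worse in both objectives
    (LPC, CC) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(plans):
    """Return the non-dominated plans, i.e. the first front of a
    non-dominated sort over (LPC, CC) pairs."""
    return [p for p in plans
            if not any(dominates(q, p) for q in plans if q is not p)]

# Hypothetical (LPC, CC) costs for four candidate query plans
plans = [(10, 40), (15, 30), (20, 35), (12, 50)]
print(pareto_front(plans))  # → [(10, 40), (15, 30)]
```

NSGA-II then applies crossover and mutation to the population while ranking offspring by front membership and crowding distance, so the search converges toward this trade-off front rather than a single weighted-sum optimum.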
