Sample records for conventional dimensional regularization

  1. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. However, such an assumption may be invalid for some high-dimensional or sparse data because of the curse of dimensionality, which negatively affects the performance of multiple kernel learning. In addition, some models may be ill-posed if the rank of the matrices in their objective functions is not sufficiently high. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Unlike the conventional convex relaxation technique, the proposed algorithm directly uses a binary search and an alternating optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method in supervised, unsupervised, and semisupervised scenarios.

  2. One-loop corrections to light cone wave functions: The dipole picture DIS cross section

    NASA Astrophysics Data System (ADS)

    Hänninen, H.; Lappi, T.; Paatelainen, R.

    2018-06-01

    We develop methods to perform loop calculations in light cone perturbation theory using a helicity basis, refining the method introduced in our earlier work. In particular, this includes implementing a consistent way to contract the four-dimensional tensor structures from the helicity vectors with the d-dimensional tensors arising from loop integrals, in a way that can be fully automated. We demonstrate this explicitly by calculating the one-loop correction to the virtual photon to quark-antiquark dipole light cone wave function. This allows us to calculate the deep inelastic scattering cross section in the dipole formalism to next-to-leading-order accuracy. Our results, obtained using the four-dimensional helicity scheme, agree with the recent calculation by Beuf using conventional dimensional regularization, confirming the regularization scheme independence of this cross section.

  3. Dimension-Factorized Range Migration Algorithm for Regularly Distributed Array Imaging

    PubMed Central

    Guo, Qijia; Wang, Jie; Chang, Tianying

    2017-01-01

    The two-dimensional planar MIMO array is a popular approach for millimeter wave imaging applications. As a promising practical alternative, sparse MIMO arrays have been devised to reduce the number of antenna elements and transmitting/receiving channels with predictable and acceptable loss in image quality. In this paper, a high-precision three-dimensional imaging algorithm is proposed for MIMO arrays of the regularly distributed type, especially the sparse varieties. Termed the Dimension-Factorized Range Migration Algorithm, the new imaging approach factorizes the conventional MIMO Range Migration Algorithm into multiple operations across the sparse dimensions. The thinner the sparse dimensions of the array, the more efficient the new algorithm will be. Advantages of the proposed approach are demonstrated by comparison with the conventional MIMO Range Migration Algorithm and its non-uniform fast Fourier transform based variant in terms of all the important characteristics, especially noise robustness. The computational cost is also analyzed to quantitatively evaluate the efficiency. PMID:29113083

  4. Solving the hypersingular boundary integral equation in three-dimensional acoustics using a regularization relationship.

    PubMed

    Yan, Zai You; Hung, Kin Chew; Zheng, Hui

    2003-05-01

    Regularization of the hypersingular integral in the normal derivative of the conventional Helmholtz integral equation through a double surface integral method, or regularization relationship, has been studied. By introducing the new concept of a discretized operator matrix, evaluation of the double surface integrals is reduced to calculating the product of two discretized operator matrices. Such a treatment greatly improves the computational efficiency. As the number of frequencies to be computed increases, the computational cost of solving the composite Helmholtz integral equation becomes comparable to that of solving the conventional Helmholtz integral equation. In this paper, the detailed formulation of the proposed regularization method is presented. The computational efficiency and accuracy of the regularization method are demonstrated for a general class of acoustic radiation and scattering problems. The radiation of a pulsating sphere, an oscillating sphere, and a rigid sphere insonified by a plane acoustic wave are solved using the new method with curvilinear quadrilateral isoparametric elements. The numerical results are found to converge rapidly to the corresponding analytical solutions as finer meshes are applied.
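
    The matrix reduction described in this abstract can be mimicked in a toy quadrature setting (our illustration, with hypothetical smooth kernels G and H on the interval [0, 1] standing in for the surface kernels): once the integrals are discretized with quadrature weights, evaluating the double integral amounts to multiplying two "discretized operator matrices".

```python
# Toy sketch (not the paper's Helmholtz formulation): the double integral
# I(x) = ∫∫ G(x,y) H(y,z) f(z) dz dy, discretized on n quadrature nodes,
# reduces to applying the product of two discretized operator matrices
# Gd[i][j] = G(x_i, y_j)*w and Hd[j][k] = H(y_j, z_k)*w to the vector f.
n = 50
w = 1.0 / n                                   # midpoint-rule weight on [0, 1]
nodes = [(i + 0.5) * w for i in range(n)]

def G(x, y): return x + y                     # hypothetical smooth kernel
def H(y, z): return y * z                     # hypothetical smooth kernel

f = [z ** 2 for z in nodes]

# Discretized operators (quadrature weight folded into each matrix).
Gd = [[G(x, y) * w for y in nodes] for x in nodes]
Hd = [[H(y, z) * w for z in nodes] for y in nodes]

def matvec(M, v):
    return [sum(r[j] * v[j] for j in range(len(v))) for r in M]

# Product of the two operator matrices replaces the double integration:
I_fast = matvec(Gd, matvec(Hd, f))

# Direct double quadrature for comparison.
I_direct = [sum(G(x, y) * H(y, z) * fz * w * w
                for y in nodes for z, fz in zip(nodes, f))
            for x in nodes]

err = max(abs(a - b) for a, b in zip(I_fast, I_direct))
print(err)  # identical up to floating-point rounding
```

    The two evaluations coincide mathematically; as the abstract notes, the practical savings grow when the discretized operator matrices are reused, e.g. as many frequencies are computed.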

  5. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
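
    As a minimal illustration of the two ingredients named above, Tikhonov regularization and a conjugate gradient core, here is a pure-Python sketch (our toy, not the reservoir code; the matrix A, data b, and regularization weight are hypothetical) that stabilizes a nearly rank-deficient least-squares problem:

```python
# Tikhonov regularization turns the ill-posed problem min ||Ax - b||^2 into
# the well-posed min ||Ax - b||^2 + reg*||x||^2, solved here by conjugate
# gradients on the normal equations (A^T A + reg*I) x = A^T b.

def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def cg(apply_A, b, iters=200, tol=1e-12):
    """Conjugate gradient for a symmetric positive-definite operator."""
    x = [0.0] * len(b)
    r = b[:]                       # residual of the zero initial guess
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = apply_A(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Nearly rank-deficient forward operator: the two columns are almost
# parallel, so the unregularized problem is badly conditioned.
A = [[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]]
b = [2.0, 2.0, 2.0]
reg = 1e-3

At = transpose(A)
rhs = matvec(At, b)
apply_N = lambda v: [ni + reg * vi
                     for ni, vi in zip(matvec(At, matvec(A, v)), v)]
x = cg(apply_N, rhs)
print(x)  # a stable solution close to [1, 1]
```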

  6. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2017-02-11

    This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.

  7. Optical characteristics of a one-dimensional photonic crystal with an additional regular layer

    NASA Astrophysics Data System (ADS)

    Tolmachev, V. A.; Baldycheva, A. V.; Krutkova, E. Yu.; Perova, T. S.; Berwick, K.

    2009-06-01

    In this paper, the forbidden Photonic Band Gaps (PBGs) of a one-dimensional Photonic Crystal (1D PC) with an additional regular layer t were investigated for a constant value of the lattice constant A at normal incidence of the light beam. The additional regular layer was formed on both sides of the high-refractive-index layer H. The gap map approach and the Transfer Matrix Method were used for numerical analysis of this structure. The limitation of the filling-fraction values caused by the presence of the t-layer was taken into account during calculations of the Stop-Band (SB) regions for the three-component PC. A red shift of the SBs was observed upon introduction of the t-layer into a conventional two-component 1D PC with an optical contrast of N = 3.42/1. The blue edge of the first PBG occupied an intermediate position between the blue edges of the SB regions of conventional PCs with different optical contrasts N. This provides the opportunity to tune the optical contrast of the PC by introducing the additional layer, rather than using a filler, as well as to fine-tune the SB edge. The influence of the number of periods m and the optical contrast N on the properties of the SBs was also investigated. The disappearance of PBGs in the gap map, including in the regions of high-order PBGs, was revealed at certain parameters of the additional layer.

  8. Exotic superfluidity and pairing phenomena in atomic Fermi gases in mixed dimensions.

    PubMed

    Zhang, Leifeng; Che, Yanming; Wang, Jibiao; Chen, Qijin

    2017-10-11

    Atomic Fermi gases have been an ideal platform for simulating conventional physical systems and engineering exotic ones, owing to their multiple tunable control parameters. Here we investigate the effects of mixed dimensionality on the superfluid and pairing phenomena of a two-component ultracold atomic Fermi gas with a short-range pairing interaction, in which one component is confined to a one-dimensional (1D) optical lattice while the other resides in a homogeneous 3D continuum. We study the phase diagram and the pseudogap phenomena throughout the entire BCS-BEC crossover, using a pairing fluctuation theory. We find that the effective dimensionality of the non-interacting lattice component can evolve from quasi-3D to quasi-1D, leading to strong Fermi surface mismatch. Upon pairing, the system becomes effectively quasi-two-dimensional in the BEC regime. The behavior of Tc bears similarity to that of a regular 3D population-imbalanced Fermi gas, but with a more drastic departure from the regular 3D balanced case, featuring both intermediate-temperature superfluidity and a possible pair-density-wave ground state. Unlike in a simple 1D optical lattice, Tc in the mixed dimensions has a constant BEC asymptote.

  9. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography

    PubMed Central

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-01-01

    Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d′) across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM. PMID:28626290

  10. Joint optimization of fluence field modulation and regularization in task-driven computed tomography

    NASA Astrophysics Data System (ADS)

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-03-01

    Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.

  11. An intelligent fault diagnosis method of rolling bearings based on regularized kernel Marginal Fisher analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Shi, Tielin; Xuan, Jianping

    2012-05-01

    Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a major challenge to extract optimal features that improve classification while simultaneously reducing the feature dimension. Kernel Marginal Fisher analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. To avoid the small-sample-size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. To directly extract nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing intra-class compactness and inter-class separability, by combining a traditional manifold learning algorithm with the Fisher criterion. The optimal low-dimensional features are thereby obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves fault classification performance and outperforms the other conventional approaches.

  12. Spatial resolution properties of motion-compensated tomographic image reconstruction methods.

    PubMed

    Chun, Se Young; Fessler, Jeffrey A

    2012-07-01

    Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.

  13. Three-dimensional HDlive imaging of an umbilical cord cyst.

    PubMed

    Inubashiri, Eisuke; Nishiyama, Naomi; Tatedo, Sayuri; Minami, Hiina; Saitou, Atushi; Watanabe, Yukio; Sugawara, Masaki

    2018-04-01

    Umbilical cord cysts (UCC) are a rare congenital malformation. Previous reports have suggested that second- and third-trimester UCC may be associated with other structural anomalies or chromosomal abnormalities. Therefore, high-quality imaging is clinically important for the antenatal diagnosis of UCC and for conducting a precise anatomical survey of intrauterine abnormalities. There have been few reports of antenatal diagnosis of UCC with conventional two- and three-dimensional ultrasonography. In this report, we demonstrate the novel visual depiction of UCC in utero with three-dimensional HDlive imaging, which helps substantially with prenatal diagnosis. A case with an abnormal placental mass at 16 weeks and 5 days of gestation was observed in detail using HDlive. HDlive revealed very realistic images of the intrauterine abnormality: the oval lesion was smooth, with regular contours and a homogeneous wall, at the site of cord insertion on the placenta. In addition, we confirmed the absence of umbilical cord, placental, and fetal structural anomalies. Here, we report a case in which HDlive may have provided clinically valuable information for the prenatal diagnosis of UCC and offered a potential advantage over conventional ultrasonography.

  14. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, Tikhonov regularization recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher-resolution models than conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data; the stacking section shows significant improvements with static corrections derived from the MTV traveltime tomography.
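
    The split-Bregman method the authors apply to the total variation subproblem can be sketched on a 1D toy signal (our illustration, not the tomography code; the signal and the parameters lam and mu are hypothetical): each iteration alternates a Tikhonov-like tridiagonal solve with a soft-thresholding (shrinkage) step on the gradient, which is exactly the quadratic-plus-L1 splitting described above.

```python
import math

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system (Thomas algorithm); sub[0], sup[-1] unused."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def grad(u):                         # forward difference D
    return [u[i + 1] - u[i] for i in range(len(u) - 1)]

def grad_t(v):                       # transpose of the forward difference
    n = len(v) + 1
    return [(v[i - 1] if i > 0 else 0.0) - (v[i] if i < n - 1 else 0.0)
            for i in range(n)]

def shrink(v, t):                    # soft thresholding (the L1 proximal step)
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in v]

def tv_denoise(f, lam, mu, iters):
    """Split-Bregman minimization of 0.5*||u - f||^2 + lam*TV(u)."""
    n = len(f)
    u, d, b = f[:], [0.0] * (n - 1), [0.0] * (n - 1)
    diag = [1.0 + (mu if i in (0, n - 1) else 2.0 * mu) for i in range(n)]
    sub = [-mu] * n
    sup = [-mu] * n
    for _ in range(iters):
        # Quadratic (Tikhonov-like) subproblem: (I + mu*D^T D) u = f + mu*D^T(d - b)
        rhs = [fi + mu * g for fi, g in
               zip(f, grad_t([di - bi for di, bi in zip(d, b)]))]
        u = thomas(sub, diag, sup, rhs)
        # L1 subproblem: shrinkage on the gradient, then Bregman update
        gu = grad(u)
        d = shrink([g + bi for g, bi in zip(gu, b)], lam / mu)
        b = [bi + g - di for bi, g, di in zip(b, gu, d)]
    return u

clean = [0.0] * 10 + [1.0] * 10
noisy = [c + 0.05 * math.sin(7.0 * i) for i, c in enumerate(clean)]
u = tv_denoise(noisy, lam=0.15, mu=1.0, iters=200)
print(max(abs(ui - ci) for ui, ci in zip(u, clean)))  # small residual
```

    The TV penalty flattens the oscillatory noise while keeping the step edge sharp, which is the behavior that motivates the MTV regularizer over a purely quadratic one.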

  15. Comparison of morphological and conventional edge detectors in medical imaging applications

    NASA Astrophysics Data System (ADS)

    Kaabi, Lotfi; Loloyan, Mansur; Huang, H. K.

    1991-06-01

    Recently, mathematical morphology has been used to develop efficient image analysis tools. This paper compares the performance of morphological and conventional edge detectors applied to radiological images. Two morphological edge detectors, the dilation residue, obtained by subtracting the original signal from its dilation by a small structuring element, and the blur-minimization edge detector, defined as the minimum of the erosion and dilation residues of the blurred image, are compared with the linear Laplacian and Sobel detectors and the nonlinear Roberts edge detector. Various structuring elements were used in this study: regular two-dimensional and three-dimensional. We used two criteria to classify edge detector performance: edge-point connectivity and sensitivity to noise. CT/MR and chest radiograph images were used as test data. The comparison shows that the blur-minimization edge detector with a rolling-ball-like structuring element outperforms the other standard linear and nonlinear edge detectors: it is less sensitive to noise and produces the most closed contours.
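
    The two morphological detectors being compared can be sketched in a few lines of pure Python (a hypothetical 8x8 toy image and a flat 3x3 structuring element of our choosing; the paper's rolling-ball element and radiological test data are not reproduced):

```python
def _neigh(img, i, j):
    """3x3 neighborhood of pixel (i, j), clipped at the image border."""
    return [img[a][b]
            for a in range(max(i - 1, 0), min(i + 2, len(img)))
            for b in range(max(j - 1, 0), min(j + 2, len(img[0])))]

def _apply(img, op):
    return [[op(_neigh(img, i, j)) for j in range(len(img[0]))]
            for i in range(len(img))]

def dilate(img): return _apply(img, max)              # grayscale dilation
def erode(img):  return _apply(img, min)              # grayscale erosion
def blur(img):   return _apply(img, lambda v: sum(v) / len(v))

def dilation_residue(img):
    """Edge strength as dilation minus the original image."""
    d = dilate(img)
    return [[d[i][j] - img[i][j] for j in range(len(img[0]))]
            for i in range(len(img))]

def blur_min_edge(img):
    """Minimum of the dilation and erosion residues of the blurred image."""
    g = blur(img)
    d, e = dilate(g), erode(g)
    return [[min(d[i][j] - g[i][j], g[i][j] - e[i][j])
             for j in range(len(img[0]))] for i in range(len(img))]

# Toy image: a dark left half and a bright right half.
img = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
edges = dilation_residue(img)
bm = blur_min_edge(img)
# Both detectors respond only near the vertical boundary between the halves.
```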

  16. Multimedia article. The keys to the new laparoscopic world Thumbs up! knot and Tornado knot.

    PubMed

    Uchida, K; Haruta, N; Okajima, M; Matsuda, M; Yamamoto, M

    2005-06-01

    Most laparoscopic surgeons feel some anxiety when performing intracorporeal knotting with conventional techniques [1, 2]. Two factors contribute to this anxiety. The first is the necessity of recognizing three dimensions on a two-dimensional monitor. The conventional intracorporeal knotting techniques make loops by twisting the thread with a second pair of forceps. This necessitates cooperative movement of both hands, with the added difficulties of depth perception. Regular touch confirmations reduce problems with depth perception. However, touch confirmation is more complicated in laparoscopic surgery than in laparotomy. The second problem is that tied loops can come loose and escape the instruments, especially with hard thread. This is not only stressful but also increases operation time.

  17. Alternative dimensional reduction via the density matrix

    NASA Astrophysics Data System (ADS)

    de Carvalho, C. A.; Cornwall, J. M.; da Silva, A. J.

    2001-07-01

    We give graphical rules, based on earlier work for the functional Schrödinger equation, for constructing the density matrix for scalar and gauge fields in equilibrium at finite temperature T. More useful is a dimensionally reduced effective action (DREA) constructed from the density matrix by further functional integration over the arguments of the density matrix coupled to a source. The DREA is an effective action in one less dimension which may be computed order by order in perturbation theory or by dressed-loop expansions; it encodes all thermal matrix elements. We term the DREA procedure alternative dimensional reduction, to distinguish it from the conventional dimensionally reduced field theory (DRFT), which applies at infinite T. The DREA is useful because it gives a dimensionally reduced theory usable at any T, including infinity, where it yields the DRFT, and because it does not and cannot have certain spurious infinities which sometimes occur in the density matrix itself or the conventional DRFT; these come from ln T factors at infinite temperature. The DREA can be constructed to all orders (in principle), and the only regularizations needed are those which control the ultraviolet behavior of the zero-T theory. An example of spurious divergences in the DRFT occurs in d = 2+1 φ4 theory dimensionally reduced to d = 2. We study this theory and show that the rules for the DREA replace these "wrong" divergences in physical parameters by calculable powers of ln T; we also compute the phase transition temperature of this φ4 theory at one-loop order. Our density-matrix construction is equivalent to a construction of the Landau-Ginzburg "coarse-grained free energy" from a microscopic Hamiltonian.

  18. Automatic Aircraft Collision Avoidance System and Method

    NASA Technical Reports Server (NTRS)

    Skoog, Mark (Inventor); Hook, Loyd (Inventor); McWherter, Shaun (Inventor); Willhite, Jaimie (Inventor)

    2014-01-01

    The invention is a system and method of compressing a DTM to be used in an Auto-GCAS system using a semi-regular geometric compression algorithm. In general, the invention operates by first selecting the boundaries of the three dimensional map to be compressed and dividing the three dimensional map data into regular areas. Next, a type of free-edged, flat geometric surface is selected which will be used to approximate terrain data of the three dimensional map data. The flat geometric surface is used to approximate terrain data for each regular area. The approximations are checked to determine if they fall within selected tolerances. If the approximation for a specific regular area is within specified tolerance, the data is saved for that specific regular area. If the approximation for a specific area falls outside the specified tolerances, the regular area is divided and a flat geometric surface approximation is made for each of the divided areas. This process is recursively repeated until all of the regular areas are approximated by flat geometric surfaces. Finally, the compressed three dimensional map data is provided to the automatic ground collision system for an aircraft.
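
    The recursive tile-splitting scheme described above can be sketched as follows (a hypothetical 8x8 height grid and plane-fit tolerance of our choosing, not the patented implementation): each square tile is approximated by a least-squares plane; tiles whose maximum vertical error exceeds the tolerance are divided into quadrants and refit, recursing until every tile is within tolerance.

```python
def det3(M):
    """Determinant of a 3x3 matrix."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def fit_plane(pts):
    """Least-squares plane z = a + b*x + c*y via 3x3 normal equations."""
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sz = sum(p[2] for p in pts)
    sxx = sum(p[0] * p[0] for p in pts); syy = sum(p[1] * p[1] for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    sxz = sum(p[0] * p[2] for p in pts); syz = sum(p[1] * p[2] for p in pts)
    m = [[n, sx, sy], [sx, sxx, sxy], [sy, sxy, syy]]
    r = [sz, sxz, syz]
    d = det3(m)
    coef = []
    for k in range(3):          # Cramer's rule, one column at a time
        mk = [row[:] for row in m]
        for i in range(3):
            mk[i][k] = r[i]
        coef.append(det3(mk) / d)
    return coef

def compress(height, x0, y0, size, tol, out):
    """Approximate a tile by a plane; recurse into quadrants if out of tolerance."""
    pts = [(x, y, height[y][x])
           for y in range(y0, y0 + size) for x in range(x0, x0 + size)]
    if size == 1:               # single cell: store its height directly
        out.append((x0, y0, 1, pts[0][2], 0.0, 0.0))
        return
    a, b, c = fit_plane(pts)
    err = max(abs(z - (a + b * x + c * y)) for x, y, z in pts)
    if err <= tol:
        out.append((x0, y0, size, a, b, c))
    else:
        half = size // 2
        for dx in (0, half):
            for dy in (0, half):
                compress(height, x0 + dx, y0 + dy, half, tol, out)

# Hypothetical 8x8 terrain: a gentle plane with a 4-unit mesa in one corner.
N = 8
height = [[0.1 * x + 0.2 * y + (4.0 if x < 2 and y < 2 else 0.0)
           for x in range(N)] for y in range(N)]
tiles = []
compress(height, 0, 0, N, tol=0.5, out=tiles)
print(len(tiles))  # the flat quadrants stay whole; only the mesa corner splits
```

    The stored tuples (origin, size, plane coefficients) are the compressed representation; flat regions collapse to a few large tiles while sharp relief is captured by smaller ones.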

  19. Is the Filinov integral conditioning technique useful in semiclassical initial value representation methods?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spanner, Michael; Batista, Victor S.; Brumer, Paul

    2005-02-22

    The utility of the Filinov integral conditioning technique, as implemented in semiclassical initial value representation (SC-IVR) methods, is analyzed for a number of regular and chaotic systems. For nonchaotic systems of low dimensionality, the Filinov technique is found to be quite ineffective at accelerating convergence of semiclassical calculations since, contrary to the conventional wisdom, the semiclassical integrands usually do not exhibit significant phase oscillations in regions of large integrand amplitude. In the case of chaotic dynamics, it is found that the regular component is accurately represented by the SC-IVR, even when using the Filinov integral conditioning technique, but that quantum manifestations of chaotic behavior were easily overdamped by the filtering technique. Finally, it is shown that the level of approximation introduced by the Filinov filter is, in general, comparable to the simpler ad hoc truncation procedure introduced by Kay [J. Chem. Phys. 101, 2250 (1994)].

  20. Identification of nodal tissue in the living heart using rapid scanning fiber-optics confocal microscopy and extracellular fluorophores.

    PubMed

    Huang, Chao; Kaza, Aditya K; Hitchcock, Robert W; Sachse, Frank B

    2013-09-01

    Risks associated with pediatric reconstructive heart surgery include injury of the sinoatrial node (SAN) and atrioventricular node (AVN), requiring cardiac rhythm management using implantable pacemakers. These injuries result from difficulties in identifying nodal tissues intraoperatively. Here we describe an approach based on confocal microscopy and extracellular fluorophores to quantify tissue microstructure and identify nodal tissue. Using conventional 3-dimensional confocal microscopy, we investigated the microstructural arrangement of SAN, AVN, and atrial working myocardium (AWM) in fixed rat heart. AWM exhibited a regular, striated arrangement of the extracellular space. In contrast, SAN and AVN had an irregular, reticulated arrangement. AWM, SAN, and AVN tissues were beneath a thin surface layer of tissue that did not obstruct confocal microscopic imaging. Subsequently, we imaged tissues in living rat hearts with real-time fiber-optics confocal microscopy. Fiber-optics confocal microscopy images resembled images acquired with conventional confocal microscopy. We investigated the spatial regularity of tissue microstructure using Fourier analysis and second-order image moments. Fourier analysis of fiber-optics confocal microscopy images showed that the spatial regularity of AWM was greater than that of nodal tissues (37.5 ± 5.0% versus 24.3 ± 3.9% for SAN and 23.8 ± 3.7% for AVN; P<0.05). Similar differences in spatial regularity were revealed by second-order image moments (50.0 ± 7.3% for AWM versus 29.3 ± 6.7% for SAN and 27.3 ± 5.5% for AVN; P<0.05). The study demonstrates the feasibility of identifying nodal tissue in the living heart using extracellular fluorophores and fiber-optics confocal microscopy. Application of the approach in pediatric reconstructive heart surgery may reduce the risk of injuring nodal tissues.

  1. Do e-cigarettes have the potential to compete with conventional cigarettes?: a survey of conventional cigarette smokers' experiences with e-cigarettes.

    PubMed

    Kralikova, Eva; Novak, Jan; West, Oliver; Kmetova, Alexandra; Hajek, Peter

    2013-11-01

    Electronic cigarettes (ECs) are becoming increasingly popular globally. If they were to replace conventional cigarettes, it could have a substantial impact on public health. To evaluate ECs' potential for competing with conventional cigarettes as a consumer product, we report the first data, to our knowledge, on the proportion of smokers who try ECs and become regular users. A total of 2,012 people seen smoking or buying cigarettes in the Czech Republic were approached to answer questions about smoking, with no mention made of ECs to avoid the common bias in surveys of EC users. During the interview, the volunteers' experience with ECs was then discussed. A total of 1,738 smokers (86%) participated. One-half reported trying ECs at least once. Among those who tried ECs, 18.3% (95% CI, 15.7%-20.9%) reported using them regularly, and 14% (95% CI, 11.6%-16.2%) used them daily. On average, regular users had used ECs daily for 7.1 months. The most common reason for using ECs was to reduce consumption of conventional cigarettes; 60% of regular EC users reported that ECs helped them achieve this. Being older and having a more favorable initial experience with ECs explained 19% of the variance in progressing to regular EC use. Almost one-fifth of smokers who try ECs once go on to become regular users. ECs may develop into a genuine competitor to conventional cigarettes. Government agencies preparing to regulate ECs need to ensure that such moves do not create a market monopoly for conventional cigarettes.

  2. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by imposing sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies, and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
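    A minimal sketch of the two-stage idea on synthetic data follows; the bare-bones coordinate-descent lasso is a stand-in for the authors' implementation, and all variable names, dimensions, and tuning values are illustrative assumptions:

```python
# Sketch of two-stage L1-regularized IV regression (not the authors' code):
# stage 1 regresses each covariate on the instruments with an L1 penalty,
# stage 2 regresses the response on the stage-1 fitted values.
import numpy as np

def lasso_cd(A, b, lam, iters=100):
    """L1-penalized least squares via cyclic coordinate descent."""
    w = np.zeros(A.shape[1])
    r = b.astype(float).copy()                  # running residual b - A @ w
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(A.shape[1]):
            rho = A[:, j] @ r + col_sq[j] * w[j]
            w_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= A[:, j] * (w_new - w[j])       # keep residual consistent
            w[j] = w_new
    return w

rng = np.random.default_rng(0)
n, p = 200, 30
Z = rng.normal(size=(n, p))                     # instruments (genetic variants)
X = Z + 0.5 * rng.normal(size=(n, p))           # covariates (gene expression)
beta = np.zeros(p); beta[:2] = [2.0, -1.5]      # sparse true effects
y = X @ beta + rng.normal(size=n)

# Stage 1: sparse fit of each covariate on the instruments.
X_hat = np.column_stack([Z @ lasso_cd(Z, X[:, j], lam=20.0) for j in range(p)])
# Stage 2: sparse fit of the response on the fitted covariates.
beta_hat = lasso_cd(X_hat, y, lam=20.0)
selected = np.flatnonzero(np.abs(beta_hat) > 1e-8)
print(selected)                                  # should contain 0 and 1
```

With both stages penalized, only the covariates carrying genuine signal survive the soft threshold, which is the selection behavior the abstract describes.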

  3. Regularized lattice Bhatnagar-Gross-Krook model for two- and three-dimensional cavity flow simulations.

    PubMed

    Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S

    2014-05-01

    We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation-time scheme, at a moderate computational overhead.
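    The regularization step that distinguishes this scheme from plain single-relaxation-time BGK can be sketched for a single D2Q9 node: before relaxation, the non-equilibrium part of the populations is replaced by its projection onto the second-order Hermite moment, discarding higher-order content. Parameter values below are illustrative:

```python
# Sketch of the regularization step of a D2Q9 lattice BGK collision.
import numpy as np

# D2Q9 lattice: velocities, weights, sound speed cs^2 = 1/3.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
              [1,1],[-1,1],[-1,-1],[1,-1]], dtype=float)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
cs2 = 1.0 / 3.0

def feq(rho, u):
    """Second-order equilibrium distribution."""
    cu = c @ u
    return w * rho * (1 + cu/cs2 + cu**2/(2*cs2**2) - (u @ u)/(2*cs2))

# A perturbed population at one node (perturbation stands in for the
# post-streaming non-equilibrium content).
rng = np.random.default_rng(1)
f = feq(1.0, np.array([0.05, 0.02])) + 1e-3 * rng.normal(size=9)

rho = f.sum()
u = (c.T @ f) / rho
fe = feq(rho, u)

# Regularization: keep only the second-moment content of f - feq.
Pi = np.einsum('i,ia,ib->ab', f - fe, c, c)          # non-equilibrium stress
Q = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)  # Hermite tensor per link
f_reg = w / (2 * cs2**2) * np.einsum('iab,ab->i', Q, Pi)

tau = 0.8
f_post = fe + (1 - 1/tau) * f_reg                    # regularized BGK collision
```

Because the discarded higher-order modes carry no mass or momentum, density and momentum are conserved exactly, while the filtering of spurious modes is what yields the stability gain reported above.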

  4. High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan

    2017-04-01

    Four-dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. But the challenge in 4D-CBCT reconstruction is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts with conventional analytical algorithms. To address this problem, in this paper we propose a motion-compensated total variation regularization approach which fully exploits the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume by using inter-phase deformation vector fields (DVFs). The motion-compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, on which the regularization function is imposed. The regularization used in this work is 3D spatial total variation minimization combined with 1D temporal total variation minimization. We subsequently construct a cost function for a reconstruction pass and minimize it using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that the introduction of additional temporal correlation along the phase direction can improve the 4D-CBCT image quality.
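    The combined spatial-plus-temporal penalty can be sketched on a toy 4D volume. Assuming an anisotropic TV discretization and idealized motion compensation (both assumptions of this sketch, not details from the paper), the compensated sequence behaves as pseudo-static and its temporal TV collapses:

```python
# Sketch of a 3D-spatial + 1D-temporal total-variation penalty on a toy
# 4D volume shaped (phases, depth, rows, cols).
import numpy as np

def tv_cost(v, mu_s=1.0, mu_t=1.0):
    """Anisotropic TV: sum of absolute finite differences per axis."""
    tv_spatial = sum(np.abs(np.diff(v, axis=a)).sum() for a in (1, 2, 3))
    tv_temporal = np.abs(np.diff(v, axis=0)).sum()   # along the phase axis
    return mu_s * tv_spatial + mu_t * tv_temporal

rng = np.random.default_rng(0)
static = rng.normal(size=(8, 8, 8))
# Four phases of the same volume undergoing a rigid shift ("breathing").
moving = np.stack([np.roll(static, k, axis=2) for k in range(4)])
# After idealized motion compensation the sequence is pseudo-static.
compensated = np.stack([static] * 4)

print(tv_cost(moving), tv_cost(compensated))  # temporal term vanishes after MC
```

This is exactly why the paper applies the regularizer to the motion-compensated volume: the temporal penalty then measures only residual (noise-like) phase-to-phase change rather than anatomical motion.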

  5. Unstructured viscous grid generation by advancing-front method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    1993-01-01

    A new method of generating unstructured triangular/tetrahedral grids with high-aspect-ratio cells is proposed. The method is based on a new grid-marching strategy, referred to as 'advancing layers,' for the construction of highly stretched cells in the boundary layer, and on the conventional advancing-front technique for the generation of regular, equilateral cells in the inviscid-flow region. Unlike existing semi-structured viscous grid generation techniques, the new procedure relies on a totally unstructured advancing-front grid strategy, resulting in substantially enhanced grid flexibility and efficiency. The method is conceptually simple but powerful, capable of producing high-quality viscous grids for complex configurations with ease. A number of two-dimensional triangular grids are presented to demonstrate the methodology. The basic elements of the method, however, have been designed primarily with three-dimensional problems in mind, making the method extensible to tetrahedral viscous grid generation.

  6. Image volume analysis of omnidirectional parallax regular-polyhedron three-dimensional displays.

    PubMed

    Kim, Hwi; Hahn, Joonku; Lee, Byoungho

    2009-04-13

    Three-dimensional (3D) displays having regular-polyhedron structures are proposed and their imaging characteristics are analyzed. Four types of conceptual regular-polyhedron 3D displays, i.e., hexahedron, octahedron, dodecahedron, and icosahedron, are considered. In principle, a regular-polyhedron 3D display can present omnidirectional full-parallax 3D images. Design conditions for structural factors, such as the viewing angle of each facet panel and the observation distance, for a 3D display with omnidirectional full parallax are studied. As a main issue, the image volumes containing virtual 3D objects represented by the four types of regular-polyhedron displays are comparatively analyzed.

  7. Computer-Assisted Orthognathic Surgery for Patients with Cleft Lip/Palate: From Traditional Planning to Three-Dimensional Surgical Simulation

    PubMed Central

    Lonic, Daniel; Pai, Betty Chien-Jung; Yamaguchi, Kazuaki; Chortrakarnkij, Peerasak; Lin, Hsiu-Hsia; Lo, Lun-Jou

    2016-01-01

    Background Although conventional two-dimensional (2D) methods for orthognathic surgery planning are still popular, the use of three-dimensional (3D) simulation is steadily increasing. In facial asymmetry cases such as in cleft lip/palate patients, the additional information can dramatically improve planning accuracy and outcome. The purpose of this study is to investigate which parameters are changed most frequently in transferring a traditional 2D plan to 3D simulation, and which planning parameters can be better adjusted by this method. Patients and Methods This prospective study enrolled 30 consecutive patients with cleft lip and/or cleft palate (mean age 18.6±2.9 years, range 15 to 32 years). All patients received two-jaw single-splint orthognathic surgery. 2D orthodontic surgery plans were transferred into a 3D setting. Severe bony collisions in the ramus area after 2D plan transfer were noted. The position of the maxillo-mandibular complex was evaluated and eventually adjusted. Position changes of roll, midline, pitch, yaw, genioplasty and their frequency within the patient group were recorded as alterations of the initial 2D plan. Patients were divided into groups of no change from the original 2D plan and changes in one, two, three and four of the aforementioned parameters, as well as subgroups of unilateral, bilateral cleft lip/palate and isolated cleft palate cases. Postoperative OQLQ scores were obtained for 20 patients who finished orthodontic treatment. Results 83.3% of 2D plans were modified, mostly concerning yaw (63.3%) and midline (36.7%) adjustments. Yaw adjustments had the highest mean values in total and in all subgroups. Severe bony collisions as a result of 2D planning were seen in 46.7% of patients. Possible asymmetry was regularly foreseen and corrected in the 3D simulation. 
Conclusion Based on our findings, 3D simulation renders important information for accurate planning in complex cleft lip/palate cases involving facial asymmetry that is regularly missed in conventional 2D planning. PMID:27002726

  8. Computer-Assisted Orthognathic Surgery for Patients with Cleft Lip/Palate: From Traditional Planning to Three-Dimensional Surgical Simulation.

    PubMed

    Lonic, Daniel; Pai, Betty Chien-Jung; Yamaguchi, Kazuaki; Chortrakarnkij, Peerasak; Lin, Hsiu-Hsia; Lo, Lun-Jou

    2016-01-01

    Although conventional two-dimensional (2D) methods for orthognathic surgery planning are still popular, the use of three-dimensional (3D) simulation is steadily increasing. In facial asymmetry cases such as in cleft lip/palate patients, the additional information can dramatically improve planning accuracy and outcome. The purpose of this study is to investigate which parameters are changed most frequently in transferring a traditional 2D plan to 3D simulation, and which planning parameters can be better adjusted by this method. This prospective study enrolled 30 consecutive patients with cleft lip and/or cleft palate (mean age 18.6±2.9 years, range 15 to 32 years). All patients received two-jaw single-splint orthognathic surgery. 2D orthodontic surgery plans were transferred into a 3D setting. Severe bony collisions in the ramus area after 2D plan transfer were noted. The position of the maxillo-mandibular complex was evaluated and eventually adjusted. Position changes of roll, midline, pitch, yaw, genioplasty and their frequency within the patient group were recorded as alterations of the initial 2D plan. Patients were divided into groups of no change from the original 2D plan and changes in one, two, three and four of the aforementioned parameters, as well as subgroups of unilateral, bilateral cleft lip/palate and isolated cleft palate cases. Postoperative OQLQ scores were obtained for 20 patients who finished orthodontic treatment. 83.3% of 2D plans were modified, mostly concerning yaw (63.3%) and midline (36.7%) adjustments. Yaw adjustments had the highest mean values in total and in all subgroups. Severe bony collisions as a result of 2D planning were seen in 46.7% of patients. Possible asymmetry was regularly foreseen and corrected in the 3D simulation. Based on our findings, 3D simulation renders important information for accurate planning in complex cleft lip/palate cases involving facial asymmetry that is regularly missed in conventional 2D planning.

  9. Scientific data interpolation with low dimensional manifold model

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley

    2018-01-01

    We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
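    One building block of this method, interpolation by minimizing a quadratic form of the weighted graph Laplacian with observed samples held fixed, can be sketched on a 1-D signal; this is a much-simplified stand-in for the patch-manifold formulation, with an assumed chain graph and unit weights:

```python
# Hedged sketch of graph-Laplacian interpolation: fill in missing samples
# by minimizing x^T L x subject to the observed values (harmonic extension).
import numpy as np

n = 50
t = np.linspace(0, 2 * np.pi, n)
signal = np.sin(t)
rng = np.random.default_rng(0)
known = rng.random(n) < 0.4          # ~40% of samples observed
known[[0, -1]] = True                # pin the endpoints

# Chain-graph weighted Laplacian L = D - W (unit weights for the sketch).
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

# Minimizing x^T L x with x fixed on the known set reduces to a linear
# solve for the unknowns (the discrete Laplace equation).
u, k = ~known, known
x = signal.copy()
x[u] = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, k)] @ signal[k])

print(np.max(np.abs(x - signal)))    # interpolation error vs. the true signal
```

In the paper the graph lives on data patches and its weights are learned along with the data, but the "solve a Laplacian system with observed entries clamped" step has the same shape.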

  10. Low-redundancy linear arrays in mirrored interferometric aperture synthesis.

    PubMed

    Zhu, Dong; Hu, Fei; Wu, Liang; Li, Jun; Lang, Liang

    2016-01-15

    Mirrored interferometric aperture synthesis (MIAS) is a novel interferometry that can improve spatial resolution compared with that of conventional IAS. In one-dimensional (1-D) MIAS, an antenna array with low redundancy has the potential to achieve a high spatial resolution. This Letter presents a technique for the direct construction of low-redundancy linear arrays (LRLAs) in MIAS and derives two regular analytical patterns that can yield various LRLAs in a short computation time. Moreover, for a better estimation of the observed scene, a bi-measurement method is proposed to handle the rank defect associated with the transmatrix of those LRLAs. The results of imaging simulation demonstrate the effectiveness of the proposed method.
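    The redundancy bookkeeping behind low-redundancy arrays can be illustrated with integer element positions; the example below is a classic minimum-redundancy array from general linear-array theory, not one of the Letter's MIAS-specific patterns:

```python
# Sketch of redundancy accounting for a linear array: every baseline
# (pairwise position difference) should be covered with as little
# repetition as possible.
from itertools import combinations

def array_coverage(positions):
    baselines = [b - a for a, b in combinations(sorted(positions), 2)]
    max_lag = max(baselines)
    redundancy = len(baselines) / max_lag           # 1.0 means zero redundancy
    complete = set(baselines) == set(range(1, max_lag + 1))
    return redundancy, complete

# {0, 1, 4, 6} covers every lag 1..6 exactly once with 6 pairs.
print(array_coverage([0, 1, 4, 6]))   # (1.0, True)  minimum-redundancy array
print(array_coverage([0, 1, 2, 3]))   # (2.0, True)  uniform array: redundant
```

A low-redundancy layout reaches the same maximum baseline (hence the same resolution) with fewer elements than a uniform array, which is the economy the Letter's analytical patterns aim for.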

  11. Dynamical Friedel oscillations of a Fermi sea

    NASA Astrophysics Data System (ADS)

    Zhang, J. M.; Liu, Y.

    2018-02-01

    We study the scenario of quenching an interaction-free Fermi sea on a one-dimensional lattice ring by suddenly changing the potential of a site. From the point of view of the conventional Friedel oscillation, which is a static or equilibrium problem, it is of interest what temporal and spatial oscillations the local sudden quench will induce. Numerically, the primary observation is that for a generic site, the local particle density switches between two plateaus periodically in time. Making use of the proximity of the realistic model to an exactly solvable model and employing the Abel regularization to assign a definite value to a divergent series, we obtain an analytical formula for the heights of the plateaus, which turns out to be very accurate for sites not too close to the quench site. The unexpected relevance and the remarkable accuracy of the Abel regularization are yet to be understood. Eventually, when the contribution of the defect mode is also taken into account, the plateaus for those sites close to or on the quench site can also be accurately predicted. We have also studied the infinite lattice case. In this case, following the quench, the outgoing wave fronts leave behind a stable density oscillation pattern. Because of an interesting single-particle property, this dynamically generated Friedel oscillation differs from its conventional static counterpart only by the defect mode.
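    Abel regularization assigns to a divergent series the limit of its associated power series as x approaches 1 from below. A standalone illustration with Grandi's series (a textbook example, not the paper's lattice series):

```python
# Abel regularization: assign lim_{x -> 1^-} sum a_n x^n to a divergent
# series. Grandi's series 1 - 1 + 1 - 1 + ... has Abel sum 1/2.
def abel_sum(coeffs, x):
    """Evaluate the Abel-regularizing power series sum a_n * x^n at 0 <= x < 1."""
    return sum(a * x**n for n, a in enumerate(coeffs))

grandi = [(-1) ** n for n in range(100000)]
for x in (0.9, 0.99, 0.999):
    print(x, abel_sum(grandi, x))     # approaches 0.5 as x -> 1^-
```

The partial sums of Grandi's series oscillate between 0 and 1 forever, but the Abel limit picks out the value 1/2; in the paper the same device assigns definite plateau heights to otherwise divergent mode sums.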

  12. FeynArts model file for MSSM transition counterterms from DREG to DRED

    NASA Astrophysics Data System (ADS)

    Stöckinger, Dominik; Varšo, Philipp

    2012-02-01

    The FeynArts model file MSSMdreg2dred implements MSSM transition counterterms which can convert one-loop Green functions from dimensional regularization to dimensional reduction. They correspond to a slight extension of the well-known Martin/Vaughn counterterms, specialized to the MSSM, and can serve also as supersymmetry-restoring counterterms. The paper provides full analytic results for the counterterms and gives one- and two-loop usage examples. The model file can simplify combining MS-bar parton distribution functions with supersymmetric renormalization or avoiding the renormalization of ɛ-scalars in dimensional reduction. Program summary: Program title: MSSMdreg2dred.mod Catalogue identifier: AEKR_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKR_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: LGPL-License [1] No. of lines in distributed program, including test data, etc.: 7600 No. of bytes in distributed program, including test data, etc.: 197 629 Distribution format: tar.gz Programming language: Mathematica, FeynArts Computer: Any, capable of running Mathematica and FeynArts Operating system: Any, with running Mathematica, FeynArts installation Classification: 4.4, 5, 11.1 Subprograms used: FeynArts (Cat. Id. ADOW_v1_0, CPC 140 (2001) 418) Nature of problem: The computation of one-loop Feynman diagrams in the minimal supersymmetric standard model (MSSM) requires regularization. Two schemes, dimensional regularization and dimensional reduction are both common but have different pros and cons. In order to combine the advantages of both schemes one would like to easily convert existing results from one scheme into the other. Solution method: Finite counterterms are constructed which correspond precisely to the one-loop scheme differences for the MSSM. They are provided as a FeynArts [2] model file. 
Using this model file together with FeynArts, the (ultra-violet) regularization of any MSSM one-loop Green function is switched automatically from dimensional regularization to dimensional reduction. In particular the counterterms serve as supersymmetry-restoring counterterms for dimensional regularization. Restrictions: The counterterms are restricted to the one-loop level and the MSSM. Running time: A few seconds to generate typical Feynman graphs with FeynArts.

  13. Scientific data interpolation with low dimensional manifold model

    DOE PAGES

    Zhu, Wei; Wang, Bao; Barnard, Richard C.; ...

    2017-09-28

    Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  14. Scientific data interpolation with low dimensional manifold model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Wei; Wang, Bao; Barnard, Richard C.

    Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  15. Three-dimensional color Doppler echocardiographic quantification of tricuspid regurgitation orifice area: comparison with conventional two-dimensional measures.

    PubMed

    Chen, Tien-En; Kwon, Susan H; Enriquez-Sarano, Maurice; Wong, Benjamin F; Mankad, Sunil V

    2013-10-01

    Three-dimensional (3D) color Doppler echocardiography (CDE) provides directly measured vena contracta area (VCA). However, a large comprehensive 3D color Doppler echocardiographic study with sufficiently severe tricuspid regurgitation (TR) to verify its value in determining TR severity in comparison with conventional quantitative and semiquantitative two-dimensional (2D) parameters has not been previously conducted. The aim of this study was to examine the utility and feasibility of directly measured VCA by 3D transthoracic CDE, its correlation with 2D echocardiographic measurements of TR, and its ability to determine severe TR. Ninety-two patients with mild or greater TR prospectively underwent 2D and 3D transthoracic echocardiography. Two-dimensional evaluation of TR severity included the ratio of jet area to right atrial area, vena contracta width, and quantification of effective regurgitant orifice area using the flow convergence method. Full-volume breath-hold 3D color data sets of TR were obtained using a real-time 3D echocardiography system. VCA was directly measured by 3D-guided direct planimetry of the color jet. Subgroup analysis included the presence of a pacemaker, eccentricity of the TR jet, ellipticity of the orifice shape, underlying TR mechanism, and baseline rhythm. Three-dimensional VCA correlated well with effective regurgitant orifice area (r = 0.62, P < .0001), moderately with vena contracta width (r = 0.42, P < .0001), and weakly with jet area/right atrial area ratio. Subgroup analysis comparing 3D VCA with 2D effective regurgitant orifice area demonstrated excellent correlation for organic TR (r = 0.86, P < .0001), regular rhythm (r = 0.78, P < .0001), and circular orifice (r = 0.72, P < .0001) but poor correlation in atrial fibrillation rhythm (r = 0.23, P = .0033). Receiver operating characteristic curve analysis for 3D VCA demonstrated good accuracy for severe TR determination. 
Three-dimensional VCA measurement is feasible and obtainable in the majority of patients with mild or greater TR. Three-dimensional VCA measurement is also feasible in patients with atrial fibrillation, but it correlated poorly with 2D measures even when cycle length variation was <20%. Three-dimensional VCA has good cutoff accuracy in determining severe TR. This simple, straightforward 3D color Doppler measurement shows promise as an alternative for the quantification of TR. Copyright © 2013 American Society of Echocardiography. Published by Mosby, Inc. All rights reserved.

  16. Six-dimensional regularization of chiral gauge theories

    NASA Astrophysics Data System (ADS)

    Fukaya, Hidenori; Onogi, Tetsuya; Yamamoto, Shota; Yamamura, Ryo

    2017-03-01

    We propose a regularization of four-dimensional chiral gauge theories using six-dimensional Dirac fermions. In our formulation, we consider two different mass terms having domain-wall profiles in the fifth and the sixth directions, respectively. A Weyl fermion appears as a localized mode at the junction of two different domain walls. One domain wall naturally exhibits the Stora-Zumino chain of the anomaly descent equations, starting from the axial U(1) anomaly in six dimensions to the gauge anomaly in four dimensions. Another domain wall implies a similar inflow of the global anomalies. The anomaly-free condition is equivalent to requiring that the axial U(1) anomaly and the parity anomaly are canceled among the six-dimensional Dirac fermions. Since our formulation is based on a massive vector-like fermion determinant, a nonperturbative regularization will be possible on a lattice. Putting the gauge field at the four-dimensional junction and extending it to the bulk using the Yang-Mills gradient flow, as recently proposed by Grabowska and Kaplan, we define the four-dimensional path integral of the target chiral gauge theory.

  17. Summary of mathematical models for a conventional and vertical junction photoconverter

    NASA Technical Reports Server (NTRS)

    Heinbockel, J. H.

    1986-01-01

    The geometry and computer programming for mathematical models of a one-dimensional conventional photoconverter, a one-dimensional vertical junction photoconverter, a three-dimensional conventional photoconverter, and a three-dimensional vertical junction solar cell are discussed.

  18. Minimum Fisher regularization of image reconstruction for infrared imaging bolometer on HL-2A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, J. M.; Liu, Y.; Li, W.

    2013-09-15

    An infrared imaging bolometer diagnostic has been developed recently for the HL-2A tokamak to measure the temporal and spatial distribution of plasma radiation. The three-dimensional tomography, reduced to a two-dimensional problem by the assumption of plasma radiation toroidal symmetry, has been performed. A three-dimensional geometry matrix is calculated with the one-dimensional pencil beam approximation. The solid angles viewed by the detector elements are taken into account in defining the chord brightness, and the local plasma emission is obtained by inverting the measured brightness with the minimum Fisher regularization method. A typical HL-2A plasma radiation model was chosen to optimize a regularization parameter on the criterion of generalized cross validation. Finally, this method was applied to HL-2A experiments, demonstrating the plasma radiated power density distribution in limiter and divertor discharges.
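    Minimum Fisher regularization can be sketched as iteratively reweighted Tikhonov inversion in which the smoothing weight is the inverse of the current emissivity estimate, so smoothing is strong where the emissivity is low. The 1-D geometry matrix and phantom below are illustrative assumptions, not HL-2A's geometry:

```python
# Hedged sketch of minimum-Fisher tomographic inversion on a 1-D toy
# problem: iteratively reweighted Tikhonov smoothing with weight ~ 1/x.
import numpy as np

n, m = 30, 20                                # emission cells, detector chords
rng = np.random.default_rng(0)
G = rng.random((m, n)) / n                   # toy chord-length (geometry) matrix
prof = np.exp(-0.5 * ((np.arange(n) - 15) / 4.0) ** 2)   # peaked emissivity
b = G @ prof + 1e-4 * rng.normal(size=m)     # measured chord brightness

D = np.diff(np.eye(n), axis=0)               # first-difference (gradient) operator
lam, x = 1e-3, np.ones(n)
for _ in range(10):                          # iterative reweighting
    mid = 0.5 * (x[:-1] + x[1:])             # emissivity at difference midpoints
    Wf = np.diag(1.0 / np.maximum(mid, 1e-3))  # minimum-Fisher weight
    x = np.linalg.solve(G.T @ G + lam * D.T @ Wf @ D, G.T @ b)
    x = np.maximum(x, 0.0)                   # emissivity is non-negative

print(np.linalg.norm(x - prof) / np.linalg.norm(prof))   # relative error
```

The regularization parameter here is fixed; in the paper it is chosen by generalized cross validation against a model radiation profile.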

  19. A nonlinear dynamics approach for incorporating wind-speed patterns into wind-power project evaluation.

    PubMed

    Huffaker, Ray; Bittelli, Marco

    2015-01-01

    Wind-energy production may be expanded beyond regions with high average wind speeds (such as the Midwest U.S.A.) to sites with lower average speeds (such as the Southeast U.S.A.) by locating favorable regional matches between natural wind-speed and energy-demand patterns. A critical component of wind-power evaluation is to incorporate wind-speed dynamics reflecting documented diurnal and seasonal behavioral patterns. Conventional probabilistic approaches remove patterns from wind-speed data, and these patterns must be restored synthetically before they can be matched with energy-demand patterns. How to accurately restore wind-speed patterns is a vexing problem spurring an expanding line of papers. We propose a paradigm shift in wind-power evaluation that employs signal-detection and nonlinear-dynamics techniques to empirically diagnose whether synthetic pattern restoration can be avoided altogether. If the complex behavior of observed wind-speed records is due to nonlinear, low-dimensional, and deterministic system dynamics, then nonlinear-dynamics techniques can reconstruct wind-speed dynamics from observed wind-speed data without recourse to conventional probabilistic approaches. In the first study of its kind, we test a nonlinear dynamics approach in an application to Sugarland Wind, the first utility-scale wind project proposed in Florida, USA. We find empirical evidence of a low-dimensional and nonlinear wind-speed attractor characterized by strong temporal patterns that match up well with regular daily and seasonal electricity demand patterns.
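    The attractor-reconstruction step rests on delay-coordinate (Takens) embedding of a scalar record. In this sketch the chaotic logistic map stands in for an observed wind-speed series, and a simple nearest-neighbor forecast serves as a determinism check; embedding dimension, delay, and split are illustrative choices:

```python
# Sketch of delay-coordinate embedding plus a nearest-neighbor determinism
# check: for deterministic low-dimensional dynamics, neighbors in embedding
# space have nearby successors.
import numpy as np

def delay_embed(x, dim, tau):
    """Stack delayed copies: row k = (x[k], x[k+tau], ..., x[k+(dim-1)tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# Deterministic, low-dimensional test signal (chaotic logistic map).
x = np.empty(2000); x[0] = 0.3
for i in range(1999):
    x[i + 1] = 3.9 * x[i] * (1.0 - x[i])

E = delay_embed(x, dim=3, tau=1)[:-1]        # embedded states
succ = x[3:]                                  # one-step successor of each state

train, test = E[:1500], E[1500:]
d = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=2)
pred = succ[np.argmin(d, axis=1)]             # successor of nearest neighbor
err = np.mean(np.abs(pred - succ[1500:]))
print(err)                                    # small for deterministic dynamics
```

A stochastic series run through the same check yields a much larger error, which is the kind of empirical diagnosis the abstract describes for distinguishing deterministic wind-speed dynamics from noise.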

  20. γ5 in the four-dimensional helicity scheme

    NASA Astrophysics Data System (ADS)

    Gnendiger, C.; Signer, A.

    2018-05-01

    We investigate the regularization-scheme dependent treatment of γ5 in the framework of dimensional regularization, mainly focusing on the four-dimensional helicity scheme (fdh). Evaluating distinctive examples, we find that for one-loop calculations, the recently proposed four-dimensional formulation (fdf) of the fdh scheme constitutes a viable and efficient alternative compared to more traditional approaches. In addition, we extend the considerations to the two-loop level and compute the pseudoscalar form factors of quarks and gluons in fdh. We provide the necessary operator renormalization and discuss at a practical level how the complexity of intermediate calculational steps can be reduced in an efficient way.

  1. Dimensional changes of acrylic resin denture bases: conventional versus injection-molding technique.

    PubMed

    Gharechahi, Jafar; Asadzadeh, Nafiseh; Shahabian, Foad; Gharechahi, Maryam

    2014-07-01

    Acrylic resin denture bases undergo dimensional changes during polymerization. Injection-molding techniques are reported to reduce these changes and thereby improve the physical properties of denture bases. The aim of this study was to compare the dimensional changes of specimens processed by conventional and injection-molding techniques. SR-Ivocap Triplex Hot resin was used for the conventional pressure-packed technique and SR-Ivocap High Impact for the injection-molding technique. After processing, all the specimens were stored in distilled water at room temperature until measured. For dimensional accuracy evaluation, measurements were recorded at 24-hour, 48-hour and 12-day intervals using a digital caliper with an accuracy of 0.01 mm. Statistical analysis was carried out by SPSS (SPSS Inc., Chicago, IL, USA) using t-test and repeated-measures ANOVA. Statistical significance was defined at P<0.05. After each water storage period, the acrylic specimens produced by injection molding exhibited smaller dimensional changes than those produced by the conventional technique. Curing shrinkage was compensated by water sorption, with dimensional changes decreasing as water storage time increased. Within the limitations of this study, the dimensional changes of acrylic resin specimens were influenced by the molding technique used, and the SR-Ivocap injection procedure exhibited higher dimensional accuracy than conventional molding.

  2. Dimensional Changes of Acrylic Resin Denture Bases: Conventional Versus Injection-Molding Technique

    PubMed Central

    Gharechahi, Jafar; Asadzadeh, Nafiseh; Shahabian, Foad; Gharechahi, Maryam

    2014-01-01

    Objective: Acrylic resin denture bases undergo dimensional changes during polymerization. Injection-molding techniques are reported to reduce these changes and thereby improve the physical properties of denture bases. The aim of this study was to compare the dimensional changes of specimens processed by conventional and injection-molding techniques. Materials and Methods: SR-Ivocap Triplex Hot resin was used for the conventional pressure-packed technique and SR-Ivocap High Impact for the injection-molding technique. After processing, all the specimens were stored in distilled water at room temperature until measured. For dimensional accuracy evaluation, measurements were recorded at 24-hour, 48-hour and 12-day intervals using a digital caliper with an accuracy of 0.01 mm. Statistical analysis was carried out by SPSS (SPSS Inc., Chicago, IL, USA) using t-test and repeated-measures ANOVA. Statistical significance was defined at P<0.05. Results: After each water storage period, the acrylic specimens produced by injection molding exhibited smaller dimensional changes than those produced by the conventional technique. Curing shrinkage was compensated by water sorption, with dimensional changes decreasing as water storage time increased. Conclusion: Within the limitations of this study, the dimensional changes of acrylic resin specimens were influenced by the molding technique used, and the SR-Ivocap injection procedure exhibited higher dimensional accuracy than conventional molding. PMID:25584050

  3. Curvature of Super Diff(S^1)/S^1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oh, P.; Ramond, P.

    Motivated by the work of Bowick and Rajeev, we calculate the curvature of the infinite-dimensional flag manifolds Diff(S^1)/S^1 and Super Diff(S^1)/S^1 using standard finite-dimensional coset space techniques. We regularize the infinite sums by zeta-function regularization and recover the conformal and superconformal anomalies, respectively, for a specific choice of the torsion.
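    Zeta-function regularization assigns finite values to the divergent mode sums that appear in such curvature computations. The standard textbook example (illustrative here, not the paper's specific sum) continues the series analytically:

```latex
\zeta(s) = \sum_{n=1}^{\infty} n^{-s} \quad (\operatorname{Re} s > 1),
\qquad
\sum_{n=1}^{\infty} n \;\xrightarrow{\;\zeta\text{-reg}\;}\; \zeta(-1) = -\tfrac{1}{12}.
```

The same $-1/12$ normal-ordering constant is what feeds the central-charge term of the conformal anomaly recovered from the curvature of these coset spaces.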

  4. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    PubMed

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data: parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters; (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, than regular common-value shrinkage estimators, and than estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
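    The benefit of shrinking variance estimators toward a pooled target when the per-variable sample size is tiny can be sketched as follows; this is a deliberately simplified stand-in for the MVR estimator, with a single pooled target and a fixed shrinkage weight instead of the paper's adaptive, clustering-based regularization:

```python
# Sketch of variance shrinkage for many-variables / few-samples data:
# gene-wise sample variances are shrunk toward a pooled target, which
# reduces mean squared error when each variance uses only n - 1 = 3 df.
import numpy as np

rng = np.random.default_rng(0)
p, n = 1000, 4                                 # many variables, few samples
true_var = rng.gamma(2.0, 0.5, size=p)         # heterogeneous true variances
X = rng.normal(0.0, np.sqrt(true_var)[:, None], size=(p, n))

s2 = X.var(axis=1, ddof=1)                     # unreliable with n = 4
target = s2.mean()                             # pooled shrinkage target
w = 0.5                                        # fixed weight (sketch only)
s2_shrunk = w * target + (1 - w) * s2

mse = lambda est: np.mean((est - true_var) ** 2)
print(mse(s2), mse(s2_shrunk))                 # shrinkage lowers the MSE here
```

Trading a little bias for a large variance reduction is exactly the Stein-type improvement the abstract invokes; the paper's estimator additionally pools locally (via clustering) and regularizes mean and variance jointly.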

  5. Evaluating four-loop conformal Feynman integrals by D-dimensional differential equations

    NASA Astrophysics Data System (ADS)

    Eden, Burkhard; Smirnov, Vladimir A.

    2016-10-01

    We evaluate a four-loop conformal integral, i.e. an integral over four four-dimensional coordinates, by turning to its dimensionally regularized version and applying differential equations for the set of the corresponding 213 master integrals. To solve these linear differential equations we follow the strategy suggested by Henn and switch to a uniformly transcendental basis of master integrals. We find a solution to these equations up to weight eight in terms of multiple polylogarithms. Further, we present an analytical result for the given four-loop conformal integral considered in four-dimensional space-time in terms of single-valued harmonic polylogarithms. As a by-product, we obtain analytical results for all the other 212 master integrals within dimensional regularization, i.e. considered in D dimensions.

  6. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    NASA Astrophysics Data System (ADS)

    Yao, Bing; Yang, Hui

    2016-12-01

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
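    As a point of reference for the baselines named above, a zero-order Tikhonov solution of a generic linear inverse problem Ax ≈ b can be sketched in a few lines (an illustration of the baseline, not the STRE model):

    ```python
    import numpy as np

    def tikhonov_zero_order(A, b, lam):
        """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

    # Ill-conditioned forward operator with rapidly decaying singular values
    rng = np.random.default_rng(42)
    U, _ = np.linalg.qr(rng.normal(size=(30, 10)))
    V, _ = np.linalg.qr(rng.normal(size=(10, 10)))
    s = 10.0 ** -np.arange(10)                    # condition number ~ 1e9
    A = U @ np.diag(s) @ V.T

    x_true = rng.normal(size=10)
    b = A @ x_true + 1e-6 * rng.normal(size=30)   # tiny measurement noise

    x_naive = np.linalg.lstsq(A, b, rcond=None)[0]  # unregularized: noise blows up
    x_reg = tikhonov_zero_order(A, b, lam=1e-4)     # damped, stable solution
    print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
    ```

    First-order Tikhonov variants replace the identity with a discrete derivative operator, penalizing roughness rather than amplitude.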

  7. Steady state temperature distribution in dermal regions of an irregular tapered shaped human limb with variable eccentricity.

    PubMed

    Agrawal, M; Pardasani, K R; Adlakha, N

    2014-08-01

    Investigators in the past have developed models of temperature distribution in the human limb assuming it to be a regular circular or elliptical tapered cylinder. In reality, however, the limb is not a regular tapered cylinder: the radius and eccentricity are not the same throughout the limb. In view of the above, a model of temperature distribution in an irregular tapered elliptical human limb is proposed for the three-dimensional steady state case in this paper. The limb is assumed to be composed of multiple cylindrical substructures with variable radius and eccentricity. The mathematical model incorporates the effects of blood mass flow rate, metabolic activity, and thermal conductivity. The outer surface is exposed to the environment, and appropriate boundary conditions have been framed. The finite element method has been employed to obtain the solution. Temperature profiles have been computed in the dermal layers of a human limb and used to study the effect of shape, microstructure, and biophysical parameters on temperature distribution in human limbs. The proposed model is more realistic than conventional models, as it can be applied to both regular and irregular body structures with variable radius and eccentricity to study their thermal behaviour. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Self-organization of maze-like structures via guided wrinkling.

    PubMed

    Bae, Hyung Jong; Bae, Sangwook; Yoon, Jinsik; Park, Cheolheon; Kim, Kibeom; Kwon, Sunghoon; Park, Wook

    2017-06-01

    Sophisticated three-dimensional (3D) structures found in nature are self-organized by bottom-up natural processes. To artificially construct these complex systems, various bottom-up fabrication methods, designed to transform 2D structures into 3D structures, have been developed as alternatives to conventional top-down lithography processes. We present a different self-organization approach, in which we construct microstructures that are periodic and ordered but have a random architecture, like mazes. For this purpose, we transformed planar surfaces using wrinkling to directly use randomly generated ridges as maze walls. Highly regular maze structures, consisting of several tessellations with customized designs, were fabricated by precisely controlling wrinkling with the ridge-guiding structure, analogous to the creases in origami. The method presented here could have widespread applications in various material systems with multiple length scales.

  9. Comparison between audio-only and audiovisual biofeedback for regulating patients' respiration during four-dimensional radiotherapy.

    PubMed

    Yu, Jesang; Choi, Ji Hoon; Ma, Sun Young; Jeung, Tae Sig; Lim, Sangwook

    2015-09-01

    To compare audio-only biofeedback to conventional audiovisual biofeedback for regulating patients' respiration during four-dimensional radiotherapy, thereby limiting damage to healthy surrounding tissue caused by organ movement, six healthy volunteers were assisted by audiovisual or audio-only biofeedback systems to regulate their respiration. Volunteers breathed through a mask developed for this study by following computer-generated guiding curves displayed on a screen, combined with instructional sounds. They then performed breathing following instructional sounds only. The guiding signals and the volunteers' respiratory signals were logged at 20 samples per second. The standard deviations between the guiding and respiratory curves for the audiovisual and audio-only biofeedback systems were 21.55% and 23.19%, respectively; the average correlation coefficients were 0.9778 and 0.9756, respectively. By a paired t-test, the regularity achieved with audiovisual and audio-only biofeedback was statistically the same across the six volunteers' respirations. The difference between the audiovisual and audio-only biofeedback methods was not significant. Audio-only biofeedback has many advantages, as patients do not require a mask and can quickly adapt to this method in the clinic.

  10. Label-free nanoscale characterization of red blood cell structure and dynamics using single-shot transport of intensity equation

    NASA Astrophysics Data System (ADS)

    Poola, Praveen Kumar; John, Renu

    2017-10-01

    We report the results of characterization of red blood cell (RBC) structure and its dynamics with nanometric sensitivity using transport of intensity equation microscopy (TIEM). The conventional transport of intensity technique requires three intensity images and hence is not suitable for studying real-time dynamics of live biological samples. However, assuming the sample to be homogeneous, phase retrieval using the transport of intensity equation has been demonstrated with a single defocused measurement with x-rays. We adopt this technique for quantitative phase light microscopy of homogeneous cells like RBCs. The main merits of this technique are its simplicity, cost-effectiveness, and ease of implementation on a conventional microscope. The phase information can be easily merged with regular bright-field and fluorescence images to provide multidimensional (three-dimensional spatial and temporal) information without any extra complexity in the setup. The phase measurement from the TIEM has been characterized using polymeric microbeads and the noise stability of the system has been analyzed. We explore the structure and real-time dynamics of RBCs and the subdomain membrane fluctuations using this technique.

  11. A Matlab toolkit for three-dimensional electrical impedance tomography: a contribution to the Electrical Impedance and Diffuse Optical Reconstruction Software project

    NASA Astrophysics Data System (ADS)

    Polydorides, Nick; Lionheart, William R. B.

    2002-12-01

    The objective of the Electrical Impedance and Diffuse Optical Reconstruction Software project is to develop freely available software that can be used to reconstruct electrical or optical material properties from boundary measurements. Nonlinear and ill-posed problems such as electrical impedance and optical tomography are typically approached using a finite element model for the forward calculations and a regularized nonlinear solver for obtaining a unique and stable inverse solution. Most commercially available finite element programs are unsuitable for solving these problems because of their conventional, inefficient way of calculating the Jacobian and their lack of accurate electrode modelling. A complete package for the two-dimensional EIT problem was officially released by Vauhkonen et al in the second half of 2000. However, most industrial and medical electrical imaging problems are fundamentally three-dimensional. To assist development, we have written and released a free toolkit of Matlab routines that can be employed to solve the forward and inverse EIT problems in three dimensions based on the complete electrode model, along with some basic visualization utilities, in the hope that it will stimulate further development. We also include a derivation of the formula for the Jacobian (or sensitivity) matrix based on the complete electrode model.

  12. Computational Labs Using VPython Complement Conventional Labs in Online and Regular Physics Classes

    NASA Astrophysics Data System (ADS)

    Bachlechner, Martina E.

    2009-03-01

    Fairmont State University has developed online physics classes for the high-school teaching certificate based on the textbook Matter and Interactions by Chabay and Sherwood. This led to using computational VPython labs in the traditional classroom setting as well, to complement conventional labs. The computational modeling process has proven to provide an excellent basis for the subsequent conventional lab and allows for a concrete experience of the difference between behavior according to a model and realistic behavior. Observations in the regular classroom setting feed back into the development of the online classes.

  13. A Nonlinear Dynamics Approach for Incorporating Wind-Speed Patterns into Wind-Power Project Evaluation

    PubMed Central

    Huffaker, Ray; Bittelli, Marco

    2015-01-01

    Wind-energy production may be expanded beyond regions with high-average wind speeds (such as the Midwest U.S.A.) to sites with lower-average speeds (such as the Southeast U.S.A.) by locating favorable regional matches between natural wind-speed and energy-demand patterns. A critical component of wind-power evaluation is to incorporate wind-speed dynamics reflecting documented diurnal and seasonal behavioral patterns. Conventional probabilistic approaches remove patterns from wind-speed data. These patterns must be restored synthetically before they can be matched with energy-demand patterns. How to accurately restore wind-speed patterns is a vexing problem spurring an expanding line of papers. We propose a paradigm shift in wind power evaluation that employs signal-detection and nonlinear-dynamics techniques to empirically diagnose whether synthetic pattern restoration can be avoided altogether. If the complex behavior of observed wind-speed records is due to nonlinear, low-dimensional, and deterministic system dynamics, then nonlinear dynamics techniques can reconstruct wind-speed dynamics from observed wind-speed data without recourse to conventional probabilistic approaches. In the first study of its kind, we test a nonlinear dynamics approach in an application to Sugarland Wind—the first utility-scale wind project proposed in Florida, USA. We find empirical evidence of a low-dimensional and nonlinear wind-speed attractor characterized by strong temporal patterns that match up well with regular daily and seasonal electricity demand patterns. PMID:25617767

  14. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multiparameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing more deeply the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
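    The SVD filtering step can be sketched generically: truncate the data matrix to its leading singular components before inversion, which suppresses noise while retaining the low-rank signal (an illustration of the idea, not the I2DUPEN implementation):

    ```python
    import numpy as np

    def tsvd_filter(S, k):
        """Keep only the k leading singular components of a 2D data matrix.

        Generic truncated-SVD compression/denoising sketch, not the
        I2DUPEN implementation.
        """
        U, s, Vt = np.linalg.svd(S, full_matrices=False)
        return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # Synthetic separable (rank-1) relaxation surface plus noise
    t1 = np.linspace(0.01, 1.0, 50)[:, None]
    t2 = np.linspace(0.01, 1.0, 60)[None, :]
    clean = (1.0 - np.exp(-t1 / 0.2)) * np.exp(-t2 / 0.1)

    rng = np.random.default_rng(3)
    noisy = clean + 0.05 * rng.normal(size=clean.shape)
    filtered = tsvd_filter(noisy, k=3)
    print(np.linalg.norm(filtered - clean) < np.linalg.norm(noisy - clean))
    ```

    Because the noiseless surface is separable (rank one), a small truncation rank already captures the signal while discarding most of the noise.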

  15. Dimensional regularization of the IR divergences in the Fokker action of point-particle binaries at the fourth post-Newtonian order

    NASA Astrophysics Data System (ADS)

    Bernard, Laura; Blanchet, Luc; Bohé, Alejandro; Faye, Guillaume; Marsat, Sylvain

    2017-11-01

    The Fokker action of point-particle binaries at the fourth post-Newtonian (4PN) approximation of general relativity has been determined previously. However, two ambiguity parameters associated with infrared (IR) divergences of spatial integrals had to be introduced. These two parameters were fixed by comparison with gravitational self-force (GSF) calculations of the conserved energy and periastron advance for circular orbits in the test-mass limit. In the present paper, together with a companion paper, we determine both ambiguities from first principles by means of dimensional regularization. Our computation is thus entirely defined within the dimensional regularization scheme, treating at once the IR and ultraviolet (UV) divergences. In particular, we obtain crucial contributions coming from the Einstein-Hilbert part of the action and from the nonlocal tail term in arbitrary dimensions, which resolve the ambiguities.

  16. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data

    PubMed Central

    Dazard, Jean-Eudes; Rao, J. Sunil

    2012-01-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data: parameter estimation across multiple variables in a data set where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, and (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, regular common-value shrinkage estimators, or estimators that simply ignore the information contained in the sample mean. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950

  17. 36 CFR 1192.4 - Miscellaneous instructions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... General § 1192.4 Miscellaneous instructions. (a) Dimensional conventions. Dimensions that are not noted as minimum or maximum are absolute. (b) Dimensional tolerances. All dimensions are subject to conventional...

  18. 36 CFR 1192.4 - Miscellaneous instructions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... General § 1192.4 Miscellaneous instructions. (a) Dimensional conventions. Dimensions that are not noted as minimum or maximum are absolute. (b) Dimensional tolerances. All dimensions are subject to conventional...

  19. 36 CFR 1192.4 - Miscellaneous instructions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... General § 1192.4 Miscellaneous instructions. (a) Dimensional conventions. Dimensions that are not noted as minimum or maximum are absolute. (b) Dimensional tolerances. All dimensions are subject to conventional...

  20. Effective field theory dimensional regularization

    NASA Astrophysics Data System (ADS)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs, and the generalization to higher loops is discussed.

  1. A memory-efficient staining algorithm in 3D seismic modelling and imaging

    NASA Astrophysics Data System (ADS)

    Jia, Xiaofeng; Yang, Lu

    2017-08-01

    The staining algorithm has been proven to generate high signal-to-noise ratio (S/N) images in poorly illuminated areas in two-dimensional cases. In the staining algorithm, the stained wavefield relevant to the target area and the regular source wavefield forward propagate synchronously. Cross-correlating these two wavefields with the backward propagated receiver wavefield separately, we obtain two images: the local image of the target area and the conventional reverse time migration (RTM) image. This imaging process requires massive amounts of computer memory for wavefield storage, especially in large scale three-dimensional cases. To make the staining algorithm applicable to three-dimensional RTM, we develop a method to implement it in three-dimensional acoustic modelling in a standard staggered grid finite difference (FD) scheme. The implementation is adaptive to the order of spatial accuracy of the FD operator, and the method can be applied to elastic, electromagnetic, and other wave equations. Taking the memory requirement into account, we adopt a random boundary condition (RBC) to backward extrapolate the receiver wavefield and reconstruct it by reverse propagation using the final wavefield snapshot only. Meanwhile, we forward simulate the stained wavefield and source wavefield simultaneously using the nearly perfectly matched layer (NPML) boundary condition. Experiments on a complex geologic model indicate that the RBC-NPML collaborative strategy not only minimizes the memory consumption but also guarantees high quality imaging results. We apply the staining algorithm to three-dimensional RTM via the proposed strategy. Numerical results show that our staining algorithm can produce high S/N images in the target areas with other structures effectively muted.

  2. 36 CFR § 1192.4 - Miscellaneous instructions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... General § 1192.4 Miscellaneous instructions. (a) Dimensional conventions. Dimensions that are not noted as minimum or maximum are absolute. (b) Dimensional tolerances. All dimensions are subject to conventional...

  3. Remarks on the regularity criteria of three-dimensional magnetohydrodynamics system in terms of two velocity field components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamazaki, Kazuo

    2014-03-15

    We study the three-dimensional magnetohydrodynamics system and obtain its regularity criteria in terms of only two velocity vector field components, eliminating the condition on the third component completely. The proof consists of a new decomposition of the four nonlinear terms of the system and estimating a component of the magnetic vector field in terms of the same component of the velocity vector field. This result may be seen as a component reduction result of many previous works [C. He and Z. Xin, “On the regularity of weak solutions to the magnetohydrodynamic equations,” J. Differ. Equ. 213(2), 234–254 (2005); Y. Zhou, “Remarks on regularities for the 3D MHD equations,” Discrete Contin. Dyn. Syst. 12(5), 881–886 (2005)].

  4. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498

  5. Dimensional regularization in position space and a Forest Formula for Epstein-Glaser renormalization

    NASA Astrophysics Data System (ADS)

    Dütsch, Michael; Fredenhagen, Klaus; Keller, Kai Johannes; Rejzner, Katarzyna

    2014-12-01

    We reformulate dimensional regularization as a regularization method in position space and show that it can be used to give a closed expression for the renormalized time-ordered products as solutions to the induction scheme of Epstein-Glaser. This closed expression, which we call the Epstein-Glaser Forest Formula, is analogous to Zimmermann's Forest Formula for BPH renormalization. For scalar fields, the resulting renormalization method is always applicable, we compute several examples. We also analyze the Hopf algebraic aspects of the combinatorics. Our starting point is the Main Theorem of Renormalization of Stora and Popineau and the arising renormalization group as originally defined by Stückelberg and Petermann.

  6. Detecting chaos, determining the dimensions of tori and predicting slow diffusion in Fermi-Pasta-Ulam lattices by the Generalized Alignment Index method

    NASA Astrophysics Data System (ADS)

    Skokos, C.; Bountis, T.; Antonopoulos, C.

    2008-12-01

    The recently introduced GALI method is used for rapidly detecting chaos, determining the dimensionality of regular motion, and predicting slow diffusion in multi-dimensional Hamiltonian systems. We propose an efficient computation of the GALI_k indices, which represent volume elements of k randomly chosen deviation vectors from a given orbit, based on the Singular Value Decomposition (SVD) algorithm. We obtain theoretically, and verify numerically, asymptotic estimates of the GALIs' long-time behavior in the case of regular orbits lying on low-dimensional tori. The GALI_k indices are applied to rapidly detect chaotic oscillations, identify low-dimensional tori of Fermi-Pasta-Ulam (FPU) lattices at low energies, and predict weak diffusion away from quasiperiodic motion long before it is actually observed in the oscillations.
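    The SVD-based computation of GALI_k described above amounts to taking the product of the singular values of the matrix of unit-normalized deviation vectors; a minimal sketch:

    ```python
    import numpy as np

    def gali_k(deviation_vectors):
        """GALI_k as the product of the singular values of the matrix whose
        rows are the unit-normalized deviation vectors (the volume of the
        parallelepiped they span)."""
        W = np.array([v / np.linalg.norm(v) for v in deviation_vectors])
        return float(np.prod(np.linalg.svd(W, compute_uv=False)))

    # Nearly parallel deviation vectors (chaos: GALI_k -> 0) versus
    # orthogonal ones (regular motion keeps GALI_k bounded away from 0)
    v1 = np.array([1.0, 0.0, 0.0])
    v2 = np.array([1.0, 1e-8, 0.0])   # almost aligned with v1
    v3 = np.array([0.0, 1.0, 0.0])

    print(gali_k([v1, v2]))   # tiny: the volume element has nearly collapsed
    print(gali_k([v1, v3]))   # 1.0: fully independent directions
    ```

    On a chaotic orbit all deviation vectors align with the most unstable direction, so every GALI_k decays exponentially; on a torus of dimension s, GALI_k with k ≤ s stays roughly constant.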

  7. Comprehensive non-dimensional normalization of gait data.

    PubMed

    Pinzone, Ornella; Schwartz, Michael H; Baker, Richard

    2016-02-01

    Normalizing clinical gait analysis data is required to remove variability due to physical characteristics such as leg length and weight. This is particularly important for children, where both are associated with age. In most clinical centres conventional normalization (by mass only) is used, whereas there is a stronger biomechanical argument for non-dimensional normalization. This study used data from 82 typically developing children to compare how the two schemes performed over a wide range of temporal-spatial and kinetic parameters by calculating the coefficients of determination with leg length, weight and height. 81% of the conventionally normalized parameters had a coefficient of determination above the threshold for a statistical association (p<0.05), compared to 23% of those normalized non-dimensionally. All the conventionally normalized parameters exceeding this threshold showed a reduced association with non-dimensional normalization. In conclusion, non-dimensional normalization is more effective than conventional normalization in reducing the effects of height, weight and age across a comprehensive range of temporal-spatial and kinetic parameters. Copyright © 2015 Elsevier B.V. All rights reserved.
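    Non-dimensional normalization of the kind compared here divides each gait parameter by the combination of body mass, leg length, and g that carries the same physical dimensions (a sketch in the style of Hof's scheme, with a hypothetical reduced parameter set; the study covers many more parameters):

    ```python
    import numpy as np

    G = 9.81  # gravitational acceleration (m/s^2)

    def nondimensionalize(speed, step_length, moment, mass, leg_length):
        """Divide each quantity by the mass/leg-length/g combination with
        matching dimensions (Hof-style scheme; illustrative subset only)."""
        return {
            "speed": speed / np.sqrt(G * leg_length),        # dimensionless
            "step_length": step_length / leg_length,         # dimensionless
            "moment": moment / (mass * G * leg_length),      # joint moment (N*m)
        }

    # Conventional normalization divides moments by mass only (N*m/kg);
    # the non-dimensional version also removes the leg-length dependence.
    child = nondimensionalize(speed=1.1, step_length=0.5, moment=40.0,
                              mass=25.0, leg_length=0.6)
    print(child)
    ```

    Because every output is dimensionless, values from children of different sizes become directly comparable, which is exactly the property the study evaluates.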

  8. Regularity of Solutions of the Nonlinear Sigma Model with Gravitino

    NASA Astrophysics Data System (ADS)

    Jost, Jürgen; Keßler, Enno; Tolksdorf, Jürgen; Wu, Ruijun; Zhu, Miaomiao

    2018-02-01

    We propose a geometric setup to study analytic aspects of a variant of the supersymmetric two-dimensional nonlinear sigma model. This functional extends the functional of Dirac-harmonic maps by gravitino fields. The system of Euler-Lagrange equations of the two-dimensional nonlinear sigma model with gravitino is calculated explicitly. The gravitino terms pose additional analytic difficulties for showing smoothness of its weak solutions, which are overcome using Rivière's regularity theory and Riesz potential theory.

  9. Dimensionally regularized Tsallis' statistical mechanics and two-body Newton's gravitation

    NASA Astrophysics Data System (ADS)

    Zamora, J. D.; Rocca, M. C.; Plastino, A.; Ferri, G. L.

    2018-05-01

    Typical quantifiers of Tsallis' statistical mechanics, such as the partition function Z and the mean energy 〈 U 〉, exhibit poles. The poles appear for distinctive values of Tsallis' characteristic real parameter q, at a numerable set of rational numbers on the q-line. These poles are dealt with using dimensional regularization. The physical effects of these poles on the specific heats are studied here for the two-body classical gravitation potential.

  10. Three-Dimensional Messages for Interstellar Communication

    NASA Astrophysics Data System (ADS)

    Vakoch, Douglas A.

    One of the challenges facing independently evolved civilizations separated by interstellar distances is to communicate information unique to one civilization. One commonly proposed solution is to begin with two-dimensional pictorial representations of mathematical concepts and physical objects, in the hope that this will provide a foundation for overcoming linguistic barriers. However, significant aspects of such representations are highly conventional, and may not be readily intelligible to a civilization with different conventions. The process of teaching conventions of representation may be facilitated by the use of three-dimensional representations redundantly encoded in multiple formats (e.g., as both vectors and as rasters). After having illustrated specific conventions for representing mathematical objects in a three-dimensional space, this method can be used to describe a physical environment shared by transmitter and receiver: a three-dimensional space defined by the transmitter-receiver axis, and containing stars within that space. This method can be extended to show three-dimensional representations varying over time. Having clarified conventions for representing objects potentially familiar to both sender and receiver, novel objects can subsequently be depicted. This is illustrated through sequences showing interactions between human beings, which provide information about human behavior and personality. Extensions of this method may allow the communication of such culture-specific features as aesthetic judgments and religious beliefs. Limitations of this approach will be noted, with specific reference to ETI who are not primarily visual.

  11. Regularized matrix regression

    PubMed Central

    Zhou, Hua; Li, Lexin

    2014-01-01

    Summary Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples. PMID:24648830
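    Spectral regularization of matrix parameters is typically built on singular value soft-thresholding, the proximal operator of the nuclear norm; a minimal sketch of that building block (a generic illustration, not the paper's estimation algorithm):

    ```python
    import numpy as np

    def svt(B, tau):
        """Singular value soft-thresholding: shrink each singular value by
        tau and clip at zero; the proximal operator of the nuclear norm."""
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    rng = np.random.default_rng(7)
    signal = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))  # rank-2 truth
    noisy = signal + 0.3 * rng.normal(size=(30, 30))              # full-rank data
    denoised = svt(noisy, tau=4.0)

    err_noisy = np.linalg.norm(noisy - signal)
    err_denoised = np.linalg.norm(denoised - signal)
    print(np.linalg.matrix_rank(denoised), err_denoised < err_noisy)
    ```

    Thresholding zeroes the small singular values contributed mostly by noise, recovering a low-rank estimate; iterating this operator inside a gradient scheme yields nuclear-norm-regularized regression.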

  12. Comparison between audio-only and audiovisual biofeedback for regulating patients' respiration during four-dimensional radiotherapy

    PubMed Central

    Yu, Jesang; Choi, Ji Hoon; Ma, Sun Young; Jeung, Tae Sig

    2015-01-01

    Purpose To compare audio-only biofeedback to conventional audiovisual biofeedback for regulating patients' respiration during four-dimensional radiotherapy, limiting damage to healthy surrounding tissues caused by organ movement. Materials and Methods Six healthy volunteers were assisted by audiovisual or audio-only biofeedback systems to regulate their respirations. Volunteers breathed through a mask developed for this study by following computer-generated guiding curves displayed on a screen, combined with instructional sounds. They then performed breathing following instructional sounds only. The guiding signals and the volunteers' respiratory signals were logged at 20 samples per second. Results The standard deviations between the guiding and respiratory curves for the audiovisual and audio-only biofeedback systems were 21.55% and 23.19%, respectively; the average correlation coefficients were 0.9778 and 0.9756, respectively. By a paired t-test, the regularity achieved with audiovisual and audio-only biofeedback was statistically the same across the six volunteers' respirations. Conclusion The difference between the audiovisual and audio-only biofeedback methods was not significant. Audio-only biofeedback has many advantages, as patients do not require a mask and can quickly adapt to this method in the clinic. PMID:26484309

  13. A Three-Dimensional Finite Element Analysis of the Stress Distribution Generated by Splinted and Nonsplinted Prostheses in the Rehabilitation of Various Bony Ridges with Regular or Short Morse Taper Implants.

    PubMed

    Toniollo, Marcelo Bighetti; Macedo, Ana Paula; Rodrigues, Renata Cristina; Ribeiro, Ricardo Faria; de Mattos, Maria G

The aim of this study was to compare the biomechanical performance of splinted or nonsplinted prostheses over short- or regular-length Morse taper implants (5 mm and 11 mm, respectively) in the posterior area of the mandible using finite element analysis. Three-dimensional geometric models of regular implants (Ø 4 × 11 mm) and short implants (Ø 4 × 5 mm) were placed into a simulated model of the left posterior mandible that included the first premolar tooth; all teeth posterior to this tooth had been removed. The four experimental groups were as follows: regular group SP (three regular implants rehabilitated with splinted prostheses), regular group NSP (three regular implants rehabilitated with nonsplinted prostheses), short group SP (three short implants rehabilitated with splinted prostheses), and short group NSP (three short implants rehabilitated with nonsplinted prostheses). Oblique forces were simulated on molars (365 N) and premolars (200 N). Qualitative and quantitative analyses of the minimum principal stress in bone were performed using ANSYS Workbench software, version 10.0. Splinting in the short group reduced the stress on the bone surrounding the implants and tooth. The use of NSP or SP in the regular group resulted in similar stresses. When short implants are present, splinted prostheses are the best indication; nonsplinted prostheses are feasible only with regular-length implants.

  14. Phase-sensitive spectral estimation by the hybrid filter diagonalization method.

    PubMed

    Celik, Hasan; Ridge, Clark D; Shaka, A J

    2012-01-01

    A more robust way to obtain a high-resolution multidimensional NMR spectrum from limited data sets is described. The Filter Diagonalization Method (FDM) is used to analyze phase-modulated data and cast the spectrum in terms of phase-sensitive Lorentzian "phase-twist" peaks. These spectra are then used to obtain absorption-mode phase-sensitive spectra. In contrast to earlier implementations of multidimensional FDM, the absolute phase of the data need not be known beforehand, and linear phase corrections in each frequency dimension are possible, if they are required. Regularization is employed to improve the conditioning of the linear algebra problems that must be solved to obtain the spectral estimate. While regularization smoothes away noise and small peaks, a hybrid method allows the true noise floor to be correctly represented in the final result. Line shape transformation to a Gaussian-like shape improves the clarity of the spectra, and is achieved by a conventional Lorentzian-to-Gaussian transformation in the time-domain, after inverse Fourier transformation of the FDM spectra. The results obtained highlight the danger of not using proper phase-sensitive line shapes in the spectral estimate. The advantages of the new method for the spectral estimate are the following: (i) the spectrum can be phased by conventional means after it is obtained; (ii) there is a true and accurate noise floor; and (iii) there is some indication of the quality of fit in each local region of the spectrum. The method is illustrated with 2D NMR data for the first time, but is applicable to n-dimensional data without any restriction on the number of time/frequency dimensions. Copyright © 2011. Published by Elsevier Inc.

  15. Random packing of regular polygons and star polygons on a flat two-dimensional surface.

    PubMed

    Cieśla, Michał; Barbasz, Jakub

    2014-08-01

Random packing of unoriented regular polygons and star polygons on a two-dimensional flat continuous surface is studied numerically using a random sequential adsorption algorithm. The results are analyzed to determine the saturated random packing ratio as well as its density autocorrelation function. Additionally, the kinetics of packing growth and the available surface function are measured. In general, stars give lower packing ratios than polygons, but when the number of vertices is large enough, both shapes approach disks and, therefore, the properties of their packings reproduce the already known results for disks.
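Random sequential adsorption itself is simple to sketch; the toy version below packs equal disks (the limiting shape the abstract mentions) on a unit square rather than polygons or stars, with all parameters illustrative:

```python
import numpy as np

def rsa_disks(radius=0.02, attempts=5000, seed=0):
    """Random sequential adsorption: accept a trial disk only if it overlaps none placed before."""
    rng = np.random.default_rng(seed)
    centers = np.empty((0, 2))
    for _ in range(attempts):
        p = rng.random(2)                   # trial center on the unit square
        if centers.size == 0 or np.min(np.linalg.norm(centers - p, axis=1)) >= 2 * radius:
            centers = np.vstack([centers, p])
    return centers

centers = rsa_disks()
coverage = len(centers) * np.pi * 0.02 ** 2  # packed area fraction (edge effects ignored)
```

Saturation is approached as the acceptance rate decays; for disks the saturated random packing ratio is known to be about 0.547.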

  16. Self assembling proteins

    DOEpatents

    Yeates, Todd O.; Padilla, Jennifer; Colovos, Chris

    2004-06-29

    Novel fusion proteins capable of self-assembling into regular structures, as well as nucleic acids encoding the same, are provided. The subject fusion proteins comprise at least two oligomerization domains rigidly linked together, e.g. through an alpha helical linking group. Also provided are regular structures comprising a plurality of self-assembled fusion proteins of the subject invention, and methods for producing the same. The subject fusion proteins find use in the preparation of a variety of nanostructures, where such structures include: cages, shells, double-layer rings, two-dimensional layers, three-dimensional crystals, filaments, and tubes.

  17. [Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.

    PubMed

    Takacs, T; Jüttler, B

    2012-11-01

    Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.

  18. Boundary Conditions for Infinite Conservation Laws

    NASA Astrophysics Data System (ADS)

    Rosenhaus, V.; Bruzón, M. S.; Gandarias, M. L.

    2016-12-01

    Regular soliton equations (KdV, sine-Gordon, NLS) are known to possess infinite sets of local conservation laws. Some other classes of nonlinear PDE possess infinite-dimensional symmetries parametrized by arbitrary functions of independent or dependent variables; among them are Zabolotskaya-Khokhlov, Kadomtsev-Petviashvili, Davey-Stewartson equations and Born-Infeld equation. Boundary conditions were shown to play an important role for the existence of local conservation laws associated with infinite-dimensional symmetries. In this paper, we analyze boundary conditions for the infinite conserved densities of regular soliton equations: KdV, potential KdV, Sine-Gordon equation, and nonlinear Schrödinger equation, and compare them with boundary conditions for the conserved densities obtained from infinite-dimensional symmetries with arbitrary functions of independent and dependent variables.

  19. 78 FR 13694 - Conference of the Parties to the Convention on International Trade in Endangered Species of Wild...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-28

    ... Wild Fauna and Flora (CITES); Sixteenth Regular Meeting; Tentative U.S. Negotiating Positions for... Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), will attend the... . SUPPLEMENTARY INFORMATION: Background The Convention on International Trade in Endangered Species of Wild Fauna...

  20. Effect of the refractive index on the hawking temperature: an application of the Hamilton-Jacobi method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakalli, I., E-mail: izzet.sakalli@emu.edu.tr; Mirekhtiary, S. F., E-mail: fatemeh.mirekhtiary@emu.edu.tr

    2013-10-15

Hawking radiation of a non-asymptotically flat 4-dimensional spherically symmetric and static dilatonic black hole (BH) via the Hamilton-Jacobi (HJ) method is studied. In addition to the naive coordinates, we use four more different coordinate systems that are well-behaved at the horizon. Except for the isotropic coordinates, direct computation by the HJ method leads to the standard Hawking temperature for all coordinate systems. The isotropic coordinates allow extracting the index of refraction from the Fermat metric. It is explicitly shown that the index of refraction determines the value of the tunneling rate and its natural consequence, the Hawking temperature. The isotropic coordinates in the conventional HJ method produce a wrong result for the temperature of the linear dilaton. Here, we explain how this discrepancy can be resolved by regularizing the integral possessing a pole at the horizon.
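In the Hamilton-Jacobi tunneling picture used above, the temperature follows from the imaginary part of the classical action picked up at the horizon pole; schematically (standard tunneling relations, not the paper's specific dilatonic metric):

```latex
\Gamma \sim e^{-2\,\mathrm{Im}\,S},\qquad
\mathrm{Im}\,S = \mathrm{Im}\!\int p_r\,dr,\qquad
T_{\mathrm{H}} = \frac{E}{2\,\mathrm{Im}\,S}
```

The discrepancy discussed in the abstract then comes down to how the pole of the radial integrand at the horizon is regularized (the choice of contour prescription), which fixes Im S and hence the temperature.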

  1. Three-dimensional Gravity Inversion with a New Gradient Scheme on Unstructured Grids

    NASA Astrophysics Data System (ADS)

    Sun, S.; Yin, C.; Gao, X.; Liu, Y.; Zhang, B.

    2017-12-01

Stabilized gradient-based methods have proven efficient for inverse problems. In these methods, driving the gradient toward zero minimizes the objective function, so the gradient of the objective function determines the inversion results. By analyzing the cause of the poor depth resolution of gradient-based gravity inversion methods, we find that imposing a depth weighting functional on the conventional gradient can improve the depth resolution to some extent. However, the improvement is affected by the regularization parameter, and the effect of the regularization term becomes smaller with increasing depth (shown as Figure 1 (a)). In this paper, we propose a new gradient scheme for gravity inversion by introducing a weighted model vector. The new gradient improves the depth resolution more efficiently, is independent of the regularization parameter, and its effect is not weakened as depth increases. Besides, a fuzzy c-means clustering method and a smoothing operator are both used as regularization terms to yield an internally consecutive inverse model with sharp boundaries (Sun and Li, 2015). We have tested our new gradient scheme with unstructured grids on synthetic data to illustrate the effectiveness of the algorithm. Gravity forward modeling with unstructured grids is based on the algorithm proposed by Okabe (1979). We use a linear conjugate gradient inversion scheme to solve the inversion problem. The numerical experiments show a great improvement in depth resolution compared with the regular gradient scheme, and the inverse model is compact at all depths (shown as Figure 1 (b)). Acknowledgements: This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).
References: Sun J, Li Y. 2015. Multidomain petrophysically constrained inversion and geology differentiation using guided fuzzy c-means clustering. Geophysics, 80(4): ID1-ID18. Okabe M. 1979. Analytical expressions for gravity anomalies due to homogeneous polyhedral bodies and translations into magnetic anomalies. Geophysics, 44(4), 730-741.
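Depth weighting of the kind discussed above is commonly taken as a power law of depth, as in the Li-and-Oldenburg-style weight w(z) = (z + z0)^(-beta/2); the sketch below shows how dividing a kernel-driven gradient by w(z)^2 counteracts its decay with depth. All names and values are illustrative, not the authors':

```python
import numpy as np

def depth_weight(z, z0=50.0, beta=2.0):
    """Power-law depth weighting, counteracting the decay of the gravity kernel."""
    return (z + z0) ** (-beta / 2.0)

# toy 1-D column of cells: a kernel-driven gradient decays with depth,
# and dividing by w(z)^2 restores sensitivity to the deep cells
z = np.linspace(0.0, 1000.0, 50)            # cell depths (m), illustrative
grad = np.exp(-z / 200.0)                   # mock data-misfit gradient
grad_weighted = grad / depth_weight(z) ** 2
```

The unweighted gradient is orders of magnitude smaller at the bottom of the column than at the surface, while the reweighted gradient keeps the deep cells active in the model update, which is the depth-resolution effect the abstract describes.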

  2. Task-driven optimization of CT tube current modulation and regularization in model-based iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2017-06-01

Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM; it includes both the regularization strength, which controls overall smoothness, and directional weights, which permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single-location optimization, the local detectability index (d′) of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP using a minimum variance criterion yielded worse task-based performance than an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views.
Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with a maximum improvement in d′ of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction, and that strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.

  3. Comparison of cryoablation with 3D mapping versus conventional mapping for the treatment of atrioventricular re-entrant tachycardia and right-sided paraseptal accessory pathways.

    PubMed

    Russo, Mario S; Drago, Fabrizio; Silvetti, Massimo S; Righi, Daniela; Di Mambro, Corrado; Placidi, Silvia; Prosperi, Monica; Ciani, Michele; Naso Onofrio, Maria T; Cannatà, Vittorio

    2016-06-01

    Aim Transcatheter cryoablation is a well-established technique for the treatment of atrioventricular nodal re-entry tachycardia and atrioventricular re-entry tachycardia in children. Fluoroscopy or three-dimensional mapping systems can be used to perform the ablation procedure. The aim of this study was to compare the success rate of cryoablation procedures for the treatment of right septal accessory pathways and atrioventricular nodal re-entry circuits in children using conventional or three-dimensional mapping and to evaluate whether three-dimensional mapping was associated with reduced patient radiation dose compared with traditional mapping. In 2013, 81 children underwent transcatheter cryoablation at our institution, using conventional mapping in 41 children - 32 atrioventricular nodal re-entry tachycardia and nine atrioventricular re-entry tachycardia - and three-dimensional mapping in 40 children - 24 atrioventricular nodal re-entry tachycardia and 16 atrioventricular re-entry tachycardia. Using conventional mapping, the overall success rate was 78.1 and 66.7% in patients with atrioventricular nodal re-entry tachycardia or atrioventricular re-entry tachycardia, respectively. Using three-dimensional mapping, the overall success rate was 91.6 and 75%, respectively (p=ns). The use of three-dimensional mapping was associated with a reduction in cumulative air kerma and cumulative air kerma-area product of 76.4 and 67.3%, respectively (p<0.05). The use of three-dimensional mapping compared with the conventional fluoroscopy-guided method for cryoablation of right septal accessory pathways and atrioventricular nodal re-entry circuits in children was associated with a significant reduction in patient radiation dose without an increase in success rate.

  4. "Ersatz" and "hybrid" NMR spectral estimates using the filter diagonalization method.

    PubMed

    Ridge, Clark D; Shaka, A J

    2009-03-12

    The filter diagonalization method (FDM) is an efficient and elegant way to make a spectral estimate purely in terms of Lorentzian peaks. As NMR spectral peaks of liquids conform quite well to this model, the FDM spectral estimate can be accurate with far fewer time domain points than conventional discrete Fourier transform (DFT) processing. However, noise is not efficiently characterized by a finite number of Lorentzian peaks, or by any other analytical form, for that matter. As a result, noise can affect the FDM spectrum in different ways than it does the DFT spectrum, and the effect depends on the dimensionality of the spectrum. Regularization to suppress (or control) the influence of noise to give an "ersatz", or EFDM, spectrum is shown to sometimes miss weak features, prompting a more conservative implementation of filter diagonalization. The spectra obtained, called "hybrid" or HFDM spectra, are acquired by using regularized FDM to obtain an "infinite time" spectral estimate and then adding to it the difference between the DFT of the data and the finite time FDM estimate, over the same time interval. HFDM has a number of advantages compared to the EFDM spectra, where all features must be Lorentzian. They also show better resolution than DFT spectra. The HFDM spectrum is a reliable and robust way to try to extract more information from noisy, truncated data records and is less sensitive to the choice of regularization parameter. In multidimensional NMR of liquids, HFDM is a conservative way to handle the problems of noise, truncation, and spectral peaks that depart significantly from the model of a multidimensional Lorentzian peak.

  5. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross-validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces-based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results not only show that our proposed CV-SES has better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also show that CV-SES uses less running time.

  6. Sparse High Dimensional Models in Economics

    PubMed Central

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2010-01-01

This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments in theory, methods, and implementations of penalized least squares and penalized likelihood methods are highlighted. These variable selection methods have proven effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635
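The penalized least-squares estimators surveyed above are typically computed by coordinate-wise soft-thresholding; a minimal lasso sketch (illustrative, not taken from the paper):

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def lasso_cd(X, y, lam, iters=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual excluding b_j
            b[j] = soft_threshold(X[:, j] @ r / n, lam) / col_sq[j]
    return b

# toy sparse recovery: only the first two coefficients are non-zero
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta_true = np.array([3.0, -2.0] + [0.0] * 8)
y = X @ beta_true + 0.1 * rng.standard_normal(100)
b = lasso_cd(X, y, lam=0.1)
```

The thresholding sets small coefficients exactly to zero, which is the variable-selection behavior the review discusses.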

  7. An approach for the regularization of a power flow solution around the maximum loading point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kataoka, Y.

    1992-08-01

In the conventional power flow solution, the boundary conditions are directly specified by the active and reactive power at each node, so that the singular point coincides with the maximum loading point. For this reason, the computations are often disturbed by ill-conditioning. This paper proposes a new method for achieving wide-range regularity by modifying the conventional power flow solution method, thereby eliminating the singular point or shifting it to the region with voltage lower than that of the maximum loading point. The continuous tracing of V-P curves through the maximum loading point is thus realized. The efficiency and effectiveness of the method are tested on a practical 598-node system in comparison with the conventional method.

  8. Exploratory Mediation Analysis via Regularization

    PubMed Central

    Serang, Sarfaraz; Jacobucci, Ross; Brimhall, Kim C.; Grimm, Kevin J.

    2017-01-01

    Exploratory mediation analysis refers to a class of methods used to identify a set of potential mediators of a process of interest. Despite its exploratory nature, conventional approaches are rooted in confirmatory traditions, and as such have limitations in exploratory contexts. We propose a two-stage approach called exploratory mediation analysis via regularization (XMed) to better address these concerns. We demonstrate that this approach is able to correctly identify mediators more often than conventional approaches and that its estimates are unbiased. Finally, this approach is illustrated through an empirical example examining the relationship between college acceptance and enrollment. PMID:29225454

  9. Partial regularity of viscosity solutions for a class of Kolmogorov equations arising from mathematical finance

    NASA Astrophysics Data System (ADS)

    Rosestolato, M.; Święch, A.

    2017-02-01

We study value functions which are viscosity solutions of certain Kolmogorov equations. Using PDE techniques we prove that they are C^{1+α} regular on special finite-dimensional subspaces. The problem has origins in hedging derivatives of risky assets in mathematical finance.

  10. Effects that different types of sports have on the hearts of children and adolescents and the value of two-dimensional strain-strain-rate echocardiography.

    PubMed

    Binnetoğlu, Fatih Köksal; Babaoğlu, Kadir; Altun, Gürkan; Kayabey, Özlem

    2014-01-01

    Whether the hypertrophy found in the hearts of athletes is physiologic or a risk factor for the progression of pathologic hypertrophy remains controversial. The diastolic and systolic functions of athletes with left ventricular (LV) hypertrophy usually are normal when measured by conventional methods. More precise assessment of global and regional myocardial function may be possible using a newly developed two-dimensional (2D) strain echocardiographic method. This study evaluated the effects that different types of sports have on the hearts of children and adolescents and compared the results of 2D strain and strain-rate echocardiographic techniques with conventional methods. Athletes from clubs for five different sports (basketball, swimming, football, wrestling, and tennis) who had practiced regularly at least 3 h per week during at least the previous 2 years were included in the study. The control group consisted of sedentary children and adolescents with no known cardiac or systemic diseases (n = 25). The athletes were grouped according to the type of exercise: dynamic (football, tennis), static (wrestling), or static and dynamic (basketball, swimming). Shortening fraction and ejection fraction values were within normal limits for the athletes in all the sports disciplines. Across all 140 athletes, LV geometry was normal in 58 athletes (41.4 %), whereas 22 athletes (15.7 %) had concentric remodeling, 20 (14.3 %) had concentric hypertrophy, and 40 (28.6 %) had eccentric hypertrophy. Global LV longitudinal strain values obtained from the average of apical four-, two-, and three-chamber global strain values were significantly lower for the basketball players than for all the other groups (p < 0.001).

  11. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
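Trace ratio maximization of the form tr(VᵀAV)/tr(VᵀBV) is commonly solved by the iterative eigen-decomposition scheme sketched below (a generic sketch, not MKL-TR itself, with illustrative scatter matrices):

```python
import numpy as np

def trace_ratio(A, B, d, iters=50):
    """Maximize tr(V^T A V) / tr(V^T B V) over orthonormal V (n x d):
    alternately solve eig(A - lam*B) for V and update lam to the current ratio."""
    lam = 0.0
    for _ in range(iters):
        w, U = np.linalg.eigh(A - lam * B)
        V = U[:, np.argsort(w)[-d:]]        # top-d eigenvectors
        lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
    return V, lam

# illustrative symmetric positive (semi-)definite scatter matrices
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)); A = M @ M.T
N = rng.standard_normal((6, 6)); B = N @ N.T + np.eye(6)
V, lam = trace_ratio(A, B, d=2)
```

At convergence, lam is the globally optimal trace ratio, so no other orthonormal basis of the same dimension can exceed it.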

  12. Low-rank separated representation surrogates of high-dimensional stochastic functions: Application in Bayesian inference

    NASA Astrophysics Data System (ADS)

    Validi, AbdoulAhad

    2014-03-01

This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation linearly depends on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
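The Tikhonov-regularized least-squares step described above, with a roughening matrix L that differentiates the solution, has the closed form x = (AᵀA + λLᵀL)⁻¹Aᵀb; a minimal sketch with illustrative data:

```python
import numpy as np

def tikhonov_solve(A, b, lam, L):
    """Solve min ||A x - b||^2 + lam * ||L x||^2 in closed form."""
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)

rng = np.random.default_rng(0)
n = 40
A = rng.standard_normal((60, n))
x_true = np.sin(np.linspace(0, np.pi, n))       # a smooth underlying solution
b = A @ x_true + 0.5 * rng.standard_normal(60)

# first-difference roughening matrix: (L x)_i = x_{i+1} - x_i penalizes the gradient
L = np.diff(np.eye(n), axis=0)
x_reg = tikhonov_solve(A, b, lam=10.0, L=L)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]     # unregularized baseline
```

The regularized solution trades a slightly larger data residual for a smoother (less rough) estimate, which is the role of the roughening matrix in the abstract.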

  13. Procedural validity of the AUDADIS-5 depression, anxiety and post-traumatic stress disorder modules: substance abusers and others in the general population*

    PubMed Central

    Hasin, Deborah S.; Shmulewitz, Dvora; Stohl, Malka; Greenstein, Eliana; Aivadyan, Christina; Morita, Kara; Saha, Tulshi; Aharonovich, Efrat; Jung, Jeesun; Zhang, Haitao; Nunes, Edward V.; Grant, Bridget F.

    2016-01-01

Background Little is known about the procedural validity of lay-administered, fully-structured assessments of depressive, anxiety and post-traumatic stress (PTSD) disorders in the general population as determined by comparison to clinical re-appraisal, and whether this differs between current regular substance abusers and others. We evaluated the procedural validity of the Alcohol Use Disorder and Associated Disabilities Interview Schedule, DSM-5 Version (AUDADIS-5) assessment of these disorders through clinician re-interviews. Methods Test-retest design among respondents from the National Epidemiologic Survey on Alcohol and Related Conditions-III (NESARC-III): (264 current regular substance abusers, 447 others). Clinicians blinded to AUDADIS-5 results administered the semi-structured Psychiatric Research Interview for Substance and Mental Disorders, DSM-5 version (PRISM-5). AUDADIS-5/PRISM-5 concordance was indicated by kappa (κ) for diagnoses and intraclass correlation coefficients (ICC) for dimensional measures (DSM-5 symptom or criterion counts). Results were compared between current regular substance abusers and others. Results AUDADIS-5 and PRISM-5 concordance for DSM-5 depressive disorders, anxiety disorders and PTSD was generally fair to moderate (κ = 0.24–0.59), with concordance on dimensional scales much better (ICC = 0.53–0.81). Concordance differed little between regular substance abusers and others. Conclusions AUDADIS-5/PRISM-5 concordance indicated procedural validity for the AUDADIS-5 among substance abusers and others, suggesting that AUDADIS-5 diagnoses of DSM-5 depressive, anxiety and PTSD disorders are informative measures in both groups in epidemiologic studies. The stronger concordance on dimensional measures supports the current movement towards dimensional psychopathology measures, suggesting that such measures provide important information for research in the NESARC-III and other datasets, and possibly for clinical purposes as well. 
PMID:25939727
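Concordance statistics like Cohen's κ are computed from the cross-table of paired diagnoses; a minimal sketch with hypothetical counts (not the study's data):

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa from a square cross-table of paired ratings."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    po = np.trace(table) / n                           # observed agreement
    pe = (table.sum(0) * table.sum(1)).sum() / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# hypothetical AUDADIS-5 (rows) vs PRISM-5 (columns) diagnosis counts
table = [[30, 20],
         [15, 135]]
kappa = cohens_kappa(table)
print(round(kappa, 3))
```

These illustrative counts give a κ of about 0.52, i.e. agreement in the "moderate" band the abstract reports.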

  14. Evaluation of physicochemical properties of root-end filling materials using conventional and Micro-CT tests.

    PubMed

    Torres, Fernanda Ferrari Esteves; Bosso-Martelo, Roberta; Espir, Camila Galletti; Cirelli, Joni Augusto; Guerreiro-Tanomaru, Juliane Maria; Tanomaru-Filho, Mario

    2017-01-01

To evaluate the solubility, dimensional stability, filling ability, and volumetric change of root-end filling materials using conventional tests and new micro-CT-based methods. The results suggested correlated or complementary data between the proposed tests. At 7 days, BIO showed higher solubility, and at 30 days it showed higher volumetric change in comparison with MTA (p<0.05). With regard to volumetric change, the tested materials were similar (p>0.05) at 7 days; at 30 days, they presented similar solubility. BIO and MTA showed higher dimensional stability than ZOE (p<0.05). ZOE and BIO showed higher filling ability (p<0.05). ZOE presented a higher dimensional change, and BIO had greater solubility, after 7 days. BIO presented filling ability and dimensional stability, but greater volumetric change than MTA, after 30 days. Micro-CT can provide important data on the physicochemical properties of materials, complementing conventional tests.

  15. [Evaluation of production and clinical working time of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays for complete denture].

    PubMed

    Wei, L; Chen, H; Zhou, Y S; Sun, Y C; Pan, S X

    2017-02-18

    To compare the technician fabrication time and clinical working time of custom trays fabricated using two different methods, three-dimensional printing and conventional manual fabrication, and to demonstrate the feasibility of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays in clinical use from the perspective of clinical time cost. Twenty edentulous patients were recruited into this prospective, single-blind, randomized self-controlled clinical trial. Two custom trays were fabricated for each participant. One custom tray was fabricated with the functional suitable denture (FSD) system through a CAD/CAM process, and the other was manually fabricated using conventional methods. Final impressions were then taken using both custom trays, and each final impression was used to fabricate a complete denture. The technician production time of the custom trays and the clinical working time of taking the final impression were recorded. The average times spent on fabricating the three-dimensional printing custom trays using the FSD system and fabricating the conventional custom trays manually were (28.6±2.9) min and (31.1±5.7) min, respectively. The average times spent on making the final impression with the three-dimensional printing custom trays and with the manually fabricated conventional custom trays were (23.4±11.5) min and (25.4±13.0) min, respectively. There was a significant difference in both the technician fabrication time and the clinical working time between the two methods (P<0.05).
    The average times spent on fabricating three-dimensional printing custom trays with the FSD system and on making the final impression with those trays were less than the corresponding times for the conventional manually fabricated custom trays, which shows that FSD three-dimensional printing custom trays are less time-consuming in both the clinical and the laboratory process. In addition, when custom trays are manufactured by three-dimensional printing, there is no need to pour a preliminary cast after taking the primary impression, which saves impression and model material. For complete denture restoration, manufacturing custom trays with the FSD system is worth popularizing.

  16. Behaviour of Cohesionless Soil Reinforced with Three Dimensional Inclusions Under Plane Strain Conditions

    NASA Astrophysics Data System (ADS)

    Harikumar, M.; Sankar, N.; Chandrakaran, S.

    2015-09-01

    Since 1969, when the concept of earth reinforcement was introduced by Henry Vidal, a wide variety of materials, such as steel bars, tire shreds, polypropylene, polyester, glass fibres, and coir and jute fibres, have been added to soil masses either randomly or in a regular, oriented manner. Conventional reinforcements are two-dimensional or planar, in the form of strips of negligible width or sheets. In this investigation, a novel concept of multi-oriented plastic reinforcement (hexa-pods) is discussed. Direct shear tests were conducted on unreinforced and reinforced dry fine, medium and coarse sands. Detailed parametric studies with respect to the effective grain size of the soil (d10), normal stress (σ) and the volume ratio of hexa-pods (Vr) were performed. Addition of hexa-pods increased the shear strength parameters, viz. the peak deviatoric stress and the angle of internal friction. The hexa-pods also changed the brittle behaviour of unreinforced sand samples to ductile behaviour. Although the peak shear stress did not show a considerable improvement, the angle of internal friction improved noticeably. Addition of a single layer of reinforcement along the shear plane also reduced the post-peak loss of strength and changed the soil behaviour from brittle to ductile.

  17. The LPM effect in sequential bremsstrahlung: dimensional regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, Peter; Chang, Han-Chih; Iqbal, Shahin

    The splitting processes of bremsstrahlung and pair production in a medium are coherent over large distances in the very high energy limit, which leads to a suppression known as the Landau-Pomeranchuk-Migdal (LPM) effect. Of recent interest is the case when the coherence lengths of two consecutive splitting processes overlap (which is important for understanding corrections to standard treatments of the LPM effect in QCD). In previous papers, we have developed methods for computing such corrections without making soft-gluon approximations. However, our methods require consistent treatment of canceling ultraviolet (UV) divergences associated with coincident emission times, even for processes with tree-level amplitudes. In this paper, we show how to use dimensional regularization to properly handle the UV contributions. We also present a simple diagnostic test that any consistent UV regularization method for this problem needs to pass.

  18. The LPM effect in sequential bremsstrahlung: dimensional regularization

    DOE PAGES

    Arnold, Peter; Chang, Han-Chih; Iqbal, Shahin

    2016-10-19

    The splitting processes of bremsstrahlung and pair production in a medium are coherent over large distances in the very high energy limit, which leads to a suppression known as the Landau-Pomeranchuk-Migdal (LPM) effect. Of recent interest is the case when the coherence lengths of two consecutive splitting processes overlap (which is important for understanding corrections to standard treatments of the LPM effect in QCD). In previous papers, we have developed methods for computing such corrections without making soft-gluon approximations. However, our methods require consistent treatment of canceling ultraviolet (UV) divergences associated with coincident emission times, even for processes with tree-level amplitudes. In this paper, we show how to use dimensional regularization to properly handle the UV contributions. We also present a simple diagnostic test that any consistent UV regularization method for this problem needs to pass.

  19. A note on the regularity of solutions of infinite dimensional Riccati equations

    NASA Technical Reports Server (NTRS)

    Burns, John A.; King, Belinda B.

    1994-01-01

    This note is concerned with the regularity of solutions of algebraic Riccati equations arising from infinite dimensional LQR and LQG control problems. We show that distributed parameter systems described by certain parabolic partial differential equations often have a special structure that smoothes solutions of the corresponding Riccati equation. This analysis is motivated by the need to find specific representations for Riccati operators that can be used in the development of computational schemes for problems where the input and output operators are not Hilbert-Schmidt. This situation occurs in many boundary control problems and in certain distributed control problems associated with optimal sensor/actuator placement.

  20. A closed expression for the UV-divergent parts of one-loop tensor integrals in dimensional regularization

    NASA Astrophysics Data System (ADS)

    Sulyok, G.

    2017-07-01

    Starting from the general definition of a one-loop tensor N-point function, we use its Feynman parametrization to calculate the ultraviolet (UV-)divergent part of an arbitrary tensor coefficient in the framework of dimensional regularization. In contrast to existing recursion schemes, we are able to present a general analytic result in closed form that enables direct determination of the UV-divergent part of any one-loop tensor N-point coefficient independent from UV-divergent parts of other one-loop tensor N-point coefficients. Simplified formulas and explicit expressions are presented for A-, B-, C-, D-, E-, and F-functions.
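    For orientation, the lowest-rank entries of such a closed expression reduce to familiar results. The formulas below are the standard textbook UV-divergent parts in Passarino-Veltman conventions (with d = 4 − 2ε and Δ_ε = 1/ε − γ_E + ln 4π), quoted for illustration rather than taken from the paper itself:

```latex
A_0(m)\big|_{\mathrm{UV}} = m^2\,\Delta_\epsilon, \qquad
B_0(p;m_0,m_1)\big|_{\mathrm{UV}} = \Delta_\epsilon, \qquad
B_1\big|_{\mathrm{UV}} = -\tfrac{1}{2}\,\Delta_\epsilon, \qquad
B_{00}\big|_{\mathrm{UV}} = \tfrac{1}{12}\bigl(3m_0^2 + 3m_1^2 - p^2\bigr)\Delta_\epsilon .
```

    The closed expression of the paper generalizes this pattern to arbitrary tensor rank and arbitrary N.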

  1. Optically programmable encoder based on light propagation in two-dimensional regular nanoplates.

    PubMed

    Li, Ya; Zhao, Fangyin; Guo, Shuai; Zhang, Yongyou; Niu, Chunhui; Zeng, Ruosheng; Zou, Bingsuo; Zhang, Wensheng; Ding, Kang; Bukhtiar, Arfan; Liu, Ruibin

    2017-04-07

    We design an efficient optically controlled microdevice based on CdSe nanoplates. Two-dimensional CdSe nanoplates exhibit lighting patterns around the edges and can be realized as a new type of optically controlled programmable encoder. The light source is used to excite the nanoplates and control the logical position under vertical pumping mode by the objective lens. At each excitation point in the nanoplates, the preferred light-propagation routes are along the normal direction and perpendicular to the edges; the light then emits from the edges to form a localized lighting section. The intensity distribution around the edges of different nanoplates shows that the small-scale lighting sections, defined as '1', are much stronger than the dark sections, defined as '0', along the edge. These '0' and '1' are the basic logic elements needed to compose logically functional devices. The observed propagation rules are consistent with theoretical simulations, meaning that the guided-light route in two-dimensional semiconductor nanoplates is regular and predictable. The same behaviour was also observed in regular CdS nanoplates. Basic theoretical analysis and experiments prove that the guided light and exit position follow rules originating mainly from the shape rather than the material itself.

  2. Constrained Low-Rank Learning Using Least Squares-Based Regularization.

    PubMed

    Li, Ping; Yu, Jun; Wang, Meng; Zhang, Luming; Cai, Deng; Li, Xuelong

    2017-12-01

    Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting the least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of original data can be paired with some informative structure by imposing an appropriate constraint, e.g., Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.
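    The nuclear norm minimization solved above by the inexact augmented Lagrange multiplier algorithm relies, at its core, on the proximal operator of the nuclear norm, singular value thresholding. A minimal generic sketch (not the authors' full constrained solver):

```python
import numpy as np

def singular_value_threshold(M, tau):
    """Proximal operator of the nuclear norm: shrink every singular value
    of M toward zero by tau, the basic step inside inexact-ALM solvers
    for nuclear-norm-regularized problems."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Shrinking the identity (singular values 1, 1) by 0.5 leaves 0.5 * I;
# a threshold larger than every singular value yields the zero matrix.
print(singular_value_threshold(np.eye(2), 0.5))
```

    Each ALM iteration alternates such a thresholding step with updates of the other variables and of the Lagrange multipliers.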

  3. 77 FR 2270 - Proposed Information Collection; Comment Request; Firearms Convention

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-17

    ... Inter-American Convention Against the Illicit Manufacturing of and Trafficking in Firearms Ammunition... Collection Submitted electronically or on paper. III. Data OMB Control Number: 0694-0114. Form Number(s): N/A. Type of Review: Regular submission. Affected Public: Business or other for-profit organizations...

  4. Hydrologic Process Regularization for Improved Geoelectrical Monitoring of a Lab-Scale Saline Tracer Experiment

    NASA Astrophysics Data System (ADS)

    Oware, E. K.; Moysey, S. M.

    2016-12-01

    Regularization stabilizes the geophysical imaging problem resulting from sparse and noisy measurements that render solutions unstable and non-unique. Conventional regularization constraints are, however, independent of the physics of the underlying process and often produce smoothed-out tomograms with mass underestimation. Cascaded time-lapse (CTL) is a widely used reconstruction technique for monitoring wherein a tomogram obtained from the background dataset is employed as starting model for the inversion of subsequent time-lapse datasets. In contrast, a proper orthogonal decomposition (POD)-constrained inversion framework enforces physics-based regularization based upon prior understanding of the expected evolution of state variables. The physics-based constraints are represented in the form of POD basis vectors. The basis vectors are constructed from numerically generated training images (TIs) that mimic the desired process. The target can be reconstructed from a small number of selected basis vectors, hence, there is a reduction in the number of inversion parameters compared to the full dimensional space. The inversion involves finding the optimal combination of the selected basis vectors conditioned on the geophysical measurements. We apply the algorithm to 2-D lab-scale saline transport experiments with electrical resistivity (ER) monitoring. We consider two transport scenarios with one and two mass injection points evolving into unimodal and bimodal plume morphologies, respectively. The unimodal plume is consistent with the assumptions underlying the generation of the TIs, whereas bimodality in plume morphology was not conceptualized. We compare difference tomograms retrieved from POD with those obtained from CTL. Qualitative comparisons of the difference tomograms with images of their corresponding dye plumes suggest that POD recovered more compact plumes in contrast to those of CTL. 
    While mass recovery generally deteriorated with increasing number of time-steps, POD outperformed CTL in terms of mass recovery accuracy rates. POD is also computationally superior, requiring only 2.5 minutes per inversion compared with 3 hours for CTL.
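    The dimension reduction behind the POD-constrained inversion can be sketched in a few lines: the basis vectors are the leading left singular vectors of a snapshot matrix of training images, and the target is then parameterized by a handful of basis coefficients. The random "snapshots" below merely stand in for the numerically simulated transport runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training images: each column is one simulated plume
# snapshot (random vectors stand in for transport-model runs here).
snapshots = rng.standard_normal((100, 20))   # 100 pixels, 20 runs

# POD basis = left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
k = 5                         # retain only the k leading basis vectors
basis = U[:, :k]

# An image is now represented by k coefficients instead of 100 pixels;
# the geophysical inversion searches over these k numbers only.
image = snapshots[:, 0]
coeffs = basis.T @ image      # project onto the reduced space
approx = basis @ coeffs       # reconstruct from k coefficients
print(coeffs.shape)           # → (5,)
```

    The inversion then conditions these few coefficients on the ER measurements, which is what makes each POD inversion so much cheaper than a full pixel-space inversion.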

  5. Information Gain Based Dimensionality Selection for Classifying Text Documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel genetic-algorithm-based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
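    The information gain that drives the mutation probabilities above is the reduction in label entropy obtained by conditioning on a feature. A minimal sketch with hypothetical term-presence data (not the paper's dataset):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Reduction in label entropy from splitting on a discrete feature."""
    n = len(labels)
    gain = entropy(labels)
    for value in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == value]
        gain -= len(subset) / n * entropy(subset)
    return gain

# Hypothetical term-presence feature vs. binary document classes.
labels  = [1, 1, 1, 0, 0, 0]
feature = [1, 1, 1, 0, 0, 0]   # a perfectly predictive term
print(information_gain(feature, labels))  # → 1.0 bit
```

    Dimensions with high information gain would then be mutated into the chromosome population more aggressively than uninformative ones.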

  6. Semi-automated brain tumor segmentation on multi-parametric MRI using regularized non-negative matrix factorization.

    PubMed

    Sauwen, Nicolas; Acou, Marjan; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Huffel, Sabine Van

    2017-05-04

    Segmentation of gliomas in multi-parametric (MP-)MR images is challenging due to their heterogeneous nature in terms of size, appearance and location. Manual tumor segmentation is a time-consuming task and clinical practice would benefit from (semi-) automated segmentation of the different tumor compartments. We present a semi-automated framework for brain tumor segmentation based on non-negative matrix factorization (NMF) that does not require prior training of the method. L1-regularization is incorporated into the NMF objective function to promote spatial consistency and sparseness of the tissue abundance maps. The pathological sources are initialized through user-defined voxel selection. Knowledge about the spatial location of the selected voxels is combined with tissue adjacency constraints in a post-processing step to enhance segmentation quality. The method is applied to an MP-MRI dataset of 21 high-grade glioma patients, including conventional, perfusion-weighted and diffusion-weighted MRI. To assess the effect of using MP-MRI data and the L1-regularization term, analyses are also run using only conventional MRI and without L1-regularization. Robustness against user input variability is verified by considering the statistical distribution of the segmentation results when repeatedly analyzing each patient's dataset with a different set of random seeding points. Using L1-regularized semi-automated NMF segmentation, mean Dice-scores of 65%, 74%, and 80% are found for active tumor, the tumor core and the whole tumor region, respectively. Mean Hausdorff distances of 6.1 mm, 7.4 mm and 8.2 mm are found for the same three regions. Lower Dice-scores and higher Hausdorff distances are found without L1-regularization and when only considering conventional MRI data. Based on the mean Dice-scores and Hausdorff distances, segmentation results are competitive with the state of the art in the literature.
Robust results were found for most patients, although careful voxel selection is mandatory to avoid sub-optimal segmentation.
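    L1-penalized NMF of the kind described above can be sketched with generic multiplicative updates, in which the L1 weight simply enters the denominator of the H update. This is a toy stand-in for the authors' method, with a random rank-2 matrix in place of MP-MRI data.

```python
import numpy as np

def nmf_l1(V, rank, lam=0.01, iters=200, seed=0):
    """Frobenius NMF with an L1 penalty on H, via multiplicative updates
    (a generic sketch, not the paper's implementation)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    eps = 1e-12
    for _ in range(iters):
        # The L1 weight lam appears in the denominator, shrinking H
        # toward zero and so promoting sparse abundance maps.
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Hypothetical nonnegative "voxel x channel" matrix of exact rank 2.
rng = np.random.default_rng(1)
V = rng.random((30, 2)) @ rng.random((2, 8))
W, H = nmf_l1(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    In the segmentation setting, the columns of W would be the tissue signatures and the rows of H the (sparse) abundance maps that are thresholded into tumor compartments.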

  7. Regional regularization method for ECT based on spectral transformation of Laplacian

    NASA Astrophysics Data System (ADS)

    Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.

    2016-10-01

    Image reconstruction in electrical capacitance tomography (ECT) is an ill-posed inverse problem, and regularization techniques are usually used to suppress noise. An anisotropic regional regularization algorithm for ECT is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimum regional regularizer, a priori knowledge of the local nonlinearity degree of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify the capability of the new regularization algorithm to reconstruct images of superior quality compared with two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with the experimental data.
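    The conventional Tikhonov baseline against which the regional scheme is compared has a simple closed form, sketched below on a deliberately ill-conditioned toy sensitivity matrix (the matrix and data are hypothetical, not an ECT model).

```python
import numpy as np

def tikhonov(J, b, lam):
    """Zeroth-order Tikhonov solution  x = (J^T J + lam*I)^(-1) J^T b."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ b)

# Hypothetical near-singular sensitivity matrix and measurement vector.
J = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

x_small = tikhonov(J, b, 1e-8)   # barely regularized
x_large = tikhonov(J, b, 1e-1)   # strongly regularized
# More regularization damps the solution norm, at the cost of bias.
print(np.linalg.norm(x_large) < np.linalg.norm(x_small))  # → True
```

    Regional schemes like the one above replace the single scalar lam with a spatially varying regularizer informed by the local nonlinearity of the forward map.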

  8. Application of real rock pore-throat statistics to a regular pore network model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakibul, M.; Sarker, H.; McIntyre, D.

    2011-01-01

    This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high resolution micro CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted and the distribution of the pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area or the throat radius was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process, and relative permeability was calculated and compared with the experimentally determined values of the original rock sample. The numerical and experimental procedures are explained in detail and the performance of the model in relation to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields such as production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, and ground water flow. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil-water relative permeability data were generated on the same core plug by both the pore network model and the experimental procedure. The shapes and sizes of the relative permeability curves were compared and analyzed; a good match was observed for the wetting-phase relative permeability, but the non-wetting-phase simulation results were found to deviate from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques aim to eliminate the necessity of regular core analysis, which can be time consuming and expensive. A numerical technique is therefore expected to be fast and to produce reliable results. In applied engineering, a quick result with reasonable accuracy is sometimes preferable to a more time-consuming one. The present work is an effort to check the accuracy and validity of a previously developed pore network model for obtaining important petrophysical properties of rocks based on cutting-sized sample data.

  9. Application of real rock pore-throat statistics to a regular pore network model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarker, M.R.; McIntyre, D.; Ferer, M.

    2011-01-01

    This work reports the application of real rock statistical data to a previously developed regular pore network model in an attempt to produce an accurate simulation tool with low computational overhead. A core plug from the St. Peter Sandstone formation in Indiana was scanned with a high resolution micro CT scanner. The pore-throat statistics of the three-dimensional reconstructed rock were extracted and the distribution of the pore-throat sizes was applied to the regular pore network model. In order to keep the equivalent model regular, only the throat area or the throat radius was varied. Ten realizations of randomly distributed throat sizes were generated to simulate the drainage process, and relative permeability was calculated and compared with the experimentally determined values of the original rock sample. The numerical and experimental procedures are explained in detail and the performance of the model in relation to the experimental data is discussed and analyzed. Petrophysical properties such as relative permeability are important in many applied fields such as production of petroleum fluids, enhanced oil recovery, carbon dioxide sequestration, and ground water flow. Relative permeability data are used for a wide range of conventional reservoir engineering calculations and in numerical reservoir simulation. Two-phase oil-water relative permeability data were generated on the same core plug by both the pore network model and the experimental procedure. The shapes and sizes of the relative permeability curves were compared and analyzed; a good match was observed for the wetting-phase relative permeability, but the non-wetting-phase simulation results were found to deviate from the experimental ones. Efforts to determine petrophysical properties of rocks using numerical techniques aim to eliminate the necessity of regular core analysis, which can be time consuming and expensive. A numerical technique is therefore expected to be fast and to produce reliable results. In applied engineering, a quick result with reasonable accuracy is sometimes preferable to a more time-consuming one. The present work is an effort to check the accuracy and validity of a previously developed pore network model for obtaining important petrophysical properties of rocks based on cutting-sized sample data.

  10. A regularity condition and temporal asymptotics for chemotaxis-fluid equations

    NASA Astrophysics Data System (ADS)

    Chae, Myeongju; Kang, Kyungkeun; Lee, Jihoon; Lee, Ki-Ahm

    2018-02-01

    We consider two-dimensional chemotaxis equations coupled to the Navier-Stokes equations. We present a new regularity criterion that is localized in a neighborhood of each point. We also establish temporal decay of the regular solutions under the assumption that the initial mass of the biological cell density is sufficiently small. Both results improve previously known results given in Chae et al (2013 Discrete Continuous Dyn. Syst. A 33 2271-97) and Chae et al (2014 Commun. PDE 39 1205-35).

  11. Behavioral Dimensions in One-Year-Olds and Dimensional Stability in Infancy.

    ERIC Educational Resources Information Center

    Hagekull, Berit; And Others

    1980-01-01

    The dimensional structure of infants' behavioral repertoire was shown to be highly stable over 3 to 15 months of age. Factor analysis of parent questionnaire data produced seven factors named Intensity/Activity, Regularity, Approach-Withdrawal, Sensory Sensitivity, Attentiveness, Manageability and Sensitivity to New Food. An eighth factor,…

  12. A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.

    PubMed

    Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng

    To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.

  13. Optimal feedback control infinite dimensional parabolic evolution systems: Approximation techniques

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Wang, C.

    1989-01-01

    A general approximation framework is discussed for computation of optimal feedback controls in linear quadratic regulator problems for nonautonomous parabolic distributed parameter systems. This is done in the context of a theoretical framework using general evolution systems in infinite dimensional Hilbert spaces. Conditions are discussed for preservation under approximation of stabilizability and detectability hypotheses on the infinite dimensional system. The special case of periodic systems is also treated.

  14. Comparison of Conventional Versus Spiral Computed Tomography with Three Dimensional Reconstruction in Chronic Otitis Media with Ossicular Chain Destruction.

    PubMed

    Naghibi, Saeed; Seifirad, Sirous; Adami Dehkordi, Mahboobeh; Einolghozati, Sasan; Ghaffarian Eidgahi Moghadam, Nafiseh; Akhavan Rezayat, Amir; Seifirad, Soroush

    2016-01-01

    Chronic otitis media (COM) can be treated with tympanoplasty with or without mastoidectomy. In patients who have undergone middle ear surgery, three-dimensional spiral computed tomography (CT) plays an important role in optimizing surgical planning. This study was performed to compare the findings of three-dimensional reconstructed spiral CT and conventional CT of the ossicular chain in patients with COM. Fifty patients enrolled in the study underwent plain and three-dimensional CT scanning (PHILIPS-MX 8000). Ossicular changes, the mastoid cavity, the tympanic cavity, and the presence of cholesteatoma were evaluated. Results of the two methods were then compared and interpreted by a radiologist, recorded in questionnaires, and analyzed. The logistic regression test and Kappa coefficient of agreement were used for statistical analyses. Sixty-two ears with COM were found on physical examination. A significant difference was observed between the findings of the two methods in ossicle erosion (11.3% in conventional CT vs. 37.1% in spiral CT, P = 0.0001), decrease of mastoid air cells (82.3% in conventional CT vs. 93.5% in spiral CT, P = 0.001), and tympanic cavity opacity (12.9% in conventional CT vs. 40.3% in spiral CT, P = 0.0001). No significant difference was observed between the findings of the two methods in ossicle destruction (6.5% in conventional CT vs. 56.4% in spiral CT, P = 0.125) or presence of cholesteatoma (3.2% in conventional CT vs. 42% in spiral CT, P = 0.172). In this study, spiral CT demonstrated ossicle dislocation in 9.6%, decrease of mastoid air cells in 4.8%, and decrease of volume in the tympanic cavity in 1.6%, whereas none of these findings were reported in the patients' conventional CT scans. Spiral CT is superior to conventional CT in the preoperative diagnosis of lesions in COM and can be used for detailed evaluation of the ossicular chain in such patients.

  15. Highly undersampled contrast-enhanced MRA with iterative reconstruction: Integration in a clinical setting.

    PubMed

    Stalder, Aurelien F; Schmidt, Michaela; Quick, Harald H; Schlamann, Marc; Maderwald, Stefan; Schmitt, Peter; Wang, Qiu; Nadar, Mariappan S; Zenge, Michael O

    2015-12-01

    To integrate, optimize, and evaluate a three-dimensional (3D) contrast-enhanced sparse MRA technique with iterative reconstruction on a standard clinical MR system. Data were acquired using a highly undersampled Cartesian spiral phyllotaxis sampling pattern and reconstructed directly on the MR system with an iterative SENSE technique. Undersampling, regularization, and the number of iterations of the reconstruction were optimized and validated based on phantom experiments and patient data. Sparse MRA of the whole head (field of view: 265 × 232 × 179 mm³) was investigated in 10 patient examinations. High-quality images with 30-fold undersampling, resulting in 0.7 mm isotropic resolution within a 10 s acquisition, were obtained. After optimization of the regularization factor and of the number of iterations, it was possible to reconstruct images with excellent quality within six minutes per 3D volume. Initial results of sparse contrast-enhanced MRA (CEMRA) in 10 patients demonstrated high-quality whole-head first-pass MRA for both the arterial and venous contrast phases. While sparse MRI techniques have not yet reached clinical routine, this study demonstrates the technical feasibility of high-quality sparse CEMRA of the whole head in a clinical setting. Sparse CEMRA has the potential to become a viable alternative where conventional CEMRA is too slow or does not provide sufficient spatial resolution. © 2014 Wiley Periodicals, Inc.
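    Regularized iterative reconstructions of undersampled data, of the broad family used above, are often prototyped with iterative soft-thresholding (ISTA) for the L1-regularized least-squares problem. The sketch below is a generic toy on a random sensing matrix, assuming nothing about the vendor's iterative SENSE implementation.

```python
import numpy as np

def ista(A, b, lam, iters=300):
    """Iterative soft-thresholding for  min 0.5*||Ax - b||^2 + lam*||x||_1,
    a generic prototype of sparse (compressed-sensing) reconstruction."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)              # gradient of the data term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return x

# Recover a 3-sparse 100-pixel "signal" from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true, lam=0.05)
```

    In MRA the sensing operator combines coil sensitivities with the undersampled Fourier transform, and the sparsity penalty acts in a transform domain, but the iteration has the same shape.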

  16. Partial covariance based functional connectivity computation using Ledoit-Wolf covariance regularization.

    PubMed

    Brier, Matthew R; Mitra, Anish; McCarthy, John E; Ances, Beau M; Snyder, Abraham Z

    2015-11-01

    Functional connectivity refers to shared signals among brain regions and is typically assessed in a task-free state. Functional connectivity is commonly quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, and hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high-dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although resting-state networks (RSNs) are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring-embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes-open vs. eyes-closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting-state BOLD time series reflects functional processes in addition to structural connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.
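    The shrink-then-invert step can be sketched in a few lines of NumPy. This is a simplified illustration, not the authors' pipeline: a fixed shrinkage weight `alpha` replaces the data-driven Ledoit-Wolf estimate (as computed, e.g., by scikit-learn's `LedoitWolf`), and random data stand in for BOLD time series:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_time, n_regions = 30, 50          # fewer time points than regions
    X = rng.standard_normal((n_time, n_regions))   # stand-in for BOLD time series
    S = np.cov(X, rowvar=False)         # sample covariance: rank-deficient here

    alpha = 0.2                         # fixed shrinkage weight (illustrative; the
    mu = np.trace(S) / n_regions        # Ledoit-Wolf estimator picks it from data)
    S_shrunk = (1 - alpha) * S + alpha * mu * np.eye(n_regions)

    P = np.linalg.inv(S_shrunk)         # precision matrix: well-posed after shrinkage
    d = np.sqrt(np.diag(P))
    partial = -P / np.outer(d, d)       # partial correlations between region pairs
    np.fill_diagonal(partial, 1.0)
    ```

    Because the shrunk matrix is strictly positive definite, the inversion is well-posed even with fewer time points than regions, which is exactly the rank-deficient regime the abstract describes.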

  17. Partial covariance based functional connectivity computation using Ledoit-Wolf covariance regularization

    PubMed Central

    Brier, Matthew R.; Mitra, Anish; McCarthy, John E.; Ances, Beau M.; Snyder, Abraham Z.

    2015-01-01

    Functional connectivity refers to shared signals among brain regions and is typically assessed in a task-free state. Functional connectivity is commonly quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, and hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high-dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although resting-state networks (RSNs) are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring-embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes-open vs. eyes-closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting-state BOLD time series reflects functional processes in addition to structural connectivity. PMID:26208872

  18. Vacuum polarization in the field of a multidimensional global monopole

    NASA Astrophysics Data System (ADS)

    Grats, Yu. V.; Spirin, P. A.

    2016-11-01

    An approximate expression for the Euclidean Green function of a massless scalar field in the spacetime of a multidimensional global monopole is derived. Expressions for the vacuum expectation values ⟨φ²⟩_ren and ⟨T₀⁰⟩_ren are obtained by the dimensional regularization method, and a comparison with the results of alternative regularization methods is made.

  19. Regularity and dimensional salience in temporal grouping.

    PubMed

    Prince, Jon B; Rice, Tim

    2018-04-30

    How do pitch and duration accents combine to influence the perceived grouping of musical sequences? Sequence context influences the relative importance of these accents; for example, the presence of learned structure in pitch exaggerates the effect of pitch accents at the expense of duration accents, despite being irrelevant to the task and not attributable to attention (Prince, 2014b). In the current study, two experiments examined whether the presence of temporal structure has the opposite effect. Experiment 1 tested baseline conditions, in which participants (N = 30) heard sequences with various sizes of either pitch or duration accents, which implied either duple or triple groupings (an accent every two or three notes, respectively). Sequences either had regular temporal structure (isochronous) or not (irregular, using random inter-onset intervals). Regularity enhanced the effect of duration accents but had negligible influence on pitch accents. The accent sizes that gave the most equivalent ratings across dimension and regularity levels were used in Experiment 2 (N = 33), in which sequences contained both pitch and duration accents that suggested either duple, triple, or neutral groupings. Despite controlling for the baseline effect of regularity by selecting equally effective accent sizes, regularity had additional effects on duration accents, but only for duple groupings. Regularity did not influence the effectiveness of pitch accents when combined with duration accents. These findings offer some support for a dimensional salience hypothesis, which proposes that the presence of temporal structure should foster duration accent effectiveness at the expense of pitch accents. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  20. A deformation of Sasakian structure in the presence of torsion and supergravity solutions

    NASA Astrophysics Data System (ADS)

    Houri, Tsuyoshi; Takeuchi, Hiroshi; Yasui, Yukinori

    2013-07-01

    A deformation of Sasakian structure in the presence of totally skew-symmetric torsion is discussed on odd-dimensional manifolds whose metric cones are Kähler with torsion. It is shown that such a geometry inherits properties similar to those of Sasakian geometry. As an example, we present an explicit expression for local metrics. It is also demonstrated that our example of the metrics admits hidden symmetry described by non-trivial odd-rank generalized closed conformal Killing-Yano tensors. Furthermore, using these metrics as an ansatz, we construct exact solutions in five-dimensional minimal gauged/ungauged supergravity and 11-dimensional supergravity. Finally, the global structures of the solutions are discussed. We obtain regular metrics on compact manifolds in five dimensions, which give natural generalizations of the Sasaki-Einstein manifolds Y^{p,q} and L^{a,b,c}. We also briefly discuss regular metrics on non-compact manifolds in 11 dimensions.

  1. Assembly of the most topologically regular two-dimensional micro and nanocrystals with spherical, conical, and tubular shapes

    NASA Astrophysics Data System (ADS)

    Roshal, D. S.; Konevtsova, O. V.; Myasnikova, A. E.; Rochal, S. B.

    2016-11-01

    We consider how to control the extension of curvature-induced defects in the hexagonal order covering different curved surfaces. Within this framework we propose a physical mechanism for improving the structures of two-dimensional spherical colloidal crystals (SCCs). For any SCC comprising about 300 or fewer particles, the mechanism transforms all extended topological defects (ETDs) in the hexagonal order into point disclinations. Perfecting of the structure is carried out by successive cycles of particle implantation and subsequent relaxation of the crystal. The mechanism is potentially suitable for obtaining colloidosomes with better selective permeability. Our approach enables modeling of the most topologically regular tubular and conical two-dimensional nanocrystals, including various possible polymorphic forms of the HIV viral capsid. Different HIV-like shells with an arbitrary number of structural units (SUs) and desired geometrical parameters are easily formed. Faceting of the obtained structures is performed by minimizing the suggested elastic energy.

  2. Analysis of the Hessian for Aerodynamic Optimization: Inviscid Flow

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Ta'asan, Shlomo

    1996-01-01

    In this paper we analyze inviscid aerodynamic shape optimization problems governed by the full potential and the Euler equations in two and three dimensions. The analysis indicates that minimization of pressure-dependent cost functions results in Hessians whose eigenvalue distributions are identical for the full potential and the Euler equations. However, the optimization problems in two and three dimensions are inherently different: while the two-dimensional optimization problems are well-posed, the three-dimensional ones are ill-posed. Oscillations in the shape, down to the smallest scale allowed by the design space, can develop in the direction perpendicular to the flow, implying that regularization is required; a natural choice of such a regularization is derived. The analysis also gives an estimate of the Hessian's condition number, which implies that the problems at hand are ill-conditioned. Infinite-dimensional approximations for the Hessians are constructed, and preconditioners for gradient-based methods are derived from these approximate Hessians.

  3. U.N. Convention Against Torture (CAT): Overview and Application to Interrogation Techniques

    DTIC Science & Technology

    2008-01-25

    Al-Saher v. I.N.S., 268 F.3d 1143 (9th Cir. 2001) (finding that regular, severe beatings and cigarette burns inflicted upon an Iraqi alien were of sufficient severity to constitute torture).

  4. Effect of Robotic-Assisted Gait Training in Patients With Incomplete Spinal Cord Injury

    PubMed Central

    Shin, Ji Cheol; Kim, Ji Yong; Park, Han Kyul

    2014-01-01

    Objective To determine the effect of robotic-assisted gait training (RAGT) compared to conventional overground training. Methods Sixty patients with motor incomplete spinal cord injury (SCI) were included in a prospective, randomized clinical trial comparing RAGT to conventional overground training. The RAGT group received three 40-minute RAGT sessions per week, plus regular physiotherapy, for 4 weeks. The conventional group underwent regular physiotherapy twice a day, 5 times a week. Main outcomes were the lower extremity motor score of the American Spinal Injury Association impairment scale (LEMS), the ambulatory motor index (AMI), the Spinal Cord Independence Measure III mobility section (SCIM3-M), and the walking index for spinal cord injury version II (WISCI-II) scale. Results At the end of rehabilitation, both groups showed significant improvement in LEMS, AMI, SCIM3-M, and WISCI-II. On WISCI-II, statistically significant improvement was observed in the RAGT group. For the remaining variables, no difference was found. Conclusion RAGT combined with conventional physiotherapy could yield more improvement in ambulatory function than conventional therapy alone. RAGT should be considered as one additional tool for neuromuscular reeducation in patients with incomplete SCI. PMID:25566469

  5. Self-assembly of a binodal metal-organic framework exhibiting a demi-regular lattice.

    PubMed

    Yan, Linghao; Kuang, Guowen; Zhang, Qiushi; Shang, Xuesong; Liu, Pei Nian; Lin, Nian

    2017-10-26

    Designing metal-organic frameworks with new topologies is a long-standing quest because new topologies often accompany new properties and functions. Here we report that 1,3,5-tris[4-(pyridin-4-yl)phenyl]benzene molecules coordinate with Cu atoms to form a two-dimensional framework in which Cu adatoms form a nanometer-scale demi-regular lattice. The lattice is articulated by perfectly arranged twofold and threefold pyridyl-Cu coordination motifs in a ratio of 1 : 6 and features local dodecagonal symmetry. This structure is thermodynamically robust and emerges solely when the molecular density is at a critical value. In comparison, we present three framework structures that consist of semi-regular and regular lattices of Cu atoms self-assembled out of 1,3,5-tris[4-(pyridin-4-yl)phenyl]benzene and trispyridylbenzene molecules. Thus a family of regular, semi-regular and demi-regular lattices can be achieved by Cu-pyridyl coordination.

  6. Evaluation of physicochemical properties of root-end filling materials using conventional and Micro-CT tests

    PubMed Central

    TORRES, Fernanda Ferrari Esteves; BOSSO-MARTELO, Roberta; ESPIR, Camila Galletti; CIRELLI, Joni Augusto; GUERREIRO-TANOMARU, Juliane Maria; TANOMARU-FILHO, Mario

    2017-01-01

    Objective To evaluate the solubility, dimensional stability, filling ability and volumetric change of root-end filling materials using conventional tests and new Micro-CT-based methods. Results The results suggested correlated or complementary data between the proposed tests. At 7 days, BIO showed higher solubility and at 30 days, higher volumetric change in comparison with MTA (p<0.05). With regard to volumetric change, the tested materials were similar (p>0.05) at 7 days. At 30 days, they presented similar solubility. BIO and MTA showed higher dimensional stability than ZOE (p<0.05). ZOE and BIO showed higher filling ability (p<0.05). Conclusions ZOE presented a higher dimensional change, and BIO had greater solubility after 7 days. BIO presented filling ability and dimensional stability, but greater volumetric change than MTA after 30 days. Micro-CT can provide important data on the physicochemical properties of materials, complementing conventional tests. PMID:28877275

  7. Early detection of myocardial dysfunction using two-dimensional speckle tracking echocardiography in a young cat with hypertrophic cardiomyopathy

    PubMed Central

    Mochizuki, Yohei; Yoshimatsu, Hiroki; Niina, Ayaka; Teshima, Takahiro; Matsumoto, Hirotaka; Koyama, Hidekazu

    2018-01-01

    Case summary A 5-month-old intact female Scottish Fold cat was presented for cardiac evaluation. Careful auscultation detected a slight systolic murmur (Levine I/VI). The findings of electrocardiography, thoracic radiography, non-invasive blood pressure measurements and conventional echocardiographic studies were unremarkable. However, two-dimensional speckle tracking echocardiography revealed abnormalities in myocardial deformations, including decreased early-to-late diastolic strain rate ratios in longitudinal, radial and circumferential directions, and deteriorated segmental systolic longitudinal strain. At the follow-up examinations, the cat exhibited echocardiographic left ventricular hypertrophy and was diagnosed with hypertrophic cardiomyopathy using conventional echocardiography. Relevance and novel information This is the first report on the use of two-dimensional speckle tracking echocardiography for the early detection of myocardial dysfunction in a cat with hypertrophic cardiomyopathy; the myocardial dysfunction was detected before the development of hypertrophy. The findings from this case suggest that two-dimensional speckle tracking echocardiography can be useful for myocardial assessment when conventional echocardiographic and Doppler findings are ambiguous. PMID:29449957

  8. Soft chemical synthesis of silicon nanosheets and their applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakano, Hideyuki; Ikuno, Takashi

    2016-12-15

    Two-dimensional silicon nanomaterials are expected to show properties different from those of bulk silicon by virtue of surface functionalization and quantum size effects. Since facile fabrication processes for large-area silicon nanosheets (SiNSs) are required for practical applications, the development of a soft chemical synthesis route that avoids conventional vacuum processes is a challenging issue. We have recently succeeded in preparing SiNSs with sub-nanometer thicknesses by exfoliating layered silicon compounds, and they are found to be composed of crystalline single-atom-thick silicon layers. In this review, we present the synthesis and modification methods of SiNSs. These SiNSs have atomically flat and smooth surfaces due to dense coverage of organic moieties, and they readily self-assemble in a concentrated state to form a regularly stacked structure. We have also characterized the electron transport properties and electronic structures of SiNSs. Finally, the potential applications of these SiNSs and organically modified SiNSs are reviewed.

  9. Synchronization in oscillator networks with delayed coupling: a stability criterion.

    PubMed

    Earl, Matthew G; Strogatz, Steven H

    2003-03-01

    We derive a stability criterion for the synchronous state in networks of identical phase oscillators with delayed coupling. The criterion applies to any network (whether regular or random, low dimensional or high dimensional, directed or undirected) in which each oscillator receives delayed signals from k others, where k is uniform for all oscillators.

  10. Unimodular gravity and the lepton anomalous magnetic moment at one-loop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martín, Carmelo P., E-mail: carmelop@fis.ucm.es

    We work out the one-loop contribution to the lepton anomalous magnetic moment coming from Unimodular Gravity. We use Dimensional Regularization and Dimensional Reduction to carry out the computations. In either case, we find that Unimodular Gravity gives rise to the same one-loop correction as that of General Relativity.

  11. Investigating Various Application Areas of Three-Dimensional Virtual Worlds for Higher Education

    ERIC Educational Resources Information Center

    Ghanbarzadeh, Reza; Ghapanchi, Amir Hossein

    2018-01-01

    Three-dimensional virtual worlds (3DVWs) have been adopted extensively in the education sector worldwide, and there has been remarkable growth in the application of these environments for distance learning. A wide variety of universities and educational organizations across the world have utilized this technology for their regular learning and…

  12. A work study of the CAD/CAM method and conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis.

    PubMed

    Wong, M S; Cheng, J C Y; Wong, M W; So, S F

    2005-04-01

    A study was conducted to compare the CAD/CAM method with the conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis. Ten subjects were recruited for this study. Efficiency analyses of the two methods were performed from the cast filling/digitization process to completion of cast/image rectification. The dimensional changes of the casts/models rectified by the two methods were also investigated. The results demonstrated that the CAD/CAM method was faster than the conventional manual method in the studied processes. The mean rectification time of the CAD/CAM method was shorter than that of the conventional manual method by 108.3 min (63.5%), indicating that the CAD/CAM method took about one third of the time of the conventional manual method to finish cast rectification. In the comparison of cast/image dimensional differences between the two methods, five major dimensions in each of the five rectified regions, namely the axilla, thoracic, lumbar, abdominal and pelvic regions, were involved. There were no significant dimensional differences (p < 0.05) in 19 out of the 25 studied dimensions. This study demonstrated that the CAD/CAM system could save time in the rectification process and offer a relatively high resemblance in cast rectification compared with the conventional manual method.
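    The reported timings are internally consistent, as a quick arithmetic check shows (the totals below are implied by the abstract's figures, not stated in it):

    ```python
    # Figures reported in the abstract: CAD/CAM saved 108.3 min, a 63.5% reduction.
    saving_min = 108.3
    saving_frac = 0.635

    manual_min = saving_min / saving_frac      # implied conventional rectification time
    cadcam_min = manual_min - saving_min       # implied CAD/CAM rectification time
    ratio = cadcam_min / manual_min            # fraction of the manual time remaining

    print(manual_min, cadcam_min, ratio)       # ratio comes out near 1/3, as stated
    ```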

  13. Existence and Regularity of Invariant Measures for the Three Dimensional Stochastic Primitive Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glatt-Holtz, Nathan, E-mail: negh@vt.edu; Kukavica, Igor, E-mail: kukavica@usc.edu; Ziane, Mohammed, E-mail: ziane@usc.edu

    2014-05-15

    We establish the continuity of the Markovian semigroup associated with strong solutions of the stochastic 3D Primitive Equations, and prove the existence of an invariant measure. The proof is based on new moment bounds for strong solutions. The invariant measure is supported on strong solutions and is furthermore shown to have higher regularity properties.

  14. Three-dimensional ionospheric tomography reconstruction using the model function approach in Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Wang, Sicheng; Huang, Sixun; Xiang, Jie; Fang, Hanxian; Feng, Jian; Wang, Yu

    2016-12-01

    Ionospheric tomography is based on observed slant total electron content (sTEC) along different satellite-receiver rays to reconstruct three-dimensional electron density distributions. Due to the incomplete measurements provided by the satellite-receiver geometry, the reconstruction is a typical ill-posed problem, and overcoming this ill-posedness remains a central topic of research. In this paper, the Tikhonov regularization method is used, and the model function approach is applied to determine the optimal regularization parameter. This algorithm not only balances the weights between sTEC observations and the background electron density field but also converges globally and rapidly. The background error covariance is given by multiplying the background model variance by a location-dependent spatial correlation, and the correlation model is developed using sample statistics from an ensemble of International Reference Ionosphere 2012 (IRI2012) model outputs. Global Navigation Satellite System (GNSS) observations in China are used to present the reconstruction results, and measurements from two ionosondes are used for independent validation. Both test cases using artificial sTEC observations and actual GNSS sTEC measurements show that the regularization method can effectively improve the background model outputs.
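    Stripped of the ionospheric specifics, the inner problem is a background-constrained Tikhonov solve. A toy NumPy sketch, in which random matrices stand in for the ray-geometry operator and the IRI-derived background, and a crude grid search over λ stands in for the paper's model-function iteration (all sizes and noise levels are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    m, n = 40, 100                     # fewer ray measurements than density unknowns
    A = rng.standard_normal((m, n))    # toy observation-geometry matrix
    x_bg = np.ones(n)                  # background electron density (IRI-like prior)
    x_true = x_bg + 0.3 * rng.standard_normal(n)
    b = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy sTEC-like observations

    def tikhonov(lam):
        # minimize ||A x - b||^2 + lam * ||x - x_bg||^2  (closed form)
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b + lam * x_bg)

    # crude grid search standing in for the paper's model-function iteration
    lams = [10.0 ** k for k in range(-4, 3)]
    errs = [float(np.linalg.norm(tikhonov(l) - x_true)) for l in lams]
    best_lam = lams[int(np.argmin(errs))]
    ```

    In null-space directions of the observation operator the solution falls back to the background field, which is how this formulation blends sparse sTEC data with the climatological model.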

  15. An iterated Laplacian based semi-supervised dimensionality reduction for classification of breast cancer on ultrasound images.

    PubMed

    Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua

    2014-01-01

    Dimensionality reduction is an important step in ultrasound image based computer-aided diagnosis (CAD) for breast cancer. A newly proposed l2,1-regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance on noise-corrupted data, and therefore has the potential to reduce the dimensions of ultrasound image features. However, in clinical practice the collection of labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances; semi-supervised learning is therefore well suited to clinical CAD. Iterated Laplacian regularization (Iter-LR) is a new regularization method which has been shown to outperform traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to improve the classification accuracy of texture-feature-based breast ultrasound CAD, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm and apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all other algorithms.

  16. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    NASA Astrophysics Data System (ADS)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions that can only account for very limited classes of images; a more flexible sparse representation framework for images is still needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be used to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. To accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. Experimental results for various settings, including real CT scanning, verify that the proposed method offers promising capabilities over conventional regularization.
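    The workhorse of such proximal schemes is soft-thresholding, the proximal operator of the L1 norm. A minimal sketch (ISTA on a toy problem; the identity transform and a random matrix stand in for the block-matching transform and the CT projector, and `lam` and the iteration count are illustrative assumptions):

    ```python
    import numpy as np

    def soft_threshold(v, t):
        # proximal operator of t * ||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    rng = np.random.default_rng(3)
    m, n = 60, 120                     # incomplete data: half as many rays as pixels
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # toy projection matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)  # sparse image
    b = A @ x_true

    lam = 0.01                         # sparsity weight (illustrative)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(2000):              # ISTA: gradient step on data term, then prox
        x = soft_threshold(x - step * (A.T @ (A @ x - b)), step * lam)

    rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    ```

    Swapping the identity transform for a block-matched dictionary and adding a positivity projection recovers the general shape of the BMSR program.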

  17. Gene selection for microarray data classification via subspace learning and manifold regularization.

    PubMed

    Tang, Chang; Cao, Lijuan; Zheng, Xiao; Wang, Minhui

    2017-12-19

    With the rapid development of DNA microarray technology, large amounts of genomic data have been generated. Classification of these microarray data is a challenging task since gene expression data often comprise thousands of genes but only a small number of samples. In this paper, an effective gene selection method is proposed to select the best subset of genes for microarray data, with the irrelevant and redundant genes removed. Compared with the original data, the selected gene subset can benefit the classification task. We formulate gene selection as a manifold-regularized subspace learning problem. In detail, a projection matrix is used to project the original high-dimensional microarray data into a lower-dimensional subspace, with the constraint that the original genes can be well represented by the selected genes. Meanwhile, the local manifold structure of the original data is preserved by a Laplacian graph regularization term on the low-dimensional data space. The projection matrix can serve as an importance indicator for the different genes, and an iterative update algorithm is developed for solving the problem. Experimental results on six publicly available microarray datasets and one clinical dataset demonstrate that the proposed method performs better than other state-of-the-art methods in terms of microarray data classification.
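    The manifold term in such methods is a graph-Laplacian penalty built over a sample-similarity graph. The sketch below illustrates that ingredient with the simpler, unsupervised Laplacian Score rather than the paper's iterative subspace-learning updates; the synthetic data, the group structure, and the graph parameters (`k`, the Gaussian bandwidth) are all assumptions for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_samples, n_genes = 40, 200
    X = rng.standard_normal((n_samples, n_genes))
    X[:20, :10] += 4.0                 # genes 0-9 separate two sample groups

    # k-nearest-neighbour similarity graph over samples, Laplacian L = D - S
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    k = 5
    S = np.zeros((n_samples, n_samples))
    for i in range(n_samples):
        for j in np.argsort(d2[i])[1:k + 1]:     # skip self at position 0
            S[i, j] = S[j, i] = np.exp(-d2[i, j] / d2.mean())
    D = np.diag(S.sum(1))
    L = D - S

    # Laplacian Score per gene: small score = smooth over the sample graph
    scores = []
    for r in range(n_genes):
        f = X[:, r] - X[:, r].mean()
        scores.append(float(f @ L @ f) / float(f @ D @ f))
    ranked = np.argsort(scores)        # most structure-preserving genes first
    ```

    Genes that respect the sample manifold score low and rank first, which is the same locality-preservation principle the paper's Laplacian regularization term enforces on the learned subspace.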

  18. Three-dimensional navigation is more accurate than two-dimensional navigation or conventional fluoroscopy for percutaneous sacroiliac screw fixation in the dysmorphic sacrum: a randomized multicenter study.

    PubMed

    Matityahu, Amir; Kahler, David; Krettek, Christian; Stöckle, Ulrich; Grutzner, Paul Alfred; Messmer, Peter; Ljungqvist, Jan; Gebhard, Florian

    2014-12-01

    To evaluate the accuracy of computer-assisted sacral screw fixation compared with conventional techniques in the dysmorphic versus normal sacrum. Review of a previous study database. Database of a multinational study with 9 participating trauma centers. The reviewed group included 130 patients, 72 from the navigated group and 58 from the conventional group. Of these, 109 were in the nondysmorphic group and 21 in the dysmorphic group. Placement of sacroiliac (SI) screws was performed using standard fluoroscopy for the conventional group and BrainLAB navigation software with either 2-dimensional or 3-dimensional (3D) navigation for the navigated group. Accuracy of SI screw placement by 2-dimensional and 3D navigation versus conventional fluoroscopy in dysmorphic and nondysmorphic patients, as evaluated by 6 observers using postoperative computerized tomography imaging at least 1 year after initial surgery. Intraobserver agreement was also evaluated. There were 11.9% (13/109) of patients with misplaced screws in the nondysmorphic group and 28.6% (6/21) of patients with misplaced screws in the dysmorphic group, none of which were in the 3D navigation group. Raw agreement between the 6 observers regarding misplaced screws was 32%. However, the percent overall agreement was 69.0% (kappa = 0.38, P < 0.05). The use of 3D navigation to improve intraoperative imaging for accurate insertion of SI screws is magnified in the dysmorphic proximal sacral segment. We recommend the use of 3D navigation, where available, for insertion of SI screws in patients with normal and dysmorphic proximal sacral segments. Therapeutic level I.

  19. One-loop calculations in Supersymmetric Lattice QCD

    NASA Astrophysics Data System (ADS)

    Costa, M.; Panagopoulos, H.

    2017-03-01

    We study the self-energies of all particles which appear in a lattice regularization of supersymmetric QCD (N = 1). We compute, perturbatively to one loop, the relevant two-point Green's functions using both dimensional and lattice regularization. Our lattice formulation employs the Wilson fermion action for the gluino and quark fields. The gauge group is SU(Nc), while the number of colors, Nc, and the number of flavors, Nf, are kept as generic parameters. We have also searched for relations among the propagators computed from our one-loop results. We have obtained analytic expressions for the renormalization functions of the quark field (Zψ), gluon field (Zu), gluino field (Zλ) and squark field (ZA±). We present here results from dimensional regularization, relegating our lattice results, along with a more complete list of references, to a forthcoming publication [1]. Part of the lattice study also concerns the renormalization of quark bilinear operators which, unlike the nonsupersymmetric case, exhibit a rich pattern of operator mixing at the quantum level.

  20. High-Accuracy Comparison Between the Post-Newtonian and Self-Force Dynamics of Black-Hole Binaries

    NASA Astrophysics Data System (ADS)

    Blanchet, Luc; Detweiler, Steven; Le Tiec, Alexandre; Whiting, Bernard F.

    The relativistic motion of a compact binary system moving in circular orbit is investigated using the post-Newtonian (PN) approximation and the perturbative self-force (SF) formalism. A particular gauge-invariant observable quantity is computed as a function of the binary's orbital frequency. The conservative effect induced by the gravitational SF is obtained numerically with high precision, and compared to the PN prediction developed to high order. The PN calculation involves the computation of the 3PN regularized metric at the location of the particle. Its divergent self-field is regularized by means of dimensional regularization. The poles ∝ (d − 3)^{-1} that occur within dimensional regularization at the 3PN order disappear from the final gauge-invariant result. The leading 4PN and next-to-leading 5PN conservative logarithmic contributions originating from gravitational wave tails are also obtained. Making use of these exact PN results, some previously unknown PN coefficients are measured up to the very high 7PN order by fitting to the numerical SF data. Using just the 2PN and new logarithmic terms, the value of the 3PN coefficient is also confirmed numerically with very high precision. The consistency of this cross-cultural comparison provides a crucial test of the very different regularization methods used in both SF and PN formalisms, and illustrates the complementarity of these approximation schemes when modeling compact binary systems.

  1. Implant platform switching: biomechanical approach using two-dimensional finite element analysis.

    PubMed

    Tabata, Lucas Fernando; Assunção, Wirley Gonçalves; Adelino Ricardo Barão, Valentim; de Sousa, Edson Antonio Capello; Gomes, Erica Alves; Delben, Juliana Aparecida

    2010-01-01

    In implant therapy, peri-implant bone resorption has been observed mainly in the first year after prosthesis insertion. This bone remodeling can sometimes jeopardize the outcome of the treatment, especially in areas in which short implants are used and in aesthetic cases. To avoid this occurrence, platform switching (PS) has been used. This study aimed to evaluate the biomechanical concept of PS in relation to stress distribution using two-dimensional finite element analysis. A regular matching-diameter abutment-implant connection (regular platform group [RPG]) and a PS connection (PS group [PSG]) were simulated by two two-dimensional finite element models that reproduced a 2-piece implant system with peri-implant bone tissue. A regular implant (prosthetic platform of 4.1 mm) and a wide implant (prosthetic platform of 5.0 mm) were used to represent the RPG and PSG, respectively; in both, a regular prosthetic component of 4.1 mm was connected to represent the crown. A load of 100 N was applied to the models using ANSYS software. The RPG spread the stress over a wider area in the peri-implant bone tissue (159 MPa) and the implant (1610 MPa), whereas the PSG diminished the stress on bone tissue (34 MPa) and implant (649 MPa). Within the limitations of the study, PS presented better biomechanical behavior in relation to stress distribution on the implant and especially in the bone tissue (80% less). However, in the crown and retention screw, an increase in stress concentration was observed.
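    The "80% less" figure for bone-tissue stress can be checked directly from the peak stresses quoted in the abstract. This is illustrative arithmetic only, not a reconstruction of the original ANSYS model:

    ```python
    # Peak von Mises stresses reported in the abstract (MPa).
    rpg_bone_stress = 159.0  # regular platform group, bone tissue
    psg_bone_stress = 34.0   # platform-switching group, bone tissue

    # Relative reduction achieved by platform switching.
    reduction_pct = (rpg_bone_stress - psg_bone_stress) / rpg_bone_stress * 100
    print(f"Bone stress reduction: {reduction_pct:.1f}%")  # ≈ 78.6%, i.e. roughly 80%
    ```
    
    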

  2. Three-dimensional assessment of facial asymmetry: A systematic review.

    PubMed

    Akhil, Gopi; Senthil Kumar, Kullampalayam Palanisamy; Raja, Subramani; Janardhanan, Kumaresan

    2015-08-01

    For patients with facial asymmetry, a complete and precise diagnosis and surgical treatment to correct the underlying cause of the asymmetry are essential. Conventional diagnostic radiographs (submento-vertex projections, posteroanterior radiography) have limitations in asymmetry diagnosis because they are two-dimensional assessments of three-dimensional (3D) structures. The advent of 3D imaging has greatly reduced the magnification and projection errors common in conventional radiographs, making it a precise diagnostic aid for the assessment of facial asymmetry. This article therefore reviews the newly introduced 3D tools for the diagnosis of more complex facial asymmetries.

  3. Alternation of regular and chaotic dynamics in a simple two-degree-of-freedom system with nonlinear inertial coupling.

    PubMed

    Sigalov, G; Gendelman, O V; AL-Shudeifat, M A; Manevitch, L I; Vakakis, A F; Bergman, L A

    2012-03-01

    We show that nonlinear inertial coupling between a linear oscillator and an eccentric rotator can lead to very interesting interchanges between regular and chaotic dynamical behavior. Indeed, we show that this model demonstrates rather unusual behavior from the viewpoint of nonlinear dynamics. Specifically, at a discrete set of values of the total energy, the Hamiltonian system exhibits non-conventional nonlinear normal modes, whose shape is determined by phase locking of rotatory and oscillatory motions of the rotator at integer ratios of characteristic frequencies. Considering the weakly damped system, resonance capture of the dynamics into the vicinity of these modes brings about regular motion of the system. For energy levels far from these discrete values, the motion of the system is chaotic. Thus, the succession of resonance captures and escapes by a discrete set of the normal modes causes a sequence of transitions between regular and chaotic behavior, provided that the damping is sufficiently small. We begin from the Hamiltonian system and present a series of Poincaré sections manifesting the complex structure of the phase space of the considered system with inertial nonlinear coupling. Then an approximate analytical description is presented for the non-conventional nonlinear normal modes. We confirm the analytical results by numerical simulation and demonstrate the alternate transitions between regular and chaotic dynamics mentioned above. The origin of the chaotic behavior is also discussed.

  4. Alpha models for rotating Navier-Stokes equations in geophysics with nonlinear dispersive regularization

    NASA Astrophysics Data System (ADS)

    Kim, Bong-Sik

    Three-dimensional (3D) Navier-Stokes-alpha equations are considered for uniformly rotating geophysical fluid flows (large Coriolis parameter f = 2Ω). The Navier-Stokes-alpha equations are a nonlinear dispersive regularization of the usual Navier-Stokes equations obtained by Lagrangian averaging. The focus is on the existence and global regularity of solutions of the 3D rotating Navier-Stokes-alpha equations and the uniform convergence of these solutions to those of the original 3D rotating Navier-Stokes equations for large Coriolis parameter f as alpha → 0. Methods are based on fast singular oscillating limits, and results are obtained for periodic boundary conditions for all domain aspect ratios, including the case of three-wave resonances, which yields nonlinear "2½-dimensional" limit resonant equations as f → ∞. The existence and global regularity of solutions of the limit resonant equations is established, uniformly in alpha. Bootstrapping from the global regularity of the limit equations, the existence of a regular solution of the full 3D rotating Navier-Stokes-alpha equations for large f on an infinite time interval is established. Then the uniform convergence of a regular solution of the 3D rotating Navier-Stokes-alpha equations (alpha ≠ 0) to the one of the original 3D rotating Navier-Stokes equations (alpha = 0) for f large but fixed as alpha → 0 follows; this implies "shadowing" of trajectories of the limit dynamical systems by those of the perturbed alpha-dynamical systems. All the estimates are uniform in alpha, in contrast with previous estimates in the literature which blow up as alpha → 0. Finally, the existence of global attractors as well as exponential attractors is established for large f, and the estimates are uniform in alpha.
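    Schematically, the Lagrangian-averaged (Navier-Stokes-alpha) system mentioned above is commonly written in the following form; the Coriolis term f e_3 × u is the rotating contribution, and this is one standard presentation rather than the thesis's exact formulation:

    ```latex
    \partial_t v + (u \cdot \nabla)\, v + (\nabla u)^{\mathsf{T}} v
      + f\, e_3 \times u + \nabla p = \nu \Delta v ,
    \qquad
    v = (1 - \alpha^2 \Delta)\, u , \qquad \nabla \cdot u = 0 ,
    ```

    so that u is the Helmholtz-smoothed velocity and the formal limit α → 0 recovers the rotating Navier-Stokes equations.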

  5. The ARM Best Estimate 2-dimensional Gridded Surface

    DOE Data Explorer

    Xie, Shaocheng; Tang, Qi

    2015-06-15

    The ARM Best Estimate 2-dimensional Gridded Surface (ARMBE2DGRID) data set merges together key surface measurements at the Southern Great Plains (SGP) sites and interpolates the data to a regular 2D grid to facilitate data application. Data from the original site locations can be found in the ARM Best Estimate Station-based Surface (ARMBESTNS) data set.
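    The core operation described, interpolating scattered station measurements onto a regular 2D grid, can be sketched with simple inverse-distance weighting. This is a generic illustration, not the actual ARMBE2DGRID procedure; the station coordinates and values below are made up:

    ```python
    import numpy as np

    # Hypothetical station data: (lon, lat) positions and one measured value each.
    stations = np.array([[-97.5, 36.6], [-97.9, 36.8], [-97.1, 36.4]])
    values = np.array([290.0, 288.5, 291.2])  # e.g. surface temperature, K

    # Regular 2D target grid.
    lons = np.linspace(-98.0, -97.0, 5)
    lats = np.linspace(36.3, 36.9, 4)
    glon, glat = np.meshgrid(lons, lats)

    # Inverse-distance weighting (one simple choice of interpolant).
    d = np.hypot(glon[..., None] - stations[:, 0], glat[..., None] - stations[:, 1])
    w = 1.0 / np.maximum(d, 1e-9) ** 2
    gridded = (w * values).sum(axis=-1) / w.sum(axis=-1)

    print(gridded.shape)  # (4, 5): one interpolated value per grid cell
    ```
    
    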

  6. Prediction of mortality after radical cystectomy for bladder cancer by machine learning techniques.

    PubMed

    Wang, Guanjin; Lam, Kin-Man; Deng, Zhaohong; Choi, Kup-Sze

    2015-08-01

    Bladder cancer is a common cancer in genitourinary malignancy. For muscle invasive bladder cancer, surgical removal of the bladder, i.e. radical cystectomy, is in general the definitive treatment which, unfortunately, carries significant morbidities and mortalities. Accurate prediction of the mortality of radical cystectomy is therefore needed. Statistical methods have conventionally been used for this purpose, despite the complex interactions of high-dimensional medical data. Machine learning has emerged as a promising technique for handling high-dimensional data, with increasing application in clinical decision support, e.g. cancer prediction and prognosis. Its ability to reveal the hidden nonlinear interactions and interpretable rules between dependent and independent variables is favorable for constructing models of effective generalization performance. In this paper, seven machine learning methods are utilized to predict the 5-year mortality of radical cystectomy, including back-propagation neural network (BPN), radial basis function (RBFN), extreme learning machine (ELM), regularized ELM (RELM), support vector machine (SVM), naive Bayes (NB) classifier and k-nearest neighbour (KNN), on a clinicopathological dataset of 117 patients of the urology unit of a hospital in Hong Kong. The experimental results indicate that RELM achieved the highest average prediction accuracy of 0.8 at a fast learning speed. The research findings demonstrate the potential of applying machine learning techniques to support clinical decision making. Copyright © 2015 Elsevier Ltd. All rights reserved.
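    The best-performing model above, the regularized ELM, amounts to a fixed random hidden layer followed by ridge-regularized least squares for the output weights. A minimal NumPy sketch on synthetic data (a stand-in for the clinicopathological dataset, which is not available here; the hidden-layer size and regularization strength are arbitrary choices):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in: 117 "patients" with 8 features and a toy binary outcome.
    X = rng.normal(size=(117, 8))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

    # ELM: a random, untrained hidden layer...
    n_hidden = 50
    W = rng.normal(size=(8, n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)

    # ...and ridge-regularized (RELM) output weights in closed form.
    lam = 1e-2
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

    acc = np.mean((H @ beta > 0.5) == y)
    print(f"training accuracy: {acc:.2f}")
    ```

    Because only the linear output layer is trained, fitting is a single solve, which matches the "fast learning speed" noted for RELM.
    
    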

  7. Task-based statistical image reconstruction for high-quality cone-beam CT

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Webster Stayman, J.; Xu, Jennifer; Zbijewski, Wojciech; Sisniega, Alejandro; Mow, Michael; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-11-01

    Task-based analysis of medical imaging performance underlies many ongoing efforts in the development of new imaging systems. In statistical image reconstruction, regularization is often formulated in terms that encourage smoothness and/or sharpness (e.g. a linear, quadratic, or Huber penalty) but without explicit formulation of the task. We propose an alternative regularization approach in which a spatially varying penalty is determined that maximizes task-based imaging performance at every location in a 3D image. We apply the method to model-based image reconstruction (MBIR, viz. penalized weighted least-squares, PWLS) in cone-beam CT (CBCT) of the head, focusing on the task of detecting a small, low-contrast intracranial hemorrhage (ICH), and we test the performance of the algorithm in the context of a recently developed CBCT prototype for point-of-care imaging of brain injury. Theoretical predictions of local spatial resolution and noise are computed via an optimization by which regularization (specifically, the quadratic penalty strength) is allowed to vary throughout the image to maximize the local task-based detectability index (d′). Simulation studies and test-bench experiments were performed using an anthropomorphic head phantom. Three PWLS implementations were tested: a conventional (constant) penalty; a certainty-based penalty derived to enforce a constant point-spread function (PSF); and the task-based penalty derived to maximize local detectability at each location. Conventional (constant) regularization exhibited a fairly strong degree of spatial variation in d′, and the certainty-based method achieved a uniform PSF, but each exhibited a reduction in detectability compared to the task-based method, which improved detectability by up to ~15%. The improvement was strongest in areas of high attenuation (skull base), where the conventional and certainty-based methods tended to over-smooth the data. The task-driven reconstruction method presents a promising regularization approach in MBIR by explicitly incorporating task-based imaging performance as the objective. The results demonstrate improved ICH conspicuity and support the development of high-quality CBCT systems.
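    The contrast between a constant and a spatially varying quadratic penalty can be illustrated with a 1D toy PWLS problem (identity forward model, first-difference roughness penalty). This is a hedged sketch of the general idea only, not the authors' CBCT implementation; the lesion location, noise level, and penalty strengths are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 64
    x_true = np.zeros(n)
    x_true[20:30] = 1.0                          # small "lesion"
    y = x_true + rng.normal(scale=0.1, size=n)   # identity forward model + noise

    W = np.eye(n)                                # statistical weights
    D = np.diff(np.eye(n), axis=0)               # first-difference roughness operator

    def pwls(beta):
        """Closed-form PWLS: x = argmin (y-x)' W (y-x) + x' D' diag(beta) D x."""
        B = np.diag(beta)                        # spatially varying penalty strength
        return np.linalg.solve(W + D.T @ B @ D, W @ y)

    x_const = pwls(np.full(n - 1, 5.0))          # conventional constant penalty
    beta_var = np.full(n - 1, 5.0)
    beta_var[15:35] = 0.5                        # weaker penalty around the "task" region
    x_task = pwls(beta_var)

    # Weaker local regularization preserves the lesion amplitude better.
    print(abs(x_task[20:30].mean() - 1), abs(x_const[20:30].mean() - 1))
    ```
    
    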

  8. New generic indexing technology

    NASA Technical Reports Server (NTRS)

    Freeston, Michael

    1996-01-01

    There has been no fundamental change in the dynamic indexing methods supporting database systems since the invention of the B-tree twenty-five years ago. And yet the whole classical approach to dynamic database indexing has long since become inappropriate and increasingly inadequate. We are moving rapidly from the conventional one-dimensional world of fixed-structure text and numbers to a multi-dimensional world of variable structures, objects and images, in space and time. But, even before leaving the confines of conventional database indexing, the situation is highly unsatisfactory. In fact, our research has led us to question the basic assumptions of conventional database indexing. We have spent the past ten years studying the properties of multi-dimensional indexing methods, and in this paper we draw the strands of a number of developments together - some quite old, some very new, to show how we now have the basis for a new generic indexing technology for the next generation of database systems.

  9. Regular transport dynamics produce chaotic travel times.

    PubMed

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  10. Regular transport dynamics produce chaotic travel times

    NASA Astrophysics Data System (ADS)

    Villalobos, Jorge; Muñoz, Víctor; Rogan, José; Zarama, Roberto; Johnson, Neil F.; Toledo, Benjamín; Valdivia, Juan Alejandro

    2014-06-01

    In the hope of making passenger travel times shorter and more reliable, many cities are introducing dedicated bus lanes (e.g., Bogota, London, Miami). Here we show that chaotic travel times are actually a natural consequence of individual bus function, and hence of public transport systems more generally, i.e., chaotic dynamics emerge even when the route is empty and straight, stops and lights are equidistant and regular, and loading times are negligible. More generally, our findings provide a novel example of chaotic dynamics emerging from a single object following Newton's laws of motion in a regularized one-dimensional system.

  11. Description of a highly symmetric polytope observed in Thomson's problem of charges on a hypersphere

    NASA Astrophysics Data System (ADS)

    Roth, J.

    2007-10-01

    In a recent paper, Altschuler and Pérez-Garrido [Phys. Rev. E 76, 016705 (2007)] have presented a four-dimensional polytope with 80 vertices. We demonstrate how this polytope can be derived from the regular four-dimensional 600-cell with 120 vertices if two orthogonal positive disclinations are created. Some related polytopes are also described.

  12. Fibonacci Grids

    NASA Technical Reports Server (NTRS)

    Swinbank, Richard; Purser, James

    2006-01-01

    Recent years have seen a resurgence of interest in a variety of non-standard computational grids for global numerical prediction. The motivation has been to reduce problems associated with the converging meridians and the polar singularities of conventional regular latitude-longitude grids. A further impetus has come from the adoption of massively parallel computers, for which it is necessary to distribute work equitably across the processors; this is more practicable for some non-standard grids. Desirable attributes of a grid for high-order spatial finite differencing are: (i) geometrical regularity; (ii) a homogeneous and approximately isotropic spatial resolution; (iii) a low proportion of the grid points where the numerical procedures require special customization (such as near coordinate singularities or grid edges). One family of grid arrangements which, to our knowledge, has never before been applied to numerical weather prediction, but which appears to offer several technical advantages, are what we shall refer to as "Fibonacci grids". They can be thought of as mathematically ideal generalizations of the patterns occurring naturally in the spiral arrangements of seeds and fruit found in sunflower heads and pineapples (to give two of the many botanical examples). These grids possess virtually uniform and highly isotropic resolution, with an equal area for each grid point. There are only two compact singular regions on a sphere that require customized numerics. We demonstrate the practicality of these grids in shallow water simulations, and discuss the prospects for efficiently using these frameworks in three-dimensional semi-implicit and semi-Lagrangian weather prediction or climate models.
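    One common construction of such a grid places points along a golden-angle spiral with equal-area latitude bands; the sketch below is that generic construction, not the specific grids used in the simulations above:

    ```python
    import numpy as np

    def fibonacci_sphere(n):
        """Golden-angle spiral points on the unit sphere (one common construction)."""
        i = np.arange(n)
        phi = (1 + 5 ** 0.5) / 2                  # golden ratio
        theta = 2 * np.pi * i / phi               # longitude advances by the golden angle
        z = 1 - (2 * i + 1) / n                   # equal-area latitude bands
        r = np.sqrt(1 - z ** 2)
        return np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

    pts = fibonacci_sphere(1000)
    print(pts.shape)  # (1000, 3)
    ```

    Because each point owns one latitude band of equal area, the grid is nearly uniform, with the two compact singular regions at the poles noted in the abstract.
    
    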

  13. Automatic Constraint Detection for 2D Layout Regularization.

    PubMed

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
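    The simplest form of alignment-constraint detection can be sketched as 1D clustering of nearly equal edge coordinates, snapping each cluster to its mean. This greedy toy stands in for the paper's quadratic-programming formulation; the tolerance and coordinates are invented:

    ```python
    import numpy as np

    def snap_alignments(coords, tol=2.0):
        """Greedy 1D clustering of nearly equal coordinates; snap clusters to their mean."""
        order = np.argsort(coords)
        snapped = np.asarray(coords, dtype=float).copy()
        cluster = [order[0]]
        for j in order[1:]:
            if coords[j] - coords[cluster[-1]] <= tol:   # close enough: same alignment
                cluster.append(j)
            else:                                        # gap: close current cluster
                snapped[cluster] = coords[cluster].mean()
                cluster = [j]
        snapped[cluster] = coords[cluster].mean()
        return snapped

    # Left edges of four boxes: three are "meant" to be aligned near x = 10.
    x = np.array([9.0, 10.0, 11.0, 50.0])
    print(snap_alignments(x))  # → [10. 10. 10. 50.]
    ```
    
    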

  14. Vacuum polarization in the field of a multidimensional global monopole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grats, Yu. V., E-mail: grats@phys.msu.ru; Spirin, P. A.

    2016-11-15

    An approximate expression for the Euclidean Green function of a massless scalar field in the spacetime of a multidimensional global monopole has been derived. Expressions for the vacuum expectation values ⟨φ²⟩_ren and ⟨T₀₀⟩_ren have been derived by the dimensional regularization method. Comparison with the results obtained by alternative regularization methods is made.

  15. Fast Spatial Resolution Analysis of Quadratic Penalized Least-Squares Image Reconstruction With Separate Real and Imaginary Roughness Penalty: Application to fMRI.

    PubMed

    Olafsson, Valur T; Noll, Douglas C; Fessler, Jeffrey A

    2018-02-01

    Penalized least-squares iterative image reconstruction algorithms used for spatial resolution-limited imaging, such as functional magnetic resonance imaging (fMRI), commonly use a quadratic roughness penalty to regularize the reconstructed images. When used for complex-valued images, the conventional roughness penalty regularizes the real and imaginary parts equally. However, these imaging methods sometimes benefit from separate penalties for each part. The spatial smoothness from the roughness penalty on the reconstructed image is dictated by the regularization parameter(s). One method to set the parameter to a desired smoothness level is to evaluate the full width at half maximum of the reconstruction method's local impulse response. Previous work has shown that when using the conventional quadratic roughness penalty, one can approximate the local impulse response using an FFT-based calculation. However, that acceleration method cannot be applied directly for separate real and imaginary regularization. This paper proposes a fast and stable calculation for this case that also uses FFT-based calculations to approximate the local impulse responses of the real and imaginary parts. This approach is demonstrated with a quadratic image reconstruction of fMRI data that uses separate roughness penalties for the real and imaginary parts.

  16. Refraction tomography mapping of near-surface dipping layers using landstreamer data at East Canyon Dam, Utah

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Markiewicz, R.D.; Xia, J.

    2008-01-01

    We apply the P-wave refraction-tomography method to seismic data collected with a landstreamer. Refraction-tomography inversion solutions were determined using regularization parameters that provided the most realistic near-surface solutions, i.e., those that best matched the dipping-layer structure of nearby outcrops. A reasonably well-matched solution was obtained using an unusual set of optimal regularization parameters. In comparison, conventional regularization parameters did not provide as realistic results. Thus, we consider that even when only qualitative (i.e., visual) a priori information about a site is available, as in the case of East Canyon Dam, Utah, it might be possible to minimize refraction nonuniqueness by estimating the most appropriate regularization parameters.

  17. Statistical investigation of avalanches of three-dimensional small-world networks and their boundary and bulk cross-sections

    NASA Astrophysics Data System (ADS)

    Najafi, M. N.; Dashti-Naserabadi, H.

    2018-03-01

    In many situations we are interested in the propagation of energy in some portion of a three-dimensional system with dilute long-range links. In this paper, a sandpile model is defined on a three-dimensional small-world network with real dissipative boundaries, and the energy propagation is studied in three dimensions as well as in two-dimensional cross-sections. Two types of cross-sections are defined in the system, one in the bulk and another on the system boundary. The motivation is to clarify how the statistics of the avalanches in the bulk cross-section tend to the statistics of the dissipative avalanches, defined on the boundaries, as the concentration of long-range links (α) increases. This trend is numerically shown to be a power law in a manner described in the paper. Two regimes of α are considered in this work. For sufficiently small α the dominant behavior of the system is just like that of the regular BTW model, whereas for intermediate values the behavior is nontrivial, with some exponents that are reported in the paper. It is shown that the spatial extent up to which the statistics is similar to the regular BTW model scales with α just like the dissipative BTW model with dissipation factor (mass in the corresponding ghost model) m² ∼ α, for the three-dimensional system as well as its two-dimensional cross-sections.

  18. Three-dimensional cell shapes and arrangements in human sweat glands as revealed by whole-mount immunostaining

    PubMed Central

    Kurata, Ryuichiro; Futaki, Sugiko; Nakano, Itsuko; Fujita, Fumitaka; Tanemura, Atsushi; Murota, Hiroyuki; Katayama, Ichiro; Okada, Fumihiro

    2017-01-01

    Because sweat secretion is facilitated by mechanical contraction of sweat gland structures, understanding their structure-function relationship could lead to more effective treatments for patients with sweat gland disorders such as heat stroke. Conventional histological studies have shown that sweat glands are three-dimensionally coiled tubular structures consisting of ducts and secretory portions, although their detailed structural anatomy remains unclear. To better understand the details of the three-dimensional (3D) coiled structures of sweat glands, a whole-mount staining method was employed to visualize 3D coiled gland structures with sweat gland markers for ductal luminal, ductal basal, secretory luminal, and myoepithelial cells. Imaging the 3D coiled gland structures demonstrated that the ducts and secretory portions were comprised of distinct tubular structures. Ductal tubules were occasionally bent, while secretory tubules were frequently bent and formed a self-entangled coiled structure. Whole-mount staining of complex coiled gland structures also revealed the detailed 3D cellular arrangements in the individual sweat gland compartments. Ducts were composed of regularly arranged cuboidal shaped cells, while secretory portions were surrounded by myoepithelial cells longitudinally elongated along entangled secretory tubules. Whole-mount staining was also used to visualize the spatial arrangement of blood vessels and nerve fibers, both of which facilitate sweat secretion. The blood vessels ran longitudinally parallel to the sweat gland tubules, while nerve fibers wrapped around secretory tubules, but not ductal tubules. Taken together, whole-mount staining of sweat glands revealed the 3D cell shapes and arrangements of complex coiled gland structures and provides insights into the mechanical contraction of coiled gland structures during sweat secretion. PMID:28636607

  19. Rigorous Approach in Investigation of Seismic Structure and Source Characteristics in Northeast Asia: Hierarchical and Trans-dimensional Bayesian Inversion

    NASA Astrophysics Data System (ADS)

    Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.

    2015-12-01

    Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.

  20. The usefulness of mobile insulator sheets for the optimisation of deep heating area for regional hyperthermia using a capacitively coupled heating method: phantom, simulation and clinical prospective studies.

    PubMed

    Tomura, Kyosuke; Ohguri, Takayuki; Mulder, Hendrik Thijmen; Murakami, Motohiro; Nakahara, Sota; Yahara, Katsuya; Korogi, Yukunori

    2017-11-20

    To evaluate the feasibility and efficacy of deep regional hyperthermia with the use of mobile insulator sheets in a capacitively coupled heating device. Heat was applied using an 8-MHz radiofrequency capacitive device. The insulator sheet was inserted between the regular bolus and the cooled overlay bolus on each of the upper and lower sides of the electrode. Several settings using the insulator sheets were investigated in an experimental study using an agar phantom to evaluate the temperature distributions. The specific absorption rate (SAR) distributions in several organs were also computed for a three-dimensional patient model. In a clinical prospective study, a total of five heating sessions were scheduled for pelvic tumours to assess the thermal parameters. The conventional setting was used during the first, third and fifth treatment sessions, and insulator sheets were used during the second and fourth treatment sessions. In the phantom study, the higher-heating area shifted towards the centre when the mobile insulator sheets were used. The subcutaneous fat/target ratios for the averaged SARs in the setting with the mobile insulator (median, 2.5) were significantly improved compared with those in the conventional setting (median, 3.4). In the clinical study, the thermal dose parameters of CEM43°CT90 in the sessions with the mobile insulator sheets (median, 1.9 min) were significantly better than those in the sessions using the conventional setting (median, 1.0 min). Our novel heating method using mobile insulator sheets was thus found to improve the thermal dose parameters. Further investigations are expected.

  1. Comparison of intraoral scanning and conventional impression techniques using 3-dimensional superimposition.

    PubMed

    Rhee, Ye-Kyu; Huh, Yoon-Hyuk; Cho, Lee-Ra; Park, Chan-Jin

    2015-12-01

    The aim of this study was to evaluate the appropriate impression technique by analyzing the superimposition of 3D digital models, comparing the accuracy of conventional impression techniques and digital impressions. Twenty-four patients who had no periodontitis or temporomandibular joint disease were selected for analysis. As a reference model, digital impressions were taken with a digital impression system. As test models, for the conventional impressions, dual-arch and full-arch impression techniques utilizing addition-type polyvinylsiloxane were applied for the fabrication of casts. A 3D laser scanner was used to scan the casts. The 25 STL datasets were imported into the inspection software in three pairs. The three-dimensional differences were illustrated in a color-coded map. For three-dimensional quantitative analysis, 4 specified contact locations (buccal and lingual cusps of the second premolar and molar) were established. For two-dimensional quantitative analysis, sections from the buccal cusp to the lingual cusp of the second premolar and molar were acquired along the tooth axis. In the color-coded map, the biggest difference was seen between intraoral scanning and the dual-arch impression (P<.05). In the three-dimensional analysis, the biggest difference was seen between intraoral scanning and the dual-arch impression, and the smallest difference was seen between the dual-arch and full-arch impressions. The two- and three-dimensional deviations between the intraoral scanner and the dual-arch impression were bigger than between the full-arch and dual-arch impressions (P<.05). The second premolar showed bigger three-dimensional deviations than the second molar (P>.05).

  2. Comparison of intraoral scanning and conventional impression techniques using 3-dimensional superimposition

    PubMed Central

    Rhee, Ye-Kyu

    2015-01-01

    PURPOSE The aim of this study was to evaluate the appropriate impression technique by analyzing the superimposition of 3D digital models, comparing the accuracy of conventional impression techniques and digital impressions. MATERIALS AND METHODS Twenty-four patients who had no periodontitis or temporomandibular joint disease were selected for analysis. As a reference model, digital impressions were taken with a digital impression system. As test models, for the conventional impressions, dual-arch and full-arch impression techniques utilizing addition-type polyvinylsiloxane were applied for the fabrication of casts. A 3D laser scanner was used to scan the casts. The 25 STL datasets were imported into the inspection software in three pairs. The three-dimensional differences were illustrated in a color-coded map. For three-dimensional quantitative analysis, 4 specified contact locations (buccal and lingual cusps of the second premolar and molar) were established. For two-dimensional quantitative analysis, sections from the buccal cusp to the lingual cusp of the second premolar and molar were acquired along the tooth axis. RESULTS In the color-coded map, the biggest difference was seen between intraoral scanning and the dual-arch impression (P<.05). In the three-dimensional analysis, the biggest difference was seen between intraoral scanning and the dual-arch impression, and the smallest difference was seen between the dual-arch and full-arch impressions. CONCLUSION The two- and three-dimensional deviations between the intraoral scanner and the dual-arch impression were bigger than between the full-arch and dual-arch impressions (P<.05). The second premolar showed bigger three-dimensional deviations than the second molar (P>.05). PMID:26816576

  3. Helicity moduli of three-dimensional dilute XY models

    NASA Astrophysics Data System (ADS)

    Garg, Anupam; Pandit, Rahul; Solla, Sara A.; Ebner, C.

    1984-07-01

    The helicity moduli of various dilute, classical XY models on three-dimensional lattices are studied with a view to understanding some aspects of the superfluidity of 4He in Vycor glass. A spin-wave calculation is used to obtain the low-temperature helicity modulus of a regularly diluted XY model. A similar calculation is performed for the randomly bond-diluted and site-diluted XY models in the limit of low dilution. A Monte Carlo simulation is used to obtain the helicity modulus of the randomly bond-diluted XY model over a wide range of temperature and dilution. The randomly diluted models are found to agree, and the regularly diluted model to disagree, with certain experimentally observed features of the variation in superfluid fraction with coverage of 4He in Vycor glass.

  4. On solvability of boundary value problems for hyperbolic fourth-order equations with nonlocal boundary conditions of integral type

    NASA Astrophysics Data System (ADS)

    Popov, Nikolay S.

    2017-11-01

    Solvability of some initial-boundary value problems for linear hyperbolic equations of the fourth order is studied. A condition on the lateral boundary in these problems relates the values of a solution or the conormal derivative of a solution to the values of some integral operator applied to a solution. Nonlocal boundary-value problems for one-dimensional hyperbolic second-order equations with integral conditions on the lateral boundary were considered in the articles by A.I. Kozhanov. Higher-dimensional hyperbolic equations of higher order with integral conditions on the lateral boundary were not studied earlier. The existence and uniqueness theorems of regular solutions are proven. The method of regularization and the method of continuation in a parameter are employed to establish solvability.

  5. Discriminant analysis for fast multiclass data classification through regularized kernel function approximation.

    PubMed

    Ghorai, Santanu; Mukherjee, Anirban; Dutta, Pranab K

    2010-06-01

    In this brief we propose multiclass data classification by computationally inexpensive discriminant analysis through vector-valued regularized kernel function approximation (VVRKFA). VVRKFA, an extension of fast regularized kernel function approximation (FRKFA), provides the vector-valued response in a single step. VVRKFA finds a linear operator and a bias vector by using a reduced kernel that maps a pattern from feature space into a low-dimensional label space. The classification of patterns is carried out in this low-dimensional label subspace. A test pattern is classified depending on its proximity to the class centroids. The effectiveness of the proposed method is experimentally verified and compared with multiclass support vector machines (SVM) on several benchmark data sets as well as on gene microarray data for multi-category cancer classification. The results indicate a significant improvement in both training and testing time compared to multiclass SVM, with comparable testing accuracy, principally on large data sets. Experiments in this brief also serve as a comparison of the performance of VVRKFA with stratified random sampling and sub-sampling.
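
    VVRKFA's core idea — a regularized linear map into a low-dimensional label space followed by nearest-centroid classification — can be illustrated with a simplified sketch. This is not the published algorithm (no reduced kernel is used here, and all data and parameters are invented for the toy example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-class data: well-separated Gaussian blobs in 5-D feature space.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(30, 5))
               for c in (-2.0, 0.0, 2.0)])
y = np.repeat([0, 1, 2], 30)
Y = np.eye(3)[y]                                # one-hot vector-valued targets

# Regularized least squares: linear operator plus bias mapping
# feature space into the 3-dimensional label space.
lam = 1e-2
A = np.hstack([X, np.ones((X.shape[0], 1))])    # append bias column
W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)

Z = A @ W                                       # projections in label space
centroids = np.array([Z[y == k].mean(axis=0) for k in range(3)])

# Classify each pattern by its proximity to the class centroids.
d = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
pred = d.argmin(axis=1)
acc = (pred == y).mean()
print(acc)
```

On separable blobs like these, the nearest-centroid rule in label space recovers the classes essentially perfectly.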

  6. Three-dimensional anthropometry of the adult face.

    DOT National Transportation Integrated Search

    1978-03-01

    This study describes a new three-dimensional anatomical axis system based on four conventional anthropometrical face landmarks. Coincident as a coordinate (orthogonal) axis system, this reference system was developed to provide convenient orientation...

  7. Impedance Eduction in Sound Fields With Peripherally Varying Liners and Flow

    NASA Technical Reports Server (NTRS)

    Watson, W. R.; Jones, M. G.

    2015-01-01

    A two-dimensional impedance eduction theory is extended to three-dimensional sound fields and peripherally varying duct liners. The approach is to first measure the acoustic pressure field at a series of flush-mounted wall microphones located around the periphery of the flow duct. The numerical solution for the acoustic pressure field at these microphones is also obtained by solving the three-dimensional convected Helmholtz equation using the finite element method. A quadratic objective function based on the difference between the measured and finite element solution is constructed and the unknown impedance function is obtained by minimizing this objective function. Impedance spectra educed for two uniform-structure liners (a wire-mesh and a conventional liner) and a hard-soft-hard peripherally varying liner (for which the soft segment is that of the conventional liner) are presented. Results are presented at three mean flow Mach numbers and fourteen sound source frequencies. The impedance spectra of the uniform-structure liners are also computed using a two-dimensional impedance eduction theory. The primary conclusions of the study are: 1) when measured data is used with the uniform-structure liners, the three-dimensional theory reproduces the same impedance spectra as the two-dimensional theory except for frequencies corresponding to very low or very high liner attenuation; and 2) good agreement between the educed impedance spectra of the uniform structure conventional liner and the soft segment of the peripherally varying liner is obtained.

  8. Self-assembled one dimensional functionalized metal-organic nanotubes (MONTs) for proton conduction.

    PubMed

    Panda, Tamas; Kundu, Tanay; Banerjee, Rahul

    2012-06-04

    Two self-assembled isostructural functionalized metal-organic nanotubes have been synthesized using 5-triazole isophthalic acid (5-TIA) with In(III) and Cd(II). In- and Cd-5TIA possess one-dimensional (1D) nanotubular architectures and show proton conductivity along regular 1D channels, measured as 5.35 × 10⁻⁵ and 3.61 × 10⁻³ S cm⁻¹, respectively.

  9. Direct determination of one-dimensional interphase structures using normalized crystal truncation rod analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony

    Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.

  10. Direct determination of one-dimensional interphase structures using normalized crystal truncation rod analysis

    DOE PAGES

    Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony; ...

    2018-04-20

    Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.

  11. A comparison of image restoration approaches applied to three-dimensional confocal and wide-field fluorescence microscopy.

    PubMed

    Verveer, P. J; Gemkow, M. J; Jovin, T. M

    1999-01-01

    We have compared different image restoration approaches for fluorescence microscopy. The most widely used algorithms were classified within a Bayesian framework according to the assumed noise model and the type of regularization imposed. We considered both Gaussian and Poisson models for the noise, in combination with Tikhonov regularization, entropy regularization, Good's roughness, and no regularization (maximum likelihood estimation). Simulations of fluorescence confocal imaging were used to examine the different noise models and regularization approaches using the mean squared error criterion. The assumption of a Gaussian noise model yielded only slightly higher errors than the Poisson model. Good's roughness was the best choice for the regularization. Furthermore, we compared simulated confocal and wide-field data. In general, restored confocal data are superior to restored wide-field data, but given a sufficiently higher signal level for the wide-field data, the restoration result may rival confocal data in quality. Finally, a visual comparison of experimental confocal and wide-field data is presented.
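
    For the Gaussian-noise/Tikhonov combination the abstract mentions, the regularized restoration has a closed form in the Fourier domain. The 1-D sketch below is illustrative only (the PSF width, noise level, and regularization weight are invented, and the paper's comparison is for 3-D microscopy data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
x = np.zeros(n); x[60:80] = 1.0; x[150:155] = 2.0   # "true" object

# Blur with a Gaussian PSF (circular convolution) and add Gaussian noise.
t = np.arange(n) - n // 2
psf = np.exp(-t**2 / (2 * 4.0**2)); psf /= psf.sum()
psf = np.roll(psf, -n // 2)                 # center the PSF at index 0
H = np.fft.fft(psf)
blurred = np.real(np.fft.ifft(np.fft.fft(x) * H))
data = blurred + rng.normal(scale=0.01, size=n)

# Tikhonov restoration: minimize ||H x - y||^2 + lam ||x||^2, whose
# minimizer in Fourier space is X = conj(H) Y / (|H|^2 + lam).
lam = 1e-3
X_hat = np.conj(H) * np.fft.fft(data) / (np.abs(H)**2 + lam)
restored = np.real(np.fft.ifft(X_hat))

err_blur = np.linalg.norm(blurred - x)
err_rest = np.linalg.norm(restored - x)
print(err_blur, err_rest)
```

The regularizer suppresses the frequencies where the PSF transfer function is small, trading some resolution for noise control; the restored error is well below the blurred error here.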

  12. A review on the multivariate statistical methods for dimensional reduction studies

    NASA Astrophysics Data System (ADS)

    Aik, Lim Eng; Kiang, Lam Chee; Mohamed, Zulkifley Bin; Hong, Tan Wei

    2017-05-01

    In this study we review multivariate statistical methods for dimensionality reduction that have been developed by various researchers. Dimensionality reduction is valuable for accelerating algorithm performance, and it may also help the final classification or clustering accuracy. Noisy or faulty input data often leads to less-than-desirable algorithm performance. Removing uninformative or misleading data columns can help an algorithm find more general decision regions and rules and, overall, achieve better performance on new data sets.

  13. Mathematical Modeling the Geometric Regularity in Proteus Mirabilis Colonies

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Jiang, Yi; Minsu Kim Collaboration

    Proteus mirabilis colonies exhibit striking spatiotemporal regularity: concentric ring patterns of alternating high and low bacterial density in space, and temporal periodicity in the repeated cycle of growth and swarming. We present a simple mathematical model to explain the spatiotemporal regularity of P. mirabilis colonies. We study a one-dimensional system. Using a reaction-diffusion model with thresholds in cell density and nutrient concentration, we recreate periodic growth and spread patterns, suggesting that nutrient constraints and cell-density regulation may be sufficient to explain the spatiotemporal periodicity of P. mirabilis colonies. We further verify this result using a cell-based model.
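
    A one-dimensional reaction-diffusion scheme with thresholds in cell density and nutrient, in the spirit of the model described, can be sketched as follows. All rates and thresholds are illustrative choices, not the authors' parameters, and the conservative gated-diffusion step is one of several possible discretizations:

```python
import numpy as np

n, steps = 200, 400
dt, D = 0.2, 1.0
rho = np.zeros(n); rho[:5] = 1.0          # inoculum at the left edge
nut = np.ones(n)                          # uniform initial nutrient

RHO_TH, NUT_TH = 0.3, 0.1                 # swarming / growth thresholds

for _ in range(steps):
    # Growth consumes nutrient, active only above the nutrient threshold.
    grow = (nut > NUT_TH) & (rho > 0)
    rho[grow] += dt * 0.5 * rho[grow] * nut[grow]
    nut[grow] -= dt * 0.2 * rho[grow]
    np.clip(nut, 0.0, None, out=nut)
    # Swarming: conservative diffusion, gated by the density threshold
    # (a cell face is mobile if either neighboring cell is above it).
    mob = np.where(rho > RHO_TH, D, 0.0)
    mob_face = np.maximum(mob[:-1], mob[1:])
    flux = mob_face * (rho[1:] - rho[:-1])
    rho[:-1] += dt * flux
    rho[1:] -= dt * flux

print(rho.max(), rho[20], nut.min())
```

The density threshold makes spreading intermittent rather than continuous, which is the mechanism the model uses to produce consolidation/swarm phases; the full ring pattern requires the complete model, not this sketch.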

  14. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    PubMed

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares-QR decomposition which is a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of initial pressure distribution enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
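
    The paper applies LSQR to a large photoacoustic system; the same "pick the Tikhonov parameter automatically" idea can be shown on a small dense stand-in using an SVD and generalized cross-validation (GCV). Everything below (the toy forward model, the GCV criterion, the parameter grid) is an illustrative assumption, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ill-conditioned toy forward model A and noisy data b = A x + noise.
n = 40
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = 10.0 ** np.linspace(0, -6, n)           # rapidly decaying spectrum
A = U @ np.diag(s) @ V.T
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + rng.normal(scale=1e-4, size=n)

def tikhonov(lam):
    # Minimizer of ||A x - b||^2 + lam^2 ||x||^2 via the SVD.
    f = s / (s**2 + lam**2)
    return V @ (f * (U.T @ b))

def gcv(lam):
    # GCV(lam) = ||A x(lam) - b||^2 / trace(I - A A_lam^+)^2,
    # computed from the Tikhonov filter factors.
    phi = s**2 / (s**2 + lam**2)
    resid = (1 - phi) * (U.T @ b)
    return (resid @ resid) / (n - phi.sum()) ** 2

lams = 10.0 ** np.linspace(-8, 0, 50)
scores = np.array([gcv(l) for l in lams])
lam_opt = lams[scores.argmin()]
x_rec = tikhonov(lam_opt)
print(lam_opt, np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

GCV is one of several standard automatic selectors; the LSQR route in the paper achieves the same goal without forming an SVD, which matters at photoacoustic problem sizes.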

  15. Efficient integration method for fictitious domain approaches

    NASA Astrophysics Data System (ADS)

    Duczek, Sascha; Gabbert, Ulrich

    2015-10-01

    In the current article, we present an efficient and accurate numerical method for the integration of the system matrices in fictitious domain approaches such as the finite cell method (FCM). In the framework of the FCM, the physical domain is embedded in a geometrically larger domain of simple shape which is discretized using a regular Cartesian grid of cells. Therefore, a spacetree-based adaptive quadrature technique is normally deployed to resolve the geometry of the structure. Depending on the complexity of the structure under investigation this method accounts for most of the computational effort. To reduce the computational costs for computing the system matrices an efficient quadrature scheme based on the divergence theorem (Gauß-Ostrogradsky theorem) is proposed. Using this theorem the dimension of the integral is reduced by one, i.e. instead of solving the integral for the whole domain only its contour needs to be considered. In the current paper, we present the general principles of the integration method and its implementation. The results to several two-dimensional benchmark problems highlight its properties. The efficiency of the proposed method is compared to conventional spacetree-based integration techniques.
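
    The dimension-reduction trick behind this quadrature is easiest to see in 2D, where the divergence (Green) theorem turns the area integral into a contour integral, A = ½ ∮ (x dy − y dx). A minimal numerical check on the unit disk (not the FCM implementation itself):

```python
import numpy as np

# Area of the unit disk from its boundary only:
# A = 1/2 * closed contour integral of (x dy - y dx).
n = 1000
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
x, y = np.cos(theta), np.sin(theta)

# Piecewise-linear (shoelace) quadrature with periodic differences.
dx = np.roll(x, -1) - x
dy = np.roll(y, -1) - y
area = 0.5 * np.sum(x * dy - y * dx)
print(area, np.pi)
```

Only boundary samples enter the sum, which is exactly the saving the proposed integration scheme exploits: the domain quadrature of dimension d is replaced by one of dimension d − 1.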

  16. Crossing the dividing surface of transition state theory. IV. Dynamical regularity and dimensionality reduction as key features of reactive trajectories

    NASA Astrophysics Data System (ADS)

    Lorquet, J. C.

    2017-04-01

    The atom-diatom interaction is studied by classical mechanics using Jacobi coordinates (R, r, θ). Reactivity criteria that go beyond the simple requirement of transition state theory (i.e., PR* > 0) are derived in terms of specific initial conditions. Trajectories that exactly fulfill these conditions cross the conventional dividing surface used in transition state theory (i.e., the plane in configuration space passing through a saddle point of the potential energy surface and perpendicular to the reaction coordinate) only once. Furthermore, they are observed to be strikingly similar and to form a tightly packed bundle of perfectly collimated trajectories in the two-dimensional (R, r) configuration space, although their angular motion is highly specific for each one. Particular attention is paid to symmetrical transition states (i.e., either collinear or T-shaped with C2v symmetry) for which decoupling between angular and radial coordinates is observed, as a result of selection rules that reduce to zero Coriolis couplings between modes that belong to different irreducible representations. Liapunov exponents are equal to zero and Hamilton's characteristic function is planar in that part of configuration space that is visited by reactive trajectories. Detailed consideration is given to the concept of average reactive trajectory, which starts right from the saddle point and which is shown to be free of curvature-induced Coriolis coupling. The reaction path Hamiltonian model, together with a symmetry-based separation of the angular degree of freedom, provides an appropriate framework that leads to the formulation of an effective two-dimensional Hamiltonian. The success of the adiabatic approximation in this model is due to the symmetry of the transition state, not to a separation of time scales. Adjacent trajectories, i.e., those that do not exactly fulfill the reactivity conditions have similar characteristics, but the quality of the approximation is lower. 
At higher energies, these characteristics persist, but to a lesser degree. Recrossings of the dividing surface then become much more frequent and the phase space volumes of initial conditions that generate recrossing-free trajectories decrease. Altogether, one ends up with an additional illustration of the concept of reactive cylinder (or conduit) in phase space that reactive trajectories must follow. Reactivity is associated with dynamical regularity and dimensionality reduction, whatever the shape of the potential energy surface, no matter how strong its anharmonicity, and whatever the curvature of its reaction path. Both simplifying features persist during the entire reactive process, up to complete separation of fragments. The ergodicity assumption commonly assumed in statistical theories is inappropriate for reactive trajectories.

  17. REGULARIZATION FOR COX’S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY*

    PubMed Central

    Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox’s proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to significant reduction of the “irrepresentable condition” needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples. PMID:23066171

  18. REGULARIZATION FOR COX'S PROPORTIONAL HAZARDS MODEL WITH NP-DIMENSIONALITY.

    PubMed

    Bradic, Jelena; Fan, Jianqing; Jiang, Jiancheng

    2011-01-01

    High-throughput genetic sequencing arrays with thousands of measurements per sample, together with a great amount of related censored clinical data, have increased the demand for better measurement-specific model selection. In this paper we establish strong oracle properties of non-concave penalized methods for non-polynomial (NP) dimensional data with censoring in the framework of Cox's proportional hazards model. A class of folded-concave penalties is employed, and both LASSO and SCAD are discussed specifically. We address the question of under which dimensionality and correlation restrictions an oracle estimator can be constructed. It is demonstrated that non-concave penalties lead to significant reduction of the "irrepresentable condition" needed for LASSO model selection consistency. A large deviation result for martingales, of interest in its own right, is developed for characterizing the strong oracle property. Moreover, the non-concave regularized estimator is shown to achieve asymptotically the information bound of the oracle estimator. A coordinate-wise algorithm is developed for finding the grid of solution paths for penalized hazard regression problems, and its performance is evaluated on simulated and gene association study examples.
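
    The coordinate-wise algorithm the authors develop targets penalized Cox regression; the same soft-thresholding coordinate descent is easiest to see in the plain least-squares LASSO setting. This is a simplified sketch with invented data, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparse ground truth: only 3 of 20 coefficients are nonzero.
n, p = 100, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[[0, 5, 10]] = [3.0, -2.0, 1.5]
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Coordinate descent for (1/2n)||y - X b||^2 + lam ||b||_1.
lam = 0.1
beta = np.zeros(p)
col_sq = (X ** 2).sum(axis=0) / n
for _ in range(200):
    for j in range(p):
        r = y - X @ beta + X[:, j] * beta[j]   # partial residual
        rho_j = X[:, j] @ r / n
        beta[j] = soft_threshold(rho_j, lam) / col_sq[j]

print(np.round(beta, 2))
```

Each coordinate update has a closed form, which is what makes solution-path computation over a grid of penalties cheap; folded-concave penalties like SCAD replace the soft-threshold rule with a piecewise rule but keep the same sweep structure.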

  19. Influence of platform and abutment angulation on peri-implant bone. A three-dimensional finite element stress analysis.

    PubMed

    Martini, Ana Paula; Barros, Rosália Moreira; Júnior, Amilcar Chagas Freitas; Rocha, Eduardo Passos; de Almeida, Erika Oliveira; Ferraz, Cacilda Cunha; Pellegrin, Maria Cristina Jimenez; Anchieta, Rodolfo Bruniera

    2013-12-01

    The aim of this study was to evaluate stress distribution in the peri-implant bone, simulating the influence of Nobel Select implants with straight or angulated abutments on regular and switching platforms in the anterior maxilla, by means of 3-dimensional finite element analysis. Four mathematical models of a central incisor supported by an external hexagon implant (13 mm × 5 mm) were created varying the platform (R, regular or S, switching) and the abutment (S, straight or A, angulated 15°). The models were created by using the Mimics 13 and SolidWorks 2010 software programs. The numerical analysis was performed using ANSYS Workbench 10.0. Oblique forces (100 N) were applied to the palatine surface of the central incisor. The bone/implant interface was considered perfectly integrated. Maximum (σmax) and minimum (σmin) principal stress values were obtained. For the cortical bone, the highest stress values (σmax) were observed in the RA model (regular platform and angulated abutment, 51 MPa), followed by SA (platform switching and angulated abutment, 44.8 MPa), RS (regular platform and straight abutment, 38.6 MPa) and SS (platform switching and straight abutment, 36.5 MPa). For the trabecular bone, the highest stress values (σmax) were observed in RA (6.55 MPa), followed by RS (5.88 MPa), SA (5.60 MPa), and SS (4.82 MPa). The regular platform generated higher stress in the cervical peri-implant region of the cortical and trabecular bone than platform switching, irrespective of the abutment used (straight or angulated).

  20. On the Global Regularity for the 3D Magnetohydrodynamics Equations Involving Partial Components

    NASA Astrophysics Data System (ADS)

    Qian, Chenyin

    2018-03-01

    In this paper, we study the regularity criteria of the three-dimensional magnetohydrodynamics system in terms of some components of the velocity field and the magnetic field. With a decomposition of the four nonlinear terms of the system, this result gives an improvement of some corresponding previous works (Yamazaki in J Math Fluid Mech 16: 551-570, 2014; Jia and Zhou in Nonlinear Anal Real World Appl 13: 410-418, 2012).

  1. Post-Newtonian and numerical calculations of the gravitational self-force for circular orbits in the Schwarzschild geometry

    NASA Astrophysics Data System (ADS)

    Blanchet, Luc; Detweiler, Steven; Le Tiec, Alexandre; Whiting, Bernard F.

    2010-03-01

    The problem of a compact binary system whose components move on circular orbits is addressed using two different approximation techniques in general relativity. The post-Newtonian (PN) approximation involves an expansion in powers of v/c≪1, and is most appropriate for small orbital velocities v. The perturbative self-force analysis requires an extreme mass ratio m1/m2≪1 for the components of the binary. A particular coordinate-invariant observable is determined as a function of the orbital frequency of the system using these two different approximations. The post-Newtonian calculation is pushed up to the third post-Newtonian (3PN) order. It involves the metric generated by two point particles and evaluated at the location of one of the particles. We regularize the divergent self-field of the particle by means of dimensional regularization. We show that the poles ∝ (d−3)⁻¹ appearing in dimensional regularization at the 3PN order cancel out from the final gauge-invariant observable. The 3PN analytical result, through first order in the mass ratio, and the numerical self-force calculation are found to agree well. The consistency of this cross-cultural comparison confirms the soundness of both approximations in describing compact binary systems. In particular, it provides an independent test of the very different regularization procedures invoked in the two approximation schemes.

  2. Topology optimization for three-dimensional electromagnetic waves using an edge element-based finite-element method.

    PubMed

    Deng, Yongbo; Korvink, Jan G

    2016-05-01

    This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal into an element-wise physical density variable.

  3. Topology optimization for three-dimensional electromagnetic waves using an edge element-based finite-element method

    PubMed Central

    Korvink, Jan G.

    2016-01-01

    This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal into an element-wise physical density variable. PMID:27279766

  4. SIC-POVMS and MUBS: Geometrical Relationships in Prime Dimension

    NASA Astrophysics Data System (ADS)

    Appleby, D. M.

    2009-03-01

    The paper concerns Weyl-Heisenberg covariant SIC-POVMs (symmetric informationally complete positive operator valued measures) and full sets of MUBs (mutually unbiased bases) in prime dimension. When represented as vectors in generalized Bloch space, a SIC-POVM forms a (d²−1)-dimensional regular simplex (d being the Hilbert space dimension). By contrast, the generalized Bloch vectors representing a full set of MUBs form d+1 mutually orthogonal (d−1)-dimensional regular simplices. In this paper we show that, in the Weyl-Heisenberg case, there are some simple geometrical relationships between the single SIC-POVM simplex and the d+1 MUB simplices. We go on to give geometrical interpretations of the minimum uncertainty states introduced by Wootters and Sussman, and by Appleby, Dang and Fuchs, and of the fiduciality condition given by Appleby, Dang and Fuchs.
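
    For d = 2 the MUB structure is easy to verify numerically: the eigenbases of the Pauli X, Y, Z operators form a full set of d + 1 = 3 mutually unbiased bases, with every cross-basis overlap satisfying |⟨e|f⟩|² = 1/d. This checks that standard fact only; the SIC-POVM/MUB simplex geometry of the paper is not reproduced here:

```python
import numpy as np

s = 1 / np.sqrt(2)
# Each matrix holds one orthonormal basis in its COLUMNS.
Z = np.array([[1, 0], [0, 1]], dtype=complex)             # Z eigenbasis
X = np.array([[s, s], [s, -s]], dtype=complex)            # X eigenbasis
Y = np.array([[s, s], [1j * s, -1j * s]], dtype=complex)  # Y eigenbasis
bases = [Z, X, Y]

# Mutual unbiasedness: |<e_i|f_j>|^2 = 1/d for vectors from distinct bases.
d = 2
ok = True
for a in range(3):
    for b in range(3):
        if a == b:
            continue
        overlaps = np.abs(bases[a].conj().T @ bases[b]) ** 2
        ok &= np.allclose(overlaps, 1.0 / d)
print(ok)
```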

  5. Zero-dimensional to three-dimensional nanojoining: current status and potential applications

    DOE PAGES

    Ma, Ying; Li, Hong; Bridges, Denzel; ...

    2016-08-01

    We report that the continuing miniaturization of microelectronics is pushing advanced manufacturing into nanomanufacturing. Nanojoining is a bottom-up assembly technique that enables functional nanodevice fabrication with dissimilar nanoscopic building blocks and/or molecular components. Various conventional joining techniques have been modified and re-invented for joining nanomaterials. Our review surveys recent progress in nanojoining methods, as compared to conventional joining processes. Examples of nanojoining are given and classified by the dimensionality of the joining materials. At each classification, nanojoining is reviewed and discussed according to materials specialties, low-dimensional processing features, energy input mechanisms and potential applications. The preparation of new intermetallic materials by reactive nanoscale multilayer foils based on self-propagating high-temperature synthesis is highlighted. This review will provide insight into nanojoining fundamentals and innovative applications in power electronics packaging, plasmonic devices, nanosoldering for printable electronics, 3D printing and space manufacturing.

  6. A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.

    PubMed

    Nagaoka, Tomoaki; Watanabe, Soichi

    2010-01-01

    Numerical simulations with the numerical human model using the finite-difference time domain (FDTD) method have recently been performed frequently in a number of fields in biomedical engineering. However, the FDTD calculation runs too slowly. We focus, therefore, on general purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with the performance of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce run time in comparison with that using a conventional CPU, even a native GPU implementation of the three-dimensional FDTD method, while the GPU/CPU speed ratio varies with the calculation domain and thread block size.
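
    The update the authors port to CUDA is, at its core, a leapfrog stencil applied alternately to the electric and magnetic fields. A minimal one-dimensional CPU sketch in normalized units (not the paper's 3D GPU code; grid size, Courant number, and source are illustrative):

```python
import numpy as np

# 1D FDTD (Yee leapfrog) in normalized units, Courant number S = 0.5.
n, steps, S = 400, 500, 0.5
Ez = np.zeros(n)
Hy = np.zeros(n - 1)

for t in range(steps):
    # Update H from the spatial derivative of E, then E from H.
    Hy += S * (Ez[1:] - Ez[:-1])
    Ez[1:-1] += S * (Hy[1:] - Hy[:-1])
    # Additive source: a Gaussian pulse injected near the left boundary.
    Ez[20] += np.exp(-((t - 40) / 12.0) ** 2)

print(np.abs(Ez).max())
```

On a GPU, each grid point's update is independent within a half-step, which is why this stencil maps so well onto CUDA thread blocks; the speedup reported in the paper comes from parallelizing exactly these two loops over a 3D domain.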

  7. High efficiency and non-Richardson thermionics in three dimensional Dirac materials

    NASA Astrophysics Data System (ADS)

    Huang, Sunchao; Sanderson, Matthew; Zhang, Yan; Zhang, Chao

    2017-10-01

    Three-dimensional (3D) topological materials have a linear energy dispersion and exhibit many electronic properties superior to conventional materials, such as fast response times, high mobility, and chiral transport. In this work, we demonstrate that 3D Dirac materials also have advantages over conventional semiconductors and graphene in thermionic applications. The low emission current suffered in graphene due to the vanishing density of states is enhanced by an increased group velocity in 3D Dirac materials. Furthermore, the thermal energy carried by electrons in 3D Dirac materials is twice that in conventional materials with a parabolic electron energy dispersion. As a result, 3D Dirac materials have the best thermal efficiency or coefficient of performance when compared to conventional semiconductors and graphene. The generalized Richardson-Dushman law in 3D Dirac materials is derived. The law exhibits the interplay of the reduced density of states and enhanced emission velocity.
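
    For reference, the conventional (parabolic-band) Richardson-Dushman law that the paper generalizes is J = A₀ T² exp(−W / k_B T). A quick evaluation with a tungsten-like work function (the numbers are illustrative; the generalized Dirac-material law derived in the paper has a different form):

```python
import math

A0 = 1.20173e6        # Richardson constant, A m^-2 K^-2
KB = 8.617333e-5      # Boltzmann constant, eV K^-1

def richardson_current(T, W):
    """Conventional thermionic emission current density, A/m^2."""
    return A0 * T**2 * math.exp(-W / (KB * T))

# Tungsten-like work function W = 4.5 eV.
J_2000 = richardson_current(2000.0, 4.5)
J_2500 = richardson_current(2500.0, 4.5)
print(J_2000, J_2500)
```

The exponential dominates: a 25% temperature increase raises the current by several orders of magnitude, which is why the density-of-states and group-velocity prefactors discussed in the paper matter most at fixed operating temperature.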

  8. Superresolution parallel magnetic resonance imaging: Application to functional and spectroscopic imaging

    PubMed Central

    Otazo, Ricardo; Lin, Fa-Hsuan; Wiggins, Graham; Jordan, Ramiro; Sodickson, Daniel; Posse, Stefan

    2009-01-01

    Standard parallel magnetic resonance imaging (MRI) techniques suffer from residual aliasing artifacts when the coil sensitivities vary within the image voxel. In this work, a parallel MRI approach known as Superresolution SENSE (SURE-SENSE) is presented in which acceleration is performed by acquiring only the central region of k-space instead of increasing the sampling distance over the complete k-space matrix, and reconstruction is explicitly based on intra-voxel coil sensitivity variation. In SURE-SENSE, parallel MRI reconstruction is formulated as a superresolution imaging problem where a collection of low resolution images acquired with multiple receiver coils are combined into a single image with higher spatial resolution using coil sensitivities acquired with high spatial resolution. The effective acceleration of conventional gradient encoding is given by the gain in spatial resolution, which is dictated by the degree of variation of the different coil sensitivity profiles within the low resolution image voxel. Since SURE-SENSE is an ill-posed inverse problem, Tikhonov regularization is employed to control noise amplification. Unlike standard SENSE, for which acceleration is constrained to the phase-encoding dimension(s), SURE-SENSE allows acceleration along all encoding directions — for example, two-dimensional acceleration of a 2D echo-planar acquisition. SURE-SENSE is particularly suitable for low spatial resolution imaging modalities such as spectroscopic imaging and functional imaging with high temporal resolution. Application to echo-planar functional and spectroscopic imaging in human brain is presented using two-dimensional acceleration with a 32-channel receiver coil. PMID:19341804

  9. SkyFACT: high-dimensional modeling of gamma-ray emission with adaptive templates and penalized likelihoods

    NASA Astrophysics Data System (ADS)

    Storm, Emma; Weniger, Christoph; Calore, Francesca

    2017-08-01

    We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳ 10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data and on gamma-ray emission from the inner Galaxy, |l| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as a basis for future studies of diffuse emission in and outside the Galactic disk.
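
    The core fitting machinery described here, a penalized Poisson likelihood over per-pixel nuisance parameters, minimized with L-BFGS-B under positivity bounds, can be sketched on a toy scale. The sketch below uses one flat template, invented data, and a quadratic penalty in place of SkyFACT's maximum-entropy-motivated regularizers; every number and name is an illustrative assumption:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n_pix = 50
    template = np.full(n_pix, 20.0)    # predicted counts per pixel (toy model)
    # true per-pixel modulations: near 1 with ~14% scatter (model imperfections)
    tau_true = rng.gamma(shape=50.0, scale=0.02, size=n_pix)
    counts = rng.poisson(tau_true * template).astype(float)
    lam = 5.0                          # regularization strength (invented)

    def neg_penalized_loglike(tau):
        mu = tau * template
        # Poisson -log L (up to a tau-independent constant) plus a quadratic
        # penalty pulling the nuisance parameters toward 1
        return np.sum(mu - counts * np.log(mu)) + lam * np.sum((tau - 1.0) ** 2)

    def grad(tau):
        return template - counts / tau + 2.0 * lam * (tau - 1.0)

    tau0 = np.ones(n_pix)
    res = minimize(neg_penalized_loglike, tau0, jac=grad,
                   method="L-BFGS-B", bounds=[(1e-6, None)] * n_pix)
    ```

    The bounds keep the Poisson means positive, and supplying the analytic gradient is what makes L-BFGS-B practical when, as in SkyFACT, the parameter count reaches 10^5.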

  10. Statistical Learning Theory for High Dimensional Prediction: Application to Criterion-Keyed Scale Development

    PubMed Central

    Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul

    2016-01-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different from maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (Supervised Principal Components, Regularization, and Boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach (or perhaps because of them), SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
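
    The EPE-minimization-by-cross-validation idea the abstract describes can be sketched in a few lines of NumPy: fit a regularized (ridge) regression for each candidate penalty and keep the one with the lowest held-out error. The synthetic "item pool" data, fold count, and penalty grid below are invented stand-ins, not the article's analysis:

    ```python
    import numpy as np

    def ridge_fit(X, y, lam):
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

    def cv_mse(X, y, lam, k=5):
        """k-fold cross-validated MSE: an estimate of expected prediction error."""
        idx = np.arange(len(y))
        errs = []
        for fold in np.array_split(idx, k):
            train = np.setdiff1d(idx, fold)
            beta = ridge_fit(X[train], y[train], lam)
            errs.append(np.mean((y[fold] - X[fold] @ beta) ** 2))
        return float(np.mean(errs))

    rng = np.random.default_rng(1)
    n, p = 80, 60                                  # many "items", few subjects
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p); beta_true[:5] = 2.0   # only 5 informative items
    y = X @ beta_true + rng.standard_normal(n)

    lams = [0.01, 0.1, 1.0, 10.0, 100.0]
    scores = [cv_mse(X, y, lam) for lam in lams]
    best = lams[int(np.argmin(scores))]
    ```

    With nearly as many predictors as observations, the near-unregularized fit (lam = 0.01) overfits badly and its held-out error explodes, so cross-validation selects a stronger penalty, which is exactly the complexity control the abstract contrasts with within-sample likelihood maximization.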

  11. Ambiguities and conventions in the perception of visual art.

    PubMed

    Mamassian, Pascal

    2008-09-01

    Visual perception is ambiguous, and the visual arts play with these ambiguities. While perceptual ambiguities are resolved with prior constraints, artistic ambiguities are resolved by conventions. Is there a relationship between priors and conventions? This review surveys recent work related to these ambiguities in composition, spatial scale, illumination and color, three-dimensional layout, shape, and movement. While most conventions seem to have their roots in perceptual constraints, those conventions that differ from priors may help us appreciate how the visual arts differ from everyday perception.

  12. Dimensional change in complete dentures fabricated by injection molding and microwave processing.

    PubMed

    Keenan, Phillip L J; Radford, David R; Clark, Robert K F

    2003-01-01

    Acrylic resin complete dentures undergo dimensional changes during polymerization. Techniques using injection molding and microwave polymerization are reported to reduce these changes and thereby improve clinical fit; these dimensional changes need to be quantified. The purpose of this study was to compare the dimensional changes, during polymerization and storage in water, of simulated maxillary complete dentures fabricated by injection molding with conventional or microwave polymerization against a control of conventionally packed and polymerized simulated maxillary complete dentures. Forty identical maxillary denture bases were prepared in dental wax with anatomic teeth. They were invested and the wax eliminated from the molds. Ten specimens each were randomly assigned to 1 of 4 groups. Group 1 was compression molded and conventionally polymerized; group 2 was injection molded and conventionally polymerized (Success); group 3 was injection molded and microwave polymerized (Acron MC); and group 4 was injection molded and microwave polymerized (Microbase). Intermolar width and changes in vertical dimension of occlusion were determined after polymerization and after storage in water for 28 days. Measurements in triplicate were made between points scribed on the second molar teeth with a traveling microscope (accurate to 0.005 mm). Vertical dimension of occlusion was measured between points scribed on the upper and lower members of an articulator by use of an internal micrometer (accurate to 0.05 mm). Data were analyzed by use of a 1-way analysis of variance with Tukey post-hoc contrasts (P <.05). Polymerization contractions (intermolar widths) for each group were: group 1, -0.24%; group 2, -0.27%; group 3, -0.35%; and group 4, -0.37%. The Microbase specimens had greater shrinkage than conventionally polymerized specimens, but there were no significant differences between the groups. All injection methods had a smaller postpolymerization increase in vertical dimension of occlusion (0.63 to 0.41 mm) than the conventional Trevalon control (0.74 mm), but only group 4 was significantly different (P<.004). After storage in water for 28 days, all specimens increased in vertical dimension of occlusion (0.10% to 0.16%) regardless of polymerization technique, but there were no significant differences between groups. Within the limitations of this study, injection molding resulted in a slightly smaller increase in vertical dimension of occlusion than conventional polymerization techniques, the difference being significant for Microbase compared with the conventional Trevalon control.
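
    The statistical design here (four independent groups compared with a one-way ANOVA) is straightforward to reproduce in SciPy. The sketch below uses invented synthetic data whose group means merely echo the contraction percentages quoted above; it is not the study's dataset:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Hypothetical polymerization-shrinkage data (%), 10 specimens per group;
    # means mimic the abstract's figures, the 0.05 SD is an assumption.
    g1 = rng.normal(-0.24, 0.05, 10)   # compression molded, conventional
    g2 = rng.normal(-0.27, 0.05, 10)   # injection molded, conventional (Success)
    g3 = rng.normal(-0.35, 0.05, 10)   # injection molded, microwave (Acron MC)
    g4 = rng.normal(-0.37, 0.05, 10)   # injection molded, microwave (Microbase)

    # One-way ANOVA across the four independent groups
    F, p = stats.f_oneway(g1, g2, g3, g4)
    ```

    Newer SciPy releases also provide `scipy.stats.tukey_hsd` for the Tukey post hoc pairwise contrasts the study reports after a significant omnibus F.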

  13. Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lu, Jian

    Technology for capturing panoramic (360-degree) three-dimensional information in a real environment has many applications in fields such as virtual and augmented reality, security, robot navigation, and so forth. In this study, we examine an acquisition device constructed of a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between the two sensors: a dense spatio-temporal volume from the camera and distance information from the laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama with spatial resolution equal to that of the original images acquired by the regular camera, and also estimate a dense panoramic depth-map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, to determine distance information robustly, we propose a data fusion algorithm embedded in an energy minimization framework that incorporates active depth measurements from the 2D laser range scanner and passive geometry reconstruction from the image sequence obtained with the CCD camera. Thereby, measurement precision and robustness can be improved beyond those available from conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner alone. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.

  14. Nongyrotropic electron orbits in collisionless magnetic reconnection

    NASA Astrophysics Data System (ADS)

    Zenitani, S.

    2016-12-01

    In order to study the inner workings of magnetic reconnection, NASA has recently launched the Magnetospheric MultiScale (MMS) spacecraft. It is expected to observe electron velocity distribution functions (VDFs) at high resolution in magnetotail reconnection sites in 2017. Since VDFs are outcomes of many particle orbits, it is important to understand the relation between electron orbits and VDFs. In this work, we study electron orbits and associated VDFs in the electron current layer in magnetic reconnection, by using a two-dimensional particle-in-cell (PIC) simulation. By analyzing millions of electron orbits, we discover several new orbit classes: (1) figure-eight-shaped regular orbits inside the super-Alfvenic electron jet, (2) noncrossing Speiser orbits that do not cross the midplane, (3) noncrossing regular orbits on the jet flanks, and (4) nongyrotropic electrons downstream of the jet termination region. Properties of these orbits are organized by a theory of particle orbits (Buchner & Zelenyi 1989 JGR). The noncrossing orbits are mediated by the polarization electric field (Hall electric field E_z) near the midplane. These orbits can be understood as electrostatic extensions of the conventional theory. Properties of the super-Alfvenic electron jet are attributed to the traditional Speiser-orbit electrons. On the other hand, the noncrossing electrons are the majority in number density in the jet flanks. This raises a serious question about our present understanding of the physics of collisionless magnetic reconnection, which assumes only crossing populations. We will also discuss the spatial distribution of energetic electrons and observational signatures of noncrossing electrons. Reference: Zenitani & Nagai (2016), submitted to Phys. Plasmas.

  15. Global regularity for a family of 3D models of the axi-symmetric Navier–Stokes equations

    NASA Astrophysics Data System (ADS)

    Hou, Thomas Y.; Liu, Pengfei; Wang, Fei

    2018-05-01

    We consider a family of three-dimensional models for the axi-symmetric incompressible Navier–Stokes equations. The models are derived by changing the strength of the convection terms in the axisymmetric Navier–Stokes equations written using a set of transformed variables. We prove the global regularity of the family of models in the case that the strength of convection is slightly stronger than that of the original Navier–Stokes equations, which demonstrates the potential stabilizing effect of convection.

  16. Paper-Based Textbooks with Audio Support for Print-Disabled Students.

    PubMed

    Fujiyoshi, Akio; Ohsawa, Akiko; Takaira, Takuya; Tani, Yoshiaki; Fujiyoshi, Mamoru; Ota, Yuko

    2015-01-01

    Utilizing invisible 2-dimensional codes and digital audio players with a 2-dimensional code scanner, we developed paper-based textbooks with audio support for students with print disabilities, called "multimodal textbooks." Multimodal textbooks can be read with the combination of the two modes: "reading printed text" and "listening to the speech of the text from a digital audio player with a 2-dimensional code scanner." Since multimodal textbooks look the same as regular textbooks and the price of a digital audio player is reasonable (about 30 euro), we think multimodal textbooks are suitable for students with print disabilities in ordinary classrooms.

  17. Fate of superconductivity in three-dimensional disordered Luttinger semimetals

    NASA Astrophysics Data System (ADS)

    Mandal, Ipsita

    2018-05-01

    Superconducting instability can occur in three-dimensional quadratic band crossing semimetals only at a finite coupling strength due to the vanishing of density of states at the quadratic band touching point. Since realistic materials are always disordered to some extent, we study the effect of short-ranged-correlated disorder on this superconducting quantum critical point using a controlled loop-expansion applying dimensional regularization. The renormalization group (RG) scheme allows us to determine the RG flows of the various interaction strengths and shows that disorder destroys the superconducting quantum critical point. In fact, the system exhibits a runaway flow to strong disorder.

  18. Evaluation of Computer Based Testing in lieu of Regular Examinations in Computer Literacy

    NASA Astrophysics Data System (ADS)

    Murayama, Koichi

    Because computer based testing (CBT) has many advantages compared with the conventional paper and pencil testing (PPT) examination method, CBT has begun to be used in various situations in Japan, such as in qualifying examinations and in the TOEFL. This paper describes the usefulness and the problems of CBT applied to a regular college examination. The regular computer literacy examinations for first-year students were held using CBT, and the results were analyzed. Responses to a questionnaire indicated many students accepted CBT with no unpleasantness and considered CBT a positive factor that improved their motivation to study. CBT also reduced the work of faculty in marking tests and processing the data.

  19. Iterative image reconstruction for PROPELLER-MRI using the nonuniform fast fourier transform.

    PubMed

    Tamhane, Ashish A; Anastasio, Mark A; Gui, Minzhi; Arfanakis, Konstantinos

    2010-07-01

    To investigate an iterative image reconstruction algorithm using the nonuniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Numerical simulations, as well as experiments on a phantom and a healthy human subject were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER, and compare it with that of conventional gridding. The trade-off between spatial resolution, signal to noise ratio, and image artifacts, was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased signal to noise ratio, reduced artifacts, for similar spatial resolution, compared with gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared with conventional gridding. (c) 2010 Wiley-Liss, Inc.
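
    The regularization-parameter trade-off described in this record can be sketched with a toy 1D analogue: an explicitly built nonuniform Fourier encoding matrix (a real implementation would apply the NUFFT as an operator for speed), a Tikhonov term lam, and conjugate gradient on the normal equations. The sampling pattern, sizes, and lam value are illustrative assumptions, not the paper's setup:

    ```python
    import numpy as np

    def cg(apply_A, b, n_iter=60):
        """Plain conjugate gradient for a Hermitian positive-definite operator."""
        x = np.zeros_like(b)
        r = b - apply_A(x)
        p = r.copy()
        rs = np.vdot(r, r).real
        for _ in range(n_iter):
            Ap = apply_A(p)
            alpha = rs / np.vdot(p, Ap).real
            x += alpha * p
            r -= alpha * Ap
            rs_new = np.vdot(r, r).real
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    rng = np.random.default_rng(3)
    n, m = 64, 96
    # jittered nonuniform k-space sample positions over the full band
    freqs = (np.arange(m) + rng.uniform(-0.5, 0.5, m)) / m - 0.5
    E = np.exp(-2j * np.pi * np.outer(freqs, np.arange(n)))  # encoding matrix
    x_true = np.zeros(n); x_true[20:40] = 1.0                # simple 1D "image"
    y = E @ x_true + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

    lam = 1.0   # regularization parameter: trades resolution against SNR
    normal_op = lambda v: E.conj().T @ (E @ v) + lam * v
    x_rec = cg(normal_op, E.conj().T @ y)                    # (E^H E + lam I) x = E^H y
    ```

    Larger lam suppresses noise amplification at the cost of bias (blurring), smaller lam does the reverse; this is the "certain range of values of the regularization parameter" the abstract refers to.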

  20. Iterative Image Reconstruction for PROPELLER-MRI using the NonUniform Fast Fourier Transform

    PubMed Central

    Tamhane, Ashish A.; Anastasio, Mark A.; Gui, Minzhi; Arfanakis, Konstantinos

    2013-01-01

    Purpose To investigate an iterative image reconstruction algorithm using the non-uniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping parallEL Lines with Enhanced Reconstruction) MRI. Materials and Methods Numerical simulations, as well as experiments on a phantom and a healthy human subject were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER, and compare it to that of conventional gridding. The trade-off between spatial resolution, signal to noise ratio, and image artifacts, was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. Results It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased SNR, reduced artifacts, for similar spatial resolution, compared to gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. Conclusion An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter the new reconstruction technique may provide PROPELLER images with improved image quality compared to conventional gridding. PMID:20578028

  1. Large three-dimensional photonic crystals based on monocrystalline liquid crystal blue phases.

    PubMed

    Chen, Chun-Wei; Hou, Chien-Tsung; Li, Cheng-Chang; Jau, Hung-Chang; Wang, Chun-Ta; Hong, Ching-Lang; Guo, Duan-Yi; Wang, Cheng-Yu; Chiang, Sheng-Ping; Bunning, Timothy J; Khoo, Iam-Choon; Lin, Tsung-Hsien

    2017-09-28

    Although there have been intense efforts to fabricate large three-dimensional photonic crystals in order to realize their full potential, the technologies developed so far are still beset with various material processing and cost issues. Conventional top-down fabrications are costly and time-consuming, whereas natural self-assembly and bottom-up fabrications often result in high defect density and limited dimensions. Here we report the fabrication of extraordinarily large monocrystalline photonic crystals by controlling the self-assembly processes which occur in unique phases of liquid crystals that exhibit three-dimensional photonic-crystalline properties, called liquid-crystal blue phases. In particular, we have developed a gradient-temperature technique that enables three-dimensional photonic crystals to grow to lateral dimensions of ~1 cm (~30,000 unit cells) and a thickness of ~100 μm (~300 unit cells). These giant single crystals exhibit extraordinarily sharp photonic bandgaps with high reflectivity, long-range periodicity in all dimensions, and well-defined lattice orientation.

  2. Multi-Dimensional, Inviscid Flux Reconstruction for Simulation of Hypersonic Heating on Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2009-01-01

    The quality of simulated hypersonic stagnation region heating on tetrahedral meshes is investigated by using a three-dimensional, upwind reconstruction algorithm for the inviscid flux vector. Two test problems are investigated: hypersonic flow over a three-dimensional cylinder with special attention to the uniformity of the solution in the spanwise direction and hypersonic flow over a three-dimensional sphere. The tetrahedral cells used in the simulation are derived from a structured grid where cell faces are bisected across the diagonal resulting in a consistent pattern of diagonals running in a biased direction across the otherwise symmetric domain. This grid is known to accentuate problems in both shock capturing and stagnation region heating encountered with conventional, quasi-one-dimensional inviscid flux reconstruction algorithms. Therefore the test problem provides a sensitive test for algorithmic effects on heating. This investigation is believed to be unique in its focus on three-dimensional, rotated upwind schemes for the simulation of hypersonic heating on tetrahedral grids. This study attempts to fill the void left by the inability of conventional (quasi-one-dimensional) approaches to accurately simulate heating in a tetrahedral grid system. Results show significant improvement in spanwise uniformity of heating with some penalty of ringing at the captured shock. Issues with accuracy near the peak shear location are identified and require further study.

  3. Regular three-dimensional presentations improve in the identification of surgical liver anatomy - a randomized study.

    PubMed

    Müller-Stich, Beat P; Löb, Nicole; Wald, Diana; Bruckner, Thomas; Meinzer, Hans-Peter; Kadmon, Martina; Büchler, Markus W; Fischer, Lars

    2013-09-25

    Three-dimensional (3D) presentations enhance the understanding of complex anatomical structures. However, it has been shown that two-dimensional (2D) "key views" of anatomical structures may suffice to improve spatial understanding. The impact of real 3D images (3Dr), visible only with 3D glasses, has not been examined yet. Contrary to 3Dr, regular 3D images apply techniques such as shadows and different grades of transparency to create the impression of 3D. This randomized study aimed to define the impact of both the addition of key views to CT images (2D+) and the use of 3Dr on the identification of liver anatomy in comparison with regular 3D presentations (3D). A computer-based teaching module (TM) was used. Medical students were randomized to three groups (2D+ or 3Dr or 3D) and asked to answer 11 anatomical questions and 4 evaluative questions. Both 3D groups had animated models of the human liver available to them which could be moved in all directions. 156 medical students (57.7% female) participated in this randomized trial. Students exposed to 3Dr and 3D performed significantly better than those exposed to 2D+ (p < 0.01, ANOVA). There were no significant differences between 3D and 3Dr and no significant gender differences (p > 0.1, t-test). Students randomized to 3D and 3Dr not only had significantly better results, but were also significantly faster in answering the 11 anatomical questions when compared to students randomized to 2D+ (p < 0.03, ANOVA). Whether or not "key views" were used had no significant impact on the number of correct answers (p > 0.3, t-test). This randomized trial confirms that regular 3D visualization improves the identification of liver anatomy.

  4. Gravitational catalysis of merons in Einstein-Yang-Mills theory

    NASA Astrophysics Data System (ADS)

    Canfora, Fabrizio; Oh, Seung Hun; Salgado-Rebolledo, Patricio

    2017-10-01

    We construct regular configurations of the Einstein-Yang-Mills theory in various dimensions. The gauge field is of meron-type: it is proportional to a pure gauge (with a suitable parameter λ determined by the field equations). The corresponding smooth gauge transformation cannot be deformed continuously to the identity. In the three-dimensional case we consider the inclusion of a Chern-Simons term into the analysis, allowing λ to be different from its usual value of 1/2. In four dimensions, the gravitating meron is a smooth Euclidean wormhole interpolating between different vacua of the theory. In five and higher dimensions smooth meron-like configurations can also be constructed by considering warped products of the three-sphere and lower-dimensional Einstein manifolds. In all cases merons (which on flat spaces would be singular) become regular due to the coupling with general relativity. This effect is named "gravitational catalysis of merons".

  5. On the local well-posedness and a Prodi-Serrin-type regularity criterion of the three-dimensional MHD-Boussinesq system without thermal diffusion

    NASA Astrophysics Data System (ADS)

    Larios, Adam; Pei, Yuan

    2017-07-01

    We prove a Prodi-Serrin-type global regularity condition for the three-dimensional Magnetohydrodynamic-Boussinesq system (3D MHD-Boussinesq) without thermal diffusion, in terms of only two velocity and two magnetic components. To the best of our knowledge, this is the first Prodi-Serrin-type criterion for such a 3D hydrodynamic system which is not fully dissipative, and indicates that such an approach may be successful on other systems. In addition, we provide a constructive proof of the local well-posedness of solutions to the fully dissipative 3D MHD-Boussinesq system, and also the fully inviscid, irresistive, non-diffusive MHD-Boussinesq equations. We note that, as a special case, these results include the 3D non-diffusive Boussinesq system and the 3D MHD equations. Moreover, they can be extended without difficulty to include the case of a Coriolis rotational term.

  6. Using the small alignment index chaos indicator to characterize the vibrational dynamics of a molecular system: LiNC-LiCN.

    PubMed

    Benitez, P; Losada, J C; Benito, R M; Borondo, F

    2015-10-01

    A study of the dynamical characteristics of the phase space corresponding to the vibrations of the LiNC-LiCN molecule using an analysis based on the small alignment index (SALI) is presented. SALI is a good indicator of chaos that can easily determine whether a given trajectory is regular or chaotic regardless of the dimensionality of the system, and can also provide a wealth of dynamical information when conveniently implemented. In two-dimensional (2D) systems SALI maps are computed as 2D phase space representations, where the SALI asymptotic values are represented in color scale. We show here how these maps provide full information on the dynamical phase space structure of the LiNC-LiCN system, even quantifying numerically the volume of the different zones of chaos and regularity as a function of the molecule excitation energy.
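
    The SALI computation itself is compact enough to sketch. The following toy version uses the 2D Chirikov standard map rather than the LiNC-LiCN vibrational Hamiltonian studied in the paper; the map, parameter values, and initial conditions are illustrative assumptions:

    ```python
    import numpy as np

    def sali(x, p, K, n_steps):
        """SALI along one orbit of the Chirikov standard map.

        Two deviation vectors are evolved with the tangent map and
        renormalized every step; SALI is the smaller of |v1 + v2| and
        |v1 - v2|.  It collapses to zero exponentially fast for chaotic
        orbits, but only slowly (a power law, in 2D maps) for regular ones.
        """
        v1 = np.array([1.0, 0.0])
        v2 = np.array([0.0, 1.0])
        for _ in range(n_steps):
            c = K * np.cos(x)
            J = np.array([[1.0 + c, 1.0],   # Jacobian of (x, p) -> (x', p')
                          [c, 1.0]])
            p = p + K * np.sin(x)           # standard map: p' = p + K sin x
            x = x + p                       #               x' = x + p'
            v1 = J @ v1; v1 /= np.linalg.norm(v1)
            v2 = J @ v2; v2 /= np.linalg.norm(v2)
        return min(np.linalg.norm(v1 + v2), np.linalg.norm(v1 - v2))

    sali_regular = sali(np.pi, 0.5, K=0.5, n_steps=200)   # orbit inside an island
    sali_chaotic = sali(2.0, 1.0, K=6.0, n_steps=200)     # strongly chaotic orbit
    ```

    Computing this asymptotic value over a grid of initial conditions and color-coding it gives exactly the kind of 2D SALI map the abstract describes, with chaotic zones standing out as regions where SALI has collapsed to machine zero.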

  7. Semi-regular remeshing based trust region spherical geometry image for 3D deformed mesh used MLWNN

    NASA Astrophysics Data System (ADS)

    Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Ben Amar, Chokri

    2017-03-01

    Triangular surfaces are now widely used for modeling three-dimensional objects. Since these models have very high resolution and the mesh geometry is often very dense, it is necessary to remesh such objects to reduce their complexity, and the mesh quality (connectivity regularity) must be improved. In this paper, we review the main state-of-the-art methods for semi-regular remeshing, given that semi-regular remeshing is mainly relevant for wavelet-based compression. We then present our remeshing method, based on a trust-region spherical geometry image, which provides a good 3D mesh compression scheme and is used to deform 3D meshes based on a Multi-library Wavelet Neural Network (MLWNN) structure. Experimental results show that the progressive remeshing algorithm is capable of obtaining more compact representations and semi-regular objects, and yields efficient compression with a minimal set of features, giving a good 3D deformation scheme.

  8. Advantages of multigrid methods for certifying the accuracy of PDE modeling

    NASA Technical Reports Server (NTRS)

    Forester, C. K.

    1981-01-01

    Numerical techniques for assessing and certifying the accuracy of the modeling of partial differential equations (PDE) to the user's specifications are analyzed. Examples of the certification process with conventional techniques are summarized for the three dimensional steady state full potential and the two dimensional steady Navier-Stokes equations using fixed grid methods (FG). The advantages of the Full Approximation Storage (FAS) scheme of the multigrid technique of A. Brandt compared with the conventional certification process of modeling PDE are illustrated in one dimension with the transformed potential equation. Inferences are drawn for how MG will improve the certification process of the numerical modeling of two and three dimensional PDE systems. Elements of the error assessment process that are common to FG and MG are analyzed.
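
    The abstract contrasts multigrid with fixed-grid solvers but gives no algorithm. As a generic illustration of the coarse-grid correction idea underlying multigrid (a plain two-grid cycle for the 1D Poisson equation, not Brandt's FAS scheme and not the paper's solver; all sizes and sweep counts are illustrative choices):

    ```python
    import numpy as np

    def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
        """Weighted-Jacobi smoothing for -u'' = f, zero Dirichlet boundaries."""
        for _ in range(sweeps):
            up = np.pad(u, 1)
            u = (1 - w) * u + (w / 2) * (up[:-2] + up[2:] + h * h * f)
        return u

    def residual(u, f, h):
        up = np.pad(u, 1)
        return f - (2 * u - up[:-2] - up[2:]) / (h * h)

    def two_grid_cycle(u, f, h):
        u = jacobi(u, f, h, sweeps=3)                   # pre-smooth
        r = residual(u, f, h)
        rc = 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]  # full weighting
        nc = rc.size
        Ac = (2 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / (2 * h) ** 2
        ec = np.linalg.solve(Ac, rc)                    # exact coarse solve
        e = np.zeros_like(u)                            # interpolate back to fine grid
        e[1::2] = ec
        ep = np.pad(ec, 1)
        e[0::2] = 0.5 * (ep[:-1] + ep[1:])
        return jacobi(u + e, f, h, sweeps=3)            # post-smooth

    N = 64                                              # intervals; 63 interior points
    h = 1.0 / N
    xs = np.linspace(h, 1 - h, N - 1)
    f = np.sin(np.pi * xs)
    u = np.zeros(N - 1)
    res_hist = [np.linalg.norm(residual(u, f, h))]
    for _ in range(8):
        u = two_grid_cycle(u, f, h)
        res_hist.append(np.linalg.norm(residual(u, f, h)))
    ```

    The smoother kills oscillatory error, the coarse grid kills smooth error, so the residual drops by a grid-independent factor per cycle; a full multigrid (or FAS) code recurses at the coarse-solve line instead of solving directly.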

  9. 3D ultrasound imaging in image-guided intervention.

    PubMed

    Fenster, Aaron; Bax, Jeff; Neshat, Hamid; Cool, Derek; Kakani, Nirmal; Romagnoli, Cesare

    2014-01-01

    Ultrasound imaging is used extensively in diagnosis and image-guidance for interventions of human diseases. However, conventional 2D ultrasound suffers from limitations since it can only provide 2D images of 3-dimensional structures in the body. Thus, measurement of organ size is variable, and guidance of interventions is limited, as the physician is required to mentally reconstruct the 3-dimensional anatomy using 2D views. Over the past 20 years, a number of 3-dimensional ultrasound imaging approaches have been developed. We have developed an approach that is based on a mechanical mechanism to move any conventional ultrasound transducer while 2D images are collected rapidly and reconstructed into a 3D image. In this presentation, 3D ultrasound imaging approaches will be described for use in image-guided interventions.

  10. Three-dimensional models of conventional and vertical junction laser-photovoltaic energy converters

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Walker, Gilbert H.

    1988-01-01

    Three-dimensional models of both conventional planar junction and vertical junction photovoltaic energy converters have been constructed. The models are a set of linear partial differential equations and take into account many photoconverter design parameters. The model is applied to Si photoconverters; however, the model may be used with other semiconductors. When used with a Nd laser, the conversion efficiency of the Si vertical junction photoconverter is 47 percent, whereas the efficiency for the conventional planar Si photoconverter is only 17 percent. A parametric study of the Si vertical junction photoconverter is then done in order to describe the optimum converter for use with the 1.06-micron Nd laser. The efficiency of this optimized vertical junction converter is 44 percent at 1 kW/sq cm.

  11. Robust dynamic myocardial perfusion CT deconvolution using adaptive-weighted tensor total variation regularization

    NASA Astrophysics Data System (ADS)

    Gong, Changfei; Zeng, Dong; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua

    2016-03-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for diagnosis and risk stratification of coronary artery disease by assessing the myocardial perfusion hemodynamic maps (MPHM). Meanwhile, the repeated scanning of the same region potentially results in a relatively large radiation dose to patients. In this work, we present a robust MPCT deconvolution algorithm with adaptive-weighted tensor total variation regularization to estimate the residue function accurately in the low-dose context, termed `MPD-AwTTV'. More specifically, the AwTTV regularization takes into account the anisotropic edge property of the MPCT images compared with the conventional total variation (TV) regularization, which can mitigate the drawbacks of TV regularization. Subsequently, an effective iterative algorithm was adopted to minimize the associated objective function. Experimental results on a modified XCAT phantom demonstrated that the present MPD-AwTTV algorithm outperforms other existing deconvolution algorithms in terms of noise-induced artifact suppression, edge detail preservation, and accurate MPHM estimation.

  12. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

    This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have a significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for images is developed to recover the edges of color images and reduce color artifacts. In addition, by using the color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.

  13. Distributed Wavelet Transform for Irregular Sensor Network Grids

    DTIC Science & Technology

    2005-01-01

    implement it in a multi-hop, wireless sensor network; and illustrate with several simulations. The new transform performs on par with conventional wavelet methods in a head-to-head comparison on a regular grid of sensor nodes.

  14. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. 
We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  15. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    PubMed

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

    An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and the relationships between the true local field and the estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP, respectively. The REV-SHARP result exhibited the highest correlation between the true local field and the estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In the human experiments, no obvious errors due to artifacts were present in REV-SHARP. The proposed REV-SHARP is a new method that combines a variable spherical kernel size with Tikhonov regularization. This technique might enable more accurate background field removal and help achieve better QSM accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
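The core numerical step shared by RESHARP and REV-SHARP is a Tikhonov-regularized deconvolution. A minimal 1D sketch, with a toy smoothing kernel standing in for the spherical mean value operator and an illustrative regularization weight (none of the values are from the paper):

```python
import numpy as np

# Solve min_x ||A x - b||^2 + lam ||x||^2 via the normal equations:
# (A^T A + lam I) x = A^T b.
rng = np.random.default_rng(0)
n = 64
x_true = np.zeros(n)
x_true[20:25] = 1.0                                 # sharp local-field feature

kernel = np.array([0.25, 0.5, 0.25])                # toy circulant smoothing operator
A = np.zeros((n, n))
for shift, wgt in zip((-1, 0, 1), kernel):
    A[np.arange(n), (np.arange(n) + shift) % n] = wgt

b = A @ x_true + 0.01 * rng.standard_normal(n)      # noisy measured field

lam = 1e-2                                          # Tikhonov weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
rel_err = float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The penalty term keeps the near-singular frequencies of the kernel from amplifying measurement noise, at the cost of a small bias in the recovered sharp feature.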

  16. Cubic map algebra functions for spatio-temporal analysis

    USGS Publications Warehouse

    Mennis, J.; Viger, R.; Tomlin, C.D.

    2005-01-01

    We propose an extension of map algebra to three dimensions for spatio-temporal data handling. This approach yields a new class of map algebra functions that we call "cube functions." Whereas conventional map algebra functions operate on data layers representing two-dimensional space, cube functions operate on data cubes representing two-dimensional space over a third-dimensional period of time. We describe the prototype implementation of a spatio-temporal data structure and selected cube function versions of conventional local, focal, and zonal map algebra functions. The utility of cube functions is demonstrated through a case study analyzing the spatio-temporal variability of remotely sensed, southeastern U.S. vegetation character over various land covers and during different El Niño/Southern Oscillation (ENSO) phases. Like conventional map algebra, the application of cube functions may demand significant data preprocessing when integrating diverse data sets, and is subject to limitations related to data storage and algorithm performance. Solutions to these issues include extending data compression and computing strategies for calculations on very large data volumes to spatio-temporal data handling.
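In the same spirit, a local map algebra function extends to a cube by operating cell-by-cell on a (time, row, column) array; a toy numpy illustration with made-up dimensions and values:

```python
import numpy as np

# A space-time "cube": 2 time steps of a 3x3 raster grid.
cube = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)

temporal_mean = cube.mean(axis=0)   # collapses the time axis into one 3x3 layer
local_sum = cube + 1.0              # a local op applied independently to every cell
```

Focal and zonal cube functions follow the same pattern, replacing the cell-wise operation with a neighborhood or zone aggregate over the three-dimensional array.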

  17. The Evolution of Photography and Three-Dimensional Imaging in Plastic Surgery.

    PubMed

    Weissler, Jason M; Stern, Carrie S; Schreiber, Jillian E; Amirlak, Bardia; Tepper, Oren M

    2017-03-01

    Throughout history, the technological advancements of conventional clinical photography in plastic surgery have not only refined the methods available to the plastic surgeon, but have invigorated the profession through technology. The technology of the once traditional two-dimensional photograph has since been revolutionized and refashioned to incorporate novel applications, which have since become the standard in clinical photography. Contrary to traditional standardized two-dimensional photographs, three-dimensional photography provides the surgeon with an invaluable volumetric and morphologic analysis by demonstrating true surface dimensions both preoperatively and postoperatively. Clinical photography has served as one of the fundamental objective means by which plastic surgeons review outcomes; however, the newer three-dimensional technology has been primarily used to enhance the preoperative consultation with surgical simulations. The authors intend to familiarize readers with the notion that three-dimensional photography extends well beyond its marketing application during surgical consultation. For the cosmetic surgeon, as the application of three-dimensional photography continues to mature in facial plastic surgery, it will continue to bypass the dated conventional photographic methods plastic surgeons once relied on. This article reviews a paradigm shift and provides a historical review of the fascinating evolution of photography in plastic surgery by highlighting the clinical utility of three-dimensional photography as an adjunct to plastic and reconstructive surgery practices. As three-dimensional photographic technology continues to evolve, its application in facial plastic surgery will provide an opportunity for a new objective standard in plastic surgery.

  18. Task-Driven Tube Current Modulation and Regularization Design in Computed Tomography with Penalized-Likelihood Reconstruction.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2016-02-01

    This work applies task-driven optimization to design CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction was also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information about the imaging task to optimize imaging performance in terms of detectability index ( d' ). This framework leverages a theoretical model based on the implicit function theorem and Fourier approximations to predict the local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond those achievable with conventional acquisition and reconstruction.

  19. Long-Term Acupuncture Therapy for Low-Income Older Adults with Multimorbidity: A Qualitative Study of Patient Perceptions.

    PubMed

    Pagones, Rachel; Lee, Janet L; Hurst, Samantha

    2018-02-01

    Multimorbidity is common, but often poorly managed, among the rapidly growing population of older adults. The existing guidelines followed by physicians frequently lead to polypharmacy and a complex treatment burden. The objective of this study was to explore what benefits are perceived by older adults with multimorbidity as a result of long-term, regular acupuncture treatment. A qualitative design with inductive thematic analysis of semistructured interviews. Participants were recruited from a no-cost, college-affiliated acupuncture clinic for low-income older adults in an urban, racially/ethnically diverse neighborhood in southern California. Fifteen patients aged 60 years and older suffering from at least two chronic conditions. Five themes were identified: (1) mind-body effects, (2) the enhanced therapeutic alliance, (3) what they liked best, (4) the conventional healthcare system, and (5) importance of a regular schedule. A notable mind-body effect, reported by a substantial number of participants, was medication reduction. Participants also cited changes in mood, energy, and well-being as important benefits. In addition, they voiced widespread dissatisfaction with conventional healthcare. Keeping up regular treatments as a way to deal with new complaints and encourage a healthier lifestyle was seen as an important aspect of care at the clinic. This cohort of older adults with multimorbidity valued acupuncture as a way to reduce medication as well as a means to maintain physical and mental health. In addition, they developed a strong trust in the clinic's ability to support the totality of their health as individuals, which they contrasted to the specialized and impersonal approach of the conventional medical clinic.

  20. Conventional and Advanced Separations in Mass Spectrometry-Based Metabolomics: Methodologies and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heyman, Heino M.; Zhang, Xing; Tang, Keqi

    2016-02-16

    Metabolomics is the quantitative analysis of all metabolites in a given sample. Due to the chemical complexity of the metabolome, optimal separations are required for comprehensive identification and quantification of sample constituents. This chapter provides an overview of both conventional and advanced separation methods in practice for reducing the complexity of metabolite extracts delivered to the mass spectrometer, and covers gas chromatography (GC), liquid chromatography (LC), capillary electrophoresis (CE), supercritical fluid chromatography (SFC), and ion mobility spectrometry (IMS) separation techniques coupled with mass spectrometry (MS), in both uni-dimensional and multi-dimensional approaches.

  1. Analysis of a Light Cross Country Combat Vehicle - The Cobra

    DTIC Science & Technology

    1951-06-01

    Regular tracks, based on the conventional design trend defined by the close spacing of track links... (Annex 2). The complexity of conventional steering mechanisms is high. Planetary gears, hydraulic controls, brakes, and other accessories... In straight running, the joint is completely closed and is held in that position by the two hydraulic cylinders.

  2. Shaping highly regular glass architectures: A lesson from nature

    PubMed Central

    Schoeppler, Vanessa; Reich, Elke; Vacelet, Jean; Rosenthal, Martin; Pacureanu, Alexandra; Rack, Alexander; Zaslansky, Paul; Zolotoyabko, Emil; Zlotnikov, Igor

    2017-01-01

    Demospongiae is a class of marine sponges that mineralize skeletal elements, the glass spicules, made of amorphous silica. The spicules exhibit a diversity of highly regular three-dimensional branched morphologies that are a paradigm example of symmetry in biological systems. Current glass shaping technology requires treatment at high temperatures. In this context, the mechanism by which glass architectures are formed by living organisms remains a mystery. We uncover the principles of spicule morphogenesis. During spicule formation, the process of silica deposition is templated by an organic filament. It is composed of enzymatically active proteins arranged in a mesoscopic hexagonal crystal-like structure. In analogy to synthetic inorganic nanocrystals that show high spatial regularity, we demonstrate that the branching of the filament follows specific crystallographic directions of the protein lattice. In correlation with the symmetry of the lattice, filament branching determines the highly regular morphology of the spicules on the macroscale. PMID:29057327

  3. Sea of Majorana fermions from pseudo-scalar superconducting order in three dimensional Dirac materials.

    PubMed

    Salehi, Morteza; Jafari, S A

    2017-08-15

    We suggest that spin-singlet pseudo-scalar s-wave superconducting pairing creates a two-dimensional sea of Majorana fermions on the surface of three-dimensional Dirac superconductors (3DDS). This pseudo-scalar superconducting order parameter Δ5, in competition with the scalar Dirac mass m, leads to a topological phase transition due to band inversion. We find that a perfect Andreev-Klein reflection is guaranteed by the presence of anomalous Andreev reflection along with the conventional one. This effect manifests itself in a resonant peak of the differential conductance. Furthermore, the Josephson current of the Δ5|m|Δ5 junction in the presence of anomalous Andreev reflection is fractional, with a 4π period. Our finding suggests another search area for the condensed matter realization of Majorana fermions beyond the vortex core of p-wave superconductors. The required Δ5 pairing can be extrinsically induced by a conventional s-wave superconductor in a three-dimensional Dirac material (3DDM).

  4. Using three-dimensional silicone "boots" to solve complex remedial design problems in curtain walls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Y.J.

    1998-12-31

    Stick system curtain wall leak problems are frequently caused by water entry at the splice joints of the curtain wall frame and failure of the internal metal joinery seals. Remedial solutions involving occupied buildings inevitably face the multiple constraints of existing construction and business operations not present during the original curtain wall construction. In most cases, even partial disassembly of the curtain wall for internal seal repairs is not feasible. Remedial solutions which must be executed from the exterior of the curtain wall often involve wet-applied or preformed sealant tape bridge joints. However, some of the more complex joints cannot be repaired effectively or economically with the conventional bridge joint. Fortunately, custom fabricated three-dimensional preformed sealant boots are becoming available to address these situations. This paper discusses the design considerations and the selective use of three-dimensional preformed boots in sealing complex joint geometries that could not be sealed effectively with the conventional two-dimensional bridge joint.

  5. Bose-Einstein condensation in chains with power-law hoppings: Exact mapping on the critical behavior in d-dimensional regular lattices.

    PubMed

    Dias, W S; Bertrand, D; Lyra, M L

    2017-06-01

    Recent experimental progress on the realization of quantum systems with highly controllable long-range interactions has impelled the study of quantum phase transitions in low-dimensional systems with power-law couplings. Long-range couplings mimic higher-dimensional effects in several physical contexts. Here, we provide the exact relation between the spectral dimension d at the band bottom and the exponent α that tunes the range of power-law hoppings of a one-dimensional ideal lattice Bose gas. We also develop a finite-size scaling analysis to obtain some relevant critical exponents and the critical temperature of the BEC transition. In particular, an irrelevant dangerous scaling field has to be taken into account when the hopping range is sufficiently large to make the effective dimensionality d>4.

  6. Bose-Einstein condensation in chains with power-law hoppings: Exact mapping on the critical behavior in d -dimensional regular lattices

    NASA Astrophysics Data System (ADS)

    Dias, W. S.; Bertrand, D.; Lyra, M. L.

    2017-06-01

    Recent experimental progress on the realization of quantum systems with highly controllable long-range interactions has impelled the study of quantum phase transitions in low-dimensional systems with power-law couplings. Long-range couplings mimic higher-dimensional effects in several physical contexts. Here, we provide the exact relation between the spectral dimension d at the band bottom and the exponent α that tunes the range of power-law hoppings of a one-dimensional ideal lattice Bose gas. We also develop a finite-size scaling analysis to obtain some relevant critical exponents and the critical temperature of the BEC transition. In particular, an irrelevant dangerous scaling field has to be taken into account when the hopping range is sufficiently large to make the effective dimensionality d > 4.

  7. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
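Treating the ℓ1 regularization path as a model space can be pictured with a plain lasso solved by proximal gradient descent (ISTA): as the penalty grows, the fitted model becomes sparser. The data, penalty grid, and solver settings below are illustrative, not taken from the paper.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize (1/2n)||y - X b||^2 + lam * ||b||_1 by proximal gradient (ISTA)."""
    n, p = X.shape
    beta = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2            # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

rng = np.random.default_rng(1)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]                    # 3 active variables out of 10
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Sweeping lambda traces the regularization path: larger penalty, sparser model.
path = {lam: lasso_ista(X, y, lam) for lam in (0.01, 0.5, 5.0)}
sparsity = {lam: int(np.sum(np.abs(b) > 1e-6)) for lam, b in path.items()}
```

Each point on the path corresponds to one candidate sparsity pattern, which is exactly the set of models the BMA composition technique above averages over.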

  8. Apparently noninvariant terms of nonlinear sigma models in lattice perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harada, Koji; Hattori, Nozomu; Kubo, Hirofumi

    2009-03-15

    Apparently noninvariant terms (ANTs) that appear in loop diagrams for nonlinear sigma models are revisited in lattice perturbation theory. The calculations have been done mostly with dimensional regularization so far. In order to establish that the existence of ANTs is independent of the regularization scheme, and of the potential ambiguities in the definition of the Jacobian of the change of integration variables from group elements to 'pion' fields, we employ lattice regularization, in which everything (including the Jacobian) is well defined. We show explicitly that lattice perturbation theory produces ANTs in the four-point functions of the pion fields at one loop, and that the Jacobian does not play an important role in generating ANTs.

  9. Retaining both discrete and smooth features in 1D and 2D NMR relaxation and diffusion experiments

    NASA Astrophysics Data System (ADS)

    Reci, A.; Sederman, A. J.; Gladden, L. F.

    2017-11-01

    A new method of regularization of 1D and 2D NMR relaxation and diffusion experiments is proposed and a robust algorithm for its implementation is introduced. The new form of regularization, termed the Modified Total Generalized Variation (MTGV) regularization, offers a compromise between distinguishing discrete and smooth features in the reconstructed distributions. The method is compared to the conventional method of Tikhonov regularization and the recently proposed method of L1 regularization, when applied to simulated data of 1D spin-lattice relaxation, T1, 1D spin-spin relaxation, T2, and 2D T1-T2 NMR experiments. A range of simulated distributions composed of two lognormally distributed peaks was studied. The distributions differed with regard to the variance of the peaks, which was chosen to investigate a range of distributions containing only discrete, only smooth, or both features in the same distribution. Three different signal-to-noise ratios were studied: 2000, 200 and 20. A new metric is proposed to compare the distributions reconstructed by the different regularization methods with the true distributions. The metric is designed to penalise reconstructed distributions which show artefact peaks. Based on this metric, MTGV regularization performs better than Tikhonov and L1 regularization in all cases except when the distribution is known to comprise only discrete peaks, in which case L1 regularization is slightly more accurate than MTGV regularization.
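As a reference point for the comparison above, conventional Tikhonov regularization of a 1D relaxation experiment amounts to a penalized linear inversion of a multiexponential kernel. A hedged numpy sketch (grids, noise level, and alpha are assumptions for the demo; real implementations usually also enforce nonnegativity, e.g. via NNLS):

```python
import numpy as np

# Forward model: s(t) = sum_j f(T2_j) * exp(-t / T2_j) + noise.
# Tikhonov estimate: f_hat = argmin ||K f - s||^2 + alpha * ||f||^2.
t = np.linspace(0.01, 3.0, 200)                     # acquisition times (s)
T2 = np.logspace(-2, 1, 60)                         # candidate T2 grid (s)
K = np.exp(-t[:, None] / T2[None, :])               # multiexponential kernel

# One lognormal peak centered at T2 = 0.3 s (illustrative "true" distribution).
f_true = np.exp(-0.5 * ((np.log(T2) - np.log(0.3)) / 0.2) ** 2)
f_true /= f_true.sum()

rng = np.random.default_rng(2)
s = K @ f_true + 1e-3 * rng.standard_normal(t.size) # noisy decay signal

alpha = 1e-2                                        # regularization weight
f_hat = np.linalg.solve(K.T @ K + alpha * np.eye(T2.size), K.T @ s)
peak_T2 = float(T2[np.argmax(f_hat)])
```

Because the exponential kernel is severely ill-conditioned, the penalty term is what makes the inversion stable; the smoothing it induces is precisely the behavior the MTGV penalty is designed to trade off against sharp, discrete features.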

  10. Information yield: a comparison of Kodak T-Mat G, Ortho L and RP X-Omat films.

    PubMed

    Miles, D A; Van Dis, M L; Peterson, M G

    1989-02-01

    The information yield from two rare-earth screen-film combinations, Lanex Regular/T-Mat G (TMG) and Lanex Regular/Ortho L (OL), has been compared with that from a conventional calcium-tungstate combination, X-Omatic Regular/RP X-Omat (XRP), by means of perceptibility curves generated from an aluminum test object. The TMG and OL systems were faster than the XRP, and the OL had the widest latitude. The maximum number of details perceived was similar for all three systems. The results support the suggestion that the TMG and OL systems permit more information to be perceived than XRP and that the newer imaging systems do not lose information despite their increased speed.

  11. Combining of ETHOS Operating Ergonomic Platform, Three-dimensional Laparoscopic Camera, and Radius Surgical System Manipulators Improves Ergonomy in Urologic Laparoscopy: Comparison with Conventional Laparoscopy and da Vinci in a Pelvi Trainer.

    PubMed

    Tokas, Theodoros; Gözen, Ali Serdar; Avgeris, Margaritis; Tschada, Alexandra; Fiedler, Marcel; Klein, Jan; Rassweiler, Jens

    2017-10-01

    Posture, vision, and instrumentation limitations are the main predicaments of conventional laparoscopy. To combine the ETHOS surgical chair, the three-dimensional laparoscope, and the Radius Surgical System manipulators, and to compare the system with conventional laparoscopy and da Vinci in terms of task completion times and discomfort. Fifteen trainees performed the three main laparoscopic suturing tasks of the Heilbronn training program (IV: simulation of dorsal venous complex suturing; V: circular suturing of a tubular structure; and VI: urethrovesical anastomosis) in a pelvi trainer. The tasks were performed conventionally, utilizing the three devices, and robotically. Task completion times were recorded and surgeon discomfort was evaluated using questionnaires. Task completion times were compared using the nonparametric Wilcoxon signed rank test and ergonomic scores were compared using the Pearson chi-square test. The use of the full laparoscopic set (ETHOS chair, three-dimensional laparoscopic camera, Radius Surgical System needle holders) resulted in a significant improvement in the completion times of the three tested tasks compared with conventional laparoscopy (p<0.001), and in times similar to those of da Vinci surgery. After completing Tasks IV, V, and VI conventionally, 12 (80%), 13 (86.7%), and 13 (86.7%) of the 15 trainees, respectively, reported heavy total discomfort. The full laparoscopic system nullified heavy discomfort for Tasks IV and V and minimized it (6.7%) for the most demanding Task VI. Especially for Task VI, all trainees gained benefit from using the system in terms of task completion times and discomfort. The limited robotic experience of the trainees and the subjectivity of the questionnaire are potential limitations. The ergonomic laparoscopic system offers significantly shorter task completion times and better ergonomy than conventional laparoscopy. Furthermore, it demonstrates results comparable to robotic surgery.
The study was conducted in a pelvi trainer and no patients were recruited. Copyright © 2016 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  12. Random noise attenuation of non-uniformly sampled 3D seismic data along two spatial coordinates using non-equispaced curvelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi

    2018-04-01

    The attenuation of random noise is important for improving the signal to noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making the conventional methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. Firstly, the denoising method is performed for every time slice extracted from the 3D noisy data along the source and receiver directions, then the 2D non-equispaced fast Fourier transform (NFFT) is introduced in the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated by using the inversion algorithm of the spectral projected-gradient for ℓ1-norm problems. Then local threshold factors are chosen for the uniform curvelet coefficients for each decomposition scale, and effective curvelet coefficients are obtained respectively for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. The examples for synthetic data and real data reveal the effectiveness of the proposed approach in applications to noise attenuation for non-uniformly sampled data compared with the conventional FDCT method and wavelet transformation.
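The scale-by-scale thresholding at the heart of the method can be illustrated with a much simpler stand-in: an orthogonal Haar wavelet on a uniformly sampled 1D trace, with a hard threshold on the detail coefficients. The curvelet frame, non-equispaced sampling, and local per-scale thresholds of the paper are all replaced by toy choices here.

```python
import numpy as np

def haar_forward(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)            # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)            # detail coefficients
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(3)
n = 256
clean = np.sin(2 * np.pi * np.arange(n) / 32)       # smooth "signal"
noisy = clean + 0.3 * rng.standard_normal(n)

a, d = haar_forward(noisy)
thr = 0.3 * np.sqrt(2 * np.log(n))                  # universal threshold (sigma known)
d_hat = np.where(np.abs(d) > thr, d, 0.0)           # keep only strong details
denoised = haar_inverse(a, d_hat)

snr_gain = float(np.linalg.norm(noisy - clean) / np.linalg.norm(denoised - clean))
```

The transform-threshold-inverse pattern is the same; the NFDCT contribution above is to make the forward and inverse transforms well defined when the samples do not lie on a uniform grid.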

  13. Toward a constructive physics

    NASA Astrophysics Data System (ADS)

    Noyes, H. P.; Gefwert, C.; Manthey, M. J.

    1983-06-01

    The discretization of physics which has occurred thanks to the advent of quantum mechanics has replaced the continuum standards of time, length, and mass which brought physics to maturity by counting. The standards (arbitrary in the sense of conventional dimensional analysis) were replaced by three dimensional constants: the limiting velocity c, the unit of action h, and either a reference mass (e.g., m_p) or a coupling constant (e.g., G, related to the mass scale by ħc/(2πGm_p²) ≈ 1.7 × 10³⁸). Once these physical and experimental reference standards are accepted, the conventional approach is to connect physics to mathematics by means of dimensionless ratios. A program for physics which will meet these rigid criteria while preserving, in so far as possible, the successes that conventional physics has already achieved is outlined.
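The dimensionless number quoted above can be checked directly from standard values of the constants (CODATA-style SI values, not taken from the paper):

```python
# hbar*c / (G*m_p^2) = h*c / (2*pi*G*m_p^2): the inverse gravitational
# coupling of two protons, a pure number of order 10^38.
hbar = 1.054_571_8e-34      # reduced Planck constant, J s
c = 2.997_924_58e8          # speed of light, m/s
G = 6.674_30e-11            # Newton constant, m^3 kg^-1 s^-2
m_p = 1.672_621_9e-27       # proton mass, kg

ratio = hbar * c / (G * m_p ** 2)   # dimensionless
```
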

  14. Schatten Matrix Norm Based Polarimetric SAR Data Regularization Application over Chamonix Mont-Blanc

    NASA Astrophysics Data System (ADS)

    Le, Thu Trang; Atto, Abdourrahmane M.; Trouve, Emmanuel

    2013-08-01

    The paper addresses the filtering of Polarimetric Synthetic Aperture Radar (PolSAR) images. The filtering strategy is based on a regularizing cost function associated with matrix norms called the Schatten p-norms. These norms apply to the matrix singular values. The proposed approach is illustrated on scattering and coherency matrices of RADARSAT-2 PolSAR images over the Chamonix Mont-Blanc site. Several p values of the Schatten p-norms are surveyed and their capabilities in filtering PolSAR images are assessed in comparison with conventional strategies for filtering PolSAR data.
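For reference, the Schatten p-norm penalizing the singular values can be written in a few lines of numpy (the test matrix is arbitrary):

```python
import numpy as np

def schatten_norm(M, p):
    """(sum_i sigma_i^p)^(1/p) over the singular values sigma_i of M."""
    s = np.linalg.svd(M, compute_uv=False)
    return float(np.sum(s ** p) ** (1.0 / p))

M = np.array([[3.0, 0.0], [0.0, 4.0]])      # singular values 4 and 3

nuclear = schatten_norm(M, 1)               # p=1: nuclear norm = 7
frobenius = schatten_norm(M, 2)             # p=2: Frobenius norm = 5
```

p = 1 (the nuclear norm) promotes low-rank solutions, while p = 2 recovers the familiar Frobenius norm, which is why sweeping p changes the character of the regularization.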

  15. Comparison of Bolton analysis and Little’s irregularity index on laser scanned three-dimensional digital study models with conventional study models

    NASA Astrophysics Data System (ADS)

    Kurnia, H.; Noerhadi, N. A. I.

    2017-08-01

    Three-dimensional digital study models were introduced following advances in digital technology. This study was carried out to assess the reliability of digital study models scanned by a newly assembled laser scanning device. The aim of this study was to compare the digital study models with conventional models. Twelve sets of dental impressions were taken from patients with mild-to-moderate crowding. The impressions were taken twice, one with alginate and the other with polyvinylsiloxane. The alginate impressions were made into conventional models, and the polyvinylsiloxane impressions were scanned to produce digital models. The mesiodistal tooth width and Little’s irregularity index (LII) were measured manually with digital calipers on the conventional models and digitally on the digital study models. Bolton analysis was performed on each set of study models. Each method was carried out twice to check for intra-observer variability. The reproducibility (comparison of the methods) was assessed using independent-sample t-tests. The mesiodistal tooth width between conventional and digital models did not significantly differ (p > 0.05). Independent-sample t-tests did not identify statistically significant differences for Bolton analysis and LII (p = 0.603 for Bolton and p = 0.894 for LII). The measurements of the digital study models are as accurate as those of the conventional models.
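    The reproducibility comparison above boils down to an independent-samples t-test on the two sets of measurements. A hedged SciPy sketch, using made-up width values (twelve illustrative numbers each, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical mesiodistal tooth widths (mm) from the two model types;
# twelve illustrative values each, NOT the study's measurements.
conventional = np.array([7.1, 6.9, 7.3, 7.0, 7.2, 6.8, 7.1, 7.0, 6.9, 7.2, 7.1, 7.0])
digital      = np.array([7.0, 7.0, 7.2, 7.1, 7.1, 6.9, 7.2, 6.9, 7.0, 7.1, 7.2, 7.0])

# Independent-samples t-test: a large p-value means no detectable
# systematic difference between the two measurement methods.
t_stat, p_value = stats.ttest_ind(conventional, digital)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

    A p-value above 0.05, as reported in the study, is what supports the conclusion that the digital models measure as accurately as the conventional ones.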

  16. Fabrication and Characterization of Three Dimensional Photonic Crystals Generated by Multibeam Interference Lithography

    DTIC Science & Technology

    2009-01-01

    and J. A. Lewis, "Microperiodic structures - Direct writing of three-dimensional webs ," Nature, vol. 428, pp. 386-386, 2004. [9] M. Campbell, D. N...of Applied Physics Part 1-Regular Papers Brief Communications & Review Papers , vol. 44, pp. 6355-6367, 2005. [75] P. Cloetens, W. Ludwig, J... paper screen on the sample holder and marking the beam position. If the central beam is properly aligned, the spot on the screen remains at the

  17. The hydrogen atom in D = 3 - 2ɛ dimensions

    NASA Astrophysics Data System (ADS)

    Adkins, Gregory S.

    2018-06-01

    The nonrelativistic hydrogen atom in D = 3 - 2ɛ dimensions is the reference system for perturbative schemes used in dimensionally regularized nonrelativistic effective field theories to describe hydrogen-like atoms. Solutions to the D-dimensional Schrödinger-Coulomb equation are given in the form of a double power series. Energies and normalization integrals are obtained numerically and also perturbatively in terms of ɛ. The utility of the series expansion is demonstrated by the calculation of the divergent expectation value <(V′)^2>.

  18. Diffuse optical correlation tomography of cerebral blood flow during cortical spreading depression in rat brain

    NASA Astrophysics Data System (ADS)

    Zhou, Chao; Yu, Guoqiang; Furuya, Daisuke; Greenberg, Joel; Yodh, Arjun; Durduran, Turgut

    2006-02-01

    Diffuse optical correlation methods were adapted for three-dimensional (3D) tomography of cerebral blood flow (CBF) in small animal models. The image reconstruction was optimized using a noise model for diffuse correlation tomography, which enabled better data selection and regularization. The tomographic approach was demonstrated with simulated data and during in-vivo cortical spreading depression (CSD) in rat brain. Three-dimensional images of CBF were obtained through the intact skull in tissue ~4 mm deep below the cortex.

  19. Mining protein loops using a structural alphabet and statistical exceptionality

    PubMed Central

    2010-01-01

    Background Protein loops encompass 50% of protein residues in available three-dimensional structures. These regions are often involved in protein functions, e.g. binding site, catalytic pocket... However, the description of protein loops with conventional tools is a difficult task. Regular secondary structures, helices and strands, have been widely studied, whereas loops, because they are highly variable in terms of sequence and structure, are difficult to analyze. Due to data sparsity, long loops have rarely been systematically studied. Results We developed a simple and accurate method that allows the description and analysis of the structures of short and long loops using structural motifs without restriction on loop length. This method is based on the structural alphabet HMM-SA. HMM-SA allows the simplification of a three-dimensional protein structure into a one-dimensional string of states, where each state is a four-residue prototype fragment, called a structural letter. The difficult task of the structural grouping of huge data sets is thus easily accomplished by handling structural letter strings as in conventional protein sequence analysis. We systematically extracted all seven-residue fragments in a bank of 93000 protein loops and grouped them according to the structural-letter sequence, named a structural word. This approach permits a systematic analysis of loops of all sizes since we consider the structural motifs of seven residues rather than complete loops. We focused the analysis on highly recurrent words of loops (observed more than 30 times). Our study reveals that 73% of loop-lengths are covered by only 3310 highly recurrent structural words (out of 28274 observed words). These structural words have low structural variability (mean RMSd of 0.85 Å). As expected, half of these motifs display a flanking-region preference but, interestingly, two thirds are shared by short (less than 12 residues) and long loops. Moreover, half of the recurrent motifs exhibit a significant level of amino-acid conservation, with at least four significant positions, and 87% of long loops contain at least one such word. We complement our analysis with the detection of statistically over-represented patterns of structural letters, as in conventional DNA sequence analysis. About 30% (930) of the structural words are over-represented and cover about 40% of loop lengths. Interestingly, these words exhibit lower structural variability and higher sequential specificity, suggesting structural or functional constraints. Conclusions We developed a method to systematically decompose and study protein loops using recurrent structural motifs. This method is based on the structural alphabet HMM-SA and not on structural alignment and geometrical parameters. We extracted meaningful structural motifs that are found in both short and long loops. To our knowledge, it is the first time that pattern mining has helped to increase the signal-to-noise ratio in protein loops. This finding helps to better describe protein loops and might help to decrease the complexity of long-loop analysis. Detailed results are available at http://www.mti.univ-paris-diderot.fr/publication/supplementary/2009/ACCLoop/. PMID:20132552

  20. Mining protein loops using a structural alphabet and statistical exceptionality.

    PubMed

    Regad, Leslie; Martin, Juliette; Nuel, Gregory; Camproux, Anne-Claude

    2010-02-04

    Protein loops encompass 50% of protein residues in available three-dimensional structures. These regions are often involved in protein functions, e.g. binding site, catalytic pocket... However, the description of protein loops with conventional tools is a difficult task. Regular secondary structures, helices and strands, have been widely studied, whereas loops, because they are highly variable in terms of sequence and structure, are difficult to analyze. Due to data sparsity, long loops have rarely been systematically studied. We developed a simple and accurate method that allows the description and analysis of the structures of short and long loops using structural motifs without restriction on loop length. This method is based on the structural alphabet HMM-SA. HMM-SA allows the simplification of a three-dimensional protein structure into a one-dimensional string of states, where each state is a four-residue prototype fragment, called a structural letter. The difficult task of the structural grouping of huge data sets is thus easily accomplished by handling structural letter strings as in conventional protein sequence analysis. We systematically extracted all seven-residue fragments in a bank of 93000 protein loops and grouped them according to the structural-letter sequence, named a structural word. This approach permits a systematic analysis of loops of all sizes since we consider the structural motifs of seven residues rather than complete loops. We focused the analysis on highly recurrent words of loops (observed more than 30 times). Our study reveals that 73% of loop-lengths are covered by only 3310 highly recurrent structural words (out of 28274 observed words). These structural words have low structural variability (mean RMSd of 0.85 Å). As expected, half of these motifs display a flanking-region preference but, interestingly, two thirds are shared by short (less than 12 residues) and long loops. Moreover, half of the recurrent motifs exhibit a significant level of amino-acid conservation, with at least four significant positions, and 87% of long loops contain at least one such word. We complement our analysis with the detection of statistically over-represented patterns of structural letters, as in conventional DNA sequence analysis. About 30% (930) of the structural words are over-represented, and cover about 40% of loop lengths. Interestingly, these words exhibit lower structural variability and higher sequential specificity, suggesting structural or functional constraints. We developed a method to systematically decompose and study protein loops using recurrent structural motifs. This method is based on the structural alphabet HMM-SA and not on structural alignment and geometrical parameters. We extracted meaningful structural motifs that are found in both short and long loops. To our knowledge, it is the first time that pattern mining has helped to increase the signal-to-noise ratio in protein loops. This finding helps to better describe protein loops and might help to decrease the complexity of long-loop analysis. Detailed results are available at http://www.mti.univ-paris-diderot.fr/publication/supplementary/2009/ACCLoop/.

  1. Supramolecular organic frameworks: engineering periodicity in water through host-guest chemistry.

    PubMed

    Tian, Jia; Chen, Lan; Zhang, Dan-Wei; Liu, Yi; Li, Zhan-Ting

    2016-05-11

    The development of homogeneous, water-soluble periodic self-assembled structures comprising repeating units that produce porosity in two-dimensional (2D) or three-dimensional (3D) spaces has become a topic of growing interest in the field of supramolecular chemistry. Such novel self-assembled entities, known as supramolecular organic frameworks (SOFs), are the result of programmed host-guest interactions, which allow for the thermodynamically controlled generation of monolayer sheets or a diamondoid architecture with regular internal cavities or pores under mild conditions. This feature article aims at promoting the conceptually novel SOFs as a new entry alongside conventional supramolecular polymers. In the first section, we will describe the background of porous solid frameworks and supramolecular polymers. We then introduce the self-assembling behaviour of several multitopic flexible molecules, which is closely related to the design of periodic SOFs from rigid multitopic building blocks. This is followed by a brief discussion of cucurbit[8]uril (CB[8])-encapsulation-enhanced aromatic stacking in water. The three-component host-guest pattern based on this stacking motif has been utilized to drive the formation of most of the new SOFs. In the following two sections, we will highlight the main advances in the construction of 2D and 3D SOFs and the related functional aspects. Finally, we will offer our opinions on future directions for both structures and functions. We hope that this article will trigger the interest of researchers in the fields of chemistry, physics, biology and materials science, which should help accelerate the applications of this new family of soft self-assembled organic frameworks.

  2. Effect of 3D animation videos over 2D video projections in periodontal health education among dental students.

    PubMed

    Dhulipalla, Ravindranath; Marella, Yamuna; Katuri, Kishore Kumar; Nagamani, Penupothu; Talada, Kishore; Kakarlapudi, Anusha

    2015-01-01

    There is limited evidence about the distinguishing effect of 3D oral health education videos over conventional two-dimensional (2D) projections in improving oral health knowledge. This randomized controlled trial was done to test the effect of 3D oral health educational videos among first-year dental students. Eighty first-year dental students were enrolled and divided into two groups (test and control). In the test group, 3D animations, and in the control group, regular 2D video projections pertaining to periodontal anatomy, etiology, presenting conditions, preventive measures and treatment of periodontal problems were shown. The effect of 3D animation was evaluated by using a questionnaire consisting of 10 multiple choice questions given to all participants at baseline, immediately after, and 1 month after the intervention. Clinical parameters like Plaque Index (PI), Gingival Bleeding Index (GBI), and Oral Hygiene Index Simplified (OHI-S) were measured at baseline and at 1 month follow-up. A significant difference in the post-intervention knowledge scores was found between the groups, as assessed by unpaired t-test (p < 0.001), at baseline, immediately after, and after 1 month. At baseline, all the clinical parameters in both groups were similar and showed a significant reduction (p < 0.001) after 1 month, whereas no significant difference was noticed post intervention between the groups. 3D animation videos are more effective than 2D videos in periodontal disease education and knowledge recall. The 3D animation results also demonstrate better visual comprehension for students and greater health care outcomes.

  3. SkyFACT: high-dimensional modeling of gamma-ray emission with adaptive templates and penalized likelihoods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storm, Emma; Weniger, Christoph; Calore, Francesca, E-mail: e.m.storm@uva.nl, E-mail: c.weniger@uva.nl, E-mail: francesca.calore@lapth.cnrs.fr

    We present SkyFACT (Sky Factorization with Adaptive Constrained Templates), a new approach for studying, modeling and decomposing diffuse gamma-ray emission. Like most previous analyses, the approach relies on predictions from cosmic-ray propagation codes like GALPROP and DRAGON. However, in contrast to previous approaches, we account for the fact that models are not perfect and allow for a very large number (≳ 10^5) of nuisance parameters to parameterize these imperfections. We combine methods of image reconstruction and adaptive spatio-spectral template regression in one coherent hybrid approach. To this end, we use penalized Poisson likelihood regression, with regularization functions that are motivated by the maximum entropy method. We introduce methods to efficiently handle the high dimensionality of the convex optimization problem as well as the associated semi-sparse covariance matrix, using the L-BFGS-B algorithm and Cholesky factorization. We test the method both on synthetic data as well as on gamma-ray emission from the inner Galaxy, |ℓ| < 90° and |b| < 20°, as observed by the Fermi Large Area Telescope. We finally define a simple reference model that removes most of the residual emission from the inner Galaxy, based on conventional diffuse emission components as well as components for the Fermi bubbles, the Fermi Galactic center excess, and extended sources along the Galactic disk. Variants of this reference model can serve as a basis for future studies of diffuse emission in and outside the Galactic disk.
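    The core fitting step, a penalized Poisson likelihood minimized with L-BFGS-B over bound-constrained nuisance parameters, can be sketched on a toy one-template problem. Everything below (template shape, quadratic penalty, regularization strength) is illustrative and is not SkyFACT's actual model:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy setup, illustrative only: one spatial template T with per-pixel
# nuisance modulations tau_i; the model counts are mu_i = theta * T_i * tau_i.
npix = 50
T = 1.0 + 0.5 * np.sin(np.linspace(0.0, 4.0 * np.pi, npix))
counts = rng.poisson(10.0 * T)          # synthetic "observed" counts

lam = 5.0                               # penalty strength (assumed value)

def objective(x):
    theta, tau = x[0], x[1:]
    mu = theta * T * tau
    # Poisson negative log-likelihood (dropping the constant log k! term)
    # plus a quadratic penalty pulling the nuisance modulations toward 1.
    return np.sum(mu - counts * np.log(mu)) + lam * np.sum((tau - 1.0) ** 2)

res = minimize(objective, np.ones(npix + 1), method="L-BFGS-B",
               bounds=[(1e-6, None)] * (npix + 1))
theta_hat = res.x[0]
print(res.success, round(theta_hat, 2))  # fitted overall normalization
```

    The penalty is what breaks the degeneracy between the overall normalization theta and the per-pixel modulations; without it, the two could trade off freely, which is the ill-posedness the regularization is there to cure.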

  4. A unified framework for penalized statistical muon tomography reconstruction with edge preservation priors of lp norm type

    NASA Astrophysics Data System (ADS)

    Yu, Baihui; Zhao, Ziran; Wang, Xuewu; Wu, Dufan; Zeng, Zhi; Zeng, Ming; Wang, Yi; Cheng, Jianping

    2016-01-01

    The Tsinghua University MUon Tomography facilitY (TUMUTY) has been built and is utilized to reconstruct special objects with complex structure. Since fine images are required, the conventional Maximum Likelihood Scattering and Displacement (MLSD) algorithm is employed. However, due to the statistical characteristics of muon tomography and the incompleteness of the data, the reconstruction is always unstable and accompanied by severe noise. In this paper, we propose a Maximum a Posteriori (MAP) algorithm for muon tomography regularization, where an edge-preserving prior on the scattering density image is introduced into the objective function. The prior takes the lp norm (p > 0) of the image gradient magnitude, where p = 1 and p = 2 correspond to the well-known total-variation (TV) and Gaussian priors, respectively. The optimization transfer principle is utilized to minimize the objective function in a unified framework. At each iteration the problem is transferred to solving a cubic equation through paraboloidal surrogating. To validate the method, the French Test Object (FTO) is imaged by both numerical simulation and TUMUTY. The proposed algorithm is used for the reconstruction, where different norms are studied in detail, including l2, l1, l0.5, and an l2-0.5 mixture norm. Compared with the MLSD method, MAP achieves better image quality in both structure preservation and noise reduction. Furthermore, compared with previous work where only one-dimensional images were acquired, we achieve relatively clear three-dimensional images of the FTO, where the inner air hole and the tungsten shell are visible.
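    The edge-preserving prior itself is easy to sketch: take the lp "norm" of the image gradient magnitude, with p = 1 giving the TV prior and p = 2 the Gaussian prior. A minimal NumPy sketch on a hypothetical scattering-density slice (not FTO data):

```python
import numpy as np

def lp_gradient_prior(img, p, eps=1e-12):
    """Edge-preserving prior: sum over pixels of |grad| ** p (p > 0).
    p = 1 is the total-variation prior, p = 2 the Gaussian prior."""
    gx = np.diff(img, axis=0, append=img[-1:, :])   # forward differences
    gy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)          # eps guards p < 1 at zero
    return np.sum(mag ** p)

# Hypothetical scattering-density slice: flat background plus a sharp block.
img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0

for p in (2.0, 1.0, 0.5):
    print(p, lp_gradient_prior(img, p))
```

    For a piecewise-constant image like this, p < 1 penalizes a sharp edge relatively less than the Gaussian p = 2 prior does, which is why small p preserves structure while still suppressing noise.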

  5. Statistical learning theory for high dimensional prediction: Application to criterion-keyed scale development.

    PubMed

    Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R

    2016-12-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how 3 common SLT algorithms-supervised principal components, regularization, and boosting-can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach-or perhaps because of them-SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
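    The EPE-minimization logic described above can be sketched with plain NumPy: fit a regularized (here ridge) model at several complexity levels and keep the one with the lowest k-fold cross-validation error. The data below are synthetic stand-ins for a large item pool, not the cohort data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a large item pool: n subjects, p items, few true signals.
n, p = 80, 200
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 1.0                          # only 5 items actually predict y
y = X @ beta + rng.standard_normal(n)

def ridge_fit(X, y, lam):
    """Ridge (L2-regularized) least squares: one way to control complexity."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_error(X, y, lam, k=5):
    """k-fold cross-validation estimate of expected prediction error (EPE)."""
    idx = np.arange(len(y))
    err = 0.0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        b = ridge_fit(X[train], y[train], lam)
        err += np.sum((y[fold] - X[fold] @ b) ** 2)
    return err / len(y)

lams = [0.01, 0.1, 1.0, 10.0, 100.0]
scores = {lam: cv_error(X, y, lam) for lam in lams}
best = min(scores, key=scores.get)
print("chosen lambda:", best)
```

    With p > n, the nearly unregularized fit maximizes the within-sample likelihood but overfits badly; cross-validation picks a stronger penalty precisely because it estimates out-of-sample error rather than training fit.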

  6. Condition Number Regularized Covariance Estimation*

    PubMed Central

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumptions on either the covariance matrix or its inverse are imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  7. Condition Number Regularized Covariance Estimation.

    PubMed

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumptions on either the covariance matrix or its inverse are imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
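    The basic idea can be sketched by eigenvalue truncation: clip the sample eigenvalues into an interval [tau, kappa*tau] so that the estimate's condition number is at most kappa. Note that the simple choice of tau below is a heuristic for illustration, not the paper's maximum likelihood solution for the truncation bound:

```python
import numpy as np

rng = np.random.default_rng(2)

def clip_condition_number(S, kappa):
    """Regularize a sample covariance S by truncating its eigenvalues to
    [tau, kappa * tau], so the condition number is at most kappa. The choice
    tau = lambda_max / kappa is a simple heuristic for illustration, not the
    paper's maximum likelihood solution for the truncation bound."""
    w, V = np.linalg.eigh(S)
    tau = w.max() / kappa
    w_reg = np.clip(w, tau, kappa * tau)
    return (V * w_reg) @ V.T            # reassemble V diag(w_reg) V^T

# "Large p, small n" toy data: 20 samples in 50 dimensions.
X = rng.standard_normal((20, 50))
S = np.cov(X, rowvar=False)        # rank-deficient: infinite condition number
S_reg = clip_condition_number(S, kappa=30.0)

w = np.linalg.eigvalsh(S_reg)
print(round(w.max() / w.min(), 1))  # bounded by kappa
```

    Clipping from below makes the estimate invertible (unlike the raw sample covariance when n < p), and the upper clip bounds the spread of eigenvalues, which is exactly the well-conditioning requirement.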

  8. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics

    PubMed Central

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-01-01

    Three-dimensional (3-D) nanostructures have demonstrated enticing potential to boost the performance of photovoltaic devices, primarily owing to their improved photon capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility still remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules of high performance nanostructured solar cells, but also demonstrate a highly practical process to fabricate efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin film photovoltaic industry. PMID:24603964

  9. Roll-to-roll fabrication of large scale and regular arrays of three-dimensional nanospikes for high efficiency and flexible photovoltaics.

    PubMed

    Leung, Siu-Fung; Gu, Leilei; Zhang, Qianpeng; Tsui, Kwong-Hoi; Shieh, Jia-Min; Shen, Chang-Hong; Hsiao, Tzu-Hsuan; Hsu, Chin-Hung; Lu, Linfeng; Li, Dongdong; Lin, Qingfeng; Fan, Zhiyong

    2014-03-07

    Three-dimensional (3-D) nanostructures have demonstrated enticing potential to boost the performance of photovoltaic devices, primarily owing to their improved photon capturing capability. Nevertheless, cost-effective and scalable fabrication of regular 3-D nanostructures with decent robustness and flexibility still remains a challenging task. Meanwhile, establishing rational design guidelines for 3-D nanostructured solar cells with balanced electrical and optical performance is of paramount importance and urgently needed. Herein, regular arrays of 3-D nanospikes (NSPs) were fabricated on flexible aluminum foil with a roll-to-roll compatible process. The NSPs have precisely controlled geometry and periodicity, which allows systematic investigation of the geometry-dependent optical and electrical performance of the devices with experiments and modeling. Intriguingly, it has been discovered that the efficiency of an amorphous-Si (a-Si) photovoltaic device fabricated on NSPs can be improved by 43%, as compared to its planar counterpart, in an optimal case. Furthermore, large scale flexible NSP solar cell devices have been fabricated and demonstrated. These results not only shed light on the design rules of high performance nanostructured solar cells, but also demonstrate a highly practical process to fabricate efficient solar panels with 3-D nanostructures, and thus may have an immediate impact on the thin film photovoltaic industry.

  10. Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.

    PubMed

    Eichstädt, S; Wilkens, V

    2017-06-01

    An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.
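    A standard instance of regularized deconvolution is Tikhonov filtering in the frequency domain. The sketch below, with a made-up pulse and impulse response (illustrative, not hydrophone calibration data), shows how the regularization parameter alpha stabilizes the division by a small system response:

```python
import numpy as np

rng = np.random.default_rng(3)

def tikhonov_deconvolve(y, h, alpha):
    """Frequency-domain deconvolution with Tikhonov regularization:
    X(f) = conj(H(f)) Y(f) / (|H(f)|^2 + alpha). The parameter alpha > 0
    stabilizes frequencies where the system response H is small."""
    H = np.fft.rfft(h, n=len(y))
    Y = np.fft.rfft(y)
    return np.fft.irfft(np.conj(H) * Y / (np.abs(H) ** 2 + alpha), n=len(y))

# Made-up example: a Gaussian pressure pulse measured through a smoothing
# system with additive noise.
n = 256
t = np.arange(n)
x = np.exp(-0.5 * ((t - 60) / 4.0) ** 2)       # "true" measurand
h = np.exp(-t / 10.0)
h /= h.sum()                                   # low-pass impulse response
y = np.convolve(x, h)[:n] + 1e-3 * rng.standard_normal(n)

x_hat = tikhonov_deconvolve(y, h, alpha=1e-3)
# x_hat recovers the pulse more faithfully than the raw measurement y.
```

    The point the abstract makes is that the choice of alpha (or any other regularization) changes the estimate itself, so a complete uncertainty budget must include a contribution from that choice, here derived from prior knowledge expressed as an upper bound on the measurand's magnitude spectrum.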

  11. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    PubMed Central

    Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki

    2017-01-01

    L1/2 and L2/3 are two typical non-convex Lp (0 < p < 1) regularizations.

  12. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    PubMed

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    L1/2 and L2/3 are two typical non-convex Lp (0 < p < 1) regularizations.
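    A generic way to handle such non-convex Lp penalties is iteratively reweighted soft thresholding, where each proximal-gradient sweep solves a weighted-L1 surrogate. The sketch below is a plain illustration of that idea only; it is not the SAITA algorithm and does not use its multiple sub-dictionary representation:

```python
import numpy as np

rng = np.random.default_rng(4)

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def reweighted_lp_threshold(A, b, lam, p=0.5, n_iter=500, eps=1e-3):
    """Iteratively reweighted soft thresholding for min 0.5||Ax-b||^2
    + lam * ||x||_p^p with 0 < p < 1: each proximal-gradient sweep uses a
    weighted-L1 surrogate with weights w_i ~ p / (|x_i| + eps)^(1 - p)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        w = p / (np.abs(x) + eps) ** (1.0 - p)
        x = soft_threshold(x - grad / L, lam * w / L)
    return x

# Sparse-recovery toy problem (sizes and values are illustrative).
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 3.0 * rng.standard_normal(k)
b = A @ x_true

x_hat = reweighted_lp_threshold(A, b, lam=0.01, p=0.5)
print(round(float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)), 3))
```

    The reweighting is what approximates the non-convex Lp penalty: large coefficients receive small weights (little shrinkage), while small coefficients are thresholded hard, promoting sparser solutions than plain L1.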

  13. Graphene-and-Copper Artificial Nacre Fabricated by a Preform Impregnation Process: Bioinspired Strategy for Strengthening-Toughening of Metal Matrix Composite.

    PubMed

    Xiong, Ding-Bang; Cao, Mu; Guo, Qiang; Tan, Zhanqiu; Fan, Genlian; Li, Zhiqiang; Zhang, Di

    2015-07-28

    Metals can be strengthened by adding hard reinforcements, but such a strategy usually compromises ductility and toughness. Natural nacre consists of hard and soft phases organized in a regular "brick-and-mortar" structure and exhibits a superior combination of mechanical strength and toughness, which is an attractive model for strengthening and toughening artificial composites, but such a bioinspired metal matrix composite had yet to be made. Here we prepared a nacre-like reduced graphene oxide (RGrO) reinforced Cu matrix composite based on a preform impregnation process, by which two-dimensional RGrO was used as "brick" and inserted into a "□-and-mortar" ordered porous Cu preform (the symbol "□" means the absence of "brick"), followed by compacting. This process realized uniform dispersion and alignment of RGrO in the Cu matrix simultaneously. The RGrO-and-Cu artificial nacres exhibited simultaneous enhancement in yield strength and ductility as well as increased modulus, attributed to RGrO strengthening, effective crack deflection and a possible combined failure mode of RGrO. The artificial nacres also showed significantly higher strengthening efficiency than other conventional Cu matrix composites, which might be related to the alignment of RGrO.

  14. Multislice spiral CT simulator for dynamic cardiopulmonary studies

    NASA Astrophysics Data System (ADS)

    De Francesco, Silvia; Ferreira da Silva, Augusto M.

    2002-04-01

    We have developed a Multi-slice Spiral CT Simulator modeling the acquisition process of a real tomograph over a 4-dimensional phantom (4D MCAT) of the human thorax. The simulator allows us to visually characterize artifacts due to insufficient temporal sampling and to evaluate a priori the quality of the images obtained in cardio-pulmonary studies (both with single-/multi-slice and ECG-gated acquisition processes). The simulating environment allows for both conventional and spiral scanning modes and includes a model of noise in the acquisition process. In the case of spiral scanning, reconstruction facilities include longitudinal interpolation methods (360LI and 180LI, both for single and multi-slice). The reconstruction of the section is then performed through FBP. The reconstructed images/volumes are affected by distortion due to insufficient temporal sampling of the moving object. The developed simulating environment allows us to investigate the nature of the distortion, characterizing it qualitatively and quantitatively (using, for example, Herman's measures). Much of our work is focused on the determination of adequate temporal sampling and sinogram regularization techniques. At the moment, the simulator is limited to the case of a multi-slice tomograph, with extension to cone-beam or area detectors planned as the next step of development.

  15. Electromagnetism on anisotropic fractal media

    NASA Astrophysics Data System (ADS)

    Ostoja-Starzewski, Martin

    2013-04-01

    Basic equations of electromagnetic fields in anisotropic fractal media are obtained using a dimensional regularization approach. First, a formulation based on product measures is shown to satisfy the four basic identities of vector calculus. This allows a generalization of the Green-Gauss and Stokes theorems, as well as of the charge conservation equation, to anisotropic fractals. Then, pursuing this conceptual approach, we derive the Faraday and Ampère laws for such fractal media, which, along with two auxiliary null-divergence conditions, effectively give the modified Maxwell equations. Proceeding on a separate track, we employ a variational principle for electromagnetic fields, appropriately adapted to fractal media, to independently derive the same forms of these two laws. It is next found that the parabolic (for a conducting medium) and the hyperbolic (for a dielectric medium) equations involve modified gradient operators, while the Poynting vector has the same form as in the non-fractal case. Finally, Maxwell's electromagnetic stress tensor is reformulated for fractal systems. In all cases, the derived equations for fractal media depend explicitly on the fractal dimensions in three different directions and reduce to the conventional forms for continuous media with Euclidean geometry upon setting each of these dimensions equal to unity.

  16. Dimension-Based Statistical Learning Affects Both Speech Perception and Production

    ERIC Educational Resources Information Center

    Lehet, Matthew; Holt, Lori L.

    2017-01-01

    Multiple acoustic dimensions signal speech categories. However, dimensions vary in their informativeness; some are more diagnostic of category membership than others. Speech categorization reflects these dimensional regularities such that diagnostic dimensions carry more "perceptual weight" and more effectively signal category membership…

  17. Optimal swimming of a sheet.

    PubMed

    Montenegro-Johnson, Thomas D; Lauga, Eric

    2014-06-01

    Propulsion at microscopic scales is often achieved through propagating traveling waves along hairlike organelles called flagella. Taylor's two-dimensional swimming sheet model is frequently used to provide insight into problems of flagellar propulsion. We derive numerically the large-amplitude wave form of the two-dimensional swimming sheet that yields optimum hydrodynamic efficiency: the ratio of the squared swimming speed to the rate-of-working of the sheet against the fluid. Using the boundary element method, we show that the optimal wave form is a front-back symmetric regularized cusp that is 25% more efficient than the optimal sine wave. This optimal two-dimensional shape is smooth, qualitatively different from the kinked form of Lighthill's optimal three-dimensional flagellum, not predicted by small-amplitude theory, and different from the smooth circular-arc-like shape of active elastic filaments.

  18. Research Productivity: Some Paths Less Travelled

    ERIC Educational Resources Information Center

    Martin, Brian

    2009-01-01

    Conventional approaches for fostering research productivity, such as recruitment and incentives, do relatively little to develop latent capacities in researchers. Six promising unorthodox approaches are the promotion of regular writing, tools for creativity, good luck, happiness, good health and crowd wisdom. These options challenge conventional…

  19. [3D Virtual Reality Laparoscopic Simulation in Surgical Education - Results of a Pilot Study].

    PubMed

    Kneist, W; Huber, T; Paschold, M; Lang, H

    2016-06-01

    The use of three-dimensional imaging in laparoscopy is a growing issue and has led to 3D systems in laparoscopic simulation. Studies on box trainers have shown differing results concerning the benefit of 3D imaging, and there are currently no studies analysing 3D imaging in virtual reality laparoscopy (VRL). Five surgical fellows, 10 surgical residents and 29 undergraduate medical students performed abstract and procedural tasks on a VRL simulator using conventional 2D and 3D imaging in randomised order. No significant differences between the two imaging systems were shown for students or medical professionals. Participants who preferred three-dimensional imaging showed significantly better results in 2D as well as in 3D imaging. Early results on three-dimensional imaging on box trainers were mixed; some studies found an advantage of 3D imaging for laparoscopic novices. In the present study on 3D imaging on a VRL simulator, there was no significant advantage of 3D imaging over conventional 2D imaging.

  20. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications

    PubMed Central

    Gleeson, Fergus V.; Brady, Michael; Schnabel, Julia A.

    2018-01-01

    Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and to provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset. PMID:29662918

  1. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications.

    PubMed

    Papież, Bartłomiej W; Franklin, James M; Heinrich, Mattias P; Gleeson, Fergus V; Brady, Michael; Schnabel, Julia A

    2018-04-01

    Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and to provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.
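    As a rough illustration of the structure-preserving filtering used here in place of Gaussian smoothing, the following is a minimal one-dimensional guided filter (after He et al.); the radius and regularization values are arbitrary choices, and the actual method filters 3-D displacement fields guided by the image. Where the guide image is flat, the filter smooths strongly; across an intensity edge (e.g. a sliding interface) it preserves the discontinuity.

```python
import numpy as np

def box_filter(x, r):
    """Mean filter of radius r via cumulative sums (edge-padded)."""
    xp = np.pad(x, r, mode='edge')
    c = np.cumsum(np.concatenate(([0.0], xp)))
    return (c[2 * r + 1:] - c[:-(2 * r + 1)]) / (2 * r + 1)

def guided_filter_1d(guide, signal, r=5, eps=1e-2):
    """Edge-preserving smoothing of `signal`, steered by `guide`: locally fit
    signal ~ a * guide + b, then average the coefficients."""
    mean_I = box_filter(guide, r)
    mean_p = box_filter(signal, r)
    var_I = box_filter(guide * guide, r) - mean_I ** 2
    cov_Ip = box_filter(guide * signal, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)          # ~1 at edges, ~0 in flat regions
    b = mean_p - a * mean_I
    return box_filter(a, r) * guide + box_filter(b, r)
```

    Applied to a displacement field with a step discontinuity and the image as guide, the step survives filtering, whereas Gaussian smoothing would blur it across the interface.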

  2. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    PubMed

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.
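    A toy sketch of the manifold-regularization idea referred to above (not the authors' MRMTL algorithm, which additionally learns a subspace shared across tasks): Laplacian-regularized least squares fits a linear classifier to a few labeled points while penalizing variation along a kNN graph built from all points. Function name and parameter values are illustrative.

```python
import numpy as np

def laplacian_rls(X, y, labeled, gamma_a=1e-2, gamma_i=1e-1, k=5):
    """Laplacian-regularized least squares: fit a linear function w using only
    the labeled points, while a kNN-graph Laplacian built from ALL points
    (labeled and unlabeled) enforces smoothness along the data manifold."""
    n, d = X.shape
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dist.
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:              # k nearest neighbours
            W[i, j] = W[j, i] = 1.0
    L = np.diag(W.sum(1)) - W                             # graph Laplacian
    J = np.diag(labeled.astype(float))                    # mask of labeled pts
    A = X.T @ J @ X + gamma_a * np.eye(d) + gamma_i * X.T @ L @ X
    return np.linalg.solve(A, X.T @ J @ y)
```

    With two well-separated clusters and only two labels per cluster, sign(X @ w) recovers the cluster membership of all points.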

  3. Impact of an irregular friction formulation on dynamics of a minimal model for brake squeal

    NASA Astrophysics Data System (ADS)

    Stender, Merten; Tiedemann, Merten; Hoffmann, Norbert; Oberst, Sebastian

    2018-07-01

    Friction-induced vibrations are of major concern in the design of reliable, efficient and comfortable technical systems. Well-known examples of systems susceptible to self-excitation can be found in fluid-structure interaction, disk brake squeal, rotor dynamics, hip implant noise and many more. While damping elements and amplitude reduction are well understood in linear systems, nonlinear systems, and especially self-excited dynamics, still constitute a challenge for damping element design. Additionally, complex dynamical systems exhibit deterministic chaotic cores, which make the system response severely sensitive to initial conditions. The complex friction interface dynamics in particular remain a challenging task for measurement and modeling. Today, mostly simple and regular friction models are investigated in the field of self-excited brake system vibrations. This work aims at investigating the effect of high-frequency irregular interface dynamics on the nonlinear dynamical response of a self-excited structure. Special focus is put on the characterization of the system response time series. A low-dimensional minimal model is studied which features self-excitation, gyroscopic effects and friction-induced damping. Additionally, the employed friction formulation has temperature as an inner variable and superposed chaotic fluctuations governed by a Lorenz attractor. The time scale of the irregular fluctuations is chosen to be one order of magnitude smaller than that of the overall system dynamics. The influence of these fluctuations on the structural response is studied in various ways, i.e. in the time domain and by means of recurrence analysis. The separate time scales are studied in detail and regimes of dynamic interaction are identified. The results of the irregular friction formulation indicate dynamic interactions on multiple time scales, which trigger larger vibration amplitudes than the regular friction formulations conventionally studied in the field of friction-induced vibrations.
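    A chaotic fluctuation signal of the kind described can be generated by integrating the Lorenz system; the sketch below uses the classical parameter values and a fixed-step RK4 integrator, and omits the coupling into the friction law, which is specific to the paper's model.

```python
import numpy as np

def lorenz_series(n_steps=10000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with classical RK4 and return the x(t)
    component, usable as a bounded, chaotic fluctuation signal."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    s = np.array([1.0, 1.0, 1.0])
    xs = np.empty(n_steps)
    for i in range(n_steps):
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[i] = s[0]
    return xs
```

    Rescaling this series in time (e.g. sampling it at a rate ten times that of the structural model) reproduces the separation of time scales used in the study.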

  4. A multi-resolution approach to electromagnetic modeling.

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-04-01

    We present a multi-resolution approach for three-dimensional magnetotelluric forward modeling. Our approach is motivated by the fact that fine grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography, and bathymetry, while a much coarser grid may be adequate at depth, where the diffusively propagating electromagnetic fields are much smoother. This is especially true for the forward modeling required in regularized inversion, where conductivity variations at depth are generally very smooth. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modeling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of sub-grids, with each sub-grid being a standard Cartesian tensor-product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modeling, but simpler in that we allow refinement only with depth. The major difficulty lay in deriving the forward modeling operators on interfaces between adjacent sub-grids. We considered three ways of handling the interface layers and suggest a preferable one, which yields accuracy similar to that of the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.

  5. A three-step reconstruction method for fluorescence molecular tomography based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.; Le, Hanh N. D.; Kang, Jin U.; Roland, Per E.; Wong, Dean F.; Rahmim, Arman

    2017-02-01

    Fluorescence molecular tomography (FMT) is a promising tool for real-time in vivo quantification of neurotransmission (NT), which we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while the spatial sparsity of the NT effects could be exploited, traditional compressive-sensing methods cannot be directly applied because the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied to the data to reduce matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via l1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate and improves the quantitation through accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1, absorption coefficient: 0.1 cm-1) and tomographic measurements made using pixelated detectors. In different experiments, fluorescent sources of varying size and intensity were simulated. The proposed reconstruction method provided accurate estimates of the fluorescent source intensity, with a 20% lower root mean square error on average compared to the pure-homotopy method for all considered source intensities and sizes. Further, compared with a conventional l2-regularized algorithm, the proposed method overall reconstructed a substantially more accurate fluorescence distribution. The proposed method shows considerable promise and will be tested using more realistic simulations and experimental setups.
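    The first step, truncated singular value decomposition, can be sketched for a generic ill-conditioned linear system (a toy stand-in for the FMT system matrix; the truncation rank k is an assumed tuning parameter): directions with the smallest singular values, in which noise is amplified most, are simply discarded.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD regularized solution of A x = b: keep only the k largest
    singular values and discard the rest."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
```

    On a system with singular values (1, 0.5, 1e-8), a direct solve amplifies even tiny measurement noise by eight orders of magnitude along the weakest direction, while the truncated solution stays close to the true coefficients at the cost of losing that one component.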

  6. Accuracy of a separating foil impression using a novel polyolefin foil compared to a custom tray and a stock tray technique

    PubMed Central

    Pastoret, Marie-Hélène; Bühler, Julia; Weiger, Roland

    2017-01-01

    PURPOSE To compare the dimensional accuracy of three impression techniques: a separating foil impression, a custom tray impression, and a stock tray impression. MATERIALS AND METHODS A machined mandibular complete-arch metal model with special modifications served as a master cast. Three different impression techniques (n = 6 in each group) were performed with addition-cured silicone materials: i) putty-wash technique with a prefabricated metal tray (MET) using putty and regular-body material, ii) single-phase impression with a custom tray (CUS) using regular-body material, and iii) two-stage technique with a stock metal tray (SEP) using putty with a separating foil and regular-body material. All impressions were poured with epoxy resin. Six different distances (four intra-abutment and two inter-abutment distances) were gauged on the metal master model and on the casts with a microscope in combination with calibrated measuring software. The differences of the evaluated distances between the reference and the three test groups were calculated and expressed as mean (± SD). Additionally, 95% confidence intervals were calculated, and significant differences between the experimental groups were assumed when confidence intervals did not overlap. RESULTS Dimensional changes compared to reference values varied between -74.01 and 32.57 µm (MET), -78.86 and 30.84 µm (CUS), and -92.20 and 30.98 µm (SEP). For the intra-abutment distances, no significant differences among the experimental groups were detected. CUS showed significantly higher dimensional accuracy for the inter-abutment distances, with -0.02% and -0.08% deviation compared to MET and SEP, respectively. CONCLUSION The separating foil technique is a simple alternative to the custom tray technique for single-tooth restorations, while limitations may exist for extended restorations with multiple abutment teeth. PMID:28874996

  7. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    NASA Astrophysics Data System (ADS)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
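    The iteratively reweighted least-squares idea can be illustrated on a one-dimensional TV-regularized denoising problem; this is a drastic simplification of the 3-D gravity inversion (no randomized GSVD, no depth weighting), with operator, weights, and parameter values chosen for illustration only.

```python
import numpy as np

def irls_tv(A, b, alpha=0.5, n_iter=30, eps=1e-6):
    """Iteratively reweighted least squares for the TV-regularized problem
    min_x ||A x - b||^2 + alpha * TV(x), with TV(x) = sum_i |x_{i+1} - x_i|.
    Each iteration solves a weighted quadratic problem whose weights
    re-approximate the absolute value at the current iterate."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)                 # first-difference operator
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # unregularized start
    for _ in range(n_iter):
        w = 1.0 / np.sqrt((D @ x) ** 2 + eps)      # IRLS weights ~ 1 / |Dx|
        M = A.T @ A + alpha * D.T @ (w[:, None] * D)
        x = np.linalg.solve(M, A.T @ b)
    return x
```

    On a noisy piecewise-constant signal (A = identity), the TV solution flattens the noise within each segment while keeping the sharp jump, which is the behavior that motivates TV over minimum-structure regularization.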

  8. Spin and Valley Noise in Two-Dimensional Dirac Materials

    NASA Astrophysics Data System (ADS)

    Tse, Wang-Kong; Saxena, A.; Smith, D. L.; Sinitsyn, N. A.

    2014-07-01

    We develop a theory for optical Faraday rotation noise in two-dimensional Dirac materials. In contrast to spin noise in conventional semiconductors, we find that the Faraday rotation fluctuations are influenced not only by spins but also the valley degrees of freedom attributed to intervalley scattering processes. We illustrate our theory with two-dimensional transition-metal dichalcogenides and discuss signatures of spin and valley noise in the Faraday noise power spectrum. We propose optical Faraday noise spectroscopy as a technique for probing both spin and valley relaxation dynamics in two-dimensional Dirac materials.

  9. Comparison of measuring strategies for the 3-D electrical resistivity imaging of tumuli

    NASA Astrophysics Data System (ADS)

    Tsourlos, Panagiotis; Papadopoulos, Nikos; Yi, Myeong-Jong; Kim, Jung-Ho; Tsokas, Gregory

    2014-02-01

    Artificially erected hills such as tumuli, mounds, barrows and kurgans are monuments of past human activity and offer opportunities to reconstruct habitation models regarding life and customs during their building period. These structures also host features of archaeological significance such as architectural relics, graves or chamber tombs. Tumulus exploration is a challenging geophysical problem due to the complex distribution of subsurface physical properties, the size and burial depth of potential relics, and the uneven topographic terrain. Geoelectrical methods by means of three-dimensional (3-D) inversion are increasingly popular for tumulus investigation. Typically, data are obtained by establishing a regular rectangular grid and assembling the data collected by parallel two-dimensional (2-D) tomographies. In this work, the application of a radial 3-D mode is studied, in which the data are collected by radially positioned Electrical Resistivity Tomography (ERT) lines. The relative advantages and disadvantages of this measuring mode over regular grid measurements were investigated, and optimum ways to perform 3-D ERT surveys for tumulus investigations were proposed. Comparative tests were performed by means of synthetic examples as well as with field data. Overall, all tested models verified the superiority of the radial mode in delineating bodies positioned at the central part of the tumulus, while the regular measuring mode proved superior in recovering bodies positioned away from the center. The combined use of radial and regular modes seems to produce superior results at the expense of the time required for data acquisition and processing.

  10. Motion-aware temporal regularization for improved 4D cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Mory, Cyril; Janssens, Guillaume; Rit, Simon

    2016-09-01

    Four-dimensional cone-beam computed tomography (4D-CBCT) of the free-breathing thorax is a valuable tool in image-guided radiation therapy of the thorax and the upper abdomen. It allows the determination of the position of a tumor throughout the breathing cycle, while only its mean position can be extracted from three-dimensional CBCT. The classical approaches are not fully satisfactory: respiration-correlated methods allow one to accurately locate high-contrast structures in any frame, but contain strong streak artifacts unless the acquisition is significantly slowed down. Motion-compensated methods can yield streak-free, but static, reconstructions. This work proposes a 4D-CBCT method that can be seen as a trade-off between respiration-correlated and motion-compensated reconstruction. It builds upon the existing reconstruction using spatial and temporal regularization (ROOSTER) and is called motion-aware ROOSTER (MA-ROOSTER). It performs temporal regularization along curved trajectories, following the motion estimated on a prior 4D CT scan. MA-ROOSTER does not involve motion-compensated forward and back projections: the input motion is used only during temporal regularization. MA-ROOSTER is compared to ROOSTER, motion-compensated Feldkamp-Davis-Kress (MC-FDK), and two respiration-correlated methods, on CBCT acquisitions of one physical phantom and two patients. It yields streak-free reconstructions, visually similar to MC-FDK, and robust information on tumor location throughout the breathing cycle. MA-ROOSTER also allows a variation of the lung tissue density during the breathing cycle, similar to that of planning CT, which is required for quantitative post-processing.

  11. Ensemble of sparse classifiers for high-dimensional biological data.

    PubMed

    Kim, Sunghan; Scalzo, Fabien; Telesca, Donatello; Hu, Xiao

    2015-01-01

    Biological data are often high in dimension while the number of samples is small. In such cases, the performance of classification can be improved by reducing the dimension of the data, which is referred to as feature selection. Recently, a novel feature selection method was proposed that utilises the sparsity of high-dimensional biological data, where a small subset of features accounts for most of the variance of the dataset. In this study we propose a new classification method for high-dimensional biological data, which performs both feature selection and classification within a single framework. Our proposed method utilises a sparse linear solution technique and the bootstrap aggregating (bagging) algorithm. We tested its performance on four public mass spectrometry cancer datasets against two conventional classification techniques, Support Vector Machines and Adaptive Boosting. The results demonstrate that our proposed method performs more accurate classification across the various cancer datasets than these conventional classification techniques.
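    A schematic of the general approach on synthetic data: sparse feature selection inside bootstrap aggregating. The correlation-based selector and least-squares base learner below are simple stand-ins for the paper's sparse linear solution technique, and all names and parameters are illustrative.

```python
import numpy as np

def bagged_sparse_classifier(X, y, n_estimators=25, n_features=5, seed=0):
    """Bootstrap-aggregated linear classifiers: each base learner is trained
    on a bootstrap resample using only a small (sparse) subset of features,
    chosen by absolute correlation with the labels; predictions are averaged
    over learners and thresholded (bagging)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, n)                  # bootstrap resample
        Xi, yi = X[idx], y[idx].astype(float)
        corr = np.abs((Xi - Xi.mean(0)).T @ (yi - yi.mean()))
        feats = np.argsort(corr)[-n_features:]       # sparse feature subset
        w, *_ = np.linalg.lstsq(np.c_[Xi[:, feats], np.ones(n)], yi,
                                rcond=None)
        models.append((feats, w))
    def predict(Xnew):
        votes = [np.c_[Xnew[:, f], np.ones(len(Xnew))] @ w for f, w in models]
        return (np.mean(votes, axis=0) > 0.5).astype(int)
    return predict
```

    On data where only 3 of 100 features are informative, the per-learner selection reliably finds the informative features and the aggregated vote classifies held-out samples accurately, mimicking the high-dimension, small-sample regime of mass spectrometry data.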

  12. Mineralization of collagen may occur on fibril surfaces: evidence from conventional and high-voltage electron microscopy and three-dimensional imaging

    NASA Technical Reports Server (NTRS)

    Landis, W. J.; Hodgens, K. J.; Song, M. J.; Arena, J.; Kiyonaga, S.; Marko, M.; Owen, C.; McEwen, B. F.

    1996-01-01

    The interaction between collagen and mineral crystals in the normally calcifying leg tendons from the domestic turkey, Meleagris gallopavo, has been investigated at an ultrastructural level with conventional and high-voltage electron microscopy, computed tomography, and three-dimensional image reconstruction methods. Specimens treated by either aqueous or anhydrous techniques and resin-embedded were appropriately sectioned and regions of early tendon mineralization were photographed. On the basis of individual photomicrographs, stereoscopic pairs of images, and tomographic three-dimensional image reconstructions, platelet-shaped crystals may be demonstrated for the first time in association with the surface of collagen fibrils. Mineral is also observed in closely parallel arrays within collagen hole and overlap zones. The mineral deposition at these spatially distinct locations in the tendon provides insight into possible means by which calcification is mediated by collagen as a fundamental event in skeletal and dental formation among vertebrates.

  13. First-Passage Times in d -Dimensional Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Vaccario, G.; Antoine, C.; Talbot, J.

    2015-12-01

    Although there are many theoretical studies of the mean first-passage time (MFPT), most neglect the diffusive heterogeneity of real systems. We present exact analytical expressions for the MFPT and residence times of a pointlike particle diffusing in a spherically symmetric d -dimensional heterogeneous system composed of two concentric media with different diffusion coefficients with an absorbing inner boundary (target) and a reflecting outer boundary. By varying the convention, e.g., Itō, Stratonovich, or isothermal, chosen to interpret the overdamped Langevin equation with multiplicative noise describing the diffusion process, we find different predictions and counterintuitive results for the residence time in the outer region and hence for the MFPT, while the residence time in the inner region is independent of the convention. This convention dependence of residence times and the MFPT could provide insights about the heterogeneous diffusion in a cell or in a tumor, or for animal and insect searches inside their home range.

  14. Phase retrieval using regularization method in intensity correlation imaging

    NASA Astrophysics Data System (ADS)

    Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin

    2014-11-01

    Intensity correlation imaging (ICI) can obtain high-resolution images with ground-based, low-precision mirrors; in the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image. However, the algorithms currently used (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation, while the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build the mathematical model of phase retrieval and simplify it into a constrained optimization problem over a multi-dimensional function. A new error function was designed from the noise distribution and prior information using a regularization method. The simulation results show that the regularization method can improve the performance of the phase retrieval algorithm and obtain better images, especially in low-SNR conditions.
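    For context, a minimal Fienup-style error-reduction loop (a relative of the hybrid input-output algorithm mentioned above) alternates between imposing the measured Fourier magnitude and an object-domain support/positivity constraint. This is an illustrative baseline, not the authors' regularized method.

```python
import numpy as np

def error_reduction(mag, support, n_iter=200, seed=0):
    """Fienup error-reduction phase retrieval: alternately project onto the
    measured Fourier-magnitude set and the support/positivity set. Returns
    the reconstruction and the Fourier-magnitude error per iteration."""
    rng = np.random.default_rng(seed)
    g = rng.random(mag.shape) * support
    errs = []
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        errs.append(np.linalg.norm(np.abs(G) - mag))
        G = mag * np.exp(1j * np.angle(G))       # impose measured magnitude
        g = np.fft.ifft2(G).real
        g = np.where(support & (g > 0), g, 0.0)  # impose support & positivity
    return g, errs
```

    The Fourier-domain error of this scheme is non-increasing, which is also why it is prone to the stagnation noted in the abstract: it can settle into a local minimum rather than the true image.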

  15. On regularization and error estimates for the backward heat conduction problem with time-dependent thermal diffusivity factor

    NASA Astrophysics Data System (ADS)

    Karimi, Milad; Moradlou, Fridoun; Hajipour, Mojtaba

    2018-10-01

    This paper is concerned with a backward heat conduction problem with a time-dependent thermal diffusivity factor in an infinite "strip". This problem is severely ill-posed, owing to the unbounded amplification of high-frequency components. A new regularization method based on the Meyer wavelet technique is developed to solve the considered problem. Using the Meyer wavelet technique, new stable estimates of Hölder and logarithmic type are proposed, which are optimal in the sense given by Tautenhahn. The stability and convergence rate of the proposed regularization technique are proved. The good performance and high accuracy of this technique are demonstrated through various one- and two-dimensional examples. Numerical simulations and some comparative results are presented.

  16. Consistency-based rectification of nonrigid registrations

    PubMed Central

    Gass, Tobias; Székely, Gábor; Goksel, Orcun

    2015-01-01

    We present a technique to rectify nonrigid registrations by improving their group-wise consistency, which is a widely used unsupervised measure to assess pair-wise registration quality. While pair-wise registration methods cannot guarantee any group-wise consistency, group-wise approaches typically enforce perfect consistency by registering all images to a common reference. However, errors in individual registrations to the reference then propagate, distorting the mean and accumulating in the pair-wise registrations inferred via the reference. Furthermore, the assumption that perfect correspondences exist is not always true, e.g., for interpatient registration. The proposed consistency-based registration rectification (CBRR) method addresses these issues by minimizing the group-wise inconsistency of all pair-wise registrations using a regularized least-squares algorithm. The regularization controls the adherence to the original registration, which is additionally weighted by the local postregistration similarity. This allows CBRR to adaptively improve consistency while locally preserving accurate pair-wise registrations. We show that the resulting registrations are not only more consistent, but also have lower average transformation error when compared to known transformations in simulated data. On clinical data, we show improvements of up to 50% target registration error in breathing motion estimation from four-dimensional MRI and improvements in atlas-based segmentation quality of up to 65% in terms of mean surface distance in three-dimensional (3-D) CT. Such improvement was observed consistently using different registration algorithms, dimensionality (two-dimensional/3-D), and modalities (MRI/CT). PMID:26158083
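    The core consistency idea can be shown on a toy problem where each pair-wise "registration" is a single 1-D translation (the real method handles dense deformations and similarity weighting): fit per-image offsets by least squares so that the rectified pairwise translations are exactly group-wise consistent.

```python
import numpy as np

def rectify_translations(T):
    """Toy consistency rectification: given noisy pairwise 1-D translations
    T[i, j] (ideally t_ij = c_j - c_i for per-image offsets c), fit c by
    least squares over all ordered pairs and return the group-wise
    consistent translations c_j - c_i."""
    n = T.shape[0]
    rows, rhs = [], []
    for i in range(n):
        for j in range(n):
            if i != j:
                r = np.zeros(n)
                r[j], r[i] = 1.0, -1.0
                rows.append(r)
                rhs.append(T[i, j])
    A, y = np.array(rows), np.array(rhs)
    c, *_ = np.linalg.lstsq(A[:, 1:], y, rcond=None)  # gauge fix: c_0 = 0
    c = np.r_[0.0, c]
    return c[None, :] - c[:, None]
```

    Because the rectified matrix is the orthogonal projection of the noisy pairwise estimates onto the consistent subspace, its error is never larger than that of the raw estimates, mirroring the lower transformation error reported above.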

  17. Artificial Neural Network with Regular Graph for Maximum Air Temperature Forecasting: The Effect of Decrease in Node Degree on Learning

    NASA Astrophysics Data System (ADS)

    Ghaderi, A. H.; Darooneh, A. H.

    The behavior of nonlinear systems can be analyzed by artificial neural networks; air temperature change is one example of such a system. In this work, a new neural network method is proposed for forecasting the maximum air temperature in two cities. In this method, the regular-graph concept is used to construct partially connected neural networks with regular structures. The learning results of a fully connected ANN and of networks built with the proposed method are compared. In some cases, the proposed method gives better results than the conventional ANN. After identifying the best network, the effect of the number of input patterns on the prediction is studied, and the results show that increasing the number of input patterns directly improves prediction accuracy.
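
    The regular-graph pruning idea can be sketched as a connectivity mask applied to one layer's weights; the circulant wiring, layer sizes, and node degree below are illustrative assumptions, not the authors' architecture:

```python
import random

def ring_mask(n_in, n_hidden, degree):
    """Connectivity mask from a circulant regular graph: hidden unit j
    is wired to `degree` consecutive inputs, so every node keeps the
    same reduced degree (a sketch of the regular-graph pruning idea)."""
    mask = [[0.0] * n_in for _ in range(n_hidden)]
    for j in range(n_hidden):
        for k in range(degree):
            mask[j][(j + k) % n_in] = 1.0
    return mask

def masked_forward(x, w, mask):
    """One-layer forward pass with every weight gated by the mask,
    emulating a partially connected network."""
    return [sum(wi * mi * xi for wi, mi, xi in zip(row, mrow, x))
            for row, mrow in zip(w, mask)]

random.seed(0)
n_in = n_hidden = 8
degree = 3  # reduced node degree versus the fully connected case (8)
w = [[random.uniform(-1.0, 1.0) for _ in range(n_in)]
     for _ in range(n_hidden)]
mask = ring_mask(n_in, n_hidden, degree)
out = masked_forward([1.0] * n_in, w, mask)
```

    During training, the same mask would be applied to the weight gradients so pruned connections stay absent.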

  18. Effects of Individual's Self-Examination on Cooperation in Prisoner's Dilemma Game

    NASA Astrophysics Data System (ADS)

    Guan, Jian-Yue; Sun, Jin-Tu; Wang, Ying-Hai

    We study a spatial evolutionary prisoner's dilemma game on two regular networks: a one-dimensional ring and a two-dimensional square lattice. The individuals located on the sites of the networks can either cooperate with their neighbors or defect. The effects of individual self-examination are introduced. Using Monte Carlo simulations and the pair-approximation method, we investigate the average density of cooperators in the stationary state for various values of the payoff parameter b and the time interval Δt. The effect on cooperation of the fraction p of players in the system who use self-examination is also discussed. It is shown that, compared with the case of no self-examination, the persistence of cooperation is inhibited when the payoff parameter b is small, with the strongest inhibition at certain Δt (Δt > 0) or p (p > 0); when b is large, the emergence of cooperation can be remarkably enhanced, most strongly at Δt = 0 or p = 1.
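
    A minimal sketch of the baseline spatial game on the one-dimensional ring, without the self-examination rule the paper adds. The payoff convention (R = 1, P = S = 0, T = b), best-neighbour imitation, and all parameters are illustrative assumptions:

```python
import random

def pd_ring(n=100, b=1.3, steps=200, seed=3):
    """Spatial prisoner's dilemma on an n-site ring: each round every
    player plays both neighbours, then copies the strategy of its
    best-scoring neighbour (itself included).  Returns the stationary
    fraction of cooperators."""
    rng = random.Random(seed)
    strat = [rng.random() < 0.5 for _ in range(n)]  # True = cooperate

    def payoff(i):
        total = 0.0
        for j in ((i - 1) % n, (i + 1) % n):
            if strat[i] and strat[j]:
                total += 1.0   # mutual cooperation: R = 1
            elif not strat[i] and strat[j]:
                total += b     # defector exploits a cooperator: T = b
        return total           # S = P = 0 otherwise

    for _ in range(steps):
        scores = [payoff(i) for i in range(n)]
        strat = [strat[max(((i - 1) % n, i, (i + 1) % n),
                           key=lambda j: scores[j])]
                 for i in range(n)]
    return sum(strat) / n

rho_c = pd_ring()  # average density of cooperators
```

    The self-examination mechanism of the paper would add an extra strategy-revision step at intervals Δt for a fraction p of the players.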

  19. Invasion percolation between two sites in two, three, and four dimensions

    NASA Astrophysics Data System (ADS)

    Lee, Sang Bub

    2009-06-01

    The mass distribution of invaded clusters in non-trapping invasion percolation between an injection site and an extraction site has been studied in two, three, and four dimensions. This study extends the recent two-dimensional study by Araújo et al. [A.D. Araújo, T.F. Vasconcelos, A.A. Moreira, L.S. Lucena, J.S. Andrade Jr., Phys. Rev. E 72 (2005) 041404] to higher dimensions. The mass distribution exhibits a power-law behavior, P(m) ∝ m^(-α). It has been found that the index α for pe
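
    The invasion dynamics summarized above (always invade the smallest-threshold site on the cluster boundary until the extraction site is reached, with no trapping rule) can be sketched in two dimensions as follows; the lattice size, uniform thresholds, and site placement are illustrative assumptions, not the paper's setup:

```python
import heapq
import random

def invade(n=40, seed=1):
    """Non-trapping invasion percolation on an n x n lattice between an
    injection site (centre-left) and an extraction site (centre-right).
    Returns the invaded-cluster mass m (number of invaded sites)."""
    rng = random.Random(seed)
    r = [[rng.random() for _ in range(n)] for _ in range(n)]  # thresholds
    start, goal = (n // 2, 0), (n // 2, n - 1)
    invaded = {start}
    frontier = []  # min-heap of (threshold, site) over boundary sites

    def push_neighbours(x, y):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            u, v = x + dx, y + dy
            if 0 <= u < n and 0 <= v < n and (u, v) not in invaded:
                heapq.heappush(frontier, (r[u][v], (u, v)))

    push_neighbours(*start)
    while goal not in invaded:
        _, site = heapq.heappop(frontier)
        if site in invaded:
            continue  # stale heap entry, already invaded
        invaded.add(site)
        push_neighbours(*site)
    return len(invaded)

mass = invade()
```

    Collecting `mass` over many independent threshold realisations would give an estimate of the distribution P(m).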

  20. Chaotic orbits obeying one isolating integral in a four-dimensional map

    NASA Astrophysics Data System (ADS)

    Muzzio, J. C.

    2018-02-01

    We have recently presented strong evidence that chaotic orbits that obey one isolating integral besides energy exist in a toy Hamiltonian model with three degrees of freedom and are bounded by regular orbits that isolate them from the Arnold web. The interval covered by those numerical experiments was equivalent to about one million Hubble times in a galactic context. Here, we use a four-dimensional map to confirm our previous results and to extend that interval 50 times. We show that, at least within that interval, features found in lower dimension Hamiltonian systems and maps are also present in our study, e.g. within the phase space occupied by a chaotic orbit that obeys one integral there are subspaces where that orbit does not enter and are, instead, occupied by regular orbits that, if tori, bound other chaotic orbits obeying one integral and, if cantori, produce stickiness. We argue that the validity of our results might exceed the time intervals covered by the numerical experiments.

  1. Percolation of spatially constraint networks

    NASA Astrophysics Data System (ADS)

    Li, Daqing; Li, Guanliang; Kosmidis, Kosmas; Stanley, H. E.; Bunde, Armin; Havlin, Shlomo

    2011-03-01

    We study how spatial constraints are reflected in the percolation properties of networks embedded in one-dimensional chains and two-dimensional lattices. We assume long-range connections between sites on the lattice, where two sites at distance r are chosen to be linked with probability p(r) ~ r^(-δ). Similar distributions have been found in spatially embedded real networks such as social and airline networks. We find that for networks embedded in two dimensions, with 2 < δ < 4, the percolation properties show new intermediate behavior different from mean field, with critical exponents that depend on δ. For δ < 2, the percolation transition belongs to the universality class of percolation in Erdös-Rényi networks (mean field), while for δ > 4 it belongs to the universality class of percolation in regular lattices. For networks embedded in one dimension, we find that, for δ < 1, the percolation transition is mean field. For 1 < δ < 2, the critical exponents depend on δ, while for δ > 2 there is no percolation transition, as in regular linear chains.
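
    A minimal sketch of the distance-dependent linking rule p(r) ~ r^(-δ) on a one-dimensional chain; the network size, links per site, and periodic boundary are illustrative assumptions:

```python
import bisect
import random

def spatial_links(n=200, delta=1.5, k=2, seed=0):
    """Attach k long-range links per site of an n-site periodic chain,
    drawing each link distance r with probability proportional to
    r**(-delta).  Returns the edge set of the embedded network."""
    rng = random.Random(seed)
    weights = [r ** -delta for r in range(1, n)]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:          # cumulative distribution over r = 1..n-1
        acc += w / total
        cdf.append(acc)
    edges = set()
    for i in range(n):
        for _ in range(k):
            r = bisect.bisect_left(cdf, rng.random()) + 1  # distance >= 1
            j = (i + min(r, n - 1)) % n                    # periodic chain
            edges.add((min(i, j), max(i, j)))
    return edges

edges = spatial_links()
```

    Running a standard bond/site percolation analysis on `edges` for various δ would reproduce the regimes discussed above.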

  2. A Protein in the Palm of Your Hand through Augmented Reality

    ERIC Educational Resources Information Center

    Berry, Colin; Board, Jason

    2014-01-01

    Understanding of proteins and other biological macromolecules must be based on an appreciation of their 3-dimensional shape and the fine details of their structure. Conveying these details in a clear and stimulating fashion can present challenges using conventional approaches and 2-dimensional monitors and projectors. Here we describe a method for…

  3. Report of the Ethics Committee, 2008

    ERIC Educational Resources Information Center

    American Psychologist, 2009

    2009-01-01

    In accordance with the bylaws of the American Psychological Association (APA), the Ethics Committee reports regularly to the membership regarding the number and types of ethics complaints investigated and the major programs undertaken. In 2008, ethics adjudication, ethics education and consultation, convention programs, ethics publications,…

  4. Avoiding the Coming Higher Ed Wars

    ERIC Educational Resources Information Center

    Newfield, Christopher

    2010-01-01

    For the past thirty years, conventional wisdom has held that cutting public funding will make public institutions more efficient. This idea has profoundly altered support for higher education. University leaders have regularly assured legislators, and the general public, that business-oriented science, fundraising, and sophisticated financing…

  5. A new approach to increase the two-dimensional detection probability of CSI algorithm for WAS-GMTI mode

    NASA Astrophysics Data System (ADS)

    Yan, H.; Zheng, M. J.; Zhu, D. Y.; Wang, H. T.; Chang, W. S.

    2015-07-01

    When using the clutter suppression interferometry (CSI) algorithm to perform signal processing in a three-channel wide-area surveillance radar system, the primary concern is to effectively suppress the ground clutter. However, a portion of a moving target's energy is also lost in the process of channel cancellation, which is often neglected in conventional applications. In this paper, we first investigate the two-dimensional (radial-velocity and squint-angle) residual amplitude of moving targets after channel cancellation with the CSI algorithm. Then, a new approach is proposed to increase the two-dimensional detection probability of moving targets by retaining the maximum of the three channel-cancellation results in a non-uniformly spaced channel system. In addition, a theoretical expression for the false-alarm probability of the proposed approach is derived. Compared with the conventional approaches in a uniformly spaced channel system, simulation results validate the effectiveness of the proposed approach. To our knowledge, this is the first time the two-dimensional detection probability of the CSI algorithm has been studied.

  6. Forming three-dimensional closed shapes from two-dimensional soft ribbons by controlled buckling

    PubMed Central

    Aoki, Michio

    2018-01-01

    Conventional manufacturing techniques—moulding, machining and casting—exist to produce three-dimensional (3D) shapes. However, these industrial processes are typically geared for mass production and are not directly applicable to residential settings, where inexpensive and versatile tools are desirable. Moreover, those techniques are, in general, not adequate to process soft elastic materials. Here, we introduce a new concept of forming 3D closed hollow shapes from two-dimensional (2D) elastic ribbons by controlled buckling. We numerically and experimentally characterize how the profile and thickness of the ribbon determine its buckled shape. We find a 2D master profile with which various elliptical 3D shapes can be formed. More complex natural and artificial hollow shapes, such as strawberry, hourglass and wheel, can also be achieved via strategic design and pattern engraving on the ribbons. The nonlinear response of the post-buckling regime is rationalized through finite-element analysis, which shows good quantitative agreement with experiments. This robust fabrication should complement conventional techniques and provide a rich arena for future studies on the mechanics and new applications of elastic hollow structures. PMID:29515894

  7. Density-controlled quantum Hall ferromagnetic transition in a two-dimensional hole system

    DOE PAGES

    Lu, T. M.; Tracy, L. A.; Laroche, D.; ...

    2017-06-01

    We typically achieve quantum Hall ferromagnetic transitions by increasing the Zeeman energy through in-situ sample rotation, while transitions in systems with pseudo-spin indices can be induced by gate control. We report here a gate-controlled quantum Hall ferromagnetic transition between two real spin states in a conventional two-dimensional system without any in-plane magnetic field. We also show that the ratio of the Zeeman splitting to the cyclotron gap in a Ge two-dimensional hole system increases with decreasing density owing to inter-carrier interactions. Below a critical density of ~2.4 × 10^10 cm^-2, this ratio grows greater than 1, resulting in a ferromagnetic ground state at filling factor ν = 2. At the critical density, a resistance peak due to the formation of microscopic domains of opposite spin orientations is observed. Such gate-controlled spin polarization in the quantum Hall regime opens the door to realizing Majorana modes using two-dimensional systems in conventional, low-spin-orbit-coupling semiconductors.

  8. Performance analysis of three-dimensional-triple-level cell and two-dimensional-multi-level cell NAND flash hybrid solid-state drives

    NASA Astrophysics Data System (ADS)

    Sakaki, Yukiya; Yamada, Tomoaki; Matsui, Chihiro; Yamaga, Yusuke; Takeuchi, Ken

    2018-04-01

    In order to improve performance of solid-state drives (SSDs), hybrid SSDs have been proposed. Hybrid SSDs consist of more than two types of NAND flash memories or NAND flash memories and storage-class memories (SCMs). However, the cost of hybrid SSDs adopting SCMs is more expensive than that of NAND flash only SSDs because of the high bit cost of SCMs. This paper proposes unique hybrid SSDs with two-dimensional (2D) horizontal multi-level cell (MLC)/three-dimensional (3D) vertical triple-level cell (TLC) NAND flash memories to achieve higher cost-performance. The 2D-MLC/3D-TLC hybrid SSD achieves up to 31% higher performance than the conventional 2D-MLC/2D-TLC hybrid SSD. The factors of different performance between the proposed hybrid SSD and the conventional hybrid SSD are analyzed by changing its block size, read/write/erase latencies, and write unit of 3D-TLC NAND flash memory, by means of a transaction-level modeling simulator.

  9. Forming three-dimensional closed shapes from two-dimensional soft ribbons by controlled buckling

    NASA Astrophysics Data System (ADS)

    Aoki, Michio; Juang, Jia-Yang

    2018-02-01

    Conventional manufacturing techniques (moulding, machining and casting) exist to produce three-dimensional (3D) shapes. However, these industrial processes are typically geared for mass production and are not directly applicable to residential settings, where inexpensive and versatile tools are desirable. Moreover, those techniques are, in general, not adequate to process soft elastic materials. Here, we introduce a new concept of forming 3D closed hollow shapes from two-dimensional (2D) elastic ribbons by controlled buckling. We numerically and experimentally characterize how the profile and thickness of the ribbon determine its buckled shape. We find a 2D master profile with which various elliptical 3D shapes can be formed. More complex natural and artificial hollow shapes, such as strawberry, hourglass and wheel, can also be achieved via strategic design and pattern engraving on the ribbons. The nonlinear response of the post-buckling regime is rationalized through finite-element analysis, which shows good quantitative agreement with experiments. This robust fabrication should complement conventional techniques and provide a rich arena for future studies on the mechanics and new applications of elastic hollow structures.

  10. Density-controlled quantum Hall ferromagnetic transition in a two-dimensional hole system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, T. M.; Tracy, L. A.; Laroche, D.

    We typically achieve quantum Hall ferromagnetic transitions by increasing the Zeeman energy through in-situ sample rotation, while transitions in systems with pseudo-spin indices can be induced by gate control. We report here a gate-controlled quantum Hall ferromagnetic transition between two real spin states in a conventional two-dimensional system without any in-plane magnetic field. We also show that the ratio of the Zeeman splitting to the cyclotron gap in a Ge two-dimensional hole system increases with decreasing density owing to inter-carrier interactions. Below a critical density of ~2.4 × 10^10 cm^-2, this ratio grows greater than 1, resulting in a ferromagnetic ground state at filling factor ν = 2. At the critical density, a resistance peak due to the formation of microscopic domains of opposite spin orientations is observed. Such gate-controlled spin polarization in the quantum Hall regime opens the door to realizing Majorana modes using two-dimensional systems in conventional, low-spin-orbit-coupling semiconductors.

  11. Microwave processing of a dental ceramic used in computer-aided design/computer-aided manufacturing.

    PubMed

    Pendola, Martin; Saha, Subrata

    2015-01-01

    Because of their favorable mechanical properties and natural esthetics, ceramics are widely used in restorative dentistry. The conventional ceramic sintering process required for their use is usually slow, however, and the equipment has an elevated energy consumption. Sintering processes that use microwaves have several advantages compared to regular sintering: shorter processing times, lower energy consumption, and the capacity for volumetric heating. The objective of this study was to test the mechanical properties of a dental ceramic used in computer-aided design/computer-aided manufacturing (CAD/CAM) after the specimens were processed with microwave hybrid sintering. Density, hardness, and bending strength were measured. When ceramic specimens were sintered with microwaves, the processing times were reduced and protocols were simplified. Hardness was improved almost 20% compared to regular sintering, and flexural strength measurements suggested that specimens were approximately 50% stronger than specimens sintered in a conventional system. Microwave hybrid sintering may preserve or improve the mechanical properties of dental ceramics designed for CAD/CAM processing systems, reducing processing and waiting times.

  12. Improved Virtual Planning for Bimaxillary Orthognathic Surgery.

    PubMed

    Hatamleh, Muhanad; Turner, Catherine; Bhamrah, Gurprit; Mack, Gavin; Osher, Jonas

    2016-09-01

    Conventional model surgery planning for bimaxillary orthognathic surgery can be laborious and time-consuming and may contain potential errors; hence, three-dimensional (3D) virtual orthognathic planning has proven to be an efficient, reliable, and cost-effective alternative. In this report, 3D planning is described for a patient presenting with a Class III incisor relationship on a Skeletal III base with pan-facial asymmetry complicated by reverse overjet and anterior open bite. Combined scan data from direct cone-beam computed tomography and an indirect dental scan were used in the planning. Additionally, a new method of establishing optimum intercuspation, by scanning the dental casts in final occlusion and positioning them in the composite-scan model, is shown. Furthermore, conventional model surgery planning was carried out following an in-house protocol. Intermediate and final intermaxillary splints were produced following the conventional method and 3D printing. Three-dimensional planning showed great accuracy and treatment outcome and reduced laboratory time in comparison with the conventional method. Establishing the final dental occlusion on casts and integrating it into the final 3D planning enabled us to achieve the best possible intercuspation.

  13. Electron tunneling in nanoscale electrodes for battery applications

    NASA Astrophysics Data System (ADS)

    Yamada, Hidenori; Narayanan, Rajaram; Bandaru, Prabhakar R.

    2018-03-01

    It is shown that the electrical current that may be obtained from a nanoscale electrochemical system is sensitive to the dimensionality of the electrode and the density of states (DOS). Considering the DOS of lower dimensional systems, such as two-dimensional graphene, one-dimensional nanotubes, or zero-dimensional quantum dots, yields a distinct variation of the current-voltage characteristics. Such aspects go beyond conventional Arrhenius theory based kinetics which are often used in experimental interpretation. The obtained insights may be adapted to other devices, such as solid-state batteries. It is also indicated that electron transport in such devices may be considered through electron tunneling.

  14. General flat four-dimensional world pictures and clock systems

    NASA Technical Reports Server (NTRS)

    Hsu, J. P.; Underwood, J. A.

    1978-01-01

    We explore the mathematical structure and the physical implications of a general four-dimensional symmetry framework which is consistent with the Poincare-Einstein principle of relativity for physical laws and with experiments. In particular, we discuss a four-dimensional framework in which all observers in different frames use one and the same grid of clocks. The general framework includes special relativity and a recently proposed new four-dimensional symmetry with a nonuniversal light speed as two special simple cases. The connection between the properties of light propagation and the convention concerning clock systems is also discussed, and is seen to be nonunique within the four-dimensional framework.

  15. Complex supramolecular interfacial tessellation through convergent multi-step reaction of a dissymmetric simple organic precursor

    NASA Astrophysics Data System (ADS)

    Zhang, Yi-Qi; Paszkiewicz, Mateusz; Du, Ping; Zhang, Liding; Lin, Tao; Chen, Zhi; Klyatskaya, Svetlana; Ruben, Mario; Seitsonen, Ari P.; Barth, Johannes V.; Klappenberger, Florian

    2018-03-01

    Interfacial supramolecular self-assembly represents a powerful tool for constructing regular and quasicrystalline materials. In particular, complex two-dimensional molecular tessellations, such as semi-regular Archimedean tilings with regular polygons, promise unique properties related to their nontrivial structures. However, their formation is challenging, because current methods are largely limited to the direct assembly of precursors, that is, where structure formation relies on molecular interactions without using chemical transformations. Here, we have chosen ethynyl-iodophenanthrene (which features dissymmetry in both geometry and reactivity) as a single starting precursor to generate the rare semi-regular (3.4.6.4) Archimedean tiling with long-range order on an atomically flat substrate through a multi-step reaction. Intriguingly, the individual chemical transformations converge to form a symmetric alkynyl-Ag-alkynyl complex as the new tecton in high yields. Using a combination of microscopy and X-ray spectroscopy tools, as well as computational modelling, we show that in situ generated catalytic Ag complexes mediate the tecton conversion.

  16. An analytical method for the inverse Cauchy problem of the Lamé equation in a rectangle

    NASA Astrophysics Data System (ADS)

    Grigor’ev, Yu

    2018-04-01

    In this paper, we present an analytical computational method for the inverse Cauchy problem of the Lamé equation in elasticity theory. A rectangular domain is frequently used in engineering structures, and we only consider the analytical solution in a two-dimensional rectangle, wherein a missing boundary condition is recovered from the full measurement of stresses and displacements on an accessible boundary. The essence of the method consists in solving three independent Cauchy problems for the Laplace and Poisson equations. For each of them, the Fourier series is used to formulate a first-kind Fredholm integral equation for the unknown function of data. Then, we use a Lavrentiev regularization method, and the termwise-separable property of the kernel function allows us to obtain a closed-form regularized solution. As a result, for the displacement components, we obtain solutions in the form of a sum of series with three regularization parameters. The uniform convergence and error estimation of the regularized solutions are proved.

  17. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2010-01-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366
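
    The FFT-based Green's-function convolution mentioned above can be sketched on a regular 2-D grid as follows; the 1/r kernel and its crude regularisation at the origin are illustrative stand-ins for the acoustic Green's function, not the paper's implementation:

```python
import numpy as np

def fft_convolve_green(source):
    """Convolve a source distribution on a regular n x n grid with a
    translation-invariant kernel using zero-padded FFTs -- the device
    used to accelerate finest-level interactions on regular grids."""
    n = source.shape[0]
    pad = 4 * n  # >= 3n - 2, so circular wrap-around cannot occur
    y, x = np.mgrid[-n + 1:n, -n + 1:n]   # all pairwise grid offsets
    r = np.hypot(x, y)
    r[n - 1, n - 1] = 1.0                 # regularise the r = 0 singularity
    kernel = 1.0 / r
    full = np.fft.irfft2(np.fft.rfft2(kernel, (pad, pad))
                         * np.fft.rfft2(source, (pad, pad)), (pad, pad))
    # slice out the part of the linear convolution aligned with the grid
    return full[n - 1:2 * n - 1, n - 1:2 * n - 1]

src = np.zeros((8, 8))
src[2, 3] = 1.0                    # unit point source
field = fft_convolve_green(src)    # field[p, q] = kernel at offset (p-2, q-3)
```

    The direct sum costs O(n^4) for an n x n grid, while the FFT route costs O(n^2 log n), which is the saving exploited for the finest-level boxes.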

  18. The fast multipole method and Fourier convolution for the solution of acoustic scattering on regular volumetric grids

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2010-10-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  19. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2010-10-20

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  20. Fabrication and structural studies of opal-III nitride nanocomposites

    NASA Astrophysics Data System (ADS)

    Davydov, V. Yu; Golubev, V. G.; Kartenko, N. F.; Kurdyukov, D. A.; Pevtsov, A. B.; Sharenkova, N. V.; Brogueira, P.; Schwarz, R.

    2000-12-01

    In this paper, regular three-dimensional systems of GaN, InN and InGaN nanoclusters have been fabricated for the first time in a void sublattice of artificial opal. The opal consisted of 220 nm diameter close packed amorphous silica spheres and had a regular sublattice of voids accessible to filling by other substances. GaN, InN and InGaN were synthesized directly in the opal voids from precursors such as metal salts and nitrogen hydrides. The composites' structures have been characterized using x-ray diffraction, Raman spectroscopy, atomic force microscopy and optical measurements.

  1. Equilibrium and nonequilibrium models on Solomon networks

    NASA Astrophysics Data System (ADS)

    Lima, F. W. S.

    2016-05-01

    We investigate the critical properties of equilibrium and nonequilibrium systems on Solomon networks. The equilibrium and nonequilibrium systems studied here are the Ising and majority-vote models, respectively. These systems are simulated by applying the Monte Carlo method. We calculate the critical points, as well as the critical exponent ratios γ/ν, β/ν, and 1/ν. We find that both systems present identical exponents on Solomon networks and belong to a different universality class than the regular two-dimensional ferromagnetic model. Our results are in agreement with the Grinstein criterion for models with up-down symmetry on regular lattices.
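
    For the regular-lattice reference case mentioned at the end, a minimal Metropolis Monte Carlo sketch of the two-dimensional ferromagnetic Ising model (lattice size, temperature, and sweep count are illustrative; this is not the Solomon-network simulation of the paper):

```python
import math
import random

def metropolis_ising(L=16, T=1.5, sweeps=200, seed=2):
    """Metropolis dynamics for the 2-D Ising model on an L x L periodic
    lattice, started from the ordered state.  Returns the absolute
    magnetisation per spin; T = 1.5 is below T_c ~ 2.269."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
              + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
        dE = 2.0 * spin[i][j] * nb     # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spin[i][j] = -spin[i][j]
    return abs(sum(sum(row) for row in spin)) / (L * L)

m = metropolis_ising()
```

    Sweeping T across the transition and measuring moments of m on lattices of different L is how the exponent ratios γ/ν and β/ν are extracted via finite-size scaling.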

  2. Regularization by Functions of Bounded Variation and Applications to Image Enhancement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casas, E.; Kunisch, K.; Pola, C.

    1999-09-15

    Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
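
    A one-dimensional sketch of bounded-variation (total-variation) regularization for denoising a blocky signal, using plain gradient descent on a smoothed objective; the smoothing parameter, weight, and step size are illustrative assumptions, not the primal-dual algorithm of the paper:

```python
import math

def tv_denoise_1d(y, lam=0.3, eps=1e-3, n_iter=2000, step=0.02):
    """Gradient descent on the smoothed BV objective
        0.5 * sum (x[i]-y[i])^2 + lam * sum sqrt((x[i+1]-x[i])^2 + eps),
    a 1-D stand-in for BV-seminorm regularization of blocky signals."""
    x = list(y)
    n = len(x)
    for _ in range(n_iter):
        g = [x[i] - y[i] for i in range(n)]       # data-fidelity gradient
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            w = lam * d / math.sqrt(d * d + eps)  # smoothed |.| derivative
            g[i] -= w
            g[i + 1] += w
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# blocky step signal with a spurious one-sample bump
y = [0.0] * 10 + [1.0] * 10
y[5] = 0.4
x = tv_denoise_1d(y)  # the bump is flattened, the step edge is preserved
```

    This edge-preserving behaviour (flatten small oscillations, keep large jumps) is exactly why BV seminorms suit the blocky-image denoising of the abstract.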

  3. First moments of nucleon generalized parton distributions

    DOE PAGES

    Wang, P.; Thomas, A. W.

    2010-06-01

    We extrapolate the first moments of the generalized parton distributions using heavy-baryon chiral perturbation theory. The calculation is carried out at the one-loop level with finite-range regularization. The description of the lattice data is satisfactory, and the extrapolated moments at the physical pion mass are consistent with the results obtained with dimensional regularization, although the extrapolation in the momentum transfer to t = 0 does show sensitivity to form-factor effects, which lie outside the realm of chiral perturbation theory. We discuss the significance of the results in the light of modern experiments as well as QCD-inspired models.

  4. Existence, uniqueness and regularity of a time-periodic probability density distribution arising in a sedimentation-diffusion problem

    NASA Technical Reports Server (NTRS)

    Nitsche, Ludwig C.; Nitsche, Johannes M.; Brenner, Howard

    1988-01-01

    The sedimentation and diffusion of a nonneutrally buoyant Brownian particle in vertical fluid-filled cylinder of finite length which is instantaneously inverted at regular intervals are investigated analytically. A one-dimensional convective-diffusive equation is derived to describe the temporal and spatial evolution of the probability density; a periodicity condition is formulated; the applicability of Fredholm theory is established; and the parameter-space regions are determined within which the existence and uniqueness of solutions are guaranteed. Numerical results for sample problems are presented graphically and briefly characterized.

  5. Numerical assessment of bone remodeling around conventionally and early loaded titanium and titanium-zirconium alloy dental implants.

    PubMed

    Akça, Kıvanç; Eser, Atılım; Çavuşoğlu, Yeliz; Sağırkaya, Elçin; Çehreli, Murat Cavit

    2015-05-01

    The aim of this study was to investigate conventionally and early loaded titanium and titanium-zirconium alloy implants by three-dimensional finite element stress analysis. Three-dimensional model of a dental implant was created and a thread area was established as a region of interest in trabecular bone to study a localized part of the global model with a refined mesh. The peri-implant tissues around conventionally loaded (model 1) and early loaded (model 2) implants were implemented and were used to explore principal stresses, displacement values, and equivalent strains in the peri-implant region of titanium and titanium-zirconium implants under static load of 300 N with or without 30° inclination applied on top of the abutment surface. Under axial loading, principal stresses in both models were comparable for both implants and models. Under oblique loading, principal stresses around titanium-zirconium implants were slightly higher in both models. Comparable stress magnitudes were observed in both models. The displacement values and equivalent strain amplitudes around both implants and models were similar. Peri-implant bone around titanium and titanium-zirconium implants experiences similar stress magnitudes coupled with intraosseous implant displacement values under conventional loading and early loading simulations. Titanium-zirconium implants have biomechanical outcome comparable to conventional titanium implants under conventional loading and early loading.

  6. Patch-based image reconstruction for PET using prior-image derived dictionaries

    NASA Astrophysics Data System (ADS)

    Tahaei, Marzieh S.; Reader, Andrew J.

    2016-09-01

    In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
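
    As a rough illustration of the coefficient estimation described above, the following sketch runs MLEM on basis-vector coefficients rather than on voxels. The sizes, the random stand-in system matrix, and the random "basis" are all toy assumptions, not the paper's setup:

```python
import numpy as np

# Toy sketch (not the paper's implementation): MLEM estimation of
# coefficients "a" for patch-based basis vectors D, so the image is x = D a.
# P is a random stand-in system matrix; y are simulated Poisson counts.
rng = np.random.default_rng(0)
n_pix, n_basis, n_bins = 64, 16, 96
D = np.abs(rng.normal(size=(n_pix, n_basis)))   # basis vectors (e.g. from MR patches)
P = np.abs(rng.normal(size=(n_bins, n_pix)))    # forward projector stand-in
A = P @ D                                       # combined model: y ~ Poisson(A a)
a_true = np.abs(rng.normal(size=n_basis))
y = rng.poisson(A @ a_true).astype(float)

a = np.ones(n_basis)                            # nonnegative initialization
sens = A.sum(axis=0)                            # sensitivity term (A^T 1)
for _ in range(200):
    ratio = y / np.clip(A @ a, 1e-12, None)     # measured / expected counts
    a *= (A.T @ ratio) / sens                   # multiplicative MLEM update
x = D @ a                                       # reconstructed image
```

    The multiplicative update keeps the coefficients nonnegative at every iteration, which is the property that makes MLEM attractive for count data.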

  7. A Fractal Excursion.

    ERIC Educational Resources Information Center

    Camp, Dane R.

    1991-01-01

    After introducing the two-dimensional Koch curve, which is generated by simple recursions on an equilateral triangle, the process is extended to three dimensions with simple recursions on a regular tetrahedron. Included, for both fractal sequences, are iterative formulae, illustrations of the first several iterations, and a sample PASCAL program.…
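
    The planar Koch recursion described above can be sketched as follows (in Python rather than the PASCAL of the article; the rotation-matrix construction of the apex is one common formulation):

```python
import numpy as np

# Sketch of the 2-D Koch recursion: each segment is replaced by four
# segments, erecting an equilateral bump on the middle third.
def koch(p0, p1, depth):
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    if depth == 0:
        return [p0, p1]
    a = p0 + (p1 - p0) / 3.0
    b = p0 + 2.0 * (p1 - p0) / 3.0
    d = b - a
    # rotate the middle third by +60 degrees to obtain the apex
    rot = np.array([[0.5, -np.sqrt(3) / 2], [np.sqrt(3) / 2, 0.5]])
    apex = a + rot @ d
    pts = []
    for q0, q1 in ((p0, a), (a, apex), (apex, b), (b, p1)):
        seg = koch(q0, q1, depth - 1)
        pts.extend(seg[:-1])          # drop shared endpoint to avoid duplicates
    pts.append(p1)
    return pts

pts = koch((0, 0), (1, 0), 3)         # 4^3 segments, hence 4^3 + 1 points
```

    Each iteration multiplies the curve length by 4/3, which is the source of the fractal's unbounded perimeter.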

  8. Anisotropic smoothing regularization (AnSR) in Thirion's Demons registration evaluates brain MRI tissue changes post-laser ablation.

    PubMed

    Hwuang, Eileen; Danish, Shabbar; Rusu, Mirabela; Sparks, Rachel; Toth, Robert; Madabhushi, Anant

    2013-01-01

    MRI-guided laser-induced interstitial thermal therapy (LITT) is a form of laser ablation and a potential alternative to craniotomy in treating glioblastoma multiforme (GBM) and epilepsy patients, but its effectiveness has yet to be fully evaluated. One way of assessing short-term treatment of LITT is by evaluating changes in post-treatment MRI as a measure of response. Alignment of pre- and post-LITT MRI in GBM and epilepsy patients via nonrigid registration is necessary to detect subtle localized treatment changes on imaging, which can then be correlated with patient outcome. A popular deformable registration scheme in the context of brain imaging is Thirion's Demons algorithm, but its flexibility often introduces artifacts without physical significance, which has conventionally been corrected by Gaussian smoothing of the deformation field. In order to prevent such artifacts, we instead present the Anisotropic smoothing regularizer (AnSR) which utilizes edge-detection and denoising within the Demons framework to regularize the deformation field at each iteration of the registration more aggressively in regions of homogeneously oriented displacements while simultaneously regularizing less aggressively in areas containing heterogeneous local deformation and tissue interfaces. In contrast, the conventional Gaussian smoothing regularizer (GaSR) uniformly averages over the entire deformation field, without carefully accounting for transitions across tissue boundaries and local displacements in the deformation field. In this work we employ AnSR within the Demons algorithm and perform pairwise registration on 2D synthetic brain MRI with and without noise after inducing a deformation that models shrinkage of the target region expected from LITT. We also applied Demons with AnSR for registering clinical T1-weighted MRI for one epilepsy and one GBM patient pre- and post-LITT. 
Our results demonstrate that by maintaining select displacements in the deformation field, AnSR outperforms both GaSR and no regularizer (NoR) in terms of normalized sum of squared differences (NSSD), yielding values of 0.743, 0.807, and 1.000, respectively, for the GBM case.
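
    The contrast between uniform Gaussian smoothing and edge-aware regularization of a deformation field can be sketched as below. This is an illustrative toy, not the authors' AnSR: the edge measure, the Gaussian-shaped blend weight, and the box blur are all assumptions:

```python
import numpy as np

# Illustrative sketch (not the authors' AnSR): regularize a 2-D deformation
# field by blending a smoothed copy with the original according to a local
# edge measure, so homogeneous regions are smoothed aggressively while
# strong local gradients (tissue interfaces) are kept.
def box_blur(a, r=2):
    k = np.ones(2 * r + 1) / (2 * r + 1)
    blur1d = lambda v: np.convolve(np.pad(v, r, mode='edge'), k, mode='valid')
    a = np.apply_along_axis(blur1d, 0, a)
    return np.apply_along_axis(blur1d, 1, a)

def edge_aware_smooth(defo, k=0.1):
    # defo: (H, W, 2) displacement field
    out = np.empty_like(defo)
    for c in range(defo.shape[-1]):
        comp = defo[..., c]
        gy, gx = np.gradient(comp)
        edge = np.hypot(gx, gy)
        w = np.exp(-(edge / k) ** 2)      # ~1 in flat regions, ~0 at edges
        out[..., c] = w * box_blur(comp) + (1 - w) * comp
    return out

rng = np.random.default_rng(6)
step = np.zeros((32, 32)); step[:, 16:] = 1.0      # one sharp interface
defo = np.stack([step + 0.01 * rng.normal(size=(32, 32)),
                 np.zeros((32, 32))], axis=-1)
smooth = edge_aware_smooth(defo)
```

    On this toy field, the noisy flat regions are averaged down while the unit step across the interface survives almost untouched, which is the qualitative behavior the abstract attributes to AnSR versus GaSR.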

  9. Optimal Detection Range of RFID Tag for RFID-based Positioning System Using the k-NN Algorithm.

    PubMed

    Han, Soohee; Kim, Junghwan; Park, Choung-Hwan; Yoon, Hee-Cheon; Heo, Joon

    2009-01-01

    Positioning technology to track a moving object is an important and essential component of ubiquitous computing environments and applications. An RFID-based positioning system using the k-nearest neighbor (k-NN) algorithm can determine the position of a moving reader from observed reference data. In this study, the optimal detection range of an RFID-based positioning system was determined on the principle that tag spacing can be derived from the detection range. It was assumed that reference tags without signal strength information are regularly distributed in 1-, 2- and 3-dimensional spaces. The optimal detection range was determined, through analytical and numerical approaches, to be 125% of the tag-spacing distance in 1-dimensional space. Through numerical approaches, the range was found to be 134% in 2-dimensional space and 143% in 3-dimensional space.
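
    The principle can be illustrated in 1-dimensional space with a simplified sketch. The estimator here (a plain mean over all detected tags) is an assumption standing in for the k-NN scheme, since the tags carry no signal-strength information:

```python
import numpy as np

# Sketch of the positioning principle: reference tags on a regular 1-D grid
# (spacing s), a reader detects every tag within range r, and estimates its
# position as the mean of the detected tags' coordinates.
def estimate_position(reader_x, spacing=1.0, detect_range=1.25, n_tags=101):
    tags = spacing * np.arange(n_tags)               # regularly distributed tags
    hits = tags[np.abs(tags - reader_x) <= detect_range]
    return hits.mean()                               # k-NN style estimate

# With r = 125% of the spacing, the detected neighborhood stays symmetric
# enough that the estimate error is bounded by a fraction of the spacing.
est = estimate_position(50.3)
```

    Making the range too small risks detecting no tags at all; making it too large flattens the estimate toward the grid center, which is why an optimum exists.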

  10. Equation of state of the one- and three-dimensional Bose-Bose gases

    NASA Astrophysics Data System (ADS)

    Chiquillo, Emerson

    2018-06-01

    We calculate the equation of state of Bose-Bose gases in one and three dimensions in the framework of an effective quantum field theory. The beyond-mean-field approximation at zero temperature and the one-loop finite-temperature results are obtained performing functional integration on a local effective action. The ultraviolet divergent zero-point quantum fluctuations are removed by means of dimensional regularization. We derive the nonlinear Schrödinger equation to describe one- and three-dimensional Bose-Bose mixtures and solve it analytically in the one-dimensional scenario. This equation supports self-trapped brightlike solitonic droplets and self-trapped darklike solitons. At low temperature, we also find that the pressure and the number of particles of symmetric quantum droplets have a nontrivial dependence on the chemical potential and the difference between the intra- and the interspecies coupling constants.

  11. 76 FR 68778 - Request for Information and Recommendations on Resolutions, Decisions, and Agenda Items for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-07

    ..., interjurisdictional resource management agencies, and international nongovernmental organizations with relevant... Parties present object: (a) International agencies or bodies, either governmental or nongovernmental, and... at the Sixteenth Regular Meeting of the Conference of the Parties to the Convention on International...

  12. Chemistry in a Large, Multidisciplinary Laboratory.

    ERIC Educational Resources Information Center

    Lingren, Wesley E.; Hughson, Robert C.

    1982-01-01

    Describes a science facility built at Seattle Pacific University for approximately 70 percent of the capital cost of a conventional science building. The building serves seven disciplines on a regular basis. The operation of the multidisciplinary laboratory, special features, laboratory security, and student experience/reactions are highlighted.…

  13. 77 FR 76040 - Government-wide Travel Advisory Committee (GTAC)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-26

    ... the Administrator of General Services on a regular basis. There exists no other source within the... companies, airline companies, travel and lodging associations, convention and visitor bureaus, state and... are open to public observers, unless prior notice has been provided for a closed meeting. Nomination...

  14. Beyond Special Education: A New Vision of Academic Support

    ERIC Educational Resources Information Center

    Mowschenson, Julie Joyal; Weintraub, Robert J.

    2009-01-01

    This article describes Brookline High School's new Tutorial Program, an alternative to the more traditional special education learning center. The Tutorial serves students with learning disabilities, replacing conventional special education support with academic guidance from regular classroom teachers. Tutorial students meet daily with a team of…

  15. Periodic measure for the stochastic equation of the barotropic viscous gas in a discretized one-dimensional domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benseghir, Rym, E-mail: benseghirrym@ymail.com, E-mail: benseghirrym@ymail.com; Benchettah, Azzedine, E-mail: abenchettah@hotmail.com; Raynaud de Fitte, Paul, E-mail: prf@univ-rouen.fr

    2015-11-30

    A stochastic equation system corresponding to the description of the motion of a barotropic viscous gas in a discretized one-dimensional domain with a weight regularizing the density is considered. In [2], the existence of an invariant measure was established for this discretized problem in the stationary case. In this paper, applying a slightly modified version of Khas’minskii’s theorem [5], we generalize this result in the periodic case by proving the existence of a periodic measure for this problem.

  16. Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization

    PubMed Central

    Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.

    2014-01-01

    High radiation dose in CT scans increases a lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low-contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term is proposed to preferentially perform smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight to the original total variation norm. During the reconstruction process, the pixels at edges are gradually identified and given small penalty weights. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison purposes. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in the low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it preserves more information about low-contrast structures and therefore maintains acceptable spatial resolution. PMID:21860076
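
    The edge-preserving weighting idea can be sketched in one dimension. This is an illustrative gradient-descent toy on a denoising energy, not the authors' GPU reconstruction; the particular weight function and all parameter values are assumptions:

```python
import numpy as np

# 1-D sketch of weighted-TV + data-fidelity minimization: pixels with large
# gradients (candidate edges) receive a small TV penalty weight, so edges
# are smoothed less than flat regions. Weights are lagged (held fixed
# within each gradient step).
def ep_tv_denoise(y, lam=0.3, eps=1e-3, iters=300, step=0.2):
    x = y.copy()
    for _ in range(iters):
        g = np.diff(x)
        w = 1.0 / (1.0 + (g / 0.5) ** 2)          # small penalty weight at edges
        phi = w * g / np.sqrt(g**2 + eps)          # derivative of weighted |g|
        grad_tv = np.zeros_like(x)
        grad_tv[:-1] -= phi
        grad_tv[1:] += phi
        x -= step * ((x - y) + lam * grad_tv)
    return x

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50)])   # one sharp edge
noisy = clean + 0.1 * rng.normal(size=100)
rec = ep_tv_denoise(noisy)
```

    With a uniform weight (w = 1) this reduces to plain TV; the down-weighting at large gradients is what lets the step edge survive while flat-region noise is removed.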

  17. Enumeration of Extended m-Regular Linear Stacks.

    PubMed

    Guo, Qiang-Hui; Sun, Lisa H; Wang, Jian

    2016-12-01

    The contact map of a protein fold in the two-dimensional (2D) square lattice has arc length at least 3, and each internal vertex has degree at most 2, whereas the two terminal vertices have degree at most 3. Recently, Chen, Guo, Sun, and Wang studied the enumeration of [Formula: see text]-regular linear stacks, where each arc has length at least [Formula: see text] and the degree of each vertex is bounded by 2. Since the two terminal points of a protein fold in the 2D square lattice may form contacts with at most three adjacent lattice points, we are led to the study of extended [Formula: see text]-regular linear stacks, in which the degree of each terminal point is bounded by 3. This model is closer to real protein contact maps. Denote the generating functions of the [Formula: see text]-regular linear stacks and the extended [Formula: see text]-regular linear stacks by [Formula: see text] and [Formula: see text], respectively. We show that [Formula: see text] can be written as a rational function of [Formula: see text]. For a certain [Formula: see text], by eliminating [Formula: see text], we obtain an equation satisfied by [Formula: see text] and derive the asymptotic formula for the number of [Formula: see text]-regular linear stacks of length [Formula: see text].

  18. Isotropic-resolution linear-array-based photoacoustic computed tomography through inverse Radon transform

    NASA Astrophysics Data System (ADS)

    Li, Guo; Xia, Jun; Li, Lei; Wang, Lidai; Wang, Lihong V.

    2015-03-01

    Linear transducer arrays are readily available for ultrasonic detection in photoacoustic computed tomography. They offer low cost, hand-held convenience, and conventional ultrasonic imaging. However, the elevational resolution of linear transducer arrays, which is usually determined by the weak focus of the cylindrical acoustic lens, is about one order of magnitude worse than the in-plane axial and lateral spatial resolutions. Therefore, conventional linear scanning along the elevational direction cannot provide high-quality three-dimensional photoacoustic images due to the anisotropic spatial resolutions. Here we propose an innovative method to achieve isotropic resolutions for three-dimensional photoacoustic images through combined linear and rotational scanning. In each scan step, we first elevationally scan the linear transducer array, then rotate it about its center in small steps, and scan again until 180 degrees have been covered. To reconstruct isotropic three-dimensional images from the multi-directional scanning dataset, we use the standard inverse Radon transform originating from X-ray CT. We acquired a three-dimensional microsphere phantom image through the inverse Radon transform method and compared it with a single-elevational-scan three-dimensional image. The comparison shows that our method improves the elevational resolution by up to one order of magnitude, approaching the in-plane lateral-direction resolution. In vivo rat images were also acquired.
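
    The reconstruction step can be illustrated with a minimal parallel-beam filtered backprojection, i.e. the standard inverse Radon inversion the abstract refers to. The point-target sinogram below is synthesized analytically; sizes and the ramp filter choice are assumptions of this sketch:

```python
import numpy as np

# Minimal parallel-beam filtered backprojection: ramp-filter each projection
# in the Fourier domain, then smear it back across the image grid.
def fbp(sinogram, thetas, n):
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))                       # ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    xs = np.arange(n) - n // 2
    X, Y = np.meshgrid(xs, xs)
    det = np.arange(n_det) - n_det // 2
    img = np.zeros((n, n))
    for proj, th in zip(filtered, thetas):
        t = X * np.cos(th) + Y * np.sin(th)    # detector coordinate per pixel
        img += np.interp(t, det, proj)         # linear interpolation + smear
    return img * np.pi / len(thetas)

# Point target at (x, y) = (10, -5): its projection at angle th is a spike
# at t = x*cos(th) + y*sin(th), mimicking the rotational scan geometry.
n, n_det = 64, 95
thetas = np.linspace(0, np.pi, 90, endpoint=False)
sino = np.zeros((len(thetas), n_det))
for i, th in enumerate(thetas):
    t = 10 * np.cos(th) + (-5) * np.sin(th)
    sino[i, int(round(t)) + n_det // 2] = 1.0
img = fbp(sino, thetas, n)
```

    The backprojections add coherently only at the true point location, which is why covering a half-turn of view angles restores isotropic resolution.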

  19. Eight weeks of a combination of high intensity interval training and conventional training reduce visceral adiposity and improve physical fitness: a group-based intervention.

    PubMed

    Giannaki, Christoforos D; Aphamis, George; Sakkis, Panikos; Hadjicharalambous, Marios

    2016-04-01

    High intensity interval training (HIIT) has recently been promoted as an effective, low-volume and time-efficient training method for improving fitness and health-related parameters. The aim of the current study was to examine the effect of a combination of group-based HIIT and conventional gym training on physical fitness and body composition parameters in healthy adults. Thirty-nine healthy adults volunteered to participate in this eight-week intervention study. Twenty-three participants performed regular gym training 4 days a week (C group), whereas the remaining 16 participants engaged twice a week in HIIT and twice a week in the same regular gym training as the other group (HIIT-C group). Total body fat and visceral adiposity levels were calculated using bioelectrical impedance analysis. Physical fitness parameters such as cardiorespiratory fitness, speed, lower limb explosiveness, flexibility and isometric arm strength were assessed through a battery of field tests. Both exercise programs were effective in reducing total body fat and visceral adiposity (P<0.05) and improving handgrip strength, sprint time, jumping ability and flexibility (P<0.05), whilst only the combination of HIIT and conventional training improved cardiorespiratory fitness levels (P<0.05). A between-group analysis of changes revealed that HIIT-C resulted in a significantly greater reduction in both abdominal girth and visceral adiposity compared with conventional training (P<0.05). Eight weeks of combined group-based HIIT and conventional training improve various physical fitness parameters and reduce both total and visceral fat levels. This type of training was also superior to conventional exercise training alone in reducing visceral adiposity. Group-based HIIT may be considered a good method for individuals who exercise in gyms and aim to acquire significant fitness benefits in a relatively short period of time.

  20. Contrast-Enhanced Magnetic Resonance Cholangiography: Practical Tips and Clinical Indications for Biliary Disease Management.

    PubMed

    Palmucci, Stefano; Roccasalva, Federica; Piccoli, Marina; Fuccio Sanzà, Giovanni; Foti, Pietro Valerio; Ragozzino, Alfonso; Milone, Pietro; Ettorre, Giovanni Carlo

    2017-01-01

    Since its introduction, MRCP has been improved over the years due to the introduction of several technical advances and innovations. It is a noninvasive method for biliary tree representation, based on heavily T2-weighted images. Conventionally, its protocol includes two-dimensional single-shot fast spin-echo images, acquired with thin sections or with multiple thick slabs. In recent years, three-dimensional T2-weighted fast-recovery fast spin-echo images have been added to the conventional protocol, increasing the possibility of biliary anatomy demonstration and leading to a significant benefit over conventional 2D imaging. A significant innovation has been reached with the introduction of hepatobiliary contrast agents, represented by gadoxetic acid and gadobenate dimeglumine: they are excreted into the bile canaliculi, allowing opacification of the biliary tree. Recently, 3D interpolated T1-weighted spoiled gradient echo images have been proposed for the evaluation of the biliary tree, obtaining images after hepatobiliary contrast agent administration. Thus, the acquisition of these excretory phases improves the diagnostic capability of conventional MRCP based on T2 acquisitions. In this paper, technical features of contrast-enhanced magnetic resonance cholangiography are briefly discussed; the main diagnostic tips of the hepatobiliary phase are shown, emphasizing the benefit of enhanced cholangiography in comparison with conventional MRCP.

  1. [Postmortem CT examination in a case of alleged drowning--a case report].

    PubMed

    Woźniak, Krzysztof; Urbanik, Andrzej; Rzepecka-Woźniak, Ewa; Moskała, Artur; Kłys, Małgorzata

    2009-01-01

    The authors present an analysis of a postmortem CT examination in a case of drowning in fresh water of a young male. The results of the conventional forensic autopsy and the radiologic examination were compared. The analysis is illustrated by two-dimensional and three-dimensional reconstructions based on the DICOM files obtained during the postmortem CT examination.

  2. Sentinel Lymph Node Biopsy: Quantification of Lymphedema Risk Reduction

    DTIC Science & Technology

    2006-10-01

    dimensional internal mammary lymphoscintigraphy: implications for radiation therapy treatment planning for breast carcinoma. Int J Radiat Oncol Biol Phys...techniques based on conventional photon beams, intensity modulated photon beams and proton beams for therapy of intact breast. Radiother Oncol. Feb...

  3. Robust and sparse correlation matrix estimation for the analysis of high-dimensional genomics data.

    PubMed

    Serra, Angela; Coretto, Pietro; Fratello, Michele; Tagliaferri, Roberto; Stegle, Oliver

    2018-02-15

    Microarray technology can be used to study the expression of thousands of genes across a number of different experimental conditions, usually hundreds. The underlying principle is that genes sharing similar expression patterns, across different samples, can be part of the same co-expression system, or they may share the same biological functions. Groups of genes are usually identified based on cluster analysis. Clustering methods rely on the similarity matrix between genes. A common choice to measure similarity is to compute the sample correlation matrix. Dimensionality reduction is another popular data analysis task which is also based on covariance/correlation matrix estimates. Unfortunately, covariance/correlation matrix estimation suffers from the intrinsic noise present in high-dimensional data. Sources of noise include sampling variation, the presence of outlying sample units, and the fact that in most cases the number of units is much smaller than the number of genes. In this paper, we propose a robust correlation matrix estimator that is regularized based on adaptive thresholding. The resulting method jointly tames the effects of high dimensionality and data contamination. Computations are easy to implement and do not require hand tuning. Both simulated and real data are analyzed. A Monte Carlo experiment shows that the proposed method achieves remarkable performance. Our correlation metric is more robust to outliers compared with the existing alternatives in two gene expression datasets. It is also shown how the regularization allows spurious correlations to be automatically detected and filtered. The same regularization is also extended to other less robust correlation measures. Finally, we apply the ARACNE algorithm to the SyNTreN gene expression data. Sensitivity and specificity of the reconstructed network are compared with the gold standard. We show that ARACNE performs better when it takes the proposed correlation matrix estimator as input.
The R software is available at https://github.com/angy89/RobustSparseCorrelation. Contact: aserra@unisa.it or robtag@unisa.it. Supplementary data are available at Bioinformatics online.
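
    The combination of robust correlation and adaptive thresholding can be illustrated with a simplified sketch. Spearman (rank-based) correlation and a sqrt(log(p)/n)-scale cutoff stand in here for the authors' estimator and are assumptions of this example:

```python
import numpy as np

# Sketch of the general idea (robust correlation + thresholding), not the
# authors' estimator: compute a rank-based (Spearman) correlation matrix,
# then zero out entries below a data-driven threshold of order
# sqrt(log(p) / n), which suppresses spurious small correlations.
def thresholded_spearman(X, c=1.0):
    n, p = X.shape
    ranks = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    R = np.corrcoef(ranks, rowvar=False)          # Spearman = Pearson on ranks
    thr = c * np.sqrt(np.log(p) / n)
    R_thr = np.where(np.abs(R) >= thr, R, 0.0)
    np.fill_diagonal(R_thr, 1.0)
    return R_thr

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)    # one genuinely strong pair
R = thresholded_spearman(X)
```

    Rank-based correlation tames outlying sample units, while the threshold scales with sqrt(log(p)/n), the typical magnitude of noise correlations among p independent variables measured on n units.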

  4. Optimizing human activity patterns using global sensitivity analysis.

    PubMed

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
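
    The SampEn statistic used above to quantify regularity can be sketched as follows. This is a minimal textbook implementation; the parameter values m = 2 and r = 0.2 are conventional defaults, not taken from the paper:

```python
import numpy as np

# Minimal sample entropy (SampEn) sketch, following the standard definition:
# SampEn(m, r) = -ln(A / B), where B counts template matches of length m and
# A those of length m + 1 (self-matches excluded, Chebyshev distance).
def sampen(x, m=2, r=0.2):
    x = np.asarray(x, float)
    r = r * x.std()
    def count(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= r)
        return c
    B, A = count(m), count(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(3)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))   # highly regular signal
noise = rng.normal(size=500)                        # irregular signal
```

    A regular schedule (like the sine wave) yields many length-(m+1) matches among its length-m matches and hence a low SampEn; an irregular one yields few and a high SampEn, which is the quantity being tuned.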

  5. Optimizing human activity patterns using global sensitivity analysis

    PubMed Central

    Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2014-01-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080

  6. Optimizing human activity patterns using global sensitivity analysis

    DOE PAGES

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; ...

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  7. R (D(*)) anomalies in light of a nonminimal universal extra dimension

    NASA Astrophysics Data System (ADS)

    Biswas, Aritra; Shaw, Avirup; Patra, Sunando Kumar

    2018-02-01

    We estimate contributions from Kaluza-Klein excitations of gauge bosons and the physical charged scalar for the explanation of the lepton flavor universality violating excess in the ratios R(D) and R(D*) in a 5-dimensional universal extra dimensional scenario with nonvanishing boundary localized terms. This model is conventionally known as the nonminimal universal extra dimensional model. We obtain the allowed parameter space in accordance with constraints coming from Bc→τν decay, as well as those from the electroweak precision tests.

  8. Virtual Solar System Project: Building Understanding through Model Building.

    ERIC Educational Resources Information Center

    Barab, Sasha A.; Hay, Kenneth E.; Barnett, Michael; Keating, Thomas

    2000-01-01

    Describes an introductory astronomy course for undergraduate students in which students use three-dimensional (3-D) modeling tools to model the solar system and develop rich understandings of astronomical phenomena. Indicates that 3-D modeling can be used effectively in regular undergraduate university courses as a tool to develop understandings…

  9. Half-quadratic variational regularization methods for speckle-suppression and edge-enhancement in SAR complex image

    NASA Astrophysics Data System (ADS)

    Zhao, Xia; Wang, Guang-xin

    2008-12-01

    Synthetic aperture radar (SAR) is an active remote sensing sensor. It is a coherent imaging system, and speckle is its inherent defect, which badly affects the interpretation and recognition of SAR targets. Conventional methods of removing the speckle usually operate on the real SAR image and reduce the edges of the image while suppressing the speckle. Moreover, conventional methods lose the image phase information. Removing the speckle while simultaneously enhancing the targets and edges remains a challenge. To suppress the speckle and enhance the targets and edges simultaneously, a half-quadratic variational regularization method in the complex SAR image is presented, based on prior knowledge of the targets and edges. Because the cost function is non-quadratic, non-convex, and complex, a half-quadratic variational regularization is used to construct a new cost function, which is solved by alternate optimization. In the proposed scheme, the construction of the model, the solution of the model, and the selection of the model parameters are studied carefully. Finally, we validate the method using real SAR data. Theoretical analysis and experimental results illustrate the feasibility of the proposed method. Furthermore, the proposed method preserves the image phase information.
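
    Half-quadratic regularization solved by alternate optimization can be sketched in one dimension. This is an illustrative real-valued denoising toy (IRLS form), not the paper's complex-SAR model; the potential function and parameters are assumptions:

```python
import numpy as np

# 1-D sketch of half-quadratic regularization via alternate optimization:
# minimize ||x - y||^2 + lam * sum(phi(dx)) with the edge-preserving
# potential phi(t) = sqrt(t^2 + eps^2). The auxiliary weights w make each
# x-step a linear (quadratic) subproblem -- the "half-quadratic" trick.
def half_quadratic_denoise(y, lam=1.0, eps=1e-2, outer=30):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)                # finite-difference operator
    x = y.copy()
    for _ in range(outer):
        w = 1.0 / np.sqrt((D @ x) ** 2 + eps**2)  # half-quadratic weights
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)                 # quadratic x-step
    return x

rng = np.random.default_rng(4)
clean = np.concatenate([np.zeros(40), np.ones(40)])   # one target edge
noisy = clean + 0.1 * rng.normal(size=80)
rec = half_quadratic_denoise(noisy)
```

    Alternating between the weight update and the linear solve sidesteps the non-convexity of the original cost: each subproblem is solvable in closed form, while the weights keep the edge penalty small where gradients are large.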

  10. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    NASA Astrophysics Data System (ADS)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  11. Color-coded visualization of magnetic resonance imaging multiparametric maps

    NASA Astrophysics Data System (ADS)

    Kather, Jakob Nikolas; Weidner, Anja; Attenberger, Ulrike; Bukschat, Yannick; Weis, Cleo-Aron; Weis, Meike; Schad, Lothar R.; Zöllner, Frank Gerrit

    2017-01-01

    Multiparametric magnetic resonance imaging (mpMRI) data are increasingly used in the clinic, e.g. for the diagnosis of prostate cancer. In contrast to conventional MR imaging data, multiparametric data typically include functional measurements such as diffusion and perfusion imaging sequences. Conventionally, these measurements are visualized with a one-dimensional color scale, allowing only one dimension of information to be encoded. Yet, human perception places visual information in a three-dimensional color space. In theory, each dimension of this space can be utilized to encode visual information. We addressed this issue and developed a new method for tri-variate color-coded visualization of mpMRI data sets. We showed the usefulness of our method in a preclinical and in a clinical setting: in imaging data of a rat model of acute kidney injury, the method yielded characteristic visual patterns; in a clinical data set of N = 13 prostate cancer mpMRI scans, we assessed diagnostic performance in a blinded study with N = 5 observers. Compared to conventional radiological evaluation, color-coded visualization was comparable in terms of positive and negative predictive values. Thus, we showed that human observers can successfully make use of the novel method. This method can be broadly applied to visualize different types of multivariate MRI data.
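    The core idea of encoding three functional maps in the three dimensions of color space can be sketched as follows. The min-max normalization and direct RGB channel assignment here are hypothetical simplifications for illustration, not the authors' exact transfer functions.

    ```python
    import numpy as np

    def trivariate_rgb(maps):
        """Fuse three co-registered parameter maps into one RGB image.

        `maps` is a sequence of three 2-D arrays (e.g. T2, ADC, perfusion);
        each is min-max normalized to [0, 1] and assigned to one color
        channel.  A hypothetical illustration of tri-variate color coding.
        """
        channels = []
        for m in maps:
            m = np.asarray(m, dtype=float)
            lo, hi = np.nanmin(m), np.nanmax(m)
            channels.append((m - lo) / (hi - lo) if hi > lo else np.zeros_like(m))
        return np.stack(channels, axis=-1)  # shape (H, W, 3), values in [0, 1]

    rng = np.random.default_rng(0)
    rgb = trivariate_rgb([rng.random((4, 4)) for _ in range(3)])
    print(rgb.shape)
    ```

    In practice, each channel would be windowed to clinically meaningful ranges rather than to the per-image minimum and maximum.
    
    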

  12. Numerical Study of Sound Emission by 2D Regular and Chaotic Vortex Configurations

    NASA Astrophysics Data System (ADS)

    Knio, Omar M.; Collorec, Luc; Juvé, Daniel

    1995-02-01

    The far-field noise generated by a system of three Gaussian vortices lying over a flat boundary is numerically investigated using a two-dimensional vortex element method. The method is based on the discretization of the vorticity field into a finite number of smoothed vortex elements with spherical overlapping cores. The elements are convected in a Lagrangian reference frame along particle trajectories using the local velocity vector, given in terms of a desingularized Biot-Savart law. The initial structure of the vortex system is triangular; a one-dimensional family of initial configurations is constructed by keeping one side of the triangle fixed and vertical, and varying the abscissa of the centroid of the remaining vortex. The inviscid dynamics of this vortex configuration are first investigated using non-deformable vortices. Depending on the aspect ratio of the initial system, regular or chaotic motion occurs. Due to wall-related symmetries, the far-field sound always exhibits a time-independent quadrupolar directivity with maxima parallel and perpendicular to the wall. When regular motion prevails, the noise spectrum is dominated by discrete frequencies which correspond to the fundamental system frequency and its superharmonics. For chaotic motion, a broadband spectrum is obtained; computed sound levels are substantially higher than in non-chaotic systems. A more sophisticated analysis is then performed which accounts for vortex core dynamics. Results show that the vortex cores are susceptible to an inviscid instability which leads to violent vorticity reorganization within the core. This phenomenon has little effect on the large-scale features of the motion of the system or on low-frequency sound emission. However, it leads to the generation of a high-frequency noise band in the acoustic pressure spectrum. The latter is observed in both regular and chaotic system simulations.

  13. Three-dimensional finite element analysis of stress distribution on different bony ridges with different lengths of morse taper implants and prosthesis dimensions.

    PubMed

    Toniollo, Marcelo Bighetti; Macedo, Ana Paula; Rodrigues, Renata Cristina Silveira; Ribeiro, Ricardo Faria; de Mattos, Maria da Gloria Chiarello

    2012-11-01

    This finite element analysis (FEA) compared stress distribution on different bony ridges rehabilitated with different lengths of morse taper implants, varying the dimensions of the metal-ceramic crowns to maintain occlusal alignment. Three-dimensional FE models were designed representing a posterior left side segment of the mandible: control group, 3 implants of 11 mm length; group 1, implants of 13 mm, 11 mm and 5 mm length; group 2, 1 implant of 11 mm and 2 implants of 5 mm length; and group 3, 3 implants of 5 mm length. The abutment heights were 3.5 mm for the 13- and 11-mm implants (regular), and 0.8 mm for the 5-mm implants (short). Evaluation was performed in Ansys software, with oblique loads of 365 N for molars and 200 N for premolars. Stress on cortical bone was 50% higher for the short implants than for the regular implants, and stress on trabecular bone was 80% higher for the short implants than for the regular implants. There was higher stress concentration in the bone region around the neck of the short implants. However, these implants were capable of dissipating the stress to the bone under the applied loads, although stresses approached the threshold between elastic and plastic deformation of the trabecular bone. Distal implants and/or those with the largest occlusal table generated the greatest stress regions in the surrounding bone. It was concluded that patients requiring short implants combined with prostheses of increased proportions need careful evaluation and occlusal adjustment, since possible overload of these short implants, and even of regular ones, can generate stress beyond the physiological threshold of the surrounding bone, compromising the whole system.

  14. Disease Prediction based on Functional Connectomes using a Scalable and Spatially-Informed Support Vector Machine

    PubMed Central

    Watanabe, Takanori; Kessler, Daniel; Scott, Clayton; Angstadt, Michael; Sripada, Chandra

    2014-01-01

    Substantial evidence indicates that major psychiatric disorders are associated with distributed neural dysconnectivity, leading to strong interest in using neuroimaging methods to accurately predict disorder status. In this work, we are specifically interested in a multivariate approach that uses features derived from whole-brain resting state functional connectomes. However, functional connectomes reside in a high dimensional space, which complicates model interpretation and introduces numerous statistical and computational challenges. Traditional feature selection techniques are used to reduce data dimensionality, but are blind to the spatial structure of the connectomes. We propose a regularization framework where the 6-D structure of the functional connectome (defined by pairs of points in 3-D space) is explicitly taken into account via the fused Lasso or the GraphNet regularizer. Our method only restricts the loss function to be convex and margin-based, allowing non-differentiable loss functions such as the hinge-loss to be used. Using the fused Lasso or GraphNet regularizer with the hinge-loss leads to a structured sparse support vector machine (SVM) with embedded feature selection. We introduce a novel efficient optimization algorithm based on the augmented Lagrangian and the classical alternating direction method, which can solve both fused Lasso and GraphNet regularized SVM with very little modification. We also demonstrate that the inner subproblems of the algorithm can be solved efficiently in analytic form by coupling the variable splitting strategy with a data augmentation scheme. Experiments on simulated data and resting state scans from a large schizophrenia dataset show that our proposed approach can identify predictive regions that are spatially contiguous in the 6-D “connectome space,” offering an additional layer of interpretability that could provide new insights about various disease processes. PMID:24704268
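    As a rough illustration of the penalty structure only (not the paper's augmented-Lagrangian/ADMM solver), the sketch below trains a hinge-loss classifier with a lasso penalty plus a fused-lasso penalty over neighboring weights of a 1-D chain, a toy stand-in for the 6-D connectome neighborhood structure; plain subgradient descent and the synthetic data are assumptions of this sketch.

    ```python
    import numpy as np

    def sparse_fused_svm(X, y, lam1=0.01, lam2=0.01, lr=0.01, iters=2000):
        """Subgradient descent for a hinge-loss SVM with a lasso penalty
        (lam1) and a fused-lasso penalty (lam2) on adjacent weights.

        A 1-D-chain toy analogue of the structured sparse SVM; the paper
        uses an ADMM-type solver on 6-D connectome neighborhoods instead.
        """
        n, p = X.shape
        w = np.zeros(p)
        for _ in range(iters):
            margins = y * (X @ w)
            active = margins < 1                      # hinge subgradient support
            g = -(X[active] * y[active, None]).sum(0) / n
            g += lam1 * np.sign(w)                    # lasso subgradient
            d = np.diff(w)                            # chain differences
            g[:-1] -= lam2 * np.sign(d)               # fused-lasso subgradient
            g[1:] += lam2 * np.sign(d)
            w -= lr * g
        return w

    # Tiny synthetic test: only the first two (adjacent) features matter.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 6))
    y = np.sign(X[:, 0] + X[:, 1])
    w = sparse_fused_svm(X, y)
    print(np.round(w, 2))
    ```

    The fused term encourages the two informative, adjacent weights to take similar values while the lasso term shrinks the uninformative ones, mirroring the spatial-contiguity effect described above.
    
    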

  15. Special Section: Complementary and Alternative Medicine (CAM): Low Back Pain and CAM

    MedlinePlus

    ... back, he has used conventional and complementary and alternative medicine (CAM) approaches, including regular visits to the chiropractor and massage therapist to address his pain. "I'm looking for something so that I don't ... with other alternative and traditional therapies to help them resume normal ...

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blackstone, R.; Graham, L.W.

    The dimensional changes observed in a range of graphitic materials following irradiation at 600, 900, and 1200 deg C are reported. The results are discussed in the light of current models for irradiation damage in graphite and it is concluded that for conventional materials the dimensional behaviour can be related to the material properties. Further confirmation of the extreme dependence of the dimensional changes on the crystallite size has been obtained. The way in which the rate of dimensional change varies with temperature is compatible with this effect being caused by vacancy loss at crystallite boundaries. For a given crystallite size there appears to be a breakaway temperature above which the rate of dimensional change accelerates rapidly. (auth)

  17. Corn Yield and Soil Nitrous Oxide Emission under Different Fertilizer and Soil Management: A Three-Year Field Experiment in Middle Tennessee.

    PubMed

    Deng, Qi; Hui, Dafeng; Wang, Junming; Iwuozo, Stephen; Yu, Chih-Li; Jima, Tigist; Smart, David; Reddy, Chandra; Dennis, Sam

    2015-01-01

    A three-year field experiment was conducted to examine the responses of corn yield and soil nitrous oxide (N2O) emission to various management practices in middle Tennessee. The management practices include no-tillage + regular applications of urea ammonium nitrate (NT-URAN); no-tillage + regular applications of URAN + denitrification inhibitor (NT-inhibitor); no-tillage + regular applications of URAN + biochar (NT-biochar); no-tillage + 20% applications of URAN + chicken litter (NT-litter); no-tillage + split applications of URAN (NT-split); and conventional tillage + regular applications of URAN as a control (CT-URAN). Fertilizer equivalent to 217 kg N ha^-1 was applied to each of the experimental plots. Results showed that no-tillage (NT-URAN) significantly increased corn yield by 28% over the conventional tillage (CT-URAN) due to soil water conservation. The management practices significantly altered soil N2O emission, with the highest in the CT-URAN (0.48 mg N2O m^-2 h^-1) and the lowest in the NT-inhibitor (0.20 mg N2O m^-2 h^-1) and NT-biochar (0.16 mg N2O m^-2 h^-1) treatments. Significant exponential relationships between soil N2O emission and water-filled pore space (WFPS) were revealed in all treatments. However, variations in soil N2O emission among the treatments were positively correlated with the moisture sensitivity of soil N2O emission, which likely reflects an interactive effect between soil properties and WFPS. Our results indicated that improved fertilizer and soil management have the potential to maintain highly productive corn yield while reducing greenhouse gas emissions.

  19. Effects of the regularization on the restoration of chiral and axial symmetries

    NASA Astrophysics Data System (ADS)

    Costa, P.; Ruivo, M. C.; de Sousa, C. A.

    2008-05-01

    The effects of a type of regularization for finite temperatures on the restoration of chiral and axial symmetries are investigated within the SU(3) Nambu-Jona-Lasinio model. The regularization consists in using an infinite cutoff in the integrals that are convergent at finite temperature, a procedure that allows one to take into account the effects of high momentum quarks at high temperatures. It is found that the critical temperature for the phase transition is closer to lattice results than the one obtained with the conventional regularization, and the restoration of chiral and axial symmetries, signaled by the behavior of several observables, occurs simultaneously and at a higher temperature. The restoration of the axial symmetry appears as a natural consequence of the full recovering of the chiral symmetry that was dynamically broken. By using an additional ansatz that simulates instanton suppression effects, by means of a convenient temperature dependence of the anomaly coefficient, we found that the restoration of U(2) symmetry is shifted to lower values, but the dominant effect at high temperatures comes from the new regularization that enhances the decrease of quark condensates, especially in the strange sector.

  20. ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION

    PubMed Central

    Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey

    2013-01-01

    MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suitable for model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053

  1. Analysis of the phase transition in the two-dimensional Ising ferromagnet using a Lempel-Ziv string-parsing scheme and black-box data-compression utilities

    NASA Astrophysics Data System (ADS)

    Melchert, O.; Hartmann, A. K.

    2015-02-01

    In this work we consider information-theoretic observables to analyze short symbolic sequences, comprising time series that represent the orientation of a single spin in a two-dimensional (2D) Ising ferromagnet on a square lattice of size L^2 = 128^2 for different system temperatures T. The latter were chosen from an interval enclosing the critical point Tc of the model. At small temperatures the sequences are thus very regular; at high temperatures they are maximally random. In the vicinity of the critical point, nontrivial, long-range correlations appear. Here we implement estimators for the entropy rate, excess entropy (i.e., "complexity"), and multi-information. First, we implement a Lempel-Ziv string-parsing scheme, providing seemingly elaborate entropy rate and multi-information estimates and an approximate estimator for the excess entropy. Furthermore, we apply easy-to-use black-box data-compression utilities, providing approximate estimators only. For comparison and to yield results for benchmarking purposes, we implement the information-theoretic observables also based on the well-established M-block Shannon entropy, which is more tedious to apply compared to the first two "algorithmic" entropy estimation procedures. To test how well one can exploit the potential of such data-compression techniques, we aim at detecting the critical point of the 2D Ising ferromagnet. Among the above observables, the multi-information, which is known to exhibit an isolated peak at the critical point, is very easy to replicate by means of both efficient algorithmic entropy estimation procedures. Finally, we assess how well the various algorithmic entropy estimates compare to the more conventional block entropy estimates and illustrate a simple modification that yields enhanced results.
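    A minimal version of the Lempel-Ziv ingredient can be sketched as follows, assuming the classic estimator h ≈ c·log2(c)/n built from the LZ78 phrase count c; the regular and random binary strings below are hypothetical stand-ins for low- and high-temperature spin sequences.

    ```python
    import math
    import random

    def lz78_phrases(s):
        """Count the phrases of the incremental LZ78 parsing of string s:
        repeatedly extend the current phrase until it is new, then restart."""
        phrases, current = set(), ""
        for ch in s:
            current += ch
            if current not in phrases:
                phrases.add(current)
                current = ""
        return len(phrases) + (1 if current else 0)

    def entropy_rate_estimate(s):
        """LZ-based entropy rate estimate, h ~ c * log2(c) / n bits/symbol."""
        c, n = lz78_phrases(s), len(s)
        return c * math.log2(c) / n

    random.seed(0)
    ordered = "01" * 5000                                   # regular sequence
    disordered = "".join(random.choice("01") for _ in range(10000))  # random
    print(entropy_rate_estimate(ordered), entropy_rate_estimate(disordered))
    ```

    The regular sequence yields a phrase count growing only like the square root of the length, hence a much lower entropy rate estimate than the random one.
    
    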

  2. Effect of 3D animation videos over 2D video projections in periodontal health education among dental students

    PubMed Central

    Dhulipalla, Ravindranath; Marella, Yamuna; Katuri, Kishore Kumar; Nagamani, Penupothu; Talada, Kishore; Kakarlapudi, Anusha

    2015-01-01

    Background: There is limited evidence about the distinct effect of 3D oral health education videos over conventional two-dimensional projections in improving oral health knowledge. This randomized controlled trial was done to test the effect of three-dimensional oral health educational videos among first-year dental students. Materials and Methods: 80 first-year dental students were enrolled and divided into two groups (test and control). In the test group, 3D animations, and in the control group, regular 2D video projections pertaining to periodontal anatomy, etiology, presenting conditions, preventive measures and treatment of periodontal problems were shown. The effect of 3D animation was evaluated by using a questionnaire consisting of 10 multiple choice questions given to all participants at baseline, immediately after and 1 month after the intervention. Clinical parameters like Plaque Index (PI), Gingival Bleeding Index (GBI), and Oral Hygiene Index Simplified (OHI-S) were measured at baseline and at 1 month follow-up. Results: A significant difference in the post-intervention knowledge scores was found between the groups as assessed by unpaired t-test (p<0.001) at baseline, immediately after and after 1 month. At baseline, all the clinical parameters in both groups were similar and showed a significant reduction (p<0.001) after 1 month, whereas no significant difference was noticed post-intervention between the groups. Conclusion: 3D animation videos are more effective than 2D videos in periodontal disease education and knowledge recall. The results also suggest that 3D animation provides better visual comprehension for students and greater health care outcomes. PMID:26759805

  3. Parallelized Bayesian inversion for three-dimensional dental X-ray imaging.

    PubMed

    Kolehmainen, Ville; Vanne, Antti; Siltanen, Samuli; Järvenpää, Seppo; Kaipio, Jari P; Lassas, Matti; Kalke, Martti

    2006-02-01

    Diagnostic and operational tasks based on dental radiology often require three-dimensional (3-D) information that is not available in a single X-ray projection image. Comprehensive 3-D information about tissues can be obtained by computerized tomography (CT) imaging. However, in dental imaging a conventional CT scan may not be available or practical because of high radiation dose, low resolution or the cost of the CT scanner equipment. In this paper, we consider a novel type of 3-D imaging modality for dental radiology. We consider situations in which projection images of the teeth are taken from a few sparsely distributed projection directions using the dentist's regular (digital) X-ray equipment and the 3-D X-ray attenuation function is reconstructed. A complication in these experiments is that the reconstruction of the 3-D structure based on a few projection images becomes an ill-posed inverse problem. Bayesian inversion is a well-suited framework for reconstruction from such incomplete data. In Bayesian inversion, the ill-posed reconstruction problem is formulated in a well-posed probabilistic form in which a priori information is used to compensate for the incomplete information of the projection data. In this paper we propose a Bayesian method for 3-D reconstruction in dental radiology. The method is partially based on Kolehmainen et al. (2003). The prior model for dental structures consists of a weighted l1 and total variation (TV) prior together with a positivity prior. The inverse problem is stated as finding the maximum a posteriori (MAP) estimate. To make the 3-D reconstruction computationally feasible, a parallelized version of an optimization algorithm is implemented for a Beowulf cluster computer. The method is tested with projection data from dental specimens and patient data. Tomosynthetic reconstructions are given as a reference for the proposed method.
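    A 1-D toy analogue of such a MAP reconstruction can be sketched as follows, assuming a smoothed l1 + TV objective with a positivity projection and plain projected gradient descent; the paper itself uses a parallelized optimizer on 3-D data, so everything below is an illustrative simplification.

    ```python
    import numpy as np

    def map_tv_positive(A, b, alpha=0.01, beta=0.1, lr=0.05, iters=3000, eps=1e-6):
        """MAP estimate for a linear inverse problem with smoothed l1 + TV
        priors and a positivity constraint, via projected gradient descent.

        A 1-D toy analogue of the 3-D dental reconstruction described above.
        """
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = A.T @ (A @ x - b)                  # data-fit gradient
            g += alpha * x / np.sqrt(x**2 + eps)   # smoothed l1 prior
            d = np.diff(x)
            sd = d / np.sqrt(d**2 + eps)           # smoothed TV prior
            g[:-1] -= beta * sd
            g[1:] += beta * sd
            x = np.maximum(x - lr * g, 0.0)        # positivity projection
        return x

    # Under-determined toy problem: 12 measurements of a 20-pixel
    # piecewise-constant, non-negative signal through a random matrix.
    rng = np.random.default_rng(0)
    x_true = np.zeros(20)
    x_true[5:12] = 1.0
    A = rng.standard_normal((12, 20)) / np.sqrt(12)
    b = A @ x_true
    x_hat = map_tv_positive(A, b)
    print(np.round(x_hat, 1))
    ```

    Even with fewer measurements than unknowns, the TV and positivity priors compensate for the incomplete data, which is the essence of the Bayesian approach described above.
    
    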

  4. Markov prior-based block-matching algorithm for superdimension reconstruction of porous media

    NASA Astrophysics Data System (ADS)

    Li, Yang; He, Xiaohai; Teng, Qizhi; Feng, Junxi; Wu, Xiaohong

    2018-04-01

    A superdimension reconstruction algorithm is used for the reconstruction of three-dimensional (3D) structures of a porous medium based on a single two-dimensional image. The algorithm borrows the concepts of "blocks," "learning," and "dictionary" from learning-based superresolution reconstruction and applies them to the 3D reconstruction of a porous medium. In the neighborhood-matching process of the conventional superdimension reconstruction algorithm, the Euclidean distance is used as a criterion, although it may not truly reflect the structural correlation between adjacent blocks in an actual situation. Hence, in this study, regularization terms are adopted as prior knowledge in the reconstruction process, and a Markov prior-based block-matching algorithm for superdimension reconstruction is developed for more accurate reconstruction. The algorithm simultaneously takes into consideration the probabilistic relationship between the already-reconstructed blocks in three perpendicular directions (x, y, and z) and the block to be reconstructed, and the maximum value of the probability product of the blocks to be reconstructed (as found in the dictionary for the three directions) is adopted as the basis for the final block selection. Using this approach, the problem of an imprecise spatial structure caused by a point simulation can be overcome. The problem of artifacts in the reconstructed structure is also addressed through the addition of hard data and by neighborhood matching. To verify the improved reconstruction accuracy of the proposed method, the statistical and morphological features of the results from the proposed method and the traditional superdimension reconstruction method are compared with those of the target system. The proposed superdimension reconstruction algorithm is confirmed to enable a more accurate reconstruction of the target system while also eliminating artifacts.
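    The final block-selection rule, maximizing the product of the three directional probabilities, can be rendered schematically as below; the candidate indices and probability tables are hypothetical placeholders, not values from the paper's dictionary.

    ```python
    import numpy as np

    def select_block(candidates, p_x, p_y, p_z):
        """Pick the candidate block maximizing the product of its
        probabilities given the already-reconstructed neighbors along the
        x, y and z directions.  A schematic rendering of the selection
        rule only, not the full superdimension reconstruction pipeline.
        """
        scores = [p_x[c] * p_y[c] * p_z[c] for c in candidates]
        return candidates[int(np.argmax(scores))]

    # Hypothetical dictionary probabilities for three candidate blocks.
    p_x = {0: 0.2, 1: 0.7, 2: 0.5}
    p_y = {0: 0.9, 1: 0.6, 2: 0.1}
    p_z = {0: 0.8, 1: 0.5, 2: 0.9}
    print(select_block([0, 1, 2], p_x, p_y, p_z))
    ```

    A candidate that matches well in only one direction is penalized by the product, which is how the joint three-direction constraint improves over per-direction Euclidean matching.
    
    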

  5. Electrons in Flatland

    NASA Astrophysics Data System (ADS)

    MacDonald, Allan

    2007-04-01

    Like the classical squares and triangles in Edwin Abbott's 19th century social satire and science fiction novel Flatland, electrons and other quantum particles behave differently when confined to a two-dimensional world. Condensed matter physicists have been intrigued and regularly surprised by two-dimensional electron systems since they were first studied in semiconductor field-effect-transistor devices over forty years ago. I will discuss some important milestones in the study of two-dimensional electron systems, from the discoveries of the integer and fractional quantum Hall effects in the 1980s to recent quantum Hall effect work on quasiparticles with non-Abelian quantum statistics. Special attention will be given to a new electronic Flatland that has risen to prominence recently, graphene, which consists of a single sheet of carbon atoms in a honeycomb lattice arrangement. Graphene provides a realization of two-dimensional massless Dirac fermions which interact via nearly instantaneous Coulomb interactions. Early research on graphene has demonstrated yet again that Flatland exceeds expectations.

  6. Model-based Clustering of High-Dimensional Data in Astrophysics

    NASA Astrophysics Data System (ADS)

    Bouveyron, C.

    2016-05-01

    The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in bulk or as streams. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show disappointing behavior in high-dimensional spaces, which is mainly due to their dramatic over-parametrization. The recent developments in model-based classification overcome these drawbacks and allow one to efficiently classify high-dimensional data, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.

  7. A Numerical Model of Exchange Chromatography Through 3D Lattice Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salloum, Maher; Robinson, David B.

    Rapid progress in the development of additive manufacturing technologies is opening new opportunities to fabricate structures that control mass transport in three dimensions across a broad range of length scales. We describe a structure that can be fabricated by newly available commercial 3D printers. It contains an array of regular three-dimensional flow paths that are in intimate contact with a solid phase, and thoroughly shuffle material among the paths. We implement a chemically reacting flow model to study its behavior as an exchange chromatography column, and compare it to an array of one-dimensional flow paths that resemble more traditional honeycomb monoliths. A reaction front moves through the columns and then elutes. Here, the front is sharper at all flow rates for the structure with three-dimensional flow paths, and this structure is more robust to channel width defects than the one-dimensional array.

  8. Three-dimensional electron diffraction of plant light-harvesting complex

    PubMed Central

    Wang, Da Neng; Kühlbrandt, Werner

    1992-01-01

    Electron diffraction patterns of two-dimensional crystals of light-harvesting chlorophyll a/b-protein complex (LHC-II) from photosynthetic membranes of pea chloroplasts, tilted at different angles up to 60°, were collected to 3.2 Å resolution at -125°C. The reflection intensities were merged into a three-dimensional data set. The Friedel R-factor and the merging R-factor were 21.8 and 27.6%, respectively. Specimen flatness and crystal size were critical for recording electron diffraction patterns from crystals at high tilts. The principal sources of experimental error were attributed to limitations of the number of unit cells contributing to an electron diffraction pattern, and to the critical electron dose. The distribution of strong diffraction spots indicated that the three-dimensional structure of LHC-II is less regular than that of other known membrane proteins and is not dominated by a particular feature of secondary structure. PMID:19431817

  10. Small-angle X-ray scattering tensor tomography: model of the three-dimensional reciprocal-space map, reconstruction algorithm and angular sampling requirements.

    PubMed

    Liebi, Marianne; Georgiadis, Marios; Kohlbrecher, Joachim; Holler, Mirko; Raabe, Jörg; Usov, Ivan; Menzel, Andreas; Schneider, Philipp; Bunk, Oliver; Guizar-Sicairos, Manuel

    2018-01-01

    Small-angle X-ray scattering tensor tomography, which allows reconstruction of the local three-dimensional reciprocal-space map within a three-dimensional sample as introduced by Liebi et al. [Nature (2015), 527, 349-352], is described in more detail with regard to the mathematical framework and the optimization algorithm. For the case of trabecular bone samples from vertebrae it is shown that the model of the three-dimensional reciprocal-space map using spherical harmonics can adequately describe the measured data. The method enables the determination of nanostructure orientation and degree of orientation as demonstrated previously in a single momentum transfer q range. This article presents a reconstruction of the complete reciprocal-space map for the case of bone over extended ranges of q. In addition, it is shown that uniform angular sampling and advanced regularization strategies help to reduce the amount of data required.

  11. Chemical processing of three-dimensional graphene networks on transparent conducting electrodes for depleted-heterojunction quantum dot solar cells.

    PubMed

    Tavakoli, Mohammad Mahdi; Simchi, Abdolreza; Fan, Zhiyong; Aashuri, Hossein

    2016-01-07

    We present a novel chemical procedure to prepare three-dimensional graphene networks (3DGNs) as a transparent conductive film to enhance the photovoltaic performance of PbS quantum-dot (QD) solar cells. It is shown that 3DGN electrodes enhance electron extraction, yielding a 30% improvement in performance compared with the conventional device.

  12. Comparing the Impact of Dynamic and Static Media on Students' Learning of One-Dimensional Kinematics

    ERIC Educational Resources Information Center

    Mešić, Vanes; Dervić, Dževdeta; Gazibegović-Busuladžić, Azra; Salibašić, Džana; Erceg, Nataša

    2015-01-01

    In our study, we aimed to compare the impact of simulations, sequences of printed simulation frames and conventional static diagrams on students' understanding of one-dimensional kinematics. Our student sample consisted of three classes of middle-years students (N = 63; mostly 15-year-olds). These three classes served as…

  13. [3D FSPGR (fast spoiled gradient echo) magnetic resonance imaging in the diagnosis of focal cortical dysplasia in children].

    PubMed

    Alikhanov, A A; Sinitsyn, V E; Perepelova, E M; Mukhin, K Iu; Demushkina, A A; Omarova, M O; Piliia, S V

    2001-01-01

    Small dysplastic lesions of the cerebral cortex are often missed by conventional MRI methods. The identification of subtle structural abnormalities by traditional multiplanar rectilinear slices is often limited by the complex convolutional pattern of the brain. We used a method of FSPGR (fast spoiled gradient-echo) three-dimensional MRI data acquisition that improves the anatomical display of the sulcal structure of the hemispheric convexities. It also reduces the asymmetric sampling of gray-white matter that may lead to false-positive results. We present 5 of 12 patients with dysplastic cortical lesions in whom conventional two-dimensional and three-dimensional MRI with multiplanar reformatting was initially considered normal. Subsequent studies using 3D FSPGR identified various types of focal cortical dysplasia in all of them. These results indicate that the detection of subtle focal dysplastic lesions may be improved by enhancing the anatomical display of the brain sulcal structure with 3D FSPGR.

  14. Monte carlo simulations of enzyme reactions in two dimensions: fractal kinetics and spatial segregation.

    PubMed

    Berry, Hugues

    2002-10-01

    Conventional equations for enzyme kinetics are based on mass-action laws, which may fail in low-dimensional and disordered media such as biological membranes. We present Monte Carlo simulations of an isolated Michaelis-Menten enzyme reaction on two-dimensional lattices with varying obstacle densities, as models of biological membranes. The model predicts that, as a result of anomalous diffusion on these low-dimensional media, the kinetics are of the fractal type. Consequently, the conventional equations for enzyme kinetics fail to describe the reaction. In particular, we show that the quasi-stationary-state assumption can hardly be retained under these conditions. Moreover, the fractal characteristics of the kinetics become increasingly pronounced as obstacle density and initial substrate concentration increase. The simulations indicate that these two influences are mainly additive. Finally, the simulations show pronounced S-P segregation over the lattice at obstacle densities compatible with in vivo conditions. This phenomenon could be a source of spatial self-organization in biological membranes.
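
    The kind of obstructed-lattice simulation described above can be sketched in a few lines: a random walker that rejects moves onto obstacle sites diffuses more slowly, the mechanism behind the anomalous diffusion in the study. This is a hedged illustration with hypothetical parameters, not the author's actual Monte Carlo model (which also includes enzyme-substrate particles and reaction rules):

```python
import numpy as np

def lattice_msd(obstacle_density, size=100, steps=500, walkers=300, seed=1):
    """Mean-squared displacement of random walkers on a periodic 2D lattice
    with randomly placed immobile obstacles (a crowded-membrane model)."""
    rng = np.random.default_rng(seed)
    obstacles = rng.random((size, size)) < obstacle_density
    free_sites = np.argwhere(~obstacles)
    pos = free_sites[rng.integers(len(free_sites), size=walkers)]
    unwrapped = pos.astype(float)   # positions without periodic wrapping
    origin = unwrapped.copy()
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    msd = np.zeros(steps)
    for t in range(steps):
        step = moves[rng.integers(4, size=walkers)]
        trial = (pos + step) % size
        blocked = obstacles[trial[:, 0], trial[:, 1]]
        pos = np.where(blocked[:, None], pos, trial)   # reject blocked moves
        unwrapped = unwrapped + np.where(blocked[:, None], 0, step)
        msd[t] = np.mean(np.sum((unwrapped - origin) ** 2, axis=1))
    return msd

free = lattice_msd(0.0)
crowded = lattice_msd(0.35)
print(crowded[-1] < free[-1])  # True: obstacles slow effective diffusion
```

    Fitting the slope of log MSD versus log t at intermediate times would expose the anomalous-diffusion exponent that drives the fractal kinetics.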

  15. Monte carlo simulations of enzyme reactions in two dimensions: fractal kinetics and spatial segregation.

    PubMed Central

    Berry, Hugues

    2002-01-01

    Conventional equations for enzyme kinetics are based on mass-action laws, which may fail in low-dimensional and disordered media such as biological membranes. We present Monte Carlo simulations of an isolated Michaelis-Menten enzyme reaction on two-dimensional lattices with varying obstacle densities, as models of biological membranes. The model predicts that, as a result of anomalous diffusion on these low-dimensional media, the kinetics are of the fractal type. Consequently, the conventional equations for enzyme kinetics fail to describe the reaction. In particular, we show that the quasi-stationary-state assumption can hardly be retained under these conditions. Moreover, the fractal characteristics of the kinetics become increasingly pronounced as obstacle density and initial substrate concentration increase. The simulations indicate that these two influences are mainly additive. Finally, the simulations show pronounced S-P segregation over the lattice at obstacle densities compatible with in vivo conditions. This phenomenon could be a source of spatial self-organization in biological membranes. PMID:12324410

  16. High-temperature superconductivity in space-charge regions of lanthanum cuprate induced by two-dimensional doping

    PubMed Central

    Baiutti, F.; Logvenov, G.; Gregori, G.; Cristiani, G.; Wang, Y.; Sigle, W.; van Aken, P. A.; Maier, J.

    2015-01-01

    The exploitation of interface effects turned out to be a powerful tool for generating exciting material properties. Such properties include magnetism, electronic and ionic transport and even superconductivity. Here, instead of using conventional homogeneous doping to enhance the hole concentration in lanthanum cuprate and achieve superconductivity, we replace single LaO planes with SrO dopant planes using atomic-layer-by-layer molecular beam epitaxy (two-dimensional doping). Electron spectroscopy and microscopy, conductivity measurements and zinc tomography reveal such negatively charged interfaces to induce layer-dependent superconductivity (Tc up to 35 K) in the space-charge zone at the side of the planes facing the substrate, where the strontium (Sr) profile is abrupt. Owing to the growth conditions, the other side exhibits instead a Sr redistribution resulting in superconductivity due to conventional doping. The present study represents a successful example of two-dimensional doping of superconducting oxide systems and demonstrates its power in this field. PMID:26481902

  17. Gross violation of the Wiedemann–Franz law in a quasi-one-dimensional conductor

    PubMed Central

    Wakeham, Nicholas; Bangura, Alimamy F.; Xu, Xiaofeng; Mercure, Jean-Francois; Greenblatt, Martha; Hussey, Nigel E.

    2011-01-01

    When charge carriers are spatially confined to one dimension, conventional Fermi-liquid theory breaks down. In such Tomonaga–Luttinger liquids, quasiparticles are replaced by distinct collective excitations of spin and charge that propagate independently with different velocities. Although evidence for spin–charge separation exists, no bulk low-energy probe has yet been able to distinguish successfully between Tomonaga–Luttinger and Fermi-liquid physics. Here we show experimentally that the ratio of the thermal and electrical Hall conductivities in the metallic phase of quasi-one-dimensional Li0.9Mo6O17 diverges with decreasing temperature, reaching a value five orders of magnitude larger than that found in conventional metals. Both the temperature dependence and magnitude of this ratio are consistent with Tomonaga–Luttinger liquid theory. Such a dramatic manifestation of spin–charge separation in a bulk three-dimensional solid offers a unique opportunity to explore how the fermionic quasiparticle picture recovers, and over what time scale, when coupling to a second or third dimension is restored. PMID:21772267

  18. Polarimetric image reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Valenzuela, John R.

    In the field of imaging polarimetry Stokes parameters are sought and must be inferred from noisy and blurred intensity measurements. Using a penalized-likelihood estimation framework we investigate reconstruction quality when estimating intensity images and then transforming to Stokes parameters (traditional estimator), and when estimating Stokes parameters directly (Stokes estimator). We define our cost function for reconstruction by a weighted least squares data fit term and a regularization penalty. It is shown that under quadratic regularization, the traditional and Stokes estimators can be made equal by appropriate choice of regularization parameters. It is empirically shown that, when using edge preserving regularization, estimating the Stokes parameters directly leads to lower RMS error in reconstruction. Also, the addition of a cross channel regularization term further lowers the RMS error for both methods especially in the case of low SNR. The technique of phase diversity has been used in traditional incoherent imaging systems to jointly estimate an object and optical system aberrations. We extend the technique of phase diversity to polarimetric imaging systems. Specifically, we describe penalized-likelihood methods for jointly estimating Stokes images and optical system aberrations from measurements that contain phase diversity. Jointly estimating Stokes images and optical system aberrations involves a large parameter space. A closed-form expression for the estimate of the Stokes images in terms of the aberration parameters is derived and used in a formulation that reduces the dimensionality of the search space to the number of aberration parameters only. We compare the performance of the joint estimator under both quadratic and edge-preserving regularization. The joint estimator with edge-preserving regularization yields higher fidelity polarization estimates than with quadratic regularization. 
Under quadratic regularization, using the reduced-parameter search strategy, accurate aberration estimates can be obtained without recourse to regularization "tuning". Phase-diverse wavefront sensing is emerging as a viable candidate wavefront sensor for adaptive-optics systems. In a quadratically penalized weighted least squares estimation framework a closed form expression for the object being imaged in terms of the aberrations in the system is available. This expression offers a dramatic reduction of the dimensionality of the estimation problem and thus is of great interest for practical applications. We have derived an expression for an approximate joint covariance matrix for object and aberrations in the phase diversity context. Our expression for the approximate joint covariance is compared with the "known-object" Cramer-Rao lower bound that is typically used for system parameter optimization. Estimates of the optimal amount of defocus in a phase-diverse wavefront sensor derived from the joint-covariance matrix, the known-object Cramer-Rao bound, and Monte Carlo simulations are compared for an extended scene and a point object. It is found that our variance approximation, that incorporates the uncertainty of the object, leads to an improvement in predicting the optimal amount of defocus to use in a phase-diverse wavefront sensor.
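
    The penalized weighted-least-squares estimate with quadratic regularization that underlies the closed-form expressions above has the familiar normal-equations solution x̂ = (AᵀWA + λLᵀL)⁻¹AᵀWb. Below is a minimal sketch, with a generic forward matrix A standing in for the blur/polarization model; all names and parameter values are hypothetical, not the author's code:

```python
import numpy as np

def wls_quadratic_reg(A, b, w, lam, L):
    """Closed-form minimizer of (b - Ax)^T W (b - Ax) + lam * ||L x||^2."""
    W = np.diag(w)
    H = A.T @ W @ A + lam * (L.T @ L)   # regularized normal matrix
    return np.linalg.solve(H, A.T @ W @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))       # stand-in forward (system) matrix
x_true = rng.standard_normal(10)        # stand-in Stokes image, vectorized
b = A @ x_true + 0.01 * rng.standard_normal(40)   # noisy measurements
w = np.ones(40)                         # inverse-variance weights
L = np.eye(10)                          # identity regularizer (Tikhonov)
x_hat = wls_quadratic_reg(A, b, w, 1e-3, L)
print(np.linalg.norm(x_hat - x_true) < 0.1)  # True: small noise, light penalty
```

    Replacing L with a finite-difference operator, or swapping the quadratic penalty for an edge-preserving one (which no longer has a closed form), recovers the design space compared in the text.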

  19. Characteristic Examination of New Synchronous Motor that Composes Craw Teeth of Soft Magnetic Composite

    NASA Astrophysics Data System (ADS)

    Enomoto, Yuji; Ito, Motoya; Masaki, Ryozo; Asaka, Kazuo

    We examined the claw type teeth motor as one application of the soft magnetic composite to a motor core. To understand the characteristics of the claw type teeth motor quantitatively, we used 3-dimensional electromagnetic field analysis to predict them in advance and manufactured a trial motor for evaluation. We also compared the advantages of the claw type teeth motor with those of a conventional slot type motor. The results are: 1. The 3-dimensional electromagnetic field analysis can estimate with high accuracy the characteristics of the 3-phase permanent magnet synchronous claw type teeth motor having a core composed of the soft magnetic composite. 2. The claw type teeth motor achieves about 20% higher output than a conventional slot type motor having an electromagnetic steel core of equal volume. 3. The motor efficiency of the claw type teeth motor is about 3.5% higher than that of the conventional motor.

  20. An evaluation of three experimental processes for two-dimensional transonic tests

    NASA Technical Reports Server (NTRS)

    Zuppardi, Gennaro

    1989-01-01

    Aerodynamic measurements in conventional wind tunnels usually suffer from the interference effects of the sting supporting the model and of the test section walls. These effects are particularly severe in the transonic regime. Sting interference can be overcome through the magnetic suspension technique. Wall effects can be alleviated by testing airfoils in conventional, ventilated tunnels at relatively small model-to-tunnel size ratios; by treatment of the tunnel wall boundary layers; or by utilization of the Adaptive Wall Test Section (AWTS) concept. The operating capabilities and results from two of the foremost two-dimensional, transonic AWTS facilities in existence are assessed: the NASA 0.3-Meter Transonic Cryogenic Tunnel and the ONERA T-2 facility located in Toulouse, France. In addition, results from the well-known conventional facility, the NAE 5 ft x 5 ft Canadian wind tunnel, are assessed. CAST10/D0A2 airfoil results are used in all of the evaluations.

  1. Quasi-radial wall jets as a new concept in boundary layer flow control

    NASA Astrophysics Data System (ADS)

    Javadi, Khodayar; Hajipour, Majid

    2018-01-01

    This work introduces a novel concept of wall jets in which the flow is radially injected into a medium through a sector of a cylinder, called quasi-radial (QR) wall jets. The results revealed that the fluid dynamics of QR wall jet flow differs from that of conventional wall jets. Lateral and normal propagation of a conventional three-dimensional wall jet occurs via shear stresses, whereas lateral propagation of a QR wall jet is driven by the mean lateral component of the velocity field. Moreover, arrays of conventional three-dimensional wall jets discharged into quiescent air form a combined wall jet only at a large distance from the nozzles, while QR wall jets spread laterally at once, meet, and merge within a short distance downstream of the jet nozzles. Furthermore, when conventional jets are discharged into an external flow there is no strong interaction between them, as they move in parallel; in QR wall jets, the lateral components of the velocity field interact strongly with the boundary layer of the external flow and create strong helical vortices that act as vortex generators.

  2. Advances in neuroscience and the biological and toxin weapons convention.

    PubMed

    Dando, Malcolm

    2011-01-01

    This paper investigates the potential threat to the prohibition of the hostile misuse of the life sciences embodied in the Biological and Toxin Weapons Convention from the rapid advances in the field of neuroscience. The paper describes how the implications of advances in science and technology are considered at the Five Year Review Conferences of the Convention and how State Parties have developed their appreciations since the First Review Conference in 1980. The ongoing advances in neurosciences are then assessed and their implications for the Convention examined. It is concluded that State Parties should consider a much more regular and systematic review system for such relevant advances in science and technology when they meet at the Seventh Review Conference in late 2011, and that neuroscientists should be much more informed and engaged in these processes of protecting their work from malign misuse.

  3. Advances in Neuroscience and the Biological and Toxin Weapons Convention

    PubMed Central

    Dando, Malcolm

    2011-01-01

    This paper investigates the potential threat to the prohibition of the hostile misuse of the life sciences embodied in the Biological and Toxin Weapons Convention from the rapid advances in the field of neuroscience. The paper describes how the implications of advances in science and technology are considered at the Five Year Review Conferences of the Convention and how State Parties have developed their appreciations since the First Review Conference in 1980. The ongoing advances in neurosciences are then assessed and their implications for the Convention examined. It is concluded that State Parties should consider a much more regular and systematic review system for such relevant advances in science and technology when they meet at the Seventh Review Conference in late 2011, and that neuroscientists should be much more informed and engaged in these processes of protecting their work from malign misuse. PMID:21350673

  4. Dispersive estimates for massive Dirac operators in dimension two

    NASA Astrophysics Data System (ADS)

    Erdoğan, M. Burak; Green, William R.; Toprak, Ebru

    2018-05-01

    We study the massive two-dimensional Dirac operator with an electric potential. In particular, we show that the t^{-1} decay rate holds in the L^1 → L^∞ setting if the threshold energies are regular. We also show these bounds hold in the presence of s-wave resonances at the threshold. We further show that, if the threshold energies are regular, then a faster decay rate of t^{-1}(log t)^{-2} is attained for large t, at the cost of logarithmic spatial weights. The free Dirac equation does not satisfy this bound due to the s-wave resonances at the threshold energies.

  5. Dimension-5 C P -odd operators: QCD mixing and renormalization

    DOE PAGES

    Bhattacharya, Tanmoy; Cirigliano, Vincenzo; Gupta, Rajan; ...

    2015-12-23

    Here, we study the off-shell mixing and renormalization of flavor-diagonal dimension-five T- and P-odd operators involving quarks, gluons, and photons, including quark electric dipole and chromoelectric dipole operators. Furthermore, we present the renormalization matrix to one loop in the MS-bar scheme. We also provide a definition of the quark chromoelectric dipole operator in a regularization-independent momentum-subtraction scheme suitable for nonperturbative lattice calculations and present the matching coefficients with the MS-bar scheme to one loop in perturbation theory, using both the naïve dimensional regularization and 't Hooft-Veltman prescriptions for γ5.

  6. The gravitational potential of axially symmetric bodies from a regularized green kernel

    NASA Astrophysics Data System (ADS)

    Trova, A.; Huré, J.-M.; Hersant, F.

    2011-12-01

    The determination of the gravitational potential inside celestial bodies (rotating stars, discs, planets, asteroids) is a common challenge in numerical astrophysics. Under axial symmetry, the potential is classically found from a two-dimensional integral over the body's meridional cross-section. Because it involves an improper integral, high accuracy is generally difficult to reach. We have discovered that, for homogeneous bodies, the singular Green kernel can be converted into a regular kernel by direct analytical integration. This new kernel, easily managed with standard techniques, opens interesting horizons, not only for numerical calculus but also for generating approximations, in particular for geometrically thin discs and rings.

  7. Power corrections to the HTL effective Lagrangian of QED

    NASA Astrophysics Data System (ADS)

    Carignano, Stefano; Manuel, Cristina; Soto, Joan

    2018-05-01

    We present compact expressions for the power corrections to the hard thermal loop (HTL) Lagrangian of QED in d space dimensions. These are corrections of order (L/T)^2, valid for momenta L ≪ T, where T is the temperature. In the limit d → 3 we achieve a consistent regularization of both infrared and ultraviolet divergences, which respects the gauge symmetry of the theory. Dimensional regularization also allows us to witness subtle cancellations of infrared divergences. We also discuss how to generalize our results in the presence of a chemical potential, so as to obtain the power corrections to the hard dense loop (HDL) Lagrangian.

  8. Single-particle tracking of quantum dot-conjugated prion proteins inside yeast cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsuji, Toshikazu; Kawai-Noma, Shigeko; Pack, Chan-Gi

    2011-02-25

    Research highlights: We develop a method to track a quantum dot-conjugated protein in yeast cells. We incorporate the conjugated quantum dot proteins into yeast spheroplasts. We track the motions by conventional or 3D tracking microscopy. -- Abstract: Yeast is a model eukaryote with a variety of biological resources. Here we developed a method to track a quantum dot (QD)-conjugated protein in the budding yeast Saccharomyces cerevisiae. We chemically conjugated QDs with the yeast prion Sup35, incorporated them into yeast spheroplasts, and tracked the motions by conventional two-dimensional or three-dimensional tracking microscopy. The method paves the way toward the individual tracking of proteins of interest inside living yeast cells.

  9. Secure positioning technique based on the encrypted visible light map

    NASA Astrophysics Data System (ADS)

    Lee, Y. U.; Jung, G.

    2017-01-01

    To overcome the performance degradation of the conventional visible light (VL) positioning system, caused by co-channel interference from adjacent lights and the irregularity of the VL reception position in the three-dimensional (3-D) VL channel, a secure positioning technique based on a two-dimensional (2-D) encrypted VL map is proposed, implemented as a prototype for a specific embedded positioning system, and verified by performance tests in this paper. The test results show that the proposed technique achieves a performance enhancement of more than 21.7% over the conventional one in a real positioning environment, and that the well-known PN code is the optimal stream encryption key for good VL positioning.

  10. Space-based optical image encryption.

    PubMed

    Chen, Wen; Chen, Xudong

    2010-12-20

    In this paper, we propose a new method based on a three-dimensional (3D) space-based strategy for optical image encryption. The two-dimensional (2D) processing of a plaintext in conventional optical encryption methods is extended to a 3D space-based processing. Each pixel of the plaintext is considered as one particle in the proposed space-based optical image encryption, and the diffraction of all particles forms an object wave in phase-shifting digital holography. The effectiveness and advantages of the proposed method are demonstrated by numerical results. The proposed method can provide a new optical encryption strategy instead of the conventional 2D processing, and may open up a new research perspective for optical image encryption.

  11. Regularity of center-of-pressure trajectories depends on the amount of attention invested in postural control

    PubMed Central

    Donker, Stella F.; Roerdink, Melvyn; Greven, An J.

    2007-01-01

    The influence of attention on the dynamical structure of postural sway was examined in 30 healthy young adults by manipulating the focus of attention. In line with the proposed direct relation between the amount of attention invested in postural control and regularity of center-of-pressure (COP) time series, we hypothesized that: (1) increasing cognitive involvement in postural control (i.e., creating an internal focus by increasing task difficulty through visual deprivation) increases COP regularity, and (2) withdrawing attention from postural control (i.e., creating an external focus by performing a cognitive dual task) decreases COP regularity. We quantified COP dynamics in terms of sample entropy (regularity), standard deviation (variability), sway-path length of the normalized posturogram (curviness), largest Lyapunov exponent (local stability), correlation dimension (dimensionality) and scaling exponent (scaling behavior). Consistent with hypothesis 1, standing with eyes closed significantly increased COP regularity. Furthermore, variability increased and local stability decreased, implying ineffective postural control. Conversely, and in line with hypothesis 2, performing a cognitive dual task while standing with eyes closed led to greater irregularity and smaller variability, suggesting an increase in the “efficiency” or “automaticity” of postural control. In conclusion, these findings not only indicate that regularity of COP trajectories is positively related to the amount of attention invested in postural control, but also substantiate that in certain situations an increased internal focus may in fact be detrimental to postural control. PMID:17401553
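
    Sample entropy, the regularity measure used above, is computed as -ln(A/B), where B counts pairs of length-m templates within tolerance r (Chebyshev distance) and A counts the corresponding length-(m+1) matches. A minimal sketch follows (self-matches excluded; the test signals are hypothetical, not the study's COP data):

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy: -ln(A/B), with B the number of length-m template
    matches and A the number of length-(m+1) matches, using Chebyshev
    distance < r and excluding self-matches."""
    x = np.asarray(x, float)
    r = r_factor * np.std(x)
    n = len(x)

    def count(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.sum(d < r)
        return c

    B, A = count(m), count(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 400)
regular = np.sin(t)                    # predictable, low-entropy signal
noisy = rng.standard_normal(400)       # irregular, high-entropy signal
print(sample_entropy(regular) < sample_entropy(noisy))  # True
```

    Lower sample entropy therefore corresponds to the more regular (more attentively controlled) COP trajectories described in the abstract.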

  12. OMFIT Tokamak Profile Data Fitting and Physics Analysis

    DOE PAGES

    Logan, N. C.; Grierson, B. A.; Haskey, S. R.; ...

    2018-01-22

    Here, One Modeling Framework for Integrated Tasks (OMFIT) has been used to develop a consistent tool for interfacing with, mapping, visualizing, and fitting tokamak profile measurements. OMFIT is used to integrate the many diverse diagnostics on multiple tokamak devices into a regular data structure, consistently applying spatial and temporal treatments to each channel of data. Tokamak data are fundamentally time dependent and are treated so from the start, with front-loaded and logic-based manipulations such as filtering based on the identification of edge-localized modes (ELMs) that commonly scatter data. Fitting is general in its approach, and tailorable in its application in order to address physics constraints and handle the multiple spatial and temporal scales involved. Although community-standard one-dimensional fitting is supported, including scale-length fitting and fitting polynomial-exponential blends to capture the H-mode pedestal, OMFITprofiles includes two-dimensional (2-D) fitting using bivariate splines or radial basis functions. These 2-D fits produce regular evolutions in time, removing jitter that has historically been smoothed ad hoc in transport applications. Profiles interface directly with a wide variety of models within the OMFIT framework, providing the inputs for TRANSP, kinetic-EFIT 2-D equilibrium, and GPEC three-dimensional equilibrium calculations. The OMFITprofiles tool's rapid and comprehensive analysis of dynamic plasma profiles thus provides the critical link between raw tokamak data and the simulations necessary for physics understanding.
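
    The 2-D radial-basis-function fitting mentioned above can be illustrated with SciPy's general-purpose RBF interpolator. This is a generic sketch with synthetic (radius, time) data, not OMFITprofiles code; the sample values and kernel choice are hypothetical:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(60, 2))  # columns: normalized radius, time
# hypothetical profile: radial decay with a slow temporal modulation
vals = np.exp(-3 * pts[:, 0]) * (1 + 0.1 * np.sin(6 * pts[:, 1]))

# thin-plate-spline RBF fit over both coordinates at once,
# giving a profile evolution that is regular in time
fit = RBFInterpolator(pts, vals, kernel='thin_plate_spline', smoothing=0.0)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 20),
                            np.linspace(0, 1, 20)), axis=-1).reshape(-1, 2)
smooth_profile = fit(grid)
print(smooth_profile.shape)  # (400,)
```

    Increasing the `smoothing` parameter trades exact interpolation for the jitter removal in time that the abstract highlights.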

  13. OMFIT Tokamak Profile Data Fitting and Physics Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Logan, N. C.; Grierson, B. A.; Haskey, S. R.

    Here, One Modeling Framework for Integrated Tasks (OMFIT) has been used to develop a consistent tool for interfacing with, mapping, visualizing, and fitting tokamak profile measurements. OMFIT is used to integrate the many diverse diagnostics on multiple tokamak devices into a regular data structure, consistently applying spatial and temporal treatments to each channel of data. Tokamak data are fundamentally time dependent and are treated so from the start, with front-loaded and logic-based manipulations such as filtering based on the identification of edge-localized modes (ELMs) that commonly scatter data. Fitting is general in its approach, and tailorable in its application in order to address physics constraints and handle the multiple spatial and temporal scales involved. Although community-standard one-dimensional fitting is supported, including scale-length fitting and fitting polynomial-exponential blends to capture the H-mode pedestal, OMFITprofiles includes two-dimensional (2-D) fitting using bivariate splines or radial basis functions. These 2-D fits produce regular evolutions in time, removing jitter that has historically been smoothed ad hoc in transport applications. Profiles interface directly with a wide variety of models within the OMFIT framework, providing the inputs for TRANSP, kinetic-EFIT 2-D equilibrium, and GPEC three-dimensional equilibrium calculations. The OMFITprofiles tool's rapid and comprehensive analysis of dynamic plasma profiles thus provides the critical link between raw tokamak data and the simulations necessary for physics understanding.

  14. Fe3O4-in-silica super crystal of defined interstices for single protein molecules entrapment under magnetic flux.

    PubMed

    Ye, Lin; Yu, Chih Hao; Jiang, PengJu; Qiu, Lin; Ng, Olivia T W; Yung, Ken K L; He, Heyong; Tsang, Shik Chi

    2010-09-28

    Confocal fluorescence demonstrates that single molecules of dye-labelled Cytochrome C or B5 containing paramagnetic Fe(III) can be magnetically placed into the interstices of a super crystal composed of three-dimensional regular arrays of Fe3O4 nanoparticles.

  15. Orchestra Festival Evaluations: Interjudge Agreement and Relationships between Performance Categories and Final Ratings.

    ERIC Educational Resources Information Center

    Garman, Barry R.; And Others

    1991-01-01

    Band, orchestra, and choir festival evaluations are a regular part of many secondary school music programs, and most such festivals engage adjudicators who rate each group's performance. Because music ensemble performance is complex and multi-dimensional, it does not lend itself readily to precise measurement; generally, musical performances are…

  16. Development of a Scale Measuring Trait Anxiety in Physical Education

    ERIC Educational Resources Information Center

    Barkoukis, Vassilis; Rodafinos, Angelos; Koidou, Eirini; Tsorbatzoudis, Haralambos

    2012-01-01

    The aim of the present study was to examine the validity and reliability of a multi-dimensional measure of trait anxiety specifically designed for the physical education lesson. The Physical Education Trait Anxiety Scale was initially completed by 774 high school students during regular school classes. A confirmatory factor analysis supported the…

  17. Method of assembly of molecular-sized nets and scaffolding

    DOEpatents

    Michl, Josef; Magnera, Thomas F.; David, Donald E.; Harrison, Robin M.

    1999-01-01

    The present invention relates to methods and starting materials for forming molecular-sized grids or nets, or other structures based on such grids and nets, by creating molecular links between elementary molecular modules constrained to move in only two directions on an interface or surface by adhesion or bonding to that interface or surface. In the methods of this invention, monomers are employed as the building blocks of grids and more complex structures. Monomers are introduced onto and allowed to adhere or bond to an interface. The connector groups of adjacent adhered monomers are then polymerized with each other to form a regular grid in two dimensions above the interface. Modules that are not bound or adhered to the interface are removed prior to reaction of the connector groups to avoid undesired three-dimensional cross-linking and the formation of non-grid structures. Grids formed by the methods of this invention are useful in a variety of applications, including among others, for separations technology, as masks for forming regular surface structures (i.e., metal deposition) and as templates for three-dimensional molecular-sized structures.

  18. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION

    PubMed Central

    Allen, Genevera I.; Tibshirani, Robert

    2015-01-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility. PMID:26877823
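
    A drastically simplified sketch of the EM-type imputation idea described above: iterate between estimating a normal model (here with a simple ridge penalty standing in for the paper's additive inverse-covariance penalties) and replacing missing entries with their conditional means. All data and parameter choices below are hypothetical:

```python
import numpy as np

def em_impute(X, lam=0.1, iters=30):
    """EM-style imputation under a multivariate normal model with a ridge-
    regularized covariance (a simplified stand-in for the transposable
    regularized covariance models of the paper)."""
    X = np.array(X, float)
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])  # mean-fill to start
    for _ in range(iters):
        mu = X.mean(axis=0)
        S = np.cov(X, rowvar=False) + lam * np.eye(X.shape[1])
        for i in range(X.shape[0]):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            # conditional mean of missing given observed under N(mu, S)
            Soo = S[np.ix_(o, o)]
            Smo = S[np.ix_(m, o)]
            X[i, m] = mu[m] + Smo @ np.linalg.solve(Soo, X[i, o] - mu[o])
    return X

rng = np.random.default_rng(0)
z = rng.standard_normal(100)
X_full = np.column_stack([z, z + 0.1 * rng.standard_normal(100),
                          rng.standard_normal(100)])
X_miss = X_full.copy()
X_miss[:20, 1] = np.nan          # hide correlated entries
X_hat = em_impute(X_miss)
err_em = np.mean((X_hat[:20, 1] - X_full[:20, 1]) ** 2)
err_mean = np.mean((np.nanmean(X_miss[:, 1]) - X_full[:20, 1]) ** 2)
print(err_em < err_mean)  # True: conditional means exploit the correlation
```

    The transposable models in the paper generalize this by giving rows and columns their own penalized mean and covariance structure.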

  19. TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.

    PubMed

    Allen, Genevera I; Tibshirani, Robert

    2010-06-01

    Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
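The transposable structure and exact penalties of the paper are not reproduced here; the following is a minimal numerical sketch, in the plain multivariate setting, of the two ingredients the abstract describes: an additive ridge-type penalty that keeps the estimated covariance non-singular, and EM-type conditional-mean imputation (omitting the conditional-covariance correction term of full EM). All names and parameter values (`lam`, the iteration count) are illustrative assumptions.

```python
import numpy as np

def ridge_covariance(X, lam):
    """Shrinkage estimate: an additive penalty on the inverse covariance
    amounts (up to constants) to adding lam * I to the sample covariance,
    keeping the estimate non-singular even when p is large."""
    Xc = X - X.mean(axis=0)
    return Xc.T @ Xc / X.shape[0] + lam * np.eye(X.shape[1])

def em_impute(X, lam=0.1, n_iter=50):
    """EM-type imputation: alternate a conditional-mean fill-in of the
    missing entries (E-step) with re-estimation of the mean and the
    regularized covariance (M-step)."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])  # initial mean fill
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        S = ridge_covariance(X, lam)
        for i in np.where(miss.any(axis=1))[0]:
            m = miss[i]
            if m.all():           # nothing observed in this row
                X[i] = mu
                continue
            S_oo = S[np.ix_(~m, ~m)]
            S_mo = S[np.ix_(m, ~m)]
            # conditional mean of missing given observed entries
            X[i, m] = mu[m] + S_mo @ np.linalg.solve(S_oo, X[i, ~m] - mu[~m])
    return X

# three strongly correlated columns, 20% entries removed at random
rng = np.random.default_rng(0)
Z = rng.standard_normal((200, 1))
data = np.hstack([Z + 0.1 * rng.standard_normal((200, 1)) for _ in range(3)])
holes = data.copy()
holes[rng.random(holes.shape) < 0.2] = np.nan
filled = em_impute(holes)
err = np.abs(filled[np.isnan(holes)] - data[np.isnan(holes)]).mean()
```

Because the columns are highly correlated, the conditional-mean fill-in recovers missing entries far better than a plain column-mean fill would.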

  20. On the Hodge-type decomposition and cohomology groups of k-Cauchy-Fueter complexes over domains in the quaternionic space

    NASA Astrophysics Data System (ADS)

    Chang, Der-Chen; Markina, Irina; Wang, Wei

    2016-09-01

    The k-Cauchy-Fueter operator D_0^{(k)} on the one-dimensional quaternionic space H is the Euclidean version of the spin-k/2 massless field operator on the Minkowski space in physics. The k-Cauchy-Fueter equation for k ≥ 2 is overdetermined and its compatibility condition is given by the k-Cauchy-Fueter complex. In quaternionic analysis, these complexes play the role of the Dolbeault complex in several complex variables. We prove that a natural boundary value problem associated to this complex is regular. Then, by using the theory of regular boundary value problems, we show the Hodge-type orthogonal decomposition, and the fact that the non-homogeneous k-Cauchy-Fueter equation D_0^{(k)} u = f on a smooth domain Ω in H is solvable if and only if f satisfies the compatibility condition and is orthogonal to the set ℋ^1_{(k)}(Ω) of Hodge-type elements. This set is isomorphic to the first cohomology group of the k-Cauchy-Fueter complex over Ω, which is finite-dimensional, while the second cohomology group is always trivial.

  1. One-loop corrections from higher dimensional tree amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cachazo, Freddy; He, Song; Yuan, Ellis Ye

    We show how one-loop corrections to scattering amplitudes of scalars and gauge bosons can be obtained from tree amplitudes in one higher dimension. Starting with a complete tree-level scattering amplitude of n + 2 particles in five dimensions, one assumes that two of them cannot be “detected” and therefore an integration over their LIPS is carried out. The resulting object, a function of the remaining n particles, is taken to be four-dimensional by restricting the corresponding momenta. We perform this procedure in the context of the tree-level CHY formulation of amplitudes. The scattering equations obtained in the procedure coincide with those derived by Geyer et al. from ambitwistor constructions and recently studied by two of the authors for bi-adjoint scalars. They have two sectors of solutions: regular and singular. We prove that the contribution from regular solutions generically gives rise to unphysical poles. However, using a BCFW argument we prove that the unphysical contributions are always homogeneous functions of the loop momentum and can be discarded. We also show that the contribution from singular solutions turns out to be homogeneous as well.

  2. Phase retrieval and 3D imaging in gold nanoparticles based fluorescence microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh M.; Meir, Rinat; Zalevsky, Zeev

    2017-02-01

    Optical sectioning microscopy can provide highly detailed three dimensional (3D) images of biological samples. However, it requires acquisition of many images per volume, and is therefore time consuming, and may not be suitable for live cell 3D imaging. We propose the use of the modified Gerchberg-Saxton phase retrieval algorithm to enable full 3D imaging of a gold-nanoparticle-tagged sample using only two images. The reconstructed field is free-space propagated to all other focus planes in post processing, and the 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because we propose to apply the phase retrieval to nanoparticles, the ambiguities typical of the Gerchberg-Saxton algorithm are eliminated. The proposed concept is then further extended to the tracking of single fluorescent particles within a three dimensional (3D) cellular environment, based on image processing algorithms that significantly increase the localization accuracy of the 3D point spread function with respect to regular Gaussian fitting. All proposed concepts are validated both on simulated data as well as experimentally.
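The authors' modified algorithm is not reproduced here, but the classical two-plane Gerchberg-Saxton iteration it builds on can be sketched in a few lines: alternately enforce the measured amplitude in the Fourier plane and in the object plane while keeping the current phase estimate. The synthetic test object and all parameter choices are illustrative.

```python
import numpy as np

def gerchberg_saxton(amp_obj, amp_four, n_iter=200, seed=0):
    """Recover the object-plane phase from two intensity measurements:
    the amplitude |f| in the object plane and |F| in the Fourier plane.
    Each iteration replaces the amplitude in one plane by the measured
    one while keeping the current phase estimate."""
    rng = np.random.default_rng(seed)
    g = amp_obj * np.exp(1j * rng.uniform(0, 2 * np.pi, amp_obj.shape))
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = amp_four * np.exp(1j * np.angle(G))   # Fourier-plane constraint
        g = np.fft.ifft2(G)
        g = amp_obj * np.exp(1j * np.angle(g))    # object-plane constraint
    return g

# synthetic object: unit amplitude with an unknown plane-wave phase
n = 32
y, x = np.mgrid[:n, :n]
true = np.exp(1j * 2 * np.pi * (x + y) / n)
amp_obj = np.abs(true)
amp_four = np.abs(np.fft.fft2(true))

rec = gerchberg_saxton(amp_obj, amp_four)
# phase retrieval is defined up to a global phase, so compare magnitudes
res = np.abs(np.abs(np.fft.fft2(rec)) - amp_four).max() / amp_four.max()
```

For this simple test object the iteration locks onto a field consistent with both measured amplitudes, so the residual `res` is at numerical precision.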

  3. One-loop corrections from higher dimensional tree amplitudes

    DOE PAGES

    Cachazo, Freddy; He, Song; Yuan, Ellis Ye

    2016-08-01

    We show how one-loop corrections to scattering amplitudes of scalars and gauge bosons can be obtained from tree amplitudes in one higher dimension. Starting with a complete tree-level scattering amplitude of n + 2 particles in five dimensions, one assumes that two of them cannot be “detected” and therefore an integration over their LIPS is carried out. The resulting object, function of the remaining n particles, is taken to be four-dimensional by restricting the corresponding momenta. We perform this procedure in the context of the tree-level CHY formulation of amplitudes. The scattering equations obtained in the procedure coincide with thosemore » derived by Geyer et al. from ambitwistor constructions and recently studied by two of the authors for bi-adjoint scalars. They have two sectors of solutions: regular and singular. We prove that the contribution from regular solutions generically gives rise to unphysical poles. However, using a BCFW argument we prove that the unphysical contributions are always homogeneous functions of the loop momentum and can be discarded. We also show that the contribution from singular solutions turns out to be homogeneous as well.« less

  4. Method of assembly of molecular-sized nets and scaffolding

    DOEpatents

    Michl, J.; Magnera, T.F.; David, D.E.; Harrison, R.M.

    1999-03-02

    The present invention relates to methods and starting materials for forming molecular-sized grids or nets, or other structures based on such grids and nets, by creating molecular links between elementary molecular modules constrained to move in only two directions on an interface or surface by adhesion or bonding to that interface or surface. In the methods of this invention, monomers are employed as the building blocks of grids and more complex structures. Monomers are introduced onto and allowed to adhere or bond to an interface. The connector groups of adjacent adhered monomers are then polymerized with each other to form a regular grid in two dimensions above the interface. Modules that are not bound or adhered to the interface are removed prior to reaction of the connector groups to avoid undesired three-dimensional cross-linking and the formation of non-grid structures. Grids formed by the methods of this invention are useful in a variety of applications, including among others, for separations technology, as masks for forming regular surface structures (i.e., metal deposition) and as templates for three-dimensional molecular-sized structures. 9 figs.

  5. Biosensors: Viruses for ultrasensitive assays

    NASA Astrophysics Data System (ADS)

    Donath, Edwin

    2009-04-01

    A three-dimensional assay based on genetically engineered viral nanoparticles and nickel nanohairs can detect much lower levels of protein markers associated with heart attacks than conventional assays.

  6. Study of X(5568) in a unitary coupled-channel approximation of B K̄ and B_s π

    NASA Astrophysics Data System (ADS)

    Sun, Bao-Xi; Dong, Fang-Yong; Pang, Jing-Long

    2017-07-01

    The potential of the B meson and the pseudoscalar meson is constructed up to the next-to-leading-order Lagrangian, and then the B K̄ and B_s π interaction is studied in the unitary coupled-channel approximation. A resonant state with a mass of about 5568 MeV and J^P = 0^+ is generated dynamically, which can be associated with the X(5568) state announced recently by the D0 Collaboration. The mass and the decay width of this resonant state depend on the regularization scale in the dimensional regularization scheme, or on the maximum momentum in the momentum-cutoff regularization scheme. The scattering amplitude of the vector B meson and the pseudoscalar meson is calculated, and an axial-vector state with a mass near 5620 MeV and J^P = 1^+ is produced. Their partners in the charm sector are also discussed.
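The coupled-channel machinery of the paper is beyond a short sketch, but the cutoff dependence it mentions can be illustrated with a single-channel toy model: a constant contact potential V unitarized as T(E) = V / (1 - V G(E)), with the two-body loop function G regularized by a sharp momentum cutoff. The mass, cutoff, and potential values below are arbitrary assumptions; the point is only that the pole position moves with the regularization scale.

```python
import numpy as np

MU = 450.0      # hypothetical reduced mass of the meson pair (MeV)
V = -2.5e-5     # hypothetical attractive contact interaction (MeV^-2)

def loop_G(E, cutoff, n=4000):
    """Nonrelativistic two-body loop function, G(E) = (1/2pi^2) *
    integral of q^2 dq / (E - q^2/(2*MU)) up to a sharp momentum cutoff;
    it is real and negative below threshold (E < 0)."""
    q = np.linspace(0.0, cutoff, n)
    integrand = q**2 / (E - q**2 / (2.0 * MU))
    # trapezoidal rule (written out for NumPy-version portability)
    return np.sum((integrand[1:] + integrand[:-1]) * np.diff(q)) / 2 / (2 * np.pi**2)

def find_pole(cutoff):
    """Bisect for the energy where 1 - V*G(E) = 0, i.e. the below-threshold
    pole of the unitarized amplitude T(E) = V / (1 - V*G(E))."""
    f = lambda E: 1.0 - V * loop_G(E, cutoff)
    lo, hi = -100.0, -0.01
    assert f(lo) > 0 > f(hi)          # a sign change brackets the pole
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

E1 = find_pole(1000.0)  # pole position at one cutoff...
E2 = find_pole(1100.0)  # ...shifts at another, as the abstract notes
```

Raising the cutoff deepens the loop function, so the dynamically generated pole moves further below threshold, mirroring the scheme dependence discussed in the abstract.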

  7. Microscopic Spin Model for the STOCK Market with Attractor Bubbling on Regular and Small-World Lattices

    NASA Astrophysics Data System (ADS)

    Krawiecki, A.

    A multi-agent spin model for changes of prices in the stock market, based on an Ising-like cellular automaton with interactions between traders randomly varying in time, is investigated by means of Monte Carlo simulations. The structure of interactions has the topology of a small-world network obtained from regular two-dimensional square lattices with various coordination numbers by randomly cutting and rewiring edges. Simulations of the model on regular lattices do not yield time series of logarithmic price returns with statistical properties comparable with the empirical ones. In contrast, in the case of networks with a certain degree of randomness, for a wide range of parameters the time series of the logarithmic price returns exhibit intermittent bursting typical of volatility clustering. Also the tails of distributions of returns obey a power scaling law with exponents comparable to those obtained from the empirical data.
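A minimal sketch, not the author's exact automaton: Ising-like agents on a partially rewired square lattice, with couplings redrawn at every step and the change in magnetization serving as a log-return proxy. All parameters (`L`, `p`, `beta`, the coupling distribution) are illustrative assumptions.

```python
import numpy as np

def small_world_edges(L, p, rng):
    """Edges of an L x L square lattice (4 nearest neighbours, periodic
    boundaries); a fraction p of edges is rewired to random targets,
    giving a small-world topology (occasional self-links are harmless)."""
    edges = []
    for i in range(L * L):
        x, y = divmod(i, L)
        edges.append((i, ((x + 1) % L) * L + y))
        edges.append((i, x * L + (y + 1) % L))
    edges = np.array(edges)
    rewire = rng.random(len(edges)) < p
    edges[rewire, 1] = rng.integers(0, L * L, rewire.sum())
    return edges

def simulate_returns(L=20, p=0.2, steps=400, beta=0.5, seed=1):
    """Spin traders updated by a heat-bath rule with couplings redrawn at
    every step; the log-return proxy is the change of the mean spin."""
    rng = np.random.default_rng(seed)
    edges = small_world_edges(L, p, rng)
    s = rng.choice(np.array([-1, 1]), L * L)
    mags = np.empty(steps)
    for t in range(steps):
        J = rng.normal(1.0, 1.0, len(edges))           # time-varying couplings
        h = np.zeros(L * L)
        np.add.at(h, edges[:, 0], J * s[edges[:, 1]])  # local field from
        np.add.at(h, edges[:, 1], J * s[edges[:, 0]])  # both edge directions
        s = np.where(rng.random(L * L) < 1 / (1 + np.exp(-2 * beta * h)), 1, -1)
        mags[t] = s.mean()
    return np.diff(mags)   # r_t = m_t - m_{t-1}

returns = simulate_returns()
```

Diagnostics such as the return distribution's tails or volatility clustering would then be computed from `returns`; this sketch only sets up the dynamics.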

  8. Slow dynamics and regularization phenomena in ensembles of chaotic neurons

    NASA Astrophysics Data System (ADS)

    Rabinovich, M. I.; Varona, P.; Torres, J. J.; Huerta, R.; Abarbanel, H. D. I.

    1999-02-01

    We have explored the role of calcium concentration dynamics in the generation of chaos and in the regularization of the bursting oscillations using a minimal neural circuit of two coupled model neurons. In regions of the control parameter space where the slowest component, namely the calcium concentration in the endoplasmic reticulum, weakly depends on the other variables, this model is analogous to three-dimensional systems as found in [1] or [2]. These are minimal models that describe the fundamental characteristics of the chaotic spiking-bursting behavior observed in real neurons. We have investigated different regimes of cooperative behavior in large assemblies of such units using lattices of non-identical Hindmarsh-Rose neurons, electrically coupled, with parameters chosen randomly inside the chaotic region. We study the regularization mechanisms in large assemblies and the development of several spatio-temporal patterns as a function of the interconnectivity among nearest neighbors.
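A sketch of the kind of minimal circuit described: two non-identical Hindmarsh-Rose neurons, electrically (diffusively) coupled through their membrane potentials, integrated with a simple Euler scheme. The model parameters are the standard textbook values; the coupling strength, drive currents, and initial conditions are arbitrary choices, not those of the paper.

```python
import numpy as np

def hr_derivs(state, I, coupling):
    """Hindmarsh-Rose right-hand side with standard parameters
    (a=1, b=3, c=1, d=5, r=0.006, s=4, x_r=-1.6)."""
    x, y, z = state
    dx = y + 3 * x**2 - x**3 - z + I + coupling
    dy = 1 - 5 * x**2 - y
    dz = 0.006 * (4 * (x + 1.6) - z)
    return np.array([dx, dy, dz])

def simulate_pair(g=0.3, I1=3.2, I2=3.3, steps=50000, dt=0.01):
    """Two non-identical neurons; the electrical coupling current is
    proportional to the difference of the membrane potentials."""
    s1 = np.array([-1.0, 0.0, 2.0])
    s2 = np.array([-1.2, 0.1, 2.1])
    x = np.empty((steps, 2))
    for t in range(steps):
        c = g * (s2[0] - s1[0])          # diffusive (electrical) coupling
        s1, s2 = (s1 + dt * hr_derivs(s1, I1, c),
                  s2 + dt * hr_derivs(s2, I2, -c))
        x[t] = s1[0], s2[0]
    return x

x = simulate_pair()
```

Scanning the coupling strength `g` and comparing the two voltage traces is the natural next step for studying the regularization of bursting; here the sketch only produces the coupled dynamics.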

  9. Vorticity vector-potential method based on time-dependent curvilinear coordinates for two-dimensional rotating flows in closed configurations

    NASA Astrophysics Data System (ADS)

    Fu, Yuan; Zhang, Da-peng; Xie, Xi-lin

    2018-04-01

    In this study, a vorticity vector-potential method for two-dimensional viscous incompressible rotating driven flows is developed in the time-dependent curvilinear coordinates. The method is applicable in both inertial and non-inertial frames of reference with the advantage of a fixed and regular calculation domain. The numerical method is applied to triangle and curved triangle configurations in constant and varying rotational angular velocity cases respectively. The evolutions of flow field are studied. The geostrophic effect, unsteady effect and curvature effect on the evolutions are discussed.

  10. Defects in regular nanosystems and interference spectra at reemission of electromagnetic field attosecond pulses

    NASA Astrophysics Data System (ADS)

    Matveev, V. I.; Makarov, D. N.

    2017-01-01

    The effect of defects in nanostructured targets on interference spectra at the reemission of attosecond electromagnetic pulses has been considered. General expressions have been obtained for calculations of spectral distributions for one-, two-, and three-dimensional multiatomic nanosystems consisting of identical complex atoms with defects such as bends, vacancies, and breaks. Changes in interference spectra by a linear chain with several removed atoms (chain with breaks) and by a linear chain with a bend have been calculated as examples allowing a simple analytical representation. Generalization to two- and three-dimensional nanosystems has been developed.

  11. Scars of the Wigner Function.

    PubMed

    Toscano; de Aguiar MA; Ozorio De Almeida AM

    2001-01-01

    We propose a picture of Wigner function scars as a sequence of concentric rings along a two-dimensional surface inside a periodic orbit. This is verified for a two-dimensional plane that contains a classical hyperbolic orbit of a Hamiltonian system with 2 degrees of freedom. The stationary wave functions are the familiar mixture of scarred and random waves, but the spectral average of the Wigner functions in part of the plane is nearly that of a harmonic oscillator and individual states are also remarkably regular. These results are interpreted in terms of the semiclassical picture of chords and centers.

  12. Probing quantum gravity through exactly soluble midi-superspaces I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashtekar, A.; Pierri, M.

    1996-12-01

    It is well-known that the Einstein-Rosen solutions to the 3+1-dimensional vacuum Einstein's equations are in one-to-one correspondence with solutions of 2+1-dimensional general relativity coupled to axi-symmetric, zero-rest-mass scalar fields. We first re-examine the quantization of this midi-superspace, paying special attention to the asymptotically flat boundary conditions and to certain functional-analytic subtleties associated with regularization. We then use the resulting quantum theory to analyze several conceptual and technical issues of quantum gravity. © 1996 American Institute of Physics.

  13. Spatiotemporal dynamics of oscillatory cellular patterns in three-dimensional directional solidification.

    PubMed

    Bergeon, N; Tourret, D; Chen, L; Debierre, J-M; Guérin, R; Ramirez, A; Billia, B; Karma, A; Trivedi, R

    2013-05-31

    We report results of directional solidification experiments conducted on board the International Space Station and quantitative phase-field modeling of those experiments. The experiments image, for the first time in situ, the spatially extended dynamics of three-dimensional cellular array patterns formed under microgravity conditions, where fluid flow is suppressed. Experiments and phase-field simulations reveal the existence of oscillatory breathing modes with periods of several tens of minutes. Oscillating cells are usually noncoherent due to array disorder, with the exception of small areas where the array structure is regular and stable.

  14. Vorticity vector-potential method based on time-dependent curvilinear coordinates for two-dimensional rotating flows in closed configurations

    NASA Astrophysics Data System (ADS)

    Fu, Yuan; Zhang, Da-peng; Xie, Xi-lin

    2018-03-01

    In this study, a vorticity vector-potential method for two-dimensional viscous incompressible rotating driven flows is developed in the time-dependent curvilinear coordinates. The method is applicable in both inertial and non-inertial frames of reference with the advantage of a fixed and regular calculation domain. The numerical method is applied to triangle and curved triangle configurations in constant and varying rotational angular velocity cases respectively. The evolutions of flow field are studied. The geostrophic effect, unsteady effect and curvature effect on the evolutions are discussed.

  15. Trailblazing Teacher Contract Agreement Adopted in Baltimore

    ERIC Educational Resources Information Center

    Education Digest: Essential Readings Condensed for Quick Review, 2011

    2011-01-01

    The Baltimore City Public Schools made national headlines late last year when the district adopted a new contract designed to take student learning and teacher professionalism to the next level. The three-year deal replaced conventional approaches to compensation--regular pay increases based on years in the system--with a new approach that gives…

  16. Conventions of Courtship: Gender and Race Differences in the Significance of Dating Rituals

    ERIC Educational Resources Information Center

    Jackson, Pamela Braboy; Kleiner, Sibyl; Geist, Claudia; Cebulko, Kara

    2011-01-01

    Dating rituals include dating--courtship methods that are regularly enacted. This study explores gender and race differences in the relative importance placed on certain symbolic activities previously identified by the dating literature as constituting such rituals. Using information collected from a racially diverse sample of college students (N…

  17. Evaluation of in-use fuel economy and on-board emissions for hybrid and regular CyRide transit buses.

    DOT National Transportation Integrated Search

    2012-10-01

    The objective of this project was to evaluate the in-use fuel economy and emission differences between hybrid-electric and conventional transit buses for the Ames, Iowa transit authority, CyRide. These CyRide buses were deployed in the fall of 20...

  18. Focal-Plane Alignment Sensing

    DTIC Science & Technology

    1993-02-01

    ...amplification induced by the inverse filter. The problem of noise amplification that arises in conventional image deblurring problems has often been... noise sensitivity, and strategies for selecting a regularization parameter have been developed. The probability of convergence to within a prescribed... (Contents fragments: Regularization Strategies in Image Deblurring; CLS Parameter Selection; Wiener Parameter Selection.)

  19. Novel cooperative neural fusion algorithms for image restoration and image fusion.

    PubMed

    Xia, Youshen; Kamel, Mohamed S

    2007-02-01

    To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms for image restoration, the proposed CNFR algorithm relaxes the need to estimate an optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that, in different noise environments, the two proposed algorithms obtain a better image estimate than several well-known image restoration and image fusion methods.
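As a baseline for what the CNFR algorithm aims to avoid, a conventional Tikhonov-regularized restoration can be sketched: its quality hinges on the choice of the regularization parameter. The Gaussian blur model and all numerical values below are illustrative assumptions, not from the paper.

```python
import numpy as np

def blur_kernel_fft(shape, sigma=2.0):
    """Gaussian blur transfer function, evaluated directly in the
    Fourier domain."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2 * (np.pi * sigma) ** 2 * (fx**2 + fy**2))

def tikhonov_restore(observed, H, lam):
    """Conventional Tikhonov-regularized deconvolution,
    x = argmin ||Hx - y||^2 + lam ||x||^2, solved per Fourier mode."""
    Y = np.fft.fft2(observed)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.fft.ifft2(X).real

rng = np.random.default_rng(3)
truth = np.zeros((64, 64))
truth[20:44, 20:44] = 1.0                 # simple test scene
H = blur_kernel_fft(truth.shape)
observed = np.fft.ifft2(H * np.fft.fft2(truth)).real \
    + 0.01 * rng.standard_normal(truth.shape)

# restoration error as a function of the regularization parameter
errors = {lam: np.abs(tikhonov_restore(observed, H, lam) - truth).mean()
          for lam in (1e-6, 1e-3, 1e-1)}
```

Too small a `lam` amplifies noise at frequencies where the blur transfer function is weak, while too large a `lam` over-smooths; the spread of `errors` across the three values illustrates why relaxing the parameter choice is attractive.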

  20. Respiratory infections and pneumonia: potential benefits of switching from smoking to vaping.

    PubMed

    Campagna, Davide; Amaradio, Maria Domenica; Sands, Mark F; Polosa, Riccardo

    2016-01-01

    Abstaining from tobacco smoking is likely to lower the risk of respiratory infections and pneumonia. Unfortunately, quitting smoking is not easy. Electronic cigarettes (ECs) are emerging as an attractive long-term alternative nicotine source to conventional cigarettes and are being adopted by smokers who wish to reduce or quit cigarette consumption. Also, given that the propylene glycol in EC aerosols is a potent bactericidal agent, switching from smoking to regular vaping is likely to produce additional lung health benefits. Here, we critically address some of the concerns arising from regular EC use in relation to lung health, including respiratory infections and pneumonia. In conclusion, smokers who quit by switching to regular EC use can reduce risk and reverse harm from tobacco smoking. Innovation in the e-vapour category is likely not only to further minimise residual health risks, but also to maximise health benefits.

  1. Simpler grammar, larger vocabulary: How population size affects language

    PubMed Central

    2018-01-01

    Languages with many speakers tend to be structurally simple while small communities sometimes develop languages with great structural complexity. Paradoxically, the opposite pattern appears to be observed for non-structural properties of language such as vocabulary size. These apparently opposite patterns pose a challenge for theories of language change and evolution. We use computational simulations to show that this inverse pattern can depend on a single factor: ease of diffusion through the population. A population of interacting agents was arranged on a network, passing linguistic conventions to one another along network links. Agents can invent new conventions, or replicate conventions that they have previously generated themselves or learned from other agents. Linguistic conventions are either Easy or Hard to diffuse, depending on how many times an agent needs to encounter a convention to learn it. In large groups, only linguistic conventions that are easy to learn, such as words, tend to proliferate, whereas small groups where everyone talks to everyone else allow for more complex conventions, like grammatical regularities, to be maintained. Our simulations thus suggest that language, and possibly other aspects of culture, may become simpler at the structural level as our world becomes increasingly interconnected. PMID:29367397
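A toy version of the described simulation, with assumed parameters throughout: conventions spread over a network, and an agent learns one after a threshold number of exposures (Easy: one, Hard: several). A small fully connected group can sustain the Hard convention, while on a large sparse network the Easy one diffuses much further.

```python
import numpy as np

def spread(adj, threshold, rounds, rng):
    """One convention diffusing over a network: every agent that knows it
    transmits it to one random neighbour per round, and an agent learns
    it after `threshold` exposures. Returns the final knowing fraction."""
    n = len(adj)
    exposures = np.zeros(n, int)
    knows = np.zeros(n, bool)
    knows[0] = True                      # a single initial inventor
    for _ in range(rounds):
        for i in np.where(knows)[0]:
            j = rng.choice(adj[i])
            exposures[j] += 1
        knows |= exposures >= threshold
    return knows.mean()

def ring_graph(n, k=2):
    """Ring lattice: each node is linked to its k nearest neighbours on
    each side (a sparse, locally clustered network)."""
    return [[(i + d) % n for d in range(-k, k + 1) if d != 0]
            for i in range(n)]

rng = np.random.default_rng(0)
small = [[j for j in range(10) if j != i] for i in range(10)]  # complete graph
large = ring_graph(300)

hard_small = spread(small, threshold=3, rounds=150, rng=rng)
easy_large = spread(large, threshold=1, rounds=150, rng=rng)
hard_large = spread(large, threshold=3, rounds=150, rng=rng)
```

In the ten-agent complete graph everyone is exposed often enough to learn even the Hard convention, whereas on the 300-node ring the Easy convention reaches far more agents than the Hard one, matching the paper's qualitative claim.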

  2. Transparency-enhancing technology allows three-dimensional assessment of gastrointestinal mucosa: A porcine model.

    PubMed

    Mizutani, Hiroya; Ono, Satoshi; Ushiku, Tetsuo; Kudo, Yotaro; Ikemura, Masako; Kageyama, Natsuko; Yamamichi, Nobutake; Fujishiro, Mitsuhiro; Someya, Takao; Fukayama, Masashi; Koike, Kazuhiko; Onodera, Hiroshi

    2018-02-01

    Although high-resolution three-dimensional imaging of endoscopically resected gastrointestinal specimens can help elucidate morphological features of gastrointestinal mucosa or tumors, there are no established methods to achieve this without breaking specimens apart. We evaluated the utility of transparency-enhancing technology for three-dimensional assessment of gastrointestinal mucosa in porcine models. Esophagus, stomach, and colon mucosa samples obtained from a sacrificed swine were formalin-fixed and paraffin-embedded, and subsequently deparaffinized for analysis. The samples were fluorescently stained, optically cleared using a transparency-enhancing technology, the ilLUmination of Cleared organs to IDentify target molecules (LUCID) method, and visualized using laser scanning microscopy. After observation, all specimens were paraffin-embedded again and evaluated by conventional histopathological assessment to measure the impact of the transparency-enhancing procedures. As a result, microscopic observation revealed horizontal section views of mucosa at deeper levels and enabled the three-dimensional image reconstruction of glandular and vascular structures. In addition, paraffin-embedded specimens after transparency-enhancing procedures were all assessed appropriately by conventional histopathological staining. These results suggest that transparency-enhancing technology may be feasible for clinical application and enable the non-destructive three-dimensional structural analysis of endoscopically resected specimens. Although there remain many limitations or problems to be solved, this promising technology might represent a novel histopathological method for evaluating gastrointestinal cancers. © 2018 Japanese Society of Pathology and John Wiley & Sons Australia, Ltd.

  3. 3D hybrid profile order technique in a single breath-hold 3D T2-weighted fast spin-echo sequence: Usefulness in diagnosis of small liver lesions.

    PubMed

    Hirata, Kenichiro; Nakaura, Takeshi; Okuaki, Tomoyuki; Tsuda, Noriko; Taguchi, Narumi; Oda, Seitaro; Utsunomiya, Daisuke; Yamashita, Yasuyuki

    2018-01-01

    We compared the efficacy of three-dimensional (3D) isotropic T2-weighted fast spin-echo imaging using a 3D hybrid profile order technique with a single breath-hold (3D-Hybrid BH) with that of a two-dimensional (2D) T2-weighted fast spin-echo conventional respiratory-gated (2D-Conventional RG) technique for visualising small liver lesions. This study was approved by our institutional review board. The requirement to obtain written informed consent was waived. Fifty patients with small (≤15 mm) hepatocellular carcinomas (HCC) (n=26), or benign cysts (n=24), had undergone hepatic MRI including both 2D-Conventional RG and 3D-Hybrid BH. We calculated the signal-to-noise ratio (SNR) and tumour-to-liver contrast (TLC). The diagnostic performance of the two protocols was analysed. The image acquisition time was 89% shorter with the 3D-Hybrid BH than with 2D-Conventional RG. There was no significant difference in the SNR between the two protocols. The area under the curve (AUC) of the TLC was significantly higher on 3D-Hybrid BH than on 2D-Conventional RG. The 3D-Hybrid BH sequence significantly improved diagnostic performance for small liver lesions with a shorter image acquisition time without sacrificing accuracy. Copyright © 2017. Published by Elsevier B.V.

  4. CT colonography with reduced bowel preparation after incomplete colonoscopy in the elderly.

    PubMed

    Iafrate, F; Hassan, C; Zullo, A; Stagnitti, A; Ferrari, R; Spagnuolo, A; Laghi, A

    2008-07-01

    We prospectively assessed the feasibility and acceptance of computerized tomographic colonography (CTC) without bowel cathartic preparation in elderly patients after incomplete colonoscopy. A total of 136 patients underwent CTC without cathartic preparation. The time delay between conventional colonoscopy and CTC ranged between 3 and 20 days, depending on the clinical situation. Before CTC, fecal tagging was achieved by adding diatrizoate meglumine and diatrizoate sodium to regular meals. CTCs were interpreted using a primary two-dimensional (2D) approach and 3D images for further characterization. Patients were interviewed before and 2 weeks after CTC to assess preparation acceptance. CTC was feasible and technically successful in all the 136 patients. Fecal tagging was judged as excellent in 113 (83%) patients and sufficient in 23 (17%). Average CT image interpretation time was 14.8 min. Six (4.4%) cases of colorectal cancer and nine (6.6%) large polyps were detected, as well as 23 (11.3%) extracolonic findings of high clinical importance. No major side effect occurred, although 25% patients reported minor side effects, especially diarrhea. Overall, 76/98 patients replied that they would be willing to repeat the test if necessary. CTC without cathartic preparation is a technically feasible and safe procedure to complete a colonic study in the elderly, prompting its use in clinical practice.

  5. Graphene-Like-Graphite as Fast-Chargeable and High-Capacity Anode Materials for Lithium Ion Batteries.

    PubMed

    Cheng, Qian; Okamoto, Yasuharu; Tamura, Noriyuki; Tsuji, Masayoshi; Maruyama, Shunya; Matsuo, Yoshiaki

    2017-11-01

    Here we propose the use of a carbon material called graphene-like-graphite (GLG) as an anode material of lithium ion batteries that delivers a high capacity of 608 mAh/g and provides superior rate capability. The morphology and crystal structure of GLG are quite similar to those of graphite, which is currently used as the anode material of lithium ion batteries. Therefore, it is expected to be used in the same manner as conventional graphite materials to fabricate the cells. Based on the data obtained from various spectroscopic techniques, we propose a structural GLG model in which nanopores and pairs of C-O-C units are introduced within the carbon layers stacked with three-dimensional regularity. Three types of highly ionic lithium ions are found in fully charged GLG and stored between its layers. The oxygen atoms introduced within the carbon layers seem to play an important role in accommodating a large amount of lithium ions in GLG. Moreover, the large increase in the interlayer spacing observed for fully charged GLG is ascribed to the migration of oxygen atoms, introduced within the carbon layer as C-O-C units, to the interlayer space while maintaining one of the C-O bonds.

  6. Gravity and antigravity in a brane world with metastable gravitons

    NASA Astrophysics Data System (ADS)

    Gregory, R.; Rubakov, V. A.; Sibiryakov, S. M.

    2000-09-01

    In the framework of a five-dimensional three-brane model with quasi-localized gravitons we evaluate metric perturbations induced on the positive tension brane by matter residing thereon. We find that at intermediate distances, the effective four-dimensional theory coincides, up to small corrections, with General Relativity. This is in accord with Csaki, Erlich and Hollowood and in contrast to Dvali, Gabadadze and Porrati. We show, however, that at ultra-large distances this effective four-dimensional theory becomes dramatically different: conventional tensor gravity changes into scalar anti-gravity.

  7. Three-dimensional fabric reinforced plastics for cryogenic use

    NASA Astrophysics Data System (ADS)

    Iwasaki, Y.; Yasuda, J.; Hirokawa, T.; Noma, K.; Nishijima, S.; Okada, T.

    Three-dimensional fabric reinforced plastics (3DFRPs) have been developed as insulating and/or structural materials in superconducting magnets. Three-dimensional fabrics were designed for practical applications in the fibre composites of 3DFRPs. The mechanical properties such as Young's modulus, Poisson's ratio, tensile strength and the compressive strength down to liquid helium temperature were measured. Thermal contraction was also measured. The cryogenic characteristics of 3DFRPs were compared with those of conventional laminates. The newly developed 3DFRPs were found to show satisfactory characteristics not only at room temperature but also at low temperatures.

  8. A Locally Adaptive Regularization Based on Anisotropic Diffusion for Deformable Image Registration of Sliding Organs

    PubMed Central

    Pace, Danielle F.; Aylward, Stephen R.; Niethammer, Marc

    2014-01-01

    We propose a deformable image registration algorithm that uses anisotropic smoothing for regularization to find correspondences between images of sliding organs. In particular, we apply the method for respiratory motion estimation in longitudinal thoracic and abdominal computed tomography scans. The algorithm uses locally adaptive diffusion tensors to determine the direction and magnitude with which to smooth the components of the displacement field that are normal and tangential to an expected sliding boundary. Validation was performed using synthetic, phantom, and 14 clinical datasets, including the publicly available DIR-Lab dataset. We show that motion discontinuities caused by sliding can be effectively recovered, unlike conventional regularizations that enforce globally smooth motion. In the clinical datasets, target registration error showed improved accuracy for lung landmarks compared to the diffusive regularization. We also present a generalization of our algorithm to other sliding geometries, including sliding tubes (e.g., needles sliding through tissue, or contrast agent flowing through a vessel). Potential clinical applications of this method include longitudinal change detection and radiotherapy for lung or abdominal tumours, especially those near the chest or abdominal wall. PMID:23899632

  9. A locally adaptive regularization based on anisotropic diffusion for deformable image registration of sliding organs.

    PubMed

    Pace, Danielle F; Aylward, Stephen R; Niethammer, Marc

    2013-11-01

    We propose a deformable image registration algorithm that uses anisotropic smoothing for regularization to find correspondences between images of sliding organs. In particular, we apply the method for respiratory motion estimation in longitudinal thoracic and abdominal computed tomography scans. The algorithm uses locally adaptive diffusion tensors to determine the direction and magnitude with which to smooth the components of the displacement field that are normal and tangential to an expected sliding boundary. Validation was performed using synthetic, phantom, and 14 clinical datasets, including the publicly available DIR-Lab dataset. We show that motion discontinuities caused by sliding can be effectively recovered, unlike conventional regularizations that enforce globally smooth motion. In the clinical datasets, target registration error showed improved accuracy for lung landmarks compared to the diffusive regularization. We also present a generalization of our algorithm to other sliding geometries, including sliding tubes (e.g., needles sliding through tissue, or contrast agent flowing through a vessel). Potential clinical applications of this method include longitudinal change detection and radiotherapy for lung or abdominal tumours, especially those near the chest or abdominal wall.
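    The effect of this locally adaptive regularization can be illustrated with a one-dimensional toy sketch (not the authors' tensor-valued implementation; the field, weights, and step size below are all hypothetical): diffusing a displacement field with an edge diffusivity that vanishes at the expected sliding boundary preserves the motion discontinuity that a globally smooth (diffusive) regularizer blurs away.

```python
import numpy as np

def diffuse(u, w, n_iter=200, dt=0.25):
    """Explicit diffusion of a 1-D displacement field u.

    w holds one diffusivity per edge between neighbouring samples;
    setting w = 0 at an edge blocks smoothing across it, mimicking a
    sliding boundary. w = 1 everywhere corresponds to a conventional
    globally smooth (diffusive) regularizer."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        flux = w * np.diff(u)              # flux across each edge
        u[1:-1] += dt * (flux[1:] - flux[:-1])
        u[0] += dt * flux[0]               # zero-flux boundaries
        u[-1] -= dt * flux[-1]
    return u

# a displacement field with a sliding discontinuity between samples 4 and 5
u0 = np.array([1.0, 1, 1, 1, 1, 0, 0, 0, 0, 0])
w_iso = np.ones(9)                          # global smoothing
w_adapt = np.ones(9); w_adapt[4] = 0.0      # smoothing blocked at the boundary
u_iso, u_adapt = diffuse(u0, w_iso), diffuse(u0, w_adapt)
```

    With the adaptive weights the jump between samples 4 and 5 survives intact, while isotropic smoothing flattens it, which is the behaviour the validation results describe.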

  10. Autostereoscopic display based on two-layer lenticular lenses.

    PubMed

    Zhao, Wu-Xiang; Wang, Qiong-Hua; Wang, Ai-Hong; Li, Da-Hai

    2010-12-15

    An autostereoscopic display based on two-layer lenticular lenses is proposed. The two layers comprise one layer of conventional lenticular lenses and an additional layer of light-concentrating lenticular lenses. Two prototypes, of the proposed and of a conventional autostereoscopic display, are developed. At the optimum three-dimensional viewing distance, the luminance distribution of each prototype along the horizontal direction is measured. From the luminance distributions, the crosstalk of the prototypes is obtained. Compared with the conventional autostereoscopic display, the proposed display has less crosstalk, a wider viewing angle, and higher light-utilization efficiency.
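    Crosstalk is commonly quantified as the luminance leaking from the unintended view relative to the luminance of the intended view; the abstract does not quote the exact formula used, so the following is a generic sketch with hypothetical luminance readings.

```python
def crosstalk_percent(intended, unintended):
    """Crosstalk as leakage luminance from the unintended view
    relative to the intended view's luminance, in percent. A common
    definition; the paper's exact formula is not given in the
    abstract."""
    return 100.0 * unintended / intended

# hypothetical luminance readings (cd/m^2) at the optimum viewing distance
ct = crosstalk_percent(intended=200.0, unintended=10.0)   # 5.0 %
```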

  11. A Three-Dimensional DOSY HMQC Experiment for the High-Resolution Analysis of Complex Mixtures

    NASA Astrophysics Data System (ADS)

    Barjat, Hervé; Morris, Gareth A.; Swanson, Alistair G.

    1998-03-01

    A three-dimensional experiment is described in which NMR signals are separated according to their proton chemical shift, 13C chemical shift, and diffusion coefficient. The sequence is built up from a stimulated-echo sequence with bipolar field-gradient pulses and a conventional decoupled HMQC sequence. Results are presented for a model mixture of quinine, camphene, and geraniol in deuteriomethanol.
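    In the diffusion (DOSY) dimension, signals separate according to the Stejskal-Tanner echo attenuation. A minimal sketch of recovering a diffusion coefficient from two gradient settings follows; the b-values and the coefficient D here are illustrative, not taken from the paper.

```python
import math

def echo_amplitude(b, s0, D):
    """Stejskal-Tanner attenuation S(b) = S0 * exp(-b * D), where b
    collects the gradient parameters gamma^2 g^2 delta^2 (Delta - delta/3)."""
    return s0 * math.exp(-b * D)

# recover D (m^2/s) from two hypothetical gradient settings
b1, b2, s0, D_true = 1.0e9, 2.0e9, 1.0, 2.5e-10
s1 = echo_amplitude(b1, s0, D_true)
s2 = echo_amplitude(b2, s0, D_true)
D_est = math.log(s1 / s2) / (b2 - b1)
```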

  12. [Three-dimensional reconstruction of functional brain images].

    PubMed

    Inoue, M; Shoji, K; Kojima, H; Hirano, S; Naito, Y; Honjo, I

    1999-08-01

    We consider PET (positron emission tomography) measurement with SPM (Statistical Parametric Mapping) analysis to be one of the most useful methods to identify activated areas of the brain involved in language processing. SPM is an effective analytical method that detects markedly activated areas over the whole brain. However, conventional presentations of these functional brain images, such as horizontal slices, three-directional projections, or brain-surface coloring, make it difficult to understand and interpret the positional relationships among brain areas. Therefore, we developed three-dimensionally reconstructed images from these functional brain images to improve the interpretation. The subjects were 12 normal volunteers. The following three types of images were constructed: 1) routine images by SPM, 2) three-dimensional static images, and 3) three-dimensional dynamic images, after PET images were analyzed by SPM during daily dialog listening. Both the three-dimensional static and dynamic images were created with the volume-rendering method of VTK (The Visualization Toolkit). Since the functional brain images did not include original brain images, we synthesized SPM and MRI brain images with custom C++ programs. The three-dimensional dynamic images were made by sequencing static images with available software. Images of both the three-dimensional static and dynamic types were processed by a personal computer system. Our newly created images showed clearer positional relationships among activated brain areas compared to the conventional method. To date, functional brain images have been employed in fields such as neurology and neurosurgery; however, they may also be useful in otorhinolaryngology, to assess hearing and speech. Exact three-dimensional images based on functional brain images are important for exact and intuitive interpretation, and may lead to new developments in brain science.
Currently, the surface model is the most common method of three-dimensional display. However, the volume rendering method may be more effective for imaging regions such as the brain.

  13. Multiscale solutions of radiative heat transfer by the discrete unified gas kinetic scheme

    NASA Astrophysics Data System (ADS)

    Luo, Xiao-Ping; Wang, Cun-Hai; Zhang, Yong; Yi, Hong-Liang; Tan, He-Ping

    2018-06-01

    The radiative transfer equation (RTE) has two asymptotic regimes characterized by the optical thickness, namely, optically thin and optically thick regimes. In the optically thin regime, a ballistic or kinetic transport is dominant. In the optically thick regime, energy transport is totally dominated by multiple collisions between photons; that is, the photons propagate by means of diffusion. To obtain convergent solutions to the RTE, conventional numerical schemes have a strong dependence on the number of spatial grids, which leads to a serious computational inefficiency in the regime where the diffusion is predominant. In this work, a discrete unified gas kinetic scheme (DUGKS) is developed to predict radiative heat transfer in participating media. Numerical performances of the DUGKS are compared in detail with conventional methods through three cases including one-dimensional transient radiative heat transfer, two-dimensional steady radiative heat transfer, and three-dimensional multiscale radiative heat transfer. Due to the asymptotic preserving property, the present method with relatively coarse grids gives accurate and reliable numerical solutions for large, small, and in-between values of optical thickness, and, especially in the optically thick regime, the DUGKS demonstrates a pronounced computational efficiency advantage over the conventional numerical models. In addition, the DUGKS has a promising potential in the study of multiscale radiative heat transfer inside the participating medium with a transition from optically thin to optically thick regimes.

  14. A quadratically regularized functional canonical correlation analysis for identifying the global structure of pleiotropy with NGS data

    PubMed Central

    Zhu, Yun; Fan, Ruzong; Xiong, Momiao

    2017-01-01

    Investigating the pleiotropic effects of genetic variants can increase statistical power, provide important information to achieve deep understanding of the complex genetic structures of disease, and offer powerful tools for designing effective treatments with fewer side effects. However, the current multiple phenotype association analysis paradigm lacks breadth (number of phenotypes and genetic variants jointly analyzed at the same time) and depth (hierarchical structure of phenotypes and genotypes). A key issue for high dimensional pleiotropic analysis is to effectively extract informative internal representations and features from high dimensional genotype and phenotype data. To explore correlation information of genetic variants, effectively reduce data dimensions, and overcome critical barriers in advancing the development of novel statistical methods and computational algorithms for genetic pleiotropic analysis, we propose a new statistical method, quadratically regularized functional CCA (QRFCCA), for association analysis, which combines three approaches: (1) quadratically regularized matrix factorization, (2) functional data analysis, and (3) canonical correlation analysis (CCA). Large-scale simulations show that the QRFCCA has much higher power than the ten competing statistics while retaining appropriate type I error rates. To further evaluate performance, the QRFCCA and ten other statistics are applied to the whole genome sequencing dataset from the TwinsUK study. We identify a total of 79 genes with rare variants and 67 genes with common variants significantly associated with the 46 traits using QRFCCA. The results show that the QRFCCA substantially outperforms the ten other statistics. PMID:29040274
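    The CCA ingredient with a quadratic (ridge-type) regularizer can be sketched as follows. This is generic regularized CCA, not the full QRFCCA (which additionally uses functional bases and quadratically regularized matrix factorization), and all data below are synthetic.

```python
import numpy as np

def regularized_cca(X, Y, lam=0.01):
    """First pair of canonical directions with a quadratic (ridge)
    penalty added to both covariance blocks, which keeps the problem
    well-posed when the covariances are rank-deficient."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + lam * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + lam * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    # leading eigenvector of Sxx^-1 Sxy Syy^-1 Syx gives direction a
    M = np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)
    vals, vecs = np.linalg.eig(M)
    a = np.real(vecs[:, np.argmax(np.real(vals))])
    b = np.linalg.solve(Syy, Sxy.T @ a)
    return a / np.linalg.norm(a), b / np.linalg.norm(b)

# synthetic genotype/phenotype-like blocks sharing one latent factor
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 2))])
Y = np.hstack([z + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 2))])
a, b = regularized_cca(X, Y)
```

    The recovered canonical variates pick out the shared latent factor, which is the correlation-extraction step the abstract motivates.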

  15. The Self-Evolving Cosmos: A Phenomenological Approach to Nature's Unity-in-Diversity

    NASA Astrophysics Data System (ADS)

    Rosen, Steven M.

    ch. 1. Introduction: individuation and the quest for unity -- ch. 2. The obstacle to unification in modern physics. 2.1. Introduction. 2.2. Does contemporary mathematical physics actually depart from the classical formulation? -- ch. 3. The phenomenological challenge to the classical formula -- ch. 4. Topological phenomenology. 4.1. Introduction. 4.2. Phenomenological intuition, topology, and the Klein bottle. 4.3. The physical significance of the Klein bottle -- ch. 5. The dimensional family of topological spinors. 5.1. Generalization of intuitive topology. 5.2. Topodimensional spin matrix -- ch. 6. Basic principles of dimensional transformation. 6.1. Synsymmetry and the self-transformation of space. 6.2. From symmetry breaking to dimensional generation. 6.3. The three basic stages of dimensional generation. 6.4. Kleinian topogeny -- ch. 7. Waves carrying waves: the co-evolution of lifeworlds -- ch. 8. The forces of nature. 8.1. The phenomenon of light. 8.2. Phenomenological Kaluza-Klein theory. 8.3. Summary comparison of conventional and topo-phenomenological approaches to Kaluza-Klein theory -- ch. 9. Cosmogony, symmetry, and phenomenological intuition. 9.1. Conventional view of the evolving cosmos. 9.2. The problem of symmetry. 9.3. A new kind of clarity -- ch. 10. The self-evolving cosmos. 10.1. Introduction to the cosmogonic matrix. 10.2. Overview of cosmic evolution. 10.3. The role of the fermions in dimensional generation. 10.4. Projective stages of cosmogony: dimensional divergence. 10.5. Proprioceptive stages of cosmogony: dimensional convergence. 10.6. Conclusion: wider horizons of cosmic evolution -- ch. 11. The psychophysics of cosmogony. 11.1. Psychical aspects of the fundamental particles. 11.2. Toward a reflexive physics. 11.3. Concretization of the self-evolving cosmos.

  16. A characterization of linearly repetitive cut and project sets

    NASA Astrophysics Data System (ADS)

    Haynes, Alan; Koivusalo, Henna; Walton, James

    2018-02-01

    For the development of a mathematical theory which can be used to rigorously investigate physical properties of quasicrystals, it is necessary to understand regularity of patterns in special classes of aperiodic point sets in Euclidean space. In one dimension, prototypical mathematical models for quasicrystals are provided by Sturmian sequences and by point sets generated by substitution rules. Regularity properties of such sets are well understood, thanks mostly to well known results by Morse and Hedlund, and physicists have used this understanding to study one dimensional random Schrödinger operators and lattice gas models. A key fact which plays an important role in these problems is the existence of a subadditive ergodic theorem, which is guaranteed when the corresponding point set is linearly repetitive. In this paper we extend the one-dimensional model to cut and project sets, which generalize Sturmian sequences in higher dimensions, and which are frequently used in mathematical and physical literature as models for higher dimensional quasicrystals. By using a combination of algebraic, geometric, and dynamical techniques, together with input from higher dimensional Diophantine approximation, we give a complete characterization of all linearly repetitive cut and project sets with cubical windows. We also prove that these are precisely the collection of such sets which satisfy subadditive ergodic theorems. The results are explicit enough to allow us to apply them to known classical models, and to construct linearly repetitive cut and project sets in all pairs of dimensions and codimensions in which they exist. Research supported by EPSRC grants EP/L001462, EP/J00149X, EP/M023540. HK also gratefully acknowledges the support of the Osk. Huttunen foundation.
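    The Sturmian sequences cited as the prototypical one-dimensional models are easy to generate. A short sketch follows; the golden-ratio slope, which yields the Fibonacci word, is an illustrative choice.

```python
import math

def sturmian(alpha, n):
    """Mechanical (Sturmian) sequence s_k = floor((k+1)*alpha) - floor(k*alpha),
    the standard 1-D cut-and-project / quasicrystal model."""
    return [math.floor((k + 1) * alpha) - math.floor(k * alpha)
            for k in range(n)]

# slope 1/phi gives the Fibonacci word over {0, 1}
word = sturmian((5 ** 0.5 - 1) / 2, 13)
```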

  17. Surgical outcomes of total laparoscopic hysterectomy with 2-dimensional versus 3-dimensional laparoscopic surgical systems.

    PubMed

    Yazawa, Hiroyuki; Takiguchi, Kaoru; Imaizumi, Karin; Wada, Marina; Ito, Fumihiro

    2018-04-17

    Three-dimensional (3D) laparoscopic surgical systems have been developed to account for the lack of depth perception, a known disadvantage of conventional 2-dimensional (2D) laparoscopy. In this study, we retrospectively compared the outcomes of total laparoscopic hysterectomy (TLH) with 3D versus conventional 2D laparoscopy. From November 2014, when we began using a 3D laparoscopic system at our hospital, to December 2015, 47 TLH procedures were performed using a 3D laparoscopic system (3D-TLH). The outcomes of 3D-TLH were compared with the outcomes of TLH using the conventional 2D laparoscopic system (2D-TLH) performed just before the introduction of the 3D system. The 3D-TLH group had a statistically significantly shorter mean operative time than the 2D-TLH group (119±20 vs. 137±20 min), whereas the mean weight of the resected uterus and mean intraoperative blood loss were not statistically different. When we compared the outcomes for 20 cases in each group, using the same energy sealing device in a short period of time, only mean operative time was statistically different between the 3D-TLH and 2D-TLH groups (113±19 vs. 133±21 min). During the observation period, there was one occurrence of postoperative peritonitis in the 2D-TLH group and one occurrence of vaginal cuff dehiscence in each group, which was not statistically different. The surgeon and assistant surgeons did not report any symptoms attributable to the 3D imaging system such as dizziness, eyestrain, nausea, and headache. Therefore, we conclude that the 3D laparoscopic system could be used safely and efficiently for TLH.
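    As a plausibility check on the reported operative-time comparison (113±19 vs. 133±21 min, 20 cases per group), one can recompute a two-sample statistic from the summary data. The abstract does not state which test the authors used, so Welch's t is an assumption here.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's two-sample t statistic from summary statistics
    (means, standard deviations, group sizes)."""
    se = math.sqrt(s1 * s1 / n1 + s2 * s2 / n2)
    return abs(m2 - m1) / se

# reported subgroup comparison: 3D-TLH 113+/-19 min vs 2D-TLH 133+/-21 min
t = welch_t(113, 19, 20, 133, 21, 20)   # about 3.16, consistent with a significant difference
```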

  18. Three-dimensional sampling perfection with application-optimised contrasts using a different flip angle evolutions sequence for routine imaging of the spine: preliminary experience

    PubMed Central

    Tins, B; Cassar-Pullicino, V; Haddaway, M; Nachtrab, U

    2012-01-01

    Objectives The bulk of spinal imaging is still performed with conventional two-dimensional sequences. This study assesses the suitability of three-dimensional sampling perfection with application-optimised contrasts using a different flip angle evolutions (SPACE) sequence for routine spinal imaging. Methods 62 MRI examinations of the spine were evaluated by 2 examiners in consensus for the depiction of anatomy and presence of artefact. We noted pathologies that might be missed using the SPACE sequence only or the SPACE and a sagittal T1 weighted sequence. The reference standards were sagittal and axial T1 weighted and T2 weighted sequences. At a later date the evaluation was repeated by one of the original examiners and an additional examiner. Results There was good agreement of the single evaluations and consensus evaluation for the conventional sequences: κ>0.8, confidence interval (CI)>0.6–1.0. For the SPACE sequence, depiction of anatomy was very good for 84% of cases, with high interobserver agreement, but there was poor interobserver agreement for other cases. For artefact assessment of SPACE, κ=0.92, CI=0.92–1.0. The SPACE sequence was superior to conventional sequences for depiction of anatomy and artefact resistance. The SPACE sequence occasionally missed bone marrow oedema. In conjunction with sagittal T1 weighted sequences, no abnormality was missed. The isotropic SPACE sequence was superior to conventional sequences in imaging difficult anatomy such as in scoliosis and spondylolysis. Conclusion The SPACE sequence allows excellent assessment of anatomy owing to high spatial resolution and resistance to artefact. The sensitivity for bone marrow abnormalities is limited. PMID:22374284

  19. Three-dimensional sampling perfection with application-optimised contrasts using a different flip angle evolutions sequence for routine imaging of the spine: preliminary experience.

    PubMed

    Tins, B; Cassar-Pullicino, V; Haddaway, M; Nachtrab, U

    2012-08-01

    The bulk of spinal imaging is still performed with conventional two-dimensional sequences. This study assesses the suitability of three-dimensional sampling perfection with application-optimised contrasts using a different flip angle evolutions (SPACE) sequence for routine spinal imaging. 62 MRI examinations of the spine were evaluated by 2 examiners in consensus for the depiction of anatomy and presence of artefact. We noted pathologies that might be missed using the SPACE sequence only or the SPACE and a sagittal T(1) weighted sequence. The reference standards were sagittal and axial T(1) weighted and T(2) weighted sequences. At a later date the evaluation was repeated by one of the original examiners and an additional examiner. There was good agreement of the single evaluations and consensus evaluation for the conventional sequences: κ>0.8, confidence interval (CI)>0.6-1.0. For the SPACE sequence, depiction of anatomy was very good for 84% of cases, with high interobserver agreement, but there was poor interobserver agreement for other cases. For artefact assessment of SPACE, κ=0.92, CI=0.92-1.0. The SPACE sequence was superior to conventional sequences for depiction of anatomy and artefact resistance. The SPACE sequence occasionally missed bone marrow oedema. In conjunction with sagittal T(1) weighted sequences, no abnormality was missed. The isotropic SPACE sequence was superior to conventional sequences in imaging difficult anatomy such as in scoliosis and spondylolysis. The SPACE sequence allows excellent assessment of anatomy owing to high spatial resolution and resistance to artefact. The sensitivity for bone marrow abnormalities is limited.

  20. Evaluating the effect of three-dimensional visualization on force application and performance time during robotics-assisted mitral valve repair.

    PubMed

    Currie, Maria E; Trejos, Ana Luisa; Rayman, Reiza; Chu, Michael W A; Patel, Rajni; Peters, Terry; Kiaii, Bob B

    2013-01-01

    The purpose of this study was to determine the effect of three-dimensional (3D) binocular, stereoscopic, and two-dimensional (2D) monocular visualization on robotics-assisted mitral valve annuloplasty versus conventional techniques in an ex vivo animal model. In addition, we sought to determine whether these effects were consistent between novices and experts in robotics-assisted cardiac surgery. A cardiac surgery test-bed was constructed to measure forces applied during mitral valve annuloplasty. Sutures were passed through the porcine mitral valve annulus by the participants with different levels of experience in robotics-assisted surgery and tied in place using both robotics-assisted and conventional surgery techniques. The mean time for both the experts and the novices using 3D visualization was significantly less than that required using 2D vision (P < 0.001). However, there was no significant difference in the maximum force applied by the novices to the mitral valve during suturing (P = 0.7) and suture tying (P = 0.6) using either 2D or 3D visualization. The mean time required and forces applied by both the experts and the novices were significantly less using the conventional surgical technique than when using the robotic system with either 2D or 3D vision (P < 0.001). Despite high-quality binocular images, both the experts and the novices applied significantly more force to the cardiac tissue during 3D robotics-assisted mitral valve annuloplasty than during conventional open mitral valve annuloplasty. This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in robotics-assisted cardiac surgery.

  1. Study on relationship between pollen exine ornamentation pattern and germplasm evolution in flowering crabapple

    PubMed Central

    Zhang, Wang-Xiang; Zhao, Ming-Ming; Fan, Jun-Jun; Zhou, Ting; Chen, Yong-Xia; Cao, Fu-Liang

    2017-01-01

    Pollen ornamentation patterns are important in the study of plant genetic evolution and systematic taxonomy. However, they are normally difficult to quantify. Based on observations of pollen exine ornamentation characteristics of 128 flowering crabapple germplasms (44 natural species and 84 varieties), three qualitative variables with binary properties (Xi: regularity of pollen exine ornamentation; Yi: scope of ornamentation arrangement regularity; Zi: ornamentation arrangement patterns) were extracted to establish a binary three-dimensional data matrix (Xi Yi Zi), and the matrix data were converted to decimal data through weight assignment, which facilitated the unification of qualitative and quantitative analysis. The results indicate that from species population to variety population and from parent population to variety population, the exine ornamentation in all three dimensions presents the evolutionary trends of regular → irregular, wholly regular → partially regular, and single pattern → multiple patterns. Regarding the evolutionary degree, the regularity of ornamentation was significantly lower in both the variety population and progeny population, with a degree of decrease 0.82–1.27 times that of the regularity range of R-type ornamentation. In addition, the evolutionary degree significantly increased along Xi → Yi → Zi. The results also provide a useful reference for defining the taxonomic status of Malus species. PMID:28059122
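    The weight-assignment step that collapses the binary triple (Xi, Yi, Zi) into a single decimal score can be sketched as follows. The positional weights here are illustrative, since the study's actual assignment is not reproduced in the abstract.

```python
def encode(x, y, z, weights=(4, 2, 1)):
    """Collapse the binary triple (Xi, Yi, Zi) into one decimal score
    by positional weighting; (4, 2, 1) treats the triple as a 3-bit
    number, an illustrative choice."""
    return weights[0] * x + weights[1] * y + weights[2] * z

# all eight ornamentation states map to distinct decimal codes 0..7
codes = [encode(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
```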

  2. Constrained H1-regularization schemes for diffeomorphic image registration

    PubMed Central

    Mang, Andreas; Biros, George

    2017-01-01

    We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H1- and H2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and by that the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced space (Gauss–)Newton–Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient. PMID:29075361
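    The H1-seminorm-plus-divergence penalty can be sketched as a discrete energy evaluation (a toy, not the authors' Newton-Krylov solver; the grid, spacing, and weight beta are illustrative): a divergence-free velocity incurs no compressibility penalty, while an expanding one does.

```python
import numpy as np

def h1_divergence_energy(vx, vy, h, beta):
    """Discrete H1-seminorm of a 2-D velocity field plus a penalty
    beta * ||div v||^2 controlling compressibility."""
    dvx_dx, dvx_dy = np.gradient(vx, h)
    dvy_dx, dvy_dy = np.gradient(vy, h)
    smooth = np.sum(dvx_dx**2 + dvx_dy**2 + dvy_dx**2 + dvy_dy**2)
    div = dvx_dx + dvy_dy
    return smooth + beta * np.sum(div**2)

X, Y = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8), indexing='ij')
e_rot = h1_divergence_energy(Y, -X, h=1/7, beta=10.0)   # rotation: div-free
e_exp = h1_divergence_energy(X, Y, h=1/7, beta=10.0)    # expansion: div = 2
```

    Both fields have identical H1 smoothness energy, but the divergence term penalizes only the expansion, which is how the formulation controls the determinant of the deformation gradient.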

  3. Integrated structure vacuum tube

    NASA Technical Reports Server (NTRS)

    Dimeff, J.; Kerwin, W. J. (Inventor)

    1976-01-01

    High efficiency, multi-dimensional thin film vacuum tubes suitable for use in high temperature, high radiation environments are described. The tubes are fabricated by placing thin film electrode members in selected arrays on facing interior wall surfaces of an alumina substrate envelope. Cathode members are formed using thin films of triple carbonate. The photoresist used in photolithography aids in activation of the cathodes by carbonizing and reacting with the reduced carbonates when heated in vacuum during forming. The finely powdered triple carbonate is mixed with the photoresist used to delineate the cathode locations in the conventional solid state photolithographic manner. Anode and grid members are formed using thin films of refractory metal. Electron flow in the tubes is between grid elements from cathode to anode as in a conventional three-dimensional tube.

  4. Aerodynamic Design of Axial-flow Compressors. Volume 2

    NASA Technical Reports Server (NTRS)

    1956-01-01

    Available experimental two-dimensional-cascade data for conventional compressor blade sections are correlated. The two-dimensional cascade and some of the principal aerodynamic factors involved in its operation are first briefly described. Then the data are analyzed by examining the variation of cascade performance at a reference incidence angle in the region of minimum loss. Variations of reference incidence angle, total-pressure loss, and deviation angle with cascade geometry, inlet Mach number, and Reynolds number are investigated. From the analysis and the correlations of the available data, rules and relations are evolved for the prediction of the magnitude of the reference total-pressure loss and the reference deviation and incidence angles for conventional blade profiles. These relations are developed in simplified forms readily applicable to compressor design procedures.

  5. Role of Hemidivisional Corneal Topographic Astigmatisms (CorTs) in the Regularization and Reduction of Irregular Astigmatism.

    PubMed

    Alpins, Noel; Ong, James K Y; Stamatelatos, George

    2018-03-01

    To demonstrate how the concept of hemidivisional corneal topographic astigmatism (hemiCorT) enables the planning of hemidivisional corneal treatments to reduce irregularity and overall astigmatism. Whole-of-cornea corneal topographic astigmatism (CorT) is calculated from topography data derived from a corneal topographer or tomographer. The cornea is conceptually divided into 2 hemidivisions along the flat meridian of the CorT. For each hemidivision, hemiCorTs are calculated. The regularization treatment for each hemidivision is the treatment required to target the whole-of-cornea CorT, which is a symmetrical orthogonal corneal astigmatism. The regularization is then combined with astigmatism reduction treatment, which could be a conventional refractive treatment or a vector-planned treatment. For each hemidivision, the combined astigmatic effect of the regularization treatment and reduction treatment can be determined through double-angle vector summation. The 2 hemidivisional treatments together regularize and reduce corneal astigmatism. A theoretical pair of hemidivisional treatments is derived from an actual example of a cornea displaying idiopathic asymmetric nonorthogonal astigmatism. HemiCorTs allow for the design of hemidivisional corneal treatments of asymmetric nonorthogonal astigmatism. Such treatments should be suitable in the routine treatment of commonly occurring irregular astigmatism, while also allowing the spherical refractive error to be treated concurrently.
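    The double-angle vector summation used to combine the regularization and reduction treatments can be sketched with the standard double-angle representation of astigmatism (an astigmatism of magnitude M at axis θ becomes the vector (M cos 2θ, M sin 2θ)); the magnitudes and axes below are illustrative.

```python
import math

def to_double_angle(mag, axis_deg):
    """Astigmatism (magnitude, axis in degrees) as a double-angle vector."""
    t = math.radians(2 * axis_deg)
    return (mag * math.cos(t), mag * math.sin(t))

def from_double_angle(x, y):
    """Back from a double-angle vector to (magnitude, axis in degrees)."""
    mag = math.hypot(x, y)
    axis = (math.degrees(math.atan2(y, x)) / 2) % 180
    return mag, axis

def combine(astig1, astig2):
    """Combined astigmatic effect of two treatments by vector
    summation in double-angle space."""
    x1, y1 = to_double_angle(*astig1)
    x2, y2 = to_double_angle(*astig2)
    return from_double_angle(x1 + x2, y1 + y2)
```

    Equal magnitudes at perpendicular axes cancel, while aligned astigmatisms add, as the double-angle construction requires.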

  6. Comparison Study of Regularizations in Spectral Computed Tomography Reconstruction

    NASA Astrophysics Data System (ADS)

    Salehjahromi, Morteza; Zhang, Yanbo; Yu, Hengyong

    2018-12-01

    The energy-resolving photon-counting detectors in spectral computed tomography (CT) can acquire projections of an object in different energy channels. In other words, they are able to reliably distinguish the received photon energies. These detectors lead to the emerging spectral CT, which is also called multi-energy CT, energy-selective CT, color CT, etc. Spectral CT can provide additional information in comparison with the conventional CT in which energy integrating detectors are used to acquire polychromatic projections of an object being investigated. The measurements obtained by X-ray CT detectors are noisy in reality, especially in spectral CT where the photon number is low in each energy channel. Therefore, some regularization should be applied to obtain a better image quality for this ill-posed problem in spectral CT image reconstruction. Quadratic-based regularizations are not often satisfactory as they blur the edges in the reconstructed images. As a result, different edge-preserving regularization methods have been adopted for reconstructing high quality images in the last decade. In this work, we numerically evaluate the performance of different regularizers in spectral CT, including total variation, non-local means and anisotropic diffusion. The goal is to provide some practical guidance to accurately reconstruct the attenuation distribution in each energy channel of the spectral CT data.

  7. The impact of the fabrication method on the three-dimensional accuracy of an implant surgery template.

    PubMed

    Matta, Ragai-Edward; Bergauer, Bastian; Adler, Werner; Wichmann, Manfred; Nickenig, Hans-Joachim

    2017-06-01

    The use of a surgical template is a well-established method in advanced implantology. In addition to conventional fabrication, computer-aided design and computer-aided manufacturing (CAD/CAM) work-flow provides an opportunity to engineer implant drilling templates via a three-dimensional printer. In order to transfer the virtual planning to the oral situation, a highly accurate surgical guide is needed. The aim of this study was to evaluate the impact of the fabrication method on the three-dimensional accuracy. The same virtual planning based on a scanned plaster model was used to fabricate a conventional thermo-formed and a three-dimensional printed surgical guide for each of 13 patients (single tooth implants). Both templates were acquired individually on the respective plaster model using an optical industrial white-light scanner (ATOS II, GOM mbh, Braunschweig, Germany), and the virtual datasets were superimposed. Using the three-dimensional geometry of the implant sleeve, the deviation between both surgical guides was evaluated. The mean discrepancy of the angle was 3.479° (standard deviation, 1.904°) based on data from 13 patients. Concerning the three-dimensional position of the implant sleeve, the highest deviation was in the Z-axis at 0.594 mm. The mean deviation of the Euclidian distance, dxyz, was 0.864 mm. Although the two different fabrication methods delivered statistically significantly different templates, the deviations ranged within a decimillimeter span. Both methods are appropriate for clinical use. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
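    The two reported deviation measures, the angle between sleeve axes and the Euclidean distance d_xyz between sleeve positions, can be computed from superimposed scan data roughly as follows; the coordinates and function names are hypothetical.

```python
import numpy as np

def guide_deviation(p1, a1, p2, a2):
    """Angular deviation (degrees) between two implant-sleeve axes and
    Euclidean positional deviation d_xyz between sleeve centres, for
    two superimposed guide scans."""
    a1 = np.asarray(a1, float); a1 /= np.linalg.norm(a1)
    a2 = np.asarray(a2, float); a2 /= np.linalg.norm(a2)
    angle = np.degrees(np.arccos(np.clip(np.dot(a1, a2), -1.0, 1.0)))
    dxyz = np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float))
    return angle, dxyz
```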

  8. Extra-dimensional Demons: a method for incorporating missing tissue in deformable image registration.

    PubMed

    Nithiananthan, Sajendra; Schafer, Sebastian; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Reh, Douglas D; Gallia, Gary L; Siewerdsen, Jeffrey H

    2012-09-01

    A deformable registration method capable of accounting for missing tissue (e.g., excision) is reported for application in cone-beam CT (CBCT)-guided surgical procedures. Excisions are identified by a segmentation step performed simultaneously with the registration process. Tissue excision is explicitly modeled by increasing the dimensionality of the deformation field to allow motion beyond the dimensionality of the image. The accuracy of the model is tested in phantom, simulation, and cadaver studies. A variant of the Demons deformable registration algorithm is modified to include excision segmentation and modeling. Segmentation is performed iteratively during the registration process, with an initial implementation using a threshold-based approach to identify voxels corresponding to "tissue" in the moving image and "air" in the fixed image. With each iteration of the Demons process, every voxel is assigned a probability of excision. Excisions are modeled explicitly during registration by increasing the dimensionality of the deformation field so that both deformations and excisions can be accounted for by in- and out-of-volume deformations, respectively. The out-of-volume (i.e., fourth) component of the deformation field at each voxel carries a magnitude proportional to the excision probability computed in the excision segmentation step. The registration accuracy of the proposed "extra-dimensional" Demons (XDD) and conventional Demons methods was tested in the presence of missing tissue in phantom models, in simulations investigating the effect of excision size on registration accuracy, and in cadaver studies emulating realistic deformations and tissue excisions imparted in CBCT-guided endoscopic skull base surgery.
Phantom experiments showed the normalized mutual information (NMI) in regions local to the excision to improve from 1.10 for the conventional Demons approach to 1.16 for XDD, and qualitative examination of the resulting images revealed major differences: the conventional Demons approach imparted unrealistic distortions in areas around tissue excision, whereas XDD provided accurate "ejection" of voxels within the excision site and maintained the registration accuracy throughout the rest of the image. Registration accuracy in areas far from the excision site (e.g., > ∼5 mm) was identical for the two approaches. Quantitation of the effect was consistent in analysis of NMI, normalized cross-correlation (NCC), target registration error (TRE), and accuracy of voxels ejected from the volume (true-positive and false-positive analysis). The registration accuracy for conventional Demons was found to degrade steeply as a function of excision size, whereas XDD was robust in this regard. Cadaver studies involving realistic excision of the clivus, vidian canal, and ethmoid sinuses demonstrated similar results, with unrealistic distortion of anatomy imparted by conventional Demons and accurate ejection and deformation for XDD. Adaptation of the Demons deformable registration process to include segmentation (i.e., identification of excised tissue) and an extra dimension in the deformation field provided a means to accurately accommodate missing tissue between image acquisitions. The extra-dimensional approach yielded accurate "ejection" of voxels local to the excision site while preserving the registration accuracy (typically subvoxel) of the conventional Demons approach throughout the rest of the image. The ability to accommodate missing tissue volumes is important to application of CBCT for surgical guidance (e.g., skull base drillout) and may have application in other areas of CBCT guidance.
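The two ingredients of the excision model above (a threshold test flagging voxels that are "tissue" in the moving image but "air" in the fixed image, and a fourth deformation-field component whose magnitude is proportional to the resulting excision probability) can be sketched as follows. The threshold value and the proportionality scale are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def excision_probability(moving, fixed, tissue_thresh=-500.0):
    """Threshold-based excision segmentation (hypothetical HU threshold):
    a voxel is a candidate excision if it is 'tissue' in the moving image
    but 'air' in the fixed image."""
    return ((moving > tissue_thresh) & (fixed <= tissue_thresh)).astype(float)

def extra_dimensional_field(field_3d, p_excise, scale=1.0):
    """Append a fourth, out-of-volume component whose magnitude is
    proportional to the per-voxel excision probability."""
    extra = scale * p_excise[..., None]
    return np.concatenate([field_3d, extra], axis=-1)

# toy 2x2x2 volumes in Hounsfield-like units
moving = np.full((2, 2, 2), 40.0)      # soft tissue everywhere
fixed = moving.copy()
fixed[0, 0, 0] = -1000.0               # air: tissue was excised here

p = excision_probability(moving, fixed)
field = extra_dimensional_field(np.zeros((2, 2, 2, 3)), p)
print(field.shape)          # (2, 2, 2, 4)
print(field[0, 0, 0, 3])    # 1.0 -> voxel flagged for "ejection"
```

Voxels with a zero fourth component deform in-volume as in conventional Demons; a large fourth component marks material leaving the volume.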

  9. Book Review:

    NASA Astrophysics Data System (ADS)

    Louko, Jorma

    2007-04-01

    Bastianelli and van Nieuwenhuizen's monograph `Path Integrals and Anomalies in Curved Space' collects in one volume the results of the authors' 15-year research programme on anomalies that arise in Feynman diagrams of quantum field theories on curved manifolds. The programme was spurred by the path-integral techniques introduced in Alvarez-Gaumé and Witten's renowned 1983 paper on gravitational anomalies which, together with the anomaly cancellation paper by Green and Schwarz, led to the string theory explosion of the 1980s. The authors have produced a tour de force, giving a comprehensive and pedagogical exposition of material that is central to current research. The first part of the book develops from scratch a formalism for defining and evaluating quantum mechanical path integrals in nonlinear sigma models, using time slicing regularization, mode regularization and dimensional regularization. The second part applies this formalism to quantum fields of spin 0, 1/2, 1 and 3/2 and to self-dual antisymmetric tensor fields. The book concludes with a discussion of gravitational anomalies in 10-dimensional supergravities, for both classical and exceptional gauge groups. The target audience is researchers and graduate students in curved spacetime quantum field theory and string theory, and the aims, style and pedagogical level have been chosen with this audience in mind. Path integrals are treated as calculational tools, and the notation and terminology are throughout tailored to calculational convenience, rather than to mathematical rigour. The style is closer to that of an exceedingly thorough and self-contained review article than to that of a textbook. As the authors mention, the first part of the book can be used as an introduction to path integrals in quantum mechanics, although in a classroom setting perhaps more likely as supplementary reading than a primary class text. 
Readers outside the core audience, including this reviewer, will gain from the book a heightened appreciation of the central role of regularization as a defining ingredient of a quantum field theory and will be impressed by the agreement of results arising from different regularization schemes. The readers may in particular enjoy the authors' `brief history of anomalies' in quantum field theory, as well as a similar historical discussion of path integrals in quantum mechanics.

  10. Integrative analysis of gene expression and copy number alterations using canonical correlation analysis.

    PubMed

    Soneson, Charlotte; Lilljebjörn, Henrik; Fioretos, Thoas; Fontes, Magnus

    2010-04-15

    With the rapid development of new genetic measurement methods, several types of genetic alterations can be quantified in a high-throughput manner. While the initial focus has been on investigating each data set separately, there is an increasing interest in studying the correlation structure between two or more data sets. Multivariate methods based on Canonical Correlation Analysis (CCA) have been proposed for integrating paired genetic data sets. The high dimensionality of microarray data imposes computational difficulties, which have been addressed for instance by studying the covariance structure of the data, or by reducing the number of variables prior to applying the CCA. In this work, we propose a new method for analyzing high-dimensional paired genetic data sets, which mainly emphasizes the correlation structure and still permits efficient application to very large data sets. The method is implemented by translating a regularized CCA to its dual form, where the computational complexity depends mainly on the number of samples instead of the number of variables. The optimal regularization parameters are chosen by cross-validation. We apply the regularized dual CCA, as well as a classical CCA preceded by a dimension-reducing Principal Components Analysis (PCA), to a paired data set of gene expression changes and copy number alterations in leukemia. Using the correlation-maximizing methods, regularized dual CCA and PCA+CCA, we show that without pre-selection of known disease-relevant genes, and without using information about clinical class membership, an exploratory analysis singles out two patient groups, corresponding to well-known leukemia subtypes. Furthermore, the variables showing the highest relevance to the extracted features agree with previous biological knowledge concerning copy number alterations and gene expression changes in these subtypes. 
Finally, the correlation-maximizing methods are shown to yield results which are more biologically interpretable than those resulting from a covariance-maximizing method, and provide different insight compared to when each variable set is studied separately using PCA. We conclude that regularized dual CCA as well as PCA+CCA are useful methods for exploratory analysis of paired genetic data sets, and can be efficiently implemented also when the number of variables is very large.
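The key computational idea, translating the regularized CCA to its dual form so that the eigenproblem has size n (samples) rather than p (variables), can be sketched with linear kernels as below. This is an illustrative reading of the dual formulation, not the authors' implementation; in practice the regularization parameter r would be chosen by cross-validation as the abstract describes.

```python
import numpy as np

def regularized_dual_cca(X, Y, r=0.1):
    """First canonical correlation via the dual (kernel) formulation with
    linear kernels: every matrix involved is n x n, so the cost is governed
    by the number of samples n rather than the number of variables."""
    n = X.shape[0]
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    Kx, Ky = Xc @ Xc.T, Yc @ Yc.T            # n x n Gram matrices
    I = np.eye(n)
    # generalized eigenproblem in the dual coefficients (alpha, beta)
    A = np.linalg.solve(Kx @ Kx + r * I, Kx @ Ky)
    B = np.linalg.solve(Ky @ Ky + r * I, Ky @ Kx)
    evals = np.linalg.eigvals(A @ B).real
    return float(np.sqrt(evals.clip(0.0, 1.0).max()))

# two views sharing a latent signal: the canonical correlation is high
rng = np.random.default_rng(0)
z = rng.normal(size=(40, 1))
X = z + 0.1 * rng.normal(size=(40, 5))       # view 1: 5 variables
Y = z + 0.1 * rng.normal(size=(40, 4))       # view 2: 4 variables
rho = regularized_dual_cca(X, Y)
```

With genomic data the only change is that X and Y have tens of thousands of columns; the Gram matrices remain n x n.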

  11. Theory of thermionic emission from a two-dimensional conductor and its application to a graphene-semiconductor Schottky junction

    NASA Astrophysics Data System (ADS)

    Trushin, Maxim

    2018-04-01

    The standard theory of thermionic emission developed for three-dimensional semiconductors does not apply to two-dimensional materials even for making qualitative predictions because of the vanishing out-of-plane quasiparticle velocity. This study reveals the fundamental origin of the out-of-plane charge carrier motion in a two-dimensional conductor due to the finite quasiparticle lifetime and huge uncertainty of the out-of-plane momentum. The theory is applied to a Schottky junction between graphene and a bulk semiconductor to derive a thermionic constant, which, in contrast to the conventional Richardson constant, is determined by the Schottky barrier height and Fermi level in graphene.

  12. Three-dimensional imaging technology offers promise in medicine.

    PubMed

    Karako, Kenji; Wu, Qiong; Gao, Jianjun

    2014-04-01

Medical imaging plays an increasingly important role in the diagnosis and treatment of disease. Most current medical equipment relies on two-dimensional (2D) imaging systems. Although such conventional imaging largely satisfies clinical requirements, it cannot depict pathologic changes in three dimensions. The development of three-dimensional (3D) imaging technology has encouraged advances in medical imaging. Three-dimensional imaging offers doctors much more information on a pathology than 2D imaging, thus significantly improving diagnostic capability and the quality of treatment. Moreover, the combination of 3D imaging with augmented reality significantly improves the surgical navigation process. These advantages have made 3D imaging technology an important component of technological progress in the field of medical imaging.

  13. Quality Control of Laser-Beam-Melted Parts by a Correlation Between Their Mechanical Properties and a Three-Dimensional Surface Analysis

    NASA Astrophysics Data System (ADS)

    Grimm, T.; Wiora, G.; Witt, G.

    2017-03-01

Good correlations were found between three-dimensional surface analyses of laser-beam-melted parts of nickel alloy HX and their mechanical properties. The surface analyses were performed with a confocal microscope, which offers a more comprehensive surface data basis than conventional two-dimensional tactile profilometry. This new approach yields a wide range of three-dimensional surface parameters, each of which was evaluated with respect to its feasibility for quality control in additive manufacturing. Automating the surface analysis with the confocal microscope and an industrial six-axis robot makes this an innovative approach for quality control in additive manufacturing.

  14. Full High-definition three-dimensional gynaecological laparoscopy--clinical assessment of a new robot-assisted device.

    PubMed

    Tuschy, Benjamin; Berlit, Sebastian; Brade, Joachim; Sütterlin, Marc; Hornemann, Amadeus

    2014-01-01

    To investigate the clinical assessment of a full high-definition (HD) three-dimensional robot-assisted laparoscopic device in gynaecological surgery. This study included 70 women who underwent gynaecological laparoscopic procedures. Demographic parameters, type and duration of surgery and perioperative complications were analyzed. Fifteen surgeons were postoperatively interviewed regarding their assessment of this new system with a standardized questionnaire. The clinical assessment revealed that three-dimensional full-HD visualisation is comfortable and improves spatial orientation and hand-to-eye coordination. The majority of the surgeons stated they would prefer a three-dimensional system to a conventional two-dimensional device and stated that the robotic camera arm led to more relaxed working conditions. Three-dimensional laparoscopy is feasible, comfortable and well-accepted in daily routine. The three-dimensional visualisation improves surgeons' hand-to-eye coordination, intracorporeal suturing and fine dissection. The combination of full-HD three-dimensional visualisation with the robotic camera arm results in very high image quality and stability.

  15. The N-Simplex and Its Generalizations towards Fractals

    ERIC Educational Resources Information Center

    Kosi-Ulbl, Irena; Pagon, Dusan

    2002-01-01

    Nature is full of different crystals and many of them have shapes of regular geometric objects. Those in which the fractal structure of a geometric object can be recognized are especially unusual. In this paper a generalization of one of these shapes is described: a formation, based on an n-dimensional simplex. The construction of an n-dimensional…

  16. Iterative reconstruction for CT perfusion with a prior-image induced hybrid nonlocal means regularization: Phantom studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Bin; Lyu, Qingwen; Ma, Jianhua

    2016-04-15

Purpose: In computed tomography perfusion (CTP) imaging, an initial phase CT acquired with a high-dose protocol can be used to improve the image quality of a later phase CT acquired with a low-dose protocol. For dynamic regions, signals in the later low-dose CT may not be completely recovered if the initial CT heavily regularizes the iterative reconstruction process. The authors propose a hybrid nonlocal means (hNLM) regularization model for iterative reconstruction of low-dose CTP to overcome the limitation of the conventional prior-image induced penalty. Methods: The hybrid penalty was constructed by combining the NLM of the initial phase high-dose CT in the stationary region and the later phase low-dose CT in the dynamic region. The stationary and dynamic regions were determined by the similarity between the initial high-dose scan and the later low-dose scan. The similarity was defined as a Gaussian kernel-based distance between the patch-window of the same pixel in the two scans, and its measurement was then used to weight the influence of the initial high-dose CT. For regions with high similarity (e.g., the stationary region), the initial high-dose CT played a dominant role in regularizing the solution. For regions with low similarity (e.g., the dynamic region), the regularization relied on the low-dose scan itself. This new hNLM penalty was incorporated into the penalized weighted least-squares (PWLS) framework for CTP reconstruction. Digital and physical phantom studies were performed to evaluate the PWLS-hNLM algorithm. Results: Both phantom studies showed that the PWLS-hNLM algorithm is superior to the conventional prior-image induced penalty term, which does not consider signal changes within the dynamic region. In the dynamic region of the Catphan phantom, the reconstruction error measured by root mean square error was reduced by 42.9% in the PWLS-hNLM reconstructed image. 
Conclusions: The PWLS-hNLM algorithm can effectively use the initial high-dose CT to reconstruct low-dose CTP in the stationary region while reducing its influence in the dynamic region.
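The similarity measure at the heart of the hybrid penalty, a Gaussian kernel applied to the patch-window distance between the two scans at the same pixel, can be sketched in 2-D as follows; the patch radius and kernel width h are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def similarity_weight(prior, current, i, j, half=1, h=30.0):
    """Gaussian-kernel similarity between the patch around pixel (i, j)
    in the initial high-dose scan (prior) and the later low-dose scan.
    Weight near 1: stationary region, the prior dominates the penalty.
    Weight near 0: dynamic region, the low-dose scan regularizes itself."""
    p = prior[i - half:i + half + 1, j - half:j + half + 1]
    c = current[i - half:i + half + 1, j - half:j + half + 1]
    d2 = np.mean((p - c) ** 2)                 # patch-window distance
    return float(np.exp(-d2 / h ** 2))

prior = np.zeros((5, 5))                       # high-dose initial scan
current = prior.copy()
current[3, 3] = 300.0                          # contrast change: dynamic spot

w_static = similarity_weight(prior, current, 1, 1)   # far from the change
w_dynamic = similarity_weight(prior, current, 3, 3)  # at the change
print(w_static, w_dynamic)   # 1.0 and a value near 0
```

In the full method this weight multiplies the prior-image NLM term inside the PWLS objective, pixel by pixel.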

  17. Two- to three-dimensional crossover in a dense electron liquid in silicon

    NASA Astrophysics Data System (ADS)

    Matmon, Guy; Ginossar, Eran; Villis, Byron J.; Kölker, Alex; Lim, Tingbin; Solanki, Hari; Schofield, Steven R.; Curson, Neil J.; Li, Juerong; Murdin, Ben N.; Fisher, Andrew J.; Aeppli, Gabriel

    2018-04-01

Doping of silicon via phosphine exposures alternating with molecular beam epitaxy overgrowth is a path to Si:P substrates for conventional microelectronics and quantum information technologies. The technique also provides a well-controlled material for systematic studies of two-dimensional lattices with a half-filled band. We show here that for a dense (n_s = 2.8 × 10^14 cm^-2) disordered two-dimensional array of P atoms, the full field magnitude and angle-dependent magnetotransport is remarkably well described by classic weak localization theory with no corrections due to interaction. The two- to three-dimensional crossover seen upon warming can also be interpreted using scaling concepts developed for anisotropic three-dimensional materials, which work remarkably well except when the applied fields are nearly parallel to the conducting planes.

  18. Graphic Novels, Web Comics, and Creator Blogs: Examining Product and Process

    ERIC Educational Resources Information Center

    Carter, James Bucky

    2011-01-01

    Young adult literature (YAL) of the late 20th and early 21st century is exploring hybrid forms with growing regularity by embracing textual conventions from sequential art, video games, film, and more. As well, Web-based technologies have given those who consume YAL more immediate access to authors, their metacognitive creative processes, and…

  19. Evaluation of the Computer Assisted Instruction Title I Project, 1980-81.

    ERIC Educational Resources Information Center

    Metrics Associates, Inc., Chelmsford, MA.

    For the third year, six Massachusetts communities incorporated computer assisted instruction (CAI) into their regular Title I programs, for students in grades 1 through 9. It was hoped that CAI would produce reading and mathematics gains over and above those which are characteristic of more conventional Title I instructional approaches. A pretest,…

  20. A Classroom Demonstration of Garlic Extract and Conventional Antibiotics' Antimicrobial Activity

    ERIC Educational Resources Information Center

    Ekunsanmi, Toye J.

    2005-01-01

    The Kirby-Bauer method is regularly used to test bacterial susceptibility to antibiotics, and is often employed in the classroom for teaching this concept. In this exercise, additional materials and instructions were given to students for the preparation of garlic extract and loading on blank BBL paper discs. They were further instructed to test…

  1. CoCom and the Future of Conventional Arms Exports in the Former Communist Bloc

    DTIC Science & Technology

    1993-12-01

structure and background of each of these firms. The first example is the Godollo Machine Factory which was primarily involved in repair and renovation of...regularized. Such an attitude is noteworthy because the potential for growth in the space industry is so great. Not only does Russia produce the "Energia

  2. U.N. Convention Against Torture (CAT): Overview and Application to Interrogation Techniques

    DTIC Science & Technology

    2009-01-26

105 See, e.g., Zubeda v. Ashcroft, 333 F.3d 46 (3rd Cir. 2003) (“[r]ape can constitute torture”); Al-Saher v...Crimes Act have previously been found to be of sufficient severity to constitute torture. See Al-Saher, 268 F.3d at 1143 (regular, severe beatings and

  3. The Social Status of Children with Disabilities and Their Families as Determined by Census Data

    ERIC Educational Resources Information Center

    Tyndik, A. O.; Vasin, S. A.

    2016-01-01

    Russia's ratification of the UN Convention on the Rights of Persons with Disabilities (CRPD) has necessitated that regular monitoring studies of the social situation of people with disabilities and families with disabled members be conducted. These studies have exacerbated the issue of obtaining accessible data that is suitable for these purposes.…

  4. On the Solutions of a 2+1-Dimensional Model for Epitaxial Growth with Axial Symmetry

    NASA Astrophysics Data System (ADS)

    Lu, Xin Yang

    2018-04-01

In this paper, we study the evolution equation derived by Xu and Xiang (SIAM J Appl Math 69(5):1393-1414, 2009) to describe heteroepitaxial growth with elastic forces on vicinal surfaces in 2+1 dimensions, in the radial case with uniform mobility. This equation is strongly nonlinear and contains two elliptic integrals defined via Cauchy principal values. We first derive a formally equivalent parabolic evolution equation (the equivalence is full when sufficient regularity is assumed); the main aim is to prove existence, uniqueness and regularity of strong solutions. We extensively use techniques from the theory of evolution equations governed by maximal monotone operators in Banach spaces.

  5. Equilibrium and nonequilibrium models on solomon networks with two square lattices

    NASA Astrophysics Data System (ADS)

    Lima, F. W. S.

We investigate the critical properties of equilibrium and nonequilibrium two-dimensional (2D) systems on Solomon networks with both nearest and random neighbors. The equilibrium and nonequilibrium 2D systems studied here by Monte Carlo simulations are the Ising and majority-vote models, respectively. We calculate the critical points as well as the critical exponent ratios γ/ν, β/ν, and 1/ν. We find numerically that both systems present the same exponents on Solomon networks (2D) and belong to a different universality class than the regular 2D ferromagnetic model. Our results are in agreement with the Grinstein criterion for models with up-down symmetry on regular lattices.
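As an illustration of the kind of Monte Carlo simulation involved, the sketch below runs Metropolis updates of an Ising model on a Solomon-style network: each spin couples to its 4 nearest neighbours on one square lattice and, through a fixed random permutation of sites, to the 4 nearest neighbours of its image on a second square lattice. This is one plausible reading of the construction for illustration, not the authors' code, and no attempt is made here to locate the critical point.

```python
import numpy as np

def solomon_ising_sweep(spins, perm, beta, rng):
    """One Metropolis sweep: each spin interacts with its 4 nearest
    neighbours on lattice 1 and with the spins whose lattice-2 positions
    neighbour its own image under the permutation perm."""
    L = spins.shape[0]
    inv = np.argsort(perm)            # inv[q] = site whose image is q
    flat = spins.ravel()              # view: updates write into spins
    for site in rng.permutation(L * L):
        i, j = divmod(int(site), L)
        # neighbour sum on the first lattice (periodic boundaries)
        e = (flat[((i + 1) % L) * L + j] + flat[((i - 1) % L) * L + j]
             + flat[i * L + (j + 1) % L] + flat[i * L + (j - 1) % L])
        # neighbour sum around the site's image on the second lattice
        pi, pj = divmod(int(perm[site]), L)
        for ni, nj in ((pi + 1, pj), (pi - 1, pj), (pi, pj + 1), (pi, pj - 1)):
            e += flat[inv[(ni % L) * L + (nj % L)]]
        dE = 2.0 * flat[site] * e     # energy cost of flipping this spin
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            flat[site] *= -1
    return spins

rng = np.random.default_rng(1)
L = 10
spins = np.ones((L, L), dtype=int)
perm = rng.permutation(L * L)
for _ in range(5):
    solomon_ising_sweep(spins, perm, beta=1.0, rng=rng)
print(abs(spins.mean()))   # stays near 1: deep in the ordered phase
```

A critical-point study would repeat such sweeps over a range of beta and system sizes and apply finite-size scaling to the magnetization moments.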

  6. Analytic regularization of uniform cubic B-spline deformation fields.

    PubMed

    Shackleford, James A; Yang, Qi; Lourenço, Ana M; Shusharina, Nadya; Kandasamy, Nagarajan; Sharp, Gregory C

    2012-01-01

Image registration is inherently ill-posed and lacks a unique solution. In the context of medical applications, it is desirable to avoid solutions that describe physically unsound deformations within the patient anatomy. Among the accepted methods of regularizing non-rigid image registration to provide solutions applicable to medical practice is penalizing the thin-plate bending energy. In this paper, we develop an exact, analytic method for computing the bending energy of a three-dimensional B-spline deformation field as a quadratic matrix operation on the spline coefficient values. Results presented on ten thoracic case studies indicate the analytic solution is between 61 and 1371 times faster than a numerical central differencing solution.
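The idea of expressing bending energy as a quadratic matrix operation on the spline coefficients can be illustrated in one dimension, where the inner products of second derivatives of uniform cubic B-splines are known in closed form. This is a 1-D analogue for illustration, not the paper's 3-D operator; the dense numerical integration serves only as the cross-check playing the role of central differencing.

```python
import numpy as np

# Exact inner products of uniform cubic B-spline second derivatives with
# unit knot spacing: Q_BAND[k] = integral of B''(t) * B''(t - k) dt
Q_BAND = {0: 8.0 / 3.0, 1: -3.0 / 2.0, 2: 0.0, 3: 1.0 / 6.0}

def bending_matrix(n):
    """Banded matrix Q such that the bending energy of the 1-D spline
    f(x) = sum_k c_k B(x - k) is exactly the quadratic form c^T Q c."""
    Q = np.zeros((n, n))
    for i in range(n):
        for k, v in Q_BAND.items():
            if i + k < n:
                Q[i, i + k] = Q[i + k, i] = v
    return Q

def analytic_energy(c):
    c = np.asarray(c, dtype=float)
    return float(c @ bending_matrix(len(c)) @ c)

def numeric_energy(c, samples=200001):
    """Brute-force check: sample f'' densely and integrate its square."""
    def bpp(t):  # second derivative of the uniform cubic B-spline on [0, 4]
        out = np.zeros_like(t)
        m = (t >= 0) & (t < 1); out[m] = t[m]
        m = (t >= 1) & (t < 2); out[m] = -3 * t[m] + 4
        m = (t >= 2) & (t < 3); out[m] = 3 * t[m] - 8
        m = (t >= 3) & (t <= 4); out[m] = 4 - t[m]
        return out
    c = np.asarray(c, dtype=float)
    x = np.linspace(0.0, len(c) + 4.0, samples)
    fpp = sum(ck * bpp(x - k) for k, ck in enumerate(c))
    return float(np.sum((fpp[:-1] ** 2 + fpp[1:] ** 2) / 2) * (x[1] - x[0]))

c = [0.3, -1.0, 2.0, 0.5, 1.2]
print(analytic_energy(c), numeric_energy(c))   # both print values near 19.83
```

The analytic route assembles Q once and reduces each energy evaluation to a sparse quadratic form, which is the source of the speedup the abstract reports.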

  7. Visualizing second order tensor fields with hyperstreamlines

    NASA Technical Reports Server (NTRS)

    Delmarcelle, Thierry; Hesselink, Lambertus

    1993-01-01

Hyperstreamlines are a generalization to second order tensor fields of the conventional streamlines used in vector field visualization. As opposed to the point icons commonly used in visualizing tensor fields, hyperstreamlines form a continuous representation of the complete tensor information along a three-dimensional path. This technique is useful in visualizing both symmetric and unsymmetric three-dimensional tensor data. Several examples of tensor field visualization in solid materials and fluid flows are provided.

  8. Flat holographic stereograms synthesized from computer-generated images by using LiNbO3 crystal

    NASA Astrophysics Data System (ADS)

    Qu, Zhi-Min; Liu, Jinsheng; Xu, Liangying

    1991-02-01

In this paper we present a novel method for synthesizing computer-generated images in which, by means of a series of intermediate holograms recorded on Fe-doped LiNbO3 crystals, a high-quality flat stereogram with a wide viewing angle and a deep 3D image has been obtained. Conventional holography is very limited. With a continuous-wave laser, only stationary objects can be recorded, owing to its insufficient power. Although some moving objects can be recorded with a pulsed laser, the dimensions and kinds of object are restricted. Viewing an imaginary object, or a three-dimensional image designed by computer, is very difficult by means of conventional holography. Of course, given a two-dimensional image on a computer screen, we can rotate it to give a three-dimensional perspective, but we can never really see it as a solid. Flat holographic stereograms synthesized from computer-generated images, however, let one directly see the computed results in the form of a 3D image. This clearly has wide applications in design, architecture, medicine, education and the arts. (SPIE Vol. 1238, Three-Dimensional Holography: Science, Culture, Education, 1989)

  9. Stress and displacement pattern evaluation using two different palatal expanders in unilateral cleft lip and palate: a three-dimensional finite element analysis.

    PubMed

    Mathew, Anoop; Nagachandran, K S; Vijayalakshmi, Devaki

    2016-12-01

In this finite element (FE) study, the stress distribution and displacement pattern exerted by a bone-borne palatal expander (BBPE) were evaluated in the mid-palatal area and around the circum-maxillary sutures, in comparison with a conventional HYRAX rapid palatal expander, in unilateral cleft lip and palate. Computed tomography scan images of a patient with a unilateral cleft palate were used to create a FE model of the maxillary bone along with the circum-maxillary sutures. A three-dimensional model of the conventional HYRAX (Hygienic Rapid Expander) and the custom-made BBPE was created by laser scanning and programmed into the FE model. With the BBPE, the maximum stress was observed at the implant insertion site, whereas with the conventional HYRAX expander, it was at the dentition level. Among the circum-maxillary sutures, the zygomaticomaxillary suture experienced maximum stress, followed by the zygomaticotemporal and nasomaxillary sutures. Displacement in the X-axis (transverse) was highest on the cleft side, and in the Y-axis (antero-posterior) it was highest in the posterior region in the BBPE. The total displacement was maximal in the mid-palatal cleft area in the BBPE, which produced true skeletal expansion at the alveolar level without any dental tipping when compared with the conventional HYRAX expander.

  10. Modified Dispersion Relations: from Black-Hole Entropy to the Cosmological Constant

    NASA Astrophysics Data System (ADS)

    Garattini, Remo

    2012-07-01

Quantum Field Theory is plagued by divergences in the attempt to calculate physical quantities. Standard techniques of regularization and renormalization are used to keep such divergences under control. In this paper we use a different scheme based on Modified Dispersion Relations (MDR) to remove the infinities appearing at one-loop approximation, in contrast to what happens in conventional approaches. In particular, we apply the MDR regularization to the computation of the entropy of a Schwarzschild black hole on the one hand and the Zero Point Energy (ZPE) of the graviton on the other. The graviton ZPE is connected to the cosmological constant by means of the Wheeler-DeWitt equation.

  11. Gauged supergravities from M-theory reductions

    NASA Astrophysics Data System (ADS)

    Katmadas, Stefanos; Tomasiello, Alessandro

    2018-04-01

    In supergravity compactifications, there is in general no clear prescription on how to select a finite-dimensional family of metrics on the internal space, and a family of forms on which to expand the various potentials, such that the lower-dimensional effective theory is supersymmetric. We propose a finite-dimensional family of deformations for regular Sasaki-Einstein seven-manifolds M 7, relevant for M-theory compactifications down to four dimensions. It consists of integrable Cauchy-Riemann structures, corresponding to complex deformations of the Calabi-Yau cone M 8 over M 7. The non-harmonic forms we propose are the ones contained in one of the Kohn-Rossi cohomology groups, which is finite-dimensional and naturally controls the deformations of Cauchy-Riemann structures. The same family of deformations can be also described in terms of twisted cohomology of the base M 6, or in terms of Milnor cycles arising in deformations of M 8. Using existing results on SU(3) structure compactifications, we briefly discuss the reduction of M-theory on our class of deformed Sasaki-Einstein manifolds to four-dimensional gauged supergravity.

  12. AdS3 to dS3 transition in the near horizon of asymptotically de Sitter solutions

    NASA Astrophysics Data System (ADS)

    Sadeghian, S.; Vahidinia, M. H.

    2017-08-01

    We consider two solutions of Einstein-Λ theory which admit the extremal vanishing horizon (EVH) limit, odd-dimensional multispinning Kerr black hole (in the presence of cosmological constant) and cosmological soliton. We show that the near horizon EVH geometry of Kerr has a three-dimensional maximally symmetric subspace whose curvature depends on rotational parameters and the cosmological constant. In the Kerr-dS case, this subspace interpolates between AdS3 , three-dimensional flat and dS3 by varying rotational parameters, while the near horizon of the EVH cosmological soliton always has a dS3 . The feature of the EVH cosmological soliton is that it is regular everywhere on the horizon. In the near EVH case, these three-dimensional parts turn into the corresponding locally maximally symmetric spacetimes with a horizon: Kerr-dS3 , flat space cosmology or BTZ black hole. We show that their thermodynamics match with the thermodynamics of the original near EVH black holes. We also briefly discuss the holographic two-dimensional CFT dual to the near horizon of EVH solutions.

  13. Hypergraph-based anomaly detection of high-dimensional co-occurrences.

    PubMed

    Silva, Jorge; Willett, Rebecca

    2009-03-01

    This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.

  14. Exobiology, SETI, von Neumann and geometric phase control.

    PubMed

    Hansson, P A

    1995-11-01

The central difficulties confronting us at present in exobiology are the problems of the physical forces which sustain three-dimensional organisms, i.e., how one-dimensional systems with only nearest-neighbour interactions and two-dimensional ones with their regular vibrations result in an integrated three-dimensional functionality. For example, a human lung has a dimensionality of 2.9 and thus should be measured in m^2.9. According to thermodynamics, the first life-like system should have a small number of degrees of freedom, so how can evolution, via cycles of matter, lead to intelligence and theoretical knowledge? Or, more generally, what mechanisms constrain and drive this evolution? We are now on the brink of reaching an understanding below the photon level, in the domain where quantum events implode to the geometric phase which maintains the history of a quantum object. Even if this would exclude point-to-point communication, it could make it possible to manipulate the molecular level from below on the physical scale, resulting in a new era of geometricised engineering. As such, it would have a significant impact on space exploration and exobiology.

  15. Dynamics of influence and social balance in spatially-embedded regular and random networks

    NASA Astrophysics Data System (ADS)

    Singh, P.; Sreenivasan, S.; Szymanski, B.; Korniss, G.

    2015-03-01

Structural balance - the tendency of social relationship triads to prefer specific states of polarity - can be a fundamental driver of beliefs, behavior, and attitudes on social networks. Here we study how structural balance affects deradicalization in an otherwise polarized population of leftists and rightists constituting the nodes of a low-dimensional social network. Specifically, assuming an externally moderating influence that converts leftists or rightists to centrists with probability p, we study the critical value p = p_c below which the presence of metastable mixed population states exponentially delays the achievement of centrist consensus. Above the critical value, centrist consensus is the only fixed point. Complementing our previously shown results for complete graphs, we present results for the process on low-dimensional networks, and show that the low-dimensional embedding of the underlying network significantly affects the critical value of the probability p. Intriguingly, on low-dimensional networks, the critical value p_c can show non-monotonicity as the dimensionality of the network is varied. We conclude by analyzing the scaling behavior of the temporal variation of unbalanced triad density in the network for different low-dimensional network topologies. Supported in part by ARL NS-CTA, ONR, and ARO.

  16. Applications of Three-Dimensional Printing in Surgery.

    PubMed

    Li, Chi; Cheung, Tsz Fung; Fan, Vei Chen; Sin, Kin Man; Wong, Chrisity Wai Yan; Leung, Gilberto Ka Kit

    2017-02-01

    Three-dimensional (3D) printing is a rapidly advancing technology in the field of surgery. This article reviews its contemporary applications in 3 aspects of surgery, namely, surgical planning, implants and prostheses, and education and training. Three-dimensional printing technology can contribute to surgical planning by depicting precise personalized anatomy, and thus offers a potential improvement in surgical outcome. For implants and prostheses, the technology might overcome the limitations of conventional methods such as visual discrepancy from the recipient's body and mismatched anatomy. In addition, 3D printing technology could be integrated into the medical school curriculum, supplementing conventional cadaver-based education and training in anatomy and surgery. Future potential applications of 3D printing in surgery, mainly in the areas of skin, nerve, and vascular graft preparation as well as ear reconstruction, are also discussed. Numerous trials and studies are still ongoing. However, scientists and clinicians still encounter some limitations of the technology, including high cost, long processing time, unsatisfactory mechanical properties, and suboptimal accuracy. These limitations might potentially hamper the application of this technology in daily clinical practice.

  17. Evolved atmospheric entry corridor with safety factor

    NASA Astrophysics Data System (ADS)

    Liang, Zixuan; Ren, Zhang; Li, Qingdong

    2018-02-01

    Atmospheric entry corridors were established in previous research based on the equilibrium glide condition, which assumes the flight-path angle to be zero. To gain a better understanding of the highly constrained entry flight, an evolved entry corridor that considers the exact flight-path angle is developed in this study. Firstly, the conventional corridor in the altitude vs. velocity plane is extended into a three-dimensional one in the space of altitude, velocity, and flight-path angle. The three-dimensional corridor is generated by a series of constraint boxes. Then, based on a simple mapping method, an evolved two-dimensional entry corridor with a safety factor is obtained. The safety factor is defined to describe the flexibility of the flight-path angle for a state within the corridor. Finally, the evolved entry corridor is simulated for the Space Shuttle and the Common Aero Vehicle (CAV) to demonstrate the effectiveness of the corridor generation approach. Compared with the conventional corridor, the evolved corridor is much wider and provides additional information. The evolved corridor would therefore be of greater benefit to entry trajectory design and analysis.

  18. Human neural stem cell-derived cultures in three-dimensional substrates form spontaneously functional neuronal networks.

    PubMed

    Smith, Imogen; Silveirinha, Vasco; Stein, Jason L; de la Torre-Ubieta, Luis; Farrimond, Jonathan A; Williamson, Elizabeth M; Whalley, Benjamin J

    2017-04-01

    Differentiated human neural stem cells were cultured in an inert three-dimensional (3D) scaffold and, unlike two-dimensional (2D) but otherwise comparable monolayer cultures, formed spontaneously active, functional neuronal networks that responded reproducibly and predictably to conventional pharmacological treatments to reveal functional, glutamatergic synapses. Immunocytochemical and electron microscopy analysis revealed a neuronal and glial population, where markers of neuronal maturity were observed in the former. Oligonucleotide microarray analysis revealed substantial differences in gene expression conferred by culturing in a 3D vs a 2D environment. Notable and numerous differences were seen in genes coding for neuronal function, the extracellular matrix and cytoskeleton. In addition to producing functional networks, differentiated human neural stem cells grown in inert scaffolds offer several significant advantages over conventional 2D monolayers. These advantages include cost savings and improved physiological relevance, which make them better suited for use in the pharmacological and toxicological assays required for development of stem cell-based treatments and the reduction of animal use in medical research. Copyright © 2015 John Wiley & Sons, Ltd.

  19. Sensor assembly method using silicon interposer with trenches for three-dimensional binocular range sensors

    NASA Astrophysics Data System (ADS)

    Nakajima, Kazuhiro; Yamamoto, Yuji; Arima, Yutaka

    2018-04-01

    To easily assemble a three-dimensional binocular range sensor, we devised an alignment method for two image sensors using a silicon interposer with trenches. The trenches were formed using deep reactive ion etching (RIE) equipment. We produced a three-dimensional (3D) range sensor using the method and experimentally confirmed that sufficient alignment accuracy was realized. It was confirmed that the alignment accuracy of the two image sensors when using the proposed method is more than twice that of the alignment assembly method on a conventional board. In addition, as a result of evaluating the deterioration of detection performance caused by misalignment, it was confirmed that the vertical deviation between corresponding pixels in the two image sensors is substantially proportional to the decrease in detection performance. Therefore, we confirmed that the proposed method can realize more than twice the detection performance of the conventional method. Through these evaluations, the effectiveness of the 3D binocular range sensor aligned by the silicon interposer with trenches was confirmed.

  20. PARTIAL RESTRAINING FORCE INTRODUCTION METHOD FOR DESIGNING CONSTRUCTION COUNTERMEASURES ON THE ΔB METHOD

    NASA Astrophysics Data System (ADS)

    Nishiyama, Taku; Imanishi, Hajime; Chiba, Noriyuki; Ito, Takao

    Landslide or slope failure is a three-dimensional movement phenomenon; a three-dimensional treatment therefore makes it easier to understand stability. The ΔB method (a simplified three-dimensional slope stability analysis method) is based on the limit equilibrium method and is equivalent to an approximate three-dimensional slope stability analysis that extends two-dimensional cross-section stability analysis results to assess stability. This analysis can be conducted using conventional spreadsheets or two-dimensional slope stability computational software. This paper describes the concept of the partial restraining force introduction method for designing construction countermeasures using the distribution of the restraining force found along survey lines, which is based on the distribution of survey-line safety factors derived from the above-stated analysis. This paper also presents the transverse distributive method of restraining force used for planning ground stabilization on the basis of the example analysis.

  1. The initial value problem in Lagrangian drift kinetic theory

    NASA Astrophysics Data System (ADS)

    Burby, J. W.

    2016-06-01

    Existing high-order variational drift kinetic theories contain unphysical rapidly varying modes that are not seen at low orders. These unphysical modes, which may be rapidly oscillating, damped or growing, are ushered in by a failure of conventional high-order drift kinetic theory to preserve the structure of its parent model's initial value problem. In short, the (infinite dimensional) system phase space is unphysically enlarged in conventional high-order variational drift kinetic theory. I present an alternative, `renormalized' variational approach to drift kinetic theory that manifestly respects the parent model's initial value problem. The basic philosophy underlying this alternate approach is that high-order drift kinetic theory ought to be derived by truncating the all-orders system phase-space Lagrangian instead of the usual `field particle' Lagrangian. For the sake of clarity, this story is told first through the lens of a finite-dimensional toy model of high-order variational drift kinetics; the analogous full-on drift kinetic story is discussed subsequently. The renormalized drift kinetic system, while variational and just as formally accurate as conventional formulations, does not support the troublesome rapidly varying modes.

  2. Anatomy, Variants, and Pathologies of the Superior Glenohumeral Ligament: Magnetic Resonance Imaging with Three-Dimensional Volumetric Interpolated Breath-Hold Examination Sequence and Conventional Magnetic Resonance Arthrography

    PubMed Central

    Ogul, Hayri; Karaca, Leyla; Can, Cahit Emre; Pirimoglu, Berhan; Tuncer, Kutsi; Topal, Murat; Okur, Aylin

    2014-01-01

    The purpose of this review was to demonstrate magnetic resonance (MR) arthrography findings of anatomy, variants, and pathologic conditions of the superior glenohumeral ligament (SGHL). This review also demonstrates the applicability of a new MR arthrography sequence in the anterosuperior portion of the glenohumeral joint. The SGHL is a very important anatomical structure in the rotator interval that is responsible for stabilizing the long head of the biceps tendon. Therefore, a torn SGHL can result in pain and instability. Observation of the SGHL is difficult when using conventional MR imaging, because the ligament may be poorly visualized. Shoulder MR arthrography is the most accurately established imaging technique for identifying pathologies of the SGHL and associated structures. The use of three-dimensional (3D) volumetric interpolated breath-hold examination (VIBE) sequences produces thinner image slices and enables higher in-plane resolution than conventional MR arthrography sequences. Therefore, shoulder MR arthrography using 3D VIBE sequences may contribute to the evaluation of smaller intraarticular structures such as the SGHL. PMID:25053912

  3. Reinforcing mechanism of anchors in slopes: a numerical comparison of results of LEM and FEM

    NASA Astrophysics Data System (ADS)

    Cai, Fei; Ugai, Keizo

    2003-06-01

    This paper reports the limitation of the conventional Bishop's simplified method to calculate the safety factor of slopes stabilized with anchors, and proposes a new approach to considering the reinforcing effect of anchors on the safety factor. The reinforcing effect of anchors can be explained using an additional shearing resistance on the slip surface. A three-dimensional shear strength reduction finite element method (SSRFEM), where soil-anchor interactions were simulated by three-dimensional zero-thickness elasto-plastic interface elements, was used to calculate the safety factor of slopes stabilized with anchors to verify the reinforcing mechanism of anchors. The results of SSRFEM were compared with those of the conventional and proposed approaches for Bishop's simplified method for various orientations, positions, and spacings of anchors, and shear strengths of soil-grouted body interfaces. For the safety factor, the proposed approach compared better with SSRFEM than the conventional approach. The additional shearing resistance can explain the influence of the orientation, position, and spacing of anchors, and the shear strength of soil-grouted body interfaces on the safety factor of slopes stabilized with anchors.

  4. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    PubMed

    Kong, Shengchun; Nan, Bin

    2014-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses.
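    To make the objective in this record concrete, here is a minimal sketch of the lasso-penalized negative log partial likelihood (Breslow form, assuming no tied event times); it is an illustration only, not the authors' code, and the toy data below are invented.

```python
import math

def neg_log_partial_likelihood(beta, X, times, events):
    """Breslow negative log partial likelihood for right-censored data
    (assumes no tied event times)."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])  # ascending follow-up time
    total = 0.0
    for k, i in enumerate(order):
        if not events[i]:          # censored subjects contribute only to risk sets
            continue
        risk_set = order[k:]       # subjects still at risk at times[i]
        lp_i = sum(b * x for b, x in zip(beta, X[i]))
        log_denom = math.log(sum(
            math.exp(sum(b * x for b, x in zip(beta, X[j]))) for j in risk_set))
        total -= lp_i - log_denom
    return total

def lasso_objective(beta, X, times, events, lam):
    """Penalized objective: the lasso adds lam * ||beta||_1."""
    return neg_log_partial_likelihood(beta, X, times, events) + lam * sum(abs(b) for b in beta)

# Invented toy data: 3 subjects, 1 covariate, all events observed.
X = [[1.0], [0.0], [2.0]]
times = [1.0, 2.0, 3.0]
events = [1, 1, 1]
print(lasso_objective([0.0], X, times, events, lam=0.5))  # at beta = 0: log 3 + log 2 + log 1
```

    At beta = 0 every risk-set term reduces to the log of the risk-set size, which is a handy sanity check for any implementation of this likelihood.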

  5. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso

    PubMed Central

    Kong, Shengchun; Nan, Bin

    2013-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive the non-asymptotic oracle inequalities for the lasso penalized Cox regression using pointwise arguments to tackle the difficulties caused by lacking iid Lipschitz losses. PMID:24516328

  6. On the basis property of the system of eigenfunctions and associated functions of a one-dimensional Dirac operator

    NASA Astrophysics Data System (ADS)

    Savchuk, A. M.

    2018-04-01

    We study a one-dimensional Dirac system on a finite interval. The potential (a 2 × 2 matrix) is assumed to be complex-valued and integrable. The boundary conditions are assumed to be regular in the sense of Birkhoff. It is known that such an operator has a discrete spectrum and that the system {y_n}_1^∞ of its eigenfunctions and associated functions is a Riesz basis (possibly with brackets) in L_2 ⊕ L_2. Our results concern the basis property of this system in the spaces L_μ ⊕ L_μ for μ

  7. Digital SAR processing using a fast polynomial transform

    NASA Technical Reports Server (NTRS)

    Butman, S.; Lipes, R.; Rubin, A.; Truong, T. K.

    1981-01-01

    A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two-dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one-dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network.

  8. Contextuality as a Resource for Models of Quantum Computation with Qubits

    NASA Astrophysics Data System (ADS)

    Bermejo-Vega, Juan; Delfosse, Nicolas; Browne, Dan E.; Okay, Cihan; Raussendorf, Robert

    2017-09-01

    A central question in quantum computation is to identify the resources that are responsible for quantum speed-up. Quantum contextuality has been recently shown to be a resource for quantum computation with magic states for odd-prime dimensional qudits and two-dimensional systems with real wave functions. The phenomenon of state-independent contextuality poses a priori an obstruction to characterizing the case of regular qubits, the fundamental building block of quantum computation. Here, we establish contextuality of magic states as a necessary resource for a large class of quantum computation schemes on qubits. We illustrate our result with a concrete scheme related to measurement-based quantum computation.

  9. In situ calibration of an infrared imaging video bolometer in the Large Helical Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukai, K., E-mail: mukai.kiyofumi@LHD.nifs.ac.jp; Peterson, B. J.; Pandya, S. N.

    The InfraRed imaging Video Bolometer (IRVB) is a powerful diagnostic to measure multi-dimensional radiation profiles in plasma fusion devices. In the Large Helical Device (LHD), four IRVBs have been installed with different fields of view to reconstruct three-dimensional profiles using a tomography technique. For the application of the measurement to plasma experiments using deuterium gas in LHD in the near future, the long-term effect of the neutron irradiation on the heat characteristics of an IRVB foil should be taken into account by regular in situ calibration measurements. Therefore, in this study, an in situ calibration system was designed.

  10. Three ways to solve critical ϕ^4 theory on 4 − ε dimensional real projective space: Perturbation, bootstrap, and Schwinger-Dyson equation

    NASA Astrophysics Data System (ADS)

    Hasegawa, Chika; Nakayama, Yu

    2018-03-01

    In this paper, we solve for the two-point function of the lowest-dimensional scalar operator in the critical ϕ^4 theory on 4 − ε dimensional real projective space using three different methods. The first uses conventional perturbation theory, the second imposes the cross-cap bootstrap equation, and the third solves the Schwinger-Dyson equation under the assumption of conformal invariance. We find that the three methods lead to mutually consistent results, but each has its own advantages.

  11. On the application of a fast polynomial transform and the Chinese remainder theorem to compute a two-dimensional convolution

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Lipes, R.; Reed, I. S.; Wu, C.

    1980-01-01

    A fast algorithm is developed to compute two-dimensional convolutions of an array of d_1 × d_2 complex number points, where d_2 = 2^m and d_1 = 2^(m-r+1) for some 1 ≤ r ≤ m. This algorithm requires fewer multiplications and about the same number of additions as the conventional fast Fourier transform method for computing the two-dimensional convolution. It also has the advantage that the operation of transposing the matrix of data can be avoided.
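    For reference, the quantity being accelerated is the ordinary two-dimensional cyclic convolution. A brute-force definition follows (illustrative only; the record's contribution is computing this via a fast polynomial transform and the Chinese remainder theorem, which is not shown here):

```python
def cyclic_convolve_2d(a, b):
    """Direct O((d1*d2)^2) two-dimensional cyclic convolution of two
    d1 x d2 arrays; indices wrap around modulo the array dimensions."""
    d1, d2 = len(a), len(a[0])
    out = [[0] * d2 for _ in range(d1)]
    for i in range(d1):
        for j in range(d2):
            s = 0
            for k in range(d1):
                for l in range(d2):
                    s += a[k][l] * b[(i - k) % d1][(j - l) % d2]
            out[i][j] = s
    return out

# Convolving with a unit impulse at (0, 0) returns the input unchanged.
a = [[1, 2], [3, 4]]
delta = [[1, 0], [0, 0]]
print(cyclic_convolve_2d(a, delta))  # [[1, 2], [3, 4]]
```

    Fast methods (FFT-based or polynomial-transform-based) must reproduce this definition exactly, which makes the direct form a useful test oracle.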

  12. Volume determination of irregularly-shaped quasi-spherical nanoparticles.

    PubMed

    Attota, Ravi Kiran; Liu, Eileen Cherry

    2016-11-01

    Nanoparticles (NPs) are widely used in diverse application areas, such as medicine, engineering, and cosmetics. The size (or volume) of NPs is one of the most important parameters for their successful application. It is relatively straightforward to determine the volume of regular NPs such as spheres and cubes from a one-dimensional or two-dimensional measurement. However, due to the three-dimensional nature of NPs, it is challenging to determine the proper physical size of many types of regularly and irregularly-shaped quasi-spherical NPs at high throughput using a single tool. Here, we present a relatively simple method that determines a better volume estimate of NPs by combining measurements of their top-down projection areas and peak heights using two tools. The proposed method is significantly faster and more economical than the electron tomography method. We demonstrate the improved accuracy of the combined method over scanning electron microscopy (SEM) or atomic force microscopy (AFM) alone by using modeling, simulations, and measurements. This study also exposes the existence of inherent measurement biases for both SEM and AFM, which usually produce larger measured diameters with SEM than with AFM. However, in some cases SEM-measured diameters appear to have less error than AFM-measured diameters, especially for widely used IS-NPs such as those of gold and silver. The method provides a much-needed high-throughput volumetric measurement method useful for many applications. Graphical Abstract The combined method for volume determination of irregularly-shaped quasi-spherical nanoparticles.
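    One plausible reading of the combined estimate (an assumption on our part, not necessarily the authors' exact formula) is to model the quasi-spherical particle as a spheroid whose equatorial diameter is recovered from the top-down projected area and whose polar axis is the measured peak height:

```python
import math

def combined_volume(projected_area, peak_height):
    """Spheroid volume estimate combining a top-down projection area
    (e.g. from SEM) with a peak height (e.g. from AFM):
    V = (pi/6) * d_eq**2 * h, with d_eq recovered from the area."""
    d_eq = 2.0 * math.sqrt(projected_area / math.pi)
    return (math.pi / 6.0) * d_eq ** 2 * peak_height

# Sanity check: for a perfect sphere of diameter 10 the estimate reduces
# to the exact sphere volume (4/3) * pi * r**3.
print(combined_volume(math.pi * 5.0 ** 2, 10.0))
```

    For a true sphere the two measurements agree and the formula collapses to the exact sphere volume; for flattened or elongated quasi-spheres it corrects the bias of assuming sphericity from a single projection.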

  13. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance-covariance matrix plays a central role in the inferential theories of high-dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, which assume independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
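    A minimal sketch of the thresholding step follows. It is deliberately simplified: a single universal hard threshold is used rather than the entry-adaptive thresholds of Cai and Liu (2011), and the residual covariance S (after common factors are removed) is assumed to be given.

```python
def hard_threshold_covariance(S, tau):
    """Keep the diagonal, zero out off-diagonal entries smaller in
    magnitude than tau -- a crude sparsity-inducing estimator for the
    idiosyncratic (error) covariance after common factors are removed."""
    p = len(S)
    return [[S[i][j] if (i == j or abs(S[i][j]) >= tau) else 0.0
             for j in range(p)] for i in range(p)]

# Invented 3 x 3 residual covariance: the weak 0.1 correlations are zeroed.
S = [[2.0, 0.1, 0.5],
     [0.1, 3.0, 0.0],
     [0.5, 0.0, 1.0]]
print(hard_threshold_covariance(S, 0.3))
```

    In the adaptive version, tau would vary per entry with the estimated standard error of that covariance, which matters when variances differ widely across assets.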

  14. Energy in higher-dimensional spacetimes

    NASA Astrophysics Data System (ADS)

    Barzegar, Hamed; Chruściel, Piotr T.; Hörzinger, Michael

    2017-12-01

    We derive expressions for the total Hamiltonian energy of gravitating systems in higher-dimensional theories in terms of the Riemann tensor, allowing a cosmological constant Λ ∈R . Our analysis covers asymptotically anti-de Sitter spacetimes, asymptotically flat spacetimes, as well as Kaluza-Klein asymptotically flat spacetimes. We show that the Komar mass equals the Arnowitt-Deser-Misner (ADM) mass in stationary asymptotically flat spacetimes in all dimensions, generalizing the four-dimensional result of Beig, and that this is no longer true with Kaluza-Klein asymptotics. We show that the Hamiltonian mass does not necessarily coincide with the ADM mass in Kaluza-Klein asymptotically flat spacetimes, and that the Witten positivity argument provides a lower bound for the Hamiltonian mass—and not for the ADM mass—in terms of the electric charge. We illustrate our results on the five-dimensional Rasheed metrics, which we study in some detail, pointing out restrictions that arise from the requirement of regularity, which have gone seemingly unnoticed so far in the literature.

  15. Learning language from the input: why innate constraints can't explain noun compounding.

    PubMed

    Ramscar, Michael; Dye, Melody

    2011-02-01

    Do the production and interpretation of patterns of plural forms in noun-noun compounds reveal the workings of innate constraints that govern morphological processing? The results of previous studies on compounding have been taken to support a number of important theoretical claims: first, that there are fundamental differences in the way that children and adults learn and process regular and irregular plurals; second, that these differences reflect formal constraints that govern the way regular and irregular plurals are processed in language; and third, that these constraints are unlikely to be the product of learning. In a series of seven experiments, we critically assess the evidence that is cited in support of these arguments. The results of our experiments provide little support for the idea that substantively different factors govern the acquisition, production, and interpretation of regular and irregular plural forms in compounds. Once frequency differences between regular and irregular plurals are accounted for, we find no evidence of any qualitative difference in the patterns of interpretation and production of regular and irregular plural nouns in compounds, in either adults or children. Accordingly, we suggest that the pattern of acquisition of both regular and irregular plurals in compounds is consistent with a simple account, in which children learn the conventions that govern plural compounding using evidence that is readily available in the distribution patterns of adult speech. Copyright © 2010 Elsevier Inc. All rights reserved.

  16. Directional sinogram interpolation for motion weighted 4D cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Kruis, Matthijs; Sonke, Jan-Jakob

    2017-03-01

    The image quality of respiratory sorted four-dimensional (4D) cone-beam (CB) computed tomography (CT) is often limited by streak artifacts due to insufficient projections. A motion weighted reconstruction (MWR) method is proposed to decrease streak artifacts and improve image quality. Firstly, respiratory correlated CBCT projections were interpolated by directional sinogram interpolation (DSI) to generate additional CB projections for each phase and subsequently reconstructed. Secondly, local motion was estimated by deformable image registration of the interpolated 4D CBCT. Thirdly, a regular 3D FDK CBCT was reconstructed from the non-interpolated projections. Finally, weights were assigned to each voxel based on the local motion and used to combine the 3D FDK CBCT and the interpolated 4D CBCT into the final 4D image. The MWR method was compared with regular 4D CBCT scans as well as with McKinnon and Bates (MKB) based reconstructions. Comparisons were made in terms of (1) the steepness of a profile extracted at the boundary of the region-of-interest (ROI), (2) the contrast-to-noise ratio (CNR) inside certain ROIs, and (3) the root-mean-square error (RMSE) between the planning CT and CBCT inside a homogeneous moving region. Comparisons were made for both a phantom and four patient scans. In a 4D phantom, RMSE was reduced by 24.7% and 38.7% for MKB and MWR, respectively, compared to conventional 4D CBCT. Meanwhile, interpolation-induced blur was minimal in static regions for MWR based reconstructions. In regions with considerable respiratory motion, image blur using MWR is less than with the MKB and 3D Feldkamp (FDK) methods. In the lung cancer patients, average CNRs of MKB, DSI and MWR improved by factors of 1.7, 2.8 and 3.5, respectively, relative to 4D FDK. MWR effectively reduces RMSE in 4D cone-beam CT and improves image quality in both static and respiratory moving regions compared to the 4D FDK and MKB methods.
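    The final weighting step of a motion-weighted combination can be sketched per voxel as below; the exponential weighting function and the scale m0 are our illustrative assumptions, not the weights used in this work.

```python
import math

def combine_voxel(v_3dfdk, v_4dinterp, motion, m0=1.0):
    """Blend a static 3D FDK reconstruction with an interpolated 4D CBCT:
    voxels with little local motion take the sharp static value, while
    strongly moving voxels take the motion-resolved 4D value."""
    w = math.exp(-motion / m0)  # hypothetical weight: 1 at rest, -> 0 for large motion
    return w * v_3dfdk + (1.0 - w) * v_4dinterp

# A static voxel (zero motion) keeps the 3D FDK intensity.
print(combine_voxel(100.0, 80.0, motion=0.0))  # 100.0
```

    Any monotone decreasing weight in the local motion magnitude would serve the same purpose; the choice trades interpolation blur in static regions against streak artifacts in moving ones.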

  17. Directional sinogram interpolation for motion weighted 4D cone-beam CT reconstruction.

    PubMed

    Zhang, Hua; Kruis, Matthijs; Sonke, Jan-Jakob

    2017-03-21

    The image quality of respiratory sorted four-dimensional (4D) cone-beam (CB) computed tomography (CT) is often limited by streak artifacts due to insufficient projections. A motion weighted reconstruction (MWR) method is proposed to decrease streak artifacts and improve image quality. Firstly, respiratory correlated CBCT projections were interpolated by directional sinogram interpolation (DSI) to generate additional CB projections for each phase and subsequently reconstructed. Secondly, local motion was estimated by deformable image registration of the interpolated 4D CBCT. Thirdly, a regular 3D FDK CBCT was reconstructed from the non-interpolated projections. Finally, weights were assigned to each voxel based on the local motion and used to combine the 3D FDK CBCT and the interpolated 4D CBCT into the final 4D image. The MWR method was compared with regular 4D CBCT scans as well as with McKinnon and Bates (MKB) based reconstructions. Comparisons were made in terms of (1) the steepness of a profile extracted at the boundary of the region-of-interest (ROI), (2) the contrast-to-noise ratio (CNR) inside certain ROIs, and (3) the root-mean-square error (RMSE) between the planning CT and CBCT inside a homogeneous moving region. Comparisons were made for both a phantom and four patient scans. In a 4D phantom, RMSE was reduced by 24.7% and 38.7% for MKB and MWR, respectively, compared to conventional 4D CBCT. Meanwhile, interpolation-induced blur was minimal in static regions for MWR based reconstructions. In regions with considerable respiratory motion, image blur using MWR is less than with the MKB and 3D Feldkamp (FDK) methods. In the lung cancer patients, average CNRs of MKB, DSI and MWR improved by factors of 1.7, 2.8 and 3.5, respectively, relative to 4D FDK. MWR effectively reduces RMSE in 4D cone-beam CT and improves image quality in both static and respiratory moving regions compared to the 4D FDK and MKB methods.

  18. Platform switching: biomechanical evaluation using three-dimensional finite element analysis.

    PubMed

    Tabata, Lucas Fernando; Rocha, Eduardo Passos; Barão, Valentim Adelino Ricardo; Assunção, Wirley Goncalves

    2011-01-01

    The objective of this study was to evaluate, using three-dimensional finite element analysis (3D FEA), the stress distribution in peri-implant bone tissue, implants, and prosthetic components of implant-supported single crowns with the use of the platform-switching concept. Three 3D finite element models were created to replicate an external-hexagonal implant system with peri-implant bone tissue in which three different implant-abutment configurations were represented. In the regular platform (RP) group, a regular 4.1-mm-diameter abutment (UCLA) was connected to a regular 4.1-mm-diameter implant. The platform-switching (PS) group was simulated by the connection of a wide implant (5.0 mm diameter) to a regular 4.1-mm-diameter UCLA abutment. In the wide-platform (WP) group, a 5.0-mm-diameter UCLA abutment was connected to a 5.0-mm-diameter implant. An occlusal load of 100 N was applied either axially or obliquely on the models using ANSYS software. Both the increase in implant diameter and the use of platform switching played roles in stress reduction. The PS group presented lower stress values than the RP and WP groups for bone and implant. In the peri-implant area, cortical bone exhibited a higher stress concentration than the trabecular bone in all models and both loading situations. Under oblique loading, higher intensity and greater distribution of stress were observed than under axial loading. Platform switching reduced von Mises (17.5% and 9.3% for axial and oblique loads, respectively), minimum (compressive) (19.4% for axial load and 21.9% for oblique load), and maximum (tensile) principal stress values (46.6% for axial load and 26.7% for oblique load) in the peri-implant bone tissue. Platform switching led to improved biomechanical stress distribution in peri-implant bone tissue. Oblique loads resulted in higher stress concentrations than axial loads for all models. Wide-diameter implants had a large influence in reducing stress values in the implant system.

  19. Biodynamic profiling of three-dimensional tissue growth techniques

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Merrill, Dan; Turek, John; Nolte, David

    2016-03-01

    Three-dimensional tissue culture presents a more biologically relevant environment in which to perform drug development than conventional two-dimensional cell culture. However, obtaining high-content information from inside three dimensional tissue has presented an obstacle to rapid adoption of 3D tissue culture for pharmaceutical applications. Biodynamic imaging is a high-content three-dimensional optical imaging technology based on low-coherence interferometry and digital holography that uses intracellular dynamics as high-content image contrast. In this paper, we use biodynamic imaging to compare pharmaceutical responses to Taxol of three-dimensional multicellular spheroids grown by three different growth techniques: rotating bioreactor, hanging-drop and plate-grown spheroids. The three growth techniques have systematic variations among tissue cohesiveness and intracellular activity and consequently display different pharmacodynamics under identical drug dose conditions. The in vitro tissue cultures are also compared to ex vivo living biopsies. These results demonstrate that three-dimensional tissue cultures are not equivalent, and that drug-response studies must take into account the growth method.

  20. Four Dimensional Analysis of Free Electron Lasers in the Amplifier Configuration

    DTIC Science & Technology

    2007-12-01

    FEL. The power capability of this device was so much greater than that of conventional klystrons and magnetrons that records for peak power …understand the four-dimensional behavior of the high-power FEL amplifier. The simulation program required dimensionless input parameters, which make… [Table of optical parameters: seed laser power, seed pulse duration, distance to first optic, and Rayleigh length z₀ = πw₀²/λ.]

  1. The Airborne Optical Systems Testbed (AOSTB)

    DTIC Science & Technology

    2017-05-31

    …appropriate color to each pixel, displayed in a two-dimensional array. Another method is to render a 3D model from the data and display the model as if… [Distribution A: Public Release; contact ALBOTA@LL.MIT.EDU] Over the last two decades MIT Lincoln Laboratory (MITLL) has pioneered the development… two-dimensional (2D) grid of detectors. Rather than measuring intensity, as in a conventional camera, these detectors measure the photon time-of-flight.

  2. Manifold learning in machine vision and robotics

    NASA Astrophysics Data System (ADS)

    Bernstein, Alexander

    2017-02-01

    Smart algorithms are used in machine vision and robotics to organize or extract high-level information from the available data. Nowadays, machine learning is an essential and ubiquitous tool for automating the extraction of patterns or regularities from data (images in machine vision; camera, laser, and sonar sensor data in robotics) in order to solve various subject-oriented tasks such as understanding and classification of image content, navigation of mobile autonomous robots in uncertain environments, robot manipulation in medical robotics and computer-assisted surgery, and others. Usually such data have high dimensionality; however, due to various dependencies between their components and constraints caused by physical reasons, all "feasible and usable data" occupy only a very small part of the high-dimensional "observation space" and have a smaller intrinsic dimensionality. A generally accepted model of such data is the manifold model, in accordance with which the data lie on or near an unknown manifold (surface) of lower dimensionality embedded in an ambient high-dimensional observation space; real-world high-dimensional data obtained from "natural" sources meet, as a rule, this model. The use of manifold learning techniques in machine vision and robotics, which discover a low-dimensional structure in high-dimensional data and result in effective algorithms for solving a large number of subject-oriented tasks, is the content of the conference plenary speech, some topics of which are presented in this paper.
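
    The manifold assumption described above can be made concrete with a minimal sketch (not from the paper, purely illustrative): data generated near a low-dimensional linear subspace embedded in a higher-dimensional observation space, with the intrinsic dimensionality recovered from the spectrum of a PCA. The sizes, noise level, and 99% variance cutoff are all assumptions chosen for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "high-dimensional" data that actually lie near a 2-D linear
    # manifold embedded in a 10-D observation space (plus small noise).
    n, ambient_dim, intrinsic_dim = 500, 10, 2
    latent = rng.normal(size=(n, intrinsic_dim))
    embedding = rng.normal(size=(intrinsic_dim, ambient_dim))
    X = latent @ embedding + 0.01 * rng.normal(size=(n, ambient_dim))

    # PCA via SVD of the centered data: the number of singular values needed
    # to explain (here) 99% of the variance estimates the intrinsic dimension.
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    explained = s**2 / np.sum(s**2)
    est_dim = int(np.sum(np.cumsum(explained) < 0.99)) + 1
    print(est_dim)
    ```

    Real manifold learning methods (Isomap, LLE, diffusion maps) handle curved manifolds that this linear PCA sketch cannot, but the "few dominant directions" intuition is the same.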

  3. Properties of a new small-world network with spatially biased random shortcuts

    NASA Astrophysics Data System (ADS)

    Matsuzawa, Ryo; Tanimoto, Jun; Fukuda, Eriko

    2017-11-01

    This paper introduces a small-world (SW) network with a power-law distance distribution for its shortcuts, in contrast to conventional models, which use completely random shortcuts. By incorporating spatial constraints, we analyze the divergence of the proposed model from conventional models in terms of fundamental network properties such as the clustering coefficient, average path length, and degree distribution. We find that when the spatial constraint more strongly prohibits long shortcuts, the clustering coefficient improves and the average path length increases. We also analyze spatial prisoner's dilemma (SPD) games played on the new SW network in order to understand its dynamical characteristics. Depending on the basis graph, i.e., whether it is a one-dimensional ring or a two-dimensional lattice, and on the parameter controlling the prohibition of long-distance shortcuts, the emergent results can vastly differ.
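
    A minimal sketch of the idea (not the authors' code; all parameter values are assumptions): start from a one-dimensional ring lattice and add shortcuts whose ring distance d is drawn with probability proportional to d^(-alpha), so that larger alpha suppresses long shortcuts. Consistent with the abstract, a stronger spatial constraint should leave the clustering coefficient higher than uniformly random shortcuts do.

    ```python
    import random

    def biased_small_world(n=200, k=4, n_shortcuts=50, alpha=2.0, seed=1):
        """Ring lattice (each node linked to its k nearest neighbours) plus
        shortcuts whose ring distance d has probability ~ d**(-alpha).
        alpha = 0 recovers conventional uniformly random shortcuts."""
        rng = random.Random(seed)
        adj = {i: set() for i in range(n)}
        for i in range(n):                           # base 1-D ring lattice
            for j in range(1, k // 2 + 1):
                adj[i].add((i + j) % n)
                adj[(i + j) % n].add(i)
        distances = list(range(k // 2 + 1, n // 2 + 1))  # beyond lattice links
        weights = [d ** (-alpha) if alpha else 1.0 for d in distances]
        for _ in range(n_shortcuts):
            i = rng.randrange(n)
            d = rng.choices(distances, weights=weights)[0]
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
        return adj

    def clustering(adj):
        """Average local clustering coefficient."""
        total = 0.0
        for i, nbrs in adj.items():
            nbrs = list(nbrs)
            deg = len(nbrs)
            if deg < 2:
                continue
            links = sum(1 for a in range(deg) for b in range(a + 1, deg)
                        if nbrs[b] in adj[nbrs[a]])
            total += 2.0 * links / (deg * (deg - 1))
        return total / len(adj)

    strong_bias = clustering(biased_small_world(alpha=3.0))
    no_bias = clustering(biased_small_world(alpha=0.0))
    print(strong_bias, no_bias)
    ```

    Short shortcuts land between near-neighbours and tend to close triangles, whereas uniformly random shortcuts mostly add triangle-free long-range links, which is why the biased network retains more clustering.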

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Ying; Li, Hong; Bridges, Denzel

    We report that the continuing miniaturization of microelectronics is pushing advanced manufacturing into nanomanufacturing. Nanojoining is a bottom-up assembly technique that enables functional nanodevice fabrication with dissimilar nanoscopic building blocks and/or molecular components. Various conventional joining techniques have been modified and re-invented for joining nanomaterials. Our review surveys recent progress in nanojoining methods, as compared to conventional joining processes. Examples of nanojoining are given and classified by the dimensionality of the joining materials. At each classification, nanojoining is reviewed and discussed according to materials specialties, low-dimensional processing features, energy input mechanisms, and potential applications. The preparation of new intermetallic materials by reactive nanoscale multilayer foils based on self-propagating high-temperature synthesis is highlighted. This review will provide insight into nanojoining fundamentals and innovative applications in power electronics packaging, plasmonic devices, nanosoldering for printable electronics, 3D printing and space manufacturing.

  5. Estimation of two-dimensional motion velocity using ultrasonic signals beamformed in Cartesian coordinate for measurement of cardiac dynamics

    NASA Astrophysics Data System (ADS)

    Kaburaki, Kaori; Mozumi, Michiya; Hasegawa, Hideyuki

    2018-07-01

    Methods for the estimation of two-dimensional (2D) velocity and displacement of physiological tissues are necessary for quantitative diagnosis. In echocardiography with a phased array probe, the accuracy in the estimation of the lateral motion is lower than that of the axial motion. To improve the accuracy in the estimation of the lateral motion, in the present study, the coordinate system for ultrasonic beamforming was changed from the conventional polar coordinate to the Cartesian coordinate. In a basic experiment, the motion velocity of a phantom, which was moved at a constant speed, was estimated by the conventional and proposed methods. The proposed method reduced the bias error and standard deviation in the estimated motion velocities. In an in vivo measurement, intracardiac blood flow was analyzed by the proposed method.
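
    The abstract does not give the estimator, but a standard building block for this kind of 2-D motion estimation is cross-correlation between successive frames; the sketch below (an assumption, not the authors' method) finds the integer 2-D displacement from the peak of an FFT-based cross-correlation of speckle-like images. A real tracker would add subpixel peak fitting and operate on beamformed RF or envelope data.

    ```python
    import numpy as np

    def estimate_shift(frame1, frame2):
        """Integer 2-D displacement of frame2 relative to frame1 via the
        peak of the circular cross-correlation, computed in the frequency
        domain."""
        f1 = frame1 - frame1.mean()
        f2 = frame2 - frame2.mean()
        corr = np.fft.ifft2(np.fft.fft2(f2) * np.conj(np.fft.fft2(f1))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Map peak indices to signed shifts (wrap-around convention).
        return tuple(p if p <= s // 2 else p - s
                     for p, s in zip(peak, corr.shape))

    rng = np.random.default_rng(2)
    frame = rng.normal(size=(64, 64))               # speckle-like reference
    moved = np.roll(frame, (3, -5), axis=(0, 1))    # shift by (3, -5) pixels
    print(estimate_shift(frame, moved))
    ```

    Beamforming in Cartesian coordinates, as the paper proposes, makes both axes of such a correlation search uniform in physical units, which is one plausible reason the lateral estimates improve.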

  6. Large-eddy simulation of turbulent flow with a surface-mounted two-dimensional obstacle

    NASA Technical Reports Server (NTRS)

    Yang, Kyung-Soo; Ferziger, Joel H.

    1993-01-01

    In this paper, we perform a large eddy simulation (LES) of turbulent flow in a channel containing a two-dimensional obstacle on one wall using a dynamic subgrid-scale model (DSGSM) at Re = 3210, based on the bulk velocity above the obstacle and the obstacle height; the wall layers are fully resolved. The low Re enables us to perform a DNS (Case 1) against which to validate the LES results. The LES with the DSGSM is designated Case 2. In addition, an LES with the conventional fixed model constant (Case 3) is conducted to allow identification of improvements due to the DSGSM. We also include an LES at Re = 82,000 (Case 4) using the conventional Smagorinsky subgrid-scale model and a wall-layer model. The results are compared with the experiment of Dimaczek et al.

  7. Three-dimensional fluorescent microscopy via simultaneous illumination and detection at multiple planes.

    PubMed

    Ma, Qian; Khademhosseinieh, Bahar; Huang, Eric; Qian, Haoliang; Bakowski, Malina A; Troemel, Emily R; Liu, Zhaowei

    2016-08-16

    The conventional optical microscope is an inherently two-dimensional (2D) imaging tool. The objective lens, eyepiece and image sensor are all designed to capture light emitted from a 2D 'object plane'. Existing technologies, such as confocal or light sheet fluorescence microscopy have to utilize mechanical scanning, a time-multiplexing process, to capture a 3D image. In this paper, we present a 3D optical microscopy method based upon simultaneously illuminating and detecting multiple focal planes. This is implemented by adding two diffractive optical elements to modify the illumination and detection optics. We demonstrate that the image quality of this technique is comparable to conventional light sheet fluorescent microscopy with the advantage of the simultaneous imaging of multiple axial planes and reduced number of scans required to image the whole sample volume.

  8. Cluster-size entropy in the Axelrod model of social influence: Small-world networks and mass media

    NASA Astrophysics Data System (ADS)

    Gandica, Y.; Charmell, A.; Villegas-Febres, J.; Bonalde, I.

    2011-10-01

    We study Axelrod's cultural adaptation model using the concept of cluster-size entropy Sc, which gives information on the variability of the cultural cluster size present in the system. Using networks of different topologies, from regular to random, we find that the critical point of the well-known nonequilibrium monocultural-multicultural (order-disorder) transition of the Axelrod model is given by the maximum of the Sc(q) distributions. The width of the cluster entropy distributions can be used to qualitatively determine whether the transition is first or second order. By scaling the cluster entropy distributions we were able to obtain a relationship between the critical cultural trait qc and the number F of cultural features in two-dimensional regular networks. We also analyze the effect of the mass media (external field) on social systems within the Axelrod model in a square network. We find a partially ordered phase whose largest cultural cluster is not aligned with the external field, in contrast with a recent suggestion that this type of phase cannot be formed in regular networks. We draw a q-B phase diagram for the Axelrod model in regular networks.
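
    On one plausible reading of the quantity Sc (an assumption; the paper's exact definition may weight sizes differently), it is the Shannon entropy of the distribution of cluster sizes: with n_s the number of clusters of size s and N_c the total number of clusters, Sc = -Σ_s (n_s/N_c) ln(n_s/N_c). A minimal sketch:

    ```python
    import math
    from collections import Counter

    def cluster_size_entropy(cluster_sizes):
        """Shannon entropy of the cluster-size distribution: low when all
        clusters share a few sizes, high when many distinct sizes coexist."""
        counts = Counter(cluster_sizes)       # n_s for each size s
        total = sum(counts.values())          # N_c, number of clusters
        return -sum((c / total) * math.log(c / total)
                    for c in counts.values())

    # A monocultural (ordered) state -- one giant cluster -- gives zero
    # entropy; five clusters of all-different sizes give ln(5).
    print(cluster_size_entropy([2500]))
    print(cluster_size_entropy([1, 2, 4, 8, 16]))
    ```

    Under this reading, the maximum of Sc(q) marks the point of greatest cluster-size variability, which is where the abstract locates the order-disorder transition.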

  9. Choice of regularization in adjoint tomography based on two-dimensional synthetic tests

    NASA Astrophysics Data System (ADS)

    Valentová, Lubica; Gallovič, František; Růžek, Bohuslav; de la Puente, Josep; Moczo, Peter

    2015-08-01

    We present synthetic tests of 2-D adjoint tomography of surface wave traveltimes obtained by ambient noise cross-correlation analysis across the Czech Republic. The data coverage may be considered perfect for tomography due to the density of the station distribution. Nevertheless, artefacts in the inferred velocity models arising from the data noise may still be observed when weak regularization (Gaussian smoothing of the misfit gradient) or too many iterations are considered. To examine the effect of the regularization and iteration number on the performance of the tomography in more detail, we performed extensive synthetic tests. Instead of the typically used (although criticized) checkerboard test, we propose to carry out the tests with two different target models: a simple smooth model and a complex realistic model. The first test reveals the sensitivity of the result to the data noise, while the second helps to analyse the resolving power of the data set. For various noise and Gaussian smoothing levels, we analysed the convergence towards (or divergence from) the target model with increasing number of iterations. Based on the tests we identified the optimal regularization, which we then employed in the inversion of 16 and 20 s Love-wave group traveltimes.
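
    The regularization named in the abstract, Gaussian smoothing of the misfit gradient, can be sketched as a separable convolution (an illustrative stand-alone implementation with assumed sizes, not the authors' code): a larger smoothing width suppresses the short-wavelength, noise-driven components of the model update.

    ```python
    import numpy as np

    def gaussian_kernel(sigma):
        """1-D normalized Gaussian kernel truncated at 3 sigma."""
        radius = int(3 * sigma)
        x = np.arange(-radius, radius + 1)
        k = np.exp(-x**2 / (2 * sigma**2))
        return k / k.sum()

    def smooth_gradient(grad, sigma):
        """Regularize a 2-D misfit gradient by separable Gaussian smoothing
        along each axis in turn."""
        k = gaussian_kernel(sigma)
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode='same'), 0, grad)
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode='same'), 1, out)
        return out

    rng = np.random.default_rng(3)
    grad = rng.normal(size=(50, 50))   # stand-in for a noisy misfit gradient
    smoothed = smooth_gradient(grad, sigma=2.0)
    print(grad.std(), smoothed.std())  # smoothing damps the fluctuations
    ```

    The trade-off the paper tests is exactly the choice of sigma (and of iteration count): too little smoothing lets noise imprint artefacts on the model, too much destroys resolvable structure.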

  10. Predictive sparse modeling of fMRI data for improved classification, regression, and visualization using the k-support norm.

    PubMed

    Belilovsky, Eugene; Gkirtzou, Katerina; Misyrlis, Michail; Konova, Anna B; Honorio, Jean; Alia-Klein, Nelly; Goldstein, Rita Z; Samaras, Dimitris; Blaschko, Matthew B

    2015-12-01

    We explore various sparse regularization techniques for analyzing fMRI data, such as the ℓ1 norm (often called LASSO in the context of a squared loss function), elastic net, and the recently introduced k-support norm. Employing sparsity regularization allows us to handle the curse of dimensionality, a problem commonly found in fMRI analysis. In this work we consider sparse regularization in both the regression and classification settings. We perform experiments on fMRI scans from cocaine-addicted as well as healthy control subjects. We show that in many cases, use of the k-support norm leads to better predictive performance, solution stability, and interpretability as compared to other standard approaches. We additionally analyze the advantages of using the absolute loss function versus the standard squared loss which leads to significantly better predictive performance for the regularization methods tested in almost all cases. Our results support the use of the k-support norm for fMRI analysis and on the clinical side, the generalizability of the I-RISA model of cocaine addiction. Copyright © 2015 Elsevier Ltd. All rights reserved.
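
    The simplest of the sparse estimators compared above, the ℓ1-penalized squared loss (LASSO), can be sketched with iterative soft thresholding (ISTA). This is a generic textbook solver on synthetic data, not the paper's pipeline; the elastic net and k-support norm discussed in the abstract refine this penalty, and the problem sizes and λ below are assumptions.

    ```python
    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def lasso_ista(X, y, lam, n_iter=500):
        """LASSO: minimize (1/2n)||Xw - y||^2 + lam*||w||_1 by iterative
        soft thresholding with step 1/L, L the gradient's Lipschitz constant."""
        n = X.shape[0]
        L = np.linalg.norm(X, 2) ** 2 / n
        w = np.zeros(X.shape[1])
        for _ in range(n_iter):
            grad = X.T @ (X @ w - y) / n
            w = soft_threshold(w - grad / L, lam / L)
        return w

    rng = np.random.default_rng(4)
    X = rng.normal(size=(100, 20))
    w_true = np.zeros(20)
    w_true[[2, 7, 11]] = [1.5, -2.0, 1.0]        # sparse ground truth
    y = X @ w_true + 0.05 * rng.normal(size=100)
    w_hat = lasso_ista(X, y, lam=0.05)
    print(np.nonzero(np.abs(w_hat) > 0.1)[0])    # recovered support
    ```

    In the fMRI setting, the columns of X are voxel features, so a sparse w doubles as a brain map: the interpretability advantage the abstract attributes to the k-support norm is an improvement on exactly this kind of support recovery.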

  11. Accelerating 4D flow MRI by exploiting vector field divergence regularization.

    PubMed

    Santelli, Claudio; Loecher, Michael; Busch, Julia; Wieben, Oliver; Schaeffter, Tobias; Kozerke, Sebastian

    2016-01-01

    To improve velocity vector field reconstruction from undersampled four-dimensional (4D) flow MRI by penalizing divergence of the measured flow field. Iterative image reconstruction in which magnitude and phase are regularized separately in alternating iterations was implemented. The approach allows incorporating prior knowledge of the flow field being imaged. In the present work, velocity data were regularized to reduce divergence, using either divergence-free wavelets (DFW) or a finite difference (FD) method using the ℓ1-norm of divergence and curl. The reconstruction methods were tested on a numerical phantom and in vivo data. Results of the DFW and FD approaches were compared with data obtained with standard compressed sensing (CS) reconstruction. Relative to standard CS, directional errors of vector fields and divergence were reduced by 55-60% and 38-48% for three- and six-fold undersampled data with the DFW and FD methods. Velocity vector displays of the numerical phantom and in vivo data were found to be improved upon DFW or FD reconstruction. Regularization of vector field divergence in image reconstruction from undersampled 4D flow data is a valuable approach to improve reconstruction accuracy of velocity vector fields. © 2014 Wiley Periodicals, Inc.
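
    The finite-difference (FD) penalty described above rests on computing the divergence of the sampled velocity field; a minimal 2-D sketch (the actual method is 3-D and sits inside an iterative reconstruction) shows the quantity being penalized on one divergence-free and one strongly divergent field. The grid and fields are assumptions for illustration.

    ```python
    import numpy as np

    def divergence(vx, vy, h):
        """Central finite-difference divergence of a 2-D vector field
        sampled on a uniform grid with spacing h (rows = y, cols = x)."""
        return np.gradient(vx, h, axis=1) + np.gradient(vy, h, axis=0)

    # Sample grid: y varies along axis 0, x along axis 1.
    y, x = np.mgrid[-1:1:64j, -1:1:64j]
    h = x[0, 1] - x[0, 0]

    div_rot = divergence(-y, x, h)   # rigid rotation: divergence-free
    div_rad = divergence(x, y, h)    # radial outflow: divergence = 2

    # An l1 penalty on these values, as in the FD method above, drives the
    # reconstructed flow toward the physically expected solenoidal field.
    print(np.abs(div_rot).sum(), np.abs(div_rad).sum())
    ```

    Incompressible blood flow should have (near-)zero divergence, so penalizing this quantity injects physics into the reconstruction without needing a full flow model.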

  12. Cluster-size entropy in the Axelrod model of social influence: small-world networks and mass media.

    PubMed

    Gandica, Y; Charmell, A; Villegas-Febres, J; Bonalde, I

    2011-10-01

    We study Axelrod's cultural adaptation model using the concept of cluster-size entropy S(c), which gives information on the variability of the cultural cluster size present in the system. Using networks of different topologies, from regular to random, we find that the critical point of the well-known nonequilibrium monocultural-multicultural (order-disorder) transition of the Axelrod model is given by the maximum of the S(c)(q) distributions. The width of the cluster entropy distributions can be used to qualitatively determine whether the transition is first or second order. By scaling the cluster entropy distributions we were able to obtain a relationship between the critical cultural trait q(c) and the number F of cultural features in two-dimensional regular networks. We also analyze the effect of the mass media (external field) on social systems within the Axelrod model in a square network. We find a partially ordered phase whose largest cultural cluster is not aligned with the external field, in contrast with a recent suggestion that this type of phase cannot be formed in regular networks. We draw a q-B phase diagram for the Axelrod model in regular networks.

  13. Assessing the Effectiveness of a 3-D Instructional Game on Improving Mathematics Achievement and Motivation of Middle School Students

    ERIC Educational Resources Information Center

    Bai, Haiyan; Pan, Wei; Hirumi, Astusi; Kebritchi, Mansureh

    2012-01-01

    This research study assessed the effectiveness of a three-dimensional mathematics game, DimensionM, through a pretest-posttest control group quasi-experimental design. Participants consisted of 437 eighth graders. The classrooms were randomly assigned either to the treatment group that utilized DimensionM as a supplement to regular classroom…

  14. Thick de Sitter brane solutions in higher dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dzhunushaliev, Vladimir; Department of Physics and Microelectronic Engineering, Kyrgyz-Russian Slavic University, Bishkek, Kievskaya Str. 44, 720021, Kyrgyz Republic; Folomeev, Vladimir

    2009-01-15

    We present thick de Sitter brane solutions which are supported by two interacting phantom scalar fields in five-, six-, and seven-dimensional spacetime. It is shown that for all cases regular solutions with anti-de Sitter asymptotic (5D problem) and a flat asymptotic far from the brane (6D and 7D cases) exist. We also discuss the stability of our solutions.

  15. Momentum Probabilities for a Single Quantum Particle in Three-Dimensional Regular "Infinite" Wells: One Way of Promoting Understanding of Probability Densities

    ERIC Educational Resources Information Center

    Riggs, Peter J.

    2013-01-01

    Students often wrestle unsuccessfully with the task of correctly calculating momentum probability densities and have difficulty in understanding their interpretation. In the case of a particle in an "infinite" potential well, its momentum can take values that are not just those corresponding to the particle's quantised energies but…

  16. HP-9810A calculator programs for plotting the 2-dimensional motion of cylindrical payloads relative to the shuttle orbiter

    NASA Technical Reports Server (NTRS)

    Wilson, S. W.

    1976-01-01

    The HP-9810A calculator programs described provide the capability to generate HP-9862A plotter displays which depict the apparent motion of a free-flying cylindrical payload relative to the shuttle orbiter body axes by projecting the payload geometry into the orbiter plane of symmetry at regular time intervals.

  17. Optimized SIFTFlow for registration of whole-mount histology to reference optical images

    PubMed Central

    Shojaii, Rushin; Martel, Anne L.

    2016-01-01

    Abstract. The registration of two-dimensional histology images to reference images from other modalities is an important preprocessing step in the reconstruction of three-dimensional histology volumes. This is a challenging problem because of the differences in appearance between histology images and other modalities, and because of the large nonrigid deformations that occur during slide preparation. This paper shows the feasibility of using densely sampled scale-invariant feature transform (SIFT) features and the SIFTFlow deformable registration algorithm for coregistering whole-mount histology images with blockface optical images. We present a method for jointly optimizing the regularization parameters used by the SIFTFlow objective function and use it to determine the most appropriate values for the registration of breast lumpectomy specimens. We demonstrate that tuning the regularization parameters results in significant improvements in accuracy, and we also show that SIFTFlow outperforms a previously described edge-based registration method. The accuracy of the optimized SIFTFlow registration of histology images to blockface images was assessed using an independent test set of images from five different lumpectomy specimens; the mean registration error was 0.32±0.22 mm. PMID:27774494

  18. A Novel Multiobjective Evolutionary Algorithm Based on Regression Analysis

    PubMed Central

    Song, Zhiming; Wang, Maocai; Dai, Guangming; Vasile, Massimiliano

    2015-01-01

    As is known, the Pareto set of a continuous multiobjective optimization problem with m objective functions is a piecewise continuous (m − 1)-dimensional manifold in the decision space under some mild conditions, and how to utilize this regularity to design multiobjective optimization algorithms has become a research focus. In this paper, based on this regularity, a model-based multiobjective evolutionary algorithm with regression analysis (MMEA-RA) is put forward to solve continuous multiobjective optimization problems with variable linkages. In the algorithm, the optimization problem is modelled as a promising area in the decision space by a probability distribution whose centroid is an (m − 1)-dimensional piecewise continuous manifold. The least squares method is used to construct such a model. A selection strategy based on nondominated sorting is used to choose the individuals for the next generation. The new algorithm is tested and compared with NSGA-II and RM-MEDA. The results show that MMEA-RA outperforms RM-MEDA and NSGA-II on the test instances with variable linkages. At the same time, MMEA-RA has higher efficiency than the other two algorithms. A few shortcomings of MMEA-RA have also been identified and discussed in this paper. PMID:25874246
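
    The nondominated-sorting selection mentioned above is built on Pareto dominance; a minimal sketch of its first front (generic textbook logic, not the MMEA-RA implementation, with example points that are assumptions):

    ```python
    def dominates(a, b):
        """a dominates b if it is no worse in every objective and strictly
        better in at least one (minimization convention)."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def nondominated_front(points):
        """First front of nondominated sorting: points no other point dominates."""
        return [p for p in points
                if not any(dominates(q, p) for q in points)]

    pts = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
    print(nondominated_front(pts))
    ```

    Full nondominated sorting, as used in NSGA-II and in the selection step described here, repeats this extraction on the remaining points to build successive fronts.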

  19. Elaborate SMART MCNP Modelling Using ANSYS and Its Applications

    NASA Astrophysics Data System (ADS)

    Song, Jaehoon; Surh, Han-bum; Kim, Seung-jin; Koo, Bonsueng

    2017-09-01

    An MCNP 3-dimensional model can be widely used to evaluate various design parameters such as a core design or shielding design. Conventionally, a simplified 3-dimensional MCNP model is applied to calculate these parameters because of the cumbersomeness of modelling by hand. ANSYS has a function for converting the CAD 'stp' format into the geometry part of an MCNP input. Using ANSYS and a 3-dimensional CAD file, a very detailed and sophisticated MCNP 3-dimensional model can be generated. The MCNP model is applied to evaluate the assembly weighting factor at the ex-core detector of SMART, and the result is compared with that of a simplified MCNP SMART model and with the assembly weighting factor calculated by DORT, a deterministic Sn code.

  20. Three-dimensional high-definition flow in the diagnosis of placental lakes.

    PubMed

    Inubashiri, Eisuke; Deguchi, Keizou; Abe, Kiyotaka; Saitou, Atushi; Watanabe, Yukio; Akutagawa, Noriyuki; Kuroki, Katumaru; Sugawara, Masaki; Maeda, Nobuhiko

    2014-10-01

    Placental lakes are sonolucent areas often found in the normal placenta. Most of them are asymptomatic, but they are sometimes related to placenta accreta or intrauterine fetal growth restriction, among other conditions. Although Doppler sonography is useful for evaluating noxious placental lakes, it is not easy to study them with conventional two-dimensional color Doppler sonography because of the low-velocity blood flow and high vascularity in the placenta. Here, we demonstrate how three-dimensional high-definition imaging of flow provides a novel visual depiction of placental lakes, which helps substantially with the differential diagnosis. As far as we know, there have been no previous reports of the observation of placental lakes using three-dimensional high-definition imaging of flow.
